Managing AI Disruption

Society’s response to disruptive technologies like AI follows a three-stage pattern: regulation, adaptation, and acceptance. Regulations tend to focus on first-order concerns, but overlook second-order consequences like the potential erosion of democratic values due to increased transparency of knowledge.

This article was first published in The Mint.

The 26/11 Mumbai terror attack of 2008 was arguably the most heinous violation of India’s territorial integrity. Apart from the loss of lives and wanton destruction of property, the fact that an enemy nation could send armed fighters to South Mumbai, where they indiscriminately shot and killed innocent civilians, left such a scar on the national psyche that the event still lingers in our collective consciousness.

Three-Stage Response to Disruptive Events

We typically have a three-stage response to disruptive social events. Our first (almost instinctual) reaction is to ratchet up regulation, using laws to plug gaps that led to the disruption in the first place. Immediately following the terrorist strikes of 2008 in Mumbai, we limited the manner in which public spaces could be accessed, willingly subjecting ourselves to a level of scrutiny that had not previously existed.

We eventually transitioned to the second stage, where we developed strategies to cope with the consequences of these new regulations. Instead of chafing at the inconvenience we were suffering, we learnt to leave early for the airport knowing that delays at security checkpoints could be unpredictable, and to pack smartly—keeping liquids within permissible volumes and taking care to remove electronics, belts and shoes before passing through metal detectors.

We then moved to the third stage of our response to disruptive change: We accepted these inconveniences as the necessary and acceptable cost of ensuring our personal safety and security. As a result, we no longer question the theatre of security we all play a part in or wonder whether there might be other, less intrusive means by which we could achieve the same results.

Rules created in immediate response to the indiscipline of a few are often a disproportionate reaction to the disruption caused. While this might be viewed as justifiable in the immediate aftermath of the incident, the continued imposition of these restrictions needs to be constantly reassessed. That said, once we pull out a sledgehammer to swat a fly, it is hard to put the weapon down again.

The Three Stages of AI Disruption

There is a similar disruption underway today with the explosive growth of artificial intelligence (AI). Creators and educators around the world worry that the harmful consequences of these new technologies will wreak havoc on their ways of working, preventing them from using the skills they have amassed over a lifetime to earn a living. In the US, this has driven actors and screenwriters to go on strike, worried that unless they assert themselves, studios will get AI to replace them. Educational institutions, for their part, have banned students from using AI, fearing that otherwise students will never learn.

Governments have already moved to Stage 1 of the response to a radical social disruption. They are enforcing existing regulations and enacting new ones to address the harms they believe could be caused by these new technologies. The Italian data protection regulator temporarily banned ChatGPT until it was satisfied that the personal data of Italian residents was adequately protected. Other governments have begun to issue guidelines and regulations that limit what AI companies can do and how, looking to address through law the risks everyone complains of.

The trouble is that these measures only address first-order concerns, such as the direct harms that result if decisions made by machines turn out to be unfair, or how creators should be compensated for the losses they may suffer on account of AI's ability to generate the art they were paid to make.

These, as we have seen, are concerns that are typically addressed at Stage 2, when we develop mitigation strategies to deal with the disruption. We know that given time, creators will learn to properly leverage AI—first as tools that improve their efficiency at performing repetitive tasks, but eventually by augmenting creativity, helping them perform feats humans can’t on their own.

From there on, it will not be long before we truly embrace what AI has to offer, understanding its shortcomings well enough to extract its many benefits. When that happens, for every established artist who resists the use of AI, there will be a dozen others who use it to develop an artistic style that incorporates this new technology in their output. And once we realize that decisions taken by AI are, more often than not, fair, it will not be long before we wholeheartedly embrace AI decision-making.

Second-Order Consequences of AI

It is only at this point that the second-order consequences of artificial intelligence will become apparent. This is when it will occur to us that instead of worrying about AI-generated hallucinations and fake news, we should have been concerned about what happens once AI starts reducing the opacity of global knowledge. As Samuel Hammond points out, AI will improve the transparency of information, and this in itself could impose a cost that nobody yet fully understands. Once knowledge expands in this manner, it will erode the inherent barriers that stand in the way of attempts to manipulate what people believe and what the world thinks. And when that happens, the resulting knowledge explosion could threaten our liberal democratic order.

All of which suggests that rather than looking to regulate AI, we should be focusing our energy on learning to master it. Rather than worrying about the first-order harms of this technology, we need to understand the second-order consequences of using AI—before it is too late.