AI for the Global South

The European Union has agreed to a new law to regulate artificial intelligence (AI), imposing transparency requirements on general-purpose AI models and stronger restrictions on more powerful ones. The US offers a broader, more nuanced framework. However, there is a North-South divide: the Global South views AI as an opportunity, in contrast to the more risk-focused approach of the Global North.

This article was first published in The Mint. You can read the original at this link.


Last week, Commissioner Thierry Breton announced that a historic deal had been reached among the countries of the EU to enact a new law to regulate artificial intelligence (AI). With this, Europe will become the first region in the world to enact regulation stipulating how AI should be developed and used.

While the finer details were still emerging at the time of writing, it seems that Europe will be adopting a tiered approach to regulation: general-purpose AI models will be subject to transparency requirements, with the more powerful among them facing stronger obligations, while applications such as facial recognition and social scoring, as well as AI systems that "manipulate human behaviour to circumvent their free will," will be restricted or banned outright. Commending the agreement reached, European Commission President Ursula von der Leyen said that it had transposed European values into a new era.

Some Concerns

Despite assurances that this legislation would be the launch-pad needed by EU startups and researchers to lead the global race for AI dominance, there was real concern that the new law would significantly hamper innovation. While few object, conceptually, to a risk-based approach, the fact that the AI Act applies these restrictions to the technology itself (the foundation models) and not just to how it is used has raised concerns that European AI models will end up lagging behind their global counterparts.

Of greater concern is the fact that even after the new law is enacted, it will not come into effect for another 12 to 24 months. This is an extraordinarily long period, given the rapid pace of change in the field of generative AI. Which leads us to the question of whether this ponderous legislative approach, one that has come to epitomize European data governance, is the right way to go when it comes to regulating AI.

In October this year, US President Joe Biden issued an executive order that offered a different way of thinking about AI regulation. Though over 100 pages long, the document was organized around eight core principles: establishing new standards for AI safety and security; protecting privacy; advancing equity and civil rights; standing up for consumers, patients and students; supporting workers; promoting innovation and competition; advancing American leadership abroad; and ensuring responsible and effective government use of AI. This is the US approach to regulating this new and utterly disruptive technology: one that is as broad as it is deep, covering a wide range of horizontal regulatory concerns while still delving into the nuances of specific AI use-cases in different sectors.

Geopolitics of AI

It is also impossible to ignore the geopolitical implications of an AI-augmented world. In discussions over the past week, I heard US policymakers talk about regulation of the space in terms of having "a small yard with a high fence." In other words, the US is looking to ensure that critical choke-points for foundational technologies in the AI domain remain within its yard, and it will build regulatory fences high enough that its strategic competitors cannot exploit American technologies to undermine its security interests. Given that much of the development of foundational models has taken place in the US, other countries would be wise to assess how such an approach would affect their own national interests.

These issues were at the forefront of many discussions at the Global Technology Summit 2023 that took place in New Delhi last week. Over the course of the three-day conference, I had the opportunity to listen to and participate in a number of panel and round-table discussions in which experts from around the world discussed how existing regulatory frameworks would have to reorient themselves in light of new technologies and geopolitical risks to address the futures before us.

What about the South?

As I listened in on these conversations from the sidelines, I was struck by the extent to which discussions around AI regulation are currently being led by countries of the Global North. I grew increasingly concerned that, as their anxieties and fears are translated into regulations that will eventually define how the rest of the world consumes this new technology, the interests of the Global South may end up being imperfectly represented.

The Global North and Global South think very differently about AI regulation. Countries of the Global North worry about the risks of this new technology from a place of privilege, sitting, as they do, at the top of the economic food chain. For countries of the Global South, however, AI represents an opportunity to leapfrog traditional stages of development in ways that would improve productivity in general and accelerate their efforts to pull their populations out of poverty.

We can therefore expect AI regulations created by the Global North to be firmly focused on mitigating risks and minimizing harms. Regulations put in place by the Global South, on the other hand, are likely to have a significantly different emphasis: they will look to promote the use of AI for empowerment.

However, if countries of the Global South are not given a seat at the table as the Global North initiates AI regulation, the considerations of countries like India will not be adequately represented in the global AI discourse.

We need to raise our voices so that they can be heard everywhere.

Before it is too late.