A New AI Law

There are many who claim that the harms AI can cause can only be addressed by new legislation specifically designed for them. That said, existing laws have, more often than not, been framed in terms broad enough to deal with these harms regardless of the technology that caused them.

This article was first published in The Mint.

In an earlier article in this column, I spoke of a lecture that Judge Easterbrook once delivered on the subject of property in cyberspace. His talk was titled ‘Cyberspace and the Law of the Horse’, his way of highlighting the foolishness of coming up with fresh laws to regulate new technologies when general principles of law already addressed the concerns they raised.

He argued that there had been any number of cases where horses were bought and sold, and where redress was sought for injuries caused when people were kicked by a horse or fell off one. None of this, he said, moved us to enact a special “Law of the Horse” to deal with these “new” harms. The general law of the transfer of property, of torts, and of the regulation of commercial transactions was more than enough to provide a legal framework within which these harms could be addressed.

This is advice that many of those writing about artificial intelligence (AI) regulation would do well to heed. Much of what is being written about AI points to the need for fresh regulation to address the harms it can cause. Some jurisdictions, like the EU, have already acted on these suggestions by enacting new AI legislation. Given the coercive power of the Brussels Effect, many other countries are following in its footsteps, and it is more likely than not that Indian regulators will be compelled to consider going down the EU’s path.

Regulation of AI

But do we really need a new law to specify how the harms caused by AI should be addressed? To answer this question, I took a closer look at some of the harms attributed to AI to see whether they were so special that they called for new regulation, or whether existing law was sufficient to address them.

Let’s take, for instance, fake news: the concern that AI can be used to generate content (audio or video) that makes it seem as if individuals said or did something they, in fact, did not. Voice AI has reached a point where it can mimic a real person’s speech in both intonation and mannerism, and video AI can generate footage that makes it appear as if people are doing things they never did. This is impersonation, which, regardless of the technology used, is not only an offence under the Information Technology Act, 2000, but also a crime under the Bharatiya Nyaya Sanhita, 2023. Where it harms a person’s reputation, it also amounts to defamation.

Then there is the concern that AI might be used to incite communal tension: that someone might use it to create the impression that certain persons are purposefully offending the sensibilities of another caste or community with a view to inciting conflict. The use of electronic communications to promote disharmony and feelings of enmity between religious, racial or regional groups is a crime under Section 196 of the Bharatiya Nyaya Sanhita. It matters little what technology was used, and there is no need to enact a special law to address this concern.

Pretty much any harm you think is exacerbated by AI, be it misleading advertising, election interference, forgery or bias, is covered by existing provisions of law. In almost every instance, these laws have been drafted broadly enough to cover a wide range of circumstances, so it makes little difference that the harm was committed using a technology that did not exist when the relevant law was enacted.

This is why I do not believe we need to enact a brand-new AI Act to address the so-called harms of AI. What we might need to do instead is educate our regulators about AI and the fact that it allows crimes to be committed in ways that were previously impossible. We also need to train them in new forensic techniques that will help them investigate AI crimes, so that they can better detect whether or not AI has been used. Wherever possible, we should teach them how to use AI itself to do this detection work. Chances are, this will reduce the time and cost of investigating AI harms.

Regulations for AI

While we may not need a new law to address the harms caused by AI, we may need to amend existing regulations to maximize the benefits we can extract from this new technology. Since most existing legal frameworks were conceptualized before generative AI, some of the language in these statutes may, if interpreted strictly, not permit some of what AI does.

Take, for example, intellectual property law. Copying a literary work without the permission of its author is technically a violation of copyright. However, despite having ingested literary works as part of their training datasets, AI models almost never reproduce verbatim excerpts of those works. And since copyright law protects only the specific expression of ideas in original works of authorship, not the underlying ideas themselves, prompt responses that reference the ideas in a literary work do not violate the spirit of the law.

Many countries have recognized this distinction and introduced specific exemptions to exclude “training” from the strict applicability of the law. This is referred to as the text and data mining (TDM) exemption, and jurisdictions such as Japan and the EU have incorporated it into their laws. We should review our own laws to make sure existing legal provisions do not accidentally stand in the way of our AI ambitions.