AI Governance Guidelines

The new AI Governance Guidelines issued by MeitY describe a pro-innovation regulatory framework for AI in India, one that balances the need to encourage AI adoption with the need to guard against the risks that might result.
This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here. If you would like to receive these articles in your inbox every week, please consider subscribing by clicking on this link.
Last week, the Ministry of Electronics and Information Technology released the India AI Governance Guidelines, a proposed framework for governing artificial intelligence (AI). As a member of the committee tasked with drafting the final report, I find it gratifying to see the recommendations become the basis on which AI will be governed in the country.
At the outset, it is important to clarify that this document sets out the committee’s recommendations on how the country should approach AI governance. It is up to the government to review them and take the steps it thinks appropriate for implementation.
The committee unanimously agreed to base the guidelines on the same seven principles that had been proposed in the Reserve Bank of India’s FREE-AI Committee report (of which I was also a part). Those principles were modified slightly so that they could be applied beyond the financial sector and aligned with national priorities. I am glad to see that the pro-innovation tilt these principles embody has become the hallmark of AI governance in India.
While the committee’s recommendations spanned six pillars, I will, in this article, focus only on those relating to policy and regulation. Businesses that have been dreading yet another law to comply with will be relieved to learn that the committee has concluded there is no need for India to have an AI-specific law (along the lines of the EU AI Act). Existing laws are enough to address the risks arising from the use of AI, and just because a new technology comes to the fore doesn’t mean we have to enact a new law to deal with it. That said, the committee has pointed out that some laws may need to be amended to ensure that they are aligned with the ways in which AI operates.
Data Protection and Copyright
For instance, the committee felt that there may be a need to review the provisions of the Digital Personal Data Protection Act, 2023 to assess whether they remain effective in the context of modern AI systems. Such a review could examine whether the exclusion of publicly available personal data from the purview of the Act allows AI models to be trained on it, and whether the various data protection principles (such as purpose limitation) are still appropriate given the way AI systems operate.
With regard to copyright, the committee was inclined to strike a balance between fostering innovation and protecting the rights of copyright holders. To that end, it favoured a text-and-data-mining exemption along the lines of what several countries have already adopted, while still protecting creator rights. Since the Department for Promotion of Industry and Internal Trade has set up a panel to look into this very issue, the committee refrained from making firm recommendations in this regard. I am eager to see what that panel recommends.
As is often the case with multi-stakeholder committees, the deliberations were long and the arguments protracted. Wherever consensus was elusive, care was taken to ensure that the report reflected all viewpoints. In some instances, multiple solutions were available to address the issues before us. Rather than being prescriptive about a specific approach, we listed the available options so that the government could choose which to adopt. Given the cross-cutting nature of AI, we believed that this would provide policymakers with the flexibility they need, considering the diverse circumstances they encounter.
Content Authentication
A case in point is content authentication. The committee was mindful of the many ways in which Generative AI technologies could foster creativity in image, video and music generation, but was equally concerned about the harm that could result. To strike a balance, we examined various content authentication and provenance technologies to see if any of them could be used to help identify content generated or modified by an AI system.
While organisations like the Coalition for Content Provenance and Authenticity (C2PA) have developed standards for certifying provenance, we found that malicious actors could bypass these safeguards. This is why, rather than making a firm recommendation on whether AI-generated content should be watermarked, we highlighted the technology solutions available and left it to the AI governance group to recommend what would actually work. Speaking for myself, I am convinced that, in a world of deepfakes, it is wiser to label what is true than to watermark what AI has generated.
Techno-Legal Measures
Despite the success techno-legal measures have had in the context of digital public infrastructure, we had similar reservations about wholeheartedly endorsing them for the regulation of AI. Having written the book on techno-legal regulation, I know how those with a techno-legal hammer tend to see every problem as a nail. That is why, although the report mentions that such measures may be considered, it recommends that they be properly tested and deployed only if there is a clear regulatory objective.
Contained within India’s new AI guidelines are several novel approaches to regulating AI. They attempt to strike a balance between the developmental need to deploy artificial intelligence and societal concerns around doing so safely. For other countries of the Global South grappling with similar questions about how this new technology should be governed, India’s pro-innovation approach could offer a template.
