Ill-Advised Advisory
Regulators believe that, the moment a new technology comes around, they need to flex their regulatory muscle to wrestle it under control. In most instances, they would be far better off waiting until they fully understand all the dimensions of the problem before acting. The new CERT-In advisory on generative AI is a case in point.
This article was first published in The Mint. You can read the original at this link.
With all the frenetic activity in the field of generative AI, regulators around the world are facing pressure to do something about it. Many have jumped into the fray, issuing everything from blogposts and guidance to full-fledged policy frameworks on the issue.
Regulators Being Regulators
In China, new rules drafted by the Cyberspace Administration of China require generative AI firms to submit security assessments to the authorities before launching their offerings to the public, so that regulators can make sure the content these services generate aligns with the country’s core socialist values. If inappropriate content does end up being generated, the organizations responsible have been given three months to update their technology to ensure that similar content is not generated again.
In the US, attorneys at the Federal Trade Commission have written blogposts on generative AI, highlighting how these tools can be used to leverage unearned human trust, deceptively steering the vulnerable into harmful decisions about their finances, health and employment. These posts are intended to send a message to companies engaging in such practices that this behaviour will not be tolerated; that they should instead be carrying out risk assessments of the foreseeable downstream consequences of using these technologies and looking to mitigate the impact that these tools might have on their customers.
In the midst of all this regulatory cacophony, it was pleasing to note that the Indian government was being appropriately circumspect. Rather than jumping, lemming-like, into the fray, our information technology minister made it clear in Parliament that the government was not bringing in a new law or regulation to address any of the consequences that advances in AI might have.
This, to my mind, is the right response. To meaningfully regulate a technology as revolutionary as this, we need to fully understand all it is capable of doing—its harms and benefits alike—so that the regulatory approach we adopt strikes the right balance between mitigating the worst of the harms and maximizing the benefits.
The CERT-In Advisory
But I should have known this would be too good to last. Despite the assurances of the minister, it was only a matter of time before some other department would come out with a statement on generative AI that would unnecessarily muddy the waters.
Last week, India’s Computer Emergency Response Team (CERT-In) did just that, issuing an advisory on the security implications of AI language-based applications that did not disappoint. Not only was it utterly bland and meaningless, it was singularly unhelpful to readers who might have turned to it looking for advice on how to engage with this new technology.
For an advisory aimed at addressing the security implications of a new technology, the document does not point out a single new risk that the technology might pose. Instead, it describes how generative AI can be used to perpetrate the same sorts of harms that malicious actors have been carrying out since the birth of the internet: how it can be used to exploit vulnerabilities, write malicious code, and construct malware and ransomware; and how, in the hands of phishers, it could be used to create fake websites and generate fake news that fool people into giving up personal information that can then be used against them.
As a result, the safety measures it suggests we adopt are no different from what companies ought to be doing anyway: educating users on the right way to use new technology tools, requiring them to verify domains before visiting them, and to exercise care before clicking on links that seem suspicious. This is advice I give everyone all the time—whether they are using generative AI or not.
It goes on, rather parochially, to tell companies how they should be setting themselves up to tackle the risks that generative AI poses to their operations. It insists that they implement content filters capable of detecting and preventing the dissemination of inappropriate content; that they monitor how their users interact with generative AI applications to detect any suspicious activity within the organisation; that they conduct security audits to identify vulnerabilities and inadvertent disclosures of information; and that they employ multifactor authentication to prevent AI applications from directly accessing user accounts.
None of this is new. Any company that is half serious about its cybersecurity already has processes in place for all of this. All the advisory has done is reiterate standard precautions against known threats, confirming what we already knew—that generative AI does nothing more than give bad actors a new tool to do what they’ve always done.
This Needs to Stop
CERT-In is the country’s primary line of defence against cyber threats. It is what we turn to when we are hacked or face some new form of cyber attack. It ill behooves an organization of this significance to issue lame advisories on matters that don’t count. Advisories like this one lower our faith in what it stands for.
So why have I taken up space in this newspaper to talk about an advisory that isn’t worth the paper it’s written on?
Because this is a mistake regulators make far more often than they ought to. Every time a new technology comes along, they feel the need to respond with regulation, to flex their muscles in the only way they know how in an attempt to wrestle it under control, even when there is no real point in doing so.
This needs to stop.