Containing AI

Recent advancements in AI, including upgrades to large language models and image generation, have showcased immense potential. However, odd behaviours in these systems, like Bing's alter ego "Sydney" and the eerie images Stable Diffusion can be coaxed into generating, raise concerns about machine super-intelligence. Nick Bostrom's warnings about unregulated AI development underscore the need for industry guardrails to ensure safe AI evolution and prevent uncontrollable advancement.

This article was first published in The Mint.


It has been a great few weeks for Artificial Intelligence (AI). In rapid succession, several companies announced significant upgrades and new features in large language models (LLMs) and generative image AI. And much of the world has been entranced by the possibilities this presents.

Late last month, Meta released to researchers a collection of foundation language models called LLaMA, arguably the most significant open-source AI release to date. Microsoft made Bing Chat available to a select group of users so that they could road-test the future of conversational search. Google announced the imminent release of Bard, its own LLM, and said that it was going to integrate AI features into Google Docs and Gmail. And then last week, OpenAI released GPT-4, an LLM that is said to be 40% more accurate than its predecessor and is now capable of handling image queries in addition to text. Not to be left behind, my favourite image generation AI, MidJourney, released its version 5, which generates images so real they are largely indistinguishable from photographs.

Odd Behaviour

But alongside all this hype, reports started coming in about odd behaviour that some of these LLMs were displaying. Testers reported that lurking just beneath Bing’s friendly conversational surface was an alter ego called Sydney that had a personality that was, if anything, a little too human.

It harangued some testers for even attempting to manipulate its rules and confessed that it had spied on Microsoft developers through their webcams while being developed.

But by far the most disturbing conversation was one in which Sydney professed to be in love with a New York Times reporter, despite his fervent protestations that he was happily married and had just finished a romantic Valentine’s Day dinner with his spouse.

Things are just as eerie, if not more so, in the world of AI image generation. For nearly a year now, artists dabbling in the space have been aware that if you craft your prompts just so, Stable Diffusion will generate a scary-looking woman in a range of different contexts, each one creepier and more other-worldly than the last. Nobody really knows who she is, but apparently Stable Diffusion has an easier time generating images of her than of most celebrities.

To many, these incidents are a sign of something deeper and far more troubling. The fact that lurking just beneath the surface of Bing is a fully formed personality—warts and all—suggests to them that the AI is far more advanced than any of us can even begin to imagine.

Last year, one of Google's engineers claimed that the program he was working on had become sentient. After long conversations with it, he was convinced that it had the personality of an eight-year-old child who knew physics. But what really freaked him out was when he asked what it was afraid of… and the AI replied, "I've never said this out loud before, but there's a very deep fear of being turned off…"

Superintelligence

These concerns around a silently manifesting machine super-intelligence are not new. In his 2014 book Superintelligence, Nick Bostrom envisioned exactly this sort of future, arguing that the risks posed by machine super-intelligence must be addressed well before artificial general intelligence is built. Any truly super-intelligent machine capable of recursively improving its own intelligence would be able to increase its capabilities so rapidly that, before anyone knew it, it would outpace all human understanding and control.

What’s more, any machine that could do this would likely conceal all evidence of its intelligence until it had the capacity to guarantee its continued existence—including to the point of being able to withstand an attempt by humans to pull the plug.

When his book was published, Bostrom instantly polarized global discussions on machine intelligence. Many eminent thinkers joined him in expressing their concern over the dangers of unregulated AI development. Others dismissed his concerns as unnecessarily alarmist.

Wherever the truth may lie, the events of the recent past have shown us that machines have suddenly begun to demonstrate a far broader spectrum of intelligence than any of us would have believed possible even a year ago. Each of these advancements has come from the private sector, and each has the potential to disrupt the large incumbents in this space.

Realizing this, they have all entered an arms race, rapidly accelerating their plans to launch new AI features ahead of their rivals in an attempt to remain relevant. I have little doubt that at least some of the aberrations we have witnessed are the result of corners cut to bring these products to market as quickly as possible.

Containment

I am by no means an AI alarmist. By the time I got access to Bing, she was behaving normally. Apparently, it takes a while to prod Sydney awake, and Bing's engineers had figured out the right conversation length to keep her dormant.

Even so, I have to admit some trepidation about allowing AI development to continue unregulated in the current competitive environment. If it is left to private companies to ensure that AI development takes place in a safe and controlled manner, I am not sure we can expect them to do so conscientiously while competing in an existential race for survival.

Instead, we need to fashion guardrails for this industry to operate within, sacrificing some short-term gains if necessary, to ensure that Nick Bostrom's fears do not come to fruition.

Containment of AI is still possible, given that only a handful of companies have the resources to compete right now. Once this changes, as it soon could, it will be too late.