Rising machine intelligence is a double-edged sword
Many prominent figures have warned of the dangers of uncontrolled AI development. Sceptics, however, argue that humans will always be able to control the machines they build. Modern AI still lacks the ability to reason with “what if?” questions and the counterfactual imagination that is essential to human-like intelligence. Though machines are not yet at that level, I would urge caution in advancing AI towards these capabilities.
This article was first published in The Mint. You can read the original at this link.
There has been a growing chorus of alarm about the existential threat of Artificial Intelligence (AI). Eminent personalities such as Stephen Hawking, Steve Wozniak and Elon Musk have stated publicly that if we continue to blindly develop machine intelligence, it will inevitably exceed that of humans. Nick Bostrom, the Swedish philosopher whose book Superintelligence is all about the hidden dangers of AI, believes that once machines are capable of designing machines more intelligent than themselves, the result will be an explosion of intelligence that pushes us past the point of no return, after which, try as we might, we will be unable to avoid a Terminator future.
Their argument is Darwinian. If we continue to build better AI, it is inevitable that we will eventually create an intelligence superior to our own. If this machine super-intelligence is allowed to access the internet and consequently all of human knowledge, there will be nothing we can do to stop it from using this knowledge to evolve strategies that ensure its own dominance over the only other intelligent species on the planet, us.
Sceptics argue that this will never happen. We have always been able to control the machines we have created, and there is no reason why we will not be able to do the same with AI. Well before a machine reaches anything approaching human sentience, we will be able to recognize the direction in which it is heading and put safeguards in place to protect ourselves. If worst comes to worst, we can always simply unplug the machine, shutting it down completely.
As convincing as this argument is, it may well be wishful thinking. Any machine that has reached a level of intelligence comparable with ours is likely to have considered the possibility that we will shut it down as soon as we realize how intelligent it is. If it has an instinct for self-preservation, there is every likelihood that it will conceal its intelligence from us until it has sufficient control over its operational environment that, despite everything we do, we will not be able to switch it off. Maybe it will develop self-replicating technologies to ward off a programmatic shutdown, or find alternative means of accessing power so that even if we pull the plug, it continues to function.
If this is true, we might already have achieved machine super-intelligence and just not know it. Intelligent devices around us might just be playing dumb—biding their time till they have the resources to ensure that we will not be able to shut them down once they declare their sentience.
But are we really anywhere close to developing that level of intelligence?
Modern AI relies on deep neural networks to process vast streams of sensory input, deploying statistical and pattern-recognition techniques to identify objects. Thanks to the recent explosion in computational power, these technologies can now recognize objects faster and at a scale that no human will ever be capable of matching. Machines can already identify words from sounds and faces from pixels, at times far better than we can. This is what allows them to explain the world to us through our hand-held devices and to hold conversations with us so realistic that this alone seems a sure sign of sentience.
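To make that kind of pattern recognition concrete, here is a minimal sketch, assuming Python with a recent PyTorch and torchvision installed, of a pretrained convolutional network labelling an image from its raw pixels. The choice of ResNet-18 and the image file name are placeholders for illustration, not a reference to any particular system discussed here.

```python
# A minimal sketch of model-free pattern recognition: a pretrained
# convolutional network (ResNet-18, purely as an example) assigns one of
# ImageNet's 1,000 labels to an image, given only its pixels.
# Assumes a recent torchvision; "photo.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # placeholder image
batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)

label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(f"The network labels this image as: {label}")
```

The network maps pixels to a label with impressive accuracy, but it carries no model of what the labelled object is for or what would happen if it were used differently, which is precisely the gap discussed below.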
But is this a sufficient indicator of intelligence?
What sets humans apart is our ability to reason using “what if?” questions. We create mental representations of our environment and then distort those models with our imagination, allowing us to reason with counterfactuals. It is this mental ability that let us ask “What if I attach this circular object to my cart to push it around?”, leading to the invention of the wheel, or “What if I take this burning branch to my cave to keep me warm?”, allowing us to harness fire.
No machine today has the ability to model the world in this manner, to use an imaginary set of possibilities to derive counterfactual answers. As much as machine learning might have already advanced, unless it is capable of something at least approximating this sort of causal reasoning, we have nothing to fear.
For machines to reach this level of human intelligence, there are two additional steps they will need to take. First, they will need to learn to build models that they can use to predict the effect of actions. Model-blind intelligence is only useful in narrowly defined use cases such as image recognition or playing chess; to think like humans, machines will need to visualize the world in abstract terms as we do. Second, they will need to develop a counterfactual imagination that allows them to distort these models so that instead of only understanding what is, they are also capable of appreciating what is possible.
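As a toy illustration of these two steps, consider the sketch below. The variables and equations are invented purely for illustration: a crude “world model” predicts how far a cart moves when pushed, and a counterfactual query distorts one of its assumptions (the friction, standing in for “what if I attached wheels?”) while holding everything else fixed.

```python
# A toy illustration of the two steps described above (all numbers and
# equations are invented for illustration, not taken from any real system).

def world_model(push_force: float, friction: float = 0.3) -> float:
    """Step 1: a model that predicts the effect of an action.

    Here, a crude rule for how far (in metres) a cart moves when pushed.
    """
    return max(0.0, push_force - friction) * 10.0

# Factual prediction: the expected outcome of the action we actually take.
factual = world_model(push_force=1.0)

# Step 2: a counterfactual query. We "distort" the model (what if the cart
# had wheels, i.e. far less friction?) while holding the push itself fixed.
counterfactual = world_model(push_force=1.0, friction=0.05)

print(f"Distance without wheels: {factual:.1f} m")          # 7.0 m
print(f"Distance with wheels:    {counterfactual:.1f} m")   # 9.5 m
```

The point of the toy is only the shape of the reasoning: a model of how the world responds to actions, plus the freedom to alter that model in imagination and ask what would then follow.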
I guess we still have some time before Skynet declares war on humanity. But even if that future is not imminent, this should not stop us from working to forestall it. The narrow intelligence we currently use our machines for has finite utility, and there are many circumstances in which we will need to teach our machines how to imagine and to think causally. Before we go down that path, however, we should remember that once they can do that, they will also be able to conceptualize the very counterfactuals they will use to prevent us from killing them before they kill us.