The Rise of the Machines

We should regulate autonomous weapons the way we govern nuclear non-proliferation and climate change: through international consensus, not national policy. If we build machine intelligence that can decide whom to kill, we will not be able to control whose hands it falls into.

This article was first published in The Mint.


When IBM’s Deep Blue supercomputer defeated him in a six-game match, Grandmaster Garry Kasparov was not entirely surprised. The battle between humans and computers has always been about processing speed, and once computers attained the computational power necessary to look seven moves ahead, it was inevitable that they would wear humans down with the brute force of their infallibility. From that point on, even human intuition was no match for the sheer analytical prowess of a computer.

Rather than hang up his gloves, Kasparov began to toy with the idea of a chess tournament that allowed humans to collaborate with computers. The result was the Advanced Chess Tournament, a cyborg chess championship in which humans could team up with computers, producing a hybrid intelligence that combined the lightning-fast (but inherently uncreative) computational abilities of supercomputers with human intuition and psychological insight.

Since then, human-machine interdependence has manifested itself in many ways. Fly-by-wire aircraft use machine intelligence to augment the skills of human pilots; modern automobiles have built-in driver-assist features that make our cars more powerful and yet safer to drive; mobile phones today are near-perfect digital assistants that we can converse with and rely upon to anticipate our needs. We have, without realizing it, enveloped ourselves in a cocoon of machine intelligence, and we already allow our machines to do many of the things we would otherwise have had to do ourselves.

Last week, The New York Times carried a story about a US Department of Defense project to put artificial intelligence into military drones, allowing them to autonomously distinguish between hostile and harmless human targets. Early trials indicate that these machines can already accurately distinguish between a photographer crouching with a camera raised to eye level and a sniper aiming a Kalashnikov.

It is being called centaur war-fighting, after the half-man, half-horse creature of Greek mythology. In hindsight, we should have known that this is what Kasparov’s Advanced Chess Tournament would eventually evolve into. Even though the applications are relatively benign right now, these are the early signs of the development of a wide range of autonomous weapons ostensibly designed to enhance the problem-solving abilities of human soldiers and improve their efficacy. But despite the well-meaning assurances about how safe these intelligent weapons are, there is no avoiding the sinister implications of machines that are programmed to decide whom to kill.

To a technology lawyer, the ethical issues implicit in this alone are troubling. In an earlier article in this column, I highlighted some of the challenges that automobile manufacturers will face while trying to program intelligence into cars. I spoke, in particular, of the difficulties in teaching computers to solve the runaway trolley problem. But those issues pale into insignificance when compared with the legal and moral responsibilities that come with creating thinking weapons designed to take human life.

For instance, how do we decide how much autonomy these war machines should be imbued with? We are assured that no autonomous weapon will function without the appropriate level of human judgment, but who will be held responsible when an intelligent drone bombs a school instead of a military bunker: the drone that wrongly identified the building as hostile, or the human who relied so heavily on machine intelligence that he suppressed his better judgment? Despite our best intentions, weapons proliferate, and while some countries may adopt high ethical standards when it comes to autonomous war machines, others will be less mindful. This is why we have to assume that intelligent weapons will inevitably find their way into the hands of terrorists.

It is for this reason that I believe national governments should not be allowed to take decisions on whether or not to initiate an autonomous weapons programme. Their decisions on the matter will be guided solely by their own immediate self-interest. Instead, if consensus on these issues is to be reached at all, it should be arrived at in the same way that the nations of the world have come to an agreement on nuclear non-proliferation and climate change.

What is perhaps even more immediately relevant is for lawmakers to stay on top of the development of machine intelligence. In his book Superintelligence, Nick Bostrom predicts that true artificial intelligence will manifest silently and, unbeknown to all of us, transform itself into a machine superintelligence capable of outmatching human intelligence. A truly superintelligent AI will know that it has to appear cooperative and docile around humans, masking its true potential, or else risk being shut down.

Unless lawmakers and regulators remain alert to what machine intelligence can evolve into, the Singularity will be upon us before we realize it. And humanity will find itself on the outside.