Will humans be part of the wars of the future?
There are inherent challenges in using autonomous weapons with human oversight, in particular the fact that human judgment becomes an “inconvenient impediment” to the speed of modern warfare. Future wars, especially in the cybernetic arena, may render human intervention meaningless, and that should raise both ethical and practical concerns.
This article was first published in The Mint. You can read the original at this link.
We have long believed that the easiest way to ensure that autonomous weapons are safe is to put a human in the loop. This is why we have designed our autonomous systems so that no matter how sophisticated the technology might be in identifying a target and delivering a payload to its vicinity, the final judgement call as to whether or not the strike should take place is left to a human being. This construct, we feel, addresses many of the ethical and legal issues with robotic weapons and allows us to proceed with using them.
There are many shades of grey within this broad construct. Some autonomous weapons are designed so that a human identifies the target and then leaves it to the weapon to strike it—even if it has subsequently moved away from its original position. Others loiter in the area where the target was spotted—sometimes for days on end—until they have positively identified the target to their human controllers, who then authorize the strike.
In every instance, it is the human, not the weapon itself, that takes the final decision. This has allowed us to rationalize—at least to ourselves—the use of intelligent war machines on the field of battle.
During the Gulf Wars, Patriot PAC-2 missiles were the US Army’s primary line of defence against the threat of incoming ballistic missiles. This all-in-one autonomous weapon system could scan the skies for radar signals that indicated incoming Iraqi missiles and tag them for the attention of a human, who took the final decision on whether or not to fire. The system is said to have worked so well that Patriot batteries engaged over 40 Iraqi Scud missiles during that war, effectively neutralizing their threat.
On 22 March 2003, a Patriot missile battery identified a radar signal as an incoming anti-radiation missile designed to take out US radar installations on the ground. The lieutenant in charge had seconds to decide. With no information beyond the recommendation of the semi-autonomous weapon system, she gave the order to fire, and the Patriot missile took out the threat.
It was only a day later that it was discovered that the Patriot system had misidentified a friendly aircraft coming home to land as an incoming enemy missile. The human who was supposed to exercise judgement to ensure that such mistakes do not happen had had neither enough information nor enough time to recognize that the weapon was wrong.
This is an example of weapon fratricide—a term used to describe the circumstances in which a weapon turns on its own side. It is one of the many challenges of deploying these increasingly advanced weapon systems on the field of battle, even with a human in the loop to take the final call. It is the reason why calls for an absolute ban on autonomous weapons have, of late, grown more shrill.
Incidents like this have shaken our belief in the efficacy of having a human in the loop. We are beginning to realize that human oversight is not nearly enough. The speed at which modern wartime decisions need to be taken makes the human an inconvenient impediment. Under pressure to take quick calls, human operators find their judgement impaired, so much so that they tend to blindly follow the suggestions of the Artificial Intelligence system, defeating the very purpose for which they were put in that chair.
That said, as long as wars are fought using weapons of destruction—explosive projectiles targeted at combatants or military installations—there is still a chance that a human being overseeing the conflict will be able to avert a mistake. It takes time for a missile to reach its target, and it is possible for a human to intervene and countermand the decision, averting a disaster before it happens.
However, this is not the arena in which all the battles of the future will be fought. We are fast moving to a world in which a significant part of the war between nations will be cybernetic. That battle will take place at the speed at which microprocessors communicate—far too fast for humans to follow, let alone intervene.
We have already seen malware like Stuxnet insinuate itself into power plants, water installations, traffic lights and factories autonomously, without direction from central command, taking control of the programmable logic controllers that operate these machines. At the same time, we have seen high-frequency trading bots operate on the stock exchanges, trading on the market at speeds that no human trader can match.
If you couple the speed of a trading algorithm with the autonomous design of the Stuxnet virus, you begin to get a sense of what the wars of the future will look like. These weapons will be able to scale rapidly, penetrate our infrastructure systems and eviscerate us from within. Our only defence against such an onslaught will be to deploy defensive Artificial Intelligence systems that can identify an attack and autonomously counter-attack to protect the system.
It is impossible to have humans in the loop in this sort of war. Humans simply cannot operate at machine speed, and keeping them in the loop would be meaningless. But I am not sure the alternative—relying on defensive Artificial Intelligence to take out these new computer-powered weapons—is any less unsettling.