Machines can err but humans aren’t infallible either
Human oversight is essential in automated systems. Yet for all their efficiency, critical decision-making still requires a balance between human judgment and machine precision.
This article was first published in The Mint. You can read the original at this link.
On the morning of 26 September 1983, alarms went off at Serpukhov-15, the secret command centre of the Soviet Air Defence Forces. The satellites that made up the Soviet Union's early-warning system were reporting that five Minuteman intercontinental ballistic missiles (ICBMs) had just been launched from an American base and were headed for the Soviet Union. The Soviets had just a few minutes to respond before the missiles, no doubt aimed at their own strike capabilities, destroyed the country's ability to retaliate.
It was the height of the Cold War. Both the Soviet Union and the US were bristling with nuclear weapons, and relations between the two superpowers were at an all-time low. Just three weeks earlier, the Soviet air force had shot down a Korean Air Lines aircraft on its way from New York to Seoul for straying into Soviet airspace. All passengers on board, including US Congressman Larry McDonald, were killed. The Soviet Union was on high alert for a possible counterstrike. It was against this background that Colonel Stanislav Petrov, the duty officer at Serpukhov-15, had to take a call. He had to decide whether the US had in fact launched a nuclear attack on the Soviet Union or whether his early-warning system was malfunctioning. If it was an attack, he had to alert Moscow as soon as possible, as the ICBMs would reach Soviet territory within the next 25 minutes. But if he was wrong, any action the Soviet Union took on his warning could turn out to be the first strike of World War III.
Petrov's gut told him that if this were a real attack, the US would have launched more than five missiles. What's more, Soviet ground-based radar, which should have detected incoming missiles within minutes of launch once they rose above the horizon, was not reporting anything. Despite the evidence of his machines, Petrov decided not to escalate.
It was later discovered that the satellites had mistaken the sun's reflection off the clouds for missile launches. The code responsible for the error had to be entirely rewritten to prevent future false alarms. Had it not been for Petrov's clear-minded analysis of the situation, that simple error in the software might have precipitated a nuclear catastrophe.
This incident is often cited to demonstrate why it is so important to have a human in the loop. As good as computers are at processing vast amounts of data, even simple judgments that most humans easily get right seem to be beyond them. It seems they can be trusted only if a human supervises their actions at all times.
This is why jurisprudence around the world has developed to ensure that decisions made by automated systems can be referred to humans for review. Article 22 of the European Union's General Data Protection Regulation, for example, gives people whose rights, freedoms and legitimate interests are affected by solely automated decisions the right to obtain human intervention and to contest those decisions. Similar provisions exist in other privacy laws around the world. It is the same fear of fallibility that has hamstrung the development of autonomous vehicles; we seem unable to accept driverless cars without a human in the driver's seat who can take over if the machine is about to make a mistake.
That said, as good as some humans are at taking decisions, we are all subject to biases that cloud our judgement more often than not. Fatigue, irritation and ill health can seriously impair our ability to make the right call when it is needed, and none of us is entirely above being suborned or swayed to favour one person over another, even when we know we should be impartial. It is precisely because of these human frailties that we set out to build fairer, automated alternatives: complex computer systems that analyse the facts and use algorithms and decision models to arrive at decisions based solely on the available data.
Today, after more than a decade of studying the outputs of such systems, the realization is slowly dawning on us that computers do not always arrive at fair decisions either. In some instances, automated decisions have caused real harm to individuals. As a result, we now face a widespread backlash against computerized decision-making, and demands for an outright ban on all forms of automation are growing more strident.
However, it would be a pity if we were to blindly heed these calls. As much as Petrov is to be commended for his clarity of thought in refusing to escalate the alarm, we have to recognize that we were lucky to have had him in the hot seat at that precise moment in history. We could just as easily have had in his place someone who was tired or swayed by the high-pitched rhetoric of the day, and the outcome could well have been catastrophic.
Computers were not designed to substitute for good decision-makers like Stanislav Petrov. They were intended to pick up the slack for the rest of us, who are not always quite so clear-minded. Since fallible humans make up the majority, we would do well not to get rid of the machines just yet.