The perilous consequences of automation dependency

The crash of Air France Flight 447 in 2009 highlights the dangers of over-reliance on automation in aviation. When the autopilot disengaged due to a malfunction, the pilots’ lack of manual flying experience led to a fatal error. But, counter-intuitively, rather than relying on human intervention, we need to trust machines more and build better systems to automate them.

This article was first published in The Mint. You can read the original at this link.


At 11.13pm on 31 May 2009, Air France Flight 447 crashed into the Atlantic, killing all 228 souls on board. When it went down, it was being piloted by three highly trained pilots: two co-pilots who had been trained from scratch by Air France and had flown Airbuses all their professional lives, and a captain with over 11,000 flight hours under his belt. The plane they were flying was the Airbus A330, the most advanced commercial aircraft in the world at the time, equipped with fly-by-wire technology so good that in the 15 years since the type was first introduced in 1994, there had not been a single crash. And yet, for reasons that crash investigators still find hard to explain, the plane went down, done in by the very pilot errors that automation was supposed to protect against.

On that fateful day, the pilots, in accordance with standard procedure, switched the flight to autopilot within four minutes of take-off. Under normal circumstances, the plane would have remained on autopilot until minutes before touchdown, flown all that time under the complete control of the plane’s flight-management computer, which was programmed to keep it on the route input by Air France’s dispatchers in France.

However, somewhere in the middle of the Atlantic, ice crystals began to accumulate inside the plane’s nose-mounted pitot probes, clogging them to the point where they could no longer provide reliable airspeed readings to the flight-management system. The computer, which had been programmed to hand over control to the human pilots when it did not have enough data to fly the plane safely, disengaged itself from the controls after telling the pilots that they would have to fly it like a conventional aircraft. That is why a pilot who, until then, had rarely flown the plane at any time other than take-off and landing suddenly had to take control of the aircraft mid-flight. Based on flight-recorder evidence, his inexperience showed almost immediately: the plane began rocking from side to side and then inexplicably went into a steep climb.

Planes get their lift from their wings. As long as the wings are held at a positive angle of attack, the air flowing over and under them generates enough lift to keep the aircraft aloft. Up to a point, the greater the angle of attack, the more lift the wings produce; but when the angle becomes too steep, the air stops flowing smoothly over the wings and they stall. Once that happens, the wings go from being aerodynamically efficient to generating enormous drag, far more than the thrust of the engines can overcome.
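To make the idea concrete, here is a minimal sketch of that relationship, using a toy lift model with entirely made-up numbers (the air density, wing area, speed and critical angle below are assumptions for illustration, not real A330 figures). It shows lift rising with angle of attack and then collapsing once the critical angle is exceeded.

```python
# Illustrative sketch only: a toy lift model with assumed numbers,
# not real aerodynamics data. Lift grows roughly linearly with angle
# of attack until a critical angle, after which smooth airflow breaks
# down and lift collapses (the stall described above).

RHO = 0.4            # assumed air density at cruise altitude, kg/m^3
WING_AREA = 360.0    # assumed wing area, m^2
SPEED = 240.0        # assumed true airspeed, m/s
CRITICAL_AOA = 15.0  # assumed critical angle of attack, degrees

def lift_coefficient(aoa_deg: float) -> float:
    """Toy lift-coefficient curve: linear up to the critical angle, then a sharp drop."""
    if aoa_deg <= CRITICAL_AOA:
        return 0.1 * aoa_deg
    # Past the critical angle the wing stalls and lift falls away rapidly.
    return max(0.0, 0.1 * CRITICAL_AOA - 0.3 * (aoa_deg - CRITICAL_AOA))

def lift(aoa_deg: float) -> float:
    """Standard lift equation L = 0.5 * rho * v^2 * S * C_L, in newtons."""
    return 0.5 * RHO * SPEED ** 2 * WING_AREA * lift_coefficient(aoa_deg)

for aoa in (2, 5, 10, 15, 18, 25):
    print(f"angle of attack {aoa:>2} deg  ->  lift ~ {lift(aoa) / 1000:,.0f} kN")
```

In this toy model, pushing the nose down brings the angle of attack back below the critical value, which is exactly why the recovery manoeuvre described next works.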

The way to recover from a stall is to lower the nose and dive, reducing the angle of attack until smooth airflow over the wings is restored and they generate enough lift to fly the plane out of the stall. As counter-intuitive as it might sound, this is basic flying. Had the pilots of the stricken AF447 done this without panicking, everything would have been all right. Unfortunately, accustomed as they were to letting the automation do the flying, their knowledge of how to control the aircraft in an emergency was fatally rusty.

This sort of reliance on automation has led to disaster in many situations. There have been people who have trusted their navigation systems so implicitly that, despite the evidence of their eyes, they have driven their cars off cliffs or into inhospitable deserts without so much as a raised eyebrow. We are so accustomed to turn-by-turn navigation that we’ve lost the art of finding our way by looking for waypoints and landmarks along the way. So dependent are we on automated systems that whether it be checking into a flight or a hotel, or processing a sale, or simply figuring out whether it’s raining outside, we are paralyzed into inaction if our automated systems ever stop working.

Some say the solution is to wean ourselves off machines, turning them off once in a while so that we can remember what it is like to have to fend for ourselves. This way, in the rare event that our devices fail us, at least we are not entirely bereft. Airlines now encourage pilots to switch off the autopilot every once in a while so that they can practise flying the plane the old-fashioned way. The rest of us would do well to switch off GPS every so often and try to find our way using old-fashioned maps and physical landmarks.

But maybe we should be using an entirely different approach.

Automated systems are improving so rapidly that they will soon be able to do everything we can and more. Despite this, we still feel the need to always have a [[human in the loop]]—to take over in case something untoward happens. We do this because we believe that humans will always be able to apply some instinctual intelligence to find solutions where machines can’t. What we don’t realize is that after decades of not having to intervene, humans are now hopelessly out of touch. If they are given the controls of complex machines, there is little likelihood that they will be able to magically hit upon the correct course of action.

What we need to do, perhaps counter-intuitively, is trust more in our machines, building them so that there is no longer any need to have humans in the loop. We need to force ourselves to build better systems, with redundancies if required, that are perfectly capable of operating without human supervision. After all, we built these machines to eliminate our human errors. It’s time to get out of the way and let them do their job.