Can A Computer Think Like A Pilot? It’s A Trivial Question

In some ways, it’s the wrong question to ask. First, we need to think about what it will take to get there.

In the game of life and evolution there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

--George Dyson, historian

The other night I was watching an amusing program called James May: Our Man in Japan. It’s kind of a quirky cultural travelogue that bores beneath the surface of the usual tourist fare. In one of the episodes, it was revealed that 80 percent of the sushi sold in Japan is made by robots. Did you know that? I didn’t. I’ll come back to that in a bit.

In the video clipped below, Jerry Gregoire, founder of Redbird, delivers a clever and well-thought-out essay on why computers will never replace pilots. Not soon, not ever. On the premise that never is a long time, I’m going to take the opposite view: a lot of occupations will eventually be replaced by automation, including pilots. But first, take a few minutes to watch the video. I’ll wait.

I’ve been reading about advances in artificial intelligence in a couple of good books: "Our Final Invention: Artificial Intelligence and the End of the Human Era" by James Barrat and "Rise of the Machines: A Cybernetic History" by Thomas Rid. The two authors come at AI from different directions, but taken together, they paint a picture of a field much further along than most of us imagine. And Barrat’s book was written five years ago.

“AI” is typically thought of as a monolith—a single term for machines that think and learn. In fact, there are levels of AI. So-called artificial narrow intelligence, or ANI, performs narrow functions or tasks more efficiently than humans can—making sushi, for instance, or Siri flubbing every third question, or maybe even the autonomous subway system that runs on strict programming rules. Artificial general intelligence, or AGI, implies human-level intelligence or nearly so, while ASI—artificial super intelligence—implies machine learning capable of exponential intelligence growth. AGI/ASI is what Elon Musk is terrified of. Some in the AI field believe his fears are not misplaced.

But here’s where I’ll get into trouble by pissing off a lot of people. In the overall arc of human challenges, flying an airplane doesn’t take exceptional intelligence. We’ve romanticized the skill required because of hidebound training methods and the necessity of grinding through regulatory hurdles, but the hand-eye part is easily handled by machines and so is some of the required analysis.

Recall that with a trivial amount of processing power by AI standards, Garmin’s Autoland can find a suitable airport, set up an approach and land the airplane. And it was five years ago that Diamond first flew a fully autonomous flight in a DA42. (No serious AI there; just careful programming.) And by the way, little of this is relevant to the human joy of slipping the surly bonds. That’s a different blog.
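To make that point concrete, here’s a minimal sketch (entirely hypothetical, not Garmin’s actual logic or data model) of the kind of rule-based airport selection an Autoland-style system could use. Nothing in it "thinks"; it just filters candidates against hard constraints and picks the closest one that passes.

```python
# Hypothetical illustration only -- not Garmin's code.
# Rule-based "narrow AI": filter candidate airports by hard constraints,
# then pick the nearest one that qualifies.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Airport:
    ident: str
    distance_nm: float    # distance from present position
    runway_len_ft: int    # longest usable runway
    has_approach: bool    # published instrument approach available
    weather_ok: bool      # ceiling/visibility above minimums

def pick_divert(airports: List[Airport], min_runway_ft: int = 3000) -> Optional[Airport]:
    """Return the nearest airport that satisfies the simple hard constraints."""
    candidates = [
        a for a in airports
        if a.runway_len_ft >= min_runway_ft and a.has_approach and a.weather_ok
    ]
    return min(candidates, key=lambda a: a.distance_nm, default=None)

# Made-up example: the nearest field fails the runway and approach checks.
options = [
    Airport("1X1", 8.0, 2400, False, True),
    Airport("KABC", 14.5, 5000, True, True),
    Airport("KXYZ", 22.0, 7000, True, False),
]
print(pick_divert(options).ident)   # -> KABC
```

A real system weighs far more than this—terrain, fuel, winds, runway condition—but the principle is the same: careful programming, not machine learning.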

If you follow our news columns, you will have read that in August, in a DARPA project called AlphaDogfight, an AI algorithm designed for the purpose readily beat a skilled F-16 pilot in five out of five dogfights. That was bad news/good news. The bad news is that many of us surely thought such a thing couldn’t happen for 10 years or more, if it could happen at all. The good news is that as the fights progressed, the human learned enough to survive longer. But he still died.

What happens when the machine learns even minimally? Or at 10 times the human rate? Whether you believe this is even possible may define where you reside on the only-humans-can-fly spectrum. But it’s a simple leap to imagine a swarm of cheap AI aircraft overwhelming an adversary.
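For the record, the AlphaDogfight agents were trained with reinforcement learning: fly a simulated engagement, score it, nudge the policy, repeat, over enormous numbers of runs. The toy below (my own invented maneuver names and win rates, nothing to do with the real system) shows that core loop in miniature—an epsilon-greedy learner that discovers the best-paying option purely from repetition, at a volume no human career could match.

```python
# Toy sketch of learning-by-repetition -- invented numbers, not the DARPA agent.
import random

MANEUVERS = ["lag pursuit", "lead turn", "vertical", "extend"]
# Hidden "true" win rates the learner has to discover through trial and error.
TRUE_WIN_RATE = {"lag pursuit": 0.35, "lead turn": 0.55, "vertical": 0.45, "extend": 0.25}

def simulate(maneuver: str) -> int:
    """Stand-in for one simulated engagement: 1 = win, 0 = loss."""
    return 1 if random.random() < TRUE_WIN_RATE[maneuver] else 0

value = {m: 0.0 for m in MANEUVERS}   # learned estimate of each maneuver's payoff
counts = {m: 0 for m in MANEUVERS}

for engagement in range(100_000):     # far more reps than any human logs in a career
    if random.random() < 0.1:         # explore: occasionally try something at random
        m = random.choice(MANEUVERS)
    else:                             # exploit: otherwise fly the current best estimate
        m = max(value, key=value.get)
    reward = simulate(m)
    counts[m] += 1
    value[m] += (reward - value[m]) / counts[m]   # running-average update

print(max(value, key=value.get))      # almost always "lead turn"
```

Scale the same idea up to neural networks and a flight simulator running around the clock, and five-for-five starts to look less like magic and more like arithmetic.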

Place this in the context of airline transportation and the challenge looks less daunting, at least to me. The flight demands are more regimented, considerably less dynamic and tilted toward reliability and toward planning and action to avoid accidents. Consider AI dealing with recent accidents, like the 737 MAX crashes. Could an AI pilot have handled them better? Tantalizing question. All we know is that the humans could not, although some insist that other humans could have. And that’s the problem with humans; machines are more consistent. It’s easy to dig through accident reports and find plenty where humans caused the crash of a perfectly good machine.

Even if machine-flown airplanes existed—and they’re in the skunkworks stage right now—would any paying passengers get on them? Another tantalizing question. Now, the sushi. It used to be considered a rare delicacy in Japan prepared by chefs with years of training. Even in a conforming and traditional society like Japan, the robots—which first appeared in the late 1970s—were embraced because they made an expensive product affordable. Is it an artifice to say, yeah, but airplanes are different? Tantalizing question three.

In his vlog, Jerry mentions the so-called Turing halting problem, which says there is no general way to determine, for an arbitrary program and its inputs, whether that program will eventually halt or run forever. But if you're enthralled with the idea of the human thinking paradigm, a better Turing reference is the famous test in which a judge asks a computer and a human a series of questions. If the answers are indistinguishable, the machine is judged intelligent. On the other hand, whether the computer is "thinking" the way a human thinks is utterly irrelevant, as long as it does intelligent things. Human pilots think themselves into all kinds of disasters. I think I have enough fuel. I think I can clear those trees. I wasn't even thinking the airspeed was too slow.
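If you want the halting argument in a nutshell, here’s a hypothetical sketch of the standard textbook reasoning (nothing specific to Jerry’s video): assume a perfect halts() oracle exists, then build a program that does the opposite of whatever the oracle predicts about it.

```python
# Turing's halting argument as a thought experiment -- the oracle is assumed,
# not implemented, which is exactly the point.

def halts(program, data) -> bool:
    """Hypothetical oracle: True if program(data) eventually stops."""
    raise NotImplementedError("Turing showed no such general-purpose oracle can exist")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:          # oracle says it halts, so loop forever
            pass
    return "halted"          # oracle says it loops, so stop immediately

# Ask whether contrarian(contrarian) halts and the oracle is wrong either way.
# That contradiction is the halting problem: a limit on what programs can
# decide about other programs in general, not a verdict on what they can fly.
```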

By the way, IBM’s Watson failed the Turing test, but Google’s Assistant with Duplex passed, or came very close. It’s a bit of a parlor trick for now. But 10 years from now? As part of his research, Barrat polled 200 AI experts on predictions for the future, asking them to estimate when AGI would be achieved. Among the four choices, 42 percent said by 2030, 25 percent guessed 2050, 20 percent said by 2100 and 2 percent said never. Elon Musk, pointing to improvements in the autopilot in Tesla cars, thinks progress is already in the exponential stage and AGI will come sooner.

There are two antagonistic forms of hubris operating here. One is driven by what scientists call normalcy bias: the belief that something won’t or can’t happen simply because it hasn’t happened yet, and that the old rules—like the halting problem—will always apply. We lack the imagination to accept that a machine can think because we believe only humans can do that. Only humans can recognize and respond to a novel situation beyond a computer programmer’s limited ability to account for everything. Only humans can triage closely spaced decision options and pick the right one.

The second is the conceit that humans can reverse-engineer their own brains—a research thrust that is in fact underway—to produce a machine that can learn, analyze and write its own programming on the fly, irrespective of whether it thinks like a human. I don’t know if this will happen in five years or only within my lifetime. But I’m convinced it will happen because, as George Dyson said, nature is with the machines.

In the headline, I said whether a computer can replace a pilot is a trivial question. It’s trivial not because of an assumption that of course computers will be able to do this, but because the higher-order question is whether you would want them to. That has nothing to do with passenger fears of a cockpit without humans and everything to do with the guardrails and ethics around how AGI is developed, deployed and used. AGI may be neutrally moral, human-friendly, indifferent or, just as likely, something we can't even understand. In the hands of malign actors—including the machines themselves—AGI could be the horror Musk imagines. Next to that, clearing a robot for takeoff is child’s play.