Can A Computer Think Like A Pilot? It’s A Trivial Question


In the game of life and evolution there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

–George Dyson, historian

The other night I was watching an amusing program called James May: Our Man in Japan. It’s a quirky cultural travelogue that bores beneath the surface of the usual tourist fare. In one of the episodes, it was revealed that 80 percent of the sushi sold in Japan is made by robots. Did you know that? I didn’t. I’ll come back to that in a bit.

In the video clip below, Jerry Gregoire, founder of Redbird, delivers a clever and well-thought-out essay on why computers will never replace pilots. Not soon, not ever. On the premise that never is a long time, I’m going to take the opposite view: a lot of occupations will eventually be replaced by automation, including pilots. But first, take a few minutes to watch the video. I’ll wait.

I’ve been reading about advances in artificial intelligence in a couple of good books: “Our Final Invention: Artificial Intelligence and the End of the Human Era” by James Barrat and “Rise of the Machines: A Cybernetic History” by Thomas Rid. The two authors come at AI from different directions, but taken together, they paint a picture of a field much further along than most of us imagine. And Barrat’s book was written five years ago.

“AI” is typically thought of as a monolith—a single term to describe machines that think and learn. In fact, there are levels of AI. So-called ANI, or artificial narrow intelligence, performs narrow functions or tasks more efficiently than humans can—making sushi, for instance, or Siri flubbing every third question, or maybe even the autonomous subway system that runs on strict programming rules. Artificial general intelligence, or AGI, implies human-level intelligence, or nearly so, while ASI—artificial super intelligence—implies machine learning capable of exponential intelligence growth. AGI/ASI is what Elon Musk is terrified of. Some in the AI field believe his fears are not misplaced.

But here’s where I’ll get into trouble by pissing off a lot of people. In the overall arc of human challenges, flying an airplane doesn’t take exceptional intelligence. We’ve romanticized the skill required because of hidebound training methods and the necessity of grinding through regulatory hurdles, but the hand-eye part is easily handled by machines and so is some of the required analysis.

Recall that with a trivial amount of processing power by AI standards, Garmin’s Autoland can find a suitable airport, set up an approach and land the airplane. And it was five years ago that Diamond first flew a fully autonomous flight in a DA42. (No serious AI there; just careful programming.) And by the way, little of this is relevant to the human joy of slipping the surly bonds. That’s a different blog.
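
To make the “careful programming” point concrete, here’s a minimal, rule-based sketch of how a system in the Autoland mold might rank diversion airports. It’s only an illustration of deterministic logic under my own assumptions; the airport fields, thresholds and sort order are invented, not Garmin’s actual implementation.

```python
from dataclasses import dataclass

# Toy, rule-based "pick a diversion airport" sketch. Every field, threshold and
# preference here is an assumption for illustration -- not Garmin's actual logic.

@dataclass
class Airport:
    ident: str
    distance_nm: float           # distance from present position
    runway_length_ft: int
    has_instrument_approach: bool
    ceiling_ft: int              # reported ceiling at the field

def pick_diversion(airports, max_range_nm, min_runway_ft=3000):
    """Filter by hard rules, then sort by simple preferences."""
    candidates = [
        a for a in airports
        if a.distance_nm <= max_range_nm
        and a.runway_length_ft >= min_runway_ft
        and a.has_instrument_approach
    ]
    # Prefer closer fields, then better weather, then longer runways.
    candidates.sort(key=lambda a: (a.distance_nm, -a.ceiling_ft, -a.runway_length_ft))
    return candidates[0] if candidates else None

if __name__ == "__main__":
    options = [
        Airport("KAAA", 18.0, 5000, True, 1200),
        Airport("KBBB", 9.5, 2600, True, 3000),   # runway too short, filtered out
        Airport("KCCC", 14.2, 6500, True, 800),
    ]
    print(pick_diversion(options, max_range_nm=25))
```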

If you follow our news columns, you will have read that in August, in a DARPA project called AlphaDogfight, an AI algorithm designed for the purpose readily beat a skilled F-16 pilot in five out of five dogfights. That was bad news/good news. The bad news is that many of us surely thought such a thing couldn’t happen for 10 years or more, if it could happen at all. The good news is that as the fights progressed, the human learned enough to survive longer. But he still died.

What happens when the machine learns minimally? Or at 10 times the human rate? Whether you believe this is even possible may define where you reside on the only-humans-can-fly spectrum. But it’s a simple leap to imagine a swarm of cheap AI aircraft overwhelming an adversary.

Place this in the context of airline transportation and the challenge looks less daunting, at least to me. The flight demands are more regimented, considerably less dynamic and tilted toward reliability and planning/actions to avoid accidents. Consider AI dealing with recent accidents, like the 737 MAX crashes. Could an AI pilot have handled them better? Tantalizing question. All we know is that the humans involved could not, although some insist that other humans could have. And that’s the problem with humans; machines are more consistent. It’s easy to dig through accidents and find plenty where humans caused the crash of a perfectly good machine.

Even if machine-flown airplanes existed—and they’re in the skunkworks stage right now—would any paying passengers get on them? Another tantalizing question. Now, the sushi. It used to be considered a rare delicacy in Japan prepared by chefs with years of training. Even in a conforming and traditional society like Japan, the robots—which first appeared in the late 1970s—were embraced because they made an expensive product affordable. Is it an artifice to say, yeah, but airplanes are different? Tantalizing question three.

In his VLOG, Jerry mentions the so-called Turing halt problem, which posits that for a given series of inputs, it can’t be determined whether a program will halt or run forever. But if you’re enthralled with the idea of the human thinking paradigm, a better Turing reference is the famous test in which a judge asks a computer and a human a series of questions. If the answers are indistinguishable, the machine is intelligent. On the other hand, whether the computer is “thinking” the way a human thinks is utterly irrelevant, as long as it does intelligent things. Human pilots think themselves into all kinds of disasters. I think I have enough fuel. I think I can clear those trees. I wasn’t even thinking the airspeed was too slow.
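
For anyone who wants the halting problem in concrete terms, here is the classic self-reference argument in sketch form. The halts() oracle below is purely hypothetical; the whole point of the construction is that no such general procedure can exist.

```python
# Sketch of Turing's halting argument. Assume, for contradiction, that a
# perfect oracle exists:
def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical: returns True iff the program halts on the given input."""
    raise NotImplementedError("no general procedure like this can exist")

# Build a "spoiler" that feeds itself to the oracle and then does the opposite
# of whatever the oracle predicts:
def spoiler(program_source: str) -> None:
    if halts(program_source, program_source):
        while True:          # oracle says "halts" -- so loop forever
            pass
    return                   # oracle says "loops forever" -- so halt at once

# Asking whether spoiler halts when fed its own source has no consistent
# answer, which is the contradiction that rules the oracle out.
```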

By the way, IBM’s Watson failed the Turing test, but Google’s Assistant with Duplex passed, or came very close. It’s a bit of a parlor trick for now. But 10 years from now? As part of his research, Barrat polled about 200 AI experts on predictions for the future, asking them to estimate when AGI would be achieved: 42 percent said by 2030, 25 percent guessed 2050 and 20 percent said by 2100. Two percent said never. Elon Musk says that based on improvements in autopilots for Tesla cars, he thinks progress is already in the exponential stage and AGI will come sooner.

There are antagonistic levels of hubris operating here. One is driven by what scientists call normalcy bias—the belief that something won’t or can’t happen simply because it hasn’t happened yet, and that the old rules—like the Turing halting problem—will always apply. We lack the imagination to accept that a machine can think because we believe only humans can do that. Only humans can recognize and respond to a novel situation beyond a computer programmer’s limited ability to account for everything. Only humans can triage closely spaced decision options and pick the right one.

The second is the conceit that humans can reverse-engineer their own brains—a research thrust that is in fact underway—to produce a machine that can learn, analyze and write its own programming on the fly, irrespective of whether it thinks like a human. I don’t know if this will happen in my lifetime or in five years. But I’m convinced it will happen because, as George Dyson said, nature is with the machines.

In the headline, I said whether a computer can replace a pilot or not is a trivial question. It’s trivial not because of an assumption that of course computers will be able to do this, but because the higher-order question is: would you want them to? That has nothing to do with passenger fears of a cockpit without humans and everything to do with guardrails and ethics on how AGI is developed and deployed and how it’s used. AGI may be neutrally moral, human-friendly, indifferent or, just as likely, something we can’t even understand. In the hands of malign actors—including the machines themselves—AGI could be the horror Musk imagines. Next to that, clearing a robot for takeoff is child’s play.


41 COMMENTS

  1. So what happens when a two-engine whatever full of revenue PAX hits a flock of birds and both engines flame out. Are these things gonna be programmed to land in the Hudson ?? And what about the moral or ethical decisions that’d be required? Flying under normal rules and in normal circumstances is one thing; flying under the duress of mechanical or weather related problems is entirely different.

    For me — personally — as soon as there’s no one up front, that’s when I stop flying commercially. It’s one thing to take the pilotless trains to the A and B terminals at MCO. It’s entirely another to mail myself in an aluminum tube sans pilot.

    • That’s why AGI is required, not ANI. There are factors unrelated to flying that come into play in these ‘off-nominal’ situations.

    • The moral and ethical decisions are seriously lacking in most AI conversations.

      Take the hypothetical car in the video, with the passenger sitting in the back.

      If a police officer were to pull the car over for speeding, who would get the ticket? The passenger? The programmer? The manufacturer? Would the cop even write the ticket?

      But, you say, the car would never speed so it would never get pulled over for speeding. To that, well, I never speed, so why do I get pulled over for speeding and the AI gets the pass?

      Further, imagine that the AI vehicle is driving through a neighborhood and a child darts in front of the car. The AI quickly calculates that the brakes will not stop the car in time to miss the child. To miss the child, it must turn and go to the sidewalk. On the sidewalk, an elderly couple is taking a walk. If the AI chooses the sidewalk, it will not miss the elderly couple. Which person(s) will the AI choose to kill? Or if you like, which will the AI allow to survive? Does the AI make that decision or the programmer?

    • I stopped flying commercially after 9/11. It was already tedious and frustrating, with boorish and inconsiderate passengers too lazy to deal with bag check, resulting in inordinate time spent blocking aisles while they tried to stuff all their worldly belongings in the overhead, and reversing that after landing. Airplanes were being packed to the gills with airlines trying to maximize revenues, and courteous stewardesses who used to serve full meals and drinks in real glasses were replaced by surly “flight attendants”, male AND female, who served only nuts and crackers, and drinks in paper cups. Those “flight attendants” didn’t give a rip about customer satisfaction – they were there for the paycheck.

      Then 9/11 ushered in the TSA with all the interminable lines, requirements to partially undress, and the coveted and much sought-after groping we all grew to expect and hate. Commercial flying, anyone? Not for this guy. Buh-bye, airlines. If I can’t drive myself there, I don’t go.

    • That’s a false question. It is based on a specific situation. What matters in aviation is overall safety. Can AI pilots always save the day in some specific unusual situation? No (and neither can humans in every case) but that’s irrelevant in any case. If an AI equipped commercial air transport system has fewer crashes overall than a natural intelligence equipped transportation system, then AI wins.

      AI doesn’t have strokes (the likely cause of the recent Santee crash), it doesn’t get tired, it doesn’t get offended by ATC, it doesn’t have “getthereitis”, it doesn’t confuse one instrument reading for another, it doesn’t accidentally swap two numbers when programming the navigation system, it doesn’t get disoriented in IMC… I could go on and on. Humans suck at being pilots.

  2. I would be surprised if AI never evolves to the point of an autonomous airliner being a technical possibility. The real question is if they will ever be ethically or legally possible, never mind whether anyone would actually want to pay to fly on one (I certainly would not). I’m sure they’d be perfectly fine as long as everything is operating normally, but what happens when everything is not normal? And what happens if one or more passengers die as a result? Who is legally and ethically responsible for those deaths? The airline, the manufacturer, the AI programmer? And that’s not even getting into the philosophical question of what’s the point of living if everything is automated (in the year 2525).

    • Some of these questions have been answered by elevators.

      Automatic elevators existed for many years, but most people didn’t trust them. Only elevators with a live, human operator on board were considered safe.

      Until the elevator operator unions went on strike, the biggest occurring in NYC in 1945. The inconvenience of so many people climbing so many stairs in so many skyscrapers spurred the adoption and deployment of automatic (aka ‘driverless’) elevators over the next decade.

      That doesn’t mean automatic elevators are perfect – they still malfunction and people die. But the main public perception nowadays is that they’re “safe” and convenient.

      “Driverless” cars will likely be viewed the same way one day, though it will probably take a human generation after the technology matures.

      As for “pilotless” aircraft – it will probably be a hybrid system of fully automated flight with a human “pilot/monitor” to keep tabs on things (and feed the dog…). Similar to several modern partially- and fully-automated passenger train systems that still have an engineer on board ‘just in case’.

      Granted, the current 100% automated trains are little more than ‘horizontal’ elevators and are far removed in complexity from piloting an airplane. But they help answer some of the legal and ethical questions about automated transport in general.

  3. Long-suffering readers of this space are familiar with one of many YARSisms: “The very best implementation of a flawed concept is, itself, fatally flawed.” In my experience, most flawed concepts have their nexus in flawed premises. In the case of the linked video, the first flawed premise is that flying an airplane requires thought. The syllogism that this premise spawns goes something like this:
    • Flying an airplane requires thought
    • Machines cannot think, therefore
    • Machines cannot fly airplanes
    And yet they can. In fact, they do. Garmin’s Autoland is just one example.
    In his absolutely excellent essay, Paul says: “In the headline, I said whether a computer can replace a pilot or not is a trivial question.” Actually, Paul’s headline asked: “Can a computer think like a pilot?” Am I picking nits? Is this a distinction without a difference? Not at all. In fact, my fundamental argument consistently has been that machines can replace pilots, precisely because flying an airplane does not require thought.
    The video’s author runs full tilt with the flawed premise that flying requires thought, then attempts to convince the viewer that “machine thinking” is – and forever will be – incapable of emulating the infinite capabilities of the magnificent human mind. Paul does a great job summarizing the author’s sentiments, when he says: “We lack the imagination to accept that a machine can think because we believe only humans can do that. Only humans can recognize and respond to a novel situation beyond a computer programmer’s limited ability to account for everything. Only humans can triage closely spaced decision options and pick the right one.”
    Let me pose this question: If flying an airplane can be accomplished without any thought, how germane is any computer’s lack of ability to think? Garmin’s example answers: “not at all.”
    The author of the video asserts – without foundation – that Artificial Intelligence would be the way that engineers like me would attempt to do the impossible. Another flawed concept, spawned by another flawed premise.
    Machine Learning probably has a place onboard certain military aircraft (warning: Skynet). But employing AI/ML aboard civil aircraft is a conceptually flawed idea (refer to earlier-cited YARSism). Why?
    Because machine learning by definition includes the autonomous altering of instruction sets, as a consequence of the individual machine’s real-world experiences. Think back to your well-worn copy of The Fundamentals of Instruction: “What is Learning? Learning is a change in behavior that occurs as a result of experience.” And there’s the fatal flaw: the behavior of the machine will change, as a result of what it “learns.” Bad enough if there’s one self-taught machine in the sky. Chaos if there are thousands.
    Indulge me in another YARSism: “Predictability is the foundation of anticipation.” Anticipation of others’ behavior is what allows us to navigate what otherwise would be a world of chaos.
    Thus, the very concept of using AI in an autonomous aircraft control system is flawed – fatally. Consequently, there’s no point in arguing about how good some particular implementation of AI is.
    A good old Expert System design is both adequate and desirable. You might want to spend three or four minutes reading this 1,200-word piece as background.
    Finally (hold your applause), Larry Stencil asked: “So what happens when a two-engine whatever full of revenue PAX hits a flock of birds and both engines flame out. Are these things gonna be programmed to land in the Hudson ??”
    Politely, these things will react to a total lack of thrust by managing a glide to the most-benign available landing spot. In Sully’s case, that was the only open space within gliding distance – the Hudson River. But consider this: Without casting any aspersions upon Sully’s abilities, he had the good fortune of daylight VMC conditions. A machine would be able to do the deed at night, in zero-zero weather – because it doesn’t have the human limitation of needing to be able to see.

    • The link wouldn’t post. The piece I recommended can be found at airfactsjournal.com/2020/07/autonomous-control-systems-what-does-it-really-mean-for-aviation

    • YARS: For once, I agree with your fundamental premise. Flying does not require thinking. In fact, when humans inject too much thought into the process of flying, accidents tend to happen.

      However, I have a slightly different way of thinking about the implementation of AI than you:

      “And there’s the fatal flaw: the behavior of the machine will change, as a result of what it “learns.” Bad enough if there’s one self-taught machine in the sky. Chaos if there are thousands.”

      What if the AI implementation was a collective learning process? What if all aircraft of the same type learned from each others’ experience via centralized AI? What if there were a way for aircraft of different types to learn from each other? This way, the aircraft would (in theory) act fairly predictably in most situations, especially situations where it’s critical that they act predictably (think in congested airspace). Rather than a busy uncontrolled GA airport on the first sunny Saturday of the year, the whole sky resembles a military formation demo team: Absolute precision, skills honed together, moving as one, knowing exactly what the other is about to do.

      Now, there’s still a need to have individualized AI to allow the aircraft to learn how to operate in a degraded state and get on the ground safely, but then the greater type hive mind could learn from this experience too, in being able to more easily manage a similar degraded situation later on. I mean, I’m theorizing here, bigtime, and I’m far from an AI expert, but I like to try to look at concepts without the blinders of forcing them into our current paradigm.
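
A minimal sketch of the “collective learning” idea floated in the comment above, assuming each aircraft’s experience is reduced to a model update that a central fleet server averages and redistributes (roughly, federated averaging). The names, sizes and update rule are all illustrative, not any real system.

```python
import numpy as np

# Toy federated-averaging sketch of the "fleet hive mind" idea above. Each
# "aircraft" flies the same shared model; after each reporting cycle it proposes
# an update from its own experience, the central server averages the proposals,
# and one common model goes back out -- so behavior stays uniform across the fleet.

rng = np.random.default_rng(1)
fleet_model = np.zeros(8)                      # the shared model every aircraft carries

def local_update(model, flight_experience):
    """Stand-in for on-aircraft learning: a small step toward what was observed."""
    return model + 0.1 * (flight_experience - model)

for cycle in range(3):
    proposals = []
    for aircraft in range(5):
        experience = rng.normal(size=8)        # whatever this aircraft encountered
        proposals.append(local_update(fleet_model, experience))
    fleet_model = np.mean(proposals, axis=0)   # server averages and redistributes
    print(f"cycle {cycle}: fleet model norm = {np.linalg.norm(fleet_model):.3f}")
```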

      “Without casting any aspersions upon Sully’s abilities, he had the good fortune of daylight VMC conditions. A machine would be able to do the deed at night, in zero-zero weather – because it doesn’t have the human limitation of needing to be able to see.” – Nail on the head. We hold up Sully and Skiles as the gold standard that is impossible to match in AI. But what did they actually accomplish relative to other crews? They managed the situation exactly as they needed to. Impeccable decision making, impeccable communication, impeccable execution. They’re only on a pedestal (though they do deserve to be on the pedestal) because they did all the right things when many (most?) human pilots would have screwed it up somehow. I’m of the belief that AI would be less prone to making those kinds of errors, and probably would *improve* the chances of a Miracle on the Hudson rather than reduce them. The AI wouldn’t fall into traps of overthinking. It wouldn’t be thinking about its kids. It wouldn’t have its life flash before its eyes. It wouldn’t be degraded by having lost a few hours sleep the night before. It wouldn’t hesitate because the seat of its pants disagreed with the instruments. It wouldn’t miss an instrument reading. It wouldn’t misunderstand some communication with its copilot. And the outcome would be exactly the same in night IMC as it would be in day VMC.

      • Alex:

        What you’re suggesting is this: using AI to come up with better software (in controlled, ground-bound circumstances) could be useful. Yes, in the software development environment, it could. But releasing any AI into the wild would be a fatal mistake, because each instance of the software quickly would become unique, thus rendering its behavior unpredictable, and thus unreliable. In order for this stuff to work reliably, EVERY airborne instance of the software must be identical.

        You opined: “…there’s still a need to have individualized AI to allow the aircraft to learn how to operate in a degraded state and get on the ground safely.” Absolutely NOT. Operating safely under ALL conditions is a requirement of the software, not some uncompleted task whose solution is to be “discovered” by each instance, ad hoc.

        And what do you mean, “For once…” 😉

        • “Operating safely under ALL conditions is a requirement of the software, not some uncompleted task whose solution is to be “discovered” by each instance, ad hoc.”

          To your point YARS, who even in their worst nightmares could have conceived of QF32’s black swan event let alone created and then taught this highly unlikely randomness to an intelligent automated system? Isn’t that ultimately a human limitation imposed upon AI?

          Captain Richard de Crespigny made conscious choices prior to and during that flight which made the difference between success and failure. Some of those choices would have been learnable by AI, some maybe not. His lifestyle included studying aircraft systems several hours per day – all teachable and learnable for AI. Prior to the flight he let every pilot on board know they were part of the flight team, whether at rest or on active cockpit duty, and indeed enlisted them all during the event – a moot point for AI. Taking inventory of systems after the malfunction, de Crespigny determined there was no way of knowing, from the information the aircraft could impart, how much of each system he did not have. But he did ascertain that he could divine (not necessarily determine) what he still had at his disposal – enough to complete the flight successfully by choosing exactly the right configuration and speeds and executing them accurately. If the aircraft has no built-in means of imparting 100% system status to any crew, human or not, even under normal conditions, and no checklist to cover the event, could AI have divined what it did and did not have remaining at its disposal under abnormal conditions?

          Finally, knowing that he had only about a 5 knot approach speed tolerance between too fast for runway availability and too slow aerodynamically, he decided he could more accurately hand fly the approach than could his automation.

          Whereas AI could learn to choose to land in the Hudson with all engines out and execute it well, is AI and will it ever be capable of the kind of judgement and frankly accuracy required of Captain de Crespigny? At some point is an analog arm and wrist on a control stick and remaining power levers more capable of accuracy driven by a human brain than a binary robotic arm driven by binary robotic intelligence?

          • John:

            You seem to have missed my primary assertion: AI is the WRONG approach to autonomous aircraft control systems.

            You might want to read the article that I recommended over at airfactsjournal. It includes some mention of the ad hoc nature of impediments and mitigations, an understanding of which is required, in order to wrap your mind around the essence of coding autonomous systems.

            Enjoy?

  4. No, YARS, I was actually agreeing with your primary assertion when I wrote “To your point.” My bad for apparently not communicating clearly enough.

  5. Bravo YARS. The only point of disagreement I have with Paul’s typically excellent essay is his description of the Redbird video as “clever and well thought out”. I found it neither.

  6. Hahaha. Spend a few hours flying a pax jet in 121 and you’ll realize how far off full automation is from reality. Not. Even. Close.

    From bad GS captures (or any nav intercept issues, for that matter) to traffic avoidance, runways, taxiing at ORD or IAH. Runway contamination. Go-arounds. Random AP disconnects, random upsets, any random “why the hell did it do that?” moments. This stuff happens every single flight. The stick and rudder part, sure. The decision-making part? Nah, no way. Need eyeballs, and a human brain for all that.

    Prior to my airline job I was a Mech Eng and worked with automation all day. I too once thought we ought to be on the precipice of pilotless pax airplanes. Once I started flying for a living, I was shocked at how much human intervention is needed on every flight to keep the shiny side up. We have a lonnnnggg way to go, if ever.

    • Please bear in mind: Autonomous flight control systems have very little in common with autopilot/FMS systems. And machine decision-making is a very mature technology.

  7. With respect, you asked the wrong question.
    What you should have asked is can machines replace pilots for the middle of the night freight runs which are harmful for pilots, and disturb people on the ground. (Working nights increases your chances of dying from cancer by five times.)
    And the answer is yes.
    Of course the pilots’ unions are jumping up and down and swearing that only someone with the equivalent of a master’s degree can fly the things — they still have not absorbed the implications of the night-work findings for their members.

  8. After Sully’s bird strike, I wonder if AI would have IMMEDIATELY turned back to LGA (which could have been done safely).

  9. If nature were with the machines,
    there would be no need for F-16s,
    let alone for DARPA and Duplex.
    It would all happen automatically,
    or as the inevitable by-product of
    digital evolution. The scribes of the
    hitech tribe worship their own idols.
    Conversely, the myth of the machine
    makes “ethical guidelines” irrelevant.
    For the actors are programmed to be
    malign, and have long since forfeited
    their humanity, if they ever had any
    to lose. That was Turing’s point—in
    the wake of Hitler, he preferred the
    company of computers, and figured
    our chances of survival were better
    if we traded bigotry and bullets for
    reason and reflection. If machines
    inherit the earth, it will be because
    we failed as a species, not because
    we made progress as savants.

  10. There’s a sardonic saying among AI academics and researchers: “artificial intelligence is always ten years away.” The more they tackle the problem, the more they learn about how difficult the problem is. In some ways it reminds me of this quote by research engineer and scientist Emerson W. Pugh: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

    That being said – I can imagine, at some point, computers and programs will get sophisticated enough to be considered artificial intelligence. But I think that point is farther away than most people realize. It’s easy to look back and see how far we’ve come. But it’s harder to look forward and see how much further we have to go.

  11. People who worry about rogue AI should keep in mind the basis behind the method actor’s “what’s my motivation?” question. As in “why would my AI-driven car WANT to kill me?”

    I envision the eventual development of general-purpose core “AGI engines”, hardware/software of scaled capacity operating under a shell application, an application which in turn constrains & directs the learning & optimizing efforts of the AGI core to serving the purposes of the application. A long way from a free-form ASI Skynet, which apparently had carte blanche to write its own application shell.
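
Here is a toy sketch of the “AGI core under a constraining shell application” idea described above: the core proposes whatever it likes, and the shell only ever forwards actions inside the application’s envelope. The class names and limits are invented for illustration, not any real autonomy architecture.

```python
import random
from dataclasses import dataclass

# Toy sketch of a learning "core" constrained by an application "shell".
# Everything here (names, limits, the random core) is made up to illustrate
# the pattern only.

@dataclass
class Action:
    pitch_deg: float
    bank_deg: float

class LearningCore:
    """Stand-in for the general-purpose engine -- here it just guesses wildly."""
    def propose(self, observation) -> Action:
        return Action(pitch_deg=random.uniform(-90, 90),
                      bank_deg=random.uniform(-180, 180))

class ApplicationShell:
    """Constrains and directs the core toward the application's purpose."""
    PITCH_LIMIT = 15.0
    BANK_LIMIT = 30.0

    def __init__(self, core: LearningCore):
        self.core = core

    def step(self, observation) -> Action:
        proposal = self.core.propose(observation)
        # Clamp whatever the core wants into the permitted envelope.
        return Action(
            pitch_deg=max(-self.PITCH_LIMIT, min(self.PITCH_LIMIT, proposal.pitch_deg)),
            bank_deg=max(-self.BANK_LIMIT, min(self.BANK_LIMIT, proposal.bank_deg)),
        )

if __name__ == "__main__":
    shell = ApplicationShell(LearningCore())
    print(shell.step(observation=None))
```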

    In any case, more concerning than rogue AI in my estimation is the problem of dealing with the ever-increasing percentage of humanity which really has no “application”, nothing to offer society and vice-versa. Existential angst, anyone?

    • “In any case, more concerning than rogue AI in my estimation is the problem of dealing with the ever-increasing percentage of humanity which really has no “application”, nothing to offer society and vice-versa.”

      I hesitate to ask you for clarification because it sounds like you’ve joined ranks with the most dangerous despots in history and present day North Korea. Our society is fundamentally built on the premise that we’re all created equal which in my interpretation does not square with your estimation. Hopefully I’ve missed your point and I’m wrong in my interpretation of your estimation.

      • I interpreted the statement as meaning once AI takes over more and more jobs, what will be left for people without the skills to do something else? In other words, will AI create a world where there are no longer enough jobs for humans?

        I remain somewhat optimistic that new jobs will replace old. For example, in 1900 something like half the population was involved in agriculture. Today that figure is about one percent. We don’t have 50% unemployment because as new technologies replaced much manual labor, new jobs arose.

        Though I say “somewhat” optimistic because while previous technologies changed the type of labor the worker performed, AI is aimed at the worker itself. So the future will be… interesting… to see what new jobs come along that AI can’t do.

  12. When it becomes statistically safer to keep the pilot from interfering with the machine rather than to allow the pilot to take control from the machine, it will be safer to fly without a pilot. Such machines may well make fatal mistakes like the Redbird video warns, but if humans make more, the machine is still the right choice. We are progressing toward that point, with more and more work being done by cockpit machines or devices. Before pilots are eliminated entirely, there will be a class of machines that will warn the pilot of his errors in real time in order to prevent disasters or even risky or sloppy flying, while leaving ultimate and final control with the pilot. At some point after that, allowing the pilot to remain in ultimate control will be regarded as the riskier if not actually reckless option. We will surely get to that point before the turn of the century, if not much sooner.

  13. A lot of machine “learning” is based on neural networks, but the results of this learning are – at least currently – impossible to audit or unravel. There are interesting experiments showing that minor alterations to, e.g., traffic signs can render them unreadable to current-technology traffic-sign recognition systems, or even cause them to “see” a different sign. Rather innocuous-looking stickers can foil camera-based perception systems. Humans and neural networks rely on entirely different means to perceive the world, and we certainly don’t want to end up at a point where a strategically placed dot pattern on a billboard under a final approach will cause an automated airliner to go around or worse.

    The “if it acts like a dog” approach to testing automated piloting systems by submitting them to the same tests human pilots have to pass to get a license is a fallacy. I have no doubt that autopilots are able to fly better than pilots, as they lack the slow biological processing between perception and action. But pilots are required to demonstrate only a small percentage of their skills during a check ride, on the assumption that they will be able to use their systems knowledge to extrapolate in case something requiring extra skills happens to them. There is no basis for assuming that a machine learning system will be able to deal with a situation it has never encountered before, and much of our current certification system is not based on multiple failures happening (although they do, as demonstrated by the Qantas A380 incident).

    As impressive as the Garmin Autoland is, it is designed as a measure of last resort to avoid an even worse outcome – crashing without a pilot – and AFAIK is based on the assumption that there is no other failure beyond the pilot not being able to perform his duties. Should the pilot pass out because, e.g., an engine failure excites him to the point of a heart attack, the Garmin system won’t be as helpful as another pilot. The much-cited dogfight scenario was based on the computer opponent having total situational awareness, something not usually afforded to human pilots, who have to fly airplanes the actual status of many parts of which they don’t know, for lack of sensors, external cameras, etc.
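
As a concrete illustration of the adversarial-perturbation point in the comment above, here is a toy FGSM-style attack on a linear logistic “classifier.” The weights and the input are made up; the takeaway is only that many tiny, structured per-pixel changes can flip a confident prediction.

```python
import numpy as np

# Toy illustration of the adversarial-perturbation point above: a linear
# logistic "classifier" with made-up weights, attacked with an FGSM-style step.
# Many small per-pixel nudges, each insignificant on its own, add up and flip the output.

rng = np.random.default_rng(0)
d = 10_000                           # pretend 100x100 "pixels"
w = rng.normal(size=d)               # hypothetical trained weights

def p_stop_sign(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x = w / np.linalg.norm(w) * 4.0      # an input the model scores confidently (logit ~ 4)
print("clean prediction:    ", round(float(p_stop_sign(x)), 4))

eps = 0.002                          # per-pixel perturbation, small next to pixel values
x_adv = x - eps * np.sign(w)         # FGSM: step each pixel against the gradient sign
print("perturbed prediction:", round(float(p_stop_sign(x_adv)), 4))
```

The per-pixel change here is tiny compared to the pixel values; it is the accumulation across thousands of inputs that does the damage.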

  14. Paul, you’ve aroused the philosopher/technologist community. Your article and the comments about it are most interesting. As for me, a former Gulfstreamer rooted in dusty old Honeywells, it’s all moving very fast and well beyond my scope of reference. These days I am content to hand-prop my trusty steed to go up and look down on the thoroughly Google-surveyed earth and enjoy God’s creation as it has always been, despite man’s constant effort to somehow make it “better”.

    • Ditto Alex right down to being “a former Gulfstreamer rooted in dusty old Honeywells, it’s all moving very fast and well beyond my scope of reference” and who “enjoys God’s creation as it has always been” from a small airplane.

      Those dusty old Honeywells did however represent the cutting edge at one time, and I well remember the very day when I had to make the conscious decision, and a conscious decision it was, to move on from the comfort of the steam gauge and the VOR to a new architecture which required programming at least a basic portion of the flight prior to engine start. I’m audacious enough to believe I could still be up to the task of deciding and even looking forward to moving on to whatever comes next.

      • My son captains one of those new fully-Garmin Citations at NetJets. What he’s told me about that has left me in the dust. ASCB is alive and well but now it’s an information superhighway compared to what we learned back in the day.

  15. Fantastic article. The largest artificial neural networks in existence today (late 2020) have on the order of 100 billion parameters. They run on hot racks of GPUs weighing thousands of pounds and consuming 20 to 100 kW of electric power. Our brains contain roughly 86 billion neurons connected by about 100 trillion synapses. In electronics, increases in component count proceed on an exponential curve. Still, getting all that compute power into a low-power, lightweight, flight-ready package represents some pretty grand technological challenges. As an engineer, I believe solving them will take a while.

    Secondly, deep machine learning of the type that the computer scientists feel can mimic or surpass the human brain in learning tasks exhibits great skill when tested within the range of the training data. The challenge arises outside the range of training data, where the machine learning algorithm is forced to extrapolate. The human pilot relies on experience, often with good outcomes, sometimes not. How will HAL deal with the unanticipated? I suspect in a similar fashion to its organic counterparts.
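
A quick toy demonstration of the in-range vs. out-of-range point above: fit a flexible model on a limited interval, then query it beyond that interval. The sine data and the polynomial stand-in are my own illustration, not anything aviation-specific.

```python
import numpy as np

# Toy demo of the "inside vs. outside the training data" point above: a flexible
# model (a degree-7 polynomial standing in for any learned function) fits well
# where it was trained and goes badly wrong where it has to extrapolate.

x_train = np.linspace(-3.0, 3.0, 200)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=7)

for x in (1.5, 3.0, 6.0, 9.0):       # the last two lie outside the training range
    print(f"x={x:4.1f}  model={np.polyval(coeffs, x):12.3f}  truth={np.sin(x):7.3f}")
```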

    The campy and way-ahead-of-its-time 1970s sci-fi flick “Dark Star” provides an entertaining example of where I’m coming from.

  16. “Otto” won’t need a layover nor will it have the opportunity to get drunk then report for the next flight hungover.

  17. First we need to agree on terms. One accepted machine design definition states that ‘thinking is the manipulation of memory’. We think when we analyze memories in relation to other memories (I guess when we sit and ‘think about something’ that happened) and when we apply memories to current input (the aircraft is stalling, I apply training memories to solve the problem). So the poll question is badly formed: of course a computer can “think like a pilot”; all it has to do is manipulate memory. So that’s a definite YES, but not to the most pertinent question. Which is: what if something happens which doesn’t fit the stash of training/memories we can apply? These are the so-called ‘corner cases’ which other commenters raise.

    They require some specific design to handle unexpected situations, neglect of which has famously led to fatalities in the ‘smart’ aircraft systems now in use. The Airbus A400M crash in Seville is an example of really bad embedded control initialization and exception handling. Until we can get those foundational aspects right, piling AI of any kind on top of them will do little good. The core of the design, including power-on self-test and runtime exception handling, must be robust or the whole design is brittle. And how do we test and verify that the design is robust? There are methods and tools but few universally applied standards: witness the 737 MAX debacle. It’s a messy problem with no clean solutions. That doesn’t make it insoluble, just very challenging. I am an industrial embedded control systems designer and forensic engineer, and it still sometimes surprises me how many unexpected ways things can fail.

    We’ll get there first with self-driving cars, where the bar is lower: 37,000 motor vehicle fatalities in 34,000 incidents (in 2016). At least computers don’t get drunk or fall asleep. Driving is perhaps a simpler 2D problem vs. 3D in the air. But look at the SpaceX automated docking with the ISS. Solutions to some of this may be out of the scope of our current experiences, just like many of us didn’t imagine the internet or cell phones… or crossing the Atlantic nonstop.
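
To illustrate the “foundational aspects” the comment above points at, here is a toy sketch of a control loop built around a power-on self-test and runtime exception handling, with a deterministic fallback. Sensor names, limits and behaviors are invented for illustration; real avionics code is another world entirely.

```python
import time

# Toy sketch of deterministic foundations: a power-on self-test plus a control
# loop that degrades to a known-safe fallback on any runtime exception.
# All sensor names, limits and behaviors are invented for illustration.

SENSOR_LIMITS = {"airspeed_kt": (30.0, 400.0), "pitch_deg": (-30.0, 30.0)}

def power_on_self_test(read_sensor):
    """Refuse to enter the control loop unless every sensor reads a sane value."""
    for name, (lo, hi) in SENSOR_LIMITS.items():
        value = read_sensor(name)
        if value is None or not (lo <= value <= hi):
            raise RuntimeError(f"POST failed: {name} read {value!r}")

def control_loop(read_sensor, command, fallback, cycles=3, period_s=0.02):
    for _ in range(cycles):
        try:
            airspeed = read_sensor("airspeed_kt")
            lo, hi = SENSOR_LIMITS["airspeed_kt"]
            if not (lo <= airspeed <= hi):
                raise ValueError(f"airspeed out of range: {airspeed}")
            # Trivial stand-in control law: nudge the nose up when slow.
            command(pitch_target_deg=2.0 if airspeed < 120 else 0.0)
        except Exception as exc:
            # Runtime exception handling: fall back to a known-safe behavior
            # instead of pushing garbage to the control surfaces.
            fallback(reason=str(exc))
        time.sleep(period_s)

if __name__ == "__main__":
    readings = {"airspeed_kt": 140.0, "pitch_deg": 1.5}
    read = lambda name: readings.get(name)
    power_on_self_test(read)
    control_loop(read,
                 command=lambda **kw: print("command:", kw),
                 fallback=lambda reason: print("fallback:", reason))
```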

  18. Can a computer think like a pilot?

    Definition of Think by Merriam-Webster:
    Think..transitive verb. 1 : to form or have in the mind. 2 : to have as an intention… thought to return early. 3a : to have as an …. think it’s so. b : to regard as or consider… think the rule unfair.

    Dictionary.com
    Verb (used without object), thought, think·ing.
    To have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.
    To employ one’s mind rationally and objectively in evaluating or dealing with a given situation:
    Think carefully before you begin.
    Verb (used with object), thought, think·ing.
    to have or form in the mind as an idea, conception, etc.
    to have or form in the mind in order to understand, know, or remember something else:
    Romantic comedy is all about chemistry: think Tracy and Hepburn. Can’t guess? Here’s a hint: think 19th century.
    Adjective
    Of or relating to thinking or thought.
    Informal. stimulating or challenging to the intellect or mind:
    The think book of the year.
    Compare think piece.

    I love Paul’s thought-provoking title asking this question: Can A Computer Think Like A Pilot? It’s A Trivial Question. My answer is no.

    Taking into consideration what the word think means, a computer cannot really think. The human mind, at any given time, is a sum total of all life’s experiences. Whatever the mind has experienced – reading, study, reflection, analysis, everything our combined senses have gathered experientially, in addition to sharing experiences with and receiving them from other human minds – is something a computer cannot do. This accumulation takes place even in the womb. Mothers and fathers can literally connect with the developing baby’s mind and body in the womb with simply a sound or an external caress.

    A computer cannot have an intention and then change its mind without being stimulated by data to force the change. It can only react to information going into it. That is not thinking.

    A computer has a manufacture date…sort of a birthday. At that point, the sum total of its parts has to be started with some sort of external programming. It has no internal instinct, no sense of itself during manufacturing, no bent or particular inquisitiveness to help its ability to excel in one particular direction or another. It has to be programmed, to be directed toward a specific designed function. Eventually, it too can become a sum of its total experiences but cannot recognize a need for a behavior change.

    Staying within the confines of the question “can a computer THINK like a pilot”, using the term think correctly, it cannot.

    It can react to programmed information. Or it can accumulate data and make a decision, in reaction to an algorithmic accumulation of information, that manipulates the controls properly for that moment. But as has been noted by many far smarter than me, for all its accumulating of data, both good and bad, AI cannot make a behavioral change that defies that sum total. In many flying cases, pilots make the right decision when all the accumulated data suggests the proper reaction should be far different.

    Sully and Skiles made the right decisions, converting those decisions to the right actions, executing perfectly. Yet there are some who have stated they could have made an airport had they reacted sooner, suggesting AI could have executed even better. Maybe even a night landing in the water, IFR.

    But would AI have walked the passenger compartment making sure all of the passengers were off the airplane? Would AI have provided tangible encouragement to those passengers who might have panicked without the crews considerable nobility exercising compassion, diligence, and a sense of duty that benefited everyone aboard contributing to the overall safety of the entire event? Absolutely no.

    Therefore, a computer cannot think like a pilot. It can only react without regard for human needs. It can only react to the needs of the machine. To me, that is not enough.

    • “But would AI have walked the passenger compartment making sure all of the passengers were off the airplane? Would AI have provided tangible encouragement to those passengers who might have panicked without the crews considerable nobility exercising compassion, diligence, and a sense of duty that benefited everyone aboard contributing to the overall safety of the entire event? Absolutely no.”

      Politely, that’s what flight attendants do.

  19. Having recently retired from a 40-year military/civilian career that ran from flying off aircraft carriers to piloting 787s, I have been fortunate to have had only a few serious mechanical issues or close calls involving weather or near midairs. I never had an engine fail or catch fire (in flight). I am convinced, however, that my human intervention, or that of a fellow crew member on several occasions, prevented the loss of an aircraft and potential loss of life. No artificial intelligence would be capable of doing what the crews of United 232 or Qantas 32 did to save their airplanes and passengers’ lives.

    There are times that require hands on controls, when all autopilots have failed, all electrics and/or all hydraulics are out and the airplane is essentially a falling paperweight needing guidance in the direction of the crash. Al Haynes and Denny Fitch are the heroes no computer can ever be.

  20. “Stephen Hawking has said, ‘The development of full AI could spell the end of the human race.’ Elon Musk has tweeted that AI is a greater threat to humans than nuclear weapons. When extremely intelligent people are concerned about the threat of AI, one can’t help but wonder what’s in store for humanity.” – Excerpt from The Brain vs. Computers by Fritz van Paasschen at Thrive Global. Good article.

    medium.com/thrive-global/the-human-brain-vs-computers-5880cb156541#:~:text=Brains%20are%20also%20about%20100%2C000,or%20minus%20a%20few%20decades.&text=Computer%20processors%20are%20measured%20in%20gigahertz%3A%20billions%20of%20cycles%20per%20second.

    My impression is that human nature will slow down or prevent computer systems autonomy in Commercial aviation where transport passengers are involved, but not cargo – at first. However, the idea will continue to crawl, maybe to the end of the century, before partial AI operations are made possible.
