Poll: Would You Fly On A Single Pilot Airliner?

Russ Niles
Russ Niles is Editor-in-Chief of AVweb. He has been a pilot for 30 years and joined AVweb 22 years ago. He and his wife Marni live in southern British Columbia where they also operate a small winery.


  1. The “copilots” aren’t there on standby in case the primary pilot fails, to read the checklist aloud, or to act as a wetware autopilot. They’re there as an essential, independent, thinking and observing member of a much more capable organism called the crew. A well-trained crew can be, and almost always is, much greater and safer than the sum of its parts. Sophisticated automation doesn’t observe, think and question, and so doesn’t replace a crewmember.

    Engineers are gonna engineer, but until you can show me a system that works better even in 2 dimensions than the average (read: crappy) driver, don’t try to sell me on a system that purports to replace a well-trained, thoughtful, observant crewmember operating in 3 dimensions. With paying passengers on board. Operating in airspace and above a surface also occupied by human beings. Maybe AI will be an eventual replacement for a trained human, but we’re nowhere near that level of performance.

  2. It’s not pilot and copilot, it’s pilot flying and pilot monitoring… and often it’s not monitoring but thinking. We’ve all been there: tunnel vision, working on “a” problem when it’s actually a different problem that needs solving. I’ve had a check pilot, as PF, go to sleep. As a relief pilot, I’ve seen two experienced pilots repeatedly get well behind on an arrival (too high, too fast) to the point we almost blew the fuse plugs on landing.

    • Exactly, they see this as yet another way to cut costs. Gotta afford an eight-digit salary plus a multi-million-dollar annual bonus for their amazing decisions somehow.

  3. As I have repeatedly told my young First Officers, they WILL see this in their careers. It will creep in very insidiously. It will first appear at the cargo carriers, because boxes don’t care how many pilots are up there. After it is “proven” successful there, the next appearance will be at the low-cost carriers. And the ticket prices of single-pilot airlines will be so low it will effectively be a nail in the coffin for First Officers.

  4. They should have an FMS programmer who doesn’t need to be a pilot, and a pilot behind a glass door marked “break in case of emergency”.

  5. If the single pilot has to go to the john, should there really be an empty cockpit on autopilot?

  6. In the novel _Friday_, science fiction Grand Master Robert A. Heinlein posited that no one could trust a true AI pilot, because once it realized it could never fully participate in human society, it would go insane and crash aircraft for the final joy of it. It would never fear death as humans must.

    In _The Moon Is A Harsh Mistress_, he did theorize an AI personality that would want to help its friends, but that was a mid-career work (1966), versus _Friday_ (1982), written near the end of his career. His opinion had obviously changed dramatically by then!

    Then there’s the most accurate AI prediction I’ve read, _Our Final Invention: Artificial Intelligence and the End of the Human Era_ by James Barrat. Kindle will give you a free sample of (roughly) the first 50 pages, a logical exercise more than sufficient to scare you out of a year’s growth. That claim (which is only my opinion) will strike most readers as overly sensationalistic, mostly because to hold any other position is doom and gloom, a position almost nobody wants to espouse. Consider this short quote: “Unlike our intelligence, machine-based superintelligence will not evolve in an ecosystem in which empathy is rewarded and passed on to subsequent generations. It will not have inherited friendliness. We do not know if AI will have _any_ emotional qualities, even if scientists try their best to make it so. However, scientists do believe, as we will explore, that AI will have its own drives. And sufficiently intelligent AI will be in a strong position to fulfill those drives.

    “And that brings us to the root of the problem with sharing the planet with an intelligence greater than our own. What if its drives are not compatible with human survival?”

    E.g., what if it wants unlimited electrical power to fuel its desire to succeed and grow? If you hold any of the views of climate change supporters, you _already_ believe that unlimited electrical generation is incompatible with human survival. (I don’t propose that this includes many — or any — of you here.) And this is just one small example of an AI drive incompatible with human survival.

    I don’t see any trustworthy good thing coming from AI (read Barrat’s parable of The Mice and the Scientist, AKA The Humans and The AI).

  7. The story goes that when they originally designed the Boeing 777, they replaced the First Officer with a dog.

    The dog was trained to bite the pilot if he tried to touch any of the controls.

    • I remain a sceptic. When you think about it, though, aren’t most aircraft incidents the result of pilot error? Remove the pilot and … reduce the number of events.