MIT’s AI Copilot Improves Human Pilot Performance


MIT says it’s developing an artificial intelligence-driven copilot robot it calls a “guardian” that will monitor the human pilot’s performance and intervene at even the smallest deviation from what the AI considers the proper action. Air-Guardian uses eye tracking to determine where the human is focusing, and if it doesn’t match the AI’s gaze, the machine takes control. “If they’re both paying attention to the same thing, the human gets to steer,” according to the explanation by MIT’s Rachel Gordon. “But if the human gets distracted or misses something, the computer quickly takes over.”

Gordon writes that current autopilot and navigation systems don’t sound the alarm until things go really bad on the flight deck. Air-Guardian nudges and cajoles the human pilot to a level of perfection that should prevent those sorts of in-flight crises. “As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot; a partnership between human and machine, rooted in understanding attention,” the explanation reads. It gets a lot more complicated than that, but the bottom line is that it seems to work, according to MIT. “The guardian reduced the risk level of flights and increased the success rate of navigating to target points,” the report said.
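Neither the article nor Gordon’s explanation spells out the handoff logic, but the description above, gaze tracking compared against the machine’s own attention, suggests something like the minimal sketch below. Everything in it is an assumption for illustration, not MIT’s actual method: the cosine-similarity comparison, the 0.6 threshold and all of the names are made up.

```python
import numpy as np

def attention_overlap(human_gaze: np.ndarray, ai_saliency: np.ndarray) -> float:
    """Cosine similarity between the pilot's gaze heatmap and the AI's
    saliency map over the same visual field (both 2-D grids)."""
    h, a = human_gaze.ravel(), ai_saliency.ravel()
    denom = np.linalg.norm(h) * np.linalg.norm(a)
    return float(h @ a / denom) if denom else 0.0

def choose_controller(human_gaze, ai_saliency, threshold=0.6) -> str:
    """If both are 'paying attention to the same thing,' the human steers;
    if attention diverges, the guardian takes over, per the article."""
    overlap = attention_overlap(human_gaze, ai_saliency)
    return "human" if overlap >= threshold else "guardian"

# Toy usage: identical attention maps, so the human keeps control.
gaze = np.zeros((8, 8))
gaze[2, 3] = 1.0
print(choose_controller(gaze, gaze.copy()))  # -> human
```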

Russ Niles
Russ Niles is Editor-in-Chief of AVweb. He has been a pilot for 30 years and joined AVweb 22 years ago. He and his wife Marni live in southern British Columbia where they also operate a small winery.


36 COMMENTS

  1. Had fun renting a car in Europe this summer. On tiny two-lane roads, I would drift away from the centerline whenever there was oncoming traffic. The car would assume I wasn’t paying attention and jerk the wheel toward the oncoming vehicle. Fun every time.

    I eventually figured out how to turn it off, five menus deep.

    I’m excited about my airplane doing the same thing.

  2. Sounds like they started making steps in the right direction, but ran right past it… to the assumption that the computer will always be right, which we know just ain’t so.

  3. MIT, “the bottom line is it ‘seems’ to work”…flying car, now HAL

    So the underlying assumption is that the human pilot is wrong…what happens when the inputs to HAL are wrong…conflicting sensors (X-31, Air France, 737 Max, etc.)?

    AI is the SW designer’s dream: no need to define all those pesky use cases or worry about corner cases, just let AI figure it out and hope for the best…from the safety of mom’s basement.

    If AI is to act in pilot roles, time to stop thinking about it from an equipment certification aspect alone and also require it to pass appropriate checkrides.

  4. Monitoring pilot and aircraft performance and issuing advisories would be a good thing, and seems to me a necessary first step before introducing intervention. The edge cases are troubling: what about the double flameout into Heathrow? The pilot reduced flaps, contrary to SOP and the Flight Manual, yet his action is credited with preventing a crash outside the airport boundary, in a housing estate.

  5. Nothing can go wrong, go wrong, go wrong… When you can grab the pebble from my hand, you will be ready.

  6. “As modern pilots grapple with an onslaught of information from multiple monitors,…” Monitors? What about what’s going on outside the cockpit?

    Do any of the MIT developers have any ratings that allow them to exercise the privilege of operating a real, full-size aircraft in the real physical world, by operating the real aircraft’s controls from the pilot’s seat? To be clear: not drone pilots, not gamers, not sim operators.

    • I hate to break the news to you, but most avionics developers/programmers who work for major avionics manufacturers are not rated pilots and at best have only sim time (up to full-motion Level D). However, they will be the first to instruct you on a new avionics package they developed and how to operate it in your aircraft, down to the deepest sub-menu dive. And yes, more automation in future software applications is a certainty, programmed by them (take Garmin’s “HomeSafe” button for the G3000 suite used in the newest TBM 960, for example).

    • Probably better.

      I was always a little miffed about “extrapolating” and “interpolating” data.

      Example: best glide. Do you know the best glide for your airplane after flying for one hour, thirteen minutes and 20 seconds?

      Sure, you could figure it out. But could you calculate best glide, glide distance (remembering to account for density altitude), know and announce your position, find the wind direction and go through the engine-out checklist in less than one second?

      My bet is that AI, or even Garmin’s Autoland, has those things figured out even before a pilot registers an engine out. Heck, even before the engine quits. (A toy version of that arithmetic follows this comment.)
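      None of the following is from the article; it’s a back-of-the-envelope sketch of the kind of instant arithmetic the comment above describes. The 9:1 glide ratio and the altitude are placeholders, and wind and density altitude are deliberately left out.

      ```python
      def glide_distance_nm(agl_ft: float, glide_ratio: float) -> float:
          """Still-air glide distance in nautical miles from height above
          ground and the POH glide ratio (roughly 9:1 for many light singles)."""
          return agl_ft * glide_ratio / 6076.0  # feet per nautical mile

      # Hypothetical numbers: 6,500 ft AGL at a 9:1 glide ratio.
      print(round(glide_distance_nm(6500, 9.0), 1))  # -> 9.6 NM, still air
      ```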

    • AI would have made the decision for Teterboro or the Hudson quicker.

      Probably long gone are the hand-flying pilots who would have posted their disgust at such a crazy device as an autopilot.

      Cannot imagine a single responsible person who would pass up having a second or third set of eyes looking out for that ‘just in case’ possibility.

      MIT ain’t stoopid people. I would gather they have some well-seasoned pilots in the background relaying likes and dislikes of such a concept.

      It’s coming, and there’s no stopping such realities when an airline can deflect litigation to a programmer vs. the human pilot.

    • Sounds like a spouse… as long as the pilot dutifully says “yes, dear”, and does what the other wants, the cockpit should be very harmonious. Happy AI, happy we fly!

  7. Interesting, kinda like my gen-6 Subaru that tells me to keep my eyes on the road when they are on the road. AI might make a great tool to help pilots in times of need; however, it will have to go through multiple tests and evaluations before it will be welcome on the flight deck. Last time I checked, AI didn’t sign the logbook.

  8. Wow. Love to be the pilot on a long flight with that thing critiquing my every head movement and correcting me, and then me trying to plan each head movement so I don’t get fussed at by the machine. Not hardly. Yeah, provide quick info I might request, but otherwise keep your hands off.

  9. Sounds like a solution looking for a problem.
    I’ll stick with flying on an airline with humans at the controls. If pilots go away, I will drive or fly myself.

  10. It would be nice to see some artificial intelligence replace the genuine lack of intelligence in most of the comments here. Folks here don’t seem to understand that this is a research project, not a product being deployed to every cockpit tomorrow.

    Why is everyone here against research and progress? Any time there’s an article about an innovative product, it is immediately met with criticism from all the armchair quarterbacks who post here. Whether or not a project ends up being viable, the research yields valuable information. Heck, Thomas Edison had to try 1,000 different experiments before he achieved the light bulb. I can only imagine how folks here would have responded if a trade journal had been providing periodic updates on his progress.

    Folks here need to realize that the people working on these innovative research projects are likely way smarter than any of us and have already considered the critical points you mention. You all love to judge based on the very little information you are given.

    I have some friendly advice to all of you, courtesy of Ted Lasso: be curious, not judgemental. AI copilots, electric aircraft, SAF, these are all interesting topics and I am curious to learn more about them.

    • Most, if not all, of the comments here (including yours and mine) are driven, or motivated, by emotional responses. Commenters’ motivations (i.e., emotional responses) are typically not recognized, much less analyzed, by the commenters, who then engage their fingers and start typing.
      It’s difficult to catch one’s emotions and label them before acting on them.
      On the other hand, commenting does generate “clicks,” which are counted and demonstrate the site’s usefulness to advertisers.

    • I agree, the reflexive “this is bad” makes no sense to me. I remember when the GNS430 was introduced in the late ’90s. It was a game changer, and I remember CFIs and old-timers saying it was going to make pilots lazy, etc., that they wouldn’t be able to navigate and would become children of the magenta line. Instead it led to a revolution in avionics, and nobody should ever get lost anymore, and for the most part they don’t. Pilots adapt. To another point, the technology already exists, and I welcome technology advancements in the cockpit. I’ll be long done flying by the time true AI is in the cockpit, but if I can be assisted with a basic “hey dude, don’t do that,” I’m all for learning about it.

  11. Having flown “intelligent” aircraft such as the A-320 series (28 yrs) and the B-787 series (2 yrs), I recognize this project is a next step in the evolution toward the eventual removal of human control of aircraft. The upside is that when everything is working properly, AI intervention can perhaps prevent catastrophic events. The downside is when certain inputs are corrupted and the AI doesn’t recognize the fault and reacts improperly. Example: two of three AOA inputs are giving the same but wrong value, and the AI is programmed to reject the single outlier (see the voting sketch at the end of this thread).

    I once flew an Airbus that had the same system fail multiple times over a period of several flights. Different components were replaced, only to have the failure recur on my flight. Would AI have been able to recognize, as I did, that the failure wasn’t actually happening, but that a major computer system that talked to all the other computers had had a “stroke” and was erroneously reporting the system as failed?

    I welcomed the intelligence built into those aircraft but am not quite ready to turn over complete control. As long as the AI can be turned off or overridden in an emergency, I welcome the additional safety.

    • The problem with this argument is that it ignores that AI is adaptive. AI runs on top of machine learning algorithms and is able to consider past events and outcomes to modify future responses. AI is not “programmed” to eliminate one out of three AOA indicators. This is a fundamental misunderstanding of AI.

      • Assuming AI “considers past events and outcomes” is also a fundamental misunderstanding of AI. The vast majority are trained, then run, and there is no feedback afterwards (thankfully).

        There is currently no means to understand why an AI makes a particular decision, and that’s a problem that needs fixing before we just implement it everywhere.

        Back in the ’90s, a CNN (convolutional neural net) was trained to discern enemy tanks from friendly tanks, and it did so with 100% success. It turned out all the pictures of enemy tanks it trained on were taken on a cloudy day, and all the friendly tanks on a sunny day. On the test range, it identified every tank as an enemy because it was cloudy that day.

        More recently, an AI was trained to detect a particular kind of cancer. In the metadata was a tag for the kind of machine that took the X-ray. It was 97% accurate at detecting cancer. It turned out it had learned that doctors would only use the high-resolution X-ray machine on cases that were likely cancer; so if the patient had a high-resolution X-ray, they must have cancer.

        I think we expect less stupidity from our AI before we can have confidence in it. In the meantime, “trust me, it seems to work whenever we test it” from a programmer is unlikely to instill confidence in the public.

          • You are thinking only of generative AI, which is initially trained; however, even GenAI adapts its responses based on prior prompts. GenAI can even be adapted to access data sources and modify responses based on recent data (I have a client currently working on a project implementing this type of functionality).

            GenAI is trained once, but its responses are adaptive. If they weren’t, it wouldn’t be AI; it would just be an algorithm.
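      To make the AOA example from comment 11 concrete, here is a minimal sketch of classic mid-value (2-of-3) voting, the kind of logic at issue in this thread. It isn’t taken from any certified system and the sensor values are hypothetical; it simply shows how two channels failing the same way outvote the one healthy sensor.

      ```python
      def vote_aoa(ch1: float, ch2: float, ch3: float) -> float:
          """Mid-value select: return the median of three channels,
          implicitly rejecting the single outlier."""
          return sorted([ch1, ch2, ch3])[1]

      # Healthy case: all three channels agree near 4 degrees AOA.
      print(vote_aoa(4.1, 4.0, 4.2))    # -> 4.1

      # The failure mode above: two channels fail identically (say both
      # iced over near 25 deg), so the voter rejects the one correct sensor.
      print(vote_aoa(25.0, 25.1, 4.0))  # -> 25.0 ("majority" wins, wrongly)
      ```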

  12. I’d like to see previous pilot-induced crashes run through this new artificial intelligence software to see what percentage the AI could have prevented. This might give some indication of its usefulness. I am still wary of an instantaneous “mutiny” takeover.

  13. It sounds so redundant. Let’s dream a little bigger: why not just make totally pilotless AI airliners? Then anyone who likes to be a pilot can just get a flight simulator at home and play games with all kinds of planes and spaceships, even out into space.

  14. Anybody remember a certain airliner with a runaway auto trim program failure?

    Or the airline crashes because the pilots forgot how to fly the plane when the automation was disabled?

    Computer flying with auto-throttles and precision approaches should not count as a landing for currency.

  15. I’d love to get this sort of feedback on how I’m doing using the software developers’ products vs. how they (or the products themselves) think I should be using them. I’d argue the proper form of this feedback would be a report that’s generated and reviewed after the flight, not the real-time feedback/intervention described above. Even better would be the ability to provide feedback on the feedback, where I could explain to the software developers why I was doing what I was doing and not what they (or their product) thought I should be doing. Such an approach would hopefully lead to a “shared mental model” between the flight deck and the product/software developers, and improvement of the product over time.

  16. This level of lunacy is going to get someone morted. Unless the crew is entirely incapacitated, NO machine should be driving the bus, ever. We just lost an aircraft recently due to fly-by-wire systems experiencing blocked air data ports, causing the elevators to drive full up and ending in a terminal stall condition. Now some kind of ‘smart machine’ is going to measure crew efficiency and respond ‘instantly’ if it doesn’t like it? I recall that in 1912 the ‘unsinkable’ Titanic sank pretty efficiently too…

  17. Yeah, it’s so over the top out there that it almost feels like a leg pull.

    If the stuff is that good, why bother with a live pilot at all?

    Trading one set of imperfections for another. Utopia is never ever ever gonna happen with this version of the world. Sorry.

    I’m gonna hafta stop reading the vegetable oil and other unpleasantries stories on here, which seem to make up the majority of them. Is that stuff really what it’s all come down to now?

    • Well, judging by the response, including yours, there’s interest in it and tons of money is being spent on it. I agree that some of it doesn’t seem to make sense but we can’t ignore it.

  18. Big difference between the transport category, where everyone, including the aircraft, wants the flight to be “by the book” wherever feasible, and the fly-for-fun, fun-of-flying cohort…like me. I’d have it in the disabled mode all the time. Routinely correcting those unstabilized approaches using the tools and techniques available is part of the fun of it.
