Embraer Tests Autonomous Aircraft

Embraer announced that it has successfully completed the first test of an autonomous aircraft in Brazil. As shown in the video below, the taxi test, which was overseen by a pilot in the cockpit, took the aircraft “along a previously established path without human interference.” According to Embraer, an integrated artificial intelligence system, capable of acting independently on acceleration, steering, and braking, monitored the aircraft’s “external and internal conditions.”

“Our strategy for technology development in autonomous systems seeks to position the country at the forefront of artificial intelligence processes in a variety of applications,” said Embraer Executive Vice President of Engineering and Technology Daniel Moczydlower. “Achieving this technological milestone in Embraer’s 50th anniversary month demonstrated not only the importance of bringing industry closer to the university, but also how prepared and engaged our people are for the journey of excellence needed for the coming decades.”

The test was conducted as part of a partnership between Embraer and Brazil’s Universidade Federal do Espírito Santo (Ufes). It took place at Embraer’s Gavião Peixoto facility in São Paulo, Brazil.

Video: Embraer
Kate O'Connor
Kate O’Connor works as AVweb's Editor-in-Chief. She is a private pilot, certificated aircraft dispatcher, and graduate of Embry-Riddle Aeronautical University.

17 COMMENTS

  1. If a human can “take over,” it isn’t “autonomous.”

    And AI is NOT the way to go, if you’re designing autonomy. Code branching = behavior branching = unpredictable behavior = bad.

    • Did these people NOT watch Star Trek when growing up? The M5 unit shows what happens when you create fully autonomous (and thus unpredictable) behaviors in machines.

    • Autonomous does not mean a human can’t take over; it means a human doesn’t have to for it to conduct the task, like taxiing. If it has the ability to act independently, it is autonomous.

      • Actually, if anyone or anything can just jump in and take control, it is NOT autonomous.
        Instead, it’s a matter of: “How long is your leash?”

    • My experience in IT is that the people writing the marketing tend to misrepresent what is being used. AI, Machine Learning, Pattern Matching: they don’t know the difference, but it sounds better to customers and investors to just call it AI. I wouldn’t be surprised if that were the case here.

    • By definition, it is NOT autonomous – without notice or permission, an outside entity can usurp its control.

      The misuse of terminology is a corrosive force upon the language. Since engineers should know better, I’m left to conclude that such misuses are purposeful. And deceit has no legitimate place in the profession.

  2. Mark:
    The basic problem with Star Trek’s M5 was NOT its autonomy. It was its inclusion of “human qualities” (read: “artificial intelligence”) that enabled its unpredictable behavior.

    The behavior of a properly-designed autonomous control system is completely predictable, because it has no ability to modify its own code.
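
    To illustrate the claim, here is a minimal Python sketch (every name here is hypothetical, not anything from Embraer’s system) of a table-driven taxi controller. Its input-to-output mapping is fixed data the program reads but never rewrites, which is exactly the sense in which its behavior can be enumerated and verified in advance:

    from enum import Enum, auto

    class State(Enum):
        HOLDING = auto()
        TAXIING = auto()
        BRAKING = auto()

    class Event(Enum):
        PATH_CLEAR = auto()
        OBSTACLE = auto()
        END_OF_PATH = auto()

    # Fixed transition table: (state, event) -> (next state, command).
    TRANSITIONS = {
        (State.HOLDING, Event.PATH_CLEAR):  (State.TAXIING, "accelerate"),
        (State.TAXIING, Event.OBSTACLE):    (State.BRAKING, "brake"),
        (State.TAXIING, Event.END_OF_PATH): (State.BRAKING, "brake"),
        (State.BRAKING, Event.PATH_CLEAR):  (State.TAXIING, "accelerate"),
    }

    def step(state, event):
        # Unlisted (state, event) pairs fall back to a safe default
        # rather than improvising a new behavior.
        return TRANSITIONS.get((state, event), (State.BRAKING, "brake"))

    # Because TRANSITIONS is finite and immutable at runtime, every
    # reachable behavior can be tested exhaustively before flight.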

    • “properly-designed autonomous control system is completely predictable”

      There is no completely predictable, properly-designed autonomous control system for aircraft, my friend; there never will be. It’s an impossibility. The reason is that many flights are diverted every day, and those diversions are based on the unpredictable. You cannot design a completely predictable system for an unpredictable environment because flight safety is in this equation. History has shown us that “completely predictable” autonomous systems will still go off the rails; Star Trek reminds us that AI also is not a solution 😉

  3. “You cannot design a completely predictable system for an unpredictable environment because flight safety is in this equation.”

    If you’re any good at your job, you absolutely can – and will – design a system that reacts to impediments to achieving its objectives in a completely predictable and repeatable fashion. That’s what true autonomous systems do – they make decisions in continuously evolving situations.

    • No one can design 100% completely predictable systems; anyone reading software rev numbers or patch levels will back me up on that! Don’t even get me started on the myriad of sub-components and sensors (each having its own revision levels and reliability issues) that would make up such a system. Didn’t Boeing recently make software changes to one of their aircraft’s premier flight systems?

      If (and I really mean IF) you could actually make a 100% predictable and reliable autonomous flight system for airliners, wouldn’t that make them MORE susceptible to danger? Each plane would effectively come with a handbook of how it reacts, 100% of the time, to every situation. Seems like it would then be child’s play for someone with ill intent to spoof or disable the bits that derail the whole works.

    • It has nothing to do with being good at your job. Flight systems undergo verification and validation (V&V) before they are certified to operate in the system. If you introduce “true” AI, it will have the ability to learn new ways of doing something. The ability for it to learn and get better is what makes it unpredictable. And once it changes even one of its lines of code, it needs new V&V. I haven’t even started on model drift.
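
      For the unfamiliar: model drift means the live inputs wander away from the distribution the model was validated against, so its past V&V no longer says much about its present behavior. A toy Python sketch of a drift monitor (all thresholds and numbers hypothetical):

      # Statistics recorded when the model passed V&V (illustrative values).
      TRAIN_MEAN, TRAIN_STD = 12.0, 3.0
      DRIFT_THRESHOLD = 2.0  # allowed shift, in training standard deviations

      def drifted(recent_inputs):
          # Flag when live inputs no longer resemble the validated regime.
          live_mean = sum(recent_inputs) / len(recent_inputs)
          return abs(live_mean - TRAIN_MEAN) > DRIFT_THRESHOLD * TRAIN_STD

      print(drifted([11.5, 12.3, 12.9]))  # False: matches the validated regime
      print(drifted([21.0, 22.4, 20.7]))  # True: time for new V&V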

  4. The only time we’re even interested in the “unique” experiences of any given instance of a control system is when the outcome was unhappy. That presents a learning opportunity – for the code authors.

    Software revisions typically are the result of two causes:
    • “Bug” fixes
    • Other improvements.

    Bugs, by definition, are a failure of the code to meet the intent of the author. Their behavior is unpredicted, but – once discovered – is not unpredictable.
    Bug fixes are another topic.

    Bad sensors? Good code includes background processes that continuously challenge sensors, in an effort to discriminate between bad news and bad data.
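
    As a rough sketch of what such a background check could look like (hypothetical Python, not any certified implementation), redundant sensors can be cross-compared and median-voted:

    from statistics import median

    DISAGREEMENT_LIMIT = 2.0  # max plausible spread, in sensor units (illustrative)

    def vote(readings):
        # Median voting: one drifted sensor is outvoted and flagged as
        # "bad data" instead of being believed as "bad news".
        m = median(readings)
        suspects = [i for i, r in enumerate(readings)
                    if abs(r - m) > DISAGREEMENT_LIMIT]
        return m, suspects

    # Three redundant airspeed-style sensors; sensor 2 has drifted.
    value, suspects = vote([101.2, 100.8, 88.0])
    print(value, suspects)  # 100.8 [2]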

    Boeing is making changes to a kludge layer of “automation” that highlights the dangers of relying upon the predictable performance of… wait for it… pilots.

    • “that highlights the dangers of relying upon the predictable performance of… wait for it… pilots.”

      Nomad: You are the creator.
      Kirk: But I admit, I am imperfect. How could I have created a perfect being like you?
      Nomad: Answer unknown. I shall analyze… Analysis complete: Insufficient data to resolve problem.

  5. Peter:
    You should re-read my comment about “true autonomous systems.”
    I asserted that AI has no place in them.
