And Now, A ChatBot For All Your Aviation Questions


Fads and trends blow through the cyberverse like tumbleweeds in Texas, and far be it from me to feign being above it all. So the hot new thing this month is ChatGPT, which journalists are fooling with to grind out copy for The Bottomless Maw of Endless Content. ChatGPT has a potential connection to the entire universe, including aviation. So let’s take it for a spin.

First, in case you haven’t been paying attention, what exactly is ChatGPT? Easy. Just ask it:

“GPT (short for “Generative Pre-training Transformer”) is a type of machine learning model developed by OpenAI that has been used for a variety of natural language processing tasks, including language translation, question answering, and text generation. GPT models are trained on large amounts of text data and are able to generate human-like text by predicting the next word in a sequence based on the context of the previous words.”

So it’s really rudimentary AI with an emphasis on machine learning, meaning it improves itself iteratively, but is primarily a language processor. It is not sentient and doesn’t seem remotely near the singularity. It won’t even pass the Turing test because if you press forward on some subjects, it will give up the game by admitting it’s a machine with programmed limits.

Nonetheless, even though it doesn’t think the way you and I do, its logical language processing in pursuit of even complex questions provides competent enough answers to make me think that 10 years from now, or maybe 20, distant iterations of it could easily be a player in many aspects of life including aviation. But the designers will have to give ChatGPT something it doesn’t now have: judgment. Can that challenge be met? I would guess yes, eventually.

For now, I peppered ChatGPT—or really, the Assistant, which is the human interface part of it—with all sorts of aviation questions and it spat back credible answers in under 10 seconds. Here’s an example: Which is safer, driving a car or flying in a light aircraft? The initial answer compared cars to airlines, but that’s not what I asked. Still, it correctly answered that airline flying is far safer than driving and gave the best data methodology—accidents per miles traveled—for the reader to understand the risks. The app has a do-over button labeled “regenerate response.” Without changing the wording, I asked again. This time, the answer was properly focused on light aircraft, but also mangled.

“Generally, driving a car is safer than flying in a light aircraft. According to data from the National Highway Traffic Safety Administration (NHTSA), there were an estimated 38,680 traffic fatalities in the United States in 2020. In contrast, the National Transportation Safety Board (NTSB) reported a total of 571 aviation accidents in the United States in 2020, with a total of 598 fatalities. This means that, on a per-mile basis, the risk of dying in a car accident is much higher than the risk of dying in an aviation accident.”
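For contrast, a proper comparison normalizes raw fatality counts by exposure before comparing them. Here’s a minimal sketch of that arithmetic; the figures are placeholder assumptions for illustration, not official NHTSA or NTSB statistics:

```python
# Normalized-exposure comparison: fatalities per 100 million miles.
# All input figures below are illustrative assumptions, not official data.

def fatalities_per_100m_miles(fatalities, miles_traveled):
    """Normalize a raw fatality count by exposure (miles traveled)."""
    return fatalities / miles_traveled * 100_000_000

# Hypothetical annual figures, for demonstration only:
driving = fatalities_per_100m_miles(38_680, 2_900_000_000_000)  # ~2.9 trillion vehicle miles
light_ga = fatalities_per_100m_miles(350, 2_600_000_000)        # ~2.6 billion light-GA miles

print(f"Driving:  {driving:.2f} fatalities per 100M miles")
print(f"Light GA: {light_ga:.2f} fatalities per 100M miles")
```

With numbers anywhere in that ballpark, the per-mile rate for light aircraft comes out roughly an order of magnitude higher than for driving, which is the step the bot skipped.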

The problem? The Assistant lacked the ability to understand and calculate rate-based incidence by considering normalized exposure, say miles traveled or hours flown or driven. I’ve struggled with this myself when making these comparisons, but light aircraft flying has a higher fatal risk than driving in your car, so the casual inquirer would be misled. It gets that part of the answer right, but runs off the rails on the per-mile analysis. So how’s it coming up with these answers? It’s not a Google search, as explained by the Assistant itself. It uses predictive analysis on the likelihood of its internal text searches matching the query, or as it says:

“I am a computer program that is designed to generate human-like text based on the information and knowledge that I have been trained on. I am not a chatbot, and I do not have the ability to engage in conversations in the same way that a person would. Instead, I am designed to generate responses to specific questions or prompts that are provided to me. I do not have access to the internet and cannot search for information or browse websites. All of the knowledge and information that I use to generate responses is contained within my own programming. I do my best to provide accurate and useful information to the best of my ability based on my training and knowledge.”

That means the accuracy of the answer you seek may be related to the precision of the question: how it’s worded. I asked this: Show an example of drag coefficient calculation. In five seconds, back came an answer with bullet points explaining the top-line considerations—drag defined, density and velocity, reference area. It then gave a sample calculation that illustrated the point well enough, although it was hardly a master class on the subject. It was easier and faster than a Google search. Speaking of which, some in the tech industry think ChatGPT may threaten Google, but I’d guess Google would acquire it before that. Or build its own version.
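That sample boils down to the standard drag equation, D = ½ρV²SC<sub>D</sub>. A quick sketch of the calculation; the airplane numbers below are illustrative assumptions, not figures from the bot’s answer:

```python
# Drag equation: D = 0.5 * rho * V^2 * S * Cd
# All values below are illustrative assumptions for a light single.

def drag_force(rho, velocity, area, cd):
    """Drag in newtons: dynamic pressure (0.5 * rho * V^2) times reference area times Cd."""
    return 0.5 * rho * velocity**2 * area * cd

rho = 1.225       # sea-level standard air density, kg/m^3
velocity = 60.0   # true airspeed, m/s (about 117 knots)
area = 16.2       # assumed wing reference area, m^2
cd = 0.03         # assumed total drag coefficient

print(f"Drag: {drag_force(rho, velocity, area, cd):.0f} N")
```

With those numbers the drag works out to roughly 1,070 newtons, which is about the level of detail the bot’s own example offered.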

Here’s the Assistant’s answer to my question on how to land a light aircraft in a crosswind:

“The process of landing a light airplane in a crosswind is similar to landing a larger airplane in a crosswind. Here are the steps that a pilot may take to land a light airplane in a crosswind:

  1. Set up the approach: The pilot will set up the approach to the runway by aligning the aircraft with the centerline of the runway and establishing the appropriate descent rate.
  2. Crab into the wind: As the airplane approaches the runway, the pilot may need to “crab” into the wind by aligning the aircraft with the wind and using rudder and aileron inputs to maintain the proper alignment. This allows the aircraft to maintain a stable approach while still compensating for the wind.
  3. Flare and touchdown: As the airplane approaches the runway, the pilot will flare the aircraft to reduce the descent rate and prepare for touchdown. The pilot may need to use additional rudder inputs to maintain alignment with the runway centerline as the aircraft touches down.
  4. Use of spoilers and brakes: After touchdown, the pilot may deploy the spoilers and use the brakes to bring the aircraft to a stop. In a crosswind, the pilot may need to use additional rudder inputs to maintain alignment with the runway centerline as the aircraft slows down.”

Not bad, but biased toward heavy airplanes with spoilers. And no side slips? When I asked separately about side slips, I got a good explanation of what side slips are and how to do them, including a mention of using them for crosswind landings. But the Assistant didn’t make the connection in the first answer.

As a push to test, I asked the app if it was safe to fly my Cirrus from Florida to Boston, on a day when New England was getting slammed with snow and ice. Not going there, says the Assistant.

“It is not appropriate for me to provide specific safety advice for your flight, as I do not have access to current weather or other pertinent information. Additionally, I am not a certified aviation professional and do not have the necessary knowledge or expertise to assess the safety of your flight.

Before planning or conducting any flight, it is important to carefully assess the safety of the route and the aircraft, and to consider any potential risks or hazards. You should consult with a qualified aviation professional, such as a pilot or flight instructor, and consult relevant weather forecasts and other resources to help you make an informed decision about the safety of your flight.”

A good answer, with no risk of an ego faking it just to look knowledgeable. I might suggest the same reticence for any question involving humor. I asked the Assistant to write a joke about a pilot who becomes lost.

Why was the pilot who became lost always calm? Because they had a GPS!

Hit the snare drum. Then the hook. I do find it curious that the Assistant wasn’t troubled by inserting an awkward pronoun. Is it possible that language learning is somehow sensitive to vague political correctness? Bad as the joke is—and it’s awful—I would have used he, the slings and arrows of sexism be damned.

ChatGPT is evidently loaded with trillions of lines of text to search, interpret and analyze for its answers. And however it does it, it does it quick. Most of these answers come within 10 seconds, although several times it demurred with a busy server response. For example, I asked it if pigs will ever fly and it basically said, “I got nothin.”

Enthralled with my own stature as a landmark contributor to aviation literature, I asked the bot if it knew of me or of Berge, who actually has raised the literary bar in what is otherwise a vast wasteland of babbling about stalls, crashes and the price of avgas.

“I’m sorry, but I don’t have any information about Paul Berge. Without more context or information, I am unable to provide any further details.

I’m sorry, but I don’t have information about Paul Bertorelli. It is possible that Paul Bertorelli is a person who is involved in the aviation industry or who has written about aviation. However, without more context or information, I am unable to provide any further details.”

Now there’s an interesting quirk. It’s the Law of Who Spews the Most Gets a Hit. I’ve been hurling words down the black pipe of the internet intake manifold longer than Berge has, thus I merit at least a desultory mention that I might be an aviation journalist. Or perhaps not.

I’ll check back in a few years for a more definitive determination. One can only hope machines really can learn.


21 COMMENTS

  1. You can poke fun all you want, but this has serious potential to be absolutely huge, in almost every aspect of life. It has already been fed enough information about computer programming that it’s capable of writing programs that do some pretty complex stuff, and that part of it will only get better and better. I suspect every aspect of it will, too, and in a few years, it is likely to be writing The Nall Report, for example.

  • I agree – for a beta product this is a game changer, or at least it demonstrates the clear potential for one, which will almost certainly be matched or exceeded by Google, Microsoft, etc.

      I’ve been feeding GPT with some of the most complex “tasks” that weren’t possible before, just to get a sense of how capable this public demonstration is.

      “write an article about ethnic diversity and crime rates between Toronto Canada and Salt Lake City”, which it turns into impressive verbiage with sources and stats. Then “write it in Portuguese”, “write it in Japanese”, etc. When I asked it to write it in the form of a poem in German, it told me to “slow down”. College essays have been altered in the two weeks since this beta came out, as well as other things we haven’t thought of yet, a disruption similar to the one libraries encountered when Google appeared.

      “write a program in C# to calculate great circle distances between two different cities” – computer programming is intensive, burns out many, turns minds into mush. This accelerates that process dramatically.
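      For anyone curious what that prompt actually asks for: the usual approach is the haversine formula. Here’s a sketch in Python rather than C#; the coordinates are approximate and the Earth is assumed to be a sphere:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two lat/lon points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Toronto to Salt Lake City, using approximate city coordinates:
print(f"{great_circle_km(43.65, -79.38, 40.76, -111.89):.0f} km")
```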

      “How do I calculate V1 in a small single engine airplane” – nobody does that, but it provides information on how the K factor is determined.

      “Compare and contrast men’s aviation hobbies with attractive women as a poem in King James English” – silly exercise that shows its language processing capabilities.

      There’s a website describing how immigrants can use ChatGPT to understand the immigration process, create the needed court documents (ordinarily prepared by lawyers), and prepare the step-by-step process. (Adios, immigration lawyers)

      How about a shot at the staffs of lawmakers?

      “write a bill in legislative markup language establishing the need and a timeline for introducing biofuels in aviation”. Senior congressional staff and attorneys will pivot from writing this “priesthood” language themselves to comparing their proposals with those of ordinary citizens and defending the details of their own “code”. The “People’s House”, indeed.

      Google and Microsoft will have a competitive response in 2023, certainly. It will take more time for AI to seep into aviation, but remember, this demo is just 2 weeks old, and while it’s fairly easy to get it to provide erroneous information or wrong answers, this is just a glimpse of what’s possible.

  2. I note that Microsoft’s typing predictive feature is getting more and more ambitious, suggesting not just the word you start typing but often going several words ahead. Soon you’ll be able to read an incoming communication, dash off “Generate Reply + Negative + Politely” and go on to something else.

  3. As a software developer (and inactive pilot) I am not that worried about ChatGPT muscling in on either environment soon, though were it to take over some jobs, lawyers would be a good start.

    I also was curious about its judgment capability, so I started by giving it a current METAR for GSP when we were down to 400 ovc and 1.5 visibility. I asked what type of approach it would choose given that information (flying a Cessna 172) and it came back with “a visual approach,” which didn’t seem correct. I prompted with would you use an ILS approach in a light plane and it still liked a visual. I’m not IFR rated, but it seems that visual was not right.

    So, ChatGPT is not perfect. I sure don’t want it making decisions, and as far as coding is concerned, while it did beat about 50 percent of programmers in a contest, the coding problem was not that complex.

    My biggest concern is that the more people turn to the “quick” answer, the less smart we become collectively. I am reminded of the SFO accident with Asiana (?) airlines where two fairly experienced pilots flew it into the breakwater at the end of the runway because they could not, or had not, flown a simple hand-flown visual approach on a clear day. The company pushed using the software in the panels, preempting the software behind the eyes.

    If it becomes too easy, it becomes less interesting, and while the ideal is to “free up humanity,” what do we do with all those freed-up people? This is not a case of the horse and buggy being replaced by the horseless carriage, where the buggy driver can learn to drive a car. This is removing the driver from both. I’d like to see if ChatGPT can solve that problem.

  4. I thought it did quite well with this one:

    Q – What should I get my girlfriend for Christmas?

    A – It can be challenging to think of the perfect gift for your girlfriend, especially at Christmas. Some ideas for gifts that your girlfriend might appreciate could include:

    Something personal: Consider giving a gift that has personal meaning or that shows you have put thought into the present. This could be a piece of jewelry with a special design or a framed photo of the two of you.

    An experience: If your girlfriend is someone who values experiences over material possessions, consider giving her an activity or event that you can enjoy together. This could be tickets to a concert or a cooking class.

    A self-care gift: Consider giving your girlfriend something that will allow her to relax and take care of herself. This could be a spa gift basket, a massage, or a new piece of exercise equipment.

    A thoughtful gesture: Sometimes, it’s the little things that mean the most. Consider making your girlfriend a special meal or doing something kind for her, such as running errands or doing a household task for her.

    Ultimately, the most important thing is to choose a gift that is meaningful and shows your love and appreciation for your girlfriend. It may be helpful to think about your girlfriend’s interests and what she would appreciate most.

    But on asking it some other questions – it is also a bit like a politician – it doesn’t directly address THE answer. It slides around it with some “mealy mouthed” talk. 🙂

  5. OK – Sorry – could not resist…… 🙂

    Q – Who is Paul Bertorelli?

    A – I’m sorry, but I don’t have any information about a person named Paul Bertorelli. It’s possible that you are thinking of a different person, or that Paul Bertorelli is a private individual who has not made a public impact or left a significant online presence. Without more context or information, I am unable to provide any additional information.

    and in the interests of fairness – yet again – I am mistaken for someone else….

    Q – Who is Graeme Smith?

    A – Graeme Smith is a former South African cricketer who served as the captain of the South African national team from 2003 to 2014. He is widely regarded as one of the greatest captains in the history of cricket and is the only player to have captained his team in more than 100 Test matches. … Smith retired from international cricket in 2014, but continues to play domestic cricket in South Africa and abroad. He is also a cricket commentator and has worked as a cricket analyst for various media outlets.

  6. This reminds me of the time a bunch of us firebomber pilots were sitting in the tanker base ready room waiting for a dispatch. One of the guys grabs his phone and declares he should call his girlfriend.

    This predictably draws some unflattering comments from us, so he yells “F**k Off!” His phone then cheerfully pipes up with “Dialing Flight Ops” in a rather sexy female voice….

  7. I’ve often wondered when these AIs will approach handling all ATC tasks. Ingesting 10 years of ATC tapes, weather data, and traffic information, coupled with the amount of real-time data available, it’s easy to imagine an advanced ATC control system that is more AI than human. Many parts are already there, like clearance generation and flow control. Soon the comms radio will have an ATC gender option (male, female, etc.), accent option (Southern, Mid-west, etc.), and speed control (student pilot, average, NY approach).

    • Then, encode your preferences into your transponder, so in addition to listening for your callsign/reg, you can also/instead listen for your preferred voice gender/accent/speech rate to know ATC’s addressing you. Same voice gender/accent/speech rate ramp-to-ramp, too!

  8. From the ChatBot:

    “It is not appropriate for me to provide specific safety advice for your flight, as I do not have access to current weather or other pertinent information. Additionally, I am not a certified aviation professional and do not have the necessary knowledge or expertise to assess the safety of your flight.”

    Does anyone else see “HAL” from “2001: A Space Odyssey” in these computer-generated responses? (“Open the pod bay doors, HAL”–“I’m sorry, Dave, I cannot do that.”) The film came out in 1968, nearly 55 years ago, and author Arthur C. Clarke wrote the predecessor “The Sentinel” in 1951, 71 years ago. Sometimes, Life DOES imitate Art!

    We can only hope that we do not cede total control to the computer!

  9. I find it interesting that they want you to give them your phone number before they’ll give you an account to use this “service”. Why on earth do they need your phone number? And a throw-away VOIP number won’t do. They check for that and refuse to accept it. They won’t even take a Google Voice number.

    Why do you even need an account to use ChatGPT? Just put it up as a web page and let people make queries, like Google. This has spammer data collection written all over it. No thanks.

  10. “Why was the pilot who became lost always calm? Because they had a GPS!

    Hit the snare drum. Then the hook. I do find it curious that the Assistant wasn’t troubled by inserting an awkward pronoun. Is it possible that language learning is somehow sensitive to vague political correctness? Bad as the joke is—and it’s awful—I would have used he, the slings and arrows of sexism be damned.”

    Actually, if the joke is a direct quote, the use of that pronoun works for me. The pilot was referred to as “the pilot”, with no hint of gender, so what other pronoun should it have used?

  11. The scary thing about these AI services is that they can produce very legitimate-looking studies with charts and tables. So there will be a lot of mis/dis-information backed up by purported ‘good studies’. A shame, because it could also be used for a lot of great things.