Coming Soon: The ‘Semi-Professional Pilot’?


An opinion writer for Forbes has postulated that the future of the pilot trade is just that: “semi-professional” monitors of autonomous machines that actually discourage human intervention because we are so sloppy. In a column, Paul Kennard says humans are too imprecise to get the best performance and longevity out of aircraft systems, and cost too much to train to fit the razor-thin-margin world of modern low-cost air travel. He believes the burgeoning urban air mobility industry provides the answer to both practical application and societal acceptance.

“The compromise then, perhaps, is the semi-professional pilot,” Kennard wrote. “One that never follows a conventional path to a qualification by learning how to fly ‘stick and rudder’ piston trainers, but instead does a zero flight time course in an urban air mobility platform simulator complex. In much the same way that Uber and the smartphone have undermined ‘the knowledge’ required by yellow cab drivers, automation and UAM will likely do the same to the aviation workplace.”

By way of example, Kennard points to SpaceX’s Crew Dragon, which splashed down successfully Sunday in precisely the right place at the exact time while two astronauts watched their fate unfold on high-resolution screens. They had the ability to step in if critical functions didn’t happen on schedule. Even their seats reclined automatically. He said the airlines and manufacturers are watching all this closely, particularly the urban air mobility industry. “Once UAM is proven, licensed and demonstrably safe, airlines will start asking for some of their platforms to be similar. Future inter-city aircraft will take the UAM approach and scale it up,” he wrote.


57 COMMENTS

  1. Oh no, please, no. AI is not intelligent, there is no computer system in the world that has yet had an original thought, because they are not sentient! For all that humans make mistakes, what about all the millions of times that human sentience, skill and training has averted what could have developed into at least an ‘incident’, or at worst an accident? This is ignored. As a now retired computer systems engineer and programmer with over 40 years of experience, I do not want to see the day we completely hand over to non-sentient silicon. I honestly believe that in the cockpit of an aircraft, or the bridge of a ship, or the driving seat of a car there should be human sentience ultimately in control. Do we really have to dumb down the human race to the point where our machines have to do everything?

      • So, Capt. Al Haynes and the crew of United 232 didn’t have any original thoughts? How about Capt. de Crespigny and four other flight deck crew, flying the nearly fully automated A380 on Qantas 32, who successfully dealt with multiple (mathematically impossible to occur) system failures and safely landed their crippled jet? THEN they had to deal with a major fuel leak near hot brakes, with an engine that couldn’t be secured. No original thought, eh?

          • Your statement is misleading. I guess it depends on what you mean by original thought. I have 42 years in large airplanes (over 100,000 lbs.) and 29,000 hours. I never stopped studying. The goal was to learn from the mistakes of others so you don’t repeat them, or it gives you insight into the emergency or malfunction you are dealing with. Many times I had to make decisions that were not addressed in the manual. But between the manuals, your systems knowledge, and your copilot, you would choose the best response. So I guess it depends on what you mean by original thought.

          • William:

            It’s really quite simple; hard to see how it could mislead you.
            Flying an airplane does not require ANY thinking – original, or otherwise. That’s why it can be done by a machine – machines do not think.

          • YARS: And machines / computers fail all the time. There is hardly a single flight that does not require some computer reset or encounter some malfunction, even though most are simple. But when they have a plane that flies itself, be my guest. You and your family can be the first passengers.

      • Yars, I’m sorry, but what are you even saying? As a pilot I can tell you that we often encounter situations we’ve never seen before. They may not always be dramatically different but they sometimes are. It’s really not always as simple as power and pitch. Unless all the systems communicate with each other perfectly and can diagnose errors perfectly there will always be need for an outside operator, and until we have Star Wars level C3P0 / R2D2 droid-like robots those operators will have to be human.

        No original thought? Yeah, sure, when everything goes right. Obviously. No crosswind, no gusts, all the hoses and rods and cables and switches and circuits operating properly. But the moment something goes wrong that there’s no annunciator for, or, god forbid, the system believes one thing that a human could have told it is simply incorrect, the system will just not work right.

        You think an autonomous system has the ability to diagnose a fault in its own system and correct it? Even if they were self-aware, that’s like saying a surgeon, while flying a plane, could perform brain surgery on themselves. No original thought. Please. Tell that to the crew of Alaska 261, who figured out how to fly upside down to maintain control. You think a computer would have thought of that? Even if it realized it had the control problem, no chance. And if the altimeter failed while inverted, then what? You think there’d be a radar altimeter on the top of a plane that’s designed to fly uninverted?

        Unique situations. Unique solutions. In other words: Original thought. I promise you that all the training we get with, “What would you do now?” and the instructor just kills the throttle is not wasted. The day my co-pilot stops having ‘original thoughts’ is the day I don’t fly with them anymore.

        • I have years of experience designing systems that do diagnose their own faults. It’s standard design technique.
          Such systems do things that are 10x as complex as flying an airplane.
          Some of mine have been in continuous operation for more than 200,000 hours. They’re quite reliable – and they do NOT “think.”
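
The self-diagnosing designs described here are commonly built around a built-in-test (BIT) loop: each subsystem runs a fixed self-check, and failures trigger predefined responses rather than any “thinking.” A minimal sketch, with all subsystem names and limits hypothetical, not taken from any real avionics suite:

```python
# Hypothetical built-in-test (BIT) sketch: each sensor passes only if its
# reading falls in a predefined valid range; anything else is flagged.
# No reasoning involved - just range checks written in advance.

def check_sensor(reading, lo, hi):
    """A sensor passes its self-test only if the reading is in its valid range."""
    return lo <= reading <= hi

def run_bit(readings, limits):
    """Return the names of subsystems whose self-test failed."""
    return [name for name, value in readings.items()
            if not check_sensor(value, *limits[name])]

# Illustrative limits: airspeed in knots, static pressure in hPa.
limits = {"pitot": (20.0, 400.0), "static": (900.0, 1100.0)}
readings = {"pitot": 250.0, "static": 450.0}   # static port reading is wild
faults = run_bit(readings, limits)
print(faults)   # the static channel fails its check
```

The point of the pattern is exactly the one made above: the system detects its own faults without being sentient, because every check and response was enumerated at design time.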

    • The Boeing CST-100 test flight is proof that bad things can happen without a human in the loop. As Boeing itself claimed, if a human had been on board, they would have fixed the problem. As advanced as AI has gotten, it still can’t think creatively in situations that go beyond its programming.

  2. “In much the same way that Uber and the smartphone have undermined ‘the knowledge’ required by yellow cab drivers” — well, a yellow cab driver is still faster and more efficient in the big city than your average Uber “pilot,” and for the latter to reach the same proficiency takes years as well.

      • Except you still have to have fully trained pilots to fix what the automation messes up. The solution for a runaway elevator trim? Slam your foot down on the trim wheel. I’ve yet to see a cockpit with a robot trim-wheel-slamming foot. Ever ask yourself what happens when the system literally can’t physically control the vehicle and there’s nobody there who knows how to handle the situation? Yeah. That’s why a fully trained, expert pilot needs to be there. Maybe even more so than if there weren’t an automated system: because when the automation fails, it’ll be in a really serious emergency involving loss of control. And that’s when we want a ‘semi-professional’? Well, you can. I’ll let you be the test passenger on that one.

  3. Astronauts in capsules have always been mostly passengers. Recall the famous “Spam in a can” quote from “The Right Stuff”. Computers have always been in control of manned rockets. And really, once the launch is committed, the trajectory of the vehicle is set, although minor course deviations can have major consequences later on, which is why you need to have computers to keep on top of things. You still need to have well-trained people in control in case of an Apollo-13 like emergency, but in Space, it’s much better to let George do the driving.

  4. Didn’t Sully and Jeff end this argument?

    The L1011 could Autoland and it was 1950’s technology. Airline pilots and business aviation pilots are in the cockpit to make the extremely hard decisions that computers cannot make. Do a hundred people die or a thousand?

    Sully knew almost instantly that they wouldn’t make it back to LGA or Teterboro. He knew if he tried, the likelihood of hitting a building with hundreds of people was high. How? Countless hours flying, training, and talking to other pilots about similar scenarios.

    Smaller decisions of similar complexity happen every day at the airlines. Most of the time the passengers don’t even know.

    Don’t get me started on the idea of someone hacking into the airplane’s computer system and taking control.

    How about the mechanical problems? Currently, airlines are allowed to take off with some systems not working because the pilot IS the backup system (MEL and CDL). Again, this happens numerous times a day without the passengers knowing.

    Will pilotless commercial flying happen? Probably, but I bet very few current professional pilots would ride on one.

      • Yars, I must respectfully disagree. Again, Sully knew by instinct that he could not make Teterboro or any other field, and he knew that a turn back to KLGA was out of the question. He also realized that both engines went in spite of what the EICAS told them.

          • Yars, you’re not even listening to yourself. The system thought the engines weren’t out. They were out. How could the system operate properly if it didn’t even know its own failure?

          • Elisa:
            THE system.
            Not an autonomous control system.
            I can’t – and won’t – defend the design of the Airbus NON-autonomous FMS.

        • That’s the main problem with autonomy – bad data can produce bad results. MCAS isn’t autonomy, but it’s an example of how a computerized system that is being fed bad data can do something unintended (and there are other examples too). Sure, you can write work-arounds so that the system uses multiple inputs and cross-checks them, but there’s still the possibility that you’ll make assumptions about certain data never being invalid and yet it still happens.

          Of course, human pilots can make stupid decisions too (AF447 as an example), but many of the recent examples are really cases of poor training. But at least the legal system is set up to deal with humans. Who would be to blame for an automated airliner crash?

          • Engineers look for causes. Lawyers look for compensation. Shakespeare was right.

            And the bad data that you fear is the exact same data – regardless of whether the “pilots” are people or machines. Fact is, machines process data a lot better and a lot faster than humans do – or ever will.
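
The cross-checking of redundant inputs raised in this exchange is usually built as mid-value (median) selection plus a miscompare monitor. A purely illustrative Python sketch, with the sensor values and tolerance made up for the example (this is not the actual MCAS fix or any certified design):

```python
# Illustrative mid-value select across three redundant sensor channels:
# a single wild channel is outvoted, which is why one bad AoA vane
# should never be allowed to drive a control law on its own.

def mid_value_select(a, b, c):
    """Return the median of three readings, masking one wild value."""
    return sorted([a, b, c])[1]

def miscompare(a, b, c, tolerance):
    """Flag the channel indices that disagree with the selected value."""
    selected = mid_value_select(a, b, c)
    return [i for i, v in enumerate((a, b, c)) if abs(v - selected) > tolerance]

# Two plausible AoA readings near 5 degrees, one failed channel at 74.5:
print(mid_value_select(5.1, 4.9, 74.5))   # 5.1 - the wild value is masked
print(miscompare(5.1, 4.9, 74.5, 2.0))    # channel 2 is flagged
```

As the comment above notes, this only works as long as the designers never assumed a given input “can’t” be invalid; the voter is itself a set of assumptions.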

      • The electronics WILL ONLY do what they’ve been told (programmed) to do, Yars. If the fault-logic process that Sully used could be translated into strings of 1s and 0s, Sully’s stricken aircraft would have done the same thing, but likely landed farther along the river due to more precise control of all the variables. UNFORTUNATELY, software development at that level is horrendously expensive given the low probability of any one scenario happening. Multiply by every other possible scenario and it’s more cost effective to walk, or take a sailboat.

        My vote, keep at least one properly trained live body as the first to arrive at the scene, with the electronics ultimately obeying said live body.

        • You don’t program for every possible scenario. Pilots don’t do it that way; computers don’t, either.

          Every flight is a series of evolving circumstances. Sometimes, some circumstances can act as impediments to the safe completion of a flight. When that happens, pilots – and properly designed autonomous control systems – react to those adverse circumstances in such a way as to MITIGATE their consequences.

          Impediments are ad hoc; mitigations are ad hoc. You play with the cards that you are dealt – without having to consider every possible hand.

      • You are saying that a fully autonomous system could do what Sully did?

        At what point?

        At the point when the birds hit the plane? At the point of determining flight capability? The point when a decision was needed where to set down the airplane?

        You cannot program those moments. You can program a plane to fly automatically based upon given information like height, speed, and distance; you can have a program take an object from near space to land on a barge. But if the conditions change, if the input exceeds the logic framework of the program, then what happened to a Falcon 9 stage will happen…it crashed, because it could not adapt or react to conditions outside its input.

        As the OP stated, there is no AI, and until there is sentience, a human mind will still always have better decision-making capabilities than a computed expert system. As a software developer of over 40 years: garbage in yields garbage out. There was a plane in Europe that had its flight controls crossed, and the pilots discovered it after takeoff. They spent 20 minutes trying to stay alive doing things that were not normal. Would your expert system handle that, or just be happy to fly the plane into the ground?

        • “You cannot program those moments.” Yeah, actually you can.
          Sully faced a loss of both engines. It matters not a whit whether that loss was caused by birds, bats, or space aliens. Set up a glide; decide where to land; determine whether a re-light is possible; proceed accordingly. Communicate your situation, as time permits.

          NONE of that requires any thinking, artificial or otherwise.

          “…a human mind will still always have better decision-making capabilities than a computed expert system.” Quite the opposite, actually. A machine can run a thousand diagnostic checks in the time that it takes a human to say “what the Hell just happened?” And it ALWAYS will react in the way that its programmers instructed it to – after thousands of hours of careful consideration.

          • Therein also lies the problem – it can only ever do what the programmers instructed it to do. Usually non-pilot programmers, at that. Even using neural net programming, the computer is still only limited to certain scenarios.

            It’s easy to talk about automating what the crew of UA232 did, or US1549, or countless other flights that experienced something new and did what no crew had done before. They learned from previous similar events (and had some fortune on their side) and were able to effect a successful (or at least partially successful, in the case of UA232) outcome. The only way to accomplish this with an automated airliner is to constantly provide updates to it. And with each update you run the risk of introducing new errors. Even if it’s just an updated database of “if this happens, do this,” it is still prone to error.

            For those of you who feel comfortable riding in a fully-autonomous airliner, go ahead. You just won’t ever convince me to do so as well.

    • It was decided long before Sully and Jeff, though not recorded for all to see: the B767 that ran out of fuel, with the pilots dead-sticking it onto a drag strip.
      https://en.wikipedia.org/wiki/Gimli_Glider
      But it’s coming anyway, starting with freighters, moving to passenger aircraft. The airlines can make lots more money by putting passengers where pilots used to sit (and not paying pilots). The big question is where does one get the money to buy a ticket when AI has taken over all the jobs?

  5. One only has to look back at the famous 777 landing in San Francisco where two experienced pilots watched while their aircraft captured an ILS with a NOTAM’d-out glideslope, finally noticed the aircraft appeared to be too low, and then waited for the auto-throttles to correct. In that case, the pilots were “semi-professional” and didn’t do as well as a 500-hour private pilot would have done. Still remember how someone in the media called the two pilots Wi Tu Lo and Sum Ting Wong, and got hammered for it.

  6. I would echo what many others here have, in that we are sadly already there, especially in regards to most foreign carriers. Good stick and rudder skills are the purview of a minority of today’s commercial pilots.

  7. “Once UAM is proven, licensed and demonstrably safe, airlines will start asking for some of their platforms to be similar. Future inter-city aircraft will take the UAM approach and scale it up,” he wrote.

    “Once UAM is proven, licensed and demonstrably safe”…this phrase is the tendons and muscles attached to the Achilles heel of this UAM semi-professional-pilot/potential-fully-autonomous debate.

    Once UAM is proven…with range, battery surplus, motor power, usable useful load, safe infrastructure specifically designed for UAM use, within the complete package of a cost-effective and profitable UAM system. At the same time UAMs are being proven, yet-to-be-established FAA regulations covering all aspects of revenue flights (including integration with manned aircraft, manufacturing certification, and operating limitations for UAMs) must be in place. After all of that is met, UAMs have to demonstrate safety. That requires use, time in service. Until you have met these parameters at a minimum, you have no benchmark for a semi-professional pilot. Nor for autonomous flight either. All of this has to be in play to have a “semi-professional pilot” enter the picture. And it can only happen after all the benchmarks have been met with a professional pilot at the controls or, like the SpaceX flight crew, someone extensively trained via conventional aircraft methods on board, ready to intervene when and if circumstances require.

    That SpaceX crew did not start in a sim, graduate in a sim, sit arms folded in subsequent post-training sim time, and then simply step on board and travel to the ISS. No, these fellows know how to fly conventional aircraft in earth’s atmosphere. They used that accumulated knowledge base, in addition to a complete understanding of the SpaceX systems, in addition to manual/hand-flying practice in the sim.

    Another aspect that seems to be left out of this debate is deterioration. Everything in outer space, but especially anything within the earth’s atmosphere, is subject to decay. All components decay. Everything within this planet’s atmosphere, including circuit boards, processors, motors, servos, brain boxes, etc., decays, deteriorates, and simply wears out. And there is no system that can diagnose, repair, and rejuvenate on its own. AI does not have the ability to repair and rejuvenate, create, manufacture, complete, test, and then install a worn, defective, failed part. Somewhere in this chain, human interaction is required for a machine to be sustainable. It is not self-sustaining.

    Only a living human being has any internal mechanism that can repair, rejuvenate, and recreate itself. Even unhealthy humans have some internal ability to repair themselves. And this includes the mind. Yet, in the end, even human beings are subject to the same decay process as every living organism, cell, plant, DNA chain, and animal. Even the earth itself is in a state of decay that exceeds its ability of rejuvenation. Collectively, we are in a long glide to an inevitable crash.

    As was well pointed out earlier in these comments, machines, AI, and computers are not sentient. They cannot feel, cannot absorb nor interpret environmental sensations that are brought to the conscious human mind, or, for that matter, those unseen stimuli to the body that say…something is wrong. Color, sound, musical strains, and nutrition affect the human mind. But so does the thought of one human being toward another. Over-simplistically, call it vibes. And many times those vibes call a human being to action in extraordinary circumstances, to perform well above any established or previously known norm, making for a super-human performance that has no scientific basis. I would have one consider Desmond Doss on Hacksaw Ridge during WWII. His actions, seen by both the Japanese and the Americans, demonstrated an inexplicable feat of human endurance that made him essentially super-human, even beyond the capability of the weaponry at hand to destroy him. And he did this over and over again for almost 16 hours. Once again, in full sight of both opponents.

    That is the “secret sauce,” when combined with superb training in manned aircraft, that made up the very fabric of Captain Haynes and crew on that DC-10, and of the team of Sully and Skiles, including the flight attendants on board the Airbus over the Hudson, that cannot be replicated nor initiated by AI, autonomous machines, or algorithms. Besides, the end user of this UAM movement is human beings, with all our flaws. There will be no world run by machines for machines. Ironically, it is human beings, warts and all, who have to write the initial lines of programming to launch any machine, including AI. And it is the same flawed human beings, warts and all, who will have to evaluate the “perfection” of machines to determine when perfection is reached.

    Opinions are like rear ends…everyone has one, including Forbes magazine and this particular writer. I think ATP made the right call in ordering more Pipers for flight training. A few of those pilots might be going to Mars, with that piston-single experience an incalculable asset for circumstances that can only be understood by a sentient human being. Houston, we have a problem…which was solved by human beings, with autonomy helpless.

  8. “The fewer the facts, the stronger the opinions”.
    Arnold Glasow
    He also advised, “It’s harder to conceal ignorance than to acquire knowledge.”
    I would submit that the guy (troll) here who seems to think he knows so much more about flying decision-making than everyone else should also get a little actual flying experience before speaking about what he currently doesn’t know.

        • “Yours is the voice of theory with no practical experience.”
          Thanks for today’s laugh. I needed that.

          Here’s a very brief bio, from Air Facts Journal:

          Tom Yarsley is a retired international-award-winning design engineer, who spent 45 years on the bleeding edge of commercial and military technology, in aerospace, marine, environmental, entertainment, and telecommunications applications. His specialties include business-process-automation, and radiation-hardened mission-critical autonomous control systems. He is a multi-thousand-hour pilot and flight instructor, whose greatest joy is giving primary instruction to aspiring aviators. Yars has been described as having “a face that’s perfect for radio.” A volunteer at his high school alma mater for 54 years, he still can be caught behind a microphone at local sporting events.

          • YARS: Thank you. That helps me understand some of what you are saying. Not that I buy into it, but at least now I have some insight I could not grasp from your posts.
            To professional pilots who have spent their entire lives trying to avoid accidents and studying accidents, your comments can seem callous. Such as: flying an airplane does NOT require even one “original thought.”
            Many times there were serious issues that were not even in the manuals, and that required good systems knowledge to work through.

  9. ‘He has nothing to contribute other than his trite, uninformed opinions. He is a troll.’

    It’s really just an opinion thread, though self-deprecation and poor grammar are always welcomed.

    Personally, I am challenged by differing opinions. Helps me learn about myself and understand things better.

  10. I’m in your camp, YARS; there’s lots of proof that computer autonomous intelligence is reliably working today. All of transportation has moved to computer-controlled engines. Can anyone give an example of a non-computer-controlled powerplant in any industry that performs more efficiently? What about reliability?

    Who can argue that digital autopilots are not more reliable and capable than the previous non-digital autopilots? Auto systems fail so rarely that some pilots don’t even know how to turn them off and take over when they do fail (MCAS, for example).

    • As far as reliability goes, the PT6, which is still not computer controlled, has one of the best records out there. I can’t comment on FADEC engine control systems in general, but the Hawker 800XP I fly is digitally computer controlled, and it has a switch to turn off the computer when it malfunctions airborne.

      I still remember a few years ago when Bill Gates was criticizing the CEO of GM for putting out products that he considered “old fashioned”. That CEO then said if his vehicles were as reliable as the software and computers he put out, GM cars would be breaking down every mile for a reboot.

      I’m sure some day automation to fly an airliner with no pilot input may be invented, but I doubt that will happen in my lifetime.

      • You might want to read a piece of mine, over on AirFactsJournal.com
        It’s very brief, but it does highlight the fundamental – conceptual – difference between an autopilot/FMS, and an autonomous control system.

  11. In so many debates, all or nothing thinking seems to dominate. That makes no sense to me.

    General aviation has made some changes, some evolutionary progress, albeit at a glacial pace compared to just about any other mode of transportation. A lot of that is due to a plethora of government regulations designed for our safety (plenty of opportunity for debate here). Another is liability/litigation (more opportunity for debate). A dominant factor is the ability or inability to amortize the potential improvement through design, certification, and production costs because of lack of economy of scale (a lot of room for discussion and debate).

    Commercial/military aviation has made many quantum leaps forward designing, implementing, and utilizing leading-edge technology. Its economy of scale is totally different from that of GA. Technology inspired by the military and fulfilled by commercial aviation is far more readily used and understood by the average American citizen than anything in GA.

    However, the training aspects of GA, commercial, and military aviation are very similar. The aspiring pilot starts off in little airplanes and eventually moves into larger, faster, more sophisticated aircraft, as the segment he/she aspires to fly in requires. Folks can start off in a non-electrical, Armstrong-starter-equipped Cub and learn to fly. However, for a PPL the student would have to learn and master, to PPL minimum standards, another, more technologically sophisticated airplane, requiring a minimum working knowledge of, say, a Cessna 150. Once the PPL check ride is passed, the newly minted pilot can go back to the Cub, never looking back at the more sophisticated, better-equipped C150 that the PPL requires but the basic Cub lacks. Some skills, such as mastery of the tailwheel, will have long-term positive effects on how that student adapts to and flies the C150, even if it is just for the PPL check ride. Night-flying requirements, cross-country flights, and a basic understanding of flying in controlled airspace, along with the basic use of the necessary avionics the C150 has and the Cub does not, will still be beneficial for the Cub driver, even if the intention is to fly locally, day VFR only.

    Previous glider time proved to be an asset for Sully, crew, and passengers. That glider time was never intended to be an asset for flying commercial airliners. But in the case of an Airbus suffering a complete loss of all power shortly after take-off, an unintended asset learned many years prior to the ditching in the Hudson proved to be invaluable to Sully, Skiles, the remaining crew, passengers, and all of the potential victims living on both land and sea beneath that now very large glider.

    All-or-nothing thinking does not have a rightful place at the aviation table…at least in my opinion. It has no place in flight training, nor does it have a place in technology. Operating in a three-dimensional world exponentially increases the possibilities of failure and the odds of failing to anticipate catastrophic circumstances, such as a double engine flame-out from bird/FOD ingestion causing dissimilar spool-downs, including how the onboard engine computer displayed that progression of failure to the crew. Flight training designed to a specific, narrow, and well-defined set of parameters, for the purpose of shepherding, monitoring, or intervening only in an emergency, does not allow the opportunity to use any other asset that might prove to be the factor that determines a safe outcome vs. a catastrophe.

    Regardless of whether it is a UAM, a fixed-wing light plane, an airliner, an SR-71, or the SpaceX Dragon, all of those operate in the three-dimensional world of flight, launching from terra firma (or maybe a sea platform) and dealing with all the effects of the earth’s atmosphere and weather. In the case of SpaceX, it has to do all that, plus get out of the atmosphere, travel through space, and then return.

    The debate starts with the superiority of automated flying over human flying. Therefore, since automation has been proven superior in the largest percentage of cases, flight training can supposedly be very narrow and duty-specific, generating a sort of flying manager, an aerial shepherd, overseeing the flight but not really involved with its actual flying. I believe the three-dimensional world demands more than that. Maybe not today, but it will not always be on the timetable of automation. Nor will automation include all possibilities, be able to encompass all scenarios, and anticipate every possible failure, whether mechanical, introduced by weather, or caused by manned aircraft within its proximity.

    This is why all-or-nothing thinking cannot be allowed to dominate the discussion. There is no debate that a modern autopilot can fly most airplanes more efficiently than the human pilot, and likewise manage large turbines better and more efficiently. But it cannot respond to anything outside of its computerized, pre-programmed parameters. There are plenty of mostly reliable automobiles to prove that computerized engine management can outperform the old carburetor-mixed fuel fired by points ignition. Equally true is the number of engine issues, including failures, or transmissions refusing to cooperate with the engine, while failing to trip a fault code, yet failing enough to make it difficult or impossible to drive or start the car. According to all the data, there is nothing wrong. Many have seen the comment “cannot duplicate the problem” but still have a vehicle that cannot be used.

    I contend the sentient human can sense a problem, in many demonstrated cases, sooner than the computer. Many of us note a change, for whatever reason, before the automation sees a parameter that has moved far enough to set off the proverbial fault code. That is reason enough, to me, to see that autonomous, pilotless flight may be viable in some cases, but not all cases. The earth’s atmosphere is fluid, always changing, in a state of decay or rejuvenation but never static. That is one place where all-or-nothing thinking cannot provide all the answers.

    Aviation, in my Johnny Lunch-bucket opinion, will always be a marriage of sentient human involvement with advanced technology. That will demand a complete professional pilot (not a semi-pro from the farm system), both for the satisfaction of the human end user and to take care of situations in which, according to the computer, nothing is wrong. That is when the human, professionally trained in many aircraft and disciplines, will be able to evaluate the situation sooner, act faster, and perform inexplicably better than automation that has no sense of the environment other than what a human being programmed into it.

    So, YARS is welcome at my table, can fly with me, and can probably teach me all sorts of things about useful technology and the art of flying well. I am confident he has not succumbed to all-or-nothing thinking, because he spends time teaching people the art of flying, not the command of automation as it relates to flying. I believe the two, the accredited engineer and the graduate of the “Hard Knocks Vocational School,” can get along and learn a bit from each other. That is what is so cool about aviation.