The Pilot's Lounge #106: Why Do Smart People Bend Airplanes?
In September I briefly escaped the grasp of the big, tattered, old recliners in the Pilot's Lounge at the virtual airport and made my way west to the annual fly-in and convention of LightHawk. It is a public-benefit flying organization dedicated to making flights in support of conservation and the environment -- the environmental air force, if you will, and its volunteer pilots happen to do some of the coolest flying imaginable. Each year it puts on a convention to thank those pilots. Again this year there were a number of excellent presentations.
One of the speakers was Key Dismukes, a LightHawk volunteer pilot who is otherwise on this planet to fly sailplanes. In order to support that habit he obtained a Ph.D. and is Chief Scientist for Aerospace Human Factors at NASA Ames Research Center. He has been involved in some very sophisticated research into the "Why?" of aircraft accidents; more specifically, into why educated, smart, competent people who hold pilot certificates make mistakes that culminate in twisted metal and broken bodies.
For those who complain that we spend too much time looking at aviation safety issues, I certainly recognize that, in the U.S., each year we waste some 47,000 people in automobiles, over 150,000 due to errors in hospitals and untold tens of thousands simply because they have made the decision to gorge themselves into obesity. I also know that more people die falling in the bathtub than in aircraft and that, as a nation, we have never been terribly good at focusing on the real risks we face. Nevertheless, when we are engaged in an activity that has a significant inherent risk, we owe it to ourselves, and our loved ones, to try to figure out how to reduce the risks as much as possible, even if other endeavors kill and maim far more of our fellow citizens.
It's also gratifying to recognize that, because we have focused on aviation safety, we happen to have done a pretty good job. Over the years we've taken what was once an extremely risky way of getting from one place to another to where bizjet and airline travel have become roughly tied as the second safest way to travel in history (mountain cable cars are still in first place).
There is a continuing determination to make flight even safer, and Key Dismukes and his team at NASA -- where the first "A" stands for "Aeronautics" -- have taken the research and analysis in that area one step beyond what has been done to date.
Key Dismukes recently collaborated on a book containing the results of the extensive research done on the subject, "The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents." It should be available soon. If you are interested, there is information on a NASA web site. The book presents the study of roughly a score of airline accidents and the rather dramatic conclusions drawn from them.
Key's presentation in September outlined the approach his team took to its research into accidents, their findings, and some recommendations for improvement. The recommendations included some things that any pilot can use on every flight to reduce the risk of an unhappy conclusion.
In looking at the approach Key and his team took to their research into accidents, I saw that they did their best to keep an open mind, despite the fact that the NTSB had already published a "probable cause" for each. I was struck by the emphasis he placed on avoiding "hindsight bias" during the examination of each accident. His team was very careful to look at each stage of each accident flight with only the information that was available to the pilot/flight crew at that point in the flight and not with the benefit of what was learned later in the flight or after the accident. After all, at each point of the flight the pilot does not know the outcome, and does what seems reasonable at that point, given what he or she knows at that moment, within the limits of human information processing.
Although Key's research involved airline accidents, much of what was learned applies to general aviation, and Key's presentation for LightHawk was geared to general-aviation operations. I felt that the conclusions reached by the scientists applied across the board, largely because their technique went beyond identifying what went wrong in a particular accident to find the underlying factors common to many of the accidents. We have all seen procedures instituted or regulations promulgated in response to a particular accident, and we have seen that they were not noticeably effective in preventing the next one. That's because they attacked a symptom and did not go deep enough to deal with the underlying factors that increase the risk faced by pilots when they strap on an airplane and find themselves in an abnormal or hazardous situation.
Very Good Pilots Make Mistakes
At least in general aviation, I've noticed there is a bias, sometimes spoken aloud, that a pilot who made some sort of a mistake while flying and had an accident was either not terribly bright, lacked basic skills or just plain didn't have the magical "right stuff." As an instructor, I've certainly seen pilots with poor skills or who weren't terribly bright or had lousy judgment, and I've been convinced that a few were going to crash airplanes -- and I wasn't surprised when they did. I've also seen some extraordinarily good pilots who were possessed of all the right stuff imaginable, and who then made mistakes and crashed.
There has been research into this topic, both by Key's team and others. The conclusion of all of it is that, when one looks at the spectrum of accidents in which a mistake on the part of a pilot was involved, the pilots who made mistakes did not come from the less qualified or less intelligent side of the bell curve of pilots. They were, as a whole, no better and no worse than anyone else.
An error on the part of a pilot, by itself, is not de facto evidence of a lack of competence, skill or judgment.
One way to confirm this conclusion is to look at the Aviation Safety Reporting System (ASRS), which contains reports of mistakes made by pilots and voluntarily reported to NASA. What you'll find is that those mistakes, made in flights that had a successful conclusion, are the same ones that were made by pilots in flights that crashed. The difference is that in the accident flights, the mistakes combined with other factors to lead to an accident.
What is interesting is that even though pilots make mistakes, they make them in a different way than laypeople do. Pilots have undergone a degree of training and education and may be considered experts. Experts, when faced with a task for which they have been trained, will perform it with almost 100% reliability in routine conditions -- far more accurately than a layperson. But pilots, being human, still make mistakes. What Key's team found was that, as experts, pilots make mistakes in ways that can be roughly predicted. Because of that, pilots can be educated as to the conditions in which they are more likely to make mistakes, how to recognize the onset of those conditions, and how to avoid making mistakes or catch and correct them before they combine with other factors, or other mistakes, to create an accident chain. In addition, designers of aircraft, avionics and cockpit warnings and displays can recognize that even very good pilots make mistakes under predictable conditions, and can design their products to provide accurate, easy-to-interpret information that does not give false alarms. (Inaccurate, difficult-to-interpret information and false alarms are all conditions under which pilots have been shown to be more likely to make mistakes.)
As an aside, while we are accepting that even the finest pilots in the world make mistakes, Key's team also concluded that human pilots are better than computers, because computers have an extremely limited capability for dealing with out-of-the-ordinary situations and for making the correct judgment when there are a number of factors of varying importance to consider. On top of that, human pilots are expected to -- and do -- make up for deficiencies in the design of systems, aircraft, airports, avionics, displays, warnings and ATC, and do so effectively every single day.
The scientists also found that once a situation becomes challenging and a mistake is made, it is not unusual for the situation to "snowball" -- to go downhill faster and get bigger -- and for the risk of further mistakes to increase.
Key's team identified a factor that figured into a significant number of accidents and must be taken into account even though it is generally a good thing: the desire to complete the flight to the planned destination. Yet, sometimes it is precisely the wrong thing to do when things are going south and the pilot's cognitive ability is adversely affected by massive sensory overload. The desire to press on and land can be deadly.
For any number of very good reasons, we pilots are spring-loaded to continue and complete the original plan. Remember kindergarten, when at least one of the items on your report card was whether you could be counted on to finish what you started? It gets conditioned into us early. The military gives medals for completing assignments despite ever-increasing odds. The problem, when it comes to safely operating an airplane, is that this mission orientation -- plan-continuation bias -- means we tend to press on in spite of changing conditions. The bias gets stronger as we near completion of the activity, and it actively prevents us from noticing subtle cues that the conditions in place when we made our original plan have changed. A good example is the reluctance to carry out a missed approach when the weather has gone below minimums, and the willingness to go well below minimums in the hope of finding a runway. Another is a strong unwillingness to make a go-around.
The effect of plan-continuation bias in making pilots oblivious to subtle cues, especially near the end of a flight, is often seen in a pilot-induced-oscillation landing accident. The pilot is determined to carry out the landing even though errors have snowballed, inducing sensory overload and degrading the ability to think strategically and recognize that things have really gone down the tubes; that determination is a major factor in why a pilot starts making gigantic pitch-control inputs, out of synch with the airplane, and breaks off a nosewheel. Things go bad so fast, creating such a high workload on top of the plan-continuation bias and sensory overload, that Dismukes concluded even a well-trained pilot often cannot get beyond reactive thinking to the more difficult, higher level of cognitive activity -- proactive, strategic thinking -- and realize that the only solution is to go around.
Inducing Mistakes by Pilots
So what are the factors that Dismukes' team found to be likely to cause expert pilots to make mistakes that can then combine with other factors to lead to an accident? None of them will surprise you.
The mistake inducers can almost be boiled down to one: stress.
A little stress is a good thing. Without it, we tend to be bags of jelly, not really able to deal with sensory input. Stress wakes us up, gets the blood flowing, and makes us alert and ready to deal with what's coming at us. Something being stressful is not, by itself, necessarily bad.
The problem is, at some point, stress does very bad things to a pilot's ability to carry out the functions required to be a pilot. This is true no matter what the source: family problems, difficulty at work, bad weather, a system malfunction on the airplane, ATC overload, unfamiliar airport, etc. At some level it goes from good, to not so good and then just plain awful, and it "hampers skilled performance by narrowing attention and reducing working memory capacity." To paraphrase Key Dismukes, we effectively get dumber through no fault of our own.
Just when we pilots need all of our faculties most -- in extremely stressful, high-workload situations -- those faculties progressively abandon us, and our level of functioning drops lower and lower on the cognitive scale, from strategic to tactical to reactive. That's how we evolved. Our ancestors got into it with a woolly mammoth and survived because stress caused their brains to function in a reactive mode, concentrating completely on surviving that one encounter rather than worrying about whether they might have made a better plan or whether they should have been hunting that particular mammoth at all.
Most of the time the pilot's heightened senses and reactions get through the immediate crisis. The problem lies in the ability, for example, to make the decision to abandon the approach to this airport that is on the edge of the thunderstorm while still outside the outer marker and on the edge of another boomer. When the pilot is under that much stress, the thinking process often gets reduced to a focus of just getting the airplane on the ground because there is just so much input that there is no cognitive capacity left to step up to a higher level of reasoning and decide that this plan stinks. No capacity left to decide to point the airplane toward another airport that has better weather or at least go somewhere outside of the cells and hold until they move away from this area.
Under severe stress, we lose the ability to put the immediate situation aside, to look at it objectively, and make overall command decisions.
We also progressively lose the ability to process the information we receive. We may get a wind-shear alert, with the controller giving us wind direction and velocity at locations around the runway. Even though every single report has a direction and velocity that exceeds the published limitations of the airplane, it's very common for even the best pilot to be unable to process that information and realize it will be physically impossible to keep the airplane on the runway even if the pilot manages to touch down on it.
Combined with plan-continuation bias, under the stress of severe weather, with some fatigue tossed in, very good pilots have been known to keep on going in the midst of thunderstorms, toward a runway that has a direct crosswind far beyond the capability of the airplane, when turning 90 degrees would take them out of weather within two minutes and they would have time to settle down and decide whether to try the approach again or go elsewhere.
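The arithmetic the overloaded pilot can no longer do is, on paper, simple trigonometry: the crosswind component is the wind speed times the sine of the angle between the wind and the runway. A minimal sketch of that decomposition (the function and variable names here are my own illustration, not anything from the study):

```python
import math

def wind_components(runway_hdg_deg, wind_dir_deg, wind_speed_kt):
    """Return (headwind, crosswind) components in knots.

    Positive headwind means wind on the nose; crosswind is
    reported as an absolute value, regardless of side.
    """
    angle = math.radians(wind_dir_deg - runway_hdg_deg)
    headwind = wind_speed_kt * math.cos(angle)
    crosswind = abs(wind_speed_kt * math.sin(angle))
    return headwind, crosswind

# Runway 27, wind 360 at 20 kt: the full 20 kt is crosswind.
hw, xw = wind_components(270, 360, 20)

# Runway 27, wind 300 at 20 kt: 30 degrees off, so half the
# wind speed (10 kt) is crosswind.
hw2, xw2 = wind_components(270, 300, 20)
```

Comparing `xw` against the airplane's demonstrated crosswind figure is exactly the kind of check that stress squeezes out of working memory, which is why the study's recommendation, discussed below, is to decide the limit before the flight.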
Naturally, as you've guessed by now, weather is a factor. As weather gets worse, the propensity for pilots to err increases.
Another factor is time pressure. No kidding: When the fuel gauges say it's time to be on the ground, or the controller is giving you vectors every few seconds as you're trying to run the approach checklist, fish out the correct approach plate and program the GPS, time pressure means your chance of making a mistake goes up.
Pilot workload is a major factor in error generation. What Dismukes referred to as "concurrent task management" and workload issues appeared in the vast majority of accidents reviewed in the study. A pilot can absorb only so much information in a given period of time; under good circumstances, for example, a pilot can recall about 10 digits presented in one radio transmission. As stress increases, or as manual control of the airplane becomes difficult in questionable weather, the workload on the pilot rises and the ability to "integrate a high volume of diverse information to evaluate the situation" drops off. Pilots are more likely to "fail to note subtle cues and are less able to integrate and interpret information from multiple sources. They may also revert to a reactive mode; rather than strategically managing the situation, they may respond to each demand as it arises, without prioritizing, because they lack sufficient free mental resources to take a strategic approach. Monitoring and cross-checking may also suffer."
Workload means "multitasking." Pilots multitask all the time and, if you watch any television commercials, those who multitask are considered to be cool and "with it." Yet we aren't told the ugly truths about handling a lot of different things at once: There is an inherent difficulty in reliably switching attention back and forth between tasks, which is borne out in repeated studies. When we multitask, not only are we more likely to make a mistake on any one of the tasks, it takes us longer to accomplish all of them than if we had done them one at a time to completion. Workload demands on pilots are such that we often do not have the luxury of doing one task to completion before moving on to the next; so when we face the reality of multitasking, we must be aware that our risk of screwing up on something is higher. And because multitasking, by definition, means that one task is interrupted, it is more likely that we will forget to complete the interrupted task.
Here's what Key's study said: "The combination of stress and surprise with requirements to respond rapidly and to manage several tasks concurrently, as occurred in several of these accidents, can be lethal."
Being surprised means the risk of mistakes goes up. This is why a false alarm in a warning system or an incorrect cockpit display can trigger the mistake that starts the snowball that leads to an accident.
Interestingly enough, a pilot who goes through formal training for a procedure in a particular fashion and then learns that pilots in the field do it another way, or is told by a subsequent instructor not to do it the way he or she was formally trained, is more likely to make a mistake in performing that procedure when called to do so under a stressful situation.
Trying to do a procedure for the first time under stress (not having practiced that procedure) is a good predictor for making a mistake. Pilots who try to shut down the engine(s) to save the props after realizing a gear-up landing is necessary have a lousy record of pulling off the landing without hurting or killing someone in the airplane.
Interestingly, repetitious operations -- things we do the same way every flight, such as checklists and briefings -- have their own dangers: Pilots do them so often that there is a risk, particularly if the task is interrupted, of "looking without seeing" and forgetting items. Pilots forget the landing gear even when using the checklist. Airliners have gone off runways because an interruption due to bad weather and ATC caused the crew to miss arming the ground spoilers. Further, repetitious operations can tend to make pilots reactive rather than proactive -- no longer looking for potential problems and having a plan in mind should they make an appearance.
Onboard malfunctions, pressure from one's employer or a passenger to complete a flight and confusing ATC transmissions were also identified as items that can increase the risk of a mistake on the part of the pilot.
Is There Any Hope?
So what is the poor slob in the left seat supposed to do? We get the impression that high workload, crummy weather, distractions, system failures, stress, fatigue -- all the fun stuff that a pilot deals with every day -- are out there combining to reduce the level of intellectual functioning of pilots to the point they become blithering idiots who are lucky to ever place an airplane on a runway without creating modern art.
Not so. The reality is that we humans handle all of this very well virtually all the time. What we need are tools to allow us to do it even better. Dismukes' study didn't just claim that things were hopeless and then give up. The team came up with a number of ideas that I, for one, thought made a lot of sense, and most of them can be applied by any pilot flying any sort of aircraft.
The first recommendation, and the one that is out of the hands of most pilots, is for designers to accept that pilots are human and, even with a fair amount of training and a certain expertise, make mistakes. Therefore, systems and displays must be designed to minimize false alarms and to present simple, unambiguous information that can be evaluated and processed accurately by intelligent human beings who are under tremendous stress and at risk of misinterpreting data.
For pilots, having hard and fast bottom lines established well before a flight simplifies decision-making and makes it easier to think strategically throughout the flight -- and, especially when confronted by deteriorating weather and its associated demands, to abandon a plan, such as an instrument approach to the intended destination, rather than trying to force it to work. Having an absolute maximum 90-degree crosswind number, or a minimum altitude to descend to on an approach, may seem like a small thing, but it greatly reduces the chance of a pilot pressing on and trying to land when the stress level is through the roof. When I give a flight review or instrument competency check, the pilot must come up with hard numbers for the minimum runway length he or she will use with that airplane and the maximum 90-degree crosswind. The pilot must also write down the minimum ceiling and visibility at which they are willing to shoot a precision, non-precision and circle-to-land approach, day and night. I've been doing this long enough that pilots have called me to say that, because they had written down such bottom lines, they had the ammunition to tell someone who was pressing them to fly that the weather was too bad to make the trip.
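In effect, written bottom lines turn a high-stress judgment call into a simple rule that was decided in advance. A sketch of that idea as a go/no-go check (the numbers and names below are placeholders of my own, not recommended minimums):

```python
# Hypothetical personal minimums ("bottom lines"), written down
# before the flight, when nobody is under stress.
PERSONAL_MINIMUMS = {
    "min_runway_ft": 2500,
    "max_crosswind_kt": 12,
    "min_ceiling_ft": 600,
    "min_visibility_sm": 2.0,
}

def go_no_go(runway_ft, crosswind_kt, ceiling_ft, visibility_sm,
             minimums=PERSONAL_MINIMUMS):
    """Return a list of busted minimums; an empty list means 'go'."""
    busted = []
    if runway_ft < minimums["min_runway_ft"]:
        busted.append("runway too short")
    if crosswind_kt > minimums["max_crosswind_kt"]:
        busted.append("crosswind exceeds limit")
    if ceiling_ft < minimums["min_ceiling_ft"]:
        busted.append("ceiling below minimum")
    if visibility_sm < minimums["min_visibility_sm"]:
        busted.append("visibility below minimum")
    return busted
```

The point is not that anyone runs a program in the cockpit; it is that a pre-committed threshold requires no strategic thinking at the moment it matters most, only a comparison.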
When a pilot is consciously aware that his or her ability to think critically diminishes as workload and stress go up, and that plan-continuation bias is very real, that pilot is more likely to be proactive -- to step back and evaluate whether the original plan is still viable or whether it's time to undertake one of the fall-back plans.
Strategic Advance Planning
Pilots routinely make strategic plans for the conduct of flights. Expanding that idea a little -- imagining some "what ifs" at various points along the planned flight (weather deterioration, a passenger problem, an airplane or system malfunction, or some other hazard) and then deciding on alternative plans for each major stage of the flight -- reduces the risk of plan-continuation bias and the negative effects of snowballing workload and stress.
Slow the airplane down when the workload goes up. In the terminal area, if you are at high cruise and things are happening too fast, pull the power back to the bottom of the green and buy more time to handle the workload.
Checklists and briefings can be used in more positive ways. Carry them out a little more slowly and deliberately, possibly touching each item being checked. Initiate checklists at specific points in the flight: top of climb, top of descent, etc. Treat any interruption, or any task performed out of sequence, as a red flag, and set up consistent cues to remind yourself that there was an interruption. (For instance, if a section of the checklist is incomplete, place the checklist on the throttle quadrant; once the section is complete, stow it in its normal location.) Briefings can be used as a tool to look ahead to the next phase of flight and to question whether the situation being approached is truly routine and as expected at the beginning of the flight.
I like that someone has been willing to say that pilots are experts -- a recognition of something we've known all along, that pilots are pretty cool -- and that they are human and therefore make mistakes, no matter how well-trained, motivated, competent and capable. Once that reality is recognized, objective analysis is possible and workable ideas for improving our level of safety can be found. I think Key Dismukes and his team at NASA have completed a study that is going to benefit all of us. You just might want to get yourself a copy.
See you next month.
Want to read more from Rick Durden? Check out the rest of his columns.