NTSB Addresses Unexpected Pilot Behavior, Multiple Alarms


The NTSB issued a report on Thursday asking the FAA to ensure aircraft regulators and designers consider the effects of multiple cockpit alarms and what can happen when pilots don’t react as expected to emergency situations. According to the NTSB, the report’s seven recommendations stem from its support of the ongoing investigations by Indonesia’s Komite Nasional Keselamatan Transportasi (KNKT) and the Aircraft Accident Investigation Bureau of Ethiopia into the fatal crashes of Ethiopian Airlines Flight 302 on March 10, 2019, and Lion Air Flight 610 on Oct. 29, 2018, both Boeing 737 MAX aircraft.

“We saw in these two accidents that the crews did not react in the ways Boeing and the FAA assumed they would,” said NTSB Chairman Robert Sumwalt. “Those assumptions were used in the design of the airplane and we have found a gap between the assumptions used to certify the MAX and the real-world experiences of these crews, where pilots were faced with multiple alarms and alerts at the same time.” Sumwalt emphasized that the report (PDF) does not analyze the actions of the accident pilots.

The recommendations include ensuring that system safety assessments for transport-category airplanes “consider the effect of all possible flight deck alerts and indications on pilot recognition and response” and incorporate design enhancements, pilot procedures, and training requirements to “minimize the potential for and safety impact of pilot actions that are inconsistent with manufacturer assumptions.” The board also recommended the development and incorporation of tools and methods “for use in validating assumptions about pilot recognition and response to safety-significant failure conditions as part of the design certification process” along with development and implementation of design standards for “aircraft system diagnostic tools that improve the prioritization and clarity of failure indications (direct and indirect) presented to pilots to improve the timeliness and effectiveness of their response.”

        • That’s not how you program it.
          You code for resources, and the lack thereof. The CAUSE of a degradation of resources is rarely important, except to the extent that a cause-specific response will mitigate that loss.

          Pilots don’t train for a meteor strike on an engine; the aircraft manufacturer doesn’t write an emergency checklist for it. When ANYTHING impacts and consequently disables an engine, the nature of the impacting object is immaterial. The pilot (or the autonomous control system) deals with the consequences of the loss of the resource. It’s an engine failure, and/or an engine fire – procedures for mitigation are established and well-understood.
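The cause-agnostic pattern this commenter describes can be sketched in code. In this minimal Python sketch (all names, consequences, and checklist items are hypothetical, invented for illustration), mitigation procedures are keyed by the *consequence* (what resource was lost), and the cause is carried along only as information, never branched on:

```python
from enum import Enum, auto

class Consequence(Enum):
    """What was lost -- the thing checklists are actually written for."""
    ENGINE_OUT = auto()
    ENGINE_FIRE = auto()

# Mitigation is keyed by the consequence, never by the cause.
# (Checklist contents here are placeholders, not real procedures.)
CHECKLISTS = {
    Consequence.ENGINE_OUT: ["secure engine", "trim for asymmetric thrust"],
    Consequence.ENGINE_FIRE: ["fuel cutoff", "fire bottle", "divert"],
}

def respond(consequence: Consequence, cause: str = "unknown") -> list[str]:
    # The cause (bird strike, meteor, fatigue crack...) is recorded but
    # deliberately ignored when selecting the procedure.
    return CHECKLISTS[consequence]

# A bird strike and a thrown fan blade trigger the identical procedure:
assert respond(Consequence.ENGINE_OUT, "bird strike") == \
       respond(Consequence.ENGINE_OUT, "fan blade failure")
```

The design point is that the dispatch table stays small and well-understood because it enumerates losses, not the effectively unbounded set of things that can cause them.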

    • Most stall-spins are the result of a pilot not reacting in the way that their instructor assumed they would. Heck, most accidents are that way. The MAX setup was heinous—single point of failure with no warning—but because it was [should have been] survivable, it exposed a different kind of failure.

  1. In my 50+ years of aviation, I have watched “warnings”, especially aural warnings, go from simple to what often can sound like a music concert. “Way back when” about the only aural warning was the fire bell and it was just that, a fire bell. Got your attention and you knew instantly what the problem was and what the required first steps were. The rest, well, in most cases the MASTER CAUTION light illuminated causing you to look at the annunciator panel(s) to see what your next challenge was. In training, we also often faced multiple emergencies at once. A lot of work, sure, but it kept one up on one’s aircraft and caused you to prioritize the issues so you could put the aluminum back on the ground as safely as possible.
    Not saying that what I learned was any better or worse than what folks face today. I do know that today, training is a dirty word amongst management (or private aircraft owners) because it creates a cost center, not a revenue source. Way too much CBT and way too little classroom and simulator time. Why… too costly. Teaching only the barest minimum is not smart or healthy. As I got to the end of my operational career, I heard way too many versions of “you don’t need to know that”. A pilot should know his or her aircraft thoroughly, not to build or fix it, but at least to be fairly familiar with what is lost when that MFD says something is broke. Heck, understanding the systems better might just result in being able to “fix” the issue anyway.

    • That is my biggest concern here as well. Technology works well in most cases and I know most pilots know what to do. What worries me is the greed factor that hinges on our security and lives. Those who look for less training to cut costs, and the industry bending over backward to meet those targets, are the culprits here. Despite these costs going down, my airfare goes up. In aviation that has no place when my life is at stake. Airlines play a dirty role and are rarely mentioned. We can blame Boeing and the FAA for their part, but airline companies have a lot to answer for.

      As long as profits come first, we’ll have more problems. The answer is not only more training but getting the right people at the decision-making levels.

  2. “ … crews did not react in the ways Boeing and the FAA assumed they would.” Alarm overload? Saturation of the senses? Confusion? AF447? “Music concert,” or maybe more like jazz funeral music? Cramming five pounds of stuff into a one-pound container?

  3. In 1978, at Allegheny Airlines in Pittsburgh, Pa., I received my first introduction to jet airline flying/training. I was 20 years old. That training involved detailed reviews of each system and how it worked. Classes were taught by senior engineers who had come up in the maintenance department, and we were required to know in great detail just about everything on the aircraft. What powered every light and switch, and the temperatures and pressures that caused lights to illuminate and things to happen, were mandatory. The classes were 8-10 hours per day with study required in the evening and took, as I recall, about 3 weeks. Then followed 7 days of ‘fixed base sim’ to learn the systems, followed by the ‘real simulator’. The simulator involved multiple and compound failures that required a pilot to think and, often, amend the way the checklists were used to account for said failures. This type of training method was still applied in the ’80s when I was trained on the B727 with Eastern. By this time, simulators were almost as realistic as they are today, and compound failures were still required and quite realistic. Since then, I have been trained on and flown the B737 200, 300, 400, 800 and MAX. I am current on the MAX.

     To say the training has been ‘dumbed down’ would be an understatement. Gone are the days spent in classrooms with instructors who knew the systems and could explain them. And gone are pilots who understand them. It’s all CBT and ‘need to know’. As a 20-year-old first officer on the BAC 1-11 in 1978, I had much greater knowledge and understanding of its less sophisticated systems than at least 99.5% of the captains of any B737 flying today have of their aircraft. Yes, the systems, when working correctly and when properly used, make things easier. And as a result, in the interest of saving costs, the training has been ‘dumbed down’, and the computer-based training we now receive is shameful compared to the earlier training I was fortunate to have been afforded.

     And perhaps for some reason, not the least of which is that they are more difficult to manage and will result in greater numbers of pilots failing an initial upgrade or recurrent check, gone are the multiple failures. Check rides that were difficult and full of surprises are a thing of the past. Pilots were once required to demonstrate hand-flying skills; now the emphasis is on managing the autopilot. We know just about what to expect on any check ride. Should it be a surprise, then, that pilots with minimal hand-flying skills, who have minimal knowledge of the tube they fly, who have had few if any compound/multiple failures and few surprises, react poorly when they have to deal with a compound failure which to them is a surprise? While some are more adept than others, there are no born pilots. If we want pilots who can react rationally, calmly and correctly to surprising situations, then they must understand their aluminum missile, and their training and checks must include surprises that cause the use of grey matter and system knowledge. Pilots in general react in a manner reflective of their training. When trained poorly, they react poorly. And this would explain the two recent MAX accidents.

  4. I seem to recall an accident many years ago (possibly a 757) in either Central America or South America where the plane departed (at night?) with blockage of the pitot probes or the static ports due to insects. The crew had to deal with conflicting indications (overspeed clacker and stall warnings) and the confusion over which warning to believe contributed to the total loss of life.
    I also recall the A380 engine failure (QANTAS?) where so many Warning and Caution messages were produced that a crew of four were unable to exhaust the action items before the aircraft was landed back at the point of departure.
    So this isn’t something new or unique to the 737. It is a matter of making the warning systems better so the aircraft designers don’t send a crew “down a rabbit hole” when there is some singular item that needs to be accomplished immediately.
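The prioritization this commenter asks for, surfacing the one immediate-action item instead of burying it under a cascade of downstream messages, is essentially a priority queue over alerts. A minimal Python sketch follows; the severity levels and the alert texts are invented for illustration and do not correspond to any real avionics suite:

```python
import heapq
from dataclasses import dataclass, field

# Lower number = higher urgency; warnings demand immediate action.
WARNING, CAUTION, ADVISORY = 0, 1, 2

@dataclass(order=True)
class Alert:
    priority: int
    message: str = field(compare=False)  # ordering ignores the text

def triage(alerts):
    """Return alert messages most-urgent-first, so a single
    immediate-action item is never buried under the cautions and
    advisories it caused downstream."""
    heap = list(alerts)          # copy so the caller's list survives
    heapq.heapify(heap)
    return [heapq.heappop(heap).message for _ in range(len(heap))]

cascade = [
    Alert(CAUTION, "GEN 2 OFF"),
    Alert(WARNING, "ENG 2 FIRE"),   # the singular item to act on now
    Alert(ADVISORY, "FUEL IMBALANCE"),
]
assert triage(cascade)[0] == "ENG 2 FIRE"
```

A real system would also need to suppress or group the consequential messages (the QF32 case the commenter recalls produced dozens), but ordering by severity is the core of keeping the crew out of the rabbit hole.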

  5. I agree with Kel T’s comments; he makes excellent points. Len Morgan, who passed away in 2005 and wrote articles for Flying magazine, was a WWII pilot turned airline pilot. He once wrote, “An airplane might disappoint any pilot, but it’ll never surprise a good one.” I recall an article Len wrote that discussed all the systems he needed to be proficient on for a new type checkout, everything from the electronics to the hydraulics and a lot of stuff in between.

    Years ago I noted the same thing to be important on every aircraft you fly. A Cessna 150 pilot lost communication in Class D airspace; he’d hit the mute button on the Unicom/Radio system. He took good advice from that: he wasn’t leaving the ground again in an aircraft where he did not know what every button and switch on the panel did.

    Now we have a dilemma where the stick and rudder skills of pilots coming up through the ranks of aircraft in a logical progression may not be as good as they once were. The 2-seater, to the big 4-seater, to that gigantic Cherokee 6 (airplanes hold that many people?), and on to truly big iron. In the MAX incidents the aircraft surprised the pilots, and for whatever reason they did not seem to be able to react quickly enough to resolve the situation. As with any fatal accident, the loss of life is the tragedy, but as always it’s important to learn from the particular mistakes, make changes and move on.

  6. Pilot error has, over the years, become the simple answer to just about every accident. Kel T’s comments are well taken. I agree that we have a problem in aviation where we mistakenly believe that automation can solve all our ills. As a result, we are in fact seeing less emphasis on comprehensive pilot training than we did at one time. This approach to flight safety misses a couple of important aspects I don’t hear talked about often.

     1) Even when a pilot commits an error or a series of errors, we frequently fail to take into account the system within which that pilot was forced to operate. In the never-ending drive for cost reductions, pilots more frequently find themselves the last opportunity to correct systemic deficiencies. These threats could fill a book, and only someone who has actually sat for a length of time in the pointy end can truly comprehend their extent. On any given flight, pilots are called on countless times to intervene in an effort to keep their flights both safe and on time. The ruthless cost cutting by many airlines actually makes these threats (AKA distractions) even more numerous than they were at one time. So now we have pilots operating at near task saturation on a regular basis. What happens when unexpected circumstances increase that task load, or the pilot for whatever reason is not operating at 110% on any given day? Things get missed, and the end result is a slightly smug and self-satisfied rendering of a “pilot error” determination.

     2) This conversation also frequently results in calls for more unmanned systems, since the pilots are “clearly” the reason for all this chaos. Not only do these calls fail to grasp the number of accidents or accident chains that pilots break on every single flight; they also fail to take into account the fact that as long as aircraft are operated in a dynamic environment as complex as the sky, one that contains limitless uncertainties, there will be a need for the kind of aeronautical risk management and decision making that no computer yet conceived can begin to manage. To be certain, automation and SOPs have made great contributions to flight safety. Still, nothing has convinced me to abandon the conviction that the best safety component any aircraft can ever employ is a WELL-trained pilot.