The just-released Indonesian accident report on the 737 MAX crash in October 2018 will have something for each of us and one thing for all of us: It’s a poster child for that unassailable nugget of aviation wisdom that accidents comprise a chain of errors leading to a blackened crater. The chief investigator said as much and rattled off the nine links.
For those hard-bitten cynics who said the Indonesian report would be a political whitewash absolving the airline and the pilots, squint a little and you can support that. Read hierarchically, Boeing’s at the top, aircrew performance near the bottom. But it’s not nearly the puff ball some predicted and, in my view, is relatively evenhanded. It lets the pilots and airline off easier than I would and Boeing a lot easier.
In the almost year to the day since this accident occurred, I’ve read enough to sense a powerful yearning to blame one of two entities: Boeing or the pilots. The pilots or Boeing. But as with most accidents, it’s not so simple. The blame-the-pilots argument was eloquently made by famed aviation writer William Langewiesche in a 14,000-word New York Times Magazine article assuredly titled “What Really Brought Down the Boeing 737 Max?” A month and a half before the report appeared, the deck on that article presaged it: “Malfunctions caused two deadly crashes. But an industry that puts unprepared pilots in the cockpit is just as guilty.”
Both of those things are true, but unsatisfying if you want to assign relative weight to links in the chain. Was the pilot link 70 percent and the rest, added up, the remainder? Or were Boeing’s tragic missteps in certifying the MAX the overwhelming driver, with the pilots merely abetting? “Cause” and “contributing” are two different things.
Take your pick. Mine is that the relative weight doesn’t matter, because accident investigation isn’t intended to assign blame but to learn enough to prevent the next one. And it seems clear to me—although Indonesia’s National Transportation Safety Committee didn’t say as much—that this was a systemic failure; an uncharacteristic lurch back to the bloody days of the 1950s, when multiple crashes a year made it worthwhile to maintain flight insurance kiosks in airports.
We don’t do that anymore because the contemporary airline accident rate is functionally zero, at least in the U.S. On the hardware side, we got there with an ever more refined science- and data-driven certification process jollied along by just enough internationally standardized regulatory oversight to protect the industry against its own excesses. But given how the FAA’s Organization Designation Authorization has worked, the industry—mainly Boeing and Airbus—has done impressively well at avoiding disasters. Until the MAX came along. And two crashes of the same type within six months is a disaster.
The Indonesian investigators didn’t venture into these waters, at least not very deeply or vigorously. They offered a vague recommendation that the FAA review the ODA process. We reported that the Joint Authorities Technical Review, a consortium of international regulators, concluded that the FAA went overboard on Boeing’s ODAs and needs to step up and step in. But former NTSB Chairman Chris Hart, who chaired that group, insisted the system isn’t broken.
“The U.S. aviation system each day transports millions of people safely, so it’s not like we have to completely overhaul the entire system, it’s not broken. But these incidents have shown us that there are ways to improve the existing system,” Hart said in a speech before the JATR report was released.
Credible and soothing as that sounds, it’s still a soft pedal. Boeing declared, or at least believed, that the MAX had the vaunted 10⁻⁹ reliability, and the FAA’s job was at least to check the math. It failed to do so, in part because Boeing wasn’t entirely forthcoming with the FAA and had unrealistic expectations of what pilots could be expected to handle and fix. The report did say that. That’s a backhanded way of saying it was Boeing’s fault that the pilots couldn’t handle the emergency that MCAS—a system they didn’t know about—threw at them.
The immediate cost of that was two crashes and 346 deaths. The longer-term cost is playing out in the “MAX effect,” as certification projects at all levels get additional scrutiny from an FAA now fearful of political blowback. Justified or not, the delays are piling up.
The report was critical of the MCAS design, relying as it did on a single AoA sensor, with buggy software that failed to apprise the pilots of faults in a system they knew nothing about because it wasn’t described in the documentation. In a world of perfect maintenance, this might never have surfaced as a problem, but Lion Air’s maintenance was anything but perfect.
By international standards, its record keeping was shoddy and its understanding of the MAX’s complex systems was incomplete, culminating in the dispatch of a flyable but unairworthy airplane. Aggravating that, and related to Boeing’s poor documentation, the crew that flew the flight immediately prior to the accident flight experienced the same faults as the accident crew.
But those pilots failed to convey the information that the left-side stick shaker was activated continuously and that the trim was in an intermittent runaway condition—operated by a system they didn’t know existed. That meant that both maintenance technicians and the next crew were in the dark. Because of that and the fact that documentation didn’t alert the crew or the maintainers that the airplane lacked AoA disagree capability, the technicians fixed the wrong thing. They flushed the pitot system and released the airplane for service, all but assuring that the accident crew would confront an abnormal that didn’t present as plain-vanilla runaway trim.
The previous crew, aided by a jump seater, had contained the misfiring MCAS by using the stab trim cutout switches, and while their handling of the abnormal was admirable, the captain still decided to proceed normally to the destination with the stick shaker continuously activated when he should have landed immediately. Confronted with the same situation, the accident crew wasn’t as competent. It failed to declare an emergency and mishandled the response to the runaway trim. Local ATC added to pilot workload by issuing a stream of directives. Eventually, the pilots of Flt. 610 lost the trim tug of war with the faulty MCAS activation.
Among the accident report’s 82 detailed findings was the conclusion that Boeing considered the failure of a single AoA sensor coupled with MCAS activation to be beyond extremely improbable, and on that basis justified its decision not to document the system. This was supported by sim flights and other testing that didn’t consider the ramifications. The Indonesian report found that Boeing’s confidence in pilots to sort out such faults was misplaced. In the dry language of the post-mortem, it cited “assumptions … about pilot response to malfunctions which, even though consistent with current industry guidelines, turned out to be incorrect.” This contradictory finding can be read to suggest either that Lion Air needed better pilots or that Boeing simply expected too much of all pilots.
As for the pilots themselves, the report found the captain’s CRM skills were wanting and that the first officer was confused and didn’t know the required memory items for airspeed-disagree alerts. Its review of the first officer’s training records found numerous complaints about his skills and handling of simulator exercises. He had complained of an early duty call on a day he wasn’t scheduled to fly, and the captain had the flu, with a hacking cough. Could the pile have been made any higher?
The safety committee issued a long list of recommendations related to oversight of certification and to human factors such as training manuals, crew behavior in emergencies and the effect of multiple alarms on how pilots respond. While it dinged Lion Air for suboptimal hazard reporting methods and record keeping, it was curiously silent on the lack of an overall safety culture and on pilot training.
If there’s a shortcoming in the investigation, that might be it. I’ve heard professionals argue—some themselves MAX pilots—that the crew should have been able to handle the MCAS runaway and that if they had, neither of the 737s would have crashed. I think this is undeniable. But also undeniable is that through a series of bad decisions and lack of regulatory oversight, Boeing built an airplane that confronted the pilots with a confusing abnormal. The fact that it happened twice in six months shows that Boeing was wrong in its understanding of how improbable such an event could be, regardless of what ignited it and irrespective of what acceptably skilled pilots should be able to handle.
In that sense, I think it’s wrong to separate what the pilots knew, didn’t know or did from what Boeing knew, didn’t know or did. By design and with great success, we’ve lumped everything into one internationally approved safety-driven system. And in this case, the system failed.