Scaled Composites’ Bad Day in the High Desert


I have argued many times in the past that in order for commercial space operations to succeed, they need to be given the opportunity to fail. The truth is, aviation did not grow up in a bloodless environment. Many died learning the lessons that we take for granted today. Commercial spaceflight may indeed have to follow this same dangerous course if it is to proceed at a pace that makes it worth an investor’s while. Thousands of engineers can spend hundreds of thousands of hours at their desks dreaming up failure scenarios, yet they will still miss some. Actually flying is a fast way of flushing out these problems, but it’s also a costly way, as Scaled Composites has discovered following a tragic accident involving SpaceShipTwo last October. As we’ve reported, the NTSB ruled this week on the cause of that accident.

We cannot proceed carelessly into this new ocean, nor can we tread with such caution that we lose interest before we leave the harbor. The important thing is not to eschew risk, but to refuse to take unreasonable risk when the hard lessons have already been learned. We need to embrace the lessons of the past, no matter their source. It matters not whether we like those lessons; they’re real whether or not we salute them. Good design allows for failure; it allows for mistakes, and it generates robust systems.

And so we take a look at the outcome of the NTSB’s investigation into the loss of SpaceShipTwo, a crash that cost the life of the co-pilot and seriously injured the pilot. A few hours spent poring through the Board’s report, the transcript of the hearing, and most important, the many documents included in the public docket on the accident gave me a sense of déjà vu. I have seen this accident before, and it is always maddening how we seem so powerless to prevent it. Powerless because, like the Shuttle accidents that preceded it, we simply didn’t anticipate this particular combination of technical limitations, human judgment, and human interactions combining to cause a fatal mistake. You can’t anticipate that which you can’t imagine.

The Board pinned the proximate cause on the co-pilot’s early unlocking of the spacecraft’s feather mechanism. This occurred while the vehicle was accelerating through the transonic region, a time when aerodynamic forces were doing strange things to the airframe. The checklist called for the feather mechanism to be unlocked prior to reaching Mach 1.4, and a great deal of talk and training had gone into making sure this was not done late. Unfortunately, less emphasis was put on not doing it too early, and therein resided the fatal flaw: going transonic with this vehicle imposed forces on the unlocked system that back-drove the feather actuators and caused the vehicle to fold up like a creased piece of cardboard. The need to have the feather mechanism locked during this flight regime was known, but it had not been reinforced to the pilots in their training for several years.

When I first heard, shortly after the accident, that the unlock had occurred early, I recognized a scenario I have seen happen many times when pilots participate in both development and training runs in simulators. In training, we are expected to rehearse actions exactly as they will be taken in flight, according to procedures and rules. In development runs, we often take shortcuts in procedures to get to the part of the run we are trying to study. Oftentimes these shortcuts, such as arming the landing gear early in the Shuttle sim, carry over into our training runs, and the instructors have to beat us up for taking them by dropping the gear early once it has been armed.

So my first thought was that they weren’t training the way they were going to fly. And as is so often true, the first answer to an accident investigation was wrong. Reading through the interviews included in the docket, it appears that this co-pilot did not have a habit of doing the unlock early. The evidence to support my first answer wasn’t there.

So what, then, caused him to make this fatal mistake on that specific day? Was it excitement, the desire not to be late, the highly tuned sense of operating in a time-compressed environment? Maybe it was a combination of all of them, coupled with a lack of the necessary knowledge of why “early” in this case was bad. The crew was operating mostly from memory, but that in itself is not a crime. Things happen fast during powered flight. In the Shuttle, we had all the procedures needed for ascent and ascent aborts on flip-board checklists on the forward window frame. Most of the time in the sim, I’d get to Main Engine Cut-off, the end of ascent, and the checklist was still on the pre-launch page, because I knew the few procedures during that dynamic phase by rote. I was executing, not reading and flipping pages.

In SpaceShipTwo, the time from release to the Mach 1.4 hack was on the order of 20 seconds, and while the crew didn’t have a lot to do, they also probably didn’t have time to read and execute. So memorization is OK. But we can’t discount the effects of time compression (or dilation) in a high-stress, high-energy environment. We can imagine that with the engine operation and aerodynamic loads, there was a lot of noise and vibration in the cockpit. Some items had time criticality based on speeds and other events; the crew knew that the unlock had to occur before Mach 1.4, and getting ahead is always better than getting behind. Rehearsals were over, this was game day, and maybe the co-pilot was simply operating more quickly than before because of it. And the warning about not unlocking early had not been reinforced in recent training; it was one of the many thousands of details about a high-performance craft that simply wasn’t at the forefront of the brain at that time.

I guess the real question I have to ask is whether it was necessary to design the vehicle with critical manual operations that had to be performed at specific times or on specific measurable cues. I am a firm believer in crewed vehicles, both because humans are flexible enough to handle changing situations and because I believe that without the human element, the whole exercise of flight is much less inspiring. But I am also a believer that when it comes to routine operations, automation can be our friend.

Scaled Composites grew out of a design philosophy that holds that the simplest solution is always the best. I believe that in engineering, all designs should be as simple as possible, but no simpler. Automation is not always simple, not always cheap, and it can lead to its own issues. But humans are fallible, and designing a system where one human mistake causes a fatal loss of the vehicle and crew is simply not the best option. Sure, any pilot can crash any airplane by making the wrong control movements in the flare. But hands-on stick-and-rudder skills are something we assume pilots have ingrained into their muscle memory. Knowing exactly at what meter reading to take a certain system action is not.
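To make that design point concrete, here is a minimal sketch, written in plain Python rather than any real flight-software language, of the kind of interlock I’m describing: a speed gate that simply refuses an unlock command outside a safe window. The names, the window bounds and the structure are my own illustrative assumptions, not Scaled’s avionics; the point is only that a measurable cue the crew must watch can just as well be watched by the machine.

```python
# Hypothetical sketch of a speed-gated interlock. The avionics refuse a
# feather-unlock command outside an assumed safe Mach window. All names,
# thresholds, and structure are illustrative assumptions, not a description
# of SpaceShipTwo's actual systems.

FEATHER_UNLOCK_MIN_MACH = 1.4   # assumed lower bound: transonic loads have subsided
FEATHER_UNLOCK_MAX_MACH = 1.8   # assumed upper bound: unlock well before apogee


def unlock_permitted(current_mach: float) -> bool:
    """Return True only when the vehicle is inside the assumed safe unlock window."""
    return FEATHER_UNLOCK_MIN_MACH <= current_mach <= FEATHER_UNLOCK_MAX_MACH


def handle_unlock_command(current_mach: float) -> str:
    """Accept or inhibit a crew unlock command based on the measured Mach number."""
    if unlock_permitted(current_mach):
        return "UNLOCK ACCEPTED"
    # Here a premature command is simply rejected; a real design might instead
    # latch it and execute once the window opens. Either way, a single early
    # input cannot reconfigure the vehicle during the transonic region.
    return "UNLOCK INHIBITED: outside safe Mach window"


if __name__ == "__main__":
    print(handle_unlock_command(0.92))  # early command during transonic flight is inhibited
    print(handle_unlock_command(1.50))  # command inside the window is accepted
```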

The Space Shuttle had Autoland capability built in from the start, but because of rare failure modes inherent in the system and the difficulty of taking over manually late in the approach, we never used it. But ascents were always flown by the autopilot because bad things happened very quickly during boost phase. We trained for manual ascents, but no one ever flew one. It just wasn’t prudent.

Instead, we used the humans to monitor the automatic sequences and used our knowledge to judge how the automation was doing, prepared to step in only if necessary. A process that requires precisely timed execution based on measurable cues is statistically safer when done by machines. And while that costs more, it is a lesson learned that shouldn’t have to be learned over and over again. Certainly, if it can help avoid a single-point catastrophic failure, it should be considered.

Is Scaled doing the right things to make their spaceship and its operation as low risk as possible in an inherently high-risk environment? In reading through the many pieces of evidence in the NTSB docket, I see lots of references to procedures, training, risk evaluation and mitigation. I see lots of good things about design reviews, flight readiness reviews and all of the many positive things we like to see in a spaceflight operation. They’re not just kicking the tires and lighting the fires—they are trying to apply the lessons learned by NASA and others about human spaceflight and how to do it.

Maybe one of the toughest things for everyone to realize is the very thing people complained about most with NASA and the Shuttle Program: to the general public, it appeared boring. We did things the same way every time. We used the autopilot to fly much of the time instead of the pilots. Believe me, from the inside, every flight was incredibly exciting, often more exciting than we wanted it to be. But we worked hard to make them routine. Scaled and the new commercial spaceflight industry are selling the excitement of flight into space, and part of that might be the romance of the pilots at the controls all of the time. But humans are fallible, and you have to ask yourself whether the goal is to make things as exciting as possible, or to provide people with the unique experience of flying to space, with the excitement coming from the experience of being where few have gone before.

It doesn’t take an organization the size of NASA to do spaceflight right. But it does take an attitude where you realize that there is always something you have missed, something you have to learn more about. You have to be constantly looking over your shoulder for the gremlins, the single-point failures and the unexpected. You have to fly with a little apprehension all the time to keep you sharp, and you have to know every constraint, every little bit of detail about your craft and its trajectory. You can never let your guard down, and you have to expect mistakes. So systems need to be designed to be tolerant of those mistakes. Forget the myth of the infallible pilot; let’s just get the job done with as little risk as possible. And let’s make sure that we know what those risks are.

Paul F. Dye is editor of AVweb’s sister publication, KITPLANES, and is a retired NASA lead flight director with many Shuttle missions in his logbook.
