Infinite Variables of Pilot Error
Part 1: Flight Crews
Of course, pilot error can be attributed to every aviation crash. A pilot can have an engine blow up or fall off, causing complete hydraulic failure, and when they land fast and beyond the touchdown zone, the accident analysis will still say the pilots didn’t manage their airspeed appropriately.
Every aviation accident can start with the words: if only the pilot(s) … had noticed the under-torqued bolt, set the flaps, recognized the wind shear, known the pitot/static system was iced over, etc. There is also a financial and psychological desire to blame the pilot. Aircraft manufacturers will always shift blame to the actions of the pilot rather than admit a flaw in the design, even if a tail section comes off from the copilot using the rudder. For the general public, knowing that an accident could have been prevented moves the tragedy into an acceptable category, one that lets us believe we can still control it in the future. It allows us to put the variable on the pilot(s), not the airplane.
The reality is that aviation can be catastrophe free, but never accident free. That’s as close to perfection as we can expect. We’re so close, but we haven’t found the perfect balance yet. Every advancement in aviation technology comes with trial and error. Everything stays the same until we have a tragedy, and then we analyze and adjust. We learn from our mistakes, but on any given flight, private or commercial, there are hundreds of choices that pilots must make, and the variables are infinite. There is absolutely no way to train and prepare for every possibility. With that in mind, we’ve begun to rely more heavily on technology, and it brings new mistakes with it. The advancements made in just the last decade are astounding. However, now that computers onboard are trying to help pilots think, we have to throw in an entirely new set of infinite variables; doubling infinity. Pilots have discovered that computers are just as fallible as humans.
The specific triggers of an accident are vast, sequential and unique, but lately there is a specific reason why airplanes are coming down when they shouldn’t be: decision and perception errors by the pilots. The most recent example is TransAsia Flight GE235 in Taiwan. The question, of course, especially from those outside the industry, is: how could a pilot shut down the wrong engine? Easily, and every multi-engine pilot knows it could happen, so we all train for it. When you lose an engine on a multi-engine airplane, it’s not the dead engine that is the primary concern; it’s the good engine at full power that is trying to yaw and turn you into the dead one. Subliminally, in a moment of mechanical chaos, it’s easy to convince yourself that you need to shut down the engine causing you control issues. In this case, they shouldn’t have done anything except fly the airplane, come back around, and land where they had just taken off.
Flying a commercial airplane with an engine out isn’t and shouldn’t be a big deal, because pilots train for it constantly, in full-motion simulators and in the real thing. Since the flight training records and full cockpit transcripts have not yet been released to the general public, we don’t know which pilot said or did what, but initial reports show a concurrence by both pilots to shut down the left engine, which was operating normally.1 What I’ve seen happen is that the flying pilot is so busy trying to figure out what happened that when the nonflying pilot starts calling out which engine failed and asks to verify fuel shutdown, the flying pilot’s situational awareness is gone. They aren’t truly listening; they are agreeing, because they want to assume that the other pilot’s assessment is correct. So, in this instance, when the operating engine’s throttle was reduced to idle, they momentarily had full control and forward momentum again. It felt right. Thirty seconds later, they “confirmed” the action and shut down the “correct” (in their minds) engine, closing its fuel valve as well. The result was instant, and they tried to restart the left engine, but they just didn’t have time. The aircraft entered a stall, and there was no time or altitude to recover. This was a decision and perception error, not a mechanical failure crash. If both engines had failed, it would have been a mechanical failure crash, but losing an engine isn’t the reason the airplane crashed.
The industry and media aren’t as quick to release black box information when the issues are muddled. As of this writing, the AirAsia crash is still a mystery, but we know that the aircraft was near thunderstorms, that it was trying to climb, and that the stall warning system went off. It fell to 24,000 feet and out of radar detection. In this case, the media isn’t as quick to blame the pilots, because the captain had 20,000 hours of flying time and 6,100 on the Airbus A320.
The best example of how new technology can create a special kind of deception is the Air France Flight 447 A330 crash in June of 2009. The airplane crashed even though both engines and all systems were operating normally in the last few minutes. It was simply a misperception: the pilots did not understand that the pitot/static system had temporarily iced over, which disconnected the autopilot and led to a cascade of errors. The irony is that the pitot/static system began giving correct information again, so when the airplane crashed, all systems were fully functioning and normal at the time of impact. The pilots forgot the basic rule of just fly the airplane. They were trying to understand the computers and forgot to just fly. (For a more in-depth description, click here: https://disciplesofflight.com/crew-resource-management/.)
So what? We get it. The pilots made a mistake, and while we sit on the ground and judge them, it’s easy to see where they went wrong. If there is an infinite number of variables in pilot error, how do we train for everything? We don’t, but we can do something about these recent failures, and it comes back to what you learned in private pilot training: sharing mistakes and the use of checklists.
And now to completely blow your mind: I suggest we do some training without a checklist. Wait! I deeply believe that checklists are absolutely vital, researched, and proven. But because they are, pilots and teachers have forgotten what happens to human instinct in a situation that doesn’t fall neatly into a checklist. I’m suggesting we spend a little time in the full-motion simulator completely unprepared, putting pilots in mysterious emergency situations where they have to try to recover without a checklist. Let them do it all wrong on purpose. Before the simulator session, have the instructor tell one of the pilots to purposely make random mistakes and see what the other pilot does with it. Make the session lighthearted, because that’s where pilots spend most of their crew time together. The point is not to make someone fail; the point is to learn about variables in a safe environment. Pilots are supposed to fail this portion so they will remember and learn. It’s also a reminder to both teacher and student of what human instinct does under stress, and a method to determine what common mistakes happen with new technology. The results should be anonymous and shared throughout the industry, worldwide. It would be beneficial to contrast crew resource management in the United States with other cultures and mindsets. That way, we could see common-denominator mistakes regardless, or because, of how pilots are trained. This exercise is simply a method to realign the reflexive thought process of pilots in a crew environment, not to disregard checklists.
There are different philosophies on checklists. I’ve flown for companies that want us to memorize the first ten items of emergency checklists, and I’ve flown for companies that have you memorize one or two items and then use the checklist for the rest. The first item under either philosophy is the same: fly the airplane. The major airlines share the basic philosophy of using checklists, and training is focused on the proper use of them, as it should be. During recurrent training, we often focus on a system and then practice its failure in the simulator. Or, for example, we’ll review wind shear in the preflight briefing and then duplicate the scenario of the DFW crash. No checklists are involved there, but we are in the mindset to know what’s coming. We spend hours in the simulator expecting an emergency to happen. What we can’t train for is throwing pilots into an emergency when they least expect it. The best we can do is throw a few emergencies at a flight crew and tell them to react without following a checklist, to see where human nature takes us.
The industry needs to know and share common errors even though there is an infinite number of variables. This lesson has nothing to do with checklists and everything to do with getting pilots to think outside the box and rely on basic airmanship while surrounded by computers trying to think for them. Checklists wouldn’t have saved the ATR from crashing in Taiwan; the crew followed the checklist, but for the wrong engine. However, practicing in the simulator to think about what it takes to keep the airplane in the air would have kept them alive. If they had done nothing except fly the airplane, they’d be here to tell us the story.
Coming Up: Part II – General Aviation Pilot Error Variables
1 Parrett, Bradley. “Engine Failure, Related Procedures Focus of TransAsia Probe.” Aviation Week, n.d. Web. 20 Feb. 2015.