Qantas terror blamed on computer
A couple of weeks ago, I posted on what happened on flight 447: What Really Happened Aboard Air France 447
Now we have the final report on an in-flight software glitch: Qantas terror blamed on computer.
These articles begin to demonstrate how difficult it is to write computer software that behaves properly with bad data input. It is the old "garbage in, garbage out" concept. In real-time systems, the software cannot just dump an error and exit the way a typical PC or tablet app does. I am sure that Airbus ran hundreds of thousands or millions of hours of simulations against the code base. The testers threw as many combinations of failures and weird scenarios at the flight software as possible. The Qantas incident was one of three that occurred in 128 million hours of operation. You can see how hard it is to find these types of errors before the software goes into production.
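To make the point concrete, here is a minimal sketch of how a real-time loop has to treat bad input: it cannot raise an error and quit, it must degrade gracefully. All names, thresholds, and the voting scheme here are my own illustration, not anything from Airbus's actual software.

```python
def vote_airspeed(readings, tolerance=10.0):
    """Return a trusted airspeed from redundant sensors, or None.

    If at least two readings agree within `tolerance`, average the
    agreeing pair; otherwise the data is "garbage in" and untrustworthy.
    """
    for i in range(len(readings)):
        for j in range(i + 1, len(readings)):
            if abs(readings[i] - readings[j]) <= tolerance:
                return (readings[i] + readings[j]) / 2.0
    return None  # sensors disagree: no single value can be trusted

def control_step(readings, last_good):
    """One tick of a control loop that must never simply exit."""
    speed = vote_airspeed(readings)
    if speed is None:
        # A desktop app might throw an exception and die here; a flight
        # computer must keep flying: hold the last good value and flag
        # the degraded condition to the crew instead.
        return last_good, "DEGRADED"
    return speed, "NORMAL"
```

The interesting design choice is the `None` branch: the loop keeps producing an output every tick no matter how bad the input gets, which is exactly what makes testing every garbage combination so hard.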
Then there is the gap between what the programmer was thinking when writing the software and what the user is thinking when using it. The Airbus flight control system normally will not allow the pilots to stall the aircraft. The flight control system will actually override the pilots when the human inputs would stall the aircraft, EXCEPT when the flight software exits normal operation mode. When the flight software on Air France 447 exited normal operations due to the icing of the pitot tubes, the automatic safeguards were disabled. Now the pilots could stall the aircraft. Did they know it was now possible to cause a stall? Evidently not; they stalled a perfectly good aircraft all the way into the ocean. The gap between how the engineers designed the flight software and how the users (the pilots) thought it worked proved fatal.
The programmers' intentions were good. There are circumstances where the automatic safeguards need to be disabled. It probably would have been a good idea to let the users know about it too.