Saturday, April 23, 2011

Lessons from Deepwater Horizon and Fukushima

We marked the one-year anniversary of the Deepwater Horizon disaster this week.  The Economist noted the event with a great editorial ("In place of safety nets") with a clear message for engineers - - don't assume disasters won't happen at the frontiers of technology.  Presume they will.  Consider the opening paragraph of the article:

Technology does not inflate like a balloon, expanding human power over nature evenly in all directions and at all scales.  It grows like a sea urchin: long spines of ability radiate out towards specific needs and desires.  Some of those spines now reach dizzying distances, allowing what would once have been impossible tasks: coaxing kilowatt hours by the million from the inner workings of atoms, or driving tiny oil pipes miles through the crust of the Earth.  But spines are brittle, and they stand alone.  When one breaks - - as happened on board the Deepwater Horizon rig in the Gulf of Mexico a year ago or at the Fukushima Dai-ichi nuclear plant in Japan last month - - there is no ameliorative technology on a par with that which has failed.  Instead there is floundering; there is improvisation; and there is vast damage.

The article highlights three recommendations for coping with brittle technologies - -
  1. Firms involved in these types of complex systems have to accept that even if things seem safe and sure in day-to-day operations, disasters still happen.  Things like blowout preventers and backup generators need to work as advertised - - they need to prevent and they need to back up.
  2. Develop at least some broadly applicable technologies for repair and remediation before they are needed.  Coping is a real-time event - - if you think you will need something, have it available.
  3. Situational awareness is invaluable.  Steven Chu, the energy secretary, was reportedly shocked to find that the only source of information from the Deepwater Horizon's blowout preventer was a single gauge.  Sensor systems for getting information out of containment vessels, off sea floors, and from all sorts of other out-of-the-way places should be deployed widely and in redundant ways (a rough sketch of what redundancy can look like follows this list).
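
To make the redundancy point concrete, here is a minimal sketch - - the sensor names, values, and threshold below are invented for illustration - - of reading the same quantity from several independent gauges and taking the median, so that a single failed or lying instrument cannot blind the operators:

import statistics

def fused_reading(readings, min_sensors=2):
    # readings: dict mapping sensor name -> last reported value,
    # with None for a sensor that has gone offline.
    live = [v for v in readings.values() if v is not None]
    if len(live) < min_sensors:
        # Too few independent sources to trust - - situational
        # awareness has been lost and should be flagged as such.
        raise RuntimeError(
            f"only {len(live)} of {len(readings)} sensors reporting")
    return statistics.median(live)

# Hypothetical example: three independent pressure gauges on a
# blowout preventer, one of which has gone dark.
bop_pressure_psi = {"gauge_a": 8950.0, "gauge_b": 9020.0, "gauge_c": None}
print(fused_reading(bop_pressure_psi))   # median of the live gauges: 8985.0

The point is not the arithmetic but the architecture - - no single gauge should ever be the only source of truth.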
Engineers also need a greater understanding of the complexity sciences.  One central lesson from systemic failures is the need for a prognostic approach that anticipates problems, rather than the current "react-and-fix" methodology for managing systemic risks.  The engineering community needs concepts, methodologies, and automation tools to model, analyse, predict, and explain the behavior of such systems and their components in various environments.
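
As a toy illustration of the difference between "react-and-fix" and a prognostic approach - - the variable names, numbers, and limit here are invented - - the following sketch fits a simple linear trend to recent sensor history and estimates how long until an alarm limit is crossed, so the warning arrives before the failure rather than after it:

import numpy as np

def hours_until_limit(history, limit):
    # history: list of (hour, value) samples, oldest first.
    # limit:   the value that would trigger a react-and-fix alarm.
    t = np.array([h for h, _ in history], dtype=float)
    v = np.array([x for _, x in history], dtype=float)
    slope, intercept = np.polyfit(t, v, 1)   # fit a straight-line trend
    if slope <= 0:
        return None                          # not degrading; nothing to predict
    t_cross = (limit - intercept) / slope    # when the trend meets the limit
    return max(0.0, t_cross - t[-1])         # time remaining from the last sample

# Hypothetical example: slow drift in a containment-vessel temperature.
samples = [(0, 61.0), (6, 63.5), (12, 66.1), (18, 68.4)]
print(hours_until_limit(samples, limit=80.0))   # roughly 28 hours of warning

Real prognostic tools are far more sophisticated, but the shift in mindset is the same: model the trajectory and act on the forecast, not on the alarm.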

Sometimes it's just a matter of translating Engineering into English.  The lessons apply to any complex enterprise - - take care of the little things.  Pay attention to the stuff that doesn't quite make sense.  Don't ignore those anomalies and hope they'll go away of their own volition.  Respect the rules.  Follow proper procedures.  Don't ignore low-probability, high-consequence scenarios.  Hope for the best, but plan for the worst.
