
I spent the early part of my career working on critical nuclear systems, where the plant (machine) would serve up information and warnings to the operator (human) to help the operator make complex decisions fast. I hadn’t thought much about those experiences as I’ve spent the better part of the last decade working in SaaS, but with last week’s false alarm nuclear warning in Hawaii, a lot of interesting design trade-offs came flooding back to me.

[Image: the emergency text alert]

Plenty of articles, like this one on the False Alarm due to UI Design Flaw, point out that the likely cause was operator error. To some, this may seem absurd. How in the world could something this important be left to fail due to a clunky interface? While inexcusable, this is something I understand. The military is full of these tradeoffs: fight like you train. Sweat in times of peace so you don’t have to bleed in times of war. This mentality drives the design goal of making the simulated training drill as close as possible to the real response. However, you don’t want your training to inadvertently cause a real-world crisis.

Sure, a bunch of design solutions could exist (a rough sketch follows the list), like:

  • Use of color
  • Use of icons
  • Use of multi-layered warnings (with color and icons)
  • Use of sound
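
To make the multi-layered idea concrete, here is a minimal TypeScript sketch. Everything in it (AlertKind, presentationFor, confirmAndSend) is hypothetical, not drawn from the Hawaii system: the point is simply that layering a distinct label, color, icon, and typed confirmation phrase means a single mis-click in a menu can no longer dispatch the real thing.

type AlertKind = "drill" | "live";

interface AlertPresentation {
  label: string;          // what the operator sees on the control
  color: string;          // visual layer 1
  icon: string;           // visual layer 2
  confirmPhrase: string;  // the operator must type this before sending
}

function presentationFor(kind: AlertKind): AlertPresentation {
  switch (kind) {
    case "drill":
      return {
        label: "SEND TEST ALERT (DRILL)",
        color: "blue",
        icon: "i",
        confirmPhrase: "DRILL",
      };
    case "live":
      return {
        label: "SEND REAL ALERT",
        color: "red",
        icon: "!",
        confirmPhrase: "THIS IS NOT A DRILL",
      };
  }
}

// The send path refuses to fire unless the typed phrase matches the kind,
// so picking the wrong menu item alone cannot dispatch a real alert.
function confirmAndSend(kind: AlertKind, typedPhrase: string): boolean {
  const p = presentationFor(kind);
  if (typedPhrase !== p.confirmPhrase) {
    console.log(`Blocked: expected "${p.confirmPhrase}" for a ${kind} alert.`);
    return false;
  }
  console.log(`[${p.color}] ${p.icon} ${p.label} dispatched.`);
  return true;
}

confirmAndSend("live", "DRILL");   // blocked: wrong phrase for a live alert
confirmAndSend("drill", "DRILL");  // succeeds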

But my reflection from all of this is the importance of developing environments where training is possible. That takes real design thought. Making continuous testing possible requires an upfront investment, forward planning, and technical discipline. Operator error will always be present in these situations, because you want a system with as few impediments as possible to releasing a time-sensitive warning. However, you can’t have a system where every drill intended to reduce the risk of one operator error (a false positive, sending an alert that isn’t real) introduces a real probability of the opposite error (a false negative, failing to send an alert that is).
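One way to build a training environment that rehearses the real response without risking a real broadcast is to keep a single send path and vary only the transport wired in behind it. The sketch below assumes that architecture; the interface and names are illustrative, not from any real alerting system.

interface AlertTransport {
  dispatch(message: string): void;
}

const liveTransport: AlertTransport = {
  dispatch: (message) => {
    // In production this would call the real broadcast system.
    console.log(`LIVE BROADCAST: ${message}`);
  },
};

const drillTransport: AlertTransport = {
  dispatch: (message) => {
    // Same interface, but it only records the message for later review.
    console.log(`DRILL (not broadcast): ${message}`);
  },
};

// The environment, not the operator's menu choice, decides which transport
// is injected, so "fight like you train" holds without real-world risk.
function sendAlert(transport: AlertTransport, message: string): void {
  transport.dispatch(message);
}

sendAlert(drillTransport, "Ballistic missile threat inbound."); // safe rehearsal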

Pulling this from the nuclear safety world back to the SaaS world: SaaS has much less potential to create catastrophic harm. If you make a typo while updating a lead record in the CRM, at worst you trigger an unexpected marketing campaign and an email goes out. But no person likes to make mistakes, whether big or small.

So the big takeaways for product development and design teams (see the test sketch after the list) are to:

  • Have effective testing in place to understand where users may make mistakes.
  • Make sure that calls to action with vastly different outcomes are significantly differentiated.
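
As one illustration of both takeaways, here is a tiny guard-rail test reusing the hypothetical presentationFor() from the earlier sketch. If a refactor ever makes the two calls to action converge visually, this fails in CI long before an operator can confuse them in production.

import assert from "node:assert";

const drill = presentationFor("drill");
const live = presentationFor("live");

// Calls to action with vastly different outcomes must stay differentiated.
assert.notStrictEqual(drill.color, live.color, "colors must differ");
assert.notStrictEqual(drill.icon, live.icon, "icons must differ");
assert.notStrictEqual(drill.confirmPhrase, live.confirmPhrase, "confirmation phrases must differ");
console.log("CTA differentiation test passed.");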

Mission-critical systems always have risk. But, as this incident shows, most failures don’t occur in the depths of nuanced detail. Engineers love details and can normally build test strategies to mitigate edge cases. Most failures come from the overlooked tasks: they show up in human-machine interactions.

Scott Hutchins
Technical Co-founder, Truthlab