
Meltdown: Why Our Systems Fail and What We Can Do About It

A groundbreaking take on how complexity causes failure in all kinds of modern systems--from social media to air travel--this practical and entertaining book reveals how we can prevent meltdowns in business and life

"Endlessly fascinating, brimming with insight, and more fun than a book about failure has any right to be, Meltdown will transform how you think about the systems that govern our lives. This is a wonderful book."--Charles Duhigg, author of The Power of Habit and Smarter Faster Better

A crash on the Washington, D.C. metro system. An accidental overdose in a state-of-the-art hospital. An overcooked holiday meal. At first glance, these disasters seem to have little in common. But surprising new research shows that all these events--and the myriad failures that dominate headlines every day--share similar causes. By understanding what lies behind these failures, we can design better systems, make our teams more productive, and transform how we make decisions at work and at home.

Weaving together cutting-edge social science with riveting stories that take us from the frontlines of the Volkswagen scandal to backstage at the Oscars, and from deep beneath the Gulf of Mexico to the top of Mount Everest, Chris Clearfield and Andras Tilcsik explain how the increasing complexity of our systems creates conditions ripe for failure and why our brains and teams can't keep up. They highlight the paradox of progress: Though modern systems have given us new capabilities, they've become vulnerable to surprising meltdowns--and even to corruption and misconduct.

But Meltdown isn't just about failure; it's about solutions--whether you're managing a team or the chaos of your family's morning routine. It reveals why ugly designs make us safer, how a five-minute exercise can prevent billion-dollar catastrophes, why teams with fewer experts are better at managing risk, and why diversity is one of our best safeguards against failure. The result is an eye-opening, empowering, and entirely original book--one that will change the way you see our complex world and your own place in it.

294 pages, Hardcover

First published March 1, 2018


About the author

Chris Clearfield

1 book · 30 followers
Chris Clearfield is a science geek and a reformed derivatives trader who became more interested in writing about risk than taking it. He's the coauthor of Meltdown: Why Our Systems Fail and What We Can Do About It with his friend and longtime collaborator András Tilcsik.

Chris lives in Seattle, Washington with his family. When he's not writing or working on short author biographies, Chris, a certified flight instructor, can be found at the airport teaching people how to fly.

Ratings & Reviews



Community Reviews

5 stars: 435 (34%)
4 stars: 531 (42%)
3 stars: 238 (18%)
2 stars: 48 (3%)
1 star: 11 (<1%)
Displaying 1 - 30 of 137 reviews
Mehrsa
2,235 reviews · 3,634 followers
May 5, 2018
There are some interesting insights in this book, but most of them are derivative of other people's work--either scholars of complexity theory or just management and diversity gurus. Basically, the idea here is that systems that are complex and tightly coupled will experience meltdowns, but that there are signs that should not be ignored. Fair enough. And how do you deal with it? Basically by listening to diverse voices, doing a pre-mortem, getting rid of hierarchies and letting all team members speak, etc. I was hoping for more theoretical work on complexity and not just a bland management book.
C.
354 reviews · 3 followers
March 24, 2018
I gave this 5 stars because of the shocking insight, the research, and the ways we're learning more and more about how to avoid a "Meltdown."

We rely heavily on computers, but it seems most catastrophes happen due to human error, like the tool left inside the engine of a plane, the scandal at the Oscars, etc. We need more people to speak up if they think they messed up. I enjoyed that part of this book, and I commend those individuals. It's a shame that people have died before we found out what went wrong and why. It should have been discovered earlier, whether the cause was laziness or cost-cutting. I agree about over-worked nurses. I didn't want to skip any part of this book because I really wanted to learn, and actually wanted more; I felt like I didn't know about a lot of these things that happened. I was shocked by the ATM and Starbucks stories, and especially by the way people can hack into those things--cars, really. That's a very scary thought. Another part was about airplanes at night, relying on people on the ground and the instruments in the cockpit. This book was not at all how I thought it would be, and I'm glad I read it, because it is eye-opening and very well written--the research, just everything. I'm glad there are people who take this seriously and do all they can to fix the problem and run tests to make the world better for us.

I highly recommend.


Thank you NetGalley, the authors, and Penguin Press for this copy. Looking forward to reading more like this. It gives a new perspective. This would make a great documentary if there isn't already one. I just want to grab someone and tell them all about this book.

Cherie'

#netgalley #penguinpress #meltdown
Jingga
58 reviews · 15 followers
July 27, 2021
An insightful book that helps you understand why, in our technological age, more catastrophes are likely to happen. The authors have a really good explanation: it is due to the tight coupling of the processes involved and the transformative power of the technologies. The book provides highly readable examples of meltdowns, from nuclear crises and financial crises to space shuttle explosions, oil spills, and many more. It also presents a series of solutions to better manage these risks, along with examples of their successes.
1 review
April 3, 2018
Meltdown is an excellent book for anyone curious about making lives, communities, and the world more resilient. The stories are relevant, authentic, and engaging, and lead directly to lessons worth trying out in our own organizations and systems. In a world that at times can feel replete with disaster, Chris and András remind us that we can take small steps in any place to improve the robustness of our systems and stop meltdowns large and small.

In my field of global environmental health, one of the great failures of development funding was the drilling of thousands of wells in Bangladesh to reduce water-borne disease, only to belatedly discover that this "solution" introduced a wholly new disaster from arsenic in the groundwater. Thus, as noble-missioned an organization as UNICEF inadvertently perpetrated "the largest poisoning in history" by not recognizing systemic risk.

This sort of unintended consequence and systemic failure, even by well-intentioned actors, is the type of problem that Chris Clearfield and András Tilcsik aim to prevent through better system design in their award-winning book, Meltdown: Why Our Systems Fail and What We Can Do About It.

The book starts off with a litany of system failures across industries and scales, from airplanes and nuclear power plants to Starbucks coffee and cooking our Thanksgiving meal. In our modern world, as both systems and problems become more complex and more intertwined ("coupled"), the possibility and scope of disaster grows.

Fortunately for us, the majority of Meltdown is oriented toward solutions--relevant from our own kitchen all the way to a war room--for preventing problems from becoming disasters. Chris and András offer specific, actionable advice and tools to improve our systems and reduce disaster.

Mehtap exotiquetv
426 reviews · 262 followers
November 11, 2023
The book leans somewhat on the Black Box Thinking strategy. In our complex world, mistakes happen that can even result in death. But why? And how can we counteract them? All of these methods and strategies that have been developed over time can also be used in everyday private life.
Alex
117 reviews · 1 follower
Read
December 31, 2019
Chris Clearfield brings clarity to complexity by looking at the little things that make for big problems. He goes through the catastrophes in our everyday systems, breaking down why complex systems are prone to failure, accidents, and intentional wrongdoing.

Grounded in examples most of us would know from the news, we learn about patterns across failure-prone systems (non-linear cause and effect as well as tight "coupling" across the parts), why our solutions to these patterns tend to fail (we unintentionally patch complexity by adding even more complexity), and how our very human nature leads us to construct these vulnerable systems (through confirmation bias and conformity tendencies, for example). Each part of the book feels anticipatory. The common pitfalls address the lacking solutions we might come up with if we had just learned about the patterns, while learning about our nature is important for us to be reflective as we build and maintain these systems.

Overall I found the book quite helpful: it arms me with frameworks to label problematic complexity that I see, stories and examples to use as metaphors and parallels (my favorite might be the idea of a VP of Common Sense), and a call to reflection for myself (there's a need for humility and a willingness to listen, not just knowledge and information, to address these problems).
Venky
995 reviews · 371 followers
March 30, 2022
In December 2012, Starbucks launched an ambitious PR campaign to enhance its brand value. In collaboration with Twitter, the hashtag #SpreadTheCheer was launched, and the tweets were displayed live on a giant screen at the Natural History Museum in London, where Starbucks also sponsors the ice skating rink. What should have been a gala affair soon became the beverage giant's worst nightmare. An avalanche of tweets alluding to Starbucks's controversial tax avoidance strategies marred the hashtag campaign, and all efforts to mitigate the damage failed in the face of what amounted to software sabotage.

While all that Starbucks lost in its pear-shaped PR campaign was reputation, tragedy of an unimaginable magnitude struck the citizens of Fukushima, Japan on March 11, 2011. A massive earthquake followed by a raging tsunami caused immense damage to the Fukushima Daiichi nuclear power plant, leading to the worst nuclear disaster since Chernobyl in 1986. A post-mortem analysis revealed that the backup generators meant to keep cooling the plant in the event of main power loss had been built in low-lying areas susceptible to flooding during a tsunami, something a proper hazard analysis would have identified.

American global financial services firm Knight Capital lost $460 million in 45 minutes on August 1, 2012, due to an unbelievable algorithmic fiasco. The company had deployed a new version of software on its production servers. At 9:30 A.M. that morning, the New York Stock Exchange opened for trading; within 45 minutes, Knight Capital had "involuntarily" executed a whopping 4 million trades through its servers, finding itself teetering on the brink of bankruptcy. Traditional trading wisdom dictates that an individual should buy shares when the price is low and sell when the price is high. In the case of Knight Capital's servers, however, a programme named 'Power Peg' ended up buying a stock at its ask price and immediately selling it again at the bid price, losing the value of the spread. Although a few cents may not seem significant, when a computer is spitting out thousands of trades per second, the cumulative loss is cataclysmic.
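The mechanics the review describes can be sketched in a few lines. The prices, lot size, and trade rate below are made-up illustrative numbers, not Knight's actual figures:

```python
# Toy model of the failure mode described above: buying at the ask and
# immediately selling at the bid forfeits the bid-ask spread every time.
def spread_loss(bid: float, ask: float, shares_per_trade: int, trades: int) -> float:
    """Cumulative loss from round-tripping `trades` orders across the spread."""
    loss_per_share = ask - bid  # the spread: a few cents per share
    return loss_per_share * shares_per_trade * trades

# Hypothetical numbers: a 3-cent spread, 100-share lots,
# 2,000 round trips per second, sustained for 45 minutes.
loss = spread_loss(bid=10.00, ask=10.03, shares_per_trade=100, trades=2_000 * 60 * 45)
print(f"${loss:,.0f}")  # a few cents at a time, at machine speed, reaches eight figures
```

The point is the tight coupling: there is no human-speed pause between the error and its consequences, so the loss compounds faster than anyone can intervene.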

So what links the Starbucks PR disaster, with the Fukushima nuclear plant meltdown, and Knight Capital’s malfunctioning bug? Former derivatives trader and licensed commercial pilot Chris Clearfield and Canada Research Chair in Strategy, Organisations, and Society at the University of Toronto, Andras Tilcsik, in their fascinating book, “Meltdown” attempt to prise out the common fault lines that characterise major disasters.

In their endeavour, the authors bank on the work of famed sociologist Charles Perrow. Perrow shot into the limelight with his analysis of the 1979 Three Mile Island accident in Pennsylvania, where a routine maintenance issue metamorphosed into a thermodynamic crisis, causing an absolute scare before being reined in. As Clearfield and Tilcsik illustrate, at the heart of every major catastrophe lies a "tightly coupled" system: the tighter the system, the lower the slack time and margin for error. Multiple moving parts pose the danger of a domino effect, where the malfunction or failure of a single part cascades through the remaining parts. A veritable train wreck waiting to happen.

Oftentimes the most complicated of problems suffer from a simple absence of imagination. A classic example is the monumental blunder at the 2017 Oscars, where La La Land was wrongly named Best Picture instead of the actual winner, Moonlight, plunging the ceremony into a sea of ignominy. While accounting firm PricewaterhouseCoopers (PwC), responsible for handling ballots for the Academy Awards for more than 80 years, was squarely blamed for the snafu, since a PwC partner handed over the wrong envelope, the underlying cause of the blunder was the design of the cards themselves. The award category was set out in minuscule print, while the winner's name appeared in garishly bold font. Hence when Warren Beatty and Faye Dunaway announced La La Land as the winner, the card in their hands was in fact for Best Actress: Emma Stone had bagged the award for--you guessed it--La La Land!

One tried and tested way to avoid the perils of a tightly coupled system, as Clearfield and Tilcsik exhort, is to adopt an "outside in" perspective. Diversity and inclusivity, instead of homogeneity, also help. The fall of the storied medical diagnostic company Theranos and its flamboyant founder Elizabeth Holmes is a case in point. The elaborate deception Holmes committed in the blood-sampling process was completely lost on a board of directors made up of distinguished elderly males who had no clue what the company or its founder was up to.

The authors also identify myriad and variegated practices that can help an organisation escape the rut of groupthink and work around the complexities embedded within a system. Employing more accurate risk-forecasting tools, obliterating groupthink by nurturing dissent in decision-making, and laying down rules for diverse workforces are a few measures that attenuate complacency and give birth to valuable perspectives for managing operations and strategy.

In an incredibly interconnected age, where once-esoteric concepts such as the Internet of Things are now commonplace and ubiquitous, Meltdown is an essential and timely read. It is also a warning to shed the last vestiges of complacency.
Paulomi
19 reviews
July 30, 2022
I rarely blanket-recommend a book as a must-read. But this one is so brilliantly written that I can confidently call it my best read of 2022 and the best non-fiction read ever! I am not a great fan of business books, as I find them pretty vague, repetitive and preachy. When I read a review of this book in one of the book groups I follow, it piqued my interest enough to order it right away. I don't easily buy books that cost above INR 1000, but this one is a sure-shot investment.

The best thing about this book is its incidents, ranging from big failures like nuclear meltdowns to smaller-scale ones like a mess-up in the workings of a post office. The book is full of real-life anecdotes that keep you hooked. How a tsunami caused a nuclear reactor collapse in one part of the region while people actually took safe haven on the premises of another nuclear plant--what was done differently at each? How bypassing a simple validation check left NASDAQ embarrassed and led to heavy fines over the Facebook IPO. How the person who defied Steve Jobs to ensure safety was rewarded for dissent, and how Theranos got away with fooling the entire medical fraternity for so long. How airlines crashed for lack of comfort in expressing dissent, and how accidents were prevented just because one person asked uncomfortable questions. How a space mission to Mars crashed because of confusion between the English and metric systems. And these are just a few of many.

The takeaway from the book, while reading the incidents with fascination (and discussing them with the kids endlessly), is that while corporates today are focused on diversity and innovation, the usual approach may not entirely work. The question should not be how to bring diversity to a group, but rather to prompt people to ask why diversity is not naturally present and to remove bias, if any. The practice of innovation is not to enhance the current way of doing things, but to run a premortem to see which small gaps can result in large failures, and to plug those gaps. The book encourages, or rather rewards, dissent, and talks about how an outsider's perspective can actually result in safer and more stable functioning.

At best it makes the reader think and question; at the least, it leaves the reader in splits with the laid-back wit prevalent across the book. Don't miss this one. It's a gem.
3 reviews
March 7, 2018
Was lucky to get this in a giveaway. I had high expectations because of the glowing quotes on the back from authors I like (Dan Pink and Charles Duhigg) and I really liked this book. It is very readable and often entertaining, and it is actually much broader than it seems. It uses a single framework to explain a great range of things, from disasters in your kitchen to meltdowns on Twitter to airplane crashes. Some interesting thoughts on where we're headed as a society and a lot of good tips for managing teams/companies no matter what your industry or field is. And like the authors say, you don't need to be a CEO to make a difference.
Daniel
653 reviews · 85 followers
April 25, 2018
This book is about catastrophic events, but it takes a totally different approach from The Black Swan.

Tight coupling + complexity = meltdown

Fukushima. Long Island. Target in Canada. Enron. Flint water. Washington Metro. Aircraft disasters. The Oscar mix-up.

You cannot think of all the potential problems, because the parts of complex systems interact with each other and create unforeseeable problems, and each person in the system can only see a small part of it. However, warning signs do appear first.

With the rise of automation and algorithms, the world is getting more complex, so disasters are going to happen more often.

How to avoid it?
1. Reduce complexity: not usually possible. So reduce coupling by allowing more time, and try to get feedback.
2. Be systematic in decision making: agree on criteria first, then score the options against them.
3. Do a premortem.
4. Set up an anonymous reporting and improvement system. Of course, this did not work for the UK NHS's Dr Bawa-Garba.
5. Never ignore warning signs and close calls. But how?
6. Allow dissenters to voice problems and solutions easily, in a structured manner. Soften power cues.
7. Have a diverse group, because its members will not blindly trust each other's judgment, avoiding herd mentality. The members are also able to resist the famous social pressure from the wrong-answer experiment. But this is hard: mandatory diversity programs make things worse! Fortunately, voluntary ones work. Formal rotation mentoring programs involving everyone also reduce bias, and a diversity tracking program is effective too. Bring amateurs onto the board so every assumption is debated.
8. Bring in an outsider.
9. Resist whoever is applying pressure on you, even if it is Steve Jobs. Stop everything and change course completely if necessary. In intense situations that are not going well, pause now and then and take a look at the big picture.
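Point 2 above (fix the criteria and weights before evaluating options) can be sketched as a simple weighted decision matrix. The criteria, weights, and scores here are hypothetical, made up purely for illustration:

```python
# Sketch of criteria-first decision making: agree on criteria and weights
# *before* looking at options, then score every option the same way.
def score_option(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores; criteria are fixed in advance."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical criteria for a house purchase, weights summing to 1.
weights = {"commute": 0.4, "price": 0.4, "condition": 0.2}
house_a = {"commute": 8, "price": 5, "condition": 7}
house_b = {"commute": 4, "price": 9, "condition": 6}
print(f"house_a: {score_option(house_a, weights):.1f}")  # higher score wins
print(f"house_b: {score_option(house_b, weights):.1f}")
```

The value of the exercise is in the ordering, not the arithmetic: deciding the weights before seeing the options keeps first impressions from quietly rewriting the criteria.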

I learnt so much from this book!
Ola Rabba
4 reviews · 1 follower
February 8, 2018
I got this book pre-release (Amazon Vine) and really enjoyed it. The topic is super-interesting and important. Despite the serious subject matter, this book still manages to be a surprisingly entertaining read. It is a mix of case studies (ranging from very funny to very sad), interesting social science research, and a framework that brings it all together. My favorite parts were the sections about how diversity helps teams avoid failure and how we can use small failures to anticipate big disasters. The book also provides some of the clearest and most accessible analysis (that I have read) of events like the Volkswagen scandal, the lead/water crisis in Flint and the Oscars gala envelope mistake (La La Land!).

There is lots of food for thought about where we are headed as a society (some depressing conclusions but also some optimistic ones), some good business advice and even some good life and career advice (for example, about how to avoid bad decisions when buying a new house or launching a project at work).

If you enjoy books by Daniel Pink, Adam Grant, Malcolm Gladwell, Jonah Berger, Charles Duhigg, Chip and Dan Heath, and similar authors, you won't be disappointed. It is accessible, entertaining and still quite practical. I learned a lot, and once I got into it, I couldn't put it down.
Meaghan Johns
46 reviews · 16 followers
June 9, 2018
"All too often, when we deal with a complex system, we assume that things are working just fine and discard evidence that conflicts with that assumption."

This is one of those books that is so relevant to my work in project delivery and the world I (we) live in that, even after borrowing this book from the library, I decided to straight up buy it because of the sheer amount of notes I took.

Our world has become increasingly complex, and our systems have become less transparent and more tightly coupled. Meltdown argues that when systems become both complex and tightly coupled, they land directly in the danger zone - the place that causes these catastrophic failures and meltdowns (and also the hit track by Kenny Loggins).

Clearfield and Tilcsik offer some much needed wisdom for dealing with complexity and finding ways to avoid these failures. This is an especially great book for project management professionals and anyone who works for a company where safety is paramount, but I think it's also useful for anyone who's interested in better understanding the way system failure works.

As a bonus, the book offers up excellent arguments for increased diversity in the workplace and in the boardroom, as well as suggestions for getting employees to speak up when they spot a potential problem.

(Plus, plenty of Toronto shout-outs! Yeah boi.)
Fred Hughes
765 reviews · 49 followers
July 4, 2018
A great insight into complex systems and the simple humans that try to control them.

Communication seems to be the magic solution, as it is with most problems, in an age when texting is the standard mode of communication.

Good read
Dee Eisel
208 reviews · 4 followers
August 26, 2018

A few years ago when I still had Scribd, I found a book by Charles Perrow called “Normal Accidents.” My Goodreads review of it is here. As it turns out, that wasn’t the kind of book I was looking for. “Meltdown” is exactly what I was looking for. It takes Perrow’s theories and provides a more modern and digestible framework.


Perrow’s thesis is that in systems with sufficient complexity and tight coupling (not a lot of time or room for error), accidents are inevitable. He calls them normal accidents. “Meltdown” uses this and applies it to more recent accidents - everything from Wall Street crashes to Enron to software bugs to potential issues with dams and nuclear power plants. Where Perrow was writing in the 80s, which was the thing I remember most from his book, Clearfield and Tilcsik have the advantage of everything he knew and everything that has happened since.


This doesn’t make me feel any better on a global scale, because if anything normal accidents have become more normal and expanded out into more areas of life. “Meltdown” makes it clear that areas that formerly were loosely coupled are now tightening, such as dam safety. It does also point out areas where active work to decrease issues has been successful, such as cockpit resource management (a philosophy of flight decks where first officers feel more empowered to challenge potentially dangerous actions by their captains). Overall, though, I don’t feel like my world is any safer than it was before.


That’s not to say it can’t become safer. Taking lessons from Perrow and other systems analysts can help, and has helped, many businesses. It’s too bad this book wasn’t around before Target rolled out in Canada; instead of being an object lesson in failure for Clearfield and Tilcsik, Target could have been a lesson in success. Five of five stars.

Wendy
2,356 reviews · 42 followers
May 12, 2018
“Meltdown: Why Our Systems Fail and What We Can Do About It,” which I won through Goodreads Giveaways, is a compelling look at system failures and the solutions to avoid a meltdown. Broken into two parts, the first half of the book gives insight into why systems that today are more capable, complex and less forgiving can be problematic: accidentally killing people, bankrupting companies, and even getting the innocent jailed. As the authors also point out, even a small change to one of these systems can make it surprisingly more vulnerable to accidental failures, hacking and fraud.

The second part offers useful solutions like learning from small errors, being cognizant of warning signs, listening to the input of skeptics and using the diversity in a company to avoid big mistakes like Target failed to do when management made the decision to expand into the Canadian market.

A serious topic but presented in an entertaining manner, Chris Clearfield and András Tilcsik blend case studies with social science research into a convincing argument that’s utterly captivating and doesn’t let you go until the end. I enjoyed their insights into crises like Three Mile Island, Enron, Washington Metro Train 112, and aircraft disasters which fuel the positive solutions they provide to stop future failures.

“Meltdown: Why Our Systems Fail and What We Can Do About It” is a fascinating book that’s well-written, interesting and should be at the top of everyone’s reading list.
17 reviews
July 13, 2020
The book is brilliant. It starts by describing how our systems work nowadays, and how systems that were once simpler, with more slack in their structures, have evolved to the point where they can suffer a meltdown.

One of the reviews I’d read beforehand mentioned that the authors quote a lot of research and do very little original work. What I experienced, in fact, made that seem like a compliment, because there is nothing wrong with compiling a ton of research as long as you cite the sources and explain (which the authors do very well) how they are connected.

The book, though, is not for every “type” of reader, because it goes into great detail on very specific explanations. For example, to explain the cooling system of a nuclear power plant, the authors use a lot of detail, and if you are not very curious about “how everything works,” it might be better to pass on this book.

A big pro for this book is that the authors suggest solutions (some managerial tools) and ways to prevent meltdowns from happening in any situation, from choosing which house to buy to how pilots should behave when landing a plane and the charts don’t show detailed altitudes.

Do I recommend it? Yes. It is a great book, but it depends on who is reading. If you want a compiled guide to failures in modern systems and how to prevent them, go ahead. But if you don’t like detailed explanations of various topics, or just want a more extensive approach with more data from the studies they quote, it might be better to go straight to the studies in the index.
Fiona
878 reviews · 8 followers
April 5, 2018
A fascinating gloss on how systems break down: it's all about complexity and coupling, a simple concept with infinite applications. I really wish this book had been longer, a phrase I don't often utter.

Thank you to Penguin/Random House for the free copy for review. It was delicious.
Rayfes Mondal
374 reviews · 5 followers
April 16, 2018
We increasingly rely on complex systems that fail in unforeseen ways. This book describes many of them and steps we can take to reduce failure. An enjoyable, informative read.
Peter Immanuel
13 reviews
January 21, 2020
Great book!

Using several fascinating stories, the authors show how a combination of complexity (i.e. when parts of a system interact in non-linear, hidden, and unexpected ways) and tight coupling (i.e. when there is little slack among the parts of a system and the failure of one part easily affects the others) causes meltdowns.
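The two-factor combination described here can be rendered as a toy classifier. The quadrant descriptions and example placements are my own illustrative glosses, not the authors' exact wording:

```python
# A toy rendering of the two-axis framework: complex interactions plus
# tight coupling is the combination that produces meltdowns.
def danger_zone(complex_interactions: bool, tight_coupling: bool) -> str:
    if complex_interactions and tight_coupling:
        return "danger zone: failures interact in surprising ways and spread too fast to contain"
    if complex_interactions:
        return "complex but loosely coupled: surprises happen, but there is slack to recover"
    if tight_coupling:
        return "linear but tightly coupled: failures spread fast, but in foreseeable ways"
    return "linear and loosely coupled: failures tend to stay small and local"

# Illustrative placements (my glosses, not the authors' chart):
print(danger_zone(True, True))    # e.g. a nuclear plant
print(danger_zone(False, False))  # e.g. a small post office counter
```

Treating the two axes separately is what makes the framework useful: you can often loosen the coupling (add slack, add time) even when you cannot reduce the complexity.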

The authors then offer suggestions on how to make meltdowns less likely, while talking about the research on which these suggestions are built. Some of these suggestions include: making systems more transparent; removing unnecessary bells and whistles (even warning systems that are more complex than they need to be!); conducting pre-mortems before embarking on a project; making teams more diverse; and welcoming dissent (even if it hurts).

It's a fun read, with insights that could be useful at work and maybe even in daily life.
100 reviews · 131 followers
Want to read
March 18, 2018
I have never received this book; when are you planning on sending it?
Matthew Wynd
14 reviews
May 9, 2018
Overcoming system failure in an increasingly complex world is a daunting topic. This book lays out how every one of us can contribute to building simple and successful systems to stay ahead of complex-system meltdown. Favorite topics include pre-mortems and how Charles Perrow's technology classification matrix can help us.

Don't skip the epilogue. There is an excellent reference to how Yeats' "The Second Coming" is used inaccurately to describe why world news seems to be getting progressively worse.

One of the most useful and strangely comforting books I have read in the past few years.
33 reviews
May 6, 2018
I entered (and won) a giveaway for this book based on the description given, and I was not disappointed: a fascinating account of how simple things cause complex systems to fail. Sort of an explanation of human hubris, and of why we need to think through all possible effects but can't, since some of them we can't even conceptualize.
Melissa T.
574 reviews · 30 followers
February 3, 2019
*I won a copy of this via Goodreads Giveaways*

This is an interesting look at different types of meltdowns and failures throughout history. It covers topics ranging from stock market and business failures, to retail and medical failures.

It seems that the main causes of meltdowns are the intricacy of systems and organizations, and the combination of small errors that can add up and lead to big problems.

There are a lot of concrete, well-placed examples of these different types of meltdowns, and I appreciated the digestibility of these examples. This book is full of research, which can sometimes lead to dry reading and boredom; here the research is solid, and the execution is as well, which makes it easy to take in.

The book also highlights the true importance of diversity in business and the workplace: not the numbers or the feel-good quota that diversity can bring to a company, but the variety of ideas and, therefore, more innovation.

One thing that stood out to me in particular was a section on mandatory workplace programs, specifically mandatory diversity programs. When programs are mandatory, people are less receptive to them, and such programs actually made diversity in those workplaces worse, not better.

I applied the concept to a mandatory workplace program that I had to go through, on bullying. Everyone grumbled and groused about it, therefore making all of us less receptive. After the implementation of the mandatory sessions, I actually noticed more instances of what I would consider bullying/disrespectful behavior.

This does get a little oversimplified and repetitive in places, but overall a solid book.
Eddie Choo
93 reviews · 7 followers
April 27, 2018
Understanding Complex Systems

A wonderful intellectual successor to Charles Perrow's Normal Accidents, which forms the intellectual spine of the book. It doesn't examine the broader economic and social causes of these meltdowns along the way, though, but it is still a worthwhile read.
Cropredy
423 reviews · 10 followers
March 9, 2023
The premise of the book is that disasters happen (Fukushima, various airline and train crashes, Theranos, etc.) and that, based on a study of many such disasters, a theory of general principles of why they happen and how they can be mitigated can be formulated.

The book has plenty of fascinating case studies of catastrophes, some of which you will recognize, others possibly not. Once the bulk of the case studies are laid out, the theory is presented (tight coupling being a dominant factor, along with a lack of diverse opinions in decision making). Then more case studies are described, and the theory serves as a decent explanation.

The last third of the book is prescriptions (e.g. pre-mortems), with more case studies used to show when the prescriptions were applied and how disasters were avoided or mitigated.

The book is a quick read and is recommended for anyone in a position of authority over decision making, especially when big sums of money or human life are involved. Acting on your own experience ("your gut") is not going to cut it. Nor is simply following "the way it has always been done".
Jeremy
618 reviews · 13 followers
July 22, 2019
There is nothing new in this book, just a bunch of stories we are already familiar with, repackaged to make a new point about complex systems. The main point seems to be that complex systems can lead to failure, so be on the lookout. The fact that certain stories/studies get repackaged and sold with a different moral makes me pretty suspicious. Some decent points were made, like how diversity causes us to be more rigorous in our thinking, since we are less likely to challenge those who are like us. But by and large this book offers little of value.
2 reviews · 1 follower
February 11, 2018
This is a thought-provoking and highly readable book that was a source of fresh, new ideas for me. I knew some of the stories quite well (VW, Flint, the New York Times serial plagiarist scandal, the Academy Awards debacle, etc.) and was familiar with some of the research, but this book offered the excitement of looking at these events through a different lens.

91 reviews · 23 followers
March 12, 2019
An excellent and accessible text on how and why complex systems break down, from Fukushima to the winners' cards at the Oscars. Developers and designers will find it useful. In Russia the book gets lost in the general flood of titles, while on Amazon it is deservedly gathering more and more positive reviews.
8 reviews
July 27, 2018
I enjoyed the book. Some reviewers say it is heavy on case studies and light on takeaways; I agree, although the case studies are good. Below are my notes on the takeaways:

Meltdowns are more often caused by a number of small component failures that interact in unpredictable ways than by a single component or operator failure. Failures are more likely to happen when systems are both:

· Complex: many components are joined by complex (non-linear) interactions, so that the failure of one affects many others. Complexity can produce interactions so rare that it's impossible to predict most of the error chains that will emerge. Complex systems are characterised by indirect or inferential information sources (not directly observable) and unfamiliar or unintended feedback loops.
· Tightly coupled: when the components of a complex system are tightly coupled, failures propagate through the system quickly. Tightly coupled systems are characterised by: delays are not possible; the sequence of events is invariant; alternative paths are not available; there is little opportunity for substitution or slack; redundancies must be designed in and deliberate.
In the last few decades, complexity and coupling have increased in most systems, shifting many of them into the 'danger zone'. We are in a 'golden age' of meltdowns, so what should organisations do?
1) Build better systems and improve decision making
Organisations need to consider complexity and coupling as key variables when planning and building systems. For instance, redundancy, intended as a safety feature, sometimes adds complexity, creating the potential for unexpected interactions. In fact, safety systems are the biggest single source of catastrophic failure in complex, tightly coupled systems.

When building systems, one way to reduce complexity is to add transparency. Transparency makes it hard for us to do the wrong thing and easy to realise when we have made a mistake.

A number of tools can also be used to make decisions in wicked environments (where there is no feedback), such as:

· SPIES (subjective probability interval estimates): a decision-making method shown to produce more accurate estimates within a desired probability threshold. Useful for estimating project lengths or the height of an anti-tsunami wall.

· All-pairs testing: using a set of predetermined criteria for decision making. With many criteria the decision becomes too complex; using paired comparisons is a way of bringing everything together into one whole picture.

· Premortem: a managerial strategy in which a project team uses 'prospective hindsight', imagining that a project has failed and then working backward to determine what could have led to that failure.
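The SPIES item above can be sketched in a few lines of code. This is only a rough illustration of the idea, not the procedure from the book: the estimator assigns a probability to each bin spanning the full plausible range, and a central interval at the desired coverage is read off the resulting distribution. The function name `spies_interval`, the bin edges, and the weights are all invented for the example.

```python
def spies_interval(bin_edges, probs, coverage=0.9):
    """Trim equal probability mass from each tail to get a central interval.

    bin_edges: boundaries of len(probs) + 1 values; bin i spans
    [bin_edges[i], bin_edges[i + 1]].
    probs: the estimator's subjective probability for each bin.
    """
    total = sum(probs)
    probs = [p / total for p in probs]          # normalise to sum to 1
    tail = (1.0 - coverage) / 2.0               # mass to trim from each side

    # Walk in from the left until `tail` mass is consumed.
    acc, lo = 0.0, bin_edges[0]
    for i, p in enumerate(probs):
        if acc + p > tail:
            frac = (tail - acc) / p             # interpolate within this bin
            lo = bin_edges[i] + frac * (bin_edges[i + 1] - bin_edges[i])
            break
        acc += p

    # Same walk from the right.
    acc, hi = 0.0, bin_edges[-1]
    for i in range(len(probs) - 1, -1, -1):
        p = probs[i]
        if acc + p > tail:
            frac = (tail - acc) / p
            hi = bin_edges[i + 1] - frac * (bin_edges[i + 1] - bin_edges[i])
            break
        acc += p

    return lo, hi

# Estimating a project length in weeks, with bins 0-2, 2-4, ..., 8-10:
edges = [0, 2, 4, 6, 8, 10]
weights = [0.05, 0.25, 0.40, 0.25, 0.05]
print(spies_interval(edges, weights, coverage=0.9))  # roughly (2.0, 8.0) weeks
```

The point of the exercise is that asking for probabilities over the whole range, rather than a single "90% interval" guess, tends to counter overconfidence.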


2) Detect and learn from the early warnings that complex systems often provide

To manage complexity we need to learn from the information our systems throw at us in the form of weak signals of failure: small errors, close calls and other warning signs.

Anomalising: organisations such as airlines have figured out a process for learning from small lapses and near misses: gather (e.g. by collecting close-call reports); fix; find root causes (dig deeper); share (sometimes not just within a company but, as with Callback, across the entire aviation industry; a regulator could play a role in that); and audit (ensure the solutions actually work).

An organisation’s culture sits at the centre of all this – a culture must be created where mistakes and incidents that have occurred in the system are openly shared.

3) Make effective teams

Features of effective teams that prevent catastrophic failures:

· Members speak up - when they know about hidden risks or suspect that something is not right. Sceptical voices are crucial. The conditions for this are that 1) people know how to raise concerns, 2) power cues are softened, and 3) leaders speak last.
· Diversity – diversity creates less familiar and even less comfortable environments that make people more sceptical, critical and vigilant, all of which makes them more likely to catch errors.
· Outsiders' input - input from other parts of the organisation, or from outside it, is very valuable, as outsiders' position is more objective and lets them see different things than insiders do.
· Monitor and diagnose - a common source of failure is plan continuation bias: continuing to perform the task even when circumstances change. In very tightly coupled systems, stopping is not an option (critical surgery, a runaway nuclear reactor), but the teams that best prevent failures are the ones that have learned to rapidly cycle through 'perform task', 'monitor', 'suggest diagnosis'.
· Role shifting - when a team without much cross-training faces a surprise in a complex system, a meltdown can result. Role shifting requires several people to know how to accomplish a particular task, but it also means everyone needs to understand how the various tasks fit into the bigger picture.
Mark Mitchell
157 reviews · 1 follower
September 19, 2020
Clearfield's book is a compendium of disaster tales, a collection of ghost stories for the modern age. He tells of train crashes, plane crashes, oil spills, and nuclear meltdowns. In each case, Clearfield shows how "complex, tightly-coupled" systems (defined as those whose inner workings are opaque and where interactions between the parts can cause the failure of the system as a whole) can result in calamity. Clearfield's work references Charles Perrow's research and thinking extensively. Perrow's Normal Accidents: Living with High-Risk Technologies is an older book covering some of the same ground.

Clearfield provides an excellent exposition, in easily readable form, of Perrow's thesis: that complex, tightly-coupled systems fail in unpredictable, often catastrophic ways. Clearfield leads the reader through each actual failure, showing how a cascade of failures caused the whole thing to come crashing down. Then, in the second portion of the book, he attempts to show how better practices can mitigate the risks of such systems. While the ideas are valuable (make it hard for operators to do the wrong thing, learn from near-failure situations, encourage dissent), I am skeptical that they are sufficient.

Nassim Nicholas Taleb's Antifragile: Things That Gain from Disorder discusses some of the same ideas. One of Taleb's deep insights is that one should not build systems whose failure is truly catastrophic: a system with a small probability of an unacceptable failure is not a well-designed system. To be fair, in this framework a plane crash is an acceptable failure, as it kills hundreds and costs merely tens of millions of dollars; an unacceptable failure, in contrast, causes tremendous damage over a wide area or to a huge number of people. Clearfield is more concerned with how to reduce the risk of what I have termed an "acceptable" failure. Nonetheless, those interested in Clearfield's book should also read Taleb's work. Those of us trained as engineers tend to lack the humility to refrain from building systems that we cannot possibly make safe, and Clearfield's work helps to show how difficult it is to create a truly safe system.

Clearfield's book makes breezy reference to other popular works in related fields, including the outstanding Thinking, Fast and Slow by Daniel Kahneman and Smarter Faster Better: The Secrets of Being Productive in Life and Business by Charles Duhigg. Readers who have already digested those titles will find parts of Clearfield's book repetitive. Still, those who have not will receive the benefit of a quick overview of other recent contributions to psychology and habit research. All in all, Clearfield has done readers a service by providing a clear view of the risks of modern systems and important risk-mitigation ideas.
