
Learning From Failed Decisions

Paul C. Nutt, PhD

Performance Improvement Quarterly, 23(3), pp. 15-38. © 2010 International Society for Performance Improvement. Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/piq.20088

Abstract: The consequences and dilemmas posed by learning issues for decision making are discussed. Learning requires both awareness of barriers and a coping strategy. The motives to hold back information essential for learning stem from perverse incentives, obscure outcomes, and the hindsight bias. There is little awareness of perverse incentives that encourage cover-ups and limit discussion. The article shows how cover-ups arise, how to root out perverse incentives, and how to encourage disclosures to understand failure. Cases are used to illustrate perverse incentives and how cover-ups arise. Then actions that encouraged learning as well as ways to deal with obscure outcomes and hindsight biases are offered.

My 30-year study of decision making collected over 400 decisions, identifying practices meriting emulation and those to avoid (Nutt, 2008, 2010b). This stream of research uncovered how organizational leaders carry out decision making and the success realized, accounting for the situation being confronted. Each decision-making practice was linked to decision outcomes and their consequences, controlling for content and context. To illustrate why some practices work and others do not, cases pinpointed what influences decision-making success and failure.

There were several key findings. First, and perhaps most important, half of the decisions studied failed (Nutt, 1999). Generalizing this finding suggests that failed decisions are a commonplace event in organizations, producing both wasted resources and forgone benefits (Nutt, 2002). Further analysis found that decision makers were not at the mercy of the situation. Rather, decision maker practices had more influence on success than did customer tastes, interest costs, regulations, and other situational constraints that erect barriers and pose difficulties. When decision makers followed best practices, success doubled.

A key objective of the research was to identify and codify both best practices and practices to avoid. This article focuses on learning practices, following the traditions of Huber (1991), Kolb (1983), Nystrom and Starbuck (1984), and Senge (1990). (The other six practices—validate claims, manage social and political forces, set desired results, search broadly, evaluate to expected results, and consider ethical issues—are discussed in Nutt, 2002.) Unsuccessful decision makers were misled by windfall successes and bad luck failures, as well as hindsight biases, making it difficult for them to see causes of failure, which is essential for learning to occur. Decision makers in my studies did not reflect on what went wrong in a failed decision. Instead, they took steps to hide the reasons for failure with a cover-up. This behavior was linked to perverse incentives that rejected the possibility of failure, even though some failure is unavoidable. Successful decision makers were able to see fallacies in embracing windfall successes and decrying bad luck failures. They also were likely to recognize a richer set of possible causal events. This allowed them to recognize and then neutralize perverse incentives, creating an environment in which learning can occur. This article shows how perverse incentives arise and what to do to root them out so learning can occur. First, the nature of perverse incentives is discussed and then how misleading outcomes and biases arise.

My discussion of decision debacles (those that produce disastrous outcomes with long-term effects) will be used to illustrate key points. The cases, drawn from my database, highlight failure-prone practices (Nutt, 2002, 2010a). Examining decisions in this way has considerable precedent, such as Snyder and Page (1958) and their study of the Korean War, Hall's (1984) exposure of the Bay Area Rapid Transit fiasco, and McKie's (1973) study of London's aborted third airport. My own studies examined an electric utility's buying a village to avoid dealing with pollution, Perrier's unnecessary recall of its bottled water to signal a commitment to product quality, and CompuServe's botched attempt to acquire AOL (Nutt, 2010a). Following this tradition, the article draws on cases to illustrate key points. First, an investment debacle at Barings Bank is used to demonstrate perverse incentives and the reactions provoked in organizational members who possess information essential to learn about the causes of failure. This decision is profiled in Table 1. Several additional debacles, drawn from my cases and summarized in Tables 2 and 3, will be used throughout the article to illustrate key points (Nutt, 2002).

TABLE 1. SECURITIES TRADING AT BARINGS BANK

Claims: Results seem good, no need to monitor means or methods.
Concerns (recognized): None (officials ignored the bank's position and practices).
Concerns (hidden): Audit irregularities; profit drives principle; reputation and image threatened; risk and gain trade-offs.
Directions: (Implicit) make profit any way you can.
Options considered: More of the same.
Extent of search and innovation: None.
Use of evaluation: Routine audits, external audits, standard reporting procedures.
Impact of evaluation: None.
Barriers to action: Protecting the bank's image and reputation paramount; resistance to change; monolithic structure; lack of know-how (incompetence); structure allows accountability slip-ups.
Ethical concerns: Bonuses encourage risky behavior when low-risk investments are bank policy; accountability ignored; scapegoating; evasion of responsibility.
Barriers to learning: Culture and incompetence; regulatory and audit divisions of the bank unwilling to challenge upper management.
Perverse incentives: Bonus system.

What Blocks Learning

Research into failed organizational decisions finds that key players evade a postmortem discussion (Nutt, 1999). In the wake of a failed decision, decision makers are reluctant to reveal information that could expose mistakes and errors that could be linked to their actions. In my studies, no one involved in a failed decision revealed anything that could be incriminating. Analysis found that obscure outcomes and a hindsight bias set the stage for perverse incentives, which coaxed key players to conceal information.

Perverse incentives entice decision makers to do things that those higher up in the organization say they do not want. Some perverse incentives reward the wrong things, and others stem from the organization's climate. For example, some information systems quantify what is easily measurable, such as costs, and ignore more substantive issues crucial to a business, such as quality. The U.S. automotive industry was acutely aware of its cost structure but essentially ignored quality tracking for decades. Insights into quality issues that could have been collected systematically from customer complaints and dealer responses were disregarded as company officials postured about the importance of quality. Officials failed to see that subordinates were responding to what was being implicitly emphasized in measurement. Espoused aims (quality) and reporting system drivers (cost) created confusion about what was important and prompted anyone who could be blamed for low quality to take defensive action.

A more insidious perverse incentive is woven into the fabric of an organization's climate. For example, some leaders allow unrealistic expectations to take root and grow. Subordinates are given an assignment and told to give their best effort. As hoped-for results gravitate into expectations, pressure mounts. To get choice assignments, one must "up the ante." Commitments are made that guarantee a good result. This can be difficult to pull off because success often depends on matters beyond the individual's control. In such a climate, those higher up in an organization expect every decision to be a winner and hand out swift reprimands when there is failure. Decision makers observe and learn that failure is not tolerated. They also see that many decisions, their own and those of others, fail, a dilemma-producing situation. Failures are unavoidable, but those who are higher up expect success and punish failure. A perverse incentive is created that coaxes people to hide information that could explain what prompted failure.

In the wake of a failed decision, a decision maker does not wait for the second shoe to drop. If there could be an error, a lapse of judgment, or a traceable oversight, the decision maker has considerable motivation to contrive excuses and take defensive action. This can be done in several ways. Decisions can be buried. A defense is prepared for decisions that could get unearthed, to shift attention to favorable consequences and discount the importance of unfavorable ones. Reports are lost, consequences buried in a mass of detail, and lines of responsibility blurred to distract, confuse, and mislead. If all this fails, excuses are contrived to rationalize the outcome. Soon it is not clear what happened or why. Higher-ups hear contradictory tales and must plow through masses of detail, which draw them away from asking questions that could be revealing. Sorting all this out fritters away time, with dubious benefits. Shrouded outcomes and clever rationalizations hide failure-prone decision-making practices, giving no one reason to question the practices. I saw these maneuvers in many of the failures in my cases (Nutt, 2002, 2010a).

Organizational leaders are often blind to perverse incentives and their own role in creating them. They fail to see how a culture that refuses to accept failure forces people into a defensive position. The pressure to produce results, when some failure is inevitable, creates the perverse incentive. Any admission of error will bring misery. But rational folks avoid misery, which creates a powerful incentive to hide one's mistakes. Even when there is little chance of hiding the error, rational people do not own up. Because a decision could turn out badly, no one collects information about it. The hope is to deflect questions that could expose the decision maker to sanctions. Bad outcomes are revealed in carefully measured doses, if at all. This refusal to face reality is often viewed with alarm by onlookers, who fear that organizational decision makers are either oblivious or unwilling to deal with the consequences of their actions. Insiders are seldom sufficiently empowered to speak out. To remove the threat posed by a decision, leaders must break through this veil of fear.

In sum, perverse incentives block attempts to conduct appraisals. These incentives set in motion defensive action. To deflect questions, bad news is offset with good news. Such cover-ups are always two-tiered: the distorted good news and the steps taken to mislead with misinformation. The latter action leads to a cover-up of the cover-up, which grows to include the act of deception. Decision makers must hide both the creation of misinformation and the steps that they took to deceive. Such behavior is quite rational, but it effectively blocks learning (Argyris & Schon, 1978). The need to keep the lid on swamps all other considerations. When this occurs, leaders and people in oversight roles have no idea what is going on. Matters that become undiscussable in this way produce gaps and inconsistencies in reasoning, and in how the reasoning applies to a decision, that go undetected. Discussion, which allows the reasoning behind a decision to be dissected, is crucial to learning. The lack of discussion during the collapse of Barings Bank will be used to show how learning is blocked in failed decisions.

The Collapse of Barings Bank

In 1995, Barings Futures Singapore (BFS) went bankrupt, leaving in its wake hundreds unemployed and debts of $1.7 billion. The Barings group was purchased for the token price of 1 pound sterling by a Dutch banking conglomerate. How did Barings Bank, a storied institution dating to 1763, come to such a fate? (See Table 1.)

The story begins with Nick Leeson, a young trader with a knack for hard work and cutting corners. Leeson began work for Barings in the futures and options settlements section. In his book (1996), Leeson notes that no one at Barings knew anything about futures trading. In addition, bank officials asked no questions about such trading for fear of appearing incompetent. Leeson saw his superiors as fools and the bank's rules as archaic, and thus easy to circumvent. His first opportunity to do so came in 1990. At the Jakarta office, management had allowed 100 million pounds sterling in unclaimed share certificates to build up. They had no idea what to do about this, so they ignored the situation. Leeson was able to fix things by covering the 100 million pound loss after a year of effort and was rewarded with the post of general manager of Barings Futures Singapore.

Later that year, Leeson created the infamous 88888 account in the Singapore office to handle "error accounts." The account was originally set up to hold, for later payment, clients' losses from small trading errors made by the bank. The "five eights," a kind of round-off error in a stock trade, indicates how errors occur (Reyes, 1995). The account was to be cleared quickly as adjustments were made. The London office decided to handle this centrally and ordered all local offices to close their 88888 accounts. But Leeson left the account open and used it to hide a 20,000 pound error.

Later Leeson began making unauthorized trades on behalf of the bank, booking the trades to the 88888 account. Initially he had some success, but by midyear, he had incurred 1 million pounds in losses. His trading behavior was akin to that of gamblers in their attempts to recover a loss by doubling the next bet. This trading approach failed and produced still more losses. Success in covering up the earlier error coaxed Leeson to hide his trading losses in the 88888 account. He then contrived a series of cover-ups to conceal losses from auditors and inflate his section's profits. Top management accepted the reported profits and made funds available to Leeson, who now could make still more trades.
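The arithmetic behind this loss-chasing pattern is unforgiving. A minimal sketch (illustrative only; the unit stakes are hypothetical, not figures from the case) shows how a double-after-a-loss rule makes the required stake and the sunk losses grow geometrically with the length of a losing streak:

    # Illustrative sketch of the "double the next bet after a loss" pattern.
    # After n straight losses at a unit stake, the next bet must be 2**n units
    # and the cumulative loss already stands at 2**n - 1 units.

    def doubling_exposure(consecutive_losses: int, unit_stake: float = 1.0):
        """Return (next_stake, sunk_loss) after a run of consecutive losses."""
        next_stake = unit_stake * 2 ** consecutive_losses
        sunk_loss = unit_stake * (2 ** consecutive_losses - 1)
        return next_stake, sunk_loss

    for n in (0, 2, 4, 6, 8, 10):
        stake, loss = doubling_exposure(n)
        print(f"after {n:2d} losses: next stake = {stake:6.0f}x, sunk loss = {loss:6.0f}x")

A trader who cannot fold, and who can hide the sunk losses in an account like 88888, faces exactly this exposure curve.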

Leeson was trading in risky derivatives in the name of the bank. Derivatives are financial instruments designed to provide trading parties a measure of protection against unexpected price shifts. The trader lays out a portion of the value of the asset, hoping to benefit from the asset's price fluctuations. Futures and options can be bought. A futures contract stipulates a delivery date for a commodity, such as silver or oats, or a financial instrument, at a specified price. Most are cash settled without the asset being delivered. A buyer receives or pays the difference between the market and the contract price of the asset. An option gives the buyer the right to buy or sell shares at a fixed price at or before a future date and pays a fee for this.
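The cash-settlement mechanics just described reduce to simple arithmetic. A minimal sketch (the prices, position sizes, and premium below are hypothetical, chosen only to illustrate the two payoff rules):

    # Cash-settled future: the buyer receives (or pays, if negative) the
    # difference between the market price and the contract price.
    def futures_settlement(contract_price: float, market_price: float, units: int) -> float:
        return (market_price - contract_price) * units

    # Option: exercised only when it pays; otherwise the buyer's loss is
    # capped at the fee (premium) paid for the right.
    def call_option_profit(strike: float, market_price: float, premium: float, units: int) -> float:
        payoff = max(market_price - strike, 0.0) * units
        return payoff - premium

    # A market fall from 100 to 92: the futures buyer pays the full drop,
    # while the option buyer loses only the fee.
    print(futures_settlement(100.0, 92.0, 1_000))          # -8000.0
    print(call_option_profit(100.0, 92.0, 500.0, 1_000))   # -500.0

The asymmetry is the point: an unhedged futures position of the kind Leeson booked carries the full downside of a price move.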

Leeson was expected to deal in low-risk derivatives or the arbitrage of derivatives between the Nikkei and the Singapore International Monetary Exchange (SIMEX), which is risk free. Bank officials should have questioned Leeson's reports because huge profits from low-risk derivative trades are highly unlikely. However, times were tough, and banks were piloting new ways to make money. Austere conditions had forced banks to move into stock market investments to supplement the modest fees earned from borrowing from and lending to clients. This may have been a factor in the failure to question Leeson's activities. Instead of questioning, management advanced 800 million pounds to cover his position. As Leeson (1996) says, "[Higher-ups] wanted to believe that my profits were real." In addition, the bank reinforced Leeson's activities by handsomely compensating him with over 1 million pounds annually in salary and bonuses.

Top managers at Barings brought about the debacle by their lack of oversight. A recount of the opportunities to take action that were ignored or mishandled is revealing. Early on, an internal audit concluded that no one was supervising Leeson, but nothing was done to remedy the situation. At the time, Leeson was sitting on losses of 1 million pounds. The bank could have been saved had top management acted on the audit report. Leeson's superiors knew of his dual role as chief trader and settlements head but ignored it. The initial audit questioned their decision, but Leeson's bogus profits made the auditors wary of questioning someone who was creating so much profit for the bank.

Poor accountability at Barings allowed the situation to ripen. A matrix structure had Leeson reporting to local managers in BFS and product managers in London, and both assumed that the other was accountable for Leeson. The bank had norms for risky positions, trading performance, and funding allocations that were to be monitored daily by its Asset and Liability Committee (ALCO). Yet Leeson's huge profits, which exceeded the Barings group assets, failed to attract ALCO's attention. Instead, ALCO was preoccupied with finding ways to fund Leeson's trading. Early in 1995, a SIMEX margin call alerted the ALCO to Leeson's trading. At this point, ALCO asked Leeson to reduce his positions but never followed up to see that this was done (Inspectors of Barings Futures, 1996).

No one questioned Leeson's methods because his results were good and no one in London had the slightest idea how futures and options worked. When a later audit by Coopers and Lybrand uncovered a 50 million pound shortfall in Barings's futures operation, Barings officials dismissed it as a routine error. As these events unfolded, it was common knowledge in the Singapore markets and SIMEX that Barings was teetering on the brink of disaster. Early the next year, still another audit blew Leeson's cover, and this time he resigned. Soon after, Leeson was convicted of illegal trading and sentenced to 8 years in a Singapore prison.

Observers contend that the Barings debacle could have been avoided if the most basic control mechanisms had been in place (Overell, 1995) and if the bank's climate had allowed questioning (Brilliant, 1995). Instead, a profit incentive dominated, driving out any other consideration. Greed made bank officials blind to the methods being used (Fay, 1996). Leeson was, in part, a victim of this. Bank officials encouraged him to continue his risky trading and rewarded him for doing so. Such an incentive to perform needs some qualifications, and making them clear is the responsibility of top management. The climate of the bank also created perverse incentives. Pride and reputation at Barings had dysfunctional consequences. A reputation as being a cut above the rest came from being one of England's oldest banks. Top management's conceit kept them from asking questions to avoid looking foolish. However, they were not above letting Leeson take the fall for their failures, although top management is clearly responsible for make-or-break decisions.

Learning Failures at Barings Bank

Perverse incentives and cover-ups prompted learning failures at Barings Bank. Leeson hid his illegal trading, and bank officials hid their inability to understand the futures trading business. Top management created incentives that enticed Leeson to do what Barings officials insisted that they did not want: high-risk trading. Bank officials baited Leeson with enormous bonuses, and Leeson's own greed pushed him to take the bait. As he notes, "If the market had turned, I'd have made millions and become a hero" (Leeson, 1996). All this was made possible because no one talked about it. Top management, their appetite whetted by huge profits, failed to see events realistically. They "did not see that they did not see" that such profits were very unlikely. Bank officials could not admit they had no way to tell what was really going on. They covered up their incompetence and then covered up the oversight failures that the incompetence created. Being considered a "cut above" makes it hard to acknowledge questionable practices. Perverse incentives in the bonus system made a bad situation worse.

The bank’s storied reputation belies the actions of its top managers.Reputation and profit by any means are likely to be incompatible. Bankofficials instructed traders to trade in only low-risk securities but rewardedthose who engaged in high-risk trading. Huge bonuses send a clear messageabout what was valued. A ‘‘greed is good’’ dictum was being followed.Continuing success makes it difficult to raise questions about practices thatmade the success possible. Officials at Barings also postured about their guilt.Checks and balances were not in place, even though no one in the tradingbusiness is allowed to book his or her own transactions.

The regulatory structure in the United Kingdom failed to learn as well. As a key official said, "The events leading up to the collapse of Barings do not, in our view... point to a need for any fundamental change in regulation" (Sraeel, 1995, p. 4). Such a position makes more of the same possible. Others in charge of regulation have failed to learn as well. A year later, a trader at Daiwa Bank lost millions and did not report the loss to the U.S. Securities and Exchange Commission, which led to a revocation of the bank's charter in the United States. The recent worldwide subprime mortgage scandal illustrates how government officials continue to avoid exercising appropriate oversight of such trading.

The insidious creep of avoiding discussion keeps decision makers from seeing a situation that is spinning out of control. The pattern of deception has good news offsetting bad news, cover-ups of the information dumps, and a cover-up of the cover-ups, none of them discussed. Outcomes and missed opportunities, as well as a summary of the cover-ups and discussability for the Barings Bank case and several other debacles, are provided in Table 2.

TABLE 2. LEARNING BARRIERS FOUND IN DECISION DEBACLES

Barings Bank
Outcomes: 1.7 billion pound losses and the bank's collapse; Barings group purchased for $1.00 by a Dutch conglomerate.
Missed opportunities: Corrective action to recognize and stop trading.
Cover-ups: Risky trading; failure to understand futures; reasons for rejecting audit disclosures; bonuses for high-risk trading.
Discussability: No one at Barings would engage in questionable practices; an environment in which people can say one thing and do another.

Denver International Airport
Outcomes: Delayed opening; controversy over its cost; baggage system failures.
Missed opportunities: Using the Munich system to benchmark system problems and time lines; running conventional baggage handling in tandem with the new one.
Cover-ups: Time lines overly optimistic; baggage system design oversights.
Discussability: Cannot reveal design oversights; tactics used to drum up support in Congress.

EuroDisney
Outcomes: Opening with sabotaged systems, bomb threats, a transit strike, and a French farmer boycott; ten years to recoup the investment.
Missed opportunities: Other locations, other projects; putting French culture into operations (alcohol, picnicking); always one park behind.
Cover-ups: Limiting downside risk also limits upside gains.
Discussability: Cannot admit failure to consider French culture; decision made for wrong reasons, perhaps no reasons.

Nationwide Arena
Outcomes: Columbus, Ohio, attracted a National Hockey League team; appearance of duping the public to subsidize a private sector project.
Missed opportunities: Better off building without public participation.
Cover-ups: Plan B; extent of corporate welfare in plan A and plan B.
Discussability: Cannot discuss the backup plan because it would reveal a deception.

Shell's Brent Spar
Outcomes: Consumer boycott; social consciousness questioned; now use stakeholder assessments.
Missed opportunities: Failure to involve a credible environmental group; underestimated public outcry to a decision seen as environmentally unfriendly.
Cover-ups: Errors in estimates of environmental impact.
Discussability: Unable to admit errors or the lack of use of best-case scenarios.

Quaker-Snapple
Outcomes: Losses of $85 million; financed by selling profitable pet food and bean divisions.
Missed opportunities: Internal restructuring to thwart the takeover; improving Gatorade's distribution and marketing; early divestiture of the bad acquisition.
Cover-ups: Faulty motives and decision process; firing those who object.
Discussability: Reasons for the acquisition and how the decision was made; board complacency.

Rooting Out Perverse Incentives

Argyris, Putnam, and Smith (1987) offer a simple case that illustrates how people become caught in an undiscussability dilemma and what is required to get them out of a predicament. Consider the following example. In my training sessions, I often pose this situation. The chief operating officer (COO) of a company, while doing a performance appraisal of a long-time senior-level employee with a record of poor performance, told him: "Your performance is substandard, and you appear to have a chip on your shoulder. I've heard the words lethargy, uncommitted, and disinterested used to describe your efforts. We can't have this in our senior-level people. I know you want to discuss injustices you believe people have inflicted on you in the past, but this is history, and a rehash of history won't get us anywhere. Let's talk about today and your future with the company." I then ask for an appraisal of the COO's appraisal and ask, "How effective was the COO?" Most say that the approach has little chance of success. Asked to go further, attendees at my executive training sessions describe the COO's approach as judgmental, degrading, accusatory, threatening, and even bullying. They say that the likely impact on the listener is to create panic and defensive actions. They then say that it is counterproductive and unjust to pressure someone and not to listen to what the other person has to say. Many sum this up as a "win" orientation that emphasizes being rational to achieve a purpose but suppresses feelings while doing so. One's position is advocated to save face. This face-saving then leads to miscommunication, self-fulfilling prophecies, and escalating errors.

Then I ask the attendees what they would say to the COO who gave the appraisal. They invariably reproduce the same kind of appraisal, without being aware they are doing so. This illustrates the undiscussability trap, one that we can all fall into. To avoid the trap requires a new approach.

The Ladder of Inference

A new way to approach inquiry, and its inference making, is needed to dodge the learning trap set by perverse incentives. The recommended steps are observable data, culturally dictated meanings, mutually understood meanings, and our conclusions (Argyris, 1982).

A crucial step is to agree on the data to be used. First, provide directly observable data—those in which the origin is made clear. Ask the other party if he or she agrees with these data and wants to offer anything more. This can take some time, so allow the other party time to collect these data and offer qualifications, ramifications, and rebuttals to your data. Second, explore how the other person sees your data, and seek confirmation with that person. Third, make explicit judgments and state opinions to show when the consequences of the other's actions were inevitable, but avoid saying that it was his or her intention to produce these consequences. This keeps the appraiser from unilaterally controlling the situation by making attributions about the other person's motives. Encourage the other person to express feelings and ideas along the way. The ladder of inference calls for data sharing and testing, finding data that both parties agree are relevant, making inferences with these data, agreeing on the inferences being made, and developing a shared conclusion.

The aim of this conversation is to drive out perverse incentives, implicit and explicit, and set in their place new incentives that alter the decision-making climate. The hoped-for shift moves from a climate of blame to one that encourages people to disclose their practices and appraise them. To learn about what does and does not work in the search for best practices requires norms that stress the common good. Such norms must be created and institutionalized before decision outcomes will be made public for review and comment. Decision makers can then inquire about and reflect on previous decisions in which there was a learning failure.

Applying the Ladder

To show how perverse incentives can be rooted out and how learning could have been facilitated, I use the cases in Tables 2 and 3. This requires piecing together facts available at the time for each of the cases.

TABLE 3. DECISION DEBACLES AND THEIR PERVERSE INCENTIVES

Quaker's acquisition of Snapple: Quaker is a diversified company making products ranging from pet food to cereal. The CEO acquired Gatorade and marketed it into a star. The CEO's approach to due diligence consisted of tasting the product. Amid rumors of a takeover, the same approach was applied to Snapple in an effort to acquire debt, making Quaker a less desirable takeover target. Snapple, however, had neither manufacturing nor distribution synergy with Gatorade. Gatorade was centrally produced, and Snapple was produced locally by franchises with firm contracts. Snapple would have to give up lucrative supermarkets to distribute Gatorade to its smaller outlets. Snapple had $20 million in outdated inventory due to poor inventory control procedures. An acquisition undertaken for questionable reasons and with faulty analysis proved to be disastrous. The CEO charged underlings with "making it work," and when they failed, he blamed them. Snapple was sold for $300 million, far below its $1.8 billion purchase price.
Perverse incentives: Hero image hard to maintain (success the norm); windfall outcome distorted how well past practices worked.

Denver International Airport (DIA): The inadequacies of Stapleton Airport, Denver's close-in airport, included runways that were too close together and too short, posing safety concerns and limiting flights in inclement weather at the country's sixth busiest airport. Pena, in his candidacy for mayor of Denver, saw this as an opportunity, and he called for a new airport instead of a planned renovation. Ballot initiatives proved controversial, but after years of effort, the DIA was approved. Pena spearheaded a state-of-the-art facility with a dramatic mix of architecture and technology located more than an hour's drive from the city on land owned by his family. Studies and contracts were let to family and friends to do key work. Costs were misrepresented to make the project seem feasible. This increased airport financing costs when its bond rating approached junk designation. Technology was faulty, leading to long waits for bags and interairport transport. Travelers grumbled about the time and cost to get to Denver. The DIA's opening was delayed five times. The project came in at $4.9 billion, well over budget, which more than doubled the cost per passenger.
Perverse incentives: Accurate cost estimates make the project look bad; Pena's personal interests.

EuroDisney: Walt Disney had a long-standing fascination with Europe and hoped for a presence there. Soon after a park opened in Tokyo to record crowds, Eisner, the CEO at the time, set out to realize "Walt's dream." Two hundred sites were considered, and possibilities were quickly narrowed to Spain and France. The French government provided considerable financial bait, offsetting the negatives of weather and the disposition of the French. At the park opening, Eisner was pelted with brie cheese. French intellectuals called the park "Euro-dismal." The fiasco continued a record of correcting errors made in the previously constructed park while failing to anticipate those in the current one. The Anaheim project had failed to develop land around the park, Orlando had plenty of land but underestimated hotel demand, and in Tokyo, officials failed to secure royalties for Disney characters. The French deal seemed to overcome all this by getting land at bargain prices and government subsidies. But Eisner forgot the park was just a short train ride from Paris, arguably one of the top travel destinations in the world. As a result, hotel occupancy at the park was just 37%. Demand was influenced by costs and by limitations on picnicking and alcohol. Europeans wanted to see Americana in America, if at all, not in France. Losses reached $1 million a day, ending Eisner's run as a miracle worker.
Perverse incentives: Eisner wanted the project, so others expected to support it; Walt's dream.

Shell's disposal of the Brent Spar: The Brent Spar, a huge floating oil storage facility weighing nearly 15,000 tons, served the North Sea oil field off Scotland. But completion of a pipeline rendered the Spar obsolete, and attempts to refurbish or sell it were unsuccessful. After its decommissioning, issues arose concerning its structural integrity. Company officials posed three disposal options: dismantle it in a port and sell it for scrap, dump it on location, or dump it at a deep-sea site. After analysis, a deep-sea site was selected, and governments in Norway and the United Kingdom were informed, with no objections voiced. As the time approached to dump the Spar, Greenpeace mounted a furious campaign to stop what it called a "precedent-setting environmentally irresponsible act." To make the point, Greenpeace activists flew to the oil platform in a helicopter and boarded it amid worldwide TV coverage. Activists then made several claims that were either exaggerated or outright misrepresentations. Officials at Shell had made misrepresentations as well, understating contaminants in the Spar and overstating its structural integrity. Greenpeace argued that one misrepresentation deserved another. The ensuing media feeding frenzy forced Shell to abandon its plans and dismantle the Spar in port, at a huge increase in cost.
Perverse incentives: Fear of future disposal rules; suspicion of public partners; openness not expected to be reciprocated.

Ford Pinto: Ford's recall blunders have a long history. The Pinto, with its exploding gas tanks, arguably heads the list. The safety of the car was in doubt from the beginning, when crash tests found that 8 of 10 cars failed. The 3 that did not fail had a gas tank fix. Company officials rejected a fix of $2.35 a vehicle, $137 million in total, as too expensive. They reasoned that the cost of a fix exceeded the cost of paying those injured and their families, estimated at nearly $50 million for injury compensation and car value (as shown in court documents). Ford officials reasoned they were making a low-cost car that could not be expected to perform perfectly, when in fact there was risk to owning one. A tragic accident occurred when three college women stopped on a berm in a Pinto. A rear-end collision exploded the gas tank, killing all inside. Lawsuits ensued that made public Ford's policy regarding recalling the Pinto. The company dodged a huge loss in court with a technicality, but its reputation was damaged for decades.
Perverse incentives: Profit drives principle.

Nationwide Arena: Local leaders in Columbus, the largest city in Ohio, had long sought a sports team to overcome the city's provincial image. They quickly determined that teamless cities are involved in a one-sided courtship, drawn into bidding wars to offer new stadiums with luxury boxes and other amenities to attract the attention of sports team owners. When the National Hockey League (NHL) listed Columbus as a potential site, it galvanized yet another effort, and local leaders put together a plan to attract a franchise. Following the courtship strategy, they proposed a 0.5% sales tax to fund an arena to house the team and lure the NHL into the city. Officials claimed there was no plan B to capture the long-sought-after professional team. After a bitter campaign, voters soundly defeated the proposal. The next morning, several local leaders put up the funds to build an arena, with surreptitious public monies to gift and fix an unused site in the downtown area, presented as "urban development." To sweeten the pot, arena real estate taxes were to be slashed to a fraction of the facility's worth, with the principal partner, Nationwide Insurance, offering the public schools a token sum to compensate for the $65 million owed if the arena were valued at market. Local community officials were aghast at the proposal, but the arena was built as planned. Current market conditions have forced Nationwide to seek help to fund the bonds used to finance the project. It has found no takers.
Perverse incentives: A team at any cost; desire to reduce risk kept owners from seeing a way to make more money; availability of public dollars in various forms.

Source: Cases adapted from Nutt (2002).


Barings Bank. At Barings Bank, both internal and external auditors attempted to offer information identifying trading irregularities to Barings top management. Senior officials could have hired a futures trading consultant to help them make inferences with the auditors' data and avoided the bank meltdown. Multiple consultants may have been needed here to substantiate what was taking place. If each brought the same message, it would offer confirmation of an emerging crisis. The lesson is to ask for help when there is a knowledge gap.

Shell and the Brent Spar Decision. Shell officials were candid as they discussed their reasons for preferring the option of dumping an obsolete oil platform (the Brent Spar) at a deep-sea location—but only to a point. Not all data were revealed, and some additional data became known after Shell had decided on a deep-sea disposal site. When Greenpeace challenged the decision, data were offered disputing Shell's claims that a deep-sea disposal of the Brent Spar would have little environmental impact. Greenpeace later acknowledged that it misrepresented its own data to bolster its position. Greenpeace's spokespersons claimed this was justified because their position was right and because Shell's data were also inaccurate.

Shell officials missed a chance to develop shared data about deep-sea disposal. Officials could have offered on-site inspection of the Spar by a third party to verify what the company had done to clean it, the sludge that remained, and the sludge's toxicity. Making this information public would have allowed Shell to correct errors in its estimates and take corrective action, such as removing the additional sludge. Each party's environmental impact claims could be turned over to a project team appointed by Shell, Greenpeace, and public officials in Norway, the United Kingdom, and other affected areas. The team would consider data offered by each party and those of their own experts.

The intent would be to explore the Spar's eventual decomposition on the ocean floor, which would release any remaining sludge, and make inferences about its environmental impact. Had such data been allowed to emerge, public support could have been gained and Greenpeace's fraudulent claims exposed. To do so, Shell would have had to hold its deep-sea disposal remedy in abeyance, pending a review by independent parties.

Denver International Airport. The remake of the airport in Denver had two options: remodel Stapleton, the existing airport, or build a new airport. The new airport option was selected. Critics claimed that a new airport was not needed. To defuse the controversy, Federico Pena (Denver's mayor and a key new airport proponent) could have set up a commission to examine the situation. Those pushing for a new airport, such as the Federal Aviation Administration, and those criticizing it, such as the airlines and Pena's political opponents, could have been asked to suggest experts for an evaluation panel. The panel would be given the task of weighing data supporting remodeling Stapleton Airport and a new airport. The question could have been settled by a public report that identified the strengths and weaknesses of the two proposals.


A similar procedure is recommended for major infrastructure projects, such as light rail and sports arenas. A bipartisan group examining the need for light rail and other large-scale image-driven initiatives to be supported with public dollars can pull together data from several viewpoints and seek a conclusion that all the data support. Each party would gain insights into new ways to view the decision. This approach would help cities, such as Chicago, assess the value of jumping on the "Olympic bandwagon" and other pricey image-conscious initiatives that demand public support. Thinking through what is expected from such decisions sets aside tired or hackneyed arguments, offering a clearer picture of needs and possibilities, the required cost, and the likely payoffs. The debate becomes more enlightened and the public interest more apt to be served.

Ford Pinto. In the Ford Pinto debacle, officials blocked a recall knowing that the Pinto had a defective gas tank that was likely to explode in a rear-end collision. In fact, Ford officials appear to have learned little: botched recalls have been replayed many times, such as those involving defective Bridgestone/Firestone tires and the rollover-prone Ford Explorer. Lawyers representing 2 million current and former Ford vehicle owners argued that a faulty ignition device in Ford vehicles caused stalling. In over 300 Ford models made between 1983 and 1995, the ignition was located near the engine block, so heat caused the device to fail and stall the engine. Internal documents show that Ford confirmed the problem and could have moved the ignition module to a cooler spot for $4 per vehicle. Lawyers claim Ford concealed information about the ignition location problem from federal safety regulators. Ford denies that these vehicles stall but has settled dozens of death and injury suits connected with stalled vehicles.

Critics and Ford had different data. They made no attempt to share the data, let alone develop a shared inference about what the data held by the two parties suggested, such as a new ignition system or a vehicle repurchase. There was little chance that either of these options would be adopted until each party learned about the other's views.

Finally, after 18 years of debate, as bad press about the Ford SUV and its tires depressed its stock price, Ford agreed in late 2001 to recall 5 million cars at a cost of $2.7 billion. Nevertheless, the parties still saw the data differently. The Ford president at the time, Nasser, said there was insufficient statistical evidence that the ignition module presented a safety problem. Federal safety officials disagreed and claimed that Ford had concealed information about ignition problems. A joint effort, with critics like the National Highway Traffic Safety Administration and Ford each offering data for the other to interpret, could have sped up recall decisions and saved lives. It might even have saved money.

Ford is currently facing $2.4 billion in claims from Bronco SUV owners, $1.7 billion for asbestos-related ailments, and $600 million for defective restraint systems. Consumer Reports magazine published reports nearly 10 years ago that the Ford Explorer and Expedition were subject to rolling over. Ford denies these claims but redesigned the vehicles in 2001 along the lines recommended by the report. This put a new light on the Ford-Firestone tire failure debacle. In the end, it may be cheaper to find accommodation than to fight such claims.

Recently, Toyota vehicles have been found to have an acceleration surge. The company was found to have concealed information about the defect, about the extent of the problem, and about the prospect that the proposed fixes will eliminate the defect. Meanwhile, Toyota's reputation has taken a huge hit. History repeats yet again.

EuroDisney. The EuroDisney fiasco could have been avoided had Eisner been open to other options and park locations (see Table 3). The supporters of all of the possibilities could have made presentations to top management. Proponents would be asked to listen to the ideas of others and present a report that incorporates what others want to do in their proposal. The report would take a company perspective, indicating which projects each proponent would adopt and why. Such a process at Quaker would have avoided the Snapple debacle, which was caused by an acquisition that lacked synergy with Gatorade (see Table 3).

Misreading Outcomes

Windfall successes and bad luck failures deflect the inquiry essential for learning. A "windfall" outcome can hide bad practices (Nisbett & Ross, 1989). Voters' defeat of tax support for a sports arena in Columbus, Ohio (see Table 3), created a windfall for Nationwide Insurance, the arena's primary sponsor: the city's "plan B" of private funding proved to be more profitable than the original plan of tax support. The decision to push tax support was hardly a good one, but it produced a good outcome, illustrating how good results can hide bad practices.

Quaker’s promotion of Gatorade hit the right market at just the righttime and produced a windfall for the company and for the CEO. The CEO saw

blind luck as an endorsement of his unprofessionalacquisition practices, which amounted to nothingmore than tasting the product. Bad luck mayaccompany good practices and lead to undesirableoutcomes. Shell did not ignore environmentalissues. Nevertheless, Greenpeace was able to blockthe company’s planned deep-sea disposal of the

Brent Spar oil storage platform. A far greater environmental threat arosewhen a damaged oil rig with full tanks was allowed to sink near the shorelineof Peru. There was no activism by Greenpeace for this dumping. No onecould have foreseen how Greenpeace claims would ignite a cause that wouldmobilize worldwide public opposition to a legal remedy. Such chance eventscan make it impossible to realize a good outcome no matter what decision-making practices are followed.

Research shows that decision makers find it difficult to link the outcomes realized from a decision with their decision-making practices (Nutt, 2002, 2010). Bad practices found in decisions that turn out well because of good luck are continued, and good practices in decisions that turn out badly due to bad luck are discredited. Decision makers are often unable to make a meaningful assessment of such practices. To make such an assessment, decision makers must be able to select among four kinds of decision postmortems—outcomes successful or not and practices appropriate or not (Nutt, 1989).
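One compact way to render this two-by-two taxonomy is as a lookup from the two dimensions to an appraisal focus. The focus labels below paraphrase the discussion that follows; they are not fixed terminology from the study:

    # Four postmortem types: practices appropriate or not x outcome
    # successful or not. Each cell gets a different appraisal focus.
    POSTMORTEM_FOCUS = {
        (True, True):   "study the context before exporting the practice elsewhere",
        (True, False):  "audit the forecast and the assumptions buried in it",
        (False, True):  "look for a windfall hiding the bad practice",
        (False, False): "check both the forecast and the practices used",
    }

    def postmortem_focus(practices_appropriate: bool, outcome_successful: bool) -> str:
        return POSTMORTEM_FOCUS[(practices_appropriate, outcome_successful)]

    # Example: good practices, bad outcome (the pattern argued for Shell below).
    print(postmortem_focus(True, False))

The point of the structure is that each quadrant calls for a different appraisal, as the next paragraphs elaborate.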

When practices are appropriate and outcomes are successful, the assessment should focus on context. To learn, decision makers account for the context in which good results were realized, denoting the time, topic, urgency, and other aspects of the situation. Failing to appreciate context can lead decision makers to export a good practice to a situation for which it is ill suited. This type of learning can be difficult. If decision makers are flush with success, the feel-good aura extends to everything associated with the success, including the practices followed. It is quite natural to apply the practices associated with a success to the next decision without much thought. Disney officials saw past park successes but not the context that made the parks appealing. People in Japan, starved for the "Disney view of America," were captivated. The French were not.

Good outcomes are also realized when results like revenues leap far beyond expectations, creating a windfall. Windfall successes have little to do with the acts of decision makers or their decision-making practices. An unbiased recall of the estimates made for key factors is needed to detect a windfall outcome. This is rare. It is much too easy to be swept away by the good fortune of, for example, sales that are much greater than what was expected. To take credit, decision makers take defensive action that makes the windfall appear to stem from their actions or their plans. When a decision maker makes a serendipitous event seem planned, others are likely to be misled and then endorse the decision maker and his or her practices.

The Nationwide Arena backers were crowing about their sports team and its average attendance of more than 16,000 fans per game, until recently, when attendance dropped because fans tired of watching poor teams. One hopes that Nationwide sees the flaws in its analysis and avoids them in future projects. Officials in cities looking at a new airport, such as London and St. Louis, should take a careful look at Denver International Airport's success. Claims that the airport is now being used have more to do with the lack of viable options than with good planning. The Quaker CEO's success with Gatorade had little to do with his personal likes and dislikes. Recall that the CEO's due diligence amounted to little more than tasting the product. He repeated the "taste test" approach for Snapple and had quite a different experience. Ford avoided a big loss in the Pinto lawsuit because of sheer luck in jury instruction (see Table 3). The company's many bad decisions on recalls since are noteworthy. A windfall outcome can keep decision makers from making a critical appraisal of their decision-making practices.

When decision-making practices are good and the outcome is not, there appears to be a need for better forecasts. Learning stems from uncovering assumptions buried in a forecast to detect risk. Note how faulty assumptions were fatal to the forecasts made in the Shell disposal decision. Shell's analysis approach used good practices with bad and incomplete data. Decision makers doing a postmortem must look at forecasts to see this.

Both decision-making practices and forecasts can be flawed. This can be difficult to discover because both the practices and the forecasts contributed to the failure. Addressing one without the other leads to incomplete learning. To avoid limiting corrective action to one or the other, both the forecast and the decision-making practices should be checked, beginning with the forecast and its assumptions. The EuroDisney decision merged questionable forecasting of hotel occupancy and park attendance with poor decision-making practices. Such a decision can be assessed by first finding how risk was handled in each key forecast (Nutt, 2002) and then conducting an appraisal of the practices used, following the guidelines offered above.

Hindsight Biases

Creeping determinism sets the final barrier to learning what prompted a failed decision (Nutt, 2002). A "hindsight bias" (Fischhoff, 1975) can emerge after a decision is made and its results fall below expectations. Critics find it hard to believe that a decision maker ignored the warning signals that in hindsight appear obvious. Because memory of what happened becomes distorted, the appraiser is misled. The person doing an appraisal consolidates chance events, which makes them seem less chancy. With sufficient consolidation, disastrous decisions will seem inevitable and thus unavoidable. A postmortem of the Challenger space shuttle disaster illustrates how this can happen.


The Space Shuttle Disaster

Many claim that the Challenger space shuttle disaster of 1986, which killed seven astronauts, could have been foreseen. There was a known risk in low-temperature launches. Engineers at Morton-Thiokol, the rocket booster contractor, found that O-rings in the booster do not seal properly at temperatures below 50°F. The engineers documented these concerns in several reports, and decision makers at the National Aeronautics and Space Administration (NASA) were aware of the reports. The engineers also claimed that the same situation had arisen during a previous shuttle launch countdown and only a last-minute increase in air temperature had averted a disaster. NASA administrators contended that nothing would ever be launched if they listened to the advice that comes pouring in from all quarters of engineers during a countdown.

From the vantage point of hindsight, the shuttle disaster seems preventable. A prior prediction was made and verified but ignored. An earlier tragedy had been averted because of a chance event: a sudden increase in air temperature at the Cape Canaveral launch site. NASA administrators were (and are) political appointees, many without a technological background. NASA leaders were unable to understand the engineers on the rare occasions that they tried. With this background in mind, blame for the shuttle disaster seems to rest squarely on the launch team at NASA.

Before buying into this explanation, consider how creeping determinism emerged in the congressional hearings set up to investigate the disaster. After a critic listens to the testimony, the risk of O-ring failure seems far more certain than it was at the time of the launch. Creeping determinism makes the explosion seem preordained, and stochastic events (the likelihood of O-ring failure) appear deterministic. The failure to heed warnings about the risk of a low-temperature launch is now taken to be the cause of the disaster, not one of several contributing factors. Other equally plausible explanations, such as bureaucratic smugness and ponderous decision processes, are swept aside (Mitroff & Pauchant, 1990). NASA's management had grown from 1,050 employees per launch in 1966 to an estimated 1,850 per launch budgeted for the shuttle program. Transactions that once took 6 weeks now take 6 months. Nevertheless, panels investigating disasters must find scapegoats and fall guys. NASA's top manager was sacked and replaced with his predecessor, who was also a political appointee with little technological sophistication. NASA's decision processes remained largely unchanged when the Columbia shuttle disaster occurred years later.

How Biases Arise

Knowledge of an outcome restricts memory (Hogarth, 1980; Nisbett & Ross, 1989). Instead of recalling the past in terms of the uncertainties present as a choice was made, one is prone to come up with a reconstruction that accounts for the outcome. To make sense of what happened, the antecedents observed are revised to create coherence and then linked to the outcome. For instance, the Japanese attack on Pearl Harbor seems predictable in retrospect, as does the economic chaos that stemmed from the Arab oil embargo. Decisions in which an outcome becomes known entice people to see the outcomes as inevitable and clearly related to available cues. When the same decision is described as having an unknown outcome, it produces less certainty and fewer and less intense cue associations.

Making predictions of yet-to-be-realized events is riddled with uncertainty. A decision maker attempting a prediction is confronted with many paths, each capturing an important contingency, and none of them can be fully understood at the time of a choice. For example, to have made more timely decisions concerning gas rationing following the Arab oil embargo required accurate estimates of the shortfalls of hundreds of petroleum products and the economic consequences of each, as well as anticipating highly unexpected events. Consumers reacted irrationally: they lined up at gas pumps with nearly full tanks to top them off, creating a chaotic and explosive situation across the country.

Predictions require imagination, flexibility, and thoughtful rumination about unexpected events that can arise. Hindsight requires little imagination and allows a critic to trumpet a now clear-cut relationship between cues and consequences. The critic sees the cues as causal because they were present when the outcome and its consequences were experienced. It is easy to make this association after an outcome is realized because uncertainty about the cues has been washed away. Memory distortions and a lack of surprise initiate the bias.

Memory Distortions. One's memory can be deceiving when the outcome of a decision is known because it distorts what happened and permits rationalizations (Tversky & Kahneman, 1973). Blame is quick when there seems to have been an avoidable error, as in both shuttle disasters. Some decisions produce good outcomes for serendipitous reasons. Decision makers become visionary leaders when their decisions turn out right and bumbling idiots when they fail. Limited memory capacity creates the need to consolidate events to permit recall. It is expedient to forget attributions that prove to be incorrect. This leads decision makers to bogus linkages between cues and outcomes that persist over time and distort their learning. Knowing how things turned out can embed bogus cue-outcome associations in memory. These bogus associations become firmly implanted and difficult to unlearn.

Lack of Surprise. When a decision maker sees the outcome of a decision, all surprise vanishes (Tversky & Kahneman, 1981). This has two implications. First, a lack of surprise suggests that there is little to learn. The decision outcome is treated as if it were preordained. Second, the causal explanation applied in future decisions is likely to fail. Consider a company CEO reflecting on the costs and benefits of product advertising. If the ads precede an increase in sales, a linkage between the ads and sales can be presumed. Other equally plausible explanations, such as a dedicated sales force and good training, are then discarded, overlooking the role that salespeople and trainers had in the success. Such an after-the-fact connection of ads to sales is a poor way to learn because the connection lacks validation. People rarely seek out information that tests beliefs formed after the fact about action-outcome relationships. If decision makers attribute success to their own acts, they may overlook equally plausible explanations for success in their dedicated and hardworking staff. This leads to disillusioned subordinates and missed learning opportunities. Finding other plausible explanations for a success provides a more penetrating view of the decision and recreates the level of surprise needed to learn. The hindsight bias assumes away this surprise.

Applying the Hindsight Bias to the Cases

Hindsight biases were found to limit learning in the failed decisions in my database (Nutt, 2002, 2010a). Civic leaders who supported the arena in Columbus point to the hockey team they attracted and attendance above the 12,000-per-game target and crow about their foresight. A hockey team now seems preordained. The Sydney Opera House, at one time the poster child for cost overruns, is now a tourist attraction and was prominently featured in the city's ads for the 2000 Olympics (Nutt, 2010a). This overlooks that the Opera House is a performing arts center disaster; it is too small to stage a major opera, and opera ticket holders on a peak day have to deal with parking and mass transit snafus (Nutt, 1989). San Francisco's BART transit line was considered an urban transit disaster (Hall, 1984) until tens of thousands of people, stranded in San Francisco after the 1989 earthquake, escaped via BART. In defending BART, urban transit advocates say that a major earthquake had been predicted for almost a century. Was BART built to rescue people stranded by a quake? In the aftermath of the quake, the news media asked engineers looking at collapsed overpasses and bridges if they had been inspected recently, a classic illustration of the hindsight bias.

EuroDisney (now Disneyland Paris) is now making money. An appraisal would ask whether stockholders should have had to wait 10 years to break even on a $200 million investment. Denver International Airport's baggage system now functions, albeit slowly, and this limited success must be weighed against the substantial investment by the city to get it to work. Critics of the Millennium Dome, an attraction in London, said, "I told you so," when it became clear that the event attracted little interest and few paying customers (Nutt, 2002). All this seems inevitable today, but each decision must be judged by the facts available when it was made.

Dealing With Hindsight Bias

Coping with hindsight bias requires reviewing a decision with all of its ambiguities and uncertainties. Parole boards are vigorously criticized when a felon who is released commits a crime. Critics ignore evidence of model behavior in confinement and the degree of risk in the comparable cases on which the decision to parole was made. To make an unbiased determination of whether parole is warranted, an appraisal must be confined to the facts available at the time the decision is made. To determine the extent to which an error is avoidable, the same doubts faced by a decision maker must be present when doing an assessment. Hearing that a released prisoner committed a crime removes all doubts and fatally biases the assessment.

There are several key steps in conducting an unbiased review for parole (Nutt, 1989). First, disguise the outcome; identifying the individuals involved is likely to reveal whether a crime was committed after the release. People are remarkably adept at gathering facts that are consistent with a known outcome, suggesting that decision makers knew all along of the risks they were taking in releasing a prisoner. To examine parole board release decisions, reconsider them with the facts available when the release decision was made, such as the record of violent behavior and findings from psychological testing. A reviewer is given access to the same information that the board had and is asked to make a parole recommendation. Note how the simulation excludes information that reveals more about the situation than the decision makers (parole board members) could have known. Then compare the number of people recommended for release who committed a crime, using data from released prisoners who did and did not commit crimes. This separates chance events from those that were foreseeable. Such a comparison provides a way to determine the difficulty of the release decision and the degree of precision that is possible. Such a proposal was met with fear by leaders of the Ohio Department of Correction and roundly rejected, again illustrating the power of perverse incentives.
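A minimal sketch of such a blinded review, in code, might look like the following. The case attributes, the reviewer rule, and the risk threshold are hypothetical stand-ins for whatever facts a board actually records; the essential point is that the reviewer function never sees the outcome field.

```python
from dataclasses import dataclass

@dataclass
class Case:
    # Facts available to the board at decision time (hypothetical fields)
    violent_history: bool
    psych_risk_score: float  # 0.0 (low risk) to 1.0 (high risk)
    # Outcome, kept hidden from the reviewer during the simulation
    reoffended: bool

def recommend_release(case: Case) -> bool:
    """Stand-in reviewer: uses only decision-time facts, never the outcome."""
    return not case.violent_history and case.psych_risk_score < 0.5

def blinded_review(cases: list[Case]) -> None:
    released = [c for c in cases if recommend_release(c)]
    if not released:
        print("No releases recommended")
        return
    base_rate = sum(c.reoffended for c in cases) / len(cases)
    release_rate = sum(c.reoffended for c in released) / len(released)
    # If the two rates are close, outcomes were largely chance events;
    # a much lower rate among recommended releases means the decision-time
    # facts carried real predictive power.
    print(f"Reoffense rate, all cases:            {base_rate:.0%}")
    print(f"Reoffense rate, recommended releases: {release_rate:.0%}")
```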

Consider a second example in which graduate school admission decisions are tested to learn the extent of precision that is possible (Nutt, 1989). A learning simulation can be set up as follows. Students known to be "stars" or "admission mistakes" are identified using data depicting their job and academic successes. The "stars" and "mistakes" are characterized with the information used to make the admission decision (test scores, grade point average, and other indicators). Members of an admission committee are then asked which students should be admitted. The track record of members of an admissions committee can be assessed according to the precision with which they can identify the disguised "stars" and "admission mistakes" as students with or without potential. The members of the committee with the best track records could be asked to train incoming committee members.
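Scoring the committee in this simulation reduces to a small calculation. The sketch below assumes applicants are identified by anonymized ids and that each member submits an admit list; the function and variable names are illustrative, not from the article.

```python
def track_record(admits: set[str], stars: set[str], mistakes: set[str]) -> float:
    """Fraction of disguised cases a member classifies correctly:
    stars they would admit plus mistakes they would reject."""
    correct = len(admits & stars) + len(mistakes - admits)
    return correct / (len(stars) + len(mistakes))

def rank_committee(votes: dict[str, set[str]],
                   stars: set[str], mistakes: set[str]) -> list[tuple[str, float]]:
    # Members with the best records could train incoming members.
    scored = [(member, track_record(admits, stars, mistakes))
              for member, admits in votes.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example: two members judging four disguised applicants
votes = {"Member A": {"s1", "s2", "m1"}, "Member B": {"s1"}}
print(rank_committee(votes, stars={"s1", "s2"}, mistakes={"m1", "m2"}))
```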

Grant review agencies could use a similar approach. The ratings of individuals serving on review panels are tabulated. Averages of rank scores are compared to the scores of each member to identify who tends to rank high and low. The results of a particular panel can then be examined in light of members' tendencies to be harsh or lenient. Panels can be constructed to meet particular aims of the sponsor, such as stringent reviews in anticipation of budget cuts. Shell should try this. Company geologists at Shell are 90% sure of a "soaking" site (one that makes money for the company) in their recommendations but have it right only 50% of the time (Nutt, 1989).
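The tabulation itself is straightforward. A sketch follows, under the assumption that each reviewer's scores are keyed by proposal id; the sign convention (positive deviation marks a lenient scorer, negative a harsh one) and the names are mine, not the agencies'.

```python
from statistics import mean

def reviewer_tendencies(ratings: dict[str, dict[str, float]]) -> dict[str, float]:
    """ratings maps reviewer -> {proposal_id: score}."""
    proposals = {p for scores in ratings.values() for p in scores}
    # Panel average per proposal, over the reviewers who scored it
    panel_avg = {p: mean(s[p] for s in ratings.values() if p in s)
                 for p in proposals}
    # A reviewer's tendency: mean deviation from the panel average
    return {reviewer: mean(scores[p] - panel_avg[p] for p in scores)
            for reviewer, scores in ratings.items()}
```

A sponsor could then seat a deliberately stringent panel by selecting reviewers with negative tendencies, or correct a particular panel's scores for its members' known leanings.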



Conclusion

Decisions produce outcomes that have costs and benefits. Learning calls for assessing these outcomes and the actions taken to realize them (Carroll, 1998). Fire departments learn by reviewing the handling of major fires. The review examines how firefighters and equipment were dispatched and used on the scene and looks for practices that should be modified. All involved are assembled to determine how the fire could have been fought differently to reduce loss of life, injuries, and property losses.

Surgeons, cardiologists, and other diagnosticians gather regularly to review heart surgery cases. Their review examines the progress of heart surgery patients by comparing notes about predicted prognosis, procedures used, and outcomes, questioning methods and sharing experiences. Such reviews are mandatory for in-service training in the best U.S. hospitals.

The partners of some consulting firms debrief consultants when they return from an engagement. They ask probing questions to learn what was done and what worked, and to offer advice. The partners look for ways to use the knowledge gained to serve new clients (do the same thing at the same billing rate for less cost) and isolate best practices. The ideas extracted from such sessions codify best practices and how issues such as downsizing were resolved. Consulting companies create knowledge management archives to document best practices and successful recommendations (Dierkes, Alexis, Antal, Stopfors, & Vonderstein, 2001).

Organizational leaders in my studies inadvertently thwarted such learning with their practices and incentives. These include perverse incentives, misreading outcomes, and hindsight biases. Perverse incentives coax people to take defensive action. This action misrepresents outcomes and events. Then a cover-up of the misrepresentation is created, which leads to a cover-up of the cover-up. The cover-up of the cover-up requires a decision maker to keep the lid on; there is no discussion. To root out the perverse incentives prompting these actions, move inquiry back to a data collection stage. Decision makers are asked to reveal their data, to allow other parties to rebut and qualify those data, and to offer their own data. They should seek data that everyone agrees are relevant. Inferences made with the pooled data to reach a joint conclusion will replace individual interests with communal ones.

Chance events make it possible for good decision-making practices to lead to bad outcomes, due to bad luck, and for bad practices to lead to good outcomes, creating windfalls. Windfalls can be misread to conclude that good practices were followed, and bad luck can be used to infer that good practices should be abandoned. Both lead to misjudging the value of decision-making practices. Forecast failures must be separated from bad decision-making practices to get a fix on the failure-prone practices that require correction.

A hindsight bias makes a decision outcome seem inevitable. The cues that precede a bad outcome are seen as clear-cut, making the consequences of the outcome seem avoidable. To appraise a decision, decision makers are called on to recreate the uncertainty and surprise experienced when the decision was made.



References

Argyris, C. (1982). Reasoning, learning, and action: Individual and organizational. San Francisco: Jossey-Bass.
Argyris, C., Putnam, R., & Smith, D. M. (1987). Action science. San Francisco: Jossey-Bass.
Argyris, C., & Schon, D. (1978). Organization learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Brilliant, D. (1995, November 1). The tone at the top. Banker, 145, 26–27.
Carroll, J. (1998). Organizational learning activities in high hazard industries: The logics underlying self analysis. Journal of Management Studies, 35, 21–35.
Dierkes, M., Alexis, M., Antal, A., Stopfors, J., & Vonderstein, A. (Eds.). (2001). The annotated bibliography of organizational learning and knowledge creation. Berlin: Buchtone.
Fay, S. (1996, November 1). The collapse of Barings. Management Accounting, 74, 14.
Fischhoff, B. (1975). Hindsight and foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299.
Hall, P. (1984). Great planning disasters. Berkeley: University of California Press.
Hogarth, R. (1980). Judgment and choice. Hoboken, NJ: Wiley.
Huber, G. (1991). Organizational learning: An examination of the contributing processes and literatures. Organization Science, 2, 88–115.
Inspectors of Barings Futures (Singapore) PTE LTD. (1996). [Inspectors' report].
Kolb, D. A. (1983). Problem management: Learning from experience. In S. Srivastra (Ed.), The executive mind (pp. 109–143). San Francisco: Jossey-Bass.
Leeson, N. (1996). Rogue trader: How I brought down Barings Bank and shook the financial world. New York: Little, Brown.
McKie, D. (1973). A sadly mismanaged affair: The political history of the third London airport. London: Croon Helm.
Mitroff, I., & Pauchant, T. (1990). We're so big and powerful nothing bad can happen to us. New York: Buchtone.
Nisbett, R., & Ross, L. (1989). Human inferences: Strategies and shortcomings of human judgments (rev. ed.). Hoboken, NJ: Wiley.
Nutt, P. C. (1989). Making tough decisions. San Francisco: Jossey-Bass.
Nutt, P. C. (1999). Surprising but true: Half of organizational decisions fail. Academy of Management Executive, 13(4), 75–90.
Nutt, P. C. (2002). Why decisions fail: Avoiding the blunders and traps that lead to debacles. San Francisco: Berrett-Koehler.
Nutt, P. C. (2008). Investigating decision making processes. Journal of Management Studies, 45, 425–455.
Nutt, P. C. (2010a). Building an action theory of decision making. In P. C. Nutt & D. C. Wilson (Eds.), Handbook of decision making. Oxford, UK: Wiley-Blackwell.
Nutt, P. C. (2010b). An empirical comparison of decision making processes. In P. C. Nutt & D. C. Wilson (Eds.), Handbook of decision making. Oxford, UK: Wiley-Blackwell.
Nystrom, P., & Starbuck, W. (1984). To avoid organizational crises, unlearn. Organizational Dynamics, 12(4), 53–65.
Overell, S. (1995, September 21). Barings collapse blamed on lack of HR strategy. People Management, 1, 8.
Reyes, A. (1995, October 27). Uncovering the cover-up. Asiaweek. Retrieved from http://www.cnn.com/ASIANOW/asiaweek/95/1027/biz1.html
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday.
Snyder, R. C., & Paige, G. D. (1958). The United States decision to resist aggression in Korea: The application of an analytical scheme. Administrative Science Quarterly, 3, 341–378.
Sraeel, H. (1995, September 1). The Barings report lacks cuts. Bank System and Technology, 32, 4.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.

Related Readings

Chernoff, J. (1995). Barings legacy: Tighter controls. Pensions and Investments, 23, 1–3.
Guan, L. K. (1996, April 1). Barings bankruptcy and financial derivatives. Asia Pacific Journal of Management, 13, 117–119.
Investment Banking. (1995, August 1). Out of control. Banker, 145, 15–16.
Ostro-Landau, N. (1995, April 1). It's not just greed, stupid. International Business, 18.

PAUL C. NUTT

Paul C. Nutt, PhD, is professor emeritus, Fisher College of Business at the Ohio State University, and professor of management at the University of Strathclyde. He received a PhD from the University of Wisconsin, Madison, and a B.S.E. and M.S.E. from the University of Michigan. He has written over 150 articles and eight books and received numerous awards for his research and teaching from the Decision Sciences Institute, the Academy of Management, the Institute for Operations Research and the Management Sciences, Emerald Publishing citations of excellence, and others. He is a fellow in the Decision Sciences Institute. His books include Handbook of Decision Making, Why Decisions Fail, Strategic Management, and Making Tough Decisions. His work has appeared in Fortune, the Wall Street Journal, Fast Company magazine, and NPR/PRI's Marketplace. He regularly consults for public, private, and nonprofit organizations and does executive education. Mailing address: 2599 W Choctaw Drive, London, OH 43140. E-mail: [email protected]
