Perspectives on Human Error: Hindsight Biases and Local Rationality

David D. Woods
Cognitive Systems Engineering Laboratory
Institute for Ergonomics
The Ohio State University

Richard I. Cook
Department of Anesthesia and Critical Care
University of Chicago

Introduction

Early Episodes of “Human Error”

Consider these episodes where some stakeholders reacted to failure by attributing the cause to “human error,” but where more careful examination showed how a combination of factors created the conditions for failure.¹

Case 1:

• In 1796 the astronomer Maskelyne fired his assistant Kinnebrook because the latter’s observations did not match his own (see Boring, 1950).

• Bessel, another astronomer, studied the case empirically and identified systematic factors² which produced imprecise observations.

¹ The term stakeholders refers to all of the different groups who are affected by an accident in that domain and by changes to operations, regulations, equipment, policies, etc. as a result of reactions to that accident. For example, an accident in aviation affects the public as consumers of the service, the FAA as regulators of the industry, pilots as practitioners, air traffic controllers as practitioners, air carrier organizations as providers of the service, manufacturers of the aircraft type as equipment designers, other development organizations that develop other equipment such as avionics, navigation aids, and software, and other groups as well.

² By systematic factors, we mean that the behavior is not random but lawful, i.e., there are empirical regularities; the factors that influence behavior are external to individuals (system properties); these factors have an effect because they influence the physical, cognitive, and collaborative activities of practitioners; and, finally, because there are regularities, there are predictable effects.

The implicit assumption was that one person (the assistant) was the source of failure whether due to some inherent trait or to lack of effort on his part. Bessel broke free of this assumption and empirically examined individual differences in astronomical observations. He found that there were wide differences across observers given the methods of the day. The techniques for making observations at this time required a combination of auditory and visual judgments. Those judgments were shaped by the tools of the day, pendulum clocks and telescope hairlines, in relation to the demands of the task. Dismissing Kinnebrook did not change what made the task difficult, did not eliminate individual differences, and did not make the task less vulnerable to sources of imprecision. Progress was based on searching for better methods for making astronomical observations, re-designing the tools that supported astronomers, and re-designing the tasks to change the demands placed on human judgment.

Case 2:

• In 1947 investigations of military aviation accidents concluded that pilot errors were the cause of the crashes.

• Fitts and Jones empirically studied pilot performance in the cockpit and showed how systematic factors in interpreting instruments and operating controls produced misassessments and actions not as intended (see Fitts and Jones, 1947).

The implicit assumption was that the person closest to the failure was the cause. Investigators saw that the aircraft was in principle flyable and that other pilots were able to fly such aircraft successfully. They could show how the necessary data were available for the pilot to correctly identify the actual situation and act in an appropriate way. Since the pilot was the human closest to the accident who could have acted differently, it seemed obvious to conclude that the pilot was the cause of the failure.

Fitts and his colleague empirically looked for factors that could have influenced the performance of the pilots. They found that, given the design of the displays and layout of the controls, people relatively often misread instruments or operated the wrong control, especially when task demands were high. The misreadings and misoperations were design-induced in the sense that researchers could link properties of interface design to these erroneous actions and assessments. In other words, the “errors” were not random events; rather, they resulted from understandable, regular, and predictable aspects of the design of the tools practitioners used.

The researchers found that misreadings and misoperations occurred, but did not always lead to accidents due to two factors. First, pilots often detected these errors before negative consequences occurred. Second, the misreadings and misoperations alone did not lead directly to an accident. Disasters or near misses usually occurred only when these errors occurred in combination with other factors or other circumstances.

In the end, the constructive solution was not to conclude that pilots err, but rather to understand principles and techniques for the design of visual displays and control layout. Changing the artifacts used by pilots changed the demands on human perception and cognition and changed the performance of pilots.

Erratic People or System Factors?

Although historical, the episodes above encapsulate current, widespread beliefs in many technical and professional communities and the public in general about the nature of human error and how systems fail. In most domains today, from aviation to industrial processes to transportation systems to medicine, when systems fail we find the same pattern as was observed in these earlier cases.

1. Stakeholders claim failure is “caused” by unreliable or erratic performance of individuals working at the sharp end³ who undermine systems which otherwise worked as designed (for example, see the recent history of pilot-automation accidents in aviation; Billings, 1996; Woods and Sarter, in press). The search for causes tends to stop when we can find the human or group closest to the accident who could have acted differently in a way that would have led to a different outcome. These people are seen as the source or “cause” of the failure, that is, the outcome was due to “human error.” When stakeholders see erratic people as the cause of bad outcomes, then they respond by calls to remove these people from practice, to provide remedial training to other practitioners, to urge other practitioners to try harder, and to regiment practice through policies, procedures, and automation.

2. However, researchers look more closely at the system in which these practitioners are embedded, and their studies reveal a different picture.⁴ Their results show how popular beliefs that such accidents are due simply to isolated blunders of individuals mask the deeper story -- a story of multiple contributors that create the conditions that lead to operator errors. Reason (1990, p. 173) summarizes the results: “Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects... Their part is that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.” The empirical results reveal regularities in organizational dynamics and in the design of artifacts that produce the potential for certain kinds of erroneous actions and assessments by people working at the sharp end of the system (Woods, Johannesen, Cook and Sarter, 1994; Reason, 1998).

³ It has proven useful to depict complex systems such as health care, aviation, electrical power generation, and others as having a sharp end and a blunt end (Reason, 1990). At the sharp end, practitioners interact with the underlying process in their roles as pilots, spacecraft controllers, and, in medicine, as nurses, physicians, technicians, and pharmacists. At the blunt end of a system are regulators, administrators, economic policy makers, and technology suppliers. The blunt end of the system controls the resources and constraints that confront the practitioner at the sharp end, shaping and presenting sometimes conflicting incentives and demands (Reason, in press).

⁴ The term practitioner refers to “a person engaged in the practice of a profession or occupation” (Webster’s, 1990).

For example, one basic finding from research on disasters in complex systems (e.g., Reason, 1990) is that accidents are not due to a single failure or cause. Accidents in complex systems only occur through the concatenation of multiple small factors or failures, each necessary but only jointly sufficient to produce the accident. Often, these small failures or vulnerabilities are present in the organization or operational system long before a specific incident is triggered. All complex systems contain such “latent” factors or failures, but only rarely do they combine to create an accident (Reason, 1998).

This pattern of multiple, latent factors occurs because the people in an industry recognize the existence of various hazards that threaten to cause accidents or other significant consequences, and they design defenses that include technical, human, and organizational elements. For example, people in health care recognize the hazards associated with the need to deliver multiple drugs to multiple people at unpredictable times in a hospital setting and use computers, labeling methods, patient identification cross-checking, staff training, and other methods to defend against misadministrations. Accidents in these kinds of systems occur when multiple factors join together to create the trajectory for an accident by eroding, bypassing, or breaking through the multiple defenses. Because there are multiple contributors, multiple opportunities arise to redirect the trajectory away from disaster. The research has revealed that an important part of safety is enhancing opportunities for people to recognize that a trajectory is heading closer to a poor outcome and to recover before negative consequences occur (Rasmussen, 1986). Factors that reduce error tolerance or block error detection and recovery degrade system performance.
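
To make the “each necessary but only jointly sufficient” point concrete, here is a minimal, hypothetical sketch (not from the chapter; the failure probabilities are invented, and the assumption that defenses fail independently is a deliberate simplification). It shows why a layered system fails only rarely, and how a single latent weakness in one defense sharply raises the joint risk.

```python
# Toy model: an accident requires every defense in a layered system to fail
# at the same time. Probabilities are invented for illustration and assumed
# independent, which real systems are not.

def accident_probability(defense_failure_probs):
    """Joint probability that all defenses fail together."""
    p = 1.0
    for prob in defense_failure_probs:
        p *= prob
    return p

healthy_system  = [0.01, 0.02, 0.01, 0.05]   # four defenses, all in good shape
latent_weakness = [0.01, 0.02, 0.30, 0.05]   # one defense quietly degraded

print(f"{accident_probability(healthy_system):.1e}")    # 1.0e-07: accidents are rare
print(f"{accident_probability(latent_weakness):.1e}")   # 3.0e-06: one latent factor, ~30x the risk
```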

Behind the Label Human Error

Based on this pattern in the data, researchers see that the label “human error” should serve as the starting point for investigating how systems fail, not as a conclusion. In other words, human performance is shaped by systematic factors, and the scientific study of failure is concerned with understanding how these factors shape the cognition, collaboration and ultimately the behavior of people in various work domains.

This research base has identified some of these regularities. In particular, we know about how a variety of factors make certain kinds of erroneous actions and assessments predictable (e.g., Norman, 1983; Norman, 1988; Hollnagel, 1993). Our ability to predict the timing and number of erroneous actions is very weak, but our ability to predict the kind of errors that will occur, when people do err, is often good. For example, when research pursues this deeper story behind the label “human error,” we find imbalances between the demands practitioners face and the resources available to meet the demands of that field of activity (Rasmussen, 1986). These demand-resource imbalances can affect the development of necessary expertise (Feltovich, Ford, and Hoffman, 1997), how the system brings additional expertise to bear especially when more difficult problems emerge, how people cope with multiple pressures and demands (Klein, 1998), how the system supports cooperative work activities especially when the tempo of operations increases, and how organizational constraints hinder or aid practitioners when they face difficult tradeoffs and dilemmas (Weick and Roberts, 1993).

This chapter will explore only a small portion of the issues that come to the fore when one goes behind the label human error. The space of social, psychological, technological, and organizational issues is large in part because people play such diverse roles in work environments and because work environments themselves vary so much.

Two Perspectives: Studying the Factors that Affect Human Performance and Studying Reactions to Failure

The chapter revolves around an ambiguity in the label human error. The two historical episodes introduced at the beginning of the chapter illustrate this ambiguity.

When we use the label human error we are sometimes referring to the processes and factors that influence the behavior of the people in the situation. From this perspective investigators are trying to understand the factors that lead up to erroneous actions and assessments. In this sense, the label human error points at all of the factors that influence human performance -- in particular, the performance of the practitioners working in some field of activity. In the examples at the beginning of the chapter, Bessel and Fitts followed up the incidents with this kind of investigation. Understanding human performance is a very large subject (in part the subject of much of this volume) and here we will examine only a few of the relevant issues -- predominantly cognitive factors, but also how those cognitive processes are shaped by artifacts, coordination across multiple people, and organizational pressures.

While the above constitutes the bulk of this chapter, the label human error can refer to a different class of psychological issues and phenomena. After-the-fact, stakeholders look back and make judgments about what led to the accident or incident. In the examples at the beginning of the chapter, Maskelyne and the authors of the aviation accident reports reacted after-the-fact with the judgment that individuals were the cause of the accidents. Labeling a past action as erroneous is a judgment based on a different perspective and on different information than what was available to the practitioners in context. In other words, this judgment is a process of causal attribution, and there is an extensive body of research about the social and psychological factors which influence these kinds of attributions of causality (e.g., Kelley, 1973; Fischhoff, 1975; Baron and Hershey, 1988; Hilton, 1990). From this perspective, error research studies the social and psychological processes which govern our reactions to failure as stakeholders in the system in question (Tasca, 1990; Woods et al., 1994, chapter 6).

Our reactions to failure as stakeholders are influenced by many factors. One of the most critical is that, after an accident, we know the outcome and, working backwards, which critical assessments or actions, had they been different, would have avoided that outcome. It is easy for us with the benefit of hindsight to say, “how could they have missed x?” or “how could they have not realized that x would obviously lead to y?”

Studies have consistently shown that people have a tendency to judge the quality of a process by its outcome (Baron and Hershey, 1988; Lipshitz, 1989; Caplan, Posner and Cheney, 1991). In a typical study, two groups are asked to evaluate human performance in cases with the same descriptive facts but with the outcomes randomly assigned to be either bad or neutral. Those with knowledge of a poor outcome judge the same decision or action more severely. This is referred to as the outcome bias (Baron and Hershey, 1988) and has been demonstrated with practitioners in different domains. For example, Caplan, Posner, and Cheney (1991) found an inverse relationship between the severity of outcome and anesthesiologists’ judgments of the appropriateness of care. The judges consistently rated the care in cases with bad outcomes as substandard while viewing the same behaviors with neutral outcomes as being up to standard, even though the care (i.e., the preceding human performance) was identical. The information about outcome biased the evaluation of the process that was followed.

Other research has shown that once people have knowledge of an outcome, they tend to view the outcome as having been more probable than other possible outcomes. Moreover, people tend to be largely unaware of the modifying effect of outcome information on what they believe they could have known in foresight. These two tendencies collectively have been termed the hindsight bias. Fischhoff (1975) originally demonstrated the hindsight bias in a set of experiments that compared foresight and hindsight judgments concerning the likelihood of particular socio-historical events. Basically, the bias has been demonstrated in the following way: participants are told about some event, and some are provided with outcome information. At least two different outcomes are used in order to control for one particular outcome being a priori more likely. Participants are then asked to estimate the probabilities associated with the several possible outcomes. Participants given the outcome information are told to ignore it in coming up with their estimates, i.e., “to respond as if they had not known the actual outcome,” or in some cases are told to respond as they think others without outcome knowledge would respond. Those participants with the outcome knowledge judge the outcomes they had knowledge about as more likely than the participants without the outcome knowledge, even when those making the judgments have been warned about the phenomenon and been advised to guard against it (Fischhoff, 1982). Experiments on the hindsight bias have shown that: (a) people overestimate what they would have known in foresight, (b) they also overestimate what others knew in foresight, and (c) they actually misremember what they themselves knew in foresight.
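
To make the paradigm concrete, here is a minimal sketch of how such a comparison is scored. The ratings below are invented for illustration (they are not Fischhoff’s data); the point is only the shape of the analysis: both groups estimate the probability of the same outcome, and the bias shows up as the difference between the group told the outcome occurred and the group given no outcome information.

```python
from statistics import mean

# Hypothetical probability estimates (0-1) for outcome A of a described event.
# Invented numbers for illustration; not data from any actual experiment.
foresight_group = [0.30, 0.25, 0.40, 0.35, 0.30]   # given no outcome information
hindsight_group = [0.55, 0.45, 0.60, 0.50, 0.55]   # told A occurred, asked to ignore it

def hindsight_shift(foresight, hindsight):
    """Mean difference in estimated probability of the known outcome.
    A positive value is the signature of the hindsight bias."""
    return mean(hindsight) - mean(foresight)

print(f"hindsight shift: +{hindsight_shift(foresight_group, hindsight_group):.2f}")
```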

Taken together, the outcome and hindsight biases have strong implications for error analyses.

• Decisions and actions having a negative outcome will be judged more harshly than if the same process had resulted in a neutral or positive outcome. We can expect this result even when judges are warned about the phenomenon and have been advised to guard against it.

• Judges will tend to believe that people involved in some incident knew more about their situation than they actually did. Judges will tend to think that people should have seen how their actions would lead up to the outcome failure.

One sense of studying “human error” then involves understanding how social and psychological processes such as hindsight and outcome biases shape our reactions to failure as stakeholders in the failed system.⁵ In general, we react, after the fact, as if the knowledge we now possess was available to the operators then. This oversimplifies or trivializes the situation confronting the practitioners, and masks the processes affecting practitioner behavior before-the-fact. As a result, the hindsight and outcome biases block our ability to see the deeper story of systematic factors that predictably shape human performance.

⁵ In a narrow sense, the outcome and hindsight biases refer to specific experimental findings from different test paradigms. In a broader sense, both of these experimental results, and other results, refer to a collection of factors that influence our reactions to failures based on information that is available only after the outcome is known. In the context of error analysis, we have used the label “hindsight bias” to refer to the broader perspective of a judge looking back in hindsight to evaluate the performance of others (Woods et al., 1994). Both specific experimental results illustrate ways in which people with the benefit of hindsight can misperceive and misanalyze the factors that influenced the behavior of the people working in the situation before the outcome was known.

There is limited data available about the reactions of stakeholders to failure (but see Tasca, 1990, and the review in Woods et al., 1994, chapter 6). This perspective shifts the focus of investigation from studying sharp end practitioners to studying stakeholders. It emphasizes the need to collect data on reactions to failure and to contrast those reactions after-the-fact to results on the factors that influence human performance before-the-fact derived from methods that reduce hindsight biases. Such studies can draw from the conceptual base created by experimental studies on the social and psychological factors in judgments of causal attribution, such as hindsight biases.

The main portion of this chapter will focus on the deeper story behind the label human error -- the factors that influence human performance, especially some of the cognitive factors. Then we will return to hindsight and outcome bias to illustrate several misconceptions about cognition that often arise when incidents are reviewed with knowledge of outcome.

Cognitive Factors and Human Performance

What cognitive factors affect the performance of practitioners in complex settings like medicine, aviation, telecommunications, process plants, and space mission control? There are many ways one could organize classes of cognitive factors relevant to human performance (e.g., Rasmussen, 1986; Norman, 1988; Reason, 1990). Different aspects of cognition will be relevant to various settings and situations. For example, one area of human performance is concerned with slips of action such as capture errors or omissions of isolated acts (e.g., Norman, 1981; Reason and Mycielska, 1982; Byrne and Bovair, 1997). We have found it useful to teach people about cognitive factors and error by using the concept of bounded or local rationality (Simon, 1957).

Local Rationality

At work, groups of practitioners pursue goals and match procedures to situations, but they also

• resolve conflicts,

• anticipate hazards,

• accommodate variation and change,

• cope with surprise,

• work around obstacles,

• close gaps between plans and real situations,

• detect and recover from miscommunications and misassessments.

In these activities practitioners at the sharp end block potential accident trajectories. In other words, people actively contribute to safety when they can carry out these roles successfully. “Error” research on human performance tries to identify factors that undermine practitioners’ ability to do these activities successfully. The question then becomes how the same processes can result in success some of the time but result in failure in other circumstances (Rasmussen, 1986).

The concept of bounded rationality is very useful for helping us think about how people can form intentions to act in ways that later events will reveal are erroneous. People’s behavior in work situations can be considered as consistent with Newell’s principle of rationality—that is, practitioners use their knowledge to pursue their goals (Newell, 1982). But there are bounds to the data that they pick up or search out, bounds to the knowledge that they possess, bounds to the knowledge that they activate in a particular context, and there may be multiple goals which conflict (Simon, 1957). In other words, people’s behavior can be seen as “rational,” though possibly erroneous, when seen from the point of view of their knowledge, their mindset, and the multiple goals they are trying to balance (Rasmussen, Duncan and Leplat, 1987). Rationality here does not mean consistent with external, global standards such as models, policies or procedures; rationality in Newell’s principle is defined locally from the point of view of the people in a situation as they use their knowledge to pursue their goals based on their view of the situation. As a result, for the context of error, we will refer to the concept that human rationality is limited or bounded as “local” rationality (Woods et al., 1994).

Fundamentally, human (and real machine) problem-solvers possess finite capabilities. They cannot anticipate and consider all the possible alternatives and information that may be relevant in complex problems. This means that the rationality of finite resource problem solvers is local in the sense that it is exercised relative to the complexity of the environment in which they function (Klein et al., 1993; Klein, 1998). It takes effort (which consumes limited computational resources) to seek out evidence, to interpret it (as relevant), and to assimilate it with other evidence. Evidence may come in over time, over many noisy channels. The process may yield information only in response to diagnostic interventions. Time pressure, which compels action (or the de facto decision not to act), makes it impossible to wait for all evidence to accrue. Multiple goals may be relevant, not all of which are consistent. It may not be clear, in foresight, which goals are the most important ones to focus on at any one particular moment in time. Human problem solvers cannot handle all the potentially relevant information, cannot activate and hold in mind all of the relevant knowledge, and cannot entertain all potentially relevant trains of thought. Hence, rationality must be local—attending to only a subset of the possible evidence or knowledge that could be, in principle, relevant to the problem.

The role for “error” research, in the sense of understanding the factors that influence human performance, is to understand how limited knowledge (missing knowledge or misconceptions), how a limited and changing mindset, and how multiple interacting goals shaped the behavior of the people in the evolving situation. In other words, this type of error research reconstructs what the view was like or would have been like had we stood in the same situation as the participants. If we can understand how their knowledge, their mindset, and their goals guided the behavior of the participants, then we can see how they were vulnerable to err given the demands of the situation they faced. We can see new ways to help practitioners activate relevant knowledge, shift attention to the critical focus among multiple tasks in a rich, changing data field, and recognize and balance competing goals.

Given that people use their knowledge to pursue their goals, but given that there are bounds to their knowledge, limits to their mindset, and multiple, not always consistent, goals to achieve, one can learn about the performance of practitioners at the sharp end by looking at factors that affect

• how knowledge relevant to the situation at hand is called to mind -- knowledge in context,

• how we come to focus on one perspective on or one part of a rich and changing environment and how we shift that focus across multiple events over time -- mindset,

• how we balance or make tradeoffs among multiple interacting goals -- interacting goals.

We have found it useful to group various findings and concepts into these three classes of cognitive factors that govern how people form intentions to act (Cook and Woods, 1994). Problems in the coordination of these cognitive functions, relative to the demands imposed by the field of activity, create the potential for mismanaging systems towards failure. For example, in terms of knowledge factors, some of the possible problems are buggy knowledge (e.g., an incorrect model of device function), inert knowledge, and oversimplifications (Spiro, Coulson, Feltovich, and Anderson, 1988). In terms of mindset, one form of breakdown occurs when an inappropriate mindset takes hold or persists in the face of evidence which does not fit this assessment. Failures very often can be traced back to dilemmas and tradeoffs that arise from multiple interacting and sometimes conflicting goals. Practitioners, by the very nature of their role at the sharp end of systems, must implicitly or explicitly resolve these conflicts and dilemmas as they are expressed in particular situations (Cook and Woods, 1994).

Knowledge in Context

Knowledge factors refer to the process of bringing knowledge to bear to solve problems in context:

• what knowledge practitioners possess about the system or process in question (is it correct, incomplete, or erroneous, i.e., “buggy”?),

• how this knowledge is organized so that it can be used flexibly in different contexts, and

• the processes involved in calling to mind the knowledge relevant to the situation at hand.

Knowledge of the world and its operation may be complete or incomplete and accurate or inaccurate. Practitioners may act based on inaccurate knowledge or on incomplete knowledge about some aspect of the complex system or its operation. When the mental model that practitioners hold of such systems is inaccurate or incomplete, their actions may well be inappropriate. These mental models are sometimes described as “buggy.” The study of practitioners’ mental models has examined the models that people use for understanding technological, physical, and physiological processes. Several volumes are available which provide a comprehensive view of research on this question (see Gentner and Stevens, 1983; Chi, Glaser, and Farr, 1988; Feltovich, Ford and Hoffman, 1997).

Note that research in this area has emphasized that mere possession of knowledge is not enough for expertise. It is also critical for knowledge to be organized so that it can be activated and used in different contexts (Bransford, Sherwood, Vye, and Rieser, 1986). Thus, Feltovich, Spiro, and Coulson (1989) and others emphasize that one component of human expertise is the flexible application of knowledge in new situations.

There are multiple overlapping lines of research related to the activation of knowledge in context by humans performing in complex systems. These include:

• the problem of inert knowledge, and

• the use of heuristics, simplifications, and approximations.

Going behind the label “human error” involves investigating how knowledge was or could have been brought to bear in the evolving incident. Any of the above factors could influence the activation of knowledge in context—for example, did the participants have incomplete or erroneous knowledge? Were otherwise useful simplifications applied in circumstances that demanded consideration of a deeper model of the factors at work in the case? Did relevant knowledge remain inert? We will briefly sample a few of the issues in this area.

Activating Relevant Knowledge In Context: The Problem Of Inert Knowledge

Lack of knowledge or buggy knowledge may be one part of the puzzle, but the more critical question may be factors that affect whether relevant knowledge is activated and utilized in the actual problem solving context (e.g., Bransford et al., 1986). The question is not just does the problem-solver know some particular piece of domain knowledge, but does he or she call it to mind when it is relevant to the problem at hand and does he or she know how to utilize this knowledge in problem solving? We tend to assume that if a person can be shown to possess a piece of knowledge in one situation and context, then this knowledge should be accessible under all conditions where it might be useful. In contrast, a variety of research results have revealed dissociation effects where knowledge accessed in one context remains inert in another (Gentner and Stevens, 1983; Perkins and Martin, 1986).

Thus, the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem-solver possesses domain knowledge, but rather the more stringent criterion that situation-relevant knowledge is accessible under the conditions in which the task is performed. Knowledge that is accessed only in a restricted set of contexts is called inert knowledge. Inert knowledge may be related to cases that are difficult to handle, not because problem-solvers do not know the individual pieces of knowledge needed to build a solution, but because they have not previously confronted the need to join the pieces together.

Results from accident investigations often show that the people involved did not call to mind all the relevant knowledge during the incident although they “knew” and recognized the significance of the knowledge afterwards. The triggering of a knowledge item X may depend on subtle pattern recognition factors that are not present in every case where X is relevant. Alternatively, that triggering may depend critically on having sufficient time to process all the available stimuli in order to extract the pattern. This may explain the difficulty practitioners have in “seeing” the relevant details when the pace of activity is high.

One implication of these results is that training experiences should exercise knowledge in the contexts where it is likely to be needed.

Oversimplifications

People tend to cope with complexity through simplifying heuristics. Heuristics are useful because they are usually relatively easy to apply and minimize the cognitive effort required to produce decisions. These simplifications may be useful approximations that allow limited resource practitioners to function robustly over a variety of problem demand factors, or they may be distortions or misconceptions that appear to work satisfactorily under some conditions but lead to error in others. Feltovich et al. (1989) call the latter “oversimplifications.”

In studying the acquisition and representation of complex concepts in biomedicine, Feltovich et al. (1989) found that various oversimplifications were held by some medical students and even by some practicing physicians. They found that “. . . bits and pieces of knowledge, in themselves sometimes correct, sometimes partly wrong in aspects, or sometimes absent in critical places, interact with each other to create large-scale and robust misconceptions” (Feltovich et al., 1989, p. 162). Examples of kinds of oversimplification include (see Feltovich, Spiro, and Coulson, 1993):

• seeing different entities as more similar than they actually are,

• treating dynamic phenomena as static,

• assuming that some general principle accounts for all of a phenomenon,

• treating multidimensional phenomena as unidimensional or according to a subset of the dimensions,

• treating continuous variables as discrete,

• treating highly interconnected concepts as separable,

• treating the whole as merely the sum of its parts.

Feltovich and his colleagues’ work has important implications for the teaching and training of complex material. Their studies and analyses challenge the view of instruction that presents initially simplified material in modules that decompose complex concepts into their simpler components with the belief that these will eventually “add up” for the advanced learner (Feltovich et al., 1993). Instructional analogies, while serving to convey certain aspects of a complex phenomenon, may miss some crucial ones and mislead on others. The analytic decomposition misrepresents concepts that have interactions among variables. The conventional approach may produce a false sense of understanding and inhibit pursuit of deeper understanding because learners may resist learning a more complex model once they already have an apparently useful simpler one (Spiro et al., 1988). Feltovich and his colleagues have developed the theoretical basis for a new approach to advanced knowledge acquisition in ill-structured domains (Feltovich, Spiro, and Coulson, 1997).

Why do practitioners utilize simplified or oversimplified knowledge? These simplifying tendencies may occur because of the cognitive effort required in demanding circumstances (Feltovich et al., 1989). Also, simplifications may be adaptive, first, because the effort required to follow more “ideal” reasoning paths may be so large that it would keep practitioners from acting with the speed demanded in actual environments. This has been shown elegantly by Payne, Bettman, and Johnson (1988) and by Payne, Johnson, Bettman, and Coupey (1990), who demonstrated that simplified methods will produce a higher proportion of correct choices between multiple alternatives under conditions of time pressure.
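
The sketch below is a toy simulation in the spirit of these findings, not a reconstruction of Payne and colleagues’ actual models; the alternatives, attribute weights, and look-up budget are invented for illustration. It compares a full weighted-additive rule that runs out of time with a simple “most important attribute only” heuristic: under a tight budget of attribute look-ups, the heuristic examines every alternative at least partially, while the exhaustive rule never gets to most of them.

```python
import random

random.seed(1)
N_ALTS, N_ATTRS = 5, 4
WEIGHTS = [0.5, 0.25, 0.15, 0.10]   # assumed attribute importance (invented)
BUDGET = 8                          # attribute look-ups allowed under time pressure

def true_best(alts):
    """Index of the alternative with the highest full weighted-additive score."""
    scores = [sum(w * a for w, a in zip(WEIGHTS, alt)) for alt in alts]
    return scores.index(max(scores))

def truncated_weighted_additive(alts, budget):
    """Apply the full rule alternative by alternative until time runs out,
    then choose the best alternative evaluated so far."""
    best, best_score, used = 0, float("-inf"), 0
    for i, alt in enumerate(alts):
        if used + N_ATTRS > budget:
            break
        used += N_ATTRS
        score = sum(w * a for w, a in zip(WEIGHTS, alt))
        if score > best_score:
            best, best_score = i, score
    return best

def lexicographic(alts, budget):
    """Heuristic: look only at the most important attribute of each alternative."""
    top = [alt[0] for alt in alts[:budget]]   # one look-up per alternative
    return top.index(max(top))

trials = 5000
wins = {"truncated weighted-additive": 0, "lexicographic heuristic": 0}
for _ in range(trials):
    alts = [[random.random() for _ in range(N_ATTRS)] for _ in range(N_ALTS)]
    correct = true_best(alts)
    wins["truncated weighted-additive"] += truncated_weighted_additive(alts, BUDGET) == correct
    wins["lexicographic heuristic"] += lexicographic(alts, BUDGET) == correct

for name, w in wins.items():
    print(f"{name}: {w / trials:.2f} correct under time pressure")
```

With these invented parameters the one-attribute heuristic identifies the truly best alternative more often than the interrupted exhaustive rule, which is the adaptive point: the simplification trades precision for the ability to act within the time available.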

In summary, heuristics represent effective and necessary adaptations to the demands of real workplaces (Rasmussen, 1986). The issue may not always be the shortcut or simplification itself, but whether practitioners know the limits of the shortcuts, can recognize situations where the simplification is no longer relevant, and have the ability to use more complex concepts, methods, or models (or the ability to integrate help from specialist knowledge sources) when the situation they face demands it.

Mindset

“Everyone knows what attention is. It is the taking possession by the mind, in a clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought” (James, 1890, I, 403-404). We are able to focus, temporarily, on some objects, events, actions in the world or on some of our goals, expectations or trains of thought while remaining sensitive to new objects or events that may occur. We can refer to this broadly as a state of attentional focus or mindset (LaBerge, 1995).

Mindset is not fixed, but shifts to explore the world and to track potentially relevant changes in the world (LaBerge, 1995). In other words, one re-orients attentional focus to a newly relevant object or event from a previous state where attention was focused on other objects or on other cognitive activities (such as diagnostic search, response planning, communication to others). New stimuli are occurring constantly; any of these could serve as a signal that we should interrupt ongoing lines of thought and re-orient attention. This re-orientation involves disengagement from a previous focus and movement of attention to a new focus. Interestingly, this control of attentional focus can be seen as a skillful activity that can be developed through training (Gopher, 1991) or supported (or undermined) by the design of artifacts and intelligent machine agents (Woods, 1995; Patterson, Watts-Perotti, and Woods, in press).

A basic challenge for practitioners at work is where to focus attention next in a changing world (Woods and Watts, 1997). Which object, event, goal or line of thought we focus on depends on the interaction of two sets of activity. One of these comprises goal- or knowledge-directed, endogenous processes (often called attentional set) that depend on the observer’s current knowledge, goals and expectations about the task at hand. The other set of processes is stimulus- or data-driven, where attributes of the stimulus world (unique features, transients, new objects) elicit attentional capture independent of the observer’s current mindset (Yantis, 1993). These salient changes in the world help guide shifts of the focus of attention or mindset to relevant new events, objects, or tasks.

These two kinds of processes combine in a cycle, what Neisser (1976) called the perceptual cycle, where unique events in the environment shift the focus of attention or mindset, call knowledge to mind, and trigger new lines of thought. The activated knowledge, expectations or goals in turn guide further exploration and action. This cycle is a crucial concept for those trying to understand human performance in the workplace (e.g., Jager Adams, Tenney, and Pew, 1995).

There are a variety of interesting psychological phenomena that can be organized under the label “mindset.” Examples include:

• attentional control and loss of situation awareness (Gopher, 1991; Jager Adams et al., 1995; Durso and Gronlund, this volume),

• revising assessments as new evidence occurs, for example in garden path problems and in failures to revise, such as fixation effects (Johnson et al., 1988; DeKeyser and Woods, 1990),

• framing effects and the representation effect (Johnson et al., 1991; Zhang and Norman, 1994; Zhang, 1997),

• juggling multiple lines of thought and activity in time, including breakdowns in workload management and thematic vagabonding (Dorner, 1983).

There are a variety of problems that can occur in synchronizing mindset to goals and priorities in a changing world, depending on problem demands, skill levels, coordinative structures, and the artifacts available to support performance, e.g.:

• Irrelevant stimuli may intrude on a primary task, e.g., distraction -- a breakdown in selective attention.

• One’s mindset may be too easily interrupted, as in thematic vagabonding⁶ -- a breakdown in attention switching.

• One’s mindset may be too hard to interrupt and re-focus, i.e., fixating on one view of the problem -- a breakdown in attention switching.

• Attention may be captured by irrelevant stimuli, or relevant stimuli may fail to capture attention in a cognitively noisy workplace, e.g., habituation to nuisance alarms or high rates of false alarms (Woods, 1995; Getty, Swets, Pickett, and Gonthier, 1995).

• Breakdowns can occur in setting priorities or making tradeoffs, e.g., shedding the wrong task under high workload -- a breakdown in attention switching.

⁶ Thematic vagabonding refers to one form of loss of coherence where multiple interacting themes are treated superficially and independently so that the person or team jumps incoherently from one theme to the next (Dorner, 1983).

When the goal of the investigator is to understand the factors that shape human performance, a critical step is to trace out how an individual’s mindset or a group’s mindset develops and changes as the incident evolves. Often this involves understanding how the mindsets of different individuals or groups interact or remain encapsulated. This analysis reveals the cues that attracted attention, how they matched the expectations of the observers, what was called to mind, what lines of thought were triggered, and how those goals and issues guided further exploration and action, which in turn generated new cues. This analysis reveals the kinds of attentional challenges produced by the evolving problem and the kinds of attentional breakdowns that occurred.

Loss of Situation Awareness

Situation awareness is a label that is often used to refer to many of the cognitive processes involved in forming and changing mindset (e.g., Jager Adams et al., 1995; Durso and Gronlund, this volume). Maintaining situation awareness necessarily requires shifts of attention to inform and modify a coherent picture or model of the situation the practitioners face. Anticipating how the situation may develop or change in the future is a particularly important aspect.

Breakdowns in these cognitive processes can lead to operational difficulties in handling the demands of dynamic, event-driven incidents. In aviation circles this is known as “falling behind the plane” and in aircraft carrier flight operations it has been described as “losing the bubble” (Roberts and Rousseau, 1989). In each case what is being lost is the operator’s internal representation of the state of the world at that moment and the direction in which the forces active in the world are taking the system that the operator is trying to control.

Fischer, Orasanu and Montvalo (1993) examined the juggling of multiple threads of a problem in a simulated aviation scenario. More effective crews were better able to coordinate their activities with multiple issues over time; less effective crews traded one problem for another. More effective crews were sensitive to the interactions between multiple threads involved in the incident; less effective crews tended to simplify the situations they faced and were less sensitive to the constraints of the particular context they faced. Less effective crews “were controlled by the task demands” and did not look ahead or prepare for what would come next. As a result, they were more likely to run out of time or encounter other cascading problems. Interestingly, there were written procedures for each of the problems the crews faced. The cognitive work associated with managing multiple threads of activity goes beyond the activities needed to merely follow the rules.

This study illustrates how breakdowns in attentional control result from how task demands challenge human abilities to shift and focus attention in a changing task world (Gilson, 1995).

Failures To Revise Situation Assessments: Fixation Or Cognitive Lockup

Diagnostic problems fraught with inherent uncertainties are common, especially when evidence arrives over time and situations can change. Incidents rarely spring full blown and complete; incidents evolve. Practitioners make provisional assessments and form expectancies based on partial and uncertain data. These assessments are incrementally updated and revised as more evidence comes in. Furthermore, situation assessment and plan formulation are not distinct sequential stages, but rather they are closely interwoven processes with partial and provisional plan development and feedback leading to revised situation assessments (Woods, 1994).

As a result, it may be necessary for practitioners to entertain and evaluate what turn out later to be erroneous assessments. Problems arise when the revision process breaks down and the practitioner becomes fixated on an erroneous assessment, missing, discounting or re-interpreting discrepant evidence (e.g., DeKeyser and Woods, 1990; Johnson et al., 1981, 1988; Gaba and DeAnda, 1989). These failures to revise situation assessment as new evidence comes in have been referred to as functional fixations, cognitive lockup and cognitive hysteresis. The operational teams involved in several major accidents (e.g., the Three Mile Island accident) seem to have exhibited this pattern of behavior (Woods et al., 1987).

In cases of fixation, the initial situation assessment tends to be appropriate, in the sense of being consistent with the partial information available at that early stage of the incident. As the incident evolves, however, people fail to revise their assessments in response to new evidence, evidence that indicates an evolution away from the expected path. The practitioners become fixated on the old assessment and fail to revise their situation assessment and plans in a manner appropriate to the data now present in their world. Thus, a fixation occurs when practitioners fail to revise their situation assessment or course of action and maintain an inappropriate judgment or action in the face of opportunities to revise. Fixations thus represent breakdowns in the process of error detection and recovery where people discount discrepant evidence and fail to keep up with new evidence or a changing situation.

Several criteria need to be met in order to describe an event as a fixation. One critical feature is that there is some form of persistence over time in the behavior of the fixated person or team. A second is that there are opportunities to revise -- cues, available or potentially available to the practitioners, that could have started the revision process if observed and interpreted properly. In part, this feature distinguishes fixations from simple cases of inexperience, lack of knowledge, or other problems that impair error detection and recovery. The basic defining characteristic of fixations is that the immediate problem-solving context has biased the practitioners’ mindset in some direction inappropriately. In naturally occurring problems, the context in which the incident occurs and the way the incident evolves activate certain kinds of knowledge as relevant to the evolving incident. This knowledge, in turn, affects how new incoming information is interpreted -- a framing effect. After the fact or after the correct diagnosis has been pointed out, the solution seems obvious, even to the fixated person or team.

There are certain types of problems that may encourage fixations by mimicking other situations, in effect, leading practitioners down a garden path (Johnson et al., 1988; Johnson, Jamal, and Berryman, 1991; Johnson, Grazioli, Jamal, and Zualkernan, 1992; Roth, Woods, and Pople, 1992). In garden path problems “early cues strongly suggest [plausible but] incorrect answers, and later, usually weaker cues suggest answers that are correct” (Johnson, Moen and Thompson, 1988). It is important to point out that the erroneous assessments resulting from being led down the garden path are not due to knowledge factors. Rather, they seem to occur because “a problem-solving process that works most of the time is applied to a class of problems for which it is not well suited” (Johnson et al., 1988). This notion of garden path situations is important because it identifies a task genotype (Hollnagel, 1993) in which people become susceptible to fixations. The problems that occur are best attributed to the interaction of particular environmental (task) features and the heuristics people apply (local rationality given difficult problems and limited resources), rather than to a generic weakness in the strategies used. The way that a problem presents itself to practitioners may make it very easy to entertain plausible but in fact erroneous possibilities.

Fixation may represent the downside of normally efficient and reliable cognitive processes given the cognitive demands of dynamic situations and cases where diagnosis and response happen in parallel (Woods, 1994). It is clear that in demanding situations where the state of the monitored process is changing rapidly, there is a potential conflict or tradeoff between the need to revise the situation assessment and the need to maintain coherence. Not every change is important; not every signal is meaningful. The practitioner whose attention is constantly shifting from one item to another may not be able to formulate a complete and coherent picture of the state of the system (thematic vagabonding in Dorner, 1983). Conversely, the practitioner whose attention does not shift may miss cues and data that are critical to updating the situation assessment. This latter condition may lead to fixation.

Given the kinds of cognitive processes that seem to be involved in fixation, there are a variety of techniques that, in principle, may reduce this form of breakdown. Data on successful and unsuccessful revision of erroneous situation assessments show that it usually takes a person with a fresh point of view on the situation to break a team or individual out of a fixation (Woods et al., 1987). Note that this result reveals a distributed, multi-agent component to cognition at work. Thus, one can change the architecture of the distributed system to try to ensure a fresh point of view, i.e., one that is unbiased by the immediate context. Another approach is to try to develop distributed system architectures where one person or group criticizes the assessments developed by the remainder of the group (e.g., a devil’s advocate team member as in Schwenk and Cosier, 1980). A third direction is predicated on the fact that poor feedback about the state and behavior of the monitored process, especially related to goal achievement, is often implicated in fixations and failures to revise. Thus, one can provide practitioners with new kinds of representations about what is going on in the monitored process (see, e.g., Woods et al., 1987, for examples from nuclear power that tried this in response to the Three Mile Island accident).

Interacting Goals

Another set of factors that affect cognition at work is strategic in nature. People have to make tradeoffs between different but interacting or conflicting goals, between values or costs placed on different possible outcomes or courses of action, or between the risks of different errors. They must make these tradeoffs while facing irreducible uncertainty, risk, and the pressure of limited resources (e.g., time pressure; opportunity costs). One may think of these tradeoffs in terms of simplistic global examples like safety versus economy. Tradeoffs also occur on other kinds of dimensions. In responding to an anomaly in domains like aircraft or space vehicles, for example, there is a tradeoff with respect to when to commit to a course of action. Practitioners have to decide whether to take corrective action early in the course of an incident with limited information, to delay the response and wait for more data to come in, to search for additional findings, or to ponder additional alternative hypotheses.

Practitioners also trade off between following operational rules and taking action based on reasoning about the case itself (cf. Woods et al., 1987). Do the standard rules apply to this particular situation when some additional factor is present that complicates the textbook scenario? Should we adapt the standard plans or should we stick with them regardless of the special circumstances? Strategic tradeoffs can also involve coordination among people and machine agents in the distributed human-machine cognitive system (Woods et al., 1994, chapter 4). A machine expert recommends a particular diagnosis or action, but what if your own evaluation is different? What is enough evidence that the machine is wrong to justify disregarding the machine expert’s evaluation and proceeding on your own evaluation of the situation?

Criterion setting on these different tradeoffs may not always be a conscious process or an explicit decision made by individuals. The criterion adopted may be an emergent property of systems of people, either small groups or larger organizations. The criterion may be fairly labile and susceptible to influence or relatively stable and difficult to change. The tradeoffs may create explicit choice points for practitioners embedded in an evolving situation, or may influence performance indirectly, for example by shifting a team’s mindset in a particular direction (e.g., Layton, Smith and McCoy, 1994).

Goal Conflicts

Multiple goals are simultaneously relevant in actual fields of practice. Depending on the particular circumstances in effect in a particular situation, the means to influence these multiple goals will interact, potentially producing conflicts between different goals. Expertise consists, in part, of being able to negotiate among interacting goals by selecting or constructing the means to satisfy all sufficiently or by deciding which goals to focus on and which goals to relax or sacrifice in a particular context.

However, practitioners may fail to meet this cognitive demand adequately. An adequate analysis of human performance and the potential for error requires explicit description of the interacting goals, the tradeoffs being made, and the pressures present that shift the operating points for these tradeoffs (Cook and Woods, 1994).

Finding potential conflicts, assessing their impact, and developing robust strategies may be quite difficult. Consider the anesthesiologist. Practitioners' highest level goal (and the one most often explicitly acknowledged) is to protect patient safety. But that is not the only goal. There are other goals, some of which are less explicitly articulated. These goals include reducing costs, avoiding actions that would increase the likelihood of being sued, maintaining good relations with the surgical service, maintaining resource elasticity to allow for handling unexpected emergencies, and others.

In a given circumstance, the relationships between these goals can produce conflicts. In the daily routine, for example, maximizing patient safety and avoiding lawsuits create the need to maximize information about the patient through preoperative workup. The anesthetist may find some hint of a potentially problematic condition and consider further tests that may incur costs, risks to the patient, and a delay of surgery. The cost-reduction goal provides an incentive for a minimal preoperative workup and the use of same-day surgery. This conflicts with the other goals. The anesthetist may be squeezed in this conflict: gathering the additional information, which in the end may not reveal anything important, will cause a delay of surgery and decrease throughput.

Given these external pressures, some medical practitioners will not follow up hints about some aspect of the patient's history because to do so would impact the usual practices relative to throughput and economic goals. However, failing to acquire the information may reduce the ill-defined margin of safety and, in a specific case, the omission may turn out to be a contributor to an incident or accident. Other practitioners will adopt a conservative stance and order tests for minor indications even though the yield is low. This may affect the day's surgical schedule, the hospital and the surgeons' economic goals, and the anesthesiologists' relationship with the surgeons. In either case, the nature of the goals and pressures on the practitioner is seldom made explicit and rarely examined critically.

Analyses of past disasters frequently find that goal conflicts played a role in the accident evolution, especially when they place practitioners in double binds. In one tragic aviation disaster, the Dryden, Ontario crash (Moshansky, 1992), several different organizational pressures to meet economic goals, along with organizational decisions to reduce resources, created a situation which placed a pilot in a double bind. Demand for the flight created a situation where the flight could either carry all of the passengers awaiting transport or carry enough fuel to reach their destination, but not both (no other aircraft was available to meet demand). The Captain decided to offload passengers and make a direct trip; the Company (a regional carrier) overruled him, deciding to offload fuel and make a refueling stop at Dryden on the way to their destination.

After landing at Dryden, the weather was within nominal standards but a freezing rain had begun. Any delays increased the probability of wing ice. After landing, the crew had to keep the engines running (a "hot" fueling) or they would have been unable to restart them (a) because the aircraft had departed with an inoperative auxiliary power unit and (b) due to the lack of equipment to restart jet engines at this airport. However, deicing is not permitted with an engine running because of other safety concerns. Several factors led to delays (e.g., they delayed on the ramp waiting for another aircraft to land). The Captain attempted to take off, and the aircraft crashed due to wings contaminated with ice.

The immediate reaction was that a pilot error was responsible. In hindsight, there were obvious threats which it appeared the pilot recklessly disregarded. However, this view drastically oversimplified the situation the pilot faced. The actual accident investigation went deeper and found that organizational factors and specific circumstances placed the pilot in a goal conflict and double bind (note the report needed four volumes to lay out all of the different organizational factors and how they created the latent conditions for this accident).

One critical part of the investigation was understanding the situation from the point of view of the practitioners in it. When the investigators reconstructed the view of the evolving situation from the vantage point of a pilot, the Captain's dilemmas became clear. Deciding not to take off would strand a full plane of passengers, disrupt schedules, and lose money for the carrier. Such a decision would be regarded as an economic failure with potential sanctions for the pilot. On the other hand, the means to accommodate the interacting constraints of the weather threat and refueling process were not available due to organizational choices not to invest in, or to cut back on, equipment at peripheral airports such as Dryden. The air carrier was new to jet service and provided minimal support for the flight crew in terms of guidance or support from company dispatch.

The accident investigation identified multiple, latent organizational factors which created the dilemmas. These included factors at the level of the regional carrier (e.g., new to jet operations but with inadequate investment in expertise and infrastructure to support these operations), at the level of the relationship between the regional carrier and the parent carrier (e.g., the parent organization distanced itself from operations and safety issues in its newly acquired subsidiary; the regional carrier minimized communication to preserve its autonomy), and at the level of regulatory oversight (e.g., breakdowns due to increased workload in a deregulated environment coupled with major staff and budget cuts).

As in this case, constraints imposed by organizational or social context can create or exacerbate competition between goals. Organizational pressures generated competition between goals that was an important factor in the breakdown of safety barriers in the system for transporting oil through Prince William Sound that preceded the Exxon Valdez disaster (National Transportation Safety Board, 1990).

It should not be thought that the organizational goals are necessarily simply the written policies and procedures of the institution. Indeed, the messages received by practitioners about the nature of the institution's goals may be quite different from those that management acknowledges. How the organization reacts to incidents, near misses and failures can send sharp end practitioners potent if implicit messages about how to trade off conflicting goals (Rochlin et al., 1987). These covert factors are especially insidious because they affect behavior and yet are unacknowledged. For example, the Navy sent an implicit but very clear message to its commanders by the differential treatment it accorded to the commander of the Stark following that incident (U.S. House of Representatives Committee on Armed Services, 1987) as opposed to the Vincennes following that incident (U.S. Department of Defense, 1988).

Goal conflicts also can arise from intrinsic characteristics of the field of activity. An example from cardiac anesthesiology is the conflict between the desirability of a high blood pressure to improve cardiac perfusion (oxygen supply to the heart muscle) and a low one to reduce cardiac work. The blood pressure target adopted by the anesthetist to balance this tradeoff depends in part on the practitioner's strategy, the nature of the patient, the kind of surgical procedure, the anticipated risks (e.g., the risk of major bleeding), and the negotiations between different people in the operating room team (e.g., the surgeon who would like the blood pressure kept low to limit the blood loss at the surgical site).

Because local rationality revolves around how people pursue their goals, understanding performance at the sharp end depends on tracing the multiple interacting goals and how they produce tradeoffs, dilemmas, and double binds. In this process the investigator needs to understand how the tradeoffs produced by interacting goals are usually resolved. Practitioners in a field of activity may usually apply standard routines without deliberating on the nature of a dilemma, or they may work explicitly to find ways to balance the competing demands. The typical strategies for resolving tradeoffs, dilemmas and interacting goals, whether implicit in organizational practices or explicitly developed by practitioners, may be

• robust -- work well across a wide range of circumstances but still not guarantee a successful outcome,

• brittle -- work well under a limited set of conditions but break down when circumstances create situations outside the boundary conditions,

• poor -- very vulnerable to breakdown.

Understanding where and how goal conflicts can arise is a first step. Organizations can then examine how to improve strategies for handling these conflicts, even though ultimately there may be no algorithmic methods that can guarantee successful outcomes in all cases.

Problem Demands

In the above discussion, one should notice that it is difficult to examine cognition at work without also speaking of the demands that situations place on individuals, teams, and more distributed, coordinated activity (Rasmussen, 1986). The concept of local rationality captures this idea that cognitive activity needs to be considered in light of the demands placed on practitioners by characteristics of the incidents and problems that occur. These problem demands vary in type and degree. One incident may present itself as a textbook version of a well-practiced plan while another may occur accompanied by several complicating factors which together create a more substantive challenge to practitioners. One case may be straightforward to diagnose, while another may represent a garden path problem (e.g., Johnson et al., 1988; Roth et al., 1992). One case may be straightforward because a single response will simultaneously satisfy the multiple relevant goals, while another may present a dilemma because multiple important goals conflict.

Understanding the kinds of problem demands that can arise in a field of activity can reveal a great deal about the knowledge activation, attentional control or handling of multiple goals that is needed for successful performance. In other words, problem demands shape the cognitive activities of any agent or set of agents who might confront that incident. This perspective implies that one must consider what features of domain incidents and situations increase problem demands (coupling, escalation, time pressure, sources of variability, particularly unanticipated variability, variations in tempo including rhythms of self-paced and event-driven activity, conflicting goals, uncertainty). The expression of expertise and error, then, is governed by the interplay of problem demands inherent in the field of activity and the resources available to bring knowledge to bear in pursuit of the critical goals. Analyses of the potential for failure look for mismatches in this demand-resource relationship (Rasmussen, 1986). Two examples of demand factors are coupling and escalation.

Coupling

One demand factor that affects the kinds of problems that arise is the degree of coupling in the underlying system (Perrow, 1984). Increasing the coupling in a system increases the physical and functional interconnections between parts (Rasmussen, 1986). This has a variety of consequences, but among them are effects at a distance, cascades of disturbances, and side effects of actions (Woods, 1994). Several results follow from the fact that apparently distant parts are coupled.

• Practitioners must master new knowledge demands, e.g., knowing how different parts of the system interact physically or functionally. Greater investments will be required to avoid buggy, incomplete or oversimplified models. However, many of these interconnections will be relevant only under special circumstances. The potential for inert knowledge may go up.

• An action or a fault may influence a part of the system that seems distant from the area of focus. This complicates the diagnostic search process; for example, the ability to discriminate red herrings from important "distant" indications will be more difficult (e.g., the problem in Roth et al., 1992). In hindsight the relevance of such distant indications will be crystal clear, while from a practitioner's perspective the demand is to discriminate irrelevant data from the critical findings to focus on.

• A single action will have multiple effects. Some of these will be the intended or main effects while others will be "side" effects. Missing side effects in diagnosis, planning and adapting plans in progress to cope with new events is a very common form of failure in highly coupled systems.

• A fault will produce multiple disturbances and these disturbances will cascade along the lines of physical and functional interconnection in the underlying monitored process. This also complicates diagnostic search (evidence will come in over time), but, importantly, it will increase the tempo of operations.

Escalation

A related demand factor is the phenomenon of escalation, which captures a fundamental correlation between the situation, the cognitive demands, and the penalties for poor human-machine interaction (Woods et al., 1994): the greater the trouble in the underlying process or the higher the tempo of operations, the greater the information processing activities required to cope with the trouble or pace of activities.

As situations diverge from routine or textbook, the tempo of operations escalates and the cognitive and cooperative work needed to cope with the anomalous situation escalates as well. As a result, demands for monitoring, attentional control, information gathering, and communication among team members (including human-machine communication) all tend to go up with the unusualness, tempo and criticality of situations. More knowledge and more specialist knowledge will need to be brought to bear as an incident evolves and escalates. This occurs through the technological and organizational structures used to access knowledge stored in different systems, places, or people. More lines of activity and thought will arise and need to be coordinated (Woods, 1994). The potential for goals to interact and conflict will go up. Focusing on the most critical goals may be needed to make tradeoff decisions. Plans will need to be put into effect to cope with the anomalous situation, but some practitioners will need to evaluate how well contingency plans match the actual circumstances or how to adapt them to the specific context.

If there are workload or other burdens associated with using a computer interface or with interacting with an autonomous or intelligent machine agent, these burdens tend to be concentrated at the very times when the practitioner can least afford new tasks, new memory demands, or diversions of his or her attention away from the job at hand to the interface per se (e.g., the case of cockpit automation; Billings, 1996). This means the penalties for poor design will be visible only in beyond-textbook situations. Since the system will seem to function well much of the time, breakdowns in higher tempo situations will seem mysterious after-the-fact and be attributed to "pilot error" (see Sarter and Woods, in press, for this process in the case of cockpit automation).

Cognition in Context

While many of the factors discussed in the previous sections appear to be aspects of an individual's cognitive processes, i.e., knowledge organization, mental model, attention, judgment under uncertainty, we quickly find in work environments that these cognitive activities occur in the context of other practitioners, supported by artifacts of various types, and framed by the demands of the organization (Moray, this volume). While research on errors often began by considering the cognition of individuals, investigations quickly revealed one had to adopt a broader focus. For example, the more researchers have looked at success and failure in complex work settings, the more they have realized that a critical part of the story is how resources and constraints provided by the blunt end shape and influence the behavior of the people at the sharp end (Rochlin et al., 1987; Hirschhorn, 1993; Reason, 1998).

When investigators studied cognition at work, whether on flight decks of commercial jet airliners, in control centers that manage space missions, in surgical operating rooms, in control rooms that manage chemical or energy processes, in control centers that monitor telecommunication networks, or in many other fields of human activity, first, they did not find cognitive activity isolated in a single individual, but rather they saw cognitive activity distributed across multiple agents (Resnick, Levine, and Teasley, 1991; Hutchins, 1995). Second, they did not see cognitive activity separated in a thoughtful individual, but rather as a part of a stream of activity (Klein, Orasanu, and Calderwood, 1993). Third, they saw these sets of active practitioners embedded in larger group, professional, organizational, or institutional contexts which constrain their activities, set up rewards and punishments, define goals which are not always consistent, and provide resources (e.g., Hutchins, 1990). Moments of individual cognition punctuated this larger flow, and they were set up and conditioned by the larger system and communities of practice in which that individual was embedded.

Fourth, they observed phases of activity with transitions and evolution. Cognitive and physical activity varied in tempo, with periods of lower activity and more self-paced tasks interspersed with busy, externally paced operations where task performance was more critical. These higher tempo situations created greater need for cognitive work and at the same time often created greater constraints on cognitive activity (e.g., time pressure, uncertainty, exceptional circumstances, failures, and their associated hazards). They observed that there are multiple consequences at stake for the individuals, groups, and organizations involved in the field of activity or affected by that field of activity, such as economic, personal, and safety goals.

Fifth, they noticed that tools of all types are everywhere. Almost all activity was aided by external artifacts, some fashioned from traditional technologies and many built in the computer medium (Norman, 1993). More in-depth observation revealed that the computer technology was often poorly adapted to the needs of the practitioner. The computer technology was clumsy to use in that the computer-based systems made new demands on the practitioner, demands that tended to congregate at the higher tempo or higher criticality periods. Close observation revealed that people and systems of people (operators, designers, regulators, etc.) adapted their tools and their activities continuously to respond to indications of trouble or to meet new demands. Furthermore, new machines were not used as the designers intended, but were shaped by practitioners to the contingencies of the field of activity in a locally pragmatic way (Woods et al., 1994, chapter 5).

These kinds of observations about cognition at work have led some to propose that cognition can be seen as fundamentally public and shared, distributed across agents, distributed between external artifacts and internal strategies, embedded in a larger context that partially governs the meanings that are made out of events (Winograd and Flores, 1988; Norman, 1993; Hutchins, 1995). Understanding cognition then depends as much on studying the context in which cognition is embedded and the larger distributed system of artifacts and multiple agents, as on studying what goes on between the ears -- a distributed cognitive system perspective (Hutchins, 1995).

Hughes, Randall and Shapiro (1992, p. 5) illustrate this distributed cognitive system viewpoint in their studies of the UK air traffic control system and the reliability of this system.

If one looks to see what constitutes this reliability, it cannot be found in any single element of the system. It is certainly not to be found in the equipment . . . for a period of several months during our field work it was failing regularly. . . . Nor is it to be found in the rules and procedures, which are a resource for safe operation but which can never cover every circumstance and condition. Nor is it to be found in the personnel who, though very highly skilled, motivated, and dedicated, are as prone as people everywhere to human error. Rather we believe it is to be found in the cooperative activities of controllers across the 'totality' of the system, and in particular in the way that it enforces the active engagement of controllers, chiefs, and assistants with the material they are using and with each other.

Success and failure belong to the larger operational system and not simply to an individual. Failure involves breakdowns in cognitive activities which are distributed across multiple practitioners and influenced by the artifacts used by those practitioners. This is perhaps best illustrated in processes of error detection and recovery, which play a key role in determining system reliability in practice. In most domains, recovery involves the interaction of multiple people cross-stimulating and cross-checking each other (e.g., see Billings, 1996 for aviation; Rochlin et al., 1987 for aircraft carrier operations; also see Seifert and Hutchins, 1992).

Hindsight and Cognitive Factors

Now that we have explored some of the factors that affect human performance, the way in which our knowledge of outcome biases attributions about the causes of accidents becomes clearer. With knowledge of outcome, reviewers oversimplify the situation faced by practitioners in context (Woods et al., 1994). First, reviewers, after-the-fact, assume that if people demonstrate knowledge in some context, then that knowledge should be available in all contexts. However, research on human performance shows that calling knowledge to mind is a significant cognitive process. Education research has focused extensively on the problem of inert knowledge: knowledge that can be demonstrated in one context (e.g., test exercises) is not activated in other contexts where it is relevant (e.g., ill-structured problems). Inert knowledge consists of isolated facts that are disconnected from how the knowledge can be used to accomplish some purpose. This research emphasizes the need to "conditionalize" knowledge to its use in different contexts as a fundamental training strategy.

Second, reviewers, after-the-fact, assume that if data are physically available, then their significance should be appreciated in all contexts. Many accident reports have a statement to the effect that all of the data relevant in hindsight were physically available to the people at the sharp end, but the people did not find and interpret the right data at the right time. Hindsight biases reviewers, blocking from view all of the processes associated with forming and shifting mindset as new events occur, the attentional demands of the situation, and the factors that led to a breakdown in synchronizing mindset in a changing world. Instead of seeing the mindset-related factors and how they shape human performance, reviewers in hindsight are baffled by the practitioners' inability to see what is obvious to all after-the-fact. In this explanatory vacuum, reviewers in hindsight fall back on other explanations, for example, a motivational or effort factor (the people involved in an incident didn't "try hard enough"), and try to improve things by a carrot and stick approach of exhortations to try harder combined with punishments for failure.

Third, since reviewers after-the-fact know which goal was the most critical, they assume that this should have been obvious to the practitioners working before-the-fact. However, this ignores the multiple interacting goals that are always present in systems under resource pressure, goals that sometimes co-exist peacefully and sometimes work against each other. If practitioner actions that are shaped by a goal conflict contribute to a bad outcome in a specific case, then it is easy for post-incident evaluations to say that a "human error" occurred, e.g., the practitioners should have delayed the surgical procedure to investigate the hint. The role of the goal conflict may never be noted; therefore, post-accident changes cannot address critical contributors, factors that can contribute to other incidents.

To evaluate the behavior of the practitioners involved in an incident, it is important to elucidate the relevant goals, the interactions among these goals, and the factors that influence how practitioners make tradeoffs in particular situations. The role of these factors is often missed in evaluations of the behavior of practitioners. As a result, it is easy for organizations to produce what appear to be solutions that in fact exacerbate conflict between goals rather than help practitioners handle goal conflicts in context. In part, this occurs because it is difficult for many organizations (particularly in regulated industries) to admit that goal conflicts and tradeoff decisions exist. However distasteful to admit or whatever public relations problems it creates, denying the existence of goal interactions does not make such conflicts disappear and is likely to make them even tougher to handle when they are relevant to a particular incident.

Debiasing and Studying Human Error

Hindsight biases fundamentally undermine our ability to understand the factors that influenced practitioner behavior. Given knowledge of outcome, reviewers will tend to oversimplify the problem-solving situation that was actually faced by the practitioner. The dilemmas, the uncertainties, the tradeoffs, the attentional demands, and double binds faced by practitioners may be missed or under-emphasized when an incident is viewed in hindsight. Typically, hindsight biases make it seem that participants failed to account for information or conditions that "should have been obvious" or behaved in ways that were inconsistent with the (now known to be) significant information. Possessing knowledge of the outcome, because of hindsight biases, trivializes the situation confronting the practitioner, who cannot know the outcome before-the-fact, and makes the "correct" choice seem crystal clear.

Because hindsight biases mask the real dilemmas, uncertainties, and demands practitioners confront, the result is a distorted view of the factors contributing to the incident or accident. In this vacuum, we can only see human performance after an accident or near miss as irrational, as willing disregard of what is now obvious to us and to them, or even as diabolical. This seems to support the belief that human error often is the cause of an accident and that this judgment provides a satisfactory closure to the accident. When the label human error ends the investigation, the only response left is to search for the culprits and, once they have been identified, remove them from practice, provide remedial training, or replace them with technology, while new policies and procedures are issued to keep other practitioners in line.

The difference between these everyday or "folk" reactions to failure and investigations of the factors that influence human performance is that researchers see the label human error as the starting point for investigations and use methods designed to remove hindsight biases to see better the factors that influenced the behavior of the people in the situation. When this is done, from Bessel's observations to Fitts' studies to today's investigations of computerized cockpits and other domains, the results identify the multiple deeper factors that lead to erroneous actions and assessments. These deeper factors will still be present even if the people associated with the failure are removed from practice. The vulnerabilities remain despite injunctions for practitioners to be more careful in the future. The injunctions or punishments that follow the accident may even exacerbate some of the vulnerabilities resident in the larger system.

We can always look back at people, episodes or cases in any system and, using one or another standard, identify any number of "errors," that is, violations of that standard. The key to safety is not minimizing or eradicating an infection of error (Rasmussen, 1990). Effective, robust, "high reliability" systems are able to recognize trouble before negative consequences occur. This means that the processes involved in detecting that a situation is heading towards trouble and re-directing the situation away from a poor outcome are an important part of human performance related to safety versus failure. Evidence of difficulties, problems, and incidents is an important form of information about the organization and operational system that is necessary for adaptive and constructive change (Reason, 1998). Studying cognitive factors, coordinative activities, and organizational constraints relative to problem demands in a particular domain is a crucial part of generating this base of information. Successful, "high reliability" organizations value, encourage, and generate such flows of information without waiting for accidents to occur (Rochlin, LaPorte, and Roberts, 1987).

Summary

Considering the topic "human error" quickly led us to two different perspectives.

"Human error" in one sense is a label invoked by stakeholders after-the-fact in a psychological and social process of causal attribution. Psychologists and others have studied these processes and in particular found a pernicious influence of knowledge of outcome, such as outcome and hindsight biases, that obscures the factors that shape human performance. Studying "human error" from this perspective is the study of how stakeholders react to failure.

"Human error" in another sense refers to the processes that lead up to success and failure (or potential failures) of a system. Researchers, using techniques to escape the hindsight bias, have studied such work domains and the factors that influence the performance of the people who work in those systems. In this chapter, we have reviewed a portion of the results using one heuristic structure -- knowledge in context, mindset and goal conflicts.

However, there is an irony in these research results. In focusing on cognitive factors behind the label human error, processes that would seem to reside only within individuals, researchers instead found

• a critical role for understanding the demands posed by problems, e.g., coupling and escalation (Perrow, 1984; Rasmussen, 1986),

• that artifacts shape cognition (Zhang and Norman, 1994; Woods et al., 1994, chapter 5),

• that coordination across practitioners is an essential component of success (Resnick, Levine and Teasley, 1991; Hutchins, 1995),

• that organizations create or exacerbate constraints on sharp end practitioners (Rochlin et al., 1987; Reason, 1998).

When one investigates the factors that produce "human error" in actual work environments, our view of cognition expands. "Cognition in the wild," as Hutchins (1995) puts it, is distributed across people, across people and machines, and constrained by context.

References

Baron, J. and Hershey, J. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54, 569-579.

Boring, E. G. (1950). A History of Experimental Psychology (2nd Edition). New York: Appleton-Century-Crofts.

Billings, C. E. (1996). Aviation Automation: The Search For A Human-Centered Approach. Hillsdale, NJ: Lawrence Erlbaum.

Bransford, J., Sherwood, R., Vye, N., and Rieser, J. (1986). Teaching and problem solving: Research foundations. American Psychologist, 41, 1078-1089.

Byrne, M. D. and Bovair, S. (1997). A working memory model of a common procedural error. Cognitive Science, 21, 31-61.

Caplan, R., Posner, K., and Cheney, F. (1991). Effect of outcome on physician judgments of appropriateness of care. Journal of the American Medical Association, 265, 1957-1960.

Chi, M. T. H., Glaser, R. and Farr, M. (1988). The Nature Of Expertise. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cook, R. I. and Woods, D. D. (1994). Operating at the 'Sharp End:' The Complexity of Human Error. In M. S. Bogner (Ed.), Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum.

De Keyser, V. and Woods, D. D. (1990). Fixation errors: Failures to revise situation assessment in dynamic and risky systems. In A. G. Colombo and A. Saiz de Bustamante (Eds.), System Reliability Assessment. The Netherlands: Kluwer Academic, 231-251.

Dorner, D. (1983). Heuristics and cognition in complex systems. In Groner, R., Groner, M. and Bischof, W. F. (Eds.), Methods of Heuristics. Hillsdale, NJ: Lawrence Erlbaum.

Durso, F. T. and Gronlund, S. D. (this volume). Situation Awareness.

Feltovich, P. J., Ford, K. M. and Hoffman, R. R. (1997). Expertise in Context. Cambridge, MA: MIT Press.

Feltovich, P. J., Spiro, R. J. and Coulson, R. (1989). The nature of conceptual understanding in biomedicine: The deep structure of complex ideas and the development of misconceptions. In D. Evans and V. Patel (Eds.), Cognitive Science In Medicine: Biomedical Modeling. Cambridge, MA: MIT Press.

Feltovich, P. J., Spiro, R. J., and Coulson, R. (1993). Learning, teaching and testing for complex conceptual understanding. In N. Fredericksen, R. Mislevy and I. Bejar (Eds.), Test Theory For A New Generation Of Tests. Hillsdale, NJ: Lawrence Erlbaum.

Feltovich, P. J., Spiro, R. J., and Coulson, R. (1997). Issues of expert flexibility in contexts characterized by complexity and change. In Feltovich, P. J., Ford, K. M. and Hoffman, R. R. (Eds.), Expertise in Context. Cambridge, MA: MIT Press.

Fischer, U., Orasanu, J. M. and Montvalo, M. (1993). Efficient decision strategies on the flight deck. In Proceedings of the Seventh International Symposium on Aviation Psychology. Columbus, OH. April.

Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288-299.

Fischhoff, B. (1982). For those condemned to study the past: Heuristics and biases in hindsight. In D. Kahneman, P. Slovic and A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and biases. Cambridge, MA: Cambridge University Press.

Fitts, P. M. and Jones, R. E. (1947). Analysis of factors contributing to 460 "pilot-error" experiences in operating aircraft controls (Memorandum Report TSEAA-694-12). Wright Field, OH: U.S. Air Force Air Materiel Command, Aero Medical Laboratory.

Gaba, D. M., and DeAnda, A. (1989). The response of anesthesia trainees to simulated critical incidents. Anesthesia and Analgesia, 68, 444-451.

Gentner, D., and Stevens, A. L. (Eds.) (1983). Mental Models. Hillsdale, NJ: Lawrence Erlbaum Associates.

Getty, D. J., Swets, J. A., Pickett, R. M. and Gonthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.

Gilson, R. D. (Ed.) (1995). Special Section on Situation Awareness. Human Factors, 37, 3-157.

Gopher, D. (1991). The skill of attention control: Acquisition and execution of attention strategies. In Attention and Performance XIV. Hillsdale, NJ: Lawrence Erlbaum.

Hilton, D. (1990). Conversational processes and causal explanation. Psychological Bulletin, 197, 65-81.

Hirschhorn, L. (1993). Hierarchy vs. bureaucracy: The case of a nuclear reactor. In K. H. Roberts (Ed.), New Challenges To Understanding Organizations. New York: Macmillan.

Hollnagel, E. (1993). Human Reliability Analysis: Context and Control. London: Academic Press.

Hughes, J., Randall, D. and Shapiro, D. (1992). Faltering from ethnography to design. Computer Supported Cooperative Work (CSCW) Proceedings, (November), 1-8.

Hutchins, E. (1990). The technology of team navigation. In J. Galegher, R. Kraut, and C. Egido (Eds.), Intellectual Teamwork: Social and technical bases of cooperative work. Hillsdale, NJ: Lawrence Erlbaum.

Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.

Jager Adams, M., Tenney, Y. J., and Pew, R. W. (1995). Situation awareness and the cognitive management of complex systems. Human Factors, 37, 85-104.

Johnson, P. E., Duran, A. S., Hassebrock, F., Moller, J., Prietula, M., Feltovich, P. J., Swanson, D. B. (1981). Expertise and error in diagnostic reasoning. Cognitive Science, 5, 235-283.

Johnson, P. E., Moen, J. B. and Thompson, W. B. (1988). Garden path errors in diagnostic reasoning. In L. Bolec and M. J. Coombs (Eds.), Expert System Applications. New York: Springer-Verlag.

Johnson, P. E., Jamal, K. and Berryman, R. G. (1991). Effects of framing on auditor decisions. Organizational Behavior and Human Decision Processes, 50, 75-105.

Johnson, P. E., Grazioli, S., Jamal, K. and Zualkernan, I. (1992). Success and failure in expert reasoning. Organizational Behavior and Human Decision Processes, 53, 173-203.

Kelley, H. H. (1973). The process of causal attribution. American Psychologist, 28, 107-128.

Klein, G. A., Orasanu, J., and Calderwood, R. (Eds.) (1993). Decision Making in Action: Models and Methods. Norwood, NJ: Ablex.

Klein, G. A. (1998). Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.

LaBerge, D. (1995). Attentional Processing: The Brain's Art of Mindfulness. Cambridge, MA: Harvard University Press.

Layton, C., Smith, P., and McCoy, E. (1994). Design of a cooperative problem-solving system for enroute flight planning: An empirical evaluation. Human Factors, 36, 94-119.

Lipshitz, R. (1989). "Either a medal or a corporal:" The effects of success and failure on the evaluation of decision making and decision makers. Organizational Behavior and Human Decision Processes, 44, 380-395.

Moray, N. (this volume). The Cognitive Psychology of Industrial Systems: An Introduction to Cognitive Engineering.

Moshansky, V. P. (1992). Final report of the commission of inquiry into the Air Ontario crash at Dryden, Ontario. Ottawa: Minister of Supply and Services, Canada.

National Transportation Safety Board. (1990). Marine accident report: Grounding of the U.S. Tankship Exxon Valdez on Bligh Reef, Prince William Sound, near Valdez, Alaska, March 24, 1989. Report no. NTSB/MAT-90/04. Springfield, VA: National Technical Information Service.

Neisser, U. (1976). Cognition and Reality: Principles and Implications of Cognitive Psychology. San Francisco, CA: W. H. Freeman and Company.

Newell, A. (1982). The knowledge level. Artificial Intelligence, 18, 87-127.

Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1-15.

Norman, D. A. (1983). Design rules based on analysis of human error. Communications of the ACM, 26, 254-258.

Norman, D. A. (1988). The Psychology of Everyday Things. New York, NY: Basic Books.

Norman, D. A. (1993). Things That Make Us Smart. Reading, MA: Addison-Wesley.

Patterson, E. S., Watts-Perotti, J. C. and Woods, D. D. (in press). Voice Loops as Coordination Aids in Space Shuttle Mission Control. Computer Supported Cooperative Work.

Payne, J. W., Bettman, J. R. and Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 534-552.

Payne, J. W., Johnson, E. J., Bettman, J. R. and Coupey, E. (1990). Understanding contingent choice: A computer simulation approach. IEEE Transactions on Systems, Man, and Cybernetics, 20, 296-309.

Perkins, D. and Martin, F. (1986). Fragile knowledge and neglected strategies in novice programmers. In E. Soloway and S. Iyengar (Eds.), Empirical Studies of Programmers. Norwood, NJ: Ablex.

Perrow, C. (1984). Normal Accidents: Living with High Risk Technologies. New York, NY: Basic Books.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: North-Holland.

Rasmussen, J. (1990). The role of error in organizing behavior. Ergonomics, 33, 1185-1199.

Rasmussen, J., Duncan, K. and Leplat, J. (1987). New Technology and Human Error. Chichester: John Wiley & Sons.

Reason, J. and Mycielska, K. (1982). Absent Minded? The Psychology of Mental Lapses and Everyday Errors. Englewood Cliffs, NJ: Prentice Hall.

Reason, J. (1990). Human Error. Cambridge, England: Cambridge University Press.

Reason, J. (1997). Managing the Risks of Organizational Accidents. Brookfield, VT: Ashgate Publishing.

Resnick, L., Levine, J. and Teasley, S. D. (1991). Perspectives on Socially Shared Cognition. New York: American Psychological Association.

Roberts, K. H., and Rousseau, D. M. (1989). Research in nearly failure-free, high-reliability organizations: Having the bubble. IEEE Transactions on Engineering Management, 36, 132-139.

Rochlin, G., LaPorte, T. R. and Roberts, K. H. (1987). The self-designing high reliability organization: Aircraft carrier flight operations at sea. Naval War College Review, (Autumn), 76-90.

Roth, E. M., Woods, D. D. and Pople, H. E., Jr. (1992). Cognitive simulation as a tool for cognitive task analysis. Ergonomics, 35, 1163-1198.

Schwenk, C., and Cosier, R. (1980). Effects of the expert, devil's advocate and dialectical inquiry methods on prediction performance. Organizational Behavior and Human Decision Processes, 26.

Seifert, C. M. and Hutchins, E. (1992). Error as opportunity: Learning in a cooperative task. Human-Computer Interaction, 7, 409-435.

Simon, H. (1957). Models of Man (Social and Rational). New York: John Wiley and Sons.

Spiro, R. J., Coulson, R. L., Feltovich, P. J. and Anderson, D. K. (1988). Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. Proceedings of the Tenth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates.

Tasca, L. (1990). The Social Construction of Human Error. Unpublished Doctoral Dissertation, State University of New York at Stony Brook, August 1990.

U.S. Department of Defense. (1988). Report of the formal investigation into the circumstances surrounding the downing of Iran Air Flight 655 on 3 July 1988. Washington, DC: Department of Defense.

U.S. House of Representatives Committee on Armed Services. (1987). Report on the staff investigation into the Iraqi attack on the USS Stark. Springfield, VA: National Technical Information Service.

Weick, K. E. and Roberts, K. H. (1993). Collective mind and organizational reliability: The case of flight operations on an aircraft carrier deck. Administrative Science Quarterly, 38, 357-381.

Winograd, T. and Flores, F. (1987). Understanding Computers and Cognition. Reading, MA: Addison-Wesley.

Woods, D. D. (1994). Cognitive Demands and Activities in Dynamic Fault Management: Abduction and Disturbance Management. In N. Stanton (Ed.), Human Factors of Alarm Design. London: Taylor & Francis.

Woods, D. D. (1995). The alarm problem and directed attention in dynamic fault management. Ergonomics, 38(11), 2371-2393.

Woods, D. D. and Sarter, N. B. (in press). Learning from Automation Surprises and Going Sour Accidents. In N. Sarter and R. Amalberti (Eds.), Cognitive Engineering in the Aviation Domain. Hillsdale, NJ: Lawrence Erlbaum.

Woods, D. D. and Watts, J. C. (1997). How Not To Have To Navigate Through Too Many Displays. In M. G. Helander, T. K. Landauer and P. Prabhu (Eds.), Handbook of Human-Computer Interaction, 2nd edition. Amsterdam, The Netherlands: Elsevier Science.

Woods, D. D., Johannesen, L., Cook, R. I. and Sarter, N. B. (1994). Behind Human Error: Cognitive Systems, Computers and Hindsight. Crew Systems Ergonomic Information and Analysis Center, WPAFB, Dayton, OH.

Woods, D. D., O'Brien, J. and Hanes, L. F. (1987). Human factors challenges in process control: The case of nuclear power plants. In G. Salvendy (Ed.), Handbook of Human Factors/Ergonomics (First Edition). New York: Wiley.

Yantis, S. (1993). Stimulus-driven attentional capture. Current Directions in Psychological Science, 2, 156-161.

Zhang, J. and Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18, 87-122.

Zhang, J. (1997). The nature of external representations in problem solving. Cognitive Science, 21, 179-217.