
2006 - Ericsson, Charness, Feltovich & Hoffman

P1: JzG052184097Xc12 CB1040B/Ericsson 0 521 84087 X May 22, 2006 16:1

CHAPTER 12

Eliciting and Representing the Knowledge of Experts

Robert R. Hoffman & Gavan Lintern

Keywords: knowledge elicitation, expert systems, intelligent systems, methodology, Concept Maps, Abstraction-Decomposition, critical decision method

Introduction

The transgenerational transmission of the wisdom of elders via storytelling is as old as humanity itself. During the Middle Ages and Renaissance, the Craft Guilds had well-specified procedures for the transmission of knowledge, and indeed gave us the developmental scale that is still widely used: initiate, novice, apprentice, journeyman, expert, and master (Hoffman, 1998). Based on interviews and observations of the workplace, Denis Diderot (along with 140 others, including Voltaire) created one of the great works of the Enlightenment, the 17-volume Encyclopédie (Diderot, 1751–1772), which explained many "secrets" – the knowledge and procedures in a number of tradecrafts. The emergent science of psychology of the 1700s and 1800s also involved research that, in hindsight, might legitimately be regarded as knowledge elicitation (KE). For instance, a number of studies of reasoning were conducted in the laboratory of Wilhelm Wundt, and some of these involved university professors as the research participants (Militello & Hoffman, forthcoming). In the decade prior to World War I, the stage was set in Europe for applied and industrial psychology; much of that work involved the systematic study of proficient domain practitioners (see Hoffman & Deffenbacher, 1992).

The focus of this chapter is on a more recent acceleration of research that involves the elicitation and representation of expert knowledge (and the subsequent use of the representations in design). We lay out the recent historical origins and rationale for the work, we chart the developments during the era of first-generation expert systems, and then we proceed to encapsulate our modern understanding of and approaches to the elicitation, representation, and sharing of expert knowledge. Our emphasis in this chapter is on methods and methodological issues.

203

204 the cambridge handbook of expertise and expert performance

Where This Topic Came From

The Era of Expert Systems

The era of expert systems can be dated from about 1971, when Edward Feigenbaum and his colleagues (Feigenbaum, Buchanan, & Lederberg, 1971) created a system in which a computable knowledge base of domain concepts was integrated with an inference engine of procedural (if-then) rules. This "expert system" was intended to capture the skill of expert chemists regarding the interpretation of mass spectrograms. Other seminal systems were MYCIN (Shortliffe, 1976), for diagnosing bacterial infections, and PROSPECTOR (Duda, Gaschnig, & Hart, 1979), for determining site potential for geological exploration.
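The architecture just described (a knowledge base of facts plus an inference engine of if-then rules) can be sketched as a tiny forward-chaining loop. The facts and rules below are invented, medical-flavored illustrations, not MYCIN's actual rule base.

```python
# Minimal sketch of a first-generation expert system: a knowledge base of
# facts plus an inference engine of if-then production rules, applied by
# forward chaining. Facts and rules are invented illustrations.

facts = {"gram-negative", "rod-shaped"}

rules = [
    ({"gram-negative", "rod-shaped"}, "suspect enterobacteriaceae"),
    ({"suspect enterobacteriaceae"}, "recommend culture"),
]  # (conditions, conclusion) pairs

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold until no new fact appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

A loop this simple also hints at the "brittleness" discussed below: a case whose facts match no rule's conditions simply derives nothing.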

It seemed to take longer for computer scientists to elicit knowledge from experts than to write the expert system software. This "knowledge acquisition bottleneck" became a salient problem (see Hayes-Roth, Waterman, & Lenat, 1983). It was widely discussed in the computer science community (e.g., McGraw & Harbison-Briggs, 1989; Rook & Croghan, 1989). An obvious suggestion was that computer systems engineers might be trained in interview techniques (Forsyth & Buchanan, 1989), but the bottleneck also spawned the development of automated knowledge acquisition "shells." These were toolkits for helping domain experts build their own prototype expert systems (for a bibliography, see Hoffman, 1992).

By use of a shell, experts entered their expert knowledge about domain concepts and reasoning rules directly into the computer as responses to questions (Gaines & Boose, 1988). Neale (1988) advocated "eliminating the knowledge engineer and getting the expert to work directly with the computer" (p. 136) because human-on-human KE methods (interviews, protocol analysis) were believed to place an "unjustified faith in textbook knowledge and what experts say they do" (p. 135).

The field of expert systems involved literally thousands of projects in which expert knowledge was elicited (or acquired) (Hoffman, 1992), but serious problems soon arose. For example, software brittleness (breakdowns in handling atypical cases) and explanatory insufficiency (a printout of cryptic procedural rules fails to clearly express to non-programmers the reasoning path that was followed by the software) were quickly recognized as troublesome (for reviews that convey aspects of the history of this field, see David, Krivine, & Simmons, 1993; Raeth, 1990). At the same time, there was a burgeoning of interest in expertise on the part of cognitive psychologists.

Expertise Studies in Psychology

The application of cognitive science and the psychology of learning to topics in instructional design led to studies of the basis for expertise and knowledge organization at different stages during acquisition of expertise (Lesgold, 1994; Means & Gott, 1988). In the early 1970s, a group of researchers affiliated with the Learning Research and Development Center at the University of Pittsburgh and the Psychology Department at Carnegie-Mellon University launched a number of research projects on issues of instructional design in both educational contexts (e.g., elementary-school-level mathematics word problems; college-level physics problems) and technical contexts of military applications (e.g., problem solving by electronics technicians) (e.g., Chi, Feltovich, & Glaser, 1981; Lesgold et al., 1981). The research emphasized problem-solving behaviors decomposed as "learning hierarchies" (Gagne & Smith, 1962), that is, sequences of learning tasks arranged according to difficulty and direction of transfer.

Interest in instructional design quickly became part of a larger program of investigation that generated several foundational notions about the psychology of expertise (see Glaser, 1987). A number of researchers, apparently independently of one another, began to use the term "cognitive task analysis" both to refer to the process of identifying the knowledge and strategies that make up expertise for a particular domain

eliciting and representing the knowledge of experts 205

and task, and to distinguish the process from so-called behavioral task analysis (e.g., Glaser et al., 1985; see Schraagen, this volume). A stream of psychological research evolved that shifted emphasis from studies with naive, college-aged "subjects" who participated in artificial tasks using artificial materials (in service of control and manipulation of variables) to studies in which highly skilled, domain-smart participants engaged in tasks that were more representative of the complexity of the "real world" in which they practiced their craft (Chi, Glaser, & Farr, 1988; Hoffman, 1992; Knorr-Cetina & Mulkay, 1983; Shanteau, 1992).

Investigators began to shift their attention from cataloging biases and limitations of human reasoning in artificial and simple problems (e.g., statistical reasoning puzzles, syllogistic reasoning puzzles) to the exploration of human capabilities for making decisions, solving complex problems, and forming mental models (Gentner & Stevens, 1983; Klahr & Kotovsky, 1989; Klein & Weitzenfeld, 1982; Scribner, 1984; Sternberg & Frensch, 1991). The ethnographic research of Lave (1988) and Hutchins (1995) revealed that experts do not slavishly conduct "tasks" or adhere to work rules or work procedures but instead develop informal heuristic strategies that, though possibly inefficient and even counterintuitive, are often remarkably robust, effective, and cognitively economical. One provocative implication of this work is that expertise results in part from a natural convergence on such strategies during engagement with the challenges posed by work.

Studies spanned a wide gamut of topics, some of which seem more traditional to academia (e.g., physics problem solving), but many that would traditionally not be fair game for the academic experimental psychologist (e.g., expertise in manufacturing engineering, medical diagnosis, taxicab driving, bird watching, grocery shopping, natural navigation). Mainstream cognitive psychology took something of a turn toward applications (see Barber, 1988), and today the phrase "real world" seems to no longer require scare quotes (see Hollnagel, Hoc, & Cacciabue, 1996), although there are remnants of debate about the utility and scientific foundations of research that is conducted in uncontrolled or non-laboratory contexts (e.g., Banaji & Crowder, 1989; Hoffman & Deffenbacher, 1993; Hoffman & Woods, 2000).

The Early Methods Palette

Another avenue of study involved attempts to address the knowledge-acquisition bottleneck, the root cause of which lay in the reliance on unstructured interviews by the computer scientists who were building expert systems (see Cullen & Bryman, 1988). Unstructured interviews gained early acceptance as a means of simultaneously "bootstrapping" the researcher's knowledge of the domain and establishing rapport between the researcher and the expert. Nevertheless, the bottleneck issue encouraged a consideration of methods from psychology that might be brought to bear to widen the bottleneck, including methods of structured interviewing (Gordon & Gill, 1997). Interviews could get their structure from preplanned probe questions, from archived test cases, and so forth.

In addition to interviewing, the researcher might look at expert performance while the expert is conducting their usual or "familiar" task and thinking aloud, with their knowledge and reasoning revealed subsequently via a protocol analysis (see Chi et al., 1981; Ericsson & Simon, 1993, Chapter 38, this volume). In addition, one could study expert performance at "contrived tasks," for example, by withholding certain information about the case at hand (limited-information tasks), or by manipulating the way the information is processed (constrained-processing tasks). In the "method of tough cases" the expert is asked to work on a difficult test case (perhaps gleaned from archives) with the idea that tough cases might reveal subtle aspects of expert reasoning, or particular subdomain or highly specialized knowledge, or aspects of experts' metacognitive skills, for example, the ability to reason about their own


reasoning or create new procedures or conceptual categories "on the fly."

Empirical comparisons of KE methods, conducted in the late 1980s, were premised on the speculation that different methods might yield different "kinds" of knowledge – the "differential access hypothesis." (These studies are reviewed at greater length in Hoffman et al., 1995, and Shadbolt & Burton, 1990.) Hoffman worked with experts at aerial photo interpretation for terrain analysis, and Shadbolt and Burton worked with experts at geological and archaeological classification. Both research programs employed a number of knowledge-elicitation methods, and both evaluated the methods in terms of their yield (i.e., the number of informative propositions or decision/classification rules elicited as a function of the task time).

The results were in general agreement. Think-aloud problem solving, combined with protocol analysis, proved to be relatively time-consuming, having a yield of less than one informative proposition per total task minute. Likewise, an unstructured interview yielded less than one informative proposition per total task minute. A structured interview, a constrained-processing task, and an analysis of tough cases were the most efficient, yielding between one and two informative propositions per total task minute.
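The yield metric used in these comparisons is simply the count of informative propositions divided by total task time. A sketch of the computation follows; the session counts below are invented placeholders, chosen only to fall within the ranges reported above, not the actual study data.

```python
# Illustrative sketch: comparing knowledge-elicitation methods by "yield,"
# i.e., informative propositions elicited per total task minute.
# The proposition counts and times are invented placeholders.

def yield_per_minute(propositions, task_minutes):
    """Informative propositions elicited per minute of total task time."""
    return propositions / task_minutes

sessions = {
    "unstructured interview": (50, 60.0),          # (propositions, minutes)
    "think-aloud + protocol analysis": (45, 60.0),
    "structured interview": (90, 60.0),
    "constrained-processing task": (80, 60.0),
    "analysis of tough cases": (100, 60.0),
}

# Rank methods from most to least efficient.
for method, (props, minutes) in sorted(
        sessions.items(),
        key=lambda item: yield_per_minute(*item[1]),
        reverse=True):
    print(f"{method:34s} {yield_per_minute(props, minutes):.2f} propositions/min")
```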

The results from the studies by Hoffman and by Shadbolt and Burton also showed that there was considerable overlap of the knowledge elicited by two of the main techniques they used – a task in which domain concepts were sorted into categories and a task in which domain concepts were rated on a number of dimensions. Both of the techniques elicited information about domain concepts and domain procedures. Hoffman, as well as Shadbolt and Burton, concluded that interviews need to be used in conjunction with ratings and sorting tasks because contrived techniques elicit specific knowledge and may not yield an overview of the domain knowledge.

An idea that was put aside is that the goal of KE should be to "extract" expert knowledge. It is far more appropriate to refer to knowledge elicitation as a collaborative process, sometimes involving "discovery" of knowledge (Clancey, 1993; Ford & Adams-Webber, 1992; Knorr-Cetina, 1981; LaFrance, 1992). According to a transactional view, expert knowledge is created and maintained through collaborative and social processes, as well as through the perceptual and cognitive processes of the individual. By this view, a goal for cognitive analysis and design is to promote development of a workplace in which knowledge is created, shared, and maintained via natural processes of communication, negotiation, and collaboration (Lintern, Diedrich, & Serfaty, 2002).

The foundation for this newer perspective and set of research goals had been laid by the work of Gary Klein and his associates on the decision making of proficient practitioners in domains such as clinical nursing and firefighting (see Ross, Shafer, & Klein, Chapter 23; Klein et al., 1993). They had laid out some new goals for KE, including the generation of cognitive specifications for jobs, the investigation of decision making in domains involving time pressure and high risk, and the enhancement of proficiency through training and technological innovation. It became clear that the methodology of KE could be folded into the broader methodology of "cognitive task analysis" (CTA) (Militello & Hoffman, forthcoming; Schraagen, Chapter 11), which is now a focal point for human-factors and cognitive-systems engineering.

The Era of Cognitive Task Analysis

Knowledge engineering (or cognitive engineering) typically starts with a problem or challenge to be resolved or a requirement to be satisfied with some form of information processing technology. The design goal influences the methods to be used, including the methods of knowledge elicitation, and the manner in which they will be adapted. One thing that all projects must do is identify who is, and who is not, an expert.

Psychological research during the era of expert systems tended to define expertise somewhat loosely, for instance, "advanced

Table 12.1. Some Alternative Methods of Proficiency Scaling

Method: In-depth career interviews about education, training, etc.
Yield: Ideas about breadth and depth of experience; estimate of hours of experience.
Example: Weather forecasting in the armed services, for instance, involves duty assignments having regular hours and regular job or task assignments that can be tracked across entire careers. The amount of time spent at actual forecasting or forecasting-related tasks can be estimated with some confidence (Hoffman, 1991).

Method: Professional standards or licensing
Yield: Ideas about what it takes for individuals to reach the top of their field.
Example: The study of weather forecasters involved senior meteorologists of the US National Oceanic and Atmospheric Administration and the National Weather Service (Hoffman, 1991). One participant was one of the forecasters for Space Shuttle launches; another was one of the designers of the first meteorological satellites.

Method: Measures of performance at the familiar tasks
Yield: Can be used for convergence on scales determined by other methods.
Example: Weather forecasting is again a case in point, since records can show for each forecaster the relation between their forecasts and the actual weather. In fact, this is routinely tracked in forecasting offices by the measurement of "forecast skill scores" (see Hoffman & Trafton, 2006).

Method: Social Interaction Analysis
Yield: Proficiency levels in some group of practitioners or within some community of practice (Mieg, 2000; Stein, 1997).
Example: In a project on knowledge preservation for the electric power utilities (Hoffman & Hanes, 2003), experts at particular jobs (e.g., maintenance and repair of large turbines, monitoring and control of nuclear chemical reactions, etc.) were readily identified by plant managers, trainers, and engineers. The individuals identified as experts had been performing their jobs for years and were known among company personnel as "the" person in their specialization: "If there was that kind of problem I'd go to Ted. He's the turbine guy."

graduate students" in a particular domain. In general, identification of experts was not regarded as either a problem or an issue in expert-system development. (For detailed discussions, see Hart, 1986; Prerau, 1989.) The rule of thumb based on studies of chess (Chase & Simon, 1973) is that expertise is achieved after about 10,000 hours of practice. Recent research has suggested a qualification on this rule of thumb. For instance, Hoffman, Coffey, and Ford (2000) found that even junior journeymen weather forecasters (individuals in their early 30s) can have had as much as 25,000 hours of experience. A similar figure seems appropriate for the domain of intelligence analysis (Hoffman, 2003a).
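The hours-of-experience figures above are simple arithmetic over a duty schedule. A sketch follows; the weekly workload and career length are assumptions chosen for illustration, not figures from the Hoffman, Coffey, and Ford study.

```python
# Sketch: estimating accumulated hours of on-task experience from a duty
# schedule. The workload figures below are assumptions for illustration.

def career_hours(task_hours_per_week, weeks_per_year, years):
    """Rough accumulated hours of on-task experience over a career."""
    return task_hours_per_week * weeks_per_year * years

# A forecaster spending roughly 40 task hours per week, 48 weeks per year,
# for 13 years accumulates on the order of 25,000 hours.
print(career_hours(40, 48, 13))
```

This is why full-time duty assignments with trackable hours (Table 12.1, first row) make experience estimates unusually credible in domains like military weather forecasting.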

Concern with the question of how to define expertise (Hoffman, 1998) led to an awareness that determination of who an expert is in a given domain can require effort. In a type of proficiency-scaling procedure, the researcher determines a domain- and organizationally-appropriate scale of proficiency levels. Some alternative methods are described in Table 12.1.

Social Interaction Analysis, the result of which is a sociogram, is perhaps the lesser known of the methods. A sociogram, which represents interaction patterns between people (e.g., frequent interactions), is used to study group clustering, communication patterns, and workflows and processes. For Social Interaction Analysis,

multiple individuals within an organization are interviewed. Practitioners might be asked, for example, "If you have a problem of type x, who would you go to for advice?" Or they might be asked to sort cards bearing the names of other domain practitioners into piles according to one or another skill dimension or knowledge category.
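The nomination interview just described lends itself to a simple tally: the practitioner most often named as an advice source is a candidate expert. A sketch follows, with invented names and nominations.

```python
# Sketch of Social Interaction Analysis: tally answers to the question
# "If you have a problem of type x, who would you go to for advice?"
# The names and nominations below are invented for illustration.
from collections import Counter

nominations = [
    ("alice", "ted"), ("bob", "ted"), ("carol", "ted"),
    ("ted", "dana"), ("bob", "dana"),
]  # (asker, person they would go to)

# In-degree in the sociogram: how often each person is named as an advisor.
in_degree = Counter(advisor for _, advisor in nominations)

# The most-nominated practitioner is a candidate expert for this problem type.
candidate_expert, count = in_degree.most_common(1)[0]
print(candidate_expert, count)
```

The same nomination data, drawn as a directed graph, is the sociogram itself; clusters and communication patterns fall out of the graph structure rather than the raw counts.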

Hoffman, Ford, and Coffey (2000) suggested that proficiency scaling for a given project should be based on at least two of the methods listed in Table 12.1. It is important to employ a scale that is both domain and organizationally appropriate, and that considers the full range of proficiency. For instance, in the project on weather forecasting (Hoffman, Coffey, & Ford, 2000), the proficiency scale distinguished three levels: experts, journeymen, and apprentices, each of which was further distinguished by three levels of seniority.

The expanded KE methods palette and the adoption of proficiency scaling represented the broadening of focus beyond expert systems to support for the creation of intelligent or knowledge-based systems of a variety of forms.

Foundational Methods of Cognitive Engineering

In North America, methods for CTA were developed in reaction to limitations of traditional "behavioral task analysis," as well as to limitations of the early AI knowledge acquisition techniques (Hollnagel & Woods, 1983; Rasmussen, 1986). CTA also emerged from the work of researchers who were studying diverse domains of expertise for the purpose of developing better methods for instructional design and enhancing human learning (see the chapters by Greeno, Gregg, Resnick, and Simon & Hayes in Klahr, 1976). Ethnographers, sociologists of science, and cognitive anthropologists, working in parallel, began to look at how new technology influences work cultures and how technology mediates cooperative activity (e.g., Clancey, Chapter 8; Hutchins, 1995; Knorr-Cetina & Mulkay, 1983; Suchman, 1987).

The field of "Work Analysis," which has existed in Europe since the 1960s, is regarded as a branch of ergonomics, although it has involved the study of cognitive activities in the workplace. (For reviews of the history of the research in this tradition, see De Keyser, Decortis, & Van Daele, 1998; Militello & Hoffman, forthcoming; Vicente, 1999.) Work Analysis is concerned with performance at all levels of proficiency, but that of course entails the study of experts and the elicitation of their knowledge. Seminal research in Work Analysis was conducted by Jens Rasmussen and his colleagues at the Risø National Laboratory in Denmark (Rasmussen, Pejtersen, & Goodstein, 1994; Rasmussen, 1985). They began with the goal of making technical inroads in the safety-engineering aspects of nuclear power and aviation but concluded that safety could not be assured solely through technical engineering (see Rasmussen & Rouse, 1981). Hence, they began to conduct observations in the workplace (e.g., analyses of prototypical problem scenarios) and conduct interviews with experts.

The theme of these parallel North American and European efforts has been the attempt to understand the interaction of cognition, collaboration, and complex artifacts (Potter, Roth, Woods, & Elm, 2000). The reference point is the field setting, wherein teams of expert practitioners confront significant problems aided by technological and other types of artifacts (Rasmussen, 1992; Vicente, 1999).

The broadening of KE, folding it into CTA, has resulted in an expanded palette of methods, including, for example, ethnographic methods (Clancey, 1993; Hutchins, 1995; Orr, 1996; Spradley, 1979). An example of the application of ethnography to expertise studies appears in Dekker, Nyce, and Hoffman (2003). In this chapter we cannot discuss all of the methods in detail. Instead, we highlight three that have been widely used, with success, in this new era of CTA: the Critical Decision Method, Work Domain Analysis, and Concept Mapping.

Table 12.2. A Sample of a Coded CDM Protocol (Adapted from Klein et al., 1989)

Appraisal: This is going to be a tough fire,
Cue: and we may start running into heat exhaustion problems.
Cue: It is 70 degrees now and it is going to get hotter.
Action: The first truck, I would go ahead and have them open the roof up,
Action: and the second truck I would go ahead and send them inside and
Action: have them start ventilating, start knocking the windows out and working
Elaboration: with the initial engine crew, false ceilings, and get the walls opened up.
Action: As soon as I can, order the second engine to hook up to supply and pump to engine 1.
Anticipation: I am assuming engine 2 will probably be there in a second.
Cue-deliberation: I don't know how long the supply lay line is,
Anticipation: but it appears we are probably going to need more water than one supply line is going to give us.
Metacognition: So I would keep in mind,
Contingency: unless we can check the fire fairly rapidly.
Contingency: So start thinking of other water sources.
Action-Deliberation: Consider laying another supply line to engine 1.

The Critical Decision Method

The Critical Decision Method (CDM) involves multi-pass retrospection in which the expert is guided in the recall and elaboration of a previously experienced case. The CDM leverages the fact that domain experts often retain detailed memories of previously encountered cases, especially ones that were unusual, challenging, or in one way or another involved "critical decisions." The CDM does not use generic questions of the kind "Tell me everything you know about x," or "Can you describe your typical procedure?" Instead, it guides the expert through multiple waves of retelling and prompts through the use of specific probe questions (e.g., "What were you seeing?") and "what-if" queries (e.g., "What might someone else have done in this circumstance?"). The CDM generates rich case studies that are often useful as training materials. It yields time-lined scenarios, which describe decisions (decision types, observations, actions, options, etc.) and aspects of decisions that can be easy or difficult. It can also yield a list of decision requirements and perceptual cues – the information the expert needs in order to make decisions.

An example of a coded CDM transcript appears in Table 12.2. In this example, events in the case have been placed into a timeline and coded into the categories indicated in the leftmost column. As in all methods for coding protocols, multiple coders are used and there is a reliability check.
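The reliability check mentioned above is typically an inter-coder agreement statistic. The chapter does not specify which statistic was used, so the sketch below uses Cohen's kappa (agreement corrected for chance) on invented codes drawn from the category set of Table 12.2.

```python
# Sketch of an inter-coder reliability check for a coded CDM protocol.
# Two coders independently assign each protocol segment a category;
# Cohen's kappa corrects their raw agreement for chance agreement.
# The code sequences below are invented for illustration.
from collections import Counter

coder_a = ["Cue", "Action", "Action", "Cue", "Anticipation", "Action"]
coder_b = ["Cue", "Action", "Cue", "Cue", "Anticipation", "Action"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    # Probability of agreeing by chance, from each coder's marginal rates.
    expected = sum(counts_a[c] * counts_b[c] for c in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(coder_a, coder_b), 3))
```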

Given its focus on decision making, the strength of the CDM is its use in the creation of models of reasoning (e.g., decisions, strategies). Detailed presentations of the method, along with summaries of studies illustrating its successful use, can be found in Crandall, Klein, and Hoffman (2006) and Hoffman, Crandall, and Shadbolt (1998).

Work Domain Analysis

Unlike the CDM, which focuses on the reasoning and strategies of the individual practitioner, Work Domain Analysis (WDA) builds a representation of an entire work domain. WDA has most frequently been used to describe the structure of human-machine systems for process control, but it is now finding increasing use in the analysis and design of complex systems (Burns & Hajdukiewicz, 2004; Chow & Vicente, 2002; Lintern, Miller, & Baker, 2002; Naikar & Sanderson, 2001).

An Abstraction-Decomposition matrix represents a work domain in terms of

Figure 12.1. Two tutorial examples of the Abstraction-Decomposition representation, a primarily technical system (Home Cooling, left panel) and a sociotechnical system (Library Client Tracking, right panel).

"levels of abstraction," where each level is a distinctive type of constraint. Figure 12.1 presents matrices for two systems, one designed predominantly around physical laws and the other designed predominantly around social values. The library example grapples with a pervasive social issue, the need for individual identification balanced against the desire for personal confidentiality. These tutorial examples demonstrate that the Abstraction-Decomposition format can be used with markedly different work domains. In both of these matrices, entries at each level constitute the means to achieve ends at the level above. The intent is to express means-ends relations between the entries of adjacent levels, with lower levels showing how higher-level functions are met, and higher levels showing why lower-level forms and functions are necessary.

Work domains are also represented in terms of a second dimension: "levels of decomposition," from organizational context, down to social collectives (teams), down to individual worker or individual component (e.g., software package residing on a particular workstation).

Typically, a work-domain analysis is initiated from a study of documents, although once an Abstraction-Decomposition matrix is reasonably well developed, interviews with domain experts will help the analyst extend and refine it. Vicente (1999) argues that the Abstraction-Decomposition matrix is an activity-independent representation and should contain only descriptions of the work domain (the tutorial examples of Figure 12.1 were developed with that stricture in mind). However, Vicente's advice is not followed universally within the community that practices WDA; some analysts include processes in their Abstraction-Decomposition matrices (e.g., Burns & Hajdukiewicz, 2004).
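The two-dimensional structure just described can be sketched as a sparse mapping from (abstraction level, decomposition level) cells to work-domain entries, with means-ends relations linking adjacent abstraction levels. The level names and home-cooling entries below are simplified illustrations, not the actual content of Figure 12.1.

```python
# Sketch of an Abstraction-Decomposition matrix: cells keyed by
# (abstraction level, decomposition level). Level names follow common
# WDA usage; the home-cooling entries are invented simplifications.

abstraction_levels = [
    "functional purpose",
    "abstract function",
    "generalized function",
    "physical function",
    "physical form",
]
decomposition_levels = ["whole system", "subsystem", "component"]

# A sparse matrix: most cells of a real analysis are left empty.
matrix = {
    ("functional purpose", "whole system"): ["comfortable home temperature"],
    ("abstract function", "whole system"): ["balance of heat gain and loss"],
    ("generalized function", "subsystem"): ["refrigeration cycle"],
    ("physical function", "component"): ["compressor", "fan"],
}

def means_level(level):
    """The abstraction level one step down: its entries are the means
    by which this level's ends are achieved (the how/why linkage)."""
    i = abstraction_levels.index(level)
    if i + 1 < len(abstraction_levels):
        return abstraction_levels[i + 1]
    return None  # physical form is the bottom; there is no further "how"

print(means_level("abstract function"))
```

Walking `means_level` downward answers "how?", and walking it upward answers "why?", which is exactly the means-ends reading of the matrix described above.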

It is possible to add activity to the representation yet remain consistent with Vicente (1999) by overlaying a trajectory derived

from a description of strategic reasoning undertaken by experts. Figure 12.2 presents a fragment of a structural description of a weather-forecasting work domain, and Figure 12.3 presents the same structural description with an activity overlay developed from a transcript of an expert forecaster's description of jobs, roles, and tools involved in forecasting (Hoffman, Coffey, & Ford, 2000). Activity statements (shown as callouts in Figure 12.3) were coded as falling into one or another of the cells, and the temporal sequence of the activity was represented by the flow of arrows as connectors to show how forecasters navigate opportunistically through an abstraction-decomposition space as they seek the information to diagnose and solve the problems.

When used in this manner, the matrix captures important propositions as elicited from domain experts concerning their goals and reasoning (see, e.g., Burns, Bryant, & Chalmers, 2001; Rasmussen, 1986; Schmidt & Luczak, 2000; Vicente, Christoffersen, & Pereklita, 1995) within the context of collaboration with larger collectives and organizational goals.

Figure 12.2. An Abstraction-Decomposition matrix of a fragment of a weather-forecasting work domain.

Concept Mapping

The third CTA method we will discuss is also one that has been widely used and has met with considerable success. Unlike Abstraction-Decomposition and its functional analysis of work domains, and unlike the CDM and its focus on reasoning and strategies, Concept Mapping has as its great strength the generation of models of knowledge.

Concept Maps are meaningful diagrams that include concepts (enclosed in boxes) and relationships among concepts or propositions (indicated by labeled connections between related concepts). Concept Mapping has foundations in the theory of Meaningful Learning (Ausubel, Novak, & Hanesian, 1978) and decades of research and application, primarily in education (Novak, 1998). Concept Maps can be used to show gaps in student knowledge. At the other end of the proficiency scale, Concept Maps made by domain experts tend to show high levels of agreement (see Gordon, 1992; Hoffman, Coffey, & Ford, 2000). (Reviews of the literature and discussion of methods


212 the cambridge handbook of expertise and expert performance

Figure 12.3. An Abstraction-Decomposition matrix of a fragment of a weather-forecasting work domain with an activity overlay (statements in callouts are quotes from a forecaster).

for making Concept Maps can be found in Canas et al., 2004; and Crandall, Klein, & Hoffman, 2006.) Figure 12.4 is a Concept Map that lays out expert knowledge about the role of cold fronts in the Gulf Coast (Hoffman, Coffey, & Ford, 2000).
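Because a Concept Map is a set of propositions, concept nodes joined by labeled links, its content has a natural machine representation. The sketch below is a hypothetical minimal model (it makes no claim about CmapTools' actual internal format), with invented cold-front propositions as sample data rather than content from Figure 12.4:

```python
class ConceptMap:
    """A Concept Map modeled as a set of propositions:
    (concept, linking phrase, concept) triples."""

    def __init__(self, topic):
        self.topic = topic
        self.propositions = []   # (concept, linking phrase, concept)
        self.resources = {}      # concept -> hyperlinked resources (URLs, media, other maps)

    def add(self, concept, link, other):
        """Record one proposition, e.g. ('Cold front', 'can trigger', 'thunderstorms')."""
        self.propositions.append((concept, link, other))

    def concepts(self):
        """Every concept node mentioned in any proposition."""
        return {c for (a, _, b) in self.propositions for c in (a, b)}

# Invented sample content:
cmap = ConceptMap("Cold fronts in Gulf Coast weather")
cmap.add("Cold front", "is a boundary between", "air masses")
cmap.add("Cold front", "can trigger", "thunderstorms")
# Attaching a digital resource to a node, analogous to the hyperlink
# icons CmapTools places beneath concept nodes:
cmap.resources["thunderstorms"] = ["radar_loop.gif"]
```

A "knowledge model" in this representation would simply be a collection of such maps whose `resources` entries can point at other maps.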

Although Concept Maps can be made by use of paper and pencil, a white board, or Post-Its, the Concept Maps presented here were created by use of CmapTools, a software suite created at the Institute for Human and Machine Cognition (free download at http://ihmc.us). In the KE procedure involving an individual expert, one researcher stands at a screen and serves as the facilitator while another researcher drives the laptop and creates the Concept Map that is projected on the screen. The facilitator helps the domain expert build up a representation of their domain knowledge, in effect combining KE with knowledge representation. (This is one reason the method is relatively efficient.) Concept Mapping can also be used by teams or groups, for purposes other than KE (e.g., brainstorming, consensus formation). Teams can be structured in a variety of ways and can make and share Concept Maps over the world-wide web (see Canas et al., 2004).

The ability to hyperlink digital “resources” such as text documents, images, video clips, and URLs is another significant advantage provided by computerized means of developing Concept Maps (CmapTools indicate hyperlinks by the small icons underneath concept nodes). Hyperlinks can connect to other Concept Maps; a set of Concept Maps hyperlinked together is regarded as a “knowledge model.” Figure 12.5 shows a screen shot from the top-level Concept Map in the System To Organize Representations in Meteorology (STORM), in which a large number of Concept Maps are linked together. In Figure 12.5, some of the resources have been opened for illustrative purposes (real-time satellite imagery, computer weather forecasts, and digital video in which the domain expert provides brief explanatory statements for some of the concepts throughout the model).


All of the STORM Concept Maps and resources can be viewed at http://www.ihmc.us/research/projects/STORMLK/

Knowledge models structured as Concept Maps can serve as living repositories of expert knowledge to support knowledge sharing as well as knowledge preservation. They can serve also as interfaces for intelligent systems where the model of the expert’s knowledge becomes the interface for a performance support tool or training aid (Ford et al., 1996).

Methodological Concepts and Issues

Research and various applied projects conducted since the seminal works on KE methodology have left some ideas standing and have led to some new and potentially valuable ideas. One recent review of CTA methods (Bonaceto & Burns, forthcoming)

Figure 12.4. A Concept Map about cold fronts in Gulf Coast weather.

lists dozens of methods. Although not all of them are methods that would be useful as knowledge-elicitation or knowledge-representation procedures, it is clear that the roster of tools and methods available to cognitive engineers has expanded considerably over the past two decades. We look now to core ideas and tidbits of guidance that have stood the test of time.

Where the Rubber Meets the Road

(1). In eliciting expert knowledge one can: (a) Ask people questions, and (b) Observe performance. Questions can be asked in a great many forms and formats for interviewing, including unstructured interviews, the CDM procedure, and Concept Mapping, as well as many other techniques (e.g., Endsley & Garland, 2000). Performance can be observed via ethnographic studies of patterns of communication in the workplace,


Figure 12.5. A screen shot of a Concept Map with some opened resources.

evaluations in terms of performance measures (e.g., the accuracy of weather forecasts), or evaluations of recall, recognition, or reaction-time performance in contrived tasks or think-aloud problem solving tasks.

(2). In eliciting expert knowledge one can attempt to create models of the work domain, models of practitioner knowledge of the domain, or models of practitioner reasoning. Models of these three kinds take different forms and have different sorts of uses and applications. This is illustrated roughly by the three methods we have described here. The CDM can be used to create products that describe practitioner reasoning (e.g., decision types, strategies, decision requirements, informational cues). The Abstraction-Decomposition matrix represents the functional structure of the work domain, which can provide context for an overlay of activity developed from interview protocols or expert narratives. Concept Mapping represents practitioner knowledge of domain concepts such as relations, laws, and case types.

(3). Knowledge elicitation methods differ in their relative efficiency. For instance, the think-aloud problem solving task combined with protocol analysis has uses in the psychology laboratory but is relatively inefficient in the context of knowledge elicitation. Concept Mapping is arguably the most efficient method for the elicitation of domain knowledge (Hoffman, 2002). We see a need for more studies on this topic.

(4). Knowledge-elicitation methods can be combined in various ways. Indeed, a recommendation from the 1980s still stands – that any project involving expert knowledge elicitation should use more than one knowledge-elicitation method. One combination that has recently become salient is the combination of the CDM with the two other procedures we have discussed.


Concept-Mapping interviews almost always trigger in the experts the recall of previously encountered tough cases. This can be used to substitute for the “Incident Selection” step in the CDM. Furthermore, case studies generated by the CDM can be used as resources to populate the Concept-Map knowledge models (see Hoffman, Coffey, & Ford, 2000). As another example, Naikar and Saunders (2003) conducted a Work Domain Analysis to isolate safety-significant events from aviation incident reports and then employed the CDM in interviews with authors of those reports to identify critical cognitive issues that precipitated or exacerbated the event.

(5). The gold is not in the documents. Document analysis is useful in bootstrapping researchers into the domain of study and is a recommended method for initiating Work Domain Analysis (e.g., Lintern et al., 2002), but experts possess knowledge and strategies that do not appear in documents and task descriptions. Cognitive engineers invariably rely on interactions with experts to garner implicit, obscure, and otherwise undocumented expert knowledge. Even in Work Domain Analysis, which is heavily oriented towards Document Analysis, interactions with experts are used to confirm and refine the Abstraction-Decomposition matrices.

In the weather-forecasting project (Hoffman, Coffey, & Ford, 2000), an expert told how she predicted the lifting of fog. She would look out toward the downtown and see how many floors above ground level she could count before the floors got lost in the fog deck. Her reasoning relied on a heuristic of the form, “If I cannot see the 10th floor by 10 AM, pilots will not be able to take off until after lunchtime.” Such a heuristic has great value but is hardly the sort of thing that could be put into a formal standard operating procedure. Many observations have been made of how engineers in process control bend rules and deviate from mandated procedures so that they can do their jobs more effectively (see Koopman & Hoffman, 2003). We would hasten to generalize by saying that all experts who work

in complex sociotechnical contexts possess knowledge and reasoning strategies that are not captured in existing procedures or documents, many of which represent (naughty) departures from what those experts are supposed to do or to believe (Johnston, 2003; McDonald, Corrigan, & Ward, 2002).
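One way to see what is lost when such heuristics are formalized is to encode one. Below is a naive, hypothetical encoding of the forecaster's fog rule (the thresholds come from her quoted statement; the function name and return strings are invented). The rule runs, but it strips away everything that made the rule work: her vantage point, the local skyline, and her calibration to local fog climatology.

```python
def fog_takeoff_rule(highest_visible_floor: int, hour: int) -> str:
    """Naive encoding of the quoted heuristic: 'If I cannot see the
    10th floor by 10 AM, pilots will not be able to take off until
    after lunchtime.'"""
    if hour >= 10 and highest_visible_floor < 10:
        return "no takeoffs until after lunchtime"
    return "expect the fog to lift"

# At 10 AM, only six floors visible through the fog deck:
print(fog_takeoff_rule(6, 10))
```

Such a rule could populate a first-generation expert system's knowledge base, yet it says nothing about when or where the heuristic is valid, which is precisely the contextual knowledge that elicitation must capture.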

Discovery of these undocumented departures from authorized procedures represents a window on the “true work” (Vicente, 1999), which is cognitive work independent of particular technologies, that is, work governed only by domain constraints and by human cognitive constraints. Especially after an accident, it is commonly argued that experts who depart from authorized procedures are, in some way, negligent. Nevertheless, the adaptive process that generates the departures is not only inevitable but is also a primary source of efficient and robust work procedures (Lintern, 2003). Because these windows are suggestive of leverage points and ideas for new aiding technologies, cognitive engineers need to pay them serious attention.

(6). Differential access is not a salient problem. The first wave of comparative KE methodology research generated the hypothesis that different “kinds” of knowledge might be more amenable to elicitation by particular methods (Hoffman, 1987), and some studies suggested the possibility of differential access (Cooke & McDonald, 1986, 1987; Evans, Jentsch, Hitt, Bowers, & Salas, 2001). Tasks involving the generation of lists of domain concepts can in fact result in lists of domain concepts, and tasks involving the specification of procedures can in fact result in statements about rules or procedures. However, some studies have found little or no evidence for differential access (e.g., Adelman, 1989; Shadbolt & Burton, 1990), and we conclude that a strong version of the differential-access hypothesis has not held up well under scrutiny. All of the available methods can say things about so-called declarative knowledge, so-called procedural knowledge, and so forth.

All KE methods can be used to identify leverage points – aspects of the organization or work domain where even a


modest infusion of supporting technologies might have positive results (e.g., redesign of interfaces, redesign of the workspace layout, creation of new functionalities for existing software, and ideas about entirely new software systems). Again we can use the project on expert weather forecasting as an example (Hoffman, Coffey, & Ford, 2000). That project compared a number of alternative knowledge-elicitation methods including protocol analysis, the CDM, the Knowledge Audit (Militello & Hutton, 1998; see Ross, Shafer, and Klein, this volume), an analysis of “Standard Operating Procedures” documents, the Recent Case Walkthrough method (Militello & Hutton, 1998), a Workspace and Workpatterns analysis (Vicente, 1999), and Concept Mapping. All methods yielded data that spoke to practitioner knowledge and reasoning and all also identified leverage points.

(7). “Tacit” knowledge is not a salient problem. Without getting into the philosophical weeds of what one means by “kinds” of knowledge, another concern has to do with the possibility that routine knowledge about procedures or task activities might become “tacit,” that is, so automatic as to be inexpressible via introspection or verbal report. This hangover issue from the heyday of Behaviorism remains to this day a non-problem in the practical context of eliciting knowledge from experts. For one thing, it has never been demonstrated that there exists such a thing as “knowledge that cannot be verbalized in principle,” and the burden of proof falls on the shoulders of those who make the existence claim. Again sidestepping the philosophical issues (i.e., if it cannot be articulated verbally, is it really knowledge?), we maintain that the empirical facts mitigate the issue. For instance, in Concept-Mapping interviews with domain experts, experience shows that almost every time the expert will reach a point in making a Concept Map where s/he will say something like, “Well, I’ve never really thought about that, or thought about it in this way, but now that you mention it . . . ,” and what follows will be a clear specification of some procedure, strategy, or aspect of subdomain knowledge that had not been articulated up to that point.

(8). Good knowledge elicitation procedures are “effective scaffolds.” Although there may be phenomena to which one could legitimately, or at least arguably, append the designation “tacit knowledge,” there is no indication that such knowledge lies beyond the reach of science in some unscientific netherworld of intuitions or unobservables. Over and over again, the lesson is not that there is knowledge that experts literally cannot articulate, nor is it the hangover issue of whether verbalization “interferes” with reasoning. Rather, the issue is whether the KE procedure provides sufficient scaffolding to support the expert in articulating what they know. Support involves the specifics of the procedure (e.g., probe questions), but it also involves the fact that knowledge elicitation is a collaborative process. There is no substitute for the skill of the elicitor (e.g., in framing alternative suggestions and wordings). Likewise, there is no substitute for the skill of the participating practitioner. Some experts will have good insight, but others will not. Though it might be possible for someone to prove the existence of “knowledge” that cannot be uncovered, knowledge engineers face the immediate, practical challenges of designing new and better sociotechnical systems. They accomplish something when they uncover useful knowledge that might have otherwise been missed.

New Ideas

Recent research and application efforts have also yielded some new ideas about the palette of knowledge-elicitation methods.

(1). The (hypothetical) problem of differential access has given way to a practical consideration of “differential utility.” Any given method might be more useful for certain purposes, might be more applicable to certain domains, or might be more useful with experts having certain cognitive styles. In other words, each knowledge-elicitation method has its strengths and weaknesses. Some of these are more purely


methodological or procedural (e.g., transcription and protocol analysis take a long time), but some relate to the content of what is elicited. The CDM has as its strength the elicitation of knowledge about perceptual cues and patterns, decision types, and reasoning strategies. The strength of Concept Mapping lies in the creation of knowledge models that can be used in the creation of knowledge bases or interfaces. Work Domain Analysis, which maps the functional structure of the work domain, can provide a backdrop against which the knowledge and skills of the individual expert can be fitted into the larger functional context of the organization and its purposes. Products from any of these procedures can support the design of new interfaces or even the redesign of workplaces and methods of collaboration.

(2). Methodology benefits from opportunism. It can be valuable during a knowledge-elicitation project to be open to emerging possibilities and new opportunities, even opportunities to create new methods or try out and evaluate new combinations of methods. In the weather-forecasting project (Hoffman, Coffey, & Ford, 2000), Concept-Mapping interviews demonstrated that practitioners were quite comfortable with psychologists’ notion of a “mental model” because the field has for years distinguished forecaster reasoning (“conceptual models”) from the outputs of the mathematical computer models of weather. Indeed, the notion of a mental model has been invoked as an explanatory concept in weather forecasting for decades (see Hoffman, Trafton, & Roebber, forthcoming). Practitioners were quite open to discussing their reasoning, and so a special interview was crafted to explore this topic in detail (Hoffman, Coffey, & Carnot, 2000).

(3). Knowledge elicitation is not a one-off procedure. Historically, KE was considered in the context of creating intelligent systems for particular applications. The horizons were expanded by such applications as the preservation of organizational or team knowledge (Klein, 1992). This notion was recently expanded even further to

the idea of “corporate knowledge management,” which includes capture, archiving, application to training, proprietary analysis, and other activities (e.g., Becerra-Fernandez, Gonzalez, & Sabherwal, 2004; Davenport & Prusak, 1998). A number of government and private sector organizations have found a need to capture expert knowledge prior to the retirement of the experts and also the need, sometimes urgent, to reclaim expertise from individuals who have recently retired (Hoffman & Hanes, 2003). Instantiation of knowledge capture as part of an organizational culture entails many potential obstacles, such as management and personnel buy-in. It also raises many practical problems, not the least of which is how to incorporate a process of ongoing knowledge capture into the ordinary activities of the experts without burdening them with an additional task.

Recognition of the value of the analysis of tough cases led to a recommendation that experts routinely make notes about important aspects of tough cases that they encounter (Hoffman, 1987). This idea has been taken to new levels in recent years. For instance, because of downsizing in the 1980s, the electric power utilities face a situation in which senior experts are retiring and there is not yet a cohort of junior experts who are primed to take up the mantle (Hoffman & Hanes, 2003). At one utility, a turbine had been taken off line for total refitting, an event that was seen as an opportunity to videotape certain repair jobs that require expertise but are generally only required occasionally (on the order of once every 10 or more years).

Significant expertise involves considerable domain and procedural knowledge and an extensive repertoire of skills and heuristics. Elicitation is rarely something that can be done easily or quickly. In eliciting weather-forecasting knowledge for just the Florida Gulf Coast region of the United States, about 150 Concept Maps were made about local phenomena involving fog, thunderstorms, and hurricanes. And yet, dozens more Concept Maps could have been made on additional topics, including the use of


the new weather radar systems and the use of the many computer models for weather forecasting (Hoffman, Coffey, & Ford, 2000). A current project on “knowledge recovery” that involves reclamation of expert knowledge about terrain analysis from existing documents such as the Terrain Analysis Database (Hoffman, 2003b) has generated over 150 Concept Maps containing more than 3,000 propositions.

Although knowledge elicitation on such a scale is daunting, we now have the technologies and methodologies to facilitate the elicitation, preservation, and sharing of expert knowledge on a scale never before possible. This is a profound application of cognitive science and is one that is of immense value to society.

Practice, Practice, Practice

No matter how much detail is provided about the conduct of a knowledge-elicitation procedure, there is no substitute for practice. The elicitor needs to adapt on the fly to individual differences in style, personality, agenda, and goals. In “breaking the ice” and establishing rapport, the elicitor needs to show good intentions and needs to be sensitive to possible concerns on the part of the expert that the capture of his/her knowledge might mean the loss of his/her job (perhaps to a machine). To be good and effective at knowledge elicitation, one must attempt to become an “expert apprentice” – experienced at, skilled at, and comfortable with going into new domains, bootstrapping efficiently, and then designing and conducting a series of knowledge-elicitation procedures appropriate to project goals. The topic of how to train people to be expert apprentices is one that we hope will receive attention from researchers in the coming years (see Militello & Quill, forthcoming).

Acknowledgments

The senior author’s contribution to this chapter was supported through his participation in the National Alliance for Expertise Studies, which is supported by the “Sciences of Learning” Program of the National Science Foundation, and his participation in The Advanced Decision Architectures Collaborative Technology Alliance, which is sponsored by the US Army Research Laboratory under cooperative agreement DAAD19-01-2-0009.

References

Adelman, L. (1989). Management issues in knowledge elicitation. IEEE Transactions on Systems, Man, and Cybernetics, 19, 483–488.

Ausubel, D. P., Novak, J. D., & Hanesian, H. (1978). Educational psychology: A cognitive view, 2nd ed. New York: Holt, Rinehart and Winston.

Banaji, M. R., & Crowder, R. G. (1989). The bankruptcy of everyday memory. American Psychologist, 44, 1185–1193.

Barber, P. (1988). Applied cognitive psychology. London: Methuen.

Becerra-Fernandez, I., Gonzalez, A., & Sabherwal, R. (2004). Knowledge management: Challenges, solutions, and technologies. Upper Saddle River, NJ: Prentice-Hall.

Bonaceto, C., & Burns, K. (forthcoming). Mapping the mountains: A survey of cognitive engineering methods and uses. In R. R. Hoffman (Ed.), Expertise out of context: Proceedings of the Sixth International Conference on Naturalistic Decision Making. Mahwah, NJ: Erlbaum.

Burns, C., & Hajdukiewicz, J. R. (2004). Ecological interface design. Boca Raton, FL: CRC Press.

Burns, C. M., Bryant, D., & Chalmers, B. (2001). Scenario mapping with work domain analysis. In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (pp. 424–428). Santa Monica, CA: Human Factors and Ergonomics Society.

Canas, A. J., Hill, G., Carff, R., Suri, N., Lott, J., Eskridge, T., Gomez, G., Arroyo, M., & Carvajal, R. (2004). CmapTools: A knowledge modeling and sharing environment. In A. J. Canas, J. D. Novak, & F. M. Gonzalez (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the 1st International Conference on Concept Mapping (pp. 125–133). Pamplona, Spain: Universidad Publica de Navarra.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.


Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representations of physics problems by experts and novices. Cognitive Science, 5, 121–152.

Chi, M. T. H., Glaser, R., & Farr, M. L. (Eds.) (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.

Chow, R., & Vicente, K. J. (2002). A field study of emergency ambulance dispatching: Implications for decision support. In Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. 313–317). Santa Monica, CA: Human Factors and Ergonomics Society.

Clancey, W. J. (1993). The knowledge level reinterpreted: Modeling socio-technical systems. In K. M. Ford & J. M. Bradshaw (Eds.), Knowledge acquisition as modeling (pp. 33–49, Pt. 1). New York: Wiley.

Cooke, N. M., & McDonald, J. E. (1986). A formal methodology for acquiring and representing expert knowledge. Proceedings of the IEEE, 74, 1422–1430.

Cooke, N. M., & McDonald, J. E. (1987). The application of psychological scaling techniques to knowledge elicitation for knowledge-based systems. International Journal of Man-Machine Studies, 26, 533–550.

Crandall, B., Klein, G., & Hoffman, R. R. (2006). Working minds: A practitioner’s guide to cognitive task analysis. Cambridge, MA: MIT Press.

Cullen, J., & Bryman, A. (1988). The knowledge acquisition bottleneck: Time for reassessment? Expert Systems, 5, 216–225.

David, J.-M., Krivine, J.-P., & Simmons, R. (Eds.) (1993). Second-generation expert systems. Berlin: Springer Verlag.

Davenport, T. H., & Prusak, L. (1998). Working knowledge: How organizations manage what they know. Cambridge, MA: Harvard Business School Press.

De Keyser, V., Decortis, F., & Van Daele, A. (1998). The approach of Francophone ergonomy: Studying new technologies. In V. De Keyser, T. Qvale, & B. Wilpert (Eds.), The meaning of work and technological options (pp. 147–163). Chichester, England: Wiley.

Dekker, S. W. A., Nyce, J. M., & Hoffman, R. R. (2003, March–April). From contextual inquiry to designable futures: What do we need to get there? IEEE Intelligent Systems, pp. 74–77.

Diderot, D. (with d’Alembert, J.) (Eds.) (1751–1772). Encyclopedie ou Dictionnaire raisonne des sciences, des arts et des metiers, par une Societe de Gens de lettres. Paris: Le Breton. Compact edition published in 1969 by The Readex Microprint Corporation, New York. Translations of selected articles can be found at http://www.hti.umich.edu/d/did/.

Duda, J., Gaschnig, J., & Hart, P. (1979). Model design in the PROSPECTOR consultant system for mineral exploration. In D. Michie (Ed.), Expert systems in the micro-electronic age (pp. 153–167). Edinburgh: Edinburgh University Press.

Endsley, M. R., & Garland, D. L. (2000). Situation awareness analysis and measurement. Hillsdale, NJ: Erlbaum.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data, 2nd ed. Cambridge, MA: MIT Press.

Evans, A. W., Jentsch, F., Hitt, J. M., Bowers, C., & Salas, E. (2001). Mental model assessments: Is there a convergence among different methods? In Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (pp. 293–296). Santa Monica, CA: Human Factors and Ergonomics Society.

Feigenbaum, E. A., Buchanan, B. G., & Lederberg, J. (1971). On generality and problem solving: A case study using the DENDRAL program. In B. Meltzer & D. Michie (Eds.), Machine intelligence 6 (pp. 165–190). Edinburgh: Edinburgh University Press.

Ford, K. M., & Adams-Webber, J. R. (1992). Knowledge acquisition and constructive epistemology. In R. Hoffman (Ed.), The cognition of experts: Psychological research and empirical AI (pp. 121–136). New York: Springer-Verlag.

Ford, K. M., Coffey, J. W., Canas, A., Andrews, E. J., & Turne, C. W. (1996). Diagnosis and explanation by a nuclear cardiology expert system. International Journal of Expert Systems, 9, 499–506.

Forsyth, D. E., & Buchanan, B. G. (1989). Knowledge acquisition for expert systems: Some pitfalls and suggestions. IEEE Transactions on Systems, Man, and Cybernetics, 19, 345–442.

Gaines, B. R., & Boose, J. H. (Eds.) (1988). Knowledge acquisition tools for expert systems. London: Academic Press.

Gagne, R. M., & Smith, E. C. (1962). A study of the effects of verbalization on problem solving. Journal of Experimental Psychology, 63, 12–18.

Gentner, D., & Stevens, A. L. (Eds.) (1983). Mental models. Mahwah, NJ: Erlbaum.


Glaser, R. (1987). Thoughts on expertise. In C. Schooler & W. Schaie (Eds.), Cognitive functioning and social structure over the life course (pp. 81–94). Norwood, NJ: Ablex.

Glaser, R., Lesgold, A., Lajoie, S., Eastman, R., Greenberg, L., Logan, D., Magone, M., Weiner, A., Wolf, R., & Yengo, L. (1985). “Cognitive task analysis to enhance technical skills training and assessment.” Report, Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA.

Gordon, S. E. (1992). Implications of cognitive theory for knowledge acquisition. In R. R. Hoffman (Ed.), The psychology of expertise: Cognitive research and empirical AI (pp. 99–120). Mahwah, NJ: Erlbaum.

Gordon, S. E., & Gill, R. T. (1997). Cognitive task analysis. In C. Zsambok and G. Klein (Eds.), Naturalistic decision making (pp. 131–140). Mahwah, NJ: Erlbaum.

Hart, A. (1986). Knowledge acquisition for expert systems. London: Kogan Page.

Hayes-Roth, F., Waterman, D. A., & Lenat, D. B. (1983). Building expert systems. Reading, MA: Addison-Wesley.

Hoffman, R. R. (1987, Summer). The problem of extracting the knowledge of experts from the perspective of experimental psychology. AI Magazine, 8, 53–67.

Hoffman, R. R. (1991). Human factors psychology in the support of forecasting: The design of advanced meteorological workstations. Weather and Forecasting, 6, 98–110.

Hoffman, R. R. (Ed.) (1992). The psychology of expertise: Cognitive research and empirical AI. Mahwah, NJ: Erlbaum.

Hoffman, R. R. (1998). How can expertise be defined?: Implications of research from cognitive psychology. In R. Williams, W. Faulkner, & J. Fleck (Eds.), Exploring expertise (pp. 81–100). New York: Macmillan.

Hoffman, R. R. (2002, September). An empirical comparison of methods for eliciting and modeling expert knowledge. In Proceedings of the 46th Meeting of the Human Factors and Ergonomics Society (pp. 482–486). Santa Monica, CA: Human Factors and Ergonomics Society.

Hoffman, R. R. (2003a). “Use of Concept Mapping and the Critical Decision Method to support Human-Centered Computing for the intelligence community.” Report, Institute for Human and Machine Cognition, Pensacola, FL.

Hoffman, R. R. (2003b). “Knowledge recovery.” Report, Institute for Human and Machine Cognition, Pensacola, FL.

Hoffman, R. R., Coffey, J. W., & Carnot, M. J. (2000, November). Is there a “fast track” into the black box?: The Cognitive Modeling Procedure. Poster presented at the 41st Annual Meeting of the Psychonomics Society, New Orleans, LA.

Hoffman, R. R., Crandall, B., & Shadbolt, N. (1998). A case study in cognitive task analysis methodology: The Critical Decision Method for the elicitation of expert knowledge. Human Factors, 40, 254–276.

Hoffman, R. R., & Deffenbacher, K. (1992). A brief history of applied cognitive psychology. Applied Cognitive Psychology, 6, 1–48.

Hoffman, R. R., & Deffenbacher, K. A. (1993). An analysis of the relations of basic and applied science. Ecological Psychology, 5, 315–352.

Hoffman, R. R., Coffey, J. W., & Ford, K. M. (2000). “A case study in the research paradigm of Human-Centered Computing: Local expertise in weather forecasting.” Report to the National Technology Alliance on the Contract, “Human-Centered System Prototype.” Institute for Human and Machine Cognition, Pensacola, FL.

Hoffman, R. R., Ford, K. M., & Coffey, J. W. (2000). “The handbook of human-centered computing.” Report, Institute for Human and Machine Cognition, Pensacola, FL.

Hoffman, R. R., & Hanes, L. F. (2003, July–August). The boiled frog problem. IEEE Intelligent Systems, pp. 68–71.

Hoffman, R. R., Shadbolt, N. R., Burton, A. M., & Klein, G. (1995). Eliciting knowledge from experts: A methodological analysis. Organizational Behavior and Human Decision Processes, 62, 129–158.

Hoffman, R. R., Trafton, G., & Roebber, P. (2006). Minding the weather: How expert forecasters think. Cambridge, MA: MIT Press.

Hoffman, R. R., & Woods, D. D. (2000). Studying cognitive systems in context. Human Factors, 42, 1–7.

Hoc, J.-M., Cacciabue, P. C., & Hollnagel, E. (1996). Expertise and technology: “I have a feeling we’re not in Kansas anymore.” In J.-M. Hoc, P. C. Cacciabue, & E. Hollnagel (Eds.), Expertise and technology: Cognition and human-computer cooperation (pp. 279–286). Mahwah, NJ: Erlbaum.

Hollnagel, E., & Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583–600.

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.

Johnston, N. (2003). The paradox of rules: Procedural drift in commercial aviation. In R. Jensen (Ed.), Proceedings of the Twelfth International Symposium on Aviation Psychology (pp. 630–635). Dayton, OH: Wright State University.

Klahr, D. (Ed.) (1976). Cognition and instruction. Hillsdale, NJ: Erlbaum.

Klahr, D., & Kotovsky, K. (Eds.) (1989). Complex information processing: The impact of Herbert A. Simon. Mahwah, NJ: Erlbaum.

Klein, G. (1992). Using knowledge elicitation to preserve corporate memory. In R. R. Hoffman (Ed.), The psychology of expertise: Cognitive research and empirical AI (pp. 170–190). Mahwah, NJ: Erlbaum.

Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19, 462–472.

Klein, G., Orasanu, J., Calderwood, R., & Zsambok, C. E. (Eds.) (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex Publishing Corporation.

Klein, G., & Weitzenfeld, J. (1982). The use of analogues in comparability analysis. Applied Ergonomics, 13, 99–104.

Knorr-Cetina, K. D. (1981). The manufacture of knowledge. Oxford: Pergamon.

Knorr-Cetina, K. D., & Mulkay, M. (1983). Science observed. Beverly Hills, CA: Sage.

Koopman, P., & Hoffman, R. R. (2003, November–December). Work-arounds, make-work, and kludges. IEEE Intelligent Systems, pp. 70–75.

LaFrance, M. (1992). Excavation, capture, collection, and creation: Computer scientists’ metaphors for eliciting human expertise. Metaphor and Symbolic Activity, 7, 135–156.

Lave, J. (1988). Cognition in practice: Mind, mathematics, and culture in everyday life. Cambridge: Cambridge University Press.

Lesgold, A. M. (1984). Acquiring expertise. In J. R. Anderson & S. M. Kosslyn (Eds.), Tutorials in learning and memory: Essays in honor of Gordon Bower (pp. 31–60). San Francisco, CA: W. H. Freeman.

Lesgold, A., Feltovich, P. J., Glaser, R., & Wang, M. (1981). “The acquisition of perceptual diagnostic skill in radiology.” Technical Report No. PDS-1, Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA.

Lintern, G. (2003). Tyranny in rules, autonomy in maps: Closing the safety management loop. In R. Jensen (Ed.), Proceedings of the Twelfth International Symposium on Aviation Psychology (pp. 719–724). Dayton, OH: Wright State University.

Lintern, G., Diedrich, F. J., & Serfaty, D. (2002). Engineering the community of practice for maintenance of organizational knowledge. In Proceedings of the IEEE 7th Conference on Human Factors and Power Plants (pp. 6.7–6.13). New York: IEEE.

Lintern, G., Miller, D., & Baker, K. (2002). Work centered design of a USAF mission planning system. In Proceedings of the 46th Human Factors and Ergonomics Society Annual Meeting (pp. 531–535). Santa Monica, CA: Human Factors and Ergonomics Society.

McDonald, N., Corrigan, S., & Ward, M. (2002). Cultural and organizational factors in system safety: Good people in bad systems. In Proceedings of the 2002 International Conference on Human-Computer Interaction in Aeronautics (HCI-Aero 2002) (pp. 205–209). Menlo Park, CA: American Association for Artificial Intelligence Press.

McGraw, K. L., & Harbison-Briggs, K. (1989). Knowledge acquisition: Principles and guidelines. Englewood Cliffs, NJ: Prentice Hall.

Means, B., & Gott, S. P. (1988). Cognitive task analysis as a basis for tutor development: Articulating abstract knowledge representations. In J. Psotka, L. D. Massey, & S. A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned (pp. 35–57). Hillsdale, NJ: Erlbaum.

Mieg, H. (2000). The social psychology of expertise. Mahwah, NJ: Erlbaum.

Militello, L. G., & Hutton, R. J. B. (1998). Applied Cognitive Task Analysis (ACTA): A practitioner’s toolkit for understanding cognitive task demands. Ergonomics, 41, 1618–1641.

Militello, L., & Hoffman, R. R. (forthcoming). Perspectives on cognitive task analysis. Mahwah, NJ: Erlbaum.

Militello, L., & Quill, L. (2006). Expert apprentice strategies. In R. R. Hoffman (Ed.), Expertise out of context. Mahwah, NJ: Erlbaum.

Naikar, N., & Sanderson, P. M. (2001). Evaluating system design proposals with work domain analysis. Human Factors, 43, 529–542.

Naikar, N., & Saunders, A. (2003). Crossing the boundaries of safe operation: A technical training approach to error management. Cognition, Technology & Work, 5, 171–180.

Neale, I. M. (1988). First generation expert systems: A review of knowledge acquisition methodologies. Knowledge Engineering Review, 3, 105–146.

Novak, J. D. (1998). Learning, creating, and using knowledge: Concept maps® as facilitative tools in schools and corporations. Mahwah, NJ: Lawrence Erlbaum Associates.

Orr, J. (1996). Talking about machines: An ethnography of a modern job. Ithaca, NY: Cornell University Press.

Potter, S. S., Roth, E. M., Woods, D. D., & Elm, W. C. (2000). Bootstrapping multiple converging cognitive task analysis techniques for system design. In J. M. Schraagen & S. F. Chipman (Eds.), Cognitive task analysis (pp. 317–340). Mahwah, NJ: Erlbaum.

Prerau, D. (1989). Developing and managing expert systems: Proven techniques for business and industry. Reading, MA: Addison-Wesley.

Raeth, P. G. (Ed.) (1990). Expert systems. New York: IEEE Press.

Rasmussen, J. (1985). The role of hierarchical knowledge representation in decision-making and system-management. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15, 234–243.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. Amsterdam: North-Holland.

Rasmussen, J. (1992). Use of field studies for design of workstations for integrated manufacturing systems. In M. Helander & N. Nagamachi (Eds.), Design for manufacturability: A systems approach to concurrent engineering and ergonomics (pp. 317–338). London: Taylor and Francis.

Rasmussen, J., Pejtersen, A. M., & Goodstein, L. P. (1994). Cognitive systems engineering. New York: John Wiley.

Rasmussen, J., & Rouse, W. B. (Eds.) (1981). Human detection and diagnosis of system failures. New York: Plenum Press.

Rook, F. W., & Croghan, J. W. (1989). The knowledge acquisition activity matrix: A systems engineering conceptual framework. IEEE Transactions on Systems, Man, and Cybernetics, 19, 586–597.

Schmidt, L., & Luczak, H. (2000). Knowledge representation for engineering design based on a cognitive model. In Proceedings of the IEA 2000/HFES 2000 Congress (Vol. 1, pp. 623–626). Santa Monica, CA: Human Factors and Ergonomics Society.

Scribner, S. (1984). Studying working intelligence. In B. Rogoff & J. Lave (Eds.), Everyday cognition: Its development in social context (pp. 9–40). Cambridge: Harvard University Press.

Shadbolt, N. R., & Burton, A. M. (1990). Knowledge elicitation techniques: Some experimental results. In K. L. McGraw & C. R. Westphal (Eds.), Readings in knowledge acquisition (pp. 21–33). New York: Ellis Horwood.

Shanteau, J. (1992). Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes, 53, 252–266.

Shortliffe, E. H. (1976). Computer-based medical consultations: MYCIN. New York: Elsevier.

Simon, H. A., & Hayes, J. R. (1976). Understanding complex task instructions. In D. Klahr (Ed.), Cognition and instruction (pp. 51–80). Hillsdale, NJ: Erlbaum.

Spradley, J. P. (1979). The ethnographic interview. New York: Holt, Rinehart and Winston.

Stein, E. W. (1997). A look at expertise from a social perspective. In P. J. Feltovich, K. M. Ford, & R. R. Hoffman (Eds.), Expertise in context (pp. 181–194). Cambridge: MIT Press.

Sternberg, R. J., & Frensch, P. A. (Eds.) (1991). Complex problem solving: Principles and mechanisms. Mahwah, NJ: Erlbaum.

Suchman, L. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.

Vicente, K. J. (1999). Cognitive work analysis: Towards safe, productive, and healthy computer-based work. Mahwah, NJ: Erlbaum.

Vicente, K. J., Christoffersen, K., & Pereklita, A. (1995). Supporting operator problem solving through ecological interface design. IEEE Transactions on Systems, Man, and Cybernetics, 25, 529–545.