Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2008, Paper No. 8199

Enticing mistakes: A strategy within simulation training of soft skills

Robert Hubal, Geoffrey Frank
RTI International, Research Triangle Park, NC
{rhubal,gaf}@rti.org

ABSTRACT

Simulation-based systems are increasingly being used for training “soft” skills such as providing cultural understanding, conducting interrogations and interviewing, and assessing adaptive thinking and leadership. Simulation-based training systems can be conceived as having three major components. First, an environment model drives actions and responses of simulated entities (objects, machines, terrain, avatars) in the virtual environment. Second, a student model maintains the system’s understanding of the state of the student’s knowledge and skills. Third, an instructional model selects and sequences the learning experiences of the student and provides feedback to the student based on inputs from the environment model and the student model. The latter two components partly define intelligent tutoring to guide simulation flow to promote learning.

This paper describes lessons learned in evolving simulation-based training systems for procedural skills into trainers for soft skills, particularly changes required in the student and instructional models. These simulations are being developed for intelligence analysis training. The remediation methods of the instructional model developed for procedural training were revised for soft skills since the soft skill performance criteria are less well defined in terms of student actions and simulation events. This revision required a more robust student model that can infer student bias and other imperfect conceptual models. The sequencing of instructional events was modified to take advantage of parameterized initial values and introduce a “sting” meant to entice students to make decisions consistent with imperfect conceptual models. The selection of enticements requires more interactions between the student model and the instructional model than was present in the procedural training simulations. Scenario-based training supports practice and assessment on multiple learning objectives at the same time. The configuration and sequencing of instructional events provides variable reinforcement of multiple learning objectives.

ABOUT THE AUTHORS

Dr. Robert Hubal is a cognitive scientist in RTI’s Center for Distributed Learning. He holds a Ph.D. in cognitive psychology from Duke University. His research focuses on the intelligent use of technology for education, training, and assessment. His recent projects include studying how to integrate user sensing into simulation training systems; the tools, techniques, and data organization needed to perform visual analytics; and usability and acceptance of synthetic character applications used by various populations.

Dr. Geoffrey Frank is a principal scientist in RTI’s Center for Distributed Learning. He has a Ph.D. in computer science from the University of North Carolina at Chapel Hill. He is a member of the IEEE Learning Technology Standards Committee, and has contributed to the development of use cases for the IEEE Reusable Competency Definition Standard, P1484.20.1. He was project engineer for several maintenance training applications, and has led the training analysis of web-delivered simulations for the Army Signal Center. He is a coauthor of the Signal Center Masterplan for Lifelong Learning.


INTRODUCTION

This paper describes an innovative approach to scenario design that shows students the consequences of decisions and actions that stem from imperfect conceptual models. The paper thus introduces two main topics: the idea of imperfect conceptual models, and the novelty of our approach, which focuses on an affective component of learning.

Imperfect Conceptual Models

Much has been made of different forms of bias in the prewar assessments of Iraqi weapons of mass destruction (SSCI, 2004) and of Iranian nuclear intentions (NIC, 2007). There may as a result be systemic process improvements needed for the Intelligence community (e.g., Cooper, 2005), but there is also a need to train the individual analyst within a decision-making collective to recognize biases that may intrude in collaborative analyses and to manage those biases.

Biased reasoning within Intelligence decision-making is just one example of what in this paper we are calling imperfect conceptual models. Other examples are mental set (persisting with one or a few typical solution procedures that were often successful in the past but are not necessarily most effective in the present; e.g., Smith, 1995) and semantic abstraction (integrating content into one’s cognitive structures so as to make it more meaningful, but at the expense of lost detail; Bransford & Franks, 1971), particularly when these tendencies interact with the mission and its critical tasks. These “imperfections” should not be viewed as flaws; rather, they are part and parcel of the heuristics and strategies of everyday reasoning (Woll, 2002).

Two other examples of imperfect conceptual models from disparate areas demonstrate the breadth of the concept. First, in developing a training simulation for Preventive Maintenance Checks and Services for the AN/GSC-52A, a satellite communications ground station that is part of the Defense Satellite Communications System (Cooper et al., 2004), we noticed an “if it ain’t broke, don’t fix it” attitude exhibited by novice operator/maintainers. However, this attitude serves poorly in the field, where degraded equipment can quickly break down, and it is really the job of operator/maintainers, when possible, to keep all equipment at full readiness. We devised a training scheme to make the consequences of this attitude clear by forcing students to consider the difference between a restore-link task and a diagnose-and-repair task. For the former, the task is simply to get a communications link up and working by whatever means, for instance to meet a Commander’s Critical Information Requirements. For the latter, the task is to delve into a communications system using set procedures and determine with precision the cause of a fault.

Second, in a very different setting, we identified a training need for medical students: to remember to request that a parent leave the room during an adolescent social history (Deterding, Milliron, & Hubal, 2005). We devised a simulation in which a virtual adolescent patient presents very different histories to the medical student depending on who else is in the room. If the student does not request that a parent leave the room, and give the parent a satisfactory reason for doing so, then the after-action report generated from the student’s actions will show the equivalent of a NOGO.
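To illustrate the kind of branching this requires, the following is a minimal sketch, under our own assumptions (the names EncounterState, ask, request_parent_leave, and the NOGO check are illustrative, not the actual application’s code), of how a virtual patient might withhold history while a parent is present and how the after-action report might flag a missed dismissal request.

```python
from dataclasses import dataclass, field

@dataclass
class EncounterState:
    parent_present: bool = True
    parent_dismissal_requested: bool = False
    reason_given_to_parent: bool = False
    aar_items: list = field(default_factory=list)

# Hypothetical response tables: the adolescent reveals sensitive history
# only when the parent is out of the room.
GUARDED_HISTORY = {"substance use": "No, never.", "sexual activity": "No."}
CANDID_HISTORY = {"substance use": "Sometimes at parties.", "sexual activity": "Yes, one partner."}

def ask(state: EncounterState, topic: str) -> str:
    """Return the virtual patient's answer, which depends on who is in the room."""
    table = GUARDED_HISTORY if state.parent_present else CANDID_HISTORY
    return table.get(topic, "I'm not sure.")

def request_parent_leave(state: EncounterState, reason: str) -> None:
    """Record the dismissal request; the parent steps out only if a reason is given."""
    state.parent_dismissal_requested = True
    state.reason_given_to_parent = bool(reason.strip())
    if state.reason_given_to_parent:
        state.parent_present = False

def close_encounter(state: EncounterState) -> list:
    """Generate AAR items; a missed or unjustified dismissal request is a NOGO."""
    if state.parent_dismissal_requested and state.reason_given_to_parent:
        state.aar_items.append(("Adolescent social history", "GO"))
    else:
        state.aar_items.append(("Adolescent social history",
                                "NOGO: parent was not asked to leave, or no reason was given"))
    return state.aar_items
```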

Training Approach

The training approach described here is not to restrict students from demonstrating imperfect conceptual models but instead to enable them to do so, by enticing mistakes. This approach is quite different from that of the line of procedural skills simulation training systems we have developed (Hubal, 2005), where system faults are introduced and the students have to follow set procedures to learn to uncover the faults. Here, where the quality of a decision and resulting actions directly relates to the quality of the data and of the analyses applied to those data, or to the means used to acquire the data, imperfect conceptual models pose a threat and can have unfortunate consequences for the implementation of the decision (Heuer, 1999). Imperfect conceptual models pose a particular risk for collective decision-making, when data collection and analysis are performed by different people and an analyst must rely on others. In those cases, the analyst must understand the potential influences of imperfect conceptual models on others’ work as well as his or her own.


As the examples suggest, what our training approach requires is an analysis that considers (1) which new or different decisions and actions would not “normally” be taken in a scenario having a mission and specific critical tasks, and (2) what the consequences are of not making those decisions or performing those actions.

Once the consequences are understood, we design scenarios to force (or at least encourage) student decisions and actions that reveal imperfect conceptual models. The design involves setting up initial conditions, developing “scripts” and behaviors for virtual actors and entities, and devising variations on the scenario’s theme to plot out when the decisions and actions are necessary. Last, the design involves setting up instructional modules so that the training provides variable reinforcement of the exposure to the student’s own and other entities’ imperfect conceptual models, with threads of combined types of heuristics and strategies woven through the course of instruction.
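As a concrete illustration of this kind of scenario design, the sketch below shows one plausible way to represent initial conditions, scripted variations, and a variable-reinforcement schedule as data; the structure and names (Variation, Scenario, pick_next_variation) are our own illustrative assumptions, not the project’s actual implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Variation:
    name: str
    targeted_models: list          # imperfect conceptual models this variation exposes, e.g. ["order bias"]
    initial_conditions: dict       # parameterized starting values for the environment model
    scripted_behaviors: dict       # simulated role name -> scripted behavior identifier

@dataclass
class Scenario:
    mission: str
    critical_tasks: list
    variations: list = field(default_factory=list)

def pick_next_variation(scenario: Scenario, student_history: dict) -> Variation:
    """Variable reinforcement: weight variations toward models the student has not yet
    demonstrated recognizing, but keep some randomness so no single model is drilled predictably."""
    def weight(v: Variation) -> float:
        unmastered = [m for m in v.targeted_models if not student_history.get(m, False)]
        return 1.0 + 2.0 * len(unmastered)
    weights = [weight(v) for v in scenario.variations]
    return random.choices(scenario.variations, weights=weights, k=1)[0]
```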

In this paper we present preliminary work in a simulated intelligence setting for enticing students to exhibit mistakes related to imperfect conceptual models.

TRAINING COLLECTIVE DECISION-MAKING

Traditional approaches for training collective decision-making in a networked environment involve bringing the team together (in time and often in space) and replicating the networked environment with complex simulations and stimulation of the data collection tools. These training events are difficult to schedule and expensive to conduct, and provide limited opportunities for exploring different situations requiring different decision-making strategies.

These intense and expensive collective training events are a much better alternative to on-the-job training when bad decisions can lead to disasters. However, they represent an inefficient use of resources for the participants to come up to speed on their individual staff skills. Thus there is a need for training individuals to prepare for these collective training events.

A Blended Training Strategy

We advocate a blended approach to training in which individuals first become familiarized with tasks, then acquire and practice skills prior to engaging in collective exercises (Frank, Helms, & Voor, 2000; Hubal & Helms, 1998). Individuals can learn about the process and its context (e.g., the environment and organization in which skills take place). This approach allows individuals with a wide range of backgrounds to reach a common level of proficiency. It also allows individuals to train on skills they need with scenarios that would not be appropriate for collective training. In our approach, only after individual training is complete does scenario-based collective training take place.

This training approach arose out of discussions with one of the Army TRADOC schools on collective tactics training for digital Tactical Operations Centers (TOCs). The discussions focused on reducing the acquisition and life-cycle costs of Army Tactical Command and Control Systems (now called Army Battle Command Systems, or ABCS). Officers in the TOCs must serve as information integrators and decision-makers, and are supported by operators. The entire TOC staff must operate as a team, integrating information across multiple ABCS and making decisions based on information extracted from these systems. At the time, the school used a small-group instruction model that it found effective but not efficient, in that the approach required that soldiers be collocated and that a number of groups be in session simultaneously. Actual ABCS were scarce and expensive, and the school was not funded to provide the maintenance personnel and system administrators needed to support them. Through our discussions we determined that only two of the ABCS were needed in the classroom for soldiers to master critical skills, while familiarization with the other ABCS could be achieved through other training media. Additional analysis focused on handling the student load effectively, settling on low-overhead constructive simulations available in the classroom rather than having soldiers rotate through a high-overhead hands-on simulation.

The current testbed, intelligence analysis training, however, is very different from training on tactical operations, maintenance, and other procedural skills. To enable individuals to learn to recognize how imperfect conceptual models may influence intelligence production, we promote a combination of individual simulation training and small-group exercises. One example is to simulate the interactions among battle staff who must operate as a team, integrating information across multiple battle command systems and making decisions based on information extracted from these systems (see Cianciolo & Sanders, 2005). Another example is to simulate the interactions between a single intelligence analyst, whose responsibility is to produce intelligence for a higher-level decision-maker, and a number of collaborators who support the analyst with collected intelligence. In these examples, the student first becomes familiar with imperfect conceptual models and their features, then practices recognizing the behavior associated with such models during the interactions. Practice involves considering how questions are asked, where information is retrieved from, and what heuristics or strategies the simulation introduces into the simulated collaborators’ responses. The small-group exercises would build on skills acquired through the simulation, focusing on group interactions and how imperfect conceptual reasoning might be recognized and addressed. The emphasis would be on recognition of and attending to imperfect conceptual models, not their elimination, as learning objectives.

A New Form of Soft Skills Simulation Training

Simulation-based systems are increasingly being used for training skills that don’t always have set procedures. Much of our and others’ past work has focused on the strategies that should be used to overcome “adversity”, in the same sense that our procedural trainers dealt with faults. Examples include: gaining cultural understanding (Lane, et al., 2007), where appropriate techniques for interacting with persons from different cultures are learned; conducting interrogations and interviewing (Deterding, et al., 2005; Hubal, Frank, & Guinn, 2003), where rapport-building and de-escalating dialog methods are learned; and demonstrating adaptive leadership skills (Raybourn, et al., 2005) needed by current and future force commanders.

Similar to strategies to overcome adversity, strategies to address imperfect conceptual models rely on having awareness of the situation. However, there are three important differences between our past work and that presented here.

• First is a different instructional paradigm than we and others have typically taken: a need to engage the student in addressing imperfect conceptual models. Before, we strove to have the student learn set procedures to manage adverse (“faulted”) situations, but the faults were pre-determined, not unexpected, and explainable. Now we introduce imperfect conceptual models, that is, faulty mental models, or behaviors designed to cause the student to fail and, in turn, to question assumptions, compare alternative hypotheses, and test implications.

• Second, the environment models that drive actions and responses of simulated entities (objects, machines, terrain, and particularly avatars) become more complex because we now need to model not just constructs like cognition and emotions but also biases, mental sets, and schematic abstractions, and interactions among the constructs.

• Third, what were generally static learning objectives are now dynamic. The student models we had for maintaining an understanding of the state of the student’s knowledge and skills relied on performance measures derived from known critical tasks. Now student modeling is extended to consider the imperfect conceptions inherent in real-world problem-solving, and thus constantly changing measures of successful or accurate performance (a sketch of such a student model follows this list).
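The following is a minimal sketch, under our own assumptions (class and attribute names such as StudentModel and evidence are illustrative, not the system’s actual code), of a student model that accumulates evidence for several imperfect conceptual models and exposes measures that change as evidence accumulates.

```python
from collections import defaultdict

class StudentModel:
    """Tracks evidence that a student exhibited (or recognized) an imperfect conceptual
    model across scenarios; measures are recomputed on demand rather than fixed
    against a static list of critical tasks."""

    def __init__(self):
        # model name -> list of (scenario_id, exhibited: bool) observations
        self.evidence = defaultdict(list)

    def record(self, model_name: str, scenario_id: str, exhibited: bool) -> None:
        self.evidence[model_name].append((scenario_id, exhibited))

    def tendency(self, model_name: str) -> float:
        """Fraction of observed opportunities in which the model was exhibited;
        0.0 if this model has not yet been probed."""
        obs = self.evidence[model_name]
        return sum(1 for _, exhibited in obs if exhibited) / len(obs) if obs else 0.0

    def weakest_models(self, n: int = 2) -> list:
        """Candidate targets for the next scenario's enticements."""
        return sorted(self.evidence, key=self.tendency, reverse=True)[:n]
```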

A SIMULATION FOR STAFF TRAINING

Individual Training Learning Objectives

The authors are part of a team that developed a simulation for training individual staff members to prepare them for collective decision-making. The simulation immerses the student in the preparation of decision-making documents. The student prepares documents (called “student products”) by finding and integrating information found in various source documents, including documents provided by other persons. A key part of the training is communication with other staff to obtain needed source documents and other forms of information. The student needs to understand the roles of the team members to get the information needed to complete the product, to support decision-making by a superior or by higher-level analysts. This category of tasks implies the following types of learning objectives for the instructional model:

• Understanding the flow of information required from multiple sources to provide an accurate basis for decision-making.

• Understanding the roles of members of the team, or correspondingly the capabilities of colleagues with different sub-specialties, particularly what kinds of information they can provide to support the decision-making and what communication approaches are appropriate to collect information from people spread across the organization.

• Ensuring that any decision-making documents prepared by the student are consistent with the source information.

• Recognizing bias in analysis performed by members of the team. For instance, extracts of raw intelligence from field offices or even a patrolman’s report can be skewed by the team member’s point of view.

• Using the structure of the decision-making process to reduce the risk of bias.


We stress that the intent of our training is not to rid collective decision-making of bias (or of any imperfect conceptual model), which may not even be possible (Marrin, 2007), but to ensure that the analyst recognizes what types of bias might influence his or her own or others’ analytic processing and what steps to take to account for those biases.

Scenario-Based Training

This simulation extended the capabilities of our previous individual training simulation architecture (see Frank, et al., 2003). It provides an interactive environment for reviewing source documents, composing student products, and communicating with virtual (i.e., simulated) superiors, subordinates, peers, and members of the team (called “simulated roles”). Relevant source documents as well as the current state of student products are available to the student through access to a portal. The information in the portal is updated by the simulated roles in response to communications from the student. In the simulation environment, the student communicates with the simulated roles through e-mail or a chat tool (as a proxy for face-to-face interactions), and the simulated roles communicate in a similar fashion, as well as by updating documents in the portal. One key feature is the integration of an augmented transition network (Guinn & Hubal, 2006) to guide the communications among team members.
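The augmented transition network cited above (Guinn & Hubal, 2006) governs the back-and-forth with simulated roles; the sketch below is a simplified, dictionary-based transition network of our own devising, not the published ATN implementation, showing how a request from the student might advance a simulated role through states and produce portal updates or messages.

```python
# A minimal transition-network sketch for a simulated role's dialog state.
# States, triggers, and actions here are illustrative assumptions.
TRANSITIONS = {
    # (current_state, trigger)       : (next_state, action)
    ("idle", "request_information")  : ("gathering", "acknowledge_request"),
    ("gathering", "clock_tick")      : ("ready", "post_document_to_portal"),
    ("ready", "request_status")      : ("ready", "send_status_update"),
    ("ready", "challenge_source")    : ("idle", "send_caveat_about_source"),
}

class SimulatedRole:
    def __init__(self, name: str):
        self.name = name
        self.state = "idle"
        self.registers = {}  # ATN-style registers, e.g. which source or topic was requested

    def handle(self, trigger: str, **slots) -> str:
        """Advance the network on a trigger from the student (or the simulation clock)."""
        next_state, action = TRANSITIONS.get((self.state, trigger), (self.state, "no_response"))
        self.registers.update(slots)
        self.state = next_state
        return action  # the caller maps the action to an e-mail/chat message or a portal update
```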

Automated Just-in-time Assessment

The simulation provides just-in-time assessment and feedback to the student using an after-action review (AAR) report like that developed for previous distributed simulations (Frank et al., 2004). In this work new assessment methods were developed that focus on validating the consistency of elements of the student products with appropriate source documents, to include assessing influences of biased analyses. An example of feedback is shown in the lower frame of the web-page displayed in Figure 1. This approach has been enhanced to include information about the state of the simulated roles as well as the final state of the student product.

Our approach to just-in-time feedback was to “mark up” the student products. Since these applications train processes rather than sequential procedures, the assessment focuses both on student actions and on the end state of documents created by the student (i.e., student products). The feedback presents the information in a different way than procedural training AARs do. That is, the traditional AAR is presented in terms of critical tasks and performance measures, similar to a class grade book summarizing multiple assignments. In contrast, this just-in-time feedback is presented in terms of each section of the student document, similar to a marked-up assignment being returned to the student. Semantically rich techniques for providing intelligent tutoring and answering the question “What do I do now?” are being developed.
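A minimal sketch of this section-by-section markup follows; the function name, the section/source pairing, and the feedback strings are illustrative assumptions rather than the application’s actual logic.

```python
def assess_product(product: dict, sources: dict) -> list:
    """Compare each section of a student product against the source documents that
    should support it, and return feedback items keyed by section.

    product: section name -> list of claims the student entered
    sources: section name -> set of claims supported by source documents
    """
    feedback = []
    for section, claims in product.items():
        supported = sources.get(section, set())
        for claim in claims:
            if claim not in supported:
                feedback.append((section, f"'{claim}' is not consistent with the available source documents"))
        for claim in supported - set(claims):
            feedback.append((section, f"Supported finding not included: '{claim}'"))
    return feedback   # rendered as color-coded markup next to each section of the product
```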

Figure 1. Screenshot of just-in-time feedback (figure callouts: menu bar for selecting product and section; highlighted section of the student product; color-coded menu items based on the student product; menu items missing from the school solution; error text derived from differences; AAR filtered for the selected student product and section; generated AAR items)

Demonstration Simulation

This research focused on a demonstration based on our work involving civilian intelligence analysis (Hubal, Staszewski, & Marrin, 2007). The demonstration uses the categories of political, military, economic, and social analysis (PMES) as functional areas. We constructed a lesson footprint based on the CIA seven-step analysis process (see McCue, 2007), and incorporated one live and six simulated roles:

• a junior Intelligence analyst, who is played by the student;

• a political analyst, a specialist in the political affairs of the country of interest;

• a military analyst, a specialist in the military affairs of the country of interest;

• an economics analyst, a specialist in the economic affairs of the country of interest;

• a human intelligence (HUMINT) specialist, who collects information by interviewing people in or related to the country of interest;

• an imagery intelligence (IMINT) specialist, who collects and analyzes imagery related to the country of interest, particularly satellite and aerial photography;

• a signals intelligence (SIGINT) specialist, who collects and analyzes communications related to the country of interest, including radio traffic and cell phone messages.

As shown in Figure 2a, this demonstration focuses on a single student product, called a National Intelligence Estimate (NIE). This document presents a “Key Estimative Question”, the basis for the product, followed by an overview section and subsections for the PMES analyses, for which the student selects content from a multitude of choices. Each subsection may include several specific items. The student is responsible for filling in these items and identifying supporting data.
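One plausible way to represent such a product template as data is sketched below; this is our own illustrative structure, not the demonstration’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    text: str = ""                                            # the student's selected or entered finding
    supporting_sources: list = field(default_factory=list)    # portal document identifiers

@dataclass
class Subsection:
    functional_area: str                                      # "political", "military", "economic", or "social"
    items: list = field(default_factory=list)

@dataclass
class NIETemplate:
    key_estimative_question: str
    overview: str = ""
    subsections: list = field(default_factory=list)

# The portal is initialized with an empty template like this one.
nie = NIETemplate(
    key_estimative_question="(as assigned in the scenario)",
    subsections=[Subsection(area) for area in ("political", "military", "economic", "social")],
)
```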

Figure 2a. Screenshot of demonstration NIE

Figure 2b. Screenshot of portal

The portal of documents (Figure 2b) is shown on the left of the screen. The portal is initialized with the template for the student product (the NIE) and with pertinent open source data (i.e., non-classified information such as magazine and newspaper articles and links to websites). Requests sent to the team members cause additional (typically classified) items to be added to the portal, such as credit reports, photographs, images, interviews, and phone logs.

The demonstration expands the complexity of interactions with the simulated roles, employing an augmented transition network (Guinn & Hubal, 2006). In this case, the student is expected to verify sources and opinions by communicating with different sources of information. The demonstration uses a relatively simple model for this verification, but the technology can readily support more complex models as needed to assess biased processing, as defined by subject-matter experts. Figure 3 shows a message in which the economics analyst casts doubt on information provided by the HUMINT source. A result of this denial is that the student should not include the HUMINT source information in the NIE.

Figure 3. Example of simulated role interaction

Figure 4 shows just-in-time feedback produced by the simulation for the NIE shown in Figure 2a, using the following hierarchical assessment structure: Functional areas, critical tasks, and performance measures. For the demonstration, the functional areas are PMES analyses. Parallel critical tasks are provided under each functional area: Collect data from all available sources, and fuse and disseminate analysis results that support decision-makers. Each of these critical tasks has multiple performance measures, two of which are shown in Figure 4, the feedback for the political analysis section of the NIE, as highlighted in the middle frame.
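The hierarchy described above might be represented as nested records, as in the sketch below (our own illustrative structure, not the demonstration’s code); each performance measure carries the GO/NOGO-style result that the AAR reports for the corresponding section.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceMeasure:
    description: str
    result: str = "NOT ASSESSED"   # e.g. "GO", "NOGO", or a color-coded finding

@dataclass
class CriticalTask:
    description: str               # e.g. "Collect data from all available sources"
    measures: list = field(default_factory=list)

@dataclass
class FunctionalArea:
    name: str                      # "political", "military", "economic", "social"
    tasks: list = field(default_factory=list)

def filter_aar(areas: list, area_name: str) -> list:
    """Return only the feedback relevant to one section of the student product,
    mirroring the filtered AAR shown alongside the highlighted section."""
    return [m for a in areas if a.name == area_name for t in a.tasks for m in t.measures]
```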

TRAINING TO RECOGNIZE AND AVOID BIASES

We have accumulated a number of theoretical approaches to help students recognize and overcome different types of bias, as an initial means of addressing imperfect conceptual models. We are now investigating how the simulation training system structure can be used to implement some of these approaches efficiently and effectively. Our strategy for specifying the training is to describe the initial conditions and assessment methods for multiple scenarios. Since bias is rarely demonstrated consistently, the assessments must collect data across multiple practice scenarios to detect a pattern and reinforce appropriate behaviors for avoiding bias. Each practice scenario needs to be selected to require the student to recognize bias and take appropriate action. Consistent with real life, multiple forms of bias may be present in any one scenario, and variable reinforcement will help to ensure the effectiveness of the training over multiple sessions. A major advantage of the simulation over live role-playing is its ability to support training through many practice sessions interspersed with regular feedback.

Figure 4. Example intelligence analysis AAR with feedback on simulated role interactions

Our training strategy is to have the student work through many practice scenarios with feedback after each scenario, while maintaining a record of the student’s knowledge and skills in a student model. The instructional strategy is to configure a practice scenario somewhat like setting up a “sting”. Given the student’s history of performance on previous scenarios, our tactic is to configure the next scenario to try to entice one or more forms of biased behavior on the part of the student, or to introduce bias through the behaviors of simulated collaborators. That is, the intent is to set up the environment so that the student tends to exhibit the biases we expect, or is required to recognize and react to bias on the part of a collaborator. In turn, that scenario’s AAR would be focused on a specific student weakness. Some examples for representative biases follow.
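A minimal sketch of this “sting” configuration logic follows, assuming a record of per-bias tendencies like the student model sketched earlier; the function and parameter names are ours, not the system’s.

```python
def configure_sting(tendencies: dict, scenarios: list) -> dict:
    """Pick enticements for the next practice scenario.

    tendencies: bias name -> observed tendency in [0, 1] from prior scenarios
    scenarios:  list of scenario templates, each a dict with a 'name' and an 'entices' list
    Returns the chosen scenario plus the biases its setup will try to entice, either
    in the student's own behavior or in a simulated collaborator's.
    """
    # Target the biases the student has shown most often but not yet corrected.
    targets = sorted(tendencies, key=tendencies.get, reverse=True)[:2]
    candidates = [s for s in scenarios if set(targets) & set(s["entices"])]
    chosen = candidates[0] if candidates else scenarios[0]
    return {
        "scenario": chosen["name"],
        "enticements": [b for b in chosen["entices"] if b in targets],
        "aar_focus": targets,   # that scenario's AAR is then focused on these weaknesses
    }
```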

Order Bias

Information order bias suggests that what is produced depends on what sources are seen first.

To try to entice order bias on the part of the student, the simulation might reorder the source selection under each functional area (the column of boxes along the right of Figure 2a) where students explicitly identify the source for selected items, and track if reordering influences which sources are referenced by the student. The simulation can also introduce order bias into the source data and then assess the student’s ability to recognize and avoid replicating this bias. For example, the simulation can position a key item in a list of conclusions on the second page of a source document, while positioning less relevant items on the first page (see Keane, O’Brien, & Smyth, 2008). If the student consistently misses items on later pages of documents, the simulation will report a tendency towards order bias in the AAR.

Assessing order bias in the first place entails monitoring the student’s actions, a straightforward task within a simulation. From the sources a student identifies, and by tracking the order in which the student accesses information (either through communications with team members or by bringing up portal documents), we can determine whether information order biases are present. The simulation will assess bias if specific responses are selected by the student and appropriate actions to avoid the bias are not taken.
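A minimal sketch of how such monitoring might be scored from an access log (illustrative names and thresholds; not the system’s actual detector):

```python
def order_bias_score(access_log: list, cited_sources: set) -> float:
    """Estimate how strongly citation depends on access order.

    access_log:    source IDs in the order the student opened them
    cited_sources: source IDs the student actually referenced in the product
    Returns the fraction of citations drawn from the first half of the access log;
    values near 1.0 across several scenarios suggest an order bias to report in the AAR.
    """
    if not access_log or not cited_sources:
        return 0.0
    first_half = set(access_log[: max(1, len(access_log) // 2)])
    cited = [s for s in access_log if s in cited_sources]
    if not cited:
        return 0.0
    return sum(1 for s in cited if s in first_half) / len(cited)
```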

Confirmation Bias

Confirmation bias suggests that what is produced fits with preconceived ideas (perhaps ignoring conflicting data). This type of bias shares features of other imperfect conceptual models such as mental set and semantic abstraction.

One approach to enticing confirmation bias is through the statement of the Key Estimative Question. The form of a statement can influence how it is interpreted (Turner, et al., 1992). We have two methods for the simulation to allow the student to restate the question. First, it can allow the student to request confirmation of a restated question from the customer (e.g., an officer or a higher-level decision-maker, who will make use of the student’s product), since when the question is worded differently, the focus of attention can shift towards or away from the point of view of that individual. Second, the student can restate the question in the requests for information from the simulated roles.

An approach to assessing whether or not students are using a process to avoid confirmation bias is to assess what actions they take to seek supporting and confirmatory (or disconfirmatory) evidence from their simulated-role sources. The simulation can have a source demonstrate confirmation bias through the choice of supporting evidence the source provides for its analysis. For example, in our NIE simulation we have one simulated role return a seemingly relevant and damning image that turns out to be discredited, according to another reliable, independent source. The simulation may cause independent sources to conflict over a conclusion, and then assess students on their ability to rank the conflicting positions based on the accessibility or reliability of the supporting data. Another approach, when a simulated role returns a competing hypothesis, is to assess the student’s willingness to follow leads suggested by the competing hypothesis. In this case, it is important for the simulation to provide clear guidance to the student on when to end the resolution process. More complex real-life situations should be addressed by collective training or small-group discussions.

Another approach to assessing whether or not the student is using a process to avoid confirmation bias is to assess what actions the student takes to record and process unexpected findings.
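A minimal sketch of one such check (did the student seek disconfirming evidence, follow competing leads, and record unexpected findings?); the event labels and the simple boolean scoring are illustrative assumptions.

```python
def confirmation_bias_flags(actions: list) -> dict:
    """Inspect a scenario's action log for confirmation-bias indicators.

    actions: (action_type, detail) tuples recorded by the simulation, e.g.
             ("request", "disconfirming"), ("followed_lead", "competing_hypothesis"),
             ("recorded_finding", "unexpected").
    """
    sought_disconfirming = ("request", "disconfirming") in actions
    followed_competing = ("followed_lead", "competing_hypothesis") in actions
    recorded_unexpected = ("recorded_finding", "unexpected") in actions
    return {
        "sought_disconfirming_evidence": sought_disconfirming,
        "followed_competing_hypothesis": followed_competing,
        "recorded_unexpected_findings": recorded_unexpected,
        # If none of the above occurred, the AAR can flag a possible confirmation bias.
        "possible_confirmation_bias": not (sought_disconfirming or followed_competing or recorded_unexpected),
    }
```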

Accessibility Bias

Accessibility bias suggests that the familiarity, salience, or vividness of information can influence its interpretation.

To entice accessibility bias, the simulation can make access to critical information easier or more difficult, requiring fewer or more requests for information from sources and portraying the information more or less prevalently in the returned portal documentation.

It is important to get students to understand the need to focus attention on the familiarity, validity, reliability, and consistency of information sources. As described above, students in the simulation may be required to rank their confidence in different sources to resolve conflicting feedback from different simulated roles.
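A brief sketch of how access difficulty might be parameterized per scenario (the fields are illustrative assumptions, not the simulation’s configuration format):

```python
from dataclasses import dataclass

@dataclass
class AccessProfile:
    """Controls how hard a piece of information is to reach within a scenario."""
    requests_required: int   # how many requests to simulated roles before the item is released
    prominence: float        # 0..1, how prominently the item appears in returned portal documents

# Example: entice accessibility bias by burying one critical item
# while a less relevant distractor is vivid and easy to find.
critical_item = AccessProfile(requests_required=3, prominence=0.2)
distractor_item = AccessProfile(requests_required=1, prominence=0.9)
```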

Process Bias

Process biases suggest that alternative outcomes are not generated, the impact of counter-examples or counter-factual information is not estimated, or originally rejected alternatives are not reconsidered.

One way that the simulation tracks the student’s information processing is by monitoring which responsibilities a student assigns to which simulated role. The student product templates typically have a number of sections, and the student may delegate responsibilities for those sections to one or more simulated roles. One form of process bias that the simulation can detect is when the student always assigns the same sections of the product to the same simulated roles, despite indications that input from different roles is required. Similarly, the simulation can provide a scenario where different confirmation sources are required.
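A minimal sketch of detecting this repeated-assignment pattern across scenarios (the names and the threshold are illustrative assumptions):

```python
from collections import Counter

def repeated_assignment_bias(assignment_history: list, threshold: float = 0.8) -> dict:
    """Flag sections the student almost always delegates to the same simulated role.

    assignment_history: list of dicts, one per scenario, mapping section -> assigned role
    Returns section -> (dominant role, fraction of scenarios) for sections at or above threshold.
    """
    flags = {}
    sections = {s for scenario in assignment_history for s in scenario}
    for section in sections:
        roles = Counter(sc[section] for sc in assignment_history if section in sc)
        role, count = roles.most_common(1)[0]
        fraction = count / sum(roles.values())
        if fraction >= threshold:
            flags[section] = (role, fraction)
    return flags
```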

Another strategy is to have students identify and carefully analyze linchpin evidence (i.e., data that cause the choices of certain alternatives over others). In the simulation this strategy involves analyzing which of several competing actions the student takes after encountering critical evidence (such as one simulated role contradicting conclusions reached by another role) to determine how well the student has appraised the evidence. The simulation can also assess the order in which items are listed by the student in a particular section, and encourage the student to confront a potential bias by listing conclusions by strength of evidence.

A related strategy is to have students defend how they integrated data. Similar to having students describe their goal-setting, this strategy requires students to be aware of their cognitive processing. One approach in the simulation is to have students explain in words why they chose a particular selection for one of the functional areas, in addition to identifying a source for their selection. In this case, the simulation should be used to support group discussions or interactions with the instructor.

Sunk Costs Bias

Sunk costs bias suggests that the effort already expended becomes a reason for integrating particular information.

An anecdote provided to us illustrates the potential negative consequences of sunk costs bias. A unit spends considerable effort determining the cell phone numbers of terrorist leaders and developing and implementing a collection process that will notify the unit commander when a call is detected from one of these phone numbers. After several months of no calls from a number, a call is detected, and the intelligence cell quickly provides the location of the call to the unit commander, who sends a force out to detain the person making the call. When the force arrives at the site of the call and finds the phone, the caller turns out to be a young boy. However, on the return trip to the base, the force is ambushed. In this case the desire to react quickly to the key event, a desire grounded in the substantial sunk costs of the collection effort, was not tempered by caution despite the long period without phone calls.

To entice sunk costs bias, we need to get the students to “invest” in a particular approach. One strategy in the simulation is to get the students to commit to a process at one stage of the training and then assess them on their flexibility to detect anomalies and revise their process to react to the anomalies. It is relatively easy through the choice of scenarios to get students to form a kind of mindset where a known, familiar method that worked previously is applied as an analog in a new scenario. The new scenario can be structured to support surface and/or structural similarity (Holyoak & Koh, 1987), yielding a traceable context in which to assess the student’s processing bias.

The simulation might require students to communicate with the team about changes in process, a tedious task that students may avoid. To assess sunk costs bias we follow approaches similar to those for assessing process bias (i.e., detecting an unwillingness to adapt the process and discard the results of extensive previous efforts based on linchpin evidence). The simulation can infer a sunk costs bias if the student consistently adheres to the original process of an analysis rather than using linchpin evidence to redirect the process.
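A minimal sketch of that inference (the event labels and the decision rule are illustrative assumptions):

```python
def infer_sunk_costs_bias(scenario_logs: list) -> bool:
    """Infer a sunk costs bias if, across scenarios, the student keeps the original
    analytic process even after linchpin evidence indicates it should change.

    scenario_logs: list of dicts with keys
        'linchpin_evidence_presented' (bool) and 'process_revised' (bool)
    """
    opportunities = [log for log in scenario_logs if log.get("linchpin_evidence_presented")]
    if not opportunities:
        return False
    ignored = sum(1 for log in opportunities if not log.get("process_revised"))
    return ignored / len(opportunities) > 0.5   # consistent adherence to the original process
```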

FUTURE WORK

Additional efforts are needed to adapt the simulation for enticing other kinds of imperfect conceptual models, not just bias, and to address other soft skills such as interviewing and de-escalation (Hubal, et al., 2003). We are working on creating a sufficiently rich set of scenarios across a number of domains (military, justice, medical) to support the assessment of imperfect conceptual models.

A long-range goal is to assess the effectiveness of this training in a controlled trial. Ultimately, a longitudinal study that compares the work products of students who receive this training with those of students who do not would be the most convincing form of validation.

SUMMARY

In this paper we present an innovative approach to scenario design that shows students the consequences of decisions and actions that stem from imperfect conceptual models. The approach is based on affective interactions with learning, in the sense that we entice students to make mistakes, engaging the students in an instructional design more affective than rote procedural learning. We describe how a simulation can assess students’ information processing for evidence of analytic bias, a kind of imperfect conceptual model. The demonstration simulation described here is based on a study with civilian intelligence analysts, but it has ready applicability to other domains including military intelligence. We believe this approach represents a cost-effective methodology for making students aware of the heuristics and strategies associated with imperfect conceptual models that can creep into their analyses and ultimately affect collective decision-making.

ACKNOWLEDGEMENTS

We wish to thank our current and former colleagues in RTI’s Center for Distributed Learning. Noah Evens and Shane Lardinois were the primary simulation software developers. The work relied heavily on a military intelligence example implemented by Patti Krizowsky and Steve Luthringer. The work was guided by Hal Waters and Brooke Whiteford, and by our I/ITSEC bird dog. This paper benefited from input from Don Gemeinhardt.

REFERENCES

Bransford, J.D., & Franks, J.J. (1971). The abstraction of linguistic ideas. Cognitive Psychology, 2, 331-350.

Cianciolo, A.T., & Sanders, W.R. (2005). Diagnosing shortfalls in war-gaming effectiveness: A model-based approach. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 900-908). Arlington, VA: National Defense Industrial Association.

Cooper, G., Whiteford, B., Frank, G., Perkins, K., & Lizama, J. (2004). Embedded distributed training: Combining simulations, IETMs, and operational code. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 205-213). Arlington, VA: National Defense Industrial Association.

Cooper, J.R. (2005). Curing analytic pathologies: Pathways to improved intelligence analysis. Washington, DC: Central Intelligence Agency Center for the Study of Intelligence.

Deterding, R., Milliron, C., & Hubal, R. (2005). The virtual pediatric standardized patient application: Formative evaluation findings. Studies in Health Technology and Informatics, 111, 105-107.

Frank, G., Whiteford, B., Brown, R., Cooper, G., Evens, N., & Merino, K. (2003). Web-delivered simulations for lifelong learning. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 170-179). Arlington, VA: National Defense Industrial Association.

Frank, G., Whiteford, B., Hubal, R., Sonker, P., Perkins, K., Arnold, P., Presley, T., Jones, R., & Meeds, H. (2004). Performance assessment for distributed learning using after action review reports generated by simulations. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 808-817). Arlington, VA: National Defense Industrial Association.

Frank, G.A., Helms, R.F., & Voor, D. (2000). Determining the right mix of live, virtual, and constructive training. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 1268-1277). Arlington, VA: National Defense Industrial Association.

Guinn, C., & Hubal, R. (2006). Augmented transition networks (ATNs) for dialog control: A longitudinal study. Proceedings of the International Conference on Computational Intelligence Special Session on Natural Language Processing for Real Life Applications (#523-815) (pp. 395-400). Calgary, AB: Acta Press.

Heuer, R.J. (1999). Psychology of intelligence analysis. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency.

Holyoak, K.J., & Koh, K. (1987). Surface and structural similarity in analogical transfer. Memory & Cognition, 15(4), 332-340.

Hubal, R. (2005). Design and usability of military maintenance skills simulation training systems. Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 2110-2114). Santa Monica, CA: Human Factors and Ergonomics Society.

Hubal, R., Staszewski, J., & Marrin, S. (2007). Overcoming decision making bias: Training implications for intelligence and leadership. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 798-808). Arlington, VA: National Defense Industrial Association.

Hubal, R.C., & Helms, R.F. (1998). Advanced learning environments. Modern Simulation & Training, 5, 40-45.

Hubal, R.C., Frank, G.A., & Guinn, C.I. (2003). Lessons learned in modeling schizophrenic and depressed responsive virtual humans for training. Proceedings of the Intelligent User Interface Conference (pp. 85-92). New York, NY: ACM Press.

Keane, M.T., O’Brien, M., & Smyth, B. (2008). Are people biased in their use of search engines? Communications of the ACM, 51(2), 49-52.

Lane, H.C., Core, M.G., Gomboc, D., Karnavat, A., & Rosenberg, M. (2007). Intelligent tutoring for interpersonal and intercultural skills. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 843-853). Arlington, VA: National Defense Industrial Association.

Marrin, S. (2007). At arm’s length or at the elbow?: Explaining the distance between analysts and decisionmakers. International Journal of Intelligence and CounterIntelligence, 20, 401-414.

McCue, C. (2007). Data mining and predictive analysis: Intelligence gathering and crime analysis. Oxford, England: Elsevier.

National Intelligence Council (2007). Iran: Nuclear intentions and capabilities. Washington, DC: Office of the Director of National Intelligence National Intelligence Estimate.

Raybourn, E., Heneghan, J., Deagle, E., & Mendini, K. (2005). Adaptive thinking & leadership simulation game training for Special Forces officers. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (pp. 789-796). Arlington, VA: National Defense Industrial Association.

Senate Select Committee on Intelligence (2004). Report on the U.S. Intelligence community’s prewar intelligence assessments on Iraq. Washington, DC: U.S. Senate.

Smith, S.M. (1995). Getting into and out of mental ruts: A theory of fixation, incubation, and insight. In R.J. Sternberg & J.E. Davidson (Eds.), The nature of insight (pp. 229-251). Cambridge, MA: MIT Press.

Turner, C.F., Lessler, J.T., George, B.J., Hubbard, M.L., & Witt, M.B. (1992). Effects of mode of administration and wording on data quality. In C.F. Turner, J.T. Lessler, & J.C. Gfroerer (Eds.), Survey Measurements of Drug Use: Methodological Studies. DHHS Pub. No. (ADM) 92-1929. Washington, DC: Government Printing Office.

Woll, S. (2002). Everyday thinking: Memory, reasoning, and judgment in the real world. Mahwah, NJ: Lawrence Erlbaum.