
Example of a Complementary use of Model Checking and Agent-based Simulation

Gabriel E. Gelman, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, USA, g.gelman@gatech.edu
Karen M. Feigh, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, USA, karen.feigh@gatech.edu
John Rushby, Computer Science Laboratory, SRI International, Menlo Park, USA, rushby@csl.sri.com

Abstract: To identify problems that may arise between pilots and automation, methods are needed that can uncover potential problems with automation early in the design process. Such potential problems include automation surprises, which describe events in which pilots are surprised by the actions of the automation. In this work, agent-based, hybrid-time simulation and model checking are combined and their respective advantages leveraged in an original manner to find problematic human-automation interaction (HAI) early in the design process. The Tarom 381 incident involving the former Airbus automatic speed protection logic, which led to an automation surprise, was used as a common case study for both methodology validation and further analysis. Results of this case study show why model checking alone has difficulty analyzing such systems and how the incorporation of simulation can be used in a complementary fashion. The results indicate that the method is suitable for examining problematic HAI, such as automation surprises, allowing automation designers to improve their designs.

Index Terms: simulation, model checking, automation surprise, mental model, formal methods

I. INTRODUCTION

One failure of human-automation interaction (HAI) that poses significant risk to aviation occurs when automation behavior deviates from what pilots expect. This effect is called automation surprise (AS) and has been examined by Wiener [1], Sarter and colleagues [2], and Palmer [3]. A common example of AS in aviation is when the aircraft flight mode changes automatically and the pilot is neither aware of the change nor able to comprehend the behavior resulting from the new mode, i.e. mode confusion. Javaux [4] concluded that mode confusion appears because pilots are not aware of all the preconditions for a given mode change. Javaux's work explained how pilots simplify, or reduce, the set of preconditions that trigger a certain mode change to only the preconditions that are salient. Preconditions for intended mode changes that are not salient, but are almost always satisfied, are forgotten. Circumstances in which a forgotten precondition is not satisfied often lead to mode confusion.
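As a schematic illustration of this precondition reduction, consider the sketch below. It is purely illustrative: the predicates, the mode-change rule, and the scenario are hypothetical and are not drawn from any actual autoflight logic.

```python
# Illustrative sketch of Javaux's precondition-reduction account of mode
# confusion; all predicates and the mode-change rule are hypothetical.

FULL_PRECONDITIONS = ("capture_armed", "within_capture_band", "flaps_in_landing_config")

# The pilot's mental model keeps only the salient preconditions; the last
# one above is almost always true in practice, so it has been "forgotten".
REDUCED_PRECONDITIONS = ("capture_armed", "within_capture_band")

def mode_change_fires(state, preconditions):
    """The mode change fires exactly when every listed precondition holds."""
    return all(state[p] for p in preconditions)

# A circumstance in which the forgotten precondition is not satisfied:
state = {"capture_armed": True, "within_capture_band": True,
         "flaps_in_landing_config": False}

automation_behavior = mode_change_fires(state, FULL_PRECONDITIONS)    # False
pilot_prediction = mode_change_fires(state, REDUCED_PRECONDITIONS)    # True

if pilot_prediction != automation_behavior:
    print("Mode confusion: the pilot expects a mode change the automation does not make.")
```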
The challenge presented by AS is that the automation is operating as designed, yet the flight crew is still unable to predict or explain the behavior of the aircraft. It is clear that the problem lies in a mismatch between the pilot's expectation, which is driven by his/her mental model of the autoflight system, and the underlying control logic. Norman [5] has done fundamental work on mental models in the human-computer interaction domain, of which HAI is a part. To be able to investigate system behavior and predict AS and other potential problems that may arise between collaborators in such systems, designers need a method that addresses not just autoflight system validation, but also the pilot's understanding of how the particular design works.

Two methods for analyzing system behavior and detecting potential problems in HAI are model checking (MC), as done by Rushby and colleagues [6]-[8], and agent-based simulation, as done by Corker and colleagues [9]. MC and simulation are two distinct analysis methods, each with its own advantages and limitations. MC originated in computer science as a technique to check whether a finite-state system (the model) satisfies a given specification, usually written in a temporal logic. If the specification is not satisfied, the model checker can construct a counterexample that drives the model through a scenario that manifests the violation.

MC technology has advanced considerably and is no longer limited to finite-state systems; so-called infinite bounded model checkers can handle models that include mathematical integer and real values, and checkers for hybrid systems allow these values to be related by differential equations. The appeal of MC as an analysis technique is that it considers all possible behaviors of the model, so it is certain to find a violation of the specification if one exists, and can prove correctness otherwise.

A limitation of MC is that the underlying technology is automated deduction, where all problems are at least NP-hard, so the model cannot be too large or too complicated if the analysis is to complete in a reasonable time. Consequently, the full or concrete model must often be considerably simplified before it can be model checked. If the simplified model is a true abstraction of the concrete model (i.e., it includes all of its behaviors), then safety properties proved of the abstraction are also true of the concrete model; the converse may not hold, however, for a property that is true of the concrete system may fail in MC because the abstraction is too coarse (i.e., the model has been oversimplified, leading to spurious counterexamples).

To apply MC to HAI problems, we compose models of the automation (often a complex state machine), the controlled plant (typically a hybrid system whose dynamics are described by differential equations), and human expectations about future states, and look for divergences between expectation and true state. To make the analysis feasible, the models of the plant and of the automation usually must be drastically simplified (e.g., time may not be accurately modeled, or the plant may even be omitted altogether), which can result in spurious counterexamples. Each counterexample must then be examined to see whether it suggests a concrete scenario that manifests a genuine problem, or is spurious. If it is spurious, the abstraction must be refined to eliminate the responsible oversimplification, and the whole process repeats.

Due to the difficulties of constructing effective abstractions, interpreting counterexamples, and refining abstractions to eliminate spurious counterexamples, analysis by MC has been impractical for all but the most basic HAI problems [10].
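To make the core idea concrete, the sketch below shows, in Python and on a drastically abstracted model, what such a check amounts to: an exhaustive search over the composed automation/expectation state space that either establishes the "no divergence" property for the abstraction or returns a counterexample trace. The state variables, transition rule, and property are hypothetical stand-ins, not the models analyzed in this work.

```python
from collections import deque

# Drastically abstracted, hypothetical model: only the automation's vertical
# mode and the pilot's believed mode; time and the plant are omitted entirely.
def successors(state):
    mode, belief = state
    if mode == "VS_FPA":
        # An automatic reversion that the pilot may or may not notice.
        return [("OPEN_CLB", belief), ("OPEN_CLB", "OPEN_CLB")]
    return []

def no_divergence(state):
    mode, belief = state
    return mode == belief

def model_check(initial):
    """Exhaustively explore every reachable state; return a counterexample
    trace to a violating state, or None if the property holds."""
    frontier = deque([[initial]])
    seen = {initial}
    while frontier:
        trace = frontier.popleft()
        if not no_divergence(trace[-1]):
            return trace                      # scenario manifesting the violation
        for nxt in successors(trace[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(trace + [nxt])
    return None                               # property proved for this abstraction

# Counterexample: the mode reverts while the pilot still believes V/S-FPA.
print(model_check(("VS_FPA", "VS_FPA")))
```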
Thus, HAI has more generally been analyzed using a combination of human-in-the-loop studies and agent-based simulation in which the human agents are modeled to capture the aspects of human perception and cognition of interest to the study at hand. Prime examples of such an agent-based modeling approach in the aviation domain are the MIDAS simulation created by Corker and colleagues [9] and WMC (Work Models that Compute), as described in [11], [12]. Agent-based simulations can use a variety of timing mechanisms (continuous, discrete event, or hybrid) to account for the dynamics of the systems under analysis, and generally include hundreds of states once all agents are accounted for. Agent-based simulations, however, are limited to exploring one trace through the state space per run; to explore the system more fully, multiple simulation runs are required. Common simulation analysis techniques such as Monte Carlo methods attach probability distributions to key input variables and then randomly sample from those distributions to perform thousands of simulation runs. Simulations can never exhaustively explore the state space that their models represent, and thus cannot be used to prove the safety of the modeled system. Simulations can provide more information about the events leading to problems in HAI and can work with sophisticated and realistic models of human agents, automated agents, and their environment. However, simulations depend strongly on the assumptions made, as well as on the quality of the modeling and input data.

MC and agent-based simulation have complementary strengths and weaknesses: MC can examine all possible scenarios, but only for simplified models; simulation can examine realistic models, but only a fraction of the state space. A method that leverages the strengths of both could allow a much more complete and systematic evaluation of realistic models than current practice. The objective of this work is to evaluate a common scenario using both methods, compare and contrast the outputs and inputs, and examine the assumptions and requirements for the modeling. More generally, such approaches can be used to study future autonomy and authority assignments between pilots and automation, the risks such new assignments bear, and ultimately how to mitigate those risks. Finally, the ability to use two distinct methods jointly could have implications for methods of examining human-integrated systems in general.

Fig. 1. Exemplary scenario in which AS occurs.

II. EXAMPLE SCENARIO: A320 SPEED PROTECTION

This work uses a known AS found in previous versions of the Airbus models A300, A310 and A320 as a case study. An unexpected automatic flight mode change during approach was initiated by the automatic speed protection logic. The result is a mode change from vertical speed / flight path angle (V/S-FPA) mode, in which the aircraft maintains either the vertical speed or the flight path angle at a certain value, to an open mode, which gives priority to the airspeed rather than the vertical speed or flight path angle. Two open modes are available, open climb (OPEN CLB) and open descent (OPEN DES). The open modes ignore any flight path angle or vertical speed previously commanded and rely on the altitude entered in the Flight Control Unit (FCU). If the FCU altitude is set above the current altitude, the automation switches into the OPEN CLB flight mode and starts climbing. In descent, this mode change results in an AS to pilots who are attempting to land.
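A minimal sketch of this reversion behavior follows, assuming simplified variable names, a single airspeed limit, and a trigger condition reduced to a comparison; it illustrates the behavior described above rather than the actual Airbus control law.

```python
def vertical_mode_after_speed_protection(mode, airspeed, vmax, fcu_altitude, altitude):
    """Simplified speed protection: if the airspeed limit is exceeded while
    in V/S-FPA, revert to an open mode chosen by comparing the FCU altitude
    with the current altitude (illustrative, not the actual Airbus logic)."""
    if mode == "VS_FPA" and airspeed > vmax:
        return "OPEN_CLB" if fcu_altitude > altitude else "OPEN_DES"
    return mode

# On a steep descent to recapture the glideslope, the aircraft accelerates past
# the limit while the FCU altitude is still set above the aircraft:
print(vertical_mode_after_speed_protection("VS_FPA", airspeed=270, vmax=260,
                                            fcu_altitude=3000, altitude=2000))
# -> "OPEN_CLB": the aircraft starts climbing while the crew is trying to land.
```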
The most prominent example of this flight mode logic almost leading to an accident was Tarom flight 381, an Airbus A310 traveling from Bucharest to Paris Orly in 1994. The mode reversion to OPEN CLB caused the aircraft to climb steeply on approach, stall, and lose over 3,000 ft of altitude before the pilots recovered at 800 ft.

To examine the benefits and limitations of the methodology presented in this work, the Tarom flight 381 scenario was modified to serve as an example scenario. The sequence of the modified scenario examined in this work is shown in Figure 1. In this scenario, the pilots are told by Air Traffic Control (ATC) to level off while on the ILS glideslope (G/S) and are subsequently forced to descend with a higher FPA to recapture the G/S when ATC clears them to resume the approach. ATC may issue such an instruction if the airspace is congested or the runway is still occupied. Such a scenario is easy to implement and it is realistic. The longer the aircraft is commanded to hold altitude, the steeper the FPA required to regain the G/S, and the more the aircraft accelerates.

This incident is interesting because it involves the correct functioning of the autoflight control logic, yet resulted in unacceptable behavior of the joint human-automation system. The control logic acted in such a way as to surprise the pilots and, in the case of Tarom flight 381, to cause the pilots to fight the autoflight system until the aircraft stalled. It is this interaction of autoflight logic, context, and human variability that makes HAI problems difficult to identify and analyze.

A key challenge in modeling AS, such as the AS linked to the described Airbus speed protection logic, is the inclusion of the pilot's mental model. Here, the mental model comprises two constructs: expectations and beliefs. The pilot's expectations about the behavior of the aircraft are based on the pilot's belief about the aircraft's current flight mode and his expectation of the aircraft behavior resulting from that mode. The pilot's expectations are based on his observations of the current state of the aircraft, his previous actions, and his mental model, i.e. his understanding of how the aircraft will react to commands given its current state. When the pilot knowingly commands changes, those changes are expected and anticipated.

The scenario described above was modeled in MC using the SAL (Symbolic Analysis Laboratory) suite of model checkers from SRI [13]. The implementation is thoroughly described in [14] and is only summarized here. The model contained three components: automation (the autopilot and autothrottle), pilots (all aspects of human-automation interaction) and airplane (the dynamics of the aircraft with its engines); seven state variables: flap_config, vertical_mode, mental_model (enumerated types) and speedvals, altvals, thrustvals, pitchvals (reals with bounded ranges); and two constants: VMAX (the speedvals maximum) and Vfe (the speedvals maximum with flaps extended). Additionally, a constraints module is specified to model the aircraft dynamics; it is formulated as a synchronous observer to restrict the model checking to those states which do not violate the constraints, i.e. to inspect only states that represent realistic aircraft dynamics. A final module, in the form of another synchronous observer, is added to seek out the AS, should it occur. The AS is flagged by this module when the expectation (here: the mental model) no longer corresponds to an aspect of the state of the aircraft; in this case, when the pitch deviates from the pilot's expectation about the pitch.
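The fragment below is a loose Python analogue of that final observer module, not SAL syntax; the tolerance and variable names are hypothetical. It simply raises the surprise flag whenever the mental-model expectation about pitch no longer matches the aircraft's actual pitch.

```python
PITCH_TOLERANCE_DEG = 2.0   # hypothetical threshold, not taken from the SAL model

def automation_surprise(expected_pitch_deg, actual_pitch_deg, tol=PITCH_TOLERANCE_DEG):
    """Analogue of the AS-detecting synchronous observer: flag a surprise when
    the pilot's pitch expectation no longer corresponds to the aircraft state."""
    return abs(expected_pitch_deg - actual_pitch_deg) > tol

# The pilot expects the commanded flight path to be held on approach, but the
# reversion to OPEN CLB pitches the aircraft up instead:
print(automation_surprise(expected_pitch_deg=-3.0, actual_pitch_deg=12.0))   # True
```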
The example scenario was also modeled in the hybrid-time, agent-based simulation framework WMC. Specifically, WMC is used to model the aircraft dynamics, the autoflight and flight management system logic, the flight crew, and ATC. Similar to MC, a mental model for the human agents is required in WMC to simulate an AS. In contrast to SAL, WMC has more flexibility with the actions performed by the agents. The structure of WMC makes it simpler to create the mental-model aspect of beliefs about the current state of the aircraft. The beliefs can be updated in the background every time the human agent performs an action in which he perceives state changes in his environment. However, simply taking a copy of the state of the world every timestep and updating the belief system is not sufficient to detect an AS: these beliefs are updated regularly and are therefore always in sync with reality, providing no way to detect a surprise. The human agent's expectations aspect of the mental model (in SAL simply called the mental model) must also be modeled. Such an expectation construct needs to formulate the agent's expectation about the aircraft's future state based on the actions he performs. Four basic aircraft state variables are used to compute pilot expectation: altitude, airspeed, vertical speed and heading. AS result from the pilot's expectations about these variables being violated.

To formulate expectations, the aircraft variables that cause the changes to the basic aircraft variables were defined. Different flight modes follow different guidance logic. Depending on the flight mode, there are different target values the aircraft tries to maintain or achieve, which include airspeed, vertical speed, altitude, latitude, lon...
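A sketch of the belief/expectation distinction described above follows, with hypothetical class, method, and threshold names (WMC's actual data structures are not reproduced here): beliefs are refreshed from the perceived state whenever the agent acts, whereas expectations over the four basic variables are set only when the pilot commands a change and are later compared against what the aircraft actually does.

```python
BASIC_VARS = ("altitude", "airspeed", "vertical_speed", "heading")

class PilotMentalModel:
    """Hypothetical sketch: beliefs track the perceived state, while
    expectations predict future state and are what an AS violates."""

    def __init__(self):
        self.beliefs = {}
        self.expectations = {}

    def perceive(self, aircraft_state):
        # Beliefs are refreshed whenever the pilot performs an action and
        # perceives state changes, so they stay in sync with reality.
        self.beliefs = {v: aircraft_state[v] for v in BASIC_VARS}

    def command(self, expected_outcome):
        # Expectations are set only when the pilot knowingly commands a
        # change, e.g. a steeper descent to recapture the glideslope.
        self.expectations.update(expected_outcome)

    def violated_expectations(self, aircraft_state, rel_tol=0.1):
        # An AS is flagged when the aircraft violates an expectation about
        # one of the four basic variables by more than the tolerance.
        return [v for v, expected in self.expectations.items()
                if abs(aircraft_state[v] - expected) > rel_tol * max(abs(expected), 1.0)]

pilot = PilotMentalModel()
pilot.perceive({"altitude": 4000.0, "airspeed": 250.0, "vertical_speed": -700.0, "heading": 270.0})
pilot.command({"vertical_speed": -1200.0})     # commanded steeper descent
state = {"altitude": 4100.0, "airspeed": 262.0, "vertical_speed": 1500.0, "heading": 270.0}
print(pilot.violated_expectations(state))      # ['vertical_speed']: the aircraft is climbing
```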
