
    Example of a Complementary Use of Model Checking and Agent-based Simulation

    Gabriel E. Gelman
    School of Aerospace Engineering
    Georgia Institute of Technology
    Atlanta, USA
    g.gelman@gatech.edu

    Karen M. Feigh
    School of Aerospace Engineering
    Georgia Institute of Technology
    Atlanta, USA
    karen.feigh@gatech.edu

    John Rushby
    Computer Science Laboratory
    SRI International
    Menlo Park, USA
    rushby@csl.sri.com

    Abstract: To identify problems that may arise between pilots and automation, methods are needed that can uncover potential problems with automation early in the design process. Such potential problems include automation surprises, which describe events when pilots are surprised by the actions of the automation. In this work, agent-based, hybrid-time simulation and model checking are combined and their respective advantages leveraged in an original manner to find problematic human-automation interaction (HAI) early in the design process. The Tarom 381 incident involving the former Airbus automatic speed protection logic, leading to an automation surprise, was used as a common case study for both methodology validation and further analysis. Results of this case study show why model checking alone has difficulty analyzing such systems and how the incorporation of simulation can be used in a complementary fashion. The results indicate that the method is suitable to examine problematic HAI, such as automation surprises, allowing automation designers to improve their design.

    Index Terms: simulation, model checking, automation surprise, mental model, formal methods

    I. INTRODUCTION

    One failure of human-automation interaction (HAI) that exhibits significant risk to aviation is when automation behavior deviates from what pilots expect. This effect is called automation surprise (AS) and has been examined by Wiener [1], Sarter and colleagues [2], and Palmer [3]. A common example of AS in aviation is when the aircraft flight mode automatically changes and the pilot is neither aware of the change, nor able to comprehend the behavior resulting from the flight mode, i.e., mode confusion. Javaux [4] concluded that mode confusion appears because pilots are not aware of all the preconditions for a given mode change. Javaux's work explained how pilots simplify or reduce the number of preconditions that trigger a certain mode change to only the preconditions that are salient. Preconditions for intended mode changes that are not salient, but are almost always satisfied, are forgotten. Circumstances in which the forgotten precondition is not satisfied often lead to mode confusion.

    The challenge presented by AS is that the automation is operating as designed. Yet, the flight crew is still unable to predict or explain the behavior of the aircraft. It is clear that the problem lies with a mismatch between the pilot's expectation, which is driven by his/her mental model of the autoflight system, and the underlying control logic. Norman [5] has done fundamental work on mental models in the human-computer interaction domain, which includes HAI. To be able to investigate system behavior and predict AS and other potential problems that may arise between collaborators in such systems, a method for designers is needed that investigates not just autoflight system validation, but also the pilot's understanding of how the particular design works.

    Two methods for analyzing system behavior and detecting potential problems in HAI are: model checking (MC), as done by Rushby and colleagues [6]-[8], and agent-based simulation, as done by Corker and colleagues [9]. MC and simulation are two distinct analysis methods, both having their advantages and limitations. MC originated in computer science as a technique to check whether a finite-state system (the model) satisfies a given specification, usually written in a temporal logic. If the specification is not satisfied, the model checker can construct a counterexample that drives the model through a scenario to manifest the violation.
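
    To make this concrete, the sketch below (ours, not from the paper) shows the essence of explicit-state model checking: exhaustively search the reachable states of a finite-state model and return a counterexample trace if a safety property is violated. The toy model and property are invented for illustration; Python is used only as notation.

    from collections import deque

    def model_check(initial, successors, safe):
        """Exhaustively explore a finite-state model breadth-first.
        Return None if `safe` holds in every reachable state; otherwise
        return a counterexample trace from `initial` to a violating state."""
        parent = {initial: None}
        queue = deque([initial])
        while queue:
            state = queue.popleft()
            if not safe(state):
                trace = []                       # rebuild the violating path
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in successors(state):
                if nxt not in parent:            # visit each state only once
                    parent[nxt] = state
                    queue.append(nxt)
        return None                              # property holds for the model

    # Hypothetical model: (mode, speed) pairs with a toy protection rule.
    trace = model_check(
        ("VS_FPA", 180),
        lambda s: {("OPEN_CLB", s[1])} if s[1] > 200 else {(s[0], s[1] + 10)},
        safe=lambda s: s[0] != "OPEN_CLB",       # "the mode never changes"
    )
    print(trace)   # ends in ("OPEN_CLB", 210): the property is violated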

    MC technology has advanced considerably and is no longer limited to finite-state systems; so-called infinite bounded model checkers can handle models that include mathematical integer and real values, and checkers for hybrid systems allow these values to be related by differential equations. The appeal of MC as an analysis technique is that it considers all possible behaviors of the model, so it is certain to find a violation of the specification if one exists, and can prove correctness otherwise.
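
    As an illustration of bounded model checking over unbounded integers, the sketch below unrolls a one-variable transition system to a fixed depth and asks an SMT solver for a violating path. It is our example, written against the open-source Z3 solver's Python API (pip install z3-solver); the dynamics, property, and bound are invented for illustration.

    from z3 import Int, Solver, Or, sat

    DEPTH = 10                                        # unrolling bound
    alt = [Int(f"alt_{k}") for k in range(DEPTH + 1)] # altitude at each step

    s = Solver()
    s.add(alt[0] == 3000)                             # initial state: 3000 ft
    for k in range(DEPTH):
        # Transition relation: climb or descend 500 ft at each step.
        s.add(Or(alt[k + 1] == alt[k] + 500, alt[k + 1] == alt[k] - 500))

    # Safety property: altitude stays above 1000 ft. BMC asks the solver
    # for a path of length <= DEPTH on which the property is violated.
    s.add(Or(*[alt[k] < 1000 for k in range(DEPTH + 1)]))

    if s.check() == sat:
        m = s.model()
        print([m.eval(a) for a in alt])               # concrete violating trace
    else:
        print(f"no violation up to depth {DEPTH}")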

    A limitation of MC is that the underlying technology is automated deduction, where all problems are at least NP-hard, and so the model cannot be too large or too complicated if the analysis is to complete in a reasonable time. Consequently, the full or concrete model must often be considerably simplified before it can be model checked. If the simplified model is a true abstraction of the concrete model (i.e., includes all its behaviors), then safety properties proved of the abstraction are also true of the concrete model; the converse may not be so, however, for a property that is true of the concrete system may fail in MC because the abstraction is too coarse (i.e., the model has been oversimplified, leading to spurious counterexamples).

    To apply MC to HAI problems, we compose models of the automation (often a complex state machine), the controlled plant (typically a hybrid system whose dynamics are described by differential equations), and human expectations about future states, and look for divergences between expectation and true state. To make the analysis feasible, the models of the plant and of the automation usually must be drastically simplified (e.g., time may not be accurately modeled, or the plant may even be omitted altogether), which can result in a spurious counterexample. The counterexample must then be examined to see if it suggests a concrete scenario that manifests a genuine problem, or is spurious. If it is spurious, the abstraction must be refined to eliminate the responsible oversimplification, and the whole process repeats.
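
    A minimal sketch of this composition follows; the transition tables are invented placeholders, not the models used in this work. An automation model and a reduced pilot mental model are stepped on the same event sequences, and all sequences up to a small bound are searched for a divergence between expected and actual mode, i.e., an automation surprise.

    from itertools import product

    # Hypothetical automation logic, including a speed-protection transition.
    AUTOMATION = {
        ("VS_FPA", "overspeed"):   "OPEN_CLB",
        ("VS_FPA", "alt_capture"): "ALT",
    }
    # The pilot's reduced mental model: the overspeed precondition is absent.
    MENTAL_MODEL = {
        ("VS_FPA", "alt_capture"): "ALT",
    }

    def step(table, mode, event):
        return table.get((mode, event), mode)    # unmodeled events: no change

    def find_surprise(events=("overspeed", "alt_capture", "none"), depth=4):
        """Search all event sequences up to `depth` for a state in which the
        automation's actual mode differs from the mode the pilot expects."""
        for seq in product(events, repeat=depth):
            actual = expected = "VS_FPA"
            for e in seq:
                actual = step(AUTOMATION, actual, e)
                expected = step(MENTAL_MODEL, expected, e)
                if actual != expected:
                    return seq, actual, expected
        return None

    print(find_surprise())   # e.g. (('overspeed', ...), 'OPEN_CLB', 'VS_FPA')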

    Due to the difficulties of constructing effective abstractions, interpreting counterexamples, and refining abstractions to eliminate spurious counterexamples, analysis by MC has been impractical for all but the most basic HAI problems [10].

    Thus, HAI has more generally been analyzed using a combination of human-in-the-loop studies and agent-based simulation, where the human agents are modeled to capture the aspects of human perception and cognition of interest to the study at hand. Prime examples of such an agent-based modeling approach used in the aviation domain are the MIDAS simulation created by Corker and colleagues [9] and WMC (Work Models that Compute) as described in [11], [12]. Agent-based simulations have the ability to use a variety of timing mechanisms (continuous, discrete-event, or hybrid) to account for the dynamics of the systems under analysis, and generally include hundreds of states when all agents are accounted for. Agent-based simulations, however, are limited to exploring one trace through the state-space per run. Thus, to explore the system more fully, multiple simulation runs are required. Common simulation analysis techniques such as Monte Carlo methods attach probability distributions to key input variables and then randomly sample from the distributions to perform thousands of simulation runs. Simulations can never exhaustively explore the state-space that their models represent, and thus cannot be used to prove the safety of the modeled system. Simulations can provide more information about the events leading to problems in HAI and work with sophisticated and realistic models of human agents, automated agents, and their environment. However, simulations depend strongly on the assumptions made, as well as the quality of the modeling and input data.
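
    The sketch below illustrates this Monte Carlo technique: attach a distribution to a key input variable (here, a pilot reaction delay), sample it across many runs of a single-trace simulation, and estimate how often a problem occurs. The scenario and all numbers are invented for illustration and are not the models used in this work.

    import random

    def simulate_one(reaction_delay_s, dt=0.1):
        """One hybrid-time trace: the aircraft descends continuously until a
        discrete pilot-reaction event; True means the reaction came in time."""
        altitude, t = 800.0, 0.0                  # ft, s
        while t < 60.0:
            if t >= reaction_delay_s:             # discrete event: pilot acts
                return altitude > 500.0           # safe-recovery threshold
            altitude -= 50.0 * dt                 # continuous dynamics: -50 ft/s
            t += dt
        return False                              # no reaction within the run

    # Attach a distribution to the key input and sample thousands of runs.
    runs = 10_000
    safe = sum(simulate_one(random.gauss(5.0, 2.0)) for _ in range(runs))
    print(f"safe in {safe / runs:.1%} of {runs} runs")  # an estimate, not a proof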

    MC and agent-based simulation have complementary strengths and weaknesses: MC can examine all possible scenarios, but only for simplified models; simulation can examine realistic models, but only a fraction of the state-space. A method that leverages the strengths of both could allow a much more complete and systematic evaluation of realistic models than current practice. The objective of this work is to evaluate a common scenario using both methods, compare and contrast the outputs and inputs, and examine the assumptions and requirements for the modeling. More generally, such approaches can be used to study future autonomy and authority assignments between pilots and automation, the risks such new assignments bear, and ultimately how to mitigate these risks. Finally, the ability to use two distinct methods jointly could have implications for methods for examining human-integrated systems in general.

    Fig. 1. Example scenario in which AS occurs.

    II. EXAMPLE SCENARIO: A320 SPEED PROTECTION

    This work uses a known AS found in previous versions of Airbus models A300, A310, and A320 as a case study. An unexpected automatic flight mode change during approach was initiated by the automatic speed protection logic. The result is a mode change from vertical speed / flight path angle (V/S-FPA) mode, in which the aircraft maintains either vertical speed or flight path angle at a certain value, to an open mode, which gives priority to the airspeed rather than the vertical speed or flight path angle. Two open modes are available: open climb (OPEN CLB) and open descent (OPEN DES). The open modes ignore any flight path angle or vertical speed previously commanded and rely on the altitude entered in the Flight Control Unit (FCU). If the FCU altitude is set to an altitude above the current altitude, the automation switches into the OPEN CLB flight mode and starts climbing. In descent, this mode change results in an AS to the pilots, who are attempting to land. The most prominent example of where this flight mode logic almost led to an accident was Tarom flight 381 in 1994.
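
    The mode logic described above can be condensed into the simplified state machine sketched below. The trigger and numbers are placeholders: the actual Airbus speed protection involves more preconditions than shown here.

    from dataclasses import dataclass

    @dataclass
    class Autoflight:
        """Simplified sketch of the described speed-protection mode logic."""
        mode: str = "VS_FPA"           # vertical speed / flight path angle mode
        fcu_altitude: float = 4000.0   # ft, altitude entered in the FCU

        def update(self, current_altitude: float, overspeed: bool) -> str:
            if self.mode == "VS_FPA" and overspeed:
                # Speed protection: ignore the commanded V/S or FPA and fly
                # an open mode toward the FCU altitude, prioritizing airspeed.
                if self.fcu_altitude > current_altitude:
                    self.mode = "OPEN_CLB"    # climb: the AS during approach
                else:
                    self.mode = "OPEN_DES"
            return self.mode

    ap = Autoflight(fcu_altitude=4000.0)
    # On approach at 2000 ft with the FCU set above the aircraft, an overspeed
    # trigger commands a climb instead of a continued descent to land.
    print(ap.update(current_altitude=2000.0, overspeed=True))   # -> OPEN_CLB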
