This article was downloaded by: [Tufts University] on 27 October 2014, at 07:33.
Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Applied Artificial Intelligence: An International Journal
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/uaai20

The Role of Reflection in Simulating and Testing Agents: An Exploration Based on the Simulation System JAMES
A. M. Uhrmacher (a), M. Röhl (a) & B. Kullick (b)
(a) Department of Computer Science, University of Rostock, Rostock, Germany
(b) Microsoft Services Custom Development, Unterschleissheim, Germany
Published online: 09 Jun 2010.

To cite this article: A. M. Uhrmacher, M. Röhl & B. Kullick (2002) The Role of Reflection in Simulating and Testing Agents: An Exploration Based on the Simulation System JAMES, Applied Artificial Intelligence: An International Journal, 16:9-10, 795-811, DOI: 10.1080/08839510290030499

To link to this article: http://dx.doi.org/10.1080/08839510290030499

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content. This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions



THE ROLE OF REFLECTION IN SIMULATING AND TESTING AGENTS: AN EXPLORATION BASED ON THE SIMULATION SYSTEM JAMES

A. M. UHRMACHER and M. RÖHL
Department of Computer Science, University of Rostock, Rostock, Germany

B. KULLICK
Microsoft Services Custom Development, Unterschleissheim, Germany

Simulation methods offer an experimental approach for analyzing the dynamic behavior of multi-agent systems. Multi-agent systems are able to access their own behavior. If agents are specified in the modeling language and become part of the simulation, the simulation system has to support reflection, i.e., models which access their own structure and behavior. The system-theoretic formalism DYNDEVS allows one to specify reflective dynamic models and their behavior, and forms a sound basis for implementing systems for simulating and testing agents. Within a simulation system, the behavior of agents can be analyzed based on conceptual models, or by coupling software agents and the simulation system. To support the latter, models in JAMES are equipped with peripheral ports that enable them to communicate with externally running processes, e.g., agents. Thereby, models form an interface between simulation and agents and can easily be used to reflect "online" the state and activities of the externally running agent. To intertwine simulation and agent execution, method calls of the agents have to be redirected from the normal runtime environment to the simulation. In the opposite direction, events of the simulation have to be transformed into concrete method calls. Higher-order programming mechanisms, such as the reflection mechanism in Java, are useful for implementing this type of interaction between simulation and agents. Thus, from theory to implementation, reflection in its many facets plays a crucial role in developing simulation systems for agents.

Address correspondence to A. M. Uhrmacher, Department of Computer Science, University of Rostock, Albert-Einstein-Strasse 21, Rostock, D-18051, Germany. E-mail: [email protected]


Applied Artificial Intelligence, 16:795-811, 2002
Copyright © 2002 Taylor & Francis
0883-9514/02 $12.00 + .00
DOI: 10.1080/08839510290030499


INTRODUCTION

The more complex agent applications become in terms of the number of agents, network nodes to be visited, or deliberation capabilities, the more it pays to thoroughly analyze their behavior and performance. Simulation forms an experimental method for analyzing the behavior of agents in virtual dynamic environments and has been widely employed to test the interplay between reactivity and deliberation (Schut and Wooldridge 2000; Excelente-Toledo, Bourne, and Jennings 2001); cooperation strategies (Asada, Kitano, Noda, and Veloso 1999); mobile agents in networks (Dikaiakos and Samaras 2000); and the relationship between individual utility functions and common goods (Wolpert and Lawson 2002).

In the following, we explore the role of reflection in simulating agents. In the context of programming, reflection is defined as "an entity's integral ability to represent, operate on, and otherwise deal with itself in the same way that it represents, operates on, and deals with its primary subject matter" (Ibrahim 1991). The ability of reflection is closely related to the self-awareness of programs and their capacity for self-adaptation. Consequently, the ability of reflection is considered a salient feature of software agents (Ferber and Carle 1992; Jennings et al. 1998).

In system theory, George Klir was one of the first to address the problem of reflection by introducing meta systems in his General Systems Problem Solver, which supports specifying different levels of system-analytical problems (Klir 1985). Meta systems contain the knowledge about how behavioral or structural systems are interrelated over time. Whereas the composition of systems helps to integrate several compatible systems into a larger one, meta systems "integrate" systems by replacement, i.e., they are only temporarily part of the overall system. Zeigler and Oren developed "variable structure models" to describe meta systems: models that are able to represent, control, and modify their own behavior (Zeigler and Oren 1986). Thus, from the point of view of system theory, multi-agent systems are reflective, dynamic systems. To support the simulation of multi-agent systems, the simulation and its underlying formalism have to support reflection, i.e., models which access their own structure and behavior.

The implementation and application of dynamic test scenarios for multi-agent systems requires considerable modeling effort. Typically, the agent is not modeled in its entirety. Early simulation systems for agents already allowed users to plug code fragments or single modules into the skeleton of an agent model (Montgomery and Durfee 1990). To further reduce the modeling effort, agents are sometimes treated as an external source and drain of events (Pollack 1996; Anderson 1997). The loose coupling of simulation and agents saves the user the extra effort of specifying the agent in the modeling language of the simulation system. However, typically, more effort is required to analyze


the interaction and actions of agents in the virtual world. The agent is only perceivable by the effects its activities have on the environment. If its state and activities shall be "mirrored" in the simulation, specific mechanisms are required.

"What do you see when you look at your face in the mirror? The obvious answer is that you see your face looking in the mirror. But this obvious answer fails to acknowledge the fact that a face looking in the mirror is actually doing two things—trying to see itself and presenting itself to be seen. Sometimes these two activities are visibly distinct . . . Yet even when your face just stares out at you, flat-footed, it bears the same two aspects: It is both eyeing you, in order to see you, and facing you, in order to be seen." (Velleman 1989, 3)

A MODELING FORMALISM FOR SIMULATING MULTI-AGENT SYSTEMS

Most modeling and specification formalisms presuppose static structures of composition and interaction, which reduces their usability for multi-agent systems (Jung and Fischer 1998) and nurtures the development of extensions. For example, Asperti and Busi developed Dynamic Nets when they found Petri Nets too static to directly express processes with changing structure (Asperti and Busi 1996). Dynamic Nets are a reflective extension of Petri Nets whose transitions produce new Petri Nets (Asperti and Busi 1996). They suggested extending colored Petri Nets to allow an explicit transmission of processes: Petri Nets can be defined as token colors, and thus support the modeling of mobility. Reference Nets are an implementation of this idea (Kohler et al. 2001). As do Dynamic Nets, the formalism DYNDEVS adds reflection to the DEVS formalism to capture the notion of self-aware and self-manipulating agents (Uhrmacher 2001).

The Basis: DEVS

DEVS belongs to the formal and general approaches to discrete event simulation. The model design supports a hierarchical, compositional construction of models. It distinguishes between atomic and coupled models. Atomic models are equipped with input and output ports (X, Y) by which they communicate with their environment. Their behavior is defined by transition functions, an output function (λ), and a time advance function (ta), which determines how long a state persists "per se." An internal transition function (δ_int) dictates state transitions due to internal events, the time of which is determined by ta. At the time an internal event is due, the output function (λ) produces an output, which is sent via the output ports. The external transition function (δ_ext) is triggered by the arrival of external events


via the input ports. A coupled model is a model consisting of different components and specifying the coupling between its components. Like atomic models, coupled models are equipped with input and output ports. A coupled model is described by a set of component models, which may be atomic or coupled, and by the couplings that exist among the components and between the components and its own input and output ports. Coupled models do not add functionality to atomic models, since each coupled model can be expressed as an atomic model, i.e., DEVS models are closed under coupling. By a hierarchy of atomic and coupled models, DEVS supports the description of agents and their environment as time-triggered composite automata. An abstract simulator equips the DEVS formalism with a clear execution semantics.
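The interface of an atomic model described above can be sketched in plain Java. The names below (AtomicModel, deltaInt, and so on) are illustrative assumptions, not the JAMES API:

```java
// A sketch of a DEVS atomic model as a Java interface; the names are
// illustrative, not the JAMES API. S is the state set, X/Y the input
// and output event types.
interface AtomicModel<S, X, Y> {
    double ta(S s);                       // time advance: how long state s persists "per se"
    Y lambda(S s);                        // output function, invoked when an internal event is due
    S deltaInt(S s);                      // internal transition, triggered by the flow of time
    S deltaExt(S s, double elapsed, X x); // external transition, triggered by arriving events
}

// A toy model: emits "ping-<n>" every 5 time units and resets its
// counter whenever an external event arrives.
class Ping implements AtomicModel<Integer, String, String> {
    public double ta(Integer n) { return 5.0; }
    public String lambda(Integer n) { return "ping-" + n; }
    public Integer deltaInt(Integer n) { return n + 1; }
    public Integer deltaExt(Integer n, double elapsed, String x) { return 0; }
}
```

A simulator would repeatedly call lambda and deltaInt after ta(s) time units, interrupting with deltaExt whenever an influencer produces an event.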

A Reflective Extension: DYNDEVS

DEVS is based on static model structures and does not provide means for reflection. However, those are crucial in modeling and simulating agents. To support variable structures in DEVS, several approaches have been developed since the eighties. Most of them distinguish between controlling and controlled units. A model cannot change itself; its adaptation depends on

A Dynamic DEVS is a structure

dynDEVS =df ⟨X, Y, Z^i, Z^o, m_init, M(m_init)⟩ with

X, Y: sets of model inputs and model outputs
Z^i, Z^o: inputs and outputs from and to external processes
m_init ∈ M(m_init): the initial model

M(m_init) is the least set having the structure

{ ⟨S, s_init, δ_ext, δ_int, ρ_α, λ, ta⟩ |
  S: set of states
  s_init ∈ S: the initial state
  δ_ext: Q × X → S × Z^o: the external transition function, triggered by the arrival of events which have been produced by other models (its influencers), with Q = {(s, z, e) : s ∈ S, z ∈ Z^i, 0 ≤ e ≤ ta(s)}
  δ_int: S × Z^i → S × Z^o: the internal transition function, triggered by the flow of time
  ρ_α: S × Z^i → M(m_init): the model transition function, which determines the next "incarnation" of the model in terms of state space and behavior pattern
  λ: S × Z^i → Y: the output function, which fills the output port and triggers external transitions within influenced components
  ta: S × Z^i → R₀⁺ ∪ {∞}: the time advance function, which determines how long a state persists }

and satisfying the property

∀n ∈ M(m_init). (∃m ∈ M(m_init). n = ρ_α(s_m) with s_m ∈ S_m) ∨ n = m_init

FIGURE 1. The formalism DYNDEVS. Please note that, compared to the specification of the original formalism, peripheral ports have been introduced in addition.


another model. Thereby, a hierarchy of controllers is installed (Zeigler et al. 1991; Barros 1997).

To support models that are able to access their composition, interaction, and behavior structure from an agent's perspective, the DYNDEVS formalism has been developed, which expresses agents as reflective, time-triggered composite automata (Uhrmacher 2001). Figure 1 shows the definition of an atomic model in DYNDEVS. An atomic model in DYNDEVS is defined as a set of models that inherit state set, transition, output, and time advance functions from DEVS atomic models. The reflectivity is introduced by the model transition (ρ_α), which maps the current state of a model into a set of models to which the model belongs. Thereby, sequences of models are produced. A model can change its own state and its behavior pattern, i.e., its transition, output, and time advance functions, during simulation. The model transition (ρ_α) does not interfere with other transitions; it preserves the values of variables which are common to two successive models and assigns initial values to the "new" variables (for a more detailed discussion of the formalism, see [Uhrmacher 2001]).
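The variable-preserving behavior of the model transition can be illustrated with a small helper, under the simplifying assumption that a model's state is represented as a map from variable names to values (not how the formalism or JAMES actually represents states):

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of how a DynDEVS-style model transition hands over state to the
// next "incarnation" of a model: variables common to both models keep their
// current values, variables new in the successor get their initial values,
// and variables absent from the successor are dropped.
class ModelTransition {
    static Map<String, Object> carryOver(Map<String, Object> currentState,
                                         Map<String, Object> successorDefaults) {
        Map<String, Object> next = new HashMap<>(successorDefaults);
        for (Map.Entry<String, Object> e : currentState.entrySet()) {
            if (successorDefaults.containsKey(e.getKey())) {
                next.put(e.getKey(), e.getValue()); // common variable: preserve value
            }
        }
        return next;
    }
}
```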

As the coupled model holds the information about composition and interaction between components, a change of composition or interaction, even though induced by an atomic model, takes effect at the level of the coupled model. Therefore, coupled DYNDEVS models are introduced. Coupled models are formed by sets of models as well. A so-called network transition maps the current state of the coupled model, in terms of the states of its components, into a possibly new network with new components, new couplings, new domains of these components, and a new network transition function. It is triggered by changes in its components' states, which have been induced by external or internal transitions, and interprets those looking for possibly implied structural changes at the coupled model level. We have to note that, ultimately, structural changes at the level of coupled models come down to changes of the underlying atomic model, since atomic models in DYNDEVS are closed under coupling.

Often only the implementation defines the semantics of a formalism, e.g., with respect to events happening at the same time (Harel and Naamad 1996). In DYNDEVS, the semantics are defined by the abstract simulator, which is responsible for executing the model and for resolving conflicts between structural and non-structural changes in an unambiguous way (Uhrmacher 2001).

The Implementation: JAMES

Based on the formalism DYNDEVS, JAMES (A Java-Based Agent Modeling Environment for Simulation) has been developed (Uhrmacher et al. 2000). Each agent can be equipped with a knowledge base, i.e., a collection of facts


about itself and its environment that the agent assumes to be true. Based on this knowledge, an agent is able to deliberate, e.g., to develop action plans. An action typically has some effect on the internal state of the agent; in addition, it might affect an agent's environment. The latter takes shape in charging the output ports, and thus influencing other components via their input ports, or in initiating structural changes, e.g., the change of a behavior pattern, the creation and adding of models, the deletion or removal of itself, and accessing its interaction structure.

Whereas structural changes are initiated by atomic models, many of them, e.g., the creation of new models, are actually executed at the level of the coupled model. An explicit network transition function does not exist at the level of the coupled model. Instead, each component implicitly purveys one part of the network transition, which is composed at the coupled model level. Since structural changes can be initiated concurrently, conflicts might arise. To prevent these conflicts and to emphasize the perception of autonomous, yet knowledge- and resource-restricted agents, the range within which structural changes can be initiated has been restrained. Models can create and add models in the coupled model to which they belong, they can delete and remove themselves, and they can access their interaction structure. To initiate structural changes outside their boundary, agents have to turn to communication and negotiation. Thus, a movement from one coupled model to another implies that another atomic model complies with the request to add the moving model into the new interaction context. To facilitate modeling, all atomic models are equipped with default methods that allow them to react to those requests. However, these default reactions can be suppressed to decide deliberately what requests shall be executed. The freedom to decide whether to follow a certain request, e.g., to commit suicide, and its knowledge, i.e., beliefs, about itself and its environment, distinguish active agents from more "reactive" entities (Jennings et al. 1998).

Executing the model according to the user's specification and the given initial situation is the task of a discrete event simulator. Each model is interpreted and executed by a tree of processors, which reflects the hierarchical compositional structure of the model. Each of the processors is associated with a component of the model and is responsible for invoking the component's methods and controlling the synchronization by exchanging messages with the other processors of the processor hierarchy. A change of the model structure is reflected in a corresponding change of the processor tree. Different distributed, parallel execution strategies have been implemented in JAMES (Uhrmacher and Gugler 2000; Uhrmacher and Krahmer 2001). Whereas one adopts a conservative strategy where only events which occur at exactly the same simulation time (including starting external processes) are processed concurrently, two other strategies split simulation and external processes into different threads and allow simulation and deliberation to


proceed concurrently by utilizing simulation events as synchronization points. The implemented execution mechanisms are based on the abstract simulator introduced in DYNDEVS.
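The correspondence between model components and processors might be sketched as follows; the Processor class and its methods are hypothetical helpers for illustration, not the JAMES implementation:

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of the processor hierarchy: each processor is associated with one
// model component, and the tree mirrors the hierarchical composition of the
// model. Adding or removing a component implies a corresponding change of
// the tree, as countProcessors illustrates.
class Processor {
    final String component;
    final List<Processor> children = new ArrayList<>();

    Processor(String component) { this.component = component; }

    Processor add(String childComponent) {    // structural change: new component
        Processor child = new Processor(childComponent);
        children.add(child);
        return child;
    }

    int countProcessors() {                   // one processor per component
        int n = 1;
        for (Processor c : children) n += c.countProcessors();
        return n;
    }
}
```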

An Example: Modeling Mobile Agents

Modeling mobile agents in JAMES requires the utilization of the different transition functions and the output function of the atomic model (Uhrmacher et al. 2000). The process of moving comprises adding and removing model components from coupled models (Figure 2), modifying the interaction structure within the coupled models, and the possibility of sending references, i.e., names, or model components within messages.

In the scenario depicted in Figure 2 (Uhrmacher and Kullick 2000), the model that represents a client C requests a task from another model that represents an agent A. The agent model responds by invoking its external transition function and decides to move to the location L2. Both locations L1

and L2 are represented by coupled models. After some simulation time has elapsed, as determined by the time advance function, the agent model charges its port with a request to migrate, and thus initiates the migration. The output function and internal transition function are intrinsically connected; they form a unity and are invoked at the same simulation time. This offers the opportunity to update the state of the agent model at the moment at which the migration starts. The model removes itself and starts traveling; it ceases to exist within the former location L1, and the structure of the coupled model L1 has changed. The time the movement will take to be completed depends on the model that is located along the route, i.e., Channel. The Channel that connects both locations L1 and L2 might be modeled as a simple atomic model, a coupled model, or even represent an entire network.

Finally, the message, including the agent A, will reach its destination S. The receiver S will be activated via its external transition function and will be asked to insert the agent model into the new location L2. After inserting the agent into its new location, S will wake up the moved agent by sending a

FIGURE 2. An agent model moving between two locations.


welcome message. The external transition function of the agent model serves as an entry point to resume processing.
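The phases the agent model passes through in this scenario can be caricatured as a toy state machine; the phase and method names are assumptions for illustration, not JAMES classes:

```java
// A sketch of the migration scenario of Figure 2 as a toy state machine.
enum AgentPhase { WORKING, MIGRATING, AT_NEW_LOCATION }

class MigratingAgent {
    AgentPhase phase = AgentPhase.WORKING;

    // internal event: the output function charges the port with a migration
    // request, and the internal transition switches the state in the same
    // simulation instant
    AgentPhase startMigration() {
        if (phase == AgentPhase.WORKING) phase = AgentPhase.MIGRATING;
        return phase;
    }

    // external event: the welcome message sent by the receiver S at
    // location L2 serves as the entry point to resume processing
    AgentPhase receiveWelcome() {
        if (phase == AgentPhase.MIGRATING) phase = AgentPhase.AT_NEW_LOCATION;
        return phase;
    }
}
```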

INTEGRATING AGENTS INTO SIMULATION

In the above example, communication with external processes does not take place; agents and their virtual environment are entirely modeled in JAMES. However, in more realistic agent simulations, an interaction with external processes is typically required, if only to invoke external planning systems (Schattenberg and Uhrmacher 2001).

Peripheral Ports

To test planning and commitment strategies of agents in JAMES (Schattenberg and Uhrmacher 2001), models have been equipped with peripheral ports (Figure 1), which are now used to support the interaction of atomic models with external processes in general.

The classical ports of DEVS models collect and offer events that are produced by models. The peripheral ports in JAMES allow models to communicate with processes that are external to the simulation. Thereby, not the entire simulation system as one black box interacts with external agents; rather, each single model can function as an interface to external processes. If agents and simulation shall interact in simulation time, a function transforms external resource consumption into simulation time. All functions, including the state transition functions, model transition function, output function, and time advance function, are also based on, and partly directed to, the peripheral ports. The external process can fill the peripheral ports at a (wall clock) time when the external process finishes its execution, or at a simulation time which is calculated by applying a time model function to the resource consumption of the external process. Examples of consumed resources are the processor time needed for the calculation, the number of nodes that a planner created during plan generation, or the number of instructions that have been executed. A simulator could easily be implemented within JAMES

that would support an asynchronous exchange of messages in wall clock time. Currently, the different simulators in JAMES support only an explicit synchronization between simulator and external process based on simulation time. The model offers its events to the external process via the peripheral ports. They can not only be used to exchange information between simulation models and external planners, but also to exchange information between a model and an agent as a whole, which runs concurrently to the simulation.
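A time model function of the kind described above might look as follows; the assumed mapping is linear, which is one plausible choice rather than what JAMES prescribes:

```java
// A sketch of a time model function: it translates the resources an external
// process consumed (e.g., processor time, or the number of nodes a planner
// created) into the simulation time at which the peripheral port is filled.
// The linear mapping and the calibration factor are assumptions.
class TimeModel {
    final double simTimeUnitsPerResource; // hypothetical calibration factor

    TimeModel(double simTimeUnitsPerResource) {
        this.simTimeUnitsPerResource = simTimeUnitsPerResource;
    }

    // simulation time at which the external result becomes visible
    double portFillTime(double currentSimTime, double resourcesConsumed) {
        return currentSimTime + simTimeUnitsPerResource * resourcesConsumed;
    }
}
```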


Representatives

To save the extra modeling effort, agents are often treated as an external source and drain of events. Requests and messages that are normally sent to the agent's real environment are redirected to the simulation system. Agents and simulation system are synchronized in simulation time, in which case messages that are exchanged between agents and simulation are labeled with time-stamps (Pollack 1996; Anderson 1997), or simulation and agents interact in wall clock time. An example of the latter is the soccer simulator of the RoboCup initiative (Noda 1995). It checks frequently whether any agent has produced a message for the simulator; otherwise the simulation proceeds. Slowing down the execution of the simulation engine diminishes the time pressure for the agents. The purpose of this type of simulation is to support competition games rather than thorough testing, which requires more control of the experiment. Recently, the bias introduced by the asynchronous interaction via polling has been analyzed to improve the synchronization between simulator and agents (Butler et al. 2001).

The advantage of loosely coupling agents and the simulation system is that it allows one to switch arbitrarily between executing the agents in their normal runtime environment and in the virtual test environment. Compared to the approach in which agents are explicitly modeled, more effort is required to analyze the agents' behavior: agents are not explicitly represented in the simulation, and their behavior can only be analyzed based on the induced effects. The idea of representatives is to associate models with actual agents to combine the benefits of both approaches. A model which represents an agent interacts with it, while it is running, via its peripheral ports. Thereby, these models "reflect," i.e., give evidence of, the actual agent's state and activities.

As does the mirror image in our introduction, representatives reflect the current state of the agent and its activities in the simulation. Like all models, representatives are abstractions, employed to focus the view on relevant aspects and changes within the agents. However, unlike most other models, they are doing this "online." Thereby, their behavior is not only controlled by the dynamics of the simulation, but also by the dynamics of the external processes. They are controlling the agents and are controlled by them. "It is both eyeing you, in order to see you, and facing you, in order to be seen."

Before we describe these controlling mechanisms in detail, we will give anexample of representatives that have been defined in JAMES.

An Example: Representatives of MOLE Agents

MOLE is a Java-based mobile agent system (Baumann et al. 1997). Engines, which represent the MOLE runtime system, transform and forward messages between locations and the network. Each engine can


comprise a set of locations. These locations offer certain services to the agent and represent the source and destination of moving agents. MOLE agents are equipped with a set of methods, e.g., for migrating, for remote procedure calls (RPC), for sending and receiving messages, and for handling the individual life cycle. In addition, MOLE agents can use the entire functionality of Java. Agents can comprise a dynamic set of concurrently running or waiting threads and are not restricted to one line of activity. Therefore, running agents are typically represented as a group of models in JAMES (see Figure 3 [Uhrmacher and Kullick 2000]).

The life of an agent starts the moment a location initiates the creation of an agent. The preparation method is responsible for making the agent an active member of an agent society; it is invoked at the time at which an agent is created or just awakened after a successful migration. Thereafter, the working phase of an agent starts, which includes activating the start method and handling incoming messages and calls concurrently. Whereas the start method runs exactly once, several messages and calls can arrive at the same time. This requires handling several concurrent computation processes. In JAMES, a MOLE agent is represented as one model surrounded by models that represent its running or waiting threads (see Figure 3 [Uhrmacher and Kullick 2000]).

The moment an RPC reaches the agent core model, the core agent will create a satellite to dispatch the remote procedure call (Figure 4). The satellite will transform the incoming request, by using Java reflection, into calling a concrete method of the MOLE agent (see Figure 5 [Uhrmacher and Kullick 2001]). In the opposite direction, significant events of the executing MOLE thread, e.g., the invocation of the migration method, are translated into events directed to the simulation system, which the satellite will forward to the agent core model (Figure 6).

FIGURE 3. Locations, agents, and their processes in JAMES.
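The dispatch step, transforming an incoming request into a concrete method call via the java.lang.reflect API, can be sketched as follows. The request format (a method name plus arguments) and the class names are assumed simplifications, not the MOLE interfaces:

```java
import java.lang.reflect.Method;

// A sketch of how a satellite could turn an incoming remote procedure call
// into a concrete method call on the agent object via Java reflection.
class ReflectiveDispatcher {
    static Object dispatch(Object agent, String methodName, Object... args) {
        try {
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) types[i] = args[i].getClass();
            Method m = agent.getClass().getMethod(methodName, types);
            return m.invoke(agent, args); // the concrete method call
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("dispatch failed: " + methodName, e);
        }
    }
}

// A toy agent with one remotely callable method.
class EchoAgent {
    public String echo(String s) { return "echo:" + s; }
}
```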


In the case of a migration request, the core agent will ask all its satellites to suspend their current threads, serialize its state, launch the migration request into the network, and change to the state migrating. The agent remains in a kind of hibernation at its former location. It waits for either an acknowledgment of a successful migration or a notification that the migration has failed. In the former case, the agent has been successfully installed at its new location and has invoked its start routine; the agent is no longer needed at the old location, so it informs all satellites to stop themselves and commits suicide. In the latter case, something has hindered the installation of the agent in its new location; it becomes lively again and informs its satellites to resume their activities.

Some of the state changes of a representative are initiated by incoming simulation events produced by other models and will influence the agent execution. Other state changes of the model are initiated by the associated

FIGURE 4. The JAMES model of a core agent described as a statechart.

Reflection in Simulating Agents 805

FIGURE 5. The JAMES model of a thread described as a statechart.

FIGURE 6. Intertwining MOLE and JAMES.

MOLE agent. Thus, representatives are both controlling and being controlled by the externally running agent.

INTERTWINING SIMULATION AND AGENT EXECUTION

‘‘When you examine a mole on your chin, for example, you don’t just lower your gaze until it lights on that part of your reflection; you also jut out your chin, until it intercepts your reflected gaze. In this case, there’s no mistaking the fact that the face in the mirror is both seeking itself and showing itself simultaneously.’’ (Velleman 1989, 3)

Methods in MOLE are not simply executed as Java methods but are reflected to make sure that the execution adheres to the security policy. Owing to this mechanism of reflection within MOLE, the invocation of methods can easily be identified, and the calling thread can be suspended to be resumed afterwards. Methods of the MOLE API, which constitute the interface between MOLE agents and their runtime environment, only have to be slightly changed to redirect calls and messages to the simulation system. Those methods automatically fill a peripheral port and trigger the state changes of the atomic model. The port is filled at a simulation time, which is determined by applying a function that translates the resources consumed into simulation time. The resources have been consumed between starting the thread and the thread reaching a method invocation directed to the environment of the agent. Executing one of these methods results in charging a peripheral port of the associated satellite, i.e., Z (see Figure 5 [Uhrmacher and Kullick 2001]), and in suspending the thread of the MOLE agent. Later the thread will be resumed or stopped by the satellite model. Thus, each agent-to-agent communication in MOLE is transformed into a communication from MOLE agent to JAMES simulation and back. In the opposite direction, if events produced by models reach the agent core model, the core agent model will either create new satellites, which will generate a new agent thread by invoking a method via Java reflection, or forward the message to an existing satellite, which will resume the suspended thread (Figure 6).
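The translation of consumed resources into simulation time can be sketched as a small time model. The paper does not specify the translation function; the linear factor used here is purely an assumption for illustration, and the class name is hypothetical:

```java
// Hypothetical time model: maps the resources a thread consumed
// between two interaction points (here, milliseconds of measured
// computation time) to an advance in simulation time. A linear
// mapping is assumed only for the sake of the example.
class LinearTimeModel {
    private final double factor; // simulation time units per millisecond

    LinearTimeModel(double factor) { this.factor = factor; }

    // Simulation time at which the peripheral port is charged,
    // given the time of the last synchronization point and the
    // resources consumed since then.
    double portChargeTime(double lastSyncTime, double consumedMillis) {
        return lastSyncTime + consumedMillis * factor;
    }
}
```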

Whereas the agent core model represents the central focus of control, its satellites provide the interfaces to the agent’s processes. Together they implement a conceptual view of the state and the behavior of an agent. Not only MOLE agents, but also MOLE locations and engines are associated with representatives, which are part of a network simulation. Messages, calls, and agents are propagated through the virtual network based on the simulation mechanisms provided in JAMES, and according to the actual model of the physical network which underlies the experiment.

A hierarchy of processors is defined in JAMES for executing the model: With each coupled model, a coordinator is associated, and with each atomic model, a

simulator is associated. Figure 7 shows the interaction between a JAMES model, i.e., a Start satellite, its Simulator, the ComputationHandler, and the External Process. From the moment the agent core model receives the ‘‘startup’’ notification from the location, it will create the satellite Start. With the creation of the Start satellite, its simulator is created, which will execute the initialize method of the model. All models are equipped with ‘‘init’’ methods. They are used to initialize model components before and during simulation runs. Within the ‘‘init’’ method, the external computation code is started.

The computation handler is responsible for allowing simulation and external processes to proceed concurrently. This thread monitors the external activities and invokes the methods of the MOLE agent via reflection. The satellite model changes to the state Running and will return control to the simulator, and thus to the simulation, which will continue processing events.
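The decoupling role of the computation handler can be sketched as follows. This is an assumption-laden sketch, not the actual JAMES implementation; it uses a standard single-thread executor to stand in for the handler's own monitoring thread:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the computation handler: it runs the
// agent's external code on its own thread, so the simulator can
// return immediately and continue processing events while the
// handler waits for the external activity to complete or to
// produce an event directed to the simulation.
class ComputationHandler {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Launch the external computation; the calling simulator thread
    // is not blocked and can continue with its event loop.
    <T> Future<T> start(Callable<T> externalCode) {
        return worker.submit(externalCode);
    }

    void shutdown() { worker.shutdown(); }
}
```

The returned Future stands in for the handler's monitoring of the external activity: the simulation thread only touches it at the synchronization points determined by the time model.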

While executing the start method of the MOLE agent, a remote procedure might be called. If a remote procedure call occurred in the runtime environment of MOLE, the location would be notified to locate and contact the remote agent, and call the remote agent’s method. To integrate agents into the simulation, the MOLE API was changed. It now redirects the call into the simulation. The ComputationHandler notifies the simulator about the event, which will charge the peripheral input port of the model with the information at the simulation

FIGURE 7. A MOLE agent is created and executes a remote procedure call in its start method.

time that has been determined, depending on the time model and the resource consumption of the external process. The resource consumption has been monitored by the ComputationHandler, e.g., by reading the wall clock.

Generally, the arrival of events at the peripheral input ports is modeled as internal events, since they are triggered by the flow of simulation time. With each internal event in JAMES, an output is associated. In this case, the output function is used to launch the remote procedure call into the simulated network. The state of the satellite will change to waiting.
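The pairing of internal event, output function, and state change can be sketched in the spirit of a DEVS-style atomic model. The sketch is hypothetical; the actual satellite model is the statechart of Figure 5, and the names used here are illustrative:

```java
// Hypothetical DEVS-style sketch of the satellite: when the
// scheduled internal event fires, the output function emits the
// pending remote procedure call into the simulated network, and
// the internal transition moves the satellite to state WAITING.
class SatelliteModel {
    enum State { RUNNING, WAITING }

    State state = State.RUNNING;
    String pendingCall; // charged via the peripheral input port

    // Output function, evaluated at the time of the internal event.
    String output() {
        return pendingCall;
    }

    // Internal transition function, applied after the output.
    void internalTransition() {
        state = State.WAITING;
    }
}
```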

Eventually, the answer to the remote procedure call will arrive. The agent core model sends the event to the satellite; more precisely, the simulator of the agent core model will send the event as an output to the coordinator, which will forward it to the simulator of the satellite. The simulator will invoke the external transition function. During this transition function, the peripheral output port will be charged, thereby the ComputationHandler will be notified, and the state of the satellite will change to running again. The ComputationHandler will forward the return value to the waiting external process, which will resume the execution of the start thread of the MOLE agent. Again the satellite is running, and the ComputationHandler is waiting for the external process to complete or to produce an event directed to the simulation.
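The hand-over of the return value at the peripheral output port can be sketched as follows. A blocking queue stands in for the port; this is an assumed rendering, not the JAMES implementation, and the class name is hypothetical:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the hand-over on an RPC answer: the
// external transition function charges the peripheral output
// port, and the computation handler, waiting on behalf of the
// suspended agent thread, picks the return value up and resumes
// the external process with it.
class PeripheralOutputPort {
    private final BlockingQueue<Object> slot = new ArrayBlockingQueue<>(1);

    // Called from the model's external transition function.
    void charge(Object returnValue) throws InterruptedException {
        slot.put(returnValue);
    }

    // Called by the computation handler; blocks until charged.
    Object await() throws InterruptedException {
        return slot.take();
    }
}
```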

This procedure is based on the idea that simulation and agents execute concurrently and are synchronized in simulation time. The simulator keeps its execution in pace with the external processes, and initiates external executions. By introducing the computation handler, the execution of simulation and external processes is decoupled so they can run concurrently. However, by using the time model and the resource consumption, synchronization points between agent and simulation execution are determined.

What role does reflection play in this solution? Reflection certainly helped us in a practical way. Higher-order functionality, e.g., Java reflection, provides a means to reason about calling procedures, to process the call of procedures as data within the simulation, and to use this information to invoke the corresponding method of the agent. Reflection also helped us in a metaphorical way, in understanding the role of the representatives as actually doing two things: trying to see the agent and presenting the agent to be seen.

CONCLUSION

By integrating and testing mobile agents of the MOLE agent system in JAMES, different forms of reflection are employed. To simulate phenomena such as migration in terms of models that migrate, appear, and disappear, the simulation system has to support the definition and execution of reflective models with the ability to represent, control, and modify their own behavior. Therefore, the formalism underlying the simulation system has to support reflection as a process involving self-awareness. The formalism

DYNDEVS has been introduced to model reflective dynamic systems. Reflection has also been employed to realize the interaction between simulation and externally running agents, if agents shall not be modeled but ‘‘plugged’’ into the simulation system. Thus, reflection played a central role not only at the theoretical level of the modeling formalism: On the implementation level, an extensive use of Java reflection and slight changes of the MOLE API helped us to run MOLE agents in the simulation without changing their source code. The programmer can switch arbitrarily between an execution in the real environment and the virtual test environment.

Reflection in the sense of mirroring has motivated the implementation of the interface between agents and simulation as representatives: models which are connected to externally running agents via their peripheral ports and reflect the state and activities of the agents ‘‘online’’ into the simulation. Representatives as mirror images of agents are doing two things: They are seeking and showing themselves simultaneously. They are controlling the execution of the agent and are controlled by it. As the representatives also form models, variants of the representatives can be used as conceptual models in early phases of designing agents. The concept of representatives will facilitate future systematic experiments to analyze the behavior of multi-agent systems during different stages of the agent development process.

REFERENCES

Anderson, S. 1997. Simulation of multiple time-pressured agents. In Proc. of the Winter Simulation Conference, WSC’97, 397–404. Atlanta.
Asada, M., H. Kitano, I. Noda, and M. Veloso. 1999. RoboCup: Today and tomorrow. What we have learned. Artificial Intelligence 110(2):193–214.
Asperti, A., and N. Busi. 1996. Mobile Petri nets. Technical Report UBLCS-96-10, University of Bologna.
Barros, F. 1997. Modeling formalism for dynamic structure systems. ACM Transactions on Modeling and Computer Simulation 7(4):501–514.
Baumann, J., F. Hohl, K. Rothermel, and M. Strasser. 1997. Mole: Concepts of a mobile agent system. WWW Journal, Special Issue on Applications and Techniques of Web Agents 1(3):133–137.
Butler, M., M. Prokopenko, and T. Howard. 2001. Flexible synchronisation within RoboCup environment: A comparative analysis. In RoboCup 2000, volume 2019 of LNAI, eds. P. Stone, T. Balch, and G. Kraetzschmar, 119–128. London: Springer.
Dikaiakos, M., and G. Samaras. 2000. A performance analysis framework for mobile-agent systems. In First Annual Workshop on Infrastructure for Scalable Multi-Agent Systems, The Fourth International Conference on Autonomous Agents 2000, eds. Wagner and Rana, volume 1887 of Lecture Notes in Computer Science, 180–187. London: Springer.
Excelente-Toledo, C., R. Bourne, and N. Jennings. 2001. Reasoning about commitments and penalties for coordination between autonomous agents. In Agents’2001: Proc. of the 5th International Conference on Autonomous Agents, 131–138. Montreal, Canada.
Ferber, J., and P. Carle. 1992. Actors and agents as reflective concurrent objects: A MERING IV perspective. IEEE Transactions on Systems, Man, and Cybernetics 21(6).
Harel, D., and A. Naamad. 1996. The STATEMATE semantics of statecharts. ACM Transactions on Software Engineering and Methodology 5(4):293–333.
Ibrahim, M. 1991. Report on OOPSLA/ECOOP ’90 Workshop on Reflection and Metalevel Architectures in Object-Oriented Programming. OOPS Messenger, 73–80.
Jennings, N. R., K. Sycara, and M. Wooldridge. 1998. A roadmap of agent research and development. Autonomous Agents and Multi-Agent Systems 1(1):275–306.
Jung, C., and K. Fischer. 1998. Methodological comparison of agent models. Technical Report D-98-1, DFKI, Saarbrücken.
Klir, G. 1985. Architecture of Systems Problem Solving. New York: Plenum Press.
Köhler, M., D. Moldt, and H. Rölke. 2001. Modelling the structure and behaviour of Petri net agents. In ICATPN 2001, volume 2075 of LNCS, eds. J.-M. Colom and M. Koutny, 224–241. Berlin: Springer.
Montgomery, T., and E. Durfee. 1990. Using MICE to study intelligent dynamic coordination. In Second International Conference on Tools for Artificial Intelligence, Washington, DC, 438–444. Institute of Electrical and Electronics Engineers.
Noda, I. 1995. Soccer server: A simulator for RoboCup. In JSAI AI-Symposium 95: Special Session on RoboCup, 29–34.
Pollack, M. 1996. Planning in dynamic environments: The DIPART system. In Advanced Planning Technology, ed. A. Tate. Cambridge, MA: AAAI.
Schattenberg, B., and A. Uhrmacher. 2001. Planning agents in JAMES. Proceedings of the IEEE 89(2):158–173.
Schut, M., and M. Wooldridge. 2000. Intention reconsideration in complex environments. In Agents 2000: Proceedings of the Fourth International Conference on Autonomous Agents, Barcelona.
Uhrmacher, A. 2001. Dynamic structures in modeling and simulation: A reflective approach. ACM Transactions on Modeling and Computer Simulation 11(2):206–232.
Uhrmacher, A., and K. Gugler. 2000. Distributed, parallel simulation of multiple, deliberative agents. In Parallel and Distributed Simulation Conference (PADS’2000), 101–110. Bologna: IEEE Computer Society Press.
Uhrmacher, A., and M. Krahmer. 2001. A conservative, distributed approach to simulating multi-agent systems. In Proc. European Multi-Simulation Conference, 257–264. Prague: SCS.
Uhrmacher, A., and B. Kullick. 2000. Plug and test software agents in virtual environments. In Winter Simulation Conference WSC’2000, 1722–1729. Orlando, FL.
Uhrmacher, A., and B. Kullick. 2001. Interaction between simulation and multi-agent systems: An exploration into MOLE and JAMES. In Proceedings of the 5th International Conference on Autonomous Agents: Agents’01, Montreal. Sheridan.
Uhrmacher, A. M., P. Tyschler, and D. Tyschler. 2000. Modeling mobile agents. Future Generation Computer Systems 17:107–118.
Velleman, J. 1989. Practical Reflection. Princeton: Princeton University Press.
Wolpert, D., and J. Lawson. 2002. Designing agent collectives for systems with Markovian dynamics. In AAMAS’2002: Autonomous Agents and Multi-Agent Systems 2002.
Zeigler, B., T. G. Kim, and C. Lee. 1991. Variable structure modelling methodology: An adaptive computer architecture example. Transactions of the SCS 7(4):291–319.
Zeigler, B., and T. Ören. 1986. Multifacetted, multiparadigm modelling perspectives: Tools for the 90’s. In Proc. of the Winter Simulation Conference, San Diego, 708–712. SCS.
