
Final Year Project Report

AF-OpenSim

Niall Deasy

A thesis submitted in part fulfilment of the degree of

BA/BSc (hons) in Computer Science

Supervisor: Dr Rem Collier

Moderator: Dr Mauro Dragone

UCD School of Computer Science and Informatics

College of Engineering, Mathematical and Physical Sciences

University College Dublin

May 5, 2011


0.1 Acknowledgements

I would like to thank everybody who has helped me during this project. In particular I wish to thank my supervisor, Dr Rem Collier, whose expertise in multi-agent systems and Agent Factory was invaluable to the project's success. His genuine interest and support in assisting me throughout this project was greatly appreciated. I would also like to thank Dr Mauro Dragone for assisting me in the early stages of the project, and supporting me in the first few vital steps. Finally I would like to thank Dr Eleni Mangina for her constant support throughout this final year. She was always there for anybody who needed her support or advice, which is rare and so appreciated.


Table of Contents

0.1 Acknowledgements
0.2 Project Specification
0.3 Abstract
1 Introduction
2 Background Research
2.1 Introduction
2.2 A brief history of Multi Agent Systems (MAS)
2.3 Agent Factory (AF)
2.4 Environment Interface Standard (EIS)
2.5 OpenSim
2.6 OpenMetaverse
2.7 XStream
3 Core Architecture
3.1 Rebuilding Foundations
3.2 OpenMetaverse & XStream
3.3 Proposed Architecture
3.4 Communications Layer
3.5 Sensor Manager
3.6 Actions Manager
3.7 Interactive Objects
3.8 GUI
4 Agents and OpenSim
4.1 EIS
4.2 EIS & AgentFactory
5 Evaluation
5.1 The Scenario
5.2 Implementation
5.3 Results
6 Conclusion
6.1 Future Work


0.2 Project Specification

The objective of this project is to enable autonomous virtual characters in virtual environments such as Second Life. Second Life is an online 3D virtual world, which offers excellent opportunities to create interactive simulations for various purposes, thanks to its inbuilt physics, programmability and collaborative features. One possible application of the target software is to enable ICT designers to implement actual Virtual Reality scenarios of Ambient Assisted Living (AAL) in domestic settings. For instance, such a system may be used to verify interaction designs and test AAL products in a simulated environment populated by simulated users.

Agent Factory, a Java-based agent tool for rational decision making developed in UCD, will be used to inject goal-oriented behavior into the avatars. Both these avatars and the virtual simulated environment will be based on Open Simulator, often referred to as OpenSim, an open-source server platform compatible with Second Life™ that can be accessed through a variety of clients, on multiple protocols.

In particular, the proposed project will improve the design of and extend a pre-existing OpenSim text-based interface, OpenSim4OpenCog (OS4OC). OS4OC is a C# program which opens up an interactive console that can be used to instruct the avatar. While OS4OC supports a list of rudimentary actions (e.g. jump, sit, crouch, move, say...), further sensing and acting capabilities are needed to enable more sophisticated physical and social interaction (e.g. accounting for object manipulation, deictic gestures, facial expressions...).

Mandatory:

Familiarize with OpenSimulator and OpenSim4OpenCog

Write a Java client to impart instructions to the avatar, access the information originating from the virtual world, and maintain a representation (world model) of the avatar's surroundings.

Create an extensible set of C# classes operating between CogBot and OpenSim. These classes should extend OS4OC's offerings, and should be tailored to the purpose of Java agents by handling a messaging protocol (based on TCP/IP) with the Java client.

Discretionary:

Integrate the Java client with Agent Factory, by using one of its standard interface capabilities to populate the agent's belief model and interact with its reasoning apparatus.

Exceptional:

Build a model of a home (including furniture etc.) relying as much as possible on available 3D models and mirroring a real AAL test-bed.

Implement a set of activities, such as 'make tea (switch on kettle, take milk...)' or 'watch TV', driven by the agent's plans formulated in Agent Factory.


0.3 Abstract

One of the greatest issues in system design and specification is predicting whether the system will function as expected. This project is particularly concerned with the testing of Ambient Assisted Living (AAL) scenarios in domestic settings. AAL scenarios are sensor systems which integrate into a domestic environment. Their function is to assist the occupant in their everyday life within that environment. AAL scenarios can range from AAL designed for special needs to AAL designed for more productive and easy living [10]. One possible solution to this problem may exist in virtual testing environments. Virtual environments have become increasingly popular over recent years as a means of testing such systems. One such virtual environment is OpenSimulator (OpenSim), a completely open-source virtual simulator server, maintained and developed by an open-source community. This paper discusses the aim of developing, using a virtual environment such as OpenSim in conjunction with an interpreter (OpenMetaverse or OS4OC), a system which can tackle such problems. This paper will also describe in detail the technologies used in the process of designing this system, as well as the issues encountered.


Chapter 1: Introduction

Over the past few decades, computer technologies have advanced at an exponential rate. Today computer chips are affordable and in abundance. In recent years computer systems have developed new and interesting abilities to integrate themselves into our everyday lives. This is mostly attributed to sensor systems, which now exist in almost every form known to mankind. We live in a society where computers can now see, hear and smell even beyond our own sensing abilities. These 'sensing systems' can now be seen in cars, mobile phones, laptops and even our own homes. Take the modest house alarm, a system which completely relies on its sensors to detect the presence of intruders. This simple sensor system has been around for over a decade now, and its demand keeps growing.

There exists huge potential with these sensing systems, particularly in the area of assisted living environments. Ambient Assisted Living (AAL) is a program funded by many European countries, whose primary goal is to provide a living environment designed to accommodate the elderly in the comfort of their own homes [10]. There are many issues faced by AAL systems, such as reliability, limitations and adaptability. Given the dynamic nature of domestic environments, AAL systems need to be able to adapt while still retaining their ability to function properly after doing so. There exist many methods of implementing adaptable systems; however, their output can be difficult to predict.

One possible solution to this problem is to test the system with random and diverse scenarios. In reality, such tests can be costly and time consuming. The main concerns for many startup projects are: how much will the equipment cost? What specifications will we need? And, most important of all, will it work? This paper describes a virtual environment system which allows for systems to be tested within it. It is particularly aimed towards systems which require user interaction, such as evacuation plans or sensor system testing.

The simulator used in this project is called OpenSim, which is based on the popular Second Life platform. There are also two other components to this system which run in conjunction with the OpenSim virtual environment. These are the Java client, which autonomously or manually imparts instructions to the avatar, and a layer which lies between the Java client and the simulator and provides a form of communication between the two. The Java client should also be able to accommodate an autonomous agent system, in particular UCD's own AgentFactory.

The requirements of the project are simple. The final system should be able to meet thefollowing requirements:

• Enable autonomous virtual characters in a virtual environment.

• The virtual environment should be customizable and interactive.

• The overall system should perform well in real time.


1.0.1 Report Structure

This report is split into six primary sections: this "Introduction", followed by "Background Research", "Core Architecture", "Agents and OpenSim", "Evaluation" and "Conclusion". This structure aims to describe the project from its analysis to its design and finally its implementation, thereby providing a fluid transition from concept to construct.

Background Research will provide a detailed insight into the technologies used in this project, as well as the research undertaken in obtaining them. This will begin with a brief introduction to the history of Multi-Agent Systems, with reference to their origin in DAI. Following this, Agent Factory will be introduced as a valuable platform which allows for the development and deployment of such Multi-Agent Systems. The OpenSim virtual environment will be introduced as a plausible means for hosting the project's MAS. This will be followed by both the obsolete (OS4OC) and current (OpenMetaverse) technologies that can be used to connect to OpenSim.

Core Architecture will describe the design process of the project at its highest level. In particular, this chapter will discuss the reasoning behind the project's decision to rebuild a new architecture, using OpenMetaverse, instead of extending its preceding system, which used OS4OC. Following this, a new architecture will be proposed, which will focus primarily on using XStream and TCP as a link between OpenMetaverse and AgentFactory. Sending complex data structures over a TCP stream will then be discussed in relation to linking OpenMetaverse with AgentFactory. In order to keep this report as brief as possible, this chapter will consist of both the design and implementation aspects of the Core Architecture.

The following chapter, "Agents and OpenSim", will describe the integration of the architecture discussed in the previous chapter with an EIS environment interface. This will begin with detailed descriptions of both the actions and perceptions which were integrated into this environment, using the Core Architecture. The chapter will then proceed to discuss the integration of this EIS environment into Agent Factory, including a brief description of setting up a sample scenario.

The Evaluation chapter will detail a test AAL scenario, which is designed to show the final system's ability to accommodate such a scenario. This chapter will also attempt to incorporate into this scenario any extra features which the project has developed into its final system. The Agent Programming Language used to implement the scenario will be described in great detail. Finally, this section will conclude with an analysis of the expected and the actual results of the scenario's execution.

Finally, this report will wrap up with its conclusion, which will also include a future work section.


Chapter 2: Background Research

2.1 Introduction

This chapter will focus primarily on the technologies researched and used in this project. This project is largely based on Multi Agent Systems, and will therefore start by introducing the concept of Multi Agent Systems as well as their related technologies. These will include standards such as FIPA and EIS, as well as an agent platform, namely Agent Factory, which aims to support both. This chapter will also discuss various means for exchanging data between different programming languages, with particular emphasis on the XStream project.

2.2 A brief history of Multi Agent Systems (MAS)

The evolution of Multi Agent Systems can be traced back to its predecessor, Distributed Artificial Intelligence (DAI), which in turn is a subset of Artificial Intelligence. To begin to understand how agent systems work, it must first be defined what is meant by an 'agent'. There exist many different variations which attempt to define what is meant by an 'Agent'. The most generalized definition describes an automated system entity which performs actions based on its surrounding environment. However, it is Wooldridge and Jennings' (W&J) definition of "weak and strong notions of agency" that is the most recognized [18]. According to W&J, agents can be defined through two definitions:

1. The Weak notion of Agency

2. The Strong notion of Agency

The Weak notion of Agency is a definition proposed by W&J which attempts to define Agents in their simplest form. This definition describes agents as computer-based hardware or software systems which are autonomous, reactive, pro-active and also have certain social abilities. Perhaps the most vital point of this definition is that Agents should have the ability to set their own goals and achieve those goals through their own decisions. W&J's definition of Agents also maintains that Agents need not be mobile, which by definition extends an Agent's use to beyond mobile systems. W&J also maintain a 'Strong' definition of Agency, which further refines the 'Weak' notion of Agency. This definition describes Agents as having a mental state, which typically consists of beliefs, goals, obligations, knowledge and preferences, amongst other typical mental traits normally associated with humans.

MAS views agents as having three fundamental characteristics[19]:

1. Autonomy: agents should have at least a minimum level of autonomy

2. Local views: the system as a whole is regarded as too complex for one single agent to conceive. Therefore, an agent's view is restricted to a local subset of the global system view.


3. Decentralization: there must be no central control agent, which would lead to a monolithic system.

A key function of agents is their autonomous capabilities, from revising their own goals to sharing knowledge of their environment with other agents. Agents within a MAS are said to be social agents if they have the capability to share beliefs and perceptions of their local environment. However, as we can see in our own physical world, there exist boundaries in communication where languages are not the same. As different agent systems emerged, there was an increasing interest in establishing a standard which would allow interoperability between these agent systems.

The Foundation for Intelligent Physical Agents, or FIPA, set out to establish a set of standards which would promote the interoperability of different agent systems [20]. The Agent Communication Language (ACL) was one such standard proposed by FIPA. Two of the most successful ACLs are FIPA-ACL and the Knowledge Query and Manipulation Language (KQML). Both of these standards make extensive use of Searle's 'Speech Act Theory' (SAT), which theorizes that human utterances are spoken with the result of an entity acting or reacting to that utterance [21]. Essentially, SAT viewed human utterances as actions which physically change the state of the world. For many agent systems, communication between agents is a primary function, and should be defined in a clear and effective form. Searle's Speech Act Theory held quite an important role in the development of agent systems, as it provided a clear breakdown of the types of communication utterances, as well as their effect. SAT derives its three core definitions from John L. Austin's doctrine of 'Locutionary', 'Illocutionary' and 'Perlocutionary' acts [22], where:

• Locutionary acts define a well structured utterance which has a substantial meaning. These acts can range from describing an object to asking a question. For example, "That candle is lit" is a well defined locutionary act.

• Illocutionary acts are locutionary acts which have the intention of causing a desired effect/action. For example, "May I light the candle?" is an illocutionary act, which has the desired effect of the person that the utterance is directed at responding with a confirming answer, i.e., "yes" or "no".

• Perlocutionary acts are acts which are performed as a result of saying, or not saying, something. These acts range from persuasive to inspiring acts. For example, "The candle has gone out." is a perlocutionary act which may have the effect of a hearing person reacting to the statement by relighting the candle.

In essence, these speech acts give FIPA's two ACLs clear and concise meaning, which promotes effective and justifiable communication between agents within a multi agent system. While DAI systems focus primarily on how multiple artificial intelligence systems can work together across a distributed system, MAS specializes in autonomous self-organized agents.

2.3 Agent Factory (AF)

Agent Factory is an open-source project whose primary purpose is to assist the development of multi-agent systems [2]. Agent Factory is composed of several components, including several platforms, tools and languages, and comes in two formats: Agent Factory Standard Edition (AFSE) and Agent Factory Micro Edition (AFME). These two editions allow Agent Factory to be tailored for both regular and mobile platforms.


Since this project is specifically tailored for desktop and server deployment, AFSE was chosen as the edition of Agent Factory to be used with this system. AFSE is a "modular and extensible framework" which allows for multi-agent systems to be deployed in a support environment [10]. One of the main purposes of Agent Factory is to provide an interface which is compliant with FIPA, therefore allowing for a wide range of compatibility with other agent systems. This achievement can be seen in its Common Language Framework, a collection of libraries which allows a wide and diverse range of Agent Programming Languages (APLs) to be used. AFSE is composed of three primary features: a Run-Time Environment, a Common Language Framework, and EIS compatibility.

2.3.1 Run-Time Environment (RTE)

The Run-Time Environment is the most critical function within Agent Factory, as it provides support for the interoperability of different agent platforms [3]. It achieves this by providing the core software required by agent-based applications, which includes several agent platforms. The RTE effectively integrates these specialized agent platforms through a common communication channel. Figure 2.1 shows two different agent platforms, where agents communicate through a shared communication channel. The agents are represented by purple circles, and communications between agents, both locally and cross-platform, are represented by dotted lines. Communications are transparent in this case, and the agents do not need to know how to communicate with agents existing on a different platform. Each platform may require certain services in order to support its agents; this is done through dedicated platform services which exist locally between the platform and the communication channel.

Figure 2.1: AgentFactory Run-Time Environment


In addition to providing transparent communications between different agent platforms, as well as support for multiple platforms, the RTE also provides a few key services which assist in the deployment and maintenance of such platforms. These include the Agent Management Service, which provides runtime support for agents (creating, terminating, suspending, resuming), as well as the Local Message Transport Service, which provides a means for cross-platform communications. The RTE is therefore an essential component of Agent Factory, as it provides transparent cross-platform support at run-time.

2.3.2 Common Language Framework (CLF)

The Common Language Framework is another essential component of Agent Factory, as it provides support for many FIPA-compliant Agent Programming languages and architectures. The CLF uses its own JavaCC-based compiler to check outline grammar and templates, as well as providing a configurable debugger [4]. There currently exist three main supported APLs: the Agent Factory Agent Programming Language (AFAPL), AF-AgentSpeak and AF-TeleoReactive. Many of these APLs base their structure on the Beliefs, Desires, Intentions (BDI) model.

1. Beliefs are used to denote a belief which the agent has about its environment. For example, Bel(location,home) represents an agent belief which states that the agent believes it is at home. In short, beliefs are used to represent the state of an agent's local environment. Beliefs are usually stored within a database called a belief-set; however, different systems may use different forms of belief storage.

2. Desires are used by an agent to denote what it would like to achieve in the future, i.e., desires represent the agent's motivations/goals. Goals represent active beliefs, which should not conflict with other goals, i.e., an agent should not have a goal of becoming a doctor if it also has a goal of becoming a software engineer.

3. Intentions denote the agent's deliberative state, i.e., what the agent has chosen to do. In many systems, intentions denote what the agent has planned to do. A plan is a set of actions which an agent has formulated in order to achieve a certain goal.

4. Events are used to update an agent's belief set, resulting from a triggered event. These triggers may exist internally or externally, i.e., the agent may consist of its own internal triggers, such as sleep, or triggers may be a result of a change in the agent's environment.

AF-AgentSpeak AF-AgentSpeak (AF-AS) is a specialization of Anand Rao's extended version of the AgentSpeak(L) language, implemented through Agent Factory [16]. This Jason-based language was initially developed as a demonstrative tool, which aimed to show how Agent Factory can be used to efficiently develop existing APLs, using the CLF. AF-AS includes a reuse model which allows for inheritance, abstract plans and agents, as well as overriding plans. The name of the file must reflect the designated agent name within the file, i.e., the file test.aspeak must include #agent test within its declarative statement. This is due to AF-AS's ability to extend and inherit agents, where agentspeak files must be clearly defined and easily locatable.

Beliefs in AF-AS are simple and take the form of grounded predicate formulae. For example, if an agent sees a ball, the belief see(ball) will be generated. Once this belief is generated, a creation event, denoted by a '+' symbol, is raised, i.e., +see(ball). These events only last for one agent cycle and are automatically removed via the removal event '-' symbol, i.e., -see(ball).


Plans in AF-AS are composed of a set of rules, and are triggered by an associated event within a specified context. Rules may also use variables through the '?' symbol, which is used to define all variables. AF-AS is also capable of handling simple if-else statements within its plans, similar to those seen in Java. AF-AS also supports printing to the console through the commands .print() and .println(). The following example highlights most of these discussed features:

#agent helloworld #extends simpleAgent

module eis -> com.agentfactory.eis.EISControlModule;

+initialized : name(?name) <-
    .println("Hello World from " + ?name),
    eis.perform(lookAround());

+see(?type) : true <-
    ?typeCopy = ?type,
    .println("I can see an object of type " + ?typeCopy);

This helloworld agent extends an already existing simpleAgent, and uses an EIS module. Modules are necessary to perform actions which have been predefined within that module. When this agent initializes, it prints out a hello world message along with its name and then calls for an action lookAround() to be performed.

AFAPL AFAPL was a language originally designed specifically for Agent Factory. It was later adapted in accordance with the Common Language Framework [15]. AFAPL is composed of a set of commitment rules which define situations where the agent should act/react. These rules formulate the basis of agents within AFAPL and allow the agent to work towards its goals. AFAPL is also modeled on the BDI model, and is composed primarily of Beliefs, Goals, Plans and Commitment Rules. AFAPL also supports plan structures, which can be used to define extra functions based on a precondition and a postcondition. The following code demonstrates a plan which is used to print out a statement to the console whenever the say action is performed:

state(initialized) <-
    !say(hello),
    !say(goodbye);

plan sayPlan(?x) {
    pre true;
    post say(?x);
    body {
        .println("You said " + ?x);
    };
}

which results in the following output:

“You said hello”

“You said goodbye”


AF-TeleoReactive AF-TeleoReactive (AF-TR) is another language written for Agent Factory, derived from Nils Nilsson's teleo-reactive model while adhering to Agent Factory's Common Language Framework [14]. AF-TR was designed to function in conjunction with a dynamic environment, while maintaining and processing the actions of its autonomous agents. A nice feature of AF-TR is its ability to reuse agent code, in a manner similar to that seen in object-oriented programming languages. It achieves this through the use of the #extends keyword within the agent definition.

AF-TR provides simple commands within its language, similar to those seen in AFAPL and AF-AS. However, the general structure of functions is slightly different, and reflects a more structured functional decomposition. AF-TR uses functions to define an agent's actions. Functions can be composed of both production rules and parameters. Production rules always take the form Condition -> Action. For example, the following snippet defines an agent named SpeakingAgent which is capable of speaking:

#agent SpeakingAgent #extends simpleAgent

function main {
    initialized(true) -> say("Im Ready");
};

function say(?sayThis) {
    true -> .println("I said " + ?sayThis)
};

Here we can see one production rule per function. The main function is run first, and states that if the agent is initialized, then the action say is performed. The second function is a triggered event, which gets called when the action say is performed by the agent, and prints the result to the console. The first line of the script is the declarative statement, which declares the agent file as SpeakingAgent as well as extending another AF-TR file named simpleAgent. Consequently, agents implemented through this file also inherit all of the functions and traits of the simpleAgent.

2.4 Environment Interface Standard (EIS)

In order to address the growing issue of compatibility between various APLs, a set of standards was set up which came to be known as the Environment Interface Standard, or EIS. EIS modeled its standards on popular existing APIs, while at the same time maintaining an interface which is as generic as possible. This approach was taken in order to facilitate the adoption of EIS by existing APLs, by providing a set of standards which are similar in style.

2.4.1 Agent-Entities-Relation

The agent-entities-relation refers to the manner in which EIS views an agent's interaction with its environment [6]. EIS regards agents as separate from their environment, where agents interact with their environment through an assigned entity. An entity can be seen as the agent's avatar, which allows the agent to indirectly interact with its environment through the use of sensors and actuators. This process is handled by the Environment Interface (EI), which may be adapted to the APL's specifications. When designing the Environment Interface, EIS decided to allow this Agent-Entities-Relation to be configured within that EI. Consequently, the EI requires that both agents and entities be preconfigured by populating sets of identifiers for each.

The Agent-Entities-Relation can then be configured by creating associations between agent and entity identifiers. This setup accommodates the diversity within Environment Interfaces, as it allows for any combination of agent-entity relations: one-to-one, one-to-many, and many-to-one. For example, one EI may have multiple agents all sharing control over one entity (Figure 2.2: Agents C & D), i.e., multiple marines controlling one submarine. On the other hand, one agent may require control over several entities (Figure 2.2: Agent B), i.e., a central control system operating several traffic lights. Or, in the simplest scenario, each agent may control a single entity (Figure 2.2: Agent A), i.e., a competitive virtual football game. Figure 2.2 shows a sample Environment Interface configuration which includes each of these three possible relations.

Figure 2.2: Agent-Entity-Relation
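To make the configuration concrete, the following minimal Java sketch shows how such associations might be stored. This is a schematic illustration only, not the actual EIS API; the class and method names are hypothetical.

import java.util.*;

// Hypothetical sketch of an agent-entity registry in the spirit of EIS;
// class and method names are illustrative, not the real EIS API.
public class AgentEntityRegistry {

    // Each agent identifier maps to the set of entities it controls.
    private final Map<String, Set<String>> relation = new HashMap<>();

    public void associate(String agent, String entity) {
        relation.computeIfAbsent(agent, a -> new HashSet<>()).add(entity);
    }

    public Set<String> entitiesOf(String agent) {
        return relation.getOrDefault(agent, Collections.emptySet());
    }

    public static void main(String[] args) {
        AgentEntityRegistry r = new AgentEntityRegistry();
        r.associate("agentA", "car1");          // one-to-one
        r.associate("agentB", "light1");        // one-to-many
        r.associate("agentB", "light2");
        r.associate("agentC", "submarine");     // many-to-one: two agents
        r.associate("agentD", "submarine");     // share one entity
        System.out.println(r.entitiesOf("agentB"));  // [light1, light2]
    }
}

The same map supports all three relations in the figure: a one-element set per agent gives one-to-one, a larger set gives one-to-many, and two agents mapping to the same entity gives many-to-one.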

2.4.2 AF integration

Agent Factory saw EIS as a valuable commodity, as it provides standards which define how agent platforms and architectures can connect to an environment interface. Such standards are the key to allowing different agent platforms to share the same environment interface, as well as allowing different agent platforms to be benchmarked against each other. In order to successfully integrate EIS into Agent Factory's architecture, several components were designed to facilitate the integration: a Platform Service, a set of Modules, a Manager Agent and various Run Configurations [5].

The link which connects EIS to Agent Factory is provided by the platform service itself. In order to interact with EIS environments, CLF-based agents utilize one of two purpose-built modules, namely the Management and Control APIs. The Management API is responsible for the "creation, suspension, resumption and termination" [16] of agents existing on that platform. This is implemented through the Agent Management Service (AMS), which represents a core platform service that is implemented on all agent platforms, as required by FIPA.

Similarly, the Control API is also used to manage agents. The Control API is responsible for allowing the creation of agent-entity associations; setting up the API and linking it with the associated EISService, i.e., setup(?serviceId); registering an agent with the environment, i.e., registerAgent(Agent); and enabling entities to perform actions through the associated Environment Interface, i.e., perform(?Action). When created, agents use the Control API to link to their associated entity, and subsequently use the API to retrieve the sensory data of the entity and to perform actions. In addition to this, a default manager agent is provided that creates agents for each free entity in the environment. Finally, a set of Run Configurations is also maintained, which assists the debugging and deployment of EIS applications.

2.5 OpenSim

An essential component of this system is the virtual environment. It is important to this project that such a system be easily customizable and extendable. It would also be quite beneficial if such a virtual environment had a large support base and a well maintained API. OpenSim is an open-source 3D virtual environment server which is based on Second Life [7]. Second Life is a virtual environment where people can interact with other people through avatars. Users use programs called 'viewers' to interact with the virtual environment through their avatar. OpenSim implements the same messaging protocols as Second Life, which allows a Second Life viewer to be used to view an OpenSim virtual environment. However, OpenSim is much more open than Second Life, as its primary goal is to create a virtual environment which can be moulded and adapted as necessary. Objects within OpenSim are known as "prims", which can come in various shapes and forms.

One of OpenSim's strong points is its social interaction features. In OpenSim you can make friends, join groups and interact with other avatars. It also provides support for multiple physics engines, which enables a grid to choose whichever physics engine suits it best. OpenSim servers can be run in two different modes: standalone or grid mode. The first mode, standalone, is the easiest to set up and allows the simulation to run on one system. This means that standalone mode is restricted by the number of users it can accommodate. Grid mode, however, allows a simulation to be spread across multiple systems, thereby increasing the scalability of a virtual environment and allowing for a much greater user capacity. One of the great features of OpenSim is its hypergrid system. The hypergrid allows multiple OpenSim servers to connect to each other, much like the structure of the internet. In this way, OpenSim is potentially an infinite virtual environment. The hypergrid works by keeping a reference to all of the connected servers and allowing a user to teleport to different grids.


2.6 OpenMetaverse

The system which was already implemented before this project consisted of the OpenSim virtual environment, as well as OS4OC as the interpreter. However, this setup was found to be unreliable and bug-prone. One of the suggested reasons for this is that OS4OC is not very well maintained, and has remained in its early stages of development. As a result of this, it was decided to attempt to find an alternative interpreter for OpenSim. It was discovered that such a program existed within the architecture of OS4OC itself, namely OpenMetaverse.

OpenMetaverse is an open-source set of libraries which have been primarily designed to access OpenSim's core functionality [1]. This allows us to log in to an OpenSim simulator, impart instructions to an avatar, and access the avatar's surroundings. Like OS4OC, OpenMetaverse is .NET based. OpenMetaverse is simple and reliable, and allows for systems to be easily built on top of it.

2.6.1 Architecture

OpenMetaverse is composed of three main components: OpenMetaverse-Types, OpenMetaverse-Structured-Data and OpenMetaverse-Core. OpenMetaverse-Types is a set of common types which are required for 3D space manipulation. OpenMetaverse also includes a set of types necessary for communications between client and server nodes. OpenMetaverse-Structured-Data consists of functions for interpreting and translating objects to and from OpenSim's serialization format.

Perhaps the most important object within OpenMetaverse, with regard to this project, is the GridClient object. The GridClient provides access to an avatar's sensors, as well as its actions. The GridClient is composed of several manager instances. These managers are responsible for obtaining data relevant to their allocated area, as well as implementing commands on behalf of the grid client. The grid client is composed of over 17 of these managers, and more may be added as OpenMetaverse extends its feature set.

The Grid Manager is used to access information about the grid which the client is connected to. This includes the local time, a list of map items, the position of the sun and the height of the water. The Object Manager maintains a list of all of the prims (objects) within a set radius of the avatar. The Object Manager also allows prims to be edited by the avatar, given that it has permission to do so.

The Avatar Manager, namely "Self", maintains the avatar's interactions with its environment. This manager consists of several actions, which are derived from two underlying managers: Movement and Actions. There currently exist a few issues with the movement of the avatar, such as infinite movement. For example, calling an action to move an avatar forward will result in that avatar moving until stop is called. It is currently not possible to assign a stop position or a maximum movement distance to an avatar. This may be an issue for certain applications which require precise movements. This issue may also be amplified by network delays, as OpenMetaverse connects to the simulator over a network stream.


2.7 XStream

As discussed before, this project is composed of several individual sub-systems, most of which are built in different languages. Communication protocols between these systems are therefore an essential factor in this project. It is vital that such communication protocols be efficient in handling large quantities of data. This is why this project chose the reliable TCP over UDP as its communication layer. What was needed next was an efficient method for serializing objects effectively.

One such library for serializing objects to and from XML is the open-source XStream [9]. XML (Extensible Markup Language) is a set of rules which allow objects of any form to be translated into a machine-readable form [11]. It makes extensive use of tags which define the object they enclose. Using XStream, it is possible to transfer objects from one system to another. Another key benefit of XStream is that it is available as both .NET and Java libraries. XStream is also designed to serialize and deserialize objects efficiently and quickly.

XStream's strength lies in its ability to translate complex objects to and from XML with little configuration. However, this strength coexists with a weakness. For example, if a person object were being transferred from OpenMetaverse to the Java client, both services would have to maintain a local person object interface identical in structure. This is essential to XStream, as it would otherwise not know how to rebuild the object from the XML stream. However, after testing, it was determined that the functionality of the objects could differ, as long as the core data structures of the objects were the same.

Figure 2.3: XStream Example
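As a concrete illustration of the pattern in Figure 2.3, the following minimal Java sketch serializes and rebuilds an object. The Person class is a hypothetical stand-in, while alias, toXML and fromXML are XStream's standard API; newer XStream releases additionally require the allowTypes call shown.

import com.thoughtworks.xstream.XStream;

public class XStreamDemo {

    // Hypothetical data class; an identically structured class must
    // exist on the C# side for the stream to be rebuilt there.
    static class Person {
        String name;
        int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        XStream xstream = new XStream();
        // The alias is the agreed cross-platform name for the type,
        // so the XML does not embed the Java class path.
        xstream.alias("person", Person.class);
        xstream.allowTypes(new Class[] { Person.class });  // needed on newer XStream versions

        String xml = xstream.toXML(new Person("Alice", 30));
        System.out.println(xml);   // <person><name>Alice</name><age>30</age></person>

        Person copy = (Person) xstream.fromXML(xml);
        System.out.println(copy.name);  // Alice
    }
}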


Chapter 3: Core Architecture

In this section, a brief overview of the project system will be given as it developed through its various stages, from the pre-existing system to the final system. System implementation details will be kept brief and concise where possible, with specific implementation details being described in the following section.

3.1 Rebuilding Foundations

The initial goal of this project was to build an extensive set of communication protocols on top of a pre-existing system. The existing system was composed of three core subsystems: OpenSim, OpenSim4OpenCog (OS4OC), and a simple agentspeak agent within AgentFactory. These communication protocols should allow a Java client to impart instructions to an avatar within OpenSim, as well as maintain a complex world model.

To connect the two components OS4OC and Agent Factory, this system implemented a simple TCP channel carrying XML. This channel was used to transfer actions from Agent Factory to OS4OC, which would then be decoded by OS4OC, and the relevant action would be called on the grid client. The grid client then forwards the action request directly to the OpenSim server. This system was built into an existing OS4OC program, which consisted of a simple console-based interface. This interface allowed the user to issue commands, including world descriptors such as 'describe all' and actions such as 'move forward'.

This task began with testing within the OS4OC environment itself; however, it quickly became apparent that OS4OC was not a stable platform. Random crashes, core library errors and performance issues were just some of the problems encountered with OS4OC. The main issue with OS4OC was that it had been specifically developed for another AI platform called OpenCog, and was still in development.

These issues with the most critical component of the system, the OpenSim interpreter, led the project to consider finding an alternative solution. This task began with research into the problematic component itself, OS4OC, in particular how it connects to OpenSim. Consequently, it was discovered that OS4OC utilizes a small set of libraries called OpenMetaverse, which are specifically tailored for connecting to OpenSim. Using this set of libraries, one can control an avatar's movements as well as access real-time data about its surroundings. This discovery also allowed for the newest and, most importantly, the most stable release of OpenMetaverse to be deployed.

Further research was also done into the possible existence of a Java equivalent to OpenMetaverse, as such a system would substantially benefit the project by removing the necessity for cross-platform translations. There did exist such a system, called libsecondlife-j [17], which was an attempt to port the existing OpenMetaverse platform to Java. Unfortunately this project was over four years old as well as inactive, and was consequently near impossible to set up due to outdated library dependencies. It was decided that implementing the project using OpenMetaverse would be much less time-consuming, as well as being much more likely to result in a stable and effective system.


3.2 OpenMetaverse & XStream

OpenMetaverse's stability and large and active community base made it the clear choice for the OpenSim interpreter. However, one issue still remained: how to integrate OpenMetaverse, a C#-based system, with Agent Factory, a Java-based system. What was now needed was a mechanism providing the most simple and extendable means of integrating these two vital components. This began by researching how the preceding system achieved such an integration, as its interpreter, OS4OC, was also C#-based, i.e., through TCP and XML. That mechanism integrated the two components by using TCP communication with XML, both of which are common components of Java and C#. It built up a set of protocols from scratch, with complex XML parsers existing on both platforms in order to serialize/deserialize the data. Before attempting to set up custom XML parsers to translate data and requests to and from XML, it was decided to research whether an easier alternative existed.

Several potential possibilities were explored, including Remote Procedure Calls (RPC) and TCP/UDP communication mechanisms. Research into RPC revealed that setups involving different programming languages could be quite difficult and time consuming. Since the functions involved were mainly actions of a simple nature, such as 'Move' and 'Say', it was decided that these methods could be invoked through simple XML structures. This concept was further realized by the discovery of a popular XML parser called XStream. XStream allows complex data structures of all types to be parsed to and from XML. Using XStream, the system could transfer both sensor objects, i.e., real world data such as a house, and action objects, without the need to construct a complex XML parsing system to serialize/deserialize the objects.

This was a huge benefit to the progress of the project, as it allowed the project to concentrate more on the data structure of the objects being transmitted, without worrying about their complexity or how to parse them to XML. Having one form of communication for both method invocation and object retrieval procedures allowed the project to focus on developing and maintaining one communication system. As a result, XStream can be seen as the system's primary marshaling service, and is essential to allowing the project to develop quickly and effectively, as well as allowing the project to be extended easily in the future.
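On the Java side, the resulting marshaling pattern reduces to a few lines, as the following sketch shows under stated assumptions: the MoveAction class, host and port are illustrative, and a real deployment would also need message framing so the server knows where one XML document ends.

import com.thoughtworks.xstream.XStream;
import java.io.PrintWriter;
import java.net.Socket;

public class ActionSender {

    // Hypothetical action object; its C# counterpart must share the
    // same field structure so XStream can rebuild it on the server.
    static class MoveAction {
        String agentId;
        float x, y, z;
        MoveAction(String agentId, float x, float y, float z) {
            this.agentId = agentId; this.x = x; this.y = y; this.z = z;
        }
    }

    public static void main(String[] args) throws Exception {
        XStream xstream = new XStream();
        xstream.alias("move", MoveAction.class);   // agreed cross-platform name

        try (Socket socket = new Socket("localhost", 9000);   // assumed server address
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            // Serialize the action to XML and push it down the TCP stream.
            out.println(xstream.toXML(new MoveAction("avatar-1", 128f, 128f, 21f)));
        }
    }
}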

3.3 Proposed Architecture

Having established a means for connecting to OpenMetaverse from Java, the next step was to devise an architecture which would best complement the capabilities of both OpenMetaverse and XStream. This process began by developing a core communication layer, which would allow transparent access to OpenMetaverse. This layer was named the 'Communication Layer', and is composed of TCP streams, using XML to transfer data between nodes. This layer also provides plug-in functionality, where any number of components can 'plug in' to the communication layer, allowing access to OpenMetaverse, and consequently access to OpenSim itself.

The communication layer is further broken down into two sub-layers, namely the actions and sensor layers. The reason for dividing the communication layer into these two components is to provide a simple decomposition of OpenMetaverse's functionality. Each of these three communication layers is managed by an associated management service. These management services are responsible for implementing and maintaining communications, while at the same time providing extra services related to that layer.


Figure 3.1: Proposed Architecture

Since the communication layer is essentially a wrapping service for the sensor and actions layers, the communication manager's only responsibility is to initialize and maintain these two layers. The Actions and Sensor Managers differ in functionality depending on whether they exist as server or client instances. For example, an Actions Manager existing on the client side has the simple task of forwarding actions to the OpenMetaverse server, whereas the Actions Manager existing on the server side is responsible for carrying out those actions, as well as ensuring concurrency and other related issues are handled. On both the client and server, the Sensor Manager is responsible for maintaining an up-to-date world representation at all times.

These communication layers rely on the capabilities of XStream to transfer data. XStream instances exist at the points where the communication managers connect to the communication layer, providing a means for serializing and deserializing objects and data. In this case, XStream can be seen as a universal plug, which allows a communication manager to plug into the communication layer regardless of its core language, i.e., Java or C#. XStream achieves this through its 'alias' associations, which associate an agreed name for a data type with a reference to the local representation of that type, e.g., Alias('String', typeOf(string)).

This architecture is designed to be able to run across two machines. This was primarily due to the requirements of OpenMetaverse and OpenSim, i.e., a Windows environment. Since the communication layer was implemented through TCP, the architecture could be spread across two machines through static IPs. Consequently the system was divided into two components, a Server and a Client. The Server wraps OpenSim and OpenMetaverse together, as they are the only two components which require a Windows platform to run on. This allows the client to be platform independent, which is also aided by the fact that Agent Factory is built in Java, which strives to be platform independent. This architecture can be seen in Figure 3.1.

Here we can see four primary components: the EIS environment, the GUI, OpenMetaverse and the OpenSim virtual environment. The three components EIS, GUI and OpenMetaverse are connected by one shared communication layer. This allows both the EIS environment and the GUI to impart instructions to avatars, as well as retrieve world representations, through OpenMetaverse. OpenMetaverse directly connects to OpenSim through its grid clients, as mentioned in Section 2.6. This architecture is designed to be run with the server existing on one machine and the client existing on another. This is primarily due to performance issues which may occur on some machines. However, the system is perfectly capable of running on one machine, where that machine has the necessary resources and power to do so.

3.4 Communications Layer

In the previous section, the communication layer was introduced as a means to connect AgentFactory to OpenSim through a combination of XStream, TCP and OpenMetaverse. The Client/Server architecture was also introduced as a means to allow for platform-independent clients, as well as spreading out the workload of the overall system. This section will go into further detail on the mechanisms behind this communication layer, focusing primarily on how the layer is implemented on both the server and the client, as well as the types of data that are sent across the various communication streams.

3.5 Sensor Manager

3.5.1 OpenMetaverse

On the server side, the Sensor Manager is responsible for maintaining and updating its world model every 200 ms. It does this by translating the data available from OpenMetaverse into a custom world model object, which can then be interpreted by the Java client. During a world update, the sensor manager will choose the best candidate, i.e., a grid client, from a list of currently logged-in grid clients in order to retrieve world data from the OpenSim model. The main issue which arose from implementing this system was communication costs. For example, the Sensor Manager should not send the world model to the client if that model has not changed since the last time it was sent. To achieve this, every object within the world model, including the world model itself, was made comparable to other instances of the same type. This allowed the sensor manager to determine when the state of the world has changed, consequently allowing it to send the world model only in that situation.
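The send-only-on-change behaviour amounts to caching the last transmitted model and comparing before each send. A simplified Java sketch follows; WorldModel and the transport call are hypothetical stand-ins for the real classes.

import java.util.Objects;

public class SensorLoop {

    // Hypothetical world model; equals() must compare deeply, which is
    // why every contained object type was made comparable.
    static class WorldModel {
        String snapshot;
        WorldModel(String snapshot) { this.snapshot = snapshot; }
        @Override public boolean equals(Object o) {
            return o instanceof WorldModel
                && Objects.equals(((WorldModel) o).snapshot, snapshot);
        }
        @Override public int hashCode() { return Objects.hash(snapshot); }
    }

    private WorldModel lastSent;

    void tick(WorldModel current) {
        // Only push the model over TCP when the state actually changed.
        if (!current.equals(lastSent)) {
            System.out.println("sending " + current.snapshot);  // stand-in for the TCP send
            lastSent = current;
        }
    }

    public static void main(String[] args) {
        SensorLoop loop = new SensorLoop();
        loop.tick(new WorldModel("v1"));  // sent
        loop.tick(new WorldModel("v1"));  // suppressed: unchanged
        loop.tick(new WorldModel("v2"));  // sent
    }
}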

The server-side sensor manager is also responsible for managing the world sensor objects. In particular, this manager allows these sensor objects to be created and manipulated in real time. It achieves this by rebuilding sensor objects during every world update. Since world updates only occur when something changes within the OpenSim environment, such as an avatar moving or a prim being created, sensors are only updated where necessary. In order to determine when a user has changed the sensor script, which is located within the prim's description, more world comparators were added to detect relevant changes to sensors.

The script interpreter is located within the Sensor object itself, which is in turn an extension of a prim object. This allows the sensor object to be built within any primitive object, as its script is located within the prim's description. The sensor object works by comparing its position to that of all the avatars within the grid. If an avatar's position is within its sensing range, it adds that agent's name to one of two lists of agent names. These lists are used to determine whether an avatar within the range of the sensor is moving or not moving. This is particularly useful for testing certain AAL environments, such as determining the last known location where the occupant was active. Due to the cost-effective way in which these sensors are updated, there can be a high number of sensors within a virtual environment, the only limit being the number of avatars, as the sensors need to check the position of every avatar during an update.
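The range test itself is simple geometry performed on each world update. The following Java sketch illustrates the idea; Vector3, Avatar and the list names are hypothetical stand-ins for the real world-model types.

import java.util.*;

// Schematic version of the sensor's per-update check.
public class RangeSensor {

    static class Vector3 {
        final double x, y, z;
        Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        double distanceTo(Vector3 o) {
            double dx = x - o.x, dy = y - o.y, dz = z - o.z;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    static class Avatar {
        String name; Vector3 position; boolean moving;
        Avatar(String n, Vector3 p, boolean m) { name = n; position = p; moving = m; }
    }

    final Vector3 position;
    final double range;
    final List<String> movingInRange = new ArrayList<>();
    final List<String> stationaryInRange = new ArrayList<>();

    RangeSensor(Vector3 position, double range) { this.position = position; this.range = range; }

    // Called on every world update: sort in-range avatars into the two lists.
    void update(List<Avatar> avatars) {
        movingInRange.clear();
        stationaryInRange.clear();
        for (Avatar a : avatars) {
            if (position.distanceTo(a.position) <= range) {
                (a.moving ? movingInRange : stationaryInRange).add(a.name);
            }
        }
    }

    public static void main(String[] args) {
        RangeSensor s = new RangeSensor(new Vector3(128, 128, 21), 5.0);
        s.update(Arrays.asList(new Avatar("bob", new Vector3(130, 128, 21), true)));
        System.out.println(s.movingInRange);  // [bob]
    }
}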

3.5.2 AgentFactory & EIS

On the client side, the Sensor Manager plays a more complex role. Here, the Sensor Manager must maintain a dynamic and up-to-date world representation, while at the same time providing extra functionality based on that data set. One such function is the manager's ability to generate a sub-world model based on an agent's location and a limited range from that location. For example, an avatar standing in a complex and vastly populated world may only be interested in a limited number of objects within a certain distance of its position. This can greatly reduce an agent's belief set, which is primarily based on its perception of its local world.

The Sensor Manager also provides multiple methods for identifying both avatars and objects. It achieves this by providing retrieval mechanisms which can take either a Name or a UUID. This allows the EIS implementation to keep its action set simple, as actions which are associated with objects or avatars can take any supported identification. However, as many objects are not named by default, identification is done through UUIDs unless otherwise requested.
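Both services amount to straightforward queries over the cached world model, as the following Java sketch illustrates; WorldObject and the method names are hypothetical.

import java.util.*;

// Illustrative client-side queries over the cached world model.
public class ClientSensorManager {

    static class WorldObject {
        String name; UUID id; double x, y, z;
        WorldObject(String name, UUID id, double x, double y, double z) {
            this.name = name; this.id = id; this.x = x; this.y = y; this.z = z;
        }
    }

    private final List<WorldObject> world = new ArrayList<>();

    // Sub-world model: only the objects within `range` of (x, y, z).
    List<WorldObject> nearby(double x, double y, double z, double range) {
        List<WorldObject> result = new ArrayList<>();
        for (WorldObject o : world) {
            double dx = o.x - x, dy = o.y - y, dz = o.z - z;
            if (Math.sqrt(dx * dx + dy * dy + dz * dz) <= range) result.add(o);
        }
        return result;
    }

    // Retrieval by either identifier: UUID by default, name when given.
    WorldObject find(String nameOrUuid) {
        for (WorldObject o : world) {
            if (o.id.toString().equals(nameOrUuid) || nameOrUuid.equals(o.name)) return o;
        }
        return null;
    }
}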

3.5.3 World Objects

World objects are a key component of the Sensor layer, as they provide a means for interpreting OpenSim world data into a customized and simplified form. World objects come in many forms, and have been designed to best represent a typical OpenSim scenario. The highest form of world object is the 'World' object itself. This object can be seen as a container for all other world objects, such as Prims, Agents, Avatars, Nodes, Sensors and Useables. When a client makes a request to the server through the sensor manager, the server's sensor manager will reply with an up-to-date world object.

In the first implementation, Sensor and Useable objects were extensions of Prim objects, the idea being that both Sensors and Useables are represented physically by Prim objects within the OpenSim environment. It was therefore logical that such objects should extend the object that they are based upon. However, when testing this system, there was a noticeable increase in the throughput of the sensor layer, which was detected through the GUI. It was quickly discovered that this was due to Prim data being duplicated where Prims had been implemented as Sensors or Useables: the world model was storing Prim data within its list of Prims as well as within its list of Sensors and/or Useables. Consequently, Prim data was removed from Useable and Sensor objects by making them their own objects, unrelated to a Prim. Instead, the Prim UUID is stored within the object, which allows the related prim object to be recalled from the list of Prims within the world object.
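The redesign replaces inheritance with a UUID reference, sketched below in Java terms (the real classes live in the C# server; the names here are illustrative).

import java.util.Map;
import java.util.UUID;

// Hypothetical world-object types illustrating the redesign.
class Prim {
    UUID id;
    double x, y, z;        // full prim data lives only here
}

// Before: `class Sensor extends Prim` duplicated all prim data inside
// the world model's sensor list. After: the sensor merely references
// the prim it is built into, and the prim is recalled on demand.
class Sensor {
    UUID primId;           // reference, not a copy
    double range;

    Prim resolve(Map<UUID, Prim> prims) {
        return prims.get(primId);   // look the prim up in the world's prim list
    }
}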


Figure 3.2: World Objects

3.6 Actions Manager

3.6.1 OpenMetaverse

On the server side, there are two core processes running, both of which are responsible for listening for and executing actions accordingly. The listener thread maintains constant communication with the client, and is responsible for simply adding each received action to a queue of actions. The execution thread is responsible for actually carrying out the actions one by one, removing them from the queue as it works its way through them. The manager was designed in this way because of the bottleneck of the single TCP connection. For example, processing an action can take any amount of time, and may often be costly in nature. However, adding an action to a queue is simple and cheap. This is why the listener only adds the action to the queue, and then goes back to listening for more incoming actions, allowing actions to be received by the server at intervals as short as 100 ms.
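This producer-consumer split maps naturally onto a blocking queue. A minimal Java sketch follows; the Action interface is a hypothetical stand-in for the server-side C# action classes.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ActionPipeline {

    // Hypothetical action type; on the real server these are the
    // C# action objects deserialized from the TCP stream.
    interface Action { void execute(); }

    private final BlockingQueue<Action> queue = new LinkedBlockingQueue<>();

    // Listener thread: cheap enqueue, then straight back to the socket.
    void onActionReceived(Action a) {
        queue.offer(a);
    }

    // Execution thread: drains the queue one action at a time,
    // however long each action takes to run.
    void startExecutor() {
        Thread executor = new Thread(() -> {
            try {
                while (true) {
                    queue.take().execute();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        executor.setDaemon(true);
        executor.start();
    }
}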

As discussed before, XStream allows objects to differ in functionality, a trait which is put to use within the server-side actions. Here, actions are based around one core function, 'execute'. This function is common to all actions, which makes the job of the executing thread much easier, since it needs to call only one generic function to execute any action. The content of this execute function differs per type of action. A sketch of this producer-consumer design is given below.
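
The following is an illustrative Java sketch of the listener/executor pair; the real server side is built on the .NET-based OpenMetaverse stack, so Java is used here purely for consistency with this report's other listings, and all names are assumptions.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

abstract class Action {
    String agentUUID;   // which avatar should carry out the action
    String type;        // e.g. "Move", "Use"

    // The one generic entry point: the executor never inspects the subtype.
    abstract void execute();
}

class ActionManagerSketch {
    private final BlockingQueue<Action> queue = new LinkedBlockingQueue<>();

    // Listener thread: a cheap enqueue, then straight back to the socket.
    void onActionReceived(Action action) {
        queue.offer(action);
    }

    // Execution thread: drains the queue one action at a time.
    void runExecutor() throws InterruptedException {
        while (true) {
            Action next = queue.take();   // blocks while the queue is empty
            next.execute();               // may take milliseconds to minutes
        }
    }
}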

Some actions require constant processing until their completion, which may take from milliseconds to minutes depending on the type of action. In particular, the movement action requires constant monitoring until its completion. This is due to the nature of the movement protocols within OpenMetaverse itself. OpenMetaverse only supports vector-based movements, which means that one can only instruct an avatar to move in a certain direction at a certain speed, not to a certain point. The problem here is determining when to stop an avatar once it has reached its destination.

To achieve this, each movement action is executed as a thread, which stops after reaching one of the following conditions (a sketch of this monitor loop follows the list):

1. Reached its destination

2. Moved further than its estimated distance

3. Has stopped moving. (e.g. Walking into a wall.)

4. Has timed out
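
The following is a minimal Java sketch of such a monitor thread. The AvatarHandle wrapper, the one-unit arrival threshold and the 30-second timeout are illustrative assumptions, not values taken from the project.

class MoveMonitorSketch extends Thread {

    interface AvatarHandle {   // stands in for the OpenMetaverse client API
        double x();
        double y();
        void moveTowards(double x, double y);   // vector-based movement
        void stopMoving();
        String uuid();
    }

    private final AvatarHandle avatar;
    private final double destX, destY, estimatedDistance;

    MoveMonitorSketch(AvatarHandle avatar, double destX, double destY) {
        this.avatar = avatar;
        this.destX = destX;
        this.destY = destY;
        this.estimatedDistance = Math.hypot(destX - avatar.x(), destY - avatar.y());
    }

    String avatarUUID() { return avatar.uuid(); }

    @Override
    public void run() {
        long start = System.currentTimeMillis();
        double travelled = 0;
        double lastX = avatar.x(), lastY = avatar.y();
        avatar.moveTowards(destX, destY);
        while (!isInterrupted()) {
            try { Thread.sleep(100); } catch (InterruptedException e) { break; }
            double step = Math.hypot(avatar.x() - lastX, avatar.y() - lastY);
            travelled += step;
            lastX = avatar.x(); lastY = avatar.y();
            if (Math.hypot(destX - lastX, destY - lastY) < 1.0) break;  // 1. arrived
            if (travelled > estimatedDistance + 1.0) break;             // 2. overshot
            if (step == 0.0) break;     // 3. stopped moving, e.g. walked into a wall
                                        //    (a real monitor may need a grace period)
            if (System.currentTimeMillis() - start > 30_000) break;     // 4. timed out
        }
        avatar.stopMoving();
    }
}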

Consequently, it was decided to add a new variable to the avatar object to indicate whether that avatar is moving. In addition, a function was defined within the client-side Sensor Manager which populates a list of the objects that are beside that avatar. Using these two mechanisms in conjunction with the EIS belief set, an agent can determine whether it has reached its destination or not. This type of system allows a minimum amount of logic to be maintained on the server side, while at the same time allowing agents to define their own complex movement protocols.

Despite this, enabling autonomous server-side actions resulted in an unforeseen bug, where new movement actions became entangled with previous, still-active movement actions. This resulted in confused multi-directional movements, which also meant a complete loss of movement control for that avatar. It was clear that certain actions needed to be stopped before new ones could start. This was achieved by storing the threads in an array, which is continually checked for both finished threads and conflicting threads, either of which is terminated and removed from the list. This housekeeping pass is sketched below.
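
Building on the monitor sketch above, the housekeeping pass might look as follows; the report stores the threads in an array, while this illustration uses a java.util.List, and the interrupt-then-join termination strategy is an assumption.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Prunes finished movement threads and terminates any active movement
// belonging to the same avatar before a new one is started.
class MovementRegistrySketch {
    private final List<MoveMonitorSketch> active = new ArrayList<>();

    synchronized void start(MoveMonitorSketch incoming) {
        Iterator<MoveMonitorSketch> it = active.iterator();
        while (it.hasNext()) {
            MoveMonitorSketch t = it.next();
            if (!t.isAlive()) {
                it.remove();                 // finished thread
            } else if (t.avatarUUID().equals(incoming.avatarUUID())) {
                t.interrupt();               // conflicting thread
                try {
                    t.join();                // let it stop the avatar first
                } catch (InterruptedException ignored) { }
                it.remove();
            }
        }
        active.add(incoming);
        incoming.start();
    }
}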

3.6.2 AgentFactory & EIS

On the client side, actions exist as simple objects which are sent to the OpenMetaverse server, where they are interpreted and then executed. Since the introduction of multiple-agent support, all actions are associated with an agent UUID, which allows the server to determine which avatar should carry out the action. Action objects follow an object-oriented design: the parent Action simply consists of an agent UUID and a TYPE value. The UUID identifies the avatar that should carry out the action, while the TYPE value tells the server which of the numerous action types it has received.

One fact was known about the incoming actions on the server side: the objects would all be extensions of a parent Action object. Immediate tests were carried out to determine whether polymorphism was maintained, i.e., whether an object that is a descendant of the Action object could be cast to an Action object while retaining its data. These tests proved that descendants of the Action object can in fact be converted to and from their parent type without loss of data. This also means that an incoming object can first be cast to its parent type, Action, to determine its actual type, such as 'Move', and can then be cast to its appropriate type. The sketch below recreates this test.
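
The following self-contained Java sketch recreates that test using XStream's toXML/fromXML calls (the XStream distribution of the time); the Move subtype and its field values are illustrative, and Action/Move are declared locally so the test stands alone.

import com.thoughtworks.xstream.XStream;

public class CastTestSketch {
    static class Action { String agentUUID; String type; }
    static class Move extends Action { double x, y, z; }

    public static void main(String[] args) {
        Move original = new Move();
        original.agentUUID = "example-uuid";   // illustrative value
        original.type = "Move";
        original.x = 12.5;

        XStream xstream = new XStream();
        String xml = xstream.toXML(original);            // client: serialize the subtype

        Action received = (Action) xstream.fromXML(xml); // server: view it as the parent
        if ("Move".equals(received.type)) {
            Move move = (Move) received;                 // downcast; the data survives
            System.out.println("x after round trip: " + move.x);
        }
    }
}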

Here, the Action Manager is primarily responsible for ensuring that actions are delivered to the server, even during intense bursts of action calls. This is achieved by queueing actions and sending them to the server on a first-come-first-served basis every 100ms. This small delay allows other client processes to consume valuable resources, while at the same time reducing the overall CPU usage.
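
A minimal sketch of this 100ms dispatch loop follows, using a scheduled executor; the send method is a placeholder standing in for the real XStream-over-TCP write, and all names are illustrative.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class ClientDispatchSketch {
    private final BlockingQueue<Object> pending = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    void enqueue(Object action) {
        pending.offer(action);   // cheap; never blocks the calling agent thread
    }

    void start() {
        // Every 100 ms, send at most one queued action, first come first served.
        timer.scheduleAtFixedRate(() -> {
            Object next = pending.poll();
            if (next != null) {
                send(next);
            }
        }, 0, 100, TimeUnit.MILLISECONDS);
    }

    private void send(Object action) {
        // Placeholder: serialize with XStream and write to the TCP socket.
    }
}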

3.6.3 Action Objects

Action objects were designed primarily for use within the actions layer. They can be viewed as packaged instructions which are interpreted and executed by the server. In this way, action objects can be seen as a form of remote procedure call, designed from the ground up using XStream and TCP. The actions implemented through these action objects relate directly to avatars that are logged in and controlled by that server. Such actions range from instructing the avatar to move to a certain point, to instructing the avatar to interact with an object.

Since these actions are directly related to a specific avatar, it was decided to associate each action object with an avatar by means of a UUID variable. As previously discussed, the UUID is OpenSim's way of identifying each world entity, assigning each one a unique id. Consequently, actions were constructed in a hierarchical decomposition manner, where the root action contains the avatar's UUID. This ensures that all actions can be associated with an avatar. In addition to this UUID variable, a Type variable was added to the parent action object, which denotes what type of action it is.

A total of five avatar actions have been implemented through this system. The simplest of these are the three actions 'Sit', 'Stand' and 'Say'. The 'Sit' and 'Stand' actions instruct the avatar to sit and stand, while the 'Say' action broadcasts a message within the OpenSim environment and is primarily used for debugging purposes. Both the Sit and Stand actions can be seen as empty action objects, as they implement no extra variables beyond what exists in the parent action object.

The more complex actions are the 'Move' and 'Use' actions. Rather than implement a number of different types of movement action, such as Stop, MoveToAvatar and MoveToObject, one generic Move action was constructed around a trait common to all movements, i.e., moving to a point. The Use action can be used to set an object's state to one of the object's available states; for example, an avatar may wish to change the state of a lightbulb from 'off' to 'on'. The following figure shows this hierarchical decomposition of Action objects.

Figure 3.3: Action Objects

3.7 Interactive Objects

It was felt that object interaction would be a useful feature to add to the system. Since the primary goal of this project is to implement an AAL scenario within OpenSim, one such interaction could be implementing sensors as objects. These objects would be used to detect nearby avatars and could consequently be used to implement custom AAL scenarios. Another useful object behavior would be objects that react to an avatar's input, e.g., an avatar turning on a light switch. Consequently, two types of object behavior were implemented into the existing system, and these will now be discussed further.

3.7.1 Sensor Objects

Ambient Assisted Living scenarios benefit hugely from the use of autonomous sensors. In particular, such sensors allow us to provide ubiquitous environments which let the inhabitant live comfortably within their own home. Since the only existing form of sensing was through an agent's own sensing abilities, it was decided that a new form of sensing should be implemented.

These sensors should mimic real-life sensors in as many ways as possible. Ideally, they should be creatable from the OpenSim virtual environment itself, i.e., a user should be able to create and manipulate a sensor object from an OpenSim viewer interface. The obvious choice here is OpenSim's prim objects, which allow for all of these manipulations and more. These sensors are defined by the following functionality:

1. Maximum range for the sensor

2. Custom message when triggered

3. Maintain a list of moving avatars within range

4. Maintain a list of stopped avatars within range

These traits require a form of configuration from the user/creator of the sensor object. Consequently, it was decided to use a custom script engine, which reads a set of parameters from the object's description field and creates a sensor object from those parameters. When a user creates an object within a viewer, such as the Hippo OpenSim viewer, the user is presented with the option to give that object a name and a description, as well as many other defining characteristics. This system uses the description field as a script container. When the system detects that a user has entered a sensor script into the description, it immediately constructs a sensor object, which is then propagated to the world model. For example, the following script can be used to send the message “hello world” whenever an avatar comes within 10 meters of the object:

<type=sensor;range=10;print=hello world;>

This script can be placed anywhere within the description text and is designed to be minimal, as the description field only allows for one line of text; see Figure 3.4. The two tags < and > define the beginning and end of the script, with each internal statement separated by a semicolon. Statements are defined in the format “Name=Value”. On the Java client side, sensors provide a list of the avatars which have activated them at that time. The key parameter here is the type parameter, which in this case is set to sensor. The reason for having a type parameter is that there can exist more than one type of object behavior, e.g., Use-able objects. A parser sketch is given below.
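
A minimal Java parser sketch follows, assuming exactly the format described above (one <...> block, semicolon-separated Name=Value statements); the class name is illustrative.

import java.util.HashMap;
import java.util.Map;

class ScriptParserSketch {
    // Returns null when the description contains no script.
    static Map<String, String> parse(String description) {
        int open = description.indexOf('<');
        int close = description.indexOf('>', open + 1);
        if (open < 0 || close < 0) {
            return null;
        }
        Map<String, String> params = new HashMap<>();
        for (String statement : description.substring(open + 1, close).split(";")) {
            String[] pair = statement.split("=", 2);
            if (pair.length == 2) {
                params.put(pair[0].trim(), pair[1].trim());
            }
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse("<type=sensor;range=10;print=hello world;>");
        System.out.println(p.get("type"));    // sensor
        System.out.println(p.get("range"));   // 10
        System.out.println(p.get("print"));   // hello world
    }
}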

Figure 3.4: Making a scripted prim (Hippo OpenSim viewer)

3.7.2 Use-able Objects

Many agent scenarios require various forms of interaction with the environment. This project aimed to provide a generic mechanism for adding custom reactive behaviors to objects, which would be activated through various agent actions. It was decided to achieve this in the same way that objects were transformed into sensor objects, i.e., through scripts within the object's description field. There were several factors to consider during this development process, such as how the objects could react, and how the agent would know that they had reacted.

It was decided to use a multiple-state system, where objects can take on various states such as 'On' or 'Off'. Once an agent interacts with an object through its 'Use' action, the object changes its state. For example, an agent using a light switch may result in that light switch changing its state from 'on' to 'off'. These objects were denoted Use-ables, and are composed of the following properties:

1. Maximum distance at which an avatar can use the object

2. Last time object was used

3. Last avatar to use the object (Name and UUID)

4. Current State of the object (e.g. On)

5. List of available states (e.g. On,Off,Idle)

The Useable's states are defined by the variable 'states', which can define one to many states separated by commas ','. For example, the following script can be used to define a traffic-light prim:

< type=useable; range=5; states=green,orange,red,blinking-red; >
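
A sketch of the resulting state handling: the states parameter is split on commas, the 'use' action advances the current state with wrap-around, and 'setstate' jumps directly to an index. Names are illustrative.

class UseableStatesSketch {
    private final String[] states;
    private int current = 0;

    UseableStatesSketch(String statesParam) {
        // e.g. "green,orange,red,blinking-red" from the script's states variable
        this.states = statesParam.split(",");
    }

    String currentState() {
        return states[current];
    }

    // 'use' action: advance to the next available state, wrapping around.
    String use() {
        current = (current + 1) % states.length;
        return currentState();
    }

    // 'setstate' action: jump directly to the state at the given index.
    String setState(int index) {
        current = index;
        return currentState();
    }
}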

Both Useable objects and Sensor objects react visually when their state changes. Sensors glow when a moving avatar is detected within their range, while Useable objects glow when an avatar acts upon them or when their state changes. This behavior was implemented primarily for debugging purposes, allowing the user to know when a sensor or Useable has been set up correctly.

3.8 GUI

One of the earlier goals of the project was to develop a simple interface which would allow the user to impart instructions to an avatar while at the same time visualizing the avatar's local world. The GUI was originally designed for testing the first stage of the system, i.e., the Communication Layer and OpenMetaverse. That system was only capable of maintaining control over one agent at a time, which is reflected in the interface of the GUI itself. The GUI was designed before the communication managers, maintaining access to the OpenMetaverse server through its own mechanisms. These mechanisms were later developed into the Sensor and Action managers, which existed independently of the GUI. This led to the creation of the Communication Manager, which packages these two managers into one independent system.

The GUI allows simple interaction with the avatar, as well as maintaining a 2D map of the virtual environment. The 2D map preserves the scale of objects with regard to their width and length. Clicking on the map sets a waypoint, indicated by a red dot, which the avatar will automatically head towards. Through this interaction, paths, indicated by red lines, can also be set by adding more waypoints. Clicking the large 'Stop' button below the map clears all waypoints and instructs the avatar to stop moving.

In addition to visualizing the world from an agent's perspective, the GUI allows the client to change servers at run time. The GUI also played a vital role in visualizing the communications throughput of the sensor manager. Finally, it provides a simple text-based interface, which can be used to impart instructions to the avatar, describe objects in greater detail and, most importantly, call for the OpenMetaverse server to reset. This GUI was a key contributor to the development of a system which is both stable and efficient. Its ability to visualize data in real time, with negligible effect on the performance of the machine, made it a valuable asset to the project.

Figure 3.5: GUI

Chapter 4: Agents and OpenSim

4.1 EIS

In order to allow various CLF-compliant Agent Programming Languages to access OpenSim, the Communication Manager was integrated into an EIS environment. To integrate the client Communication Manager with EIS, a few extra mechanisms were implemented on top of it. This section discusses these fundamental processes.

4.1.1 Perceptions

The Sensor Manager has been extended to include mechanisms for populating beliefs based on queries to its world model. Such functions return a vector of beliefs based on an agent's location and a given distance from that location. One such function returns a set of beliefs consisting of only one type of belief, See(UUID). This function essentially provides a list of references to the objects and avatars within the agent's view range, which reduces the agent's initial belief set to the bare essentials. Should the agent wish to generate further beliefs based on these 'see' beliefs, it can do so through its Describe(UUID) action. This action uses a similar function within the Sensor Manager, the difference being the complexity of the percepts returned. A sketch of the see-belief generation is given below.
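
The following sketch reuses the illustrative world-model classes from Chapter 3 together with the EIS iilang classes (eis.iilang.Percept and eis.iilang.Identifier); the method name and view range are assumptions.

import java.util.LinkedList;
import java.util.List;

import eis.iilang.Identifier;
import eis.iilang.Percept;

class SeeBeliefsSketch {
    // One see(UUID) percept per entity within the agent's view range.
    static List<Percept> seePercepts(SensorManagerSketch sensors,
                                     Entity agent, double viewRange) {
        List<Percept> beliefs = new LinkedList<>();
        for (Entity e : sensors.subWorld(agent, viewRange)) {
            beliefs.add(new Percept("see", new Identifier(e.uuid)));
        }
        return beliefs;
    }
}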

The agent's belief set is composed of several different beliefs, consisting of both world and personal beliefs. To further support the agent's concept of movement, several beliefs are populated based on an entity's movement, such as 'Moving(UUID)' and 'Stopped(UUID)', which denote whether that entity is moving or not. In the case of the agent itself, several 'self' beliefs are populated, which represent the agent's beliefs about itself.

• selfPosition(X, Y, Z) : The agent's current position within OpenSim.

• selfName(Name) : The name of the avatar which the agent is associated with.

• selfState(moving | stopped) : Indicates whether the agent's associated avatar is currently moving or stopped.

• selfUUID(UUID) : Represents the UUID of the avatar associated with the agent.

Another concept integrated into this EIS environment interface is spatial awareness. The idea is to introduce beliefs into the agent's belief set which make it more aware of objects within certain distances of its position. Consequently, beliefs are populated which indicate which objects the agent is beside, i.e., within a distance of three OpenSim units (1-2 meters). This belief is denoted Beside(UUID). Any other objects that the agent sees are simply represented by the 'See(UUID)' belief. The agent can expand upon these 'See(UUID)' beliefs by issuing the special action Describe(UUID), which results in a new set of beliefs being added to the agent's belief set based on that UUID. The beliefs resulting from a Describe(UUID) action can include the following:

• description(UUID, Description) : Associates an entity's* UUID with its description.

• moving(UUID) : Indicates that the entity with that UUID is moving.

• stopped(UUID) : Indicates that the entity with that UUID is not moving.

• name(UUID, Name) : Associates an entity's UUID with its Name.

• type(UUID, Type) : Associates an entity's UUID with its Type.

• position(UUID, X, Y, Z) : Associates an entity's UUID with its position.

• see(UUID) : Indicates that the entity with that UUID is within the agent's visibility range.

• state(UUID, State) : Associates a Useable's UUID with its State.

*Where an entity can represent any physical object within OpenSim, such as an avatar or aprim.

It was decided that beliefs relating to Sensor objects should be maintained by a single agent. This was essentially due to the cost of generating beliefs: there may be numerous sensors, which could impinge on performance if all agents were required to maintain beliefs about each sensor object. Consequently, an agent named “Sensor Agent” is the only agent which has access to beliefs about sensors. It is up to the developer to determine how that Sensor Agent should handle those percepts, such as choosing which agents to share those beliefs with. This routing is sketched below.
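
The routing itself can be sketched as a simple guard during percept generation, reusing the World and Sensor sketches from Chapter 3; the method name, the "sensor" percept and the hard-coded agent name are illustrative assumptions.

import java.util.LinkedList;
import java.util.List;

import eis.iilang.Identifier;
import eis.iilang.Percept;

class SensorRoutingSketch {
    private static final String SENSOR_AGENT = "sensor_agent";

    static List<Percept> perceptsFor(String agentName, World world) {
        List<Percept> percepts = new LinkedList<>();
        // ...common world percepts for every agent would be gathered here...
        if (SENSOR_AGENT.equals(agentName)) {
            // Only the Sensor Agent receives sensor-derived beliefs.
            for (Sensor s : world.sensors) {
                percepts.add(new Percept("sensor", new Identifier(s.primUUID)));
            }
        }
        return percepts;
    }
}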

4.1.2 Actions

The set of agent actions which were implemented comprises numerous avatar movement actions as well as interaction actions. For movement, a powerful but simple-to-use action was implemented, namely 'MoveTo'. This action takes the UUID of an object, an agent or any other physical entity which exists within the OpenSim environment. It is essentially a universal method which simplifies and reduces the agent's set of movement actions to one single action. This action relies on the server to determine where and when to stop the avatar, i.e., when it has reached its destination. However, should this method fail, the agent can stop itself through a 'Stop' action.

Other actions include actions which make the avatar sit and stand up, as well as actions designed to allow the agent to interact with certain objects through the 'Use(UUID)' and 'SetState(UUID, StateNumber)' commands. The 'Use' command simply changes the object's state to the next available state, e.g., from 'On' to 'Off', whereas SetState allows more control by allowing the agent to explicitly define which state the object should be set to, e.g., SetState(UUID, 0) would set the object's state to its first state, which in the previous case would be equal to 'On'. Consequently, the 'SetState' action requires the agent to have prior knowledge of the various states of that object. Below is a list of all of the commands available to an EIS agent through this environment.

• describe(UUID) : This action can be imparted by an agent that wishes to populate a full set of beliefs about the prim, avatar, agent, useable or sensor object associated with that UUID. The describe action can be seen as a global describe action for all world objects.

• moveto(UUID) : The moveto action works in a similar fashion to the describe action, in that it works for any world object associated with a UUID. Consequently, this action can be used by the agent to move to any entity that it perceives.

• movetopoint(X,Y,Z) : This action allows more control over the movement of the agent's associated avatar by allowing the agent to define the specific coordinates it wishes to go to.

• stop : This action instructs the agent's associated avatar to stop moving, cancelling all of its movements.

• say(SayThis) : This action can be used to broadcast messages to all of the other logged-in avatars within that OpenSim grid. This method is not used for agent communications, but instead for debugging purposes where visual messages are required.

• sit : This action instructs the avatar associated with the agent to sit down on the prim or ground that it is standing on, if it is not already sitting.

• stand : This action instructs the avatar associated with the agent to stand up, if it is not already standing.

• use(UUID) : This action changes the state of a Useable prim to its next available state. For example, a Useable of type “Lamp” with states “On” and “Off”, whose current state is “On”, will transition to the state “Off” once this action is called by the agent.

• setstate(UUID, StateNumber) : This action allows specific control over the use action, allowing the agent to set the state of a Useable prim explicitly by defining which element within the Useable's states array to set as its current state. Taking the previous example of the Lamp, the agent could instead set the state of the object to '0' if it wanted to turn the light on, or to '1' if it wanted to turn the light off.

4.2 EIS & AgentFactory

EIS was integrated into Agent Factory in order to allow various CLF-compliant Agent Programming Languages (APLs) to be developed within its environment. EIS essentially allows AgentFactory to develop environment interfaces which can be used by many different APLs. Where this project is concerned, EIS enables the developed environment interface to be used by any CLF-compliant APL. In other words, any CLF APL can access OpenSim through this project's EIS environment interface.

4.2.1 Connecting

Connecting to EIS through AgentFactory is straightforward, and can be achieved through a simple main Java class. This class is used to associate agents with their APL files. This is done by mapping agent names to their associated APL files and passing those maps to AgentFactory's EISDebugConfiguration class. This class also takes the environment jar, which in this case is eisOpenSim.jar.

The following code launches two agents, agent_1 and sensor_agent, which are defined by the APL files "agent.aspeak" and "sensor.aspeak" respectively.

Map<String, String> designs = new HashMap<String, String>();

designs.put("agent_1", "agent.aspeak");

designs.put("sensor_agent", "sensor.aspeak");

new EISDebugConfiguration("testing", designs, "eisOpenSim.jar").configure();

4.2.2 Moving

The following code is from an AF-AgentSpeak file; it instructs the agent to move towards an avatar whose name is Niall Deasy, whenever it perceives that avatar. In order to control the agent a module is required, which in this case is com.agentfactory.eis.EISControlModule.

module eis -> com.agentfactory.eis.EISControlModule;

+see(?UUID,Niall_Deasy,Avatar) : true <-

eis.perform(moveto(?UUID));

4.2.3 Describing

The following code is from an AF-AgentSpeak file; it instructs the agent to describe any entity that it sees, and to print that entity's description once the resulting beliefs have been adopted. As before, the EIS control module is required.

module eis -> com.agentfactory.eis.EISControlModule;

+see(?UUID,?name,?type) : true <-

eis.perform(describe(?UUID));

+described(?UUID,?name,?type) : description(?UUID,?description) <-

.println(?name + "'s description is: " + ?description);

Chapter 5: Evaluation

In order to properly evaluate the final project system, a scenario was set up which would best test the features of the system. This scenario is implemented through Agent Factory's AgentSpeak language, and involves two agents, with each agent controlling one avatar. A third avatar may also be used to view the simulation through an OpenSim viewer, such as the Hippo OpenSim viewer.

5.1 The Scenario

The scene aims to best reflect the system's overall capabilities while showing how the system can be used to implement a simple AAL scenario. The scenario involves two physical agents, “Robot” and “Occupant”, and one non-physical agent, “Sensor Agent”. Occupant represents our AAL occupant, whose job involves moving about and using objects. The Robot agent is responsible for checking up on the occupant if it believes that the occupant may be in trouble. The Sensor Agent is responsible for relaying sensor data to the second agent, which uses that data to determine whether the occupant has stopped moving.

The scenario itself consists of a single room, as this eliminates the need to implement complex movement algorithms into either agent's movement abilities. There are two sensors, one at each far end of the room, each with its range set so as to cover its half of the room. At one end of the room there is a Useable object, namely a television. Visually, this object remains the same in either state; however, it is capable of glowing when activated, i.e., when its state is changed from “on” to “off”, or from “off” to “on”. The Robot agent remains in the back left corner of the room when it is not checking up on the occupant. The occupant spends most of its time in front of the television, usually sitting on the ground.

Figure 5.1: Scenario

5.2 Implementation

The scene was set up to reflect Figure 5.1 in as much detail as possible. The Occupant agent is quite simple in nature. When the Occupant agent is started, it simply goes to the television and turns it on. It then sits down in front of the television to watch it. While this is happening, the Sensor Agent monitors the sensors to determine when the occupant stops moving for over 10 seconds. In real-life AAL scenarios, this time would differ largely depending on other factors, such as whether the occupant is in bed, etc. However, for testing purposes, 10 seconds suffices.

When the occupant has not moved for 10 seconds, the Sensor Agent instructs the Robot to go to the occupant, i.e., MoveTo(Occupant). The idea here is that the occupant will react by telling the robot whether or not it is ok. This is achieved through the Beside(?UUID,?Name,?Type) belief, which is populated when an object or agent is beside that agent.

Consequently, the two possible responses are “Im ok” and “Help me”, either of which will be sent from the Occupant agent to the Robot agent through Agent Factory's inter-agent communication mechanisms.

(a) Robot Intervention (b) TV Interaction

Figure 5.2: OpenSim (a), (b)

5.3 Results

The scenario was run 5 times, each run completing successfully. The sensors correctly determined when the Occupant stopped moving, which allowed the Sensor Agent to instruct the Robot to check on the Occupant. The Robot successfully moved right up to the Occupant each time, allowing the Occupant to determine that the Robot was beside it and to reply each time. The Occupant was also able to turn on the television successfully each time. This scenario, although very simple, shows that an AAL scenario is possible through this system.

Chapter 6: Conclusion

Ambient Assisted Living scenarios are designed to allow an occupant to live in the comfort of their own home, while at the same time maintaining the safety and security that is usually only available from specially trained caretakers. These scenarios come in many forms, the most common of which relies on incorporating sensor-based AAL technology directly into the occupant's own home. Since all homes differ in so many ways, it is important to be able to test such scenarios in a cost-effective manner. This project set out to provide such a method by allowing AAL scenarios to be implemented within a virtual environment.

The final system, namely AgentFactory-OpenSim (AF-OpenSim), is designed with efficiency, platform independence and ease of use in mind. Using OpenSim as its virtual environment, the system allows a wide range of scenarios to be built quickly and effectively. OpenSim also allows multiple avatars to interact with the environment simultaneously, which makes it ideal for testing real-world applications. Scenarios can be built within any of the numerous OpenSim 'viewers', which also allow a scenario to be viewed in real time. Having successfully built sensory behavior directly into primitive objects, AF-OpenSim also allows sensors to be created and manipulated within these viewers. This mechanism also allows sensory objects to be exported and imported between OpenSim servers simply by means of the primitive object itself.

6.1 Future Work

AF-OpenSim does not yet take advantage of some valuable aspects of OpenSim, such as object manipulation and estate management. However, it is believed that such features could easily be integrated into AF-OpenSim, as the system has been designed to allow such extensions. Many steps have been taken to make AF-OpenSim a valuable project, in particular its ability to be partially platform independent. Numerous designs were drawn up based on the available resources and technology, and it was eventually decided that the server-client approach was the best option. Nevertheless, it is believed that a stable port of OpenMetaverse to Java is possible, as it has been attempted before, and that having such a port would eliminate the need for two sets of communication protocols, i.e., client to OpenMetaverse and OpenMetaverse to OpenSim. It is hoped that this project may lead to such a port, which would benefit not only this system but other projects also.

Although several mechanisms have been implemented to keep communications to a minimum, such as not sending duplicate world objects, further improvements are foreseeable which could reduce communications even more. One such improvement could be implemented quite easily, as it relies on object comparators, which have already been implemented by this project: sending only partial world data, i.e., only those objects which have changed. This would greatly improve the throughput of the communications, especially for complex and highly populated worlds.

Currently, AF-OpenSim is designed to function primarily as a one-to-one system, where there exists one server and one client. Although it is possible to connect more than one client concurrently, by altering the agent-entity relationships within the EIS environment, it is not recommended. This is largely due to the bottleneck of the server, which can only handle one request at a time. Having said this, such a system could be achieved by allowing the clients to agree a configuration with the server, such as which port to connect to and which agents are available for use. A system such as this would allow a super server to be maintained, which could be responsible for providing an access point for a community of clients. It should also be possible to implement a distributed server system, which would allow a single access point to multiple servers while spreading the workload evenly across those servers.

This project has taken on its goal of integrating multi-agent systems into a virtual environment with the belief that a well designed system, one designed to allow for further development, is the key to achieving a successful and highly capable system. It is for this reason that the choice was made to start from the beginning, instead of building on its antecedent system, which proved to be less than reliable. It is also believed that systems which strive to achieve such goals are key to encouraging existing APLs to take the vital step of making the transition to FIPA-compliant standards. With attractive projects such as AF-OpenSim being developed through FIPA standards, it is believed that the wealth of such projects will prove an invaluable resource. It is this project's belief that it has achieved a fundamental milestone in providing such a valuable commodity to the FIPA community.
