
Project Final Report

Emotional Agents in Game Playing

Submitted for the MEng in Computer Science with Games Development

May 2012

by

Christophe Lionet


Table of Contents

Abstract ................................................................................................................................................... 5

1 Introduction .................................................................................................................................... 6

1.1 Project Brief ............................................................................................................................ 6

1.2 Project Context ....................................................................................................................... 6

1.3 Aim and Objectives ................................................................................................................. 7

1.4 Report Structure Overview ..................................................................................................... 7

2 Project Background ......................................................................................................................... 8

2.1 Problem Context ..................................................................................................................... 8

2.1.1 A definition of an autonomous agent ................................................................................. 8

2.1.2 Crafting a virtual agent for a game ................................................................................. 8

2.1.3 Applications of Artificial Intelligence in the games industry ........................................ 10

2.2 Comparison of Technologies ................................................................................................. 11

2.2.1 C++ and OpenGL............................................................................................................ 11

2.2.2 Prolog and XPCE ............................................................................................................ 11

2.3 Control Methods and Algorithms ......................................................................................... 11

2.3.1 Finite State Machines .................................................................................................... 11

2.3.2 Fuzzy Logic .................................................................................................................... 12

2.3.3 Fuzzy State Machines .................................................................................................... 12

2.4 Alternate Solutions ............................................................................................................... 12

2.4.1 BDI Model ..................................................................................................................... 12

2.4.2 Planning and Reactive agents ....................................................................................... 13

2.4.3 Learning Agents ............................................................................................................. 13

3 Design and Specification ............................................................................................................... 14

3.1 Project Requirements ........................................................................................................... 14

3.1.1 Prey and Predator Types ................................................................................................... 14

3.1.2 Survival, Energy Management and Death ..................................................................... 14

3.1.3 Border Avoidance .......................................................................................................... 14

3.1.4 Use of Affordances ........................................................................................................ 15

3.1.5 Use of Affect .................................................................................................................. 15

3.2 Agent structures .................................................................................................................... 16

3.2.1 Basic Agent Structure .................................................................................................... 16

3.2.2 Affective Agent Structure .............................................................................................. 16

3.3 Description of Components .................................................................................................. 17

3.3.1 Agent Senses ............................................................................................................... 17

3.3.2 Input and Output Management .................................................................................... 18

3.3.3 Fuzzy system ................................................................................................................. 19


3.3.4 Finite State Machine Models ........................................................................................ 21

3.3.5 Affect models ................................................................................................................ 26

4 System Implementation ................................................................................................................ 27

4.1 Object Management & Hierarchy ......................................................................................... 27

4.1.1 Object Components .......................................................................................................... 27

4.1.2 Energy type ................................................................................................................... 27

4.1.3 Agent types ................................................................................................................... 27

4.1.4 Borders .......................................................................................................................... 28

4.1.5 Object management ..................................................................................................... 29

4.2 Agent step sequence ............................................................................................................. 29

4.3 Senses ................................................................................................................................... 30

4.4 Affordance memberships ...................................................................................................... 31

4.5 Input Selection ...................................................................................................................... 31

4.6 State Machines ...................................................................................................................... 33

4.6.1 Finite State Machines .................................................................................................... 33

4.6.2 Fuzzy State Machine ..................................................................................................... 34

4.7 Behaviour selection ............................................................................................................... 34

4.8 Behaviors and actions ........................................................................................................... 35

4.8.1 Crisp Actions .................................................................................................................. 35

4.8.2 Concurrent Fuzzy Actions .............................................................................................. 36

4.9 Utilities .................................................................................................................................. 37

4.10 Feedback and Data Output ................................................................................................... 38

4.11 UI and Test Bed ..................................................................................................................... 39

5 Experimentation and Testing ........................................................................................................ 40

5.1 Observations during testing: ................................................................................................. 40

5.2 Experimentation ................................................................................................................... 40

5.2.1 Independent Model ...................................................................................................... 41

5.2.2 Averaged Model ............................................................................................................ 43

5.2.3 Compensated Model ..................................................................................................... 45

5.2.4 Experimental Observations ........................................................................................... 47

6 Critical Evaluation ......................................................................................................................... 48

6.1 Project Management ............................................................................................................ 48

6.1.1 Tasks and Technical Deliverables ...................................................................................... 48

6.1.2 Risk Assessment and Summary ..................................................................................... 49

6.2 Project Achievements ........................................................................................................... 50

6.2.1 The field of AI ................................................................................................................ 50

6.2.2 Prolog ............................................................................................................................ 50

6.2.3 New level of Excel mastery ........................................................................................... 50


6.3 Further Development ............................................................................................................ 50

6.3.1 Full Fuzzy Architecture .................................................................................................. 50

6.3.2 New behaviours and flocking ........................................................................................ 51

6.3.3 Producing a Game Using a Graphics API ....................................................................... 51

6.4 Personal Reflection ............................................................................................................... 51

6.4.1 Project summary ........................................................................................................... 51

6.4.2 Acknowledgements ....................................................................................................... 52

7 Conclusion ..................................................................................................................................... 53

8 Bibliography .................................................................................................................................. 54

8.1 Online Material: .................................................................................................................... 54

8.2 Academic and Research papers ............................................................................................ 54

8.3 Books ..................................................................................................................................... 55

Appendix A. Initial Brief ................................................................................................................. 56

Appendix B. Fuzzy State Machine ................................................................................................. 57

Appendix C. Finite State Machines ................................................................................................ 58

Energy FSM ....................................................................................................................................... 58

Fleeing FSM ....................................................................................................................................... 58

Idle FSM ............................................................................................................................................ 59

Appendix D. Experimental data – Independent Model (Preys Only) ............................................ 60

Appendix E. Experimental data – Independent Model (Preys & Predators) ................................ 61

Appendix F. Experimental data – Averaged Model (Preys Only) .................................................. 62

Appendix G. Experimental data – Averaged Model (Preys & Predators) ...................................... 63

Appendix H. Experimental data – Compensated Model (Preys Only) ........................................... 64

Appendix I. Experimental data – Compensated Model (Preys & Predators) ............................... 65

Appendix J. Experimental data – Normalized Model (Preys Only) ............................................... 66

Appendix K. Experimental data – Normalized Model (Preys & Predators) ................................... 67

Appendix L. Original Time Plan ..................................................................................................... 68

Appendix M. Modified Time Plan ................................................................................................... 69


Emotional Agents in Game Playing

Abstract

While Artificial Intelligence has always studied the concept of an autonomous artificial agent, only in the past decades has technology allowed the simulation of artificially intelligent entities. This offers considerable potential to the entertainment industry, where interaction with characters plays a major role in providing a memorable experience to the public. In the video games industry in particular, the focus is starting to shift towards AI, where interaction with realistic characters is in increasing demand.

One of the earliest examples of artificial intelligence can be found in the famous game Pong, where a computer-controlled bat served as the adversary and bounced the ball back at the player. The aim was to make the player believe he was playing against another person, and for the first time, a non-playing entity had its basic behaviours procedurally generated instead of being defined at compile time. This gave the player an illusion of competition in the simplest form possible: a vertically moving bat. However, the expression of such an AI was limited by both its set of available outputs and the graphical capabilities of the time; it could not formulate or express any complex behaviours.

The more recent Pac-Man represents a step forward in representing characters as artificial agents. The ghosts chase the main character around a maze, switching to a fleeing behaviour when Pac-Man eats a special pill. These characters were believable in the context of their environment, as more advanced graphics allowed them to express basic emotions, such as fear when being chased. This created an attachment between the player and the characters, even though the emotions conveyed were the result of the game's art rather than the system behind it: the ghosts become scared because Pac-Man picks up the super pill, which is more of a trigger than a reaction to an internal state. This marks a distinction between emotional behaviour and a more cosmetic emotional feedback.

If agents can convey emotions to the player, should they also be able to process them? Could an agent factoring emotions into its AI make a game feel more satisfying and entertaining, or is a combination of simple AI structure and art enough to convey emotional impact back to the player? We can then ask how agents in a video game could implement emotional traits in a way that meaningfully influences their course of action. This question will be studied by building a set of agents placed in an enclosed virtual environment.

This report will describe the details and processes involved in undertaking this Final Year Project: the decisions, design choices and experiments carried out during the creation of the agents. It will first introduce the state of agents in the field of AI as well as in the games industry. It will then cover the specification and design aspects of the project, as well as several experiments devised to test and compare the influence of different types of affect models on the agents.


1 Introduction

The introduction will present the brief, aims and overall context of the project, and will describe the structure of the report.

1.1 Project Brief

This project will follow the brief below:

The game industry is in constant expansion, and a great amount of effort is put into producing the best graphics. However, interest in artificial intelligence in games has risen in the last few years, so much so that poor AI is nowadays considered a major flaw in a video game. Autonomous agents, capable of expressing emotions and reacting according to their emotional state, make for a less predictable and thus more entertaining experience. An application will be developed to demonstrate the potential of such agents placed in a gameplay situation, utilizing a predator-prey scenario.

The choice has been made to keep the focus on the game aspect of the brief, in keeping with the study of computer games development. It is hoped that this project will bring a better understanding of the methods and principles of Artificial Intelligence, as well as knowledge of agents suitable for gaming software. The prospect of using such agents in game development, and of discovering the principles behind AI game characters, is the driving force behind the project. The initial brief can be found in detail in Appendix A.

1.2 Project Context

The emotional aspect of artificial intelligence was brought to the forefront by Herbert Simon in 1967, in a scientific paper discussing the ability of computers to emulate emotions.

As humans, we are subject to emotions and feelings, which provide a bias that can influence our decisions or override our sense of rationality (Champandard, 2004, p494). When attempting to apply this idea to artificial intelligence, the question has often shifted to whether emotion is needed to produce intelligent behaviour: as Minsky (1987) writes, "The question is not whether intelligent machines can have emotions, but whether machines can be intelligent without any emotions". This need is also contrasted by the ambiguity of the word, as many academics agree that the notion of emotion is vague and open to a variety of different interpretations. Sloman blames the over-generalisation of the term 'emotion', arguing that it can come to cover areas that should not be related to it, such as goals and preferences. Such interpretations would lead one to believe that emotions are essential for any intelligent behaviour, which is not always the case (Sloman, 2004).

Many past games have shown that simulated emotion is not a requirement for emulating seemingly intelligent behaviour, as simple behaviour trees and rule-based systems often suffice to trigger a variety of pre-set actions for an agent. With recent progress in AI technology, having agents use affect as part of their AI structure has become an objective in making games more immersive for the player. Emotional ambiguity opens up new ways to provide entertainment, where a wide variety of actions can arise from a specific context while still remaining readable by the player (Mateas, 2003, p2).


1.3 Aim and Objectives

The aim of this project is to produce autonomous artificial agents capable of surviving in a test bed, through the use of finite state machines, fuzzy logic and affect as part of a reactive architecture. These agents will then be studied in terms of their behaviour and their survival time, to discuss how affect influences their decisions and how such agents could be implemented in a game. The test bed will be composed of a bordered area where agents can move and interact, while a simple interface will allow the user to add new agents with different behaviours, as well as energy items.

The agents should exhibit the following traits:

- Sense their environment
  - Detect objects and other agents nearby.
- Driven by Finite State Machines
  - The state machine should be evolved into a Fuzzy Finite State Machine.
- Emotional core
  - The agent should include fuzzy variables and operations reproducing basic emotions.
- Survivability
  - Agents will have to survive by keeping their energy level high, by eating energy sources or other agents.
- Prey and predator behaviour
  - Prey should feed on energy sources scattered throughout the environment.
  - Predators should search for and pursue prey.
  - Prey should flee from predators.

These agents are to be developed with the idea that they should be usable were they to be implemented in a game: their behaviour should be coherent, yet offer some unpredictability to the player.

1.4 Report Structure Overview

This report will first examine the context of Artificial Intelligence in the gaming industry, and how the objectives in creating such an AI may differ from those of scientific and utilitarian applications. Different approaches to the problem will then be compared, in terms of principles and technologies. The next chapter will introduce the design specifications for the program and the agents, as well as an overview of the different agent structures. Once the specifications are laid out, the report will state the implementation details, how the specifications translate into code, and the complications and refinements that arose from the design. In the Experimentation chapter, various experiments will be performed with varying parameters, and their results interpreted in the light of the brief stated above. The different affect models will be tested in terms of survivability and observable behaviours, and the results discussed to determine the strengths of each model. The Critical Evaluation section will detail the planning and organisation processes, comprising the various project tasks and deliverables, the risk assessment and the time chart. These arrangements will be discussed in retrospect, along with comments on the project itself and personal achievements.


2 Project Background

2.1 Problem Context

2.1.1 A definition of an autonomous agent

The definition of an autonomous agent is still changing to this day, with various sources in the field of AI providing different definitions. While Franklin and Graesser gathered many of these definitions and provided their own, the most meaningful in the context of games is the definition from Artificial Intelligence: A Modern Approach: "Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators." (Russell, Norvig, 2009, p34)

This description has the particularity of leaving out the deliberative aspect of an agent, which is indeed often ignored when producing virtual agents for games. Franklin (1997, p2) adds that an agent must still possess goals, or agendas, although they can be implicit. In the case of prey and predator agents, this goal is implicit, as survival is only defined as part of the program code.

Fig 1: Natural Kinds Classification of Autonomous Agents (Franklin, Graesser, 1995). Entertainment agents are part of a class of software agents.

2.1.2 Crafting a virtual agent for a game

2.1.2.1 Challenge and believability

Building an agent should teach us about the functioning of intelligence and cognition (Pfeifer, 1996). This holds true in a purely academic model, where the focus is on studying the behaviour of artificially intelligent entities. However, video games introduce the notion of human interaction, where an agent must take part in a cohesive experience that immerses the player in the game world. The dilemma in creating an intelligent agent in a game environment is the choice between believable behaviour and challenge. An agent can think or act either rationally or humanly. Thinking and acting rationally means the agent should compute and execute the best possible outcome to a problem, while thinking and acting humanly implies reproducing human cognition processes on a certain level, which in turn aims at processing inputs and outputs in a similar way to an intelligent being (Russell & Norvig, 2003, ch1). This approach is often simplified in the making of game agents for the sake of efficiency; however, the objective remains the same. Acting humanly is a result of thinking humanly, but thinking humanly is not a requirement for acting humanly. This is why, in most games featuring autonomous agents, believable behaviours are achieved through a reduced architecture (Champandard, 2004, p13).

2.1.2.2 Building a virtual agent

Pfeifer defined a set of design principles aimed at the conception of virtual agents. The first principle defines a set of rules relative to autonomous agents: an autonomous agent must function without the need for human supervision or intervention; it must be self-sufficient and able to maintain and sustain itself; and it must be embodied, as it needs to be able to act within its world (Pfeifer, 1996, p2-3). The meaning of the term 'embodied' varies between a real-world (biological or robotic) agent and a software agent. The difference lies in how the agents perceive information from the environment, and in the impact of these properties. A physical agent must make use of input from sensory devices to perceive its environment. A software agent, on the other hand, needs to gather data from the software's memory itself while applying restrictions to its scope, which is more reliable than a physical agent's sensing due to its virtual nature (Franklin, 1996). Effort must therefore be made to reproduce embodiment in virtual conditions by adding senses and actuators to software agents.

2.1.2.3 Modeling affect in an agent

There is a difference between the display of an affective behaviour and the use of affect in the process of AI decisions. The first case is widely used, if not essential, in maintaining a connection between the player and the game, as the characters need to demonstrate a certain amount of affective behaviour to maintain the player's suspension of disbelief; this is described as 'emotional feedback' (Gilleade, Dix, Allanson, 2005, p2). The second case describes emotions as part of the agent's AI and decision process, which is a less common occurrence in games. It supposes a model to represent emotions within the AI.

Much like the definitions of artificial agents, there have been many attempts at finding a model to categorise affects. According to Ortony, Clore and Collins, emotions can be classified into different types. An emotion type can encompass various emotional states, just as 'fear' encompasses 'fright' or 'petrified'. These states can be associated with either psychological or physical triggers depending on the object of the fear (Ortony, Clore, Collins, 1990, ch2). Sloman, on the other hand, differentiates emotion from affect, and describes emotion as a class of affect, while affects in themselves encompass a broader scope of feelings. An agent makes use of various affective states, which can be classified in three layers. The primary layer, comprising emotions such as fear and anger, represents basic emotions related to reactive behaviours, which can be found in the simplest biological agents. The secondary layer implies feelings about the future, which applies to deliberative processes. The third layer builds on the two previous layers and formulates emotional responses based on their organisation. As part of their reactive structure, the agents in this project will model the first emotional layer.


2.1.3 Applications of Artificial Intelligence in the games industry

Part of the definition of video games as an interactive medium is the ability for the player to engage in a coherent experience, where feedback is delivered through the computation of the player's actions. One way of providing this feedback is to use various methods of artificial intelligence to simulate actors or players in the game world. Below are a few examples of different types of agents used for different purposes in games.

2.1.3.1 Expert systems as NPCs

The main representation of an agent in a video game is as a non-playable character (NPC). In most role-playing games, interaction between characters happens through a user interface communicating with a simple expert system. Expert systems provide quick decision making based on input and a knowledge base, which is particularly suited to games featuring branching paths and dialogue trees. This allows games such as The Elder Scrolls V: Skyrim to introduce player choice in the narrative, where past actions and player decisions are taken into account when dialogue options are laid out. These types of AI systems are often implemented to provide more depth to a game's story and to accentuate the interactive drama, while introducing player choice.

2.1.3.2 Intelligent Agents as characters

NPCs can also interact with the player as agents in the environment. Contrary to expert systems, autonomous agents act in real time within the environment. These types of agents can be found as enemies to the player in games such as first-person shooters. The elements involved include path finding, decision making and sometimes planning, and such agents are often required to be adaptive, as in most cases they are not built with a single environment in mind: in a game like The Sims, the environment is constantly evolving, and the Sims need to adapt their routines to fit these changes. On some occasions, the expert-system approach is combined with a real-time agent, providing a high-level influence on the behaviour decision. This may sometimes be the cause of the AI 'cheating', as the agent accesses data that would normally be out of reach of the senses it has been attributed (Davis, 2011). A different example of such an AI can be found in Michael Mateas' game Façade, which places the player in a situation where he or she must interact with two characters in a real-time but socially oriented setting.

2.1.3.3 Intelligent Agents as players

Agents may also be produced to mimic the actions of a player. This is mostly the case in competitive real-time strategy games or first-person shooters, where players have the ability to add 'bots' to matches. Player agents give a new meaning to 'acting humanly', since they are not emulating the characters of a game, but the person in control. They are more limited than traditional NPCs in terms of their actions, since the agent controller is limited to the same range of actions as any player; however, this makes them harder to differentiate from human players. This type of AI was first developed and observed with the bots for the game Quake II using the SOAR model, where experiments similar to the Turing test showed that setting reaction parameters closer to human-like values would result in players perceiving the bots as more human (Gilleade, Dix, Allanson, 2005, p3).


2.2 Comparison of Technologies

2.2.1 C++ and OpenGL

It was initially considered to use the C++ language in conjunction with either DirectX or OpenGL to achieve a more game-like result. This raised several issues: using a graphical API would give the software an improved look and feel, but would greatly increase programming time, thus reducing the time spent developing the agents, which are the primary focus of this project.

2.2.2 Prolog and XPCE

Consequently, it has been decided that the project will be developed in the Prolog language, using the SWI-Prolog environment. Prolog's logic and emphasis on symbolic reasoning allow a close following of AI principles, as well as a comprehensive way of expanding the AI further, with a greater emphasis on the agents. Graphics will be produced using the XPCE toolkit for SWI-Prolog, which allows basic graphics to be produced quickly and easily through the use of shapes. A user interface is also easier to implement, as XPCE provides functionalities (such as buttons) similar to Windows forms.
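As a brief illustration of how basic graphics are produced with XPCE, the sketch below opens a picture window and displays a filled circle. The predicate name demo_window and the window title are illustrative assumptions, not taken from the project source.

:- use_module(library(pce)).

% Minimal XPCE sketch: open a window and draw a filled circle.
demo_window :-
    new(P, picture('Agent Test Bed')),
    send(P, size, size(400, 300)),
    send(P, open),
    new(C, circle(30)),                   % circle with a 30-pixel diameter
    send(C, fill_pattern, colour(blue)),
    send(P, display, C, point(100, 100)).

% ?- demo_window.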

2.3 Control Methods and Algorithms

This section will detail a variety of control methods to be used by the agents.

2.3.1 Finite State Machines

Finite state machines allow the transition from one agent state to another depending on the agent's current state and senses. From a game perspective, they are one of the most common ways of structuring an AI, as they are simple, efficient, and can be layered and combined to form a more complex reactive architecture. They also allow a more structured approach through the decomposition of a behaviour into states (Buckland, 2005, p43), which simplifies the design process of large systems. For even more complex systems, FSMs have the advantage that they can be organised hierarchically, where each state of a high-level state machine can represent a state machine of its own.

A typical Finite State Machine is composed of several fixed states, interconnected by state change functions. One of these states is the active state. The transition functions are run according to the input passed to the Finite State Machine and the current state, which allows the FSM to switch between states.

An autonomous agent uses Finite State Machines in two steps to process an input into an output according to the current active state. The first step is a transition between states, as in a typical state machine: the input and the current state determine the next state. The second step uses the same input and the new state to determine the output. The importance of the second step lies in the non-deterministic approach to agent design: a one-step Finite State Machine would guarantee that for a given input the FSM always gives the same output, as it does not rely on any data other than the input itself. By introducing a second step, where the output is determined by the input and the current state, the FSM will produce an output that cannot be predicted using only external data. Building a hierarchy of FSMs is another way to avoid a deterministic structure: in such a hierarchy, every FSM is a state of the FSM above, and can be activated or deactivated based on the status of its parent state.
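The two-step process described above can be written down directly in Prolog. The sketch below uses illustrative state, input and output names rather than the ones used in the project:

% Step 1: transition(CurrentState, Input, NextState).
transition(idle,    sense_predator, fleeing).
transition(idle,    sense_energy,   feeding).
transition(feeding, sense_predator, fleeing).
transition(fleeing, no_threat,      idle).
transition(State,   _,              State).    % default: remain in the same state

% Step 2: output(NewState, Input, OutputToken) - the same input can yield
% different outputs depending on the state reached in step 1.
output(fleeing, sense_predator, run_away).
output(feeding, sense_energy,   eat).
output(idle,    _,              wander).
output(_,       _,              none).

% fsm_step(+State, +Input, -NextState, -Output)
fsm_step(State, Input, NextState, Output) :-
    transition(State, Input, NextState), !,
    output(NextState, Input, Output), !.

% ?- fsm_step(idle, sense_predator, S, O).
% S = fleeing, O = run_away.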

Page 12: Submitted for the MEng in Computer Science with Games ... · Project Final Report Emotional Agents in Game Playing Submitted for the MEng in Computer Science with Games Development

Emotional Agents in Game Playing

- 12 -

2.3.2 Fuzzy Logic

Most of what a living being perceives cannot be deemed precise. It is filtered through each individual's subjectivity, allowing a wide range of meanings for the same situation sensed by different creatures. Supposing a stick measuring more than 1.50 m is long, while a stick shorter than this value is considered short, why should a stick measuring 1.49 m be considered short, when it misses the threshold by such a small interval? To perceive this stick as humans do, an agent needs a model of logic able to describe fuzziness (Negnevitsky, p87). Indeed, while imprecision is often considered the opposite of performance in the domain of computing, one option in creating artificial intelligence is to replicate this sense of subjectivity. The introduction of Foundations of Fuzzy Systems illustrates this point: "One method to simplify complex systems is to tolerate a reasonable amount of imprecision" (Kruse, Gebhardt, Klawonn, 1993, p1).

In the logic of traditional sets, a value can only belong to a certain set (1) or not (0). In fuzzy sets, however, elements possess a degree of membership between 0 and 1. Therefore, a stick measuring 1.49 m could have a 0.5 membership in the set 'short' and a 0.5 membership in the set 'long': it belongs to both sets through fuzzy memberships. Converting crisp values to fuzzy memberships in this fashion is called fuzzification. The resulting fuzzy values can then be used to perform operations on the sets, such as AND, NOT and OR, as well as composition operations. Conversely, defuzzification converts a fuzzy value back into a crisp value (Champandard, 2004, p403). Ultimately, the fuzzification-operation-defuzzification sequence allows for a more elaborate decision-making process, one that is not based on crisp values.
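The stick example can be expressed directly in code. The sketch below fuzzifies a length into 'short' and 'long' memberships and defines fuzzy AND/OR as the usual min/max operators; the breakpoints (1.0 m and 2.0 m) are illustrative and not taken from the report.

% 'short': membership 1 below 1.0 m, falling linearly to 0 at 2.0 m.
membership(short, Length, 1.0) :- Length =< 1.0, !.
membership(short, Length, 0.0) :- Length >= 2.0, !.
membership(short, Length, M)   :- M is (2.0 - Length) / 1.0.

% 'long' as the fuzzy complement (NOT) of 'short'.
membership(long, Length, M) :-
    membership(short, Length, S),
    M is 1.0 - S.

% Fuzzy AND / OR as min / max of memberships.
fuzzy_and(A, B, M) :- M is min(A, B).
fuzzy_or(A, B, M)  :- M is max(A, B).

% A 1.49 m stick is roughly half 'short' and half 'long':
% ?- membership(short, 1.49, S), membership(long, 1.49, L).
% S = 0.51, L = 0.49 (approximately).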

2.3.3 Fuzzy State Machines

A Fuzzy State Machine (FuSM) bears the same structure as a Finite State Machine: several (fuzzy) states interconnected by transition functions. However, as fuzzy states are not defined by clear-cut values, they can never be considered fully inactive. All states must therefore be considered when feeding in an input, and the order in which the transition functions are called has an impact on the state memberships.
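A minimal sketch of such an update pass is given below: every state keeps a membership in [0, 1], every matching transition rule is applied on each step, and because later rules read memberships already modified by earlier ones, the order of the rules matters. The state names, inputs and rates are invented for illustration.

% rule(Input, FromState, ToState, Rate): on Input, move a fraction Rate of
% the membership of FromState into ToState.
rule(sense_predator, calm,     stressed, 0.5).
rule(sense_predator, stressed, panicked, 0.3).

% fusm_step(+Input, +States0, -States): apply every matching rule in order.
fusm_step(Input, States0, States) :-
    findall(r(From, To, Rate), rule(Input, From, To, Rate), Rules),
    foldl(apply_rule, Rules, States0, States).

apply_rule(r(From, To, Rate), States0, States) :-
    memberchk(From-MF0, States0),
    memberchk(To-MT0, States0),
    Moved is MF0 * Rate,
    MF is MF0 - Moved,
    MT is min(1.0, MT0 + Moved),
    selectchk(From-MF0, States0, From-MF, States1),
    selectchk(To-MT0, States1, To-MT, States).

% ?- fusm_step(sense_predator, [calm-1.0, stressed-0.0, panicked-0.0], S).
% S = [calm-0.5, stressed-0.35, panicked-0.15] (approximately).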

2.4 Alternate Solutions

This section will detail control methods and processes that were not used in this project, but which are viable solutions for building virtual agents, in the games industry or in general.

2.4.1 BDI Model

BDI stands for Belief, Desire, Intention. The BDI model allows the separation of an agent's deliberative and reactive processes. It is well suited to agents set to interact within a specific environment. Belief represents what the agent knows about the environment through the sum of its percepts; the term implies that an agent's set of beliefs can change as a result of the agent's percepts or internal states.


Desires define the immediate goals of the agent, whereas intentions encompass the plans of the agent (Busetta, Bailey, Ramamohanarao, 2003, p1). Beliefs, desires and intentions are stored as data within the agent. The BDI architecture is not followed thoroughly in this project, although it is still possible to map the BDI model onto the agents being developed. Beliefs can be attributed to the input filtering stage of the agent's structure, where an agent makes sense of the data it perceives, although this data is part of the program and not stored in the form of a database. Desires, on the other hand, can be related to the affordance memberships calculated for each agent relative to its possible actions in relation to other objects. Intentions, however, are not represented in the agents, as they imply a deliberative behaviour.

2.4.2 Planning and Reactive agents

Planning allows an agent to define a course of action to transition from its current state to a goal state. In games, artificial agents are often expected to react in direct response to the player's actions. There are just a "few cases where planning is actually necessary", as reactive decisions are close to instinct, and "close to the way human experience works" (Champandard, 2004, p38). Therefore, the reactive approach is often preferred to deliberative planning.

A reactive architecture favours low computational time and immediate decisions over memory and planning. Reaction-only agents do not keep a record of their states and perceptions, which means they rely solely on their current state and what they are experiencing at the present time (Funge, 2004, p51). Such a deterministic structure provides little in terms of entertainment, as the absence of internal states or values makes the output predictable. To compensate for this predictability, some elements of knowledge need to be introduced, such as states kept in memory and data about the world. Such elements, combined with rule-based systems and finite state machines, offer convenience in terms of processing speed and flexibility for artificial agents.

Planning is nonetheless used in games, mostly for tasks such as path finding, although it will not be a focus of this project. Finite State Machines and Fuzzy Logic can provide some form of planning, matching responses to detected inputs and internal states. This form of planning, which is similar to reactive processes, is called reactive planning (Champandard, 2008, p37).

2.4.3 Learning Agents

There are several ways to achieve learning in games, including the following. Reinforcement learning uses reward systems to adjust behaviour policies through trial and error (Davis, 2001); it offers learning in small bursts. Neural networks, on the other hand, focus on learning through error feedback and continuous training. Neurons, or perceptrons, can be connected in various network topologies (Hopfield, circular). The output of a neuron is determined by the combination of its weights and the inputs fed to it, and a back-propagation algorithm is used to adjust the weights based on the error rate. Neural networks are expensive in terms of processing power, and are mostly used to generalise data and formulate predictions, such as the position of a moving target in a first-person shooter. Learning is ultimately an important part of simulating cognitive behaviour for agents; however, since the agent architecture for this project is based on reactive principles, learning will not be considered.


3 Design and Specification

3.1 Project Requirements

This section will give an overview of the required elements to be implemented in the project, and their contribution to the program.

3.1.1 Prey and Predator Types

The prey and predator agent types define what an agent considers a threat and what it considers food. A predator considers prey as food and chases them, while nothing acts as a threat to it. A prey considers energy sources as food and predators as threats, and runs away from incoming predators. The prey should be able to decide on the priority between eating energy and fleeing; this introduces a dilemma between eating energy sources (to prevent starvation) and running away from predators. In terms of colour coding, prey should be blue, predators red, energy sources green, and corpses brown.
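One possible way to encode these type relations and the colour coding as Prolog facts is sketched below; the predicate names are illustrative and not taken from the project code.

% Agent type table sketch: what each type eats, flees and is drawn as.
eats(prey,     energy_source).
eats(predator, prey).

flees(prey, predator).          % predators flee from nothing

colour(prey,          blue).
colour(predator,      red).
colour(energy_source, green).
colour(corpse,        brown).

% is_threat(+ObserverType, +OtherType): the other object is a threat.
is_threat(Observer, Other) :- flees(Observer, Other).

% is_food(+ObserverType, +OtherType): the other object is food.
is_food(Observer, Other) :- eats(Observer, Other).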

3.1.2 Survival, Energy Management and Death

The agents' only desire and objective is survival, which is closely linked to their energy level and the rate at which they feed. When an agent's energy reaches zero, the agent dies from starvation or exhaustion. Consequently, the ratio between the current energy level and the maximum energy represents the agent's physical state, which influences its affects and speed. The maximum amount of energy should be common to all agents, but a manually spawned agent should start with a random energy value between half the maximum amount and the maximum amount. Energy consumption should vary based on the agent's current state and actions: a running, stressed agent should consume more energy than a happy, walking agent. To regain energy, an agent must consume food, the nature of which depends on the agent's type. When a prey eats an energy source, its energy is replenished by the amount contained in the energy source; when a predator eats a prey, it recovers the amount of energy held by the prey.
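A sketch of these energy rules is given below; the consumption rates and the maximum of 100 energy units are illustrative values, not the ones used in the project.

% Energy management sketch.
max_energy(100).

% Energy cost per step depends on the current behaviour.
step_cost(running, 2.0).
step_cost(walking, 0.5).
step_cost(idle,    0.2).

% consume_energy(+Behaviour, +Energy0, -Energy, -Status)
consume_energy(Behaviour, Energy0, Energy, Status) :-
    step_cost(Behaviour, Cost),
    Energy is max(0, Energy0 - Cost),
    (   Energy =< 0
    ->  Status = dead            % starvation or exhaustion
    ;   Status = alive
    ).

% Eating caps the energy level at the maximum.
eat(Gain, Energy0, Energy) :-
    max_energy(Max),
    Energy is min(Max, Energy0 + Gain).

% Physical state as the ratio of current to maximum energy.
physical_state(Energy, Ratio) :-
    max_energy(Max),
    Ratio is Energy / Max.

% ?- consume_energy(walking, 10, E, Status).
% E = 9.5, Status = alive.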

3.1.3 Border Avoidance

As agents evolve inside a test bed, part of their behaviour should ensure they are kept within this closed space. This requires a border avoidance behaviour, which should vary with the agent's current status. The agent needs to be able to identify borders and turn away when running into them: a 'wandering or searching for food' agent should turn away at a border, while a 'chased by predator' agent should slide along the border towards the direction most likely to lead to a successful escape. There should, however, be no need for the agent to take borders into account when it is chasing energy, as the target is already inside the test bed. If a more complex chasing model were implemented, re-instating border avoidance might become necessary.


The borders also have to be taken into consideration for random agent and object spawning. As each energy item and each agent has a radius, this radius should not overlap the border when the object is created.
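A sketch of the spawning constraint and of the status-dependent border reactions described in this section follows; the world size, the single radius margin and all predicate names are assumptions made for illustration.

% Border handling sketch.
world_size(800, 600).

% A randomly spawned object must keep its whole radius inside the borders.
random_position(Radius, X, Y) :-
    world_size(W, H),
    XMax is W - Radius,
    YMax is H - Radius,
    X is Radius + random_float * (XMax - Radius),
    Y is Radius + random_float * (YMax - Radius).

% Reaction at a border depends on the agent's current status.
border_reaction(wandering, turn_away).
border_reaction(searching, turn_away).
border_reaction(chased,    slide_along_border).
border_reaction(chasing,   ignore_border).    % the target already stays inside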

3.1.4 Use of Affordances

The term 'affordance' was first coined by the psychologist James Jerome Gibson in the study of animal behaviour. Gibson based it on the verb 'to afford', and describes it as "what [the environment] offers the animal, what it provides or furnishes, either for good or ill" (Gibson, 1979, p127). Affordances thus specify the possible actions of an agent with respect to another object. In the program, affordances are determined based on senses and data relative to the agent, then used to sort and trim input and output according to what the agent perceives of the environment. The information is relative to an action and to another object (such as eating an energy source, or fleeing a predator), and is processed just before the input phase, providing extra data for affect processing. Affordance memberships can also be classified as drives, as they influence both the input and output choices of the agent based on sensory data and the agent's current state (Franklin, 1997, p3). Affordance data is thus composed of the source agent's name, the target object's name, and a membership value. The value of the membership would, for instance, allow an agent to choose between two different targeted energy items based on the level of threat present near them.
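The affordance data described above lends itself to a simple fact representation. The sketch below stores hypothetical affordance/4 facts and picks the target with the highest membership for a given action; all names and values are illustrative.

% affordance(SourceAgent, Action, Target, Membership).
affordance(prey1, eat,  energy3,   0.8).   % nothing threatening nearby
affordance(prey1, eat,  energy7,   0.3).   % an energy item close to a predator
affordance(prey1, flee, predator2, 0.6).

% best_affordance(+Agent, +Action, -Target, -Membership)
% picks the target with the highest membership for the given action.
best_affordance(Agent, Action, Target, Membership) :-
    findall(M-T, affordance(Agent, Action, T, M), Pairs),
    msort(Pairs, Sorted),
    last(Sorted, Membership-Target).

% ?- best_affordance(prey1, eat, Target, M).
% Target = energy3, M = 0.8.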

3.1.5 Use of Affect

We mentioned the dilemma that a prey must face between fleeing and searching for energy sources. Such a dilemma can be solved by introducing affect. In the context of this simple scene, agents should be able to adjust their emotional state based on the information gathered by their senses. For instance, an agent detecting a predator will become stressed, while seeing an energy source close by will have the opposite effect. As affect should hold an influence over the agents' decision process, an energy source on its own should be preferred to one that is close to a corpse or a predator. The agents use two basic affects: happiness and stress. These two affects act as representations of positive and negative emotions respectively, providing a basic model of affect for the agents. Following Ortony, Clore and Collins' model, these affects should provide reactions to events, to actions from other agents, and to the physical aspect of objects (Ortony, Clore, Collins, 1990, chapter 3).

Consequences of events:
- Hunger causes stress to increase.
- Hunger should decrease happiness slowly.

Reactions to other agents' actions:
- A prey watching a predator coming towards it should see its stress increase.

Reactions to the appearance of objects:
- A prey should find a stressed and hungry predator more menacing.


These affects should be conditioned by various parameters from the agent's senses. The information relative to affect should therefore be provided by input tokens, as some input tokens may be used both by the Finite State Machines and to influence the affective states. The affective states will be part of a Fuzzy State Machine at the top of a FSM hierarchy, which will allow affect to play a role in high-level decision making. The input tokens will not distinguish any particular group of emotional input when processed for affect, as this distinction is made at the creation of the input token from the percepts.
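The reaction rules listed in this section could be written as affect-update clauses. The sketch below uses hypothetical input token names and update rates, and clamps both affects to the range [0, 1].

% affect_rule(+InputToken, +AffectName, -Delta): how an input shifts an affect.
affect_rule(hungry,            stress,     0.10).   % hunger raises stress
affect_rule(hungry,            happiness, -0.02).   % ... and slowly lowers happiness
affect_rule(predator_incoming, stress,     0.30).   % an approaching predator is stressful
affect_rule(energy_nearby,     stress,    -0.10).   % nearby food has the opposite effect
affect_rule(energy_nearby,     happiness,  0.05).
affect_rule(_,                 _,          0.00).

% Update one Name-Value pair, clamped to [0, 1].
update_affect(Input, Name-V0, Name-V) :-
    affect_rule(Input, Name, Delta), !,
    V is max(0.0, min(1.0, V0 + Delta)).

% update_affects(+Input, +Affects0, -Affects)
update_affects(Input, Affects0, Affects) :-
    maplist(update_affect(Input), Affects0, Affects).

% ?- update_affects(predator_incoming, [happiness-0.5, stress-0.2], A).
% A = [happiness-0.5, stress-0.5].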

3.2 Agent structures

During the course of the project, two reactive architectures stood out.

3.2.1 Basic Agent Structure

During the early phases of the project, the agents were built with a basic reactive architecture in mind, where the agent's behaviour was highly dependent on a single finite state machine. This structure did not yet make use of fuzzy logic or fuzzy state machines, and relied on concurrent state machines to provide the agent's output.

Fig 2: Basic structure of an agent.

3.2.2 Affective Agent Structure

Later in the development of the project, the structure was updated to allow the use of affect and fuzzy logic. Improvements were made to the input filtering and behaviour selection, and the concurrent state machines were replaced by a hierarchy composed of several state machines whose activity is managed by a fuzzy state machine. This new structure also introduced the use of affordance memberships.


Fig 3: Structure of the affective agent.

As this structure is used in the final program, here is a brief overview of the flow represented in the figure above:

- The agent gathers data about the environment through its senses.
- The sense data is used in conjunction with internal states and variables to calculate affordance memberships.
- Sense and affordance data are then both used to produce input tokens.
- The input tokens are fed into the FuSM, which updates the affective states and produces membership outputs for each underlying FSM.
- The FSMs are selected and run, producing output tokens.
- The output tokens are then filtered through behaviour selection to produce a set of actions, which the agent will execute.
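Read as a pipeline, the steps above could be organised around a single top-level predicate. In the sketch below every sub-predicate is a trivial stub standing in for a component described later in this chapter; none of the names or stub values are taken from the project source.

% Top-level step sketch for the affective agent.
agent_step(Agent) :-
    gather_senses(Agent, SenseTokens),
    affordance_memberships(Agent, SenseTokens, Affordances),
    input_filtering(Agent, SenseTokens, Affordances, InputTokens),
    fusm_update(Agent, InputTokens, FsmMemberships),
    run_fsms(Agent, InputTokens, FsmMemberships, OutputTokens),
    behaviour_selection(Agent, OutputTokens, Actions),
    execute_actions(Agent, Actions).

% Trivial stubs so the sketch can be loaded and traced.
gather_senses(_, [sense_energy]).
affordance_memberships(_, _, [eat-0.8]).
input_filtering(_, SenseTokens, _, SenseTokens).
fusm_update(_, _, [energy_fsm-1.0]).
run_fsms(_, _, _, [move_to_energy]).
behaviour_selection(_, Outputs, Outputs).
execute_actions(Agent, Actions) :-
    format("~w executes ~w~n", [Agent, Actions]).

% ?- agent_step(prey1).
% prey1 executes [move_to_energy]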

3.3 Description of Components

This section will provide a detailed description of each component of the affective architecture described above.

3.3.1 Agent Senses

Agents should be able to perceive their environment. Although the program itself already contains all the necessary data, according to the first of the design principles the interaction with the environment should be done from the agent's perspective (Pfeifer, 1996, p3). Data must therefore be passed from the environment to the agent. To replicate the process of acquiring information about the environment, the agent must obtain information filtered according to its internal parameters: its senses must gather percept data. An agent can possess more than one sense, and its senses should be run concurrently to gather data for input processing. This data is represented as sense tokens, which consist of either an object's name or an atom.
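A sketch of a vision-like sense producing sense tokens from world data is given below. The object positions, the vision range and the seen/2 token format are illustrative assumptions, not the project's actual representation.

% World data for the sketch.
object_type(energy1,   energy).
object_type(predator1, predator).
object_type(prey1,     prey).

object_position(energy1,   120, 80).
object_position(predator1, 400, 300).
object_position(prey1,     100, 90).

vision_range(prey1, 150).

% sense_tokens(+Agent, -Tokens): one seen(Object, Type) token per object in range.
sense_tokens(Agent, Tokens) :-
    object_position(Agent, AX, AY),
    vision_range(Agent, Range),
    findall(seen(Object, Type),
            ( object_position(Object, OX, OY),
              Object \== Agent,
              sqrt((OX - AX)**2 + (OY - AY)**2) =< Range,
              object_type(Object, Type)
            ),
            Tokens).

% ?- sense_tokens(prey1, T).
% T = [seen(energy1, energy)].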


3.3.2 Input and Output Management

This section describes the various approaches towards formatting the state machine’s input and output tokens.

3.3.2.1 Input management

The input management went through a radical change halfway through the project, which moved the structure closer to the Neat model of AI, which, as opposed to the Scruffy model, emphasises an organised structure.

3.3.2.1.1 Scruffy approach

Here are overviews of the early input tokens:

tokenName(sensedObjectName, sensedObjectType).
sensed(tokenName, sensedObjectName, sensedObjectType).

In the early versions of the program, inputs and senses were structurally very close. Very few changes were made from a sense token to an input token, which had two repercussions:

1. Information gathered from the senses was passed directly on to the input tokens. As a result, they contained more information, such as the sensed object type, and in many cases this extra information fed to the FSMs was not relevant.

2. As each input token potentially matched a sense token, a considerable number of input tokens were fed into the FSMs. This led to a multiplication of output tokens from the FSMs (see Fig. 4).

Fig 4: Flow of information tokens in the early structure.

The input tokens carried information with them, allowing this information to be used both in the FSMs and in output filtering. This led to the FSMs becoming more ambiguous and complex than they needed to be (as illustrated in the early FSM design, section 3.4.4.1). It was decided that this structure was to be abandoned, as it favoured flexibility at the expense of structural coherence and performance, which are key when developing an agent for a game: the code needs to be both efficient and expandable.

3.3.2.1.2 Neat Approach

Here is an overview of a revised input token:

[tokenName, membership]

To accommodate a better structure, the filtering of inputs has been revised. Instead of producing one input token per sense token, the input filtering produces input tokens based on an agent's sense tokens as a whole. These new input tokens offer a summary of the percept data, as opposed to presenting it directly.


For example, instead of multiple away_from(Object, prey) tokens, a single sense_prey token suffices to provide the necessary information to the FSMs. These inputs are one atom long, and thus contain less information than in the first approach. This gives the FSMs more clarity, as well as a more symbol-oriented functionality.

Fig 5: Flow of information tokens in the revised structure.

This second approach means more time is spent filtering inputs, as the process of producing input tokens is more complex. However, it results in considerably fewer inputs being fed into the FSMs, and consequently fewer outputs and actions to be sorted and treated in behavior selection. Each input token can be treated as a fuzzy set, using a membership variable. This type of information differs from the scruffy approach, since the membership only refers to the input token itself, and not to data out of the scope of the FSMs.
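
The following is a minimal sketch (not the project’s actual code) of how such a summarizing token could be produced from a list of sense tokens; the sense_prey_input/2 and prey_weight/2 names, as well as the 0.3 scaling factor, are assumptions made for illustration.

sense_prey_input(SenseTokens, [sense_prey, Membership]) :-
    findall(W,
            ( member(nearby_from(Obj, prey), SenseTokens),
              prey_weight(Obj, W) ),
            Weights),
    sum_list(Weights, Total),
    Membership is min(1.0, Total * 0.3).   % clamp the summed weight into [0, 1]

% Assumed helper: every sensed prey contributes the same weight.
prey_weight(_Obj, 0.5).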

3.3.2.2 Output Management

The output management (or behavior selection) acts as a filter between the outputs of the FSMs and the agent’s actions, much like input selection does between senses and FuSM. Its task is to define a set of actions to be called by the agent. As its functionality and structure are similar, most of the points covered in the Input Management sub-section above also apply to behavior selection. The major difference is that the output of the behavior selection consists of predicates, which represent actions. Behaviour selection predicates are of the form:

out(AgentName, OutputList, ActionPredicate).
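
As an illustration, a hedged example of such a predicate (the flee/1 action name is an assumption) could map the flee output token to a fleeing action:

% Behaviour selection sketch: trigger a flee action when the FSM outputs contain flee.
out(AgentName, OutputList, flee(AgentName)) :-
    member(flee, OutputList).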

3.3.3 Fuzzy system

To work with the fuzzy logic implied by affects and fuzzy state machines, a system to pass values into fuzzy sets and to perform calculations on such sets had to be created.

3.3.3.1 Sets and subsets

Within the program, a fuzzy set is defined as a collection of subsets. Each subset contains the description of its membership function as well as its parameters. For example, the distance fuzzy set is composed of the subsets collides_with, close_to, nearby_from and away_from.


Subset          Membership Function    Function Parameters (A, B, C)
collides_with   uh (unified high)      [0]
close_to        gd (g decreasing)      [0, 0.5]
nearby_from     t (triangle)           [0, 0.5, 1]
away_from       gi (g increasing)      [0.5, 1]

Fig 5: Membership functions for subsets of the Distance fuzzy set

Fig. 6: Graphical representation of the Distance fuzzy set.

The base scale for any of the fuzzy sets is 0 to 1 (the X-axis on the graph), but it can be altered per agent using the Ratio variable defined in the fuzzy set. If the scale is multiplied by 100 for agent1 and by 300 for agent2, the two agents will have different interpretations of distance: a distance of 50 will have a higher nearby_from membership for agent1 and a higher close_to membership for agent2. The predicates describing sets and subsets are of the form:

fuzzy_set(SetName, Ratio, [ListOfSubsets]).
fuzzy_subset(SetName, SubsetName, FunctionName, [ListOfFunctionParameters]).
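
As an illustration, the Distance fuzzy set of Fig 5 could be declared with these predicates as follows (a sketch only; the Ratio value of 1 is an assumption):

fuzzy_set(distance, 1, [collides_with, close_to, nearby_from, away_from]).

fuzzy_subset(distance, collides_with, uh, [0]).
fuzzy_subset(distance, close_to,      gd, [0, 0.5]).
fuzzy_subset(distance, nearby_from,   t,  [0, 0.5, 1]).
fuzzy_subset(distance, away_from,     gi, [0.5, 1]).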

3.3.3.2 Membership functions

Different membership functions can be assigned to the subsets. In the description of each function, the parameters take the names A, B, C, D (Behera, 2008). A short evaluation sketch follows the list below.

Unified (u), Unified High (uh) and Unified Low (ul) are crisp membership functions.
o Unified is 1 between bounds A and B.
o Unified High is 1 until A, while Unified Low is 0 until A.

Triangle (t), Pi (p), Gradient Increasing (gi) and Gradient Decreasing (gd) use straight slopes to provide a gradient between bounds.

o Gradient Increasing provides an increasing gradient from A to B, while Gradient Decreasing provides the opposite.

Page 21: Submitted for the MEng in Computer Science with Games ... · Project Final Report Emotional Agents in Game Playing Submitted for the MEng in Computer Science with Games Development

Emotional Agents in Game Playing

- 21 -

o Triangle provides an increasing gradient from A to B, and a decreasing gradient from B to C.

o Pi has the same functionality as Triangle, with an added core of 1 in the center. Pi takes four parameters.

Square (s), Square Increasing (si) and Square Decreasing (sd) model a gradient based on a squared curve.

o Square Increasing and Square Decreasing set this gradient between A and B, respectively increasing and decreasing.

o Square, much like Triangle, uses increases between A and B, and decreases between B and C.
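
The sketch below illustrates how the gi, gd and t functions described above can be evaluated; the membership_value/4 helper is an assumed name, not the project’s actual predicate.

% Evaluate a membership value for a normalized input X in [0, 1].
membership_value(gi, [A, B], X, Mu) :-
    (  X =< A -> Mu = 0.0
    ;  X >= B -> Mu = 1.0
    ;  Mu is (X - A) / (B - A)
    ).
membership_value(gd, [A, B], X, Mu) :-
    (  X =< A -> Mu = 1.0
    ;  X >= B -> Mu = 0.0
    ;  Mu is (B - X) / (B - A)
    ).
membership_value(t, [A, B, C], X, Mu) :-
    (  X =< A -> Mu = 0.0
    ;  X =< B -> Mu is (X - A) / (B - A)
    ;  X =< C -> Mu is (C - X) / (C - B)
    ;  Mu = 0.0
    ).

For example, membership_value(t, [0, 0.5, 1], 0.25, Mu) yields Mu = 0.5.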

3.3.4 Finite State Machine Models

3.3.4.1 Early Design: Concurrent Finite State Machines

The first approach in the project was to use several FSMs and have them work concurrently, producing a vast array of outputs. This model was implemented before work on the energy-related behaviours, thus an FSM relative to such behaviours is not present. These FSMs use the earlier form of input tokens, described in section 3.3.2.1.1.

3.3.4.1.1 Move FSM

Movement was originally handled by an FSM. The agent would stop if it did not detect any object in its vicinity. This was suitable for the earlier agent builds, where searching for energy was not yet implemented. In the later versions, movement was handled in behavior decision.

Inputs: away_from(object, type), nearby_from(object, type), close_to(object, type).
States: moving, waiting.
Outputs: run, walk, wait.

Input                      State      New State
away_from(Object, Type)    moving     waiting
nearby_from(Object, Type)  _AnyState  moving
close_to(Object, Type)     _AnyState  moving

Fig. 7: State change functions for the Move FSM.

Input                    State    Output
close_to(Object, agent)  moving   run
_AnyInput                waiting  wait
_AnyInput                moving   walk

Fig. 8: Output functions for the Move FSM.

3.3.4.1.2 Avoid Collision FSM

Like the Move FSM, the Avoid Collision FSM was dropped in the later versions, where its statements were merged into the newer FSMs.

Inputs: collides_with(Object, Type), no_collision.
States: is_colliding, no_collision.
Outputs: wait, solve_collision.


Input                        State         New State
no_collision                 is_colliding  no_collision
collides_with(Object, Type)  no_collision  is_colliding
_AnyInput                    SameState     SameState

Fig. 9: State change functions for the Avoid Collision FSM.

Input                 State         Output
no_collision          no_collision  run
collides_with(Agent)  is_colliding  solve_collision(Agent)
_AnyInput             _AnyState     wait

Fig. 10: Output functions for the Avoid Collision FSM.

3.3.4.1.3 Avoid FSM

This FSM is still present, as well as expanded, in the final program design (Fleeing FSM, see section 3.3.4.2.1). This early version of the FSM shows the limits of the first input token model, as information that is too specific can lead to many types of inputs being accepted, and the FSM becoming more ambiguous.

Inputs: close_to(Object, border), close_to(Object, agent), nearby_from(Object, border), nearby_from(Object, enemy), nearby_from(Object, neutral), away_from(Object, border), away_from(Object, enemy), away_from(Object, neutral).
States: roaming, fleeing.
Outputs: wait, avoid.

Input                          State      New State
nearby_from(_AnyAgent, agent)  _AnyState  fleeing
away_from(_AnyAgent, agent)    fleeing    roaming
_AnyInput                      SameState  SameState

Fig. 11: State change functions for the Avoid FSM.

Input                       State      Output
close_to(Object, border)    _AnyState  avoid(Object)
close_to(Object, agent)     _AnyState  avoid(Object)
nearby_from(Object, agent)  _AnyState  avoid(Object)
away_from(Object, agent)    fleeing    wait
_AnyInput                   _AnyState  wait

Fig. 12: Output functions for the Avoid FSM.

3.3.4.1.4 Target FSM

In the early designs, each agent would refer to a target object, which represented a position it should follow or avoid. A roaming behavior was achieved by displacing the target around the agent at a fixed radius. Targets are no longer used by the agent.

Inputs: close_to(Object, target), nearby_from(Object, target), away_from(Object, target), no_target.
States: no_target, has_target.
Outputs: follow_target, drop_target, get_target.


Input                           State       New State
away_from(_Target, target)      has_target  has_target
collides_with(_Target, target)  has_target  no_target
no_target                       no_target   has_target
_AnyInput                       SameState   SameState

Fig. 13: State change functions for the Target FSM.

Input      State       Output
no_target  has_target  get_target
_AnyInput  has_target  follow_target
_AnyInput  no_target   drop_target

Fig. 14: Output functions for the Target FSM.

3.3.4.2 State Machine Hierarchy

In the early design, behavior selection relied on processing the combined outputs of the three concurrent FSMs, rather than on any management of the Finite State Machines. The revised design instead organizes the FSMs into a hierarchy driven by the Affect FuSM, described in section 3.3.4.3.

3.3.4.2.1 Fleeing FSM

Inputs: sense_threat, touches_borders.
States: idle, fleeing. (default: idle)
Outputs: noaction, flee, slide_along_borders.
(T being the threshold)

Input                    State    New State
[sense_threat, IM >= T]  idle     fleeing
[sense_threat, IM < T]   idle     idle
[sense_threat, IM >= T]  fleeing  fleeing
[sense_threat, IM < T]   fleeing  idle
[touches_borders(_), 1]  idle     idle
[touches_borders(_), 1]  fleeing  fleeing

Fig 15: State change functions for the Fleeing FSM.

Input                 State    Output
[sense_threat, _IM]   idle     noaction
[sense_threat, _IM]   fleeing  flee
[touches_borders, 1]  idle     noaction
[touches_borders, 1]  fleeing  slide_along_borders

Fig 16: Output functions for the Fleeing FSM.


3.3.4.2.2 Energy FSM

Inputs: hungry, sense_energy, touches_energy, touches_borders(_), senses_borders.
States: idle, searching_for_energy. (default: idle)
Outputs: go_to_energy, eat_energy, wander, bounce_on_border, avoid_borders.
T1: hunger threshold. T2: sense_energy threshold. T3: a threshold higher than T2, used to add opportunistic behavior.

Input                    State                 New State
[hungry, IM >= T1]       idle                  searching_for_energy
[hungry, IM < T1]        idle                  idle
[hungry, IM >= T1]       searching_for_energy  searching_for_energy
[hungry, IM < T1]        searching_for_energy  idle
[sense_energy, IM > T3]  idle                  searching_for_energy
[sense_energy, _IM]      searching_for_energy  searching_for_energy
[touches_energy, 1]      idle                  searching_for_energy
[touches_energy, 1]      searching_for_energy  searching_for_energy
[touches_borders(_), 1]  idle                  idle
[touches_borders(_), 1]  searching_for_energy  searching_for_energy
[senses_borders, 1]      idle                  idle
[senses_borders, 1]      searching_for_energy  searching_for_energy

Fig 17: State change functions for the Energy FSM.

Input                       State                 Output
[hungry, _IM]               idle                  noaction
[hungry, _IM]               searching_for_energy  wander
[sense_energy, _IM]         idle                  noaction
[sense_energy, IM >= T2]    searching_for_energy  go_to_energy
[sense_energy, IM < T2]     searching_for_energy  wander
[touches_energy, 1]         idle                  eat_energy (opportunistic)
[touches_energy, 1]         searching_for_energy  eat_energy
[touches_borders(_), 1]     idle                  noaction
[touches_borders(_), 1]     searching_for_energy  bounce_on_border
[senses_borders, _IM]       idle                  noaction
[senses_borders, IM > 0.4]  searching_for_energy  avoid_borders

Fig 18: Output functions for the Energy FSM.


3.3.4.2.3 Idle FSM

Inputs: hungry.
States: idle, wandering. (default: idle)
Outputs: wander, bounce_on_border.

Input             State      New State
[hungry, IM > 0]  idle       wandering
[hungry, 0]       idle       idle
[hungry, IM > 0]  wandering  wandering
[hungry, 0]       wandering  idle

Fig 19: State change functions for the Idle FSM.

Input          State      Output
[hungry, _IM]  idle       noaction
[hungry, _IM]  wandering  wander

Fig 20: Output functions for the Idle FSM.

3.3.4.3 Affect FuSM

The different finite state machines (seen above) are activated and deactivated based on their matching output membership in a Fuzzy State Machine. Their names and memberships are represented as outputs of the FuSM. This FuSM also introduces affects, which behave as fuzzy states. Currently, the Fuzzy State Machine models the two aforementioned affects: happiness and stress.

Inputs: sense_energy, sense_threat, hungry.
States (Affects): happiness, stress.
Outputs (FSM memberships): fleeing, energy, idle.

Input               Affect           New Affect
[sense_threat, IM]  [stress, AM]     [stress, sqrt(AM * IM)]
[sense_threat, IM]  [happiness, AM]  [happiness, AM]
[sense_energy, IM]  [stress, AM]     [stress, AM]
[sense_energy, IM]  [happiness, AM]  [happiness, sqrt(AM * IM)]
[hungry, IM]        [stress, AM]     [stress, sqrt(AM * IM)]
[hungry, IM]        [happiness, AM]  [happiness, AM]

Fig 21: Fuzzy state transition functions for the Affect FuSM.

Input               Affect           FSM
[sense_threat, IM]  [stress, AM]     [fsm_fleeing, sqrt(AM * IM)]
[sense_threat, IM]  [happiness, AM]  [fsm_fleeing, sqrt((1 - AM) * IM)]
[sense_energy, IM]  [stress, AM]     [fsm_energy, sqrt(AM * IM)]
[sense_energy, IM]  [happiness, AM]  [fsm_energy, sqrt(AM * IM)]
[hungry, IM]        [stress, AM]     [fsm_energy, sqrt(AM * IM * 0.5)]
[hungry, IM]        [happiness, AM]  [fsm_idle, sqrt((1 - AM) * IM)]

Fig 22: Output functions for the Affect FuSM


In Fig. 21, the affect states and FSM memberships are managed using an AND relationship (Independent affect model). This is an issue, since this will inevitably drag the affective states and FSM memberships to zero. Thus, different affect models have been introduced, to provide more ways of treating the affective output.

3.3.4.4 Eventual Full-Fuzzy Architecture

The scope of this project was initially to produce a hierarchy of fuzzy state machines. The final product holds the architecture described above, comprising a fuzzy state machine driving independent finite state machines. The objective would ultimately be to replace these FSMs with concurrent FuSMs, which would produce fuzzy outputs. These fuzzy outputs could be translated into fuzzy actions and behaviours.

3.3.5 Affect models

Affect models are ways of calculating the data relative to an agent’s affective states. Different models have been studied, each with its own advantages and drawbacks. In the following sections, AM will be used to refer to the current affect membership, nAM to the new affect membership, and IM to the input membership.

3.3.5.1 Independent

The independent model is the model described above. The affective output follows an AND relationship, and the new affective state can be represented as: nAM = sqrt(AM * IM). This drags the affective output towards zero: in particular, once a membership reaches zero it can never rise again, since the new membership is a product involving the current one.

3.3.5.2 Averaged

The averaged model is represented by the relationship: nAM = (AM + IM) * 0.5. Being additive, it provides a more balanced affective output and avoids the issue described in the independent model. One drawback is that the output is less dependent on the current membership, which ultimately renders it more predictable.

3.3.5.3 Compensated

The compensated model is a post-treatment of the independent model, aimed at solving its issues. When an affect decreases, it increases the other affects to a certain degree, based on the nature of the other affects and on the size of the decrease. The only potential drawback to this approach arises when all affects reach a value of zero: since none of them can decrease further, none of them can increase either, and all the affect values stay locked at zero.

3.3.5.4 Normalized

The normalized model is another post-treatment of the affective memberships, applied on top of the independent model. Once normalized, their sum should always be equal to 1. This model has one obvious drawback: normalization locks all affective values as soon as one reaches zero. This model is absent from the experimentation section due to drawbacks similar to those of the independent model, which do not yield interesting conclusions. The data for this model can still be found in Appendices J and K.
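
As an illustration, the two models that are applied directly as update rules can be written as follows (a minimal sketch, assuming a new_affect/4 helper name):

new_affect(independent, AM, IM, NAM) :-
    NAM is sqrt(AM * IM).
new_affect(averaged, AM, IM, NAM) :-
    NAM is (AM + IM) * 0.5.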


4 System Implementation

This chapter will cover various aspects of, and issues that occurred during, the development of the program. This includes a description of the hierarchy and properties of the various object types, as well as implementation details of each element of the affective structure detailed in the design section.

4.1 Object Management & Hierarchy

4.1.1 Object Components

Objects in the game are classified using a simple hierarchy. When an object is created, it is assigned the name ‘o’ + gensym (a unique identifier). For example, the first object created will use the name ‘o1’, the second ‘o2’, and so on. An object is an entity present in the scene, and it holds three essential values:

Its type, which specifies what kind of object it is.

Its position in the scene.

Its level of energy.

An object’s name is used to tag all the data that refers to it. Below is a list of the predicates relative to the base object type:

type(objectName, type).
position(objectName, X, Y).
energy(objectName, EnergyAmount).

Agents and energy sources are objects, which means they both include the parameters above. The Object type is the root of the object hierarchy.

Fig 23: Object hierarchy.

4.1.2 Energy type

The type energy is used to identify energy sources in the scene. The radius of an energy source represents the amount of energy it contains. It is expressed by the predicate type(objectName, energy). The type is the only defining feature of the energy object, as parameters such as position or radius are also shared with agents.

4.1.3 Agent types

An agent can be of two types: predator or prey.

type(objectName, predator).


type(objectName, prey).

The type is accessed during the input and output processes to differentiate between preys and predators. It does not bear any influence on the workings of the FuSM and the FSMs. An agent also holds a name, which allows the user to identify it easily in feedback files. An agent name is composed of ‘agent’ + gensym. Agents have a variety of fixed parameters, which are created and removed as the agent is spawned and deleted.

agent_direction(ObjectName, Direction).
o The direction the agent is pointing towards. Works with the position to execute the steering behaviours.

agent_speed(ObjectName, Speed).
o The agent’s current speed. This speed is calculated against a maximum and a minimum speed, and is influenced by the agent’s energy level.

The agent’s affect levels as well as its current affect model.
o a_affect(ObjectName, AffectModel, [happiness, HLevel], [stress, SLevel]).

The agent’s age, being the number of steps since the agent spawned.
o a_age(ObjectName, Age).

The agent’s output tokens.
o a_out(ObjectName, Output).

The agent’s input tokens.
o a_in(ObjectName, Input).

The agent’s actions.
o a_act(ObjectName, Actions).

The agent’s sense tokens, relative to objects sensed by the agent, or to itself. This data is used for input processing.
o a_sobj(ObjectName, SensedObjectsList).
o a_sself(ObjectName, SelfSensedList).

To increase performance, a separate list containing objects in collision with the agent is also used by the input processing.
o a_scol(ObjectName, SensedCollisionList).

The agent’s name.
o a_name(ObjectName, AgentName).

The agent’s senses.
o a_senses(ObjectName, SenseList).

The finite state machines run by the agent and their current states (one instance per FSM).
o a_fsm(ObjectName, FSMname, CurrentState).

4.1.4 Borders

Borders are lines, as opposed to agents, which can be represented as points. This means the agents need a special way to sense their proximity to the borders. A way to achieve this is to project the agent’s position onto each of the four borders, and use these points as references. This defines the border object type: type(ObjectName, border).


Fig 24: Border objects B projected from agent A.

Defining borders as objects brings two benefits:

1. They are part of the object structure, which means their position predicate can be requested, just like any other objects.

2. They are part of the object list, which allows them to be processed by an agent’s senses, letting the agent determine its proximity to the border.

4.1.5 Object management

The file object_management.pl serves the purpose of creating and removing objects in the scene. Basic objects are created using the make_object predicate, which is called in the creation process of any object in the scene: make_energy and make_agent, which respectively create an energy source and an agent, first use make_object to create a basic object, to which they add new properties relative to their object type.
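
The sketch below shows how a make_object-style predicate could be written; the argument order and the use of assertz/1 are assumptions, but the generated facts match the type/2, position/3 and energy/2 predicates listed in section 4.1.1.

:- use_module(library(gensym)).
:- dynamic type/2, position/3, energy/2.

make_object(Type, X, Y, Energy, ObjectName) :-
    gensym(o, ObjectName),               % yields o1, o2, ...
    assertz(type(ObjectName, Type)),
    assertz(position(ObjectName, X, Y)),
    assertz(energy(ObjectName, Energy)).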

4.2 Agent step sequence

An agent’s step cycle is achieved with the agent_life predicate.

The step sequence, listed below, illustrates the affective agent structure, as seen in the design chapter; a sketch of the sequence follows the list.


First, data is initialized for the agent. This mostly includes purging the sense and input token lists.

Then, the agent’s senses are executed, and percept data is gathered.

Affordances are calculated based on the percept data.

Input tokens are formed based on the sense and affordance data.

The input tokens are fed into the FuSM, which is executed and returns a list of selected FSMs.

These FSMs are run with the same input data, producing output tokens.

The behavior selection sorts, trims and merges the output data into action predicates.

The action predicates are executed.

The agent statistics are updated, stored and written.

Finally, the agent is drawn in the scene.
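
The sketch below expresses this sequence as a single clause; it is a structural illustration only, and apart from get_input and run_fsms (mentioned in sections 4.5 and 4.6), the helper predicate names and arities are assumptions.

agent_life(Agent) :-
    reset_step_data(Agent),         % purge sense and input token lists
    run_senses(Agent),              % gather percept data
    compute_affordances(Agent),     % affordances from the percept data
    get_input(Agent),               % build input tokens
    run_fusm(Agent, SelectedFSMs),  % the FuSM selects the FSMs to run
    run_fsms(Agent, SelectedFSMs),  % the FSMs produce output tokens
    select_behaviour(Agent),        % sort, trim and merge outputs into actions
    execute_actions(Agent),
    update_statistics(Agent),
    draw_agent(Agent).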

4.3 Senses

The sense management is contained in the senses.pl file. The agent types possess two senses.

4.3.1.1 Hearing

The first sense is a simplified sense of hearing, which detects objects within a radius, independently of the direction the agent is currently facing. This assumes every object in the scene emits a certain amount of noise.

4.3.1.2 Touch

The second sense is the sense of collision, or touch. Agents are able to detect objects they are colliding with. This sense works the same way as hearing. Though this might be considered a waste of performance (since the distance is calculated twice, once for each sense), it keeps the different senses encapsulated: the test for collision should not happen within the hearing sense.


Note that objects can be detected by several senses; thus an object colliding with the agent will be both touched and heard.
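
A hedged sketch of the hearing sense is given below: it collects every object within a hearing radius of the agent. The hear/3 name and the use of findall/3 are assumptions; get_distance/3 is one of the utilities listed in section 4.9.

hear(Agent, Radius, HeardObjects) :-
    findall(Object,
            ( type(Object, _),                 % enumerate every object in the scene
              Object \= Agent,
              get_distance(Agent, Object, D),
              D =< Radius ),
            HeardObjects).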

4.4 Affordance memberships

Much like drives, affordance memberships influence input and output by giving a degree of likelihood associated with a possible action. The flee_a affordance specifies how likely the agent is to run away from a predator, and represents the general threat level of this specific predator for this specific prey.

The aff_flee_a predicate uses the predator’s hungriness to determine its threat level. This implies that agents can identify hungriness amongst themselves. In a more elaborate simulation, agents could possibly identify hungriness through the physical traits and behaviour of other agents.
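
Since the original listing is not reproduced here, the following is only a hedged reconstruction of the idea: the threat grows with the predator’s hungriness. The aff_flee_a/3 arity, the hungriness/2 helper and the exact formula are assumptions.

aff_flee_a(_Prey, Predator, Threat) :-
    type(Predator, predator),
    hungriness(Predator, Hunger),          % assumed to lie in [0, 1], 1 = starving
    Threat is min(1.0, 0.5 + 0.5 * Hunger).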

4.5 Input Selection

The input and behavior selections make use of the same structure. All input selection predicates are of the following form:

in(AgentName, InputToken, Membership) :- Rules for merging.

When the get_input predicate is called, all input selection predicates are selected, and their results are then sorted and placed in an input token list of the form: [[Input, Membership], [Input, Membership], …].


Such a structure simplifies the process of adding input tokens. The only requirement is to write a new ‘in’ predicate matching the form above.
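
For example, a hedged sketch of such a clause (the exact contents of the a_scol/2 list are an assumption) could declare a touches_energy input whenever a collided object is of type energy:

in(Agent, touches_energy, 1) :-
    a_scol(Agent, Collisions),
    member(Object, Collisions),
    type(Object, energy),
    !.                                     % one matching object is enough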

4.5.1.1 Border detection

Agents should be able to sense how close they are to borders. To the agent’s perception, borders are represented as projections of the agent’s position on the test bed’s sides. These projections are defined as objects. They can thus be retrieved from the object list and sensed by the agent. The agents can then perceive borders and turn around before coming into contact with them.

As a safeguard, the borders are explicitly detected if the agent collides with them. This leads to the border_bounce behavior, which prevents the agents from going out of bounds.


4.5.1.2 Energy inputs

These inputs specify what the agent considers as an energy source, and how strongly the presence of energy is detected. As the meaning of energy differs for a predator and a prey, different predicates had to be written, differentiating the predator and prey types. The prey looks for energy-typed objects, while the predator looks for agent-typed objects.

To obtain the membership (Weight), agents make use of the eat_e affordance. Using the total of the weights obtained from the sensed energies, the input membership is determined using the sense_energy fuzzy set.

4.6 State Machines

4.6.1 Finite State Machines

4.6.1.1 Sorting and running the FSMs

The finite state machines are run from the run_fsms predicate in fsm_management.pl. The FSM with the highest membership from the FuSM is selected from the list of FSMs. Since concurrent fuzzy output has not been achieved, only the FSM with the highest membership will be activated. A list of suitable inputs is then determined from the input alphabet using get_fsm_input, and the FSM is executed. The input alphabet for each state machine has been defined in the recognized_input predicates. This allows the retrieval of the inputs supported by specific FSMs, to remove extra and unwanted outputs.


The predicate get_fsm_input(Inputs, FSM, FSMInput) filters the input tokens through these alphabets to produce the final list of inputs for a desired state machine.

As there may be more than one input passed to the FSM, there may be several outputs. These outputs are flattened, trimmed, and sorted before being stored in a_out.

4.6.1.2 FSM structures

The transition functions in the state machines have been implemented using the format:

fsm_name_state(Input, State, NewState).
fsm_name_output(Input, State, Output).

This simplifies the process of state transition, as Prolog steps through all possible predicates until it finds a match for the input and state. A detailed view of the finite state machines is provided in Appendix C.
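
As an illustration, the Fleeing FSM of Fig 15 and Fig 16 could be expressed in this format as follows; the fsm_fleeing_* names and the threshold value of 0.5 for T are assumptions.

fsm_fleeing_state([sense_threat, IM], idle, fleeing)    :- IM >= 0.5.
fsm_fleeing_state([sense_threat, IM], idle, idle)       :- IM <  0.5.
fsm_fleeing_state([sense_threat, IM], fleeing, fleeing) :- IM >= 0.5.
fsm_fleeing_state([sense_threat, IM], fleeing, idle)    :- IM <  0.5.
fsm_fleeing_state([touches_borders(_), 1], State, State).

fsm_fleeing_output([sense_threat, _IM], idle,    noaction).
fsm_fleeing_output([sense_threat, _IM], fleeing, flee).
fsm_fleeing_output([touches_borders(_), 1], idle,    noaction).
fsm_fleeing_output([touches_borders(_), 1], fleeing, slide_along_borders).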

4.6.2 Fuzzy State Machine

In terms of the predicate, the FuSM takes one more parameter, which specifies the agent’s affect model. This parameter is only used in the state change functions, since the fuzzy states are the affects. Only the independent and averaged affect models require it, as the compensated and normalized models are applied as post-treatments (see section 3.3.5). This allows the affective states to be calculated as part of the state transition function. The names fusm_a_s and fusm_a_o refer to the state change and the output transition functions. A detailed view of the fuzzy state machine is provided in Appendix B.

4.7 Behaviour selection

The behaviour decision processes the output tokens obtained from the Finite State Machines into actions. All actions resulting from behavior selection are executed concurrently. However, since concurrent fuzzy behaviors were not fully achieved (more on this issue below) and were not implemented in the program, this aspect has been reduced, and the behavior decision has been designed so that only a minimal number of actions will filter through.


As the behavior choice functions in the same way as the input selection, only the most relevant example will be detailed: the pursuit of energy. To decide which energy source to follow, the prey will gather all energy sources in its sense data and apply a cost function to them.

This cost function is defined by a reward-over-distance relation (a sketch follows the list below): Cost = Aff * Dae / (Dce + Dca).

Aff being the eat_e affordance membership.

Dae being the distance between agent and energy.

Dce and Dca being the summed distances of surrounding corpses to the energy and to the agent, respectively.
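
A direct translation of this relation into a predicate is sketched below; the energy_cost/5 name is an assumption.

energy_cost(Aff, Dae, Dce, Dca, Cost) :-
    Cost is Aff * Dae / (Dce + Dca).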

The same principles are applied to the choice of a prey by a predator.

4.8 Behaviors and actions

4.8.1 Crisp Actions

4.8.1.1 Wandering

The wandering and searching behaviors use a similar method to simulate the agent walking in different directions. The early method, which was later dropped, was to use a point as a target. The agent would walk to the target until colliding with it, then reassign it within a radius of itself. The newer method is simpler, yet provides a more realistic wandering behavior: every step, a small random angle is added to the agent’s direction. With the increment being small enough, this allows for random movements that avoid feeling erratic. This latter method, combined with border avoidance, gives the agent a realistic search behavior, compensating for the agent’s lack of memory.
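
A hedged sketch of this wandering step is given below; the wander_step/1 name, the retract/assert update and the +/-0.2 radian range are assumptions, while clamp_angle/2 is one of the utilities listed in section 4.9.

wander_step(Agent) :-
    retract(agent_direction(Agent, Direction)),
    Offset is (random_float - 0.5) * 0.4,      % small random angle, roughly +/-0.2 rad
    Raw is Direction + Offset,
    clamp_angle(Raw, NewDirection),            % wrap the angle back into [0, 2 * Pi]
    assertz(agent_direction(Agent, NewDirection)).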

4.8.1.2 Pursuing

To head towards an energy source, a prey orients its direction to the angle between itself and the energy source, and runs towards it. A predator pursuing a prey will sprint by increasing its movement speed. Both running and sprinting increase energy consumption, as the movement speed is higher.


4.8.1.3 Fleeing

When fleeing, an agent translates all the positions of the sensed threats into a barycenter, which serves as a reference point to correct its course. This allows the agent to flee from a single predator as well as from groups (Fig 25). However, this implies that all predators pose an equal threat to the prey. To correct this, weights must be assigned to each predator, weights that will affect the barycenter.

Fig 25: Fleeing from a group of predators (not using flee_a)

The fleeing behaviour takes into account the level of threat of each sensed predator using the flee_a affordance. The flee_a affordance membership is used as a weight for each predator when the barycenter is calculated: the bigger the threat a predator represents, the closer to it the barycenter will be. This allows the agent to flee from a group of predators while still prioritizing the escape from the biggest threat. In Figure 26, predator C is the biggest threat, as it is closer to the prey. Any factor that affects threat would also affect the barycenter.

Fig 26: Fleeing from a group of predators (using flee_a)
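
A hedged sketch of the weighted flee point is given below: the barycenter of the sensed predators is computed using their flee_a memberships as weights. get_barycenter/4 is one of the utilities listed in section 4.9, while aff_flee_a/3 follows the arity assumed in the sketch of section 4.4.

flee_point(Prey, Predators, Xc, Yc) :-
    findall(W,
            ( member(P, Predators),
              aff_flee_a(Prey, P, W) ),
            Weights),
    get_barycenter(Predators, Weights, Xc, Yc).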

4.8.2 Concurrent Fuzzy Actions

Halfway through the project, an attempt was made at fuzzy actions. Pfeifer’s Third Principle of Design claims that artificial intelligence can arise from a number of parallel and loosely coupled processes (Pfeifer, 1996, p. 3). The aim was to apply this principle to agents to provide concurrent and synchronized behaviours within a single entity.


Outputs merged to actions could hold a membership variable, which in turn would affect the final behaviour of the agent. The same effect has been achieved with preys being attracted to energy sources.

Fig 27: Fuzzy avoidance of a predator

As shown in Fig 27, an agent would slightly correct its course to avoid a predator that is far away (the flee membership is low). The agent would, however, turn around faster the greater the flee membership, which is determined by the threat the predator represents. Using the flee_a affordance, the prey determines the threat and therefore the membership of the flee action. A closer or hungrier predator would induce a bigger threat, and the agent would turn around faster.

Fig 28: Concurrent fuzzy actions

The difference between crisp and fuzzy actions resides in the fact that fuzzy actions can be combined: the steering data of multiple actions can be summed to produce a unique behavior. In Fig 28, the prey reaches the energy while keeping its distance from the predator. This is the result of the weighted sum of the direction changes. Several tests have been conclusive, such as the fuzzy avoidance of predators and the attraction to energy sources. However, synchronous fuzzy behaviors were harder to achieve, as they would imply an entire structure based on concurrent fuzziness. As more time could not be spent on developing this aspect further, it was eventually dropped from the project. Fuzzy avoidance has been kept for the avoid_borders behavior, so that agents searching for energy turn around when sensing a border close by instead of bouncing on it. It is, however, not combined with any other behavior.

4.9 Utilities

The utilities.pl file was created to handle calculations relative to position, collision detection and list operations. As the name implies, these utilities are used on various occasions throughout the program, and are implemented in terms of each other when possible. They include the following (a short sketch of two of them follows the list):


The values for Pi, Half Pi and Two Pi.

Predicates to obtain the distance between two objects: o get_distance(X1, Y1, X2, Y2, Distance). o get_distance(Object1, Object2, Distance).

A predicate to obtain a random position. o get_random_point(X, Y, Radius, NewX, NewY). o The radius of the object is used so that the obtained position does not place the object intersecting with the borders.

Predicates to obtain the center or barycenter of a list of objects: o get_center(ObjectList, Xc, Yc). o get_barycenter(ObjectList, WeightList, Xc, Yc).

Predicates to return the angle between two objects: o get_angle(X1, Y1, X2, Y2, Angle). o get_angle(Object1, Object2, Angle).

A predicate to clamp an angle between the values of 0 and Pi * 2. o clamp_angle(Angle, ClampedAngle).

A predicate to determine if angle A is on the left or on the right of angle B. o get_angle_side(A, B, Side).

A predicate to obtain a random angle. o get_random_angle(Angle, Cosine, Sine).

Predicates to add or subtract the same number from a list. o add_to_all(Value, List, ResultList). o subtract_to_all(Value, List, ResultList).

Predicates to obtain the value that is closest to a minimum or maximum in a list, or closest in general.

o get_closest_min(Minimum, List, ClosestMin). o get_closest_max(Maximum, List, ClosestMax). o get_closest_match(Value, List, ClosestMatch).

Predicates to obtain the average of a list of values. o average(List, Average). o weighted_average(List, WeightList, Average).

A predicate to find the first item of List1 to appear in List2. o find_first(List1, List2).

A predicate to extract a parameter at a certain index from another predicate. o get_parameter(Index, Predicate, Parameter).
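
Hedged sketches of two of these utilities are given below; the actual implementations in utilities.pl may differ.

get_distance(X1, Y1, X2, Y2, Distance) :-
    Distance is sqrt((X2 - X1)**2 + (Y2 - Y1)**2).

clamp_angle(Angle, ClampedAngle) :-
    TwoPi is 2 * pi,
    ClampedAngle is Angle - TwoPi * floor(Angle / TwoPi).   % wrap into [0, 2 * Pi)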

4.10 Feedback and Data Output

There are three ways for the program to display feedback to the user.

The command window displays a summary of each agent’s energy and affective states.

The user can type the load_feedback command in the command interface to load the feedback window, which will display more information on each agent for every step.


At every agent step, data involving the current step, agent age, energy, happiness and stress are recorded in a ‘results’ file, which can be found in the Feedback folder.

4.11 UI and Test Bed

Different variations of the step function have been implemented. In the user interface, the Step button allows precise stepping through the scene, whereas the step drop-down menu allows for steps in increments from 5 to 100. The Step Until Dead option keeps stepping through until all agents in the test bed are dead.


5 Experimentation and Testing

5.1 Observations during testing

The Prolog console allowed for extensive testing, as predicates could be called and values requested at any point in the runtime. There was no exhaustive testing plan for the agents; however, the repeated testing and behavior calibration led to the following observations.

Movement: The agents are able to move consistently and engage in a searching behavior when hungry. The hungriness is also visible in their movement speed, as the agents become slower the less energy they possess.

Border avoidance: Agents searching for energy successfully turn away from walls in a smooth fashion, without colliding with them. This shows a successful implementation of a fuzzy action. Agents fleeing predators cannot walk past the walls; they do, however, hug and slide along the walls as their behavior suggests. One drawback is that there is no decision-making involved in sliding away in the optimal direction, and agents fleeing against walls will often find themselves stuck in a corner. In all cases, agents could not leave the test bed.

Decision making: Agents can manifest a level of decision-making based on their affective levels (more on this in the experimentation section), and can try to grab a nearby energy source when low on energy, even while being chased. The main drawback, which is quite significant and inherent to the behaviours acting independently of each other, is that the agent reverts to its original speed when attempting to eat an energy source while being chased. This issue is illustrated by the ‘Triangle’ world preset. The ‘Energy decision’ preset shows the ability of an agent to select the most appropriate energy source.

5.2 Experimentation

The aim of this chapter is to assess the impact of affect on the agents’ behavior and survival. Affect should provide the agents with more realism and less predictability, and any negative impact on survivability should not outweigh these benefits. The experiments were run until the last agent’s death, using the step_until_all_dead predicate. For each experiment, two scenarios have been planned: one including five preys, and one including three preys and two predators. These scenarios are made to simulate semi-random object placement, with preys and predators at different positions and starting with various levels of energy.

The data gathered for each agent is its energy ratio, happiness and stress affects over time. The average energy ratio over time is also presented to aid with assessing the agents’ survivability. As opposed to the energy ratio, stress and happiness are meaningful only in individual cases, which is why an example agent is taken in each experiment. The example agent will not necessarily be the longest survivor, as the graphs are also used to analyze the effects of the different affect models. Detailed descriptions of each affect model can be found in the Design chapter. The detailed experiment results can be found in Appendices D to K.


5.2.1 Independent Model

The independent model is defined by the multiplicative relation: nAM = sqrt(AM * IM).

5.2.1.1 Preys Only

Fig 29: Summed Energy Ratio Over Time Steps (Preys Only, Independent)

The maximum survival time is 770 steps, achieved by Prey 3.

Fig 30: Energy, Happiness and Stress of Prey 3 Over Time Steps (Preys Only, Independent).

It was initially thought that the stress and happiness values would quickly reach the minimum of zero. Once this minimum is reached, the affects should not be able to rise again, due to the fact that the independent model is purely multiplicative (sqrt(A * B)). Contrary to what was expected, the results for the independent model show a clear variation of stress: when the energy of an agent starts dropping, stress is affected and rises proportionally. It should be noted that this behavior is due to the fact that stress does not reach a value of zero. On the other hand, happiness does eventually drop to zero and does not rise again, which underlines the limits of this model.



5.2.1.2 Preys and Predators

Fig 31: Summed Energy Ratio Over Time Steps (Preys & Predators, Independent)

The maximum survival time is 274 steps, achieved by Prey 3. Survival time with preys and predators is considerably shorter, as predators may chase preys around, which makes them run and potentially lose more energy. Another loss of energy resides in the energy sources that the dead preys have not eaten: several energy sources remain at the end of this experiment. All this energy loss resulted in the early starvation of the agents.

Fig 32: Energy, Happiness and Stress of Pred 1 Over Time Steps (Preys & Predators, Independent).

The most interesting agent in this experiment is Pred 1. As can be seen, approximately between steps 50 and 100, Pred 1’s happiness shows some quick and significant increases, while its stress and energy consumption both go up: it is chasing a prey (which happens to be Prey 2). The multiple happiness increases occur when the predator spots a potential prey, which shows that the predator changed its mind more than once when choosing a prey. Pred 1 eventually catches up with its prey around step 125 and eats it. The same pattern for stress and happiness emerges: stress varies but is maintained at stable values, while happiness ends up stuck at zero due to the limits of the independent model.



5.2.2 Averaged Model

As its name implies, the averaged model sets the affects to the average of themselves and the input memberships.

5.2.2.1 Preys Only

Fig 33: Summed Energy Ratio Over Time Steps (Preys Only, Averaged)

The maximum survival time is 643 steps, achieved by Prey 5.

Fig 34: Energy, Happiness and Stress of Prey 2 Over Time Steps (Preys Only, Averaged).

The averaged model shows quick increases in stress, but no affect blocked at the zero value. This could prove to be an efficient model, as it is safer than the independent model. However, its non-multiplicative nature implies more uniform changes, which ultimately become more predictable. The values are also extremely quick to vary, as they are more dependent on the input. It is to be noted that the survival time is quite low, at 643 steps. This is a consequence of this particular model, where most agents are kept in a fairly high state of happiness, which has an impact on their hunger-based decisions.



5.2.2.2 Preys and Predators

Fig 35: Summed Energy Ratio Over Time Steps (Preys & Predators, Averaged)

Fig 36: Energy, Happiness and Stress of Pred 1 Over Time Steps (Preys & Predators, Averaged).

This graph confirms the high impact of the averaged model on the affective states. It is also worth noting that the curves are often straight lines due to the additive nature of the model.



5.2.3 Compensated Model

The compensated model is aimed at solving the problem of stuck values in the independent model. A decreasing affect will in turn increase the others.

A stress decrease increases happiness by 75% of the decrease.

A happiness decrease increases stress by 25% of the decrease.

5.2.3.1 Preys Only

Fig 37: Summed Energy Ratio Over Time Steps (Preys Only, Compensated)

The maximum survival time is 845 steps, achieved by Prey 2.

Fig 38: Energy, Happiness and Stress of Prey 2 Over Time Steps (Preys Only, Compensated).

The effects of compensation can be clearly seen on the graph, where a drop in stress (either due to escaping a predator or to eating an energy source) is rapidly followed by a boost of happiness. In some cases, this boost of happiness is not maintained, as hunger or other factors might override it within a short interval. However, it could be argued that these happiness bumps represent the sense of relief of an agent when something beneficial happens.



5.2.3.2 Preys and Predators

Fig 39: Summed Energy Ratio Over Time Steps (Preys & Predators, Compensated)

The maximum survival time is 1318 steps, achieved by Prey 2. This unusually long survival time is mostly due to a single prey, Prey 2, being the sole survivor of the bunch. This is visible on the above graph: the energy ratio is more spread out than usual. The total energy decreases between steps 1 and 200, while the predators chase the preys and eat one of them; the remaining preys then end up surviving and eating the rest of the energy sources. There was no energy left at the end of this experiment. Another observation is that a similarly long survival was also achieved by an agent under the normalized model, which was eventually replaced by this model.

Fig 40: Energy, Happiness and Stress of Prey 2 Over Time Steps (Preys & Predators, Compensated).

Prey 2, the survivor, was chased between steps 5 and 225, as the increased energy consumption shows. This graph shows the same issue in the model as stated before: once happiness has reached the value of zero, it will not increase anymore. This effect is amplified by the fact that the normalization forces the stress to a value of one.



5.2.4 Experimental Observations

We can formulate several conclusions from the different experiment results.

5.2.4.1 The affective states are coherent

The stress and happiness affective states manage to model happiness and stress in a relatively believable fashion. In all cases, stress increases when the prey is starving or being pursued by predators. Happiness acts as an indicator of the prey’s ability to sense energy: the agent receives a boost in happiness when sensing energy, which in turn inhibits its stress value when perceiving enemies.

5.2.4.2 The compensated model is the most suitable for emulating affective states

The independent model shows a clearer balance and representation of the affective states, but is limited by its inability to let an affect rise above zero once it has reached it. The symmetry of the normalized model accentuates this problem by blocking both affective values, which made it irrelevant. On the other hand, while the averaged model does not suffer from these kinds of issues, it shows a much more predictable behavior of the affective states, which are very easily pushed to extreme values. The best way to model affect amongst the studied cases is therefore the compensated model, where the main drawback of the independent model is avoided while the model still acts multiplicatively. The representation of affect is coherent with the agent’s life, and it introduced some emergent affective behaviors in the form of happiness ‘relief’ peaks.

This leads to the impact of the affective states. As the FSMs are chosen based on the highest membership, stress-related FSM memberships are weighed against happiness-related memberships. Thus, an agent highly stimulated by one or several pieces of energy will take risks against nearby predators, as its happiness value will outweigh its stress value. However, if the predators end up being dangerously close, stress will step over happiness, forcing the agent to flee. This demonstrates the impact of emotion on an agent’s behavior, as the affective states introduce some form of risk-taking in preys.

5.2.4.3 Affective states are an advantage

Without affective states, which play a mediator role between the different behaviors, the agent has to place a priority on either feeding on energy sources or avoiding predators. The consequences are opposite, but both lead to a quicker death for the agent: prioritizing fleeing would make the agent flee as soon as it spots a predator, regardless of the predator’s intentions or distance, whereas prioritizing feeding would make it potentially ignore predators when it spots an energy source. The outcome is either an easier death by predator or death by starvation. No matter the affect model, affective states give an agent a better, fuzzy judgment which gives it better chances of survival.

5.2.4.4 Lack of affective interaction and feedback

As far as observable behavior goes, affective feedback from the agents is somewhat limited by XPCE and the development environment, which do not allow the same level of graphical detail or visual feedback as a game developed using a graphics API would. The only form of visual feedback the agents can deliver relative to their affective state is their movement speed: how fast they move around in relation to their stress level. Therefore, at the present time, the agents do not offer much in terms of believability in their representation of emotion. In their current state, the agents would hardly be suitable in a game situation, as the feedback they offer to the player is rather limited.


6 Critical Evaluation

6.1 Project Management

6.1.1 Tasks and Technical Deliverables

The following tasks aimed at producing a working agent prototype by the end of Semester 1, before implementing fuzzy logic and emotional behavior in Semester 2. All the tasks have been completed, with some delays.

6.1.1.1 Deliverables:

Initial Report (27/10/11, delivered)
Autonomous Agents in Testbed (5/12/11, delivered with reduced specifications)
Interim Report (19/01/12, delivered)
Final Version of Code (8/03/12, delivered)
Final Report (06/05/12, delivered)

6.1.1.2 Main tasks:

Establish a basic agent structure. (1 Week)

Create a system of predicates for agent management. (1 Week)

Create a user interface to facilitate testing. (5 months) o New task: create an interface element to switch between agent AI protocols.

Develop and test a basic State Machine. (2 Weeks)

Develop an agent structure with the ability to move around in the environment. (3 Weeks)

Implement energy sources. (1 week, finished)

Calibrate State Machine behaviour of the agent and its interactions with energy sources. (2 weeks)

o The agent should be able to move in the environment, spot and eat an energy source.

Evolve FSM into controlled hierarchy of FSMs. (3 weeks) o Make a state machine out of some basic states, to get more precise behaviours.

Research and develop a fuzzy logic structure. (4 Weeks) o A system of predicates to create and manage fuzzy sets, as well as for the fuzzification and defuzzification of variables.

Implement emotions as FuSMs and controllers for the FSM hierarchy. (3 weeks) o Emotions should have an influence on state membership in each agent’s FSM.

Calibrate predator/prey interaction. (3 weeks) o Predators will detect preys and chase them o Preys will flee predators


6.1.2 Risk Assessment and Summary

Risks and measures are summarized in the following table (Fig. 40).

Issue: Computer Breakdown
Cause: Laptop is old and starting to show signs of weakness.
Likelihood: 0.4  Severity: 0.6  Significance: 0.24
Measures to take: Use online backup systems, like Dropbox or Skydrive. Keep copies of important files on external storage.

Issue: Illness
Cause: Tendency to easily catch illnesses. Being anxious also increases this risk.
Likelihood: 0.5  Severity: 0.3  Significance: 0.15
Measures to take: If not serious, illnesses should not prevent work. In case a meeting with the supervisor is impossible, send a notice as soon as possible. Allow more time to compensate for illnesses.

Issue: Task extension
Cause: Lacking some experience in AI, some unforeseen issues in programming and structure may appear and overrun a task.
Likelihood: 0.7  Severity: 0.7  Significance: 0.49
Measures to take: Set up rational task lengths, with slack to compensate for overruns. Set optional goals instead of having too many tasks.

Issue: Task dead end
Cause: Some tasks might end up stuck, or too complex to accomplish in a reasonable time.
Likelihood: 0.3  Severity: 0.9  Significance: 0.27
Measures to take: Create regular backups of previous versions to fall back on. Minimise task dependencies.

Fig. 40: Main risks and eventual measures.

In retrospect, most of the risks described above occurred during the life of the project. In December, my laptop finally gave in to hard drive faults, and I had to buy a new model. Thankfully, recent backups prevented a significant loss of work.


I contracted the flu on several occasions, but only once was it serious enough to prevent normal work. Task extension occurred at several points during the initial stages of the project, because the early design was not well suited to the goals. This eventually led to a dead end towards December, which prompted a redesign of the program.

6.2 Project Achievements

This section details the knowledge and skills gained by the author during the course of the project. For the reader's and the writer's convenience, it is written in the first person.

6.2.1 The field of AI

Working on this project gave me an understanding of the challenges involved in implementing virtual agents. I can now identify various reactive structures suitable for games, weigh their pros and cons, and judge their fit depending on what a game is set to achieve. My ability to design state machines has progressed significantly since the beginning of this project, and I am now comfortable using fuzzy logic. Reading different materials about AI also introduced me to new concepts that, even though not applicable within the scope of this project, will prove more than useful in the future.

6.2.2 Prolog

Prolog is a very challenging language, and even though I had practised it the year before, it proved difficult to get back into its particular workflow. Being used to IDEs such as Visual Studio or jEdit, working with SWI-Prolog was a drastic change, and I had to cope with the absence of many conveniences. This austerity proved beneficial in the longer term, as it forced me to think in different ways. Interacting with the console introduced me to some basic Unix commands, and more care went into debugging, as the debugger was quite different from those I had used before. Working with Prolog has also greatly increased my ability to think recursively, as most predicates that interact with a list had to include some form of recursion, as in the small example below.
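For instance, a typical list-processing predicate of the kind the project relied on looks like the following sketch; the agent/2 term and the predicate name are illustrative and not taken from the project code:

```prolog
% Illustrative only: recursively sum the energy of a list of agents,
% where each agent is represented as agent(Name, Energy).
total_energy([], 0.0).
total_energy([agent(_Name, Energy)|Rest], Total) :-
    total_energy(Rest, RestTotal),
    Total is RestTotal + Energy.

% ?- total_energy([agent(prey1, 0.5), agent(prey2, 0.25)], T).
% T = 0.75.
```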

6.2.3 New level of Excel mastery

Incidentally, the large amount of data gathered from the various experiments led me to a more extensive use of Microsoft Excel. To collect and present the data efficiently, I had to use various sorting functions and algorithms, and I learned to program simple macros in Visual Basic. This allowed me to produce worksheets that were both reusable and presented the data coherently.

6.3 Further Development

6.3.1 Full Fuzzy Architecture

Currently, the FSM with the highest membership in the affect FuSM is selected and run on its own. A full fuzzy architecture would instead run all FSMs concurrently and combine their outputs, weighted by their memberships, into a single fuzzy output. Such combined fuzzy outputs would be expected to produce a wider variety of emergent behaviours and, overall, better interaction and believability in the agents. A minimal sketch of this idea follows.
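As a hedged illustration only, and not the project's implementation, the blending could look like the following, where fsm_output/3 is a hypothetical predicate returning each FSM's proposed steering vector:

```prolog
% Hypothetical per-FSM steering proposals, as X-Y vectors (stubs for
% illustration only).
fsm_output(feeding, _Agent, 1.0-0.0).    % head towards the nearest energy source
fsm_output(fleeing, _Agent, -1.0-0.0).   % head away from the predator
fsm_output(idle,    _Agent, 0.0-0.0).    % stay put

% blend_outputs(+Agent, +NameMembershipPairs, -BlendedVector)
% Combine every FSM's output, weighted by its affect membership.
blend_outputs(Agent, Memberships, X-Y) :-
    findall(Mu-Vec,
            ( member(Fsm-Mu, Memberships),
              fsm_output(Fsm, Agent, Vec) ),
            Weighted),
    blend_(Weighted, 0.0, 0.0, 0.0, X, Y).

blend_([], MuSum, XSum, YSum, X, Y) :-
    MuSum > 0.0,
    X is XSum / MuSum,
    Y is YSum / MuSum.
blend_([Mu-(Xi-Yi)|Rest], Mu0, X0, Y0, X, Y) :-
    Mu1 is Mu0 + Mu,
    X1 is X0 + Mu * Xi,
    Y1 is Y0 + Mu * Yi,
    blend_(Rest, Mu1, X1, Y1, X, Y).

% ?- blend_outputs(prey1, [feeding-0.3, fleeing-0.6, idle-0.1], V).
% V is approximately -0.3-0.0: the agent drifts away from the
% predator, but less strongly than if fleeing ran alone.
```

The appeal of this design is that no single FSM ever takes exclusive control, so behaviour transitions become gradual rather than abrupt.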


6.3.2 New behaviours and flocking

The achievements of this project could be used as a base for introducing new features and parameters that provide a more diverse range of behaviours. Predators could, for instance, resort to eating energy sources or corpses instead of prey when starving. Flocking could allow predators to group up and hunt prey, or prey to outnumber and discourage a single predator. In terms of structure, further refinements to the affective system could be implemented, such as additional affective states and affect models. Prey could also influence each other positively in terms of affect, which would encourage flocking behaviour.

6.3.3 Producing a Game Using a Graphics API

The initial thought was to produce a game incorporating elements of affective AI, which was later set aside in favour of a more AI-oriented focus using Prolog. I maintain the hope that, using the tools available for porting or interfacing Prolog code with C++, I will be able to reuse both the agents and their structure to expand this project into a playable game. Having a player interact in real time with the agents, even without an explicit goal, would be quite interesting.

6.4 Personal Reflection

6.4.1 Project summary

Artificial Intelligence was still quite a new concept to me when I decided to start this project. As the previous AI modules had only covered theoretical aspects of the field, I was eager to delve into the practical domain, where I would have the opportunity to implement artificial agents within a simulation. My initial idea was to produce a game where the player could interact with intelligent agents and where AI would be woven into the gameplay. However, it became clear during the first weeks that this project would take another direction: we were going to focus on the AI elements, using the Prolog language. I was initially quite surprised by this shift of focus, but after deliberation I decided to take it as a challenge and agreed to lead the project in this new direction.

I felt overwhelmed during the early stages of the project, and had problems finding a direction for my research. Consequently, the early designs fell short of what they were meant to achieve, and a drastic redesign of the program structure halfway through the project (as illustrated in the design chapter) allowed me to rethink my approach to finite state machine design, and to agent design in general. During these periods I was often subject to peaks of stress.

To some extent, the final product did not live up to the standards set by the brief. In particular, the agents cannot react to or express emotions, although it was demonstrated that an affective state can have a clear impact on an agent's behaviour. The attempts at producing a more elaborate type of agent, using a combination of fuzzy outputs, were moderately successful, but not viable enough to be implemented as part of the final agent structure.

In the end, even though I did not get the chance to make a game, building and experimenting with these agents taught me a valuable set of skills, and the knowledge I gained in the field of AI will certainly benefit a future career. This project was definitely a challenge, and proved to be very different from anything I had previously experienced.


6.4.2 Acknowledgements

I would like to thank my project supervisor, Dr Darryl Davis, who provided a level of support I have rarely seen. My personal supervisor, Dr Peter Robinson, also deserves praise for helping me through periods of intense stress, and possibly preventing me from going insane!


7 Conclusion

This project set out some of the main principles of game AI, while providing an introduction to the challenges of building artificial agents for a game and of representing emotion as part of the agent structure. Using a reactive structure combining finite state machines with a fuzzy state machine, we were able to build a set of agents that use emotional states to influence their behaviours and choices. The final agents would not, in themselves, be suitable to entertain a player if ported directly into a game; however, they do exhibit satisfactory behaviour in terms of decisions and survivability. They also manage to represent emotional levels with a certain degree of accuracy, which encourages further research into conveying these emotions through visual feedback. These agents, although they cannot express emotions, succeed in demonstrating the potential of affect in modelling even simple behaviours.

Final Word Count (excl. diagrams, table of contents and appendices): 14875.


8 Bibliography

8.1 Online Material

XPCE examples, online: http://www.swi-prolog.org/packages/xpce/examples.html (accessed 8th October 2011).

Rolf Pfeifer, 1996, Building “Fungus Eaters”: Design Principles of Autonomous Agents, Computer Science Department, University of Zurich. Available: http://uzh.academia.edu/RolfPfeifer/Papers/745783/Building_fungus_eaters_Design_principles_of_autonomous_agents (accessed 15th April 2012).

Kiel Mark Gilleade, Alan Dix, Jen Allanson, 2005, Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me, Computing Department, Lancaster University. Available: http://www.digra.org/dl/db/06278.55257.pdf (accessed 9th March 2012).

Laxmidhar Behera, uploaded 2008, Fuzzy Sets, a Primer, online video. Available: http://www.youtube.com/watch?v=H9SikB7HbSU&feature=relmfu (accessed 10th October 2011).

Artificial intelligence (video games), Wikipedia. Available: http://en.wikipedia.org/wiki/Artificial_intelligence_(video_games) (accessed 24th March 2012).

Steve Hanks, Martha E. Pollack, Paul R. Cohen, 1993, Benchmarks, Test Beds, Controlled Experimentation, and the Design of Agent Architectures, AI Magazine, Vol. 14, No. 4, p. 17. Available: http://citeseerx.ist.psu.edu/viewdoc/downloaddoi=10.1.1.41.1591&rep=rep1&type=pdf (accessed 10th October 2011).

8.2 Academic and Research Papers

Dr Darryl Davis, 2011, 08968: Advanced Rendering and AI for Games, Department of Computer Science, University of Hull.

Suzanne Carol Lewis, 2004, Computational Models of Emotion and Affect, PhD dissertation, University of Hull.

Mateas M, 2002, Interactive Drama, Art and Artificial Intelligence, PhD dissertation, Carnegie Mellon University.

Mateas M, 2003, Expressive AI: Games and Artificial Intelligence, in Proceedings of the International DiGRA Conference.

Stan Franklin, Art Graesser, 1996, Is it an Agent or just a Program? A Taxonomy for Autonomous Agents, University of Memphis, USA.

Stan Franklin, 1997, Autonomous Agents as Embodied AI, Cybernetics and Systems: An International Journal, Volume 28, Issue 6.


Paolo Busetta, James Bailey, Kotagiri Ramamohanarao, 2003, A Reliable Computational Model for BDI Agents, Department of Computer Science, University of Melbourne, Australia.

8.3 Books

Michael Negnevitsky, 2011, Artificial Intelligence: A Guide to Intelligent Systems, Third Edition, Pearson Education Limited, Great Britain.

Gibson J, 1979, The Ecological Approach to Visual Perception, Hillsdale: Lawrence Erlbaum Associates.

Mat Buckland, 2005, Programming Game AI by Example, Wordware Publishing, Texas.

Ortony A, Clore G.L, Collins A, 1990, The Cognitive Structure of Emotions, Cambridge University Press, United Kingdom.

John David Funge, 2004, Artificial Intelligence for Computer Games, A K Peters, Canada.

Alex J. Champandard, 2004, AI Game Development: Synthetic Creatures with Learning and Reactive Behaviors, New Riders Publishing, North America.

Ivan Bratko, 2001, Prolog Programming for Artificial Intelligence, Third Edition, Addison-Wesley Publishers, Great Britain.

R. Kruse, J. Gebhardt, F. Klawonn, 1993, Foundations of Fuzzy Systems, John Wiley & Sons, Chichester, England.

Marvin Minsky, 1987, The Society of Mind, Simon and Schuster, New York.


Appendix A. Initial Brief

“Affective computation is starting to attract considerable interest in a number of computational domains. One such area is the use of affect in computational agents that allow more involving interactive game playing. This project will address how a computational model of emotion (affect) can be used to make better computer games. A relatively simple scenario (e.g. predator-prey) can be used to investigate the effect of using different emotions and perhaps different computational models of emotion. The exact nature of the project is open to negotiation.”


Appendix B. Fuzzy State Machine


Appendix C. Finite State Machines

Energy FSM

Fleeing FSM


Idle FSM


Appendix D. Experimental data – Independent Model (Preys Only)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–5.]


Appendix E. Experimental data – Independent Model (Preys & Predators)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–3 and Pred 1–2.]


Appendix F. Experimental data – Averaged Model (Preys Only)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–5.]


Appendix G. Experimental data – Averaged Model (Preys & Predators)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–3 and Pred 1–2.]


Appendix H. Experimental data – Compensated Model (Preys Only)

[Charts: two runs, each showing Summed Energy Ratio and Energy, Happiness and Stress over time for Prey 1–5.]


Appendix I. Experimental data – Compensated Model (Preys & Predators)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–3 and Pred 1–2.]


Appendix J. Experimental data – Normalized Model (Preys Only)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–5.]


Appendix K. Experimental data – Normalized Model (Preys & Predators)

[Charts: Summed Energy Ratio, and Energy, Happiness and Stress over time for Prey 1–3 and Pred 1–2.]


Appendix L. Original Time Plan


Appendix M. Modified Time Plan