
Building Human-Level AI for Real-Time Strategy Games

Ben G. Weber, Expressive Intelligence Studio, UC Santa Cruz ([email protected])

Michael Mateas, Expressive Intelligence Studio, UC Santa Cruz ([email protected])

Arnav Jhala, Computational Cinematics Studio, UC Santa Cruz ([email protected])

Abstract

Video games are complex simulation environments with many real-world properties that need to be addressed in order to build robust intelligence. In particular, real-time strategy games provide a multi-scale challenge which requires both deliberative and reactive reasoning processes. Experts approach this task by studying a corpus of games, building models for anticipating opponent actions, and practicing within the game environment. We motivate the need for integrating heterogeneous approaches by enumerating a range of competencies involved in gameplay and discuss how they are being implemented in EISBot, a reactive planning agent that we have applied to the task of playing real-time strategy games at the same granularity as humans.

Introduction

Real-time strategy (RTS) games are a difficult domain for humans to master. Expert-level gameplay requires performing hundreds of actions per minute in partial information environments. Additionally, actions need to be distributed across a range of in-game tasks including advancing research progress, sustaining an economy, and performing tactical assaults. Performing well in RTS games requires the ability to specialize in many competencies while at the same time working towards high-level goals. RTS gameplay requires both parallel goal pursuit across distinct competencies as well as coordination among these competencies.

Our work is motivated by the goal of building an agent capable of exhibiting these types of reasoning processes in a complex environment. Specifically, our goal is to develop an RTS agent capable of performing as well as human experts. Paramount to this effort is evaluating the agent in an environment which resembles the interface provided to human players as closely as possible. In particular, the agent should not be allowed to utilize any game state information not available to a human player, such as the locations of non-visible units, and the set of actions provided to the agent should be identical to those provided through the game's user interface. Enforcing this constraint ensures that an agent which performs well in this domain is capable of integrating and specializing in several competencies.

Copyright © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Real-time strategy games demonstrate the need for heterogeneous agent architectures, because reducing the decision complexity of the domain is non-trivial. One possible approach is to split gameplay into isolated subproblems and to reason about each subproblem independently. This approach is problematic, because RTS gameplay is non-hierarchical, resulting in contention for in-game resources across different subproblems. A second approach is to develop an abstraction of the state space and use it for reasoning about goals. This approach is also problematic, because different types of abstractions are necessary for the different tasks that need to be performed. RTS gameplay requires concurrent and coordinated goal pursuit across multiple abstractions. We refer to AI tasks that have these requirements as multi-scale AI problems (Weber et al. 2010).

Our domain of analysis is the real-time strategy game StarCraft. High-level StarCraft gameplay requires specializing in several different competencies of varying complexity. One such competency is micromanagement, which is the task of controlling individual units in combat scenarios in order to maximize the utility of a squad of units. This area of expertise focuses on reactive behavior, because it requires split-second timing of actions. Another competency is strategy selection, which is the task of determining which types of units to produce and which upgrades to research. This area of expertise focuses on planning and opponent modeling, and involves formulating strategies in order to counter opponent strategies. We also selected this game because it is played at a professional level, resulting in a large number of high-quality replays being available for analysis, and there is a large player base for evaluating our system.

Our approach for managing the complexity of StarCraft is to partition gameplay into domains of competence and develop interfaces between these competencies for resolving goal conflicts. It is based on the integrated agent framework proposed by McCoy and Mateas (2008), which partitions the behaviors in a reactive planning agent into managers which specialize in distinct aspects of gameplay. The decomposition of gameplay into managers is based on analysis of expert gameplay.

EISBot is a heterogeneous agent for playing StarCraft which incorporates many of these competencies. The core of the agent is a reactive planner that supports real-time action execution and concurrent goal pursuit. A large number of competencies are implemented in the ABL reactive planning language (Mateas and Stern 2002). The reactive planner interfaces with external components which perform prediction and planning tasks. These components are implemented using case-based reasoning, machine learning, and particle filters. The components are connected by applying several integration idioms, including using working memory as a blackboard, instantiating new goals for the agent to pursue, and suspending and resuming the execution of tasks. We have performed an evaluation of EISBot on a competitive StarCraft tournament ladder.

The remainder of this paper is structured as follows. In Section 2 we discuss related work. Section 3 provides an overview of the EISBot architecture. Next, we discuss StarCraft gameplay and competencies in Section 4, and present the competencies in EISBot in Section 5. In Section 6 we present an evaluation of EISBot. We provide conclusions in Section 7 and discuss future work in Section 8.

Related Work

Our agent builds on ideas from commercial game AI, cognitive architectures, and heterogeneous agent designs. It also continues the tradition of using games as an AI testbed.

Video games are ideal for AI research, because they provide a domain in which incremental and integrative advances can be made (Laird and VanLent 2001). RTS games in particular are interesting because they provide several challenges including decision making under uncertainty, spatial and temporal reasoning, and adversarial real-time planning (Buro 2003). Previous work has also shown that the decision complexity of RTS games is enormous (Aha, Molineaux, and Ponsen 2005).

Cognitive architectures provide many of the mechanisms required for integrating heterogeneous competencies. The ICARUS architecture is capable of reasoning about concurrent goals, which can be interrupted and resumed (Langley and Choi 2006). The system has components for perception, planning, and acting which communicate through the use of an active memory. One of the main differences from our approach is that our system works towards modeling gameplay actions performed by players, as opposed to modeling the cognitive processes of players.

SOAR is a cognitive architecture that performs state abstraction, planning, and multitasking (Wintermute, Xu, and Laird 2007). The system was applied to the task of playing real-time strategy games by specifying a middleware layer that serves as the perception system and gaming interface. RTS games provide a challenge for the architecture, because gameplay requires managing many cognitively distant tasks and there is a cost for switching between these tasks. The main difference from our approach is that we decompose gameplay into more distinct modules and have a larger focus on coordination across tasks.

Darmok is an online case-based planner that builds a case library by learning from demonstration (Ontanon et al. 2010). The system can be applied to RTS games by supplying a collection of game replays that are used during the planning process. The decision cycle in Darmok is similar to the one in EISBot, in which each update checks for finished steps, updates the world state, and picks the next step. Behaviors selected for execution by Darmok can be customized by specifying a behavior tree for performing actions (Palma et al. 2011). The main difference from our approach is that Darmok uses a case library to build a collection of behaviors, while EISBot uses a large collection of hand-authored behaviors.

Heterogeneous agent architectures have also been applied to the task of playing RTS games. Bakkes et al. (2008) present an approach for integrating an existing AI with case-based reasoning. The AI is able to adapt to opponents by using a case retriever which selects the most suitable parameter combination for the current situation, and modifying behavior based on these parameters. Baumgarten et al. (2009) present an RTS agent that uses case-based reasoning, simulated annealing, and decision tree learning. The system uses an initial case library generated from a set of randomly played games, and refines the library by applying an evaluation function to retrieved cases in order to evolve new tactics. Our system differs from this work, because components in EISBot interface using goals and plans rather than parameter selection.

Commercial game AI provides an excellent baseline for agent performance, because it must operate within a complex environment, as opposed to an abstraction of a game. However, the goal of commercial game AI is to provide the player with an engaging experience, as opposed to playing at the same granularity as a player. Both deliberative and reactive planning approaches have been applied to games. Behavior trees are an action selection technique for non-player characters (NPCs) in games (Isla 2005). In a behavior tree system, an agent has a collection of behaviors available for execution. Each decision cycle update, the behavior with the highest priority that can be activated is selected for execution. Behavior trees provide a mechanism for implementing reactive behavior in an agent, but require substantial domain engineering. Goal-oriented action planning (GOAP) addresses this problem by using a deliberative planning process (Orkin 2003). A GOAP agent has a set of goals, which upon activation trigger the agent to replan. The planning operators are similar to those in STRIPS (Fikes and Nilsson 1971), but the planner uses A* to guide the planning process. The main challenge in implementing a GOAP system is defining representations and operators that can be reasoned about in real time. One of the limitations of both of these approaches is that they are used for action selection for a single NPC and do not extend to multi-scale problems.
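
To make the selection mechanism concrete, the following is a minimal sketch of priority-based behavior selection as described above; the Behavior class, the NPC state, and the priorities are our own illustrative assumptions, not code from any of the cited systems.

```python
# Minimal sketch of priority-based behavior selection, as described
# above for behavior tree systems. All names here are illustrative,
# not taken from any specific game or the paper's implementation.

class Behavior:
    def __init__(self, name, priority, can_activate, act):
        self.name = name
        self.priority = priority
        self.can_activate = can_activate  # () -> bool precondition check
        self.act = act                    # () -> None action to perform

def decision_cycle(behaviors):
    """Each update, run the highest-priority behavior whose
    activation conditions hold in the current game state."""
    for behavior in sorted(behaviors, key=lambda b: -b.priority):
        if behavior.can_activate():
            behavior.act()
            return behavior.name
    return None  # no behavior could activate this cycle

# Example: an NPC that prefers fleeing at low health over attacking.
npc = {"health": 20, "enemy_visible": True}
behaviors = [
    Behavior("flee", 10, lambda: npc["health"] < 25,
             lambda: print("fleeing")),
    Behavior("attack", 5, lambda: npc["enemy_visible"],
             lambda: print("attacking")),
]
decision_cycle(behaviors)  # prints "fleeing"
```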

EISBot Architecture

EISBot is an extension of McCoy and Mateas's (2008) integrated agent framework. We have extended the framework in several ways. While the initial agent was applied to the task of playing Wargus (Ponsen et al. 2005), we have transferred many of the concepts to StarCraft. Additionally, we have implemented several of the competencies that were previously identified, but stubbed out in the original agent implementation. Finally, we have interfaced the architecture with external components in several ways.


[Figure 1: EISBot-StarCraft interface. The diagram shows StarCraft connected through the Brood War API, a control process, and the ProxyBot to the ABL agent, which contains sensors, working memory (WMEs), and an active behavior tree (ABT).]

The core of the agent is a reactive planner which interfaces with the game environment, manages active goals, and handles action execution. It is implemented in the ABL (A Behavior Language) reactive planning language (Mateas and Stern 2002). ABL is similar to BDI architectures such as PRS (Rao and Georgeff 1995), a hybrid architecture for planning and decision making. The main strength of reactive planning is support for acting on partial plans while pursuing goal-directed tasks and responding to changes in the environment (Josyula 2005).

EISBot interfaces with StarCraft through the use of sensors which perceive game state and actuators which enable the agent to send commands to units. The ABL interface and agent components are shown in Figure 1. The Brood War API, control process, and ProxyBot components provide a bridge between StarCraft and the agent. The agent also includes a working memory and an active behavior tree (ABT). The ABT pursues active goals by selecting behaviors for expansion from the behavior library, which is a collection of hand-authored behaviors.

Our agent is based on a conceptual partitioning of gameplay into areas of competence. It is composed of managers, each of which is responsible for a specific aspect of gameplay. A manager is implemented as a collection of behaviors in our system. We used several reactive planning idioms to decompose gameplay into subproblems, while maintaining the ability to handle cross-cutting concerns such as resource contention between managers. EISBot is composed of the following managers:

• Strategy manager: responsible for the strategy selection and attack timing competencies.

• Income manager: handles worker units, resource collection, and expanding.

• Construction manager: responsible for managing requests to build structures.

• Tactics manager: performs combat tasks and micromanagement behaviors.

• Recon manager: implements scouting behaviors.

Further discussion of the specific managers and planning idioms is available in previous work (McCoy and Mateas 2008; Weber et al. 2010).

Each of the managers implements an interface for interacting with other components. The interface defines how working memory elements (WMEs) are posted to and consumed from working memory. This approach enables a modular agent design, where new implementations of managers can be swapped into the agent. It also supports the decomposition of tasks, such as constructing a structure. In EISBot, the decision process for selecting which structures to construct is decoupled from the construction process, which involves selecting a construction site and monitoring execution of the task. This decoupling also enables specialized behavior to be specified in a manager, such as unit-specific movement behaviors in the tactics manager.
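
As a concrete illustration of this decoupling, the sketch below shows one manager posting a request WME that another manager consumes; the WorkingMemory and ConstructionRequestWME names are hypothetical and do not reflect EISBot's actual ABL code.

```python
# Illustrative sketch of decoupling via working memory elements (WMEs).
# Class names (ConstructionRequestWME, WorkingMemory) are hypothetical.

class ConstructionRequestWME:
    def __init__(self, structure_type):
        self.structure_type = structure_type
        self.completed = False

class WorkingMemory:
    def __init__(self):
        self.elements = []

    def post(self, wme):
        self.elements.append(wme)

    def query(self, wme_type):
        return [e for e in self.elements if isinstance(e, wme_type)]

memory = WorkingMemory()

# The strategy manager decides *what* to build and posts a request...
memory.post(ConstructionRequestWME("barracks"))

# ...while the construction manager decides *how*: it consumes the
# request, picks a construction site, and monitors execution.
for request in memory.query(ConstructionRequestWME):
    if not request.completed:
        site = (32, 48)  # placeholder for construction site selection
        print(f"building {request.structure_type} at {site}")
        request.completed = True
```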

Integration Approaches

We have integrated the reactive planner with external components. The following approaches are used to interface components with the reactive planner:

• Augmenting working memory
• External goal formulation
• External plan generation
• Behavior activation

These approaches are not specific to reactive planning and can be applied to other agent architectures.

The agent's working memory serves as a blackboard (Hayes-Roth 1985), enabling communication between different managers. Working memory is also used to store beliefs about the game environment. External components interface with EISBot by augmenting working memory with additional beliefs about the game environment. These beliefs include predictions about the opponent's strategy and unit locations. The agent incorporates these predictions in the preconditions of specific behaviors, such as selecting locations to attack.

Another way EISBot interfaces with components is by pursuing goals that are selected outside of the reactive planning process. Goals that are formulated external to the agent can be added to the reactive planner by modifying the structure of the active behavior tree, or by triggering ABL's spawngoal functionality, which causes the agent to pursue a new goal in parallel with the currently active goals. We have used this approach to integrate the reactive planner (Weber, Mateas, and Jhala 2010a) with an instantiation of the goal-driven autonomy conceptual model (Molineaux, Klenk, and Aha 2010).
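
The sketch below illustrates the idea of external goal formulation under our own simplified assumptions; it mimics the spirit of ABL's spawngoal mechanism but is not ABL's actual API, and the goal and state names are invented.

```python
# Rough illustration of external goal formulation. This mimics the
# *idea* of ABL's spawngoal mechanism; it is not ABL's actual API.

class ReactivePlanner:
    def __init__(self):
        self.active_goals = []

    def spawn_goal(self, goal):
        """Add an externally formulated goal; it is pursued in
        parallel with the currently active goals."""
        self.active_goals.append(goal)

class ExternalGoalFormulator:
    def formulate(self, game_state):
        # e.g., a goal-driven autonomy component reacting to a discrepancy
        if game_state.get("expansion_undefended"):
            return "defend_expansion"
        return None

planner = ReactivePlanner()
goal = ExternalGoalFormulator().formulate({"expansion_undefended": True})
if goal is not None:
    planner.spawn_goal(goal)
print(planner.active_goals)  # ['defend_expansion']
```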

We have also integrated the agent with an external planning process (Weber, Mateas, and Jhala 2010b). Plans generated by the planning component are executed in the reactive planner by adding elements to working memory which activate a sequence of actions to be executed. This approach is similar to the plan expansion mechanism. Interfacing with external planners enables the agent to incorporate deliberative reasoning components.

[Figure 2: A screenshot of StarCraft showing Protoss (green) and Terran (yellow) players engaged in combat.]

ABL agents can be interfaced with components which modify the activation conditions of behaviors. ABL behaviors contain a set of preconditions, which specify activation conditions. These conditions can contain procedural preconditions (Orkin 2006), which query an external component about activation. We have explored the use of case-based reasoning for behavior activation (Weber and Ontanon 2010), while Simpkins et al. (2008) present a reinforcement learning approach for behavior activation.
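
A minimal sketch of a procedural precondition follows, assuming an invented case-based component; EISBot's actual behavior activation components are described in the cited work.

```python
# Sketch of a procedural precondition: a behavior's activation check
# delegates to an external component. The CBR stand-in below is
# invented for illustration.

class CaseBasedActivator:
    """Answers 'should this behavior activate?' by retrieving the
    most similar stored case and reusing its recorded decision."""
    def __init__(self, cases):
        self.cases = cases  # list of (feature_vector, activated?) pairs

    def should_activate(self, features):
        def distance(case):
            return sum((a - b) ** 2 for a, b in zip(case[0], features))
        nearest = min(self.cases, key=distance)
        return nearest[1]

activator = CaseBasedActivator(cases=[
    ([5, 0], True),   # e.g., (own army size, enemy army size seen)
    ([1, 4], False),
])

def attack_precondition(game_features):
    # Procedural precondition: query the external component.
    return activator.should_activate(game_features)

print(attack_precondition([4, 1]))  # True: similar to the first case
```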

StarCraft

StarCraft¹ is a real-time strategy game where the player takes the role of a commander in a military scenario. The single objective given to players is to destroy all opponents. Achieving this objective requires performing numerous tasks which at a minimum are gathering resources, producing structures and units, and attacking enemy forces. To gain an advantage over the opponent, a player may perform additional tasks including scouting and upgrading. A screenshot of combat in StarCraft is shown in Figure 2.

There are several properties that make StarCraft gameplay a challenging task. Actions are performed in real time, requiring both deliberative and reactive reasoning. Also, the game enforces imperfect information through a fog-of-war, which limits visibility of the map to areas where the player controls units. There are simultaneous tasks to perform, and focusing too much attention on a single task is detrimental. Another challenge is issuing orders to hundreds of units, which may have distinct tasks.

Learning to play StarCraft is difficult for additional reasons. There is a steep learning curve to StarCraft, because making a mistake early in the game can have drastic consequences. However, it is difficult to evaluate the reward of individual commands due to the long game length. Also, inaction is extremely detrimental, because failing to issue orders results in opponents gaining a strategic or economic advantage.

¹ StarCraft and its expansion StarCraft: Brood War were developed by Blizzard Entertainment™.

There are three diverse races in StarCraft: Protoss, Terran, and Zerg. Each race has a unique technology graph (tech tree) which can be expanded to unlock new unit types, structures, and upgrades. Additionally, each race supports different styles of gameplay. Mastering a single race requires a great deal of practice, and experts in this domain focus on playing a single race.

Effective StarCraft gameplay requires specializing in several distinct competencies. Micromanagement is the task of controlling individual units in combat in order to maximize the utility of a unit. Terrain analysis is another area of competence, which determines where to place units and structures to gain tactical advantages. Strategy selection is the task of determining which unit types to produce and which upgrades to research, in order to counter opponent strategies. Attack timing determines the ideal moment to launch an attack against the opponent.

StarCraft Gameplay

StarCraft gameplay involves performing tasks across multiple scales. At the strategic scale, StarCraft requires decision-making about long-term resource and technology management. For example, if the agent is able to control a large portion of the map, it gains access to more resources, which is useful in the long term. However, to gain map control, the agent must have a strong combat force, which requires more immediate spending on military units, and thus less spending on economic units in the short term.

At the economic scale, the agent must also consider how much to invest in various technologies. For example, to defeat cloaked units, advanced detection is required. But the resources invested in developing detection are wasted if the opponent does not develop cloaking technology.

At the tactical scale, effective StarCraft gameplay requires both micromanagement of individual units in small-scale combat scenarios and squad-based tactics such as formations. In micromanagement scenarios, units are controlled individually to maximize their utility in combat. For example, a common technique is to harass an opponent's melee units with fast ranged units that can outrun the opponent. In these scenarios, the main goal of a unit is self-preservation, which requires a quick reaction time.

Effective tactical gameplay also requires well-coordinated group attacks and formations. For example, in some situations, cheap units should be positioned surrounding long-ranged and more expensive units to maximize the effectiveness of an army. One of the challenges in implementing formations in an agent is that the same units used in micromanagement tactics may be reused in squad-based attacks. In these different situations, a single unit has different goals: self-preservation in the micromanagement situation and a higher-level strategic goal in the squad situation.

Micromanagement

Expert players issue movement and attack commands to individual units to increase their effectiveness in combat. The motivation for performing these actions is to override the default low-level behavior of a unit. When a unit is given a command to attack a location, it will begin attacking the first enemy unit that comes into range. The default behavior can lead to ineffective target selection, because there is no coordination between different units.

Target selection and damage avoidance are two forms of micromanagement applied to StarCraft. To increase the effectiveness of units, a player will manually select the targets for units to acquire. This technique enables units to focus fire on specific targets, which reduces the enemy unit's damage output. Another use of target selection is to select specific targets which are low in health or are high-profile targets. Dancing is the process of commanding individual units to flee from battle while the squad remains engaged. Running away from enemy fire causes the opponent's units to acquire new targets. Once the enemy units have acquired new targets, the fleeing unit is brought back into combat. Dancing increases the effectiveness of a squad, because damage is spread across multiple units, increasing the average lifespan of each unit.
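
The sketch below illustrates focus fire and dancing as just described; the unit fields, the flee threshold, and the order format are invented for illustration.

```python
# Illustrative sketch of the two micromanagement techniques described
# above: focus fire (shared target selection) and dancing (pulling a
# damaged unit out until enemies retarget). Thresholds and unit
# fields are invented.

def pick_focus_target(enemies):
    """Focus fire: the whole squad attacks the weakest enemy first,
    removing its damage output as quickly as possible."""
    return min(enemies, key=lambda e: e["hp"])

def micro_step(squad, enemies, flee_hp=20):
    target = pick_focus_target(enemies)
    orders = []
    for unit in squad:
        if unit["hp"] < flee_hp and unit["under_fire"]:
            orders.append((unit["id"], "flee"))      # dance out of range
        else:
            orders.append((unit["id"], ("attack", target["id"])))
    return orders

squad = [{"id": 1, "hp": 15, "under_fire": True},
         {"id": 2, "hp": 80, "under_fire": False}]
enemies = [{"id": 7, "hp": 30}, {"id": 8, "hp": 90}]
print(micro_step(squad, enemies))
# [(1, 'flee'), (2, ('attack', 7))]
```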

Micromanagement is a highly reactive process, because it requires a large number of actions to be performed. During peak gameplay, expert players often perform over three hundred actions per minute (McCoy and Mateas 2008). Effective micromanagement requires a large amount of attention, and focusing too much on micromanagement can be detrimental to other aspects of gameplay, since the overall advantage gained by micromanaging units is bounded. Due to the attention demands of micromanagement, it is more common earlier in the game, when there are fewer tasks to manage and units to control.

Terrain Analysis

Expert players utilize terrain in a variety of ways to gain tactical advantages. These tasks include determining where to build structures to create bottlenecks for opponents, selecting locations for engaging enemy forces, and choosing routes for force movements. High ground plays an important role in StarCraft, because it provides two advantages: units on low ground do not have vision of units on high ground unless engaged, and units on low ground have a 70% chance of hitting targets on high ground. Players utilize high ground to increase the effectiveness of units when engaging and retreating from enemy units.
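
As a small worked illustration of the second advantage (with invented hit-point and damage values; only the 70% hit chance comes from the text), attacking uphill raises the expected number of attacks needed to destroy a target by roughly 43%:

```python
import math

# Worked illustration of the high-ground penalty described above:
# attackers on low ground hit high-ground targets 70% of the time.
# The hit-point and damage values are invented.

hp = 100
damage_per_hit = 10
hits_needed = math.ceil(hp / damage_per_hit)   # 10 hits to destroy

attacks_flat = hits_needed                     # 10 attacks on even ground
attacks_uphill = hits_needed / 0.7             # ~14.3 expected attacks uphill

print(attacks_flat, round(attacks_uphill, 1))  # 10 14.3
```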

Terrain analysis in RTS games can be modeled as a qualitative spatial reasoning task (Forbus, Mahoney, and Dill 2002). While several of the terrain analysis tasks, such as base layout, are deliberative, a portion of these tasks can be performed offline using static map analysis. This competency also requires the ability to manage dynamic changes in terrain caused by unit creation and destruction.

Strategy Selection

Strategy selection is a competency for determining the order in which to expand the tech tree, produce units, and research upgrades. Expert players approach this task by developing build orders, which are sequences of actions to execute during the opening stage of a game. As a game unfolds, it is necessary to adapt the initial strategy to counter opponent strategies. In order to determine which strategy an opponent has selected, it is necessary to actively scout to determine which unit types and upgrades are being produced. Players also incorporate predictions of an opponent's strategy during the strategy selection process. Predictions can be built by studying the player's gameplay history and analyzing strategies favored on the selected map.

Strategy selection is a deliberative process, because it involves selecting sequences of actions to achieve a particular goal. Strategy planning can be performed both offline and online. Build order selection is an offline process performed before the start of a game, informed by the specific map and anticipated opponent strategies, while developing a counter strategy based on the opponent's strategy during a game is an online task.

Attack Timing

Attack timing is a critical aspect of StarCraft gameplay, because there is a large commitment involved. There are several conditions that are used for triggering an attack. A timing attack is a planned attack based on a selected build order. Timing attacks are used to take advantage of the selected strategy, such as attacking as soon as an upgrade completes. An attack can also be triggered by observing an opportunity to damage the opponent, such as scouting units out of place or an undefended base. There are several other conditions which are used to trigger attacks, such as the need to place pressure on the opponent.

The attack timing competency is deliberative and reactive. Timing attacks can be planned offline and incorporated into the strategy selection process. The online task of determining when to attack based on reactive conditions is complex, because a player has only partial observability.

Competencies in EISBot

EISBot incorporates several AI techniques for implementing the previously described gameplay competencies. We are using a variety of approaches for implementing these tasks and utilizing multiple sources of domain knowledge.

Micromanagement

Micromanagement is a complex task, because it requires precise timing and several unit-specific techniques. We have implemented micromanagement in EISBot by hand authoring several ABL behaviors which are specific to individual unit types. These additional behaviors augment the tactics manager's general attack behaviors with specialized combat behaviors. Implementation of these behaviors is discussed in previous work (Weber et al. 2010).

Hand authoring micromanagement behaviors requires a substantial amount of domain engineering. We selected this approach because creating emergent or explorative techniques capable of the necessary level of sophistication for micromanagement is an open challenge. To reduce this engineering effort, we focused on implementing micromanagement techniques that have been identified by the StarCraft gaming community².

² http://wiki.teamliquid.net/


[Figure 3: EISBot's perception of the game includes unit locations (green), estimations of opponent locations (blue), regions (gray), and chokepoints (orange). The locations of enemy units (red) are not observable by the agent.]

Terrain Analysis

Terrain analysis is critical for effective tactical gameplay. Our system uses Perkins's (2010) terrain analysis library, which identifies qualitative spatial features in StarCraft maps including regions, chokepoints, and expansion locations. The features recognized by the library are added to EISBot's working memory, augmenting the agent's knowledge of the terrain. A portion of the agent's perception of the environment in an example scenario is shown in Figure 3.

The qualitative features identified by Perkins's library are used to specify activation conditions for several behaviors: the reconnaissance manager uses the list of expansion locations to select target locations for scouting opponents, the tactics manager uses the list of chokepoints to determine where to place forces for defending bases, and the construction manager uses the list of regions to determine where to place structures. The list of chokepoints is also used to improve the agent's estimation of the game state. EISBot uses a particle model to track the location of an encountered unit based on its trajectory and nearby chokepoints (Weber, Mateas, and Jhala 2011).
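
The following is a simplified stand-in for the particle-model idea, assuming an invented decay rule and update step; the actual model is described in Weber, Mateas, and Jhala (2011).

```python
# Simplified stand-in for the particle-model idea described above:
# once an enemy unit leaves view, its believed position is advanced
# along its last observed trajectory, with confidence decaying over
# time. The decay rate and update rule are invented.

class Particle:
    def __init__(self, x, y, dx, dy):
        self.x, self.y = x, y        # last observed position
        self.dx, self.dy = dx, dy    # last observed velocity
        self.confidence = 1.0

    def update(self, decay=0.95):
        """Advance the believed position one step and decay confidence."""
        self.x += self.dx
        self.y += self.dy
        self.confidence *= decay

# An enemy unit was last seen at (10, 10) heading toward a chokepoint.
belief = Particle(10, 10, dx=1, dy=0)
for _ in range(5):
    belief.update()
print((belief.x, belief.y), round(belief.confidence, 2))  # (15, 10) 0.77
```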

Strategy Selection

We have implemented strategy selection in EISBot using a combination of reactive and deliberative techniques. During the initial phase of the game, the agent selects and executes a build order, which is a sequential list of actions for pursuing a strategy. The agent reacts to unanticipated situations by incorporating a model for goal-driven autonomy (Molineaux, Klenk, and Aha 2010).

The strategy manager includes a collection of hand-authored behaviors for executing specific build orders. We are using goal-driven autonomy to decouple strategy selection from strategy execution in EISBot. In our system, each build order contains a list of expectations which must remain true throughout the execution of the strategy. If an expectation is violated, then a discrepancy is created and the agent selects a new strategy to pursue. This approach enables the agent to react to unforeseen game situations (Weber, Mateas, and Jhala 2010a).
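
A minimal sketch of this expectation-checking loop is shown below; the expectations, state fields, and build order contents are invented for illustration.

```python
# Minimal sketch of the goal-driven autonomy loop described above:
# a build order carries expectations; a violated expectation raises
# a discrepancy, which triggers selection of a new strategy. The
# expectation and state fields are invented.

build_order = {
    "name": "two_gateway_rush",
    "steps": ["pylon", "gateway", "gateway", "zealot"],
    # Expectations that must remain true while executing the strategy:
    "expectations": [
        lambda s: not s["enemy_air_units_seen"],
        lambda s: s["own_base_intact"],
    ],
}

def check_expectations(strategy, state):
    """Return True if a discrepancy occurred (some expectation failed)."""
    return any(not expect(state) for expect in strategy["expectations"])

state = {"enemy_air_units_seen": True, "own_base_intact": True}
if check_expectations(build_order, state):
    print("discrepancy: abandoning", build_order["name"])
    # ...the agent would now formulate a goal to select a new strategy
```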

After executing an initial strategy, the agent transitions to new strategies using a case-based planner. The case-based planner is used for both strategy selection and opponent modeling (Weber, Mateas, and Jhala 2010b). It selects actions to execute by retrieving the most similar situation from a library of examples, which are extracted from a corpus of expert replays. Plans include actions for producing units, building structures, and researching upgrades. The case-based planner interfaces with the reactive planner by adding sequences of actions to perform to working memory.
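
The sketch below illustrates nearest-neighbor case retrieval in this style, with an invented feature encoding and case library; it is not the planner's actual similarity metric.

```python
# Sketch of nearest-neighbor case retrieval for strategy selection,
# as described above. The feature encoding and cases are invented;
# the actual planner is described in Weber, Mateas, and Jhala (2010b).

def retrieve_plan(case_library, situation):
    """Return the action sequence from the most similar stored case."""
    def distance(case):
        return sum((a - b) ** 2 for a, b in zip(case["features"], situation))
    return min(case_library, key=distance)["plan"]

# Each case pairs a game-state feature vector (e.g., counts of own
# and observed enemy unit types) with the expert's next actions.
library = [
    {"features": [4, 0, 2], "plan": ["train_dragoon", "build_pylon"]},
    {"features": [0, 6, 1], "plan": ["build_cannon", "train_zealot"]},
]
print(retrieve_plan(library, situation=[3, 1, 2]))
# ['train_dragoon', 'build_pylon'] -> posted to working memory
```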

Attack Timing

There are several ways in which attacks are triggered in EISBot. A build order selected by the strategy selector can contain a timing attack annotation that specifies when to first assault the opponent. The strategy manager triggers a timing attack by adding an element to working memory, which is used in precondition checks for attack behaviors. Attacks can also be triggered based on reaching a specific army size. A collection of hand-authored behaviors are included in the agent to ensure that pressure is frequently placed on the opponent.
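
The sketch below illustrates these trigger conditions, with invented WME names and an invented army-size threshold.

```python
# Sketch of the attack-trigger conditions described above: a timing
# attack posted to working memory, or an army-size threshold. The
# WME name and threshold are invented for illustration.

def attack_behavior_preconditions(working_memory, army_size,
                                  army_threshold=24):
    # Trigger 1: the strategy manager posted a timing-attack WME.
    if "timing_attack" in working_memory:
        return True
    # Trigger 2: the army has reached a specified size.
    if army_size >= army_threshold:
        return True
    return False

wm = set()
print(attack_behavior_preconditions(wm, army_size=10))   # False
wm.add("timing_attack")  # e.g., posted when an upgrade completes
print(attack_behavior_preconditions(wm, army_size=10))   # True
```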

Evaluation

To determine the performance of EISBot, we have carried out an initial evaluation against human opponents on the International Cyber Cup (ICCup). ICCup is a competitive StarCraft tournament ladder with over ten thousand active StarCraft players. The skill of players on the ladder ranges from novice to professional, where the majority of players are amateurs with a strong understanding of the game.

ICCup assigns players a letter grade based on a point system, which is similar to the Elo rating system used in chess. Players start with a provisional 1000 points, which are increased after wins and decreased after losses. Novices, or D- ranked players, quickly fall below the 1000-point barrier after a few matches, while amateurs, or D ranked players, maintain a score in the range of 1000 to 2000 points. Scores of 2000 points and higher are achieved by highly competitive and expert players.

We have classified the performance of EISBot by running 250 games against human opponents on ICCup. The opponents had an average score of 1205 with a standard deviation of 1660. EISBot had an average win rate of 32% against human opponents and achieved a score of 1063 points, which outranks 33% of players on the ladder. The point progression of EISBot is shown in Figure 4. While there were several instances where EISBot's score dropped below 1000 points, the agent achieved an average score of 1027 points. Since EISBot was able to reach an average score above 1000 points after a large number of games, it is classified as a D ranked, or amateur, StarCraft player.

[Figure 4: The point progression of EISBot on ICCup (points versus number of games played) indicates that the bot is a D ranked (amateur) player.]

Since we are working towards the goal of achieving human-level competency, our evaluation of EISBot is focused on performance against human opponents. In previous work, we report a win rate of 78% against the built-in AI of StarCraft and bots from the AIIDE 2010 StarCraft AI competition (Weber, Mateas, and Jhala 2011). To our knowledge, EISBot is the only system that has been evaluated against humans on ICCup.

Conclusion

We have presented an approach for building a human-level agent for StarCraft. We analyzed the domain and identified several of the competencies required for expert StarCraft gameplay. Our analysis motivated the need for a heterogeneous agent architecture, because gameplay requires a combination of reactive, deliberative, and predictive reasoning processes, which are performed both offline and online by players. We are developing an agent that interacts with RTS games at the same granularity as humans and exploring approaches for emulating these processes in an agent.

Our agent architecture is further motivated by the properties of the task environment. In addition to being real-time and partially observable environments with enormous decision complexity, RTS games provide a multi-scale AI challenge. Gameplay requires concurrent and coordinated goal pursuit across multiple abstractions. Additionally, RTS games are non-hierarchical and difficult to decompose into isolated subproblems due to cross-cutting concerns between different aspects of gameplay.

EISBot builds on previous work, which conceptually partitioned RTS gameplay into distinct subproblems (McCoy and Mateas 2008). Additionally, we identified several competencies necessary for expert gameplay. These competencies include micromanagement of units in combat, terrain analysis for effective unit positioning, strategy selection for countering opponent strategies, and attack timing for identifying situations to assault the opponent.

The core of the agent is the ABL reactive planner, which manages the active goals, interfaces with the game environment, and monitors the execution of actions. This component was implemented by hand-authoring a large number of behaviors, requiring a substantial amount of domain engineering. To reduce this effort, we utilized rules of thumb and specific techniques which have been identified by the StarCraft gaming community.

EISBot integrates with several heterogeneous components using multiple integration approaches. These include adding elements to working memory, formulating goals to pursue, generating plans to execute, and providing additional conditions for behavior activation. We have used these approaches to integrate the reactive planner with case-based reasoning, goal-driven autonomy, and machine learning components.

We have performed an initial evaluation of EISBot against human opponents on the ICCup tournament ladder. The agent achieved an average win rate of 32% and outranked 33% of players on the ladder. While there is still a large gap between the performance of our system and expert players, EISBot achieved a D ranking, which indicates that its ability is above the novice level and on par with amateur competitive players.

Future Work

There are several directions for future work. The agent can be extended by identifying additional competencies in RTS games. These competencies could be implemented using reactive planning behaviors or an external component. Also, additional external components could be integrated into the agent for existing competencies. For example, Monte Carlo tree search could be used to implement the micromanagement competency (Balla and Fern 2009), and would require less knowledge engineering than our current approach.

Additional forms of evaluation could also be performed for EISBot. In our evaluation, EISBot played against human opponents in ladder matches, which consist of a single game. This form of analysis is insufficient for evaluating the adaptivity of the agent, because each game is against a potentially different opponent. Another way to evaluate agents is in a tournament setting, where matches consist of multiple games against the same opponent. Performing well in this setting requires adapting to the opponent based on the outcome of previous games. Further evaluation could include ablation studies as well, to gain further understanding of the agent's performance against humans.

Another direction for future work is to further pursue the goal of interfacing with the game at the same level as a human player. In the current system, the agent's perception system is able to determine the locations and trajectories of units by directly querying the game state. Future work could explore the use of computer vision techniques for extracting game state using screen captures (Lucas 2007). Currently, there is no restriction on the number of actions the agent can perform. To further mimic human actuation, future work could evaluate how limiting the number of allowed actions per minute impacts agent performance.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant Number IIS-1018954. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

Aha, D. W.; Molineaux, M.; and Ponsen, M. 2005. Learning to Win: Case-Based Plan Selection in a Real-Time Strategy Game. In Proceedings of ICCBR, 5–20. Springer.

Bakkes, S.; Spronck, P.; and van den Herik, J. 2008. Rapid Adaptation of Video Game AI. In Proceedings of the IEEE Symposium on Computational Intelligence and Games.

Balla, R. K., and Fern, A. 2009. UCT for Tactical Assault Planning in Real-Time Strategy Games. In Proceedings of IJCAI, 40–45. Morgan Kaufmann.

Baumgarten, R.; Colton, S.; and Morris, M. 2009. Combining AI Methods for Learning Bots in a Real-Time Strategy Game. International Journal on Computer Game Technologies.

Buro, M. 2003. Real-Time Strategy Games: A New AI Research Challenge. In Proceedings of IJCAI, 1534–1535.

Fikes, R. E., and Nilsson, N. J. 1971. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence 2:189–208.

Forbus, K.; Mahoney, J.; and Dill, K. 2002. How Qualitative Spatial Reasoning Can Improve Strategy Game AIs. IEEE Intelligent Systems 17(4):25–30.

Hayes-Roth, B. 1985. A Blackboard Architecture for Control. Artificial Intelligence 26(3):251–321.

Isla, D. 2005. Handling Complexity in the Halo 2 AI. In Proceedings of the Game Developers Conference.

Josyula, D. 2005. A Unified Theory of Acting and Agency for a Universal Interfacing Agent. Ph.D. Dissertation, University of Maryland.

Laird, J., and VanLent, M. 2001. Human-Level AI's Killer Application: Interactive Computer Games. AI Magazine 22(2):15–25.

Langley, P., and Choi, D. 2006. A Unified Cognitive Architecture for Physical Agents. In Proceedings of AAAI, 1469–1474. AAAI Press.

Lucas, S. 2007. Ms Pac-Man Competition. SIGEVOlution 2:37–38.

Mateas, M., and Stern, A. 2002. A Behavior Language for Story-Based Believable Agents. IEEE Intelligent Systems 17(4):39–47.

McCoy, J., and Mateas, M. 2008. An Integrated Agent for Playing Real-Time Strategy Games. In Proceedings of AAAI, 1313–1318. AAAI Press.

Molineaux, M.; Klenk, M.; and Aha, D. W. 2010. Goal-Driven Autonomy in a Navy Strategy Simulation. In Proceedings of AAAI, 1548–1553.

Ontanon, S.; Mishra, K.; Sugandh, N.; and Ram, A. 2010. On-Line Case-Based Planning. Computational Intelligence 26(1):84–119.

Orkin, J. 2003. Applying Goal-Oriented Action Planning to Games. In Rabin, S., ed., AI Game Programming Wisdom 2. Charles River Media. 217–228.

Orkin, J. 2006. Three States and a Plan: The AI of FEAR. In Proceedings of the Game Developers Conference.

Palma, R.; Gonzalez-Calero, P. A.; Gomez-Martin, M. A.; and Gomez-Martin, P. P. 2011. Extending Case-Based Planning with Behavior Trees. In Proceedings of FLAIRS, 407–412.

Perkins, L. 2010. Terrain Analysis in Real-Time Strategy Games: Choke Point Detection and Region Decomposition. In Proceedings of AIIDE, 168–173. AAAI Press.

Ponsen, M.; Lee-Urban, S.; Munoz-Avila, H.; Aha, D. W.; and Molineaux, M. 2005. Stratagus: An Open-Source Game Engine for Research in Real-Time Strategy Games. In IJCAI Workshop on Reasoning, Representation, and Learning in Computer Games.

Rao, A., and Georgeff, M. 1995. BDI Agents: From Theory to Practice. In Proceedings of ICMAS, 312–319.

Simpkins, C.; Bhat, S.; Isbell Jr., C.; and Mateas, M. 2008. Towards Adaptive Programming: Integrating Reinforcement Learning into a Programming Language. In Proceedings of OOPSLA, 603–614. ACM.

Weber, B. G., and Ontanon, S. 2010. Using Automated Replay Annotation for Case-Based Planning in Games. In Proceedings of the ICCBR Workshop on Computer Games.

Weber, B. G.; Mawhorter, P.; Mateas, M.; and Jhala, A. 2010. Reactive Planning Idioms for Multi-Scale Game AI. In Proceedings of IEEE CIG, 115–122. IEEE Press.

Weber, B. G.; Mateas, M.; and Jhala, A. 2010a. Applying Goal-Driven Autonomy to StarCraft. In Proceedings of AIIDE, 101–106.

Weber, B. G.; Mateas, M.; and Jhala, A. 2010b. Case-Based Goal Formulation. In Proceedings of the AAAI Workshop on Goal-Driven Autonomy.

Weber, B. G.; Mateas, M.; and Jhala, A. 2011. A Particle Model for State Estimation in Real-Time Strategy Games. In Proceedings of AIIDE. To appear.

Wintermute, S.; Xu, J.; and Laird, J. 2007. SORTS: A Human-Level Approach to Real-Time Strategy AI. In Proceedings of AIIDE, 55–60. AAAI Press.

