
This Is a Publication of The American Association for Artificial Intelligence

This electronic document has been retrieved from the American Association for Artificial Intelligence, 445 Burgess Drive, Menlo Park, California 94025; (415) 328-3123; (415) 321-4457; [email protected]; http://www.aaai.org. (For membership information, consult our web page.)

The material herein is copyrighted material. It may not be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from AAAI.

Refinement Planning as a Unifying Framework for Plan Synthesis

Subbarao Kambhampati

Copyright © 1997, American Association for Artificial Intelligence. All rights reserved. 0738-4602-1997 / $2.00

■ Planning—the ability to synthesize a course of action to achieve desired goals—is an important part of intelligent agency and has thus received significant attention within AI for more than 30 years. Work on efficient planning algorithms still continues to be a hot topic for research in AI and has led to several exciting developments in the past few years. This article provides a tutorial introduction to all the algorithms and approaches to the planning problem in AI. To fulfill this ambitious objective, I introduce a generalized approach to plan synthesis called refinement planning and show that in its various guises, refinement planning subsumes most of the algorithms that have been, or are being, developed. It is hoped that this unifying overview provides the reader with a brand-name-free appreciation of the essential issues in planning.

Planning is the problem of synthesizing a course of action that, when executed, takes an agent from a given initial state to a desired goal state. Automating plan synthesis has been an important research goal in AI for more than 30 years. A large variety of algorithms, with differing empirical trade-offs, have been developed over this period. Research in this area is far from complete, with many exciting new algorithms continuing to emerge in recent years.

To a student of planning literature, the welter of ideas and algorithms for plan synthesis can at first be bewildering. I remedy this situation by providing a unified overview of the approaches for plan synthesis. I describe a general and powerful plan-synthesis paradigm called refinement planning and show that a majority of the traditional, as well as the newer, plan-synthesis approaches are special cases of this paradigm. It is my hope that this unifying treatment separates the essential trade-offs from the peripheral ones (for example, brand-name affiliations) and provides the reader with a firm understanding of the existing work as well as a feel for the important open research questions.

In this article, I briefly discuss the (classical) planning problem in AI and provide a chronology of the many existing approaches. I then propose refinement planning as a way of unifying all these approaches and present the formal framework for refinement planning. Next, I describe the refinement strategies that correspond to existing planners. I then discuss the trade-offs among different refinement-planning algorithms. Finally, I describe some promising new directions for scaling up refinement-planning algorithms.

Planning and Classical Planning

Intelligent agency involves controlling the evolution of external environments in desirable ways. Planning provides a way in which the agent can maximize its chances of achieving this control. Informally, a plan can be seen as a course of action that the agent decides on based on its overall goals, information about the current state of the environment, and the dynamics of the evolution of its environment (figure 1).

Figure 1. Role of Planning in Intelligent Agency. (Given its goals and a static, observable environment, an agent with perfect perception and deterministic actions must repeatedly decide what action to do next.)

The complexity of plan synthesis depends on a variety of properties of the environment and the agent, including whether (1) the environment evolves only in response to the agent's actions or also independently, (2) the state of the environment is observable or partially hidden, (3) the sensors of the agent are powerful enough to perceive the state of the environment, and (4) the agent's actions have deterministic or stochastic effects on the state of the environment. Perhaps the simplest case of planning occurs when the environment is static (in that it changes only in response to the agent's actions) and observable, and the agent's actions have deterministic effects on the state of the environment. Plan synthesis under these conditions has come to be known as the classical planning problem.




The classical planning problem is thus specified (figure 2) by describing the initial state of the world, the desired goal state, and a set of deterministic actions. The objective is to find a sequence of these actions which, when executed from the initial state, leads the agent to the goal state.

Figure 2. Classical Planning Problem. (Given an initial state I, a goal state G, and actions Oi with preconditions and effects, find a sequence of actions [I] Oi Oj Ok Om [G] leading from I to G.)

Despite its apparent simplicity and limited scope, the classical planning problem is still important in understanding the structure of intelligent agency. Work on classical planning has historically also helped our understanding of planning under nonclassical assumptions. The problem itself is computationally hard—PSPACE-hard or worse (Erol, Nau, and Subrahmanian 1995)—and a significant amount of research has gone into efficient search-based formulations.

Modeling Actions and States

We now look at the way the classical planning problem is modeled. Let us use a simple example scenario—that of transporting two packets from the earth to the moon, using a single rocket. Figure 3 illustrates this problem.

Figure 3. Rocket Domain. (Initial state: At(A,E), At(B,E), At(R,E). Goal: At(A,M), At(B,M), ¬In(A), ¬In(B).)

States of the world are conventionally modeled in terms of a set of binary state variables (also referred to as conditions). The initial state is assumed to be specified completely; so, negated conditions (that is, state variables with false values) need not be shown. Goals involve achieving the specified (true-false) values for certain state variables.

Actions are modeled as state-transformation functions, with preconditions and effects. A widely used action syntax is Pednault's (1988) action description language (ADL), where preconditions and effects are first-order quantified formulas (with no disjunction in the effects formula because the actions are deterministic).

We have three actions in our rocket domain (figure 4): (1) Load, which causes a package to be in the rocket; (2) Unload, which gets it out; and (3) Fly, which takes the rocket and its contents to the moon. Notice the quantified and negated effects in the case of Fly. Its second effect says that every box that is in the rocket—before the Fly action is executed—will be at the moon after the action. It might be worth noting that the implication in the second effect of Fly is not a strict logical implication but, rather, a shorthand notation for saying that when In(x) holds for any x in the state in which Fly is executed, At(x,M) and ¬At(x,E) will be true in the state resulting after the execution.




Action Application

As mentioned earlier, actions are seen as state-transformation functions. In particular, an action can be executed in any state where its preconditions hold; on execution, the state is modified such that state variables named in the effects have the specified values, and the remaining variables retain their values. The state after the execution is undefined if the preconditions of the action do not hold in the current state. The left-hand side of figure 4 shows the result of executing the Fly() action in the initial state of the rocket problem. This process is also sometimes referred to as progressing a state through an action.

It is also useful to define the notion of regressing a state through an action. Regressing a state s through an action a gives the weakest conditions that must hold before a is executed such that all the conditions in s hold after the execution. A condition c regresses over an action a to c itself if a has no effect corresponding to c; it regresses to true if a has an effect c; it regresses to d if a has a conditional effect d ⇒ c; and it regresses to false if a has an effect ¬c (implying that there is no state of the world where a can be executed to give rise to c). Regression of a state over an action involves regressing the individual conditions over the action and adding the preconditions of the action to the combined result. The right-hand side of figure 4 illustrates the process of regressing the final (goal) state of the rocket problem through the action Fly().

Figure 4. Progression and Regression of World States through Actions. (Progressing the partial state {At(A,E), At(R,E), In(B), At(B,E)} through Fly() gives {At(A,E), At(R,M), In(B), At(B,M)}; regressing the partial state {At(A,E), At(B,M)} through Fly() gives {At(A,E), At(R,E), ¬In(A), In(B)}.)
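To make these operations concrete, the following sketch encodes the rocket domain in Python. It is illustrative, not the article's code: states are sets of true ground conditions, Fly()'s quantified effect is expanded over the two packages, and regression implements the per-condition rules above, routing a conditional effect d ⇒ c to its antecedent d (conditional deletes, which would regress to a negated antecedent, are ignored for brevity).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset              # conditions required to execute
    add: frozenset                  # conditions made true
    delete: frozenset               # conditions made false
    cond_effects: tuple = ()        # (antecedent, adds, deletes) triples

def progress(state, a):
    """State after executing a in state; None (undefined) if preconditions fail."""
    if not a.precond <= state:
        return None
    add, delete = set(a.add), set(a.delete)
    for d, c_add, c_del in a.cond_effects:
        if d in state:              # e.g. In(A) => At(A,M), not At(A,E)
            add |= c_add
            delete |= c_del
    return frozenset((state - delete) | add)

def regress(goal, a):
    """Weakest (positive) conditions before a such that goal holds after it."""
    result = set(a.precond)         # the preconditions of a are always needed
    for c in goal:
        if c in a.add:
            continue                # a gives c: regresses to true
        if c in a.delete:
            return None             # a deletes c: regresses to false
        for d, c_add, _ in a.cond_effects:
            if c in c_add:          # conditional effect d => c:
                result.add(d)       # c regresses to its antecedent d
                break
        else:
            result.add(c)           # a does not affect c: c persists
    return frozenset(result)

def load(x):                        # put package x in the rocket
    return Action(f"Load({x})", frozenset({f"At({x},E)", "At(R,E)"}),
                  frozenset({f"In({x})"}), frozenset())

def unload(x):                      # take package x out of the rocket
    return Action(f"Unload({x})", frozenset({f"In({x})"}),
                  frozenset(), frozenset({f"In({x})"}))

fly = Action("Fly()", frozenset({"At(R,E)"}), frozenset({"At(R,M)"}),
             frozenset({"At(R,E)"}),
             tuple((f"In({x})", frozenset({f"At({x},M)"}),
                    frozenset({f"At({x},E)"})) for x in "AB"))

init = frozenset({"At(A,E)", "At(B,E)", "At(R,E)"})
print(progress(init, fly))          # rocket reaches the moon, packages left behind
```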

Verifying a Solution

Having described how actions transform states, I can now provide a straightforward way of checking if a given action sequence is a solution to the planning problem under consideration. We start by simulating the application of the first action of the action sequence in the initial state of the problem. If the action applies successfully, the second action is applied in the resulting state, and so on. Finally, we check to see if the state resulting from the application of the last action of the action sequence is a goal state (that is, whether the state variables named in the goal specification of the problem occur in the state with the specified values). An action sequence fails to be a solution if either some action in the sequence cannot be executed in the state immediately preceding it or the final state is not a goal state. An alternate way of verifying if the action sequence is a solution is to start by regressing the goal state over the last action of the sequence, regressing the resulting state over the second-to-last action, and so on. The action sequence is a solution if all the conditions in the state resulting after regression over the first action of the sequence are present in the initial state, and none of the intermediate states are inconsistent (have the condition false in them).
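Both verification procedures can be written directly on top of the previous sketch; here the goal is split into conditions that must hold and negated conditions that must not (the backward check handles only the positive conditions, matching the simplified regress above):

```python
def verify_forward(plan, init, goal_pos, goal_neg):
    """Simulate the plan from init and test the goal in the final state."""
    state = init
    for a in plan:
        state = progress(state, a)
        if state is None:           # an action's preconditions failed
            return False
    return goal_pos <= state and not (goal_neg & state)

def verify_backward(plan, init, goal_pos):
    """Regress the goal through the plan, last action first."""
    state = goal_pos
    for a in reversed(plan):
        state = regress(state, a)
        if state is None:           # some condition regressed to false
            return False
    return state <= init            # remaining conditions must hold initially

plan = [load("A"), load("B"), fly, unload("A"), unload("B")]
goal_pos = frozenset({"At(A,M)", "At(B,M)"})
print(verify_forward(plan, init, goal_pos, frozenset({"In(A)", "In(B)"})),
      verify_backward(plan, init, goal_pos))          # True True
```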

Chronology of Classical Planning Approaches

Plan generation under classical assumptions has received widespread attention, and a large variety of planning algorithms have been developed (figure 5).




Initial approaches to the planning problem attempted to cast planning as a theorem-proving activity (Green 1969). The inefficiency of the first-order theorem provers in existence at that time, coupled with the difficulty of handling the "frame problem"1 in first-order logic, led to search-based approaches in which the STRIPS assumption—namely, the assumption that any condition not mentioned in the effects list of an action remains unchanged after the action—is hard wired. Perhaps the first of these search-based planners was STRIPS (Fikes and Nilsson 1971), which searched in the space of world states using means-ends analysis (see Making State-Space Refinements Goal Directed). Searching in the space of states was found to be inflexible in some cases, and a new breed of approaches formulated planning as a search in the space of partially constructed plans (Penberthy and Weld 1992; McAllester and Rosenblitt 1991; Chapman 1987; Tate 1975). A closely related formulation called hierarchical planning (Wilkins 1984; Tate 1977; Sacerdoti 1972) allowed a partial plan to contain abstract actions that can incrementally be reduced to concrete actions. More recently, encouraged by the availability of high-performance constraint-satisfaction algorithms, formulations of planning as a constraint-satisfaction problem have become popular (Joslin and Pollack 1996; Kautz and Selman 1996; Blum and Furst 1995).

Figure 5. A Chronology of Ideas in Classical Planning. (Planning as theorem proving: Green's planner, 1969. Planning as search, 1970-1990: in the space of states via progression, regression, and means-ends analysis (STRIPS, 1971; PRODIGY, 1987); in the space of plans via total order, partial order, protections, and the MTC (Interplan, 1975; Tweak, 1987; SNLP, 1991; UCPOP, 1992); in the space of task networks via reduction of nonprimitive tasks (NOAH, 1975; NONLIN, 1977; SIPE, 1985-; O-Plan, 1986-). Planning as (constraint) satisfaction: Graphplan, 1995; SATPLAN, 1996.)

One of my aims in this article is to put all these approaches in a logically coherent framework so that we can see the essential connections among them. I use the refinement-planning framework to effect such a unification.

Refinement Planning: Overview

Because a solution for a planning problem is ultimately a sequence of actions, plan synthesis in a general sense involves sorting our way through the set of all action sequences until we end up with a sequence that is a solution. This is the essential idea behind refinement planning: start with the set of all action sequences and gradually narrow it down to reach the set of all solutions. The sets of action sequences are represented and manipulated in terms of partial plans, which can be seen as collections of constraints. The action sequences denoted by a partial plan, that is, those that are consistent with its constraints, are called its candidates. For technical reasons that become clear later, we find it convenient to think in terms of sets of partial plans (instead of single partial plans). A set of partial plans is called a plan set, and its constituent partial plans are referred to as its components. The candidate set of a plan set is defined as the union of the candidate sets of its components.



Plan synthesis in refinement planning involves a "split and prune" search (Pearl 1980). The pruning is carried out by applying refinement operations to a given plan set; the aim is to incrementally get rid of nonsolutions from the candidate set. The splitting part involves pushing the component partial plans of a plan set into different branches of the search tree; the motivation is to reduce the cost of applying refinements and the termination check to the plan set and to support more focused pruning. The termination test involves checking if the set of minimal-length candidates of the current plan set contains a solution.

To make these ideas precise, we now look at the syntax and semantics of partial plans and refinement operations.

Partial Plan Representation: Syntax

A partial plan can be seen as any set of constraints that together delineate which action sequences belong to the plan's candidate set and which do not. One representation2 that is sufficient for our purpose models partial plans as a set of steps, ordering constraints between the steps, and auxiliary constraints.3

Each plan step is identified with a unique step number and corresponds to an action (allowing two different steps to correspond to the same action, thus facilitating plans containing more than one instance of a given action). There can be two types of ordering constraint between a pair of steps: (1) precedence and (2) contiguity. A precedence constraint requires one step to precede the second step (without precluding other steps from coming between the two), and a contiguity constraint requires that the two steps come immediately next to each other.

Auxiliary constraints involve statements about the truth of certain conditions over certain time intervals. We are interested in two types of auxiliary constraint: (1) interval-preservation constraints (IPCs), which require nonviolation of a condition over an interval (no action having an effect ¬p is allowed in an interval where the condition p is to be preserved), and (2) point-truth constraints (PTCs), which require the truth of a condition at a particular time point.

A linearization of a partial plan is a permutation of its steps that is consistent with all its ordering constraints (in other words, a topological sort). A safe linearization is a linearization of the plan that is also consistent with the auxiliary constraints.

Figure 6 illustrates a plan from our rocket domain. Step 0 corresponds to the beginning of the plan, and step ∞ corresponds to its end. By convention, the effects of step 0 correspond to the conditions that hold in the initial state of the plan, and the preconditions of step ∞ correspond to the conditions that must hold in the goal state. There are four steps other than 0 and ∞: steps 1, 2, 3, and 4 correspond respectively to the actions Load(A), Fly(), Load(B), and Unload(A). The steps 0 and 1 are contiguous, as are the steps 4 and ∞ (illustrated in the figure by putting them next to each other); step 2 precedes 4; and the condition At(R,E) must be preserved between 0 and 3 (illustrated in figure 6 by a labeled arc between the steps). Finally, the condition In(A) must hold in the state preceding the execution of step 2. The sequences 0-1-2-3-4-∞ and 0-1-3-2-4-∞ are linearizations of this plan. Of these, the second one is a safe linearization, but the first one is not (because step 2 would violate the interval-preservation constraint on At(R,E) between 0 and 3).

Figure 6. An Example (Partial) Plan in the Rocket Domain. (Steps 0, 1: Load(A), 2: Fly(), 3: Load(B), 4: Unload(A), and ∞; contiguity constraints 0-1 and 4-∞; precedence constraint 2 before 4; IPC preserving At(R,E) between 0 and 3; PTC In(A)@2.)
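The plan of figure 6 can be encoded directly in this representation. The following sketch (again with invented names, building on the Action definitions above) enumerates linearizations by brute force and checks safety against the IPCs:

```python
from itertools import permutations

INF = float("inf")                  # stands in for step ∞

def deletes(a, cond):
    return cond in a.delete or any(cond in c_del
                                   for _, _, c_del in a.cond_effects)

class PartialPlan:
    def __init__(self, steps, precedence, contiguity, ipcs, ptcs):
        self.steps = steps          # step id -> Action (None for 0 and ∞)
        self.precedence = precedence   # {(before, after), ...}
        self.contiguity = contiguity   # {(left, right), ...}, immediately adjacent
        self.ipcs = ipcs            # {(t1, condition, t2), ...}
        self.ptcs = ptcs            # {(condition, step), ...}

    def linearizations(self):
        """Brute-force enumeration; fine for toy plans."""
        for perm in permutations(self.steps):
            pos = {s: i for i, s in enumerate(perm)}
            if all(pos[a] < pos[b] for a, b in self.precedence) and \
               all(pos[b] == pos[a] + 1 for a, b in self.contiguity):
                yield perm

    def is_safe(self, lin):
        """Safe: no step strictly inside an IPC's interval deletes its condition."""
        pos = {s: i for i, s in enumerate(lin)}
        return not any(pos[t1] < pos[s] < pos[t2] and deletes(a, cond)
                       for t1, cond, t2 in self.ipcs
                       for s, a in self.steps.items() if a is not None)

# Figure 6: contiguity 0-1 and 4-∞, precedence 2 before 4, IPC <0, At(R,E), 3>,
# PTC In(A)@2, plus the implicit constraints that steps follow 0 and precede ∞.
fig6 = PartialPlan(
    steps={0: None, 1: load("A"), 2: fly, 3: load("B"), 4: unload("A"), INF: None},
    precedence={(0, 2), (0, 3), (2, 4), (2, INF), (3, INF)},
    contiguity={(0, 1), (4, INF)},
    ipcs={(0, "At(R,E)", 3)},
    ptcs={("In(A)", 2)})

print([lin for lin in fig6.linearizations() if fig6.is_safe(lin)])
# [(0, 1, 3, 2, 4, inf)] -- 0-1-2-3-4-∞ is a linearization but is not safe
```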




Partial Plan Representation: Semantics

The semantics of partial plans are given in terms of candidate sets. A candidate can be seen as a model of the partial plan. An action sequence belongs to the candidate set of a partial plan if it contains the actions corresponding to all the steps of the partial plan in an order consistent with the ordering constraints on the plan, and it also satisfies all auxiliary constraints. For the plan shown in figure 6, the action sequences shown on the left in figure 7 are candidates, but those on the right are noncandidates.

Notice that the candidates might contain more actions than are present in the partial plan; thus, a plan's candidate set can potentially be infinite. We define the notion of minimal candidates to let us restrict our attention to a finite subset of the possibly infinite candidate set. Specifically, minimal candidates are candidates that only contain the actions listed in the partial plan (thus, their length is equal to the number of steps in the plan other than 0 and ∞). The top candidate on the left of figure 7 is a minimal candidate, but the bottom one is not. There is a one-to-one correspondence between the minimal candidates and the safe linearizations of a plan. For example, the minimal candidate on the top left of figure 7 corresponds to the safe linearization 0-1-3-2-4-∞ (as can be verified by translating the step names in the latter to corresponding actions).

The sequences on the right of figure 7 are noncandidates because both of them fail to satisfy the auxiliary constraints. Specifically, the first one corresponds to the unsafe linearization 0-1-2-3-4-∞. The second noncandidate starts with the minimal candidate [Load(A), Load(B), Fly(), Unload(A)] and adds another instance of the action Fly() that does not respect the IPC on At(R,E).

Connecting Syntax and Semantics of Partial Plans

Figure 8 summarizes the connection between the syntax and the semantics of a partial plan. Each partial plan has, at most, an exponential number of linearizations, some of which are safe with respect to the auxiliary constraints. Each safe linearization corresponds to a minimal candidate of the plan. Thus, there are, at most, an exponential number of minimal candidates. A potentially infinite number of additional candidates can be derived from each minimal candidate by padding it with new actions without violating auxiliary constraints. Minimal candidates can thus be seen as the generators of the candidate set of the plan. The one-to-one correspondence between safe linearizations and minimal candidates implies that a plan with no safe linearizations has an empty candidate set.

Figure 8. Relating the Syntax and the Semantics of a Partial Plan. (Syntax: a partial plan has linearizations 1 through n, of which the safe linearizations 1 through m correspond one-to-one to minimal candidates; semantics: each minimal candidate generates a family of derived candidates. Refinements reduce candidate-set size and increase the length of the minimal candidates.)

Minimal candidates and solution extraction: Minimal candidates have another important role from the point of view of refinement planning. We see in the following discussion that some refinement strategies add new step constraints to a partial plan. Thus, they simultaneously shrink the candidate set of the plan and increase the length of its minimal candidates. This property provides an incremental way of exploring the (potentially infinite) candidate set of a partial plan for solutions: Examine the minimal candidates of the plan after each refinement to see if any of them correspond to solutions. Checking if a minimal candidate is a solution can be done in linear time by simulating the execution of the minimal candidate and checking to see if the final state corresponds to a goal state (see Verifying a Solution).
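In code, the correspondence is immediate: translating each safe linearization of the figure 6 encoding into action names yields exactly the plan's minimal candidates (a sketch reusing the fig6 object above):

```python
def minimal_candidates(plan):
    """One minimal candidate per safe linearization (dummy steps dropped)."""
    for lin in plan.linearizations():
        if plan.is_safe(lin):
            yield [plan.steps[s].name for s in lin if plan.steps[s] is not None]

print(list(minimal_candidates(fig6)))
# [['Load(A)', 'Load(B)', 'Fly()', 'Unload(A)']] -- from 0-1-3-2-4-∞
```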


Figure 7. Candidate Set of a Plan. (Candidates (∈ «P»): [Load(A), Load(B), Fly(), Unload(A)], a minimal candidate corresponding to the safe linearization 0-1-3-2-4-∞; and [Load(A), Load(B), Fly(), Unload(B), Unload(A)], a derived candidate. Noncandidates (∉ «P»): [Load(A), Fly(), Load(B), Unload(A)], corresponding to the unsafe linearization 0-1-2-3-4-∞; and [Load(A), Fly(), Load(B), Fly(), Unload(A)].)


Refinement Strategies

Refinements are best seen as canned procedures that compute the consequences of the metatheory of planning, and the domain theory (in the form of actions), in the specific context of the current partial plan. A refinement strategy R maps a plan set P to another plan set P′ such that the candidate set of P′ is a subset of the candidate set of P. R is said to be complete if P′ contains all the solutions of P. It is said to be progressive if the candidate set of P′ is a strict subset of the candidate set of P. It is said to be strongly progressive if the length of the minimal candidates increases after the refinement. It is said to be systematic if no action sequence falls in the candidate set of more than one component of P′. We can also define the progress factor of a refinement strategy as the ratio between the size of the candidate set of the refined plan set and the size of the candidate set of the original plan set.

Completeness ensures that we don’t losesolutions with the application of refinements.Progressiveness ensures that refinement haspruning power. Systematicity ensures that wenever consider the same candidate more thanonce if we explore the components of the

plan set separately (see Introducing Splittinginto Refinement Planning).
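For toy settings where candidate sets are finite and can be enumerated explicitly, these definitions translate into executable checks; the plan-set encoding below (a list of components, each a set of candidate sequences) is illustrative scaffolding only:

```python
def candidates(plan_set):
    """Candidate set of a plan set: union over its components."""
    return set().union(*plan_set) if plan_set else set()

def is_complete(R, P, solutions):      # R loses no solution of P
    return (candidates(P) & solutions) <= candidates(R(P))

def is_progressive(R, P):              # the candidate set strictly shrinks
    return candidates(R(P)) < candidates(P)

def is_systematic(R, P):               # no candidate falls in two components
    comps = [set(c) for c in R(P)]
    return all(a.isdisjoint(b)
               for i, a in enumerate(comps) for b in comps[i + 1:])

def progress_factor(R, P):             # |candidates after| / |candidates before|
    return len(candidates(R(P))) / len(candidates(P))
```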

Let us illustrate these notions with an example. Figure 9 shows an example refinement for our rocket problem. It takes the null plan set, corresponding to all action sequences, and maps it to a plan set containing three components. In this case, the refinement is complete because no solution to the rocket problem can start with any other action for the given initial state; progressive because it eliminated action sequences not beginning with Load(A), Load(B), or Fly() from consideration; and systematic because no action sequence will belong to the candidate set of more than one component (the candidates of the three components will differ in the first action).

Figure 9. An Example Refinement Strategy (Forward State Space). (The null plan 0-∞ is refined into three components in which 1: Load(A), 1: Load(B), or 1: Fly() is made contiguous to 0.)

In this case, given the theory of planning, which says that solutions must have actions that are executable in the states preceding them, and the current plan-set constraint that the state of the world before the first step is the initial state, where the rocket and the packages are on earth, the forward state-space refinement infers that the only actions that can come as the first step in the solution are Load(A), Load(B), and Fly().



Planning Using Refinement Strategies and Solution Extraction

Now, I present the general refinement-planning template. It has three main steps: If the current plan set has an extractable solution—which is checked by inspecting its minimal candidates to see if any of them is a solution—we terminate. If not, we select a refinement strategy R and apply it to the current plan set to get a new plan set.

Refine (P: Plan set)

0*. If «P» is empty, Fail.

1. If a minimal candidate of P is a solution, return it. End.

2. Select a refinement strategy R. Apply R to P to get a new plan set P′.

3. Call Refine(P′).

As long as the selected refinement strategy is complete, we never lose a solution. As long as the refinements are progressive, for solvable problems, we eventually reach a plan set one of whose minimal candidates is a solution.

The solution-extraction process involves checking the minimal candidates (corresponding to safe linearizations) of the plan to see if any one of them is a solution. This process can be cast as a model-finding or satisfaction process (Kautz and Selman 1996). Recall that a candidate is a solution if each of the actions in the sequence has its preconditions satisfied in the state preceding the action.
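Rendered as an executable skeleton, with the assumed method names is_empty and extract_solution standing in for the candidate-set test and the minimal-candidate inspection:

```python
def refine(plan_set, select_refinement):
    """The general refinement-planning template as a loop."""
    while True:
        if plan_set.is_empty():              # 0*: «P» is empty -- fail
            return None
        sol = plan_set.extract_solution()    # 1: is a minimal candidate a solution?
        if sol is not None:
            return sol
        R = select_refinement(plan_set)      # 2: select a refinement strategy R
        plan_set = R(plan_set)               #    and apply it; 3: recurse
```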

As we see in Scale Up through Disjunctive Representations and Constraint-Satisfaction Techniques, some recent planners such as GRAPHPLAN (Blum and Furst 1995) and SATPLAN (Kautz and Selman 1996) can be seen as instantiations of this general refinement-planning template. However, most earlier planners use a specialization of this template that I discuss next.

Introducing Splitting into Refinement Planning

Seen as an instantiation of split-and-prune search, the previous algorithm does not do any splitting. It is possible to add splitting to the refinement process in a straightforward way—to separate the individual components of a plan set and handle them in different search branches. The primary motivation for this approach is to reduce the solution-extraction cost; after all, checking for a solution in a single partial plan is cheaper than searching for a solution in a plan set. Another possible advantage of handling individual components of a plan set separately is that this technique, coupled with a depth-first search, might make the process of plan generation easier for humans to understand.

Notice that this search process will have backtracking even for complete refinements. Specifically, even if we know that the candidate set of a plan set P contains all the solutions to the problem, we do not know how they are distributed among the components of P. We see later (see Trade-Offs in Refinement Planning) that the likelihood of the backtracking depends on the number and size of the individual components of P, which can be related to the nature of constraints added by the refinement strategies.

The following algorithm template introduces splitting into refinement planning:

Refine (P: Plan)

0*. If «P» is empty, Fail.

1. If SOL(P) returns a solution, terminate with success.

2. Select a refinement strategy R. Apply R to P to get a new plan set P′.

3. Nondeterministically select a component P′i of P′. Call Refine(P′i).



It is worth noting the two new steps that have made their way into the template. First, the components of the plan set resulting after refinement are pushed into the search space and are handled separately (step 3). Thus, we confine the application of refinement strategies to single plans. Second, once we work on individual plans, we can consider solution-extraction functions that are cheaper than looking at all the minimal candidates (as we see in the next section).

This simple algorithm template forms the main idea behind all existing refinement planners. Various existing planners differ in terms of the specific refinement strategies they use in step 2. These refinement strategies fall broadly into four types: (1) state space, (2) plan space, (3) task reduction, and (4) tractability. We now look at these four refinement families in turn.

Existing Refinement Strategies

In this section, we look at the details of the four different families of refinement strategies. Before we start, it is useful to introduce some additional terminology to describe the structure of partial plans. Figure 10 shows an example plan with its important structural aspects marked. The prefix of the plan, that is, the maximal set of steps constrained to be contiguous to step 0 (more specifically, steps s1, s2, ..., sn such that 0*s1, s1*s2, ..., sn–1*sn), is called the head of the plan. The last step of the head is called the head step. Because we know how to compute the state of the world that results when an action is executed in a given state, we can easily compute the state of the world after all the steps in the head are executed. We call this state the head state. Similarly, the suffix of the plan is called the tail of the plan, and the first step of the suffix is called the tail step. The set of conditions obtained by successively regressing the goal state over the actions of the tail is called the tail state. The tail state provides the weakest conditions under which the actions in the tail of the plan can be executed to result in a goal state.

Figure 10. Nomenclature for Partial Plans. (The head 0, 1: Load(A) ends in the head step, with head state {In(A), At(R,E), At(B,E), At(A,E)}; the tail 3: Unload(A), ∞ begins with the tail step, with tail state {At(A,M), At(B,M), In(A), ¬In(B)}; the middle steps 4: Fly(), 5: Load(B), and 6: Unload(B) include the head fringe and the tail fringe.)



The steps in the plan that neither belong to its head nor to its tail are called the middle steps. Among the middle steps, the set of steps that can come immediately next to the head step in some linearization of the plan is called its head fringe. Similarly, the tail fringe consists of the set of middle steps that can come immediately next to the tail step in some linearization. With this terminology, we are ready to describe the individual refinement strategies.
Forward State-Space Refinement

Forward state-space refinement involves growing the prefix of a partial plan by introducing actions from the head fringe or the action library into the plan head. Actions are introduced only if their preconditions hold in the current head state:

Refine-forward-state-space (P)

1. Operator selection: Nondeterministically select a step t either from the operator library or from the head fringe such that the preconditions of t are applicable in the head state.

2. Operator application: Add a contiguity constraint between the current head step and the new step t, which makes t the new head step and updates the head state.

Figure 11 shows an example of this refinement. On the top is a partial plan whose head contains the single step 0, and the head state is the same as the initial state. The head fringe contains the single action Unload(A), which is not applicable in the head state. The action library contains three actions that are applicable in the head state. Accordingly, forward state-space refinement produces a plan set with three components.

Figure 11. Example of Forward State-Space Refinement. (A plan 0-∞ containing the step 1: Unload(A), with head state At(A,E), At(B,E), At(R,E), is refined into three components in which 2: Load(A), 2: Load(B), or 2: Fly() is made contiguous to 0.)

Forward state-space refinement is progressive because it eliminates all action sequences with nonexecutable prefixes. It is also complete because any solution must have an executable prefix. It is systematic because each of its components differs in the sequence of steps in the plan head, and thus, their candidates will have different prefixes. Finally, if we are using only forward state-space refinements, we can simplify the solution-extraction function considerably—we can terminate as soon as the head state of a plan-set component contains its tail state. At that point, the only minimal candidate of the plan corresponds to a solution.

The version of the refinement we considered previously extends the head by one action at a time, possibly leading to a larger-than-required number of components in the resulting plan set. Consider the actions Load(A) and Load(B), each of which is applicable in the initial state, and both of which can be done simultaneously because they do not interact in any way. From the point of view of search-space size, it would be cheaper in such cases to add both actions to the plan prefix, thereby reducing the number of components in the resulting plan set. Simultaneous addition of multiple actions to the head can be accommodated by generalizing the contiguity constraints so that they apply to sets of steps. In particular, we could combine the top two components of the plan set in figure 11 into a single plan component, with both Load(A) and Load(B) made contiguous to step 0. More broadly, the generalized forward state-space refinement strategy should consider maximal sets of noninteracting actions that are all applicable in the current initial state together (Drummond 1989). Here, we consider two actions to interact if the preconditions of one are deleted by the effects of the other.
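For concreteness, here is a minimal breadth-first rendering of forward state-space refinement with splitting, reusing the rocket actions from the earlier sketches; the depth bound and all names are assumptions:

```python
from collections import deque

def forward_planner(init, goal_pos, goal_neg, actions, max_len=6):
    queue = deque([((), init)])         # a node: (head prefix, head state)
    while queue:
        prefix, state = queue.popleft()
        if goal_pos <= state and not (goal_neg & state):
            return [a.name for a in prefix]   # head state contains the goal
        if len(prefix) < max_len:
            for a in actions:           # operator selection ...
                if a.precond <= state:  # ... among actions applicable in head state
                    queue.append((prefix + (a,), progress(state, a)))
    return None

acts = [load("A"), load("B"), fly, unload("A"), unload("B")]
print(forward_planner(init, frozenset({"At(A,M)", "At(B,M)"}),
                      frozenset({"In(A)", "In(B)"}), acts))
# ['Load(A)', 'Load(B)', 'Fly()', 'Unload(A)', 'Unload(B)']
```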




Making State-Space Refinements Goal Directed

Forward state-space refinement as stated considers all actions executable in the state after the prefix. In real domains, there might be a large number of applicable actions, very few of which are relevant to the top-level goals of the problem. We can make the state-space refinements goal directed in one of two ways: (1) using means-ends analysis to focus forward state-space refinement on only those actions that are likely to be relevant to the top-level goals or (2) using backward state-space refinement, which operates by growing the tail of the partial plan. I elaborate on these ideas in the next two subsections.

Means-Ends Analysis First, we can force forward state-space refinement to consider only those actions that are going to be relevant to the top-level goals. The relevant actions can be recognized by examining a subgoaling tree of the problem, as shown in figure 12. Here, the top-level goal can potentially be achieved by the effects of the actions Unload(A) and Fly(). The preconditions of these actions are, in turn, achieved by the action Load(A). Because Load(A) is both relevant (indirectly) to the top-level goals and applicable in the head state, the forward state-space refinement can consider this action. In contrast, an action such as Load(C) will never be considered, despite its applicability, because it does not directly or indirectly support any top-level goals.

Figure 12. Using Means-Ends Analysis to Focus Forward State-Space Refinement. (A subgoaling tree: the top-level goals At(A,M) and ¬In(A) are supported by the effects of Fly() and Unload(A); Fly()'s subgoal In(A) is supported by Load(A), which requires At(R,E) and At(A,E).)

This way of identifying relevant actions is known as means-ends analysis and was used by one of the first planners, STRIPS (Fikes and Nilsson 1971). One issue in using means-ends analysis is whether the relevant actions are identified afresh at each refinement cycle or whether the refinement and means-ends analysis are interleaved. STRIPS interleaved the computation of relevant actions with forward state-space refinement, suspending the means-ends analysis as soon as an applicable action was identified. The action is then made contiguous to the current head step, thus changing the head state. The means-ends analysis is resumed with respect to the new state. In the context of the example shown in figure 12, having decided that the Fly() and Unload(A) actions are relevant to the top-level goals, STRIPS introduces the Fly() action into the plan head before continuing to consider actions such as Load(A) that are recursively relevant. Such interleaving can sometimes lead to premature operator application, which would have to be backtracked over. In fact, many of the famous incompleteness results related to STRIPS (Nilsson 1980) can be traced to this particular interleaving. More recently, McDermott (1996) showed that the efficiency of means-ends-analysis planning can be improved considerably by (1) deferring operator application until all relevant actions are computed and (2) repeating the computation of relevant actions afresh after each refinement.
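The relevance computation itself can be sketched as backward chaining over action effects (positive conditions only; names are illustrative):

```python
def relevant_actions(goals, actions):
    """Backward chain from the goals through effects, subgoaling on
    preconditions and on antecedents of conditional effects."""
    relevant, frontier, changed = set(), set(goals), True
    while changed:
        changed = False
        for a in actions:
            if a.name in relevant:
                continue
            antecedents = {d for d, c_add, _ in a.cond_effects if c_add & frontier}
            if a.add & frontier or antecedents:
                relevant.add(a.name)    # a gives some needed condition
                frontier |= a.precond | antecedents
                changed = True
    return relevant

print(relevant_actions({"At(A,M)"}, acts))
# {'Fly()', 'Load(A)'} -- Load(B), like Load(C) above, supports no goal here
```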





Backward State-Space Refinement

The second way of making state-space refinements goal directed is to grow the tail of the partial plan by applying actions in the backward direction to the tail state. All actions in the tail fringe, as well as actions from the plan library, are considered for application. An action is applicable to the tail state if it does not delete any conditions in the tail state (if it does, the regressed state will be inconsistent) and adds at least one condition in the tail state:

Refine-backward-state-space (P)

1. Operator selection: Nondeterministically select a step t either from the operator library or from the tail fringe such that at least one of its effects is relevant to the tail state, and none of its effects negates any conditions in the tail state.

2. Operator application: Add a contiguity constraint between the new step t and the current tail step, which makes t the new tail step and updates the tail state.

Figure 13 shows an example of backward state-space refinement. Here, the tail contains only the last step of the plan, and the tail state is the same as the goal state (shown in an oval on the right). Two library actions, Unload(A) and Unload(B), are useful in that they can give some of the conditions of the tail state without violating any others. The Fly() action in the tail fringe is not applicable because it can violate the ¬In(x) conditions in the tail state.

Figure 13. Backward State-Space Refinement. (The plan 0, 1: Fly(), ∞ with tail state At(A,M), At(B,M), ¬In(A), ¬In(B) is refined into two components in which 2: Unload(A) or 2: Unload(B) is made contiguous to ∞.)
Compared to the forward state-space refinement, the backward state-space refinement generates plan sets with fewer components because it concentrates only on those actions that are relevant to current goals. This focus on relevant actions, in turn, leads to a lower branching factor for planners that consider the plan-set components in different search branches. However, because the initial state of a planning problem is completely specified, and the goal state is only partially specified, the head state computed by the forward state-space refinement is a complete state, but the tail state computed by the backward state-space refinement is only a partial state description. Bacchus and Kabanza (1995) argue that effective search-control strategies might require the ability to evaluate the truth of complex formulas about the state of the plan and view this as an advantage in favor of forward state-space refinements because the truth evaluation can be done in terms of model checking rather than theorem proving.
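Mirroring the forward sketch, a minimal backward rendering over positive goal conditions (reusing regress and the actions defined earlier):

```python
from collections import deque

def backward_planner(init, goal_pos, actions, max_len=6):
    queue = deque([((), goal_pos)])     # a node: (tail suffix, tail state)
    while queue:
        suffix, state = queue.popleft()
        if state <= init:               # the tail state holds in the initial state
            return [a.name for a in suffix]
        if len(suffix) < max_len:
            for a in actions:           # operator selection: a must give some
                useful = (a.add & state) or any(   # tail-state condition ...
                    c_add & state for _, c_add, _ in a.cond_effects)
                if useful and not (a.delete & state):  # ... and violate none
                    queue.append(((a,) + suffix, regress(state, a)))
    return None

print(backward_planner(init, frozenset({"At(A,M)", "At(B,M)"}), acts))
# ['Load(B)', 'Load(A)', 'Fly()'] -- far fewer branches than the forward search
```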

Position, Relevance, and Plan-Space Refinements

The state-space refinements have to decide both the position and the relevance of a new action to the overall goals. Oftentimes, we might know that a particular action is relevant but not know its exact position in the eventual solution. For example, we know that a Fly action is likely to be present in the solution for the rocket problem but do not know exactly where in the plan it will occur. Plan-space refinement is motivated by the idea that in such cases, it helps to introduce an action into the plan without constraining its absolute position.4 Of course, the disadvantage of not fixing the position is that we will not have state information, which makes it harder to predict the states of the world during the execution based on the current partial plan.

The difference between state-space and plan-space refinements has traditionally been understood in terms of least commitment, which, in turn, is related to candidate-set size. Plans with precedence relations have larger candidate sets than those with contiguity constraints. For example, it is easy to see that although all three plans shown in figure 14 contain the single Fly action, the solution sequence [Load(A), Load(B), Fly(), Unload(A), Unload(B)] belongs only to the candidate set of the plan with precedence constraints.

Figure 14. State-Space Refinements Attempt to Guess Both the Position and the Relevance of an Action to a Given Planning Problem. Plan-space refinements can consider relevance without committing to position. (Three one-step plans contain 1: Fly(): Pf makes it contiguous to step 0, Pb makes it contiguous to step ∞, and Pp merely orders it between 0 and ∞.)





Because each search branch corresponds to a component of the plan set produced by the refinement, planners using state-space refinements are thus more likely to backtrack from a search branch. (It is, of course, worth noting that the backtracking itself is an artifact of splitting plan-set components into the search space. Because all refinements are complete, backtracking would never be required had we worked with plan sets without splitting.)

This brings us to the specifics of plan-space refinement. As summarized here, the plan-space refinement starts by picking any precondition of any step in the plan and introducing constraints to ensure that the precondition is provided by some step (establishment) and is preserved by the intervening steps (declobbering):

Refine-Plan-Space (P)

1. Goal selection: Select a precondition <C, s> of P.

2. Goal establishment: Nondeterministically select a new or existing step t.

Establishment: Force t to come before s and give C.

Arbitration: Force every step between t and s to preserve C.

3. Bookkeeping (optional):

Add IPC <t, C, s> to protect the establishment.

Add IPC <t, ¬C, s> to protect the contributor.

An optional bookkeeping step (also called the protection step) imposes interval-preservation constraints to ensure that the established precondition is protected during future refinements. Plan-space refinement can have several instantiations depending on whether bookkeeping strategies are used and how the preconditions are selected for establishment in step 1.

Figure 15 shows an example of plan-space refinement. In this example, we pick the precondition At(A,M) of the last step. We add the new step 2: Fly() to support this condition. To force Fly() to give At(A,M), we add the condition In(A) as a precondition to it; this latter condition is called a causation precondition. At this point, we need to make sure that any step possibly intervening between the step 2: Fly() and the step ∞ preserves At(A,M). In this example, only Unload(A) intervenes, and it does preserve the condition; so, we are done. The optional bookkeeping step involves adding interval-preservation constraints to preserve this establishment during subsequent refinement operations (when new actions might come between Fly() and the last step). In the example, the bookkeeping step involves adding either one or both of the interval-preservation constraints <2, At(A,M), ∞> and <2, ¬At(A,M), ∞>. Informally, the first one ensures that no action deleting At(A,M) will be allowed between 2 and ∞; the second one says that no action adding At(A,M) will be allowed between 2 and ∞. If we add both these constraints, we can show that the refinement is systematic (McAllester and Rosenblitt 1991). As an aside, the fact that we are able to ensure systematicity of the refinement without fixing the positions of any of the steps involved is technically quite interesting.

Figure 15. Example of Plan-Space Refinement. (The precondition At(A,M)@∞ of the plan 0, 1: Unload(A), ∞ is established by a new step 2: Fly() with causation precondition In(A)@2, using the conditional effect ∀x In(x) ⇒ At(x,M).)
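A simplified executable rendering of this refinement, reusing the PartialPlan encoding from earlier: establishment from step 0 itself, confrontation, and consistency (cycle) checking are all omitted, and the names are assumptions:

```python
import copy

def gives(act, cond):
    """True if act adds cond unconditionally, the antecedent d if a
    conditional effect d => cond gives it, None otherwise."""
    if cond in act.add:
        return True
    for d, c_add, _ in act.cond_effects:
        if cond in c_add:
            return d
    return None

def refine_plan_space(plan, cond, s, library):
    """Establish precondition <cond, s>, add bookkeeping, declobber."""
    options = [(t, None) for t, a in plan.steps.items()
               if a is not None and gives(a, cond)]
    options += [(None, a) for a in library if gives(a, cond)]
    for t, new_act in options:
        base = copy.deepcopy(plan)
        if new_act is not None:         # introduce a new step for the action
            t = max(k for k in base.steps if k != INF) + 1
            base.steps[t] = new_act
            base.precedence |= {(0, t), (t, INF)}
        base.precedence.add((t, s))     # establishment: t before s, giving cond
        cause = gives(base.steps[t], cond)
        if cause is not True:           # established via a conditional effect:
            base.ptcs.add((cause, t))   # record the causation precondition
        base.ipcs.add((t, cond, s))     # bookkeeping: IPC <t, cond, s>
        yield from declobber(base, cond, t, s)

def declobber(plan, cond, t, s):
    """Order every step that deletes cond out of the interval [t, s]."""
    threats = [u for u, a in plan.steps.items()
               if a is not None and u not in (t, s) and deletes(a, cond)]
    plans = [plan]
    for u in threats:
        plans = [ordered(p, u, t) for p in plans] + \
                [ordered(p, s, u) for p in plans]
    yield from plans

def ordered(plan, a, b):
    q = copy.deepcopy(plan)
    q.precedence.add((a, b))
    return q

# The figure 15 scenario: support At(A,M) at step ∞ of the plan 0, 1: Unload(A), ∞.
fig15 = PartialPlan(steps={0: None, 1: unload("A"), INF: None},
                    precedence={(0, 1), (1, INF)}, contiguity=set(),
                    ipcs=set(), ptcs=set())
refined = next(refine_plan_space(fig15, "At(A,M)", INF, [fly]))
print(refined.ptcs, refined.ipcs)
# {('In(A)', 2)} {(2, 'At(A,M)', inf)} -- the causation precondition and the IPC
```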

Figure 16 is another example, where we need to do both establishment and declobbering to support a precondition. Specifically, we consider the precondition At(A,E) of step ∞ and establish it using the effects of the existing step 0. No causation preconditions are required because 0 gives At(A,E) directly. However, the step 1: Fly() can delete the condition At(A,E), and it is coming in between 0 and ∞. To shore up the establishment, we must either order step 1 to be outside the interval [0, ∞] (which is impossible in this case) or force step 1: Fly() to preserve At(A,E). The latter can be done by adding the preservation precondition ¬In(A) to step 1 (because if A is not in the rocket, then A's position will not change when the rocket is flown). These three ways of shoring up the establishment are called (1) promotion, (2) demotion, and (3) confrontation, respectively.

Figure 16. Plan-Space Refinement Example Showing Both Establishment and Declobbering Steps. (The precondition At(A,E)@∞ is established from step 0; the threatening step 1: Fly() is handled by promotion, demotion, or confrontation, the last adding the preservation precondition ¬In(A)@1.)




Hierarchical (HTN) Refinement

The refinements that we have seen till now treat all action sequences that reach the goal state as equivalent. In many domains, the users might have significant preferences among the solutions. For example, when I use my planner to make travel plans to go from Phoenix to Portland, I might not want a plan that involves bus rides. The question is how do I communicate such biases to the planner so that it will not waste any time progressing toward unwanted solutions? Although removing the bus-ride action from the planner's library of actions is a possible solution, it might be too drastic. I might want to allow bus travel for shorter itineraries, for example.

One natural way is to introduce nonprimitive actions and restrict their reduction to primitive actions through user-supplied reduction schemas. Consider the example in figure 17. Here, the nonprimitive action Ship(o1) has a reduction schema that translates it to a plan fragment containing three actions. Typically, there can be multiple possible legal reductions for a nonprimitive action. The reduction schemas restrict the planner's access to the primitive actions and, thus, stop progress toward undesirable solutions (Kambhampati and Srivastava 1996). A solution is considered desirable only if it can be parsed by the reduction schemas. Completeness is ensured only with respect to these desirable solutions (rather than all legal solutions).

Figure 17. Using Nonprimitive Tasks, Which Are Defined in Terms of Reductions to Primitive Tasks. (The nonprimitive action Ship(o1), with precondition At(o1,E), At(R,E) and effect At(o1,M), reduces to the plan fragment Load(o1), Fly(), Unload(o1).)

For this method to work, we need the domain writer to provide us reduction schemas over and above domain actions. This requirement can be steep because the reduction schemas typically contain valuable control information that is not easily reconstructed from limited planning experience. However, one hope is that in domains where humans routinely build plans, such reduction knowledge can easily be elicited. Of course, acquiring the knowledge and verifying its correctness can still be nontrivial (Chien 1996).
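Reduction schemas can be represented directly as data; the following toy sketch follows figure 17's Ship example (names assumed):

```python
schemas = {"Ship(A)": [[load("A"), fly, unload("A")]]}   # one legal reduction

def reduce_nonprimitive(seq, schemas):
    """Expand the first nonprimitive action (a bare name) in seq into one
    component per legal reduction; fully primitive plans pass through."""
    for i, a in enumerate(seq):
        if isinstance(a, str):          # a nonprimitive task name
            for fragment in schemas[a]:
                yield seq[:i] + fragment + seq[i + 1:]
            return
    yield seq

print([[a.name for a in p] for p in reduce_nonprimitive(["Ship(A)"], schemas)])
# [['Load(A)', 'Fly()', 'Unload(A)']]
```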

Tractability Refinements

All the refinements we have looked at until now are progressive in that they narrow the candidate set of a plan to which they are applied. Many planners also use a variety of refinements that are not progressive (that is, that have no pruning power). The motivation for their use is to reduce the plan-handling costs further; thus, we call them tractability refinements. Many of them can be understood as deriving the consequences of plan-set constraints alone (without recourse to domain and planning theory) and splitting any disjunction among the plan constraints into the search space.




We can classify the tractability refinements into three categories. The first attempts to reduce the number of linearizations of the plan. In this category, we have preordering refinements, which order two unordered steps, and prepositioning refinements, which constrain the relative position of two steps.

Preordering refinements are illustrated in figure 18. The single partially ordered plan at the top of the figure is converted into two totally ordered plans below. Planners using preordering refinements include TOCL (Barrett and Weld 1994) and TO (Minton, Bresina, and Drummond 1994).

Figure 18. Preordering Refinements. (A partially ordered plan with steps 1: Fly() and 2: Load(A) between 0 and ∞ is split into two totally ordered plans, one for each relative ordering of steps 1 and 2.)

Prepositioning refinement is illustrated in figure 19, where one refinement considers the possibility of step 1 being contiguous to step 0, and the other considers the possibility of step 1 being noncontiguous to step 0. Planners such as STRIPS and PRODIGY use prepositioning refinements (to transfer the steps from the means-ends-analysis tree to the plan head; see Means-Ends Analysis).

Figure 19. Prepositioning Refinements. (One component makes step 1: Fly() contiguous to step 0; the other constrains 0 and 1 to be noncontiguous.)

The second category of tractability refinements attempts to make all linearizations safe with respect to the auxiliary constraints. Here, we have presatisfaction refinements, which split a plan in such a way that a given auxiliary constraint is satisfied by all linearizations of the resulting components. Figure 20 illustrates a presatisfaction refinement with respect to the interval-preservation constraint <0, At(A,E), ∞>. To ensure that this constraint is satisfied in every linearization, the plan shown at the top is converted into the plan set with three components shown at the bottom. The first two components attempt to keep the step Fly() from intervening between 0 and ∞. The last component ensures that Fly() will be forced to preserve the condition At(A,E). Readers might note a strong similarity between the presatisfaction refinements and the declobbering phase of plan-space refinement (see Position, Relevance, and Plan-Space Refinements). The important difference is that declobbering is done with respect to the condition that is established in the current refinement, to ensure that the established condition is not deleted by any step that is currently present in the plan. There is no guarantee that the steps that will be introduced by future refinements will continue to respect this establishment. In contrast, presatisfaction refinements are done to satisfy the interval-preservation constraints (presumably added by the bookkeeping phase of the plan-space refinement to protect an established condition). As long as the appropriate interval-preservation constraints are present, they will be enforced with respect to both existing steps and any steps that might be introduced by future refinements. Many plan-space planners, including SNLP (McAllester and Rosenblitt 1991), UCPOP (Penberthy and Weld 1992), and NONLIN (Tate 1977), use presatisfaction refinements.

Figure 20. Presatisfaction Refinements. (To satisfy the IPC <0, At(A,E), ∞> in every linearization, the plan is split into three components: two order the step 1: Fly() out of the interval (promotion, demotion), and the third adds the preservation precondition ¬In(A)@1 (confrontation).)

The third category of tractability refinements attempts to reduce uncertainty in the action identity. An example is prereduction refinement, which converts a plan containing a nonprimitive action into a set of plans, each containing a different reduction of the nonprimitive action. Figure 21 illustrates the prereduction refinements. The plan at the top contains a nonprimitive action, Ship(A), which can, in principle, be reduced in a variety of ways to plan fragments containing only primitive actions. To reduce this uncertainty, prereduction refinements convert this plan to a plan set, each of whose components corresponds to a different way of reducing the Ship(A) action (in the context of figure 21, it is assumed that only one way of reducing Ship(A), viz., that shown in figure 17, is available).





Although tractability refinements as a whole seem to have weaker theoretical motivations than progressive refinements, it is worth noting that most of the prominent differences between existing algorithms boil down to differences in the use of tractability refinements. This point is illustrated in table 1, which characterizes several plan-space planners in terms of the specifics of the plan-space refinements they use (protection strategies, goal-selection strategies) and the type of tractability refinements they use.

Interleaving Different Refinements

One of the advantages of the treatment of refinement planning that I presented is that it naturally allows for interleaving a variety of refinement strategies in solving a single problem.


1: Fly()0 ∞2: Load(A)

1: Fly()0 ∞

2: Load(A)

(0 and 1 noncontiguous)

1: Fly()0 ∞

2: Load(A)

Figure 19. Prepositioning Refinements.

Figure 20. Presatisfaction Refinements. (The three components correspond to promotion, demotion, and confrontation; the confrontation component carries the condition ¬In(A)@1.)




How Important Is Least Commitment?

One of the more hotly debated issues related to refinement planning algorithms is the role and importance of least commitment in planning. Informally, least commitment refers to the idea of constraining the partial plans as little as possible during individual refinements, with the intuition that overcommitting can eventually make the partial plan inconsistent, necessitating backtracking. To illustrate, in figure 16 we saw that individual components of state-space refinements tend to commit regarding both the absolute position and the relevance of actions inserted into the plan, but plan-space refinements commit only to the relevance, leaving the position open by using precedence constraints.

Perhaps the first thing to understand about least commitment is that it has no special exclusive connection to ordering constraints (as is implied in some textbooks, where the phrase least commitment planning is used synonymously with partial order or plan-space planning). For example, a plan-space refinement that does (the optional) bookkeeping by imposing interval-preservation constraints is more constrained than a plan-space refinement that does not. Similarly, a hierarchical refinement that introduces an abstract action into the partial plan is less committed than a normal plan-space refinement that introduces only primitive actions (because a single abstract action can be seen as a stand-in for all the primitive actions that it can eventually be reduced to).

The second thing to understand about least commitment is that its utility depends on the nature of the domain. In general, commitment makes it easier to check if a partial plan contains a solution but increases the chance of backtracking. Thus, least commitment can be a winner in domains of low solution density and a loser in domains of high solution density.

The final, and perhaps the most important, thing to note about least commitment is that it makes a difference only when the planner splits the components of the plan sets into the search space. Although most traditional refinement planners do split plan set components, many recent planners such as GRAPHPLAN (Blum and Furst 1995; see Scale Up through Disjunctive Representations and Constraint-Satisfaction Techniques) handle plan sets without splitting, pushing most of the computation into the solution-extraction phase. In such planners, backtracking during refinement is not an issue (assuming that all refinements are complete), and thus, the level of commitment used by a refinement strategy does not directly affect the performance. What matters instead is the ease of extracting the solutions from the plan sets produced by the different refinements (which, in turn, might depend on factors such as the pruning power of the refinement, that is, how many candidates of the parent plan it is capable of eliminating from consideration) and the ease of propagating constraints on the plan set (see Scale Up through Disjunctive Representations and Constraint-Satisfaction Techniques).


Trade-Offs in Refinement Planning

I described a parameterized refinement planning template that allows for a variety of specific algorithms depending on which refinement strategies are selected and how they are instantiated. We now attempt to understand the trade-offs governing some of these choices and see how one can go about choosing a planner, given a specific population of problems to solve. I concentrate here on the trade-offs in refinement planners that split plan set components into the search space (see Introducing Splitting into Refinement Planning).

It must be noted, however, that the issue of how one selects a refinement is largely open. We have tried refinement selection based on the number of components produced by each refinement, or the amount of narrowing of a candidate set each refinement affords, with some success.


Figure 21. Prereduction Refinements.

Figure 22. Interleaving Refinements. (The successive refinements shown are backward state-space [BSR], plan-space [PSR], and forward state-space [FSR] refinements, followed by a prepositioning refinement, on a plan with the goal At(A,M)@∞.)



Asymptotic Trade-Offs
Let us start with an understanding of the asymptotic trade-offs in refinement planning. I use an estimate of the search-space size in terms of properties of the plans at the fringe of the search tree (figure 23). Suppose K is the total number of action sequences (to make K finite, we can consider all sequences of or below a certain length). Let F be the number of nodes on the fringe of the search tree generated by the refinement planner and k be the average number of candidates in each of the plans on the fringe. Let ρ be the number of times a given action sequence enters the candidate sets of fringe plans and p be the progress factor, the fraction by which the candidate set narrows each time a refinement is done. We then have

F = p^d × K × ρ / k.

Because F is approximately the size of the search space, it can also be equated to the search-space size derived from the effective branching factor b and the effective depth d of the generated search tree. Specifically,

F = p^d × K × ρ / k = b^d.

The time complexity of search can be written out as C × F, where C is the average cost of handling plan sets. C itself can be broken down into two components: (1) C_R, the cost of applying refinements, and (2) C_S, the cost of extracting solutions from the plan. Thus,

T = (C_S + C_R) × F.

These formulas can be used to understand the asymptotic trade-offs in refinement planning, as shown in figure 23.

For example, using refinement strategies with lower commitment (such as plan-space refinements as opposed to state-space refinements, or plan-space refinements without bookkeeping strategies as opposed to plan-space refinements with bookkeeping strategies) leads to plans with higher candidate-set sizes and, thus, reduces F, but it can increase C. Using tractability refinements increases b and, thus, increases F but might reduce C by reducing C_S (because the tractability refinements reduce the variation among the linearizations of the plan, thereby facilitating cheaper solution extractors). The protection (bookkeeping) strategies reduce the redundancy factor ρ and, thus, reduce F. However, they might increase C because protection is done by adding more constraints, whose consistency needs to be verified.
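A toy calculation makes the trade-off concrete. The numbers below are illustrative assumptions, not measurements: they merely show how T = (C_S + C_R) × F can favor either a committed planner (cheap plan handling, larger fringe) or a less committed one (smaller fringe, costlier plan handling).

    def search_time(b, d, cost_refine, cost_solution):
        fringe = b ** d                       # F = O(b^d)
        return (cost_refine + cost_solution) * fringe

    # Committed planner: larger effective branching factor, cheap plan handling.
    t_committed = search_time(b=4, d=6, cost_refine=1.0, cost_solution=0.5)     # 6144.0
    # Less committed planner: smaller fringe, costlier plan handling.
    t_least_commit = search_time(b=2, d=6, cost_refine=3.0, cost_solution=6.0)  # 576.0
    # For other (equally plausible) parameter values, the ordering flips.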

Although instructive, the analysis of this section does not make conclusive predictions on practical performance because performance depends on the relative magnitudes of change in F and C. To predict performance, we look at empirical evaluation.

Empirical Study of Trade-Offs in Refinement Planning
The parameterized and unified understanding of refinement planning provided in this article allows us to ask specific questions about the utility of specific design choices and answer them through normalized empirical studies. Here, we look at two choices, (1) tractability refinements and (2) bookkeeping (protection) strategies, because many existing planners differ along these dimensions.


Planner | Goal Selection | Protection | Tractability Refinements
TWEAK (Chapman 1987) | Based on modal truth criterion | None | None
SNLP (McAllester and Rosenblitt 1991), UCPOP (Penberthy and Weld 1992) | Arbitrary | IPCs to protect the established condition as well as its negation | Presatisfaction
TOCL (Barrett and Weld 1994) | Arbitrary | IPCs to protect the established condition as well as its negation | Preordering
UA, TO (Minton, Bresina, and Drummond 1994) | Based on modal truth criterion | None | Preordering (UA orders only interacting steps, but TO orders all pairs of steps)

IPC = interval-preservation constraint.

Table 1. A Spectrum of Plan-Space Planners.




Empirical results (Kambhampati, Knoblock, and Yang 1995) show that tractability refinements lead to reductions in search time only when the additional linearization they cause as a side effect also reduces the number of establishment possibilities. This reduction happens in domains where there are conditions that are asserted and negated by many actions. Results also show that protection strategies affect performance only in the cases where solution density is so low that the planner looks at the full search space.

In summary, for problems with normal solution density, performance differentials between planners are often attributable to differences in tractability refinements.

Selecting among Refinement Planners Using Subgoal Interaction Analysis
Let us now turn to the general issue of selecting among refinement planners given a population of problems (constraining our attention once again to those planners that split the plan set components completely into the search space). We are, of course, interested in selecting a planner for which the given population of problems is easy. Thus, we need to relate the problem and planner characteristics to the ease of solving the problem by the planner. If we make the reasonable assumption that planners will solve a conjunctive goal problem by solving the individual subgoals serially (that is, develop a complete plan for the first subgoal and then extend it to also cover the second subgoal), we can answer this question in terms of the interactions between subgoals. Intuitively, two subgoals are said to interact if the planner might have to backtrack over a plan that it made for one subgoal to achieve both goals.

A subplan for a subgoal is a partial plan all of whose linearizations will execute and achieve the goal. Figure 24 shows two subplans for the At(A,M) goal in the rocket problem. Every refinement planner R can be associated with a class P_R of subplans it is capable of producing for a subgoal. For example, for the goal At(A,M), a planner using purely state-space refinements will produce prefix plans of the sort shown at the top of figure 24, which have steps only in the head, but a pure plan-space planner will produce elastic plans of the sort shown at the bottom, which only have steps in the middle, and new steps can be introduced in between existing ones.


Figure 23. Asymptotic Trade-Offs in Refinement Planning. (Legend: K = candidate-set size of the null plan; F = fringe size; k = average candidate-set size of a fringe plan; ρ = redundancy factor [≥ 1]; p = progress factor [≤ 1]; b = branching factor [number of components]; d = depth [number of refinements]; C = plan-handling cost. Size of explored search space: F = (K × ρ / k) × p^d = O(b^d); time complexity: T = C × F.)



The key question in solving two subgoals G1 and G2 serially is whether a subplan for G1 in the given plan class is likely to be extended to be a subplan for the conjunctive goal G1 and G2. Two goals G1 and G2 are said to be trivially serializable (Kambhampati, Ihrig, and Srivastava 1996; Barrett and Weld 1994) with respect to a class of plans P_R if every subplan of one goal belonging to P_R can eventually be refined into a subplan for solving both goals. If all goals in a domain are pairwise trivially serializable with respect to the class of plans produced by a planner, then clearly, plan synthesis in this domain is easy for the planner (because the complexity will be linear in the number of goals).
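The serial-solving assumption can be phrased as a short loop; the solve and extend hooks below are hypothetical planner interfaces used only to make the definition concrete. An extension failure is exactly the backtracking over an earlier subplan that signals a subgoal interaction.

    def solve_serially(goals, solve, extend):
        """Solve the first subgoal, then repeatedly refine the resulting
        subplan so that it also covers each remaining subgoal."""
        plan = solve(goals[0])
        for goal in goals[1:]:
            plan = extend(plan, goal)   # refine the existing subplan further
            if plan is None:
                return None             # would force backtracking over the subplan
        return plan

When the goals are trivially serializable with respect to the planner's plan class, extend never fails, and the loop runs in time linear in the number of goals.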

It turns out that the level of commitment inherent in a plan class is an important factor in deciding serializability. Clearly, the lower the commitment of plans in a given class, the higher the chance for trivial serializability. For example, the plan at the top of figure 24 cannot be extended to handle the subgoal At(B,M), but the plan at the bottom can. Lower commitment explains why many domains with subgoal interactions are easier for plan-space planners than for state-space planners (Barrett and Weld 1994).

The preceding discussion does not imply a dominance of state-space planners by plan-space planners, however. In particular, the lower the commitment, the higher the cost of handling plans in general. Thus, the best guideline is to select the refinement planner with the highest commitment with respect to whose class of (sub)plans most goals in the domain are trivially serializable. Empirical studies show this strategy to be effective (Kambhampati, Ihrig, and Srivastava 1996).

Scaling Up Refinement Planners
Although refinement planning techniques have been applied to some complex real-world problems, such as beer brewing (Wilkins 1988), space-observation planning (Fuchs et al. 1990), and spacecraft assembly (Aarup et al. 1994), their widespread use has been inhibited to some extent by the fact that most existing planners scale up poorly when presented with large problems. Thus, there has been a significant emphasis on techniques for improving the efficiency of plan synthesis. One of these techniques involves improving performance by customizing the planner's behavior toward the problem population, and the second involves using disjunctive representations. Let me now survey the work in these directions.

Scale Up through Customization
Customization can be done in a variety of ways. The first way is to bias the search of the planner with the help of control knowledge acquired from the user. As we discussed earlier, nonprimitive actions and reduction schemas are used, for the most part, to support such customization in the existing planners. There is now more research on the protocols for acquiring and analyzing reduction schemas (Chien 1996).

There is evidence that not all expert control knowledge is available in terms of reduction schemas. In such cases, incorporating the control knowledge into the planner can be tricky. One intriguing idea is to fold the control knowledge into the planner by automatically synthesizing planners, from the domain specification and the declarative theory of refinement planning, using interactive software synthesis tools. I have started a project on implementing this approach using the KESTREL interactive software synthesis system, and the preliminary results have been promising (Srivastava and Kambhampati 1996).


Figure 24. Different Partial Plans for Solving the Same Subgoal.


Understanding GRAPHPLAN
The GRAPHPLAN algorithm developed by Blum and Furst (1995) has generated a lot of excitement in the planning community on two counts: (1) it was by far the most efficient domain-independent planner on several benchmark problems, and (2) its design seemed to have little in common with the traditional state-space and plan-space planners.

Within our general refinement planning framework, GRAPHPLAN can be seen as using forward state-space refinements without splitting the plan set components. We can elaborate this relation in light of our discussion of disjunctive representations. On the left of the figure is a state tree generated by a forward state-space planner that uses full splitting. On the right is the plan-graph structure generated by GRAPHPLAN for the same problem. Note that the plan graph can roughly be seen as a disjunction of the branches of the tree on the left. Specifically, the ovals representing the plan-graph proposition lists at a particular level can be seen as approximately the union of the states in the state-space tree at this level. Similarly, the actions at a given level in the plan graph can be seen as the union of actions on the various transitions at that level in the state tree. It is important to note that the relation is only approximate; for example, the action a9 in the second action level and the proposition T in the third level do not have any correspondence with the search tree. As explained in Refining Disjunctive Plans, this approximation is part of the price we pay for refining disjunctive plans directly. However, the propagation of mutual exclusion constraints allows GRAPHPLAN to keep a reasonably close correspondence with the state tree without incurring the exponential space requirements of the state tree.

Viewing GRAPHPLAN in terms of our generalized refinement planning framework clarifies its sources of strength. For example, although Blum and Furst seem to suggest that an important source of GRAPHPLAN's strength is its ability to consider multiple actions at each time step, thereby generating parallel plans, this ability in itself is not new. As described in Forward State-Space Refinement, it is possible to generalize forward state-space refinement to consider sets of noninterfering actions simultaneously. Indeed, GRAPHPLAN's big win over traditional state-space planners (that do full splitting of plan set components into the search space) comes from its handling of plan set components without splitting, which, in turn, is supported by its use and refinement of disjunctive plans. Experiments with GRAPHPLAN confirm this hypothesis (Kambhampati and Lambrecht 1997).

The strong connection between forward state-space refinement and GRAPHPLAN also suggests that techniques such as means-ends analysis that have been used to focus forward state-space refinement can also be used to focus GRAPHPLAN. Indeed, we found that despite its efficiency, GRAPHPLAN can easily fail in domains where many actions are available, and only a few are relevant to the top-level goals of the problem. Thus, it would be interesting to deploy the best methods of means-ends analysis (such as the greedy regression graph proposed by McDermott [1996]) to isolate potentially relevant actions and only use them to grow the plan graph.
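The union-of-states reading of the plan graph can be made concrete with a schematic sketch. The (pre, add, delete) action encoding and the interference test below are simplifications assumed for illustration; GRAPHPLAN's actual mutex propagation is richer (for example, it also derives proposition mutexes from action mutexes at earlier levels).

    def grow_level(props, actions):
        """Grow one plan-graph level: applicable actions are kept together
        (unioned) instead of being split into separate search branches."""
        applicable = [a for a in actions if a["pre"] <= props]
        next_props = set(props)                  # no-ops persist old propositions
        for a in applicable:
            next_props |= a["add"]               # ~ union of the successor states
        # Mark two actions as interfering if one deletes the other's
        # preconditions or added effects.
        mutex_actions = {(a["name"], b["name"])
                         for a in applicable for b in applicable
                         if a is not b and (a["delete"] & (b["pre"] | b["add"]))}
        return next_props, applicable, mutex_actions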

Understanding the GRAPHPLAN Algorithm as a Refinement Planner Using State-Space Refinements over Disjunctive Partial Plans. (Legend: plan graph = disjunctive plan set; plan-graph growing = refinement; backward search of the plan graph = search for minimal candidates corresponding to solutions. A proposition list in the plan graph is approximately the union of the states at the same level of the state tree; an action level is approximately the union of the actions at that level.)



Another way to customize is to use learning techniques and make the planner learn from its failures and successes. The object of learning might be the acquisition of search-control rules that advise the planner what search branch to pursue (Kambhampati, Katukam, and Qu 1996; Minton et al. 1989) or the acquisition of typical planning cases that can then be instantiated and extended to solve new problems (Ihrig and Kambhampati 1996; Veloso and Carbonell 1993; Kambhampati and Hendler 1992). This area of research is active, and a sampling of papers can be found in the machine-learning sessions from the American Association for Artificial Intelligence conference and the International Joint Conference on Artificial Intelligence.

Scale Up through Disjunctive Representations and Constraint-Satisfaction Techniques
Another way to scale up refinement planners is to directly address the question of search-space explosion. Much of this explosion is because all existing planners reflexively split the plan set components into the search space. We saw earlier with the basic refinement planning template that splitting is not required for completeness of refinement planning.

Let us examine the consequences of not splitting plan sets. Of course, we reduce the search-space size and avoid the premature commitment to specific plans. We also separate the action-selection and action-sequencing phases, so that we can apply scheduling techniques for the latter.

There can be two potential problems, however. First, keeping plan sets together can lead to unwieldy data structures. The way to get around this problem is to internalize the disjunction in the plan sets so that we can represent them more compactly (see the later discussion). The second potential problem is that we might just be transferring the complexity from one place to another, from search-space size to solution extraction. However, solution extraction can be cast as a model-finding activity, and there have been a slew of efficient search strategies for propositional model finding (Crawford and Auton 1996; Selman, Mitchell, and Levesque 1992). I now elaborate on these ideas.
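To make the model-finding view concrete, here is a toy, exhaustive propositional model finder over CNF clauses. It is only a stand-in: real disjunctive planners hand such encodings to efficient engines in the GSAT family rather than enumerating assignments, and the mini-encoding at the bottom is a hypothetical fragment invented for illustration.

    from itertools import product

    def find_model(variables, clauses):
        """clauses: iterable of clauses, each a set of (variable, polarity) literals."""
        for bits in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, bits))
            if all(any(assignment[v] == polarity for v, polarity in clause)
                   for clause in clauses):
                return assignment
        return None

    # Hypothetical mini-encoding: some step gives In(x), and two interfering
    # candidate first steps cannot both be chosen.
    clauses = [{("in_by_1", True), ("in_by_2", True)},
               {("load_first", False), ("fly_first", False)}]
    print(find_model(["in_by_1", "in_by_2", "load_first", "fly_first"], clauses))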


Figure 25. Disjunction over State-Space Refinements.

Figure 26. Disjunction over Plan-Space Refinements.


Disjunctive Representations
The general idea of disjunctive representations is to allow disjunctive step, ordering, and auxiliary constraints into a plan. Figures 25 and 26 show two examples that illustrate the compaction we can get through them. The three plans on the left in figure 25 can be combined into a single disjunctive step with disjunctive contiguity constraints. Similarly, the two plans in figure 26 can be compacted using a single disjunctive step constraint, a disjunctive precedence constraint, a disjunctive interval-preservation constraint, and a disjunctive point-truth constraint.

Candidate-set semantics for disjunctive plans follow naturally: The presence of the disjunctive constraint c1 ∨ c2 in a partial plan constrains its candidates to be consistent with either the constraint c1 or the constraint c2.

Disjunctive representations clearly lead to a significant increase in the cost of plan handling. For example, in the disjunctive plan in figure 25, we don't know which of the steps will be coming next to 0, and thus, we don't know what the state of the world will be after the disjunctive step. Similarly, in the disjunctive plan in figure 26, we don't know whether steps 1 or 2 or both will be present in the eventual plan. Thus, we don't know whether we should work on the At(A,E) precondition or the At(B,E) precondition.

This uncertainty leads to two problems: First, how are we to refine disjunctive partial plans? Second, how are we to extract solutions from the disjunctive representation? We discuss the first problem in the following section. The second problem can be answered to some extent by posing the solution-extraction phase as a constraint-satisfaction problem (such as propositional satisfiability) and using efficient constraint-satisfaction engines. Recall that solution extraction merely involves finding a minimal candidate of the plan that is a solution (in the sense that the preconditions of all the steps, including the final goal step, are satisfied in the states preceding them).

Refining Disjunctive Plans
To handle disjunctive plans directly, we need to generalize the particulars of the refinement strategies because the standard refinement strategies are clearly developed only for partial plans without disjunction (see Existing Refinement Strategies). For example, for the disjunctive plan on the right in figure 25, we don't know which of the steps will be coming next to 0, and thus, we don't quite know what the state of the world will be after the disjunctive step. Thus, the forward state-space refinement will not know which actions should next be applied to the plan prefix. Similarly, for the disjunctive plan in figure 26, we don't know whether steps 1 or 2 or both will be present in the eventual plan. Thus, a plan-space refinement won't know whether it should work on the At(A,E) precondition or the At(B,E) precondition or both.6

One way of refining such plans is to handle the uncertainty in a conservative fashion. For example, for the plan in figure 25, although we do not know the exact state after the first (disjunctive) step, we know that it can only be a subset of the union of conditions in the effects of the three steps. Knowing that only 1, 2, or 3 can be the first step in the plan tells us that the state after the first step can only contain the conditions In(A), In(B), and At(R,M). Thus, we can consider a generalization of forward state-space refinement that adds only those actions whose preconditions are subsumed by the union of the effects of the three steps.

This variation is still complete but will be less progressive than the standard version that operates on nondisjunctive plan sets. It is possible that even though the preconditions of an action are in the union of effects, there is no real way for the action to take place. For example, although the preconditions of the "unload at moon" action might seem satisfied, it is actually never going to occur as the second step in any solution because Load() and Fly() cannot be done at the same time. This example brings up an important issue: Disjunctive plans can be refined only at the expense of some of the progressivity of the refinement.

Although the loss of progressivity cannot be avoided, it can be reduced to a significant extent by generalizing the refinements so that they infer more than action constraints. In the example, the refinement can infer that steps 1 and 3 cannot both occur as the first step (because their preconditions and effects are interacting). This interference tells us that the second state might have either In(A) or At(R,M) but not both. Here, the interaction between steps 1 and 3 propagates to make the conditions In(A) and At(R,M) mutually exclusive in the next disjunctive state. Thus, any action that needs both In(A) and At(R,M) can be ignored in refining this plan. This example uses constraint propagation to reduce the number of refinements generated. The particular strategy shown here is used by Blum and Furst's (1995) GRAPHPLAN algorithm (see the sidebar).

Similar techniques can be used to refine the disjunctive plan in figure 26. For example, knowing that either 1 or 2 must precede the last step and give the condition In(x) tells us that if 1 doesn't, then 2 must. This inference reduces the number of establishment possibilities that plan-space refinement has to consider at the next iteration.


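The conservative refinement and the mutual-exclusion pruning just described can be sketched together. The (pre, add, delete) action format is an assumption, and the sketch ignores conditions persisting from the initial state; it is meant only to show the shape of the computation, not GRAPHPLAN's actual implementation.

    def refine_disjunctive(first_steps, library):
        """first_steps: the disjoined candidates for the first step.
        Returns the library actions that remain feasible as the next step."""
        union_state = set().union(*(a["add"] for a in first_steps))

        def interferes(a, b):
            return bool(a["delete"] & (b["pre"] | b["add"]) or
                        b["delete"] & (a["pre"] | a["add"]))

        def producers(p):
            return [a for a in first_steps if p in a["add"]]

        def mutex(p, q):
            # p and q are mutually exclusive if every way of producing p
            # interferes with every way of producing q (and no single step
            # produces both).
            return all(a is not b and interferes(a, b)
                       for a in producers(p) for b in producers(q))

        def feasible(action):
            pre = action["pre"]
            return pre <= union_state and not any(
                mutex(p, q) for p in pre for q in pre if p != q)

        return [a for a in library if feasible(a)]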

Open Issues in Planning with Disjunctive Representations
In the last year or so, several efficient planners have been developed that can be understood in terms of disjunctive representations. These planners include GRAPHPLAN (Blum and Furst 1995), SATPLAN (Kautz and Selman 1996), DESCARTES (Joslin and Pollack 1996), and UCPOP-D (Kambhampati and Yang 1996).

However, many issues need careful attention (Kambhampati 1997). For example, a study of the constraint-satisfaction literature shows that propagation and refinement can have synergistic interactions. A case in point is the eight-queens problem, where constraint propagation becomes possible only after we commit to the placement of at least one queen. Thus, the best planners might be doing controlled splitting of plan sets (rather than no splitting at all) to facilitate further constraint propagation. The following generalized refinement planning template supports controlled splitting of plan sets:

Refine (P: [a disjunctive] plan set)

0*. If <<P>> is empty, Fail.

1. If a minimal candidate of P is a solution, terminate.

2. Select a refinement strategy R. Apply R to P to get a new plan set P′.

3. Split P′ into k plan sets.

4. Simplify the plan sets by doing constraint propagation.

5. Nondeterministically select one of the plan sets P′i. Call Refine(P′i).

Step 3 in the algorithm splits a plan set into k components. Depending on the value of k, as well as the type of refinement strategy selected in step 2, we can get a spectrum of refinement planners, as shown in table 2. In particular, the traditional refinement planners that do complete splitting can be modeled by choosing k to be equal to the number of components of the plan set. Planners such as GRAPHPLAN that do not split can be modeled by choosing k to be equal to 1.

In between are planners such as DESCARTES and UCPOP-D that split a plan set into some number of branches that is less than the number of plan set components. One immediate question is exactly how many branches a plan set should be split into. Because the extent of propagation depends on the amount of shared substructure between the disjoined plan set components, one way of controlling splitting would be to keep plans with shared substructure together.
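Rendered in code, the template looks as follows; all five hooks (the candidate-set test, solution extractor, strategy selector, splitting policy, and propagator) are hypothetical callables a planner would supply, and the nondeterministic choice of step 5 is simulated by backtracking.

    def refine(plan_set, candidates_empty, extract_solution,
               select_strategy, split_into_k, propagate):
        if candidates_empty(plan_set):            # step 0*: <<P>> is empty
            return None                           # fail on this branch
        solution = extract_solution(plan_set)     # step 1: minimal-candidate test
        if solution is not None:
            return solution
        strategy = select_strategy(plan_set)      # step 2: apply a refinement R
        refined = strategy(plan_set)
        for component in split_into_k(refined):   # step 3: k = 1 ... #components
            simplified = propagate(component)     # step 4: constraint propagation
            result = refine(simplified, candidates_empty, extract_solution,
                            select_strategy, split_into_k, propagate)
            if result is not None:                # step 5: nondeterministic choice,
                return result                     # simulated by backtracking
        return None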

The relative support provided by various types of refinement for planning with disjunctive representations needs to be understood. The trade-off analyses described in Trade-Offs in Refinement Planning need to be generalized to handle disjunctive representations. The analyses based on commitment and ensuing backtracking are mostly inadequate when we do not split plan set components. Nevertheless, specific refinement strategies can have a significant effect on the performance of disjunctive planners. As an example, consider the fact that although GRAPHPLAN, which does forward state-space refinement on disjunctive plans, is efficient, no comparable planner does backward state-space refinements.

Finally, the interaction between the refinements and the solution-extraction process needs to be investigated more carefully.


Planner | Refinement | Splitting (k)
UCPOP (Penberthy and Weld 1992), SNLP (McAllester and Rosenblitt 1991) | Plan space | k = #Comp
TOPI (Barrett and Weld 1996) | Backward state space | k = #Comp
GRAPHPLAN (Blum and Furst 1995) | Forward state space | k = 1
UCPOP-D (Kambhampati and Yang 1996), DESCARTES (Joslin and Pollack 1996) | Plan space | 1 < k < #Comp

Table 2. A Spectrum of Refinement Planners.


Refinement Planning versus Encoding Planning as Satisfiability
Recently, Kautz et al. (Kautz and Selman 1996; Kautz, McAllester, and Selman 1996) have advocated solving planning problems by encoding them first as SAT problems and then using efficient SAT solvers such as GSAT to solve them. Their approach involves generating a SAT encoding, all models of which will correspond to k-length solutions to the problem (for some fixed integer k). Model finding is done by efficient SAT solvers such as GSAT (Selman, Levesque, and Mitchell 1992). Kautz et al. propose to start with some arbitrary value of k and increase it if they do not find solutions of this length. They have considered a variety of ways of generating the encodings.

In the context of the general refinement planning framework, I can offer a rational basis for the generation of the various encodings. Specifically, the natural place where SAT solvers can be used in refinement planning is in the solution-extraction phase. As illustrated in the figure, after doing k complete and progressive refinements on a null plan, we get a plan set whose minimal candidates contain all k-length solutions to the problem. Thus, picking a solution boils down to searching through the minimal candidates, which can be cast as a SAT problem. This account naturally relates the character of the encodings to what refinements are used in coming up with the k-length plan set and how the plan sets themselves are represented (recall that disjunctive representations can reduce the progressivity of refinements).

Kautz and his colleagues concentrate primarily on direct translation of planning problems to SAT encodings, sidelining refinement planning. I believe that such an approach confounds two orthogonal issues: (1) reaching a compact plan set whose minimal candidates contain all the solutions and (2) posing the solution extraction as a SAT problem. The issues guiding the first are by and large specific to planning and are best addressed in terms of refining disjunctive plans. The critical issue in posing the solution extraction as a SAT problem is to achieve compact SAT encodings. The techniques used to address this issue, such as converting n-ary predicates into binary predicates or compiling out dependent variables, are generic and are only loosely tied to planning.

It is thus best to separate plan set construction and its compilation into a SAT instance. Such a separation allows SATPLAN techniques to exploit, rather than reinvent, (disjunctive) refinement planning. Their emphasis can then shift toward understanding the trade-offs offered by SAT encodings based on disjunctive plans derived by different types of refinement. In addition, basing encodings on kth-level plan sets can also lead to SAT instances that are smaller on the whole. Specifically, the pruning done by disjunctive refinements can also lead to a plan set with fewer minimal candidates and, consequently, a tighter encoding than can be achieved by direct translation. This conjecture is supported to some extent by the results reported in Kautz and Selman (1996) that show that SAT encodings based on k-length planning graphs generated by GRAPHPLAN can be solved more efficiently than the linear encodings generated by direct translation.
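As a toy illustration of basing the encoding on a refined plan set, the sketch below emits clauses in which each step ranges only over the actions that the refinements left in its disjunction, rather than over the whole library. The clause shapes are an assumption for illustration, not the specific encodings of Kautz and his colleagues.

    def encode_plan_set(step_choices):
        """step_choices[i]: the set of action names still allowed at step i.
        Returns CNF clauses over variables of the form ("act", i, name)."""
        clauses = []
        for i, choices in enumerate(step_choices):
            acts = sorted(choices)
            # At least one allowed action occurs at step i ...
            clauses.append([(("act", i, a), True) for a in acts])
            # ... and no two of them occur together.
            for x in range(len(acts)):
                for y in range(x + 1, len(acts)):
                    clauses.append([(("act", i, acts[x]), False),
                                    (("act", i, acts[y]), False)])
        return clauses

    # Fewer actions per step (thanks to refinement pruning) means fewer
    # variables and clauses, hence the tighter encodings discussed above.
    print(len(encode_plan_set([{"Load(A)", "Load(B)"}, {"Fly()"}])))  # 3 clauses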

Relating the Refined Plan Set at the kth Level to SATPLAN Encodings. (If the refinements R1, R2, ..., Rk applied to the null plan set are all complete and progressive, then the minimal candidates of the resulting plan set Pk contain all k-length solutions. Whether some minimal candidate of Pk is a solution to the problem can be encoded as a SAT-CSP instance; the shape of the encoding depends on the refinements Ri and on whether the plan sets are represented disjunctively or nondisjunctively.)


As I indicated in the preceding discussion, if we have access to efficient solution-extraction procedures, we can, in theory, reduce the role of refinements significantly. For example, the disjunctive plan in figure 25 can be refined by simply assuming that every step is a disjunction of all the actions in the library. The solution-extraction function will then have to sort through the large disjunctions and find the solutions. To a first approximation, this is the way the state-space and plan-space encodings of SATPLAN (Kautz and Selman 1996) work. The approach we sketched involves using the refinement strategies to reduce the amount of disjunction, for example, to realize that the second step can only be drawn from a small subset of all library actions, and then using SAT methods to extract solutions from the resulting plan. I speculate that this approach will scale up better by combining the forces of both refinement and efficient solution extraction.

Conclusion and Future Directions

Let me conclude by reiterating that refinement planning continues to provide the theoretical backbone for most of the AI planning techniques. The general framework I presented in this article allows a coherent unification of all classical planning techniques, provides insights into design trade-offs, and outlines avenues for the development of more efficient planning algorithms.

Perhaps equally important, a clearer understanding of refinement planning under classical assumptions will provide us valuable insights into planning under nonclassical assumptions. Indeed, the operation of several nonclassical planners described in the literature can be understood from a refinement planning point of view. These planners include probabilistic least commitment planners such as BURIDAN (Kushmerick, Hanks, and Weld 1995) and planners dealing with partially accessible environments such as XII (Golden, Etzioni, and Weld 1996).

A variety of exciting avenues are open for further research in refinement planning (Kambhampati 1997). First, we have seen that despite the large number of planning algorithms, there are really only two fundamentally different refinement strategies: (1) plan space and (2) state space. It would be interesting to see if there are novel refinement strategies with better properties. To some extent, this formulation of new refinements can depend on the type of representations we use for partial plans. In a recent paper, Ginsberg (1996) describes a partial plan representation and refinement strategy that differs significantly from the state-space and plan-space strategies, although its overall operation conforms to the framework of refinement followed by the solution extraction described here. It will be interesting to see how it relates to the classical refinements.

We also do not understand enough about what factors govern the selection of specific refinement strategies. We need to take a fresh look at trade-offs in plan synthesis, given the availability of planners using disjunctive representations. Concepts such as subgoal interaction do not make too much sense in these scenarios.

Finally, much can be gained by porting the refinement planning techniques to planning under nonclassical scenarios. For example, how can we use the analogs of hierarchical refinements and disjunctive representations to make planning under uncertainty more efficient?

Further Reading
For ease of exposition, I have simplified the technical details of the refinement planning framework in some places. For a more formal development of the syntax and semantics of refinement planning, see Kambhampati, Knoblock, and Yang (1995). For the details of unifying and interleaving state-space, plan-space, and hierarchical refinements, see Kambhampati and Srivastava (1996). For a more technical discussion of the issues in disjunctive planning, see Kambhampati and Yang (1996) and Kambhampati (1997). Most of these papers can be found at rakaposhi.eas.asu.edu/yochan.html. A more global overview of the areas of planning and scheduling can be found in Dean and Kambhampati (1996).

Pednault (1994) is the best formal introduction to the syntax and semantics of ADL. Barrett and Weld (1994) report on a comprehensive analysis of the trade-offs between state-space and plan-space planning algorithms. Minton, Bresina, and Drummond (1994) compare partial-order and total-order plan-space planning algorithms. This study focuses on the effect of preordering tractability refinements. Kambhampati, Knoblock, and Yang (1995) provide some follow-up results on the effect of tractability refinements on planner performance.

There are also several good sources for more detailed accounts of individual refinement planning algorithms. Weld (1994) is an excellent tutorial introduction to partial-order planning. Erol (1995) provides a theoretically clean formalization of the hierarchical planning algorithms. Penberthy and Weld (1994) describe a refinement planner that can handle deadlines and continuous change. Wellman (1987) proposes a general template for planning under uncertainty based on dominance proving that is similar to the refinement planning template discussed here.

Although we concentrated on plan synthesis in classical planning, the theoretical models developed here are also helpful in explicating the issues in replanning and plan reuse as well as the interleaving of planning and execution. For a more global account of the area of automated planning that meshes well with the unifying view described in this article, you might refer to the online notes from the graduate-level course on planning that I teach at Arizona State University. The notes can be found at rakaposhi.eas.asu.edu/planning-class.html.

In addition to the national and international AI conferences, planning-related papers also appear in the biannual International Conference on AI Planning Systems and the European Conference (formerly European Workshop) on Planning Systems. There is a mailing list for planning-related discussions and announcements. For a subscription, e-mail a message to [email protected].

Acknowledgments
This article is based on an invited talk given at the 1996 American Association for Artificial Intelligence Conference in Portland, Oregon. My understanding of refinement planning issues has matured over the years because of my discussions with various colleagues. Notable among these are Tony Barrett, Mark Drummond, Kutluhan Erol, Jim Hendler, Eric Jacopin, Craig Knoblock, David McAllester, Drew McDermott, Dana Nau, Ed Pednault, David Smith, Austin Tate, Dan Weld, and Qiang Yang. My own students and participants of the Arizona State University planning seminar have been invaluable as sounding boards and critics of my half-baked ideas. I would especially like to acknowledge Bulusu Gopi Kumar, Suresh Katukam, Biplav Srivastava, and Laurie Ihrig. Amol Mali, Laurie Ihrig, and Jude Shavlik read a version of this article and provided helpful comments. Finally, I want to thank Dan Weld and Nort Fowler for their encouragement on this line of research. Some of this research has been supported by grants from the National Science Foundation (NSF) (research initiation award IRI-9210997; NSF Young Investigator award IRI-9457634), the Advanced Research Projects Agency (ARPA) Planning Initiative under Tom Garvey's management (F30602-93-C-0039, F30602-95-C-0247), and an ARPA AASERT grant (DAAHO4-96-1-0231).


Notes
1. In the context of planning, the frame problem refers to the idea that a first-order logic–based description of actions must not only state what conditions are changed by an action but also what conditions remain unchanged after the action. In any sufficiently rich domain, many conditions are left unchanged by an action, thus causing two separate problems: First, we might have to write the so-called frame axioms for each of the action–unchanged condition pairs. Second, the theorem prover has to use these axioms to infer that the unchanged conditions in fact remained the same. Although the first problem can be alleviated using domain-specific frame axioms (Haas 1987) that state for each condition the circumstances under which it changes, the second problem cannot be resolved so easily. Any general-purpose theorem prover would have to explicitly use the frame axioms to prove the persistence of conditions.

2. It is perhaps worth repeating that this representation for partial plans is sufficient but not necessary. I use it chiefly because it subsumes the partial plan representations used by most existing planners. For a significantly different representation of partial plans, which can also be given candidate set–based semantics, the reader is referred to some recent work by Ginsberg (1996).

3. As readers familiar with the partial-order–planning literature will note, I am simplifying the representation by assuming that all actions are fully instantiated, thus ignoring codesignation and noncodesignation constraints between variables. Introduction of variables does not significantly change the nature of refinement planning.

4. David Chapman's influential 1987 paper on the foundations of nonlinear planning has unfortunately caused some misunderstandings about the nature of plan-space refinement. Specifically, Chapman's account suggests that the use of a modal truth criterion is de rigueur for doing plan-space planning. A modal truth criterion is a formal specification of the necessary and sufficient conditions for ensuring that a state variable will have a particular value in the state preceding (or following) a given action in a partially ordered plan (that is, a partial plan containing actions ordered by precedence constraints). Chapman's idea is to make the truth criterion the basis of plan-space refinement. This idea involves first checking to see if every precondition of every action in the plan is true according to the truth criterion. For each precondition that is not true, the planner then considers adding all possible combinations of additional constraints (steps, orderings) to the plan to make them true. Because interpreting the truth criterion turns out to be NP-hard when the actions in the plan can have conditional effects, Chapman's work has led to the belief that the cost of an individual plan-space refinement can be exponential.

The fallacy in this line of reasoning becomes apparent when we note that checking the truth of a proposition in a partially ordered plan is never necessary for solving the classical planning problem (whether by plan-space or some other refinement) because the solutions to a classical planning problem are action sequences! The partial ordering among steps in a partial plan constrains the candidate set of the partial plan and is not to be confused with action parallelism in the solutions. Our account of plan-space refinement avoids this pitfall by not requiring the use of a modal truth criterion in the refinement. For a more elaborate clarification of this and other formal problems about the nature and role of modal truth criteria, the reader is referred to Kambhampati and Nau (1995).

5. A common misrepresentation of the state-space and plan-space refinements in the early planning literature involved identifying plan-space refinements with plans where the actions are partially ordered and identifying state-space refinements with plans where the actions are totally ordered. As the description here shows, the difference between state-space and plan-space refinements is better understood in terms of precedence and contiguity constraints. These constraints differ primarily in whether new actions are allowed to intervene between a pair of ordered actions: Precedence relations allow an arbitrary number of additional actions to intervene, but contiguity relations do not.

A planner employing plan-space refinement, and thus using precedence relations, can produce totally ordered partial plans if it uses preordering-based tractability refinements (see Tractability Refinements). Examples of such planners include TOCL (Barrett and Weld 1994) and TO (Minton, Bresina, and Drummond 1994). Similarly, a planner using state-space refinements can produce partial plans with some actions unordered with respect to each other if it uses the generalized state-space refinement that considers sets of noninterfering actions together in one plan set component rather than in separate ones (see Forward State-Space Refinement).

6. As I noted earlier, hierarchical refinement introduces nonprimitive actions into a partial plan. The presence of a nonprimitive action can be interpreted as a disjunctive constraint on the partial plan, effectively stating that the partial plan must contain all the primitive actions corresponding to at least one of the eventual reductions of the nonprimitive task. Despite this apparent similarity, traditional HTN planners differ from the disjunctive planners discussed here in that they eventually split the disjunction into the search space with the help of prereduction refinements, considering each way of reducing the nonprimitive task in a different search branch. Solution extraction is done only on nondisjunctive plans. The utility of the disjunction in this case is primarily to postpone the branching to lower levels of the search tree. In contrast, disjunctive planners can (and perhaps should) do solution extraction directly from disjunctive plans.

ReferencesAarup, M.; Arentoft, M. M.; Parrod, Y.; and Stokes,I. 1994. OPTIMUM-AIV: A Knowledge-Based Planningand Scheduling System for Spacecraft AIV. In Intel-ligent Scheduling, eds. M. Fox and M. Zweben,451–470. San Francisco, Calif.: Morgan Kaufmann.

Bacchus, F., and Kabanza, F. 1995. Using TemporalLogic to Control Search in a Forward-ChainingPlanner. In Proceedings of the Third European Work-shop on Planning. Amsterdam: IOS.

Barrett, A., and Weld, D. 1994. Partial-Order Plan-ing: Evaluating Possible Efficiency Gains. ArtificialIntelligence 67(1): 71–112.

Blum, A., and Furst, M. 1995. Fast Planningthrough Plan-Graph Analysis. In Proceedings of theThirteenth International Joint Conference onArtificial Intelligence. Menlo Park, Calif.: Interna-tional Joint Conferences on Artificial Intelligence.

Chapman, D. 1987. Planning or ConjunctiveGoals. Artificial Intelligence 32(3): 333–377.

Chien, S. 1996. Static and Completion Analysis forPlanning Knowledge Base Development andVerification. In Proceedings of the Third InternationalConference on AI Planning Systems, 53–61. MenloPark, Calif.: AAAI Press.

Crawford, J., and Auton, L. 1996. ExperimentalResults on the Crossover Point in Random 3SAT.Artificial Intelligence 81.

Dean, T., and Kambhampati, S. 1996. Planning andScheduling. In CRC Handbook or Computer Scienceand Engineering. Boca Raton, Fla.: CRC Press.

Drummond, M. 1989. Situated Control Rules. InProceedings of the First International Conference onKnowledge Representation and Reasoning, 103–113.San Francisco, Calif.: Morgan Kaufmann.

Erol, K. 1995. Hierarchical Task Network PlanningSystems: Formalization, Analysis, and Implementa-tion. Ph.D. diss., Department of Computer Science,University of Maryland.

Erol, K.; Nau, D.; and Subrahmanian, V. 1995.Complexity, Decidability, and UndecidabilityResults for Domain-Independent Planning.Artificial Intelligence 76(1–2): 75–88.

Fikes, R. E., and Nilsson, N. J. 1971. STRIPS: A NewApproach to the Application of Theorem Provingto Problem Solving. Artificial Intelligence 2(3–4):189–208.

Fuchs, J. J.; Gasquet, A.; Olalainty, B. M.; and Cur-rie, K. W. 1990. PLANERS-1: An Expert Planning Sys-tem for Generating Spacecraft Mission Plans. InProceedings of the First International Conferenceon Expert Planning Systems, 70–75. Brighton, U.K.:Institute of Electrical Engineers.

Ginsberg, M. 1996. A New Algorithm for Genera-tive Planning. In Proceedings of the Fifth Internation-al Conference on Principles of Knowledge Representa-

Articles

SUMMER 1997 95

Page 31: This Is a Publication of The American Association for Artificial ...rakaposhi.eas.asu.edu/kambhampati.pdfsingle rocket. Figure 3 illustrates this prob-lem. States of the world are

Systems, 125–133. Menlo Park, Calif.: AAAI Press.

Kambhampati, S.; Katukam, S.; and Qu, Y. 1996.Failure-Driven Dynamic Search Control for Partial-Order Planners: An Explanation-Based Approach.Artificial Intelligence 88(1–2): 253–315.

Kambhampati, S.; Knoblock, C.; and Yang, Q. 1995.Planning as Refinement Search: A Unified Frame-work for Evaluating Design Tradeoffs in Partial-Order Planning. Artificial Intelligence 76(1–2):167–238.

Golden, K.; Etzioni, O.; and Weld, D. 1996. XII: Planning with Universal Quantification and Incomplete Information, Technical Report, Department of Computer Science and Engineering, University of Washington.

Green, C. 1969. Application of Theorem Proving to Problem Solving. In Proceedings of the First International Joint Conference on Artificial Intelligence, 219–239. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Haas, A. 1987. The Case for Domain-Specific Frame Axioms. In The Frame Problem in Artificial Intelligence: Proceedings of the 1987 Workshop, ed. F. M. Brown. San Francisco, Calif.: Morgan Kaufmann.

Ihrig, L., and Kambhampati, S. 1996. Design and Implementation of a Replay Framework Based on a Partial-Order Planner. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 849–854. Menlo Park, Calif.: American Association for Artificial Intelligence.

Joslin, D., and Pollack, M. 1996. Is “Early Commitment” in Plan Generation Ever a Good Idea? In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 1188–1193. Menlo Park, Calif.: American Association for Artificial Intelligence.

Kambhampati, S. 1997. Challenges in Bridging Plan Synthesis Paradigms. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence. Forthcoming.

Kambhampati, S., and Hendler, J. 1992. A Validation Structure–Based Theory of Plan Modification and Reuse. Artificial Intelligence 55(2–3): 193–258.

Kambhampati, S., and Lambrecht, E. 1997. Why Does GRAPHPLAN Work? Technical Report ASU CSE TR 97-005, Department of Computer Science, Arizona State University.

Kambhampati, S., and Nau, D. 1995. The Nature and Role of Modal Truth Criteria in Planning. Artificial Intelligence 82(1–2): 129–156.

Kambhampati, S., and Srivastava, B. 1996. Unifying Classical Planning Approaches, Technical Report 96-006, Department of Computer Science and Engineering, Arizona State University.

Kambhampati, S., and Srivastava, B. 1995. Universal Classical Planner: An Algorithm for Unifying State-Space and Plan-Space Planning. In Proceedings of the Third European Workshop on Planning. Amsterdam: IOS.

Kambhampati, S., and Yang, X. 1996. On the Role of Disjunctive Representations and Constraint Propagation in Refinement Planning. In Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning, 135–147. San Francisco, Calif.: Morgan Kaufmann.

Kambhampati, S.; Ihrig, L.; and Srivastava, B. 1996. A Candidate Set–Based Analysis of Subgoal Interaction in Conjunctive Goal Planning. In Proceedings of the Third International Conference on AI Planning Systems. Menlo Park, Calif.: AAAI Press.

Kautz, H., and Selman, B. 1996. Pushing the Envelope: Planning, Propositional Logic, and Stochastic Search. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 1194–1201. Menlo Park, Calif.: American Association for Artificial Intelligence.

Kautz, H.; McAllester, D.; and Selman, B. 1996. Encoding Plans in Propositional Logic. In Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning, 374–385. San Francisco, Calif.: Morgan Kaufmann.

Kushmerick, N.; Hanks, S.; and Weld, D. 1995. An Algorithm for Probabilistic Least Commitment Planning. Artificial Intelligence 76(1–2).

McAllester, D., and Rosenblitt, D. 1991. Systematic Nonlinear Planning. In Proceedings of the Ninth National Conference on Artificial Intelligence, 634–639. Menlo Park, Calif.: American Association for Artificial Intelligence.

McDermott, D. 1996. A Heuristic Estimator for Means-Ends Analysis in Planning. In Proceedings of the Third International Conference on AI Planning Systems, 142–149. Menlo Park, Calif.: AAAI Press.

Minton, S.; Bresina, J.; and Drummond, M. 1994. Total-Order and Partial-Order Planning: A Comparative Analysis. Journal of Artificial Intelligence Research 2:227–262.

Minton, S.; Carbonell, J. G.; Knoblock, C.; Kuokka, D. R.; Etzioni, O.; and Gil, Y. 1989. Explanation-Based Learning: A Problem-Solving Perspective. Artificial Intelligence 40:363–391.

Nilsson, N. 1980. Principles of Artificial Intelligence. San Francisco, Calif.: Morgan Kaufmann.

Pearl, J. 1984. Heuristics. Reading, Mass.: Addison-Wesley.

Pednault, E. 1994. ADL and the State-Transition Model of Action. Journal of Logic and Computation 4(5): 467–512.

Pednault, E. 1988. Synthesizing Plans That Contain Actions with Context-Dependent Effects. Computational Intelligence 4(4): 356–372.

Penberthy, S., and Weld, D. 1994. Temporal Planning with Continuous Change. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1010–1015. Menlo Park, Calif.: American Association for Artificial Intelligence.

Penberthy, S., and Weld, D. 1992. UCPOP: A Sound, Complete, Partial-Order Planner for ADL. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, 103–114. San Francisco, Calif.: Morgan Kaufmann.

Sacerdoti, E. 1975. The Non-Linear Nature of Plans. In Proceedings of the Fourth International Joint Conference on Artificial Intelligence, 206–214. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Selman, B.; Levesque, H. J.; and Mitchell, D. 1992. GSAT: A New Method for Solving Hard Satisfiability Problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 440–446. Menlo Park, Calif.: American Association for Artificial Intelligence.

Srivastava, B., and Kambhampati, S. 1996. Synthesizing Customized Planners from Specifications, Technical Report 96-014, Department of Computer Science and Engineering, Arizona State University.

Tate, A. 1977. Generating Project Networks. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 888–893. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Tate, A. 1975. Interacting Goals and Their Use. In Proceedings of the Fourth International Joint Conference on Artificial Intelligence, 215–218. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Veloso, M., and Carbonell, J. 1993. Derivational Analogy in PRODIGY: Automating Case Acquisition, Storage, and Utilization. Machine Learning 10:249–278.

Weld, D. 1994. An Introduction to Least Commitment Planning. AI Magazine 15(4): 27–61.

Wellman, M. 1987. Dominance and Subsumption in Constraint-Posting Planning. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, 884–890. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Wilkins, D. 1988. Practical Planning. San Francisco, Calif.: Morgan Kaufmann.

Wilkins, D. E. 1984. Domain-Independent Planning: Representation and Plan Generation. Artificial Intelligence 22(3).

Subbarao Kambhampati is an associate professor at Arizona State University, where he directs the Yochan Research Group, investigating planning, learning, and case-based reasoning. He is a 1994 National Science Foundation Young Investigator awardee. He received his B.Tech from the Indian Institute of Technology, Madras, and his M.S. and Ph.D. from the University of Maryland at College Park. His e-mail address is [email protected].

