A Structured View of Real-Time Problem Solving

Jay K. Strosnider and C. J. Paul

AI Magazine Volume 15, Number 2 (1994). Copyright © 1994, AAAI. 0738-4602-1994 / $1.25


■ Real-time problem solving is not only reasoning about time, it is also reasoning in time. This ability is becoming increasingly critical in systems that monitor and control complex processes in semiautonomous, ill-structured, real-world environments. Many techniques, mostly ad hoc, have been developed in both the real-time community and the AI community for solving problems within time constraints. However, a coherent, holistic picture does not exist. This article is an attempt to step back from the details and examine the entire issue of real-time problem solving from first principles. We examine the degrees of freedom available in structuring the problem space and the search process to reduce problem-solving variations and produce satisficing solutions within the time available. This structured approach aids in understanding and sorting out the relevance and utility of different real-time problem-solving techniques.

As computers become more ubiquitous, they are increasingly being used in applications that sense the environment and directly influence it through action. Such applications are subject to the real-time constraints of the environments in which they operate. These systems, which have deadlines on their processing requirements, are referred to as real-time systems. Their functional correctness depends not only on the logical correctness of the results but also on their timeliness. These systems include manufacturing, control, transportation, aerospace, robotics, and military systems.

Historically, these systems were relatively simple, operating in well-characterized environments. However, the emerging generation of increasingly sophisticated real-time systems will be required to deal with complex, incompletely specified, dynamic environments in an increasingly autonomous fashion (Laffey et al. 1988). To do so requires that time-constrained problem-solving techniques be incorporated into real-time systems.

According to the problem-space hypothesis posited by Newell and Simon (1972), problem solving is defined as search in a problem space. They define a problem space as a set of states and a set of operators linking one state with the next. A problem instance is a problem space with an initial state and a set of goal states. Extending this notion, we define real-time problem solving as time-constrained search in a problem space.

In the field of cognitive psychology, a study of air traffic controllers conducted by the Rand Corporation is the first known attempt to characterize human problem-solving behavior when subject to real-time constraints (Chapman et al. 1959). In the field of AI, the roots of real-time problem solving can be traced to the development of search-based programs to play chess, with time constraints being placed on each move. Current techniques have evolved from these early studies and applications.

Real-time systems tend to be critical in nature, where the impact of failures can have serious consequences. Before the next generation of real-time problem-solving systems can be deployed in the real world, it is necessary to be able to predict the performance of the system and make statements about the system's ability to meet timing constraints. There is currently no coherent theory of real-time problem solving. The existing real-time problem-solving systems are, at best, coincidentally real time (Laffey et al. 1988).

The problem of building real-time problem-solving systems has been approached by both the real-time community and the AI community. The real-time community has taken a system approach, integrating AI tasks into existing real-time systems. The AI community has concentrated primarily on developing techniques and architectures to facilitate time-constrained problem solving. A number of techniques have been developed in both the real-time community and the AI community, each with their respective terminology. A study of the current literature leaves one confused about techniques and terminology. It is also not clear how the techniques from the two research communities can be combined in solving a given problem.

This article attempts to sort out the terminology and the techniques developed in the two communities and provides a common, integrated approach to real-time problem solving. The basic premise of this article is that a structured approach to real-time problem solving exists based on the fundamental degrees of freedom available in using domain knowledge to structure the problem space and the search process to produce best-so-far solutions in limited time. Existing real-time problem-solving techniques are shown to be compositions of these fundamental degrees of freedom.

This article is organized as follows: First, The Problem provides a background on the current state of the art in real-time problem solving and introduces some of the existing real-time problem-solving techniques. The second section discusses the relationships between deadlines, computation time, and solution quality. The sections entitled A Structured Approach and Knowledge Retrieval introduce our structured approach. Mapping Existing Techniques examines related work in real-time problem solving and the kinds of problem tackled. The final section explores the limitations of this work and proposes future research.

The Problem

The challenge with real-time problem-solving tasks is that the worst-case execution time is often unknown or is much larger than the average-case execution time. Although conventional real-time tasks have execution-time variations because of data dependencies, real-time problem-solving tasks have additional variations because of search and backtracking (Paul et al. 1991). A blind application of conventional real-time scheduling theory to real-time problem-solving tasks results in systems that either cannot be scheduled or have low average resource utilization. To tackle this problem, we examine the fundamental differences between conventional real-time tasks and problem-solving tasks and discuss how these differences affect the execution-time variations of these tasks.

First, let us consider the execution-time variations of conventional real-time tasks. Typical real-time signal-processing algorithms have little to no variation associated with their execution times: Regardless of the complexity and size of most signal-processing algorithms (for example, fast Fourier transforms, filters), generally no data dependencies can cause the execution times to vary. The input data are simply processed in a uniform, deterministic fashion. However, control-oriented, real-time tasks often have data dependencies. As the system to be controlled increases in complexity, the number of data dependencies will likely increase, resulting in increased variations in the execution time of real-time tasks.

Next, let us consider the execution-time variations of problem solving in the context of the problem-space hypothesis advocated by Newell and Simon (1972). Figure 1 illustrates a continuum of tasks that range from knowledge poor to knowledge rich. On the far left, knowledge-rich tasks are fully characterized, and an explicit algorithm exists that transforms a given set of input into an appropriate output. There is no notion of search or backtracking at this end of the spectrum. Variations in execution time are associated solely with data dependencies, as is the case for conventional real-time tasks.

As one moves to the right, either the task characteristics or their interactions with the environment are not completely known. Heuristics are now required to search the state space for an appropriate result. At the far right, there is no knowledge to direct the search, resulting in a blind search. In this case, one would expect to have a large variation in execution time. Most potential applications fall between the two extremes shown in figure 1. As one moves back to the left, increasing knowledge can be applied to reduce the variations caused by search.

Figure 1. The Search Spectrum. (A continuum from a knowledge-rich domain, where a problem-solving algorithm leads from the initial state to the final state with no search and no backtracking, to a knowledge-poor domain requiring brute-force search with backtracking; heuristic search lies between the two extremes.)

Observations: (1) Execution-time variations because of data dependency and execution-time variations as a result of search and backtracking are orthogonal. (2) The combination of execution-time variations because of data dependency and search results in real-time problem-solving tasks that tend to have much larger execution-time variations than conventional real-time tasks. (3) Execution-time variation is a true measure of the quality of domain knowledge available and the efficiency of its use. (If a particular application is knowledge rich and still has a lot of execution-time variations, then it is a poorly constructed system.)

In the discussion that follows, whenever we refer to search, we mean generalized search with backtracking (see sidebar). Other forms of search are referred to as iterative algorithms. Generalized search occurs in a problem space. Although some domains have natural problem spaces (chess, for example), most real-world problems are made perversely difficult because one has to discover how to treat them as search problems in the first place. Then the initial representation (initial problem space, the operators, the start state, and the goal state) is derived from the problem specification.

Throughout the remainder of this article, we use an application example to illustrate the abstract ideas we present. The application example we consider is a car-tracking system. In large cities such as New York, antitheft devices now include beacons installed in cars, which allow the police to track down a stolen car. The tracker has a display that indicates the approximate direction and range to the target stolen car. For any given combination of current tracker location and current target location, there is no procedural way to determine the path from the tracker to the target. The solution requires search. Further, the stolen vehicle is being driven by an intelligent person in an unpredictable fashion. This dynamic aspect of the example problem is common to most interesting real-time problem-solving domains. For this application, the search space is the set of all road intersections, represented as a graph with edges corresponding to road segments. The operators correspond to the choices encountered at intersections in daily life: turn right, turn left, go straight ahead, turn around. The direction and range information can be used as a heuristic function to guide the search process.
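To make the example concrete, the following minimal sketch (ours, not the authors') shows one way the tracker's problem space could be encoded: intersections as nodes of a graph, the turn operators as labeled edges, and the beacon's direction-and-range display reduced to a straight-line-distance heuristic. The toy map, coordinates, and function names are illustrative assumptions.

```python
import math

# Toy road network: intersection -> {operator: next intersection}.
# The map and coordinates below are illustrative assumptions.
ROADS = {
    "A": {"straight": "B", "right": "C"},
    "B": {"left": "D"},
    "C": {"straight": "D"},
    "D": {},
}
COORDS = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}

def successors(state):
    """Operators applicable at an intersection (turn right, turn left,
    go straight ahead, turn around), as recorded in the map."""
    return ROADS[state].items()

def heuristic(state, target):
    """Stand-in for the tracker display: approximate range to the target."""
    return math.dist(COORDS[state], COORDS[target])

print(list(successors("A")), round(heuristic("A", "D"), 2))
```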

Deadlines, Computation Time, and Solution Quality

In this section, we explore the origin of time constraints in real-time problem solving and the way that these time constraints translate into deadlines. We highlight the effect of execution-time variations of real-time problem-solving tasks on the determination of system timing correctness using real-time scheduling theory. We then discuss how the required computation times, deadlines, and available computation time determine the solution quality that can be guaranteed.

Deadlines

Time constraints in real-time problem solving can manifest themselves in a variety of ways (Dodhiawala et al. 1989; Laffey 1988; Stankovic 1988). They can be either static or dynamic. In conventional real-time systems, deadlines are known time quantities and are generally directly related to environmental constraints (Wensley et al. 1978). In our car-tracker example, tasks associated with engine control, automatic braking, and so on, are conventional real-time tasks with hard deadlines directly tied to the physics and dynamics of the underlying environment. The execution properties of these tasks are well behaved, and the deadlines tend to be constant and known.

In other cases, the deadlines of tasks can be dynamic. In our example application, the deadline of the real-time problem-solving tracking task is derived from the requirement that the tracker proceed at normal speed limits at all times. Thus, as the tracker moves between states (intersections) in the problem space, he/she must determine which operator (right, left, and so on) to execute before he/she gets to the next intersection. The deadlines of these real-time problem-solving path-planning tasks are dynamically variable, depending on the vehicle speed and the distance to the next intersection. Note that the dynamic nature of the stolen car precludes the tracker from path planning far into the future (multiple intersections ahead).
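As a rough illustration (our sketch, not the authors' implementation), the dynamic deadline on operator selection can be derived from the current speed and the distance to the next intersection; the function and parameter names are hypothetical.

```python
def operator_deadline(distance_to_next_intersection_m, speed_m_per_s):
    """Time available to select the next operator: the tracker must decide
    before reaching the upcoming intersection at the current speed."""
    if speed_m_per_s <= 0:
        return float("inf")  # stopped: the intersection imposes no deadline
    return distance_to_next_intersection_m / speed_m_per_s

# Example: 200 m to the next intersection at 15 m/s leaves about 13.3 s.
print(round(operator_deadline(200.0, 15.0), 1))
```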

Another source of deadlines in the example application would be at the system level, such as levying a requirement that the tracker must capture the stolen car within 20 minutes. Although such deadlines are easily levied, it is difficult or impossible to guarantee their being satisfied in highly dynamic environments.

Computation Time

If a real-time problem-solving task were the only task on the computer, the computation time available to the task would be the entire computation time from the current time to the deadline. More often, a real-time problem-solving task must share the computing resource not only with other real-time problem-solving tasks but also with conventional real-time tasks. In the application example, the real-time problem-solving task associated with finding a path to the stolen vehicle might have to share the central processing unit with vehicle control functions that have tight real-time constraints. Further, if we extend the tracker problem to support tracking of multiple stolen cars simultaneously, then multiple real-time problem-solving tasks and multiple conventional real-time tasks would have to coexist on the same resource.

There are two aspects to the problem: (1) determining what computation time is required to satisfy the functions of each task and (2) determining whether the set of tasks can be scheduled such that each task's computation-time requirements can be satisfied before its deadline.

Determining the computation time required to support conventional real-time tasks is generally straightforward. Determining the computation time required to support real-time problem-solving tasks highlights the fundamental problem of scheduling them, that is, their potentially large execution-time variations. For well-structured and well-understood applications rich in domain knowledge, accurate estimates can be obtained. As the application becomes less well characterized, the execution-time estimates become increasingly pessimistic. For highly dynamic and unstructured applications, reasonable estimates might not even be possible.

Given the worst-case execution times of a set of tasks, real-time scheduling theory can then be applied to determine whether the deadlines of the tasks will be met. One such approach is the earliest deadline scheduling approach (Liu and Layland 1973), which orders task scheduling by deadline. According to the Liu and Layland equation, a real-time task set scheduled using the earliest deadline scheduling algorithm is guaranteed to meet all deadlines if

C1/T1 + … + Cn/Tn + Cai1/Tai1 + … + Caim/Taim ≤ 1,

where the Ci's represent the worst-case computation times of a set of n conventional real-time tasks, and the Caij's represent the worst-case computation times of a set of m periodic real-time problem-solving tasks. The Ti's represent the periods of the tasks (deadlines are assumed to be the same as the periods). The larger the variations between the average-case execution time and the worst-case execution time, the more pessimistic the value of Caij is and the lower the average-case resource utilization (Paul et al. 1991).
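A minimal sketch of the schedulability test this bound implies, assuming (as the text does) that deadlines equal periods and that worst-case computation times are available for both kinds of tasks; the function and parameter names are ours.

```python
def deadlines_guaranteed(conventional_tasks, problem_solving_tasks):
    """Each task is a (worst_case_computation_time, period) pair.
    Returns True if total utilization is at most 1, the Liu-and-Layland
    condition under earliest deadline scheduling with deadline = period."""
    utilization = sum(c / t for c, t in conventional_tasks)
    utilization += sum(c / t for c, t in problem_solving_tasks)
    return utilization <= 1.0

# A pessimistic worst case for the problem-solving task can, by itself,
# make an otherwise lightly loaded task set unschedulable.
print(deadlines_guaranteed([(2, 10), (5, 50)], [(30, 100)]))  # True  (0.6)
print(deadlines_guaranteed([(2, 10), (5, 50)], [(90, 100)]))  # False (1.2)
```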

An introduction to real-time scheduling theory can be found in Sha and Goodman (1990) and Sprunt (1989). For a summary of fixed-priority scheduling in real-time systems, see VanTilburg and Koob (1991). For recent work on dynamic priority scheduling, see Schwan and Zou (1992) and Jeffay, Stanat, and Martel (1991).

Solution Quality

Let us now compare the computation time available with the computational requirements of the task. If the computation time available is greater than the worst-case execution time of the task, the task can be guaranteed to generate the optimal solution before its deadline for all valid input in its design space. Because real-time problem-solving tasks have large execution-time variations as a result of search, it is often not possible to guarantee that the time available for computation would be greater than the worst-case execution time of the task (Paul et al. 1991). If the computation time available is less than the worst-case execution time, we could either declare the task set unable to be scheduled or attempt to generate acceptable solutions of lower quality (satisficing solutions).

Satisficing solution: A satisficing solution is one whose solution quality is greater than an acceptable threshold. The notion of a satisficing solution places no restriction on the manner in which the solution is generated or on the evolution of the solution quality as a function of time. A satisficing solution necessarily implies a compromise in solution quality.

The task can be guaranteed to generate satisficing solutions if the worst-case time required to generate satisficing solutions for all valid input in the design space is less than the computation time available. If there is insufficient computation time to guarantee satisficing solutions for all valid input or if the deadline itself is unknown, the only option is to structure the search space and the search process to generate best-so-far solutions.

Best-so-far: A best-so-far solution defines a search evolution that generates intermediate results whose expected solution quality improves monotonically. We note that the property of best-so-far evolution is independent of generating satisficing solutions because satisficing solutions can be generated by processes that do not produce any useful intermediate results.
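The distinction can be sketched as an interruptible loop that always retains the best answer seen so far; the candidate generator, quality function, and time budget below are hypothetical stand-ins, not part of the article.

```python
import time

def best_so_far(candidates, quality, budget_s):
    """Keep the best solution found before the time budget expires.
    Because the retained answer never gets worse, the quality of the
    available result improves monotonically as more time is granted."""
    deadline = time.monotonic() + budget_s
    best, best_quality = None, float("-inf")
    for candidate in candidates:
        q = quality(candidate)
        if q > best_quality:
            best, best_quality = candidate, q
        if time.monotonic() >= deadline:
            break
    return best, best_quality

# Toy usage: candidate "solutions" are integers, larger is better.
print(best_so_far(iter(range(1_000_000)), lambda x: x, budget_s=0.01))
```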

The next section examines the degrees of freedom and shows how existing real-time problem-solving techniques are compositions of the fundamental degrees of freedom.

A Structured Approach

When attacking any complex real-time system problem, conventional or AI, it is imperative to structure the solution strategy in such a way as to achieve the required response-time goals. To deal with the complexity of such systems, the overall problem is decomposed into a set of subproblems. We assume that this process of decomposition is applied recursively until we end up with a collection of basic subproblems, some of which are procedural, and some of which are search based. The basic subproblems indicated here are the smallest computational entities that can be treated independently (that is, can produce a result and be scheduled as an independent computational entity).

Any representation of a problem forms an abstraction of the actual problem. Moving from one level of abstraction to another involves a change in the number of characteristics considered. Higher levels of the abstraction space look at progressively fewer but relatively important characteristics. Techniques such as ordered monotonic refinement (Knoblock 1990) are useful in developing abstraction hierarchies. Abstraction hierarchies at the problem level guide the partitioning of a problem into subproblems. The following discussion assumes a specific definition of subproblem that we use in this article.

Subproblem: Each search-based subproblem operates at a single level of abstraction and has a search space with an initial state, a goal state, and some number of intermediate states connected by operators.

The single-level-of-abstraction constraint enables us to make unambiguous assertions about the timing properties of the various techniques addressed later in this article. It also allows us to regard the transition from one level of abstraction to another as moving from one subproblem to the next. Real-time search issues are dealt with at this level. Because search involves selecting and applying the operators until the solution is found, the deadlines at this level fall into two categories: (1) a deadline to solve the search-based subproblem and (2) intermediate deadlines to select and apply operators during the search process.

We define a problem and its relation to subproblems as follows:

Problem: A problem is a set of subproblems connected by a set of intersubproblem operators that trigger subproblem transitions between one subproblem and the next. These transitions occur when the goal state of a subproblem is reached.

We use the problem level to discuss issues relating to multiple subproblems, transitions between subproblems, the scheduling and executing of different subproblems in sequence or in parallel, and the mixing of procedural and search-based subproblems. The timing constraints at this level are to solve the end-to-end problem.

In large application domains, this definition of problems and subproblems can be generalized to arbitrary levels of nesting. In dynamic environments, it might be necessary to determine the decomposition of the problem into subproblems at run time. However, for a particular problem instance, the assumption of a subproblem being at a single level of abstraction is not a restrictive assumption.

We now develop a framework for real-time problem solving by identifying the degrees of freedom available in structuring the problem space and the search process to provide solutions of increasing quality as a function of time. We then evaluate the effect of this approach on the execution-time variations of the problem-solving tasks and illustrate this effect using qualitative examples. This work was motivated by the need for an integrated approach to real-time problem solving in the development of real-world applications (Paul et al. 1992) and extends our initial work in reducing problem-solving variations to improve the timing predictability of real-time AI systems (Paul et al. 1991). Additional details of the structured approach can be found in Paul (1993).

Our structured approach manages the large execution-time variations of real-time problem solving in two ways. First, we try to reduce the execution-time variations by using domain knowledge to reduce the maximum number of states searched in the worst case. After having reduced the number of states searched in the worst case, if the residual variations are still large or if the deadline itself is unknown, the only option left is to structure the search space and the search process to produce solutions of improving quality as a function of time (the best-so-far property) using the fundamental degrees of freedom.

We postulate that the best one can do in managing problem-solving variations is to use domain knowledge for pruning, ordering, approximating, and scoping. Generic techniques that fall under this framework are as follows: static pruning and dynamic pruning (pruning); best-first heuristic search (ordering); value approximation, function approximation, and aggregation (approximating); and scoping in time and scoping in space (scoping).

We find that existing real-time problem-solving strategies fall within this framework and exploit some of these degrees of freedom. Each of these degrees of freedom manifests itself at both the problem level and the subproblem level and has different effects on the worst-case execution time and the ability to generate best-so-far solutions.

Pruning is a fundamental method for reducing the number of states searched. Pruning manifests itself in a number of forms, both during run time and at design time. Partitioning, problem decomposition, and abstraction are forms of pruning that manifest themselves when setting up and structuring the search space. If a satisficing solution is acceptable, then approximation can be used to reduce the maximum number of states searched in the worst case. Scoping determines the fraction of the search space that will be searched and is useful in generating solutions of improving quality with time. Ordering determines the sequence in which the states are searched given a search space. We elaborate on these distinctions as we describe each of these techniques in the following subsections.

Pruning

Definition: Pruning is the technique of using domain knowledge to eliminate portions of the search space that are known not to contain the goal state.

Pruning comes in two forms. First is static pruning (pruning at design time). Static pruning requires a priori knowledge that certain portions of the search space need not be searched. At the problem level, static pruning manifests itself in the form of partitioning, problem decomposition, and abstraction. Static pruning is used to set up the search space for the search process.

Partitioning is based on the notion that solving a collection of smaller problems is usually better than solving the whole problem in a single step. This divide-and-conquer technique assumes the availability of domain knowledge to break up the original problem into smaller parts and combine the solutions of the smaller parts into a coherent solution of the entire problem.

At the problem level, partitioning can be used to divide the problem into subproblems. Run-time indexing is used to determine a priori which parts of the data or knowledge will be applicable in different situations. For example, the tracking problem illustrated in figure 2 can be partitioned into a set of smaller subproblems: finding the path from the current location of the tracker to the highway, finding a highway connecting the neighborhood of the tracker to the neighborhood of the target, and finding a path from the highway to the actual location of the target. This problem partitioning is illustrated in figure 3. Within the subproblem of finding a path from the current location of the tracker to the highway, we need only consider those intersections and street segments in the neighborhood of the tracker. Information about intersections, street segments, and traffic flow from other neighborhoods is not relevant and can be pruned (partitioned) away. The worst-case execution time for this subproblem would then be the time to search through all the intersections and street segments of the particular neighborhood.

Figure 2. Sample Detailed Map for the Tracker Application. (A street-level map showing the neighborhoods Northside, Hillside, Southside, Middletown, Boonyshire, Riverside, Shadyside, Plainfield, Greenfield Park, and Foot Hills, with the tracker and the target marked.)

Figure 3. High-Level Problem Decomposition: Generic Case and Application Example. (The problem is shown as a chain of subproblems, each with its own initial state and final state; in the application example, the chain runs from the tracker in Riverside to the target in Northside.)

The second form of pruning is dynamic pruning (pruning at run time). This type of pruning happens during the search process and is based on information that is available only during run time to prune the size of the search space.

Within a subproblem, dynamic pruning is the use of run-time knowledge to eliminate operators applicable at a given state. In the tracking example, using current traffic information to rule out some of the operator choices would be an example of dynamic pruning. At the problem level, dynamic pruning manifests itself in the deletion of specific subproblems (or their partitions) from consideration for the solution of a particular problem instance.
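A small sketch of dynamic pruning in the tracker example (our illustration; the blocked-segment set stands in for run-time traffic information and is a hypothetical data structure):

```python
def prune_operators(state, operators, blocked_segments):
    """Dynamic pruning: drop operators whose road segment is known from
    run-time information to be impassable and so cannot lead to the goal."""
    return {op: nxt for op, nxt in operators.items()
            if (state, nxt) not in blocked_segments}

operators_at_a = {"straight": "B", "right": "C", "left": "E"}
blocked = {("A", "E")}  # e.g., a segment closed according to traffic reports
print(prune_operators("A", operators_at_a, blocked))  # {'straight': 'B', 'right': 'C'}
```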

We bind pruning to the availability of perfect domain knowledge. Hence, pruning using imperfect information or heuristics would not constitute valid pruning in our approach. Subject to these assumptions, pruning (in one of its many forms) is the only way to reduce the worst-case execution time without compromising the goal state or the solution quality.

In the case of static pruning, a priori analysis of the worst-case execution time is possible because the maximum number of states can be estimated at design time. In dynamic pruning, the reduction in the number of states searched is instance dependent. Hence, a priori analysis of the execution-time reduction as a result of dynamic pruning is difficult.

Ordering

Definition: Ordering is the technique of looking earlier at the entities (states or subproblems) that are more likely to lie along the solution path.

This type of ordering corresponds to the classical best-first search technique. The A* algorithm (Korf 1987) and most of its derivatives fall into this category, where some form of heuristic function is used to order the evaluation of states. For example, in the tracking application, ordering of operators and states can be done within a subproblem based on the compass heading and the range to the target. At the problem level, the ordering is deterministic: finding a path from the tracker to the main highway; finding a path along the main highways to the neighborhood where the target is; and, finally, finding a path within the target neighborhood to the target. In the more general case, ordering at the problem level is a form of prioritization of evaluation, with higher priority given to those subproblems that are expected to have the highest impact on the overall solution.

The better the search heuristic, the lower the average-case execution time. We bind ordering to the use of imperfect knowledge. Ordering only changes the probability that a particular state will be visited, evaluated, and expanded. Thus, the worst-case execution time is not affected.
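The following sketch illustrates ordering as best-first expansion on a toy map, using a stored range-to-target estimate as the heuristic; the graph, the range table, and the function name are assumptions made for the example.

```python
import heapq

# Toy map and range-to-target estimates (the tracker's display reading).
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
RANGE_TO_TARGET = {"A": 3.0, "B": 2.0, "C": 2.5, "D": 0.0}

def best_first(start, goal):
    """Best-first ordering: states the heuristic rates as closer to the
    target are evaluated earlier. Ordering changes which states are
    visited first (average case), not the worst-case number of states."""
    frontier = [(RANGE_TO_TARGET[start], start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in GRAPH[state]:
            if nxt not in visited:
                heapq.heappush(frontier, (RANGE_TO_TARGET[nxt], nxt, path + [nxt]))
    return None

print(best_first("A", "D"))  # ['A', 'B', 'D']
```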

Approximation

Definition: Approximation is the technique of reducing the accuracy of a characteristic under consideration.

Approximations at the subproblem level are usually value approximations, but approximations at the problem level tend to be method or function approximations.

Let us assume that the solution to a generic problem can be expressed as

Solution = φ(a1, a2, a3, …, an),

where A = {a1, a2, …, an} represents the set of characteristics that influence the final solution, and φ represents the functional relationship between the characteristics.

Let A1 represent the set of admissible values for a discrete variable a1, with cardinality η(A1). In the tracking example, the values of valid speed limits are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, and 65. Thus, the set of speed limits has a cardinality of 13. Approximation is a technique for reducing the cardinality of the set of admissible values for a variable. (For continuous variables, approximation can be seen to reduce the number of distinct regions in which a continuous variable might lie.) In this case, a sample approximation would constitute a mapping from this set of speed limits to the values low speed, medium speed, and high speed. The new approximate set now has a cardinality of only 3. We use the following representation: let A(a)1 represent the set of admissible approximate values for the variable a1 such that η(A(a)1) < η(A1). An approximation function is defined from the original set A1 to the approximate set A(a)1 such that all elements of the original set are mapped to elements in the approximate set.
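A sketch of this value approximation, mapping the 13 admissible speed limits onto a 3-element approximate set; the particular cut-off points are our own illustrative choice.

```python
SPEED_LIMITS = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65]  # cardinality 13

def approximate_speed(limit):
    """Approximation function from the original set of speed limits to the
    approximate set {low, medium, high}; the thresholds are assumed."""
    if limit <= 25:
        return "low"
    if limit <= 45:
        return "medium"
    return "high"

approximate_set = {approximate_speed(v) for v in SPEED_LIMITS}
print(len(SPEED_LIMITS), len(approximate_set))  # 13 3
```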

The approximation function reduces the total number of states η(Papp) that need to be searched in the approximate space Papp. Hence, the worst-case execution time of the task is less than the worst-case execution time in the base space:

C(Papp) < C(Pbase).

With approximations, we note that approximation changes the problem definition; the original problem space, the initial state, the goal state, and the operators are all potentially affected. The implication is that one is willing to accept a less precise goal state.

Approximation methods use different approximate models of the underlying domain, each with a different level of accuracy (figure 4). As the approximation increases, search-space size decreases. In the approximate processing technique proposed by Lesser, Pavlin, and Durfee (1988), the system estimates the amount of time available and uses a decision process to determine the best level of approximation for the time available. This approach works well as long as the system does not underestimate the amount of time available.

Figure 4. Approximate Models. (A decision process selects among approximate models at increasing levels of approximation; each approximate space has its own initial state and final state, and higher levels of approximation yield smaller spaces.)

For example, in path planning at the level of roads, using a map of highways only is an approximation. This set of roads has a lesser cardinality than the set of all roads on the detailed map. Note that approximation is reducing the cardinality of the given characteristic (roads).

We also note from the application example that if we use an approximate map for doing the overall planning, then we end up redefining the problem (the problem space, the initial state, and the goal state) to less accurate states. With the approximate map, the best one can do is to get to the vicinity of the target, as shown in figure 5. If the approximate solution is good enough, then approximate processing is acceptable. If the resulting solution is unacceptable, and the intent is to get to the actual location of the target, then approximate processing fails as an end-to-end approach. We need to work with the detailed map in the initial subproblem and the final subproblem to avoid compromising the goal state.

Figure 5. An Approximate Map. (The neighborhoods of figure 2 shown at a coarser level of detail, with the tracker and the target marked.)

The effects of approximation on execution time can be summarized as follows: (1) the worst-case execution time decreases because the total number of states decreases; (2) the average-case execution time is also expected to decrease; (3) because approximation can involve aggregation, the number of data elements processed is potentially lower; (4) approximation of the evaluation function (by linearization, for example) can reduce its complexity (and the associated state-evaluation time); (5) some approximations allow incremental refinement with additional time, but others require that the problem be solved again with a different level of approximation; (6) nonrefinable approximations require us to accurately estimate the amount of time required to generate the solution and then pick the level of approximation that maximizes the expected accuracy for the given allocation of computation time; and (7) the decision process itself needs to be bounded to allow the problem-solving segment enough time to run.

Scoping

Definition: Scoping is the technique of controlling (in time and space) the maximum lookahead when choosing the next operator.

Scoping is applicable within a subproblem level in choosing the next move (operator). The initial state and the goal state are not affected; what changes is the manner in which the operators are selected and applied. Figure 6 illustrates scoping in the tracking example. It is possible to trade the increase in scope in range (time) for the increase in scope in azimuth (space). The narrow scope illustrated in the figure allows us to consider less information with each move, but we can look ahead more moves (within a fixed computation-time window). The wide scope illustrated in the figure is an example of increased scoping in azimuth (space). Because we consider more information with each move, we can only look a few moves ahead within the time available. Both scopes bring additional knowledge and information to bear on the next move, and each approach can be the preferred approach based on circumstances.

Figure 6. Scoping in the Application Example. (Intersections near the tracker in Riverside, with a narrow scope covering a small azimuth many moves ahead toward the target and a wide scope covering a large azimuth only a few moves ahead.)

If it were possible to plan the entire path from the start state to the goal state before starting execution, then scoping would not be necessary. Such an approach is possible in purely static environments. In our application example, this situation would correspond to the target vehicle being parked. The tracker could plan his/her entire route before he/she started.

Consider now the situation where the tracker cannot plan the entire route before starting and must plan the route while moving. For this case, the degree of scoping in time (intersections ahead) and space (azimuth of roads under consideration) should simply be maximized. The trade-off between time and space would be a function of the regularity of the environment. In our example, if the streets were perfectly rectilinear, then one would evaluate a relatively small azimuth and look as far forward in time as possible. Conversely, if the streets were highly irregular—corresponding to many dead-end streets, one-way streets, and other obstacles—then one would need to examine a wider azimuth to find the best route. To avoid backtracking, the level of scope should be such that it includes the irregularities that can affect the solution.

Dynamic problem solving (solving as you go) in static environments (parked target vehicle) allows one to trade storage for decreased execution time. In cases where one has to backtrack, the information about intersections evaluated previously but not taken can be stored. However, in dynamic environments corresponding to the target vehicle moving in an unpredictable fashion, the value of stored information about previously expanded nodes degrades with time. The appropriate level of scoping is now a function of both the regularity and the dynamics of the environment. In our car-tracker example, it is not useful to plan a route too far ahead (in time) because the target will have changed position before you get there. In highly dynamic environments, one is forced to completely replan after each move. In the dynamic tracker example, this replanning corresponds to a new evaluation of the intersections within a given scope at every intersection, even though they might have been in your previous scoping window. The target has moved, and the relative value of following the path might have changed.

Some problem spaces can be structured naturally for increasing scope in time and space. Path-planning problems and scheduling problems fall in this category. For problem spaces that satisfy this property, the behavior of the actual solution quality is a function of whether an exact or a heuristic evaluation function (or error function) is available. If only a heuristic evaluation function is available, then one can at best guarantee that the expected solution quality, not the actual solution quality, will improve with time.

With scoping, we note the following: (1) The initial state and the goal state are not affected; what changes is the manner in which the goal state is approached. (2) In some cases, the problem space might have to be coerced to exhibit this property, which can potentially increase the worst-case or the average-case execution time of the task. The benefit of partial solutions along the way might or might not mitigate the increased response times. (3) This technique has the most impact on the selection of the next operator (move). In most real-time applications, it would be difficult to plan a static path from the current location of the tracker to the target. Because of unpredictable movement of the target and changing data, the path will have to be replanned after every move. In such cases, scoping has a large effect on limiting the time to select an operator for the next move. (4) Increasing the scope brings additional knowledge and information to bear on the choice of the next move, thereby increasing the probability that the right move is made. Increasing the probability of a correct move reduces the probability of backtracking and can potentially reduce the average-case execution time.

We also note that scoping is fundamentally different from approximating. In scoping, a given map is processed, but increasing information is brought to bear on the decision. In approximating, the map itself is modified. Hence, the resulting solution using approximating is necessarily less accurate, even if sufficient time was available to complete the search process.

Scoping is also different from pruning and ordering. Scoping determines the maximum fraction of the search space that will be searched. Thus, pruning and ordering are independently applied on the states that fall within a given scope.
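The next sketch shows one way scoping might be realized as a bounded lookahead whose depth (range, in moves) and breadth (azimuth window around the bearing to the target) are both capped; the map encoding, the azimuth test, and the scoring rule are all assumptions for illustration, not the authors' code.

```python
import math

# Toy map: state -> list of (next_state, coordinates); all values assumed.
GRAPH = {
    "A": [("B", (1, 0)), ("C", (0, 1))],
    "B": [("D", (2, 0))],
    "C": [("D", (2, 0))],
    "D": [],
}
TARGET_XY = (2, 0)

def within_azimuth(from_xy, to_xy, half_angle_deg):
    """Scoping in space: keep only moves whose heading lies within
    +/- half_angle_deg of the bearing toward the target."""
    bearing = math.atan2(TARGET_XY[1] - from_xy[1], TARGET_XY[0] - from_xy[0])
    heading = math.atan2(to_xy[1] - from_xy[1], to_xy[0] - from_xy[0])
    diff = abs((heading - bearing + math.pi) % (2 * math.pi) - math.pi)
    return diff <= math.radians(half_angle_deg)

def choose_next_move(state, xy, depth, half_angle_deg):
    """Scoped lookahead: expand at most `depth` moves ahead (scoping in
    time), only along headings inside the azimuth window, and commit to
    the first move on the path that gets closest to the target."""
    best_dist, best_move = math.dist(xy, TARGET_XY), None

    def walk(s, s_xy, first_move, d):
        nonlocal best_dist, best_move
        dist = math.dist(s_xy, TARGET_XY)
        if first_move is not None and dist < best_dist:
            best_dist, best_move = dist, first_move
        if d == 0:
            return
        for nxt, nxt_xy in GRAPH[s]:
            if within_azimuth(s_xy, nxt_xy, half_angle_deg):
                walk(nxt, nxt_xy, first_move or nxt, d - 1)

    walk(state, xy, None, depth)
    return best_move

print(choose_next_move("A", (0, 0), depth=2, half_angle_deg=60))  # 'B'
```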

Knowledge Retrieval

In the previous section, we considered the use of domain knowledge to reduce search variations at the problem-level search space, or problem space. At each state in the problem space, knowledge was used to evaluate the operators in the current state and determine the next-best state. The process of extracting knowledge from the knowledge base and making it available to the search process at the problem-space level is called knowledge retrieval. It occurs at each state of the problem space, as shown in figure 7.

Figure 7. Search Spaces in Problem Solving. (A knowledge retrieval search space is entered at each state of the problem-space-level search.)

In analyzing the execution-time variations that are the result of search and backtracking at the problem-space level, we made the implicit assumption that the knowledge retrieval process at each problem-space state was bounded and took unit time. Unfortunately, in real-world systems, the knowledge retrieval time varies from one problem-space state to another and is difficult to characterize. Typical bounds for knowledge retrieval time tend to be exponential bounds, which are of no practical value in predicting the actual execution time (Forgy 1982).

We now consider the execution-time characteristics of the knowledge retrieval process, which occurs at each state of the problem space. To understand this issue and evaluate the options in reducing search variations at this level, let us first consider the nature of knowledge retrieval processing. We restrict our attention to immediate knowledge retrieval (that is, retrieving knowledge from the knowledge base or rule base only). All references to knowledge retrieval within the context of this article are limited to immediate knowledge retrieval only.

The primary function of the immediate knowledge retrieval process is to find all relevant domain knowledge that is applicable for evaluating a given state at the problem-space level. This process involves comparing the current state of the world (consisting of relevant observed data patterns) with the knowledge stored in the knowledge base (consisting of relevant and valid data patterns). This comparison requires pattern matching, which can be viewed as a search process. If domain knowledge had to be made available at run time to this (knowledge retrieval) search process, the process would require its own nested knowledge retrieval search, and so on, to infinity. To avoid an architecture that allows infinite nesting of knowledge retrieval search processes, we find that any domain knowledge that can be used to restrict knowledge retrieval search is best applied at design time. Thus, the nature of knowledge retrieval search and the availability of domain knowledge to control the knowledge retrieval search are different from the availability of domain knowledge to control the search at the problem-space level.

Observation: The problem-level search space allows the use of both dynamic and static domain knowledge in reducing problem-solving variations, whereas the immediate knowledge retrieval process in nonlearning systems allows the use of only static domain knowledge. Learning systems have the potential to acquire knowledge during run time and apply it to reduce execution-time variations of the knowledge retrieval process, but this research challenge remains open.

In the previous section, we hypothesized that the only way to manage search variations is to use domain knowledge for pruning, ordering, approximating, and scoping. Because using run-time knowledge is difficult, a viable approach to using domain knowledge is to partition the data and the knowledge base so that relevant data can be matched with the relevant partitions. The implications for managing the execution-time variations at the knowledge retrieval level are examined in the following subsections.

Recent research in real-time production systems evaluates statistical approaches to predict the run time of the knowledge retrieval process (Barachini, Mistelberger, and Gupta 1992).

Knowledge Retrieval Pruning

The amount of processing required in the knowledge retrieval phase depends on the amount of knowledge in the system, the size of the state (data), and the amount of data to which each piece of knowledge is potentially applicable. In many systems, the process of knowledge retrieval is essentially a pattern-matching process: finding valid patterns in data elements as a function of the patterns stored in the knowledge base (Acharya and Kalb 1989; Nayak, Gupta, and Rosenbloom 1988; Scales 1986; Forgy 1984). Thus, the pattern-match time is a function of the number of data elements, the set of possible relations, and the number of elements in the knowledge base. To reduce the total work at this level, there are only three degrees of freedom: (1) reduce the relations in the knowledge base (using knowledge partitioning), (2) reduce the data elements (using data partitioning), and (3) reduce the relations that are possible in the first place by restricting the language (Tambe and Rosenbloom 1989) (restricting expressiveness).

Doing the partitioning ensures that only the relevant data are matched with the relevant knowledge base. One of the primary causes of large execution-time variations in knowledge retrieval, as well as one of the biggest sources of inefficiency, is unnecessary match processing that is later discarded (Tambe and Rosenbloom 1989). The unnecessary match processing occurs because the knowledge retrieval inference engine retrieves all possible instantiations for a particular rule and then selects one by conflict resolution. The rest are not used in the current inference cycle. The search to find the instantiations is essentially a blind search and has limited aid from domain knowledge. Restricting the expressiveness of the language or moving the search from the knowledge retrieval space to the problem-space level, in addition to partitioning, would limit unwanted search at the knowledge retrieval level.

As a guiding principle, by applying a relevance filter to both the data and the knowledge base, we hope to limit the match processing to only useful processing, resulting in the following options:

Partitioning the knowledge space: By partitioning the knowledge space, we are able to avoid searching sections of the knowledge space that contain knowledge that is a priori known to not be applicable to particular pieces of data. In the tracking example, the knowledge base contains knowledge about intersections, road segments, and their connectivity. When processing a subproblem at the level of a neighborhood, it is not necessary to consider intersections and road segments in other neighborhoods. Thus, partitioning the knowledge base by neighborhood can restrict the knowledge retrieval time. Also, although it is easy to talk about partitioning a fact base (for example, partitioning streets by neighborhoods), it is more difficult to partition higher forms of knowledge (for example, active relationships and rules of inference) that could have application across multiple partitions.

Partitioning the data: Many pieces of knowledge express relations, desired or otherwise, between multiple pieces of data. Partitioning the data allows us to avoid considering sets of data that are a priori known to not belong to the relation. In the tracking example, continuous traffic information for all road segments is flowing into the system. Processing all this information all the time can be expensive computationally and unnecessary. (Besides, the data are getting stale anyway, so why process them if they are not currently required?) Partitioning these data by neighborhood and processing the data for the current neighborhood would restrict the knowledge-processing time.

Restricting expressiveness: Highly expressive knowledge representations allow a large number of relations to be represented. When a part of the state changes, a large number of potential relations with the rest of the state have to be checked. Representation formalisms that restrict expressiveness a priori restrict the number of relations that need to be checked during every state change. In the tracking example, we could use a representation formalism that allows unstructured sets to be represented in working memory. However, if we use structured sets (for example, intersection I1 is north of intersection I3, as opposed to intersection I1 is adjacent to intersection I3), the number of potential relations that have to be checked at run time is limited, allowing polynomial bounds on knowledge retrieval time (Tambe and Rosenbloom 1989).
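A minimal sketch of these partitioning options (the fact and rule structures are hypothetical and do not correspond to any particular production-system API): both data and knowledge are indexed by neighborhood at design time, so a match cycle touches only the partitions for the tracker's current neighborhood.

```python
from collections import defaultdict

# Hypothetical facts and rules, each tagged with the neighborhood it concerns.
FACTS = [
    {"neighborhood": "Riverside", "segment": ("I1", "I2"), "traffic": "heavy"},
    {"neighborhood": "Northside", "segment": ("I7", "I8"), "traffic": "light"},
]
RULES = [
    {"neighborhood": "Riverside", "name": "avoid-heavy-traffic"},
    {"neighborhood": "Northside", "name": "avoid-heavy-traffic"},
]

def partition(items):
    """Index items by neighborhood once, at design time."""
    index = defaultdict(list)
    for item in items:
        index[item["neighborhood"]].append(item)
    return index

FACTS_BY_HOOD, RULES_BY_HOOD = partition(FACTS), partition(RULES)

def match(current_neighborhood):
    """Match only the relevant partitions; facts and rules from other
    neighborhoods are never examined, which bounds the match work."""
    return [(rule["name"], fact["segment"])
            for rule in RULES_BY_HOOD[current_neighborhood]
            for fact in FACTS_BY_HOOD[current_neighborhood]]

print(match("Riverside"))  # [('avoid-heavy-traffic', ('I1', 'I2'))]
```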

Knowledge Retrieval Ordering

Run-time ordering of states based on domain-specific heuristic evaluation functions is not possible at the knowledge retrieval level. At best, some knowledge-lean, domain-independent heuristics can be applied. Hence, the options for ordering at this level are more restricted than at the problem-level search space.

However, even though domain knowledge cannot be used to order the processing at run time, an implicit ordering in the evaluation can be built into the match process (because of the serialization of processing). A programmer writing application code can use this knowledge, along with knowledge of the application domain, to tailor the application code to reduce knowledge retrieval search at run time.

Knowledge Retrieval Approximation

Approximation is another technique that works at the problem-space level. In dealing with approximation at the knowledge retrieval level, we first distinguish between knowledge of an approximation and an approximation of knowledge. Knowledge of an approximation is knowledge to deal with data clusters and knowledge to deal with data approximations. Approximation of knowledge is the deliberate ignoring of more specific or corroboratory knowledge because of limitations in time.

Providing support at the knowledgeretrieval level to different forms of approxi-mation is equivalent to providing support fordifferent rule-set partitions because theknowledge retrieval level cannot distinguishbetween knowledge of approximations andother forms of knowledge.

However, to support approximation ofknowledge at run time, the knowledgeretrieval process must have the ability todetermine (at run time) the level of approxi-mation that is appropriate for a given situa-tion. Given the nature of the knowledgeretrieval process, approximation will requiredomain knowledge to be hard coded into theknowledge retrieval search as well as makethe knowledge retrieval process arbitrarilycomplex. Also, an approximation of knowl-edge could be counterproductive because theknowledge that is ignored could potentiallycause unnecessary states to be visited at theproblem-space level (each requiring a wholeknowledge retrieval step) as opposed to thecurrent step and the incremental knowledgeretrieval time that would be required to com-plete processing.

We feel that exploiting notions of time andapproximation at the knowledge retrieval lev-el is hard and of questionable value. Howev-


Once all the nodes within the search horizon are evaluated, a single move is made in the direction of the best next state; the entire process is repeated under the assumption that after committing the first move, additional information from an expanded search frontier can result in a different choice for the second move than was indicated by the first search. These techniques map to search within a subproblem at a single level of abstraction, using scoping in time (range) at each state to perform the lookahead. Scoping in space (azimuth) is kept at a maximum. Because these searches have to commit to a move before the entire solution is found, optimality of the overall solution-execution sequence is not guaranteed. The search techniques for optimization in limited time proposed by Chu and Wah (1991) look for the solution with the best ascertained approximation degree. The search space is reduced using an approximation function to eliminate nodes that deviate beyond a threshold from the current best solution. Their approach can be viewed as a way of implementing scoping in our framework. These search techniques are applied within a subproblem at a single level of abstraction.

The approximations to the evaluation function change the scope of the search and do not affect the nature of the search space. Hence, they do not count as approximations in our structured view. Because the approximations are used to perform pruning, the optimal solution can be pruned away. The goal state is compromised because the optimal solution is not always found, with the assumption that the nonoptimal solution that is found is acceptable. Also, by their very nature, these techniques are applicable to problems that are static as the search proceeds. Dynamic situations require restarting the search process with a new set of data.

Other techniques, such as static upper band search and dynamic band search proposed by Chu and Wah (1992), restrict the breadth and depth of the search and are modified forms of beam search. In our framework, these techniques are applicable within a subproblem at a single level of abstraction. Even though these techniques restrict the breadth and depth of the search (by limiting the nodes in the active list), the decision at each node is made only after considering the immediate successor nodes. Hence, these techniques do not implement the type of scoping we discuss in our framework (which is a lookahead to a certain breadth and depth for each move). These techniques are only applicable in domains that are static. If the domain is dynamic (for example, the target moves), the searches have to be restarted with a new set of data.
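The commit-one-move-then-replan cycle shared by these real-time search techniques can be sketched as follows. This is our minimal illustration in the spirit of bounded-lookahead real-time heuristic search, not Korf's published algorithm; the grid domain, the Manhattan heuristic, and the lookahead depth are assumptions:

```python
# Minimal sketch of bounded-lookahead real-time search on a 2-D grid:
# evaluate the frontier reachable within a fixed lookahead depth (scoping
# in "range"), commit a single move toward the best frontier value, repeat.

GOAL = (9, 9)

def h(state):
    """Manhattan-distance heuristic to the goal."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def neighbors(state):
    x, y = state
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

def lookahead_value(state, depth):
    """Backed-up value of `state` over a depth-bounded frontier."""
    if depth == 0 or state == GOAL:
        return h(state)
    return 1 + min(lookahead_value(n, depth - 1) for n in neighbors(state))

def real_time_search(start, depth=3):
    """Commit one move per decision cycle; never plan the whole path."""
    state, path = start, [start]
    while state != GOAL:
        # Choose the successor whose bounded lookahead looks best.
        state = min(neighbors(state), key=lambda n: 1 + lookahead_value(n, depth - 1))
        path.append(state)
    return path

if __name__ == "__main__":
    route = real_time_search((0, 0))
    print(len(route) - 1, "moves committed")   # 18 on an empty 10x10 grid
```

Because only one move is committed per decision cycle, the cost of each decision is bounded by the lookahead depth rather than by the size of the whole problem space, which is the quantity the per-move deadline constrains.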

Figure 8. Existing Techniques and the Degrees of Freedom They Exploit. (Table: relates the existing techniques, that is, real-time heuristic search; real-time search, including static upper band search and dynamic band search; progressive reasoning; approximate processing; imprecise computation; anytime algorithms; and deliberation scheduling, to the degrees of freedom they exploit: approximating, scoping, ordering, and pruning; to their level of applicability, problem or subproblem; to whether they address static or dynamic domains; and to the machinery they rely on, such as variants of A*, performance profiles, and heuristic or monotonic error functions.)

Progressive Reasoning
Progressive reasoning is a technique to analyze the situation to depth 1, then depth 2, then depth 3, and so on, until the deadline is reached (Winston 1984). Another variant of this approach is depth-first iterative deepening (Korf 1985), in which the search is restarted from the root node with an incremental depth bound after the current depth bound is reached. Within a given subproblem, these techniques can be mapped to scoping in time, with maximum scoping in space (that is, the azimuth is at a maximum; only the range is increased with additional time). Progressive reasoning has also been used to include incremental detail as the search progresses. In our structure, progressive reasoning maps to using multiple subproblems, each at a different level of abstraction, and structuring the search process to progressively traverse the abstraction levels. In our tracking example, moving from subproblem 2 (at the level of highways) to subproblem 3 (in the neighborhood of the tracker) would be an instance of this approach.
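A deadline-driven version of this idea can be sketched as follows (our illustration, not an implementation from the article; the grid domain, the time budget, and the depth cap are assumptions). The analysis deepens one level at a time and always keeps the move chosen by the last fully completed depth:

```python
# Illustrative sketch: progressive deepening under a deadline.
# Analyze to depth 1, then 2, then 3, ... keeping the answer of the deepest
# completed analysis, so a satisficing move is always available at the deadline.
import time

GOAL = (9, 9)

def h(s):
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def neighbors(s):
    x, y = s
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

def evaluate(state, depth):
    """Depth-limited lookahead value of `state`."""
    if depth == 0 or state == GOAL:
        return h(state)
    return 1 + min(evaluate(n, depth - 1) for n in neighbors(state))

def progressive_choice(state, deadline, max_depth=8):
    """Deepen until the deadline (or max_depth). The final iteration can
    overrun slightly; real systems bound it or abort mid-iteration."""
    best_move, depth = None, 1
    while depth <= max_depth and time.monotonic() < deadline:
        # Complete a whole analysis at this depth before committing to it.
        best_move = min(neighbors(state), key=lambda n: 1 + evaluate(n, depth - 1))
        depth += 1
    return best_move

# Usage: allow roughly 20 ms of deliberation for the tracker's next move.
print(progressive_choice((0, 0), time.monotonic() + 0.02))
```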

Approximate Processing
In its broad sense, approximate processing is a collection of techniques for dealing with inference under uncertainty, in which the underlying logic is approximate or probabilistic rather than exact or deterministic (Zadeh 1985). Approximate processing is relevant in situations in which there is not enough time to find the optimal solution. A number of techniques (Lesser, Pavlin, and Durfee 1988; Decker, Lesser, and Whitehair 1990) have been developed in this area to make trade-offs in solution quality along three axes: (1) completeness, (2) precision, and (3) certainty. These techniques compromise the accuracy with which the goal state is achieved.

This approach exploits the approximation degree of freedom described in the structured view. At the subproblem level, approximation in precision can be mapped to value approximation and to reducing the cardinality of a given characteristic. Approximation in completeness can be mapped to ignoring some of the characteristics on which the solution is dependent. Approximation in certainty can be mapped to a combination of simplifying the functional relationship between the characteristics and ignoring some of the characteristics on which the solution is based.

At the problem level, the approximate processing approach relies on making method approximations. Method approximation relies primarily on function simplification.

The mapping of approximate processing is illustrated using the tracking example. Figure 9 shows how the tracking example can be made to conform to the model of approximate processing (Garvey and Lesser 1992). We note the similarities between subproblem 2 in figure 9 and in figure 4. Approximation within a subproblem is illustrated in the different problem spaces generated at the levels of approximation of highways only; highways and major roads; and, the most detailed version, all roads. Method approximation is illustrated by the presence of the arc bypassing the solution of subproblem 2 using a default solution instead. (The default solution might have been generated in the previous pass using older data.) In the example illustrated, the only viable reason for skipping subproblem 2 entirely is when the target is in the same neighborhood as the tracker. Skipping an entire subproblem is possible in situations in which the input and the output of the skipped subproblem have the same form.
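The decision process in figure 9 amounts to picking the most detailed approximation level that still fits the remaining time budget. The following sketch shows one way such a selection could look; the level names echo the figure, but the time estimates, quality numbers, and function names are invented for illustration:

```python
# Illustrative sketch: picking an approximation level for subproblem 2
# (tracker neighborhood -> target neighborhood) from the time remaining.
# The estimated costs and relative qualities below are invented numbers.

# (level name, estimated worst-case solve time in ms, relative solution quality)
LEVELS = [
    ("all roads",              400, 1.00),
    ("highways + major roads", 120, 0.80),
    ("highways only",           30, 0.55),
]
DEFAULT_SOLUTION = ("reuse previous route", 0, 0.30)  # the bypass arc in figure 9

def choose_level(time_left_ms):
    """Return the most detailed level whose worst-case cost fits the budget,
    falling back to the default solution if nothing fits."""
    feasible = [lvl for lvl in LEVELS if lvl[1] <= time_left_ms]
    return max(feasible, key=lambda lvl: lvl[2]) if feasible else DEFAULT_SOLUTION

for budget in (500, 150, 40, 10):
    print(budget, "ms ->", choose_level(budget)[0])
```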

Imprecise Computation
Imprecise computation relies on making results that are of poorer but acceptable quality available on a timely basis when results of the desired quality cannot be produced in time. This approach, as formally defined in Chung, Liu, and Lin (1990), is applicable to those processes that are designed to be monotone (a process is called a monotone process if the accuracy of its intermediate result is nondecreasing as more time is spent to produce the result). Thus, imprecise computation exploits well-defined error functions that guarantee monotonicity.

As such, the imprecise computation approach maps well to iterative processes. Even though this technique originally had no relation to AI, of late, its interpretation is being extended to address search-based processes. Note that the error-function formalities originally associated with imprecise computation are sacrificed because monotonicity cannot be guaranteed in search-based domains.
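A minimal example of a monotone process in this sense (our illustration, not from Chung, Liu, and Lin) is an iterative square-root refinement: a mandatory part produces a feasible first value, and each optional refinement step only reduces the error, so whatever is available at the deadline is usable and its quality is nondecreasing in time:

```python
# Illustrative sketch: an imprecise (monotone) computation.
# A mandatory part produces a feasible first result; optional refinement
# steps only ever reduce the error, so the result available at the deadline
# is the best produced so far.
import time

def imprecise_sqrt(x, deadline):
    estimate = 0.5 * (1.0 + x)       # mandatory part: crude start, >= sqrt(x) for x >= 0
    while time.monotonic() < deadline:
        refined = 0.5 * (estimate + x / estimate)   # Newton step: error shrinks monotonically
        if abs(refined - estimate) < 1e-12:          # converged early
            return refined
        estimate = refined
    return estimate                   # deadline reached: return best-so-far

print(imprecise_sqrt(2.0, time.monotonic() + 0.001))   # ~1.41421356...
```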

Anytime Algorithms
Anytime algorithms are algorithms that return some answer for any allocation of computation time and are expected to return better answers when given more time (Dean and Boddy 1988). This approach requires an initial solution whose cost is known and some measure of how this cost is expected to improve with deliberation time. The primary contribution of the anytime algorithm approach is the introduction of time as an additional degree of freedom in allocating resources for computation.

Because of its rather broad definition, the anytime approach appears to be more an attempt to define an anytime property for real-time problem-solving systems than a specific technique. As such, the techniques published under the anytime umbrella march across the full spectrum and exploit various combinations of the fundamental degrees of freedom. It would be possible to map specific implementations of anytime algorithms to the structured view. The robot path-planning example described in Boddy and Dean (1989) breaks the problem into subproblems that exploit scoping in time and space. Deliberation scheduling was used to implement decision making. Major contributions of this work include the specification of deliberation scheduling as a sequential decision-making problem and the use of expectations in the form of performance profiles to guide search.


Over the last few years, a lot of work has gone into defining interesting special cases of performance profiles (for example, Zilberstein and Horvitz). One of the difficulties in implementing anytime algorithms is generating the performance profiles that are an integral part of most anytime algorithms. In search-based domains, performance profiles are difficult to generate because of the large variance in the execution time of the search process. The large variance results in a wide spread between the upper and lower bounds of the performance profile, resulting in low predictability. Conditional performance profiles help mitigate some of the problems in circumstances where the initial conditions do not allow for accurate low-variance performance predictions.
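One way to obtain such a profile is empirically, by running the anytime procedure on sampled instances and recording solution quality at each deliberation budget; the spread of the samples is exactly the predictability problem noted above. The sketch below uses an invented toy optimizer and invented budgets purely for illustration:

```python
# Illustrative sketch: building a discrete performance profile for a toy
# anytime optimizer by sampling quality at fixed deliberation budgets.
import random
import statistics

def anytime_optimize(instance, steps):
    """Toy anytime algorithm: random sampling keeps the best-so-far value."""
    best = float("-inf")
    for _ in range(steps):
        x = random.uniform(0, 1)
        best = max(best, instance(x))
    return best

def performance_profile(budgets, runs=200):
    """Expected quality (and spread) as a function of deliberation budget."""
    profile = {}
    for steps in budgets:
        scores = []
        for _ in range(runs):
            peak = random.uniform(0.2, 0.8)            # a random problem instance
            instance = lambda x, p=peak: 1 - abs(x - p)
            scores.append(anytime_optimize(instance, steps))
        profile[steps] = (statistics.mean(scores), statistics.stdev(scores))
    return profile

for steps, (mean, spread) in performance_profile([1, 4, 16, 64]).items():
    print(f"{steps:3d} steps: expected quality {mean:.3f} +/- {spread:.3f}")
```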

In nonsearch domains with well-defined error functions, more precise performance profiles are possible because of lower execution-time variations. In these domains, this approach is similar to the imprecise computation approach without the mandatory computation assumption. In real life, it is not clear to what extent this difference is a defining one. Most applications require some computation time before even a feasible default solution can be generated. In general, the default solution at time zero could be to ensure safety; for example, the robot could stop moving, or a plane could continue flying in the same direction, although in both cases, it would be difficult to ensure that the default solution indeed guarantees safety.


Figure 9. Mapping Approximate Processing to the Tracking Example. (The figure decomposes the task into subproblem 1, tracker location to neighborhood highway; subproblem 2, tracker neighborhood to target neighborhood; and subproblem 3, target neighborhood to target location. Subproblem 2 is shown at three approximation levels: highways only; highways and main roads; and all roads, with a decision process selecting among them and an arc bypassing subproblem 2 entirely by substituting a default solution.)

Search-based problems fall into two broad classes: iterative search algorithms and generalized search processes.

Iterative search algorithms
Iterative search algorithms (search without backtracking), such as the binary search of a dictionary and the Newton-Raphson method, are search techniques in domains where a perfect evaluation function (or an exact error function) is available. At each step, the evaluation function guarantees taking you closer along the path to the goal state. There is no backtracking, only iterative convergence. Hence, it is possible to guarantee the monotonicity of the actual solution quality. Execution-time variations are solely the result of data dependencies: start state and goal state. Bounds on the worst-case execution time typically grow logarithmically as a function of the size of the problem space. The imprecise computation approach (Chung, Liu, and Lin 1990) developed in the real-time community falls into this category.

Generalized search processes
Generalized search processes (search with backtracking), such as heuristic search, are search techniques in domains where only an inexact, heuristic evaluation function (or error function) is available. At each step, the heuristic evaluation function takes you closer to the goal state in an expected sense. However, it is possible to be wrong, requiring backtracking. Therefore, it is not possible to guarantee the monotonicity of the actual solution quality. At best, heuristic search functions can only improve the expected solution quality in a monotonic fashion. Execution-time variations are a function of both data dependencies and backtracking. Bounds on the worst-case execution time typically grow exponentially as a function of the size of the problem space. The deliberation scheduling technique (Boddy and Dean 1989) developed in the AI community assumes a class of underlying anytime algorithms that provide monotonically improving solution quality. Although the anytime algorithm approach can monotonically improve the expected solution quality, the actual solution quality cannot be guaranteed to improve monotonically because of backtracking.
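The two classes can be contrasted with a pair of toy searches (our examples, not from the article): a bisection search whose error bound provably halves at every step, and a depth-first search that can follow a misleading branch and must backtrack, so only its expected progress improves monotonically:

```python
# Illustrative contrast: an iterative search whose actual error is monotone
# versus a generalized search that may backtrack.

def bisect_sqrt(x, steps):
    """Iterative search: the interval containing sqrt(x) halves every step,
    so the error bound decreases monotonically and the step count grows
    logarithmically with the desired precision."""
    lo, hi = 0.0, max(1.0, x)
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < x else (lo, mid)
    return (lo + hi) / 2, hi - lo          # estimate and guaranteed error bound

def dfs_path(graph, node, goal, visited=None):
    """Generalized search: a misleading branch forces backtracking, so the
    partial solution can get worse before it gets better, and the worst case
    grows with the size of the space."""
    visited = visited or set()
    if node == goal:
        return [node]
    visited.add(node)
    for nxt in graph.get(node, []):
        if nxt not in visited:
            rest = dfs_path(graph, nxt, goal, visited)
            if rest:
                return [node] + rest
    return None                             # dead end: backtrack

print(bisect_sqrt(2.0, 20))                 # error bound ~1.9e-6 after 20 steps
print(dfs_path({"A": ["B", "C"], "B": ["D"], "C": ["goal"]}, "A", "goal"))
```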



Thus, all techniques, whatever their label, attempt to find satisficing solutions in limited time. We mapped real-time search techniques to be applicable within a single level of abstraction. Approximate processing techniques span multiple levels of abstraction. The distinction between imprecise computation and anytime algorithms continues to blur as the interpretation of imprecise computation continues to broaden. The current trend seems to distinguish the two based on the presence or absence of a mandatory computation-time requirement, but we believe that this difference is purely cosmetic. We find that existing techniques exploit different combinations of the fundamental degrees of freedom in generating satisficing solutions in limited time.

Summary and Limitations
The key challenge in real-time problem solving is constraining the execution time of naturally exponential search problems in a best-so-far fashion such that satisficing solutions can be guaranteed in time without severe underuse of system resources (Paul et al. 1991). This article examined the execution-time variations of real-time problem solving within the context of the Newell-Simon problem-space hypothesis. We conjectured that there are at least four degrees of freedom in reducing search variations and structuring the search space and the search process to produce best-so-far solutions. We considered the two levels of problem-solving search: (1) the problem-level search space and (2) the knowledge retrieval level. The problem-level search space was divided into smaller search spaces called subproblems. Each subproblem was defined to be at a single level of abstraction. The effect of each of these degrees of freedom on the execution time of the task was discussed using a tracking example as a conceptual aid.

In evaluating execution-time effects, we find that pruning in its many manifestations is the only technique that reduces the worst-case execution time without compromising the goal state. Approximate processing can reduce the worst case but compromises the goal state. Other techniques, such as scoping, reduce the average-case execution time and provide a best-so-far solution property. We also examined the application of these degrees of freedom at the knowledge retrieval level. We find that partitioning is the most effective technique at this level because of limitations in using domain knowledge.

We then examined existing approaches to real-time problem solving and mapped them to our structured approach based on the fundamental degrees of freedom. Within this context, the different techniques were shown to exploit compositions of these fundamental degrees of freedom.

Acknowledgments
We thank Milind Tambe, Sarosh Talukdar, Herbert Simon, Dorothy Setliff, Thomas Dean, Jane Liu, Alan Garvey, Richard Korf, Marcel Schoppers, and the anonymous reviewers for their suggestions to improve the clarity and the content of this article.

References
Acharya, A., and Kalp, D. 1989. Release Notes on ParaOPS5 4.4 and CParaOPS5 5.4. School of Computer Science, Carnegie Mellon Univ.

Barachini, F.; Mistelberger, H.; and Gupta, A. 1992. Run-Time Prediction for Production Systems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 478–485. Menlo Park, Calif.: American Association for Artificial Intelligence.

Boddy, M., and Dean, T. 1989. Solving Time-Dependent Planning Problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 979–984. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Chapman, R. L.; Kennedy, J. L.; Newell, A.; and Biel, W. C. 1959. The System's Research Laboratory's Air Defense Experiments. Management Science 5:250–269.

Chu, L., and Wah, B. 1992. Solution of Constrained Optimization Problems in Limited Time. In Proceedings of the IEEE Workshop on Imprecise and Approximate Computation, 40–44. Washington, D.C.: IEEE Computer Society.

Chu, L., and Wah, B. 1991. Optimization in Real Time. In Proceedings of the Twelfth Real-Time Systems Symposium, 150–159. Washington, D.C.: IEEE Computer Society.

Chung, J.; Liu, J.; and Lin, K. 1990. Scheduling Periodic Jobs That Allow Imprecise Results. IEEE Transactions on Computers 39(9): 1156–1174.

Dean, T., and Boddy, M. 1988. An Analysis of Time-Dependent Planning. In Proceedings of the Seventh National Conference on Artificial Intelligence, 49–54. Menlo Park, Calif.: American Association for Artificial Intelligence.

Decker, K. S.; Lesser, V. R.; and Whitehair, R. C. 1990. Extending a Blackboard Architecture for Approximate Processing. Journal of Real-Time Systems 2(1–2): 47–79.

Dodhiawala, R.; Sridharan, N. S.; Raulefs, P.; and Pickering, C. 1989. Real-Time AI Systems: A Definition and an Architecture. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 256–264. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Forgy, C. L. 1984. The OPS83 Report. Technical Report, CMU-CS-84-133, Computer Science Dept., Carnegie Mellon Univ.

Forgy, C. L. 1982. RETE: A Fast Algorithm for the Many Pattern–Many Object Pattern Match Problem. Artificial Intelligence 19(1): 17–37.

Garvey, A., and Lesser, V. 1992. Scheduling Satisficing Tasks with a Focus on Design-to-Time Scheduling. In Proceedings of the IEEE Workshop on Imprecise and Approximate Computation, 25–29. Washington, D.C.: IEEE Computer Society.

Ishida, T., and Korf, R. E. 1991. Moving Target Search. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 204–210. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

Jeffay, K.; Stanat, D.; and Martel, C. 1991. On Non-Preemptive Scheduling of Periodic and Sporadic Resources. In Proceedings of the IEEE Real-Time Systems Symposium, 129–139. Washington, D.C.: IEEE Computer Society.

Knoblock, C. A. 1990. Learning Abstraction Hierarchies for Problem Solving. In Proceedings of the Eighth National Conference on Artificial Intelligence, 923–928. Menlo Park, Calif.: American Association for Artificial Intelligence.

Korf, R. E. 1990. Real-Time Heuristic Search. Artificial Intelligence 42(2–3): 189–211.

Korf, R. E. 1987. Planning as Search: A Quantitative Approach. Artificial Intelligence 33(1): 65–88.

Korf, R. E. 1985. Depth-First Iterative-Deepening: An Optimal Admissible Tree Search. Artificial Intelligence 27(1): 97–109.

Laffey, T. J.; Cox, P. A.; Schmidt, J. L.; Kao, S. M.; and Read, J. Y. 1988. Real-Time Knowledge-Based Systems. AI Magazine 9(1): 27–45.

Lesser, V. R.; Pavlin, J.; and Durfee, E. 1988. Approximate Processing in Real-Time Problem Solving. AI Magazine 9(1): 49–62.

Liu, C. L., and Layland, J. W. 1973. Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment. Journal of the ACM 20(1): 46–61.

Nayak, A.; Gupta, P.; and Rosenbloom, P. 1988. Comparison of the RETE and TREAT Production Matchers for SOAR (a Summary). In Proceedings of the Seventh National Conference on Artificial Intelligence, 693–698. Menlo Park, Calif.: American Association for Artificial Intelligence.

Newell, A., and Simon, H. 1972. Human Problem Solving. Englewood Cliffs, N.J.: Prentice-Hall.

Paul, C. J. 1993. A Structured Approach to Real-Time Problem Solving. Ph.D. thesis, Dept. of Electrical and Computer Engineering, Carnegie Mellon Univ.

Paul, C. J.; Acharya, A.; Black, B.; and Strosnider, J. K. 1991. Reducing Problem-Solving Variance to Improve Predictability. Communications of the ACM 34(8): 64–93.

Paul, C. J.; Holloway, L. E.; Strosnider, J. K.; and Krogh, B. H. 1992. An Intelligent Reactive Scheduling and Monitoring System. IEEE Control Systems 12(3): 78–86.

Scales, D. J. 1986. Efficient Matching Algorithms for the SOAR/OPS5 Production System. Technical Report, KSL-86-47, Knowledge Systems Lab., Stanford Univ.

Schwan, K., and Zhou, H. 1992. Dynamic Scheduling of Real-Time Tasks and Real-Time Threads. IEEE Transactions on Software Engineering 18:736–748.

Sha, L., and Goodenough, J. B. 1990. Real-Time Scheduling Theory and ADA. IEEE Computer 23:53–62.

Sprunt, B.; Sha, L.; and Lehoczky, J. 1989. Aperiodic Task Scheduling for Hard Real-Time Systems. Journal of Real-Time Systems 1(1): 27–60.

Stankovic, J. A. 1988. Misconceptions about Real-Time Computing: A Serious Problem for the Next-Generation Systems. IEEE Computer 21:10–19.

Tambe, M., and Rosenbloom, P. 1989. Eliminating Expensive Chunks by Restricting Expressiveness. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 731–737. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

VanTilborg, A., and Koob, G. M. 1991. Foundations of Real-Time Computing, Scheduling, and Resource Management. New York: Kluwer Academic.

Wensley, J.; Lamport, L.; Goldberg, J.; Green, M. W.; Levitt, K.; Melliar-Smith, P.; Shostak, R.; and Weinstock, C. 1978. SIFT: Design and Analysis of a Fault-Tolerant Computer for Aircraft Control. Proceedings of the IEEE 66(10): 1240–1255.

Winston, P. H. 1984. Artificial Intelligence. 2d ed. Reading, Mass.: Addison-Wesley.

Zadeh, L. A. 1985. Foreword. In Approximate Reasoning in Expert Systems. New York: North-Holland.

Jay K. Strosnider is an associate professor in the electrical and computer engineering department at Carnegie Mellon University. He has ten years of industrial experience developing distributed real-time systems for submarines and six years of academic experience trying to formalize real-time systems engineering. His current research focus is on integrating wide-ranging technologies within a scheduling theoretic framework.

C. J. Paul is a member of the staff at IBM Austin. His research interests include computer architecture, operating systems, and real-time artificial intelligence.

