Chapter 4 - Informed Search Methods 4.1 Best-First Search 4.2 Heuristic Functions 4.3 Memory Bounded Search 4.4 Iterative Improvement Algorithms


Page 1:

Chapter 4 - Informed Search Methods

4.1 Best-First Search

4.2 Heuristic Functions

4.3 Memory Bounded Search

4.4 Iterative Improvement Algorithms

Page 2:

4.1 Best-First Search

Best-first search is just GENERAL-SEARCH in which the minimum-cost nodes are expanded first: we choose the node that appears to be best according to the evaluation function.

Two basic approaches:
- Expand the node closest to the goal
- Expand the node on the least-cost solution path

Page 3: Chapter 4 - Informed Search Methods 4.1 Best-First Search 4.2 Heuristic Functions 4.3 Memory Bounded Search 4.4 Iterative Improvement Algorithms

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution sequence
  inputs: problem, a problem
          Eval-Fn, an evaluation function
  Queueing-Fn <- a function that orders nodes by EVAL-FN
  return GENERAL-SEARCH(problem, Queueing-Fn)

The Algorithm
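As an illustration, here is a minimal Python sketch of the same idea; the problem interface (initial, is_goal, successors) is an assumption made for this sketch, not something defined in the slides.

import heapq
import itertools

def best_first_search(problem, eval_fn):
    # Generic best-first search: always expand the node whose eval_fn value is lowest.
    # Assumed interface: problem.initial, problem.is_goal(state),
    # problem.successors(state) -> iterable of (next_state, step_cost).
    counter = itertools.count()                      # tie-breaker so the heap never compares states
    start = (problem.initial, [problem.initial], 0)  # (state, path, path_cost)
    frontier = [(eval_fn(problem.initial, 0), next(counter), start)]
    explored = set()
    while frontier:
        _, _, (state, path, cost) = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path, cost                        # solution sequence and its cost
        if state in explored:
            continue
        explored.add(state)
        for next_state, step_cost in problem.successors(state):
            new_cost = cost + step_cost
            node = (next_state, path + [next_state], new_cost)
            heapq.heappush(frontier, (eval_fn(next_state, new_cost), next(counter), node))
    return None, float("inf")                        # failure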

Page 4:

Greedy Search

“ … minimize estimated cost to reach a goal”

a heuristic function calculates such cost estimates

h(n) = estimated cost of the cheapest path from the state at node n to a goal state

Page 5:

function GREEDY-SEARCH(problem) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, h)

It is required that h(n) = 0 if n is a goal state.

The Code
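With the assumed interface from the earlier sketch, greedy search simply plugs the heuristic h in as the evaluation function:

def greedy_search(problem, h):
    # Greedy best-first search: order the queue by h alone, ignoring the path cost so far.
    return best_first_search(problem, eval_fn=lambda state, path_cost: h(state))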

Page 6:

Straight-line distance

The straight-line distance heuristic is a common choice for route-finding problems.

hSLD(n) = straight-line distance between n and the goal location
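A minimal sketch of such a heuristic, assuming (purely for illustration) that each state carries x/y map coordinates and that the goal location is known:

import math

def h_sld(state, goal):
    # Straight-line (Euclidean) distance from the state's map position to the goal's.
    return math.hypot(state.x - goal.x, state.y - goal.y)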

Page 7:

[Figure: route-finding graph with nodes A-I and edge costs 75, 118, 111, 140, 80, 97, 99, 101, 211; A is the start and I the goal]

State   h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0

[Search tree: A expanded to E (h = 253), C (h = 329), B (h = 374); greedy search selects E]

Page 8:

[Search tree: A (h = 366) expanded to E (h = 253), C (h = 329), B (h = 374); greedy search selects E and expands it to A (h = 366), F (h = 178), G (h = 193); it then selects F]

Greedy path so far: A-E-F

Page 9:

[Search tree: F expanded; its successors are E (h = 253) and the goal I (h = 0); greedy search selects I and stops]

Greedy solution: A-E-F-I, with path cost 140 + 99 + 211 = 450

Page 10:

Optimality

A-E-F-I = 450 (the greedy solution)

vs.

A-E-G-H-I = 418 (the optimal solution)

State   h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0

[Figure: the route-finding graph with nodes A-I and edge costs 75, 118, 111, 140, 80, 97, 99, 101, 211]

Page 11:

Completeness

Greedy Search is incomplete

Worst-case time complexity: O(b^m), where b is the branching factor and m is the maximum depth of the search space

Straight-line distance

State   h(n)
A       0
B       5
C       7
D       6

[Figure: a small four-node graph A-D with a starting node and a target node, used to show how greedy search can fail to reach the target]

Page 12:

A* search

f(n) = g(n) + h(n)

h(n) = heuristic estimate of the cost from n to the goal

g(n) = cost of the path from the start node to n (the quantity used by uniform-cost search)

[Figure: the route-finding graph with nodes A-I and edge costs 75, 118, 111, 140, 80, 97, 99, 101, 211]

State   h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0

Page 13:

“Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have…”

f(n) = estimated cost of the cheapest solution through n

Page 14:

f(n) = g(n) + h(n)

[Search tree: A (f = 0 + 366 = 366) expanded to B (f = 75 + 374 = 449), C (f = 118 + 329 = 447), E (f = 140 + 253 = 393); A* selects E]

[E expanded to A (f = 280 + 366 = 646), F (f = 239 + 178 = 417), G (f = 220 + 193 = 413); A* selects G]

Page 15:

[Search tree so far: A (f = 0 + 366 = 366) expanded, then E (f = 140 + 253 = 393), then G (f = 220 + 193 = 413); G's successors are H (f = 317 + 98 = 415) and E (f = 300 + 253 = 553)]

Page 16:

[Search tree continued: H is expanded next, and its successor I (the goal) has f = 418 + 0 = 418]

f(n) = g(n) + h(n)

State   h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0

Page 17:

A-E-G-H-I = 418

Remember earlier: greedy search returned A-E-F-I, which costs more.

[Figure: the solution path A → E → G → H → I, with f = 418 + 0 = 418 at the goal]

Page 18:

The Algorithm

function A*-SEARCH(problem) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, g + h)
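In the Python sketch used earlier, A* is again just best-first search, this time ordered by g + h (the interface is the same assumed one):

def a_star_search(problem, h):
    # A* search: f(n) = g(n) + h(n), where g is the path cost accumulated so far.
    return best_first_search(problem, eval_fn=lambda state, path_cost: path_cost + h(state))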

Page 19:

Chapter 4 - Informed Search Methods

4.1 Best-First Search

4.2 Heuristic Functions

4.3 Memory Bounded Search

4.4 Iterative Improvement Algorithms

Page 20:

HEURISTIC FUNCTIONS

Page 21:

OBJECTIVE

A heuristic function calculates the cost estimates that guide a search algorithm.

Page 22:

IMPLEMENTATION

Greedy Search
A* Search
IDA*

Page 23:

EXAMPLES

straight-line distance to the goal (route finding); the 8-puzzle

Page 24:

EFFECTS

The quality of a given heuristic is determined by the effective branching factor b*.

A b* close to 1 is ideal.

N = 1 + b* + (b*)^2 + . . . + (b*)^d

where N = total number of nodes expanded and d = solution depth

Page 25:

EXAMPLE

If A* finds a solution depth 5 using 52 nodes, then b* = 1.91

Usual b* exhibited by a given heuristic is fairly constant over a large range of problem instances
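A small Python check of these numbers: solve N = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection (the helper name and tolerance are illustrative choices).

def effective_branching_factor(nodes, depth, tol=1e-6):
    # Find b* such that 1 + b* + (b*)**2 + ... + (b*)**depth equals the node count.
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(nodes)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < nodes:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(52, 5), 2))   # prints 1.91, matching the example above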

Page 26:

. . .

A well-designed heuristic should have a b* close to 1.

… allowing fairly large problems to be solved

Page 27:

NUMERICAL EXAMPLE

Fig. 4.8 Comparison of the search costs and effective branching factors for the iterative deepening search (IDS) and A* algorithms with h1, h2. Data are averaged over 100 instances of the 8-puzzle, for various solution lengths.

Page 28:

INVENTING HEURISTIC FUNCTIONS

How ?

• Depends on the restrictions of a given problem

• A problem with fewer restrictions is known as a relaxed problem

Page 29:

INVENTING HEURISTIC FUNCTIONS

Fig. 4.7 A typical instance of the 8-puzzle.

Page 30:

INVENTING HEURISTIC FUNCTIONS

One problem

… one often fails to get a single “clearly best” heuristic.

Given h1, h2, h3, … , hm, where none dominates the others, which one should we choose?

h(n) = max(h1(n), … , hm(n))
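For the 8-puzzle, for example, one could combine two standard heuristics this way: the number of misplaced tiles and the total Manhattan distance (the slides refer to h1 and h2 without defining them, so this particular pairing is an assumption). Taking the max of admissible heuristics is still admissible and dominates each one alone.

def h_misplaced(state, goal):
    # Number of tiles out of place; the blank (0) is not counted.
    # States are 9-tuples listing the tiles row by row, with 0 for the blank.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_manhattan(state, goal):
    # Sum of horizontal + vertical distances of each tile from its goal square (3x3 board).
    dist = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

def h_combined(state, goal):
    # h(n) = max(h1(n), h2(n))
    return max(h_misplaced(state, goal), h_manhattan(state, goal))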

Page 31:

INVENTING HEURISTIC FUNCTIONS

Another way:

Perform experiments on randomly generated instances of the problem, gather the results, and decide based on the collected information.

Page 32:

HEURISTICS FOR CONSTRAINT SATISFACTION PROBLEMS

(CSPs)

most-constrained-variable

least-constraining-value

Page 33:

EXAMPLE

Fig 4.9 A map-coloring problem after the first two variables (A and B) have been selected.

Which country should we color next?

[Figure: map with regions A-F; A and B have already been colored]
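A minimal sketch of the most-constrained-variable choice, using an assumed dictionary of remaining legal values per region (the domains shown are made up for illustration):

def most_constrained_variable(domains, assignment):
    # Pick the unassigned variable with the fewest remaining legal values.
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

# Hypothetical pruned domains after two regions have been colored:
domains = {"C": ["green"], "D": ["red", "green", "blue"],
           "E": ["red", "blue"], "F": ["red", "green", "blue"]}
print(most_constrained_variable(domains, assignment={}))   # -> C (only one legal value left)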

Page 34:

Chapter 4 - Informed Search Methods

4.1 Best-First Search

4.2 Heuristic Functions

4.3 Memory Bounded Search

4.4 Iterative Improvement Algorithms

Page 35:

4.3 MEMORY BOUNDED SEARCH

In this section, we investigate two algorithms that are designed to conserve memory

Depth   Nodes     Time             Memory
0       1         1 millisecond    100 bytes
2       111       0.1 seconds      11 kilobytes
4       11,111    11 seconds       1 megabyte
6       10^6      18 minutes       111 megabytes
8       10^8      31 hours         11 gigabytes
10      10^10     128 days         1 terabyte
12      10^12     35 years         111 terabytes
14      10^14     3500 years       11,111 terabytes

Figure 3.12 Time and memory requirements for breadth-first search (assuming branching factor 10, 1,000 nodes expanded per second, and 100 bytes per node).

Page 36:

Memory Bounded Search

1. IDA* (Iterative Deepening A*) search
   - a logical extension of ITERATIVE-DEEPENING SEARCH that uses heuristic information

2. SMA* (Simplified Memory Bounded A*) search

Page 37:

Iterative Deepening Search

Iterative deepening is a kind of uninformed search strategy.

It combines the benefits of depth-first and breadth-first search:
- it is optimal and complete, like breadth-first search
- it has modest memory requirements, like depth-first search

Page 38:

IDA* (Iterative Deepening A*) search

Turning A* search into IDA* search: each iteration is a depth-first search that uses an f-cost limit rather than a depth limit.

Space requirement
~ worst case: b · f*/δ, where
    b = branching factor
    f* = cost of the optimal solution
    δ = smallest operator cost
    d = depth
~ in most cases, b · d is a good estimate of the storage requirement

Time complexity
-- because IDA* does not need to insert and delete nodes on a priority queue, its overhead per node can be much less than that of A*

Page 39:

IDA* search

First, each iteration expands all nodes inside the contour for the current f-cost, peeping over the contour to find out where the next contour lies.

Once the search inside a given contour has been completed, a new iteration is started using a new f-cost for the next contour.

[Figure: f-cost contours]

Page 40:

IDA* Search Algorithm

function IDA*(problem) returns a solution sequence
  inputs: problem, a problem
  local variables: f-limit, the current f-COST limit
                   root, a node
  root <- MAKE-NODE(INITIAL-STATE[problem])
  f-limit <- f-COST(root)
  loop do
    solution, f-limit <- DFS-CONTOUR(root, f-limit)
    if solution is non-null then return solution
    if f-limit = infinity then return failure
  end

----------------------------------------------------------------------------------------------------------------------------------

function DFS-CONTOUR(node, f-limit) returns a solution sequence and a new f-COST limit
  inputs: node, a node
          f-limit, the current f-COST limit
  local variables: next-f, the f-COST limit for the next contour, initially infinity
  if f-COST[node] > f-limit then return null, f-COST[node]
  if GOAL-TEST[problem](STATE[node]) then return node, f-limit
  for each node s in SUCCESSORS(node) do
    solution, new-f <- DFS-CONTOUR(s, f-limit)
    if solution is non-null then return solution, f-limit
    next-f <- MIN(next-f, new-f)
  end
  return null, next-f
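A compact Python version of the same contour-by-contour scheme, using the problem/heuristic interface assumed in the earlier sketches:

def ida_star(problem, h):
    # IDA*: repeated depth-first searches bounded by an increasing f-cost limit.
    def dfs_contour(state, path, g, f_limit):
        f = g + h(state)
        if f > f_limit:
            return None, f                           # cut off; this f seeds the next contour
        if problem.is_goal(state):
            return path, f_limit
        next_f = float("inf")
        for next_state, step_cost in problem.successors(state):
            if next_state in path:                   # cheap cycle check along the current path
                continue
            solution, new_f = dfs_contour(next_state, path + [next_state], g + step_cost, f_limit)
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h(problem.initial)                     # f-COST of the root (g = 0)
    while True:
        solution, f_limit = dfs_contour(problem.initial, [problem.initial], 0, f_limit)
        if solution is not None:
            return solution
        if f_limit == float("inf"):
            return None                              # failure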

Page 41:

MEMORY BOUNDED SEARCH

1. IDA* (Iterative Deepening A*) search

2. SMA* (Simplified Memory Bounded A*) search

   - similar to A*, but restricts the queue size to fit into the available memory

Page 42:

SMA* (Simplified Memory Bounded A*) Search

The advantage of using more memory is improved search efficiency.

SMA* can make use of all available memory to carry out the search, remembering a node rather than regenerating it when needed.

Page 43:

SMA* search (cont.)

SMA* has the following properties

SMA* will utilize whatever memory is made available to it

SMA* avoids repeated states as far as its memory allows

SMA* is complete if the available memory is sufficient to store the shallowest solution path

Page 44:

SMA* search (cont.)

SMA* properties cont.

SMA* is optimal if enough memory is available to store the shallowest optimal solution path

when enough memory is available for the entire search tree, the search is optimally efficient

Page 45:

Progress of the SMA* search

Aim: find the lowest-cost goal node with the available memory (Max Nodes = 3)

A = root node; D, F, I, J = goal nodes

Label on each node: current f-cost (the f-cost of the best forgotten successor is shown in parentheses)

[Figure: the underlying search tree rooted at A (f = 12), with f-costs B 15, G 13, C 25, D 20, E 35, F 30, H 18, I 24, J 24, K 29, and successive snapshots of the (at most three) nodes SMA* keeps in memory, e.g. A 13 (15) with G 13; A 15 (15) with G 24 (infinite) and I 24; A 15 (24) with B 15; A 20 (24) with B 20 (infinite) and D 20]

Step by step:

• Memory is full: update A's f-cost to the minimum of its children's f-costs
• Expand G, drop the higher-f-cost leaf (B); A memorizes B
• Memory is full again; H is not a goal node, so mark H as infinite
• Drop H and add I; G memorizes H
• Update G's f-cost from its minimum child, then update A's f-cost
• I is a goal node, but it may not be the best solution
• The path through G is not so great, so B is generated for the second time
• Drop G and add C; A memorizes G
• C is a non-goal node (at maximum depth), so mark C as infinite
• Drop C and add D; B memorizes C
• D is a goal node and is the lowest-f-cost node, so terminate
• What if J had a cost of 19 instead of 24?

Page 46:

SMA* search (cont.)

function SMA*(problem) returns a solution sequence
  inputs: problem, a problem
  local variables: Queue, a queue of nodes ordered by f-cost
  Queue <- MAKE-QUEUE({MAKE-NODE(INITIAL-STATE[problem])})
  loop do
    if Queue is empty then return failure
    n <- deepest least-f-cost node in Queue
    if GOAL-TEST(n) then return success
    s <- NEXT-SUCCESSOR(n)
    if s is not a goal and is at maximum depth then
      f(s) <- infinity
    else
      f(s) <- MAX(f(n), g(s) + h(s))
    if all of n's successors have been generated then
      update n's f-cost and those of its ancestors if necessary
    if SUCCESSORS(n) all in memory then remove n from Queue
    if memory is full then
      delete shallowest, highest-f-cost node in Queue
      remove it from its parent's successor list
      insert its parent on Queue if necessary
    insert s on Queue
  end

Page 47:

Chapter 4 - Informed Search Methods

4.1 Best-First Search

4.2 Heuristic Functions

4.3 Memory Bounded Search

4.4 Iterative Improvement Algorithms

Page 48:

ITERATIVE IMPROVEMENT ALGORITHMS

The most practical approach when:
- all the information needed for a solution is contained in the state description itself
- the path by which a solution is reached is not important

Advantage: memory is saved by keeping track of only the current state

Two major classes:
- Hill-climbing (gradient descent)
- Simulated annealing

Page 49:

Hill-Climbing Search

Only the state and its evaluation are recorded, instead of maintaining a search tree.

function HILL-CLIMBING(problem) returns a solution state
  inputs: problem, a problem
  local variables: current, a node
                   next, a node
  current <- MAKE-NODE(INITIAL-STATE[problem])
  loop do
    next <- a highest-valued successor of current
    if VALUE[next] < VALUE[current] then return current
    current <- next
  end
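A minimal Python rendering; successors(state) and value(state) are assumed callables describing the state space. This variant stops when no successor is strictly better, whereas the pseudocode above also accepts equal-valued moves (which is what lets it wander across plateaux).

import random

def hill_climbing(initial, successors, value):
    # Steepest-ascent hill climbing over a state space.
    current = initial
    while True:
        neighbors = list(successors(current))
        if not neighbors:
            return current
        best_value = max(value(n) for n in neighbors)
        if best_value <= value(current):
            return current                           # local maximum (or plateau/ridge): halt
        best = [n for n in neighbors if value(n) == best_value]
        current = random.choice(best)                # break ties at random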

Page 50:

Hill-Climbing Search

Select at random when there is more than one best successor to choose from.

Three well-known drawbacks:

Local maxima

Plateaux

Ridges

When no progress can be made, start from a new point.

Page 51:

Local Maxima

A peak lower than the highest peak in the state space

The algorithm halts when a local maximum is reached

Page 52:

Plateaux

The neighbors of the state are all about the same height.

The search wanders like a random walk.

Page 53:

Ridges

The sides of a ridge may be steep, but its top slopes only gently toward the peak.

The search makes little progress unless it happens to move directly along the top.

Page 54:

Random-Restart Hill-Climbing

Generates different starting points when no progress can be made from the previous point

Saves the best result

Can eventually find the optimal solution if enough iterations are allowed

The fewer local maxima, the quicker it finds a good solution

Page 55:

Simulated Annealing

Picks random moves.

Keeps a move if it actually improves the situation; otherwise, makes the move with a probability less than 1 (the probability e^(ΔE/T) falls as the temperature T drops).

The number of cycles in the search is determined by the annealing schedule (the mapping from time to temperature).

The search behaves like hill-climbing when approaching the end (low temperature).

The name comes from annealing, the process of slowly cooling a liquid until it freezes.

Page 56:

Simulated-Annealing Function

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps
  current <- MAKE-NODE(INITIAL-STATE[problem])
  for t <- 1 to infinity do
    T <- schedule[t]
    if T = 0 then return current
    next <- a randomly selected successor of current
    ΔE <- VALUE[next] - VALUE[current]
    if ΔE > 0 then current <- next
    else current <- next only with probability e^(ΔE/T)
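The same function as a small Python sketch; successors, value and the cooling schedule are assumed interfaces, and the example schedule is just one simple choice.

import math
import random

def simulated_annealing(initial, successors, value, schedule):
    # schedule(t) maps the time step t to a temperature T; the search stops when T reaches 0.
    current = initial
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T == 0:
            return current
        neighbors = list(successors(current))
        if not neighbors:
            return current
        next_state = random.choice(neighbors)
        delta_e = value(next_state) - value(current)
        if delta_e > 0:
            current = next_state                     # always accept an improving move
        elif random.random() < math.exp(delta_e / T):
            current = next_state                     # accept a worsening move with probability e^(dE/T)

def schedule(t, T0=1.0, decay=0.99, cutoff=10_000):
    # Geometric cooling that is forced to zero after a fixed number of steps.
    return 0 if t > cutoff else T0 * (decay ** t)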

Page 57:

Applications in Constraint Satisfaction Problems

General iterative improvement algorithms for Constraint Satisfaction Problems:
- assign values to all variables
- apply modifications to the current configuration by assigning different values to variables, moving toward a solution

Example problem: the 8-queens problem (the definition of the 8-queens problem is on p. 64 of the text)

Page 58:

An 8-queens Problem

Algorithm chosen:

the min-conflicts heuristic repair method

Algorithm Characteristics:

repairs inconsistencies in the current configuration

selects a new value for a variable that results in the minimum number of conflicts with other variables

Page 59:

Detailed Steps

1. One by one, count the number of conflicts between the inconsistent variable and the other variables.

[Figure: chessboard with each candidate square annotated with its conflict count, such as 2, 2, 1, 2, 3, 1, 2]

Page 60:

Detailed Steps

2. Choose the value with the smallest number of conflicts and make the move.

[Figure: chessboard with conflict counts for the candidate squares; the queen moves to the square with the fewest conflicts]

Page 61:

Detailed Steps

3. Repeat the previous steps until all the inconsistent variables have been assigned a proper value.
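A compact sketch of the min-conflicts repair method for the 8-queens problem; the representation (one queen per column, value = row) and the step limit are illustrative choices.

import random

def conflicts(rows, col, row):
    # Number of queens attacking square (col, row): same row or same diagonal.
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n=8, max_steps=10_000):
    rows = [random.randrange(n) for _ in range(n)]       # a queen in every column, random row
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                                  # consistent assignment found
        col = random.choice(conflicted)                  # pick an inconsistent variable
        counts = [conflicts(rows, col, r) for r in range(n)]
        rows[col] = counts.index(min(counts))            # value with the fewest conflicts
    return None                                          # no solution within the step limit

print(min_conflicts_queens())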