ARTIFICIAL INTELLIGENCE UNIT 2: Search Techniques

Page 1:

ARTIFICIAL INTELLIGENCE

UNIT 2:

Search Techniques

Page 2:

SEARCH TECHNIQUES:

Search techniques are general problem-solving methods.

Two types-

1) Uninformed search

2) Informed search

Page 3:

1. DEPTH-FIRST SEARCH

A depth-first search (DFS) explores a path all the way to a leaf before backtracking and exploring another path

For example, after searching A, then B, then D, the search backtracks and tries another path from B

Nodes are explored in the order A B D E H L M N I O P C F G J K Q.

N will be found before J.

[Figure: example search tree rooted at A, with children B and C and descendants D through Q]

Page 4:

DFS ALGORITHM

o If the initial state is a goal state, quit and return success.

o Otherwise, do the following until success or failure is signaled -

1. Generate a successor, E, of the initial state. If there are no more successors, signal failure.

2. Call Depth-First Search with E as the initial state.

3. If success is returned, signal success. Otherwise continue in this loop.
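A minimal recursive Python sketch of this algorithm. The successors(state) and goal_test(state) functions are hypothetical problem-specific callbacks, and the visited set (not part of the algorithm above) is added only to guard against revisiting states:

def depth_first_search(state, successors, goal_test, visited=None):
    # Returns a path (list of states) from `state` to a goal, or None to signal failure.
    if visited is None:
        visited = set()
    if goal_test(state):
        return [state]                      # the initial state is a goal state: success
    visited.add(state)
    for succ in successors(state):          # generate a successor E of the current state
        if succ in visited:
            continue
        result = depth_first_search(succ, successors, goal_test, visited)
        if result is not None:              # success was signalled below: propagate it
            return [state] + result
        # otherwise continue in this loop with the next successor
    return None                             # no more successors: signal failure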

Page 5:

Advantages of depth-first search

o Simple to implement.
o Needs relatively little memory for storing the state space.
o May find a solution without examining much of the search space.
o DFS quickly searches deeply into the problem space. If it is known that the solution path will be long, then DFS is a good choice.

Disadvantages of depth-first search

o Can sometimes fail to find a solution, for example by descending an infinite path.
o Not guaranteed to find an optimal solution.
o Can take a lot longer to find a solution.

Page 6:

2. BREADTH-FIRST SEARCH

[Figure: the same example search tree, rooted at A with nodes B through Q]

• A breadth-first search (BFS) explores nodes nearest the root before exploring nodes further away.

• For example, after searching A, then B, then C, the search proceeds with D, E, F, G.

• Nodes are explored in the order A B C D E F G H I J K L M N O P Q.

• J will be found before N.

Page 7:

BFS ALGORITHM

1. Create a variable called NODE-LIST and set it to the initial state.

2. Until a goal state is found or NODE-LIST is empty, do -

1. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.

2. For each way that each rule can match the state described in E do -

• Apply the rule to generate a new state.

• If the new state is a goal state, quit and return this state.

• Otherwise, add the new state to the end of NODE-LIST.
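A minimal Python sketch of the NODE-LIST algorithm, again with hypothetical successors(state) and goal_test(state) callbacks; a `seen` set is added so the same state is not queued twice:

from collections import deque

def breadth_first_search(initial_state, successors, goal_test):
    node_list = deque([initial_state])      # 1. NODE-LIST set to the initial state
    seen = {initial_state}
    while node_list:                        # 2. until a goal is found or NODE-LIST is empty
        e = node_list.popleft()             # remove the first element and call it E
        for new_state in successors(e):     # apply each applicable rule to E
            if goal_test(new_state):
                return new_state            # new state is a goal state: quit and return it
            if new_state not in seen:
                seen.add(new_state)
                node_list.append(new_state) # otherwise add it to the end of NODE-LIST
    return None                             # NODE-LIST emptied without finding a goal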

Page 8:

Advantages of breadth-first search

o Guaranteed to find a solution (if one exists).
o Guaranteed to find an optimal (shallowest) solution.
o BFS does not suffer from the potential infinite-loop problem that can cause a search to run without terminating.

Disadvantages of breadth-first search

o More complex to implement.
o Needs a lot of memory for storing the state space if the search space is large.
o Inefficient for deep solutions, i.e. when the search space is large and the solution lies deep, search performance will be poor.

Page 9:

HEURISTIC TECHNIQUES

1. Generate and Test
2. Hill Climbing
3. Best-First Search
4. Problem Reduction

Page 10:

1. GENERATE-AND-TEST

Page 11:

GENERATE-AND-TEST

Algorithm -

Step 1: Generate a possible solution. For some problems this means generating a particular point in the problem space; for others it means generating a path from a start state.

Step 2: Test to see if this is actually a solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.

Step 3: If a solution has been found, quit; otherwise return to Step 1.

The most straightforward way to implement systematic generate-and-test is as DFS with backtracking; a minimal sketch follows below.

Disadvantage: it often produces the same inaccurate solutions again and again, since there is no feedback from the world to guide the generator.
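A minimal sketch of generate-and-test in Python, assuming a hypothetical iterable `candidates` of proposed solutions (points or paths) and a goal_test predicate:

def generate_and_test(candidates, goal_test):
    for candidate in candidates:    # Step 1: generate a possible solution
        if goal_test(candidate):    # Step 2: test it against the acceptable goal states
            return candidate        # Step 3: a solution has been found, quit
    return None                     # candidates exhausted: no solution found

Feeding this loop with a generator that enumerates paths depth-first gives the DFS-with-backtracking implementation mentioned above.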

Page 12:

2. HILL CLIMBING

A variation on generate-and-test -

– Generation of next state depends on feedback from the test procedure.

– Test now includes a heuristic function that provides a guess as to how good each possible state is.

There are a number of ways to use the information returned by the test procedure.

Page 13:

2.1 SIMPLE HILL CLIMBING

Use the heuristic to move only to states that are better than the current state.

Always move to a better state when possible.

The process ends when all operators have been applied and none of the resulting states are better than the current state.

Page 14:

2.1 TSP HILL CLIMB STATE SPACE

[Figure: travelling salesman instance with four cities A, B, C, D and edge costs 1 through 6]

Page 15:

2.1 SIMPLE HILL CLIMBING ALGORITHM

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state.

2. Loop until a solution is found or until there are no new operators left to be applied in the current state:

   a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.

   b. Evaluate the new state:

      i. If it is a goal state, then return it and quit.

      ii. If it is not a goal state but it is better than the current state, then make it the current state.

      iii. If it is not better than the current state, then continue in the loop.
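A minimal Python sketch of simple hill climbing, assuming hypothetical successors(state), evaluate(state) (higher is better) and goal_test(state) functions supplied by the problem:

def simple_hill_climbing(initial_state, successors, evaluate, goal_test):
    current = initial_state
    if goal_test(current):                      # 1. the initial state is already a goal
        return current
    while True:
        moved = False
        for new_state in successors(current):   # 2a. apply an operator not yet used here
            if goal_test(new_state):
                return new_state                # 2b-i. goal state: return it and quit
            if evaluate(new_state) > evaluate(current):
                current = new_state             # 2b-ii. first better state becomes current
                moved = True
                break
            # 2b-iii. not better: continue in the loop
        if not moved:
            return current                      # no operator gave a better state: stop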

Page 16:

2.2 STEEPEST-ASCENT HILL CLIMBING

• A variation on simple hill climbing.

• Instead of moving to the first state that is better, move to the best possible state that is one move away.

• The order in which operators are applied does not matter.

• Not just climbing to a better state, but climbing up the steepest slope.

Page 17:

2.2 STEEPEST-ASCENT HILL CLIMBING ALGORITHM

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.

2. Loop until a solution is found or until a complete iteration produces no change to the current state:

   a. Let SUCC be a state such that any possible successor of the current state will be better than SUCC.

   b. For each operator that applies to the current state do:

      i. Apply the operator and generate a new state. Evaluate the new state. If it is a goal state, then return it and quit.

      ii. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.

   c. If SUCC is better than the current state, then set the current state to SUCC.
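The same sketch adapted to steepest ascent: every successor is examined and SUCC keeps the best one found in the iteration (hypothetical successors/evaluate/goal_test as before):

def steepest_ascent_hill_climbing(initial_state, successors, evaluate, goal_test):
    current = initial_state
    if goal_test(current):
        return current
    while True:
        succ = None                              # a. SUCC is worse than any real successor
        for new_state in successors(current):    # b. for each operator that applies
            if goal_test(new_state):
                return new_state                 # b-i. goal state: return it and quit
            if succ is None or evaluate(new_state) > evaluate(succ):
                succ = new_state                 # b-ii. keep the better of new state and SUCC
        if succ is not None and evaluate(succ) > evaluate(current):
            current = succ                       # c. SUCC improves on the current state
        else:
            return current                       # a full iteration produced no change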

Page 18:

3. BEST-FIRST SEARCH

Best-first search combines the advantages of the two:

- From depth-first search: a solution can be found without all competing branches having to be expanded.

- From breadth-first search: the search does not get trapped on dead-end paths.

The way to combine the two is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.

Page 19:

3.1 OR GRAPH

[Figure: four successive stages of best-first search on an OR graph rooted at A, with heuristic estimates shown beside the nodes as they are generated and expanded]

Page 20:

3.1 OR GRAPH

- OPEN: nodes that have been generated but have not yet been examined.

This is organized as a priority queue.

- CLOSED: nodes that have already been examined.

Whenever a new node is generated, check whether it has been generated before.

Page 21:

3.1 OR GRAPH ALGORITHM

1. OPEN = {initial state}.

2. Loop until a goal is found or there are no nodes left in OPEN:

   - Pick the best node in OPEN.
   - Generate its successors.
   - For each successor:
     - If it is new, evaluate it, add it to OPEN, and record its parent.
     - If it has been generated before, change the parent if this new path is better, and update the cost estimates of its successors.
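A minimal Python sketch of this loop, with OPEN kept as a priority queue and parents recorded in a dictionary. The successors(state), evaluate(state) (lower is better) and goal_test(state) callbacks are hypothetical problem-specific names, and the re-parenting of nodes that were generated before is omitted for brevity:

import heapq, itertools

def best_first_search(initial_state, successors, evaluate, goal_test):
    counter = itertools.count()       # tie-breaker so states are never compared directly
    open_heap = [(evaluate(initial_state), next(counter), initial_state)]   # OPEN
    parent = {initial_state: None}    # every generated node and the node it came from
    while open_heap:                  # loop until a goal is found or OPEN is empty
        _, _, node = heapq.heappop(open_heap)    # pick the best node in OPEN
        if goal_test(node):
            return node, parent
        for succ in successors(node):            # generate its successors
            if succ not in parent:               # new: evaluate it, record its parent,
                parent[succ] = node              # and add it to OPEN
                heapq.heappush(open_heap, (evaluate(succ), next(counter), succ))
    return None, parent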

Page 22:

Uninformed and informed searches

Since we know what the goal state is like, is it possible to get there faster?

[Figure: a breadth-first search frontier contrasted with an oracle path leading straight to the goal]

Page 23:

3.2 A* ALGORITHM

f(n) = g(n) + h(n)

h(n) = cost of the cheapest path from node n to a goal state.

g(n) = cost of the cheapest path from the initial state to node n.
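For example, if the cheapest path from the initial state to n costs g(n) = 3 and the cheapest path from n to a goal costs h(n) = 4, then f(n) = 3 + 4 = 7 is the cost of the cheapest solution path that passes through n.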

Page 24:

3.2 A* ALGORITHM

f*(n) = g*(n) + h*(n)

h*(n) (heuristic factor) = estimate of h(n).

g*(n) (depth factor) = approximation of g(n) found by A* so far.
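A compact Python sketch of A* along these lines. The successors(state) callback is assumed (hypothetically) to yield (next_state, step_cost) pairs, and h(state) plays the role of the heuristic factor h*(n):

import heapq, itertools

def a_star(start, successors, h, goal_test):
    counter = itertools.count()                      # tie-breaker for equal f* values
    g = {start: 0}                                   # g*(n): cheapest cost found so far
    parent = {start: None}
    open_heap = [(h(start), next(counter), start)]   # priority is f*(n) = g*(n) + h*(n)
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if goal_test(node):
            path = [node]                            # reconstruct the path via parent links
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path)), g[node]
        for succ, step_cost in successors(node):
            tentative_g = g[node] + step_cost
            if succ not in g or tentative_g < g[succ]:
                g[succ] = tentative_g                # better path to succ: revise g* and parent
                parent[succ] = node
                heapq.heappush(open_heap, (tentative_g + h(succ), next(counter), succ))
    return None, float("inf")                        # OPEN exhausted: no path to a goal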

Page 25:

PROBLEM REDUCTION

AND-OR Graphs

[Figure: AND-OR graph for the goal "Acquire TV set"; one alternative is the single goal "Steal TV set", the other is an AND arc joining the goals "Earn some money" and "Buy TV set"]

Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980)
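A small sketch of the cost rule that AO* repeatedly applies when it revises estimates, under the assumptions that children_of(node) returns the node's outgoing connectors (each connector being the list of children joined by one AND arc, with an OR choice made between connectors), that the graph is acyclic, and that every edge costs 1, as in the figures on the next pages; all names here are hypothetical:

def revised_cost(node, children_of, h):
    # Leaf nodes keep their heuristic estimate h(node); otherwise take the cheapest
    # connector, where a connector costs the sum of its children's costs plus 1 per edge.
    connectors = children_of(node)
    if not connectors:
        return h(node)
    best = float("inf")
    for connector in connectors:              # OR choice: pick the cheapest alternative
        cost = sum(1 + revised_cost(child, children_of, h) for child in connector)
        best = min(best, cost)                # AND arc: children's costs are summed
    return best

AO* itself interleaves this bottom-up cost revision with top-down expansion of the most promising partial solution graph; the sketch only shows the cost computation.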

Page 26:

PROBLEM REDUCTION: AO*

[Figure: successive stages of AO* on an AND-OR graph rooted at A, with cost estimates revised after each node expansion]

Page 27:

PROBLEM REDUCTION: AO*

[Figure: AND-OR graph rooted at A illustrating necessary backward propagation, where a revised cost estimate must be propagated back up through the graph]