Chapter 3, Sections 1 - 5: Solving Problems by Searching


Page 1: Chapter 3, Sections 1 - 5

Page 2: Solving Problems by Searching

A reflex agent is simple: it bases its actions on a direct mapping from states to actions. However, it cannot work well in environments for which this mapping would be too large to store or would take too long to learn.

Hence, a goal-based agent is used.

Page 3: Problem-solving agent

A problem-solving agent is a kind of goal-based agent. It solves problems by finding sequences of actions that lead to desirable states (goals).

To solve a problem, the first step is goal formulation, based on the current situation.

Page 4: Goal formulation

The goal is formulated as a set of world states in which the goal is satisfied.

Reaching the goal state from the initial state requires actions. Actions are the operators causing transitions between world states. Actions should be abstract to a certain degree, instead of very detailed, e.g., "turn left" vs. "turn left 30 degrees", etc.

Page 5: Problem formulation

Problem formulation is the process of deciding what actions and states to consider.

E.g., driving from Zolfi to Riyadh: the in-between states and actions are defined.
States: some places in Zolfi and Riyadh.
Actions: turn left, turn right, go straight, accelerate, brake, etc.

Page 6: Search

There are many ways to achieve the same goal; together, these ways can be expressed as a tree. When there are multiple options of unknown value at a point, the agent can examine the different possible sequences of actions and choose the best one.

This process of looking for the best sequence is called search. The best sequence is then a list of actions, called the solution.

Page 7: Search algorithm

A search algorithm takes a problem and returns a solution. Once a solution is found, the agent follows it and carries out the list of actions (the execution phase).

Design of an agent: "formulate, search, execute".
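As a minimal sketch of this "formulate, search, execute" design (the function names below are illustrative placeholders, not from the slides), the agent's top level might look like this in Python:

def problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    """One-shot 'formulate, search, execute' loop (illustrative sketch)."""
    state = percept                            # static, observable world: the percept gives the state
    goal = formulate_goal(state)               # 1. formulate the goal
    problem = formulate_problem(state, goal)   # 2. formulate the problem
    solution = search(problem)                 # 3. search for a sequence of actions
    return solution or []                      # execution phase: carry out this list of actions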

Page 8: (figure)

Page 9: Environments for a problem-solving agent

The environment is:
Static: formulating and solving the problem is done without paying attention to any changes that might occur in the environment.
Observable: the agent knows its initial state.
Discrete: a finite number of actions can be defined.

Page 10: Environments for a problem-solving agent

Deterministic: solutions are just single action sequences, the effects of all actions are known, and no percepts are needed except the first one, so execution is "open-loop".

From these properties, we know that the problem-solving agent faces the easiest kind of environment.

Page 11: Well-defined problems and solutions

A problem is defined by 4 components:
Initial state
Successor function (which defines the state space and its paths)
Goal test
Path cost

Page 12: Well-defined problems and solutions

A problem is defined by 4 components:
The initial state that the agent starts in.
The set of possible actions (the successor function).
These two items define the state space: the set of all states reachable from the initial state. A path in the state space is any sequence of actions leading from one state to another.

Page 13: Well-defined problems and solutions

The goal test is applied to the current state to test whether the agent is in its goal. Sometimes the goal is described by properties instead of stating the set of states explicitly.

Example: in chess, the agent wins if it can capture the opponent's KING on the next move, no matter what the opponent does.

Page 14: Well-defined problems and solutions

A path cost function assigns a numeric cost to each path; it is the performance measure, denoted by g, used to distinguish the best path from the others.

Usually the path cost is the sum of the step costs of the individual actions (in the action list).

Page 15: Well-defined problems and solutions

Together, a problem is defined by:
Initial state
Successor function
Goal test
Path cost function

The solution of a problem is then a path from the initial state to a state satisfying the goal test. The optimal solution is the solution with the lowest path cost among all solutions.
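A minimal sketch, assuming a Python setting, of how these four components might be bundled into one object; the class and method names are illustrative, not taken from the slides.

class Problem:
    """A search problem defined by its four components (illustrative sketch)."""

    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state     # component 1: initial state
        self.goal_state = goal_state

    def successors(self, state):
        """Component 2: return a list of (action, next_state) pairs."""
        raise NotImplementedError

    def goal_test(self, state):
        """Component 3: return True if state satisfies the goal."""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Component 4: cost of one step; the path cost is the sum of these."""
        return 1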

Page 16: Formulating problems

Besides the four components for problem formulation, is anything else needed?

Abstraction: the process of removing irrelevant information and keeping only the most essential parts in the description of the states.

Conclusion: only the most important parts that contribute to searching are used.

Page 17: Example

(figure)

Page 18: From our example

1. Formulate the goal: be in Amman.
2. Formulate the problem: states are cities; actions are driving between cities.
3. Find a solution: a sequence of cities, e.g., Ajlun - Jarash - Amman.

Page 19: Our example

1. Problem: go from Ajlun to Amman.
2. Initial state: Ajlun.
3. Operator: go from one city to another.
4. State space: {Jarash, Salt, Irbed, ...}
5. Goal test: is the agent in Amman?
6. Path cost function: get the cost from the map.
7. Solution: { {Aj, Ja, Ir, Ma, Za, Am}, {Aj, Ir, Ma, Za, Am}, ..., {Aj, Ja, Am} }
8. State set space: {Ajlun, Jarash, Amman}
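Continuing the Problem sketch above, the Ajlun-to-Amman problem could be written roughly as follows; the road map and the distances in it are made up purely for illustration.

class RouteProblem(Problem):
    """Find a route between two cities on a road map (illustrative sketch)."""

    def __init__(self, initial_state, goal_state, road_map):
        super().__init__(initial_state, goal_state)
        self.road_map = road_map                   # dict: city -> {neighbour: distance}

    def successors(self, state):
        return [("drive to " + city, city) for city in self.road_map.get(state, {})]

    def step_cost(self, state, action, next_state):
        return self.road_map[state][next_state]   # cost taken from the map

# Hypothetical distances, for illustration only:
road_map = {
    "Ajlun":  {"Jarash": 30, "Irbed": 40},
    "Jarash": {"Ajlun": 30, "Amman": 50},
    "Irbed":  {"Ajlun": 40, "Amman": 90},
    "Amman":  {},
}
problem = RouteProblem("Ajlun", "Amman", road_map)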

Page 20: Single-state problem formulation

A problem is defined by four items:
1. Initial state, e.g., "at Arad".
2. Actions or successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, ...}.
3. Goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., Checkmate(x).

Page 21: Single-state problem formulation

4. Path cost (additive), e.g., sum of distances, number of actions executed, etc. c(x,a,y) is the step cost, assumed to be ≥ 0.

A solution is a sequence of actions leading from the initial state to a goal state.

Page 22: Example problems

Toy problems: intended to illustrate or exercise various problem-solving methods, e.g., puzzles, chess, etc.

Real-world problems: tend to be more difficult, and their solutions are ones people actually care about, e.g., design, planning, etc.

Page 23: Toy problems. Example: the vacuum world

Number of states: 8
Initial state: any
Number of actions: 4 (left, right, suck, noOp)
Goal: clean up all dirt; goal states: {7, 8}
Path cost: each step costs 1

Page 24: (figure)

Page 25: The 8-puzzle

(figure)

Page 26: The 8-puzzle

States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
Initial state: any state in the state space.
Successor function: the blank moves Left, Right, Up, or Down.
Goal test: the current state matches the goal configuration.
Path cost: each step costs 1, so the path cost is just the length of the path.
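A sketch of the successor function for this formulation, assuming the state is stored as a tuple of nine entries read row by row, with 0 marking the blank (this representation is an assumption, not something fixed by the slides):

def puzzle_successors(state):
    """8-puzzle successor function; state is a tuple of 9 values, 0 = blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = []                                   # directions the blank can legally move
    if col > 0: moves.append(("Left", -1))
    if col < 2: moves.append(("Right", +1))
    if row > 0: moves.append(("Up", -3))
    if row < 2: moves.append(("Down", +3))
    result = []
    for action, delta in moves:
        new_state = list(state)
        target = blank + delta
        # move the blank by swapping it with the neighbouring tile
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        result.append((action, tuple(new_state)))
    return result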

Page 27: The 8-queens

There are two ways to formulate the problem. Both have the following in common:
Goal test: 8 queens on the board, none attacking each other.
Path cost: zero.

Page 28: The 8-queens

(1) Incremental formulation: involves operators that augment the state description, starting from an empty board; each action adds a queen to the state.
States: any arrangement of 0 to 8 queens on the board.
Successor function: add a queen to any empty square.

Page 29: The 8-queens

(2) Complete-state formulation: starts with all 8 queens on the board and moves the queens around individually.
States: any arrangement of 8 queens, one per column in the leftmost columns.
Operators: move an attacked queen to a row where it is not attacked by any other queen.

Page 30: The 8-queens

Conclusion: the right formulation makes a big difference to the size of the search space.

Page 31: Example: robotic assembly

States? Real-valued coordinates of the robot joint angles and of the parts of the object to be assembled.
Actions? Continuous motions of the robot joints.
Goal test? Complete assembly.
Path cost? Time to execute.

Page 32: 3.3 Searching for solutions

Finding a solution is done by searching through the state space. All problems are transformed into a search tree, generated by the initial state and the successor function.

Page 33: Search tree

Initial state: the root of the search tree is a search node.
Expanding: applying the successor function to the current state, thereby generating a new set of states.
Leaf nodes: the states having no successors, or those that have not yet been expanded (the fringe).

Refer to the next figure.

Page 34: Tree search example

(figure)

Page 35: Tree search example

(figure)

Page 36: Search tree

The essence of searching: in case the first choice is not correct, choose one option and keep the others for later inspection.

Hence we have the search strategy, which determines the choice of which state to expand; a good choice means less work and a faster search.

Important: state space ≠ search tree.

Page 37: Search tree

A state space may have only the unique states {A, B}, while a search tree over it may have cyclic paths: A-B-A-B-A-B-...

A good search strategy should avoid such paths.

Page 38: Search tree

A node has five components:
STATE: which state it is in the state space.
PARENT-NODE: the node from which it was generated.
ACTION: the action applied to its parent node to generate it.
PATH-COST: the cost, g(n), from the initial state to the node n itself.
DEPTH: the number of steps along the path from the initial state.
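These five components map directly onto a small data structure. A sketch in Python follows; the field names mirror the slide, everything else is an illustrative choice.

class Node:
    """A search-tree node holding the five components listed above."""

    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state                                    # STATE
        self.parent = parent                                  # PARENT-NODE
        self.action = action                                  # ACTION
        self.path_cost = (parent.path_cost if parent else 0) + step_cost   # PATH-COST g(n)
        self.depth = parent.depth + 1 if parent else 0        # DEPTH

    def path(self):
        """Return the list of actions leading from the initial state to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))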

Page 39: (figure)

Page 40: Search tree

In order to better organize the fringe, we use a queue to store it (FIFO). Operations on a queue:
MAKE-QUEUE(element, ...)
EMPTY?(queue)
FIRST(queue)
REMOVE-FIRST(queue)
INSERT(element, queue)
INSERT-ALL(elements, queue)
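For orientation only, these abstract operations correspond roughly to the standard operations on Python's collections.deque (the mapping below is an illustration, not part of the slides):

from collections import deque

fringe = deque()                  # MAKE-QUEUE()
fringe.append("A")                # INSERT(element, queue)      (FIFO: add at the back)
fringe.extend(["B", "C"])         # INSERT-ALL(elements, queue)
print(len(fringe) == 0)           # EMPTY?(queue)       -> False
print(fringe[0])                  # FIRST(queue)        -> "A"
print(fringe.popleft())           # REMOVE-FIRST(queue) -> "A"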

Page 41: Measuring problem-solving performance

The evaluation of a search strategy:
Completeness: is the strategy guaranteed to find a solution when there is one?
Optimality: does the strategy find the highest-quality solution when there are several different solutions?
Time complexity: how long does it take to find a solution?
Space complexity: how much memory is needed to perform the search?

Page 42: Measuring problem-solving performance

In AI, complexity is expressed in terms of:
b, the branching factor: the maximum number of successors of any node.
d, the depth of the shallowest goal node.
m, the maximum length of any path in the state space.

Time is measured in the number of nodes generated during the search, and space in the maximum number of nodes stored in memory.

Page 43: Measuring problem-solving performance

For the effectiveness of a search algorithm, we can consider the total cost:
total cost = path cost (g) + search cost,
where the search cost is the time necessary to find the solution.

Trade-off: (long search time, optimal solution with least g) vs. (shorter search time, solution with a slightly larger path cost g).

Page 44: 3.4 Uninformed search strategies

Uninformed search: no information about the number of steps or the path cost from the current state to the goal; the state space is searched blindly.

Informed search, or heuristic search: a cleverer strategy that searches toward the goal, based on the information from the current state gathered so far.

Page 45: Uninformed search strategies

Breadth-first search
Uniform cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search

Page 46: Breadth-first search

The root node is expanded first (FIFO). All the nodes generated by the root node are then expanded, and then their successors, and so on.
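A minimal sketch of breadth-first tree search using a FIFO fringe, written in the style of the Problem and Node classes sketched earlier (the names come from those illustrative sketches, not from an official implementation):

from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node first (FIFO fringe)."""
    root = Node(problem.initial_state)
    if problem.goal_test(root.state):
        return root.path()
    fringe = deque([root])                       # FIFO queue of unexpanded nodes
    while fringe:
        node = fringe.popleft()                  # shallowest node
        for action, next_state in problem.successors(node.state):
            child = Node(next_state, node, action,
                         problem.step_cost(node.state, action, next_state))
            if problem.goal_test(child.state):   # goal test on generation
                return child.path()
            fringe.append(child)
    return None                                  # no solution found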

Page 47: Breadth-first search (Analysis)

Breadth-first search is complete: it finds a solution eventually. It is optimal if the path cost is a non-decreasing function of the depth of the node.

The disadvantage: if the branching factor is large, then even for small instances (e.g., chess) the space complexity and the time complexity are enormous.

Page 48: Breadth-first search (Analysis)

(table of time and memory requirements, assuming 10,000 nodes can be processed per second, each taking 1,000 bytes of storage)
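The table itself did not survive in this transcript, but a rough calculation under those assumptions illustrates the point: with branching factor b = 10, the number of nodes generated down to depth d is about b + b^2 + ... + b^d, roughly 10^d. For d = 8 that is about 10^8 nodes, which takes around 10^4 seconds (roughly 3 hours) at 10,000 nodes per second, and needs about 10^11 bytes (roughly 100 GB) of memory at 1,000 bytes per node.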

Page 49: Uniform cost search

Breadth-first search finds the shallowest goal state, but that is not necessarily the least-cost solution; it works only if all step costs are equal.

Uniform cost search modifies the breadth-first strategy by always expanding the lowest-cost node, where cost is measured by the path cost g(n).
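A sketch of uniform cost search: it is the breadth-first sketch above with the FIFO fringe replaced by a priority queue ordered by g(n), here via Python's heapq (the tie-breaking counter is an implementation detail, not from the slides):

import heapq
from itertools import count

def uniform_cost_search(problem):
    """Always expand the node with the lowest path cost g(n)."""
    tie = count()                                # tie-breaker so heapq never compares Nodes
    root = Node(problem.initial_state)
    fringe = [(root.path_cost, next(tie), root)]
    while fringe:
        g, _, node = heapq.heappop(fringe)       # cheapest node so far
        if problem.goal_test(node.state):        # goal test on expansion, not on generation
            return node.path()
        for action, next_state in problem.successors(node.state):
            child = Node(next_state, node, action,
                         problem.step_cost(node.state, action, next_state))
            heapq.heappush(fringe, (child.path_cost, next(tie), child))
    return None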

Page 50: Uniform cost search

The first solution found is guaranteed to be the cheapest, even if it is not the shallowest. However, this is restricted to non-decreasing path costs, so the strategy is unsuitable for operators with negative cost.

Page 51: Depth-first search

Depth-first search always expands one of the nodes at the deepest level of the tree. Only when the search hits a dead end does it go back and expand nodes at shallower levels. A dead end is a leaf node that is not the goal.

Backtracking search: only one successor is generated on expansion rather than all successors, which requires even less memory.

Pages 52-64: Depth-first search (step-by-step expansion, shown as a figure sequence)

Expand the deepest unexpanded node.
Implementation: the fringe is a LIFO queue (a stack), i.e., successors are put at the front.
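A sketch of depth-first tree search matching the slides above: the same loop as the breadth-first sketch, but the fringe is a LIFO stack, so successors are taken from the front. The optional max_depth parameter is an illustrative addition that turns it into the depth-limited search described on the following pages.

def depth_first_search(problem, max_depth=None):
    """Depth-first tree search; pass max_depth to get depth-limited search."""
    fringe = [Node(problem.initial_state)]           # LIFO stack of unexpanded nodes
    while fringe:
        node = fringe.pop()                           # deepest node is on top
        if problem.goal_test(node.state):
            return node.path()
        if max_depth is not None and node.depth >= max_depth:
            continue                                  # cut off: do not expand below the limit
        for action, next_state in problem.successors(node.state):
            child = Node(next_state, node, action,
                         problem.step_cost(node.state, action, next_state))
            fringe.append(child)                      # successors go on top (the front)
    return None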

Page 65: Depth-first search (Analysis)

Not complete: a path may be infinite or looping, and then the search never fails on that path and never goes back to try another option.

Not optimal: it does not guarantee the best solution.

It overcomes the time and space problems of breadth-first search in many cases; in particular, its memory requirement is modest, since only the current path and its unexpanded siblings need to be stored.

Page 66: Depth-limited search

Depth-limited search is depth-first search with a predefined maximum depth. However, it is usually not easy to define a suitable maximum depth: if it is too small, no solution can be found; if it is too large, the same problems as in depth-first search are suffered.

With a large enough limit the search is complete, but it is still not optimal.

Page 67: Iterative deepening search (IDS)

No choosing of a best depth limit is needed: IDS tries all possible depth limits, first 0, then 1, then 2, and so on. It combines the benefits of depth-first and breadth-first search.
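Iterative deepening can then be sketched as repeated depth-limited searches with an increasing limit, reusing the depth-first sketch above (again an illustration, not the slides' own code):

from itertools import count

def iterative_deepening_search(problem):
    """Try depth limits 0, 1, 2, ... until a solution is found."""
    for limit in count(0):                        # 0, 1, 2, ... (runs forever if no solution exists)
        solution = depth_first_search(problem, max_depth=limit)
        if solution is not None:
            return solution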

Page 68: Iterative deepening search

(figure)

Page 69: Iterative deepening search (Analysis)

Optimal and complete, with reasonable time and space complexities.

Suitable for problems with a large search space where the depth of the solution is not known.

Page 70: Iterative lengthening search (ILS)

IDS uses depth as its limit; ILS uses path cost as its limit. It is an iterative version of uniform cost search and has the advantages of uniform cost search while avoiding its memory requirements, but ILS incurs substantial overhead compared to uniform cost search.

Page 71: Bidirectional search

Run two simultaneous searches: one forward from the initial state and another backward from the goal, and stop when the two searches meet.

However, computing backward is difficult: there may be a huge number of goal states; at a goal state, which actions were used to reach it? Can the actions be reversed to compute its predecessors?

Page 72: (figure)

Page 73: Comparing search strategies

(table)

Page 74: Avoiding repeated states

For all search strategies, there is a possibility of expanding states that have already been encountered and expanded before on some other path. This may cause the search path to become infinite and loop forever.

Algorithms that forget their history are doomed to repeat it.

Page 75: Avoiding repeated states

Three ways to deal with this possibility:
Do not return to the state the search just came from: refuse to generate any successor that is the same as the parent state.
Do not create paths with cycles: refuse to generate any successor that is the same as any of its ancestor states.
Do not generate any state that has already been generated: not only the ancestor states but all other expanded states have to be checked against.

Page 76: Avoiding repeated states

We then define a data structure, the closed list: a set storing every expanded node so far. If the current node matches a node on the closed list, discard it.
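A sketch of how the closed list plugs into the search loop: a "graph search" version of the earlier breadth-first sketch. Storing states in a Python set assumes states are hashable, which is an implementation choice rather than something the slides require.

from collections import deque

def breadth_first_graph_search(problem):
    """Breadth-first search that discards nodes whose state is already on the closed list."""
    fringe = deque([Node(problem.initial_state)])
    closed = set()                                # closed list: every expanded state so far
    while fringe:
        node = fringe.popleft()
        if node.state in closed:
            continue                              # already expanded on another path: discard
        if problem.goal_test(node.state):
            return node.path()
        closed.add(node.state)
        for action, next_state in problem.successors(node.state):
            fringe.append(Node(next_state, node, action,
                               problem.step_cost(node.state, action, next_state)))
    return None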

Page 77: Searching with Partial Information

What happens when knowledge of the states or actions is incomplete? This leads to 3 distinct problem types:
Sensorless or conformant problems
Contingency problems
Exploration problems

Page 78: Sensorless (conformant) problems

The agent has no sensors. It could be in one of several possible initial states, and each action may lead to one of several possible successor states. The set of states the agent believes it might be in is called a belief state: its current belief about the possible physical states.

Page 79: (figure)

Page 80: Contingency problems

The environment is such that the agent can obtain new information from its sensors after acting, and the effects of actions are uncertain. Hence it builds a tree to represent the different possible effects of an action: the contingencies.

The agent then interleaves search and execution ("search, execute, search, execute, ...") rather than performing a single "search, execute".

Page 81: Exploration problems

The states and actions of the environment are unknown, so the agent has to experiment: it performs actions and observes their results. This involves significant danger, because it may cause the agent to be really damaged.

If it survives, the agent learns the environment and the effects of its actions, which it can then use to solve subsequent problems.