Informed search algorithms
Section 3.5 Russell & Norvig
2nd Term 2011 Informed Search - Lecture 1
Outline
• Review
• Informed search
• Greedy search
Goals of this part of the Course
• Primary: show how search permeates much of AI problem-solving
  – puzzles & games
  – scheduling problems
  – planning
• Formulate search problems
• Create heuristics using abstraction
• Reduce the search needed to solve problems
The Importance of Research in Search
• Search: core AI problem-solving approach.
• However, search is in general intractable (worst-case exponential).
• Improvements to search algorithms affect many other areas of AI.
Ubiquity of Search
• Solving puzzles
• Planning
• Playing games
• Machine learning
• Pattern matching
• Satisfying constraints
• Vision processing
• Almost all problem-solving
State Space Search
• Search is done through a “space” of states.
• Edges connect states and have associated costs and directions.
• A problem is specified by an initial state and a goal test.
• Solution: a path from the initial state to a state that satisfies the goal test.
• Solution cost = sum of edge costs along the solution path.
Formal Definition of “solution to problem”
/* solution(+Problem, ?Solution)
   A sequence of states is a solution to a problem iff:
   - the first state in the sequence is the initial state of the problem,
   - the last state in the sequence satisfies the goal test, and
   - each state in the sequence is a neighbor of its preceding state. */
solution(problem(State, Goal), [State]) :-
    call(Goal, State).    %% executes “Goal(State)”
solution(problem(State, Goal), [State | RestOfSolution]) :-
    neighbor(State, Neighbor),
    solution(problem(Neighbor, Goal), RestOfSolution).
The definition above is a valid Prolog program: it can check whether a given list of states is a solution to a given problem, or, given only the problem, generate a solution to it.
You need to define the domain (neighbor/2) and your goal predicate (Goal/1).
Example
Domain Definition:
neighbor(losAngeles, sanFrancisco).
neighbor(losAngeles, sanDiego).
neighbor(sanFrancisco, portland).
neighbor(sanFrancisco, lasVegas).
neighbor(portland, seattle).

Goal Definition:
reachedHome(seattle).
| ?- solution(problem(losAngeles, reachedHome), Solution).
Solution = [losAngeles,sanFrancisco,portland,seattle] ?
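For readers less familiar with Prolog, the same recursive definition can be transliterated into Python. This is a sketch of my own, not part of the lecture's code; the dictionary and function names mirror the slide's predicates, and, like the Prolog version, it can loop on cyclic domains.

```python
# Sketch: a Python transliteration of the recursive solution/2 predicate.
# The neighbor relation and goal test mirror the slide's Prolog example.
NEIGHBOR = {
    "losAngeles": ["sanFrancisco", "sanDiego"],
    "sanFrancisco": ["portland", "lasVegas"],
    "portland": ["seattle"],
}

def reached_home(state):
    return state == "seattle"

def solution(state, goal):
    """Return a path from state to a goal state, or None (depth-first)."""
    if goal(state):                      # first clause: the goal test holds
        return [state]
    for nb in NEIGHBOR.get(state, []):   # second clause: try each neighbor
        rest = solution(nb, goal)
        if rest is not None:
            return [state] + rest
    return None                          # no neighbor leads to a goal
```

As in the Prolog run, the first solution found from losAngeles goes through sanFrancisco and portland, because neighbors are tried in the order they are listed.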
Run through example
• Do the example in Emacs under SICStus Prolog
Search Space
What happens if the clauses are in a different order?
Tree Search
• Keeps a record of the current path and the choice points along it (to visit if the current path is abandoned).
• [Can check for duplicate states along the current path, avoiding loops.]
• No global duplicate-state checking.
• When a goal state is found, the solution is simply the current path.
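The bookkeeping above can be sketched iteratively. This Python sketch is my own illustration: the stack holds the choice points, each paired with the path that reached it, and the loop check inspects only the current path.

```python
# Sketch: iterative tree search. The stack holds choice points; each entry
# carries the path that led to it. States already on the current path are
# skipped (loop check), but there is no global duplicate-state check.
def tree_search(start, goal_test, neighbors):
    stack = [(start, [start])]           # choice points: (state, path so far)
    while stack:
        state, path = stack.pop()
        if goal_test(state):
            return path                  # the solution is simply the path
        for nb in neighbors(state):
            if nb not in path:           # avoid loops along the current path
                stack.append((nb, path + [nb]))
    return None
```

Because the stack is LIFO, this behaves like the depth-first search that Prolog performs on the naive solution.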
Naive solution implementation
• Prolog has its own search procedure for executing a program: depth-first search.
• Our naive solution’s search strategy is Prolog’s and has all the advantages & disadvantages of depth-first search.
Status of Tree Search
• Advantages:
  – Only needs to store the current path
    • Linear memory cost
  – Can use simpler logic (lower cost per node)
• Disadvantages:
  – Non-optimal solutions
  – Repeats search for duplicate states
  – Incomplete (for infinite graphs)
Graph Search
• Primarily does a type of breadth-first search.
• Does a global check for duplicate states.
• Keeps the whole search graph in memory.
• When a goal state is found, the solution needs to be extracted from the search graph.
Graph search
Notes:
1. The fringe is the set of leaf nodes.
2. Remove-Front embodies the search strategy.
3. Avoid redundant searches for duplicate states.
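The three notes above can be sketched as follows. This is my own Python illustration, assuming a FIFO Remove-Front (breadth-first behavior); a parent-pointer table plays the role of the closed record, so the solution can be extracted from the search graph once a goal state is found.

```python
from collections import deque

# Sketch: graph search with a FIFO fringe (breadth-first Remove-Front) and a
# global record of seen states. Parent pointers are kept so the solution path
# can be extracted from the search graph once a goal state is found.
def graph_search(start, goal_test, neighbors):
    fringe = deque([start])
    parent = {start: None}               # doubles as the closed/seen set
    while fringe:
        state = fringe.popleft()         # Remove-Front: the search strategy
        if goal_test(state):
            path = []                    # extract solution via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nb in neighbors(state):
            if nb not in parent:         # global duplicate-state check
                parent[nb] = state
                fringe.append(nb)
    return None
```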
Graph version of solution
/* solution(+Problem, -Solution) */
solution(problem(InitialState, Goal), Solution) :-
    solution(Goal, [node(InitialState, nil)], [], Solution).

/* solution(+Goal, +Fringe, +Closed, -Solution) */
solution(Goal, [node(State, ParentState) | _], Closed, Solution) :-
    call(Goal, State),
    extractSolution(ParentState, Closed, [State], Solution).
solution(Goal, [node(State, Parent) | RestNodes], Closed, Solution) :-
    findall(NeighborNode,
            newNeighborNode(State, Closed, NeighborNode),
            NeighboringNodes),
    updateClosed(State, Closed, NewClosed),
    orderFringe(RestNodes, NeighboringNodes, NewFringe),
    solution(Goal, NewFringe, [node(State, Parent) | NewClosed], Solution).
Status of Graph Search
• Possible advantages:
  – Complete
  – Optimal
  – Searches each subspace only once
• These advantages depend on the strategy.
• Disadvantages:
  – Exponential memory costs
  – More complex logic
Outline
• Review
• Best-first search
• Greedy search
Search strategies
• A search strategy is defined by picking the order of node expansion.
• Let g(n) be the cost of the path from the initial state to n’s state.
• The depth-first strategy: pick the node with the highest g-value.
• The breadth-first strategy: pick the node with the lowest g-value.
  (With unit step costs, g(n) is just the depth of n.)
Best-first search strategy
• Given a set of nodes on the fringe of a search, which one is best to expand next?
  – Based on what criteria?
• Criterion: expand the best nodes first, i.e., those along an optimal solution path.
  – How do we do that?
• Use additional information to suggest such nodes.
Informed Search Strategies
• Informed search strategies use information beyond the problem description.
• We will only look at functions that “guess” the distance from a state to the nearest goal state.
• h(n) is the function that estimates how far n’s state is from its nearest goal state.
Romania with step costs in km
Best-first search
• Idea: use an evaluation function f(n) for each node
  – f(n) is an estimate of the "desirability" of a node
  – Expand the most desirable unexpanded node
• Implementation: order the fringe by increasing f (by convention, lower f means more desirable)
• Uninformed search as special cases:
  – Depth-first: f(n) = -g(n)
  – Breadth-first: f(n) = g(n)
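Under the lower-f-first convention, one priority-queue implementation covers all of these strategies. A Python sketch of my own, assuming unit step costs (so g counts depth):

```python
import heapq

# Sketch: best-first search with an arbitrary evaluation function f(g, state).
# The fringe is a priority queue ordered by f; the lowest f is expanded first.
# f = -g gives depth-first behavior, f = g gives breadth-first (unit costs).
def best_first(start, goal_test, neighbors, f):
    fringe = [(f(0, start), 0, start, [start])]   # entries: (f, g, state, path)
    seen = set()
    while fringe:
        _, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path
        if state in seen:
            continue                              # skip duplicate expansions
        seen.add(state)
        for nb in neighbors(state):
            if nb not in seen:
                heapq.heappush(fringe, (f(g + 1, nb), g + 1, nb, path + [nb]))
    return None
```

Passing `lambda g, s: g` reproduces breadth-first search; `lambda g, s: -g` reproduces depth-first search.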
Best-first informed search strategies
• Greedy Search
• A* Search
• Iterative Deepening A* (IDA*)
• Weighted A* Search
Outline
• Review
• Best-first search
• Greedy search
Greedy search
• Evaluation function: f(n) = h(n)
• h(n) = estimated cost from n to the nearest goal
  – e.g., hSLD(n) = straight-line distance from n to Bucharest
• Greedy search expands the node that appears to be closest to a goal.
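On a small fragment of the Romania map, greedy search then looks like this. A Python sketch of my own; the hSLD values are the approximate straight-line distances to Bucharest from Russell & Norvig, and only a few roads are included.

```python
import heapq

# Sketch: greedy best-first search, f(n) = h(n). H_SLD holds straight-line
# distances to Bucharest (approximate values from Russell & Norvig).
H_SLD = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Fagaras": 176, "RimnicuVilcea": 193, "Bucharest": 0}
ROADS = {"Arad": ["Zerind", "Sibiu", "Timisoara"],
         "Sibiu": ["Arad", "Fagaras", "RimnicuVilcea"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "RimnicuVilcea": ["Sibiu"]}

def greedy_search(start, goal):
    fringe = [(H_SLD[start], start, [start])]     # ordered by h alone
    seen = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)    # expand the lowest h first
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nb in ROADS.get(state, []):
            if nb not in seen:
                heapq.heappush(fringe, (H_SLD[nb], nb, path + [nb]))
    return None
```

The route found, Arad to Sibiu to Fagaras to Bucharest, is the textbook's greedy result; on the full map it is 450 km, longer than the optimal 418 km route through Rimnicu Vilcea and Pitesti, which is one reason greedy search is not optimal.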
Greedy best-first search example
Why greedy search is attractive
• With a good enough heuristic, it goes almost directly to the goal.
• Best case: time and space are linear
• So, why not always do greedy search?
Properties of greedy best-first search
• Complete? No: it has the same problem with infinite graphs as depth-first search.
• Time? O(b^m), but a good heuristic can give dramatic improvement.
• Space? O(b^m): keeps all nodes in memory.
• Optimal? No.
  (b = branching factor, m = maximum depth of the search space)
Greedy Search in Prolog
/* solution(+Heuristic, +Goal, +Fringe, +Closed, -Solution) */
solution(_Heuristic, Goal, [Node | _], Closed, Solution) :-
    node(Node, State, ParentState, _FValue),
    test(Goal, State),
    extractSolution(ParentState, Closed, [State], Solution).
solution(Heuristic, Goal, [Node | RestNodes], Closed, Solution) :-
    nodeState(Node, State),
    findall(NeighborNode,
            newNeighborNode(State, Heuristic, [Node | Closed], NeighborNode),
            NeighboringNodes),
    orderFringe(RestNodes, NeighboringNodes, NewFringe),
    solution(Heuristic, Goal, NewFringe, [Node | Closed], Solution).
Summary
• A search strategy defines a traversal of the search space, e.g., pick the node with the lowest f(n).
• Informed search strategies use information outside of the problem description.
• One such type of information is the estimated distance to the nearest goal: h(n).
• Greedy search: f(n) = h(n).
Challenge
• Can you create a state-space representation for the following domains?
  – scheduling taxi service in Auckland
  – playing chess
  – getting a degree at UofA
  – enjoying your life
• You need to represent states of the world, actions that change states, problems, and solutions.
Next Time
• Look at:
  – A* search
  – IDA*
  – Heuristics