Problem Solving and Search
School of Computer Science & Engineering Chung-Ang University
Artificial Intelligence
Dae-Won Kim
Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms
Problem-Solving Agents
On holiday in Romania; currently in Arad.
Flight leaves tomorrow for Bucharest.
Goal: be in Bucharest
Solution: a sequence of cities (a route from Arad to Bucharest)
Performance measure: ???
Problem formulation:
states – various cities
actions – drive between cities
Problem Formulation: How To vs. Problem Modeling
A problem is defined by four items:
• Initial state
• Successor function:
set of action-state pairs
• Goal test
• Path cost
A solution is a sequence of actions leading from the initial state to a goal state.
We then look for an algorithm that finds such a solution.
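The four-item problem definition above can be sketched as an abstract class; the method names and the default unit step cost are illustrative, not a fixed API:

```python
class Problem:
    """A search problem: initial state, successor function, goal test, path cost."""

    def __init__(self, initial):
        self.initial = initial          # initial state

    def successors(self, state):
        """Return the set of (action, next_state) pairs reachable from state."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if state is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum of step costs."""
        return 1                        # default: unit cost per action
```

Concrete problems (Romania, vacuum world, 8-puzzle) subclass this and fill in the four items.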
Problem Formulation: Romania
• Initial state:
• Successor function:
• Goal test:
• Path cost:
• Initial state: x = “at Arad”
• Successor function:
S = {<Arad → Zerind, Zerind>, …}
• Goal test: x = “at Bucharest”
• Path cost: sum of distances
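As a sketch, the Romania formulation can be written as plain data plus two functions. The adjacency dict shows only a fragment of the standard AIMA road map (distances in km):

```python
# Fragment of the Romania road map: city -> {neighbor: road distance in km}
romania = {
    "Arad":   {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Sibiu":  {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
}

def successors(state):
    """Successor function: a set of <action, next_state> pairs, e.g. <Arad→Zerind, Zerind>."""
    return [(f"{state}→{city}", city) for city in romania.get(state, {})]

def goal_test(state):
    """Goal test: are we at Bucharest?"""
    return state == "Bucharest"
```

The path cost of a route is then the sum of the road distances along it.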
Problem Formulation: Vacuum Cleaner
• States:
• Actions:
• Goal test:
• Path cost:
• States: integer dirt and robot locations
• Actions: left, right, suck, stay
• Goal test: no dirt
• Path cost: 1 per action (0 for stay)
Problem Formulation: Robot Assembly
• States: real-valued coordinates of joint angles
• Actions: continuous motions of robot joints
• Goal test: complete assembly
• Path cost: time to execute
Problem Formulation: The 8-Puzzle
• States ?
• Actions ?
• Goal test ?
• Path cost ?
How can we reach the goal state from the initial state through a complex state space?
Answer: Tree Search Algorithms
Idea: exploration of state space by generating successors of already-explored states (expanding states)
Implementation: States vs. Nodes
A state is (a representation of) a physical configuration
A node is a data structure constituting part of a search tree; it includes parent, children, depth, and path cost.
A search strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following dimensions:
• Completeness
• Time complexity
• Space complexity
• Optimality
Uninformed Search Strategies
Uninformed search strategies use only the information available in the problem definition.
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
Breadth-First Search
Expand shallowest unexpanded node.
Implementation: FIFO queue
• Complete?
• Time complexity?
• Space complexity?
• Optimal?
• Complete? Yes (if b is finite)
• Time complexity? O(b^(d+1))
• Space complexity? O(b^(d+1))
• Optimal? Yes (if cost = 1 per step)
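A minimal BFS sketch with a FIFO queue; the successor-function API (yielding (action, next_state) pairs) is an assumption for illustration:

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Expand the shallowest unexpanded node first (FIFO frontier)."""
    if goal_test(initial):
        return []                       # already at the goal: empty action sequence
    frontier = deque([(initial, [])])   # queue of (state, path of actions)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()        # FIFO: oldest (shallowest) first
        for action, nxt in successors(state):
            if nxt in explored:
                continue                        # skip repeated states
            if goal_test(nxt):
                return path + [action]          # solution: sequence of actions
            explored.add(nxt)
            frontier.append((nxt, path + [action]))
    return None                                 # no solution exists
```

Because nodes are expanded in order of depth, the first solution found uses the fewest actions, which is optimal when every step costs 1.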
Uniform-Cost Search
Expand least-cost unexpanded node
using queue ordered by path cost
Equivalent to BFS if step costs equal.
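A sketch of uniform-cost search using a priority queue ordered by path cost g(n); here successors is assumed to yield (action, next_state, step_cost) triples:

```python
import heapq

def uniform_cost_search(initial, goal_test, successors):
    """Expand the least-cost unexpanded node first (priority queue on g)."""
    frontier = [(0, initial, [])]       # heap of (g, state, path of actions)
    best_g = {initial: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return g, path              # goal test at expansion preserves optimality
        if g > best_g.get(state, float("inf")):
            continue                    # stale entry; a cheaper path was found
        for action, nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [action]))
    return None
```

With all step costs equal, the priority order matches depth order and this reduces to BFS.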
Depth-First Search
Expand deepest unexpanded node.
Implementation: LIFO queue
• Complete?
• Time complexity?
• Space complexity?
• Optimal?
• Complete? No (fails in infinite-depth spaces or with loops)
• Time complexity? O(b^m)
• Space complexity? O(bm), i.e., linear space
• Optimal? No
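A DFS sketch with an explicit LIFO stack; the path-based loop check below is an illustrative choice (it keeps space linear but does not make DFS complete in infinite spaces):

```python
def depth_first_search(initial, goal_test, successors):
    """Expand the deepest unexpanded node first (LIFO frontier)."""
    stack = [(initial, [initial])]      # stack of (state, path of states)
    while stack:
        state, path = stack.pop()       # LIFO: most recently added (deepest) first
        if goal_test(state):
            return path
        for _action, nxt in successors(state):
            if nxt not in path:         # avoid loops along the current path only
                stack.append((nxt, path + [nxt]))
    return None
```

Only the current path and its siblings are stored, which is where the O(bm) space bound comes from.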
Depth-Limited Search
= DFS with depth limit (L).
i.e., nodes at depth (L) have no successors
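Depth-limited search can be sketched recursively; returning the sentinel "cutoff" (an illustrative convention) distinguishes "no solution within the limit" from "no solution at all":

```python
def depth_limited_search(state, goal_test, successors, limit):
    """DFS that treats nodes at depth == limit as having no successors.
    Returns a path of states, or "cutoff" if the limit was reached, or None."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"                 # limit reached: stop expanding this branch
    cutoff_occurred = False
    for _action, nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result     # prepend current state to the found path
    return "cutoff" if cutoff_occurred else None
```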
Iterative Deepening Search
• Complete?
• Time complexity?
• Space complexity?
• Optimal?
• Complete? Yes
• Time complexity? O(b^d)
• Space complexity? O(bd), i.e., linear space
• Optimal? Yes (if step cost = 1)
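Iterative deepening simply reruns depth-limited DFS with limits 0, 1, 2, ...; this sketch inlines the depth-limited step (the "cutoff" sentinel and max_depth cap are illustrative):

```python
def iterative_deepening_search(initial, goal_test, successors, max_depth=50):
    """IDS: DFS-like O(bd) space with BFS-like completeness, and optimality
    for unit step costs (the shallowest solution is found first)."""

    def dls(state, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return "cutoff"
        cutoff_occurred = False
        for _action, nxt in successors(state):
            result = dls(nxt, limit - 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return [state] + result
        return "cutoff" if cutoff_occurred else None

    for limit in range(max_depth + 1):  # deepen the limit one level at a time
        result = dls(initial, limit)
        if result != "cutoff":
            return result               # a solution path, or None (no solution)
    return None
```

Re-expanding shallow nodes at every iteration costs only a constant factor, since most nodes live at the deepest level.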
Informed Search Methods : The Basics
School of Computer Science & Engineering Chung-Ang University
Artificial Intelligence
Dae-Won Kim
Outline
• Best-first search
• Greedy search
• A* search
• Branch and Bound
A strategy is defined by picking the order of node expansion.
An informed search strategy can find solutions more efficiently than an uninformed one.
It uses problem-specific knowledge beyond the definition of the problem itself.
Best-First Search
Idea: use an evaluation function for each node.
• Estimate the “desirability” of each node
• Expand most desirable unexpanded node
Special cases:
• Greedy search
• A* search
• Branch and Bound
Romania Example with Step Costs
Greedy Search
We need an evaluation function: a heuristic function h(n)
h(n) = estimate of cost from n to the closest goal
h(n) = straight-line distance from n to Bucharest
Greedy search expands the node that appears to be closest to goal.
Properties of greedy search
• Complete?
• Time complexity?
• Space complexity?
• Optimal?
• Complete? No (can get stuck in loops);
  Yes in a finite space with repeated-state checking
• Time complexity? O(b^m), so a good heuristic is needed
• Space complexity? O(b^m), keeps all nodes in memory
• Optimal? No
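Greedy best-first search can be sketched as best-first search with evaluation function f(n) = h(n); the dict-based heuristic and successor API are illustrative:

```python
import heapq

def greedy_search(initial, goal_test, successors, h):
    """Always expand the unexpanded node with the smallest h(n),
    i.e., the one that appears closest to the goal."""
    frontier = [(h(initial), initial, [initial])]   # heap of (h, state, path)
    explored = set()                    # repeated-state check (helps completeness)
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for _action, nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```

Note that path cost is ignored entirely, which is exactly why greedy search is not optimal.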
What is A* Search?
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost to goal from n
• f(n) = estimated total cost through n to goal
A* search with an admissible heuristic is optimal:
• h(n) ≤ h*(n), where h*(n) is the true cost from n
• h(n) ≥ 0, so h(Goal) = 0
e.g., the straight-line distance h_straight(n) never overestimates the actual road distance.
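A* is uniform-cost search with the priority changed from g(n) to f(n) = g(n) + h(n); as before, the (action, next_state, step_cost) successor API is an illustrative assumption:

```python
import heapq

def a_star_search(initial, goal_test, successors, h):
    """Expand the node minimizing f(n) = g(n) + h(n). With an admissible h,
    the first goal node popped from the frontier is an optimal solution."""
    frontier = [(h(initial), 0, initial, [initial])]    # (f, g, state, path)
    best_g = {initial: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                    # stale entry; a cheaper path was found
        for _action, nxt, cost in successors(state):
            new_g = g + cost            # g(n') = cost so far to reach n'
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None
```

With h(n) = 0 for all n this degenerates to uniform-cost search, which is one way to see that A* generalizes it.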
Romania Example with A* Search
Properties of A* Search
• Complete?
• Time complexity?
• Space complexity?
• Optimal?
• Complete? Yes
• Time complexity? Exponential in [relative error in h × length of solution]
• Space complexity? Keeps all nodes in memory.
• Optimal? Yes
Q: Explain why A* is optimal.
Admissible Heuristics for the 8-puzzle
Guess an h-function: f = g + h
h1(n) = number of misplaced tiles
h1(n) = 6
h2(n) = total Manhattan distance
h2(n) = 4 + 0 + 3 + 3 + 1 + 0 + 2 + 1 = 14
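The two heuristics above can be sketched for a 3x3 puzzle encoded as a tuple of 9 entries with 0 for the blank (the goal ordering below is an illustrative convention; the slide's figure may use a different one):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal configuration; 0 = blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """Total Manhattan distance: for each tile, |row error| + |column error|."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                    # the blank contributes nothing
        goal_i = GOAL.index(tile)
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total
```

Both are admissible: a misplaced tile needs at least one move, and each tile needs at least its Manhattan distance in moves.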
Admissible Heuristics & Dominance
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.
• IDS = 50,000,000,000 nodes
• A*(h1) = 39,135 nodes
• A*(h2) = 1,641 nodes
Q: How to find good heuristics?
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
The optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.