TRANSCRIPT
10-9-2013
Reasoning
Unification
Forward & Backward Chaining
Reading: AIMA Chapter 9 (Inference in FOL)
HW#5 posted: due Wed., 10/20/10
1. “John likes pizza”.
2. “John likes all kinds of food”.
3. “Steve only likes easy courses”.
4. “All science courses are hard.”
5. “Everybody loves somebody.”
6. “Abraham is the father of Isaac.”
7. “John gave the book to Mary.”
8. “There’s a book on the table.”
9. “There’s exactly one book on the table.”
diplomat(dad(john)). means "John's father is a diplomat."
test: substituting Sam, John's father, for dad(john) yields diplomat(sam). makes sense.
(∀x)[ easy( course(x) ) ⇒ likes(steve, x) ].
test: substituting "true" (or "false") for course(x) yields easy(true). doesn't make sense: predicates are truth-valued, so course(x) cannot be used as a term here.
not well-formed!
6. Abraham is the father of Isaac.
predicates: father-of(x,y) means "x is y's father"
constants: abraham, isaac
function: dad(x) evaluates to x's biological father
father-of(abraham, isaac).
note substitution: father-of(dad(isaac), isaac).
7. John gave the book to Mary.
predicate: gave(x,y,z) means "x gave y a z"
constants: book23, mary
gave(john, mary, book23).
8. There's a book on the table.
predicates: on(x,y) means "x is on top of y", book(x) means "x is a book"
constant: table
(∃x)[ book(x) ∧ on(x, table) ].
9. There’s exactly one book on the table.
(∃x)[ book(x) ∧ on(x, table) ∧ (∀y){ book(y) ∧ on(y, table) ⇒ x = y } ].
(∃!x) is a convenient shorthand to express "there exists a unique x such that…", but it is not part of FOL (so don't use it, at least for now)
10. There are exactly two books on the table.
(∃x)(∃y)[
[ { book(x) ∧ on(x, table) } ∧ { book(y) ∧ on(y, table) } ∧ x ≠ y ] ∧
[ (∀z){ book(z) ∧ on(z, table) ⇒ (z = x) ∨ (z = y) } ] ].
The first bracketed conjunct says "there are at least two books on the table";
the second says "there are no more than two books on the table."
First-order logic:
objects and relations are semantic primitives
syntax: constants, functions, predicates, equality, quantifiers
Increased expressive power: sufficient to define wumpus world
Model checking
easy to program, but space & time inefficient
Inference rules + algorithm
Proof = a sequence of inference rule applications. Can use inference rules as operators in a standard search algorithm.
Need to account for variables & quantifiers
Typically require transformation of sentences into a normal (or canonical) form
Suppose you “tell” the agent:
1. likes(john, pizza).
2. (∀x)[ food(x) ⇒ likes(john, x) ]
3. food(pizza).
4. food(zucchini).
Universal Instantiation: since 2. is true for every x in the domain, it is also true for any specific object in the domain. That is, we can substitute zucchini for x:
5. food(zucchini) ⇒ likes(john, zucchini).
then 4. & 5. and Modus ponens yields
6. likes(john, zucchini).
Universal Instantiation (UI):
∀v α
----------------
Subst({v/g}, α)
for any variable v and ground term g
i.e. given (∀x) p(x) as true, infer p(a).
Existential Instantiation (EI): for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base:
∃v α
----------------
Subst({v/k}, α)
e.g. ∃x [ Crown(x) ∧ OnHead(x,John) ] yields:
Crown(C1) ∧ OnHead(C1,John)
provided C1 is a new constant symbol, called a Skolem constant
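Both UI and EI come down to applying a substitution. A minimal sketch (the term encoding is an assumption of this sketch, not part of the lecture: a variable is a string starting with '?', a constant is any other string, and a compound expression is a tuple (functor, arg1, ..., argN)):

```python
def subst(theta, term):
    """Apply substitution theta (a dict mapping variables to terms) to term."""
    if isinstance(term, tuple):                 # compound: recurse into arguments
        return (term[0],) + tuple(subst(theta, a) for a in term[1:])
    return theta.get(term, term)                # bound variable, else unchanged

# Universal Instantiation: substitute the ground term zucchini for ?x in
#   food(?x) => likes(john, ?x)
rule = ('=>', ('food', '?x'), ('likes', 'john', '?x'))
print(subst({'?x': 'zucchini'}, rule))
# ('=>', ('food', 'zucchini'), ('likes', 'john', 'zucchini'))

# Existential Instantiation: replace ?x with a fresh Skolem constant C1 in
#   Crown(?x) ∧ OnHead(?x, John)
sentence = ('and', ('Crown', '?x'), ('OnHead', '?x', 'John'))
print(subst({'?x': 'C1'}, sentence))
# ('and', ('Crown', 'C1'), ('OnHead', 'C1', 'John'))
```

For EI the caller must supply a constant ('C1' here) that appears nowhere else in the KB; the code does not check that.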
examples of unification
var <- constant
var <- another var (“renaming”)
var <- function (not involving the same var)
“occurs check”
common errors
examples of skolemizing
most general unifier
process of matching expressions and finding a “unifier” (substitution) that makes the two expressions identical
Unify(α,β) = θ if αθ = βθ
p              | q                | θ
Knows(John,x)  | Knows(John,Jane) | {x/Jane}
Knows(John,x)  | Knows(y,OJ)      | {x/OJ, y/John}
Knows(John,x)  | Knows(y,Mom(y))  | {y/John, x/Mom(John)}
Knows(John,x)  | Knows(x,OJ)      | fail
Standardizing apart eliminates overlap of variables, e.g., renaming the second sentence to Knows(z17,OJ) lets the last pair unify.
To unify Knows(John,x) and Knows(y,z): θ = {y/John, x/z} or θ = {y/John, x/John, z/John}.
The first unifier is more general than the second.
There is a single most general unifier (MGU), unique up to renaming of variables: MGU = {y/John, x/z}.
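The unification algorithm, including the occurs check, can be sketched in a few lines. The encoding below is an assumption of this sketch (variables are strings starting with '?', compound terms are tuples), and bindings are kept in triangular form, so a bound variable may itself need resolving through θ:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def occurs(v, t, theta):
    """Occurs check: does variable v appear inside term t (resolved via theta)?"""
    if t == v:
        return True
    if is_var(t) and t in theta:
        return occurs(v, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(v, arg, theta) for arg in t[1:])
    return False

def unify(x, y, theta=None):
    """Return an MGU extending theta, or None on failure."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None                       # distinct constants or functors: fail

def unify_var(v, t, theta):
    if v in theta:
        return unify(theta[v], t, theta)
    if is_var(t) and t in theta:
        return unify(v, theta[t], theta)
    if occurs(v, t, theta):
        return None                   # e.g. unify(?x, Mom(?x)) must fail
    return {**theta, v: t}

print(unify(('Knows', 'John', '?x'), ('Knows', 'John', 'Jane')))
# {'?x': 'Jane'}
print(unify(('Knows', 'John', '?x'), ('Knows', '?y', 'OJ')))
# {'?y': 'John', '?x': 'OJ'}
print(unify(('Knows', 'John', '?x'), ('Knows', '?x', 'OJ')))
# None -- fail, as in the table, until the variables are standardized apart
print(unify('?x', ('Mom', '?x')))
# None -- rejected by the occurs check
```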
Generalized Modus Ponens (GMP):
p1', p2', … , pn', ( p1 ∧ p2 ∧ … ∧ pn ⇒ q )
---------------------------------------------
qθ
where pi'θ = piθ for all i
p1' is King(John)        p1 is King(x)
p2' is Greedy(y)         p2 is Greedy(x)
θ is {x/John, y/John}    q is Evil(x)
qθ is Evil(John)
GMP is used with a KB of definite clauses (exactly one positive literal)
All variables are assumed universally quantified
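The side condition pi'θ = piθ can be checked mechanically. A minimal sketch on the King/Greedy example (the tuple encoding of atoms is an assumption of this sketch):

```python
def subst(theta, t):
    """Apply substitution theta to an atom or term."""
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return theta.get(t, t)

theta = {'x': 'John', 'y': 'John'}

p1_prime, p1 = ('King', 'John'), ('King', 'x')
p2_prime, p2 = ('Greedy', 'y'),  ('Greedy', 'x')
q = ('Evil', 'x')

# GMP side condition: each known fact and its premise coincide under theta
assert subst(theta, p1_prime) == subst(theta, p1)
assert subst(theta, p2_prime) == subst(theta, p2)

print(subst(theta, q))   # ('Evil', 'John')
```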
Mr. Coleman wishes to call his wife. Her answering service tells him that his wife, Dr. Coleman, is visiting Dr. Gordon and that Dr. Gordon is at patient Wagner’s residence.
What kind of agent would you
choose for the intelligent
answering service, and why?
premise 1: visits(coleman, gordon).
premise 2: at(gordon, wagner).
premise 3: (∀X)(∀Y)(∀Z)[ visits(X,Y) ∧ at(Y,Z) ⇒ at(X,Z) ].
premise 4: (∀U)(∀V)[ at(U,V) ⇒ number(U, lookup(V)) ].
theorem: (∃N)[ number(coleman, N) ].
Finding proofs is exactly like finding solutions to search problems.
Can search forward (forward chaining) to derive goal or search backward (backward chaining) from the goal.
In the worst case, searching for proofs is no more efficient than enumerating models, but in many practical cases it's more efficient because we can ignore irrelevant propositions
Forward Chaining (FC):
When a new fact p is added to the KB
for each rule such that p unifies with a premise
if the other premises are known
then add the conclusion to the KB and continue FC
Backward Chaining (BC):
When a query q is asked
if a matching fact q' is known, return the unifier
for each rule whose consequent q' matches q
attempt to prove each premise of the rule by BC
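The forward-chaining loop, run on the answering-service premises above, can be sketched as a naive fixpoint computation. The encoding is an assumption of this sketch: atoms are tuples (predicate, args...), variables are capitalized strings as on the slide, and all facts are ground, so plain pattern matching suffices instead of full unification:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subst(theta, t):
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return theta.get(t, t)

def match(pattern, fact, theta):
    """Match a premise pattern against a ground fact, extending theta."""
    if is_var(pattern):
        if pattern in theta:
            return theta if theta[pattern] == fact else None
        return {**theta, pattern: fact}
    if isinstance(pattern, tuple) and isinstance(fact, tuple) \
            and len(pattern) == len(fact):
        for p, f in zip(pattern, fact):
            theta = match(p, f, theta)
            if theta is None:
                return None
        return theta
    return theta if pattern == fact else None

def all_matches(premises, facts, theta):
    """Yield every substitution satisfying all premises against the facts."""
    if not premises:
        yield theta
        return
    for fact in facts:
        t = match(premises[0], fact, theta)
        if t is not None:
            yield from all_matches(premises[1:], facts, t)

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                    # repeat until no new facts are derived
        changed = False
        for premises, conclusion in rules:
            for theta in list(all_matches(premises, facts, {})):
                new = subst(theta, conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

facts = {('visits', 'coleman', 'gordon'), ('at', 'gordon', 'wagner')}
rules = [
    # premise 3: visits(X,Y) ∧ at(Y,Z) ⇒ at(X,Z)
    ([('visits', 'X', 'Y'), ('at', 'Y', 'Z')], ('at', 'X', 'Z')),
    # premise 4: at(U,V) ⇒ number(U, lookup(V))
    ([('at', 'U', 'V')], ('number', 'U', ('lookup', 'V'))),
]

derived = forward_chain(facts, rules)
print(('number', 'coleman', ('lookup', 'wagner')) in derived)   # True
```

The theorem (∃N) number(coleman, N) is answered data-driven here: the engine keeps firing rules until nothing new appears, which matches the FC description above.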
Forward Chaining is data-driven
Backward Chaining is goal-driven
i.e. inferring properties and categories from percepts
there are some added complications in keeping track of the unifiers
further complications are needed to avoid infinite loops
two versions: find any solution or find all solutions
Backward chaining is the basis for logic programming,
e.g. Prolog
Sound and complete for first-order definite clauses
Datalog = first-order definite clauses + no functions
FC terminates for Datalog in finite number of iterations
May not terminate in general if α is not entailed
This is unavoidable: entailment with definite clauses is semidecidable
Incremental forward chaining: no need to match a rule on iteration k if a premise wasn't added on iteration k-1
match each rule whose premise contains a newly added positive literal
Matching itself can be expensive:
Database indexing allows O(1) retrieval of known facts
e.g., query Missile(x) retrieves Missile(M1)
Forward chaining is widely used in deductive databases
SUBST(COMPOSE(θ1, θ2), p) = SUBST(θ2, SUBST(θ1, p))
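This identity can be checked with a minimal sketch (the tuple encoding and the compose helper are assumptions of this sketch; applying the composition of θ1 and θ2 must equal applying θ1 then θ2):

```python
def subst(theta, t):
    """Apply substitution theta to a term."""
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return theta.get(t, t)

def compose(theta1, theta2):
    """Push theta2 through theta1's bindings, then add theta2's own bindings."""
    out = {v: subst(theta2, t) for v, t in theta1.items()}
    for v, t in theta2.items():
        out.setdefault(v, t)
    return out

p = ('Knows', 'x', 'y')
t1 = {'x': 'John'}
t2 = {'y': ('Mom', 'x')}

# SUBST(COMPOSE(t1, t2), p) == SUBST(t2, SUBST(t1, p))
assert subst(compose(t1, t2), p) == subst(t2, subst(t1, p))
print(subst(compose(t1, t2), p))   # ('Knows', 'John', ('Mom', 'x'))
```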
Depth-first recursive proof search: space is linear in size of proof
Incomplete due to infinite loops; fix by checking the current goal against every goal on the stack
Inefficient due to repeated subgoals (both successes and failures); fix using caching of previous results (extra space)
Widely used for logic programming