
Page 1: Probabilistic Inference

PROBABILISTIC INFERENCE

Page 2: Probabilistic Inference

AGENDA Random variables Bayes rule Intro to Bayesian Networks

Page 3: Probabilistic Inference

CHEAT SHEET: PROBABILITY DISTRIBUTIONS

Event: boolean propositions that may or may not be true in a state of the world

For any event x:
P(x) ≥ 0 (axiom)
P(¬x) = 1 − P(x)

For any events x, y:
P(x∧y) = P(y∧x) (symmetry)
P(x∨y) = P(y∨x) (symmetry)
P(x∨y) = P(x) + P(y) − P(x∧y) (axiom)
P(x) = P(x∧y) + P(x∧¬y) (marginalization)

x and y are independent iff P(x∧y) = P(x)P(y). This is an explicit assumption
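A minimal sketch (not part of the original slides) that checks these identities numerically on a toy two-event boolean distribution; the joint values below are made up for illustration.

```python
# Hypothetical joint distribution over two boolean events x and y.
# Keys are (x, y) truth-value pairs; values are non-negative and sum to 1.
joint = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.1, (False, False): 0.4}

def P(pred):
    """Probability of the set of worlds where pred holds."""
    return sum(p for (x, y), p in joint.items() if pred(x, y))

# P(x) >= 0 and P(not x) = 1 - P(x)
assert P(lambda x, y: x) >= 0
assert abs(P(lambda x, y: not x) - (1 - P(lambda x, y: x))) < 1e-12

# P(x or y) = P(x) + P(y) - P(x and y)   (inclusion-exclusion axiom)
assert abs(P(lambda x, y: x or y)
           - (P(lambda x, y: x) + P(lambda x, y: y) - P(lambda x, y: x and y))) < 1e-12

# P(x) = P(x and y) + P(x and not y)     (marginalization)
assert abs(P(lambda x, y: x)
           - (P(lambda x, y: x and y) + P(lambda x, y: x and not y))) < 1e-12
```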

Page 4: Probabilistic Inference

CHEAT SHEET: CONDITIONAL DISTRIBUTIONS

P(x|y) reads “probability of x given y”
P(x) is equivalent to P(x|true)
P(x|y) ≠ P(y|x)
P(x|y) = P(x∧y)/P(y) (definition)
P(x|y∧z) = P(x∧y|z)/P(y|z) (definition)
P(¬x|y) = 1 − P(x|y), i.e., P(•|y) is a probability distribution

Page 5: Probabilistic Inference


RANDOM VARIABLES

Page 6: Probabilistic Inference

RANDOM VARIABLES

In a possible world, a random variable X can take on one of a set of values Val(X) = {x1,…,xn}
Such an event is written ‘X=x’
Capital: random variable
Lowercase: assignment of variable to value
Truth assignments to boolean random variables may also be expressed as ‘X’ or ‘¬X’

Page 7: Probabilistic Inference

NOTATION WITH RANDOM VARIABLES

Capital letters A, B, C denote random variables
Each random variable X can take one of a set of possible values x ∈ Val(X)
A boolean random variable has Val(X) = {True, False}

Although the most unambiguous way of writing a probabilistic belief is over an event…
P(X=x) = a number
P(X=x ∧ Y=y) = a number
…it is tedious to list a large number of statements that hold for multiple values x and y

Random variables allow using a shorthand notation. (Unfortunately this is a source of a lot of initial confusion!)

Page 8: Probabilistic Inference

DECODING PROBABILITY NOTATION

Mental rule #1: Lowercase: assignments are often left implicit when unambiguous
P(a) = P(A=a) = a number

Page 9: Probabilistic Inference

DECODING PROBABILITY NOTATION (BOOLEAN VARIABLES)

P(X=True) is written P(X)
P(X=False) is written P(¬X)
[Since P(¬X) = 1 − P(X), knowing P(X) is enough to specify the whole distribution over X=True or X=False]

Page 10: Probabilistic Inference

DECODING PROBABILITY NOTATION

Mental rule #2: Drop the AND, use commas
P(a,b) = P(a∧b) = P(A=a ∧ B=b) = a number

Page 11: Probabilistic Inference

DECODING PROBABILITY NOTATION

Mental rule #3: Uppercase ⇒ values left implicit
Suppose Val(X) = {1,2,3}
When I write P(X), it denotes “the distribution defined over all of P(X=1), P(X=2), P(X=3)”
It is not a single number, but rather a set of numbers
P(X) = [a probability table]

Page 12: Probabilistic Inference

DECODING PROBABILITY NOTATION

P(A,B) = [P(A=a ∧ B=b) for all combinations of a ∈ Val(A), b ∈ Val(B)]
A probability table with |Val(A)| × |Val(B)| entries
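One way to picture such a table (a sketch, not from the slides) is a 2-D array indexed by the values of A and B; the variable domains and numbers below are hypothetical.

```python
import numpy as np

Val_A = ["a1", "a2"]          # Val(A), hypothetical
Val_B = ["b1", "b2", "b3"]    # Val(B), hypothetical

# P(A, B): a |Val(A)| x |Val(B)| table whose entries sum to 1.
P_AB = np.array([[0.10, 0.25, 0.15],
                 [0.20, 0.05, 0.25]])

assert P_AB.shape == (len(Val_A), len(Val_B))
assert np.isclose(P_AB.sum(), 1.0)

# P(A=a2, B=b1) is a single number looked up in the table.
print(P_AB[Val_A.index("a2"), Val_B.index("b1")])   # 0.2
```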

Page 13: Probabilistic Inference

DECODING PROBABILITY NOTATION

Mental rule #3: Uppercase ⇒ values left implicit
So when you see f(A,B) = g(A,B) this means: “f(a,b) = g(a,b) for all values of a ∈ Val(A) and b ∈ Val(B)”
f(A,B) = g(A) means: “f(a,b) = g(a) for all values of a ∈ Val(A) and b ∈ Val(B)”
f(A,b) = g(A,b) means: “f(a,b) = g(a,b) for all values of a ∈ Val(A)”
Order doesn’t matter: P(A,B) is equivalent to P(B,A)

Page 14: Probabilistic Inference

ANOTHER MNEMONIC: FUNCTIONAL EQUALITIES

P(X) is treated as a function over a variable X
Operations and relations are on “function objects”

If you say f(x) = g(x) without a value of x, then you can infer f(x) = g(x) holds for all x

Likewise if you say f(x,y) = g(x) without stating a value of x or y, then you can infer f(x,y) = g(x) holds for all x,y

Page 15: Probabilistic Inference

QUIZ: WHAT DOES THIS MEAN?

P(A∨B) = P(A) + P(B) − P(A∧B)

P(A=a ∨ B=b) = P(A=a) + P(B=b) − P(A=a ∧ B=b)

for all a ∈ Val(A) and b ∈ Val(B)

Page 16: Probabilistic Inference

MARGINALIZATION

If X, Y are boolean random variables that describe the state of the world, then
P(X) = P(X,Y) + P(X,¬Y)

This generalizes to multiple variables:
P(X) = P(X,Y,Z) + P(X,¬Y,Z) + P(X,Y,¬Z) + P(X,¬Y,¬Z)
Etc.

Page 17: Probabilistic Inference

MARGINALIZATION

If X, Y are random variables:
P(X) = Σy∈Val(Y) P(X, y)

This generalizes to multiple variables:
P(X) = Σy Σz P(X, y, z)
Etc.

Page 18: Probabilistic Inference

DECODING PROBABILITY NOTATION (MARGINALIZATION)

Mental rule #4: domains are usually implicit
Suppose a belief state P(X,Y,Z) is defined over X, Y, and Z
If I write P(X), I am implicitly marginalizing over Y and Z

P(X) = Σy Σz P(X, y, z)

(should be interpreted as)

P(X) = Σy Σz P(X ∧ Y=y ∧ Z=z)

(should be interpreted as)

P(X=x) = Σy Σz P(X=x ∧ Y=y ∧ Z=z) for all x

By convention, each of y and z is summed over Val(Y), Val(Z)
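A sketch of mental rule #4 in code, assuming a made-up joint table P(X,Y,Z) over small boolean domains: writing P(X) means summing the other variables out over their full domains.

```python
# Hypothetical boolean domains and a joint table P(X, Y, Z) (numbers made up, sum to 1).
Val_X, Val_Y, Val_Z = [True, False], [True, False], [True, False]
P_XYZ = {(True, True, True): 0.10, (True, True, False): 0.05,
         (True, False, True): 0.15, (True, False, False): 0.20,
         (False, True, True): 0.05, (False, True, False): 0.10,
         (False, False, True): 0.05, (False, False, False): 0.30}

# P(X) = sum over y, z of P(X, y, z) -- one entry per value of X.
P_X = {x: sum(P_XYZ[(x, y, z)] for y in Val_Y for z in Val_Z) for x in Val_X}
print(P_X)   # {True: 0.5, False: 0.5}
assert abs(sum(P_X.values()) - 1.0) < 1e-12
```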

Page 19: Probabilistic Inference

CONDITIONAL PROBABILITY FOR RANDOM VARIABLES

P(A|B) is the posterior probability of A given knowledge of B
“For each b ∈ Val(B): given that I know B=b, what would I believe is the distribution over A?”
If a new piece of information C arrives, the agent’s new belief (if it obeys the rules of probability) should be P(A|B,C)

Page 20: Probabilistic Inference

CONDITIONAL PROBABILITY FOR RANDOM VARIABLES

P(A,B) = P(A|B) P(B)
       = P(B|A) P(A)
P(A|B) is the posterior probability of A given knowledge of B
Axiomatic definition: P(A|B) = P(A,B)/P(B)

Page 21: Probabilistic Inference

CONDITIONAL PROBABILITY

P(A,B) = P(A|B) P(B)
       = P(B|A) P(A)
P(A,B,C) = P(A|B,C) P(B,C)
         = P(A|B,C) P(B|C) P(C)

P(Cavity) = Σt Σp P(Cavity, t, p)
          = Σt Σp P(Cavity | t, p) P(t, p)
          = Σt Σp P(Cavity | t, p) P(t | p) P(p)
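A small check of the chain rule (a sketch with made-up numbers): conditionals are defined from a joint table, and the product P(a|b,c) P(b|c) P(c) recovers the joint entry. The joint values and the helper `marg` are hypothetical.

```python
import itertools

# Hypothetical joint P(A, B, C) over booleans (numbers made up, sum to 1).
vals = [0.08, 0.12, 0.10, 0.20, 0.02, 0.18, 0.05, 0.25]
P = dict(zip(itertools.product([True, False], repeat=3), vals))

def marg(**fixed):
    """P of a partial assignment, e.g. marg(b=False, c=True), summing out the rest."""
    names = ("a", "b", "c")
    return sum(p for key, p in P.items()
               if all(key[names.index(k)] == v for k, v in fixed.items()))

a, b, c = True, False, True
# Chain rule: P(a,b,c) = P(a|b,c) P(b|c) P(c)
chain = (marg(a=a, b=b, c=c) / marg(b=b, c=c)) * (marg(b=b, c=c) / marg(c=c)) * marg(c=c)
assert abs(P[(a, b, c)] - chain) < 1e-12
```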

Page 22: Probabilistic Inference

INDEPENDENCE

Two random variables A and B are independent if
P(A,B) = P(A) P(B)
hence P(A|B) = P(A)
Knowing B doesn’t give you any information about A

[This equality has to hold for all combinations of values that A, B can take on]

Page 23: Probabilistic Inference

REMEMBER: PROBABILITY NOTATION LEAVES VALUES IMPLICIT

P(A∨B) = P(A) + P(B) − P(A∧B) means
P(A=a ∨ B=b) = P(A=a) + P(B=b) − P(A=a ∧ B=b)
for all a ∈ Val(A) and b ∈ Val(B)

A and B are random variables. A=a and B=b are events.
Random variables indicate many possible combinations of events

Page 24: Probabilistic Inference

CONDITIONAL PROBABILITY

P(A,B) = P(A|B) P(B)
       = P(B|A) P(A)
P(A|B) is the posterior probability of A given knowledge of B
Axiomatic definition: P(A|B) = P(A,B)/P(B)

Page 25: Probabilistic Inference


PROBABILISTIC INFERENCE

Page 26: Probabilistic Inference

CONDITIONAL DISTRIBUTIONS

(C = Cavity, T = Toothache, P = PCatch)

State           P(state)
C,  T,  P       0.108
C,  T,  ¬P      0.012
C,  ¬T, P       0.072
C,  ¬T, ¬P      0.008
¬C, T,  P       0.016
¬C, T,  ¬P      0.064
¬C, ¬T, P       0.144
¬C, ¬T, ¬P      0.576

P(Cavity|Toothache) = P(Cavity ∧ Toothache)/P(Toothache)
= (0.108 + 0.012)/(0.108 + 0.012 + 0.016 + 0.064) = 0.6

Interpretation: After observing Toothache, the patient is no longer an “average” one, and the prior probability (0.2) of Cavity is no longer valid. P(Cavity|Toothache) is calculated by keeping the ratios of the probabilities of the 4 cases of Toothache unchanged, and normalizing their sum to 1
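The same computation as a sketch in code, using the eight joint entries from the table above; the tuple keys stand for (Cavity, Toothache, PCatch).

```python
# Joint distribution from the slide: keys are (Cavity, Toothache, PCatch).
joint = {(True, True, True): 0.108,   (True, True, False): 0.012,
         (True, False, True): 0.072,  (True, False, False): 0.008,
         (False, True, True): 0.016,  (False, True, False): 0.064,
         (False, False, True): 0.144, (False, False, False): 0.576}

p_toothache = sum(p for (c, t, pc), p in joint.items() if t)                    # 0.2
p_cavity_and_toothache = sum(p for (c, t, pc), p in joint.items() if c and t)   # 0.12

print(p_cavity_and_toothache / p_toothache)   # 0.6, as on the slide
```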

Page 27: Probabilistic Inference

UPDATING THE BELIEF STATE

The patient walks in the dentist’s door
Let D now observe evidence E: Toothache holds with probability 0.8 (e.g., “the patient says so”)
How should D update its belief state?

State           P(state)
C,  T,  P       0.108
C,  T,  ¬P      0.012
C,  ¬T, P       0.072
C,  ¬T, ¬P      0.008
¬C, T,  P       0.016
¬C, T,  ¬P      0.064
¬C, ¬T, P       0.144
¬C, ¬T, ¬P      0.576

Page 28: Probabilistic Inference

UPDATING THE BELIEF STATE

P(Toothache|E) = 0.8
We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E)
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C,P|T,E) = P(C,P|T)
P(C,T,P|E) = P(C,P,T) P(T|E)/P(T)

State           P(state)
C,  T,  P       0.108
C,  T,  ¬P      0.012
C,  ¬T, P       0.072
C,  ¬T, ¬P      0.008
¬C, T,  P       0.016
¬C, T,  ¬P      0.064
¬C, ¬T, P       0.144
¬C, ¬T, ¬P      0.576

Page 29: Probabilistic Inference

UPDATING THE BELIEF STATE

P(Toothache|E) = 0.8
We want to compute P(C∧T∧P|E) = P(C∧P|T,E) P(T|E)
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C∧P|T,E) = P(C∧P|T)
P(C∧T∧P|E) = P(C,P,T) P(T|E)/P(T)

State           P(state)
C,  T,  P       0.108
C,  T,  ¬P      0.012
C,  ¬T, P       0.072
C,  ¬T, ¬P      0.008
¬C, T,  P       0.016
¬C, T,  ¬P      0.064
¬C, ¬T, P       0.144
¬C, ¬T, ¬P      0.576

The rows with T should be scaled to sum to 0.8
The rows with ¬T should be scaled to sum to 0.2

Page 30: Probabilistic Inference

UPDATING THE BELIEF STATE

P(Toothache|E) = 0.8
We want to compute P(C∧T∧P|E) = P(C∧P|T,E) P(T|E)
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C∧P|T,E) = P(C∧P|T)
P(C∧T∧P|E) = P(C,P,T) P(T|E)/P(T)

State           P(state)    P(state|E)
C,  T,  P       0.108       0.432
C,  T,  ¬P      0.012       0.048
C,  ¬T, P       0.072       0.018
C,  ¬T, ¬P      0.008       0.002
¬C, T,  P       0.016       0.064
¬C, T,  ¬P      0.064       0.256
¬C, ¬T, P       0.144       0.036
¬C, ¬T, ¬P      0.576       0.144

The rows with T have been scaled to sum to 0.8
The rows with ¬T have been scaled to sum to 0.2
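A sketch of the update in code: reuse the joint table above and scale the Toothache rows so they sum to 0.8 and the ¬Toothache rows so they sum to 0.2.

```python
# Joint P(C, T, P) from the slide; keys are (Cavity, Toothache, PCatch).
joint = {(True, True, True): 0.108,   (True, True, False): 0.012,
         (True, False, True): 0.072,  (True, False, False): 0.008,
         (False, True, True): 0.016,  (False, True, False): 0.064,
         (False, False, True): 0.144, (False, False, False): 0.576}

p_T = sum(p for (c, t, pc), p in joint.items() if t)    # P(Toothache) = 0.2
p_T_given_E = 0.8                                       # evidence: P(Toothache | E)

# P(C,T,P | E) = P(C,P,T) * P(T|E) / P(T), and the (1 - ...) version for the ¬T rows.
posterior = {state: (p * p_T_given_E / p_T if state[1]
                     else p * (1 - p_T_given_E) / (1 - p_T))
             for state, p in joint.items()}

print(posterior[(True, True, True)])          # 0.432, matching the updated table
assert abs(sum(posterior.values()) - 1.0) < 1e-9
```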

Page 31: Probabilistic Inference

ISSUES

If a state is described by n propositions, then a belief state contains 2^n states (possibly, some have probability 0)
Modeling difficulty: many numbers must be entered in the first place
Computational issue: memory size and time

Page 32: Probabilistic Inference

INDEPENDENCE OF EVENTS

Two events A=a and B=b are independent if
P(A=a ∧ B=b) = P(A=a) P(B=b)
hence P(A=a|B=b) = P(A=a)
Knowing B=b doesn’t give you any information about whether A=a is true

Page 33: Probabilistic Inference

INDEPENDENCE OF RANDOM VARIABLES

Two random variables A and B are independent if
P(A,B) = P(A) P(B)
hence P(A|B) = P(A)
Knowing B doesn’t give you any information about A

[This equality has to hold for all combinations of values that A and B can take on, i.e., all events A=a and B=b are independent]

Page 34: Probabilistic Inference

SIGNIFICANCE OF INDEPENDENCE

If A and B are independent, then
P(A,B) = P(A) P(B)
⇒ The joint distribution over A and B can be defined as a product of the distribution of A and the distribution of B

Rather than storing a big probability table over all combinations of A and B, store two much smaller probability tables!
To compute P(A=a ∧ B=b), just look up P(A=a) and P(B=b) in the individual tables and multiply them together
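A sketch of the storage saving with hypothetical marginals: two small tables whose product gives any joint entry on demand.

```python
# Hypothetical marginals for independent variables A and B.
P_A = {"a1": 0.3, "a2": 0.7}
P_B = {"b1": 0.2, "b2": 0.5, "b3": 0.3}

# Instead of storing a 2 x 3 joint table, look up and multiply:
def joint(a, b):
    return P_A[a] * P_B[b]

print(joint("a2", "b1"))   # 0.14 = P(A=a2) P(B=b1)
```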

Page 35: Probabilistic Inference

CONDITIONAL INDEPENDENCE

Two random variables A and B are conditionally independent given C, if
P(A,B|C) = P(A|C) P(B|C)
hence P(A|B,C) = P(A|C)
Once you know C, learning B doesn’t give you any information about A

[Again, this has to hold for all combinations of values that A, B, C can take on]

Page 36: Probabilistic Inference

SIGNIFICANCE OF CONDITIONAL INDEPENDENCE

Consider Rainy, Thunder, and RoadsSlippery
Ostensibly, thunder doesn’t have anything directly to do with slippery roads…
But they happen together more often when it rains, so they are not independent…
So it is reasonable to believe that Thunder and RoadsSlippery are conditionally independent given Rainy
So if I want to estimate whether or not I will hear thunder, I don’t need to think about the state of the roads, just whether or not it’s raining!

Page 37: Probabilistic Inference

Toothache and PCatch are independent given Cavity, but this relation is hidden in the numbers! [Quiz]

Bayesian networks explicitly represent independence among propositions to reduce the number of probabilities defining a belief state

State           P(state)
C,  T,  P       0.108
C,  T,  ¬P      0.012
C,  ¬T, P       0.072
C,  ¬T, ¬P      0.008
¬C, T,  P       0.016
¬C, T,  ¬P      0.064
¬C, ¬T, P       0.144
¬C, ¬T, ¬P      0.576

Page 38: Probabilistic Inference

BAYESIAN NETWORK

Notice that Cavity is the “cause” of both Toothache and PCatch, and represent the causality links explicitly
Give the prior probability distribution of Cavity
Give the conditional probability tables of Toothache and PCatch

Network: Cavity → Toothache, Cavity → PCatch

5 probabilities, instead of 7

P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C)

P(Cavity)
0.2

          Cavity    ¬Cavity
P(T|C)    0.6       0.1

          Cavity    ¬Cavity
P(P|C)    0.9       0.2
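A sketch that checks the factorization: the five numbers above reproduce the eight-entry joint table from Page 26 via P(C,T,P) = P(T|C) P(P|C) P(C). The helper name `joint` is mine; P(P=True | ¬Cavity) is taken as 0.2, the value implied by the Page 26 table.

```python
import itertools

P_C = 0.2                               # P(Cavity)
P_T = {True: 0.6, False: 0.1}           # P(Toothache=True | Cavity=c)
P_P = {True: 0.9, False: 0.2}           # P(PCatch=True    | Cavity=c)

def joint(c, t, pc):
    """P(C=c, T=t, P=pc) = P(t|c) P(pc|c) P(c)."""
    return ((P_T[c] if t else 1 - P_T[c]) *
            (P_P[c] if pc else 1 - P_P[c]) *
            (P_C if c else 1 - P_C))

# Reproduces the Page 26 entries, e.g. 0.108 and 0.576:
print(joint(True, True, True), joint(False, False, False))
assert abs(sum(joint(*s) for s in itertools.product([True, False], repeat=3)) - 1.0) < 1e-12
```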

Page 39: Probabilistic Inference

CONDITIONAL PROBABILITY TABLES

Network: Cavity → Toothache, Cavity → PCatch

P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C)

P(Cavity)
0.2

           Cavity    ¬Cavity
P(T|C)     0.6       0.1
P(¬T|C)    0.4       0.9

           Cavity    ¬Cavity
P(P|C)     0.9       0.2

Columns sum to 1
If X takes n values, just store n−1 entries

Page 40: Probabilistic Inference

SIGNIFICANCE OF CONDITIONAL INDEPENDENCE

Consider Grade(CS101), Intelligence, and SAT
Ostensibly, the grade in a course doesn’t have a direct relationship with SAT scores
But good students are more likely to get good SAT scores, so they are not independent…
It is reasonable to believe that Grade(CS101) and SAT are conditionally independent given Intelligence

Page 41: Probabilistic Inference

BAYESIAN NETWORK

Explicitly represent independence among propositions
Notice that Intelligence is the “cause” of both Grade and SAT, and the causality is represented explicitly

Network: Intelligence → Grade, Intelligence → SAT

7 probabilities, instead of 11

P(I,G,S) = P(G,S|I) P(I) = P(G|I) P(S|I) P(I)

P(I=x)
high    0.3
low     0.7

P(G=x|I)    I=low    I=high
‘A’         0.2      0.74
‘B’         0.34     0.17
‘C’         0.46     0.09

P(S=x|I)    I=low    I=high
low         0.95     0.2
high        0.05     0.8

Page 42: Probabilistic Inference

SIGNIFICANCE OF BAYESIAN NETWORKS

If we know that variables are conditionally independent, we should be able to decompose the joint distribution to take advantage of it
Bayesian networks are a way of efficiently factoring the joint distribution into conditional probabilities
And also building complex joint distributions from smaller models of probabilistic relationships

But…
What knowledge does the BN encode about the distribution?
How do we use a BN to compute probabilities of variables that we are interested in?

Page 43: Probabilistic Inference

A MORE COMPLEX BN

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls
(causes at the top of the graph, effects at the bottom)

Directed acyclic graph
Intuitive meaning of an arc from x to y: “x has direct influence on y”

Page 44: Probabilistic Inference

A MORE COMPLEX BN

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls

P(B)       P(E)
0.001      0.002

B   E   P(A|B,E)
T   T   0.95
T   F   0.94
F   T   0.29
F   F   0.001

A   P(J|A)        A   P(M|A)
T   0.90          T   0.70
F   0.05          F   0.01

Size of the CPT for a node with k parents: 2^k
10 probabilities, instead of 31

Page 45: Probabilistic Inference

WHAT DOES THE BN ENCODE?

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls

Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm
For example, John does not observe any burglaries directly

P(B∧J) ≠ P(B) P(J)
P(B∧J|A) = P(B|A) P(J|A)

Page 46: Probabilistic Inference

WHAT DOES THE BN ENCODE?

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls

The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated

P(B∧J|A) = P(B|A) P(J|A)
P(J∧M|A) = P(J|A) P(M|A)

A node is independent of its non-descendants given its parents

Page 47: Probabilistic Inference

WHAT DOES THE BN ENCODE?

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls

A node is independent of its non-descendants given its parents
Burglary and Earthquake are independent
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated

Page 48: Probabilistic Inference

LOCALLY STRUCTURED WORLD

A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components
In a sparse world, the CPTs are small and the BN contains far fewer probabilities than the full joint distribution
If the # of entries in each CPT is bounded by a constant, i.e., O(1), then the # of probabilities in a BN is linear in n – the # of propositions – instead of 2^n for the joint distribution
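A quick count under an assumed structure (the helper names are mine): for n boolean variables each with at most k parents, a BN needs at most n·2^k CPT entries, versus 2^n − 1 independent numbers for the full joint.

```python
def bn_size(n, k):
    """Upper bound on CPT entries for n boolean nodes, each with at most k parents."""
    return n * 2**k

def joint_size(n):
    """Independent numbers in a full joint distribution over n boolean variables."""
    return 2**n - 1

for n in (5, 10, 20, 30):
    print(n, bn_size(n, k=2), joint_size(n))
# e.g. n=30: 120 CPT entries vs 1,073,741,823 joint entries
```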

Page 49: Probabilistic Inference

EQUATIONS INVOLVING RANDOM VARIABLES GIVE RISE TO CAUSALITY RELATIONSHIPS

C = A ∨ B
C = max(A,B)
Constrains the joint probability P(A,B,C)
Nicely encoded as a causality relationship

Network: A → C ← B

Conditional probability given by equation rather than a CPT

Page 50: Probabilistic Inference

NAÏVE BAYES MODELS

P(Cause, Effect1, …, Effectn) = P(Cause) Πi P(Effecti | Cause)

Network: Cause → Effect1, Cause → Effect2, …, Cause → Effectn

Page 51: Probabilistic Inference

BAYES’ RULE AND OTHER PROBABILITY MANIPULATIONS

P(A,B) = P(A|B) P(B)
       = P(B|A) P(A)

P(A|B) = P(B|A) P(A) / P(B)

Gives us a way to manipulate distributions
e.g. P(B) = Σa P(B|A=a) P(A=a)
Can derive P(A|B), P(B) using only P(B|A) and P(A)
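A sketch of the manipulation with made-up numbers: from P(B|A) and P(A), get P(B) by marginalizing, then P(A|B) by Bayes’ rule.

```python
# Hypothetical inputs: prior P(A) and likelihood P(B=True | A=a) for boolean A, B.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: 0.9, False: 0.2}

# P(B) = sum_a P(B|A=a) P(A=a)
P_B = sum(P_B_given_A[a] * P_A[a] for a in (True, False))   # 0.9*0.3 + 0.2*0.7 = 0.41

# Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)
P_A_given_B = P_B_given_A[True] * P_A[True] / P_B           # 0.27 / 0.41 ≈ 0.659
print(P_B, P_A_given_B)
```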

Page 52: Probabilistic Inference

NAÏVE BAYES CLASSIFIER

P(Class, Feature1, …, Featuren) = P(Class) Πi P(Featurei | Class)

Network: Class → Feature1, Class → Feature2, …, Class → Featuren

P(C|F1,…,Fk) = P(C,F1,…,Fk)/P(F1,…,Fk)
             = 1/Z P(C) Πi P(Fi|C)

Given features, what class?
Examples: Spam / Not Spam; English / French / Latin; features are word occurrences
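A minimal naive Bayes sketch with made-up spam-filter numbers: multiply per-feature likelihoods with the class prior, then normalize by Z = P(F1,…,Fk). The class names, word probabilities, and the helper `posterior` are all hypothetical.

```python
# Hypothetical model: P(Class) and P(word appears | Class) for a few words.
P_class = {"spam": 0.4, "ham": 0.6}
P_word_given = {"spam": {"offer": 0.7, "meeting": 0.1, "free": 0.8},
                "ham":  {"offer": 0.1, "meeting": 0.6, "free": 0.2}}

def posterior(words_present):
    """P(Class | features) ∝ P(Class) * prod_i P(F_i | Class), then normalize."""
    scores = {}
    for c in P_class:
        s = P_class[c]
        for w, present in words_present.items():
            pw = P_word_given[c][w]
            s *= pw if present else (1 - pw)
        scores[c] = s
    Z = sum(scores.values())                 # normalizing constant
    return {c: s / Z for c, s in scores.items()}

print(posterior({"offer": True, "meeting": False, "free": True}))   # spam strongly favored
```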

Page 53: Probabilistic Inference

BUT DOES A BN REPRESENT A BELIEF STATE?

IN OTHER WORDS, CAN WE COMPUTE THE FULL JOINT DISTRIBUTION OF THE PROPOSITIONS FROM IT?

Page 54: Probabilistic Inference

CALCULATION OF JOINT PROBABILITY

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls

P(B)       P(E)
0.001      0.002

B   E   P(A|B,E)
T   T   0.95
T   F   0.94
F   T   0.29
F   F   0.001

A   P(J|A)        A   P(M|A)
T   0.90          T   0.70
F   0.05          F   0.01

P(J∧M∧A∧¬B∧¬E) = ??

Page 55: Probabilistic Inference

P(J∧M∧A∧¬B∧¬E)
= P(J∧M|A,¬B,¬E) P(A∧¬B∧¬E)
= P(J|A,¬B,¬E) P(M|A,¬B,¬E) P(A∧¬B∧¬E)    (J and M are independent given A)

P(J|A,¬B,¬E) = P(J|A)    (J and B, and J and E, are independent given A)
P(M|A,¬B,¬E) = P(M|A)

P(A∧¬B∧¬E) = P(A|¬B,¬E) P(¬B|¬E) P(¬E)
           = P(A|¬B,¬E) P(¬B) P(¬E)        (B and E are independent)

P(J∧M∧A∧¬B∧¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)

Burglary → Alarm ← Earthquake
Alarm → JohnCalls, Alarm → MaryCalls

Page 56: Probabilistic Inference

CALCULATION OF JOINT PROBABILITY

[Same network and CPTs as on Page 44]

P(J∧M∧A∧¬B∧¬E)
= P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998
≈ 0.00062

Page 57: Probabilistic Inference

CALCULATION OF JOINT PROBABILITY

[Same network and CPTs as on Page 44]

P(J∧M∧A∧¬B∧¬E)
= P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998
≈ 0.00062

P(x1∧x2∧…∧xn) = Πi=1,…,n P(xi | parents(Xi))
→ full joint distribution table

Page 58: Probabilistic Inference

CALCULATION OF JOINT PROBABILITY

[Same network and CPTs as on Page 44]

P(x1∧x2∧…∧xn) = Πi=1,…,n P(xi | parents(Xi))
→ full joint distribution table

P(J∧M∧A∧¬B∧¬E)
= P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998
≈ 0.00062

Since a BN defines the full joint distribution of a set of propositions, it represents a belief state
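A sketch computing the same number from the CPTs on Page 44 using the product formula P(x1∧…∧xn) = Πi P(xi | parents(Xi)); the helper name `joint` is mine.

```python
# CPTs from the slide (burglary network).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm=True | B, E)
P_J = {True: 0.90, False: 0.05}                      # P(JohnCalls=True | Alarm)
P_M = {True: 0.70, False: 0.01}                      # P(MaryCalls=True | Alarm)

def joint(j, m, a, b, e):
    """P(J=j, M=m, A=a, B=b, E=e) as a product of CPT entries."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pj * pm * pa * pb * pe

# P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = 0.9 * 0.7 * 0.001 * 0.999 * 0.998 ≈ 0.00062
print(joint(True, True, True, False, False))
```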

Page 59: Probabilistic Inference

HOMEWORK Read R&N 14.1-3