
A Non-Parametric Bayesian Method for Inferring Hidden Causes

by F. Wood, T. L. Griffiths and Z. Ghahramani

Discussion led by Qi An

ECE, Duke University

Outline

• Introduction

• A generative model with hidden causes

• Inference algorithms

• Experimental results

• Conclusions

Introduction

• A variety of methods from Bayesian statistics have been applied to find model structure from a set of observed variables:

– Find the dependencies among the set of observed variables

– Introduce some hidden causes and infer their influence on observed variables

Introduction

• Learning model structure containing hidden causes presents a significant challenge:

– The number of hidden causes is unknown and potentially unbounded

– The relation between hidden causes and observed variables is unknown

• Previous Bayesian approaches assume the number of hidden causes is finite and fixed.

A hidden causal structure

• Assume we have T samples of N binary variables X1, …, XN. Let X (N×T) be the data matrix and A (N×N) be a dependency matrix among X1, …, XN.

• Introduce K binary hidden causes Y1, …, YK, each with T samples. Let Y (K×T) be the matrix of hidden causes and Z (N×K) be a dependency matrix between X and Y.

• K can potentially be infinite.

A hidden causal structure

[Figure: bipartite graph linking hidden causes (diseases) to observed variables (symptoms)]

A generative model

• Our goal is to estimate the dependency matrix Z and the hidden causes Y.

• From Bayes' rule, the posterior is

P(Y, Z | X) ∝ P(X | Y, Z) P(Y) P(Z)

• We start by assuming K is finite, and then consider the case where K → ∞.

A generative model

• Assume the entries of X are conditionally independent given Z and Y, and are generated from a noisy-OR distribution:

P(x_{i,t} = 1 | Z, Y) = 1 − (1 − λ)^{z_{i,:} y_{:,t}} (1 − ε)

where z_{i,:} y_{:,t} = Σ_{k=1}^{K} z_{i,k} y_{k,t}, ε is the baseline probability that x_{i,t} = 1 when none of its hidden causes is active, and λ is the probability with which any single active hidden cause turns x_{i,t} on.
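A minimal sketch of sampling X from this noisy-OR likelihood with Z and Y fixed; the small matrices and the values λ = 0.9, ε = 0.01 are illustrative, not from the paper:

```python
import random

random.seed(0)
lam, eps = 0.9, 0.01  # illustrative noisy-OR parameters

# Z[i][k] = 1 if hidden cause k influences observed variable i (N=3, K=2)
Z = [[1, 0],
     [1, 1],
     [0, 1]]
# Y[k][t] = 1 if hidden cause k is active in sample t (K=2, T=3)
Y = [[1, 0, 1],
     [0, 1, 1]]

def p_on(i, t):
    # number of hidden causes of variable i that are active in sample t
    active = sum(Z[i][k] * Y[k][t] for k in range(len(Y)))
    # noisy-OR: each active cause independently fails to fire with prob (1 - lam),
    # and baseline noise turns x_{i,t} on with probability eps
    return 1.0 - (1.0 - lam) ** active * (1.0 - eps)

# sample the observed binary data X (N x T)
X = [[int(random.random() < p_on(i, t)) for t in range(3)] for i in range(3)]
```

For example, p_on(0, 1) = ε = 0.01, since variable 0's only cause is inactive in sample 1, while p_on(0, 0) = 1 − (1 − λ)(1 − ε) = 0.901.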

A generative model

• The entries of Y are assumed to be drawn independently from a Bernoulli distribution.

• Each column of Z is assumed to be Bernoulli(θk) distributed. If we further assume a Beta(α/K, 1) hyper-prior on each θk and integrate out θk, we obtain

P(Z) = ∏_{k=1}^{K} (α/K) Γ(m_k + α/K) Γ(N − m_k + 1) / Γ(N + 1 + α/K)

where m_k = Σ_i z_{i,k} is the number of observed variables that cause k influences.

These assumptions on Z are exactly the same as those underlying the Indian buffet process (IBP).
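The integrated-out prior on Z can be evaluated directly with log-gamma functions. A sketch in pure Python (the function name log_prior_Z is our own):

```python
from math import lgamma, log

def log_prior_Z(Z, alpha):
    # log P(Z) for the finite-K model: each column Bernoulli(theta_k),
    # theta_k ~ Beta(alpha/K, 1), with theta_k integrated out
    N, K = len(Z), len(Z[0])
    a = alpha / K
    total = 0.0
    for k in range(K):
        m_k = sum(Z[i][k] for i in range(N))  # number of variables cause k influences
        total += log(a) + lgamma(m_k + a) + lgamma(N - m_k + 1) - lgamma(N + 1 + a)
    return total
```

With K = 1 and α = 1 the prior on θ is uniform, so a column with m ones has probability m!(N − m)!/(N + 1)!; e.g. for N = 2 a column with a single one has probability 1/6.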

Taking the infinite limit

• If we let K approach infinity, the distribution on X remains well-defined, and we only need to be concerned with the rows of Y whose corresponding m_k > 0.

• After some algebra and a reordering of the columns of Z, the distribution on (equivalence classes of) Z can be obtained as

P([Z]) = α^{K+} / (∏_{h>0} K_h!) · exp(−α H_N) · ∏_{k=1}^{K+} (N − m_k)! (m_k − 1)! / N!

where K+ is the number of columns with m_k > 0, K_h is the number of columns with binary pattern h, and H_N = Σ_{i=1}^{N} 1/i.

The Indian buffet process is defined in terms of a sequence of N customers entering a restaurant and choosing from an infinite array of dishes. The first customer tries the first Poisson(α) dishes. The remaining customers then enter one by one; the i-th customer picks each previously sampled dish k with probability m_k / i, where m_k is the number of previous customers who tried dish k, and then tries Poisson(α/i) new dishes.
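The culinary metaphor translates directly into a sampler. A sketch in pure Python (function names are our own; a small Knuth-style Poisson sampler is included since the stdlib has none):

```python
import math
import random

random.seed(1)

def poisson(mu):
    # Knuth's method for a Poisson(mu) draw
    if mu <= 0:
        return 0
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def sample_ibp(N, alpha):
    counts = []  # counts[k] = m_k, how many customers have tried dish k
    rows = []
    for i in range(1, N + 1):
        # customer i tries each existing dish k with probability m_k / i ...
        row = [int(random.random() < m / i) for m in counts]
        for k, x in enumerate(row):
            counts[k] += x
        # ... and then tries Poisson(alpha / i) new dishes
        new = poisson(alpha / i)
        counts.extend([1] * new)
        row.extend([1] * new)
        rows.append(row)
    # pad earlier rows with zeros up to the final number of dishes
    K = len(counts)
    return [r + [0] * (K - len(r)) for r in rows]

Z = sample_ibp(10, 2.0)
```

Each row of the returned binary matrix is one customer (one observed variable); each column is one dish (one hidden cause), and by construction every retained column was sampled by at least one customer.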

Reversible jump MCMC

Gibbs sampler for the infinite case
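For entries of Z whose feature is shared with other variables, the full conditional is P(z_{i,k} = 1 | Z_{−(i,k)}, X, Y) ∝ (m_{−i,k} / N) · P(X | Z, Y). A rough sketch of one such Gibbs sweep under the noisy-OR likelihood; the new-feature proposals and the updates of Y are omitted, and the toy data and parameter values are illustrative:

```python
import math
import random

def log_lik(X, Z, Y, lam=0.9, eps=0.01):
    # noisy-OR log-likelihood of the full data matrix
    N, T, K = len(X), len(X[0]), len(Y)
    ll = 0.0
    for i in range(N):
        for t in range(T):
            active = sum(Z[i][k] * Y[k][t] for k in range(K))
            p = 1.0 - (1.0 - lam) ** active * (1.0 - eps)
            ll += math.log(p) if X[i][t] else math.log(1.0 - p)
    return ll

def gibbs_sweep_Z(X, Z, Y):
    # one Gibbs sweep over entries of Z (shared features only)
    N, K = len(Z), len(Z[0])
    for i in range(N):
        for k in range(K):
            m = sum(Z[j][k] for j in range(N) if j != i)  # m_{-i,k}
            if m == 0:
                continue  # singleton feature: handled by the new-feature step
            logp = []
            for v in (0, 1):
                Z[i][k] = v
                prior = (m / N) if v else (N - m) / N
                logp.append(math.log(prior) + log_lik(X, Z, Y))
            # sample z_{i,k} from the normalized conditional (clamped to avoid overflow)
            d = min(logp[0] - logp[1], 700.0)
            p1 = 1.0 / (1.0 + math.exp(d))
            Z[i][k] = int(random.random() < p1)
    return Z

random.seed(3)
X = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]  # toy data (N=3, T=3)
Y = [[1, 0, 1], [0, 1, 1]]             # K=2 hidden causes
Z = [[1, 0], [1, 1], [0, 1]]
Z = gibbs_sweep_Z(X, Z, Y)
```

In a full sampler this sweep would alternate with resampling Y and with proposals that add or delete singleton features, so that the effective K can grow and shrink.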

Experimental results

Synthetic data

[Figure: results on synthetic data, plotted against the number of iterations]

Real data

Conclusions

• A non-parametric Bayesian technique for inferring hidden causes is developed and demonstrated.

• It recovers the number of hidden causes correctly and can be used to obtain a reasonably good estimate of the causal structure.

• It can be integrated into Bayesian structure learning over both observed variables and hidden causes.
