Final presentation: BayesianCourse


DESCRIPTION

Final presentation on Bayesian statistics

TRANSCRIPT

Page 1

Outline: Empirical Bayes

Bayesian Statistics Course

Alvaro Montenegro

November 7, 2011

Alvaro Montenegro Bayesian Statistics Course

Page 2

Empirical Bayes

Preliminary


Page 3

Preliminary

Reconstruction of the Normal Joint Distribution

Let's suppose that

y|θ ∼ N(θ,Σ)

θ ∼ N(µ,B)

then

\[
\begin{pmatrix} y \\ \theta \end{pmatrix} \sim N\!\left[ \begin{pmatrix} \mu \\ \mu \end{pmatrix},\;
\begin{pmatrix} \Sigma + B & B \\ B & B \end{pmatrix} \right]
\]
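As a quick sanity check, the scalar case (Σ = σ², B = τ²) can be simulated; the values of µ, σ², and τ² below are illustrative, not from the course:

```python
import numpy as np

# Monte Carlo check of the joint normal result in the scalar case
# (Sigma = sigma^2, B = tau^2). mu, sigma2, tau2 are illustrative values.
rng = np.random.default_rng(0)
mu, sigma2, tau2 = 1.0, 2.0, 3.0
n = 1_000_000

theta = rng.normal(mu, np.sqrt(tau2), size=n)  # theta ~ N(mu, tau^2)
y = rng.normal(theta, np.sqrt(sigma2))         # y | theta ~ N(theta, sigma^2)

print(np.var(y))               # should be close to sigma2 + tau2 = 5.0
print(np.cov(y, theta)[0, 1])  # should be close to tau2 = 3.0
```

The sample variance of y and the sample covariance of (y, θ) land near σ² + τ² and τ², as the joint distribution above predicts.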


Page 4

Introduction to EB

For simplicity, suppose a two-stage Bayesian model.

Assume a likelihood f(y|θ) for the observed data.

For θ, assume a prior distribution with cdf G(θ) and density or mass function g(θ|η), where η is a vector of hyperparameters. If η is known, Bayes' theorem tells us that

\[
p(\theta \mid y, \eta) = \frac{f(y \mid \theta)\, g(\theta \mid \eta)}{m(y \mid \eta)}
\]


Page 5

Introduction to EB II

m(y|η) denotes the marginal distribution of y,

\[
m(y \mid \eta) = \int f(y \mid \theta)\, g(\theta \mid \eta)\, d\theta
\]

If η is unknown, the fully Bayesian approach adopts a hyperprior distribution h(η). The posterior distribution is then

\[
p(\theta \mid y) = \frac{\int f(y \mid \theta)\, g(\theta \mid \eta)\, h(\eta)\, d\eta}{\int\!\!\int f(y \mid u)\, g(u \mid \eta)\, h(\eta)\, du\, d\eta}
= \int p(\theta \mid y, \eta)\, h(\eta \mid y)\, d\eta
\]


Page 6

Introduction to EB III

In empirical Bayes analysis, we use the marginal distribution m(y|η) to estimate η by η̂ = η̂(y) (e.g., the marginal MLE).

Inference is then based on the estimated posterior distribution p(θ|y, η̂).

The name empirical Bayes arises from the fact that we are using the data to estimate the hyperparameter η.


Page 7

Parametric EB (PEB)

If m(y|η) is available in closed form, we use it directly to find the MMLE of η.

Gaussian/Gaussian model

Consider the two-stage Gaussian/Gaussian model

y_i | θ_i ∼ N(θ_i, σ²),  i = 1, …, k

θ_i ∼ N(µ, τ²),  i = 1, …, k

Assume that both τ² and σ² are known. Then η = µ, and

\[
\begin{pmatrix} y_i \\ \theta_i \end{pmatrix} \sim N\!\left[ \begin{pmatrix} \mu \\ \mu \end{pmatrix},\;
\begin{pmatrix} \sigma^2 + \tau^2 & \tau^2 \\ \tau^2 & \tau^2 \end{pmatrix} \right]
\]

Thus y_i ∼ N(µ, σ² + τ²) and corr²(y_i, θ_i) = τ²/(σ² + τ²).


Page 8

Gaussian/Gaussian model

Hence, the marginal density of y = (y_1, …, y_k)ᵗ is given by

\[
m(y \mid \mu) = \frac{1}{[2\pi(\sigma^2 + \tau^2)]^{k/2}} \exp\!\left[-\frac{1}{2(\sigma^2 + \tau^2)} \sum_{i=1}^{k} (y_i - \mu)^2\right]
\]

So µ̂ = ȳ = (1/k) ∑_{i=1}^{k} y_i is the MMLE of µ.

We conclude that the estimated posterior distribution is

\[
p(\theta_i \mid y_i, \hat\mu) = N\!\big(B\hat\mu + (1 - B) y_i,\; (1 - B)\sigma^2\big)
\]

where B = σ²/(σ² + τ²). Then

\[
\hat\theta_i^{\mu} = B\bar y + (1 - B) y_i = \bar y + (1 - B)(y_i - \bar y)
\]
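These closed-form estimates take only a few lines of code; the data vector y and both variances below are illustrative values, not from the course:

```python
import numpy as np

# Parametric EB for the Gaussian/Gaussian model with sigma^2 and tau^2 known.
# The observations y and both variances are illustrative values.
y = np.array([2.1, -0.3, 1.5, 0.7, 3.2])
sigma2, tau2 = 1.0, 2.0

mu_hat = y.mean()                    # MMLE of mu, since y_i ~ N(mu, sigma^2 + tau^2)
B = sigma2 / (sigma2 + tau2)         # shrinkage factor B = sigma^2 / (sigma^2 + tau^2)
theta_hat = mu_hat + (1 - B) * (y - mu_hat)  # EB estimates, shrunk toward ybar
```

Each θ̂ᵢ pulls the raw yᵢ toward the grand mean ȳ, with more shrinkage when σ² is large relative to τ².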


Page 9

Gaussian/Gaussian model II

Now assume that τ is also unknown, so η = (µ, τ). We now have to decide what estimate to use for τ (or τ², or B). The MMLE for τ² in this case is

\[
\hat\tau^2 = (s^2 - \sigma^2)_+ = \max\{0,\; s^2 - \sigma^2\}
\]

where s² = (1/k) ∑_{i=1}^{k} (y_i − ȳ)².

The MMLE for B is

\[
\hat B = \frac{\sigma^2}{\sigma^2 + \hat\tau^2} = \frac{\sigma^2}{\sigma^2 + (s^2 - \sigma^2)_+}
\]

and the resulting EB estimates are

\[
\hat\theta_i^{\mu\tau} = \bar y + (1 - \hat B)(y_i - \bar y)
\]
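With τ² unknown, only one extra step changes: plugging in the estimate (s² − σ²)₊. Again, the data and σ² below are illustrative values:

```python
import numpy as np

# PEB estimates when tau^2 is also unknown: plug in tau2_hat = (s^2 - sigma^2)_+.
# The observations y and sigma^2 are illustrative values.
y = np.array([2.1, -0.3, 1.5, 0.7, 3.2])
sigma2 = 1.0

ybar = y.mean()
s2 = ((y - ybar) ** 2).mean()         # s^2 = (1/k) sum (y_i - ybar)^2
tau2_hat = max(0.0, s2 - sigma2)      # (s^2 - sigma^2)_+
B_hat = sigma2 / (sigma2 + tau2_hat)  # estimated shrinkage factor

theta_hat = ybar + (1 - B_hat) * (y - ybar)
```

When s² ≤ σ², B̂ = 1 and every θ̂ᵢ collapses to ȳ.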


Page 10

EM algorithm for PEB

This alternative is useful when the MLE of η would be relatively straightforward if θ were observed. The MMLE of η can be computed using the prior g. Let

\[
S(\theta \mid \eta) = \frac{\partial}{\partial \eta} \log g(\theta \mid \eta)
\]

be the score function.

- E-step. Let η^(j) denote the current estimate of the hyperparameter at iteration j. Compute the posterior mean of θ from the posterior p(θ|y, η^(j)), and compute S(η|η^(j)) = E(S(θ|η) | y, η^(j)).
- M-step. Use S to compute a new estimate of the hyperparameter, that is, obtain the MLE of η from S (solve S(η|η^(j)) = 0 for η^(j+1)).


Page 11

EM algorithm for the Gaussian/Gaussian model

For simplicity, let T = τ². Let l(η) be the likelihood of η for component i. Then

−2 log l(η) = log(T) + (θ_i − µ)²/T

Hence,

\[
S(\theta_i, \eta) = -\frac{1}{2}
\begin{pmatrix}
-\dfrac{2(\theta_i - \mu)}{T} \\[8pt]
\dfrac{1}{T} - \dfrac{(\theta_i - \mu)^2}{T^2}
\end{pmatrix}
\]

with the two components corresponding to ∂/∂µ and ∂/∂T.


Page 12

EM algorithm for the Gaussian/Gaussian model II

- E-step. Sample (obtain) θ_i^(j) from

\[
p(\theta_i \mid y, \mu^{(j)}, T^{(j)}) = N\!\big(B^{(j)}\mu^{(j)} + (1 - B^{(j)}) y_i,\; \sigma^2 (1 - B^{(j)})\big),
\qquad B^{(j)} = \frac{\sigma^2}{\sigma^2 + T^{(j)}}
\]

- M-step. Estimate µ^(j+1) = (1/k) ∑_{i=1}^{k} θ_i^(j) and T^(j+1) = (1/k) ∑_{i=1}^{k} (θ_i^(j) − µ^(j))².
