II.2 Statistical Inference: Sampling and Estimation



A statistical model M is a set of distributions (or regression functions), e.g., all unimodal, smooth distributions.

M is called a parametric model if it can be completely described by a finite number of parameters, e.g., the family of Normal distributions with parameters μ and σ:

$$f_X(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \quad \text{for } \mu \in \mathbb{R},\ \sigma^2 > 0$$
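As a sanity check, the density can be evaluated numerically. A minimal sketch (not part of the original slides; NumPy and the choice μ=0, σ²=1 are assumptions for illustration):

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma2=1.0):
    """Density f_X(x; mu, sigma^2) of the Normal distribution."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# A valid density must integrate to 1 over the real line (trapezoidal check).
xs = np.linspace(-10.0, 10.0, 100001)
print(np.trapz(normal_pdf(xs), xs))  # ~1.0
```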


Statistical Inference

Given a parametric model M and a sample X1,...,Xn, how do we infer (learn) the parameters of M?

For multivariate models with observed variable X and "outcome (response)" variable Y, this is called prediction or regression; for a discrete outcome variable it is also called classification.

r(x) = E[Y | X=x] is called the regression function.


Idea of Sampling

Distribution X (e.g., a population, objects of interest)
→ Samples X1,…,Xn drawn from X (e.g., people, objects)
→ Statistical inference: What can we say about X based on X1,…,Xn?

Example: Suppose we want to estimate the average salary of employees in German companies. Sample 1: Suppose we look at n=200 top-paid CEOs of major banks. Sample 2: Suppose we look at n=100 employees across all kinds of companies. Sample 1 is heavily biased, while Sample 2 is much more likely to be representative of the population.

Distribution parameter vs. sample parameter:
  mean μ_X ↔ sample mean X̄
  variance σ_X² ↔ sample variance S_X²
  size N ↔ sample size n


Basic Types of Statistical Inference

Given a set of iid. samples X1,...,Xn ~ X of an unknown distribution X, e.g., n single-coin-toss experiments X1,...,Xn ~ X: Bernoulli(p).

• Parameter Estimation, e.g.: What is the parameter p of X: Bernoulli(p)? What are E[X], the cdf F_X of X, the pdf f_X of X, etc.?

• Confidence Intervals, e.g.: Give me all intervals C = (a,b) such that P(p ∈ C) ≥ 0.95,

where a and b are derived from samples X1,...,Xn

• Hypothesis Testing, e.g.: H0: p = 1/2 vs. H1: p ≠ 1/2


Statistical Estimators

A point estimator for a parameter θ of a probability distribution X is a random variable θ̂ derived from an iid. sample X1,...,Xn.

Examples:

Sample mean: $\bar{X} := \frac{1}{n} \sum_{i=1}^{n} X_i$

Sample variance: $S_X^2 := \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$

An estimator θ̂ for parameter θ is unbiased if E[θ̂] = θ; otherwise the estimator has bias E[θ̂] − θ. An estimator θ̂_n on a sample of size n is consistent if $\lim_{n \to \infty} P[|\hat{\theta}_n - \theta| \le \epsilon] = 1$ for any ε > 0.

Sample mean and sample variance are unbiased and consistent estimators of μ_X and σ_X².
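A small simulation can illustrate these properties. The sketch below (an illustration, not from the slides; NumPy and the parameter choices are assumptions) averages both estimators over many samples to approximate their expectations:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2, n, trials = 5.0, 4.0, 30, 20000

means, variances = [], []
for _ in range(trials):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)  # iid sample X1,...,Xn
    means.append(x.mean())                       # sample mean (1/n sum)
    variances.append(x.var(ddof=1))              # sample variance (1/(n-1) normalization)

print(np.mean(means))      # ~5.0 = mu      -> unbiased
print(np.mean(variances))  # ~4.0 = sigma^2 -> unbiased
```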


Estimator Error

Let θ̂_n be an estimator for parameter θ over iid. samples X1,...,Xn.

The distribution of θ̂_n is called the sampling distribution. The standard error for θ̂_n is:

$$se(\hat{\theta}_n) = \sqrt{Var[\hat{\theta}_n]}$$

The mean squared error (MSE) for θ̂_n is:

$$MSE(\hat{\theta}_n) = E[(\hat{\theta}_n - \theta)^2] = bias^2(\hat{\theta}_n) + Var[\hat{\theta}_n]$$

Theorem: If bias → 0 and se → 0 then the estimator is consistent.

The estimator θ̂_n is asymptotically Normal if $(\hat{\theta}_n - \theta)/se(\hat{\theta}_n)$ converges in distribution to the standard Normal N(0,1).
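The bias-variance decomposition of the MSE can be checked empirically. A sketch (illustrative; it uses the biased 1/n variance estimator on Normal data, an assumption chosen because its bias −σ²/n is known):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n, trials = 1.0, 10, 200000

# Biased variance estimator (1/n normalization, ddof=0) on N(0,1) samples.
est = np.array([rng.normal(0.0, 1.0, n).var(ddof=0) for _ in range(trials)])

bias = est.mean() - sigma2          # theory: -sigma^2/n = -0.1
var = est.var()                     # variance of the estimator
mse = np.mean((est - sigma2) ** 2)  # mean squared error
print(mse, bias**2 + var)           # the two values nearly coincide
```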


Types of Estimation

• Nonparametric Estimation

No assumptions about the model M or the parameters θ of the underlying distribution X

“Plug-in estimators” (e.g. histograms) to approximate X

• Parametric Estimation (Inference)

Requires assumptions about the model M and the parameters θ of the underlying distribution X. Analytical or numerical methods for estimating θ:
- Method-of-Moments estimator
- Maximum Likelihood estimator and Expectation Maximization (EM)


Nonparametric Estimation

The empirical distribution function F̂_n is the cdf that puts probability mass 1/n at each data point Xi:

$$\hat{F}_n(x) = \frac{1}{n} \sum_{i=1}^{n} I(X_i \le x) \quad \text{with } I(X_i \le x) = \begin{cases} 1 & \text{if } X_i \le x \\ 0 & \text{if } X_i > x \end{cases}$$

A statistical functional ("statistics") T(F) is any function over F, e.g., mean, variance, skewness, median, quantiles, correlation.

The plug-in estimator of θ = T(F) is: $\hat{\theta}_n = T(\hat{F}_n)$

Simply use F̂_n instead of F to calculate the statistics T of interest.
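A minimal sketch of the empirical cdf and two plug-in estimators (illustrative code, not from the slides; the sample values are made up):

```python
import numpy as np

def ecdf(sample):
    """Empirical cdf: F_n(x) = (1/n) * #{i : X_i <= x}."""
    xs = np.sort(np.asarray(sample))
    return lambda x: np.searchsorted(xs, x, side="right") / len(xs)

data = [3.1, 1.4, 2.7, 5.0, 2.2, 4.8, 3.3]
F_n = ecdf(data)
print(F_n(3.0))       # fraction of data points <= 3.0
print(np.mean(data))  # plug-in estimator of the mean T(F) = E[X]
print(np.median(data))  # plug-in estimator of the median
```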


Histograms as Density Estimators

Instead of the full empirical distribution, often compact data synopses may be used, such as histograms, where X1,...,Xn are grouped into m cells (buckets) c1,...,cm with bucket boundaries lb(ci) and ub(ci) s.t.

lb(c1) = −∞, ub(cm) = ∞, ub(ci) = lb(ci+1) for 1 ≤ i < m, and

$$freq_f(c_i) = \hat{f}_n(x) = \frac{1}{n} \sum_{v=1}^{n} I(lb(c_i) \le X_v \le ub(c_i)) \quad \text{for } x \in c_i$$

$$freq_F(c_i) = \hat{F}_n(x) = \frac{1}{n} \sum_{v=1}^{n} I(X_v \le ub(c_i))$$

Histograms provide a (discontinuous) density estimator.

Example: X1 = 1, X2 = 1, X3 = 2, X4 = 2, X5 = 2, X6 = 3, …, X20 = 7, with bucket frequencies f̂_X(x) = 2/20, 3/20, 5/20, 4/20, 3/20, 2/20, 1/20 for x = 1, 2, 3, 4, 5, 6, 7.

[Figure: histogram of f_X(x) over x = 1,…,7]

$$\hat{\mu}_n = 1 \cdot \tfrac{2}{20} + 2 \cdot \tfrac{3}{20} + 3 \cdot \tfrac{5}{20} + 4 \cdot \tfrac{4}{20} + 5 \cdot \tfrac{3}{20} + 6 \cdot \tfrac{2}{20} + 7 \cdot \tfrac{1}{20} = 3.65$$
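The slide's example can be reproduced directly. A sketch (the unit-width buckets [0.5, 1.5), …, [6.5, 7.5] are an assumption matching the drawn histogram):

```python
import numpy as np

# Data from the slide's example: frequencies 2,3,5,4,3,2,1 for values 1..7.
data = [1]*2 + [2]*3 + [3]*5 + [4]*4 + [5]*3 + [6]*2 + [7]*1
n = len(data)  # 20

edges = np.arange(0.5, 8.0, 1.0)    # unit-width bucket boundaries
counts, _ = np.histogram(data, bins=edges)
freq_f = counts / n                 # per-bucket density estimate f^_n
freq_F = np.cumsum(counts) / n      # cdf estimate F^_n at bucket upper bounds

print(freq_f)         # [0.1 0.15 0.25 0.2 0.15 0.1 0.05]
print(freq_F[-1])     # 1.0
print(np.mean(data))  # 3.65, matching the slide
```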


Parametric Inference (1): Method of Moments

Suppose parameter θ = (θ1,…,θk) has k components.

Compute the j-th moment: $\alpha_j = \alpha_j(\theta) = E_\theta[X^j] = \int x^j f_X(x;\theta)\, dx$

j-th sample moment: $\hat{\alpha}_j = \frac{1}{n} \sum_{i=1}^{n} X_i^j$ for 1 ≤ j ≤ k

Estimate parameter θ by the method-of-moments estimator θ̂_n s.t.

$$\alpha_1(\hat{\theta}_n) = \hat{\alpha}_1 \ \text{ and } \ \alpha_2(\hat{\theta}_n) = \hat{\alpha}_2 \ \text{ and } \ \dots \ \text{ and } \ \alpha_k(\hat{\theta}_n) = \hat{\alpha}_k \quad \text{(for the first k moments)}$$

Solve this equation system with k equations and k unknowns.

Method-of-moments estimators are usually consistent and asymptotically Normal, but may be biased.
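For the Normal family the two moment equations α₁ = μ and α₂ = μ² + σ² can be solved in closed form, giving μ̂ = α̂₁ and σ̂² = α̂₂ − α̂₁². A sketch (illustrative; the true parameters N(2, 9) are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=10000)  # "unknown" N(2, 9)

a1 = np.mean(x)       # first sample moment
a2 = np.mean(x**2)    # second sample moment

# For N(mu, sigma^2): alpha_1 = mu, alpha_2 = mu^2 + sigma^2.
mu_hat = a1
sigma2_hat = a2 - a1**2
print(mu_hat, sigma2_hat)  # ~2.0, ~9.0
```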


Parametric Inference (2): Maximum Likelihood Estimators (MLE)

Let X1,...,Xn be iid. with pdf f(x;θ).

Estimate the parameter θ of a postulated distribution f(x;θ) such that the likelihood that the sample values x1,...,xn are generated by this distribution is maximized.

Maximum likelihood estimation: Maximize L(x1,...,xn; θ) ≈ P[x1,...,xn originate from f(x;θ)], usually formulated as $L_n(\theta) = \prod_i f(X_i; \theta)$, or (alternatively) maximize $l_n(\theta) = \log L_n(\theta)$.

The value θ̂ that maximizes L_n(θ) is the MLE of θ.

If analytically intractable, use numerical iteration methods.


Simple Example for Maximum Likelihood Estimator

Given:
• Coin toss experiment (Bernoulli distribution) with unknown parameter p for seeing heads, 1−p for tails
• Sample (data): h times heads in n coin tosses

Want: Maximum likelihood estimate of p

Let the likelihood, with h = Σ_i X_i, be

$$L(h, n, p) = \prod_{i=1}^{n} f(X_i; p) = \prod_{i=1}^{n} p^{X_i} (1-p)^{1-X_i} = p^h (1-p)^{n-h}$$

Maximize the log-likelihood function:

$$\log L(h, n, p) = h \log p + (n-h) \log(1-p)$$

$$\frac{\partial \ln L}{\partial p} = \frac{h}{p} - \frac{n-h}{1-p} = 0 \quad \Rightarrow \quad \hat{p} = \frac{h}{n}$$
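The closed-form MLE p̂ = h/n can be cross-checked against a brute-force maximization of the log-likelihood. A sketch (illustrative; p=0.3, n=1000, and the grid search are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p_true = 1000, 0.3
x = rng.binomial(1, p_true, size=n)  # n coin tosses (1 = heads)
h = x.sum()                          # number of heads

def log_L(p):
    """Log-likelihood h*log(p) + (n-h)*log(1-p)."""
    return h * np.log(p) + (n - h) * np.log(1 - p)

grid = np.linspace(0.001, 0.999, 999)
p_numeric = grid[np.argmax(log_L(grid))]
print(h / n, p_numeric)  # closed-form MLE vs. grid maximizer, nearly equal
```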


MLE for Parameters of Normal Distributions

$$L(x_1,...,x_n, \mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$

$$\frac{\partial \ln L}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu) = 0$$

$$\frac{\partial \ln L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^{n} (x_i - \mu)^2 = 0$$

$$\Rightarrow \quad \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{\mu})^2$$
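A quick numerical check of these closed forms (illustrative; the parameters are arbitrary). Note that the MLE σ̂² uses the 1/n normalization and is therefore slightly biased, unlike the 1/(n−1) sample variance:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=1.5, scale=2.0, size=5000)

mu_hat = x.mean()                        # MLE of mu
sigma2_hat = np.mean((x - mu_hat) ** 2)  # MLE of sigma^2 (1/n, not 1/(n-1))
print(mu_hat, sigma2_hat)                # ~1.5, ~4.0
```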


MLE Properties

Maximum Likelihood estimators are consistent, asymptotically Normal, and asymptotically optimal (i.e., efficient) in the following sense:

Consider two estimators U and T which are asymptotically Normal. Let σ_U² and σ_T² denote the variances of the two Normal distributions to which U and T converge in distribution. The asymptotic relative efficiency of U to T is ARE(U,T) := σ_T²/σ_U².

Theorem: For the MLE θ̂_n and any other estimator θ̃_n the following inequality holds:

$$ARE(\tilde{\theta}_n, \hat{\theta}_n) \le 1$$

That is, among all estimators the MLE has the smallest (asymptotic) variance.
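For Normal data the sample mean is the MLE of μ, while the sample median is an alternative consistent estimator; its asymptotic variance is larger by a factor of π/2, so ARE(median, mean) = 2/π ≈ 0.64 ≤ 1. A simulation sketch (illustrative parameter choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 100, 20000

means = np.empty(trials)
medians = np.empty(trials)
for t in range(trials):
    x = rng.normal(0.0, 1.0, n)
    means[t] = x.mean()        # MLE of mu under Normality
    medians[t] = np.median(x)  # competing estimator of mu

# ARE(median, mean) = Var(mean) / Var(median) -> 2/pi ~ 0.64
print(means.var() / medians.var())
```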


Bayesian Viewpoint of Parameter Estimation

• Assume a prior distribution g(θ) of parameter θ
• Choose a statistical model (generative model) f(x | θ) that reflects our beliefs about RV X
• Given RVs X1,...,Xn for the observed data, the posterior distribution is h(θ | x1,...,xn)

For X1 = x1, ..., Xn = xn the likelihood is

$$L(x_1,...,x_n \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta) \qquad \text{and} \qquad h(\theta \mid x_1,...,x_n) = \frac{\prod_i f(x_i \mid \theta)\, g(\theta)}{\int \prod_i f(x_i \mid \theta')\, g(\theta')\, d\theta'}$$

which implies $h(\theta \mid x_1,...,x_n) \sim L(x_1,...,x_n \mid \theta)\, g(\theta)$ (posterior is proportional to likelihood times prior).

MAP estimator (maximum a posteriori): Compute the θ that maximizes h(θ | x1,…,xn) given a prior for θ.
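For the coin-toss example a Beta(a, b) prior is conjugate to the Bernoulli likelihood, so the posterior is Beta(a+h, b+n−h) and the MAP estimate is its mode. A sketch (illustrative; the Beta(2,2) prior and the other parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)
n, p_true = 50, 0.3
h = rng.binomial(1, p_true, size=n).sum()  # observed number of heads

a, b = 2.0, 2.0                            # assumed Beta(a, b) prior on p
# Posterior is Beta(a + h, b + n - h); its mode is the MAP estimate.
p_map = (h + a - 1) / (n + a + b - 2)
p_mle = h / n                              # MLE for comparison
print(p_mle, p_map)  # MAP is pulled slightly toward the prior mean 0.5
```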


Analytically Intractable MLE for Parameters of a Multivariate Normal Mixture

Consider samples from a k-mixture of m-dimensional Normal distributions with the density (e.g., height and weight of males and females):

$$f(x \mid \pi_1,...,\pi_k, \mu_1,...,\mu_k, \Sigma_1,...,\Sigma_k) = \sum_{j=1}^{k} \pi_j \frac{1}{\sqrt{(2\pi)^m |\Sigma_j|}}\, e^{-\frac{1}{2}(x-\mu_j)^T \Sigma_j^{-1} (x-\mu_j)} = \sum_{j=1}^{k} \pi_j\, n(x \mid \mu_j, \Sigma_j)$$

with expectation values μ_j and invertible, positive definite, symmetric m×m covariance matrices Σ_j.

Maximize the log-likelihood function:

$$\log L(x_1,...,x_n, \theta) := \sum_{i=1}^{n} \log P[x_i \mid \theta] = \sum_{i=1}^{n} \log \sum_{j=1}^{k} \pi_j\, n(x_i \mid \mu_j, \Sigma_j)$$


Expectation-Maximization Method (EM)

Key idea: When L(X1,...,Xn, θ) (where the Xi and θ are possibly multivariate) is analytically intractable, then
• introduce latent (i.e., hidden, invisible, missing) random variable(s) Z such that
• the joint distribution J(X1,...,Xn, Z, θ) of the "complete" data is tractable (often with Z actually being multivariate: Z1,...,Zm)
• iteratively derive the expected complete-data likelihood by integrating J and find the best θ:

$$E_{Z|X,\theta}[J(X_1,...,X_n, Z, \theta)], \qquad \hat{\theta} = \arg\max_{\theta} \sum_{z} J(X_1,...,X_n \mid Z = z;\, \theta)\, P[Z = z]$$


EM Procedure

Initialization: Choose a start estimate for θ(0) (e.g., using a Method-of-Moments estimator).

Iterate (t = 0, 1, …) until convergence:

E step (expectation): Estimate the posterior probability of Z, P[Z | X1,…,Xn, θ(t)], assuming θ were known and equal to the previous estimate θ(t), and compute E_{Z|X,θ(t)}[log J(X1,…,Xn, Z, θ)] by integrating over values for Z.

M step (maximization, MLE step): Estimate θ(t+1) by maximizing θ(t+1) = arg max_θ E_{Z|X,θ(t)}[log J(X1,…,Xn, Z, θ)].

Convergence is guaranteed (because the E step computes a lower bound of the true L function, and the M step yields monotonically non-decreasing likelihood), but the method may only reach a local maximum of the (log-)likelihood function.


EM Example for Multivariate Normal Mixture

Zij = 1 if the i-th data point Xi was generated by the j-th component, 0 otherwise.

Expected complete-data log-likelihood:

$$E_{Z|X,\theta}[\log J(X_1,...,X_n, Z, \theta)] = \sum_{i} \sum_{j} \log n(x_i \mid \mu_j, \Sigma_j, Z_{ij}=1)\, P[Z_{ij} = 1]$$

Expectation step (E step):

$$h_{ij} := P[Z_{ij} = 1 \mid x_i, \theta^{(t)}] = \frac{\pi_j^{(t)}\, P[x_i \mid n(\theta_j^{(t)})]}{\sum_{l=1}^{k} \pi_l^{(t)}\, P[x_i \mid n(\theta_l^{(t)})]}$$

Maximization step (M step), yielding θ(t+1):

$$\mu_j := \frac{\sum_{i=1}^{n} h_{ij}\, x_i}{\sum_{i=1}^{n} h_{ij}} \qquad \Sigma_j := \frac{\sum_{i=1}^{n} h_{ij}\, (x_i - \mu_j)(x_i - \mu_j)^T}{\sum_{i=1}^{n} h_{ij}} \qquad \pi_j := \frac{\sum_{i=1}^{n} h_{ij}}{\sum_{j=1}^{k} \sum_{i=1}^{n} h_{ij}} = \frac{\sum_{i=1}^{n} h_{ij}}{n}$$

See L. Wasserman, p. 121 ff. for k=2, m=1.
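A compact EM sketch for the one-dimensional case k=2, m=1 (as in the Wasserman reference); the synthetic data, the fixed iteration count, and the start values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic 1-D data from two Normal components (k=2, m=1).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])
n = len(x)

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Start estimates (could come from a Method-of-Moments guess).
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E step: responsibilities h_ij = P[Z_ij = 1 | x_i, theta^(t)].
    w = pi * np.stack([normal_pdf(x, mu[j], var[j]) for j in range(2)], axis=1)
    h = w / w.sum(axis=1, keepdims=True)
    # M step: re-estimate mu_j, Sigma_j (here scalar variances), pi_j.
    nj = h.sum(axis=0)
    mu = (h * x[:, None]).sum(axis=0) / nj
    var = (h * (x[:, None] - mu) ** 2).sum(axis=0) / nj
    pi = nj / n

print(pi, mu, var)  # ~[0.3, 0.7], ~[-2.0, 3.0], ~[1.0, 2.25]
```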


Confidence Intervals

Estimator T for an interval for parameter θ such that

$$P[T - a \le \theta \le T + a] \ge 1 - \alpha$$

[T−a, T+a] is the confidence interval and 1−α is the confidence level.

For the distribution of a random variable X, a value x_γ (0 < γ < 1) with P[X ≤ x_γ] ≥ γ and P[X ≥ x_γ] ≥ 1−γ is called a γ-quantile; the 0.5-quantile is called the median. For the Normal distribution N(0,1) the γ-quantile is denoted Φ^{−1}(γ).


For a given a or α, find the value z of N(0,1) that determines the confidence interval [T−a, T+a], or the corresponding γ-quantile for 1−α.


Confidence Intervals for Expectations (1)

Let x1,...,xn be a sample from a distribution with unknown expectation μ and known variance σ². For sufficiently large n, the sample mean X̄ is N(μ, σ²/n) distributed and $(\bar{X} - \mu)\sqrt{n}/\sigma$ is N(0,1) distributed:

$$P\left[-z \le \frac{(\bar{X}-\mu)\sqrt{n}}{\sigma} \le z\right] = \Phi(z) - \Phi(-z) = \Phi(z) - (1 - \Phi(z)) = 2\Phi(z) - 1$$

$$P\left[\bar{X} - \frac{z_{1-\alpha/2}\,\sigma}{\sqrt{n}} \le \mu \le \bar{X} + \frac{z_{1-\alpha/2}\,\sigma}{\sqrt{n}}\right] = 1 - \alpha$$

For confidence interval [X̄−a, X̄+a] set $z := \frac{a\sqrt{n}}{\sigma}$, then look up Φ(z) to find 1−α.

For confidence level 1−α set $z := \Phi^{-1}\left(1 - \frac{\alpha}{2}\right)$ (the (1−α/2)-quantile of N(0,1)), then $a := \frac{z\,\sigma}{\sqrt{n}}$.
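A sketch of the resulting recipe (illustrative; SciPy's norm.ppf supplies Φ⁻¹, and the data, σ, and α are assumptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(13)
sigma = 2.0                        # known standard deviation
x = rng.normal(10.0, sigma, 50)    # sample with unknown expectation mu
n, alpha = len(x), 0.05

z = norm.ppf(1 - alpha / 2)        # (1 - alpha/2)-quantile of N(0,1), ~1.96
a = z * sigma / np.sqrt(n)
print(x.mean() - a, x.mean() + a)  # 95% confidence interval for mu
```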


Confidence Intervals for Expectations (2)

Let X1,...,Xn be an iid. sample from a distribution X with unknown expectation μ, unknown variance σ², and known sample variance S². For sufficiently large n, the random variable

$$T := \frac{(\bar{X} - \mu)\sqrt{n}}{S}$$

has a t distribution (Student distribution) with n−1 degrees of freedom:

$$f_{T,n}(t) = \frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)\sqrt{n\pi}} \left(1 + \frac{t^2}{n}\right)^{-\frac{n+1}{2}}$$

with the Gamma function $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\, dt$ for x > 0 (with the properties Γ(1) = 1 and Γ(x+1) = x·Γ(x)).

$$P\left[\bar{X} - \frac{t_{n-1,1-\alpha/2}\, S}{\sqrt{n}} \le \mu \le \bar{X} + \frac{t_{n-1,1-\alpha/2}\, S}{\sqrt{n}}\right] = 1 - \alpha$$


Summary of Section II.2

• Quality measures for statistical estimators
• Nonparametric vs. parametric estimation
• Histograms as generic (nonparametric) plug-in estimators
• Method-of-Moments estimator: good initial guess but may be biased
• Maximum-Likelihood estimator & Expectation Maximization
• Confidence intervals for parameters


Normal Distribution Table


Student's t Distribution Table
