
• 8/6/2019 TTestLecture

STAT 141 Confidence Intervals and Hypothesis Testing 10/26/04

    Today (Chapter 7):

    CI with σ unknown, t-distribution

    CI for proportions; two-sample CI with σ known or unknown; hypothesis testing, z-test

Confidence Intervals with σ unknown

    Last time: the confidence interval when σ is known.

    A level C, or 100(1 − α)%, confidence interval for μ is

    \[ \left[\, \bar X - z_{\alpha/2}\frac{\sigma}{\sqrt n},\; \bar X + z_{\alpha/2}\frac{\sigma}{\sqrt n} \,\right] \]

    But to return to reality, we don't know σ. Thus we must estimate the standard deviation of \(\bar X\) with:

    \[ SE_{\bar X} = \frac{s}{\sqrt n} \]

But s is just a function of our \(X_i\)'s and thus is a random variable too: it has a sampling distribution too.

    Before, if we knew σ, we could say

    \[ P\left( -z_{\alpha/2} < \frac{\bar X - \mu}{\sigma/\sqrt n} < z_{\alpha/2} \right) = 1 - \alpha \]

    which after algebra gave the confidence interval.

    [Remember: for any s, \(z_s\) is defined as where 1 − 2s of the area falls in \((-z_s, z_s)\). So \(z_s\) = qnorm(1 − s) = −qnorm(s) = the 1 − s quantile; i.e. \(z_s\) is the positive side.] Now we want a similar setup, so that:

\[ P\left( ?? < \frac{\bar X - \mu}{SE_{\bar X}} < ?? \right) = 1 - \alpha \]

    The quantity \((\bar X - \mu)/SE_{\bar X}\) is no longer normally distributed: it follows the t-distribution with n − 1 degrees of freedom, which has heavier tails than the normal:

    > par(mfrow=c(2,2))
    # tdist1.pdf
    > plot(seq(-6,6,length=10000), dnorm(seq(-6,6,length=10000)),
        type="l", lty=3, ylab="", xlab="", main="t-dist w/ df=1")
    > lines(seq(-6,6,length=10000), dt(seq(-6,6,length=10000), df=1),
        type="l", ylab="", xlab="")
    > legend(x=2, y=.4, lty=c(1,3), legend=c("t-dist, df=1","N(0,1)"))

    ...

Thus the t-distribution approaches the normal as the degrees of freedom increase, but for small n it gives wider intervals.
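This convergence is easy to check numerically; a small sketch (not part of the original handout):

```r
# Compare the .975 t-quantile with the normal quantile as df grows:
# the penalty for estimating sigma shrinks toward zero.
for (df in c(1, 5, 10, 30, 100)) {
  cat("df =", df, " qt(.975, df) =", round(qt(.975, df), 4), "\n")
}
cat("qnorm(.975) =", round(qnorm(.975), 4), "\n")
```

For df = 1 the t-quantile is about 12.71; by df = 100 it is already 1.984, close to the normal 1.96.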

Why n − 1 degrees of freedom?


Let \(y_i = x_i - \bar x\). We have

    \[ s^2 = \frac{1}{n-1}\sum_{i=1}^{n} y_i^2 \qquad\text{and}\qquad \sum_{i=1}^{n} y_i = 0 \quad (*) \]

    Now (*) ⟺ one constraint on n numbers, hence the phrase "n − 1 degrees of freedom". Now that we know the distribution, we can find the ?? from above: these are just the α/2 and 1 − α/2 quantiles of the t-distribution. Let \(t_{n-1,s}\) be defined similarly to \(z_s\); it is equal to qt(1 − s, df = n − 1) = −qt(s, df = n − 1). We then have:

    \[ P\left( -t_{n-1,\alpha/2} < \frac{\bar X - \mu}{SE_{\bar X}} < t_{n-1,\alpha/2} \right) = 1 - \alpha \]

    This gives us a confidence interval like before, only we use the quantiles of the t-distribution rather than the normal distribution.
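As a sketch of this recipe in R (the helper name `t_ci` is mine, not from the lecture):

```r
# Level-C confidence interval for mu when sigma is unknown:
# Xbar +/- t_{n-1, alpha/2} * s / sqrt(n)
t_ci <- function(x, conf = 0.95) {
  n  <- length(x)
  se <- sd(x) / sqrt(n)                      # SE of the sample mean
  tq <- qt(1 - (1 - conf) / 2, df = n - 1)   # t_{n-1, alpha/2}
  mean(x) + c(-1, 1) * tq * se
}
```

Swapping qt for qnorm here would recover the known-σ (large-sample) interval from last time.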

Example. Taken from the original paper on the t-test by W.S. Gosset, 1908. [Gosset was employed by Guinness Breweries, Dublin. A chemist turned statistician; Guinness, fearing the results to be of commercial importance, forbade Gosset to publish under his own name. He chose the pseudonym "Student" out of modesty.]

    Two drugs to induce sleep: A = dextro, B = laevo. Each of ten patients receives both drugs (presumably in random order). Issue: is drug B better than drug A? Student's sleep data:


> data(sleep)
    > sleep
       extra group
    1    0.7     1
    2   -1.6     1
    3   -0.2     1
    4   -1.2     1
    5   -0.1     1
    6    3.4     1
    7    3.7     1
    8    0.8     1
    9    0.0     1
    10   2.0     1
    11   1.9     2
    12   0.8     2
    13   1.1     2
    14   0.1     2
    15  -0.1     2
    16   4.4     2
    17   5.5     2
    18   1.6     2
    19   4.6     2
    20   3.4     2
    > extra1 <- sleep[sleep[,2]==1,]
    > extra2 <- sleep[sleep[,2]==2,]
    > extradiff <- extra2[,1]-extra1[,1]
    > extradiff
     [1] 1.2 2.4 1.3 1.3 0.0 1.0 1.8 0.8 4.6 1.4
    > mean(extradiff)
    [1] 1.58
    > sqrt(var(extradiff))
    [1] 1.229995
    > sqrt(var(extradiff)/10)
    [1] 0.3889587
    > 1.58/0.38896
    [1] 4.062114

    > qt(.975,9)

    [1] 2.262157

    > qt(.995,9)

    [1] 3.249836

    > qnorm(0.975)

    [1] 1.959964

    > qnorm(0.995)

    [1] 2.575829
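These hand computations can be checked against R's built-in one-sample test (a verification, not part of the original transcript):

```r
# Student's sleep differences, as computed above
extradiff <- c(1.2, 2.4, 1.3, 1.3, 0.0, 1.0, 1.8, 0.8, 4.6, 1.4)
t.test(extradiff)   # t = 4.0621 on df = 9, matching 1.58/0.38896 above
```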

A level C confidence interval for μ with σ unknown:

    \[ \bar X \pm t_{n-1,\alpha/2}\,\frac{s}{\sqrt n} \]

    This is exact if X is Normal, and otherwise approximately correct for large n. The margin of error M in E ± M is

    \[ M = t_{n-1,\alpha/2}\,\frac{s}{\sqrt n} = t_{n-1,\alpha/2}\,SE_{\bar X} \]

    Remark: the large value 4.6 is a possible outlier, so there is some doubt about the normality assumptions here.

    What's different? Since we don't know σ, we pay a penalty with a (slightly) wider interval (e.g. t = 2.262 vs. z = 1.96 at the 95% confidence level).

    For large sample sizes we can just use the normal quantiles \(z_{\alpha/2}\), since the t-distribution quickly comes to look like the normal distribution.

Proportions

    We saw last time that \(\hat p\) is approximately distributed as \(N\!\left(p, \frac{p(1-p)}{n}\right)\). If we want a confidence interval for p, we can use this normality to get an approximate confidence interval with margin of error

    \[ M = z_{\alpha/2}\,SE_{\hat p} = z_{\alpha/2}\sqrt{\frac{\hat p(1-\hat p)}{n}} \]

    The book offers a correction to this, using

    \[ \tilde p = \frac{y + 0.5\,z_{\alpha/2}^2}{n + z_{\alpha/2}^2} \qquad\text{and}\qquad SE_{\tilde p} = \sqrt{\frac{\tilde p(1-\tilde p)}{n + z_{\alpha/2}^2}} \]

    where y is the number of successes.
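Both intervals can be sketched in R (the function names are mine; `y` is the number of successes):

```r
# Plain normal-approximation interval for p
prop_ci <- function(y, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  p <- y / n
  p + c(-1, 1) * z * sqrt(p * (1 - p) / n)
}

# The book's corrected version: shift p-hat toward 1/2 and enlarge n
prop_ci_corrected <- function(y, n, conf = 0.95) {
  z  <- qnorm(1 - (1 - conf) / 2)
  pt <- (y + 0.5 * z^2) / (n + z^2)
  pt + c(-1, 1) * z * sqrt(pt * (1 - pt) / (n + z^2))
}
```

The correction matters most for small n or for p near 0 or 1, where the plain interval can even stick out of [0, 1].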

Two samples

    One of the most common statistical procedures. Is there a difference? Is it real? Because of the preparatory work with one-sample problems, this should seem rather familiar, a case of déjà vu, but with slightly more complex formulas.

    What do we mean by two samples?

    Two groups: distinct populations [treatment/control, . . . , male/female, . . . ]
    Grouping variable: a categorical variable with 2 levels.
    Data is independent between groups.

Example (Dalgaard, p. 87). Energy expenditure: two groups of women, lean and obese. Twenty-four-hour energy expenditure in MJ.

    > data(energy)   # from Dalgaard's ISwR package
    > lean <- energy[energy$stature=="lean",1]
    > obese <- energy[energy$stature=="obese",1]
    > obese
    [1]  9.21 11.51 12.79 11.85  9.97  8.79  9.69  9.68  9.19
    > lean
     [1]  7.53  7.48  8.08  8.09 10.15  8.40 10.88  6.13  7.90  7.05  7.48  7.58  8.11
    > plot(expend~stature,data=energy)

Beware: some data sets that may look like two-sample problems are really better treated as paired data.

    Example: the sleep drug data from above: 10 patients, drugs A and B. But since each patient received both A and B, the samples are not really independent (there is a common component of variation due to the patient); it is better to look at differences. It becomes a one-sample problem. (We will discuss more about pairing/blocking later.)

Notation:

    Population:
                   Variable  Mean  SD
    Population 1      X1      μ1    σ1
    Population 2      X2      μ2    σ2

    SRS from each population:
                Size  Sample Mean  Sample SD
    Sample 1     n1       X̄1          s1
    Sample 2     n2       X̄2          s2


Distribution of \(\bar X_1 - \bar X_2\)

    Sample mean difference: \(\bar X_1 - \bar X_2\). Everything depends on the variability and distribution of this difference! Recall in general that if E(V) = μ and E(W) = ν, then

    \[ E(V - W) = \mu - \nu \]

    and if V and W are independent, then

    \[ \mathrm{var}(V - W) = \mathrm{var}(V) + \mathrm{var}(W) \]

    So if \(\bar X_1 \sim (\mu_1, \sigma_1^2/n_1)\) and \(\bar X_2 \sim (\mu_2, \sigma_2^2/n_2)\), we will have

    \[ \mu_{\bar X_1 - \bar X_2} = E(\bar X_1 - \bar X_2) = \mu_1 - \mu_2 \]

    and for independent rvs \(\bar X_1\) and \(\bar X_2\):

    \[ \sigma^2_{\bar X_1 - \bar X_2} = \sigma^2_{\bar X_1} + \sigma^2_{\bar X_2} = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} \]

    We need estimates for \(\mu_1 - \mu_2\) and \(\sigma^2_{\bar X_1 - \bar X_2}\). Clearly \(\bar X_1 - \bar X_2\) is the estimate for \(\mu_1 - \mu_2\). Once we have an estimate for \(\sigma_{\bar X_1 - \bar X_2}\), we can use a similar method as in the one-sample case to get a confidence interval.
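A quick simulation (my own illustration, not from the handout) confirms both facts at once:

```r
# var(V - W) = var(V) + var(W) for independent V, W: the variances ADD,
# even though the means subtract.
set.seed(1)
V <- rnorm(100000, mean = 5, sd = 2)   # var(V) = 4
W <- rnorm(100000, mean = 3, sd = 3)   # var(W) = 9
mean(V - W)   # close to mu - nu = 2
var(V - W)    # close to 4 + 9 = 13, NOT 4 - 9
```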

1. Unequal variances (\(\sigma_1^2 \neq \sigma_2^2\)): use

    \[ SE^2_{\bar X_1 - \bar X_2} = \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \]

    2. Equal variances: if \(\sigma_1^2 = \sigma_2^2 = \sigma^2\) is unknown but assumed to be equal, we can use a pooled estimate of variance:

    \[ s^2_{pooled} = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2} \]

    i.e. an average with weights equal to the respective degrees of freedom. Then our estimate of \(\sigma^2_{\bar X_1 - \bar X_2}\) is

    \[ SE^2_{pooled} = s^2_{pooled}\left( \frac{1}{n_1} + \frac{1}{n_2} \right) \]

    This is a good method if the two SDs are close; but if the sample sizes are also moderate to large, there won't be much difference from the unequal-variances method (below).

    If the two SDs are different, it is better to use the unequal-variances method. We will use this pooled estimate again when we study Analysis of Variance.
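The pooled standard error can be sketched in R (the function name is mine):

```r
# Pooled SE of Xbar1 - Xbar2: weight each sample variance by its
# degrees of freedom, then scale by (1/n1 + 1/n2)
se_pooled <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  s2p <- ((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2)
  sqrt(s2p * (1 / n1 + 1 / n2))
}
```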

As above, we need the distribution of:

    \[ T = \frac{\bar X_1 - \bar X_2 - \mu_{\bar X_1 - \bar X_2}}{SE \text{ of } \bar X_1 - \bar X_2} \]

    If \(X_1 \sim N(\mu_1, \sigma_1^2)\) and \(X_2 \sim N(\mu_2, \sigma_2^2)\), then:

    Equal variances: if we have equal variances in the two populations, then the SE of \(\bar X_1 - \bar X_2\) is \(SE_{pooled}\), and \(T \sim t_\nu\) with ν = n1 + n2 − 2.

    Unequal variances: then the SE of \(\bar X_1 - \bar X_2\) is \(SE_{\bar X_1 - \bar X_2}\), and T is approximately \(t_\nu\) distributed. We use one of two values for ν:

    1. ν = min(n1 − 1, n2 − 1)

    2. \[ \nu = \frac{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{\frac{1}{n_1 - 1}\left( \frac{s_1^2}{n_1} \right)^2 + \frac{1}{n_2 - 1}\left( \frac{s_2^2}{n_2} \right)^2} \]

    The second is known as Welch's formula and gives fractional degrees of freedom. It is the more accurate formula (generally used by packages, and only on computers!).

    You can use either approximation, but say which!

    Note that one generally cannot go too far wrong, since it can be shown by algebra that

    \[ \min(n_1 - 1, n_2 - 1) \;\leq\; \nu \;\leq\; n_1 + n_2 - 2 \]
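Welch's ν can be sketched as follows (the function name is mine); it always lands between the two simple bounds above:

```r
# Welch's fractional degrees of freedom for the two-sample t
welch_df <- function(x1, x2) {
  a <- var(x1) / length(x1)
  b <- var(x2) / length(x2)
  (a + b)^2 / (a^2 / (length(x1) - 1) + b^2 / (length(x2) - 1))
}

# Energy-expenditure numbers from the example above (obese, lean)
obese <- c(9.21, 11.51, 12.79, 11.85, 9.97, 8.79, 9.69, 9.68, 9.19)
lean  <- c(7.53, 7.48, 8.08, 8.09, 10.15, 8.40, 10.88, 6.13, 7.90,
           7.05, 7.48, 7.58, 8.11)
welch_df(obese, lean)   # fractional, between min(8, 12) = 8 and 9 + 13 - 2 = 20
```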

Summary: two-sample confidence intervals for \(\mu_1 - \mu_2\) at the 100(1 − α)% level:

    E ± M, with \(E = \bar X_1 - \bar X_2\) and M = (\(z_{\alpha/2}\) or \(t_{\alpha/2,\nu}\)) × (the appropriate SE):

    σ known: \( M = z_{\alpha/2}\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2} \)

    large sample: \( M = z_{\alpha/2}\sqrt{s_1^2/n_1 + s_2^2/n_2} \)

    σ unknown, unequal: \( M = t_{\alpha/2,\nu}\sqrt{s_1^2/n_1 + s_2^2/n_2} \), with ν = min(n1 − 1, n2 − 1) or Welch's ν

    σ unknown, equal: \( M = t_{\alpha/2,\nu}\, s_{pooled}\sqrt{1/n_1 + 1/n_2} \), with ν = n1 + n2 − 2

    where \(z_{\alpha/2}\) and \(t_{\alpha/2,\nu}\) are the same notation as in the one-sample case.

In the energy data above, we can construct a 95% confidence interval for the difference in true means between obese and lean: n1 = 9, n2 = 13, and \(\bar X_1 - \bar X_2 = 2.23\). We'll use the conservative estimate ν = min(9 − 1, 13 − 1) = 8, and \(SE_{\bar X_1 - \bar X_2} = 0.58\). So our M = 2.306 × 0.58 ≈ 1.33, and a (conservative) 95% confidence interval is [0.90, 3.57]. Computer output using Welch's formula gives [1.00, 3.46].

    > mean(obese)-mean(lean)
    [1] 2.231624
    > qt(.975,df=8)
    [1] 2.306004
    > sqrt(var(obese)/length(obese)+var(lean)/length(lean))
    [1] 0.5788152
    > t.test(obese,lean, conf.level=.95)

            Welch Two Sample t-test

    data:  obese and lean
    t = 3.8555, df = 15.919, p-value = 0.001411
    alternative hypothesis: true difference in means is not equal to 0


    95 percent confidence interval:

    1.004081 3.459167

    sample estimates:

    mean of x mean of y

    10.297778 8.066154

    Hypothesis Tests

    We will generally have some hypotheses about certain parameters of the population (or populations)

    from which our data arose, and we will be interested in using our data to see whether these hypotheses

    are consistent with what we have observed.

To do this, we have already calculated confidence intervals for these parameters; now we will conduct hypothesis tests about the population parameters of interest. Both statistical procedures are built on the idea that if some theory about the population parameters is true, the observed data should follow admittedly random, but generally predictable, patterns. Thus, if the data do not fall within the likely outcomes under our supposed ideas about the population, we will tend to disbelieve these ideas, as the data do not strongly support them.

We will initially be interested in using our data to make inferences about μ, the population mean. To do this, we will use our estimate of location from the data, namely the sample mean (average), since it is mathematically nicer than the median. We will do this in the framework of several different data structures, starting with the most basic: the one-sample situation. How can we decide whether a given set of data, and in particular its sample mean, is close enough to a hypothesized value of μ for us to believe that the data are consistent with this value? To answer such a question, we need to know how a statistic like the sample average behaves, i.e. its distribution.

Now, we have already studied the distribution of the sample average and the sample proportion: when the sample size is large enough, they follow Normal distributions, centered at the expected value and with a spread on the order of the relevant SE.

INFERENCE FOR A SINGLE SAMPLE: Z-DISTRIBUTION

    Standard Error of the Sample Mean (σ known)

    Example: testing whether the birthweights of the Secher babies have an above-average mean.

    The standard deviation of the original population is σ = 700 (known).

    We would like to test whether μ = 2500, versus the alternative μ > 2500.

    We have a sample of n = 107 observations; mean(bwt) gives \(\bar X = 2739\), and we would like to use this data to test μ > 2500.

    With a sample of size 107, we know that \(\bar X\) will be normal with variance

    \[ \frac{\sigma^2}{n} = \frac{700^2}{107} = \frac{490000}{107} \]

    If it is true that μ = 2500 (this is called the null hypothesis), then by the central limit theorem \(\bar X \sim N(2500, 490000/107) = N(2500, 67.7^2)\), and under the null hypothesis

    \[ P(\bar X \geq 2739) = P\left( \frac{\bar X - \mu}{\sigma/\sqrt n} \geq \frac{2739 - 2500}{67.7} \right) = P(Z \geq 3.53) \]

    What is the probability that a standard normal Z score is as big as 3.53?

    \[ P(Z > 3.53) = 1 - P(Z \leq 3.53) = 1 - \Phi(3.53) = 0.000207 \]


using the R command pnorm(3.53), which returns [1] 0.9997922.

    This probability is indeed very small: too small for the null hypothesis to be plausible. We reject the null hypothesis.
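The whole calculation can be sketched in R using the summary numbers from the lecture (the raw bwt vector is not reproduced here):

```r
# z-test of H0: mu = 2500 vs HA: mu > 2500, sigma known
xbar <- 2739; mu0 <- 2500; sigma <- 700; n <- 107
z    <- (xbar - mu0) / (sigma / sqrt(n))   # observed z, about 3.53
pval <- 1 - pnorm(z)                       # one-sided P-value
c(z = z, pval = pval)
```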

Let \(X_1, \ldots, X_n\) be a sample of n i.i.d. random variables from a distribution having unknown mean μ and known standard deviation σ. Assume n is large, say n > 30. Suppose interest centers on testing the hypothesis

    \[ H_0 : \mu = \mu_0, \]

    where \(\mu_0\) is some fixed, pre-specified value. This will be our null hypothesis; notice that it is a simple one, i.e. it postulates a single hypothesized value for μ. The hypothesis against which the null hypothesis is to be compared, the alternative hypothesis, can take one of three basic forms:

    1. \(H_A : \mu \neq \mu_0\)
    2. \(H_A : \mu > \mu_0\)
    3. \(H_A : \mu < \mu_0\)

The idea, as we have said, is to assess whether the data support the null hypothesis (\(H_0\)) or whether they suggest the relevant alternative (\(H_A\)).

    To begin, we assert that the null hypothesis is true (i.e. that the true value of μ is actually \(\mu_0\)). Under this assumption, the Central Limit Theorem implies that the test statistic

    \[ Z = \frac{\bar X - \mu_0}{\sigma/\sqrt n} \]

    has a standard normal (N(0, 1)) distribution (notice that the test statistic is just the standardized version of \(\bar X\) under the assumption that the true mean is actually equal to \(\mu_0\)). The usual convention applies that if σ is unknown and n is large, the sample standard deviation s is used in place of σ in forming the test statistic. The null hypothesis is supported if the observed value of the test statistic is small (i.e. \(\bar X\) is close enough to \(\mu_0\), the hypothesized value, that I would believe the true mean is \(\mu_0\)). On the other hand, if I observe a large value of the test statistic, this suggests that \(\bar X\) is far from \(\mu_0\), which tends to discredit the null hypothesis in favor of the alternative hypothesis \(H_A : \mu \neq \mu_0\).

    The real issue is: how large is large? (Or how small is small?)

For example, if I observe a Z value of 1, say, can we conclude in favor of \(H_0\) over \(H_A\), or should we prefer \(H_A\) over \(H_0\)? What about a Z value of 2? The answer to these questions lies in considering what the test statistic actually measures. In words, the observed value of Z is just the number of standard errors the observed sample mean is from the hypothesized population mean; i.e.

    \(Z_{obs}\) = the number of standard errors \(\bar X\) is away from \(\mu_0\)

    How we act on this is determined by how rare a "rare event" should be to make us think something other than \(H_0\) is going on. This determines what we call the significance level α; most often α is taken to be 5%, sometimes 10%, and sometimes even 0.1% (1/1000).

    We compute the P-value, which is the probability of observing a value as extreme as this.

    The P-value computation takes either \(P(|Z| > |Z_{obs}|)\), \(P(Z > Z_{obs})\), or \(P(Z < Z_{obs})\), depending on what the alternative \(H_A\) was.
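The three cases can be collected in one helper (a sketch; the function name is mine):

```r
# P-value of an observed z statistic, for each form of the alternative
z_pvalue <- function(zobs, alternative = c("two.sided", "greater", "less")) {
  alternative <- match.arg(alternative)
  switch(alternative,
         two.sided = 2 * (1 - pnorm(abs(zobs))),  # HA: mu != mu0
         greater   = 1 - pnorm(zobs),             # HA: mu > mu0
         less      = pnorm(zobs))                 # HA: mu < mu0
}

z_pvalue(3.53, "greater")   # the Secher example, about 0.0002
```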
