Ch.6 Simple Linear Regression: Continued


Page 1: Ch.6 Simple Linear Regression: Continued

6.1 Ch.6 Simple Linear Regression: Continued

To complete the analysis of the simple linear regression model, in this chapter we will consider

• how to measure the variation in yt that is explained by the model
• how to report the results of a regression analysis
• how changes in the units of measurement affect the estimates
• some alternative functional forms that may be used to represent possible relationships between yt and xt.

Page 2: Ch.6 Simple Linear Regression: Continued

6.2The Coefficient of Determination (R2)

Two major reasons for analyzing the model

y = β1 + β2·x + e

are:
• To explain how the dependent variable (yt) changes as the independent variable (xt) changes
• To predict yo given xo.

We want the independent variable (xt) to explain as much of the variation in the dependent variable (yt) as possible.

We introduced the independent variable (xt) in the hope that its variation will explain the variation in yt.

A measure of goodness of fit will measure how much of the variation in the dependent variable (yt) has been explained by variation in the independent variable (xt).

Page 3: Ch.6 Simple Linear Regression: Continued

6.3

Separate yt into its explainable and unexplainable components:

yt = E(yt) + et

where E(yt) = β1 + β2·xt is explainable, and the error term et is unexplainable. Using our estimates for β1 and β2, we get estimates of E(yt), and our residuals give us estimates of the error terms:

ŷt = b1 + b2·xt        êt = yt − ŷt

The residual êt is defined as the difference between the actual and the predicted values of y.

Page 4: Ch.6 Simple Linear Regression: Continued

6.4

A single deviation of yt from its mean can be split into two parts:

yt − ȳ = (ŷt − ȳ) + êt

The total variation in yt is measured as the sum of the squared deviations from the mean, Σ(yt − ȳ)², also known as SST (Total Sum of Squares). Squaring and summing the split above:

Σ(yt − ȳ)² = Σ((ŷt − ȳ) + êt)²
           = Σ(ŷt − ȳ)² + 2·Σ(ŷt − ȳ)·êt + Σêt²
           = Σ(ŷt − ȳ)² + Σêt²

since the cross-product term Σ(ŷt − ȳ)·êt is zero.

Page 5: Ch.6 Simple Linear Regression: Continued

6.5

Graphically, a single deviation of yt from its mean can be split into the two parts:

[Figure: scatter of y against x with the fitted line ŷ = b1 + b2·x and the horizontal mean line ȳ. For one point (xt, yt), the total deviation yt − ȳ is split into the explained part ŷt − ȳ and the unexplained part êt = yt − ŷt.]

Page 6: Ch.6 Simple Linear Regression: Continued

6.6

Where:

SST: Total Sum of Squares with T-1 degrees of freedom. It measures the total variation in the actual yt values about its mean.

SSR: Regression Sum of Squares with 1 degree of freedom. It measures the variation in the predicted values of yt about their mean. It is the part of the total variation that is explained by the model.

SSE: Error Sum of Squares with T-2 degrees of freedom. It measures the variation in the actual yt values about the predicted yt values. It is the part of the total variation that is left unexplained.

Σ(yt − ȳ)² = Σ(ŷt − ȳ)² + Σêt²

Analysis of Variance (ANOVA):

SST = SSR + SSE
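The decomposition can be checked numerically. A minimal sketch in Python, using only the closed-form least-squares estimates discussed in this chapter (the data points are made up for illustration):

```python
# Verify SST = SSR + SSE for a least-squares fit (illustrative data).
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]
T = len(xs)
xbar = sum(xs) / T
ybar = sum(ys) / T

# Least-squares estimates: b2 = sum((xt-xbar)(yt-ybar)) / sum((xt-xbar)^2)
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b1 = ybar - b2 * xbar

yhat = [b1 + b2 * x for x in xs]                      # predicted values
SST = sum((y - ybar) ** 2 for y in ys)                # total variation
SSR = sum((yh - ybar) ** 2 for yh in yhat)            # explained
SSE = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))   # unexplained

assert abs(SST - (SSR + SSE)) < 1e-9   # the cross-product term vanishes
```

The cross-product term is zero only for the least-squares line; the decomposition would fail for an arbitrary line through the data.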

Page 7: Ch.6 Simple Linear Regression: Continued

6.7

Regression Statistics
Multiple R           0.563132517
R Square             0.317118231
Adjusted R Square    0.299147658
Standard Error       37.80536423
Observations         40

ANOVA
             df   SS            MS            F             Significance F
Regression    1   25221.22299   25221.22299   17.64652878   0.00015495
Residual     38   54311.33145   1429.245564
Total        39   79532.55444

             Coefficients   Standard Error   t Stat        P-value       Lower 95%      Upper 95%
Intercept    40.76755647    22.13865442      1.841464964   0.073369453   -4.049807902   85.58492083
x            0.128288601    0.030539254      4.200777164   0.00015495    0.066465111    0.190112091

In the ANOVA table, the SS column gives SSR (Regression), SSE (Residual), and SST (Total).

R² = SSR/SST = 1 − SSE/SST

Page 8: Ch.6 Simple Linear Regression: Continued

6.8Coefficient of Determination: R2

• R2 is the proportion of the total variation (SST) that is explained by the model. We can also think of it as one minus the proportion of the total variation that is unexplained (left in the residuals).

• 0 ≤ R² ≤ 1

• The closer R² is to 1.0, the better the fit of the model and the greater the predictive ability of the model over the sample.

• If R² = 1, the model has explained everything: all the data points lie on the regression line (very unlikely) and there are no residuals.

• If R² = 0, the model has explained nothing.

R² = SSR/SST = 1 − SSE/SST = 1 − Σêt² / Σ(yt − ȳ)²
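The equivalence of the two formulas follows directly from SST = SSR + SSE, and can be confirmed on made-up data using the same closed-form estimates as before:

```python
# R^2 computed two ways: SSR/SST and 1 - SSE/SST (illustrative data).
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0, 4.1, 5.9, 8.3, 9.7, 12.1]
T = len(xs)
xbar, ybar = sum(xs) / T, sum(ys) / T
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b1 = ybar - b2 * xbar
yhat = [b1 + b2 * x for x in xs]

SST = sum((y - ybar) ** 2 for y in ys)
SSR = sum((yh - ybar) ** 2 for yh in yhat)
SSE = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))

r2_a = SSR / SST        # proportion of total variation explained
r2_b = 1 - SSE / SST    # one minus the unexplained proportion
assert abs(r2_a - r2_b) < 1e-12
assert 0.0 <= r2_a <= 1.0
```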

Page 9: Ch.6 Simple Linear Regression: Continued

6.9

[Graph A: all data points lie exactly on a line; R² appears to be 1.0.]

[Graph B: points show no linear pattern; the best line through these points appears to have a slope of zero, so R² appears to be 0.]

Page 10: Ch.6 Simple Linear Regression: Continued

6.10

[Graph C: points lie close to a line; R² appears to be close to 1.0.]

[Graph D: points are more dispersed about the line; R² appears to be greater than 0 but less than the R² in Graph C.]

Page 11: Ch.6 Simple Linear Regression: Continued

6.11 • In the food expenditure example, R² = 0.317: "31.7% of the total variation in food expenditures has been explained by variation in household income."

• More Examples:

Page 12: Ch.6 Simple Linear Regression: Continued

6.12 Correlation Analysis

• The correlation coefficient between x and y is:

  ρ = Cov(x, y) / √(Var(x)·Var(y))

• The sample correlation between x and y is:

  r = Ĉov(x, y) / √(V̂ar(x)·V̂ar(y))
    = [1/(T−1)]·Σ(xt − x̄)(yt − ȳ) / √([1/(T−1)]·Σ(xt − x̄)² · [1/(T−1)]·Σ(yt − ȳ)²)
    = Σ(xt − x̄)(yt − ȳ) / √(Σ(xt − x̄)² · Σ(yt − ȳ)²)

• It is always true that −1 ≤ r ≤ 1.

• It measures the strength of a linear relationship between x and y.

Page 13: Ch.6 Simple Linear Regression: Continued

6.13 Correlation and R²

• It can be shown that the square of the sample correlation coefficient for x and y is equal to R2.

• R² can also be computed as the square of the sample correlation coefficient for the yt values and the predicted ŷt values.

• It can also be shown that

  b2 = r·(sy / sx)

  where sy and sx are the sample standard deviations of y and x.
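Both identities are easy to verify numerically. A sketch on made-up data, using the formulas for r, b2, and R² given above (note the (T−1) factors in sy and sx cancel against those in r):

```python
import math

# Check: r^2 equals R^2, and b2 = r * (sy / sx) (illustrative data).
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0, 4.1, 5.9, 8.3, 9.7, 12.1]
T = len(xs)
xbar, ybar = sum(xs) / T, sum(ys) / T

sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
syy = sum((y - ybar) ** 2 for y in ys)

r = sxy / math.sqrt(sxx * syy)    # sample correlation
b2 = sxy / sxx                    # least-squares slope
b1 = ybar - b2 * xbar
yhat = [b1 + b2 * x for x in xs]
SSE = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))
R2 = 1 - SSE / syy

assert -1.0 <= r <= 1.0
assert abs(r ** 2 - R2) < 1e-9    # r^2 = R^2

# b2 = r * (sy / sx): the 1/(T-1) factors cancel
sy, sx = math.sqrt(syy / (T - 1)), math.sqrt(sxx / (T - 1))
assert abs(b2 - r * sy / sx) < 1e-9
```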

Page 14: Ch.6 Simple Linear Regression: Continued

6.14 Reporting Regression Results

ŷt = 40.768 + 0.1283·xt        R² = 0.317
      (s.e.)  (22.139)  (0.0305)

• The numbers in parentheses are the standard errors of the coefficient estimates. These can be used to construct the t-statistics needed to ascertain the significance of the estimates.

• Sometimes authors will report the t-statistics instead of the standard errors. These are the t-statistics for H0: β = 0:

ŷt = 40.768 + 0.1283·xt        R² = 0.317
      (t-stat)  (1.841)  (4.201)

Page 15: Ch.6 Simple Linear Regression: Continued

6.15 Units of Measurement

b1 is measured in “y units”

b2 is measured in “y units over x units”

Example 3.15 from Chapter 3 Exercises:
y = number of sodas sold, x = temperature in degrees (°F)

b2 = Σ(xt − x̄)(yt − ȳ) / Σ(xt − x̄)²        b1 = ȳ − b2·x̄

ŷt = 240 + 6·x

If xo = 0°, the model predicts ŷo = 240, so b1 is measured in y units (# of sodas).

b2 = 6, where 6 is in (# of sodas / degree): Δŷ = 6·Δx, so if x increases by 10 degrees, y increases by 60 sodas.

Page 16: Ch.6 Simple Linear Regression: Continued

6.16

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.563132517
R Square             0.317118231
Adjusted R Square    0.299147658
Standard Error       37.80536423
Observations         40

ANOVA
             df   SS            MS            F             Significance F
Regression    1   25221.22299   25221.22299   17.64652878   0.00015495
Residual     38   54311.33145   1429.245564
Total        39   79532.55444

             Coefficients   Standard Error   t Stat        P-value       Lower 95%      Upper 95%
Intercept    40.76755647    22.13865442      1.841464964   0.073369453   -4.049807902   85.58492083
newx         12.82886011    3.053925406      4.200777164   0.00015495    6.646511122    19.01120909

Let newx = x/100:
• b1 is unchanged
• b2 (and its standard error) is multiplied by 100: 0.1283 → 12.83. The t-statistics, R², and fitted values are unchanged.
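The effect of rescaling the regressor can be reproduced on any data set; a sketch with made-up income/expenditure numbers:

```python
# Rescaling the regressor: with newx = x/100, the slope is multiplied
# by 100 while the intercept is unchanged (illustrative data).
def ols(xs, ys):
    """Closed-form simple-regression estimates (b1, b2)."""
    T = len(xs)
    xbar, ybar = sum(xs) / T, sum(ys) / T
    b2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
         sum((x - xbar) ** 2 for x in xs)
    return ybar - b2 * xbar, b2

xs = [100.0, 250.0, 400.0, 550.0, 700.0, 850.0]   # made-up income
ys = [60.0, 95.0, 110.0, 140.0, 160.0, 190.0]     # made-up food spending

b1, b2 = ols(xs, ys)
nb1, nb2 = ols([x / 100 for x in xs], ys)          # newx = x/100

assert abs(nb1 - b1) < 1e-9          # intercept unchanged
assert abs(nb2 - 100 * b2) < 1e-9    # slope scaled by 100
```

The fitted values b1 + b2·x and nb1 + nb2·(x/100) are identical, which is why R² and the t-statistics do not change.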

Page 17: Ch.6 Simple Linear Regression: Continued

6.17 Functional Forms

A linear model is one that is linear in the parameters with an additive error term:

y = β1 + β2·x + e

The coefficient β2 measures the effect of a one-unit change in x on y. As the model is written above, this effect is assumed to be constant: dy/dx = β2.

However, we want the ability to model relationships among economic variables where the effect of x on y is not constant.

Example: our food expenditure example assumes that the increase in food spending from an additional dollar of income is the same whether the family has a high or low income. We can capture such effects using logs, powers, and reciprocals while still maintaining a model that is linear in the parameters with an additive error term.

Page 18: Ch.6 Simple Linear Regression: Continued

6.18 The Natural Logarithm

• We will use the following derivative property often. Let y be the log of x:

  y = ln(x),  so  dy/dx = 1/x,  or  dy = dx/x

• This means that the absolute change in the log of x is equivalent to the relative change in the level of x.

Let x = 50: ln(x) = 3.912
Let x = 52: ln(x) = 3.951

dln(x) = 3.951 − 3.912 = 0.039

The absolute change in ln(x) is 0.039, which can be interpreted as a relative change in x (x increases from 50 to 52, which, in relative terms, is about 4%).
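The slide's numbers can be checked directly:

```python
import math

# The change in ln(x) approximates the relative change in x.
x0, x1 = 50.0, 52.0
dlnx = math.log(x1) - math.log(x0)   # absolute change in ln(x)
rel = (x1 - x0) / x0                 # relative change in x: 4%

assert abs(dlnx - 0.039) < 0.001     # matches the slide's 0.039
assert abs(dlnx - rel) < 0.002       # close for small changes
```

The approximation dln(x) ≈ Δx/x is good for small changes and deteriorates as the change gets large.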

Page 19: Ch.6 Simple Linear Regression: Continued

6.19

Using Logs

yt = β1 + β2·ln(xt) + et          What does β2 measure?

ln(yt) = β1 + β2·xt + et          What does β2 measure?

ln(yt) = β1 + β2·ln(xt) + et      What does β2 measure?
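Each of these functional forms can be estimated by ordinary least squares after transforming the data. A sketch for the log-linear case, ln(yt) = β1 + β2·xt + et, on data deliberately constructed so that ln(y) ≈ x (so the true slope is about 1):

```python
import math

# Fit ln(y) = b1 + b2*x by OLS on the transformed data (illustrative).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.7, 7.4, 20.1, 54.6, 148.4]   # roughly y = e^x

lny = [math.log(y) for y in ys]       # transform the dependent variable
T = len(xs)
xbar, lbar = sum(xs) / T, sum(lny) / T
b2 = sum((x - xbar) * (l - lbar) for x, l in zip(xs, lny)) / \
     sum((x - xbar) ** 2 for x in xs)
b1 = lbar - b2 * xbar

# In a log-linear model, b2 is (approximately) the proportional
# change in y per one-unit change in x.
assert abs(b2 - 1.0) < 0.01   # data were built with ln(y) close to x
```

The linear-log and double-log forms are estimated the same way, transforming xt (or both variables) instead.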

Page 20: Ch.6 Simple Linear Regression: Continued

6.20

SUMMARY OUTPUT

Regression Statistics                 (x̄ = 698, ȳ = 130)
Multiple R           0.571864156
R Square             0.327028613
Adjusted R Square    0.30931884
Standard Error       37.5300348
Observations         40

ANOVA
             df   SS            MS            F
Regression    1   26009.42098   26009.42098   18.46599654
Residual     38   53523.13346   1408.503512
Total        39   79532.55444

             Coefficients    Standard Error   t Stat         P-value
Intercept    -415.5556981    127.1672145      -3.267789578   0.002303753
lnx          83.91235619     19.52718051      4.297207994    0.000115804

yt = β1 + β2·ln(xt) + et

Example: y = food expenditure ($), x = weekly income

Page 21: Ch.6 Simple Linear Regression: Continued

6.21 Log-Linear and Double-Log Models

Log-Linear Model: ln(yt) = β1 + β2·xt + et

SUMMARY OUTPUT (dependent variable is lny)

Regression Statistics
Multiple R           0.586638275
R Square             0.344144465
Adjusted R Square    0.326885109
Standard Error       0.27659846
Observations         40

ANOVA
             df   SS            MS            F
Regression    1   1.525512305   1.525512305   19.9395888
Residual     38   2.907254917   0.076506708
Total        39   4.432767221

             Coefficients   Standard Error   t Stat        P-value
Intercept    4.118265149    0.161974838      25.42533897   1.84238E-25
x            0.00099773     0.000223437      4.46537667    6.94104E-05

Double-Log Model: ln(yt) = β1 + β2·ln(xt) + et

SUMMARY OUTPUT (dependent variable is lny)

Regression Statistics
Multiple R           0.633562676
R Square             0.401401664
Adjusted R Square    0.385649077
Standard Error       0.264249039
Observations         40

ANOVA
             df   SS            MS            F
Regression    1   1.77932014    1.77932014    25.48163324
Residual     38   2.653447081   0.069827555
Total        39   4.432767221

             Coefficients   Standard Error   t Stat        P-value
Intercept    0.299762103    0.895384575      0.334785869   0.73962763
lnx          0.694044986    0.137490911      5.047933561   1.14292E-05