Basic Data Analysis IV: Regression Diagnostics in SPSS
SPSS Training, Thomas V. Joshua, MS, July 2012, College of Nursing


  • Slide 1
  • SPSS Training. Thomas V. Joshua, MS. July 2012. College of Nursing.
  • Slide 2
  • Regression Diagnostics. SPSS offers a range of regression diagnostics we can use to judge the quality of a regression model. Ideally, we wish to check that our general assumptions are valid; we would like to perform such checks before model fitting, but this is usually impossible, so we are forced to conduct them after model fitting. Such checks are often referred to as diagnostics.
  • Slide 3
  • Regression Diagnostics. Diagnostics in standard multiple regression models: Distribution (normality, equal variance/homogeneity, independence); Linearity (of the explanatory variables); Transformations (of the response variable or the explanatory variables).
  • Slide 4
  • Regression Diagnostics (cont). Diagnostics in standard multiple regression models: Outliers (dubious measurements, mistakes in recording, atypical individuals); Influence (individual observations that exert undue influence on the coefficients); Collinearity (predictors that are highly correlated with one another).
  • Slide 5
  • Residuals. The user can request any of these to be saved in variables with the names res (unstandardized residual), zre (standardized residual), sre (studentized residual), and pre (predicted value).
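A minimal syntax sketch of saving these (using the example variables introduced later in this deck; SPSS appends a numeric suffix, so the saved variables appear as PRE_1, RES_1, ZRE_1, SRE_1):

    REGRESSION
      /DEPENDENT rincom91
      /METHOD=ENTER age agewed educ degree
      /SAVE PRED RESID ZRESID SRESID.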
  • Slide 6
  • Residual Plots. Residuals can be used in plots for the following diagnostics: 1. Normality - histogram of sre (studentized residual) and Q-Q plot of sre. 2. Homogeneity and linearity - scatterplot of res (unstandardized residual) vs. pre (predicted value). 3. Outliers - scatterplot of sre vs. pre.
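Assuming the residuals and predicted values have been saved as sketched above, these plots can be requested with graph syntax along these lines:

    * Normality: histogram and Q-Q plot of the studentized residuals.
    FREQUENCIES VARIABLES=SRE_1 /FORMAT=NOTABLE /HISTOGRAM NORMAL.
    PPLOT /VARIABLES=SRE_1 /TYPE=Q-Q /DIST=NORMAL.
    * Linearity and homogeneity: unstandardized residuals vs. predicted values.
    GRAPH /SCATTERPLOT(BIVAR)=PRE_1 WITH RES_1.
    * Outliers: studentized residuals vs. predicted values.
    GRAPH /SCATTERPLOT(BIVAR)=PRE_1 WITH SRE_1.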
  • Slide 7
  • Influence Statistics. Leverage measures a case's potential influence through its values on the independent variables; a case with large leverage and an unusual value for the dependent variable will have the largest effect on the parameter estimates. Cook's distance is a combined measure of influence of both dependent and independent variables. The values of these can be saved in the variables coo (Cook's distance) and lev (leverage).
  • Slide 8
  • Influence Plots. Outliers in the independent variables - scatterplot of lev vs. pre. Anomalous cases - scatterplot of coo vs. pre. Formal tests: Linearity - can add extra terms to the model. Outliers - can test the change in the sum of squares (S.S.) when an observation is excluded.
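Assuming COOK and LEVER were added to the regression's /SAVE subcommand (creating COO_1 and LEV_1 alongside PRE_1), these influence plots can be drawn as follows:

    * Outliers in the independent variables: leverage vs. predicted values.
    GRAPH /SCATTERPLOT(BIVAR)=PRE_1 WITH LEV_1.
    * Anomalous cases: Cook's distance vs. predicted values.
    GRAPH /SCATTERPLOT(BIVAR)=PRE_1 WITH COO_1.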
  • Slide 9
  • Diagnostic Facilities in SPSS. Analyze -> Regression -> Linear: Statistics - Durbin-Watson, casewise diagnostics; Plots - histogram and normal probability plot of the residuals; Save - allows residuals, influence statistics, and predicted values to be saved. Analyze -> Compare Means -> One-Way ANOVA: Options - homogeneity of variance test (plus descriptives and means plot).
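For the one-way ANOVA route, a minimal sketch using two variables from the example data set (respondent's income by highest degree):

    ONEWAY rincom91 BY degree
      /STATISTICS DESCRIPTIVES HOMOGENEITY
      /PLOT MEANS.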
  • Slide 10
  • Diagnostic Facilities in SPSS (cont). User-defined diagnostics: having saved the relevant diagnostic statistics, they can be displayed using the graphical facilities, e.g. scatterplots, histograms, P-P and Q-Q plots, and sequence charts (the scatterplot is often more flexible).
  • Slide 11
  • Example. Data set: SPSS v16 data set GSS93 subset.sav. Steps: Analyze -> Regression -> Linear. Dependent variable: respondent's income (rincom91). Independent variables: age of respondent (age), age when first married (agewed), respondent's highest degree (degree), and highest year of school completed (educ).
  • Slide 12
  • Slide 13
  • Slide 14
  • Syntax:
    REGRESSION
      /DESCRIPTIVES MEAN STDDEV CORR SIG N
      /MISSING LISTWISE
      /STATISTICS COEFF OUTS CI BCOV R ANOVA COLLIN TOL CHANGE ZPP
      /CRITERIA=PIN(.05) POUT(.10) CIN(95)
      /NOORIGIN
      /DEPENDENT rincom91
      /METHOD=ENTER agewed age educ degree
      /PARTIALPLOT ALL
      /SCATTERPLOT=(*SDRESID,*ZPRED) (*ZPRED,rincom91)
      /RESIDUALS DURBIN HIST(ZRESID) NORM(ZRESID)
      /CASEWISE PLOT(ZRESID) OUTLIERS(3)
      /SAVE PRED ZPRED MAHAL COOK LEVER MCIN RESID ZRESID SRESID DRESID SDRESID DFBETA DFFIT.
  • Slide 15
  • Output. The correlation matrix shows the Pearson r's, the significance of each r, and the sample size (n) for each r. All correlations are significant at the .05 level or better, except the correlations of age with age when first married and with respondent's highest degree. At r = .884, there may be multicollinearity between highest year of education completed and highest degree.
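This matrix is produced here by the /DESCRIPTIVES CORR SIG N subcommand; as a standalone sketch, the same correlations could also be requested with:

    CORRELATIONS
      /VARIABLES=rincom91 age agewed educ degree
      /PRINT=TWOTAIL SIG
      /MISSING=LISTWISE.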
  • Slide 16
  • R-squared is the proportion of variance in the dependent variable explained by the independents. In this case, the independents explain only 16.9% of the variance, which suggests the model was misspecified: other variables, not included by the researcher, explain the bulk of the variance. Moreover, the independent variables here may share common variance with these unmeasured variables, and in fact part or even all of the observed 16.9% might disappear if those unmeasured variables were entered first in the equation.
  • Slide 17
  • Adjusted R-squared is a standard, arbitrary downward adjustment to penalize for the possibility that, with many independents, some of the variance may be due to chance. The more independents, the more the adjustment penalty. Since there are only four independents here, the penalty is minor. The F value for the "Change Statistics" shows the significance level associated with adding the variable or set of variables for that step in stepwise or hierarchical regression, not pertinent here. The Durbin-Watson statistic tests for serial correlation of error terms for adjacent cases. This test is mainly used in relation to time series data, where adjacent cases are sequential years.
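For reference, the adjustment uses the sample size n and the number of predictors k: adjusted R-squared = 1 - (1 - R-squared)(n - 1)/(n - k - 1). As an illustration (not the actual n in this output), with R-squared = .169, k = 4, and n = 1,000, the adjusted value would be 1 - .831(999/995), or about .166.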
  • Slide 18
  • Output (cont). The ANOVA table below tests the overall significance of the model (that is, of the regression equation). If we had been doing stepwise regression, significance for each step would be computed. Here the significance of the F value is below .05, so the model is significant.
  • Slide 19
  • The t-test tests the significance of each b coefficient. It is possible to have a regression model which is significant overall by the F test, but where a particular coefficient is not significant. Here only age of respondent and highest year of school completed are significant. This would lead the researcher to re-run the model, dropping one variable at a time, starting with the least significant (here, age when first married).
  • Slide 20
  • The zero-order correlation is simply the raw correlation from the correlation matrix given at the top of this output. The partial correlation is the correlation of the given variable with respondent's income, controlling for other independent variables in the equation. Part correlation, in contrast, "removes" the effect of the control variable(s) on the independent variable alone. Part correlation is used when one hypothesizes that the control variable affects the independent variable but not the dependent variable, and when one wants to assess the unique effect of the independent variable on the dependent variable.
  • Slide 21
  • The tolerance for a variable is 1 - R-squared for the regression of that variable on all the other independents, ignoring the dependent. When tolerance is close to 0 there is high multicollinearity of that variable with other independents and the b and beta coefficients will be unstable. VIF is the variance inflation factor, which is simply the reciprocal of tolerance. Therefore, when VIF is high there is high multicollinearity and instability of the b and beta coefficients.
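As a quick illustration: if regressing one predictor on the other independents gave R-squared = .75, its tolerance would be 1 - .75 = .25 and its VIF would be 1/.25 = 4. A common rule of thumb treats tolerance below about .10 (VIF above about 10) as a sign of problematic multicollinearity, though cutoffs vary by source.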
  • Slide 22
  • The table below is another way of assessing if there is too much multicollinearity in the model. To simplify, crossproducts of the independent variables are factored. High eigenvalues indicate dimensions (factors) which account for a lot of the variance in the crossproduct matrix. Eigenvalues close to 0 indicate dimensions which explain little variance.
  • Slide 23
  • The condition index summarizes the findings, and a common rule of thumb is that a condition index over 15 indicates a possible multicollinearity problem and a condition index over 30 suggests a serious multicollinearity problem. The researcher will wish to drop a variable, combine variables, or pursue some other multicollinearity strategy.
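Each condition index is derived from the eigenvalues in the previous table: it is the square root of the ratio of the largest eigenvalue to the eigenvalue of that dimension. With illustrative values (not the ones in this output), a largest eigenvalue of 4.5 and a dimension eigenvalue of .005 give a condition index of sqrt(4.5/.005) = 30, right at the serious-problem threshold.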
  • Slide 24
  • Output (cont) The table below is a listing of outliers: cases where the prediction is 3 standard deviations or more from the mean value of the dependent. The researcher looks at these cases to consider if they merit a separate model, or if they reflect measurement errors. Either way, the researcher may decide to drop these cases from analysis.
  • Slide 25
  • The bottom three rows are measures of case influence, reported for the minimum, maximum, and mean case in the model. Mahalanobis distance is (n - 1) times leverage (the bottom row), which is a measure of case influence. Cases with leverage values less than .2 are not a problem, but cases with leverage values of .5 or higher may be unduly influential in the model and should be examined. Cook's distance measures how much the b coefficients change when a case is dropped. In this example, it does not appear there are problem cases, since the maximum leverage is only .098.
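To connect the two measures with a small worked example (the n here is hypothetical, not the actual number of valid cases): a case with centered leverage of .098 in a sample of 880 cases would have a Mahalanobis distance of .098 x 879, or about 86.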
  • Slide 26
  • Slide 27
  • Slide 28
  • Slide 29
  • Slide 30
  • Output (cont). Partial regression plots simply show the plot of one independent (such as agewed) against the dependent (respondent's income). A partial residual plot, also called an "added variable plot," is produced when there are two or more predictors; you get one partial residual plot for each predictor. Thus the partial residual plot is a visual form of the t-test of the b coefficient for the given variable.
  • Slide 31
  • Slide 32
  • Slide 33
  • Slide 34
  • Slide 35