IBM SPSS Statistics 23 Step by Step
A Simple Guide and Reference
14th Edition
Instructor’s Manual
Darren George, Ph.D., Canadian University College
Paul Mallery, Ph.D., La Sierra University
Brief Contents

Section I: Introduction .............................................................................................................................. 1
Content and Organization of the Manual .............................................................................................. 1
Why IBM SPSS Statistics Step by Step Was Written .......................................................................... 2
Types of Classes for Which This Book is Appropriate ........................................................................ 4
General Overview of the Book ............................................................................................................. 5
A More Detailed Overview .................................................................................................................. 5
Section II: Chapter-by-Chapter Commentary ........................................................................................... 9
Chapter 1: Introduction ........................................................................................................................ 9
Chapter 2: Processes .......................................................................................................................... 10
Chapter 3: Creating and Editing a Data File ...................................................................................... 13
Chapter 4: Managing Data ................................................................................................................. 15
Chapter 5: The Graphs Procedure ...................................................................................................... 18
Chapter 6: Frequencies ...................................................................................................................... 20
Chapter 7: Descriptive Statistics ........................................................................................................ 21
Chapter 8: The Crosstabs Procedure and Chi-Square Tests of Independence ................................... 22
Chapter 9: The Means Procedure: Subpopulation Differences ........................................................ 24
Chapter 10: Bivariate Correlations .................................................................................................... 25
Chapter 11: Independent Samples, Paired Samples, and One-Sample t Tests .................................. 27
Chapter 12: The One-Way ANOVA Procedure ................................................................................ 29
Chapter 13: 2-Way Analysis of Variance .......................................................................................... 31
Chapter 14: 3-Way Analysis of Variance .......................................................................................... 33
Chapter 15: Simple Linear Regression and Curvilinear Regression ................................................. 35
Chapter 16: Multiple Regression Analysis ........................................................................................ 37
Section III: Exercises .............................................................................................................................. 39
Chapter 3: Creating and Editing a Data File ....................................................................................... 44
Chapter 4: Managing Data .................................................................................................................. 48
Chapter 5: Graphs ............................................................................................................................... 64
Chapter 6: Frequencies ....................................................................................................................... 71
Chapter 7: Descriptive Statistics ......................................................................................................... 79
Chapter 8: Crosstabulation and χ² Analyses ...................................................................................... 84
Chapter 9: The Means Procedure ....................................................................................................... 91
Chapter 10: Bivariate Correlation ....................................................................................................... 95
Chapter 11: The T Test Procedure .................................................................................................... 100
ii IBM SPSS Statistics 23 Step by Step Instructor’s Manual
Chapter 12: The One-Way ANOVA Procedure ............................................................................... 118
Chapter 14: Three-Way ANOVA ..................................................................................................... 131
Chapter 15: Simple Linear Regression ............................................................................................. 153
Chapter 16: Multiple Regression Analysis ....................................................................................... 163
Chapter 18: Reliability Analysis ....................................................................................................... 168
Chapter 23: MANOVA and MANCOVA ........................................................................................ 192
Chapter 24: Repeated-Measures MANOVA .................................................................................... 201
Contents

Section I: Introduction .............................................................................................................................. 1
Content and Organization of the Manual .............................................................................................. 1
Why IBM SPSS Statistics Step by Step Was Written .......................................................................... 2
Types of Classes for Which This Book is Appropriate ........................................................................ 4
General Overview of the Book ............................................................................................................. 5
A More Detailed Overview .................................................................................................................. 5
Section II: Chapter-by-Chapter Commentary ........................................................................................... 9
Chapter 1: Introduction ........................................................................................................................ 9
The First Four Sections, pages 1-5 ................................................................................................... 9
Introduction to the Example, pages 5-6 ............................................................................................ 9
Typographical and Formatting Conventions, pages 6-8 ................................................................... 9
Chapter 2: Processes .......................................................................................................................... 10
The Mouse, page 10 (PC) or 28 (Mac) ........................................................................................... 10
The Data Window, page 13 (PC) or 31 (Mac) ................................................................................ 10
The Statistical-Procedure Dialog Windows, page 18 (PC) or 36 (Mac) ........................................ 10
The SPSS Output Navigator, page 18 (PC) or 36 (Mac) ................................................................ 10
Printing or Exporting Output, page 23 (PC) or 41 (Mac) ............................................................... 11
The “Options” Option, page 25 (PC) or 43 (Mac) .......................................................................... 11
Chapter 3: Creating and Editing a Data File ...................................................................................... 13
Research Design and Structure of the File, page 46 ....................................................................... 13
Step by Step, page 47 ...................................................................................................................... 13
The Variable View Window (Screen 3.2), page 48 ........................................................................ 13
Step-by-Step Sequences, pages 49-54 ............................................................................................ 14
Entering Data, page 54 ................................................................................................................... 14
Editing Data, page 55 ..................................................................................................................... 14
Chapter 4: Managing Data ................................................................................................................. 15
Case Summaries, page 63 ............................................................................................................... 15
Replacing Missing Values, page 66 ................................................................................................ 15
The Compute Procedure, page 68 ................................................................................................... 15
The Recode into Different Variables Procedure, page 72 .............................................................. 16
Recoding into the Same Variable, page 74 ..................................................................................... 16
Select cases, page 76 ....................................................................................................................... 16
Sort cases, page 78 .......................................................................................................................... 16
Merging files, page 79 .................................................................................................................... 17
Chapter 5: The Graphs Procedure ...................................................................................................... 18
Chapter 6: Frequencies ...................................................................................................................... 20
Chapter 7: Descriptive Statistics ........................................................................................................ 21
Chapter 8: The Crosstabs Procedure and Chi-Square Tests of Independence ................................... 22
Chapter 9: The Means Procedure: Subpopulation Differences ........................................................ 24
Chapter 10: Bivariate Correlations .................................................................................................... 25
Chapter 11: Independent Samples, Paired Samples, and One-Sample t Tests .................................. 27
Chapter 12: The One-Way ANOVA Procedure ................................................................................ 29
Chapter 13: 2-Way Analysis of Variance .......................................................................................... 31
How to Write up 2- and 3-way ANOVA Results ........................................................................... 31
Chapter 14: 3-Way Analysis of Variance .......................................................................................... 33
Chapter 15: Simple Linear Regression and Curvilinear Regression ................................................. 35
Chapter 16: Multiple Regression Analysis ........................................................................................ 37
Section III: Exercises .............................................................................................................................. 39
The GRADES.SAV Data File ........................................................................................................ 40
The DIVORCE.SAV Data File ....................................................................................................... 41
The HELPING3.SAV Data File ..................................................................................................... 42
Chapter 3: Creating and Editing a Data File ....................................................................................... 44
3-1 ................................................................................................................................................... 45
3-2 ................................................................................................................................................... 45
3-3 ................................................................................................................................................... 45
3-4 ................................................................................................................................................... 46
3-5 ................................................................................................................................................... 46
3-6 ................................................................................................................................................... 46
3-7 ................................................................................................................................................... 46
3-8 ................................................................................................................................................... 47
Chapter 4: Managing Data .................................................................................................................. 48
4-1 ................................................................................................................................................... 50
4-2 ................................................................................................................................................... 51
4-3 ................................................................................................................................................... 52
4-4 ................................................................................................................................................... 53
4-5 ................................................................................................................................................... 53
4-6 ................................................................................................................................................... 54
4-7 ................................................................................................................................................... 55
4-8 ................................................................................................................................................... 56
4-9 ................................................................................................................................................... 57
4-10 ................................................................................................................................................. 57
4-11 ................................................................................................................................................. 58
4-12 ................................................................................................................................................. 59
4-13 ................................................................................................................................................. 60
4-14 ................................................................................................................................................. 61
4-15 ................................................................................................................................................. 63
Chapter 5: Graphs ............................................................................................................................... 64
5-1 ................................................................................................................................................... 65
5-2 ................................................................................................................................................... 66
5-3 ................................................................................................................................................... 67
5-4 ................................................................................................................................................... 68
5-5 ................................................................................................................................................... 69
5-6 ................................................................................................................................................... 70
5-7 through 5-13 ............................................................................................................................. 70
Chapter 6: Frequencies ....................................................................................................................... 71
6-1 ................................................................................................................................................... 72
6-2 ................................................................................................................................................... 74
6-3 ................................................................................................................................................... 75
6-4 ................................................................................................................................................... 75
6-5 ................................................................................................................................................... 76
6-6 ................................................................................................................................................... 78
Chapter 7: Descriptive Statistics ......................................................................................................... 79
7-1 ................................................................................................................................................... 80
7-2 ................................................................................................................................................... 81
7-3 ................................................................................................................................................... 83
Chapter 8: Crosstabulation and χ² Analyses ...................................................................................... 84
8-1 ................................................................................................................................................... 85
8-2 ................................................................................................................................................... 86
8-3 ................................................................................................................................................... 87
8-4 ................................................................................................................................................... 88
8-5 ................................................................................................................................................... 89
Additional Exercises ....................................................................................................................... 90
Chapter 9: The Means Procedure ....................................................................................................... 91
9-1 ................................................................................................................................................... 92
9-2 ................................................................................................................................................... 93
9-3 ................................................................................................................................................... 94
Chapter 10: Bivariate Correlation ....................................................................................................... 95
10-1 ................................................................................................................................................. 96
10-2 ................................................................................................................................................. 98
Chapter 11: The T Test Procedure .................................................................................................... 100
11-1 ............................................................................................................................................... 102
11-2 ............................................................................................................................................... 104
11-3 ............................................................................................................................................... 105
11-4 ............................................................................................................................................... 106
11-5 ............................................................................................................................................... 109
11-6 ............................................................................................................................................... 111
11-7 ............................................................................................................................................... 113
11-8 ............................................................................................................................................... 114
11-9 ............................................................................................................................................... 115
11-10 ............................................................................................................................................. 115
Additional Exercises ..................................................................................................................... 116
Chapter 12: The One-Way ANOVA Procedure ............................................................................... 118
12-1 ............................................................................................................................................... 119
12-2 ............................................................................................................................................... 120
12-3 ............................................................................................................................................... 122
12-4 ............................................................................................................................................... 124
12-5 ............................................................................................................................................... 126
12-6 ............................................................................................................................................... 128
Additional Exercises ..................................................................................................................... 130
Chapter 14: Three-Way ANOVA ..................................................................................................... 131
14-1 ............................................................................................................................................... 133
14-2 ............................................................................................................................................... 136
14-3 ............................................................................................................................................... 138
14-4 ............................................................................................................................................... 141
14-5 ............................................................................................................................................... 143
14-6 ............................................................................................................................................... 145
14-7 ............................................................................................................................................... 147
14-8 ............................................................................................................................................... 150
Additional Exercises ..................................................................................................................... 152
Chapter 15: Simple Linear Regression ............................................................................................. 153
15-1 ............................................................................................................................................... 155
15-2 ............................................................................................................................................... 156
15-3 ............................................................................................................................................... 158
15-4 ............................................................................................................................................... 158
15-5 ............................................................................................................................................... 160
15-6 ............................................................................................................................................... 160
15-7 ............................................................................................................................................... 161
15-8 ............................................................................................................................................... 162
15-9 ............................................................................................................................................... 162
Chapter 16: Multiple Regression Analysis ....................................................................................... 163
16-1, 16-2, and 16-3 ...................................................................................................................... 165
16-4 ............................................................................................................................................... 166
Additional Exercises ..................................................................................................................... 167
Chapter 18: Reliability Analysis ....................................................................................................... 168
18-1 ............................................................................................................................................... 169
18-2 ............................................................................................................................................... 170
18-3 ............................................................................................................................................... 171
18-4 ............................................................................................................................................... 172
18-5 ............................................................................................................................................... 174
18-6 ............................................................................................................................................... 176
18-7 ............................................................................................................................................... 178
18-8 ............................................................................................................................................... 180
18-9 ............................................................................................................................................... 182
18-10 ............................................................................................................................................. 184
18-11 ............................................................................................................................................. 186
18-12 ............................................................................................................................................. 188
18-13 ............................................................................................................................................. 190
Chapter 23: MANOVA and MANCOVA ........................................................................................ 192
23-1 ............................................................................................................................................... 193
23-2 ............................................................................................................................................... 194
23-3 ............................................................................................................................................... 195
23-4 ............................................................................................................................................... 196
23-5 ............................................................................................................................................... 198
Chapter 24: Repeated-Measures MANOVA .................................................................................... 201
24-1 ............................................................................................................................................... 202
24-2 ............................................................................................................................................... 203
24-3 ............................................................................................................................................... 205
24-4 ............................................................................................................................................... 206
24-5 ............................................................................................................................................... 208
viii IBM SPSS Statistics 23 Step by Step Instructor’s Manual
Section I: Introduction This instructor's manual has two authors, Darren George and Paul Mallery, and our teaching styles differ. Most of this book is written in the first-person plural: when the term "us" is used, the material that follows is based on the consensual thoughts or findings of both authors. At times, when a perspective unique to one author or the other is presented, the third person is used (e.g., "Darren thinks that…"). In the pages that follow, we share some of our experiences using the SPSS book and provide helpful hints on how to use the book, chapter by chapter, in your own teaching.
Content and Organization of the Manual This manual contains three primary sections.
1. The introduction (this section): The introduction considers general issues associated with using the book: Structure, rationale, creation, revisions, applications, and so forth. Reading this section will provide you with a (hopefully) pleasurable foray into many issues that motivated the creation of the book, determined the content, and dictated the structure we have followed. Many professors don’t even open the Instructor’s Manual; you, clearly, have done at least that much. We think that the 15 minutes required to read this section will be worth your time, especially if this is the first time you are using the book.
2. Chapter-by-chapter analysis (p. 9): This section considers each chapter individually in some detail. We have extensive experience presenting statistical material at the introductory statistics level, in research methods classes, and in more advanced multivariate classes. These pages deal largely with teaching tips for clear presentation of the contents of each chapter. Only material up to Chapter 16 is included in this chapter-by-chapter analysis—the typical coverage of an introductory statistics class. Chapters 17 through 28 are not only advanced and more complex, but teaching these techniques is so closely associated with the context of the class or the teaching style of the instructor that it seems unwise to comment extensively on styles of presentation in this manual. Further, presentation of these more challenging procedures is typically dictated by the need for an immediate application in a particular setting.
3. Answers to all exercises (p. 39): There are 109 exercises in the text; most appear in the first 16 chapters and cover the standard statistical procedures. The majority of these exercises are class tested. Darren has taught a multivariate analysis class for the past 9 years, and many of the exercises in the present volume have emerged out of class assignments created for his students. Paul has taught introductory and intermediate statistics as well as a course that introduced advanced statistics (enough to read and understand the statistics, rather than to perform them) and contributed exercises from his own teaching experience. Exercises are included for the following chapters:
Chapter 3: Creating and Editing a Data File
Chapter 4: Managing Data: Listing cases, replacing missing values, computing new variables, recoding variables, exploring data, selecting cases, sorting cases, merging files
Chapter 5: Graphs: Creating and editing graphs and charts
Chapter 6: Frequencies: Frequencies, bar charts, histograms, percentiles
Chapter 7: Descriptive Statistics: Measures of central tendency, variability, deviation from normality, size, and stability
Chapter 8: Crosstabulation and Chi-Square (χ²) Analyses
Chapter 9: The Means Procedure
Chapter 10: Bivariate Correlation, partial correlations, and the correlation matrix
Chapter 11: The t-Test Procedure: Independent-samples, paired-samples, and one-sample tests
Chapter 12: The One-Way ANOVA Procedure: One-way Analysis of Variance
Chapter 13: General Linear Models: Two-Way ANOVA
Chapter 14: General Linear Models: Three-Way ANOVA and the influence of covariates
Chapter 15: Simple Linear Regression
Chapter 16: Multiple Regression Analysis
Chapter 18: Reliability Analysis: Coefficient alpha (α) and split-half reliability
Chapter 23: General Linear Models: MANOVA and MANCOVA
Chapter 24: General Linear Models: Repeated-Measures MANOVA
We have not included exercises for most of the advanced procedures. These topics are typically presented in a context-specific situation; for example, if you are building a model using log-linear modeling, you need a fairly extensive theoretical understanding of the context of the data in order to build the model properly; this would take a great deal of space in an exercise and would rarely fit the needs of the instructor. Exercises are included for two of the general linear model procedures (multivariate and repeated measures) without introducing the general linear model concept itself, as being able to do within-subjects analyses in factorial designs (Chapter 24) can be very helpful for conducting student projects with small sample sizes. Answers for selected exercises are provided for students on the website. These are viewable in Adobe Acrobat format, but students cannot easily copy the answers (to minimize cheating on assignments). However, we have included answers to ALL exercises in the final section of this manual for your benefit. Many times answers to data-analysis exercises are long, complex, and very difficult to print out in compact form. One of the greatest challenges in teaching is creating answer keys. Now life is made easy: if you assign any problems from the exercises in this book, the answer is already available in a neat, reasonably complete, and coherent format in the final two thirds of this manual.
Why IBM SPSS Statistics Step by Step Was Written The present book was initially written to make the use of statistical software (IBM SPSS Statistics in this case—though it was just "SPSS" when we started) clear and straightforward for students. We have both spent enough of our lives in teaching to have seen many students traumatized by the professor who cheerily says "analyze these data in the lab; read the SPSS manuals (or help files, or look it up online) for specifics of how to accomplish this." The manuals and help files are thousands of pages (or screenfuls) long and beyond comprehension to almost anyone other than statistical and computer experts. Online resources are often useful, but students who are first learning the material are often challenged to find resources that are accurate, relevant, and neither highly technical (which they'll skip) nor too easy (which they'll like, but which may lead to misguided conclusions). For us, data analysis is easy; we have spent thousands of hours with computers, with SPSS, and analyzing data. It is not easy for someone who doesn't know the language (mathematics, specifically statistical terminology) and has no experience. The intense trauma experienced by these students was the major reason that we have invested so much time and effort to create a manual that is, above all else, clear. This clarity is accomplished in several ways. First, we both come from the perspective that learning to use statistical software is best accomplished by doing rather than by studying about it. That is why almost everything in the book is example-based. In the 1960s the "new math" was introduced, based on the assumption that if students understood why they were doing what they were doing, it would enhance their ability to do arithmetic. Thus first and second graders were subjected to subsets, supersets, elements of a set, numbers in base 8, the transitive property, the reflexive property (remember that one?
something is equal to itself), and a lot of material that rightly belonged in classes taken by first-year math majors in college. The kids don't give a hoot about why 2+3 equals 3+2. It works, that's why. They can see it with sticks, and marbles, and dolls. The new math died a well-deserved but relatively painless death soon after its inception; however, much teaching in many classes today (particularly high school and college) is still based on the assumption that if you read enough about how and why SPSS procedures work, you'll be able to do them. Although students
certainly need to know about how and why statistics work, they rarely need to understand how and why SPSS works. We would challenge anyone who teaches statistics or research methods by having students read and be tested on some book for 15 weeks to match our approach: our students spend the majority of their time conducting research projects, from the data (or inception) through the final APA-style paper. Darren has students read the book for 5 weeks and then spend 10 weeks taking a fairly major project from the initial idea to a finished APA-format paper. Paul has students collect data and analyze it with a new technique every couple of weeks throughout a year; by partway through Spring quarter, they are doing little besides working on faculty-guided research projects. The final exam comes halfway through the term, and the paper is due at the end of the term. There is far less time spent reading about research and much more time actually doing research. The 15-weeks-of-reading students wouldn't have a clue how to conduct a real research project. The knowledge comes from doing. This, of course, parallels the focus and intention of our book. It is a book designed for doing. Some reading, of course, is necessary. However, after reading we urge that students practice, practice, practice on the computer, with real data if possible. With small classes, this is possible through a faculty-mentor model; with large classes, a more directive approach may be necessary, but laboratory work with SPSS should still be a significant part of the learning process. With this perspective in mind the present book has been created. Every line of the book has as its focus making clear both the conceptual idea behind the statistical procedure and the procedure itself. As mentioned before, the 12 data sets are available for download at http://www.pearsonhighered.com/george (along with datasets for some exercises, and versions of the main datasets that are compatible with the student version of IBM SPSS Statistics).
This allows students to practice the procedures they are reading about. How successful we have been in creating clarity is now out of our hands and remains for you to judge. We welcome your input and suggestions. Our addresses (both geographical and e-mail) and phone numbers are included with the book, and your comments will gain a response. Because Pearson plans a new revision every couple of years, your comments can be incorporated into new editions. Why frequent revisions? SPSS is like any other software business in that they wish to maintain a competitive advantage. Because of this they upgrade their software regularly, primarily to keep their programmers employed and secondarily to provide irritation to their users who have just become acquainted with the previous version. (Why the name change from SPSS to SPSS Statistics to PASW Statistics to IBM SPSS Statistics? Perhaps that is just to keep the users from getting complacent?) Often their changes are beneficial. For instance, the shift from the command-syntax format of the DOS version to the click-and-paste of the Windows version was nothing short of brilliant. A bulky, terrifying process suddenly became easy. Error messages, by far the most frequent output you would see with the DOS version, essentially disappeared when you could click to select a variable rather than trying to remember how to spell a variable name. Their revisions, however, also have a downside. For example, beginning with SPSS Statistics 16.0, setting up the printer became much less intuitive and harder to find than in previous versions. (To set the data editor output to landscape, you have to go to "Page Setup" in the Output editor.) And the semiannual changes to the graphing procedures, while generally improving the ability to make customized graphs, occasionally drop or hide features that are commonly used. Fortunately the graphics routines for SPSS 18 mostly fixed the problems of the unusually buggy SPSS 17 routines.
New revisions will happen as we keep up with current editions of the software. Although there were no major changes in IBM SPSS Statistics 23, there were many small changes. Additionally, the book is changed to keep up with developments in the field. When the book was first written, effect sizes were a relatively new idea: people agreed that they were important, but they were not a standard thing to include in papers and were rarely covered in introductory statistics classes. Now they are commonly used, all but mandated in APA style, and included in most introductory statistics texts. Effect size information has been included in many chapters in previous editions; in this edition, we have incorporated it systematically
throughout, using parallel language, and we have even included how to find or calculate effect sizes in cases where SPSS doesn't provide them (for example, SPSS doesn't calculate Cohen's d for t tests, so we cover that easy but manual calculation).
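To illustrate just how easy that manual calculation is, here is a short Python sketch (our own illustration, not taken from the textbook) of one common formulation of Cohen's d for an independent-samples t test, using the pooled standard deviation; the function name and the sample scores are hypothetical:

```python
import math
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    # statistics.variance() uses the sample (n - 1 denominator) variance
    s1, s2 = variance(group1), variance(group2)
    pooled_sd = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical scores for two groups
treatment = [1, 2, 3, 4, 5]
control = [2, 3, 4, 5, 6]
print(round(cohens_d(treatment, control), 3))  # → -0.632
```

The same numbers can be read straight off SPSS's group-statistics output for a t test (means, standard deviations, and group sizes), so students can do this with a calculator as well.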
Types of Classes for Which This Book is Appropriate The Instructor's Manual that you are now reading is designed to serve a much narrower audience than the textbook. First, it addresses the needs of professors (rather than students) teaching undergraduate statistics or research methods classes. Second, it covers only about half the content of the textbook. The SPSS Step by Step book (while including all the introductory statistical procedures) expands its scope to more advanced multivariate analysis classes, graduate statistics courses, major research projects (theses, dissertations), and serves as a reference guide for anyone who conducts data analysis occasionally or frequently. The main considerations in deciding which procedures to include are (a) which procedures are most common? and (b) can a brief introduction allow someone to perform basic analyses without a deep understanding of the underlying mathematical models? Generalized linear models are very powerful…but you can't do one unless you have a thorough grounding in statistical theory. So we have ignored that option; if you need to use a generalized linear model, understanding which SPSS button to push is the least of your worries. If you have a favorite procedure that you believe should be covered in a future edition of the book, please drop us an email. We include fairly extensive analysis in the Chapter-by-Chapter Commentary section of this book. Our comments about how to teach using this book must, therefore, be preceded by some description of classes with which it has been used. Darren employed selected chapters from this book to assist in teaching his research methods courses. IBM SPSS Statistics Step by Step (or prior editions of it) has, however, been his primary textbook in four different classes. One was a class on SPSS (Cal State Dominguez Hills) specifically devoted to the understanding of how to use SPSS software to analyze data.
He once taught an Introduction to Statistics course using the text (much to the dismay of the math department, who thought it wasn't mathematical enough), and has regularly used the text to teach a course called Multivariate Analysis. Introduction to Statistics is a prerequisite for Multivariate Analysis—a class that is devoted to actual analysis of data, or the practical application of statistical concepts in the real world. Multivariate Analysis (and the SPSS class at Cal State) are the classes he taught that provided the best fit for the book. Curiously, the Introduction to Statistics class was Darren's greatest success teaching with this book. The class progressed all the way through discriminant analysis, with the lowest grade (in a class of 28) being a C-. Darren feels this was testimony to the simple, direct treatment of material, and perhaps, to a lesser extent, his fluency in the topic (at the practical level) and ability to present it clearly. Astonishingly, 10 students, after the class was over, actually purchased the SPSS Statistics software to install on their own computers! This suggests that they not only understood, but actually wanted to continue using statistical software. Paul has used the textbook for a quantitative analysis class (similar to Darren's Multivariate Analysis class) along with a traditional statistics textbook. He found that this SPSS textbook was an excellent complement to the (rather theoretical) statistics textbook. At the beginning of the class, students calculated simple analyses (means, t tests) by hand, but soon moved to having SPSS do the work. For that course, the focus of the class itself was on the statistics, and the focus of the lab was on analyzing and interpreting data. The SPSS textbook allowed students to make the connection between the theory and the SPSS tools and output without having to worry as much about how to get the desired output.
Paul currently teaches a three-quarter combined research methods and statistics course; in this course, the SPSS text serves as a frequent reference for the simpler techniques, and more advanced techniques are introduced primarily through the introductions of the SPSS textbook chapters. Though students aren't expected to perform logistic regression, factor analysis, or multidimensional scaling, having those chapters included in a familiar format empowers the top students to comfortably experiment with those advanced techniques as they interact with their data, and allows all students to develop a conceptual understanding of how the advanced procedures work.
During our chapter-by-chapter presentation in the next section, most techniques of presenting the material in class refer to Darren's Multivariate Analysis class or Paul's Methods and Statistics sequence. Our suggestions and comments should be considered and implemented in your own classes with that in mind.
General Overview of the Book An understanding of the structure of IBM SPSS Statistics Step by Step should be useful. Chapters 1 through 5 are either introductory in nature or relate to SPSS processes that may apply to many of the statistical procedures that follow (Chapters 6-28). In the first five chapters special efforts have been made to present material in a way that is clear for the beginner. Of course, we have attempted to be clear in every chapter, but clarity in log-linear models, logistic regression, factor analysis, and other advanced procedures is quite different from clarity in the first five introductory chapters or the 11 chapters of basic statistical procedures that follow. In the first 16 chapters we carefully cover the material step by step and attempt to illustrate everything possible. The final 11 chapters deal with more complex procedures, and a substantial grounding (viz., classes in advanced statistics) in those techniques is, in most instances, necessary. We still attempt, however, to make the conceptual bases of, say, factor analysis clear in the 6-page introduction, the many options straightforward in the Step by Step section, and the SPSS printouts coherent in the Output section. We are under no illusion, however, that a beginner could adequately understand factor analysis based only on our presentation. It could, however, give someone with a basic understanding of statistics a general understanding of how factor analysis works conceptually (e.g., enough to read an article that uses factor analysis) or give someone with a good understanding of factor analysis who hasn't used SPSS the tools they need to complete an analysis. By contrast, we have made every effort that the introductions to the first 16 chapters would give a clear conceptual feel for the different procedures even to someone who had never taken a statistics class.
As we move into two- and three-way ANOVAs and multiple regression analysis, we undoubtedly become less successful in our efforts, but the effort is always there. We feel that a student with good math aptitude (necessary!) even without a statistics course could make very productive use of the first 16 chapters of the book. We don't recommend it; we feel the book is most useful to persons who have already had statistics (or, better, are taking it concurrently while learning SPSS), but this has not blunted our efforts to make it clear to the novice. For the more advanced procedures, there was discussion with the publisher about writing a more limited book (for instance, one including only the first 16 chapters). We argued against this. Darren's Introduction to Statistics class (mentioned earlier) progressed all the way through discriminant analysis. It was nice to have the same-format description of the more advanced procedures to assist in the process. Paul's Methods and Statistics class regularly uses within-subjects MANOVAs to allow within-subject experiments (more power with small Ns), and an occasional student will use other advanced procedures as well (e.g., a log-linear model). We know that most introductory classes will not go beyond Chapter 16, but, if someone is developing a scale and wants to conduct a reliability analysis, the chapter is there, painstakingly written to be as clear as possible. The same applies to MANOVAs, discriminant analysis, or log-linear models. An important benefit of this choice has been that the text is useful not only for undergraduate classes, but as a secondary textbook for graduate statistics classes.
A More Detailed Overview This first introductory section of this manual introduces the reader to the structure and content of the book. In the next major section of this manual (Chapter-by-Chapter Commentary, starting on page 9) we extend the brief presentations offered here to address the concerns of those who teach using the book. In those pages we go into much greater chapter-by-chapter detail. In the three pages that follow, we consider additional preliminary information concerning the structure and content of individual chapters. Chapter 1 is introductory. It is warm and friendly; it talks about SPSS Statistics, and it describes the structure of the chapters that follow. Most importantly, it introduces the data file that is used in 16 of the 28 chapters and
identifies typographical conventions that are used throughout the book. Darren requires his students to memorize the typographical conventions. It is not much of a burden because they are not numerous and are very intuitive, but it acknowledges the importance of knowing the language before you proceed. Chapter 2 deals with SPSS processes. While this chapter caters to someone who knows very little about computers and their use, there is much SPSS-specific information that would be useful to anyone. Included are topics concerning:
starting and navigating the SPSS Statistics program,
common buttons, both general and SPSS Statistics specific,
the data window and other commonly used windows, with icons identified and defined,
the output window with its bewildering pivot tables, also with icons identified and defined,
the "Options" option, and
instructions for printing output.
This chapter is often cited in the remainder of the book in case someone tries to use the book to conduct data analysis without reading Chapter 2. For instance, in the print sequence included in every analysis chapter, we refer the reader to Chapter 2 if they desire more information about print options or editing output. There are two different versions of Chapter 2: the first for those using Windows, and the second for those using a Mac. Although the two versions of the SPSS Statistics program are practically identical for both Windows and Mac, getting into the program and navigating between the various windows (Data, Output, etc.) is different for PCs and Macs. Having two versions of the chapter allows for a smooth narrative without jumping back and forth between operating systems, and also allows people who are using a different operating system than they are used to (e.g., a Mac user having to use SPSS in a computer lab with only PCs) to find their way around relatively easily. Chapter 3 concerns data entry and the creation of data files. This chapter painstakingly step-by-steps students through conceptual ideas (how to design research and organize data) and procedures (how to name, format, and enter variables). It also considers editing already-entered data, and provides a complete listing of all data in the file (grades.sav) so frequently used throughout the book. Chapter 4 we regard as one of the most complex and important chapters of the book. The ideas are not complex; the procedures often are. It would be rare for anyone sitting down to an analysis session to finish without making use of one or more procedures included in the fourth chapter. We deal with manipulation of data. Each concept is presented with a conceptual rationale, examples to illustrate, and step-by-step instructions to clarify access. Included are the topics of:
listing cases,
replacing missing values,
computing new variables,
recoding variables (both switches of coding for existing variables and creating new variables via recoding),
selecting subsets of data for further analysis,
reordering the data, and
merging files.
Chapter 5 deals with graphing procedures. The graphing procedures for SPSS have undergone major transitions every couple of editions for some time now: "Graphs," "Legacy Graphs," "Interactive," "Chart Builder," and "Graphboard Template Chooser" have at various times all referred to different sets of menu options and interfaces. SPSS 23 has three graphics systems—"Chart Builder," "Legacy," and "Graphboard Template Chooser." As much as possible, we have used the Chart Builder, as the Legacy method is on the way out and the Graphboard Template Chooser is usually not as good for classroom settings (when you want to have students figure out what kind of graph is appropriate rather than letting the computer select a menu of possible graphs for you). In a few cases, in which the procedures are much simpler using Legacy graphs, we have used the older procedures. Chapters 6 through 28 deal with different types of analyses. We need not list them here; you can read the table of contents. A word on organization, however, might be useful. SPSS Statistics is available in different modules: Base, Regression Models, Advanced Models, Tables, Exact Tests, Categories, Trends, etc. We cover all of the commonly used procedures presented in the Base System Module and several selected procedures from the Advanced Statistics Module and the Regression Models Module. Our book has been organized around this module structure:
the first 5 chapters are SPSS processes that apply to all modules,
chapters 6-22 apply to procedures offered in the Base System Module,
chapters 23-27 describe procedures from the Advanced Statistics Module or the Regression Models Module, and
chapter 28 (on residuals) applies to all three modules.
The module structure is not a trivial issue. Currently users (students, or universities) can purchase the SPSS Base Module (Chapters 1-22 and 28), SPSS Standard (all chapters), SPSS Professional (which also includes other procedures not covered in this book), or SPSS Premium (which includes even more procedures not covered in this book). The book is organized so as to be easy to manage if you have only the SPSS Base Module. A word on the structure of each of the analysis chapters (6-27) is important. Each of the chapters is self-contained. That is, armed only with an understanding of the typographical conventions, if a person wished to conduct t tests on certain material, she or he could start with Chapter 11 and conduct t tests. True, if they don't know how to create a data file, step 1 tells them to go to Chapter 3 and learn how; if they get output and don't know how to edit the material, they will be referred to Chapter 2; but otherwise, everything is there. Each of the analysis chapters is divided into three sections: 1. Introduction: The introduction of each chapter explains the procedure as clearly and parsimoniously as we are able. For several of the introductory chapters the introductory material is only about one page long. We see no need to take more than two lines to explain what a "mean" is. Notice our brief section on measures of central tendency from Chapter 7:
The Mean is the average value of the distribution, or, the sum of all values divided by the number of values. The mean of the distribution [3 5 7 5 6 8 9] is (3 + 5 + 7 + 5 + 6 + 8 + 9)/7 = 6.14.
The Median is the middle value of the distribution. The median of the distribution [3 5 7 5 6 8 9] is 6, the middle value (when reordered from small to large: 3 5 5 6 7 8 9).
The Mode is the most frequently occurring value. The mode of the distribution [3 5 7 5 6 8 9] is 5, because the 5 occurs most frequently (twice; all other values occur only once).
Ours is not a statistics book or a commentary. We see no need to be anything other than clear. On the median we ignore the issue of "what if you have an even number of values?" That's your job. In other instances we go into some detail if we feel it is necessary for clarity. In Chapter 7 we spend a full page on statistical significance. More could be said about it, of course, but we felt it important enough to attend to it carefully. Also in Chapter 7 we spend a full page on the normal distribution. Nearly all the statistical procedures presented in the book (with the exception of chi-square analyses and other nonparametric tests) are predicated upon an assumption of normality of your data. Thus, we felt it worthy of a full page.
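For instructors who want a quick way to verify central-tendency values outside SPSS (including the even-number-of-values case for the median that the text leaves to you), Python's standard library handles all three measures; this sketch is our own illustration, not part of the textbook:

```python
from statistics import mean, median, mode

data = [3, 5, 7, 5, 6, 8, 9]

print(round(mean(data), 2))  # → 6.14 (43 / 7)
print(median(data))          # → 6 (middle of 3 5 5 6 7 8 9)
print(mode(data))            # → 5 (occurs twice)

# The even-n case: median() averages the two middle values.
print(median([3, 5, 5, 6, 7, 8]))  # → 5.5
```

The same values appear in SPSS's Frequencies output (Chapter 6) when central-tendency statistics are requested, so this makes a handy cross-check for answer keys.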
On other procedures we may take several pages to inch our way through a clear presentation. In our 28 chapters, some are better than others (we would never admit, however, that any chapter is worse than another!). Always our goal is to make complex procedures essentially clear. 2. Step-by-Step sections: Each Step by Step section begins with the identical first four steps (with minor variations depending on the procedure) and ends with the same two steps (explaining how to print the output and exit the program). It is in the middle (various versions of Step 5) where the instruction takes place. There are as many Step 5 versions as there are specific analyses presented in a particular chapter (labeled 5, 5a, 5b, 5c, etc.). Please refer to any of Chapters 6-27 to see examples of these. Prior to the Step 5s are descriptions of essentially all screens (or dialog boxes) used in that procedure. In this portion of the Step by Step section, a narrative description of the screens, plus identification and definition of terms, helps the student gain a thorough awareness of how to use the screens in the analyses. 3. Output: The Output section includes the results of analyses produced by SPSS Statistics. In almost all instances we use the format provided by the SPSS Statistics 23 version, edited to be clearer and more space-conservative. Finally, most Output sections contain one or more boxes that define all output terms. These terms are also included in the comprehensive Glossary at the end of the book.
Section II: Chapter-by-Chapter Commentary Chapter 1: Introduction The First Four Sections, pages 1-5 The content of the chapter is clear from the chapter cover page. The first four sections (Necessary Skills for the User, Scope of Coverage, Overview of the Book, and Organization Chapter by Chapter) are largely for perspective and have a friendly, chatty feel to them despite the technical content. To the absolute novice, the Chapter-by-Chapter section would be essentially incomprehensible and boring. The other parts, however, give one a feel for what is about to take place. The final two sections (the example, and the typographical conventions) are where we move away from friendly noises to material that is critical for understanding. We start with the example.
Introduction to the Example, pages 5-6
As explained briefly in the introduction, the purpose of creating the example data file was to allow students to focus on data analysis rather than spending a lot of time trying to understand the file. In teaching a course using the book, Darren actually quizzes students on the content of the data file to ensure that they have reasonable fluency with the file that is used so often in the remainder of the book. For introductory students it is good to know that this file is used to illustrate procedures in all but two (Chapters 15 and 16) of the first 19 chapters. The data are all fictional, but do a good job of demonstrating many types of analyses.
Typographical and Formatting Conventions, pages 6-8
This is the other section critical for understanding. We have created these conventions to be as intuitive as possible, and a thorough understanding of the two pages that make up this section will place users of the book in an excellent position to make the most effective use of the chapters that follow.
Chapter 2: Processes

Chapter 2 is presented in two versions: PC (page 9) and Mac (page 28). Although the rest of the book is based on the PC version, the PC version is essentially identical to the Mac version for everything except the content of this chapter. Each version of Chapter 2 is divided into a number of sections, some more important than others. Having used this book (or prior versions of it) several times to teach statistics-related classes, we will attempt to help you place in perspective the amount of attention each section deserves.
The Mouse, page 10 (PC) or 28 (Mac)
This section is for the novice. There is, however, one element of substantive importance here: the right click (or two-finger click). This often speeds up the data analysis process by displaying variable information on demand. A right click in many settings identifies the variable label and the value labels in a convenient box that disappears when you click elsewhere. How often have you tried to remember how you coded gender or ethnicity? This feature eliminates ever wondering. Also, a right click on many other terms will provide a definition or description.
The Data Window, page 12 (PC) or 30 (Mac)
This is of substantial importance in the early stages. Once the data window has been used a few times, its importance diminishes. The lists of icons are handy but hardly worth memorizing; besides, Screen 2.3 is one of the front cover screens, so if a student wishes to recall the meaning of the icons, they are always handy. And, when you place the cursor on an icon, it identifies its function. The advantage of the descriptions in Screen 2.3 (and the front cover) is that they often have more complete definitions. Most frequently used icons include:
Open file
Save current file
Print file
Undo the last operation (this is short-term memory: it undoes only the ten most recent operations)
Go to a particular case number
Access information about the current variable
Find data
Insert subject or case into the data file
Insert new variable into the data file
Shift between numbers and labels for variables with several levels
The Statistical-Procedure Dialog Windows, page 16 (PC) or 34 (Mac)
Every statistical procedure has a main dialog window, and many of the features are similar in each window. The OK, Paste, Reset, Cancel, and Help buttons are always included, and the list of available variables is always present. This section is worth some quiz items because of its centrality to all statistical processes.
The SPSS Output Navigator, page 18 (PC) or 36 (Mac)
This topic occupies 6 pages because the pivot tables are a substantial hurdle for a novice data analyst. With small data sets or very limited output they are kind of cute. The pivot features allow you to switch rows and columns, put labels on their ends, and engage in a variety of other entertaining activities. For power users with a clear idea of what they want, they can be indispensable. For the rest of us—especially with large outputs and beginning students—they become a special type of purgatory. They do allow some advanced features, but at the cost of confusion.
How you respond to this section of the chapter is, of course, your prerogative, but it cannot be ignored. A good starting point is to read the pages yourself and practice with some actual output. You can click and drag the lines of the output to create a neater look. SPSS has not yet figured out how to get value labels to fit in the boxes, but some click-and-drag activity can create greater order. Here, however, are some tips about issues that need to be covered:

Screen 2.7: Distinguish between the two halves of the screen, the Outline view (to the left) and the actual output (to the right). The heavy border between them may be moved (via a click-and-drag mouse operation) so as to eliminate either section and allow a clear view of the other window. Still on Screen 2.7, frequently used icons include:
Open file
Save file
Print output
Undo the last operation
Go to the SPSS Statistics Data Editor (very handy to access your data screen on demand)
Go to a particular case number
Get information about variables
Display currently selected object (a double click on the closed book icon in the outline view to the left will accomplish the same)
Hide currently selected object (a double click on the open book icon in the outline view to the left will accomplish the same)
The pivot functions should at least be presented: Explain that it first takes a double click on the output object to activate the pivot tables. A new menu bar heading will emerge called “Pivot” that allows you to access a number of pivoting functions. Beyond that I would suggest that you tell your students to play around with it to see what each menu item does. Warn them to save their file prior to their experimentation so if they tie themselves in hopeless knots, they can simply revert to the saved version of the file; or, just run the analysis again. If you have a burden to help your students become fluent in pivot-table operations, you will need to create your own agenda, as we try to avoid this particular kind of fluency.
Printing or Exporting Output, page 23 (PC) or 41 (Mac)
Printing output is quite straightforward. The print dialog boxes are intuitive and very similar to many other print dialog boxes for word processing or other software programs. The only issue of significance concerning printing is the frequent need to edit the output prior to printing to save paper. Darren often specifies that his students should “Edit output so it fits on one page.” This gives students practice in output editing, and, by the time the course is finished, they will have become fairly fluent in this practice through many instances of actually editing output. Paul takes a different approach, and simply tells the students to export their output to PDF and grades their electronic files. More trees are saved at the cost of more hours staring at a computer screen.

Printing a data file is quite a different dynamic. There is really no space-conservative way that either of us has found to print an entire data file. SPSS is quite capable of printing even a large file, but it takes many pages. The key concern is that usually a researcher wants to print only a portion of the file. This is accomplished by highlighting the desired material: Click and drag within the data file to print a particular rectangle of data; click on the variable name(s) to print particular variables (all cases); click on the case number(s) to print certain cases (all variables). When you arrive at the print screen, make sure that the “Selection” option is highlighted prior to clicking OK.
The “Options” Option, page 25 (PC) or 43 (Mac)
As a starting point, acquaint your students with the “Options” option. There are a variety of irritants in SPSS Statistics default formatting that you can eliminate with an appropriate click in one of the “Options” subcategories. Of particular distress is the default practice of printing variable labels in your output rather than variable names. This creates chaos in your output tables because the labels are often much too long to fit neatly in boxes. Whether the variable list in dialog boxes is presented in alphabetical order or in the order of original entry also deserves attention. Based on your situation, a strong preference for one or the other is almost always present. Become acquainted with the various options yourself, and then you will be able to help students as they learn to negotiate through SPSS Statistics procedures.
Chapter 3: Creating and Editing a Data File

While the previous chapter provides material that may or may not be important to cover in a class setting (depending on the expertise of your students), the data-file chapter requires a thorough treatment for just about anyone. If students become aware of the importance of carefully crafted research design prior to entry of data, coherent order and formatting of variables in the data file, and certain modifications and manipulations that may be enacted with these data, then they will have a strong foundation to negotiate the remainder of the class successfully. In our experience teaching classes using this book, this chapter cannot be either overlooked or minimized. If it isn't covered early and the skills used consistently, students may know how to conduct an analysis given a data file, but not know how to go from raw data to a data file. Darren typically quizzes thoroughly on all items he feels are foundational to creating the data file; Paul assigns lab work in which students have to set up files given raw data (not in the order or format that SPSS uses). In addition, we assign a good deal of practice on the computers that consists of actually creating a file and entering data. Now, it is true that any intellectually challenged human can enter data with minimal instructions; to construct a meaningful data file requires intelligence, experience, and finesse.

A typical assignment directs students to recreate the data file for the grades.sav file (described in Chapter 1 and listed at the end of Chapter 3). Once students have created, named, and formatted all variables, they enter data for the first 10 or 20 subjects from the lists at the end of Chapter 3. After completing this assignment, students (in a hands-on intensive course) will have many opportunities to edit variables (change coding, cut and paste, create new variables) and perform other manipulations with the data file.
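The core idea behind such a file—categorical variables stored as numeric codes with SPSS-style value labels layered on top—can be sketched in a few lines of Python. The variable names, codes, and data below are hypothetical stand-ins, not the actual grades.sav variables:

```python
# Hypothetical analogue of a small grades.sav-style data file.
# "gender" is stored as codes (1, 2) and displayed via value labels,
# mimicking SPSS's numbers-vs-labels toggle in the data window.
value_labels = {"gender": {1: "Female", 2: "Male"}}

data = {
    "id":     [101, 102, 103],
    "gender": [1, 2, 1],       # coded categorical variable
    "quiz1":  [8, 7, 10],      # numeric variable
    "final":  [62, 55, 71],
}

def labeled(var, code):
    """Return the value label for a code; fall back to the raw value."""
    return value_labels.get(var, {}).get(code, code)

print([labeled("gender", g) for g in data["gender"]])  # ['Female', 'Male', 'Female']
```

The point of the sketch is the design decision the chapter stresses: values are entered once as codes, and the labels live in the variable definition, so recoding or relabeling never requires retyping the data.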
A thorough understanding of the contents of Chapter 3, exercises such as the one described above, and a course that gives students lots of practice working with data files provide a secure foundation for creating their own files in the future. Some specific areas of interest follow.
Research Design and Structure of the File, page 46
Instruction concerning the nature of a well-designed study is central to conducting a research methods class, regardless of the particular discipline (e.g., psychology, sociology, political science, etc.). In the text we underline the necessity of a carefully designed study but are in no position to go into much detail. We would encourage you to expand and elaborate on the material introduced in the book. If, on the other hand, you are teaching a statistics class, the best that can be done is to indoctrinate students in the reality that carefully crafted research is central to effective data analysis. The actual procedures of how to do this will probably wait until a research methods course.
Step by Step, page 47
The screen shown here is a slightly modified version of Front Cover Screen 1. It is important to know, of course, but is so intuitive that fairly limited experience will provide the entire introduction students need. If they ever require visual reference during the course of data analysis, they can always check the front cover, which also includes the meaning of each of the icons.
The Variable View Window (Screen 3.2), page 48
This screen represents something SPSS did well. It works equally well to first type all the variable names and then format each one afterwards, or to type a single variable name and select all formatting options before continuing to the next variable. When a variable name is typed and no formatting information is provided, SPSS Statistics will automatically assign that variable the default settings (numeric, 8 characters wide, 2 decimals, no labels or values, etc.). If this is acceptable, fine; if not, changes are as simple as clicking on the desired cell and making those changes (described in detail in the textbook). Finally, if the individual desires to make sure that values enter correctly, she or he need only click on the Data View tab, enter a value (under the variable name now listed at the top of the screen), and see that the value is entered and formatted correctly.
Step-by-Step Sequences, pages 49-54
These seven step-by-step boxes identify in lucid format exactly (a click at a time) how to format several different types of string and numeric variables. These five pages are sufficient, in our experience, for students to learn the basics, and we rarely have questions from students about variable formatting after that. Once again, SPSS has made this quite intuitive, so a little bit of teaching will go a long way.
Entering Data, page 54
This section begins with how to save (initially) and save (subsequently) the data file. You will avoid the occasional panic attack if you remind students to save data frequently during the entry process. There are several ways to enter data. A favorite method of ours is, following the creation and formatting of EACH variable, to enter the actual value for the first subject—just to make sure it fits and is formatted correctly—then go on to the next variable. After the file has been created and formatted (and one value has been entered for each variable), then go on to enter the remainder of your data. Whether this is done row by row (cases/subjects) or column by column (variables) depends entirely on the way your data are formatted.
Editing Data, page 55
Changing a cell value, inserting a new case, and inserting a new variable will be utilized thousands of times by any serious researcher. To change an entry is as simple as clicking on the desired cell, typing the correct entry, then pressing the tab, enter, or one of the cursor keys. Inserting new cases or new variables requires a little more attention. A click on either of these icons inserts a blank row (for a case) or column (for a variable) before the selected row or column. You may then either type in new data (with a new variable name when appropriate), or you may paste a new row or column into the newly cleared space. It would be well for you to pay special attention to the material on page 55 to ensure that your students cut and paste cases or variables correctly. Searching for data is intuitive, straightforward, and very handy. This is particularly useful for replacing errors in your data file.

The chapter concludes with the data included in the grades.sav file. Be aware that the file is available at http://www.pearsonhighered.com/george and may be accessed directly from that site or downloaded for future use. Also be aware that it includes four additional variables (total, percent, grade, and passfail). All information necessary to create the grades.sav file is included on the last three pages.
Chapter 4: Managing Data

As suggested by the first paragraph of this chapter, we consider this as important to the mastery of statistical analysis as the creation of a data file. Let us be frank: This is also one of the most difficult chapters for students to master. The ideas are not at all difficult conceptually, but they are often troublesome to implement and hard to remember a month or two later.
Case Summaries, page 63
In former versions this was a simple “List Cases” function that allowed you to select certain data (any number of cases/subjects and any number of selected variables) to print out in a nicely space-conservative (72 lines per page, 10-point Courier font) format. Now SPSS has expanded the command (as reflected in the name change to “Case Summaries”) to include a variety of other options (not really a problem; you don't have to select them), but has reverted to the awkward boxed output (allowing no more than 30 lines per page). The command is still sometimes useful. Most people who use SPSS Statistics are probably not even aware of the Case Summaries option, but it proves to be convenient in a variety of settings, particularly to ensure that data are entered, formatted, and ordered in the desired manner. You may want to give students some problems involving case summaries to heighten their awareness of its importance.
Replacing Missing Values, page 66
A read through this section will reveal our bias. We feel that missing values should be dealt with at the data entry phase, particularly for categorical variables. If we do replace missing values with a mean value, we are more likely to base the replacement on the mean of other values in the same domain for that subject rather than on the average for all subjects in the data set. For serious research leading to publication, Darren consistently uses Method 3: Create regression equations to produce predicted values and then enter those values into the original data set. This process is certainly beyond the resources of most introductory students; more advanced students, however, should not have difficulty with it.

Distinguish between the terms pairwise and listwise. Students will see these terms frequently. Also clarify the distinction between “system-missing values” and “user-missing values”, something beginners are not likely to use, but the terms appear from time to time. The 15% number is important to incorporate into their thinking. The long paragraphs on page 66 are crucial. Stress to your students that it is not possible to solve the problem of missing values by simply clicking boxes. What we demonstrate in sequence step 5b provides something that is statistically viable but is only obliquely related to what the student's actual GPA or quiz score might be. We do not typically provide exercises in the missing-value section, but we do cover the material.
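For instructors who want a concrete illustration of what mean replacement actually does to a variable, here is a minimal Python sketch (the variable name and values are hypothetical; this mimics the idea of replacing a missing score with the series mean, not SPSS's exact implementation):

```python
# Hypothetical quiz scores; None marks a missing value.
quiz1 = [8, None, 7, 10, None, 9]

# Mean of the observed (non-missing) values: (8 + 7 + 10 + 9) / 4 = 8.5
observed = [v for v in quiz1 if v is not None]
mean = sum(observed) / len(observed)

# Replace each missing value with that mean.
filled = [mean if v is None else v for v in quiz1]

print(filled)  # [8, 8.5, 7, 10, 8.5, 9]
```

Showing students the before-and-after lists makes the book's caution vivid: the 8.5s are statistically viable fillers, but they say nothing about what those two students would actually have scored.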
The Compute Procedure, page 68
This material will be frequently used and should be carefully taught. Some hints:
Instruct the students that creating a new variable will produce a value (based on their formula) for every subject in the data set. This new variable will be given the name they designate, and will appear, after computation, at the end of the data file. This may then be cut and pasted into a more convenient location in the file if they wish.
Teach students to double click on the variable name (in the list to the left) to paste it into the active box, rather than clicking the name and then clicking the arrow button. It is quicker.
For introductory classes, avoid the 70 functions altogether. They are so idiosyncratic that a perusal of the list found that Darren (who has an undergraduate degree in math) understood only about a quarter of them!
Have students compute 7 or 8 new variables to learn the process thoroughly. The complexity of the formulas to create these variables depends on how advanced your class is. Provide simple examples (such as sums of quizzes and final for total points, or computing percents) so they become acquainted with the process before they are forced to consider complex computations.
The random number function can be quite handy; however, it extends beyond the resources of an introductory class. It is especially useful for generating sample data with a normal distribution.
The step-by-step examples are excellent for first computations. Just remember that if you are creating these using the grades.sav file downloaded from the http://www.pearsonhighered.com/george web site, you will need to give the variables different names than the names used in the original file (e.g., “total1” rather than “total”). Once completed, students can check their results against the equivalent values in the data file to see if they did it correctly.
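What Compute does can be sketched in Python terms: one new value per subject, produced by the same formula for every row and appended as a new variable. The variable names (quiz1, quiz2, final, total1, percent1) and the 95-point maximum are hypothetical, and "total1" follows the renaming advice above:

```python
# Hypothetical subjects; each dict is one row of the data file.
subjects = [
    {"quiz1": 8, "quiz2": 9, "final": 62},
    {"quiz1": 7, "quiz2": 6, "final": 55},
]

# The "Compute" formula is applied to every subject, and the result is
# stored under a new variable name appended to each case.
for s in subjects:
    s["total1"] = s["quiz1"] + s["quiz2"] + s["final"]
    s["percent1"] = round(100 * s["total1"] / 95, 1)  # assuming 95 possible points

print([s["total1"] for s in subjects])  # [79, 68]
```

This mirrors the check students should make in SPSS: the new column appears at the end of the file, with one computed value for every case.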
The Recode into Different Variables Procedure, page 72
An important skill, but not one worth memorizing. The examples in this section give an excellent feel for the most frequent uses of this function. The command is not particularly intuitive and may require some work to master. Although this command is not used all that frequently, it is good for students to at least remember where in the book they can access the step-by-step information if they need it later.
Recoding into the Same Variable, page 74
This operation is used much more frequently than its brother in the previous paragraph. It is also much more intuitive in terms of the step-by-step procedures. Several exercises of changing the coding are quite useful; sequence step 5e on page 76 provides an excellent starting point. The final paragraph in the section is not to be ignored. If you change the coding, it is mandatory that you also go into the variable dialog box and change the value labels associated with each value. As noted in the book, some horrifying misinformation can result if the labels are not changed as well.
Select cases, page 76
This is, hands down, the most frequently used procedure in the chapter. At a general level it is important to teach the process: Provide students with several problems that give them practice, and be sure to include key concepts from this section on quizzes. However, once learned, they will use this option so often that before long it will become second nature.

The most critical concern is determining how to select complex combinations of cases. For instance, in a data file on spirituality (not included on the website), there are 13 different religions listed and coded. If we wished to do a Catholic versus Protestant comparison, this might be difficult because there are 7 different Protestant groups but only one Catholic group. It's probably good to start simple, and present introductory students with only fairly simple select cases options. If you have a more advanced class, then deal with more complex selections when you encounter them in other analyses. Proceed with caution when using the logical operations. The ampersand (&) and the pipe (|) can easily be mixed up. The best bet is for students to create their statements, and then check the data file to see if cases have been selected (or deselected) correctly. If not, go back and try again. In time this will become intuitive.

Finally, the example of selecting sophomores and juniors illustrates an idiosyncrasy that students must comprehend if they wish to avoid major frustration. The intuitive first impulse would be to code for sophomores and juniors by “year >= 2 & <= 3”. It won't work. You must state the variable name (year in this case) after each logical symbol: “year >= 2 & year <= 3”.
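The select-cases logic translates directly into a Python filter, which can help students see why the variable name must be repeated on both sides of the connector. The data below (id, year) are hypothetical:

```python
# Hypothetical cases with a coded "year" variable (1 = freshman ... 4 = senior).
rows = [
    {"id": 1, "year": 1},
    {"id": 2, "year": 2},  # sophomore
    {"id": 3, "year": 3},  # junior
    {"id": 4, "year": 4},
]

# The SPSS-style condition "year >= 2 & year <= 3", with the variable
# named in BOTH comparisons; "year >= 2 and <= 3" would be a syntax error
# here just as "year >= 2 & <= 3" fails in SPSS.
selected = [r for r in rows if r["year"] >= 2 and r["year"] <= 3]

print([r["id"] for r in selected])  # [2, 3]
```

The same check the manual recommends applies here: after writing the condition, inspect which cases survived the filter before trusting any analysis built on them.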
Sort cases, page 78
This is a mercifully simple little procedure, and very practical. Frequently you might wish to view your data set ordered from high to low (or low to high) on some particular variable. This is often used for class lists: alphabetical by class, alphabetical by section, from high to low based on total points or percents, from low to high based on student ID numbers. Not much effort for either you or the students.
Merging files, page 79
This process is beyond the scope of an introductory course. Indeed, for complex files, it is beyond the scope of just about anyone. For advanced classes, there are two files (graderow.sav and gradecol.sav) that are formatted to neatly illustrate this procedure.
Chapter 5: The Graphs Procedure

In early editions of this book, the chapter was extensive and covered many different types of graphs with detailed step-by-step procedures for each. In this book we have tried to maintain a step-by-step format as much as possible while providing general enough instructions that a several-hundred-page chapter is not produced. The shift to a leaner, overview-and-editing-options style of chapter was for essentially three reasons:
1. We find that most specific graphs are context-specific and thus lose clarity if the reader is not acquainted with the procedure on which the graph is based. For instance, we showed how to use multiple-line graphs and clustered bar charts to represent ANOVA interactions. If a person had never heard of ANOVA, it was a somewhat pointless exercise.
2. Another difficulty is that there are so many different types of graphs and different options for each graph that it would make the present chapter bulky and cumbersome to cover even a fraction of graphs and options available.
3. SPSS Statistics currently contains three complete graphing procedures: legacy graphing procedures, graphboard templates, and chart builder graphs. Our assumption is that users will use the chart builder graphs for most things—this procedure was rolled out with SPSS 14, and is reliable and stable.
We have included instructions about how to access several different types of graphs in appropriate chapters; these include:
Bar charts, histograms, and (somewhat obliquely) pie charts in Chapter 6.
Multiple-line graphs and clustered bar charts in Chapter 14.
Scatter plots showing both linear and curvilinear trends in Chapter 15.
Predicted-probability charts in Chapter 20.
Scree plots in Chapter 24.
Icicle plots and dendrograms in Chapter 25.
A number of charts involving residuals in Chapter 28.
The primary function of Chapter 5 as it is currently constructed is to provide:
An overview of the graphics capabilities of the current SPSS Statistics graphics editor.
A sample graph that identifies almost all of the SPSS-specific graphics terms.
An overview of the Chart Builder dialog box, which is used in just about every graphics procedure offered by SPSS Statistics.
Sample graphs to demonstrate graphic output.
An identification of how to edit the graph (starting with a double click on the graph).
Identification and description of each of the graph edit icons.
Explanation of how to access and use other edit options.
Description of key commands and options.
We feel the chapter does a good job (within a few pages) of clarifying a number of important concerns. Prior to your students creating graphs, have them read through the chapter. How much time you wish to spend on this topic depends, of course, on the nature and goals of your class. Darren has covered the spectrum, from at times having students create and edit many graphs (extending over several class periods) to entirely ignoring the topic. Paul encourages students to have SPSS Statistics produce graphs as part of statistical procedures when possible, but teaches them how to use Excel to create APA-style graphs with the right formats and error bars (which SPSS cannot quite manage in perfect APA style).
Note that for students to produce graphs, they do have to understand a number of key concepts that they probably once knew, but may have forgotten. For example, a good understanding of a Y-axis is needed to produce nearly any graph. (Try asking your students what the Y-axis represents in a normal distribution a couple of weeks after you have taught normal distributions, and see what they say!) We have tried to write the chapter so that students will have to know as few concepts as possible beyond those they would need anyway to interpret the graph. Because students have to understand a number of concepts before using the Chart Builder procedures, we'd suggest first having them read the beginning of the chapter carefully before sitting down at the computer. In particular, they should familiarize themselves with the sample chart, and develop an understanding of what each term identified in the chart means. Once they have done that, they can sit at the computer and follow the procedures to produce a few sample charts.
Chapter 6: Frequencies

This is the first chapter that actually begins to look like data analysis. We know that it isn't, but some eager students begin to get excited as they start to do things that actually produce output. Within that context, we are now into the standard chapter format:
1. The Introduction.
2. The standard 4-step sequence that accesses the program of interest.
3. The step-5 sequences that actually perform different types of analyses, six of them in this case (5, 5a, 5b, 5c, 5d, and 5e).
4. The printing and exiting-program sequences (Steps 6 and 7) that complete each Step by Step section of Chapters 6-27.
5. The Output, composed usually of SPSS Statistics-style output, definitions of terms in neat boxes, text that gives a narrative description of what the output means, and sometimes charts or graphs to help clarify.
Frequencies, bar charts, histograms, and percentiles are described with an absolute minimum of adornment. If you feel our brief presentation is inadequate, feel free to expand upon it. In the Step-by-Step section, once students have executed the first three of the introductory steps a few times, they will be internalized and Step 4 will be the initial point of interest. There is not a lot worthy of comment in this chapter because it is so straightforward. The initial dialog box (Screen 6.1) is simple and intuitive. Examples and exercises are available in just about any data set. Students learn this quickly.

A point that often causes frustration early in the process, however, is the order of variables in the initial dialog window. Variables in the box to the left may be ordered according to the “variable list” (the order in which they were initially entered) or alphabetically. The variable list is typically the preferred order unless you have a very large number of variables, and all material presented in the text assumes the variables are listed in the order of the original variable list. If your students' variables are listed alphabetically and they wish to change this, have them click Edit, then Options. There, in a box in the upper left-hand corner, the three selections (variable list, alphabetical, or measurement level) are offered. Once the desired order is selected, all further dialog boxes will list variables in the designated order.

In this chapter (and also Chapter 7) we actually recreate the SPSS Statistics Output Navigator (the output screen) within the chapter for visual reference (with the output from sequence step 5 displayed). After Chapter 7 we simply refer students to Screen 1 on the inside back cover, a sample output screen that includes definitions of all the icons.
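The arithmetic behind a Frequencies table—counts and percents per category—is simple enough to sketch in a few lines of Python, which can be a useful board demonstration before students see the SPSS output. The "grade" values here are hypothetical:

```python
# A sketch of what a Frequencies run computes: the count and percent
# for each category of a (hypothetical) letter-grade variable.
from collections import Counter

grades = ["A", "B", "B", "C", "A", "B", "D", "C", "B", "A"]

counts = Counter(grades)
n = len(grades)

# {category: (frequency, percent)} in category order
table = {g: (c, round(100 * c / n, 1)) for g, c in sorted(counts.items())}

print(table)  # {'A': (3, 30.0), 'B': (4, 40.0), 'C': (2, 20.0), 'D': (1, 10.0)}
```

Seeing that a frequency table is nothing more than counting and dividing by N helps demystify the output before students move on to the cumulative-percent and percentile columns SPSS adds.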
Chapter 7: Descriptive Statistics

The first paragraph identifies all the terms that will be covered in this chapter. The most important terms from a statistical viewpoint are:
Mean, median, and mode
Variance and standard deviation
Skewness
Kurtosis
Maximum, minimum, and range
The standard errors are described in the narrative but are not considered central for an introductory class. The median, mode, and size of the distribution are more frequently used, especially in business.

Statistical significance is introduced here, and is often addressed (more briefly) in the chapters that follow. Obviously this is an important concept in data analysis, and you may wish to build on the material we present, or you may wish to present this topic with the aid of another book or based on your own resources. The page is not designed to be comprehensive but, to the instructor, it is a reminder to spend serious time on this issue.

The next topic, more germane to the subject of the present chapter, is a full-page description of the normal distribution. We attempt to make the concept clear by many examples. You may want to foreshadow that the reason the normal distribution is so important is that most statistical procedures are based on the assumption of normality of data. Just as you don't build a house with rotten lumber, you don't build an analysis on variables that are not psychometrically sound (viz., normally distributed). Many statistical procedures are robust to violations of this assumption, but it should be emphasized nonetheless—if you're violating an assumption, you should know you're doing it!

Once again, the descriptions of the terms and procedures are made with a minimum of adornment. You are free to expand if you feel such is necessary. The simple 2-line descriptions and simple illustrations seemed adequate to us. Kurtosis and skewness both merit an entire paragraph and a couple of graphs each. These two constructs provide the scales that actually measure the normality of any distribution.

For assignments in this material, Darren will typically have students print out psychometric properties of 40 or 50 variables in some large data set.
He includes only mean, standard deviation, kurtosis, skewness, and N so that the information about each variable will fit on a single line. Students will then circle, underline, or box variables that are excellent for most psychometric purposes (kurtosis AND skewness between ±1.0), those that are generally acceptable for further analyses (kurtosis and skewness between ±2.0 but at least one of them outside the ±1.0 range), or those not acceptable for some analyses (either kurtosis or skewness outside the ±2.0 range). This is a bit more mechanical than how a real researcher might use these measures, but it conveys the point clearly.

Paul tends to give several lab assignments in which students have to describe the data in APA style (mean, standard deviation, etc.) and provide histograms and narrative descriptions of the data if the data are not normal. They need to check the skewness and kurtosis to see whether or not the data are normal.

In the Step-by-Step section, the two dialog boxes (Screens 7.1 and 7.2) are simple and intuitive. Students rarely have difficulty with either of them. The output is straightforward, and the interpretation is clear once students have internalized the meaning of terms from the introductory portion of the chapter.
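The screening assignment above can be sketched in Python. This is an illustration, not the book's SPSS procedure; the variables and data are invented, and SciPy reports excess kurtosis (for which a normal distribution scores 0), matching the ±1/±2 rule of thumb described above:

```python
# A sketch of the skewness/kurtosis screening rule (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = {
    "quiz1": rng.normal(50, 10, 200),     # roughly normal
    "income": rng.exponential(2.0, 200),  # strongly skewed
}

def classify(x):
    # Excess kurtosis: 0 for a normal distribution
    sk, ku = stats.skew(x), stats.kurtosis(x)
    worst = max(abs(sk), abs(ku))
    if worst <= 1.0:
        return "excellent"
    elif worst <= 2.0:
        return "acceptable"
    return "questionable"

for name, values in data.items():
    print(f"{name}: N={len(values)}, M={values.mean():.2f}, "
          f"SD={values.std(ddof=1):.2f}, skew={stats.skew(values):.2f}, "
          f"kurtosis={stats.kurtosis(values):.2f} -> {classify(values)}")
```

Each variable prints on a single line, mimicking the one-line-per-variable listing Darren has students produce.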
22 IBM SPSS Statistics 23 Step by Step Instructor’s Manual
Chapter 8: The Crosstabs Procedure and Chi-Square Tests of Independence

Ah! Now we move into material that is capable of actually confusing students! It is also the first genuine analysis chapter. As essentially simple as chi-square analyses are, it takes a while before students are able to comfortably internalize some of the concepts. The widely used terms "independence" and "dependence" are perhaps a little unfortunate. They do not quickly convey what a chi-square analysis is attempting to accomplish. We have gone through tortured efforts to make the basic concepts clear in the introductory two pages. What follows are some thoughts dealing with how to most effectively convey this material.

Deal with the categorical/continuous issue. While you may perform crosstabulation with either continuous or categorical data, it is rarely a good idea to perform chi-square analyses on categorized continuous data. Emphasize that with chi-squares you are always dealing with frequencies (or expected frequencies) of categorical data.

Next explain that what is being compared in chi-square analyses is whether actual (or observed) values differ significantly from expected values. Some more time may need to be spent with "what are expected values". Whether you show the method for actually computing expected values (row percent × column percent × number of subjects) is up to you. We explain what expected values are in a seat-of-the-pants fashion in the introduction. We do not show the actual computations. Of course, this only applies to chi-square tests of independence of two variables. If you want to do a single-variable chi-square test, then you need to do a nonparametric chi-square test and enter the expected values. Finding the expected values is really the trickiest part of chi-square tests.

To demonstrate how the chi-square procedure really works, we have students actually hand compute chi-square values based on the formula shown on page 125.
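As a sketch of that hand computation (the 2×2 table is invented, not from the book): each expected count is row total × column total ÷ N, and chi-square sums (O − E)²/E over all cells:

```python
# Hand computation of expected values and chi-square for a 2x2 table,
# cross-checked against scipy. The observed counts are invented.
import numpy as np
from scipy import stats

observed = np.array([[30, 10],
                     [20, 40]])

row_totals = observed.sum(axis=1, keepdims=True)  # 40 and 60
col_totals = observed.sum(axis=0, keepdims=True)  # 50 and 50
n = observed.sum()                                # 100

expected = row_totals * col_totals / n            # [[20, 20], [30, 30]]
chi_sq = ((observed - expected) ** 2 / expected).sum()

print("expected:\n", expected)
print("chi-square:", chi_sq)

# correction=False matches the uncorrected hand formula
chi2, p, dof, exp = stats.chi2_contingency(observed, correction=False)
assert np.isclose(chi_sq, chi2)
```

The large discrepancies in this invented table (each cell is 10 counts away from its expected value) produce a large chi-square, exactly the pattern the hand exercise is meant to reveal.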
This allows them to see that many small discrepancies result in a small (and nonsignificant) chi-square value, and that several large discrepancies inflate the chi-square value, resulting in significant differences between observed and expected values. It's not a difficult formula and gives students a good understanding of the procedure.

Concerning the explanation of dependence and independence: Try what Chris Cozby suggested in his research methods book: Use the "that depends" phrase. Are there more men or women in different classes? That depends on whether the class deals with, for instance, engineering (typically more men) or psychology (typically more women). Are there more Christians or Jews in different classes? That depends on whether the classes are conducted in the U.S. or in Israel. These four instances represent examples of dependence. The membership of the class (whether gender or religion) depends on certain characteristics of the class or setting. Independence might be illustrated by some obvious examples: Is gender balance dependent on ethnicity? No, we know that there are roughly 50% males and 50% females in any ethnic group. Thus gender and ethnicity are independent of each other. If this is helpful, good; if not, use your own examples.

To further illustrate, run students through several chi-square analyses and then interpret what they mean. In the Chapter 8 Exercises, Problem 3, the helping3.sav file is used to demonstrate that there are significant differences between men and women concerning the situations in which they are more likely to provide help for a friend. (Do men or women help friends more? It depends.) Men report more instances of helping with goal-disruptive problems whereas women report more instances of helping with relational problems.
This is a significant and interesting gender difference indicating that women are more likely to help in situations where emotional support is facilitative and men are more likely to help in situations where they can do something to fix the problem.

It may also be important to briefly address the correlation statistic that is typically printed out when conducting chi-square analyses. Here it is necessary to consider the nominal data versus ordinal data issue. Our book addresses this concern only briefly (in Chapter 3), but it is worth explaining that if there is an intrinsic order to the levels of your variable (e.g., levels of income, years of education) then meaningful correlations may be computed. If your variables are nominal (or even one of them is nominal) then correlations are meaningless. There is no intrinsic order to levels of ethnicity or different types of religions. The difficulty,
of course, is that if you are teaching an introductory class, students may not yet have been introduced to correlations.

The material on Phi and Cramer's V will typically extend beyond the level of an introductory class. However, if you wish to compare the magnitude of various analyses, then both are important. There are additional chi-square-related issues not covered here, but the chapter does an adequate job of addressing many of the most important concerns. There are a number of chi-square analyses in the exercises section that you may use to illustrate this procedure to your students or to assign for homework.
Chapter 9: The Means Procedure: Subpopulation Differences

This is a simple little procedure that is not really complex enough for "real" data analysis; it is just a listing in crosstabulated format of certain characteristics of your data. While the Crosstabs procedure from Chapter 8 indicates the frequency (or the number) of subjects in each category or subcategory, the Means procedure, instead of listing the frequency, might list the mean values within each category, or the standard deviations, or the frequencies if you like. For instance, you might want to crosstabulate section by year-in-school on total points for your classes. This would produce the mean number of total points for students in each class (3 levels), for each year in school (4 levels). It is mainly a device for organizing data. The Means procedure does include a simple one-way ANOVA in its options, but it seems foolish to consider that issue here when Chapter 12 does a fairly thorough and thoughtful job of covering one-way ANOVAs.

How often is this procedure used? Darren never uses it. He finds that the Descriptives output serves as well. Paul refers his students to it whenever they forget to request means when doing an ANOVA. The emphasis placed on this chapter will be entirely dependent on the preference of the instructor. There have been documented cases of individuals who have actually lived full, happy lives and have never even heard of the Means procedure.
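For instructors who pair the book with Python, the Means procedure's crosstabulated listing has a close analog in a pandas pivot table. The grade data below are invented for illustration:

```python
# Mean total points in each section-by-year cell, like Means output.
# The small data frame is invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "section": [1, 1, 2, 2, 3, 3, 1, 2],
    "year":    [1, 2, 1, 2, 1, 2, 1, 2],
    "total":   [80, 85, 78, 90, 88, 92, 82, 86],
})

means = df.pivot_table(values="total", index="section",
                       columns="year", aggfunc="mean")
print(means)
```

Swapping `aggfunc="mean"` for `"std"` or `"count"` mirrors the procedure's option to list standard deviations or frequencies instead.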
Chapter 10: Bivariate Correlations

If Chapter 9 can be lived without, Chapter 10 most assuredly cannot. Correlations are a central form of analysis in the majority of research projects, and a number of advanced procedures are based on correlations, such as simple and multiple regression analyses, logistic regression, discriminant analysis, and structural equation modeling. We feel that the correlation chapter does a good job of presenting the material in a concise but lucid format. On pages 144 and 145 the explanations for the perfect positive, positive (but not perfect), no correlation, negative (but not perfect), and perfect negative correlations with their associated graphs make clear what correlation is and how it relates to the associated phenomena.

In addition, we cover several other issues that are critically important to an understanding of the use of correlations. The brief paragraph or two that we present on each of these issues (pages 146 and 147) is largely a reminder to instructors that this material needs to be considered, and we leave it to you to determine how. Although we introduce linear versus curvilinear issues here, we wait until Chapter 15 for a more thorough discussion. The topic of significance is once again presented briefly. Four compact paragraphs present the central arguments concerning the issue of correlation and causation, and partial correlation is considered briefly but is covered in more detail in the ANOVA and regression chapters.

Additional issues that are covered in the Step by Step section include a Pearson versus Spearman contrast. Generally Pearson correlations should be used with continuous data and Spearman with categorical (ordinal, of course) data. The simple reality, however, is that the Pearson formula works quite well with much ordinal data, and when there is a correlation between a continuous variable and an ordinal variable (particularly if it has only two levels), Pearson is often used.
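A small demonstration (not from the book) of why the Pearson/Spearman choice matters: with a relationship that is monotonic but curvilinear, the two statistics diverge in an instructive way:

```python
# Pearson vs Spearman on a monotonic but curvilinear relationship.
# The data are invented: y is exactly x squared.
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [1, 4, 9, 16, 25, 36]

r_pearson, p1 = stats.pearsonr(x, y)
r_spearman, p2 = stats.spearmanr(x, y)

print(f"Pearson r = {r_pearson:.3f}")      # high, but below 1: not linear
print(f"Spearman rho = {r_spearman:.3f}")  # exactly 1: perfectly monotonic
```

Spearman works on ranks, so any perfectly monotonic relationship gives rho = 1, while Pearson measures only the linear component.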
Another concern is the selection of one-tailed or two-tailed tests. The rule of thumb that governs the choice of one-tailed versus two-tailed is that if you have a clear idea as to the direction of the relationship (a positive correlation or a negative correlation) then it is acceptable to use the more powerful one-tailed test. If you have a matrix of correlations in which you have little idea of the direction of relationships, then two-tailed tests are more appropriate.

A concern addressed in the regression chapters but not included here is the issue of linear dependence. Even though our book doesn't consider linear dependence in the correlation chapter, we address the topic in our classes and it is a prominent issue in a few of the exercises that our students complete. Often as an assignment Darren will have students compute a correlation matrix of a number of variables. Then he will have them identify correlations that are meaningless (purposely including some categorical variables that are nominal), identify correlations in which there is numeric linear dependency (like quiz1 with total points, or final with total points—total points is the sum of the quizzes and the final), and identify correlations that may have conceptual linear dependency (such as anxiety and tension, or outgoing and extroverted). To print out the matrix is no trouble for the students, but to identify the nature of the correlations often produces some productive anxiety and some constructive thinking.

For an extreme example of how linear dependencies can be a problem, Paul starts with the correlation between height in inches and height in centimeters. The correlation between these two variables can be computed, but it's just not interesting or meaningful.
Whenever you can do math to compute one variable from one or more other variables (even if it's adding up several quizzes to make a total quiz grade), you shouldn't include variables from both the left and the right side of the equation in the same statistical analysis.

Another element of assignments is for students to identify the five strongest positive correlations and the five strongest negative correlations and describe what they mean in appropriate language (that is, non-causal if causality cannot be assumed). These might include sentences such as:
Greater self-efficacy is significantly associated with more time spent helping (r = .37, p < .001).

There was a significant positive correlation between spirituality and life satisfaction (r = .17, p = .023).
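Paul's inches-versus-centimeters example can be shown in a few lines (the heights are invented): an exact linear transform of a variable correlates perfectly with it, so the coefficient is computable but uninformative:

```python
# Exact linear dependence: the correlation is perfect by construction.
# The heights are invented for illustration.
import numpy as np
from scipy import stats

height_in = np.array([60.0, 64.0, 67.0, 70.0, 73.0, 75.0])
height_cm = height_in * 2.54   # same measurement, different units

r, p = stats.pearsonr(height_in, height_cm)
print(f"r = {r:.4f}")          # r = 1.0000
```

The same logic covers quiz1 with total points: when one variable is arithmetic on the other, the correlation partly measures the arithmetic, not the phenomenon.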
We introduce the frightening foray into SPSS Statistics syntax to create a correlation matrix that is not symmetric. The inability of SPSS to compute asymmetric matrices (without using syntax) is a distressing failing. To demonstrate its utility: If you have computed a 10x10 matrix of correlations and then add two new variables, you may wish to compute correlations of those two with the other 10. The material presented on page 150 explains how to do this. Those who have used previous versions of SPSS Statistics have encountered these syntax command files many times.
Chapter 11: Independent Samples, Paired Samples, and One-Sample t Tests

t tests represent another mainstay of data analysis. We regard this as another well-written chapter, and few students who actually read it have difficulty. The differentiation between independent-samples, paired-samples, and one-sample t tests is presented simply and logically. Darren has found that additional words are rarely necessary in his classes to clarify further.

A topic suggested by the first sentence of the chapter recommends a discussion that might take place at this time if it has not already been presented: the distinction between populations and samples. In an earlier draft of this book we had written the first sentence wrong and a reviewer (who was a statistician) tidied us up quickly. The issue is NOT whether two sample groups differ significantly; the issue is, do the two populations from which the samples were drawn differ significantly. This is central to the idea of statistical inference. You could venture into the identification of populations (Greek letters), population values (the unknown and unknowable real means, standard deviations, etc.) and indicate that sample statistics (English letters) attempt to determine or estimate actual population parameters. This juncture is a good spot to discuss these topics.

This is also a useful point to illustrate the dynamic of within-group variation as compared to between-group variation. Draw some normal distributions on the board and show how manipulating the distance between means or an increase or decrease of variance will influence the likelihood of statistical significance between groups. You might even (as an introduction to Levene, described below) indicate how unequal variances affect statistical significance. We often present this material at a conceptual level, but when we get to one-way ANOVAs we have students actually calculate sums of squares to get a real feel for what the statistical procedure is accomplishing.
The introduction also briefly discusses tests of significance and one- and two-tailed tests.

The content of the Step-by-Step section is straightforward, and our students have acquired this material easily. The output, however, presents some concerns. One issue is the use of Levene's test for equality of variances. It is not a very hard concept: If the variances differ significantly (p < .05) then use statistics based on the unequal-variance estimates. If the variances do NOT differ significantly (p > .05), then use statistics based on the slightly more powerful equal-variance estimates. It sounds (and is) simple, but slower students still mix it up.

In assignments with t tests, Darren typically has students print out many of them (for instance, gender differences on all appropriate variables in a data set). Then he asks students to delete any analyses that do not show statistical significance, print out the ones that do show significant differences, circle the correct t value (based on the results of the Levene test), and then write up the results. The write-ups are simple sentences with the APA-correct statistical line. For instance:
Men (M = 103.45) scored significantly higher than women (M = 98.32) on the final exam, t(103) = 3.47, p = .03.
The helpers rated the quality of help significantly higher (M = 5.42) than did the help recipients (M = 4.96), t(535) = 4.56, p < .001.
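The Levene decision rule described above can be sketched in code. The scores are invented, and SciPy's functions stand in for the two rows of the SPSS output:

```python
# Levene's test decides which t test row to use (invented scores).
from scipy import stats

men   = [103, 99, 107, 101, 105, 110, 98, 104]
women = [97, 101, 95, 99, 100, 96, 102, 94]

lev_stat, lev_p = stats.levene(men, women)
equal_var = lev_p > .05   # variances NOT significantly different

# equal_var=True is the slightly more powerful equal-variance estimate;
# equal_var=False is the Welch (unequal-variance) version.
t, p = stats.ttest_ind(men, women, equal_var=equal_var)
label = "equal-variance" if equal_var else "unequal-variance"
print(f"Levene p = {lev_p:.3f}; using the {label} t test: "
      f"t = {t:.2f}, p = {p:.4f}")
```

Making students write the `equal_var` decision explicitly is one way to keep them from mixing up which row of the output to read.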
Our students do fairly well at writing the statistical part of the output but have great difficulty writing a coherent English sentence. This makes grading the assignments quite a frustrating chore. Don't worry, though: it will get far worse in the ANOVA chapters. An additional feature we sometimes include in homework is to have students highlight the dependent variable of interest, circle the two mean values, and circle the correct significance value.

On paired-samples t tests, correlations between the two levels of the variable are included. Since the topic of correlations has just been covered (previous chapter), students now understand the procedure, and we occasionally have them write up the meaning of the correlations as well.
One challenge with reporting t test results is that SPSS Statistics does not automatically compute effect sizes. For some, this may be an opportunity to have students calculate d effect sizes from the data provided by the t test procedures. For others, it may be worth the wait for Univariate ANOVA, where SPSS Statistics can helpfully compute η². In general we have found t tests to be a straightforward and satisfying topic to present. The transition to ANOVAs, however, is another story.
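Returning to the effect-size calculation mentioned above: Cohen's d via the pooled standard deviation needs nothing beyond the means, SDs, and ns that the t test output does report. A sketch (the sample statistics are invented; the means echo the earlier example sentence):

```python
# Cohen's d from summary statistics, using the pooled SD.
# All numbers are invented for illustration.
import math

m1, sd1, n1 = 103.45, 12.1, 52   # group 1 (e.g., men)
m2, sd2, n2 = 98.32, 11.4, 53    # group 2 (e.g., women)

sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                      / (n1 + n2 - 2))
d = (m1 - m2) / sd_pooled
print(f"Cohen's d = {d:.2f}")
```

Having students do this by hand once makes the later η² output much less mysterious.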
Chapter 12: The One-Way ANOVA Procedure

This chapter begins its polemic with a comparison of t tests (exactly two levels) with one-way ANOVA (two or more levels of the independent variable). We clarify that one-way ANOVA includes exactly one continuous dependent variable and exactly one categorical independent variable. Here, then, is where description of ANOVAs becomes difficult. It is fine to say "if you conduct an ANOVA, all pairwise comparisons are considered and a test statistic results that identifies if there are ANY significant differences within pairwise comparisons. If there are, then you do a post hoc test to identify where significant differences occur." It is a fine, noble sentence and usually correct; however, you get caught with egg on your face when a student computes a one-way ANOVA that shows a p value of .11 and yet ends up with significant differences in pairwise comparisons. The same thing happens with MANOVA. A MANOVA statistic can indicate no significant effects but then one-way ANOVAs indicate that there are significant differences. Or vice versa. We know why. It makes for interesting discussion.

We can talk all we want about experimentwise error and simultaneous effects. The simple reality is that there are instances when a significant overall ANOVA does not result and yet there are unquestionably significant pairwise differences. For instance, take a categorical variable with five levels. Four of the levels may be very close (yielding small between-group variation) and the fifth might be quite distant (but its effect is not enough to counteract the small between-group variation generated by the other four). Thus, you may find small between-group variation, large within-group variation, and an overall nonsignificant result. The distant group legitimately differs from the others (experimentwise error notwithstanding) but the overall ANOVA result says otherwise. All this makes ANOVA a difficult topic to explain.
Regression is much more intuitive and easier for us to present. To help students understand the derivation of the F test, we typically have them compute sums of squares to demonstrate how degrees of freedom, sample size, within-group variation, and between-group variation all influence the overall ANOVA statistic. If explained well, it is not too difficult, and the students (astonishingly) find the exercise enlightening. Here is the little example Darren uses:
For the following three distributions, compute an F value; the solution follows:

Group A: 2 3 4 4 5 5 6 6 7 8
Group B: 5 6 7 7 8 8 9 9 10 11
Group C: 8 9 10 10 11 11 12 12 13 14
Solution:

1. Means for the three groups are 5, 8, and 11, respectively.
2. Within-group sums of squares for the three groups are 30, 30, and 30, yielding a total within-groups sum of squares of 90. The within-groups degrees of freedom is 30 – 3 = 27. The within-groups mean square is 90/27, or 3.3333.
3. The between-groups sums of squares for the three groups are 90, 0, and 90, yielding a total between-groups sum of squares of 180. Degrees of freedom are 2, the number of groups minus 1. The between-groups mean square is 180/2 = 90.
4. The final F value is the between-groups mean square divided by the within-groups mean square: 90/3.3333 = 27.000, p < .001.
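Darren's example can be cross-checked in code; the sketch below recomputes the sums of squares by hand and compares the resulting F with SciPy's one-way ANOVA:

```python
# Hand-computed sums of squares and F for the three groups above,
# cross-checked against scipy's one-way ANOVA.
import numpy as np
from scipy import stats

a = np.array([2, 3, 4, 4, 5, 5, 6, 6, 7, 8])
b = a + 3   # Group B: 5 .. 11
c = a + 6   # Group C: 8 .. 14

grand = np.concatenate([a, b, c]).mean()   # 8.0

ss_within = sum(((g - g.mean()) ** 2).sum() for g in (a, b, c))        # 90
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in (a, b, c))  # 180

ms_within = ss_within / (30 - 3)     # 90 / 27
ms_between = ss_between / (3 - 1)    # 180 / 2
f_by_hand = ms_between / ms_within   # 27.0

f_scipy, p = stats.f_oneway(a, b, c)
print(f"F by hand = {f_by_hand:.3f}, scipy F = {f_scipy:.3f}, p = {p:.6f}")
```

Because Groups B and C are just Group A shifted by constants, the within-group sums of squares are identical, which is what makes the arithmetic so clean for a classroom exercise.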
We do not spend a lot of time discussing the qualities of different post hoc tests. We goggle briefly at the 22 (!!) different tests that are available. Darren always (in an introductory class, anyway) goes with the most liberal one, the LSD (least significant difference) test. We talk in the book a little bit about comparisons of the more popular tests (Scheffé, Bonferroni, Tukey) but we spend little time with them in class. In later chapters of the book these tests are discussed in greater detail.
Contrasts are something Darren always covers with his students. In several of their exercises they are asked to compute particular contrasts; the explanation in the book does an adequate job of explaining how contrasts work. You will need to go through some of these on the board, step by step, with students, but once understanding is acquired, most are able to write up contrasts quite adequately.

In the Output section of the book, we feel there is a nice blend of SPSS Statistics output, commentary, and definitions to give a clear idea about the meaning of the results for one-way ANOVA. To assist students in writing ANOVA results, we typically give them a generic format chart and suggest that they follow it precisely. Usually students have great difficulty producing a coherent description of ANOVA results. Thus, we suggest that they follow the format strictly; after they've spent a few years (says Darren) or weeks (says Paul) writing up results, they will begin to streamline the rather boxy format suggested below. These write-ups are very difficult to grade unless you specify the strict format. The structured format Darren uses follows:
A one-way Analysis of Variance indicated a significant influence of [INDEPENDENT VARIABLE] on [DEPENDENT VARIABLE], F(x, xxx) = _.__, p < .xxx. Post hoc analyses using the [POST HOC METHOD] method with an alpha value of [ALPHA VALUE] found that [LEVEL A] (M = ) was significantly higher than [LEVEL B] (M = ) and that [repeat for as many pairwise differences as there are].
Chapter 13: 2-Way Analysis of Variance

The following chapter (Chapter 14) is so complex (including 3-way ANOVA, covariates, and graphing interactions) that Chapter 13 was purposely kept simple. Within the context of the three ANOVA chapters, in Chapter 13 we extend beyond the simplicity of pairwise comparisons to ask the experimental questions so characteristic of analysis of variance:
Is there a main effect for the first independent variable?
Is there a main effect for the second independent variable?
Is there an interactive effect of the two independent variables on the dependent variable?
While ANOVA has its complexities, it also has certain simplicities: For 2-way ANOVA there is exactly one continuous dependent variable and exactly two categorical independent variables. These concepts seem to be learned easily by most students. Then we extend to ANOVA options (see Screen 13.2). In the classes we teach, we go exclusively with the Hierarchical method; if covariates are included, we have their influence entered before the effects of the independent variables, and we always include the means and counts so we have a visual listing of the mean values in each category.

The example we use in Chapter 13 is the influence of gender (2 levels) and section (3 levels) on total points earned in the class. This provides a simple and straightforward example of a two-way ANOVA. Despite the fact that there are no significant main effects or interactions, the example seems to clarify the process for our students. One thing that is emphasized in this chapter is interpretation of output. We explain that for a main effect with two levels (like gender) there is no reason to conduct post hoc analyses. It is clear from the cell means which group is higher than which. If, however, there is a main effect with a variable that has three or more levels, just as with a one-way ANOVA, we may not be certain where the differences lie. When significant main effects occur, Darren has students systematically conduct a one-way ANOVA with post hoc tests to identify pairwise differences. Once again, this method is a bit rigid and mechanical, but many introductory students need that structure.

The most difficult part for students is writing up findings. Since the interaction in the Chapter 13 example is not significant, we wait until Chapter 14 before we attempt to write results that include an interaction. But students find even writing up ANOVA results with only main effects challenging.
Darren provides a strict structure for write-ups that applies to 2-way or higher-order ANOVAs and includes how to write up results if a covariate is included. It is similar to the structure for one-way ANOVA (parts, in fact, are duplicates). It helps students write results and helps in grading homework and papers. While we present and describe the graph presented in the text, we wait until Chapter 14 before students create graphs of interactions and then attempt to describe the results. Here, then, is the form that we provide students to assist them in writing the results from 2-way and 3-way ANOVAs:
How to Write up 2- and 3-way ANOVA Results
A [TYPE OF ANALYSIS] was conducted to determine the influence of [INDEPENDENT VARIABLE] and [INDEPENDENT VARIABLE] and [INDEPENDENT VARIABLE] on [DEPENDENT VARIABLE]. [If there is a covariate] To control for the influence of [NAME OF COVARIATE] it was included as a covariate. [NAME OF COVARIATE] accounted for a significant amount of the overall variance, F(x, xxx) = _.__, p < .xxx. [repeat for as many covariates as there are that accounted for a significant portion of the variance]
There was a main effect found for [INDEPENDENT VARIABLE], F(x, xxx) = _.__, p < .xxx. Post hoc analyses using the [POST HOC METHOD] method with an alpha value of [ALPHA VALUE] found that [LEVEL A] (M = ) was significantly higher than [LEVEL B] (M = ) and that [repeat for as many pairwise differences as there are]. [repeat this entire section for as many significant main effects as there are] [note that if there are only two levels of a variable then post hoc comparisons do not need to be conducted]

There was [also] a significant [INDEPENDENT VARIABLE] by [INDEPENDENT VARIABLE] interaction, F(x, xxx) = _.__, p < .xxx. [then describe the interaction based on the graph in general terms]
Chapter 14: 3-Way Analysis of Variance

Three-way ANOVAs are complex. Armed with that knowledge, we have spent as much time writing Chapter 14 as any chapter in the book. We inch through the material slowly, painstakingly, and attempt to make each prior level clear before we move on to the next level. It also has the longest Output section in the book. You can't speed-read the chapter, but anyone with reasonable math aptitude is able to read and understand it. A number of special concerns attend this chapter. What they are and our responses to them follow.

First, the example is simple and intuitive: the influence of gender, section, and lowup (lower- or upper-division student) on total points earned. The three independent variables have face validity even to the beginner because it seems essentially reasonable that gender, section, or class standing MAY have an influence on how many points one earns in a class. As such, the example has proved to be a good vehicle to explain three-way ANOVA.

Make clear the seven experimental questions: With three independent variables there will be three possible main effects (main effects for gender, section, and lowup), three possible 2-way interaction effects (gender by section, gender by lowup, and section by lowup), and one possible 3-way interaction (gender by section by lowup). In the often vague and fuzzy world of ANOVA, it really helps students to see some invariant constants in the process.

The influence of covariates is presented for the first time in this chapter. The example gets the basic idea across effectively. Of course, entire books have been written on analysis of covariance and we are sure such authors would find our brief explanation inadequate. How much time you spend on this topic depends on your class objectives. In our classes we'll spend a half hour on presentation and provide several examples of the use of covariates in different settings.
Also, assignments will include one or two ANOVAs that include one or more covariates. How to write up the results of covariates is provided in the model presented on page 31 of this manual.

In the Output section we exert thoughtful effort to provide a more lucid picture of the statistical influence of a covariate. An entire ANOVA output is provided with all statistics included for the analysis with the covariate (in bold face, to the left) and the identical analysis without the covariate (in italics, nonbold, in parentheses, and to the right). We touch on the fact that when a covariate is included (if it has a significant influence), its effect on other variables in the output is typically to decrease the F values and increase the corresponding p values. We provide examples where this happens in the table on page 192 and also show one case where the F actually increases (and p diminishes) when the covariate is included. Once again, the time you spend here depends on your class objectives.

The clicks on the computer to conduct an ANOVA are the least of anyone's worries. It is quick, easy (perhaps too easy), and in moments students can have pages of bewildering output. Just follow Step 5 (with your own variables of interest) to produce output. Performing these steps always prompts a brief spasm of necessary rhetoric (by the instructor) about the necessity of sound design and careful selection of variables before conducting analysis of variance.

Chapter 14 is where we have students graph their ANOVA interaction results. The process of producing the graphs is quite intuitive and easy to follow. Instruction on what the graphs mean is up to you. Writing up interactions is a challenge for just about anyone. Darren tells his students that he has written results from many interactions and still almost never gets it quite right the first time. Often several revisions are necessary before a correct and aesthetically pleasing write-up emerges.
We do not expect intro students to ever reach the aesthetically pleasing stage; clarity is the goal. A sample write-up for the interaction shown on page 189 might run as follows. Please note that this is not a statistically significant finding, but for practice we write it as if it were: While there was little difference in scores earned by men and women in the first two sections, in the third section women scored substantially higher than men.
34 IBM SPSS Statistics 23 Step by Step Instructor’s Manual
There are many other ways to say the same thing. It is the rare student indeed who, after seeing some examples, can write up an interaction for a new problem correctly and coherently. We also show what a graph that demonstrates an interaction looks like. We work with the parallel-lines concept: If lines are parallel or near parallel, interactions are unlikely. The further the lines deviate from parallel, the greater the probability that there is a significant interaction. The standard caveats are inserted here: Graphs can be manipulated by extending or constricting the vertical or horizontal axes, or by using only a narrow range of your values, to make it appear that there is an interaction when there may be none. Therefore, whether significant interactions occur depends entirely on the ANOVA output, regardless of what the graph looks like. Graphs are designed to assist in understanding interactions, not to identify whether one exists. For three-way interactions, we pretty much throw in the towel for an introductory group. We cleverly arranged for the 3-way interaction in the book example to be non-significant. We explain that three-way interactions are difficult to understand, difficult to explain, and that it requires a thorough understanding of the data to make sense of one. It takes a course that focuses almost exclusively on ANOVA before much time will be spent untangling the meaning of three-way interactions. Paul assigns a homework problem on interpreting a three-way interaction, goes through it in class so students have had the experience, and gives them homework credit for trying. We have students write results in correct APA format. Before Darren created the structured format (the model was presented three pages earlier), it was essentially impossible to grade student papers. Once they had the model, many were able to describe ANOVA results understandably.
An additional issue, uncomfortable for many, is a discussion of degrees of freedom. We, like many others, have difficulty explaining the concept. In some frustration Darren once took all the statistics books that he owned and typed out the glossary definition of degrees of freedom from each of them. There was surprisingly little consensus among the resulting definitions. On the bright side, we can at least see that they didn’t copy their definitions from each other. We typically explain that degrees of freedom is a concept related to the sample size and the number of statistical questions that are asked in an analysis. The higher the degrees of freedom, the greater the statistical power. The more questions you ask of the data, the lower your degrees of freedom become. Thus, in a study that involves many analyses, a large sample is required to provide sufficient degrees of freedom to produce valid results. We personally find that comparing a saturated model in structural equation modeling with models that are less than saturated is the best way for us to visualize the concept of the “number of parameters that are free to vary,” but this is not helpful for introductory students. If you have found a good way to present the topic, we would welcome an email from you with a description.
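One concrete way to make the bookkeeping tangible is to show that, in a factorial design, the degrees of freedom consumed by every effect plus the error term must sum to N - 1. A sketch with made-up cell counts (a 2 x 3 x 2 design with 10 subjects per cell is our assumption, not the book’s data):

```python
# Hypothetical 2 (gender) x 3 (section) x 2 (lowup) design, n subjects per cell
a, b, c, n = 2, 3, 2, 10
N = a * b * c * n

# Each "question asked of the data" consumes degrees of freedom
df_main = (a - 1) + (b - 1) + (c - 1)
df_twoway = (a - 1)*(b - 1) + (a - 1)*(c - 1) + (b - 1)*(c - 1)
df_threeway = (a - 1) * (b - 1) * (c - 1)
df_error = a * b * c * (n - 1)       # what is left over for the error term

# The effects and the error term partition the total degrees of freedom, N - 1
assert df_main + df_twoway + df_threeway + df_error == N - 1
```

Students can vary n and see directly that a small sample leaves very little for the error term once the seven questions have taken their share.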
Chapter 15: Simple Linear Regression and Curvilinear Regression

In the very earliest draft of this book, this chapter didn’t exist. We simply launched (eyes tightly closed, limbs splayed) directly into multiple regression with a happy cry. Statisticians who reviewed the book said it was too much: “You must start with something simpler.” So we have. In retrospect, we entirely agree with our critics. Chapter 15 has provided a comfortable transition to the much more complex topic of multiple regression analysis. We begin our presentation of the material by tying linear regression into the discussion of correlation; indeed, simple linear regression is a correlation, that is, multiple R = r. We continue with a discussion of phenomena that covary and provide a number of real-world examples. We then draw the connection between the inferences that people make hundreds of times daily and the statistical concept of predicted values. Within this discussion the concept of true versus predicted values emerges, and a residual or error term is discussed. We introduce the concept of residuals only briefly, then drop it (as the plot thickens) and refer readers to the chapter on residuals, Chapter 28, if they wish more information. The data set used to demonstrate linear regression is a fictional file that shows the relationship between pretest anxiety and actual exam performance. We created the file to provide both a linear relationship (more anxiety yields better exam scores) and a curvilinear relationship (at the high end of the anxiety scale, performance begins to drop off). This file provides clear visual evidence (from viewing a scattergram) that both linear and curvilinear relationships exist between anxiety and exam scores. Pages 201 and 202 provide these visual displays. The topic of predicted values remains the main theme throughout the fairly lengthy (for this book, anyway) introduction.
We create a linear regression equation, then try actual values from subjects in the data set to demonstrate how well (or poorly) the regression equation is able to predict scores. The equation doesn’t do very well, and this inadequacy leads into the discussion of curvilinear relationships. Before a discussion of curvilinear trends, however, four key points must be secured:
1. The regression command generates a Multiple R value (equal to r when there are only two variables) that measures the strength of relationship between an independent (or predictor) variable and a designated dependent (or criterion) variable.
2. Along with the Multiple R, a significance value is produced that identifies the likelihood that the observed value occurred by chance.
3. R² is produced, a statistic that identifies the percent of variation in one variable accounted for by another. For instance, in the linear equation, it is found that 23.8% of the variance in exam scores is accounted for by pretest anxiety. We also note that this statement makes no causal claim.
4. The regression output computes the constant and coefficient necessary to create the regression equa-tion.
The discussion of curvilinear trends that follows is sufficiently clear that our students typically find it interesting rather than daunting. Even on the computers, they attack with relish the idea of testing for curvilinear trends using the procedure illustrated in steps 4a and 5a. We then create the regression equation that includes the curvilinear component, substitute anxiety scores from the same three students in our data set, and discover that the quadratic equation predicts scores better than the linear equation did. For assignments, students complete several linear regression problems. Then we check the same variables for curvilinear trends and, if there is significant curvilinearity, create an additional analysis using both the linear and the curvilinear components. The step-by-step instructions for completing this task are included in Step 5b. Next, students create the regression equations and try out values from three or four subjects to see how well the equation works. The process is very similar to the mode of presentation in the chapter, and students typically find this an easier-than-average topic to negotiate.
The divorce.sav file (see the Exercises section) provides some good examples (using real data) of variables (social support, closeness, income, attributional style, locus) that exert both a linear and a curvilinear influence on the dependent variable, life satisfaction.
Chapter 16: Multiple Regression Analysis

With thorough attention to the material in Chapter 15, the introduction of multiple regression analysis is greatly facilitated. We begin, once again, with the concept of a regression equation to determine predicted values. We, for the second chapter in a row, shift away from the grades.sav file and present the first of the three helping files, helping1.sav. This was actually a pilot study (N = 81) for several much larger studies that have been conducted since then. For instance, the helping3.sav file is a data set drawn from more than 1000 subjects (the N of 537 represents the number of helper-recipient pairs) and has resulted in publication. This file is used in several of the more complex analyses in later chapters. An intuitive, simple regression equation is introduced in which the amount of helping is predicted by the amount of sympathy the helper feels toward a needy friend. Sympathy is substantially correlated with helping and accounts for about 20% of the variance. We then suggest that factors other than sympathy may influence how much help is given. Within the example provided, we suggest that anger (anger felt by the helper toward his or her friend) and self-efficacy (the helper’s belief that he or she has the ability to render effective help) also contribute to the helping process. We then demonstrate the regression equation and substitute some numbers from actual subjects to illustrate how well the equations predict the amount of helping. In the classroom setting we usually present one or two additional real-world examples to underline that most phenomena have multiple causes. A typical illustration is predictors of body weight. Students quickly come up with a string of factors that might influence how much a person weighs: daily caloric intake, height, bone structure, amount of daily exercise, number of fat grams consumed, metabolism.
Students seem to grasp the fundamental structure of regression much more easily than they do the fundamentals of ANOVA. Back to the book example: The regression equation is created and tested to demonstrate its usefulness in predicting values. During this process an unexpected finding is revealed: anger correlates positively with the amount of time spent helping. Since one would expect that more anger would result in less help, this provides a good opportunity to identify what an analysis can and cannot do. It can compute valid findings based on certain known parameters. It cannot explain why we get the results we do. A discussion of the amount of variance explained parallels the presentation in the prior chapter, but this time within the context of multiple regression. We also explain the nature of partial correlation and describe how, in a stepwise or forward type of analysis, variables are entered sequentially into the regression equation based on the amount of total variance they explain. When there are no more variables that explain a significant amount of additional variance, the regression process stops and results are printed. This may entail some careful explanation before students fully grasp it. There are many classes that spend an entire semester on regression alone. Within a few introductory pages, therefore, we cannot begin to cover all relevant points. The section titled “Curvilinear Trends, Model Building, and References” raises issues that are well worthy of substantial attention if one wishes students to use the regression procedure successfully. These include:
Thoughtfully crafted and carefully designed research. This, of course, is a theme that has been reinforced throughout the book.
Sample size. It isn’t so much the number of subjects as the number of subjects compared to the number of analyses you are conducting. We don’t know any reliably useful rule-of-thumb ratios or numbers, but a small sample with many variables would yield an analysis with so few degrees of freedom that significant results would be almost impossible to achieve.
Examination of data for abnormalities or outliers is worthy of discussion. Use some examples from your own research to illustrate that sometimes forms have to be discarded, or provide examples of outliers in your own data.
Normal distribution of predictor and criterion variables is worth noting (since normality is the foundation for most forms of data analysis) but need not be overemphasized. The regression equation
works quite well with an occasional categorical (ordinal, of course) variable or even a dichotomous variable such as gender. If you attempt to build a case that the dependent (or criterion) variable should be normally distributed, you will get shot down by anyone who has heard of logistic regression or discriminant analysis; both employ a dichotomous variable as the dependent variable. In general, mathematicians have found that the regression equation often works quite well even when assumptions of normality are violated.
The issue of linear dependency is no trivial afterthought. We spend a good deal of time addressing this issue in our classes. The first concern is to eliminate predictor variables that are numerically dependent (such as quiz scores and total points, or a measure of apprehensiveness that is a mathematical composite of other variables in the data set, such as tenseness, suspiciousness, and anxiety). Next is to look carefully at variables that may be conceptually similar, such as anxiety and tension. An initial correlation matrix can often help avoid potential difficulties by revealing variables in the matrix that are highly correlated with each other. An example from the divorce.sav file: The attributional style questionnaire (ASQ, optimistic or pessimistic attributional style) and locus of control were included in initial analyses. After it was observed that ASQ and locus were highly correlated, we discovered that the ASQ incorporates a locus component into its measure. Locus was subsequently dropped from the analysis.
The Step-by-Step section is a bit more involved than the equivalent section of Chapter 15. Certain ideas should be addressed. For instance, the different methods of entering variables into the regression equation should be considered. The forward and stepwise methods are most frequently used, and there is not time or space in an introductory class to try to compare several of them; students, however, should be aware that different options exist. The Plots option deals largely with plots of residuals and is not covered beyond definitions except in more advanced classes. The default statistics are satisfactory for introductory students. The Save option provides some interesting comparisons. If, for instance, you save the unstandardized predicted values, you can place them in the column next to the actual values of the dependent variable and get a firsthand look at how well the regression equation does at predicting the criterion variable. Moving now to the Output section, an additional area of concern is distinguishing between B values and beta values. Explain that the B values are the correct values to use in creating the regression equation. However, these values are not comparable with each other because the metrics on which they are based might vary widely. The beta values, on the other hand, are based on standardized scores and thus can be compared with each other accurately: Betas can be compared directly to identify the magnitude of influence of a particular predictor variable on the criterion variable. Also, the sign (+ or -) of a beta value has meaning: A positive sign indicates that more of the predictor variable results in more of the criterion variable; a negative beta indicates that more of the predictor results in less of the criterion variable. It is helpful to use a number of examples to illustrate this.
For assignments, we typically provide several data sets (available at www.pearsonhighered.com/george) that are particularly conducive to regression analysis. There are a number of different regression outputs in the Exercises section of this manual. We then have students run the procedures and, for written output, create a chart that lists the Multiple R and the R², records the variables that significantly predict the criterion variable (listed in order of magnitude of the beta values, with the beta values in parentheses for each variable), and perhaps write up the findings in narrative format. Some defined terms in the last two or three pages are also of value. Particularly useful are the meanings of beta, partial correlation, minimum tolerance (another tool to help avoid linear dependency in your analyses), and R-square change.
Section III: Exercises

This section provides a wealth of resources for the instructor attempting to find meaningful exercises for his or her students. With only minimal knowledge of a particular data set, it might take hours of trying different analyses before you come up with results that illustrate the points or hone the skills that you desire. This section of the Instructor’s Manual focuses on the answers to the exercises included in the text. In some cases, additional exercises are provided for a chapter—these are exercises that, using the same datasets, will provide something interesting to look at. The exercises that follow are based primarily on data included in three separate data files, all available at the course website, http://www.pearsonhighered.com/george. First, we describe the three data files most commonly used. Then, for each chapter in the text that has exercises, we provide a fairly detailed answer key. Finally, for some chapters additional exercises are provided (with minimal or no answers). Also available from the course website is a document with selected answers to exercises for students to download. Each answer key in the instructor’s manual is labeled:
“Full answer provided for students” (in which case, they can see the same thing you can see in the instructor’s manual),
“Minimal answer provided for students” (in which case, they can see only a very brief portion of the answer, enough to see if they are on the right track), or
“No answer provided for students.”
The GRADES.SAV Data File

This file is described fairly thoroughly in the textbook; a summary is presented here for reference. The data file contains the raw data for calculating the grades in a particular class. The example consists of a single file, used by a teacher who teaches three sections of a class with approximately 35 students in each section. From left to right, the variables used in the data file are:
ID: Six-digit student ID number
LASTNAME: Last name of the student
FIRSTNAME: First name of the student
GENDER: Gender of the student: 1=female, 2=male. Note that in the book, this variable is described as an ordinal variable; in the data files available for download, however, it is defined as a nominal variable. In the past, we defined gender as ordinal (there was a time when you couldn’t do some things with nominal variables that now you can); the reality is that SPSS rarely makes a distinction between ordinal and nominal variables, but it is clearly less confusing for students if gender is described as a nominal variable. In the next version of the text, we plan to describe gender as a nominal variable to match the current data file.
ETHNIC: Ethnicity of the student: 1=Native (Native American or Inuit), 2=Asian (or Asian American), 3=Black, 4=White (non-Hispanic), 5=Hispanic
YEAR: Year in school: 1=Frosh (1st year), 2=Soph (2nd year), 3=Junior (3rd year), 4=Senior (4th year)
LOWUP: Lower or upper division student: 1=Lower (1st or 2nd year), 2=Upper (3rd or 4th year)
SECTION: Section of the class (1 through 3)
GPA: Cumulative GPA at the beginning of the course
EXTCR: Whether or not the student did the extra credit project: 1=No, 2=Yes
REVIEW: Whether or not the student attended the review sessions: 1=No, 2=Yes
QUIZ1 to QUIZ5: Scores out of 10 points on five quizzes throughout the term
FINAL: Final exam worth 75 points
TOTAL: Sum of the five quizzes and the final
PERCENT: The percent of possible points in the class
GRADE: The grade received in the class (A, B, C, D, or F)
PASSFAIL: Whether or not the student passed the course (P or F)
The DIVORCE.SAV Data File

divorce.sav: This is a file of 229 divorced individuals recruited from communities in Central Alberta. The objective of the researchers was to identify cognitive or interpersonal factors that assisted in recovery from divorce. Students using the student version of SPSS should use the file divorce-studentversion.sav. Key variables employed in the study include:

Dependent variables
lsatisfy: A measure of life satisfaction based on weighted averages of satisfaction in 12 different areas of life functioning. This is scored on a 1 (low satisfaction) to 7 (high satisfaction) scale.
trauma: A measure of the trauma experienced during the divorce recovery phase based on the mean of 16 different potentially traumatic events, scored on a 1 (low trauma) to 7 (high trauma) scale.
Demographics
sex: [women(1), men(2)]. Note that this variable is now defined as a nominal variable in the data file.
age: ranges from 23 to 76
sep: years separated, accurate to one decimal
mar: years married prior to separation, accurate to one decimal
status: present marital status [married(1), separated(2), divorced(3), cohabiting(4)]
ethnic: ethnicity [White(1), Hispanic(2), Black(3), Asian(4), other or decline to state (DTS; 5)]
school: [1-11yr(1), 12yr(2), 13yr(3), 14yr(4), 15yr(5), 16yr(6), 17yr(7), 18yr(8), 19+(9)]
childneg: number of children negotiated in divorce proceedings
childcst: number of children presently in custody
income: [DTS(0), <10,000(1), 10-20(2), 20-30(3), 30-40(4), 40-50(5), 50+(6)]
Key independents
cogcope: amount of cognitive coping during recovery [little(1) to much(7)]
behcope: amount of behavioral coping during recovery [little(1) to much(7)]
avoicope: amount of avoidant coping during recovery [little(1) to much(7)]
iq: intelligence or ability at abstract thinking [low(1) to high(12)]
close: amount of physical (non-sexual) closeness experienced [little(1) to much(7)]
locus: locus of control [external locus(1) to internal locus(10)]
asq: attributional style questionnaire [pessimistic style(-7) to optimistic style(+9)]
socsupp: amount of social support experienced [little(1) to much(7)]
spiritua: level of personal spirituality [low(1) to high(7)]
The HELPING3.SAV Data File

helping3.sav: This is a file that can demonstrate virtually any statistical procedure. In the original use of these data, structural equation modeling was the major form of analysis, but every other form of analysis preceded it. It is a study of helping among friends. The N of 537 represents the number of helpers (drawn from a community sample in the Los Angeles area); 467 help recipients also responded. A procedure described in the missing-values section of Chapter 4 dealt with replacing missing values with predicted values from regression equations. This procedure was applied to the 70 helper forms that did not have equivalent recipient forms. This represented only 2.9% of total data replaced (the recipient forms were much less extensive than the helper forms), well within the limits for psychometric validity. It raised some eyebrows among reviewers, but after explanation, they were satisfied. The goal of the study was to create a theoretical model of helping among friends. Specifically, we were seeking to find out what factors significantly influenced three different dependent help variables. Students using the student version of SPSS should use the file helping3-studentversion.sav. Key variables and demographics used in the study are listed below.

Dependent variables:
thelplnz: Time spent helping: [the initial “t” (total) indicates that it is a combination of helper and recipient responses, the latter “lnz” indicates a natural log and z-score transformation]. The z score gives it a mean of 0 and a standard deviation of 1.0 and a range of approximately –3 to +3.
tqualitz: Help quality: [the initial “t” (total) indicates that it is a combination of helper and recipient responses, the latter z indicates a z-score transformation]. The z score gives it a mean of 0, a standard deviation of 1.0, and a range of approximately –3 to +3.
tothelp: Total help. The time helping and help quality measures weighted equally. Mean is -.01, standard deviation about .8, and a range of approximately –3 to +3.
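For instructors who do want to unpack the z-score part of these transformations, a minimal sketch (the raw scores are made up, not the helping data):

```python
from statistics import mean, stdev

# Hypothetical raw "time spent helping" scores
raw = [2.0, 5.5, 1.0, 7.0, 3.5, 4.0, 6.0, 2.5]

m, s = mean(raw), stdev(raw)
z = [(x - m) / s for x in raw]   # z-transformation: mean 0, SD 1
```

The transformed scores keep their rank order and relative spacing; only the scale changes, which is why the z-scored variables above can be averaged and compared.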
Please don’t confuse your students with the transformational garbage. Just “time helping,” “help quality,” and the “combination” should be sufficient. This is a good opportunity to talk about linear dependency: Time helping and help quality are NOT linearly dependent, and any form of analysis may be used comparing them. Total help, on the other hand, is a mathematical composite of the other two; they are absolutely linearly dependent.

Subcategories of the Dependent Variables
cathelp: dichotomous variable [less than average(1), more than average(2)]
empahelp: amount of time (in hours) spent giving empathic help
insthelp: amount of time (in hours) spent giving instrumental help (doing things)
infhelp: amount of time (in hours) spent giving informational help
Demographics and other categorical variables:
gender: [women(1), men(2)]. Note that this variable is now defined as a nominal variable in the data file.
age: range 17-89, mean 31.3
occupat: [professional(1), service(2), blue-collar(3), unemployed(4), student(5), DTS(6)]
marital: marital status [married(1), single(2), DTS(3)]
school: [1-8yr(1), 9-11yr(2), 12yr(3), 13-14yr(4), 15-16yr(5), 17-18yr(6), 19+yr(7)]
ethnicity: ethnicity [Caucasian(1), Black(2), Hispanic(3), Asian(4), other or DTS(5)]
children: number of children living in the household
problem: problem type [goal disruptive(1), relational(2), illness(3), catastrophic(4)]
income: [<15,000(1), 15-25,000(2), 25-50,000(3), 50,000+(4), DTS(5)]
This is a good opportunity to use the Select Cases procedure to eliminate the DTS crowd for some of your chi-square or ANOVA analyses. However, sometimes the DTS group is interesting in its own right. There are predictable characteristics of individuals who endorse DTS frequently: they tend to be more cynical, more suspicious, and less spiritual.

Key predictor variables (all variables are the helper’s rating of each construct)
hclose: closeness of the relationship [distant(1) to close(7)]
hseveret: severity of the problem [mild(1) to severe(7)]
angert: anger felt toward the friend [little(1) to much(7)]
controt: controllability; fault, responsibility [not at fault(1) to entirely at fault(7)]
sympathi: sympathy felt toward the friend [little(1) to much(7)]
worry: worry experienced about the friend’s problem [little(1) to much(7)]
obligat: feelings of obligation toward the friend [little(1) to much(7)]
hcopet: perception of how well the friend is coping [coping poorly(1) to well(7)]
effict: helper’s belief of self-efficacy (has the ability to help) [low(1) to high(7)]
empathyt: helper’s empathic tendency [low empathy(1) to high empathy(7)]
There are more variables, of course; they may be understood and used by viewing the variable labels in the data file.
Chapter 3: Creating and Editing a Data File
1. Set up the variables described above for the grades.sav file, using appropriate variable names, vari-able labels, and variable values. Enter the data for the first 20 students into the data file.
2. Perhaps the instructor of the classes in the grades.sav dataset teaches these classes at two different schools. Create a new variable in this dataset named school, with values of 1 and 2. Create value labels, where 1 is the name of a school you like, and 2 is the name of a school you don’t like. Save your dataset with the name gradesme.sav.
3. Which of the following variable names will SPSS accept, and which will SPSS reject? For those that SPSS will reject, how could you change the variable name to make it “legal”? age, firstname, @edu, sex., grade, not, anxeceu, date, iq
4. Using the grades.sav file, make the gpa variable values (which currently have two digits after the decimal point) display no digits after the decimal point. You should be able to do this without retyping any numbers. Note that this won’t actually round the numbers, but it will change the way they are displayed and how many digits are displayed after the decimal point for statistical analyses you perform on the numbers.
5. Using grades.sav, search for a student with 121 total points. What is his or her name?
6. Why is each of the following variables defined with the measure listed? Is it possible for any of these variables to be defined as a different type of measure?
ethnicity: Nominal
extrcred: Ordinal
quiz4: Scale
grade: Nominal
7. Ten people were given a test of balance while standing on level ground, and ten other people were given a test of balance while standing on a 30° slope. Their scores follow. Set up the appropriate variables, and enter the data into SPSS.
Scores of people standing on level ground: 56, 50, 41, 65, 47, 50, 64, 48, 47, 57
Scores of people standing on a slope: 30, 50, 51, 26, 37, 32, 37, 29, 52, 54
8. Ten people were given two tests of balance, first while standing on level ground and then while standing on a 30° slope. Their scores follow. Set up the appropriate variables, and enter the data into SPSS.
Participant: 1 2 3 4 5 6 7 8 9 10 Score standing on level ground: 56 50 41 65 47 50 64 48 47 57
Score standing on a slope: 38 50 46 46 42 41 49 38 49 55
3-1 No answer provided for students. The first rows of the data file should look something like this (columns: ID, Lastname, Firstname, Gender, Ethnicity, Year, Lowup, Section, GPA, Extrcredit, Review, Quiz1-Quiz5, Final):

106484 Villarruz Alfred male Asian soph lower 2 1.18 no yes 6 5 7 6 3 53
108642 Valazquez Scott male White junior upper 2 2.19 yes no 10 10 7 6 9 54
127285 Galvez Jackie female White senior upper 2 2.46 yes yes 10 7 8 9 7 57
132931 Osborne Ann female Black soph lower 2 3.98 no no 7 8 7 7 6 68
140219 Guadiz Valerie female Asian senior upper 1 1.84 no no 7 8 9 8 10 66
3-2 Full answer provided for students. The variable view screen might look something like this once the new variable is set up:
3-3 Minimal answer provided for students.
Variable name — SPSS will… — What could be changed?
age — Accept
firstname — Accept
@edu — Reject(?) — Variable names aren’t supposed to be able to start with “@”. However, SPSS doesn’t generally give an error message when they do; it’s still a good idea to avoid starting variable names with “@”.
sex. — Reject — Variable names can’t end with a “.”, so just use “sex” without the period.
grade — Accept
not — Reject — “not” is reserved, so you need to choose a different variable name.
anxeceu — Accept
date — Accept
iq — Accept
3-4 No answer provided for students. Go to the variable view, and change the number of decimals for gpa from “2” to “0”.
3-5 Full answer provided for students. Dawne Rathbun received a score of 121 for the course. No one received a score of 121 on the final exam.
3-6 Minimal answer provided for students.
Variable    Currently defined as   Could also be defined as
ethnicity   Nominal   Ethnicity will generally be defined as a nominal variable. The only exception might be if, for example, you were examining the relative size of different ethnicities in a certain population. In that case, where ethnicity has other theoretical meaning, ethnicity could be defined as an ordinal variable.
extrcred    Ordinal   Could also be defined as a nominal variable.
quiz4       Scale     Could also be defined as an ordinal variable, but you would probably only want to do that if you had unusual data (e.g., a very non-normal distribution).
grade       Nominal   Could also be defined as an ordinal variable.
3-7 Full answer provided for students. The variable view should look something like this, with one variable identifying whether the person was standing on level or sloped ground and a second variable identifying each person’s balance score:
Once the data is entered, the data view should look something like this:
3-8 Minimal answer provided for students. The variable view should look something like this, with one variable indicating balance scores while standing on level ground and a second variable indicating scores while standing on a slope:
Once the data is entered, the data view should look something like this:
Note that, because each person took the balance test both on level ground and on a slope, there are ten rows (one for each person) rather than twenty rows (one for each time the balance test was given).
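The distinction between exercises 7 and 8 is the classic long (between-subjects) versus wide (within-subjects) layout. As a rough sketch — plain Python lists, not SPSS — using the scores given in the two exercises:

```python
# Exercise 7 (between subjects): each person is tested once, so the data file
# needs one row per test administration plus a grouping variable.
level = [56, 50, 41, 65, 47, 50, 64, 48, 47, 57]
slope = [30, 50, 51, 26, 37, 32, 37, 29, 52, 54]
between_rows = [("level", s) for s in level] + [("slope", s) for s in slope]

# Exercise 8 (within subjects): each person is tested twice, so the data file
# needs one row per person with two score columns.
level8 = [56, 50, 41, 65, 47, 50, 64, 48, 47, 57]
slope8 = [38, 50, 46, 46, 42, 41, 49, 38, 49, 55]
within_rows = list(zip(level8, slope8))

print(len(between_rows))  # 20 rows: one per test administration
print(len(within_rows))   # 10 rows: one per person
```

The same data values end up in both files; only the shape of the file changes with the design.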
Chapter 4: Managing Data
Some of the exercises that follow change the original data file. If you wish to leave the data in their original form, don't save your changes.
Case Summaries
1. Using the grades.sav file, list variables (in the original order) from id to quiz5, first 30 consecutive students; fit on one page by editing.
2. Using the helping3.sav file, list variables hclose, hseveret, hcontrot, angert, sympathi, worry, obligat, hcopet, first 30 cases; fit on one page by editing.
3. List ID, lastname, firstname, gender for the first 30 students in the grades.sav file, with the lower division students listed first, followed by upper division students (lowup variable). Edit output to fit on one page.
Missing Values
4. Using the grades.sav file delete the quiz1 scores for the first 20 subjects. Replace the (now) missing scores with the average score for all other students in the class. Print out lastname, firstname, quiz1 for the first 30 students. Edit to fit on one page.
Computing Variables
5. Using the grades.sav file calculate total (the sum of all five quizzes and the final) and percent (100 times the total divided by possible points, 125). Since total and percent are already present, name the new variables total1 and percent1. Print out id, total, total1, percent, percent1, first 30 subjects. total and total1, and percent and percent1, should be identical.
6. Using the divorce.sav file compute a variable named spirit (spirituality) that is the mean of sp8 through sp57 (there should be 18 of them). Print out id, sex, and the new variable spirit, first 30 cases, edit to fit on one page.
7. Using the grades.sav file, compute a variable named quizsum that is the sum of quiz1 through quiz5. Print out variables id, lastname, firstname, and the new variable quizsum, first 30, all on one page.
Recode Variables
8. Using the grades.sav file, compute a variable named grade1 according to the instructions on page 73. Print out variables id, lastname, firstname, grade and the new variable grade1, first 30, edit to fit all on one page. If done correctly, grade and grade1 should be identical.
9. Using the grades.sav file, recode a passfail1 variable so that D's and F's are failing, and A's, B's, and C's are passing. Print out variables id, grade, passfail1, first 30, edit to fit all on one page.
10. Using the helping3.sav file, redo the coding of the ethnic variable so that Black = 1, Hispanic = 2, Asian = 3, Caucasian = 4, and Other/DTS = 5. Now change the value labels to be consistent with reality (that is, the coding numbers are different but the labels are consistent with the original ethnicity). Print out the variables id and ethnic (labels, not values), first 30 cases, fit on one page.
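The recode in exercise 10 is just a mapping from each ethnicity label to its new numeric code. A minimal sketch in plain Python (not SPSS syntax); the new codes come from the exercise, while the original helping3.sav numeric codes are not shown here, so the remap is expressed by label:

```python
# New numeric codes required by exercise 10; the old numeric codes in
# helping3.sav differ, so we map from the value labels instead.
new_codes = {"Black": 1, "Hispanic": 2, "Asian": 3, "Caucasian": 4, "Other/DTS": 5}

def recode_ethnic(label):
    """Return the exercise's new numeric code for an ethnicity label."""
    return new_codes[label]

print(recode_ethnic("Asian"))      # 3
print(recode_ethnic("Caucasian"))  # 4
```

In SPSS the same idea is carried out with the Recode procedure, after which the value labels are re-attached to the new codes.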
Selecting Cases
11. Using the divorce.sav file select females (sex = 1); print out id and sex, first 30 subjects, numbered, fit on one page.
12. Select all the students in the grades.sav file with previous GPA less than 2.00, and percentages for the class greater than 85. Print id, GPA, and percent on one page.
13. Using the helping3.sav file, select females (gender = 1) who spend more than the average amount of time helping (thelplnz > 0). Print out id, gender, thelplnz, first 30 subjects, numbered, fit on one page.
Sorting Cases
14. Alphabetize the grades.sav file by lastname, firstname. Print out lastname, firstname, first 30 cases, edit to fit on one page.
15. Using the grades.sav file, sort by id (ascending order). Print out id, total, percent, and grade, first 30 subjects, fit on one page.
4-1 No answer provided for students.
4-2 Minimal answer provided for students. Case Summaries
4-3 Full answer provided for students.
Case Summaries(a): lastname, firstname, by lower or upper division
Lower division (N = 8): VILLARRUZ ALFRED; OSBORNE ANN; LIAN JENNY; MISCHKE ELAINE; WU VIDYUTH; TORRENCE GWEN; CARPIO MARY; SAUNDERS TAMARA
Upper division (N = 22): VALAZQUEZ SCOTT; GALVEZ JACKIE; GUADIZ VALERIE; RANGIFO TANIECE; TOMOSAWA DANIEL; BAKKEN KREG; LANGFORD DAWN; VALENZUELA NANCY; SWARM MARK; KHOURY DENNIS; AUSTIN DERRICK; POTTER MICKEY; LEE JONATHAN; DAYES ROBERT; STOLL GLENDON; CUSTER JAMES; CHANG RENE; CUMMINGS DAVENA; BRADLEY SHANNON; JONES ROBERT; UYEYAMA VICTORINE; LUTZ WILLIAM
Total N = 30
a. Limited to first 30 cases.
4-4 No answer provided for students. First, delete the quiz1 scores for the first 20 students; note that you need to press the "delete" key to do this and not just type in the number "0" (as it's possible to get a score of zero on a quiz). Follow sequence step 5b, choosing to replace values for only quiz1. This will replace the missing scores with the average score for all other students in the class. You will find that there is now a new variable, quiz1_1, for which the scores for those 20 students are now all equal to 7.4.
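Conceptually, the "series mean" replacement above is simple mean imputation: take the mean of the non-missing values and substitute it for each missing one. A minimal sketch in plain Python (not SPSS; the scores here are made up for illustration):

```python
# Mean imputation sketch: replace missing values (None) with the mean of the
# observed values, which is what SPSS's "Series mean" replacement does.
scores = [None, None, 8, 6, 9, 7, None, 10]

observed = [s for s in scores if s is not None]
series_mean = sum(observed) / len(observed)  # mean of the non-missing scores

imputed = [series_mean if s is None else s for s in scores]
print(series_mean)  # 8.0
print(imputed)      # [8.0, 8.0, 8, 6, 9, 7, 8.0, 10]
```

In the exercise, the 85 remaining quiz1 scores average 7.4, so that is the value every deleted score receives.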
4-5 Minimal answer provided for students. Follow sequence steps 5c and 5c' to complete this calculation. You will find that the total and percent values for the twenty students selected in #4 above have now changed.
4-6 Full answer provided for students.
Case Summaries(a)
id sex Spirituality
1 1 female 3.72
2 2 female 5.28
3 3 female 5.83
4 4 female 5.89
5 5 female 5.44
6 6 male 5.39
7 7 male 5.56
8 8 female 5.39
9 9 male 4.89
10 10 female 6.06
11 11 female 5.61
12 12 female 6.28
13 13 male 6.28
14 14 male 5.28
15 15 male 4.83
16 16 female 5.11
17 17 male 5.72
18 18 male 5.78
19 19 female 5.00
20 20 female 6.28
21 21 female 4.72
22 22 female 4.72
23 23 female 5.56
24 24 male 5.00
25 25 male 5.83
26 26 female 5.61
27 27 male 4.78
28 28 female 5.94
29 29 male 4.83
30 30 female 4.33
Total N 30 30 30
a. Limited to first 30 cases.
4-7 No answer provided for students.
4-8 Minimal answer provided for students. Case Summaries
ID LASTNAME FIRSTNAME GRADE GRADE2
1 106484 VILLARRUZ ALFRED D D
2 108642 VALAZQUEZ SCOTT C C
3 127285 GALVEZ JACKIE C C
4 132931 OSBORNE ANN B B
5 140219 GUADIZ VALERIE B B
6 142630 RANGIFO TANIECE A A
7 153964 TOMOSAWA DANIEL B B
8 154441 LIAN JENNY A A
9 157147 BAKKEN KREG A A
10 164605 LANGFORD DAWN A A
11 164842 VALENZUELA NANCY C C
12 167664 SWARM MARK A A
13 175325 KHOURY DENNIS B B
14 192627 MISCHKE ELAINE D D
15 211239 AUSTIN DERRICK D D
16 219593 POTTER MICKEY C C
17 237983 LEE JONATHAN C C
18 245473 DAYES ROBERT C C
19 249586 STOLL GLENDON C C
20 260983 CUSTER JAMES B B
21 273611 WU VIDYUTH D D
22 280440 CHANG RENE A A
23 287617 CUMMINGS DAVENA C C
24 289652 BRADLEY SHANNON B B
25 302400 JONES ROBERT F F
26 307894 TORRENCE GWEN C C
27 337908 UYEYAMA VICTORINE B B
28 354601 CARPIO MARY A A
29 378446 SAUNDERS TAMARA D D
30 380157 LUTZ WILLIAM B B
Total N = 30
a. Limited to first 30 cases.
4-9 Full answer provided for students. Follow sequence step 5d' but use a range of 70 to 100 for "P" and 0 to 69.9 for "F".
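The pass/fail recode in 4-9 is a simple range recode. A minimal sketch in plain Python (not SPSS syntax), using the cut-offs given above:

```python
# Pass/fail recode from 4-9: percent scores of 70-100 become "P",
# scores of 0-69.9 become "F".
def passfail(percent):
    """Recode a percent score into P (70-100) or F (0-69.9)."""
    return "P" if percent >= 70 else "F"

print(passfail(70.0))  # P (boundary case: 70 passes)
print(passfail(69.9))  # F
```

In SPSS, the same ranges are entered in the Recode into Different Variables dialog.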
4-10 No answer provided for students.
Case Summaries(a)
ID ETHNIC
1 1 3 ASIAN
2 2 2 HISPANIC
3 3 4 CAUCASIAN
4 5 4 CAUCASIAN
5 6 4 CAUCASIAN
6 7 4 CAUCASIAN
7 8 4 CAUCASIAN
8 9 4 CAUCASIAN
9 11 1 BLACK
10 12 4 CAUCASIAN
11 13 4 CAUCASIAN
12 14 4 CAUCASIAN
13 16 1 BLACK
14 17 4 CAUCASIAN
15 18 4 CAUCASIAN
16 19 4 CAUCASIAN
17 20 4 CAUCASIAN
18 21 4 CAUCASIAN
19 27 4 CAUCASIAN
20 28 2 HISPANIC
21 29 3 ASIAN
22 30 4 CAUCASIAN
23 31 4 CAUCASIAN
24 32 4 CAUCASIAN
25 33 4 CAUCASIAN
26 34 4 CAUCASIAN
27 36 4 CAUCASIAN
28 37 4 CAUCASIAN
29 38 4 CAUCASIAN
30 39 4 CAUCASIAN
Total N = 30
a. Limited to first 30 cases.
4-11 Minimal answer provided for students.
Case Summaries(a)
id sex
1 1 female
2 2 female
3 3 female
4 4 female
5 5 female
6 8 female
7 10 female
8 11 female
9 12 female
10 16 female
11 19 female
12 20 female
13 21 female
14 22 female
15 23 female
16 26 female
17 28 female
18 30 female
19 31 female
20 33 female
21 34 female
22 36 female
23 37 female
24 38 female
25 39 female
26 40 female
27 41 female
28 42 female
29 43 female
30 44 female
Total N 30 30
a. Limited to first 30 cases.
4-12 Full answer provided for students. Case Summaries
ID LASTNAME FIRSTNAME GPA PERCENT
1 140219 GUADIZ VALERIE 1.84 86.4
2 417003 EVANGELIST NIKKI 1.91 87.2
Total N = 2
a. Limited to first 100 cases.
4-13 No answer provided for students.
4-14 Minimal answer provided for students.
ID LASTNAME FIRSTNAME
1 779481 AHGHEL BRENDA
2 777683 ANDERSON ERIC
3 211239 AUSTIN DERRICK
4 420327 BADGER SUZANNA
5 157147 BAKKEN KREG
6 725987 BATILLER FRED
7 737728 BELTRAN JIM
8 289652 BRADLEY SHANNON
9 576008 BULMERKA HUSIBA
10 354601 CARPIO MARY
11 818528 CARRINGTON JYLL
12 985700 CHA LILY
13 280440 CHANG RENE
14 900485 COCHRAN STACY
15 623857 CORTEZ VIKKI
16 594463 CRUZADO MARITESS
17 287617 CUMMINGS DAVENA
18 260983 CUSTER JAMES
19 762813 DAEL IVAN
20 245473 DAYES ROBERT
21 419891 DE CANIO PAULA
22 467806 DEVERS GAIL
23 392464 DOMINGO MONIKA
24 768995 DUMITRESCU STACY
25 417003 EVANGELIST NIKKI
26 515586 FIALLOS LAUREL
27 447659 GALANVILLE DANA
28 127285 GALVEZ JACKIE
29 897606 GENOBAGA JACQUELINE
30 762308 GOUW BONNIE
31 681855 GRISWOLD TAMMY
32 140219 GUADIZ VALERIE
33 546022 HAMIDI KIMBERLY
34 463276 HANSEN TIM
35 899529 HAWKINS CARHERINE
ID LASTNAME FIRSTNAME
36 498900 HUANG JOE
37 896972 HUANG MIRNA
38 574170 HURRIA WAYNE
39 905109 JENKINS ERIC
40 554809 JONES LISA
Total N = 40
a. Limited to first 40 cases.
4-15 Full answer provided for students.
Case Summaries(a)
id total percent grade
1 106484 80 64 D
2 108642 96 77 C
3 127285 98 78 C
4 132931 103 82 B
5 140219 108 86 B
6 142630 122 98 A
7 153964 112 90 A
8 154441 120 96 A
9 157147 123 98 A
10 164605 124 99 A
11 164842 97 78 C
12 167664 118 94 A
13 175325 111 89 B
14 192627 84 67 D
15 211239 79 63 D
16 219593 94 75 C
17 237983 92 74 C
18 245473 88 70 C
19 249586 98 78 C
20 260983 106 85 B
21 273611 78 62 D
22 280440 114 91 A
23 287617 98 78 C
24 289652 109 87 B
25 302400 65 52 F
26 307894 90 72 C
27 337908 108 86 B
28 354601 120 96 A
29 378446 81 65 D
30 380157 118 86 B
31 390203 97 78 C
32 392464 103 82 B
33 414775 96 77 C
34 417003 109 87 B
35 419891 92 74 C
36 420327 103 82 B
37 434571 98 78 C
38 436413 96 77 C
39 447659 99 79 C
40 463276 123 98 A
Total N = 40
a. Limited to first 40 cases.
Chapter 5: Graphs
Answers to selected exercises are downloadable at www.pearsonhighered.com/george.
All of the following exercises use the grades.sav sample data file.
1. Using a bar chart, examine the number of students in each section of the class along with whether or not students attended the review session. Does there appear to be a relation between these variables?
2. Using a line graph, examine the relationship between attending the review session and section on the final exam score. What does this relationship look like?
3. Create a boxplot of quiz 1 scores. What does this tell you about the distribution of the quiz scores? Create a boxplot of quiz 2 scores. How does the distribution of this quiz differ from the distribution of quiz 1? Which case number is the outlier?
4. Create an error bar graph highlighting the 95% confidence interval of the mean for each of the three sections' final exam scores. What does this mean?
5. Based on the examination of a histogram, does it appear that students' previous GPAs are normally distributed?
6. Create the scatterplot described in Step 5f (page 98). What does the relationship appear to be between gpa and academic performance (total)? Add regression lines for both men and women to this scatterplot. What do these regression lines tell you?
7. By following all steps on pages 90 and 91, reproduce the bar graph shown on page 91.
8. By following all steps on pages 92 and 93, reproduce the line graph shown on page 93.
9. By following all steps on page 93, reproduce the pie chart shown on page 93.
10. By following all steps on page 94, reproduce the boxplot shown on page 95.
11. By following all steps on pages 95 and 96, reproduce the error bar chart shown on page 96. Note that the edits are not specified on page 96. See if you can perform the edits that produce an identical chart.
12. By following all steps on pages 96 and 97, reproduce the histogram shown on page 97.
13. By following all steps on page 98, reproduce the scatterplot shown on page 98.
5-1 Minimal answer provided for students. There does appear to be a relationship (though we don't know if it's significant or not): People in Section 3 were somewhat more likely to skip the review session than in sections 1 or 2, and most people who attended the review sessions were from Section 2, for example. This relationship may be clearer with stacked rather than clustered bars, as there aren't the same number of people in each section:
[Two bar charts: count of students by "Attended review sessions?" (No/Yes), first clustered and then stacked by section (1, 2, 3).]
5-2 Full answer provided for students.
Though it looks like attending the review sessions was helpful for all students, it seems to have been particularly helpful for students in Section 1. For this graph, we have modified the Y-axis to range from 55 to 65; the default is a much more compressed graph.
[Line graph: mean final exam score (Y-axis, 56 to 66) by "Attended review sessions?" (No/Yes), with separate lines for sections 1, 2, and 3.]
5-3 No answer provided for students.
Though the distribution is generally normal (note that the median is placed right in the middle of the box), there are more scores on the low extreme (outside the box but within the whiskers) than the high extreme. That's why the whiskers go below, but not above, the box. So, the distribution is somewhat negatively skewed.
Note that this distribution is more normal, particularly when the outlier is ignored. The outlier is case number 55.
[Boxplots of quiz1 (scale 0 to 10) and quiz2 (scale 4 to 10), with case 55 flagged as an outlier on quiz2.]
5-4 Minimal answer provided for students.
This is a good example of why we need to run statistical tests. The lower error bar of section 1, for example, overlaps the upper error bar for section 3 by more than half of a one-sided error bar (and vice versa). So, the difference between the means for sections 1 and 3 is probably not statistically significant. Because the error bars aren't quite the same length, though, it may still be worth running a test to see if they are significantly different.
5-5 Full answer provided for students.
Note that the GPA’s below the median appear fairly normal, but those above the median do not.
[Histogram of gpa (1.00 to 4.00) with frequency on the Y-axis; Mean = 2.7789, Std. Dev. = 0.7638, N = 105.]
5-6 No answer provided for students.
Based just on the scatterplot, no relationship can be determined visually. (At least not by me!) But, if a regression line is added, it becomes apparent that previous GPAs appear to be somewhat more closely related to total class points for males than for females (note the higher slope).
5-7 through 5-13 Because these exercises involve reproducing graphs in the text (and the page numbers of the answers in the text are provided), no additional answers are needed or provided for students.
[Scatterplot of total (40 to 140) against gpa (1.00 to 4.00), with separate markers and regression lines for males and females; R² Linear = 0.119 and 0.268 for the two groups.]
Chapter 6: Frequencies
Notice that data files other than the grades.sav file are being used here. Please refer to the Data Files section starting on page 385 to acquire all necessary information about these files and the meaning of the variables. As a reminder, all data files are downloadable from the web address shown above.
1. Using the divorce.sav file display frequencies for sex, ethnic, and status. Print output to show frequencies for all three; edit output so it fits on one page. On a second page, include three bar graphs of these data and provide labels to clarify what each one means.
2. Using the graduate.sav file display frequencies for motive, stable, and hostile. Print output to show frequencies for all three; edit output so it fits on one page. Note: this type of procedure is typically done to check for accuracy of data. Motivation (motive), emotional stability (stable), and hostility (hostile) are scored on 1- to 9-point scales. You are checking to see if you have, by mistake, entered any 0s or 99s.
3. Using the helping3.sav file compute percentiles for thelplnz (time helping, measured in z scores) and tqualitz (quality of help, measured in z scores). Use percentile values 2, 16, 50, 84, 98. Print output and circle values associated with percentiles for thelplnz; box percentile values for tqualitz. Edit output so it fits on one page.
4. Using the helping3.sav file compute percentiles for age. Compute every 10th percentile (10, 20, 30, etc.). Edit (if necessary) to fit on one page.
5. Using the graduate.sav file display frequencies for gpa, areagpa, grequant. Compute quartiles for these three variables. Edit (if necessary) to fit on one page.
6. Using the grades.sav file create a histogram for final. Include the normal curve option. Create a title for the graph that makes clear what is being measured. Perform the edits on page 97 so the borders for each bar are clear.
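Exercise 3's percentile values (2, 16, 50, 84, 98) are not arbitrary: for a normal distribution they are approximately the percentile ranks of z-scores -2, -1, 0, +1, and +2. A quick check in plain Python (not SPSS), using the standard library's normal distribution:

```python
# For a standard normal distribution, the cumulative probability at
# z = -2, -1, 0, +1, +2 rounds to the percentiles 2, 16, 50, 84, 98
# used in exercise 3.
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)
for z in (-2, -1, 0, 1, 2):
    pct = std_normal.cdf(z) * 100  # percent of cases at or below this z
    print(z, round(pct))           # -2 2, -1 16, 0 50, 1 84, 2 98
```

So, for z-scored variables like thelplnz and tqualitz, these percentiles show how closely the observed distribution matches a normal curve.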
6-1 Minimal answer provided for students. Frequency Table
sex
Frequency Percent Valid Percent Cumulative Percent
Valid female 119 52.0 52.0 52.0
male 110 48.0 48.0 100.0
Total 229 100.0 100.0
ethnicity of the subject
Frequency Percent Valid Percent Cumulative Percent
Valid white 209 91.3 91.3 91.3
hispanic 9 3.9 3.9 95.2
black 3 1.3 1.3 96.5
asian 1 .4 .4 96.9
other or DTS 7 3.1 3.1 100.0
Total 229 100.0 100.0
current marital status
Frequency Percent Valid Percent Cumulative Percent
Valid married 55 24.0 24.0 24.0
separated 34 14.8 14.8 38.9
divorced or DTS 112 48.9 48.9 87.8
widowed 3 1.3 1.3 89.1
cohabit 25 10.9 10.9 100.0
Total 229 100.0 100.0
6-2 No answer provided for students.
6-3 Full answer provided for students.
Statistics
                        MEAN OF HELPER/      MEAN OF HELPER/
                        RECIPIENT LNZHELP    RECIPIENT ZQUALITY HELP
N           Valid       537                  537
            Missing     0                    0
Percentiles 2           -2.0966              -2.1701
            16          -.9894               -.8144
            50          .0730                .1351
            84          .9218                .9481
            98          1.7643               1.4766

6-4 Minimal answer provided for students.
Statistics
AGE
N           Valid       537
            Missing     0
Percentiles 10          20.00
            20          21.00
            30          22.00
            40          23.00
            50          25.00
            60          30.00
            70          34.00
            80          42.00
            90          51.00
6-5 No answer provided for students.
Statistics
              GPA      AREAGPA   GREQUANT
N   Valid     50       50        50
    Missing   0        0         0
Percentiles 25  3.3475   3.6275    640.00
            50  3.5500   3.8150    685.00
            75  3.7100   3.9700    740.00
Note: three frequency tables (GPA, AREA GPA, and GRE QUANT) are printed side by side below, one row from each table per line; the columns for each are Freq, %, Valid %, and Cum %.
GPA Freq % Valid % Cum % | AREA GPA Freq % Valid % Cum % | GRE QUANT
Freq % Valid % Cum % 2.80 1 2.0 2.0 2.0 3.20 1 2.0 2.0 2.0 550 1 2.0 2.0 2.0 2.90 1 2.0 2.0 4.0 3.30 1 2.0 2.0 4.0 560 1 2.0 2.0 4.0 3.00 1 2.0 2.0 6.0 3.40 1 2.0 2.0 6.0 570 1 2.0 2.0 6.0 3.10 1 2.0 2.0 8.0 3.50 1 2.0 2.0 8.0 580 1 2.0 2.0 8.0 3.13 1 2.0 2.0 10.0 3.53 1 2.0 2.0 10.0 590 1 2.0 2.0 10.0 3.16 1 2.0 2.0 12.0 3.54 1 2.0 2.0 12.0 600 2 4.0 4.0 14.0 3.19 1 2.0 2.0 14.0 3.56 1 2.0 2.0 14.0 610 1 2.0 2.0 16.0 3.22 1 2.0 2.0 16.0 3.57 1 2.0 2.0 16.0 620 1 2.0 2.0 18.0 3.25 1 2.0 2.0 18.0 3.59 2 4.0 4.0 20.0 630 2 4.0 4.0 22.0 3.28 1 2.0 2.0 20.0 3.60 1 2.0 2.0 22.0 640 3 6.0 6.0 28.0 3.32 1 2.0 2.0 22.0 3.62 1 2.0 2.0 24.0 650 4 8.0 8.0 36.0 3.34 1 2.0 2.0 24.0 3.63 1 2.0 2.0 26.0 660 1 2.0 2.0 38.0 3.35 1 2.0 2.0 26.0 3.65 1 2.0 2.0 28.0 670 2 4.0 4.0 42.0 3.37 1 2.0 2.0 28.0 3.66 1 2.0 2.0 30.0 680 4 8.0 8.0 50.0 3.38 1 2.0 2.0 30.0 3.68 1 2.0 2.0 32.0 690 1 2.0 2.0 52.0 3.40 1 2.0 2.0 32.0 3.69 1 2.0 2.0 34.0 700 2 4.0 4.0 56.0 3.41 1 2.0 2.0 34.0 3.72 1 2.0 2.0 36.0 710 2 4.0 4.0 60.0 3.43 1 2.0 2.0 36.0 3.73 1 2.0 2.0 38.0 720 3 6.0 6.0 66.0 3.44 1 2.0 2.0 38.0 3.75 1 2.0 2.0 40.0 730 2 4.0 4.0 70.0 3.46 1 2.0 2.0 40.0 3.76 1 2.0 2.0 42.0 740 4 8.0 8.0 78.0 3.47 1 2.0 2.0 42.0 3.78 1 2.0 2.0 44.0 750 3 6.0 6.0 84.0 3.49 1 2.0 2.0 44.0 3.79 1 2.0 2.0 46.0 760 3 6.0 6.0 90.0 3.51 1 2.0 2.0 46.0 3.81 2 4.0 4.0 50.0 770 3 6.0 6.0 96.0 3.53 1 2.0 2.0 48.0 3.82 2 4.0 4.0 54.0 780 2 4.0 4.0 100.0 3.54 1 2.0 2.0 50.0 3.84 1 2.0 2.0 56.0 Total 50 100.0 100.0 3.56 1 2.0 2.0 52.0 3.85 2 4.0 4.0 60.0 3.57 1 2.0 2.0 54.0 3.87 1 2.0 2.0 62.0 3.59 2 4.0 4.0 58.0 3.88 2 4.0 4.0 66.0 3.60 1 2.0 2.0 60.0 3.89 1 2.0 2.0 68.0 3.62 2 4.0 4.0 64.0 3.91 1 2.0 2.0 70.0 3.63 1 2.0 2.0 66.0 3.94 2 4.0 4.0 74.0 3.65 2 4.0 4.0 70.0 3.97 2 4.0 4.0 78.0 3.68 2 4.0 4.0 74.0 4.00 11 22.0 22.0 100.0 3.71 3 6.0 6.0 80.0 Total 50 100.0 100.0 3.74 1 2.0 2.0 82.0 3.77 1 2.0 2.0 84.0 3.81 2 4.0 4.0 88.0 3.84 1 2.0 2.0 90.0 3.86 1 2.0 2.0 92.0 3.87 1 2.0 2.0 94.0 3.91 2 4.0 4.0 98.0 3.94 1 
2.0 2.0 100.0
Total 50 100.0 100.0
6-6 Full answer provided for students.
[Histogram of final (40 to 80) with frequency on the Y-axis, titled "Distribution of Final Exam Scores"; Mean = 61.48, Std. Dev. = 7.943, N = 105.]
Chapter 7: Descriptive Statistics
Notice that data files other than the grades.sav file are being used here. Please refer to the Data Files section starting on page 385 to acquire all necessary information about these files and the meaning of the variables. As a reminder, all data files are downloadable from the web address shown above.
1. Using the grades.sav file select all variables except lastname, firstname, grade, passfail. Compute descriptive statistics including mean, standard deviation, kurtosis, skewness. Edit so that you eliminate Std. Error (Kurtosis) and Std. Error (Skewness), making your chart easier to interpret. Edit the output to fit on one page. Draw a line through any variable for which descriptives are meaningless (either they are categorical or they are known to not be normally distributed). Place an "*" next to variables that are in the ideal range for both skewness and kurtosis. Place an X next to variables that are acceptable but not excellent. Place a mark next to any variables that are not acceptable for further analysis.
2. Using the divorce.sav file select all variables except the indicators (for spirituality, sp8 – sp57; for cognitive coping, cc1 – cc11; for behavioral coping, bc1 – bc12; for avoidant coping, ac1 – ac7; and for physical closeness, pc1 – pc10). Compute descriptive statistics including mean, standard deviation, kurtosis, skewness. Edit so that you eliminate Std. Error (Kurtosis) and Std. Error (Skewness), making your chart easier to interpret. Edit the output to fit on two pages. Draw a line through any variable for which descriptives are meaningless (either they are categorical or they are known to not be normally distributed). Place an "*" next to variables that are in the ideal range for both skewness and kurtosis. Place an X next to variables that are acceptable but not excellent. Place a mark next to any variables that are not acceptable for further analysis.
3. Create a practice data file that contains the following variables and values:
VAR1: 3 5 7 6 2 1 4 5 9 5 VAR2: 9 8 7 6 2 3 3 4 3 2 VAR3: 10 4 3 5 6 5 4 5 2 9
Compute: the mean, the standard deviation, and variance and print out on a single page.
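The statistics SPSS reports for exercise 3 can be checked by hand. A quick sketch using Python's standard library (not SPSS); note that SPSS reports the sample (n−1) standard deviation and variance, which is what `stdev()` and `variance()` compute:

```python
# Check of exercise 3's descriptives with the statistics module.
from statistics import mean, stdev, variance

var1 = [3, 5, 7, 6, 2, 1, 4, 5, 9, 5]
var2 = [9, 8, 7, 6, 2, 3, 3, 4, 3, 2]
var3 = [10, 4, 3, 5, 6, 5, 4, 5, 2, 9]

for name, data in (("VAR1", var1), ("VAR2", var2), ("VAR3", var3)):
    # mean, sample standard deviation, sample variance
    print(name, mean(data), round(stdev(data), 5), round(variance(data), 3))
```

The output matches the answer table for 7-3 below (e.g., VAR1: mean 4.7, standard deviation 2.35938, variance 5.567).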
7-1 Full answer provided for students.
Descriptive Statistics
N Mean Std. Deviation Skewness Kurtosis
Statistic Statistic Statistic Statistic Statistic
id 105 571366.67 277404.129 -.090 -1.299
gender 105 1.39 .490 .456 -1.828
ethnicity 105 3.35 1.056 -.451 -.554
Year in school 105 2.94 .691 -.460 .553
Lower or upper division 105 1.79 .409 -1.448 .099
section 105 2.00 .797 .000 -1.419
gpa 105 2.7789 .76380 -.052 -.811
Did extra credit project? 105 1.21 .409 1.448 .099
Attended review sessions? 105 1.67 .474 -.717 -1.515
quiz1 105 7.47 2.481 -.851 .162
quiz2 105 7.98 1.623 -.656 -.253
X quiz3 105 7.98 2.308 -1.134 .750
quiz4 105 7.80 2.280 -.919 .024
quiz5 105 7.87 1.765 -.713 .290
final 105 61.48 7.943 -.335 -.332
total 105 100.57 15.299 -.837 .943
percent 105 80.34 12.135 -.834 .952
Valid N (listwise) 105
7-2 Minimal answer provided for students. Descriptive Statistics
N Mean Std. Deviation Skewness Kurtosis
ID 229 116.32 66.903 -.007 -1.202
SEX 229 1.48 .501 .079 -2.011
AGE 229 41.90 9.881 .679 .910
FINANCE 229 3.83 1.775 -.096 -.873
X SEP 229 8.199 6.7008 1.113 .963
X MAR 229 9.486 7.0818 1.230 1.795
STATUS 229 2.59 1.176 .354 -.211
ETH 229 1.20 .763 4.244 17.613
SCHOOL 229 4.03 2.570 .573 -.883
CHILDNEG 229 1.35 1.155 .413 -.327
CHILDCST 229 1.02 1.177 .894 .094
INCOME 229 2.93 1.438 .015 -.877
EMPLOY 229 3.35 1.537 .582 -.074
X DRELAT 229 4.40 1.997 -.253 -1.096
X DBREAK 229 3.16 1.943 .450 -1.019
DEMOTI 229 4.72 1.644 -.277 -.737
DDEPRE 229 3.97 1.801 -.029 -.911
X DFEEL1 229 3.60 1.820 .049 -1.064
DFEEL2 229 3.44 1.867 .335 -.904
DFEEL3 229 3.24 1.856 .366 -.950
DLOWER 229 3.58 1.828 .166 -.881
DDISR1 229 4.15 1.762 -.094 -.841
DDISR2 229 4.01 1.812 -.095 -.957
X DLOSS 229 3.95 1.992 .000 -1.246
X DLACK 229 3.75 2.045 .105 -1.262
X DLEGAL 229 3.76 2.081 .112 -1.286
X DFINAN 229 4.35 1.943 -.229 -1.052
X DCHILD 229 3.14 1.982 .415 -1.074
DADJUST 229 2.74 1.897 .716 -.778
X ARELAT 229 2.27 1.711 1.099 -.048
AMAIN1 229 3.49 1.786 .176 -.803
ASUPP1 229 5.23 1.760 -.803 -.249
ASUPP2 229 5.08 1.938 -.772 -.501
X ACOUN 229 3.08 2.068 .542 -1.014
AENJO 229 4.55 1.773 -.252 -.805
ATHEP 229 5.12 1.649 -.700 -.134
X AINVO 229 4.68 2.228 -.606 -1.098
X ASPIR 229 4.14 2.342 -.169 -1.548
AAFFE 229 4.37 1.865 -.327 -.826
X ASEX 229 3.02 2.143 .543 -1.179
ACARE 229 4.65 1.780 -.480 -.559
AMAIN2 229 5.32 1.539 -.722 -.211
COGCOPE 229 4.4080 .93503 -.076 -.057
BEHCOPE 229 4.3804 1.06914 -.130 -.070
AVOICOP 229 2.7271 .91693 .609 .255
IQ 229 7.39 2.088 .039 -.694
CLOSE 229 3.372 .9452 .391 .278
X LOCUS 229 7.85 1.791 -1.121 1.796
N Mean Std. Deviation Skewness Kurtosis
SOCSUPP 229 3.523 .8941 .098 -.303
ASQ 229 3.0447 2.74040 .245 .685
LSATISY 229 4.8151 .89571 -.307 -.036
SPIRITUA 229 4.4832 1.22858 -.233 -.755
DIFICULT 229 3.7467 1.07766 .046 -.225
ASSIST 229 4.2311 .77412 -.092 .315
DISCREP2 229 .4844 1.37949 -.079 .419
SATISFY 229 6.2499 2.00305 -.123 -.062
CLOSE2 229 12.2632 6.84401 1.312 3.376
ASQ2 229 16.7473 22.08842 2.259 5.909
INCOME2 229 10.6201 8.72483 .757 -.405
SOCSUP2 229 13.9112 7.25302 .853 .486
LOCUS2 229 64.7729 25.06240 -.308 -.544
NLSAT1 229 5.1509 .91682 -.297 -.052
NLSAT2 229 5.2181 .92148 -.295 -.055
NLSAT3 229 5.4349 .93749 -.287 -.062
RECOVERY 229 4.0300 .74280 -.079 .419
CONFRONT 229 4.3942 .87491 -.265 .045
TRAUMA 229 3.7470 1.07705 .049 -.230
TRAUMA2 229 15.1950 8.27035 .778 .430
TRAUMA3 229 3.7470 1.07705 .049 -.230
Valid N (listwise) 229
7-3 No answer provided for students.
Case # VAR1 VAR2 VAR3
1 3.0 9.0 10.0
2 5.0 8.0 4.0
3 7.0 7.0 3.0
4 6.0 6.0 5.0
5 2.0 2.0 6.0
6 1.0 3.0 5.0
7 4.0 3.0 4.0
8 5.0 4.0 5.0
9 9.0 3.0 2.0
10 5.0 2.0 9.0
Descriptive Statistics
     N  Mean   Std. Deviation  Variance
VAR1 10 4.7000 2.35938         5.567
VAR2 10 4.7000 2.58414         6.678
VAR3 10 5.3000 2.49666         6.233
Valid N (listwise) 10
Chapter 8: Crosstabulation and χ² Analyses
For each of the chi-square analyses computed below:
1. Circle the observed (actual) values.
2. Box the expected values.
3. Put an * next to the unstandardized residuals.
4. Underline the significance value that shows whether observed and expected values differ significantly.
5. Make a statement about independence of the variables involved.
6. State the nature of the relationship. #5 identifies whether there is a relationship; now you need to indicate what that relationship is. Example: Men tend to help more with goal-disruptive problems whereas women tend to help more with relational problems.
7. Is there a significant linear association?
8. Does linear association make sense for these variables?
9. Is there a problem with low-count cells?
10. If there is a problem, what would you do about it?
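The expected values in operation 2 come from the independence model — expected count = (row total × column total) / N — and the unstandardized residual in operation 3 is simply observed minus expected. A sketch with a made-up 2×2 table (plain Python, not SPSS):

```python
# Expected counts and unstandardized residuals for a crosstabulation,
# computed the way SPSS does: expected = row_total * col_total / N,
# residual = observed - expected. The counts here are invented.
observed = [[10, 20],
            [30, 40]]

row_totals = [sum(row) for row in observed]        # [30, 70]
col_totals = [sum(col) for col in zip(*observed)]  # [40, 60]
n = sum(row_totals)                                # 100

expected = [[r * c / n for c in col_totals] for r in row_totals]
residuals = [[observed[i][j] - expected[i][j] for j in range(2)]
             for i in range(2)]

print(expected)   # [[12.0, 18.0], [28.0, 42.0]]
print(residuals)  # [[-2.0, 2.0], [2.0, -2.0]]
```

Large residuals (relative to the expected counts) are what drive the chi-square statistic up.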
1. File: grades.sav. Variables: gender by ethnic. Select: observed count, expected count, un-standarized residuals. Compute: Chi-square, Phi and Cramer’s V. Edit to fit on one page, print out, then perform the 10 operations listed above.
2. File: grades.sav. Variables: gender by ethnic. Prior to analysis, complete the procedure shown in Step
5c (page 129) to eliminate the “Native” category (low-count cells). Select: observed count, expected count, unstandarized residuals. Compute: Chi-square, Phi and Cramer’s V. Edit to fit on one page, print out, then perform the 10 operations listed above.
3. File: helping3.sav. Variables: gender by problem. Select: observed count, expected count, un-
standarized residuals. Compute: Chi-square, Phi and Cramer’s V. Edit to fit on one page, print out, then perform the 10 operations listed above.
4. File: helping3.sav. Variables: school by occupat. Prior to analysis, select cases: “school > 2 & occupat < 6”. Select: observed count, expected count, unstandardized residuals. Compute: Chi-square, Phi and Cramer’s V. Edit to fit on one page, print out, then perform the 10 operations listed above.
5. File: helping3.sav. Variables: marital by problem. Prior to analysis, eliminate the “DTS” category (marital < 3). Select: observed count, expected count, unstandardized residuals. Compute: Chi-square, Phi and Cramer’s V. Edit to fit on one page, print out, then perform the 10 operations listed above.
8-1 Full answer provided for students.
5. Ethnicity and gender are independent of each other.
6. There is no difference in gender balance across different ethnic groups; or, across different ethnic groups there is no difference in the balance of men and women.
7. No
8. No
9. Yes; 30% of cells have an expected value of less than 5 (acceptable is less than 25%).
10. Delete the category that most contributes to the low cell counts, the “Native” category in this case.
8-2 Minimal answer provided for students.
Symmetric Measures
                                Value  Approx. Sig.
Nominal by Nominal  Phi         .062   .942
                    Cramer's V  .062   .942
N of Valid Cases                100
a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.

5. Ethnicity and gender are independent of each other.
6. There is no difference in gender balance across different ethnic groups; or, across different ethnic groups there is no difference in the balance of men and women.
7. No
8. No
9. No; only 12.5% of cells have an expected value of less than 5 (acceptable is less than 25%).
10. Delete the category that most contributes to the low cell counts.
GENDER * ETHNICITY Crosstabulation
ETHNICITY
2 Asian 3 Black 4 White 5 Hispanic Total
GENDER
1 female Count 13 14 26 7 60
Expected 12.0 14.4 27.0 6.6 60.0
Residual 1.0 -.4 -1.0 .4
2 male Count 7 10 19 4 40
Expected 8.0 9.6 18.0 4.4 40.0
Residual -1.0 .4 1.0 -.4
Total Count 20 24 45 11 100
Chi-Square Tests
Value df Asymp. Sig.(2-sided)
Pearson Chi-Square .389 3 .942
Likelihood Ratio .393 3 .942
Linear-by-Linear Association .068 1 .794
N of Valid Cases 100
a. 1 cell (12.5%) has expected count less than 5. The minimum expected count is 4.40.
8-3 Minimal answer provided for students.
Symmetric Measures
                                Value  Approx. Sig.
Nominal by Nominal  Phi         .176   .001
                    Cramer's V  .176   .001
N of Valid Cases                537
a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.

5. Gender and problem type are dependent; that is, which problems receive the most attention depends upon the gender of the helper.
6. While there are no significant gender differences in the likelihood of helping with illness or catastrophic problems, women are significantly more likely to help with relational problems whereas men are significantly more likely to help with goal-disruptive problems.
7. No
8. No
9. No; there are no cells with an expected value of less than 5 (acceptable is less than 25%).
10. Delete the category that most contributes to the low cell counts. There are none here.
GENDER * PROBLEM Crosstabulation
PROBLEM
1 Goal Disruptive 2 Relational 3 Illness 4 Catastrophic Total
GENDER 1 FEMALE Count 119 148 51 7 325
Expected 138.6 126.5 52.0 7.9 325.0
Residual -19.6 21.5 -1.0 -.9
2 MALE Count 110 61 35 6 212
Expected 90.4 82.5 34.0 5.1 212.0
Residual 19.6 -21.5 1.0 .9
Total Count 229 209 86 13 537
Chi-Square Tests
Value df Asymp. Sig.(2-sided)
Pearson Chi-Square 16.578 3 .001
Likelihood Ratio 16.809 3 .001
Linear-by-Linear Association 3.457 1 .063
N of Valid Cases 537
a. 0 cells (.0%) have expected count less than 5. The minimum expected count is 5.13.
8-4 No answer provided for students.
Symmetric Measures
                                Value  Approx. Sig.
Nominal by Nominal  Phi         .588   .000
                    Cramer's V  .294   .000
N of Valid Cases                509

5. Occupation and years of schooling are dependent; that is, which occupation one is involved in depends upon the amount of schooling received.
6. As years of schooling increase, there is a greater likelihood that the person will be involved in a professional occupation. Lower levels of education are more associated with service/support and blue-collar jobs. Those in the mid ranges of education (15-18 years) are more likely to be students.
7. No
8. No
9. Yes; 33.3% of cells have an expected value of less than 5 (acceptable is less than 25%).
10. Delete the category that most contributes to the low cell counts. In this case, delete the 9-11 years group (there are only 7 of them). Possibly delete blue-collar workers (with 18).
SCHOOL * OCCUPAT Crosstabulation
OCCUPAT
                     1 Professional  2 Service/Support  3 Blue Collar  4 Unemploy/Retired  5 Student  Total
SCHOOL
9-11 YR   Count          1     2     3     0     1     7
          Expected     1.8   1.6    .2    .6   2.8   7.0
          Residual     -.8    .4   2.8   -.6  -1.8
12 YR     Count          7    25     6     7     2    47
          Expected    12.2  10.8   1.7   3.9  18.5  47.0
          Residual    -5.2  14.2   4.3   3.1 -16.5
13-14 YR  Count         20    36     3    10    35   104
          Expected    27.0  23.9   3.7   8.6  40.9 104.0
          Residual    -7.0  12.1   -.7   1.4  -5.9
15-16 YR  Count         31    34     4    13   105   187
          Expected    48.5  43.0   6.6  15.4  73.5 187.0
          Residual   -17.5  -9.0  -2.6  -2.4  31.5
17-18 YR  Count         31    13     1     8    47   100
          Expected    25.9  23.0   3.5   8.3  39.3 100.0
          Residual     5.1 -10.0  -2.5   -.3   7.7
19 + YR   Count         42     7     1     4    10    64
          Expected    16.6  14.7   2.3   5.3  25.1  64.0
          Residual    25.4  -7.7  -1.3  -1.3 -15.1
Total     Count        132   117    18    42   200   509
Chi-Square Tests
                              Value    df  Asymp. Sig. (2-sided)
Pearson Chi-Square            176.118  20  .000
Likelihood Ratio              148.251  20  .000
Linear-by-Linear Association  1.211    1   .271
N of Valid Cases              509
a. 10 cells (33.3%) have expected count less than 5. The minimum expected count is .25.
8-5 No answer provided for students.
Symmetric Measures
                                Value  Approx. Sig.
Nominal by Nominal  Phi         .115   .312
                    Cramer's V  .081   .312
N of Valid Cases                537
a. Not assuming the null hypothesis.
b. Using the asymptotic standard error assuming the null hypothesis.

5. Problem type and marital status are independent; that is, one’s marital status is unrelated to the type of problem one is likely to help with.
6. There is no significant relationship between marital status and the type of problem one helps with.
7. No
8. No
9. Yes; 41.7% of cells have an expected value of less than 5 (acceptable is less than 25%).
10. Delete the category that most contributes to the low cell counts. The obvious choice is to delete the “DTS” category of marital status.
MARITAL * PROBLEM Crosstabulation
PROBLEM
                      1 Goal Disruptive  2 Relational  3 Illness  4 Catastrophic  Total
MARITAL
1 MARRIED  Count         57    64    31     4   156
           Expected    66.5  60.7  25.0   3.8 156.0
           Residual    -9.5   3.3   6.0    .2
2 SINGLE   Count        168   143    52     9   372
           Expected   158.6 144.8  59.6   9.0 372.0
           Residual     9.4  -1.8  -7.6    .0
3 DTS      Count          4     2     3     0     9
           Expected     3.8   3.5   1.4    .2   9.0
           Residual      .2  -1.5   1.6   -.2
Total      Count        229   209    86    13   537
Chi-Square Tests
                              Value  df  Asymp. Sig. (2-sided)
Pearson Chi-Square            7.097  6   .312
Likelihood Ratio              7.025  6   .319
Linear-by-Linear Association  2.841  1   .092
N of Valid Cases              537
a. 5 cells (41.7%) have expected count less than 5. The minimum expected count is .22.
Additional Exercises Using DIVORCE.SAV:
6. sex (2 levels) by level of income (5 levels). Note that level 0 (decline to state) and level 6 (over $50,000) of income have been removed prior to the analysis due to so few subjects appearing in either category.
7. marital status (4 levels) by level of income (5 levels). Note that level 0 (decline to state) and level 6 (over $50,000) of income have been removed prior to the analysis due to so few subjects appearing in either category.
Using HELPING3.SAV:
8. gender (2 levels) by type of problem experienced (problem, 4 levels).
9. gender (2 levels) by the helper’s occupation (occupat, 5 levels). Deleted from this sample is the DTS level of occupat, thus fewer than 537 subjects.
10. income (5 levels) by the helper’s ethnicity (ethnic, 5 levels). With 25 cells, interpretation requires some careful observation. Preliminary evidence suggests that Caucasians are over-represented in the highest income categories, Hispanics and Blacks tend to be underrepresented, and there is little difference between observed and expected values for Asians.
11. Test whether those who scored below the mean on amount of help given (the not-helpful group) and those who scored above the mean (the helpful group) (cathelp) differ in their level of income. The chi-square value suggests barely significant results (p = .05). Analysis of observed versus expected values suggests that the unhelpful group is less likely to have a high income and less likely to actually state their income. The helpful group appears to be over-represented in the highest income group and more likely to state their income.
Chapter 9: The Means Procedure
1. Using the grades.sav file, use the Means procedure to explore the influence of ethnic and section on total. Print the output, fit it on one page, and describe in general terms what the value in each cell means.
2. Using the grades.sav file, use the Means procedure to explore the influence of year and section on final. Print the output, fit it on one page, and describe in general terms what the value in each cell means.
3. Using the divorce.sav file, use the Means procedure to explore the influence of gender (sex) and marital status (status) on spiritua (spirituality; a high score is spiritual). Print the output and describe in general terms what the value in each cell means.
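What the Means procedure computes for each cell of a factor combination (mean, N, and standard deviation of the dependent variable) can be illustrated with a short Python sketch. The records below are invented for the example, not taken from grades.sav:

```python
# Sketch of the Means procedure: group a dependent variable by two factors
# and report the mean, N, and SD for each cell. Data values are hypothetical.
from collections import defaultdict
from statistics import mean, stdev

# (ethnic, section, total) triples -- invented for illustration.
records = [
    ("Asian", 1, 108), ("Asian", 1, 102), ("Asian", 2, 97),
    ("Black", 1, 105), ("Black", 1, 99), ("Black", 2, 110),
]

cells = defaultdict(list)
for ethnic, section, total in records:
    cells[(ethnic, section)].append(total)

for (ethnic, section), values in sorted(cells.items()):
    # SPSS prints "." for the SD of a single-case cell.
    sd = stdev(values) if len(values) > 1 else float("nan")
    print(f"{ethnic:6} {section}  Mean={mean(values):6.2f}  N={len(values)}  SD={sd:.3f}")
```

This mirrors the Report tables in the answers that follow: one line per cell, plus (in SPSS) subtotal and total rows computed the same way over larger groupings.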
9-1 Full answer provided for students.
Report: total
ethnicity  section  Mean  N  Std. Deviation
Native 2 90.25 4 15.042
3 115.00 1 .
Total 95.20 5 17.094
Asian 1 108.00 7 12.423
2 97.78 9 14.394
3 105.50 4 6.351
Total 102.90 20 12.876
Black 1 105.14 7 12.185
2 105.00 7 11.547
3 93.10 10 16.509
Total 100.08 24 14.714
White 1 105.75 16 17.628
2 100.00 18 10.123
3 100.91 11 16.736
Total 102.27 45 14.702
Hispanic 1 94.67 3 27.154
2 104.00 1 .
3 90.57 7 21.816
Total 92.91 11 21.215
Total 1 105.09 33 16.148
2 99.49 39 12.013
3 97.33 33 17.184
Total 100.57 105 15.299

The ETHNICITY column identifies the ethnic group for which data are entered. The SECTION column identifies which of the three sections individuals of a particular ethnic group are enrolled in. The MEAN column identifies the mean total points for the individuals in each cell of the table. The N column identifies how many individuals are in each group. The STD. DEVIATION column identifies the standard deviation for the values in each category.
9-2 No answer provided for students.

Report: FINAL
YEAR    SECTION  Mean   N    Std. Deviation
Frosh   1        66.00  1    .
        2        56.00  2    1.414
        Total    59.33  3    5.859
Soph    1        64.83  6    4.875
        2        59.78  9    7.480
        3        64.75  4    5.909
        Total    62.42  19   6.628
Junior  1        62.29  21   10.441
        2        62.53  19   5.125
        3        59.92  24   8.802
        Total    61.47  64   8.478
Senior  1        65.00  5    3.606
        2        61.11  9    7.897
        3        56.40  5    10.015
        Total    60.89  19   7.951
Total   1        63.27  33   8.676
        2        61.23  39   6.339
        3        59.97  33   8.737
        Total    61.48  105  7.943
The YEAR column identifies the year in school for subjects. The SECTION column identifies which of the three sections individuals in each year of school are enrolled in. The MEAN column identifies the mean final exam score for the individuals in each cell of the table. The N column identifies how many individuals are in each group. The STD. DEVIATION column identifies the standard deviation for the values in each category.
9-3 Minimal answer provided for students.

Report: spiritua
sex     current marital status  Mean    N    Std. Deviation
female  married                 4.8226  31   1.20428
        separated               4.5752  17   1.04913
        divorced or DTS         5.0258  58   .97872
        widowed                 4.2500  2    .11785
        cohabit                 4.0152  11   .99166
        Total                   4.8020  119  1.07664
male    married                 4.5348  24   1.11607
        separated               4.4212  17   1.38179
        divorced or DTS         4.0822  54   1.34770
        widowed                 1.9444  1    .
        cohabit                 3.4881  14   .90665
        Total                   4.1383  110  1.29283
Total   married                 4.6970  55   1.16490
        separated               4.4982  34   1.21058
        divorced or DTS         4.5709  112  1.25835
        widowed                 3.4815  3    1.33372
        cohabit                 3.7200  25   .96245
        Total                   4.4832  229  1.22858
The SEX column identifies the gender of the subjects. The STATUS column identifies the marital status (5 levels) of women (first) then men. The MEAN column identifies the mean spirituality score for the individuals in each cell of the table. The N column identifies how many individuals are in each group. The STD. DEVIATION column identifies the standard deviation for the values in each category.
Chapter 10: Bivariate Correlation
1. Using the grades.sav file, create a correlation matrix of the following variables: id, ethnic, gender, year, section, gpa, quiz1, quiz2, quiz3, quiz4, quiz5, final, total; select one-tailed significance; flag significant correlations. Print out results on a single page.
   Draw a single line through the columns and rows where the correlations are meaningless.
   Draw a double line through cells where correlations exhibit linear dependency.
   Circle the 1 “largest” (greatest absolute value) NEGATIVE correlation (the p value will be less than .05) and explain what it means.
   Box the 3 largest POSITIVE correlations (each p value will be less than .05) and explain what they mean.
   Create a scatterplot of gpa by total and include the regression line (see Chapter 5, pages 97-98, for instructions).
2. Using the divorce.sav file, create a correlation matrix of the following variables: sex, age, sep, mar, status, ethnic, school, income, avoicop, iq, close, locus, asq, socsupp, spiritua, trauma, lsatisy; select one-tailed significance; flag significant correlations. Print results on a single page. Note: Use the Data Files descriptions (p. 385) for the meaning of variables.
   Draw a single line through the columns and rows where the correlations are meaningless.
   Draw a double line through the correlations where there is linear dependency.
   Circle the 3 “largest” (greatest absolute value) NEGATIVE correlations (each p value will be less than .05) and explain what they mean.
   Box the 3 largest POSITIVE correlations (each p value will be less than .05) and explain what they mean.
   Create a scatterplot of close by lsatisy and include the regression line (see Chapter 5, pages 97-98, for instructions).
   Create a scatterplot of avoicop by trauma and include the regression line.
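Each cell of the matrix these exercises produce is a Pearson r: the covariance of two variables divided by the product of their standard deviations. A minimal Python sketch of that computation, using invented x and y values rather than data from either file:

```python
# Pearson correlation computed from first principles. The input values
# below are hypothetical, chosen only to illustrate the arithmetic.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Numerator: sum of cross-products of deviations from the means.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # Denominator: product of the square roots of the sums of squares.
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(round(r, 3))
```

The "meaningless" correlations the exercise asks students to cross out (e.g., those involving id) are cases where the formula runs fine but the numbers carry no substantive meaning.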
10-1 Minimal answer provided for students.
r = -.21, p = .014: Students in lower numbered sections (e.g., sections 1 and 2) tend to score higher on quiz 1 than students in higher numbered sections.
r = .86, p < .001: Those who score higher on quiz 1 tend to score higher on quiz 3.
r = .83, p < .001: Those who score higher on quiz 1 tend to score higher on quiz 4.
r = .80, p < .001: Those who score higher on quiz 3 tend to score higher on quiz 4.
10-2 No answer provided for students.
r = -.26, p < .001: Those who have higher avoidant coping tend to have lower life satisfaction.
r = -.27, p < .001: Men tend to be less spiritual than women.
r = -.31, p < .001: Those who suffer greater trauma tend to experience lower overall life satisfaction.
r = .39, p < .001: Those with more avoidant coping tended to have suffered greater trauma.
r = .58, p < .001: The older someone is, the more likely he or she is to have been separated longer.
r = .61, p < .001: The older someone is, the more likely he or she is to have been married longer.
Chapter 11: The T Test Procedure
For questions 1-7, perform the following operations:
a) Print out results b) Circle the two mean values that are being compared. c) Circle the appropriate significance value (be sure to consider equal or unequal variance). d) For statistically significant results (p < .05) write up each finding in standard APA format.
1. Using the grades.sav file, compare men with women (gender) for quiz1, quiz2, quiz3, quiz4, quiz5, final, total.
2. Using the grades.sav file, determine whether the following pairings produce significant differences: quiz1 with quiz2, quiz1 with quiz3, quiz1 with quiz4, quiz1 with quiz5.
3. Using the grades.sav file, compare the GPA variable (gpa) with the mean GPA of the university of 2.89.
4. Using the divorce.sav file, compare men with women (sex) for lsatisfy, trauma, age, school, cogcope, behcope, avoicop, iq, close, locus, asq, socsupp, spiritua.
5. Using the helping3.sav file, compare men with women (gender) for age, school, income, hclose, hcontrot, sympathi, angert, hcopet, hseveret, empathyt, effict, thelplnz, tqualitz, tothelp. See the Data Files section (page 385) for meaning of each variable.
6. Using the helping3.sav file, determine whether the following pairings produce significant differences: sympathi with angert, sympathi with empathyt, empahelp with insthelp, empahelp with infhelp, insthelp with infhelp.
7. Using the helping3.sav file, compare the age variable (age) with the mean age for North Americans (33.0).
8. In an experiment, 10 participants were given a test of mental performance in stressful situations. Their scores were 2, 2, 4, 1, 4, 3, 0, 2, 7, and 5. Ten other participants were given the same test after they had been trained in stress-reducing techniques. Their scores were 4, 4, 6, 0, 6, 5, 2, 3, 6, and 4. Do the appropriate t test to determine if the group that had been trained had different mental performance scores than the group that had not been trained in stress reduction techniques. What do these results mean?
9. In a similar experiment, ten participants who were given a test of mental performance in stressful situations at the start of the study were then trained in stress reduction techniques, and were finally given the same test again at the end of the study. In an amazing coincidence, the participants received the same scores as the participants in question 8: The first two people in the study received a score of 2 on the pretest and a score of 4 on the posttest; the third person received a score of 4 on the pretest and 6 on the posttest; and so on. Do the appropriate t test to determine if there was a significant difference between the pretest and posttest scores. What do these results mean? How was this similar to and how was this different from the results in question 8? Why?
10. You happen to know that the population mean for the test of mental performance in stressful situations is exactly three. Do a t test to determine whether the post-test scores in #9 above (the same numbers as the training group scores in #8) are significantly different from three. What do these results mean? How was this similar to and how was this different from the results in question 9? Why?
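For instructors who want a hand check on question 8, the pooled-variance (equal variances assumed) independent-samples t test can be sketched in a few lines of Python, using the scores given in the question:

```python
# Independent-samples t test with pooled variance, computed by hand from
# the question 8 scores (control group vs. stress-reduction training group).
import math
from statistics import mean, variance

control = [2, 2, 4, 1, 4, 3, 0, 2, 7, 5]
trained = [4, 4, 6, 0, 6, 5, 2, 3, 6, 4]

n1, n2 = len(control), len(trained)
# Pooled variance: df-weighted average of the two sample variances.
sp2 = ((n1 - 1) * variance(control) + (n2 - 1) * variance(trained)) / (n1 + n2 - 2)
# Standard error of the difference between the two means.
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
t = (mean(trained) - mean(control)) / se
df = n1 + n2 - 2
print(f"t({df}) = {t:.3f}")
```

The result, t(18) = 1.118, matches the nonsignificant value SPSS reports for this comparison in the answer to exercise 11-8.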
11-1 Full answer provided for students.
Group Statistics
gender  N  Mean  Std. Deviation  Std. Error Mean
quiz1  1 Female  64  7.72    2.306   .288
       2 Male    41  7.07    2.715   .424
quiz2  1 Female  64  7.98    1.548   .194
       2 Male    41  7.98    1.753   .274
quiz3  1 Female  64  8.19    2.130   .266
       2 Male    41  7.66    2.555   .399
quiz4  1 Female  64  8.06    2.181   .273
       2 Male    41  7.39    2.397   .374
quiz5  1 Female  64  7.88    1.638   .205
       2 Male    41  7.85    1.969   .308
final  1 Female  64  62.36   7.490   .936
       2 Male    41  60.10   8.514   1.330
total  1 Female  64  102.03  13.896  1.737
       2 Male    41  98.29   17.196  2.686
Independent Samples Test
Columns: Levene's Test for Equality of Variances (F, Sig.); t-test for Equality of Means (t, df, Sig. 2-tailed, Mean Difference, Std. Error Difference, 95% CI of the Difference: Lower, Upper)

quiz1  Equal variances assumed      2.180  .143  1.305  103     .195  .646   .495   -.335   1.627
       Equal variances not assumed               1.259  75.304  .212  .646   .513   -.376   1.667
quiz2  Equal variances assumed      1.899  .171  .027   103     .979  .009   .326   -.638   .656
       Equal variances not assumed               .026   77.634  .979  .009   .335   -.659   .676
quiz3  Equal variances assumed      3.436  .067  1.147  103     .254  .529   .461   -.385   1.443
       Equal variances not assumed               1.103  74.189  .274  .529   .480   -.427   1.485
quiz4  Equal variances assumed      .894   .347  1.482  103     .141  .672   .454   -.227   1.572
       Equal variances not assumed               1.452  79.502  .151  .672   .463   -.249   1.594
quiz5  Equal variances assumed      4.103  .045  .060   103     .952  .021   .355   -.682   .725
       Equal variances not assumed               .058   74.071  .954  .021   .369   -.715   .757
final  Equal variances assumed      .093   .761  1.431  103     .156  2.262  1.581  -.874   5.397
       Equal variances not assumed               1.391  77.417  .168  2.262  1.626  -.976   5.500
total  Equal variances assumed      2.019  .158  1.224  103     .224  3.739  3.053  -2.317  9.794
       Equal variances not assumed               1.169  72.421  .246  3.739  3.198  -2.637  10.11
No results are statistically significant.
11-2 Full answer provided for students.
Paired Samples Statistics
Mean N Std. Deviation Std. Error Mean
Pair 1 quiz1 7.47 105 2.481 .242
quiz2 7.98 105 1.623 .158
Pair 2 quiz1 7.47 105 2.481 .242
quiz3 7.98 105 2.308 .225
Pair 3 quiz1 7.47 105 2.481 .242
quiz4 7.80 105 2.280 .223
Pair 4 quiz1 7.47 105 2.481 .242
quiz5 7.87 105 1.765 .172
Paired Samples Test
Columns: Paired Differences (Mean, Std. Deviation, Std. Error Mean, 95% Confidence Interval of the Difference: Lower, Upper); then t, df, Sig. (2-tailed)
Pair 1 quiz1 - quiz2 -.514 1.835 .179 -.869 -.159 -2.872 104 .005
Pair 2 quiz1 - quiz3 -.514 1.287 .126 -.763 -.265 -4.095 104 .000
Pair 3 quiz1 - quiz4 -.333 1.405 .137 -.605 -.061 -2.431 104 .017
Pair 4 quiz1 - quiz5 -.400 2.204 .215 -.827 .027 -1.860 104 .066
1. Students scored significantly higher on quiz 2 (M = 7.98, SD = 1.62) than on quiz 1 (M = 7.47, SD = 2.48), t(104) = -2.87, p = .005. 2. Students scored significantly higher on quiz 3 (M = 7.98, SD = 2.31) than on quiz 1 (M = 7.47, SD = 2.48), t(104) = -4.10, p < .001. [Notice that the mean values are identical with the first comparison but quiz 1 with quiz 3 pairing produces a much stronger result. This is due to a much narrower standard deviation for the second comparison (1.29) than for the first (1.84)] 3. Students scored significantly higher on quiz 4 (M = 7.80, SD = 2.28) than on quiz 1 (M = 7.47, SD = 2.48), t(104) = -2.43, p = .017.
11-3 Minimal answer provided for students.
The values do not differ significantly.
One-Sample Statistics
     N    Mean    Std. Deviation  Std. Error Mean
gpa  105  2.7789  .76380          .07454

One-Sample Test
Test Value = 2.89
     t       df   Sig. (2-tailed)  Mean Difference  95% CI Lower  Upper
gpa  -1.491  104  .139             -.11114          -.2590        .0367
11-4 Minimal answer provided for students.

Group Statistics
                              sex     N    Mean    Std. Deviation  Std. Error Mean
life satisfaction             female  119  4.8660  .91150   .08356
                              male    110  4.7599  .87911   .08382
trauma                        female  119  3.7637  1.09953  .10079
                              male    110  3.7290  1.05691  .10077
age                           female  119  43.01   9.243    .847
                              male    110  40.70   10.439   .995
number of years of schooling  female  119  4.01    2.569    .236
                              male    110  4.05    2.582    .246
cognitive-active coping       female  119  4.5309  .88380   .08102
                              male    110  4.2751  .97409   .09288
behcope                       female  119  4.4902  1.06313  .09746
                              male    110  4.2616  1.06771  .10180
avoicop                       female  119  2.5473  .84356   .07733
                              male    110  2.9216  .95646   .09119
iq                            female  119  7.41    2.184    .200
                              male    110  7.36    1.990    .190
amount of physical closeness  female  119  3.507   .9427    .0864
                              male    110  3.227   .9305    .0887
internal locus                female  119  7.89    1.534    .141
                              male    110  7.80    2.040    .195
asq                           female  119  3.4386  2.74079  .25125
                              male    110  2.6186  2.68773  .25626
social support                female  119  3.670   .9649    .0885
                              male    110  3.365   .7844    .0748
spiritua                      female  119  4.8020  1.07664  .09870
                              male    110  4.1383  1.29283  .12327

Women (M = 4.53, SD = .88) are significantly more likely to practice cognitive coping than men (M = 4.28, SD = .97), t(227) = 2.08, p = .038. Men (M = 2.92, SD = .96) are significantly more likely to practice avoidant coping than women (M = 2.55, SD = .84), t(227) = -3.13, p = .002. Women (M = 3.51, SD = .94) are significantly more likely to experience non-sexual physical closeness than men (M = 3.23, SD = .93), t(227) = 2.26, p = .025. Women (M = 3.44, SD = 2.74) are significantly more likely to have a positive attributional style than men (M = 2.62, SD = 2.69), t(227) = 2.24, p = .023. Women (M = 3.67, SD = .96) are significantly more likely to receive social support than men (M = 3.37, SD = .78), t(227) = 2.36, p = .019. Women (M = 4.80, SD = 1.08) have significantly higher personal spirituality than men (M = 4.14, SD = 1.29), t(227) = 4.20, p < .001.
11-5 No answer provided for students.

Independent Samples Test
For each variable: group statistics (N, Mean, SD), mean difference, Levene's test (F, Sig.), and the t test for equality of means (t, df, Sig. 2-tailed, Std. Error Difference, 95% CI of the Difference).

AGE       female N=325, M=30.36, SD=13.380; male N=212, M=32.86, SD=14.844; mean difference = -2.50; Levene F = 1.159, Sig. = .282
          Equal variances:   t = -2.028, df = 535,     p = .043, SE = 1.234, 95% CI [-4.925, -.078]
          Unequal variances: t = -1.984, df = 417.543, p = .048, SE = 1.261, 95% CI [-4.980, -.023]
SCHOOL    female N=325, M=4.94, SD=1.214; male N=212, M=5.14, SD=1.180; mean difference = -.20; Levene F = .678, Sig. = .411
          Equal variances:   t = -1.886, df = 535,     p = .060, SE = .106, 95% CI [-.408, .008]
          Unequal variances: t = -1.898, df = 460.258, p = .058, SE = .105, 95% CI [-.407, .007]
INCOME    female N=325, M=3.55, SD=1.336; male N=212, M=3.37, SD=1.355; mean difference = .19; Levene F = .219, Sig. = .640
          Equal variances:   t = 1.568, df = 535,     p = .118, SE = .119, 95% CI [-.047, .419]
          Unequal variances: t = 1.563, df = 446.563, p = .119, SE = .119, 95% CI [-.048, .420]
TCLOSE    female N=325, M=5.552, SD=1.3968; male N=212, M=5.047, SD=1.3465; mean difference = .505; Levene F = 2.808, Sig. = .094
          Equal variances:   t = 4.155, df = 535,     p = .000, SE = .1216, 95% CI [.2663, .7440]
          Unequal variances: t = 4.187, df = 462.717, p = .000, SE = .1206, 95% CI [.2681, .7422]
HCONTROT  female N=325, M=2.997, SD=1.6897; male N=212, M=3.540, SD=1.7435; mean difference = -.543; Levene F = .035, Sig. = .852
          Equal variances:   t = -3.596, df = 535,     p = .000, SE = .1511, 95% CI [-.8399, -.2464]
          Unequal variances: t = -3.572, df = 440.945, p = .000, SE = .1521, 95% CI [-.8420, -.2443]
SYMPATHI  female N=325, M=5.3651, SD=1.24227; male N=212, M=4.7909, SD=1.32055; mean difference = .5742; Levene F = 1.764, Sig. = .185
          Equal variances:   t = 5.107, df = 535,     p = .000, SE = .11245, 95% CI [.35336, .79514]
          Unequal variances: t = 5.042, df = 431.316, p = .000, SE = .11390, 95% CI [.35037, .79812]
ANGERT    female N=325, M=2.037, SD=1.5394; male N=212, M=2.422, SD=1.5153; mean difference = -.385; Levene F = .642, Sig. = .423
          Equal variances:   t = -2.849, df = 535,     p = .005, SE = .1351, 95% CI [-.6501, -.1194]
          Unequal variances: t = -2.858, df = 456.106, p = .004, SE = .1346, 95% CI [-.6493, -.1202]
HCOPET    female N=325, M=5.079, SD=1.2154; male N=212, M=4.785, SD=1.2052; mean difference = .294; Levene F = .015, Sig. = .903
          Equal variances:   t = 2.748, df = 535,     p = .006, SE = .1069, 95% CI [.0838, .5040]
          Unequal variances: t = 2.753, df = 453.777, p = .006, SE = .1068, 95% CI [.0841, .5037]
HSEVERET  female N=325, M=5.190, SD=1.5012; male N=212, M=4.748, SD=1.6619; mean difference = .442; Levene F = 2.346, Sig. = .126
          Equal variances:   t = 3.197, df = 535,     p = .001, SE = .1383, 95% CI [.1705, .7139]
          Unequal variances: t = 3.130, df = 418.234, p = .002, SE = .1413, 95% CI [.1645, .7199]
EMPATHY   female N=325, M=5.2126, SD=.90490; male N=212, M=4.6080, SD=.87123; mean difference = .6047; Levene F = .159, Sig. = .690
          Equal variances:   t = 7.681, df = 535,     p = .000, SE = .07873, 95% CI [.45003, .75934]
          Unequal variances: t = 7.742, df = 463.101, p = .000, SE = .07810, 95% CI [.45121, .75816]
EFFICT    female N=325, M=4.8136, SD=.96568; male N=212, M=4.5664, SD=.94300; mean difference = .2473; Levene F = .718, Sig. = .397
          Equal variances:   t = 2.927, df = 535,     p = .004, SE = .08447, 95% CI [.08132, .41318]
          Unequal variances: t = 2.942, df = 458.641, p = .003, SE = .08405, 95% CI [.08209, .41242]
THELPLNZ  female N=325, M=.1391, SD=.88466; male N=212, M=-.2109, SD=.98453; mean difference = .3500; Levene F = 3.077, Sig. = .080
          Equal variances:   t = 4.285, df = 535,     p = .000, SE = .08169, 95% CI [.18955, .51050]
          Unequal variances: t = 4.189, df = 416.542, p = .000, SE = .08355, 95% CI [.18579, .51425]
TQUALITZ  female N=325, M=.0961, SD=.86555; male N=212, M=-.1429, SD=.87909; mean difference = .2390; Levene F = .077, Sig. = .782
          Equal variances:   t = 3.108, df = 535,     p = .002, SE = .07689, 95% CI [.08792, .38999]
          Unequal variances: t = 3.098, df = 446.070, p = .002, SE = .07714, 95% CI [.08735, .39056]
TOTHELP   female N=325, M=.1176, SD=.70325; male N=212, M=-.1769, SD=.74986; mean difference = .2945; Levene F = .425, Sig. = .515
          Equal variances:   t = 4.620, df = 535,     p = .000, SE = .06374, 95% CI [.16928, .41970]
          Unequal variances: t = 4.558, df = 430.328, p = .000, SE = .06461, 95% CI [.16751, .42147]
Men in this sample (M = 32.86) are significantly older than women (M = 30.36), t(535) = -2.028, p = .043. Women are significantly closer to the friend they helped (M = 5.552) than men (M = 5.047), t(535) = 4.155, p < .001. Men rate problems significantly more controllable (M = 3.540) than women (M = 2.997), t(535) = -3.596, p < .001. Women experience significantly more sympathy (M = 5.365) than men (M = 4.791), t(535) = 5.107, p < .001. Men experience significantly more anger (M = 2.422) than women (M = 2.037), t(535) = -2.849, p = .005. Women perceive the recipient as coping significantly more successfully (M = 5.079) than men (M = 4.785), t(535) = 2.748, p = .006. Women rate problems significantly more severe (M = 5.190) than men (M = 4.748), t(535) = 3.197, p = .001. Women are significantly more empathic (M = 5.213) than men (M = 4.608), t(535) = 7.681, p < .001. In a help-giving setting women experience significantly higher efficacy (M = 4.814) than men (M = 4.566), t(535) = 2.927, p = .004. Women spend significantly more time helping (M = .1391) than men (M = -.2109), t(535) = 4.285, p < .001. Women provide a significantly higher quality of help (M = .0961) than men (M = -.1429), t(535) = 3.108, p = .002. Women provide significantly more total help (M = .1176) than men (M = -.1769), t(535) = 4.620, p < .001.
11-6 No answer provided for students.
Paired Samples Statistics
                                             Mean    N    Std. Deviation  Std. Error Mean
Pair 1  SYMPATHY MEASURE DELETING PITY       5.1384  537  1.30317  .05624
        MEAN RATING OF FOUR ANGER QUESTIONS  2.189   537  1.5401   .0665
Pair 2  SYMPATHY MEASURE DELETING PITY       5.1384  537  1.30317  .05624
        MEAN OF 14 EMPATHY QUESTIONS         4.9739  537  .93877   .04051
Pair 3  empahelp                             1.4769  537  1.69299  .07306
        insthelp                             1.4984  537  1.62761  .07024
Pair 4  empahelp                             1.4769  537  1.69299  .07306
        infhelp                              1.2251  537  1.33585  .05765
Pair 5  insthelp                             1.4984  537  1.62761  .07024
        infhelp                              1.2251  537  1.33585  .05765
Paired Samples Test
Columns: Paired Differences (Mean, Std. Deviation, Std. Error Mean, 95% CI of the Difference: Lower, Upper); then t, df, Sig. (2-tailed)
Pair 1  SYMPATHY MEASURE DELETING PITY - MEAN RATING OF FOUR ANGER QUESTIONS  2.94960  2.22116  .09585  2.76131  3.13788  30.773  536  .000
Pair 2  SYMPATHY MEASURE DELETING PITY - MEAN OF 14 EMPATHY QUESTIONS         .16449   1.40009  .06042  .04581   .28318   2.723   536  .007
Pair 3  empahelp - insthelp                                                   -.02150  1.26352  .05452  -.12861  .08560   -.394   536  .693
Pair 4  empahelp - infhelp                                                    .25176   .86909   .03750  .17809   .32543   6.713   536  .000
Pair 5  insthelp - infhelp                                                    .27326   1.17395  .05066  .17375   .37278   5.394   536  .000
In a help-giving setting helpers experienced significantly greater sympathy (M = 5.138) than anger (M = 2.189), t(536) = 30.77, p < .001. In a help-giving setting helpers experienced significantly greater sympathy (M = 5.138) than empathy (M = 4.974), t(536) = 2.72, p = .007.
Helpers provided significantly more empathic help (M = 1.477) than informational help (M = 1.2251), t(536) = 6.713, p < .001. Helpers provided significantly more instrumental help (M = 1.498) than informational help (M = 1.2251), t(536) = 5.394, p < .001.
11-7 No answer provided for students.
One-Sample Statistics
N Mean Std. Deviation Std. Error Mean
age 537 31.34 14.016 .605
One-Sample Test
Test Value = 33
     t       df   Sig. (2-tailed)  Mean Difference  95% CI Lower  Upper
age  -2.737  536  .006             -1.655           -2.84         -.47
The mean age of the sample (M = 31.34, SD = 14.02) is significantly lower than the population mean age of 33.0, t(536) = -2.74, p = .006.
11-8 Full answer provided for students.
There was not a significant difference between the mean for the treatment group (M = 4.00, SD = 1.94) and the control group (M = 3.00, SD = 2.06), t(18) = 1.12, p > .05.
Group Statistics
PERFORMANCE   CONDITION              N    Mean   Std. Deviation   Std. Error Mean
              Control                10   3.00   2.055            .650
              Treatment (training)   10   4.00   1.944            .615

Independent Samples Test
PERFORMANCE   Levene's Test for Equality of Variances: F = .134, Sig. = .718
t-test for Equality of Means
                              t        df       Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   Upper
Equal variances assumed       -1.118   18       .278              -1.00             .894                    -2.879         .879
Equal variances not assumed   -1.118   17.945   .278              -1.00             .894                    -2.880         .880
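The equal-variances t in this output can be checked by hand from the group statistics: pool the two variances (weighted by degrees of freedom), convert to a standard error of the difference, and divide. A small Python sketch (standard library only; the function name is ours) reproduces the result:

```python
import math

def independent_t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Independent-samples t (equal variances assumed) from group summaries."""
    # Pooled variance weights each group's variance by its degrees of freedom.
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2

# Control (M = 3.00, SD = 2.055) vs. treatment (M = 4.00, SD = 1.944), n = 10 each
t, df = independent_t_from_summary(3.00, 2.055, 10, 4.00, 1.944, 10)
print(round(t, 3), df)   # -1.118 18, as in the table
```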
11-9 Minimal answer provided for students.
Although the mean for the treatment condition (M = 4.00, SD = 1.94) appeared to be higher than the mean for the control condition (M = 3.00, SD = 2.06), this difference was not statistically significant (t(9) = 2.24, p > .05). In this case, because the p value is so close to significant, many people would say that the results are “marginally” significant, suggesting that there may be a relationship that needs further study. Others would say if it’s not significant, no other interpretation is appropriate. Note that the t value is twice the t value for question 8, and there are half as many degrees of freedom. This test comes closer to significance because it is a more powerful test, as the within-subjects variance (the part of the variance that is due to a particular person’s tendency to perform at a certain level) can be separated from the variance due to the manipulation (giving the stress reduction training).
11-10 No answer provided for students.
The scores of the individuals who were given the stress-reduction training were not significantly higher than the population mean of 3 (t(9) = 1.63, p > .05). This is testing something different from what was tested in question 8, because question 8 was testing to see if the treatment mean was different from a sample mean for the control condition (i.e., only ten people in the control condition). Here, the treatment mean is compared to the population mean (i.e., all people who you are interested in studying). Because of this, this test is more powerful (so the t is higher here).
Paired Samples Statistics
Pair 1   PERFCONT   Mean 3.00   N 10   Std. Deviation 2.055   Std. Error Mean .650
         PERFTREA   Mean 4.00   N 10   Std. Deviation 1.944   Std. Error Mean .615

Paired Samples Test
Pair 1   PERFCONT - PERFTREA
Paired Differences: Mean -1.00, Std. Deviation 1.414, Std. Error Mean .447, 95% CI of the Difference (-2.01, .01); t = -2.236, df = 9, Sig. (2-tailed) = .052

One-Sample Test (Test Value = 3)
PERFTREA   t = 1.627, df = 9, Sig. (2-tailed) = .138, Mean Difference = 1.00, 95% CI of the Difference (-.39, 2.39)
Additional Exercises
Independent-Samples t tests in DIVORCE.SAV File
1. Are there gender differences on the length of marriage prior to separation?
2. Are there gender differences on the number of children currently in custody?
3. Are there gender differences on level of income [coded on a low(1) to high(7) scale]?
4. Are there gender differences on the amount of cognitive coping used in divorce recovery [coded on a low(1) to high(7) scale]?
5. Are there gender differences on the amount of avoidant coping used in divorce recovery [coded on a low(1) to high(7) scale]?
6. Are there gender differences on the amount of non-sexual physical closeness experienced [coded on a low(1) to high(7) scale]?
7. Are there gender differences on the amount of social support experienced [coded on a little(1) to much(7) scale]?
8. Are there gender differences on attributional style [coded on a pessimistic attributional style(-6) to optimistic attributional style(+9) scale]?
9. Are there gender differences on the experience of personal spirituality [coded on a not spiritual(1) to very spiritual(7) scale]?
Independent-samples t tests in HELPING3.SAV File
12. Are there gender differences on the closeness of the relationship with the friend (hclose) they helped [coded on a distant(1) to close(7) scale]?
13. Are there gender differences on ratings of problem severity (hseveret) [coded on a mild(1) to severe(7) scale]?
14. Are there gender differences on ratings of the controllability of the problem cause (hcontrot) [coded uncontrollable(1) to controllable(7)]?
15. Are there gender differences on the amount of anger felt toward their needy friend (angert) [coded little(1) to much(7)]?
16. Are there gender differences on the helpers’ rating of how well the recipient is coping (hcopet) [coded coping poorly(1) to coping well(7)]?
17. Are there gender differences on the amount of efficacy felt in the helping context (effict) [coded low efficacy(1) to high efficacy(7)]?
18. Are there gender differences on self ratings of empathic tendency (empathyt) [coded low empathy(1) to high empathy(7)]?
19. Are there gender differences on the amount of sympathy experienced in the helping context (sympathi) [coded low sympathy(1) to high(7)]?
20. Are there cathelp differences on the rating of the closeness of the friendship (hclose) [coded distant(1) to close(7)]?
21. Are there cathelp differences on the rating of problem severity (hseveret) [coded mild(1) to severe(7)]?
22. Are there cathelp differences on the amount of worry experienced concerning their friend’s problem (worry) [coded little(1) to much(7)]?
23. Are there cathelp differences on the amount of obligation felt concerning their friend’s problem (obligat) [coded little(1) to much(7)]?
24. Are there cathelp differences on the efficacy of the person trying to help (effict) [coded low efficacy(1) to high efficacy(7)]?
25. Are there cathelp differences on a self-rating of empathic tendency (empathyt) [coded low empathy(1) to high empathy(7)]?
26. Are there cathelp differences on the amount of sympathy experienced concerning their friend’s problem (sympathi) [coded little(1) to much(7)]?
Chapter 12: The One-Way ANOVA Procedure

Perform one-way ANOVAs with the specifications listed below. If there are significant findings, write them up in APA format (or in the professional format associated with your discipline). Examples of correct APA format are shown on the web site. Further, notice that the final five problems make use of the helping3.sav data file. This data set (and all data files used in this book) is also available for download at the website listed above. For the meaning and specification of each variable, make use of the Data Files section of this book beginning on page 385.
1. File: grades.sav; dependent variable: quiz4; factor: ethnic (2,5); use LSD procedure for post hoc comparisons, compute two planned comparisons. This problem asks you to reproduce the output on pages 170-172. Note that you will need to perform a select-cases procedure (see page 166) to delete the “1 = Native” category.
2. File: helping3.sav; dependent variable: tothelp; factor: ethnic (1,4); use LSD procedure for post hoc comparisons, compute two planned comparisons.
3. File: helping3.sav; dependent variable: tothelp; factor: problem (1,4); use LSD procedure for post hoc comparisons, compute two planned comparisons.
4. File: helping3.sav; dependent variable: angert; factor: occupat (1,6); use LSD procedure for post hoc comparisons, compute two planned comparisons.
5. File: helping3.sav; dependent variable: sympathi; factor: occupat (1,6); use LSD procedure for post hoc comparisons, compute two planned comparisons.
6. File: helping3.sav; dependent variable: effict; factor: ethnic (1,4); use LSD procedure for post hoc comparisons, compute two planned comparisons.
12-1 Full answer provided for students.
Descriptives
quiz4
N Mean Std. Deviation Std. Error
95% Confidence Interval for Mean
Minimum Maximum Lower Bound Upper Bound
Asian 20 8.35 1.531 .342 7.63 9.07 6 10
Black 24 7.75 2.132 .435 6.85 8.65 4 10
White 45 8.04 2.256 .336 7.37 8.72 2 10
Hispanic 11 6.27 3.319 1.001 4.04 8.50 2 10
Total 100 7.84 2.286 .229 7.39 8.29 2 10
ANOVA
quiz4 Sum of Squares df Mean Square F Sig.
Between Groups (Combined) 34.297 3 11.432 2.272 .085
Linear Term Unweighted 26.464 1 26.464 5.258 .024
Weighted 14.484 1 14.484 2.878 .093
Deviation 19.813 2 9.906 1.968 .145
Within Groups 483.143 96 5.033
Total 517.440 99
Contrast Coefficients
Contrast
ethnicity
Asian Black White Hispanic
1 1 1 -1 -1
2 1 1 1 -3
Contrast Tests
Contrast Value of Contrast Std. Error t df Sig. (2-tailed)
quiz4 Assume equal variances 1 1.78 1.015 1.756 96 .082
2 5.33 2.166 2.459 96 .016
Does not assume equal variances 1 1.78 1.192 1.495 19.631 .151
2 5.33 3.072 1.734 10.949 .111
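The contrast values and standard errors above can be verified from the descriptives and the ANOVA's mean square within: the value is the coefficient-weighted sum of the group means, and SE = sqrt(MS_within * sum(c_i^2 / n_i)). A Python sketch (standard library only; the helper name is ours, and the group means are the rounded values from the table, so t differs from SPSS in the third decimal):

```python
import math

def contrast_test(coeffs, means, ns, ms_within):
    """Value, SE, and t for a planned comparison (equal variances assumed)."""
    value = sum(c * m for c, m in zip(coeffs, means))
    se = math.sqrt(ms_within * sum(c**2 / n for c, n in zip(coeffs, ns)))
    return value, se, value / se

# Contrast 2 from the output: coefficients (1, 1, 1, -3) over
# Asian, Black, White, Hispanic; MS_within = 5.033 from the ANOVA table.
value, se, t = contrast_test([1, 1, 1, -3],
                             [8.35, 7.75, 8.04, 6.27],
                             [20, 24, 45, 11],
                             5.033)
print(round(value, 2), round(se, 3), round(t, 2))   # 5.33 2.166 2.46
```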
Post Hoc Tests Multiple Comparisons
Dependent Variable: quiz4 LSD
(I) ethnicity (J) ethnicity Mean Difference (I-J) Std. Error Sig.
95% Confidence Interval
Lower Bound Upper Bound
Asian Black .600 .679 .379 -.75 1.95
White .306 .603 .613 -.89 1.50
Hispanic 2.077* .842 .015 .41 3.75
Black Asian -.600 .679 .379 -1.95 .75
White -.294 .567 .605 -1.42 .83
Hispanic 1.477 .817 .074 -.14 3.10
White Asian -.306 .603 .613 -1.50 .89
Black .294 .567 .605 -.83 1.42
Hispanic 1.772* .755 .021 .27 3.27
Hispanic Asian -2.077* .842 .015 -3.75 -.41
Black -1.477 .817 .074 -3.10 .14
White -1.772* .755 .021 -3.27 -.27
*. The mean difference is significant at the 0.05 level.

A one-way ANOVA revealed marginally significant ethnic differences for scores on Quiz 4, F(3, 96) = 2.27, p = .085. Post hoc comparisons using the LSD procedure with an alpha value of .05 found that Whites (M = 8.04) and Asians (M = 8.35) scored significantly higher than Hispanics (M = 6.27).
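The LSD entries are simply unadjusted pairwise t tests that share the ANOVA's MS_within in their standard error. A Python sketch (standard library only; the helper name is ours) checks the Asian vs. Hispanic comparison: the .842 standard error matches the table, and the difference of 2.08 departs from the printed 2.077 only because the table's means are rounded:

```python
import math

def lsd_comparison(m1, n1, m2, n2, ms_within):
    """Mean difference, SE, and t for an LSD (unadjusted pairwise) comparison.
    LSD is a series of t tests that all borrow the ANOVA's MS_within."""
    diff = m1 - m2
    se = math.sqrt(ms_within * (1 / n1 + 1 / n2))
    return diff, se, diff / se

# Asian (M = 8.35, n = 20) vs. Hispanic (M = 6.27, n = 11); MS_within = 5.033
diff, se, t = lsd_comparison(8.35, 20, 6.27, 11, 5.033)
print(round(diff, 2), round(se, 3), round(t, 2))   # 2.08 0.842 2.47
```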
12-2 Full answer provided for students.
Post Hoc Tests COMBINED HELP MEASURE—QUANTITY & QUALITY LSD
A one-way ANOVA revealed marginally significant ethnic differences for the total amount of help given, F(3, 489) = 2.24, p = .083. Post hoc comparisons using the LSD procedure found that Blacks (M = .16, SD = .68) provided significantly more total help than Asians (M = -.18, SD = .76), p = .013.
12-3 Minimal answer provided for students.
Descriptives
COMBINED HELP MEASURE--QUANTITY & QUALITY
N   Mean   Std. Deviation   Std. Error   95% Confidence Interval for Mean (Lower Bound, Upper Bound)   Minimum   Maximum
GOAL DISRUPTIVE 229 -.1245 .73671 .04868 -.2204 -.0286 -2.88 1.59
RELATIONAL BREAK 209 .0715 .70706 .04891 -.0249 .1679 -2.02 1.54
ILLNESS 86 .1254 .77712 .08380 -.0412 .2921 -2.51 1.69
CATASTROPHIC 13 .2697 .57947 .16072 -.0805 .6198 -.83 1.19
Total 537 .0013 .73557 .03174 -.0610 .0637 -2.88 1.69
ANOVA
COMBINED HELP MEASURE--QUANTITY & QUALITY
Sum of Squares df Mean Square F Sig.
Between Groups 6.915 3 2.305 4.340 .005
Within Groups 283.094 533 .531
Total 290.008 536
Contrast Tests
COMBINED HELP MEASURE--QUANTITY & QUALITY
                                  Contrast   Value of Contrast   Std. Error   t        df       Sig. (2-tailed)
Assume equal variances            1          -.8401              .26542       -3.165   533      .002
                                  2          -.0517              .22780       -.227    533      .820
Does not assume equal variances   1          -.8401              .23785       -3.532   54.993   .001
                                  2          -.0517              .19394       -.267    25.161   .792
Contrast Coefficients
           TYPE OF PROBLEM EXPERIENCED
Contrast   GOAL DISRUPTIVE   RELATIONAL BREAK   ILLNESS   CATASTROPHIC
1          3                 -1                 -1        -1
2          1                 -1                 -1        1
Multiple Comparisons
COMBINED HELP MEASURE--QUANTITY & QUALITY LSD
(I) TYPE OF PROBLEM EXPERIENCED   (J) TYPE OF PROBLEM EXPERIENCED   Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower Bound   Upper Bound
GOAL DISRUPTIVE                   RELATIONAL BREAK                  -.19597*                .06972       .005   -.3329               -.0590
                                  ILLNESS                           -.24994*                .09217       .007   -.4310               -.0689
                                  CATASTROPHIC                      -.39417                 .20779       .058   -.8023               .0140
RELATIONAL BREAK                  GOAL DISRUPTIVE                   .19597*                 .06972       .005   .0590                .3329
                                  ILLNESS                           -.05397                 .09337       .563   -.2374               .1294
                                  CATASTROPHIC                      -.19820                 .20832       .342   -.6074               .2110
ILLNESS                           GOAL DISRUPTIVE                   .24994*                 .09217       .007   .0689                .4310
                                  RELATIONAL BREAK                  .05397                  .09337       .563   -.1294               .2374
                                  CATASTROPHIC                      -.14423                 .21687       .506   -.5702               .2818
CATASTROPHIC                      GOAL DISRUPTIVE                   .39417                  .20779       .058   -.0140               .8023
                                  RELATIONAL BREAK                  .19820                  .20832       .342   -.2110               .6074
                                  ILLNESS                           .14423                  .21687       .506   -.2818               .5702
*. The mean difference is significant at the 0.05 level.
A one-way ANOVA revealed significant differences between problem types on the total amount of help given, F(3, 533) = 4.34, p = .005. Post hoc comparisons using the LSD procedure with an alpha value of .05 found that less help was given for goal disruptive problems (M = -.12) than for either relational problems (M = .07) or illness problems (M = .13).
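The F ratio reported here is just the between-groups mean square over the within-groups mean square. A one-function Python check (standard library only; function name is ours) against the ANOVA table above:

```python
def anova_f(ss_between, df_between, ss_within, df_within):
    """F ratio from an ANOVA summary table: MS_between / MS_within."""
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between, ms_within, ms_between / ms_within

# Sums of squares and df from the ANOVA table above (helping3.sav, problem type)
ms_b, ms_w, f = anova_f(6.915, 3, 283.094, 533)
print(round(ms_b, 3), round(ms_w, 3), round(f, 2))   # 2.305 0.531 4.34
```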
12-4 Minimal answer provided for students.

Descriptives: Dependent Variable: ANGERT
                       N     Mean    Std. Deviation   Std. Error   95% CI Lower   Upper   Minimum   Maximum
1 Professional         132   2.058   1.5045           .1310        1.799          2.317   1.0       7.0
2 Service Support      118   2.371   1.6269           .1498        2.075          2.668   1.0       6.8
3 Blue collar          19    2.553   1.9372           .4444        1.619          3.486   1.0       7.0
4 Unemployed/retired   43    1.802   1.2627           .1926        1.414          2.191   1.0       6.3
5 Student              200   2.146   1.5059           .1065        1.936          2.356   1.0       7.0
6 DTS                  25    2.748   1.5278           .3056        2.117          3.379   1.0       6.0
Total                  537   2.189   1.5401           .0665        2.058          2.319   1.0       7.0

ANOVA: Dependent Variable: ANGERT
                 Sum of Squares   df    Mean Square   F       Sig.
Between Groups   23.294           5     4.659         1.982   .080
Within Groups    1248.019         531   2.350
Total            1271.313         536

Contrast Coefficients
Contrast   1 Professional   2 Service/Support   3 Blue collar   4 Unemployed/retired   5 Student   6 DTS
1          -3               2                   2               2                      -3          0
2          -5               1                   1               1                      1           1

Contrast Tests
ANGERT                            Contrast   Value of Contrast   Std. Error   t       df        Sig. (2-tailed)
Assume equal variances            1          .839                1.0291       .816    531       .415
                                  2          1.328               .8656        1.535   531       .125
Does not assume equal variances   1          .839                1.1333       .741    46.473    .463
                                  2          1.328               .8891        1.494   157.337   .137
Multiple Comparisons; Dependent Variable: ANGERT; Post hoc procedure: LSD
(I) Occupation       (J) Occupation       Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower   Upper
Professional         Service Support      -.313                   .1942        .108   -.694          .069
                     Blue collar          -.494                   .3762        .189   -1.233         .245
                     Unemployed/retired   .256                    .2692        .342   -.273          .785
                     Student              -.088                   .1719        .610   -.425          .250
                     DTS                  -.690*                  .3344        .040   -1.347         -.033
Service Support      Blue collar          -.181                   .3790        .632   -.926          .563
                     Unemployed/retired   .569*                   .2731        .038   .032           1.105
                     Student              .225                    .1780        .206   -.124          .575
                     DTS                  -.377                   .3375        .265   -1.040         .286
Blue collar          Unemployed/retired   .750                    .4223        .076   -.079          1.580
                     Student              .407                    .3680        .270   -.316          1.130
                     DTS                  -.195                   .4666        .676   -1.112         .721
Unemployed/retired   Student              -.344                   .2577        .183   -.850          .163
                     DTS                  -.946*                  .3856        .015   -1.703         -.188
Student              DTS                  -.602                   .3252        .065   -1.241         .037
* The mean difference is significant at the .05 level.

A one-way ANOVA revealed marginally significant differences for the amount of anger experienced based on the occupation of the helper, F(5, 531) = 1.982, p = .080. Post hoc comparisons using the LSD procedure found greater anger was experienced by those who chose not to state their occupation (M = 2.75, SD = 1.53) than by either unemployed/retired persons (M = 1.80, SD = 1.26) or professional persons (M = 2.06, SD = 1.50). It was also found that service/support workers (M = 2.37, SD = 1.63) experienced more anger than those who were unemployed (M = 1.80, SD = 1.26).
12-5 No answer provided for students.

Descriptives: Dependent Variable: SYMPATHI
                       N     Mean     Std. Deviation   Std. Error   95% CI Lower   Upper    Minimum   Maximum
1 Professional         132   5.2955   1.2118           .1055        5.0868         5.5041   2.00      7.00
2 Service Support      118   5.3842   1.2301           .1132        5.1599         5.6084   1.00      7.00
3 Blue collar          19    5.0350   1.6098           .3693        4.2592         5.8110   1.33      7.00
4 Unemployed/retired   43    5.4884   1.2026           .1834        5.1183         5.8585   2.33      7.00
5 Student              200   4.8717   1.3740           .0972        4.6801         5.0633   1.00      7.00
6 DTS                  25    4.7600   1.0024           .2005        4.3462         5.1738   2.67      6.67
Total                  537   5.1384   1.3032           .0562        5.0280         5.2489   1.00      7.00

ANOVA: Dependent Variable: SYMPATHI
                 Sum of Squares   df    Mean Square   F       Sig.
Between Groups   33.663           5     6.733         4.078   .001
Within Groups    876.604          531   1.651
Total            910.226          536

Contrast Coefficients
Contrast   1 Professional   2 Service/Support   3 Blue collar   4 Unemployed/retired   5 Student   6 DTS
1          -3               2                   2               2                      -3          0
2          -5               1                   1               1                      1           1

Contrast Tests
SYMPATHI                          Contrast   Value of Contrast   Std. Error   t        df        Sig. (2-tailed)
Assume equal variances            1          1.3139              .8625        1.523    531       .128
                                  2          -.9380              .7254        -1.293   531       .197
Does not assume equal variances   1          1.3139              .9573        1.373    49.115    .176
                                  2          -.9380              .7146        -1.313   151.578   .191
Multiple Comparisons; Dependent Variable: SYMPATHI; Post hoc procedure: LSD
(I) Occupation       (J) Occupation       Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower   Upper
Professional         Service Support      -.0887                  .1628        .586   -.4085         .2310
                     Blue collar          .2604                   .3153        .409   -.3590         .8797
                     Unemployed/retired   -.1929                  .2256        .393   -.6361         .2503
                     Student              .4238*                  .1441        .003   .1407          .7068
                     DTS                  .5355                   .2803        .057   -.0151         1.0860
Service Support      Blue collar          .3491                   .3176        .272   -.2748         .9730
                     Unemployed/retired   -.1042                  .2289        .649   -.5538         .3454
                     Student              .5125*                  .1492        .001   .2195          .8055
                     DTS                  .6242*                  .2829        .028   .0685          1.1799
Blue collar          Unemployed/retired   -.4533                  .3540        .201   -1.1486        .2420
                     Student              .1634                   .3085        .596   -.4423         .7694
                     DTS                  .2751                   .3911        .482   -.4931         1.0433
Unemployed/retired   Student              .6167*                  .2160        .004   .1924          1.0410
                     DTS                  .7284*                  .3232        .025   .0936          1.3632
Student              DTS                  .1117                   .2726        .682   -.4238         .6471
* The mean difference is significant at the .05 level.

A one-way ANOVA revealed significant differences for the amount of sympathy experienced based on the occupation of the helper, F(5, 531) = 4.078, p = .001. Post hoc comparisons using the Least Significant Differences procedure with an alpha value of .05 found less sympathy was experienced by students (M = 4.87) than by either unemployed persons (M = 5.49), professional persons (M = 5.30), or service workers (M = 5.38). It was also found that those who declined to state their occupation experienced less sympathy (M = 4.76) than service workers or unemployed persons.
12-6 No answer provided for students.
Descriptives
MEAN OF 14 EFFICACY MEASURES
           N     Mean     Std. Deviation   Std. Error   95% CI Lower Bound   Upper Bound   Minimum   Maximum
WHITE      293   4.7587   .93669           .05472       4.6510               4.8664        1.79      7.00
BLACK      50    4.6186   1.03052          .14574       4.3257               4.9114        2.43      7.00
HISPANIC   80    4.7268   1.08558          .12137       4.4852               4.9684        1.86      7.00
ASIAN      70    4.4847   .88989           .10636       4.2725               4.6969        2.64      6.57
Total      493   4.7004   .96758           .04358       4.6148               4.7860        1.79      7.00

ANOVA
MEAN OF 14 EFFICACY MEASURES
                 Sum of Squares   df    Mean Square   F       Sig.
Between Groups   4.642            3     1.547         1.659   .175
Within Groups    455.977          489   .932
Total            460.619          492

Contrast Coefficients
           ethnic
Contrast   WHITE   BLACK   HISPANIC   ASIAN
1          4       -1      -1         -1
2          1       -1      -1         1

Contrast Tests
MEAN OF 14 EFFICACY MEASURES
                                  Contrast   Value of Contrast   Std. Error   t        df        Sig. (2-tailed)
Assume equal variances            1          5.2046a             .30748       16.926   489       .000
                                  2          -.1020              .21635       -.471    489       .637
Does not assume equal variances   1          5.2046a             .30854       16.868   418.197   .000
                                  2          -.1020              .22423       -.455    182.665   .650
a. The sum of the contrast coefficients is not zero.
Post Hoc Tests
A one-way ANOVA did not reveal significant ethnic differences for the amount of self-efficacy experienced by subjects, F(3, 489) = 1.659, p = .175. However, post hoc comparisons using the Least Significant Differences procedure with an alpha value of .05 found that Caucasians experienced significantly more efficacy (M = 4.76) than Asians (M = 4.48).
Multiple Comparisons
Dependent Variable: MEAN OF 14 EFFICACY MEASURES; LSD
(I) ethnic   (J) ethnic   Mean Difference (I-J)   Std. Error   Sig.   95% CI Lower Bound   Upper Bound
WHITE        BLACK        .14008                  .14776       .344   -.1502               .4304
             HISPANIC     .03187                  .12181       .794   -.2075               .2712
             ASIAN        .27396*                 .12847       .033   .0215                .5264
BLACK        WHITE        -.14008                 .14776       .344   -.4304               .1502
             HISPANIC     -.10821                 .17408       .534   -.4503               .2338
             ASIAN        .13388                  .17880       .454   -.2174               .4852
HISPANIC     WHITE        -.03187                 .12181       .794   -.2712               .2075
             BLACK        .10821                  .17408       .534   -.2338               .4503
             ASIAN        .24209                  .15804       .126   -.0684               .5526
ASIAN        WHITE        -.27396*                .12847       .033   -.5264               -.0215
             BLACK        -.13388                 .17880       .454   -.4852               .2174
             HISPANIC     -.24209                 .15804       .126   -.5526               .0684
*. The mean difference is significant at the .05 level.
Additional Exercises
Using DIVORCE.SAV:
1. Does one’s marital status (4 levels) have an influence on their level of income [coded on a low(1) to high(7) scale]?
2. Is one’s marital status (4 levels) associated with the number of years of schooling completed [coded on a <11 yr(1) to >19 yr(10) scale]?
3. Is one’s marital status (4 levels) associated with chronological age [range, 23-76]?
4. Is one’s marital status (4 levels) associated with the amount of behavioral coping that is practiced [coded on a little(1) to much(7) scale]?
5. Is one’s marital status (4 levels) associated with one’s level of personal spirituality [coded on a low(1) to high(7) scale]?
Using HELPING3.SAV:
6. Is the type of problem experienced (problem, 4 levels) associated with the age of the helper?
7. Is the type of problem experienced (problem, 4 levels) associated with the rating of problem severity (hseveret)?
8. Is the type of problem experienced (problem, 4 levels) associated with how much the helper worries (worry)?
9. Is the type of problem experienced (problem, 4 levels) associated with the helper’s perception of how well the recipient is coping (hcopet)?
10. Is the type of problem experienced (problem, 4 levels) associated with the amount of sympathy (sympathi) experienced by the helper?
11. Is the type of problem experienced (problem, 4 levels) associated with the total amount of help given (tothelp)?
12. Is the type of problem experienced (problem, 4 levels) associated with the amount of time spent helping (thelplnz)?
13. Is the amount of education (school, 5 levels) associated with the rating of problem severity (hseveret)?
14. Is the amount of education (school, 6 levels) associated with the quality of the help given (tqualitz)?
15. Is the ethnicity of the helper (ethnic, 4 levels) associated with the quality of help given (tqualitz)?
Chapter 14: Three-Way ANOVA

Notice that data files other than the grades.sav file are being used here. Please refer to the Data Files section starting on page 385 to acquire all necessary information about these files and the meaning of the variables. As a reminder, all data files are downloadable from the web address shown above.

For the first five problems below, perform the following: Print out the cell means portion of the output. Print out the ANOVA results (main effects, interactions, and so forth). Interpret and write up correctly (APA format) all main effects and interactions. Create multiple-line graphs (or clustered bar charts) for all significant interactions.
1. File: helping3.sav; dependent variable: tothelp; independent variables: gender, problem.
2. File: helping3.sav; dependent variable: tothelp; independent variables: gender, income.
3. File: helping3.sav; dependent variable: hseveret; independent variables: ethnic, problem.
4. File: helping3.sav; dependent variable: thelplnz; independent variables: gender, problem; covariate: tqualitz.
5. File: helping3.sav; dependent variable: thelplnz; independent variables: gender, income, marital.

6. In an experiment, participants were given a test of mental performance in stressful situations. Some participants were given no stress-reduction training, some were given a short stress-reduction training session, and some were given a long stress-reduction training session. In addition, some participants who were tested had a low level of stress in their lives, and others had a high level of stress in their lives. Perform an ANOVA on these data (listed below). What do the results mean?

Training:             None         Short
Life Stress:          High   Low   High
Performance Score:    5      4     2
                      5      4     4
                      4      6     6
                      2      6     4
                      5      4     3

Training:             Short   Long
Life Stress:          Low     High   Low
Performance Score:    7       6      6
                      5       7      5
                      5       5      3
                      5       7      7
                      9       9      8
7. In an experiment, participants were given a test of mental performance in stressful situations. Some participants were given no stress-reduction training, and some were given a stress-reduction training session. In addition, some participants who were tested had a low level of stress in their lives, and others had a high level of stress in their lives. Finally, some participants were tested after a full night's sleep, and some were tested after an all-night study session on three-way ANOVA. Perform an ANOVA on these data (listed below question 8; ignore the "caffeine" column for now). What do these results mean?
8. In the experiment described in problem 7, data were also collected for caffeine levels. Perform an ANOVA on these data (listed below). What do these results mean? What is similar to and different from the results in question 7?
Training?   Stress Level   Sleep/Study   Performance   Caffeine
No          Low            Sleep         8             12
No          Low            Sleep         9             13
No          Low            Sleep         8             15
No          Low            Study         15            10
No          Low            Study         14            10
No          Low            Study         15            11
No          High           Sleep         10            14
No          High           Sleep         11            15
No          High           Sleep         11            16
No          High           Study         18            11
No          High           Study         19            10
No          High           Study         19            11
Yes         Low            Sleep         18            11
Yes         Low            Sleep         17            10
Yes         Low            Sleep         18            11
Yes         Low            Study         10            4
Yes         Low            Study         10            4
Yes         Low            Study         11            4
Yes         High           Sleep         22            14
Yes         High           Sleep         22            14
Yes         High           Sleep         23            14
Yes         High           Study         13            5
Yes         High           Study         13            5
Yes         High           Study         12            4
14-1 Full answer provided for students.
Between-Subjects Factors
Value Label N
Gender 1 FEMALE 294
2 MALE 199
TYPE OF PROBLEM EXPERIENCED   1 GOAL DISRUPTIVE    207
                              2 RELATIONAL BREAK   189
                              3 ILLNESS            84
                              4 CATASTROPHIC       13
Descriptive Statistics
Dependent Variable: COMBINED HELP MEASURE--QUANTITY & QUALITY
gender   TYPE OF PROBLEM EXPERIENCED   Mean   Std. Deviation   N
FEMALE GOAL DISRUPTIVE -.0299 .68184 105
RELATIONAL BREAK .1516 .72524 132
ILLNESS .2901 .71572 50
CATASTROPHIC .3449 .62825 7
Total .1149 .71313 294
MALE GOAL DISRUPTIVE -.2752 .77680 102
RELATIONAL BREAK -.0802 .68315 57
ILLNESS -.1298 .82601 34
CATASTROPHIC .1820 .56134 6
Total -.1807 .75724 199
Total GOAL DISRUPTIVE -.1507 .73870 207
RELATIONAL BREAK .0817 .71895 189
ILLNESS .1201 .78529 84
CATASTROPHIC .2697 .57947 13
Total -.0044 .74478 493
Tests of Between-Subjects Effects
Dependent Variable: COMBINED HELP MEASURE--QUANTITY & QUALITY Source Type III Sum of Squares df Mean Square F Sig. Partial Eta Squared
Corrected Model 17.019a 7 2.431 4.608 .000 .062
Intercept .510 1 .510 .966 .326 .002
gender 2.785 1 2.785 5.278 .022 .011
problem 5.879 3 1.960 3.714 .012 .022
gender * problem .581 3 .194 .367 .777 .002
Error 255.894 485 .528
Total 272.923 493
Corrected Total 272.913 492
a. R Squared = .062 (Adjusted R Squared = .049)

Estimated Marginal Means
1. Grand Mean
Dependent Variable: COMBINED HELP MEASURE--QUANTITY &
QUALITY
Mean Std. Error
95% Confidence Interval
Lower Bound Upper Bound
.057 .058 -.057 .170
2. gender Dependent Variable: COMBINED HELP MEASURE--QUANTITY & QUALITY
gender Mean Std. Error
95% Confidence Interval
Lower Bound Upper Bound
FEMALE .189 .077 .038 .341
MALE -.076 .086 -.244 .093
3. TYPE OF PROBLEM EXPERIENCED
Dependent Variable: COMBINED HELP MEASURE--QUANTITY & QUALITY
TYPE OF PROBLEM EXPERIENCED   Mean   Std. Error
95% Confidence Interval
Lower Bound Upper Bound
GOAL DISRUPTIVE -.153 .050 -.252 -.053
RELATIONAL BREAK .036 .058 -.077 .149
ILLNESS .080 .081 -.078 .239
CATASTROPHIC .263 .202 -.134 .660
4. gender * TYPE OF PROBLEM EXPERIENCED
Dependent Variable: COMBINED HELP MEASURE--QUANTITY & QUALITY
gender
TYPE OF PROBLEM EXPERIENCED   Mean   Std. Error
95% Confidence Interval
Lower Bound Upper Bound
FEMALE GOAL DISRUPTIVE -.030 .071 -.169 .109
RELATIONAL BREAK .152 .063 .027 .276
ILLNESS .290 .103 .088 .492
CATASTROPHIC .345 .275 -.195 .884
MALE GOAL DISRUPTIVE -.275 .072 -.416 -.134
RELATIONAL BREAK -.080 .096 -.269 .109
ILLNESS -.130 .125 -.375 .115
CATASTROPHIC .182 .297 -.401 .765
(The chart (left) is included for demonstration only. There is no significant interaction in the present results.) A 2-way ANOVA was conducted to determine the influence of gender and type of problem on the total amount of help given. Results showed a significant main effect for gender in which women (M = .11) gave slightly more help than men (M = -.18), F(1, 485) = 5.28, p = .022, η2 = .01. There was also a significant (but small) main effect for problem type, F(3, 485) = 3.71, p = .012, η2 = .02. There was no significant gender by problem type interaction.
14-2 Minimal answer provided for students.

Estimated Marginal Means
Dependent Variable: TOTHELP (total amount of help provided)

gender   Mean    Std. Error
FEMALE   .159    .044
MALE     -.217   .052

income    Mean    Std. Error
<15,000   .045    .084
<25,000   -.065   .100
<50,000   -.044   .072
>50,000   .103    .057
DTS       -.186   .062

gender   income    Mean    Std. Error
FEMALE   <15,000   .381    .111
         <25,000   .217    .137
         <50,000   .039    .087
         >50,000   .160    .075
         DTS       -.004   .072
MALE     <15,000   -.292   .126
         <25,000   -.347   .145
         <50,000   -.127   .114
         >50,000   .046    .086
         DTS       -.367   .102
Tests of Between-Subjects Effects
Source Type III Sum of Squares df Mean Square F Sig.
Partial Eta Squared
Corrected Model 22.639(a) 9 2.515 4.958 .000 .078
Intercept .375 1 .375 .740 .390 .001
gender 15.289 1 15.289 30.136 .000 .054
income 6.392 4 1.598 3.150 .014 .023
gender * income 5.274 4 1.318 2.599 .035 .019
Error 267.370 527 .507
Total 290.009 537
Corrected Total 290.008 536
Multiple Comparisons (note: only significant results are shown)
(I) Income   (J) Income   Mean Difference (I-J)   Std. Error   Sig.
DTS          <15K         -.2104*                 .1019        .039
DTS          >50K         -.2355*                 .0814        .004
A 2-way ANOVA was conducted to determine the influence of gender and level of income on the total amount of help given. Results showed a significant main effect for gender in which women (M = .16, SE = .04) gave more help than men (M = -.22, SE = .05), F(1, 527) = 30.14, p < .001, η2 = .05. There was also a significant main effect for level of income, F(4, 527) = 3.15, p = .014, η2 = .02. Post hoc comparisons using the LSD procedure revealed that subjects unwilling to state their income gave less total help (M = -.19, SE = .06) than subjects making less than 15,000 per year (M = .04, SE = .08) or subjects making more than 50,000 per year (M = .10, SE = .06). There was also a significant gender by income interaction, F(4, 527) = 2.60, p = .035, η2 = .02. While at all income levels women helped more than men, for participants making less than 25,000 the gender discrepancy was large, while for subjects making more than 25,000 the gender discrepancy was small.
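The partial eta squared values reported in the Tests of Between-Subjects Effects table come directly from the Type III sums of squares: SS_effect / (SS_effect + SS_error). A quick Python check (standard library only; function name is ours):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared for a factorial ANOVA effect:
    SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Type III sums of squares from the table above; SS_error = 267.370
for name, ss in [("gender", 15.289), ("income", 6.392), ("gender * income", 5.274)]:
    print(name, round(partial_eta_squared(ss, 267.370), 3))
# gender .054, income .023, gender * income .019 — matching the
# Partial Eta Squared column of the output.
```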
14-3 Minimal answer provided for students.
Descriptive Statistics
Dependent Variable: HELPER MEAN SEVERITY RATING
ethnic   TYPE OF PROBLEM EXPERIENCED   Mean   Std. Deviation   N
WHITE GOAL DISRUPTIVE 4.489 1.6532 115
RELATIONAL BREAK 5.248 1.2906 113
ILLNESS 5.867 1.2381 60
CATASTROPHIC 6.700 .6708 5
Total 5.101 1.5304 293
BLACK GOAL DISRUPTIVE 4.768 1.6638 28
RELATIONAL BREAK 5.200 1.4117 15
ILLNESS 5.625 2.7500 4
CATASTROPHIC 5.833 1.6073 3
Total 5.030 1.6672 50
HISPANIC GOAL DISRUPTIVE 4.597 1.5530 36
RELATIONAL BREAK 5.266 1.5133 32
ILLNESS 5.100 1.2867 10
CATASTROPHIC 5.250 1.0607 2
Total 4.944 1.5074 80
ASIAN GOAL DISRUPTIVE 4.786 1.5180 28
RELATIONAL BREAK 4.828 1.5827 29
ILLNESS 5.300 2.0710 10
CATASTROPHIC 5.500 1.8028 3
Total 4.907 1.6180 70
OTHER/DTS GOAL DISRUPTIVE 4.545 2.0523 22
RELATIONAL BREAK 4.800 1.7199 20
ILLNESS 6.000 1.4142 2
Total 4.727 1.8722 44
Total GOAL DISRUPTIVE 4.582 1.6551 229
RELATIONAL BREAK 5.146 1.4190 209
ILLNESS 5.703 1.4377 86
CATASTROPHIC 6.000 1.2583 13
Total 5.015 1.5800 537
ethnic  TYPE OF PROBLEM EXPERIENCED  Mean  Std. Error
WHITE  GOAL DISRUPTIVE  4.489  .143
       RELATIONAL BREAK  5.248  .144
       ILLNESS  5.867  .198
       CATASTROPHIC  6.700  .685
BLACK  GOAL DISRUPTIVE  4.768  .290
       RELATIONAL BREAK  5.200  .396
       ILLNESS  5.625  .766
       CATASTROPHIC  5.833  .885
HISPANIC  GOAL DISRUPTIVE  4.597  .255
       RELATIONAL BREAK  5.266  .271
       ILLNESS  5.100  .485
       CATASTROPHIC  5.250  1.084
ASIAN  GOAL DISRUPTIVE  4.786  .290
       RELATIONAL BREAK  4.828  .285
       ILLNESS  5.300  .485
       CATASTROPHIC  5.500  .885
OTHER/DTS  GOAL DISRUPTIVE  4.545  .327
       RELATIONAL BREAK  4.800  .343
       ILLNESS  6.000  1.084
       CATASTROPHIC  .a  .
a. This level combination of factors is not observed, thus the corresponding population marginal mean is not estimable.

Tests of Between-Subjects Effects
Source  Type III Sum of Squares  df  Mean Square  F  Sig.  Partial Eta Squared
Corrected Model  121.615a  18  6.756  2.877  .000  .091
Intercept  3118.842  1  3118.842  1328.149  .000  .719
ethnic  7.557  4  1.889  .805  .523  .006
problem  34.924  3  11.641  4.957  .002  .028
ethnic * problem  17.954  11  1.632  .695  .744  .015
Error  1216.399  518  2.348
Total  14845.140  537
Corrected Total  1338.015  536
Multiple Comparisons (note: only significant results are shown)

(I) Problem Type  (J) Problem Type  Mean Difference (I - J)  Std. Error  Sig.
1 Goal disruptive  2 Relational  -.564  .1466  .000
                   3 Illness  -1.122  .1938  .000
                   4 Catastrophic  -1.418  .4369  .001
2 Relational  3 Illness  -.558  .1963  .005
Estimated Marginal Means
Dependent Variable: HSEVERET (rating of the severity of the problem)

ethnic  Mean  Std. Error
WHITE  5.576  .185
BLACK  5.357  .317
HISPANIC  5.053  .311
ASIAN  5.103  .272
OTHER/DTS  5.115a  .394

TYPE OF PROBLEM EXPERIENCED  Mean  Std. Error
GOAL DISRUPTIVE  4.637  .120
RELATIONAL BREAK  5.068  .134
ILLNESS  5.578  .301
CATASTROPHIC  5.821a  .448
A two-way ANOVA was conducted to determine the influence of ethnicity and problem type on the severity rating of problems. Problem type had a significant effect on the severity ratings, F(3, 518) = 4.96, p = .002, η2 = .03. Post hoc comparisons using the least significant differences procedure with an alpha value of .05 revealed that the severity rating for goal-disruptive problems (M = 4.58, SD = 1.66) was significantly less than for relational problems (M = 5.15, SD = 1.42), illness problems (M = 5.70, SD = 1.44), or catastrophic problems (M = 6.00, SD = 1.26). Illness problems were also rated as more severe than relational problems. There was no significant ethnic by problem type interaction.
14-4 No answer provided for students.

Estimated Marginal Means
Dependent Variable: THELPLNZ (amount of time spent helping in z scores)

gender  Mean  Std. Error
FEMALE  .197(a)  .091
MALE  -.060(a)  .101
a. Covariates appearing in the model are evaluated at the following values: tqualitz = .0018.

TYPE OF PROBLEM EXPERIENCED  Mean  Std. Error
GOAL DISRUPTIVE  -.285(a)  .057
RELATIONAL BREAK  .121(a)  .065
ILLNESS  .277(a)  .094
CATASTROPHIC  .160(a)  .239

gender  TYPE OF PROBLEM EXPERIENCED  Mean  Std. Error
FEMALE  GOAL DISRUPTIVE  -.195(a)  .079
        RELATIONAL BREAK  .271(a)  .071
        ILLNESS  .306(a)  .121
        CATASTROPHIC  .407(a)  .325
MALE    GOAL DISRUPTIVE  -.374(a)  .082
        RELATIONAL BREAK  -.029(a)  .110
        ILLNESS  .247(a)  .146
        CATASTROPHIC  -.086(a)  .351
Tests of Between-Subjects Effects

Source  Type III Sum of Squares  df  Mean Square  F  Sig.  Partial Eta Squared
Corrected Model  84.073(a)  8  10.509  14.237  .000  .177
Intercept  .737  1  .737  .999  .318  .002
tqualitz  42.307  1  42.307  57.315  .000  .098
gender  2.650  1  2.650  3.591  .059  .007
problem  26.988  3  8.996  12.187  .000  .065
gender * problem  1.123  3  .374  .507  .677  .003
Error  389.738  528  .738
Total  473.812  537
Corrected Total  473.812  536
A 2-way ANOVA was conducted to determine the influence of gender and problem type on the total amount of time spent helping. To control for the influence of help quality, it was included as a covariate. Help quality explained a significant portion of the overall variance, F(1, 528) = 57.315, p < .001. Results showed a marginally significant main effect for gender in which women (M = .14) spent more time helping than men (M = -.21), F(1, 528) = 3.591, p = .059. There was also a significant main effect for problem type, F(3, 528) = 12.187, p < .001.
14-5 No answer provided for students.

Estimated Marginal Means
Dependent Variable: THELPLNZ (amount of time spent helping in z scores)

gender  Mean  Std. Error
FEMALE  .360(a)  .114
MALE  -.176(a)  .109
a. Based on modified population marginal mean.

income  Mean  Std. Error
<15,000  .344(a)  .199
<25,000  .052(a)  .203
<50,000  .065(a)  .199
>50,000  .204(a)  .142
DTS  -.120  .148

MARITAL STATUS  Mean  Std. Error
MARRIED  .190  .120
SINGLE  -.048  .051
DTS  .196(a)  .343

gender  income  Mean  Std. Error
FEMALE  <15,000  .827(a)  .331
        <25,000  .538(a)  .336
        <50,000  .053(a)  .112
        >50,000  .383  .225
        DTS  .110  .230
MALE    <15,000  -.140(a)  .222
        <25,000  -.435(a)  .230
        <50,000  .073  .323
        >50,000  -.065(a)  .114
        DTS  -.349  .187
Similar tables occur for gender x marital, income x marital, and gender x income x marital
Tests of Between-Subjects Effects

Source  Type III Sum of Squares  df  Mean Square  F  Sig.
Corrected Model  45.502(a)  23  1.978  2.370  .000
Intercept  .958  1  .958  1.148  .285
gender  8.175  1  8.175  9.791  .002
income  3.761  4  .940  1.126  .343
marital  3.131  2  1.566  1.875  .154
gender * income  6.816  4  1.704  2.041  .087
gender * marital  .857  2  .428  .513  .599
income * marital  4.179  6  .696  .834  .544
gender * income * marital  2.480  4  .620  .743  .563
Error  428.309  513  .835
Total  473.812  537
Corrected Total  473.812  536
A 3-way ANOVA was conducted to determine the influence of gender, level of income, and marital status on the total amount of time spent helping. Results showed a significant main effect for gender in which women (M = .13) spent significantly more time helping than men (M = -.21), F(1, 508) = 11.62, p < .001. There was a marginally significant main effect for marital status in which married people (M = .190) spent more time helping than single people (M = -.048), F(1, 508) = 2.789, p = .068. There was also a marginally significant gender by income interaction. While for all income levels women helped more than men, for subjects making less than 25,000 the gender discrepancy was much greater, while for subjects making more than 25,000 the gender discrepancy was small.
[Figure: Gender by Income on Time Helping. Time helping (z scores) plotted across income levels (<15K, 15-25K, 25-50K, >50K, DTS), with separate lines for female and male.]
14-6 Full answer provided for students.
There was a main effect of training: People who had a long training session (M = 6.30, SD = 2.00) performed better than people who had a short training session (M = 5.30, SD = 1.34), who in turn did better than those who had no training session (M = 4.20, SD = 1.40; F(2, 24) = 8.17, p = .002, η2 = .41). There was a main effect of level of life stress: People with low levels of life stress (M = 6.20, SD = 1.90) performed better than people with high levels of life stress (M = 4.33, SD = 1.05; F(1, 24) = 19.36, p < .001, η2 = .45). There was an interaction between training and level of life stress, as displayed in this graph (F(2, 24) = 4.17, p = .028, η2 = .26):
Descriptive Statistics
Dependent Variable: PERFORMA

TRAINING  LIFESTRE  Mean  Std. Deviation  N
None   High  4.00  1.225  5
       Low  4.40  1.673  5
       Total  4.20  1.398  10
Short  High  4.40  1.140  5
       Low  6.20  .837  5
       Total  5.30  1.337  10
Long   High  4.60  .894  5
       Low  8.00  1.000  5
       Total  6.30  2.003  10
Total  High  4.33  1.047  15
       Low  6.20  1.897  15
       Total  5.27  1.780  30
Tests of Between-Subjects Effects
Dependent Variable: Performance Score

Source  Type III Sum of Squares  df  Mean Square  F  Sig.  Partial Eta Squared
Corrected Model  59.467a  5  11.893  8.810  .000  .647
Intercept  832.133  1  832.133  616.395  .000  .963
training  22.067  2  11.033  8.173  .002  .405
lifestre  26.133  1  26.133  19.358  .000  .446
training * lifestre  11.267  2  5.633  4.173  .028  .258
Error  32.400  24  1.350
Total  924.000  30
Corrected Total  91.867  29
a. R Squared = .647 (Adjusted R Squared = .574)
Note that for those with low life stress, the amount of training seems to make a big difference. For those with high life stress, the impact of training is minimal.
[Figure: Estimated Marginal Means of Performance Score. Estimated marginal means plotted across Life Stress (High, Low), with separate lines for Training (None, Short, Long).]
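Because this design is balanced (n = 5 per cell), the sums of squares in the ANOVA table can be re-derived from the cell means and SDs alone. The following pure-Python sketch (an illustration only, not part of the SPSS workflow) reproduces the training, life-stress, interaction, and error sums of squares shown above:

```python
# Re-derive the 14-6 two-way ANOVA sums of squares from the cell summaries
# (balanced design, n = 5 observations per cell).
means = {("None", "High"): 4.00, ("None", "Low"): 4.40,
         ("Short", "High"): 4.40, ("Short", "Low"): 6.20,
         ("Long", "High"): 4.60, ("Long", "Low"): 8.00}
sds = {("None", "High"): 1.225, ("None", "Low"): 1.673,
       ("Short", "High"): 1.140, ("Short", "Low"): 0.837,
       ("Long", "High"): 0.894, ("Long", "Low"): 1.000}
n = 5
cells = list(means)
grand = sum(means.values()) / len(cells)

# Error SS comes straight from the cell SDs: sum of (n - 1) * sd^2.
ss_err = sum((n - 1) * sd ** 2 for sd in sds.values())

def marginal_means(index):
    """Average the cell means over the other factor (valid when balanced)."""
    levels = {}
    for cell in cells:
        levels.setdefault(cell[index], []).append(means[cell])
    return [sum(v) / len(v) for v in levels.values()]

# Main-effect SS: (cells per level * n) * sum of squared marginal deviations.
ss_train = 2 * n * sum((m - grand) ** 2 for m in marginal_means(0))
ss_stress = 3 * n * sum((m - grand) ** 2 for m in marginal_means(1))
ss_cells = n * sum((m - grand) ** 2 for m in means.values())
ss_inter = ss_cells - ss_train - ss_stress       # leftover between-cell SS

ms_err = ss_err / (len(cells) * (n - 1))         # error df = 24
f_train = (ss_train / 2) / ms_err                # F(2, 24) for training
```

Running this recovers SS values of roughly 22.067 (training), 26.133 (life stress), 11.267 (interaction), and 32.400 (error), matching the table.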
14-7 Minimal answer provided for students.
Descriptive Statistics
Dependent Variable: PERFORMA

TRAINING  STRESS  SLEESTUD  Mean  Std. Deviation  N
No     Low    Sleep  8.33  .577  3
              Study  14.67  .577  3
              Total  11.50  3.507  6
       High   Sleep  10.67  .577  3
              Study  18.67  .577  3
              Total  14.67  4.412  6
       Total  Sleep  9.50  1.378  6
              Study  16.67  2.251  6
              Total  13.08  4.144  12
Yes    Low    Sleep  17.67  .577  3
              Study  10.33  .577  3
              Total  14.00  4.050  6
       High   Sleep  22.33  .577  3
              Study  12.67  .577  3
              Total  17.50  5.320  6
       Total  Sleep  20.00  2.608  6
              Study  11.50  1.378  6
              Total  15.75  4.864  12
Total  Low    Sleep  13.00  5.138  6
              Study  12.50  2.429  6
              Total  12.75  3.841  12
       High   Sleep  16.50  6.411  6
              Study  15.67  3.327  6
              Total  16.08  4.889  12
       Total  Sleep  14.75  5.833  12
              Study  14.08  3.232  12
              Total  14.42  4.624  24
There was a main effect for training: Participants who received training performed better (M = 15.75, SD = 4.86) than participants who did not receive training (M = 13.08, SD = 4.14), F(1, 16) = 128.00, p < .001, η2 = .89. There was a main effect of stress level: Participants with high stress levels performed better (M = 16.08, SD = 4.89) than those with low stress levels (M = 12.75, SD = 3.85), F(1, 16) = 200.00, p < .001, η2 = .93. There was a main effect of sleeping versus studying all night: People who slept performed somewhat better (M = 14.75, SD = 5.83) than those who didn't sleep (M = 14.08, SD = 3.23), F(1, 16) = 8.00, p = .012, η2 = .33. There was no significant interaction effect between training and stress level (F(1, 16) = .50, p > .05, η2 = .03). There was a significant interaction between training and sleeping versus studying (F(1, 16) = 1104.50, p < .001, η2 = .99): For those with no training, people who slept performed worse (M = 9.50, SD = 1.38) than those who studied (M = 16.67, SD = 2.25). For those with training, however, people who slept performed better (M = 20.00, SD = 2.61) than people who studied (M = 11.50, SD = 1.38). There was no significant interaction between stress level and sleeping versus studying (F(1, 16) = .50, p > .05, η2 = .03). There was a significant three-way interaction between training, stress level, and sleeping versus studying (F(1, 16) = 18.00, p = .001, η2 = .53). Here's what it looks like:
Tests of Between-Subjects Effects
Dependent Variable: PERFORMA

Source  Type III Sum of Squares  df  Mean Square  F  Sig.  Partial Eta Squared  Noncent. Parameter  Observed Powera
Corrected Model  486.500b  7  69.500  208.500  .000  .989  1459.500  1.000
Intercept  4988.167  1  4988.167  14964.500  .000  .999  14964.500  1.000
TRAINING  42.667  1  42.667  128.000  .000  .889  128.000  1.000
STRESS  66.667  1  66.667  200.000  .000  .926  200.000  1.000
SLEESTUD  2.667  1  2.667  8.000  .012  .333  8.000  .757
TRAINING * STRESS  .167  1  .167  .500  .490  .030  .500  .102
TRAINING * SLEESTUD  368.167  1  368.167  1104.500  .000  .986  1104.500  1.000
STRESS * SLEESTUD  .167  1  .167  .500  .490  .030  .500  .102
TRAINING * STRESS * SLEESTUD  6.000  1  6.000  18.000  .001  .529  18.000  .978
Error  5.333  16  .333
Total  5480.000  24
Corrected Total  491.833  23
a. Computed using alpha = .05
b. R Squared = .989 (Adjusted R Squared = .984)
Those who slept performed better with high stress levels, and better with training. A post hoc test could determine whether the difference between high and low stress levels was greater in the training condition than in the no-training condition.
Those who didn't sleep performed better with high stress levels, and better without training. A post hoc test could determine whether the performance gain for the high-stress participants was greater in the no-training condition than in the training condition.
[Figures: two panels of Mean Performance plotted against Training (No, Yes), with separate lines for Stress level (Low, High); one panel for participants who slept and one for those who studied.]
14-8 No answer provided for students.
Descriptive Statistics
Dependent Variable: CAFFEINE

TRAINING  STRESS  SLEESTUD  Mean  Std. Deviation  N
No     Low    Sleep  13.33  1.528  3
              Study  10.33  .577  3
              Total  11.83  1.941  6
       High   Sleep  15.00  1.000  3
              Study  10.67  .577  3
              Total  12.83  2.483  6
       Total  Sleep  14.17  1.472  6
              Study  10.50  .548  6
              Total  12.33  2.188  12
Yes    Low    Sleep  10.67  .577  3
              Study  4.00  .000  3
              Total  7.33  3.670  6
       High   Sleep  14.00  .000  3
              Study  4.67  .577  3
              Total  9.33  5.125  6
       Total  Sleep  12.33  1.862  6
              Study  4.33  .516  6
              Total  8.33  4.376  12
Total  Low    Sleep  12.00  1.789  6
              Study  7.17  3.488  6
              Total  9.58  3.655  12
       High   Sleep  14.50  .837  6
              Study  7.67  3.327  6
              Total  11.08  4.252  12
       Total  Sleep  13.25  1.865  12
              Study  7.42  3.260  12
              Total  10.33  3.953  24
Tests of Between-Subjects Effects
Dependent Variable: Caffeine

Source  Type III Sum of Squares  df  Mean Square  F  Sig.  Partial Eta Squared
Corrected Model 350.000a 7 50.000 85.714 .000 .974
Intercept 2562.667 1 2562.667 4393.143 .000 .996
training 96.000 1 96.000 164.571 .000 .911
stress 13.500 1 13.500 23.143 .000 .591
sleestud 204.167 1 204.167 350.000 .000 .956
training * stress 1.500 1 1.500 2.571 .128 .138
training * sleestud 28.167 1 28.167 48.286 .000 .751
stress * sleestud 6.000 1 6.000 10.286 .005 .391
training * stress * sleestud .667 1 .667 1.143 .301 .067
Error  9.333  16  .583
Total  2922.000  24
Corrected Total  359.333  23
a. R Squared = .974 (Adjusted R Squared = .963)
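The Partial Eta Squared column in the table above is simply SS_effect / (SS_effect + SS_error). A one-line Python check for the three main effects (an illustration only, using the sums of squares from the table):

```python
# Partial eta squared re-derived from the Tests of Between-Subjects table:
# eta_p^2 = SS_effect / (SS_effect + SS_error)
ss_error = 9.333
effects = {"training": 96.000, "stress": 13.500, "sleestud": 204.167}
eta_p2 = {name: ss / (ss + ss_error) for name, ss in effects.items()}
# e.g. training: 96.000 / 105.333, about .911, matching the table
```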
There was a main effect showing a relationship between training and caffeine (F(1, 16) = 164.57, p < .001): People without training had higher caffeine levels (M = 12.33, SD = 2.19) than those with training (M = 8.33, SD = 4.38). There was a main effect showing a relationship between stress and caffeine (F(1, 16) = 23.14, p < .001): Those with high stress levels had higher caffeine levels (M = 11.08, SD = 4.25) than those with low stress levels (M = 9.58, SD = 3.66). There was a main effect showing a relationship between sleeping versus studying all night and caffeine (F(1, 16) = 350.00, p < .001): People who slept had higher caffeine levels (M = 13.25, SD = 1.87) than those who studied (M = 7.42, SD = 3.26). There was no significant interaction between training and stress level (F(1, 16) = 2.57, p > .05). There was a significant interaction between training and sleeping versus studying (F(1, 16) = 48.29, p < .001): For those who did not receive training, there was a somewhat higher caffeine level associated with sleeping (M = 14.17, SD = 1.47) as opposed to studying (M = 10.50, SD = .55). For those who did receive training, there was a substantially higher caffeine level associated with sleeping (M = 12.33, SD = 1.86) as opposed to studying (M = 4.33, SD = .52). There was also a significant interaction between stress levels and sleeping versus studying (F(1, 16) = 10.29, p = .005): The increased amount of caffeine present for those who slept as opposed to those who studied was greater for high-stress subjects (M = 14.50, SD = .84 for the sleep condition, and M = 7.67, SD = 3.33 for the study condition) than for low-stress subjects (M = 12.00, SD = 1.79 for the sleep condition, and M = 7.17, SD = 3.49 for the study condition). There was no significant three-way interaction (F(1, 16) = 1.14, p > .05).
Additional Exercises

Three-way ANOVA with 2 main effects and 1 2-way interaction in divorce.sav file: What influence do sex (2 levels), marital status (4 levels), and income (7 levels) have on one's personal spirituality [coded on a low (1) to high (7) scale]?
Three-way ANOVA with 2 main effects and 1 3-way interaction in divorce.sav file: What influence do sex (2 levels), marital status (4 levels), and income (7 levels) have on the amount of non-sexual closeness experienced [coded on a low (1) to high (7) scale]?
Three-way ANOVA with 2 main effects and 1 3-way interaction in divorce.sav file: What influence do sex (2 levels), marital status (4 levels), and income (7 levels) have on intelligence [coded on a low (1) to high (12) scale]?
Three-way ANOVA with 1 main effect and 1 2-way interaction in divorce.sav file: What influence do sex (2 levels), marital status (4 levels), and income (7 levels) have on the amount of avoidant coping practiced [coded on a low (1) to high (7) scale]?

In helping3.sav

Three-way ANOVA with 2 main effects and no interactions: What influence do gender (2 levels), ethnic (4 levels), and problem type (problem, 4 levels) have on the length of time spent helping (thelplnz)?
Three-way ANOVA with 3 main effects and no interactions: What influence do gender (2 levels), ethnic (4 levels), and problem type (problem, 4 levels) have on the total amount of help given (tothelp)?
Three-way ANOVA with 3 main effects and no interactions: What influence do gender (2 levels), ethnic (4 levels), and problem type (problem, 4 levels) have on the rating of problem severity (hseveret)?
Three-way ANOVA with 1 main effect and 1 2-way interaction: What influence do gender (2 levels), ethnic (4 levels), and problem type (problem, 4 levels) have on the amount of anger felt toward the friend (angert)?
Three-way ANOVA with no main effects and 1 2-way interaction: What influence do gender (2 levels), ethnic (4 levels), and problem type (problem, 4 levels) have on a feeling of obligation toward the friend (obligat)?
Three-way ANOVA with 1 main effect and 1 2-way interaction: What influence do gender (2 levels), ethnic (4 levels), and problem type (problem, 4 levels) have on a self-rating of empathic tendency (empathyt)?
Three-way ANOVA with 3 main effects and 1 2-way interaction: What influence do gender (2 levels), ethnic (4 levels), and income (5 levels) have on the amount of time spent helping (thelplnz)?
Three-way ANOVA with 1 main effect and 1 2-way interaction: What influence do gender (2 levels), ethnic (4 levels), and income (5 levels) have on the quality of help given (tqualitz)?
Three-way ANOVA with 3 main effects and 1 2-way interaction: What influence do gender (2 levels), ethnic (4 levels), and income (5 levels) have on the amount of total help given (tothelp)?
Three-way ANOVA with 2 main effects and 2 2-way interactions: What influence do gender (2 levels), problem type (4 levels), and income (5 levels) have on the amount of anger experienced by the helper (angert)?
Chapter 15: Simple Linear Regression

1. Use the anxiety.sav file for exercises that follow (downloadable at the address above).
Perform the 4a - 5a sequences on pages 204 and 205.
Include output in as compact a form as is reasonable.
Write the linear equation for the predicted exam score.
Write the quadratic equation for the predicted exam score.

For subjects numbered 5, 13, 42, and 45:
Substitute values into the two equations and solve. Show work on a separate page. Then compare in a small table (shown below and similar to that on page 202):
The anxiety score for each subject,
Linear equation results,
Quadratic equation results, and
Actual exam scores for sake of comparison.

subject #  anxiety score  predicted linear score  predicted quadratic score  actual exam score
5
13
42
45
2. Now using the divorce.sav file, test for linear and curvilinear relations between:
physical closeness (close) and life satisfaction (lsatisy)
attributional style (asq) and life satisfaction (lsatisy)
Attributional style, by the way, is a measure of optimism—a low score is “pessimistic” and a high score is “optimistic”.
Print graphs and write linear and quadratic equations for both. For each of the three analyses in problems 3 and 4:
Print out the results,
Box the Multiple R,
Circle the R Square,
Underline the three (3) B values, and
Double underline the three (3) Sig of T values.
In a single sentence (just once, not for each of the 3 problems) identify the meaning of each of the final four (4) bulleted items above. 3. First, perform step 5b (p. 206) demonstrating the influence of anxiety and anxiety squared (anxiety2) on the exam score (exam).
4. Now, complete similar procedures for the two relationships shown in problem 2 (from the divorce.sav file) and perform the 5 steps bulleted above: Specifically,
the influence of closeness (close) and closeness squared (close2) on life satisfaction (lsatisy), and the influence of attributional style (asq) and the square of attributional style (asq2) on life satisfaction (lsatisy).
5. A researcher is examining the relationship between stress levels and performance on a test of cognitive performance. She hypothesizes that stress levels lead to an increase in performance to a point, and then increased stress decreases performance. She tests ten participants, who have the following levels of stress: 10.94, 12.76, 7.62, 8.17, 7.83, 12.22, 9.23, 11.17, 11.88, and 8.18. When she tests their levels of mental performance, she finds the following cognitive performance scores (listed in the same participant order as above): 5.24, 4.64, 4.68, 5.04, 4.17, 6.20, 4.54, 6.55, 5.79, and 3.17. Perform a linear regression to examine the relationship between these variables. What do these results mean?
6. The same researcher tests ten more participants, who have the following levels of stress: 16, 20, 14, 21, 23, 19, 14, 20, 17, and 10. Their cognitive performance scores are (listed in the same participant order): 5.24, 4.64, 4.68, 5.04, 4.17, 6.20, 4.54, 6.55, 5.79, and 3.17. (Note that in an amazing coincidence, these participants have the same cognitive performance scores as the participants in Question 5; this coincidence may save you some typing.) Perform a linear regression to examine the relationship between these variables. What do these results mean?
7. Create a scatterplot (see Chapter 5) of the variables in Question 6. How do results suggest that linear regression might not be the best analysis to perform?
8. Perform curve estimation on the data from Question 6. What does this tell you about the data that you could not determine from the analysis in Question 6?
9. What is different about the data in Questions 5 and 6 that leads to different results?
15-1 Full answer provided for students.
Dependent Variable: exam
Model Summary and Parameter Estimates

Equation  R Square  F  df1  df2  Sig.  Constant  b1  b2
Linear  .238  22.186  1  71  .000  64.247  2.818
Quadratic  .641  62.525  2  70  .000  30.377  18.926  -1.521
The independent variable is anxiety.
Linear: EXAM(pred) = 64.247 + 2.818(ANXIETY) Quadratic: EXAM(pred) = 30.377 + 18.926(ANXIETY) – 1.521(ANXIETY)2
subject # Anxiety score predicted linear score predicted quadratic score actual score 5 3.0 72.7 73.5 70
13 4.0 75.5 81.7 82
42 6.5 82.6 89.1 98
45 9.0 89.6 77.6 79
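As a quick check on the table above, the two fitted equations can be evaluated directly. A minimal Python sketch (the predictions match the table within rounding of the published coefficients; subject 45's quadratic prediction comes out at 77.5 rather than 77.6 for that reason):

```python
# Evaluate the two fitted equations from 15-1 for any anxiety score.
def linear(anxiety):
    return 64.247 + 2.818 * anxiety

def quadratic(anxiety):
    return 30.377 + 18.926 * anxiety - 1.521 * anxiety ** 2

# Subject 5 (anxiety = 3.0): linear ~72.7, quadratic ~73.5, as in the table.
```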
15-2 Minimal answer provided for students.
Linear: LSATISFY(pred) = 3.941 + .259(CLOSE) Quadratic: LSATISFY(pred) = 5.097 – .442(CLOSE) + .099(CLOSE)2
Linear: LSATISFY(pred) = 4.571 + .08(ASQ) Quadratic: LSATISFY(pred) = 4.587 + .051(ASQ) + .004(ASQ)2
15-3 Minimal answer provided for students.
Model Summary

Model  R  R Square  Adjusted R Square  Std. Error of the Estimate
1  .801a  .641  .631  8.443
a. Predictors: (Constant), square of anxiety, anxiety
Coefficientsa

Model  B  Std. Error  Beta  t  Sig.
1 (Constant)  30.377  4.560    6.662  .000
  anxiety  18.926  1.863  3.277  10.158  .000
  square of anxiety  -1.521  .172  -2.861  -8.866  .000
a. Dependent Variable: exam

Multiple R: The multiple correlation between the dependent variable and (in this case) the two independent variables.
R Square: The proportion of variance in the dependent variable explained by the independent variables.
B values: The coefficients (and constant) that make up the regression equation.
Sig. of t values: Identify whether the independent variable of interest significantly predicts the dependent variable.
15-4 No answer provided for students.
Model Summary

Model  R  R Square  Adjusted R Square  Std. Error of the Estimate
1  .308a  .095  .087  .85582
a. Predictors: (Constant), close2, amount of physical closeness
Coefficientsa

Model  B  Std. Error  Beta  t  Sig.
1 (Constant)  5.097  .556    9.168  .000
  amount of physical closeness  -.442  .318  -.467  -1.390  .166
  close2  .099  .044  .754  2.246  .026
a. Dependent Variable: life satisfaction
Model Summary

Model  R  R Square  Adjusted R Square  Std. Error of the Estimate
1  .252a  .064  .055  .87053
a. Predictors: (Constant), asq2, asq
Coefficientsa

Model  B  Std. Error  Beta  t  Sig.
1 (Constant)  4.587  .088    52.118  .000
  asq  .051  .039  .156  1.321  .188
  asq2  .004  .005  .107  .906  .366
a. Dependent Variable: life satisfaction
15-5 Full answer provided for students.
These results suggest that there is a significant relationship between stress and performance (R2 = .399, F(1, 8) = 5.31, p = .05). Note, though, that we have tested for a linear relationship, which is not what the researcher hypothesized.
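The R Square of .399 and F(1, 8) = 5.31 reported here can be reproduced by hand from the raw Question 5 data. A minimal pure-Python sketch of simple linear regression (an illustration only; SPSS is not required):

```python
# Simple linear regression "by hand" for the Question 5 data.
stress = [10.94, 12.76, 7.62, 8.17, 7.83, 12.22, 9.23, 11.17, 11.88, 8.18]
perf = [5.24, 4.64, 4.68, 5.04, 4.17, 6.20, 4.54, 6.55, 5.79, 3.17]
n = len(stress)
mx, my = sum(stress) / n, sum(perf) / n
sxx = sum((x - mx) ** 2 for x in stress)          # SS of the predictor
syy = sum((y - my) ** 2 for y in perf)            # total SS of the outcome
sxy = sum((x - mx) * (y - my) for x, y in zip(stress, perf))
r2 = sxy ** 2 / (sxx * syy)                       # R Square = .399
ss_reg = r2 * syy                                 # regression sum of squares
f = ss_reg / ((syy - ss_reg) / (n - 2))           # F with (1, n - 2) df = 5.31
```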
15-6 No answer provided for students.
Note that there is no significant relationship between stress and performance in these data (F(1,8) = 2.05, p > .05).
Model Summary

Model  R  R Square  Adjusted R Square  Std. Error of the Estimate
1  .632a  .399  .324  .82256
a. Predictors: (Constant), STRESS

ANOVAb

Model  Sum of Squares  df  Mean Square  F  Sig.
1 Regression  3.594  1  3.594  5.312  .050a
  Residual  5.413  8  .677
  Total  9.007  9
a. Predictors: (Constant), STRESS
b. Dependent Variable: PERFORMA
Model Summary

Model  R  R Square  Adjusted R Square  Std. Error of the Estimate
1  .451a  .204  .104  .94683
a. Predictors: (Constant), STRESS

ANOVAb

Model  Sum of Squares  df  Mean Square  F  Sig.
1 Regression  1.835  1  1.835  2.047  .190a
  Residual  7.172  8  .896
  Total  9.007  9
a. Predictors: (Constant), STRESS
b. Dependent Variable: PERFORMA
15-7 No answer provided for students.
There does appear to be a relationship between stress and performance, but it doesn’t appear to be linear (i.e., it’s not well defined by a straight line). So, linear regression isn’t a good analysis to perform.
[Figure: Scatterplot of performa (3.00 to 7.00) against stress (10.00 to 22.50) for the Question 6 data.]
15-8 Minimal answer provided for students.

Model Summary and Parameter Estimates
Dependent Variable: performa

Equation  R Square  F  df1  df2  Sig.  Constant  b1  b2
Linear  .204  2.047  1  8  .190  3.013  .114
Quadratic  .687  7.682  2  7  .017  -8.623  1.595  -.045
The independent variable is stress.
Notice that the linear regression row contains (within rounding error) the same information as calculated by the linear regression procedure in Exercise 6, above. That model doesn't fit the data well. The quadratic equation, however, fits the data much better (R2 = .69, F(2, 7) = 7.68, p = .017). This tells us that the data are predicted much better by a quadratic equation (which will form an upside-down "U" shape) than a linear one.
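Curve estimation here amounts to ordinary least squares with an added squared term. The following pure-Python sketch (a hand-rolled OLS via the normal equations, offered only as an illustration of what SPSS computes) fits both equations to the Question 6 data and compares the R Square values:

```python
# Linear vs. quadratic least-squares fits for the Question 6 data.
stress = [16, 20, 14, 21, 23, 19, 14, 20, 17, 10]
perf = [5.24, 4.64, 4.68, 5.04, 4.17, 6.20, 4.54, 6.55, 5.79, 3.17]

def fit(xcols, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    k = len(xcols)
    a = [[sum(u * v for u, v in zip(xcols[r], xcols[c])) for c in range(k)]
         + [sum(u * yi for u, yi in zip(xcols[r], y))] for r in range(k)]
    for i in range(k):                     # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, k):
            fac = a[r][i] / a[i][i]
            a[r] = [v - fac * w for v, w in zip(a[r], a[i])]
    b = [0.0] * k
    for i in reversed(range(k)):           # back substitution
        b[i] = (a[i][k] - sum(a[i][j] * b[j] for j in range(i + 1, k))) / a[i][i]
    return b

def r_squared(xcols, y):
    b = fit(xcols, y)
    pred = [sum(bj * col[i] for bj, col in zip(b, xcols)) for i in range(len(y))]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

ones = [1.0] * len(stress)
sq = [x * x for x in stress]
r2_lin = r_squared([ones, stress], perf)        # about .204
r2_quad = r_squared([ones, stress, sq], perf)   # clearly higher
```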
15-9 Minimal answer provided for students. The data in Question 5 are (roughly) linear; the data in Question 6 are curvilinear. That's why you should always examine your scatterplots before doing a linear regression!
Chapter 16: Multiple Regression Analysis

Use the helping3.sav file for the exercises that follow (downloadable at the address shown above). Conduct the following THREE regression analyses:
Criterion variables: 1. thelplnz: Time spent helping
2. tqualitz: Quality of the help given
3. tothelp: A composite help measure that includes both time and quality

Predictors: (use the same predictors for each of the three dependent variables)
age: range from 17 to 89
angert: Amount of anger felt by the helper toward the needy friend
effict: Helper's feeling of self-efficacy (competence) in relation to the friend's problem
empathyt: Helper's empathic tendency as rated by a personality test
gender: 1 = female, 2 = male
hclose: Helper's rating of how close the relationship was
hcontrot: Helper's rating of how controllable the cause of the problem was
hcopet: Helper's rating of how well the friend was coping with his or her problem
hseveret: Helper's rating of the severity of the problem
obligat: The feeling of obligation the helper felt toward the friend in need
school: Coded from 1 to 7 with 1 being the lowest education, and 7 the highest (> 19 years)
sympathi: The extent to which the helper felt sympathy toward the friend
worry: Amount the helper worried about the friend in need
Use entry value of .06 and removal value of .11. Use stepwise method of entry.
Create a table (example below) showing, for each of the three analyses, the Multiple R, R2, and each of the variables that significantly influences the dependent variable. Following the R2, list the name of each variable and then (in parentheses) its value. Rank order them from the most influential to the least influential, left to right. Include only significant predictors.
Dependent Variable  Multiple R  R2  1st var ()  2nd var ()  3rd var ()  4th var ()  5th var ()  6th var ()
Time helping
Help quality
Total help
4. A researcher is examining the relationship between stress levels, self-esteem, coping skills, and performance on a test of cognitive performance (the dependent measure). His data are shown below. Perform multiple regression on these data, entering variables using the stepwise procedure. Interpret the results.
Stress  Self-esteem  Coping skills  Performance
6  10  19  21
5  10  14  21
5  8  14  22
3  7  13  15
7  14  16  22
4  9  11  17
6  9  15  28
5  9  10  19
5  11  20  16
5  10  17  18
16-1, 16-2, and 16-3 Full answer provided for students for 16-1; no answers provided for 16-2 or 16-3.

Dependent Variable  Multiple R  R2  1st var ()  2nd var ()  3rd var ()  4th var ()  5th var ()  6th var ()
1. Time helping  .576  .332  Efficacy (.330)  Severity (.214)  Worry (.153)  Closeness (.113)  Anger (.110)  Gender (-.096)
2. Help quality  .590  .348  Efficacy (.423)  Coping (.200)  Anger (-.155)  Severity (.124)  Obligation (.088)
3. Total help  .656  .430  Efficacy (.454)  Severity (.242)  Closeness (.102)  Empathy (.097)  Coping (.085)  Obligation (.079)
16-4 Minimal answer provided for students.
Two different models were examined. The first model, Performance = 7.688 + 2.394 x Stress + Residual, fit the data fairly well (R2 = .49, F(1, 8) = 7.53, p = .025). Adding self-esteem significantly improved the model, so the second model, Performance = 12.999 + 4.710 x Stress - 1.765 x Self-Esteem + Residual, fit the data even better (R2 = .90, F(2, 7) = 14.65, p = .003). So, when stress goes up, performance goes up; but when self-esteem goes up, performance goes down. Coping skills did not significantly improve the model.
Model Summary

Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .696a   .485       .420                2.881
2       .898b   .807       .752                1.884

a. Predictors: (Constant), STRESS
b. Predictors: (Constant), STRESS, SELFESTE
ANOVAc

Model              Sum of Squares   df   Mean Square   F        Sig.
1   Regression     62.496           1    62.496        7.529    .025a
    Residual       66.404           8    8.300
    Total          128.900          9
2   Regression     104.042          2    52.021        14.649   .003b
    Residual       24.858           7    3.551
    Total          128.900          9

a. Predictors: (Constant), STRESS
b. Predictors: (Constant), STRESS, SELFESTE
c. Dependent Variable: PERFORMA
Coefficientsa

                   Unstandardized Coefficients   Standardized Coefficients
Model              B         Std. Error          Beta                        t        Sig.
1   (Constant)     7.688     4.543                                           1.692    .129
    STRESS         2.394     .873                .696                        2.744    .025
2   (Constant)     12.999    3.353                                           3.877    .006
    STRESS         4.710     .885                1.370                       5.319    .001
    SELFESTE       -1.765    .516                -.881                       -3.420   .011

a. Dependent Variable: PERFORMA
Excluded Variablesc

Model              Beta In   t        Sig.   Partial Correlation   Collinearity Statistics (Tolerance)
1   SELFESTE       -.881a    -3.420   .011   -.791                 .416
    COPINGSK       -.317a    -1.140   .292   -.396                 .804
2   COPINGSK       -.182b    -.949    .379   -.361                 .762

a. Predictors in the Model: (Constant), STRESS
b. Predictors in the Model: (Constant), STRESS, SELFESTE
c. Dependent Variable: PERFORMA
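For readers who want to check the stepwise solution outside SPSS, both models can be reproduced with ordinary least squares on the ten cases from exercise 16-4. This is a sketch in plain Python (standard library only; the variable names are ours, not from a data file):

```python
# Reproduce the two 16-4 regression models by ordinary least squares.
stress  = [6, 5, 5, 3, 7, 4, 6, 5, 5, 5]
esteem  = [10, 10, 8, 7, 14, 9, 9, 9, 11, 10]
perform = [21, 21, 22, 15, 22, 17, 28, 19, 16, 18]

def ols(xcols, y):
    """Coefficients [intercept, b1, b2, ...] via the normal equations."""
    n = len(y)
    X = [[1.0] + [col[i] for col in xcols] for i in range(n)]
    k = len(X[0])
    # Build X'X and X'y.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

def r_squared(coef, xcols, y):
    yhat = [coef[0] + sum(c * col[i] for c, col in zip(coef[1:], xcols)) for i in range(len(y))]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

m1 = ols([stress], perform)          # Performance = 7.688 + 2.394 x Stress
m2 = ols([stress, esteem], perform)  # Performance = 12.999 + 4.710 x Stress - 1.765 x Self-esteem
print(m1, r_squared(m1, [stress], perform))
print(m2, r_squared(m2, [stress, esteem], perform))
```

The coefficients and R² values match the SPSS Model Summary and Coefficients tables above to rounding.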
Additional Exercises Using DIVORCE.SAV:
1. Criterion variable: Life satisfaction (lsatisfy) with predictors of income, avoidant coping (avoicope), social support (socsupp), curvilinear influence of social support (socsupp2), closeness (close), curvilinear influence of closeness (close2), attributional style (asq), curvilinear influence of attributional style (asq2), gender (sex), age, length of separation (sep), years married prior to separation (mar), amount of education (school), cognitive coping (cogcope), behavioral coping (behcope), intelligence (iq), level of spirituality (spiritua).
2. Criterion variable: Severity of trauma experienced during recovery (trauma) with predictors of income, avoidant coping (avoicope), social support (socsupp), curvilinear influence of social support (socsupp2), closeness (close), curvilinear influence of closeness (close2), attributional style (asq), curvilinear influence of attributional style (asq2), gender (sex), age, length of separation (sep), years married prior to separation (mar), amount of education (school), cognitive coping (cogcope), behavioral coping (behcope), intelligence (iq), level of spirituality (spiritua).
Using HELPING3.SAV:
3. Criterion variable: Total help (tothelp) with predictors of gender, age, income, closeness of the friend (hclose), problem severity (hseveret), feelings of obligation (obligat), how well the recipient is coping (hcopet), the self-efficacy of the helper (effict), the empathic tendency of the helper (empathyt), controllability of the problem cause (hcontrot), anger felt by the helper (angert), worry felt by the helper (worry), and the sympathy of the helper (sympathi).
4. For men, criterion variable: Total help (tothelp) with predictors of gender, age, income, closeness of the friend (hclose), problem severity (hseveret), feelings of obligation (obligat), how well the recipient is coping (hcopet), the self-efficacy of the helper (effict), the empathic tendency of the helper (empathyt), controllability of the problem cause (hcontrot), anger felt by the helper (angert), worry felt by the helper (worry), and the sympathy of the helper (sympathi).
5. For women, criterion variable: Total help (tothelp) with predictors of gender, age, income, closeness of the friend (hclose), problem severity (hseveret), feelings of obligation (obligat), how well the recipient is coping (hcopet), the self-efficacy of the helper (effict), the empathic tendency of the helper (empathyt), controllability of the problem cause (hcontrot), anger felt by the helper (angert), worry felt by the helper (worry), and the sympathy of the helper (sympathi).
6. Criterion variable: Help quality (tqualitz) with predictors of gender, age, income, closeness of the friend (hclose), problem severity (hseveret), feelings of obligation (obligat), how well the recipient is coping (hcopet), the self-efficacy of the helper (effict), the empathic tendency of the helper (empathyt), controllability of the problem cause (hcontrot), anger felt by the helper (angert), worry felt by the helper (worry), and the sympathy of the helper (sympathi).
7. For men, criterion variable: Help quality (tqualitz) with predictors of gender, age, income, closeness of the friend (hclose), problem severity (hseveret), feelings of obligation (obligat), how well the recipient is coping (hcopet), the self-efficacy of the helper (effict), the empathic tendency of the helper (empathyt), controllability of the problem cause (hcontrot), anger felt by the helper (angert), worry felt by the helper (worry), and the sympathy of the helper (sympathi).
8. For women, criterion variable: Help quality (tqualitz) with predictors of gender, age, income, closeness of the friend (hclose), problem severity (hseveret), feelings of obligation (obligat), how well the recipient is coping (hcopet), the self-efficacy of the helper (effict), the empathic tendency of the helper (empathyt), controllability of the problem cause (hcontrot), anger felt by the helper (angert), worry felt by the helper (worry), and the sympathy of the helper (sympathi).
Chapter 18: Reliability Analysis

Use the helping3.sav file for the exercises that follow (downloadable at the address shown above). Measure the internal consistency (coefficient alpha) of the following sets of variables. An "h" in front of a variable name refers to assessment by the help giver; an "r" in front of a variable name refers to assessment by the help recipient.

Compute coefficient alpha for the following sets of variables, then delete variables until you achieve the highest possible alpha value. Print out relevant results.
1. hsevere1, hsevere2, rsevere1, rsevere2 measure of problem severity
2. sympath1, sympath2, sympath3, sympath4 measure of helper’s sympathy
3. anger1, anger2, anger3, anger4 measure of helper’s anger
4. hcope1, hcope2, hcope3, rcope1, rcope2, rcope3 how well the recipient is coping
5. hhelp1-hhelp15 helper rating of time spent helping
6. rhelp1-rhelp15 recipient’s rating of time helping
7. empathy1-empath14 helper’s rating of empathy
8. hqualit1, hqualit2, hqualit3, rqualit1, rqualit2, rqualit3 quality of help
9. effic1-effic15 helper’s belief of self efficacy
10. hcontro1, hcontro2, rcontro1, rcontro2 controllability of the cause of the problem
From the divorce.sav file:
11. drelat-dadjust (16 items) factors disruptive to divorce recovery
12. arelat-amain2 (13 items) factors assisting recovery from divorce
13. sp8-sp57 (18 items) spirituality measures
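SPSS reports coefficient alpha directly, but the quantity is simple to compute by hand, which helps when checking the "alpha if item deleted" logic in the exercises above. A minimal sketch in Python (standard library only; the function name and example scores are ours, not from the data files):

```python
from statistics import variance

def cronbach_alpha(items):
    """Coefficient alpha for a list of item-score columns.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # scale score per respondent
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical scores: three items rated by five respondents
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 3))
```

Deleting an item and recomputing reproduces the "Cronbach's Alpha if Item Deleted" column in the output that follows.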
18-1 Full answer provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.889               .890                                           4

Inter-Item Correlation Matrix

                                  (1)     (2)     (3)     (4)
(1) HELPER RATING OF DISRUPTION   1.000   .789    .610    .603
(2) HELPER RATING OF TRAUMA       .789    1.000   .588    .647
(3) RECIPIENT RATING OF DISRUPTION .610   .588    1.000   .774
(4) RECIPIENT RATING OF TRAUMA    .603    .647    .774    1.000

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                5.082   4.886     5.199     .313    1.064             .019       4
Item Variances            2.782   2.638     2.944     .306    1.116             .016       4
Inter-Item Correlations   .668    .588      .789      .201    1.342             .007       4

The covariance matrix is calculated and used in the analysis.

Item-Total Statistics

                                  Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                  if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
HELPER RATING OF DISRUPTION       15.44         19.157           .754                .655               .859
HELPER RATING OF TRAUMA           15.18         19.718           .768                .668               .854
RECIPIENT RATING OF DISRUPTION    15.23         19.662           .741                .631               .864
RECIPIENT RATING OF TRAUMA        15.13         19.459           .766                .655               .855

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
20.33   33.433     5.782            4
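Both alpha values in the 18-1 output can be recovered from its other tables: the raw alpha from the item variances and scale variance, and the standardized alpha from the mean inter-item correlation. A quick arithmetic check in Python (values transcribed from the output above, so the results agree only to rounding):

```python
# Values from the 18-1 output
k = 4
mean_item_variance = 2.782   # Summary Item Statistics, Item Variances row
scale_variance = 33.433      # Scale Statistics
mean_r = .668                # Summary Item Statistics, Inter-Item Correlations row

# Raw alpha: k/(k-1) * (1 - sum of item variances / scale variance)
raw_alpha = k / (k - 1) * (1 - k * mean_item_variance / scale_variance)

# Standardized alpha (Spearman-Brown form): k*r / (1 + (k-1)*r)
std_alpha = k * mean_r / (1 + (k - 1) * mean_r)

print(round(raw_alpha, 3), round(std_alpha, 3))  # close to the reported .889 and .890
```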
18-2 Full answer provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.817               .820                                           3

Inter-Item Correlation Matrix

                                 (1)     (2)     (3)
(1) HELPER RATING OF COMPASSION  1.000   .591    .590
(2) HELPER RATING OF SYMPATHY    .591    1.000   .626
(3) HELPER RATING OF MOVED       .590    .626    1.000

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                5.138   4.732     5.458     .726    1.153             .138       3
Item Variances            2.321   1.857     2.790     .933    1.502             .218       3
Inter-Item Correlations   .602    .590      .626      .036    1.061             .000       3

The covariance matrix is calculated and used in the analysis.

Item-Total Statistics

                              Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                              if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
HELPER RATING OF COMPASSION   9.96          8.291            .655                .429               .768
HELPER RATING OF SYMPATHY     10.19         7.333            .683                .467               .733
HELPER RATING OF MOVED        10.68         6.623            .683                .467               .740

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
15.42   15.284     3.910            3
18-3 Minimal answer provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.938               .938                                           4

Inter-Item Correlation Matrix

                                  (1)     (2)     (3)     (4)
(1) HELPER RATING OF ANGER        1.000   .789    .729    .755
(2) HELPER RATING OF IRRITATION   .789    1.000   .822    .839
(3) HELPER RATING OF AGGRAVATION  .729    .822    1.000   .805
(4) HELPER RATING OF ANNOYED      .755    .839    .805    1.000

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                2.176   2.054     2.307     .253    1.123             .011       4
Item Variances            2.777   2.615     2.904     .289    1.111             .018       4
Inter-Item Correlations   .790    .729      .839      .110    1.151             .002       4

The covariance matrix is calculated and used in the analysis.

Item-Total Statistics

                               Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                               if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
HELPER RATING OF ANGER         6.65          22.451           .807                .659               .932
HELPER RATING OF IRRITATION    6.40          20.815           .890                .793               .906
HELPER RATING OF AGGRAVATION   6.51          21.243           .846                .726               .921
HELPER RATING OF ANNOYED       6.56          21.464           .866                .756               .914

Scale Statistics

Mean   Variance   Std. Deviation   N of Items
8.71   37.432     6.118            4
18-4 No answer provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.674               .672                                           6

Inter-Item Correlation Matrix

      (1)     (2)     (3)     (4)     (5)     (6)
(1)   1.000   .514    .400    .293    .049    .139
(2)   .514    1.000   .306    .199    .226    .149
(3)   .400    .306    1.000   .193    .105    .252
(4)   .293    .199    .193    1.000   .270    .434
(5)   .049    .226    .105    .270    1.000   .287
(6)   .139    .149    .252    .434    .287    1.000

(1) HELPER RATING OF COPING, (2) HELPER RATING OF EFFORT BY FRIEND, (3) HELPER RATING OF ABILITY TO COPE, (4) RECIPIENT RATING OF COPING, (5) RECIPIENT RATING OF EFFORT BY FRIEND, (6) RECIPIENT RATING OF ABILITY TO COPE

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                4.846   4.484     5.283     .799    1.178             .121       6
Item Variances            2.190   1.833     2.663     .830    1.453             .099       6
Inter-Item Correlations   .254    .049      .514      .465    10.467            .015       6

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                       Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                       if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
HELPER RATING OF COPING                24.54         20.544           .457                .370               .613
HELPER RATING OF EFFORT BY FRIEND      24.00         20.993           .456                .313               .613
HELPER RATING OF ABILITY TO COPE       23.80         22.089           .405                .209               .632
RECIPIENT RATING OF COPING             24.49         22.239           .437                .266               .622
RECIPIENT RATING OF EFFORT BY FRIEND   23.97         24.415           .278                .152               .671
RECIPIENT RATING OF ABILITY TO COPE    24.59         22.745           .384                .248               .639

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
29.08   29.971     5.475            6
18-5 No answer provided for students.
. . .

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.914               .916                                           14

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                1.336   .521      2.088     1.567   4.009             .211       14
Item Variances            4.315   1.584     6.740     5.155   4.254             2.821      14
Inter-Item Correlations   .438    .201      .700      .499    3.483             .018       14

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                             Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                             if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
EMPTH--HELPER ENCOURAGE REASSURE             16.884        328.906          .708                .559               .905
INSTR--HELPER TASKS OR SERVICES              17.018        344.895          .511                .365               .914
INFORM--HELPER APPRAISE/CLARIFY              16.617        320.973          .768                .683               .902
EMPTH--HELPER VALIDATE AFFIRM                17.463        343.448          .736                .651               .904
INSTR--HELPER LOANING MATEIALS               17.848        362.678          .495                .395               .912
INFORM--HELPER INFORMATION ADVICE            17.227        336.944          .783                .678               .902
EMPTH--HELPER EXPRESS WILLINGNESS TO HELP    17.448        340.654          .721                .590               .904
INSTR--HELPER PARTICIPATE IN ACTIVITIES      16.892        343.378          .525                .394               .913
INFORM--HELPER FIND SOMEONE TO HELP          18.184        374.440          .474                .302               .913
EMPTH--HELPER SYMPATHY EMPATHY CONCERN       17.344        342.010          .731                .603               .904
INSTR--HELPER REDUCE TENSION TELL JOKES      17.090        337.382          .659                .522               .907
INFORM--HELPER TEACH TO DO BETTER            17.907        371.888          .427                .304               .914
EMPTH--HELPER EMPATHIC LISTENING             17.329        342.559          .705                .606               .905
EMPTH--HELPER RELIEVE OF SELF BLAME          17.909        359.801          .625                .511               .909

Scale Statistics

Mean     Variance   Std. Deviation   N of Items
18.705   399.129    19.9782          14
18-6 No answer provided for students.
. . .

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.934               .935                                           14

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                1.398   .585      1.914     1.329   3.273             .166       14
Item Variances            4.316   1.499     6.138     4.639   4.096             2.177      14
Inter-Item Correlations   .507    .247      .813      .565    3.286             .017       14

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                                  Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                                  if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
EMPTH--RECIPIENT RATE ENCOURAGE REASSURE          17.814        378.375          .766                .675               .926
INSTR--RECIPIENT RATE TASKS OR SERVICES           17.922        390.266          .590                .446               .933
INFORM--RECIPIENT RATE APPRAISE/CLARIFY           17.655        374.927          .798                .717               .925
EMPTH--RECIPIENT RATE VALIDATE AFFIRM             18.118        384.161          .799                .705               .925
INSTR--RECIPIENT RATE LOANING MATEIALS            18.623        409.252          .527                .465               .933
INFORM--RECIPIENT RATE INFORMATION ADVICE         17.984        387.254          .751                .605               .927
EMPTH--RECIPIENT RATE EXPRESS WILLINGNESS TO HELP 18.070        379.182          .818                .743               .925
INSTR--RECIPIENT RATE PARTICIPATE IN ACTIVITIES   17.742        388.763          .610                .439               .932
INFORM--RECIPIENT RATE FIND SOMEONE TO HELP       18.985        428.463          .478                .446               .934
EMPTH--RECIPIENT RATE SYMPATHY EMPATHY CONCERN    18.075        381.064          .827                .797               .924
INSTR--RECIPIENT RATE REDUCE TENSION TELL JOKES   17.974        391.105          .654                .507               .930
INFORM--RECIPIENT RATE TEACH TO DO BETTER         18.737        417.041          .619                .544               .931
EMPTH--RECIPIENT RATE EMPATHIC LISTENING          18.058        389.472          .759                .700               .927
EMPTH--RECIPIENT RATE RELIEVE OF SELF BLAME       18.647        411.521          .637                .518               .931

Scale Statistics

Mean     Variance   Std. Deviation   N of Items
19.570   454.202    21.3120          14
18-7 No answer provided for students.
. . .

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.797               .798                                           9

Inter-Item Correlation Matrix

      (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)     (9)
(1)   1.000   .417    .433    .387    .319    .248    .258    .207    .351
(2)   .417    1.000   .482    .429    .347    .191    .235    .229    .218
(3)   .433    .482    1.000   .508    .264    .274    .334    .276    .333
(4)   .387    .429    .508    1.000   .338    .256    .293    .256    .371
(5)   .319    .347    .264    .338    1.000   .248    .112    .233    .161
(6)   .248    .191    .274    .256    .248    1.000   .251    .325    .369
(7)   .258    .235    .334    .293    .112    .251    1.000   .227    .305
(8)   .207    .229    .276    .256    .233    .325    .227    1.000   .520
(9)   .351    .218    .333    .371    .161    .369    .305    .520    1.000

(1) SAD TO SEE LONELY STRANGER, (2) EMOTIONALLY INVOLVED WITH FRIEND PROBLEM, (3) DISTURBED WHEN BRING BAD NEWS, (4) A PERSON CRYING UPSETS ME, (5) REALLY INVOLVED IN BOOK OR MOVIE, (6) ANGRY WHEN SEE SOMEONE ILL TREATED, (7) DO NOT FEEL OK WHEN OTHERS ARE DEPRESSED, (8) UPSET TO SEE ANIMAL IN PAIN, (9) UPSET TO SEE HELPLESS OLD PEOPLE

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                4.974   4.246     5.858     1.613   1.380             .286       9
Item Variances            2.309   1.596     2.648     1.052   1.659             .114       9
Inter-Item Correlations   .306    .112      .520      .409    4.659             .009       9

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                           Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                           if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
SAD TO SEE LONELY STRANGER                 39.97         56.014           .528                .318               .772
EMOTIONALLY INVOLVED WITH FRIEND PROBLEM   40.26         57.553           .517                .341               .774
DISTURBED WHEN BRING BAD NEWS              39.86         56.286           .594                .404               .764
A PERSON CRYING UPSETS ME                  40.09         54.837           .577                .374               .765
REALLY INVOLVED IN BOOK OR MOVIE           40.01         58.920           .397                .219               .790
ANGRY WHEN SEE SOMEONE ILL TREATED         38.91         61.361           .426                .212               .785
DO NOT FEEL OK WHEN OTHERS ARE DEPRESSED   40.52         59.366           .392                .181               .791
UPSET TO SEE ANIMAL IN PAIN                39.29         58.649           .444                .318               .783
UPSET TO SEE HELPLESS OLD PEOPLE           39.22         57.495           .525                .400               .773

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
44.77   71.385     8.449            9
18-8 No answers provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.885               .885                                           6

Inter-Item Correlation Matrix

      (1)     (2)     (3)     (4)     (5)     (6)
(1)   1.000   .628    .629    .424    .380    .407
(2)   .628    1.000   .825    .417    .496    .507
(3)   .629    .825    1.000   .440    .457    .524
(4)   .424    .417    .440    1.000   .751    .750
(5)   .380    .496    .457    .751    1.000   .778
(6)   .407    .507    .524    .750    .778    1.000

(1) HELPER RATING OF QUALITY OF HELP GIVEN, (2) HELPER RATING OF EFFECTIVENESS OF HELP GIVEN, (3) HELPER RATING OF BENEFIT RECEIVED FROM HELP GIVEN, (4) RECIPIENT RATING OF QUALITY OF HELP GIVEN, (5) RECIPIENT RATING OF EFFECTIVENESS OF HELP GIVEN, (6) RECIPIENT RATING OF BENEFIT RECEIVED FROM HELP GIVEN

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                5.166   4.823     5.594     .771    1.160             .095       6
Item Variances            1.877   1.556     2.150     .594    1.382             .045       6
Inter-Item Correlations   .561    .380      .825      .445    2.172             .023       6

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                                     Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                                     if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
HELPER RATING OF QUALITY OF HELP GIVEN               25.98         32.733           .604                .458               .879
HELPER RATING OF EFFECTIVENESS OF HELP GIVEN         26.17         29.370           .716                .719               .862
HELPER RATING OF BENEFIT RECEIVED FROM HELP GIVEN    26.12         29.895           .720                .715               .861
RECIPIENT RATING OF QUALITY OF HELP GIVEN            25.40         31.147           .690                .650               .866
RECIPIENT RATING OF EFFECTIVENESS OF HELP GIVEN      25.71         30.175           .713                .685               .862
RECIPIENT RATING OF BENEFIT RECEIVED FROM HELP GIVEN 25.59         29.563           .742                .690               .857

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
30.99   42.905     6.550            6
18-9 No answer provided for students.
. . .

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.846               .854                                           15

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                4.671   3.784     5.374     1.590   1.420             .292       15
Item Variances            2.799   1.744     4.293     2.549   2.462             .625       15
Inter-Item Correlations   .280    .109      .569      .460    5.208             .009       15

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                              Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                              if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
EFFICACY FOR ENCOURAGE REASSURE               65.07         176.238          .551                .373               .833
EFFICACY FOR TASKS OR SERVICES                65.73         176.520          .425                .316               .840
EFFICACY FOR APPRAISE/CLARIFY                 64.72         178.402          .546                .444               .834
EFFICACY FOR VALIDATE AFFIRM                  65.09         175.746          .520                .368               .834
EFFICACY FOR LOANING MATEIALS                 66.04         172.961          .421                .296               .841
EFFICACY FOR INFORMATION ADVICE               65.00         174.405          .579                .455               .831
EFFICACY FOR EXPRESS WILLINGNESS TO HELP      64.75         175.936          .553                .351               .833
EFFICACY FOR PARTICIPATE IN ACTIVITIES        65.52         173.198          .440                .222               .839
EFFICACY FOR FIND SOMEONE TO HELP             66.28         171.996          .425                .275               .841
EFFICACY FOR EXPRESS SYMPATHY EMPATHY CONCERN 64.69         176.998          .511                .449               .835
EFFICACY FOR REDUCE TENSION TELL JOKES        65.21         177.138          .493                .295               .836
EFFICACY FOR TEACH TO DO BETTER               65.90         173.419          .488                .326               .836
EFFICACY FOR EMPATHIC LISTENING               65.02         176.212          .500                .418               .835
EFFICACY FOR RELIEVE OF SELF BLAME            65.85         173.128          .469                .279               .837
EFFICACY FOR OPEN-ENDED QUESTION              66.02         181.964          .334                .139               .844

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
70.06   199.406    14.121           15
18-10 No answer provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.812               .813                                           4

Inter-Item Correlation Matrix

      (1)     (2)     (3)     (4)
(1)   1.000   .616    .458    .378
(2)   .616    1.000   .477    .492
(3)   .458    .477    1.000   .704
(4)   .378    .492    .704    1.000

(1) HELPER RATING OF CONTROLLABILITY, (2) HELPER RATING OF RESPONSIBILITY/FAULT, (3) RECIPIENT RATING OF CONTROLLABILITY, (4) RECIPIENT RATING OF RESPONSIBILITY/FAULT

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                3.254   2.890     3.536     .646    1.224             .083       4
Item Variances            3.584   3.382     3.958     .576    1.170             .066       4
Inter-Item Correlations   .521    .378      .704      .326    1.864             .013       4

The covariance matrix is calculated and used in the analysis.

Item-Total Statistics

                                         Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                         if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
HELPER RATING OF CONTROLLABILITY         9.48          21.952           .577                .416               .791
HELPER RATING OF RESPONSIBILITY/FAULT    10.13         21.939           .647                .461               .756
RECIPIENT RATING OF CONTROLLABILITY      9.59          21.492           .668                .541               .746
RECIPIENT RATING OF RESPONSIBILITY/FAULT 9.85          22.285           .633                .528               .763
Scale Statistics

Mean    Variance   Std. Deviation   N of Items
13.02   36.666     6.055            4
18-11 No answer provided for students.
. . .

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.860               .864                                           16

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                3.747   2.738     4.716     1.978   1.722             .280       16
Item Variances            3.591   2.704     4.332     1.628   1.602             .186       16
Inter-Item Correlations   .285    -.052     .662      .714    -12.656           .022       16

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                      Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                      if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
relation with former spouse           55.55         274.819          .274                .297               .863
break down of physical health         56.79         264.181          .459                .282               .854
emotional trauma                      55.24         263.637          .574                .474               .849
depression                            55.99         254.829          .676                .611               .843
feelings of guilt                     56.35         269.527          .404                .284               .856
feelings of hopelessness              56.52         252.628          .689                .622               .843
feeling less personally attractive    56.71         260.004          .560                .498               .849
lowered self-esteem                   56.38         254.367          .673                .614               .843
disruption of life patterns           55.80         256.705          .658                .499               .845
disruption of social relationships    55.94         261.961          .541                .462               .850
loss of closeness                     56.00         260.526          .505                .495               .851
lack of sex                           56.21         268.778          .358                .344               .859
legal problems                        56.19         268.700          .351                .265               .860
financial difficulties                55.60         265.566          .436                .308               .855
children                              56.81         268.969          .370                .379               .858
adjustment to single parenthood       57.21         269.801          .378                .396               .858

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
59.95   296.967    17.233           16
18-12 No answer provided for students.
. . .

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.614               .626                                           12

Inter-Item Correlation Matrix

      (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)     (9)     (10)    (11)    (12)
(1)   1.000   .179    -.035   .021    .072    .028    .015    .165    -.026   .058    .001    -.015
(2)   .179    1.000   .217    .134    -.156   .241    .067    -.036   -.039   .090    .131    .124
(3)   -.035   .217    1.000   .446    .066    .155    .030    .119    .026    .159    .071    .175
(4)   .021    .134    .446    1.000   .038    .104    .041    .167    .044    .108    .144    .230
(5)   .072    -.156   .066    .038    1.000   .150    .139    .174    .241    -.110   .116    .007
(6)   .028    .241    .155    .104    .150    1.000   .255    -.025   .129    .261    .093    .237
(7)   .015    .067    .030    .041    .139    .255    1.000   .275    .142    .150    .300    .341
(8)   .165    -.036   .119    .167    .174    -.025   .275    1.000   .261    -.021   .272    .244
(9)   -.026   -.039   .026    .044    .241    .129    .142    .261    1.000   -.026   .091    .151
(10)  .058    .090    .159    .108    -.110   .261    .150    -.021   -.026   1.000   .157    .222
(11)  .001    .131    .071    .144    .116    .093    .300    .272    .091    .157    1.000   .489
(12)  -.015   .124    .175    .230    .007    .237    .341    .244    .151    .222    .489    1.000

(1) relationship with former spouse, (2) maintenance of previous life patterns, (3) support of friends, (4) support of parents or family, (5) counseling or group therapy, (6) enjoyed activities or hobbies, (7) the passage of time, (8) involvement with children, (9) spiritual involvement, (10) affectionate involvement with others, (11) careful rational analysis of the situation, (12) maintenance of positive attitude

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                4.332   2.271     5.323     3.052   2.344             .889       12
Item Variances            3.548   2.369     5.484     3.115   2.315             .858       12
Inter-Item Correlations   .122    -.156     .489      .645    -3.132            .015       12

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                          Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                          if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
relationship with former spouse           49.71         91.267           .096                .102               .622
maintenance of previous life patterns     48.49         88.488           .168                .197               .610
support of friends                        46.76         84.826           .290                .263               .588
support of parents or family              46.90         83.184           .294                .234               .587
counseling or group therapy               48.90         86.871           .160                .184               .615
enjoyed activities or hobbies             47.44         83.721           .322                .234               .582
the passage of time                       46.86         83.723           .361                .225               .576
involvement with children                 47.30         79.194           .332                .250               .578
spiritual involvement                     47.84         82.917           .209                .135               .608
affectionate involvement with others      47.61         87.344           .186                .154               .608
careful rational analysis of the situation 47.34        82.031           .376                .308               .572
maintenance of positive attitude          46.66         82.252           .455                .354               .562

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
51.98   97.324     9.865            12
18-13 No answer provided for students.
Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.952               .953                                           8

Inter-Item Correlation Matrix

      (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)
(1)   1.000   .759    .718    .778    .565    .649    .654    .775
(2)   .759    1.000   .715    .789    .603    .709    .605    .766
(3)   .718    .715    1.000   .769    .655    .778    .709    .856
(4)   .778    .789    .769    1.000   .638    .756    .662    .808
(5)   .565    .603    .655    .638    1.000   .832    .581    .722
(6)   .649    .709    .778    .756    .832    1.000   .662    .827
(7)   .654    .605    .709    .662    .581    .662    1.000   .736
(8)   .775    .766    .856    .808    .722    .827    .736    1.000

(1) time reading spiritual books, (2) time spent praying, (3) incorporate spirituality into daily life, (4) time spent meditating, (5) believe in existence of a higher power?, (6) belief in God is beneficial to my life, (7) turn to spiritual things in difficult times, (8) spiritual growth a priority?

The covariance matrix is calculated and used in the analysis.

Summary Item Statistics

                          Mean    Minimum   Maximum   Range   Maximum/Minimum   Variance   N of Items
Item Means                4.344   3.266     5.856     2.590   1.793             .745       8
Item Variances            4.342   3.247     4.979     1.732   1.533             .436       8
Inter-Item Correlations   .717    .565      .856      .291    1.515             .006       8

The covariance matrix is calculated and used in the analysis.
Item-Total Statistics

                                              Scale Mean    Scale Variance   Corrected Item-     Squared Multiple   Cronbach's Alpha
                                              if Deleted    if Deleted       Total Correlation   Correlation        if Item Deleted
time reading spiritual books                  31.20         158.255          .803                .705               .947
time spent praying                            31.49         162.400          .812                .701               .946
incorporate spirituality into daily life      30.80         163.772          .859                .768               .944
time spent meditating                         30.90         157.061          .859                .762               .943
believe in existence of a higher power?       28.90         170.011          .746                .697               .951
belief in God is beneficial to my life        29.62         157.156          .855                .820               .944
turn to spiritual things in difficult times   30.08         161.332          .749                .582               .951
spiritual growth a priority?                  30.28         154.659          .912                .845               .940

Scale Statistics

Mean    Variance   Std. Deviation   N of Items
34.75   208.319    14.433           8
Chapter 23: MANOVA and MANCOVA

1. Using the grades.sav file, compute and interpret a MANOVA examining the effect of whether or not students completed the extra credit project on the total points for the class and the previous GPA.
2. Using the grades.sav file, compute and interpret a MANOVA examining the effects of section and lowup on total and GPA.
3. Why would it be a bad idea to compute a MANOVA examining the effects of section and lowup on total and percent?
4. A researcher wishes to examine the effects of high- or low-stress situations on a test of cognitive performance and self-esteem levels. Participants are also divided into those with high- or low-coping skills. The data are shown after question 5 (ignore the last column for now). Perform and interpret a MANOVA examining the effects of stress level and coping skills on both cognitive performance and self-esteem level.
5. Coping skills may be correlated with immune response. Include immune response levels (listed below) in the MANOVA performed for Question 4. What do these results mean? In what way are they different from the results in Question 4? Why?
Stress Level   Coping Skills   Cognitive Performance   Self-Esteem   Immune Response
High           High            6                       19            21
Low            High            5                       18            21
High           High            5                       14            22
High           Low             3                       8             15
Low            High            7                       20            22
High           Low             4                       8             17
High           High            6                       15            28
High           Low             5                       7             19
Low            Low             5                       20            16
Low            Low             5                       17            18
23-1 Full answer provided for students.
There is a significant multivariate effect of whether or not students did the extra credit project on their previous GPAs and total class points (F(2,102) = 5.69, p = .005, η2 = .10).
One-way ANOVAs suggest that this effect is primarily related to the total class points (F(1,103) = 9.99, p = .002, η2 = .09) rather than the previous GPA (F(1,103) = .093, p > .05, η2 = .00).
Students who completed the extra credit project had more points (M = 109.36, SD = 11.36) than those who did not complete the extra credit project (M = 98.24, SD = 15.41).
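The partial η2 values reported above can be recovered directly from each F ratio and its degrees of freedom, since partial η2 = (F × df_hypothesis) / (F × df_hypothesis + df_error). A quick check in Python (a sketch; the helper name is ours, and the values are taken from the output):

```python
# Partial eta-squared from an F ratio and its degrees of freedom:
#   eta_sq = (F * df_hyp) / (F * df_hyp + df_error)
def partial_eta_squared(f, df_hyp, df_error):
    return (f * df_hyp) / (f * df_hyp + df_error)

# Multivariate EXTRCRED effect: F(2, 102) = 5.686 -> eta^2 of about .10
print(round(partial_eta_squared(5.686, 2, 102), 3))   # 0.1
# Univariate TOTAL effect: F(1, 103) = 9.985 -> eta^2 of about .09
print(round(partial_eta_squared(9.985, 1, 103), 3))   # 0.088
```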
Multivariate Tests(c)

Effect                           Value     F              Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Intercept  Pillai's Trace         .971    1733.479(b)     2.000          102.000    .000   .971                 3466.959            1.000
           Wilks' Lambda          .029    1733.479(b)     2.000          102.000    .000   .971                 3466.959            1.000
           Hotelling's Trace    33.990    1733.479(b)     2.000          102.000    .000   .971                 3466.959            1.000
           Roy's Largest Root   33.990    1733.479(b)     2.000          102.000    .000   .971                 3466.959            1.000
EXTRCRED   Pillai's Trace         .100       5.686(b)     2.000          102.000    .005   .100                   11.372             .854
           Wilks' Lambda          .900       5.686(b)     2.000          102.000    .005   .100                   11.372             .854
           Hotelling's Trace      .111       5.686(b)     2.000          102.000    .005   .100                   11.372             .854
           Roy's Largest Root     .111       5.686(b)     2.000          102.000    .005   .100                   11.372             .854

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept+EXTRCRED
Tests of Between-Subjects Effects

Source           Dependent Variable  Type III Sum of Squares   df    Mean Square     F          Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Corrected Model  GPA                        .055(b)              1        .055          .093     .761   .001                    .093              .061
                 TOTAL                  2151.443(c)              1    2151.443         9.985     .002   .088                   9.985              .879
Intercept        GPA                     543.476                 1     543.476       923.452     .000   .900                 923.452             1.000
                 TOTAL                749523.786                 1  749523.786      3478.731     .000   .971                3478.731             1.000
EXTRCRED         GPA                        .055                 1        .055          .093     .761   .001                    .093              .061
                 TOTAL                  2151.443                 1    2151.443         9.985     .002   .088                   9.985              .879
Error            GPA                      60.618               103        .589
                 TOTAL                 22192.272               103     215.459
Total            GPA                     871.488               105
                 TOTAL               1086378.000               105
Corrected Total  GPA                      60.673               104
                 TOTAL                 24343.714               104

a. Computed using alpha = .05
b. R Squared = .001 (Adjusted R Squared = -.009)
c. R Squared = .088 (Adjusted R Squared = .080)
Descriptive Statistics

        EXTRCRED   Mean     Std. Deviation    N
GPA     No         2.7671   .78466            83
        Yes        2.8232   .69460            22
        Total      2.7789   .76380           105
TOTAL   No         98.24    15.414            83
        Yes        109.36   11.358            22
        Total      100.57   15.299           105
23-2 Minimal answer provided for students.
There is not a significant main effect of lower/upper division status on total class points and previous GPA (F(2, 98) = 1.14, p = .323, η2 = .02). There is not a significant main effect of class section on total class points and previous GPA (F(4, 198) = 1.98, p = .10, η2 = .04). There is a significant interaction between class section and lower/upper division status on total class points and previous GPA (F(4, 198) = 4.23, p = .003, η2 = .08).
Multivariate Tests(d)

Effect                                  Value     F              Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Intercept         Pillai's Trace         .968    1474.831(b)     2.000           98.000    .000   .968                 2949.662            1.000
                  Wilks' Lambda          .032    1474.831(b)     2.000           98.000    .000   .968                 2949.662            1.000
                  Hotelling's Trace    30.099    1474.831(b)     2.000           98.000    .000   .968                 2949.662            1.000
                  Roy's Largest Root   30.099    1474.831(b)     2.000           98.000    .000   .968                 2949.662            1.000
LOWUP             Pillai's Trace         .023       1.142(b)     2.000           98.000    .323   .023                    2.284             .246
                  Wilks' Lambda          .977       1.142(b)     2.000           98.000    .323   .023                    2.284             .246
                  Hotelling's Trace      .023       1.142(b)     2.000           98.000    .323   .023                    2.284             .246
                  Roy's Largest Root     .023       1.142(b)     2.000           98.000    .323   .023                    2.284             .246
SECTION           Pillai's Trace         .077       1.976        4.000          198.000    .100   .038                    7.903             .587
                  Wilks' Lambda          .924       1.974(b)     4.000          196.000    .100   .039                    7.894             .586
                  Hotelling's Trace      .081       1.971        4.000          194.000    .100   .039                    7.884             .586
                  Roy's Largest Root     .068       3.368(c)     2.000           99.000    .038   .064                    6.735             .623
LOWUP * SECTION   Pillai's Trace         .157       4.229        4.000          198.000    .003   .079                   16.918             .921
                  Wilks' Lambda          .848       4.205(b)     4.000          196.000    .003   .079                   16.818             .919
                  Hotelling's Trace      .172       4.179        4.000          194.000    .003   .079                   16.717             .918
                  Roy's Largest Root     .114       5.658(c)     2.000           99.000    .005   .103                   11.316             .852

a. Computed using alpha = .05
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.
d. Design: Intercept+LOWUP+SECTION+LOWUP * SECTION
Tests of Between-Subjects Effects

Source           Dependent Variable  Type III Sum of Squares   df    Mean Square     F          Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Corrected Model  GPA                       5.307(b)              5       1.061         1.898     .101   .087                   9.490              .622
                 TOTAL                  3090.940(c)              5     618.188         2.880     .018   .127                  14.398              .827
Intercept        GPA                     507.576                 1     507.576       907.602     .000   .902                 907.602             1.000
                 TOTAL                628849.181                 1  628849.181      2929.315     .000   .967                2929.315             1.000
LOWUP            GPA                       1.246                 1       1.246         2.228     .139   .022                   2.228              .315
                 TOTAL                    34.650                 1      34.650          .161     .689   .002                    .161              .068
SECTION          GPA                        .829                 2        .414          .741     .479   .015                   1.482              .173
                 TOTAL                  1359.584                 2     679.792         3.167     .046   .060                   6.333              .595
LOWUP * SECTION  GPA                       3.350                 2       1.675         2.995     .055   .057                   5.990              .569
                 TOTAL                  1974.216                 2     987.108         4.598     .012   .085                   9.196              .767
Error            GPA                      55.366                99        .559
                 TOTAL                 21252.774                99     214.674
Total            GPA                     871.488               105
                 TOTAL               1086378.000               105
Corrected Total  GPA                      60.673               104
                 TOTAL                 24343.714               104

a. Computed using alpha = .05
b. R Squared = .087 (Adjusted R Squared = .041)
c. R Squared = .127 (Adjusted R Squared = .083)
One-way ANOVAs suggest that this interaction takes place primarily in the total class points (F(2, 99) = 4.60, p = .012, η2 = .09), though the interaction of lower/upper division status and class section on GPA was only somewhat weaker (F(2, 99) = 3.00, p = .055, η2 = .06).
An examination of means suggests that lower division students had more total points than upper division students in sections 1 (M = 109.86, SD = 9.51 vs. M = 103.81, SD = 17.44) and 3 (M = 107.50, SD = 9.47 vs. M = 95.93, SD = 17.64), but upper division students had more total points (M = 103.18, SD = 9.44) than lower division students (M = 90.09, SD = 13.13) in section 2. Lower division students had higher GPAs than upper division students in sections 2 (M = 2.84, SD = .99 vs. M = 2.67, SD = .68) and 3 (M = 3.53, SD = .50 vs. M = 2.57, SD = .77), but lower GPAs (M = 2.72, SD = .99) than upper division students (M = 3.00, SD = .71) in section 1.
23-3 No answer provided for students.

Percent = total / 125, so percent and total are just rescaled versions of the same exact thing. MANOVA requires that the dependent variables not be perfectly correlated (one must not be a linear function of another), so including both would violate that requirement.
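The problem can be seen numerically: if one dependent variable is an exact rescaling of another, the covariance matrix of the DVs is singular (determinant zero), so statistics such as Wilks' lambda, which involve a determinant of the error matrix, cannot be computed. A minimal sketch with made-up scores (the division by 125 mirrors the grades.sav relationship):

```python
import numpy as np

total = np.array([98.0, 110.0, 87.0, 120.0, 105.0])
percent = total / 125 * 100          # an exact linear function of total

# Covariance matrix of the two "different" DVs
cov = np.cov(np.vstack([total, percent]))
print(np.linalg.det(cov))            # essentially 0: the matrix is singular
```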
23-4 Minimal answer provided for students.
Descriptive Statistics

           STRESS   COPING   Mean    Std. Deviation    N
COGNITIV   Low      Low       5.00    .000              2
                    High      6.00   1.414              2
                    Total     5.50   1.000              4
           High     Low       4.00   1.000              3
                    High      5.67    .577              3
                    Total     4.83   1.169              6
           Total    Low       4.40    .894              5
                    High      5.80    .837              5
                    Total     5.10   1.101             10
SELFESTE   Low      Low      18.50   2.121              2
                    High     19.00   1.414              2
                    Total    18.75   1.500              4
           High     Low       7.67    .577              3
                    High     16.00   2.646              3
                    Total    11.83   4.875              6
           Total    Low      12.00   6.042              5
                    High     17.20   2.588              5
                    Total    14.60   5.168             10
Multivariate Tests(c)

Effect                                 Value     F            Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Intercept        Pillai's Trace         .992    316.039(b)    2.000          5.000      .000   .992                 632.079             1.000
                 Wilks' Lambda          .008    316.039(b)    2.000          5.000      .000   .992                 632.079             1.000
                 Hotelling's Trace   126.416    316.039(b)    2.000          5.000      .000   .992                 632.079             1.000
                 Roy's Largest Root  126.416    316.039(b)    2.000          5.000      .000   .992                 632.079             1.000
STRESS           Pillai's Trace         .846     13.700(b)    2.000          5.000      .009   .846                  27.400              .924
                 Wilks' Lambda          .154     13.700(b)    2.000          5.000      .009   .846                  27.400              .924
                 Hotelling's Trace     5.480     13.700(b)    2.000          5.000      .009   .846                  27.400              .924
                 Roy's Largest Root    5.480     13.700(b)    2.000          5.000      .009   .846                  27.400              .924
COPING           Pillai's Trace         .714      6.237(b)    2.000          5.000      .044   .714                  12.475              .628
                 Wilks' Lambda          .286      6.237(b)    2.000          5.000      .044   .714                  12.475              .628
                 Hotelling's Trace     2.495      6.237(b)    2.000          5.000      .044   .714                  12.475              .628
                 Roy's Largest Root    2.495      6.237(b)    2.000          5.000      .044   .714                  12.475              .628
STRESS * COPING  Pillai's Trace         .639      4.418(b)    2.000          5.000      .079   .639                   8.836              .483
                 Wilks' Lambda          .361      4.418(b)    2.000          5.000      .079   .639                   8.836              .483
                 Hotelling's Trace     1.767      4.418(b)    2.000          5.000      .079   .639                   8.836              .483
                 Roy's Largest Root    1.767      4.418(b)    2.000          5.000      .079   .639                   8.836              .483

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept+STRESS+COPING+STRESS * COPING
MANOVA suggests that there is a main effect of stress on cognitive performance and self-esteem (F(2, 5) = 13.70, p = .009, η2 = .85). One-way ANOVAs suggest that this effect is primarily centered on the relation between stress and self-esteem (F(1,6) = 32.55, p = .001, η2 = .84) rather than stress and cognitive performance (F(1,6) = 1.37, p > .05, η2 = .19). Those in the low-stress condition had higher self-esteem (M = 18.75, SD = 1.50) than those in the high-stress condition (M = 11.83, SD = 4.88).

MANOVA also revealed a significant main effect of coping on cognitive performance and self-esteem (F(2,5) = 6.24, p = .044, η2 = .71). One-way ANOVAs suggest that this effect is clearly present in the relation between coping and self-esteem (F(1,6) = 13.27, p = .011, η2 = .70), though the relation between coping and cognitive performance was marginally significant as well (F(1,6) = 5.49, p = .058, η2 = .48). Those with high coping skills had higher self-esteem (M = 17.20, SD = 2.59) than those with low coping skills (M = 12.00, SD = 6.04). Those with high coping skills may have also had higher cognitive performance (M = 5.80, SD = .84) than those with low coping skills (M = 4.40, SD = .89).

The interaction effect between coping and stress levels was not significant (F(2,5) = 4.42, p = .079, η2 = .64).
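For an effect with a single hypothesis degree of freedom (as here, where both factors have two levels), all four multivariate statistics yield the same exact F, and Hotelling's trace T converts directly via F = T × df_error / p, where p is the number of dependent variables. A quick check against the STRESS and COPING effects reported above (a sketch for this single-df case only; the helper name is ours):

```python
# For a two-level factor in a MANOVA (hypothesis df = 1), Hotelling's
# trace T converts to an exact F with (p, df_error) degrees of freedom:
#     F = T * df_error / p
def hotelling_to_f(t, df_error, p):
    return t * df_error / p

# STRESS effect: T = 5.480, error df = 5, p = 2 DVs -> F = 13.70
print(round(hotelling_to_f(5.480, 5, 2), 2))   # 13.7
# COPING effect: T = 2.495 -> F = 6.24
print(round(hotelling_to_f(2.495, 5, 2), 2))   # 6.24
```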
Tests of Between-Subjects Effects

Source           Dependent Variable  Type III Sum of Squares   df   Mean Square    F         Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Corrected Model  COGNITIV                  6.233(b)             3       2.078       2.671    .141   .572                   8.014              .385
                 SELFESTE                219.233(c)             3      73.078      20.715    .001   .912                  62.145              .998
Intercept        COGNITIV                256.267                1     256.267     329.486    .000   .982                 329.486             1.000
                 SELFESTE               2244.817                1    2244.817     636.326    .000   .991                 636.326             1.000
STRESS           COGNITIV                  1.067                1       1.067       1.371    .286   .186                   1.371              .168
                 SELFESTE                114.817                1     114.817      32.546    .001   .844                  32.546              .997
COPING           COGNITIV                  4.267                1       4.267       5.486    .058   .478                   5.486              .502
                 SELFESTE                 46.817                1      46.817      13.271    .011   .689                  13.271              .856
STRESS * COPING  COGNITIV                   .267                1        .267        .343    .580   .054                    .343              .079
                 SELFESTE                 36.817                1      36.817      10.436    .018   .635                  10.436              .768
Error            COGNITIV                  4.667                6        .778
                 SELFESTE                 21.167                6       3.528
Total            COGNITIV                271.000               10
                 SELFESTE               2372.000               10
Corrected Total  COGNITIV                 10.900                9
                 SELFESTE                240.400                9

a. Computed using alpha = .05
b. R Squared = .572 (Adjusted R Squared = .358)
c. R Squared = .912 (Adjusted R Squared = .868)
23-5 No answer provided for students.
Descriptive Statistics

           STRESS   COPING   Mean    Std. Deviation    N
COGNITIV   Low      Low       5.00    .000              2
                    High      6.00   1.414              2
                    Total     5.50   1.000              4
           High     Low       4.00   1.000              3
                    High      5.67    .577              3
                    Total     4.83   1.169              6
           Total    Low       4.40    .894              5
                    High      5.80    .837              5
                    Total     5.10   1.101             10
SELFESTE   Low      Low      18.50   2.121              2
                    High     19.00   1.414              2
                    Total    18.75   1.500              4
           High     Low       7.67    .577              3
                    High     16.00   2.646              3
                    Total    11.83   4.875              6
           Total    Low      12.00   6.042              5
                    High     17.20   2.588              5
                    Total    14.60   5.168             10
IMMUNE     Low      Low      17.00   1.414              2
                    High     21.50    .707              2
                    Total    19.25   2.754              4
           High     Low      17.00   2.000              3
                    High     23.67   3.786              3
                    Total    20.33   4.546              6
           Total    Low      17.00   1.581              5
                    High     22.80   2.950              5
                    Total    19.90   3.784             10
MANOVA suggests that there is a main effect of stress on cognitive performance, self-esteem, and immune response (F(3,4) = 10.57, p = .023, η2 = .89). One-way ANOVAs suggest that this effect is primarily centered on the relation between stress and self-esteem (F(1,6) = 32.55, p = .001, η2 = .84) rather than stress and cognitive performance (F(1,6) = 1.37, p > .05, η2 = .19) or stress and immune response (F(1,6) = .43, p > .05,
Multivariate Tests(c)

Effect                                 Value     F            Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Intercept        Pillai's Trace         .998    655.351(b)    3.000          4.000      .000   .998                 1966.054            1.000
                 Wilks' Lambda          .002    655.351(b)    3.000          4.000      .000   .998                 1966.054            1.000
                 Hotelling's Trace   491.514    655.351(b)    3.000          4.000      .000   .998                 1966.054            1.000
                 Roy's Largest Root  491.514    655.351(b)    3.000          4.000      .000   .998                 1966.054            1.000
STRESS           Pillai's Trace         .888     10.571(b)    3.000          4.000      .023   .888                   31.713             .817
                 Wilks' Lambda          .112     10.571(b)    3.000          4.000      .023   .888                   31.713             .817
                 Hotelling's Trace     7.928     10.571(b)    3.000          4.000      .023   .888                   31.713             .817
                 Roy's Largest Root    7.928     10.571(b)    3.000          4.000      .023   .888                   31.713             .817
COPING           Pillai's Trace         .913     14.051(b)    3.000          4.000      .014   .913                   42.152             .908
                 Wilks' Lambda          .087     14.051(b)    3.000          4.000      .014   .913                   42.152             .908
                 Hotelling's Trace    10.538     14.051(b)    3.000          4.000      .014   .913                   42.152             .908
                 Roy's Largest Root   10.538     14.051(b)    3.000          4.000      .014   .913                   42.152             .908
STRESS * COPING  Pillai's Trace         .815      5.865(b)    3.000          4.000      .060   .815                   17.596             .571
                 Wilks' Lambda          .185      5.865(b)    3.000          4.000      .060   .815                   17.596             .571
                 Hotelling's Trace     4.399      5.865(b)    3.000          4.000      .060   .815                   17.596             .571
                 Roy's Largest Root    4.399      5.865(b)    3.000          4.000      .060   .815                   17.596             .571

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept+STRESS+COPING+STRESS * COPING
Tests of Between-Subjects Effects

Source           Dependent Variable  Type III Sum of Squares   df   Mean Square    F         Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Corrected Model  COGNITIV                  6.233(b)             3       2.078       2.671    .141   .572                   8.014              .385
                 SELFESTE                219.233(c)             3      73.078      20.715    .001   .912                  62.145              .998
                 IMMUNE                   89.733(d)             3      29.911       4.582    .054   .696                  13.746              .604
Intercept        COGNITIV                256.267                1     256.267     329.486    .000   .982                 329.486             1.000
                 SELFESTE               2244.817                1    2244.817     636.326    .000   .991                 636.326             1.000
                 IMMUNE                 3760.417                1    3760.417     576.064    .000   .990                 576.064             1.000
STRESS           COGNITIV                  1.067                1       1.067       1.371    .286   .186                   1.371              .168
                 SELFESTE                114.817                1     114.817      32.546    .001   .844                  32.546              .997
                 IMMUNE                    2.817                1       2.817        .431    .536   .067                    .431              .086
COPING           COGNITIV                  4.267                1       4.267       5.486    .058   .478                   5.486              .502
                 SELFESTE                 46.817                1      46.817      13.271    .011   .689                  13.271              .856
                 IMMUNE                   74.817                1      74.817      11.461    .015   .656                  11.461              .804
STRESS * COPING  COGNITIV                   .267                1        .267        .343    .580   .054                    .343              .079
                 SELFESTE                 36.817                1      36.817      10.436    .018   .635                  10.436              .768
                 IMMUNE                    2.817                1       2.817        .431    .536   .067                    .431              .086
Error            COGNITIV                  4.667                6        .778
                 SELFESTE                 21.167                6       3.528
                 IMMUNE                   39.167                6       6.528
Total            COGNITIV                271.000               10
                 SELFESTE               2372.000               10
                 IMMUNE                 4089.000               10
Corrected Total  COGNITIV                 10.900                9
                 SELFESTE                240.400                9
                 IMMUNE                  128.900                9

a. Computed using alpha = .05
b. R Squared = .572 (Adjusted R Squared = .358)
c. R Squared = .912 (Adjusted R Squared = .868)
d. R Squared = .696 (Adjusted R Squared = .544)
η2 = .07). Those in the low-stress condition had higher self-esteem (M = 18.75, SD = 1.50) than those in the high-stress condition (M = 11.83, SD = 4.88).

MANOVA also revealed a significant main effect of coping on cognitive performance, self-esteem, and immune response (F(3,4) = 14.05, p = .014, η2 = .91). One-way ANOVAs suggest that this effect is clearly present in the relation between coping and self-esteem (F(1,6) = 13.27, p = .011, η2 = .69) and in the relation between coping and immune response (F(1,6) = 11.46, p = .015, η2 = .66); the relation between coping and cognitive performance was moderately large and marginally significant as well (F(1,6) = 5.49, p = .058, η2 = .48). Those with high coping skills had higher self-esteem (M = 17.20, SD = 2.59) than those with low coping skills (M = 12.00, SD = 6.04); they also had higher immune response (M = 22.80, SD = 2.95) than those with low coping skills (M = 17.00, SD = 1.58). Those with high coping skills may have also had higher cognitive performance (M = 5.80, SD = .84) than those with low coping skills (M = 4.40, SD = .89). The interaction effect between coping and stress levels was not significant (though the effect size was large; F(3,4) = 5.87, p = .06, η2 = .82).

By adding an additional dependent variable, we were in this case able to get more information without losing too much power. Notice, however, that our error (denominator) degrees of freedom went down, so we have to be careful about adding too many variables to our tests.
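The shrinking error degrees of freedom can be sketched directly. For the exact multivariate tests in this design, the error df is N minus the number of cells minus p plus 1, where p is the number of dependent variables, so with N = 10 and four stress-by-coping cells, every added DV costs one error df. (A sketch for this single-df, exact-statistic case; the helper name is ours.)

```python
# Error df for the exact multivariate F in a fully crossed design with
# single-df effects: df_error = N - n_cells - p + 1  (p = number of DVs)
def manova_error_df(n, n_cells, p):
    return n - n_cells - p + 1

print(manova_error_df(10, 4, 2))   # 5  (two DVs, as in Question 4)
print(manova_error_df(10, 4, 3))   # 4  (three DVs, as in Question 5)
```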
Chapter 24: Repeated-Measures MANOVA

1. Imagine that in the grades.sav file, the five quiz scores are actually the same quiz taken under different circumstances. Perform a repeated-measures ANOVA on the five quiz scores. What do these results mean?
2. To the analysis in exercise 1, add whether or not students completed the extra credit project (extrcred) as a between-subjects variable. What do these results mean?
3. A researcher puts participants in a highly stressful situation (say, performing repeated-measures MANCOVA) and measures their cognitive performance. He then puts them in a low-stress situation (say, lying on the beach on a pleasant day). Participant scores on the test of cognitive performance are reported below. Perform and interpret a within-subjects ANOVA on these data.
Case Number:    1    2    3    4    5    5    6    7    8   10
High Stress:   76   89   86   85   62   63   85  115   87   85
Low Stress:    91   92  127   92   75   56   82  150  118  114
4. The researcher also collects data from the same participants on their coping ability. They scored (in case number order) 25, 9, 59, 16, 23, 10, 6, 43, 44, and 34. Perform and interpret a within-subjects ANCOVA on these data.
5. The researcher just discovered some more data…in this case, physical dexterity performance in the high-stress and low-stress situations (listed below, in the same case number order as in the previous two exercises). Perform and interpret a 2 (stress level: high, low) by 2 (kind of performance: cognitive, dexterity) ANCOVA on these data.
Physical dexterity values:
Case Number:    1    2    3    4    5    5    6    7    8   10
High Stress:   91  109   94   99   73   76   94  136  109   94
Low Stress:    79   68  135  103   79   46   77  173  111  109
24-1 Full answer provided for students.
These results suggest that there is a significant difference between the five conditions under which the quiz was taken (F(4,101) = 4.54, p = .002, η2 = .15). We can examine the means to determine what that pattern of quiz scores looks like.
Descriptive Statistics

        Mean   Std. Deviation    N
QUIZ1   7.47   2.481           105
QUIZ2   7.98   1.623           105
QUIZ3   7.98   2.308           105
QUIZ4   7.80   2.280           105
QUIZ5   7.87   1.765           105
Multivariate Tests(c)

Effect                         Value    F           Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
CONDITIO  Pillai's Trace        .152    4.539(b)    4.000          101.000    .002   .152                 18.156              .934
          Wilks' Lambda         .848    4.539(b)    4.000          101.000    .002   .152                 18.156              .934
          Hotelling's Trace     .180    4.539(b)    4.000          101.000    .002   .152                 18.156              .934
          Roy's Largest Root    .180    4.539(b)    4.000          101.000    .002   .152                 18.156              .934

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept  Within Subjects Design: CONDITIO
24-2 Minimal answer provided for students.
When the condition in which the quiz was taken is examined at the same time that extra credit participation is examined, there is no difference between the conditions on their own (F(4, 412) = .51, p > .05, η2 = .01). There is, however, an interaction effect between the quiz condition and extra credit participation (F(4, 412) = 7.60, p < .001, η2 = .07).
An examination of the means suggests that doing the extra credit helped more for the quiz in conditions 1 and 4 (or, not doing the extra credit hurt more in conditions 1 and 4) than in the other conditions, with the extra credit affecting the quiz score least in conditions 2 and 5.
There was also a significant main effect of doing the extra credit (F(1, 103) = 10.16, p = .002, η2 = .09) such that people who did the extra credit assignment had higher scores overall (M = 8.86, SE = .37) than those who didn't do the extra credit assignment (M = 7.54, SE = .19).
24-3 No answer provided for students.
There is a significant difference in cognitive performance between individuals in the high-stress (M = 83.30, SD = 14.86) and low-stress (M = 99.70, SD = 27.57) conditions, F(1,9) = 9.57, p = .013.
Descriptive Statistics

         Mean      Std. Deviation    N
HIGHST   83.3000   14.85523         10
LOWST    99.7000   27.57233         10
Multivariate Tests(c)

Effect                       Value    F           Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
STRESS  Pillai's Trace        .515    9.574(b)    1.000          9.000      .013   .515                 9.574               .786
        Wilks' Lambda         .485    9.574(b)    1.000          9.000      .013   .515                 9.574               .786
        Hotelling's Trace    1.064    9.574(b)    1.000          9.000      .013   .515                 9.574               .786
        Roy's Largest Root   1.064    9.574(b)    1.000          9.000      .013   .515                 9.574               .786

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept  Within Subjects Design: STRESS
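With only two within-subject levels, this repeated-measures test is equivalent to a paired t test, and F(1, 9) = t². A quick check on the Question 3 data (a sketch using only the standard library):

```python
import math

high = [76, 89, 86, 85, 62, 63, 85, 115, 87, 85]
low  = [91, 92, 127, 92, 75, 56, 82, 150, 118, 114]

# Paired t test on the difference scores
diffs = [l - h for l, h in zip(low, high)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
t = mean_d / math.sqrt(var_d / n)

# Squaring t reproduces the repeated-measures F(1, 9) reported above
print(round(t ** 2, 2))   # 9.57
```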
24-4 Minimal answer provided for students.
There is a significant difference in cognitive performance between individuals in the high-stress (M = 83.30, SD = 14.86) and low-stress (M = 99.70, SD = 27.57) conditions, F(1,8) = 10.50, p = .012, η2 = .57. There is also a significant interaction between stress and coping skills in their effect on cognitive performance, F(1,8) = 128.28, p < .001, η2 = .94. Note that to interpret this interaction, we would need to examine scatterplots and/or regressions for the relation between coping and cognitive performance for the high- and low-stress conditions. An example of this graph is shown here:
Multivariate Tests(c)

Effect                                 Value     F             Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
STRESS           Pillai's Trace         .568     10.503(b)     1.000          8.000      .012   .568                  10.503              .809
                 Wilks' Lambda          .432     10.503(b)     1.000          8.000      .012   .568                  10.503              .809
                 Hotelling's Trace     1.313     10.503(b)     1.000          8.000      .012   .568                  10.503              .809
                 Roy's Largest Root    1.313     10.503(b)     1.000          8.000      .012   .568                  10.503              .809
STRESS * COPING  Pillai's Trace         .941    128.281(b)     1.000          8.000      .000   .941                 128.281             1.000
                 Wilks' Lambda          .059    128.281(b)     1.000          8.000      .000   .941                 128.281             1.000
                 Hotelling's Trace    16.035    128.281(b)     1.000          8.000      .000   .941                 128.281             1.000
                 Roy's Largest Root   16.035    128.281(b)     1.000          8.000      .000   .941                 128.281             1.000

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept+COPING  Within Subjects Design: STRESS
[Scatterplot with fitted linear regression line: coping (x-axis, 10.00–60.00) vs. highst (y-axis, 60.00–110.00); highst = 74.15 + 0.34 * coping, R-Square = 0.16]
We can see from these graphs that there is a stronger relationship between coping and performance for individuals in the low-stress condition than for those in the high-stress condition.

There is also a significant relationship between coping and cognitive performance overall (F(1,8) = 7.26, p = .027, η2 = .48). From the graphs above, it is clear that as coping skills increase, so does performance on the cognitive task.
[Scatterplot with fitted linear regression line: coping (x-axis, 10.00–60.00) vs. lowst (y-axis, 60.00–140.00); lowst = 65.81 + 1.26 * coping, R-Square = 0.65]
Tests of Between-Subjects Effects
Measure: MEASURE_1  Transformed Variable: Average

Source      Type III Sum of Squares   df   Mean Square    F        Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Intercept   27418.728                 1    27418.728      55.328   .000   .874                 55.328              1.000
COPING       3599.488                 1     3599.488       7.263   .027   .476                  7.263               .657
Error        3964.512                 8      495.564

a. Computed using alpha = .05
24-5 No answer provided for students.
Note that these results are presented here in the order they are found in the SPSS output. This is probably not the order that you would use if you were writing the results up in a paper!
There is a significant difference in performance (both cognitive and dexterity) between individuals in the high-stress (M = 90.40) and low-stress (M = 98.85) conditions, F(1,8) = 18.64, p = .003.

There is also a significant interaction between stress and coping skills in their effect on performance (both cognitive and dexterity), F(1,8) = 50.69, p < .001. Note that to interpret this interaction, we would need to examine scatterplots and/or regressions for the relation between coping and overall performance for the high- and low-stress conditions. Our results would be similar to those found in the previous exercise: there is a stronger relationship between coping and performance for individuals in the low-stress condition than for those in the high-stress condition.
Within-Subjects Factors
Measure: MEASURE_1

STRESS   PERFORMA   Dependent Variable
1        1          HIGHST
         2          HIGHSTPH
2        1          LOWST
         2          LOWSTPH
Multivariate Tests(c)

Effect                                          Value    F            Hypothesis df  Error df   Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
STRESS                    Pillai's Trace         .700    18.640(b)    1.000          8.000      .003   .700                 18.640               .964
                          Wilks' Lambda          .300    18.640(b)    1.000          8.000      .003   .700                 18.640               .964
                          Hotelling's Trace     2.330    18.640(b)    1.000          8.000      .003   .700                 18.640               .964
                          Roy's Largest Root    2.330    18.640(b)    1.000          8.000      .003   .700                 18.640               .964
STRESS * COPING           Pillai's Trace         .864    50.693(b)    1.000          8.000      .000   .864                 50.693              1.000
                          Wilks' Lambda          .136    50.693(b)    1.000          8.000      .000   .864                 50.693              1.000
                          Hotelling's Trace     6.337    50.693(b)    1.000          8.000      .000   .864                 50.693              1.000
                          Roy's Largest Root    6.337    50.693(b)    1.000          8.000      .000   .864                 50.693              1.000
PERFORMA                  Pillai's Trace         .008      .061(b)    1.000          8.000      .811   .008                   .061               .056
                          Wilks' Lambda          .992      .061(b)    1.000          8.000      .811   .008                   .061               .056
                          Hotelling's Trace      .008      .061(b)    1.000          8.000      .811   .008                   .061               .056
                          Roy's Largest Root     .008      .061(b)    1.000          8.000      .811   .008                   .061               .056
PERFORMA * COPING         Pillai's Trace         .246     2.614(b)    1.000          8.000      .145   .246                  2.614               .297
                          Wilks' Lambda          .754     2.614(b)    1.000          8.000      .145   .246                  2.614               .297
                          Hotelling's Trace      .327     2.614(b)    1.000          8.000      .145   .246                  2.614               .297
                          Roy's Largest Root     .327     2.614(b)    1.000          8.000      .145   .246                  2.614               .297
STRESS * PERFORMA         Pillai's Trace         .536     9.231(b)    1.000          8.000      .016   .536                  9.231               .758
                          Wilks' Lambda          .464     9.231(b)    1.000          8.000      .016   .536                  9.231               .758
                          Hotelling's Trace     1.154     9.231(b)    1.000          8.000      .016   .536                  9.231               .758
                          Roy's Largest Root    1.154     9.231(b)    1.000          8.000      .016   .536                  9.231               .758
STRESS * PERFORMA         Pillai's Trace         .188     1.847(b)    1.000          8.000      .211   .188                  1.847               .224
  * COPING                Wilks' Lambda          .812     1.847(b)    1.000          8.000      .211   .188                  1.847               .224
                          Hotelling's Trace      .231     1.847(b)    1.000          8.000      .211   .188                  1.847               .224
                          Roy's Largest Root     .231     1.847(b)    1.000          8.000      .211   .188                  1.847               .224

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept+COPING  Within Subjects Design: STRESS+PERFORMA+STRESS*PERFORMA
There is no significant relationship between kind of performance (cognitive, dexterity) and level of performance, F(1,8) = .06, p > .05.
There is no significant interaction effect between kind of performance and coping ability on per-formance scores.
There is a significant interaction between the high- versus low-stress condition and kind of performance (cognitive, dexterity) on level of performance, F(1,8) = 9.23, p = .016. An examination of the means suggests that for those in the high-stress condition, cognitive performance was much lower (M = 83.30) than dexterity performance (M = 97.50). But for those in the low-stress condition, cognitive performance was about the same as (or perhaps slightly higher than; M = 99.70) dexterity performance (M = 98.00).
There is not a significant three-way interaction between stress level, kind of performance, and coping skills, F(1,8) = 1.85, p > .05.
There was, however, a significant relationship between coping skills and overall level of performance (across all stress conditions and across all kinds of performance), F(1,8) = 7.38, p = .026. If we were to examine a scatterplot and/or linear regression for the relationship between coping and overall level of performance (the average of the four performance measures we have for each subject), we would find that coping skills are positively related to overall performance, R2 = .48. Note that the R2 value of the regression is the same as the partial η2 value.
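The equivalence just noted can be checked numerically: with a single predictor, the partial η2 from the between-subjects table equals the R2 of the simple regression of averaged performance on coping, and both can be recovered from the F ratio as F / (F + df_error). A quick check using the reported values (a sketch; the helper name is ours):

```python
# With one predictor (hypothesis df = 1), partial eta-squared
# = F / (F + df_error), which equals the regression R^2.
def eta_sq_from_f(f, df_error):
    return f / (f + df_error)

# COPING effect on average performance: F(1, 8) = 7.384 -> about .48
print(round(eta_sq_from_f(7.384, 8), 2))   # 0.48
```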
Tests of Between-Subjects Effects
Measure: MEASURE_1  Transformed Variable: Average

Source      Type III Sum of Squares   df   Mean Square    F        Sig.   Partial Eta Squared  Noncent. Parameter  Observed Power(a)
Intercept   55587.889                 1    55587.889      45.208   .000   .850                 45.208              1.000
COPING       9079.736                 1     9079.736       7.384   .026   .480                  7.384               .664
Error        9836.889                 8     1229.611

a. Computed using alpha = .05
1. STRESS
Measure: MEASURE_1

                                 95% Confidence Interval
STRESS   Mean        Std. Error   Lower Bound   Upper Bound
1        90.400(a)   5.045        78.766        102.034
2        98.850(a)   6.276        84.377        113.323

a. Covariates appearing in the model are evaluated at the following values: COPING = 26.9000.
2. PERFORMA
Measure: MEASURE_1

                                   95% Confidence Interval
PERFORMA   Mean        Std. Error   Lower Bound   Upper Bound
1          91.500(a)   4.978        80.021        102.979
2          97.750(a)   6.228        83.388        112.112

a. Covariates appearing in the model are evaluated at the following values: COPING = 26.9000.
3. STRESS * PERFORMA
Measure: MEASURE_1

                                            95% Confidence Interval
STRESS   PERFORMA   Mean        Std. Error   Lower Bound   Upper Bound
1        1          83.300(a)   4.557        72.793        93.807
         2          97.500(a)   5.628        84.522        110.478
2        1          99.700(a)   5.452        87.128        112.272
         2          98.000(a)   7.528        80.640        115.360

a. Covariates appearing in the model are evaluated at the following values: COPING = 26.9000.