JOURNAL OF RESEARCH IN SCIENCE TEACHING VOL. 40, NO. 6, PP. 566–584 (2003)
Use of Learner-Centered Instruction in College Science and Mathematics Classrooms
Jeffrey J. Walczyk, Linda L. Ramsey
Department of Psychology and Behavioral Sciences, College of Education,
Louisiana Tech University, Ruston, Louisiana 71272
Received 12 June 2002; Accepted 10 February 2003
Abstract: Learner-centered approaches to science and mathematics instruction assume that only when
students are active participants will learning be deep, enduring, and enjoyable, and transfer to contexts
beyond the classroom. Although their beneficial effects are well known, the extent to which learner-
centered practices are used in college classrooms may be low. Surveys of undergraduate science and math
majors reveal general dissatisfaction with how courses in their majors are taught, and their number is half
what it was 2 decades ago. In response, federally funded systemic reform initiatives have targeted
increasing the use of learner-centered instruction in science and mathematics courses to improve
undergraduate education generally and the training of preservice teachers specifically. Few data exist
regarding how effective these initiatives have been or how frequently learner-centered instruction occurs
as assessed from faculty's perspective, which may not corroborate undergraduate perceptions. Accordingly,
a survey was developed to assess the use of learner-centered techniques and was administered to science
and math professors of Louisiana over the Internet. The return rate was 28%. Analyses reveal that
learner-centered techniques are used infrequently but, when used, are applied to all aspects of teaching.
Data also suggest that federal funding has been slightly effective in promoting their use. © 2003 Wiley
Periodicals, Inc. J Res Sci Teach 40: 566–584, 2003
This article presents data on the use of learner-centered instruction in undergraduate science
and mathematics classrooms. To create a context for the research, constructivistic views of student
learning are reviewed. Teaching practices that foster student construction of knowledge are then
discussed. Research that documents wide-scale dissatisfaction among undergraduates over the
quality of science and mathematics instruction they receive is then summarized. The data we
collected are reported thereafter and contribute to the literature on undergraduate teaching in a few
ways. It may be the first broad exploration of the extent to which science and math faculty teaching
undergraduate courses use learner-centered planning, delivery, and assessment from instructors'
points of view. Moreover, it is one of only a few that provide program evaluation on the
effectiveness of money spent by the National Science Foundation (NSF) to promote learner-
centered instruction.

Contract grant sponsor: NSF; Contract grant number: 9255761 (Louisiana Collaborative for Excellence in the
Preparation of Teachers); Contract grant sponsor: Louisiana Education Quality Support Fund.
Correspondence to: J.J. Walczyk; E-mail: [email protected]
DOI 10.1002/tea.10098
Published online in Wiley InterScience (www.interscience.wiley.com).
Learner Construction of Knowledge
Constructivism has been an influential movement in education and psychology over the past
few decades [National Research Council (NRC), 1999]. Originating partly from Vygotsky’s
(1981) theory of cognitive development and from the writings of Dewey (1910), it concerns how
students make sense of new experiences with their current knowledge. Constructivism
acknowledges the active roles students must play in their learning if it is to occur deeply, endure,
be enjoyable, and transfer to contexts beyond the classroom (NRC, 1999). Six principles capture
how learning is conceptualized from this perspective: (a) Students must perceive that the material
to be learned is important. (b) Students must act on the information in some way at a deep level. (c)
It is crucial that they relate new material to information they already know. (d) Students must
continually check and update their understandings based on new experiences. (e) New learning
does not automatically transfer to new contexts to which it is relevant. (f) Finally, students become
autonomous learners if they become aware of the process of learning itself, including strategies for
consolidating new material and for checking their understanding (Uno, 1999).
Learner-Centered Instruction
As opposed to traditional college instruction involving lectures punctuated by objective tests,
instruction from a learner-centered perspective is the facilitation of student construction of
knowledge according to the six principles above. First and foremost, it takes account of students’
interests, experiences, background knowledge, developmental level, and aptitude [American
Psychological Association (APA), 1997]. Learner-centered instruction requires that
teachers are aware that learners construct their own meanings, beginning with the beliefs,
understandings, and cultural practices they bring to the classroom . . . . Accomplished
teachers give learners reason by respecting and understanding learners’ prior experiences
and understandings, assuming that these can serve as a foundation on which to build
bridges to new understandings. (NRC, 1999, p. 124)
Chickering and Gamson (1999) made seven recommendations for teaching undergraduates
from a learner-centered perspective.
1. Frequent student–faculty interaction should occur.
2. Cooperative learning activities should be interspersed among other engaging
instructional formats.
3. Students should be actively involved with learning.
4. Instructors should provide prompt, constructive feedback on student performance.
5. Instructors must keep students focused on learning, not on the fear of embarrassment or
other distractions.
6. Teachers should communicate high expectations.
7. Finally, teachers must respect diverse talents and ways of learning.
These recommendations and those that follow are also endorsed by the APA (1997).
Learner-centered teaching requires more work of instructors than traditional lecture–
recitation–evaluation in planning for, delivering, and assessing instruction (APA, 1997; Gagne,
Yekovich, & Yekovich, 1993; Uno, 1999). Planning must include writing clear, cognitive learning
objectives, some of which require critical thinking, designing learning activities to engage
students, and preparing authentic ways of assessing achievement. Without adequate planning,
classrooms can revert to lecture–recitation (NRC, 1999; Uno, 1999). The delivery of instruction is
the implementation of these well-laid plans. Assessment of instruction from a learner-centered
perspective, though permitting some objective testing, also includes activities resembling how
class content will be applied in the real world such as creating scientific projects, writing literature
reviews, proving mathematical theorems, conducting scientific experiments (APA, 1997; Gagne
et al., 1993; Uno, 1999), and writing about demonstrations (Deese, Ramsey, Walczyk, & Eddy,
2000).
In addition to cognitive elements, learner-centered teaching includes affective ones such as
having enthusiasm for course content and communicating concern for student learning. It also
means fostering intrinsic motivation by emphasizing conceptual understanding and its application
over rote learning (APA, 1997). This can be accomplished by leading discussions, citing
examples, providing demonstrations, and using technology and other engaging activities.
Learner-centered instructors also help students to see the relations among concepts through the use
of concept maps, matrices, outlines, or similar techniques (Springer, Stanne, & Donovan, 1999;
Seymour & Hewitt, 1997; NRC, 1999; McKeachie, Gibbs, Laurillard, Van Note Chism, Menges,
Svinicki, & Weinstein, 1999; Uno, 1999).
Undergraduate Dissatisfaction: Reasons for It and What Is Being Done About It
The number of students majoring in science and mathematics in the United States has ebbed in
the past few decades, dropping by half (Kardash & Wallace, 2001; NSF, 1996; Seymour & Hewitt,
1997; Strenta, Elliott, Adair, Matier, & Scott, 1994). Although distal causes of this dramatic
decline begin before students enter college (Pearson & Fechter, 1994; Powell, 1990), one proximal
cause frequently suggested is the quality of instruction majors receive, particularly in the first few
courses. Students often complain that instruction is primarily lecture, is boring, and is hard to
relate to, a problem particularly acute for women and minorities (Rayman & Brett, 1995;
Seymour, 1995; Seymour & Hewitt, 1997). Kardash and Wallace (2001) reported many student-
cited examples of non–learner-centered instruction in science classes. Topping the list are unclear
course goals, poor organization, and inconsistency across materials, homework, and evaluation.
Students also complain of grading that is not reflective of achievement, an emphasis on
competition over cooperative learning, a focus on memorization over understanding, a lack of
linkage among concepts, too few examples and demonstrations, little class interaction, and faculty
indifference.
Unlike certified kindergarten through Grade 12 (K–12) science teachers who learn how to
plan for, deliver, and assess instruction in colleges of education, science and math professors
usually have no formal training in pedagogy or in learner-centered techniques. Faculty
often do not have much incentive to obtain it (McKeachie et al., 1999; NSF, 1996). Teaching is
frequently viewed as an encumbrance on research time, particularly at large research universities.
In the latter case, undergraduate courses are often assigned to graduate assistants (McKeachie
et al., 1999; Seymour & Hewitt, 1997).
The poor performance of American high school seniors on standardized tests of science and
math achievement compared with those of other industrialized countries, and the ebb in science
and math majors have caused federal legislators concerned about America’s scientific future to
institute attempts to reverse these trends. The NSF has led these systemic reform efforts by funding
projects designed to promote the use of learner-centered instruction in the sciences and in the
allied discipline of mathematics at the elementary, secondary, and undergraduate levels (NSF,
Promoting its use in undergraduate classrooms is deemed to be a cost-effective
approach to reform. Not only should the training of undergraduate science and math majors
improve, so, too, should the education received by preservice science and math teachers who, it is
hoped, will teach using learner-centered techniques that improve the achievement of K–12
students (NSF, 1996; Powell, 1990; Seymour & Hewitt, 1997). The Louisiana Collaborative for
Excellence in the Preparation of Teachers (LaCEPT) is one such program.
LaCEPT
A systemic reform initiative funded by the NSF, LaCEPT began in 1993. Its proximal goal was
to foster the use of learner-centered instruction in undergraduate science and math courses of
Louisiana and thereby improve the science and math training of K–12 preservice teachers. A
distal goal was to improve the science and math instruction that public school students receive
from better-prepared teachers. Louisiana was in the first group of such projects to be funded
nationally. An initial 5-year grant was renewed, with funding ending in December 2002. The dispersal
of NSF funds described below is similar to the arrangements in other states. Louisiana's governing
board for higher education, the Board of Regents, served as the fiscal agent for LaCEPT. Funding
was awarded to individual campuses through a competitive review process in which they
submitted proposals. All were reviewed by an external panel of experts who recommended some
for funding and provided feedback for strengthening others. Initially, campuses applied each year
for funding. Later they were awarded grants that ran from 2 to 3 years.
LaCEPT used a variety of strategies to reform undergraduate education. Grant requirements
mandated that a campus demonstrate collaboration between the college of education and the
colleges in which discipline-specific courses were housed (e.g., biology, chemistry). Initially
through local, intercampus, and statewide workshops, the projects focused on raising science
and math faculty's awareness of changes occurring in K–12 education and of what research
revealed about how students learn. These workshops brought faculty and
administrators together to discuss what was occurring at their campuses and explore models of
what professionals from across the nation were doing. At the same time, faculty began the process
of developing new science and math courses for preservice teachers or revising current courses
taken by preservice teachers to make them more learner-centered.
LaCEPT sought to increase the numbers of faculty who were actively involved in these
projects through a variety of means: (a) Locally operated minigrants for science and math faculty
supported their travel to conferences where learner-centered reform issues were being discussed;
(b) regional workshops on learner-centered instructional strategies were hosted at individual
campuses; (c) LaCEPT faculty internships enabled interested faculty to observe successful
learner-centered classrooms or participate in professional development projects conducted by
expert teachers; (d) LaCEPT NSF Teaching Fellowships were awarded to selected undergraduate
and graduate students preparing to be K–12 teachers, allowing them to participate in a rich
program of activities with K–12 teachers, students, and schools; and (e) projects for assessing the
effectiveness of the systemic reform process, such as this one, were also funded.
The Current Study
We know of no large-scale survey of the extent to which learner-centered planning,
delivery, and assessment occur in undergraduate science and math courses according to the
faculty teaching these courses. Moreover, studies are needed to evaluate the effectiveness of
the NSF’s systemic reform efforts. Accordingly, this project sought to advance three aims: (a)
Data collected would provide a benchmark against which the accuracy of student perceptions of
the quality of science and math instruction could be compared; (b) findings would provide
program evaluation on the effectiveness of LaCEPT; and (c) by exposing deficits in the use of
learner-centered instruction, findings would suggest where future federal dollars need to be
targeted.
Unable to find a suitable existing instrument, we constructed a survey for evaluating the use of
learner-centered planning, delivery, and assessment, three crucial facets that must be addressed
when teaching from a learner-centered perspective (APA, 1997; Gagne et al., 1993; Uno, 1999).
The survey was administered to all science and math faculty at 4-year colleges and universities of
Louisiana for whom e-mail addresses were available. All participants, some of whom taught
methods classes to preservice teachers, were members of science or math departments, not
colleges of education. Consequently, findings based on these data were expected to generalize
only to science and math professors. The analysis plan called for the use of Common Factor
Analysis (CFA) to uncover latent variables underlying the Likert items of the survey probing the
frequency of use of learner-centered teaching practices (Snook & Gorsuch, 1989). CFA simplifies
a dataset by combining highly intercorrelated items (i.e., items that may be measuring the same
construct) into latent variables; each observation's standing on a latent variable, its factor score,
then serves as a variable in subsequent analyses. Because items deal
with theoretically and practically distinct aspects of instruction (Gagne et al., 1993; Mager, 1962;
McKeachie et al., 1999), three factor analyses were conducted: one on the 6 items tapping
planning for instruction, another on the 15 items tapping delivery of instruction, and a third on the
7 items tapping assessment of instruction.
If college science and math instruction resembles that of high school, the use of learner-
centered planning should correlate positively with the use of learner-centered delivery and the use
of learner-centered assessment (Emmer, Evertson, Sanford, Clements, & Worsham, 1984).
Moreover, smaller class sizes, with their lower student–teacher ratios, and upper-division classes,
with their emphasis on advanced content, should support and call for the use of learner-centered
instruction, respectively (Emmer et al., 1984; Springer et al., 1999). Based on these observations,
the following hypotheses were tested.
Hypotheses
1. Instructional practices related to planning for learner-centered instruction will load on
a common factor, as will practices related to learner-centered delivery of instruction,
and as will practices related to learner-centered assessment.
2. Participation in LaCEPT workshops will be associated with more learner-centered
planning, learner-centered delivery, and learner-centered assessment.
3. Instructors who plan for learner-centered instruction will follow through by delivering
more learner-centered instruction and will use more learner-centered assessment.
4. As effective models for future K–12 teachers, faculty teaching science and math
methods classes to preservice teachers will report more learner-centered planning,
learner-centered delivery, and more learner-centered assessment than faculty teaching
nonmethods science and math classes.
5. Class size will correlate negatively with learner-centered planning, negatively with
learner-centered delivery, and negatively with learner-centered assessment.
6. Instructors of upper-division classes will report more learner-centered planning, more
learner-centered delivery, and more learner-centered assessment than instructors of
lower-division classes.
Methods
Participants
Target Population. The population of interest consisted of all full-time science (biology,
chemistry, earth science, and physics) and math faculty at 4-year colleges and universities of
Louisiana (e.g., LSU, Louisiana Tech, Tulane University, University of New Orleans). A complete
list appears in the Appendix under Item 1. Before surveys could be sent out over the Internet, a list
of instructor e-mail addresses was compiled in two ways. Most institutions had departmental
Web pages online; when a department's page was unavailable, research assistants phoned the
department and requested a list of faculty names and e-mail addresses. In all, 825 e-mail
addresses were compiled and entered into a Microsoft Excel file.
Sample. Of the 825 surveys sent out over the Internet, 230 were eventually returned. This 28%
response rate is higher than is typical in survey research involving traditional mailings (Churchill,
1979). The sample’s racial composition was 88.7% White, 4.3% Asian, 1.3% African American,
.4% Latino American, and 5.2% other; 74.8% were male.
The sample’s distribution by discipline was: Biology, 64 (27.8%); Chemistry, 40 (17.4%);
Earth Science, 17 (7.4%); Mathematics, 61 (26.5%); Math Methods, 8 (3.5%); Physics, 3 (1.3%);
Science Methods, 28 (12.2%); and Other, 9 (3.9%). The distribution of academic rank was: 15.2%
instructor, 19.6% assistant professor, 26.5% associate professor, 36.5% full professor, and 2.1%
other; 41% of respondents had participated in one or more LaCEPT workshops.
Survey of Instructional and Assessment Strategies (SIAS)
Survey Development. A 51-item survey, the SIAS, was written to elicit from college science
and math faculty data on instructional practices for an undergraduate course they teach often.
Items were written based on their relevance to learner-centered planning, delivery, and assessment
as suggested from several sources (see below).
To establish the content validity of the SIAS, early drafts were reviewed by eight experienced
science and math faculty at Louisiana Tech University. Copies were also sent out to three
professional program evaluators who had worked on several NSF education grants. Feedback
involved clarifying jargon, disambiguating items, eradicating redundancy, and adding items to
ensure a broad sampling of learner-centered instruction. The final draft, which appears in the
Appendix, was then programmed into a standard Web-based HTML form.
Survey Items. Items 1–12 elicit data on demographics, teaching responsibilities, and general
information about the course on which the respondent chose to focus. The survey was
programmed such that all items could be left blank. Items 1 and 3–11 permitted only one response
each because response categories are mutually exclusive.
For the next 28 items, a 5-point Likert scale was used to assess the frequency with which
specific instructional practices occur. Each was reverse-coded (1 = "Always," 5 = "Never") to
minimize response set bias (Thorndike, 1997). The first six assess planning for instruction. Items 1
and 2 concern learning objectives, the use of which clarifies instructor expectations (Emmer et al.,
1984; Mager, 1962). Items 3 and 4 index whether the instructor plans for a variety of formats for
instruction and assessment, crucial for engaging students. Finally, Items 5 and 6 reflect the extent
to which instructors update notes and teaching practices, crucial to providing the latest advances in
science and math (Weimer, Parrett, & Kerns, 1988).
Whereas the effectiveness of some instructional practices depends on individual factors such
as personality, the 15 items under the topic of delivery of instruction are generally effective
(Emmer et al., 1984). Items 1–3 regard assessing what students know before presenting new
material, crucial for working within students’ zone of proximal development (Gagne et al., 1993;
Vygotsky, 1981). Items 4–6 pertain to concept mapping, a powerful technique that helps
students grasp the broader structure of interrelated ideas (Ruiz-Primo &
Shavelson, 1996). Item 7 concerns manipulatives, a constructivistic teaching aid for science and
math faculty (Valencia, Hiebert, & Afflerbach, 1994). Item 8 addresses cooperative learning, a
technique that can foster critical thinking (Springer et al., 1999). Item 9 taps traditional lecture and
should load negatively on a factor measuring learner-centered instruction. The use of technology
in the classroom (Items 10 and 12) can support a variety of demonstrations and interactive
experiences (McKeachie et al., 1999). Humor (Item 11) in the classroom is an affective device for
engaging students (Kardash & Wallace, 2001). Finally, by occasionally working independently (Item
13), especially by writing (Item 14) or discussing concepts with peers (Item 15), students elaborate
on recently acquired knowledge (NRC, 1999).
The next seven practices capture learner-centered assessment; that is, they require students to
actively demonstrate what they know. A variety of measurement techniques are needed (Item 7)
(Valencia et al., 1994). Most are learner-generated: for instance, written reports, projects,
presentations, and experiments (Items 2, 4, and 6) (Valencia et al., 1994). Tying test items to
learning objectives constitutes good instructional practice (Item 1) (Mager, 1962). Because it
involves traditional assessment, drawing questions from a test bank (Item 3) was expected to load
negatively on a learner-centered assessment factor. Although 28 items do not tap all aspects of
learner-centered instruction, they do capture many of the most important (Weimer et al., 1988).
The last six items probe nonmutually exclusive sources from which faculty have acquired
information about new teaching methods.
Procedure
The survey was sent out over the Internet and sent again 1 week later during the regular
academic year. To maximize response rate, participants were informed that results would be
kept strictly confidential and a letter commending their participation would be sent to their
administrators at their request. By 1 month after the second e-mailing, no more responses were
received.
Results
Data Entry
The Web-based HTML form returned to the investigators an ASCII file containing responses to each
item. Hard copies were printed out. Research assistants then manually entered data into SPSS for
Windows (1998). Missing data amounted to less than 10% for each item.
Analyses
Internal Consistency of the SIAS. Each of the planning, delivery, and assessment Likert
items of the SIAS measured a discrete behavior. Therefore, it was not possible to determine
interitem reliability of the measurement of specific behaviors. However, the internal consistencies
of the 6 planning, 15 delivery, and 7 assessment items were determined using Cronbach's α
(Anastasi & Urbina, 1997): for the planning items, α = .67; for the delivery items, α = .56; and for
the assessment items, α = .71. Internal consistencies of these magnitudes are adequate for survey
items that are hypothesized to tap common latent variables such as learner-centered planning
(Thorndike, 1997).
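For readers who wish to verify this step, the coefficient is simple to compute. The sketch below is ours, not the authors' code; it implements the standard formula in Python with NumPy, and the `responses` array is a hypothetical stand-in for the actual survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical check: 230 respondents x 6 planning items on the 1-5 scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(230, 6)).astype(float)
print(round(cronbach_alpha(responses), 2))
```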
Data Reduction: Hypothesis 1. Because planning, delivery, and assessment are theoretically
and practically distinct (Mager, 1962; Gagne et al., 1993), it was not appropriate to analyze the 28
Likert items in a single factor analysis. To test Hypothesis 1, three common factor analyses (CFA)
were performed on the planning, delivery, and assessment items, respectively. When fewer than
50 items are analyzed, CFA is superior to a principal components solution because it examines
only the reliable variance among items, thereby adjusting for measurement error (Snook &
Gorsuch, 1989). Consistent with convention (Hair, Anderson, Tatham, & Black, 1995), scores of
factors with eigenvalues > 1 were retained as variables and were calculated using the regression
method. Variables possessing loadings with an absolute value of .4 or more were interpreted.
Means, standard deviations (SDs), and factor loadings for the six planning items appear
in Table 1. Two factors were retained. Interpreted loadings (absolute value ≥ .40) are marked with
an asterisk in this and the other tables. The percentage of variance of the correlation matrix
accounted for by each factor also
appears. The four items loading on PLANF1 suggest planning for learner-centered instruction,
including the use of learning objectives, a variety of teaching methods, and multiple assessment
strategies. The two items loading on PLANF2 suggest traditional planning through the use of
learning objectives only (Mager, 1962).
Table 1
Factor loadings for planning for instruction

Survey Item | M | SD | Plan for Learner-Centered Instruction (PLANF1) | Use Learning Objectives (PLANF2)
1. How often are learning objectives used? | 3.1 | 1.5 | .56* | .63*
2. How often are course objectives on syllabus? | 2.5 | 1.7 | .39 | .43*
3. Plan for a variety of teaching methods | 2.1 | 0.9 | .68* | -.34
4. Incorporate a variety of assessment strategies | 2.6 | 1.2 | .61* | -.27
5. Revise notes, etc. in light of new research | 2.1 | 1.0 | .32 | -.11
6. Revise instruction/assessment in light of new research | 3.1 | 1.1 | .63* | -.13
% variance accounted for by factor | | | 30.29 | 13.31

Note. Lower means imply more frequent use. *Interpreted loading (absolute value ≥ .40).

Means, SDs, and factor loadings for the 15 delivery items are provided in Table 2; four factors were
retained. The 12 items loading on DELIVF1 indicate a learner-centered approach to delivering
instruction by which students create their own knowledge. The predicted negative loading on Item
9 (lecture) was observed. The two items loading positively on DELIVF2 relate to students writing
in class and working in small groups. The largest loading on DELIVF3 relates to the use of
technology in class. With only one interpreted loading, DELIVF4 is difficult to interpret, accounts
for little variance, and is not analyzed further. Based on the Table 2 means, the ranking of
instructional practices from most to least common is: 1, lecture; 2–3 (tied), humor and using
feedback to revise teaching; 4–5 (tied), instructor uses technology and students use technology;
6, beginning class with an engaging activity; 7–8 (tied), students write and students discuss
concepts; 9–12 (tied), assessing prior knowledge, using manipulatives, working in small groups,
and students working independently; 13, concept mapping during instruction; 14, concept maps
for assessment; and 15, students map concepts. Clearly lecturing is quite common; concept
mapping is rare.

Table 2
Factor loadings for delivery of instruction

Survey Item | M | SD | Learner-Centered Instruction (DELIVF1) | Use of Small Groups (DELIVF2) | Use of Technology (DELIVF3) | DELIVF4
1. Use feedback to revise teaching | 2.2 | 0.9 | .36 | -.04 | .04 | .23
2. Assess prior knowledge before instruction | 3.3 | 1.2 | .46* | -.11 | -.02 | .41*
3. Begin class with an engaging activity | 2.7 | 0.9 | .48* | -.11 | -.06 | .39
4. Use concept maps during instruction | 4.3 | 1.1 | .68* | -.55* | -.02 | -.11
5. Use concept maps for assessment | 4.6 | 0.8 | .64* | -.59* | -.07 | -.15
6. Students map concepts | 4.6 | 0.7 | .66* | -.49* | -.11 | -.16
7. Students use manipulatives | 3.3 | 1.3 | .45* | .35 | -.22 | -.13
8. Work in small groups | 3.3 | 1.4 | .53* | .49* | -.33 | -.20
9. Mostly lecture | 2.0 | 0.8 | -.40* | -.27 | .41* | .02
10. Instructor uses technology | 2.4 | 1.0 | .49* | .21 | .60* | -.11
11. Humor is used | 2.2 | 0.9 | .22 | .08 | .07 | .19
12. Students use technology | 2.4 | 0.9 | .43* | .15 | .70* | -.08
13. Students work independently | 3.3 | 1.2 | .38 | .30 | -.08 | -.32
14. Students write in class | 3.1 | 1.2 | .45* | .46* | .10 | .13
15. Students discuss concepts | 3.1 | 1.1 | .50* | .33 | -.11 | .20
% variance accounted for by factor | | | 24.08 | 12.32 | 8.10 | 4.76

Note. Lower means imply more frequent use. *Interpreted loading (absolute value ≥ .40).
Table 3 reports means, SDs, and factor loadings for the seven assessment items. Two factors
were retained. Four items load on ASSESSF1, suggesting learner-centered assessment, including
authentic assessment (Valencia et al., 1994). The largest loading on ASSESSF2 concerns the use
of peer evaluations. Thus, consistent with Hypothesis 1, practices related to learner-centered
planning load on a common factor. Those related to learner-centered delivery load on a common
factor. Finally, practices related to learner-centered assessment load on a common factor.

Table 3
Factor loadings for assessment of instruction

Survey Item | M | SD | Constructed Responses (ASSESSF1) | Peer Evaluations (ASSESSF2)
1. Assessment tied to learning objectives | 2.2 | 1.2 | .38 | .05
2. I use essay or short answer | 2.6 | 1.4 | .72* | -.41*
3. I use test bank items | 4.1 | 1.2 | -.33 | .13
4. Test items assess higher-order thinking | 2.1 | 0.9 | .40* | -.17
5. Peer evaluations used | 4.1 | 1.0 | .54* | .42*
6. Informal assessment used | 3.7 | 1.1 | .44* | .34
7. Variety of assessments are used | 2.4 | 1.3 | .57* | .01
% variance accounted for by factor | | | 25.12 | 7.44

Note. Lower means imply more frequent use. *Interpreted loading (absolute value ≥ .40).
Hypothesis 2. To determine whether participation in NSF-funded programs promotes
learner-centered instruction, biserial correlations were calculated between participation in
LaCEPT (dummy coded 1 or 0) and all factor scores, with particular interest in those related to
planning for learner-centered instruction, PLANF1, delivering learner-centered instruction,
DELIVF1, and the use of learner-centered assessment, ASSESSF1. Preferable to t tests,
correlations more clearly reveal the strength and direction of a relationship between two variables.
Because lower Likert ratings correspond to more frequent use of a practice, negative correlations
with factor scores reveal a positive association between the constructs; the correlations are reported
in Table 4. The data support the effectiveness of LaCEPT. In particular, LaCEPT participants were
more likely to plan for learner-centered instruction, deliver learner-centered instruction, form small
groups, and apply technology. Except for the use of learner-centered assessment, Hypothesis 2
was supported, albeit the correlations are small.

Table 4
Biserial correlations between participation in LaCEPT and factor scores

Factor Score | r | p Value | N
PLANF1—Plan for learner-centered instruction | -0.17* | .01 | 223
PLANF2—Use of learning objectives | 0.13 | .06 | 223
DELIVF1—Learner-centered instruction | -0.17* | .01 | 214
DELIVF2—Use of small groups | -0.16* | .02 | 214
DELIVF3—Use of technology | -0.16* | .02 | 214
ASSESSF1—Constructed responses | -0.12 | .08 | 210
ASSESSF2—Variety of assessments are used | -0.09 | .17 | 210

Note. Negative correlations indicate a positive association. *p < .05.
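This correlational test can be sketched as follows. The sketch is illustrative only: the arrays are hypothetical stand-ins for the survey data, and with a 0/1 dummy code the biserial statistic reported above corresponds to the point-biserial coefficient that scipy exposes directly.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-ins: a 0/1 dummy code for LaCEPT participation
# and PLANF1 factor scores for the same 223 faculty.
rng = np.random.default_rng(1)
lacept = rng.integers(0, 2, size=223)
planf1 = rng.normal(size=223)

# With a dichotomous variable, Pearson's r reduces to the point-biserial
# coefficient.
r, p = stats.pointbiserialr(lacept, planf1)

# Because lower Likert ratings mean MORE frequent use, a negative r here
# signals that LaCEPT participants use the practice more often.
print(f"r = {r:.2f}, p = {p:.3f}")
```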
Hypothesis 3. Is planning for learner-centered instruction associated with the delivery of
learner-centered instruction and the use of learner-centered assessment? PLANF1 was correlated
with DELIVF1 and with ASSESSF1. Instructors who plan for learner-centered instruction are
more likely to deliver learner-centered instruction, r = .66, p < .01. They are also more likely to
include learner-centered assessment, r = .54, p < .01. Learner-centered planning factor scores
(PLANF1) were not correlated with any other delivery or assessment factors. Hypothesis 3
received strong support.
Hypothesis 4. Hypothesis 4 asserts that those teaching science and math methods classes will
report more learner-centered planning, learner-centered delivery, and learner-centered assessment
than will nonmethods teachers. To test this, three one-way analyses of variance (ANOVAs) were
computed. The independent variable, type of class, had six levels: biology, chemistry, physics,
earth science, math, and methods classes. The dependent variables were PLANF1, DELIVF1, and
ASSESSF1. F ratios were F(6, 208) = .487, MSe = 1.0, p = .82; F(6, 205) = 1.67, MSe = .98,
p = .13; and F(6, 201) = .82, MSe = .99, p = .55, respectively. Because no significant main effects
were found, no follow-up comparisons were warranted. Hypothesis 4 was not supported.
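A minimal sketch of one such test follows (ours, with hypothetical data; the article ran one ANOVA per dependent variable, and the group scores here are random placeholders for the survey's factor scores).

```python
import numpy as np
from scipy import stats

# Hypothetical PLANF1 factor scores grouped by type of class; the real
# analysis would pull each group's scores from the survey dataset.
rng = np.random.default_rng(2)
groups = {kind: rng.normal(size=30)
          for kind in ("biology", "chemistry", "physics",
                       "earth science", "math", "methods")}

# One-way ANOVA: do mean factor scores differ across class types?
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")

# As in the article, a nonsignificant result means no follow-up
# pairwise comparisons are warranted.
```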
Hypothesis 5. Do larger classes discourage planning for learner-centered instruction,
delivering learner-centered instruction, or learner-centered assessment? Item 10 of the SIAS
provides a discrete variable of four class sizes: <25 (19.7%), 26–45 (43.9%), 46–100 (25.9%),
and >100 (10.4%). Percentages of cases falling into each category appear parenthetically. For this
analysis, a new ordinal variable was created that recoded class size as a 1, 2, 3, or 4, respectively.
PLANF1, planning for learner-centered instruction, and class size were uncorrelated, r = .05,
p > .05. DELIVF1, delivering learner-centered instruction, correlated significantly with class
size, r = .16, p < .01. Because larger scores on DELIVF1 signify less learner-centered instruction,
larger classes use less learner-centered instruction, as predicted. Larger classes also use less
learner-centered assessment, ASSESSF2, r = .20, p < .01. There is no correlation between class
size and the use of technology, DELIVF3, r = .12, p > .05. Importantly, smaller classes are
associated with greater use of small groups (DELIVF2), r = .26, p < .01. No other correlations
between factor scores and class size were significant. Hypothesis 5 was largely supported.
Hypothesis 6. Do instructors of upper-division science and mathematics classes plan more
for learner-centered instruction, deliver more learner-centered instruction, and use more learner-
centered assessment? Item 9 of the SIAS provided for two class levels: introductory, 100–200
(72.6%), and advanced, 300–400 (27.4%). The percentage of cases of each appears
parenthetically. A dummy variable was created, coded 0 or 1, respectively. The biserial correlation
between class level and PLANF1, planning for learner-centered instruction, was -.03, p > .05.
The correlation between class level and DELIVF1, delivering learner-centered instruction, was
.08, p > .05. A negative correlation between class level and ASSESSF1 reveals that instructors of
upper-division classes use more learner-centered assessment, r = -.29, p < .05. No other
correlation between class level and factor scores was significant. Hypothesis 6 received only slight
support.
Table 5 reports the sources from which science and math faculty obtain information about
pedagogy. The most common is discussion with colleagues. The second most
common is from books or educational articles. Workshops such as those sponsored by NSF
provided information to about 40% of our sample.
Table 5
Sources from which faculty acquire new information about instruction

Source | % Sample Using This Source
1. Discussion with colleagues | 82.6%
2. Privately have read about educational issues and methods | 54.3%
3. Attend workshops and seminars away from my campus | 38.7%
4. Attended presentations on education at meetings of professional societies | 37.8%
5. Attended workshops and seminars at my campus | 31.3%
6. Observed class taught by colleague | 28.7%
Discussion
In this research data were collected from science and math faculty of Louisiana that advanced
three aims: (a) A benchmark against which the accuracy of student perceptions on the quality of
undergraduate instruction could be compared was obtained by surveying the faculty themselves,
(b) the results are a program evaluation on the effectiveness of NSF dollars spent thus far to reform
undergraduate science and math education, and (c) deficits in the use of learner-centered
instruction were identified, indicating where future federal dollars aimed at increasing its use
should be targeted. In addition, hypotheses concerning how facets of learner-centered instruction
interrelate, and concerning factors associated with its use, were tested. Results are discussed below.
Hypothesis 1
Do practices theoretically related to learner-centered planning load on a common factor? Do
those related to learner-centered delivery and learner-centered assessment do so as well? These
questions were answered affirmatively. Based on CFA, faculty who plan to use a variety of
teaching methods are more likely to revise techniques, incorporate a variety of assessments, and so
forth, in light of new research. Instructors who have students break into small groups also tend to
have them work with manipulatives, write in class, map concepts, incorporate technology, and
lecture less. Regarding assessment, instructors who permit peer evaluations also have students
write essays, exhibit critical thinking, and use other informal assessments.
Likert means from Table 1 reveal that planning for learner-centered instruction occurs
moderately often, including planning for a variety of teaching methods, incorporating a range of
assessment techniques, and revising notes and assessment in light of new research. Regarding
delivery of instruction, learner-centered teaching practices that occur most frequently (mean < 3)
are using feedback to revise teaching, beginning class with an engaging activity, using technology,
and incorporating humor. Despite their value (McKeachie et al., 1999; Ruiz-Primo & Shavelson,
1996), mapping concepts, using manipulatives, and working in small groups are used infrequently.
Sadly, informal assessments, including peer evaluation, are rarely used as well, despite their
relevance to science (Valencia et al., 1994). In summary, student perceptions that learner-centered
instruction is infrequently used (Kardash & Wallace, 2001) are confirmed by these data. Lecture
still dominates in undergraduate classrooms. Moreover, we suspect that the self-selected faculty
who responded to this survey are more conscientious about their educational duties than those who
did not and may be more learner-centered in their teaching practices as well. In other words,
traditional lecture–recitation–evaluation may be more common than is suggested here.
Hypothesis 2
Is participation in LaCEPT associated with learner-centered planning, learner-centered
delivery, and learner-centered assessment? Hypothesis 2 was partially confirmed. Faculty who
participated in these workshops on pedagogy were slightly more likely to plan for learner-centered
instruction and then deliver it. Moreover, they were slightly more likely to use small groups and
technology. It should be noted, however, that a sample size of 230 provides a statistical test with
ample power. Consequently, the slight correlations of Table 4 were statistically significant.
Participation in LaCEPT only accounted for about 3% of the variance in each of the learner-
centered factors, however. Moreover, correlation is not causation. Rather than causing an increase
in learner-centered instruction, it could be argued that faculty who are dedicated enough to
participate in LaCEPT are more likely to use learner-centered practices in any case. One statistic
that, combined with the correlations of Table 4, gives us confidence that LaCEPT made a
difference is that the roughly 40% of respondents who reported receiving useful pedagogical
information from workshops closely matches the 41% who
participated in LaCEPT. In defense of LaCEPT, the interventions it provided were sporadic, not
sustained. Sustained interventions that target increasing learner-centered instruction might have
produced larger effect sizes.
Hypothesis 3
Do instructors who plan for learner-centered instruction follow through by delivering it and
assess in a learner-centered way? Hypothesis 3 was strongly confirmed. As with primary and
secondary teachers (Emmer et al., 1984; Gagne et al., 1993), college faculty who plan for learner-
centered instruction are more likely to use learner-centered methods in the classroom and evaluate
in authentic ways. This suggests a commitment by these faculty to all facets of learner-centered
instruction, although they are clearly in the minority. These data further suggest that federal dollars
are still needed to increase the frequency of all facets of its use in undergraduate science and math
classrooms. To effect far-reaching change, researchers are encouraged to identify the incentives
and supports available to faculty at their institutions for improving teaching. We suspect that only
when institutions of higher learning make it in faculty's professional self-interest to do so will
learner-centered instruction become ubiquitous and enduring.
Hypothesis 4
Do faculty who teach science and math methods classes use more learner-centered planning,
delivery, and assessment than nonmethods faculty? Alarmingly, analyses reveal no differences
across disciplines in the use of learner-centered instruction. It is disconcerting that faculty
teaching science and math methods classes are not more likely to model such instruction to their
students, many of whom will go on to teach science and math in K–12 schools. However, as noted
in the introduction, all those surveyed were housed in science or math departments, not colleges of
education. Consequently, they may not have been trained as professional teachers. The NSF may
wish to target additional money to increase the use of learner-centered instruction among faculty
charged with the responsibility of training future K–12 teachers.
Hypothesis 5
Does class size correlate negatively with the use of learner-centered planning, learner-
centered delivery, and learner-centered assessment, presumably because larger classes are not
conducive to learner-centered instruction (Emmer et al., 1984; Springer et al., 1999)? Partial
support was obtained. Smaller class size was associated with more learner-centered delivery and
assessment, including the use of small groups. Clearly smaller classes may enhance the quality of
instruction that students receive. To retain more science and mathematics students in these majors,
it may be useful for large introductory classes to be made smaller initially to provide students with
richer, more engaging educational experiences rather than turning them away from these majors
because of negative initial experiences (Kardash & Wallace, 2001).
Hypothesis 6
With their advanced content, do upper-division classes involve more learner-centered
planning, more learner-centered delivery, and more learner-centered assessment than lower-
division classes? This hypothesis was unsupported, with one exception. Upper-division classes
incorporate more learner-centered assessment. Despite the propriety of using learner-centered
practices with more advanced material (NSF, 1996; Uno, 1999), they are not being used to the
extent that they might. Federal dollars might also target promoting learner-centered instruction in
these critical advanced courses to improve the quality of training of future scientists and future
science and math educators.
Educational Implications and Research Needs
According to the survey results, lecture–recitation–evaluation is alive and well in college
science and math classrooms, even in schools whose primary emphasis is not on research
(institutions other than LSU, the University of New Orleans, and Tulane). The same conclusions
have been reached in surveys of students (Kardash & Wallace, 2001; Powell, 1990; Rayman &
Brett, 1995; Seymour & Hewitt, 1997). We have identified gaps where federal dollars allocated for
the purpose of promoting learner-centered instruction might optimally be spent. Our data also
suggest that the money spent thus far has been slightly effective in promoting learner-centered
instruction.
When properly done, learner-centered instruction can achieve many positive outcomes in
students and faculty alike. Compared with traditional lecture–recitation–evaluation, it inculcates
in students intrinsic motivation to learn in addition to a deep, durable, and transferable
understanding of class content (APA, 1997; NRC, 1999; Uno, 1999). It can also be professionally
rewarding for the faculty who provide it (Chickering & Gamson, 1999). An important caveat,
however: more is required of faculty to achieve these outcomes than having students go through
the motions of the practices that appear on the SIAS. Intense and sustained faculty commitment is
crucial (NRC, 1999; Uno, 1999). Some examples follow. Faculty must learn about constructivism
and learner-centered approaches to teaching. Reading the volume entitled How People Learn:
Brain, Mind, Experience, and School (NRC, 1999) is an excellent way to get started. Faculty must
be open-minded enough to consider ways of teaching that may differ radically from how they were
taught or have taught in the past. They must attach to instruction an importance comparable to
the importance many attach to research. Faculty must be willing to experiment in the classroom
with different instructional strategies to determine which techniques are most effective for them
at the risk of lower student ratings in the short term. Even after attaining initial expertise in
teaching in a learner-centered way, faculty must continually update their knowledge by parti-
cipating in workshops on pedagogy, by reading articles on instructional advances in their
disciplines, and so forth.
Another caveat: a basic tenet of the NSF's systemic reform initiative is that modifying a
component of undergraduate education will not produce lasting change. The overall system has to
be adjusted to produce stable improvements (NSF, 1996). We close by suggesting some ways that
the collegiate system of undergraduate science education may have to change before advances
made in educational practices become ubiquitous and enduring.
As noted earlier, many science and math faculty view undergraduate teaching as an
encumbrance on research time (McKeachie et al., 1999; NSF, 1996). Consequently, the quality of
instruction undergraduates receive can be poor. One way to bring these faculty onboard as
motivated participants in the process of systemic reform is for institutions of higher learning,
especially top research schools, to redesign reward structures such that greater weight is given
to instructional innovation in decisions of tenure, promotion, and raises than has occurred before.
We suspect that only when they believe it is in their professional self-interest will many faculty
commit to the considerable undertaking of becoming learner-centered instructors. Finally, our
data (Table 5) suggest that college faculty most frequently receive information concerning
pedagogy from two sources: discussion with colleagues and reading about it on their own. These
data argue for the cost-effectiveness of having a few members of a department participate in
workshops, such as LaCEPT, and then share what they learned with colleagues.
This research was funded by National Science Foundation Grant 9255761 (Louisiana
Collaborative for Excellence in the Preparation of Teachers) and the Louisiana Education
Quality Support Fund. The authors express their gratitude to Mary Jo McGee-Brown of
Quantitative Research and Evaluation for Action, Inc., Athens, Georgia, and to Michael
and Jay Hughes, Curriculum Foundations and Research, Georgia Southern University, and
finally to our research assistants: Celeste Baine, Keli S. Bryan, and Susan Borglum.
Appendix
Survey of Instructional and Assessment Strategies Used in Undergraduate Science
and Mathematics and Science and Math Methods Courses
This survey will take approximately 5 minutes to complete. It is designed
to determine the educational practices used by faculty teaching undergraduate science and
mathematics and science and math methods courses at institutions of higher education in
Louisiana. Please take a few minutes to fill out this electronic survey. The results will be kept
strictly confidential and will not be communicated to administrators. Only overall, aggregated
statistics will be made public. This research was funded by a Louisiana Collaborative for
Excellence in the Preparation of Teachers (LaCEPT)/Board of Regents grant to assess the status
quo regarding educational practice at the undergraduate level in Louisiana.
General Information.

1. I am on the faculty of:
☐ Centenary College
☐ Dillard University
☐ Grambling State University
☐ Louisiana College
☐ Louisiana State University, Baton Rouge
☐ Louisiana State University, Alexandria
☐ Louisiana State University, Eunice
☐ Louisiana State University, Shreveport
☐ Louisiana Tech University
☐ McNeese State University
☐ Nicholls State University
☐ Northwestern State University
☐ Southeastern Louisiana University
☐ Southern University, Baton Rouge
☐ Southern University, New Orleans
☐ Southern University, Shreveport
☐ Tulane
☐ University of Louisiana at Lafayette
☐ University of Louisiana at Monroe
☐ University of New Orleans
☐ Xavier

2. Involvement in LaCEPT:
☐ I have not participated in LaCEPT-funded activities. (Go to Question 3)
☐ I have participated in LaCEPT-funded activities. Please check all of the following LaCEPT activities in which you have participated:
   ☐ Revised courses for education majors
   ☐ Administered local campus renewal grant
   ☐ Attended LaCEPT Annual State Meeting(s)
   ☐ Attended LaCEPT-Sponsored Workshop(s)
   ☐ Served as LaCEPT Faculty Intern
   ☐ Participated in LaCEPT Mentoring Program
   ☐ Was funded by LaCEPT to travel to professional meetings
   ☐ Received mini-grant funding from LaCEPT
   ☐ Other ———

3. I am a/an: ☐ Instructor ☐ Assistant Professor ☐ Associate Professor ☐ Full Professor ☐ Other

4. Gender: ☐ male ☐ female

5. Ethnicity: ☐ African American ☐ Asian ☐ White (Non-Hispanic) ☐ Hispanic ☐ Other ———

6. I have been teaching at the college level: ☐ 0–4 years ☐ 5–10 years ☐ 11–20 years ☐ >20 years

7. I teach, on average, ——— courses per semester: ☐ 1 ☐ 2 ☐ 3 ☐ 4

8. I teach undergraduate courses primarily in the area of (check all that apply):
☐ Biology ☐ Chemistry ☐ Earth Science ☐ Mathematics ☐ Math Methods ☐ Science Methods ☐ Physics ☐ Other ———

For the content area you checked above, choose the undergraduate course you teach most frequently and provide the following information.

9. Course Level: ☐ introductory (100–200 level) ☐ advanced (300–400 level)

10. A typical enrollment per offering for one section of this course is:
☐ <25 ☐ 26–45 ☐ 46–100 ☐ >100

11. The average % of students who initially enroll and successfully complete this course (earn a grade of C or above) is as follows:
☐ >90% ☐ 80–90% ☐ 70–79% ☐ 60–69% ☐ 50–59% ☐ 40–49% ☐ 30–39% ☐ <30%

12. Check all of the following statements that apply. This course:
☐ enrolls science or math majors only
☐ enrolls students from a variety of majors
☐ enrolls education majors only
☐ is required of elementary education majors
☐ is required of secondary education majors
Please respond to the following questions by indicating the frequency with which each of the
following tasks is completed in the course you described above. On the following 5-point Likert
scale, 1 indicates that the task is always completed and 5 indicates that the task is never completed.

Planning for Instruction (1 = "Always," 2 = "Frequently," 3 = "Occasionally," 4 = "Seldom," 5 = "Never")
1. How often do you write learning objectives for topics/segments of this course?
2. How often are learning objectives listed on your course syllabus?
3. To what extent do you incorporate a variety of teaching techniques in this class?
4. To what extent do you incorporate a variety of assessment strategies in this class?
5. How often do you revise your class notes and outline to incorporate recent research in the field?
6. How often do you revise your instructional and assessment strategies to incorporate recent research about how students learn?
Delivery of Instruction (1 = "Always," 2 = "Frequently," 3 = "Occasionally," 4 = "Seldom," 5 = "Never")
1. How often do you use feedback from the assessments in your course to adjust your teaching strategies?
2. How often do you use techniques to determine what students know about a topic prior to beginning instruction on the topic?
3. How often do you begin a class period with an engaging problem, question, or unusual fact to gain student interest?
4. How often do you use concept maps for instruction?
5. How often do you use concept maps for assessment?
6. How often do students construct concept maps in this class?
7. How often do students make use of manipulatives (hands-on instructional aids) in this class?
8. How often do students work in small groups during a class period?
9. How often is the majority of a class period spent primarily in traditional lectures presented by the instructor?
10. How often do you use technology in this class to enhance instruction?
11. How often is humor used in this class to enhance instruction?
12. How often do students in this class use technology to enhance their learning?
13. How often do students work independently during a class period in this course?
14. How often do students write (in addition to note taking and test taking) in this course?
15. How often do students spend time during class talking about the concepts they are learning?
Assessment of Learning (1 = "Always," 2 = "Frequently," 3 = "Occasionally," 4 = "Seldom," 5 = "Never")
1. To what extent do you tie the design of your assessment instruments or tasks to your learning objectives?
2. How often do your pencil and paper assessments include questions that require students to write (essay or short answer)?
3. To what extent do you use questions from a test bank in preparing your exams?
4. To what extent do your test questions assess higher levels of understanding such as analysis and/or synthesis?
5. How often do students in this class evaluate their own work or evaluate the work of their peers?
6. How often do you use informal assessments in this course?
7. To what extent do you use a variety of assessment formats to determine student grades (traditional multiple choice and short answer or essay exams, written reports, projects, presentations, etc.) in this course?
Please check all that apply. I have learned new methods of teaching and assessing from:
☐ discussions with colleagues
☐ observing classes taught by colleagues
☐ reading about educational issues and methods
☐ attending workshops and seminars away from my campus
☐ attending presentations on education at meetings of professional societies
☐ attending workshops and seminars on my campus

Please add any additional information you feel would be helpful.
References
American Psychological Association. (1997). Learner-centered psychological principles: A
framework for school reform & redesign. [Available online]. http://www.apa.org/ed/lcp2/
lcptext.html
Anastasi, A. & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ:
Prentice Hall.
Chickering, A.W. & Gamson, Z.F. (1999). Development and adaptations of the seven
principles for good practice in undergraduate education. New Directions for Teaching and
Learning, 80, 75–81.
Churchill, G.A. (1979). A paradigm for developing better measures of marketing constructs.
Journal of Marketing Research, 16, 64–73.
Deese, W.C., Ramsey, L.L., Walczyk, J.J., & Eddy, D. (2000). Using demonstration
assessments to improve learning. Journal of Chemical Education, 77, 1511–1516.
Dewey, J. (1910). How we think. Boston, MA: DC Heath.
Emmer, E.T., Evertson, C.M., Sanford, J.P., Clements, B.S., & Worsham, M.E. (1984).
Classroom management for secondary teachers. Englewood Cliffs, NJ: Prentice Hall.
Gagne, E.D., Yekovich, C.W., & Yekovich, F.R. (1993). The cognitive psychology of school
learning. New York: Addison-Wesley.
Hair, J.F., Anderson, R.E., Tatham, B.M., & Black, W.C. (1995). Multivariate data analysis
with readings (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Kardash, C.M. & Wallace, M.L. (2001). The Perceptions of Science Classes Survey: What
undergraduate science reform efforts really need to address. Journal of Educational Psychology,
93, 199–210.
Mager, R.F. (1962). Preparing learning objectives. Palo Alto, CA: Fearon.
McKeachie, W.J., Gibbs, G., Laurillard, D., Van Note Chism, N., Menges, R., Svinicki, M., &
Weinstein, C.E. (1999). Teaching tips: Strategies, research, and theory for college and university
teachers. Boston, MA: Houghton Mifflin.
National Research Council. (1999). How people learn: Brain, mind, experience, and school.
Washington, DC: National Academy Press.
National Science Foundation. (1996). Shaping the future: New expectations for under-
graduate education in science, mathematics, engineering, and technology (NSF Pub. No. 96-139).
Arlington, VA: Author.
Pearson, W. & Fechter, A. (Eds.). (1994). Who will do science? Educating the next generation.
Baltimore, MD: Johns Hopkins University Press.
Powell, L. (1990). Factors associated with the underrepresentation of African Americans in
mathematics and science. Journal of Negro Education, 59, 292–298.
Rayman, P. & Brett, B. (1995). Women science majors: What makes a difference in
persistence after graduation? Journal of Higher Education, 66, 388–414.
Ruiz-Primo, M.A. & Shavelson, R.J. (1996). Problems and issues in the use of concept maps
in science assessment. Journal of Research in Science Teaching, 33, 569–600.
Seymour, E. (1995). The loss of women from science, mathematics, and engineering majors:
An explanatory account. Science Education, 79, 437–473.
Seymour, E. & Hewitt, N.M. (1997). Talking about leaving: Why undergraduates leave the
sciences. Boulder, CO: Westview Press.
Snook, S.C. & Gorsuch, R.L. (1989). Component analysis versus common factor analysis:
A Monte Carlo study. Psychological Bulletin, 106, 148–154.
Springer, L., Stanne, M.E., & Donovan, S.S. (1999). Effects of small-group learning on
undergraduates in science, mathematics, engineering, and technology: A meta-analysis. Review
of Educational Research, 69, 21–51.
Strenta, A.C., Elliott, R., Adair, R., Matier, M., & Scott, J. (1994). Choosing and leaving
science in highly selective institutions. Research in Higher Education, 35, 513–547.
Thorndike, R.M. (1997). Measurement and evaluation in psychology and education (6th ed.).
Columbus, OH: Prentice-Hall.
Uno, G.E. (1999). Handbook on teaching undergraduate science courses: A survival training
manual. Fort Worth, TX: Harcourt Brace.
Valencia, S.W., Hiebert, E.H., & Afflerbach, P.P. (1994). Realizing the possibilities of
authentic assessment: Current trends and future issues. In S.W. Valencia, E.H. Hiebert, &
P.P. Afflerbach (Eds.), Authentic reading assessment: Practices and possibilities. Newark, DE:
International Reading Association.
Vygotsky, L. (1981). The genesis of higher mental functions. In J.V. Wertsch. (Ed.), The
concept of activity in Soviet psychology. Armonk, NY: Sharpe.
Weimer, M., Parrett, J.L., & Kerns, M.M. (1988). How am I teaching: Forms and activities for
acquiring instructional input. Madison, WI: Magna.