ORIGINAL PAPER
Students’ experiences with contrasting learning environments: The added value of students’ perceptions
Katrien Struyven · Filip Dochy · Steven Janssens · Sarah Gielen
Received: 19 July 2006 / Accepted: 23 March 2007 / Published online: 6 May 2008
© Springer Science+Business Media B.V. 2008
Abstract This study investigated the effects of two contrasting learning environments on
students’ course experiences: a lecture-based setting versus a student-activating teaching
environment. In addition, the evaluative treatment involved five research conditions, each
paired with one of four assessment modes, namely, portfolio, case-based, peer
assessment, and multiple-choice testing. Data (N = 608) were collected using the Course
Experience Questionnaire. Results showed that the instructional intervention (i.e. lectures
versus student-activating treatment) influenced students’ course experiences, but in the
opposite direction to that expected. In descending order, the following five of the seven scales revealed statistically significant differences: Clear Goals and Standards; the General scale; Appropriate Workload; Good Teaching; and Independence. When the assessment mode was considered, the Appropriate Assessment scale also demonstrated significant differences between the five research conditions. Moreover, the same teaching/learning environments led to diverse perceptions among students. While the perceptions of lecture-taught students were focused and concordantly positive, students’ course experiences with the student-activating methods varied widely, with both extremely positive and extremely negative opinions present. Students’ arguments in favour of the activating setting were the variety of teaching methods, the challenging and active nature of the assignments, and the joys of collaborative work in teams, whereas students expressed dissatisfaction with the perceived lack of learning gains, the associated time pressure and workload, and the (exclusive) use of collaborative assignments and related group difficulties.
K. Struyven (✉) · F. Dochy · S. Janssens · S. Gielen
Centre for Research on Teaching and Training, Katholieke Universiteit Leuven, Dekenstraat 2, 3000 Leuven, Belgium
e-mail: [email protected]

F. Dochy
e-mail: [email protected]

S. Janssens
e-mail: [email protected]

S. Gielen
e-mail: [email protected]
Learning Environ Res (2008) 11:83–109. DOI 10.1007/s10984-008-9041-8
Keywords Constructivism · Course Experience Questionnaire · Higher education · Lectures · Student-activating teaching methods/active teaching · Student teachers’ perceptions
Introduction
The principle that underpinned this study is Entwistle’s (1991) statement that it is a
student’s perceptions of the learning environment that influence how that student learns,
and not necessarily the context itself. Students’ perceptions are essential to an under-
standing of student learning. Instructional interventions are always interpreted by students
and this interpretation of the environment, rather than the intervention itself, triggers both
the engagement of students in learning and the effects of the teaching/learning environment
(Elen and Lowyck 2000). This relationship between perceptions and student learning has
been repeatedly confirmed by empirical data. In this regard, Könings et al. (2005) argue that
students’ perceptions of a learning environment affect their subsequent learning behaviours
and the quality of their learning outcomes. Consequently, it is no surprise that Fraser and
Fisher (1983) found statistically significant relationships between students’ perceptions of
the classroom environment and learning outcomes. Likewise, Ben-Ari and Eliassy (2003)
provide evidence for contextual effects on achievement motivation. Similarly, Biller
(1996) argues that students need to feel comfortable with the instruction that they receive
in order to learn.
These relationships between students’ outcomes and their perceptions have consequences
for the design of learning environments. In this respect, Trigwell and Prosser (1991) argue
that environments perceived to encourage deep approaches to learning will facilitate
higher-quality learning than environments designed to encourage surface approaches.
Consequently, it was hypothesised that constructivist learning/teaching environments,
which serve the purpose of deep, active and regulative learning (De Corte 2000; Oxford
1997; Sivan et al. 2000; Terwel 1999; Vermunt 1998; Von Glasersfeld 1988), affect
students’ perceptions and learning outcomes positively compared to traditional lecture-
based settings. In fact, the learning activities in which students engage largely determine the
quality of the learning outcomes that they attain (Vermunt and Verloop 1999). Three
categories of learning activities, namely, cognitive, affective and metacognitive (or
regulative) activities, might be encompassed by a wide variety of teaching strategies
(Vermunt and Verloop 1999). More than traditional paradigms, the (social)-constructivist
movement highlighted the metacognitive, self-regulative dimension of learning (De Corte
2000). Vermunt and Verloop (1999) argue that, when learning is increasingly seen as a
process of self-regulated knowledge construction, teaching models should take this learning
process as a point of departure. In this respect, student-activating teaching methods might
serve the purpose. ‘Student-activating teaching methods’, as defined in this study, challenge
students to construct knowledge by means of authentic assignments that literally require
their ‘active’ participation in the learning/teaching process in order to incorporate the
available information. Students are required to select, interpret and apply information and
knowledge to practical cases and to solve complex vocational problems (Jacobson and Mark
1995; Meyers and Jones 1993; Silberman 1996; Tenenbaum et al. 2001; Tynjälä 1997;
White 1996). In contrast to lectures in which the teacher directly instructs the students in the
content of the course, students in the activating setting need to find, browse and acquire the
content themselves by means of real-life problem tasks, practical case studies or team-based
assignments (e.g. role plays, educational games). This instructional contrast constitutes the
primary focus of the present study, which investigated whether a student-activating
teaching/learning environment produces different (i.e. more positive) effects on students’
perceptions from those produced by a lecture-based environment. These differences are
studied by means of the Course Experience Questionnaire (Ramsden 1991), which considers
students’ attitudes towards the teaching methods that were experienced.
Consistent with the hypothesised conducive role of student-activating teaching methods
in student learning (De Corte 2000; Oxford 1997; Terwel 1999; Vermunt 1998; Von
Glasersfeld 1988), the educational literature on students’ perceptions of teaching generally
reports positive student experiences with the active instructional type (Mansfield 1989;
McNaughton and Krentz 2000). Although student performances often remain comparable
to those in lecture-based settings, particular (computer-supported) techniques tend to trigger
favourable effects on students’ experiences (Ballard et al. 2004; O’Leary et al. 2005; Woo
and Kimmick 2000). For example, Marcel (2003) found that students commented positively
about online experiences and mentioned interaction with other students, familiarity with
computers and the internet, ease of navigation, student collaboration and self-direction.
Likewise, Sobral (1995) found that the problem-based learning course experience was
perceived as personally more meaningful than the conventional lecture-based course
experience (Slavin 1996). Similarly, Perkins and Saris (2001) reported that students viewed
the jigsaw classroom technique positively because it helped them to understand statistical
procedures and offered a variety of learning experiences. These results are consistent with
research on the effects of active teaching on students’ approaches to learning (Case and
Marshall 2004; Sivan et al. 2000; Waters and Johnston 2004; Wierstra et al. 2003). Sivan
and colleagues (2000) convincingly demonstrated that, when students become active
participants in the learning/teaching process, the deep approach to learning increases as a
result of the active learning environment. In particular, active learning made a valuable contribution
to the development of independent learning skills and students’ abilities to apply know-
ledge. However, these students did not engage in a solely student-activating teaching
environment because the ‘active’ courses included compulsory lectures. Wilson and
Fowler (2005) also reported that the teaching/learning environment influences students’
approaches to learning. Whereas students who reported themselves as more typically deep
in their approach to learning were consistent in their approaches across the different
environments, students who reported themselves as more typically surface in their
approach were influenced to adopt deeper processing strategies in the action learning
design. Students explained this shift in terms of the greater expectations of learner activity
and responsibility with action learning. However, though the dynamics of students’
approaches to learning are well established, these studies seem to remain inconclusive
about the effects of students’ activation on their approaches to learning in comparison to a
lecture-based environment.
However, the student-activating story is not completely plain sailing and students also
report difficulties and problems that arise from their active course experiences. For
example, Marcel (2003) addressed critical issues that students raised about their online
experiences, including extra time spent in online learning, pace of the course, learning
strategies, course selection, mentor and instructor issues, attrition and performance.
Similarly, Welker and Berardino (2005) studied a blended learning format, which involved
the use of electronic learning tools that supplement, but do not replace, face-to-face
learning. Students reported flexibility, convenience and independence as advantages, along
with confusion, reduced social interaction and more work as disadvantages. Likewise,
Richardson (1997) found that students did not perceive computer instruction as being time
effective, compared to traditional didactic lectures, and Schmidt (2002) found that
interaction was rated more favourably in the traditional setting than in the online teaching.
Hayward and Cairns (2001) also found that, although students preferred internet cases to
lectures, they expressed concerns about the accuracy and complexity of internet infor-
mation. Phipps et al. (2001) found students’ perceptions of cooperative learning to be
contradictory. Their study showed that overall the students perceived the (wide range of)
individual techniques of cooperative learning to be positive, yet they perceived cooperative
learning in general to be ineffective in terms of motivation (assuming no increase in study
time). Only half of the students in this study perceived cooperative learning to affect
motivation positively (48%). The authors explain that students could be rating overall
effectiveness negatively because they view group work as inefficient because it takes more
time and because it involves more sets of skills, such as interpersonal communication
skills, than does memorising lecture notes.
Nevertheless, these results reflect negatively on the effectiveness of activating learning
environments in terms of student learning. For example, Maguire et al. (2001) found that,
although students’ confidence levels in their ability to study and learn improved, they
became increasingly instrumental/surface in their approach to learning. The active skills
program ‘‘had not achieved its aim of encouraging students to adopt a deeper approach to
learning. Students have achieved a clear understanding of the different approaches to
learning but demonstrate an increasingly instrumental learning approach’’ (p. 104). These
results conform to the findings of Segers et al. (2006), whose student-activating teaching
methods, which were in line with constructivist principles and intended to deepen students’
approaches to learning, did not meet this purpose. In fact, surface approaches to learning
increased. Plausible explanations are that task conditions could interfere; for example,
‘reasonable’ workload could be a pre-condition of good studying and deep learning
(Chambers 1992; Kember 2004). In this respect, Entwistle and Ramsden (1983) and
Trigwell and Prosser (1991) showed that a perceived heavy workload and less freedom in
learning related to a reproducing orientation or a surface approach, and that perceived good
teaching, clear goals and more freedom in learning related to a meaning orientation or a
deep approach. The perceived quality of teaching influences the approach to learning, and
in particular the perceived characteristics of the student-activating teaching methods
pushed students towards surface approaches to learning in order to cope with the high
workload requirements. Here again, the majority of studies remain inconclusive about the
effectiveness of active teaching methods in comparison to lecture-based environments.
In addition to the aforementioned educational research, an innovative characteristic of
the study is that the effects of the assessment method are included in the research design. In
particular pre-assessment effects, or the effects of the expectation of a particular evaluation
method on students’ learning before the actual examination takes place, are considered
(Dochy et al. 2006; Struyven et al. 2005). In this respect, the distinction between con-
ventional tests and alternative (new) modes of assessment is considered (Birenbaum 1996;
Sambell et al. 1997), as well as the difference(s) between formative and summative
functions of evaluation (Segers et al. 2003). A quasi-experimental design was used to
examine students’ course experiences in a student-activating environment in comparison
with a lecture-based setting, considering both instructional and evaluative procedures. To
this end, quantitative data were collected by means of the Course Experience Questionnaire
and backed up by qualitative information on students’ perceptions about the learning/
teaching environment that they experienced.
Given the direct relationship between students’ perceptions and learning (Biller 1996;
Entwistle 1991; Fraser and Fisher 1983; Könings et al. 2005; Trigwell and Prosser 1991),
the importance for student learning of high levels of satisfaction with the learning/teaching
environment, as measured by the Course Experience Questionnaire, is legitimised. Characteristics of
both the instructional treatment (Lecture versus Student-activating treatment) and the
assessment method (Multiple-choice test, Case-based assessment, Peer assessment and
Portfolio assessment) might give rise to differences in course experiences, depending on
the criteria that are examined. A more detailed discussion of the identifying features of the
instructional and evaluation treatment is provided in the research design section.
Research design
Sample
The investigation had a quasi-experimental design. A course on Child Development was
delivered in five conditions, involving 608 students in their first year of an elementary
teacher training program at eight Flemish (Belgian) teacher education institutions. Students
were primarily female (83%) and aged 18–20 years. Students were randomly assigned to
the research conditions.
Procedure
In all, there were five research conditions in this investigation: one lecture-taught group
(lecture group 1) and four student-activating groups (activating groups 2–5) characterised
by one of four assessment modes. Table 1 provides a simple overview.
In order to control for differences in teaching approaches between the five conditions,
standardised learning materials on the subject of Child Development (including a course
book, a set of assignments and four assessment methods) were developed for a set of 10
lessons (each lasting 1 h 30 min). A total of 24 professional teachers from eight teacher
education institutions volunteered to participate in the study. The course on Child
Development consists of an instructional part during the lessons, associated with or
followed by an assessment part.
The instructional part of the course on Child Development
With respect to the instructional part of the treatment, the study of two contrasting
teaching/learning environments in terms of required self-regulative learning activities
(De Corte 2000; Vermunt and Verloop 1999) was central to this research project.
On the one hand, a group of pre-service teachers was instructed in the content on
Child Development within a lecture-based setting, characterised by direct teaching and
Table 1 Overview of the five research conditions, number of observations, instructional treatment and associated method of assessment
Condition N Instructional treatment Assessment method
Le 114 Lectures Multiple-choice examination
Mc 109 Student-activating assignments Multiple-choice examination
Ca 107 Student-activating assignments Case-based examination
Pe 172 Student-activating assignments Peer/co-assessment
Po 106 Student-activating assignments Portfolio assessment
teacher/learner interaction through formal lectures (Lectures, N = 114). Instruction on the
content in the course book (Struyven et al. 2003) was given by the teacher who literally
stood in front of the class. Pre-structured transparencies were used to guarantee a fairly
standardised delivery of the contents. The lectures were informative and exemplified good
teaching practices. Hence, teacher/learner interactions were deliberately and continuously
integrated to stimulate active student thinking. A multiple-choice test, which is described
below, was used to assess student performance.
While the content of the course book was delivered to students through direct teaching
during the lectures, students in the student-activating learning environment (Activating,
N = 494) had to explore and discover the content in the (same) course book by means of
(solely) authentic assignments that required students to ‘actively’ browse and study the
information in teams (learner/learner interaction) in order to solve the problems set in the
tasks. Examples of these assignments, which are organised in a booklet, are problem-based
learning tasks, case studies, role-plays, educational games, simulations and other tasks that
need teamwork. Correction keys were provided. The teachers’ roles within the student-
activating learning environment were restricted to the supervision and coaching of students’
learning processes while tackling these tasks. The assignments were collaborative in nature
(6–8 students) and required shared responsibilities. The set of assignments included detailed
instructions that accompanied the assignments and guided both students and teachers
through them. For each assignment in the booklet, a description was given of the prior require-
ments, the goals and achievement standards of the assignment, and details of the methods
and procedures of the assignment. Moreover, the duration of each lesson (1.5 h) was devoted
to students working on the assignments. To avoid the problem of excessive workloads, study
efforts on the assignments outside the class were intentionally restricted to a minimum.
Apart from the problem-based tasks which required some (limited) self-study at home, other
teaching methods were to be completed within the course of a lesson.
Lessons were similar in the four student-activating groups: these research conditions
received a standardised treatment, as the same content was studied and uniform
assignments were completed within comparable time restrictions, guided by identical
detailed instructions to both students and teachers. In addition, randomly
selected observations in the classes of participating teachers ensured the standardised
implementation of both instructional treatments (lectures versus student-activating
assignments). An overview of the lessons, their respective content and the associated
instructional methods is given in Table 2.
The assessment part of the course on Child Development
Although students in the activating group were instructed in the same way, the assessment
mode for the student-activating learning environment distinguished between four groups of
students, namely: students who were assigned (1) a portfolio assessment; (2) a case-based
assessment; (3) a peer/co-assessment; and (4) a multiple-choice test. On the one hand, the
selection was inspired by the distinction between conventional tests (e.g. multiple-choice
test—Le and Mc) and alternative (new) modes of assessment (e.g. portfolio, peer and case-
based assessment—Po, Pe, Ca), mirroring the shift from the traditional test culture to an
assessment culture (Birenbaum 1996; Sambell et al. 1997). On the other hand, evaluation
methods might serve a formative and/or summative function (Segers et al. 2003). Besides
the two methods that serve a summative function (Le, Mc, Ca), highlighting the product of
learning, two methods were chosen that included a formative function (Pe, Po) in order to
enhance the process of learning as well.
Considering the importance of alignment between instruction and assessment (Biggs
1996; Segers et al. 2003), the lecture-based setting was only followed by the multiple-
choice test, whereas the case-based evaluation, peer assessment and portfolio assessment
required working with the student-activating assignments. Each assessment method is
discussed in detail and an overview of the characteristics is presented in Table 3. The
upper half of the table describes the characteristics of the end-of-course examination or
assessment conversation, whereas the lower part emphasises the assessment procedure as it
is embedded in the course on Child Development.
Multiple-choice test. The multiple-choice test followed the set of classes and included
20 questions, each with four answer options. To control for correct guessing and
associated inflated scores, wrong answers were penalised by subtracting 1/3 point (correct
answer: 1 point, blank answer: 0 points; a short scoring sketch follows the item examples
below). Items were equally divided over four categories of questions: knowledge; insight;
application; and problem solving. For each category, an exemplary item is given:
• ‘knowledge’ (e.g. ‘‘Four defining statements [under a, b, c or d] are given about the
zone of proximal development [Vygotsky]. Which one is incorrect? Tick the
corresponding box.’’)
• ‘insight’ (e.g. ‘‘The developmental process of children is enhanced by playing. The
question is WHY? Tick the box of the correct answer [a, b, c or d].’’)
• ‘application’ (e.g. ‘‘For each choice-answer [a, b, c or d] a selection of playing
materials is listed. Which set of plays will appeal most to a 4-year-old kindergarten
boy? Indicate the corresponding answer.’’)
Table 2 Overview of content and teaching methods by lesson for the student-activating treatment
Lesson Content Teaching methods
Lesson 1 Introduction to instruction/evaluation Class teaching
Introduction to Child Development Introductory case studya
Construction of one’s developmental life storya
Discussion/debate by statementsa
Lesson 2 Prenatal period Learning contracta,b
Neonates and baby Video-based assignmentb
Lesson 3 Neonates and baby Role playa
Toddler and kindergartner Problem-based assignment 1 (start)a
Lesson 4 Toddler and kindergartner Problem-based assignment 1 (conclusion)a
Toddler and kindergartner Problem-based assignment 1 (extension)a
Lesson 5 Schoolchild Problem-based assignment 2 (start)a
Lesson 6 Schoolchild Problem-based assignment 2 (conclusion)a
Task on playing skills developmenta
Lesson 7 Schoolchild Working in cornersa
Lesson 8 Schoolchild Reflection taskb
Adolescent Case study (start)a
Lesson 9 Adolescent Case study (conclusion)a
Synthesis Educational board gamea,b
Lesson 10 Information on examination Class teaching
Opportunity for questions Class teaching
a Collaborative assignments, b individual work
Table 3 Overview of the characteristics (and differences) for the four assessment methods in the study

Assessment characteristics | Multiple choice | Case based | Peer assessment | Portfolio
Open/closed book examination | Closed | Open | Open | Open
Individual/exam in group | Individual | Individual | Group | Individual
Written/oral examination | Written | Written | Oral | Oral
Questions in the exam deal directly with? | Information in course book | Case study (in relation to course book info) | Group tasks (in relation to course book info) | Portfolio (in relation to course book info)
Preparation of students for the assessment method during the course?
- Lesson 1 | Instructions | Instructions | Instructions + Try-out | Instructions
- Meantime | / | Case study on adolescence | Interim peer feedback (3 sessions)a | Interim feedback (1 session)
- Lesson 10 | Exemplary questions | Case to go (home) + Exemplary questions | Exemplary questions | Exemplary questions
Formative/summative use of the evaluation | Summative | Summative | Formative + Summative | Formative + Summative
Interim feedback (formal sessions) | No | No | (Yes, after each group’s paper)a | Yes, 1 interim session (and on demand)
Course work calculated in final score? | No | No | Yes | Yes
Final score determined by? | Exam | Exam | Group tasks (N = 3) + Peer assessments + Exam | Group tasks, incl. individual reflections (portfolio) + Exam

a Due to time restrictions, teachers were unable to provide students with the interim peer feedback information
• ‘problem solving’ (e.g. ‘‘Mary is aged six when she enters your class in the first year
of elementary school. When Mary was 1-year-old, doctors diagnosed her with
leukaemia, which is a serious disease/cancer in which the body produces too many
white blood cells. The following years were not only very stressful, but also very
tiresome and detrimental to Mary’s developmental process. Today, Mary has fully
recovered, but she still suffers some ill effects of the cancer: (1) Mary falls ill very
easily because she has a low resistance to disease. As a consequence, she has been
frequently absent from school; (2) also, entire days at school are very tiresome and hard
on Mary. Her mother asked you, if you were to notice that Mary was getting tired, to let
her doze off for a while in the cosy corner of your classroom to recover. Attending all
lessons in a concentrated manner therefore is not possible; (3) Mary has spent a lot of
time, literally sick and tired, stuck to her hospital bed. Consequently, she has been
experiencing difficulties in mastering movement and balance (learning how to walk and
run, throwing/catching objects, etc.). Also Mary’s fine-motor skills are insufficient; (4)
Mary finds it hard to get in touch with other pupils. She does not engage in play with
other children. Rather, she plays by herself in a corner at the playground. Also
teamwork in class is not obvious. Mary’s parents told you that at home Mary does not
notice Matthew, her little brother or at least she does not initiate contact with him; (5)
Nonetheless, Mary is normally gifted and very eager to learn. She has mastered the
English language as well as the other pupils in your class. As already mentioned, Mary
is a pupil in your class. The answers/choices refer to possible risks for Mary’s
developmental process in class. In your opinion, what is the most urgent risk for Mary
that will need your immediate attention (and treatment)? Tick the correct box.’’)
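As a concrete illustration of the formula-scoring rule described above (1 point per correct answer, 0 for a blank, minus 1/3 for a wrong answer), the following sketch shows how such a test could be scored. The answer key and responses are hypothetical, and the paper does not report the authors’ actual scoring script:

```python
# Formula scoring for the multiple-choice test described above: a correct
# answer earns 1 point, a blank earns 0, and a wrong answer costs 1/3 point.
# The answer key and responses below are hypothetical.

PENALTY = 1 / 3  # subtracted per wrong answer to correct for guessing

def score_test(key: list[str], responses: list[str | None]) -> float:
    """Return the formula score; None marks a blank answer."""
    score = 0.0
    for correct, given in zip(key, responses):
        if given is None:
            continue          # blank: 0 points
        elif given == correct:
            score += 1.0      # correct: 1 point
        else:
            score -= PENALTY  # wrong: -1/3 point
    return score

key = ["a", "c", "b", "d"]            # hypothetical 4-item answer key
responses = ["a", "c", None, "b"]     # 2 correct, 1 blank, 1 wrong
print(score_test(key, responses))     # 2 - 1/3 = 1.666...
```

With four options per item, this penalty makes the expected score of blind guessing zero (1/4 × 1 - 3/4 × 1/3 = 0), so guessing is discouraged without being punished beyond chance.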
Case-based assessment. The case-based assessment (Segers 2003) concerned ‘Miss
Ellen’, who is a teacher in the third year of elementary school. The case material, involving
a set of information and documents about the class and pupils, included the following: the
class report for the first trimester, the detailed planning of a week, the thematic planning of
the whole school year, the floor plan of the classroom and its play materials, a medical
record on Robert, and a letter from Cindy’s mother. Students received the case materials
after the final lesson in order to prepare for their examination. The examination questions
remained confidential and all concerned this ‘case study’. Students were allowed to use all
resources available (e.g. course book, case documentation, articles, etc.) to complete their
examination. An example of a question is: ‘‘Are there any students in the class of Miss
Ellen who might suffer from the learning disability ‘dyslexia’? Tick the box in front of
their name (in the list), and use the blank space to explain on what evidence or information
in the case materials you have made this attribution.’’
Peer/co-assessment. Peer assessment has been used to differentiate between individual
contributions to small-group projects (Topping 2003). In particular, three problem-based
assignments related to the content of Child Development were the subject of the peer
evaluation, which involved group members scoring their peers and themselves on the learning
and collaborative processes within the group and their contributions to the tasks’ solutions. An
example of a problem-based assignment involved ‘Frank’, a teacher in kindergarten, who
has been reading a chapter about the developmental stage of young children in kinder-
garten. As a consequence, he became worried that his classroom did not comply with the
theories in the book and he thought that enrichment and redecoration of his classroom
might be needed. Students were provided with the floor plan of Frank’s classroom and
were required to assess the interior organisation (and accessories) on the basis of the
developmental theories for the kindergarten age. Suggestions for enrichment or interior
(re)arrangements (e.g. the purchase of new or alternative learning materials and plays; the
re-arrangement of furniture) ought to be proposed (and legitimated) as a result.
Obviously, the peer assessment scoring, which was associated with each problem task, was
anonymous. For each of 10 criteria (e.g. critical thinking; involvement in group discussions;
accurateness and relevance of preparations; willingness to undertake arranged tasks), stu-
dents had to indicate for each team member (and themselves) whether he/she contributed
more (score 3), equally (score 2) or less (score 1) than average or if the student had no
contribution to the group’s work at all (score 0). These scores were calculated into an
individual peer factor, including a correction for favouritism (i.e. subtraction of the highest
and lowest score for each student). In addition, the teacher scored the group’s product for the
problem-based assignment. Depending on the individual peer factor, this teacher’s score was
increased or decreased. Students were to be informed of this score (and their average score on
each criterion) after each of the three assignments, so that the peer assessment would serve
formative assessment purposes as well. In order to conclude the peer-assessment procedure,
an oral group-assessment conversation on the group’s assignments was arranged.
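The paper describes the peer-factor computation only in outline, so the sketch below is one plausible reading rather than the authors’ exact algorithm: ratings of 0–3 per criterion, a favouritism correction that drops each student’s single highest and lowest received rating, and a factor that scales the teacher’s group score up or down. The function names, the reference value and the exact scaling rule are all assumptions:

```python
# One plausible reading of the peer-factor procedure, not the authors' exact
# algorithm: per-criterion ratings of 0-3, a favouritism correction that
# drops each student's single highest and lowest received rating, and an
# individual factor that scales the teacher's group score up or down.

def peer_factor(ratings_received: list[float]) -> float:
    """Average of the received ratings after dropping the highest and lowest."""
    trimmed = sorted(ratings_received)[1:-1]
    return sum(trimmed) / len(trimmed)

def individual_score(group_score: float, factor: float,
                     reference: float = 2.0) -> float:
    """Scale the teacher's group score by the peer factor.

    A factor above `reference` (2 = 'contributed equally') raises the
    individual score, a factor below it lowers the score; this scaling
    rule is itself an assumption.
    """
    return group_score * (factor / reference)

# Hypothetical mean ratings (over the 10 criteria) that one student received
# from the 6 members of her group, herself included.
received = [2.1, 2.4, 1.8, 2.0, 2.9, 1.2]
factor = peer_factor(received)                    # drops 2.9 and 1.2
print(round(individual_score(14.0, factor), 2))   # teacher's group score: 14/20
```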
Portfolio assessment. The portfolio assessment consisted of a portfolio document,
including the portfolio-constructing process (interim feedback session) and an end-of-
course assessment interview with the teacher on the definitive portfolio, serving both
formative and summative purposes. The portfolio document contained a selection of
activating assignments tackled during classes, elaborated with students’ reflections on their
own experiences and learning (Davies and LeMahieu 2003; Janssens et al. 2001). In this
respect, it is an example of a showcase portfolio with an intentional reflective component
(Tillema and Smith 2000).
In the multiple-choice (Le and Mc) and case-based (Ca) conditions, the scores for the
course on Child Development were solely determined by the examination score, whereas students’
work on the group assignments (Pe) and on the portfolio (Po) was included in students’
final marks for the peer and portfolio conditions. Hence, the latter students (Pe and Po) had
to hand in their assignments for evaluation purposes and, consequently, could have
invested more effort and study work in their assignments. Apart from the multiple choice-
examination (Le and Mc), all assessment methods took place by means of an open-book
format which allowed the use of resources. The multiple-choice examination (Le and Mc)
and case-based evaluation (Ca) were individual written examinations, whereas both the
peer (Pe) and portfolio assessments (Po) were accompanied by an oral assessment con-
versation, in groups and individually, respectively. These assessment conversations mainly
concerned the submitted work, namely, the problem-based assignments and the definitive
portfolio. Within the student-activating treatment, these features of the expected assess-
ment method might give rise to differences in course experiences (e.g. appropriate
workload, independence, etc.). Of course, thorough preparations were made for students to
become informed and skilled examination participants. For instance, information on the
assessment method was given during the first lesson of the course and sample examination
questions were distributed beforehand. The methods, including formative assessment such
as portfolio and peer assessment, also comprised (in)formative directions/feedback from
the teacher and a try-out session.
Instruments and data collection
Data were obtained by means of the Course Experience Questionnaire (CEQ36; Ramsden
1991), of which an overview is given in Table 4. At the end of the final lesson, students
were asked to indicate their experiences with the course on Child Development by means
of 38 items. The perceived quality of the course on Child Development comprised six
scales: Good Teaching, Clear Goals and Standards, Generic Skills, Appropriate
Assessment, Appropriate Workload and Independence. Additionally, two items attempted
to measure the students’ overall course satisfaction in the ‘General’ scale. Because the six
scales are assumed to comprise the perceived quality of the course on Child Development,
and because significant correlations with the ‘General’ scale were demonstrated, a Total
quality score was also calculated by averaging the sum of the items completed in the
Course Experience Questionnaire. The reliability of this ‘Total’ scale was indicated by a
high α coefficient (α = 0.877). A five-point Likert scale was used to register the students’
responses, from Agree = 5, Agree Somewhat = 4, Unsure = 3 and Disagree Somewhat = 2
to Disagree = 1. Each item was set as a variable and a scale total was produced by creating
a new variable that summed the item scores. The SAS Enterprise Guide and SPSS 12.0
software packages were used to carry out the scoring and statistical analyses.
Table 4 Reliability coefficient (Cronbach’s α) and sample questions for each scale in the Course Experience Questionnaire (Ramsden 1991)

Scale | No. of items | α | Sample questions
General | 2 | 0.570 | The degree course is intellectually stimulating. / Overall, I’m satisfied with the quality of this degree course.
Good teaching | 8 | 0.762 | The teaching staff of this course motivate students to do their best work. / Staff here put a lot of time into commenting on students’ work.
Clear goals and standards | 5 | 0.754 | It’s always easy here to know the standard of work expected. / You usually have a clear idea of where you’re going and what’s expected of you.
Generic skills | 6 | 0.811 | This course has helped me to develop my problem-solving skills. / This course has helped develop my ability to work as a team member.
Appropriate assessment | 6 | 0.699 | To do well on this course all you really need is a good memory. (negative) / Too many staff ask us questions just about facts. (negative)
Appropriate workload | 5 | 0.683 | The workload is too heavy. (negative) / It seems to me that the syllabus tries to cover too many topics. (negative)
Independencea | 4 | 0.570 | Students have a great deal of choice over how they are going to learn in this course. / There are few opportunities to choose the particular areas you want to study. (negative)
Totalb | 36 | 0.877 |

a When the following items were omitted from the Independence scale, reliability increased to a mediocre level (α = 0.570): ‘‘There are few opportunities to choose the particular areas you want to study’’ and ‘‘There is very little choice in this course in the ways you are assessed’’
b Pearson correlation coefficients showed positive, significant correlations between the scales of the CEQ and the ‘General’ scale. Hence, a Total category was calculated; a high α coefficient confirmed the procedure to be sound. One exception is the ‘Appropriate Assessment’ scale, which did not correlate with the ‘General’ scale (r = –0.04). However, because the assessment method was an essential part of the research design (and a high α coefficient was secured), the scale was included to represent an accurate reproduction of students’ overall course experiences
In agreement with the results of Ramsden (1991) and Wilson et al. (1997), the reliability
of the Course Experience Questionnaire was sound (0.683–0.877 for the different scales).
Exceptions were the General (α = 0.570) and Independence (α = 0.461) scales, which
showed lower reliability coefficients. However, after two items were omitted from the
Independence category (see Table 4), the low α increased to a mediocre level (α = 0.570).
Hence, the reduced Independence scale (4 items) was used in the statistical analyses.
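To make this scoring procedure concrete, the sketch below reverse-codes a negatively worded item, sums item scores into a scale total and computes Cronbach’s α from its standard formula. The data are randomly generated placeholders (so the resulting α is meaningless), and the authors worked in SAS/SPSS rather than the Python shown here:

```python
import numpy as np

def reverse_code(item: np.ndarray, max_point: int = 5) -> np.ndarray:
    """Reverse-code a negatively worded Likert item (1<->5, 2<->4, ...)."""
    return (max_point + 1) - item

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses to a 5-item scale (rows = students, values 1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(20, 5)).astype(float)
responses[:, 4] = reverse_code(responses[:, 4])  # item 5 is negatively worded

scale_totals = responses.sum(axis=1)             # one scale score per student
print(scale_totals.mean(), cronbach_alpha(responses))
# Random placeholder data, so this alpha is meaningless; the real CEQ scales
# yielded alphas between 0.461 and 0.877 (Table 4).
```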
Because of the differences in teaching methods between the lecture-based group and the
student-activating environments, an additional evaluative question assessed students’
perceptions of the instructional methods that they experienced during the course on Child
Development. The item was: ‘‘How do you assess the teaching methods you have exper-
ienced during the course on Child Development?’’ Students had to indicate on a five-point
scale one of the following answers: Very Good, Good, Moderate, Weak or Very Weak.
Blank space was provided for optional comments, which were listed, categorised and
quantified into a simple numeric table (see Table 10). The same was done for students’
perceptions of the assessment methods.
Results
Because of the authentic, constructive, collaborative and self-regulative nature of the
activating assignments used in this study (De Corte 2000; Oxford 1997; Terwel 1999;
Vermunt 1998; Von Glasersfeld 1988), and considering the scales in the Course Exper-
ience Questionnaire, scores were expected to be higher for the student-activating learning
environment than for the lecture-based setting on the Generic Skills and Independence
scales and, consequently, on the General/Total scales. In order to control for dif-
ferences in teaching approach, explicit attention was given to the construction of
standardised learning materials and to monitoring implementation practices. For example,
explicit interventions were undertaken to secure standardised instruction in class; to inform
students about the goals and standards of the course in the booklet of assignments; and to
prevent excessive workload by restricting time on task to time in class. As a consequence,
no differences between the instructional treatments (activating assignments vs. lectures)
were expected for the scales of Good teaching, Clear Goals and Standards and Appropriate
Workload. The expected assessment method might also generate differences in course
experiences, in particular on the Appropriate Assessment scale, but possibly also on the
scales such as Appropriate Workload (e.g. students in the Pe and Po conditions need to
hand in assignments; or experiences might be different between conditions depending on
the open-book or closed-book format of the examination) or Independence (e.g. students in
the Po group are free to select tasks to include in the portfolio; or Pe students have a say
about the criteria that were included in the peer assessment procedure). Given the
categorical independent variable (i.e. instructional/evaluative treatment), descriptive
statistics were calculated and analysis of variance (ANOVA) was used to confirm or
reject these expectations. Effect sizes (R² and/or Cohen’s d) are provided. With respect to the
ordinal evaluative items on the experienced teaching methods or the expected assessment
method, the non-parametric variants of the analysis of variance procedure were used: the
Mann–Whitney test (for two independent samples) and the Kruskal–Wallis test (for k
independent samples). The Results section comprises three parts; parts 1 and 2 report the results
of the Course Experience Questionnaire by instructional treatment (lectures vs. student-
activating teaching) and by instructional/evaluation treatment (Le, Mc, Ca, Pe, Po) which
considers the pre-assessment effect of the diverse assessment methods included in the
design. Part 3 pinpoints students’ perceptions of the instructional method or evaluation format
that they experienced during, or expected after, the course on Child Development, revealing
students’ arguments in positive and negative directions.
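A minimal sketch of the analysis pipeline just described (one-way ANOVA with R² and Cohen’s d for the two-group contrast), using hypothetical scale scores; the paper does not state which pooling convention was used for d, so the pooled-SD formula below is an assumption, and its output will differ slightly from Tables 5 and 7:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical 'General' scale scores for the two instructional treatments.
lectures = rng.normal(4.0, 0.6, 114)
activating = rng.normal(2.9, 0.95, 494)

# One-way ANOVA for the two-group contrast.
f_stat, p_val = stats.f_oneway(lectures, activating)

# R^2 (eta squared): between-group sum of squares over total sum of squares.
pooled = np.concatenate([lectures, activating])
grand_mean = pooled.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in (lectures, activating))
ss_total = ((pooled - grand_mean) ** 2).sum()
r_squared = ss_between / ss_total

# Cohen's d with a pooled standard deviation (one common convention; the
# paper does not say which convention it used).
n1, n2 = len(lectures), len(activating)
sd_pooled = np.sqrt(((n1 - 1) * lectures.var(ddof=1)
                     + (n2 - 1) * activating.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (lectures.mean() - activating.mean()) / sd_pooled

print(f"F = {f_stat:.2f}, p = {p_val:.4g}, R^2 = {r_squared:.3f}, d = {cohens_d:.2f}")
```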
The contrast lectures versus student-activating teaching: CEQ
Regarding the instructional treatment (lectures vs. student-activating teaching), the simple
statistics in Table 5 already reveal remarkable differences in students’ experiences of the
same course content when delivered by student-activating or lecture-based methods.
Compared to the lecture-based setting, the perceived quality of the course was generally
lower in the student-activating learning environment. At the same time, the standard
deviations are (significantly) higher, suggesting that students’ opinions of the quality of
their experiences are more divided.
The analyses of variance (Table 5) showed statistically significant discrepancies
between the two learning environments for five scales in the CEQ and the Total perceived
quality measure, accompanied by medium (d ≥ 0.50) and large (d ≥ 0.80) effect sizes.
Contrary to expectations, the Generic Skills scale did not differentiate between lectures
and student-activating assignments. Moreover, both scales referring to the overall quality
of the course (General and Total) clearly distinguish between the lecture-taught students
and their activated peers, as these two scales account for nearly 20% of the variance.
Likewise, the Clear Goals and Standards scale differed between the conditions, with lec-
ture-taught students feeling better informed about the aims and requirements of the course.
Moreover, despite the content being the same and efforts to limit time on task to in-class
time, the student-activating students perceived the Workload to be remarkably less
appropriate (i.e. heavier) than their lecture-taught companions. Finally, though the
variances shown by these scales were lower and the effect sizes were modest, the results
Table 5 ANOVA for Course Experience Questionnaire scales comparing lectures with student-activating teaching methods

Source | Lectures (Le) M (SD)c | Activating (Act) M (SD) | df | F | p* | R² | da | Comparisonsb
General | 4.01 (0.61) | 2.90 (0.95) | 1, 606 | 143.61 | <0.0001* | 0.192 | 1.39 | Le > Act
Good teaching | 3.84 (0.64) | 3.33 (0.77) | 1, 605 | 43.07 | <0.0001* | 0.067 | 0.72 | Le > Act
Clear goals | 3.72 (0.67) | 2.74 (0.81) | 1, 605 | 143.69 | <0.0001* | 0.192 | 1.32 | Le > Act
Generic skills | 2.86 (0.70) | 2.72 (0.80) | 1, 605 | 3.31 | 0.0694 | 0.005 | 0.19 | /
Appropriate assessment | 3.52 (0.63) | 3.47 (0.74) | 1, 604 | 0.40 | 0.5284 | 0.001 | 0.07 | /
Appropriate workload | 3.58 (0.67) | 2.84 (0.74) | 1, 605 | 94.50 | <0.0001* | 0.135 | 1.05 | Le > Act
Independence | 3.32 (0.65) | 2.89 (0.77) | 1, 604 | 30.86 | <0.0001* | 0.049 | 0.60 | Le > Act
Total | 3.55 (0.42) | 2.98 (0.49) | 1, 606 | 127.91 | <0.0001* | 0.174 | 1.25 | Le > Act

* p < 0.05
a Cohen’s d: d > 0.50 = medium, d > 0.80 = large
b Bonferroni comparisons: α = 0.05
c ANOVA for the standard deviations demonstrates significant differences for the following CEQ scales: General, Good teaching, Clear goals, Appropriate workload and the Total scale
N(Lect) = 114, N(Act) = 494
show significant effects for the instructional treatment in terms of the Good Teaching and
Independence scales.
The ‘Appropriate Assessment’ scale is not discussed here because analyses by research
condition are needed to provide accurate information about these measurements as four
assessment methods were used. Hence, the next section considers these issues.
The instructional contrast, including pre-assessment effects: CEQ
As already mentioned, the treatments in the present study involved not only differences
between teaching methods. Although the questionnaire was administered before
the examination period, the differences between assessment methods might also explain
the findings for students’ course experiences. Table 6 shows simple statistics for the
Course Experience Questionnaire scales, while Table 7 reports the ANOVA results for the
five research conditions.
Although the Bonferroni comparisons more than once replicated the difference in
instructional treatment between the lecture-taught students and their student-activating
fellows, the differences in assessment mode that are associated with the course also
accounted for an important amount of the variance on the scales of the Course Experience
Questionnaire, as differences within the student-activating group demonstrate. Although
the perceived Total quality of the course was moderate, students in the Portfolio (Po)
condition were significantly more satisfied with their experiences of the student-activating
course than were students in the Case condition (Ca) and the active Multiple-choice test
group (Mc). The perceived Total quality is not entirely consistent with the results for the
General items in the questionnaire, for which the Peer group (Pe) had the lowest scores.
Next, the Clear Goals and Standards scale replicated the instructional difference between
the lecturing and student-activating treatments. This distinction was similarly demonstrated
in the Appropriate Workload scale, but with the order of conditions being different. The
Good Teaching scale again showed the perceived differences between the experienced
modes of instruction. In addition, the Peer assessment students (Pe) were more content with
their teachers’ support and encouragement compared with their peers in the Portfolio (Po),
Table 6 Descriptive statistics for the Course Experience Questionnaire scales by research condition

Scale | Le M (SD) | Mc M (SD) | Ca M (SD) | Pe M (SD) | Po M (SD)
General | 4.01 (0.61) | 2.89 (1.00) | 2.96 (0.94) | 2.75 (0.95) | 3.10 (0.85)
Good teaching | 3.84 (0.64) | 3.16 (0.82) | 3.13 (0.88) | 3.57 (0.65) | 3.30 (0.67)
Clear goals | 3.72 (0.67) | 2.64 (0.75) | 2.78 (0.83) | 2.73 (0.83) | 2.84 (0.81)
Generic skills | 3.86 (0.70) | 2.58 (0.78) | 2.68 (0.78) | 2.83 (0.84) | 2.71 (0.76)
Appropriate assessment | 3.52 (0.63) | 3.30 (0.74) | 3.36 (0.85) | 3.53 (0.72) | 3.66 (0.58)
Appropriate workload | 3.58 (0.67) | 2.79 (0.82) | 2.74 (0.70) | 2.84 (0.71) | 2.99 (0.74)
Independence | 3.32 (0.65) | 2.68 (0.82) | 2.80 (0.78) | 2.90 (0.73) | 3.18 (0.72)
Total | 3.55 (0.42) | 2.86 (0.55) | 2.92 (0.48) | 3.02 (0.45) | 3.12 (0.48)

Conditions: Le = lecture-based LE + multiple-choice test; Mc = student-activating LE + multiple-choice test; Ca = student-activating LE + case-based examination; Pe = student-activating LE + peer/co-assessment; Po = student-activating LE + portfolio
N(Le) = 114, N(Mc) = 109, N(Ca) = 107, N(Pe) = 172, N(Po) = 106
active Multiple-Choice test (Mc) and Case-based conditions (Ca). The next scale, in terms
of the variance shown, is the Independence scale, which did not entirely replicate the
instructional treatment. In fact, the Portfolio condition (Po) showed similarities to the
Lecture-taught group (Le) and significantly differed from the Peer condition (Pe), the Case-
based (Ca) group and active Multiple-choice group (Mc). Although the Generic Skills and
Appropriate Assessment scales reveal significant ANOVA values, they only accounted for
a limited proportion of the variance. The Bonferroni comparisons significantly differen-
tiated the Portfolio group (Po) from the Case-based (Ca) and active Multiple-choice
conditions (Mc), but only with respect to students’ experiences with the assessment
method. Interestingly, students in the two multiple-choice conditions (Le and Mc) scored
differently on the Appropriate Assessment scale in the questionnaire. Once again, ANOVA
analyses of the standard deviations revealed significantly lower values in the lecture-taught
group for several scales in the CEQ, suggesting that these students agreed with each other
more than did students in the activating conditions, who tended to display more divergent
opinions.
Students’ comments with respect to lectures or student-activating teaching
In addition, students were asked to rate the experienced instruction on a five-point scale
that ranged from Very Good (++) to Very Weak (--), as demonstrated in Table 8. In
accordance with the high standard deviations, the results suggest that the student-activating
group could be divided into supporters and opponents of these methods of teaching. The
Mann–Whitney test showed statistically significant differences between the two instruc-
tional settings (see Table 8). Strikingly, more students in the activating group felt very
satisfied (14.81%) as well as very dissatisfied (8.23%) with the experienced teaching
methods, compared to their peers in the lecture-taught group (respectively, 12.39% and
0.88%). This finding is largely mirrored in the analyses by research condition (Le, Mc, Ca,
Table 7 ANOVA for Course Experience Questionnaire scales comparing research conditions

Source | df | F | p* | R² | Comparisonsa,b
General | 4, 603 | 39.04 | <0.0001* | 0.206 | Le > Po, Ca, Mc, Pe; Po > Pe
Good teaching | 4, 602 | 19.53 | <0.0001* | 0.115 | Le > Pe, Po, Mc, Ca; Pe > Po, Mc, Ca
Clear goals | 4, 602 | 36.91 | <0.0001* | 0.197 | Le > Po, Ca, Pe, Mc
Generic skills | 4, 602 | 2.65 | 0.0325* | 0.017 | /
Appropriate assessment | 4, 601 | 4.47 | 0.0014* | 0.029 | Po > Ca, Mc
Appropriate workload | 4, 602 | 25.41 | <0.0001* | 0.144 | Le > Po, Pe, Mc, Ca
Independence | 4, 601 | 14.58 | <0.0001* | 0.088 | Le, Po > Pe, Ca, Mc
Total | 4, 603 | 37.42 | <0.0001* | 0.199 | Le > Po, Pe, Ca, Mc; Po > Ca, Mc

* p < 0.05
a Bonferroni comparisons: α = 0.05
b ANOVA for the standard deviations shows significant differences between the conditions for the following CEQ scales (significant Bonferroni comparisons, α = 0.05, in parentheses): General (Po > Le), Good Teaching (Po, Mc, Pe, Ca > Le), Clear Goals (Mc, Po, Pe > Le), Appropriate Workload (Po, Pe, Ca, Mc > Le) and the Total scale (Po > Le)
Pe, Po). However, the Kruskal–Wallis analysis of the teaching method by research con-
dition (see Table 9) only revealed significant differences in assessments between the
lecture-taught students and the case-based examination group; the other conditions were
between the two extremes.
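For the ordinal rating item, the non-parametric counterparts named earlier can be sketched as follows; the ratings are hypothetical and scipy is assumed here, whereas the authors worked in SAS/SPSS:

```python
from scipy import stats

# Hypothetical ordinal ratings (1 = Very Weak ... 5 = Very Good).
lecture_ratings = [4, 4, 3, 5, 4, 3, 4, 2, 3, 4]
activating_ratings = [5, 2, 1, 4, 3, 5, 2, 4, 1, 3]

# Two independent samples: Mann-Whitney U test.
u_stat, p_u = stats.mannwhitneyu(lecture_ratings, activating_ratings)

# k independent samples: Kruskal-Wallis H test (with five conditions, each
# condition would contribute its own list of ratings).
h_stat, p_h = stats.kruskal(lecture_ratings, activating_ratings)

print(f"Mann-Whitney U = {u_stat:.1f} (p = {p_u:.3f}); "
      f"Kruskal-Wallis H = {h_stat:.2f} (p = {p_h:.3f})")
```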
Apparently, the same educational setting is able to trigger diverse perceptions among
students. Whereas the perceptions of lecture-taught students were focused and generally
positive, students’ course experiences with student-activating methods were ambiguous
and widely spread; the activated students clearly were divided into a supportive group and a
group of opponents. The additional comments associated with the assessment ques-
tion about the instructional methods revealed interesting arguments in both directions
(see Table 10). Because the teaching method was central to this item in the questionnaire
and because of its optional nature (less than a 50% response rate), students’ answers are
only discussed by instructional treatment. Hence, caution in generalising results is
recommended.
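The quantification of the optional comments into Table 10 amounts to a simple frequency tally over the assigned categories; a minimal sketch with hypothetical category labels:

```python
from collections import Counter

# Hypothetical categorised comments from one treatment group.
comments = [
    "variety of teaching methods", "collaboration/team-work",
    "variety of teaching methods", "poor learning gains",
    "variety of teaching methods", "poor learning gains",
]

counts = Counter(comments)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} ({100 * n / total:.2f}%)")
```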
The results in Table 10 substantiate both extremely positive and extremely negative course
experiences among students in the student-activating setting (and in the lecture-taught condition).
Arguments in favour of the activated learning environment emphasised the variety of
teaching methods that was adopted (38.19%), the challenging, active nature of the setting
Table 8 Percentage of responses by instructional treatment to the item ‘‘How do you assess the teaching methods you have experienced during the course on Child Development?’’

Condition | N | M | SD | Very good | Good | Moderate | Weak | Very weak | Total
Lectures | 113 | 3.60 | 0.86 | 12.39 | 46.90 | 30.09 | 9.73 | 0.88 | 100
Student-activating teaching | 466 | 3.32 | 1.16 | 14.81 | 34.98 | 26.82 | 15.02 | 8.23 | 100

The Mann–Whitney test for two unrelated samples revealed statistically significant differences between the two groups (Mann–Whitney U = 23716.5, p = 0.036)
Table 9 Percentage of responses by research condition to the item ‘‘How do you assess the teaching methods you have experienced during the course on Child Development?’’

Condition | N | M | SD | Very good | Good | Moderate | Weak | Very weak | Total
Le | 113 | 3.60 | 0.86 | 12.4 | 46.9 | 30.1 | 9.7 | 0.9 | 100
Mc | 107 | 3.49 | 1.10 | 15.9 | 41.1 | 26.2 | 9.3 | 7.5 | 100
Ca | 104 | 3.13 | 1.10 | 7.7 | 36.5 | 24.0 | 24.0 | 7.7 | 100
Pe | 163 | 3.28 | 1.18 | 16.6 | 28.8 | 29.4 | 16.6 | 8.6 | 100
Po | 104 | 3.39 | 1.22 | 18.3 | 35.6 | 24.0 | 11.5 | 10.6 | 100

Conditions: Le = lecture-based LE + multiple-choice test; Mc = student-activating LE + multiple-choice test; Ca = student-activating LE + case-based examination; Pe = student-activating LE + peer/co-assessment; Po = student-activating LE + portfolio
The Kruskal–Wallis test for research condition revealed statistically significant results (χ² = 11.445, df = 4, p = 0.022)
If the scale scores are used, the Bonferroni comparisons reveal significant differences (α = 0.05) for: Le > Ca
(15.97%) and the joys of team-based, collaborative work with fellow students (9.72%).
Another set of favourable arguments related to the research population, namely, student
teachers, who felt that they learned about teaching methods (7.64%) by experiencing
this course on Child Development. They highlighted the applicability of the
teaching methods for their practice (5.56%) and liked the newness and innovativeness of
the diverse methods (5.56%).
Table 10 Frequency of students’ comments, categorised into pro- and contra-arguments and subcategorised by meaning
Students’ comment | Activating: frequency (%) | Lectures: frequency (%)
Pro-arguments
Variety of teaching methods 55 38.19 9 42.86
Challenging/fun/great 23 15.97 1 4.76
Collaboration/team-work 14 9.72
OK/interesting/good/informative 11 7.64 6 28.57
Learning about teaching (methods) 11 7.64
Applicability for own teaching 8 5.56
New/innovative value of teaching (methods) 8 5.56
Interesting/challenging/real life assignments 7 4.86
Particular teaching method 3 2.08
Learning about child development 2 1.39 3 14.29
No lecturing by the teacher 1 0.69
Other pro-arguments 1 0.69 2 9.52
Total 144 100.00 21 100.00
Contra-arguments
Poor learning gains 24 16.11
Lack of variety in methods 21 14.09 14 63.64
Too much team-work/collaboration 16 10.74
Difficulties with time management 15 10.07
No lecturing/insufficient feedback 14 9.40
Chaotic/always different/new 14 9.40
Unstructured/confusing/unclear 8 5.37
Group difficulties and free-riding 7 4.70
Particular teaching method 6 4.03
Too difficult 4 2.68
Tasks were divided between students 3 2.01
Complex/cohesion? 3 2.01
Inappropriate for course on child development 2 1.34
Not fun 2 1.34
Dislike of collaborative teaching 2 1.34
Large group size/disturbing noise 4 18.18
Too little (learner/learner) interaction 2 9.09
Other contra-arguments 8 5.37 2 9.09
Total 149 100.00 22 100.00
Interestingly, students in the lecture-based setting also liked the variety of teaching
methods (42.86%). Teachers varied between direct teaching and teacher/learner interac-
tion, and occasionally showed video fragments for visualising the theory that had been
taught. The informative nature of the classes (28.57%) and the learning that occurred
concerning the processes of Child Development (14.29%) were also important supportive
arguments in favour of the lectures.
Conversely, dissatisfaction was expressed with the student-activating teaching. For example, several students doubted whether they had actually 'learned' the information on Child Development in their course book. Though they remembered reading and using (parts and pieces of) the information in the course book, they expressed doubt over actually 'knowing' the content on Child Development. Hence, students reported a lack of learning gains and poor learning outcomes (16.11%) due to the teaching methods, which were perceived as unstructured (5.37%), as chaotic because they were always different and new (9.40%), and (therefore) as difficult (2.68%). As a consequence, students suffered time pressure and an intensive workload (10.07%), felt insufficiently informed about the content of Child Development because of the lack of structured, informative lectures or classical feedback from the teacher (9.40%), and were left with the uncomfortable feeling of being unable to see the wood for the trees (2.01%). Despite the appraisal of the variety of teaching methods by supporters of the activating setting, students in the contra-group thought the opposite and felt that the student-activating setting lacked variety and was dull (14.09%) because exclusively collaborative methods were adopted. These students opposed the continually required team work (10.74%), simply disliked this type of teaching (1.34%) and/or reported group difficulties and the problem of free-riding fellow students (4.70%). Moreover, due to the team-based approach, students divided the effort and work among group members (2.01%), which could have jeopardised the learning outcomes of the activated students.
The same phenomenon is reflected in students' comments on the lectures. Whereas several students appreciated the variety of teaching methods (42.86%), the primary argument against the lectures was their lack of variety in teaching methods (63.64%) and the lack of learner/learner interaction (9.09%). Another contra-argument referred to the large group size (approximately 60 students) attending the lectures and the resulting disturbing noise (18.18%).
With regard to the 'expected' assessment methods, students were asked a similar evaluation question. Table 11 clearly shows that the lecture-based students (Le), as well as their peers in the activating learning environment with portfolio assessment (Po), were most satisfied with their expected mode of assessment. In contrast, students in the student-activating learning environments with multiple-choice testing (Mc), case-based examinations (Ca) or peer assessment (Pe) thought negatively about their forthcoming evaluation method. The Kruskal–Wallis test for k independent groups revealed statistically significant results (χ² = 37.34, df = 4, p < 0.0001) and the Bonferroni mean comparisons replicated the differences between the lecture-based setting (Le), the activated portfolio condition (Po) and the other conditions in the study (Mc, Ca, Pe).
Here again, the additional comments associated with this item, and the abovementioned values, revealed interesting arguments in both positive and negative directions. Arguments that support positive ratings on the scale are: "cramming for the exam is unnecessary" (Po), "theories need to be put into practice"/"applications of theory are more important" (Po, Ca), "instructive"/"informative" (Po, Pe, Ca), "compatible with my interests and experiences" (Po), "no 'real' examination" (Pe, Po), "interesting method" (Pe), "you learn how to deal with the evaluations of your work by others than the teacher" (Pe), "open book format" (Ca), "looking up information is fascinating" (Ca), "too much information in order to study/crap" (Ca), "little writing is needed" (Mc/Le), "intellectually challenging questions" (Le), "questions cover the range of the course" (Le), "you know whether you really understand everything" (Le), "stress is on applications" (Le), and "the correct answer is already shown" (Mc/Le). Arguments against the expected assessment method are the following: "high workload"/"much effort needed" (Po), "do I really know everything about the contents in the course?"/"I did not grasp the information on Child Development" (Po), "subjective"/"favouritism"/"appropriate evaluation method?" (Pe), "difficult" (Pe), "not all students attend classes regularly" (Pe), "what is my final score going to be like?" (Pe), "I never actually study the theories and information in the course book" (Ca, Pe, Po), "easy" (Ca), "the assignments in class should count for our final score" (Ca), "answers resemble each other" (Le), "no additional comments and support are allowed" (Mc/Le) and "too much information to study" (Mc/Le).
It should be noted that, at the time when this question was administered, the students in the portfolio (Po) and peer-assessment (Pe) conditions had prior hands-on experience with the formative part of the evaluation method, and the lecture-taught group (Le) had experienced a partial multiple-choice format examination. After the examination, students' evaluative scores for the assessment method could display a different picture.
Discussion and conclusions
Students' arguments about the teaching method(s) that they experienced tended to explain the results displayed by the Course Experience Questionnaire. In particular, the Appropriate Workload, Clear Goals and Standards, and General/Total scales, each accounting for 20% of the variance, differentiated between the lecture-based and the student-activating teaching/learning environments and between the five research conditions. These findings are consistent with the evidence that the activating setting proved to be significantly more labour-intensive than the lecture-based format, hence triggering higher workloads and time pressures.
Table 11 Percentage of responses by research condition for overall appreciation of the expected assessment method at the end of lesson 10
Condition N M SD Percentage of responses
Very good Good Moderate Weak Very weak Total
Le 113 3.48 0.81 6.2 47.8 35.4 8.8 1.8 100
Mc 99 2.92 1.10 7.1 24.2 33.3 24.2 11.1 100
Ca 92 2.95 1.03 2.2 34.8 27.2 27.2 8.7 100
Pe 154 2.69 1.17 5.2 23.4 25.3 27.9 18.2 100
Po 103 3.21 1.14 9.7 39.8 21.4 20.4 8.7 100
Conditions: Le = lecture-based LE + Multiple choice test; Mc = student-activating LE + Multiple choice test; Ca = student-activating LE + Case-based examination; Pe = student-activating LE + peer/co-assessment; Po = student-activating LE + portfolio
The Kruskal–Wallis test for research condition revealed statistically significant results (Chi-square = 37.34; df = 4; p < 0.0001)
If the scale scores are used, the Bonferroni comparisons reveal significant differences (α = 0.05) for: Le > Ca, Mc, Pe and Po > Pe
For instance, Marcel (2003) and Sistek (1986) showed convincingly that active teaching requires more time than conventional teaching, with higher perceived workload as a possible consequence. Moreover, the finding of Gibbs (2006) that being explicit does not automatically result in being clear tends to be supported by the scores of the activating teaching format on the Clear Goals and Standards scale. Despite the detailed descriptions of the aims and goals associated with each task and set of assignments in the booklet, students were obviously insufficiently informed, or at least not clearly enough, about the purposes of the course, as displayed by the CEQ. Clearly, the complex nature of student-activating tasks (Perkins 1991) might add to the lack of clarity of the activated setting. In addition, from students' comments, we also learned that the (exclusive) cooperative nature of the assignments posed problems for a group of students. In accordance with these results, Phipps and colleagues (2001) argue that such methods might be perceived as ineffective because they take more time and require more sets of skills than studying lecture notes, thus explaining the high workloads reported in the CEQ. However, other plausible explanations might be: (1) a student's instructional preferences (Birenbaum 1997); (2) the dependence of an individual's work and scores on others, apart from the teacher, which makes students feel uncomfortable and uneasy (Clifford 1999); and (3) dysfunctional groups and associated group problems.
In general, the lecture-taught students experienced their course in more positive terms than the students in the activated conditions on all scales of the CEQ. Interestingly, the educational setting was credited with characteristics and advantages that it might not explicitly aim for. For example, the lectures were assigned high scores on the Generic Skills and Independence scales, although these goals were not addressed or strived for in the lessons. Inversely, if this positive relationship applies, then the opposite case might also hold, implying that, for example, if students experience the activating teaching methods negatively, then the educational goals explicitly aimed for (such as Generic Skills and Independence) are neither recognised nor acknowledged. In fact, given the direct relationship between students' perceptions and student learning (Biller 1996; Entwistle 1991; Fraser and Fisher 1983; Konings et al. 2005; Trigwell and Prosser 1991), the learning of students who dislike the course on Child Development might be jeopardised. In this respect, future research that distinguishes between supporters and opponents might explain differences in students' experiences, perceptions and learning with respect to activating (and teacher-directed) formats of teaching.
When the assessment methods within the activating treatment were studied, the findings were not always consistent with expectations. For example, the peer assessment (Pe) and portfolio (Po) students were required to invest more time and effort in the student-activating assignments (so higher workloads might be expected), because their tasks had to be handed in and had a determining influence on their final scores. Nevertheless, the Course Experience Questionnaire showed significantly higher scores only for the Portfolio group on the General, Appropriate Assessment, Independence and Total scales, as well as slightly higher scores, in comparison with the other conditions (Mc, Ca, Pe), on the Appropriate Workload and Clear Goals scales. A plausible explanation is that, in contrast to the Case-based examination (Ca) and active Multiple-choice test (Mc) groups, for whom the interim efforts made no difference to course performance, the Po (and Pe) students' work in class counted towards their final course marks. The assignments in the end-of-course-assessed groups (Ca and Mc) were perceived as additional work on top of the study requirements for the examination. Because their work and efforts during classes had no consequences for course performance, students in the Case-based (Ca) and active Multiple-choice (Mc) groups failed to (or at least were less likely to) acknowledge the alleged
advantages that students in the Portfolio (Po) assessment group saw. This finding is similar to Anaya's (1996) finding that learning was enhanced by student involvement in the learning activities and environments that were most directly related to the learning outcomes. Likewise, Concannon et al. (2005) found that the lecturer's reward system influenced students' experiences with the teaching. In both conditions, the compounding effect of workload (Chambers 1992; Kember 2004) was 'eased' by the performance consequences of the assessment methods. The integration of the assessment method into the teaching/learning environment, serving both formative and summative functions, again deserves warm appreciation (Segers et al. 2003). However, this principle does not (fully) apply to the Peer Assessment (Pe) condition. In fact, the Pe students indicated the low(est) scores on the General scale of the CEQ, and this condition was least appreciated by its students at the end of the lessons. Here, the poor use of the peer assessment, in which the formative function was not met by the teachers in this condition, tends to provide a plausible explanation. Opportunities for adjustments in learning behaviour and group processes were missed, and students lacked information about the performance consequences of the peer-assessment procedure. Moreover, students' comments were consistent with the results of Clifford (1999), who found that students resent staff not taking responsibility and feel unsure about their own and their peers' abilities to grade work. Similarly, students in the present study also voiced reservations about favouritism towards friends and the misplaced 'opportunism' of some student colleagues in the teams. Consequently, the use and quality of the assessment method highlight an important limitation of the present study: conclusions might only apply to the instrument or tool that was used to make the assessment method operational. Caution in generalising the findings to other assessment methods is recommended. Replication of this study and verification of the results are needed.
Although the abovementioned explanations provide ready arguments for the differences between instructional and/or evaluation treatments, the results clearly demonstrate that the same educational setting was able to trigger (significantly) diverse student perceptions. In particular, students' perceptions of the activating teaching methods displayed widespread responses and contradictory opinions, possibly with different learning outcomes as a result. For example, in the student-activating setting, more extreme positive arguments were present and there were more negative arguments, compared with the lecture-based setting. Interestingly, the same arguments were used in both directions to explain positive and negative opinions (such as collaboration being exclusively used; always new/innovative; no lecturing) and some arguments contradict each other (such as good/poor variety of teaching methods; learning about Child Development/lack of learning outcomes; and challenging and fun/boring and not fun). Students' comments clearly demonstrate that the 'activated' students were divided into supportive and opposing groups. In accordance with Kember et al. (2004), it becomes clear that the presumption that students have consistent views of what constitutes good teaching is wrong. Moreover, students do not necessarily have a common view of good teaching. Conceptions of learning are important influences towards differentiation (Entwistle and Tait 1990; Kember and Wong 2000; Van Rossum and Taylor 1987). These authors distinguish between two extremes of learning conceptions, from a reproductive pole to a self-determining conception of learning (Kember et al. 2003). Teacher evaluations are concordant: students experiencing teaching incompatible with their beliefs could downgrade their rating of the instructor because they perceive the teaching as poor, and vice versa in the compatible case (Kember et al. 2004). Rather than accepting this status quo, a more fruitful long-term strategy would be to wean students gradually away from their teacher-centred beliefs by introducing measures
of more learner-centred teaching approaches. This exposure, however, needs to be handled carefully (Kember 2001): "Throwing students in at the deep end is more likely to result in sinking than swimming. Learning to swim is more likely to result if students are allowed to start at the shallow end and swimming lessons are provided" (Kember et al. 2003, p. 250).
Obviously, future research is needed to confirm whether the student-activated group or the non-supportive lecture-based group is more reproductive or self-determining in its conceptions of learning, and whether the inverse relationship applies for the favouring students.
Matching students' instructional preferences and teaching in order to design an educational setting conducive to learning might be considered. Although research has shown that students prefer congruence between their learning habits and the characteristics of a learning environment (Vermetten et al. 2002), the relevant notions might be 'congruence' and 'friction' (Trigwell et al. 1999; Vermunt and Verloop 1999). Congruence occurs when students' learning strategies and teachers' teaching strategies are compatible; friction occurs when this is not the case. Two kinds of friction are discerned: constructive and destructive. Constructive friction can stimulate students to employ learning and thinking strategies that they have not used before, and hence give rise to an increase in the use of those strategies (e.g. Trigwell et al. 1999). An example of this is the study of Kelly and Tangney (2006), who explored the impact on learning performance when study material was matched and mismatched with learning preferences, and who found that learning gains increased when students were provided with resources that they did not normally prefer. Destructive friction, on the other hand, can occur when the distance between the level of self-regulated learning that the teacher expects from students and the self-regulatory skills that these students possess is too great (Vermunt and Verloop 1999). This would be an unsuccessful adaptation. Another often-observed routine for explaining the more negative results of the promising constructivist theory and its applications is to attribute these results to students' difficulties in adapting during the transition from traditional lectures to activating, independent learning/teaching settings (Novak et al. 2006; Salamonson and Lantz 2005; Vermetten et al. 2002). Vermunt and Verloop (1999) argue that students often lack the skills to cope with activating assignments (e.g. self-regulation skills), with negative perceptions and learning outcomes as a consequence. Again, a gradual transition from traditional teacher-directed settings towards a self-regulated, activating setting ought to offer a satisfactory solution. Students need time to adapt and steps to ease the transition process (Kember 2001). Liow et al. (1993) also advocate that the best solution would be to introduce active learning methods gradually, in line with the way in which educational objectives for courses change from year to year. Future research might serve the purpose of providing empirical evidence for these explanations. In fact, quasi-experimental designs that include a gradually implemented activating level and/or a mixture of teacher-directed and student-activating methods might add to our current understanding. A study that allows time and support for adaptation would be ideal; experimental designs, however, make it hard to provide support for adaptation, as the designs themselves are not meant to be adapted (Kember 2003).
Additionally, future research might also overcome the limitations from which the present study suffered. For instance, the results remain inconclusive about the characteristics of students in the two groups that might have triggered the differences in students' perceptions. For example, student characteristics such as educational history, conceptions of learning, intelligence, academic achievement and communicative skills might determine whether students like or dislike particular teaching methods. The same arguments apply to the characteristics of the teaching/learning environment and the causality of the processes that occurred. Students' arguments might provide some indications or suggest causal relationships, but the results of this study remain inconclusive. Future analyses and research are recommended to pinpoint the characteristics of students and educational settings and the causal relationships between the processes of teaching, students' perceptions and student learning.
Interestingly, in the educational literature and in policy, the independent, self-regulating, activated learner tends always to be put forward as the aim of current education. The strategy of replacing traditional classroom settings with student-activating (sometimes computer-mediated) methods is often strongly advocated by educationalists. However, perhaps complementary views, taking students' points of view into account, would be more appropriate. Empirical studies explicitly support the complementary use of both techniques. For instance, Michel (2001) assessed student and staff perceptions of a Web-based tutorial on library research. In spite of positive views of the guide, students and staff members were not strongly in favour of using the tutorial to replace traditional library instruction. Likewise, Chisholm et al. (1996) found that all students perceived the computer-assisted instructional (CAI) program to be a valuable learning experience and felt that it enhanced problem-solving skills. Notwithstanding this feeling of enjoyment, students disagreed with the statement that the CAI program should be used in place of traditional lectures and indicated that it should be used as a supplement to lectures. The results of the current study also better support the complementary view of using both teaching formats than the replacement strategy. Students can differ in many ways: in their conceptions of learning, former experiences, social skills and assertiveness, interests, needs for support, instructional preferences and academic capabilities. Although some differences might be gradually or partially overcome, the diversity of students tends to require a diversity of teaching methods as well. Actually, given the fundamental importance of positive perceptions of teaching for learning (Biller 1996; Entwistle 1991; Fraser and Fisher 1983; Konings et al. 2005; Trigwell and Prosser 1991), a varied educational environment is likely to satisfy a larger number of students, and to be more conducive to learning, than any one-sided approach. For instance, neither students who require support and structure, nor students who are challenged by complex assignments, are disadvantaged in a system that integrates the best of both worlds and uses a variety of teaching methods (individual, in collaboration with fellow students, and whole-class activities), acknowledging the advantages and disadvantages of both traditional and innovative, activating teaching. As a matter of fact, students' self-reported learning on the Generic Skills and Independence scales was even higher in the lecture-taught setting. Obviously, a student's self-reported learning might differ from his/her actual learning outcomes; however, it is the student's perception that influences student learning (Entwistle 1991) and, when students' course experiences are positive or negative, student learning is enhanced or jeopardised accordingly.
In conclusion, the results support the need for students' perceptions to be positive, but they also suggest that, when students' perceptions are positive, the educational setting can be credited with advantages that it does not (explicitly) aim for. Moreover, students who experience the same educational setting might think quite differently about this environment. Although individual differences in beliefs and skills might generate divergent course experiences, students' arguments can serve as an interesting learning tool with which educators can optimise current educational practices. A complementary view of teaching tends to combine the best of both worlds and seems to serve the majority and diversity of students best.
References
Anaya, G. (1996). College experiences and student learning: The influence of active learning, college environments and cocurricular activities. Journal of College Student Development, 37(6), 611–622.
Ballard, S., Stapleton, J., & Carroll, E. (2004). Students' perceptions of course web sites used in face-to-face instruction. Journal of Interactive Learning Research, 15(3), 197–211.
Ben-Ari, R., & Eliassy, L. (2003). The differential effects of the learning environment on student achievement motivation: A comparison between frontal and complex instruction strategies. Social Behavior and Personality, 31(2), 143–165.
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364.
Biller, J. (1996, October). Reduction of mathematics anxiety. Paper presented at the Annual National Conference on Liberal Arts and Education of Artists, New York.
Birenbaum, M. (1996). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. J. R. C. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge: Evaluation in education and human services (pp. 3–29). Boston, MA: Kluwer.
Birenbaum, M. (1997). Assessment preferences and their relationship to learning strategies and orientations. Higher Education, 33(1), 71–84.
Case, J., & Marshall, D. (2004). Between deep and surface: Procedural approaches to learning in engineering education contexts. Studies in Higher Education, 29(6), 605–615.
Chambers, E. (1992). Work-load and the quality of student learning. Studies in Higher Education, 17(2), 141–154.
Chisholm, M. A., Dehoney, J., & Poirier, S. (1996). Development and evaluation of a computer-assisted instructional program in an advanced pharmacotherapeutics course. American Journal of Pharmaceutical Education, 6(4), 365–369.
Clifford, V. A. (1999). The development of autonomous learners in a university setting. Higher Education Research and Development, 18(1), 115–128.
Concannon, F., Flynn, A., & Campbell, M. (2005). What campus-based students think about the quality and benefits of e-learning. British Journal of Educational Technology, 36(3), 501–512.
Davies, A., & LeMahieu, P. (2003). Reconsidering portfolios and research evidence. In M. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 141–170). Dordrecht, The Netherlands: Kluwer.
De Corte, E. (2000). Marrying theory building and the improvement of school practice: A permanent challenge for instructional psychology. Learning and Instruction, 10(3), 249–266.
Dochy, F., Gijbels, D., & Segers, M. (2006). Learning and the emerging new assessment culture. In L. Verschaffel, F. Dochy, M. Boeckaerts, & S. Vosniadou (Eds.), Instructional psychology: Past, present and future trends: Advances in Learning and Instruction Series of EARLI (pp. 191–208). Amsterdam: Elsevier.
Elen, J., & Lowyck, J. (2000). Instructional metacognitive knowledge: A qualitative study on conceptions of freshmen about instruction. Journal of Curriculum Studies, 32(3), 421–444.
Entwistle, N. J. (1991). Approaches to learning and perceptions of the learning environment: Introduction to the special issue. Higher Education, 22, 201–204.
Entwistle, N. J., & Ramsden, P. (1983). Understanding student learning. London: Croom Helm.
Entwistle, N., & Tait, H. (1990). Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. Higher Education, 19(2), 169–194.
Fraser, B. J., & Fisher, D. L. (1983). Student achievement as a function of person-environment fit: A regression surface analysis. British Journal of Educational Psychology, 53(1), 89–99.
Gibbs, G. (2006, October). Changing assessment policy and practice in higher education through research. Keynote presentation at the first European Practice Based and Practitioner Research Conference on Learning and Instruction, Leuven, Belgium.
Hayward, L. M., & Cairns, M. A. (2001). Allied health students' perceptions of and experiences with internet-based case study instruction. Journal of Allied Health, 30(4), 232–238.
Jacobson, T. E., & Mark, B. L. (1995). Teaching in the information age: Active learning techniques to empower students. Reference Librarian, 51–52, 105–120.
Janssens, S., Boes, W., & Wante, D. (2001). Portfolio: een instrument voor toetsing en begeleiding [Portfolio: An instrument for evaluation and coaching]. In F. Dochy, L. Heylen, & H. Van de Mosselaer (Eds.), Assessment in onderwijs [Assessment in education] (pp. 203–224). Utrecht, The Netherlands: LEMMA.
Kelly, D., & Tangney, B. (2006). Adapting to intelligence profile in an adaptive educational system. Interacting with Computers, 18(3), 385–409.
Kember, D. (2001). Beliefs about knowledge and the process of teaching and learning as a factor in adjusting to study in higher education. Studies in Higher Education, 26(2), 205–221.
Kember, D. (2003). To control or not to control: The question of whether experimental designs are appropriate for evaluating teaching innovations in higher education. Assessment and Evaluation in Higher Education, 28(1), 89–101.
Kember, D. (2004). Interpreting student workload and the factors which shape students' perceptions of their workload. Studies in Higher Education, 29(2), 165–184.
Kember, D., Jenkins, W., & Ng, K. C. (2003). Adult students' perceptions of good teaching as a function of their conceptions of learning – Part 1. Influencing the development of self-determination. Studies in Continuing Education, 25(2), 240–251.
Kember, D., Jenkins, W., & Ng, K. C. (2004). Adult students' perceptions of good teaching as a function of their conceptions of learning – Part 2. Implications for the evaluation of teaching. Studies in Continuing Education, 26(1), 81–97.
Kember, D., & Wong, A. (2000). Implications for evaluation from a study of students' perceptions of good and poor teaching. Higher Education, 40(1), 69–97.
Konings, K. D., Brand-Gruwel, S., & van Merrienboer, J. J. G. (2005). Towards more powerful learning environments through combining the perspectives of designers, teachers, and students. British Journal of Educational Psychology, 75(4), 645–660.
Liow, S. R., Betts, M., & Kok Leong Lit, J. (1993). Course design in higher education: A study of teaching methods and educational objectives. Studies in Higher Education, 18(1), 65–79.
Maguire, S., Evans, S. E., & Dyas, L. (2001). Approaches to learning: A study of first-year geography undergraduates. Journal of Geography in Higher Education, 25(1), 95–107.
Mansfield, B. (1989). Teaching social studies: Learning from past experiences. History and Social Science Teacher, 24(2), 87–88.
Marcel, K. W. (2003). Online advanced placement courses: Experiences of rural and low-income high school students: WCALO special studies. Boulder, CO: Western Interstate Commission for Higher Education. (ERIC Document Reproduction Service No. ED478377).
McNaughton, K., & Krentz, C. (2000). Reflections on constructivist practices in early childhood teacher education. Canadian Journal of Research in Early Childhood Education, 8(2), 7–20.
Meyers, C., & Jones, T. B. (1993). Promoting active learning: Strategies for the college classroom. San Francisco: Jossey-Bass.
Michel, S. (2001). What do they really think? Assessing student and faculty perspectives of a Web-based tutorial to library research. College and Research Libraries, 62(4), 317–332.
Novak, S., Shah, S., Wilson, J. P., Lawson, K. A., & Salzman, R. D. (2006). Pharmacy students' learning styles before and after a problem-based learning experience. American Journal of Pharmaceutical Education, 70(4), Article No. 74. Retrieved March 20, 2007, from http://www.ajpe.org/view.asp?art=aj700474&pdf=yes.
O'Leary, S., Diepenhorst, L., Churley-Strom, R., & Magrane, D. (2005). Educational games in an obstetrics and gynecology core curriculum. American Journal of Obstetrics and Gynecology, 193(5), 1848–1851.
Oxford, R. L. (1997). Constructivism: Shape-shifting, substance and teacher education applications. Peabody Journal of Education, 72(1), 35–66.
Perkins, D. N. (1991). What constructivism demands from learners. Educational Technology, 31(9), 19–21.
Perkins, D. V., & Saris, R. N. (2001). A "jigsaw classroom" technique for undergraduate statistics courses. Teaching of Psychology, 28(2), 111–113.
Phipps, M., Phipps, C., Kask, S., & Higgins, S. (2001). University students' perceptions of cooperative learning: Implications for administrators and instructors. Journal of Experiential Education, 24(1), 14–21.
Ramsden, P. (1991). A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education, 16(2), 129–150.
Richardson, D. (1997). Student perceptions and learning outcomes of computer-assisted versus traditional instruction in physiology. Advances in Physiology Education, 18(1), S55–S58.
Salamonson, Y., & Lantz, J. (2005). Factors influencing nursing students' preference for a hybrid format delivery in a pathophysiology course. Nurse Education Today, 25(1), 9–16.
Sambell, K., McDowell, L., & Brown, S. (1997). 'But is it fair?': An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation, 23(4), 349–371.
Schmidt, K. (2002). Classroom action research: A case study assessing students' perceptions and learning outcomes of classroom teaching versus on-line teaching. Journal of Industrial Teacher Education, 40(1), 45–59.
Segers, M. (2003). Evaluating the OverAll Test: Looking for multiple validity measures. In M. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 119–140). Dordrecht, The Netherlands: Kluwer.
Segers, M., Dochy, F., & Cascallar, E. (Eds.). (2003). Optimising new modes of assessment: In search of qualities and standards. Dordrecht, The Netherlands: Kluwer.
Segers, M., Nijhuis, J., & Gijselaers, W. (2006). Redesigning a learning and assessment environment: The influence on students' perceptions of the assessment demands and their learning strategies. Studies in Educational Evaluation, 32(3), 223–242.
Silberman, M. (1996). Active learning: 101 strategies to teach any subject. Boston, MA: Allyn and Bacon.
Sistek, V. (1986, June). How much do our students learn by attending lectures? Paper presented at the annual conference of the Society for Teaching and Learning in Higher Education, Guelph, ON, Canada.
Sivan, A., Wong Leung, R., Woon, C., & Kember, D. (2000). An implementation of active learning and its effects on the quality of student learning. Innovations in Education and Training International, 37(4), 381–389.
Slavin, R. E. (1996). Research on cooperative learning and achievement: What we know, what we need to know. Contemporary Educational Psychology, 21(1), 43–69.
Sobral, D. T. (1995). The problem-based learning approach as an enhancement factor of personal meaningfulness of learning. Higher Education, 29(1), 93–101.
Struyven, K., Dochy, F., & Janssens, S. (2005). Students' perceptions about evaluation and assessment in higher education: A review. Assessment and Evaluation in Higher Education, 30(4), 325–341.
Struyven, K., Sierens, E., Dochy, F., & Janssens, S. (2003). Groot worden: De ontwikkeling van baby tot adolescent [Growing up: The development from baby to adolescent] (Course book for prospective teachers). Leuven, Belgium: LannooCampus.
Tenenbaum, G., Naidu, S., Jegede, O., & Austin, J. (2001). Constructivist pedagogy in conventional on-campus and distance learning practice: An exploratory investigation. Learning and Instruction, 11(2), 87–111.
Terwel, J. (1999). Constructivism and its implications for curriculum theory and practice. Journal of Curriculum Studies, 31(2), 195–199.
Tillema, H., & Smith, K. (2000). Learning from portfolios: Differential use of feedback in portfolio construction. Studies in Educational Evaluation, 26, 193–210.
Topping, K. (2003). Self and peer assessment in school and university: Reliability, validity and utility. In M. Segers, F. Dochy, & E. Cascallar (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 55–88). Dordrecht, The Netherlands: Kluwer.
Trigwell, K., & Prosser, M. (1991). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22(3), 251–266.
Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations between teachers' approaches to teaching and students' approaches to learning. Higher Education, 37(1), 57–70.
Tynjala, P. (1997). Developing education students' conceptions of the learning process in different learning environments. Learning and Instruction, 7(3), 277–292.
Van Rossum, E. J., & Taylor, I. P. (1987, April). The relationship between conceptions of learning and good teaching: A scheme of cognitive development. Paper presented at the annual meeting of the American Educational Research Association, Washington, DC.
Vermetten, Y., Vermunt, J. D., & Lodewijks, H. G. (2002). Powerful learning environments? How university students differ in their response to instructional measures. Learning and Instruction, 12(3), 263–284.
Vermunt, J. D. (1998). The regulation of constructive learning processes. British Journal of Educational Psychology, 68(2), 149–171.
Vermunt, J. D., & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning and Instruction, 9(3), 257–280.
Von Glasersfeld, E. (1988). Constructivism as a scientific method. Scientific Reasoning Research Institute Newsletter, 3(2), 8–9.
Waters, L., & Johnston, C. (2004). Web-delivered, problem-based learning in organisational behaviour: A new form of CAOS. Higher Education Research & Development, 23(4), 413–431.
Welker, J., & Berardino, L. (2005). Blended learning: Understanding the middle ground between traditional classroom and fully online instruction. Journal of Educational Technology Systems, 34(1), 33–55.
White, C. (1996). Merging technology and constructivism in teacher education. Teacher Education and Practice, 12(1), 62–70.
Wierstra, R. F. A., Kanselaar, G., Van der Linden, J. L., Lodewijks, H. G. L. C., & Vermunt, J. D. (2003). The impact of the university context on European students' learning approaches and learning environment preferences. Higher Education, 45(4), 503–523.
Wilson, K., & Fowler, J. (2005). Assessing the impact of learning environments on students' approaches to learning: Comparing conventional and action learning designs. Assessment & Evaluation in Higher Education, 30(1), 87–101.
Wilson, K. L., Lizzio, A., & Ramsden, P. (1997). The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education, 22(1), 33–53.
Woo, M. A., & Kimmick, J. V. (2000). Comparison of Internet versus lecture instructional methods for teaching nursing research. Journal of Professional Nursing, 16(3), 132–139.