
An evaluation report of multimedia environments as cognitive learning tools

Norbert M. Seel*, Katharina Schenk

Freiburg Institute of Educational Science, Albert-Ludwigs University, Freiburg, Germany

Received in revised form 1 December 2001

Abstract

This article deals with the evaluation of a multimedia learning environment which has been developed and evaluated within the broader context of a research project on the learning-dependent progression of mental models in economics. To carry out formative evaluations, we have adapted a particular evaluation approach which both allows and requires the implementation of specific evaluation instruments. The crucial questions of our evaluation studies were the efficacy of a multimedia-based realization of the cognitive apprenticeship (CA) approach, the diagnosis of mental model progression through the CA-based instruction, and the effects of the implemented metacognitive training. For the assessment of the learning-dependent progression of the mental models, we developed and used a special diagnostic instrument for causal diagrams, which are understood as reproductions of students' mental models. In order to make statements about the practicability of a multimedia-based realization of CA, we measured performance on the learning tasks during each learning phase. Additionally, several motivational variables and persistent learning strategies were measured. In this article, we specify the adapted evaluation instruments. Furthermore, we report on the results of five replication studies and discuss the consequences for instructional design in connection with the design of constructive learning environments.

© 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Causal diagrams; Cognitive apprenticeship; Instructional design; Mental models; Metacognitive training

1. Introduction

There is considerable concern that students' thinking skills, motivational dispositions, and domain-specific knowledge might be inadequate for them to lead fulfilling lives in a global, information-rich, technology-oriented world. Informed by recent theory and research on learning and teaching, efforts to reform classroom instruction and create learning environments that promote these ends are underway. Hannafin (1992) argues that the improvement of problem-solving abilities and other key skills requires 'emergent technologies' in order to design effective learning environments that provide opportunities for reflective thinking. However, this argument can be seen from different points of view. One approach focuses on the improvement of students' technological literacy and advocates a new type of understanding of information and communication technology in educational settings. Another approach focuses on the effective instructional design (ID) of multimedia environments as opposed to the technology itself. Obviously, these perspectives are not mutually exclusive, as each may be considered within the context of ID.

ID is a theoretically sound educational technology for the development, implementation, and evaluation of learning environments that are adapted to learners, tasks, resources, and contexts (Tennyson, Schott, Seel, & Dijkstra, 1997). An analysis of the literature indicates that there is broad consensus with regard to ID requirements and how to evaluate various designs in different contexts (Weston, McAlpine, & Bordonaro, 1995). Evaluation has been considered a central and necessary part of instructional planning from the very beginning of ID (Andrews & Goodson, 1980; Cronbach, 1963), wherein a differentiation is made between two kinds of evaluation (Scriven, 1967):

• formative evaluation, aimed at the improvement of instruction by means of feedback of information concerning the effective use and outcomes; and,

0149-7189/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved.
doi:10.1016/S0149-7189(03)00003-X
Evaluation and Program Planning 26 (2003) 215–224
www.elsevier.com/locate/evalprogplan
* Corresponding author.
E-mail addresses: [email protected] (N.M. Seel), [email protected] (K. Schenk).


• summative evaluation, aimed at the measurement of the degree to which intended results are achieved.

A further distinction is often drawn with regard to the instruments of evaluation, insofar as quantitative or qualitative data are to be assessed.

This article is concerned with the evaluation of a particular multimedia learning environment which was developed and evaluated as part of a comprehensive research project focusing on the learning-dependent progression of mental models. Insofar as the construction of mental models presupposes constructive learning, which occurs when learners actively construct meaningful mental representations during instruction (Mayer, Moreno, Boire, & Vagge, 1999), multimedia is considered especially effective for the promotion of such constructive learning. In the case of the given multimedia learning environment, the meaningful mental representation is a coherent mental model of a dynamic model of macroeconomics and financial politics. In our research, learning outcomes were evaluated with multiple measuring instruments. On the one hand, students had to produce causal diagrams of the problem situation; on the other hand, student solutions to learning tasks and transfer problems were tested.

In the following sections we describe the formative evaluation of the learning environment 'Dynamic Systems of Economics' (DSE) as applied in several replication studies. An evaluation model will be described with a focus on the following components: methodology, data analysis, and interpretation. We will conclude with a discussion of the main results of several evaluation studies.

2. Model-based learning and instruction

Our research group has been involved in the development and investigation of instructional intervention programs aimed at the improvement of model-based learning and thinking for several years. The epistemological and psychological foundations of this research rest on Seel's (1991) theory of mental models.

Mental models emerged in the 1980s as a theoretical construct to encompass both situated cognition and qualitative reasoning. Greeno (1989) argues that comprehension of and reasoning in specific situations necessarily involves the use of mental models of different qualities. Mental models are a central construct of symbolic models of human cognition that presuppose the use and manipulation of symbols. Following Wartofsky (1979), cognition takes place while using mental representations in which individuals organize symbols of experience or thought in such a way that they effect a systematic representation of this experience or thought as a means of understanding or explaining it to others.

Mental models play a central and unifying role in representing objects, states of affairs, sequences of events, and the social and psychological actions of daily life. They enable individuals to make inferences and predictions, to understand phenomena, to decide what action to take and to control its execution, and, above all, to experience events by proxy (Johnson-Laird, 1983, p. 397).

Craik (1943) introduced the idea of internal models to psychology with the notion of a working model. Following Craik, most cognitive theorists agree that mental models serve primarily to create situation-specific plausibility. Due to an idealized reduction to the relevant characteristics of its original, a model is a concrete, comprehensible, and feasible representation of non-obvious or abstract objects. The representation of the objects' attributes and components comes second to the representation of structural relationships. Mental models are not a specific representational format such as images and propositions, but rather higher-order cognitive constructions (artifacts) which refer primarily to the content of mental representations (Seel, 1991).

The functions of mental models, including their structural features, are defined on the basis of the objectives of the model-constructing person. In physics and other sciences, the term 'model' is always used in a functional sense. 'Appearance models' may serve to simplify a complex phenomenon or represent structural relationships visually. 'Derivative (thought) models' (e.g. Rutherford's model of the atom), on the other hand, serve primarily to aid analogical reasoning in exploring phenomena (e.g. quantum mechanics). Mental simulations occur when cognitive operations simulate (in the sense of thought experiments) specific transformations of objects that may occur in real-life situations. In sum, mental models 'run in the mind's eye' to produce qualitative inferences with respect to the situation to be cognitively mastered.

Although mental models may differ markedly in their content, there is no evidence to suggest that they differ in representational format or in the processes that construct and manipulate them. What is at issue is how such models develop as an individual progresses from novice to expert, and whether there is any pedagogical advantage in providing people with models of tasks they are trying to learn (Johnson-Laird, 1989, p. 485).

In accordance with Snow (1990) we have identified the learning-dependent progression of mental models as a specific kind of transition mediating between student preconceptions, which describe the initial states of the learning process, and causal explanations, which describe the desired end states of the learning process (Seel, Al-Diban, & Blumschein, 2000). From the perspective of instructional psychology, the guiding principle for



influencing the construction of mental models has been expressed by Mayer (1989) as follows: "Students given model-instruction may be more likely to build mental models of the systems they are studying and to use these models to generate creative solutions to transfer problems" (p. 47). This presupposes that the learner is sensitive to the model-relevant characteristics of the learning environment, such as the availability of certain information at a given time, the way this information is structured and mediated, and the ease with which it can be found in the environment.

An analysis of the relevant literature indicates that the suggested instructional strategy of providing learners with a designed conceptual model actually constitutes the main trend of instructional research on mental models. According to Carlson (1991), instruction can be designed to involve the learner in an inquiry process in which facts are gathered from data sources, similarities and differences among facts are noted, and concepts are developed. In this process, the instructional program serves as a facilitator of learning for students who are working to develop their own answers to questions. In this case, mental models are more proactive and direct the learning experiences, so that the result of learning is dependent on the initial model, defined as the learner's 'a priori understanding' of the material to be learned. On the other hand, instructional programs can present concepts with clear definitions followed by clear examples. A conceptual model may be presented before the learning tasks in order to direct the learner's comprehension of the learning material. Over the past decades much research applying this strategy has been done to provide students with model-based instruction, but several authors (Royer, Cisero, & Carlo, 1993; Snow, 1990) have objected that this kind of research has typically been done piecemeal, in small-scale, specialized contexts. In order to overcome these shortcomings we need a more comprehensive instructional approach. Cognitive apprenticeship (CA) (Collins, Brown, & Newman, 1989) provides a fundamental basis for initiating and directing model-based learning.

Our research group started in 1994 with the development of a multimedia environment aimed at an externally guided, goal-oriented, and systematic influence upon the learners' progression of mental models. CA (Collins et al., 1989) was the only promising instructional strategy corresponding with the idea of providing the students with model-instruction in the aforementioned sense. There are six instructional methods in CA: modeling, coaching, scaffolding, articulation, reflection, and exploration. The instructional intervention of apprenticeship starts with the presentation of an expert's conceptual model of the tasks to be accomplished; the students are then coached and scaffolded to adapt this model for their own solutions (exploration) to the learning tasks designed. CA is based on results of cognitive psychology and applies these results in a prescriptive way in order to identify 'ideal features' of learning environments. This approach prescribes in detail what the learner has to do, and in which sequence, in order to achieve particular objectives. However, the question as to whether the CA approach may be appropriate for the design of multimedia environments could not be answered at that time. Several studies (Casey, 1996; Chee, 1996; Jarvela, 1995) have investigated this question, but they ran parallel to our own investigations, and final conclusions were not available. Therefore, we focused on the issue of whether the preferred use and application of multimedia technology allows a strict adaptation of instructional regularities to individual regularities of learning.

The research we have done in the past 6 years has centered around two main topics:

1. the investigation of the learning-dependent progression of mental models, more specifically of analogy models of DSE; and,
2. how this progression can be guided or influenced through a particular instructional intervention program designed as a multimedia environment in accordance with principles of CA.

We focus on the second line of research in this paper.

3. The evaluation model

We have adopted the evaluation approach of Ross and Morrison (1997), with these main components: (1) needs assessment, (2) methodology, (3) data analysis and interpretation, and (4) dissemination of results. The evaluation of DSE focused on methodology and on data analysis and interpretation. Accordingly, we have realized:

• a program analysis in order to determine the content and the methods of its mediation within the multimedia program;
• a participant analysis in order to determine the (groups of) learners as well as the scope of the instructional program;
• a specification of the evaluation design;
• the development of instruments of measurement; and,
• the implementation and control of the evaluation design.

Following these methodological steps, the analysis and interpretation of data was to be done in order to modify or revise the instructional program or parts of it. Accordingly, the data analysis centers around the formative evaluation of instruction during the development phase for the purpose of improvement.

3.1. Program analysis

The multimedia program is designed to explain the dynamics of economic systems and to introduce the monetary policy of the European Central Bank into



students' discussion. In order to facilitate student construction of adequate mental models, several conceptual models, especially a circuit model of economic systems, are presented. According to the CA approach, effective learning environments can be characterized by 18 features in four broad dimensions: content, methods, sequencing, and the sociology of teaching. We focused especially on methods, which encompass the features of modeling, coaching, scaffolding, articulation, reflection, and exploration. Altogether, we integrated the relevant aspects of the CA approach into a comprehensive model, as depicted in Fig. 1.

We realized CA in the following way (Seel et al., 2000). In modeling, an expert demonstrates a solution to a problem, and the students acquire an initial model of this process by observing the expert's approach. In coaching, the students are supervised and given guidance as they try to find solutions to a given task in an adaptive manner. In scaffolding, a special problem-solving heuristic is taught. The realization of articulation and reflection proved to be problematic within the multimedia program. Articulation is the process of 'thinking aloud' while working on a task, and reflection is the comparison of the problem-solving procedures applied by the learner and the expert, respectively. We realized both methods in the form of a 'teach-back' procedure (Jih & Reeves, 1992; Sasse, 1991) in a social learning situation. This procedure is based on the 'constructive interaction' between two communication partners who share similar domain-specific knowledge. In exploration, the final part of the apprenticeship instruction, the learners have to solve transfer tasks: one of them requires a 'near transfer' (i.e. the task remains in the same subject matter domain of economics), the other requires a 'far transfer' from economics to ecology. On the whole, the multimedia program realized the methods of CA in the sequence illustrated in Fig. 2.

Additionally, two different instructional strategies aimed at the improvement of analogical problem solving were realized in modeling as well as in scaffolding:

• subsumption of analogous learning tasks under a schema of a problem-solving structure, followed by its instantiation through a detailed, worked-out example; and,
• induction of a more general problem-solving schema from analogous learning tasks by the comparison of different examples in order to extract structural similarities.

3.2. Subjects

Taking curriculum constraints into account, the multimedia program DSE is directed at 12th grade students of German secondary schools (on average 18 years old). Two pilot studies with university students indicated that the program could also be used effectively with college students in non-economic disciplines. In sum, more than 400 students have worked with DSE in various instructional settings.

3.3. The evaluation design

In order to ensure that goals were being achieved and to improve the multimedia program, the evaluation design focused on formative evaluation as an iterative process of trying out and revising instruction during development. However, formative evaluation can be conceived of in a variety of ways (Weston et al., 1995). We implemented formative evaluation as an iterative process in a series of replication studies aimed at the gradual enhancement of confidence. This process included empirically gathered performance data and their interpretation.

Altogether, we realized five replication studies aimed at (a) the formative evaluation of the multimedia program, and (b) the diagnosis of mental models. The strategy of formative evaluation applied related to the methods of CA and aimed at the gradual improvement of the instruction; it can be sketched as shown in Table 1.

Fig. 1. A view of the CA model (Seel et al., 2000).

This evaluation plan is based on the assumption that the instruction can be considered effective with regard to the achievement of goals if theoretically predicted changes in criterion variables can be observed. However, the efficacy of the instruction may also depend on effects other than the postulated ones, for example, on characteristics of the instruction that are independent of the multimedia program effects. This means that measurable effects of an instructional intervention are not sufficient to justify drawing conclusions about the model of efficacy applied.

3.4. Instruments of evaluation

A central goal of instruction is to improve student performance, defined in terms of domain-specific knowledge, skills, strategies, attitudes, and behavioral dispositions. We assessed student domain-specific knowledge using a test developed by Beck (1993). This test is constructed in accordance with Bloom's taxonomy of cognitive objectives and permits statements about the quality of knowledge in the field of economics. We assessed the quantity and quality of learning and transfer tasks accomplished within the learning environment. In coaching, we measured the frequency and self-correction of errors as well as the time required to accomplish learning tasks, whereas in scaffolding and exploration the frequency and type of correct solutions were assessed. In order to assess the learning-dependent progression of mental models, our research group developed a special test procedure of causal diagrams, which can be considered a combination of cognitive modeling and a particular structure-spreading technique similar to concept mapping. Additionally, in two studies we applied receptive interviews to assess the quality of mental models, but this procedure proved to be ineffective (Al-Diban, 2001). Finally, we also used protocol analyses of the 'teach-back' phase of instruction.
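The article does not specify how the causal diagrams were scored against the expert model. Purely as an illustrative sketch (the edge representation, the example relations, and the Jaccard overlap measure are our assumptions, not the authors' procedure), a causal diagram can be treated as a set of directed cause–effect edges and compared with an expert's reference diagram:

```python
# Hypothetical sketch: a causal diagram as a set of directed
# (cause, effect) edges, compared with an expert reference model.
# The overlap measure (Jaccard similarity) is illustrative only.

def jaccard_similarity(student_edges, expert_edges):
    """Share of edges the two diagrams have in common."""
    student, expert = set(student_edges), set(expert_edges)
    union = student | expert
    return len(student & expert) / len(union) if union else 1.0

# Invented example relations from the macroeconomics domain:
expert = {("money supply", "interest rate"),
          ("interest rate", "investment"),
          ("investment", "national income")}
student = {("money supply", "interest rate"),
           ("interest rate", "national income")}

print(round(jaccard_similarity(student, expert), 2))  # → 0.25
```

A measure of this kind could be computed after each learning phase, making the progression of a student's diagram toward the expert model directly comparable across phases.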

Additionally, several motivational variables and persistent learning strategies were measured, including: learner achievement motivation, by means of a questionnaire developed and validated by Rollett and Bartram (1977); learner interests and attitudes, measured with a self-constructed questionnaire; student perceptions of the learning situation, also measured with a self-constructed questionnaire to assess the 'climate of multimedia-learning' (Seel, 1980); and, student persistent learning strategies, with a questionnaire by Wild and Schiefele (1994).

Table 1
Overview of the evaluation studies

Study        Methods of CA implemented
Pilot study  Modeling, Coaching
Study 1      Modeling, Coaching, Scaffolding, Exploration
Study 2      Modeling, Coaching, Scaffolding, computer-based and individual articulation and reflection, Exploration
Study 3      Modeling, Coaching, Scaffolding, articulation and reflection as teach-back procedure, Exploration
Study 4      Modeling, Coaching, Scaffolding, articulation and reflection as teach-back procedure, Exploration
Study 5      Modeling, Coaching, Scaffolding, articulation and reflection as training of metacognitive strategies, Exploration

Fig. 2. The methods of CA as implemented in the multimedia program.

In addition to these external criteria, we realized an objective task analysis and a subjective task analysis to determine whether the instruction was appropriate with regard to curriculum and content criteria. Task analysis procedures (Jonassen, Tessmer, & Hannum, 1999) presuppose different criteria to achieve a sufficient differentiation of the requirements and a classification of learning tasks: (a) sub-task differences must be unambiguously observable; (b) the task analysis should be derived independently from a specific topic and should have general validity; and (c) the analysis should be independent of the knowledge and dispositions of the application. In accordance with these requirements, we applied a procedure of problem and task analysis which Hacker, Sachse, and Schroda (1998) developed especially for complex and authentic tasks. With this procedure it was possible to separate several dimensions, such as the complexity of learning tasks (e.g. number of partial functions of a system and of their relationships), the consistency of objectives (number of goals, number of contradictory goals, degree of concurrence of goals), the transparency of possible solutions, degrees of freedom (variants of solutions), dynamics (change of constraints), and the prior knowledge necessary. By following this procedure, we were able to characterize each learning task by means of a multi-dimensional and objective profile of demands.
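Such a profile of demands can be sketched as a small data structure (the dimension names paraphrase the procedure above; the rating scale and the example values are invented for illustration, not taken from the study):

```python
# Illustrative sketch of a multi-dimensional demand profile for a
# learning task. Dimension names follow Hacker, Sachse, and Schroda
# (1998) as summarized above; the 1-5 demand ratings are invented
# (higher = more demanding on that dimension).

DIMENSIONS = ("inconsistency_of_goals", "complexity", "transparency",
              "degrees_of_freedom", "dynamics", "prior_knowledge")

def make_profile(**ratings):
    """Return a demand profile; every dimension must be rated."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return {d: ratings[d] for d in DIMENSIONS}

def mean_demand(profile):
    """Overall demand as the mean rating across all dimensions."""
    return sum(profile.values()) / len(profile)

coaching_task = make_profile(inconsistency_of_goals=1, complexity=2,
                             transparency=1, degrees_of_freedom=2,
                             dynamics=1, prior_knowledge=2)
exploration_task = make_profile(inconsistency_of_goals=2, complexity=5,
                                transparency=4, degrees_of_freedom=4,
                                dynamics=4, prior_knowledge=4)

print(mean_demand(coaching_task), round(mean_demand(exploration_task), 2))
```

A profile of this kind makes the apprenticeship feature of increasing complexity checkable: later tasks should show a higher overall demand than earlier ones.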

3.5. Implementation of the evaluation plan

The evaluation plan contains one pilot study with university students and five replication studies with 12th grade German students (Table 1). All studies were carried out outside the school, at the Laboratory for Multimedia Research of the Technical University of Dresden (1994–1998) and at the University of Freiburg (1998–2001), enabling a strict control of external factors. The recruitment of subjects took place on the basis of announcements in schools, and the volunteers received a nominal fee. Subjects were assigned to the different treatments randomly.
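The random assignment can be sketched as follows (the group labels, group count, and seeding are illustrative assumptions, not details reported by the study):

```python
# Illustrative sketch of random assignment of volunteers to
# treatment groups; labels and seed are invented for the example.
import random

def assign_randomly(subjects, groups, seed=0):
    """Shuffle subjects and deal them round-robin into groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

volunteers = [f"S{i:02d}" for i in range(1, 13)]
assignment = assign_randomly(volunteers, ["treatment_A", "treatment_B"])
print(sorted(len(v) for v in assignment.values()))  # → [6, 6]
```

A fixed seed makes the assignment reproducible, and dealing round-robin after the shuffle keeps the group sizes balanced.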

4. Results

This section starts with a report on the findings of the task analyses. In the subsequent section we describe the main results of the different evaluation studies on the effectiveness of the multimedia program in accordance with CA principles and on the development of mental models.

4.1. Results of task analyses

We realized a task analysis as part of the second replication study in order to get information concerning the degree of difficulty and the content-related quality of the learning tasks in coaching, scaffolding, and exploration. We asked three experts in the field of economics to analyze the learning tasks, applying the aforementioned procedure of Hacker et al. (1998). The experts' judgments concerning the dimensions of the task analysis obtained an average agreement of at least 65% (Table 2).

The results indicate that the experts estimated the dimensions 'complexity', 'dynamics', and 'transparency' to be most important for the task difficulty, whereas the dimensions 'inconsistent goals' and 'degrees of freedom' were rated as less important. The task analysis also demonstrated that the instruction satisfied the apprenticeship features of increasing complexity and variety as well as of increasing abstraction of the learning tasks in coaching, scaffolding, and exploration. As a result of the experts' task analysis, three learning tasks of scaffolding and two of the five exploration tasks were dropped from the list due to evident weaknesses in content and overly strong demands on knowledge.

In addition to the experts’ task analysis, we also asked

students to estimate the difficulty of the learning tasks

within the various apprenticeship methods. In the second

study, 84 subjects rated the difficulty of the coaching-tasks

as ‘not difficult’, whereas the learning tasks of scaffolding

and exploration were judged to be ‘more difficult’ and

‘difficult’ to solve. Within each method there were no

significant differences concerning the difficulty of the

learning tasks. However, there were significant differences

Table 2
Average agreement of experts and judgments on the relevance of the dimensions

Dimension                        Agreement of experts on average (%)   Average relevance
Inconsistency of goals           69.8                                  1.7
Level of complexity              73.0                                  4.3
Level of transparency            72.0                                  3.3
Degree of freedom                73.0                                  2.3
Level of dynamics                65.0                                  4.0
Necessity of prior knowledge     78.9                                  2.6

N.M. Seel, K. Schenk / Evaluation and Program Planning 26 (2003) 215–224


between these methods, coaching being the least difficult,

scaffolding more difficult, and exploration the most difficult

of the three. Interestingly, this increase of cognitive effort

involved in the solution of the learning tasks depending on

the method of apprenticeship instruction correlated with the

cognitive performance as measured with the knowledge test

of Beck (1993). There was a correlation of r = 0.469 with

the measures in coaching, but only correlations of r = −0.74

with the scaffolding tasks and of r = −0.379 with

the exploration tasks. This result corresponds with the

objective results of the various evaluation studies, which

indicated that the learners could not perform the learning

tasks in scaffolding and exploration as well as the coached

tasks.
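The coefficients reported above are Pearson correlations between rated task difficulty and knowledge-test performance. A minimal self-contained sketch of that computation (all numbers invented):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: rated difficulty of tasks vs. knowledge-test scores
difficulty = [1, 2, 2, 3, 4, 4, 5]
test_score = [20, 18, 19, 15, 14, 12, 10]
print(round(pearson_r(difficulty, test_score), 3))  # → -0.978
```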

4.2. Results of the evaluation studies

On the whole, the results of the five evaluations with

more than 400 subjects support the conclusion that the CA

approach can be considered a sound framework for the

ID of environments aiming at constructivist learning.

Results of the second evaluation study indicated that the

attempts to install articulation and reflection in the multi-

media program failed insofar as the written statements and

comments required of the learners proved ineffective at

prompting the intended reflective thinking about

problem-solving procedures. These

results correspond with Casey’s (1996) observation that

“interaction with the computer can provide a great deal of

information on why learners make the choices they make,

but computer-based intelligence seems pale in comparison

to the open peer dialogue we observed learners having

during the testing of CI” (p. 83).

As a result of these findings we implemented articulation

and reflection in the third study and later in the form of a

particular ‘teach-back’ procedure in accordance with Sasse

(1991). Moreover, in the fifth replication study we also

applied the method of ‘generative teaching’ by Kourilsky

and Wittrock (1992), but only with moderate effects. Both

of these alternatives for the realization of articulation and

reflection in a cooperative manner should be investigated in

more detail in further studies.

All replication studies agree that the learners

performed best on the learning tasks of

coaching. Successful learners are characterized both by

fewer mistakes in task solutions of coaching and by longer

learning times for these tasks, indicating a well-planned

method in accomplishing the learning tasks. This appren-

ticeship method evidently aims at ‘controlled content-

oriented learning’ and constrains the learners to imitating

the expert model they are provided with in modeling.

In all of the studies we observed a significant decrease of

performance in scaffolding, which is characterized by a fading

of the external guidance provided. The learning outcomes of all

replication studies show that learners had difficulty in

developing and applying their own problem-solving

strategies. The significant decrease of performance from

coaching to scaffolding, which was observable in all of the

replication studies, indicates that the learners obviously

could not progress effectively from content- to process-

oriented learning in the sense of an increasingly self-

regulated accomplishment of analogous tasks.

In designing scaffolding we followed the recommendations of

Collins et al. (1989) and the standard procedures of

cognitive task analysis (Jonassen et al., 1999). Accordingly,

we organized the tasks in a structure that we considered to

be best for the solution of learning tasks by analogy. That is,

we provided the learners with a learning task which they

could solve easily and then we increased the difficulty of

tasks until the learners were no longer able to solve them on

their own. As an alternative, Hmelo and Guzdial (1996)

considered the redesign of learning tasks in such a way that

they support task performance with the help of a

‘supplantation’ (Salomon, 1979) of those cognitive oper-

ations that are involved in the task solutions. Moreover, task

performance can be supported with cognitive tools which

give advice to learners by representing a problem and then

manipulating the representation in the process of finding a

solution. These forms of scaffolding are taken by Hmelo and

Guzdial (1996) as examples of ‘glass-box scaffolding’

which aims at giving help to learners in situations which

they cannot master on their own. Moreover, the results of

our studies concerning the efficacy of scaffolding also

correspond with other investigations aimed at constructivist

learning.

Comparable to the learning results in scaffolding, the

subjects did not perform the two transfer tasks administered

in exploration well. One of the tasks remained within the

same subject matter domain of fiscal policy and thus aimed

at a ‘near transfer’ whereas the second task required the

transfer of freshly acquired knowledge into the different

subject matter area of ecology (i.e. ‘far transfer’). As in

scaffolding, the subjects achieved only average results on

these transfer tasks in the various replication studies. This

result generally corresponds with the literature on learning

transfer (Seel, 2000) insofar as the learners’ strong fixation

on content-oriented learning was consistently to the

detriment of learning effectiveness. Actually, with regard

to the accomplishment of transfer tasks, student perform-

ance in coaching proved to be a significant predictor.

From a didactic point of view, these results are not

satisfactory, especially with regard to the results of

regression analyses, according to which scaffolding did

not influence the requested transfer in exploration.
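The regression analyses behind this statement are not detailed in the article; as an illustration of the basic computation, an ordinary least-squares fit of transfer performance on a single predictor, with invented scores, looks like:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Invented scores: coaching performance as a predictor of transfer performance
coaching = [10, 12, 14, 16]
transfer = [5, 6, 7, 8]
print(ols_fit(coaching, transfer))  # → (0.0, 0.5)
```

A multiple regression, as presumably used in the studies, extends this to several predictors but rests on the same least-squares principle.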

Jonassen, Marra, and Palmer (2003) point out with regard

to the efficacy of so-called constructivist learning environ-

ments that they often do not meet the expectations of

students (and teachers) with regard to their effects on

learning. This in turn may lead to motivational problems in

students (Jonassen & Tessmer, 1996/1997). With regard to

the cognitive effects of the instruction on learning and

transfer, the data of our replication studies correspond with


the argument of Jonassen and Marra. However, with regard

to motivational effects, we found consistently positive

learning emotions as well as explicitly good and persistent

motivation to learn. Actually, the findings of the various

replication studies show that there is a strong tendency

towards a positive motivation to learn and related interests

among these students in the domain of economics. Therefore, it

was not surprising that in the fifth study the learners’

content-oriented interests contributed to the variance of

performance in accomplishing the learning tasks of

scaffolding. Taking into account this finding as well as the

observation that enduring learning strategies evidently did

achieve greater effects than metacognitive training, we can

conclude that instructional interventions of a short duration

are strongly limited in their effects.

In the pilot study and in two replications, we investigated

the efficacy of subsuming worked examples under a general

schema of solving problems vs. inducing such a schema

from worked examples. The data of these studies proved to

be inconsistent insofar as the strategy of subsumption was

superior to the strategy of induction in the pilot study with

regard to both the number of mistakes and the time needed

for accomplishment of the learning tasks in coaching,

whereas in the second study a superiority of the inductive

strategy could be assessed with regard to the performances

in scaffolding. However, the fourth replication indicated

that there are no significant differences between the two

teaching strategies. Only in one case, namely in the

measured ‘procedural knowledge’ in performing the second

transfer task in exploration, was the inductive instruction

revealed to be more effective.

With regard to the measurement of the learning-

dependent progression of mental models, in all of the

various replication studies we could observe significant

changes in the learners’ production of causal

diagrams—understood here as external representations of

mental models. Much clearer were the results of regression

analyses, which indicated that the quality of the accom-

plishment of the learning tasks of increasing complexity

were influenced both by the learning experiences in the

course of instruction and the enduring learning strategies of

the subjects. The empirical findings of the fifth study show

significant learning progress in all experimental groups

depending on the complexity of their causal diagrams. We

interpret the substantial changes of causal diagrams

observed as evidence of the general effectiveness of the

instructional intervention. Our findings also support the

theoretical assumption that subjective causal models—as

measured with the help of causal diagrams—are situation-

dependent constructions. Remaining within the same

content domain, the learners constructed causal

diagrams as cognitive artifacts correlating only minimally

with each other. Instructionally relevant are the results of

our investigations concerning the ‘adoption’ of the concep-

tual models the students were provided with in the

instruction. Our data contradict the assumption that students

adopt externally provided models and apply them to solve

tasks. In our replication studies the learners’ causal

diagrams showed only minor similarities with the concep-

tual models provided during instruction. Although con-

tingency coefficients indicate that the causal diagrams are

not fully independent of the conceptual models, the

correlation was not significant. Basically, we can agree

with Mayer’s (1989) verdict that “students given model-

instruction may be more likely to build mental models of the

systems they are studying and to use these models to

generate creative solutions to transfer problems” (p. 47), but

at the same time it is clear that the students do not adopt the

conceptual model provided through instruction one-to-one.
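The contingency coefficients mentioned here are presumably Pearson's C, derived from a chi-square statistic over a cross-classification of diagram features against features of the provided conceptual model. A sketch with an invented 2 × 2 table:

```python
import math

def contingency_coefficient(table):
    """Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + n))."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return math.sqrt(chi2 / (chi2 + n))

# Invented counts: feature present/absent in a student's causal diagram
# (rows) vs. present/absent in the provided conceptual model (columns)
table = [[12, 8],
         [10, 10]]
print(round(contingency_coefficient(table), 3))  # → 0.1
```

A value near zero, as in this invented example, would indicate the weak association between diagrams and conceptual models that the studies report.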

The results of the fifth evaluation study (Schenk, 2003)

did not indicate that metacognitive training had any effect

on the performance in the scaffolding and exploration part

of the learning program, but it did show significant

differences in different qualities of metacognitive knowl-

edge. The decisive predictors for the performances in

scaffolding were the students’ content-oriented interests.

When metacognitive training was implemented, several

learning strategies had a significant effect on the accom-

plishment of the transfer tasks in exploration. This

corresponds with the results of previous investigations

(Seel & Dinter, 1995). However, the number of metacog-

nitive statements correlated positively with performances,

measured by the complexity of causal diagrams and

declarative knowledge in scaffolding.
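The scoring scheme used to quantify the complexity of causal diagrams is not reproduced in this article; as a purely illustrative sketch, one could score a diagram, treated as a directed graph of concepts and causal links, by counting its distinct nodes and edges:

```python
def diagram_complexity(edges):
    """A naive complexity score: distinct concept nodes plus causal links.
    (Illustrative only; not the article's actual scoring scheme.)"""
    nodes = {concept for edge in edges for concept in edge}
    return len(nodes) + len(edges)

# Invented student diagram from the fiscal-policy domain
diagram = [
    ("government spending", "aggregate demand"),
    ("aggregate demand", "employment"),
    ("employment", "tax revenue"),
    ("tax revenue", "government spending"),
]
print(diagram_complexity(diagram))  # → 8
```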

5. Conclusions

Over the project’s life it was fruitful to realize and to

evaluate the methods of the CA approach with the help of a

step-by-step formative evaluation strategy. Parallel with

this, we investigated the learning-dependent progression of

mental models with the help of a specific diagnosis that

confirmed central assumptions of the theory of mental

models.

In the course of five replication studies, the suitability

of CA for the design of particular multimedia learning

environments could be demonstrated in general, with the

exception of the methods of articulation and reflection, for which the

realization of a ‘teach-back’ procedure proved to be more

effective with regard to the required metacognitive

activities. Independent of two instructional strategies

(subsumption vs. induction) that were applied to design

the methods of modeling and scaffolding, the subjects

achieved excellent performance in the accomplishment of

the learning tasks administered in coaching. This consist-

ently positive result in all of our evaluation studies

indicates that the multimedia instruction was very effective

for the improvement of content-oriented learning. In

general, scaffolding aimed at fading instructional advice

can be considered the weak spot of the multimedia

instruction. Although the subjects would have liked more


control and alternatives during instruction,

they were not able to construct satisfactory problem

solutions or transfer their knowledge to other fields of

content. In this respect our results correspond with observations

and empirical results of other studies, such as Casey

(1996), Chee (1996) and Lajoie and Lesgold (1989),

which suggest that CA principles are suitable for the ID of

learning environments. Reservations include the obser-

vation that it was very difficult to implement articulation

and reflection in multimedia environments, and that

scaffolding and exploration, both of which aim to improve

self-regulated learning, were less effective than expected.

Taking into consideration the experience we gained in

designing the CA environment, we do not suggest the

realization of the methods of articulation and reflection for

metacognitive improvement in a computer-based format

since it has not yet been possible to implement an intelligent

tutoring system in the program that might be able to guide

the learners’ self-verbalization and reflective thinking

adequately. Rather, we suggest the application of coopera-

tive procedures such as the ‘teach-back’ procedure in order

to realize articulation and reflection.

The weak learning results our subjects achieved in

scaffolding in the several replication studies actually

contradict its popularity in the literature (Hmelo &

Guzdial, 1996; Jonassen et al., 2003). This may be due

to the fact that the subjects of our studies were

constrained by the instructional program and did not

receive additional advice through a teacher as advocated

by Palincsar (1986), who considers such a dialogue to be

a solid basis for effective scaffolding. The multimedia

instruction was not capable of adapting the learning tasks

to the individual learner. For example, it cannot adapt

the difficulty of a learning task to the learners’ abilities

in such a way that a learner’s missing knowledge could

be compensated for. Furthermore, the multimedia instruction

did not make appropriate ‘cognitive tools’ available to

support the learners in accomplishing the learning tasks.

As in a study by Niegemann (1995), the available

program resources (e.g. icons, glossary, cribs) were

evidently not sufficient and they were not used

effectively by the subjects of the replication studies.

The lesson learned from the students’ average perform-

ance in scaffolding is that this method should be

primarily oriented towards the idea of ‘glass-box

scaffolding’ in the future, with the aim of expanding

the opportunities of multimedia learning with appropriate

cognitive tools. We believe that the consistently average

achievement in exploration as well as in scaffolding can

be explained by enduring socialization effects of school-

ing. Schooling is targeted primarily at content-oriented

learning along with the acquisition of large amounts of content

knowledge. For the development and practice of

cognitive skills, fewer resources are used, and therefore

process-oriented learning in the sense of the development

of problem-solving skills plays only a negligible

role. This observation corresponds with the criticism of

Krohne (1977) as well as with the major results of

TIMSS (Baumert, Bos, & Watermann, 1999).

The results of our studies demonstrate that causal

diagrams can be considered to be a suitable method of

assessing mental models as knowledge constructions of a

higher order that develop depending on learning experi-

ences. The pedagogical lesson learned from empirical

results with regard to significant effects on the complexity

of causal diagrams is that enduring learning strategies and

previous learning experiences may have greater influence

on the accomplishment of transfer tasks than temporary

instructional strategies, which perhaps need more time to

become effective than we could offer in our studies.

Konrad (1998) argues that successful self-regulated

learning—which was our goal in implementing metacogni-

tive training in the fifth study—presupposes that the learners

possess not only strategic knowledge but also broad world

knowledge. These learners can also fall back on specific

previous experiences in various learning situations. There-

fore, in future we should investigate in more detail whether

it would be effective to teach novice learners the application

of metacognitive strategies in the course of the instruction of

new subject matter.

In examining these evaluation results, we conclude that

learners constructed situation-bound problem solutions

independently of teaching strategies provided during

instruction, and that an effective design of successful

learning environments presupposes the provision of cogni-

tive tools which facilitate and support individual model-

building and revision aimed at problem solving. In this

sense, it is doubtful whether multimedia environments

which guide students on a predetermined ‘learning way’ are

actually suitable to overcome the ‘problems with problem-

based learning’ (Hoffman & Ritchie, 1997).

Acknowledgements

We gratefully acknowledge financial support for this

research from a generous grant provided by the German

Research Foundation (Deutsche Forschungsgemeinschaft)

under Grant No. Se 399/4-1-3.

References

Al-Diban, S. (2001). Pädagogische Diagnose mentaler Modelle. Freiburg:

Albert-Ludwigs-Universität.

Andrews, D. H., & Goodson, L. A. (1980). A comparative analysis of

models of instructional design. Journal of Instructional Development,

3(4), 2–16.

Baumert, J., Bos, W., & Watermann, R. (1999) (2. Aufl.). TIMSS/III.

Schülerleistungen in Mathematik und den Naturwissenschaften am

Ende der Sekundarstufe II im internationalen Vergleich. Zusammen-

fassung deskriptiver Ergebnisse, Berlin: Max-Planck-Institut für

Bildungsforschung.


Beck, K. (1993). Dimensionen der ökonomischen Bildung. Messinstru-

mente und Befunde. Nürnberg: Universität (Abschlussbericht zum

DFG-Projekt Wirtschaftskundlicher Bildungs-Test [WBT]. Normier-

ung und internationaler Vergleich).

Carlson, H. L. (1991). Learning style and program design in interactive

multimedia. Educational Technology Research and Development,

39(3), 41–48.

Casey, C. (1996). Incorporating cognitive apprenticeship in multi-media.

Educational Technology Research and Development, 44(1), 71–84.

Chee, Y. S. (1996). Mind bridges: A distributed, multimedia learning

environment for collaborative knowledge building. International

Journal of Educational Telecommunications, 2(2/3), 137–153.

Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprentice-

ship: Teaching the crafts of reading, writing, and mathematics. In L. B.

Resnick (Ed.), Knowing, learning, and instruction (pp. 453–494).

Hillsdale, NJ: Erlbaum.

Craik, K. J. W. (1943). The nature of explanation. Cambridge: Cambridge

University Press.

Cronbach, L. J. (1963). Course improvement through evaluation. Teacher

College Record, 64, 672–683.

Greeno, J. G. (1989). Situations, mental models, and generative knowledge.

In M. Klahr, & K. Kotovsky (Eds.), Complex Information Processing

(pp. 285–318). Hillsdale, NJ: Erlbaum.

Hacker, W., Sachse, P., & Schroda, F. (1998). Design thinking—Possible

ways to successful solutions in product development. In P. Badke-

Schaub, H. Birkhofer, & E. Frankenberger (Eds.), Designers—The key

to successful product development. London: Springer.

Hannafin, M. J. (1992). Emerging technologies, ISD, and learning

environments: critical perspectives. Educational Technology Research

and Development, 40(1), 49–63.

Hmelo, C. E., & Guzdial, M. (1996). Of black and glass boxes: Scaffolding

for doing and learning. Proceedings of the Second International

Conference on the Learning Sciences, Charlottesville, VA: Association

for the Advancement of Computers in Education, pp. 128–133.

Hoffman, B., & Ritchie, D. (1997). Using multimedia to overcome the

problems with problem based learning. Instructional Science, 25(2),

97–115.

Järvelä, S. (1995). The cognitive apprenticeship model in a technologically

rich learning environment: interpreting the learning interaction.

Learning and Instruction, 5(3), 237–259.

Jih, H. J., & Reeves, T. C. (1992). Mental models: a research focus for

interactive learning systems. Educational Technology Research and

Development, 40(3), 39–53.

Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science

of language, inference, and consciousness. Cambridge: Cambridge

University Press.

Johnson-Laird, P. N. (1989). Mental models. In M. I. Posner (Ed.),

Foundations of cognitive science (pp. 469–499). Cambridge, MA: MIT

Press.

Jonassen, D. H., Marra, R., & Palmer, B. (2003). Epistemological

development: An implicit entailment of constructivist learning

environments. In N. M. Seel, & S. Dijkstra (Eds.), Curriculum, plans

and processes of instructional design: International perspectives.

Mahwah, NJ: Erlbaum, in press.

Jonassen, D. H., & Tessmer, M. (1996/1997). An outcomes-based

taxonomy for instructional systems design, evaluation, and research.

Training Research Journal, 2, 11–46.

Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis

methods for instructional design. Mahwah, NJ: Erlbaum.

Konrad, K. (1998). Kooperatives Lernen bei Studierenden: Förderung

metakognitiver Selbstäußerungen und (meta)kognitive Profile. Unter-

richtswissenschaft, 26(1), 67–87.

Kourilsky, M., & Wittrock, M. C. (1992). Generative teaching:

an enhancement strategy for the learning of economics in

cooperative groups. American Educational Research Journal,

29(4), 861–876.

Krohne, H. W. (1977). Kognitive Strukturiertheit als Bedingung und Ziel

schulischen Lernens. Zeitschrift für Entwicklungspsychologie und

Pädagogische Psychologie, 9(1), 54–75.

Lajoie, S. P., & Lesgold, A. (1989). Apprenticeship training in the

workplace: computer-coached practice environment as a new form of

apprenticeship. Machine-Mediated Learning, 3, 7–28.

Mayer, R. E. (1989). Models for understanding. Review of Educational

Research, 59(1), 43–64.

Mayer, R. E., Moreno, R., Boire, M., & Vagge, S. (1999). Maximizing

constructivist learning from multimedia communication by minimiz-

ing cognitive load. Journal of Educational Psychology, 91(4),

638–643.

Niegemann, H. M. (1995). Zum Einfluß von Modeling in einer

computergestützten Lernumgebung: Quasi-experimentelle Untersuchung

zur Instruktionsdesign-Theorie. Unterrichtswissenschaft, 23(1), 75–87.

Palincsar, A. S. (1986). The role of dialogue in providing scaffolded

instruction. Educational Psychologist, 2(1/2), 73–98.

Rollett, B., & Bartram, M. (1977). Anstrengungsvermeidungstest (AVT).

Braunschweig: Westermann.

Ross, S. M., & Morrison, G. R. (1997). Measurement and evaluation

approaches in instructional design: Historical roots and current

perspectives. In R. D. Tennyson, F. Schott, N. M. Seel, & S. Dijkstra

(Eds.), Instructional design: International perspectives (Vol. 1) (pp.

327–351). Theory, research, and models, Mahwah, NJ: Lawrence

Erlbaum.

Royer, J. M., Cisero, C. A., & Carlo, M. S. (1993). Techniques and

procedures for assessing cognitive skills. Review of Educational

Research, 63(2), 201–243.

Salomon, G. (1979). Interaction of media, cognition and learning. San

Francisco: Jossey Bass.

Sasse, M. (1991). How to t(r)ap users’ mental models. In M. J. Tauber, & D.

Ackermann (Eds.), (Vol. 2) (pp. 59–79). Mental models and human–

computer interaction, Amsterdam: North-Holland.

Schenk, K. (2003). Effekte metakognitiven Trainings auf Lernen und

Problemlösen. Dresden: Saxoprint.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagné, & M. Scriven (Eds.),

Perspectives of curriculum evaluation (Vol. 1) (pp. 39–83). AERA

monograph series on curriculum evaluation, Chicago: Rand

McNally.

Seel, N. M. (1980). Lernerleben im Geschichtsunterricht der Sekundarstufe

I. Eine experimentelle Analyse. München: Minerva.

Seel, N. M. (1991). Weltwissen und mentale Modelle. Göttingen: Hogrefe.

Seel, N. M. (2000). Psychologie des Lernens. München: Reinhardt.

Seel, N. M., Al-Diban, S., & Blumschein, P. (2000). Mental models and

instructional planning. In J. M. Spector, & T. M. Anderson (Eds.),

Integrated and holistic perspectives on learning, instruction and

technology: Understanding complexity (pp. 129–158). Dordrecht,

NL: Kluwer.

Seel, N. M., & Dinter, F. R. (1995). Instruction and mental model

progression: learner-dependent effects of teaching strategies on knowl-

edge acquisition and analogical transfer. Educational Research and

Evaluation, (1), 4–35.

Snow, R. E. (1990). New approaches to cognitive and conative assessment

in education. International Journal of Educational Research, 14(5),

455–473.

Tennyson, R. D., Schott, F., Seel, N. M., & Dijkstra, S. (Eds.), (1997).

Instructional design: International perspectives (Vol. 1). Theory,

research, and models, Mahwah, NJ: Erlbaum.

Wartofsky, M. W. (1979). Models: Representation and the scientific

understanding. Dordrecht: Reidel.

Weston, C., McAlpine, L., & Bordonaro, T. (1995). A model for

understanding formative evaluation in instructional design. Educational

Technology Research and Development, 43(3), 29–48.

Wild, K. P., & Schiefele, U. (1994). Lernstrategien im Studium. Ergebnisse

zur Faktorenstruktur und Reliabilität eines neuen Fragebogens. Zeitschrift

für Differentielle und Diagnostische Psychologie, 15, 185–200.
