

Computers in Human Behavior 38 (2014) 229–239

Contents lists available at ScienceDirect

Computers in Human Behavior

journal homepage: www.elsevier.com/locate/comphumbeh

One click away is too far! How the presentation of cognitive learning aids influences their use in multimedia learning environments

http://dx.doi.org/10.1016/j.chb.2014.06.002
0747-5632/© 2014 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +49 761 682 910. E-mail addresses: [email protected] (T. Ruf), [email protected] (R. Ploetzner).
1 Tel.: +49 761 682 900.

Tatjana Ruf *, Rolf Ploetzner 1

University of Education, Institute of Media in Education, Kunzenweg 21, D-79117 Freiburg, Germany

Keywords: Multimedia learning; Cognitive learning aids; Interface design; Usability; Eye tracking

Abstract

In an experimental study, we investigated how the presentation of cognitive learning aids, as well as the availability of self-monitoring questions, affects the frequency of use of cognitive learning aids in a multimedia learning environment. The learning aids were presented either dynamically, statically, or they were initially collapsed and the students had to activate them by clicking on a button. The comparability of all three versions of the multimedia learning environment was assured by means of repeated usability testing. Self-monitoring questions were either presented to the learners or not. A total of 60 undergraduate students participated in the study. Their activities in the learning environment, together with their eye movements, were recorded. The students took advantage of the learning aids most when they were dynamically presented, less when they were statically presented, and least when they were presented in a collapsed form. The differences in use of the learning aids were statistically significant with large effect sizes. The availability of self-monitoring questions had no significant effect on the use of learning aids.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Multimedia learning environments often place high demands on the learners. There is a growing body of evidence which shows that many learners struggle to appropriately process the information included in different representations such as text and pictures (for an overview see Mayer, 2005). In order to facilitate learning within multimedia learning environments, cognitive learning aids are often made available to the learners. These aim at helping learners to engage in cognitive processes that are relevant to the learning task, such as the selection, the organization, and the integration of information encoded in the different representations (cf. Clarebout & Elen, 2006; Jonassen, 1999; Mayer, 2009).

In past research, it has been repeatedly demonstrated that the use of learning aids can significantly improve learning from multimedia. For instance, Renkl (2002) investigated learning from worked examples in a computerized learning environment about probability calculations. He found that students learn more successfully not only when they self-explain but also when they take advantage of instructional explanations offered by means of an online help system. In an experimental study on learning from hypertext, Gerjets, Scheiter, and Schuh (2005) demonstrated that prompting students to produce self-explanations improves learning. Schworm and Renkl (2006) obtained similar results when they investigated learning from a multimedia environment. Bartholomé, Stahl, Pieschl, and Bromme (2006) examined learning from a multimedia environment about the biology of plants. They found that students who took advantage of instructional hints performed better than students who did not do so. Clarebout and Elen (2009a, 2009b) observed more successful learning in computerized learning environments when the students made use of learning aids such as dictionaries, descriptions of learning goals, sample questions, and help for interpreting text and images that were presented in the environments. In experimental studies conducted by Kombartzky, Ploetzner, Schlag, and Metz (2010) as well as by Ploetzner and Schlag (2013), learning from narrated animations was significantly improved by means of cognitive learning aids that were designed on the basis of Mayer's (2009) Cognitive Theory of Multimedia Learning. Schlag and Ploetzner (2011) obtained comparable results with respect to learning from combinations of texts and static images.

Although many empirical studies demonstrate that learning aids can improve learning from multimedia, several studies also report a severe problem: in many cases it was observed that learners do not use learning aids spontaneously, and often the aids are completely ignored (cf. Aleven, Stahl, Schworm, Fischer, & Wallace, 2003; Clarebout & Elen, 2006; Heiß, Eckhardt, & Schnotz, 2003; Horz, Winter, & Fries, 2009; Narciss, Proske, & Koerndle, 2007;



Roll, Aleven, McLaren, & Koedinger, 2011). This raises the question of how the use of learning aids can be encouraged.

Previous research on the use of learning aids in computerized learning environments primarily investigated how learner-related variables such as prior knowledge, self-regulatory skills, motivation, and epistemological beliefs affect the use of cognitive learning aids. Only a few studies examined the influence that the characteristics of the learning environment have on the use of learning aids, such as the content of learning aids, the usability of learning aids, or the presentation of supplementary metacognitive cues to encourage the use of learning aids (for an overview see Aleven et al., 2003; Clarebout & Elen, 2006). Up until now, these studies have provided little evidence that could guide the design of cognitive learning aids in such a way that learners would actually take advantage of them.

In this paper, we present an experimental study that was conducted to investigate how the design of a multimedia learning environment can increase the frequency of use of cognitive learning aids, irrespective of the learning aids' educational effectiveness. We first describe a process model of help-seeking in multimedia learning environments. We also report empirical findings concerning the influence of various factors on the process of help-seeking. On the basis of the model, we suggest two design features of multimedia learning environments that could foster the use of cognitive learning aids. The first feature concerns the presentation of learning aids; the second feature is concerned with the availability of self-monitoring questions. We then describe a multimedia learning environment in which the suggested design features have been implemented. The usability of the learning environment was repeatedly tested and systematically improved during the development phase. Thereafter, we report an experimental study that was conducted to investigate how the implemented design features affect the use of cognitive learning aids. A discussion of the observed results concludes the paper.

2. Theoretical background

The use of learning aids is a complex process that comprises metacognitive reasoning as well as decision making (for an overview see Aleven et al., 2003). Current conceptualizations of this process rely for the most part on Nelson-Le Gall's (1981) model. It was formulated to describe help-seeking processes in social contexts and consists of five steps:

1. Learners must become aware of their need for help, i.e., they must realize that they cannot accomplish the learning task exclusively by means of their own resources.

2. Learners can decide to seek help from others. This decision is based on an analysis of the costs and benefits associated with seeking help. Costs may be the loss of perceived competence, for instance, or the fear of getting less credit for a successful accomplishment.

3. Learners need to identify potential helpers, i.e., persons who can provide the required support.

4. Learners have to apply strategies for receiving help from others. For example, learners may attempt to receive help by expressing their need in an adequate way.

5. Learners evaluate the help-seeking episode, i.e., they assess the success or failure of their attempt to receive help. This evaluation may influence future help-seeking activities.

Aleven et al. (2003) adopted this model to describe the process of using help in computerized learning environments. According to those authors, however, the steps of Nelson-Le Gall's (1981) model take on a different character within computer-based learning:

1. Learners must become aware of their need for help. In this case, the computerized learning environment itself might support learner self-monitoring.

2. Learners can decide to seek help. In computerized learning environments, the anticipated costs and benefits may be different from those in social contexts. For instance, the use of the learning aids may demand specific efforts.

3. Learners need to identify potential sources of help. For example, computerized learning environments may offer different kinds of learning aids. The learners not only need to be able to locate them within the environment, but must also decide which learning aids best meet their needs.

4. Learners have to apply strategies for receiving help. In computerized learning environments, learners have less flexibility to express their requests for help. Furthermore, learning aids commonly offer a specific functionality to the learners. As a consequence, the learners have to take advantage of this functionality in such a way that their need for help is met.

5. Learners evaluate the help-seeking episode. Again, the learning environment may aid this evaluation process by supporting learner self-monitoring.

To our knowledge, the model of Aleven et al. (2003), and thus of Nelson-Le Gall (1981), is to date the only generic process model concerning the use of learning aids in computerized learning environments. Mercier and Frederiksen (2008) customized this model for problem-based learning environments, with a focus on various cognitive aspects of using help. Aleven, McLaren, Roll, and Koedinger (2006) developed a process model specific to intelligent tutoring systems based on the model of Aleven et al. (2003).

The research presented in this paper is also based on the model of Aleven et al. (2003). However, we suppose that the identification of potential sources of help needs to take place before the decision to make use of them. In computerized learning environments, learners might not even be aware of the availability of learning aids. Even if the learners are aware when they start to learn, they might lose track during learning. Therefore, to make an informed decision, the learners need to be aware of existing learning aids, their functionality, as well as the effort it takes to use them. Thus, we propose to modify the model of Aleven et al. (2003) as follows:

1. Learners must become aware of their need for help.

2. Learners need to identify potential sources of help, i.e., the learning aids available in the learning environment.

3. Learners can decide to seek help.

4. Learners have to apply strategies for receiving help, i.e., to take advantage of the provided functionality in a goal-oriented way.

5. Learners evaluate the help-seeking episode.

Each step in this process can be influenced by a number of factors. For instance, learners may not properly identify their need for help because they lack prior knowledge or only have limited metacognitive ability. The learners may not be aware of available learning aids because the aids are presented in an unapparent way. Furthermore, the learners may reject using learning aids because they are difficult or laborious to use.

Despite two decades of research, it is still unclear under which conditions learners make use of learning aids. One line of research primarily investigated how learner characteristics such as prior knowledge, self-regulatory skills, motivation, and epistemological beliefs influence the use of learning aids. The main result from this line of research revealed that learners with low prior knowledge utilize learning aids more frequently than learners with high prior knowledge (Babin, Tricot, & Mariné, 2009; Bartholomé et al., 2006; Horz et al., 2009; Renkl, 2002; Wood & Wood, 1999). It was also



observed that learners in both groups nevertheless frequently make inadequate decisions about when to use learning aids (Bartholomé et al., 2006; Wood, 2001; Wood & Wood, 1999). No conclusive results were achieved with respect to other learner characteristics. For instance, several studies failed to establish a relationship between the learners' self-regulatory skills and the use of learning aids (Clarebout & Elen, 2009b; Clarebout, Horz, Schnotz, & Elen, 2010; Hartley & Bendixen, 2003). Concerning the role of the learners' motivational orientation, the results are also inconsistent. Huet, Escribe, Dupeyrat, and Sakdavong (2011) observed a negative correlation between a performance orientation of the learners and the use of learning aids. Clarebout and Elen (2009b) established a negative correlation between a learning orientation of the learners and the use of learning aids. Bartholomé et al. (2006), in contrast, found no relationship at all between the learners' motivational orientation and the use of learning aids. There is also little evidence as to how the learners' epistemological beliefs might affect the use of learning aids. In a study by Bartholomé et al. (2006), learners who believed that the domain knowledge was unstructured and uncertain used learning aids more often than those who believed it to be more structured and certain. Hartley and Bendixen (2003) established a positive relationship between the learners' belief that learning occurs in a quick or not-at-all fashion and the use of learning aids.

A second line of research examined how characteristics of the learning environment affect the use of learning aids. Factors that have been investigated are the type of learning aids, the usability of learning aids, and metacognitive prompts for encouraging the use of learning aids. It has been demonstrated that learners often prefer executive help over instrumental help (Aleven & Koedinger, 2000, 2001; Huet et al., 2011; Mäkitalo-Siegl, 2011). While executive help directly provides the learners with answers, instrumental help supports learners in producing the answers on their own (Aleven, McLaren, & Koedinger, 2006; Huet et al., 2011; Puustinen & Rouet, 2009). It is commonly recommended to encourage the use of instrumental help (Aleven & Koedinger, 2000, 2001; Babin et al., 2009; Dutke & Reimer, 2000; Karabenick, 2011; Nelson-Le Gall, 1985). Usability aspects such as the perceived ease of use can also influence whether learners take advantage of learning aids. For instance, easy-to-use learning aids make it more likely that learners intend to take advantage of them (cf. Chen, Hwang, & Wang, 2012; Cho, Cheng, & Lai, 2009; Pituch & Lee, 2006). However, the learners' intentions and actual behaviors might nevertheless differ from each other (Juarez Collazo, Elen, & Clarebout, 2012). Mixed results have been found with regard to the role of metacognitive prompts. Clarebout and Elen (2009a, 2009b) found positive effects when the learners received information about the functionality of learning aids prior to the learning phase, but no effects when the information was integrated into the learning environment and periodically presented to the learners. In a study by Stahl and Bromme (2009), all learners exhibited adequate help-seeking behavior, regardless of whether metacognitive prompts were provided or not. In a study conducted by Roll et al. (2011), the provision of metacognitive feedback on the learners' help-seeking behavior increased the use of instrumental help in comparison to the use of executive help. Overall, however, the learning aids were rarely used. Schwonke et al. (2013) observed that learners with low prior knowledge used learning aids more effectively, but not necessarily more frequently, when they received metacognitive prompts. Schworm and Gruber (2012) report that metacognitive prompts increased the frequency of help requests in a blended learning environment.

The mixed results obtained so far are certainly not sufficient to guide the design of learning aids in multimedia learning environments in such a way that learners actually make use of them. Not only is the number of conducted studies limited, but so is the number of investigated factors that are potentially relevant. This is especially true with respect to characteristics of the learning environment.

The question of how the presentation of learning aids influences their use still remains open. In the model presented above, we assume that the identification of potential learning aids is an important prerequisite for the learners' decision to use them. Very often, learning aids in computerized learning environments are barely visible; the learners need to activate them by clicking on a button (Bartholomé et al., 2006; Juarez Collazo et al., 2012; Martens, Valcke, & Portier, 1997; Narciss et al., 2007; Roll et al., 2011). Research in the field of human–computer interaction, however, indicates that more apparent information can attract the users' attention, keep the users aware of its availability, and encourage the users to actually perceive the information. For instance, in a study conducted by Bailey, Konstan, and Carlis (2000), information presented in a web browser was perceived more frequently when it was provided in a more apparent way. However, when the presentation gets too intrusive – as is often the case with so-called pop-ups – users may become easily annoyed and intentionally dismiss the offered information (Bahr & Ford, 2011; Bailey, Konstan, & Carlis, 2001). Based on these findings, it can be assumed that learning aids that are presented in a more apparent – but not too intrusive – way can attract more of the learners' attention, keep the learners aware of their availability, and encourage the learners to actually use the learning aids. Therefore, the research reported in this paper investigates whether learning aids that are presented in a more apparent way get used more frequently.

A further open question concerns the impact of opportunities for self-monitoring on the use of learning aids. All current theories of self-regulated learning assume that successful self-regulated learning includes the processes of planning, monitoring, evaluating, and regulating one's own learning activities (e.g. Boekaerts, 1997; Winne & Hadwin, 1998; Zimmerman, 2002). However, recent research has shown that many learners do not spontaneously engage in these processes. For instance, many learners do not adequately monitor and evaluate their learning. Instead, they are overconfident that they have already understood the material to be learned and therefore do not appropriately adapt their learning activities (Dunlosky & Rawson, 2012; Dunlosky & Thiede, 2013; Koriat, 2012). As Koriat (2012) points out, "… instructors should find ways to help students accurately monitor their degree of learning and avoid illusions of competence" (p. 296). In contrast, when learners judge that they do not understand, they may seek help (Dunlosky & Thiede, 2013). In their model, Aleven et al. (2003) already pointed out that it can be very difficult for learners to identify information deficits and a need for help. Accordingly, the provision of opportunities for self-monitoring might support these identification processes and lead to a more frequent use of learning aids. However, we are not aware of any existing empirical studies that have investigated this assumption. Therefore, in the research reported in this paper we investigate how the availability of opportunities for self-monitoring influences the frequency of use of learning aids.

In summary, we suggest two design features for multimedia learning environments that aim at fostering the use of learning aids. These features address two of the steps of the model presented by Aleven et al. (2003). The first feature addresses the identification of learning aids by means of an apparent presentation. The second feature addresses the identification of a need for help by means of self-monitoring questions. In the following section,



we describe the multimedia learning environment and the design of the learning aids in detail.

3. The multimedia learning environment

Fig. 1 shows the user interface of the multimedia learning environment with its five main areas. The navigation tree provides access to the different learning units by means of a hierarchical menu. Within each learning unit, the navigation bar offers page-by-page browsing. Each learning unit can consist of several representations such as written texts, schematic pictures, and animations. The media shelf presents the different representations to the learners. Selected representations are displayed in the content area. The support area contains learning aids to encourage specific cognitive learning activities.

Learners can either follow a linear pathway through the learning units by utilizing the navigation bar, or they can take advantage of the navigation tree to navigate freely in the learning environment. In each learning unit, the system displays an initial combination of representations in the content area. The learners may customize this combination by dragging their preferred representations from the media shelf into the content area. A preview of each representation is displayed in the media shelf to assist in the selection process. If a learner re-visits a learning unit, the last chosen combination of representations is displayed again.
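The re-visit behavior described above can be sketched as a small per-unit store. This is a minimal illustration, not the environment's actual implementation; the class name, method names, and representation labels are our own assumptions.

```typescript
// Sketch: remember the last chosen combination of representations per
// learning unit, so that a re-visited unit shows that combination again.
class RepresentationMemory {
  private lastChoice = new Map<string, string[]>();

  // `defaults` is the initial combination displayed on a first visit.
  constructor(private defaults: string[]) {}

  // Returns the representations to display when a unit is (re-)visited:
  // the stored combination if one exists, otherwise the defaults.
  visit(unitId: string): string[] {
    return this.lastChoice.get(unitId) ?? [...this.defaults];
  }

  // Records the combination the learner dragged into the content area.
  choose(unitId: string, representations: string[]): void {
    this.lastChoice.set(unitId, [...representations]);
  }
}
```

Copying the arrays on the way in and out keeps the stored state independent of later mutations by the caller.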

The subject matter to be learned is basic mechanisms of sailing. The learning material was developed on the basis of two textbooks about sailing (Bark, 2009; Overschmidt & Gliewe, 2009). It consists of written texts, pictures, and animations that describe and depict basic mechanisms of sailing. The learning material is made up of eight learning units. The first four learning units distinguish between driving and resistance forces. They also describe how these forces act on a yacht. The remaining four learning units describe and depict four different courses that a yacht can sail in relation to the wind direction: running, broad reach, close hauled,

Fig. 1. The user interface of the multimedia learning environment (translation by the authors).

and tacking. The learning units on running, broad reach, and close hauled explain how the yacht's hull and sail are oriented, how the different forces act on the yacht, and how the yacht moves forward. The learning unit on tacking explains how the yacht needs to sail in order to reach a goal that is located straight into the wind.

The different representations serve different instructional functions. Texts introduce terms (e.g. the names of the different forces and sailing courses) as well as describe relationships and procedures (e.g. when and how a force is split up into component forces). Pictures visualize specific configurations and concepts (e.g. how the yacht's sail is oriented with respect to the hull and the wind, and how forces can be represented as arrows). Animations display dynamic phenomena (e.g. how the feathering changes and how the yacht moves at a certain speed). To fully understand the mechanisms of sailing, the learners need to perceive and mentally integrate all of the offered representations.

Learning from multimedia is a two-edged sword. On the one hand, it has the potential to enhance learning: learners can follow different learning paths and choose to process different representations, for example. On the other hand, multimedia learning environments require that the learners plan, monitor, and evaluate their own learning in order to make appropriate decisions and to engage in productive cognitive processes. That is, successful multimedia learning demands self-regulation and strategic learning. There is a growing body of evidence which shows that many learners are not able to meet these demands (for overviews see Kirschner & van Merriënboer, 2013; Mayer, 2005).

In order to support learning from multimedia, cognitive learning aids are often made available to the learners. They aim to encourage the learners to engage in cognitive processes that are relevant to learning from texts, pictures, and animations. According to the Cognitive Theory of Multimedia Learning (Mayer, 2009), relevant cognitive processes for learning with different representations are (1) the selection of information from each representation, (2) the organization of the selected information into verbal and pictorial mental models, and (3) the integration of




verbal and pictorial information into one coherent mental model. In two experimental studies with printed text-picture combinations, Schlag and Ploetzner (2011) demonstrated that learners who are instructed to systematically select, organize, and integrate the presented information learn significantly more successfully than learners who only write a summary. In accord with these findings, the learning aids in the multimedia learning environment are stated as questions that aim to encourage the information processes described above. The learners can obtain more fine-grained questions by clicking on the superordinate questions:

• Selection: What is important? (Which statements in the text are important? Which parts of the pictures or animations are important?)

• Organization: How is this related? (Which relations are described in the text and picture or in the animation?)

• Integration: How is this related to everything else? (How is this situation similar to other situations? How is this situation different from other situations?)

Each question has a textbox below it for taking typewritten notes. The learners' notes are automatically stored so that they can be reviewed or changed at any time.
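The two-level question hierarchy and the persistent notes could be modeled as sketched below. The question texts are taken from the list above; the types, variable names, and the in-memory store are illustrative assumptions, not the environment's actual data model.

```typescript
// Sketch: superordinate questions expand into fine-grained sub-questions.
interface LearningAid {
  question: string;        // superordinate question, always shown
  subQuestions: string[];  // revealed by clicking the superordinate question
}

const aids: LearningAid[] = [
  { question: "What is important?",
    subQuestions: ["Which statements in the text are important?",
                   "Which parts of the pictures or animations are important?"] },
  { question: "How is this related?",
    subQuestions: ["Which relations are described in the text and picture or in the animation?"] },
  { question: "How is this related to everything else?",
    subQuestions: ["How is this situation similar to other situations?",
                   "How is this situation different from other situations?"] },
];

// Notes are keyed by question so they can be reviewed or changed at any time.
const notes = new Map<string, string>();

function saveNote(question: string, text: string): void {
  notes.set(question, text);
}

function loadNote(question: string): string {
  return notes.get(question) ?? "";
}
```

A real environment would persist the notes per learner (e.g. server-side), but the review-and-change behavior is the same.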

However, it is not a goal of this study to evaluate the educational effectiveness of the employed learning aids or to reproduce the findings by Schlag and Ploetzner (2011). Rather, the goal is to investigate how frequently the learners make spontaneous use of the learning aids when the learning aids are presented to them in different ways. Therefore, the learning aids are offered to the learners in three different ways (cf. Fig. 2). In the collapsed presentation mode, the support area is initially collapsed to a slim bar and the learning aids are invisible. To make the learning aids visible, the learners have to click on a button at the top of the visible bar. In the static presentation mode, the support area is always visible and presented in a fixed size. In the dynamic presentation mode, the support area is also always visible. However, when a learner visits a learning unit, after two seconds the support area enlarges in an animated way while the content area decreases, but still shows all of the learning material. Bailey et al. (2000) termed this approach the technique of "adjusting windows". The support area remains in this adjusted state until the learner scales it down again. When the support area is scaled down, it has the same size that it had in the static presentation mode.

Fig. 2. The user interface of the learning environment with collapsed, static, and dynamically enlarged support areas (translation by the authors).

To support learner self-monitoring, the learning environment poses questions to the learners after they have visited a learning unit. Before the learners can proceed to the next learning unit, the user interface shades and a self-monitoring question is presented in an overlay window (cf. Fig. 3). The learners are asked whether they believe that they have understood the key concepts and relationships of the learning unit they are about to leave. For example, after the learning unit that describes broad reach, the learners are presented the following question: "Later [in the posttest] you will be asked why a yacht can sail relatively fast in broad reach. Do you believe you have understood the reason why a yacht can do so?" If the learners believe they have understood the material, they are encouraged to proceed to the next learning unit. If the learners do not believe they have understood, they are encouraged – but not forced – to return to the last learning unit and re-process it.

Fig. 3. A self-monitoring question (translation by the authors).

The comparability of all versions of the multimedia learning environment was assured by means of usability testing. In three consecutive studies, the usability of the learning environment was tested and its interaction design was systematically improved (Ruf & Ploetzner, 2012). In the first two studies, eight undergraduate students tested the user interface of the learning environment without learning aids and self-monitoring questions. In both studies, the usability was assessed by means of specific tasks that the learners had to accomplish (cf. Dumas & Redish, 1999; Rubin & Chisnell, 2008), as well as by means of subjective ratings (cf. Tullis & Albert, 2008). Although only minor usability problems were identified, the interaction design was revised accordingly after each study. In the third study, a total of 30 undergraduate students tested the different versions of the complete learning environment. The versions differed in the presentation mode of the learning aids and the availability of the self-monitoring questions. The results demonstrated that the learning environment is easy to use, irrespective of how the learning aids are presented and whether the self-monitoring questions are available. Furthermore, the – operational – use of the learning aids itself was straightforward to the users.

4. Experimental study

4.1. Research questions and hypotheses

The research reported in this paper investigated two main questions. First, how does the presentation of cognitive learning aids affect their frequency of use? Second, how does the availability of self-monitoring questions influence the frequency of use of learning aids?

Hypothesis 1. A more obvious presentation of learning aids will result in a higher frequency of use. This hypothesis is based on the assumption that learners often do not take advantage of learning aids because they are not aware of their availability during learning (cf. Bailey et al., 2000). An obvious presentation of the learning aids should help learners to identify the learning aids and to remain aware of their availability. Therefore, we investigate the influence of three presentation modes on the frequency of use of the learning aids. We assume that the learning aids will be utilized least in the collapsed presentation mode (the least obvious mode), more in the static presentation mode, and most in the dynamic presentation mode (the most obvious mode).

Hypothesis 2. The availability of self-monitoring questions will result in a more frequent use of learning aids. This hypothesis is based on the assumption that learners often do not use cognitive learning aids because they are not aware of their need for support (cf. Aleven et al., 2003). Recent research has shown that many learners are overconfident that they have already understood the material to be learned (Dunlosky & Rawson, 2012; Dunlosky & Thiede, 2013; Koriat, 2012). In line with Koriat (2012), the self-monitoring questions should help learners to monitor their understanding of the learning content and to bring possible comprehension problems to their attention. As a consequence, if they identify such comprehension problems, they may take advantage of the learning aids (cf. Dunlosky & Thiede, 2013).

4.2. Method

4.2.1. Design
Two factors were varied, yielding six experimental conditions: (1) the presentation of the learning aids (collapsed presentation mode, static presentation mode, dynamic presentation mode) and (2) the availability of self-monitoring questions (without self-monitoring questions, with self-monitoring questions).

4.2.2. Participants
A G*Power analysis (cf. Faul, Erdfelder, Lang, & Buchner, 2007) for a multivariate analysis of variance with two factors, six groups, six dependent variables, 5% error probability, 80% power, and an expected large effect size (Cohen's f = .40) yielded a required total sample size of 60 subjects. Therefore, a total of 60 undergraduate students from a university in southwest Germany participated voluntarily in the study (50 females and 10 males, mean age M = 22.95, SD = 2.22). The students received financial compensation for their participation.
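As an aside, the logic of such an a priori power analysis can be sketched in code. The sketch below is for the simpler univariate one-way ANOVA case with the same parameters (α = .05, power = .80, f = .40, six groups); G*Power's multivariate computation with six dependent variables is what yields the reported N = 60, so this only illustrates the general procedure and returns a different (larger) sample size. The helper name `anova_total_n` is our own.

```python
from scipy.stats import f as f_dist, ncf

def anova_total_n(effect_f=0.40, alpha=0.05, target_power=0.80, k_groups=6):
    """Smallest total N for a one-way fixed-effects ANOVA omnibus test.

    Illustrative sketch of an a priori power analysis: increase N until the
    probability of rejecting H0 under the assumed effect reaches the target.
    """
    n = k_groups + 2  # start with at least 2 error degrees of freedom
    while True:
        df1, df2 = k_groups - 1, n - k_groups
        crit = f_dist.ppf(1 - alpha, df1, df2)           # critical F under H0
        lam = effect_f ** 2 * n                           # noncentrality
        power = ncf.sf(crit, df1, df2, lam)               # P(F > crit | H1)
        if power >= target_power:
            return n, power
        n += 1
```

For these parameters the univariate computation lands in the low eighties, noticeably above the multivariate result of 60 reported above, which is expected since the two procedures answer different questions.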

4.2.3. Learning material
The multimedia learning environment described in Section 3 was used. The learning material was the same in all six experimental conditions. Furthermore, all groups had access to the same learning aids. However, according to the experimental condition (cf. Section 4.2.1), the learning aids were presented in the collapsed presentation mode, in the static presentation mode, or in the dynamic presentation mode (cf. Fig. 2). Self-monitoring questions were only presented to three out of the six investigated groups (cf. Section 4.2.1).


4.2.4. Measures

4.2.4.1. Frequency of use of the learning aids. How frequently the learners used the cognitive learning aids was assessed by recording their actions in the user interface as well as their eye movements. Whenever a learner visited a learning unit, it was determined whether she or he had utilized the learning aids in at least one out of six possible forms. Table 1 lists the different forms of usage and how they were assessed.

In the first step, it was determined whether the learner made a new note or modified at least one existing note. In the case that no new note was made and no existing note was modified, it was then assessed whether the learner alternated between reading in the support area and reading or watching in the content area. If this was also not the case, it was successively determined whether the learner read at least one taken note, whether the learner read at least one learning aid, whether the learner clicked on at least one learning aid to get more specific information, or whether the learner at least visually focused a learning aid.

If any of these forms of usage was assessed during a learner's visit to the learning unit, it was counted as 'one' utilization of the learning aids. Thus, for each learner, the maximal possible frequency of use equals the number of learning units visited by this learner. Furthermore, if learners made use of the learning aids in more than one form during a visit to the learning unit, only the form that stands closest to the top of Table 1 was counted. Therefore, the frequencies of all individual forms of usage can be added up to a total frequency of use. This also implies that the forms of usage which stand closer to the bottom of Table 1 become residual categories with respect to the upper forms of usage.
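The counting rule described above can be made concrete with a short sketch. The form names and the helper `count_usage` are our own illustrative choices, not part of the original environment; the priority order follows Table 1 from top to bottom.

```python
# Priority order taken from Table 1 (top = highest priority).
USAGE_FORMS = [
    "taking_or_modifying_notes",
    "alternating_support_and_content",
    "reading_a_note",
    "reading_a_learning_aid",
    "clicking_a_learning_aid",
    "visually_focusing_a_learning_aid",
]

def count_usage(visits):
    """visits: list of sets, one per visited learning unit, each holding
    the usage forms observed during that visit.

    Returns per-form counts (only the topmost observed form per visit is
    counted, so lower forms become residual categories) and the relative
    frequency of use over all visits.
    """
    counts = {form: 0 for form in USAGE_FORMS}
    used_visits = 0
    for observed in visits:
        for form in USAGE_FORMS:      # scan from the top of Table 1
            if form in observed:
                counts[form] += 1     # count only the topmost form
                used_visits += 1
                break                 # ignore lower-priority forms
    relative_use = used_visits / len(visits) if visits else 0.0
    return counts, relative_use
```

For a learner with three visited units where the first visit shows both note reading and visual focusing, only note reading (the higher form) is counted, and the total frequency of use is 2 of 3 visits.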

A SensoMotoric Instruments (SMI) RED binocular remote eye tracker recorded the learners' eye movements as well as their actions in the user interface of the learning environment. It consists of a 22 inch widescreen display with a resolution of 1680 pixels (width) × 1050 pixels (height). Infrared light emitting diodes and eye tracking cameras are attached to the lower end of the display. At an operating distance of 70 cm, the eye tracker compensates for head movements in the range of 40 cm (width) × 40 cm (height) × 20 cm (depth). Gaze position can be accurately determined to less than .5°. For each learner, a nine-point calibration procedure was conducted. After calibration, the movements of both eyes were recorded at a sampling rate of 60 Hz.

Table 1. Forms of usage of the learning aids and their assessment.

Form of usage | Assessment
Taking or modifying notes in the support area | Log data of entries into textboxes
Alternating between reading in the support area and the content area | Saccades between support area and content area with intermediate sequences of fixations
Reading at least one note | Sequences of fixations on sequenced words of notes
Reading at least one learning aid | Sequences of fixations on sequenced words of the learning aids
Clicking on at least one of the learning aids to get a more specific explanation of it | Log data of clicks on learning aids
Visually focusing a learning aid | Fixations on a learning aid

4.2.4.2. Pre- and posttest. Although the focus of this study was on how frequently learners take advantage of learning aids, the learners' knowledge about sailing was assessed before and after the learning phase by means of a pre- and posttest. The availability of prior knowledge about sailing might influence the use of learning aids since it could reduce the learners' need for help. That is, prior domain knowledge is a potentially relevant covariate. The pretest consisted of eight items that assessed the learners' prior knowledge of sailing. Each item was an open question. Three items required verbal answers and five items required graphical answers. Each correct response was awarded one point. Hence, the maximal possible pretest score was eight points.

The provision of the posttest aimed to establish a convincing learning atmosphere and to encourage the learners to strive for successful learning outcomes. That is, it was not a goal of this study to evaluate the instructional effectiveness of the learning aids per se. The posttest was made up of 24 items: eight items assessing factual knowledge, eight items assessing conceptual knowledge, and eight items assessing transfer (cf. Anderson & Krathwohl, 2001; for examples see Table 2). Each item was an open question. The items assessing factual knowledge were the same as the items included on the pretest. Nine items required verbal answers and fifteen items required graphical answers. Each correct response was awarded one point. Hence, the maximal possible posttest score was 24 points.

4.2.4.3. Usability questionnaire. The learners' perceived usability of the learning environment might affect how they take advantage of the functionality offered by the learning environment. That is, usability is another potentially relevant covariate. Therefore, the learners' subjective usability of the learning environment was measured with the System Usability Scale of Brooke (1996). It comprises ten statements. For each statement, the learners have to rate their level of agreement on a five-point Likert scale. The maximum score of the scale is 100. A score between 80 and 90 refers to a good usability. Scores above 90 are considered to be excellent (Bangor, Kortum, & Miller, 2008).

4.2.5. Procedure
The learners were randomly assigned to one of the six experimental conditions. They participated in individual sessions. To begin, the learners completed the pretest. They were then provided with a computer that presented the multimedia learning environment. The eye tracker was attached to the screen of the computer. Next, the learners were shown how to operate the navigation tree, the navigation bar, the media shelf, and the support area of the learning environment. It is important to note, however, that the learners were not instructed or encouraged in any way to make use of the support area. After the introduction to the functionality of the learning environment, the eye tracker was calibrated. Thereafter, the learners were given a maximum of 60 min to process the learning material in the order they preferred. Directly after the learning phase, the posttest was administered. In conclusion, the learners completed the System Usability Scale.

4.3. Results

4.3.1. Results of the pretest
The descriptive results of the groups' performance on the pretest are shown in Table 3. All groups had only little prior knowledge. There were no significant differences between the groups, neither with regard to the presentation of the learning aids

(F[2,54] = 1.40; p = .26) nor with regard to the availability of self-monitoring questions (F[1,54] = 1.33; p = .25). Therefore, prior knowledge is not considered as a covariate in the further analysis.

Table 3. The absolute and relative means (M) and standard deviations (SD) of student performance on the pretest.

Presentation of the learning aids: Collapsed | Static | Dynamic | Overall
Self-monitoring questions not available, M | .20 (2.5%) | .50 (6.25%) | .20 (2.5%) | .30 (3.75%)
Self-monitoring questions not available, SD | .42 | .71 | .63 | .60
Self-monitoring questions available, M | .30 (3.75%) | .90 (11.25%) | .50 (6.25%) | .57 (7.13%)
Self-monitoring questions available, SD | .68 | 1.52 | .97 | 1.10
Overall, M | .25 (3.13%) | .70 (8.75%) | .35 (4.38%) | .43 (5.38%)
Overall, SD | .55 | 1.17 | .83 | .89

Table 2. Sample items from the posttest (translation by the authors).

Type of knowledge | Question
Factual | When does the sail of a yacht start to flutter?
Conceptual | A yacht sails perpendicular to the wind. How does the driving force on the yacht change if the yacht turns clockwise into the wind?
Transfer | A yacht is to sail from Position 1 to Position 2 and back again to Position 1. Draw the shortest possible path the yacht can sail.

4.3.2. Results of the usability questionnaire
The descriptive results of the groups' answers to the System Usability Scale are shown in Table 4. The score for all groups was considerably above 80 points, which suggests a good usability of the learning environment in all conditions (cf. Bangor et al., 2008).

The learners rated the usability significantly higher when the learning aids were presented statically than when they were presented collapsed or dynamically (F[2,54] = 4.65; p < .05; η² = .15). However, the subjective usability of the learning environment does not significantly correlate with the frequency of use of the learning aids (r = −.04; p = .76). Therefore, the subjective usability is not considered as a covariate in the further analysis.

Table 4. The absolute means (M) and standard deviations (SD) of student ratings in the System Usability Scale.

Presentation of the learning aids: Collapsed | Static | Dynamic | Overall
Self-monitoring questions not available, M | 87.25 | 94.00 | 87.25 | 89.50
Self-monitoring questions not available, SD | 8.78 | 4.74 | 7.50 | 7.67
Self-monitoring questions available, M | 88.25 | 92.75 | 88.75 | 90.25
Self-monitoring questions available, SD | 7.57 | 3.22 | 5.43 | 5.35
Overall, M | 88.25 | 93.38 | 88.00 | 89.88
Overall, SD | 7.57 | 4.00 | 6.42 | 6.56

4.3.3. Results of the eye movements and screen captures
The average horizontal accuracy of the calibration of the eye tracking device was M = .67° (SD = .40°). The average vertical accuracy was M = .61° (SD = .36°). At an average distance of about 80 cm to the monitor, this corresponds to a deviation of approximately

13 × 12 pixels. Fixations were defined as events in which the gaze remained for at least 80 ms within a radius of max. 100 pixels. Saccades were defined as gaze movements from one fixation to another fixation.

Table 5. The relative means (M) and standard deviations (SD) of the frequency of use of the learning aids (in %).

Form of usage | Self-monitoring questions not available: Collapsed | Static | Dynamic | Self-monitoring questions available: Collapsed | Static | Dynamic
Taking or modifying notes in the support area, M | 12.22 | 30.14 | 40.30 | 23.89 | 22.08 | 42.55
Taking or modifying notes in the support area, SD | 15.96 | 11.42 | 19.60 | 20.31 | 14.40 | 19.74
Alternating between reading in the support area and the content area, M | 5.44 | 12.75 | 19.42 | 10.24 | 11.53 | 14.90
Alternating between reading in the support area and the content area, SD | 7.65 | 7.84 | 15.04 | 9.28 | 11.62 | 15.21
Reading at least one note, M | 5.08 | 9.84 | 6.60 | 13.68 | 11.26 | 11.01
Reading at least one note, SD | 8.53 | 8.00 | 6.74 | 12.12 | 9.42 | 8.90
Reading at least one learning aid, M | 0 | 1.43 | 0.76 | 0.36 | 4.79 | 2.48
Reading at least one learning aid, SD | 0 | 2.56 | 1.25 | 1.13 | 7.41 | 4.59
Clicking on at least one of the learning aids to get a more specific explanation of it, M | 0 | 0 | 0 | 0 | 0 | 0
Clicking on at least one of the learning aids to get a more specific explanation of it, SD | 0 | 0 | 0 | 0 | 0 | 0
Visually focusing a learning aid, M | 8.53 | 11.85 | 7.29 | 4.16 | 14.61 | 9.88
Visually focusing a learning aid, SD | 14.33 | 9.54 | 7.30 | 5.30 | 15.03 | 13.96
Overall, M | 31.28 | 66.02 | 74.37 | 52.33 | 64.27 | 80.83
Overall, SD | 29.76 | 15.21 | 12.78 | 35.25 | 16.87 | 11.82
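The fixation definition above (gaze remaining for at least 80 ms within a radius of at most 100 pixels, sampled at 60 Hz) corresponds to a dispersion-based detector. The following is a minimal sketch of such a detector; the function name and the centroid-based radius check are our own assumptions, not SMI's implementation.

```python
import math

SAMPLE_HZ = 60
MIN_DURATION_MS = 80                      # thresholds from the study
MAX_RADIUS_PX = 100
MIN_SAMPLES = math.ceil(MIN_DURATION_MS / 1000 * SAMPLE_HZ)  # 5 samples

def detect_fixations(gaze):
    """gaze: list of (x, y) samples recorded at 60 Hz.

    Returns fixations as (start_index, end_index_exclusive, centroid):
    maximal runs of samples that all stay within MAX_RADIUS_PX of the
    running centroid and last at least MIN_DURATION_MS.
    """
    fixations = []
    i = 0
    while i < len(gaze):
        j = i
        # Grow the window while every sample stays within the radius
        # of the window's centroid.
        while j < len(gaze):
            window = gaze[i:j + 1]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            if all(math.hypot(p[0] - cx, p[1] - cy) <= MAX_RADIUS_PX
                   for p in window):
                j += 1
            else:
                break
        if j - i >= MIN_SAMPLES:          # run is long enough: a fixation
            window = gaze[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((i, j, (cx, cy)))
            i = j
        else:
            i += 1                        # too short: advance one sample
    return fixations
```

With this definition, a saccade is simply the gaze movement between two consecutive detected fixations, matching the description above.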

The SMI analysis software BeGaze was used to replay and analyze the screen captures as well as the eye movements of the learners. In the analysis, we determined whether one of the six forms of usage described in Table 1 occurred whenever a learner visited a learning unit. The form of usage standing closest to the top of Table 1 was always counted as the relevant form of usage. For instance, if a learner took or modified a note in the support area, it was no longer considered whether this learner also visually focused on the learning aid. On average, the learners visited learning units 27.3 times (SD = 11.51). Because the standard deviation is large, the frequency of use of the learning aids is related to the number of visited learning units.

The descriptive results are shown in Table 5. Overall, the learners used the cognitive learning aids least in the collapsed presentation mode, more in the static presentation mode, and most in the dynamic presentation mode. This is especially true for taking or modifying notes, as well as for alternating between reading in the support area and the content area. These two forms of usage were also the most frequent ones. In more than two-thirds of the cases, the learners made use of the learning aids by taking or modifying notes, or by alternating between reading in the support area and the content area. The remaining forms of usage occurred considerably less.

A multivariate analysis of variance (MANOVA) revealed a significant main effect for the presentation mode (F[10,102] = 3.85; p < .001; η² = .27). At the univariate level, this effect is significant for the two forms of usage that stand closest to the top of Table 1: taking or modifying notes (F[2,54] = 9.50, p < .001; η² = .26) and alternating between reading in the support area and the content area (F[2,54] = 3.27, p = .046; η² = .11). No significant differences were observed with respect to the other forms of usage.

The learning aids were used less when self-monitoring questions were not available and used more when self-monitoring questions were available. However, the difference between groups is not statistically significant (F[5,50] = 1.88; p = .12). Furthermore, the number of re-processed learning units after receiving self-monitoring questions did not significantly correlate with the learners' use of the learning aids (r = −.29; p = .12).

4.3.4. Results of the posttest
The descriptive results of the group performance on the posttest are shown in Table 6. Although the learners used the learning aids more frequently in the static and dynamic presentation modes, this did not result in an enhanced performance on the posttest. However, learners who received self-monitoring questions performed better on the posttest than those who did not receive such questions.

In a multivariate analysis of variance (MANOVA), no significant differences with respect to the presentation mode of the learning aids were found (F[6,106] = .90; p = .50), and no significant effect of the availability of self-monitoring questions was established (F[3,52] = 2.28, p = .09). Furthermore, the frequency of use of the learning aids does not significantly correlate with the learners' performance on the posttest (r = −.019; p = .89). The learners' decisions to reprocess learning units after they have received self-monitoring questions also do not significantly correlate with their performance on the posttest (r = .069; p = .72).

Table 6. The absolute and relative means (M) and standard deviations (SD) of student performance on the posttest.

Presentation of the learning aids: Collapsed | Static | Dynamic | Overall
Self-monitoring questions not available, M | 10.10 (42.08%) | 13.60 (56.67%) | 10.30 (42.91%) | 11.33 (47.21%)
Self-monitoring questions not available, SD | 2.13 | 5.13 | 2.63 | 3.79
Self-monitoring questions available, M | 15.10 (62.92%) | 13.70 (57.08%) | 12.00 (50.00%) | 13.60 (56.67%)
Self-monitoring questions available, SD | 3.96 | 3.65 | 3.53 | 3.81
Overall, M | 12.60 (52.50%) | 13.65 (56.88%) | 11.15 (46.46%) | 12.47 (51.96%)
Overall, SD | 4.01 | 4.33 | 3.15 | 3.94


5. Discussion

In this paper, we described an experimental study that aimed at enhancing the frequency of use of learning aids within a multimedia learning environment. The design of the study is based on the current five-step model of help-seeking in computerized learning environments as proposed by Aleven et al. (2003). Our study takes into consideration three steps of this model and adapts them for a multimedia learning environment. The two steps of "identifying the need for help" and "identifying potential learning aids" were addressed by the experimental factors "availability of self-monitoring questions" and "presentation mode of the learning aids". The third step, "using the delivered functions of the learning aids", was taken into account by the usability design. In all three cases, the research conducted aims at facilitating the processes expressed in the model of Aleven et al. (2003).

Our study yielded two main results. First, it was demonstrated that an obvious, but non-intrusive presentation of learning aids significantly increases their use. The learners made use of the learning aids most when they were dynamically presented, less when they were statically presented, and least when they were presented in a collapsed way. This result confirms our hypothesis and provides an important contribution to the question of why learners rarely take advantage of learning aids in computerized learning environments. In many learning environments, the possibilities for receiving help, such as learning aids, are not initially visible and the learners have to activate them by clicking on a button. In these cases, the learning aids are literally out of sight and it is therefore unlikely that learners are going to search for them and use them.

However, there is no guarantee that learners will decide to use learning aids even after they have successfully identified them. Other factors, such as the learners' motivation to learn, may affect this decision. In our investigation, such factors might have played a minor role because the learners participated in individual sessions of a laboratory study and therefore possibly felt committed to learn. It is thus an open question as to whether the same beneficial effects of the static and dynamic presentation modes would also be observable in a field study that examines more realistic learning settings.

Contrary to our hypothesis, the availability of self-monitoring questions had no positive effect on the use of learning aids. Even when learners decided to re-process a learning unit after receiving a self-monitoring question, this did not enhance the use of learning aids. The self-monitoring questions asked the learners whether they believed that they had understood the key concepts and relationships of the learning unit they were about to leave. Thus, during the re-processing of a learning unit, it could be that the self-monitoring questions made the learners focus directly on information about the concepts and relationships that were mentioned in the question. That is, the self-monitoring questions themselves could have served as a kind of learning aid. This assumption is consistent with the observation that learners who received self-monitoring questions learned more successfully than learners who did not receive these questions. This finding might also exemplify that even when learners successfully identify deficits in their learning process, they might be able to overcome these deficits without seeking external help.

While usability research and design is frequently applied to the development of commercial applications such as information and e-commerce systems, its significance is still far from being self-evident in the development of learning environments. In our study, we implemented three iterative cycles to systematically improve the usability of the employed multimedia learning environment. Although not experimentally varied, this systematic attempt at achieving good usability might have also contributed to an enhanced usage of the available learning aids, irrespective of their mode of presentation.

In this study, the use of the learning aids was not beneficial to learning. In previous studies, the encouragement to systematically select, organize, and integrate information from printed texts and pictures had strong positive effects on learning (cf. Schlag & Ploetzner, 2011). However, the learners in those studies were explicitly instructed on how to carry out each single learning technique, such as underlining important statements in a text, marking important regions in a picture, labeling regions in a picture, and establishing relations between text and pictures by means of self-generated statements and sketches. Furthermore, in the studies conducted by Schlag and Ploetzner (2011), the experimenter inspected whether the learners actually applied the different learning techniques. In this study, however, it was not possible to instruct the learners how to take advantage of the learning aids in an optimal way; nor was it possible to inspect whether the learners actually did so. Such an instruction would almost certainly result in a regular and systematic use of the learning aids by the learners. In contrast, we were interested in how the presentation of learning aids influences their spontaneous use, irrespective of the instructional effectiveness of the learning aids per se. Furthermore, due to the technical limitations of the employed learning environment, the learners' responses to the learning aids had to be limited to typewritten notes. That is, compared to the learning techniques investigated by Schlag and Ploetzner (2011), the learning aids realized in this study were strongly simplified. This might have severely constrained the learners from expressing their ideas and thoughts while processing the learning aids. For example, it might have been easier to highlight an important region in a picture than to describe the location of the same region by means of typewritten notes. Graphical annotations and sketches might also be more suitable for expressing certain ideas and thoughts than are typewritten notes (cf. Tytler, Prain, Hubber, & Waldrip, 2013). Thus, the reduction of the learning aids to typewritten notes might have also contributed to their ineffectiveness in this study.

To summarize, the results of the study reported in this paper suggest that learning aids in computerized learning environments should be presented as obviously as possible, without being too intrusive, in order to foster their frequent use. We accomplished this in our study through a permanently visible and dynamic presentation of the learning aids. Further research should examine whether this finding can be confirmed in more realistic learning settings.

Acknowledgements

The Leibniz Society and the State of Baden-Württemberg supported this research within the ScienceCampus Tübingen. We thank Marie Kösters for helping to analyze the eye tracking data.

References

Aleven, V., & Koedinger, K. R. (2001). Investigations into help seeking and learning with a cognitive tutor. In R. Luckin (Ed.), Papers of the AIED-2001 workshop on help provision and help seeking in interactive learning environments. <http://www.hcrc.ed.ac.uk/aied2001/workshops.html>. Retrieved 28.02.13.

Aleven, V., & Koedinger, K. R. (2000). Limitations of student control: Do students know when they need help? In G. Gauthier, C. Frasson, & K. VanLehn (Eds.), Proceedings of the 5th international conference on intelligent tutoring systems (pp. 292–303). London: Springer.

Aleven, V., McLaren, B. M., & Koedinger, K. R. (2006). Towards computer-based tutoring of help-seeking skills. In S. A. Karabenick & R. S. Newman (Eds.), Help seeking in academic settings: Goals, groups and contexts (pp. 259–296). Mahwah, NJ: Lawrence Erlbaum Associates.

Aleven, V., McLaren, B., Roll, I., & Koedinger, K. R. (2006). Toward meta-cognitive tutoring: A model of help-seeking with a cognitive tutor. International Journal of Artificial Intelligence in Education, 16, 101–130.

Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking and help design in interactive learning environments. Review of Educational Research, 73, 277–320.


Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing – A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Babin, L.-M., Tricot, A., & Mariné, C. (2009). Seeking and providing assistance while learning to use information systems. Computers & Education, 53, 1029–1039.

Bahr, S. G., & Ford, R. A. (2011). How and why pop-ups don’t work: Pop-up prompted eye movements, user affect and decision making. Computers in Human Behavior, 27, 776–783.

Bailey, B. P., Konstan, J. A., & Carlis, J. V. (2000). Adjusting windows: Balancing information awareness with intrusion. Paper presented at the 6th conference on human factors and the web (HFWeb 2000), Austin, Texas. <https://wiki.engr.illinois.edu/display/orchid/Publications>. Retrieved 24.03.11.

Bailey, B. P., Konstan, J. A., & Carlis, J. V. (2001). The effects of interruptions on task performance, annoyance and anxiety in the user interface. In M. Hirose (Ed.), Proceedings of INTERACT (pp. 593–601). Amsterdam: IOS Press.

Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human–Computer Interaction, 24, 574–594.

Bark, A. (2009). Sportküstenschifferschein + Sportbootführerschein See. [Yachtmaster Coastal for up to 12 nautical miles distance to mainland + Yachtmaster Coastal up to 3 nautical miles distance to mainland.] Bielefeld: Delius Klasing.

Bartholomé, T., Stahl, E., Pieschl, S., & Bromme, R. (2006). What matters in help-seeking? A study of help effectiveness and learner-related factors. Computers in Human Behavior, 22, 113–129.

Boekaerts, M. (1997). Self-regulated learning: A new concept embraced by researchers, policy makers, educators, teachers, and students. Learning and Instruction, 7, 161–186.

Brooke, J. (1996). SUS – A quick and dirty usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & A. L. McClelland (Eds.), Usability evaluation in industry. London: Taylor and Francis.

Chen, Y.-C., Hwang, R.-H., & Wang, C.-Y. (2012). Development and evaluation of a Web 2.0 annotation system as a learning tool in an e-learning environment. Computers & Education, 58, 1094–1105.

Cho, V., Cheng, T. C. E., & Lai, W. M. J. (2009). The role of perceived user-interface design in continued usage intention of self-paced e-learning tools. Computers & Education, 53, 216–227.

Clarebout, G., & Elen, J. (2006). Tool use in computer-based learning environments: Towards a research framework. Computers in Human Behavior, 22, 389–411.

Clarebout, G., & Elen, J. (2009a). The complexity of tool use in computer-based learning environments. Instructional Science, 37, 475–486.

Clarebout, G., & Elen, J. (2009b). Benefits of inserting support devices in electronic learning environments. Computers in Human Behavior, 25, 804–810.

Clarebout, G., Horz, H., Schnotz, W., & Elen, J. (2010). The relation between self-regulation and the embedding of support in learning environments. Educational Technology Research and Development, 58, 573–587.

Dumas, J. S., & Redish, J. C. (1999). A practical guide to usability testing. Exeter: Intellect.

Dunlosky, J., & Rawson, K. A. (2012). Overconfidence produces underachievement: Inaccurate self evaluations undermine students’ learning and retention. Learning and Instruction, 22, 271–280.

Dunlosky, J., & Thiede, K. W. (2013). Four cornerstones of calibration research: Why understanding students’ judgments can improve their achievement. Learning and Instruction, 24, 58–61.

Dutke, S., & Reimer, T. (2000). Evaluation of two types of online help for application software. Journal of Computer Assisted Learning, 16, 307–315.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191.

Gerjets, P., Scheiter, K., & Schuh, J. (2005). Instruktionale Unterstützungen beim Fertigkeitserwerb aus Beispielen in hypertextbasierten Lernumgebungen. [Instructional support in learning from examples in hypertextual learning environments.] Zeitschrift für Pädagogische Psychologie, 19, 23–38.

Hartley, K., & Bendixen, L. D. (2003). The use of comprehension aids in a hypermedia environment: Investigating the impact of metacognitive awareness and epistemological beliefs. Journal of Educational Multimedia and Hypermedia, 12, 275–289.

Heiß, A., Eckhardt, A., & Schnotz, W. (2003). Selbst- und Fremdsteuerung beim Lernen mit Hypermedien. [Self-regulation and external regulation in learning with hypermedia.] Zeitschrift für Pädagogische Psychologie, 17, 211–220.

Horz, H., Winter, C., & Fries, S. (2009). Differential benefits of situated instructional prompts. Computers in Human Behavior, 25, 818–828.

Huet, N., Escribe, C., Dupeyrat, C., & Sakdavong, J.-C. (2011). The influence of achievement goals and perceptions of online help on its actual use in an interactive learning environment. Computers in Human Behavior, 27, 413–420.

Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional-design theories and models (pp. 215–239). Mahwah, NJ: Lawrence Erlbaum.

Juarez Collazo, N. A., Elen, J., & Clarebout, G. (2012). Perceptions for tool use: In search of a tool use model. In T. Amiel & B. Wilson (Eds.), Proceedings of world conference on educational multimedia, hypermedia and telecommunications 2012 (pp. 2905–2912). Chesapeake, VA: AACE.

Karabenick, S. A. (2011). Classroom and technology-supported help seeking: The need for converging research paradigms. Learning and Instruction, 21, 290–296.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48, 169–183.

Kombartzky, U., Ploetzner, R., Schlag, S., & Metz, B. (2010). Developing and evaluating a strategy for learning from animations. Learning and Instruction, 20, 424–433.

Koriat, A. (2012). The relationships between monitoring, regulation and performance. Learning and Instruction, 22, 296–298.

Mäkitalo-Siegl, K. (2011). Computer-supported collaborative inquiry learning in differently structured classroom scripts. Learning and Instruction, 21, 257–266.

Martens, R. L., Valcke, M. M. A., & Portier, S. J. (1997). Interactive learning environments to support independent learning: The impact of discernability of embedded support devices. Computers & Education, 28, 185–197.

Mayer, R. E. (Ed.). (2005). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.

Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York: Cambridge University Press.

Mercier, J., & Frederiksen, C. (2008). The structure of the help-seeking process in collaboratively using a computer coach in problem-based learning. Computers & Education, 51, 17–33.

Narciss, S., Proske, A., & Koerndle, H. (2007). Promoting self-regulated learning in web-based learning environments. Computers in Human Behavior, 23, 1126–1144.

Nelson-Le Gall, S. (1981). Help-seeking: An understudied problem-solving skill in children. Developmental Review, 1, 224–246.

Nelson-Le Gall, S. (1985). Help-seeking behavior in learning. Review of Research in Education, 12, 55–90.

Overschmidt, H., & Gliewe, R. (2009). Das Bodenseeschifferpatent A + D. [Yachtmaster A + D for Lake Constance.] Bielefeld: Delius Klasing.

Pituch, K. A., & Lee, Y. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47, 222–244.

Ploetzner, R., & Schlag, S. (2013). Strategic learning from expository animations: Short- and mid-term effects. Computers & Education, 69, 159–168.

Puustinen, M., & Rouet, J.-F. (2009). Learning with new technologies: Help seeking and information searching revisited. Computers & Education, 53, 1014–1019.

Renkl, A. (2002). Worked-out examples: Instructional explanations support learning by self-explanations. Learning and Instruction, 12, 529–556.

Roll, I., Aleven, V., McLaren, B. M., & Koedinger, K. R. (2011). Improving students’ help-seeking skills using metacognitive feedback in an intelligent tutoring system. Learning and Instruction, 21, 267–280.

Rubin, J., & Chisnell, D. (2008). Handbook of usability testing. Indianapolis: Wiley.

Ruf, T., & Ploetzner, R. (2012). Interaction design for self-regulated learning with multimedia: Conceptualization and empirical tests. In T. Amiel & B. Wilson (Eds.), Proceedings of world conference on educational multimedia, hypermedia and telecommunications (pp. 1390–1395). Chesapeake, VA: AACE.

Schlag, S., & Ploetzner, R. (2011). Supporting learning from illustrated texts: Conceptualizing and evaluating a learning strategy. Instructional Science, 39, 921–937.

Schwonke, R., Ertelt, A., Otieno, C., Renkl, A., Aleven, V., & Salden, R. J. C. M. (2013). Metacognitive support promotes an effective use of instructional resources in intelligent tutoring. Learning and Instruction, 23, 136–150.

Schworm, S., & Gruber, H. (2012). E-learning in universities: Supporting help-seeking processes by instructional prompts. British Journal of Educational Technology, 43, 272–281.

Schworm, S., & Renkl, A. (2006). Computer-supported example-based learning: When instructional explanations reduce self-explanations. Computers & Education, 46, 426–445.

Stahl, E., & Bromme, R. (2009). Not everybody needs help to seek help: Surprising effects of metacognitive instructions to foster help-seeking in an online-learning environment. Computers & Education, 53, 1020–1028.

Tullis, T., & Albert, B. (2008). Measuring the user experience: Collecting, analyzing and presenting usability metrics. Amsterdam: Morgan Kaufmann.

Tytler, R., Prain, V., Hubber, P., & Waldrip, B. (Eds.). (2013). Constructing representations to learn in science. Rotterdam: Sense Publishers.

Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Hillsdale, NJ: Lawrence Erlbaum Associates.

Wood, D. (2001). Scaffolding, contingent tutoring, and computer-supported learning. Journal of Artificial Intelligence in Education, 12, 280–292.

Wood, H., & Wood, D. (1999). Help seeking, learning and contingent tutoring. Computers & Education, 33, 153–169.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41, 64–70.