
Computers in Human Behavior xxx (2013) xxx–xxx


Cognitive, metacognitive and motivational perspectives on preflection in self-regulated online learning

0747-5632/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.07.051

⇑ Tel.: +61 3 8628 2487. E-mail address: [email protected]. URL: http://www.ifenthaler.info

Please cite this article in press as: Ifenthaler, D. Cognitive, metacognitive and motivational perspectives on preflection in self-regulated online learning. Computers in Human Behavior (2013), http://dx.doi.org/10.1016/j.chb.2013.07.051

Dirk Ifenthaler ⇑
Applied Research and Learning Analytics, Open Universities Australia, Level 1, 473 Bourke Street, Melbourne, VIC 3000, Australia


Article history: Available online xxxx

Keywords: Self-regulated learning; Online education; Preflection; Reflection; Prompting

Self-regulated learning is regarded as a critical component of successful online education. Hence, the development of effective online education requires an orchestration of external control and freedom for self-regulation. Prompts are regarded as effective means for promoting such personalised and adaptive learning processes in online education. Within two experimental studies, the effectiveness of preflective and reflective prompts is tested. Additionally, personal characteristics such as motivation and learning preferences are controlled. Results indicate that directed preflective prompts work best for novice learners. Such prompts also activate positive motivation within online learning environments. Still, more research is needed for investigating personalised and adaptive realisation of preflective prompts as well as automated feedback for SRL.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

With the rapid growth of online education, students are required to become experts in mastering their own learning processes due to the high degree of learning autonomy and the physical absence of the teacher or tutor (Schunk & Zimmerman, 1998). Thus, self-regulated learning (SRL) is regarded as a critical component of successful online education (Dillon & Greene, 2003; Hartley & Bendixen, 2001; Hill & Hannafin, 1997). SRL refers to learning that includes metacognitive, motivational, and behavioural processes in order to achieve specific goals in a particular context (Low & Jin, 2012). Distinct phases of SRL involve planning, monitoring, control and evaluation (Zimmerman & Schunk, 2001). A large body of literature shows that many learners have trouble regulating their learning in online learning environments (Lajoie & Azevedo, 2006), and more recent studies have shown that less self-regulated learners are far less successful within online learning environments (Lee, Shen, & Tsai, 2008; Samruayruen, Enriquez, Natakuatoong, & Samruayruen, 2013; Tsai, 2010).

Still, it is accepted that the processes of (online) learning occur in interdependency of self-regulation and external regulation (Agina, 2012; Low & Jin, 2012; Simons, 1992). Thus, the development of online education requires an orchestration of external control and freedom for self-regulation (Koedinger & Aleven, 2007). Prompts are regarded as effective means for promoting self-regulated learning processes in online education (Ifenthaler, 2012; Bannert, 2009; Ge & Land, 2003; Wirth, 2009).

This article contributes to the research on SRL in four ways. First, it critically reviews the theoretical assumptions and empirical findings of SRL as well as its mechanisms of external control. Second, it integrates individual characteristics such as motivation and learning preferences into the overall model of SRL. Third, it empirically investigates the potential of preflection as a counterpart of reflection within online learning environments. Finally, it suggests theoretical and practical implications for SRL with a specific focus on online education.

1.1. Self-regulated learning

To be effective self-regulated learners, learners actively influence and adjust all learning processes on the cognitive, metacognitive, and motivational dimensions (Bransford, Brown, & Cocking, 2000; Schiefele & Pekrun, 1996; Zimmerman, 2002; Ifenthaler & Lehmann, 2012). However, self-regulation is highly influenced by task characteristics (e.g., complexity), the actual situation (e.g., noise) and individual characteristics (e.g., mood), leading to varying decisions and learning outcomes (Schmitz, Landmann, & Perels, 2007).

The widely accepted theoretical model of SRL by Zimmerman (1998, 2000, 2008) shares the above-mentioned dimensions of SRL and orchestrates its sub-processes in three cyclic-reflective phases. The (1) preactional phase (e.g., goal setting, strategic planning) refers to the planning and prearrangement of future learning outcomes (Schmitz, 2001). The (2) actional phase (e.g., self-monitoring, self-instruction) refers to the actual learning and volitional control (Pintrich, 2000). The (3) postactional phase (e.g., self-attribution) refers to the evaluation of the past learning, i.e., self-reflection (Zimmerman, 2000). The complex interplay of these dimensions is illustrated in Fig. 1 (Boekaerts, 1999; Friedrich & Mandl, 1997; Ifenthaler & Lehmann, 2012).

The cognitive dimension refers to internal procedures of representing and processing information (e.g., domain-specific knowledge). The metacognitive dimension is regarded as a superordinate ability to direct and regulate cognitive, motivational and behavioural processes in order to achieve specific goals (Ifenthaler, 2012; Pintrich, 2000). The motivational dimension is conceptualised through processes that initiate, guide and maintain goal-oriented behaviours (Heckhausen & Heckhausen, 2006; Zimmerman & Campillo, 2003).

The three dimensions of SRL (cognition, metacognition, motivation) can be further nuanced by structural and processual components (Friedrich & Mandl, 1997). Structural components are considered stable habitual characteristics or behavioural dispositions (e.g., goal orientation). Processual components represent instantaneous behaviour in particular situations (e.g., target goal setting). All dimensions and processes of SRL may be influenced by specific instructional interventions such as prompts (Ifenthaler, 2012; Ifenthaler & Lehmann, 2012; Bannert, 2009; Lin & Lehmann, 1999; Thillmann, Künsting, Wirth, & Leutner, 2009; van den Boom, Paas, van Merriënboer, & van Gog, 2004; Wirth, 2009).

1.2. Prompts for self-regulated learning

Prompts are essential instructional methods for guiding key processes of SRL (Bannert, 2009). They support learners in activating their learning and metacognitive strategies, such as elaboration, critical thinking or mental simulation (Ifenthaler, Pirnay-Dummer, & Seel, 2007; Ifenthaler & Seel, 2011; Ifenthaler & Seel, 2013; Ge & Land, 2004; Kirschner, Sweller, & Clark, 2006; Lin & Lehmann, 1999; van den Boom, Paas, & van Merriënboer, 2007; van den Boom et al., 2004; Wild, 2000), as well as the motivational dimension (Latham & Locke, 1991; Meece, 1994; Zimmerman & Campillo, 2003). Prompts are realised as simple questions, incomplete sentences, execution instructions, or pictures, graphics and other forms of multimedia (Ifenthaler, 2012; Bannert, 2007, 2009). Similar instructional interventions are known as adjunct aids (e.g., Clarebout & Elen, 2006; Elen & Louw, 2006a, 2006b).

Fig. 1. Dimensions of self-regulated learning.


However, these forms of instructional stimuli mainly focus on learning from text (Winne, 1983).

From an instructional point of view, prompts can be orchestrated as a request for reflection before, during and after the learning process (Taminiau, 2013; Taminiau et al., 2013; Ifenthaler & Lehmann, 2012). In addition to the timing of prompts, the method of prompting and the degree of specification can be personalised and adapted to the learners' needs (Thillmann et al., 2009; Wirth, 2009). While investigating the time of prompting for successful learning, Davis (2003) found several patterns of effective prompting. Prompting before the learning task facilitates the planning of required learning procedures, whereas prompting during the learning process helps the learner to monitor and evaluate complex learning events (Davis, 2003). Further, if the timing does not meet the requirements of external support from the learner's perspective, prompts may result in increased cognitive effort and thus may hamper learning (Thillmann et al., 2009).

Davis (2003) differentiates between generic and directed prompts. Generic prompts follow the principle "stop and think" (Davis, 2003, p. 92). A generic prompt encourages learners to interrupt the current learning for a moment and reflect. Hence, the focus of reflection is left completely open; no details are highlighted or instructed by a generic prompt. Directed prompts, on the other hand, follow the principle "stop and think about . . .". Accordingly, a directed prompt includes specific instructions such as sentences to complete, e.g., "To approach the solution to the problem step by step, I have to . . .".

Ifenthaler (2012) found generic prompts to be more effective than directed prompts, because they leave a certain amount of autonomy for SRL. Still, Ifenthaler (2012) agrees with Davis's (2003) argument that directed prompts may be more effective for learners who do not already have a specific set of prior knowledge and skills. Accordingly, a further investigation of prompts in the light of SRL seems reasonable within the scope of the present study.

1.3. Motivation within self-regulated learning

Based on self-determination theory, the nature and quality of motivation are determined by satisfying three basic needs: autonomy, competence and relatedness (Deci & Ryan, 1985). Satisfaction of these needs fosters internalised forms of motivation, such as intrinsic motivation (interest), identified regulation, and integrated regulation, which would lead to higher quality engagement and learning (Heckhausen, 1977; Heckhausen, Schmalt, & Schneider, 1985; Ryan & Deci, 2000; Vollmeyer & Rheinberg, 1999). Another related key factor that drives motivation and engagement is self-efficacy, which concerns one's perceived capability for achieving desired outcomes (Bandura, 2006; Joo, Bong, & Choi, 2000). Understanding these key components of motivation under the condition of SRL may help us to design and develop more successful online learning environments.

Overall, it is argued that motivation is positively linked to successful SRL (Kim & Pekrun, 2014; Latham & Locke, 1991; Meece, 1994; Pintrich, 1999; Pintrich, Wolters, & Baxter, 2000; Schunk, 1991; Zimmerman & Campillo, 2003). Recent research studies focussed on motivation and the design of learning environments (Keller, 2010; Kim, 2012; Kim & Keller, 2010). Still, process-oriented investigation of motivation within SRL is scarce. Accordingly, a further investigation of motivation in the light of SRL seems reasonable within the scope of the present study.

1.4. Learning preferences and self-regulated learning

Learning preferences are a learner's relatively stable set of cognitive and affective dispositions applied when interacting within a specific (online) learning environment. Learning preference models range from holistic approaches (e.g., the Dunn and Dunn learning-style model; Dunn et al., 2009) to specific models that focus on certain dimensions of learning, i.e., information gathering, processing and retrieval (e.g., the Felder-Silverman learning-style model; Felder & Silverman, 1988). Overall, learning preferences are regarded as a crucial factor for SRL and for designing (online) learning environments: "Instruction designed to address a broad spectrum of learning styles has consistently proved to be more effective than traditional instruction, which focuses on a narrow range of styles" (Felder & Brent, 2005, p. 59). However, our in-depth literature review on learning preferences and SRL revealed numerous contradicting empirical findings. Several studies support the benefit of considering learning preferences when designing (online) learning environments (e.g., Bajraktarevic, Hall, & Fullick, 2003; Dunn et al., 2009; Hayes & Allison, 1993, 1996; Riding & Grimley, 1999; Schmeck, 1988). In contrast, other studies provide evidence that learning preferences are not associated with better learning outcomes (e.g., Cook, Gelula, Dupras, & Schwartz, 2007; Gilbert & Swainer, 2008; Kavale & Forness, 1987; Martin, 2010; Snider, 1992). Accordingly, a further investigation of learning preferences in the light of SRL seems reasonable within the scope of the present study.

2. Study 1

The central research objective of this study is to identify the effectiveness of different types of prompts in SRL processes and the influence of individual characteristics such as motivation and metacognitive awareness. Based on the theoretical background described above (Pintrich et al., 2000; Zimmerman, 2000) and previous empirical findings (Davis, 2003; Ge & Land, 2004; Ifenthaler, 2012), three different types of prompts have been developed and implemented in an online learning environment. More specifically, the effect of time and type of prompting on the learning outcomes will be investigated. We assume that the time of presentation (Hypothesis 1a) and type of prompting (Hypothesis 1b) will influence the outcomes of SRL (Hypothesis 1).

Although it is commonly accepted that metacognitive knowledge can lead to improved learning outcomes, it need not necessarily do so (Stolp & Zabrucky, 2009). Still, there are valid arguments that the metacognitive dimension is critical for SRL (Flowers & Hayes, 1980; Puustinen & Pulkkinen, 2001; Veenman, van Hout-Wolters, & Afflerbach, 2006). Hence, we assume that metacognitive regulation (Hypothesis 2a) and metacognitive knowledge (Hypothesis 2b) are associated with the SRL outcomes.

Finally, Boekaerts (1995), Efklides (2008) and Zimmerman (2008) suggest that investigations of SRL should include the motivational dispositions of learners. It is argued that satisfaction of motivational dispositions will lead to a higher quality of engagement and learning (Ryan & Deci, 2000). Accordingly, we assume that positive motivation is associated with higher SRL outcomes (Hypothesis 3).

2.1. Method

2.1.1. Participants and design
64 students (44 female and 20 male) from a European university took part in this study. Their average age was 22.30 years (SD = 3.18). They were all enrolled in an intermediate-level research methods course in an educational psychology program and had studied an average of 3.45 semesters (SD = 2.72). Participants were randomly assigned to three experimental conditions varying the time of presentation and type of prompting: generic preflection prompt (GPP; n1 = 21), directed preflection prompt (DPP; n2 = 21), and generic reflection prompt (GRP; n3 = 22). Participants in the GPP group received general instructions for planning and preflecting on their future learning activities (see materials for details). For participants in the DPP group, we provided nine sentences that referred to planning (1–3), monitoring (4–6), and evaluation (7–9) of the future learning activities (see materials for details). The GRP group received general instructions for planning and reflecting on their learning activities.

2.1.2. Materials

2.1.2.1. Problem scenario. An online learning environment was implemented which included a partially illustrated, 1200-word article on the functions of the LHC (Large Hadron Collider). Participants were introduced to a scenario of a news agency and were asked to: (1) write a newspaper essay answering the following questions: How does the LHC work? Explain the possible development of a black hole with regard to the LHC. (2) Graphically represent their understanding of these complex processes of the LHC in the form of a knowledge map (also embedded in the scenario of the news agency).

2.1.2.2. Prompts. The prompts were developed in order to stimulate the participants to preflect (GPP, DPP) or reflect (GRP) on their learning activities. The generic prompt for preflection (GPP) and for reflection (GRP) included a general suggestion for the SRL activity: "Please use the next 10 min to optimise your learning through well-thought-out planning and preparation". The directed prompt for preflection (DPP) included nine to-complete sentences in order to induce the specific processes of preactional self-regulation, e.g., "To achieve the main objective I will set the following sub-goals . . ." (focus on planning).

2.1.2.3. Domain-specific knowledge test. The domain-specific knowledge test included 20 multiple-choice questions with four possible solutions each (1 correct, 3 incorrect). The questions were developed on the basis of the article on the functions of the LHC. The average difficulty level was tested in a pilot study (N = 9) to account for ceiling effects. Two versions were administered (pre- and posttest) in which the 20 multiple-choice questions appeared in a different order. It took about 10 min to complete the test.


2.1.2.4. Metacognitive awareness inventory. The participants' metacognitive awareness was assessed with the MAI (Metacognitive Awareness Inventory; Schraw & Dennison, 1994). Each of the 52 items of the inventory was answered on a scale from 1 to 100. Two dimensions of metacognitive awareness were addressed: (1) knowledge of cognition (KC; Cronbach's alpha = .74) and (2) regulation of cognition (RC; Cronbach's alpha = .84).

2.1.2.5. Intrinsic motivation inventory. Subscales of the IMI (Intrinsic Motivation Inventory; Deci & Ryan, 2005) were used to assess the participants' interest/enjoyment (INT; 7 items; Cronbach's alpha = .92), perceived competence (COM; 5 items; Cronbach's alpha = .77), effort/importance (EFF; 5 items; Cronbach's alpha = .87), pressure/tension (PRS; 5 items; Cronbach's alpha = .80) and perceived choice (COI; 7 items; Cronbach's alpha = .86).

2.1.3. Procedure
Within a computer laboratory, participants learned in an online learning environment. First, participants were randomly assigned to the three experimental conditions (GPP, DPP, GRP). Then they completed a demographic data survey (3 min), the MAI (10 min) and the declarative knowledge test (pretest; 10 min). Next, they worked through a tutorial on knowledge mapping (10 min). Then they received the task for the problem scenario (2 min). Participants in the DPP and GPP groups received their prompts. Then they received the article on the LHC (10 min). Next, participants were asked to write an essay (15 min) and construct a knowledge map (15 min) with their answer to the phenomenon in question. During this phase the GRP group received the prompt. Finally, participants answered the IMI (4 min) and the declarative knowledge test (posttest; 10 min).

Table 1
Means (standard deviations in parentheses) for the quality of written essays and knowledge maps (N = 64).

          Experimental group
          GPP           DPP           GRP
BSM WT    .125 (.121)   .172 (.106)   .132 (.125)
BSM KM    .168 (.143)   .263 (.133)   .203 (.140)

Note: BSM: Balanced Semantic Matching; WT: written text; KM: knowledge map; GPP: generic preflection prompt; DPP: directed preflection prompt; GRP: generic reflection prompt.

Table 2
Descriptives and zero-order correlations for learning outcomes and personal characteristics (N = 64).

Variable                                  1        2        3        4        5        6        7        8        9        10
1. Domain-specific knowledge gain (KNO)   –
2. BSM written essay                      .160     –
3. BSM knowledge map                      .412**   -.061    –
4. Metacognitive regulation (RC)          -.146    .678**   -.285*   –
5. Metacognitive knowledge (KC)           .014     .591**   -.064    .585**   –
6. Interest (INT)                         .324**   -.123    .595**   -.248*   -.176    –
7. Competence (COM)                       .227     -.098    .294*    -.030    -.240    .732**   –
8. Effort (EFF)                           .123     -.135    .286*    -.193    -.020    .642**   .541**   –
9. Pressure (PRS)                         .198     .287     .280*    -.192    -.233    .482**   .555**   .419**   –
10. Choice (COI)                          .313*    -.171    .444*    -.346**  -.312*   .698**   .494**   .512**   .381**   –

M                                         4.89     .14      .21      37.22    38.14    8.18     2.88     6.66     2.90     7.76
SD                                        3.18     .12      .14      15.57    8.41     1.65     .52      1.51     .66      1.10

* p < .05. ** p < .01. *** p < .001.


2.1.4. Scoring and analysis
Scores for the pre- and post-versions of the domain-specific knowledge test, the IMI subscales and the MAI dimensions were calculated.

In order to analyse the quality of written essays and knowledge maps, the AKOVIA (Automated Knowledge Visualisation and Assessment) analysis function was applied (Ifenthaler & Pirnay-Dummer, 2014; Pirnay-Dummer, Ifenthaler, & Spector, 2010). Each participant's written essay and knowledge map was automatically compared against a reference solution. The reference solution was based on the content of the online learning environment and was co-developed by a team of subject matter experts. AKOVIA uses specific automated comparison algorithms to calculate similarities between a given set of properties (Ifenthaler, 2010b). The resulting similarity index s is a measure with 0 ≤ s ≤ 1, where s = 0 is complete exclusion and s = 1 complete identity (Tversky, 1977). The resulting dependent variable from the AKOVIA analysis for the quality of participants' written essays and knowledge maps is BSM (Balanced Semantic Matching). BSM focuses on complex semantic relationships within knowledge representations (written text or knowledge map) and was successfully tested for reliability and validity in various experimental studies (Al-Diban & Ifenthaler, 2011; Johnson et al., 2011; Lee, 2009; McKeown, 2009).
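The cited Tversky (1977) framework behind such a bounded similarity index is easy to illustrate. The sketch below is not AKOVIA's actual algorithm (which is not specified here); it is a minimal implementation of Tversky's ratio-model similarity over sets of propositions, with hypothetical concept-relation-concept triples standing in for the content of a learner's and a reference knowledge map.

```python
def tversky_similarity(a, b, alpha=1.0, beta=1.0):
    """Tversky (1977) ratio-model similarity between two feature sets.

    Returns s with 0 <= s <= 1: s = 0 for disjoint sets (complete
    exclusion) and s = 1 for identical sets (complete identity).
    With alpha = beta = 1 this reduces to the Jaccard index.
    """
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))


# Hypothetical propositions (concept-relation-concept triples) from a
# learner's knowledge map versus an expert reference map.
learner = {("proton", "accelerated-in", "ring"),
           ("collision", "releases", "energy")}
reference = {("proton", "accelerated-in", "ring"),
             ("collision", "releases", "energy"),
             ("black hole", "evaporates-via", "Hawking radiation")}

s = tversky_similarity(learner, reference)  # 2 / (2 + 0 + 1)
```

Weighting the two difference terms asymmetrically (alpha ≠ beta) would let omissions in the learner's map count differently from additions, which is one motivation for Tversky's model over plain set overlap.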

2.2. Results

On the domain-specific knowledge test (pre- and posttest), participants could score a maximum of 20 correct answers. In the pretest they scored an average of M = 7.89 correct answers (SD = 2.01) and in the posttest M = 12.78 correct answers (SD = 3.39). The increase in correct answers was significant, t(63) = 12.29, p < .001, d = 1.676 (strong effect). ANOVA was used to test for domain-specific knowledge gain differences among the three experimental groups. The increase in correct answers differed significantly across the three experimental groups, F(2,61) = 6.380, p = .003, η² = .173. Tukey HSD post hoc comparisons indicate that the DPP group (M = 6.52, SD = 3.16, 95% CI [5.09, 7.96]) gained significantly more correct answers than the GRP group (M = 3.32, SD = 2.95, 95% CI [2.01, 4.63]), p = .002. However, comparisons with the GPP group (M = 4.90, SD = 2.70, 95% CI [3.68, 6.13]) were not statistically significant at p < .05.
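As a side note on the paired pre/post comparison: conventions for effect sizes in repeated-measures designs differ, so the sketch below only illustrates one common choice (standardising the mean gain by the SD of the difference scores) on hypothetical data. It is not a reconstruction of how the reported d = 1.676 was obtained.

```python
import math
from statistics import mean, stdev


def paired_t_and_d(pre, post):
    """Paired-samples t statistic and one Cohen's d variant for the gain.

    d is computed here as mean gain / SD of the difference scores; other
    conventions (e.g., standardising by a pooled pre/post SD) give
    different values for the same data.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    m = mean(diffs)               # mean knowledge gain
    s = stdev(diffs)              # sample SD of the gains
    t = m / (s / math.sqrt(n))    # t with df = n - 1
    d = m / s                     # standardised gain
    return t, d


# Hypothetical pre/posttest scores for eight learners (max 20 correct).
pre = [6, 8, 7, 9, 8, 7, 10, 8]
post = [11, 13, 12, 14, 12, 13, 15, 12]
t, d = paired_t_and_d(pre, post)
```

Note the algebraic link t = d · √n under this convention, which makes it easy to sanity-check reported statistics.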

The quality of the written essays and knowledge maps (SRL outcome) is reported through the BSM measure (see Table 1). ANOVA did not reveal significant differences between the three experimental groups for the written essays, F(2,61) = .952, p = n.s., or the knowledge maps, F(2,61) = 2.498, p = n.s.


Table 3
Regression analysis for variables predicting learning outcomes with written essays (N = 64).

BSM (written essay)                       B       SE B    β
Step 1
  Domain-specific knowledge gain (KNO)    .006    .005    .160
Step 2
  Domain-specific knowledge (KNO)         .009    .003    .238**
  Metacognitive regulation (RC)           .004    .001    .561***
  Metacognitive knowledge (KC)            .004    .001    .259**
Step 3
  Domain-specific knowledge (KNO)         .010    .004    .262**
  Metacognitive regulation (RC)           .005    .001    .597***
  Metacognitive knowledge (KC)            .003    .002    .191
  Interest (INT)                          .014    .012    .197
  Competence (COM)                        -.035   .034    -.156
  Effort (EFF)                            -.001   .010    -.018
  Pressure (PRS)                          -.014   .014    -.130
  Choice (COI)                            -.003   .019    -.015

* p < .05. ** p < .01. *** p < .001.


Accordingly, the results support Hypothesis 1a/b for the domain-specific knowledge: the time of presentation and the type of prompting influence the domain-specific knowledge gain. However, Hypothesis 1a/b is not accepted for the quality of the written texts and knowledge maps.

Table 2 shows the zero-order correlations among the variables with regard to Hypothesis 2 and Hypothesis 3.

A hierarchical regression analysis was used to determine whether the personal characteristics (domain-specific knowledge, metacognitive awareness and motivation) were significant predictors of the SRL outcome (BSM written essay; see Table 2 for zero-order correlations). Domain-specific knowledge gain (KNO), entered into the equation at step one, explained no statistically significant amount of variance in BSM (written essay), R² = .026, F(1,62) = 1.64, p = .206. After step two, with metacognitive regulation (RC) and metacognitive knowledge (KC) also included in the equation, R² = .572, F(3,60) = 26.76, p < .001. Thus, the addition of these variables resulted in a 54% increment in the variance accounted for. In a last step, with motivation (interest, competence, effort, pressure, choice) also included in the equation, the statistically significant amount of variance in BSM increased another 2%, R² = .591, F(8,55) = 9.94, p < .001.
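The logic of such a stepwise procedure can be sketched in a few lines: fit nested ordinary-least-squares models and compare their R² values, attributing the increment to the newly added block of predictors. The data below are hypothetical stand-ins for KNO, RC and the BSM essay score, and the pure-python solver is only a minimal illustration of the procedure, not the analysis software used in the study.

```python
def ols_r2(predictor_cols, y):
    """R² of an ordinary-least-squares fit with intercept.

    predictor_cols: list of predictor columns; y: outcome values.
    Solves the normal equations (X'X)b = X'y by Gaussian elimination,
    so no third-party libraries are needed for this illustration.
    """
    n = len(y)
    X = [[1.0] + [col[i] for col in predictor_cols] for i in range(n)]
    p = len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for j in range(p):                        # forward elimination
        piv = max(range(j, p), key=lambda r: abs(A[r][j]))
        A[j], A[piv], c[j], c[piv] = A[piv], A[j], c[piv], c[j]
        for r in range(j + 1, p):
            f = A[r][j] / A[j][j]
            for k in range(j, p):
                A[r][k] -= f * A[j][k]
            c[r] -= f * c[j]
    b = [0.0] * p
    for j in range(p - 1, -1, -1):            # back substitution
        b[j] = (c[j] - sum(A[j][k] * b[k] for k in range(j + 1, p))) / A[j][j]
    y_hat = [sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
    y_bar = sum(y) / n
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    sst = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - sse / sst


# Hypothetical stand-ins for knowledge gain (KNO), metacognitive
# regulation (RC) and the BSM essay score of eight participants.
kno = [3, 5, 2, 7, 6, 4, 8, 1]
rc = [40, 55, 30, 60, 52, 45, 70, 25]
bsm = [.10, .18, .07, .22, .19, .14, .28, .05]

r2_step1 = ols_r2([kno], bsm)        # step 1: knowledge gain only
r2_step2 = ols_r2([kno, rc], bsm)    # step 2: add the metacognitive block
increment = r2_step2 - r2_step1      # variance explained by the addition
```

Because OLS R² can never decrease when predictors are added, the reported step-wise increments (here, +54% and +2%) are what carry the interpretive weight, and their significance is judged by the accompanying F tests.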

Table 4
Regression analysis for variables predicting learning outcomes with knowledge maps (N = 64).

BSM (knowledge map)                       B       SE B    β
Step 1
  Domain-specific knowledge gain (KNO)    .018    .005    .412**
Step 2
  Domain-specific knowledge (KNO)         .016    .005    .369**
  Metacognitive regulation (RC)           -.003   .001    -.291*
  Metacognitive knowledge (KC)            .002    .002    .101
Step 3
  Domain-specific knowledge (KNO)         .009    .005    .202
  Metacognitive regulation (RC)           -.001   .001    -.164
  Metacognitive knowledge (KC)            .002    .002    .121
  Interest (INT)                          .063    .016    .727***
  Competence (COM)                        -.061   .047    -.224
  Effort (EFF)                            -.014   .013    -.149
  Pressure (PRS)                          .002    .019    .016
  Choice (COI)                            .015    .026    .067

* p < .05. ** p < .01. *** p < .001.


Specifically, participants' domain-specific knowledge gain (KNO) positively predicted the SRL outcome (BSM written essay), indicating that the higher the domain-specific knowledge gain, the higher the participant's quality of written essay (see Table 3). Additionally, participants' metacognitive regulation (RC) positively predicted the SRL outcome (BSM written essay), indicating that the higher the metacognitive regulation, the higher the participant's quality of written essay (see Table 3). Accordingly, Hypothesis 2a is accepted. The awareness of metacognitive regulation has an influence on the quality of the written essay. As shown in Table 3, no association was found for metacognitive knowledge and motivation (interest, competence, effort, pressure, choice). Thus, Hypothesis 2b and Hypothesis 3 are rejected.

Another hierarchical regression analysis was used to determine whether the personal characteristics were significant predictors of the SRL outcome measured with the knowledge maps (BSM knowledge map; see Table 2 for zero-order correlations). The final equation explained a statistically significant amount of variance in BSM (knowledge map), R² = .472, F(8,55) = 6.15, p < .001 (see Table 4).

Specifically, participants' interest (INT) positively predicted the SRL outcome (BSM knowledge map), indicating that the higher the interest, the higher the participant's quality of knowledge map (see Table 4). Accordingly, Hypothesis 3 is accepted for the interest subscale of motivation. As shown in Table 4, no association was found for the metacognitive variables. Thus, Hypothesis 2a and Hypothesis 2b are rejected.

3. Discussion of Study 1 and introduction to Study 2

Study 1 showed that participants in the DPP group outperformed participants in the other experimental groups with regard to their domain-specific knowledge gain. Hence, the directed preflection (step-by-step preflection strategy) seems to be beneficial for novice learners with low domain-specific knowledge. However, results did not indicate a difference between the two preflection prompts. Accordingly, Study 2 was designed for an in-depth investigation of the preflection prompts (Ifenthaler & Lehmann, 2012). We assume that learners who receive preflective prompts will outperform other learners with regard to the SRL outcomes (Hypothesis 4a). Given the novice status of participants, we assume that novice learners benefit more from the directed preflection prompt than from the generic preflection prompt (Hypothesis 4b).

Additionally, results of Study 1 indicated that metacognitive awareness and interest may be critical predictors of SRL within online learning environments. However, post hoc analysis found a significant difference for the type of knowledge representation (BSM; written text and knowledge map), t(63) = 4.12, p < .001, d = 1.038. The knowledge maps (M = .112, SD = .123) were significantly more similar to the reference representation than the written texts (M = .043, SD = .099). Accordingly, the type of knowledge representation is a critical indicator for assessments within specific domains (Johnson et al., 2011; Ifenthaler, 2011a).

These significant differences may be linked to specific learning preferences (Dunn et al., 2009; Felder & Spurlin, 2005). With regard to the theoretical foundation of learning preferences and SRL, empirical research showed that learners who prefer reflective and verbal learning procedures gain better learning outcomes than active and visual learners (Felder & Silverman, 1988). Hence, we assume that the more reflective and verbal the learners are, the higher the quality of their SRL outcomes (Hypothesis 5).

Finally, the unclear picture of motivation and its relationship to preflective prompts and SRL outcomes in Study 1 requires an in-depth investigation of this construct. According to Zimmerman (2008), motivation needs to be investigated from a process-oriented perspective (Willett, 1988). Accordingly, the multiple measurement of motivation throughout the learning process may identify specific associations of preflective prompting and motivation. Hence, we assume that the different preflective prompts influence the learner's positive activation (Hypothesis 6a), negative activation (Hypothesis 6b) and valence (Hypothesis 6c) during the learning process.

3.1. Method

3.1.1. Participants and design
67 students (48 female and 19 male) from the same European university as in Study 1 took part in Study 2. Their average age was 22.67 years (SD = 3.35). They were all enrolled in an intermediate-level instructional design course and had studied an average of 3.40 semesters (SD = 2.61). Participants were randomly assigned to three experimental conditions: generic preflection prompt (GPP; n1 = 23), directed preflection prompt (DPP; n2 = 22), and control group (CON; n3 = 22).

3.1.2. Materials
3.1.2.1. Problem scenario and prompts. Similar to Study 1, an online learning environment was implemented which included a partially illustrated article on the anatomy and functionality of the spine and spinal cord as well as the reflex circuit. Participants were introduced to a scenario in which they were asked to help their mother, who was complaining about dorsalgia and concerned about having a herniated disk. Being separated on two continents, participants responded to their mother through a written essay that was submitted through an online discussion forum.

The generic prompt included a general suggestion for a preactional phase of forethought, planning, and activation regarding the subsequent learning process: ‘‘Please use the following ten minutes to optimize your upcoming learning process through well-thought-out planning and preparation’’ (translated from German). In contrast, the directed prompt was designed to be more specific. In total, it contained nine sentences to be completed, inducing specific processes of preactional self-regulation. The control group did not receive a prompt.

3.1.2.2. Domain-specific knowledge test. The domain-specific knowledge test included eleven multiple-choice questions with four possible solutions each (1 correct, 3 incorrect). Two versions (pretest and posttest) of the domain-specific knowledge test were administered, in which the eleven multiple-choice questions appeared in a different order. It took about 8 min to complete the test.

3.1.2.3. Learning preferences. A German translation of the Index of Learning Styles (ILS; Felder & Spurlin, 2005) was used to assess the participants' individual learning preferences. Each of the four ILS dimensions consists of eleven items. Each item has two response options (a or b) that correspond to one extreme of a dimension's continuum (e.g., active vs. reflective). Felder and Spurlin (2005) report correlation coefficients for the ILS scales from three test–retest studies, with values between .72 and .87 at a 4-week interval, .60 and .78 at 7 months, and .51 and .68 at 8 months. Additionally, both an exploratory factor analysis with an eight-factor solution and a feedback analysis of the sample, with approval levels of 80% for the Sequential–Global scale and over 90% for the Active–Reflective, Sensing–Intuitive, and Visual–Verbal scales, support the validity of the ILS (Litzinger, Lee, Wise, & Felder, 2007). Reliability scores for the German translation of the four ILS dimensions are reported as follows: active–reflective (Cronbach's alpha = .551); sensing–intuitive (Cronbach's alpha = .651); visual–verbal (Cronbach's alpha = .603); sequential–global (Cronbach's alpha = .534).
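
The ILS response scheme described above (eleven forced-choice a/b items per dimension) can be sketched as a simple scoring function; the coding convention (counting 'a' minus 'b' answers, yielding an odd score between −11 and +11) follows Felder and Spurlin (2005), while the function name is illustrative:

```python
def score_ils_dimension(responses):
    """Score one ILS dimension from eleven 'a'/'b' item responses.

    Returns an odd integer between -11 and +11; positive values lean
    towards the 'a' pole (e.g., active), negative towards 'b' (reflective).
    """
    if len(responses) != 11:
        raise ValueError("Each ILS dimension has exactly eleven items.")
    a_count = sum(1 for r in responses if r == "a")
    b_count = 11 - a_count
    return a_count - b_count

# Example: 7 'a' answers and 4 'b' answers yield a score of +3
print(score_ils_dimension(["a"] * 7 + ["b"] * 4))  # 3
```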


3.1.2.4. Motivation. The PANAVA inventory (Schallberger, 2005) was used for measuring motivation-relevant conditions on the dimensions positive activation (PA; Cronbach's alpha = .786), negative activation (NA; Cronbach's alpha = .814), and valence (VA; Cronbach's alpha = .860). The factorial structure has been successfully tested (Rheinberg, 2004). The short version of the survey takes about 2 min to complete.

3.1.3. Procedure
Participants learned in an online learning environment. After being randomly assigned to an experimental condition (GPP, DPP, CON), they completed the pretest of the domain-specific knowledge test (8 min) and a demographic data questionnaire (2 min). Then they answered the first PANAVA survey (2 min) to measure their initial state of motivation-relevant conditions. Participants were then introduced to the problem scenario (5 min). The CON group received the learning material immediately afterwards and began to solve the problem and write the essay. Ten minutes later, they were asked to answer the second PANAVA survey (2 min). The GPP and DPP groups received their preflective prompts. After 10 min they answered the second PANAVA survey (2 min). The learning material was available afterwards. After a total of 40 min for solving the problem and writing the essay in response to the scenario, the third PANAVA survey followed (2 min). Last, the post-version of the domain-specific knowledge test was answered (8 min).

3.1.4. Scoring and analysis
Scores for the pre- and post-version of the domain-specific knowledge test, the ILS, and the PANAVA survey were calculated. Similar to Study 1, the quality of the written essays (responses to the problem scenario) was analysed with AKOVIA (Ifenthaler & Pirnay-Dummer, 2014). Each participant's written essay was automatically compared against a reference solution, which was based on the content of the online learning environment and was co-developed by a team of subject matter experts. Again, the quality of the written essays (SRL outcome) is reported through the BSM (Balanced Semantic Matching) measure (Ifenthaler & Pirnay-Dummer, 2014).
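
AKOVIA's BSM algorithm itself is not reproduced here; purely to illustrate the underlying idea of comparing an essay's concept set against a reference solution, a toy overlap measure might look as follows (the function name and the balancing scheme are illustrative stand-ins, not AKOVIA's actual method):

```python
def concept_overlap(essay_terms, reference_terms):
    """Toy similarity between two concept sets, in [0, 1].

    Uses the intersection size relative to the average set size, so that
    neither a very short nor a very verbose essay is favoured. This is a
    simplified stand-in for a balanced semantic-matching measure.
    """
    essay, reference = set(essay_terms), set(reference_terms)
    if not essay or not reference:
        return 0.0
    shared = len(essay & reference)
    return shared / ((len(essay) + len(reference)) / 2)

essay = ["spine", "vertebra", "disk", "nerve"]
reference = ["spine", "vertebra", "disk", "nerve", "reflex", "spinal cord"]
print(round(concept_overlap(essay, reference), 2))  # 0.8
```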

3.2. Results

In the pretest of the domain-specific knowledge test, participants scored an average of M = 3.43 correct answers (SD = 1.49), and in the posttest M = 6.78 correct answers (SD = 1.44). The increase in correct answers was significant, t(66) = 19.88, p < .001, d = 2.286 (strong effect). ANOVA was used to test for domain-specific knowledge gain differences among the three experimental groups. No significant between-group differences could be identified across the three experimental groups, F(2,64) = .881, p = n.s.
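
For illustration, a paired t statistic and a repeated-measures Cohen's d can be computed from gain scores as sketched below; the data are invented, and note that the study's reported d = 2.286 may rest on a different d convention (e.g., pooled pre/post standard deviations rather than the gain-score standard deviation used here):

```python
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired t statistic and Cohen's d for a pre/post design.

    d is computed as the mean gain divided by the standard deviation of
    the gain scores (one common convention for repeated measures).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    sd = stdev(diffs)
    t = mean(diffs) / (sd / n ** 0.5)
    d = mean(diffs) / sd
    return t, d

# Invented pre/posttest scores for eight participants
pre = [3, 4, 2, 5, 3, 4, 3, 2]
post = [6, 7, 6, 8, 7, 6, 7, 6]
t, d = paired_t_and_d(pre, post)
```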

With regard to Hypotheses 4a and 4b, ANOVA revealed significant differences between the three experimental groups for the quality of the written essays (BSM), F(2,64) = 10.641, p < .001, η² = .249. Tukey HSD post hoc comparisons indicate that the GPP group (M = .314, SD = .104, 95% CI [.269, .359], p < .001) and the DPP group (M = .292, SD = .095, 95% CI [.250, .333], p = .002) produced significantly higher quality in their written essays (BSM) than the CON group (M = .185, SD = .099, 95% CI [.141, .229]). However, the comparison between the GPP group and the DPP group was not statistically significant at p < .05. Accordingly, Hypothesis 4a is accepted. However, Hypothesis 4b is rejected.
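
The between-group test above is a one-way ANOVA with η² (the proportion of variance explained by group membership) as effect size. A minimal, self-contained sketch of that computation (the group values below are invented for illustration, not the study's data):

```python
from statistics import mean

def one_way_anova(groups):
    """F statistic and eta squared for a one-way between-subjects design."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    eta = ss_between / (ss_between + ss_within)  # eta squared
    return f, eta

# Invented BSM-like essay-quality scores for three small groups
gpp = [0.31, 0.35, 0.28, 0.33]
dpp = [0.29, 0.27, 0.31, 0.30]
con = [0.18, 0.20, 0.17, 0.19]
f, eta = one_way_anova([gpp, dpp, con])
```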

With regard to Hypothesis 5, a regression analysis was used to determine whether the individual learning preferences (active–reflective, sensing–intuitive, visual–verbal, sequential–global) were significant predictors of the SRL outcome (BSM; written essay). However, the equation did not explain a significant amount of variance in BSM (written essay), R² = .052, F(4,62) = .857, p = n.s. (see Table 5 for zero-order correlations). Accordingly, Hypothesis 5 is rejected.

Table 5
Descriptives and zero-order correlations for learning outcomes and learning preferences (N = 67).

Variable                      1        2        3        4        5
1. BSM written essay          –
2. ILS active–reflective      −.106    –
3. ILS sensing–intuitive      .128     −.278*   –
4. ILS visual–verbal          .020     .142     −.027    –
5. ILS sequential–global      −.102    −.156    .414**   −.176    –
M                             .264     2.91     3.01     2.55     3.61
SD                            .113     1.16     1.26     1.13     1.11

* p < .05.
** p < .01.

Table 6
Means (standard deviations in parentheses) of motivational dispositions at three measurement points (N = 67).

Measurement point   Experimental group   PA             NA             VA
MP1                 GPP                  14.43 (4.09)   12.70 (4.52)   9.30 (2.64)
                    DPP                  15.05 (3.75)   11.18 (3.91)   9.82 (1.99)
                    CON                  14.95 (4.73)   11.00 (3.87)   10.77 (2.22)
MP2                 GPP                  13.13 (4.25)   11.48 (3.67)   8.78 (2.30)
                    DPP                  15.95 (3.76)   10.41 (3.54)   9.73 (1.96)
                    CON                  12.18 (4.63)   9.36 (3.46)    10.95 (2.01)
MP3                 GPP                  14.96 (5.52)   14.52 (4.69)   8.70 (2.77)
                    DPP                  16.91 (4.23)   12.59 (4.67)   9.00 (2.12)
                    CON                  13.95 (3.11)   10.77 (3.78)   10.59 (2.58)

Note: MP: measurement point; PA: positive activation; NA: negative activation; VA: valence.
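
A multiple regression R² of this kind can be computed by ordinary least squares as sketched below; the predictor and outcome data are randomly generated stand-ins (mimicking four learning-preference scores per participant), not the study's data:

```python
import numpy as np

def r_squared(predictors, outcome):
    """R-squared of an OLS regression of outcome on predictors plus intercept."""
    X = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(np.sum((outcome - outcome.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
ils_scores = rng.normal(size=(67, 4))   # four ILS dimension scores (invented)
bsm = rng.normal(size=67)               # essay quality, unrelated by design
r2 = r_squared(ils_scores, bsm)         # small: predictors explain little
```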

With regard to Hypotheses 6a, 6b and 6c, we computed three repeated-measures MANOVAs with the PANAVA dimensions (PA, NA, VA) at three measurement points as a within-subjects factor, and experimental groups (GPP, DPP, CON) as a between-subjects factor (see Table 6 for descriptive results). For the PA dimension, MANOVA revealed a significant main effect of time on positive activation, Wilks' Lambda = .858, F(2,63) = 5.225, p = .008, η² = .142. The sphericity assumption was not met (χ²(2) = 8.51, p = .014), so the Greenhouse–Geisser correction was applied. The difference between measurements was significant, F(1.8,113.6) = 4.93, p = .011, η² = .071. The comparison of positive activation (PA) at each measurement point (MP) indicated significant differences between experimental groups (GPP, DPP, CON) at MP2, F(2,64) = 4.76, p = .012, η² = .049 (see Table 6 for means and standard deviations).

Fig. 2. Interactions of experimental group × time on positive activation.

Further, we found a significant interaction effect of time and group on the PA dimension, F(3.6,113.6) = 2.55, p = .049, η² = .074. Fig. 2 shows the interaction effect on the positive activation dimension (PA). Accordingly, Hypothesis 6a is accepted.
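
The Greenhouse–Geisser correction applied above can be sketched in a few lines; the implementation follows the standard epsilon formula based on the double-centered covariance matrix of the repeated measures (the data shape and values below are illustrative):

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for an (n_subjects x k_conditions) array.

    Computed from the double-centered covariance matrix of the repeated
    measures; ranges from 1/(k-1) (strong sphericity violation) to 1
    (sphericity holds). The corrected test multiplies both F degrees of
    freedom by this value.
    """
    k = data.shape[1]
    s = np.cov(data, rowvar=False)          # k x k covariance of conditions
    h = np.eye(k) - np.ones((k, k)) / k     # double-centering projector
    s_star = h @ s @ h
    return float(np.trace(s_star) ** 2 / ((k - 1) * np.sum(s_star ** 2)))

rng = np.random.default_rng(1)
scores = rng.normal(size=(20, 3))           # 20 subjects, 3 measurement points
eps = greenhouse_geisser_epsilon(scores)    # between 1/2 and 1 for k = 3
```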

For the NA dimension, MANOVA revealed a significant main effect of time on negative activation, Wilks' Lambda = .640, F(2,63) = 17.721, p < .001, η² = .360. The sphericity assumption was not met (χ²(2) = 13.45, p = .001), so the Greenhouse–Geisser correction was applied. The difference between measurements was significant, F(1.7,97.9) = 12.28, p < .001, η² = .161. The comparison of negative activation (NA) at each measurement point (MP) indicated significant differences between experimental groups (GPP, DPP, CON) at MP3, F(2,64) = 4.48, p = .015, η² = .123 (see Table 6 for means and standard deviations). Accordingly, Hypothesis 6b is accepted.

Finally, MANOVA revealed no significant main effect of time on valence, Wilks' Lambda = .939, F(2,63) = 2.041, p = n.s., and no significant effect for time × group, Wilks' Lambda = .949, F(2,126) = .843, p = n.s. Accordingly, Hypothesis 6c is rejected.

4. General discussion

The importance of self-regulated learning for online education is undisputed (Azevedo, 2008, 2009; Azevedo, Cromley, Winters, Moos, & Greene, 2005; Azevedo, Johnson, Chauncey, & Burkett, 2010; Meece, 1994; Moos & Azevedo, 2008; Veenman, 2007; Winne, 2001; Zimmerman, 1989, 2002, 2008; Zimmerman & Campillo, 2003; Zimmerman & Schunk, 2001). The cognitive, metacognitive and motivational dimensions of SRL build a solid theoretical framework for psychological and educational research (Azevedo et al., 2010; Boekaerts, 1995; Zimmerman, 2008). Still, empirical research investigating the interrelationship of these dimensions, with a specific focus on the underlying processes while learning in online environments, is rare. Additionally, preflective prompts as a simple but effective instructional intervention require far more empirical evidence (Taminiau, 2013).

Hence, two experimental studies were designed to investigate the interrelationship of the dimensions of SRL and variations of preflective prompts within an online learning environment. Study 1 and Study 2 indicated that preflective prompts are effective instructional methods within online learning environments. Especially novice learners seem to benefit from directed preflective prompts, as this short instructional intervention guides the novice learner's forethought phase, i.e., the planning of future learning (Friedrich & Mandl, 1997; Jacobson, 2000; Veenman, 2007).

The results of Study 1 support the notion that metacognitive regulation is highly important for writing assignments. Hence, the complexity of a writing task can be best mastered when learners are able to monitor and regulate their learning processes (van Merriënboer & Sweller, 2005). Additionally, Study 1 showed that interest is highly important for learning tasks where knowledge map assessments are used. As well-represented knowledge maps require considerable effort and understanding of the subject domain, learners with high interest in the learning task achieve structurally and semantically better results.

Study 2 critically investigated learning preferences as a possible predictor of self-regulated learning outcomes (Bajraktarevic et al., 2003; Dunn et al., 2009). However, no empirical evidence was found for this hypothesis, which is in line with other empirical findings (Cook et al., 2007; Gilbert & Swainer, 2008; Martin, 2010). More importantly, a process-oriented view on motivation within SRL was realised in Study 2. Clearly, the directed preflective prompt had a significant influence on the positive activation of the participants. Hence, novice learners' motivation is positively influenced by a step-by-step instruction before the learning process (Ryan & Deci, 2000).

4.1. Implications

Prompts are effective means for promoting self-regulated learning (Bannert, 2009). Preflective prompts support learners in activating their metacognitive strategies and positive motivation. Hence, self-regulation, self-monitoring and evaluation benefit from prompting before the learning process (Bannert, 2007; Ifenthaler & Lehmann, 2012; Veenman, 1993). When designing prompts for online learning environments, not only the type of aiding strategy needs to be differentiated but also the conditions of presentation, the prompting methodology and the degree of specification (Wirth, 2009). The results of our empirical investigations show that different types of prompts have different effects on learners. Accordingly, prompts need to be critically reviewed before they are implemented as instructional aids in personalised and adaptive learning environments (Thillmann et al., 2009). Such learning environments require automated and instant analyses of learning processes as well as intelligent feedback mechanisms (Roscoe, Snow, & McNamara, 2013).

Within the last five years, strong progress has been made in the development of automated tools for structural and semantic knowledge assessment (e.g., Al-Diban & Ifenthaler, 2011; Clariana & Wallace, 2007; Ifenthaler, 2008, 2010b; Shute, Jeong, Spector, Seel, & Johnson, 2009). Study 1 and Study 2 successfully used AKOVIA for an automated analysis of written texts and knowledge maps (Ifenthaler & Pirnay-Dummer, 2014). As discussed above, there are numerous approaches for eliciting knowledge for various diagnostic purposes. However, most approaches have not been tested for reliability and validity (Ifenthaler, 2008; Seel, 1999a). Additionally, they are applicable almost exclusively to single or small sets of data (Al-Diban & Ifenthaler, 2011; Ifenthaler, 2010a). Hence, new approaches are required, such as AKOVIA, which have not only been tested for reliability and validity but also provide a fast and economic way of analysing larger sets of data. AKOVIA is web-based and its algorithms are not limited to a specific amount of data. Also, AKOVIA can interface with almost every online learning environment. This feature of learning analytics enables the implementation of personalised and adaptive online learning environments (Kalyuga, 2006). Additionally, AKOVIA can provide personalised and adaptive feedback or scaffolds whenever the learner needs them (Ifenthaler, 2009, 2011b).

In Study 1, participants were asked to represent their understanding of the phenomenon in question using two different modes of externalisation: written text and knowledge maps. The post hoc analysis showed significant differences between the written text and knowledge map assessments. Additionally, the different types of representation were influenced differently by personal characteristics of the participants. These results are highly important for the reliability and validity of assessment methods. Overall, the possibilities of knowledge externalisation are limited to a few sets of sign and symbol systems (Seel, 1999b), characterised as graphical and language-based approaches (Ifenthaler, 2014). With regard to cognitive psychology and its focus on human knowledge representation, the most important result of this research is the observation that learners are able to use different forms of representation. Learners can either recall an appropriate form of representation from memory or transform memorised information into an appropriate form of representation depending on situational demands. However, because it is not possible to directly assess internal representations of knowledge, one of the most important issues of research on knowledge representation concerns reliable and valid measurements of declarative and procedural knowledge (Galbraith, 1999; Ifenthaler, 2008; Pirnay-Dummer, Ifenthaler, & Seel, 2012; Seel, 1999a; Stachowiak, 1973; Wygotski, 1969). This suggests that the design of reliable and valid assessments requires a critical investigation of the type of representation that fits best to the assessment goals, the learners' preconditions and the domain-specific requirements (Acton, Johnson, & Goldsmith, 1994; Baker, Chung, & Delacruz, 2008; Ifenthaler, 2011a; Ifenthaler, Eseryel, & Ge, 2012; McNeill, Gosper, & Hedberg, 2012; Trumpower, Sharara, & Goldsmith, 2010; Veenman, 2007; Yang, 2010).

4.2. Limitations and future work

As with all experimental research, there are limitations to this study which need to be addressed. First, while our sample size was large enough to achieve statistically significant results, the explained variance for some of our regression models was rather moderate. This indicates that, besides the tested variables, other variables not tested in this study may have influenced the outcomes. Second, a more in-depth investigation of different knowledge representations would have revealed a more precise understanding of the modes of externalisation within SRL. Hence, future studies may investigate specific modes of representation within SRL and online learning environments. Third, all participants in Study 1 and Study 2 were novice learners within the subject domains. This fact limits the external validity of our empirical investigations. Accordingly, future studies may include various levels of expertise within the field of study. Fourth, as external factors of the learning environment influence SRL, future studies may include different levels of task difficulty and/or complexity as well as a comparison across different subject domains. Fifth, based on earlier studies it was assumed that the implemented prompts have a positive effect on learning processes. However, the experimental settings of both studies did not contrast effects against participants not receiving instructional support. Hence, future studies may include further variations of experimental groups, including a control group which does not receive any type of prompt. Sixth, further effects were revealed in our statistical analysis, e.g., a negative correlation between metacognitive regulation and choice, r = −.346**. Accordingly, further hypotheses based on these findings may inform post hoc analyses and future experimental investigations. Last, the limited time of learning and the laboratory situation is another threat to the external validity of our findings. Accordingly, with the capabilities of learning analytics, future studies may investigate the nature of SRL and the influence of preflective prompts in an open online learning environment. As a result, we suggest the implementation of design experiments for these future investigations (Brown, 1992).

5. Conclusions

Dewey (1916, 1933) introduced the concept of reflective processes almost a century ago, suggesting that humans learn better from reflection on their own experience than from the experience itself. Since then, a large body of literature and empirical studies suggests that technology can help learners to self-regulate their learning (Azevedo et al., 2005; Hadwin, Oshige, Gress, & Winne, 2010; Reed, 2006). However, the question of the optimal orchestration of technology systems to support SRL is not fully answered yet (Schraw, 2007). Hence, design-based research investigating the influence of current technology for facilitating SRL in online learning environments is much needed. In addition, the above-described studies provide evidence that preflection, as a counterpart to reflection, offers benefits for learners' cognitive, metacognitive and motivational dispositions. Therefore, it is up to instructional designers to develop preflective prompts for online learning environments that provoke and foster preflection in less experimental settings.

With the rapid growth of technology influencing learning processes, there are high hopes that the personalised and adaptive realisation of preflective prompts and automated assessment as well as feedback for SRL is just around the corner. Currently, the PASS (Personalised Adaptive Study Success) initiative realises the above-described implications for a leading institution in online higher education.

Acknowledgments

The author gratefully thanks his student research assistants Inka Hähnlein and Thomas Lehmann for their support in conducting the experimental studies and in the preparation of the analysis of preliminary data.

References

Acton, W. H., Johnson, P. J., & Goldsmith, T. E. (1994). Structural knowledgeassessment: Comparison of referent structures. Journal of EducationalPsychology, 86(2), 303–311.

Agina, A. M. (2012). The effect of nonhuman’s external regulation on youngchildren’s self-regulation to regulate their own process of learning. Computers inHuman Behavior, 28(4), 1140–1152. http://dx.doi.org/10.1016/j.chb.2012.01.022.

Al-Diban, S., & Ifenthaler, D. (2011). Comparison of two analysis approaches formeasuring externalized mental models: Implications for diagnostics andapplications. Journal of Educational Technology & Society, 14(2), 16–30.

Azevedo, Roger. (2008). The role of self-regulation in learning about science withhypermedia. In D. Robinson & G. Schraw (Eds.), Recent innovations in educationaltechnology that facilitate student learning (pp. 127–156). Charlotte, NC:Information Age Publishing.

Azevedo, Roger. (2009). Theoretical, conceptual, methodological, and instructionalissues in research on metacognition and self-regulated learning: A discussion.Metacognition and Learning, 4(1), 87–95.

Azevedo, Roger., Cromley, J. G., Winters, F. I., Moos, D. C., & Greene, J. A. (2005).Adaptive human scaffolding facilitates adolescents’ self-regulated learning withhypermedia. Instructional Science, 33(5–6), 381–412.

Azevedo, Roger, Johnson, Amy, & Chauncey, Amber (2010). Self-regulated learningwith metatutor: Advancing the science of learning with metacognitive tools. InM. S. Khine & I. M. Saleh (Eds.), New science of learning (pp. 225–247). New York:Springer.

Bajraktarevic, N., Hall, W., & Fullick, P. (2003). Incorporating learning styles inhypermedia environment: Empirical evaluation. Paper presented at the 14thconference on hypertext and hypermedia, Nottingham, UK.

Baker, E. L., Chung, G. K. W. K., & Delacruz, G. C. (2008). Design and validation oftechnology-based performance assessments. In J. M. Spector, M. D. Merrill, J. J.G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educationalcommunications and technology (pp. 595–604). New York: Taylor & FrancisGroup.

Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. C.Urdan (Eds.). Self-efficacy beliefs of adolescents (Vol. 5, pp. 307–337). Hershey,PA: Information Age Publishing.

Bannert, M. (2007). Metakognition beim Lernen mit Hypermedia. Erfassung,Beschreibung, und Vermittlung wirksamer metakognitiver Lernstrategien undRegulationsaktivitäten. Münster: Waxmann.

Bannert, M. (2009). Promoting self-regulated learning through prompts. Zeitschriftfür Pädagogische Psychologie, 23(2), 139–145.

Boekaerts, M. (1995). Self-regulated learning: Bridging the gap betweenmetacognition and metamotivation theories. Educational Psychologist, 30(4),195–200.

Boekaerts, M. (1999). Self-regulated learning: Where we are today. InternationalJournal of Educational Research, 31(6), 445–457.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain,mind, experience, and school. Washington, DC: National Academy Press.

Brown, A. L. (1992). Design experiments: Theoretical and methodologicalchallenges in creating complex interventions in classroom settings. Journal ofthe Learning Sciences, 2(2), 141–178.

Clarebout, G., & Elen, J. (2006). Tool use in computer-based learning environments:Towards a research framework. Computers in Human Behavior, 22(3), 389–411.http://dx.doi.org/10.1016/j.chb.2004.09.007.

Clariana, R. B., & Wallace, P. E. (2007). A computer-based approach for deriving andmeasuring individual and team knowledge structure from essay questions.Journal of Educational Computing Research, 37(3), 211–227.

Please cite this article in press as: Ifenthaler, D. Cognitive, metacognitive andComputers in Human Behavior (2013), http://dx.doi.org/10.1016/j.chb.2013.07.0

Cook, D. A., Gelula, M. H., Dupras, D. M., & Schwartz, A. (2007). Instructionalmethods and cognitive and learning styles in web-based learning: Report of tworandomised trials. Medical Education, 41(9), 897–905.

Davis, E. (2003). Prompting middle school science students for productivereflection: Generic and directed prompts. Journal of the Learning Sciences,12(1), 91–142.

Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in humanbehavior. New York: Plenum Press.

Deci, E. L., & Ryan, R. M. (2005). Intrinsic motivation inventory (IMI). <http://www.psych.rochester.edu/SDT/measures/IMI_description.php>.

Dillon, C., & Greene, B. (2003). Learner differences in distance learning: Findingdifferences that matter. In M. G. Moore & W. G. Anderson (Eds.), Handbookof distance education (pp. 235–244). Mahwah, NJ: Lawrence ErlbaumAssociates.

Dunn, R., Honigsfeld, A., Doolan, L. S., Bodstrom, L., Russo, K., Schiering, M. S., et al.(2009). Impact of learning-style instructional strategies on students’achievement and attitudes: Perceptions of educators in diverse institutions.The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 82(3),135–140.

Efklides, A. (2008). Metacognition: Defining its facets and levels of functioning inrelation to self-regulation and co-regulation. European Psychologist, 13,277–287.

Elen, J., & Louw, L. P. (2006a). The impact of instructional conceptions on the use ofadjunct aids. Technology, Instruction, Cognition and Learning, 4(3–4), 331–350.

Elen, J., & Louw, L. P. (2006b). The instructional functionality of multiple adjunctaids. e-Journal of Instructional Science and Technology, 9(2), 1–17.

Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal ofEngineering Education, 94(1), 57–72.

Felder, R. M., & Silverman, L. K. (1988). Learning and teaching styles in engineeringeducation. Journal of Engineering Education, 56(7), 674–681.

Felder, R. M., & Spurlin, J. (2005). Applications, reliability and validity of the Index ofLearning Styles. International Journal of Engineering Education, 21(1), 103–112.

Flowers, L. S., & Hayes, J. R. (1980). The dynamics of composing: Making plans andjuggling constraints. In L. W. Gregg & E. R. Steinberg (Eds.) (pp. 31–50).Hillsdale, NJ: Lawrence Erlbaum Associates.

Friedrich, H. F., & Mandl, H. (1997). Analyse und Förderung selbstgesteuertenLernens. In H. Mandl & H. F. Friedrich (Eds.), Lern- und Denkstrategien (pp. 3–54).Göttingen: Hogrefe.

Galbraith, David (1999). Writing as a knowledge-constituting process. In M.Torrance & D. Galbraith (Eds.), Knowing what to write. Conceptual processes intext production (pp. 139–160). Amsterdam: University Press.

Ge, X., & Land, S. M. (2003). Scaffolding students’ problem-solving processes in anill-structured task using question prompts and peer interactions. EducationalTechnology Research and Development, 51(1), 21–38.

Ge, X., & Land, S. M. (2004). A conceptual framework for scaffolding ill-structuredproblem-solving processes using question prompts and peer interactions.Educational Technology Research and Development, 52(2), 5–22.

Gilbert, J. E., & Swainer, C. A. (2008). Learning styles: How do they fluctuate?Institute for Learning Styles Research Journal, 1, 29–40.

Hadwin, A. F., Oshige, M., Gress, C. L. Z., & Winne, P. H. (2010). Innovative ways forusing gStudy to orchestrate and research social aspects of self-regulatedlearning. Computers in Human Behavior, 26(5), 794–805.

Hartley, K., & Bendixen, L. D. (2001). Educational research in the Internet age:Examining the role of individual characteristics. Educational Researcher, 30(9),22–26.

Hayes, J., & Allison, C. W. (1993). Matching learning style and instructional strategy:An application of the person-environment interaction paradigm. Perceptual andMotor Skills, 76(1), 63–79.

Hayes, J., & Allison, C. W. (1996). The implications of learning styles for training anddevelopment: A discussion of the matching hypothesis. British Journal ofManagement, 7(1), 63–73.

Heckhausen, H. (1977). Achievement motivation and its constructs: A cognitivemodel. Motivation and Emotion, 1(4), 283–329.

Heckhausen, H., Schmalt, H.-D., & Schneider, K. (1985). Achievement motivation inperspective. London: Academic Press.

Heckhausen, J., & Heckhausen, H. (2006). Motivation und Handeln: Lehrbuch derMotivationspsychologie (3 ed.). Heidelberg: Springer.

Hill, J. R., & Hannafin, M. J. (1997). Cognitive strategies and learning from the WorldWide Web. Educational Technology Research and Development, 45(4), 37–64.

Ifenthaler, D. (2008). Practical solutions for the diagnosis of progressing mentalmodels. In D. Ifenthaler, P. Pirnay-Dummer & J. M. Spector (Eds.),Understanding models for learning and instruction. Essays in honor ofNorbert M. Seel (pp. 43–61). New York: Springer.

Ifenthaler, D. (2009). Model-based feedback for improving expertise and expertperformance. Technology, Instruction, Cognition and Learning, 7(2), 83–101.

Ifenthaler, D. (2010a). Relational, structural, and semantic analysis of graphicalrepresentations and concept maps. Educational Technology Research andDevelopment, 58(1), 81–97. http://dx.doi.org/10.1007/s11423-008-9087-4.

Ifenthaler, D. (2010b). Scope of graphical indices in educational diagnostics. In D.Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based diagnosticsand systematic analysis of knowledge (pp. 213–234). New York: Springer.

Ifenthaler, D. (2011a). Identifying cross-domain distinguishing features of cognitivestructures. Educational Technology Research and Development, 59(6), 817–840.http://dx.doi.org/10.1007/s11423-011-9207-4.

Ifenthaler, D. (2011b). Intelligent model-based feedback. Helping students to monitor their individual learning progress. In S. Graf, F. Lin, Kinshuk, & R. McGreal (Eds.), Intelligent and adaptive systems: Technology enhanced support for learners and teachers (pp. 88–100). Hershey, PA: IGI Global.

Ifenthaler, D. (2012). Determining the effectiveness of prompts for self-regulated learning in problem-solving scenarios. Journal of Educational Technology & Society, 15(1), 38–52.

Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment for game-based learning. In D. Ifenthaler, D. Eseryel, & X. Ge (Eds.), Assessment in game-based learning. Foundations, innovations, and perspectives (pp. 3–10). New York: Springer.

Ifenthaler, D., & Lehmann, T. (2012). Preactional self-regulation as a tool for successful problem solving and learning. Technology, Instruction, Cognition and Learning, 9(1–2), 97–110.

Ifenthaler, D., & Pirnay-Dummer, P. (2014). Model-based tools for knowledge assessment. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (4th ed., pp. 289–301). New York: Springer.

Ifenthaler, D., Pirnay-Dummer, P., & Seel, N. M. (2007). The role of cognitive learning strategies and intellectual abilities in mental model building processes. Technology, Instruction, Cognition and Learning, 5(4), 353–366.

Ifenthaler, D., & Seel, N. M. (2011). A longitudinal perspective on inductive reasoning tasks. Illuminating the probability of change. Learning and Instruction, 21(4), 538–549. http://dx.doi.org/10.1016/j.learninstruc.2010.08.004.

Ifenthaler, D., & Seel, N. M. (2013). Model-based reasoning. Computers and Education, 64, 131–142. http://dx.doi.org/10.1016/j.compedu.2012.11.014.

Jacobson, M. J. (2000). Problem solving about complex systems: Differences between experts and novices. In B. Fishmann & S. O. O'Connor-Divelbiss (Eds.), Fourth international conference of the learning sciences (pp. 14–21). Mahwah, NJ: Lawrence Erlbaum Associates.

Johnson, T. E., Pirnay-Dummer, P., Ifenthaler, D., Mendenhall, A., Karaman, S., & Tennenbaum, G. (2011). Text summaries or concept maps: Which better represent reading text conceptualization? Technology, Instruction, Cognition and Learning, 8(3–4), 297–312.

Joo, Y.-J., Bong, M., & Choi, H.-J. (2000). Self-efficacy for self-regulated learning, academic self-efficacy, and internet self-efficacy in web-based instruction. Educational Technology Research and Development, 48(2), 5–17. http://dx.doi.org/10.1007/BF02313398.

Kalyuga, S. (2006). Assessment of learners' organised knowledge structures in adaptive learning environments. Applied Cognitive Psychology, 20, 333–342.

Kavale, K. A., & Forness, S. R. (1987). Substance over style: Assessing the efficacy of modality testing and teaching. Exceptional Children, 54(3), 228–239.

Keller, J. M. (2010). Motivational design for learning and performance: The ARCS model approach. New York: Springer.

Kim, C. (2012). The role of affective and motivational factors in designing personalized learning environments. Educational Technology Research and Development, 60(4), 563–584. http://dx.doi.org/10.1007/s11423-012-9253-6.

Kim, C., & Keller, J. M. (2010). Motivation, volition, and belief change strategies to improve mathematics learning. Journal of Computer Assisted Learning, 26(5), 407–420.

Kim, C., & Pekrun, R. (2014). Emotions and motivation in learning and performance. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 65–75). New York: Springer.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

Koedinger, K. R., & Aleven, V. (2007). Exploring the assistance dilemma in experiments with cognitive tutors. Educational Psychology Review, 19(3), 239–264.

Lajoie, S. P., & Azevedo, R. (2006). Teaching and learning in technology-rich environments. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (2nd ed., pp. 803–821). Mahwah, NJ: Lawrence Erlbaum Associates.

Latham, G. P., & Locke, E. A. (1991). Self-regulation through goal setting. Organizational Behavior and Human Decision Processes, 50, 212–247.

Lee, J. (2009). Effects of model-centered instruction and levels of learner expertise on effectiveness, efficiency, and engagement with ill-structured problem solving: An exploratory study of ethical decision making in program evaluation. Tallahassee, FL: Florida State University.

Lee, T.-H., Shen, P.-D., & Tsai, C.-W. (2008). Applying web-enabled problem-based learning and self-regulated learning to add value to computing education in Taiwan's vocational schools. Educational Technology & Society, 11(3), 13–25.

Lin, X., & Lehmann, J. D. (1999). Supporting learning of variable control in a computer-based biology environment: Effects of prompting college students to reflect on their own thinking. Journal of Research in Science Teaching, 36, 837–858.

Litzinger, T. A., Lee, S. H., Wise, J. C., & Felder, R. M. (2007). A psychometric study of the Index of Learning Styles. Journal of Engineering Education, 96(4), 309–319.

Low, R., & Jin, P. (2012). Self-regulated learning. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 3015–3018). New York: Springer.

Martin, S. (2010). Teachers using learning styles: Torn between research and accountability. Teaching and Teacher Education, 26(8), 1583–1591.

McKeown, J. O. (2009). Using annotated concept map assessments as predictors of performance and understanding of complex problems for teacher technology integration. Tallahassee, FL: Florida State University.

McNeill, M., Gosper, M., & Hedberg, J. (2012). Technologies and the assessment of higher order outcomes: A snapshot of academic practice in curriculum alignment. In P. Isaias, D. Ifenthaler, Kinshuk, D. G. Sampson, & J. M. Spector (Eds.), Towards learning and instruction in Web 3.0. Advances in cognitive and educational psychology (pp. 109–121). New York: Springer.

Meece, J. L. (1994). The role of motivation in self-regulated learning. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regulation of learning and performance: Issues and educational applications (pp. 25–44). Hillsdale, NJ: Lawrence Erlbaum Associates.

Moos, D. C., & Azevedo, R. (2008). Self-regulated learning with hypermedia: The role of prior domain knowledge. Contemporary Educational Psychology, 33(2), 270–298. http://dx.doi.org/10.1016/j.cedpsych.2007.03.001.

Pintrich, P. R. (1999). The role of motivation in promoting and sustaining self-regulated learning. International Journal of Educational Research, 31, 459–470.

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press.

Pintrich, P. R., Wolters, C., & Baxter, G. (2000). Assessing metacognition and self-regulated learning. In G. Schraw (Ed.), Metacognitive assessment (pp. 43–97). Lincoln, NE: The University of Nebraska Press.

Pirnay-Dummer, P., Ifenthaler, D., & Seel, N. M. (2012). Designing model-based learning environments to support mental models for learning. In D. H. Jonassen & S. Land (Eds.), Theoretical foundations of learning environments (2nd ed., pp. 66–94). New York: Routledge.

Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. M. (2010). Highly integrated model assessment technology and tools. Educational Technology Research and Development, 58(1), 3–18. http://dx.doi.org/10.1007/s11423-009-9119-8.

Puustinen, M., & Pulkkinen, L. (2001). Models of self-regulated learning. A review. Scandinavian Journal of Educational Research, 45(3), 269–286.

Reed, S. K. (2006). Cognitive architectures for multimedia learning. Educational Psychologist, 41, 87–98.

Rheinberg, F. (2004). Motivationsdiagnostik – Kompendien Psychologische Diagnostik (Vol. 5). Göttingen: Hogrefe.

Riding, R., & Grimley, M. (1999). Cognitive style, gender and learning from multi-media materials in 11-year-old children. British Journal of Educational Technology, 30(1), 43–56.

Roscoe, R. D., Snow, E. L., & McNamara, D. S. (2013). Feedback and revising in an intelligent tutoring system for writing strategies. In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Artificial intelligence in education (Vol. 7926, pp. 259–268). Heidelberg: Springer.

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.

Samruayruen, B., Enriquez, J., Natakuatoong, O., & Samruayruen, K. (2013). Self-regulated learning: A key of a successful learner in online learning environments in Thailand. Journal of Educational Computing Research, 48(1), 45–69. http://dx.doi.org/10.2190/EC.48.1.c.

Schallberger, U. (2005). Kurzskalen zur Erfassung der Positiven Aktivierung, Negativen Aktivierung und Valenz in Experience Sampling Studien (PANAVA-KS). Forschungsberichte aus dem Projekt "Qualität des Erlebens in Arbeit und Freizeit" (Vol. 6). Zürich: Psychologisches Institut.

Schiefele, U., & Pekrun, R. (1996). Psychologische Modelle des selbstgesteuerten und fremdgesteuerten Lernens. In F. E. Weinert (Ed.), Enzyklopädie der Psychologie – Psychologie des Lernens und der Instruktion. Pädagogische Psychologie (Vol. 2, pp. 249–287). Göttingen: Hogrefe.

Schmeck, R. R. (1988). Strategies and styles of learning: An integration of varied perspectives. In R. R. Schmeck (Ed.), Learning strategies and learning styles (pp. 317–348). London: Plenum Press.

Schmitz, B. (2001). Self-Monitoring zur Unterstützung des Transfers einer Schulung in Selbstregulation für Studierende: Eine prozessanalytische Untersuchung. Zeitschrift für Pädagogische Psychologie, 15(3/4), 181–197.

Schmitz, B., Landmann, M., & Perels, F. (2007). Das Selbstregulationsmodell und theoretische Implikationen. In M. Landmann & B. Schmitz (Eds.), Selbstregulation erfolgreich fördern: Praxisnahe Trainingsprogramme für effektives Lernen (pp. 312–326). Stuttgart: Kohlhammer.

Schraw, G. (2007). The use of computer-based environments for understanding and improving self-regulation. Metacognition and Learning, 2(2–3), 169–176.

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460–475.

Schunk, D. H. (1991). Self-efficacy and academic motivation. Educational Psychologist, 26, 207–232.

Schunk, D. H., & Zimmerman, B. J. (Eds.). (1998). Self-regulated learning: From teaching to self-reflective practice. New York: The Guilford Press.

Seel, N. M. (1999a). Educational diagnosis of mental models: Assessment problems and technology-based solutions. Journal of Structural Learning and Intelligent Systems, 14(2), 153–185.

Seel, N. M. (1999b). Educational semiotics: School learning reconsidered. Journal of Structural Learning and Intelligent Systems, 14(1), 11–28.

Shute, V. J., Jeong, A. C., Spector, J. M., Seel, N. M., & Johnson, T. E. (2009). Model-based methods for assessment, learning, and instruction: Innovative educational technology at Florida State University. In M. Orey (Ed.), Educational media and technology yearbook (pp. 61–79). New York: Springer.

Simons, P. R. J. (1992). Lernen selbstständig zu lernen – ein Rahmenmodell. In H. Mandl & H. F. Friedrich (Eds.), Lern- und Denkstrategien. Analyse und Intervention (pp. 251–264). Göttingen: Hogrefe.

Snider, V. E. (1992). Learning styles and learning to read: A critique. Remedial and Special Education, 13(1), 6–18.

Stachowiak, F. J. (1973). Allgemeine Modelltheorie. Berlin: Springer.

Stolp, S., & Zabrucky, K. M. (2009). Contributions of metacognitive and self-regulated learning theories to investigations of calibration of comprehension. International Electronic Journal of Elementary Education, 2(1), 7–31.

Taminiau, E. M. C. (2013). Advisory models for on-demand learning. Heerlen: Open Universiteit.

Taminiau, E. M. C., Kester, L., Corbalan, G., Alessi, S. M., Moxnes, E., Gijselaers, W. M., et al. (2013). Why advice on task selection may hamper learning in on-demand education. Computers in Human Behavior, 29(1), 145–154. http://dx.doi.org/10.1016/j.chb.2012.07.028.

Thillmann, H., Künsting, J., Wirth, J., & Leutner, D. (2009). Is it merely a question of "What" to prompt or also "When" to prompt? Zeitschrift für Pädagogische Psychologie, 23(2), 105–115.

Trumpower, D. L., Sharara, H., & Goldsmith, T. E. (2010). Specificity of structural assessment of knowledge. The Journal of Technology, Learning, and Assessment, 8(5), 2–32.

Tsai, C. W. (2010). The effects of feedback in the implementation of web-mediated self-regulated learning. CyberPsychology & Behavior, 13(2), 153–158.

Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327–352.

van den Boom, G., Paas, F. G., & van Merriënboer, J. J. G. (2007). Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction, 17(5), 532–548. http://dx.doi.org/10.1016/j.learninstruc.2007.09.003.

van den Boom, G., Paas, F. G., van Merriënboer, J. J. G., & van Gog, T. (2004). Reflection prompts and tutor feedback in a web-based learning environment: Effects on students' self-regulated learning competence. Computers in Human Behavior, 20(4), 551–567. http://dx.doi.org/10.1016/j.chb.2003.10.001.

van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(2), 147–177.

Veenman, M. V. J. (1993). Intellectual ability and metacognitive skill: Determinants of discovery learning in computerized learning environments. Amsterdam: University of Amsterdam.

Veenman, M. V. J. (2007). The assessment and instruction of self-regulation in computer-based environments: A discussion. Metacognition and Learning, 2(2–3), 177–183.

Veenman, M. V. J., van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1(1), 3–14.

Vollmeyer, R., & Rheinberg, F. (1999). Motivation and metacognition when learning a complex system. European Journal of Psychology of Education, 14(4), 541–554. http://dx.doi.org/10.1007/BF03172978.

Wild, K. P. (2000). Lernstrategien im Studium. Strukturen und Bedingungen. Münster: Waxmann.

Willett, J. B. (1988). Questions and answers in the measurement of change. Review of Research in Education, 15, 345–422.

Winne, P. H. (1983). Training students to process text with adjunct aids. Instructional Science, 12(3), 243–266. http://dx.doi.org/10.1007/BF00051747.

Winne, P. H. (2001). Self-regulated learning viewed from models of information processing. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement. Theoretical perspectives (pp. 153–190). Mahwah, NJ: Lawrence Erlbaum Associates.

Wirth, J. (2009). Prompting self-regulated learning through prompts. Zeitschrift für Pädagogische Psychologie, 23(2), 91–94.

Wygotski, L. S. (1969). Denken und Sprechen. Mit einer Einleitung von Thomas Luckmann. Übersetzt von Gerhard Sewekow. Stuttgart: Fischer Verlag.

Yang, Y.-F. (2010). Conceptions of and approaches to learning through online peer assessment. Learning and Instruction, 20, 72–83.

Zimmerman, B. J. (1989). Models of self-regulated learning and academic achievement. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement. Theory, research and practice (pp. 1–25). New York: Springer.

Zimmerman, B. J. (1998). Developing self-fulfilling cycles of academic regulation: An analysis of exemplary instructional models. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regulated learning: From teaching to self-reflective practice (pp. 1–19). New York: Guilford Press.

Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego, CA: Academic Press.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70.

Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183.

Zimmerman, B. J., & Campillo, M. (2003). Motivating self-regulated problem solvers. In J. E. Davidson & R. J. Sternberg (Eds.), The psychology of problem solving (pp. 233–262). New York: Cambridge University Press.

Zimmerman, B. J., & Schunk, D. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement. Theoretical perspectives (pp. 1–37). Mahwah, NJ: Lawrence Erlbaum Associates.
