Motivation in a MOOC: a probabilistic analysis of online learners' basic psychological needs
Tracy L. Durksen1 • Man-Wai Chu2 •
Zaheen F. Ahmad3 • Amanda I. Radil4 •
Lia M. Daniels4
Received: 31 May 2015 / Accepted: 7 December 2015
© Springer Science+Business Media Dordrecht 2016
Abstract Self-determination theory (SDT) is one of the most well-known
approaches to achievement motivation. However, the three basic psychological
needs of SDT have not received equivalent attention in the literature: priority has
been given to autonomy, followed by the need for competence, with research into
relatedness lacking (Bachman and Stewart in Teach Psychol 38: 180–187, 2011.
doi:10.1177/0098628311411798). One new educational setting where relatedness
may be particularly important is massive open online courses (MOOCs), which
provide unprecedented opportunities for either relatedness or isolation. The purpose
of the research was to use Bayesian networks (BN) to establish probabilistic
relationships between learners’ basic psychological needs in the context of one
1 School of Education, University of New South Wales, John Goodsell Building, Sydney,
NSW 2052, Australia
2 Werklund School of Education, University of Calgary, 2500 University Dr. NW, Calgary,
AB T2N 1N4, Canada
3 Department of Computing Science, University of Alberta, 2-21 Athabasca Hall, Edmonton,
AB T6G 2E8, Canada
4 Department of Educational Psychology, University of Alberta, 6-102 Education North,
Edmonton, AB T6G 2G5, Canada
Soc Psychol Educ
DOI 10.1007/s11218-015-9331-9
MOOC. The majority (59 %) of participants (N = 1037; 50 % female and 50 %
male) were under 45 (age range was 18–74 years). This sample represented
approximately 88 regions and countries. Participants completed a revised Basic
Student Needs Scale (Betoret and Artiga in Electron J Res Educ Psychol 9(2):
463–496, 2011). In order to reveal the best structural understanding of SDT within a
MOOC learning environment, analysis of the data involved the development of a
BN probabilistic model. The best fitting BN model included autonomy, competence,
and relatedness—resulting in a probabilistic accuracy of 77.41 %. Analyses
revealed participants with high autonomy had an 80.01 % probability of having a
moderate level of competence. Relatedness was distinct from the autonomy and
competence relationship. The strong inter-connections between autonomy and
competence support existing research. The notion that relatedness may be a distinct
need, at least in this context, was supported and warrants future research.
Keywords Student motivation · Self-determination theory · Online learning · Higher education · Bayesian network analysis · Massive open online course
1 Introduction
"[There are] no more lecture theatres in our buildings because we think they
can be done just as well online"—University Professor (Griffits and Raschella
2014).
Massive open online courses (MOOCs) are a new type of online learning
environment that allow an unlimited number of participants from anywhere in the
world free access to course content often delivered by world renowned experts.
Whether MOOCs are the ‘game-changer’ for developing countries (Warusavitarana
et al. 2014) or perceived as a digital threat to education (Noble 2001), something
motivates hundreds of thousands of learners to participate. This study provides
empirical evidence to aid the considerable commentary and informal debates that
surround the issue of MOOCs—an area of limited, but growing, peer-reviewed
research.
In the current study, we applied Ryan and Deci’s (2000) self-determination
theory (SDT) as a framework for analyzing the motivation of MOOC learners.
According to SDT there are three basic needs that, when met, give rise to optimally
motivated behavior: autonomy, competence, and relatedness. In education, research
has focused on autonomy and competence, largely ignoring relatedness—a need that
may be at a greater risk of being ignored within an online learning environment.
Even though empirical research on MOOCs is only beginning to emerge, SDT has
already established itself as a promising theory guiding both case study (e.g.,
Hartnett et al. 2014) and pre/post research designs (e.g., Beaven et al. 2014). Adding
to this body of research, our study asked: What relationships exist between the basic
psychological needs of MOOC learners via a Bayesian network (BN) analysis?
Typically, research on motivation or engagement in MOOCs has focused on very basic
indicators such as number of logins, downloads, and unstable Internet connections.
By using SDT as a conceptual framework and performing probabilistic analysis, we
went beyond these basic indicators of participation to develop a probabilistic
description of learners’ satisfaction of basic psychological needs within the context
of a MOOC.
1.1 Massive open online courses: a brief history
MOOCs are relatively new, yet have already impacted our ideology of learning
within an online environment (Liyanagunawardena et al. 2013). While some have
argued that MOOCs have revolutionized education by providing learners with
classes from the most prominent professors at no cost (Cusumano 2013), others
focus on MOOCs as a new application of old learning theories (Siemens 2004).
Even in their short history, MOOCs have morphed from their original design and
intent, have attracted learners from around the world, and have gained notoriety at
world-class educational institutions (Clarke 2013).
In 2012 Siemens distinguished between the original connectivist massive open
online courses (cMOOCs) and the newer massive open online courses such as
those offered by edX and Coursera® (xMOOCs). cMOOCs were originally developed based
on connectivist pedagogies that take advantage of the online environment for
connecting students (Siemens 2012). According to the tenets of connectivism,
students learn best by socially engaging with the material and sharing their ideas
with others. Connectivism proposed that knowledge is broadened when an idea is
critically presented, re-considered, and used as the basis of connections and
networks between learners. cMOOCs were built on this shared belief that
knowledge can be created through spontaneous and practical sharing of ideas.
That cMOOCs expected knowledge to be created, gathered, and re-distributed
was both their biggest advantage and downfall. Universities could see the advantage
of reaching thousands of learners but needed to do so with more conventional and
structured methods like traditional university courses; thus the xMOOC was born.
xMOOCs are just as massive, open, and online as cMOOCs but they are
philosophically different. xMOOCs tend to provide materials for students to learn
such as notes or pre-recorded video segments, use discussion boards or forums, and
assess learning through quizzes or tests (Clarke 2013). Even within this more
traditional format, there is evidence of spontaneous connections, groups, artifacts,
and learning, but these are not the driving force of an xMOOC. Thus, for xMOOCs
it may be true that "[i]n spite of all the hype about interactivity, 'lecturing' à la
MOOCs merely extends the cliché of the static, one-sided lecture hall, where
distance learning begins after the first row" (Newstok 2013, p. 18).
We believe that inquiries into motivation, however, can reveal the influential
factors that underlie connectivity in learning—regardless of the context. Because
now tens of thousands of learners may simply be exposed to "good" or "bad"
xMOOCs the same way traditional students have endured "good" and "bad"
lectures, seminars, and labs, it has become all the more important to understand how
the basic psychological needs are related to each other in an xMOOC.
1.2 Self-determined learning
We used SDT as the guiding framework for our research because the theory
positions motivation for learning on a continuum (from amotivation to extrinsic
motivation and towards intrinsic motivation) within an immediate social context—
ideally one with opportunities for need satisfaction. Given that three basic
psychological needs—autonomy, competence, and relatedness—have been universally
recognized as intrinsic motivators (Deci et al. 2001; Ryan and Deci 2000), the
current study aimed to contribute empirical evidence of a good model of SDT
within a MOOC learning environment. We also explored a fourth need—
belonging—since a sense of belonging may be achieved through an online
community with a shared interest in course content in a way that is different from
relatedness.
Researchers have explored need satisfaction across life domains (Deci and Ryan
2008; Milyavskaya and Koestner 2011) and learning environments (e.g., online;
Yang 2014). For example, Chen and Jang (2010) found SDT particularly suitable as
a framework for studies of motivation in online settings, given the apparent
connections to autonomy (e.g., flexible learning context), competence (e.g., content
and technological skill development), and relatedness (e.g., computer-mediated
social interactions). Autonomy has been defined as an individual’s experience of
volitional and freely chosen behaviors (Niemiec and Ryan 2009; Oliver et al. 2008;
Deci and Ryan 2008; Faye and Sharpe 2008). The need for competence, however,
can be satisfied by experiencing "behavior as effectively enacted" (Niemiec and
Ryan 2009, p. 135). For example, a learner may gain competence through a
challenging activity (i.e., neither too difficult nor too easy) that allows an individual
to feel effective in his or her environment (Faye and Sharpe 2008). The
psychological need for relatedness is experienced through feeling connected to
others within supportive or close relationships (Oliver et al. 2008; Reis et al. 2000),
while belonging may be more likely satisfied through collective and communal
shared interests (Baumeister and Leary 1995).
Autonomy-supportive teachers can maximize "students' perceptions of having a
voice and choice" (Niemiec and Ryan 2009, p. 139). While MOOCs offer students a
platform to work independently, this does not ensure that students feel autonomous.
Research from traditional classrooms found students with autonomy-supportive (as
opposed to controlling) teachers are more intrinsically motivated, more creative,
show more interest, and are more likely to persevere (Niemiec and Ryan 2009).
Teachers in a MOOC environment can also instruct in an autonomy-supportive
manner. For example, Yang (2014) asserts that the format and flexibility of online
courses can easily be used to encourage a student's sense of autonomy. Deci and
Ryan (2008) found that an autonomy-supportive learning environment leads to
better outcomes (when compared with a controlling climate) and that an absence of
autonomy support can substantially decrease students’ satisfaction of basic
psychological needs.
Niemiec and Ryan (2009) emphasize the pairing of autonomy and competence as
essential for intrinsic motivation. Teachers and the learning climate that they create
not only influence a student's sense of autonomy, but also a student's competence
(e.g., through task difficulty and feedback). Like self-efficacy (Bandura 1997), a
student’s need for competence can affect effort and achievement outcomes
(Bachman and Stewart 2011). Unlike self-efficacy, which defines confidence in
the ability to carry out a task in the future, student competence focuses on present
capability in an immediate social context. While self-efficacy can be measured
through questions that begin with "How confident are you in your ability to," SDT
researchers interested in measuring competence have used items such as "I am
capable of learning the material in this course" (Perceived Competence for Learning
Scale; Williams and Deci 1996).
Students’ learning benefits from student–student and teacher–student connec-
tions, however relatedness has been an understudied component (Bachman and
Stewart 2011). Moller et al. (2010) state that positive trusting relationships define
the psychological need for relatedness—one that requires interactions with "real
human[s]" (p. 760). An environment that promotes teacher–student relatedness can
increase both teacher and student engagement, and consequently learning (Klassen
et al. 2012, 2013). In traditional classrooms, teachers who intentionally and
regularly provide one-on-one or small group support can create positive experiences
that students may draw upon when faced with learning or interpersonal challenges
in the future (Klassen et al. 2012). While there is evidence showing how online
learning can meet students’ needs of autonomy and competence, the challenge often
remains with satisfying the need for relatedness through computer-mediated
interactions.
While relatedness tends to address the teacher–student relationship, belonging
may best be satisfied through a school, classroom, or online community of learners.
Well-designed learning communities, off- and online, have the potential for
encouraging relatedness and a sense of belonging through the development and
maintenance of genuine and caring relationships. A virtual community can provide
the motivational context for belonging with membership in a social space,
irrespective of actual physical locality (Ainley and Armatas 2006). Although
relatedness and belonging have been used interchangeably in the literature—with
the need for relatedness defined as the need for experiencing belonging (Osterman
2000)—Betoret and Artiga (2011) considered belonging to be a separate need for
students.
1.3 Bayesian network analysis
Bayesian network (BN) analyses are applied in many disciplines to represent and
model different theories (Darwiche 2010). For example, BN analysis has been
successfully applied to SDT by researchers looking at reasoning patterns in
competitive sports (Fuster-Parra et al. 2014). The strength of this model is that it
allows for a representation of the relationships in a domain while using probabilities
to characterize the strength of those relationships (West et al. 2010). In the current
study, we chose BN analysis as one way to find a model that represents the extent to
which learners’ basic psychological needs were met in the context of learning in a
MOOC.
BNs are probabilistic graphical models that represent conditional dependencies in
a framework that will yield the highest expected information on the flow of
influence between variables (Koller and Friedman 2009; Russell and Norvig 2003).
They are typically represented as directed acyclic graphs (DAG), which represent
data using objects called nodes and edges. A node is a variable (e.g., observable or
latent) and an edge is a connection (e.g., one directional dependency) between two
nodes. Data can flow from one node to another along an edge or set of edges. For
instance, node A has an edge with node B, and node B also has an edge with a node
C. Since information can flow from nodes A to C through node B, a path exists
between nodes A and C.
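The node–edge–path idea above can be sketched in a few lines of Python. The node names A, B, and C mirror the example in the text; this is an illustrative sketch, not code from the study:

```python
def path_exists(edges, start, goal):
    """Depth-first search along directed edges: True if information can
    flow from `start` to `goal`, possibly through intermediate nodes."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(dst for src, dst in edges if src == node)
    return False

# The example from the text: A has an edge to B, and B has an edge to C,
# so a path exists between A and C.
edges = [("A", "B"), ("B", "C")]
print(path_exists(edges, "A", "C"))  # True
print(path_exists(edges, "C", "A"))  # False: the edges are directed
```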
DAGs for BNs are a compact representation of a complete joint probability
table given a set of random variables. DAGs, as the name suggests, are composed of
directed and acyclic graphs. A directed graph is one where the edges have an
explicitly defined direction for the passage of information between two nodes. For
example, a directed edge from node A to node B shows that information can be
passed from A to B but not from B to A. An acyclic graph is one where, given a set
of nodes, the edges between nodes exist in a manner such that no cyclic path exists
between the nodes, ensuring a one-way flow of information. The nodes and edges
represent the random variables and the dependencies between them. If there is a
directed edge from node A to node B, it shows that the variable of node A has a
causal effect on the variable of node B. In the reverse direction, however, the
variable of node B has only an indirect effect on the variable of node A: an effect
is implied only once evidence on node B is observed. A Bayesian network represents
these relationships over a set of many variables so that calculating joint
probabilities becomes computationally easier.
The heart of a BN is the inversion formula, which follows Bayes' (1763) rule:

P(H | e) = P(e | H) P(H) / P(e)
In the formula, P(H) represents the probability that the hypothesis is true and is also
known as the prior; P(e) represents the probability that the evidence observed is
true; P(e|H) indicates the probability that e will materialise if H is true; and P(H|e) is
the probability that the hypothesis is true if the evidence is also true. Unlike null
hypothesis significance testing, Bayesian analysis allows for direct statements about
the probability of researchers’ parameters of interest based on a prior probability
distribution chosen before analysis (Zyphur and Oswald 2015). In this study, we
chose to estimate prior distributions from the data with maximum likelihood as
empirical priors (i.e., empirical Bayes estimation).
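As a numeric illustration of the inversion formula, consider hypothetical values that are not estimates from this study: suppose the prior P(H) is 0.3, and the evidence occurs with probability 0.9 when H holds and 0.4 otherwise:

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|e) = P(e|H) P(H) / P(e), with P(e) expanded by
    the law of total probability over H and its complement."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

# Hypothetical numbers, not taken from the study's data:
print(posterior(0.3, 0.9, 0.4))  # 0.27 / 0.55 ≈ 0.4909...
```

Observing the evidence raises the probability of the hypothesis from the prior of 0.3 to roughly 0.49.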
Our goal was to construct a BN model that reflected a representation of joint
probability distributions and included a collection of conditional independent
statements to help guide inferences of the networks (Russell and Norvig 2003). In
the current study, we used structure learning, an exploratory analysis method that
can represent joint probability distributions. Exploratory data analyses often require
the testing of many models with modifications so that the best fitting model is found.
Structure learning borrows principles from machine learning, which uses an
exhaustive and automated search technique that examines all possible connections
between the nodes in the model (Koller and Friedman 2009; Pearl 2000). Since
employing an exhaustive search would be computationally infeasible, we used a
local hill search method to find the locally best model. Machine learning is a field
concerned with building models from observed data and using those models to make
predictions on new data; BNs are one such machine learning technique, using data
to model beliefs.
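The search described above can be sketched as score-based greedy hill climbing over edge additions. This is a simplified, self-contained illustration of the general technique (discrete variables, a BIC-style local score), not the Matlab program used in the study:

```python
import math
from collections import Counter

def local_bic(data, child, parents):
    """BIC contribution of one node: maximum log-likelihood of the child
    given its parents, minus a penalty per free parameter."""
    n = len(data)
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in counts.items())
    states = len({row[child] for row in data})
    penalty = 0.5 * math.log(n) * len(parent_counts) * (states - 1)
    return loglik - penalty

def hill_climb(data, variables):
    """Greedily add the single edge that most improves the total score,
    rejecting edges that would create a directed cycle, until no gain remains."""
    parents = {v: [] for v in variables}

    def has_path(src, dst):  # is there a directed path src -> ... -> dst?
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u in seen:
                continue
            seen.add(u)
            stack.extend(child for child, ps in parents.items() if u in ps)
        return False

    improved = True
    while improved:
        improved, best = False, None
        for a in variables:
            for b in variables:
                if a == b or a in parents[b] or has_path(b, a):
                    continue  # skip self-loops, existing edges, and cycles
                gain = (local_bic(data, b, parents[b] + [a])
                        - local_bic(data, b, parents[b]))
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, a, b)
        if best is not None:
            _, a, b = best
            parents[b].append(a)
            improved = True
    return parents

# Toy data: responses to two items where X strongly predicts Y.
data = ([{"X": 0, "Y": 0}] * 40 + [{"X": 1, "Y": 1}] * 40
        + [{"X": 0, "Y": 1}] * 10 + [{"X": 1, "Y": 0}] * 10)
print(hill_climb(data, ["X", "Y"]))  # exactly one edge between X and Y is learned
```

Note the design choice mirrored from the text: rather than enumerating every structure (computationally infeasible), the search moves one edge at a time toward a local optimum of the score.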
The goal of this paper is to represent the relationships between the different
aspects of SDT. This technique is suitable because it allows for accurate modelling
of the basic psychological needs forwarded by SDT and inferences on how
observing evidence on one psychological need may affect other psychological
need(s). An advantage of BN is its ability to automate the modelling process and
reduce the complexity of the final model. Unlike most other techniques (e.g., neural
networks) used to make predictions based on patterns in data, a BN is more
transparent. It provides an easy-to-understand model of correlations in the
observation by presenting relationships between variables as simple conditional
probabilities, thus allowing experts to apply domain knowledge to the problem.
Therefore, we can use BNs to verify whether a hypothesized model is able to
explain real observations.
2 Method
2.1 Data collection
Data were collected during a Canadian university’s first offering of an xMOOC. The
introductory course in dinosaur paleobiology (Dino 101) consisted of twelve lessons
on non-avian dinosaurs. Three versions of the course were available to learners: (1)
universally for free (no exams); (2) universally for course accreditation with a
modest fee; and (3) for students enrolled at the university offering the course. Course content
and lessons were offered through interactive digital activities as well as videos. The
course included formative assessments such as quizzes embedded in the videos and
summative assessments such as end-of-chapter multiple choice quizzes and
university-built online exams (Chesney 2013a). In total, 23,252 people enrolled in
Dino 101 during its inaugural offering in the last semester of 2013. We were
interested in studying motivation and emotions of students in Dino 101 from a
psychological perspective that is not captured by the usage and demographic
surveys administered by the MOOC.1 Instead, we invited learners to complete one
questionnaire at the conclusion of Dino 101 that included a number of scales such as
Task Value Questionnaire (Wigfield 1994), Achievement Emotions Questionnaire
(Pekrun et al. 2005) and the Academic Motivation Scale (Vallerand et al. 1992).
1 The authors were not Dino 101 course instructors or students, but were provided access as researchers.
2.1.1 Participants
Participants were recruited through a voluntary link posted on the main page of the
Dino 101 course website. The link was available for approximately one month,
from the last week of the course until the official end of the course. Any student
registered in Dino 101 who logged in during that time would have access to the link
regardless of how frequently they had participated in Dino 101. Thus, although it is
more likely that respondents were learners still actively participating in Dino 101,
all registrants had the ability to participate. From this link, learners were connected
to the online questionnaire (via Survey Monkey®). Study participants (511 female,
515 male, 11 unspecified) were Dino 101 learners (N = 1037) who ranged from 18
to 74 years of age (59 % under 45) from approximately 88 regions and countries.
Prior to enrolling in the MOOC, 34.1 % of participants had already achieved a
university degree. Similar demographic details were considered representative of a
respondent group of 3500 Dino 101 learners (Chesney 2013b).
2.1.2 Measures
With SDT as our framework, we chose to use learners’ responses to a revised
version of the Basic Student Needs Scale (BSNS; Betoret and Artiga 2011). This
scale was initially based on the Basic Psychological Needs Scale (BPNS; Ilardi et al.
1993), which consisted of three components (autonomy, competence, and
relatedness). Betoret and Artiga (2011) later added a fourth component (belonging) to
reflect Goodenow’s (1992) work on student’s feelings toward belonging to a class or
peer group. The relatedness items from the BSNS scale were replaced with the
relatedness items from the Basic Need Satisfaction at Work scale (BNSW; Deci
et al. 2001). The original BSNS relatedness items focused mostly on the relationship
between students and the teacher, while the BNSW relatedness items were more
relevant to relationships within the entire online community.
The four original subscales of the BSNS were reliable, as indicated by
Cronbach’s alphas (.65 to .86; Betoret and Artiga 2011). The relatedness subscale
(from the BNSW scale; Deci et al. 2001) that replaced the BSNS relatedness scale
was also reliable (α = 0.84). Table 1 presents the revised scale items that were used
in the current study, with subscale reliabilities ranging from .61 to .80. The
reliability of the revised relatedness scale was less than optimal (.61), likely due to
the revised content and the limited number of items (3 instead of 8). The low
Cronbach's alpha for the relatedness scale, however, did not have an impact on the
BN analysis. Alphas and correlations do not bear directly on BN analysis because
an alpha summarizes a group of items and a correlation summarizes a pair of items,
whereas BN analysis looks at each item individually.
Original subscale items were re-worded to prompt learners to reflect upon their
contextual experiences in the Dino 101 course and rated on a 7-point Likert scale
(1 = Strongly Disagree and 7 = Strongly Agree). Autonomy items measured a
learner’s internal acceptance of, and engagement with, his or her motivated behavior
(e.g., ‘‘I have been able to freely decide my own pace of learning during Dino
T. L. Durksen et al.
123
101’’); competence items measured a learner’s ability to take on and master
challenging tasks (e.g., ‘‘I felt I was capable while learning in Dino 101’’);
relatedness items measured a learner’s feeling of being connected to or understood
by others (e.g., ‘‘I really liked the people I learned with in Dino 101’’); and
belonging items measured a learner’s feeling of being part of a group (e.g., ‘‘There
has been a strong feeling of friendship in Dino 101’’).
2.2 Data analysis
In order to reveal the best structural understanding of SDT using data based on
learning within one MOOC, we developed a probabilistic model using BN analysis.
Given the increasing MOOC offerings through universities, this approach allowed
for a probabilistic analysis of learners’ psychological needs satisfaction that can be
used when highlighting practical implications for online learning. In the current
study, we used Murphy’s (1997–2002) Bayes Net Toolbox to write the program in
Matlab.2 Each node in the model represented an item from the revised BSNS. After
the items were selected, we deleted 8.3 % of the cases (listwise deletion; Acock
2005; Gravetter and Wallnau 2009) because those participants did not respond to any
of the items on basic needs. The analysis continued with the data collected from the
remaining 950 participants.

Table 1 Items and subscale reliabilities for the Basic Student Needs Scale (revised)

Subscale items (with Cronbach's alphas)

Autonomy (α = 0.80)
1. I have been able to freely decide my own pace of learning during Dino 101 (A1)
2. I was able to freely choose the tasks I will do during Dino 101 (A2)
3. I felt I was capable of deciding about how to learn and work in Dino 101 (A3)
4. Dino 101 allowed me to work as independently, collaboratively or cooperatively as I wanted (A4)

Competence (α = 0.71)
5. I felt capable while learning in Dino 101 (C1)
6. I had the chance to show my capacities during Dino 101 (C2)
7. I was able to learn new and interesting material in Dino 101 (C3)
8. I felt competent enough to meet the challenges and tasks posed by Dino 101 (C4)

Relatedness (α = 0.61)
9. I really liked the people I learned with in Dino 101 (R1)
10. I pretty much keep to myself in Dino 101a (R2)
11. People in Dino 101 were pretty friendly towards me (R3)

Belonging (α = 0.80)
12. There has been a strong feeling of friendship in Dino 101 (B1)
13. I felt at ease in Dino 101 (B2)
14. Being in Dino 101 felt like belonging to a large family (B3)
15. I got the feeling that we formed a large team in Dino 101 (B4)
16. I will remember my classmates from Dino 101 affectionately in the future (B5)

a Reverse scored

2 Contact the authors if you are interested in the code that generated the results in this paper.
The structure learning algorithm we used to construct the model systematically
tested each possible link and calculated fit indices for each model using a local hill
search method (Koller and Friedman 2009). The indices used in the study were the
Bayesian Information Criterion (BIC; Schwarz 1978) and accuracy3 scores. BIC
scores are a criterion for model selection among a finite number of models (see
Schwarz 1978 for formulas). When developing models, it is possible to increase the
likelihood function by adding parameters, but this may result in over fitting the
model to the data. BIC scores offer a solution by introducing a penalty for each
parameter of the model. Often the largest BIC score among a set of models is
considered to be the best fitting model for the data.
In calculating the BIC, our goal was to achieve a score that represented the best
fit of a model. Each time we represented a portion of the model correctly, we
increased the score. At the same time, we were looking for a structure that provided
good accuracy without requiring too many edges between nodes. We also lowered
the score by penalizing structures that had an increased number of edges. We
penalized structures with more edges since the increased number of edges may lead
to over fitting (i.e., the model being so complex that it describes the random error
instead of underlying relationships).
In the current study, the penalty was reflected through the lowering of the BIC
score. Specifically, we took the Maximum Likelihood Estimate (MLE) and
subtracted the penalty from it (i.e., a larger BIC score indicated a better fitting model).
It is important to note that some programs will instead add a penalty score to -MLE
(i.e., a lower BIC score would represent a better fitting model). This often leads to
confusion, so it is essential for researchers to state which equation is being used
when comparing models. For our equation, we maximized the log likelihood
(MLE - penalty), while others may choose to minimize the negative log
likelihood (-MLE + penalty). Our procedure produced a larger BIC score
(indicating a better fit) while the latter would produce a smaller BIC score (also
indicating a better fit). In sum, we used MLE when calculating BIC scores (Murphy
2012), where a larger BIC score was indicative of a better fit.
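The two sign conventions can be checked directly. The log-likelihoods and parameter counts below are made-up numbers for illustration only; under either convention the same model is selected:

```python
import math

def bic_max(loglik, k, n):
    """Convention used here: log-likelihood minus penalty; larger is better."""
    return loglik - 0.5 * k * math.log(n)

def bic_min(loglik, k, n):
    """Alternative convention: negative log-likelihood plus penalty; smaller is better."""
    return -loglik + 0.5 * k * math.log(n)

# Two hypothetical structures fit to n = 950 cases: the denser model fits a
# little better but pays a larger penalty for its extra parameters.
n = 950
sparse = bic_max(-3450.0, 40, n)   # fewer edges, fewer parameters
dense = bic_max(-3430.0, 80, n)    # more edges, higher likelihood
print(sparse > dense)  # True: the sparse model wins under "larger is better"
print(bic_min(-3450.0, 40, n) < bic_min(-3430.0, 80, n))  # True: same winner under "smaller is better"
```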
3 Results
Descriptive statistics and correlations between the 16 items of the revised BSNS are
displayed in Table 2. Since the BN model is robust against low or weak coefficients,
the non-significant correlations that existed for some items did not have an impact
on the analyses. Items that were skewed in the current study, however, were
problematic since the low sample size on some of the scales made it difficult for the
model to converge. Therefore, we reduced the 7-point scale to a 3-point scale (1,
3 To calculate the accuracy score, we used approximately 80 % of the data (n = 750) to develop the model and then
used the remaining 20 % (n = 200) to calculate the accuracy of the model.
Table 2 Descriptive statistics and correlations for autonomy (A1–A4), competence (C1–C4), relatedness (R1–R3), and belonging (B1–B5)

       A1       A2       A3       A4       C1       C2       C3       C4
A1  6.63 (.87)
A2  .65**    6.31 (1.13)
A3  .53**    .47**    6.24 (1.06)
A4  .50**    .48**    .44**    6.26 (1.11)
C1  .61**    .50**    .56**    .48**    6.35 (.98)
C2  .19**    .27**    .41**    .29**    .30**    4.94 (1.36)
C3  .50**    .39**    .45**    .44**    .44**    .32**    6.47 (.96)
C4  .57**    .41**    .56**    .45**    .70**    .31**    .36**    6.28 (1.05)
R1  .16**    .23**    .24**    .23**    .20**    .32**    .23**    .17**
R2  -.35**   -.28**   -.25**   -.27**   -.40**   -.04     -.25**   -.32**
R3  .12**    .18**    .22**    .19**    .20**    .30**    .18**    .16**
B1  .07*     .15**    .24**    .18**    .12**    .36**    .21**    .09**
B2  .43**    .38**    .48**    .50**    .48**    .31**    .45**    .44**
B3  .02      .11**    .18**    .16**    .08*     .41**    .20**    .07*
B4  .07*     .11**    .19**    .20**    .13**    .40**    .22**    .12**
B5  -.02     .08**    .10**    .11**    .05      .29**    .12**    .01

       R1       R2       R3       B1       B2       B3       B4       B5
R1  4.81 (1.32)
R2  .13**    1.93 (1.25)
R3  .71**    .17**    4.69 (1.24)
B1  .55**    .16**    .55**    4.19 (1.31)
B2  .27**    -.18**   .25**    .25**    6.01 (1.26)
B3  .47**    .16**    .48**    .68**    .23**    4.03 (1.44)
B4  .42**    .12**    .42**    .55**    .22**    .60**    4.38 (1.48)
B5  .37**    .18**    .39**    .60**    .16**    .63**    .47**    3.64 (1.45)

Means with standard deviations in parentheses appear on the diagonal
* Correlations are significant at 0.05
** Correlations are significant at 0.01
2 = low; 3, 4, and 5 = medium; 6, 7 = high). Our solution also enhanced the
interpretability of the BN model since low, medium, and high responses to items
allowed for more general conclusions about the interconnectedness between the
components.
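The collapsing of 7-point responses into the three levels described above amounts to a simple binning rule; the sketch below is illustrative, not the study's code:

```python
def collapse(response):
    """Map a 7-point Likert response onto the 3-point scale used in the
    analysis: 1-2 -> low, 3-5 -> medium, 6-7 -> high."""
    if not 1 <= response <= 7:
        raise ValueError("Likert responses must lie in 1..7")
    if response <= 2:
        return "low"
    if response <= 5:
        return "medium"
    return "high"

print([collapse(r) for r in range(1, 8)])
# ['low', 'low', 'medium', 'medium', 'medium', 'high', 'high']
```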
We first performed a structure learning algorithm for Ryan and Deci's (2000)
three-component (autonomy, competence, and relatedness) SDT, which resulted in
an approximation of the best fitting model (see Fig. 1). This diagram has a BIC value of
-3400.1 and a probabilistic accuracy of 77.41 %. Each box (or node) represents an
item from the revised BSNS (items are listed in Table 1). The percentages in each
box represent the probabilities of learners' responses to that item, and the directional
arrows connecting nodes represent the influence of one item on another. For
example, Autonomy (3) has an arrow directed towards Autonomy (1), which indicates
that students' responses to Autonomy (3) have a direct effect on their responses to
Autonomy (1). Autonomy (1), in turn, had an indirect effect on Autonomy (3),
indicating that the two items depend on one another. A direct effect means no
additional information is needed to learn about the relationship of Autonomy (3) to
Autonomy (1); an indirect effect, however, cannot describe the influence of
Autonomy (1) on Autonomy (3) without observed evidence. That is, if we know
nothing certain about the parameters of Autonomy (1), we cannot say how it affects
Autonomy (3).
In Fig. 1, all autonomy items are linked, which indicates that they depend on one
another; similarly, all the competence items are linked. Therefore, items
measuring autonomy and competence have dependencies within their respective
groups. One of the more interesting links in Fig. 1 is between competence (e.g., "I
had the chance to show my capacities during Dino 101") and autonomy (e.g., "I
have been able to freely decide my own pace of learning during Dino 101"), in
which students' responses to the autonomy items may have an effect on their
Fig. 1 Bayesian network model of self-determination theory with autonomy, competence, and relatedness items
response to the competence items. Relatedness, however, was not linked to
autonomy or competence.
We also applied an exploratory structure-learning algorithm to all 16 items in order
to test the inclusion of the fourth component (belonging). Figure 2 illustrates links
between four Belonging items (12, 13, 15, 16), indicating direct or indirect
dependence among them. An interesting link in Fig. 2 is between Competence (8)
and Belonging (16), which indicates that students' responses to the autonomy items
affect their responses to the competence items, which in turn affect
some of the belonging items (i.e., 12, 13, 15, and 16). Figure 2 displays the best-
fitting model for the four components (BIC = -4819.2) among all possible
diagrams. Although this model's accuracy (79.63 %) was slightly better
than that of the model in Fig. 1, its BIC score was much lower. We therefore
selected the three-component model in Fig. 1 as the best-fitting model for the data.
4 Discussion
BN analysis allowed for a clear graphical representation of the relationships
between learners’ basic psychological needs as reported within the context of a
MOOC. The best fitting model was a three-component model consisting of
autonomy, competence, and relatedness. Belonging did not fit in a meaningful way
and thus was excluded from the final model. In this discussion we focus on the
closeness of autonomy and competence, the separation of relatedness, and why
belongingness did not make a meaningful contribution.
Although relatedness was included in the overall model, it was distinct from
competence and autonomy (see Fig. 1). The lack of connection between relatedness
in Fig. 1 (or belonging and relatedness in Fig. 2) to the other two basic
psychological needs provides some evidence that MOOC learners may experience
competence and autonomy support separately. The distinction may also indicate that
Fig. 2 Bayesian network model of self-determination theory with autonomy, competence, relatedness with belonging items
the needs of relatedness and belonging are not being met to the same extent as
competence and autonomy. The MOOC learning context may also lack the
necessary ‘real human factors’ for relatedness or belonging (Moller et al. 2010).
High autonomy-related results were not surprising since one advantage of a
MOOC learning environment is that it provides learners with flexibility and the
opportunity to freely choose when to access learning materials. As displayed
through Fig. 1, participants with high autonomy had a high (80.01 %) probability of
having a moderate level of competence. For example, when a learner personally
chooses an assignment topic to explore, autonomy may increase, leading to more
engaged learning and, in turn, to gains in learning that contribute to
competence satisfaction. Overall, our model may provide some support for previous
suggestions that competence and autonomy are closely linked (i.e., sharing
supportive practices) and why the relationship between these constructs tends to
be more prominently highlighted through SDT research (Bachman and Stewart
2011).
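Statements such as "an 80.01 % probability of moderate competence given high autonomy" are conditional probabilities read off the fitted network. A minimal sketch of such a query follows; the table values and the prior are hypothetical illustrations, not the paper's fitted parameters:

```python
# Hypothetical conditional probability table P(competence | autonomy);
# the numbers are illustrative, not the study's fitted values.
cpt = {
    "high":   {"low": 0.05, "medium": 0.80, "high": 0.15},
    "medium": {"low": 0.20, "medium": 0.60, "high": 0.20},
    "low":    {"low": 0.50, "medium": 0.40, "high": 0.10},
}
prior_autonomy = {"low": 0.10, "medium": 0.35, "high": 0.55}

# With autonomy observed, the query is a direct table lookup.
print(cpt["high"]["medium"])  # 0.8

# Without evidence, competence is marginalized over autonomy.
p_medium = sum(prior_autonomy[a] * cpt[a]["medium"] for a in prior_autonomy)
print(round(p_medium, 3))
```

The contrast between the two queries is the practical payoff of a BN: observing one need (here, autonomy) sharpens the predicted distribution over another (competence) relative to the unconditional marginal.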
Our results suggest that meeting the need of relatedness through computer-
mediated interactions can be more difficult than meeting the combined needs of
autonomy and competence, at least within this MOOC. We tied relatedness to
student–teacher interactions, a connection that was supported by Hartnett et al.
(2014) who applied SDT to online learning and revealed more themes of relatedness
connected to a teacher than to peers. For MOOCs this may be particularly
challenging, as the student–teacher ratio is exceptionally high. Nonetheless, evidence
suggests that MOOC instructors can hold a ‘rock star’-like quality
while appearing warm and attentive (Daniels et al. 2016; Kolowich 2014). Future
research needs to examine these interpretations of the instructor more closely,
perhaps through qualitative methodologies so that we can understand how
relatedness is supported. Measuring teacher–student interactions in MOOCs can
help provide a more nuanced understanding of the seemingly distinct nature of
relatedness and, in the process, offer suggestions for need satisfaction.
Satisfying the need for relatedness can be difficult in any learning environment.
We encourage further research that looks at the ways in which relationships among
members of the online community can contribute to the learning climate.
Researchers (e.g., Ho et al. 2014; Kolowich 2014; Waldrop 2013) recommend
classifying data according to MOOC learners’ enrolment reasons. For example,
future research about Dino 101 can explore similarities and differences across the
learners within the three course versions that were offered. Comparative studies that
take into account the type of learning environment (e.g., blended, face-to-face,
flipped, MOOC) and a range of subject offerings will also advance motivational
research. Since relatedness has also been an under-emphasized SDT need of
teachers (see Klassen et al. 2012), we recommend researchers incorporate a measure
of MOOC instructors’ level of need satisfaction, in addition to self-reported
teaching styles, in order to advance understanding of teacher motivation and gain a
more complete picture of the learners’ online environment.
4.1 Limitations
The results of the current study need to be considered in light of two main
limitations pertaining to the sample and survey administration. First, the current
study provided probabilistic results based on one quantitative questionnaire
completed by a convenience sample. Although our sample was representative of
Dino 101 learners, we relied on data from approximately 4 % of the total number of
learners who were enrolled in the course at any given time. Moreover, we are unable
to determine the extent to which our respondents completed Dino 101. Data from
learners who intentionally left the course part way through would provide a valuable
perspective and help inform course design efforts that promote relatedness in
connection with satisfying autonomy and competence, enhanced student learning,
and the retention of more learners. Linking motivation to the level of participation
in Dino 101 would have created a compelling case; however, that was beyond the
scope of the current study, which could not link the Survey Monkey® survey to the
Coursera® course platform. Second, we administered the questionnaire only once
and at the end of Dino 101. Future studies on motivation within a MOOC would
also benefit from longitudinal explorations that collect questionnaire outcome data
and responses at multiple time points (e.g., pre-course, during, and post-course).
This type of quantitative data paired with qualitative research using a diary method
of data collection can also provide insight into how learners’ motivational needs are
met.
4.2 Conclusion
Teachers can promote learner engagement in a range of settings, and with the ever-
changing technological landscape, motivational research is the key to ensuring we
understand how to best support learners in multiple contexts. Although MOOCs
allow for a range of learners to connect geographically, the developing practice of
technologies that rely on computer-mediated social interactions may pose a risk to
learners’ self-determined learning. While the MOOC learning environment may be
more conducive to an instructor’s competence and autonomy-supportive actions,
relatedness with thousands of learners is more challenging given the limited
opportunity for genuine instructor-to-learner interactions. We tied belonging to
student–student interactions and yet found no place for this construct in the BN
model. Thus, although there are thousands of learners, this sample did not translate
the sheer magnitude of people into perceptions of belongingness. Although learners
can be technologically connected to others with shared interests, the need for
belonging may be best satisfied when a MOOC is organized into smaller
communities that are driven by social engagement and knowledge sharing.
Therefore, we recommend instructors use strategic course design and facilitation
strategies in order to provide the foundation for relatedness and feelings of
belonging among learners as they co-create their learning environment.
In this study, we explored the satisfaction of psychological needs in the context
of a learning environment that has come about through recent technological
advances. New technologies will continue to add a level of complexity to our
understanding of the multiple factors that influence student motivation. Though
technologies may claim to promote interactive learning, we need further
comparative evidence of need satisfaction with face-to-face learning environments—
specifically the need for relatedness (Newstok 2013). Research that draws attention
to the motivational need of relatedness across learning environments, and
specifically the human factor mediated by technology, will help ensure learners
and their teachers feel supported and engaged. In doing so, we can highlight the
importance of connectivism and continue to promote education as an enriching civic
transaction (Moe 2014).
References
Acock, A. C. (2005). Working with missing values. Journal of Marriage and Family, 67(4), 1012–1028.
doi:10.1111/j.1741-3737.2005.00191.x.
Ainley, M., & Armatas, C. (2006). Motivational perspectives on students’ responses to learning in virtual
learning environments. In J. Weiss, J. Nolan, J. Hunsinger, & P. Trifonas (Eds.), The international
handbook of virtual learning environments (Vol. 1, pp. 365–394). The Netherlands: Springer.
Bachman, C. M., & Stewart, C. (2011). Self-determination theory and web-enhanced course template
development. Teaching of Psychology, 38, 180–187. doi:10.1177/0098628311411798.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a
fundamental human motivation. Psychological Bulletin, 117, 497–529.
Bayes, T. P. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical
Transactions of the Royal Society of London, 53, 370–418.
Beaven, T., Hauck, M., Comas-Quinn, A., Lewis, T., & de los Arcos, B. (2014). MOOCs: Striking the
right balance between facilitation and self-determination. MERLOT Journal of Online Learning and
Teaching, 10. http://jolt.merlot.org/
Betoret, F. D., & Artiga, A. G. (2011). The relationship among students psychological need satisfaction,
approaches to learning, reporting of avoidance strategies and achievement. Electronic Journal of
Research in Educational Psychology, 9(2), 463–496.
Chen, K.-C., & Jang, S.-J. (2010). Motivation in online learning: Testing a model of self-determination
theory. Computers in Human Behavior, 26, 741–752. doi:10.1016/j.chb.2010.01.011.
Chesney, J. (2013a). MOOC v2.0: How Dino 101 is different and what we’ve observed so far (Part 1).
Edmonton, Alberta: University of Alberta News and Events [Webpage]. http://uofa.ualberta.ca/
digital-learning/digital-learning-at-ualberta/news-and-events/2013/december/mooc-v2-how-Dino-
101-is-different-and-what-weve-observed-so-far-part1
Chesney, J. (2013b). MOOC v2.0: How Dino 101 is different and what we’ve observed so far (Part 2).
Edmonton, Alberta: University of Alberta News and Events [Webpage]. http://uofa.ualberta.ca/
digital-learning/digital-learning-at-ualberta/news-and-events/2013/december/mooc-v2-how-Dino-
101-is-different-and-what-weve-observed-so-far-part2
Clarke, T. (2013). The advance of the MOOCs (massive open online courses): The impending
globalization of business education? Emerald Insight: Education and Training, 55(4/5), 403–413.
Cusumano, M. A. (2013). Technology strategy and management: Are the costs of ‘free’ too high in online
education? Communications of the ACM, 56(4), 26. doi:10.1145/2436256.2436264.
Daniels, L. M., Adams, C., & McCaffrey, A. (2016). Emotional and social engagement in a massive
open online course: An examination of Dino 101. In S. Y. Tettegah & M. P. McCreery (Eds.),
Emotions, technology, and learning (pp. 25–41). London, UK: Elsevier.
Darwiche, A. (2010). What are Bayesian networks and why are their applications growing across all
fields? Communications of the Association for Computing Machinery, 53, 80–90. doi:10.1145/
1859204.1859227.
Deci, E. L., & Ryan, R. M. (2008). Facilitating optimal motivation and psychological well-being across
life’s domain. Canadian Psychology, 49, 14–23. doi:10.1037/0708-5591.49.1.14.
Deci, E. L., Ryan, R. M., Gagne, M., Leone, D. R., Usunov, J., & Kornazheva, B. P. (2001). Need
satisfaction, motivation, and well-being in the work organizations of a former Eastern Bloc Country:
A cross-cultural study of self-determination. Personality and Social Psychology Bulletin, 27,
930–942. doi:10.1177/0146167201278002.
Faye, C., & Sharpe, D. (2008). Academic motivation in university: The role of basic psychological needs
and identity formation. Canadian Psychology, 49, 189–199. doi:10.1037/a0012858.
Fuster-Parra, P., Garcia-Mas, A., Ponseti, F. J., Palou, P., & Cruz, J. (2014). A Bayesian Network to
discover relationships between negative features in sport: A case study of teen players. Quality &
Quantity, 48, 1473–1491. doi:10.1007/s11135-013-9848-y.
Goodenow, C. (1992). Strengthening the links between educational psychology and the study of social
contexts. Educational Psychologist, 27, 177–196.
Gravetter, F. J., & Wallnau, L. B. (2009). Statistics for the behavioral sciences. Belmont, CA:
Wadsworth.
Griffits, A., & Raschella, A. (2014, November 11). Sydney’s new UTS business school building defies
convention. Australian Broadcasting Corporation. http://www.abc.net.au/news/2014-11-11/uts-
new-business-school-building-defies-convention/5883506
Hartnett, M., St. George, A., & Dron, J. (2014). Exploring motivation in an online context: A case study.
Contemporary Issues in Technology and Teacher Education, 14. http://www.citejournal.org/vol14/
iss1/general/article1.cfm
Ho, A. D., Reich, J., Nesterko, S. O., Seaton, D. T., Mullaney, T., Waldo, J. & Chuang, I. (2014, January
21). HarvardX and MITx: The first year of open online courses, Fall 2012-summer 2013 (HarvardX
and MITx Working Paper No. 1). http://dx.doi.org/10.2139/ssrn.2381263
Ilardi, B. C., Leone, D., Kasser, R., & Ryan, R. M. (1993). Employee and supervisor ratings of
motivation: Main effects and discrepancies associated with job satisfaction and adjustment in a
factory setting. Journal of Applied Social Psychology, 23, 1789–1805.
Klassen, R. M., Perry, N. E., & Frenzel, A. C. (2012). Teachers’ relatedness with students: An
underemphasized aspect of teachers’ basic psychological needs. Journal of Educational Psychology,
104, 150–165. doi:10.1037/a0026253.
Klassen, R. M., Yerdelen, S., & Durksen, T. L. (2013). Measuring teacher engagement: The development
of the Engaged Teacher Scale (ETS). Frontline Learning Research, 1, 33–52.
Koller, D., & Friedman, N. (2009). Probabilistic graphical models: Principles and techniques.
Cambridge, MA: The MIT Press.
Kolowich, S. (2014, January 14). Completion rates aren’t the best way to judge MOOCs, researchers say.
The Chronicle of Higher Education. http://chronicle.com/blogs/wiredcampus/completion-rates-
arent-the-best-way-to-judge-moocs-researchers-say/49721
Liyanagunawardena, T. R., Adams, A. A., & Williams, S. A. (2013). MOOCs: A systematic study of the
published literature 2008–2012. The International Review of Research in Open and Distance
Learning, 14(3), 202–227.
Milyavskaya, M., & Koestner, R. (2011). Psychological needs, motivation, and well-being: A test of self-
determination theory across multiple domains. Personality and Individual Differences, 50, 387–391.
doi:10.1016/j.paid.2010.10.029.
Moe, R. (2014, May 15). The MOOC problem. Hybrid Pedagogy: A Digital Journal of Learning,
Teaching, and Technology. http://www.hybridpedagogy.com/journal/mooc-problem/
Moller, A. C., Deci, E. L., & Elliot, A. J. (2010). Person-level relatedness and the incremental value of
relating. Personality and Social Psychology Bulletin, 36, 754–767. doi:10.1177/0146167210371622.
Murphy, K. P. (1997–2002). Bayes Net Toolbox for Matlab [Website]. https://github.com/bayesnet/bnt
Murphy, K. P. (2012). Machine learning: A probabilistic perspective. Cambridge, MA: MIT Press.
Newstok, S. L. (2013). A plea for ‘‘close learning’’. Liberal Education, 99, 16–19.
Niemiec, C. P., & Ryan, R. M. (2009). Autonomy, competence, and relatedness in the classroom:
Applying self-determination to educational practice. Theory and Research in Education, 7, 133–144.
doi:10.1177/1477878509104318.
Noble, D. (2001). The future of the faculty in the digital diploma mill. Academe, 87, 27–32.
Oliver, E. J., Markland, D., Hardy, J., & Petherick, C. M. (2008). The effects of autonomy- supportive
versus controlling environments on self-talk. Motivation and Emotion, 32, 200–212. doi:10.1007/
s11031-008-9097-x.
Osterman, K. F. (2000). Students’ need for belonging in the school community. Review of Educational
Research, 70, 323–367.
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge, UK: Cambridge University
Press.
Pekrun, R., Goetz, T., & Perry, R. P. (2005). Achievement Emotions Questionnaire (AEQ): User’s
manual. Munich, Germany: Department of Psychology, University of Munich.
Reis, H. T., Sheldon, K. M., Gable, S. L., Roscoe, J., & Ryan, R. M. (2000). Daily well-being: The role of
autonomy, competence and relatedness. Personality and Social Psychology Bulletin, 26, 419–435.
doi:10.1177/0146167200266002.
Russell, S., & Norvig, P. (2003). Artificial intelligence: A modern approach. London, UK: Pearson
Education.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation,
social development, and well-being. American Psychologist, 55, 68–78. doi:10.1037/0003-066X.55.
1.68.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464. doi:10.
1214/aos/1176344136.
Siemens, G. (2004). Connectivism: A learning theory for the digital age [Blog]. elearnspace. http://www.
elearnspace.org/Articles/connectivism.htm
Siemens, G. (2012). What is the theory that underpins our MOOCs? [Blog]. Elearnspace. http://www.
elearnspace.org/
Vallerand, R. J., Pelletier, L. G., Blais, M. R., Briere, N. M., Senecal, C., & Vallieres, E. F. (1992). The
Academic Motivation Scale: A measure of intrinsic, extrinsic, and amotivation in education.
Educational and Psychological Measurement, 52, 1003–1017. doi:10.1177/0013164492052004025.
Waldrop, M. M. (2013). Campus 2.0: Massive open online courses are transforming education—and
providing fodder for scientific research. Nature, 495, 161–163.
Warusavitarana, P. A., Lokuge Dona, K., Piyathilake, H. C., Epitawela, D. D., & Edirisinghe, M. U.
(2014). MOOC: A higher education game changer in developing countries. In B. Hegarty, J.
McDonald, and S.-K. Loke (Eds.), Rhetoric and reality: Critical perspectives on educational
technology. Proceedings ascilite Dunedin 2014 (pp. 359–366). http://ascilite2014.otago.ac.nz/files/
fullpapers/321-Warusavitarana.pdf
West, P., Rutstein, D. W., Mislevy, R. J., Liu, J., Choi, Y., & Levy, R., et al. (2010). A Bayesian network
approach to modeling learning progressions and task performance. (CRESST Report 776). Los
Angeles, CA: University of California, National Center for Research on Evaluation, Standards,
Student Testing (CRESST). http://www.cse.ucla.edu/products/reports/R776.pdf
Wigfield, A. (1994). Expectancy-value theory of achievement motivation: A developmental perspective.
Educational Psychology Review, 6, 49–78.
Williams, G. C., & Deci, E. L. (1996). Internalization of biopsychosocial values by medical students: A
test of self-determination theory. Journal of Personality and Social Psychology, 70, 767–779.
Yang, Q. (2014). Students motivation in asynchronous online discussions with MOOC Mode. American
Journal of Educational Research, 2, 325–330. doi:10.12691/education-2-5-13.
Zyphur, M. J., & Oswald, F. L. (2015). Bayesian estimation and inference: A user’s guide. Journal of
Management, 41, 390–420. doi:10.1177/0149206313501200.
Tracy L. Durksen is a Postdoctoral Research Fellow in the School of Education at the University of New
South Wales. Her research focuses on professional learning across career phases and teachers’
interpersonal skills, motivation, and engagement. Her program of research involves studying the use of
situational judgement tests as (1) a selection method that can help assess non-academic attributes (such as
empathy and adaptability) of prospective and novice teachers and (2) an educational tool for teachers’
professional learning.
Man-Wai Chu is an Assistant Professor in the Werklund School of Education at the University of
Calgary. Her research focuses on using innovative assessments, such as interactive digital environments,
to measure students’ performance-based skills in the classroom and on standardized tests. An extension of
her research with these assessments has led her to studying students’ development of mental models and
their associations with affective variables that enhance assessment performance; this research is guided
by the Learning Errors and Formative Feedback (LEAFF) model.
Zaheen F. Ahmad is a Master’s Student in the Department of Computing Science Artificial Intelligence
Laboratory at the University of Alberta. His primary research interests include Artificial Intelligence,
Machine Learning and Optimization.
Amanda I. Radil is a Doctoral Candidate in the School and Clinical Child Psychology program at the
University of Alberta. Her research focuses on the motivational strategies that teachers use in the
classroom, where they learn these strategies, and how to facilitate teachers’ use of adaptive strategies. She
also maintains research interests in clinical assessment and intervention and mixed methodology.
Lia M. Daniels is an Associate Professor in the Department of Educational Psychology at the University
of Alberta. Her research focuses on (1) how teachers support students’ adaptive motivation, characterized
by autonomy, relatedness, mastery, and effort and (2) how adaptive motivation is related to students’
academic outcomes including emotions and achievement.