
Exploring the Embodied Interactive Learning Effects – Designing an Instructional Scenario with Unity3D and Kinect V2 Sensor

Xinhao Xu, Fengfeng Ke
Department of Educational Psychology and Learning Systems
Florida State University
United States
{xx11, fke}@fsu.edu

Dan Huang
Department of Computer Science
Florida State University
United States
[email protected]

Abstract: In this paper, we present an ongoing pilot study that is part of a larger research project on embodied interactions and learning. The project aims to evaluate how embodied interactions enabled by the Microsoft Kinect V2 facilitate learning. We use Unity3D and the Kinect V2 SDK to construct an interactive learning environment in which learners practice numeric system conversions. Our target participants are thirty undergraduate or graduate students at an American university who have little or no prior knowledge of the content. We will administer pre- and post-tests to collect quantitative data and compare learning effects between a Kinect-enabled embodied-interaction group and a traditional mouse-based group. We will also collect qualitative data through observations and interviews to identify usability issues in the learning scenario, so as to refine the instructional environment design for the next phase of the study.

Introduction

In recent years, with the development of information and communication technology (ICT), and especially the emergence of affordable human-computer interaction (HCI) technologies, using body movements to interact with computers is no longer science fiction. Consumer-level body-sensing and tracking technologies, such as hand-held wands (e.g., the Nintendo Wii remote), the Microsoft Kinect, and the Leap Motion, offer promising and exciting possibilities for people to use their own body movements to control and interact with a computer system. This kind of HCI relates to embodied interaction: interaction between the physical and social world and computer systems through body movements. Such technologies enable instructional designers and practitioners to design learning environments that involve body movements, and to study the effectiveness of such technologies for learning and cognition. In this study, the researchers regard embodied interactive learning as any learning activity in which body movements and gestures are used to interact with instructions and learning materials.

Embodied interactive learning has become popular in recent years (e.g., Chang, Chien, Chiang, Lin, & Lai, 2013; Hung, Hsu, Chen, & Kinshuk, 2015; Johnson, Pavleas, & Chang, 2013; Johnson-Glenberg, Birchfield, Tolentino, & Koziupa, 2014; Pasfield-Neofitou, Huang, & Grant, 2015; Repetto, Colombo, & Riva, 2015). Marshall, Antle, Hoven, and Rogers (2013) noted that embodiment, with regard to learning and cognition, is normally about how people’s perception, cognition, and experience relate to the engagement of their bodies. ‘Embodied cognition’ is the phrase commonly used in the cognitive science literature for the idea that people’s cognitive processes originate from the body’s interactions with the surrounding world and are bodily situated in the physical environment (Anderson, 2003; Barsalou, Niedenthal, Barbey, & Ruppert, 2003; Barsalou, 2008, 2010; Shapiro, 2011; Wilson, 2002; Wilson & Foglia, 2011). Existing studies show that body movements have positive effects on learning and contribute to learners’ cognitive processes in various ways. Body movements may facilitate encoding mechanisms during information processing, reduce cognitive load by offering an alternative modality, and link concrete physical movements and experiences with abstract knowledge (Alibali & Nathan, 2012; Arzarello, Paola, Robutti, & Sabena, 2009; Bautista, Roth, & Thom, 2011; Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; De Koning & Tabbers, 2013; Glenberg & Kaschak, 2002; Lee, Huang, Wu, Huang, & Chen, 2012; Rumme, Saito, Ito, Oi, & Lepe, 2008; Valenzeno, Alibali, & Klatzky, 2003; Vogel, Pettersson, Kurti, & Huck, 2012). Xu and Ke (2014) carried out a comprehensive review of the literature on body movements and learning, and proposed a conceptual postulation, the ‘motorpsycho’ learning approach. The motorpsycho approach acknowledges the association between people’s kinetic activities and cognition, especially the proactive role that body movements and gestures play in cognitive activities.

In instructional activities that involve body movements, either the instructor or the learners perform the movements and gestures. From a learner’s point of view, the existing literature can be categorized into two groups. In one category, learners watch body movements made by others, normally instructors or peers, rather than making movements themselves. In the other category, a learner uses his/her own body movements to interact with the instruction. The current study concerns the design of an instructional session that prompts a learner to use his/her own body movements and gestures to interact with the learning materials. The researchers use the Microsoft Kinect V2 (hereafter Kinect) sensor and the Unity3D game engine to construct a short session in which treatment-group participants practice binary-to-decimal number conversions using their body movements and gestures. Participants in the baseline group use a mouse to interact with identical instructional materials.

Methodology

The current study features an experimental, pre- and post-test research design. The researchers aim to examine whether embodied interactive learning enabled by the Kinect leads to better knowledge acquisition in the subject area than traditional mouse interaction. At the same time, qualitative data from observations and retrospective interviews will be collected and analyzed to see how participants in the treatment group experience the learning activities, especially how they perceive motivation and usability.

The research hypothesis is that learners in the Kinect-enabled embodied interactive learning group will outperform the mouse-based group in knowledge acquisition when learning binary to decimal number conversions.

The Instructional Scenario

The scenario involves practicing numeric system conversions from binary to decimal; for example, the binary number 1011 converts to 8 + 0 + 2 + 1 = 11 in decimal.

The current version of the user interface is as follows:

Figure 1: The current interface for the instructional scenario. Note the real-time skeleton view of the participant in the lower right corner.


This interface supports the conversion of a binary number with at most eight digits. For the Kinect-enabled embodied interactive learning group, a learner uses the right arm to control the lower four digits (the four digits on the right) and the left arm to control the higher four digits (the four digits on the left), which may strengthen the linkage between the concept of digit weight and the corresponding sequence of weights. Whenever a learner raises the right arm, the “on” and “off” buttons associated with the lower four digits are highlighted, indicating that the right arm controls those four digits; likewise, whenever a learner raises the left arm, the “on” and “off” buttons associated with the higher four digits are highlighted. The learner’s body is thus virtually divided into a right part and a left part, which may embody and reinforce the cognitive schema of the weight distribution of a number (the rightmost digit has the least weight, while the leftmost digit has the most weight). A learner is also prompted to move the arm up to press the “on” button and down to press the “off” button, which may strengthen the idea of the binary symbols 1 and 0 (up for 1, since “on” relates to the higher voltage in a computer system, and down for 0, since “off” relates to the lower voltage). After assigning 0s and 1s to each of the digits and pressing the calculation button, the learner sees the calculation in the blank space at the top of the screen. The learner can continue to use gestures to reassign 0s and 1s to the digits and see instant recalculations in the formula. For the mouse-based group, a learner simply uses mouse clicks to interact with the same buttons and calculations.
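To make the digit-weight logic concrete, here is a minimal C# sketch of the data model such an interface might sit on top of: eight binary digits, each set to 1 (“on”) or 0 (“off”), with the decimal value computed as the weighted sum of digit_i × 2^i over the eight positions. This is an illustration only, not the study’s actual code; the class and member names (BinaryDigitBoard, SetDigit, ToDecimal, ToFormula) are hypothetical.

```csharp
using System.Text;

// Hypothetical model of the eight-digit binary board behind the interface.
// digits[0] is the rightmost (least significant) digit; the right arm would
// drive indices 0-3 and the left arm indices 4-7, per the design above.
public class BinaryDigitBoard
{
    private readonly int[] digits = new int[8];

    // Set one digit "on" (1) or "off" (0).
    public void SetDigit(int index, bool on)
    {
        digits[index] = on ? 1 : 0;
    }

    // Decimal value = sum of digits[i] * 2^i over all eight positions.
    public int ToDecimal()
    {
        int value = 0;
        for (int i = 0; i < digits.Length; i++)
        {
            value += digits[i] << i;  // digits[i] * 2^i
        }
        return value;
    }

    // Formula string such as "1*128 + 0*64 + ... = 178" for on-screen feedback.
    public string ToFormula()
    {
        var sb = new StringBuilder();
        for (int i = digits.Length - 1; i >= 0; i--)
        {
            sb.Append(digits[i]).Append('*').Append(1 << i);
            sb.Append(i > 0 ? " + " : " = ");
        }
        sb.Append(ToDecimal());
        return sb.ToString();
    }
}
```

For instance, setting the digits to 10110010 yields 128 + 32 + 16 + 2 = 178, mirroring the instant calculation shown at the top of the screen.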

Technical Development

Unity3D 5.x, the C# programming language, and the MS Kinect V2 for Windows SDK 2.0 are used to construct the instructional environment. Fig. 2 shows the basic architecture of the system design.

Figure 2: The basic technical architecture of the intervention design

Unity3D is a popular cross-platform game engine, and version 5.x is its latest release as of 2015. This study uses the 64-bit version of Unity3D 5.0.1f1. Kinect V2, the body-sensing device, is an updated version of the Microsoft Kinect that originally shipped with the Xbox game console. Since the Kinect’s debut four years ago, this off-the-shelf, consumer-level body-sensing device has attracted researchers and developers around the world to develop and customize applications that involve body movements and gestures. The original device has an RGB camera, a CMOS depth sensor, and an infrared (IR) sensor. With the color, depth, and IR data collected through these three sensors, the device can interpret the data as a human skeleton with 20 joints, thanks to its built-in system-on-chip algorithms. Kinect V2, unveiled in 2014, can recognize 25 joints of a human body, which yields better body-tracking precision. The Kinect SDK (Software Development Kit) is what Microsoft releases for developers to construct Kinect-based applications in the .NET framework. Conveniently for developers working with Unity3D, Microsoft also releases a Kinect V2 Unity3D API (Application Programming Interface) to make the sensor compatible with Unity3D. This study installs Kinect-v2-with-MS-SDK-v2.5.1.unitypackage. In Unity3D, C# is one of the dominant scripting languages for manipulating in-environment objects and actions, and it is also compatible with the Kinect SDK. A wrapper package, Kinect V2 with MS-SDK (Version 2.6; Filkov, 2015), from the Unity3D online asset store is also used. This wrapper encapsulates the major Kinect classes and functions together with some modifiable scripts and demo scenes, and it is free to use for educational purposes.
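As an illustration of how such a wrapper is typically driven from a Unity3D script, the sketch below polls the sensor every frame and checks whether the tracked user’s right hand is above the right shoulder, the kind of test the arm-raise highlighting described earlier would need. It assumes the KinectManager singleton and KinectInterop.JointType enumeration exposed by the Kinect V2 with MS-SDK package; exact method signatures may vary between package versions, and the threshold logic is a hypothetical stand-in rather than the study’s actual gesture code.

```csharp
using UnityEngine;

// Hedged sketch: per-frame arm-raise detection through the K2 wrapper.
// Assumes the wrapper's KinectManager singleton API; verify the method
// names against the installed package version before use.
public class ArmRaiseDetector : MonoBehaviour
{
    // Hand must be this many meters above the shoulder to count as raised.
    public float raiseMargin = 0.1f;

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized() || !manager.IsUserDetected())
            return;

        long userId = manager.GetPrimaryUserID();

        Vector3 hand = manager.GetJointPosition(userId, (int)KinectInterop.JointType.HandRight);
        Vector3 shoulder = manager.GetJointPosition(userId, (int)KinectInterop.JointType.ShoulderRight);

        if (hand.y > shoulder.y + raiseMargin)
        {
            // A real implementation would highlight the "on"/"off" buttons
            // of the lower four digits here; logging stands in for that.
            Debug.Log("Right arm raised: lower four digits selected");
        }
    }
}
```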

The Kinect V2 requires specific software and hardware configurations to run the SDK. The basic requirements are:

- A 64-bit Windows 8.x operating system
- A 64-bit (x64) processor, Intel Core i7 or higher
- At least 4 GB of effective memory
- A USB 3.0 host controller
- A DirectX 11 capable graphics adapter
- The Kinect V2 power hub and USB cable

The researchers use a Dell Latitude 3000 Mobile Workstation for the programming work and for running the intervention.

Participants and Procedures

Since this is a relatively small-scale pilot study, the researchers aim to recruit 30 undergraduate or graduate students at an American university. There is no discrimination on the basis of race, color, or country of origin during recruitment. All participants are expected to have little or no prior knowledge of the content area (numeric system conversions) and will be randomly assigned to two groups: the treatment (Kinect-enabled embodied interactions) group and the baseline (mouse-based interactions) group.

For both groups, participants take the instruction one at a time, and one of the researchers will accompany each participant during the instructional session. Before the actual intervention begins, the researcher will explain the study to the participant with a full consent form and ask the participant to take a 10-item pretest on the subject area. The participant will then take the intervention with either Kinect-enabled embodied interactions or mouse-based interactions. Each participant’s performance will be recorded with screen-capture software, and the researcher will also observe how the participant interacts with the instructional materials during the intervention. After the intervention, the participant will take a 10-item posttest on the content knowledge. The researcher will then conduct a semi-structured interview with each participant. Two days later, the participant will take a 10-item delayed posttest. All the tests will be administered through the Qualtrics service.

Measurements and Data Analyses

All test items are adapted from the exam pool of an existing computer basics course at an accredited university, and are isomorphic in item difficulty and wording. Each of the three tests (the pretest and the two posttests) comprises 10 multiple-choice questions worth a total of 10 points. All items are reviewed independently by two subject-matter experts for validity. The on-site observation will concentrate on usability issues for the treatment group, i.e., how easily a participant interacts with the learning material through gestures. The interview questions for the treatment group will focus mainly on how a participant experiences the intervention, whether s/he feels motivated, and whether anything about the embodied interaction hinders the instruction or frustrates the learner. The screen-captured video will be coded in 20-second intervals, with participants’ actions assigned to two general categories: actions related to the learning content and actions related to usability issues.

For the quantitative data (the test scores), the researchers will use ANCOVA for the analyses. Participants’ pretest results will be treated as the covariate to control for possible differences in learners’ initial performance on the learning content. Two ANCOVAs will be used to compare learning effects between the two groups on the posttest and the delayed posttest. Test power and effect size will be reported with the analyses. For the qualitative data, the researchers plan to use the NVivo software to store and analyze the observations and interviews and to develop thematic results from the data.
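In concrete terms, each of these ANCOVAs fits a model of the standard form below; this formulation is supplied here for clarity and is not drawn from the study itself:

Posttest_i = β0 + β1 · Group_i + β2 · Pretest_i + ε_i

where Group_i is a 0/1 indicator for the Kinect-enabled condition and Pretest_i is the covariate. The coefficient β1 estimates the pretest-adjusted difference between groups, and the research hypothesis predicts β1 > 0 in favor of the embodied-interaction group.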


Anticipated Study Findings

This pilot study is part of an ongoing research project, and no data have been collected yet. The researchers expect to (a) find that learners in the embodied interaction group achieve better knowledge acquisition of the subject area than their counterparts in the mouse-based group, and (b) identify any usability-related design issues in the embodied interactive learning, so as to further refine the instructional environment design for the formal study.

References

Alibali, M. W., & Nathan, M. J. (2012). Embodiment in mathematics teaching and learning: Evidence from learners’ and teachers’ gestures. Journal of the Learning Sciences, 21(2), 247-286.
Anderson, M. L. (2003). Embodied cognition: A field guide. Artificial Intelligence, 149(1), 91-130.
Arzarello, F., Paola, D., Robutti, O., & Sabena, C. (2009). Gestures as semiotic resources in the mathematics classroom. Educational Studies in Mathematics, 70(2), 97-109.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645.
Barsalou, L. W. (2010). Grounded cognition: Past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Barsalou, L. W., Niedenthal, P. M., Barbey, A., & Ruppert, J. (2003). Social embodiment. In B. Ross (Ed.), The psychology of learning and motivation (Vol. 43, pp. 43-92). San Diego: Academic Press.
Bautista, A., Roth, W. M., & Thom, J. S. (2011). Knowing, insight learning, and the integrity of kinetic movement. Interchange, 42(4), 363-388.
Broaders, S. C., Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2007). Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General, 136(4), 539.
Chang, C. Y., Chien, Y. T., Chiang, C. Y., Lin, M. C., & Lai, H. C. (2013). Embodying gesture-based multimedia to improve learning. British Journal of Educational Technology, 44(1), 5-9.
De Koning, B. B., & Tabbers, H. K. (2013). Gestures in instructional animations: A helping hand to understanding non-human movements? Applied Cognitive Psychology, 27(5), 683-689.
Filkov, R. (2015). Kinect V2 with MS-SDK [Computer software]. Unity3D Online Asset Store. Retrieved from https://www.assetstore.unity3d.com/en/#!/content/18708
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558-565.
Hung, I. C., Hsu, H. H., Chen, N. S., & Kinshuk (2015). Communicating through body: A situated embodiment-based strategy with flag semaphore for procedural knowledge construction. Educational Technology Research and Development, 63(5), 749-769.
Johnson, K., Pavleas, J., & Chang, J. (2013). Kinecting to mathematics through embodied interactions. Computer, 46(10), 101-104.
Johnson-Glenberg, M. C., Birchfield, D. A., Tolentino, L., & Koziupa, T. (2014). Collaborative embodied learning in mixed reality motion-capture environments: Two science studies. Journal of Educational Psychology, 106(1), 86-104.
Lee, W., Huang, C., Wu, C., Huang, S., & Chen, G. (2012). The effects of using embodied interactions to improve learning performance. In Proceedings of the 2012 IEEE 12th International Conference on Advanced Learning Technologies (ICALT) (pp. 557-559). IEEE.
Marshall, P., Antle, A., Hoven, E. V. D., & Rogers, Y. (2013). Introduction to the special issue on the theory and practice of embodied interaction in HCI and interaction design. ACM Transactions on Computer-Human Interaction (TOCHI), 20(1), 1.
Pasfield-Neofitou, S., Huang, H., & Grant, S. (2015). Lost in second life: Virtual embodiment and language learning via multimodal communication. Educational Technology Research and Development, 63(5), 709-726.
Repetto, C., Colombo, B., & Riva, G. (2015). Is motor simulation involved during foreign language learning? A virtual reality experiment. SAGE Open, 5(4), 2158244015609964.
Rumme, P., Saito, H., Ito, H., Oi, M., & Lepe, A. (2008). Gestures as effective teaching tools: Are students getting the point? A study of pointing gestures in the English as a second language classroom. International Journal of Psychology, 43(3), 604-609.
Shapiro, L. A. (2011). Embodied cognition. London: Routledge.
Valenzeno, L., Alibali, M. W., & Klatzky, R. (2003). Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology, 28, 187-204.
Vogel, B., Pettersson, O., Kurti, A., & Huck, A. S. (2012, March). Utilizing gesture-based interaction for supporting collaborative explorations of visualizations in TEL. In Proceedings of the 2012 IEEE Seventh International Conference on Wireless, Mobile and Ubiquitous Technology in Education (WMUTE) (pp. 177-181). IEEE.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625-636.
Wilson, R. A., & Foglia, L. (2011). Embodied cognition. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2011 ed.). Retrieved from http://plato.stanford.edu/archives/fall2011/entries/embodied-cognition
Xu, X., & Ke, F. (2014). From psychomotor to ‘motorpsycho’: Learning through gestures with body sensory technologies. Educational Technology Research and Development, 62(6), 711-741.