
GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation – A Framework

WAI KHOO
DEPARTMENT OF COMPUTER SCIENCE
THE GRADUATE CENTER, CITY UNIVERSITY OF NEW YORK

Committee Members

Prof. Zhigang Zhu ◦ City College, Computer Science

Prof. Yingli Tian ◦ City College, Electrical Engineering

Prof. Tony Ro ◦ The Graduate Center, Psychology

Dr. Aries Arditi ◦ Visibility Metrics LLC


Outline

§ Motivation
§ Research questions
§ Approach
§ Proposed framework
§ 4 applications of the framework
§ Conclusion & Future work

Funding supports:
1. NSF Awards:
◦ Emerging Frontiers in Research & Innovation (EFRI) 1137172
◦ Chemical, Bioengineering, Environmental, and Transport Systems (CBET) 1160046
◦ Industrial Innovation & Partnerships (IIP) 1416396
2. VentureWell (formerly NCIIA, through Award # 10087-12)
3. CUNY Graduate Center Science Fellowship (2009–2014)


[Photo: New York City's Penn Station. Source: Jason Gibbs. Retrieved from http://jasongibbs.com/pennstation/ on March 19, 2016]

[Figure: traveling from point A to point B]

Travel Aids


◦ Guide dog
◦ Talking GPS
◦ Miniguide
◦ White cane
◦ Argus II (retinal implant)
◦ BrainPort

Background

[Chart: the worldwide visually impaired population grew from 161 million in 2002 to 285 million in 2014, a 77% increase]

Outline

§ Motivation

§ Research questions

§ Approach

§ Proposed framework

§ 4 applications of the framework

§ Conclusion & Future work


Research Questions

1. How to establish a benchmark for heterogeneous systems?

2. How to provide a well-controlled and safe testing environment?

3. How to provide a robust evaluation and scientific comparison of the effectiveness and friendliness of multimodal assistive technologies?


Inspiration

Jason Park, Helen MacRae, Laura J. Musselman, Peter Rossos, Stanley J. Hamstra, Stephen Wolman, and Richard K. Reznick. Randomized controlled trial of virtual reality simulator training: transfer to live patients. The American Journal of Surgery, 194(2):205-211, August 2007.

Outline

§ Motivation

§ Research questions

§ Approach

§ Proposed framework

§ 4 applications of the framework

§ Conclusion & Future work


Approach

[Venn diagram: Virtual Reality, Gamification, and Multimodality intersect in a unified formal evaluation and comparison approach]

Differs from Degara et al. (2013) [1]:
◦ Only sounds

Differs from Huang (2010) [2] and Lahav et al. (2012) [3]:
◦ Focused on cognitive mapping in unknown space


Virtual Reality

Use a game engine to design a virtual environment and simulate part of an assistive technology.

Benefits:
◦ Rapid prototyping
◦ Early user involvement
◦ Psychophysics evaluation
◦ Safe & well-controlled environment for navigation tasks


Gamification


Use game design elements for research & evaluation.

Benefits:
◦ Fun/engaging experiment sessions
◦ Sustainable evaluation
◦ Crowd-sourced data collection
◦ Package the designed VE as a simulation/training tool

Multimodality

Multimodal input and output of data

Benefits:
◦ Enables alternative perception (sensory substitution)
◦ Allows a mixture of input and output devices


Who Benefits from My Research?

Researchers/developers

Assistive technology companies

Visually impaired users

Outline

§ Motivation

§ Research questions

§ Approach

§ Proposed framework

§ 4 applications of the framework

§ Conclusion & Future work


Proposed Framework: Gamification in Virtual Environments for Multimodal Evaluation (GIVE-ME)


Framework: User Interface


Framework: Foundation


GIVE-ME Software Impl.


Package: http://ccvcl.org/~khoo/GIVE_ME.unitypackage

• Minimal coding
• Click-&-drag
• Fully customizable

Outline

§ Motivation

§ Research questions

§ Approach

§ Proposed framework

§ 4 applications of the framework

§ Conclusion & Future work


Application 1: VibrotactileNav


Vista Wearable, Inc.


Wai L. Khoo, Joey Knapp, Franklin Palmer, Tony Ro, and Zhigang Zhu. Designing and testing wearable range-vibrotactile devices. Journal of Assistive Technologies, 7(2):102-117, 2013.

Controller: Joystick/mouse
Stimulators: Vibrators, sounds
Virtual sensor: Infrared

VibrotactileNav Results


Easy hallway: 9 out of 18 succeeded
  Succeeded: avg time 280.10 sec, avg bumps 17.3
  Failed:    avg time 288.65 sec, avg bumps 22.1

Complex hallway: 3 out of 18 succeeded
  Succeeded: avg time 120.25 sec, avg bumps 12.7
  Failed:    avg time 353.67 sec, avg bumps 42.7

EEG Data Collection


Application 2: BrainportNav


Wicab, Inc.


Margaret Vincent, Hao Tang, Wai L. Khoo, Zhigang Zhu, and Tony Ro. Shape discrimination using the tongue: Implications for a visual-to-tactile sensory substitution device. Multisensory Research, 2016.

Controller: Joystick
Stimulators: Electrode array
Virtual sensor: Camera in VE

Pathfinding

BrainportNav Results

Run 1: Time = 151 seconds, Accuracy = 0.95
Run 2: Time = 91 seconds, Accuracy = 0.95


BrainportNav Results

[Chart: average accuracies of 83.33%, 82.66%, 93%, and 69.66%]

Application 3: CrowdSourceNav


Application 3: CrowdSourceNav

[Diagram: the game view is streamed over a TCP/IP connection to online crowd members; the setup can also be used for testing algorithms]

Usability study

Wai L. Khoo, Greg Olmschenk, Zhigang Zhu, and Tony Ro. Evaluating crowd sourced navigation for the visually impaired in a virtual environment. In IEEE International Conference on Mobile Services (MS), pp. 431-437, June 27-July 2, 2015.

Controller: Joystick
Stimulator: Text-to-speech
Virtual sensor: Camera in VE
Online crowd members


CrowdSourceNav Results

Maze 1: Time = 514 sec, Bumps = 7
Maze 2: Time = 345 sec, Bumps = 0

11 crowd members

Crowd completion times do not differ significantly between the two aggregation methods (two-sample t-test, p = 0.432, df = 6, at the 5% significance level)

CrowdSourceNav Results


Sample size of 11

Ratings of 1–7 for the following statements:
1. It is useful
2. It is easy to use
3. It is user friendly
4. I learned to use it quickly
5. I am satisfied with it


CrowdSourceNav Real Exp.


[Diagram: floor plan of the real-world navigation course, with obstacle positions and dimensions in meters]

Similarities

• Simple average aggregation method (see the sketch after these lists)

• Speech feedback

Differences

• Random obstacles

• Streamed from the camera's view

Pros

• Skill transfer: “Preemptive” instructions

• Provided minimal training to crowd volunteers

Cons

• Varying camera angles and heights

• Varying walking speeds
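For concreteness, the simple average aggregation referenced above might look like the following sketch. This is an illustrative assumption, not the dissertation's code: the class name, the [-1, 1] steering encoding, and the clamping are all made up for the example.

```csharp
using System;
using System.Collections.Generic;

public static class CrowdAggregator
{
    // Each crowd member submits a steering command in [-1, 1]
    // (-1 = full left, 0 = straight, +1 = full right).
    public static float Aggregate(IReadOnlyList<float> commands)
    {
        if (commands.Count == 0) return 0f; // no votes: keep going straight

        float sum = 0f;
        foreach (float c in commands)
            sum += Math.Clamp(c, -1f, 1f); // ignore out-of-range votes' excess

        return sum / commands.Count; // simple average of all votes
    }
}
```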

Application 4: VistaNav


Vista Wearable, Inc.

Experiments


1. Configurations experiment: four configurations
2. Training experiment: half of the subjects received VE training

VR setup


Controller: Xbox 360 gamepad
Stimulators: Vibrators, sounds
Virtual sensor: Infrared

VistaNav Results – Configurations


Two-way repeated measures ANOVA:

F(3,51) = 10.54, p = 0.00

Multiple comparisons w/ Bonferroni correction:

C3 vs. C4, C6 (p = 0.00)

C3 is the best!

VistaNav Results – Training


Training session:
• Virtual hallway
• 10 minutes
• Free exploration
• Audio & haptic feedback
• 3 VISTA devices: one on each wrist and one on the chest

Testing session:
• Real U-shaped hallway (71 ft x 52 ft)
• Sighted subjects are blindfolded
• 2 VISTA devices: one on each wrist
• Goal: reach the destination w/o bumping into obstacles

VistaNav Results – Training

• 17 out of 21 subjects included in the analysis
• 2 outliers trimmed from each group (training vs. no training)

VE training significantly improved performance in real hallway navigation:
t(15) = -1.91, p = 0.04, two-sample, one-tailed

Training   Mean       SD
Yes        249.00 s   92.45 s
No         333.78 s   90.76 s

VistaNav Results – Training

• A usability questionnaire is given at the end of the hallway
• System Usability Scale (SUS)
• 10 questions with 5 Likert-scale responses (strongly disagree – strongly agree)
• 5 positive and 5 negative statements, which alternate (see the scoring sketch below)
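For reference, SUS responses are conventionally converted to a 0-100 score with Brooke's standard rule: odd (positive) items score response - 1, even (negative) items score 5 - response, and the sum is multiplied by 2.5. A minimal sketch of that rule, not the study's analysis code:

```csharp
using System;

public static class Sus
{
    // responses[i] is the 1-5 Likert answer to SUS item i+1.
    public static float Score(int[] responses)
    {
        if (responses.Length != 10)
            throw new ArgumentException("SUS has exactly 10 items.");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            // Odd-numbered items (1, 3, ...) are positive: contribute response - 1.
            // Even-numbered items (2, 4, ...) are negative: contribute 5 - response.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5f; // scale the 0-40 raw sum to 0-100
    }
}
```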

VistaNav Results – Training

Overall mean = 80.48

Overall SD = 13.62

Outline

§ Motivation

§ Research questions

§ Approach

§ Proposed framework

§ 4 applications of the framework

§ Conclusion & Future work


GIVE-ME Contributions

Unified evaluation framework for multimodal navigational assistive technologies.

Sustainable evaluation that is fun and engaging.

Novel psychophysics evaluation for navigation tasks.

Novel collaborative environment that promotes early stakeholder involvement.


Future Work

§ Improvement on framework implementation
◦ New sensors, stimulators, and environments
§ Determine metrics to evaluate ATs
◦ Survey population and experts
§ Brain data collection
◦ Collect brain data to better understand how VIPs perform visual tasks such as navigation
§ Comprehensive benchmark
◦ Large-scale evaluation based on selected metrics


Publications

Book Chapters:
1. E. Molina, W. L. Khoo, H. Tang, and Z. Zhu (2017). Registration of Video Images. In A. A. Goshtasby (Ed.), Theory and Application of Image Registration. (Invited) Hoboken, NJ: Wiley Press.

Peer-reviewed Journals:
1. M. Vincent, H. Tang, W. L. Khoo, Z. Zhu, and T. Ro. Shape Discrimination using the Tongue: Implications for a Visual-to-Tactile Sensory Substitution Device. Multisensory Research. (Pending a final decision after a minor revision)
2. W. L. Khoo, G. Olmschenk, Z. Zhu, H. Tong, W. H. Seiple, and T. Ro. Development and Evaluation of Mobile Crowd Assisted Navigation for the Visually Impaired. IEEE Transactions on Services Computing. (Pending; GO and WK equal contribution)
3. W. L. Khoo and Z. Zhu. Multimodal and Alternative Perception for the Visually Impaired: A Survey. Journal of Assistive Technologies, 10(1):11-26, 2016.
4. W. L. Khoo, J. Knapp, F. Palmer, T. Ro, and Z. Zhu (2013). Designing and Testing Wearable Range-Vibrotactile Devices. Journal of Assistive Technologies, 7(2).

Patents Pending & Provisional:
1. Z. Zhu, T. Ro, L. Ai, W. L. Khoo, E. Molina, and F. Palmer. Wearable Navigation Assistance for the Vision-impaired, December 27, 2013. US Patent App. 14/141,742 (pending)


Publications

Conference Proceedings:
1. Z. Zhu, W. L. Khoo, C. Santistevan, Y. Gosser, E. Molina, H. Tang, T. Ro, and Y. Tian. EFRI-REM at CCNY: Research Experience and Mentoring in Multimodal and Alternative Perception for Visually Impaired People. 6th IEEE Integrated STEM Education Conference (ISEC'16), March 5, 2016, Princeton, NJ.
2. E. Molina, W. L. Khoo, F. Palmer, L. Ai, T. Ro, and Z. Zhu. Vista Wearable: Seeing through Whole-Body Touch without Contact. IEEE 12th International Conference on Ubiquitous Intelligence and Computing, August 10-14, 2015, Beijing, China.
3. W. L. Khoo, G. Olmschenk, Z. Zhu, and T. Ro. Evaluating crowd sourced navigation for the visually impaired in a virtual environment. In IEEE 4th International Conference on Mobile Services, pp. 431-437, 2015.
4. W. L. Khoo, E. L. Seidel, and Z. Zhu. Designing a Virtual Environment to Evaluate Multimodal Sensors for Assisting the Visually Impaired. 13th International Conference on Computers Helping People with Special Needs (ICCHP), 7383, Springer Berlin Heidelberg, July 11-13, 2012, Linz, Austria, 573-580.
5. A. Khan, J. Lopez, F. Moideen, W. L. Khoo, and Z. Zhu. KinDetect: Kinect Detecting Objects. 13th International Conference on Computers Helping People with Special Needs (ICCHP), 7383, Springer Berlin Heidelberg, July 11-13, 2012, Linz, Austria, 588-595.
6. Y. Qu, W. Khoo, E. Molina, and Z. Zhu. Multimodal 3D Panoramic Imaging Using a Precise Rotating Platform. 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, July 6-9, 2010, 260-265.
7. W. Khoo, T. Jordan, D. Stork, and Z. Zhu. Reconstruction of a Three-Dimensional Tableau from a Single Realist Painting. 15th International Conference on Virtual Systems and Multimedia, September 9-12, 2009, 9-14.
8. T. Jordan, D. Stork, W. Khoo, and Z. Zhu. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting. 13th International Conference on Computer Analysis of Images and Patterns, 5702, Springer Berlin Heidelberg, September 2-4, 2009, 293-300.


Wai Khoo
wkhoo@gradcenter.cuny.edu

http://ccvcl.org/~khoo/

Questions?

Starting as TT assistant professor at RPI in January 2017!

Any advice?

References


1. Norberto Degara, Frederik Nagel, and Thomas Hermann. SonEX: an evaluation exchange framework for reproducible sonification. In Proceedings of the 19th International Conference on Auditory Display, 2013.

2. Ying Ying Huang. Design and evaluation of 3D multimodal virtual environments for visually impaired people. PhD thesis, KTH, 2010.

3. Orly Lahav, David Schloerb, Siddarth Kumar, and Mandyam Srinivasan. A virtual environment for people who are blind - a usability study. Journal of Assistive Technologies, 6(1):38-52, 2012.

Definition

USA categories:
◦ Low vision: 20/70 – 20/200
◦ Legal blindness: 20/200 or worse

Visual impairment: visual acuity of 20/70 or worse in the better eye, even with correction.

Sensory substitution / alternative perception: transformation of the characteristics of one sensory modality into stimuli of another sensory modality.

Common Eye Disorders

Source: CDC Vision Health Initiative, Common Eye Disorders, http://www.cdc.gov/visionhealth/basics/ced/index.html, Mar 28, 2016

1) Controllers


1) Multimodal (Virtual) Sensors

Common multimodal sensors: infrared, sonar, and RGB-D

Infrared (IR) sensor (see the sketch after this list):
◦ Light-based sensor with a very narrow beam angle
◦ Specifications:
  ◦ Minimum distance: 10 cm
  ◦ Detection distance: 80 cm
  ◦ Beam width: 12 cm -> about an 8.5-degree beam angle
◦ Limitations:
  ◦ Needs to be pointed exactly at an object for detection
  ◦ Cross interference
  ◦ Noisy data
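As an illustration of how such a virtual IR sensor could be realized in Unity3D (GIVE-ME's engine), here is a minimal sketch built from the specifications above; the class name, noise model, and noise amplitude are assumptions, not the framework's actual code.

```csharp
using UnityEngine;

public class VirtualIrSensor : MonoBehaviour
{
    const float MinRange = 0.10f;        // 10 cm minimum distance (from the spec)
    const float MaxRange = 0.80f;        // 80 cm detection distance (from the spec)
    public float NoiseAmplitude = 0.02f; // simulated "noisy data" (assumed value)

    // Returns the sensed distance in meters, or MaxRange if nothing is detected.
    public float Read()
    {
        if (Physics.Raycast(transform.position, transform.forward,
                            out RaycastHit hit, MaxRange))
        {
            // Add uniform noise as a crude stand-in for real sensor noise,
            // then clamp to the sensor's working range.
            float noisy = hit.distance + Random.Range(-NoiseAmplitude, NoiseAmplitude);
            return Mathf.Clamp(noisy, MinRange, MaxRange);
        }
        return MaxRange; // no object within the beam's range
    }
}
```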


1) Multimodal (Virtual) Sensors

RGB-D (i.e., with an image size of width x height pixels; see the sketch after this list):
◦ Optical: use the camera (game) view
◦ Depth:
  ◦ Use width x height raycasts, one at each pixel location
  ◦ Raycasts parallel to the avatar's positive z-axis (right-hand rule)
  ◦ Maximum range of 4 meters
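A minimal sketch of this per-pixel raycasting follows. The +z direction and the 4 m cap come from the spec above; the grid resolution and ray spacing are illustrative assumptions.

```csharp
using UnityEngine;

public class VirtualDepthSensor : MonoBehaviour
{
    public int Width = 64, Height = 48; // assumed image size
    public float PixelSpacing = 0.01f;  // assumed spacing between rays (meters)
    const float MaxRange = 4f;          // maximum depth range (from the spec)

    // Captures a depth image: one distance value per "pixel".
    public float[,] Capture()
    {
        var depth = new float[Height, Width];
        for (int y = 0; y < Height; y++)
        {
            for (int x = 0; x < Width; x++)
            {
                // Offset each ray origin in the avatar's image plane, then
                // cast parallel to the avatar's positive z-axis.
                Vector3 origin = transform.position
                    + transform.right * (x - Width / 2) * PixelSpacing
                    + transform.up * (y - Height / 2) * PixelSpacing;

                depth[y, x] = Physics.Raycast(origin, transform.forward,
                                              out RaycastHit hit, MaxRange)
                              ? hit.distance : MaxRange;
            }
        }
        return depth;
    }
}
```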


1) Multimodal (Virtual) Sensors: Transducing

Processing:
◦ Convert raw data to meaningful information
◦ E.g., quantize/threshold range data into 3 intervals (close, near, and far)

Transmission (see the sketch after this list):
◦ Send the meaningful information to stimulators of another modality
◦ E.g., encode the interval with appropriate vibration levels
◦ Communication protocols: USB/serial port, Bluetooth Low Energy, TCP/IP
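To make the processing-then-transmission pipeline concrete, here is a minimal sketch that quantizes a range reading into close/near/far and writes a matching vibration level over a serial port. The thresholds, port name, and one-byte protocol are assumptions, not the framework's actual encoding.

```csharp
using System.IO.Ports;

public static class Transducer
{
    static readonly SerialPort Port = new SerialPort("COM3", 9600); // assumed port

    public static void Transduce(float rangeMeters)
    {
        // Processing: quantize the range into 3 intervals (thresholds assumed).
        byte vibrationLevel =
            rangeMeters < 0.3f ? (byte)3 :  // close -> strong vibration
            rangeMeters < 0.6f ? (byte)2 :  // near  -> medium vibration
                                 (byte)1;   // far   -> weak vibration

        // Transmission: send the encoded level to the vibrotactile stimulator.
        if (!Port.IsOpen) Port.Open();
        Port.Write(new[] { vibrationLevel }, 0, 1); // one-byte command (assumed)
    }
}
```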


2) Game Mechanics

Provides a set of goals to achieve and defines how the user can interact with the game.

Task definition:
◦ What needs to be done in order to finish
◦ Simple task: navigate from point A to B, as fast and with as few errors as possible
◦ Complex task: exploratory in nature, with termination conditions (e.g., time-out)

Avatar behavior (see the sketch after this list):
◦ What controller commands are valid
◦ E.g., constant walking speed or turn/rotate in place
◦ Define which audio cues are constantly audible and which are in response to an action or proximity


3) Virtual Environment Toolbox

Third-party game engine:
◦ Unity3D: popular, with excellent documentation & tutorials
◦ C#, JavaScript
◦ Multi-platform

Controller setup (optional):
◦ Standard input devices are compatible with Unity3D; capture specific key action events
◦ Other controllers (e.g., Microsoft Kinect) are not natively supported by Unity3D
◦ These need an external/separate program capable of communicating and exchanging data with Unity3D (see the sketch after this list)


3) Virtual Environment Toolbox

Environment design:
◦ Static/dynamic objects; object interactions; placing sound cues and collectibles


4) Data Collection

Types of data:
◦ Multimodal sensory data (e.g., range data generated by virtual sensors, and sounds)
◦ Control/action data (e.g., user inputs, events, and game state)
◦ Brain/behavioral measurements, as obtained from the measurement device (e.g., EEG readings and observations)

These data contribute to task evaluation and ground truth establishment.
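A minimal sketch of how such data could be logged from the VE for later task evaluation (timestamps, trajectory, bump count) follows; the file name, columns, and class name are assumptions.

```csharp
using System.IO;
using UnityEngine;

public class DataLogger : MonoBehaviour
{
    StreamWriter log;
    int bumps;

    void Start()
    {
        log = new StreamWriter("session_log.csv"); // assumed output file
        log.WriteLine("time_s,x,z,bumps");
    }

    void Update()
    {
        // Record the avatar's trajectory and running bump count each frame.
        Vector3 p = transform.position;
        log.WriteLine($"{Time.time:F2},{p.x:F2},{p.z:F2},{bumps}");
    }

    void OnCollisionEnter(Collision c) => bumps++; // count obstacle bumps

    void OnDestroy() => log?.Close();
}
```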


4) Data Collection

Recommended metrics for task evaluation:
1. Acceptability: a design that is useful, reliable, robust, aesthetic, and has a positive impact on the user's quality of life.
2. Compatibility: a design that is compatible with the user's lifestyle and with other technologies.
3. Adaptability: a design that can be easily adjusted (i.e., function, location).
4. Friendliness: a low learning curve for the system; easy to use.
5. Performance: overall performance.

While these metrics can be posed as open-ended questions, they can also be presented as rating surveys. Data to assess the friendliness and performance metrics can also be collected and extrapolated from the VE, including but not limited to:
1. Time to completion
2. Number of errors (e.g., bumping into obstacles, incorrect responses)
3. Game score
4. User's trajectory
5. User's brain data (e.g., EEG, fMRI)
