Interacting with Eye Movements in Virtual Environments



CHI 2000, 1-6 April 2000, Papers

INTERACTING WITH EYE MOVEMENTS IN VIRTUAL ENVIRONMENTS

Vildan Tanriverdi and Robert J.K. Jacob
Department of Electrical Engineering and Computer Science
Tufts University
Medford, MA 02155, USA
{vildan, jacob}@eecs.tufts.edu

ABSTRACT

Eye movement-based interaction offers the potential of easy, natural, and fast ways of interacting in virtual environments. However, there is little empirical evidence about the advantages or disadvantages of this approach. We developed a new interaction technique for eye movement interaction in a virtual environment and compared it to more conventional 3-D pointing. We conducted an experiment to compare performance of the two interaction types, to assess their impacts on the spatial memory of subjects, and to explore subjects' satisfaction with the two types of interactions. We found that the eye movement-based interaction was faster than pointing, especially for distant objects. However, subjects' ability to recall spatial information was weaker in the eye condition than in the pointing one. Subjects reported equal satisfaction with both types of interactions, despite the technology limitations of current eye tracking equipment.

Keywords

Eye movements, eye tracking, Polhemus tracker, virtual reality, virtual environments, interaction techniques.

INTRODUCTION

Virtual environments can now display realistic, immersive graphical worlds. However, interacting with such a world can still be difficult. Virtual environments usually lack haptic feedback to guide the hand and to support it in space. We thus require new interaction techniques to provide users with easy and natural ways of interacting in virtual environments. This is particularly important for dealing with displayed objects beyond the reach of the user's arm or the range of a short walk. Some recent studies have focused on developing interaction techniques using arm, hand, or head movements in virtual environments [3, 6, 12, 13, 15], but the field is still in its infancy.


We believe eye movement-based interaction can provide easy, natural, and fast ways of interacting in virtual environments. Work on eye movement-based interaction has thus far focused on desktop display user interfaces [1, 2, 7, 8, 17, 18, 19], while eye movement-based interaction in virtual reality has hardly been explored. In this study, we develop a VR interaction technique using eye movements and compare its performance to more conventional pointing.

EYE MOVEMENT-BASED INTERACTION IN VIRTUAL REALITY

Eye movement-based interaction is an example of the emerging non-command interaction style [14]. In this type of interaction, the computer observes and interprets user actions instead of waiting for explicit commands. Interactions become more natural and easier to use. One system that suggests such advantages is a screen-based system developed by Starker and Bolt [18]. It monitors the eye movements of the user, interprets which objects attract the user's interest, and responds with narration about the selected objects. It minimizes the physical effort required to interact with the system and increases interactivity. High interactivity is even more important in VR applications, where users often deal with more dynamic and complex environments.

Our overall approach in designing eye movement-based interaction techniques is, where possible, to obtain information from a user's natural eye movements while viewing the display, rather than requiring the user to make specific trained eye movements to actuate the system. This approach fits particularly well with virtual reality interaction, because the essence of a VR interface is that it exploits the user's pre-existing abilities and expectations. Navigating through a conventional computer system requires a set of learned, unnatural commands, such as keywords to be typed in or function keys to be pressed. Navigating through a virtual environment exploits the user's existing "navigational commands," such as positioning his or her head and eyes, turning his or her body, or walking toward something of interest.


By exploiting skills that the user already possesses, VR interfaces hold out the promise of reducing the cognitive burden of interacting with a computer by making such interactions more like interacting with the rest of the world. An approach to eye movement interaction that relies upon natural eye movements as a source of user input extends this philosophy. Here, too, the goal is to exploit more of the user's pre-existing abilities to perform interactions with the computer.

Another reason that eye tracking may be a particularly good match for VR is found in Sibert and Jacob's [17] study of direct manipulation style user interfaces. They found eye movement-based interaction to be faster than interaction with the mouse, especially in distant regions. The eye movement-based object selection task was not well modeled by Fitts' Law, or, equivalently, the Fitts' Law model for the eye would have a very small slope (a standard formulation is sketched below). That is, the time required to move the eye is only slightly related to the distance to be moved. This suggests eye gaze interaction will be particularly beneficial when users need to interact with distant objects, and this is often the case in a virtual environment.

Finally, combining the eye tracker hardware with the head-mounted display allows using the more robust head-mounted eye tracker without the inconvenience usually associated with that type of eye tracker. For many applications, the head-mounted camera assembly, while not heavy, is much more awkward to use than the remote configuration. However, in a virtual environment display, if the user is already wearing a head-mounted display device, the head-mounted eye tracker adds very little extra weight or complexity. The eye tracker camera obtains its view of the eye through a beam splitter, without obscuring any part of the user's field of view.

In this study, our first goal was to test the hypothesis that eye movement-based interactions would perform better in virtual environments than other natural interaction types. In order to test it, we needed to compare against a more conventional interaction technique as a yardstick. We used hand movement for comparison, to resemble the pointing or grabbing interaction that would commonly be found in virtual environments today. In addition, we investigated whether there would be performance differences between "close" and "distant" virtual environments, i.e., where objects are respectively within and beyond the reach of the user. Since pointing-based interaction requires the user to use hand and arm movements, the user has to move forward in order to reach and select the objects in the distant virtual environment. Eye movement-based interaction, however, allows the user to interact with objects naturally using only eye movements in both close and distant virtual environments. Therefore, we expected eye movement-based interactions to be faster, especially in distant virtual environments.
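For reference, a standard (Shannon) formulation of Fitts' Law, which the paper itself does not spell out, models movement time MT as a function of target distance D and target width W:

    MT = a + b \log_2\left(\frac{D}{W} + 1\right)

Here a and b are empirically fitted constants. Sibert and Jacob's observation corresponds to the slope b being close to zero for eye movements: the time to move the eye grows only marginally with the distance to be covered.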

Despite its potential advantages, eye movement-based interaction might have a drawback in terms of the users' ability to retain spatial information in virtual environments. Search tasks can be cumbersome for users in virtual environments, especially in large virtual worlds. Users may fail to remember the places they previously visited, and may have to visit them again. Therefore, we looked for the effect of eye movement-based interactions on spatial memory, i.e., the ability of the user to recall where objects reside in space. For this, we compared users' ability to recall spatial information in the eye movement vs. pointing conditions. As argued above, eye movement-based interaction decreases the effort required for interaction, whereas pointing-based interaction engages the user more in the interaction. This reduced level of engagement might also reduce the user's ability to retain spatial information about the objects during eye movement-based interactions. Hence, we expected spatial memory to be weaker in eye movement than in pointing-based interactions in virtual environments.

A NEW INTERACTION TECHNIQUE

Using natural eye movements in virtual environments requires the development of appropriate interaction techniques. In this study, we developed an interaction technique that combines features of eye movements and non-command based interactions in a virtual environment. Our objective is to enable users to interact with eye movements, without explicit commands where possible. However, we should also avoid the Midas Touch problem, i.e., unwanted activation of commands every time the user looks at something [9]. Our approach here was for the computer to respond to the user's glances about the virtual environment with continuous, gradual changes. Imagine a histogram that represents the accumulation of eye fixations on each possible target object in the VR environment. As the user keeps looking at an object, the histogram value of that object increases steadily, while the histogram values of all other objects slowly decrease. At any moment we thus have a profile of the user's "recent interest" in the various displayed objects.

In our design, we respond to those histogram values by allowing the user to select and examine the objects of interest. When the user shows interest in a 3D object by looking at it, our program responds by enlarging the object, fading its surface color out to expose its internals, and hence selecting it. When the user looks away from the object, the program gradually zooms the object out, restores its initial color, and hence deselects it. The program uses the histogram values to calculate factors for zooming and fading continuously (a sketch of this update loop appears below).

As Figures 1, 2, and 3 suggest, there is too much information in our virtual environment scene to show it all at once, i.e., with all the objects zoomed in. It is necessary for the user to select objects and expand them individually to avoid display clutter. This is intended to simulate a realistic complex environment, where it is typically impossible to fit all the information on a single display; user interaction is thus required to get all of the information.
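The paper does not publish its implementation, but the fixation-histogram mechanism described above can be sketched as follows. All names and rate constants here are illustrative assumptions, not values from the study:

    # Sketch of the fixation-histogram technique described above.
    # GROW, DECAY, and MAX_INTEREST are illustrative; the paper does
    # not report its actual parameter values.

    GROW = 1.0           # interest gained per second while fixated
    DECAY = 0.2          # interest lost per second by all other objects
    MAX_INTEREST = 1.0   # histogram values are clamped to [0, MAX_INTEREST]

    def update_interest(histogram, fixated_id, dt):
        """Advance each object's 'recent interest' value by one frame."""
        for obj_id in histogram:
            if obj_id == fixated_id:
                histogram[obj_id] = min(MAX_INTEREST,
                                        histogram[obj_id] + GROW * dt)
            else:
                histogram[obj_id] = max(0.0, histogram[obj_id] - DECAY * dt)

    def render_factors(interest):
        """Map an interest value to continuous zoom and fade factors.

        Zoom runs from 1.0 (initial size) up to an enlarged scale, and
        surface opacity runs from 1.0 (opaque) to 0.0 (internals visible).
        """
        t = interest / MAX_INTEREST
        return 1.0 + t, 1.0 - t   # (zoom factor, surface opacity)

Because non-fixated objects decay slowly rather than resetting, a brief stray glance neither selects a new object nor immediately deselects the current one, which is how the gradual-change design sidesteps the Midas Touch problem.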


We developed an alternate version of our design for use with the hand, using the Polhemus sensor. In this version, the user indicates interest in a displayed object by placing a virtual pointer on it. When the user moves the virtual pointer into an object, our program responds by zooming in the object, fading out its color, and hence selecting the object. When the user moves the pointer away from the object, the program deselects it.

METHOD

Apparatus

We conducted this study in the Human Computer Interaction Laboratory in the Tufts University Electrical Engineering and Computer Science Department. We used a Silicon Graphics Indigo2 High Impact workstation, Virtual i-Glasses head-mounted display, ISCAN eye tracker, and Polhemus 3Space FASTRAK magnetic tracker. One of the Polhemus receivers was on the head-mounted display to provide VR camera positioning, and one was on a cardboard ring around the subject's finger for pointing in the virtual environment.

The ISCAN eye tracker system consists of the eye tracker, eye and scene monitors, ISCAN Headhunter Line of Sight Computation and Plane Intersection Software (version 1.0), and its own separate computer. The software monitors eye movements, performs calibrations, and processes eye images. It runs on a separate Pentium 100 MHz personal computer. The eye tracker hardware is built into the head-mounted display. It has a tiny eye camera, an infrared (IR) light source, and a dichroic mirror. The IR light source creates the corneal reflection on the eye. The eye camera captures the eye image reflected in the mirror and sends it to a frame grabber in the PC. The PC software calculates the visual line of gaze using the relationship between the center of the pupil and the corneal reflection point. Unlike more conventional usage of head-mounted eye trackers, in VR we only require the eye position relative to the head, not the world, since the VR display moves with the head. The PC sends the stream of processed eye data via a serial port to the Silicon Graphics computer, where the main VR system is running.
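ISCAN's line-of-sight computation is proprietary and the paper does not detail it, but the pupil/corneal-reflection relationship mentioned above is conventionally exploited by mapping the pupil-center-minus-corneal-reflection vector to gaze coordinates with a low-order polynomial fitted during calibration. The following is a minimal sketch of that textbook approach, not ISCAN's actual algorithm; the function names and the quadratic form are assumptions:

    import numpy as np

    def fit_gaze_mapping(pupil_cr_vectors, calibration_points):
        """Least-squares fit of a quadratic map from pupil-CR difference
        vectors (dx, dy) to the known gaze coordinates (x, y) of targets
        shown during calibration."""
        dx, dy = pupil_cr_vectors[:, 0], pupil_cr_vectors[:, 1]
        A = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
        coeffs, *_ = np.linalg.lstsq(A, calibration_points, rcond=None)
        return coeffs  # shape (6, 2): one column of terms per gaze axis

    def estimate_gaze(coeffs, dx, dy):
        """Apply the fitted map to a new pupil-CR difference vector."""
        terms = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
        return terms @ coeffs  # (x, y), relative to the head-mounted display

Because the display moves with the head, this head-relative gaze point is all the VR system needs; no compensation for head motion in world coordinates is required.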

Stimuli

Our virtual environment displays a virtual room that contains fifteen geometrical objects (spheres and polygons) in five different colors (blue, pink, green, yellow, and orange) (Figure 1). Each object contains four cylinders that are textured with a letter. In one version ("close"), objects are within the reach of the subject's arm; in the other ("distant"), subjects need to move 5-15 inches in order to reach the objects. Our interaction technique allows subjects to see the cylinders and letters inside the objects by selecting the objects. Initially, the cylinders and letters are not visible (Figure 2). When the subject selects an object (by either Polhemus or eye tracker), the object starts growing and its color fades out, making the internal cylinders and letters visible (Figure 3). When the subject deselects the object, it starts shrinking, its color fades back, and it returns to its initial state, with the cylinders again invisible. The program allows subjects to select only one object at a time. We set the time constants for fading in and out based on informal trials with the eye and Polhemus versions to optimize each version separately; the result was that the eye movement response is set about half as fast as the pointing-based version, mainly to avoid the Midas Touch problem and to balance the time spent in locating the Polhemus.

Virtual Reality Software

We implemented our VR software using the new PMIW user interface management system for non-WIMP user interfaces being developed in our lab [10, 11]. Our system is particularly intended for user interfaces with continuous inputs and outputs, and is thus well suited to the histogram interaction technique, with its gradual, continuous fading. The VR programs run on the Silicon Graphics workstation, using SGI Performer to display the graphics and enable users to interact in the virtual environment. There are four versions of the program, for eye movement vs. pointing based interactions and for close vs. distant virtual environments.

Experimental design

We used a within-subjects experimental design for devices. All subjects used both eye movement and pointing interactions. The order of the two was counterbalanced to eliminate differences in fatigue and learning. We also investigated how performance varies between the "close" and "distant" conditions with a between-subjects design, by dividing the subjects into two groups; one interacted in close virtual environments and the other, in distant.

Participants

Thirty-one subjects volunteered to participate in the experiment. They were undergraduate and graduate students in our department. Their average age was 22. Twenty-eight subjects had no prior experience with VR, while three subjects had some insignificant experience: they had used a VR system only once. We eliminated seven subjects from the sample before beginning their sessions, because we could not successfully calibrate the eye tracker to their pupils. The remaining 24 subjects (twenty male, four female) successfully completed the experiment.

Experimental setting

Figure 4 is an illustration of the devices used in the experiment. The subject used the VR programs by standing next to the Polhemus transmitter, which was placed on an aluminum camera tripod, 8 inches below the subject's head. We removed all furniture and equipment in the physical area used by the subjects in order to prevent distraction and facilitate movements, especially during their interaction with distant virtual environments. We used a table lamp for lighting.


Figure 4. Illustration of the devices.

Procedure

One day before the experiment, the subject attended a training session to familiarize him or her with the head-mounted display, eye tracker, Polhemus, and the interaction techniques. First, the experimenter introduced the devices and explained their functionality. Then, she calibrated the eye tracker for the subject's pupil using the ISCAN software. This calibration took approximately 2-3 minutes. Next, the subject used the eye tracker, the Polhemus, and the programs to practice the interaction technique. He or she learned how to select and deselect objects and see the letters in the virtual environments using eye movement and pointing based interactions. On average, the whole training session took about 20 minutes.¹

On the next day, the subject was asked to do two search tasks, one with the eye tracker, the other with the Polhemus. First, the experimenter read a script explaining that the search task is to find the two geometrical objects² that contain a particular target letter on their internal cylinders, and asked the subject to inform the experimenter as soon as he or she found the objects.³ Then she adjusted the physical position of the subject in order to ensure that all subjects start the experiment in a standard position. Next, she initialized the program. At this point, the virtual environment was not yet visible to the subject. Second, she pronounced the target letter and pressed a mouse button to make the virtual environment visible to the subject and record the start time, and the subject started searching for the target letter.

¹ At the end of the experiment, we asked subjects to respond to the following statement on a 7-point Likert scale (1: strongly disagree .. 7: strongly agree) in order to check if the training achieved its goal: "The training session familiarized me with using the eye tracker/Polhemus." The responses show that they were sufficiently familiarized with the eye tracker (Mean = 6.08; Standard deviation = 1.02) and the Polhemus (Mean = 6.00; Standard deviation = 1.14).

² We used two objects to test if there is a learning effect, but we could not find any.

³ At the end of the experiment, we asked subjects to respond to the following statement on a 7-point Likert scale (1: strongly disagree .. 7: strongly agree) in order to check if they understood the task: "I understood the task before starting with the experiment." The responses show that they understood the task (Mean = 6.29; Standard deviation = 0.95).

The subject notified the experimenter each time he or she found an object containing the target letter. The experimenter pressed the mouse button to record completion times for the first and second objects. When the subject found the second object, the program and the search were terminated. In the analysis, we used the elapsed time to find both of the objects as the performance measure.

The next task was the memory task. The goal was to record how well the subject could recall which two objects contained the target letter. The experimenter started a program that displayed to the user the virtual environment he or she had just interacted with. This display enabled the subject to see all of the objects that were present in the virtual environment, but not to access the internal cylinders or letters. The experimenter asked the subject which two objects had contained the target letter in the search task, and recorded his or her responses. In the analysis, we used the number of correctly recalled objects as the measure of spatial memory.

Finally, the subject filled out a survey containing questions about satisfaction with the eye tracker and Polhemus.⁴ The experimenter also encouraged the subjects to give verbal feedback about their experience with the two technologies, and recorded the responses.

RESULTS

Our first hypothesis was that in virtual environments subjects would perform better with eye movement-based interactions than with hand based interactions. As a sub-hypothesis, we also expected an interaction effect: that the performance difference between eye and Polhemus would be greater in the distant virtual environment compared to the close one. Our independent variables were interaction type (eye movement, pointing) and distance (close, distant). Our dependent variable was performance (time to complete the task). Table 1 provides the descriptive statistics for our measurements. We tested the first hypothesis by comparing the means of the pooled performance scores (performance scores in close and distant virtual environments) using one-way analysis of variance (ANOVA). We found that performance with eye movement-based interactions was indeed significantly faster than with pointing (F[1,46] = 10.82, p = 0.002). The order in which the subjects used eye movement and pointing based interactions did not indicate an important effect on the performance: 21 out of the 24 subjects performed faster with eye movement-based interaction. Next, we separated the performance scores in close and distant virtual environments into two subgroups and repeated the ANOVA for each subgroup.

⁴ The survey contained questions adapted from Shneiderman and Norman's [16] Questionnaire for User Interface Satisfaction (QUIS), and Doll and Torkzadeh's [5] end-user satisfaction scale.


Table 1. Performance of eye movement and pointing based interactions

Time to complete the task (seconds):

                               Eye movements          Pointing
                               n    M      SD         n    M      SD
Close virtual environments     12   67.50  23.86      12   96.56  46.76
Distant virtual environments   12   56.89  19.73      12   90.28  36.16
Overall                        24   62.19  22.09      24   93.47  41.01

Notes: n, M, and SD represent number of subjects, mean, and standard deviation respectively.

[Figure 5. Mean times of performance. Line plot of mean task-completion time (seconds, 0-120) for the Eye and Polhemus conditions, with separate lines for the close, distant, and overall scores.]

We found that in distant virtual environments, performance with eye movement-based interactions was indeed significantly higher than performance with pointing (F[1,22] = 7.89, p = 0.010). However, performance between the two types of interactions was not significantly different in close virtual environments (F[1,22] = 3.7, p = 0.067). Figure 5 shows the graphical representation of the means of pooled, distant, and close performance for both eye movement and pointing based interactions.

Our second hypothesis was that in virtual environments, the spatial memory of subjects would be weaker in eye movement-based interactions than in pointing based interactions. We measured spatial memory by the number of objects correctly recalled after completion of the memory task as our dependent variable; our independent variable was the type of interaction, as before. Table 2 shows the descriptive statistics for our measurements. Comparing the means of the spatial memory scores in the two types of interactions using one-way ANOVA, we found that the number of correctly recalled objects in eye movement-based interactions was indeed significantly lower than that in pointing (F[1,46] = 7.3, p = 0.010).
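The comparisons above are standard one-way ANOVAs. As a minimal illustration of how such a comparison could be computed, the sketch below uses hypothetical per-subject completion times; the study's raw data are not published:

    from scipy.stats import f_oneway

    # Placeholder task-completion times in seconds, one value per
    # subject; illustrative only, not the study's data.
    eye_times = [67.2, 55.0, 81.3, 48.9, 60.5, 71.8]
    pointing_times = [96.4, 88.1, 120.7, 74.2, 101.3, 85.6]

    f_stat, p_value = f_oneway(eye_times, pointing_times)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")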

Finally, we were interested in exploring subjects' satisfaction with interacting with eye movements and pointing. We asked subjects about ease of getting started, ease of use, accuracy, and fatigue felt with the eye tracker and Polhemus, and whether they found these technologies useful in VR systems. We were particularly interested in the users' reaction to eye tracking, because this is an immature technology, and eye trackers do not yet work as steadily or reliably as mice or Polhemus trackers. Table 3 shows the questions and subjects' responses. As the ANOVA results in the last two columns of the table indicate, subjects' satisfaction with the eye tracker and Polhemus were not statistically different. We further posed the subjects the following open-ended question: "Do you prefer the eye tracker or Polhemus for this task? Please explain why." Eighteen of the 24 subjects specified that overall they preferred the eye tracker to the Polhemus. They said that using eye movements was natural, easy, faster, and less tiring. Six subjects preferred the Polhemus because they found that it was accurate and easy to adapt to because of its similarity to the mouse.

DISCUSSION AND CONCLUSIONS

By developing an interaction technique that allows the use of natural eye movements in virtual environments, we were able to compare the performance of eye movement and pointing based interactions in close and distant virtual environments. The results show that interaction with eye movements was faster than interaction with pointing. They further indicate that the speed advantage of eye movements was more significant in distant virtual environments. Our findings suggest that eye movement-based interactions could become a viable interaction type in virtual environments, provided that proper interaction techniques are developed.

However, our data also point to a price paid for this increased speed when the task requires spatial memory. Our subjects had more difficulty in recalling the locations of the objects they interacted with when they used eye movement-based interactions. They recalled significantly more objects when they used pointing. One possible explanation for this result may be the ease of use of eye movement-based interactions. Our subjects explained that they were "just looking," as in the real world, when they interacted with eye movements. Hence, the cognitive burden of interacting was low. They did not have to spend any extra effort for locating the objects or manipulating them. When they used pointing, however, they had to spend extra physical effort to reach out to the objects and interact with them.


Table 2. Spatial memory in eye movement and pointing based interactions

                  Spatial memory measures
                  n     M      SD
Eye movements     24    0.79   0.83
Pointing          24    1.50   0.98

Notes: Spatial memory is measured by the number of objects correctly recalled by subjects after completion of the task. n, M, and SD represent number of subjects, mean, and standard deviation respectively.

Table 3. Satisfaction of subjects with eye tracker and Polhemus

                                                   Eye tracker    Polhemus
Survey questions                                   M     SD       M     SD       F-value        p-value
Getting started with the eye tracker/Polhemus
is easy.                                           5.33  1.69     5.54  1.59     F(1,46)=0.19   0.66
The eye tracker/Polhemus is easy to use.           5.83  1.34     5.54  1.56     F(1,46)=0.48   0.49
The eye tracker/Polhemus is accurate.              5.29  1.33     5.79  1.38     F(1,46)=1.63   0.21
Are you satisfied with the accuracy of the
eye tracker/Polhemus?                              5.83  1.05     5.83  1.43     F(1,46)=0.00   1.00
Did you feel fatigued when searching with the
eye tracker/Polhemus?                              3.21  1.74     3.29  1.94     F(1,46)=0.02   0.88
I would find eye tracking/pointing useful in
virtual reality systems.                           5.75  1.33     5.46  1.47     F(1,46)=0.52   0.48

Notes: All questions were posed using a 7-point Likert scale ranging from strongly disagree (1) to strongly agree (7). M and SD represent mean and standard deviation respectively. F- and p-values are from ANOVA analyses that compare the means of the answers given for the eye tracker and Polhemus.

Spending this extra effort may have helped them retain the spatial information of the objects in their memory. This finding has implications for the choice of interaction technique in a virtual environment: the eye is a particularly good choice if later spatial recall is not necessary. It may also be possible to design new eye movement-based interactions that facilitate the user in retaining the spatial information of objects after interacting with them. One approach to address this weakness might be to incorporate more spatial cues to help users recognize and retain spatial information in the virtual environments [4].

Currently, eye tracker technology is not as mature and reliable as the Polhemus-type magnetic tracker. This applies to the technology in general; other available eye trackers we have used in other work have given roughly similar performance. Therefore, we had expected that the subjects would be less satisfied with eye tracker technology than with the Polhemus. However, our satisfaction survey and interviews with subjects after the experiment showed that they were equally satisfied with the eye tracker and Polhemus. They stated that they liked the idea of using eye movements without having to think of initiating a command. This provides support for our claim that eye movements fit well into a non-command style user interface.

This study has shown that eye movement-based interactions are promising in virtual environments. We believe that by developing proper interaction techniques, eye movement-based interactions can address weaknesses of extant interaction techniques in virtual environments. In subsequent work, we hope to examine still more subtle or "lightweight" interaction techniques using eye movements, and to compare "interaction by staring at" with a more subtle "interaction by looking around."

ACKNOWLEDGMENTS

We want to thank Prof. Sal Soraci of the Psychology Department at Tufts for valuable discussions about the spatial memory issue. We thank each of our subjects who volunteered to participate in this experiment.

This work was supported by National Science Foundation Grant IRI-9625573 and Office of Naval Research Grant N00014-95-1-1099. We gratefully acknowledge their support.


REFERENCES

1. Bolt, R.A., Gaze-Orchestrated Dynamic Windows, Computer Graphics, vol. 15, no. 3, pp. 109-119, August 1981.
2. Bolt, R.A., Eyes at the Interface, Proc. ACM Human Factors in Computer Systems Conference, pp. 360-362, 1982.
3. Bowman, D.A., Hodges, L.F., An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments, Proceedings of the 1997 Symposium on Interactive 3D Graphics, pp. 35-38, 1997.
4. Darken, R., Sibert, J., Wayfinding Strategies and Behaviors in Large Virtual Worlds, Proceedings of ACM CHI'96 Human Factors in Computing Systems Conference, pp. 142-149, 1996.
5. Doll, W.J., Torkzadeh, G., The Measurement of End-User Computing Satisfaction, MIS Quarterly, vol. 12, no. 2, pp. 259-274, 1988.
6. Forsberg, A., Herndon, K., Zeleznik, R., Effective Techniques for Selecting Objects in Immersive Virtual Environments, Proc. ACM UIST'96 Symposium on User Interface Software and Technology (UIST), 1996.
7. Glenn, F.A., et al., Eye-voice-controlled Interface, Proc. 30th Annual Meeting of the Human Factors Society, pp. 322-326, Santa Monica, Calif., 1986.
8. Jacob, R.J.K., The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look At Is What You Get, ACM Transactions on Information Systems, vol. 9, no. 3, pp. 152-169, April 1991.
9. Jacob, R.J.K., Eye Tracking in Advanced Interface Design, in Advanced Interface Design and Virtual Environments, ed. W. Barfield and T. Furness, Oxford University Press, Oxford, 1994.
10. Jacob, R.J.K., A Visual Language for Non-WIMP User Interfaces, Proc. IEEE Symposium on Visual Languages, pp. 231-238, IEEE Computer Society Press, 1996.
11. Jacob, R.J.K., Deligiannidis, L., Morrison, S., A Software Model and Specification Language for Non-WIMP User Interfaces, ACM Transactions on Computer-Human Interaction, vol. 6, no. 1, pp. 1-46, March 1999.
12. Koller, D., Mine, M., Hudson, S., Head-Tracked Orbital Viewing: An Interaction Technique for Immersive Virtual Environments, Proc. ACM UIST'96 Symposium on User Interface Software and Technology (UIST), 1996.
13. Mine, M., Brooks Jr., F.P., Sequin, C., Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction, Proceedings of SIGGRAPH 97, Los Angeles, CA, pp. 19-26, 1997.
14. Nielsen, J., Noncommand User Interfaces, Communications of the ACM, vol. 36, no. 4, pp. 83-99, April 1993.
15. Poupyrev, I., Billinghurst, M., Weghorst, S., Ichikawa, T., The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR, Proc. ACM Symposium on User Interface Software and Technology (UIST), 1996.
16. Shneiderman, B., Norman, K., Questionnaire for User Interface Satisfaction (QUIS), in Designing the User Interface: Strategies for Effective Human-Computer Interaction, Second Edition, Addison-Wesley, 1992.
17. Sibert, L.E., Jacob, R.J.K., Evaluation of Eye Gaze Interaction Techniques, Proc. ACM CHI 2000 Human Factors in Computing Systems Conference, Addison-Wesley/ACM Press, 2000 (in press).
18. Starker, I., Bolt, R.A., A Gaze-Responsive Self-Disclosing Display, Proc. ACM CHI'90 Human Factors in Computing Systems Conference, pp. 3-9, Addison-Wesley/ACM Press, 1990.
19. Ware, C., Mikaelian, H.T., An Evaluation of an Eye Tracker as a Device for Computer Input, Proc. ACM CHI+GI'87 Human Factors in Computing Systems Conference, pp. 183-188, 1987.


Figure 1. The entire virtual room.

Figure 2. A portion of the virtual room. None of the objects is selected in this view.

Figure 3. The purple object near the top of the display is selected, and its internal details have become visible.
