


Int. J. Human-Computer Studies 64 (2006) 221-239

    www.elsevier.com/locate/ijhcs

Virtual prints: Augmenting virtual environments with interactive personal marks

Dimitris Grammenos a, Alexandros Mourouzis a, Constantine Stephanidis a,b

a Foundation for Research and Technology-Hellas (FORTH), Institute of Computer Science, GR-70013 Heraklion, Crete, Greece
b Department of Computer Science, University of Crete, Greece

    Available online 4 October 2005

    Abstract

This paper introduces the concept of Virtual Prints (ViPs) as an intuitive metaphor for supporting interaction and navigation, as well as a number of additional tasks in virtual environments (VEs). Three types of ViPs are described: Virtual Footprints, which are used for tracking user navigation (position, orientation and movement), Virtual Handprints, which are used for tracing user interaction with the VE, and Virtual Markers, which are 'special' marks (usually coupled with information) that can be created upon user request. In a VE, the ViPs concept is instantiated and supported through a software mechanism (the ViPs mechanism) that allows users to create, manage and interact with their personal ViPs, as well as other users' ViPs.

The paper presents the background and related work upon which the suggested concept builds, as well as the distinctive properties that differentiate ViPs from other related efforts. An account of how users can interact with ViPs is provided and related issues and challenges are discussed along with techniques and methods for addressing them. The paper also describes the process followed towards defining and experimenting with the concept of ViPs by means of iterative design and evaluation of an interactive prototype. This process involved exploratory studies, as well as several inspections and formal tests with both experts and potential end-users, in order to assess the usefulness of the concept and identify possible shortcomings, and also to evaluate and improve the usability of the proposed designs and software prototypes. In general, the findings of the studies reinforce the initial hypothesis that ViPs are an intuitive and powerful concept, and show that the related software is easy to learn and use. Overall, the results of the studies support strong evidence that an appropriately designed and implemented, fully functional ViPs mechanism can significantly increase the usability of VEs.

© 2005 Elsevier Ltd. All rights reserved.

    1. Introduction

In the real world, every living organism constantly leaves traces of its existence and its interaction with the physical environment: deer leave their paw marks on the soft forest soil, dolphins carve foam traces on the surface of the sea, flies leave annoying black spots on windows, and young children imprint their dirty handprints on freshly painted house walls.

Since the early years of their presence on earth, humans observed this inherent property of the environment and learned to use it in various ways in order to make their lives easier. For example they learned to recognize the paw prints of animals to track down their prey or to avoid ferocious creatures, they used footprints to explore unknown territories or find their colleagues in search and rescue operations (Kearney, 1999), they examined fossils to study human history and evolution (Tattersall, 1995), and they revealed and analysed fingerprints to solve crimes (Beavan, 2001).

In contrast to real environments, Virtual Environments (VEs) do not allow their 'inhabitants' to leave any trace behind, thus suffering from an 'extreme cleanness syndrome'. Walk into your house after leaving your children alone for the weekend and you can instantly realize that a wild party took place while you were away. Walk into a virtual chat room seconds after a meeting of 200 people has finished and it will be exactly as if no one has ever been there before.

Inspired by these observations, the concept of Virtual Prints (ViPs) is proposed (Grammenos et al., 2002) as the digital, interactive counterparts of real-life tracks.


The basic idea is that while a user is moving in a VE, Virtual Footprints (FootViPs) are left behind, whereas each time an interaction with an object occurs, the user's Virtual Handprints [1] (HandViPs) are imprinted on it. Both FootViPs and HandViPs can be time-sensitive and gradually fade, as real (or virtual) time goes by. Virtual Markers [2] (MarkerViPs) are permanent marks coupled with user-defined data (e.g. a textual or audio message) which can be left in the environment, or pinned on any virtual object, and can act as personal landmarks, annotations or anchors.

ViPs can have a variety of uses in a VE, ranging from supporting navigation (i.e. travel and wayfinding), to training and creation of tutorial sessions, conducting user-based evaluations, etc. Furthermore, as FootViPs and HandViPs are actually a means for recording and visualizing navigation and interaction history, respectively, they have the potential to introduce in VEs several functions and concepts that are popular, if not standard, in conventional 2D user interfaces, such as shortcuts, bookmarks, undo/redo functions, collaborative review, as well as marking/identifying (non) visited content (Mourouzis et al., 2003). Correspondingly, MarkerViPs can be used for content annotation and highlighting, or for offering context-sensitive help. Although this paper focuses on using ViPs in a VE, they can also be effectively used in Augmented Reality Environments. For example a person using an augmented reality system in a museum can follow ViPs that are related to a specific topic of interest, or those of a virtual guide.

A considerable advantage of ViPs is that they can be used in any VE and in combination with any existing navigation and wayfinding support approach. Furthermore, the fact that ViPs have real-life counterparts with which humans are very familiar renders them an intuitive and potentially easy-to-use metaphor.

The rest of the paper is structured as follows: Section 2 presents the background and related work upon which the suggested concept builds. Section 3 describes the distinctive properties and characteristics of each type of ViPs, while Section 4 provides an account of how ViPs can be instantiated in a VE through a related software mechanism, and of how end-users can interact with them. Section 5 illustrates challenges that may potentially arise when putting ViPs to real use, along with suggestions and ways for overcoming such challenges. Section 6 provides a comprehensive overview of possible uses of ViPs beyond navigation, orientation and wayfinding. Section 7 describes the process that was followed for making the transition from early concept formation to a fully functioning software implementation, including the exploratory studies which were conducted, as well as several inspections and formal experiments with both experts and potential end-users. Finally, Section 8 concludes the paper and offers an insight into future work.

[1] Virtual Handprints were originally named Virtual Fingerprints, but our studies revealed that the concept of Handprints is far better both in terms of usability and intuitiveness (e.g. fingerprints are too small to be noticed and to interact with).
[2] Virtual Markers were originally termed Virtual Fossils, but they were renamed as a result of user testing.

    2. Background and related work

In the past few years, a number of industrial VE applications have been developed and put to practical use. The Virtual Reality (VR) market is growing rapidly (Arrington and Staples, 2000; CYBEREDGE, 2001) and VEs have been adopted as a useful and productive tool for a variety of applications (Delaney, 1999). Nevertheless, user feedback reveals that there are still several barriers that impede the sustainable and appropriate use of VEs in the industry environment, including barriers concerning the integration of technologies, barriers due to insufficient knowledge concerning the impact of such technologies on the user, as well as usability barriers (Crosier et al., 2000; Bowman et al., 2001).

    2.1. Navigation in VEs

Navigation is a key task in any type of VE. Navigation can be considered as a combination of travel and wayfinding. Travel is the minimum interaction capability offered by any VE and involves the ability of the users to control the position (i.e. to move) and orientation (i.e. gaze direction) of their virtual body. Wayfinding means that the user is aware of his/her current location and of how they can get to a desired destination. Although there have been numerous efforts in this area, navigation still remains a major challenge, since observations from numerous studies and usability analyses indicate that this task (especially in large-scale VEs) can be very difficult, and may result in user disorientation and distress (Ellis et al., 1991; Darken and Sibert, 1993; McGovern, 1993; Darken and Goerger, 1999; Vinson, 1999).

The reasons why navigation can be so cumbersome in VEs can be summarized in the following:

(a) Navigation is a difficult task also in the real world. Humans may have difficulties when dealing with unfamiliar or complicated and unstructured physical environments (e.g. a forest, a highway, or a modern building). To overcome these difficulties, navigation support tools have been developed including maps, compasses, signs, and electronic global positioning systems (GPS). Thus, even if VEs were indistinguishable from the real ones, navigation would still be a major challenge.

(b) Lack of constraints (Chen, 2003). In the real world, several constraints exist when moving from one location to another. There are paths to follow, doors to go through, insurmountable obstacles, and distance or time restrictions that significantly decrease movement possibilities. In most VEs, the user has the capability to fly or even instantly be transported to remote places, and often minimal actions and physical effort result in dislocation over very large distances.

(c) Lack of cues (Vinson, 1999). Humans, in order to navigate in large spaces of the physical world, subconsciously reconstruct an abstract mental representation of the environment, known as a cognitive map. This representation is created through a combination of spatial and proprioceptive cues. Spatial cues may include landmarks, the relative position of structures and buildings, but also sounds and smells. Proprioceptive cues are collected through the physical locomotion of the body and its parts. VEs provide significantly fewer spatial cues than real environments. First, there are technical and cost limitations for producing high-fidelity visual representations. Additionally, there is high reuse of 3D models in order to ease the task and cost of populating and displaying the virtual world, and thus sometimes it is difficult to differentiate between different parts of the environment. Non-visual cues are rarely integrated in VEs and proprioception is engaged only through devices and techniques that are still in the form of research prototypes. Finally, in non-immersive VR applications, where the VE is viewed at a small scale and through a limited, external, viewpoint, cues are hard to notice and internalize.

Related work can be broadly classified in the following complementary research directions:

- Informing the design of VEs.
- Development of appropriate input techniques and devices for user movement in VEs.
- Development of VE navigation and wayfinding support tools.

    2.1.1. Informing the design of virtual environments

This research direction is concerned with the development of appropriate guidelines (mainly by exploiting existing environmental design principles) for the creation of well-structured spaces that inherently aid orientation and wayfinding. For example Charitos (1997) presents a taxonomy of possible objects in VEs (namely landmarks, signs, boundaries, and thresholds), as well as of the spatial elements that these objects define (places, paths, intersections and domains), and suggests how these elements can be used to support wayfinding, based on architectural design and on the way humans conceive and remember space in the real world. Along the same line of work, Darken and Sibert (1996) have studied wayfinding strategies and behaviours in large VEs, and suggest a set of environmental design principles that can also be applied in VEs. Hunt and Waller (1999) examined the relation of orientation and wayfinding between physical and virtual worlds, and the way existing knowledge can be transferred from the former to the latter, while Vinson (1999) offers a set of design guidelines for the placement of landmarks in a VE in order to ease navigation, based on concepts related to navigation in the physical world. On the other hand, research findings presented by Satalich (1995) lead to the observation that human behaviour with regards to navigation in the real world is not identical to behaviours exhibited in VEs, and thus it is likely that existing tools and principles may not be adequate or sufficient if directly transferred from one domain to the other.

A basic limitation of these approaches is that they require the modification of the virtual space's contents. This may not always be possible or desirable as, for example in the case of VEs that are based on real-world environments, thus rendering these approaches inappropriate for a large number of widely used VE applications such as simulations, engineering design and architectural walkthroughs.

2.1.2. Development of appropriate input techniques and devices for user movement in virtual environments

This research direction aims to provide easier and more intuitive navigation in VEs through the definition and development of appropriate hardware, as well as of related input techniques and metaphors that allow the user to move more naturally in a VE. For example the Omni-Directional Treadmill (Darken et al., 1997) and the Torus Treadmill (Iwata and Yoshida, 1999) aim to offer novel hardware solutions for naturally walking or jogging in a VE. Peterson et al. (1998) propose a new input device in the form of a body-controller interface called Virtual Motion Controller, and compare its performance in navigating a virtual world with a joystick. Templeman et al. (1999) present Gaiter, another input device and an associated interaction metaphor that allows users to direct their movement through VEs by stepping in place. A different approach is followed by Razzaque et al. (2001) who introduce a new interaction technique supporting locomotion in a VE, named Redirected Walking, that does not require any special hardware interface.

2.1.3. Development of VE navigation and wayfinding support tools

The third direction includes techniques and tools that are not directly integrated in VEs, but come in the form of additional virtual objects aiding users to identify their current (or target) location in a virtual world, as well as to construct an overview or mental model of the overall VE. A variety of such tools have been developed, including position coordinate feedback (Darken and Sibert, 1993), 2D maps (Darken and Sibert, 1993), 3D maps (e.g. the Worlds in Miniature (WIM) metaphor by Stoakley et al., 1995), metaphors for exploration and virtual camera control (Ware and Osborne, 1990), dedicated windows offering alternative viewpoints (e.g. the Through-the-Lens techniques, Stoev et al., 2001), 3D thumbnails for providing memorable destinations to return to later (e.g. Worldlets by Elvins et al., 1997) and position coordinate feedback that mimics GPS (Darken and Sibert, 1993).

Fig. 1. Examples of alternative personalized ViPs.
Fig. 2. Example of a time-sensitive FootViP.

    2.2. ViPs vs. related work

The last of these three research directions is the one most closely related to the work presented in this paper, and includes a few concepts that share some common attributes (but have significant differences) with ViPs. A relevant example is the concept of breadcrumbs (Darken and Sibert, 1993), which are markers in the form of simple unmarked cubes hovering just above the ground plane. They can be dropped (manually or automatically) on the user's path as a means for marking trails or encoding locations, but do not offer any directional information or interactivity. The notion of breadcrumbs is also referred to as trailblazing (Edwards and Hand, 1997) and the Hansel & Gretel technique (Darken and Peterson, 2002). Edwards and Hand (1997) have also suggested that such a technique could be extended to support the notion of embodied bookmarks, allowing the user to record and return to specific previously marked positions. More recently, Darken and Peterson (2002) revisited the notion of breadcrumbs by adding directional cues to them, admitting that 'a better analogy is that of footprints since footprints are directional and breadcrumbs are not'. They argue that the footprints technique can be useful for exhaustive searches, since it allows users to know if they have already been in some place before. Furthermore, they report two related problems, namely that: (a) as navigation proceeds the environment becomes cluttered with footprints; and (b) when the user crosses paths it becomes difficult to disambiguate which footprints belong to a given trail. Finally, the concept of environmental landmarks that can be explicitly placed by the user (Darken and Peterson, 2002) and the notion of private and shared annotation in 3D environments (Yang et al., 2002) share some common attributes with MarkerViPs. In conclusion, the main issues related to ViPs that have been explored by existing work are: (a) marking trails, and (b) creating personal landmarks and annotations in VEs.

The work presented in this paper builds upon and extends these efforts by introducing new concepts, such as the HandViPs (leaving tracks of the user's interaction with, as opposed to mere movement in, the environment), as well as by proposing uniform, appropriate, ways and mechanisms for interacting with all types of ViPs and for utilizing them in an effective and efficient way. It is also worth observing that the most prominent characteristic of ViPs, in comparison to previous work, is that instead of being passive visual aids, ViPs are interactive and configurable. Users can manipulate and use them to navigate, access various types of information, communicate, annotate a VE or even perform actions not originally foreseen by their designers.

    3. ViPs properties and characteristics

In correspondence to real-world marks, ViPs can be personalized (Fig. 1). They thus provide users with a sense of existence and individuality, and help participants of multi-user VEs acquire awareness of other users and activities. ViPs can be represented in various ways, depending on the characteristics of the application and on the users' requirements and preferences.

ViPs can be time-sensitive (for example they can fade or change shape as time goes by, Fig. 2), thus avoiding ViPs pollution (see Section 5.1) and helping users distinguish older from new(er) ones, and keep track of time. Related ViPs can (upon user request) be connected through connecting lines (see Section 5.2).
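As an illustration of time sensitivity, the snippet below is a minimal sketch of how a ViP's opacity could decay with age until it disappears. It is not taken from the paper's Maverik-based prototype; the class, field names and fade interval are assumptions introduced for the example.

    import time

    class ViP:
        """A single Virtual Print with a creation timestamp and a visual opacity."""

        def __init__(self, owner, fade_after_s=600.0):
            self.owner = owner
            self.created_at = time.time()
            self.fade_after_s = fade_after_s  # seconds until the ViP is fully faded
            self.opacity = 1.0                # 1.0 = fully visible, 0.0 = invisible

        def update_opacity(self, now=None):
            """Linearly fade the ViP as (real or virtual) time goes by."""
            now = time.time() if now is None else now
            age = now - self.created_at
            self.opacity = max(0.0, 1.0 - age / self.fade_after_s)
            return self.opacity

        @property
        def expired(self):
            """A fully faded ViP can be hidden, yet kept so that it can still be searched for."""
            return self.opacity <= 0.0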

    3.1. Virtual footprints (FootViPs)

While a user is moving within a virtual world, FootViPs are left behind (examples are illustrated in Figs. 3 and 7). A FootViP can store and provide:

- Spatial information, i.e. the position and orientation of the user in the virtual world.
- Chronological information, i.e. time and date of creation, last access or modification.
- Personal information, e.g. owner name, e-mail, current position, status.
- Information about related HandViPs or MarkerViPs.


FootViPs can be released anytime, either upon user demand or automatically at specific time or space intervals, as well as each time a HandViP or a MarkerViP is released. All FootViPs belonging to a single user that share common spatial (e.g. are in a specific virtual area), chronological (e.g. were created during a specific time period) or semantic (e.g. were created while performing a specific task) characteristics can be grouped and visualized as Virtual Paths.
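The release policy described above can be made concrete with a small sketch. The tracker below drops a FootViP either on demand or automatically whenever a time or distance threshold is exceeded; the class, thresholds and record layout are illustrative assumptions, not part of the original prototype.

    import math
    import time

    class FootViPTracker:
        """Releases FootViPs on demand, or automatically at time/space intervals."""

        def __init__(self, owner, every_s=5.0, every_m=2.0):
            self.owner = owner
            self.every_s = every_s      # automatic release: time interval (seconds)
            self.every_m = every_m      # automatic release: distance interval (metres)
            self.last_time = time.time()
            self.last_pos = None
            self.footvips = []          # later grouped into Virtual Paths

        def release(self, position, orientation):
            """Unconditionally drop a FootViP at the user's current pose."""
            self.footvips.append({
                "owner": self.owner,
                "position": position,
                "orientation": orientation,
                "created_at": time.time(),
            })
            self.last_time = time.time()
            self.last_pos = position

        def update(self, position, orientation):
            """Called every frame; releases a FootViP when a threshold is exceeded."""
            moved = (math.dist(position, self.last_pos)
                     if self.last_pos is not None else float("inf"))
            waited = time.time() - self.last_time
            if moved >= self.every_m or waited >= self.every_s:
                self.release(position, orientation)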

    3.2. Virtual handprints (HandViPs)

Every time a user interacts with a virtual object, a HandViP is imprinted on it (see Fig. 8). At the same time, a FootViP, with which the HandViP is associated, is automatically released to record the position and orientation of the user at the moment the interaction took place (a minimal sketch of this pairing is given after the list below). For example if a pointing device is used, the user's HandViPs are imprinted (i.e. visualized) at the pointed position, e.g. as a cube or, more realistically, as a model of a Handprint. Non-visual HandViPs (e.g. generated from speech-based interaction) can also be released. Similarly to FootViPs, HandViPs can store and provide:

(i) Spatial information.

(ii) Personal information.

(iii) Information about the interactive component on which they are released, e.g. name of the type of interactive object.

(iv) Descriptive information about the performed user action, e.g. left-click on the interactive device.

(v) Any other information about the induced effect, such as semantic information, e.g. 'the door opened'.
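As a sketch of the HandViP-FootViP pairing described above, the function below imprints a HandViP on an interacted object and automatically releases an associated FootViP recording where the user stood. The data layout and the reuse of the FootViPTracker from the earlier sketch are assumptions for illustration only.

    def imprint_handvip(tracker, obj, hit_point, action, user_position, user_orientation):
        """Create a HandViP on 'obj' and pair it with an automatically released FootViP."""
        # Record the user's pose at the moment of interaction.
        tracker.release(user_position, user_orientation)
        footvip = tracker.footvips[-1]

        handvip = {
            "owner": tracker.owner,
            "object": obj,                  # interactive component the ViP is imprinted on
            "position": hit_point,          # where on the object the interaction happened
            "action": action,               # e.g. "left-click on the interactive device"
            "related_footvip": footvip,     # the FootViP released at the same time
            "created_at": footvip["created_at"],
        }
        footvip["related_handvip"] = handvip
        return handvip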

    3.3. Virtual markers (MarkerViPs)

MarkerViPs are permanent marks coupled with user-defined data that can be left anywhere within the virtual world or attached to a specific virtual object. Just like HandViPs, whenever a MarkerViP is released a FootViP is automatically created to record the position and orientation of the user. MarkerViPs can store:

- Spatial information.
- Personal information.
- A message in any digital form, such as text, audio, or multimedia, e.g. instructions of use for an interactive component.

Fig. 3. An example of accessing information related to a Virtual Footprint.

A MarkerViP can be associated with any other ViP (i.e. acting like a shortcut) allowing quick transportation from one location to another within the virtual world. MarkerViPs can also be used as a context-sensitive help system. For example a number of MarkerViPs can be attached to specific components of a VE, providing related descriptions and guidance for inexperienced users. Furthermore, MarkerViPs can act as bookmarks to points of interest. MarkerViPs can be both visual and non-visual, and can be represented by any object, such as for example pins, yellow stickers, road signs, wall signs/posters, or computer characters.
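The shortcut behaviour described above can be sketched as follows: a MarkerViP carries a user-defined message and may point at another ViP, so that selecting it can teleport the user to the associated location. The field names and the teleport callback are assumptions introduced for this example.

    def create_markervip(owner, position, message, linked_vip=None):
        """A permanent, user-created mark carrying a message and an optional shortcut target."""
        return {
            "owner": owner,
            "position": position,
            "message": message,        # text, audio or multimedia payload
            "linked_vip": linked_vip,  # another ViP this marker acts as a shortcut to
        }

    def follow_shortcut(markervip, teleport):
        """Jump to the location of the ViP associated with this marker, if any."""
        target = markervip.get("linked_vip")
        if target is not None:
            teleport(target["position"])   # 'teleport' is supplied by the host VE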

    4. Interacting with ViPs

In a VE, the ViPs concept is instantiated and supported through a software mechanism (the ViPs mechanism) that implements all the required functionality for generating, tracking, configuring and handling ViPs and also provides a user interface for interacting with them. This mechanism allows the user (through a visual or voice menu) to release a new ViP, activate or deactivate the automatic recording of ViPs, search for specific ViPs, and modify the ViPs generation and display configuration.

Each ViP is associated with miscellaneous data, such as its type, owner, creation time and date. This information can be presented to the user in multiple ways, depending on the application and user requirements. For example it can be visualized through an information sheet (such as the one illustrated in Fig. 3) triggered by a virtual pointing device, e.g. the mouse cursor, or a virtual hand.

When a ViP is selected, the supporting mechanism offers the following options:

(a) Set display options, e.g. modify the way ViPs are depicted; hide/display specific type(s) of ViPs; visualize connecting lines between related ViPs; hide the VE and see only the ViPs; scale the ViPs up or down (i.e. an inflate/deflate effect). Fig. 4 depicts a visual user interface as an indicative example for providing these options.

(b) Perform ViPs-based navigation, e.g. manually or automatically follow (forward and backward) existing tracks, or even hyper-jump to the first/last of them (a minimal sketch is given at the end of this section). Fig. 5 depicts a visual user interface as an indicative example for providing these options.

(c) Turn on/off the connecting lines related to the selected ViP.

(d) Find related ViPs (e.g. find the closest HandViP or MarkerViP that belongs to the same owner). Fig. 6 depicts a visual user interface as an indicative example for providing these options.

(e) In case the ViP belongs to the user, change ViP creation options.

Fig. 4. Example of a visual user interface for controlling ViPs display options.
Fig. 5. Example of a visual user interface for performing ViPs-based navigation.

Figs. 7 and 8 illustrate examples of 3D menus for interacting with FootViPs and HandViPs, respectively.
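The ViPs-based navigation of option (b) above amounts to walking an ordered trail of FootViPs. The cursor object below is a minimal sketch of that idea (step forward, step backward, hyper-jump to the first or last print); it is illustrative only and assumes the trail is a chronologically ordered list of ViP records with a 'position' field.

    class TrailCursor:
        """Follow an ordered list of FootViPs forward/backward or jump to either end."""

        def __init__(self, trail, teleport):
            self.trail = trail          # FootViPs sorted by creation time
            self.teleport = teleport    # callback that moves the user's viewpoint
            self.index = 0

        def _go(self, index):
            if not self.trail:
                return
            self.index = max(0, min(index, len(self.trail) - 1))
            self.teleport(self.trail[self.index]["position"])

        def forward(self):
            self._go(self.index + 1)

        def backward(self):
            self._go(self.index - 1)

        def jump_to_first(self):
            self._go(0)

        def jump_to_last(self):
            self._go(len(self.trail) - 1)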

    5. ViPs-related issues and challenges

The integration of ViPs in a VE has revealed several issues that need to be addressed.

    5.1. ViPs pollution

According to Darken and Peterson (2002), as navigation proceeds, the environment can become cluttered with footprints. This, of course, becomes far worse in a multi-user environment. The ViPs mechanism supports alternative methods for overcoming this problem:

(a) A filtering mechanism allows the users to select which ViPs they wish to see, according to several alternative parameters such as their type (i.e. FootViP, HandViP, MarkerViP), creator (e.g. the user's own ViPs, all ViPs, ViPs belonging to specific users or user groups), creation time (e.g. last X minutes, from time A to time B), proximity (e.g. the Y closest ones) and number (e.g. only the Z most recent ones).

(b) ViPs can be time-sensitive and thus disappear after a specific time period (see Fig. 2). Of course this can create new problems. For example the user will be no longer able to tell if a place has been visited after the ViPs have disappeared. This can be dealt with through a filtering mechanism that allows ViPs that have disappeared to be viewed.

(c) ViPs can be viewed by using a simplified representation. This option is quite useful for places that are overcrowded with ViPs (especially if these have different colours, shapes and sizes). They can be all represented, for example as small dots or cones, offering to the user a clearer view of the environment in combination with useful ViPs-related information (e.g. amount of ViPs present, areas of interest, paths). This option requires considerably less rendering resources and thus can improve the system's performance.

(d) The number of visible FootViPs can be reduced by using a smart elimination algorithm. For example intermediate ViPs in a more or less straight path can be eliminated, unless a HandViP or MarkerViP is attached to them (see the sketch after this list).

Fig. 6. Example of a visual user interface for searching for ViPs.
Fig. 7. Example of interacting with a Virtual Footprint.
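A minimal sketch of points (a) and (d) follows: a predicate-based filter over the parameters listed above, and a simple thinning pass that drops intermediate FootViPs lying on a nearly straight segment unless a HandViP or MarkerViP hangs off them. The field names, the angle threshold and the helper functions are assumptions for illustration, not the prototype's actual code.

    import math
    import time

    def filter_vips(vips, types=None, owners=None, newer_than_s=None, near=None, max_count=None):
        """Keep only the ViPs matching the requested type/creator/time/proximity/number."""
        now = time.time()
        out = [v for v in vips
               if (types is None or v["type"] in types)
               and (owners is None or v["owner"] in owners)
               and (newer_than_s is None or now - v["created_at"] <= newer_than_s)]
        if near is not None:                      # near = (reference_position, Y closest)
            ref, y = near
            out.sort(key=lambda v: math.dist(v["position"], ref))
            out = out[:y]
        if max_count is not None:                 # only the Z most recent ones
            out = sorted(out, key=lambda v: v["created_at"])[-max_count:]
        return out

    def thin_straight_runs(footvips, max_angle_deg=10.0):
        """Drop intermediate FootViPs on a nearly straight path, keeping annotated ones."""
        def turn_angle(a, b, c):
            v1 = [b[i] - a[i] for i in range(len(a))]
            v2 = [c[i] - b[i] for i in range(len(a))]
            dot = sum(x * y for x, y in zip(v1, v2))
            n1, n2 = math.hypot(*v1), math.hypot(*v2)
            if n1 == 0 or n2 == 0:
                return 0.0
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

        kept = [footvips[0]] if footvips else []
        for prev, cur, nxt in zip(footvips, footvips[1:], footvips[2:]):
            annotated = cur.get("related_handvip") or cur.get("related_markervip")
            if annotated or turn_angle(prev["position"], cur["position"], nxt["position"]) > max_angle_deg:
                kept.append(cur)
        if len(footvips) > 1:
            kept.append(footvips[-1])
        return kept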

    5.2. ViPs continuity and relation

The problem of continuity, also reported by Darken and Peterson (2002), arises when the user crosses paths while leaving footprints, since in this case it can become difficult to disambiguate which footprints belong to the same trail. In the proposed approach, this problem can be solved by displaying a connecting line between all FootViPs belonging to the same path (see Fig. 9).

Another problem that may arise, due to the fact that users can interact with objects located at different distances, concerns relation. This means that a FootViP released upon user interaction (representing the user's position and orientation at that time) can be positioned considerably far from the corresponding HandViP (see Section 3.2), or even out of sight. Thus, for a user coming across one of them (e.g. the HandViP), it may be difficult to locate the other (i.e. the FootViP) or infer the relation between the two. The same may also occur between a MarkerViP and its corresponding FootViP (see Section 3.3). This problem is also addressed through the use of connecting lines (preferably of a different style than the lines used for continuity).

Fig. 8. Example of interacting with Virtual Handprints (on the red ball).
Fig. 9. Connecting ViPs to visualize the user's path.
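As a sketch of how connecting lines could be derived from stored ViPs, the helper below groups FootViPs into per-path polylines (continuity) and pairs each HandViP or MarkerViP with its recorded FootViP (relation), returning the two sets separately so a renderer can draw them in different styles. The data layout and the path identifier are assumptions for this example.

    def build_connecting_lines(footvips, hand_or_marker_vips):
        """Return (path_polylines, relation_segments) for the two kinds of connecting lines."""
        # Continuity: one polyline per (owner, path_id) group, ordered by creation time.
        paths = {}
        for fv in sorted(footvips, key=lambda v: v["created_at"]):
            paths.setdefault((fv["owner"], fv.get("path_id")), []).append(fv["position"])
        path_polylines = list(paths.values())

        # Relation: a short segment from each HandViP/MarkerViP to its associated FootViP.
        relation_segments = [
            (v["position"], v["related_footvip"]["position"])
            for v in hand_or_marker_vips
            if v.get("related_footvip") is not None
        ]
        return path_polylines, relation_segments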

    5.3. Overlapping ViPs

When two or more adjacent ViPs overlap, apart from the aforementioned problem of continuity, visual and interaction ambiguity problems may also occur. Visual clarity may be lost when, for example two or more ViPs of similar colour or shape overlap, while interaction ambiguity occurs when the pointing device is concurrently intersecting more than one ViP and it is not clear which one the user wishes to interact with. This problem is usually solved through scaling ViPs up or down (i.e. through an inflate/deflate effect).

    5.4. Privacy and protection of personal data

Since ViPs can be considered as a mechanism that collects, and thus can also potentially expose, personal information, adequate policies should be adopted to protect the privacy of the participants of a VE, in accordance with established relevant guidelines and principles such as those included in the European Community Directive on Data Protection (95/46/EC; see http://www.acs.ohio-state.edu/units/law/swire1/psecind.htm) and the Privacy Guidelines by the Electronic Privacy Information Center (http://www.epic.org/privacy/internet/EPIC_NII_privacy.txt).

To this purpose, the ViPs mechanism collects only data that are relevant and legitimate for the purposes of its proper functioning. Furthermore, it allows each user to:

(a) View at any time the personal data that have been (and are) recorded and destroy any or all of them.

(b) Define what information will be made available to other users.

(c) Turn the recording mechanism on or off at any time, as well as select to manually place ViPs in the VE.

(d) Use the system anonymously.

(e) Grant or restrict access to own ViPs and related information.
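A minimal sketch of how these per-user options could be represented follows; the settings structure and the visibility check are illustrative assumptions rather than the mechanism's actual interface.

    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class PrivacySettings:
        """Per-user privacy options mirroring points (a)-(e) above."""
        recording_enabled: bool = True            # (c) recording can be switched off
        anonymous: bool = False                   # (d) hide the owner's identity
        shared_fields: Set[str] = field(default_factory=lambda: {"type", "created_at"})  # (b)
        allowed_viewers: Optional[Set[str]] = None  # (e) None means everyone may see the ViPs

    def visible_vip_data(vip, settings, viewer):
        """Return only the data the owner has agreed to expose to this viewer."""
        if settings.allowed_viewers is not None and viewer not in settings.allowed_viewers:
            return None                           # access restricted (e)
        data = {k: v for k, v in vip.items() if k in settings.shared_fields}
        if not settings.anonymous:
            data["owner"] = vip["owner"]
        return data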

    6. Uses of ViPs

ViPs were originally conceived as a means to mainly support navigation in a VE. For example leaving trails on a surface or in space allows the user to quickly refer back to them and easily reorient when the continuity of the current orientation tracking has been interrupted. Furthermore, ViPs, and especially MarkerViPs, can act as personal landmarks, thus allowing the user to find the way back to previously visited locations, or follow the tracks of other users in order to locate or follow them to a desired location. Additionally, through ViPs-based navigation facilities (e.g. Fig. 5) users can easily move in a VE following predefined paths, jumping to specific places/shortcuts, returning to a specific course, or employ alternative, non-traditional navigation techniques such as navigation by query (van Ballegooij and Eliens, 2001).

Beyond the aforementioned uses, ViPs can also be employed for several other purposes, such as:

(a) Locating other participants (e.g. friends, enemies, people the user wishes to meet or avoid) in multi-user and collaborative entertainment (gaming) environments (e.g. multi-user dungeon (MUD) games, chat worlds).

(b) Supporting social navigation (Munro et al., 1999), a concept based on the fact that when people are looking for information (e.g. directions, recommendations) they usually turn to other people rather than using formalized information artefacts. For example participants in a multi-user VE can easily identify popular places or options through the number of HandViPs or FootViPs on them.

(c) Creating tutorial sessions. A tutor can leave behind a number of ViPs that the students may follow, for example to learn a specific procedure step by step.

(d) Developing virtual tours. Visitors can tour virtual museums, exhibitions, stores, etc. by following the ViPs of virtual guides.

(e) Visualizing and tracking the path of moving objects, e.g. observing (or even predicting) the paths of friendly and enemy units in military command and control centre applications, or visualizing the orbit of planets and other celestial objects in virtual planetariums.

(f) Facilitating content annotation and marking/identifying (non) visited areas.

(g) Providing context-sensitive help with the use of MarkerViPs.

(h) Studying user navigation (e.g. Ieronutti et al., 2004) and interaction in 3D environments and supporting user-based evaluation of VEs (e.g. path analysis; replaying user actions; providing statistics related to distance travelled or least/most visited areas; finding unused or underused interactive elements, etc.).

(i) Performing measurements related to distance and time.

(j) Providing functions and concepts which are popular in conventional 2D user interfaces, such as shortcuts, bookmarks, undo/redo functions, versioning and collaborative review, marking/identifying (non) visited content, content annotation and highlighting.

    7. From concept formation to software implementation

A user-centred approach was followed for the development of a mechanism instantiating the ViPs concept. A cooperative design and evaluation process was adopted, with the participation of interaction designers, usability experts, developers and potential end-users. The development of the ViPs software mechanism included the following steps:

    7.1. Concept formation and prototyping

The proposed concept and existing relevant developments and literature were used to produce preliminary functional and interaction requirements for the ViPs mechanism software. Based on this description, the look and feel of the system was developed, first as a pencil and paper prototype. These sketches were converted to a digital (non-interactive) prototype using Sense8's WorldToolKit VR software for the creation of elaborate pictures of a 3D world, and subsequently Adobe Photoshop for overlaying interaction objects such as menus and pop-ups on the Virtual World (two examples of the concept development prototype are presented in Fig. 10a and b). Then, the digital prototype was presented to interaction designers, usability experts and potential end-users to obtain their preliminary opinion on the understandability, utility and usability of the concept and its software instantiation. This was accomplished through semi-structured interviews based on a small set of relevant key points. Furthermore, the prototype was discussed with VE developers to assess its technical feasibility.

    7.2. First interactive prototype and exploratory studies

Based on the outcomes of the preliminary study, a prototype VE equipped with a simplified ViPs mechanism was implemented, in order to study the concept of ViPs in practice. The software used to develop the prototype was Maverik for Linux, a publicly available VR toolkit developed by the University of Manchester (for more details, see http://aig.cs.man.ac.uk/maverik/).

Using this prototype, two separate exploratory studies were conducted, with the aim to collect qualitative data such as comments, ideas and opinions about the overall concept of ViPs and their potential usefulness and shortcomings, as well as about the adopted design and implementation approach of the ViPs mechanism. These data were very helpful not only towards validating in practice the usefulness of the concept but also for fixing problems and improving the overall usability of both the design and the implementation before proceeding into formal user testing.

Fig. 10. Two examples of the ViPs concept development prototype.

The pilot experiments were performed using a non-immersive version of the system with a 17-inch monitor. The interaction devices used included a standard keyboard and a mouse for user movement (the users could select their preferred device), while the mouse was also used for interacting with virtual objects and the ViPs. Users were able to turn left, right and move backward and forward, but not up and down (i.e. they were not able to take off from the virtual floor). For the needs of the experiments an outdoor maze-like VE was constructed, as well as some simple interactive 3D objects.

The first study was performed with two expert interaction designers/usability experts with the aim to present the concept and the design of ViPs and get comments about their value and usability, but also useful ideas for improving the design and identifying potential usability problems. The methods used included Heuristic Evaluation (Nielsen, 1994) and Cognitive Walkthrough (Wharton et al., 1994).

The second study included six participants, potential end-users of the system (4 males and 2 females), all experienced computer users, but with different levels of expertise in the use of VEs. Two of the users had experience in using immersive VEs, such as a CAVE or HMD-based system; two users were familiar with VRML-based worlds, 3D games and 3D chat applications; the remaining two users had no previous experience in using a VE. The method used was thinking aloud (Nielsen, 1993), where participants were allowed to express their thoughts, comments and feelings at any time during the test and also interact with the two observers that were present. The role of the observers was to prompt the users for comments but also to observe alternative ways of performing the task. Due to the qualitative nature of the study, user-observer interaction was highly encouraged, since user performance was not traced or evaluated. The conversations were recorded using a digital audio recorder. To support the evaluation process, a list of indicative tasks was used, that prompted participants to explore the functionality and facilities offered by the ViPs mechanism. After using the system, a small debriefing session was held, where users were asked about their overall impression of the interaction with the system and personal preferences, as well as for suggestions regarding potential improvement and modifications.

The overall impression of the participants with respect to the concept of ViPs and the pilot implementation was positive. All of them agreed that ViPs can be really useful in several cases, and that the overall metaphor that ViPs introduce in the context of moving and using a VE is very easy to grasp and utilize. Indisputably, the favourite part of the system was the option for personalized ViPs. All users spent considerable time browsing the relevant list of images to pick their favourite, and most of them changed it quite a few times while using the system, trying to find the one that they preferred or that they considered best-suited to their personal image. Furthermore, all participants contributed with a number of ideas and suggestions about the instantiation of alternative images. This fact comes as no surprise since the image of a user's ViPs is actually the user's representation in the virtual world and is often implicitly associated with character and personality traits.

The two interaction designers who participated in the experiment were mostly concerned with potential usability problems of the system. They contributed their views with respect to the problem of overlapping ViPs, and commented on potential interaction patterns, organization of the presented menus, and alternative parameters that could be employed for creating and viewing ViPs.

Participants who had previous experience with VEs faced no particular problems in using the system. Users of multi-user VEs expressed their concerns about privacy issues and the way these could be handled. Novice users mainly had difficulties in navigating and effectively using the input devices. Half of the users commented that there were too many suggested ViPs viewing and creation parameters, and that they would not be able to use them effectively without prior training.

Fig. 11. Approach followed for the sequential evaluation of the second ViPs prototype (adapted from Gabbard et al., 1999).

An unexpected result of the tests was that one of the participants used ViPs in an artistic and playful way that was not foreseen when the system was designed, to draw patterns on the ground the way people make sketches in the sand.

Overall, the findings of the exploratory studies confirmed the initial hypothesis that ViPs can constitute a handy tool and a useful navigation aid, but also a feedback and history mechanism. In addition, the support that ViPs can provide for collaborative environments and social navigation was considered significant and innovative. The studies also allowed us to identify potential usability problems of the initial design and missing functionality, to collect user preferences that could help in refining and improving the concept and the resulting system, as well as to identify areas where further experimentation and testing was needed.

    7.3. Second interactive prototype and sequential evaluation

The outcomes of the exploratory studies were used to develop a second version of the interactive prototype in which the usability and technical problems that were detected were corrected and further functionality was added (examples are illustrated in Figs. 3 and 7-9). For example a Navigation and a Display Console (Mourouzis et al., 2003) were introduced. To evaluate this second prototype, the most appropriate process and methods had to be identified. In this direction, a problem that had to be solved was the lack of widely used and validated VE evaluation processes and tools. Also, due to the experimental nature of the developed system, it was necessary to acquire as much input (both qualitative and quantitative) as possible, from the widest audience (experts and users). Thus, it was decided (Mourouzis et al., 2003) that the sequential approach, suggested by Gabbard et al. (1999), was the most suitable candidate, since it addresses both design and evaluation of VE user interfaces, combines several alternative techniques and provides multiple complementary data. This approach (see Fig. 11) involves a series of evaluation techniques that run in sequence, including Cognitive Walkthrough (Wharton et al., 1994), Heuristic Evaluation (Nielsen, 1994), and Formative and Summative evaluation (Scriven, 1967; Hix and Hartson, 1993). In general, a sequential evaluation uses both experts and users, produces both qualitative and quantitative results, and is application-centric.

Two different computer set-ups were used for the evaluation since it was necessary to test the concept and its effect in both non-immersive and immersive VR systems, and a non-immersive version was required to facilitate the cooperative evaluation of the system. The first set-up was an immersive VR system using a stereo HMD (Virtual Research V8, with 640 x 480 true VGA LCDs), while the second was a typical desktop system using a 17-inch monitor. Both versions were running on Dual Pentium III 1 GHz PCs with Linux Slackware 8.0 with a GeForce Ti4200 graphics card, using a conventional 2D mouse with three buttons for navigation and interaction.

The virtual world created for the study was a maze that included several corridors and rooms, populated with simple objects with which the user could interact. Although the test environment was a single-user system, in order to simulate and demonstrate the use of ViPs in multi-user environments a number of computer-driven avatars (simulating other users) were placed in it, leaving behind their own ViPs. In addition to this environment, a simple room (the 'warming up' room) equipped with interactive objects was also constructed for user familiarization with the system and its interaction facilities before the actual test.

The evaluation procedure started with an expert-based inspection of the ViPs Prototype by five assessors (three male, two female) with a rich background in usability engineering and interaction design of 2D applications. For this inspection, the evaluation team used VIEW-IT (Tromp and Nichols, 2003), an inspection method for assessing the usefulness of VEs in terms of utility and usability. The method's main instrument consists of forms that guide the assessor through the visual assessment of the interface. The assessors were first given a specific list of simple tasks to drive a cognitive walkthrough and were asked to describe and rate each usability issue they could identify. To define the severity ratings of usability problems, the following scale, suggested by the developers of the method, was used:


0 - I don't agree that this is a usability problem at all
1 - Cosmetic problem only: need not be fixed unless extra time is available on project
2 - Minor usability problem: fixing this should be given low priority
3 - Major usability problem: important to fix, so should be given high priority
4 - Usability catastrophe: imperative to fix this before users test the system

Then the assessors were allowed to use freely any of the immersive and non-immersive versions of the system in order to evaluate the interface and judge its compliance against 29 heuristics and general usability principles (using a five-point scale: Strongly Disagree, Disagree, Neutral, Agree and Strongly Agree), suggested by the developers of the method. These heuristics fall into three categories: presence (7 heuristics), user-friendliness (12 heuristics) and VR-induced sickness (10 heuristics).

Finally, the KJ-method, developed by Kawakita (1975) and described by Molich and Kindlund (2000), was adapted and employed for consensus building among the five individual assessors. The adapted consensus building process consisted of the following steps:

(i) Each participant wrote down each major observed problem, along with proposed solutions, on an index card.

(ii) All index cards were placed on a board.

(iii) All participants read each card silently and were allowed to add more problems or solutions to the problems described.

(iv) The problems were sorted by thematic area, and duplicates were eliminated.

(v) A name was agreed for each problem and each participant voted for the 10 most important problems, as well as for the best solutions available for each.

In this way, the problems and proposed design solutions were prioritized before moving into the development of a revised version of the prototype. In summary, throughout the evaluation process, all assessors worked concurrently and independently except, of course, during the consensus building activities. The entire procedure lasted about 6 h, including breaks.

Following the expert-based evaluation, the ViPs prototype underwent a period of refinement and code testing (three weeks) to facilitate conducting further formative evaluations involving real users. To this purpose, a Cooperative Evaluation (Wright and Monk, 1992) was conducted with twenty people (8 female, 12 male) who were asked to try out the proposed interface. First they were asked to freely explore a desktop version of the system and familiarize themselves with the concept and functionality of ViPs and then they were guided to perform and assess some simple tasks. This evaluation step was conducted by two evaluators who were free to ask the participants questions at any time. During the trials, assessor-participant discussions, user (re)actions and comments were recorded using digital audio and video. Debriefing interviews and pre and post hoc questionnaires were also used to collect more specific information on the participants and their experiences with the system. In order to assess the usability of the system, the Usability Questionnaire for VEs (Patel, 2001) was adapted and used.

Finally, aiming at transforming the produced qualitative data into quantitative ones, a slight variation of a method suggested by Marsh and Wright (1999) was employed. Further to this method, empirical evaluations were assessed on the basis of the quantity and quality of the users' think-aloud verbalizations: (a) the quantity was attained by counting all verbalizations, where statements related to the same issue were counted as a single utterance; (b) the quality of each of these was then judged by the evaluators (as low or high). Further to this result, all verbalizations with high importance were summarized and analysed in order to specify the design refinements required prior to proceeding to task-based user tests (summative evaluation).

A few days later (with a maximum interval of 4 days per user), all individuals who were present in the cooperative evaluation session were also asked to participate in a task-based usability evaluation using the HMD version of the system. In this final step, the users had to perform specific task scenarios, while task accomplishment, user errors, actions in the VE, physical reactions and comments were recorded by means of digital audio, digital video, and screen video capture. Once more, several questionnaires were used to measure a number of parameters. The participants' booklet for this test included: (a) pre-test questionnaires consisting of an Immersive Tendency Questionnaire (Witmer and Singer, 1998), a Stress Arousal Checklist (Gotts and Cox, 1988) and a Simulator Sickness Questionnaire (Kennedy et al., 1993); (b) a Short Symptom Checklist (Nichols et al., 2000a) that had to be filled in every 5 min during the test; and (c) post-test questionnaires consisting of a Presence Questionnaire (Witmer and Singer, 1998), a Stress Arousal Checklist, a Simulator Sickness Questionnaire, and the Short Symptom Checklist mentioned above (in order to detect changes in the user's behaviour and health induced by the ViPs prototype), the Usability Questionnaire for VEs mentioned above, an Enjoyment Questionnaire (Nichols et al., 1999), and a Post-immersion Assessment of Presence & Experience Questionnaire (Nichols et al., 2000b).

The significant differences between the cooperative evaluation and the user-based evaluation of the ViPs Prototype were that, for the latter, the participants: (a) were given specific tasks to perform; and (b) used the HMD version of the system. The steps that were followed in each trial with one participant (of a total time ranging from 65 to 80 min) are described below:

    Before the experiment, the assessors ensured that all thenecessary hardware was set up and working properly.

  • ARTICLE IN PRESSD. Grammenos et al. / Int. J. Human-Computer Studies 64 (2006) 221239 233

    The participant was briefly explained the evaluationmethodology adopted and then asked to complete thepre-test questionnaires (515min).

• The participant, after a short coffee break, was given a specific scenario (Scenario 1) based on the famous ancient Greek myth of the Minotaur and the Labyrinth of Knossos (5 min). The scenario included the tracking of other participants, marking areas, wayfinding and backtracking, interacting with simple 3D objects, and using ViPs as shortcuts, as a navigation aid, and as a feedback/history facility. As the users performed this scenario, the assessors' attention was mainly focused on the degree to which (and on the way in which) the participant was using the ViPs Mechanism in order to perform the task at hand.

• Then the participant was seated in front of the PC and helped by the assessor to put on the HMD helmet and make the necessary adjustments and configurations (about 3 min).

• The participant was given 15 min to perform the tasks of Scenario 1, with a short break after each 5 min of immersion in order to complete the Short Symptom Checklist.

• When the 15 min period was over, the participant was interrupted, asked to remove the HMD, and presented with the end of Scenario 1 on a standard display screen (mono), according to which the participant was taught by the craftsman Daedalus how to fly over the maze and observe the imprinted paths.

• The participant was then given a second scenario (Scenario 2), mainly aiming at assessing ViPs as an orientation aid for VEs (10–15 min including breaks). The participant had to navigate in the virtual space and attempt to shape with FootViPs two simple geometrical shapes, a square and an equilateral triangle. Each shape had to be created twice: with and without the FootViPs being visible. As the users performed this scenario, the assessors' attention was mainly focused on how well the participant was forming the shapes. A rating scale from 0 to 10 was used to score user achievement, where 0 corresponds to a completely wrong shape and 10 to a perfect shape.

• Again, the participant was given 15 min to perform the tasks of Scenario 2, with short breaks after every 5 min of immersion in order to complete the Short Symptom Checklist.

• When the 15 min period was over, the participant was interrupted and asked to remove the HMD. Then the assessor flew over the imprinted shapes in order to capture screenshots for the comparison of the shapes, in terms of size, quality, etc.

• Finally, a debriefing session took place and then the participant was asked to fill in a post-test questionnaire (around 10 min) and was thanked for his/her contribution.

In summary, the immersion time (using the HMD) per user was restricted to 30 min in total, with a 5 min break in between, after the first quarter of an hour. A total of 20 h of video were recorded, since two different views were captured for each participant: an internal view, i.e. what the user was seeing while interacting within the virtual world; and an external view, i.e. a third person's view of the user interacting with the system.

Overall, the three types of experiments had complementary objectives and facilitated: (a) the refinement of the concept; (b) the upgrading of the ViPs software mechanism behaviour and functionality; and (c) the collection of suggestions for improvement of the user interface. In short, the results of these studies were (see also Mourouzis et al., 2004):

• Expert-based review: In this study, further to the results of the heuristic evaluation, the level of presence of the system, as measured by the VIEW-IT tool, was found to be moderate (average 2.15, where 0 refers to 'unacceptable' and 4 to 'ideal'). It was suggested that this could be improved by increasing the level of detail and the quality of the display (e.g. using an HMD of higher resolution), using visual and auditory feedback, and increasing and stabilizing the system response rate. On the other hand, the level of VR-induced sickness was low (average 2.8). According to the assessors, this could possibly be lowered even more by keeping active user movement to a minimum (i.e. minimizing the need for movement while interacting with the ViPs interface). Finally, the overall user-friendliness of the system was also found to be moderate (average 2.5), and a number of suggestions were produced for improving it, including providing adequate and consistent feedback, as well as cues to permissible interactions, introducing error messages and error management, and improving the quality of the graphics. The evaluators expressed the opinion that the functionality and support provided by the ViPs mechanism have the potential to improve the usability of a VE, both in terms of ease of learning and user satisfaction, and in terms of efficiency, memorability, and error rates. Additionally, as a result of the cognitive walkthrough and the consensus building process, the assessors identified a number of design issues and proposed adequate solutions. These issues were mainly related to the usability of the user interface of the ViPs Mechanism, and especially to the menus and consoles. Furthermore, several additions regarding the functionality of the ViPs Mechanism emerged from this experiment. For instance, a Query Console was proposed to allow, among other things, navigation by query (van Ballegooij and Eliens, 2001).

• Co-operative evaluation: This step produced massive amounts of evaluation data. A considerable number of conclusions were derived from the analysis of the post-evaluation questionnaires used in this step, but the majority of the results emerged from the study of the videos (assessor–participant discussions, etc.) capturing users' verbalizations about ViPs.


Table 1
Overview of the average (and median) verbalizations made per participant during the cooperative evaluation

Average (median) values for the 20 participants' sample:

  Study time: 33:09 min (34:12)

  Verbalizations per participant, total number: 106.11 (90)
    A. Think-aloud verbalizations: 54.74 (45)
    B. User-to-assessor questions: 25.58 (22)
    C. Assessor-to-user questions: 25.79 (25)

  Usability-related verbalizations, total number: 11.05 (12)
    1. Learnability: 3.03 (3)
    2. Efficiency: 3.50 (4)
    3. Memorability: 1.03 (1)
    4. Errors: 0.00 (0)
    5. User satisfaction: 3.50 (3)
    Percentage over total verbalizations: 10.4% (12.8%)

Fig. 12. Statistical overview of the overall (left side) and usability-related (right side) verbalizations made during the cooperative evaluation (based on the average values presented in Table 1). [Figure not reproduced. Overall verbalizations: think-aloud 51.6%, assessor-to-user questions 24.3%, user-to-assessor questions 24.1%. Usability-related verbalizations: efficiency 31.63%, user satisfaction 31.63%, learnability 27.42%, memorability 9.33%, errors 0.00%.]

Table 2
Level of confidence in the quality judgments: confusion matrix

                         Evaluator 1
                     Low    High   Total
  Evaluator 2  Low   116    10     126
               High  10     92     102
               Total 126    102    228


In general, further to the analysis of the sample data collected through questionnaires, most participants agreed that ViPs are easy to learn and use and can be a handy tool for navigation, wayfinding and annotation. All participants enjoyed using the ViPs mechanism, but they could not clearly identify the degree to which ViPs can support orientation; nonetheless, the analysis of the videos showed that ViPs actually provided considerable help in this direction. In total, more than 2100 verbalizations were recorded. About 220 of them were related to the usability of the system, out of which 100 were identified by both assessors as having high value for improving the design and use of ViPs. Table 1 provides an overview of the number of recorded and analysed verbalizations per participant, a statistical analysis of which is illustrated graphically in Fig. 12. Finally, following the aforementioned method suggested by Marsh and Wright (1999), the two evaluators involved in this step were asked to judge the quality (high or low) of the identified usability problems. The confusion matrix of the level of confidence in the quality judgments is shown in Table 2. Using Cohen's Kappa (K), the results of the confusion matrix yield an agreement of 82%, which according to this measure corresponds to an excellent degree of inter-rater reliability for the results (Robson, 1995).
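For reference, the reported Kappa value can be recomputed directly from the counts in Table 2; the following is a worked calculation, where $p_o$ is the observed agreement and $p_e$ the agreement expected by chance from the row and column totals:

\[
p_o = \frac{116 + 92}{228} \approx 0.912, \qquad
p_e = \left(\frac{126}{228}\right)^2 + \left(\frac{102}{228}\right)^2 \approx 0.506, \qquad
\kappa = \frac{p_o - p_e}{1 - p_e} \approx 0.82
\]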

• User-based assessment: This last step reinforced the study's hypotheses and validated the findings of the two previous steps, since its results were consistent with them. Almost all the participants stated that they found the scenarios they had to perform easy to learn and remember, but also motivating and entertaining. The major problems reported were related to the hardware that was used (e.g. graphics quality, mouse navigation, system response time) and not to the ViPs mechanism. This step provided a complementary review of the ViPs prototype in practice, through a detailed analysis and grouping of user performance, errors and actions that helped in identifying common usability problems and patterns of use. In this direction, quantifiable usability measurements included: the time it took users to complete each scenario's subtasks with and without using ViPs; the number of times that ViP-related functionality and features were utilized by a user (e.g. representation options, information sheets, menus, Navigation and Display Console); and the proportion of users that used ViPs effectively (for this particular measure, users were provided with hints and examples of good use for ViPs). Overall, most of the ViPs features were efficiently exploited by the participants (see Table 3), except the Navigation Console, the use of which resulted in some user errors and confusion.


Table 3
Exploitation and use of ViPs features
(For each hint of good use: the rate of users who attempted to follow the hint, the rate of those who succeeded, and related comments.)

1. "You may backtrack using old footprints". Attempted: 100.00%; succeeded: 90.00%. Some users were lost in cases where the virtual floor was overloaded with their ViPs (crossways). Some users managed to deal with this using the connection lines feature, while some others used the Navigation Console for semi-automatic travelling.

2. "You may travel using existing footprints (e.g. by employing the Navigation Console)". Attempted: 65.00%; succeeded: 46.15%. A lot of users got confused with the use of the Navigation Console. Although they reported that both the purpose and functionality of this module were quite clear to them, due to usability problems they ended up preferring a non-automated approach for accomplishing related tasks.

3. "You could use ViPs as shortcuts within the virtual world". Attempted: 20.00%; succeeded: 75.00%. All users reported that after some further modifications this feature can be very helpful (mainly to expert users). A user got very confused as a result of trying this hint while the option of creating ViPs automatically was turned on.

4. "You could use Virtual Markers to highlight visited or important areas". Attempted: 70.00%; succeeded: 50.00%. A significant percentage of the participants created at least one Virtual Marker during the tests, but only half of them used the Virtual Markers for some practical reason in the context of the related scenarios.

5. "You may improve system performance by hiding ViPs and/or connection lines". Attempted: 65.00%; succeeded: 69.23%. Some users were confused due to the numerous related available options (e.g. hiding only certain types of ViPs, only their own, those of a specific user, of all users, etc.).

6. "You may improve system performance by switching from full to simple representation (dots)". Attempted: 5.00%; succeeded: 100.00%. Only one user followed this hint. This user experimented with almost all the provided functionality.

7. "You may improve system performance by making footprints (that are not really useful) disappear more quickly". Attempted: 10.00%; succeeded: 100.00%. Out of the 20 participants, only two changed the default disappearance speed of ViPs. During the debriefing, the rest of the participants mentioned that they did not feel at any point that they needed to change the default speed.

Table 4
Evaluation results stemming from Scenario 2: subtasks assessing ViPs as an orientation aid (average performance scores on the 0-10 scale)

  Shape a square: 6.3 with ViPs visible, 5.1 with ViPs hidden (average performance increase when using ViPs: 12%)
  Shape an equilateral triangle: 6.2 with ViPs visible, 4.8 with ViPs hidden (average performance increase when using ViPs: 14%)


These errors can be mainly attributed to the limited related functionality offered by the prototype in comparison to the full functionality foreseen by the original design. Additionally, evaluation results stemming from Scenario 2 (Table 4) indicate that ViPs can support user orientation, since their presence improved user performance in the related tasks. Furthermore, an analysis of the results of the Immersive Tendency and Presence questionnaires, the Stress Arousal Checklist and the Simulator Sickness Questionnaire helped in identifying cases of confounding effects. Participants also completed a post-test Enjoyment Questionnaire, according to which most of them reported feeling highly motivated, and sometimes even happy and excited. Results stemming from the Usability Questionnaire are provided in Table 5 (scores are measured on a scale from 0% to 100%, where 0% corresponds to 'totally disagree', 25% to 'disagree', 50% to 'neutral', 75% to 'agree', and 100% to 'strongly agree'). As can be observed, the results related to learnability, user satisfaction and efficiency are aligned to a considerable degree with the corresponding results presented in Table 1.

    Summarizing the evaluations overall:

• Expert-based Review was very efficient and cost-effective, as it quickly captured all major usability problems.


Table 5
Overview of user-based assessment results
Average scores, where 0% corresponds to 'totally disagree', 25% to 'disagree', 50% to 'neutral', 75% to 'agree', and 100% to 'strongly agree'.

The implemented ViPs Mechanism ...
  ... is easy to learn: 73.8%
  ... can increase user satisfaction: 72.5%
  ... supports wayfinding: 78.8%
  ... supports orientation: 68.8%
  ... supports navigation: 72.5%
  ... is a handy tool for annotations and for highlighting VE content: 70%
  ... improves overall task performance in VEs: 72.5%
  ... is overall a handy tool for VEs: 70%


A basic advantage of this method was that, since the assessors were experts, they could: (a) work with a less elaborate version of the prototype; (b) envisage potential usability problems, even for functionality or parts of the system that were not yet implemented; and (c) indicate not only problems but also suggestions for solutions. Also, this method helped in finding tasks and parts of the system that should be tested with the users.

• Co-operative Evaluation and User-based Assessment were very resource-demanding, both for conducting the tests and during the analysis of the data. Questionnaires were very efficient at providing an overall insight into the usability of the system, but in order to identify interaction problems, their origin and the context in which they appeared, a series of questions was required throughout the cooperative evaluation, along with a systematic review of the user-testing videos.

• When conducting a Cooperative Evaluation, both Thinking Aloud and Question and Answer protocols should be employed, since the results of our studies have shown that these two techniques produce different kinds of data that have very little correlation.

• Due to the particular characteristics of the system, which aimed to test an innovative concept using a novel technology (i.e. VR), a single method would not necessarily produce valid results, since:
  ○ in the case of Expert Review, experts might not be able to safely predict the level of usability and potential user problems, due to lack of accumulated knowledge;
  ○ Co-operative Evaluation might not allow tracking of important issues related to real, uncontrolled use of the system, such as unpredicted behaviours, usage barriers, problems and improvised solutions;
  ○ if User Testing were used alone, participants might not be able to interact with the system or know how to express the difficulties they encountered without the aid of an experienced assessor.

    7.4. Lessons learnt

After a long period of experimentation and testing, the concept and implementation of ViPs have been substantially re-elaborated and have undergone several enhancements. Our consolidated experience from the evaluation sessions can be summarized as follows:

ViPs provide an instant sense of involvement and empowerment: One of the most prominent qualities of ViPs is that they make users feel they can affect the VE, since even their simplest actions (e.g. movement) have a direct effect on it. This fact creates a higher sense of involvement and achievement, which contributes greatly to a positive user attitude towards the VE and a will to further explore and experiment with it. Furthermore, ViPs, in their simplest form (i.e. when just the very basic interaction options are offered), are very intuitive to grasp and use, even for young children and inexperienced computer users.

There is a need for task-based support tools: Initially, all ViPs-related functionality was offered through a number of context-sensitive menus. This approach was adequate for simple interactions (e.g. retrieving information about a ViP, or identifying its owner), but in order to take full advantage of the capabilities of the ViPs mechanism, tools for supporting specific tasks (e.g. navigation, display, search) must be built on top of it. In our implementation, such tools were created in the form of consoles (e.g. Figs. 4–6) floating in front of the user's viewpoint.

The position of support tools should be user-adjustable: When active, the support tools mentioned in the previous paragraph are located inside the user's viewpoint. Thus, the user should be able to move (and rotate) the tools around, so that they do not obstruct viewing and reside in a position that allows comfortable interaction. Such adjustments should be remembered and automatically applied whenever the tool is used, but the user should also be able to reset the initial settings and temporarily hide or minimize the tool.

Eye gaze vs. feet orientation: At the time of their release, ViPs store information about the user's virtual body position and orientation. However, the user's gaze orientation may often be totally different. Consequently, someone may miss important information when following these tracks. For example, in a virtual museum tour the user may be required to look up at a specific position in order to see an item hanging from the ceiling. Thus, future versions of the system will need to consider storing both body and eye-gaze orientation information, as well as using and visually representing them accordingly.
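To illustrate what storing both orientations might look like, the sketch below extends a footprint record with a separate gaze orientation; the field names and the quaternion representation are our own assumptions, not part of the prototype.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Tuple

Vector3 = Tuple[float, float, float]
Quaternion = Tuple[float, float, float, float]   # (w, x, y, z)

@dataclass
class FootViP:
    """A single footprint record, extended with eye-gaze data (illustrative)."""
    owner: str
    position: Vector3                 # where the virtual body stood
    body_orientation: Quaternion      # heading of the virtual body/feet
    gaze_orientation: Quaternion      # where the user was actually looking
    created_at: datetime = field(default_factory=datetime.utcnow)

A viewer replaying such a track could then render the footprint itself from body_orientation and, for instance, an additional arrow or cone from gaze_orientation, so that someone following the trail knows to look up at the exhibit in the museum example above.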

ViPs grouping in Virtual Paths: Practice has shown that the visualization of ViPs paths (see Fig. 9) is very useful. A related problem is that, after some time, paths become very long and complex. A potential solution is the creation of sub-paths (Virtual Paths). For example, a user may define the beginning and end of a path, also attaching semantic information which can be used for future reference and identification (e.g. 'route to place X').


Smart ViPs creation and visual representation: Automatically releasing FootViPs at specific time or space intervals is not always efficient and effective. It is more appropriate to use an intelligent algorithm which combines several parameters (e.g. space, time, change of direction, speed, ViPs concentration). These parameters can also be used by a ViPs-displaying algorithm in order to filter the available ViPs and render only a subset of them (e.g. a fixed number or a percentage).
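One possible reading of such a heuristic is sketched below; the thresholds, parameter names and weighting are purely illustrative assumptions, not values used in the prototype.

import math

def should_drop_footprint(prev, current, nearby_count,
                          min_dist=1.5, min_interval=2.0,
                          min_turn_deg=30.0, max_local_density=10):
    """Decide whether to release a new FootViP by combining space, time,
    change of direction and local ViPs concentration (illustrative heuristic)."""
    dx = current["pos"][0] - prev["pos"][0]
    dz = current["pos"][2] - prev["pos"][2]
    moved = math.hypot(dx, dz)                      # horizontal distance travelled
    elapsed = current["time"] - prev["time"]        # seconds since last footprint
    turned = abs(current["heading_deg"] - prev["heading_deg"]) % 360
    turned = min(turned, 360 - turned)              # smallest turning angle

    if nearby_count >= max_local_density:
        return False                                # area already crowded with ViPs
    if turned >= min_turn_deg:
        return True                                 # sharp turns are informative landmarks
    return moved >= min_dist and elapsed >= min_interval

The same inputs (distance, age, turning angle, local density) could also feed the display-side filter, e.g. by ranking the stored ViPs and rendering only the top-scoring fixed number or percentage.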

Interaction complexity vs. customization and control: There are several ViPs attributes that the user may potentially like to change, and at different levels. For example, users can alter one or more properties (e.g. the appearance) of all ViPs, of a subset of them, or of just a specific one. The problem is that the more power and options are provided, the more complex the interaction process becomes. There is no universal solution for this trade-off, since different users in different applications and contexts of use require diverse levels of control. The most appropriate approach is to provide alternative layers of interaction complexity.

Text input should also be supported: ViPs are loaded with different types of semantic information (temporal, spatial, owner data, etc.). In order for the user to be able to effectively manage and use this information, beyond direct manipulation, a method for text input is also required (e.g. a physical or virtual keyboard, or speech recognition). For example, the user may wish to tag specific ViPs with keywords, or group/retrieve ViPs sharing some common characteristics.
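As a small illustration of keyword tagging and retrieval, the sketch below indexes ViPs by user-entered keywords; the class and method names are illustrative assumptions rather than part of the implemented mechanism.

from collections import defaultdict

class ViPIndex:
    """Illustrative keyword index over ViPs: tag marks and retrieve them later."""
    def __init__(self):
        self._by_tag = defaultdict(set)

    def tag(self, vip_id, *keywords):
        for kw in keywords:
            self._by_tag[kw.lower()].add(vip_id)

    def find(self, keyword):
        return sorted(self._by_tag.get(keyword.lower(), set()))

# Keywords could come from a physical/virtual keyboard or speech recognition.
index = ViPIndex()
index.tag("marker-17", "Minotaur", "entrance")
index.tag("marker-42", "entrance")
print(index.find("entrance"))   # ['marker-17', 'marker-42']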

Context information can also be provided when a ViP is selected: When a ViP is selected, the only information currently provided concerns its creation date/time and its owner. Depending on the application characteristics and user preferences, information about the ViP's context can also be presented, for example the length of the path that the ViP belongs to and its relative position in it, interactions/markers that are situated along the way, or how many (and which) other users have followed the path.
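A selection handler along these lines might assemble such context on demand; the sketch below uses plain dictionaries and invented field names purely for illustration.

def describe_vip(vip, path):
    """Build a context summary for a selected ViP (illustrative only)."""
    idx = path["vips"].index(vip["id"])
    return {
        "owner": vip["owner"],
        "created_at": vip["created_at"],
        "path_length": len(path["vips"]),
        "position_in_path": idx + 1,               # 1-based position along the path
        "markers_along_path": path["markers"],     # e.g. labels of Virtual Markers
        "followed_by": sorted(path["followers"]),  # other users who walked this path
    }

vip = {"id": "fp-07", "owner": "alex", "created_at": "2004-06-22T10:15"}
path = {"vips": ["fp-01", "fp-07", "fp-12"], "markers": ["route to place X"],
        "followers": {"dimitris", "maria"}}
print(describe_vip(vip, path))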

    8. Conclusions and future work

The concept and software implementation of ViPs was systematically studied and elaborated in the context of the VIEW of the Future project (see the Acknowledgements section), contributing to one of the project's goals, which was to develop new interaction, navigation and manipulation concepts for VE applications. During the project, a comprehensive User Requirements Document was compiled (Basso et al., 2002), presenting typical usage scenarios and related user needs of real VE applications for the industrial partners. These needs were analysed and grouped, resulting in a list of 93 distinct common requirements, which were used to drive all the project's development activities. In this context, the ViPs mechanism was designed to fully, or partially, meet 32 of the requirements (mostly related to navigation and interaction), thus having the potential to support a substantial number of related application tasks and scenarios.

In the past few years, ViPs have been presented to several audiences of diverse ages and cultural and educational backgrounds, on occasions ranging from scientific conferences to in-house demos. Interestingly, two major observations recurred each and every time. First, nobody ever questioned the utility or intuitiveness of ViPs; secondly, almost everybody seemed to have a personal suggestion for a new potential use of ViPs. The latter observation could simply be interpreted as a fact illustrating that ViPs can have a rather positive effect on people's imagination, but it might also be an indication that they are a far more powerful concept than was initially considered.

In addition to these informal observations, several formal evaluation sessions employing various methods (expert-based review, cooperative evaluation, user-based studies) have been conducted, on the one hand to further study the concept in terms of intuitiveness and usefulness, and on the other hand to assess the usability of the related prototype software.

In general, the findings of the conducted studies reinforced our hypothesis that ViPs are a powerful concept, while the related software instantiation has proved easy to learn and use, as well as a handy navigation support tool. Additionally, these studies provide strong evidence that a fully functional ViPs mechanism can significantly increase the usability of VEs. A by-product of our experiments was the formation of a corpus of ViPs-related guidelines covering implementation, visualization and interaction issues.

A considerable advantage of ViPs is that they can be used in any VE, in combination with any other existing navigation support approach, since they do not require any alterations of the virtual space and they are not attached to a specific input interface metaphor or device. Furthermore, the fact that ViPs have real-life counterparts with which humans are very familiar renders them an intuitive and easy-to-use metaphor.

Future work will seek to further develop the software mechanism, integrate it into existing VE systems in diverse application domains, and assess its impact on the usability of such environments. Furthermore, it is planned to contribute to the evolving research domain concerned with the evaluation of VEs by testing and evolving related structured processes. Finally, since the ViPs concept is also directly applicable to Augmented Reality, it is planned to experiment with its use in conjunction with relevant technologies.

    Acknowledgments

Part of the work reported in this paper has been carried out in the context of the VIEW of the Future (IST-2000-26089) project, funded by the European Commission in the framework of the Information Society Technologies (IST) Programme. The authors would also like to thank M. Filou and P. Papadakos for developing the interactive software prototypes.


    References

Arrington, M.E., Staples, S., 2000. 3D Visualisation & Simulation. Jon Peddie Associates, CA, USA.
Basso, V., Enrico, G., Liliana, R., Domenico, T., Davide, L., Andrina, G., Saluaar, D., Letourneur, S., Lorrison, J., Greif, P., Bullinger, A., Stefani, O., Hoffman, H., 2002. User requirements document. Internal deliverable of the VIEW of the Future Project, Contract No. IST-2000-26089.
Beavan, C., 2001. Finger Prints: The Origins of Crime Detection and the Murder Case That Launched Forensic Science. Hyperion.
Bowman, D.A., Gabbard, J., Hix, D., 2001. Usability evaluation in virtual environments: classification and comparison of methods. Technical Report TR-01-17, Computer Science, Virginia Tech.
Charitos, D., 1997. Designing space in virtual environments for aiding wayfinding behaviour. In: Electronic Proceedings of the Fourth UK VR-SIG Conference, 1997. Available online at: http://www.brunel.ac.uk/faculty/tech/systems/groups/vvr/vrsig97/proc.htm
Chen, J., 2003. Effective interaction techniques in information-rich virtual environments. In: Proceedings of Young VR 2003. Available online at: http://csgrad.cs.vt.edu/~jichen8/publications/YVR-2003/finalversion/YVR-JianChen.pdf
Crosier, J., Nichols, S., Stedmon, A., Patel, H., Hoffmann, H., Deisinger, J., Ronkko, J., Grammenos, D., Protogeros, Z., Laukkanen, S., Amditis, A., Panou, M., Basso, V., Saluaar, D., Letourneur, S., Bullinger, A., Mager, R., D'Cruz, M., Wilson, J., 2000. D1.1: State of the art and market analysis. Public deliverable of the VIEW of the Future Project, Contract No. IST-2000-26089. Available online at: http://www.view.iao.fraunhofer.de/pdf/D1_1.pdf
CYBEREDGE, 2001. The Market for Visualisation/Virtual Reality Systems, fourth ed. Cyberedge Information Services. ISBN 1-929696-05-1.
Darken, R.P., Goerger, S.R., 1999. The transfer of strategies from virtual to real environments: an explanation for performance differences? Proceedings of Virtual Worlds and Simulations 1, 159–164.
Darken, R.P., Peterson, B., 2002. Spatial orientation, wayfinding and representation. In: Stanney, K.M. (Ed.), Handbook of Virtual Environment Technology. Lawrence Erlbaum Associates, Inc., Chapter 28. Available online at: http://vehand.engr.ucf.edu/handbook/Chapters/Chapter28/Chapter28.html
Darken, R.P., Sibert, J.L., 1993. A toolset for navigation in virtual environments. In: Proceedings of ACM User Interface Software & Technology, pp. 157–165.
Darken, R.P., Sibert, J.L., 1996. Navigating in large virtual worlds. The International Journal of Human-Computer Interaction 8 (1), 49–72.
Darken, R.P., Cockayne, R., Carmein, D., 1997. The omni-directional treadmill: a locomotion device for virtual worlds. In: Proceedings of UIST '97, pp. 213–221.
Delaney, B., 1999. The Market for Visual Simulation/Virtual Reality Systems. CyberEdge Information Services, CA, USA.
Edwards, J., Hand, C., 1997. MaPS: movement and planning support for navigation in an immersive VRML browser. In: Proceedings of the Second Symposium on the Virtual Reality Modeling Language (VRML '97), ACM, New York, pp. 65–73.
Ellis, S.R., Smith, S.R., Grunwald, A.J., McGreevy, M.W., 1991. Direction judgment error in computer generated displays and actual scenes. In: Ellis, S. (Ed.), Pictorial Communication in Virtual and Real Environments. Taylor and Francis, London.
Elvins, T.T., Nadeau, D.R., Kirsh, D., 1997. Worldlets: 3D thumbnails for wayfinding in virtual environments. In: Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, pp. 21–30.
Gabbard, J.L., Hix, D., Swan, E.J., 1999. User centered design and evaluation of virtual environments. IEEE Computer Graphics and Applications 19 (6), 51–59.
Gotts, G., Cox, T., 1988. Stress and Arousal Checklist: A Manual for its Administration, Scoring and Interpretation. Swinburne Press.
Grammenos, D., Filou, M., Papadakos, P., Stephanidis, C., 2002. Virtual prints: leaving trails in virtual environments. In: Proceedings of the Eighth Eurographics Workshop on Virtual Environments, Barcelona, Spain, 30–31 May.
Hix, D., Hartson, H.R., 1993. Developing User Interfaces: Ensuring Usability through Product & Process. Wiley, New York.
Hunt, E., Waller, D., 1999. Orientation and wayfinding: a review. ONR Technical Report N00014-96-0380, Office of Naval Research, Arlington, VA.
Ieronutti, L., Ranon, R., Chittaro, L., 2004. Automatic derivation of electronic maps from X3D/VRML worlds. In: Proceedings of Web3D 2004: Ninth International Conference on 3D Web Technology, ACM Press, New York, April 2004, pp. 61–70.
Iwata, H., Yoshida, Y., 1999. Path reproduction tests using a Torus Treadmill. Presence 8, 587–597.
Kawakita, J., 1975. The KJ method: a scientific approach to problem solving. Technical Report, Kawakita Research Institute, Tokyo.
Kearney, J., 1999. Tracking: A Blueprint for Learning How. Pathways Press.
Kennedy, R.S., Lane, N.E., Berbaum, K.S., Lilienthal, M.G., 1993. Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. The International Journal of Aviation Psychology 3 (3), 203–220.
Marsh, T., Wright, P., 1999. Co-operative evaluation of a desktop virtual reality system. In: Proceedings of the Workshop on User-Centred Design and Implementation of Virtual Environments, pp. 99–108.
McGovern, D.E., 1993. Experience and results in teleoperation of land vehicles. In: Ellis, S. (Ed.), Pictorial Communication in Virtual and Real Environments. Taylor and Francis, London, pp. 182–195.
Molich, R., Kindlund, E., 2000. Improving your skills in usability testing. CHI 2000 Tutorial.
Mourouzis, A., Grammenos, D., Filou, M., Papadakos, P., Stephanidis, C., 2003. Virtual prints: an empowering tool for virtual environments. In: Harris, D., Duffy, V., Smith, M., Stephanidis, C. (Eds.), Human-Centred Computing: Cognitive, Social and Ergonomic Aspects, Volume 3 of the Proceedings of the 10th International Conference on Human-Computer Interaction (HCI International 2003), Crete, Greece, 22–27 June. Lawrence Erlbaum Associates, London, pp. 1426–1430.
Mourouzis, A., Grammenos, D., Filou, M., Papadakos, P., Stephanidis, C., 2004. Case study: sequential evaluation of the virtual prints concept and pilot implementation. In: Proceedings of the Workshop on Virtual Reality Design and Evaluation, Nottingham, UK, 22–23 January.
Munro, A.J., Höök, K., Benyon, D.R., 1999. Footprints in the snow. In: Munro, A.J., Höök, K., Benyon, D.R. (Eds.), Social Navigation of Information Space. Springer, London. Available online at: http://www.sics.se/~kia/papers/IntroFINALform.pdf
Nichols, S., 1999. Virtual Reality Induced Symptoms and Effects (VRISE): Methodological and theoretical issues. Ph.D. Thesis, University of Nottingham.
Nichols, S.C., Haldane, C., Wilson, J.R., 2000a. Measurement of presence and its consequences in virtual environments. International Journal of Human-Computer Studies 52, 471–491.
Nichols, S.C., Ramsey, A.D., Cobb, S., Neale, H., D'Cruz, M., Wilson, J.R., 2000b. Incidence of Virtual Reality Induced Symptoms and Effects (VRISE) in desktop and projection screen display systems. HSE Contract Research Report 274/2000.
Nielsen, J., 1993. Usability Engineering. Academic Press Inc., New York.
Nielsen, J., 1994. Heuristic evaluation. In: Nielsen, J., Mack, R. (Eds.), Usability Inspection Methods. Wiley, New York, pp. 25–62.
Patel, H., 2001. Internal report on developing presence and usability questionnaires for virtual environments. VIRART Technical Report VIRART/2001/101HP01.
Peterson, B., Wells, M., Furness III, T.A., Hunt, E., 1998. The effects of the interface on navigation in virtual environments. In: Human Factors and Ergonomics Society 1998 Annual Meeting.



Razzaque, S., Kohn, Z., Whitton, M., 2001. Redirected walking. In: Eurographics 2001 Proceedings. Available online at: http://www.cs.unc.edu/eve/pubs.html
Robson, C., 1995. Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Blackwell, Oxford.
Satalich, G.A., 1995. Navigation and wayfinding in virtual reality: finding the proper tools and cues to enhance navigational awareness. MS Thesis, University of Washington.
Scriven, M., 1967. The methodology of evaluation. In: Stake, R.E. (Ed.), Perspectives of Curriculum Evaluation. American Educational Research Association Monograph, Rand McNally, Chicago.
Stoakley, R., Conway, M.J., Pausch, R., 1995. Virtual reality on a WIM: interactive worlds in miniature. In: Proceedings of the ACM CHI '95 Conference on Human Factors in Computing Systems, pp. 265–272.
Stoev, S., Schmalstieg, D., Straßer, W., 2001. Through-the-lens techniques for remote object manipulation, motion, and navigation in virtual environments. In: Proceedings of the Joint Immersive Projection Technology/EUROGRAPHICS Workshop on Virtual Environments (IPT/EGVE 2001), Springer, New York, pp. 51–60.
Tattersall, I., 1995. The Fossil Trail: How We Know What We Think We Know About Human Evolution. Oxford University Press, Oxford.
Templeman, J.N., Denbrook, P.S., Sibert, L.E., 1999. Virtual locomotion: walking in place through virtual environments. Presence: Teleoperators and Virtual Environments 8 (6), 598–617.
Tromp, J., Nichols, S., 2003. VIEW-IT: a VR/CAD inspection tool for use in industry. In: Proceedings of the Human-Computer Interaction International Conference 2003 (HCII 2003), Crete, Greece, 22–27 June 2003.
van Ballegooij, A., Eliens, A., 2001. Navigat