Interacting with virtual environments: an evaluation of a model of interaction



Kulwinder Kaur*, Neil Maiden, Alistair Sutcliffe

Centre for HCI Design, City University, Northampton Square, London EC1V 0HB, UK

    Abstract

There is a need for interface design guidance for virtual environments, in order to avoid common usability problems. To develop such guidance an understanding of user interaction is required. Theoretical models of interaction with virtual environments are proposed, which consist of stages of interaction for task/goal oriented, exploratory and reactive modes of behaviour. The models have been evaluated through user studies and results show the models to be reasonably complete in their predictions about modes and stages of interaction. Particular stages were found to be more predominant than others. The models were shown to be less accurate about the exact flow of interaction between stages. Whilst the general organisation of stages in the models remained the same, stages were often skipped and there was backtracking to previous stages. Results have been used to refine the theoretical models for use in informing interface design guidance for virtual environments. © 1999 Elsevier Science B.V. All rights reserved.

    Keywords: Virtual environments; Interaction modelling; Usability

    1. Introduction

Virtual Environments (VEs) are three-dimensional, computer simulated environments which are rendered in real time according to the behaviour of the user [16]. VEs differ in important ways from conventional interfaces, offering new possibilities and bringing new challenges to human-computer interface design. Compared with direct manipulation (DM) interfaces, VEs are structured as 3D graphical models with only a sub-section of the model presented through the interface at any one time, whereas DM systems provide a 2D presentation area which continually presents objects of interest [25]. In VEs, the spatial structure of the model remains fairly static and the user navigates around the model to locate objects of interest.

Interacting with Computers 11 (1999) 403–426

0953-5438/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0953-5438(98)00059-9

* Corresponding author. Tel.: +44-171-477-8427; fax: +44-171-477-8859. E-mail addresses: k.kaur@city.ac.uk (K. Kaur), n.a.m.maiden@city.ac.uk (N. Maiden),

    a.g.sutcliffe@city.ac.uk (A. Sutcliffe)

VEs are significantly more difficult to design and use than 2D interfaces [9] and there is a need for better-designed VE systems [2] that support perception, navigation, exploration and engagement [29]. Significant usability problems exist with current VEs. In an evaluation of the Royal Navy's Virtual Submarine [13], submariners experienced major interaction problems, such as maintaining a suitable viewing angle, navigating through tight areas, losing whereabouts after getting too close to objects and recognising interactive hot-spots in the environment. Similar problems have been found in other evaluation studies of VEs (for example [17]), and these problems appear to be different to those found with conventional interfaces (for example [28]).

There are currently no guidelines and little knowledge of how VEs should be designed. Therefore, guidance is needed for VE interface design and, to develop such guidance, an understanding of user interaction behaviour is required [9,10,22]. There are models of interaction for conventional interfaces, but none exist for VEs. This paper describes theoretical models of interaction in VEs and validation studies on the models.

    2. Theory of interaction

Previous work in interaction modelling has involved various approaches, such as the use of cognitive architectures (e.g. in the AMODEUS project [1]), process models (e.g. [19]) that describe interactions at a higher level of granularity, and models of user knowledge and its use (e.g. Cognitive Complexity Theory [14]). The approach adopted here was that of process modelling, which describes interaction at an appropriate level of detail for defining general requirements to support that interaction. By omitting lower level detail of cognitive tasks, precision in modelling is sacrificed for a wider scope. The theory elaborates on Norman's [19] general model of action to describe interaction in VEs.


Fig. 1. Norman's seven-stage model of interaction (from [19]).

Norman's theory is well known and has been used in developing interaction models for evaluating DM interfaces (see [26]). It consists of seven-stage cycles of action; see Fig. 1.

The elaboration of Norman's model involved the explicit modelling of exploratory and reactive behaviours, which are important aspects of VE interaction. Tasks in VEs are often loosely structured, with more emphasis on exploration and opportunistic action (as defined by [8]). For example, in many simulation and tutorial applications, the user's task is to investigate the environment, so behaviour is primarily opportunistic following of cues. VEs are often active, with objects operating independently of the user's actions [3], and these environment events may demand or invite responsive behaviours [7] from the user. Therefore, three inter-connected models have been used to describe important modes of VE interaction:

Task action model: describes purposeful behaviour in planning and carrying out specific actions as part of the user's task or current goal/intention, and then evaluating the success of actions.

Explore navigate model: describes opportunistic and less goal-directed behaviour when the user explores and navigates through the environment. A target may be in mind or observed features may arouse interest.

System initiative model: describes reactive behaviour to system prompts and events, and to the system taking interaction control from the user (for example, taking the user on a pre-set tour of the environment).

The task action model was based on Norman's action cycle, with additions for:

Consideration of objects involved in an action. Since objects in a VE are not continually presented, the user may need to reason about what environment objects are available for carrying out actions.

Searching for objects when they are not within the environment section in view. Search tasks are an important part of VE interaction (see [4]).

    Approaching objects and orienting correctly to them. Approaching objects is non-trivial in 3D interaction and appropriate 3D orientations to objects are required.

Object investigation actions, as opposed to object manipulations. The user may only be interested in examining VE content [18], rather than manipulating it in some way.

Figs. 2, 4 and 6 show flow diagrams for each of the interaction models. Walking through task action mode, the user establishes a goal (stage establish goals in Fig. 2), such as to study the electricity supply in a building, and forms an intention to carry out an action to turn on power (intention task action). S/he then considers what power objects are available in the environment (consider objects), such as mains boxes and switches. If the mains are not within his/her immediate vicinity, a search for them is carried out in explore navigate mode. Once the mains are found, s/he approaches and takes up a suitable orientation to them (approach/orient), see Fig. 3. They then deduce how to turn on the power at the mains (deduce sequence) and execute the action (execute). They interpret feedback in the environment (feedback) to see whether or not power has been turned on. Alternatively, after approaching the mains, if s/he had had an intended action to study the mains, rather than turn on power, they would closely inspect and investigate the mains (inspect). Finally, s/he evaluates the outcome of this inspection or the action to turn on power, on their goal to study the electricity supply (evaluate).
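To make the walkthrough concrete, the sketch below encodes the task action stages of Fig. 2 as a simple transition table and checks the "turn on power" sequence against it. It is a minimal reading of the flow diagram: the stage names come from the walkthrough above, but the exact branching, including the hand-off to explore navigate mode for searches, is a simplifying assumption rather than the model as published.

```python
# Illustrative sketch of the task action model as a transition table.
# Stage names follow the walkthrough; the branching and the hand-off to
# explore navigate mode are simplifying assumptions, not the published model.

TASK_ACTION_TRANSITIONS = {
    "establish goals":       {"intention task action"},
    "intention task action": {"consider objects"},
    "consider objects":      {"approach/orient", "search (explore navigate mode)"},
    "search (explore navigate mode)": {"approach/orient"},
    "approach/orient":       {"deduce sequence", "inspect"},
    "deduce sequence":       {"execute"},
    "execute":               {"feedback"},
    "feedback":              {"evaluate"},
    "inspect":               {"evaluate"},
    "evaluate":              {"establish goals"},   # cycle back to the next goal
}

def follows_model(sequence, transitions=TASK_ACTION_TRANSITIONS):
    """Return True if every consecutive pair of stages is a predicted transition."""
    return all(b in transitions.get(a, set()) for a, b in zip(sequence, sequence[1:]))

# The 'turn on power' walkthrough from the text:
walkthrough = ["establish goals", "intention task action", "consider objects",
               "approach/orient", "deduce sequence", "execute", "feedback", "evaluate"]
assert follows_model(walkthrough)
```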

Fig. 2. Task action model, showing stages and flow of interaction.

Walking through explore navigate mode, the user forms an intention to explore the environment (stage explore in Fig. 4), such as a virtual building. S/he scans the observable environment (scan) and decides to move forward through the building (plan). They navigate forward (navigate) and re-scan the environment. If, for instance, they see a cupboard which arouses interest (see Fig. 5), they decide to investigate the cupboard (intention explore action) and this action is now carried out in task action mode. Alternatively, s/he may be searching for targets, such as the mains boxes. When they scan and find the mains boxes, they may return to task action mode to try and switch on power.
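A comparable sketch, under the same caveats, captures the explore navigate loop: the scan/plan/navigate cycle with exits into task action mode when an interesting object or a search target is found. The labelled exits are shorthand for the mode hand-offs described in the text, not stage names taken from the model itself.

```python
# Illustrative transition table for the explore navigate model. The loop
# structure and the labelled exits to task action mode are assumptions
# based on the walkthrough, not the published flow diagram.

EXPLORE_NAVIGATE_TRANSITIONS = {
    "explore":  {"scan"},
    "scan":     {"plan",
                 "intention explore action (task action mode)",
                 "target found (task action mode)"},
    "plan":     {"navigate"},
    "navigate": {"scan"},          # re-scan the environment after each movement
}

# The cupboard example from the text: scan, move forward, re-scan, then an
# interesting object triggers a switch into task action mode.
cupboard_episode = ["explore", "scan", "plan", "navigate", "scan",
                    "intention explore action (task action mode)"]
pairs = zip(cupboard_episode, cupboard_episode[1:])
assert all(b in EXPLORE_NAVIGATE_TRANSITIONS.get(a, set()) for a, b in pairs)
```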

Fig. 3. The user's view when approaching the mains object.

Fig. 4. Explore navigate model, showing stages and flow of interaction.

Fig. 5. The user scans and sees a cupboard object which arouses interest.

Fig. 6. System initiative model, showing stages and flow of interaction.

System initiative behaviour may either be events or interaction control. In the case of events, the user perceives and interprets an event (stage event in Fig. 6), such as a ringing telephone (see Fig. 7). S/he plans how to respond to it (plan). They may immediately decide to answer the telephone (intention reactive action). Alternatively, they may investigate how to use the telephone, in exploratory mode, or evaluate what the telephone ringing event means to their ongoing task, in task action mode. In the case of interaction control, the user acknowledges the beginning of system control (acknowledge control), such as an automated tour of the building. S/he watches the tour (monitor) and acknowledges when the tour has ended (end control). They then plan how to respond to this system behaviour (plan again). Whilst watching the tour, s/he may decide they have seen enough and would like to quit the tour (intention control action).
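The system initiative model can be sketched in the same way, with separate branches for environment events and for interaction control. The transition sets below are read off the walkthrough rather than from Fig. 6 itself, so they should be treated as an assumed encoding of the diagram.

```python
# Illustrative transition table for the system initiative model, covering the
# event branch (e.g. a ringing telephone) and the interaction control branch
# (e.g. an automated tour). Hand-offs to the other modes appear as labelled
# exits; the whole table is an assumption based on the walkthrough text.

SYSTEM_INITIATIVE_TRANSITIONS = {
    # event branch
    "event": {"plan"},
    "plan":  {"intention reactive action",
              "investigate (explore navigate mode)",
              "evaluate (task action mode)"},
    # interaction control branch
    "acknowledge control": {"monitor"},
    "monitor":             {"end control",
                            "intention control action"},  # quit the tour early
    "end control":         {"plan"},                      # plan a response afterwards
}
```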

The theory applies to VEs that are single-user, modelled on real world phenomena and, at the level of interaction description involved, either desk-top or immersive. The models aim to capture the basic and typical flow of interaction, and it should be recognised that behaviour will often deviate from such simple patterns. Some stages may be skipped, for example in task action mode the deduce sequence stage may not be needed by the skilled user who has learned the required sequence. There may be repetitions of stages or parts of models, for example some tasks may involve several object searches or manipulations. Backtracking to previous stages may also occur as remedial activity when the user encounters errors. Finally, the ordering of some stages may differ, for example deduce sequence may be carried out before, instead of after, an object approach/orient.

For the theory to be useful in developing guidance, it must be representative of actual interaction behaviour. The following hypotheses were set to test this.

To test the existence and cohesiveness of the three modes of behaviour in an interaction session:

Hypothesis 1a: there will be significantly more stage-to-stage transitions within mode boundaries than across different modes;

Hypothesis 1b: observed sequences of up to five stages long will fall within the model mode boundaries.


    Fig. 7. The user perceives the telephone ringing event.

To test that the stages of interaction together describe important behaviour in an interaction session:

Hypothesis 2: theory stages will occur significantly more times than any other stages of interaction.

To test that the interaction models represent the generalised pattern of interaction flow in an interaction session:

Hypothesis 3: observed sequences of stage transitions will conform to patterns predicted in the models. More specifically, observed stage transitions will be either exactly as predicted, or jumps forward or backtracks of one stage in the interaction models.
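To make the hypotheses concrete, the sketch below shows one plausible way of scoring coded observation data against them: counting within-mode versus cross-mode transitions for Hypothesis 1a, and classifying each transition as predicted, a one-stage skip, or a one-stage backtrack for Hypothesis 3. The mode map, the linear ordering of task action stages and the sample sequence are illustrative assumptions, not the coding scheme actually used in the studies.

```python
# Hedged sketch of scoring observed stage sequences against the models.
from collections import Counter

# Which mode each stage belongs to (explore navigate and task action only,
# for brevity; system initiative stages would be added similarly).
STAGE_MODE = {
    "establish goals": "task action", "intention task action": "task action",
    "consider objects": "task action", "approach/orient": "task action",
    "deduce sequence": "task action", "execute": "task action",
    "feedback": "task action", "inspect": "task action", "evaluate": "task action",
    "explore": "explore navigate", "scan": "explore navigate",
    "plan": "explore navigate", "navigate": "explore navigate",
}

# Assumed linearisation of the task action flow, used to detect one-stage
# skips and backtracks for Hypothesis 3.
TASK_ACTION_ORDER = ["establish goals", "intention task action", "consider objects",
                     "approach/orient", "deduce sequence", "execute",
                     "feedback", "evaluate"]

def classify_transition(a, b, order=TASK_ACTION_ORDER):
    """Label a task action transition as predicted, a skip, a backtrack, or other."""
    if a not in order or b not in order:
        return "outside model"
    step = order.index(b) - order.index(a)
    if step == 1:
        return "as predicted"
    if step == 2:
        return "skip of one stage"
    if step == -1:
        return "backtrack of one stage"
    return "non-conforming"

def mode_boundary_counts(sequence):
    """Count within-mode versus cross-mode transitions (Hypothesis 1a)."""
    counts = Counter()
    for a, b in zip(sequence, sequence[1:]):
        same = STAGE_MODE.get(a) == STAGE_MODE.get(b)
        counts["within mode" if same else "across modes"] += 1
    return counts

# A hypothetical coded sequence: a search in explore navigate mode, then
# return to task action mode with the deduce sequence stage skipped.
observed = ["establish goals", "intention task action", "consider objects",
            "scan", "navigate", "scan",
            "approach/orient", "execute", "feedback", "evaluate"]
print(mode_boundary_counts(observed))
print(Counter(classify_transition(a, b) for a, b in zip(observed, observed[1:])))
```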

    3. Method

    3.1. Experiment

Empirical studies were carried out to gather data on user behaviour when interacting with VEs. Pre-study questionnaires were used to select ten participants with a range of experience in direct manipulation interfaces, video games, virtual reality systems, and property evaluation (the experiment task). The participants (seven males and three females) were staff and students at the School of Informatics, City University, and were paid £10 for participating in the studies.

The application was a business park simulation, developed by VR Solutions, and was being used by The Rural Wales Development Board for marketing of business units to potential leaseholders. It was a desk-top application consisting of two worlds: an external view of the park and an inside view of a unit in the park. The unit could be viewed as either an empty, factory or office complex. Hot-keys, SHIFT-H and SHIFT-G, were used to move to the external world and to different views of the inside world. Information about features in the unit was available by mouse clicking on related objects, such as windows and lighting. Figs. 8–12 show the external world, the different inside views of the unit and an example information box.

Fig. 8. The external world showing the outside view of the business park. The unit represented internally is shown in the left of the picture.

Changes were made to the application to ensure it allowed for the range of behaviours to be evaluated in the theory. The application lacked any aspect of system control, therefore an automatic guided tour was introduced to show the user around the external world. There was only one system event, therefore two more were added: a speech bubble appearing from a man (upon the user approaching the man), and a telephone ringing. The application was run on a PC with a 21 inch monitor; a joystick was used for navigation and a standard 2D mouse for interacting with objects.

Fig. 9. The inside world showing one of the units in the park, as an empty complex.

Fig. 10. The inside world showing one of the units in the park, as a factory complex.

The task scenario told participants they were salespeople who were to gather information about the architecture and basic services of a site, represented in a VE, so that it could be described to potential leaseholders. Participants were told that, following the experiment, they would be questioned on the site. Specific tasks tested all aspects of the theory, such as exploration, target searches, actions and object investigation. Participants were given 10 minutes to explore and familiarise themselves with the VE. There were then eight set tasks, with no time limits. Two of the tasks involved finding and investigating objects, such as the windows. For three...
