

Jolanda G. Tromp
jolanda.tromp@nottingham.ac.uk
www.drtromp.com
VIRART, Mechanical, Materials, Manufacturing Engineering and Management
University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom

Anthony Steed
Department of Computer Science
University College London, Gower Street, London, WC1E 6BT, United Kingdom

John R. Wilson
VIRART, Mechanical, Materials, Manufacturing Engineering and Management
University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom

Presence, Vol. 12, No. 3, June 2003, 241–267

© 2003 by the Massachusetts Institute of Technology

Systematic Usability Evaluation and Design Issues for Collaborative Virtual Environments

Abstract

This paper presents results of the longitudinal usability and network trials that took place throughout the COVEN (COllaborative Virtual ENvironments) Project. To address the lack of understanding about usability design and evaluation for collaborative virtual environments (CVEs), a deductive analysis was used to systematically identify areas of inquiry. We present a summary of the analysis and the resulting framework through which various complementary methods were utilized during our studies. The objective of these studies was to gain a better understanding about design, usability, and utility for CVEs in a multidisciplinary setting.

During the studies, which span four years, we undertook longitudinal studies of user behavior and computational demands during network trials, usability inspections of each iteration of the project demonstrators, consumer evaluations to assess social acceptability and utility of our demonstrators, and continuous preparations of design guidelines for future developers of CVEs.

In this paper, we discuss the need for such activities, give an overview of our development of methods and adaptation of existing methods, give a number of explanatory examples, and review the future requirements in this area.

1 Introduction

Collaborative virtual environments (CVEs) are a novel application area of computing technology, demanding an understanding of human-computer interaction in 3D real and virtual environments, and of human-human interaction as mediated by computers and virtual spaces, not yet available in the human-computer interaction (HCI) literature. As yet, there have been few reported contributions to the structured design and evaluation of virtual environments (VEs). In turn, this has contributed to, and also has been a consequence of, the paucity of guidelines about usability in VEs (Wilson, 1996). Understanding of what is meant by usability in VEs is relatively poor, and there is little agreement on which attributes among the many variables of the VE interface are significant, whether they are different in different use scenarios and user circumstances, and how these might be operationalized in design practice. Since the start of the COVEN project, a number of efforts have appeared in print that address this lack of structured advice on design and evaluation. Amongst these are Gabbard (1997), who attempted to create a taxonomy for VE design; Parent (1998), who created a virtual environment task analysis workbook for the creation and evaluation of virtual art exhibits; Kaur, who has defined a number of interaction cycles for single-user VE interaction, from which design properties were proposed that significantly increased the usability of their test application (Kaur, Sutcliffe, & Maiden, 1998); Fencott (1999), who developed a model to aid the design of VE content; Hix and colleagues, who have given a convincing example of the cost effectiveness of iterative usability design and evaluation through assessments and improvements of a VE for battlefield visualization (Hix, Swan, Gabbard, McGee, Durbin, & King, 1999); and Eastgate (2001) and Wilson, Eastgate, and D'Cruz (2002), who have developed a structured design and evaluation process, VEDS (the virtual environment development structure), whose major stages are analysis, specification, and storyboarding; building VEs; and enhancing user performance, presence and interactivity, and usability and evaluation.

If we extend our review to consider CVEs, as against virtual environments used by single users or else groups of users employing a single display screen, we find little advice on usability, design, and evaluation. Exceptions are work on system improvements, that is, the more technology-oriented work, which includes Greenhalgh's 1999 dissertation on the network load of CVEs, Lloyd's 1999 dissertation on group formation in CVEs, and Reynard's 1998 dissertation on video integration in CVEs to create mixed realities. A sociologically oriented research project is Hindmarsh's 1997 dissertation on the collaboration process in real and mediated spaces. Fraser's PhD work uses ethnographic observations of CVE users to guide design improvements for the support of object-focused interaction between CVE participants.

This paper describes the evaluation processes and associated evaluation methods for the assessment of early prototypes of CVEs. The work in the paper has grown out of the need to conduct formative evaluation within a particular project, but the developments and lessons learned have been generalized as far as possible. The main discussion will be of the choice, development, and use of a number of evaluation methods and their integration into a coherent approach. However, in employing these methods within a real CVE development project, we have identified a number of emerging ideas on CVE design that allow a start to be made upon defining guidelines to assist CVE designers. How usability applies to the design of CVEs is a very broad topic, and, other than making comments on CVE usability factors, this paper concentrates on how people actually collaborate within CVEs and upon the design contributions that will enhance and support collaboration.

In subsection 1.1, we discuss the goal of the usability evaluations of the COVEN project. In section 2, we review collaboration technologies in general and place CVE technology among them. We also describe the current design practice within which CVE development is taking place. In section 3, we give an overview of the framework we devised for the COVEN development and evaluation activities. We introduce the four major threads of work and the standard evaluation methods, which we adapted so that they can now be profitably applied to CVEs. In section 4, we describe our evaluation strategy and utilization of the evaluation methodologies. In section 5, we describe the results we generated by using these methods, and in section 6 we discuss the merit of the methods, including clarification of new design issues introduced by CVE technology development.

1.1 Evaluation Within the COVEN Project

The goal of the COVEN project was to demonstrate the feasibility and utility of scalable CVE worlds through prototype applications in the general area of virtual travel, summarized by Normand et al. (1999). Demonstrators and CVE platforms were built in three engineering cycles, with each implementation phase followed by an evaluation phase. Each evaluation phase involved a weekly or fortnightly network trial scheme with user tests to collect system, network, and usage data. Descriptions of the COVEN platform, some of the demonstration applications, and technical results from the network trials can be found in companion papers (Frecon, Smith, Steed, Stenius, & Ståhl, 2000; Greenhalgh, Bullock, Frecon, Lloyd, & Steed, 2000). The regularity of the network trials allowed us to adopt a longitudinal approach for our usability evaluations. The role of evaluation was originally to perform a summative evaluation of the success of the applications themselves. However, from a very early stage, it was apparent that more formative evaluations were required to develop the theoretical underpinnings of our approach, and that, instead of focusing specifically on the applications, it was also important to consider the general task of collaboration, the 3D nature of CVEs, and the methods used for evaluation.

To address the lack of VE-specific usability tools, COVEN performed investigative empirical work, contributing to the understanding of CVE design. Our point of origin was that we needed to address the usability problems posed by CVE technology by investigating the human behavioral aspects that affect performance and satisfaction in VEs. (See Figure 1.)

Our approach to method development was based on and constrained by three premises:

1. Existing HCI design and evaluation methods for 2D applications need to be translated to 3D/CVE applications and tested for their appropriateness.

2. CVE-specific concepts, being developed from a general understanding of human behavior, are still poorly understood and need to be explored, tested, and refined.

3. CVE-specific constraints on evaluation methodology need to be clarified and incorporated in the experimental setup.

It should be noted that the current state of knowledge and the relatively low level of sophistication of CVEs do not yet support the development of completely new evaluation methods and metrics. In any case, this is probably not appropriate. We have looked to draw upon, to adapt where appropriate, and to utilize existing methods of evaluation, some from human-computer interaction generally (Helander, Landauer, & Prabhu, 1997) and some from evaluation in other settings (Bales, 1951; Neale, 1997). Such methods and approaches have been used as they may be appropriate to the three-dimensional, real-time, and shared experiences available within CVEs.

It would be wrong to discuss evaluation of collaboration within CVEs as if this were somehow a totally new phenomenon. There have long been studies of people's interaction and collaboration in real-world settings (Goffman, 1967; Argyle, 1969; Kendon, Harris, & Ritchie Key, 1975) and, more recently, of collaboration within other technologies such as audio conferencing, video conferencing, computer-supported cooperative work, and media space conferencing (Gaver, Sellen, Heath, & Luff, 1993; Finn, Sellen, & Wilbur, 1997; Hindmarsh, Fraser, Heath, Benford, & Greenhalgh, 1998). Understanding, evaluation of, and design to enhance collaboration within CVEs should all be informed by an understanding of collaboration in these other environments.

Figure 1. Framework within which the development of the COVEN CVE evaluation and design strategy took place.

2 Background to CVE Technology

To better understand the technological developments that culminated in virtual environment conferencing software, we present a short overview of previous collaboration technologies. (See Table 1 for a summary.)

Audio conferencing has been found to be notoriously difficult to manage, in terms of managing turn-taking between multiple participants and, for large groups, in terms of identifying who is speaking (Walters, 1995). Video conferencing was a natural technical extension of audio conferencing, allowing facial expressions and a limited set of gestures to be part of the group interaction. However, with video conferencing, dialogs have been found to be significantly longer, with more interruptions, than for audio conferencing, particularly when transmission is delayed (O'Malley, Langton, Anderson, Doherty-Sneddon, & Bruce, 1996). Also, sharing documents is still a problem, because participants cannot see when and where in the document others are pointing (Heath & Luff, 1991). With the introduction of media spaces (video conferencing with additional video cameras aimed at documents and workspaces) it became possible to include shared document views and to give participants an increased awareness of the other video conference participants' backgrounds (Gaver, 1992; Gaver, Sellen, Heath, & Luff, 1993). However, it was difficult to ascertain which aspects of participants' own activities and workspace were visible to their colleagues (Heath, Luff, & Sellen, 1995). Participants still had difficulties working

Table 1. Historical Overview of Remote Conferencing Technologies

Audio conferencing: group working over telephone systems. Known problems:
● Difficult to ascertain who is talking
● No way to share documents in real time

Video conferencing: group working via video connections; inclusion of facial expressions and a limited set of gestures. Known problems:
● Dialogs significantly longer, more interruptions
● Problem sharing documents because of lack of detail

Media space conferencing: video conferencing with additional video cameras aimed at documents and workspaces. Known problems:
● Fixed cameras leave gaps in the view of the remote space
● Difficult to make sense of colleagues' conduct

Virtual conferencing: group working via a shared computer-generated graphical space with avatars representing participants. Known problems:
● Field of view limited
● Object interaction and navigation clumsy
● Limited set of gestures

together because the separate, fixed cameras leave gaps in the view offered of the remote space. CVEs were then proposed as a virtual conferencing medium, an alternative to video conferencing (Benford, Bowers, Fahlen, Greenhalgh, & Snowdon, 1995). Virtual conferencing was seen as having several potential advantages over audio or video conferencing by providing a continuous workspace that can be shared by physically remote users in real time.

2.1 Collaborative Virtual Environments

CVEs provide virtual embodiments, or avatars, for users, allowing expression of a limited set of nonverbal behaviors, a communication channel (be it audio, text, or both), and shared spaces including shared objects (recently reviewed by Liew, 2000). The virtual embodiments provide users with a means to be collocated in the same virtual space, regardless of the actual geographical location of each user. Within such a system, however, there is little support to convey gaze direction, facial expressions, or gestures occurring in real time, with a few exceptions such as the inclusion of some facial expressions (Guye-Vuilleme, Capin, Pandzic, Thalmann, & Thalmann, 1998; Slater, Howell, Steed, Pertaub, Gaurau, & Springel, 2000) and live video images of the face (Reynard, 1998). Precise maneuvering of the virtual embodiment is difficult due to the limited input technology (such as the mouse, keyboard, and joystick). Little information is available about the real environment in which other participants are physically located, or about any changes in their physical space that affect the behavior of that user in the virtual space.

Common behaviors that people exhibit during collaboration in the real world, such as eye contact, gaze duration, and touch, are not normally available in the virtual world, and are therefore not visible to others in the virtual space. Participants in a CVE have to establish and maintain a set of social norms, or an etiquette of social conduct, during the interaction, using a much lower bandwidth than is available in real-world settings. If a user cannot interact with the interface effectively, this creates a large cognitive overhead, and consequently the tasks involved in successful collaboration cannot be performed in an efficient and satisfactory way. A serious potential problem for usability evaluations of collaborative services is that dealing with the prototype technology that enables collaboration in CVEs generally creates a considerable cognitive overhead that distracts people from getting on with the actual collaboration task.

2.2 Design Considerations for CVEs

We considered a better insight into design practice an important guide in creating system design methodologies, an approach that has recently been emphasized by Shackel (2000). To find out where and how design guidance is lacking, we asked five CVE designers to tell us about their particular design problems and practices in interviews of an hour each. A basic form of analytical induction (Groot, 1969) was used to draw conclusions from the interviews. The interview questions were carefully developed based on two pilot studies with CVE designers, observations of the CVE designers in action, and our own background knowledge of CVE design issues. The designers interviewed all had a background in computer science and had been involved in building CVE demonstrators, ranging from work on one project for half a year to work on eight or more projects over the past six years. All but the pilot interviews took place in the workplace of the CVE designers, with the designers sitting at the machine they typically used for work. This allowed them to illustrate issues by showing examples of their work on the screen. The results from the interviews gave an extensive amount of qualitative information about CVE design practices.

The CVE designers interviewed work within two somewhat contradictory constraints:

● A human constraint: the CVE has to be effective and intuitive for participants.

● A machine constraint: the CVE has to take up minimum computational load and network traffic.


The respective, potentially conflicting solutions to satisfy these two constraints are

● use of realistic representations and metaphors to allow users to transfer their intuition and everyday knowledge to the VE, and

● simplification of these representations.

Throughout the design process, the design choices are influenced by a constant tension called the performance constraint (Howard, 1997). The performance constraint refers to the fact that the update rate decreases as the number of polygons increases. This constraint will always exist, although the polygon budget will increase over time as VR technology becomes more advanced. As computers become faster, either model complexity or the update rate can increase, but rarely both. We therefore included in our usability activities the need to formulate and test minimum user and application needs. We attempted to find ways of conveying this information to the CVE designers in a suitable format that would allow them to design supporting user interfaces within the limited available computing resources; in other words, to give the design team an informed strategy with which to reach decisions about simplification.
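The performance constraint lends itself to a back-of-envelope calculation: at a fixed rendering throughput, the per-frame polygon budget and the update rate trade off directly. The sketch below illustrates this; the throughput figure and target rates are illustrative assumptions, not measurements from the COVEN platform.

```python
# Back-of-envelope sketch of the performance constraint: at a fixed
# rendering throughput, the per-frame polygon budget falls as the
# target update rate rises. All figures are illustrative assumptions.

def polygon_budget(triangles_per_second, target_fps):
    """Maximum triangles renderable per frame at the target update rate."""
    return triangles_per_second // target_fps

THROUGHPUT = 6_000_000  # assumed sustained triangles/second of the hardware

for fps in (10, 30, 60):
    print(f"{fps:2d} Hz -> budget of {polygon_budget(THROUGHPUT, fps):,} triangles/frame")
```

Doubling the target update rate halves the polygon budget, which is exactly the simplification tradeoff the interviewed designers described.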

The interviewees interpreted "design" in two different ways. The same word was used to talk about two significantly different tasks:

● design in terms of computer code (assuming a designer with a computer science background), and

● design in terms of the form of objects (assuming a designer with an artistic or engineering background).

Each design task belongs to a different discipline, with different associated skills. However, both design tasks need to take into account how to make things usable for multiple, collaborative, distributed users. We therefore incorporated in our usability activities the need to formulate and test usability principles for CVEs in a suitable format that would allow different designers to understand the usability design principles involved in building 3D interfaces. To this end, we developed standard HCI inspection methods into a CVE inspection method (and complementary design method) and attempted to test its effectiveness for improving the understanding of design issues among COVEN project CVE designers.

Finally, the interviewees interpreted what is meant by guidelines in two different ways. Again, the same word was used to talk about two significantly different kinds of information:

● guidelines on how to work with the software to build scalable objects and worlds, and

● guidelines on how to design usable objects and worlds.

A strong need was expressed for both types of guidelines. We therefore incorporated in our usability activities the task of formulating and testing usability design principles for CVEs in a top-down functional analysis that would allow designers to carefully specify the perceptual message they wish to convey to CVE users. To this end, we developed a CVE usability design method that used the interaction cycles from the inspection, the combined project consortium expertise, usability guidelines, and the concept of narrative affordances as the guiding principle. (See subsections 5.2 and 6.2 for more details.)

To summarize the results from the interviews: the design process as a whole is governed by making design decisions that satisfy a performance constraint. At the same time, the design requirements are driven by the usability demands of the users. The only way to allow both driving forces to have an equal impact on the final design is by

● identifying those aspects of the design that are directly influenced by tradeoff decision-making,

● clarifying user needs and computational needs to guide the reasoning that leads to design decisions, and

● making informed computational simplification decisions based on an understanding of the specified user needs.

In the absence of precise guidelines for CVE usability design, the design options should be discussed with a complete design team, minimally composed of a designer, a programmer, a usability expert, and a client.


The team should work in unison so that none of the design tradeoffs is made by one expert alone. In previous work, we have tried to increasingly clarify our understanding of those design tradeoff areas (Steed & Tromp, 1998; Tromp, Istance, Hand, & Kaur, 1998), and we have summarized the results in Table 2.

3 Framework of COVEN Development and Evaluation

In accordance with the Telematics Programme of the Fourth Framework Programme of the EU, the COVEN project was planned and executed in a user- and customer-driven manner. This user-centered design process meant that the entire development process, including initial planning and the feasibility studies, included and emanated from the analysis of user needs and requirements. The principal aim of the evaluation work packages was "internal" in that they were either formative evaluations of demonstrators under development or summative evaluations of the final demonstrators and platforms. However, incorporating the additional requirements for methodological and scientific development discussed in section 2, we arrived at four main threads of work. (See also Figure 2.)

● Usability inspections: checking the application so as to uncover the main design flaws and cleaning up the design, while adapting the method to include inspection of those parts of the design that are expected to support 3D interaction and collaboration.

● Observational evaluations: participants performing controlled experimental tasks in networked trials, focused on the evaluation of factors of the central CVE concepts of presence, copresence, interaction, and collaboration, so as to explore and better understand these concepts.

● Consumer evaluations: assessment of the attitudes to travel information CVEs amongst the general public, and of the perceptions of added value of CVE travel applications.

● Usability guidelines: formulation of our design practice, design problems, and design solutions in a structured manner, as a first set of guidelines.

Table 2. Tradeoff Design Areas and Associated Usability Problem Categories

Hardware/network/software problems
Design tradeoff decision dimension: prototype development vs. demonstrable applications. Usability problem areas:
● lack of functionality
● latency in performance
● poor display quality

System problems
Design tradeoff decision dimension: runtime performance vs. user performance. Usability problem areas:
● usability solutions not automatically device independent
● high-ping users judged as uncooperative, low-ping users judged as uncollaborative
● high-end users judged as higher in status, competence, and trustworthiness than low-end users

Application problems
Design tradeoff decision dimension: object representation vs. affordance representation. Usability problem areas:
● the meaning of objects within the environment
● the apparent or unapparent availability of actions
● realism vs. simplification choices based on performance constraints only

Interface problems
Design tradeoff decision dimension: presence and copresence vs. minimalist design. Usability problem areas:
● interaction struggles in 3D space, such as navigating in 3D space, picking 3D objects, and positioning precisely

3.1 Inspection Method

Usability inspection is a generic name for a set of methods in which evaluators inspect or examine usability-related aspects of the user interface (Nielsen & Mack, 1994; Helander, Landauer, & Prabhu, 1997). Usability inspections are normally used at the stage in the usability engineering cycle when a first user interface design has been generated and its usability and utility for users need to be evaluated, but users cannot yet be involved due to the prototypical nature of the software. The inspection identifies usability problems, ranks them in order of seriousness for usability breakdown, and recommends ways of fixing the problems. The inspection methods have been developed on, and for, 2D applications, so we proceeded cautiously, reflecting on the effectiveness of the method as we applied it to 3D interfaces, and subsequently adapting it where necessary during the three iterations of our project.

The inspection method we found most directly applicable was the cognitive walkthrough method; additionally, we attempted to use the heuristic evaluation method (Melchior, Bosser, Meder, Koch, & Schnitzler, 1995). For a recent overview of the method, see also Preece, Rogers, and Sharp (2002). The cognitive walkthrough uses an explicitly detailed procedure to simulate a user's problem-solving process at each step in tasks employing an interface, to see if the simulated user's goals and memory for actions can be assumed to lead to the next correct action. In the case of CVEs, such user guidance is particularly important because of the freedom in the choice of interactions that is typical of these 3D environments. Inspections do not involve actual users, so the method is particularly suited to being applied immediately after the release of each design.
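The step-by-step character of a cognitive walkthrough can be recorded mechanically: each task step is checked against the standard walkthrough questions, and every failed question becomes a candidate usability problem. The sketch below illustrates this; the task steps and inspector verdicts are hypothetical examples, not taken from the COVEN inspections.

```python
# Minimal sketch of recording a cognitive walkthrough: for each step of
# a task, the inspector answers the standard walkthrough questions, and
# failures are collected as candidate usability problems. The task and
# verdicts below are hypothetical, not COVEN data.

WALKTHROUGH_QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the intended effect?",
    "Will the user see that progress is being made after the action?",
)

def inspect(task_steps):
    """Return (step, failed question) pairs flagged by the inspector."""
    problems = []
    for step, verdicts in task_steps:
        for question, ok in zip(WALKTHROUGH_QUESTIONS, verdicts):
            if not ok:
                problems.append((step, question))
    return problems

# Hypothetical walkthrough of a simple CVE task.
steps = [
    ("Locate the other participant's avatar", (True, True, True, True)),
    ("Select the shared object", (True, False, True, False)),
]
for step, question in inspect(steps):
    print(f"PROBLEM at '{step}': {question}")
```

The flagged problems would then be ranked by seriousness and paired with redesign recommendations, as described above.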

We performed our inspections with multiple inspectors and combined our results to collect maximum information, and we reflected as a team on how to adapt the method to 3D applications. Additionally, we asked our designers to perform an inspection, using our final adapted method description, to assess the educational benefits to be gained by exposing inspectors without usability expertise to the usability concerns of CVEs.

3.2 Observation Method

The introduction of a new technology into the workplace or home can have unknown interactions and consequences that affect the perceived usability of the technology. The organizational and social setting into which computers are integrated directly influences the success (or failure) of the computer software and needs to be taken into account to achieve the maximum ecological validity of the research data (Vicente, 1999). We therefore conducted observations of the actual use of our application in the working context of the real networked system connecting geographically dispersed users performing representative tasks in the CVE. Ethnographic observation analyzes sequential interactions and generally leads to a rich and detailed understanding of the meaningful relations between otherwise seemingly unrelated acts. (See, for example, Heath et al., 1995.) The great advantage of this approach is that it gives insight into real acts in the real activity context in which they take place, thus greatly increasing the quality of the data (Lindgaard, 1992) in terms of the depth of the derived insights. The disadvantages of this method are that it leads to largely anecdotal evidence and has few quantitative data to support its claims. It also tends to lead to concentration upon small (geographically, temporally, functionally, and interpersonally) fragments of behavior, potentially missing the "big picture" needed to make recommendations for redesign. Therefore, we developed a method to perform a sequential interaction process analysis of the temporal and spatial activities of CVE users. Interaction process analysis is a standard empirical method described by Bales (1951), originally for use in the observation of small group interactions as they occur. The heart of the method is a way of classifying direct interaction as it takes place, act by act, and a series of ways of summarizing and analyzing the resulting data so that they yield useful information (Bales, 1951; Kanuritch, Farrow, Pegman, Wilson, Cobb, Crosier, & Webb, 1997; Neale, 1997). This method provides a means for collecting quantitative data from observations. We observed both experienced and novice users.

Figure 2. The threads of work leading to the formulation of usability guidelines for CVEs.

3.3 Consumer Evaluation

During the COVEN project, there were two consumer evaluation phases. One took place at the beginning of the project and established the need for, and requirements of, a travel information CVE, based on the experiences of travel agents. The other took place at the end of the project and consisted of two separate consumer evaluations: one in situ in a tourist agency and the other in a controlled experimental setting.

At the beginning of the project, well-known travel agents were surveyed to establish travelers' needs in the context of their age and their social, economic, and background knowledge. Travel agents were interviewed, and the questions most frequently asked by customers were stratified and analyzed. These were cost, where to go and what to see, what to do there, and how to get there. The application and usability requirements specification included these and other CVE-specific traveler activities (such as exploration and travel rehearsal) in increasingly detailed scenario descriptions and function specifications (COVEN Del 2.1, 2.4, 2.7). The application and usability requirements specification documents were important during the project, bringing clear structure to the work, defining common goals for all contributors to the design process, and creating a communication medium that was equally accessible to the programmers, the designers, the usability experts, and the managers of our project.

At the end of the project, we performed two consumer evaluations aimed at establishing the acceptance by real users of a technology and travel rehearsal service such as our CVE aims to demonstrate. This is especially important in light of concerns expressed about the effect of large-scale, Internet-based CVEs on society as a whole (Chester, 1998).

3.4 Usability Guidelines

Due to the early state of CVE development, there is a gap between requirement listings for CVEs and actual, known design solutions. We believe that, to bridge this gap, CVE designers need to be provided with a systematic method that supports the decision-making process involved in moving from requirement specifications to design implementations. Part of the usability work during the COVEN project included monitoring and summarizing our work practice, which culminated in two documents describing the issues involved in designing CVEs (COVEN Del 2.6, Del 2.9).

4 Evaluation Strategy

The main aspect that makes CVEs different from single-user VEs is that they employ the network to allow multiple geographically dispersed users to interact in the same virtual world to perform collaborative tasks. CVEs need to enable and mediate communication between people, but we do not have a precise specification of what exactly needs to be mediated to support collaboration in virtual space. If humans are to use the system effectively, we have to support them in the major acts of collaboration that are equivalent to related parts of any real-world collaboration. Thus, we need three things:

Tromp et al. 249

● An understanding of real-world collaboration.
● An understanding of how people actually use the CVEs that exist to date.
● Tools to make the acts and roles of collaborating people explicit.

We therefore focused our efforts on clarifying sociological and psychological knowledge about small-group collaboration, specified how this real-world collaboration can and does take place in CVEs, and adapted our selected methods to analyze the collaborative aspects of CVE design and use.

4.1 Focus on Collaboration

From the literature on real-world, small-group interaction, collaboration, and coverbal behavior (Robertson, 1997; Hindmarsh & Heath, 2000a, 2000b), we created a hierarchical task analysis (HTA) of collaboration, flagging tasks and subtasks essential to collaboration that would likely be unavailable in CVEs or different from similar tasks in reality. (See Tromp, 2001, for more detail on the HTA for collaboration.) See Figure 3 for the first three levels of the HTA.

Collaboration is about groups of people working together to achieve certain goals. An individual's ability to work on collaborative tasks relies upon peripheral awareness of the others and a subtle monitoring of the activities of the other participants. Collaboration between people sharing the same workspace, be it virtual or physical, involves the ongoing and seamless transition between individual and collaborative tasks. Thus, collaboration can be broken down into unfocused collaboration, in which the individual monitors the other participants' activities without getting involved, and focused collaboration, in which individuals are working closely together. Both focused and unfocused collaboration are largely accomplished through alignment towards the focal area of activity, such as a document, where individuals coordinate their actions with others through peripheral monitoring of the others' involvement in the activity "at hand" (Heath et al., 1995). See Figures 4 and 5 for a radically simplified HTA of peripheral awareness and focused collaboration.
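The unfocused/focused distinction above maps naturally onto the tree structure of an HTA. As a minimal sketch (in Python; the task labels are illustrative examples drawn from this discussion, not the actual HTA of Tromp, 2001), each task node can carry a flag marking subtasks that are likely unavailable or different in CVEs, so that flagged tasks can be collected for inspection:

```python
# Illustrative sketch of an HTA node for collaboration. Task labels are
# examples drawn from the focused/unfocused distinction in the text,
# not the full HTA of Tromp (2001).

class Task:
    def __init__(self, name, differs_in_cve=False, subtasks=None):
        self.name = name
        # Flagged when the (sub)task is likely unavailable in CVEs or
        # different from the equivalent real-world task.
        self.differs_in_cve = differs_in_cve
        self.subtasks = subtasks or []

    def flagged(self):
        """Collect all (sub)tasks flagged as problematic in CVEs, depth-first."""
        found = [self] if self.differs_in_cve else []
        for sub in self.subtasks:
            found.extend(sub.flagged())
        return found

collaborate = Task("collaborate", subtasks=[
    Task("unfocused collaboration", subtasks=[
        Task("monitor others' activities peripherally", differs_in_cve=True),
    ]),
    Task("focused collaboration", subtasks=[
        Task("align towards focal area of activity"),
        Task("monitor others' involvement in the activity at hand",
             differs_in_cve=True),
    ]),
])

for task in collaborate.flagged():
    print(task.name)
```

Walking the tree this way yields exactly the list of tasks that a CVE inspection should scrutinize first.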

A collaborating group is defined as any number of persons engaged in interaction with each other in a single meeting or a series of such meetings to reach a certain goal. In a group, each member receives some impression or perception of each other member distinct enough that he or she can, either at the time or in later questioning, give some reaction to each of the others as an individual person, even if only to recall that the other was present. Taking turns in a conversation, that is, the alternation of action and inaction and the subsequent rotation of performance among individuals in a group, is the most salient feature of group dynamics (Markel, 1975). See Table 3 for a summary of the identified interactions that take place during collaboration.

We therefore carefully created collaborative task support within our CVE and included all aspects of human-human interaction, mediated by our CVE, in our exploratory approach. We performed rigorous inspections of the interface to eliminate all major usability problems before using representative users for our experiments, and we held short training sessions to make subjects familiar with the basic interface commands. Without having performed our requirements specification, the inspections, observations, and training sessions, we would have been able to measure only the most immediate interface struggles.

4.2 Longitudinal Network Trials

During the COVEN project there were three six-month evaluation cycles, with networked trials lasting two to four hours and involving four sites with between four and sixteen simultaneous users; the trials took place on a weekly or fortnightly basis. This allowed us to adopt a longitudinal research approach, an important source of data for psychologists studying groups of people. Longitudinal studies follow an individual or group of individuals over an extended period of time, with observations made at periodic intervals. After each trial, we asked the participants to answer two standard questionnaires, one aimed at assessing the hardware, software, and technological problems encountered, and the other aimed at assessing the degree and quality of the collaboration between users. Additionally, we ran controlled, networked, small-group interaction experiments with representative users, interviewing the subjects afterward and asking them to answer a standardized interaction anxiousness scale questionnaire (Leary, 1983) and our own task-related questionnaires, and we performed our observational analyses.

Figure 3. Top levels of hierarchical task analysis of collaboration for CVEs.

We investigated the network efficiency of ATM as well as ISDN and the Internet in relation to data traffic as typically generated by CVEs (COVEN Del. 3.3, 3.5, 3.7). Both DIVE (Frecon et al., 2000) and dVS (Rygol, Ghee, Naughton-Green, & Harvey, 1996) were used as the underlying CVE system. There is a considerable burn-in period during networked testing, in which each distributed site has to fine-tune its network connectivity and prepare the hardware to allow it to see and hear the others effectively. This burn-in period increases slightly with the addition of each extra user machine. We started off using the relatively stable demonstration worlds that came with the applications, gradually establishing connectivity between four sites and incrementally adding more users per site. From our first inspections and the hierarchical task analyses of collaboration made as part of the inspections, we created a number of representative task scenarios based on thorough analysis of all issues related to CVE interaction (COVEN Del 3.5). These scenarios were used to run small-group interaction experiments in both DIVE and dVS. Secondly, we explored collaboration processes in a virtual world especially developed for that purpose, the WhoDo game, a collaborative murder mystery environment (see Figures 6 and 7), also discussed elsewhere (Greenhalgh et al., 2000; Tromp, 2001), and we ran experiments based on a realistic user scenario in the final COVEN demonstrator, the London Travel Demonstrator (Steed, Frecon, Avatare-Nov, Pemberton, & Smith, 1999).

4.3 Inspection Methodology

We performed inspections at each stage of the design-evaluation cycle. As mentioned in subsection 3.1, most existing inspection methods were developed on and for 2D applications. We selected two commonly used methods, the cognitive walkthrough method and the heuristic evaluation method, to see how well they would suit inspection of CVEs. To first assess their applicability for CVE evaluation, we did not make any significant changes to either method prior to the first inspection.

Figure 4. Simplified HTA of peripheral awareness.

Figure 5. Simplified HTA of focused collaboration.

Four evaluators were involved in the first inspection, each covering both the DIVE and dVS versions of the COVEN platform. Each evaluator performed the inspection in their own laboratory, independently from the others. Afterwards, we analyzed and combined the reports, and the results subsequently went to the designers of the COVEN platforms. We also gave the designers a questionnaire to assess the usefulness of the

Table 3. Elements of Social Conduct During Human-Human Interaction

Verbal communication: The exchange of audio information to establish and maintain contact between individuals engaged in focused or unfocused interaction.

Phatic communication: The exchange of stereotyped phrases and commonplace remarks to establish and maintain a feeling of social solidarity and well-being.

Spatial regulation: The arrangement of single or combined average body-size-related spaces around and between people and objects, signifying temporary or permanent micro-territories, where each cultural tradition has its own micro-territorial sizes and arrangements.

Proxemic shifts: Patterns of interpersonal distance in face-to-face encounters accompanying and influencing changes in the topic or in the social relationship between speakers (i.e., situational shifts).

Turn taking: Nonverbal communication accompanying verbal communication has an important role in the understanding of social interaction and turn-taking during interaction. Common turn-taking cues are head nodding, face looking, smiling, head touching, and speaking, including simultaneous speech.

Peripheral awareness: A subtle monitoring of the other participants' activities; the individual monitors the other participants' activities without getting involved. This is largely accomplished through alignment towards the focal area of activity, such as a document.

Trust building: Establishing and confirming one's perceived trustworthiness as a competent collaborator, by being perceived by the other participants as acting according to the social norms.

Reciprocity: An individual's ability to be simultaneously both perceiver and perceived of their own embodied actions, as well as the perceiver of others' actions.

Indexicality: Our ability to point at objects and locations and refer to them with indexical expressions such as "that," "there," and so on.

Gaze: Gaze direction, gaze duration, gaze patterns, gaze awareness, mutual gaze, and head turning indicate the direction and type of attention given to something, aid in turn-taking, and provide communication feedback.


inspection report, and the inspectors a questionnaire assessing the effectiveness of the method as applied to CVEs. The designers reported that the report was very useful in helping them organize and prioritize redesign of the application, with major and cosmetic changes in particular being easy to identify. The usability problems proved hard to fix, however, because a new design tended to introduce new usability problems. Both designers and inspectors found that the inspection method did not sufficiently focus on redesign suggestions. The inspectors reported that the method was very useful in finding usability problems, but that the task scenarios generated for the inspection were too linear. For example, the task trees used for the inspection do not take into account the freedom of interaction with which every CVE user is endowed. This was reflected by the fact that each inspector had to make widely varying adaptations to the task trees during the inspections, depending on which tasks they performed first. Additionally, the inspectors found the heuristic evaluation less effective than the cognitive walkthrough in finding detailed usability problems. In fact, Nielsen developed the statements, or heuristics, used in the heuristic evaluation from an analysis of 256 usability problems found during the inspection of 2D applications. Although we created heuristics adapted to 3D interfaces based on these first results (Tromp, 2001), we decided to suspend the use of the heuristic evaluation until more CVE-specific heuristics are formulated from an analysis of a large set of documented CVE-specific usability problems (further discussed in the final section of this article).

Recently, Baker, Greenberg, and Gutwin (2001) published eight heuristics, based on a theoretical analysis of teamwork between distance-separated groups, that are as yet unvalidated.

Based on our own experiences, revisions of the cognitive walkthrough method were devised for the second iteration of the COVEN platform evaluation (Del 3.5). We changed from using task-related scenarios to generic interaction scenarios in an attempt to cover the multiple tasks inherent in collaboration and the wide variety of task-action choices that comes with the freedom of interaction in CVEs. This switch also allowed us to incorporate aspects of collaborative activity that were hard to capture in the task-focused scenarios. During the second and third iterations of our usability evaluations, we additionally used interaction cycles (Sutcliffe, 1995) to inspect all interactive objects and system functions that involved the users. These interaction cycles (system initiative cycle, normal 2D action cycle, normal 3D action cycle, goal-directed exploration cycle, and exploratory browsing cycle) are based on the work of Sutcliffe and Kaur (2000) and Kaur (1998) on usability design guidance for single-user VEs. We integrated and adapted their method with our own findings and added a cycle of collaboration to additionally inquire into the multiuser aspects specific to CVEs. (See Table 4 for the inspection cycle questions and severity ratings.) Additionally, we adapted the report forms to encourage the inspectors to add more specific and constructive redesign suggestions.

Figure 6. An overview of the WhoDo mansion.

Figure 7. Participants gathering at the start of a game.

This adapted inspection method was tested by four inspectors in the second evaluation phase. The reports were used to redesign the application, to give the project managers a way to make explicit the prioritization of interface fixes based on a time/benefit analysis, and to give the scientists in our project an in-depth report of usability problems typical of CVEs.

Based on the experience of applying the method in the second evaluation phase, we refined it once again. In the third phase of evaluation, four CVE designers were asked to apply the method themselves during the engineering phase, to assess whether this would give them better insight into the usability issues involved in design. The designers were asked to answer a questionnaire addressing the effectiveness of their use of the method. The CVE designers claimed to have gained educational benefits from performing the inspection; most notably, they felt they had an increased sense of the importance of feedback on actions taken in the CVE, and a better feel for the design of task flow in general. The designers found it time consuming and difficult to assign the interaction cycles to the CVE tasks; sometimes more than one task cycle seemed to fit the task, and sometimes none did. This led to rich discussions about the nature of the tasks involved, a troubleshooting list for the manual, and better instructions.

As a result of the experience in the third evaluation phase, the inspection method was updated once more. The final version of the method has a procedure to simulate a user's problem-solving process at each step in the user task by using a floorplan of the CVE. We also incorporated new inspection questions to check whether one general task-action cycle can be assumed to lead to the next correct task-action cycle, at the level of object representation and placement in the total CVE space.

4.4 Observation Methodology

For our CVE observations, we used a video recording of the video-out signal of a user's CVE interactions, visible through the CVE window on their desktop computer screen. In our observations, we were additionally interested in the physical behaviors of the user through whose virtual eyes we were watching the CVE. To incorporate this information, we had a video camera aimed at the user (framing their upper body and face). This image was then recorded onto the same video recording as the video-out signal of the user's screen, in a small window in the corner of the screen, using a picture-in-picture technique.

As the people in the observed group interact with each other, the observer breaks their behavior down into the smallest meaningful units she can distinguish and records the scores by noting

● the time at which the behavior occurs,
● the number of the person exhibiting the behavior,
● the proper category of observed behavior, and
● the number of the recipient person or object.

The observer follows the interaction continuously in this microscopic manner, attempting to keep the scores in the sequence in which they occur and to omit no item of behavior.

Interactions in a group take place over time. The interactions show uniformities, repetitions, and tendencies to occur in certain sequences. For instance, at the beginning of a meeting people typically greet each other, during a meeting they listen to each other, and at the end of a meeting they say their goodbyes. These interactions can be classified into categories based on their similarities and differences. The observer classifies every item of behavior she can observe and interpret based on the list of categories. The classification the observer makes hinges on the inference that the observed behavior has a function (or functions), either by intent or effect. The sequences of the categorized and recorded interactions can subsequently be analyzed to reveal patterns in behavior.
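The scoring and sequence analysis described above can be sketched in code (a hypothetical illustration; the record layout is ours, and the category names anticipate the seven basic categories described later in this section). Each act is scored as a (time, actor, category, recipient) record, and the ordered sequence is tallied into category-to-category transition counts to reveal recurring patterns:

```python
from collections import Counter

# Each scored act: (time in seconds, actor id, behavior category,
# recipient person or object). Category names follow the seven basic
# categories used in our observations (communicate, manipulate,
# navigate, position, scan, gesture, external).
log = [
    (0.0, 1, "communicate", 2),        # greeting at the start of the meeting
    (2.5, 2, "communicate", 1),
    (4.0, 1, "navigate", "room"),
    (6.2, 1, "manipulate", "document"),
    (7.0, 2, "scan", "room"),
]

def transition_counts(acts):
    """Count how often one behavior category directly follows another."""
    categories = [category for (_, _, category, _) in acts]
    return Counter(zip(categories, categories[1:]))

print(transition_counts(log))
```

Summing or normalizing these transition counts over a whole session is one simple way to surface the "uniformities, repetitions, and tendencies" that the observer's act-by-act scoring makes available.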


Table 4. Interaction Cycle Questions and Severity Ratings Used in the Inspection

System initiative cycle:
● Is it clear to the user that the system has taken control?
● Can the user resume control at any point, and is the appropriate action clear?
● Are the effects of system actions visible and recognizable?
● Are the system actions interpretable?
● Is the end of the system action clear?
● Is there an obvious next action to perform for the user, now that this task has ended?

Normal task action 2D cycle:
● Will the users be trying to produce whatever effect the action has?
● Will users be able to notice that the correct action is available?
● Once a user finds the correct action at the interface, will they know that it is the right one for the effects they are trying to produce?
● After the action is taken, will users understand the feedback they get?
● Is there an obvious next action to perform for the user, now that this task has ended?

Normal task action 3D cycle:
● Can the user form or remember the task goal?
● Can the user specify an intention of what to do?
● Are the objects or parts of the environment necessary to carry out the task action (the user's new intentions) visible?
● Can the objects necessary for the task action be located?
● Can the users approach and orient themselves to the objects so the necessary action can be carried out?
● Can the user decide what action to take and how?
● Can the user carry out the manipulation or action easily?
● Is the consequence of the user's action visible?
● Can the user interpret the change?
● Is it made clear to the user what the next correct/needed action could be?
● Is there an obvious next action to perform for the user, now that this task has ended?

Goal-directed exploration cycle:
● Does the user know where to start looking?
● Can the user determine a pathway towards the search target?
● Can the user execute movement and navigation actions?
● Can the user recognize the search target?
● Can the users approach and orient themselves to the objects so the necessary action can be carried out?
● Can the user decide what action to take and how?
● Can the user carry out the manipulation or action easily?
● Is the consequence of the user's action visible?
● Can the user interpret the change?
● Is it made clear to the user what the next correct/needed action could be?

Exploratory browsing cycle:
● Can the user determine a pathway for movement?
● Can the user execute movement and navigation actions?
● Can the user recognize objects in the environment?
● Can the user interpret the identity, role, and behaviors of objects?
● Can the user remember important objects or locations?
● Can the user form a mental map of the explored environment?

Collaboration cycle:
● Can the user locate the other user(s)?
● Can the user recognize the identity of the other user(s) and tell the other users apart?
● Are the communication channels between the users effective?
● Are the actions of the other user(s) visible and recognizable?
● Can the user act on a shared object while keeping the other user(s) in view?
● Can the user easily switch views between the shared object, other locations/objects of interest, and the other user(s)?
● Can the user get an overview of the total shared space and all other users in it?
● Can the user tell when there are interruptions in the attention of the other user(s) to the CVE?

Severity ratings:
0. I don't agree that this is a usability problem at all.
1. Cosmetic problem only: need not be fixed unless extra time is available on project.
2. Minor usability problem: fixing this should be given low priority.
3. Major usability problem: important to fix, so should be given high priority.
4. Usability catastrophe: imperative to fix this before users test the system.
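The inspection questions and the 0-4 severity scale lend themselves to a simple record format for inspection findings. The sketch below (with hypothetical field names and invented example problems, not actual COVEN findings) shows how reports from several inspectors might be merged and ordered so that catastrophes and major problems surface first:

```python
# Hypothetical record format for inspection findings; severity codes
# follow the 0-4 scale of Table 4. The example problems are invented
# for illustration, not actual COVEN inspection results.
SEVERITY_LABELS = {
    0: "not a problem",
    1: "cosmetic",
    2: "minor",
    3: "major",
    4: "catastrophe",
}

findings = [
    {"cycle": "collaboration",
     "question": "Can the user locate the other user(s)?",
     "problem": "embodiments hard to find in large rooms",
     "severity": 3},
    {"cycle": "normal task action 3D",
     "question": "Is the consequence of the user's action visible?",
     "problem": "no visible feedback after a menu-driven object move",
     "severity": 4},
    {"cycle": "exploratory browsing",
     "question": "Can the user form a mental map of the explored environment?",
     "problem": "visually identical corridors",
     "severity": 2},
]

def prioritize(reports):
    """Order merged inspectors' reports with the worst problems first."""
    return sorted(reports, key=lambda f: f["severity"], reverse=True)

for f in prioritize(findings):
    print(SEVERITY_LABELS[f["severity"]], "-", f["problem"])
```

Ordering merged reports this way supports the time/benefit prioritization of interface fixes described in the surrounding text.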


Seven basic interaction categories were created: communicate, manipulate, navigate, position, scan, gesture, and external. These categories were developed in the following way. First, an observation of a recording of a representative CVE interaction session was made, noting down all single units of behavior. This list of observed behaviors was grouped and condensed into 26 categories. A focus group (Jordan, 1998) of six participants (consisting of psychologists, sociologists, and ethnographers) assessed the categories based on their professional expertise. These categories were subsequently tested on another recording of a representative CVE interaction session. The frequency and fit of the categories were analyzed, and the set was further reduced to seven basic categories. These seven categories were assessed by another focus group of six participants (consisting of VR researchers, evaluators, and developers) by applying them to another representative CVE interaction recording, followed by a lively discussion of the merits of the technique.

5 Results

5.1 Longitudinal Network Trial Results

We analyzed the results from the longitudinal questionnaire that was filled in by the participants after each network trial and interpreted their replies with regard to the social interactions that take place during collaboration, as described in subsection 4.1 and Table 3. Due to space constraints, we present only a summary of these results here; a longer discussion, including quotes from the questionnaires, can be found in Tromp (2001). It has to be noted that the participants in our experiments usually performed their tasks successfully. However, the quality of their collaboration experience was often endangered by the limitations of the technology and the frustrations these caused them. Confident, competent participants were seen to be reduced to shy, inhibited, and quietly angry 'collaborators.'

Verbal communication is hampered by breakups in the audio transmissions due to network congestion. Participants have problems making themselves understood, understanding other speakers, and following the flow of the discourse among each other. This is especially problematic when speakers are nonnative to the common language used during the collaboration. Participants can feel very uncomfortable because of this lack of understanding and have been observed to give up on the collaboration process. Participants have to speak much more clearly, slowly, and loudly than normal, which can be problematic, especially in shared offices.

Phatic communication, especially important between people who do not know each other well, is generally not well supported in CVEs, and participants frequently expressed feeling inhibited in their interactions due to the absence of means to express this type of communication. Typical problems were found with initiating, continuing, and ending interactions in a polite manner.

Spatial regulation is not as easy to adhere to in the CVE as in the real world, due to difficulties with navigation and fine-tuned positioning. Participants frequently and unintentionally obstructed each other's view with their virtual embodiments, and navigated through each other's embodiments, walls, and objects, which was sometimes perceived as rude. There was an indication that, even when a CVE room was a realistic replica of a real room, the virtual room was perceived as too small or the virtual embodiments as too large. This is possibly due to the small field of view.

Proxemic shifts were performed by both expert and novice participants; however, both types of users had problems navigating to where they wanted to be, especially when they all tried to view the same small object at the same time. This suggests that they are not aware of the precise dimensions of their own size and the effect of this in terms of distance to the other participants.

Coverbal behavior is important in negotiating turn-taking in conversations and interactions, but most of the commonly used actions are not available in CVEs. The lack of these cues made participants feel uncomfortable and isolated, and contributed to breakdowns in the communications.

Peripheral awareness in CVEs is limited to perception of movement in the field of view and any surround-sound cues picked up from the wider surroundings. Participants often panned their view 360° to update their knowledge of their surroundings, or felt the need to move backwards to increase their field of view. Peripheral awareness of the real environment while engaged in the CVE is minimal, because CVE users wear a headset to hear and speak to the other CVE participants. Peripheral awareness of the CVE while engaged in the real environment is limited to visual cues only when the headset is taken off. These limitations make switching attention between the two environments more difficult.

Building trust is a slow process in CVE interaction. Participants cannot easily establish whether inaction or the absence of a response from other participants should be attributed to a breakdown in their connectivity, a temporary absence of the participant from the CVE window, or a lack of interest in being engaged in the activity at hand. Especially when participants do not know each other well, or are not overly familiar with the CVE as a medium, they are quick to interpret a lack of reaction as a lack of respect for the social norms of interaction, and thus to question the trustworthiness of the nonresponding participant.

Reciprocity was difficult for participants to establish, mostly due to the small field of view, but also because it is hard to tell where precisely the other participant is looking. Typical problems arose with interacting with distant objects, which caused the actor to be out of the view of the participants near the object. Participants tried to overcome these problems by giving a running commentary on their actions to the other participants and describing their location to the others.

Indexicality is similarly difficult to achieve, as participants cannot see the direction of each other's gaze and might not be able to see both the pointing gesture and the object being pointed at in the same view. It was difficult for the participants to follow the directions of other users, and references had to be sufficiently detailed and slow for the other participants to be able to catch up, due to delays in network traffic between the geographically dispersed participants.

5.2 Inspection Results

The inspection revealed a large number of apparent problems with the interface that are applicable not only to the COVEN platforms but to CVE technologies in general. We categorized the problems according to their root technical cause and found that they could be broken down into three rough categories.

● System problems, including lack of functionality, performance, and display quality.
● Interface problems, concerning the actions of navigating and picking objects.
● Application-specific problems, concerning the actual actions and meaning of objects within the environment.

System problems are often not immediately apparent, but can result in less-than-optimal strategies having to be adopted in interface and environment design. For example, perhaps the most serious problem encountered was navigating through the doors in the environment. Although ostensibly an application problem (because the door is simply an object with a behavior), the problems were exacerbated by three factors. Firstly, opening the door caused the scene on the other side of the door to be loaded, which caused a short stall while the relevant information was loaded. Secondly, the script to control the door runs on a central server, so the remote clients have to send a signal, which delays the door opening. And, thirdly, synchronization problems between the client view and the collision detection (which also runs on a central server) meant that, although a participant might see an open door, they could not enter it because they had not yet received the correct response from the collision system. These kinds of lag and the resulting consistency problems have led to a large open research area, as addressed by Vaghi (2002).

Interface problems arise because of the nature of thedesktop display where input devices with few degrees offreedom (mouse and keyboard) were used to performsix-degree-of-freedom tasks and where there is a mix of3D with traditional 2D WIMP interface. This can causeproblems with interface modalities, such as there beingseveral control modes depending on which mouse andmodifier buttons are depressed, and the actual mappingof user motions into three-dimensional transformations.These are aspects of the interface that are plainly not

258 PRESENCE: VOLUME 12, NUMBER 3

obvious to a naïve user, and they are a continuous struggle for the expert user. Interface control problems thus tend to be pervasive across evaluations because they are constrained by the capabilities of the underlying VR toolkit. We classified interface problems into the following categories.

● Modality and mapping: There often were several control modes, depending on which mouse and modifier buttons are depressed, each with a potentially nonintuitive mapping to 3D.

● 2D disturbing 3D: 2D menus pop up over the CVE window, thus disrupting interaction in the 3D interface.

● Unsupported 3D actions: Many actions that could be performed directly inside the 3D space are relegated to 2D menus.

● Lack of 3D feedback: Actions performed on 2D menus that affect 3D objects often do not give recognizable feedback in the 3D environment.

● Two-object tasks: The 2.5D “drag and drop” mechanism is not supported in CVEs, making selection and manipulation an arduous task.

We believe that these interface problems have been neglected to date and deserve greater attention because they are common to all desktop-style user interfaces. Based on our findings, we predict the need for a large research effort into 3D selection, manipulation, and feedback; better support for 3D actions inside the CVE; and implementation of 2D menu interaction inside the CVE to increase visibility to all users. We summarize these research topics in subsection 6.3.

Application problems are more general problems with the participant’s understanding of the purpose of the application components. Making spaces and objects realistic endows them with affordances that cannot always be satisfied (Gaver, 1992; Wilson, Eastgate, & D’Cruz, 2002). There is a balance to be struck between making the objects realistic in appearance so that they may be recognized, and making functionality apparent to the user (Tromp & Fraser, 1998). Essentially, the conflict is over how to communicate to freely moving CVE participants which actions the objects in the environment afford, where and when those actions are available, and in what order and how to perform them. These application problems can be summarized as follows.

● The perceptual affordances of CVE design are insufficiently exploited, leaving users confused about the type of actions available.

● The sequential affordances of CVE design are insufficiently exploited, leaving users confused about the order in which actions should be performed.

● The narrative affordances of CVE design are insufficiently exploited, leaving users confused about the purpose of the environment and the objects in it.

Ideally, all possible affordances that objects could provide should be covered, even if it is just feedback about an inability to perform an action. The reality of the situation is that, within the size constraints of the environment or the time constraints to build it, not all affordances can be catered for. We perceived a need for more structure in the design of the interactions. The design notion of narrative affordances refers to the observed need for a narrative structure throughout the CVE space: a deliberately designed guidance for the sequence of tasks and object interactions (Tromp, 2001). Our final design document (COVEN, Del. 2.9) tries to take these issues into account. (See subsection 6.2.)

5.3 Observation Results

Based on our understanding of real-world collaboration, we expected to observe shifts in viewpoint between collaborators and shared objects, and communication about the task at hand. For the observational analysis reported here, we analyzed three separate sections: the beginning, middle, and end of a total collaboration session. Although this data set is small, the results are similar to those found with a larger data set reported by Tromp (2001). Our data set consists of 148 observations, recorded during a total of three separate minutes of interaction between six users, watching the CVE through the eyes of one user. Figure 8 and Table 5 provide an overview and breakdown of the frequencies with which the observed collaborative acts occurred.

Tromp et al. 259

Shifts in viewpoint, accomplished by fine-tuned navigation and scanning acts, occur with a frequency of 63.5%, whereas communication acts occur 31.8% of the time; the residue is made up of checks on windows external to the CVE window (4.1%) and gesturing (0.7%). It has to be noted that the interface allows for only one type of gesture (pointing), which is probably why the percentage of observed gesture acts is as low as it is.

We were interested in finding systematic relations between behaviors, and Table 6 shows the frequencies of transition from one behavior to another. The total (column T) indicates the number of adjacent code pairs for each category, and each cell indicates how often each transition occurred (for example, Communicate was followed by a Scan on six occasions). We compared and analyzed the beginning, middle, and end of a collaboration session (WhoDo Experiment I, 30-9-98). Each column (Communicate, Navigate, and so on) contains three subcolumns with the totals of each analyzed section (subcolumn 1: Begin; subcolumn 2: Middle; subcolumn 3: End). This allowed us to compare any differences in the frequencies of adjacent acts as a complete collaboration progresses over time.
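The adjacent-pair counting that underlies this kind of transition table can be sketched as follows; the session sequence here is invented for illustration and is not the actual experiment data:

```python
from collections import Counter

def transition_counts(codes):
    """Count how often each observed behavior is immediately
    followed by each other behavior in a coded sequence."""
    # zip pairs each code with its successor: (c0,c1), (c1,c2), ...
    return Counter(zip(codes, codes[1:]))

# invented example sequence of coded collaborative acts
session = ["Communicate", "Scan", "Navigate", "Scan",
           "Communicate", "Communicate", "Scan"]
pairs = transition_counts(session)
# pairs[("Communicate", "Scan")] == 2, pairs[("Scan", "Navigate")] == 1
```

Summing a row of such counts gives the T column; splitting the coded sequence into Begin, Middle, and End segments before counting gives the three subcolumns.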

The frequency with which a communication is followed by another communication increases as the collaboration session progresses. Navigation has a higher frequency during the beginning and middle parts of the collaboration than during the end. External operations occur mostly during the beginning stage of the collaboration (when the stability of the CVE application has to be checked). Furthermore, communications are sometimes followed by a scan, scanning often precedes navigation, and navigation is often followed by a scan. A scan is more likely to be followed by a communication during the beginning of the collaboration, and scanning is a frequent activity throughout the collaboration session, as are navigation and communication (although the high frequencies of the latter two are to be expected).

Scanning occurs throughout the meeting and has its highest frequency during the middle part of the collaboration. Communication acts during collaboration are supported by more than twice that number of meta-collaboration acts. Navigation often involves many fine-tuned positioning acts to achieve the most advantageous viewpoint for collaboration.

In total, 21 different types of navigation acts were observed. The two most frequently observed navigation acts are moving backwards to increase the field of view (25.9%) and moving forwards to make the collaboration circle smaller (16.7%); the remaining nineteen navigational acts account for the residue. Subjects can be seen trying to “back up” to increase their field of view, trying to encompass as many objects and subjects as possible in their view at the same time. Without collision detection, this results in them “falling out of the room,” where they back up to the point of going through a wall. With collision detection, it can be difficult to get all relevant items in one view. Indeed, one subject in the post-test interview complained that the room was “too small.” One common solution for this problem is to use an “out-of-body” view, but navigation in this mode is more complicated.

Nine different types of scanning were observed. The two most frequently observed scanning acts are scanning by turning the view to the left (27.5%) and scanning by turning the view to the right (32.5%). Subjects in the experiments can be seen repeatedly making a sweeping move from object to speaker, back to object, and so on, during collaboration. They can also be observed having trouble making this sweeping movement smoothly, repeatedly overshooting their goal.
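One possible remedy, offered here only as a hypothetical sketch (the damping rate and frame-time constants are assumptions, and no such damping existed in the platform under study), is to damp the sweep toward its target so the view settles without overshooting:

```python
import math

def damped_sweep(yaw, target, rate=8.0, dt=1.0 / 60.0):
    """One frame of exponential damping: the view moves a fixed
    fraction of the remaining angle, so it approaches the target
    asymptotically and never passes it."""
    alpha = 1.0 - math.exp(-rate * dt)
    return yaw + (target - yaw) * alpha

yaw = 0.0                      # current view yaw, degrees
for _ in range(120):           # two seconds at 60 frames per second
    yaw = damped_sweep(yaw, 90.0)
# yaw has closed on 90 degrees from below, without overshoot
```

Because the step size shrinks in proportion to the remaining angle, the sweep is fast when the target is far away and gentle near it, which is the opposite of the observed overshoot-and-correct pattern.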

Figure 8. WhoDo Experiment I, 30-9-98, Analysis 1.


Eleven different types of communication acts were observed. The two most frequently observed communication acts are communications about the task at hand (29.8%) and communications about the collaboration itself (21.3%). Slightly less than half (46.8%) of all observed communication acts are concerned with verifying having heard, seen, or understood, or having been heard, seen, or understood by the other participants. Of this type of communication act (to verify), the two most often observed were communication to verify having heard (14.9% of total communication acts) and communication to verify being personally present (8.5% of total communication acts). The residue of the total (2.1%) is made up of text chat communications, often used for double-checking something or exchanging social amenities. Thus, of all observed communications, only 29.8% are directly concerned with the collaboration task at hand, whereas 68.1% are directly concerned with keeping the collaboration going (meta-collaboration).

To summarize, the collaboration process was found to be insufficiently supported in the CVE used for the experiments, creating problems for the users when trying to build up and maintain an understanding of who does what, when, aimed at whom, with what results on what, and for whom. General patterns of CVE user collaboration acts lie in the realm of continuous small adjustments in viewpoint, triggered by the happenings in the shared space. Subjects in our experiments had three main problems:

● keeping the referenced shared object and other participants in the same view,

Table 5. Frequency Count for Observed Subcategory Acts on WhoDo Experiment I, 30-9-98

Subcategory  Frequency  Percentage  Cumulative percentage  Description
CC           24         16.2        16.2                   Communication. Audio.
CT           1          0.7        16.9                   Communication. Text chat in CVE.
CV           22         14.9        31.8                   Communication. Verifying events.
EG           6          4.1        35.8                   Checking information external to graphical (main CVE) window.
GP           1          0.7        36.5                   Gesture: point to person.
NP           54         36.5        73.0                   Navigate into position.
SS           40         27.0        100.0                  Scanning the environment for events.
Total        148        100.0
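The percentage and cumulative-percentage columns of Table 5 follow directly from the frequency counts; a minimal sketch of that derivation:

```python
# Observed frequency counts per subcategory code (from Table 5)
counts = {"CC": 24, "CT": 1, "CV": 22, "EG": 6,
          "GP": 1, "NP": 54, "SS": 40}
total = sum(counts.values())          # 148 observations in all

cumulative = 0.0
rows = []
for code, n in counts.items():
    pct = 100.0 * n / total           # share of all observed acts
    cumulative += pct                 # running total across rows
    rows.append((code, n, round(pct, 1), round(cumulative, 1)))
# rows reproduces the Percentage and Cumulative percentage columns,
# e.g. ("CC", 24, 16.2, 16.2) ... ("SS", 40, 27.0, 100.0)
```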

Table 6. The Frequencies of Sequences between Categories (WhoDo Experiment I, 30-9-98, Begin 15:09:00-15:09:59, Middle 15:28:00-15:28:50, End 15:43:00-15:43:42)

(Columns: Communicate, Navigate, Scan, Gesture, External, each with Begin, Middle, End subcolumns; T = row total.)

Communicate  3, 12, 19, 3, 2, 2, 2, 2, 1  (T = 46)
Navigate     3, 4, 11, 14, 4, 4, 7, 2, 1, 1  (T = 51)
Scan         2, 5, 5, 7, 4, 4, 8, 3, 1, 1  (T = 40)
Gesture      1  (T = 1)
External     1, 1, 1, 1, 1  (T = 6)


● identifying the referenced object and also the participant who is referring to the object, and

● monitoring the activities of the other participants while acting or navigating themselves (breakdown of peripheral awareness).

Many of these CVE user activities can be partly or fully automated, which we hypothesize will help to lower the cognitive overhead of controlling the virtual embodiment, so that users can get on with their collaborative task more effectively. Our design guidelines try to take these issues into account. (See subsection 6.2.)
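As a hypothetical illustration of such automation (a sketch of one possible approach, not a feature of the COVEN platform; the 2D geometry and the 60-degree field of view are simplifying assumptions), a client could automatically pull its camera back along the view axis until the shared object and the other embodiments all fall within the field of view:

```python
import math

def frame_points(points, cam_pos, view_dir, fov_deg=60.0):
    """Move the camera backwards along its (unit) view direction
    until every point of interest falls inside the field of view.
    2D for brevity; returns the new camera position."""
    half_fov = math.radians(fov_deg) / 2.0
    backoff = 0.0
    for px, py in points:
        # vector from camera to point
        dx, dy = px - cam_pos[0], py - cam_pos[1]
        # distance along the view axis and perpendicular offset
        along = dx * view_dir[0] + dy * view_dir[1]
        across = abs(-dx * view_dir[1] + dy * view_dir[0])
        # how far back must the camera move for this point's
        # angular offset to shrink to half the field of view?
        needed = across / math.tan(half_fov) - along
        backoff = max(backoff, needed)
    return (cam_pos[0] - backoff * view_dir[0],
            cam_pos[1] - backoff * view_dir[1])

# two collaborators flanking the camera's line of sight
new_cam = frame_points([(0.0, 5.0), (0.0, -5.0)], (0.0, 0.0), (1.0, 0.0))
```

This automates exactly the "backing up to widen the view" act that dominated the observed navigation, without the user steering through a wall to achieve it.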

5.4 Consumer Evaluation Results

One consumer evaluation involved a common user PC placed at a travel agency and used by a group of customers between the ages of 11 and 55, with differing backgrounds, thus testing the application in a typical usage situation. Usability data were collected afterwards by means of a questionnaire that focused on how the users experienced the application, and on whether they were satisfied with the innovative way of making holiday reservations after having visited its virtual model. Almost everybody in the subject group expressed a willingness to use such a system to choose and book holiday destinations. As the number of Internet users grows, the demand for such applications may grow, and travel agencies may be forced to change the way they interact with customers.

The second consumer evaluation involved an experimental task in a laboratory setting, based on a model of the total travel experience (from anticipating a journey, to the actual travel, to the memories afterwards), focusing on hypotheses about the added benefits of VR technology. Although only one experiment took place in this context, the results indicate that consumers greatly enjoyed using such an application, which provided them with an added “fun factor” in travel rehearsal. It also pointed out that, for certain tasks inherent in the total travel experience (such as itinerary planning), the application used in standalone mode is perfectly acceptable to users, and that for other tasks (such as organizing meetings in the CVE) they would be quite happy to use Internet-based connections. Generally, the subjects in this experiment asked for information about more holiday destinations in a rich, realistic, and detailed fashion, indicating a large market for applications such as our demonstrator.

6 Lessons Learned and Future Work

6.1 Cost/Benefit

Despite problems of applying established inspection methods, we found that inspections could be a quick way to improve and evaluate a CVE application. Without having performed our inspections on each subsequent iteration of the system, we would not have been able to perform our networked experiments in a smooth and controlled fashion, because too many bugs would have remained. We found that performing an inspection took one day, provided that the requirements specification was available to the inspector(s) and that the designer and programmer of the application were actively involved in the task together with the usability expert. Writing the inspection report takes another couple of days, depending on the number of differently focused reports requested (such as reports for the designers, usability engineers, managers, and researchers). The inspection reports provided us with a rich source of reference for redesign and discussion during the development phases. (See Table 7.)

The networked trials and longitudinal observations took place in three phases, and the beginning of each phase consisted of a considerable technical effort to establish a stable networking infrastructure. The network trials gave us important feedback about user adaptation over time, and helped us in our task of highlighting particular usability issues as being key. CVEs place a high demand on applications and network infrastructure, and we found that the networked trials themselves were very difficult and time consuming to run and needed careful orchestration.

The observations illustrate that studies based on time-and-motion concepts and methods can yield extremely valuable information, revealing factors that were previously unknown. The goal of this analysis was to collect quantifiable data with the added insights of qualitative data about collaboration. Although this could have been pursued using a combination of network traffic analysis and ethnographic observation, no quantifiable data would have been gathered about the nature and types of navigational and communication acts that users perform in a CVE. Adopting this approach allowed us to make inferences that build a greater understanding of the atomic acts involved in collaboration mediated by a CVE.

Our consumer evaluations were preliminary in that we tested the reception of our application, which was never intended as a finished end product but instead as a demonstrator of future technology. The evaluations indicated consumer interest in and acceptance of the technology, and a demand for more such applications from both customers and salespeople of travel information. This guided us to advise the EU in our final reports that commercializing CVEs for virtual travel information involves the need to develop new interaction paradigms for both the information providers and the customers.

6.2 Usability Guidelines

The COVEN usability guidelines documents (COVEN Del. 2.6, Del. 2.9) focused on objective means to specify the perceptual message that is conveyed by the CVE to the users. Our document describes a method with which a design team can decide which are the most essential human needs to support, and how to negotiate simplifications of representations and interaction. We attempted to provide such means by creating a functional breakdown of CVE usability requirements. These deliverables do not list all CVE usability requirements known to date, nor do they offer an exhaustive list of heuristics or guidelines for building CVEs for usability.

Although we agree that such information would be extremely useful (cf. Carr & England, 1995; Steed & Tromp, 1998; Hix et al., 1999), much more work is required before such a document can be written.

A general problem area that we found for CVE design was the flow of interaction in 3D space. Because of the freedom of navigation and interaction typical of CVEs, it is difficult to guide users through the CVE interface towards their goals. When functionality is not obvious, dealing with the interface creates an unnecessarily large cognitive overhead, slowing users down and frustrating them. Freedom of navigation and interaction makes it difficult to predict what actions users will take, and in what order they will perform these actions. Users have been shown to struggle with finding the right order in which to perform actions, with finding their way through the environment, and with navigating into precise positions (Kaur, 1998; Tromp & Snowdon, 1997). Simplifications of the representations due to performance constraints make it difficult for users to predict which operations are available and which are not, and users have been shown to struggle with the interface for these reasons (Kaur, 1998; Steed & Tromp, 1998). These usability problems have led us to formulate a CVE usability design method that uses the notion of narrative affordances, a design notion that can help to continually guide users through their CVE interactions (COVEN Del. 2.9; Tromp, 2001), together with a conceptual model of generic CVE space, which separates the CVE experience into architectural space, semantic space, and social space (Figure 9), and we formulated design guidelines to complement this functional breakdown. Combined, this method aims to assist CVE designers in specifying the perceptual message for the

Table 7. Overview of the Changes in Results of the Inspections

Cycle function                        Severity 4  Severity 3  Severity 2  Severity 1  Totals

Inspection 1 “Initial Demonstrator”   10          1           5           10          26
Inspection 2 “Online Demonstrator”    7           3           5           3           18
Inspection 3 “Final Demonstrator”     3           2           0           2           7


CVE users, based on the available knowledge of “best design” in the absence of more-specific guidelines.

6.3 Future Research Directions

The COVEN project has come to an end, but the usability research and development areas that were initiated during the project will continue. A summary of future research directions can be presented as follows.

● Performance of a meta-analysis of all types of usability problems found with single-user and multiuser VEs to date, providing a good start for the development of CVE design and evaluation heuristics.

● Development of a standard CVE inspection method. Further work is required to assess whether the six interacting cycles proposed here are exhaustive for the inspection of the complete repertoire of interactions available in CVEs.

● More explicit theory for the design of flow of interaction for CVEs, creating a structured overview of narrative affordances that will improve situational awareness and guide freely moving users, communicating the available interactions and the best order in which they can be performed.

● Experimentation with, and evaluation of, design solutions for smooth and flexible 3D selection, manipulation, and feedback; better support of 3D actions inside the CVE; and implementation of 2D menu interaction inside the CVE to increase visibility to all users.

6.4 Conclusions

The research presented here has generated four products that aim to improve the design and evaluation process of CVE technology.

● Observation method, which is generally applicable to CVE user observation and analysis of collaborative behaviors. The categories used for the analysis can be adjusted to refocus the topics under observation. It was shown how we developed and applied the method, and references are provided to more-detailed descriptions of how to use it.

● Inspection method, which is generally applicable to CVE evaluation. This method can be used during all design stages, assessing the usability of the design for each interactive element in the total task of a CVE user. We have shown how this method was developed to better support the nature of CVE tasks.

● Usability design method, which is generally applicable to CVE design. This can be used during the specification of the actual look and feel of the CVE spaces, CVE objects, and CVE interactions. It distills earlier COVEN work on usability guidelines and provides a method to generate a narrative that guides the user without constraining them unduly.

Figure 9. Functional layers of generic CVE.


● Hierarchical task analysis of collaboration, which is generally applicable to defining requirements specifications that should support collaborative behavioral needs.

Acknowledgments

This work has been funded by the European Commission, ACTS Project COVEN, N. AC040, and finalized during the IST Project VIEW of the Future, IST-2000-26089, the UK IRC project EQUATOR, funded by the EPSRC, and the IRMA project, funded by the European Commission, IMS Project 97007. Many people contributed to the work reported here. We cannot acknowledge all of them by name, but we would like to specifically thank Prof. Alistair Sutcliffe, Dr. Kulwinder Kaur Deol, Dr. Veronique Normand, Dr. Dave Lloyd, and the anonymous reviewers, whose helpful and insightful comments have strengthened the paper enormously.

All COVEN documentation mentioned in this paper, including detailed evaluation reports, manuals for method application, examples and results with COVEN applications, and design guideline documents, is available via the COVEN Web site: http://coven.lancs.ac.uk.

References

Argyle, M. (1969). Social interaction. London: Methuen & Co Ltd.

Baker, K., Greenberg, S., & Gutwin, C. (2001). Heuristic evaluation of groupware based on the mechanics of collaboration. Proceedings of the 8th IFIP Working Conference on Engineering for Human Computer Interaction (EHCI’01).

Bales, R. F. (1951). Interaction process analysis: A method for the study of small groups. Cambridge: Addison-Wesley Press.

Benford, S., Bowers, J., Fahlen, L. E., Greenhalgh, C., & Snowdon, D. (1995). User embodiment in collaborative virtual environments. Proceedings of CHI’95.

Carr, K., & England, R. (Eds.) (1995). Simulated and virtual realities: Elements of perception. London: Taylor & Francis Ltd.

Chester, J. P. (1998). Towards a human information society: People issues in the implementation of the EU Framework V Programme. Loughborough, UK: USINACTS, HUSAT Research Institute.

Eastgate, R. (2001). The structured development of virtual environments: Enhancing functionality and interactivity. Unpublished doctoral dissertation, University of Nottingham.

Fencott, C. (1999). Content and creativity in virtual environment design. Presented at Virtual Systems and Multimedia (VSMM’99), Scotland.

Finn, K., Sellen, A., & Wilbur, S. (Eds.) (1997). Video-mediated communication. New Jersey: Lawrence Erlbaum.

Fraser, M. (2000). Working with objects in collaborative virtual environments. Unpublished doctoral dissertation, University of Nottingham.

Frecon, E., Smith, G., Steed, A., Stenius, M., & Ståhl, O. (2000). An overview of the COVEN platform. Presence: Teleoperators and Virtual Environments, 10(1), 109–127.

Gabbard, J. L. (1997). A taxonomy of usability characteristics in virtual environments. Unpublished master’s thesis, Virginia Polytechnic Institute and State University. Available at http://www.theses.org.vt.htm.

Gaver, W. W. (1992). The affordances of media spaces for collaboration. Proceedings of CSCW’92, 17–24.

Gaver, W. W., Sellen, A., Heath, C., & Luff, P. (1993). One is not enough: Multiple views in a media space. Proceedings of INTERCHI’93, 335–341.

Goffman, E. (1967). Interaction ritual: Essays on face-to-face behavior. New York: Pantheon Books.

Greenhalgh, C. (1999). Large scale collaborative virtual environments. London: Springer-Verlag.

Greenhalgh, C., Bullock, A., Frecon, E., Lloyd, D., & Steed, A. (2000). Making networked virtual environments work. Presence: Teleoperators and Virtual Environments, 10(2), 142–159.

Groot, A. D. de (1969). Methodology: Foundations of inference in research in the behavioral sciences. The Hague: Mouton & Co.

Guye-Vuilleme, A., Capin, T., Pandzic, I., Thalmann, N., & Thalmann, D. (1998). Nonverbal communication interface for collaborative virtual environments. Proceedings of Collaborative Virtual Environments 98 (CVE’98), 105–112.

Heath, C., & Luff, P. (1991). Disembodied conduct: Communication through video in a multi-media office environment. Proceedings of CHI’91, 99–103.

Heath, C., Luff, P., & Sellen, A. (1995). Reconsidering the virtual workplace: Flexible support for collaborative activity. Proceedings of the European Conference on Computer Supported Cooperative Work (ECSCW’95), 83–99.

Helander, M., Landauer, T., & Prabhu, P. (Eds.) (1997). Handbook of human-computer interaction (2nd ed., pp. 705–715, 717–731). Amsterdam: Elsevier.

Hindmarsh, J. (1997). The interactional constitution of objects. Unpublished doctoral dissertation, University of Surrey.


Hindmarsh, J., Fraser, M., Heath, C., Benford, S., & Greenhalgh, C. (1998). Fragmented interaction: Establishing mutual orientation in virtual environments. Proceedings of CSCW’98, 217–226.

Hindmarsh, J., & Heath, C. (2000a). Sharing the tools of the trade: The interactional constitution of workplace objects. Journal of Contemporary Ethnography, 29(5), 523–562.

———. (2000b). Embodied reference: A study of deixis in workplace interaction. Journal of Pragmatics, 32, 1855–1878.

Hix, D., Swan, J. E., Gabbard, J. L., McGee, M., Durbin, J., & King, T. (1999). User-centered design and evaluation of a real-time battlefield visualization virtual environment. Proceedings of VR’99, 96–103.

Howard, S. (1997). Trade-off decision making in user interface design. Behavior and Information Technology, 16(2), 98–109.

Jordan, P. (1998). An introduction to usability. London: Philips Design, Taylor & Francis Ltd.

Kanuritch, N., Farrow, G., Pegman, G. J., Wilson, J. R., Cobb, S., Crosier, J., & Webb, P. (1997). A study of the safety and usability of teleoperated remote handling systems (Final report UKRL.97.023).

Kaur, K. (1998). Designing virtual environments for usability. Unpublished doctoral dissertation, City University London.

Kaur, K., Maiden, N., & Sutcliffe, A. (1997). Interacting with virtual environments: An evaluation of a model of interaction. Interacting with Computers, 11, 403–426.

Kaur, K., Sutcliffe, A., & Maiden, N. (1998). Improving interaction with virtual environments. The 3D Interface for the Information Worker: Proceedings of IEEE Half-Day Colloquium, 4/1–4/4.

Kendon, A., Harris, R. M., & Ritchie Key, M. (Eds.) (1975). Organization of behavior in face-to-face interaction. The Hague: Mouton Publishers.

Leary, M. (1983). Social anxiousness: The construct and its measurement. Journal of Personality Assessment, 47, 66–75.

Liew, S. Y. (2000). The reasons for avatars to be true representations of their users and the means of achieving this. Unpublished master’s thesis, University of Nottingham.

Lindgaard, G. (1992). Evaluating user interfaces in context: The ecological value of time-and-motion studies. Applied Ergonomics, 23(2), 105–114.

Lloyd, D. (1999). Formations: Explicit support for groups in collaborative virtual environments. Unpublished doctoral dissertation, University of Nottingham.

Markel, N. N. (1975). Coverbal behavior associated with conversational turns. In A. Kendon, R. M. Harris, & M. Ritchie Key (Eds.), Organization of behavior in face-to-face interaction. The Hague: Mouton Publishers.

Melchior, E. M., Bosser, T., Meder, S., Koch, A., & Schnitzler, F. (1995). Handbook for practical usability engineering in IE projects. Telematics Application Programme, Information Engineering section, Project ELPUB 105 10107.

Neale, H. (1997). A structured evaluation of virtual environments in special needs education. Unpublished bachelor’s honors dissertation, University of Nottingham.

Nielsen, J., & Mack, R. L. (1994). Usability inspection methods. New York: John Wiley and Sons.

Normand, V., Babski, C., Benford, S., Bullock, A., Carion, S., Farcet, N., Frecon, E., Harvey, J., Kuijpers, N., Magnenat-Thalmann, N., Raupp-Musse, S., Rodden, T., Slater, M., Smith, G., Steed, A., Thalmann, D., Tromp, J., Usoh, M., Van Liempd, G., & Kladias, N. (1999). The COVEN project: Exploring applicative, technical and usage dimensions of collaborative virtual environments. Presence: Teleoperators and Virtual Environments, 8(2), 218–236.

O’Malley, C., Langton, S., Anderson, A., Doherty-Sneddon, G., & Bruce, V. (1996). Comparison of face-to-face and video-mediated interaction. Interacting with Computers, 8(2), 177–192.

Parent, A. (1998). A virtual environment task analysis workbook for the creation and evaluation of virtual art exhibits. Technical Report NRC 41557 ERB-1056, National Research Council Canada.

Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: Beyond human-computer interaction (pp. 420–425). New York: John Wiley and Sons.

Reynard, G. (1998). Framework for awareness driven video quality of service in collaborative virtual environments. Unpublished doctoral dissertation, University of Nottingham.

Robertson, T. (1997). Cooperative work and lived cognition: A taxonomy of embodied actions. Proceedings of ECSCW’97, 205–220.

Rygol, M., Ghee, S., Naughton-Green, J., & Harvey, J. (1996). Technology for collaborative virtual environments. In D. Snowdon, E. Churchill, & J. Tromp (Eds.), CVE’96 Proceedings, 111–119.

Shackel, B. (2000). People and computers: Some recent highlights. Applied Ergonomics, 31, 595–608.

Slater, M., Howell, J., Steed, A., Pertaub, D.-P., Garau, M., & Springel, S. (2000). Acting in virtual reality. ACM Collaborative Virtual Environments, CVE2000, 103–110.

Steed, A., Frecon, E., Avatare-Nou, A., Pemberton, D., & Smith, G. (1999). The London travel demonstrator. Proceedings of the ACM Symposium on Virtual Reality Software and Technology ’99, 58–65.

Steed, A., & Tromp, J. G. (1998). Experiences with the evaluation of CVE applications. Proceedings of Collaborative Virtual Environments, 2nd CVE98 Conference, 123–127.

Sutcliffe, A. G. (1995). Human-computer interface design (2nd ed.). London: Macmillan.

Sutcliffe, A., & Kaur, K. (2000). Evaluating the usability of virtual reality user interfaces. Behavior and Information Technology, 19(6), 415–426.

Tromp, J. G. (1997). Methodology of distributed CVE evaluations. Proceedings of UK VR SIG 1997, 169–178.

——— (2001). Systematic usability design and evaluation for collaborative virtual environments. Unpublished doctoral dissertation, University of Nottingham.

Tromp, J. G., Istance, H., Hand, C., & Kaur, K. (Eds.) (1998). Proceedings of UEVE’98: The First International Workshop on Usability Evaluation of Virtual Environments.

Tromp, J. G., & Snowdon, D. (1997). Virtual body language: Providing appropriate user interface in collaborative virtual environments. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, 37–44.

Tromp, J., & Fraser, M. (1998). Designing flow of interaction for virtual environments. In J. Tromp, H. Istance, C. Hand, A. Steed, & K. Kaur (Eds.), Proceedings of the 1st International Workshop on Usability Evaluation for Virtual Environments, 162–170.

Vaghi, I. (2002). Augmenting the virtual. Unpublished doctoral dissertation, University of Nottingham.

Vicente, K. (1999). Cognitive work analysis: Toward safe, productive, and healthy computer-based work. London: Lawrence Erlbaum.

Walters, R. (1995). Computer-mediated communications: Multimedia applications. London: Artech House.

Wilson, J., Eastgate, R., & D’Cruz, M. (2002). Structured development of virtual environments. In K. Stanney (Ed.), Handbook of virtual environments: Design, implementation and applications. London: Lawrence Erlbaum Associates.
