


Advanced Engineering Informatics 26 (2012) 793–813

Contents lists available at SciVerse ScienceDirect

Advanced Engineering Informatics

journal homepage: www.elsevier.com/locate/aei

An approach to assessing virtual environments for synchronous and remote collaborative design

Michele Germani ⇑, Maura Mengoni, Margherita Peruzzini
Department of Industrial Engineering and Mathematical Sciences, Faculty of Engineering, Università Politecnica delle Marche, 60131 Via Brecce Bianche, Ancona, Italy


Article history:
Received 4 March 2011
Accepted 14 June 2012
Available online 31 July 2012

Keywords:
Collaborative design
Virtual environments
Metrics
Design review
Benchmarking method

1474-0346/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.aei.2012.06.003

⇑ Corresponding author. Tel.: +39 71 2204969/2204790; fax: +39 71 2204801.
E-mail address: [email protected] (M. Germani).
URL: http://www.univpm.it (M. Germani).

This paper considers applying novel Virtual Environments (VEs) in collaborative product design, focusing on reviewing activities. Companies are usually anchored to commercial ICT tools, which are mature and reliable. However, two main problems emerge: the difficulty in selecting the most suitable tools for specific purposes and the complexity in evaluating the impact that using technology has on design collaboration. The present work aims to face both aspects by proposing a structured benchmarking method based on expert judgements and defining a set of benchmarking weights based on experimental tests. The method considers both human–human interaction and teamwork-related aspects. A subsequent evaluation protocol considering both process efficiency and human–human interaction allows a closed-loop verification process. Pilot projects evaluate different technologies, and the benchmarking weights are verified and adjusted for more reliable system assessment. This paper focuses on synchronous and remote design review activities: three different tools have been compared according to expert judgements. The two best performing tools have been implemented as pilot projects within real industrial chains. Design collaboration has been assessed by considering both process performance and human–human interaction quality, and the benchmarking results have been validated by indicating some corrective actions. The final benchmarking weights can thus be further adopted for an agile system benchmark in synchronous and remote design. The main findings suggest defining both an innovative process to verify the expert benchmark reliability and a trustworthy benchmarking method to evaluate tools for synchronous and remote design without experimental testing. Furthermore, the proposed method has a general validity and can be properly set for different collaborative dimensions.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Market globalisation, short delivery times and the rapid evolution of customer requirements highly influence how the product design process must be performed. It is becoming increasingly important to consider different competencies in the early process phases, which implies organising the cooperative work of a geographically distributed team. Generally, team configuration dynamically changes based on specific objectives. To manage this flexible cooperation, a new approach called Collaborative Product Design (CPD) has been developed. In this case, people belong to a virtual design team. However, traditional design tools have not generally been conceived to support the collaborative teamwork in a distributed design space. New technologies have recently emerged that allow creating Virtual Design Environments (VDEs) to facilitate CPD through easy interaction and data sharing among all participants. Though several collaborative software applications



have been developed to connect people and share information and data, they are basically for general purposes. Indeed, virtual design teams have specific technological needs that depend on many variables: group size, task nature and objectives, media and communication channels, and the interaction level between participants. To satisfy these requirements and support collaboration from all perspectives, these factors must all be studied and widely analysed [9]. However, these various aspects are often separately investigated without considering their mutual effects on collaborative team performance. CPD tool performance measurements fail to thoroughly analyse the effects of adopted technology on human–human interaction, collective creativity and mutual engagement. Studies on collaboration processes instead tend to lose system-based measurements of achievable timesaving, product quality and task completion.

The present work aims to find a synthesis among these issues, and it is driven by the need to understand how systems can be validly assessed for collaborative design purposes before implementation. The research goal is to propose a structured method to compare Collaborative VDEs (CVDEs) using collaboration needs and verify the assessment quality in different use contexts. In



particular, the authors focus on synchronous and remote design review meetings, which currently represent the least supported collaborative product design phases [25].

This work starts by investigating multidisciplinary teamwork behaviour and human–human interaction mechanisms during design meetings. It then defines a method for benchmarking VDEs to identify which tools can optimise collaborative design activities. Moreover, it aims to test the validity of the proposed benchmark in the field using real system testing to provide a set of verified weights. The final goal is to achieve reliable assessment for remote and synchronous design activities without experimental sessions. The benchmarking validation is performed by a closed-loop process that implements two systems and measures the achieved performance according to process efficiency and teamwork performance in different use contexts. Process and system experts perform this evaluation task. A set of corrective weights can thus be defined to improve expert judgements. Such weights can be considered reliable enough for system assessment, as they are tested on different technologies and pilot studies, which differ in supply-chain organisation, company core business, and company size. Furthermore, the protocol metrics consider both the human–human interaction (e.g., communication) and cognitive aspects (e.g., product perception, design issue comprehension, mutual engagement and collective creativity). The wide range of aspects under assessment allows general-purpose results to be obtained. Two proper sets of heuristics and metrics are defined to benchmark and assess system performances. This paper illustrates part of a more extended research into Virtual Reality tool benchmarking and performance evaluation [24,11,12].

2. Supporting collaboration in product design

2.1. Collaborative product design analysis

Product design collaboration is "a problem-solving and goal-oriented process involving multidisciplinary teams in distributed, heterogeneous and dynamic environments. The process complexity requires managing team structure and roles, data evolution, decision-making activities, integration of different software solutions and representation modalities to perform specific tasks and, finally, knowledge formalization to preserve all generated information" [31]. This definition entails sharing data, knowledge, and concepts to efficiently achieve the process goal [8]. Many specialised participants, including individuals, teams or even entire organisations, perform design processes. Process actors can potentially provide additional value to improve the process, as they evaluate each choice from their own perspectives. They act towards mutual understanding and maximising outcomes by working on non-routine cognitive tasks and satisfying both their own goals and those of the other team members.

The literature has identified several collaboration styles. Barratt [4] considers collaboration in a single company and a more extended scenario, including customers, suppliers and departments of other organisations. In this context, cooperation implies different time and space combinations: different timing and location can characterise groupware activities. Timing depends on whether participants act at the same or different times, i.e., synchronous or asynchronous collaboration. Location depends on where the participants are geographically, whether in the same place (co-located collaboration) or at different sites (remote collaboration).

Four collaborative dimensions can be identified:

– Synchronous and co-located (e.g., face-to-face meetings).
– Synchronous and remote (e.g., videoconference meetings between different sites).
– Asynchronous and co-located (e.g., routine design activity of a team inside the same company).
– Asynchronous and remote (e.g., routine design activity of a team involving multiple companies at different geographical locations).
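The four dimensions above can be modelled as a small time–space lookup. The sketch below is illustrative only; the type names and example strings are ours, not the paper's:

```python
from enum import Enum

class Timing(Enum):
    SYNCHRONOUS = "synchronous"
    ASYNCHRONOUS = "asynchronous"

class Location(Enum):
    CO_LOCATED = "co-located"
    REMOTE = "remote"

# Example activities for each time-space combination, taken from the list above
EXAMPLES = {
    (Timing.SYNCHRONOUS, Location.CO_LOCATED): "face-to-face meeting",
    (Timing.SYNCHRONOUS, Location.REMOTE): "videoconference meeting",
    (Timing.ASYNCHRONOUS, Location.CO_LOCATED): "routine design activity, same company",
    (Timing.ASYNCHRONOUS, Location.REMOTE): "routine design activity, multiple sites",
}

def dimension(timing: Timing, location: Location) -> str:
    """Return a label such as 'synchronous and remote'."""
    return f"{timing.value} and {location.value}"
```

The paper concentrates on the `(SYNCHRONOUS, REMOTE)` cell of this matrix.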

A peculiarity of design collaboration is the need to integrate single "pieces of work" that are developed individually (i.e., tasks, decisions, analyses, product parts and subassemblies). Successful product design projects strongly rely on both the ability of the project leader to coordinate team participants and the mutual understanding of design viewpoints and decision sharing. Design process control is based on the knowledge of existing design situations, critical evaluation, and decision-making, according to the design objectives. It is generally undertaken during design review activities, where design outcomes are evaluated and decisions made to keep the project moving forward. Evaluation is the collaborative moment when team members elaborate a judgement by comparing the achieved solution with the design requirements and formulate possible changes.

Numerous design review meetings (DRs) generally occur during the entire design process to evaluate the product features. The number and occurrence, undertaken activities, representation modalities, supporting tools and team member skills vary according to the design stage to which they belong. Different DR forms can be recognised according to the process stage, design goals and outcomes, competencies involved, and exploited product representations (e.g., conceptual DRs and engineering or detailed DRs). However, collaboration is more crucial and chaotic during the conceptual design stages: many aspects must be simultaneously evaluated (e.g., aesthetics, functionality, ergonomics, cost, and product feasibility), multiple competencies are involved and must operate in the same space (e.g., decision-making staff, marketing operators, stylists, and technical managers) and different product representations are used, both intuitive and technical (e.g., sketches, renders and rough physical prototypes, 2D drawings, functional schemes, 3D models, and virtual mock-ups). Furthermore, conceptual DRs use shared visualisations characterised by on-demand model sections, additional product model views and mark-ups. Numerous human–human interaction issues also emerge, as collaboration is generally synchronous (co-located or remote).

2.2. Tools for remote and synchronous collaborative design

The present research focuses on remote and synchronous design collaboration. This approach has proven particularly crucial because participants interact in real time, discuss intangible data and abstract ideas, and may even have different backgrounds. Conceptual remote collaboration is then exceptionally challenging due to the physical distance among all design team participants and the necessity of sharing a common virtual design space. Mediating technologies and tools are thus required to support and enhance design activities.

Multiple technologies, frequently called groupware tools or, more specifically, Computer-Supported Cooperative Work (CSCW) applications, have been developed to support synchronous and remote design. They range from interactive multi-user devices (e.g., electronic whiteboards and tools for sharing desktop applications) to Internet-based technologies to connect people (e.g., web conferencing tools, instant messaging, and interactive whiteboards). The functionalities vary from data visualisation to 3D model representation, real-time rendering, product model mark-up, audio–video communication support, and Web 2.0 services



[16]. They can be classified according to either the above-cited time–space domains, considering how collaboration occurs, or the main ICT technologies (e.g., web-, CAD-, VR-, or PLM-based).

To support synchronous and remote collaboration, a VDE mustsatisfy the following aspects:

– To provide basic inspection and modelling functions to analyse the product thoroughly (i.e., rotate and manipulate CAD models, zoom, measure specific model items, add or delete some parts) according to CAD-based tools.

– To realise real-time collaboration to create a common workspace by exploiting a client–server approach (i.e., shared visualisation, event synchronisation) (see [21]).

– To organise, collect, retrieve and share information and data properly (i.e., activity planning and workflows), managing team structure and roles according to a PLM approach.

– To support product evaluation by adopting multiple product representations (i.e., functional product views and interactive Digital Mock-ups (DMUs)) and integrating specific software simulation toolkits.

– To promote and support decision-making and creative design(i.e., brainstorming and proposing) by adopting Web 2.0 tools.

– To allow efficient interaction with different product/process representations and involve team participants in product models with interaction styles like physical prototyping by exploiting recent VR-based technologies and devices to enhance interaction and involvement [17]. Some examples are represented by multimodal VR Labs [2], volumetric and holographic displays [14], tangible tabletops based on augmented reality (AR) technologies [20], and mixed reality (MR) interactive prototypes [29].

Merging these issues in a unique VDE is not a trivial task. On the one hand, CAD-based functions are limited in multi-user applications and provide weak interaction among people [32]. On the other hand, VR-based functions enhance interacting with products by involving multiple sensorial channels (e.g., visualisation displays, haptic devices, sound systems and motion technologies). However, the remote implementation of these VR-based functions for geographically distributed teams is difficult due to physical distance and technological barriers. To overcome such limitations, web-based system platforms that combine CAD- and PLM-based functionalities or networked VR-based environments (NVEs) have been developed in different ways to create distributed workspaces that can remotely visualise mock-ups and connect geographically separated members [22,19,20].

In the present research, the authors consider three different tools belonging to different technological categories:

– Virtual spaces where virtual actors (avatars) reproduce team members to achieve high presence, real-time communication and remote collaboration. Avatars replace physical people as actors living in the shared environment, supporting communication among distributed team members. Examples of some commercial systems include Sun Microsystems' Wonderland platform [38], Linden Labs' Second Life virtual world environment [34], and the Croquet Consortium's Croquet virtual world platform [10].

– AR-based collaborative workspaces, where avatars are located at the same place as real actors. These combine traditional videoconferencing and tele-presence systems (for examples, see [26], Stadon (2009), and [18]).

– Web-based interoperable collaborative platforms, which are based on a common workspace with a client–server approach to simultaneously exploit web facilities (e.g., multi-user data sharing and shared visualisation) and some advanced manipulation and modelling functions characterising traditional CAD-based systems (e.g., measuring, explosion, and mark-up). Some recent examples satisfy specific design purposes in distributed enterprises [37,39,5].

Despite the recent development of such tools, they still remain at a research level. The main limitations concerning their effective adoption in industry include difficulties in identifying the proper technology for specific collaboration needs and integrating new devices and tools within the existing design context, use complexity, lack of capability to support "total" decisions [35], lack of field trials for real design activities supported by experimentation with final users, high cost, and low usability according to users' satisfaction [13]. Furthermore, when experts help companies select tools and evaluate the potential benefits, the results are always questionable because there is no valuable feedback with real-world applications, as a recent research survey demonstrated [30].

3. Method to assess VDEs for collaborative design meetings

A structured method applying the QFD (Quality Function Deployment) technique [1] has been defined to benchmark different VDEs. It has three main objectives:

– to provide experts with a set of objective indicators that can match co-design requirements with tool functionalities and human cooperation performance, as well as a structured method to evaluate collaborative processes in different collaborative dimensions;

– to evaluate co-design activities in IT-supported environments when design teams have already adopted and used tools (in this case, the proposed benchmarking method is applied, and experimentation can help define a set of corrective weights and define reliable benchmarking for a specific collaborative dimension); and

– to select the most appropriate tools for specific co-design needs when no tools are adopted and a company needs help choosing and using new tools for team collaboration (in this case, a previously defined, reliable benchmarking method can be adopted for a comparative system evaluation).

The process of defining a reliable benchmarking method contains three main phases: defining benchmark matrices, evaluating tool performances, and validating benchmarking weights. Benchmark matrices are more reliable after validation and can be used to quickly and easily benchmark co-design tools for a specific collaborative situation (i.e., synchronous or remote). The corrected benchmarking matrices can also be reused for tool assessment independent of any experimental sessions and testing on pilot studies.

The overall process to define a benchmarking method for evaluating VDEs can be organised into three phases and seven steps (Fig. 1).

(1) Benchmarking the best tools to support co-design activities: this allows defining a set of estimates to compare different technologies according to design activity requirements and human–human interaction needs using a set of correlated matrices.

Step 1 – Defining a metrics-based protocol to assess human collaboration: a set of metrics (collaboration metrics) is defined considering different human–human collaboration aspects. They can objectify human collaboration during DRs. They consider both teamwork features and interaction behaviour (e.g., human perception and cognitive processes).


Fig. 1. Steps of method to evaluate collaborative design tools.


Step 2 – Estimating the benchmarking weights: experts define the prevalent collaboration form in the time–space domain (among those classified in Section 2.1) for the specific use context and industrial context indicators (e.g., product complexity, number of employees, ICT diffusion, and users' technological skills) to define a valid range of values for each protocol's metrics (Matrix 1.1, Fig. 1). Experts then estimate the protocol metrics and define a benchmarking weight for each metric, according to the suggested ranges for the specific collaborative dimension (Matrix 1, Fig. 1).

Step 3 – Comparing different tools: protocol analysis is used to examine humans at work using design practices, behaviour and communication. The protocol is based on observing users at work and contains data collection, segmentation, coding and analysis. Experts thus investigate system functionalities (considering visualisation, manipulation, data sharing, workflow management, and project management) and relate them to the collaboration metrics to define a satisfaction level for each tested tool (Matrix 2.1, Fig. 1). Weights are then used to properly link importance to the technological performance of different tools relative to the collaboration needs (Matrix 2, Fig. 1). A benchmarking value is calculated for each tested tool, and the highest value indicates which technology better satisfies the specific collaborative dimension.
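As a rough illustration of how a benchmarking value could be computed in this step, the sketch below assumes a QFD-style weighted sum over collaboration metrics; the metric names, weight values and satisfaction scores are invented for the example and are not taken from the paper's matrices:

```python
def benchmarking_value(weights, satisfaction):
    """Benchmark value for one tool: sum over metrics of
    weight_i * satisfaction_i (the combination of Matrices 1 and 2.1)."""
    return sum(weights[m] * satisfaction[m] for m in weights)

# Hypothetical expert weights for three collaboration metrics (Matrix 1)
weights = {"communication": 0.4, "involvement": 0.35, "data_sharing": 0.25}

# Hypothetical expert satisfaction levels per tool on a 1-5 scale (Matrix 2.1)
tools = {
    "tool_A": {"communication": 4, "involvement": 3, "data_sharing": 5},
    "tool_B": {"communication": 3, "involvement": 5, "data_sharing": 4},
}

scores = {name: benchmarking_value(weights, sat) for name, sat in tools.items()}
best = max(scores, key=scores.get)  # highest value -> best-suited tool
```

The highest-scoring tool is the one the expert benchmark would recommend for the chosen collaborative dimension.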

(2) Evaluating tool performance: this allows a certain collaboration to be experimentally investigated, the preliminary benchmarking method to be verified and reliable weights on pilot studies to be defined. It is based on implementing several CVDEs to be compared, defining a few significant case studies taken from the industry, and measuring a set of parameters to objectify participants' teamwork behaviour and human–human interaction quality. In this case, protocol analysis is used to study humans at work through their design practices, actors' behaviour, and communication. It allows easily and valuably collecting data regarding DR meetings (e.g., discussion items, design tasks, and prototype features).

Step 4 – Defining the evaluation metrics: a set of heuristics and metrics is defined as evaluation parameters related to process performance and human–human interaction. They allow measuring process efficiency and objectifying important aspects of the collaborative process by investigating how the exploited technology supports collaboration.

Step 5 – Measuring performances: experts perform the final analysis by measuring the defined protocol metrics. The above protocol is applied to DR meetings by involving middle-sized design teams in real case studies. Monitored teams belong to different supply chains characterised by different structures to vary the industrial context under investigation. Different technologies providing different functions support DR sessions. Metrics are measured in different use contexts and compared (Matrix 3, Fig. 1). This approach allows understanding whether and how the adopted tools enhance process efficiency and human collaboration.

(3) Validating the benchmarking method: this aims to relate the preliminary benchmarking weights to objective evaluation data collected during the experimental session to provide a robust benchmarking method by identifying corrective weights when necessary. It allows CVDEs to be assessed without repeating any experimental pilot study, which is the predominant advantage of this paper.



Step 6 – Calculating global evaluation data: results from test case experimentation are elaborated considering different use contexts. Average values are calculated and properly normalised to obtain a set of evaluators for the metrics. They are globally representative of real collaborations.
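A minimal sketch of this averaging step follows. The paper only states that values are "properly normalised", so the rule below (dividing each metric's mean by its peak observed value) is our assumption, and the function name is illustrative:

```python
def global_evaluators(measurements):
    """Compute one normalised evaluator per metric.

    measurements: {metric_name: [values measured in different use contexts]}.
    Each evaluator is the cross-context mean divided by the peak observed
    value, so it falls in (0, 1] and is comparable across metrics.
    """
    evaluators = {}
    for metric, values in measurements.items():
        mean = sum(values) / len(values)
        peak = max(values) or 1.0  # guard against an all-zero metric
        evaluators[metric] = mean / peak
    return evaluators

# Hypothetical measurements from two use contexts
ev = global_evaluators({"task_time": [10.0, 14.0], "errors": [2.0, 4.0]})
```

The resulting values stand in for the "globally representative" evaluators fed into the validation step.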

Step 7 – Validating the benchmarking weights: after monitoring the DRs, experts define a correlation matrix (Matrix 4, Fig. 1) wherein evaluation metrics are coupled with the previous collaboration metrics to define their relationships and directions (e.g., +, o, −). The relation definitions are directly based on DR analysis. It is simple for human interaction metrics, as metrics are common; performance metrics, conversely, depend on multiple factors, and relations are expressed using qualitative values. By merging correlation values and objective data from experimentation, it is possible to verify the preliminary benchmarking set of data.
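One plausible way to merge the qualitative directions with the experimental evaluators is sketched below. The paper does not fully specify the merging rule, so the numeric mapping of +/o/−, the multiplicative correction, and the damping factor `alpha` are all assumptions made for illustration:

```python
# Assumed numeric mapping for the qualitative directions in Matrix 4
DIRECTION = {"+": 1.0, "o": 0.0, "-": -1.0}

def corrected_weight(expert_weight, relations, evaluators, alpha=0.2):
    """Adjust one expert benchmarking weight using experimental data.

    relations:  {evaluation_metric: '+', 'o' or '-'} for one collaboration
                metric (one column of Matrix 4).
    evaluators: normalised experimental values from Step 6.
    alpha:      bounds the size of the correction (our choice).
    """
    signal = sum(DIRECTION[d] * evaluators[m] for m, d in relations.items())
    signal /= max(len(relations), 1)  # average over the related metrics
    return expert_weight * (1.0 + alpha * signal)

# Hypothetical example: a weight of 0.4, one negative and one positive relation
w = corrected_weight(0.4,
                     {"task_time": "-", "satisfaction": "+"},
                     {"task_time": 0.5, "satisfaction": 0.9})
```

A neutral ("o") relation leaves the weight unchanged, matching the intent that corrections apply only "when necessary".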

After method validation, the proposed benchmarking method (Stage 1) can be used to identify the most suitable tool for the specific use context according to design activity requirements and human–human interaction needs. The corrected benchmark strategy is the final output of the whole process and can be used for benchmarking co-design systems without any further experimentation.

The main method features are described in more detail below.

3.1. The metrics-based protocol to benchmark CVDEs

By observing product design teams at work, the authors have recognised that both DR types (described in Section 2.1) are mostly performed through four main tasks:

(A) to identify real and potential at-risk areas for every project aspect;

(B) to establish the risk level and prioritise a solution for eachrisk;

(C) to recognise the influence of every aspect on the final result;and

(D) to suggest possible solutions and formulate a shared alternative.

All tasks contribute to the design process coordination and control. Accomplishing them requires active interaction, direct communication, and shared understanding among all team members. The goal achievement details vary according to the stage to which the DR belongs.

To investigate human–human interaction during co-design, design collaboration can be modelled as a continuous shift from individual to team dimensions [27] when a design meeting occurs. Norman's cognitive model of interaction [28] is adopted to represent collaboration in the team dimension: it contains seven action stages and involves the explicit modelling of exploratory and reactive behaviours. The study of such a human–human interaction model allows defining a set of collaborative design heuristics and relative metrics to measure tool performance and collaboration quality. The proposed method thus focuses more on interaction mechanisms than on system performance, which many other methods emphasise.

Protocol heuristics consider the most meaningful cognitive actions and design contents in DR activities, while metrics indicate how to estimate each aspect qualitatively and quantitatively. The recognised heuristics are Teamwork, Communication, Human Involvement and Cognitive Reaction. The corresponding metrics are identified according to the scope of analysis and are not limited to the adopted experimental set-up, thus allowing the impacts of different representation media on human behaviour to be compared. Metric definition derives from numerous human interaction studies, as reported below, and from previous research work that has explored some of the metrics [25]. Moreover, the metrics have been customised to investigate moment-to-moment interaction during synchronous and co-located DR sessions.

Teamwork-related metrics are cross-related to the other classes, as these metrics refer to the design team's behaviour and characteristics. They thus characterise longer activity stages and usually encompass different metrics for communication, human involvement and cognitive reaction (e.g., participants' skills are usually valid for the entire session, whereas numerous verbal and non-verbal communication styles can be registered within a single decision-making stage). Table 1 provides a list of control categories, metrics and measurement units. Teamwork metrics are highlighted in Table 1.

The decision-making process stages measure the distribution of design activity during DRs. The stages derive from Norman's model [28]:

– Presenting design tasks, issues and problems.
– Discussing the proposed subjects.
– Solving problems and proposing different scenarios.
– Evaluating the proposed solution and analysing its advantages and disadvantages.

The first stage regards describing the design goals, the main outcomes and the main problems that occurred. Different communication media can support the presentation stage, from sketches to virtual models, or from conceptual diagrams to technical documentation. The second stage refers to discussion among all participants after the presentation. The third stage is the problem-solving process that defines the main actions to be executed to improve the design. The fourth stage refers to the evaluation phase, where participants interpret the presented "world", formulate questions to improve their understanding of the design outcomes and ask for product design improvements and modifications. Each actor supports their ideas and tries to find a compromise between conflicting viewpoints. Analysing the stage distribution highlights the influence of the adopted interaction medium. Decision-making stages usually require a certain amount of time (10 min or more).

Product design content represents the different perspectives that can be adopted to analyse the design outcomes. During conceptual design, numerous aspects must be evaluated to define a product that satisfies all consumer requirements: the influence of the product on the market (M), the aesthetic impact on consumers' emotions and affection (A), the technical feasibility (e.g., manufacturing, maintenance) (T), and ergonomics, which considers both usability and safety (U). This observation identifies which design aspects are stressed during the above-mentioned collaborative cycle stages. A certain design content usually occupies a certain amount of time during the discussion.

Participants' skill measures team members' experience in adopting different representational means (e.g., technical documents, CAD models, DMUs (digital mock-ups), conceptual diagrams). A five-point scale is used to measure the skill level (one means Low, five means Excellent). Participants' skill can be assessed through the ease of use of the system and the learnability of the exploited tools, as a good system requires a low user skill level to be used correctly. Participants' skill usually characterises the entire design session if the members remain the same and the IT environment does not change.

Interaction style indicates the preferred modalities used to interact with the adopted product models, either sketched on paper or represented by a three-dimensional object, e.g., physical or virtual, conceptual or detailed. Three main interaction modalities are thus identified [7]:

– referring to the product model, which happens whenever participants discuss or indicate specific product features or characteristics;

– interacting with the model, as when handling the object, disassembling different parts, or touching them; and


Table 1
Protocol heuristics and related metrics.

Teamwork
1 - Stages of decision-making process (DM). Measures the distribution of the design activity during DRs and the communication stages used by participants. Levels: P = Presentation, D = Discussion, S = Solving, E = Evaluation. Unit: Time (min).
2 - Product design content (DC). Refers to the perspectives adopted by team actors during the design activity. Levels: U = Usability and ergonomics, M = Market and trends, A = Aesthetic features, T = Technical–technological features, F = Functionality. Unit: Time (min).
3 - Participant skill (PS). Measures actors' experience and confidence in adopting the representational tools. Levels: (1) Low, (2) Poor, (3) Medium, (4) Good, (5) Excellent. Unit: 5-point Likert scale.

Communication
4 - Interaction style (IS). Measures the modality preferred by actors for interacting with the model; three different levels of action can be identified. Levels: (1) referring to the model (talking about, pointing out, etc.), (2) interacting with the model (manipulating, touching, etc.), (3) simulating actions on the model (uses, etc.). Unit: Occurrences (%).
5 - Verbal communication style (VC). Refers to the characteristics of the verbal communication used by actors. Levels: (1) Referential language, (2) Highly descriptive language, (3) Emotional language, (4) Reflective-introspective language. Unit: Occurrences (%).
6 - Non-verbal communication style (NC). Refers to the characteristics of the non-verbal communication used by actors. Levels: (1) Poorly gesture-marked communication, (2) Highly gesture-marked communication (expressing feelings), (3) Graphical-marked communication (writing, sketching, diagramming, etc.). Unit: Occurrences (%).

Human involvement
7 - Mutual engagement (ME). Measures the modality preferred by actors for interacting with the supporting environment, focusing on the involvement dimension. Levels: (1) Spatial dimension (shared spaces and objects), (2) Temporal dimension (shared temporary structures), (3) Conceptual dimension (shared ideas and abstract items). Unit: Occurrences (%).
8 - Collective creativity (CC). Measures the actors' ability to create new ideas and the modality of merging different creative ideas inside the team. Levels: (1) Help seeking, (2) Help giving, (3) Reflective reframing, (4) Reinforcing. Unit: Occurrences (%).
9 - Stimulating imagination (SI). Measures the actors' ability to be imaginative and to forecast solutions and/or problems. Levels: (1) Low: participants are strongly linked to physical objects; (2) Poor: participants partially refer to 3D digital models; (3) Medium: participants refer to the virtual prototype; (4) Good: participants are able to directly extract features from the virtual prototype and talk about them; (5) Excellent: participants are able to imagine new concepts, solutions and connected problems without asking for physical objects. Unit: 5-point Likert scale.

Cognitive reaction
10 - Cognitive reaction to model (CR). Measures the participants' cognitive reaction to the adopted product design model. Levels: (1) Actions before modelling, (2) Actions during modelling, (3) Actions after modelling. Unit: Occurrences (%).
11 - Cognitive perception of the model (CP). Measures the actors' cognitive perception of the model's elements. Levels: (1) Attention to model elements, (2) Attention to relations between model elements, (3) Attention to the locations of the model elements. Unit: Occurrences (%).
12 - Cognitive decision-making (CD). Measures the actors' decision-making process from a cognitive point of view. Levels: (1) Elaboration of a new solution, (2) Application of a previously developed solution. Unit: Occurrences (%).
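Most of the metrics in Table 1 are measured as occurrence percentages over an observed session. As an illustration only (the paper does not prescribe an implementation), the tallying could be sketched as follows; the event codes such as "IS1" are hypothetical labels for the metric levels:

```python
from collections import Counter

# Hypothetical coded event log from one observed DR session: each entry
# tags a single observed act with a metric level from Table 1, e.g.
# "IS1" = referring to the model, "IS2" = interacting with the model,
# "VC1" = referential language, "VC3" = emotional language.
events = ["IS1", "IS1", "IS2", "VC1", "VC3", "IS1", "VC1"]

def occurrence_shares(log, prefix):
    """Percentage distribution of one metric's levels (units: Occurrences (%))."""
    hits = [e for e in log if e.startswith(prefix)]
    counts = Counter(hits)
    return {level: 100.0 * n / len(hits) for level, n in counts.items()}

print(occurrence_shares(events, "IS"))  # {'IS1': 75.0, 'IS2': 25.0}
```

Time-based metrics (DM, DC) would instead accumulate the duration of each coded stage rather than counting occurrences.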

M. Germani et al. / Advanced Engineering Informatics 26 (2012) 793–813


– simulating actions on the model, e.g., emulating actions with hands or gestures without directly referring to the model.

This observation reveals the preferred way to communicate with and through virtual objects, which depends on the performed tasks, personal attitudes and supporting tools.

Verbal communication style indicates the forms of verbal communication adopted to refer to product models during interaction. The proposed communication styles are thus [3]:

– referential language, usually precise and smart, which refers to features of either physical or conceptual objects;

– highly descriptive language, which is generally richer and focuses on the physical features of the object;

– emotional language, characterised by emotional and empathetic expressions referring to personal feelings; and

– reflective-introspective language, which comprises evocative and reflective expressions.

Non-verbal communication style investigates the communication forms adopted beyond verbal speech to refer to product models during interaction and to express personal ideas or feelings. Non-verbal communication can assume three different levels [3]:

– poorly gesture-marked communication, where the main communication channel is verbal (e.g., monotone voices and absence of utterance);

– highly gesture-marked communication, which is rich in non-verbal communication forms (e.g., various tones of voice, facial expressions, and hand gestures); and

– graphical-marked communication, when speech is accompanied by writing or sketching.

By analysing the communication style (verbal and non-verbal), it is possible to understand each participant's personal approach to co-design tasks and individual relationship with virtual entities.

Mutual engagement indicates the preferred modalities for interacting with the supporting environment. To objectify participants' emotions during task execution and object evaluation, different levels have been identified [6]:

– spatial dimension, where importance is given to the physical orientation that maintains a shared interaction space during co-design tasks;

– temporal dimension, characterised by idea turnover and personal contribution sequences; and

– conceptual dimension, based on exchanging ideas and product concepts among all participants, so mental and cognitive processes mostly support interaction.

This analysis helps elaborate the personal involvement dimension and the influence of the adopted communication media. It also represents an indirect measurement of the "social" presence within the virtual environment and shared product space. Indeed, social presence can be seen as a sense of being there and acting together as a team, and mutual engagement is strongly related to it.

Collective creativity measures the participants' ability to create new ideas and solutions by generating collective cognition. According to a proposed collective creativity model [15], four different types of social interaction are considered:

– help seeking, which can be seen as the set of actions used to induce others to join efforts for problem-solving;

– help giving, which involves answering help-seeking processes and plays an important role in enabling collective creativity moments;

– reflective reframing, when social interaction participants make new sense of what they already know; and

– reinforcing, which supports individuals as they engage in help seeking, help giving and reflective reframing; collective creativity thus emerges.

This indicates how tools support creativity and idea generation.

Stimulating imagination helps measure the adopted media's ability to make participants imaginative and to encourage them to find solutions and problems inside the workgroup. The proposed levels include the following:

– low: actors are strongly linked to physical objects;
– poor: actors partially refer to virtual models;
– medium: actors refer to virtual models and do not need physical aids;
– good: actors can extract features from the model, discuss them and generate ideas from them; and
– excellent: actors can imagine new concepts, solutions, or problems without a reference model (either physical or virtual).

A 5-point scale is used to assess the supported imagination level.

The Cognitive reaction to model investigates how participants interact with models. It measures the participants' cognitive reaction to the model through their actions. Actions refer to physical operations on virtual objects (e.g., selecting, placing and relocating objects in a scene, assembly explosion, sectioning or measuring, and mark-ups). The following behaviours [23] can emerge:

– Actions before modelling: the reaction is not independent of the modelling process but can guide it.

– Actions during modelling: the reaction happens during modelling, and the two processes (acting and modelling) are parallel.

– Actions after modelling: the cognitive reaction follows the modelling process and depends on it.

The Cognitive perception of the model refers to the modalities used to perceive external representations and obtain useful cues to consider functional, technical, and aesthetic issues [33]. It measures the cognitive perception of model elements to investigate the focus of attention:

– Attention to model elements.
– Attention to relations among model elements.
– Attention to the locations of the model elements.

Cognitive decision-making investigates how the adopted interfaces stimulate problem-solving processes. Two different behaviours can be registered [36]:

– Elaborating a new solution; and
– Applying a previously developed solution (already known).

While teamwork metrics require a certain amount of time, the other metrics are usually specific and concern a limited period of time (even just a sentence or conversation chunk).

3.2. CVDE benchmarking

The protocol application for benchmarking purposes contains two main steps (described above) related to the matrices in Fig. 2:


Benchmarking weight estimation focuses on defining a set of weights expressing the importance of all specific heuristics and metrics for the different collaborative dimensions (i.e., synchronous, asynchronous, remote, co-located), considering the specific design context. The weight definition is supported by guidelines that depend on the characteristics of the analysed industrial chain, leader companies and products. Weights are a sort of desiderata for all metrics, and their values depend on the context. Guidelines are defined by estimating a set of industrial context indicators (e.g., product complexity, number of employees, ICT diffusion, users' technological skills). For the different industrial dimensions, Matrix 1.1 provides a valuable range of values (WRi) for the protocol metrics. Three HCI (Human–Computer Interaction) experts and two industrial product design experts estimate the specific weights (wi) within this range (WRi) according to their own experience and comprehension of the context. The weighting values are defined by translating the assessed limitations of traditional ICT into the level necessary to obtain a successful co-design process. Experts fill in Matrix 1 (Fig. 2) by assigning expected values to the metrics for the different collaborative dimensions and design contexts, adopting a five-point scale (from 1 to 5). For each metric, the average weighting value wij is calculated as in Eq. (1):

w_ij = (Σ n_ij) / k,  (1)

where n is the five-point judgement expressed by an expert (the sum runs over the judgements of all experts), i indicates the metric, j indicates the collaborative dimension considered, and k is the number of experts involved. For each collaborative dimension, the final values in Matrix 1 represent the sets of weights (w_ij) that express the metrics' importance for a specific design context (Fig. 2a). They allow the technological evaluation to be properly scaled according to the collaboration requirements for the next step.

Fig. 2. Protocol matrices for benchmarking and formulae used to fill them.
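As an illustrative sketch only (the paper does not prescribe any tooling), Eq. (1) amounts to a simple average of the k expert judgements; the scores below are hypothetical:

```python
# Eq. (1): the weight w_ij for metric i and collaborative dimension j is
# the average of the k expert judgements n_ij (five-point scale, 1-5).
def average_weight(judgements):
    """w_ij = (sum of the k expert judgements n_ij) / k."""
    return sum(judgements) / len(judgements)

# Illustrative judgements from five experts (3 HCI + 2 industrial design):
print(average_weight([4, 5, 4, 3, 4]))  # 4.0
```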

When comparing different tools, experts monitor activities and users' practices and simultaneously record useful data to satisfy the protocol metrics. In particular, three experts in co-design and design processes observe teamwork inside the different VDEs and collect information about participants' behaviours and interaction modes using audio–video recording and direct observation. No structured protocol analysis is defined at this stage, as the tool assessments consist of expert judgements, which are expressed based on the recordings and on directly observing users at work. The assessment is supported by analysing system functionalities and their correlation with the protocol metrics: this gives an understanding of how each function supports a specific collaborative aspect. Experts then consider the tested tools, define the functions they have and obtain the provided support for each metric (Matrix 2.1, Fig. 2). Experts fill in Matrix 2 (Fig. 2) by assigning 3-class values (0–3–9). For a certain collaborative dimension, the relative set of evaluation weights (wi) is taken from Matrix 1 and introduced in Matrix 2 to correlate the collaboration requirements with the monitored design performance. The absolute and relative importance for the different tools are calculated using Eqs. (2) and (3):

A_j = Σ_i A_ij,  (2)



R_j = Σ_i w_i · A_ij.  (3)

The final benchmarking value is obtained by multiplying each absolute value by the relative weight and then calculating the relative importance of each assessed tool. The resulting values allow different technologies to be compared for a specific collaborative context and the one that best answers the expressed needs to be selected. Such a method assigns a set of performance values to different tools to identify how they meet the collaboration needs, expressed by the heuristics, of a specific collaborative dimension in time and space.
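Eqs. (2) and (3) can be sketched for a single tool as follows; the weight and score values here are purely illustrative, not the paper's experimental data:

```python
# Illustrative weights w_i (Matrix 1, five-point scale) and support scores
# A_ij (Matrix 2, 0-3-9 scale) for one tool j over four metrics.
weights = [5, 2, 4, 3]
scores = [9, 3, 3, 3]

A_j = sum(scores)                                  # Eq. (2): absolute importance
R_j = sum(w * a for w, a in zip(weights, scores))  # Eq. (3): relative importance
print(A_j, R_j)  # 18 72
```

Summing R_j contributions over all metrics of a heuristic gives the per-heuristic relative importance used in the comparison.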

4. Experimental benchmarking: comparing VDEs for remote and synchronous co-design

4.1. The experimental set-up: testing modalities and compared technologies

The benchmarking method has been applied to different remote and synchronous co-design systems. They have been assessed during feasibility analysis and design reviews.

A pilot multidisciplinary team, with 2 researchers, 2 graduates, 2 project consultants and 2 industrial managers, performed the design activity. The expert evaluation refers particularly to remote meetings, where the participants involved were geographically distributed and collaborated by adopting three different tools. Four remote design meetings were monitored for each tool using audio–video recording and direct observation. Experimentation was performed over 2 months.

Fig. 3 shows the tested VDEs and an example of their user interfaces.

The analysed environments belong to the three VDE classes identified in Section 2.2: the first uses avatars in a completely immersive environment (Wonderland by Sun Microsystems), the second adopts video avatars that reproduce the participants' facial expressions during teleconference meetings (3D teleconferencing), and the third is a web-based platform exploiting DMUs in a shared space with a more project-oriented approach that integrates basic management functions (CoReD of the Polytechnic University of Marche). The system descriptions are reported below.

1. Wonderland by Sun Microsystems [38] is an open-source software toolkit for building virtual spaces where avatars represent people acting in the same space. An applicative example is MPK20, a 3D virtual environment designed for business collaboration and adopted in the Sun Labs' offices. Inside the MPK20 virtual world, people can conduct business, interact with team members and have chance meetings with colleagues, even in a closed workspace. The authors have used and properly customised basic virtual spaces to build a private design area suitable for the specific project purposes and scopes. The current toolkit, Open Wonderland, has a client/server architecture to support creating a wide range of interactive, dynamic virtual worlds using a flexible module system. Modules are written in Java and can be freely developed to connect Wonderland with external services.

Fig. 3. User interfaces of the three analysed CVDEs.

The main functions are as follows:

– to freely explore, with high fidelity, virtual worlds with moving and acting avatars inside (it is also possible to associate placemarks with any object for easy navigation of the virtual world);

– to support multiple and remote access to worlds using a URL and to share multi-user 2D applications running in Java (e.g., Wonderland Whiteboard, Viewer, and Sticky Notes), different worlds (e.g., Linux, Open Office, and Firefox), or show any portion of the users' desktop;

– to provide immersive audio, up to CD quality, enhanced by distance attenuation and spatial audio to create a strong sense of immersion and presence;

– to integrate telephone calls, with the ability to connect avatars to the world using telephone audio;

– to integrate multimedia devices, including a webcam viewer, a video player for recorded or streamed video, an audio recorder, and virtual microphones;

– to chat in private or group conversations;
– to support modelling tools that export to the COLLADA format (e.g., Google SketchUp, Autodesk Maya, Blender, etc.) and to drag and drop, move, rotate or resize objects; and
– to drag and drop and visualise existing contents of different types, including images (.gif, .png, .jpg), documents (.pdf, .svg), multimedia (.mp4, .mov, .wmv, .ogg), and 3D models (.kmz, .dae).

2. 3D teleconferencing is an advanced conferencing system that merges traditional videoconferencing with tele-presence. It represents a novel approach to videoconferencing, as it enables eye contact in one-to-many communication. This is important in design, as emotional behaviour is fundamental in determining product appraisal. It is based on a real-time 3D face-scanning system and a large 2D video-feed screen to correctly produce rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers.

The main functionalities are thus:

– to scan users' faces in 3D in real time, using a structured-light scanning system and complex algorithms to down-sample the texture image and provide a lightweight 3D face model with a high level of realism;

– to display images using an auto-stereoscopic display to provide a high-quality visualisation of remote users during the meeting;

– to support one-to-many conversations by providing a strong sense of presence and transmitting the emotional components of verbal and non-verbal communication; and

– to support design activities in traditional desktop applications using a large 2D feed display.

3. CoReD (Collaborative platform for Remote Design) is a collaborative platform developed by the Polytechnic University of Marche (UNIVPM). It is a proprietary system designed to satisfy the collaboration requirements of the CO-ENV Consortium companies (ww.coenv.it), as described in previous studies [11,12]. The CoReD platform has been implemented by integrating different COTS (Commercial Off The Shelf) tools with some ad hoc applications and functionalities, developed to respond better to collaboration needs. The system framework is represented by a web-based collaborative portal server (Microsoft Office SharePoint Server 2007) that runs additional modules (e.g., a Workflow Management System, a dynamic handler to manage unpredictable events, a software solution to view, review, and collaborate on different documents, and advanced multi-user videoconferencing).

CoReD supports both project management and co-design activities using different modules. The main functionalities are thus:

– to share CAD models in real time with any user who is simultaneously online;
– to manage multiple accesses and perspectives by shifting the viewpoint control from one participant to another;
– to analyse the shared product model from both a common viewpoint and different perspectives according to the design aspect under investigation (e.g., ergonomic, aesthetic, functional, and manufacturing);
– to mark up and apply annotations on the shared model and visualise comments from other users (all notes are attributes collected in the vault);
– to use different communication tools (e.g., instant messaging, audio and video conferencing) from multiple sites and perform multi-user videoconferencing with high-quality video and audio;
– to access product or project information while collaborating on the 3D product model (technical, management, and cost data can be accessed in real time in the shared project space and remote modality); and
– to manage workflow and handle unpredictable events.

4.2. Benchmarking results

Fig. 4 presents the results of applying the protocol analysis. The weighting values defined for synchronous and remote design collaboration (Matrix 1) have been introduced into Matrix 2 to correlate tool performance with the expected collaboration needs.

Matrix 2 highlights that both the Sun Microsystems tool and the CoReD platform can support synchronous and remote activities, with some differences (global result: 573 for system No. 1, 546 for system No. 3). Conversely, the teleconference tool is not suited to the identified co-design target (global result: 246).

Investigating the relative importance values (R) and contributions explains why the three systems substantially differ. Systems No. 1 (R = 129) and No. 3 (R = 147) both support teamwork well. In particular, the CoReD platform can answer technical feasibility needs because it shares CAD models and supports real-time modelling. These are significant functions in feasibility analysis and preliminary design reviews. The Wonderland system (R = 156) and the CoReD platform (R = 153) support communication well. The former most likely achieves a good result because it integrates an enhanced audio–video tool to provide a strong sense of presence and participation. The latter better exploits standard communication channels (e.g., chat, audio–video conference, e-mail) in multi-user applications. The main effect of adopting different communication strategies can be found in the Human Involvement values: Wonderland seems to better stimulate mutual engagement, creativity and imagination (R = 162); CoReD is limited in that respect (R = 105), as it adopts almost traditional audio–video conferencing tools instead of avatars and other multimedia feedback. The CoReD videoconference thus has medium audio-conference quality and suffers when numerous users are connected simultaneously. Conversely, Wonderland triggers good spatial perception and


MATRIX 2 – Collaboration metrics (columns: weight for the synchronous–remote dimension; scores for Wonderland (Sun Microsystems), 3D teleconferencing, and the CoReD platform (UNIVPM)):

TEAMWORK                                      Weight  Wonderland  3D telec.  CoReD
1 - Stages of decision-making process
    P = Presentation                            5        9          3         3
    D = Discussion                              2        3          0         3
    S = Solving                                 4        3          0         3
    E = Evaluation                              3        3          0         3
2 - Product design content
    U = Usability-Ergonomics                    3        3          0         3
    M = Market and trends                       1        9          3         3
    A = Aesthetical features                    5        3          0         3
    T = Technical-technological                 4        3          0         9
    F = Functionality                           2        0          0         3
3 - Participant skill, Level (1-5)              4        3          3         9
Relative importance for Teamwork                       129         30       147

COMMUNICATION
4 - Interaction style
    1) referring                                3        9          3         9
    2) physically interacting                   1        3          0         3
    3) simulating actions                       5        3          0         3
5 - Verbal communication style
    1) Referential                              5        9          0         9
    2) Highly descriptive                       4        3          0         9
    3) Emotional                                3        3          9         3
    4) Reflective-introspective                 1        0          9         0
6 - Non-verbal communication style
    1) Poorly gesture-marked                    3        9          3         0
    2) Highly gesture-marked                    1        3          9         3
    3) Graphical-marked                         5        3          0         3
Relative importance for Communication                  156         63       153

HUMAN INVOLVEMENT
7 - Mutual engagement
    1) Spatial dimension                        3        9          0         3
    2) Temporal dimension                       1        3          3         3
    3) Conceptual dimension                     5        3          0         3
8 - Collective creativity
    1) Help seeking                             4        9          9         3
    2) Help giving                              4        3          3         9
    3) Reflective reframing                     5        3          9         3
    4) Reinforcing                              3        3          9         0
9 - Stimulating imagination, Level              5        9          3         3
Relative importance for Human Involvement              162        138       105

COGNITIVE REACTION
10 - Cognitive reaction to model
    1) Before modelling                         1        3          0         3
    2) During modelling                         5        9          0         9
    3) After modelling                          3        3          0         3
11 - Cognitive perception of the model
    1) Elements                                 3        3          3         9
    2) Relations                                5        3          0         3
    3) Locations                                3        9          0         3
12 - Cognitive decision making
    1) New solution                             5        0          0         3
    2) Already known solution                   2        9          3         9
Relative importance for Cognitive Reaction             126         15       141

RELATIVE IMPORTANCE (global)                           573        246       546

Fig. 4. Benchmarking results for remote design review meetings.


virtual environment awareness and highly stimulates imagination due to its hyperlinks and shared applications. For cognitive reaction during design activities, CoReD (R = 141) better stimulates human reactions than Wonderland (R = 126), as the former better supports technical data visualisation and model mark-ups, the perception of spatial and functional relationships, and information tracking.

Focusing on system functionalities, Wonderland allows numerous applications to be shared and easily recovers enterprise data; people can thus create, edit and share documents within the virtual world and in their office. It adopts a powerful immersive audio system to create a sense of social presence: this aspect clarifies the high values achieved in communication and in the support for human-based tasks. The main strength is that Wonderland is easy to use, and interaction seems a natural fit inside the virtual space. This aspect can explain the high values in teamwork tasks. Conversely, the CoReD platform seems to be specifically arranged for inter-company collaborative design tasks. It allows participants to perform specific design actions, including real-time sharing of different file typologies, easy CAD-CAE assembly uploading, real-time co-modelling, applying mark-ups and notes, accessing project data and dynamically managing workflows involving all team members. Such aspects explain the high performance obtained in the cognitive reaction and decision-making metrics. However, communication must be improved.

The 3D teleconferencing tool has proven inappropriate for synchronous and remote co-design purposes (global result: 246). In particular, it is unsuitable for supporting cognitive reaction (R = 15) because it provides no interaction with CAD models or product representations. Teamwork purposes are also not well managed (R = 30) due to the lack of design tools and a shared design space. However,


the tool does achieve good values in human involvement (R = 138). This depends on its ability to reproduce the effects of gaze, attention and facial gestures, which characterise face-to-face meetings and render the teamwork experience more absorbing than traditional videoconferencing.
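The per-heuristic relative importance values reported in Fig. 4 follow directly from Eq. (3). As a verification sketch (Python used purely for illustration), the Teamwork row can be reproduced from the weights and scores transcribed from the figure:

```python
# Teamwork weights (synchronous-remote) and Matrix 2 scores transcribed
# from Fig. 4; rows follow the order P, D, S, E, U, M, A, T, F, skill.
weights = [5, 2, 4, 3, 3, 1, 5, 4, 2, 4]
scores = {
    "Wonderland":          [9, 3, 3, 3, 3, 9, 3, 3, 0, 3],
    "3D teleconferencing": [3, 0, 0, 0, 0, 3, 0, 0, 0, 3],
    "CoReD":               [3, 3, 3, 3, 3, 3, 3, 9, 3, 9],
}
for tool, a in scores.items():
    r = sum(w * s for w, s in zip(weights, a))  # Eq. (3)
    print(tool, r)  # Wonderland 129, 3D teleconferencing 30, CoReD 147
```

The same computation over the remaining heuristics yields the other R values and the global results (573, 246, 546).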

5. Case study: supporting synchronous and remote collaboration in different use contexts

The benchmarking identified two technologies that can support design meetings well in synchronous and remote modalities. The following questions thus arise: How reliable is the experts' evaluation? What would the system response be in real applications? How do the identified systems actually support different use contexts?

To answer these critical questions, case study experimentation and benchmarking validation were performed on the two technologies identified as the most efficient: the CoReD platform and the Open Wonderland system. They were tested in two industrial use contexts. The final validation checked the method's weights on real pilot studies and generalised them, as the weights were verified on different tools.

5.1. The industrial case studies

The systems were tested for co-design purposes in remote and synchronous design. In particular, they were adopted to effectively conduct the DR meetings of two different user groups (the EROD Consortium and the TECH-POL Consortium). The groups differ in product typology, company size, technological assets, target markets, users' skills, and interaction modes. Though the products themselves changed, their development processes were found to be similar. These factors all influence the collaboration activities and the supply-chain organisation structure. The application context characterising the two case studies is thus highly different. The two groups are described below.

The EROD Consortium involves 6 research centres and 21 medium-sized companies belonging to different industrial sectors: cooker hoods, household appliances, air-cooler evaporators, electric and hybrid vehicles, and machine tools. The partners create a wide collaboration scenario characterised by five distinct chains. The final aim of the EROD Consortium is to develop innovative solutions for motor-user functional groups that can maximise energy efficiency. However, each chain is interested in developing proprietary solutions for electric motors, so collaboration projects are generally developed within a specific chain. Inter-company interactions are strongly limited to a single sector: companies collaborate intensely with partners belonging to the same supply chain (e.g., frequent communications, intense data sharing) but have only formal contacts with the others, as their interests differ. The organisation structure is thus vertical and clearly divided into five parallel supply chains. Each supply chain has a leader company that coordinates the partners' activities and responsibilities and talks with the other leaders (Fig. 5a).

Fig. 5. The organisational structures of the tested design contexts, representing interaction within the EROD Consortium (a) and the TECH-POL Consortium (b).

The TECH-POL Consortium associates 12 companies within the Marche Region (Italy). These companies are specialised in transforming plastics and producing plastic products. The mission of the Consortium is twofold: first, promoting and transferring knowledge and expertise regarding plastic moulding to push innovation in this specific sector, and second, offering services and common facilities to support member companies (e.g., a rapid prototyping centre, highly qualified consultancies, collaborations with important research centres). Member companies are SMEs and micro-sized firms, but they usually collaborate as peers because they are oriented to the same specific topics; members thus share their competencies at the same level. Furthermore, collaboration is simultaneously intense and discontinuous: contacts are frequent and highly collaborative when a joint project is developed, but only for a certain period. Due to the nature of the interaction, the organisation structure is linear, as members assume the same importance during collaboration. As suggested in a recent work [39], this structure can be defined as a sequence mode because partners generally communicate in line (Fig. 5b).

5.2. Evaluating the team collaboration during remote design review meetings

5.2.1. The evaluation method

To assess the design meeting performance and the collaboration quality achieved within the two investigated chains, the authors defined a set of parameters (metrics) to quantify and measure the most relevant aspects of DR meetings. Protocol analysis was then adopted to collect and analyse data.

By observing product design teams at work, DRs are mostly carried out with four main tasks in mind: (A) identifying real and potential at-risk areas for every project aspect; (B) establishing the risk level and the priority for solving critical situations; (C) recognising the influence of every aspect on the final result; and (D) suggesting possible solutions and formulating a shared alternative. Furthermore, collaboration entails a continuous shift from the individual to the team dimension. The metrics thus consider both process/product performance aspects (e.g., process efficiency, DR efficiency, product quality and cost) and human–human interaction aspects (e.g., interaction style, verbal communication, mutual engagement, stimulating imagination, cognitive reaction, and decision making) to provide a valuable analysis.

Table 2 describes the selected metrics and provides a brief description of each (see Table 3).

For performance measurement, the metrics were defined based on traditional product-process assessment. The Human Interaction metrics derive from the elaboration presented in Section 3.1, due to the implicit nature of these indicators. Indeed, experts can evaluate the Human Interaction metrics during benchmarking and measure them during the pilot projects, so these metrics can be directly related. Conversely, process performances can only be measured in the case studies.



Table 2. Case study evaluation heuristics and related metrics.

Performance metrics

Process – Process quality:
– Design iterations: measures the reduction of the design cycle time (Number).
– Components reuse: quantifies the degree of idealisation potential in product definition offered by the system; it can be measured in terms of components, materials or functions (Percentage).

Process – Design review process:
– Design reviews duration: measures the average duration of DR meetings within the design cycle (Time).
– Users satisfaction: measures the degree of users' satisfaction in working with the supporting tools (Users' judgement).
– Technologies' exploitation: measures the degree of users' satisfaction in terms of the functions' usefulness for design activities (Users' judgement).
– Physical prototypes: the number of physical prototypes produced along the whole product design cycle (Number).

Product – Product quality:
– Correspondence to brief requirements: measures how many features of the designed product correspond with the product brief (Number).
– Manufacturing iterations: indicates the efficiency of DMU simulations in predicting the real behaviour of a product (Number).
– Customer satisfaction: expresses the customers' satisfaction with the designed product and captures the perceived product quality (Users' judgement).

Product – Product cost:
– Conceptual: measures the effort in the conceptual design phase (Person-hours).
– Embodiment: measures the effort in the detailed and embodiment design phases (Person-hours).
– Testing and prototyping: measures the effort in testing and prototyping (Person-hours).

Human interaction metrics

Communication – Interaction style:
– Referring to the CAD models / to physical prototypes / to CAx simulations: indicates the preferred modalities used for interacting with the adopted product models during DR sessions; it reveals the preferred way of communicating with and through virtual objects, which depends on tasks and attitudes (Percentage).

Communication – Verbal communication:
– Referential / highly descriptive / emotional / reflective-introspective language: indicates the forms of verbal communication adopted to refer to product models during interaction (Percentage).

Human involvement:
– Mutual engagement: indicates the preferred modalities for interacting with the supporting environment (Users' judgement).
– Stimulating imagination: indicates participants' ability to create new ideas and solutions by generating a collective cognition (Users' judgement).

Cognitive reaction – Reaction to model:
– Actions before / during / after modelling: investigates how participants interact with models; it measures the participants' cognitive reaction to the model in terms of actions on it, where actions refer to physical operations on virtual objects (Percentage).

Decision-making:
– Elaboration of new solutions / application of an already known solution: refers to the modalities used for perceiving external representations and obtaining useful cues for reasoning about design issues (Percentage).


Process performances therefore have no direct relation with the benchmarking metrics but are cross-related with the teamwork-related metrics.

Team collaboration during DR meetings was monitored using a structured protocol analysis. Protocol analysis is accepted as a prevailing experimental technique to study human–human interaction and to gain access to designers and engineers at work. Protocol approaches used in design research can be classified into two categories: concurrent and retrospective protocols. While concurrent protocols are used to investigate the process-oriented aspects of design and information processing, retrospective protocols are utilised when a content-oriented approach is adopted and cognitive aspects assume great importance. In this study, we adopt the retrospective protocol approach, comprising data collection using different observation techniques, data segmentation, coding and analysis. Protocol analysis was adopted to explore how team members (e.g., designers, engineers, technicians, suppliers) collaborate within a specific environment while performing the same tasks. After recording the working sessions and collecting data, the protocol data were divided into small segments. Each segment was marked with a set of codes characterising the different heuristics. The proposed coding scheme is an attempt to investigate collaborative teamwork.


Table 3. Example of the Diary Study fulfilment for a DR meeting.

Date: 10/11/2010
Project: Restyling OMA line
Product focus: definition of a new aesthetic line without changing the pre-existing structure
Typology: aesthetical-technical DR
Supporting media: CAD files from the technical dept.; renderings from the designers (.pdf format on CD)
Completed activities: DR1 – focus on the required characteristics (same functionalities but smaller dimensions), visualisation of a few shape proposals by the designer. DR2 – recording of further observations on the previous proposals, change requests to the designer
Physical prototypes: none; waiting for the first foam prototype to be realised
Length of time: 1 h 25 m
Place: Teuco's VR Lab
Actors: 2 designers (SD1, SD2) from the same design studio, Project Manager (PM), Product Manager (PM), Product Leader (PL), Technical Engineer (TE), Industrial Engineer Manager (IEM)


Coding reliability is partially achieved by comparing the values obtained from analysing different working sessions, related to different projects and involving different actors.

5.2.2. Testing procedure and data collection

Two experts, one in HCI and one in product design management, were involved in analysing the collected data, segmenting the recorded sessions and measuring a value for each metric according to the protocol analysis described above. In particular, the experts monitored DR meetings by combining two well-known investigation techniques: Diary Study and Interaction Analysis. These techniques were performed in all test cases to collect data for metric measurement. During DR meetings, a researcher monitored all activities by filling in the Diary Study MS Word form and audio-recording the sessions. For each meeting, information regarding duration, adopted representational media, stored documents, content focus and participants was collected and transcribed in the Diary Study (Table 3).

Interaction Analysis was an adaptation of protocol analysis: data collection, segmentation and elaboration. Rather than asking participants to think aloud, their conversation was recorded and transcribed, while gestures were replicated by being sketched on paper. The collected data include verbal descriptions of design knowledge and non-verbal information, including hand and body gestures and communication style. The recorded data were divided into segments according to the main performed activities. For each verbal communication segment, the time, the metric occurrences and the non-verbal actions were transcribed in the corresponding columns. Finally, the protocol metrics were measured by occurrences and time. Researchers collected data in a table, as in Table 4.
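The segmentation-and-coding step can be sketched as follows; the `Segment` structure, the metric codes and the occurrence/time measures below are illustrative assumptions, not the authors' actual coding scheme:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # minutes from session start
    end: float     # minutes from session start
    codes: tuple   # metric codes observed in this segment (hypothetical names)

def measure(segments):
    """Measure protocol metrics by occurrences and by share of session time."""
    occurrences = Counter()
    time_per_code = Counter()
    for s in segments:
        for c in s.codes:
            occurrences[c] += 1
            time_per_code[c] += s.end - s.start
    total = sum(s.end - s.start for s in segments)
    share = {c: 100.0 * t / total for c, t in time_per_code.items()}  # % of session
    return occurrences, share

# A toy 30-minute session split into three coded segments.
session = [
    Segment(0, 12, ("referential_language", "cad_reference")),
    Segment(12, 20, ("emotional_language",)),
    Segment(20, 30, ("referential_language",)),
]
occ, share = measure(session)
```

Here `occ` counts how often each code appears across segments, while `share` gives the fraction of session time each code covers, mirroring the occurrence and time columns of Table 4.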

5.2.3. Experimental test cases

Experimentation was performed within the two above-mentioned industrial chains (EROD and TECH-POL). Two interdisciplinary teams were created in the two chains, and two collaboration pilot projects were identified for each chain. Experimentation lasted one month for each chain and exploited both technologies (the CoReD platform and Open Wonderland) to support remote co-design meetings; the total testing period was two months. Each pilot project chosen for experimentation involved a large number of partners (6–9). In particular, they dealt with the following:

– Developing a new hybrid vehicle for urban traffic, for the EROD Consortium.

– Innovating some moulded components, shifting from the current PP66 to recycled PET, for the TECH-POL Consortium.

Companies were pushed to use the CoReD platform and the Wonderland system to develop the design review activities by exploiting the provided functionalities. Experts from the research group helped the project managers create the project home page or spaces and manage the project path using both tools. The experts also monitored the use of the two systems, paying particular attention to synchronous and remote collaboration.

The number and skills of the involved users changed according to the project and the design stage in which the DR took place. The two multidisciplinary teams involved numerous competencies depending on the DR topics and tasks.

The Project Managers (PMs) were always present during the meetings, as they supervised the whole product design and assumed a mediation role within the design team. They usually have deep knowledge of market demand and of the evolution of customer taste. They also interact well with product models, both physical and virtual, thanks to the experience gained over the years.

One or two Styling Designers (SDs) were usually present during conceptual meetings, generally from the kick-off until product concept validation, which conceives the product shape in accordance with the brief specifications. The designer generally elaborated design concepts and illustrated the design outcomes to the team during the design meetings, adopting several representational modalities, including sketching, diagrams, foam mock-ups, conceptual 3D models, and synthetic images. According to his/her abilities, the designer was also involved in other collaboration dimensions for design solution validation.

The Project Leaders (PLs) participated in all meetings, as they are responsible for the product release. They handled product engineering and usually assessed the technical feasibility of the design solutions. They generally have a strong technical background; they are thus familiar with technical product representations but have difficulties with abstract ones.

Technical Engineers (TEs) were also involved once the product aesthetics had been frozen and the product needed to be engineered. They handled 3D CAD modelling and technical drawings, and they helped evaluate and optimise the design solution to realise the final product. Their viewpoint is strongly technical and practical.

When product engineering begins after concept validation, some specific roles can be involved. A Product Engineer (PE) was called in to assess the manufacturing feasibility and economic value of alternative solutions. An Industrial Engineering Manager (IEM) was asked to predict the future product industrialisation, indicating technological problems in achieving the intended product shape and surface finishing. When manufacturing quality can significantly impact product success, a Quality Manager (QM) can contribute to conceptual design and technical feasibility. Finally, several Suppliers (S) could participate in the DR meeting when directly involved in the definition of product components or equipment design; their participation depends on the specific session goals. These last competencies all adopted a functional and technical viewpoint, mostly focused on specific tasks, and their skills encompassed CAD modelling, virtual representations, and Computer Aided Engineering (CAE) simulations. Some difficulties may arise with abstract representations.

Despite differences in the participants' skills and the variation in team composition between the two test cases, the experimental results can be considered homogeneous and comparable, as the analysis mainly focused on the interactions among humans within each design team and between humans and design models, rather than on the specific project goals and tasks.


Table 4. Example of interaction analysis: verbal transcriptions and metrics fulfilment.


During the experimentation, 10 meetings were monitored for each chain project: five using CoReD and five using Wonderland. Table 5 summarises how the experimental analysis was organised within each chain (e.g., number, design tasks, team composition, duration). For an accurate analysis, each task was divided into a series of sub-tasks according to the main design activity and the key actions (Table 5). Nevertheless, system experimentation was performed according to the protocol analysis method instead of task analysis, as the authors are interested in evaluating human–human interaction and collaboration from a general perspective without specifically focusing on system usability.

The collected data were used to fill in the Matrix 3 protocol according to the proposed experimental protocol (Fig. 1). Figs. 6 and 7 show the virtual design environments created with the two tested tools and used for DR activities. Fig. 6 shows the collaboration area of the CoReD platform used by the EROD companies. It allows three project members to be connected in a videoconference while they analyse technical data on a common workspace; it supports communication during evaluation and optimises the meeting's time and goals. On the left, the vertical menu makes all product and project documentation available during the meeting. Fig. 7 shows the virtual world realised in Wonderland, as used by the TECH-POL members. The meeting involved four avatars representing the DR meeting members (Technical Director, Marketing Director, Material Supplier and Project Leader) while they shared analysis reports and CAD models.

All system functionalities were fully tested with up to six users simultaneously connected and during co-design on a huge CAD model assembly (approximately 250 MB). The whole platform was also tested on several operating systems (i.e., Windows, Linux, and Mac) and numerous web browsers (i.e., Mozilla Firefox, Internet Explorer, Google Chrome, Safari and Opera).

5.3. Experimental results and discussion

5.3.1. Technological comparison

Data collection and analysis from the case studies identified whether and how the two tested systems support collaboration in different contexts. In particular, protocol analysis was applied before using the tools (not-supported mode) and during their use (supported mode). In the first case, the team was supported by traditional tools and communication means; in the second, by the system functionalities described in Section 4.1. The experts focused on collecting data during the supported meetings and identifying differences from the previous situation. For each monitored meeting, a set of information characterising the DR session was transcribed in the Diary Study, including meeting duration, adopted representational media, stored documents, content focus and participants.


Table 5. Overview of the monitored DR meetings during the case study (No.; chain; design tasks and subtasks; team composition; duration; VDE).

1.1 EROD – Definition of the product brief (target market, 10 m; price, 16 m; values, 28 m; marketing options, 21 m). Team: 1SD, 1PM, 1PL, 1QM, 1MKT (Marketing). Duration: 1 h 15 m. VDE: CoReD.

1.2 EROD – Discussion of some aesthetic concepts of the hybrid vehicle (presentation of the design alternatives (renderings), 16 m; discussion about alternative No. 1, 19 m; discussion about alternative No. 2, 36 m; comparison of aesthetics and ergonomics, 31 m; evaluation of their impact on the target market). Team: 2SD, 1PM, 1PL, 1CEO (Chief Executive Officer). Duration: 1 h 42 m. VDE: Wonderland.

2.1 TECH-POL – Definition of the actual problems of PP66 products (analysis of the environmental impact, 19 m; analysis of the product quality, 15 m; analysis of the time cycle, 11 m; evaluation of product cost, 18 m; evaluation of the equipment cost, 22 m). Team: 1PM, 1PL, 1QM, 2PE, 1IEM. Duration: 1 h 25 m. VDE: Wonderland.

1.3 EROD – Definition of the product solution (engine and driving parts, 10 m; equipment, 36 m; interiors and fitting, 26 m; dimensions and available spaces, 21 m; optional functions, 22 m). Team: 1SD, 1PM, 1PL, 1TE, 1QM. Duration: 1 h 55 m. VDE: CoReD.

2.2 TECH-POL – Definition of the main technological issues to face in shifting production from PP66 to recycled PET*. Team: 1PM, 2PL, 1QM, 1IEM, 1PE, 2S. Duration: 1 h 12 m. VDE: CoReD.

1.4 EROD – Definition of some functional characteristics of the chosen solution*. Team: 1SD, 1PM, 1PL, 1TE, 1QM. Duration: 1 h 25 m. VDE: Wonderland.

2.3 TECH-POL – Detailed analysis of the AS-IS process (injection moulding)*. Team: 1PM, 2PL, 1IEM, 3PE. Duration: 1 h 52 m. VDE: Wonderland.

1.5 EROD – Preliminary product analysis: analysis of physical and cognitive ergonomic aspects*. Team: 1PM, 1PL, 1QM, 2TE, 1SD. Duration: 2 h 18 m. VDE: Wonderland.

2.4 TECH-POL – Presentation and analysis of some process alternatives (TO-BE processes)*. Team: 1PM, 1PL, 1IEM, 2PE, 1QM. Duration: 1 h 45 m. VDE: CoReD.

1.6 EROD – Detailed analysis of the hybrid engine (engineering of the main components)*. Team: 1PL, 2TE, 1PE, 1IEM, 2S. Duration: 2 h 15 m. VDE: CoReD.

1.7 EROD – Analysis of the technical feasibility of the new hybrid car*. Team: 1PL, 1PM, 1PE, 1IEM, 1QM. Duration: 2 h 05 m. VDE: Wonderland.

2.5 TECH-POL – Analysis of moulding simulations and LCA analysis of the TO-BE processes*. Team: 1PM, 3PE, 1QM, 1IEM, 1TE. Duration: 1 h 42 m. VDE: CoReD.

2.6 TECH-POL – Analysis of recycled PET compounds to adopt for the new products: alternatives and characteristics. Team: 1PM, 1PL, 1QM, 1IEM, 1PE. Duration: 1 h 24 m. VDE: Wonderland.

1.8 EROD – Analysis of product assemblability issues and revision*. Team: 1PL, 3TE, 1PE, 1IEM, 3S. Duration: 1 h 35 m. VDE: CoReD.

2.7 TECH-POL – Discussion of results from experimental tests on recycled PET materials. Team: 1PM, 1PL, 1QM, 1IEM, 1PE. Duration: 1 h 38 m. VDE: CoReD.

1.9 EROD – Analysis of the integration of mechanical and electrical parts*. Team: 1PL, 3TE, 1PE, 1IEM, 3S. Duration: 1 h 35 m. VDE: Wonderland.

1.10 EROD – Analysis of the first functional prototype*. Team: 1PM, 1PL, 1QM, 2PE, 1SD. Duration: 1 h 15 m. VDE: CoReD.

2.8 TECH-POL – Evaluation of the first mould test by the TO-BE process*. Team: 1PM, 1PE, 1IEM, 1QM, 2S. Duration: 2 h 12 m. VDE: Wonderland.

2.9 TECH-POL – Discussion of the TO-BE process optimisation*. Team: 1PM, 1PL, 1IEM, 1QM, 2PE. Duration: 1 h 55 m. VDE: CoReD.

2.10 TECH-POL – Validation of the new process for recycled PET components. Team: 1PM, 1PL, 1QM, 1PE, 1IEM, 1CEO. Duration: 1 h 13 m. VDE: Wonderland.

* Subtasks were omitted for brevity.


DR meetings were then analysed using IA, and the protocol metrics were collected and measured. By comparing the not-supported and supported modes, the variations in the average values (over all monitored meetings) were calculated. Table 6 summarises the experimental results for the CoReD platform; a similar table was compiled for Open Wonderland.
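The not-supported vs. supported comparison reduces to a percentage variation of each metric's average over the monitored meetings. A minimal sketch, with invented metric names and values:

```python
def variation(not_supported, supported):
    """Percentage variation of supported-mode averages against the baseline."""
    return {m: round(100.0 * (supported[m] - base) / base)
            for m, base in not_supported.items()}

# Averages over all monitored meetings (illustrative numbers only).
baseline = {"design_iterations": 10, "physical_prototypes": 6}
with_tool = {"design_iterations": 8, "physical_prototypes": 4}

deltas = variation(baseline, with_tool)
print(deltas)  # {'design_iterations': -20, 'physical_prototypes': -33}
```

Negative values denote reductions achieved in the supported mode, matching the sign convention used in Table 6.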

By comparing the experimental data, interesting considerations about the provided technological support can be inferred. From a general perspective, the CoReD platform seems to support remote DR meetings better than Open Wonderland, and it does so across different application scenarios and design contexts. By adopting CoReD, the number of iterations during both the design and manufacturing stages significantly decreases (−25%), the number of physical prototypes is limited to a few items (−31%), product costs for embodiment design and for testing and prototyping are considerably reduced (−13% and −29%, respectively), and user satisfaction is on average high (+30%). Such results can also be attributed to good support for cognitive design processes and co-design activities, as observed in the benefits to the interaction style (e.g., a 35% increase in references to CAD models and a 23% increase in references to CAx simulations).


Fig. 6. The collaboration area of the CoReD platform supporting an EROD meeting: the main user interface with three users simultaneously connected.

Fig. 7. The virtual world in Wonderland supporting a TECH-POL meeting: the virtual space with avatars representing the team members.


The users' cognitive reaction also benefits (e.g., a 44% reduction in actions during modelling, a 16% increase in elaborating new solutions and a 19% increase in reusing existing ones). An apparently negative result of CoReD concerns the product cost in conceptual design: conceptual design becomes more expensive (+11%), but this is balanced by the reduced costs for embodiment, testing and prototyping. Conceptual stages become more intense and complex, and are thus more expensive, but the product can then be designed with less effort and the global cost is reduced.

Conversely, Wonderland improves collaboration even if it provides fewer global benefits than the CoReD platform. This likely occurs because Wonderland is mainly a communication tool while CoReD is designed for co-design purposes; Wonderland is thus less suited to specific design activities. As proof of that, product and process benefits are limited when using Wonderland. However, communication tasks are well supported: emotional language is enhanced (+20%) and reflective-introspective mechanisms are supported (+15%). Furthermore, human involvement is stimulated (mutual engagement +35%, stimulating imagination +15%). This good Wonderland performance in communication can depend on the users' representation in the virtual scene (avatars).

However, some differences emerge when the data obtained from the two groups are compared. The CoReD platform supports the EROD Consortium well from both the performance and cognitive perspectives, while the TECH-POL companies do not completely exploit all platform functionalities and the achieved benefits are limited.


Table 6. Values of protocol metrics calculated for the analysed DR meetings (Matrix 3). Values are percentage variations between the not-supported and supported modes; columns give CoReD EROD; CoReD TECH-POL; CoReD average; Wonderland average.

Process – Process quality:
– Design iterations (Number): −30; −10; −20; −10
– Components reuse (Percentage): +30; +25; +28; +12
Process – Design review process:
– Design reviews duration (Minutes, hours): −10; +8; −2; −8
– Users satisfaction (Users' judgement): +30; +28; +29; +22
– Technologies' exploitation (Users' judgement): +25; +7; +16; +5
– Physical prototypes (Number): −33; −19; −26; −20
Product – Product quality:
– Correspondence to brief requirements (Number): +26; +4; +15; +18
– Manufacturing iterations (Number): −24; −8; −16; −5
– Customer satisfaction (Users' judgement): +9; −5; +2; +2
Product – Product cost:
– Conceptual (Person-hours): +6; +23; +15; +15
– Embodiment (Person-hours): −18; −5; −12; 0
– Testing and prototyping (Person-hours): −35; −10; −23; −10
Communication – Interaction style:
– Referring to the CAD models: +40; +30; +35; +5
– Referring to physical prototypes: −23; −14; −19; −25
– Referring to CAx simulations: +31; +14; +23; +2
Communication – Verbal communication:
– Referential language: +24; +31; +28; +12
– Highly descriptive language: −10; +8; −1; −3
– Emotional language: +5; −4; +1; +20
– Reflective-introspective language: +3; +2; +3; +15
Human involvement:
– Mutual engagement: +28; +19; +24; +35
– Stimulating imagination: +30; +23; +27; +15
Cognitive reaction – Reaction to model:
– Actions before modelling: −23; −32; −28; +5
– Actions during modelling: −45; −42; −44; −3
– Actions after modelling: −8; −14; −11; −2
Decision-making:
– Elaboration of new solutions: +23; +8; +16; +21
– Application of an already known solution: +21; +17; +19; +6


For product quality, design iterations are generally reduced, even if the EROD Consortium gains greater benefits (−30%) than TECH-POL (−10%). Furthermore, the number of realised prototypes is reduced by over 30% for EROD, while the reduction is limited for TECH-POL (−19%). Correspondence to brief requirements improves markedly in the first case (+26%) but only slightly for TECH-POL (+5%). This difference may be due to the inner TECH-POL Consortium structure, which comprises SMEs and micro-sized companies whose employees have lower levels of skill and expertise and are less familiar with advanced ICT tools. Furthermore, there is no leader company that guides and pushes the small partners to use advanced tools, so traditional practices are still used alongside the CoReD platform (e.g., e-mail, phone calls, and faxes). The cost benefits achieved are also smaller (+23% for conceptual design, −10% for testing and −5% for embodiment). The main difficulties of the TECH-POL Consortium include fewer references to CAD models and simulations during DR meetings, poorly improved communication, and less well-supported decision-making.

5.3.2. Benchmarking weight validation

The final research goal is to define a final benchmarking method, validated against the experimental data, so that the experts' judgements can be trusted and reliable evaluations performed. It is based on the method introduced in Section 3. In particular, it considers the experimental data on all tested systems and compares them to the benchmarking weights by considering the relations between the collaboration metrics and the experimental testing parameters. Fig. 8 shows the adopted approach using the protocols in Matrices 3 and 4. Matrix 3 is filled with the collected data according to the test case evaluation metrics; data are collected for every tested tool, averaged and properly normalised on a 0–1 scale. Matrix 4 directly correlates the collaboration metrics used during benchmarking with the evaluation metrics, and it is then used to validate the benchmarking weights and find the correlation weighting values. In particular, the experimental results for the case studies are combined with the correlation values (+ or −) and compared to the weights to check their coherence. If the weights are not coherent, corrective weights can be proposed.
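The validation loop can be sketched as follows: averaged results are min-max normalised to a 0–1 scale (Matrix 3), the +/− correlation signs are applied (Matrix 4), and weights that disagree with the evidence receive a corrective proposal. The coherence rule and all values below are illustrative assumptions, not the paper's actual formulae or data:

```python
def normalise(values):
    """Min-max normalise averaged experimental results to a 0-1 scale."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}  # assumes hi > lo

def check_weights(weights, normalised, correlation, threshold=0.5):
    """Flag metrics whose benchmark weight disagrees with the evidence.

    A high weight (>= 4) should coincide with strong correlated evidence
    (normalised value >= threshold); otherwise a corrective weight is proposed.
    """
    corrections = {}
    for m, w in weights.items():
        # A "-" correlation inverts the reading of the normalised value.
        evidence = normalised[m] if correlation[m] == "+" else 1 - normalised[m]
        if w >= 4 and evidence < threshold:
            corrections[m] = w - 1   # propose a lowered corrective weight
        elif w <= 2 and evidence >= threshold:
            corrections[m] = w + 1   # propose a raised corrective weight
    return corrections

# Illustrative averaged results, initial weights and correlation signs.
avg = {"presentation": 0.80, "verbal_communication": 0.11, "mutual_engagement": 0.62}
weights = {"presentation": 5, "verbal_communication": 5, "mutual_engagement": 5}
corr = {"presentation": "+", "verbal_communication": "+", "mutual_engagement": "+"}

print(check_weights(weights, normalise(avg), corr))  # {'verbal_communication': 4}
```

In this toy run, only the verbal-communication weight is flagged as incoherent with the evidence, mirroring how the paper identifies metrics whose weights require correction.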

First, the analysis of the initial weights highlights that the presentation (five points) and evaluation (three points) stages are considered the most important in DR activity. High values for aesthetic (5) and technical–technological (4) features indicate that the experts believe discussion frequently focuses on these aspects; the adopted representation modalities should thus be highly centred on them. For people's communication, simulating action (5) is considered the most successful interaction style, due to the necessity of reaching a shared and immediate understanding among all participants. Communication is considered highly referential (5) and descriptive (4), especially when team members must present and detail specific design issues. Drawings and sketches (5) are frequently used to support conversations. For human involvement, the experts believed that the conceptual dimension is the most strategic for achieving strong mutual engagement (5): the supporting tools must promote the exchange of new ideas and concepts, the mental elaboration of the analysed product features and the stimulation of imagination (5) to make innovative and original products.

Page 19: An approach to assessing virtual environments for synchronous and remote collaborative design

Fig. 8. Protocol matrices for benchmarking validation and the formulae used to complete them.

M. Germani et al. / Advanced Engineering Informatics 26 (2012) 793–813 811

(5) to make innovative and original products. The supporting envi-ronment must be barrier-free and encourage natural and intuitiveinteractions. To achieve collective creativity, reflective reframingwas also considered crucial (5). Considering the team participants’cognitive reactions, physical operations on objects (e.g., selection,placement, sectioning, and mark-up) are required during productdesign modelling (5) to support direct and instinctive human reac-tions. Furthermore, model perception is preferably referred to rela-tions among product elements (5). Finally, decision-makingactivities should be based on finding new solutions (5) rather thanreusing existing knowledge.

Table 7 shows the results of the benchmarking validation. It indicates that the preliminary expert judgements are globally validated by the experimental data averaged over the two systems and use contexts. Indeed, almost all metrics are well estimated, except the communication and cognitive reaction metrics. Correction weights are proposed for some metrics; Matrix 4 drives the weight optimisation.

The weights for the decision-making stage are coherent with the experimental data (e.g., presentation has a negative correlation with a low value, 0.11, and a positive correlation with a high value, 0.80; it is thus definitely relevant, and a weight of 5 is correct). Scanning the matrix, all metrics are coherent (i.e., interaction style and mutual engagement) except three: verbal communication, cognitive reaction to model and cognitive decision-making. For verbal communication, the correlation matrix is diagonal, so the data interpretation is linear: coherence can be checked by comparing the averaged experimental results against the direct positive correlations. The adopted weights seem to overestimate the importance of highly descriptive language and underestimate reflective-introspective language. The weights can therefore be changed (e.g., the weight for highly descriptive language can be 3 instead of 4, and that for reflective language can be 2 instead of 1). Similarly, the cognitive reaction weights can be optimised: "cognitive reaction to model – actions after modelling" can be 2 instead of 3, "cognitive decision making – elaboration of new solution" can be 4 instead of 5, and "reusing existing solutions" can be 3 instead of 2. Such a corrected set of weights can be used for system assessment in this specific collaborative dimension without any further experimentation.
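The coherence check described above can be sketched in a few lines. The function, the mapping of experimental evidence onto the 1–5 weight scale, and the example values are assumptions for illustration, not the paper's exact formulae: a metric's expert weight is judged coherent when it matches the weight suggested by the signed, normalised experimental result, and a corrective weight is proposed otherwise.

```python
def check_weight(weight, experimental_value, correlation_sign, max_weight=5):
    """Return (is_coherent, suggested_weight) for one metric (hypothetical rule).

    experimental_value: averaged, normalised 0-1 result (Matrix 3).
    correlation_sign: +1 or -1, the correlation entry from Matrix 4.
    """
    # Signed evidence in [-1, 1], then linearly mapped onto the 1..max_weight scale.
    evidence = correlation_sign * experimental_value
    suggested = round((evidence + 1) / 2 * (max_weight - 1)) + 1
    return suggested == weight, suggested

# Hypothetical example: descriptive language weighted 4 by the experts,
# but the averaged experimental evidence only supports a weight of 3.
print(check_weight(weight=4, experimental_value=0.2, correlation_sign=+1))
```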

6. Conclusions

The present work proposes a structured methodology to compare collaborative VDEs for remote and synchronous design activities and to address the reliability of expert evaluations. The proposed method is a step towards robust VDE selection and application for co-design activities in different time–space domains. The approach has general validity, as it applies preliminary benchmarking for quick system assessment and validates the adopted benchmarking weights through a closed-loop process with experimental case studies. The weights provide a robust set of indicators for system assessment for remote and synchronous co-design purposes, and the final set of weights does not require any further experimentation. The experimental data demonstrate that the proposed method allows benchmarking different technologies for remote design purposes, understanding the quality of the support provided, and testing the robustness and transferability of the same tool in different contexts. Instead of focusing on system-based measurement and provided functionalities, it allows assessing performance as well as human and communicative collaboration aspects using a human-based set of heuristics and metrics. During the experimentation, an advanced platform (the CoReD platform) was designed and tested within two industrial Consortia. Collaboration during remote design meetings was analysed, and the obtained results were related to the inner characteristics of the Consortia, including the organisational structure and the users' skills and expertise.

Table 7. Experimental benchmarking weight validation.

The main scientific contribution is a reliable VDE performance evaluation from a human-based viewpoint, obtained by objectifying the human interaction mechanisms that characterise design collaboration and by measuring differences in use across design contexts. The experimental results highlight that (1) the proposed CoReD platform proves well suited for supporting remote DR meetings when compared with other commercial tools and applied to different industrial contexts; (2) applying the method can provide a detailed evaluation of co-design systems, considering both performance and human interaction, and can validate the benchmarking weights by comparing them with the experimental data; and (3) the final set of benchmarking weights can be used to assess other technologies without any further experimentation. The method allows the technological follow-up to be addressed in future research and represents a valid solution to several technological challenges.

Acknowledgements

The authors are grateful to the EROD and TECH-POL companies for the useful collaboration.
