

Pergamon — Computers & Graphics, Vol. 21, No. 4, pp. 421-429, 1997

© 1997 Elsevier Science Ltd. All rights reserved. Printed in Great Britain

0097-8493/97 $17.00 + 0.00

PII: S0097-8493(97)00019-8

Haptic Displays in Virtual Environments

ARCHITECTURES FOR SHARED HAPTIC VIRTUAL ENVIRONMENTS

PIETRO BUTTOLO¹†, ROBERTO OBOE² and BLAKE HANNAFORD³

¹Scientific Research Laboratory, Ford Motor Company, Dearborn, MI, U.S.A. e-mail: [email protected]

²Department of Electronics and Information Science, University of Padua, Padua, Italy

³Department of Electrical Engineering, University of Washington, Seattle, WA, U.S.A.

Abstract - The lack of force feedback in visual-only simulations may seriously hamper user proprioception, effectiveness and sense of immersion while manipulating virtual environments. Haptic rendering, the process of feeding back force to the user in response to interaction with the environment, is sensitive to delay and can become unstable. In this paper we describe various techniques to integrate force feedback into shared virtual simulations, dealing with significant and unpredictable delays. Three different implementations are investigated: static, collaborative and cooperative haptic virtual environments. © 1997 Elsevier Science Ltd

1. INTRODUCTION

In the last few years implementation of shared virtual environments has been particularly active. Distributed, multi-user simulations have been implemented for training [1], education [2], concurrent engineering [3], entertainment [4], and battle simulation [5]. However, the lack of force feedback in visual-only simulations seriously hampers user proprioception, effectiveness and sense of immersion while manipulating objects in virtual environments. Virtual simulations in which haptic devices apply force feedback to the user are receiving growing attention from both industries and universities [6, 7]. In the last few years haptic research has focused on the design of devices, human perception studies, and haptic rendering of virtual environments, leaving little space for integration of haptics into shared virtual environments [8]. Spidar was the first successful implementation of a multi-user haptic simulation. In Spidar, two users can simultaneously grasp and manipulate the same virtual object [9]. In the dual-user configuration described in [10] the joint action of two hands is necessary to successfully complete an assembly task.

There are two major problems in implementing a shared virtual simulation:

(1) The manipulation, and therefore the modification, of the same shared virtual environment by users at different sites might result in diverging representations. Coherency of the virtual environment state must be guaranteed.

† Author for correspondence.

(2) The need to communicate over large distances may introduce a significant latency. Moreover, latency can be unpredictable when communication throughput is not guaranteed, as with the Internet [11].

Both communication latency and system architecture influence the overall delay in processing haptic information. In a graphic-only simulation with significant delay, the user adopts a 'move and wait' strategy to restore hand-eye coordination. In a haptic simulation, delay in processing information can easily bring the system composed of user and device to instability, since haptic displays are active devices which exchange energy with the user.

The focus in implementing a shared haptic simulation must therefore be to reduce delay in processing force information, while satisfying the application's requirements and constraints. Moreover, because of limited development resources, it is often required to 'plug in' force feedback into existing visual simulations. In some cases limitations inherent in running a simulation over large distances make it physically impossible to meet the initial requirements. Let us consider the task of simulating two users moving an engine block across a room, feeling each other pushing and pulling on the shared engine. A high-quality simulation is impossible if communication latency is on the order of hundreds of milliseconds, as often happens across the Internet.

We identified three major classes of shared interaction: (1) browsing static environments, such as feeling haptic information in documents, databases, and web pages; (2) sharing collaborative environments, in which users alternate in manipulating a common environment; (3) interacting in cooperative


environments, in which the task requires the simultaneous action of more than one user.

The system implementation should also depend on how users interact with virtual objects. We will consider two modalities of rendering haptic information. The first, impulsive haptic rendering, models impulsive collisions such as kicking a ball or hammering a nail. The second, continuous haptic rendering, models extended collisions such as pushing against a wall or lifting an object.

In the next section the process of feeding back force as a result of user actions will be briefly described. In particular we will outline its requirements in terms of refresh rate and stability. Then we will analyze three different system implementations for shared haptic virtual environments, pointing out limitations and advantages. A practical example will be described in more detail.

2. HAPTIC RENDERING

Haptic rendering is the process of computing and applying force feedback to the user in response to his/her interaction with the virtual environment. How haptic rendering is implemented should depend on the application requirements, since there is no unique or best solution. In this section we will describe two different approaches: impulsive haptic rendering and continuous haptic rendering.

2.1. Impulsive haptic rendering

In some applications we might be interested in haptically rendering only sharp collisions between objects, such as a tennis racket hitting a ball. We will call the module that detects collisions and calculates impulsive forces the Collision Detection Engine (CDE). For example, let us consider a discrete-time computer simulation with sampling time T, modeling a ball of radius r2 sliding on a frictionless surface and colliding with a virtual racket, as shown in Fig. 1. We will call the module that computes the virtual objects' motion the Dynamic Engine (DE). In this simple case it consists of the following equation:

$x_2(t_n + T) = x_2(t_n) + \dot{x}_2(t_n) T$    (1)

If between time steps t1 and t2 the positions of racket and ball swap, as in the figure, a collision must have happened somewhere in between. Modeling motion with parametric functions, and enforcing contact between racket and ball, we can calculate the time of impact t1 < tc < t2:

$x_1(t) = x_1(t_1) + \dot{x}_1(t_1)(t - t_1)$
$x_2(t) = x_2(t_1) + \dot{x}_2(t_1)(t - t_1)$    (2)
$\| x_1(t_c) - x_2(t_c) \| = r_1 + r_2$

In an elastic collision, kinetic energy and momentum must be equal before and after the collision; therefore:

$m_1 \dot{x}_1^- + m_2 \dot{x}_2^- = m_1 \dot{x}_1^+ + m_2 \dot{x}_2^+$
$\tfrac{1}{2} m_1 (\dot{x}_1^-)^2 + \tfrac{1}{2} m_2 (\dot{x}_2^-)^2 = \tfrac{1}{2} m_1 (\dot{x}_1^+)^2 + \tfrac{1}{2} m_2 (\dot{x}_2^+)^2$    (3)

To simplify the implementation, the force applied to the racket to simulate the impulsive collision can be rendered as a uniform force pulse of duration T and intensity ΔQ/T, where ΔQ is given by

$\int F(t)\,dt = \int m\,dv = m_1 (\dot{x}_1^+ - \dot{x}_1^-) = \Delta Q$    (4)

More realistic impulsive functions are described in [12]. This haptic rendering implementation does not need to run at a high sampling rate since force feedback is applied open loop. The major limitation is that the impulsive rendering method does not work for extended contact with virtual objects. Continuous haptic rendering is a different approach that models this type of collision.
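As a sketch of how Equations (1)-(4) fit together, the fragment below simulates a one-dimensional racket-ball impact: it solves Equation (2) for the contact time, applies momentum and energy conservation as in Equation (3), and converts the impulse of Equation (4) into a uniform force pulse of duration T. The function names, masses, and the one-dimensional simplification are illustrative assumptions, not the authors' implementation.

```python
# Illustrative 1-D sketch of impulsive haptic rendering (Eqs. 1-4).

def impact_time(x1, v1, x2, v2, r1, r2, t1):
    """Solve ||x1(t) - x2(t)|| = r1 + r2 for the contact time tc (Eq. 2).
    Assumes the relative velocity closes the gap."""
    gap = abs(x2 - x1) - (r1 + r2)      # separation at t1
    closing = abs(v2 - v1)              # closing speed
    if closing == 0:
        return None                     # no approach, no collision
    return t1 + gap / closing

def elastic_collision(m1, v1, m2, v2):
    """Post-impact velocities from momentum + kinetic energy conservation (Eq. 3)."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

def force_pulse(m1, v1, v1p, T):
    """Uniform force pulse of duration T and intensity dQ/T (Eq. 4)."""
    dQ = m1 * (v1p - v1)                # impulse transferred to the racket
    return dQ / T

# A racket held still by the user is hit by a ball moving at -2 m/s.
tc = impact_time(x1=0.0, v1=0.0, x2=1.0, v2=-2.0, r1=0.05, r2=0.03, t1=0.0)
v1p, v2p = elastic_collision(m1=0.5, v1=0.0, m2=0.1, v2=-2.0)
F = force_pulse(m1=0.5, v1=0.0, v1p=v1p, T=0.001)
```

The resulting pulse would be commanded to the haptic display for one sampling period, after which the device is left free again.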

2.2. Continuous haptic rendering

When pushing a finger against a rigid wall, the amount of displacement induced in the wall and the force applied by the finger are continuous variables satisfying a physical relation imposed by the wall impedance, i.e. the transfer function between force and displacement, as shown in Fig. 2.

The CDE continuously estimates the force to be fed back to the user’s finger from

$F(t) = K x(t) + B \dot{x}(t)$, if x > 0
$F(t) = 0$, otherwise    (5)

In the case of moving objects, the force F computed by the CDE is used by the DE to simulate motion:

$F(t) = m \ddot{x}(t) + b \dot{x}(t)$    (6)

where m and b are mass and damping relative to object motions. The end-effector of the master manipulator is modeled in the virtual environment

Fig. 1. A virtual racket and a tennis ball collide at t = tc, with t1 = nT < tc < t2 = (n+1)T. The ball motion after collision (drawing on the right) is calculated by equating energy and momentum before and after the collision.


Fig. 2. A stiff virtual wall modeled as a mechanical impedance. The stiffness component (spring) models elastic collisions, while the damper (shock absorber) models energy dissipation.

as a rigid body having zero inertia and damping coefficients and infinite stiffness. In general, objects of complex shape can be modeled as a collection of springs and shock absorbers normal to the surface.
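A minimal sketch of one tick of continuous haptic rendering, pairing the spring-damper wall of Equation (5) (CDE) with an explicit-Euler integration of Equation (6) (DE). The gains K and B, the object parameters m and b, and the sign convention (x > 0 meaning penetration) are illustrative assumptions.

```python
# Illustrative sketch of one tick of continuous haptic rendering (Eqs. 5-6).

def wall_force(x, xdot, K=1000.0, B=5.0):
    """Spring-damper wall impedance (Eq. 5): force only during penetration."""
    if x > 0:                           # x > 0: the finger penetrates the wall
        return K * x + B * xdot
    return 0.0

def step_object(x, xdot, F, m=1.0, b=2.0, T=0.001):
    """One explicit-Euler step of F = m*xddot + b*xdot (Eq. 6, the DE)."""
    xddot = (F - b * xdot) / m
    return x + xdot * T, xdot + xddot * T

# 2 mm of penetration while moving further into the wall at 0.1 m/s.
F = wall_force(x=0.002, xdot=0.1)       # spring + damper contribution
x_new, v_new = step_object(x=0.0, xdot=0.0, F=F)
```

In a real system this loop would repeat every T seconds, with x read from the haptic display encoder and F commanded to its actuators.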

2.3. Integrating graphics and haptics

In practice, the continuous haptic renderer is implemented as a discrete-time process, as shown in Fig. 3. See [13] for a detailed discussion of computer-controlled systems. In a shared virtual environment communication between different components introduces delay into the system. Latency might be present between the CDE and the DE, or between the Haptic Display position and force signals and the CDE.

How fast should we sample? And how will delay affect performance? Some authors suggest a threshold based on human perception, resulting in a requirement for a force reflection bandwidth of at least 30-50 Hz [14]. We found that, to realistically simulate collisions with rigid objects, a stiffness of at least 1000-10000 N/m must be simulated. However, how this relates to the sampling time T and communication delay depends on the haptic device itself and on the CDE-DE implementation. We experimentally measured stability maps 'virtual object stiffness versus communication delay' and 'virtual object stiffness versus sampling time' for our system [15] and continuous haptic rendering. The results are not only quantitatively valid for our setup, but also qualitatively of general significance. When the sampling rate is decreased or the delay is increased, the maximum stiffness that can be simulated without bringing the system to instability drops dramatically (Fig. 4).

In our system, to simulate objects with 1000 N/m stiffness, the sampling rate must be at least 200 Hz, and the delay less than 5 ms. Many state-of-the-art haptic systems use 1000 Hz sampling rates [6]. The real-time requirements for graphic rendering and haptic rendering are therefore quite different, depending on the haptic rendering implementation (see Fig. 5).
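This trade-off also appears in a commonly cited passivity bound for a discretely rendered virtual wall, K < 2b/T, where b is the device's physical damping and T the sampling time (due to Colgate and co-workers). This is not the authors' measured stability map, but it reproduces the same qualitative trend: halving the sampling rate halves the maximum renderable stiffness. The damping value below is illustrative.

```python
# Illustrative passivity bound for a sampled virtual wall: K < 2*b/T.
# Not the stability map of Fig. 4, but the same qualitative behavior.

def max_passive_stiffness(b, T):
    """Upper bound on wall stiffness K [N/m] given device damping b [Ns/m]
    and sampling time T [s]."""
    return 2.0 * b / T

k_1000hz = max_passive_stiffness(b=2.5, T=0.001)   # 1 kHz sampling
k_200hz = max_passive_stiffness(b=2.5, T=0.005)    # 200 Hz sampling
```

With these numbers a 1 kHz loop can passively render five times the stiffness of a 200 Hz loop, matching the drop seen in the stability maps as the sampling rate decreases.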

In the following section we will analyze in detail three different architectures for Shared Haptic Virtual Environments (SHVE). We will give implementation examples using both impulsive and continuous rendering.

3. ARCHITECTURES FOR SHARED HAPTIC VIRTUAL ENVIRONMENTS

We can group architectures for SHVE in three major classes: static, collaborative and cooperative shared virtual simulations [10, 16, 17]. In a static virtual environment each user can explore, by looking and touching, but not modify the environment. Users may or may not see each other during the simulation, but cannot touch each other. Examples


Fig. 3. Block diagram of a haptic virtual environment. An operator holding a haptic device interacts with a virtual object. The region outside the dashed box is a discrete-time process with sampling time T. In a shared virtual environment communication between different components introduces delay into the system. If the CDE and DE run on different sites, then delay3 and delay4 are present. If the CDE and the haptic device controller run on different sites, then delay1 and delay2 are present.



Fig. 4. Stability maps 'virtual object stiffness vs communication delay' [left] and 'virtual object stiffness vs sampling time' [right]. To guarantee system stability with significant delay or a slow refresh rate, stiffness must be reduced.

are browsing geometrical or shared databases on the net. In a collaborative virtual environment users can modify the environment but may not simultaneously shape or move the same virtual object. This scheme can be applied to surgical or professional training, co-located CAD, and entertainment. In a cooperative virtual environment users can simultaneously modify the same virtual object. Users can see and touch each other, directly or indirectly through a common object. Possible applications are similar to those for collaborative environments. In the following paragraphs we will analyze the three architectures in detail.

3.1. Static virtual environment: browsing a database

In a static virtual environment users cannot modify the environment. The restriction of not letting users edit the environment greatly simplifies the implementation. Each user connects to a central database in a Client-Server fashion. If the application requires awareness of others, status information, such as position, can be exchanged by communicating all information to the server, where it can be accessed by all participants, or directly from user to user, in a Peer-to-Peer fashion [18] (see Fig. 6). The simulation can be partitioned into multiple servers, each

managing a different region. Clients migrate during the simulation from one server to another, and can interact only with clients connected to the same server.

After connecting to the server the user locally replicates the virtual environment by either downloading the full Virtual Environment (VE) database or periodically requesting information pertinent to the immediate neighborhood. At the user level this is analogous to a single-user simulation, since there is no interaction with others, as shown for one of the users by the shaded area in Fig. 7. Therefore, this scheme works well for any delay in communicating information. If the application requires continuous contact with the environment, the graphics and haptic loops must be decoupled, as shown on the right in Fig. 5, to guarantee a minimum delay in processing haptic information.

The overall layout is shown in Fig. 7. Note that information pertinent to the VE flows from the server to the users, and not vice versa. At the user sites information is visually processed (Graphic Renderer, GR) and displayed (Visual Display, VD), and forces are computed (CDE) and applied (Haptic Display, HD).
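The decoupling of the haptic and graphic loops on a local replica (right side of Fig. 5) might be sketched as follows: a haptic thread runs at roughly 1 kHz against shared state, while a slower graphic loop redraws at its own rate. The class name, parameter values, and the stubbed device I/O are illustrative assumptions.

```python
# Illustrative sketch of decoupled haptic and graphic loops on a local
# VE replica. Real device I/O is stubbed out.

import threading
import time

class LocalReplica:
    """Local copy of the (static) VE plus the latest device position."""
    def __init__(self, wall_x=0.0, stiffness=1000.0):
        self.lock = threading.Lock()
        self.wall_x = wall_x
        self.stiffness = stiffness
        self.device_x = -0.01        # stub: would be read from the haptic display
        self.last_force = 0.0

def haptic_loop(replica, stop, period=0.001):
    """~1 kHz loop: read position, render a spring force, command the device."""
    while not stop.is_set():
        with replica.lock:
            pen = replica.device_x - replica.wall_x
            replica.last_force = replica.stiffness * pen if pen > 0 else 0.0
        time.sleep(period)

replica = LocalReplica()
stop = threading.Event()
t = threading.Thread(target=haptic_loop, args=(replica, stop))
t.start()
with replica.lock:
    replica.device_x = 0.002         # simulate 2 mm penetration into the wall
time.sleep(0.02)                     # the ~30 Hz graphic loop would redraw here
stop.set()
t.join()
```

Because the haptic loop touches only the local replica, its latency is independent of network delay; the server is consulted only to refresh the replica.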


Fig. 5. A virtual simulation with impulsive haptic rendering [left] and continuous haptic rendering [right] implementations. Note the different constraints in computational update for the two sensory channels.



Fig. 6. Client-Server and Peer-to-Peer connectivities.

3.2. Collaborative virtual environment: 'one at a time'

In a collaborative virtual environment only one user at a time can edit the same virtual object. Replication of the VE at each user site is still convenient to reduce delay in the haptic rendering loop. However, since these local copies can be modified during object manipulation, special care must be taken in enforcing a coherent representation of the VE. There are two alternatives to synchronize users' access to the VE:

(1) Central Server: a central server acts as a scheduler and keeps the only official copy of the VE. Whenever a user gets close enough to an object, a request to edit is sent to the server. The server processes these requests on a first-come-first-serve basis, granting ownership and locking the object to prevent modification by other users. Other users are still allowed to touch and modify their own copy; however, these changes will not be copied at the server site. After the user owning the object has finished editing and moves away, the object representation modified at the user site is sent to and updated at the server site. A request to release object ownership is then sent to the server. The server sends the new object representation to all other users and then assigns the object ownership to the next user (see Fig. 8). Client-Server connectivity fits well with this implementation (see Fig. 6).

(2) Token Ring: users own the right to edit an object according to some predefined rules. In a token ring implementation users are sequentially given permission to edit. After the client owning permission to edit has completed the task, it sends a message to the next user, passing ownership. There is no official copy of the VE at a server site; it is passed around from user to user instead. Since there is no need for a central server, Peer-to-Peer connectivity fits well with this implementation (see Fig. 6). Note that for Client-Server connectivity all communications are directed to/from a server. The server must process all incoming data, and then broadcast them. This scheme clearly does not scale well as the number of users increases, since the server becomes overloaded. In Peer-to-Peer connectivity the computational load is distributed more homogeneously in the system, and exchange of information between different users is faster since it does not need to pass through the server [19]. Multicasting can further reduce the communication load [20].


Fig. 7. A haptic browser allows different users to access a common database. The database is stored at the server site, but upon connection is replicated at each site. Each user can view and touch the environment, but not modify it.



Fig. 8. Collaborative virtual environment, Central Server configuration. The server acts as a scheduler, assigning ownership on a first-come-first-serve basis, and it keeps the only official VE copy. The Finite Element Engine (FEE) is introduced to allow shape manipulation.

In both Central Server and Token Ring schemes the haptic rendering loop is executed at each user site on a local replica of the virtual environment (see Fig. 8). Local simulation is therefore decoupled from that of other participants, and it is possible to cope with large delay, as we will see in the next subsection, where we describe a practical implementation of the Collaborative Token Ring architecture.

3.2.1. Force Feedback Multi-player Squash (FFMS). 'Force Feedback Multi-player Squash' (FFMS) is a practical implementation of the Token Ring Collaborative architecture [21]. A similar but visual-only simulation, 'multi-player handball', has been implemented at the University of Alberta [4]. In real squash, two players alternate in hitting the ball. At a specific time, a player is the designated hitter; the others are waiting, trying to anticipate the next move. In FFMS, more than two players can play together, but as in real squash, only one player, the Active Player (AP), is allowed to hit the ball. Using a haptic device, as in real squash, players feel the collision with the ball. The speed of the ball after collision and the intensity of the impact reflected to the operator are determined using the impulsive haptic rendering method described by Equation (3) and Equation (4).

The system connectivity is Peer-to-Peer to reduce communication delay (see Fig. 9). All players send their relative position to the other players and the server every 50 ms. These messages are also used as a heartbeat, in other words to check that all connections are still alive. After hitting the ball, the AP broadcasts the new position and speed of the ball to the other players. Non-APs update the position of the ball on a local replica. Once a packet from the AP signals a collision, the new data is used to adjust the estimated position. This technique is called dead-reckoning [22]. Graphics and force rendering are synchronous with communication. Force impulses


Fig. 9. Force Feedback Multi-player Squash (FFMS). A server is used to handle connection and disconnection requests to the game, to synchronize the start and end of the game, and to monitor the correct functioning of the system. Multiple clients, one per player, contain the complete model of the system (replication), which consists of the dimensions of the squash court, the positions of all players, and the position and speed of the ball.


calculated using Equation (4) are applied in response to collision with the ball.
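The dead-reckoning scheme described above might be sketched as follows: non-active players extrapolate the ball at constant velocity on their local replica, and snap to the authoritative state when a packet from the AP reports a collision. The class name and the numbers used in the example are illustrative.

```python
# Illustrative sketch of dead-reckoning for the ball on a non-AP client.

class BallReplica:
    def __init__(self, x, v, t):
        self.x, self.v, self.t = x, v, t   # last known state and its timestamp

    def extrapolate(self, now):
        """Dead reckoning: advance the local estimate at constant velocity."""
        self.x += self.v * (now - self.t)
        self.t = now
        return self.x

    def correct(self, x, v, t):
        """A packet from the AP signals a collision: adopt its state."""
        self.x, self.v, self.t = x, v, t

ball = BallReplica(x=0.0, v=2.0, t=0.0)
est = ball.extrapolate(now=0.05)        # local estimate after 50 ms
ball.correct(x=0.09, v=-1.0, t=0.05)    # AP reports a bounce; snap to it
```

The visible discontinuity between the extrapolated and authoritative positions is exactly what the Adaptive-Dynamic-Bounder described next tries to keep small.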

We implemented an Adaptive-Dynamic-Bounder to ensure that the 'dynamics' of the simulation is compatible with the delay in the communication. To this purpose, each client estimates the round-trip delay between peers, and sends the data to the server. The server determines the maximum round-trip delay, and sends back to all the clients the maximum absolute speed and maximum change in speed allowed after a collision of the ball with a virtual racket. These parameters are calculated so that, in the worst case, the ball will not travel more than a distance equal to the length of the squash court in a time equal to the communication delay. This is necessary because the locally estimated position of the ball is not necessarily the position of the ball at the AP. When the AP hits the ball, and passes the role of AP to the next player, the ball could have already traveled out of the court. The longer the delay, the higher the probability of such an occurrence. A similar technique, the Adaptive-User-Motion-Bounder, not yet implemented, could be used to limit the maximum speed achievable by the user in moving the racket, introducing force feedback damping proportional to the maximum delay. In such a way, not only the dynamics of the ball could be limited, but also that of the players.
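The worst-case rule of the Adaptive-Dynamic-Bounder can be sketched as below, under the assumption that the speed cap is simply the court length divided by the maximum round-trip delay; the court length and the clamping function are illustrative, not the game's actual parameters.

```python
# Illustrative sketch of the Adaptive-Dynamic-Bounder's worst-case rule.

def max_ball_speed(court_length, max_round_trip):
    """Worst case: the ball must not travel farther than the court length
    within one maximum round-trip delay."""
    return court_length / max_round_trip

def bound_speed(v, v_max):
    """Clamp a post-collision speed to the server-supplied limit."""
    return max(-v_max, min(v, v_max))

# With a 9.75 m court and a 250 ms worst-case round trip, the cap is 39 m/s.
v_max = max_ball_speed(court_length=9.75, max_round_trip=0.25)
v = bound_speed(55.0, v_max)   # a too-fast hit is clamped to the cap
```

The server would recompute v_max whenever a client reports a new delay estimate and broadcast it to all peers.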

The protocol used for communication is an enhanced UDP built on top of UDP/IP. Standard UDP allows faster round-trip communication, but packets may be lost or received out of order. Our enhanced UDP implementation manages a reliable flow for single-event packets, such as connection and disconnection requests. In the case of a continuous flow of information, such as the positions of the players, packets are sent unreliably, but packets out of order are discarded.
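The discard rule for the continuous position stream might look like the following sketch, where each datagram carries a sequence number and stale packets are dropped rather than reordered; the reliable single-event flow (acknowledgements and retransmission) is not shown, and the class name is illustrative.

```python
# Illustrative sketch of the out-of-order discard rule for the unreliable
# position stream of the enhanced UDP protocol.

class PositionStream:
    def __init__(self):
        self.last_seq = -1
        self.position = None

    def on_packet(self, seq, position):
        """Accept a datagram only if it is newer than the last one seen."""
        if seq <= self.last_seq:
            return False          # out of order or duplicate: discard
        self.last_seq = seq
        self.position = position
        return True

s = PositionStream()
s.on_packet(1, (0.0, 0.0))
s.on_packet(3, (0.2, 0.1))
stale = s.on_packet(2, (0.1, 0.0))    # arrives late: discarded
```

For a continuously re-sent state such as player position, dropping a stale packet costs nothing: the next fresh packet supersedes it anyway.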

3.3. Cooperative virtual environment: 'feeling each other'

In a Haptically Cooperative Environment (HCE) users can simultaneously manipulate and haptically feel the same object. This also involves the ability to feel and push other users while moving in the simulation.

In [10] exoskeleton devices are used in a cooperative assembly simulation. Since these devices are not grounded, users cannot kinesthetically interact, because forces are not transmitted from user to user while pushing on the same object. This means that if a user is passively holding onto an object moved by a different user, hand-eye coordination will be compromised, since the passive user's position is changing in the VE but not in reality. The possibility of kinesthetically interacting with other users makes the simulation truly realistic. On the other hand, this also poses stringent constraints on the system layout and the maximum allowable communication latency. Is it really worth it?

We believe that HCEs are potentially beneficial, in some cases even indispensable, for:

(1) Training of a team of professionals: force feedback has already been integrated into virtual surgical simulators. Let us imagine a team operating on the same real patient. Each team member's interaction with the patient is perceived by all the other members, indirectly when pulling the patient's tissue, or directly because of collisions in the limited workspace. These factors might need to be reproduced in a realistic simulation.

(2) Entertainment: adding force feedback, thus allowing participants to kinesthetically interact with each other, adds a new dimension of fun.

(3) Telerobotics: a telemanipulation system shares many aspects of a shared virtual environment [23]. In current implementations, the mix of virtual fixtures and real manipulators enhances the quality and safety of the remote manipulation. Remote manipulation could be shared among multiple users.

Because multiple users are simultaneously interacting with the same object, it is necessary to allow only one DE to modify the object position, and only one FEE to modify its shape. These are in fact the only two modules that change the status of the virtual environment.

A possible solution is to perform all the computation at the server site (CDE, DE, and FEE), while the clients simply send the server their haptic display (or pointing device) positions, and receive the force-torque vectors to apply back to the users. This system architecture can be improved by distributing the CDE among the clients (see Fig. 10). Each client performs its own collision detection, calculates its own interaction forces with the virtual environment, and then sends the information to the server, which updates positions and shapes. The CDE on the server site is responsible for calculating collisions between virtual objects.

The problem with allowing simultaneous manipulation is that, to enforce coherency, it is not possible to run the dynamic engine on a local replica of the virtual environment. This means that the local copy is updated with a certain delay after manipulation occurs, since data processing (DE, FEE) is performed at the server site (see Fig. 3 and Fig. 10). Kinesthetically linking remote users is therefore particularly challenging. The latency in the haptic rendering loop should not exceed 5-10 ms for stable interaction with stiff virtual objects. Delay of up to 50-100 ms could be manageable if we accept a reduction in the object stiffness or the introduction of artificial damping to keep the haptic device stable (see Fig. 4).

delay_max = { 5-10 ms, stiff objects; 30 ms, soft objects }    (7)

If, during the simulation, one of the users is



Fig. 10. Cooperative Virtual Environment. Each user computes its own interaction forces, but manipulation of the VE is centralized at the server site. This scheme was successfully implemented and tested at the University of Washington for communication latency less than 30 ms.

performing a delicate manipulation that requires high quality force feedback, it is possible to move the DE and FFE to its local site. In this way the client becomes the server for the required amount of time, and the delay in this client force feedback loop does not include any transmission time. However, the other participants are still affected. An a&@ve dynamic bounder can be used to change object stiffness depending on an round trip communication latency estimate to keep the simulation stable.
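The server-role hand-off can be sketched as follows (all names here are hypothetical): a small bookkeeping object tracks which site currently hosts the DE and FFE, and the effective haptic-loop latency seen by each client depends on whether it is co-located with the dynamics.

```python
class DynamicsHost:
    """Tracks which site currently runs the DE and FFE for an object."""

    def __init__(self, server_id):
        self.server_id = server_id
        self.host_id = server_id  # dynamics start at the central server

    def request_local_dynamics(self, client_id):
        # Migrate DE/FFE to the client doing delicate manipulation; its
        # haptic loop then no longer includes any transmission time.
        self.host_id = client_id

    def release(self):
        # Hand the DE/FFE back to the central server.
        self.host_id = self.server_id

    def loop_latency_ms(self, client_id, rtt_ms):
        # Clients co-located with the dynamics see no network delay;
        # everyone else pays the round trip to the current host.
        return 0.0 if client_id == self.host_id else rtt_ms
```

Combined with a latency-based stiffness bound, the remote participants' rendering can be softened automatically while the migrating client enjoys full-fidelity contact.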

4. CONCLUSIONS

In this paper we showed how delay in the haptic rendering process can cause instability, and how different approaches work for specific groups of applications. We laid out three different system architectures that simulate shared interaction in static, collaborative and cooperative environments. The focus was on reducing delay in processing force information while satisfying the applications' requirements and constraints.

Practical implementations were tested for all three architectures. However, more tests and further development are needed to assess these schemes for a larger number of users and for different communication protocols and media.

Acknowledgements-This work was supported by the National Science Foundation (grant #BCS-9058408) and partially by the Allen Innovation Award from the University of Washington Libraries.

REFERENCES

1. Stansfield, S., Miner, N., Shawver, D. and Rogers, D., An application of shared virtual reality to situational training. In Proceedings Virtual Reality Annual International Symposium, 1995, pp. 156-161.

2. Loeffler, C. E., Distributed virtual reality: applications for education, entertainment and industry. Telektronikk, 1992, 89(4).

3. Maxfield, J., Fernando, T. and Dew, P., A distributed virtual environment for concurrent engineering. In Proceedings Virtual Reality Annual International Symposium, 1995, pp. 162-171.

4. Shaw, C. and Green, M., The MR toolkit peers package and experiment. In Proceedings IEEE Virtual Reality Annual International Symposium, 1993, pp. 463-470.

5. Calvin, J., Dickens, A., Gaines, B., Metzger, P., Miller, D. and Owen, D., The Simnet virtual world architecture. In Proceedings IEEE Virtual Reality Annual International Symposium, 1993, pp. 450-455.

6. Massie, T. H. and Salisbury, J. K., Probing virtual objects with the PHANToM haptic interface. In Proceedings ASME Winter Annual Meeting, Session on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 1994.

7. McNeely, W. A., Robotic graphics: a new approach to force feedback for virtual reality. In Proceedings IEEE Virtual Reality Annual International Symposium, 1993, pp. 336-341.

8. Buttolo, P., Shared virtual environments with haptic and visual feedback. http://rcs.ee.washington.edu/BRL/project/shared/.

9. Ishii, M., Nakata, M. and Sato, M., Networked SPIDAR: a networked virtual environment with visual, auditory, and haptic interactions. PRESENCE, 1994, 3(4), 351-359.

10. Pere, E., Gomez, D., Burdea, G. and Langrana, N., PC-based virtual reality system with dextrous force-feedback. In Proceedings of the ASME Dynamic Systems and Control Division, 1996.

11. Claffy, K. C., Internet traffic characterization. Dissertation for the degree of Doctor of Philosophy, CSE, University of California, San Diego, CA, 1994.

12. Brach, R. M., Mechanical Impact Dynamics. Wiley, New York, 1991.

13. Franklin, G. F. and Powell, J. D., Feedback Control of Dynamic Systems. Addison-Wesley, 1991.

14. Shimoga, K. B., A survey of perceptual feedback issues in dexterous telemanipulation: Part I. Finger force feedback. In Proceedings IEEE Virtual Reality Annual International Symposium, 1993, pp. 263-270.

15. Buttolo, P. and Hannaford, B., Pen-based force display for precision manipulation in virtual environment. In Proceedings IEEE Virtual Reality Annual International Symposium, 1995.

16. Broll, W., Interacting in distributed collaborative virtual environments. In Proceedings Virtual Reality Annual International Symposium, 1995, pp. 148-155.

17. Buttolo, P., Hannaford, B. and McNeely, B., An introduction to haptic simulation. Tutorial Notes, IEEE Virtual Reality Annual International Symposium, 1996.

18. Singh, G., Serra, L., Png, W., Wong, A. and Ng, H., BrickNet: sharing object behaviors on the Net. In Proceedings IEEE Virtual Reality Annual International Symposium, 1995, pp. 19-27.

19. Funkhouser, T., Network topologies for scalable multi-user environments. In Proceedings IEEE Virtual Reality Annual International Symposium, 1996, pp. 222-228.

20. Macedonia, M. R., Zyda, M. J., Pratt, D. R., Brutzman, D. P. and Barham, P. T., Exploiting reality with multicast groups. In Proceedings IEEE Virtual Reality Annual International Symposium, 1995, pp. 2-10.

21. Buttolo, P., Oboe, R., Hannaford, B. and McNeely, B., Force feedback in virtual and shared environments. In Proceedings MICAD, Paris, 1996.

22. Gossweiler, R., Laferriere, R. J., Keller, M. L. and Pausch, R., An introductory tutorial for developing multiuser virtual environments. PRESENCE, 1994, 3(4), 225-264.

23. Buttolo, P., Kung, D. and Hannaford, B., Manipulation in real, remote and virtual environments. In Proceedings IEEE Conference on Systems, Man, and Cybernetics, 1995.