
Collaboration in 3D Shared Spaces using X3D and VRML

Lei Wei, Alexei Sourin
Nanyang Technological University, Singapore
e-mail: weil0004 | assourin @ntu.edu.sg

Herbert Stocker
Bitmanagement Software GmbH, Germany
e-mail: [email protected]

Abstract—We propose a framework for visual and haptic collaboration in 3D shared virtual spaces. Virtual objects can be declared as shared objects whose visual and physical properties are rendered synchronously on each client computer. We introduce virtual tools, which are shared objects associated with interactive and haptic devices. We implement the proposed ideas as new pilot versions of the BS Collaborate server and the BS Contact VRML/X3D viewer. In our collaborative framework, two pipelines—visual and haptic—complement each other to provide a simple and efficient solution to problems requiring collaboration in shared virtual spaces on the web. We discuss two implementation frameworks based on the strong and thin server concepts.

Keywords: collaboration; shared virtual spaces; X3D; VRML; haptics

I. INTRODUCTION

Extensible 3D (X3D) and its predecessor, the Virtual Reality Modeling Language (VRML), are open-standard file formats and run-time architectures for representing and communicating 3D scenes and objects. For many years X3D and VRML have successfully provided a system for the storage, retrieval and playback of real-time graphics content embedded in applications supporting a wide array of domains and user scenarios [1]. However, X3D and VRML have several unresolved problems whose importance has grown in recent years as the attention of researchers and developers has turned to the 3D internet and the virtualization of everyday life. One of these problems is that X3D and VRML lack native support for setting up collaborative shared scenes and require third-party software tools to do so.

In Section II, we survey ways of setting up collaborative shared virtual spaces with X3D/VRML, as well as research on haptic rendering on the web. In Section III, we introduce our definitions of shared objects, tools and the other related parts of shared collaboration on the 3D web. In Section IV, we present our implementation and illustrate the developed software with a few notable examples. Finally, we draw conclusions and outline further work.

II. VISUAL AND HAPTIC COLLABORATION IN X3D AND VRML

Several commercial tools have been developed to support collaboration in X3D and VRML scenes.

The Blaxxun Communication Server [2] was developed to support shared VRML. To a certain extent it can also support X3D worlds, if the X3D files are called from a VRML root file and used with the BS Contact VRML/X3D viewer. It provides collaboration in a client-server mode; however, the update rate of the platform is hard-coded at three times per second, which is rather at the low end for implementing real-time interaction. Also, although the authors have a full license of the server, the software is no longer offered and supported on the complimentary web-access basis it once was.

The ABNet software [3] is a Java-based tool for turning a single-user X3D scene into a multi-user environment. It is based on a client-server model and supports asynchronous shared environments using XML messages sent over TCP/IP. It provides an experience quite similar to the Blaxxun Communication Server, and both the server and the client are free for non-commercial and educational purposes.

The Planet 9 GeoFeeder Server [4] is a multi-user server which mainly focuses on geo-visualization applications. RayGun is the corresponding client 3D viewer, which supports X3D and provides tracking, navigation and social networking. However, since the GeoFeeder Server is developed for geo-visualization services such as GPS tracking and data metering, it is not suitable for general-purpose collaborative applications.

The Octaga Collaboration Server [5] is a newly developed collaborative server, upgraded from the Octaga MPEG-4-based MU server. It supports the core and interactive profiles of X3D and provides real-time interaction through the recently introduced Connection and NetworkSensor nodes. Collaboration can be implemented using both server and client modes of the Octaga Player—the corresponding client for viewing X3D content either as a standalone application or as a plug-in to mainstream Internet browsers.

The BS Collaborate server [6] is a networked communication platform which supports X3D multi-user interaction in collaborative shared environments, providing text chat, server-side computation and a client identification mechanism. BS Collaborate works together with the BS Contact VRML/X3D client, a viewer of VRML and X3D scenes. The original pilot version of BS Collaborate was developed for setting up simple shared environments and lacked server-side locking and shared objects. Further design and development work on BS Collaborate is presented in Section IV of this paper.

Compared to vision, very little has been done to make touch a regular part of communication with a computer in X3D shared virtual spaces. H3D [7] has been proposed as open-source, cross-platform, device-independent software. It adopts X3D to express the content and renders the scene both visually and haptically. In H3D, users can also program in C++ and Python, with the Python script and an X3D file executed concurrently. H3D uses the X3D file format to store the scene description; however, it is not exactly standard X3D that can be parsed and displayed in other X3D browsers. Besides, interactions within the scene are defined outside the X3D file, which restricts the independent use of X3D. Furthermore, the basic idea of H3D is to use a local executable file to load scenes and render them locally rather than collaboratively.

Besides H3D, some other research on incorporating haptics into X3D has also been done. A survey of medical applications that make use of Web3D technologies, including haptic interfaces, can be found in [8]. In [9], an X3D extension was proposed for volume rendering in medical education and surgical training, which also incorporates haptic devices for immersive interaction. In [10], a prototype molecular visualizer application was developed based on Web3D standards, plus extensions to support haptic interaction. In [11], an X3D-based haptic approach to e-learning and simulation was proposed. In [12], several haptic modes were introduced for volume haptics, such as viscosity, gradient force, vector follow, vortex tube, surface/friction and combined modes. In [13], an X3D-based 3D Object Haptic Browser was proposed and implemented to augment the user experience of accessing the network. It can also be noted that the attention of some general haptic research is shifting towards integration with X3D [14, 15, 16].

In [17, 18, 19, 20, 21] we proposed to define the geometry of virtual objects, as well as their visual appearance and tangible physical properties, by the concurrent use of implicit, explicit and parametric function definitions and procedures. These three components of an object can be defined separately in their own coordinate domains and then merged together into one virtual object. The physical properties of the objects can be rendered by various haptic devices. Since the function-based models are small in size, it is possible to perform collaborative interactive modifications on them with concurrent synchronous visualization at each client computer at any required level of detail.

III. SHARED OBJECTS AND TOOLS

Several functions typically have to be implemented in 3D networked collaborative applications, namely information transmission, shared objects, shared events, synchronization, and consistency control.

Through vision we receive most of the information about the world around us. In 3D networked collaborative virtual environments, visual information is transmitted either as streamed 2D images or by model transmission (the whole model or its modified parts) followed by rendering on the client computers. While visualization is a rather passive information-collection process, touch is an active and bi-directional one. It allows us to perceive tangible properties of objects through stimulation of the skin, which has different sensitivity in different parts of our body. We also manipulate objects and exert forces on them to receive force feedback. In 3D networked collaborative virtual environments this is done by haptic rendering using force-feedback interactive (haptic) devices. A common and relatively affordable type of such device is a desktop robotic arm, which allows for 3D navigation of a virtual tool as well as force feedback with 3, 6 or more degrees of freedom (translation, rotation, torque). More than one device can be used by a user simultaneously.

To support both visual and haptic collaboration, shared objects with defined physical properties have to be used. We propose to define the tangible physical properties of virtual objects as three components: surface properties, inner density and force field. Like visual properties, the physical properties are associated with some geometry; however, this geometry can be merely a placeholder or even an invisible container for the physical properties. To explore surface friction and density, the virtual representation of the actuator of the haptic device, which can be defined as a virtual point, vector, frame of reference or 3D object, has to be moved over the surface or inside the object. To explore the force field, it is sufficient to simply hold the actuator within the area where the force is exerted.
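As a sketch only, a tangible object with the three components might be declared along the following lines in the classic VRML encoding. The FShape/FGeometry/FAppearance/FMaterial nodes come from our function-based extension [21], while the friction, density and force field names shown here are illustrative rather than the exact interface of the released extension:

    # Sketch: a tangible function-defined object (the physical-property
    # field names are illustrative, in the spirit of [21]).
    FShape {
      geometry FGeometry {
        # implicit surface f(x,y,z,t) >= 0: a unit sphere placeholder
        definition "function frep(x, y, z, t) { return 1 - x*x - y*y - z*z; }"
      }
      appearance FAppearance {
        material FMaterial { diffuseColor 0.9 0.7 0.5 }
        # surface property sampled when the actuator slides on the surface
        friction "function u(x, y, z, t) { return 0.3; }"
        # inner density sampled while the actuator moves inside the object
        density  "function d(x, y, z, t) { return 1 - x*x - y*y - z*z; }"
        # force field exerted while the actuator is held near the object:
        # a spring-like pull towards the centre
        force    "function f(x, y, z, t) { return [-x, -y, -z]; }"
      }
    }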

Ideally, when a virtual scene is set to be shared among several users, each object has to become a shared object, i.e. changes of its location, geometry and appearance have to be seen synchronously by all the users in the scene. In reality, however, only a limited number of objects in the scene are declared shared, because sharing requires frequent transmission across the network of events about the location and orientation of such objects, as well as of their model definitions when they change. In many applications only the location and orientation of shared objects are considered (e.g. shared virtual scenes with objects available in virtual stores).

The scene is normally shared by downloading it to the client computers and locally running the various scripts controlling object behavior, which can be synchronized either by timer signals common to internet-connected computers or by small events sent between the clients and/or the server. In fact, in each shared virtual scene there are always a few shared objects, namely the visual avatars of the users. These, however, are implemented at the viewer level as either standard X3D/VRML objects stored on the client computer or web-located objects whose URLs are provided to the viewer in different ways (e.g. scripts, HTML pages, and databases associated with the scene).

We define a shared object as an object whose visual and physical properties (not only location and orientation) are synchronously rendered by all the client computers as they change.

For haptic rendering, there is a need for an object that can be used as a 3D visual representation (avatar) of the virtual haptic tool. In our approach, such an object is just a shared object associated with the respective haptic device so that it changes its location and orientation, and possibly even its geometry and appearance, while being applied. The motion of this object is seen by all participants synchronously, and it can interact with other objects that possess tangible physical properties. The way such a tool object is associated with the haptic device should be flexible, since these devices have limited rotation angles for their actuators and the user may need to reattach the tool-avatar to reach the point of interest at the required angle.

Shared objects are normally used together with shared events. Shared events are inherited from event transmission, where an event from one client is sent to the other clients either through the server or directly. Whichever way the platform adopts, synchronization is performed to allow multiple users to immerse themselves in the virtual scene and share it. Another crucial issue is how to prevent concurrent operations on shared objects performed by different clients. To ensure synchronization among all clients, consistency control algorithms such as locking and serialization mechanisms are required. Shared objects can be locked by a user for private use. Hence, a shared object selected by a user as a tool can be made unavailable to other users; they will still be able to see how this object moves, following the motion of the user's haptic device actuator. However, we also permit several users to use a shared object as a tool; the motion of the object is then the resultant of the motions of the haptic devices connected to it. Locking can also be useful when interactive changes to an object are being performed by one user while the others are not expected to make any changes to it. Last but not least, each user may require more than one haptic device, with different tools associated with them and used concurrently. These can be either different haptic devices or devices with multiple actuators, which can be considered as different virtual tools in the scene.

Strong locking means that a client has to release the lock before another client can successfully request it. Weak locking means that if a client requests the lock while another client owns it, the owning client loses the lock and the requester gains it.
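The difference between the two modes can be summarized by the following ECMAScript-style sketch; the actual arbitration is performed inside the server, so the function below is illustrative only:

    // Illustrative lock arbitration (performed server-side in reality).
    function requestLock(stream, requester) {
      if (stream.owner == null) {     // lock is free: grant it
        stream.owner = requester;
        return true;
      }
      if (stream.mode == 'weak') {    // weak: the requester preempts the owner
        stream.owner = requester;
        return true;
      }
      return false;                   // strong: the owner must release first
    }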

IV. FRAMEWORK FOR COLLABORATION IN X3D AND VRML SCENES

A. Overview

We developed two collaborative frameworks based on the concepts of the strong and the thin server. The strong server controls all issues concerning the synchronous visualization of shared objects on each client computer, including storage of the parameters of the objects being shared. The thin server only provides locking and the relaying of messages between the client computers, while the shared objects are implemented by exchanging models between the applications on the client computers.

As an implementation platform we used new pilot versions of the BS Collaborate server and the BS Contact VRML/X3D viewer, developed within the framework of our project. These software tools work as a server-client pair to support visual and haptic collaboration in shared virtual spaces defined by X3D and VRML and, optionally, by their extensions, such as the function-based extension which we use for defining the physical properties of virtual objects.

The BS Collaborate server is a networked application which supports information transmission, shared events, shared objects and locking. Compared to other collaborative platforms, it does not impose restrictions on the shared scene, such as a fixed file framework or window layout, and leaves the developers very much in control of most of the collaborative issues.

The BS Contact VRML/X3D viewer is a client for the Microsoft Internet Explorer and Mozilla Firefox web browsers. It can make an X3D scene part of an HTML page. The viewer also allows for haptic interaction with the X3D scene using one or several interactive and haptic devices.

B. Collaboration Based on the Strong Server

When this method is used, any X3D/VRML scene can be made a shared collaborative scene by adding to the scene root file a few script modules that set the sharing and collaborative features and parameters (Fig. 1). This approach allows the developers to control all aspects of the collaborative application while using all of the server features supporting the collaboration.

Figure 1. Setting shared and collaborative features of the scene with the strong server used. Shaded blocks are provided by the server.

BS Collaborate allows for defining the viewer's avatar, text and text-to-speech (TTS) chat, as well as shared objects and tools associated with various interactive and haptic devices. Physical properties can optionally be added to shared objects and tools by using the function-based extension of X3D and VRML [21]. More advanced features, such as user management, can be implemented using third-party web servers, script generators and databases. Each of the shaded modules in Fig. 1 is a script or prototype which only requires adding the names (URLs) of the X3D/VRML files defining the actual avatars, the prototypes of the shared objects and tools, and their initial positions and orientations.
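For illustration, the root file of a shared scene might reference the server-provided modules along the following lines; the file names, prototype interfaces and connection URL below are hypothetical placeholders for the modules shipped with the server:

    # Hypothetical root-file fragment (classic VRML encoding).
    EXTERNPROTO ServerConnection [ field SFString serverUrl ]
      "collaborate/connection.wrl#ServerConnection"
    EXTERNPROTO SharedAvatar [ field MFString avatarUrl ]
      "collaborate/avatar.wrl#SharedAvatar"
    EXTERNPROTO SharedTool [ field MFString toolUrl
                             field SFVec3f  initialPosition ]
      "collaborate/tool.wrl#SharedTool"

    ServerConnection { serverUrl "collaborate.example.com:8315" }
    SharedAvatar { avatarUrl "avatars/walker.wrl" }
    SharedTool {
      toolUrl "tools/house.wrl"     # any X3D/VRML shape can serve as a tool
      initialPosition 0 1 -5
    }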

The core part of the server implementation is the EventStreamSensor, a specific data stream for transmitting certain objects and events in the shared environment between a network stream and the X3D event graph. All the values that are transmitted through the EventStreamSensor and stored on the server represent the scene state. With this mechanism, any field value in an X3D/VRML scene can send and receive updates via a TCP/UDP socket or an HTTP server. To increase flexibility in application design without increasing the complexity of the implementation, it is possible to have multiple EventStreamSensor nodes in the scene assigned the same stream name, which allows for sending events from one part of the scene to another. Besides, the associated stream name can be changed dynamically, which makes the EventStreamSensor node disconnect from one stream and connect to another. This allows a script to purposefully select a stream and set values there. We use this mechanism for initializing a shared object before we create it.
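For illustration, sharing the pose of a tool through a named stream might look like the following fragment; the EventStreamSensor interface is shown here in a simplified, illustrative form rather than as its exact field set:

    # Simplified sketch: the pose of TOOL is shared through the stream
    # "toolPose" (the sensor's field declarations are illustrative).
    DEF SENSOR EventStreamSensor {
      streamName "toolPose"
      # user-declared fields below are synchronized by the server
      exposedField SFVec3f    translation 0 0 0
      exposedField SFRotation rotation    0 0 1 0
    }
    DEF TOOL Transform {
      children Inline { url "tools/house.wrl" }
    }
    # remote updates arriving on the stream move the local instance;
    # local interaction routed into SENSOR updates all other clients
    ROUTE SENSOR.translation_changed TO TOOL.set_translation
    ROUTE SENSOR.rotation_changed    TO TOOL.set_rotation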

Shared objects and tools have to be defined as shape prototypes. Each prototype may have several instances present concurrently in the shared environment. In the shared object/tool prototypes, any number of fields can be exposed to the server and updated by various user interactions. These exposed fields can be position, orientation, geometry, appearance, physical properties, or even the whole Shape node. At collaboration run time, shared events are generated by different user interactions, such as movements, clicks or modification commands. These shared events are received and interpreted by scripts in the shared object/tool prototypes, triggering the corresponding field changes. In this way, all instances of a given shared object/tool prototype are updated simultaneously; since the procedure is broadcast by the server and identically executed on all the clients, they all stay consistent. Moreover, since only the update events are transmitted, the server is not overloaded. We tested the maximum possible shared-event update rate by using the haptic device interval as the trigger (1000 Hz), which is beyond the normal speed of user interaction. The result was quite acceptable for interactive shared collaboration (less than a 1 s delay).
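A shared tool prototype exposing two of its fields might be sketched as follows; in the real framework such exposed fields are routed to and from the corresponding stream, so that every instance is updated on all clients:

    # Sketch of a shared tool prototype with exposed fields.
    PROTO SharedSphereTool [
      exposedField SFVec3f translation 0 0 0
      exposedField SFColor color       1 0 0
    ] {
      Transform {
        translation IS translation
        children Shape {
          appearance Appearance {
            material Material { diffuseColor IS color }
          }
          geometry Sphere { radius 0.1 }
        }
      }
    }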

To ensure that all new clients display the current state of the scene when they connect, the server backs up all the shared fields in a database and sends them to new clients when they log in. Even if there are no users in the scene, the scene parameters are kept for further use. There is also the possibility that the application code updates a value too frequently; in that case the client is allowed to drop or consolidate events if too many events are sent for the available bandwidth and if the reduction does not change the semantics. Besides sharing the scene state, clients also need to communicate with each other to maintain a consistent presentation. This, however, is not state that should be stored, and new clients joining later should not receive such events.

Server-side locking is another important feature of the server. To support it, special fields have been added to the EventStreamSensor which define the mode of server-side locking (strong or weak) as well as the lock owner and lock state. Every time the locking state of a stream changes, the respective field of all EventStreamSensors associated with that stream emits information about the client who now owns the lock, including the client's login name.
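A client can observe the lock owner in roughly the following way; the lockMode and lockOwner names are illustrative stand-ins for the actual fields added to the EventStreamSensor:

    # Illustrative sketch of the locking interface.
    DEF SENSOR EventStreamSensor {
      streamName "toolPose"
      lockMode   "strong"             # "strong" or "weak"
    }
    DEF WATCH Script {
      eventIn SFString set_lockOwner
      url "javascript:
        function set_lockOwner(name, t) {
          print('stream is now locked by ' + name);  // client login name
        }"
    }
    ROUTE SENSOR.lockOwner_changed TO WATCH.set_lockOwner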

Figure 2. Networked collaboration of two users in a shared scene. The left user is moving the fancy function-defined object while the right user is using a standard X3D/VRML model of a house as a tool.

In Fig. 2, we show a collaborative scene where several shared objects have been defined. The objects displayed in the figure are made tangible by associating them with physical properties as described in [21]; hence, they can be felt through the networked haptic devices. If any of the shared objects is assigned to be a tool, by linking it from the shared object modules of the root file (Fig. 1), it will move following the motion of the respective device and all the users will synchronously see its motion. Each user can have one or several tools concurrently, which have to be associated with the respective interactive or haptic devices. If the tool shape has tangible physical properties, other users will be able to feel it, as they do other tangible objects. When a shared object is selected as a tool, either strong or weak locking can be applied, prohibiting or allowing other users to reclaim this object as a tool. Hence, if two clients with haptic devices share the same tool object, the result is a synchronized motion of the networked haptic devices and a coordinated motion of the tool on the client screens. The users are then able to physically feel each other's motion.

C. Collaboration Based on the Thin Server

This collaboration framework (Fig. 3) expects very few server functions to be used for maintaining collaboration, relying on the clients to implement the rest. The server only provides the locking mechanism and information transmission between the clients. The client-based software is responsible for sending all modified object models to the other clients through the server whenever such modifications occur. This approach makes the application software independent of the collaboration server used and allows the developers to easily implement maximum sharing by exposing all shared fields to run-time changes: all properties of shared objects and tools can be dynamically changed and synchronized, including geometry, appearance, and physical properties. This method is particularly efficient when relatively small function-defined (procedural) models are used for defining shared virtual objects and their properties; however, we have used it with standard X3D/VRML shapes as well.


Figure 3. Setting shared and collaborative features of the scene with the thin server used. Shaded blocks are provided by the server.

All collaborative modifications are controlled by the clients. For each property that is to be shared, a special routing script has to be set up to monitor events and recompose output. When a client initiates an action, it first requests the lock from the server. After that, interactions such as geometry or color modifications are received by the local routing script, and a corresponding output based on the client's status (e.g. the object's location, orientation, etc.) is composed. The interaction causes the server to relay the updates, which are finally received and executed by all the clients. The server is not responsible for storing the scene status. Instead, one of the users in the scene is responsible for temporarily storing the current status of the scene. When all the clients leave the scene, its status is lost unless it has somehow been stored manually before (e.g. a 3D snapshot of the scene).
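A client-side routing script for one shared property might be sketched as follows (the event and message names are illustrative):

    # Sketch: monitor a local interaction, acquire the lock, and compose
    # an update message for the server to relay.
    DEF ROUTER Script {
      eventIn  SFVec3f  set_translation   # from the local sensor or device
      eventIn  SFBool   set_haveLock      # granted/revoked by the lock script
      eventOut SFBool   requestLock       # to the lock script
      eventOut SFString message_out       # to the message-sending prototype
      field    SFBool   haveLock FALSE
      url "javascript:
        function set_haveLock(v, t) { haveLock = v; }
        function set_translation(v, t) {
          if (!haveLock) { requestLock = TRUE; return; }
          // compose an update all other clients can execute verbatim
          message_out = 'MOVE tool1 ' + v.x + ' ' + v.y + ' ' + v.z;
        }"
    }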

The main events in the collaborative framework are:
• A new client joins the modeling session;
• The model is modified;
• A new model is loaded;
• The model has to be saved or exported for further use.
The details of the event implementation follow.

When a new client joins the current collaborative session, the user has to see the scene in its current status, with all the modifications which may have been made by other users before the new one joined. This means the most recently updated model of the scene has to be forwarded to this client. When a new client is initialized, it first detects whether it is joining a new session or an existing one. This is done by inspecting the Initialization lock implemented by the EventStreamSensor node. This lock is held by the primary client, the one who joined the session first, who is responsible for sending the current modeling object to new clients joining the design session. If the client finds that the Initialization lock is available, the session is declared a new session, and some original or void scene is loaded and visualized. If the client finds that the Initialization lock has already been acquired, it sends a message requesting the current scene. When the primary client leaves the session, the Initialization lock mechanism assigns one of the remaining clients to be the primary client.
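The join-time logic might be sketched as follows; the lock, message and file names are illustrative:

    # Sketch: probe the Initialization lock to detect a new session.
    DEF JOIN Script {
      eventOut SFBool   tryInitLock       # request the Initialization lock
      eventIn  SFBool   set_lockGranted   # reply from the lock script
      eventOut SFString message_out
      url "javascript:
        function initialize() { tryInitLock = TRUE; }
        function set_lockGranted(granted, t) {
          if (granted) {
            // new session: we are the primary client; load the void scene
            Browser.loadURL(new MFString('void.wrl'), new MFString());
          } else {
            // existing session: ask the primary client for the scene
            message_out = 'REQUEST_SCENE';
          }
        }"
    }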

When the scene is modified by any of the clients, the modifications must propagate to all other clients and update the scene as they see it on their computers. The scene can be defined in standard or extended X3D/VRML. It is only essential that the scene description be in ASCII form, so that it can be transmitted as messages, and that the scene browsers at the client computers be able to render the received scene descriptions. When an object modification is made by any of the clients, the Modification lock has to be checked. If another client is modifying the shape at the same time, the modification to the current model is ignored. Otherwise, the modification is converted into a message and sent to the server, which broadcasts it to all the clients participating in the collaborative session. The Modification lock is released by the client after the modification message has been successfully sent.

When a new scene is loaded or reset by any of the clients, the new scene has to be delivered to all the participants. Since the design session is web-based, the scene model can be entered through an HTML text box; it is then sent as a message by JavaScript to all the clients participating in the design session.

Finally, the users should be able to save the design at any time on their client computers, since the server is not responsible for storing such information as it is in the case of the strong server. The source code of the scene is assembled as strings in X3D, VRML or any other ASCII format the application may require. The strings are then printed in the console pane of the X3D/VRML browser, since for security reasons neither X3D/VRML scripts nor JavaScript in HTML can access local hard drives. The code can then be manually copied from the console pane to a file on the hard drive.
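The saving step thus reduces to printing the assembled source to the console, roughly as follows:

    # Sketch: print the assembled scene source to the console pane,
    # from where the user copies it into a file.
    DEF SAVER Script {
      eventIn SFBool   set_save           # e.g. routed from a TouchSensor
      field   MFString sceneSource []     # accumulated scene description
      url "javascript:
        function set_save(pressed, t) {
          if (!pressed) return;
          for (var i = 0; i < sceneSource.length; i++)
            print(sceneSource[i]);        // appears in the console pane
        }"
    }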

The messages are transmitted over the network by a pair of X3D/VRML prototypes responsible for sending messages from the user and receiving messages from the other current users, respectively. These prototypes integrate naturally with the server-side locking mechanism, which means that while a message is being transmitted, all messages from other clients are automatically ignored. The message chunk size is configurable, with a default of 4096 bytes. Depending on the estimated message length, longer or shorter chunk sizes can be set to balance transmission efficiency against the available Internet bandwidth. When a new message arrives, it is processed by the local script. When a complete message has been received, different methods are called according to the message type. If the message contains the whole scene, the current modeling scene is replaced; this is used for loading a new scene or for initializing the current modeling scene when a new client joins the session. If the message contains modifications, the current modeling scene is modified accordingly. If the message requests the current modeling scene, only the primary client sends back the whole current scene. Since the presence of CRLF characters is essential to keep the text messages editable when the scene source code has to be saved, we replace CRLF with special characters and restore them after the message is received. To solve the partial transmission problem, we add a prefix and a suffix to each message transmitted over the Internet. When the prefix is found in the received message, the modeling tool begins to accumulate the received data until the suffix is found. In this way, the messages are received reliably.
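The framing logic can be summarized by the following ECMAScript sketch; the delimiter strings and the CRLF substitute are illustrative choices, not the exact ones used in the prototypes:

    // Sender side: escape CRLF, add prefix/suffix, split into chunks.
    function encodeMessage(text) {
      var body = text.split('\r\n').join('&crlf;');  // keep code editable
      return '<MSG>' + body + '</MSG>';
    }
    function splitChunks(message, size) {            // default size: 4096
      var chunks = [];
      for (var i = 0; i < message.length; i += size)
        chunks[chunks.length] = message.substring(i, i + size);
      return chunks;
    }
    // Receiver side: start accumulating at the prefix, stop at the suffix.
    var buffer = '';
    function onChunk(chunk) {
      buffer += chunk;
      if (buffer.indexOf('<MSG>') < 0) { buffer = ''; return; }
      if (buffer.indexOf('</MSG>') >= 0) {
        var body = buffer.split('<MSG>')[1].split('</MSG>')[0];
        var text = body.split('&crlf;').join('\r\n'); // restore CRLF
        // ...dispatch according to the message type...
        buffer = '';
      }
    }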

An example of using the developed collaborative framework is illustrated in Fig. 4. Here, a model of a human brain reconstructed from MRI data is collaboratively explored and edited by several networked users. The model of the brain and the sampling tool are shared by all the clients joining the session. The surface of the brain is reconstructed from the MRI data using a trilinear interpolation function, while the MRI file is located on the web server and hence shared by the users. The brain color is also obtained from the MRI file, with a function mapping density to color. The surface of the brain is declared tangible so that it can be felt with an optional haptic device. Whenever the sampling tool (a sphere) is moved, all the clients see it at its new location and with a new texture mapped on its surface, which is a color sampled from the MRI data file.
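For reference, trilinear interpolation reconstructs the density at an arbitrary point inside a voxel cell from the MRI samples d_{ijk} at the cell's eight corners, with (u, v, w) the fractional coordinates within the cell:

    d(u,v,w) = \sum_{i,j,k \in \{0,1\}} d_{ijk}
               \,\big(iu + (1-i)(1-u)\big)
               \,\big(jv + (1-j)(1-v)\big)
               \,\big(kw + (1-k)(1-w)\big)

An isosurface of this function defines the brain surface, and the same sampled values drive the color mapping.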

Figure 4. Example of a collaborative scene with the thin server used. The tool is a sphere which is moved across the surface of the brain model to display the sampled colors mapped from the original MRI data.

The same collaborative framework is used in two other applications illustrated in Fig. 5. Fig. 5a shows an educational shared interactive tool for making geometric shapes defined by mathematical formulas; function-defined models are exchanged between the clients in this case. Fig. 5b shows a pilot version of shared orthopedic training simulation software; here, standard VRML shapes are used for defining the shared objects (fractured bones) and tools (implants).

V. CONCLUSION

We have proposed a visual and haptic collaborative framework for shared X3D and VRML scenes. We define virtual objects with physical properties, which can be explored haptically with various force-feedback devices. Virtual objects can be declared as shared objects whose visual and physical properties are rendered synchronously on each client computer. We introduce virtual tools, which are shared objects associated with interactive and haptic devices. The proposed framework can be used for making shared collaborative environments with both standard X3D and VRML and their extensions. The proposed ideas have been implemented in new versions of the Bitmanagement BS Collaborate server and BS Contact VRML/X3D viewer. We developed two implementation frameworks based on the strong and thin server concepts. A video illustrating some of the experiments with the developed software is available at http://www.ntu.edu.sg/home/assourin/fvrml/video.htm.

Figure 5. Shared collaborative applications where model exchange is implemented at the client side: (a) a shared tool for making function-defined geometric shapes; (b) a shared orthopedic training simulation.

ACKNOWLEDGMENT

This project is supported by the Singapore Ministry of Education Teaching Excellence Fund Grant “Cyber-learning with Cyber-instructors”, by the Singapore National Research Foundation Interactive Digital Media R&D Program under research grant NRF2008IDM-IDM004-002 “Visual and Haptic Rendering in Co-Space”, and partially by the Singapore Bioimaging Consortium Innovative Grant RP C-012/2006 “Improving Measurement Accuracy of Magnetic Resonance Brain Images to Support Change Detection in Large Cohort Studies”.


REFERENCES
[1] X3D and VRML: http://www.web3d.org.
[2] Blaxxun: http://www.blaxxun.com.
[3] ABNet software: http://kimballsoftware.com/abnet.
[4] Planet 9 GeoFeeder: http://www.planet9.com.
[5] Octaga: http://www.octaga.com.
[6] BS Collaborate: http://www.bitmanagement.com.
[7] H3D: http://www.h3d.org.
[8] N. W. John, “The impact of Web3D technologies on medical education and training”, Computers & Education, Elsevier, vol. 49(1), 2007, pp. 19-31.
[9] Y. Jung, R. Recker, M. Olbrich, and U. Bockholt, “Using X3D for medical training simulations”, Proc. 13th Int. Symp. on 3D Web Technology (Web3D 08), ACM, Los Angeles, California, 2008, pp. 43-51.
[10] R. A. Davies, N. W. John, J. N. MacDonald, and K. H. Hughes, “Visualization of molecular quantum dynamics: a molecular visualization tool with integrated Web3D and haptics”, Proc. 10th Int. Conf. on 3D Web Technology (Web3D 05), 2005, pp. 143-150.
[11] F. G. Hamza-Lup and I. Sopin, “Haptics and extensible 3D in web-based environments for e-learning and simulation”, Proc. 4th Int. Conf. on Web Information Systems and Technologies (WEBIST 08), Funchal, Madeira, Portugal, 2008, pp. 309-315.
[12] K. Lundin, A. Persson, D. Evestedt, and A. Ynnerman, “Enabling design and interactive selection of haptic modes”, Virtual Reality, Springer, vol. 11, 2006, pp. 1-13.
[13] C. Magnusson, C. Tan, and W. Yu, “Haptic access to 3D objects on the web”, Proc. EuroHaptics 2006.
[14] M. Eid, A. Alamri, and A. El Saddik, “MPEG-7 description of haptic applications using HAML”, Proc. IEEE Int. Workshop on Haptic Audio Visual Environments and their Applications (HAVE 06), 4-5 Nov. 2006, pp. 134-139.
[15] M. Eid, S. Andrews, A. Alamri, and A. El Saddik, “HAMLAT: a HAML-based authoring tool for haptic application development”, Proc. 6th Int. Conf. EuroHaptics 2008, Madrid, Spain, June 11-13, 2008, in “Haptics: Perception, Devices and Scenarios”, LNCS 5024, Springer, 2008, pp. 857-866.
[16] F. R. El-Far, M. Eid, M. Orozco, and A. El Saddik, “Haptic applications meta-language”, Proc. 10th IEEE Int. Symp. on Distributed Simulation and Real-Time Applications (DS-RT 06), 2006, pp. 261-264.
[17] Q. Liu and A. Sourin, “Function-based shape modelling extension of the Virtual Reality Modelling Language”, Computers & Graphics, Elsevier, vol. 30(4), 2006, pp. 629-645.
[18] Q. Liu and A. Sourin, “Function-defined shape metamorphoses in visual cyberworlds”, The Visual Computer, Springer, vol. 22(12), 2006, pp. 977-990.
[19] L. Wei, A. Sourin, and O. Sourina, “Function-based visualization and haptic rendering in shared virtual spaces”, The Visual Computer, Springer, vol. 24(10), 2008, pp. 871-880.
[20] A. Sourin, O. Sourina, L. Wei, and P. Gagnon, “Visual immersive haptic mathematics in shared virtual spaces”, Transactions on Computational Science III, LNCS 5300, Springer, 2009, pp. 1-19.
[21] L. Wei, A. Sourin, and H. Stocker, “Function-based haptic collaboration in X3D”, Proc. 14th Int. Conf. on 3D Web Technology (Web3D 09), Darmstadt, Germany, 16-17 June 2009, ACM Press, pp. 15-23.
