
Mixed Reality Techniques for Visualizations in a 3D Information Space

Ralf Dörner
Wiesbaden University of Applied Science, Germany

Christian Geiger, Anke Lehmann
Düsseldorf University of Applied Science, Germany

Leif Oppermann
University of Nottingham, UK

Abstract

With 3D information space we denote a three-dimensional visualization that contains 2D visualizations and puts them in a semantic context. In this paper, we show how Mixed Reality (MR) can be exploited as a technology to realize better interaction in a 3D information space and, as a result, to develop new interactive visualization techniques. Here, the real space is used as a metaphor for interacting with the 3D information space. In this context, we present MR-based interaction techniques for basic operations like locate, slice, manipulate, freeze and compare. In addition, we examine the hardware set-up that serves as a framework for user interaction and present a novel systems approach to the technical implementation of the MR-based visualization system. Our evaluation shows advantages of our interaction techniques such as the direct experience of distances in the 3D information space, the use of a real frame of reference for virtual visualizations, and the intuitive specification of positions and orientations.

1 Introduction

In the information age, visualization is of key importance because it supports us in finding information in data. As a consequence, there is considerable demand for optimizing existing visualization techniques and a quest for finding new forms of visualization. Mixed Reality (MR) as a novel methodology for creating imagery has the potential to contribute to the improvement of visualization techniques – especially since it allows integrating imagery into our 3D world [14].

Although we actually live in a 3D world, most visualization techniques today are two-dimensional. One major reason is that it has been technologically much easier to draw and print on 2D surfaces or to display visualizations on a 2D screen. Like stereoscopy and Virtual Reality, MR can be seen as a technology that helps to overcome these technical limitations in presenting 3D visualizations. 2D visualizations, however, are not inferior to 3D visualizations in principle, and it is not desirable to substitute 2D with 3D in general. Because 2D visualizations are more commonplace, people are used to working with them. In addition, one major drawback of 3D so far has been that users find it difficult to interact with virtual 3D visualizations – in contrast to real 3D visualizations (like scale models). In this context, suitable MR-based techniques for interacting with 3D visualizations could mitigate these problems by combining the strengths of visualization in a virtual space with the advantages of visualization in a real space.

In this paper, we show how MR-based interaction techniques together with dedicated technological set-ups can be conceived, realized and used for visualization purposes. Our approach to visualization is to use the real world as an interaction metaphor for a 3D information space.

Figure 1: Exploring a 3D information space using an MR-based interaction technique in a minimal set-up and in a reference bounce box set-up

A 3D information space is a 3D visualization in which a number of 2D visualizations are integrated in such a way that their position in the 3D information space is meaningful. For example, in Figure 1 we can see a 3D information space in which 2D visualizations of different stages of human evolution are depicted – the distance of such a 2D visualization from the viewer is correlated with time. A simple MR-based interaction technique would be to have the user hold an interaction device (here: a sheet of paper) and move it back and forth in order to visualize the evolution (here: the video stream from the webcam filming the user's hands and the sheet of paper is augmented with a 2D visualization – this virtual image is positioned exactly over the image of the paper sheet in the video using computer vision technology). Depending on the position of the interaction device, different 2D visualizations are displayed on it using MR techniques.
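To make this mapping concrete, the following minimal sketch shows how the tracked depth of the paper marker could select which 2D visualization to overlay. All names and the distance range are illustrative assumptions; the paper does not prescribe this implementation.

    // Sketch: the tracked distance of the paper marker from the camera selects
    // which 2D visualization (here: evolutionary stage) to overlay.
    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <vector>

    struct DepthMappedSpace {
        double nearZ = 200.0;             // closest expected marker distance (mm), assumed
        double farZ = 800.0;              // farthest expected marker distance (mm), assumed
        std::vector<std::string> frames;  // 2D visualizations ordered along the depth axis

        // Map the marker's camera distance to an index into the ordered frames.
        // Assumes frames is non-empty.
        std::size_t frameForDepth(double markerZ) const {
            double t = std::clamp((markerZ - nearZ) / (farZ - nearZ), 0.0, 1.0);
            return static_cast<std::size_t>(t * (frames.size() - 1) + 0.5);
        }
    };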

2 Background

Virtual Reality techniques and interactive 3D graphics provide powerful means for the visualization of scientific data and models [16]. The rich set of interaction techniques in VR and 3D is used to operate on large data sets that describe complex visualization problems. These data sets are often described as complicated 3D structures and visualized in 3D because scientific visualization problems have inherent spatial properties. To take advantage of the strengths of visualizations in 2D and 3D, many researchers propose to combine 2D and 3D views [19]. Such combinations are used, for example, in window managers [20] like XGL. In addition to combining 3D visualizations with 2D views, the combination of 3D visualization techniques with haptic rendering provides a more natural means of interacting with the data. Lawrence et al. describe the use of rendering modes that combine haptic and visual interfaces [17]. Their findings suggest that users understand data more clearly if a haptic component augments the 3D visualization. For data with no inherent spatial properties (e.g. business data) a 2D visualization is often more appropriate, and interaction techniques from VR cannot be applied easily. Active haptic interaction techniques (e.g. using devices like a PHANTOM) can be unfamiliar to users because haptic rendering modes do not directly correspond to everyday haptic experiences.

The use of Mixed Reality interaction techniques to represent 2D visualizations and to interact with them using passive haptic devices provides a solution to these problems. With Mixed Reality (MR) we denote the view-dependent combination of real and computer-generated imagery that may be considered as a point in a continuous transition between reality and virtuality [14]. Mixed Reality offers the potential to increase the repertoire of interaction techniques by seamlessly integrating new interaction devices that are simple real-world objects. These physical objects are used as passive haptic devices which provide feedback simply by their physical presence, without being controlled by the computer.

The use of such physical props has already proven to be useful in visualization applications that are based on VR techniques. Hinckley designed a set of two-handed, neurosurgical imaging tools that rely on commonly used surgical interaction metaphors [6]. Several other tangible interfaces have been developed using a pen and pad to interact with virtual objects [8]. The design of intuitive VR-based exploration tools for scientific visualization is described by de Haan in [5]. Based on different interaction scenarios, a virtual environment for visualization has been developed. Using a transparent pad and a pen as input devices and a responsive workbench, this set-up realizes the "laboratory table" metaphor for interaction. A number of 3D interaction techniques have been implemented, like the worlds-in-miniature (WIM) metaphor for overview, navigation and manipulating rotation, or the construction of a "region of interest" box with pad and pen. One noteworthy feature of interaction techniques that employ physical props is the intuitive integration of two-handed techniques. Based on Guiard's work [4], many researchers have studied one- and two-handed techniques combined with passive haptic feedback for intuitive interaction schemes. In our work, we seek to exploit the potential of such physical props in order to build novel MR-based interaction techniques. As a consequence, we need to design not only novel interaction techniques but also novel interaction devices. Given that in our application context we are especially interested in 2D visualizations, we have to take into account that not only 3D interaction techniques with physical props have proven to be useful: Lindeman et al. showed that a hand-held window with passive haptics provides an effective interface for manipulation tasks if 2D interaction on the pad is provided [7].

Mixed Reality methodologies have been applied for visualization purposes, and some of them made use of physical props. For instance, a special physical prop, the personal interaction panel, has been used by the Studierstube team to visualize and collaboratively explore dynamical systems with augmented reality. The evaluation showed that the interaction techniques used greatly simplified the visualization tasks [3]. A recent approach from the Technical University of Graz allows calculating the volume data for liver resection planning in real time and features virtual resection using physical props for slicing [1]. An AR widget framework for augmented interaction is presented in [2], including three specific physical widgets: a magnifying widget implementing a magic-lens interaction technique, a cylinder widget for viewing life-size objects, and a cube widget that allows capturing distant objects and enhancing them with supplementary data on the cube's sides.
While many of the approaches described previously used expensive tracking techniques, optical tracking based on fiducial markers provides a low-cost alternative. Slay et al. investigated interaction modes for AR visualization and focused on selection in AR views. The "physical" movement and orientation of ARToolkit markers was proposed as a natural interaction for changing views. Selection was supported by a second marker attached to a pointing device [9].

None of the previous work, however, has designed an MR set-up and interaction techniques that are dedicated to the visualization of a 3D information space. Previous work concentrated either on scientific visualization or on information visualization, while a realization of a 3D information space should be applicable to both subfields. A major drawback of current systems that employ MR technologies for visualization purposes is that they are difficult to integrate into a common workplace (e.g. because they use obtrusive equipment, require too much space, or have specific requirements concerning environmental conditions like lighting). Many systems use MR-based interaction only for simple tasks not related to visualization, like the rotation of the visualization [9]. However, research in interactive visualization shows that there exist elementary interaction techniques for specific visualization purposes. For example, a general principle (the Visual Information Seeking Mantra) was formulated by Ben Shneiderman in [12]: overview first, zoom and filter, then details-on-demand. Dedicated probing techniques allow for inspecting data using single point, line, plane (slice) and sub-volume probes [5]. Thus, for the visualization of a 3D information space we need to identify and support suitable interaction techniques for general tasks (like navigate, save and restore) as well as visualization-specific tasks (like zoom, filter and compare).

3 Interaction Framework

For the design of MR-based interaction techniques it is important to consider the set-up that serves as a framework for interaction. We found the following questions to be essential:

• Which kind of hardware (sensors and displays) is used in order to create a Mixed Reality?
• What are the interaction devices used?
• What is the spatial layout of sensors, displays, interaction devices and user position? How does this spatial layout support hand-eye coordination during interaction?
• What part of the visualization is virtual, what part is real? What physical props are needed to implement the latter part? How are the physical props integrated in the spatial layout?
• How can the set-up improve the quality of the Mixed Reality experience (e.g. by avoiding the occurrence of occlusion in camera-based tracking)?

By examining the set-ups of existing MR systems intended for related applications (see section 2) and by conducting our own experiments, we identified several set-ups that answer the questions raised above. From these set-ups we present two in this paper: first, a minimal set-up that is as inexpensive as possible while still being useful; second, a novel set-up that is particularly suited to our application and exhibits specific advantages over existing MR set-ups.

The minimal set-up uses a webcam on a tripod as a sensor and a PC monitor or laptop as a display. As interaction devices we use a mouse, a keyboard and two sheets of paper with markers printed on them to facilitate marker-based tracking. One sheet of paper (the space-paper) is used to specify the position and orientation of the 3D information space in the user's view. It lies on a table and can be translated and rotated by the user with one hand. The other sheet of paper (the plane-paper) is held in both hands and is used to interactively specify and visualize the position and orientation of the plane where one of the 2D visualizations is located in the 3D information space. The spatial layout is depicted in Figure 1. Hand-eye coordination is achieved by switching the focus between display and plane-paper – the plane-paper needs to be held close enough to the display that, even if the user focuses on the display, the plane-paper is still present in the peripheral vision. This way, the user can better match the kinesthetic reference frame of the hands with the visual reference frame given in the visualization [11]. An alternative spatial layout can be achieved by holding the plane-paper behind the display, thus having the display act as a magic lens the user sees through in order to experience the Mixed Reality. This offers a more direct manipulation experience. However, this set-up was perceived as uncomfortable in our informal user tests, since the user needs to bow forward towards the display in order to hold the plane-paper far enough behind the display to keep a minimum distance between camera and plane-paper. In our minimal set-up the visualization is completely virtual.

The idea behind our second, more sophisticated set-up is that we want to represent the 3D information space's frame of reference in the real world. This makes the frame of reference more permanent, independent from the viewpoint of the user, and haptically accessible. The reason for stressing the visualization of the frame of reference is that the major semantic function of a 3D information space is to provide a frame of reference for the 2D visualizations it contains. Thus, we substitute the space-paper of our minimal set-up with a real cube. For building this cube, we rely on the experience of photographers, where a cubic space called a bounce box is commonly used to provide ideal background and lighting conditions for taking pictures of objects (e.g. for advertising). The bounce box is a five-sided cube whose top side is made of diffuse Plexiglas and contains a lamp, while the remaining sides are painted white. We take such a bounce box of 60 cm edge length, equip it with a lamp on the top, highlight the edges with black paint and mount a camera on it. We call this a reference bounce box (RBB). Our RBB (see Figure 2) is made of 4 mm thick wooden plates held together with clamps – this way the bounce box can be assembled or disassembled within 3 minutes.

Concerning interaction devices, we replace the plane-paper with four different real props. First, we use a board made of thin wood, since it is more rigid than a sheet of paper. The board is held with two hands like the plane-paper. It consists of two halves held together with magnets, so it is possible to pull the board apart, holding each half in one hand. This is meant for comparing two 2D visualizations in the 3D information space simultaneously. Therefore, there are two markers on the board to distinguish the two halves. The inverse operation, putting the board together, is also possible. Second, we use paddles made of a telescopic stick on which a piece of cardboard (with a marker printed on it) is mounted. This interaction device can be used with one hand, and since the length of the stick is variable the user can remain in a comfortable seating position while placing one or two paddles in the RBB. However, the longer the stick, the more difficult it is to hold the paddle steady – the paddle is prone to jitter due to fine trembling of the hand.
Third, we use a painter's palette with a hole for the thumb and a marker printed on it. The palette has the advantage that it is possible to grab it firmly and interact on it. Fourth, we use a cordless gyroscopic mouse [10] with an ergonomic handle and attach a piece of cardboard with a marker printed on it. This gives us a powerful low-cost interaction device that we call gyrom. We can use it as a paddle (e.g. in order to specify a 2D plane in the RBB) and as a mouse (e.g. in order to interact with a 2D visualization or to access remote points in the real environment) by just waving the gyrom. We can also use the gyrom buttons for different semantic purposes (e.g. toggling between paddle and mouse mode). Our interaction devices are depicted in Figure 2.

Figure 2: Different interaction devices used with the RBB: paddle (top left), gyrom (top right), board with the two parts attached to each other (bottom left), board divided into a part for the left and a part for the right hand (bottom right)

4 Interaction Techniques

Based on our interaction framework, we present elementary MR-based techniques for interacting with a 3D information space. In the last subsection we show how elementary interaction techniques can be combined to design more sophisticated interaction. We describe the techniques for the RBB set-up only.

Locate

The locate interaction technique aims at specifying a location in the 3D information space. The user might know the exact coordinates of this location, or might have an idea where it lies in the reference system given by the 3D information space. For this, the user simply takes an interaction device and holds it in the RBB at a certain location. Visual feedback in the form of the current coordinates is given. Depending on the task, not only the position of the interaction device but also its orientation can be important.


The 2D visualization at the specified location in the 3D information space is then shown on the plane associated with the interaction device. Let us look at an example from the field of pharmacokinetics: x mg of medicament A and y mg of medicament B are given to a patient, and blood samples of the patient are taken at certain time intervals. Based on this information we obtained a simulation model that calculates n blood values dependent on x, y and time t. This is visualized as a 3D information space (x, y, t). The user can interactively select a location in the RBB, and on the paddle we visualize the n blood values using MR techniques (see Figure 3). In this example, the orientation of the interaction device is not meaningful. We map t to the z-axis of the RBB and get the metaphor that the distance of the paddle from the user is related to time: the nearer the paddle is to the user, the more time has elapsed since the medicaments were taken. We can invert this metaphor by using a left-handed instead of a right-handed coordinate system in the RBB and placing the origin of the 3D information space in the near lower left corner instead of the far lower left corner.
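As a minimal sketch of this coordinate mapping, the following function converts a tracked paddle position inside the RBB into the (x, y, t) coordinates of the pharmacokinetic information space. The 60 cm box size is taken from the set-up description; the type names, parameter ranges and the invertTime flag are illustrative assumptions.

    #include <array>

    struct ParameterRanges { double doseAMax, doseBMax, timeMax; };

    // paddlePos holds the tracked paddle position in RBB coordinates (metres),
    // origin in the far lower left corner. Setting invertTime = true emulates
    // the left-handed variant with the origin at the near lower left corner.
    std::array<double, 3> rbbToInfoSpace(const std::array<double, 3>& paddlePos,
                                         const ParameterRanges& r, bool invertTime) {
        const double boxSize = 0.60;           // 60 cm RBB edge length
        double u = paddlePos[0] / boxSize;     // normalized dose of medicament A
        double v = paddlePos[1] / boxSize;     // normalized dose of medicament B
        double w = paddlePos[2] / boxSize;     // normalized time
        if (invertTime) w = 1.0 - w;           // nearer to the user = more elapsed time
        return { u * r.doseAMax, v * r.doseBMax, w * r.timeMax };
    }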

Figure 3: Illustration for the locate and compare interaction technique

Explore

By having the user change the location dynamically, we obtain the explore technique, where the user moves the interaction device in the RBB. As a result, the user sees an animation, since the 2D visualizations associated with the locations the user passes while moving the interaction device are shown sequentially. In our example, the user can explore how the blood values change depending on the dose of medicament B by moving the interaction device up and down.

Slice

Using the paddle, the user is able to specify a cut through the 3D information space. A corresponding visualization is shown on the plane specified by the interaction device. Here, the orientation of the paddle is important. For example, if the z-axis of the RBB is mapped to time and the user rotates the interaction device around the x-axis, the user can see values of different points in time. This interaction technique is especially suited for volumetric data where each of the three dimensions represents spatial values.
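A slice like this can be expressed as a clipping plane derived from the paddle's tracked pose, in the spirit of the volume-texture rendering described in section 5. The sketch below is a hypothetical, self-contained version of that computation; the vector type and function names are our own.

    // Plane in Hessian normal form n · p = d, passing through the paddle centre
    // with the paddle's normal. Points on the eye side (n · p > d) are clipped
    // so the slice through the volume becomes visible on the paddle's plane.
    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    struct ClipPlane { Vec3 n; double d; };

    ClipPlane planeFromPaddle(const Vec3& paddleCentre, const Vec3& paddleNormal) {
        return { paddleNormal, dot(paddleNormal, paddleCentre) };
    }

    bool isClipped(const ClipPlane& plane, const Vec3& p) {
        return dot(plane.n, p) > plane.d;   // lies between eye point and plane
    }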

Compare

Compare is a two-handed interaction technique. With two interaction devices, the user is able to view two 2D visualizations simultaneously. For this, the board interaction device is particularly useful. The user can view the exact measurement of the difference between the locations of the two interaction devices. With the compare interaction technique, the user is able to find trends in data or identify differences. For the task of classifying data or finding a given sample in the data, we conceived a variant of the compare technique where on one interaction device the 2D visualization is held constant, i.e. independent of the location in the RBB. This constant 2D visualization shows the sample the user is looking for. In the example shown in Figure 3 (right), the user compares houses. The dimensions of the information space are number of rooms, price, and distance of the house from the workplace. In the 2D visualization, a picture of the house and additional information are shown. By mechanically putting the two halves of the interaction device back together, the user can easily switch to other interaction techniques like explore.

Navigate

We need to provide techniques that allow the user to interactively navigate through the 3D information space. Since the user should not move the RBB, we use a center-of-workspace metaphor combined with a 3D zooming interface [11] that allows interactively changing the scale within the RBB. To accomplish this, the mouse is used to pan and to zoom in and out. Alternatively, the user can use the gyrom, hold it at a certain point in the RBB and move it while pushing a button – this way a subspace of the 3D information space can be specified. When the button is released, this subspace is centred in the RBB.
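The following sketch illustrates the gyrom variant: the button-press and button-release positions span a subspace, and on release the view is recentred and rescaled so that this subspace fills the RBB. The types and the uniform-scale choice are illustrative assumptions, not the paper's implementation.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    struct View {
        Vec3 centre;    // centre of workspace in information-space coordinates
        double scale;   // metres of RBB space per information-space unit
    };

    // Called when the gyrom button is released: p0 and p1 are the press and
    // release positions in information-space coordinates; the returned view
    // recentres and rescales so the swept subspace fills the 60 cm RBB.
    View centreSubspace(const Vec3& p0, const Vec3& p1, double boxSize = 0.60) {
        Vec3 centre = { (p0.x + p1.x) / 2, (p0.y + p1.y) / 2, (p0.z + p1.z) / 2 };
        double extent = std::max({ std::fabs(p1.x - p0.x),
                                   std::fabs(p1.y - p0.y),
                                   std::fabs(p1.z - p0.z) });
        double scale = extent > 0 ? boxSize / extent : 1.0;  // zoom to fill the box
        return { centre, scale };
    }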

Manipulate

We provide interaction techniques that allow the user to manipulate the 2D visualization interactively. For this, the 2D visualization shown on the interaction device can be thought of as a 2D window located in 3D in which an interactive application is running, e.g. an application that allows annotating the 2D visualization (see Figure 4). The mouse or the gyrom is used to perform these interactions. A 3D mouse pointer is provided as visual feedback. In addition, the user can substitute the MR picture shown on the display with the 2D window that contains the 2D visualization, either by clicking a button on the gyrom or by hitting the "F2" key on the keyboard. With this, the user can immediately switch to the familiar working environment without MR, sitting in front of a computer display and working with 2D windows using mouse and keyboard.

Figure 4: Illustration of the manipulate technique and a 2D window on a 3D plane


Freeze and Restore

If the user has found a particularly interesting position and orientation in the 3D information space, the user might want to keep it by hitting a key or by pushing a button on the interaction device. This is the freeze interaction technique. Subsequent movements of the interaction device do not change the 2D visualization depicted in the MR. The user can store this visualization along with its position and orientation. The user should also be able to retrieve such a stored 2D visualization. For this, the restore interaction technique allows the user to match the pose of the interaction device with the stored pose: the stored pose is visualized in the MR, and the user tries to put the interaction device at the same position in the same orientation. If the user is able to match the pose within a certain accuracy, the 2D visualization is highlighted and the locate interaction technique is enabled, i.e. subsequent changes in the position of the interaction device lead to a change in the 2D visualization. The restore interaction technique can also be used to reconstruct where a given 2D visualization was taken within the reference framework of the 3D information space.
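A sketch of the pose-matching test that enables restore might look as follows. The position and angle tolerances and the quaternion representation of orientation are illustrative assumptions; the paper does not specify its accuracy thresholds.

    #include <cmath>

    struct Pose {
        double px, py, pz;        // position in RBB coordinates (metres)
        double qw, qx, qy, qz;    // orientation as a unit quaternion
    };

    // Returns true if pose a matches pose b within the given tolerances.
    bool posesMatch(const Pose& a, const Pose& b,
                    double maxDist = 0.02,          // 2 cm position tolerance (assumed)
                    double maxAngleRad = 0.17) {    // ~10 degree tolerance (assumed)
        double dx = a.px - b.px, dy = a.py - b.py, dz = a.pz - b.pz;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) > maxDist) return false;
        // Angle between the two orientations: theta = 2 * acos(|<qa, qb>|)
        double d = std::fabs(a.qw * b.qw + a.qx * b.qx + a.qy * b.qy + a.qz * b.qz);
        if (d > 1.0) d = 1.0;                       // guard against rounding error
        return 2.0 * std::acos(d) <= maxAngleRad;
    }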

Complex interaction

The elementary interaction techniques presented can be combined to allow more complex interaction. For example, the compare interaction technique can be used to specify the boundaries of a subspace in the 3D information space that might be used for navigation. As another example, consider the locate and freeze techniques, which can be combined to specify the key frames for an animated 2D visualization.

5 Implementation

We implement our MR interaction techniques using a prototype built on top of OpenGL and ARToolkit [18]. With our prototype, a 3D information space can be presented and analyzed using the MR interaction techniques implemented. One of the functions our system needs to provide is the placement of 2D visualizations at the correct position. For this, ARToolkit gives us the transformation matrix M for each marker relative to eye coordinates. For the minimal set-up we use one marker for registering the MR information space (space-paper) and one for the 2D visualization (plane-paper). We get M1 for the space-paper and M2 for the plane-paper. Choosing the local coordinate system of the space-paper marker as the world coordinate system, we can get the coordinates of the plane-paper by transforming its local coordinates with M1⁻¹ · M2 to world coordinates.
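ARToolkit delivers each marker transform as a 3×4 matrix [R | t]; for such rigid transforms the inverse has the closed form [Rᵀ | -Rᵀt], so M1⁻¹ · M2 can be computed without a general matrix inversion. The following sketch shows one way to do this; the helper names are our own and only the 3×4 layout follows ARToolkit's convention.

    typedef double Mat34[3][4];   // [R | t], row-major, implicit bottom row [0 0 0 1]

    // inv = [R^T | -R^T t]
    void invertRigid(const Mat34 m, Mat34 inv) {
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                inv[i][j] = m[j][i];              // transpose the rotation part
        for (int i = 0; i < 3; ++i)
            inv[i][3] = -(inv[i][0] * m[0][3] + inv[i][1] * m[1][3] + inv[i][2] * m[2][3]);
    }

    // out = a · b, treating both as homogeneous rigid transforms
    void multiply(const Mat34 a, const Mat34 b, Mat34 out) {
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 4; ++j) {
                out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j] + a[i][2] * b[2][j];
                if (j == 3) out[i][j] += a[i][3]; // add a's translation column
            }
    }

    // Pose of the plane-paper expressed in the space-paper's coordinate system
    void relativeTransform(const Mat34 m1, const Mat34 m2, Mat34 rel) {
        Mat34 m1inv;
        invertRigid(m1, m1inv);
        multiply(m1inv, m2, rel);
    }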

For actually representing the graphical data of the 3D information space we use different implementations. For discrete data, we use 2D textures that are displayed at the marker position of the interaction device. The actual 2D texture displayed is chosen according to the current position of the interaction device. Alternatively, we map the 2D window of an arbitrary application to the marker position. For continuous data, we use volume textures. A three-dimensional texture image can be thought of as layers of two-dimensional sub-image rectangles, where each such rectangle represents a 2D visualization. In this case, our interaction device defines the position and orientation of a clipping plane used to clip the 3D volume between eye point and clipping plane. Thus, a 2D visualization on the plane specified by the interaction device is visible to the user.

The idea is to use already existing tools for creating and presenting 2D visualizations, e.g. spreadsheet tools like Excel or multimedia tools like Flash. Rendering the graphical output of these programs on a 3D plane is supported by the integration of ActiveX Controls in our system. For example, our prototype is able to render Flash movies on a 2D surface registered in 3D using a Flash ActiveX control with restricted interaction capabilities (due to simulation of the Windows message queue). Thus, our system behaves like a window manager that is able to render 2D windows in 3D space. Figure 4 shows how a 2D window (running Excel) is mapped onto a 3D plane in our RBB.

The realization of interaction techniques in such 2D windows represented in 3D space (as needed by the manipulate interaction technique) requires pixel-precise picking on a surface. For our work we constrained the problem to finding the exact position of the 2D mouse cursor on a planar surface that is registered with the marker. We chose the following simple approach to solve this problem. For every 2D window a plane is registered but not rendered in the application. This object allows the system to determine the exact position of a 2D mouse cursor, independent of the plane's position and orientation, using color picking. This is realized by coloring the plane's vertices with red and green and interpolating between the colors. If a plane object is picked with the mouse, the plane closest to the viewer is selected and the pixel color under the cursor is retrieved. The color code is used to specify the exact position of the cursor on the plane. This position is used together with key and mouse events to realize interactive interface elements. Figure 4 illustrates this idea using an editor for highlighting regions in a 2D visualization. The information about the 3D position and orientation of markers, as well as the 2D position on the plane, delivered by the MR subsystem can be directly evaluated in the visualization system. As a result, the 2D visualizations shown in the MR are updated. For example, moving the paddle in Figure 4 (right) would result in updating the pie chart in the Excel application according to the new position in the 3D information space.
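A minimal sketch of the color-picking decode: if the plane's corner vertices are colored so that, after hardware interpolation, the red channel encodes the horizontal and the green channel the vertical position, reading back the pixel under the cursor (e.g. with glReadPixels in an OpenGL implementation) yields the 2D position directly. The exact color assignment is our assumption; the paper only states that red and green vertex colors are interpolated.

    struct PlanePos { double u, v; };   // position on the plane, each in [0, 1]

    // Decode the interpolated vertex colors read back under the mouse cursor.
    // With 8-bit channels the cursor position is resolved to 1/255 of the
    // plane's extent in each direction.
    PlanePos cursorFromPixel(unsigned char red, unsigned char green) {
        return { red / 255.0, green / 255.0 };
    }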

6 Evaluation and Conclusion

Our preliminary informal evaluation showed that every interaction technique described in section 4 could be realized with the minimal set-up. This shows that MR-based interaction techniques for 3D information visualization can be implemented as a low-cost solution with standard components while still offering significant added value.

Using the set-up with the RBB, the users appreciated the real frame of reference. One advantage here is that the spatial relationships specified with the interaction devices can be viewed directly in reality – which is superior to most stereoscopic displays with regard to fidelity and provision of depth cues. Only head-mounted displays that allow for direct see-through (e.g. retinal displays) obtain a similarly unimpeded direct view of reality. They offer the advantage of being able to blend in virtual imagery while providing proper parallax depth cues for the virtual objects. Moreover, they have the additional advantage that manipulation space and viewing space are not disjoint. However, these displays have the serious drawbacks that they are costly and, most importantly, that they may be perceived as obtrusive by the user, so that serious user-acceptance problems may arise.

First tests confirmed the findings cited in section 2 that MR-based interaction techniques allow for a very intuitive and easy-to-learn manipulation of positions in 3D. While we did not compare directly with conventional user interfaces, our anecdotal evidence strongly supports the claim that it is far easier to specify a pose in 3D in the real world with our interaction techniques than with 2D interaction techniques like the mouse on a 2D display. By changing the head position slightly, the user is able to step from Mixed Reality to reality and concentrate just on the spatial aspects of how the planes associated with the interaction devices are located and oriented in the real framework. Looking at the display and stepping back into MR, the user can augment this view with the 2D visualizations directly associated with the positions of the interaction devices.

While our experiment was able to provide insight into the MR set-up and its usability, it did not compare our MR-based visualization techniques with conventional visualization techniques like 2D visualizations presented just on a 2D display. Such a comparison is very difficult since it cannot be decided which performance penalties in the MR-based visualization are introduced by the shortcomings of today's MR technology (like the problems with robustness of tracking described above). However, advantages over conventional 2D visualization are already confirmed by previous work that explored similar concepts of using a 3D information space, like a fishtank VR approach where the positions of one or more virtual visualizations within three-space, manipulated with an instrumented glove, determined the values of parameters that influence the appearance of each visualization [15].

Compared to this previous work, our approach exhibits significant improvements – at least in principle, if we put aside the shortcomings of today's MR prototypes, which are likely to be overcome with maturing MR technology. First, MR offers a much broader spectrum of non-obtrusive input devices (in contrast to using a 3D trackball or a data glove), which in turn can be employed to implement novel techniques for interacting with visualizations (as we presented in section 3). Second, a pose (i.e. a position and orientation in 3D) can be specified more easily. Third, it is possible to specify several poses and therefore to compare points in the 3D information space. Fourth, arbitrary 2D visualizations (especially ones that rely on visualization techniques the user is familiar with) can be seamlessly integrated into the 3D information space using MR. In contrast to previous approaches, the visualizations are directly shown and manipulated in 3D space (rather than manipulating parameters in 3D which in turn affect a 2D visualization). Fifth, the spatial relationships are not only shown in a virtual 3D space but can be directly mapped to spatial relationships in the real world. Our evaluation shows that the resulting "haptic visualization" is especially advantageous: distances in the 3D information space become less abstract as the user develops a feeling for how the 2D information is arranged in the 3D information space. This provides the user with additional insight into the data visualized. Sixth, MR technology allows for a seamless integration into the workplace.

In our tests, the working environment was perceived as comfortable. Several subjects remarked that the familiar PC workplace environment was maintained: the user was able to immediately leave the MR environment, no obtrusive equipment like HMDs had to be used, and the MR set-up was integrated into the usual working place. For this integration into the workplace, the 60 cm edge length of the RBB is a good compromise between freedom of movement, accessibility from a sitting position and space consumption in the workplace.

7 References

[1] B. Reitinger, A. Bornik, R. Beichel, G. Werkgartner, E. Sorantin: Tools for Augmented Reality Based Liver Resection Planning. SPIE Medical Imaging '04, San Diego, February 2004.

[2] L. Brown, H. Hua, C. Gao: A Widget Framework for Augmented Interaction in SCAPE. UIST '03, Vancouver, BC, Canada, 2003.

[3] A. Fuhrmann, H. Löffelmann, D. Schmalstieg: Collaborative Augmented Reality: Exploring Dynamical Systems. Proc. IEEE Visualization '97, October 1997.

[4] Y. Guiard: Asymmetric Division of Labor in Human Skilled Bimanual Action: The Kinematic Chain as a Model. Journal of Motor Behavior, 19(4), 1987.

[5] G. de Haan, M. Koutek, F. H. Post: Towards Intuitive Exploration Tools for Data Visualization in VR. VRST '02, Hong Kong, Nov. 11-13, 2002.

[6] K. Hinckley, R. Pausch, D. Proffitt, N. Kassell: Two-Handed Virtual Manipulation. ACM Transactions on Computer-Human Interaction, 5(3), 260-302, 1998.

[7] R. W. Lindeman, J. L. Sibert, J. K. Hahn: Towards Usable VR: An Empirical Study of User Interfaces for Immersive Virtual Environments. Proc. SIGCHI '99, ACM, 1999.

[8] D. Schmalstieg, L. M. Encarnação, Zs. Szalavari: Using Transparent Props for Interaction with the Virtual Table. ACM SIGGRAPH Symposium on Interactive 3D Graphics, Atlanta, GA, May 1999.

[9] H. Slay, M. Phillips, R. Vernik, B. Thomas: Tangible User Interaction Using Augmented Reality. 3rd Australasian Conference on User Interfaces, Melbourne, Australia, 2002.

[10] http://www.gyration.com

[11] B. B. Bederson, J. Meyer, L. Good: Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java. Proc. 13th ACM Symposium on User Interface Software and Technology (UIST 2000), ACM Press, pp. 171-180.

[12] B. Shneiderman: The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. Proc. 1996 IEEE Conference on Visual Languages.

[13] D. A. Rosenbaum: Human Motor Control. Academic Press, San Diego, CA, 1991.

[14] P. Milgram, F. Kishino: A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Information Systems, E77-D(12), 1321-1329, 1994.

[15] S. Feiner, C. Beshers: Worlds within Worlds: Metaphors for Exploring n-Dimensional Virtual Worlds. Proc. ACM UIST 1990, 76-83.

[16] A. van Dam, A. S. Forsberg, D. H. Laidlaw, J. J. LaViola Jr., R. M. Simpson: Immersive VR for Scientific Visualization: A Progress Report. IEEE Computer Graphics and Applications, Nov/Dec 2000.

[17] D. A. Lawrence, L. Y. Pao, C. D. Lee, R. Y. Novoselov: Synergistic Visual/Haptic Rendering Modes for Scientific Visualization. IEEE Computer Graphics and Applications, Nov/Dec 2004.

[18] R. Dörner, L. Oppermann, C. Geiger: Implementing MR-based Interaction Techniques for Manipulating 2D Visualizations in 3D Information Space. ISMAR '04.

[19] M. Tory, T. Möller, M. Stella Atkins, A. E. Kirkpatrick: Combining 2D and 3D Views for Orientation and Relative Position Tasks. CHI 2004, ACM Press.

[20] G. Robertson, M. van Dantzich, D. Robbins, M. Czerwinski, K. Hinckley, K. Risden, D. Thiel, V. Gorokhovsky: The Task Gallery: A 3D Window Manager. CHI 2000.