
Noname manuscript No. (will be inserted by the editor)

Supporting Presentation and Discussion of Visualization Results in Smart Meeting Rooms

Axel Radloff · Christian Tominski · Thomas Nocke · Heidrun Schumann

Received: date / Accepted: date

Abstract Visualization has become an accepted tool to support the process of gaining insight into data. Current visualization research mainly focuses on exploratory or confirmatory visualization taking place in classic workplace settings. In this paper, we focus on the presentation and discussion of visualization results among domain experts, rather than on the generation of visual representations by visualization experts. We develop a visualization infrastructure for a novel kind of visualization environment labeled smart meeting room, which provides plenty of display space to present the visual information to be discussed. We describe the mechanisms needed to show multiple visualization views on multiple displays and to interact with the views across device boundaries. Our system includes methods to dynamically generate visualization views, to suggest suitable layouts of the views, and to enable interactive fine-tuning to accommodate the dynamically changing needs of the user (e.g., access to details on demand). The benefits for the users are illustrated by an application in the context of climate impact research.

A. Radloff
Institute for Computer Science, University of Rostock, Germany
E-mail: axel.radloff@uni-rostock.de

C. Tominski
Institute for Computer Science, University of Rostock, Germany
E-mail: [email protected]

T. Nocke
Potsdam Institute for Climate Impact Research, Germany
E-mail: [email protected]

H. Schumann
Institute for Computer Science, University of Rostock, Germany
E-mail: [email protected]

Keywords Information Presentation · Smart Meeting Room · Visualization · Interaction

1 Introduction

Visualization has matured into an indispensable means to support the analysis of big and complex data. While visualization research primarily focuses on exploratory and confirmatory tasks (i.e., formulation and validation of hypotheses), the presentation of visualization results has attracted little attention. In particular, the interaction with the generated and presented images is rarely considered.

When we look at the working environment in which visualization is typically applied these days, we will most certainly see the classic setup where a user is sitting at a computer with a display, a mouse, and a keyboard. On the other hand, recent research indicates that alternative working environments (e.g., large high-resolution displays or touch-enabled tabletop displays) can be quite attractive for visualization applications [11,19,33,50].

In this article, we bring together aspects of:

– presentation of and interaction with visualization results and
– modern visualization environments.

In particular, our goal is to support domain experts in discussing visual representations in a smart meeting room.

The smart meeting room is an environment in which multiple heterogeneous display devices (e.g., projectors as well as stationary and mobile displays) provide ample space for visual representations. Tracking devices (e.g., location or eye tracking) and associated software make the smart meeting room aware of its internal state and its inhabitants [1,16,51]. This awareness allows for customized user assistance to support a variety of user tasks.

We focus on the task of discussing research results. The discussions are structured as a presenter-audience setting. The presenter is in charge of moderating the discussion about analysis results and insights, which are to be communicated via pre-built visual representations. The audience engages in the discussion and can interactively change views. Moreover, the audience may contribute additional visual content at any time.

The combination of the discussion of visual representations as a user task and the smart meeting room as a visualization environment has not been considered in previous studies. However, the objective of this work is not to develop a new sophisticated visualization technique. Rather, the approach presented here is a technical basis for improving the way people work with visual representations generated by multiple users and shown on multiple displays. By building an appropriate visualization infrastructure that utilizes the facilities provided by the smart meeting room, we aim to support users in imparting visual information and discussing it in a dynamic process.

Based on a scenario description in Section 2, we develop a novel visualization infrastructure for supporting presentation and discussion in smart meeting rooms in Section 3. In Section 4, we elaborate on technical aspects of the implementation. An application related to climate impact research is presented in Section 5. Related work is briefly reviewed in Section 6. We conclude and indicate directions for future work in Section 7.

2 Background

Let us start by sketching the scenario we are addressing. First, we characterize the task to be accomplished by the users; second, we describe the environment in which the task is to be carried out.

2.1 The Task: Presenting and Discussing

The objective is to support the task of presenting and discussing the insights gained from visual representations of data. The discussions involve multiple domain experts and are carried out as follows. At the beginning of a discussion, the presenter introduces the topic and the associated data. Preliminary research results are explained to the audience by pointing out key findings illustrated in pre-built visual representations. This way, the audience is informed about the major characteristics of the matter to be discussed and the aim of the discussion. After the introduction, presenter and audience start the discussion.

During the discussion, the demands of the participants may change dynamically. The participants, under the guidance of the presenter, need to be enabled to access further information about specific details or to inquire about relationships to other data. A carefully prepared presenter can accommodate these changes by selecting different visual representations. However, even in the setting of a focused discussion, the presenter cannot foresee all eventualities. For example, aspects that were initially assumed to be irrelevant may become relevant, but they are not readily encoded in the visual representations being discussed. Data that were deemed to be unrelated to the topic of the discussion may yet turn out to be connected, but they were not included in the presentation. Handling such dynamic requests is one contribution of the infrastructure presented later.

Examples of such discussions can be found in many fields of study. Our work is situated in the context of climate impact research. Climate impact researchers commonly face various types of heterogeneous data, and hence many findings about the data need to be discussed. For now, we shall only briefly illustrate what a typical discussion among experts in climate impact research might look like. How such discussions can benefit from our approach will be illustrated later in Section 5.

In climate impact research, scientists from different domains (e.g., physics, biology, geology, mathematics, statistics) work together. In our introductory example, we assume that a biologist has created a new model for the impact of carbon dioxide on the fraction of trees. She wants to discuss her findings with climate impact researchers from other domain backgrounds, because she suspects an influence on another climate impact model, the soil water content model. To prepare the discussion she asks a visualization expert to render expressive visual representations of the data generated from her model. In addition, she asks the visualization expert to prepare visual representations of causal models (e.g., temperature and precipitation) and further impact models, including the soil water content model.

At the meeting, she first introduces the prepared visual representations to the attending colleagues. During the discussion, a statistician comes up with her finding that the evapotranspiration in Europe may influence the tree fraction as well. However, this specific impact model had not been considered by the presenting biologist. To discuss this question, the statistician's visualization results need to be presented in combination with the findings of the presenting biologist. Usually, this requires additional effort by the presenter (e.g., for transferring visual representations to the presenter's computer and creating the combined view) and causes interruptions to the discussion.

Moreover, in the light of the ongoing discussion, a physicist comes up with a new hypothesis about his data, only recently generated by a new climate model. In order to discuss this hypothesis on the spot with the gathered experts, new visual representations of the physicist's data need to be created and dynamically included into the current presentation. This requires even more effort and results in the discussion being postponed.

This is where our solution comes into play. The approach presented in this paper facilitates such dynamic discussions and allows for a sufficient degree of flexibility when presenting visual representations. Dynamically raised requests can be accommodated on the fly, and the discussion can continue without wasting the precious time of the attending experts.

Next, we will take a closer look at the environment in which the discussion takes place.

2.2 The Environment: Smart Meeting Rooms

The classic setup for such a discussion would be a single PC running a presentation of slides. The slides are usually shown on a large public display (e.g., a projector) to share the visual information with the discussion participants. Using only a single public display for presentation and discussion of data limits the number of simultaneously presentable views and thus the information that can be communicated and discussed at a time.

Modern visualization environments, such as smart meeting rooms, offer new possibilities to extend the limits of classic work spaces. Smart meeting rooms aim to support users in working collaboratively to reach a common goal. They are a form of smart environments [10]: "A small world where all kinds of smart devices are continuously working to make inhabitants' lives more comfortable."

Smart meeting rooms have three basic characteristics that need to be taken into account when developing visualization solutions. Smart meeting rooms are: (1) ad-hoc, (2) multi-display, and (3) multi-user environments.

Ad-hoc means that smart meeting rooms dynamically integrate devices brought in by the users [7]. So in addition to devices that are static (e.g., sensors or projectors and canvases), there will also be devices that are dynamic (e.g., laptops, smart phones, or tablets). Static and dynamic displays are integrated in a common ensemble of devices that work in concert to assist the users.

Smart meeting rooms are multi-display environments. In fact, through the mix of dynamic and static displays, smart meeting rooms are heterogeneous multi-display environments. Displays with different properties (i.e., size, resolution, pixel density, etc.), from tablets up to large public displays, can be utilized simultaneously to communicate visual information efficiently.

The technical properties of smart meeting rooms enable multiple users to work collaboratively. In this context, smart means that customized user assistance is provided based on the current state of the environment and its users. Typically the users take on different roles (e.g., presenter or audience), which are associated with different preferences and aims. According to their role, the users work together in a defined scenario (e.g., discussion or presentation) and may even change between these scenarios [18].

In summary, we can pinpoint the task of presenting visual representations in a smart meeting room for the purpose of discussing insights among multiple domain experts. A visualization infrastructure specifically tailored for such a scenario will be introduced next.

3 Supporting Discussion in Smart Meeting Rooms

Before we go into any details, we will first discuss our goals and associated requirements. An overview of the infrastructure will pave the way for the description of its individual components.

3.1 Goals and Requirements

Forming a big picture about some analyzed data involves combining insights captured in multiple dedicated visual representations. Therefore, our first goal is to increase the quantity of information that can be shown at a time by utilizing the displays available in the smart meeting room. This requires an appropriate compilation of the visual information to be communicated and its automatic placement on the displays, taking into account the configuration and properties of the displays as well as the roles, locations, and view directions of the users. For example, a large public display located close to the presenter should be chosen as the main display, whereas displays located behind the audience should be avoided.

Forming a big picture in the course of a discussion is a dynamic process with changing demands. Therefore, our second goal is to provide sufficient flexibility allowing the users to react to changing demands effortlessly, but within limits reasonable in a discussion scenario. In the first place, this means that the presenter must be enabled to select the visual content that is relevant to the current state of the discussion and hence needs to be displayed. As the discussion goes on, the selection will be updated by all users, and the multi-display environment has to automatically adjust itself to the new situation.

When it comes to discussing details of the data, the visual representations should provide more details as well. For example, when a discussion shifts from a global perspective to certain areas of the tropical rain forest, the presenter must be able to mark the new regions of interest. In turn, the system has to provide higher-resolution images of the marked parts of the data and display them to the audience on the fly.

As indicated earlier, the demands during a discussion may change even more drastically. At some point, it might even become necessary to re-encode certain visual representations or to create entirely new ones. In such cases, participants with sufficient expertise can create new visual representations on their own devices using the visualization tools they are familiar with. Our infrastructure must provide the means to dynamically integrate the new visual representations into the ongoing discussion.

In the light of the addressed task and environment, the aforementioned aspects can be summarized into two technology-oriented research questions:

– How can we present visual content in a multi-display environment?
– How can we cope with the changing demands of dynamic discussions?

3.2 Visualization Infrastructure

As an answer to the previous questions, we propose a visualization infrastructure and its implementation in a smart meeting room. This section outlines the basic components.

The central element handled by our infrastructure is the visual representation of data. For brevity, we call visual representations views from now on. The two key aspects that need to be considered by the infrastructure are the display of views and the interaction with views. In previous work, display and interaction have been investigated separately in [30] and [29], respectively. Here, we bring together both aspects and develop a unified solution for displaying views and interacting with them.

Fig. 1 The displays of the smart meeting room show different visualization views originating from the presenter's and the attendee's devices.

Displaying the views is supported by four basic components of the infrastructure: view generation, view grouping, view mapping, and view layout. Interacting with views is enabled by three components: interaction grabbing, interaction mapping, and interaction handling. Each of these components will be described in more detail in the following paragraphs.

3.2.1 Display components

First we will briefly explain the components that handle the display of views. For more details, we refer the interested reader to the explanations in [30].

View generation The view generation component handles the process of creating views and inserting them into the environment. As we do not want to impose any restrictions on what visualization software can be applied, we propose a twofold strategy:

– Exposing views: For visualization software that is designed to run in the smart environment, we provide an API that can be used to expose views to the environment.
– Grabbing views: For visualization software that is unaware of being used in the smart environment, we implemented tools that allow us to grab visual content from the devices participating in the environment.
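To make the two strategies more concrete, the following minimal Java sketch contrasts them. The ViewSink interface and its expose method are hypothetical stand-ins for our API, which is not listed in this paper; the pixel grabbing relies on the standard java.awt.Robot class.

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.image.BufferedImage;

    // Hypothetical stand-in for the infrastructure API that
    // transports views into the smart meeting room.
    interface ViewSink {
        void expose(BufferedImage image, String metaInfo);
    }

    final class ViewGeneration {

        // Exposing views: environment-aware software calls the API
        // directly and can attach semantic meta-information.
        static void exposeView(ViewSink sink, BufferedImage rendered) {
            sink.expose(rendered, "variable=treeFraction; region=Europe");
        }

        // Grabbing views: for unaware software, the pixels of a
        // user-selected screen region are captured; no
        // meta-information is available in this case.
        static BufferedImage grabView(Rectangle screenRegion) throws Exception {
            return new Robot().createScreenCapture(screenRegion);
        }
    }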

Both strategies have advantages and disadvantages. Using the API to expose views requires modifying existing visualization software, but on the positive side, the exposed views can be generated according to the specific needs of the environment and can even be enriched with semantic meta-information about what is being shown in the views. The grabbing-based strategy has the advantage of being applicable to virtually any visualization software, but on the other hand, the created views contain no other information than the grabbed pixels.

Fig. 2 Focus+context embedding of a dynamically defined region of interest.

Taken together, the view generation strategies are capable of capturing multiple visualization views from heterogeneous sources in a smart meeting room (see Figure 1). This is a key benefit of our approach. The presenter can prepare a deck of views prior to the discussion in an authoring process. Should new or alternative views be needed during the discussion, anyone from the audience can dynamically contribute new content. This way, a seamless integration and presentation of views is supported, and discussions can continue with only moderate interruptions. Otherwise, delegating the task of generating new views to a dedicated visualization expert would result in longer breaks, adversely affecting the flow of the discussion.

In order to achieve scalability in terms of the different display resolutions available in the smart environment, views (exposed or grabbed) need to be encoded into a multi-resolution representation. For this purpose, we employ the capabilities of the JPEG2000 standard [37]. This standard implements the philosophy of encode once and decode many different ways. For the encoding, we use the resolution dictated by the data, which is typically higher than the resolution of the displays. JPEG2000 then allows us to later decode views with exactly the resolution needed, be it the original data resolution for an HD projection or a lower resolution for a combined display of several views.
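The following sketch illustrates the encode-once, decode-many idea with standard Java ImageIO. It assumes a JPEG2000 ImageIO plugin (such as jai-imageio-jpeg2000) is registered on the classpath and approximates resolution reduction via source subsampling; our actual implementation may differ in detail.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    final class MultiResolutionViews {

        // Encode once, at the full resolution dictated by the data.
        // Assumes a JPEG2000 ImageIO plugin is registered.
        static void encode(BufferedImage fullRes, File target) throws Exception {
            ImageIO.write(fullRes, "jpeg2000", target);
        }

        // Decode many ways: subsample down to roughly the resolution
        // the target display needs instead of decoding all pixels.
        static BufferedImage decodeForDisplay(File source, int displayWidth) throws Exception {
            try (ImageInputStream in = ImageIO.createImageInputStream(source)) {
                ImageReader reader = ImageIO.getImageReaders(in).next();
                reader.setInput(in);
                int step = Math.max(1, reader.getWidth(0) / displayWidth);
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceSubsampling(step, step, 0, 0);
                return reader.read(0, param);
            }
        }
    }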

Additionally, the JPEG2000 encoding facilitates zooming into details for user-defined regions of interest without generating entirely new views. Thanks to JPEG2000's multi-level content description, regions of interest can be scaled to make them bigger and easier to see. We support the classic detail-only, overview+context, and focus+context strategies for presenting enlarged details and the corresponding context. Figure 2 illustrates the classic fisheye embedding of the focus into the context. It is worth reiterating that this functionality does not depend on the underlying visualization software, but is provided for free by the integration of JPEG2000 in our infrastructure. This means that any view being presented using our solution can be explored at different levels of detail for varying regions of interest without any additional effort on the view generation side.
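Decoding a region of interest works analogously. The sketch below, again based on plain ImageIO and therefore independent of the visualization software that produced the view, reads only the pixels of a user-defined region at full detail:

    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    final class RegionOfInterest {

        // Decode only the selected region at full detail; the result can
        // then be embedded detail-only, overview+context, or focus+context.
        static BufferedImage decodeRegion(File view, Rectangle roi) throws Exception {
            try (ImageInputStream in = ImageIO.createImageInputStream(view)) {
                ImageReader reader = ImageIO.getImageReaders(in).next();
                reader.setInput(in);
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceRegion(roi);
                return reader.read(0, param);
            }
        }
    }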

In summary, the result of the view generation process is a collection of JPEG2000-encoded views. In order to compute a suitable arrangement of views in the smart environment, it is necessary to create a semantically meaningful grouping of the views.

View grouping The view grouping component is needed to define the affiliation of views. Views that semantically belong together (in terms of the visualization task to be addressed) are grouped into so-called view packages.

Although an automatic generation of view packages would be desirable, the number and complexity of influencing factors make this goal hard to achieve, and the situation is even more complicated without appropriate meta-information (e.g., when using view grabbing). Therefore, we resort to the users' knowledge and experience in authoring semantically meaningful groups of views. This gives users the freedom to express their definition of 'semantically meaningful'. For example, a view package could include views that show the same data differently, views that are needed at a particular time during the discussion, or views that illustrate the development of the data over a number of time steps.

Technically, views are grouped using a simple drag and drop GUI. The GUI enables the presenter to group views in advance when preparing a discussion, and also allows anyone from the audience to integrate new views in an ad-hoc fashion during a discussion should it become necessary. By generating and deploying view packages, the users also select the views to be presented at a certain time during the discussion.

With appropriately defined view packages, we can later support the principle of spatial proximity equals semantic similarity when it comes to mapping view packages to the displays available in the smart meeting room.
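As a data structure, a view package can be as simple as a named set of view identifiers. The following record is an illustrative sketch only; the names are ours, not those of the actual implementation:

    import java.util.List;

    // A user-authored group of views that semantically belong together.
    record ViewPackage(String topic, List<String> viewIds) { }

    // Example: two views to be discussed together during the meeting:
    //   new ViewPackage("tree fraction vs. soil water",
    //           List.of("treeFractionMap", "soilWaterMap"));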

View mapping The view mapping component automatically determines which views are to be shown on which display. To generate a suitable distribution of views in the smart meeting room, a number of aspects need to be taken into account. In the first place, the semantic grouping of views determines which views should be shown together in close proximity (e.g., on the same display or on adjacent displays). The positions and viewing directions of the users are needed to map the views to displays currently visible to the audience. Furthermore, we have to consider the display properties, as it makes no sense to map a high-resolution view to a small, low-resolution screen.

Fig. 3 The layout mechanism arranges views such that they are not occluded by the presenter walking in front of the canvas.

The problem of assigning views to displays can be formulated as an optimization problem whose objective is to ensure spatial quality $q_s$, temporal continuity (quality $q_t$), and semantic proximity (quality $q_p$):

$q(m_{t-1}, m_t) = a \cdot q_s(m_t) + b \cdot q_t(m_{t-1}, m_t) + c \cdot q_p(m_t)$

The different qualities are weighted by the factors $a$, $b$, and $c$; $m_{t-1}$ and $m_t$ denote consecutive mappings. Spatial quality $q_s$ concerns the visibility of views, which is influenced, for example, by the distance and the angle between a user and a display. Temporal quality $q_t$ rates changes of view-to-display assignments. Semantic proximity $q_p$ describes the spatial relationship of all views of the same view package. As the optimization is too complex to be solved analytically, we utilize a genetic algorithm [25] to find a local optimum within reasonable time [30].
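A sketch of how the weighted quality function can be evaluated for candidate mappings is given below. The terms $q_s$ and $q_p$ are placeholders here (defined in the text via visibility and package proximity); only the stability term $q_t$ is spelled out, as the fraction of views that keep their display between consecutive mappings. The genetic algorithm itself is not shown.

    import java.util.Map;

    final class MappingQuality {

        // Mappings are viewId -> displayId.

        // Placeholder: visibility of views, e.g., derived from the
        // distance and angle between users and displays.
        static double spatialQuality(Map<String, Integer> mapping) { return 0.0; }

        // Stability: fraction of views that kept their display
        // between the consecutive mappings prev and cur.
        static double temporalQuality(Map<String, Integer> prev, Map<String, Integer> cur) {
            long stable = cur.entrySet().stream()
                    .filter(e -> e.getValue().equals(prev.get(e.getKey())))
                    .count();
            return cur.isEmpty() ? 1.0 : (double) stable / cur.size();
        }

        // Placeholder: how close views of the same package end up.
        static double proximityQuality(Map<String, Integer> mapping) { return 0.0; }

        // q(m_{t-1}, m_t) = a*q_s(m_t) + b*q_t(m_{t-1}, m_t) + c*q_p(m_t)
        static double quality(Map<String, Integer> prev, Map<String, Integer> cur,
                              double a, double b, double c) {
            return a * spatialQuality(cur)
                 + b * temporalQuality(prev, cur)
                 + c * proximityQuality(cur);
        }
    }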

As a result, views belonging semantically together are mapped to the same display, and the displays are chosen such that views currently being discussed appear next to the presenter, while care is taken that they are also sufficiently visible to the audience. The next step is to lay out the individual views on their assigned displays.

View layout The view layout component arranges the views assigned to a certain display. While it is possible to use pre-defined layouts to prepare a discussion, the ad-hoc character of the smart meeting room also demands taking into account dynamic changes, including requests for details on demand or movements of users in the environment. We employ an iterative layout mechanism that produces initial results quickly and can accommodate dynamic changes.

A view layout is characterized by two aspects: the positions and the sizes at which views are to be shown on a display. We consider both aspects simultaneously by combining force-directed placement and pressure-based resizing of views (similar to [3]). As the number of views per display is not excessive, it is possible to compute the layout continuously according to the current situation of the environment. However, we have to take care that the layout remains reasonably stable and does not flicker. We deal with this by computing a quality function. Only if the quality of a newly computed layout is significantly improved (above a certain threshold) will it be smoothly blended in to replace the existing layout.
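This flicker-avoidance rule can be captured in a few lines. In the sketch below, the Layout interface, its quality function, and the threshold value are illustrative placeholders for the force-directed/pressure-based computation described above:

    final class StableLayout {

        static final double THRESHOLD = 0.05; // illustrative value

        interface Layout {
            double quality();                   // rates the current arrangement
            Layout blendTowards(Layout target); // smooth animated transition
        }

        // Replace the current layout only if the newly computed candidate
        // is significantly better; otherwise keep it stable (no flicker).
        static Layout step(Layout current, Layout candidate) {
            if (candidate.quality() > current.quality() + THRESHOLD) {
                return current.blendTowards(candidate);
            }
            return current;
        }
    }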

Designed this way, the layout mechanism is capable of producing good initial view layouts, integrating views generated by any user in the audience on the fly, and adjusting them dynamically as the discussion goes on. An example of the dynamic adjustment is given in Figure 3, where the layout mechanism is geared toward preventing views from being occluded by a user stepping in front of a display canvas. Although the layout of views is updated with regard to the position of the person, the iterative nature of our algorithm avoids disruptive changes to maintain user orientation at all times.

The suitability of our layout component has been confirmed in a small study. We showed 14 different layouts with different content to 20 users, who were asked to identify information encoded in the views and to compare the identified information for similarities in the underlying data. The feedback indicates that the users were satisfied with the layouts generated by our solution.

The view layout is the final stage of the display pipeline of our infrastructure. Now we can move on to investigating the aspect of interaction in the smart meeting room.

3.2.2 Interaction components

The previous sections focused on displaying views in the smart meeting room. The described mechanisms, among other aspects, also consider the users' positions and viewing directions. In this sense, users always interact with the system, but they do so implicitly (e.g., by walking in the room). Yet we have to give the users the opportunity to interact explicitly with the visualizations depicted in the environment. This can be necessary, for example, in situations where the environment has only incomplete information about the users' intents or where aspects are involved that cannot be sensed appropriately by the environment.

Therefore, our display components have to be coupled with suitable interaction components. These components have to capture the interaction taking place on the different devices, map it according to the current arrangement of views, and handle the interaction to achieve the effects intended by the users.

In the most general terms, one can differentiate two kinds of interaction: (1) interaction with views and (2) interaction with the content depicted in a view. By interaction with views we mean operations such as moving a view from one display to another, rearranging the view layout manually, or setting up a region of interest to be magnified. On the other hand, interaction with a view's content means operating directly on the depicted data, for example, to select relevant data items. This requires propagating interactions through to the application that generated a view. There, the interaction is handled and corresponding visual feedback is generated to update the view.

To make these interactions possible across device and display boundaries, dedicated interaction components have been developed for our infrastructure. In the following, we describe them briefly. For more detailed descriptions, we refer the interested reader to [29].

Interaction grabbing The interaction grabber component is required to gather the interaction taking place on the devices of the smart meeting room. As the interaction can originate from different interaction devices, we need to convert the interactions into a generic intermediate description that abstracts from the details of the individual devices. On the most general level, our intermediate description supports pointing interaction (e.g., with a mouse or Wiimote) and trigger interaction (e.g., key strokes or mouse buttons). The generic description based on pointing and triggering is sufficiently expressive to cover a broad range of interaction in the smart environment, including classic mouse and keyboard interaction as well as modern tracking-based interaction using controllers.
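Conceptually, the intermediate description can be modeled as a small event hierarchy with just two kinds of events. The Java sketch below is illustrative; the names are not taken from the actual implementation:

    // Generic intermediate description of interaction: everything is
    // reduced to pointing and triggering, independent of the concrete
    // device (mouse, keyboard, Wiimote, ...).
    sealed interface InteractionEvent {
        String sourceDevice();

        // Relative pointer movement reported by some pointing device.
        record Pointing(String sourceDevice, double dx, double dy)
                implements InteractionEvent { }

        // A named trigger (key, button) being pressed or released.
        record Trigger(String sourceDevice, String triggerId, boolean pressed)
                implements InteractionEvent { }
    }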

Interaction mapping Once the intermediate description of the interaction has been encoded, the next step is to map the interaction to the device responsible for handling it. The interaction mapper determines the display where the interaction is performed and delegates the interaction to the responsible device. To this end, the interaction mapper utilizes knowledge about the positions of the displays in the room and the layout of views across the room. We also consider the computing device that generated a view in order to support interaction with the view's content. But this is only possible if the device is still part of the environment.

Fig. 4 During the discussion, the presenter can adjust the layout of views easily by picking views (top), relocating views (center), and resizing views (bottom).

Interaction handling The interaction handler interprets the interaction and executes the necessary actions. As mentioned earlier, we differentiate:

Interaction with views: The generic intermediate description of the interaction is interpreted by the interaction handler that is in charge of managing the views in the environment. Operations such as relocating or resizing a view are handled at this stage. For the purpose of illustration, Figure 4 shows how a user adjusts the views laid out on a canvas. Using a Wiimote, the user picks a view, relocates it, and adjusts its size as needed. As the JPEG2000-based regions of interest are also independent of the underlying visualization software, any such operations (e.g., the fisheye magnification or overview+detail as detailed in [31]) are carried out at this stage as well.

Interaction with a view's content: The generic intermediate description of the interaction is transformed to the local space of the view being interacted with. The transformed interaction description is then delegated to and executed by the application that generated the view. In turn, the view generation component is set up to receive and process the visual feedback (in the form of a new view) generated by the application.
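The coordinate transformation behind this delegation is straightforward: a pointer position in display space is translated into the view's local space, compensating for the view being shown at a different size than the application rendered it. A minimal sketch:

    import java.awt.Point;
    import java.awt.Rectangle;

    final class ContentInteraction {

        // Map a pointer position from display space into the local
        // coordinate space of the view under the pointer, so the event
        // can be delegated to the application that generated the view.
        static Point toViewLocal(Point onDisplay, Rectangle viewOnDisplay,
                                 int originalWidth, int originalHeight) {
            double sx = (double) originalWidth / viewOnDisplay.width;
            double sy = (double) originalHeight / viewOnDisplay.height;
            return new Point(
                    (int) ((onDisplay.x - viewOnDisplay.x) * sx),
                    (int) ((onDisplay.y - viewOnDisplay.y) * sy));
        }
    }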

The three components for grabbing, mapping, and handling interaction in our infrastructure enable the presenter and anyone from the audience (if granted) to adjust views and their content according to the current state of the discussion. It is worth mentioning that the interaction components enable the users to carry out the necessary steps with any available interaction device. This is a significant benefit over classic discussion scenarios, where interaction is usually reserved for the presenter only, who has to sit down at a particular machine in order to interact. Moreover, if someone from the audience would like to contribute, he or she is usually required to use the presenter's device or to attach a personal device to a projector, which is inconvenient and interrupts the discussion. With our approach, these adverse effects can be avoided.

3.3 Discussion

Our approach supports both automatic layout and interactive manipulation. In the first place, views are assigned to and arranged on display devices through automatic methods. On top of that, we provide interactive methods allowing presenter and audience to manipulate the layout, assignment, and appearance of the views. This raises the question of balancing automatic and interactive means and addressing consistency during the discussion.

A basic requirement is that an automatic algorithm should not override a user's interactive manipulation unless this is explicitly intended. This requirement is dealt with as follows. Based on the initial generation and grouping of views, the assignment and layout on the display devices take place. The view mapping and layout are constantly updated and improved in a background process. If an interaction occurs (e.g., rearranging or resizing views, moving views between displays, or even adding and removing views), then the automatic improvement strategies affecting the display(s) where the interaction took place are halted. This includes the view mapping and the automatic layout on the specific display device. Thus, no automatic process overrides interactive changes of the presenter or the audience. Of course, it is possible to reactivate the automatic mapping and layout at any time when deemed necessary.

Another aspect worthy of discussion is the handling of potentially conflicting interaction in a multi-user context. This is a classic problem to which various approaches exist (e.g., [8,11,35]), but none provides a universal answer. We do not claim to have a definite solution either. Yet in the setting that we address in our work, in which a presenter is in charge of leading and moderating the discussion, a pragmatic solution is possible. To avoid conflicts, we assume that the presenter grants permission to individual members of the audience to carry out interactions affecting any public displays. All other participants are not allowed to interact until the presenter advises them otherwise. In a sense, our solution is not a technical one, but a social one where the presenter "controls" the audience.

3.4 Summary

According to the requirements formulated in Section 3.1, we can conclude the following. Using our architecture with its seven display and interaction components, the demands of discussion scenarios can be addressed. Views showing visual representations of the studied data can be generated and published in an ad-hoc manner using any device. The views can be grouped according to semantic needs, and they are mapped to the available displays based on certain quality criteria.

Interaction with the displayed views and the visual representations they show is supported across devices. Views can be manipulated on an abstract level (e.g., view selection, view placement, view resizing) independent of the original visualization software. Moreover, it is also possible to feed the interaction that takes place in the environment back to the view-generating visualization software, for example, to re-parameterize the visual encoding.

In the next section, we take a look at some details of the implementation of our infrastructure in a smart meeting room.

4 Implementation Details

The approach introduced before has been instantiated in a prototypical implementation in a smart meeting room. First we will briefly characterize our smart meeting room, including the physical and the communication environment. Then, we will give implementation details regarding the display and the interaction components.

The smart meeting room basically contains seven displays, including mobile laptop displays and static projectors connected to regular PCs, as illustrated in Figure 5. Users can bring their own devices, which are seamlessly integrated into the environment using wireless network connections.

Fig. 5 Scene in our smart lab. The smart view management is used to display the information, while the smart interaction management enables the presenter to interact with all displayed views.

Several sensor devices monitor the state of the environment (e.g., brightness or temperature). For tracking the environment's inhabitants, the smart meeting room is equipped with Ubisense [39] devices, which are based on small tags and ultra-wide band signals, and a Sensefloor [32], which is based on measuring pressure on the floor tiles. As interaction devices, we provide regular keyboards and mice as well as Wiimote controllers connected to PCs and laptops.

Our implementation builds upon service-oriented concepts developed by Thiede et al. [38] and a middleware called Helferlein by Bader et al. [5,6]. Based on the service-oriented concept, small pieces of software with well-defined data input and output interfaces provide information about the environment (e.g., the Ubisense user tracking data), map these data to coordinates of the smart meeting room, and provide pre-processed location information about the users for other applications. This information is stored in the form of tuples in a so-called tuple space [13], which is implemented in the Helferlein middleware. All applications running in the smart meeting room can access the information about the environment and put it to use for different kinds of user assistance. In a sense, Helferlein's tuple space is the information backbone of the smart meeting room.
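For readers unfamiliar with the tuple-space model [13], the sketch below shows the essence of such a shared information space. It is a simplified Linda-style illustration; the actual API of the Helferlein middleware may differ:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Predicate;

    final class TupleSpace {

        record Tuple(String key, Object... fields) { }

        private final List<Tuple> tuples = new CopyOnWriteArrayList<>();

        // A service publishes information, e.g., a user position:
        //   space.put(new Tuple("userPosition", "alice", 2.4, 1.1));
        void put(Tuple t) { tuples.add(t); }

        // Other applications query by pattern, e.g., the view mapper:
        //   space.read(t -> t.key().equals("userPosition"));
        List<Tuple> read(Predicate<Tuple> pattern) {
            return tuples.stream().filter(pattern).toList();
        }
    }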

Our implementation of the display and interaction components described earlier builds upon this infrastructure. We use Java as the common programming language and runtime environment. This makes our approach largely platform independent, but may require new users to install the necessary runtime libraries. Other than that, there are no restrictions regarding the OS or the applications to be used.

Figure 6 provides an architectural overview of our implementation. In the following, we elaborate on some of the details of the display and interaction components integrated in our approach.

Display components The display components described before (view generation, view grouping, view mapping, view layout) are shown on the left-hand side of Figure 6.

The views are generated either by using an API or by grabbing pixels. As mentioned earlier, the API-based solution requires extending existing visualization software with appropriate calls to API functions in order to expose visual content and meta-information. When using the grabbing-based approach, the user selects a certain region of the device screen. The pixels contained in the selected region are captured to create a new view. Native Java functionality is utilized to capture the image.

The outcome of both view generation methods is a view containing the image data encoded with JPEG2000, available for presentation in the smart meeting room. The views are transmitted to the management component via a network connection. Both approaches can be used simultaneously with one or more personal devices. To this end, a connection to the smart meeting room's network and the ability to run the software required to generate views are the preconditions for a personal device to participate in the ad-hoc environment.

The view grouping is carried out interactively by the user with an easy-to-use graphical tool. This tool collects all views and possibly already existing view packages from the tuple space and visualizes them in a basic GUI. New views can be added to existing groups, or groups can be reconfigured using simple drag and drop gestures. Once all updates are finished, the current view package configuration is written back to the smart meeting room's tuple space. The view grouping GUI can be started as a single application on any device participating in the environment; the GUI itself is implemented in Java.

Fig. 6 Architecture of our infrastructure for presenter support in smart meeting rooms.

The view mapping is encapsulated in a software component running in the smart meeting room without any specific user interface for either the presenter or the audience. Through the middleware, it gathers the necessary information about the room (user positions and view directions; display device positions, sizes, and orientations) and the views and view packages. Based on this information, a genetic algorithm tries to optimize a quality function to assign the views to the available display devices. For this, random view-display mappings are generated and rated based on the quality function (see Section 3.2.1). Considering these ratings, new mappings are generated and rated again. The genetic algorithm converges to a good mapping quickly.

Finally, the views are transferred to the display devices where the view layout takes place. The layout strategy described in Section 3.2.1 considers the following information: the resolution and aspect ratio of the display device, the number of views to be displayed, and the properties of the individual views (e.g., aspect ratio and original size). Based on this information, force-directed placement and pressure-based resizing are performed. For the force-directed placement, attracting and repelling forces are defined automatically between the views. For the pressure-based resizing, an inner pressure of each view (regarding the actual size and the optimal size) and an outer pressure (with regard to the used display space) are defined. These forces are processed by the layout algorithm.

Furthermore, the layout algorithm is capable of taking the users' positions into account to avoid physical occlusion of views displayed via a projector on a projection surface. The physical positions of projectors and users in the smart meeting room and the size of the presented display surface are collected from the tuple space. Based on this information, projection cones and their intersections with the users are calculated. The intersections are projected onto the canvases and mapped into the display space. Through this, we obtain dummy views that represent the areas of the display surfaces that are occluded by users. These dummy views occupy display space, which prevents the view layout from placing any other real view in that area. More details on this procedure can be found in [30].
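A strongly simplified, one-dimensional version of this computation is sketched below: treating the projector as a point light source, the lateral extent of a user standing between projector and canvas is scaled onto the canvas by similar triangles, and the resulting interval becomes the footprint of a dummy view. The real computation works with full 3-D projection cones [30].

    final class OcclusionShadow {

        // Projector at distance 0, canvas at distance canvasDist along the
        // projection axis; the user silhouette spans [left, right] laterally
        // at distance userDist. Returns the shadow interval on the canvas.
        static double[] shadowOnCanvas(double canvasDist, double userDist,
                                       double left, double right) {
            double scale = canvasDist / userDist; // similar triangles
            return new double[] { left * scale, right * scale };
        }
    }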

Interaction components The three interaction components of our approach (interaction grabber, interaction mapper, and interaction handler) are shown on the right-hand side of Figure 6.

The interaction grabber gathers the interaction from the individual devices. To make this possible, a software service is started on a personal device, monitoring the interactions of connected interaction devices. The interaction events are converted into our generic interaction description, representing pointer movement and trigger interactions. This description of a single event is provided to the tuple space of the smart meeting room.

In the next step, the interaction mapper assigns this interaction description to a certain display device. This is achieved by relative pointer-movement monitoring, based on the positions of the public displays (queried from the tuple space) and a defined origin position of a pointer. Any interaction device participating in the ad-hoc environment has its own pointer. Multiple pointers can be used simultaneously. Moving the pointer out of a display device causes the interaction mapper to assign the interaction to the adjacent display device.
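The core of the mapping step is a lookup of the display under the virtual pointer. The sketch below models display geometry as rectangles in room coordinates; in the real system, this geometry is queried from the tuple space:

    import java.awt.Point;
    import java.awt.Rectangle;
    import java.util.Map;

    final class PointerMapper {

        // Each interaction device owns a virtual pointer in room
        // coordinates; the mapper determines which display it is on.
        // Leaving one display hands the pointer over to the adjacent one.
        static String displayUnder(Point pointer, Map<String, Rectangle> displays) {
            return displays.entrySet().stream()
                    .filter(e -> e.getValue().contains(pointer))
                    .map(Map.Entry::getKey)
                    .findFirst()
                    .orElse(null); // pointer currently between displays
        }
    }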

In the last step, the interaction is interpreted by the interaction handler. It converts the generic interaction description into specific events performed on the assigned display device.

Through this interaction process, it is technically possible for multiple users to interact with all displayed views simultaneously. The views can be resized, repositioned, assigned to other display devices, zoomed, and so forth. Note that neither do the original image data have to be modified for such manipulations, nor does the original software need to provide functionality for resizing or zooming. All this is realized by utilizing the capabilities of the JPEG2000 encoding. For further implementation details on the interaction process, we refer to [29].

Implications and limitations As the reader may guess, many technical details need to be taken care of to get such a complex and dynamic environment up and running. We should admit that both the implementation of our approach and the smart meeting room itself are work in progress, with ongoing contributions from many researchers.

Obviously, such a setting implies some dependencies and limitations. In particular, we depend on the middleware Helferlein, which provides us with all the information we need about the environment and the users. This information is the basis for the dynamic mapping and layout of views on the multiple displays. Secondly, our approach is limited to views defined by nothing but pixels. Pixels are the least common denominator for integrating contents from heterogeneous devices of multiple users for the purpose of presentation. Using JPEG2000, we can somewhat enhance our pixel-oriented solution with flexible regions of interest and multi-resolution encoding. The API-based view exposure provides some rudimentary ways of considering additional meta-information for views. But incorporating sophisticated content-aware mechanisms that define views based on their semantics would require substantial additional effort.

Fig. 7 The PIK vegetation visualizer (PVV).

Although our implementation is not a production-ready system, we were able to run test scenarios with visualization applications in the smart meeting room. The next paragraphs will illustrate this.

5 Application to Climate Impact Research

In this section, we illustrate how our approach can be applied in the context of climate impact research. Climate impact research requires interdisciplinary collaboration of experts from different domains (e.g., physics, statistics, geology, or computer science). For these experts, the exploration and analysis of large amounts of data is day-to-day business. Once research results have been crystallized from the data, they need to be communicated to fellow researchers or even the public.

However, communicating the research results is a challenging problem, because of the complexity of the data and the heterogeneity of the collaborating researchers. Interactive visualization can play a key role in conveying complex information and bridging the gaps between the different scientific languages spoken by the involved experts. In this section, we demonstrate how visualization in a smart meeting room using our solution can improve the current practice of presenting and discussing research results among climate impact researchers.

As the basis for our test runs, we used the PIK Vegetation Visualizer (PVV), a tool for the interactive visualization of biosphere simulation data [27,49]. PVV is applied by climate researchers on a regular basis to present and discuss the diverging outcomes of alternative models, different simulations, and associated parameterizations. As the data to be visualized are very large, PVV uses an off-line phase to pre-calculate visual representations for a well-defined set of models and parameter settings. This results in a large image database. During a discussion, PVV allows the presenter (not the audience) to query the image database according to time intervals, models, or parameters via standard GUI elements. Up to four images, one focus image and three context images, are shown in a fixed layout as depicted in Figure 7.

In daily work, PVV is applied in different situations, including presentations to larger expert or non-expert audiences on standard projectors, presentations at exhibitions or open house events on medium or large high-resolution displays, as well as presentations and discussions with guest scientists or decision-makers in meeting rooms with different presentation devices. In these application scenarios, the climate researchers always face the same limitations of the original version of the PVV tool:

– a fixed number of pre-calculated resolutions,
– a fixed window layout,
– only restricted comparative visualization,
– interaction restricted to the presenter.

To overcome these limitations, we extended PVV such that it uses our API to expose visualization views to the smart meeting room. The following improvements could be achieved.

In the original PVV, a large number of images were generated at multiple resolutions suitable for common display devices. However, the underlying data resolution is much higher, so the generated images show only a fraction of the information contained in the data. The integration of JPEG2000 now makes it possible to encode all images at full detail and to decode exactly the level of detail needed for a certain task or display device. This can be done on-the-fly, without resorting to any settings of the original PVV GUI controls.
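
As an illustration of this level-of-detail decoding, here is a small Python sketch using the open-source glymur library as one possible JPEG2000 binding; the file name and the resolution choices are examples, not part of the actual system.

    # One JPEG2000 file holds all resolution levels of an image;
    # each device decodes only the level it needs.
    import glymur

    # Encoding (done once, in the off-line phase), assuming the
    # full-resolution image is available as a NumPy array:
    #   glymur.Jp2k("forest_coverage.jp2", data=full_resolution_array)

    jp2 = glymur.Jp2k("forest_coverage.jp2")

    full = jp2[:]             # full detail, e.g., for a projection wall
    half = jp2[::2, ::2]      # one resolution level lower, e.g., a notebook
    quarter = jp2[::4, ::4]   # two levels lower, e.g., a smartphone

Because the decoder only processes the portion of the code stream required for the requested resolution level, this adaptation can indeed happen on-the-fly.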

A major improvement over the original practice of applying PVV in standard meeting rooms is the switch to the smart meeting room. In an experimental setup, we tested how a typical discussion scenario of climate researchers can be supported (see Figure 5). In this scenario, multiple experts from climate science, hydrology, and ecology in the room discuss the potential global and local impacts of climate change, using the options the smart meeting room offers in combination with their own notebooks, smartphones, or portable projectors. The integration of such mobile devices is realized by the Helferlein middleware (see Section 4). The availability of these devices in the smart meeting room leads to two fundamentally new options.

First, using the display facilities of the room, it is now possible to compare several climate model drivers (e.g., temperature and precipitation) for geographical regions of interest across multiple climate models, together with multiple ecosystem variables such as forest coverage and carbon storage, at a glance. The view grouping helps to steer and structure the layout. Views related to the core topic of the discussion are mapped to the primary displays. Side topics that surface during the discussion can be integrated on-the-fly, but are usually parked on secondary displays until they become the focus of the discussion. This multi-view, multi-display approach makes it much easier to answer complex questions about multi-faceted climate data.
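
A minimal Python sketch of such a grouping-driven display mapping follows; the dictionary format and the round-robin assignment are illustrative assumptions only, and the system's actual layout suggestion is more elaborate.

    # Illustrative heuristic: core-topic views go to primary displays,
    # side-topic views are parked on secondary displays.
    def assign_views(views, primary_displays, secondary_displays):
        """views: list of dicts with 'id' and 'group' keys (assumed format)."""
        mapping = {}
        core = [v for v in views if v["group"] == "core-topic"]
        side = [v for v in views if v["group"] != "core-topic"]
        for i, view in enumerate(core):
            mapping[view["id"]] = primary_displays[i % len(primary_displays)]
        for i, view in enumerate(side):
            mapping[view["id"]] = secondary_displays[i % len(secondary_displays)]
        return mapping

    views = [{"id": "temperature", "group": "core-topic"},
             {"id": "precipitation", "group": "core-topic"},
             {"id": "carbon_storage", "group": "side-topic"}]
    print(assign_views(views, ["wall-left", "wall-right"], ["side-screen"]))
    # {'temperature': 'wall-left', 'precipitation': 'wall-right',
    #  'carbon_storage': 'side-screen'}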

Second, the presenter can change both the views and the layout to match the current topic of interest. For example, it is possible to move a view to another display to support a requested comparison with other views. On the display where the view is inserted, all other views automatically scale down to free space for the new view. Analogously, the views on the source display scale up to exploit the vacated space. The JPEG2000 coding holds all the information necessary to carry out these operations. Not only the presenter but also the attendees can easily move views between displays or zoom into regions of interest with their personal interaction devices. By doing so, they can emphasize their statements or illustrate their questions. They can also generate new views, e.g., showing the behavior of further variables, and assign them to a view package or directly to a certain display.
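
To make the scale-down/scale-up behavior concrete, the following Python sketch assumes, for simplicity, that each display tiles its views in a uniform grid; the system's actual layout logic may differ.

    import math

    def view_size(display_w, display_h, n_views):
        """Per-view size when n_views share one display (uniform grid)."""
        if n_views == 0:
            return None
        cols = math.ceil(math.sqrt(n_views))
        rows = math.ceil(n_views / cols)
        return display_w // cols, display_h // rows

    def move_view(view, source_views, target_views):
        """Move a view between displays; both sides re-layout afterwards."""
        source_views.remove(view)
        target_views.append(view)
        # Views on the target display scale down, views on the source
        # display scale up; JPEG2000 decoding then delivers each view
        # at exactly its new resolution.

    source, target = ["v1", "v2", "v3"], ["v4"]
    move_view("v3", source, target)
    print(view_size(1920, 1080, len(source)))  # remaining views scale up
    print(view_size(1920, 1080, len(target)))  # target views scale down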

In this way, many different aspects of the climate data can be visualized and discussed in relation to climate impacts. This facilitates getting the big picture of the complex relationships studied in climate impact research, going clearly beyond the functionality offered by the original PVV tool.

The technical implementation in the smart meeting room puts the presenter fully in charge of what visual information is shown and how, while allowing the audience to add content on demand and to participate actively. The interaction with the system, even in its prototypical state, is designed to be easy enough to be operated by users who are not necessarily experts in visualization or multi-display systems.

6 Related Work

Our solution is in line with several other approaches and research prototypes that address the integration of multiple, possibly large, displays and novel interaction modalities. Here, we briefly review relevant related work concerned with information presentation in multi-display environments in general. More specifically, we look at approaches that (1) provide and display visual representations and (2) interactively manipulate these visual representations.

6.1 Display of visual representations

There are a few systems that address the presentation of information in multi-display environments. The existing approaches fall into three categories: (1) operating-system-specific information presentation, (2) data-based information presentation, and (3) image-based information presentation.

OS-specific information presentation systems, such as the Windows-based WinCuts [36] or the Linux-based Deskotheque [40,42], utilize features of a specific OS (e.g., GDI under Windows or the X Window System under Linux or Unix) to display windows and graphical primitives. Such systems master the technical difficulties of information presentation. However, they alone are not sufficient for smart meeting rooms, because smart meeting rooms are ad-hoc environments where users can bring their own devices with different operating systems.

Data-based information presentation systems transfer data to a public display device, where the data are processed by a certain application that generates the visual output to be displayed. Examples are iRoom [20], Dynamo [17], the multi-user, multi-display molecular visualization application in [11], the ZOIL framework in [19], or the VisPorter approach in [9]. Data-based information presentation allows for the dynamic generation of visual representations for certain situations, demands, and tasks. However, these systems are usually limited to specific applications and are tightly intertwined with the environment. With our approach for smart meeting rooms, we strive for flexible applicability across application boundaries.

Image-based information presentation systems generate visual representations of the data on a source device and transfer them as plain image data to the display device that is to show the data. An example is WeSpace [46], which uses a client-server architecture to collect and transmit content from multiple private devices to a large public display. This is accomplished via a modified VNC client. As such, image-based information presentation systems are suited for smart meeting rooms. They are not restricted to certain operating systems or applications and can be used in a dynamic ad-hoc environment. However, current approaches neither deal conceptually with the visual outputs of different applications nor do they support the automatic configuration of the information display.

In summary, the existing solutions have different restrictions: they depend on particular features of some OS, they depend on particular applications or data structures, or they lack automatic configuration. Our approach supports heterogeneous devices, integrates views from different applications without requiring specific data structures (just pixels), and supports the automatic mapping and layout of contents.

6.2 Interaction with visual representations

The question of interacting in multi-display environments is tackled differently in the literature. There are solutions that (1) make mouse and keyboard available for all displays or (2) utilize new interaction devices.

PointRight [21] allows for interacting with mouse and keyboard in iRoom by utilizing the iROS middleware. Clicky [4] uses a specific server-client architecture. 3D geometry models of the display devices can be used to support relative mouse pointer navigation via the Perspective Cursor [26]. In Deskotheque, the use of mouse and keyboard across display boundaries is supported through a modified Synergy client [41,43].

New interaction modalities applied in multi-display environments include laser pointers [2,12,28], the XWand [47,48], and the Nintendo Wiimote [24]. Gesture-based interaction is possible as well [19,22]. To the best of our knowledge, there is no approach that combines classic mouse and keyboard input with such new modalities for interaction in smart meeting rooms.

Furthermore, there are a few approaches that deal with more than just the technical basis for displaying and interacting with visual representations in multi-display environments. The automatic arrangement of visual content has been addressed in [40,45]. Showing relations between applications by means of visual linking was proposed in [34,44]. The interactive annotation of pre-defined visual content is discussed in [14,15,23]. However, the existing approaches are not capable of modifying visual representations on the fly.

Taken together, we see several existing solutions, each with individual strengths and limitations. All of these approaches document the relevance of research on multi-user, multi-display environments. With our work, we are part of an ongoing movement beyond the classic pattern of a single user at a desktop machine toward richly supported and flexibly applicable multi-user, multi-display environments.


7 Summary and Future Work

In summary, by capturing visualization views and user interaction, and by mapping them appropriately according to the smart room's devices and the users' needs, we are able to create an environment that supports data analysts in presenting and discussing their research results. We understand our work as initial steps toward a better integration of heterogeneous visualization software, multiple display and interaction devices for visualization, and dynamically changing user requirements. With the developed technological basis, it is now possible to study theoretical aspects of group data analysis in smart meeting rooms. Continuing on this road of research, we hope to arrive at what we call a smart visualization session. We have illustrated what such a session could look like for an application scenario related to climate impact research.

For the future, we see several possibilities for extensions and improvements. We are currently extending our work from the presenter-audience scenario to a multi-presenter scenario. Our goal is to support a seamless switch between different presenters and the dynamic generation of presentations. We want to support the flexible enhancement of presentations with regard to the current discussion, based on the input of multiple presenters who contribute to the topic in an ad-hoc manner. This requires a well-designed framework of underlying models describing, for example, structured presentations or dependencies between interactions. Our investigations are embedded in the current activities of a larger research effort aimed at developing new forms of project work, teaching, and learning facilitated by smart meeting rooms.

References

1. Aarts, E.H.L., Encarnacao, J.L.: True Visions: The Emergence of Ambient Intelligence. Springer-Verlag New York, Inc. (2006)
2. Ahlborn, B.A., Thompson, D., Kreylos, O., Hamann, B., Staadt, O.G.: A Practical System for Laser Pointer Interaction on Large Displays. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST '05, pp. 106–109. ACM, New York, NY, USA (2005). DOI 10.1145/1101616.1101637
3. Ali, K., Hartmann, K., Fuchs, G., Schumann, H.: Adaptive Layout for Interactive Documents. In: Proceedings of the International Symposium on Smart Graphics, SG '08, vol. 5166, pp. 247–254. Springer (2008). DOI 10.1007/978-3-540-85412-8_24
4. Andrews, C.R., Sampemane, G., Weiler, A., Campbell, R.H.: Clicky: User-centric Input for Active Spaces. Tech. Rep. UIUCDCS-R-2004, University of Illinois at Urbana-Champaign, Dept. of CS (2004)
5. Bader, S., Nyolt, M.: A Context-Aware Publish-Subscribe Middleware for Distributed Smart Environments. In: Proceedings of the 9th IEEE Workshop on Managing Ubiquitous Communications and Services, MUCS 2012. IEEE, Lugano, Switzerland (2012)
6. Bader, S., Ruscher, G., Kirste, T.: A Middleware for Rapid Prototyping Smart Environments. In: Proceedings of the 12th ACM International Conference Adjunct Papers on Ubiquitous Computing, pp. 355–356. ACM, Copenhagen, Denmark (2010). DOI 10.1145/1864431.1864433
7. Bauer, M., Becker, C., Rothermel, K.: Location Models from the Perspective of Context-Aware Applications and Mobile Ad Hoc Networks. Personal and Ubiquitous Computing 6(5), 322–328 (2002). DOI 10.1007/s007790200036
8. Bryden, A., Phillips Jr., G.N., Griguer, Y., Moxon, J., Gleicher, M.: Improving Collaborative Visualization of Structural Biology. In: G. Bebis, R. Boyle, B. Parvin, D. Koracin, S. Wang, K. Kyungnam, B. Benes, K. Moreland, C. Borst, S. DiVerdi, C. Yi-Jen, J. Ming (eds.) Advances in Visual Computing, Lecture Notes in Computer Science, vol. 6938, pp. 518–529. Springer (2011). DOI 10.1007/978-3-642-24028-7_48
9. Chung, H., North, C., Self, J.Z., Chu, S.L., Quek, F.K.H.: VisPorter: Facilitating Information Sharing for Collaborative Sensemaking on Multiple Displays. Personal and Ubiquitous Computing 18(5), 1169–1186 (2014). DOI 10.1007/s00779-013-0727-2
10. Cook, D., Das, S.: Smart Environments: Technology, Protocols and Applications, vol. 43. Wiley-Interscience (2004)
11. Forlines, C., Lilien, R.: Adapting a Single-user, Single-display Molecular Visualization Application for Use in a Multi-user, Multi-display Environment. In: Proceedings of the Working Conference on Advanced Visual Interfaces, AVI '08, pp. 367–371. ACM (2008). DOI 10.1145/1385569.1385635
12. Fukazawa, R., Takashima, K., Shoemaker, G., Kitamura, Y., Itoh, Y., Kishino, F.: Comparison of Multimodal Interactions in Perspective-corrected Multi-display Environment. In: Proceedings of the 2010 IEEE Symposium on 3D User Interfaces, 3DUI '10, pp. 103–110. IEEE Computer Society, Washington, DC, USA (2010). DOI 10.1109/3DUI.2010.5444711
13. Gelernter, D.: Generative Communication in Linda. ACM Transactions on Programming Languages and Systems 7(1), 80–112 (1985). DOI 10.1145/2363.2433
14. Haller, M., Brandl, P., Leithinger, D., Leitner, J., Seifried, T., Billinghurst, M.: Shared Design Space: Sketching Ideas Using Digital Pens and a Large Augmented Tabletop Setup. In: Proceedings of the 16th International Conference on Artificial Reality and Telexistence, ICAT 2006, pp. 185–196. Springer (2006). DOI 10.1007/11941354_20
15. Haller, M., Brandl, P., Richter, C., Leitner, J., Seifried, T., Gokcezade, A., Leithinger, D.: Interactive Displays and Next-Generation Interfaces. In: Hagenberg Research, pp. 433–472. Springer (2009). DOI 10.1007/978-3-642-02127-5_10
16. Heider, T., Kirste, T.: Multimodal Appliance Cooperation based on Explicit Goals: Concepts & Potentials. In: Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-aware Services: Usages and Technologies, sOc-EUSAI '05, pp. 271–276. ACM (2005). DOI 10.1145/1107548.1107614
17. Izadi, S., Brignull, H., Rodden, T., Rogers, Y., Underwood, M.: Dynamo: A Public Interactive Surface Supporting the Cooperative Sharing and Exchange of Media. In: Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, UIST '03, pp. 159–168. ACM (2003). DOI 10.1145/964696.964714
18. Jaimes, A., Miyazaki, J.: Building a Smart Meeting Room: From Infrastructure to the Video Gap (Research and Open Issues). In: Proceedings of the 21st International Conference on Data Engineering Workshops, ICDEW '05, pp. 1173–1173. IEEE (2005). DOI 10.1109/ICDE.2005.202
19. Jetter, H.C., Zollner, M., Gerken, J., Reiterer, H.: Design and Implementation of Post-WIMP Distributed User Interfaces with ZOIL. International Journal of Human-Computer Interaction 28(11), 737–747 (2012). DOI 10.1080/10447318.2012.715539
20. Johanson, B., Fox, A., Winograd, T.: The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing 1(2), 67–74 (2002). DOI 10.1109/MPRV.2002.1012339
21. Johanson, B., Hutchins, G., Winograd, T.: PointRight: A System for Pointer/Keyboard Redirection Among Multiple Displays and Machines. Tech. Rep. CS-2000-03, Stanford University (2000)
22. Karam, M., Schraefel, M.C.: A Taxonomy of Gestures in Human Computer Interactions. Tech. Rep. ECSTR-IAM05-009, University of Southampton (2005)
23. Kurihara, K., Igarashi, T.: A Flexible Presentation Tool for Diverse Multi-display Environments. In: C. Baranauskas, P. Palanque, J. Abascal, S. Barbosa (eds.) Human-Computer Interaction – INTERACT 2007, Lecture Notes in Computer Science, vol. 4662, pp. 430–433. Springer (2007). DOI 10.1007/978-3-540-74796-3_41
24. Lee, J.C.: Wii Projects. http://johnnylee.net/projects/wii/ (2013). (Accessed 08/2013)
25. Mitchell, M.: An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, USA (1998)
26. Nacenta, M.A., Sallam, S., Champoux, B., Subramanian, S., Gutwin, C.: Perspective Cursor: Perspective-based Interaction for Multi-display Environments. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06, pp. 289–298. ACM, New York, NY, USA (2006). DOI 10.1145/1124772.1124817
27. Nocke, T., Heyder, U., Petri, S., Vohland, K., Wrobel, M., Lucht, W.: Visualization of Biosphere Changes in the Context of Climate Change. In: Proceedings of the 2nd International Conference IT for Empowerment: Information Technology and Climate Change, ITCC '08, pp. 29–36 (2009)
28. Peck, C.: Useful Parameters for the Design of Laser Pointer Interaction Techniques. In: CHI '01 Extended Abstracts on Human Factors in Computing Systems, CHI EA '01, pp. 461–462. ACM (2001). DOI 10.1145/634067.634333
29. Radloff, A., Lehmann, A., Staadt, O., Schumann, H.: Smart Interaction Management: An Interaction Approach for Smart Meeting Rooms. In: Proceedings of the 8th International Conference on Intelligent Environments, IE '12, pp. 228–235. IEEE Computer Society, Guanajuato, Mexico (2012). DOI 10.1109/IE.2012.34
30. Radloff, A., Luboschik, M., Schumann, H.: Smart Views in Smart Environments. In: Proceedings of the International Symposium on Smart Graphics, SG '11 (2011). DOI 10.1007/978-3-642-22571-0_1
31. Rosenbaum, R.: Mobile Image Communication using JPEG2000. Ph.D. thesis, University of Rostock, Germany (2006)
32. Future Shape: SensFloor. http://www.future-shape.de/de/technologies/11/sensfloor (2013). (Accessed 05/2013)
33. Spindler, M., Tominski, C., Schumann, H., Dachselt, R.: Tangible Views for Information Visualization. In: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, ITS '10, pp. 157–166. ACM (2010). DOI 10.1145/1936652.1936684
34. Steinberger, M., Waldner, M., Streit, M., Lex, A., Schmalstieg, D.: Context-Preserving Visual Links. IEEE Transactions on Visualization and Computer Graphics 17(12), 2249–2258 (2011). DOI 10.1109/TVCG.2011.183
35. Su, S., Loftin, R., Chen, D., Fang, Y.C., Lin, C.Y.: Distributed Collaborative Virtual Environment: PaulingWorld. In: Proceedings of the 10th International Conference on Artificial Reality and Telexistence, pp. 112–117 (2000)
36. Tan, D.S., Meyers, B., Czerwinski, M.: WinCuts: Manipulating Arbitrary Window Regions for More Effective Use of Screen Space. In: CHI '04 Extended Abstracts on Human Factors in Computing Systems, pp. 1525–1528 (2004). DOI 10.1145/985921.986106
37. Taubman, D.S., Marcellin, M.W.: JPEG2000: Image Compression Fundamentals, Standards and Practice. Springer (2002)
38. Thiede, C., Tominski, C., Schumann, H.: Service-Oriented Information Visualization for Smart Environments. In: Proceedings of the 13th International Conference Information Visualisation, IV '09, pp. 227–234. IEEE Computer Society, Washington, DC, USA (2009). DOI 10.1109/IV.2009.54
39. Ubisense: Ubisense Series 7000. http://www.ubisense.net/en/resources/factsheets/series-7000-ip-sensors.html (2013). (Accessed 06/2013)
40. Waldner, M.: WIMP Interfaces for Emerging Display Environments. Ph.D. thesis, Graz University of Technology (2011)
41. Waldner, M., Kruijff, E., Schmalstieg, D.: Bridging Gaps with Pointer Warping in Multi-display Environments. In: Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI '10, pp. 813–816. ACM, New York, NY, USA (2010). DOI 10.1145/1868914.1869036
42. Waldner, M., Lex, A., Streit, M., Schmalstieg, D.: Design Considerations for Collaborative Information Workspaces in Multi-Display Environments. In: Proceedings of the Workshop on Collaborative Visualization on Interactive Surfaces, CoVIS '09, pp. 36–39 (2009)
43. Waldner, M., Pirchheim, C., Kruijff, E., Schmalstieg, D.: Automatic Configuration of Spatially Consistent Mouse Pointer Navigation in Multi-display Environments. In: Proceedings of the 15th International Conference on Intelligent User Interfaces, IUI '10, pp. 397–400. ACM, New York, NY, USA (2010). DOI 10.1145/1719970.1720040
44. Waldner, M., Puff, W., Lex, A., Streit, M., Schmalstieg, D.: Visual Links Across Applications. In: Proceedings of Graphics Interface, GI '10, pp. 129–136. Canadian Information Processing Society (2010)
45. Waldner, M., Steinberger, M., Grasset, R., Schmalstieg, D.: Importance-Driven Compositing Window Management. In: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, pp. 959–968. ACM (2011)
46. Wigdor, D., Jiang, H., Forlines, C., Borkin, M., Shen, C.: WeSpace: The Design Development and Deployment of a Walk-up and Share Multi-surface Visual Collaboration System. In: Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI '09, pp. 1237–1246. ACM (2009). DOI 10.1145/1518701.1518886
47. Wilson, A., Pham, H.: Pointing in Intelligent Environments with the WorldCursor. In: Proceedings of INTERACT. IOS Press (2003)
48. Wilson, A., Shafer, S.: XWand: UI for Intelligent Spaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, pp. 545–552. ACM, New York, NY, USA (2003). DOI 10.1145/642611.642706
49. Wrobel, M., Hinkel, J., Hofmann, M., Nocke, T., Vohland, K.: Interactive Access to Climate Change Information. In: Proceedings of the International Symposium on Environmental Software Systems, ISESS '09, Venice (2009)
50. Yost, B., Haciahmetoglu, Y., North, C.: Beyond Visual Acuity: The Perceptual Scalability of Information Visualizations for Large Displays. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 101–110. ACM, New York, NY, USA (2007). DOI 10.1145/1240624.1240639
51. Youngblood, G.M., Heierman, E.O., Holder, L.B., Cook, D.J.: Automation Intelligence for the Smart Environment. In: Proceedings of the 19th International Joint Conference on Artificial Intelligence, IJCAI '05, vol. 19, pp. 1513–1514 (2005)