Interacting with Augmented Environments

EDITOR’S INTRO
Ongoing work from the University of Palermo’s Department of Computer Science and Engineering addresses two issues related to simplifying and broadening augmented environment access. —Anthony D. Joseph



PERVASIVE computing, April–June 2010. Published by the IEEE CS. 1536-1268/10/$26.00 © 2010 IEEE

Works in Progress. Editor: Anthony D. Joseph, University of California, Berkeley ([email protected])


LINKING PHYSICAL DEVICES AND THEIR ONTOLOGY
Salvatore Sorce, University of Palermo

Pervasive systems augment environments by integrating information processing into everyday objects and activities. They consist of two parts: a visible part populated by animate (visitors, operators) or inanimate (AI) entities interacting with the environment through digital devices, and an invisible part composed of software objects performing specific tasks in an underlying framework.

The digital devices differ in scale along several axes, including communication mode, size, price, and connectivity. Ideally, a pervasive system could integrate any device that has built-in active or passive intelligence. However, few devices were originally conceived to cooperate with others. They use different languages, and their interfaces are often incompatible. Currently, pervasive system designers must account for this heterogeneity by programming different devices one by one, then making them work together through a specific protocol.

Consequently, even though many usefully deployed pervasive systems exist, the integration of different devices within a single programming and design platform still poses significant reliability, scalability, security, QoS, and privacy challenges. The interfaces and services involve unprecedented complexity, and efforts to realize operating systems, programming languages, and middleware that will support device interoperability haven’t yet achieved satisfactory results. Current systems tend to focus on specific application domains using specific devices.

Achieving interoperability at all levels of ubiquitous computing will require an infrastructure to act as a logical connection layer. Middleware—whether it’s hardware, software, or both—is essential to support application mobility and adaptation to changing contexts. The middleware implementation must optimize a trade-off between transparency and awareness.

To support more integrated interaction among pervasive system devices, researchers at the University of Palermo are developing a thorough classification of devices, with particular reference to computing, memory, I/O, and networking capabilities. At the same time, we’re working on a common model for representing the classification. The model will include a structured description of all features and capabilities of classified devices, with particular attention to interaction modes. Designers can then refer to this common model when composing a pervasive system.
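As a rough sketch of what such a structured device description might look like in practice (the model is still being defined, so every field name and value here is hypothetical, not the project’s actual representation):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured device description. The field names
# and example values are illustrative assumptions, not the actual model.
@dataclass
class DeviceDescription:
    model_name: str
    version: str
    computing: dict = field(default_factory=dict)   # e.g., CPU family, clock
    memory_kb: int = 0
    io_modes: list = field(default_factory=list)    # interaction modes
    networking: list = field(default_factory=list)  # connectivity options

# Example entry a designer might select from the device library.
pda = DeviceDescription(
    model_name="ExamplePDA-100",
    version="1.2",
    computing={"cpu": "ARM", "mhz": 400},
    memory_kb=65536,
    io_modes=["touch", "voice"],
    networking=["wifi", "rfid-reader"],
)
print(pda.io_modes)  # → ['touch', 'voice']
```

A middleware layer could read such a record to learn, before deployment, which interaction modes and network interfaces a device exposes.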

The device classifications and model definition will support whatever middleware designers use to create pervasive systems. A suitably classified and represented library of available devices will let system developers include a device just by selecting its name. The representation will pass useful programming and interface information to the middleware so that devices can correctly interact with each other. We plan to make the classification and representation model available as a Web-accessible database.

We’re also considering a smart link between a device and its representation in our model, using optical or RFID tags that come with the device. The link would solve a problem that’s come up during our classification task—namely, the difficulty of identifying each device correctly. Many similar devices have no clear indication of their exact model name. Even when the model name is clear, detecting the exact version number is sometimes difficult. Linking attached device tags to their model representation would give programmers access to feature specifications that would make it easier to include a specific device in a pervasive system project. The detected tag would become a key to querying the device database and retrieving its features in a document structured according to the representation model.
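To illustrate the tag-as-key idea, a detected tag ID could drive a lookup roughly as follows; the catalog, its schema, and the tag values are invented for illustration and are not the project’s actual database or API:

```python
from typing import Optional

# Hypothetical device catalog keyed by tag ID. In the envisioned system this
# would be a query against the Web-accessible database, not an in-memory dict.
DEVICE_CATALOG = {
    "04:A2:19:7F": {
        "model_name": "ExamplePDA-100",
        "version": "1.2",
        "features": {"io": ["touch", "voice"], "net": ["wifi"]},
    },
}

def lookup_device(tag_id: str) -> Optional[dict]:
    """Return the structured feature document for a detected tag,
    or None if the tag is unknown to the catalog."""
    return DEVICE_CATALOG.get(tag_id)

doc = lookup_device("04:A2:19:7F")
print(doc["model_name"])  # → ExamplePDA-100
```

The point of the indirection is that the physical tag carries only an opaque ID; all model-specific detail lives in the database and can be updated without touching the device.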

At the moment, we’re working on the representation model. This is the most difficult task because the model must account for all possible features of all devices that a pervasive system might use.

For more information, contact Salva-tore Sorce at [email protected].


SMARTER GUIDES FOR SMARTER PLACES
Salvatore Sorce, Agnese Augello, Antonella Santangelo, Antonio Gentile, Alessandro Genco, and Salvatore Gaglio, University of Palermo
Giovanni Pilato, Italian National Research Council

Giving users the freedom to choose among multiple interaction modes is crucial to wider acceptance of pervasive systems in public places. For three years, we’ve been developing multimodal services that let users choose how they access context-dependent information in several different augmented environments, especially cultural heritage sites. Our approach provides a virtual guide system that integrates an intelligent conversational agent, speech recognition and synthesis technologies, an RFID-based location framework, and a Wi-Fi–based data-exchange framework. Users access the system through a PDA equipped with an RFID tag reader that’s used for autolocation. The conversational agent integrates reasoning capabilities into a knowledge-representation module and provides a natural language interface to the system’s information-retrieval and presentation services.

Figure 1 shows the system architecture. The interaction module, which executes within the mobile device, embeds the vocal interaction, a traditional point-and-click interface, and automatic self-location. It acquires user inputs in multimodal form and feeds simple questions to the reasoning module in textual form. The reasoning module retrieves the appropriate information and returns text, images, and links to the interaction module for output.

When a user makes a vocal request, a multimodal browser looks for a match in a local grammar file. If the browser finds a match, it submits the recognized query text to the reasoning module, where a chatbot attaches semantics to the text and builds the answer based on an ontology query. RFID tags can also trigger interaction between the PDA application and the system. In this case, when the user’s PDA is in a tagged object’s proximity, the system provides basic information about that object. This is an implicit-interaction mode; because users don’t start this interaction intentionally, they perceive the system as proactive. They can then ask more questions about the object, or they can ignore it and continue walking.
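The two interaction paths described above—explicit vocal queries matched against a local grammar and implicit, tag-triggered information—can be sketched roughly as follows. The grammar entries, tag table, and answers are invented for illustration; the real system uses a chatbot and an ontology rather than these stand-in dictionaries:

```python
# Illustrative sketch of the two interaction paths. GRAMMAR stands in for the
# local grammar file; TAG_INFO stands in for tag-triggered basic information.
GRAMMAR = {"who painted the ceiling": "ceiling_author_query"}
TAG_INFO = {"tag-17": "You are near the 14th-century wooden ceiling."}

def ask_chatbot(query: str) -> str:
    # Stand-in for the chatbot/ontology step that builds the real answer.
    return {"ceiling_author_query": "The ceiling was painted in tempera."}[query]

def handle_vocal(utterance: str) -> str:
    """Explicit interaction: match the recognized text against the grammar,
    then forward the query to the reasoning module."""
    query = GRAMMAR.get(utterance.lower())
    if query is None:
        return "Sorry, I did not understand."
    return ask_chatbot(query)

def handle_tag(tag_id: str):
    """Implicit interaction: proximity to a tagged object triggers basic
    information without the user asking for it."""
    return TAG_INFO.get(tag_id)

print(handle_tag("tag-17"))
```

Note the asymmetry: the vocal path always produces a reply (even a fallback), while the tag path stays silent for unknown tags, which is exactly the ambiguity the field study below uncovered.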

Since we started working on the system, it has evolved to give people useful services in different places, such as university campuses and city neighborhoods. Our early implementations tagged points of interest in the environment. In this site-centered operational mode, the RFID framework triggers position-related interactions based on an estimate of a PDA’s position. The RFID infrastructure consists of passive tags, each with a unique ID associated with a specific point of interest. PDAs are equipped with compact-flash or secure-digital RFID tag readers. Once the RFID reader detects a tag, it passes the unique ID to the server through a wireless network. The chatbot uses this information to start the interaction with the user.

Figure 1. The virtual guide system architecture. On the left, the interaction module runs on the PDA, collects the user’s multimodal input, and sends it to the reasoning module (on the right). The chatbot builds the content by querying the ontology and sends it back to the interaction module for multimodal output.

In follow-up questionnaires, users indicated overall high satisfaction with the system but frustration with having to position their PDAs precisely near the tag to start an interaction in that mode. Because the tags are almost invisible, users couldn’t be sure whether the guide’s silence resulted from a fault, a misreading, or a nonexistent tag.

On the basis of this field experience, we’re now switching the context-sensing RFID framework to a symmetric version that attaches tags to the users and readers to the points of interest. This user-centered operational mode also improves the system’s user-profiling capabilities while still maintaining its flexibility for group management. PDAs still provide the human-environment interface, but they needn’t be the only one. We can place RFID readers at given points of interest or arrange them to compose a sensing mesh that lets the system detect the users’ motion and direction through the environment as well as their position in it. This gives the system a sense of sight, even if it doesn’t actually use a camera or image-recognition algorithm. Users are immersed in a pervasive RFID-based identification-and-detection system that acts as a distributed eye.
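As an illustrative sketch of how a sensing mesh could infer motion, the system can look at the time-ordered sequence of readers that detected a user’s tag; the reader names and floor-plan coordinates here are invented for illustration:

```python
# Hypothetical reader layout: each fixed RFID reader has known coordinates
# on the site floor plan. Names and positions are invented for illustration.
READER_POS = {"entrance": (0, 0), "hall": (5, 0), "ceiling-area": (5, 8)}

def infer_track(detections):
    """detections: list of (timestamp, reader_id) for one user's tag.
    Returns (latest_position, direction_vector); direction is None until
    the tag has been seen by at least two readers."""
    detections = sorted(detections)  # order by time
    if len(detections) < 2:
        return READER_POS[detections[-1][1]], None
    (_, prev), (_, curr) = detections[-2], detections[-1]
    (x0, y0), (x1, y1) = READER_POS[prev], READER_POS[curr]
    # Direction is the displacement between the last two sightings.
    return (x1, y1), (x1 - x0, y1 - y0)

pos, direction = infer_track([(10.0, "entrance"), (14.5, "hall")])
print(pos, direction)  # → (5, 0) (5, 0)
```

Even this crude displacement estimate shows why the mesh gives the system "a sense of sight": position, heading, and dwell time all fall out of the detection log without any camera.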

In the user-centered approach, the system manager can vary the reader’s granularity according to the needs of each area within a site. In fact, tagging humans instead of objects makes it easy to manage both single users and groups and to use different interaction media. For example, single users can profile themselves, and the conversational agent can compose appropriate information while the system detects their positions at points of interest and passageways. Group leaders can profile their entire group as a single entity and act as intermediaries between the system and the group.

We’re currently implementing this version of our system in the Sala Magna of the Steri, the historical headquarters of the University of Palermo and the current seat of the university rector’s office. The Sala Magna has a 14th-century wooden ceiling composed of 32 biblical, mythological, and knightly stories painted in tempera. We plan to hide RFID reader antennas in the existing double-bottom floor, which already contains the service cables.

A demo video of the vocal interaction is available at www.unipa.it/sorce/steri.wmv (in Italian). For more information, contact Salvatore Sorce at [email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.

