
70 PERVASIVE computing 1536-1268/02/$17.00 © 2002 IEEE

System Software for Ubiquitous Computing

In ubiquitous computing (ubicomp) systems, designers embed devices in various physical objects and places. Frequently mobile, these devices—such as those we carry with us and embed in cars—are typically wirelessly networked. Some 30 years of research have gone into creating distributed computing systems, and we've invested nearly 20 years of experience in mobile computing. With this background, and with today's developments in miniaturization and wireless operation, our community seems poised to realize the ubicomp vision.

However, we aren't there yet. Ubicomp software must deliver functionality in our everyday world. It must do so on failure-prone hardware with limited resources. Additionally, ubicomp software must operate in conditions of radical change. Varying physical circumstances cause components routinely to make and break associations with peers of a new degree of functional heterogeneity. Mobile and distributed computing research has already addressed parts of these requirements, but a qualitative difference remains between the requirements and the achievements. In this article, we examine today's ubiquitous systems, focusing on software infrastructure, and discuss the road that lies ahead.

Characteristics of ubiquitous systems

We base our analysis on physical integration and spontaneous interoperation, two main characteristics of ubicomp systems, because much of the ubicomp vision, as expounded by Mark Weiser and others,1 either deals directly with or is predicated on them.

Physical integration

A ubicomp system involves some integration between computing nodes and the physical world. For example, a smart coffee cup, such as a MediaCup,2 serves as a coffee cup in the usual way but also contains sensing, processing, and networking elements that let it communicate its state (full or empty, held or put down). So, the cup can give colleagues a hint about the state of the cup's owner. Or consider a smart meeting room that senses the presence of users in meetings, records their actions,3 and provides services as they sit at a table or talk at a whiteboard.4 The room contains digital furniture such as chairs with sensors, whiteboards that record what's written on them, and projectors that you can activate from anywhere in the room using a PDA (personal digital assistant).

Human administrative, territorial, and cultural considerations mean that ubicomp takes place in more or less discrete environments based, for example, on homes, rooms, or airport lounges. In other words, the world consists of ubiquitous systems rather than "the ubiquitous system." So, from physical integration, we draw our Boundary Principle:

Ubicomp system designers should divide the ubicomp world into environments with boundaries that demarcate their content. A clear system boundary criterion—often, but not necessarily, related to a boundary in the physical world—should exist. A boundary should specify an environment's scope but doesn't necessarily constrain interoperation.

The authors identify two key characteristics of ubiquitous computing systems, physical integration and spontaneous interoperation. They examine how these properties affect the design of ubicomp software and discuss future directions.

REACHING FOR WEISER'S VISION

Tim Kindberg, Hewlett-Packard Laboratories
Armando Fox, Stanford University

Spontaneous interoperation

In an environment, or ambient,5 there are components—units of software that implement abstractions such as services, clients, resources, or applications (see Figure 1). An environment can contain infrastructure components, which are more or less fixed, and spontaneous components based on devices that arrive and leave routinely. Although this isn't a hard and fast distinction, we've found it to be useful.

In a ubiquitous system, components must spontaneously interoperate in changing environments. A component interoperates spontaneously if it interacts with a set of communicating components that can change both identity and functionality over time as its circumstances change. A spontaneously interacting component changes partners during its normal operation, as it moves or as other components enter its environment; it changes partners without needing new software or parameters.

Rather than a de facto characteristic, spontaneous interoperation is a desirable ubicomp feature. Mobile computing research has successfully addressed aspects of interoperability through work on adaptation to heterogeneous content and devices, but it has not discovered how to achieve the interoperability that the breadth of functional heterogeneity found in physically integrated systems requires.

For a more concrete definition, suppose the owners of a smart meeting room propose a "magic mirror," which shows those facing it their actions in the meeting. Ideally, the mirror would interact with the room's other components from the moment you switch it on. It would make spontaneous associations with all relevant local sources of information about users. As another example, suppose a visitor from another organization brings his PDA into the room and, without manually configuring it in any way, uses it to send his presentation to the room's projector.

We choose spontaneous rather than the related term ad hoc because the latter tends to be associated with networking—ad hoc networks6 are autonomous systems of mobile routers. But you cannot achieve spontaneous interoperation as we have defined it solely at the network level. Also, some use ad hoc to mean infrastructure-free.7 However, ubicomp involves both infrastructure-free and infrastructure-enhanced computing. This includes, for example, spontaneous interaction between small networked sensors such as particles of smart dust,8 separate from any other support, or our previous example of a PDA interacting with infrastructure components in its environment.

From spontaneous interoperation, we draw our Volatility Principle:

You should design ubicomp systems on the assumption that the set of participating users, hardware, and software is highly dynamic and unpredictable. Clear invariants that govern the entire system's execution should exist.
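One concrete way to uphold such an invariant under churn is lease-based registration, the tactic Jini popularized: a component stays visible only while it keeps renewing a lease, so departed components vanish automatically. The Python sketch below is our illustration, not from the article; the Environment class and its names are invented.

```python
import time

class Environment:
    """Registry that upholds one system-wide invariant despite volatility:
    a component is visible only while its lease is unexpired."""

    def __init__(self, lease_secs=30, clock=time.monotonic):
        self.lease_secs = lease_secs
        self.clock = clock
        self._expiry = {}          # component name -> lease expiry time

    def join(self, name):
        """A component appears in the environment (switched on or moved here)."""
        self._expiry[name] = self.clock() + self.lease_secs

    def renew(self, name):
        if name in self._expiry:
            self._expiry[name] = self.clock() + self.lease_secs

    def members(self):
        """Purge expired leases, then report the current participant set."""
        now = self.clock()
        self._expiry = {n: t for n, t in self._expiry.items() if t > now}
        return set(self._expiry)

# A fake clock makes the volatile behavior deterministic.
t = [0.0]
env = Environment(lease_secs=30, clock=lambda: t[0])
env.join("projector"); env.join("visitor-pda")
t[0] = 20.0
env.renew("projector")            # the visitor's PDA falls silent
t[0] = 40.0                       # visitor-pda's lease has now lapsed
assert env.members() == {"projector"}
```

The invariant—no stale participants are ever visible—holds no matter how abruptly a device leaves, which is exactly the "clear invariant despite volatility" the principle asks for.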

The mobile computing community will recognize the Volatility Principle in theory, if not by name. However, research has concentrated on how individual mobile components adapt to fluctuating conditions. Here, we emphasize that ubicomp adaptation should involve more than "every component for itself" and not restrict itself to the short term; designers should specify system-wide invariants and implement them despite volatility.

Examples

The following examples help clarify how physical integration and spontaneous interoperation characterize ubicomp systems. Our previous ubicomp example of the magic mirror demonstrates physical integration because it can act as a conventional mirror and is sensitive to a user's identity. It spontaneously interoperates with components in any room.

Nonexamples of ubicomp include

• Accessing email over a phone line from a laptop. This case involves neither physical integration nor spontaneous interoperation; the laptop maintains the same association to a fixed email server. This exemplifies a physically mobile system: it can operate in various physical environments but only because those environments are equally transparent to it. A truly mobile (although not necessarily ubiquitous) system engages in spontaneous interactions.9

• A collection of wirelessly connected laptops at a conference. Laptops with an IEEE 802.11 capability can connect spontaneously to the same local IEEE 802.11 network, assuming no encryption exists. Laptops can run various applications that enable interaction, such as file sharing. You could argue that discovery of the local network is physical integration, but you'd miss the essence of ubicomp, which is integration with that part of the world that has a nonelectronic function for us. As for spontaneous interaction, realistically, even simple file sharing would require considerable manual intervention.

Figure 1. The ubiquitous computing world comprises environments with boundaries and components appearing in or moving between them.

Borderline ubicomp examples include

• A smart coffee cup and saucer. Our smart coffee cup (inspired by but different from the MediaCup) clearly demonstrates physical integration, and it demonstrates a device that you could only find in a ubiquitous system. However, if you constructed it to interact only with its corresponding smart saucer according to a specialized protocol, it would not satisfy spontaneous interoperation. The owner couldn't use the coffee cup in another environment if she forgot the saucer, so we would have localization instead of ubiquitous functionality.

• Peer-to-peer games. Users play games such as Pirates!10 with portable devices connected by a local area wireless network. Some cases involve physical integration because the client devices have sensors, such as proximity sensors in Pirates! Additionally, some games can discover other players over the local network, and this dynamic association between the players' devices resembles spontaneous interaction. However, such games require preconfigured components. A more convincing case would involve players with generic game-playing "pieces," which let them spontaneously join local games even if they had never encountered them before.

• The Web. The Web is increasingly integrated with the physical world. Many devices have small, embedded Web servers,11 and you can turn objects into physical hyperlinks—a Web page is presented to the user when a device senses an identifier on the object.12 Numerous Web sites with new functionality (for example, new types of e-commerce sites) spring up on the Web without, in many cases, the need to reconfigure browsers. However, this lacks spontaneous interoperation—the Web requires human supervision to keep it going. The "human in the loop" changes the browser's association to Web sites. Additionally, the user must sometimes install plug-ins to use a new type of downloaded content.

Software challenges

Physical integration and spontaneous interoperation have major implications for software infrastructure. These ubicomp challenges include a new level of component interoperability and extensibility, and new dependability guarantees, including adaptation to changing environments, tolerance of routine failures or failurelike conditions, and security despite a shrunken basis of trust. Put crudely, physical integration for system designers can imply "the resources available to you are sometimes highly constrained," and spontaneous interoperation means "the resources available to you are highly dynamic but you must work anywhere, with minimal or no intervention." Additionally, a ubicomp system's behavior while it deals with these issues must match users' expectations of how the physical world behaves. This is extremely challenging because users don't think of the physical world as a collection of computing environments, but as a collection of places13 with rich cultural and administrative semantics.

The semantic Rubicon

Systems software practitioners have long ignored the divide between system- and human-level semantics. Ubicomp removes this luxury. Little evidence exists to suggest that software alone can meet ubicomp challenges, given the techniques currently available. Therefore, in the short term, designers should make clear choices about what system software will not do, but humans will. System designers should pay careful, explicit attention to what we call the semantic Rubicon of ubiquitous systems (see Figure 2). (The historical Rubicon river marked the boundary of what was Italy in the time of Julius Caesar. Caesar brought his army across it into Italy, but only after great hesitation.) The semantic Rubicon is the division between system and user for high-level decision-making or physical-world semantics processing. When responsibility shifts between system and user, the semantic Rubicon is crossed. This division should be exposed in system design, and the criteria and mechanisms for crossing it should be clearly indicated. Although the semantic Rubicon might seem like a human-computer interaction (HCI) notion, especially to systems practitioners, it is, or should be, a ubicomp systems notion.

Figure 2. The semantic Rubicon demarcates responsibility for decision-making between the system and the user; allocation between the two sides can be static or dynamic.

Progress report

Many ubicomp researchers have identified similar challenges but pursued them with quite different approaches; we offer a representative sampling and comparison of these approaches. Our discussion focuses on these areas, which appear common to Weiser's and others' ubicomp scenarios:

• Discovery. When a device enters an environment, how does mutual discovery take place between it and other available services and devices, and among which ones is interaction appropriate?

• Adaptation. When near other heterogeneous devices, how can we use the device to display or manipulate data or user interfaces from other devices in the environment?

• Integration. What connects the device's software and the physical environment, and what affects the connection?

• Programming framework. What does it mean to write the traditional programming exercise "Hello World" for a ubicomp environment: Are discovery, adaptation, and integration addressed at the application level, middleware–OS level, language level, or a combination of these?

• Robustness. How can we shield the device and user from transient faults and similar failures (for example, going out of network range) that will likely affect a ubicomp environment?

• Security. What can we let the device or user do? Whom does it trust and how does authentication take place? How can we minimize threats to privacy?

Discovery and interaction

Ubiquitous systems tend to develop accidentally over the medium-to-long term,14 that is, piecemeal, as users integrate new devices into their physical environments and as they adopt new usage models for existing devices. You can't reboot the world, let alone rewrite it, to introduce new functionality. Users shouldn't be led into a disposable physical world just because they can't keep up with software upgrades. All this suggests that ubicomp systems should be incrementally extensible.

Spontaneous interoperation makes shorter-term demands on our components' interoperability. As we defined a spontaneously interoperating system, devices can interact with partners of varying functionality over time. We do not mean arbitrarily varying functionality: For example, it's not clear how a camera could meaningfully associate with a coffee cup. Our challenge is to bring about spontaneous interaction between devices that conform to widespread models of interoperation—as wide as is practicable.

Consider a component that enters an environment (see Figure 1). To satisfy spontaneous interoperation, the component faces these issues:

• Bootstrapping. The component requires a priori knowledge of addresses (for example, multicast or broadcast addresses) and any other parameters needed for network integration and service discovery. This is largely a solved problem. Protocols exist for joining underlying networks, such as IEEE 802.11. You can dynamically assign IP addresses either statefully, using DHCP (dynamic host configuration protocol),15 or statelessly in IPv6.16

• Service discovery. A service discovery system dynamically locates a service instance that matches a component's requirements. It thus solves the association problem: Hundreds or even thousands of devices and components might exist per cubic meter; with which of these, if any, is it appropriate for the arriving component to interact?

• Interaction. Components must conform to a common interoperation model to interact.

Additionally, if the arriving component implements a service, the components in the service's environment might need to discover and interact with it. Service discovery and interaction are generally separable (although the Intentional Naming System17 combines them).
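As a small illustration of the stateless bootstrapping option, IPv6 hosts can derive an interface identifier from their own MAC address (the modified EUI-64 scheme), so no address server is needed at all. The Python sketch below shows the derivation; the helper name is ours.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier used by IPv6
    stateless address autoconfiguration: insert ff:fe in the middle of
    the MAC address and flip the universal/local bit of the first byte."""
    b = [int(x, 16) for x in mac.split(":")]
    b = b[:3] + [0xFF, 0xFE] + b[3:]      # split the OUI from the serial
    b[0] ^= 0x02                           # flip the universal/local bit
    return ":".join(f"{b[i] << 8 | b[i+1]:04x}" for i in range(0, 8, 2))

# A device with this (hypothetical) MAC can form a link-local address
# with no server at all: fe80:: plus the identifier below.
assert eui64_interface_id("00:11:22:33:44:55") == "0211:22ff:fe33:4455"
```

Because the identifier comes from hardware the device already carries, an arriving component can join the network before it has discovered any infrastructure, which is precisely what the bootstrapping step requires.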

Service discovery. Address allocation and name resolution are two examples of local services that an arriving component might need to access. Client components generally require an a priori specification of required services, and corresponding service components must provide similar specifications to be discoverable. Several systems perform service discovery.17–21 Each provides syntax and vocabulary for specifying services. The specification typically consists of attribute–value pairs such as serviceType=printer and type=laser.

A challenge in ubicomp service description is avoiding overspecification. Consider a wireless-enabled camera brought into a home. If the camera contains a printing service description, a user could take a picture and send it from the camera to a matching printer without using a PC. But one issue is brittleness: Both camera and printer must obey the same vocabulary and syntax. Association can't take place if the camera requests serviceType=printing when the printer has service=print. Agreement on such details is a practical obstacle, and it's debatable whether service specification can keep up with service development.
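The brittleness argument can be made concrete with a toy matcher. The Python sketch below is our invention (the attribute names echo the article's example): exact attribute-value matching fails the moment the two sides disagree on vocabulary.

```python
def matches(query: dict, advert: dict) -> bool:
    """Exact attribute-value matching, as typical discovery systems do:
    every attribute the client asks for must appear verbatim in the advert."""
    return all(advert.get(k) == v for k, v in query.items())

# The printer advertises itself in its own vocabulary.
printer_advert = {"service": "print", "type": "laser"}

# The camera asks in a different vocabulary, so association fails,
# even though the right device is sitting on the same network.
assert not matches({"serviceType": "printing"}, printer_advert)

# Only a query phrased in exactly the printer's vocabulary succeeds.
assert matches({"service": "print"}, printer_advert)
```

Nothing in the matcher is wrong; the failure is purely one of agreement on names, which is why standardizing vocabularies is the practical obstacle the text describes.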

Furthermore, lost opportunities for association arise if devices must contain specifications of all services with which they can interact. Suppose our example home contains a digital picture frame. A camera that can send an image for printing should be able to send the image to a digital frame. But, if the camera doesn't have a specification for serviceType=digitalFrame, the opportunity passes.

Other designs, such as Cooltown,21 tried to use more abstract and much less detailed service specifications to get around such problems. For example, a camera could send its images to any device that consumes images of the same encoding (such as JPEG) without needing to know the specific service.

But abstraction can lead to ambiguity. If a camera were to choose a device in the house, it would potentially face many data consumers with no way of discriminating between them. Also, it's not clear how a camera should set the target device's parameters, such as image resolution, without a model of the device. Users, however, tend to be good at associating devices appropriately. So, we observe:
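A toy version of the abstract-matching trade-off, with invented device adverts, shows both sides at once: matching on encoding alone finds partners the camera would otherwise miss, but it cannot discriminate among them.

```python
# Hypothetical adverts for devices in the example home.
adverts = [
    {"name": "laser-printer", "consumes": "image/jpeg"},
    {"name": "digital-frame", "consumes": "image/jpeg"},
    {"name": "stereo",        "consumes": "audio/mpeg"},
]

def consumers_of(encoding: str):
    """Cooltown-style abstract matching: find every device that accepts
    the given encoding, with no model of what the device will do with it."""
    return [a["name"] for a in adverts if a["consumes"] == encoding]

# The camera now discovers the frame it would have missed under
# exact service matching...
candidates = consumers_of("image/jpeg")
assert candidates == ["laser-printer", "digital-frame"]
# ...but it has no basis for choosing between the two candidates, and no
# device model from which to set parameters such as print resolution.
```

The ambiguity left over is exactly the decision the observation below hands to the human.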

System designers must decide the human’srole to resolve tension between interoper-ability and ambiguity.

Additionally, the Boundary Principle tells us that the definition of here is important for meaningful discovery results. For example, a client might not be interested in services in Japan if he has just arrived in a smart house in London. Figure 1 shows various environments with a dotted line marking the boundary around service components that we deem to be in the same environment as the arriving device.

In one approach, clients might sense their current location's name or position and use that in their service specifications to a global discovery service. But most service discovery systems are local, of the network discovery type. They primarily define here as the collection of services that a group, or multicast, communication can reach. The discovery service listens for query and registration messages on a multicast address that all participating components possess a priori. These messages' scope of delivery typically includes one or more local connected subnets, which they can efficiently reach with the underlying network's multicast or broadcast facilities.

The problem here is that the approach puts discovery firmly on the systems side of the semantic Rubicon, but the very simplicity of subnet multicast makes it blind to human issues such as territory or use conventions. It is unlikely, for example, that the devices connected to a particular subnet are in a meaningful place such as a room. Using physically constrained media such as infrared, ultrasound, or 60-GHz radio (which walls substantially attenuate) helps solve territorial issues, but it does not solve other cultural issues. Another alternative is to rely on human supervision to determine the scope of each instance of the discovery service.22
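The network-level definition of here can be sketched in a few lines: two components are "co-located" exactly when their addresses fall in the same subnet, regardless of rooms or walls. The helper below (our illustration, using Python's standard ipaddress module) makes the blindness of that definition visible.

```python
from ipaddress import ip_interface

def same_subnet(a: str, b: str) -> bool:
    """Network-level notion of 'here': two components count as co-located
    when their interface addresses fall in the same connected subnet."""
    return ip_interface(a).network == ip_interface(b).network

# A PDA and a projector on one home subnet count as "here"...
assert same_subnet("192.168.1.20/24", "192.168.1.7/24")

# ...while a projector one wall away, on another subnet, does not,
# even if it stands in the same humanly meaningful room.
assert not same_subnet("192.168.1.20/24", "192.168.2.7/24")
```

The predicate is trivially computable, which is why discovery systems use it; nothing in it knows about territory, ownership, or use conventions, which is why it lands on the wrong side of the semantic Rubicon.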

Interaction. After associating to a service, a component employs a programming interface to use it. If services in a ubicomp environment arbitrarily invented their own interfaces, no arriving component could be expected to access them. Jini lets service-access code and data, in the form of a Java object, migrate to a device. However, how can the software on the device use the downloaded object without a priori knowledge of its methods?

Event systems18,23 or tuple spaces18,24–26 offer alternative approaches. These interactions share two common features: the system interface comprises a few fixed operations, and the interactions are data-oriented.

In event systems, components called publishers publish self-describing data items, called events or event messages, to the local event service. That service matches the data in a published event to specifications that components called subscribers have registered. When an event is published, the event service forwards it to all subscribers with a matching subscription. The only interaction methods an event system uses are publish, subscribe, and handle (that is, handle an event).
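A minimal event service, then, needs only those three operations. The following Python sketch is our own reduction of the idea; the cup event echoes the MediaCup example.

```python
class EventService:
    """Minimal publish/subscribe sketch: the whole interface is
    publish, subscribe, and the subscriber-supplied handler."""

    def __init__(self):
        self._subs = []            # (predicate over event dict, handler)

    def subscribe(self, predicate, handler):
        self._subs.append((predicate, handler))

    def publish(self, event: dict):
        for predicate, handler in self._subs:
            if predicate(event):
                handler(event)     # the subscriber's 'handle' operation

seen = []
svc = EventService()
# A colleague's display subscribes to the smart cup's state changes.
svc.subscribe(lambda e: e.get("device") == "cup",
              lambda e: seen.append(e["state"]))
svc.publish({"device": "cup", "state": "empty"})
svc.publish({"device": "door", "state": "open"})   # no matching subscription
assert seen == ["empty"]
```

Publisher and subscriber never name each other; all the coupling lives in the self-describing event, which is exactly where the consistency burden shifts.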

A tuple space provides a repository of data tuples. Again, any component interacting via a tuple space needs to know only three operations. (A fourth operation, eval, exists in some tuple space systems for process or thread creation.) Components can add tuples to a tuple space, or they can take tuples out or read them without removing them.
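A correspondingly minimal tuple space, again our own sketch, shows the three operations and the pattern matching typical of Linda-style systems (None stands for a wildcard here):

```python
class TupleSpace:
    """Minimal tuple-space sketch: write, read, take.
    In a pattern, None matches any value in that position."""

    def __init__(self):
        self._tuples = []

    def write(self, tup):
        self._tuples.append(tup)

    def _match(self, pattern):
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, tup)):
                return tup
        return None

    def read(self, pattern):
        return self._match(pattern)          # nondestructive

    def take(self, pattern):
        tup = self._match(pattern)
        if tup is not None:
            self._tuples.remove(tup)         # destructive
        return tup

space = TupleSpace()
space.write(("cup-42", "state", "empty"))
assert space.read(("cup-42", "state", None)) == ("cup-42", "state", "empty")
assert space.take(("cup-42", None, None)) == ("cup-42", "state", "empty")
assert space.read(("cup-42", None, None)) is None    # the take removed it
```

As with events, the components share only the space and a convention about tuple shapes; a component that writes tuples nobody else expects simply fails to interact.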

In each case, components interact by sharing a common service (a tuple space or an event service); they don't need direct knowledge of one another. However, these data-oriented systems have shifted the burden of making interaction consistent onto data items—namely, events and tuples. If one component publishes events or adds tuples of which no other component has a priori knowledge, interaction can fail. So, we observe:

Data-oriented interaction is a promising model that has shown its value for spontaneous interaction inside the boundaries of individual environments. It seems to require ubiquitous data standardization, however, to work across environment boundaries.

Adaptation

Several reasons explain why ubicomp implies dealing with limited and dynamically varying computational resources. Embedded devices tend to be small, and limits to miniaturization mean constrained resources. Additionally, for devices that run on batteries, a trade-off exists between battery life and computational prowess or network communication. For extreme examples of ubicomp such as smart dust, these constraints become an order of magnitude more severe than for components such as PDAs, which previous research has considered resource-impoverished.

Furthermore, the available resources tend to vary dynamically. For example, a device that has access to a high-bandwidth wireless network such as IEEE 802.11b in one environment might find itself with only a low-bandwidth wide-area connection in another—a familiar problem in mobile computing. A new challenge for ubiquitous systems arises because adaptation must often take place without human intervention, to achieve what Weiser calls calm computing.27 In this new environment, we examine possible extensions of existing mobile computing techniques: transformation and adaptation for content and the human interface.
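Such unattended adaptation can be as simple as a fidelity ladder keyed to measured bandwidth; the content variants and thresholds below are invented for illustration.

```python
# Fidelity ladder: (minimum kbit/s, variant to serve). Hypothetical values.
LADDER = [
    (1000, "full-video"),
    (100,  "thumbnail-stream"),
    (0,    "text-summary"),
]

def pick_fidelity(measured_kbps: float) -> str:
    """Choose the richest content variant the current link can carry.
    Calm computing means this decision runs with no human in the loop."""
    for floor, variant in LADDER:
        if measured_kbps >= floor:
            return variant
    return "text-summary"

assert pick_fidelity(5500) == "full-video"       # 802.11b-class link
assert pick_fidelity(40) == "text-summary"       # wide-area fallback
```

The interesting design question is not the ladder itself but who maintains it: a per-component ladder is "every component for itself," whereas the Volatility Principle argues for system-wide policy.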

Content. Embedding computation in or linking general-purpose computing devices with the physical world implies heterogeneity across the devices, including devices embedded in and those brought to the environment by users (for example, laptops). Mobile computing research successfully addressed content adaptation for resource-poor devices at both the OS and application levels. Pioneering work such as the Coda file system explored network disconnection and content adaptation within the context of a specific application, namely the file system, handling all such adaptation in the operating system to enable application transparency. Coda's successor, Odyssey,29 and other, later work on adapting Web content for slow networks and small devices30,31 moved some responsibility to the application layer, observing that because the OS is rarely familiar with the semantics of the application's data, it is not always in the best position to make adaptation decisions; this reflected the thinking of application-level framing.32

Adaptation in ubicomp is quantitatively harder. Instead of adapting a small fixed set of content types to a variety of device types (1-to-n), we must potentially adapt content among n heterogeneous device types (n-to-n). Fortunately, the general approaches mobile computing research has explored have immediate applicability. For example, the Stanford Interactive Workspaces' smart clipboard can copy and paste data between incompatible applications on different platforms.33 It builds on well-known mobile computing content adaptation, the main difference being that the smart clipboard must transparently invoke the machinery whenever the user performs a copy-and-paste operation. A more sophisticated but less general approach, semantic snarfing, as implemented in Carnegie Mellon's Pebbles project, captures content from a large display onto a small display and attempts to emulate the content's underlying behaviors.34 For example, snarfing a pop-up menu from a large display onto a handheld lets the user manipulate the menu on her handheld, relaying the menu choice back to the large display. This approach tries harder to do "the right thing," but rather than relying on a generic transformation framework, it requires special-case code for each content type (such as menus and Web page content) as well as special-case client-server code for each pair of desired devices (Windows to Palm, X/Motif to Windows CE). Nonetheless, both approaches demonstrate the applicability of content transformation to the ubicomp environment. In general, we observe:

The content adaptation approaches and technology developed in mobile computing immediately pertain to ubicomp, but they solve only a part of the overall problem. We must still address such issues as discovery and robustness, and we must find a way to apply the adaptation using mechanisms invisible to the user.

The human interface. Earlier, we introduced the commonly quoted ubicomp scenario of a user entering an environment and immediately accessing or controlling various aspects of the environment through her PDA. To achieve such a scenario requires separating user interfaces (UIs) from their applications—a well-understood problem in desktop systems. In ubicomp, however, the goal might be to move the human interface to a physically different device chosen on the fly. Based on previous on-the-fly transformations applied to this problem, we can distinguish four levels of client intelligence. At the lowest level, a client such as VNC35 displays the bits and collects user input, without knowledge of UI widget semantics. At the next level, the client manipulates a higher-level, usually declarative description of the interface elements and has the intelligence to render them itself. X Windows and its low-bandwidth derivatives—for example, LBX (Low-Bandwidth X)—do this, although in these systems, the device displaying the UI runs what is confusingly called a server. Smarter clients also do their own geometry management and some local UI interaction processing, so not every interaction requires a roundtrip to the server (see Figure 3); Tcl/Tk and JavaScript-enhanced Web pages are examples.

A noteworthy variant involves delivering an HTML UI (based on forms and HTML widgets) via HTTP to any device that can host a Web browser, thus removing the need to reconfigure the device for different interfaces. This important special case arises as a result of the Web's phenomenal success. However, HTML provides only limited expressiveness for a UI. Also, HTML and HTTP address delivering, rendering, and letting the user interact with the UI, but they do not address how to determine what belongs in it or how UI elements associate with the applications or devices they control.

JANUARY–MARCH 2002 PERVASIVE computing 75

Figure 3. Spontaneously generated user interfaces to control lights and displays in a meeting room. The same high-level markup was used to generate device-specific UIs for (a) a desktop HTML browser, (b) a Java-equipped device, and (c) a Palm device.

The next level is represented by Todd Hodes and his colleagues,36 who proposed a Tcl/Tk framework in which controllable entities export Tcl-code descriptions, from which a client can generate its UI. The descriptions are declarative and high level ("This control is a Boolean toggle," "This control is a slider with range 0 to 255 in increments of 1"), letting the geometry and the generated interface's visual details vary among platforms. The Stanford Interactive Workspaces project has successfully generalized Hodes's approach: The Interface Crafter framework4 supports a level of indirection of specialized per-device or per-service interface generators and a generic interface transformation facility. In this system, the high-level service descriptions permit adaptation for nontraditional UI modalities such as voice control.
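To make this level of client intelligence concrete, here is a minimal Python sketch, not taken from any of the cited systems, in which a single high-level control description, in the spirit of Hodes's toggles and sliders, is rendered differently for an HTML browser and a text-only client. The `Control` class and renderer names are invented for illustration.

```python
# Hypothetical declarative UI description: each control is described at a
# high level ("boolean toggle", "slider 0..255"), and a per-device generator
# renders it for that device's capabilities.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    kind: str          # "toggle" or "slider"
    lo: int = 0
    hi: int = 1

def render_html(controls):
    """Render the abstract description as HTML form widgets."""
    parts = []
    for c in controls:
        if c.kind == "toggle":
            parts.append(f'<input type="checkbox" name="{c.name}"> {c.name}')
        elif c.kind == "slider":
            parts.append(f'<input type="range" name="{c.name}" '
                         f'min="{c.lo}" max="{c.hi}"> {c.name}')
    return "<form>" + "<br>".join(parts) + "</form>"

def render_text(controls):
    """Render the same description for a text-only client."""
    return "\n".join(
        f"[{c.name}] " + ("on/off" if c.kind == "toggle"
                          else f"value {c.lo}..{c.hi}")
        for c in controls)

room = [Control("ceiling-light", "toggle"),
        Control("dimmer", "slider", 0, 255)]
```

The point is that geometry and visual detail live entirely in the generators, so a new device type only needs a new renderer, not new service descriptions.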

Finally, a fully generalized mobile code facility, such as Jini, lets the client download and execute arbitrary code or pseudocode that implements the interface and communication with the service. Because the protocol between a Jini service and its UI is opaque—embedded in the code or pseudocode—you usually can't enlist a different user agent to interpret and render the UI. You also can't realize the UI on a device not running a Java Virtual Machine or on one with nontraditional I/O characteristics (for example, a voice-controlled or very small glass device). We conclude:

Applying transformation to the UI can be a powerful approach to achieving spontaneous interaction in ubicomp, provided we express the UI in terms that enable extensive manipulation and transformation.

Integration with the physical world

To integrate computing environments (which are virtual by definition) with the physical world, we need low-level application programming interfaces that let software deal with physical sensors and actuators, and a high-level software framework that lets applications sense and interact with their environment, including physical sensors and actuators. Traditional device drivers offer APIs that are too low-level, because application designers usually think of functionality at the level of widgets in standard toolkits such as Tcl/Tk, Visual Basic, and X/Motif. A potentially more useful low-level API is a Phidget,37 a GUI widget element whose state and behaviors correspond to those of a physical sensor or actuator. For example, querying a servomotor Phidget returns the motor's angular position, and setting the Phidget's angular value causes the motor to step to the given position. You can directly incorporate Phidgets into Visual Basic programs using the popular Visual Basic WYSIWYG drag-and-drop UI editor. At a higher level, the Context Toolkit framework38 provides applications with a context widget software abstraction, which lets an application access different types of context information while hiding how the information was sensed or collected. For example, one type of context widget provides information on the presence of a person in a particular location, without exposing how it collected the information. The Context Toolkit's middleware contribution is its ability to expose a uniform abstraction of (for example) location tracking, hiding the details of the sensing system or systems used to collect the information.
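The context-widget idea can be sketched in a few lines of Python; the class and sensor names below are hypothetical illustrations rather than the Context Toolkit's actual API.

```python
# Sketch of a context-widget-style abstraction: applications ask an abstract
# PresenceWidget who is in a room, and the widget hides whether the answer
# came from a badge reader, a camera, or anything else.

class PresenceWidget:
    def __init__(self, sensors):
        self._sensors = sensors            # callables: location -> set of ids

    def who_is_in(self, location):
        """Fuse all underlying sensors into one uniform answer."""
        people = set()
        for sense in self._sensors:
            people |= sense(location)
        return people

# Two toy "sensors" backed by different (pretend) technologies.
badge_reader = lambda loc: {"alice"} if loc == "room-101" else set()
camera_feed  = lambda loc: {"bob"}   if loc == "room-101" else set()

widget = PresenceWidget([badge_reader, camera_feed])
```

Swapping a badge reader for an ultrasound tracker changes only the sensor list, not any application that queries the widget, which is exactly the uniformity the toolkit aims for.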

Location sensing and tracking figure prominently in several projects. The early Active Badge work39 and the more recent Sentient Computing effort40 use infrared, radio, and ultrasound to track physical objects that users carry or wear. Knowing the users' locations and identities enables location tracking and novel behaviors for existing devices, such as a camera that knows who it's photographing. On the other hand, EasyLiving41 and the MIT Intelligent Room42 use location tracking as one of several elements that help infer or disambiguate a user action. For example, if a user requests "Show me the agenda," the system can respond in a context-sensitive manner if it can determine the user's identity and location. Such approaches lead to potentially impressive applications, although they should make it clear on which side of the semantic Rubicon decisions are being made at the point of integration with the user's physical world.

Several groups have investigated ways of linking physical entities directly to services using identification technologies. For example, Roy Want and his colleagues at Xerox PARC43 augmented books and documents by attaching radio frequency identification tags to them and presenting the electronic version to users who scanned them with handheld devices. Jun Rekimoto and Yuji Ayatsuka44 attached symbols specially designed for digital camera capture to physical objects, to link them to electronic services. Hewlett-Packard Labs' Cooltown project focuses on associating Web resources (Web presences) with physical entities. For example, a visitor to a museum can obtain related Web pages automatically over a wireless connection by reading identifiers associated with the exhibits. The identifier might come from a barcode or a short-range infrared beacon. Although the Cooltown infrastructure has primarily targeted direct human interaction, work is in progress on interoperation between Web presences using XML over HTTP to export query and update operations.45
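The essence of such physical linking is a resolver from sensed identifiers to service endpoints. A caricature in Python, with registry contents and the function name invented purely for illustration:

```python
# Hypothetical identifier-to-web-presence registry: a short identifier read
# from a barcode or infrared beacon resolves to a URL for the entity's
# "web presence". Real systems would use a networked resolution service.

PRESENCE_REGISTRY = {
    "exhibit-042": "http://museum.example/exhibits/sunflowers",
    "printer-3f":  "http://building.example/printers/3f",
}

def resolve(identifier):
    """Map a sensed physical identifier to its web presence, if known."""
    return PRESENCE_REGISTRY.get(identifier)
```

Everything interesting, of course, lies in who maintains the registry and across which administrative boundaries resolution works.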

Programming frameworks

What does it mean to write "Hello World" for a ubicomp environment? A framework for programming such environments might address the ubicomp challenges in applications, the operating system or middleware, and the development language. Some environments might use combinations of these.

REACHING FOR WEISER'S VISION

Support for legacy applications and commodity OSs has always been a systems research challenge. Although we have stressed the need for incremental extensibility in any new system, legacy support also implies accommodating existing applications and OSs. It's important to leverage existing applications because of large investments in the applications, their data, and knowledge of how to use them. It's important to leverage OSs without modification because OSs are fragile, most users are reluctant or insufficiently sophisticated to apply OS patches, and OS functionality is generally regarded as beyond application writers' control. (This latter point can be especially true in ubicomp, as seen in specialized embedded OSs such as the TinyOS used in smart dust.46) Although some researchers choose to design entirely new systems at the expense of legacy support, we do not always have this option in ubicomp. The Volatility Principle tells us that ubicomp environments change incrementally and continuously over time: today's new systems are tomorrow's legacy systems, and given innovation's current pace, legacy systems are accreting faster than ever. Therefore, our approaches should include support for legacy systems. The Stanford Interactive Workspaces project has taken this approach, providing only an application coordination mechanism via a tuple space26 and delegating state storage, sensor fusion for context determination, and so forth to other components at the application level. This results in more loosely coupled pieces, but makes the basic infrastructure easy to port to new and legacy devices, encouraging rapid device integration as an aspect of incremental evolution.

Conversely, the MIT Intelligent Room project's Metaglue framework provides Java-specific extensions for freezing object state in a persistent database and expressing functional connections between components. (For example, you can express that one component relies on the functionality of another, and the system provides resource brokering and management to satisfy the dependency at runtime.) In Metaglue, the component coordination system is also the language and system for building the applications. The researchers chose this approach, which forgoes legacy support, as a better fit for the project's focus—namely, applying AI techniques to determine the intent of a user's action (for example, by sensing identity via image analysis from a video feed and position relative to landmarks in the room). Although AI techniques can make valuable contributions, such work must clearly define on which side of the semantic Rubicon a particular functionality resides. An important question will be whether the Metaglue framework is sufficiently expressive and provides abstractions at the appropriate level to facilitate this.

Other places to provide programming support for addressing the ubicomp challenges are middleware and the OS. When mobile computing faced similar challenges in breaking away from the "one desktop machine per fixed user" model of computing, researchers introduced middleware as one approach. We define middleware as services provided by a layer in between the operating system and the applications. Middleware usually requires only minimal changes to existing applications and OSs (hence its name). The ActiveSpaces project at the University of Illinois, Urbana-Champaign, is developing a large framework, Gaia,47 based on distributed objects and a software object bus that can connect objects from different frameworks (Corba, DCOM, Enterprise JavaBeans). Gaia, a middleware OS, views an ActiveSpace and its devices as analogous to a traditional OS with the resources and peripherals it manages. The framework thus provides more tightly integrated facilities for sensor fusion, quality-of-service-aware resource management, code distribution, adaptive and distributed rendering, and security. The ActiveSpaces project chooses to deemphasize extensive support for existing systems and devices.

Robustness and routine failures

Ubiquitous systems, especially those using wireless networking, see a radical increase in "failure" frequency compared to a wired distributed system. Some of these failures are not literal failures but unpredictable events from which it is similarly complicated to recover. For example, physical integration often implies using batteries with a relatively short mean time to failure due to battery exhaustion. Wireless networks, with limited range and prone to interference from nearby structures, afford much less reliable communication than wireline networks. In spontaneous interoperation, associations are sometimes gained and lost unpredictably as, for example, when a device suddenly leaves an environment.

Although the distributed systems literature contains techniques for fault-tolerant computing, they are often based on resource redundancy rather than scarcity. Furthermore, the common assumption in distributed systems that failures are relatively rare could lead to expensive or ungraceful recovery, contrary to the expectation of calm behavior for ubiquitous systems.

Two widely accepted truths about dependability are that you can't add it to a system after the fact but must design it in,48 and that it is ultimately an end-to-end property.49 Design choices made at lower layers of a system can either facilitate or hinder the sound engineering of dependability at higher system layers or in the project's later stages.

In ubicomp systems, various transient failures can actually characterize steady-state operation. The Internet protocol community and the distributed systems community have considered such scenarios. We look to both for techniques specifically designed to function under the assumption that "failure is a common case." The common tie is the underlying assumption that recovering from frequent transient failures basically entails being always prepared to reacquire lost resources.

Expiration-based schemes and soft state. Consider the PointRight system,50 in which a single mouse and keyboard can control a collection of otherwise-independent displays in a ubicomp environment. When a user enters or leaves a ubicomp environment that has PointRight, the user's device could register with a centralized directory that tracks the room's screen geometry. But this directory is a single point of failure. If it fails and has not saved its data to stable storage, it must notify all devices to reregister when the directory comes back up. Furthermore, if a device dies or fails to deregister before leaving the environment, it creates inconsistency between the directory contents and the environment's true state.

An alternative solution requires each available device or service to send a periodic advertisement announcing its presence or availability to the directory service, which collects these advertisements and expires them when a new advertisement arrives or after a designated timeout period. Should the directory service fail, new advertisements will repopulate it when it restarts; hence, the directory itself constitutes soft state. Should a device fail, it will stop sending advertisements, and once the device's previous advertisement has expired, the directory will no longer list the device.

A detailed analysis of how to choose a timeout to balance the resulting "window of potential inconsistency" with the network and computation capacity consumed by advertisements is available elsewhere.51
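A minimal sketch of such an expiration-based directory, with invented names and an explicit clock parameter so the timeout behavior is visible:

```python
# Soft-state directory sketch: devices send periodic advertisements; the
# directory drops any entry whose advertisement has not been refreshed within
# the timeout. A crashed directory is rebuilt by the next round of adverts.
import time

class SoftStateDirectory:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self._entries = {}                 # name -> (info, last_heard)

    def advertise(self, name, info, now=None):
        """Record or refresh a device's advertisement."""
        self._entries[name] = (info, now if now is not None else time.time())

    def lookup(self, now=None):
        """Return live entries, expiring any whose timeout has passed."""
        now = now if now is not None else time.time()
        self._entries = {n: (i, t) for n, (i, t) in self._entries.items()
                         if now - t < self.timeout}
        return {n: i for n, (i, t) in self._entries.items()}

d = SoftStateDirectory(timeout=30.0)
d.advertise("projector", {"host": "10.0.0.7"}, now=0.0)
```

Shorter timeouts shrink the window of inconsistency but cost more advertisement traffic, which is precisely the trade-off analyzed in the work cited above.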

Of course, it might be more practical to store nonchanging information such as physical room geometry in a fixed database for reliability and high performance. Microsoft takes this approach for describing the geometry of EasyLiving-aware rooms. We propose the following suggestion for practitioners:

When failure is a common case, identify what critical static or dynamic state you must persistently store and consider reconstructable soft state for the rest.

Separating failure-free and failure-prone operations. In a dynamically changing environment, some operations can fail while others must necessarily succeed. The one.world middleware separates application code into operations, which can fail, and logic, which does not fail except under catastrophic circumstances.23 Generally, an operation is anything that might require allocating or accessing a potentially nonlocal resource whose availability is dynamic, such as file or network I/O. For robustness, a process must always be prepared to rediscover and reacquire lost resources. For example, opening a file on a remote server creates a binding between the local filehandle and the remote file. If the application thread is migrated to another host, the file might still be available on the server, but you'll need to establish a new binding on the new host. The application must provide code that deals with such conditions when doing operations that can fail. one.world automatically provides for some common cases, such as retrying idempotent operations a finite number of times. So, we observe:

Because not all operations are equally likely to fail, clearly indicate which ones are more likely to fail because of dynamism and required spontaneous interoperation, so developers can provide more effective error handling.
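The operation/logic split and the bounded retry of idempotent operations might be sketched like this; the decorator below is a hypothetical illustration, not one.world's actual API:

```python
# Sketch of separating failure-prone "operations" from ordinary "logic":
# a decorator marks an action as failure-prone and retries it a bounded
# number of times. Retrying is safe only for idempotent actions.

def operation(retries=3):
    """Decorator marking a failure-prone, idempotent action."""
    def wrap(fn):
        def run(*args, **kwargs):
            last = None
            for _ in range(retries):
                try:
                    return fn(*args, **kwargs)
                except IOError as e:       # e.g., a lost network/file binding
                    last = e               # caller would reacquire, then retry
            raise last
        return run
    return wrap

attempts = {"n": 0}

@operation(retries=3)
def fetch_remote_file():
    """Pretend remote read that succeeds on the third attempt."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("binding lost")      # transient failure
    return b"contents"
```

Code not marked as an operation is assumed failure-free, so error handling concentrates where dynamism actually bites.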

Group communication for "free" indirection. You can use group communication to provide a level of indirection that helps rediscover lost resources. Although approaches that rely on group communication have been criticized for their poor scaling, in ubicomp, we might be able to exploit the Boundary Principle. For example, a meeting room only accommodates a finite number of persons, because effective meetings must necessarily be limited in size; a corporate campus has a security and administrative boundary that separates it from the rest of the world; and so on. In fact, systems that aim to provide location-dependent services might be thwarted by the reverse scalability problem: wireless group communication's typical scope often exceeds the size of the locale in which the service will operate. So, we observe:

Although ubicomp projects should certainly address scalability, the Boundary Principle can cut both ways. Sometimes observing the principle helps, by enabling solutions that favor robustness over scalability. Other times it introduces complications, as when a physically range-limited networking technology still fails to capture other important (cultural or administrative) boundaries.

Security

We described how components enter into spontaneous associations in a ubiquitous system and interact with the components they discover. However, how do we protect components from one another? Certain services in a smart room, for example, need protection from devices belonging to visitors. Similarly, data and devices brought into an environment, such as users' PDAs or particles of smart dust, require protection from hostile devices nearby.

Looked at from a higher level, users require security for their resources and, in many cases, privacy for themselves.52

Mobile computing has led to an understanding of vulnerabilities such as the openness of wireless networks.53 But physical integration and spontaneous interoperation raise new challenges, requiring new models of trust and authentication as well as new technologies.

Trust. In ubiquitous systems, trust is an issue because of spontaneous interoperation. What basis of trust can exist between components that can associate spontaneously? Components might belong to disparate individuals or organizations and have no relevant a priori knowledge of one another or a trusted third party. Fortunately, physical integration can work to our advantage—at least with an appropriate placement of the semantic Rubicon. Humans can make judgments about their environments' trustworthiness,7 and the physical world offers mechanisms for bootstrapping security based on that trust. For example, users might exchange cryptographic keys with their environment or each other over a physically constrained channel such as short-range infrared, once they have established trust.

Security for resource-poor devices. Physical integration impacts security protocols. Particles of smart dust and other extremely resource-poor devices do not have sufficient computing resources for asymmetric (public key) encryption—even when using elliptic curve cryptography54—and protocols must minimize communication overheads to preserve battery life. For example, Spins (security protocols for sensor networks) provides security guarantees for the data that smart dust particles exchange in a potentially hostile environment.55 The Spins protocols use only symmetric-key cryptography, which, unlike asymmetric-key cryptography, works on very low-power devices. But this does not address what has been called the "sleep deprivation torture attack" on battery-powered nodes:56 an attacker can always deny service by jamming the wireless network with a signal that rapidly causes the devices to run down their batteries.


Access control. Another problem in ubicomp systems is basing access control on authenticating users' identities. Physical integration makes this awkward for users because knowing an identified user's whereabouts raises privacy issues. Spontaneous interoperation makes it problematic for environments, which have to integrate a stream of users and devices that can spontaneously appear and disappear. It can be advantageous to issue time-limited capabilities to devices that have spontaneously appeared in an environment, rather than setting up access control lists. Capabilities could be issued on the basis of dynamically established trust such as we described earlier and used without explicitly disclosing an identity. Also, they reduce the need for system-wide configuration and avoid communication with a central server.19 Because capabilities resemble physical keys, a security component can enable, for example, a visitor's PDA to use a soft-drinks machine without communicating with that machine.
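One plausible, hypothetical realization of such time-limited, identity-free capabilities is an HMAC-sealed token naming a resource and an expiry; nothing here is drawn from the cited systems.

```python
# Sketch of a time-limited capability: the environment issues a token over
# a secret the visiting device never sees; any service sharing the secret
# can check the token offline, with no identity and no central server.
import hashlib
import hmac

SECRET = b"environment-secret"            # held by the environment's services

def issue_capability(resource, ttl, now):
    """Mint a capability for `resource`, valid for `ttl` seconds from `now`."""
    expiry = now + ttl
    msg = f"{resource}|{expiry}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return (resource, expiry, tag)

def check_capability(cap, resource, now):
    """Verify the seal, the named resource, and the expiry."""
    r, expiry, tag = cap
    msg = f"{r}|{expiry}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return r == resource and now < expiry and hmac.compare_digest(tag, good)

cap = issue_capability("drinks-machine", ttl=3600, now=0.0)
```

Like a physical key, the token itself proves the right of access; revocation happens implicitly when the expiry passes.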

Location. Physical integration has several implications for security through location. Customizing services to a location can result in a loss of privacy for identified individuals. However, location-aware computing doesn't have to require user tracking. Users, rather than the system, can establish their own locations.57 Moreover, even if the system learns the user's location, it does not necessarily follow that it can correlate that location with any personal data. In a ubicomp system, locations can be tied to temporary identifiers and keys, and some cases require no identity criteria. In location authentication, the system establishes a component's physical location, for example, as a criterion for providing services. So, an Internet cafe might provide a walk-up-and-print service to anyone who is on the cafe's premises. It doesn't matter who they are; the cafe cares only about where they are. Physically constrained channels, such as ultrasound, infrared, and short-range radio transmission, particularly at highly attenuated frequencies, have enabled work on protocols that prove whether clients are where they claim to be.58,59

New patterns of resource-sharing. We commonly use intranet architectures to protect resources, using a firewall that cleanly separates resources inside and outside an organization. Mobile ambients,5 a model of firewalled environments that components can cross in certain circumstances, more closely address mobile computing requirements than those of ubicomp. Physical integration works against the firewall model because it entails sharing "home" devices with visitors. Spontaneous interaction makes extranet technologies unsuitable, because developers intend them for long-term relationships with outside users. Work on securing discovery services19 and access to individual services60 tends to assume the opposite of the firewall model: that locals and visitors go through the same security interface to access resources, although the former might require more convenient access.

Evidently, a more convenient and fine-grained model of protected sharing is required. As an example, Weiser envisioned devices you can use temporarily as personal devices and then return for others to use. The "Resurrecting Duckling" paper suggests how a user might do this.56 The user imprints the device by transmitting a symmetric key to it over a physically secure channel, such as by direct contact. Once imprinted, the device only obeys commands from other components that prove possession of the same key until it receives instructions to return to the imprintable state.
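The imprinting policy can be sketched as a tiny state machine; all names are invented, and the paper's actual protocol involves considerably more than shown here.

```python
# "Resurrecting Duckling" sketch: an imprintable ("dead") device accepts the
# first key offered over a physically secure channel, then obeys only
# commands proving possession of that key until told to revert.

class DucklingDevice:
    def __init__(self):
        self._key = None                   # None means imprintable state

    def imprint(self, key):
        """Accept a key only while in the imprintable state."""
        if self._key is None:
            self._key = key
            return True
        return False

    def command(self, key, action):
        """Obey only commands accompanied by the imprinted key."""
        if self._key is not None and key == self._key:
            if action == "reset":
                self._key = None           # resurrection: imprintable again
                return "imprintable"
            return f"done: {action}"
        return "refused"

d = DucklingDevice()
d.imprint(b"owner-key")
```

The "resurrection" step is what lets a borrowed device be handed back and safely re-imprinted by its next user.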

Clearly, much more research is needed. Consider the common scenario of a user who walks into an environment for the first time and "finds" and uses a printer. Even this seemingly easy scenario raises subtle issues. How do you know if it's culturally OK to use a particular printer? (It might be for the boss's use, even though it's in a shared space on a shared network.) Which printers are conveniently located? How should we define ontologies for such things as "color printer"?

We are not aware of a de facto solution to the printer example. At least, no solutions exist that we can use out of the lab. Too often, we only investigate interoperability mechanisms within environments instead of looking at truly spontaneous interoperation between environments. We suggest exploring content-oriented programming—data-oriented programming using content that is standard across boundaries—as a promising way forward. The Web demonstrates the effectiveness of a system that enables content standards to evolve, while users invoke processing through a small, fixed interface.

In some cases, the research community isn't realistic enough about either physical-world semantics' complexity or known mechanisms' inadequacy to make decisions when responding to exigencies, as in adaptation and failure recovery. Some problems routinely put forward are actually AI-hard. For example, similar to the Turing test for emulating human discourse, we could imagine a meeting test for context-aware systems: can a system accurately determine, as compared with our human sensibilities, when a meeting is in session in a given room?

To make progress, ubicomp research should concentrate on the case where a human is supervising well-defined ubicomp aspects—for example, by assisting in appropriate association or by making a judgment about a system's boundary. Once the ubicomp community can get that right—and we believe that human-supervised operation represents as much as a 75 percent solution for ubicomp—it will be time to push certain responsibilities across the semantic Rubicon, back toward the system.

By the way, has anyone seen a printer around here?

ACKNOWLEDGMENTS

We thank Gregory Abowd, Nigel Davies, Mahadev Satyanarayanan, Mirjana Spasojevic, Roy Want, and the anonymous reviewers for their helpful feedback on earlier drafts of this article. Thanks also to John Barton, who suggested what became the Volatility Principle.

REFERENCES

1. M. Weiser, "The Computer for the 21st Century," Scientific American, vol. 265, no. 3, Sept. 1991, pp. 94–104 (reprinted in this issue, see pp. 19–25).

2. M. Beigl, H.-W. Gellersen, and A. Schmidt, "MediaCups: Experience with Design and Use of Computer-Augmented Everyday Objects," Computer Networks, vol. 35, no. 4, Mar. 2001, pp. 401–409.

3. G.D. Abowd, "Classroom 2000: An Experiment with the Instrumentation of a Living Educational Environment," IBM Systems J., vol. 38, no. 4, Oct. 1999, pp. 508–530.

4. S.R. Ponnekanti et al., "ICrafter: A Service Framework for Ubiquitous Computing Environments," Ubicomp 2001: Ubiquitous Computing, Lecture Notes in Computer Science, vol. 2201, Springer-Verlag, Berlin, 2001, pp. 56–75.

5. L. Cardelli and A.D. Gordon, "Mobile Ambients," Foundations of Software Science and Computation Structures, Lecture Notes in Computer Science, vol. 1378, Springer-Verlag, Berlin, 1998, pp. 140–155.

6. C.E. Perkins, ed., Ad Hoc Networking, Addison-Wesley, Reading, Mass., 2001.

7. L. Feeney, B. Ahlgren, and A. Westerlund, "Spontaneous Networking: An Application-Oriented Approach to Ad Hoc Networking," IEEE Comm. Magazine, vol. 39, no. 6, June 2001, pp. 176–181.

8. J.M. Kahn, R.H. Katz, and K.S.J. Pister, "Mobile Networking for Smart Dust," Proc. Int'l Conf. Mobile Computing and Networking (MobiCom 99), ACM Press, New York, 1999, pp. 271–278.

9. R. Milner, Communicating and Mobile Systems: The π-Calculus, Cambridge Univ. Press, Cambridge, UK, 1999.

10. S. Björk et al., "Pirates! Using the Physical World as a Game Board," Proc. Conf. Human-Computer Interaction (Interact 01), IOS Press, Amsterdam, 2001, pp. 9–13.

11. G. Borriello and R. Want, "Embedded Computation Meets the World Wide Web," Comm. ACM, vol. 43, no. 5, May 2000, pp. 59–66.

12. T. Kindberg et al., "People, Places, Things: Web Presence for the Real World," Proc. 3rd IEEE Workshop Mobile Computing Systems and Applications (WMCSA 2000), IEEE CS Press, Los Alamitos, Calif., 2000, pp. 19–28.

13. S. Harrison and P. Dourish, "Re-Place-ing Space: The Roles of Place and Space in Collaborative Systems," Proc. ACM Conf. Computer-Supported Cooperative Work (CSCW 96), ACM Press, New York, 1996, pp. 67–76.

14. W.K. Edwards and R.E. Grinter, "At Home with Ubiquitous Computing: Seven Challenges," Ubicomp 2001: Ubiquitous Computing, Lecture Notes in Computer Science, vol. 2201, Springer-Verlag, Berlin, 2001, pp. 256–272.

15. R. Droms, Dynamic Host Configuration Protocol, RFC 2131, Internet Engineering Task Force, www.ietf.org.

16. S. Thomson and T. Narten, IPv6 Stateless Address Autoconfiguration, RFC 1971, Internet Engineering Task Force, www.ietf.org.

17. W. Adjie-Winoto et al., "The Design and Implementation of an Intentional Naming System," Proc. 17th ACM Symp. Operating System Principles (SOSP 99), ACM Press, New York, 1999, pp. 186–201.

18. K. Arnold et al., The Jini Specification, Addison-Wesley, Reading, Mass., 1999.

19. S.E. Czerwinski et al., "An Architecture for a Secure Service Discovery Service," Proc. 5th Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networks (MobiCom 99), ACM Press, New York, 1999, pp. 24–35.

20. E. Guttman, "Service Location Protocol: Automatic Discovery of IP Network Services," IEEE Internet Computing, vol. 3, no. 4, July/Aug. 1999, pp. 71–80.

21. T. Kindberg and J. Barton, "A Web-Based Nomadic Computing System," Computer Networks, vol. 35, no. 4, Mar. 2001, pp. 443–456.

22. J. Barton, T. Kindberg, and S. Sadalgi, "Physical Registration: Configuring Electronic Directories Using Handheld Devices," to be published in IEEE Wireless Comm., vol. 9, no. 1, Feb. 2002; tech. report HPL-2001-119, Hewlett-Packard, Palo Alto, Calif., 2001.

23. R. Grimm et al., "Systems Directions for Pervasive Computing," Proc. 8th Workshop Hot Topics in Operating Systems (HotOS VIII), 2001, pp. 128–132.

24. N. Carriero and D. Gelernter, "Linda in Context," Comm. ACM, vol. 32, no. 4, Apr. 1989, pp. 444–458.

25. N. Davies et al., "Limbo: A Tuple Space Based Platform for Adaptive Mobile Applications," Proc. Joint Int'l Conf. Open Distributed Processing and Distributed Platforms (ICODP/ICDP 97), Chapman and Hall, Boca Raton, Fla., 1997.

26. B. Johanson and A. Fox, "Tuplespaces as Coordination Infrastructure for Interactive Workspaces," Proc. Workshop Application Models and Programming Tools for Ubiquitous Computing (UbiTools 01), 2001, http://choices.cs.uiuc.edu/UbiTools01/pub/08-fox.pdf.

27. M. Weiser and J. Brown, "Designing Calm Technology," PowerGrid J., vol. 1, 1996.

28. J.J. Kistler and M. Satyanarayanan, "Disconnected Operation in the Coda File System," ACM Trans. Computer Systems, vol. 10, no. 1, Feb. 1992, pp. 3–25.

29. B.D. Noble, M. Price, and M. Satyanarayanan, "A Programming Interface for Application-Aware Adaptation in Mobile Computing," Proc. USENIX Symp. Mobile and Location-Independent Computing, USENIX, Berkeley, Calif., 1995.

30. B. Zenel and D. Duchamp, "A General Purpose Proxy Filtering Mechanism Applied to the Mobile Environment," Proc. 3rd Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networking, ACM Press, New York, 1997, pp. 248–259.

31. A. Fox et al., "Adapting to Network and Client Variation Using Active Proxies: Lessons and Perspectives," IEEE Personal Comm., Aug. 1998, pp. 10–19.

32. D.D. Clark and D.L. Tennenhouse, "Architectural Considerations for a New Generation of Protocols," Computer Comm. Rev., vol. 20, no. 4, Sept. 1990.

33. E. Kiciman and A. Fox, "Using Dynamic Mediation to Integrate COTS Entities in a Ubiquitous Computing Environment," Proc. 2nd Int'l Symp. Handheld and Ubiquitous Computing (HUC2K), Lecture Notes in Computer Science, vol. 1927, Springer-Verlag, Berlin, 2000, pp. 211–226.

34. B.A. Myers et al., "Interacting at a Distance Using Semantic Snarfing," Ubicomp 2001: Ubiquitous Computing, Lecture Notes in Computer Science, vol. 2201, Springer-Verlag, Berlin, 2001, pp. 305–314.

35. T. Richardson et al., "Virtual Network Computing," IEEE Internet Computing, vol. 2, no. 1, Jan./Feb. 1998, pp. 33–38.

36. T.D. Hodes et al., "Composable Ad Hoc Mobile Services for Universal Interaction," Proc. 3rd Int'l Symp. Mobile Computing and Networking (MobiCom 97), ACM Press, New York, 1997, pp. 1–12.

37. C. Fitchett and S. Greenberg, "The Phidget Architecture: Rapid Development of Physical User Interfaces," Proc. Workshop Application Models and Programming Tools for Ubiquitous Computing (UbiTools 01), 2001; http://choices.cs.uiuc.edu/UbiTools01/pub/17-fitchett.pdf.

38. D. Salber, A.K. Dey, and G.D. Abowd, "The Context Toolkit: Aiding the Development of Context-Enabled Applications," Proc. ACM SIGCHI Conf. Human Factors in Computing Systems (CHI 99), ACM Press, New York, 1999, pp. 434–441.

39. R. Want et al., "The Active Badge Location System," ACM Trans. Information Systems, vol. 10, no. 1, Jan. 1992, pp. 91–102.

40. M. Addlesee et al., "Implementing a Sentient Computing System," Computer, vol. 34, no. 8, Aug. 2001, pp. 50–56.

41. B. Brumitt et al., "EasyLiving: Technologies for Intelligent Environments," Proc. 2nd Int'l Symp. Handheld and Ubiquitous Computing (HUC2K), Lecture Notes in Computer Science, vol. 1927, Springer-Verlag, Berlin, 2000, pp. 12–29.

42. M. Coen et al., "Meeting the Computational Needs of Intelligent Environments: The Metaglue System," Proc. 1st Int'l Workshop Managing Interactions in Smart Environments (MANSE 99), Springer-Verlag, Berlin, 1999.

43. R. Want et al., "Bridging Physical and Virtual Worlds with Electronic Tags," Proc. 1999 Conf. Human Factors in Computing Systems (CHI 99), ACM Press, New York, pp. 370–377.

44. J. Rekimoto and Y. Ayatsuka, "CyberCode: Designing Augmented Reality Environments with Visual Tags," Proc. Designing Augmented Reality Environments, ACM Press, New York, 2000, pp. 1–10.

45. D. Caswell and P. Debaty, "Creating Web Representations for Places," Proc. 2nd Int'l Symp. Handheld and Ubiquitous Computing (HUC2K), Lecture Notes in Computer Science, vol. 1927, Springer-Verlag, Berlin, 2000, pp. 114–126.

46. D.E. Culler et al., "A Network-Centric Approach to Embedded Software for Tiny Devices," Proc. DARPA Workshop Embedded Software, 2001.

47. M. Roman and R.H. Campbell, "GAIA: Enabling Active Spaces," Proc. 9th ACM SIGOPS European Workshop, ACM Press, New York, 2000.

48. B. Lampson, "Hints for Computer System Design," ACM Operating Systems Rev., vol. 15, no. 5, Oct. 1983, pp. 33–48.

49. J.H. Saltzer, D.P. Reed, and D.D. Clark, "End-to-End Arguments in System Design," ACM Trans. Computer Systems, vol. 2, no. 4, Nov. 1984, pp. 277–288.

50. B. Johanson, G. Hutchins, and T. Winograd, PointRight: A System for Pointer/Keyboard Redirection among Multiple Displays and Machines, tech. report CS-2000-03, Computer Science Dept., Stanford Univ., Stanford, Calif., 2000.

51. S. Raman and S. McCanne, "A Model, Analysis, and Protocol Framework for Soft State-Based Communication," Proc. Special Interest Group on Data Communication (SIGCOMM 99), ACM Press, New York, 1999, pp. 15–25.

52. M. Langheinrich, "Privacy by Design: Principles of Privacy-Aware Ubiquitous Systems," Ubicomp 2001: Ubiquitous Computing, Lecture Notes in Computer Science, vol. 2201, Springer-Verlag, Berlin, 2001, pp. 273–291.

53. N. Borisov, I. Goldberg, and D. Wagner, "Intercepting Mobile Communications: The Insecurity of 802.11," Proc. 7th Ann. Int'l Conf. Mobile Computing and Networks (MobiCom 2001), ACM Press, New York, 2001, pp. 180–188.

54. N. Smart, I.F. Blake, and G. Seroussi, Elliptic Curves in Cryptography, Cambridge Univ. Press, Cambridge, UK, 1999.

55. A. Perrig et al., "SPINS: Security Protocols for Sensor Networks," Proc. 7th Ann. Int'l Conf. Mobile Computing and Networks (MobiCom 2001), ACM Press, New York, 2001, pp. 189–199.

56. F. Stajano and R. Anderson, "The Resurrecting Duckling: Security Issues for Ad Hoc Wireless Networks," Security Protocols, Lecture Notes in Computer Science, vol. 1796, Springer-Verlag, Berlin, 1999, pp. 172–194.

57. N.B. Priyantha, A. Chakraborty, and H. Balakrishnan, "The Cricket Location-Support System," Proc. 6th Ann. Int'l Conf. Mobile Computing and Networks (MobiCom 2000), ACM Press, New York, 2000, pp. 32–43.

58. E. Gabber and A. Wool, "How to Prove Where You Are," Proc. 5th ACM Conf. Computer and Comm. Security, ACM Press, New York, 1998, pp. 142–149.

59. T. Kindberg and K. Zhang, "Context Authentication Using Constrained Channels," tech. report HPL-2001-84, Hewlett-Packard Laboratories, Palo Alto, Calif., 2001.

60. P. Eronen and P. Nikander, "Decentralized Jini Security," Proc. Network and Distributed System Security 2001 (NDSS 2001), The Internet Soc., Reston, Va., 2001, pp. 161–172.

For more information on this or any other computing topic, please visit our Digital Library at http://computer.org/publications/dlib.

JANUARY–MARCH 2002 PERVASIVE computing 81

the AUTHORS

Tim Kindberg is a senior researcher at Hewlett-Packard Labs, Palo Alto, where he works on nomadic computing systems as part of the Cooltown program. His research interests include ubiquitous computing systems, distributed systems, and human factors. He has a BA in mathematics from the University of Cambridge and a PhD in computer science from the University of Westminster. He is coauthor of Distributed Systems: Concepts & Design (Addison-Wesley, 2001). Contact him at Mobile Systems and Services Lab, Hewlett-Packard Labs, 1501 Page Mill Rd., MS 1138, Palo Alto, CA 94304-1126; [email protected]; www.champignon.net/TimKindberg.

Armando Fox is an assistant professor at Stanford University. His research interests include the design of robust software infrastructure for Internet services and ubiquitous computing, and user interface issues related to mobile and ubiquitous computing. He received a BS in electrical engineering from the Massachusetts Institute of Technology, an MS in electrical engineering from the University of Illinois, and a PhD in computer science from the University of California, Berkeley, where he was a researcher in the Daedalus wireless and mobile computing project. He is an ACM member and a founder of ProxiNet (now a division of PumaTech), which is commercializing thin-client mobile computing technology developed at UC Berkeley. Contact him at 446 Gates Bldg., 4-A, Stanford Univ., Stanford, CA 94305-9040; [email protected]; http://swig.stanford.edu/~fox.