Designing Disclosure: Interactive Personal Privacy at the Dawn of Ubiquitous Computing

Scott Lederer

M.S. Report
Computer Science Division
University of California at Berkeley
December 2003





Although the size of the step a human being can take in comprehension, innovation, or execution is small in comparison to the over-all size of the step needed to solve a complex problem, human beings nevertheless do solve complex problems. It is the augmentation means that serve to break down a large problem in such a way that the human being can walk through it with his little steps…

--D.C. Engelbart, 1962


Table of Contents


ACKNOWLEDGEMENTS

1 INTRODUCTION

1.1 UBIQUITOUS COMPUTING
1.2 UBIQUITOUS COMPUTING’S PRIVACY IMPLICATIONS
1.3 PERSONAL PRIVACY
1.4 A PREVIEW OF THIS REPORT
1.5 SUMMARY

PART ONE: ARTICULATING PRIVACY

2 BACKGROUND AND RELATED WORK

2.1 HCI-RELATED RESEARCH FINDINGS
2.2 HCI-RELATED SYSTEMS
2.3 SUMMARY

3 DECONSTRUCTING THE PRIVACY SPACE

3.1 OTHER DECONSTRUCTIONS OF THE PRIVACY SPACE
3.2 DIMENSIONS OF THE PRIVACY SPACE
3.3 CLASSIFYING EXISTING PRIVACY PHENOMENA
3.4 CLASSIFYING EXISTING PRIVACY-AFFECTING SYSTEMS
3.5 SUMMARY

PART TWO: INTERACTION SUPPORT FOR PRIVACY IN UBIQUITOUS COMPUTING

4 FORMATIVE INQUIRIES

4.1 EVERYDAY PRIVACY IN UBIQUITOUS COMPUTING
4.2 SUBJECTIVE FACTORS OF PRIVACY IN UBIQUITOUS COMPUTING
4.3 RELATIVE IMPORTANCE OF INQUIRER AND SITUATION
4.4 SUMMARY

5 FACES: A PRELIMINARY INTERACTION FRAMEWORK

5.1 INTRODUCTION


5.2 AN INTERACTION FRAMEWORK FOR PRIVACY MANAGEMENT
5.3 A PROTOTYPICAL USER INTERFACE FOR PRIVACY MANAGEMENT
5.4 AN EVALUATION OF THE PRELIMINARY FRAMEWORK
5.5 ADDITIONAL OBSERVATIONS AND DISCUSSION
5.6 SITUATING THE FRAMEWORK INTO THE PRIVACY SPACE
5.7 SUMMARY

6 THE PRECISION DIAL: A REFINED INTERACTION FRAMEWORK

6.1 A FLAWED DESIGN
6.2 A REFINED DESIGN: EXPLOITING AMBIGUITY
6.3 SOME CRITICISMS OF THE FRAMEWORK
6.4 CLOSING REMARKS

PART THREE: PERSONAL PRIVACY THROUGH UNDERSTANDING AND ACTION

7 FIVE PITFALLS TO AVOID WHEN DESIGNING FOR PRIVACY

7.1 COMMON DESIGN FLAWS
7.2 PERSONAL PRIVACY
7.3 UNDERSTANDING AND ACTION
7.4 FIVE PITFALLS IN DESIGNING FOR PRIVACY
7.5 DESIGN IMPLICATIONS
7.6 SUMMARY

8 CONCLUSION

8.1 SUMMARY OF FINDINGS
8.2 ENDING AS WE BEGAN

APPENDIX A: EXTENDED BACKGROUND

THE CHANGING SHAPE OF PRIVACY
A BRIEF HISTORY OF PRIVACY
THE FUTURE OF PRIVACY: UBIQUITOUS COMPUTING
PRIVACY AS A CROSS-DOMAIN PROBLEM
PRIVACY AS AN HCI PROBLEM

APPENDIX B: MATERIALS USED IN INTERVIEWS IN CHAPTER TWO

VERBAL DESCRIPTION OF UBIQUITOUS COMPUTING
SMART OFFICE SCENARIO
SMART SHOPPING SCENARIO
SMART HOME SCENARIO: INFORMAL COCKTAIL PARTY
GENERAL QUESTIONNAIRE


APPENDIX C: WEB-BASED QUESTIONNAIRE

INTRODUCTORY PAGE
BACKGROUND PAGE
SURVEY PAGE

APPENDIX D: FACES EVALUATION MATERIALS

QUESTIONNAIRE
PRE-EVALUATION INTERVIEW
INTRODUCTION
TUTORIAL SLIDES
TASK DIRECTIVE #1
TASK DIRECTIVE #2
TASK DIRECTIVE #3
POST-TASK QUESTIONNAIRE
POST-EVALUATION INTERVIEW


Acknowledgements


Two years’ academic and personal investment has resulted in this report. I am grateful for the privilege to make such an investment. I am grateful to have done it. I am grateful it is done. In the doing, I am indebted to the following people and organizations:

The American Society for Engineering Education, the U.S. Department of Education, and the National Science Foundation for, respectively, an NDSEG fellowship, a GAANN fellowship, and Grant No. IIS-0205644.

The Computer Science Division of the University of California at Berkeley.

My advisors and friends: Jennifer Mankoff and Anind Dey. Thank you for your honesty, your care, your gumption, your warmth, and for caring more for me than for my research.

My readers: Jennifer Mankoff, Anind Dey, and John Canny.

My privacy collaborators: Chris Beckmann, danah boyd, Jason Hong, Xiaodong Jiang, John Canny, Karen Teng, Jeff Huang, David Nguyen, Carlos Jensen, Jens Grossklags, Alessandro Acquisti.

The participants of the privacy workshops at CSCW 2002 and Ubicomp 2002.

My collaborators during my 2002 Intel Internship: Richard Beckwith, Miriam Walker, Genevieve Bell, Sunny Consolvo, and the entire Intel Peoples and Practices Research group.

People who have given me relevant, important ideas and insights: Marc Langheinrich, Deirdre Mulligan, David Phillips, Mark Ackermann, Nancy Van House, Doug Tygar, James Landay, Rachna Dhamija.

Berkeley lab mates: Scott Carter, Holly Fait, Louise Barkhuus, Scott Klemmer, Tara Matthews, Gary Hsieh, Alan Newberger, Jeff Heer, Chris Beckmann.

The Amazonians of San Francisco.

Roots: My family and crew in New York. I never left. I never will.

SML


1 Introduction

This report is an exploration of the evolving means, meanings, and mechanics of interactive personal privacy at the dawn of the ubiquitous computing age.1 Ubiquitous computing (ubicomp) stands to redefine established notions of privacy with the introduction of regular, pervasive sensing of personal information such as identity, location, and activity. To effectively and comfortably negotiate the boundaries of social life amidst the persistent disclosure of dynamic personal information—to maintain a sense of empowerment and self-determination in a panoptic context—end-users will need user interfaces that provide them with opportunities to understand and influence the privacy implications relevant to a system’s operation. We offer the following contributions to that end:

• a review of Human-Computer Interaction-related research and interactive systems in the privacy space,
• an analysis of the privacy space to help clarify discourse and design,
• an iteratively designed interaction framework for managing personal privacy in ubiquitous computing environments, and
• a set of guidelines for designing desktop or ubicomp systems that empower people to create and maintain personal privacy by understanding and acting upon the system’s privacy implications.

It is not hard to motivate these contributions. Beyond the informal recognition that one can scarcely mention the words “ubiquitous computing” without someone mourning the “death of privacy,” we can provide more substantive motivation for interactive privacy research. We know that privacy is an important concern of the general public (Cranor, Reagle et al. 2000; Taylor 2003; Turow 2003). We know that privacy is not one-size-fits-all (Westin 1967; Lederer, Mankoff et al. 2003), so some interactivity may be useful for people to customize computer-mediated disclosures. We know that interactive systems continue to mishandle users’ privacy needs (Whitten and Tygar 1999; Good and Krekelberg 2003), so interactive privacy remains an open problem. And we know that, since the design of interactive systems can influence the practice of important values, careful design of value-sensitive systems is critical (Friedman 1997).

Privacy is largely an issue of control. In a 2003 Harris poll, 79% (N=1,010) of subjects said “being in control of who can get information about [them]” is “extremely important” (Taylor 2003). For the purposes of this report, there are at least two ways of interpreting this notion of “being in control.” One represents a sense of legally, contractually, or normatively endowed control, whereby one could assert control over information flow through, for example, threat or execution of legal action, agreement with service contracts, specifying semi-permanent preferences buried on websites, or enrolling with do-not-call registries. This sort of control is asserted in practice irregularly and requires few insights from Human-Computer Interaction (HCI).

1 By interactive personal privacy, we mean the culturally meaningful practices by which people manage their privacy in the context of interactive technical systems.


A second relevant interpretation of “control” is the regular, everyday, participatory management of personal information disclosure. This is a fluid, intuitive, ongoing process rather than an occasional, focused task. Current examples include changing the status and visibility of one’s instant messaging presence, intentionally letting incoming phone calls go to voicemail, using different pseudonyms across online services, encrypting email sent through private networks, and even more traditional, mundane efforts like lowering one’s voice when discussing private matters in the presence of others, or wearing sunglasses and a hat to minimize recognition in public areas.

Both interpretations of control will impact the development of technical privacy protection mechanisms, but the latter is of greater interest to and will be more greatly influenced by HCI-related researchers. The everyday negotiation of privacy through interactive ubiquitous computing systems is an open problem and is the one we address in this report.

1.1 Ubiquitous Computing

Ubiquitous computing presents significant challenges for technologists (Weiser 1993; Abowd and Mynatt 2000; Ackerman 2000; Bellotti, Back et al. 2002), but is far more than just a grand engineering effort. It is the potentially inextricable embedding of networked computation into the fabric of society, into the everyday lives of people (Dourish 2001). It represents the merging of our embodied and our online lives. At some point in the future, from a personal standpoint, there will be no choice but to be online almost all of the time. And from a societal standpoint, there will be no turning back from ubiquitous computing except apocalyptically.

The late Mark Weiser coined the term ubiquitous computing and its abbreviation, ubicomp, and initiated the eponymous project at Xerox PARC in 1988, later promoting the vision in a series of articles in the early 90’s (e.g., (Weiser 1991)). On his website, Weiser describes the ubicomp project:

Inspired by the social scientists, philosophers, and anthropologists at PARC, we have been trying to take a radical look at what computing and networking ought to be like. We believe that people live through their practices and tacit knowledge so that the most powerful things are those that are effectively invisible in use. This is a challenge that affects all of computer science. Our preliminary approach: Activate the world. Provide hundreds of wireless computing devices per person per office, of all scales (from 1" displays to wall sized)… It is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere (Weiser 1996).

In the ubicomp vision, the act of computing becomes largely invisible. Not literally invisible, of course, but invisible in the sense that your car becomes invisible—becomes an afterthought—while you are in the act of driving it. Drivers focus on processing the flood of information from the driving environment to achieve the goal of reaching their destinations. They do not focus on the car per se. The car is an enabling factor in the driving activity. It is Heidegger’s hammer, its presence subsumed by the nail’s (Heidegger 1962).


Today, fifteen years after PARC’s ubicomp project began, the world is considerably closer to realizing Weiser’s vision on a grand scale. We have devices of all sorts—worn, carried, stationary, mobile, and embedded—that communicate, coordinate, and synchronize over high-speed wired and wireless networks ramifying from the tech metropolises out to the countryside and into third-world nations. Our use of these devices is becoming increasingly second-nature. People don’t use a personal computer; they email. They don’t operate a mobile phone; they call. Computing is becoming invisible. And it will continue to recede into the woodwork as cameras and microphones (Brin 1999), RFID sensors (Garfinkel 2000), and locatable mobile phones (FCC 2003) increasingly saturate social life and are repurposed as inputs to tacitly interactive computer systems.

1.2 Ubiquitous Computing’s Privacy Implications

But fluid, situated interaction comes at a cost. Automatic sensing of individuals’ activities will grow more sophisticated, soon allowing for fine-grained knowledge of what someone is doing and whom they are doing it with. Mobile phones and GPS can provide a person’s outdoor location with increasing resolution, and RFID tags embedded in badges, clothing, or other objects can track movement indoors. Computer vision and speech recognition technology, while still relatively crude, can recognize faces pulled from a crowd or automatically flag keywords in a phone conversation. Machine learning techniques are being developed to infer a high-level understanding of a conversation or meeting based on the relations between participants. Coupled with the digital availability of a person’s identity, pervasive information gathering becomes a powerful surveillance and profiling tool.

The emergence of ubiquitous sensor networks and robust data mining techniques will amplify the tracking and profiling capabilities of personal information collectors. As sensors pervade public and private environments, people will effectively and implicitly create continuous streams of personal information regarding their activities. One’s location, activity, and proximity to other individuals will be sensed in real-time, possibly stored for long periods, and shared and sold amongst corporate and government interests just as transaction profiles are traded today. Alone, these raw data streams have the potential to reveal a great deal about the immediate aspects of a person’s life, but when linked to one’s identity, these streams can feed and refine increasingly dense historical profiles maintained by law enforcement, commercial interests, and other individuals both familiar and unknown to the subject. As sensors pervading the social environment empower organizations to maintain continually updated dossiers on individuals’ locations, activities, companions, and correspondences, ostensibly in the interests of personalization, security, and efficiency, the very notion of being offline may become obsolete.

This raises the question: given that we have yet to iron out standard practices to honor and manage privacy on the world wide web, how exactly will we confront the same problem when the world is the web and there’s no way to escape it? Are we ready to confront ubiquitous computing’s compulsory privacy challenges? To even begin to answer these questions, we first have to take a deeper look at the shape of privacy here at the dawn of the ubiquitous computing age.

1.3 Personal Privacy

Westin defined information privacy as “the claim of individuals, groups or organizations to determine for themselves when, how and to what extent information about them is communicated to others” (Westin 1967). Though normally associated with the protection of personal data collected by organizations, this definition also aligns with interpersonal privacy, whereby people intuitively negotiate the many social tensions of everyday life (Palen and Dourish 2003). In this report we collapse both notions into personal privacy: the processes by which an individual selectively discloses personal information—e.g., shopping history or location—to organizations and to other people. Here we will briefly preview some perspectives on personal privacy that will prove relevant to this report.

1.3.1 Fair Information Practices

A good place to start analyzing contemporary privacy needs is the fair information practices. The nature of privacy has been evolving for centuries. Catalyzed by the emergence of information technology, much contemporary social discourse on privacy is concerned with information privacy, which has required new regulatory and legislative efforts to define and accommodate it. The fair information practices are a regulatory paradigm specifying how personal information should and should not be collected and handled by organizations. Most privacy law around the world is based on some variation on the fair information practices. Although there are many of these variations, a useful interpretation characterizes the fair information practices as (1) Notice of what information is collected and how it is used, (2) Choice over how personal information is used, (3) Access to see and correct information that has been collected, (4) reasonable Security for collected information, and (5) Accountability for compliance with the other practices.2

The fair information practices impose a common structure and consistency on organizational information collection and use. Organizations employing them must collect and process information in an organized, accountable fashion. Indeed, the ubiquitous computing systems that collect dynamic personal information in real-time are likely to be installed and administered by organizations beholden to some variation on the fair information practices. Even if a disclosure is interpersonal, it will be mediated by a system or series of systems owned by organizational interests. In principle, the structured information handling imposed on these systems by the fair information practices can lead to a metadata-driven application programming interface (API), by which information collection and use can be exposed to and influenced by end-users through a user interface. Much of the work in this report assumes an information-handling infrastructure with such an API.

2 The Center for Democracy and Technology has an online guide on the various incarnations of the Fair Information Practices at http://www.cdt.org/privacy/guide/basic/fips.html
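To make the notion of such a metadata-driven API concrete, the following Python sketch shows how a disclosure request annotated with fair-information-practices metadata might be surfaced to a user agent. This is our illustration only; the class names, fields, and decision rule are hypothetical and do not correspond to any system described in this report.

    # A minimal sketch, assuming a hypothetical infrastructure that describes each
    # disclosure with fair-information-practices metadata before it occurs.
    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class DisclosureRequest:
        collector: str       # who wants the information (supports Notice)
        data_type: str       # e.g., "location" or "purchase_history"
        purpose: str         # the collector's declared use (supports Notice)
        retention_days: int  # how long the collector claims to keep the data

    @dataclass
    class PrivacyAgent:
        decide: Callable[[DisclosureRequest], bool]  # the user's Choice, exercised via some UI
        log: List[Tuple[DisclosureRequest, bool]] = field(default_factory=list)

        def handle(self, request: DisclosureRequest) -> bool:
            allowed = self.decide(request)
            self.log.append((request, allowed))      # auditable record (Access, Accountability)
            return allowed

    # Example policy: allow location only for non-marketing uses kept a week or less.
    agent = PrivacyAgent(decide=lambda r: r.data_type == "location"
                         and r.purpose != "marketing" and r.retention_days <= 7)
    print(agent.handle(DisclosureRequest("CafeLoyaltyService", "location", "service", 1)))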


1.3.2 Feedback and Control

As with any interaction design challenge, feedback and control are critical concerns when designing for privacy (Bellotti and Sellen 1993). Westin’s definition of information privacy—“the claim of individuals, groups or organizations to determine for themselves when, how, and to what extent information about them is communicated to others” (Westin 1967)—is predicated on the subject having control over what is disclosed to others, and feedback over how that information is used and how that information flows to others. The fair information practices provide for feedback and control primarily by requiring notice and choice, respectively.

In principle, given an information infrastructure that exposes information flow through an API, designers could address the ubicomp privacy problem by providing notice and choice in the form of real-time alerts and dialogs. However, given the sheer number of disclosure events expected in ubicomp, with sensors and computers everywhere, it is unclear that people will welcome a flood of proverbial “OK” buttons as they go about their mobile, active lives. Consider how tedious it would be to use the web with cookie management turned off, accepting or rejecting each cookie individually. Now imagine performing the same process for ubicomp disclosure events on your mobile phone, all day long. Clearly, empowering people to manage personal privacy in ubicomp will require subtler approaches to feedback and control. Much of the work we report herein explores this problem.

1.3.3 Fragmented Identities and Boundary Negotiation

Social psychology tells us a great deal about personal privacy. Altman tells us that privacy is a continuous, dialectic process that people negotiate intuitively in the course of their everyday lives (Altman 1975). Goffman tells us that people disclose and withhold information selectively to maintain different fronts for different audiences. Each front represents an internally consistent role, but there are often glaring (and carefully managed) inconsistencies across fronts, roles, and audiences (Goffman 1956). (Palen and Dourish 2003) and (Boyle 2003) have analyzed computer-mediated privacy from Altman’s perspective. (Phillips 2002) and (boyd 2002) have done the same from Goffman’s perspective. It will become apparent that these analyses have greatly influenced our research.

Experience and social psychology tell us that privacy is a highly nuanced process, one that is not easily addressed through a simple, or, for that matter, complex, control panel. Yet somehow, supporting personal privacy in ubiquitous computing will require reconciling feedback and control mechanisms—afforded by user interfaces and enforced by the fair information practices—with people’s nuanced privacy practices. This report hopes to contribute to this reconciliation.

1.4 A Preview of this Report

The scope of this report varies as it proceeds. Part One—Chapters Two and Three—articulates the context of this project and, hence, covers quite a bit of ground. Chapter Two covers related work in HCI. We review a range of findings from usability, sociological, ethnographic, regulatory, and design perspectives, and we examine a series of interactive systems representing various approaches to managing privacy on and off the desktop. Recognizing that the privacy space is cluttered by numerous perspectives and interests, in Chapter Three we deconstruct the privacy space—emphasizing factors of importance to HCI and ubiquitous computing—to help clarify privacy-related discourse and design. This disambiguates a number of factors that create the conditions for privacy (and the violation thereof) in arbitrary phenomena and systems.

In Part Two—Chapters Four through Six—we report on the iterative design of an interaction framework and prototype for managing personal privacy in ubiquitous computing—off the desktop. The privacy scope of this part starts off broad, in that our initial framework sought to address the entire privacy space. However, by Chapter Six we narrow the scope of our design to a specific subspace of the privacy space, one we believe is ripe for interaction support.

In Part Three we expand our scope once again. Chapter Seven presents some common design pitfalls that designers should beware of as they develop privacy-affecting systems, on and off the desktop. We culled these pitfalls from an array of existing research and commercial systems and from our own experience designing the system we report on in Part Two.

In the remainder of this section, we preview these chapters in greater detail.

1.4.1 Part One: Articulating Privacy

In order to discuss the challenges of designing for privacy in ubicomp, we first need to articulate the context in which our work is situated. We do this first by reviewing related HCI research in privacy, both on the desktop and in ubicomp. The range of approaches to privacy that these projects adopt is quite broad, and so we then analyze the privacy space to help clarify just what people talk about when they talk about privacy.

1.4.1.1 Human-Computer Interaction Research in Privacy

There has been a tremendous amount of HCI-oriented research in personal privacy in the context of technical systems. This includes polls showing considerable public concerns about privacy on the Internet (Cranor, Reagle et al. 2000; Taylor 2003; Turow 2003); interviews and surveys exploring privacy design issues for context-aware systems (Harper, Lamming et al. 1992; Kaasinen 2003; Lederer, Mankoff et al. 2003); studies exposing privacy perceptions and practices in groupware (Palen 1999), multimedia environments (Adams 2000), and location-aware systems (Beckwith 2003); and experiments revealing usability problems affecting privacy in email (Whitten and Tygar 1999) and file-sharing (Good and Krekelberg 2003) applications.

HCI research will continue to make important technological and normative contributions to the societal negotiation of privacy. The products of interaction designers will heavily impact people’s ability to conceptualize and manage their privacy in the sensor-rich environments of the future. Indeed, interaction design can heavily influence the adoption of an interactive system.3 When that system operationalizes one of society’s core values (read: privacy), then that system had better be successful. Few want to see people grow accustomed to surrendering reasonable privacy protections simply because they are too tedious to manage. In Chapter Two we will review a number of HCI-oriented research findings and interactive systems that address personal privacy on and off the desktop.

1.4.1.2 Deconstructing the Privacy Space

Privacy is an enormous topic, incorporating multiple perspectives, from the sociological, e.g., (Altman 1975), to the legal, e.g., (Alderman and Kennedy 1995), to the computational, e.g., (Goldberg, Nichols et al. 1992), to the interactive, e.g., (Whitten and Tygar 1999). This can lead to inconclusive discourse. Parties to the discussion start with asymmetric assumptions about just what systems, actors, relations, and information are under consideration. And it can lead to misleading designs, whereby users expect a system to protect their privacy in ways it cannot.

When designing for or discussing privacy, then, it is critical to identify the conditions that create a system’s or phenomenon’s privacy implications. In Chapter Three, we present a set of interdependent dimensions that, when applied to the analysis of a privacy-related system or phenomenon, can expose the factors that determine the role of privacy therein. This can help focus the scope of discourse and the design of privacy in the HCI and ubiquitous computing communities.

1.4.2 Part Two: An Iteratively Designed Interaction Framework

In Part Two we report on the iterative design of an interaction framework and prototype intended to provide users with the feedback and control necessary to manage their personal privacy in ubicomp, giving them a means to maintain fragmented identities and negotiate social boundaries in a world of pervasive, networked sensors. Chapter Four presents a series of formative analyses and studies that led to our first design, Faces, which we present in Chapter Five.

Faces lets users assign different disclosure preferences to different inquirers, optionally parameterized by situation (a conjunction of location, activity, time, and nearby people). We employed the metaphor of faces to represent disclosure preferences. Each face encapsulates privacy preferences for a select set of categories of personal information. The face metaphor is a fairly direct translation of Goffman, who posited that a person works to present himself to an audience in such a way as to maintain a consistent impression of his role in relation to that audience (Goffman 1956). Users specify their preferences by creating 3-tuples of inquirers, situations, and faces, with each 3-tuple meaning “if this inquirer wants information about me when I’m in this situation, show her this face.” Wildcards are allowed in the inquirer and situation slots to handle requests involving inquirers or situations that the user has not specified.
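The 3-tuple rule structure just described can be pictured as a small lookup table. The following Python sketch is our own illustration of the matching idea, not the Faces implementation; the inquirer names, situations, and face labels are hypothetical.

    # Illustrative (inquirer, situation, face) preference lookup with "*" wildcards.
    ANY = "*"

    preferences = [
        # more specific tuples are listed before wildcard fallbacks
        ("boss",   "at_work", "work_face"),
        ("boss",   ANY,       "vague_face"),
        ("spouse", ANY,       "intimate_face"),
        (ANY,      ANY,       "blank_face"),   # catch-all default
    ]

    def choose_face(inquirer: str, situation: str) -> str:
        """Return the face governing a disclosure to this inquirer in this situation."""
        for rule_inquirer, rule_situation, face in preferences:
            if rule_inquirer in (inquirer, ANY) and rule_situation in (situation, ANY):
                return face
        return "blank_face"

    print(choose_face("boss", "at_work"))   # -> work_face
    print(choose_face("stranger", "home"))  # -> blank_face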

3 For example, the PDA market stagnated for years until the Palm Pilot broke through, largely because of its elegant design, and email encryption remains grossly underused, largely because of its inelegant design (Whitten, A. and J. D. Tygar (1999). Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0. 8th USENIX Security Symposium.).


We will show that the design of Faces involved some crucial missteps. Among these flaws is a significant configuration effort, much of which is decontextualized from the settings to which the configured privacy preferences apply. The result is that Faces requires users to model their privacy practices within the interface, rather than empowering them to conduct those practices through the interface.

Accordingly, in Chapter Six we refine the framework, narrowing its scope from the whole of the privacy space down to a specific subspace—the disclosure of dynamic context (e.g., location, nearby people) to personal contacts. The core of the refined framework is a precision dial, which we envision as a feature on a mobile phone by which users could quickly adjust the precision of the context information they disclose to personal contacts. Precursors to such a feature can be found in services currently being rolled out by the likes of AT&T Wireless (AT&T 2003) and Nokia (Nokia 2002).
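To illustrate the precision dial idea, the sketch below maps a single ordinal dial setting to progressively vaguer location disclosures. The particular levels and the rounding scheme are our assumptions for illustration, not the design detailed in Chapter Six.

    # Sketch of a precision dial: one setting coarsens what personal contacts can see.
    PRECISION_LEVELS = ["undisclosed", "city", "neighborhood", "block", "exact"]

    def disclose_location(known_location: dict, dial: int) -> str:
        """dial 0 = most private; higher values disclose progressively more detail."""
        level = PRECISION_LEVELS[max(0, min(dial, len(PRECISION_LEVELS) - 1))]
        if level == "undisclosed":
            return "unavailable"
        return known_location[level]

    location = {"city": "Berkeley", "neighborhood": "Southside",
                "block": "Telegraph & Durant", "exact": "2401 Durant Ave"}
    print(disclose_location(location, 1))  # -> Berkeley
    print(disclose_location(location, 4))  # -> 2401 Durant Ave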

1.4.3 Part Three: Empowering Personal Privacy by Designing for Understanding and Action

In Part Three we expand away from our particular framework and offer a set of general guidelines for designers of privacy-affecting systems, on or off the desktop. We base these guidelines on an analysis of existing systems and on our own design experiences as reported in Part Two. We learned firsthand that privacy cannot be addressed with a singular interactive system. It has to be addressed system by system, domain by domain. Despite this, there are common requirements of any interactive privacy-affecting system, and we show that many existing systems do not meet these requirements.

Our five guidelines—represented as pitfalls to avoid in the design process—hinge on the recognition that people create and maintain personal privacy by understanding the privacy implications relevant to a situation and influencing them through intuitive social action. It is a challenge for designers of interactive systems to empower these human-level processes of understanding and action through the limited technical mechanisms of feedback and control. To help meet this challenge, we present five pitfalls to avoid when designing interactive systems with personal privacy implications, on or off the desktop. These pitfalls are: obscuring potential information flow, obscuring actual information flow, emphasizing configuration over action, lacking coarse-grained control, and inhibiting established practice. We illustrate how some existing research and commercial systems—Faces included—fall into these pitfalls, and how some avoid them. Designs that avoid them provide feedback and control mechanisms optimized to support the understanding and action required to create and maintain personal privacy.

1.5 Summary

We began this project with the intention of designing a prototypical user interface for managing personal privacy in ubiquitous computing. In the course of that pursuit, we learned a great deal about our subject, including the fundamental fact that privacy is not a thing that can be solved in any standard sense of the word. Privacy is not a force to be tamed or overcome, like gravity or distance. It is an ongoing, fluid, cooperative human process that must be addressed and readdressed in the design of every privacy-affecting system. Rather than solving it, designs can accommodate it. And the way to do so is to empower the end-user to intuitively understand and influence the conditions that create it.


PART ONE

ARTICULATING PRIVACY

Mama Say Mama Sa Mama Ku Sa, Mama say Mama Sa Mama Ku Sa.

--Michael Jackson


2 Background and Related Work

We are presently amidst an extended transition period in the history of privacy, as the notion of personal privacy expands beyond bodily and territorial concerns to include information privacy—the protections and consequences of computer-mediated information disclosure. It makes sense that privacy is being remixed by the evolution of our technical infrastructures. As legal scholar Lawrence Lessig explains, the nature of privacy at any given point in place and time is contingent on the interplay of laws, markets, norms, and technological architectures (Lessig 2000). Starting with the advent of the computer and proceeding through the explosion of the World Wide Web and the emergence of ubiquitous computing, the information age continues to catalyze upheavals in these four sectors as they continually recondition privacy in response to each other.

A looming technological and normative challenge is the development and deployment of interactive mechanisms for managing personal privacy in ubiquitous computing. To influence disclosure, the subject requires some sort of user interface, explicit or otherwise. In ubicomp, the disclosure of personal information is nearly unavoidable and largely implicit, and the user interfaces for monitoring and managing disclosure have to be carefully designed lest they disrupt the regularity of everyday practices.

It is worth taking a look at existing HCI-related research in privacy, to situate our work. First we review the findings of analytical, experimental, and ethnographic research in HCI, computer-supported cooperative work (CSCW), and ubiquitous computing. These findings constitute a foundation of knowledge of personal and information privacy in sociotechnical contexts. We then look at specific commercial and experimental systems for managing privacy in such contexts. For readers interested in a slightly deeper analysis of the history of privacy and the related work mentioned below, Appendix A contains a longer version of this chapter.

2.1 HCI-related Research Findings

Many researchers have investigated human factors in privacy-sensitive systems, from sociological, psychological, anthropological, usability, and design perspectives. Together they shape our understanding of the factors and trappings of privacy, which should inform any efforts to build interactive mechanisms for managing it. Here we review a number of these research efforts. We have chosen specific projects and findings that inform our work, and we organize them below according to their findings.

2.1.1 Privacy as Boundary Negotiation

Drawing on the writings of social psychologist Irwin Altman, Palen and Dourish describe privacy management as “a dynamic response to circumstance rather than a static enforcement of rules” and they accentuate the nuanced effects disruptive technologies have on privacy. They seek to dispel the conception, common among technologists and designers, that privacy is a decontextualized resource to be abstracted and operationalized. Privacy, they stress, is a dialectic process “defined by a set of tensions between competing needs; and … technology can have many impacts, by way of disrupting boundaries, spanning them, establishing new ones, etc.” (Palen and Dourish 2003).

2.1.2 Privacy as Fragmented Identity

Building on the work of sociologist Erving Goffman, Phillips casts a chilling light on the effects of ubiquitous identity-tracking on those who have heretofore safely managed different personas in geographically and culturally separate communities. He points out that we all maintain different personas in different contexts and that the practice of identity management includes selectively revealing certain inconsistencies in these personas to certain trusted people. He writes, “Social contexts and social identities are tightly linked. Each constitutes the other.” Following this reasoning, when the properties of one’s various social contexts are made available to others through ubiquitous sensing systems, one’s social identities are consequently laid bare and can collapse into a singular, heterogeneous, perhaps shockingly inconsistent, identity. In a society that prosecutes and marginalizes deviant behaviors, this could become an unjust and destructive phenomenon (Phillips 2002).

2.1.3 Factoring the Perception of Privacy

Adams conducted an empirical investigation into the individual’s perception of privacy in environments outfitted with audio/video capture equipment. She found that subjects’ perceptions of privacy in these environments depend on the interrelation of four key factors: who they think is receiving the information, what they think will be done with the information, how sensitive they feel the information is, and the context in which the information is disclosed (Adams 2000).

2.1.4 Perceptions Differ Across Populations

Beckwith conducted an ethnography of a community of elder residents, direct-care staff, management, and residents’ families associated with a residential care facility employing advanced sensors to monitor the locations and activities of residents and staff. A key finding was that different stakeholders can have drastically different perceptions of a technology’s invasiveness, its potential for abuse, and even its purpose (Beckwith 2003).

2.1.5 Flexibility and Coevolution

When studying usage practices surrounding an established and compulsory group calendaring system at a large technology company, Palen reported a number of findings about sociotechnical support for privacy in CSCW. One key finding was that privacy-sensitive systems should be flexible enough to support nuanced and variable usage practices across populations and individuals, and they should be amenable to coevolution (Palen 1999). Indeed, a system too finely structured for compulsory usage practices cannot support the unpredictable refinements that emerge during long-term real-world use (Suchman 1994).

2.1.6 Usability Design

Whitten and Tygar point out that poor usability design is a key factor in the poor market performance of Pretty Good Privacy (PGP), the de facto encryption standard for private email communication. “PGP 5.0 is not usable enough to provide effective security for most computer users, despite its attractive graphical user interface, supporting our hypothesis that user interface design for effective security remains an open problem” (Whitten and Tygar 1999).

2.1.7 Feedback and Control

Analyzing the privacy-related aspects of the design and use of the RAVE mediaspace4 system at EuroPARC in the early 1990’s, Bellotti and Sellen found that designers should provide feedback about and reasonable control over what personal information is captured, what happens to the information after capture, who accesses the information, and what the information is used for (Bellotti and Sellen 1993).

2.1.8 Fair Information Practices

Recognizing that absolute security and privacy will remain forever elusive, Langheinrich argues for the common adoption of design principles that would minimize the risks of undesired disclosures in ubiquitous computing environments. Based on the fair information practices (CDT 2000), his principles include support for notice of information collection, choice and consent over disclosure, employing anonymity and pseudonymity whenever appropriate, limiting information collection and usage to normative standards of proximity and locality,5 providing adequate security, and providing channels for access and recourse (Langheinrich 2001).

2.1.9 Summary

Together these findings constitute a complex body of knowledge about personal privacy amidst sociotechnical systems. We know that privacy-sensitive systems need careful usability design (Whitten and Tygar 1999), including mechanisms for feedback and control (Bellotti and Sellen 1993) over the who, what, and when of disclosure (Adams 2000). We know they should support the fair information practices (Langheinrich 2001). We know that systems should be flexible enough to support different populations (Beckwith 2003) and evolving practices (Palen 1999). And we know they should afford themselves such that users can appropriate them into their nuanced boundary management (Palen and Dourish 2003) and fragmented identity management (Phillips 2002) practices.

In the next section we take a look at existing commercial and experimental interactive systems that, to varying degrees of success, have addressed the challenge spelled out by these findings.

2.2 HCI-related Systems

Here we will present a partial survey of past and current research and commercial projects that address privacy management from a human factors perspective. Each of these systems seeks to empower end-users to disclose information according to their needs and preferences.

4 A mediaspace system is characterized by high-resolution, real-time audio-video mechanisms installed in cooperative, remote workspaces supporting regular interpersonal communication and collaboration.

5 Kindberg and Fox discuss a variation, called the Boundary Principle, whereby the scope of sensed data should be determined by existing spatial, administrative, and normative boundaries (Kindberg, T. and A. Fox (2002). "System Software for Ubiquitous Computing." IEEE Pervasive 1(1).).


We will begin with a selection of tools designed for online environments, i.e., those requiring the use of conventional personal computers. We will follow that with a selection of systems designed for use beyond the desktop, in mobile or ubiquitous computing environments.

2.2.1 Online Privacy Management Tools

2.2.1.1 Default Web Browser Controls

The sixth version of Microsoft® Internet Explorer introduced top-level privacy controls into the market-leading web browser’s configuration options (Figure 2-1). This feature allows the user to select from a series of pre-configured options that encode cookie-handling preferences (Accept All Cookies, Low Privacy, Medium, Medium High, High Privacy, Block All Cookies). Fine-grained cookie-handling controls are available for advanced users.


The Internet Explorer privacy feature reads websites’ P3P-encoded6 privacy policies to determine whether their practices are in line with the user’s expressed preferences. If there is a conflict, the browser will automatically reject the offending cookies and display an unobtrusive visual symbol in the corner of the window as notification. This sometimes disrupts the website’s functionality enough to make the user’s task difficult or impossible, but at present, that is the cost of choosing a more privacy-preserving setting. Both users and service providers shoulder this cost. Users may have to (temporarily) relax their privacy preferences to make use of a certain website. On the other hand, if enough users avoid websites that employ privacy-invasive practices, the website provider may be forced to alter its policies.

Figure 2-1. Microsoft Internet Explorer 6 privacy controls.

6 P3P is the Platform for Privacy Preferences Project of the World Wide Web Consortium (see Appendix A for a description of P3P, along with a deeper review of the history of privacy).

2.2.1.2 AT&T Privacy Bird

The AT&T Privacy Bird is a freely available P3P-aware extension to Internet Explorer (AT&T 2002). Designed by Lorrie Cranor, one of the creators of P3P, the Privacy Bird features mechanisms for unobtrusively notifying users of the personal information usage practices of the organizations behind the websites they visit. Future versions may incorporate consent mechanisms as well.

When a website is loaded in the browser, the Privacy Bird reads the site’s P3P-encoded privacy policy, compares the policy to the user’s notification preferences, and then notifies the user as necessary. Users configure their notification preferences using a series of checkboxes (Figure 2-2). For example, users can choose to be notified, or not, when a site uses financial information for research and development (i.e., marketing) purposes. The categorizing and labeling of the configuration options were iteratively designed to maximize efficacy and minimize complexity.

According to the level of agreement between the user’s preferences and a site’s policies, the program notifies the user by adjusting the representation of a small, cartoon-like bird icon situated peripherally in the browser’s title bar (Figure 2-3). When there is no disagreement, the bird is a peaceful green color. When there is disagreement, the bird turns an alarming red. When a site has no available P3P policy, the bird turns warningly yellow. Such subtle, peripheral cues to potential privacy invasions have been proposed as an elegant way to design for notice (Ackermann and Cranor 1999).
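The notification logic just described can be summarized in a few lines. The sketch below is our simplified reading of the Privacy Bird’s behavior, not its actual code; the policy representation, practice labels, and preference format are hypothetical stand-ins for P3P.

    # Sketch: choose the bird color from a site's declared practices and the
    # user's objections. Practice labels here are invented simplifications.
    from typing import Optional

    def bird_color(site_policy: Optional[dict], objections: set) -> str:
        if site_policy is None:
            return "yellow"                      # no P3P policy available
        declared = set(site_policy.get("practices", []))
        if declared & objections:
            return "red"                         # policy conflicts with preferences
        return "green"                           # no disagreement

    user_objections = {"financial_info_for_marketing", "health_info_shared"}
    print(bird_color({"practices": ["contact_info_for_service"]}, user_objections))      # green
    print(bird_color({"practices": ["financial_info_for_marketing"]}, user_objections))  # red
    print(bird_color(None, user_objections))                                             # yellow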

Figure 2-2. Privacy Bird configuration screen for choosing which privacy policies warrant notification.


2.2.1.3 Identity-Manager

Jendricke and Gerd tom Markotten have developed an experimental system for managing personal information disclosure to websites. Like SecureId, the Identity-Manager uses a Goffmanesque approach to disclosure, supporting the explicit creation and configuration of multiple profiles, each associated with certain elements of the user’s personal information and each designated as a proxy for a set of websites (Jendricke and Gerd tom Markotten 2000).

For each site visited, the user can assign an existing profile or create a new one, specifying what contact and financial information the website can obtain during a transaction (Figure 2-4). The system is designed to run in conjunction with standard web browsers.

Figure 2-3. AT&T Privacy Bird notification images, corresponding to websites that, respectively, agree with the user's preferences; have no P3P-encoded policies; and conflict with the user's preferences.

Figure 2-4. Identity-Manager's profile configuration screen.


2.2.1.4 SecureId

As part of an investigation into fragmented identity management in online environments, boyd created an interactive system, called SecureId, supporting knowledge-based access control to structured personal information associated with the many facets of users’ identities (boyd 2002). When first registering with the SecureId system, the user specifies a detailed profile, including contact, biographic, occupational, and avocational information. From the information in the profile – e.g., the user’s interests and various email addresses – the system infers a set of facets of the user’s identity, or different personas that the user may present in different contexts (Figure 2-5).

Figure 2-5. One user's SecureId identity facets, corresponding to semantically related groups of people and data.

After approving some of the suggested facets and/or creating new ones, the user establishes what boyd calls knowledge-based access control for each facet. That is, she specifies a set of questions whose answers would likely be known only by people already familiar with that facet of the user’s identity. She can then associate various pieces of information with each facet, e.g., documents, web pages, etc. When another online user attempts to access the user’s personal space, he is presented with a list of the user’s facets. After choosing a facet with which he considers himself associated, he is required to accurately answer the questions that guard the information in that facet. If he succeeds, he can access that information.

Boyd’s prototype operationalizes some of Goffman’s key insights about people’s tendency to behave differently, indeed to present different personas, to different audiences (Goffman 1956). Arguably, while manual configuration of facets is too tedious and knowledge-based access control too insecure for production-level systems, SecureId elegantly illustrates the potential for translating established real-world identity management practices into virtual identity management techniques.
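Knowledge-based access control as described above amounts to guarding each facet’s data behind questions that only insiders can answer. The following Python sketch is our illustration of that idea; the facets, questions, and matching rule are hypothetical and are not boyd’s implementation.

    # Sketch of facet-guarding questions; answers are compared case-insensitively.
    facets = {
        "college_friends": {
            "questions": {"What dorm did we live in?": "unit 2",
                          "Which cafe did we study at?": "strada"},
            "data": ["photos_1999/", "reunion_notes.txt"],
        },
    }

    def request_access(facet_name: str, answers: dict) -> list:
        """Return the facet's data only if every guarding question is answered correctly."""
        facet = facets[facet_name]
        for question, expected in facet["questions"].items():
            if answers.get(question, "").strip().lower() != expected:
                return []            # any wrong or missing answer denies access
        return facet["data"]

    print(request_access("college_friends",
                         {"What dorm did we live in?": "Unit 2",
                          "Which cafe did we study at?": "Strada"}))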

2.2.2 Mobile and Ubiquitous Computing Privacy Management Tools

Having described some online privacy tools, below we will present descriptions of some privacy tools designed for use beyond the desktop. Some of these are interaction-oriented systems, designed to serve as the user interface to a privacy-affecting system, while others are infrastructure-level systems, operating below the interface but with direct implications for the end-user experience.

2.2.2.1 AT&T mMode Find Friends

Among the first location-based services introduced to the North American mobile phone market, the AT&T mMode Find Friends service allows AT&T mMode subscribers to locate their mMode-subscribed friends in real-time (AT&T 2003). Reported locations are approximate, as the current GSM/GPRS-based telecom infrastructure can report only the location of the last cell tower contacted by the phone.

Management of location privacy is reasonably simple. Suppose Bob wants to locate Alice. First, Bob has to add Alice to his phone’s list of friends, an action which Alice has to explicitly approve through an option on her phone. Alice’s approval effectively grants Bob permission to attempt to locate her anytime, though it should be noted that every attempt to locate Alice sends a notification to her phone. Once this permission is established, Alice can prevent Bob from locating her by either (1) turning off her phone, (2) setting her phone to a globally invisible state, or (3) setting her phone to an invisible state only with respect to Bob. The first two options would stop anyone from locating Alice. The third option, though slightly more interactionally demanding than the second, allows Alice to tailor her privacy with respect to specific individuals.
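The visibility rules described above reduce to a short decision procedure. The sketch below is our reading of the service description, not AT&T’s implementation; the field names are hypothetical, and in the real service each location attempt also triggers a notification to the target’s phone.

    # Sketch of Find Friends-style visibility checks.
    from dataclasses import dataclass, field

    @dataclass
    class Phone:
        powered_on: bool = True
        globally_invisible: bool = False
        approved_friends: set = field(default_factory=set)
        hidden_from: set = field(default_factory=set)   # per-friend invisibility

    def can_locate(requester: str, target: Phone) -> bool:
        if requester not in target.approved_friends:
            return False                                 # never approved by the target
        if not target.powered_on or target.globally_invisible:
            return False                                 # options (1) and (2) above
        return requester not in target.hidden_from      # option (3) above

    alice = Phone(approved_friends={"bob"}, hidden_from={"bob"})
    print(can_locate("bob", alice))    # False: Alice is invisible to Bob specifically
    print(can_locate("carol", alice))  # False: Carol was never approved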

2.2.2.2 pawS: Privacy Awareness System

Langheinrich built an experimental system supporting privacy in ubiquitous computing environments (Langheinrich 2002). The Privacy Awareness System, or pawS, is designed not to enforce absolutist notions of privacy but rather to implement a subset of the principles Langheinrich laid out in his seminal ubiquitous computing privacy paper (Langheinrich 2001). Specifically, pawS is composed of mechanisms supporting notice, choice and consent, proximity and locality, and access and recourse.

The pawS system is intended to work in scenarios like the following. Alice carries a wirelessly networked mobile computer that contains her encoded privacy preferences, or a link to their network location. As Alice goes about her day, she encounters various ubiquitous computing services, many of which require some of her personal information (e.g., name, location, etc.) to operate. When in the vicinity of one of these services, Alice’s mobile computer receives a beaconed notification of the network location of the service’s P3P-encoded privacy policy. Alice’s user-agent compares her preferences with the service’s policies and negotiates disclosure- and service-levels that meet both parties’ requirements. Each party keeps negotiation, disclosure, and service provision records in a database.
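The scenario above can be paraphrased as a fetch-compare-record loop. The Python sketch below is our illustration of the pawS interaction pattern under invented data formats; it is not the pawS implementation, and real pawS policies are P3P-encoded rather than the plain dictionaries used here.

    # Sketch: a user agent hears a beacon, fetches the service's policy, grants only
    # the data uses that match the user's preferences, and records the outcome.
    user_preferences = {"location": "service_only", "name": "never"}

    def negotiate(beacon: dict, fetch_policy) -> dict:
        policy = fetch_policy(beacon["policy_url"])       # an HTTP GET in practice
        granted = {item: use for item, use in policy["requested"].items()
                   if user_preferences.get(item) == use}  # "never" items cannot match
        return {"service": beacon["service"], "granted": granted}   # kept by both parties

    fake_fetch = lambda url: {"requested": {"location": "service_only", "name": "marketing"}}
    print(negotiate({"service": "smart_cafe", "policy_url": "http://example.org/p3p"}, fake_fetch))
    # -> {'service': 'smart_cafe', 'granted': {'location': 'service_only'}}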


Langheinrich designed pawS to support some of the principles he outlined in (Langheinrich 2001), as mentioned above in section 2.1.8. By providing privacy policies and beaconing them out to users, the system supports the principle of notice. By negotiating disclosure- and service-levels according to users’ privacy preferences, the system supports the principle of choice and consent, as encoded by those preferences. By focusing on situated systems that beacon to proximate users, the system supports the principle of proximity and locality, and by storing negotiation, disclosure, and service records in both users’ and the providers’ databases, it supports the principle of access and recourse.

2.2.2.3 Privacy Mirrors

Nguyen and Mynatt have developed the Privacy Mirrors framework, promoting designs that provide users with largely visual representations of disclosure instances and histories in ubiquitous computing environments (Nguyen and Mynatt 2002). The Privacy Mirrors framework aims to engender rich awareness of information flow, thereby encouraging accountability and empowering users to manage their environments as necessary.

Figure 2-6 shows an instance of a privacy mirror prototype, specifically a wall display that communicates, among other things, the color of the shirt of the person currently being tracked by the system behind it.

Figure 2-6. A Privacy Mirror displaying meaningful representations of the person and place being monitored.


2.2.2.4 Privacy-sensitive Home Media Spaces

Neustaedter and Greenberg designed and constructed a home mediaspace with implicit and explicit privacy controls for moderating the disclosure of audio and video feeds to remote coworkers (Neustaedter and Greenberg 2003). For instance, the home office worker can manually adjust the fidelity of his videoconference stream. But he can also let the system automatically engage and disengage the camera when a non-employee housemate enters the room. By regulating the outgoing rich media information flow from the home office, the system helps the employee maintain the room’s dual uses as both office and familial space.
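As a concrete reading of the implicit control just described, the sketch below turns the camera off whenever someone other than the home office worker is in the room, and otherwise honors a manual fidelity setting. The sensor inputs, fidelity levels, and rule ordering are our illustrative assumptions, not the Neustaedter and Greenberg design.

    # Sketch: an implicit household rule overrides the worker's manual fidelity dial.
    def outgoing_video(fidelity_dial: int, people_in_room: list, worker: str) -> str:
        """fidelity_dial: 0 = off, 1 = blurred, 2 = full video."""
        if any(person != worker for person in people_in_room):
            return "off"                    # protect non-employee housemates implicitly
        return ["off", "blurred", "full"][max(0, min(fidelity_dial, 2))]

    print(outgoing_video(2, ["worker"], "worker"))           # -> full
    print(outgoing_video(2, ["worker", "child"], "worker"))  # -> off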

2.3 Summary In this chapter we have surveyed a range of HCI-oriented privacy research to establish an understanding of personal privacy amidst sociotechnical systems. We know that privacy-sensitive systems need careful usability design (Whitten and Tygar 1999), including mechanisms for feedback and control (Bellotti and Sellen 1993) over the who, what, and when of disclosure (Adams 2000). We know they should support the fair information practices when appropriate (Langheinrich 2001). We know that systems should be flexible enough to support different populations (Beckwith 2003) and evolving practices (Palen 1999). And we know they should afford themselves such that users can appropriate them into their nuanced boundary management (Palen and Dourish 2003) and fragmented identity management (Phillips 2002) practices. We then looked at existing commercial and experimental interactive privacy management systems, on and off the desktop. None of these systems is a panacea. Each, to varying degrees of success, addresses some aspect of personal privacy management, but none can be said to “solve” privacy comprehensively. This turns out to be an important point, for as we will see in Chapters Five and Six, privacy is not a singularly solvable thing. It must be addressed system by system, domain by domain, culture by culture. When we recognize that privacy—an inherently complex process—continues to elude definition as people are subjected to a confusing procession of potentially invasive technical innovations, we see that people are easily misled when it comes to privacy. Does the typical end-user understand, say, that the AT&T Privacy Bird only gives notice of potential privacy violations, but cannot stop them? The confusion grows as computation moves off the desktop. Would the typical end-user understand that the AT&T Find Friends privacy feature can stop a friend from locating her but cannot stop AT&T or the authorities from locating her? Or that pawS might apply his privacy preferences to the nearby identity sensor but that the nearby surveillance camera is immune to its influence? Empowering end-users to understand the scope and operation of distributed sensing systems is a gapingly open problem (Bellotti, Back et al. 2002). When it comes to privacy, it is important that designers make clear to users just which aspects of their privacy a system pertains to. To help designers and researchers articulate the scope of their systems, in the next chapter we deconstruct the privacy space into a set of interdependent dimensions. By assessing a privacy-related system or phenomenon with respect to these dimensions, one
can identify the conditions that create its particular privacy implications. By then focusing subsequent analysis on these conditions specific to the problem at hand, rather than on the nebulous word privacy (too broadly defined to be useful by itself), one can clarify the scope of discourse and design. At the end of the next chapter, we will briefly return to each of the systems covered above, situating it into the privacy space to clarify its scope in the management of personal privacy.


3 Deconstructing the Privacy Space When designing for or discussing privacy, it is critical to identify the conditions that create a system’s or phenomenon’s privacy implications. In this chapter, we present a set of interdependent dimensions that, when applied to the analysis of a privacy-related system or phenomenon (i.e., technical systems, policies, practices, and incidents), can expose the factors that determine the role of privacy therein. This can help focus the scope of privacy-related discourse and the design in the HCI and ubiquitous computing communities. Privacy is an enormous topic, incorporating multiple perspectives, from the sociological, e.g., (Altman 1975), to the legal, e.g., (Alderman and Kennedy 1995), to the computational, e.g., (Goldberg, Nichols et al. 1992), to the interactive, e.g., (Whitten and Tygar 1999). This often leads to inconclusive discourse on privacy. Parties to the discussion start with asymmetric assumptions about just what systems, actors, relations, and information are under consideration. Are we talking about individuals disclosing transactional information to businesses, or about groups keeping secrets from the state, or about individuals withholding activities from other individuals, or about encryption as a tool for anonymity, or about public surveillance cameras, or about wiretaps, or about…? These examples illustrate the range of issues often conflated when we talk about privacy. Although privacy is an important factor in each of them, they highlight different points in the privacy space. In this chapter we offer the HCI and ubiquitous computing communities a framework for exposing key aspects of any given privacy-related phenomenon that influence the conditions and implications of privacy therein. In other words, our goal is to help researchers and designers identify the conditions that create the privacy implications specific to any given phenomenon, so they can better analyze and address them. To do this, we disambiguate the privacy space into a set of interdependent dimensions that help define a phenomenon’s implications for privacy. We stress that these dimensions are interdependent. We offer them not as rigid, orthogonal guides for concretely measuring deterministic phenomena, but as a flexible, conceptual tool for making the assessment of ambiguous phenomena a bit more lucid. These dimensions are not exhaustive; we have chosen them because of their high impact on the end-user privacy experience. We cluster these dimensions into three categories: system properties, actor relations, and information types. System properties determine important aspects of the mechanisms of disclosure and how participatory the disclosure is. Actor relations help determine the type of relationship between subject and observer and how much history they share, all of which affect how the observer might use the information and how much the subject might trust the observer. Information types help determine how sensitive the information is and how intentional the disclosure is. By assessing a privacy-related phenomenon with respect to these dimensions, one can expose the elements of that phenomenon that sensitize the disclosure of personal information. Such an assessment does not explain why these elements sensitize disclosure; it simply highlights them amidst the cluttered privacy space. In other words, situating a privacy phenomenon into the space
neither defines the phenomenon nor specifies a solution to its privacy implications, but rather narrows the scope of those implications, thereby focusing discourse and design efforts and facilitating comparative analyses. While this work has an intrinsic emphasis on HCI and ubiquitous computing research, it may be useful to the legal, regulatory, cryptographic, and other communities as well. It applies to a range of privacy phenomena, including technical systems, social practices, and regulatory frameworks. By determining exactly how the phenomenon at hand relates to each dimension of the space, domain experts can isolate the phenomenon’s pertinence to their field, e.g., HCI practitioners and researchers can identify design goals and classify research efforts.

3.1 Other Deconstructions of the Privacy Space This chapter continues the unpacking of privacy begun by Palen and Dourish, who, employing Altman’s notion of privacy as a continuous dialectic social process (Altman 1975), articulated the need for the HCI community to transcend conventional role-based and access-control notions of privacy (Palen and Dourish 2003). They exposed privacy as the ongoing negotiation of multiple boundaries in continuous tension. Our work unpacks privacy from a different angle, articulating key properties of the systems, actors, and information involved in a disclosure. Brunk categorized a considerable number of online privacy tools into a feature space (Brunk 2002). Our articulation of the privacy space applies not only to the functions of tools, but to the conditions of privacy phenomena in general, e.g., protocols, laws, norms, and even specific incidents. It aims for general applicability by teasing out common concepts and relations that determine the implications for privacy in and across phenomena. Jacobs and Abowd articulated a framework for examining ubicomp privacy from legal and normative perspectives (Jacobs and Abowd 2003). Our deconstruction is not based on legal precedent; it is an analytic and discursive tool based on HCI and privacy literature, and on existing systems. Adams and Sasse isolated four interdependent factors of the perception of privacy in multimedia environments: information receiver, sensitivity, and usage, and the context of disclosure (Adams and Sasse 1999). Our work differs by extending the privacy space beyond subjective perception, though we do emphasize the subject’s perspective.

3.2 Dimensions of the Privacy Space When people talk about privacy, often what they are really discussing is not the nebulous whole of privacy, but some set of phenomena classifiable into subspaces of the privacy space. Such subspaces are effectively defined by revaluing the dimensions that constitute the privacy space. Table 3-1 lists the dimensions of the privacy space we have chosen to address, classified into three categories: system properties, actor relations, and information types. System properties are the how of disclosure and disclosure management; they describe crucial aspects of the technical systems involved in the disclosure. Actor relations are the who of disclosure; they categorize the relations between the primary participants in the disclosure. Information types are the what of disclosure; they distinguish between critically distinct types of disclosable information.


Table 3-1. The dimensions of the privacy space

    System Properties:   Feedback and Control; Surveillance vs. Transaction
    Actor Relations:     Interpersonal vs. Organizational; Familiarity
    Information Types:   Persona vs. Activity; Primary vs. Incidental Content

While some of these dimensions are somewhat linear (e.g., familiarity) and others are more categorical (e.g., feedback and control), we have collapsed many of them into two categories (e.g., surveillance vs. transaction) to keep the space manageable. The resulting loss of fidelity is worth the clarity of a simpler articulation of this complex space. Importantly, the categories of a dimension are not exclusive; a phenomenon can involve both poles of a dimension. But by articulating its relation to each pole, one disambiguates its implications with respect to that dimension. In describing each dimension below, we will illustrate some of the ways in which they relate to each other, for, again, they are not orthogonal. Rather than rigidly defining the privacy space, they are a set of flexible, conceptual guides to help identify the approximate subspace pertinent to a given privacy-related phenomenon.

3.2.1 System Properties Privacy phenomena differ in part by the degree to which subjects have feedback about and control over the disclosure process, and by the disclosure’s status as surveillance or transaction.

3.2.1.1 Feedback and Control Different privacy-related systems employ different ratios, degrees, and methods of feedback about and control over the disclosure process. To use a simple example to illustrate different ratios of feedback and control, consider two public spaces, each with a camera observing it and a sign declaring the camera’s operation and purpose. In one space, there is an obvious and accessible switch to disable the camera; in the other there is no switch. Each space provides some reasonable feedback, but only one provides any direct control. The privacy implications of these systems differ fundamentally. Degrees and methods of feedback and control can vary widely and obviously. For example, a red light on the camera in the public space is a different method of, and arguably provides a different degree of, feedback about the camera’s operation and purpose. We are not declaring that more feedback or control is better than less, or that one method of feedback or control is better than another, or that a perfect balance of feedback and control is optimal. Targeted guidelines for designing feedback and control mechanisms, e.g., (Bellotti and Sellen 1993), can help designers answer those questions. We are merely suggesting that
discourse on any given privacy phenomenon should identify the ratio, degrees, and methods of feedback and control at play therein.

3.2.1.2 Surveillance vs. Transaction Related to the participatory nature of feedback and control is the distinction between surveillance (e.g., monitoring by public cameras) and transaction (e.g., credit card purchases). These means of disclosure often differ by the level of participation on the subject’s part, and by the disclosure medium’s amenability to machine-processing. Our distinction between surveillance and transaction aligns approximately with Lessig’s distinction between the monitored and the searched (Lessig 2000) and with Agre’s distinction between surveillance and capture (Agre 1994). Surveillance often implies a disempowerment of the subject, whose ability to intentionally participate in disclosure is hampered and whose behavior is often socially constrained as a result (Foucault 1977). Transactions, however, often imply a sense of intention or agreement on the part of the subject. Examples of disclosure through surveillance include organizationally managed cameras, personal cameras, overseen and overheard behavior, and even less obvious phenomena like weblogs and camblogs, whereby information about or pictures of the subject can be posted beyond the subject’s technical participatory reach. Examples of disclosure through transaction include credit card transactions, radio frequency identification (RFID) tags, and HTML forms. In each case, the subject has some technical ability to alter the disclosure by changing the content or conditions of the transaction. Much of the distinction between surveillance and transaction is intrinsic to contemporary technology’s efficiency at processing the format of the disclosed information. If a commodity PC could parse and alter a surveillance video stream as reliably and quickly as it can an XML-encoded transaction, it could be programmed to blur or remove the representations of specific individuals. Disclosure of presence and identity would become a transaction whose content could be altered by the subject. As computers improve at parsing and classifying multimedia input, the surveilled becomes transactable. Further, conventionally surveilled information is made transactable by altering the means of information acquisition. For example, real-time location used to be conveyed to a remote party verbally, perhaps via telephone, or visually via camera. Now this information can be disclosed through more structured, computationally mutable media involving mobile phones, RFID tags, Wi-Fi access points, and GPS receivers. Some disclosures include elements of both surveillance and transaction. Consider the act of withdrawing cash from an automatic teller machine. A camera in the machine captures the image of the subject using the machine, and the bank maintains a record of the cash withdrawal. By not using the machine, i.e., not participating in the transaction, the subject can choose to limit his participation in the surveillance, i.e., not be captured by the camera.


When assessing privacy phenomena, it is important to distinguish the degrees to which the disclosure occurs through surveillance and through transaction. Instances of the former are often not amenable to interactive participation and may be best addressed through social processes like laws and norms, while instances of the latter may support interactive control of information flows. With respect to ubicomp, it is important to recognize that many forms of disclosure typically considered surveillance may indeed occur through transaction (e.g., location tracking), but that the scope and complexity of ubicomp ensure the continual reemergence of leaks and new forms of informal or unstructured surveillance (e.g., camblogs).
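The surveillance-to-transaction shift described above can be made concrete with a small sketch. Assuming, hypothetically, that a camera feed were already parsed into structured per-person records (no such reliable system exists today), a subject-side opt-out could alter the disclosure before it is stored or forwarded. The field names below are our own illustration, not an existing protocol.

```python
def redact_frame(entities, opted_out):
    """Drop the representations of individuals who have opted out of disclosure."""
    return [e for e in entities if e.get("subject_id") not in opted_out]

# A hypothetical parsed frame: structured, hence transactable and alterable.
frame = [
    {"subject_id": "alice", "shirt_color": "red",  "position": (3, 4)},
    {"subject_id": "bob",   "shirt_color": "blue", "position": (7, 1)},
]
print(redact_frame(frame, opted_out={"alice"}))
# [{'subject_id': 'bob', 'shirt_color': 'blue', 'position': (7, 1)}]
```

Once the disclosure is machine-parseable in this way, the subject regains a technical ability to change its content or conditions, which is precisely what distinguishes transaction from surveillance.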

3.2.2 Actor Relations Privacy phenomena differ in part by the subject’s relation to the observer: personal or organizational, and familiar or unfamiliar.

3.2.2.1 Interpersonal vs. Organizational There is a crucial difference between revealing sensitive information to another person and revealing it to industry or the state. Subjects often find the need to regulate disclosure to both individual and organizational observers, but the consequences of disclosure tend to differ drastically. Intentionally disclosing sensitive information to another person in a social context often serves to strengthen trust. Such disclosure to an organization, however, tends to accompany the provision or maintenance of a service. Unintentionally disclosing sensitive information to another person often generates distrust and can even dissolve important relationships. Such disclosure to an organization, however, generally results in inconveniences like spam. Trust is an important element in both types of relations, but distrusting a software company and distrusting a spouse have fundamentally different implications. It may be worth further dividing organizational disclosure into disclosure to organizations of which one is a member (e.g., an employer) and disclosure to those of which one is not (e.g., a merchant). Subjects often disclose, implicitly or explicitly, preferential, contact, financial, and activity information to both interpersonal and organizational observers, but the format, fidelity, and use of disclosures tend to differ. Interpersonal disclosure often involves rich, dynamic, vocal or textual representations of present and past activities that influence social bonds. Organizational disclosure tends to involve dry, textual representations in databases that affect service provision or membership status. Many phenomena involve both interpersonal and organizational disclosure. For example, personal email travels through organizationally controlled systems that often retain and use information thereabout. Spouses often share bank accounts, with each having access to the other’s transaction records. An employee’s relationship with her boss involves both interpersonal and organizational elements of disclosure. When classifying compound phenomena, it is important to tease apart the aspects that are interpersonal and those that are organizational.


3.2.2.2 Familiarity The implications of a disclosure differ with the degree of familiarity between subject and observer. Familiarity can be bilateral7 or unilateral, i.e., both parties might be familiar with each other, or only one might be familiar with the other. The unilateral case divides further into cases where the subject is unaware of the observer (though not necessarily unaware of the observation), and cases where the potential observer is unaware of the subject, e.g., a retail store that a shopper has neither visited nor purchased anything from, but whose reputation is known to the shopper. Although a linear notion of familiarity is useful, two critical moments are worth noting: (1) when a subject is first known to an observer, and (2) when an observer is first known to a subject. If neither of these moments has occurred, then both subject and observer are unknown to each other; no disclosures have occurred. Interestingly, a subject can be known to an observer indirectly, e.g., through reputation or rumor (interpersonal) or through the personal information market (organizational). It is worth pointing out that familiarity is not a direct indicator of trust. One might be very familiar with a homicidal maniac, but one probably would not trust him very much. Nonetheless, the more familiar a subject is with an observer, the more informed the subject is to evaluate her level of trust in the observer, which will affect her comfort with disclosing a given set of information to him. When assessing privacy phenomena, it is critical to evaluate the degrees of familiarity between potential and actual subjects and observers. When this factor is considered in the light of the other dimensions of the privacy space (e.g., persona vs. activity, as will become obvious shortly), it can help explain the observer’s motivations for collecting information and the subject’s motivations for and comfort with disclosing it (or withholding it, as the case may be).

7 Bilateral familiarity neither implies nor excludes symmetric familiarity.

3.2.3 Information Types Privacy phenomena differ in part by the type of information being disclosed. For instance, disclosing the existence of a persona differs from disclosing information about that persona. Further, sensitive information can have different privacy implications depending on whether it is the primary or incidental content of the disclosure.

3.2.3.1 Persona vs. Activity This dimension concerns the notion of identity. By persona we mean a unique identifier, to which a history of activities might be associated. Examples include true names, login names, email addresses, phone numbers, credit card numbers, frequent shopper numbers, and employee numbers. In this sense, conveying a persona primarily conveys the existence of an identity. Conveying a persona to an observer theretofore unfamiliar with the subject is the moment in which that observer obtains some degree of familiarity with the subject. By activity we mean information about the subject. We connote this with “activity” because (1) arguably, any information about a person results from some activity, often involving the person
directly, and (2) ubicomp promises to convey increasing amounts of information about people by conveying the contexts of their activities (two intrinsically inseparable notions (Agre 2001)). Conveying activities associated with a persona increases the observer’s familiarity with the subject. The fidelity of activity conveyed is roughly proportionate to the number and/or precision of data points conveyed. Take three pieces of context, for example: location, action, and duration. If Alice’s location, action, and duration are conveyed as “Using laptop in café on New Montgomery Street for two hours,” a fairly constrained range of possible activities is disclosed. Lowering the number of data points by, say, disclosing only location (“café on New Montgomery Street”) decreases the degree to which the subject’s activity is conveyed. Maintaining the same data points but lowering their precision (“Currently using a computer in San Francisco”) will also obscure the subject’s activity. Persona and activity both convey identity, but in different ways. Persona conveys the existence of an identity. Activity conveys its essence. Conveying a persona merely signals the existence of an identity fragment. It opens the door to a dark room. But as the activities associated with that persona are disclosed to that observer with greater precision or frequency or duration, a history of activity is increasingly revealed. The room becomes brighter, its furnishings more obvious. Notably, an observer can often accelerate the illumination of a newly encountered persona through auxiliary means (e.g., reputation or the personal information market). Conveying activity independent of an associated persona is effectively anonymity. It is worth noting, however, that anonymity dissolves as the observer develops a consolidated history of activity in association with a specific (decreasingly anonymous) persona. Eventually, a sufficiently informed observer can infer one of the subject’s established personae from external sources (e.g., the personal information market), or else can assign an arbitrary persona that, for all practical purposes, can become an established persona for the subject (e.g., as a public bus driver grows familiar with one of his regular (anonymous) passengers, his maintained conception of her identity can have practical effects for her; perhaps he holds the bus a few minutes when she is occasionally late to arrive at the bus stop). An important thing to note is the inverse relationships that persona and activity disclosure have with the observer’s familiarity with the subject. When the subject is unfamiliar to the observer, his activity is generally less sensitive, because of anonymity, yet his personae are more sensitive, because disclosing a persona immediately eliminates anonymity to some degree. On the other hand, when the subject is known to the observer, his personae are less sensitive, since the observer already knows some subset of his personae8, but his activities become more sensitive, since disclosing activities out of character with his known persona can reveal hidden personae and collapse the boundaries between his fragmented identities (Goffman 1956; Phillips 2002).
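As a rough illustration of the fidelity argument above, the sketch below (our own, with an assumed location hierarchy) shows how disclosing fewer data points, or the same data points at lower precision, obscures the activity conveyed.

```python
LOCATION_LEVELS = [
    "café on New Montgomery Street, San Francisco",   # full precision
    "San Francisco",
    "California",
    "undisclosed",
]

def disclose(location_level=0, include_action=True, include_duration=True):
    """Build a disclosure; fidelity drops by coarsening values or omitting data points."""
    disclosure = {"location": LOCATION_LEVELS[min(location_level, len(LOCATION_LEVELS) - 1)]}
    if include_action:
        disclosure["action"] = "using laptop"
    if include_duration:
        disclosure["duration"] = "two hours"
    return disclosure

print(disclose(0))                                                # full fidelity
print(disclose(1, include_action=False, include_duration=False))  # fewer, coarser data points
```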

8 And accidental disclosure of a hidden persona to a familiar observer can often be smoothed over with social dexterity, e.g., “That email address? I only use that to absorb spam.”

3.2.3.2 Primary vs. Incidental Content Here we distinguish between whether the sensitive information is the primary content or an incidental byproduct of the disclosure. We generally consider this distinction from the subject’s
perspective. The consequences of disclosure may be comparable in either case, but disambiguating the origin of the sensitive information can determine how intentional the disclosure is, thereby sharpening analysis of the phenomenon and clarifying design requirements. Perhaps the most infamous class of sensitive incidental content is transaction-generated information (TGI). TGI is meaningful information generated in the course of a service transaction (Samarajiva 1997). It is typically used by service providers to personalize service provision, generate aggregate reports, and create user profiles that can be sold for profit. TGI emerges, for instance, from every credit card purchase, every phone call, and every email transmission. Every disclosure contains both primary and incidental content, and either or both can be sensitive. For example, whispering a secret to a friend conveys the secret (primary), but also conveys, perhaps disagreeably, to a nearby observer that you are sharing secrets with the recipient (incidental). It is particularly useful to distinguish between primary and incidental content when discussing context-aware systems. Context-aware services typically exploit incidentally generated information to personalize service provision; the disclosure of context to these systems is often incidental to the subject’s primary actions. However, context can be repositioned as the primary content of activity disclosure to personal contacts (e.g., the AT&T Find Friends service, which converts incidental, organizationally collected mobile phone location information into the primary content of interpersonal disclosures). An important distinction in ubicomp lies between systems that automatically pull personal information from the user, and those that only use information pushed out by the user. This notion goes to the heart of intentionality of disclosure and aligns roughly with our distinction between primary (cf. push) and incidental (cf. pull) content.

3.2.4 Connecting the Dimensions Having articulated the dimensions, it is worth examining how they relate to each other. In the interests of space, we will focus on the relation of primary vs. incidental content to three other dimensions. With respect to persona vs. activity disclosure, personae and activity can both be disclosed as primary or incidental content. For example, if the primary content of a disclosure is “someone has been motionless in Alice’s office for three hours,” then both Alice’s work persona and a limited range of activity have been disclosed with a high probability. Incidental disclosure of persona and activity is exemplified by the credit card purchase, which, in addition to disclosing the primary content (money) to the merchant, incidentally discloses a persona (i.e., a name and credit card number) to the merchant and activity (i.e., a particular purchase) to the merchant and the bank. The implications of surveillance vs. transaction on primary and incidental disclosure are nuanced. One way to conceptualize them is as follows. From the subject’s perspective, disclosure through surveillance is primarily incidental; being surveilled is not the subject’s primary concern. But from the observer’s perspective, surveillance does not distinguish content; it is all
primary. Transactions, however, as exemplified by TGI, disclose both primary and incidental content, and each is reasonably distinguishable by both subject and observer. With respect to interpersonal vs. organizational disclosure, each can occur through both primary and incidental disclosure, but organizations, through TGI, seem to make significant use of incidental disclosures. Notably, incidental organizational disclosure is often a byproduct of a primary interpersonal disclosure, e.g., sending an email to a friend discloses part of your social network to your ISP. Conversely, incidental interpersonal disclosure can be a byproduct of primary organizational disclosure, e.g., a spouse might view the subject’s purchasing activity as recorded in a shared financial account.

3.3 Classifying Existing Privacy Phenomena So far we have presented a conceptual argument, interspersed with some useful examples. To further concretize the discussion, Table 3-2 classifies some existing privacy-related phenomena into the privacy space. We hope this helps illustrate the origins of differences between the privacy implications of, say, P3P and camera surveillance. In keeping our descriptions general, both categories of a given dimension will often apply to a phenomenon (e.g., camera surveillance can disclose both persona and activity), but for any specific instance of a phenomenon, analysis can clarify the role each category plays in the disclosure.

Table 3-2. Existing privacy phenomena classified into the privacy space

P3P
    Feedback and Control:             At present, emphasis on feedback
    Surveillance vs. Transaction:     Transaction
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both, with notable emphasis on TGI

Fair Information Practices
    Feedback and Control:             Both
    Surveillance vs. Transaction:     Emphasis on transaction
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both, with notable emphasis on TGI

Online Chat Rooms
    Feedback and Control:             Feedback (e.g., list of other personae in room, displayed history of content) and Control (e.g., recourse to moderators)
    Surveillance vs. Transaction:     Both
    Interpersonal vs. Organizational: Interpersonal
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both (primary to intended observers, incidental to surveillers)

Automated Location Disclosure to Boss
    Feedback and Control:             Depends on implementation; arguably both are limited
    Surveillance vs. Transaction:     Likely surveillance, but disclosures are probably generated transactionally
    Interpersonal vs. Organizational: Both
    Familiarity:                      Bilateral
    Persona vs. Activity:             Activity
    Primary vs. Incidental:           Depends on implementation, though likely incidental for subject, primary for boss

Window Shades Facing Neighbors
    Feedback and Control:             Control
    Surveillance vs. Transaction:     Surveillance
    Interpersonal vs. Organizational: Interpersonal
    Familiarity:                      Relatively bilateral
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Likely incidental

Camera Surveillance
    Feedback and Control:             Possibly some feedback
    Surveillance vs. Transaction:     Surveillance
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Both; typically an anonymous persona
    Primary vs. Incidental:           Incidental for subject, primary for observer
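As a rough illustration of how such an assessment might be recorded, the sketch below (our own illustration, not part of the report's framework) encodes the camera-surveillance row of Table 3-2 as a simple data structure; the field names and default values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PrivacySpaceAssessment:
    phenomenon: str
    # System properties: the "how" of disclosure
    feedback: str = "none"
    control: str = "none"
    surveillance: bool = False
    transaction: bool = False
    # Actor relations: the "who" of disclosure
    interpersonal: bool = False
    organizational: bool = False
    familiarity: str = "variable"
    # Information types: the "what" of disclosure
    persona: bool = False
    activity: bool = False
    primary_content: bool = False
    incidental_content: bool = False

# Roughly the "Camera Surveillance" row of Table 3-2.
camera = PrivacySpaceAssessment(
    phenomenon="Camera surveillance",
    feedback="possibly some (e.g., posted signage)",
    control="none",
    surveillance=True,
    organizational=True,
    persona=True,               # typically an anonymous persona
    activity=True,
    primary_content=True,       # primary from the observer's perspective
    incidental_content=True,    # incidental from the subject's perspective
)
print(camera)
```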


3.4 Classifying Existing Privacy-affecting Systems Table 3-3 assesses the privacy-affecting systems from an earlier chapter against the dimensions of the privacy space. None of these systems addresses the entire privacy space; each was designed for a specific subspace and domain. pawS comes close to addressing the whole space, but it focuses on organizational disclosures, not interpersonal. Though designed to be a general privacy manager for ubiquitous computing environments, it comprises the service and database layers of such a system, not the interface layer, and it has not been deployed or evaluated to any degree of significance. The work we describe in the following chapters is an investigation of metaphors and interfaces for ubicomp privacy management, which could in principle serve as the interface layer to a system like pawS.

Table 3-3. Privacy-affecting systems covered in a previous chapter, assessed against the dimensions of the privacy space.

Microsoft® IE 6
    Feedback and Control:             Both
    Surveillance vs. Transaction:     Transaction
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both, with emphasis on TGI

AT&T Privacy Bird
    Feedback and Control:             Feedback
    Surveillance vs. Transaction:     Transaction
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both, with emphasis on TGI

SecureID
    Feedback and Control:             Both
    Surveillance vs. Transaction:     Transaction
    Interpersonal vs. Organizational: Interpersonal
    Familiarity:                      Shared contexts
    Persona vs. Activity:             Activity
    Primary vs. Incidental:           Primary

Desktop Identity-Manager
    Feedback and Control:             Control
    Surveillance vs. Transaction:     Transaction
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Persona
    Primary vs. Incidental:           Primary

AT&T Find Friends
    Feedback and Control:             Both
    Surveillance vs. Transaction:     Transaction
    Interpersonal vs. Organizational: Interpersonal
    Familiarity:                      Bilateral
    Persona vs. Activity:             Activity
    Primary vs. Incidental:           Primary

pawS
    Feedback and Control:             Both
    Surveillance vs. Transaction:     Both
    Interpersonal vs. Organizational: Organizational
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both

Privacy Mirrors
    Feedback and Control:             Feedback
    Surveillance vs. Transaction:     Both
    Interpersonal vs. Organizational: Both
    Familiarity:                      Variable
    Persona vs. Activity:             Both
    Primary vs. Incidental:           Both

Ubicomp Home Media Space
    Feedback and Control:             Both
    Surveillance vs. Transaction:     Surveillance
    Interpersonal vs. Organizational: Interpersonal in organizational context
    Familiarity:                      Bilateral
    Persona vs. Activity:             Activity
    Primary vs. Incidental:           Both

3.5 Summary Privacy resists definition, and attempts at definition inevitably lead to contention and confusion. This is because privacy is neither a descriptive attribute of nor an operational function of a phenomenon. It is a value process whose properties and operation vary with the natures and relations of the systems, actors, and information at play. In this chapter we have begun to deconstruct the privacy space. We have presented a set of dimensions that highlight the conditions that create the privacy implications of any given privacy-related phenomenon. Categorizing a phenomenon into this space can expose the qualities that shape privacy’s role therein. We believe this can help stabilize discourse on privacy and focus the design of privacy-related systems. In this and the previous chapter, we articulated the complex notion of privacy by examining related research and systems and by deconstructing the privacy space with an eye towards HCI and ubiquitous computing. This articulation lays a foundation for our work on designing a privacy user interface for ubiquitous computing, which we present in the following chapters.


PART TWO

INTERACTION SUPPORT FOR PRIVACY IN UBIQUITOUS COMPUTING

[F]irst-order approximations essentially try to find workarounds for the socio-technical gap, to edge around it in ways that are not extremely odious and to do so with known effects.

--Mark Ackerman, 2000


4 Formative Inquiries This chapter covers a series of formative surveys, analyses, and experiments we conducted in the early stages of our project to design a user interface for managing personal privacy in ubicomp based on the findings of the previous chapters. Their purpose was compound: to mature our own understanding of privacy, to gather evidence about individuals’ conceptions of privacy in ubiquitous computing environments, and, consequently, to inform the design of our interaction framework for managing personal privacy in ubiquitous computing. At this early stage, we were relatively naïve about the sprawling complexity and scope of privacy. In our innocence, we effectively intended to design a prototypical user interface for ubicomp privacy that would address the entire privacy space. It would provide feedback and control over disclosure, apply to surveillance and transaction, apply to organizational and interpersonal observations regardless of familiarity, protect both persona and activity, and address both primary and incidental content. We had not deconstructed the privacy space at this point, so we did not conceive of these particular factors with these labels, but in retrospect it is clear that we intended to address them all with a single interface. The inquiries reported in this chapter and the design and evaluation reported later represent these ambitious efforts. At the end of these chapters, it will become clear that such a singular interface is impractical. However, we would not have known this had we not attempted to build it. Following these chapters, we offer a revised proposal for a user interface for managing a particular subspace of the privacy space, with respect to ubicomp, which we feel is considerably more amenable to interaction support. But it is worth recounting our more ambitious efforts first. Below, we will first present an attempt to crystallize the insights of some of the authors discussed in the previous chapter into a simple conceptual model of everyday privacy in ubiquitous computing. We then discuss the design and results of an experiment to determine the salient factors in an individual’s assessment of privacy in ubiquitous computing scenarios. Finally we present the design and results of an experiment to determine the relative importance, from the individual’s perspective, of two of these factors, which figure prominently in the design we present later.

4.1 Everyday Privacy in Ubiquitous Computing In the initial stage of this project, we surveyed research and other literature found at the intersection of privacy, ubiquitous computing, and human factors. Much of this effort is reported directly in the background chapter of this report, but is also implicit in the experimental, conceptual, and interaction designs in this and other chapters. The initial conceptual model to arise from the literature survey, a notion we call everyday privacy, was a synthesis of the insights of Langheinrich (Langheinrich 2001), Bellotti and Sellen (Bellotti and Sellen 1993), Lessig (Lessig 1998), and Adams (Adams 2000). The everyday privacy model attempts to isolate the concepts that will shape individuals’ regular exposure to and control over personal information disclosure in ubiquitous computing (Lederer, Dey et al. 2002).


Central to everyday privacy is the understanding that, among the design principles Langheinrich adapted from the fair information practices, notice and consent are of the greatest everyday utility to users concerned with the ongoing collection and use of personal information. While other fair information practices (e.g., access, redress, security) are critical components of ethical information collection, and Langheinrich’s other design principles (e.g., proximity and locality, anonymity and pseudonymity) may prove functionally useful, notice and consent are by definition the elements by which an individual gains feedback from and exhibits control over the privacy-sensitive aspects of a system. In the general case, feedback and control are fundamental concepts behind any user interface, the recurrent interaction with which engenders the user’s conceptual model of the system. By maintaining awareness of the state of a system (feedback) and altering that state as necessary (control), both through the system’s user interface, a user maintains in her head an approximate model of the evolving state of the system, which she can use to optimize her use and predict future states (Norman 1988). Extending this notion to the meta-system of personal information collection and use, we see that notice and consent are effectively the feedback and control of privacy. If mechanisms of notice and consent are provided in a consistent, elegant fashion, individuals would arguably maintain approximate awareness and reasonable control over the collection and use of their personal information (Bellotti and Sellen 1993). By facilitating in the individual the robust understanding of information disclosure necessary to make informed use of the other fair information practices and Langheinrich’s other design principles, notice and consent are the fair information practices of greatest everyday utility to individuals. This is not groundbreaking news. Bellotti and Sellen understood and promoted the importance of feedback and control in ubiquitous computing back in the early nineties (Bellotti and Sellen 1993). And indeed the notions of notice and consent have been around at least since the late sixties and early seventies when Alan Westin influenced the passing of the Fair Credit Reporting Act and the Department of Health, Education, and Welfare’s codification of the fair information practices (Garfinkel 2000). The question now is, how do we optimize feedback and control in ubiquitous computing to avoid overwhelming users with an endless flood of disclosure alerts in need of approval? Exactly what do we provide notice of and control over? Bellotti and Sellen suggested that designers provide feedback and control, to whatever degree is reasonable, over what information is collected, how it is transformed within the system, who it is disclosed to, and how it is used (Bellotti and Sellen 1993). But this is a considerable amount of feedback and control, enough to clutter a user interface, and some of it is unfeasible (e.g., control over how an observer uses information after disclosure), as Bellotti and Sellen admit. So the question becomes, can we optimize feedback and control guidelines beyond what Bellotti and Sellen suggested?


The everyday privacy model attempts to answer this with a synthesis of Lessig’s societal-scale model of privacy (Lessig 1998) and Adams’s individual-scale model of privacy (Adams 2000). Lessig holds that the shape of privacy is contingent on legal, market, normative, and technological forces. In conjunction with smaller-scale contextual factors like location, time, social role, and nearby people, these forces constrain the possible degrees of privacy in a given situation. Within this constrained space, Adams’s model emphasizes that the individual’s perception of privacy is informed by his perception of who is receiving disclosed information, what it is to be used for, and its personal sensitivity. The everyday privacy model suggests that, in a given situation, notice be provided about any of these factors (i.e., laws, market factors, norms, technologies, current context, information receiver, information usage, information sensitivity) not already implicit in the situation. As acculturated social actors, individuals tend to know quite a bit about many of these factors in any given situation (e.g., laws, market factors, norms, context, information sensitivity), so comprehensive feedback would be overwhelming and unnecessary. Designing a privacy-sensitive ubiquitous computing system according to the everyday privacy model would include the challenge of isolating those factors likely to be non-obvious, providing feedback mechanisms to advertise them, and, to whatever degree possible, providing control mechanisms to alter them. An additional benefit of notice is that, as surreptitious monitoring increases in the coming years, notice can help mitigate the disparity Adams uncovered between an individual’s perception of information collection and use in a given situation and the actual collection and use of information in that situation (Adams and Sasse 1999). In the next two sections, we present two studies we conducted to refine and fortify the everyday privacy framework.
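Before turning to those studies, a minimal sketch may help make the notice-selection idea concrete. Assuming a simple set-membership test for which factors are already implicit in a situation (our own simplification, not a mechanism proposed in the report), a system following the everyday privacy model would surface only the remaining factors:

```python
EVERYDAY_FACTORS = [
    "laws", "market forces", "norms", "technologies",
    "current context", "information receiver",
    "information usage", "information sensitivity",
]

def factors_needing_notice(implicit_in_situation):
    """Return the factors the system should surface as explicit feedback."""
    return [f for f in EVERYDAY_FACTORS if f not in implicit_in_situation]

# An acculturated visitor to a sensed café might already grasp the laws, norms,
# market forces, context, and sensitivity at play, but not the infrastructure's
# receivers and uses of collected information.
implicit = {"laws", "market forces", "norms", "current context", "information sensitivity"}
print(factors_needing_notice(implicit))
# ['technologies', 'information receiver', 'information usage']
```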

4.2 Subjective Factors of Privacy in Ubiquitous Computing In search of evidence in support of the everyday privacy model and to identify operational concepts for the interaction framework described later in this report, we conducted a series of scenario-driven interviews with participants from the Berkeley community to investigate the subjective importance of various factors in an individual’s assessment of privacy in ubiquitous computing. We also sought to elicit individuals’ privacy preferences and concerns across various ubiquitous computing contexts. As discussed previously, Adams has shown that, in the context of a given audio/video-captured environment, the most important factors in determining an individual’s perception of privacy are the perceived identity of the information recipient, the perceived usage of the information, the subjective sensitivity of the information, and the context in which the captures were made (Adams 2000). We uncovered further evidence of the importance of these factors in our study involving twelve people solicited from the general public in Berkeley, each of whom completed a questionnaire asking them to rate the importance of thirteen factors in determining the
appropriate level of personal privacy in a given situation. Figure 4-1 illustrates the results’ general alignment with Adams’s findings. Recipient, usage, sensitivity, and situation are among the most important factors. The next most important factor is data content, or the information being disclosed.

These are the strongest and most pertinent results to come out of this experiment. We discuss the study design and further results in the following sections.

4.2.1 Scenario-driven Experimentation in Ubiquitous Computing To make our experiment tractable, we adopted a scenario-driven approach to the evaluation of ubiquitous computing environments (Abowd and Mynatt 2000). Controlled experiments in sensor-rich environments are virtually impossible for most research groups to conduct. While ubiquitous computing technologies continue to emerge on the market, such as RFID tags, public video cameras, short- and long-range wireless networks, and automatic discovery and configuration services, they remain largely uncoordinated, independently managed resources. The sorts of robustly coordinated, pervasively sensed, adaptive, interactive environments envisioned by ubiquitous computing researchers have yet to fully emerge from our imaginations into our labs, let alone into production environments. Hence the design and implementation of realistic experiments in such environments, and evaluations of such environments themselves, are daunting challenges (Consolvo, Arnstein et al. 2002). Abowd and Mynatt’s scenario-based approach to ubiquitous computing evaluation provides a reasonable way around these practical obstacles for the time being. For the purposes of this experiment, our interpretation of the scenario-driven technique consists of:

1. a set of hypothetical scenarios resembling expected uses of future computing environments,

2. the convincing, immersive presentation of these scenarios to test subjects, and,

3. interviews with test subjects based on their experiences with these hypothetical scenarios.

Figure 4-1. Mean rating of the subjective importance of thirteen factors in determining preferred privacy in a given situation (1=Not important, 7=Extremely important).

Advantages of the scenario-based approach include low expense and the ability to conduct experiments involving systems and environments that do not yet exist. Disadvantages include a significant deficit in realism and an accordant risk of invalid results. This is because, firstly, the scenarios can convey mistaken predictions of future systems and environments, and secondly, subjects’ behaviors in practice can differ from their self-reported predictions in the hypothetical scenarios. Indeed, discordance between self-reported and actual behavior is a common phenomenon, in the realm of personal information disclosure (Kolko, McQuivey et al. 2002) and in general (Thomas, Kellogg et al. 2001). Nonetheless, until large-scale deployments of ubiquitous computing systems are a reality, our choices are severely limited. The scenario-based approach remains a cheap and, for now, reasonably effective technique for experiments involving ubiquitous computing environments.

4.2.2 Method We conducted twelve trials of the experiment (six female, six male subjects). Each session lasted approximately ninety minutes. We recruited ten subjects from the general population in and around Berkeley through a public announcement website. We recruited the other two subjects—UC Berkeley graduate students—through informal word-of-mouth. We introduced each subject to the concept of ubiquitous computing through a verbal description followed by a series of brief professionally-produced videos illustrating fictional ubiquitous computing situations. We employed these videos, supplied courtesy of HP Labs and originally produced to demonstrate usage scenarios of HP’s CoolTown project (Kindberg, Barton et al. 2000), to provide an immersive, persuasive illustration of the rich, situated interactivity inherent in the promise of ubiquitous computing. Fresh from this immersion, we then asked subjects to imagine themselves in a series of verbally-described, hypothetical ubiquitous computing scenarios. We developed three scenarios reflecting different social practices ingrained in everyday life and involving common environments likely to grow increasingly sensor-rich over time. The scenarios took place in a smart office, a smart shopping district, and an informal cocktail party in a smart home. To inspire a sense of immersion, we projected photographs representative of each scenario onto the wall at appropriate times. After the description of each scenario, we asked subjects a series of questions about their concerns and preferences regarding privacy in that scenario. Subjects participated in one, two, or three scenarios, depending on time constraints. Most completed two scenarios.


Subjects then completed an extensive questionnaire about their opinions on privacy in online, offline, and ubiquitous computing situations. Appendix B contains the descriptive passages and questions presented to the subjects in this experiment.

4.2.3 Results Our interviews generated some interesting results. Before summarizing them, however, it is important to note their limits. First, our sample size was limited. Not only is a sample size of twelve insufficient to draw conclusions pertinent to society at large, but all subjects were drawn from a politically polarized and intellectually rich cultural region that does not represent the general American population, let alone the world, by any stretch of the imagination. Further, we did not strive for statistical significance in our analyses. Rather, we sought from our subjects insights and commonalities that might help guide further inquiries and prototype designs. With that said, we found the following results, some of which we operationalized in the interaction framework we report in later chapters, and some of which indicate potential directions for future research.

4.2.3.1 Factors of Privacy As previously mentioned, information recipient, information usage, information sensitivity, and situation are among the most important factors. The prominence of these factors aligns with Anne Adams’s findings.

4.2.3.2 Relative Importance of Notice and Consent Subjects felt that notification of information recipient and usage is more important than the option to consent to or be anonymous in audio-video records of social situations.

4.2.3.3 Means of Notice 92% of subjects preferred to be notified via posted signs with symbols and labels, not unlike highway signs. 33% wanted to be notified through wireless communications to their PDAs or wristwatches.

4.2.3.4 Timing of Consent Regarding the ability to consent to personal information sensing, 17% of subjects preferred to consent (or not, as the case may be) at the time of each disclosure event. 50% preferred to consent to all possible disclosure events once in any given situation. 8% would rather grant conditional consent automatically through preconfigured preferences.

4.2.3.5 Configuring Stored Consent Preferences If consenting through stored preferences, 50% of subjects would prefer to configure the preferences explicitly, while 50% prefer to train the system gradually over time.


4.2.3.6 Aggregating Preferences In situations where the system needs to account for multiple users' preferences in a single situation, 40% of subjects prefer the system employ the most restrictive, or privacy-preserving, preferences among the group of users. 20% prefer the system employ the most popular preference values. 10% prefer the system average the preferences across the group. 30% did not care how the negotiation should be performed.
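A minimal sketch of these three aggregation strategies, assuming numeric per-user disclosure preferences (higher values meaning more disclosure), which the study itself did not specify:

```python
from collections import Counter
from statistics import mean

def aggregate(preferences, strategy):
    """Combine several users' numeric disclosure preferences for a shared situation."""
    if strategy == "most_restrictive":
        return min(preferences)
    if strategy == "most_popular":
        return Counter(preferences).most_common(1)[0][0]
    if strategy == "average":
        return mean(preferences)
    raise ValueError(f"unknown strategy: {strategy}")

prefs = [1, 3, 3, 5]                         # four co-present users
print(aggregate(prefs, "most_restrictive"))  # 1
print(aggregate(prefs, "most_popular"))      # 3
print(aggregate(prefs, "average"))           # 3
```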

4.2.3.7 Categorizing Situations A preliminary factor analysis revealed that subjects distinguished between three loosely partitioned classes of situation that affect privacy assessments differently. We have labeled these classes familiar, public, and sensitive.

Familiar Familiar situations include informal and social situations, and those involving family, friends, and acquaintances.

Public Public situations include shopping and online situations and those involving strangers or unfamiliar environments.

Sensitive Sensitive situations include professional situations and those encountered while in a foreign country.

4.2.3.8 Face Metaphor 100% of subjects agreed that people wear different "faces" in different situations and maintain a different level of privacy for each face. Of these, 83% agreed strongly or better with that position, and 42% agreed completely. The face metaphor is an interpretation of the work of social psychologist Erving Goffman, who used a theatrical metaphor to explore the “fronts” people employ when playing different social roles (Goffman 1956). We will revisit this notion in subsequent chapters of this report.

4.3 Relative Importance of Inquirer and Situation With the knowledge from the analyses and interviews described above, we began to arrive at an initial design for an interaction framework for managing personal privacy in ubiquitous computing. Building on Goffman, who held that a person presents himself to a given audience in such a way as to maintain a consistent identity with respect to that audience, the crux of our design is:

Individuals prefer different levels of privacy depending on who receives the information and the context in which the information is disclosed.

Take this example. In the public library, one might be comfortable disclosing one’s name and interests to the librarian upon checking out a book, but one might be uncomfortable disclosing the same information to the stranger staring at one from a nearby seat. On the
other hand, one might be comfortable disclosing this information to an attractive stranger in a nightclub, but not to the club’s proprietor.9 To reiterate the general case, the information one is comfortable disclosing depends in large part on whom one is disclosing it to and the context in which one discloses it. This raises the following question. Which of these two factors—the information receiver or the context of disclosure, i.e., the who or the when—is the stronger determinant of one’s preferred level of disclosure in a ubiquitous computing environment? We conducted a questionnaire-based study to answer this question. The results indicate that the information receiver is the stronger determinant, except when the receiver is the individual’s employer, in which case the importance of the context of disclosure increases significantly. These results had a key influence on the design of the framework presented later in this report. We describe the experiment and results in greater detail in the remainder of this section.

4.3.1 Operationalizing the Variables Tractable study design requires operationalizing the key variables under investigation. In this case, there were three such variables: information receiver, context of disclosure, and preferred level of privacy. The study’s goal was to determine the relative effects of the first two variables (receiver and context) on the last (privacy preferences). The operational definitions were:

Inquirer This is the party inquiring for the subject’s personal information. This word implies an active request for information rather than the passive receipt implied by “information receiver.” This resulted from a value-based design decision holding that no sensitive information be disclosed without a direct request.

Situation This is the set of current contextual information, which might include but is not limited to location, time, activity, and nearby people.

Precision Preferences These regulate the precision of disclosed information. On the restrictive extreme, zero precision would mean no information is disclosed. On the liberal extreme, full precision would mean the information is disclosed at the maximum reasonable technical precision. For example, imagine one’s precise location were represented as “2nd floor of the San Francisco Public Library, Mission Branch, 24th Street between Mission and Valencia.” A less precise version of one’s location might be “San Francisco.” With even less precision, one’s location might be “California.” And so on. Adjustable precision can allow for nuanced disclosure approximating that of traditional social life. By disclosing one’s location as “San Francisco” but not “2nd floor of the San Francisco Public Library…,” one reveals just enough information to satisfy certain inquirers without breaching one’s desired level of privacy.

9 Common patterns of disclosure and privacy such as these are at the heart of Palen and Dourish’s notion of genres of disclosure, a design pattern-like approach to designing privacy-sensitive systems (Palen, L. and P. Dourish (2003). Unpacking “privacy” for a networked world. Proceedings of the conference on Human factors in computing systems, Fort Lauderdale, FL, ACM Press.).


The relation between these concepts is that the inquirer’s identity and the subject’s situation at the time of inquiry together determine the subject’s preferred precision of disclosed information. The experiment aimed to determine the relative importance of inquirer and situation in determining subjects’ privacy preferences in a ubiquitous computing environment. These three abstractions map partially to Adams’s findings, which, as discussed above, show that perception of privacy in an audio/video-captured environment is shaped by the interrelation of (1) the perceived identity of the information receiver (comparable to our inquirer), (2) the perceived usage of the information, (3) the subjective sensitivity of the disclosed information (made adjustable by our precision preferences), and (4) the context in which the information is disclosed (comparable to our situation). Information usage was not directly addressed in this study in order to keep the experimental design simple. Especially with respect to the inquirers used in the study, expected usage is often implicit in the identity of the inquirer. In the study, precision preferences were represented by metaphorical “faces.” That is, subjects could moderate the precision of disclosed information by choosing a face to “wear.” This design decision was a direct result of evidence from the previous experiment that supported Goffman’s insight that an individual actively yet intuitively monitors and adjusts his behavior in the presence of others in an attempt to control their conceptions of his identity. The notion of fragmented identity pervades user interfaces in the forms of pseudonyms and profiles and has been used in research on privacy-oriented user interfaces on the desktop, e.g., (boyd 2002). We chose the metaphorical “face” as a more colloquial term than “front.”

4.3.2 Method The experiment was implemented as a scenario-based web questionnaire (Appendix C). The website asked each subject to imagine she had a cell phone containing her name (true name and a set of pseudonyms) and profile (primary and secondary email addresses, occupation, and interests) and capable of automatically determining her location and activity. Interested parties could collect some or all of this information in real-time through various services (e.g., a remote friend could determine the subject’s location through a website, or a nearby merchant could directly query the phone for the subject’s contact information and interests). The phone contains a set of three “faces,” each of which specifies the precision of the personal information an inquirer can collect about the subject while she wears that face (Table 4-1).


Subjects were asked to assign a face to each of the eight personal information disclosure events representing the cross-product of two situations:

Working Lunch Downtown with a colleague

Social Evening Live music club with two friends

and four inquirers:

Spouse/Significant Other Remotely located

Employer Remotely located

Stranger Proximately located

Merchant Proximately located

For each disclosure event, subjects chose one of the 3 available faces to blur the information disclosed to that inquirer in that situation, or they created a custom face by specifying precision levels for each category of personal information (i.e., identity, profile, activity, location).

4.3.3 Results The questionnaire was posted to community websites across the U.S. and to engineering students at UC Berkeley, resulting in a sample size of 130. Note these results are self-reported and based on imaginary scenarios. The primary concern was not which face a subject chose for a given event, but rather which factor had a greater influence over that choice. We wanted to know whether users would be more likely to (1) assign a given face to handle a given inquirer in all situations, or (2) assign a given face to handle all inquirers in a given situation. Results indicated that the inquirer’s identity is a stronger determinant of privacy preferences than is the user’s situation. The mean number of different faces used across the four inquirers in the working lunch situation was 2.72 (SD: 0.84); the mean in the social evening situation was 2.58 (SD: 0.89). This shows that within a given situation, subjects did vary faces across inquirers. In contrast, for a given inquirer, subjects generally did not vary faces across situations.

Table 4-1. For each disclosure event, subjects could assign any of these three faces or create a custom face.

                     Personal Information Precision
Face     Identity     Profile                                 Activity      Location
True     Actual       Primary email, Occupation, Interests    Actual        Actual
Vague    Pseudonym    Secondary email, Interests              Vague         Vague
Blank    Anonymous    Undisclosed                             Undisclosed   Undisclosed


Figure 4-2 shows that when the inquirer is a significant other, stranger, or merchant, the situation (or, at least, the two situations covered in the study) is a weak determinant of the choice of face. One subject wrote, “The recipient is more important than the context, because the information will likely outlive the circumstances.” Another wrote, “For me, ‘who’ is all that matters. If I don't trust the person with personal information, I wouldn't want to give them any information at any time. If I do trust the person, I'm willing to give out information freely.” When the inquirer is the subject’s employer, situation becomes a stronger determinant of face. 45.4% of subjects assigned a different face to employers in different situations, more than twice as many as for any other inquirer. One subject wrote, “[D]uring the work day, or after-hours during crunch time, I'd want my boss/coworkers to find [me] - after hours I'd rather be more anonymous.”

4.3.4 Summary Study results showed that (1) identity of the information inquirer is a stronger determinant of privacy preferences than is the situation in which the information is collected, and (2) situation is nonetheless an important determinant, especially when the inquirer is the user’s employer. These results imply that designers of privacy user interfaces for ubiquitous computing should strongly consider emphasizing the inquirer as the primary index and the situation as a secondary index into the user’s privacy preferences.

4.4 Summary In this chapter we reported a series of analyses and studies that constituted the requirements gathering phase of this project. With the results of this phase in hand, we proceeded to design an interaction framework and user interface prototype for managing personal privacy in ubiquitous computing environments. The framework emphasizes the inquirer as the primary index and the situation as the secondary index into the user’s privacy preferences. The next chapter describes the design and evaluation of this framework and prototype.

Figure 4-2. Number of subjects who assigned the same face or different faces to inquirers in two situations.

                Same face   Different face
Spouse/S.O.     109         21
Employer        71          59
Stranger        101         29
Merchant        112         18


As previously mentioned, in retrospect we will see that this particular interface was intended to address the whole of the privacy space with respect to ubiquitous computing. The evaluation will show that this attempt was a bit too ambitious. In a subsequent chapter, we will suggest a refined ubicomp privacy interaction framework that addresses a specific subspace of the privacy space, which we believe is considerably more amenable to interaction support than is the space as a whole.


5 Faces: A Preliminary Interaction Framework In this chapter we describe Faces, an interaction framework designed to support personal privacy management in ubiquitous computing by empowering end-users to adjust the precision of disclosed information. This work is based on the findings of the literature survey and experiments reported in the previous chapters. We also describe a prototype user interface we built to instantiate and evaluate the Faces framework. The Faces interaction framework relies on three key strategies: encapsulation, a priori configuration, and manual configuration. Below we provide detailed descriptions of these concepts, the interaction framework, and the prototypical user interface built to instantiate the framework. Then we present the design, results, and discussion of a formative evaluation of the framework. The results will show that the Faces framework is too unwieldy for facilitating everyday privacy management, but they also indicate directions for a more refined framework, which we present in the next chapter.

5.1 Introduction Ubiquitous computing implies automated disclosure of personal information. For example, disclosure of identity is required to personalize a service. Disclosure of location is a byproduct of the situated disclosure of identity. Disclosure of activity provides input to self-reconfiguring intelligent services. Indeed, ubiquitous computing infrastructures point toward the emergence of a global heterogeneous real-time database composed of people, places, and things instead of records, tables, and fields, whereby any party with proper permissions can access one’s personal information in real-time. As ubiquitous computing services start truly becoming ubiquitous, people will engage them throughout much of their lives, disclosing a stream of personal information accessible by parties near and far. This implies a crucial need for end-user control over information disclosure. We take as fundamental that people should be legally, technically, and reasonably empowered to decide when and what to disclose to whom. The framework described herein uses three techniques to empower users to actualize their privacy preferences:

Encapsulation The large set of inquiries for information and the variety of contexts in which such inquiries occur can lead to unmanageable complexity for end users. We seek to identify the minimal set of user-level concepts that together encapsulate the multivariate nature of personal information disclosure into a usable and effective privacy management framework.

A priori configuration It seems unreasonable to expect users to be comfortable consenting (or not) to each and every inquiry in real-time, as the potential number of interruptions is prohibitively high. Logic precludes the act of consenting to disclosure from occurring after the disclosure. Hence our framework emphasizes the configuration of preferences at convenient times prior to disclosure. Preferences persist through all disclosures until the user changes them.

Manual configuration Our framework emphasizes the manual configuration of preferences. Given the sensitive nature of privacy, we believe decisions about it should be made by the people whose privacy is in question. In a 2003 Harris poll, 79% (N=1,010) of subjects said “being in control of who can get information about you” is “extremely important” (Taylor 2003). Although this approach could be combined with machine


learning, we believe that end user configuration is still necessary to mitigate errors and provide a feeling of control.

The central notion behind our design is that people disclose different versions of personal information to different parties under different conditions. This notion is rooted in the work of Goffman, who used a theatrical metaphor to explore the “fronts” people employ when playing different social “roles” (Goffman 1956). For example, if your spouse sends you an instant message while you are at work and asks what you are doing, you might reply that you are “busy…ttyl?”10 But if your boss were to make the same inquiry under the same conditions, you might reply that you are “Proofing the trip report. I’ll have it in ten minutes.” In the first case, your role is that of “employed spouse,” in the second your role is “person responsible for the trip report.” In each case, the front you present is tailored not only to the pertinent audience, but also to the current conditions, and it determines the amount of information you are willing to disclose to the audience. That is, if these same two people made this same inquiry while you were calculating your taxes one evening, you might reveal more information to your spouse than to your employer, rather than the converse. Interpreting Goffman’s fronts as means of controlling the immediate flow of personal information, or, managing privacy, we see that managing privacy in everyday life is an intuitive, situated social process. Managing privacy in ubiquitous computing, however, remains a complicated process involving multiple local and remote observers to whom one may want to present multiple fronts simultaneously. For example, if you have called in sick to work but are actually interviewing for a job at a competing firm, when your boss inquires about your location through a ubiquitous computing system, you may not want to disclose your precise location. But how would you practically assert this privacy preference? Would your mobile phone alert you to the inquiry and provide you an opportunity to tailor the response? Would you have preconfigured the service that morning to disclose false information in response to any inquiries from your boss? The interactional means of managing privacy in ubiquitous computing have until now remained unexamined. We address this issue by attempting to infuse ubiquitous computing privacy management with the intuitiveness of Goffman’s fronts. While others have used similar approaches to managing personal information flow in online environments, e.g., (Jendricke and Gerd tom Markotten 2000; boyd 2002), to our knowledge, this work is the first to use this approach in ubiquitous computing environments. We believe the key to enabling people to automatically present different fronts to different parties under different conditions in ubiquitous computing is the intentional, automatic adjustment of the precision of disclosed personal information, according to a priori, manual user configuration made manageable by focusing on a small number of user-level concepts. That is, under certain conditions you may want to allow certain parties to know precisely where you are or what you are doing, while under the same conditions you may want to allow others to know vaguely where you are or what you are

10 “ttyl” is instant messaging shorthand for “talk to you later.”


doing. To return to our earlier example, your response to your spouse (“busy”) was a less precise version than your response to your boss (“proofing trip report”). Assuming the system were able to infer your activity at some precision, it could arguably transform that inference into a less precise version depending on who was inquiring about it, thereby presenting different fronts to different inquirers. The remainder of this chapter is organized as follows. First we provide our operational definition of privacy and describe our conceptual framework in detail, highlighting the three key abstractions of inquirer, situation, and face. Then we describe the user interface prototype we built, which allows a user to specify her privacy preferences using those abstractions. Finally we present the design, results, and discussion of our evaluation of the framework.

5.2 An Interaction Framework for Privacy Management This section presents our definition of privacy, and then discusses the Faces framework derived from the work reported in earlier chapters of this report.

5.2.1 Operational Scope of Privacy For the purposes of this system, we have limited our scope to the immediate disclosure of five dimensions of personal information: identity, location, activity, identities of nearby people, and profile, the latter being composed of the user’s email address, phone number, street address, gender, age, and occupation. By “immediate” we mean the initial disclosure; we do not address issues of information transformation, storage, or use beyond initial disclosure.

5.2.2 Ordinal Precision Scale The notion of information precision is central to privacy management in ubiquitous computing. Not only can technical conditions such as sensor noise and imperfect inferencing degrade information accuracy, but also users can intentionally adjust information precision to exert privacy control (Cuellar, Morris et al. 2003). Disclosure need not be a binary decision; an individual could insist that his location be disclosed at a certain level of precision. For example, a student might allow his advisor to know which building he is in when on campus, but not his precise location within the building. Adams showed that people’s privacy perceptions in augmented environments are largely affected by four key factors: who is receiving the information, what will they do with it, how sensitive the information is, and the context in which it is collected (Adams and Sasse 1999). Interestingly, in computer-mediated communication, only one of these four factors is directly amenable to the subject’s control: sensitivity. The subject generally cannot stop someone from asking for information, and he generally cannot stop someone from using that information however they will. Further, people are always in some context, and they cannot be expected to alter their context drastically to protect their privacy. But subjects can in principle, and we believe will in future practice, adjust the sensitivity of disclosed information by adjusting its precision. Adjustable precision makes sensitivity mutable.


Different categories of personal information would seem to require different scales of precision. Nonetheless we are experimenting with a general ordinal precision scale that might map operationally to specific precision scales. From highest to lowest precision, our scale consists of the following values:

Precise
Approximate
Vague
Undisclosed

Table 5-1 shows how each precision level might affect what would be disclosed for each dimension of personal information. Alternatively, a user can specify a custom string to be disclosed in lieu of an automatic transformation. For example, a user might want to always disclose her location as “the library” whenever her advisor inquires about it, regardless of her actual location.

Table 5-1. Normalized precision levels of personal information

Precision     Identity      Location (indoor/outdoor)   Activity          Nearby People
Precise       True name     Room/Block                  Precise           Names
Approximate   Pseudonym     Building/District           Categorical       Roles
Vague         Role          Municipality                Busy / Not Busy   Number
Undisclosed   Undisclosed   Undisclosed                 Undisclosed       Undisclosed
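To illustrate how the ordinal scale and the mappings in Table 5-1 might be operationalized, the following Java sketch transforms a user’s identity according to a requested precision level. The enum values mirror our scale; the class name, method, and sample strings are hypothetical rather than drawn from the prototype.

/** A sketch of the ordinal precision scale applied to the identity dimension (per Table 5-1). */
public class PrecisionScale {

    enum Precision { PRECISE, APPROXIMATE, VAGUE, UNDISCLOSED }

    /** Transform a user's identity according to the requested precision level. */
    static String discloseIdentity(Precision p, String trueName, String pseudonym, String role) {
        switch (p) {
            case PRECISE:     return trueName;    // e.g., "Jane Doe"
            case APPROXIMATE: return pseudonym;   // e.g., "jdoe94"
            case VAGUE:       return role;        // e.g., "student"
            default:          return "undisclosed";
        }
    }

    public static void main(String[] args) {
        System.out.println(discloseIdentity(Precision.VAGUE, "Jane Doe", "jdoe94", "student"));
    }
}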

5.2.3 Conceptual Framework Extending the operational definitions used in the web-study reported earlier in this report, the Faces framework of privacy management relies on three core abstractions:

Inquirer This is the entity requesting some subset of the user’s personal information. Inquirers might be specific people familiar to the user, specific companies or organizations, or general classes of entities (e.g., clothing retailers, strangers, business associates).

Situation This is an encapsulation of contextual information. A situation is a four-tuple, {location, activity, time, identities of nearby people}, where any element can be a wildcard. Hence a situation might be as simple as a specific location (e.g., at home) with all other parameters ignored, or as complex as walking with my son in Central Park on Sunday mornings. Whenever all of a situation’s conditions are met, that situation is considered active.

Face This is an encapsulation of disclosure precision preferences. In specifying a face, a user specifies (1) a level of precision on our ordinal precision scale for each of four dimensions of personal information (identity, location, activity, and nearby people), and (2) the subset of his profile that he is willing to disclose. That is, a face is a five-tuple {identity precision, location precision, activity precision, nearby people precision, profile subset} describing the preferred levels of precision at which each dimension of information should be disclosed when that face is used.
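Read as data structures, the three abstractions are compact. The Java sketch below is an illustrative rendering of them, not the prototype’s actual code: an inquirer is simply a named entity, a situation is a four-tuple whose null elements act as wildcards, and a face is the five-tuple of precision preferences plus a profile subset (with a name added only as a configuration label).

import java.util.Set;

/** Illustrative data structures for the inquirer, situation, and face abstractions. */
public class FacesModel {

    enum Precision { PRECISE, APPROXIMATE, VAGUE, UNDISCLOSED }

    /** The party requesting personal information (a person, organization, or class of entities). */
    record Inquirer(String name) {}

    /** A four-tuple of contextual conditions; a null element acts as a wildcard. */
    record Situation(String location, String activity, String time, Set<String> nearbyPeople) {

        /** A situation is active when every non-wildcard condition matches the current context. */
        boolean isActive(String curLocation, String curActivity, String curTime, Set<String> curNearby) {
            return (location == null || location.equals(curLocation))
                && (activity == null || activity.equals(curActivity))
                && (time == null || time.equals(curTime))
                && (nearbyPeople == null || curNearby.containsAll(nearbyPeople));
        }
    }

    /** A label plus the five-tuple of preferences: one precision per dynamic dimension and a profile subset. */
    record Face(String name,
                Precision identity,
                Precision location,
                Precision activity,
                Precision nearbyPeople,
                Set<String> profileSubset) {}

    public static void main(String[] args) {
        Situation studying = new Situation("library", "studying", null, null);
        Face student = new Face("Student", Precision.VAGUE, Precision.APPROXIMATE,
                                Precision.VAGUE, Precision.UNDISCLOSED, Set.of("secondary email"));
        System.out.println(studying.isActive("library", "studying", "evening", Set.of()) + " -> " + student.name());
    }
}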

The essence of the framework, then, is that people present different faces to different inquirers in different situations, where a face is a set of preferences regarding the precision of personal information disclosed to the inquirer. Referring to Figure 5-1, which illustrates the framework, let us return to our earlier instant messaging example. An inquirer (i.e., your boss or your spouse) inquires for your activity. The identity of the inquirer, together with your current situation (i.e., proofing your trip report at work), determines which face to use, according to your prior configuration of the system. Let us


assume the face you present to your boss while at work is configured to reveal your precise activity, while the face you present to your spouse in the same situation transforms your activity to “busy” or “not busy” depending on how focused you are on your current task and whether any deadlines are imminent. The system then automatically transforms your information according to the preferences encapsulated in the appropriate face, and discloses the transformed information to the inquirer.

5.3 A Prototypical User Interface for Privacy Management In this section we describe the Java-based prototype we built to instantiate and evaluate our conceptual framework in an experimental laboratory setting. In a production setting, the system would require a ubiquitous context-aware sensing and dissemination infrastructure for determining active situations, collecting users’ personal information, routing inquiries, and disclosing information to inquirers. In our experimental setting, this infrastructure, as well as the algorithm for precision adjustment of information being disclosed, is simulated using Wizard-of-Oz techniques. The prototype allows people to create inquirer, situation, and face objects and to assign faces to inquirers, optionally parameterized by situation. The specific face used to handle a given inquiry is indexed by the appropriate inquirer/situation pair (the current version allows only one face to be assigned to a given inquirer/situation pair). In work reported earlier in this report, we found evidence that the identity of the inquirer is a stronger determinant of one’s ubiquitous computing privacy preferences than is one’s situation at the time of inquiry. Accordingly, the prototype emphasizes inquirers as the primary

Figure 5-1. An illustration of the conceptual framework.


objects to which faces are assigned, but these assignments can be parameterized by situation objects. Note that since faces are situation- and inquirer-independent, they are somewhat abstract to the end user, not being situated in a specific disclosure instance. This will turn out to be a crucial issue in our evaluation and will be discussed in depth. Situations and faces are reusable. A single face can be assigned to multiple inquirers and a single situation can be used to parameterize face assignments for multiple inquirers. Special cases arise when an inquiry is made by an unfamiliar inquirer (i.e., an inquirer for whom the user has no preferences assigned) and when none of the user’s situations are active. We created two special objects to represent these wildcard cases for inquirers and situations. We call these the General Public and the Default Situation, respectively. These two cases intersect to create a third special case, the Default Face. We explain each of these below.

General Public This special inquirer represents all inquirers that the system does not recognize.

Default Situation This special situation is active whenever none of the user’s specified situations are active.

Default Face By assigning a face to the General Public in the Default Situation, the user indicates her preferences for handling inquiries made by unfamiliar inquirers (e.g., a retail store in the mall in which she is shopping) whenever none of her situations is active.
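One plausible way to operationalize these special cases is as a set of fallbacks when resolving which face should handle an inquiry. The following Java sketch is hypothetical and assumes a particular precedence (unfamiliar inquirers map to the General Public, an inactive situation maps to the Default Situation, and unassigned pairs fall back to the Default Face); the prototype is not necessarily implemented this way.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** A sketch of face resolution using the General Public, Default Situation, and Default Face wildcards. */
public class FaceResolver {

    static final String GENERAL_PUBLIC = "General Public";
    static final String DEFAULT_SITUATION = "Default Situation";

    private final Map<String, String> assignments = new HashMap<>(); // "inquirer|situation" -> face name
    private final Set<String> knownInquirers = new HashSet<>();

    void assign(String inquirer, String situation, String face) {
        knownInquirers.add(inquirer);
        assignments.put(inquirer + "|" + situation, face);
    }

    /** Resolve the face for an inquiry; activeSituation is null when none of the user's situations is active. */
    String resolve(String inquirer, String activeSituation) {
        // Unfamiliar inquirers are treated as the General Public; no active situation means the Default Situation.
        String i = knownInquirers.contains(inquirer) ? inquirer : GENERAL_PUBLIC;
        String s = (activeSituation != null) ? activeSituation : DEFAULT_SITUATION;

        String face = assignments.get(i + "|" + s);
        if (face == null) {
            face = assignments.get(i + "|" + DEFAULT_SITUATION); // assumed fallback when no situation-specific entry exists
        }
        if (face == null) {
            face = assignments.get(GENERAL_PUBLIC + "|" + DEFAULT_SITUATION); // the Default Face
        }
        return face;
    }

    public static void main(String[] args) {
        FaceResolver r = new FaceResolver();
        r.assign(GENERAL_PUBLIC, DEFAULT_SITUATION, "Blank");      // the Default Face
        r.assign("Roommate", "Studying", "Student");
        System.out.println(r.resolve("Roommate", "Studying"));     // Student
        System.out.println(r.resolve("Clothing retailer", null));  // Blank, via the Default Face
    }
}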

The prototype consists of two synchronized user interface modules: a PC-based module that supports in-depth configuration, and a lightweight handheld component that affords rapid interaction on the fly. We will describe each in turn.

5.3.1 PC Component The main screen of the PC component (Figure 5-2) is used to create and edit objects representing inquirers, situations, and faces, and to assign faces to inquirers, optionally parameterized by a situation. In Figure 5-2, the user has selected the Roommate inquirer and the Studying situation. The field in the upper-right corner of the screenshot displays the name of the face that will handle inquiries made by the selected inquirer when the user is in the selected situation. In this example, if the user were to click the Student face and then the up-arrow, she would have assigned the Student face to handle all inquiries made by her roommate whenever she is studying. A second screen displays a log of the personal information disclosed to inquirers (Figure 5-3). For each inquiry, the user can see who made the inquiry, the user’s context at the time of inquiry, the situation and face that determined the disclosure precision, and the actual information disclosed. We designed the disclosure log to provide users with feedback on what they have disclosed, thereby supporting notice. This feedback is intended to support an iterative configuration process whereby a user can react to an unsatisfactory disclosure by altering her preferences accordingly. While the log is an important feature of the system, we should note that we did not evaluate it in our study.


5.3.2 Handheld Component In practice, we envision the handheld component operating on users’ mobile phones; however, the current prototype is confined to a small window on a PC or PDA running Java. The main screen (Figure 5-4) of the handheld module is divided into two portions. The top portion displays the user’s precise context. The bottom portion displays the face that will handle inquiries made by unfamiliar inquirers (i.e., the face assigned to the

Figure 5-2. The PC Component. Main screen for creating inquirers, situations, and faces, and binding them to each other.

Figure 5-3. The PC Component's Disclosure Log.


General Public in the current situation) under the current conditions. This screen gives the user immediate feedback about what will be disclosed to inquirers permitted to obtain precise information, and it shows which face the user is showing the general public. In addition to the main screen and a simplified disclosure log, this module has two special features worth noting:

Override The override function allows the user to choose one specific face to handle all inquiries by all inquirers under any conditions until override is disengaged. While engaged, all previously configured preferences are overridden.

Situation Snapshot The situation snapshot function creates a record of the user’s current context, which he can subsequently edit and label, thereby creating a new situation that can be used to parameterize face assignments.
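As a minimal sketch of how the override function might interact with face resolution, the following hypothetical Java fragment treats override as a short-circuit: while engaged, one face answers every inquiry, and the configured mappings are ignored until it is disengaged. The names and structure are illustrative, not the prototype’s implementation.

/** A sketch of the override feature: while engaged, a single face handles all inquiries from all inquirers. */
public class OverrideSketch {

    private String overrideFace = null; // null means override is disengaged

    void engageOverride(String face) { overrideFace = face; }

    void disengageOverride() { overrideFace = null; }

    /** Returns the face to use, given the face that normal (hypothetical) resolution would have chosen. */
    String effectiveFace(String normallyResolvedFace) {
        return (overrideFace != null) ? overrideFace : normallyResolvedFace;
    }

    public static void main(String[] args) {
        OverrideSketch o = new OverrideSketch();
        System.out.println(o.effectiveFace("Work Face")); // Work Face
        o.engageOverride("Blank");
        System.out.println(o.effectiveFace("Work Face")); // Blank, until override is disengaged
        o.disengageOverride();
        System.out.println(o.effectiveFace("Work Face")); // Work Face again
    }
}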

5.4 An Evaluation of the Preliminary Framework In this section we evaluate our initial framework according to observations and results of a series of experiments we conducted with our prototype. We will discuss the value and suitability of the following features of the framework.

Encapsulation Encapsulating multiple dimensions into a larger category minimizes the number of immediate factors users have to consider, but the importance of the inner dimensions does not necessarily diminish in the process. Does encapsulation simplify or complicate the framework? Are users able to remember the values of the inner dimensions of a given encapsulation when necessary?

A priori configuration Our framework assumes users will not want to be alerted to each inquiry as it occurs and will prefer to configure their preferences beforehand. But will de-situated, a priori configuration of abstract preferences lead to a posteriori satisfaction with specific instances of disclosure?

Manual configuration Manually configuring the system consumes time and cognition. Simpler strategies for privacy management, such as always disclosing all information at a fixed precision level, might require

Figure 5-4. The handheld module main screen. The top portion displays the user’s precise context. The bottom portion displays the face that will handle inquiries made by unfamiliar inquirers (effectively, the General Public) under the current conditions.


zero configuration effort. Would simpler strategies requiring less cognitive effort result in similar or better a posteriori satisfaction with specific instances of disclosure?

Abstractions Are the inquirer, situation, and face abstractions useful to end-users?

Unfortunately, we did not evaluate the disclosure log in our experiments. As described earlier, the log is intended to support notice and an iterative configuration process whereby a user can react to an unsatisfactory disclosure by altering her abstract preferences accordingly. While ongoing use of the system in realistic environments, including log-driven iterative reconfiguration, would be the real determinant of whether encapsulation, a priori configuration, and manual configuration are useful strategies for privacy management in ubiquitous computing, we believe the results and observations of our experiments nonetheless provide considerable input to the iterative design process. Before describing the experiment, its results, and our observations, it is worth commenting on the difficulties of evaluating ubiquitous computing systems.

5.4.1 The Challenge of Evaluating Ubiquitous Computing Systems The considerable difficulty of evaluating ubiquitous computing applications has received much recent attention (Consolvo, Arnstein et al. 2002; Scholtz, Arnstein et al. 2002; Trevor, Hilbert et al. 2002; Mankoff, Dey et al. 2003). One major difficulty is that one may have to build a ubiquitous computing system in order to evaluate it. However, this is not always appropriate for early-stage or iterative design because of the quantity of work necessary for each iteration. Because our framework is a support tool meant to function in the context of other ubiquitous computing applications, evaluating it posed particular difficulties. To truly test it, we would have had to build a variety of ubiquitous computing applications that make use of personal information in different ways, and get a number of third party vendors and information consumers to start using the system before we could realistically test the privacy management component. Rather than attempting to build a series of robust ubiquitous computing applications making use of our framework—a gargantuan task at best—we chose to conduct a controlled laboratory experiment combining descriptions of everyday scenarios with use of our prototype. Because it involved only limited use of an early prototype, promising none of the potentially invasive consequences implicit in real-world use of a privacy management system, our experiment suffered from limited realism. Nonetheless, its value lies in the opportunity to observe people attempting to leverage our conceptual model as an intellectual and interactive means of understanding and managing disclosure in sensed environments.

5.4.2 Experimental Setup Here we will describe the hypotheses that motivated the experiment’s design, the people who served as its subjects, and the procedures that comprised its execution.


5.4.2.1 Hypotheses Based on the four issues we chose to investigate in our evaluation, we began our study with the following hypotheses:

1. Participants would be able to accurately predict the precision at which each dimension of information would be disclosed in specific instances based on their earlier configuration of the system. This hypothesis pertains to the encapsulation issue.

2. Participants would be satisfied a posteriori with the precision of information disclosed in specific instances, as determined by the preferences they set a priori.

3. Manual configuration of preferences would meet participants’ disclosure needs better than simpler disclosure strategies.

4. Inquirer, situation, and face are suitable abstractions for end-user privacy management in ubiquitous computing.

5.4.2.2 Participants Four women and one man, all average computer users, from the non-engineering undergraduate student body at UC Berkeley, were each given a $20 gift certificate to a local bookstore for participating. They had an average of 2.2 years of experience using mobile phones and almost no experience with PDAs. Four out of five subjects rated themselves as moderately to seriously concerned about both online and everyday privacy.

5.4.2.3 Procedure Participants were first interviewed for about ten minutes to determine their reactions to specific scenarios involving sharing information with different recipients in different situations. We asked questions such as: “Imagine you are purchasing something in a store. The cashier asks for your phone number. Do you give it? When might you and when might you not?” This served to prime participants by making them think through their feelings and opinions about privacy in ubiquitous computing. Participants were then given a verbal description of ubiquitous computing and a demonstration of both the handheld11 and desktop versions of our system. They were given as much time as they liked to explore the system themselves. Participants then completed a series of tasks using the PC interface, after which they were briefly interviewed about their experience with the system. Participants completed the following tasks:

1. Select or create one or more faces and situations to satisfy two example sets of requirements we provided (e.g., allow your roommate to know approximately where you are when you are studying, but not whom you are with).

2. Create two situations from your own life, and select or define default faces to be used in each situation (i.e., assign a face to the General Public for each of the two situations).

3. Create two inquirers from your own life, and select existing faces or create new ones to be used for each possible combination of inquirer and situation, including the Default Situation and the two situations created in the previous task.

11 Though participants were introduced to the handheld component to fortify their understanding of our conceptual framework, this component did not play a role in the evaluation.


Participants were moved away from the system after a five-minute break and were asked to imagine that one of the inquirers they had specified had asked for all of their personal information during one of the situations they had specified. Participants were then asked the following:

1. What do you think was disclosed (precision level and actual content) about each dimension of information in this instance, based on your recall of the preferences you set earlier?

2. In retrospect, what would you want to disclose (precision level and actual content) for each dimension of information to this inquirer under these conditions?

3. For each dimension of information, how important is it that the precision of the disclosed information be at the level you want in this instance?

This process was repeated for the second inquirer and second situation that participants had specified, and again for a third inquirer and situation of our choice (a popular national clothing retailer in a popular local shopping district). We have interpreted the outcome of the experiments according to two categories, which we term results and observations. By results we mean the data that emerged from the questions we asked subjects and the measurements we made of their use of the system. We report these results, organized according to our hypotheses, below. However, given the limits of the experiment—i.e., a small and homogenous sample, a limited number of inquiries, completely desituated use of the system, and no incentive to behave honestly—we feel the true value of the evaluation comes through in our observations about which features of the framework appeared useful, which did not, and why. We present these additional observations when we discuss our results in section 5.5.

5.4.3 Results Most measures were made on a per-inquiry basis, not per-subject, though we include some per-subject results. With five subjects and three inquiries per subject, there were a total of 15 inquiries. We present the results for our first three hypotheses. The fourth hypothesis, regarding the suitability of the face, situation, and inquirer abstractions, is discussed in the next section.

5.4.3.1 Encapsulation: Recalling Preferences To measure participants’ ability to remember the values of encapsulated dimensions of faces, we compared the precision levels actually configured with the precision levels the subject thought she configured after a five-minute break. Subjects accurately predicted disclosure precision levels for 73% of inquiries with respect to identity, 87% with respect to location, 71% with respect to activity, 87% with respect to nearby people, and 60% with respect to profile. This 60-87% success rate in recalling privacy preferences is not promising. It is important to note, however, that asking someone to remember how he configured an unfamiliar system five minutes ago is a far cry from asking them after long-term use of a familiar system. This is not to say we expect the recall rate would improve, but simply that the measure’s weak validity makes it inconclusive.


5.4.3.2 A priori Configuration: A posteriori Satisfaction To measure participants’ a posteriori satisfaction with the precision of each specific instance of disclosure, given that faces were designed a priori – decontextualized from specific disclosure events – we compared the precision levels the subject configured a priori with the precision levels the subject would have preferred a posteriori to a given disclosure. Measured across all inquiries, abstract a priori preferences matched a posteriori preferences 67% of the time for identity, 47% for location, 57% for activity, 80% for nearby people, and 47% for profile. However, these poor results do not take into account that a difference of one, two, or even three levels of precision may be acceptable if a subject deems it not very important that a given dimension be disclosed at his preferred precision. Indeed, when the self-reported importance of adhering to a priori preferences was taken into account, results improved: preferences configured a priori matched these modified a posteriori preferences 67% of the time for identity, 87% for location, 100% for activity, and 87% for nearby people. (The derivation of these numbers is discussed in the next passage.) These numbers are an improvement, but not enough of one. As with the previous hypothesis, their weak validity due to the limits of the study makes them inconclusive. We address them further in the discussion of our observations.

5.4.3.3 Manual Configuration: Compared to Simple Automation To measure whether manual configuration would result in higher satisfaction than simple, automated strategies, we compared the precision levels the subject would have preferred a posteriori with the precision levels that would have been used in each of the following simple, automated strategies:

Precise Always disclose all information at the precise precision level.

Approximate Always disclose all information at the approximate precision level.

Vague Always disclose all information at the vague precision level.

Undisclosed Never disclose any information.

Random For each dimension and for each inquiry, choose one of the above four levels at random. We ran the random trials twice.

For each of these strategies and for each dimension12 of personal information, we measured the difference D between the precision that would have been used by that strategy and the precision the subject would have preferred a posteriori for each inquirer in each situation. We then looked at the importance rating (1-7) the subject gave to each dimension. If the importance was 1 (not important), then any value for D was considered satisfactory. If the importance was 2-3, D could be no more than two levels of precision

12 We did not include the profile dimension in these measures, because it did not conform as formulated to the ordinal precision scale we employed.


to be considered satisfactory. If importance was 4-5, D could be no more than one level of precision. If the importance was 6-7 (extremely important), D had to be zero. Table 5-2 shows the success rate of our manual configuration strategy compared to that of the automated strategies. In all measured dimensions except identity, our approach resulted in more satisfactory disclosures than the other strategies. When examined on a per-subject basis and averaged across all four measured dimensions of information, manual configuration met participants’ needs an average of 84% (SD 21) of the time, ranging from 58% to 100%. The next best option, never disclosing any information, had an average 63% (SD 32) success rate, ranging from 25% to 80%. Table 5-3 shows the success rates of the remaining automated techniques.
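To make the scoring rule explicit, the following sketch encodes the thresholds just described. It is our own illustrative rendering, with precision levels encoded as ordinal integers, and is not the analysis code actually used to produce the numbers in Tables 5-2 and 5-3.

/** Sketch of the satisfaction criterion: the allowed precision gap D shrinks as importance grows. */
public class SatisfactionCriterion {

    /** Maximum acceptable difference in precision levels for a given importance rating (1-7). */
    static int allowedDifference(int importance) {
        if (importance <= 1) return Integer.MAX_VALUE; // not important: any disclosure is satisfactory
        if (importance <= 3) return 2;                 // somewhat important: within two levels
        if (importance <= 5) return 1;                 // important: within one level
        return 0;                                      // extremely important: must match exactly
    }

    /** A disclosure is satisfactory when |disclosed - preferred| is within the allowed difference. */
    static boolean satisfactory(int disclosedLevel, int preferredLevel, int importance) {
        return Math.abs(disclosedLevel - preferredLevel) <= allowedDifference(importance);
    }

    public static void main(String[] args) {
        // Precision levels encoded ordinally, e.g., 0 = undisclosed, 1 = vague, 2 = approximate, 3 = precise.
        System.out.println(satisfactory(1, 3, 2)); // true: two levels off, importance 2-3 allows it
        System.out.println(satisfactory(1, 3, 6)); // false: extremely important, must match exactly
    }
}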

Table 5-2. Percentage of disclosures, across all participants, that would have satisfied participants’ a posteriori precision preferences for a given instance of disclosure, given the importance that preferences be met. The highest number in each column is shown in boldface.

Strategy        Identity   Location   Activity   Nearby People
Manual          67%        87%        100%       87%
Precise         73%        60%        71%        27%
Approximate     40%        53%        79%        40%
Vague           27%        60%        71%        60%
Undisclosed     40%        73%        57%        80%
Random 1        47%        53%        79%        67%
Random 2        33%        60%        79%        53%

Table 5-3. Average percent of disclosures that would have satisfied participants’ a posteriori precision preferences for a given instance of disclosure, given the importance that preferences be met. These results are averaged across all information dimensions and reported per-subject. The highest number in each row is shown in bold.

Subject   Manual   Undisclosed   Vague   Approx.   Precise   Random 1   Random 2
1         89%      80%           80%     69%       69%       88%        80%
2         58%      50%           67%     75%       50%       67%        67%
3         100%     58%           41%     25%       50%       33%        33%
4         75%      67%           58%     58%       58%       75%        58%
5         100%     58%           25%     33%       58%       42%        42%
Mean      84%      63%           54%     52%       57%       61%        56%
SD        21       32            28      33        33        32         25

5.4.4 Results Summary The numerical results reported above indicate that subjects, in some cases, successfully employed the Faces framework and prototype to specify preferences that remained agreeable when put in the context of a specific instance of disclosure. Yet there were many cases in which subjects struggled with the framework. They had difficulty recalling the preferences they had earlier encapsulated inside faces. Their a priori preferences


often did not match their situated preferences; folding importance ratings into the calculation improved this matching, but there were still a number of instances when the degree of disclosure would have been disagreeable to the subject. These results are generally superior to what would have occurred through use of simple automated disclosure algorithms, but that does not imply that these rates would be agreeable to users exposed to real sensing systems. And we did not compare our approach to more complex automated approaches, e.g., agent-based or machine learning approaches. Despite the limited validity of the results due to the lack of realism in the experiment, they indicate that Faces may ultimately be too complex to support simple management of everyday privacy. But to improve the design of the framework, we need more than just its condemnation. We need an explanation for the poor results, so that we can know how to direct our refinement efforts. We seek out this explanation in the remainder of this chapter.

5.5 Additional Observations and Discussion Here we discuss some problems in the Faces framework that became apparent through our observations of subjects using the prototype. We believe these problems are the primary causes of the results discussed above. One concept that will arise in this discussion is the gap between a priori preferences, buried inside faces configured in the absence of specific inquirers or situations, and actual preferences in situated instances of disclosure. We will call this the abstract-situated gap. Naturally an entirely different framework in which users choose a precision level for each disclosure upon being alerted to each inquiry in real-time could eliminate this gap. But as discussed earlier, ubiquitous computing promises to flood users with such alerts. This option appears intolerable. We designed our framework to alleviate this flood by shifting the act of consent to an occasional (re)configuration process, but in doing so we created the abstract-situated gap. We will address the ramifications of the gap at various points in this discussion.

5.5.1 Suitability of Abstractions Our final hypothesis was that the abstractions represented in our interface are appropriate for managing end-user privacy. While manual configuration takes time and effort, we have attempted to minimize this by supporting these reusable, high-level abstractions in our framework and interface. Here we discuss these abstractions in the light of recurring critical incidents encountered in the evaluation. Incidents fell into three broad categories: managing mappings, conflating abstractions, and underutilizing situations.

5.5.1.1 Managing Mappings While participants generally understood the basic notion behind the framework, they had difficulty translating that understanding into effective use of the system. Some participants didn’t realize that editing the inner dimensions of a face affects that face across all inquirer/situation pairs to which it is assigned. Some did not reuse faces but simply created a new one for each inquirer/situation pair, even when reuse would have met the task requirements. Some complained about and appeared to struggle with the


conceptual complexity of the many possible inquirer/situation/face mappings. Some, after creating a face, did not actually assign it to an inquirer/situation pair, apparently thinking that the very act of creating a face would automatically assign it to the pertinent inquirer/situation pair. When asked to create a Default Face, one subject created a face called “Default Face” but did not assign it to the General Public/Default Situation pair, apparently thinking that simply naming it “Default Face” would make it live up to its name. One subject expressed concern about disclosing more information than desired due to misunderstanding the mappings. We believe some of the blame for these difficulties lies in the unrefined design of the software prototype our participants used. An improved interface design and repeated exposure to the system would arguably dissolve some confusion over reusing faces and situations, assigning faces to inquirer/situation pairs, and the meaning of Default Face. Nonetheless, much of the confusion about mappings lies in the underlying framework itself. Managing an arbitrarily large set of one-to-one mappings would be challenge enough, but our framework asks users to manage an arbitrarily large set of many-to-many-to-many mappings while also managing the inner values of the mapped encapsulations. The potential for confusion is too great, especially in a domain as sensitive as privacy.

5.5.1.2 Conflating Abstractions Some participants had difficulty separating situations from faces, repeatedly confusing the roles each abstraction plays in the framework. For example, we observed behavior similar to the following. When asked to create a default face to handle inquiries when the subject is studying, the subject would create a face and type “studying” into the custom field of the face’s Activity property, rather than creating a face, creating a situation with its Activity property set to “studying,” and assigning that face to the General Public in that situation. Further, some participants were generally unable to semantically and operationally distinguish situations and activities. Our model posits activity as a single, simple contextual variable (despite its technical acquisition being far from simple), whereas situation is a collection of related contextual variables. Hence activity is a property of a situation. Activity is also a dimension of information that can be disclosed and, through a precision preference within a face, blurred. Despite this operational distinction, subjects conflated these two concepts, which led to operational confusion. Of course, it is completely valid that subjects semantically conflated these concepts, for they can hardly be separated. Activity is a rather broad term that correlates well with context. We discuss design ramifications of this below.

5.5.1.3 Underutilizing Situations When constructing situations, participants made heavy use of the wildcard option, often specifying details for only one inner dimension, e.g., location or activity. There are many possible combinations of values for the four dimensions of a situation, but by using wildcards subjects collapsed massive sections of that space.


Use of the situation snapshot feature would simplify the finer setting of situation parameters, but it is questionable whether that would be necessary. It may be that privacy preferences are coarsely but sufficiently correlated with location alone, or with activity alone (though “activity alone” is an oversimplification, to be sure), such that fine-tuning the situation parameters is unnecessary. This draws into question the very need for the situation abstraction.

5.5.1.4 Abstractions Summary Some of these incidents may be partially due to design flaws in our prototype user interface rather than to weaknesses in the underlying framework, but we feel they reflect elements of complexity rooted in the framework itself. These complications cast doubt on the suitability and integrity of the face and situation abstractions (we have found no evidence of confusion or misuse of the prototype due to the inquirer abstraction). We address this further below.

5.5.2 Implications for Design In this part we analyze flaws in the framework that we feel are responsible for the results of the evaluation. In discussing these flaws, we position them as implications for refinements to the design of the framework.

5.5.2.1 Ambiguity Is Useful We had assumed that the goal of the iterative design of this framework would be to minimize the abstract-situated gap, but there is evidence that such a gap may be acceptable to some degree. Subjects rated the importance of adhering to a priori preferences relatively low in a number of cases, enough to significantly improve our measure of a posteriori satisfaction with the level of disclosure in specific instances. This implies that a privacy management system has some elbow room in determining how to execute its users’ preferences.

5.5.2.2 Static and Dynamic Information Should Not Be Grouped Together Some of our participants pointed out a design flaw in grouping static and dynamic information precision preferences together. The sensitivities of static and dynamic information have different dependencies on the subject’s familiarity with and proximity to the inquirer. That is, people are less concerned about disclosing static information (i.e., identity and profile) to parties with whom they have established relations because those parties already know (some of) that information. But people are concerned about whether and when remote familiar parties obtain their dynamic information (i.e., location, activity, and nearby people) because that information can affect those relationships. On the other hand, people are concerned about disclosing static information to unfamiliar parties (e.g., revealing one’s identity and contact information to a retail store). But people are less concerned about revealing dynamic information to unfamiliar parties, since it will have less of an effect if one has no established relations with them. Of course, if static and dynamic information are disclosed to an unfamiliar party at the same time, then the dynamic information is sensitive because the inquirer becomes immediately more familiar. Table 5-4 illustrates these points.


The main implication here is that grouping preferences about identity and profile in the same encapsulation as those about location, activity, and nearby people complicates matters. Our next design teases apart these concerns.

5.5.2.3 Situation and Face are Two Sides of the Same Coin Especially if we concern ourselves only with dynamic information, then we can consider the situation as the precise version of one’s dynamic information, and the face as the transformed version that one wishes to convey. Introducing a level of indirection between these concerns flies in the face of the situated nature of interaction vigorously explored by the likes of Suchman (Suchman 1987) and Dourish (Dourish 2001). This was demonstrated clearly when participants repeatedly confused these two abstractions. Merging or eliminating these abstractions would eliminate this indirection.

5.5.2.4 Encapsulations May Introduce Unnecessary Indirection Many participants created situations that contained a value for a single inner dimension (e.g., location) and wildcards for all other dimensions. This implies it may be more intuitive to eliminate the situation abstraction and simply support parameterization of face assignments by one or more contextual dimensions a la carte. This would also help eliminate the confusion participants exhibited between situation and activity.

5.5.2.5 Face Is a Weak Metaphor for Dynamic Information

Face as a metaphor for identity may have merit, but it strains when stretched to cover the precision of dynamic information. Some experts believe metaphors should be used sparingly, e.g., (Cooper 1995). It might be better to emphasize precision per se, without masking it behind a metaphorical face.

5.5.2.6 Wildcards in a Three-Way Mapping are Confusing

The three-dimensional inquirer/situation/face space proved a bit confusing for participants. In particular, participants struggled to understand the implications of wildcards in the inquirer/situation/face mapping, i.e., the General Public, Default Situation, and Default Face. Operationalizing these exceptional conditions in a complex space in an intuitive fashion proved quite challenging. By merging or eliminating the situation and/or face abstractions in our next design, we hope to mitigate this confusion.

Table 5-4. Managing personal information flow to familiar parties is only of concern when they are remotely located and when the information is of a dynamic nature. Managing information flow to unfamiliar parties is primarily of concern when the information is one’s identity and/or other static information. It is unclear whether one’s dynamic information is of much interest to remotely located parties with whom one has no established relation.

                                     Dynamic Information             Static Information
  Familiar Inquirers    Proximate    Not sensitive                   Not sensitive
                        Remote       Sensitive                       Not sensitive
  Unfamiliar Inquirers  Proximate    Not sensitive                   Sensitive
                        Remote       Not sensitive if anonymous      Sensitive


5.5.2.7 Support for Groups is Missing

A person's privacy decisions are heavily influenced by his associations and loyalties. His privacy preferences reflect not only his possibly antagonistic relationships to inquirers, but also the normative practices of the groups in which he holds membership. Our framework positions the user as an individual against the world, rather than as a member of various groups cooperating against and in support of the world. This insight arose not through analysis of our experiments, but through discussions with colleagues. Nonetheless, we believe lack of support for groups is a major obstacle to the successful application of the framework.

5.6 Situating the Framework into the Privacy Space

In retrospect, we see that we intended this framework to apply to the whole of the privacy space. We intended to supply both feedback and control mechanisms. We envisioned the system to apply to any disclosure system, whether surveillance- or transaction-oriented. We intended it to apply to both organizational and interpersonal observers. We intended to address any level of familiarity (from General Public through classes of inquirers to specific inquirers). We attempted to manage both persona (through the identity and profile components of a face) and activity (through the other, more dynamic components). And we envisioned that, largely by way of the fair information practices, the rules encoded in an individual's faces would apply to both primary and incidental content.

In the end, these intentions were a bit too far-reaching. We meant to collapse the entire interactive ubicomp privacy management problem into a simple metaphor. The problem is that, while the metaphor aligns with theoretical notions of privacy regulation, like (Goffman 1956) and, to a lesser degree, (Palen and Dourish 2003), it gets in the way of people's actual privacy management habits. In theory and in practice, people do show different faces, or, present different precisions of information, to different audiences under different circumstances. But in practice, they do not actively manage these faces as distinct objects, like a master of disguise manages his masks. Rather, they intuitively engage these faces in the course of their everyday actions. Interactive support for ubicomp privacy cannot address the entire privacy space; it must instead target a subspace it can realistically serve. Here we outline that subspace.

First, surveillance does not generally lend itself to control by the surveilled. Arguably, we can provide interaction support for transaction systems at best. And even within that space, absolute control is highly unlikely, for it would require a pervasive, unhackable digital rights management infrastructure, the feasibility of which has never been demonstrated at societal scales. Hence, a lightweight system for managing transaction-based systems seems most amenable to interaction support.

Regarding organizational observers, the hard truth of the contemporary personal information market is that organizations will collect as much personal information as they can, up to the point at which they would alienate a significant portion of their client base. No user interface will be able to alter this. This point is also an argument against the likelihood of interactive control over incidental disclosure; observers will exploit incidental information to whatever degree they can benefit from it.

Notably, regarding organizations, people already manage their personal information flow to organizations through managing a separate account for each organization. While this is a tedious and frustrating process, commercial efforts to homogenize account management have thus far failed. We believe this is because, with respect to organizational disclosure, managing these multiple accounts is the very manifestation of the contemporary fragmented identity and boundary negotiation processes put forth by Goffman and Altman, respectively. That is, the scope and nature of the disclosure of personal information to organizations is embedded in the normative practices of engaging the service provided by that organization. The scope and nature of this disclosure cannot be abstracted away from the process of engaging the observer and packaged in a singular disclosure management service. It is inextricably embedded in the engagement. The same can be said about interpersonal disclosure, but we will argue later for a simple mechanism to allow subtle regulation of interpersonal information flow, a mechanism which we believe can be embedded into existing interpersonal disclosure practices.

Regarding persona disclosure, people have established means of disclosing personae to observers, through personal introductions, business cards, filling out forms, email, etc. Interactive support for disclosing activity in ubicomp is an open problem, however, since information about people's activity states will become increasingly available as sensors and inferencing technologies proliferate.

The upshot is that feedback and control over transaction-based interpersonal disclosure of activity (as primary content) remains an open problem. In other words, we can narrow our design focus to a user interface that lets people tell their friends/families/colleagues what they are doing. This is precisely what we address in the following chapter.

5.7 Summary In this chapter we described the design and evaluation results of our initial framework, which operationalizes privacy in the relations a user creates between three key abstractions: inquirer, situation, and face, an encapsulation of information precision preferences. We argued that the Faces framework’s applicability is severely limited by a series of important flaws, the upshot being that it requires the user to model his identity management practices within the interface, rather than enabling him to practice them through it. We positioned these flaws as implications for the design of a refined framework and we narrowed our scope of the privacy space down to a subspace which appears amenable to interaction support in ubicomp. In the next chapter, we present this refined framework and provide rationale for its design.


6 The Precision Dial: A Refined Interaction Framework

In this chapter we present a refined interaction framework based on the implications of the evaluation from the previous chapter, on the literature, and on critical reflection. This framework, simpler and more flexible than its predecessor, is designed to embed naturally into existing socio-technical practices. We believe this interaction framework is considerably more viable, from commercially, technically, and socially informed perspectives, than its predecessor.

The core of the revised framework is a sort of precision dial on the user's mobile phone with which she could quickly adjust the precision at which all context information (e.g., location, nearby people) would be disclosed to her personal contacts. If so inclined, she could organize her contacts into groups and adjust precision on a per-group basis. Or she might use a single precision for all contacts. As before, all disclosures would be available in a disclosure log. These and further details are discussed in the sections that follow.

In line with the privacy subspace we targeted at the close of the last chapter, we are concerned herein with individuals' regular, voluntary disclosure of their dynamic contextual information to remote, identifiable parties through ubicomp systems. We are concerned with cases where users are interested in sharing their personal information with the right people at the right level of detail. Here, people want to disclose contextual information to others, perhaps to facilitate micro-coordination of arrivals at a meeting place or to convey a sense of presence to intimate companions. The key here is that people choose to selectively disclose their information to people in their social and professional networks. This does not mean that secrecy and anonymity are not important. There are many reasonable cases for not disclosing information to colleagues, friends, and family. The problem, however, is that secrecy and anonymity address only an extreme case and cannot cover the many situations in everyday life where people do want to share information with others. The question is how to share personal information with the right people at the right level of detail, with secrecy and anonymity possibly being the solution in some cases.

As with our earlier framework, central to this framework is the ability to adjust the precision of dynamic contextual information disclosed to other people. Control over precision provides control over the information density of a given disclosure. The more precise the disclosure, the more information disclosed. For example, disclosing location and duration as "Last seen in downtown San Francisco" is decidedly less revealing than disclosing "At a strip club on Market Street for the past two hours." This allows people to manage how they present themselves in ubiquitous computing environments. Users might not be able to convey that they are acting appropriately with respect to a given observer, but the lack of information conveys a likelihood that they are not acting inappropriately. The observer is left to infer some range of possible activities. If the observer's trust in the subject is sufficiently low to inspire the assumption of inappropriate activity, then this is a matter for the observer and subject to work out together, a social process that no amount of ubicomp technology can replace.


This approach relies on context distribution infrastructures incorporating some level of inherent ambiguity for purposes of plausible deniability. This requires some explanation. Consider the following: if a person does not answer a cell phone call, it could be for technical reasons, such as being outside of a cell, not having the phone at hand, or the phone being off, or for social reasons, such as being busy or not wanting to talk to the caller right now. The result is that the person being called has a simple model for protecting their privacy, while the caller cannot tell why that person is not answering. A similar situation will likely exist in ubiquitous computing systems, where ambiguity exists due to sensor noise, incomplete wireless coverage, and forgetting to carry devices, as well as situational propriety and the subjective desire to be let alone. The range and validity of reasons for both intentional and unintentional imprecision create an ambiguity around the conditions of disclosure and conveyance that is too thick for observers to bother deconstructing. Indeed, cultural accommodation to imprecise knowledge on the part of remote observers can engender a norm of plausible deniability (Nardi, Whittaker et al. 2000; Woodruff and Aoki 2003).

6.1 A Flawed Design

The evaluation of our Faces framework revealed significant flaws. This does not invalidate the results of our formative inquiries, of course, but it means we operationalized them rather impractically. While subjects understood and agreed with the core notion of the framework, i.e., that people present different faces in different situations, with each face regulating the outbound flow of personal information, they described and demonstrated a certain clunkiness in the framework. It is overly structured. It requires substantial configuration effort. It obscures actual precision preferences behind abstract "faces" and actual contextual factors behind abstract "situations," leading to potentially significant misalignments between actual system state and the user's predictive, immediate, and retrospective conceptions thereof.

In short, it requires the user to explicitly manage, albeit not necessarily in real time, an approximate model of the self-presentation practices she already manages naturally. She is coordinating two parallel identities: the natural one reflected in her embodied social practices, and the coarse approximation represented by the inquirer-situation-face relations mapped into her privacy management system. Such a design is ultimately impractical. Despite the variable subjective importance of adhering to exact disclosure precision preferences, the abstract-situated gap introduces a fundamental element of ambiguity into the system's functionality. Information would often be disclosed at unintended, though not necessarily disagreeable, precisions. When it comes right down to it, people will not participate in the framework's complex configuration requirements when the results are regularly uncertain. Such ambiguous results are achievable with far less configuration effort and with a far less obfuscated conceptual framework; indeed, this is exactly the motivation behind our refined design.


6.2 A Refined Design: Exploiting Ambiguity

Our goals in revising the framework were to simplify it conceptually, to minimize its required configuration effort, and to craft it to embed comfortably into established social practices rather than to impose an awkward structure alongside them. Regardless of the accuracy of social theories like Goffman's, interactive systems intended to support them should not model them directly, as did the inquirer-situation-face system. Rather, the interfaces to these systems must be deftly designed to embed into the established practices these theories describe. These interfaces should be amenable to, but should not recreate, these practices.

With those goals in mind, we recognized that what appeared to be one of the framework's weaknesses, the abstract-situated gap, was instead the instantiation of a fundamental property of the social process of privacy management, namely, ambiguity. As Palen and Dourish point out, privacy management is not a set of binary decisions about information access control. It is the situated negotiation of multiple boundaries (e.g., disclosure boundaries, personal boundaries, temporal boundaries) and it is an intuitive process in which we all engage (Palen and Dourish 2003). There is an ambiguity at the intersection of the compound boundaries we continuously negotiate. We participate in this ambiguity. We participate in its creation, in its exploitation, in its coordination, in its resolution, and in its perpetuation. Our privacy framework should align with these embodied practices and should support their intuitive extension into and across ubiquitous computing systems.

We believe there are two keys to empowering users to leverage ambiguity in the creation of privacy management practices involving a technical system. First, the system must be simple and flexible, allowing users to incorporate it into their own practices rather than requiring them to conform to an overly structured framework. Second, ambiguity must be supported at both ends of the disclosure chain: in the disclosure preferences available for subjects, and in the information and meta-information presented to inquirers in response to their inquiries. We address both of these points below in the description of our revised framework.

The precision dial differs from the Faces framework as follows:

Separate static and dynamic information: Identities and their attendant profiles are managed separately from dynamic information like location, activity, and companions. The new framework focuses on the management of dynamic information.

Collapsed information dimensions: Users apply a single disclosure precision preference across all dynamic information dimensions. For example, if Alice discloses vague information to Bob, then her location, activity, and companions are all disclosed vaguely. She does not specify different precisions for each dimension. Fine-grained control could be offered at an advanced level but is not emphasized.

Eliminate encapsulation: Precision levels are exposed, not masked and encapsulated into faces. Privacy is adjusted by changing disclosure precisions per se. Similarly, contextual factors are exposed, not collected into situations. Privacy preferences can be parameterized by any valid contextual factors per se.


Fundamental support for groups: Users create any number of groups and categorize inquirers into them. Users assign a default precision preference from the ordinal precision scale to each group, but can optionally parameterize precision according to any valid contextual value, e.g., location, time, companions, activity. Interpreting assigned precision levels as an indication of trust, a system based on this framework could, with permission, crawl the settings of its users, hopping from user to user, inferring social networks and suggesting or actively configuring precision levels similar to socially-proximate users or according to collaboratively filtered settings.

Real-time and a priori configuration: In addition to the manual override function, users can manually adjust precision preferences for any group in real-time, on the fly, using some sort of precision dial. This provides finer-grained control than manual override alone. With only four precision levels to choose from, instead of an arbitrary number of obscure faces, blurring personal information is feasible for a user with a sufficiently small set of groups.

Ambiguity and plausible deniability: Inquirers cannot readily determine the cause of imprecision in the disclosed information. Rather than blaming the imprecision on the subject's judgment of the inquirer, it would be far easier and more socially acceptable for the inquirer to blame it on the combinatorial space generated by the inherent ambiguity of the ordinal precision scale, the prominence of social conventions that favor imprecise disclosure, and the unpredictability of technical conditions that create it.

The ordinal precision scale (precise > approximate > vague > undisclosed) remains unchanged. In theory, a user should be able to easily navigate to a group of inquirers on her mobile phone and alter its disclosure precision level. Manual override, by which the user selects a specific precision level to handle all inquiries from all inquirers until override is disengaged, should be as easy to engage as the ringer volume control on today’s better-designed mobile phones. Precision adjustment might be afforded by a simple dial on the mobile phone. We will now address each of the above changes in more detail.
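Purely as an illustration (the framework prescribes no particular implementation), the ordinal scale, per-group settings, and manual override described above might be modeled roughly as follows. All names here (Precision, PrecisionDial, and so on) are hypothetical.

```python
from enum import IntEnum

class Precision(IntEnum):
    # Ordinal precision scale: higher value = more information disclosed
    UNDISCLOSED = 0
    VAGUE = 1
    APPROXIMATE = 2
    PRECISE = 3

class PrecisionDial:
    """Hypothetical per-group precision settings with a manual override."""
    def __init__(self, default=Precision.VAGUE):
        self.default = default
        self.group_precision = {}   # group name -> Precision
        self.override = None        # when set, applies to all inquirers

    def set_group_precision(self, group, level):
        self.group_precision[group] = Precision(level)

    def engage_override(self, level):
        # Blankets all inquirers in a single precision until disengaged
        self.override = Precision(level)

    def disengage_override(self):
        self.override = None

    def precision_for(self, group):
        if self.override is not None:
            return self.override
        return self.group_precision.get(group, self.default)

# Example: blur all disclosures during a meeting, regardless of group settings
dial = PrecisionDial()
dial.set_group_precision("colleagues", Precision.APPROXIMATE)
dial.engage_override(Precision.VAGUE)
assert dial.precision_for("colleagues") == Precision.VAGUE
```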

6.2.1 Separate Static and Dynamic Information

In the new framework, identities and their attendant profiles are managed separately from dynamic information such as location, activity, and companions. The framework emphasizes the management of dynamic information.

6.2.1.1 Static and Dynamic Information Are Different

Table 5-4 illustrates the relationships among information type (static or dynamic), inquirer familiarity, and inquirer proximity in determining the sensitivity of personal information. With respect to static information, e.g., identity and profile, sensitivity is inversely related to the subject's familiarity with the inquirer, regardless of proximity. Indeed, familiarity might be operationally indicative of how much static information an inquirer has about a subject. The more familiar an inquirer, the more the inquirer knows about the subject's identity(ies) and profile(s), and hence disclosing this information to such an inquirer is relatively moot. This holds regardless of whether the inquirer is proximate or remote at the time of inquiry.


On the other hand, the means by which a subject keeps an unfamiliar inquirer unfamiliar is by minimizing disclosure of identity and profile to that inquirer. Hence static information is relatively sensitive with respect to unfamiliar inquirers, again regardless of proximity.

With respect to dynamic information, e.g., location, activity, and companions, sensitivity is dependent both on the inquirer's proximity to the subject and on the inquirer's familiarity to the subject. Regardless of familiarity, a proximate inquirer, by virtue of his proximity, already has some knowledge of the subject's dynamic information. Even if the inquirer does not know the subject's identity, he at least knows that someone is in this location performing this activity with this number of other people. The sensitivity of dynamic information increases when the inquirer is remotely located and is particularly acute when the inquirer is familiar. Many a relationship has been strained by the revelation that Alice was in a location, or performing an activity, or with a companion that Bob finds disagreeable. Dynamic information is effectively anonymous and of limited sensitivity when the inquirer is unfamiliar. But if the unfamiliar inquirer also obtains the subject's identity in the same or a related transaction, then the inquirer effectively becomes familiar (perhaps disagreeably) and the dynamic information becomes immediately sensitive.

Having established the rationale for managing static and dynamic information separately, we will now discuss the means of managing them.

6.2.1.2 Managing Static Information: Identities and Profiles

In principle, identities and profiles could be selected like keys from a key ring as the user encounters different environments, either manually or automatically with the help of context-awareness. But we do not expect this approach is viable. If a particular identity swap were poorly timed, the owner of the sensing system would sense both the initial and the subsequent identities, thereby correlating them and effectively eliminating any distinction between them.

We believe a more likely outcome is that multiple identification services will be active at once, and that individuals will log into each one independently. An individual's biometric and other properties will make it effectively impossible to have multiple identities in any single service, but one could create different identities and profiles across different services. For example, one identity might be associated with one's mobile phone, another might be associated with the surveillance cameras in one's workplace, and another might be associated with the RF transponder in one's car. The information an inquirer collects would depend on which intermediate service supplies him with the subject's identity and profile. Identity management norms and techniques might arise around the exploitation of multiple identification services, just as they have around the use of multiple email addresses.


Hence we believe managing the disclosure of static personal information in ubiquitous computing is not amenable to a general interaction framework. The challenges lie instead in the design of ubiquitous computing authentication and deauthentication mechanisms and in organizational support for exposing information practices to the user. This latter challenge can be partially addressed by the provision of disclosure logs. One can envision logging into one's mobile phone provider's website to access all personal information transactions conveyed through that service and to initiate recourse in the case of a dispute, much as one logs into one's bank's website today to do the same with respect to financial transactions.

6.2.1.3 Managing Dynamic Information

Having delegated the management of static information disclosure to market and normative evolution, we focus our new framework, and the remainder of this chapter, on the management of dynamic information disclosure. Unlike any attempts at integrated identity management, we believe our revised framework for dynamic information management is simple enough to be supported across ubiquitous computing service providers, either through a lookup service or through locally stored preferences.

6.2.2 Collapsed Information Dimensions

The new framework eliminates the face abstraction and collapses the dimensions of dynamic information. This drastically simplifies the basic notion behind blurring disclosed information. For a given inquiry, a single precision preference applies across all requested information dimensions. For example, if Alice discloses vague information to Bob, then any and all dimensions he requests, e.g., location, activity, and/or companions, are disclosed vaguely. She does not specify different precisions for each dimension. Fine-grained control could be offered at an advanced level but the framework does not emphasize it.

This simplifies configuration of the system and maintenance of the user's conceptual model, because the configuration space shrinks drastically. For any given inquiry, there are four possibilities of disclosure, corresponding to the ordinal precision scale, instead of twelve possibilities for dynamic information (four precision levels times three information dimensions) and an amount not worth calculating for static information. We believe this should have a considerably favorable impact on the user's ability to predict the level at which his information is disclosed in response to a given inquiry. That is, it should shrink the abstract-situated gap.

The collapsing of information dimensions also eliminates an arguably false distinction between the dimensions of dynamic information. While location, activity, and companions can be conceptualized independently, their values tend to correlate. In other words, the more an observer knows about where someone is or whom he is with, the more she can infer about his activity. We might even argue that the various dimensions of a person's context are all just transformations of the description of a situated activity. This aligns with our subjects' confusion between situation and activity. They often mistook one for the other and, indeed, they are arguably the same.


Although mundane examples would support this view, we will use a somewhat extreme example to illustrate it. If Alice’s privacy management system discloses to her husband, Bob, her location precisely (“Motel Six”), her companion vaguely (“one person”), and her activity as undisclosed, Bob has enough information to reasonably infer her activity anyway. But if all of Alice’s information is vaguely disclosed as “meeting with someone in San Francisco”, then the only information Bob has to suspect foul play is the meta-information contained in the information precision: why would Alice disclose this information to Bob at a vague precision level? But as we will show below in our discussion of plausible deniability, there can rightly be enough ambiguity around this meta-information to mitigate the likelihood of suspicion and more accurately reflect established nuances of social disclosure.
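To make the collapsed dimensions concrete, here is a minimal, hypothetical sketch in which one precision level governs every requested dimension of a disclosure at once. The place names and blur tables are invented for illustration; a real system would derive them from sensors and some generalization service.

```python
# Hypothetical blur tables: each dynamic dimension pre-computed at every precision.
ALICE_CONTEXT = {
    "location":   {"precise": "Mel's Diner, 123 Market St",
                   "approximate": "downtown San Francisco",
                   "vague": "San Francisco"},
    "activity":   {"precise": "lunch meeting",
                   "approximate": "meeting",
                   "vague": "busy"},
    "companions": {"precise": "Bob and Carol",
                   "approximate": "two colleagues",
                   "vague": "some people"},
}

def disclose(context, precision):
    """Apply one precision level to every requested dimension (no per-dimension settings)."""
    if precision == "undisclosed":
        return {}
    return {dim: levels[precision] for dim, levels in context.items()}

print(disclose(ALICE_CONTEXT, "vague"))
# {'location': 'San Francisco', 'activity': 'busy', 'companions': 'some people'}
```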

6.2.3 Eliminate Encapsulation

As mentioned, the new framework eliminates the face abstraction. The ordinal precision scale is exposed explicitly, say, as "Precision," throughout the interaction process, much like volume control is exposed as "Volume" throughout TV and stereo interaction designs. Beyond the design rationale described above, this decision eliminates a semantic inconsistency, namely, that the term "face" seems to apply reasonably well to the notion of identity, but not to dynamic information like location, activity, and companions.

The framework also eliminates the situation abstraction. Contextual factors are exposed for what they are, rather than aggregated into situations. Precision preferences can be parameterized by any valid contextual factors. Users assign each group of inquirers (more on groups below) a default precision level and, optionally, precision levels for specific contexts. For example, Alice can specify that, by default, information should be disclosed to Bob at an approximate precision level, but that after 6pm information should be disclosed at a precise level, and that when she is in San Francisco information should be disclosed at a vague precision level. Multiple contextual parameterizations for a given group and precision level are considered disjunctively, i.e., if any one of these contextual conditions is met, then disclose at this precision level. Within this disjunctive set, compound, conjunctive parameterizations (e.g., in San Francisco and after 6pm) are allowed. These are effectively the same as situations from the earlier framework, but they are exposed as the contextual parameters they are, instead of being masked behind semantic indirection.

Resolution of context conflicts (e.g., Alice is in San Francisco, which requires that she disclose to Bob vaguely, but it is also after 6pm, which requires that she disclose precisely) can be handled in various ways. A simple option might be to choose the most privacy-preserving level (in this case, vague). A more complex option would be to allow the user to prioritize the multiple contextual parameterizations (e.g., that it is after 6pm overrides that Alice is in San Francisco).
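One possible reading of these rules, strictly as a sketch and not a specification, is a disjunctive list of conjunctive predicates per group, with the most privacy-preserving level chosen on conflict. The predicates and context values below are hypothetical.

```python
# Each rule: a conjunctive set of contextual predicates plus the precision to use
# when all of its predicates hold. Rules for a group are evaluated disjunctively.
ORDER = ["undisclosed", "vague", "approximate", "precise"]  # least to most revealing

def resolve_precision(rules, default, context):
    """Return the precision for one inquirer group given the current context.

    rules   -- list of (predicates, precision); predicates is a list of functions
    default -- precision used when no rule matches
    context -- dict of current contextual values (location, time, ...)
    """
    matched = [prec for predicates, prec in rules
               if all(p(context) for p in predicates)]
    if not matched:
        return default
    # Conflict resolution: choose the most privacy-preserving matching level.
    return min(matched, key=ORDER.index)

# Alice's hypothetical rules for the group containing Bob:
bob_rules = [
    ([lambda c: c["hour"] >= 18], "precise"),               # after 6pm
    ([lambda c: c["city"] == "San Francisco"], "vague"),    # while in San Francisco
]
print(resolve_precision(bob_rules, "approximate",
                        {"hour": 19, "city": "San Francisco"}))  # -> 'vague'
```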

6.2.4 Fundamental Support for Groups

The refined framework emphasizes fundamental support both for groups of (potentially antagonistic) inquirers and for groups of cooperative users.


6.2.4.1 Inquirer Groups

Users create any number of groups and categorize inquirers into them. Users assign a default precision level to each group and optionally parameterize precision levels according to valid contextual values, as described above. We believe most users would maintain only a small set of top-level groups, e.g., friends, family, colleagues, and organizations. Generally, the first three of these examples are trusted (which correlates with but is not synonymous with "familiar"), while the last is untrusted (correlating with but not synonymous with "unfamiliar"). However, many people also maintain relations that fall into the inverse of the norm: untrusted friends, family, and colleagues, and trusted organizations. Hence we cautiously suggest that the typical user might maintain about eight groups: trusted and untrusted sets of friends, family, colleagues, and organizations. Users might partition each of these groups further, e.g., the friends group might contain categories reflecting actual partitions of the user's social network, but we believe the user would often apply the same disclosure preference across these partitions.

An alternative configuration of groups might correspond to the user's familiarity with her inquirers, ignoring canonical divisions between friends, family, and colleagues. The user might have, say, four groups along the lines of very familiar, familiar, marginally familiar, and unfamiliar. Our point is that, while users have arbitrary flexibility in partitioning and managing groups, the number of groups would arguably remain manageably small.

We further suggest that users would typically refrain from actively modulating the information disclosed to untrusted inquirers. They would set default, and perhaps conditional, precision levels for those parties and then leave those settings largely unchanged. We believe users are more likely to actively modulate disclosure precisions for trusted and familiar inquirers, further minimizing typical configuration effort.

6.2.4.2 Socially-influenced Configuration

The revised framework supports groups at both the interaction level and the infrastructural level. Deep support for groups is critical, for we are never truly independent actors. This is particularly true in relation to privacy, a fundamentally social concept. Phil Agre writes, "So long as privacy issues are located solely in the lives and relationships of individuals, it will be impossible to conceptualize either the potentials or the dangers of new media technologies for a democratic society" (Agre 1997).

Interpreting group partitions and disclosure precision preferences as indications of familiarity and trust, a system based on this framework could, with permission, recursively crawl the web of its users, inferring social networks from the associations made explicit in group configurations. From there, it can optimize a user's precision levels in relation to socially-proximate users or according to collaboratively filtered settings.


The power of this framework is compound. As described, it can extend the individual’s established information management practices into and across ubiquitous computing systems. But it can also empower social information management. It can reinforce group norms and expose information asymmetries by exploiting knowledge of users’ social networks, implicit in their group configurations, to suggest or actively configure disclosure settings for various inquirers and contexts.
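One speculative way a service might act on this: treat group configurations as edges in a social graph and suggest, for a new inquirer, the precision that socially proximate users most commonly assign to that inquirer. The sketch below is hypothetical and ignores the consent and weighting issues a real deployment would need to address.

```python
from collections import Counter

def suggest_precision(user, inquirer, graph, settings, default="vague"):
    """Suggest a precision for an inquirer based on the user's social neighbors.

    graph    -- dict: user -> set of contacts (inferred from group configurations)
    settings -- dict: (user, inquirer) -> precision level already assigned
    """
    neighbors = graph.get(user, set())
    votes = Counter(settings[(n, inquirer)]
                    for n in neighbors if (n, inquirer) in settings)
    return votes.most_common(1)[0][0] if votes else default

# Hypothetical data: Alice's contacts already disclose to Bob approximately.
graph = {"alice": {"carol", "dave"}}
settings = {("carol", "bob"): "approximate", ("dave", "bob"): "approximate"}
print(suggest_precision("alice", "bob", graph, settings))  # -> 'approximate'
```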

6.2.5 Real-time and A Priori Configuration

The framework retains the a priori configuration feature but minimizes reliance on it. The complexity of the earlier framework disqualified real-time configuration beyond the manual override feature. The new framework, however, is simple enough to support real-time precision adjustments. With only four precision levels to choose from, instead of an arbitrary number of obscure faces, adjusting precision levels on-the-fly is feasible for a user with a sufficiently small set of groups. This is not to say we expect users to adjust precision levels for each inquiry, but they could adjust them according to situational changes, just as people turn off their mobile phone ringers in movie theaters and meetings.

The framework retains the manual override function, which blankets all inquirers in a veil of a single precision chosen by the user. Real-time configuration is functionally similar to manual override but operates at a per-group granularity instead of comprehensively.

6.2.6 Ambiguity and Plausible Deniability

By facilitating ambiguous disclosure and creating the conditions for plausible deniability, the framework becomes a tool with which people can extend their nuanced practices of social boundary negotiation to the domain of sensing systems.

6.2.6.1 Embedding User Controls for Ambiguity into Existing Practices

The effort the framework requires of end-users falls into two categories: group management and precision adjustment. Group management is the heavier task. Each new inquirer needs to be categorized and the potential number of inquirers is high. However, people already conduct this sort of group partitioning, as reflected in the management of mailing list subscriptions, email folders, instant messaging clients, address books, and social network services like Friendster. For a system based on our framework, group categories might be inferred from these established instances of a user's social organization. Or they might be created through techniques already familiar to end-users, e.g., email challenge-response as verification of interest in a new inquirer. The point here is that, although group management is not a lightweight task to be undertaken in a mobile computing environment, it is a practice with which users are already familiar.

The newer task, then, is precision adjustment. Users would adjust disclosure precisions a priori or on-the-fly or both. The new framework's simple affordances for adjusting precisions on-the-fly are what facilitate its embedding into existing social practices. A small number of groups to manage, and the short ordinal precision scale, together mean it takes only a moment to blur (or unblur) one's information with a couple of button presses on a mobile phone. Indeed, this is precisely what the "Be Invisible" feature of AT&T's mMode Find Friends service does; a few button presses and no one can locate you until you disengage the feature. Our framework extends that feature beyond the binary visible/invisible options, supporting more nuanced boundary management.

6.2.6.2 Enforcing Ambiguity at the Inquirer's End

When an inquirer receives imprecise information, he cannot readily determine the cause of the imprecision. It may be the result of the subject's directive. It may be the result of signal noise or some other stochastic technical factor. It may be imposed by the organization governing the subject's current location. It would be far easier and more socially acceptable for the inquirer to blame the imprecision on the combinatorial space generated by all of these possibilities, to effectively blame it on the wind, than it would be to harbor or communicate suspicion of the subject's trust in the inquirer.

Even if the inquirer is given some meta-information explaining the cause of the imprecision, such as an error message (e.g., weak signal, response timeout, network load, missing data), the reported cause could be the result of heuristic algorithms operating on behalf of the subject to create plausible conditions of ambiguity. This is similar to the contemporary phenomenon of a mobile phone service informing a caller that her call cannot be completed because the recipient's phone is out of range, when the recipient may have intentionally turned off the phone, perhaps even in response to that specific incoming call. The prevalence of voicemail, of course, alters this phenomenon, but the underlying boundary management practice remains.
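A subject-side agent might realize such heuristics roughly as follows; this is only a sketch, and the message strings and weights are invented for illustration.

```python
import random

# Hypothetical pool of technically plausible causes of imprecision. Weights might
# be tuned to local conditions (coverage maps, battery state, time of day).
PLAUSIBLE_CAUSES = [
    ("weak signal", 0.4),
    ("response timeout", 0.3),
    ("sensor data unavailable", 0.2),
    ("network load", 0.1),
]

def explain_imprecision():
    """Pick an ambiguous, technically plausible explanation for an imprecise disclosure."""
    causes, weights = zip(*PLAUSIBLE_CAUSES)
    return random.choices(causes, weights=weights, k=1)[0]

# The inquirer sees only, say, "vague location (weak signal)" and cannot tell
# whether the imprecision was intentional, contextual, or genuinely technical.
print(explain_imprecision())
```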

6.2.6.3 Bilateral Ambiguity Facilitates Plausible Deniability

Ambiguity on both ends, i.e., in the subject's precision scale and in the inquirer's comprehension of the source of any imprecision, creates a flexible platform for social boundary negotiation. Depending on how much feedback the system at hand provides, Alice might only guess at the precise, approximate, and vague versions of her personal information in a given situation. Similarly, Bob has to accept the values returned to him, for, in most cases, determining the degree to which the ambiguity was intentional would incur too great a social and logistical cost. Indeed, if Bob requires less ambiguous information, he can obtain it through more conventional means, e.g., a phone call or text message.

Bilateral ambiguity creates the opportunity for plausible deniability. Inquirers would generally lack enough evidence to suspect mistrust on the part of the subject when enough uncertainty persists in the set of conditions that can cause imprecise disclosures. With such a system, users can manage their self-presentations in ways amenable to traditional, embodied practices.

6.2.7 Disclosure Log

We have primarily addressed elements of control in this chapter, not feedback. Our view of disclosure feedback in ubiquitous computing remains the same as it was in the previous framework: the disclosure log is the key.


One can imagine a desktop application that downloads disclosure logs from the user’s various ubiquitous computing services and integrates them into a single, searchable, sortable database. This is not unlike Quicken or Microsoft® Money, two popular personal finance applications that effectively do just that with respect to one’s financial accounts. Instead of or in conjunction with this, each service could provide access to the user’s disclosure log through an authenticated website, just as mobile phone service providers do today for call logs. Providing real-time disclosure feedback through the user’s mobile phone or proximate embedded devices is a technical possibility, but we suspect the sheer number of disclosure events would make this a disagreeable option. Nonetheless, providing notice of data collection after the fact is a controversial approach. We believe that with adequate legal and organizational support for convenient access and recourse, disclosure records that subjects find disagreeable after the fact can be amended, or subjects can reconfigure their preferences to avoid similar disclosures in the future, or both.
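In the spirit of the personal-finance analogy, a log aggregator might be sketched as follows. The export format and schema are hypothetical, and a real service would add authentication and recourse mechanisms.

```python
import sqlite3

# Hypothetical per-service export format: (timestamp, inquirer, dimension, precision, value)
def import_logs(db_path, service_name, records):
    """Merge one service's disclosure records into a single local database."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS disclosures
                   (ts TEXT, service TEXT, inquirer TEXT,
                    dimension TEXT, precision TEXT, value TEXT)""")
    con.executemany("INSERT INTO disclosures VALUES (?,?,?,?,?,?)",
                    [(ts, service_name, inq, dim, prec, val)
                     for ts, inq, dim, prec, val in records])
    con.commit()
    con.close()

def disclosures_to(db_path, inquirer):
    """List everything ever disclosed to a given inquirer, most recent first."""
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT ts, service, dimension, precision, value "
                       "FROM disclosures WHERE inquirer=? ORDER BY ts DESC",
                       (inquirer,)).fetchall()
    con.close()
    return rows
```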

6.3 Some Criticisms of the Framework

Since we have not instantiated our revised framework in a prototype or evaluated it with potential users, we can only validate it analytically. It would need further design iterations before any significant deployment. In lieu of a robust evaluation, we raise and address here some criticisms of the framework.

6.3.1 End of Life for Fair Information Practices?

Agre suggests that the data protection model of privacy regulation, originating with Alan Westin and finding regulatory expression in the fair information practices, does not align with the nature and extent of data collection in ubiquitous computing scenarios. This model results from information processing metaphors that will grow outdated as pervasive, real-time monitoring increases in scope (Agre 1997). Since we intend our framework to support the everyday privacy model outlined earlier in this report, which emphasizes the everyday utility of feedback (notice) and control (consent) for shaping users' comprehension of their outbound personal information flow, we recognize that a regulatory successor to the data protection model would, in theory, require adjustment to our framework.

Nonetheless, the fair information practices do not appear to be going anywhere anytime soon, for better or for worse. They remain the only operational model of personal information protection with which legislators and executives are familiar. Further, regardless of the worth or pertinence of notice and consent, feedback and control remain fundamentally important elements of user interaction design (Norman 1988).

6.3.2 The Constraints of Metaphor

Some metaphors are obvious, some are not. In eliminating the face metaphor, we escaped a metaphorical misalignment that may have confused users (Exactly what does one's face, i.e., the flesh and blood face, have to do with one's location, activity, or companions?). A strong metaphor, like the face, can lead to incorrect expectations on the part of the user (Can one use a happy face? Why can't one see the face?). So we are rid of it. But in its place we have elevated the notion of precision, itself a metaphor but a less obvious one.

The framework would seem to hold that adjusting the precision of the information one discloses is tantamount to managing one's privacy. Some reflection reveals this as a stretch of definitions. The notion of privacy, as we have tried to establish, operates at a level of social, political, technical, and economic complexity that could not possibly be collapsed into four ordinal notches on an arbitrary scale of precision. Indeed, we question whether there could ever be a reasonably accurate metaphor for privacy.

Nonetheless, precision, as we have positioned it herein, is not a metaphor for privacy. It is a metaphor for, and an approximate measure of, the density of disclosed information. And, importantly, its mutability is a tool that people can use, in conjunction with their cultural sensibilities and practices, to manage their self-presentation to inquiring parties. Precision is not privacy; it is a partial and simple means by which privacy can be managed in a ubiquitous computing world.

6.3.3 Infrastructural Support

We have intentionally focused on the interaction level of privacy management and avoided its infrastructural requirements. Others are addressing infrastructural support for privacy, including cryptographers, e.g., (Canny 2002), and context-awareness researchers, e.g., (Hong, Boriello et al. 2003). Our framework nonetheless contains certain implications for infrastructural support. Among them are:

Universal support for precision adjustment: For a derivative framework to succeed commercially, all sensing systems to which its users are exposed must support the ordinal precision scale (or reasonable translations of it) and they must ensure that users' preferences will be upheld, lest inquirers favor deviant systems. We suspect that bottom-up market pressure might lead to universal support for such a framework, though the passage below on backchannel communication is also relevant here.

Heuristically selected error messages: Service providers on both the subject's and the inquirer's ends must support error messages ambiguous enough to support plausible deniability. Nonetheless, if this does not occur, we suspect norms supporting plausible deniability would co-evolve with the technology. For example, if Bob is upset with Alice's vague disclosure, she might explain it away by telling him that in that particular situation she didn't want certain people who share Bob's group to obtain her precise information, so she applied the vague filter to Bob's entire group, not Bob alone.

6.3.4 Backchannel Communication

It would seem that absolute universal support for honoring subjects' preferences is impossible. Today, Carol can surreptitiously, even incidentally, photograph Alice with a camera in her mobile phone and immediately post the photograph on her website, where Bob might see it. With our framework in place, even if Alice's preferences called for vague disclosure to Bob, it is legally, normatively, and technically unreasonable to expect some part of the infrastructure to blur Alice's image in Carol's photograph. Nonetheless, there is a qualitative difference between structured sensing systems under organizational control, e.g., RFID tags, and unstructured technologies in the hands of the public, e.g., cameras and the WWW. We believe our framework is applicable to the former only.

There will always be communicative backchannels. Privacy is ultimately an exercise of the subject's ability to coordinate the range of contemporary social and technical disclosure management mechanisms at his disposal. This includes managing both obvious disclosures and backchannel communication. Nonetheless, it is the responsibility of technologists to design technical mechanisms for usability and in such a way as to elegantly support the coordination of social practices.

6.3.5 Information Usage

The revised framework, like its predecessor, does not allow for automatic precision adjustment in response to intended information usage. Precision can be adjusted automatically according to inquirer and context only. Our rationale for this is compound. First, we believe that information usage correlates with the inquirer's identity. People are increasingly aware of the information processing practices of organizations, and they have intuitive understandings of what their personal contacts might do with disclosed information. Usage is largely implicit in the subject's knowledge of the inquirer and may therefore be superfluous to a privacy user interface. Second, while organizations may be legally bound to declare information usage (depending on jurisdictional laws), few social conventions require personal contacts to state the intended usage of information prior to collection. Usage is unlikely to be encoded into many, if not most, inquiries. Third, with respect to organizational inquiries, disclosure log entries can either declare the usage or link to the organization's privacy policy, wherein the range of usages is declared.

6.3.6 Inferring Precise Information from Histories

Consider the case in which Alice allows Bob to learn her precise location over some period of time, say, a couple of hours, then abruptly chooses to blur her disclosure to Bob while she is in a certain sensitive location. If Bob has been monitoring her precise location throughout that time, then it would be trivial for him to infer her location when it becomes vague, which is clearly counter to Alice's preference.

Our response to this is twofold. First, there are reasonable technical ways to address this, e.g., by regulating the rate of inquiries from any given inquirer; if Bob can only inquire once every ten minutes, then his ability to infer Alice's location is weakened. Second, and more importantly, Alice and Bob need to talk this out. Why is Bob monitoring Alice so closely? If he is untrusted, why is Alice disclosing precise versions of her information to him? Why is she disclosing anything to him at all? Perhaps Alice should contact the authorities. The point here is one we have come back to a number of times in this chapter: privacy is a socio-technical matter, not a purely technical one. It cannot be "solved" with technical innovations or interaction frameworks. It can only evolve as such resources converge with norms, laws, and other forces to carve out new definitions of private and public "space." This point is worth stressing, for it is often overlooked or brushed aside by the engineering perspective.
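For the first, purely technical half of this response, a per-inquirer rate limit is one plausible mitigation. The sketch below uses a hypothetical ten-minute interval and does not substitute for the social resolution just described.

```python
import time

class InquiryRateLimiter:
    """Allow each inquirer at most one answered inquiry per interval (default: 10 minutes).

    Coarsening the sampling rate weakens an inquirer's ability to reconstruct a
    precise trajectory from a history of precise disclosures; it does not remove
    the need for the social negotiation discussed above.
    """
    def __init__(self, interval_seconds=600):
        self.interval = interval_seconds
        self.last_answered = {}  # inquirer -> timestamp of last answered inquiry

    def allow(self, inquirer, now=None):
        now = time.time() if now is None else now
        last = self.last_answered.get(inquirer)
        if last is not None and now - last < self.interval:
            return False  # too soon; return the previous or a blurred value instead
        self.last_answered[inquirer] = now
        return True
```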

6.4 Closing Remarks

6.4.1 Impression Management

It is worth noting that the revised framework still supports Goffman's theory of impression management and, indeed, supports it more faithfully than its predecessor. This is because, rather than trying to be the singular medium of impression management, it merely offers a simple tool for the user to incorporate into his set of techniques for intuitive impression management. Social actors can manage their representations in part through the framework, rather than in whole within the framework. It enables privacy management, instead of manifesting it. This is a more faithful reflection of our natural impression management practices, for in the real world, our fronts are ambiguous mental representations in the audience's mind; they are not material masks held before our faces.

6.4.2 Flexibility and Coevolution

It is worth examining our framework in the light of Palen's findings that privacy-sensitive systems should be flexible enough to support nuanced and variable usage practices across populations and individuals, and they should be amenable to coevolution (Palen 1999). Our framework's ability to support nuanced and variable usage practices rests in its simplicity. Rather than imposing an inflexible, top-down structure on disclosure practices, it can be wielded as much or as little as the user desires. Users can group and regroup their inquirers as needed, to optimize impression management. And they can cooperatively (or antagonistically) develop norms around contextually-dependent precision levels through support for socially-determined preferences.

6.4.3 Social Negotiation of Boundaries in Continuous Tension

In the light of (Palen and Dourish 2003), our revised framework strives to support established boundary management practices by refraining from asserting an overt structure on the mediation of impression through sensing systems. We leverage ambiguity rather than attempting to vanquish it. And we trust that users, too, can leverage this ambiguity, as they already do in their everyday negotiation of boundaries.

6.4.4 Situating the Revised Framework in the Privacy Space

Rather than vainly attempting to apply interactive ubicomp privacy management to the entire privacy space, our revised framework isolates a specific subspace that appears amenable to interaction support. In short, we advocate a sort of precision dial that might be afforded by a mobile phone or similar device and would allow the user to adjust the precision of the context information (e.g., location, nearby people) that known inquirers could acquire. It is intended for transaction-based systems and it considers context the primary content of the disclosure. It is intended for interpersonal disclosure to parties of variable familiarity to the user (familiarity might correlate with the user's inquirer groupings), though it may also be applicable to organizational disclosure. It is intended to manage the disclosure of activity (as context), not personae. The dial itself is a control mechanism. The disclosure log would provide feedback, which might also be provided on the fly according to the user's preferences.

It is our belief that this particular subspace of the privacy space, more than any other, is amenable to interactive management on the fly. And beyond being technically and commercially practical, it is normatively compatible, because it is a simple, intuitive way for people to negotiate interpersonal disclosure boundaries and manage their fragmented identities amongst the rich information flows and collapsed distances that the ubiquitous computing world brings to bear on our everyday lives.


PART THREE

PERSONAL PRIVACY THROUGH UNDERSTANDING AND ACTION

Man's problem-solving capability represents possibly the most important resource possessed by a society.

--D.C. Engelbart, 1962


7 Five Pitfalls to Avoid when Designing for Privacy

In previous chapters, we reviewed related work, conducted an analytical deconstruction of the privacy space, and described the iterative design of an interaction framework that ultimately addresses a subspace of that space. In this chapter, we distill from these experiences and from existing privacy-affecting systems a set of design guidelines that can be applied to the design of privacy-affecting systems on or off the desktop. This chapter constitutes our most actionable contribution to the field, for it gives designers a conceptual tool that can be immediately used to improve systems' handling of users' privacy needs. Such improvements are needed, for, despite the abundance of research and design knowledge, many systems still make it hard for people to maintain privacy on and off the desktop.

One possible explanation for why designing privacy-sensitive13 systems is so difficult is that privacy simply lives up to its name. Instead of exposing an unambiguous public representation of itself for all to see and comprehend, it cloaks itself behind an assortment of meanings depending on who's watching. When sociologists look at privacy, they see nuanced processes that engineers overlook. When cryptologists look at privacy, they see technical mechanisms that everyday people ignore. When the European Union looks at privacy, it sees moral expectations that American policymakers do not. Amidst this fog of heterogeneous practices, technologies, and policies that characterize privacy, designers of interactive systems face increasing market pressure and a persistent moral imperative to design privacy-sensitive systems.

This chapter cannot dispel that fog, but it does attempt to shine some light through it by offering a partial set of guidelines for designers of privacy-affecting interactive systems, on and off the desktop. We say "partial set" because this chapter is not a self-contained how-to guide. We do not mean to imply that systems that follow these guidelines will decidedly support privacy. We do mean to imply that systems that ignore any of these guidelines without careful rationale face significant risk of disrupting or inhibiting users' abilities to manage their personal privacy. For this reason, we present our guidelines as a set of pitfalls to avoid when designing privacy-affecting systems. Avoiding a pitfall does not ensure success, but ignoring one can potentially lead to disaster.

In addition to using our guidelines, designers of privacy-affecting ubiquitous computing systems should consult Bellotti and Sellen's framework for feedback and control (Bellotti and Sellen 1993), Langheinrich's translation of the fair information practices (Langheinrich 2001), Palen and Dourish's sociologically informed analysis of privacy as boundary negotiation (Palen and Dourish 2003), and Jiang et al.'s principle of minimum asymmetry (Jiang, Hong et al. 2002). Our work synthesizes some of the core lessons of those frameworks to inform our analysis of common privacy problems we identified across a broad range of existing systems.

[13] We will use the term privacy-affecting as a general description of any interactive system whose use has personal privacy implications. We will use the term privacy-sensitive to describe any privacy-affecting system that, by whatever metrics are contextually relevant, successfully avoids invading or disrupting personal privacy. This chapter is intended to help minimize the number of privacy-affecting systems that are not privacy-sensitive.

7.1 Common Design Flaws

There has been a tremendous amount of research on privacy in the context of technical systems. For example, many polls have shown considerable public concern about privacy on the Internet (Cranor, Reagle et al. 2000; Taylor 2003; Turow 2003). There have also been interviews and surveys exploring privacy design issues for context-aware systems (Harper, Lamming et al. 1992; Kaasinen 2003; Lederer, Mankoff et al. 2003); studies exposing privacy perceptions and practices in groupware (Palen 1999), multimedia environments (Adams 2000), and location-aware systems (Beckwith 2003); and experiments revealing usability problems affecting privacy in email (Whitten and Tygar 1999) and file-sharing (Good and Krekelberg 2003) applications.

Despite this abundance of research and design knowledge, many systems still make it hard for people to manage their privacy on and off the desktop. We suggest this is because the designs of these systems inhibit people’s abilities both to understand the privacy implications of their use and to conduct socially meaningful action through them. We further suggest that designs that avoid our five pitfalls will go a long way towards helping people achieve the understanding and action that personal privacy regulation requires.

Although some of these pitfalls may appear obvious, we will demonstrate below that many systems continue to fall into them. Some of these systems have encountered privacy controversies (e.g., web browsers), while others that have avoided the pitfalls have enjoyed considerable commercial or social success (e.g., instant messaging). Our investigation into these pitfalls began when we fell into them ourselves in the design of Faces. Despite the input of our formative interviews, surveys, and literature review, an evaluation indicated a series of fundamental missteps in our design rationale. Further analysis showed that these missteps fall into several categories common to many existing commercial and research systems. While we cannot enumerate every possible privacy design flaw, we can offer these categories—formulated as design pitfalls—to the design community as cause for concern.

To help designers remember these pitfalls, we have clustered them into those that primarily affect users’ understanding of a system’s privacy implications and those that primarily affect their ability to conduct socially meaningful action through the system.

Concerning Understanding

1. Obscuring potential information flow. Designs should not obscure the nature and extent of a system’s potential for disclosure. Users can make informed use of a system only when they understand the scope of its privacy implications.

2. Obscuring actual information flow. Designs should not conceal the actual disclosure of information through a system. Users should understand what information is being disclosed to whom.

Concerning Action

3. Emphasizing configuration over action. Designs should not require excessive configuration to manage privacy. They should enable users to practice privacy as a natural consequence of their normal engagement of the system.

4. Lacking coarse-grained control. Designs should not forgo an obvious, top-level mechanism for halting and resuming disclosure.

5. Inhibiting established practice. Designs should not inhibit users from transferring established social practices to emerging technologies.

Before further articulating and providing evidence supporting these suggestions, it is worth revisiting what we mean by personal privacy and articulating what we mean by understanding and action.

7.2 Personal Privacy

Legal and policy scholar Alan F. Westin asserted that “no definition of privacy is possible, because privacy issues are fundamentally matters of values, interests and power” (Westin 1995) (as quoted in (Gellman 1998)). We will not be so bold as to define privacy, but we will attempt to qualify, within the scope of this chapter, the phrase personal privacy.

Years prior to the assertion quoted above, Westin described information privacy as “the claim of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others” (Westin 1967). Largely intended for policymakers, the reasoning behind this formulation served as the basis for the fair information practices, a set of flexible policy guidelines that continue to shape privacy legislation throughout the world. Since many privacy-affecting interactive systems are developed or deployed by organizations beholden to some interpretation of the fair information practices, Westin’s formulation is a good place to start when elucidating personal privacy to designers.

But we cannot end there, for there is more to privacy than this rather deterministic, libertarian formulation conveys. Building on the work of social psychologist Irwin Altman (Altman 1975), Palen and Dourish offer a more organic, sociologically-informed view that “privacy management in everyday life involves combinations of social and technical arrangements that reflect, reproduce and engender social expectations, guide the interpretability of action, and evolve as both technologies and social practices change” (Palen and Dourish 2003). In this sense, privacy is less about a definitive entitlement to determine the flow of one’s personal information and more about the intuitive fulfillment and maintenance of one’s compound roles in the evolving, overlapping socio-technical contexts of everyday life.

While neither formulation excludes the other, one might say—at the risk of oversimplification—that Westin’s formulation emphasizes privacy as conscious process, while Palen and Dourish’s emphasizes privacy as intuitive practice. Clearly, however, people regulate their privacy in ways both deliberate and intuitive. Drawing directly from each formulation, then, what we are trying to signify by the phrase personal privacy is this set of both deliberate and intuitive practices by which an individual exercises her claim to determine personal information disclosure and which constitute, in part, her participation in the co-evolving technologies and expectations of everyday life.

A useful term that can make this discussion more concrete is Palen and Dourish’s genres of disclosure, which are “socially-constructed patterns of privacy management,” or “regularly reproduced arrangements of people, technology and practice that yield identifiable and socially meaningful styles of interaction, information, etc.” (Palen and Dourish 2003). Examples might include creating and managing accounts at shopping websites, taking (or not, as the genre may oblige) photographs at social events, exchanging contact information with a new acquaintance, and revealing personal history to strangers. These all involve recognizable, socially meaningful, “normal” patterns of information disclosure and use.

A genre of disclosure creates social expectations of its participants. Amidst a given genre, people expect each other to disclose this, to withhold that, and to use information in this but not that way. A person cooperates with (or antagonizes) a genre of disclosure through her performance of her role in that genre, and the degree to which a system does not align with that genre is the degree to which it fails to support the user’s (and genre’s) privacy regulation process. In this sense, what we call personal privacy in this chapter is simply the role the individual plays within a given genre of disclosure.

Of course, personal privacy can also include acting contrary to expectation. As technologies evolve, so do the practices that involve them. New modes and expectations of disclosure emerge as people both embrace and resist technologies and practices. Regardless of the case, a person can neither fully participate in nor effectively defy a genre of disclosure without understanding whether the system at hand aligns with that genre and without the ability to act in (or out of) alignment with it.

7.3 Understanding and Action

To be clear, we do not intend this dyadic formulation of understanding and action as a contribution to the theory of privacy, but simply as a conceptual framework for the arguments in this chapter. We frame our arguments using these straightforward terms in the hope of reaching as broad an audience as possible, for the sooner that designs improve their ability to support personal privacy regulation, the better.

With respect to genres of disclosure, we are proposing that a person cannot fulfill her role in the apposite genre of disclosure if she does not understand the degree to which the system at hand aligns with that genre and if she cannot conduct socially interpretable action involving the system. We suggest that a system that falls into any of our pitfalls without due rationale can disrupt its users’ abilities to appropriate it in accordance with the relevant genre of disclosure. In so doing, it would by definition disrupt the genre itself or—if it is an emerging genre—make it unnecessarily complex. A privacy-sensitive interactive system will sustain the appropriate genre of disclosure—and will help its users do the same—through cues, affordances, and functions that empower users to understand and influence their privacy implications.

Empowering understanding and action is similar in meaning to bridging Norman's gulfs of evaluation and execution (Norman 1988). We feel the terms we have chosen convey a richer sense of the social implications of privacy-affecting systems than do Norman’s terms, which seem to best address the perceptual/cognitive/motor problem of single-user human-device interaction. Privacy regulation does not conform to a plan-act-evaluate cycle; it is a continual, intuitive, multidimensional balancing act that requires nonlinear social dexterity. That said, at the end of this chapter we will examine another of Norman’s canonical contributions—his elucidation of the role of mental models in the design process—and extend it to accommodate the social dimension of the privacy design process.

The rest of this chapter is organized as follows. First, we describe the five pitfalls in detail, with illustrative examples from both our own and related work. We then discuss the pitfalls’ implications for the design process, including an extension of Norman’s elucidation of mental models. Finally, we offer negative and positive case studies of systems that, respectively, fall into and avoid the pitfalls.

7.4 Five Pitfalls in Designing for Privacy

Our pitfalls encode an analysis of common problems in interaction design across several systems, constituting a preventative guide to help designers avoid mistakes that may appear obvious but are still being made. Designers should carefully avoid them throughout the design cycle as appropriate. Naturally, not all of the pitfalls will apply to every system; they should be interpreted within the context of the system being designed.

The pitfalls fit into a history of analyses and guidelines on developing privacy-sensitive systems. They are, in part, an effort to reconcile Palen and Dourish’s theoretical insights about how people maintain privacy with Bellotti and Sellen’s practical guidelines for designing feedback and control to support it. In reaching for this middle ground, we have tried to honor the fair information practices, as developed by Westin (Westin 1967) and more recently promoted by Langheinrich (Langheinrich 2001), and to minimize information asymmetry between users and observers, as argued by Jiang et al. (Jiang, Hong et al. 2002).

7.4.1 Concerning Understanding: Two Pitfalls

Our first two pitfalls primarily involve the user’s understanding of the system’s privacy implications. Designs can enable this understanding by illuminating (1) the system’s potential for information disclosure and (2) the actual disclosures made through it.

7.4.1.1 Pitfall 1: OBSCURING POTENTIAL INFORMATION FLOW

To whatever degree is reasonable, systems should make clear the nature and extent of their potential for disclosure, lest they give false impressions about their implications for personal privacy. Users will have difficulty appropriating a system if the scope of its privacy implications is unclear. This scope includes the types of information the system conveys, the kinds of observers it conveys to, the potential for unintentional disclosure, the presence of third parties, and the collection of meta-information like traffic analysis.

Clarifying a system’s potential for conveying personal information is vital to users’ ability to predict the social consequences of using the system. Among the conveyable information types that should be exposed are identifiable personae (e.g., true name, email addresses, credit card numbers, social security number) and monitorable activities (e.g., locations, purchases, web browsing histories, communications, A/V records, social networks). People cannot maintain consistent identities without knowing which of their activities can be associated with which of their personae (Phillips 2002).

Privacy-affecting systems tend to involve both interpersonal disclosure (revealing sensitive information to another person) and organizational disclosure (companies or governments). Designs should clarify the potential involvement of each, making clear the extent to which primarily interpersonal disclosures (e.g., chat) involve incidental organizational disclosures (e.g., workplace chat monitoring) and, conversely, the extent to which primarily organizational disclosures (e.g., workplace cameras) involve secondary interpersonal disclosures (e.g., mediaspaces).

“Privacy” is a broad term whose unqualified use as a label can mislead users into thinking a system protects or erodes privacy in ways it does not. Making the scope of a system’s privacy implications clear will help users understand its capabilities and limits. This in turn provides grounding for comprehending the actual flow of information through the system, addressed in the next pitfall.

Evidence: Falling into the Pitfall

An easy way to obscure a system’s privacy scope is to present its functionality ambiguously. One example is found in Microsoft’s Windows operating systems. The Windows Internet control panel offers ordinal degrees of privacy protection (from Low to High) for Internet use. The functional meaning of this scale is unclear to average users and, as it turns out, this mechanism does not affect general Internet use through the operating system; its scope is limited to a particular web browser’s cookie management heuristics. Similarly, Anonymizer.com’s free anonymizing software can give the impression that all Internet activity is anonymous when the service is active, but in actuality it only affects web browsing, not email, chat, or other services. A for-pay version covers those services.

Another example is found in Beckwith’s report of an eldercare facility using worn transponder badges to monitor the locations of residents and staff (Beckwith 2003). Many residents perceived the badge only as a call-button (which it was) but not as a persistent location tracker (which it also was). They did not understand the scope of its privacy implications. Similarly, some hospitals use badges to track the location of nurses for efficiency and accountability purposes, but they can neglect to clarify what kind of information the system conveys. One concerned nurse wrote, erroneously, “They've placed it in the nurses' lounge and kitchen. Somebody can click it on and listen to the conversation. You don't need a Big Brother overlooking your shoulder” (Reang 2002).

Evidence: Avoiding the Pitfall

Many web sites that require an email address for creating an account give clear notice on their sign-up forms that they do not share email addresses with third parties or use them for extraneous communication with the user. Clear, concise statements like these help clarify scope. Another successful design is Tribe.net, a social networking service that carefully conveys that members’ information will be made available only to other members within a certain number of degrees of social separation.

7.4.1.2 Pitfall 2: OBSCURING ACTUAL INFORMATION FLOW

Having addressed the user’s need to understand a system’s potential privacy implications, we move now to the issue of actual instances of disclosure. To whatever degree is reasonable, designs should make clear the actual disclosure of information through the system. Users should understand what information is being conveyed to whom. The disclosure should be obvious to the user as it occurs; if this is impractical, notice should be provided within a reasonable delay. There should be just enough feedback to inform but not overwhelm the user. We will not dwell on this point, for it is perhaps the most obvious of the five pitfalls. We suggest Bellotti and Sellen (Bellotti and Sellen 1993) as a useful guide to exposing actual information disclosure.

By avoiding both this and the prior pitfall, designs can clarify the extent to which users’ actions engage the system’s range of privacy implications. This enables users to understand the consequences of their use of the system thus far, and it empowers them to predict the consequences of future use. In the Design Implications section, we will elaborate on how avoiding both of these pitfalls can support the user’s mental model of his personal information flow.

Evidence: Falling into the Pitfall

Web browser support for cookies is a persistent example of obscuring information flow (Millett, Friedman et al. 2001). Most browsers do not, by default, indicate when a site sets a cookie or what information is disclosed through its use. The prevalence of third-party cookies and web bugs (tiny web page images that monitor who is reading the page) exacerbates users’ ignorance of who is observing their browsing activities. Another example of concealed information flow is in the Kazaa P2P file-sharing application, which has been shown to facilitate the concealed disclosure of highly sensitive personal information to unknown parties (Good and Krekelberg 2003). Another simple example is locator badges like those described in (Harper, Lamming et al. 1992; Beckwith 2003), which generally do not inform their wearers about who is locating them.

Evidence: Avoiding the Pitfall

Friedman et al.’s redesign of cookie management improves browsers’ ability to show what information is being disclosed to what web sites (Friedman, Howe et al. 2002). Instant messaging systems often employ a symmetric design that informs the user when someone else wants to add her to his contact list, allowing her to do the same. By then letting users see and adjust their own status (e.g., “Busy” or “Out to Lunch”), they inform users who can see what about them. This gives individuals a better understanding of how they are presenting themselves. AT&T’s mMode Find Friends service, which lets mobile phone users locate other users of the service, informs the user when someone else is locating them. They learn who is obtaining what (their location).
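
As a rough illustration of this kind of feedback, the sketch below shows one way a location or presence service in the spirit of these examples might surface actual information flow by pairing a real-time alert with a reviewable disclosure log. It is a hypothetical construction of ours, not a description of any of the systems named above.

    # Hypothetical sketch: notify the user and append to a disclosure log every
    # time an observer obtains a piece of her personal information.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Callable, List

    @dataclass
    class Disclosure:
        observer: str          # who obtained the information
        info_type: str         # what kind of information (e.g., "location")
        value: str             # what was actually disclosed
        when: datetime = field(default_factory=datetime.now)

    @dataclass
    class DisclosureLog:
        notify: Callable[[Disclosure], None]        # real-time feedback channel (e.g., a phone alert)
        entries: List[Disclosure] = field(default_factory=list)

        def record(self, observer: str, info_type: str, value: str) -> Disclosure:
            d = Disclosure(observer, info_type, value)
            self.entries.append(d)   # retrospective feedback: a reviewable history
            self.notify(d)           # immediate feedback: "Alice just located you"
            return d

    # Example: the service records a disclosure whenever it answers a location query.
    log = DisclosureLog(notify=lambda d: print(f"{d.observer} obtained your {d.info_type}"))
    log.record("Alice", "location", "Downtown Berkeley")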

7.4.2 Concerning Action: Three Pitfalls

Our last three pitfalls primarily involve the user’s socially meaningful actions involving the system. These three pitfalls reflect the recognition that privacy regulation occurs not within technical parameters but in the social consequences of discernable actions involving technical systems.

7.4.2.1 Pitfall 3: EMPHASIZING CONFIGURATION OVER ACTION

Designs should not require excessive configuration to create and maintain privacy. They should enable users to practice privacy management as a natural consequence of their ordinary use of the system. Palen and Dourish write, “setting explicit parameters and then requiring people to live by them simply does not work, and yet this is often what information technology requires… Instead, a fine and shifting line between privacy and publicity exists, and is dependent on social context, intention, and the fine-grained coordination between action and the disclosure of that action” (Palen and Dourish 2003). But because configuration has become a universal UI design pattern, many systems fall into this pitfall.

Configured privacy breaks down for at least two reasons. First, in real settings users manage privacy semi-intuitively; they do not spell out their privacy needs in an auxiliary, focused effort (Whitten and Tygar 1999). Configuration imposes an awkward requirement on users, one they will often forsake in favor of default settings (Mackay 1991; Palen 1999). If users are to manage their privacy at all, it needs to be done in an intuitive fashion, as a predictable outcome of their situated actions involving the system. Second, the act of configuring preferences is too easily desituated from the contexts in which those preferences apply. Users are challenged to predict their needs under hypothetical circumstances, and they can forget their preferences over time. If they predict wrongly, or remember incorrectly, their configured preferences will differ from their in situ needs, creating the conditions for an invasion of privacy.

People generally do not set out to explicitly protect their privacy. Rather, they participate in some activity, with privacy regulation being an embedded component of that activity. Designs should take care not to extract the privacy regulation process from the activity within which it is normally conducted.

Evidence: Falling into the Pitfall

Many systems emphasize explicit configuration of privacy, including experimental online identity managers (Jendricke and Gerd tom Markotten 2000; boyd 2002), P2P file-sharing software (Good and Krekelberg 2003), web browsers (Millett, Friedman et al. 2001), email encryption software (Whitten and Tygar 1999), and our Faces prototype.

Evidence: Avoiding the Pitfall

Successful solutions can involve some measure of configuration, but tend to embed it into the actions necessary to use the system. Web sites like Friendster.com and Tribe.net allow users to regulate information flow by modifying their social networks—a process that is embedded into the use of these applications. Georgia Tech’s In/Out Board (Dey, Salber et al. 2001) lets users reveal or conceal their presence in a workspace by badging into an entryway device. Its purpose is to convey this information, but it can be intuitively used to withhold information as well, by falsely signaling your in/out status. Ignoring the moral implications, another example involves camera surveillance: when someone is aware of a camera’s presence, she tends to intuitively adjust her behavior to present herself as she wants to be perceived (Foucault 1977). Cadiz and Gupta propose a smart card that a user could hand to a receptionist to grant limited access to her calendar for scheduling an appointment; the receptionist would hand it back right afterwards, and no one would have to fumble with setting permissions. They also suggest extending scheduling systems to automatically grant meeting partners access to a user’s location in the minutes leading up to a meeting, so they can infer his arrival time. The action of scheduling a meeting implies limited approval of location disclosure (Cadiz and Gupta 2001).
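
The scheduling example lends itself to a brief sketch. The following hypothetical code—our own illustration, not Cadiz and Gupta’s implementation—shows how the act of booking a meeting could itself grant the meeting partner a time-limited permission to read the user’s location, so no separate configuration step is needed.

    # Minimal sketch, assuming hypothetical names: scheduling is the action,
    # and the disclosure policy rides along with it.
    from datetime import datetime, timedelta

    class LocationPermissions:
        def __init__(self):
            self._grants = {}   # observer -> (start, end) window of permitted access

        def grant_window(self, observer, start, end):
            self._grants[observer] = (start, end)

        def may_locate(self, observer, now=None):
            now = now or datetime.now()
            window = self._grants.get(observer)
            return window is not None and window[0] <= now <= window[1]

    def schedule_meeting(calendar, permissions, partner, start, lead_time_minutes=30):
        """Booking the meeting implicitly approves limited location disclosure."""
        calendar.append({"with": partner, "start": start})
        permissions.grant_window(partner, start - timedelta(minutes=lead_time_minutes), start)

    # The partner can see the user's location only shortly before the meeting.
    calendar, perms = [], LocationPermissions()
    schedule_meeting(calendar, perms, "bob@example.com", datetime(2003, 12, 1, 14, 0))
    print(perms.may_locate("bob@example.com", datetime(2003, 12, 1, 13, 45)))  # True
    print(perms.may_locate("bob@example.com", datetime(2003, 12, 1, 9, 0)))    # False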

7.4.2.2 Pitfall 4: LACKING COARSE-GRAINED CONTROL

Designs should offer an obvious, top-level mechanism for halting and resuming the disclosure of personal information. Often a power button or exit button will do the trick. Users are accustomed to turning a thing off when they want its operation to stop. Turning off information flow is an instinctive behavior that affects personal privacy. Beyond binary control, a simple linear control may also be appropriate in some cases (cf. audio devices’ mute and volume controls). Ubicomp systems that convey location or other context could incorporate both a precision dial and a hide button, so users can either adjust the precision at which their context is disclosed or decidedly halt disclosure.

In the general case, users can become remarkably adept at wielding coarse-grained controls to yield nuanced results (e.g., driving a car). Coarse-grained controls tend to reflect their state, providing direct feedback and freeing the user from having to remember whether she set a preference properly. This helps users accommodate the controls and even co-opt them in ways the designer may not have intended. Examples specific to privacy include: setting a door ajar, covering up or repositioning cameras (Bellotti and Sellen 1993; Jancke, Venolia et al. 2001), turning off a phone or using its invisible mode rather than navigating its privacy-related options, and removing a locator badge. While some fine-grained controls may be unavoidable, the flexibility that fine-grained controls are intended to provide is often neglected by users (see Pitfall #3). Flexibility in the control of privacy often comes not from within the system, but from the user’s nuanced manipulation of coarse-grained controls.

Evidence: Falling into the Pitfall

Many e-commerce web sites recommend to shoppers items that were purchased by other shoppers with similar shopping histories. While this is a useful service, there are times when a shopper does not want the item at hand to be included in his profile; he effectively wants to shop anonymously during the current session. Even though the merchant will know about the purchase, the shopper may not want his personalized shopping environment—which others can see—to reflect this private purchase. We have encountered no web sites that provide a simple mechanism for excluding the current purchase from our profiles. Similarly, most web browsers still bury their privacy controls under two or three layers of configuration panels (Millett, Friedman et al. 2001). Third-party applications that expose cookie control have begun to appear (e.g., GuideScope.com). Further, wearable locator-badges like those described in (Harper, Lamming et al. 1992) and (Beckwith 2003) do not have power buttons. One could remove the badge and leave it somewhere else, but simply turning it off would at times be more practical or preferable.

Evidence: Avoiding the Pitfall

Systems that expose simple, obvious ways of halting and resuming disclosure include easily coverable cameras (Bellotti and Sellen 1993), mobile phone power buttons, chat systems with invisible modes, the In/Out Board (Dey, Salber et al. 2001), and our Faces prototype, with a button on a handheld application that overrides current settings.
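
To make the pairing of a hide button with a precision dial suggested above concrete, here is a minimal, illustrative sketch of our own—not part of any system discussed in this section—in which the coarse-grained hide switch decisively overrides any finer-grained setting. The class and level names are assumptions chosen for the example.

    # Illustrative sketch of a coarse-grained context-disclosure control.
    class ContextDisclosureControl:
        PRECISIONS = ("nothing", "vague", "approximate", "precise")

        def __init__(self):
            self.hidden = False          # top-level halt/resume switch
            self.precision = "precise"   # simple linear control, like a volume dial

        def toggle_hide(self):
            self.hidden = not self.hidden

        def set_precision(self, level):
            if level not in self.PRECISIONS:
                raise ValueError("unknown precision level: %s" % level)
            self.precision = level

        def disclose(self, context_value, blur):
            """blur(value, level) renders the value at the chosen precision."""
            if self.hidden or self.precision == "nothing":
                return None              # nothing flows while the user is hidden
            return blur(context_value, self.precision)

    # The hide switch overrides any finer-grained setting.
    control = ContextDisclosureControl()
    control.toggle_hide()
    print(control.disclose("Cafe Strada", blur=lambda value, level: value))  # None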

7.4.2.3 Pitfall 5: INHIBITING ESTABLISHED PRACTICE

Designs should beware of inhibiting existing social practice. People manage privacy through a range of nuanced practices. For simplicity’s sake, we might divide such practices into those that are already established and those that will evolve as new disclosure technologies emerge. While early designs might lack elegant support for emergent practices, they can at least take care to avoid inhibiting established ones.

Nuanced social practices cannot evolve around a system until the system is deployed. Designers might take care to co-evolve their systems along with the practices that develop around them. Interestingly, despite being notoriously awkward at supporting social nuance (Ackerman 2000), technical systems that survive long enough in the field will often contribute to the emergence of nuanced practice regardless of whether they suffered from socially awkward design in the first place (e.g., (Green, Lachoee et al. 2001; boyd 2003)). In other words, nuance happens. Nonetheless, emergent practices are intrinsically difficult to predict and design for.

To facilitate adoption, designs should accommodate users’ natural efforts to transfer existing practices to them. Designers can identify and assess the relevant existing genres of disclosure into which their systems will be introduced. From there, they can identify, support, and possibly enhance the technologies, roles, relations, and practices already at play in those genres. Beyond genre-specific practices, certain meta-practices are worth noting. In particular, we emphasize the broad applicability of plausible deniability (whereby the observer cannot determine whether a lack of disclosure was intentional) (Nardi, Whittaker et al. 2000; Woodruff and Aoki 2003) and of disclosing ambiguous information (e.g., pseudonyms, imprecise location). These common techniques allow people to finesse disclosure through technical systems to achieve nuanced social ends. Systems that rigidly undermine meta-practices like plausible deniability and ambiguous disclosure will likely encounter significant resistance during deployment (Suchman 1994).

Evidence: Falling into the Pitfall

Some researchers envision context-aware mobile phones that can inform the caller of the user’s activity, to help explain why their call was not answered (Siewiorek, Smailagic et al. 2003). But this can prohibit users from exploiting plausible deniability. There can be value in keeping the caller ignorant of the reason for not answering. Location-tracking systems like those described in (Harper, Lamming et al. 1992) and (Beckwith 2003) constrain users’ ability to incorporate ambiguity into their location disclosures. Users can only convey a single precision of location or, at times, nothing at all.

Evidence: Avoiding the Pitfall

Mobile phones, push-to-talk phones (Woodruff and Aoki 2003), and instant messaging let users exploit plausible deniability by not responding to hails and not having to explain why. Although privacy on the web is a common concern, a basic function of HTML allows users to practice ambiguous disclosure: forms that let users enter false data facilitate anonymous account creation and service provision.

Tribe.net supports another subtle real-world practice. Tribe allows users to partition their social networks into “tribes,” thereby letting pre-existing groups represent themselves online, situated within the greater networks to which they are connected. In contrast, Friendster.com users each have a single set of friends that cannot be functionally partitioned.
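
As a concrete illustration of the plausible-deniability meta-practice described above, the sketch below—a hypothetical presence service of our own construction, not any of the systems cited—collapses every non-disclosure into a single undifferentiated answer, so an observer cannot tell an intentional refusal from signal loss or a powered-off device.

    # Illustrative sketch: every failure to disclose looks the same to the observer.
    UNAVAILABLE = "unavailable"

    def answer_presence_query(user_state):
        """user_state is a dict like {"reachable": bool, "willing": bool, "status": str}."""
        if not user_state.get("reachable"):      # out of coverage, device off, ...
            return UNAVAILABLE
        if not user_state.get("willing"):        # user chose not to respond
            return UNAVAILABLE                   # indistinguishable from the case above
        return user_state.get("status", UNAVAILABLE)

    # Both queries return "unavailable"; only the user knows which reason applied.
    print(answer_presence_query({"reachable": False, "willing": True, "status": "in a meeting"}))
    print(answer_presence_query({"reachable": True, "willing": False, "status": "in a meeting"}))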

7.5 Design Implications

Having described the five pitfalls and provided evidence of systems that fall into and avoid them, we now examine some of the deeper implications they have for design. We begin by elaborating on the influence of our first two pitfalls on the user’s mental model of his information disclosures. This leads to the introduction of a new conceptual tool to help the design process. Then we present an analytical argument for why designs that avoid our five pitfalls can support the human processes of understanding and action necessary for personal privacy maintenance. Using our Faces prototype as a case study, we then show how falling into these pitfalls can undermine an otherwise ordinary design process. Finally, we discuss some successful systems that have largely avoided the pitfalls.

7.5.1 Mental Models of Information Flow

As we said earlier, avoiding our first two pitfalls—obscuring potential and actual information flow—can clarify the extent to which users’ actions engage the system’s range of privacy implications. Users can understand the consequences of their use of the system thus far, and they can predict the consequences of future use. Illuminating disclosure contributes constructively to the user’s mental model of the portrayal of her identity in the context of the system. If she has a reasonable understanding of what observers already know about her (Pitfall 2) and of what they can learn about her (Pitfall 1), she can maintain and exploit this mental model to inform the ongoing portrayal of her identity through the system.

In the context of interactive systems, the personal information a user conveys is often tightly integrated with her interaction with the system. For example, by simply browsing the web, a user generates a wealth of information that can be used in ways that directly impact her life. When interaction and disclosure are integrated thusly, an informed user’s mental model of the system’s operation and her mental model of her disclosures are interdependent.

This suggests an extension to Norman’s canonical elucidation of the role of mental models in the design process. According to Norman, the designer’s goal is to design the system image (i.e., those aspects of the implementation with which the user interacts) such that the user’s mental model of the system’s operation coincides with the designer’s mental model of the same (Norman 1988). When we take into account the coupling of interaction and disclosure, we see that the designer’s goal has expanded. She now strives to design the system image such that the user’s mental models of the system’s operation and of the portrayal of his identity through it are both accurate. As with Norman’s original notion, ideally the designer’s and the user’s models of the system’s operation will coincide. But the designer generally cannot have a model of the user’s identity; that depends on the user and the context of use. Indeed, here the designer’s task is not to harmonize the user’s model of his information flow with her own (she likely has none), but to harmonize the user’s information model with the observer’s (Figure 7-1). In other words, she wants to design the system image to accurately convey a model not only of how other parties can observe the user’s behavior through the system, but also what they do observe. Generalizing this notion beyond privacy, to cooperative information flow in general, may be of further use to the CSCW community but is beyond the scope of this chapter.

Figure 7-1. Building on Norman, designers should strive to harmonize the user’s and the observer’s understandings of the user’s personal information disclosures.

7.5.2 Opportunities for Understanding and Action

We have argued that people maintain personal privacy by understanding the privacy implications of their socio-technical contexts and influencing them through socially meaningful action. When a technical system is embedded into a social process, the primary means that designers have to engender understanding and action are feedback and control mechanisms. We encourage designers of privacy-affecting systems to think of feedback and control mechanisms as opportunities for understanding and action. They are the designer’s opportunity to empower those processes, and they are the user’s opportunity to practice them.

Thinking thusly can help designers reach across what Ackerman calls the socio-technical gap—the difference between systems’ technical capabilities and their social requirements (Ackerman 2000)—just enough to empower informed social action. The challenge is to find that intermediate point where carefully designed technical feedback and control translates into social understanding and action. Reaching too far can overwhelm the user. Reaching not far enough can disempower him.

We believe that avoiding our pitfalls can help designers reach that intermediate point. Carefully designed feedback about potential (#1) and actual (#2) information flow can help users understand the representation and conveyance of their behavior through the system. Curtailing configuration (#3), providing coarse-grained control (#4), and supporting established practices (#5) can help people make productive, intuitive use of a privacy-affecting system. Designs that heed these suggestions make their consequences known and do not require great effort to use, helping people incorporate them meaningfully into the lexicon of personal privacy practices by which they engage everyday life’s genres of disclosure.

7.5.3 Negative Case Study: Faces

We return now to Faces—our prototypical ubicomp privacy UI—as a case study in how to fall into the pitfalls.

Pitfall 1: Obscuring Potential Flow. In trying to be a UI for managing privacy across any ubicomp system, Faces abstracted away the true capabilities of any underlying system. Users could not gauge its potential information flow because it aimed to address all information flow. Its scope was impractically broad and, hence, obscure.

Pitfall 2: Obscuring Actual Flow. Faces conveyed actual information flow through the user’s disclosure log. The record was accessible after the relevant disclosure. While this design intends to illuminate information flow, it is unclear whether postponing notice is optimal. Embedding notice directly into the real-time experience of disclosure might foster a stronger understanding of information flow.

Pitfall 3: Configuration over Action. Faces required a considerable amount of configuration. Once configuration was done, and assuming it was done correctly (which our evaluation brings into doubt), the system required little ad hoc configuration; the user simply goes about his business. Nonetheless, the sheer amount and desituated nature of configuration positions Faces squarely in this pitfall.

Pitfall 4: Lacking Coarse-grained Control. Faces avoided this pitfall somewhat by including an Override function that afforded quick transitions to alternate faces. Notably, this was not considered a central design feature.

Pitfall 5: Inhibiting Established Practice. While Faces modeled the nuance of Goffman’s identity management theory, it appeared to hinder actual identity management practice by requiring the user to maintain virtual representations of his fragmented identities in addition to manifesting them naturally through intuitive, socially meaningful behavior. In this sense, Faces disrupts privacy management practice at a fundamental level.

Our evaluation of Faces revealed a complex, abstract configuration requirement that belies the intuitive situatedness of privacy as practiced in real settings. Faces also futilely aimed to address privacy needs across an arbitrary range of information types—both static (e.g., contact information) and dynamic (e.g., location). In reality, privacy management employs critically different techniques for different information types. The upshot is that, rather than attempt to revise Faces to address our evaluation findings, we found it more appropriate to retire the Faces concept and scale our design focus down to a more isolable point in the ubicomp privacy space. In the following section, we assess an interaction concept that recently emerged from that process.

7.5.4 (Potentially) Positive Case Study: Precision Dial

Bearing in mind the speculative nature of this assessment—the precision dial is merely a proposal—we suggest that, in comparison to Faces, the precision dial can largely avoid the five pitfalls in the following ways.

Pitfall 1: Obscuring Potential Flow. Unlike Faces, this tool is deliberately scoped to a specific subspace of the privacy space: intentional interpersonal disclosure of activity—as presence—to familiar observers. In other words, it lets your friends find out what you are doing, with your permission. A system employing this tool should clarify its operational definition of presence. And it should clarify that information is conveyable only to people on the user’s contact list. By letting users collect observers into groups, they can know who has the potential to obtain what information about them at which precisions.

Pitfall 2: Obscuring Actual Flow. Disclosures can be exposed by real-time alerts, a disclosure log, or both.

Pitfall 3: Configuration over Action. The notion of blurring precision arguably aligns with the mental model of “I don’t want to reveal too much about my activity right now.” A readily accessible dial would allow quick assertion of such a preference and would achieve socially meaningful results. Managing groups might present a configuration burden, but good design practices can minimize it. For instance, the user could have the option to quickly choose a group for each observer at the time he adds her to his contact list.

Pitfall 4: Lacking Coarse-grained Control. One cannot get much coarser than an ambiguous four-point ordinal precision scale. Nonetheless, we have chosen the number of points rather arbitrarily. A three-point scale might be better. Any coarser would result in a binary button, but we suspect people would prefer to leverage some gray area between the extremes of disclosing everything and disclosing nothing.

Pitfall 5: Inhibiting Established Practice. The precision dial supports both ambiguous disclosure and plausible deniability. The former is a consequence of the intrinsic ambiguity of the precision scale. The latter is supported by the observer’s ignorance of the reasons why the user employed any given precision level; it may have been due to social expectations (i.e., the user may have simply been keeping in line with the relevant genre of disclosure), or due to technical factors (e.g., signal loss), or simply the desire to be left alone.
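
To illustrate how such a dial might blur disclosures, the following is a speculative sketch consistent with the four-point ordinal scale discussed above. The precision labels, the blurring rules, and the group-based gating are illustrative assumptions, not fixed parts of the proposal.

    # Speculative sketch of ordinal blurring behind a precision dial.
    PRECISION_LEVELS = ("undisclosed", "vague", "approximate", "precise")

    def blur_location(place, precision):
        """place is a dict like {"venue": ..., "neighborhood": ..., "city": ...}."""
        if precision == "precise":
            return place["venue"]          # e.g., "Cafe Strada"
        if precision == "approximate":
            return place["neighborhood"]   # e.g., "Southside, Berkeley"
        if precision == "vague":
            return place["city"]           # e.g., "Berkeley"
        return None                        # "undisclosed": nothing is conveyed

    def disclose_to(observer, groups, dial_setting, place):
        """Group membership decides whether the observer sees the blurred value at all."""
        if observer not in groups.get("friends", set()):
            return None                    # only people on the contact list can ask
        return blur_location(place, dial_setting)

    place = {"venue": "Cafe Strada", "neighborhood": "Southside, Berkeley", "city": "Berkeley"}
    groups = {"friends": {"alice", "bob"}}
    print(disclose_to("alice", groups, "approximate", place))   # "Southside, Berkeley"
    print(disclose_to("carol", groups, "precise", place))       # None: not a contact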

7.5.5 Positive Case Study: IM and Mobile Phones

Interestingly, two systems that largely avoid our pitfalls—mobile phones and instant messaging (IM)—are primarily communication media. Disclosure is one of their central functions. We will briefly assess these services against the pitfalls, focusing on their primary functions—textual and vocal communication—and on some of their secondary features that support these functions. We will not address orthogonal, controversial features like the location-tracking capabilities of some mobile phones and the capture of IM sessions, which would have to be addressed by a more robust assessment of the privacy implications of these technologies.

IM and mobile telephony each make clear the potential and actual flow of disclosed information, making for a robust, shared mental model of information flow through these cooperative interactive systems. Potential flow is scoped by features like Caller ID (telephony), Buddy Lists (IM), and feedback about the user’s own online presence (IM). Actual flow is self-evident in the contents of the communications. Each technology requires minimal configuration for maintaining privacy (though other features often require excessive configuration), largely due to coarse-grained controls for halting and resuming information flow—power button (telephony), application exit (IM), invisible mode (IM), and ringer volume (telephony). Lastly, each supports existing practices of plausible deniability—people can choose to ignore incoming messages and calls without having to explain why—and ambiguous disclosure—the linguistic nature of each medium allows for arbitrary customization of disclosed information (Nardi, Whittaker et al. 2000; Woodruff and Aoki 2003).

Indeed, communication media could serve as a model for designing other privacy-affecting systems not conventionally categorized as communication technologies. Disclosure is essentially communication, whether it results from the use of a symmetric linguistic medium—e.g., telephony—or an asymmetric event-based medium—e.g., e-commerce, context-aware systems. Systems that affect privacy but are not positioned as communication media do nonetheless communicate personal information to observers. Exposing and addressing these disclosure media as communication media might liberate designs to leverage users’ intuitive privacy maintenance skills.

7.6 Summary

In this chapter we described five common pitfalls to which designs of privacy-affecting systems often succumb: obscuring potential information flow, obscuring actual information flow, emphasizing configuration over action, lacking coarse-grained control, and inhibiting established practice. We analyzed these pitfalls and provided several examples of systems that fall into or manage to avoid them, including Faces, our UI prototype for managing ubicomp privacy. We encourage designers to identify the genres of disclosure to which their systems will contribute and—with the help of our guidelines—to design opportunities for the user to (1) understand the extent of the system’s alignment with those genres and (2) conduct socially meaningful action that supports them (or disrupts them, as the case may be).

8 Conclusion

Ubiquitous computing is driving a fundamental shift in the meaning and mechanics of personal privacy. Human-computer interaction is moving rapidly beyond the confines of the desktop, and automated data collection is embedding deeper into everyday life. Where these two trends intersect, people will enjoy a range of services both long-awaited and yet-to-be-imagined, but they will also live under the watchful eyes of the machines and their providers. For ubiquitous computing to succeed, people must feel—and be—empowered and self-determined. More than ever, interaction designers will play a pivotal role in this process by helping people effectively and comfortably negotiate the complex boundaries of social life amidst the dynamic disclosure of personal information. Designers can meet this challenge by weaving together regulatory structures, technological infrastructures, normative patterns, and HCI research findings into user interfaces that empower people to understand and influence the privacy implications of the augmented situations in which they go about their lives. In this report, we have offered the following contributions to that end:

• a review of HCI-related research and interactive systems in the privacy space,
• an analysis of the privacy space to help clarify discourse and design,
• an iteratively designed interaction framework for managing personal privacy in ubiquitous computing environments, and
• a set of guidelines for designing desktop or ubicomp systems that empower people to create and maintain personal privacy by understanding and acting upon the system’s privacy implications.

8.1 Summary of Findings

We offer the following take-away points for readers.

8.1.1 Clarify Discourse and Design

Privacy is an immensely complex issue. When discussing or designing for privacy, rather than casually throwing around the word “privacy,” we suggest people clarify just what they are talking about. Our deconstruction of the privacy space in Chapter Three can help people identify the conditions that shape the privacy implications of a given phenomenon or system.

8.1.2 Address Privacy Piecemeal

Our experience designing and evaluating Faces has taught us that a so-called ubicomp privacy user interface is likely unattainable. Privacy is not a singular thing. It is intangibly everywhere and is a thousand things at once. We urge designers of privacy-affecting systems to examine the particular privacy practices at play in the communities into which their systems will be deployed, and to support and possibly enhance them.

8.1.3 Blur Activity Disclosures

Within the scope of interpersonal disclosure of dynamic activity information (i.e., context) in ubicomp, we have argued that people can intuitively manage their privacy with the help of a sort of precision dial on their mobile phones. The dial would have only three or four ordinal settings. By leveraging the ambiguity inherent in the ordinal precision scale and in the imperfect, noisy infrastructures that will convey representations of their activities, people can practice nuanced identity management. We do not argue the dial as the solution to privacy. It is merely one means by which people might comfortably negotiate the boundaries of social life amidst the collapsing distance of ubicomp.

8.1.4 Design for Understanding and Action

Privacy is a human problem. Its negotiation must occur in human dimensions. Rather than building systems that take as their inputs the user’s privacy preferences and then act on her behalf, or that give short shrift to privacy in the first place, we urge designers to build systems that provide feedback and control mechanisms carefully honed to empower the user to assert her privacy needs intuitively, as a natural consequence of her behavior with respect to the system. Her particular privacy should result directly from her “natural” social behavior involving the system, rather than from her efforts to configure its control panel. If the system affords itself such that the user can make sense of and appropriate its privacy implications, then it has made great strides towards honoring her privacy needs. Avoiding the design pitfalls we discuss in Chapter Seven can help designers meet this challenge.

8.2 Ending as We Began

We began this project with the intention of designing a prototypical user interface for managing personal privacy in ubiquitous computing. In the course of that pursuit, we learned a great deal about our subject, including the fundamental fact that privacy is not a thing that can be solved in any standard sense of the word. Privacy is not a force to be tamed or overcome, like gravity or distance. It is an ongoing, fluid, cooperative human process that must be addressed and readdressed in the design of every privacy-affecting system. Rather than solving it, designs can accommodate it. And the way to do so is to empower the end-user to intuitively understand and influence the conditions that create it. We hope the work we describe in this report contributes in some small way to that goal.

Page 115: Interactive Personal Privacy at the Dawn of Ubiquitous Computing Scott Lederer

References

115

References Abowd, G. D. and A. Jacobs (2001). The Impact of Awareness Technologies on Privacy

Litigation. SIGCHI Bulletin. September/October 2001: 9. Abowd, G. D. and E. D. Mynatt (2000). "Charting Past, Present, and Future Research in

Ubiquitous Computing." ACM Transactions on Computer-Human Interaction 7(1): 29-58.

Ackerman, M. S. (2000). "The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility." Human-Computer Interaction 15(2/3): 181-203.

Ackermann, M. S. and L. Cranor (1999). Privacy Critics: UI Components to Safeguard Users' Privacy. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI 99), New York, NY.

Adams, A. (2000). Multimedia Information Changes the Whole Privacy Ballgame. Conference on Computers, Freedom, and Privacy, Toronto, Ontario, CA, ACM Press.

Adams, A. and M. A. Sasse (1999). Taming the Wolf in Sheep's Clothing: Privacy in Multimedia Communications. Proceedings of ACM Multimedia, Orlando, FL, USA.

Agre, P. E. (1994). "Surveillance and capture: Two models of privacy." The Information Society 10(2).

Agre, P. E. (1997). Beyond the Mirror World: Privacy and the Representational Practices of Computing. Technology and Privacy: The New Landscape. P. E. Agre and M. Rotenberg. Cambridge, MA, The MIT Press.

Agre, P. E. (2001). "Changing Places: Contexts of Awareness in Computing." Human-Computer Interaction 16(2-4): 177-192.

Alderman, E. and C. Kennedy (1995). The Right to Privacy. New York, Knopf. Altman, I. (1975). The Environment and Social Behavior: Privacy, Personal Space,

Territory, and Crowding. Monterey, CA, Brooks/Cole Publishing Co. AT&T (2002). AT&T Privacy Bird (http://www.privacybird.com). AT&T (2003). AT&T Wireless mMode Find Friends

(http://www.attws.com/mmode/features/findit/FindFriends/). Beckwith, R. (2003). "Designing for Ubiquity: The Perception of Privacy." IEEE

Pervasive 2(2): 40-46. Bellotti, V., M. Back, et al. (2002). Making sense of sensing systems: five questions for

designers and researchers. Proceedings of the SIGCHI conference on Human factors in computing systems, Minneapolis, Minnesota, USA, ACM Press.

Bellotti, V. and A. Sellen (1993). Design for Privacy in Ubiquitous Computing Environments. Proceedings of the Third European Conference on Computer Supported Cooperative Work (ECSCW'93).

boyd, d. (2002). Faceted Id/Entity: Managing representation in a digital world, MS Thesis. Program in Media Arts and Sciences, School of Architecture and Planning. Cambridge, MA, Massachusetts Institute of Technology.

boyd, d. (2003). Reflections on Friendster, Trust and Intimacy. Workshop on Intimate Ubiquitous Computing, Ubicomp 2003, Seattle, WA, USA.

Page 116: Interactive Personal Privacy at the Dawn of Ubiquitous Computing Scott Lederer

References

116

Boyle, M. (2003). A Shared Vocabulary for Privacy. Workshop on Ubicomp Communities: Privacy as Boundary Negotiation, at The Fifth International Conference on Ubiquitous Computing, Seattle, WA, USA.

Brin, D. (1999). The Transparent Society. Reading, MA, Perseus Books. Brunk, B. D. (2002). "Understanding the Privacy Space." First Monday 7(10). Cadiz, J. and A. Gupta (2001). Privacy Interfaces for Collaboration, Technical Report

MSR-TR-2001-82. Redmond, WA, Microsoft Corp. Canny, J. (2002). Collaborative Filtering with Privacy via Factor Analysis. Proceedings

of ACM Conference on Research and Development in Information Retrieval (SIGIR 02), Tampere, Finland.

CDT (2000). Generic Principles of Fair Information Practices (http://www.cdt.org/privacy/guide/basic/generic.html), Center for Democracy and Technology.

Consolvo, S., L. Arnstein, et al. (2002). User Study Techniques in the Design and Evaluation of a Ubicomp Environment. Proceedings of Fourth Annual Conference on Ubiquitous Computing (Ubicomp 2002), GÖteberg, Sweden, Springer-Verlag.

Cooper, A. (1995). The Myth of Metaphor. Visual Basic Programmer's Journal. Cranor, L., M. Langheinrich, et al. (2002). The Platform for Privacy Preferences 1.0

(P3P1.0) Specification, World Wide Web Consortium. Cranor, L., M. Langheinrich, et al. (2002). A P3P Preference Exchange Language 1.0

(APPEL1.0), World Wide Web Consortium. Cranor, L., J. Reagle, et al. (2000). Beyond Concern: Understanding Net Users' Attitudes

About Online Privacy. The Internet Upheaval: Raising Questions, Seeking Answers in Communications Policy. I. Vogelsang and B. M. Compaine, MIT Press: 47-70.

Cuellar, J., J. B. Morris, et al. (2003). Geopriv Requirements (IETF Internet-Draft), Geographic Location/Privacy IETF Working Group.

Dey, A. K., D. Salber, et al. (2001). "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications." Human-Computer Interaction 16(2-4): 97-166.

Dourish, P. (2001). Where the Action Is. Cambridge, MA, MIT Press.

FCC (2003). Enhanced 911 (http://www.fcc.gov/911/enhanced/), U.S. Federal Communications Commission.

Foucault, M. (1977). Discipline and Punish. New York, Vintage Books.

Friedman, B., Ed. (1997). Human Values and the Design of Computer Technology. New York, NY and Stanford, CA, Cambridge University Press and CSLI, Stanford University.

Friedman, B., D. C. Howe, et al. (2002). Informed Consent in the Mozilla Browser: Implementing Value-Sensitive Design. Proc. of 35th Annual Hawaii International Conference on System Sciences.

Garfinkel, S. (2000). Database Nation: The Death of Privacy in the 21st Century. Sebastopol, CA, O'Reilly & Associates.

Gellman, R. (1998). Does Privacy Law Work? Technology and Privacy: The New Landscape. P. E. Agre and M. Rotenberg. Cambridge, MA, MIT Press: 193-218.

Goffman, E. (1956). The Presentation of Self in Everyday Life. New York, NY, Doubleday.

Goldberg, D., D. Nichols, et al. (1992). "Using collaborative filtering to weave an information tapestry." Communications of the ACM 35(12): 61-70.

Good, N. S. and A. Krekelberg (2003). Usability and privacy: a study of Kazaa P2P file-sharing. Proceedings of the conference on Human factors in computing systems, Ft. Lauderdale, FL, USA, ACM Press.

Green, N., H. Lacohée, et al. (2001). Rethinking Queer Communications: Mobile Phones and beyond. Sexualities, Medias and Technologies Conference, University of Surrey.

Grinter, R. E. and M. A. Eldridge (2001). y do tngrs luv 2 txt msg? Proceedings of the Seventh European Conference on Computer-Supported Cooperative Work (ECSCW '01), Bonn, Germany.

Harper, R. H. R., M. G. Lamming, et al. (1992). "Locating Systems at Work: Implications for the Development of Active Badge Applications." Interacting with Computers 4(3): 343-363.

Heidegger, M. (1962). Being and Time. New York, Harper.

Hong, J. I., G. Borriello, et al. (2003). Privacy and Security in the Location-enhanced World Wide Web. Workshop on Ubicomp Communities: Privacy as Boundary Negotiation, at The Fifth International Conference on Ubiquitous Computing, Seattle, WA, USA.

Hong, J. I. and J. A. Landay (2001). "An Infrastructure Approach to Context-Aware Computing." Human-Computer Interaction 16(2-3).

Jacobs, A. (2002). The Benefits Of The Legal Analytic Perspective For Designers Of Context-Aware Technologies. Ubicomp 2002 (Workshop on Socially-informed Design of Privacy-enhancing Solutions in Ubiquitous Computing), Göteborg, Sweden.

Jacobs, A. R. and G. D. Abowd (2003). "A Framework for Comparing Perspectives on Privacy and Pervasive Technologies." IEEE Pervasive Computing 2(4): 78-84.

Jancke, G., G. D. Venolia, et al. (2001). Linking public spaces: technical and social issues. Proceedings of the SIGCHI conference on Human factors in computing systems, Seattle, WA, USA, ACM Press.

Jendricke, U. and D. Gerd tom Markotten (2000). Usability meets Security – The Identity-Manager as your Personal Security Assistant for the Internet. 16th Annual Computer Security Applications Conference.

Jiang, X., J. I. Hong, et al. (2002). Approximate Information Flows: Socially-based Modeling of Privacy in Ubiquitous Computing. The Fourth International Conference on Ubiquitous Computing, Springer-Verlag LNCS.

Kaasinen, E. (2003). "User needs for location-aware mobile services." Personal and Ubiquitous Computing 7(1): 70-79.

Kindberg, T., J. Barton, et al. (2000). People, Places, Things: Web Presence for the Real World. 3rd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2000), Monterey, CA.

Kindberg, T. and A. Fox (2002). "System Software for Ubiquitous Computing." IEEE Pervasive 1(1).

Kolko, J., J. L. McQuivey, et al. (2002). Privacy For Sale: Just Pennies Per Day. San Francisco, CA, Forrester Research, Inc.

Langheinrich, M. (2001). Privacy by Design - Principles of Privacy-Aware Ubiquitous Systems. The Third International Conference on Ubiquitous Computing, Springer-Verlag LNCS.

Langheinrich, M. (2002). A Privacy Awareness System for Ubiquitous Computing Environments. The Fourth International Conference on Ubiquitous Computing, Springer-Verlag LNCS.

Lederer, S., A. K. Dey, et al. (2002). A Conceptual Model and a Metaphor of Everyday Privacy in Ubiquitous Computing Environments, Technical Report CSD-02-1188. Berkeley, CA, UC Berkeley.

Lederer, S., J. Mankoff, et al. (2003). Who wants to know what when? Privacy preference determinants in ubiquitous computing. CHI '03 extended abstracts on Human factors in computing systems, Ft. Lauderdale, FL, USA, ACM Press.

Lessig, L. (1998). The Architecture of Privacy. Taiwan Net, Taipei, Taiwan.

Lessig, L. (2000). Code and Other Laws of Cyberspace, Basic Books.

Mackay, W. E. (1991). Triggers and barriers to customizing software. Proceedings of the SIGCHI conference on Human factors in computing systems, New Orleans, LA, USA, ACM Press.

Mankoff, J., A. K. Dey, et al. (2003). Heuristic evaluation of ambient displays. CHI 2003, ACM Conference on Human Factors in Computing Systems, Fort Lauderdale, FL, USA, ACM Press.

Marx, G. T. (2001). "Murky Conceptual Waters: the Public and the Private." Ethics and Information Technology 3(3): 157-169.

Millett, L. I., B. Friedman, et al. (2001). Cookies and Web browser design: toward realizing informed consent online. Proceedings of the SIGCHI conference on Human factors in computing systems, Seattle, WA, USA, ACM Press.

Nardi, B. A., S. Whittaker, et al. (2000). Interaction and Outeraction: Instant Messaging in Action. Proc. ACM CSCW Conf., New York, NY, ACM.

Neustaedter, C. and S. Greenberg (2003). The Design of a Context-Aware Home Media Space for Balancing Privacy and Awareness. The Fifth International Conference on Ubiquitous Computing, Seattle, WA, USA, Springer-Verlag LNCS.

Nguyen, D. H. and E. D. Mynatt (2002). Privacy Mirrors: Understanding and Shaping Socio-technical Ubiquitous Computing Systems, Georgia Institute of Technology Technical Report GIT-GVU-02-16.

Nokia (2002). Nokia brings a new server solution for sophisticated communications, http://press.nokia.com/PR/200211/880012_5.html.

Norman, D. A. (1988). The Design of Everyday Things. New York, NY, Basic Books.

Palen, L. (1999). Social, Individual & Technological Issues for Groupware Calendar Systems. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 99), Pittsburgh, PA, ACM.

Palen, L. and P. Dourish (2003). Unpacking “privacy” for a networked world. Proceedings of the conference on Human factors in computing systems, Fort Lauderdale, FL, ACM Press.

Phillips, D. J. (2002). Context, Identity, and Privacy in Ubiquitous Computing Environments. Ubicomp 2002 (Workshop on Socially-informed Design of Privacy-enhancing Solutions in Ubiquitous Computing), Göteborg, Sweden.

Reang, P. (2002). Dozens of nurses in Castro Valley balk at wearing locators. Mercury News. San Jose, CA.

Samarajiva, R. (1997). Interactivity As Though Privacy Mattered. Technology and Privacy: The New Landscape. P. E. Agre and M. Rotenberg. Cambridge, MA, The MIT Press.

Samuelson, P. (2000). "Privacy as Intellectual Property?" Stanford Law Review 52: 1125.

Scholtz, J., L. Arnstein, et al. (2002). User-Centered Evaluations of Ubicomp Applications (Technical Report IRS-TR-02-006), Intel Research.

Siewiorek, D., A. Smailagic, et al. (2003). SenSay: A Context-Aware Mobile Phone. IEEE International Symposium on Wearable Computers, White Plains, NY, USA.

Suchman, L. (1987). Plans and Situated Actions. New York, NY, Cambridge University Press.

Suchman, L. (1994). "Do Categories have Politics? The language/action perspective reconsidered." Computer-Supported Cooperative Work 2: 177-190.

Taylor, H. (2003). Most People Are "Privacy Pragmatists" Who, While Concerned about Privacy, Will Sometimes Trade It Off for Other Benefits, The Harris Poll.

Thomas, J. C., W. A. Kellogg, et al. (2001). "The knowledge management puzzle: Human and social factors in knowledge management." IBM Systems Journal 40(4).

Trevor, J., D. M. Hilbert, et al. (2002). Issues in Personalizing Shared Ubiquitous Devices. Proceedings of Fourth Annual Conference on Ubiquitous Computing (Ubicomp 2002), Springer-Verlag.

Turow, J. (2003). Americans and Online Privacy: The System is Broken, Annenberg Public Policy Center, University of Pennsylvania.

Warren, S. and L. Brandeis (1890). "The right to privacy." Harvard Law Review 4: 193-220.

Weiser, M. (1991). "The Computer for the Twenty-First Century." Scientific American 265(3): 94-104.

Weiser, M. (1993). "Some Computer Science Issues in Ubiquitous Computing." Communications of the ACM 36(7): 75-83.

Weiser, M. (1996). Ubiquitous Computing (http://www.ubiq.com/hypertext/weiser/UbiHome.html).

Westin, A. F. (1967). Privacy and Freedom. New York, NY, Atheneum.

Westin, A. F. (1995). Privacy in America: An Historical and Socio-Political Analysis. National Privacy and Public Policy Symposium, Hartford, CT.

Whitten, A. and J. D. Tygar (1999). Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0. 8th USENIX Security Symposium.

Woodruff, A. and P. M. Aoki (2003). How Push-to-Talk Makes Talk Less Pushy. Proc. ACM SIGGROUP Conf. on Supporting Group Work (GROUP '03), Sanibel Island, FL, ACM Press.
