


IEICE TRANS. INF. & SYST., VOL.E98–D, NO.1 JANUARY 2015

INVITED PAPER Special Section on Enriched Multimedia

Protection and Utilization of Privacy Information via Sensing∗

Noboru BABAGUCHI†a), Fellow and Yuta NAKASHIMA††, Member

SUMMARY Our society has been getting more privacy-sensitive. Diverse information is given by users to information and communications technology (ICT) systems, such as IC cards, that benefit them. The information is stored as so-called big data, and there is concern over privacy violation. Visual information such as images and videos is also considered privacy-sensitive. The growing deployment of surveillance cameras and social network services has caused a privacy problem for information obtained from various sensors. To protect the privacy of subjects presented in visual information, their faces or figures are processed by means of pixelization or blurring. As image analysis technologies have made considerable progress, many attempts to automate flexible privacy protection have been made since 2000, and utilization of privacy information under some restrictions has been taken into account in recent years. This paper addresses the recent progress of privacy protection for visual information, presenting our research projects: PriSurv, Digital Diorama (DD), and Mobile Privacy Protection (MPP). Furthermore, we discuss the Harmonized Information Field (HIFI) for appropriate utilization of protected privacy information in a specific area.
key words: privacy information, sensing, visual abstraction, privacy protection, information disclosure and utilization

1. Introduction

In recent years, our society has been getting more privacy-sensitive, or privacy-aware. We encounter various information and communications technology (ICT) systems, e.g., IC card systems, in our daily life, and receive a lot of benefit and convenience from them. At the same time, we are unconsciously giving diverse information, such as personal data and movement history, to them. The information is stored as so-called big data, which leads to concern over privacy violation. Visual information such as images and videos is also considered privacy-sensitive. As systems and services using visual information have been developed, privacy issues of subjects in the visual information have attracted considerable attention [1]–[4]. In Japan, the lawsuit over infringement of the privacy right by Google Street View triggered public attention. Video surveillance is also an essential ICT system for maintaining safety in surveillance areas; however, it can be viewed as a critical source of privacy disclosure.

Manuscript received March 13, 2014.
†The author is with the Graduate School of Engineering, Osaka University, Suita-shi, 565–0871 Japan.
††The author is with the Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma-shi, 630–0192 Japan.
∗This work is supported in part by a Grant-in-Aid for scientific research from the Japan Society for the Promotion of Science.
a) E-mail: [email protected]
DOI: 10.1587/transinf.2014MUI0001

In fact, visual information distributed via TV broadcasts and the Internet contains privacy-sensitive information (PSI) [5], [6], e.g., subjects’ faces, which may cause privacy violation in its acquisition, distribution, and sharing. Therefore, protection procedures for the PSI are strongly required. In Google Street View, regions of subjects’ faces are blurred to prevent their identification. For surveillance video, whether the camera is static or mobile, various protection methods have been proposed. In TV broadcasts, not only the subjects’ regions but also the background region is sometimes blurred so that the viewer cannot guess where the video was taken.

Formerly, this kind of protection procedure was carried out manually, which was very time-consuming. Nowadays, automated procedures are attempted using image analysis technologies. After detecting PSI regions in an image, an image processing operator, such as pixelization or blurring, is applied to all the regions so that they cannot be seen. This can be viewed as exhaustive privacy protection. For Google Street View images, all regions of human faces and car license plates are blurred. This automated method [7] has drawn attention as a practical example of visual privacy protection. A similar method is also provided on YouTube [8]. In social network services (SNSs), such as Twitter, Facebook, and Flickr, the number of users who upload images is increasing rapidly. A system has been proposed that alerts users to potential privacy violation due to uploaded images based on image content analysis [9]. In this way, the technologies of visual privacy protection and those of image analysis are mutually correlated.
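The detect-then-obscure pipeline above can be sketched in a few lines. The following is a minimal illustration of the mosaic (pixelization) step only, assuming a PSI bounding box has already been obtained from some detector (e.g., a face detector); the box coordinates here are invented for the example.

```python
import numpy as np

def pixelize_region(image, box, block=8):
    """Pixelize (mosaic) a rectangular region of an image in place.

    image : H x W x 3 uint8 array; box : (top, left, bottom, right).
    Each block x block tile inside the box is replaced by its mean color,
    the classic mosaic operator used in exhaustive privacy protection.
    """
    t, l, b, r = box
    region = image[t:b, l:r].astype(np.float32)
    h, w = region.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            tile[...] = tile.reshape(-1, tile.shape[-1]).mean(axis=0)
    image[t:b, l:r] = region.astype(np.uint8)
    return image

# Hypothetical usage: in practice the box would come from a face or
# license-plate detector; here we pixelize a fixed region of noise.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
pixelize_region(frame, (8, 8, 40, 40), block=8)
```

After the call, every 8x8 tile inside the box is a single flat color, so the region carries no recoverable detail.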

In this paper, we address the recent progress of privacy protection for visual information, presenting our research projects: PriSurv, Digital Diorama (DD), and Mobile Privacy Protection (MPP). They aim at flexible privacy protection in contrast to exhaustive privacy protection. Furthermore, we discuss the importance of utilizing privacy-protected information appropriately in some special situations.

The rest of the paper is organized as follows. Section 2 overviews the related work. Section 3 discusses the PSI in detail. Section 4 presents our research projects on visual privacy protection. Section 5 presents our new attempt to build an information field that realizes both protection and utilization of privacy information. Section 6 concludes the paper.

Copyright © 2015 The Institute of Electronics, Information and Communication Engineers



2. Related Work

In the last decade, many research efforts have been dedicated to privacy protection for visual information. To remove PSI such as faces, entire bodies, or other specific objects [10], a wide variety of image processing techniques, e.g., blurring, pixelization, blocking out, and edge detection, have been applied, and the privacy protection capability of some of them has been evaluated [11]–[16]. Some research attempts were made to preserve specific visual content such as facial details [17] and body motion [18]. An eigenspace filtering-based technique [19] determines the visual content to be preserved based on a preliminarily calculated set of image bases. With these techniques, PSI is permanently lost so that the visual information can be viewed without infringing on others’ privacy. In other words, they perform irrecoverable abstraction of the image/video content.

Recoverable abstraction is another interesting research direction, which allows authorized entities to view the original visual content recovered from an abstracted one. This type of technique also involves the problem of delivering privacy-protected images/video to designated entities. Recoverable abstraction is beneficial for video surveillance systems that require further investigation once an abnormal event happens. Dufaux and Ebrahimi [20] proposed to encrypt some PSI regions in a compressed domain. This technique was adapted for compressed video streams [21]–[24] and uncompressed ones [25], [26]. Instead of using encryption techniques, information hiding is also employed to embed PSI into the abstracted images/video [27]–[29].

With the development of these abstraction techniques, various applications with image/video privacy protection have been developed. Privacy protection for video surveillance is one such application that has been widely studied [30]. The basic approach is to locate PSI regions in surveillance video, e.g., using background subtraction, and to apply visual abstraction techniques such as blurring, blocking out, and pixelization as in [31], [32]. Some systems provide the removed PSI through a different data channel for later investigation. Since the main purpose of video surveillance is security, recoverable abstraction techniques have been employed, which are based on encryption and data hiding to allow authorized entities to access PSI as mentioned above, for example, [33]–[38]. An important idea of privacy is that people have the right to control information about them, leading to video surveillance systems that allow people captured in surveillance video to choose whether their appearance is disclosed [39]–[41]; these often use RFID or other techniques for identification of people in video. A new attempt at video surveillance potentially provides various additional values by selectively disclosing PSI to the public [42].

Compared with video surveillance systems, which can assume fixed cameras, mobile camera-based applications confront the difficulty of locating PSI regions in images/video because of the inapplicability of background subtraction; an object detection technique such as face or pedestrian detection is used instead. For mobile surveillance, which captures suspicious people using mobile cameras, some privacy-aware systems are summarized in [43], e.g., [44], [45], and a more recent work focused on automatically discriminating people of interest based on the videographer’s intention [46]. Besides mobile video surveillance, privacy protection has been designed for various applications. Google Street View is a well-known example of such applications, and some researchers focus on removing pedestrian regions from the images with fewer visual artifacts [47] or on improving the recall of PSI detection (i.e., face or pedestrian detection) [7]. Some systems have been proposed for other applications such as life-logs [48], video conferencing [49], and consumer generated videos [50], [51].

Some techniques do not rely on visual abstraction to protect privacy in images/video. For example, systems for knowledge discovery from unpublished videos have been proposed for medical purposes [52], [53]. PicAlert [9] automatically finds images in an image collection that potentially contain privacy-sensitive content and notifies the user to prevent their disclosure to the public. Privacy Visor [54] is another interesting approach to privacy protection, which physically masks face regions with goggles bearing a tailored pattern to hinder face detection.

Privacy protection is generally considered a sub-discipline of information security and often adopts techniques of information hiding, encryption, secret sharing, and access control for abstraction and identification (e.g., [20], [29], [37], [55]).

3. Privacy Sensitive Information

Privacy-sensitive information (PSI) can be defined as information that may invade somebody’s privacy when it is extracted from the entire body of information acquired from sensors located in the real world [5]. We classify PSI into 1) information accessible to personal ID, 2) information related to personal ID, and 3) information on one’s private area.

Information accessible to personal ID is information with which a corresponding individual is distinguishable from others. This kind of PSI includes a person’s appearance, such as faces, full figures, clothes, gaits, gestures, and expressions in images, as well as utterances in audio signals. Behavioral information, such as locations and trajectories, is also included. As a special case, text on nameplates or car license plates is included as well.

Information related to personal ID is not essentially privacy-sensitive in itself, but turns into PSI once it is associated with a personal ID. This kind of PSI includes one’s belongings and possessions. For example, an animal is usually not a privacy concern; however, if it is kept by someone as a pet and is known to be his/her possession, it turns into PSI.

Information on one’s private area is the third kind of PSI. A private area is a place that most people feel is private. For example, the inside of one’s house, a clothes-drying place, and the displays of PCs or smartphones fall into this category.

Fig. 1 An example of visual privacy protection.

Let us consider the scene shown in Fig. 1(a). Which regions in the image are privacy-sensitive? Figure 1(b) shows the regions corresponding to the three kinds of PSI, i.e., 1), 2), and 3) as stated above. If we apply exhaustive privacy protection, most parts of the image cannot be seen, and the image becomes meaningless, as shown in Fig. 1(c). Therefore, a framework that flexibly deals with the PSI depending on the subject and the viewer is needed.

4. Privacy Protection for Visual Information

In the following, we describe our research projects on visual privacy protection.

4.1 PriSurv

The PriSurv (Privacy Protected Video Surveillance System) project was an attempt to construct a pilot system of video surveillance that protects subjects’ privacy. The purpose of PriSurv is to make video surveillance a social system offering both safety and security. Our prototype of PriSurv [41] works in a spatially local community, such as a school zone, while preserving community members’ privacy. PriSurv is capable of generating a surveillance video that dynamically changes the appearance of subjects in order to protect their privacy. In contrast to existing systems based on exhaustive privacy protection, PriSurv is characterized by selective privacy protection [56].

To control disclosure of a subject’s visual PSI, PriSurv provides multiple image processing operators, collectively named visual abstraction. The subject’s appearance can be changed via visual abstraction operators. PriSurv offers 12 operators: transparency, dot, bar, box, silhouette, border, edge, mosaic, blur, monotone, see-through, and as-is.
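As a rough illustration of how such mask-based operators differ, the following sketch implements four of the twelve operators (as-is, box, silhouette, transparency) under our own simplified interpretation; the function bodies are assumptions about their behavior, not PriSurv’s actual implementation, and the test image is synthetic.

```python
import numpy as np

# Each operator takes the frame, a boolean subject mask, and a background
# estimate, and returns an abstracted frame (a simplified sketch).
def op_as_is(image, mask, background):
    return image.copy()                       # full disclosure

def op_box(image, mask, background):
    out = image.copy()
    ys, xs = np.where(mask)                   # fill the subject's bounding box
    out[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 128
    return out

def op_silhouette(image, mask, background):
    out = image.copy()
    out[mask] = (0, 0, 0)                     # keep only the subject's shape
    return out

def op_transparency(image, mask, background):
    out = image.copy()
    out[mask] = background[mask]              # the subject disappears entirely
    return out

OPERATORS = {"as-is": op_as_is, "box": op_box,
             "silhouette": op_silhouette, "transparency": op_transparency}

img = np.full((32, 32, 3), 200, dtype=np.uint8)
bg = np.zeros_like(img)
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 12:18] = True
abstracted = OPERATORS["transparency"](img, mask, bg)
```

Selecting an operator by name mirrors how a privacy policy can refer to abstraction levels symbolically rather than by image-processing details.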

Fig. 2 Privacy control in PriSurv.

One of the main characteristics of PriSurv is that the content of the generated video is changeable based on the relationship between the viewer and the subject. Figure 2 shows privacy control in PriSurv. From the original image I, the visual abstraction operator a for subject s and viewer v produces a privacy-protected image I^a_<s,v> that is different for each viewer. PriSurv works in a community where there are preliminarily registered members and non-members. Each member can describe, in PriSurv’s privacy policy, how his/her appearance should be presented to various viewers: family, neighbors, strangers, etc. The relationship between visual abstraction and the human sense of privacy was investigated from a psychological viewpoint [57], which was reflected in the design of PriSurv’s privacy policy. It should be noted that PriSurv is an embodiment of self-information control, which is a modern interpretation of the privacy right.
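The per-member privacy policy can be thought of as a lookup from the pair (subject, viewer relationship) to an operator name. The table below is a toy sketch of that idea; the member names, relationship classes, and chosen operators are all invented for illustration, though the operator names themselves come from PriSurv’s list.

```python
# A toy PriSurv-style privacy policy: each registered member declares which
# abstraction operator each class of viewer should see. Unregistered
# subjects or unlisted relationships fall back to a conservative default.
DEFAULT_OPERATOR = "silhouette"

policy = {
    "alice": {"family": "as-is", "neighbor": "blur", "stranger": "silhouette"},
    "bob":   {"family": "as-is", "stranger": "box"},
}

def select_operator(subject, relationship):
    """Return the abstraction operator a for the pair <subject, viewer>."""
    return policy.get(subject, {}).get(relationship, DEFAULT_OPERATOR)

print(select_operator("alice", "family"))    # as-is
print(select_operator("bob", "neighbor"))    # silhouette (falls back)
```

Because the lookup is keyed on the viewer’s relationship to the subject, the same frame renders differently for each authenticated viewer, which is the essence of selective protection.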

4.2 Digital Diorama

In the Sensing-Web project, we addressed how the appearance of each person should be represented in surveillance video that captures a scene of a public space when the video is open to the public through the Internet. Our main focus was to protect the privacy of subjects in a strict way. In this project, we built a real-world content named Digital Diorama (DD) [58] that displays real-time information acquired from various sensors. In DD, the movement of people in the 3D environment can be observed from an arbitrary viewpoint. The location of each person is estimated from the video streams of surveillance cameras, and is presented in a texture-mapped 3D model of the target scene.

Fig. 3 Privacy control in DD.

Because DD deals with a public space, the privacy of the people should be strictly protected. We believe that the appearance of the subjects in surveillance video should not be presented to viewers at all. In DD, a subject is represented as a human-shaped stick figure. Furthermore, DD displays specific subjects in a different color: if a viewer is a member of a certain group registered to DD and is authenticated, he/she can find other members easily because they are shown in a color different from other people. This function is very useful for locating family or friends. DD realizes identification of the subjects, authentication of the viewer, and registration of group members using RFID tags. Figure 3 shows privacy control in DD. The human-shaped stick figure in pink belongs to the same group as the viewer, and the other people are colored blue.
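The group-aware coloring rule is simple enough to state as code. This is only a sketch of the rule as described above; the group identifiers and the idea of representing subjects as a dict are our illustration (in DD the group membership comes from RFID registration).

```python
# Subjects in the authenticated viewer's group are drawn in pink,
# everyone else in blue; identities themselves are never shown.
GROUP_COLOR, OTHER_COLOR = "pink", "blue"

def stick_figure_colors(viewer_group, subject_groups):
    """Map each subject ID to the stick-figure color the viewer sees."""
    return {sid: (GROUP_COLOR if g == viewer_group else OTHER_COLOR)
            for sid, g in subject_groups.items()}

colors = stick_figure_colors("family42",
                             {"p1": "family42", "p2": None, "p3": "family99"})
```

Note that the viewer learns only which stick figures belong to his/her own group; appearance is never disclosed.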

DD’s privacy control [42] is similar to PriSurv’s, because it is based on the relationship between the viewer and the subject. DD was demonstrated at the ‘Shin-Pu-Kan’ shopping mall in Kyoto for actual visitors. A questionnaire survey indicated that DD’s privacy protection was favorable to most visitors.

4.3 Mobile Privacy Protection

Our mobile privacy protection (MPP) project aims at automatically generating privacy-protected videos captured by videographers using mobile cameras. One of the essential differences between surveillance videos and those captured by videographers is that the videographers have capture intentions, e.g., to record a child’s growth. A video captured by a videographer contains important persons, without whom the video becomes meaningless and whom the videographer therefore captures intentionally (intentionally captured persons; ICPs), as well as persons who are not important for the intentions (non-ICPs). For this, we developed methods for automatically detecting ICPs and for their abstraction [46], [51], [59]–[61]. Figure 4 shows an overview of privacy protection for mobile cameras.

Fig. 4 Overview of MPP.

Fig. 5 Examples of an original video frame (left) and a privacy protected one by [50] (right).

Fig. 6 Our prototype of a real-time mobile privacy protection system (left) and its usage example (right).

One of the fundamental techniques in MPP is ICP detection [59], [61], which automatically locates ICPs in video. After locating all people using face/pedestrian detection [62], [63], and assuming that the videographer’s intention is reflected in how he/she moves the mobile camera, ICP detection extracts visual features such as each detected person’s position, size, and the similarity between the person’s motion and the camera motion. Using these features, a machine learning-based ICP model, e.g., a support vector machine or a Markov random field (MRF)-based model, classifies each detected person as an ICP or a non-ICP.
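The feature side of ICP detection can be sketched concretely. The three features below (centrality in the frame, relative size, and person/camera motion similarity) follow the description above, but their exact formulas and the stand-in linear classifier with hand-picked weights are our assumptions; the actual system learns the model (e.g., an SVM or an MRF-based model) from data.

```python
import numpy as np

def icp_features(person_box, frame_size, person_motion, camera_motion):
    """Features in the spirit of ICP detection: centrality of the person in
    the frame, relative size, and cosine similarity between the person's
    image motion and the camera motion (all formulas are illustrative)."""
    t, l, b, r = person_box
    H, W = frame_size
    cx, cy = (l + r) / 2 / W, (t + b) / 2 / H
    centrality = 1.0 - 2.0 * max(abs(cx - 0.5), abs(cy - 0.5))
    size = (b - t) * (r - l) / (H * W)
    p = np.asarray(person_motion, dtype=float)
    c = np.asarray(camera_motion, dtype=float)
    denom = np.linalg.norm(p) * np.linalg.norm(c)
    motion_sim = float(p @ c / denom) if denom > 0 else 0.0
    return np.array([centrality, size, motion_sim])

# Stand-in linear scorer with invented weights; a learned classifier
# (SVM, MRF) would replace this in the real system.
WEIGHTS, BIAS = np.array([1.0, 2.0, 1.5]), -1.0

def is_icp(features):
    return float(WEIGHTS @ features + BIAS) > 0.0
```

Under this toy scorer, a large, centered person whose motion the camera follows scores as an ICP, while a small person at the frame edge does not.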

Our MPP then obscures the non-ICPs identified by ICP detection. Considering that most videos captured with mobile cameras are watched by others, possibly through video sharing services, MPP must provide various types of abstraction so that its users can choose an appropriate one for their video content. In [60], we implemented blurring, blocking out, and removal by seam carving [64]. We also developed a background estimation technique for mobile cameras and adopted it for abstraction as shown in Fig. 5 [50], which can serve as preprocessing for the various abstraction operators used in PriSurv [41]. In addition, we proposed a mobile system that automatically obscures non-ICPs in real time, as shown in Fig. 6.
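Of the abstraction operators mentioned, seam carving is the least obvious, so a bare-bones sketch of its core step may help: removing one minimal-energy vertical seam from a grayscale image by dynamic programming. This is only the generic seam-carving primitive [64]; the MPP use additionally biases the energy so that seams pass through the non-ICP region, which is not modeled here.

```python
import numpy as np

def remove_vertical_seam(gray):
    """Remove one minimal-energy vertical seam from a 2-D grayscale image."""
    h, w = gray.shape
    g = gray.astype(float)
    energy = np.abs(np.gradient(g, axis=1)) + np.abs(np.gradient(g, axis=0))
    cost = energy.copy()
    for y in range(1, h):                      # dynamic programming pass
        for x in range(w):
            cost[y, x] += cost[y - 1, max(x - 1, 0):min(x + 2, w)].min()
    out = np.empty((h, w - 1), dtype=gray.dtype)
    x = int(np.argmin(cost[-1]))               # backtrack the cheapest seam
    for y in range(h - 1, -1, -1):
        out[y] = np.delete(gray[y], x)
        if y:
            lo = max(x - 1, 0)
            x = lo + int(np.argmin(cost[y - 1, lo:min(x + 2, w)]))
    return out

# Toy input: each row is the same smooth ramp, so any seam is equally cheap.
img = np.tile(np.arange(8, dtype=np.uint8) * 30, (6, 1))
carved = remove_vertical_seam(img)
```

Repeating this (with the energy forced low inside the unwanted region) shrinks the image so the non-ICP disappears without the conspicuous blank left by blocking out.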

5. Protection and Utilization of Privacy Information

Based on our research results on sensing and privacy, we have deepened our ideas and launched the Harmonized Information Field (HIFI) project [65]. Table 1 summarizes a comparison of our projects in terms of sensors, subjects, and sensing targets.

Table 1 Comparison of our projects.

Fig. 7 Conceptual sketch of HIFI.

HIFI is an information infrastructure that underlies a field, which is a certain area in the real world, and provides the people inside it with timely recommendations based on privacy information intentionally released by them. In our project, the field is a bounded space for specific purposes, for example, shopping malls, stations, and theme parks. We regard as privacy information, in a broad sense, various information that links to a person’s ID: face, figure, expression, motion, location, trajectory, preference, interest, and so on. A user, who is a visitor to HIFI, will intentionally release his/her privacy information and gain profit in exchange.

Let us describe HIFI’s goal with the following scenario. Everybody knows that he/she can get useful information and premium services by giving, to some extent, his/her privacy information to trusted organizations and shops. An expert salesclerk in a shop will recommend goods that suit the preference or interest of a familiar guest. This comfortable recommendation can be derived from privacy information released by the guest. In other words, it is very difficult for the guest to obtain premium services without giving his/her privacy information. Note that the disclosure of privacy information and the profit obtained in exchange for the disclosure will be balanced in HIFI.

Figure 7 shows a conceptual sketch of HIFI. A visitor who comes into the field submits his/her privacy information to HIFI. The privacy information acquired from active or passive sensors is securely stored and structured in the spatio-temporal database (STDB). Mining the STDB produces useful information that is fed back to the visitor as recommendations of useful or premium information and services tailored for him/her. HIFI’s privacy information processing consists of four steps: 1) collection and disclosure, 2) protection, 3) storing and structuring, and 4) utilization.

We are now pursuing the following research issues:

• Fusing environmental and social sensing
• Acquiring spatial and temporal attributes of users to be stored in the STDB
• Data cleansing for privacy protection
• Recommendation by utilizing privacy information
• Information entry at HIFI’s entrance and exit gates
• Evaluation of the balance between the disclosure and profit

Among them, we here describe the information entry system (IES) placed at the entrance/exit gates of HIFI. At the gate, the visitor will be able to enter his/her privacy information and to determine its disclosure level. HIFI then obtains his/her consent on privacy management in HIFI. Namely, the visitor can decide, according to his/her own intention, whether his/her privacy information is stored or discarded at the entrance and exit. Thus, the IES plays an important role in HIFI.

As illustrated in Fig. 8, the IES sits at the boundary between HIFI and the external world (EW), and has two sides: the HIFI-side and the EW-side. The IES consists of a camera, Kinect sensors, and a touch-screen monitor for interactively collecting visitors’ privacy information, e.g., their face, clothing, age, gender, and so on, with less load on the visitors. At the IES, a visitor first passes through the EW-side gate and moves to the front of the monitor. Next, he/she chooses the ID type for his/her stay in HIFI by interacting with the monitor. Then he/she enters HIFI through the HIFI-side gate.

Fig. 8 Entrance gate of HIFI.

The ID type is divided into three classes: autonym, pseudonym, and anonym, which mean that the visitor is identifiable in both HIFI and the EW, identifiable in HIFI but not in the EW, and not identifiable in either HIFI or the EW, respectively. With the ID type, the visitor can determine the level of his/her privacy information disclosure by himself/herself. The autonym ID makes all privacy information in the EW disclosed in HIFI. The pseudonym ID makes abstracted or partial privacy information disclosed. The anonym ID makes no information disclosed.
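The three ID types amount to a disclosure policy over attribute sets. The sketch below encodes that policy as a table; the specific attribute lists assigned to each ID type, in particular which attributes count as "abstracted or partial" under a pseudonym, are our illustrative assumptions.

```python
# Disclosure policy for the three ID types; attribute assignments are
# illustrative, not HIFI's actual specification.
ATTRIBUTES = ["face", "clothing", "age", "gender", "trajectory"]

DISCLOSURE = {
    "autonym":   set(ATTRIBUTES),             # identifiable in HIFI and EW
    "pseudonym": {"clothing", "trajectory"},  # abstracted/partial info only
    "anonym":    set(),                       # nothing disclosed
}

def disclosed(id_type, attribute):
    """Is this attribute of the visitor visible inside HIFI?"""
    return attribute in DISCLOSURE[id_type]
```

A visitor picking "pseudonym" at the gate would thus let HIFI use coarse clothing and trajectory information for recommendations while keeping his/her identity out of reach.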

While the visitor passes through the IES, the Kinect sensors continually detect his/her body region in depth images using background subtraction. His/her clothing information is represented as a color histogram computed over the extracted body region. The camera installed on top of the monitor extracts his/her frontal face region with a face detection technique while he/she stands in front of the monitor choosing the ID type. The face information is represented as facial images, which are used for visitor identification and age/gender estimation under the autonym ID, and are immediately discarded under the pseudonym and anonym IDs. The IES collects privacy information through interaction in order to facilitate the visitor’s manipulations.
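The clothing representation mentioned above, a color histogram over the masked body region, can be sketched as follows. The bin count, quantization scheme, and normalization are our choices for illustration; the paper only states that a color histogram over the extracted body region is used.

```python
import numpy as np

def clothing_histogram(image, body_mask, bins=4):
    """Coarse RGB color histogram over the body region (illustrative:
    bins-per-channel quantization, L1-normalized)."""
    pixels = image[body_mask].astype(int)          # N x 3 masked pixels
    idx = (pixels * bins) // 256                   # quantize each channel
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

# Synthetic example: a pure-red "shirt" region inside a black frame.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :, 0] = 255
mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 2:8] = True
h = clothing_histogram(img, mask)
```

Because the histogram discards spatial layout and fine detail, it can support re-identification of a visitor within the field without storing a recognizable image, which fits the pseudonym setting.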

6. Conclusion

The trend of privacy information processing has been changing from exhaustive protection and selective protection toward appropriate disclosure and utilization. In HIFI, we focus on ‘profit’ as the concept opposed to ‘disclosure’ of privacy information, and try to harmonize them. It is impossible for everybody to disclose his/her privacy information at any time and any place. We therefore consider ‘disclosure’ within the bounded space called the field. HIFI’s usefulness depends on how much benefit the visitor can obtain in it. It should be noted that ‘protection’, ‘disclosure’, and ‘profit’ are related to human factors, subjectivity, and situation/context-awareness. A human-centered perspective is indispensable for system development. In addition, we point out that not only technological approaches but also psychological, social, and legal approaches are of great importance in privacy information processing.

References

[1] A. Senior, ed., Protecting Privacy in Video Surveillance, Springer, 2009.

[2] Special Issue on Enhancing Privacy Protection in Multimedia Systems, EURASIP Journal on Information Security, vol.2009, 2009.

[3] Special Issue on Privacy-aware Multimedia Surveillance Systems, Multimedia Systems, Springer, vol.18, no.2, 2012.

[4] Special Issue on Intelligent Video Surveillance for Public Security & Personal Privacy, IEEE Trans. Information Forensics and Security, vol.8, no.10, 2013.

[5] N. Babaguchi, “Visual privacy sensitive information and its processing,” ISCIE Journal ‘Systems, Control and Information’, vol.54, no.6, pp.242–247, 2010. (in Japanese)

[6] N. Babaguchi and S. Nishio, “Sensing and privacy protection of network robots,” J. IEICE, vol.91, no.5, pp.380–386, May 2008. (in Japanese)

[7] A. Frome, G. Cheung, A. Abdulkader, M. Zennaro, B. Wu, A. Bissacco, H. Adam, H. Neven, and L. Vincent, “Large-scale privacy protection in Google Street View,” Proc. IEEE Int. Conf. Computer Vision, pp.2373–2380, 2009.

[8] YouTube, Face blurring: when footage requires anonymity, http://youtube-global.blogspot.com/2012/07/face-blurring-when-footage-requires.html, accessed Nov. 21, 2014.

[9] S. Zerr, S. Siersdorfer, and J. Hare, “PicAlert!: a system for privacy-aware image classification and retrieval,” Proc. ACM Int. Conf. Information and Knowledge Management, pp.2710–2712, 2012.

[10] A. Cavallaro, O. Steiger, and T. Ebrahimi, “Semantic video analysis for adaptive content delivery and automatic description,” IEEE Trans. Circuits Syst. Video Technol., vol.15, no.10, pp.1200–1209, 2005.

[11] Q.A. Zhao and J.T. Stasko, “Evaluating image filtering based techniques in media space applications,” Proc. ACM Conf. Computer Supported Cooperative Work, pp.11–18, 1998.

[12] M. Boyle, C. Edwards, and S. Greenberg, “The effects of filtered video on awareness and privacy,” Proc. ACM Conf. Computer Supported Cooperative Work, pp.1–10, 2000.

[13] C. Neustaedter, S. Greenberg, and M. Boyle, “Blur filtration fails to preserve privacy for home-based video conferencing,” ACM Trans. Computer-Human Interaction, vol.13, no.1, pp.1–36, 2006.

[14] N. Babaguchi, T. Koshimizu, I. Umata, and T. Toriyama, “Psychological study for designing privacy protected video surveillance system: PriSurv,” in Protecting Privacy in Video Surveillance, pp.147–164, 2009.

[15] F. Dufaux and T. Ebrahimi, “A framework for the validation of privacy protection solutions in video surveillance,” Proc. IEEE Int. Conf. Multimedia and Expo, pp.66–71, 2010.

[16] P. Korshunov, S. Cai, and T. Ebrahimi, “Crowdsourcing approach for evaluation of privacy filters in video surveillance,” Proc. ACM Multimedia 2012 Workshop on Crowdsourcing for Multimedia, pp.35–40, 2012.

[17] E. Newton, L. Sweeney, and B. Malin, “Preserving privacy by de-identifying face images,” IEEE Trans. Knowl. Data Eng., vol.17, no.2, pp.232–243, 2005.

[18] D. Chen, Y. Chang, R. Yan, and J. Yang, “Tools for protecting the privacy of specific individuals in video,” EURASIP J. Advances in Signal Processing, vol.2007, pp.1–9, 2007.

[19] J.L. Crowley, J. Coutaz, and F. Berard, “Things that see,” Commun. ACM, vol.43, no.3, pp.54–64, 2000.

[20] F. Dufaux and T. Ebrahimi, “Video surveillance using JPEG 2000,” Proc. SPIE, pp.268–275, 2004.

[21] F. Dufaux and T. Ebrahimi, “H.264/AVC video scrambling for privacy protection,” Proc. IEEE Int. Conf. Image Processing, pp.1688–1691, 2008.

[22] K. Martin and K. Plataniotis, “Privacy protected surveillance using secure visual object coding,” IEEE Trans. Circuits Syst. Video Technol., vol.18, no.8, pp.1152–1162, 2008.

[23] F. Dai, L. Tong, Y. Zhang, and J. Li, “Restricted H.264/AVC video coding for privacy protected video scrambling,” J. Visual Communication and Image Representation, vol.22, no.6, pp.479–490, 2011.

[24] Y. Wang, M. O’Neill, and F. Kurugollu, “Privacy region protection for H.264/AVC by encrypting the intra prediction modes without drift error in I frames,” Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp.2964–2968, 2013.

[25] P. Carrillo, H. Kalva, and S. Magliveras, “Compression independent object encryption for ensuring privacy in video surveillance,” Proc. IEEE Int. Conf. Multimedia and Expo, pp.273–276, 2008.

[26] S.M.M. Rahman, M.A. Hossain, H. Mouftah, A.E. Saddik, and E. Okamoto, “Chaos-cryptography based privacy preservation technique for video surveillance,” Multimedia Syst., vol.18, no.2, pp.145–155, 2012.

[27] W. Zhang, S. Cheung, and M. Chen, “Hiding privacy information in video surveillance system,” IEEE Int. Conf. Image Processing, pp.868–871, 2005.

[28] P. Meuel, M. Chaumont, and W. Puech, “Data hiding in H.264 video for lossless reconstruction of region of interest,” Proc. European Signal Processing Conf., pp.2301–2305, 2007.

[29] G. Li, Y. Ito, X. Yu, N. Nitta, and N. Babaguchi, “Recoverable privacy protection for video content distribution,” EURASIP Journal on Information Security, vol.2009, 9 pages, 2009.

[30] S. Moncrieff, S. Venkatesh, and G. West, “Dynamic privacy in public surveillance,” Computer, vol.42, no.9, pp.22–28, 2009.

[31] A. Cavallaro, “Adding privacy constraints to video-based applications,” European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology, 8 pages, 2004.

[32] S. Fleck and W. Straßer, “Towards secure and privacy sensitive surveillance,” Proc. ACM/IEEE Int. Conf. Distributed Smart Cameras, pp.126–132, 2010.

[33] A. Senior, S. Pankanti, A. Hampapur, L. Brown, Y.L. Tian, and A. Ekin, “Enabling video privacy through computer vision,” IEEE Security & Privacy, vol.3, no.3, pp.50–57, 2005.

[34] T.E. Boult, “PICO: privacy through invertible cryptographic obscuration,” Computer Vision for Interactive and Intelligent Environment, pp.27–38, 2005.

[35] K. Yabuta, H. Kitazawa, and T. Tanaka, “A new concept of security camera monitoring with privacy protection by masking moving objects,” Proc. Pacific-Rim Conf. Multimedia, pp.831–842, 2005.

[36] A. Chattopadhyay and T. Boult, “PrivacyCam: a privacy preserving camera using uCLinux on the Blackfin DSP,” Proc. IEEE Workshop on Embedded Vision Systems, pp.1–8, 2007.

[37] M. Upmanyu, A.M. Namboodiri, K. Srinathan, and C. Jawahar, “Efficient privacy preserving video surveillance,” Proc. IEEE Int. Conf. Computer Vision, pp.1639–1646, 2009.

[38] T. Winkler and B. Rinner, “TrustCAM: security and privacy-protection for an embedded smart camera based on trusted computing,” IEEE Int. Conf. Advanced Video and Signal Based Surveillance, pp.593–600, 2010.

[39] J. Wickramasuriya, M. Datt, S. Mehrotra, and N. Venkatasubramanian, “Privacy protecting data collection in media spaces,” ACM Int. Conf. Multimedia, pp.48–55, 2004.

[40] J. Schiff, M. Meingast, D. Mulligan, S. Sastry, and K. Goldberg, “Respectful cameras: detecting visual markers in real-time to address privacy concerns,” Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp.971–978, 2007.

[41] K. Chinomi, N. Nitta, Y. Ito, and N. Babaguchi, “PriSurv: privacy protected video surveillance system using adaptive visual abstraction,” Proc. Int. Multimedia Conf., pp.144–154, 2008.

[42] T. Takehara, N. Nitta, and N. Babaguchi, “Three-level privacy control for sensing-based real-world content digital diorama,” Proc. Int. Conf. Internet Multimedia Computing and Service, pp.17–20, 2011.

[43] R. Cucchiara and G. Gualdi, “Mobile video surveillance systems: an architectural overview,” Proc. Int. Workshop on Mobile Multimedia Processing, pp.89–109, 2008.

[44] I. Kitahara, K. Kogure, and N. Hagita, “Stealth vision for protecting privacy,” Proc. Int. Conf. Pattern Recognition, pp.404–407, 2004.

[45] J. Brassil, “Technical challenges in location-aware video surveillance privacy,” in Protecting Privacy in Video Surveillance, pp.91–113, 2009.

[46] Y. Nakashima, N. Babaguchi, and J. Fan, “Intended human object detection for automatically protecting privacy in mobile video surveillance,” Multimedia Syst., vol.18, no.2, pp.157–173, 2012.

[47] A. Flores and S. Belongie, “Removing pedestrians from Google Street View images,” Proc. IEEE Int. Workshop on Mobile Vision, pp.53–58, 2010.

[48] J. Chaudhari, S.C.S. Cheung, and M.V. Venkatesh, “Privacy protection for life-log video,” Proc. IEEE Workshop on Signal Processing Applications for Public Security and Forensics, pp.1–5, 2007.

[49] M. Vijay Venkatesh, J. Zhao, L. Profitt, and S. Cheung, “Audio-visual privacy protection for video conference,” Proc. IEEE Int. Conf. Multimedia and Expo, pp.1574–1575, 2009.

[50] Y. Nakashima, N. Babaguchi, and J. Fan, “Automatic generation of privacy-protected videos using background estimation,” Proc. IEEE Int. Conf. Multimedia and Expo, 6 pages, 2011.

[51] T. Koyama, Y. Nakashima, and N. Babaguchi, “Real-time privacy protection system for social videos using intentionally-captured persons detection,” Proc. IEEE Int. Conf. Multimedia and Expo, pp.1–6, 2013.

[52] J. Fan, H. Luo, M.S. Hacid, and E. Bertin, “A novel approach for privacy-preserving video sharing,” Proc. ACM Conf. Information and Knowledge Management, pp.609–616, 2005.

[53] J. Peng, N. Babaguchi, H. Luo, Y. Gao, and J. Fan, “Constructing distributed hippocratic video databases for privacy-preserving online patient training and counseling,” IEEE Trans. Inf. Technol. Biomed., vol.14, no.4, pp.1014–1026, 2010.

[54] T. Yamada, S. Gohshi, and I. Echizen, “Privacy visor: wearable device for preventing privacy invasion through face recognition from camera images,” Proc. Int. Workshop on Advanced Image Technology, pp.90–93, 2013.

[55] S. Ye, Y. Luo, J. Zhao, and S.C. Cheung, “Anonymous biometric access control,” EURASIP J. Information Security, vol.2009, pp.1–17, 2009.

[56] T. Winkler and B. Rinner, “User-centric privacy awareness in video surveillance,” Multimedia Syst., vol.18, no.2, pp.99–121, 2012.

[57] N. Babaguchi, T. Koshimizu, I. Umata, and T. Toriyama, “Psychological study for designing privacy protected video surveillance system: PriSurv,” in Protecting Privacy in Video Surveillance, pp.147–164, 2009.

[58] N. Nitta, Y. Ito, and N. Babaguchi, “Sensing-based real-world content – digital diorama,” JSAI, vol.24, no.2, pp.220–225, 2009. (in Japanese)

[59] Y. Nakashima, N. Babaguchi, and J. Fan, “Detecting intended human objects in human-captured videos,” Proc. IEEE Computer Society Workshop on Perceptual Organization in Computer Vision, pp.1–8, 2010.

[60] Y. Nakashima, N. Babaguchi, and J. Fan, “Automatically protecting privacy in consumer generated videos using intended human object detector,” Proc. ACM Int. Conf. Multimedia, pp.1135–1138, 2010.

[61] T. Koyama, Y. Nakashima, and N. Babaguchi, “Markov random field-based real-time detection of intentionally-captured persons,” Proc. IEEE Int. Conf. Image Processing, pp.1377–1380, 2012.

[62] P. Viola and M.J. Jones, “Robust real-time face detection,” Int. J. Comput. Vis., vol.57, no.2, pp.137–154, 2004.

[63] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, pp.886–893, 2005.

[64] S. Avidan and A. Shamir, “Seam carving for content-aware image resizing,” ACM Trans. Graphics (Proc. ACM SIGGRAPH 2007), vol.26, no.3, pp.10-1–10-9, 2007.

[65] N. Babaguchi, “[Special lecture] multimedia processing and privacy protection – towards new services at real fields by disclosing privacy information,” IEICE Technical Report, EMM2012-61, 2012. (in Japanese)

Noboru Babaguchi received the B.E., M.E., and Ph.D. degrees in communication engineering from Osaka University in 1979, 1981, and 1984, respectively. He is currently a Professor of the Department of Communication Engineering, Osaka University. From 1996 to 1997, he was a Visiting Scholar at the University of California, San Diego. His research interests include image analysis, multimedia computing, and intelligent systems. He received the Best Paper Award of the 2006 Pacific-Rim Conference on Multimedia (PCM 2006) and the Fifth International Conference on Information Assurance and Security (IAS 2009). He is on the editorial boards of Multimedia Tools and Applications, Advances in Multimedia, and New Generation Computing. He served as a General Co-chair of the 14th International MultiMedia Modeling Conference (MMM 2008), ACM Multimedia 2012, and so on. He has published over 200 journal and conference papers and several textbooks. He is a fellow of the IEICE, a senior member of the IEEE, and a member of the ACM, the IPSJ, the ITE, and the JSA.

Yuta Nakashima received the B.E. and M.E. degrees in communication engineering from Osaka University, Osaka, Japan, in 2006 and 2008, respectively, and the Ph.D. degree in engineering from Osaka University, Osaka, Japan, in 2012. He is currently an assistant professor at the Graduate School of Information Science, Nara Institute of Science and Technology (NAIST). He was a research fellow of the Japan Society for the Promotion of Science (JSPS) from 2010 to 2012, and was a Visiting Scholar at the University of North Carolina at Charlotte in 2012. His research interests include video content analysis using probabilistic and statistical approaches. He received the Open Paper Award of the 2013 International Conference on Image Processing (ICIP 2013). He is a member of the IEEE, the ACM, and the IEICE.