

WEARABLE PRIVACY PROTECTION WITH VISUAL BUBBLE

Shaoqian Wang⋆, Sen-ching S. Cheung†, Ying Luo‡

Department of Electrical and Computer Engineering
University of Kentucky, Lexington, KY 40506

⋆[email protected], †[email protected], ‡[email protected]

ABSTRACT

Wearable cameras are increasingly used in many different applications, from law enforcement to medicine. In this paper,¹ we consider an application of using a wearable camera to record one-on-one therapy with a child in a classroom or clinic. To protect the privacy of other individuals in the same environment, we introduce a new visual privacy paradigm called the privacy bubble. The privacy bubble is a virtual zone centered around the camera for observation, whereas the rest of the environment and people are obfuscated. In contrast to most existing visual privacy systems that rely on visual classifiers, the privacy bubble is based on depth estimation to determine the extent of privacy protection. To demonstrate this concept, we construct a wearable stereo camera for depth estimation on the Raspberry Pi platform. We also propose a novel framework to quantify the uncertainty in depth measurements so as to minimize a statistical privacy risk in constructing the depth-based privacy bubble. The effectiveness of the proposed scheme is demonstrated with preliminary experimental results.

Index Terms— privacy protection, privacy bubble, wearable camera, depth uncertainty, stereo quantization

1. INTRODUCTION

The increasing computation power of small embedded platforms and affordable camera sensors enable many new and diverse applications, ranging from entertainment and security to healthcare. Some of these applications have strong privacy needs as required by law. For example, in the past year, there have been strong calls for U.S. law enforcement officials to wear body cameras, recording their interactions with the general public [1, 2, 3]. This video, if shared, could offer a wealth of information to social scientists, citizen activists, and others.

Another example, which is the focus of this paper, is video of the behaviors of special-needs children, especially video that captures their interactions with others in naturalistic environments like schools and homes. Such videos are highly valuable for the diagnosis and treatment of various developmental disorders, including autism and ADHD [4]. With the popularity of smartphone cameras and wearable cameras, videos can be recorded in any environment, capturing important intermittent behaviors that are difficult to observe during a brief clinical visit. Shared video thus becomes an effective tool to facilitate communication between families and professionals [5, 6]. However, its use is governed by a myriad of privacy laws, including HIPAA [7] and FERPA [8] in the US. Consent from bystanders is often difficult, if not impossible, to obtain. Many studies have found that privacy is among the top concerns when setting up cameras at home and at school and when sharing such videos online [9, 10, 11, 12].

¹This work was supported in part by the National Science Foundation under Grant 1237134.

As a result, visual privacy protection has garnered a great deal of attention in the last few years. A recent survey paper provides a comprehensive overview of different visual privacy protection technologies [13]. Most existing visual privacy protection schemes rely on intelligent classifiers to identify sensitive information, such as faces or entire persons, for protection. However, many of these classifiers are of questionable reliability, and missing even a few important pixels can significantly degrade the protection. Furthermore, these techniques require additional selection mechanisms to differentiate target subjects, whose behaviors need to be recorded, from others, whose privacy needs to be protected [14]. Any misidentification of target subjects can defeat the entire purpose of privacy protection.

Motivated by the need for strong video privacy protection and portable recording, we describe in this paper a wearable privacy-enhanced video camera that can be mounted on an adult observer for behavioral observation. The preliminary design is shown in Figure 1(a). The novel contribution of our proposed system is its use of a "privacy bubble" for visual privacy protection: the privacy bubble defines an adjustable virtual zone around the camera for recording. The assumption is that the subject of interest is usually the person closest to the observer and therefore falls within the bubble; individuals and the environment outside the zone are completely obfuscated. This is a reasonable assumption for our target application of behavioral observation with children, as the observer wearing the camera is usually a parent, a teacher, or a therapist working with the subject. Another advantage is that pixel-based depth measurements can be estimated with fidelity high enough for privacy protection and at a cost low enough for the general public. The popular Kinect 2 camera by Microsoft provides a very low-cost solution for such an application. Using a Kinect camera, we can easily demonstrate a privacy bubble by selectively applying obfuscation to each color pixel based on its depth value. An example is shown in Figure 1(b).
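To make the idea concrete, the following is a minimal sketch of such depth-thresholded obfuscation, assuming a color image and a registered per-pixel depth map (in meters) as NumPy arrays; the function name, bubble radius, and pixelation block size are illustrative choices, not part of the described system.

```python
# Minimal sketch of depth-thresholded obfuscation (assumed interface):
# pixels inside the bubble keep their color; everything else is pixelated.
import numpy as np
import cv2

def apply_depth_bubble(color, depth_m, radius_m=1.5, block=16):
    h, w = depth_m.shape
    # Build the obfuscated layer by coarse down/up-sampling (pixelation).
    small = cv2.resize(color, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    obfuscated = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    # Reveal only pixels with a valid depth reading inside the bubble.
    inside = (depth_m > 0) & (depth_m < radius_m)
    out = obfuscated.copy()
    out[inside] = color[inside]
    return out
```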

Fig. 1: (a) Wearable privacy camera, and (b) privacy bubble implemented with a Kinect sensor

On the other hand, the Kinect camera is not portable and does not work well in outdoor environments. Among all depth sensing technologies, the most robust approach is based on stereo cameras. In this paper, we propose an embedded design of a privacy-enhanced wearable stereo camera using embedded cameras on the popular Raspberry Pi platform [15]. In our design, the depth measurement is based on the disparity estimated by a stereo-matching algorithm. Unlike a typical depth-based system, we propose a statistical framework to quantify the uncertainty of the depth measurement and create the privacy bubble by minimizing a statistical privacy risk, so as to satisfy the more conservative requirements of privacy protection.

The rest of the paper is organized as follows. After reviewing related work in Section 2, we describe the proposed framework for analyzing the uncertainty in stereo-depth measurement in Section 3. Armed with this uncertainty framework, we introduce the privacy bubble protection in Section 4. We present our hardware implementation and experimental results in Section 5. Section 6 concludes the paper and discusses future work.

2. RELATED WORK

With the pervasiveness of surveillance and smartphone cameras, visual privacy has attracted much attention in recent years [13]. Some systems protect an individual's privacy by replacing selected objects with black boxes, large pixels [16], or scrambling [17], whereas others completely remove the objects and fill in the holes with background and other foreground objects [18]. All of these algorithms require a mechanism to identify the subjects for privacy protection, using methods ranging from special markers like yellow hardhats [19] and visual tags [20] to RFID [21] and biometric signals [14]. The drawback of all these approaches is their reliance on image segmentation and subject identification algorithms, which are often not reliable enough for privacy protection.

Stereo matching is one of the earliest approaches to depth measurement [22]. Readers can find a list of recently proposed stereo-matching algorithms and their performances at http://vision.middlebury.edu/stereo/eval3/. Earlier work on uncertainty analysis of stereo matching primarily dealt with the impact of spatial quantization error. For example, in [23, 24], the authors derived the probability density function of the range estimation error based on various design parameters of the stereo imaging system. Matching costs based on simplistic error functions [25] and signal-to-noise ratio [26] were introduced to model the uncertainty in disparity measurements. More recent work analyzes the uncertainty by estimating the confidence of the matching costs using different machine learning techniques, ranging from linear discriminant analysis [27] to random forests [28] and convolutional neural networks [29]. The focus of our paper, however, is not on the exact approach to obtaining the confidence, but rather on how it can be used in designing the privacy bubble. As such, we focus on the simple approach of combining the stereo quantization error and the mismatch in the block matching to quantify the uncertainty of the depth measurement.

3. UNCERTAINTY IN STEREO-DEPTH MEASUREMENT

In most stereo-depth applications, the goal is to estimate the depth z for each pixel based on the measured disparity value d. The privacy bubble is determined based on the value of z. The uncertainty of the depth estimate, however, varies from pixel to pixel. For privacy-related applications, it is important to be conservative so as not to reveal pixels with highly uncertain depth estimates. As such, our goal in this section is to characterize the conditional probability density function (pdf) f(z|d) of the estimation process in order to determine how reliable the depth estimate is. We model f(z|d) through its relationship with f(z|d_k) and P(d_k|d), where d_k with k = 0, 1, 2, ... represents the ideal but unknown disparity, quantized due to the discrete nature of the system. By the law of total probability, these quantities are related as follows:

$$f(z \mid d) = \sum_k f(z \mid d_k)\, P(d_k \mid d). \qquad (1)$$

For a standard stereo pinhole camera setup, let f be the focal length of both cameras, B the baseline, and δ the image sampling interval. Assume a spatial point with world coordinates (x, y, z) forms images on the two image planes with x-coordinates x_L and x_R, respectively. It follows that the disparity of the spatial point is

$$d = x_L - x_R = \frac{fB}{z}. \qquad (2)$$

However, because of the discrete nature of the imaging system, x_L and x_R are quantized, and the acquired disparity is therefore quantized to an integral multiple of δ, which we denote as d_k. Let z_k be the nominal distance (between the spatial point and the camera plane) corresponding to d_k:

$$z_k := \frac{fB}{d_k}, \qquad (3)$$

where d_k = kδ, k = 1, 2, ..., m.

Further, assuming that the unquantized coordinates x_L and x_R are independent and uniformly distributed random variables, the conditional pdf of z given z_k can be obtained as follows:

$$f_Z(z \mid z_k) = \begin{cases} \left(\dfrac{1}{\delta^2}\left(\dfrac{fB}{z} - \dfrac{fB}{z_k}\right) + \dfrac{1}{\delta}\right)\dfrac{fB}{z^2}, & \text{for } z_k \le z \le z_{k-1},\\[2ex] \left(-\dfrac{1}{\delta^2}\left(\dfrac{fB}{z} - \dfrac{fB}{z_k}\right) + \dfrac{1}{\delta}\right)\dfrac{fB}{z^2}, & \text{for } z_{k+1} \le z < z_k. \end{cases} \qquad (4)$$

The details of the derivation can be found in [23].

Since the real depth z is confined within the range [z_{k+1}, z_{k-1}], we can use the length Δ_k of this interval to quantify the uncertainty of the true depth:

$$\Delta_k := z_{k-1} - z_{k+1} = \frac{2}{\dfrac{fB}{z_k^2\delta} - \dfrac{\delta}{fB}}.$$

Note that the farther the point is from the camera, the larger Δ_k is and the more uncertain its true depth becomes. A smaller baseline B also increases the depth uncertainty. This is important to the design of a wearable stereo camera, as the baseline is highly constrained by its compact size.
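As a concrete check (our arithmetic, using the prototype parameters reported in Section 5: f = 3.60 mm, B = 6 cm, δ = 6 µm, so fB = 2.16 × 10⁻⁴ m²), a point at z_k = 4.5 m has quantized disparity d_k = fB/z_k = 48 µm = 8δ, and

$$\Delta_k = \frac{2}{\frac{fB}{z_k^2\delta} - \frac{\delta}{fB}} = \frac{2}{1.778 - 0.028}\,\text{m} \approx 1.14\,\text{m},$$

consistent with z_{k-1} = fB/(7δ) ≈ 5.14 m and z_{k+1} = fB/(9δ) = 4.0 m. Quantization alone thus leaves over a meter of depth ambiguity near the bubble radius used in our experiments, which motivates the conservative treatment below.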

Because of (3), f(z|d_k) = f(z|z_k), and as such, we obtain the uncertainty of the depth measurement given the quantized true disparity. The disparity value is "true" in the sense that we have assumed perfect stereo matching in producing the disparity value d.

In a practical stereo matching system, false matches often occur due to varying illumination, lack of texture in the scene, and camera distortion. The uncertainty of the stereo matching process is modeled by P(d_k|d), the conditional probability that the quantized disparity d_k corresponds to the perfect disparity, given the measured disparity value d obtained from the stereo-matching algorithm. The approach to estimating P(d_k|d) largely depends on the chosen stereo-matching algorithm. As mentioned in Section 2, a great number of stereo-matching algorithms are available. In our preliminary design, we have chosen the popular semi-global matching algorithm in [30] and adopted an efficient lookup-table approach to computing P(d_k|d). As the approach is specific to the implementation of the algorithm in [30], we defer its description to Section 5.

4. PRIVACY BUBBLE

In this section, we show how the privacy bubble is generated using the estimated f(z|d). In our target application, the subjects to be recorded are close to the wearable camera, while we want to protect the privacy of the rest of the environment. Therefore, we can rely on the depth map and its uncertainty to segment the foreground subject and generate a privacy bubble by obfuscating all other pixels. Assume we would like to generate a privacy bubble around a foreground subject within a depth of z_p. To generate the privacy bubble, we need to decide whether a pixel with depth z should be shown or obfuscated. While the true z is unknown, we have a measurement of the disparity d. The conditional probability of the event z < z_p given d can be numerically computed as follows:

$$P(z < z_p \mid d) = \int_{z_{\min}}^{z_p} f(z \mid d)\, dz. \qquad (5)$$

To determine whether the pixel should be revealed, we rely on the following likelihood test:

$$\frac{P(z < z_p \mid d)}{1 - P(z < z_p \mid d)} > S, \qquad (6)$$

where S > 0 is the privacy protection threshold. If (6) is satisfied, the pixel is shown; otherwise, it is obfuscated. The choice of threshold S reflects how stringent the privacy requirement of the target application is. S ≫ 1 is very conservative but may wrongly obfuscate part of the subject of interest.
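As a minimal sketch, the test in (6) can be folded into a single per-pixel threshold on P(z < z_p | d); the function below assumes that probability has already been computed (e.g., via (7) below).

```python
import numpy as np

def bubble_mask(p_inside, S=4.0):
    """True where a pixel may be shown under privacy threshold S.
    P/(1-P) > S rearranges to P > S/(1+S), avoiding division by zero."""
    return np.asarray(p_inside) > S / (1.0 + S)
```

For example, S = 4 (used in Section 5) reveals a pixel only when P(z < z_p | d) > 0.8, and S = 10 only when it exceeds 10/11 ≈ 0.91.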

Now, we can apply the results from Section 3 to evaluate (5). It can be simplified with (4) and (1) as follows:

$$
\begin{aligned}
P(z < z_p \mid d) &= \int_{z_{\min}}^{z_p} \sum_k f(z \mid d_k)\, P(d_k \mid d)\, dz \\
&= \sum_{k=l+1}^{m} P(d_k \mid d) + P(d_{l-1} \mid d) \int_{z_l}^{z_p} f(z \mid d_{l-1})\, dz + P(d_l \mid d) \int_{z_{l+1}}^{z_p} f(z \mid d_l)\, dz, \qquad (7)
\end{aligned}
$$

where z_l ≤ z_p < z_{l-1}.
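The following sketch evaluates P(z < z_p | d) by numerically integrating the mixture (1) of the piecewise pdfs (4), which agrees with the closed form (7) up to grid resolution. The camera constants mirror Section 5; the dictionary interface for P(d_k | d) is an assumption for illustration.

```python
import numpy as np

f, B, delta = 3.60e-3, 6.0e-2, 6.0e-6   # focal length, baseline, pixel pitch (m)
fB = f * B

def pdf_z_given_zk(z, k):
    """Pdf (4): triangular disparity error around d_k = k*delta, mapped to depth."""
    dk = k * delta
    d = fB / z                         # ideal disparity at depth z
    tri = np.maximum(0.0, 1.0 / delta - np.abs(d - dk) / delta**2)
    return tri * fB / z**2             # times the Jacobian |dd/dz|

def p_inside(z_p, P_dk, z_min=0.5, n=20000):
    """P(z < z_p | d) for P_dk = {k: P(d_k | d)}, by numerical integration."""
    z = np.linspace(z_min, z_p, n)
    dz = z[1] - z[0]
    return sum(pk * pdf_z_given_zk(z, k).sum() * dz for k, pk in P_dk.items())
```

For instance, for a pixel whose disparity is pinned to the single bin k = 8 (i.e., P_dk = {8: 1.0}, a nominal depth of 4.5 m with the constants above), p_inside(4.5, {8: 1.0}) is about 0.5 by symmetry of the triangle, so under S = 4 such a boundary pixel would be obfuscated.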

5. HARDWARE IMPLEMENTATION AND EXPERIMENT

We have built a wearable privacy bubble system using the Raspberry Pi Compute Module (RPCM). The block diagram of the system design and the prototype are shown in Figure 2. The block diagram, shown in Figure 2(a), consists of the RPCM, an I/O board, two Pi cameras, and a WiFi dongle providing networking capability. The prototype, housed in a 3D-printed case and mounted on a chest strap harness, is shown in Figure 2(b). While the current prototype is quite large (11.7 cm by 9.7 cm by 6 cm), using a customized PC board instead of the RPCM I/O board from the Raspberry Pi development kit would make the system much smaller. The stereo vision system is remotely controlled by a smartphone via an ssh connection.

Fig. 2: (a) System diagram and (b) hardware implementation (RpiCam)

In our wearable privacy bubble system, the stereo baseline is B = 6 cm. The focal length of the Pi camera is f = 3.60 mm, with image sampling interval δ = 6 µm. Figure 3(a) shows a red-cyan anaglyph of a pair of stereo images taken with the Pi cameras. Figure 3(b) is the depth map generated by the semi-global block matching algorithm using default parameters. Next, we illustrate how we estimate the uncertainty of the disparity map and demonstrate a privacy bubble with z_p = 4.5 m. Here, we choose the privacy protection threshold S = 4.

In the Matlab implementation of the semi-global block matching algorithm [30], the parameter 'UniquenessThreshold' indicates the uniqueness of a correspondence match: if the second-smallest sum of absolute differences (SAD) value over the whole disparity range is not larger than the smallest SAD by the extent specified by this parameter, the estimated disparity is marked as unreliable. We observed that when 'UniquenessThreshold' is set to 100, all of the stereo matches are labeled as unreliable. As such, we ran a series of tests varying 'UniquenessThreshold' from 0 to 90 with a step size of 15. By counting how many times the computed disparity value is labeled as reliable, we can quantify the reliability of the disparity estimate at each pixel into seven levels, with 7 being the most reliable and 0 being not reliable at all. Figure 3(c) shows the reliability map of the disparity estimates, with the red end being the most reliable.

Next, for reliability level k, 1 ≤ k ≤ 7, the true disparity falls into one of 1 + 2(7 − k) disparity bins, and the probability mass function forms a triangle shape with the given computed disparity value in the middle. We use (7) to calculate the overall probability of a spatial point being within the privacy bubble; the result is shown in Figure 3(d). Figure 3(e) shows the result of simply thresholding the depth, while Figure 3(f) shows the actual privacy bubble based on our uncertainty calculations. In Figure 3(e), the person in the background has slightly higher uncertainty and is erroneously classified as being within the bubble. After taking the uncertainty into account, the person is filtered out in Figure 3(f).
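A sketch of this mapping is below: for reliability level r (1 ≤ r ≤ 7), the true disparity is spread over 1 + 2(7 − r) integer bins around the measured one with triangularly decaying weights. The exact weights are our illustrative choice; the text above specifies only the bin count and the triangular shape.

```python
import numpy as np

def disparity_pmf(d_bin, r):
    """{k: P(d_k | d)} over 1 + 2*(7 - r) bins centered on measured bin d_bin."""
    half = 7 - r                             # half-width; 0 at the top level
    offsets = np.arange(-half, half + 1)
    weights = (half + 1) - np.abs(offsets)   # triangle peaked at the center
    weights = weights / weights.sum()
    return {int(d_bin + o): float(w) for o, w in zip(offsets, weights)}
```

Chaining disparity_pmf with the p_inside and bubble_mask sketches above gives the full per-pixel decision.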

Another feature of the privacy bubble is that spatial points near the bubble boundary with less reliable disparity values are filtered out. For example, in Figure 3(g), we increase the bubble boundary to 5.5 m, and the lights fall on the boundary of the bubble. In Figure 3(d), we can see that the disparity values of the light pixels do not have very high reliability. As such, the lights are filtered out in the privacy bubble, as shown in Figure 3(h). The privacy bubble is cleaner than the depth bubble, showing only the pixels that belong to the bubble with a high confidence level.

Figure 4 further demonstrates the idea of the privacy bubble and the importance of the proposed uncertainty framework. Each row corresponds to a different scenario. The three columns correspond to the original images (left view of the stereo pair), depth-only bubbles, and privacy bubbles with the same radius and a conservative privacy protection threshold of S = 10. Note that depth-based privacy protection works equally well indoors and outdoors. More results can be found at http://vis.uky.edu/nsf-autism/wearable-privacy-cam/.

6. CONCLUSION AND FUTURE WORK

In this paper, we have proposed a new video privacy protection technique using a moving privacy bubble. To minimize the statistical privacy risk in constructing the depth-based privacy bubble, stereo depth uncertainty has been considered in two aspects: uncertainty from quantization and uncertainty from imperfect stereo matching. An implementation of the wearable privacy bubble camera using the Raspberry Pi Compute Module has also been presented. Experimental results have demonstrated the effectiveness of our proposed privacy bubble technique.


Fig. 3: (a) test images, (b) disparity map, (c) reliability map, (d) probability map, (e) depth bubble, (f) privacy bubble, (g) 5.5 m depth bubble, (h) 5.5 m privacy bubble. (a)-(d): intermediate results; (e)-(h): pure depth-based bubble versus uncertainty-based privacy bubble

In addition to a privacy bubble with a fixed radius, we are also experimenting with a privacy bubble whose radius varies based on the closest individual. Figure 5 shows the preliminary results of determining the radius by clustering pixel depths using the K-means algorithm (K = 3) and assuming that the closest individual occupies the closest cluster. The preliminary results are reasonable, though additional work is needed to determine a more robust clustering scheme.
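A minimal sketch of this adaptive radius follows, assuming a simple 1-D K-means over the valid pixel depths (our own illustrative implementation, not the exact clustering used in the experiment).

```python
import numpy as np

def adaptive_radius(depth_m, K=3, iters=20):
    """Cluster valid depths with 1-D K-means and return the far edge of the
    nearest cluster as the bubble radius (assumed to hold the closest person)."""
    z = depth_m[depth_m > 0].ravel()
    centers = np.linspace(z.min(), z.max(), K)   # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = z[labels == k].mean()
    labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
    nearest = int(np.argmin(centers))
    return float(z[labels == nearest].max())
```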

Fig. 4: Left: original. Middle: Depth Bubble. Right: Privacy Bubble.

Fig. 5: Varying-depth privacy bubble

7. REFERENCES

[1] R. Baillon, "Mayor calls for 1,200 body cameras to be utilized by Milwaukee police officers, but how will they work?," Fox6Now.com, http://tinyurl.com/zsx8zja, September 3, 2015.

[2] B. Keilar and D. Mercia, "Hillary Clinton calls for mandatory police body cameras, end era of mass incarceration," CNN.com, http://tinyurl.com/goeuxsl, April 29, 2015.

[3] J. Stanley, "Police body-mounted cameras: With right policies in place, a win for all," ACLU, http://tinyurl.com/hx7y9hz, 2013.

[4] Richard Longabaugh, "The systematic observation of behavior in naturalistic settings," Handbook of Cross-Cultural Psychology, vol. 2, pp. 57–126, 1980.

[5] Brooke Ingersoll and Anna Dvortcsak, "Including parent training in the early childhood special education curriculum for children with autism spectrum disorders," Journal of Positive Behavior Interventions, vol. 8, no. 2, pp. 79–87, 2006.


[6] Carolyn Webster-Stratton, "Advancing videotape parent training: A comparison study," Journal of Consulting and Clinical Psychology, vol. 62, no. 3, pp. 583, 1994.

[7] "Health insurance portability and accountability act of 1996," Public Law, vol. 104, pp. 191, 1996.

[8] Bobbye G. Fry, "The family educational rights and privacy act of 1974," Student Records Management: A Handbook, p. 43, 1997.

[9] Gillian R. Hayes and Khai N. Truong, "Selective archiving: A model for privacy sensitive capture and access technologies," in Protecting Privacy in Video Surveillance, pp. 165–184. Springer, 2009.

[10] Julie A. Kientz and Gregory D. Abowd, "KidCam: Toward an effective technology for the capture of children's moments of interest," in Pervasive Computing, pp. 115–132. Springer, 2009.

[11] Gabriela Marcu, Anind K. Dey, and Sara Kiesler, "Parent-driven use of wearable cameras for autism support: A field study with families," in Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 2012, pp. 401–410.

[12] N. Nazneen, Agata Rozga, Mario Romero, Addie J. Findley, Nathan A. Call, Gregory D. Abowd, and Rosa I. Arriaga, "Supporting parents for in-home capture of problem behaviors of children with developmental disabilities," Personal and Ubiquitous Computing, vol. 16, no. 2, pp. 193–207, 2012.

[13] Thomas Winkler and Bernhard Rinner, "Security and privacy protection in visual sensor networks: A survey," ACM Computing Surveys (CSUR), vol. 47, no. 1, pp. 2, 2014.

[14] Ying Luo, Shuiming Ye, and Sen-ching S. Cheung, "Anonymous subject identification in privacy-aware video surveillance," in Multimedia and Expo (ICME), 2010 IEEE International Conference on. IEEE, 2010, pp. 83–88.

[15] Raspberry Pi Foundation, "Teach, learn and make with Raspberry Pi," http://www.raspberrypi.org, 2015.

[16] J. Wada, K. Kaiyama, K. Ikoma, and H. Kogane, "Monitor camera system and method of displaying picture from monitor camera thereof," Matsushita Electric Industrial Co. Ltd., April 2001.

[17] Frédéric Dufaux and Touradj Ebrahimi, "Scrambling for video surveillance with privacy," 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06), p. 160, 2006.

[18] S.-C. Cheung, M. V. Venkatesh, J. Paruchuri, J. Zhao, and T. Nguyen, "Protecting and managing privacy information in video surveillance systems," in Protecting Privacy in Video Surveillance, A. Senior, Ed. Springer, 2009.

[19] J. Schiff, M. Meingast, D. Mulligan, S. Sastry, and K. Goldberg, "Respectful cameras: Detecting visual markers in real-time to address privacy concerns," in International Conference on Intelligent Robots and Systems (IROS). Springer, 2007, pp. 971–978.

[20] J. Zhao and S.-C. S. Cheung, "Multi-camera surveillance with visual tagging and generic camera placement," in Proceedings of ACM/IEEE International Conference on Distributed Smart Cameras, 2007.

[21] J. Wickramasuriya, M. Datt, S. Mehrotra, and N. Venkatasubramanian, "Privacy protecting data collection in media spaces," in ACM International Conference on Multimedia, New York, NY, Oct. 2004.

[22] Daniel Scharstein and Richard Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, vol. 47, no. 1-3, pp. 7–42, 2002.

[23] Jeffrey J. Rodriguez and J. K. Aggarwal, "Quantization error in stereo imaging," in Computer Vision and Pattern Recognition, 1988. Proceedings CVPR'88., Computer Society Conference on. IEEE, 1988, pp. 153–158.

[24] Raman Balasubramanian, Sukhendu Das, S. Udayabaskaran, and Krishnan Swaminathan, "Quantization error in stereo imaging systems," International Journal of Computer Mathematics, vol. 79, no. 6, pp. 671–691, 2002.

[25] Emanuele Trucco, Vito Roberto, S. Tinonin, and M. Corbatto, "SSD disparity estimation for dynamic stereo," in BMVC, 1996, pp. 1–10.

[26] Andrea Fusiello, Vito Roberto, and Emanuele Trucco, "Experiments with a new area-based stereo algorithm," in Image Analysis and Processing. Springer, 1997, pp. 669–676.

[27] Martin Peris, Atsuto Maki, Sara Martull, Yoshihiro Ohkawa, and Kazuhiro Fukui, "Towards a simulation driven stereo vision system," in Pattern Recognition (ICPR), 2012 21st International Conference on. IEEE, 2012, pp. 1038–1042.

[28] Ralf Haeusler, Rahul Nair, and Daniel Kondermann, "Ensemble learning for confidence measures in stereo vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 305–312.

[29] Jure Zbontar and Yann LeCun, "Stereo matching by training a convolutional neural network to compare image patches," arXiv preprint arXiv:1510.05970, 2015.

[30] Heiko Hirschmuller, "Accurate and efficient stereo processing by semi-global matching and mutual information," in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. IEEE, 2005, vol. 2, pp. 807–814.