
Face Recognition Technology: Security versus Privacy

KEVIN W. BOWYER

They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.

— Benjamin Franklin, Historical Review of Pennsylvania, 1759.

Benjamin Franklin said that anyone who gives up essential liberties to preserve freedom is a fool, but maybe he didn't conceive of nuclear war and dirty bombs.

— Songwriter Neil Young, discussing 9-11 memorial song "Let's Roll" [1]

Video surveillance and face recognition systems have become the subject of increased interest and controversy after the September 11 terrorist attacks on the United States. In favor of face recognition technology, there is the lure of a powerful tool to aid national security. On the negative side, there are fears of an Orwellian invasion of privacy. Given the ongoing nature of the controversy, and the fact that face recognition systems represent leading-edge and rapidly changing technology, face recognition technology is currently a major issue in the area of social impact of technology. This article analyzes the interplay of technical and social issues involved in the widespread application of video surveillance for person identification.

Right to Privacy

The United States Constitution declares a level of protection for the rights of individual citizens against oppression by their government that has made America unique. One right that has become a firmly entrenched American value, even though it is not explicitly enumerated in the Constitution, is the right to privacy. This is evidenced by the fact that phrases like "Orwellian nightmare" and "Big Brother" have passed into such common use that they have meaning even for those who have never read George Orwell's book [2].

The terrorist attacks of September 11, 2001, immediately focused attention on the field of biometrics, the study of ways to recognize people and verify identity. Face recognition is one type of biometric technology. Face recognition was highly controversial when used at the Super Bowl in early 2001. But the idea that video surveillance and face recognition systems might have alerted authorities to the terrorists that boarded airliners on September 11 has advanced public acceptance of the technology and motivated a number of airports to evaluate such systems. One idealized counter-terrorism scenario was outlined as follows in a Business Week article that appeared shortly after September 11 ([3, p. 39]):

"Khalid Al-Midhar came to the attention of federal law enforcement about a year ago. As the Saudi Arabian strolled into a meeting with some of Osama bin Laden's lieutenants at a hotel in Kuala Lumpur

in December 1999, he was videotaped by a Malaysian surveillance team. The tape was turned over to U.S. intelligence officials and, after several months, Al-Midhar's name was put on the Immigration and Naturalization Service's "watch list" of potential terrorists. … The videotape of Al-Midhar also could have been helpful. Using biometric profiling, it would have been possible to make a precise digital map of his face. This data could have been hooked up to airport surveillance cameras. When the cameras captured Al-Midhar, an alarm would have sounded, allowing cops to take him into custody." (Fig. 1.)

This "dream scenario" outlined in the Business Week article has great public appeal. The average citizen is familiar with the mug-shot style images of the terrorists that have appeared in the media. The average citizen is also aware at some level that video surveillance is an everyday presence in activities such as banking, shopping, purchasing gas, driving through major intersections, and entering public and private buildings. There is great inherent appeal in the idea that future terrorist attacks could be prevented by high-tech video surveillance. A poll taken just after the 9-11 attacks asked for responses regarding "some increased powers of investigation that law enforcement agencies might use when dealing with people suspected of terrorist activity, which would also affect our civil liberties" [4]. This poll found that 86% of those responding were in favor of "use of facial recognition technology to scan for suspected terrorists at various locations and public events." Also, 63% were in favor of "expanded camera surveillance on streets and in public places." A follow-up poll taken six months later found that support for use of face recognition technology fell by only a small amount, to 81% [5]. Support for other forms of increased surveillance generally dropped by larger amounts (Fig. 2).

If face recognition technology worked well enough, then it could become a valuable tool in fighting terrorism. However, it would also be a tool with great potential for enabling government invasion of privacy. Our society is currently in the midst of making important decisions about investment in, and deployment of, various forms of biometric technology. The debate is already engaged, with organizations such as the American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) on one side, and government, law enforcement, and some corporations on the other side.

It is important that all citizens are able to think critically about the technical, social, and ethical issues that are involved in the decisions that confront our society. With this article, we hope to help the reader: 1) gain a realistic understanding of the current state of the art in face recognition technology, 2) understand fundamental technical tradeoffs inherent in such technology, 3) become familiar with some basic vocabulary used in discussing the performance of recognition systems, 4) be able to analyze the appropriateness of suggested analogies to the deployment of face recognition systems, 5) be able to assess the potential for misuse or abuse of such technology, and 6) identify issues to be dealt with in responsible deployment of such technology.

How Does a "Face Recognition System" Work?

The term biometrics refers to methods of identifying people based on some aspect of their biology. Fingerprints are a familiar example. However, fingerprints generally require the active participation of the person to be recognized. A technique such as face recognition could, at least in principle, be used to recognize people "passively," without their knowledge or cooperation. While there are numerous possible biometric approaches (ear image [25], gait analysis [26], ...), this paper focuses on face recognition technology of the type currently being considered for use in public spaces such as airports and tourist districts (Fig. 3).

The basic operation of a face recognition system is fairly simple. First, there is a camera that views some space — for example, the boarding area in an airport. Instead of a person continuously monitoring the video, the goal is to have a computer monitor the video and alert a human operator if an "interesting person" is in view. This presumes that there is a list of known "interesting persons." This is the watch list, or gallery. Enrolling a person in the gallery requires that you have a picture of the person.

The first step in the computer processing of the video is to detect when a person's face comes into view. The development of algorithms for finding a face in an image has been an active research area in its own right [6]. Once a face is detected, the face image is cropped from the video to be used as a probe into the gallery to check for possible matches. The face image is preprocessed to account for factors such as image size and illumination, and to detect particular features of the face. The data from the probe image is then matched against the entries in the gallery.
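The detection and preprocessing steps can be sketched with off-the-shelf tools. The fragment below is a minimal illustration, assuming OpenCV and its bundled Haar-cascade frontal-face model rather than the proprietary detectors of the commercial systems discussed in this article; the camera index and the 64×64 probe size are arbitrary placeholder choices.

```python
import cv2

# Pretrained Haar-cascade face detector that ships with OpenCV; this stands
# in for the proprietary detectors used in commercial surveillance systems.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

video = cv2.VideoCapture(0)  # camera index 0: placeholder for a surveillance feed
while True:
    ok, frame = video.read()
    if not ok:  # end of stream, or camera error
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) bounding box around a face.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        probe = gray[y:y + h, x:x + w]       # crop the face from the frame
        probe = cv2.resize(probe, (64, 64))  # normalize image size
        probe = cv2.equalizeHist(probe)      # crude illumination correction
        # ...the probe is now ready to be matched against the gallery...
video.release()
```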

There has been an enormous amount of research on data structures and algorithms for use in matching face images [7], [27]. In general, the matching algorithm will produce a similarity measure for the match of the probe image to each of the gallery images. A threshold can be set so that a match is reported to the operator only when the similarity measure between the probe and a gallery image exceeds the threshold. If more than one gallery image generates an above-threshold match, those images can be ranked according to the similarity measure. This threshold value can be raised or lowered to adjust the sensitivity of the system.
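As a concrete sketch of this matching step, the fragment below ranks gallery entries by similarity to a probe and reports only the above-threshold hits. The cosine similarity measure and the feature-vector representation are illustrative assumptions; commercial systems use proprietary features and matchers.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """An illustrative similarity measure between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_probe(probe: np.ndarray, gallery: dict, threshold: float):
    """Rank gallery entries by similarity to the probe and return only the
    above-threshold matches, best first. `gallery` maps name -> feature vector."""
    hits = [(name, cosine_similarity(probe, feat)) for name, feat in gallery.items()]
    hits = [(name, s) for name, s in hits if s > threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Raising the threshold produces fewer alarms (fewer false positives but more
# false negatives); lowering it does the opposite.
```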

A reported match, or "alarm," typically alerts the system operator and displays the matched probe and gallery images. Incorrect matches may be dismissed by the operator based on differences in age, height, weight, gender, or other factors differing between the persons in the probe and gallery images. If the reported match is plausible, then the operator must decide what steps to take to confirm the identity of the person in the probe image. Thus the use of a face recognition system is generally conceived as a method of allocating scarce human resources to verifying the identity of the people that appear most similar to someone on the watch list.

Fig. 3. Conceptual organization of a face recognition surveillance system: video feed → face detection (isolate individual face in image) and preprocessing → "probe" image → matching algorithm compares the probe against the watch list, or "gallery" (images of persons to watch for) → gallery images of closest match(es), if any, ranked from rank 1 to rank N → human observer judges validity of the match and the need to verify the identity of the person in the probe image. Important points are that the system can only recognize someone enrolled in the watch list, and that the candidate matches are screened by a human operator.

There are several important elements of this scenario that should be clearly understood. First, the system can only recognize persons whose images have been enrolled in the gallery. If a terrorist is known by name and reputation, but no picture of the terrorist is available, then the face recognition system is useless. Another point is that the system must be able to acquire face images of reasonable quality to use as probes. If lighting conditions are poor, viewing angle is extreme, or people take measures to alter their appearance from that in the gallery image, then the performance of the face recognition system will generally suffer. A third point is that the system has a sensitivity threshold that must be set appropriately. Setting the threshold too low will result in too many false positives – an innocent person is subjected to some amount of scrutiny based on resemblance to a person on the watch list. Setting the threshold too high will result in too many false negatives – a terrorist is not recognized due to differences in appearance between the gallery and probe images of the terrorist. The diagram in Fig. 4 explains these possible outcomes of a recognition system's decision.

Fig. 4. Possible outcomes of face recognition for each person considered: true negative – innocent citizen not disturbed; false positive – innocent citizen has identity checked; false negative – terrorist goes undetected; true positive – terrorist apprehended.

While the system threshold can be adjusted to be either more or less sensitive, it cannot be set to simultaneously give both fewer false positives and fewer false negatives. For any given current state of the technology, the system operator is inevitably faced with the choice of trading occurrences of one type of error against occurrences of the other type of error. Being able to achieve both fewer false positives and fewer false negatives requires new and better technology. Thus, if face recognition appears to be a potential solution in a given application, but the available tradeoff between true positives and false positives is not acceptable, then the answer is to develop improved technology.
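This tradeoff can be made concrete with a small numeric experiment. The sketch below uses made-up similarity-score distributions (genuine pairs scoring higher on average than impostor pairs, with overlap) and counts both error types at two threshold settings; no single setting reduces both.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical score distributions: impostor pairs (an innocent citizen vs.
# the watch list) score low on average; genuine pairs (a watch-list member
# vs. their own gallery image) score high. The overlap forces the tradeoff.
impostor_scores = rng.normal(0.30, 0.10, 10_000)
genuine_scores = rng.normal(0.60, 0.10, 100)

for threshold in (0.40, 0.55):
    false_positives = int((impostor_scores > threshold).sum())  # innocents flagged
    false_negatives = int((genuine_scores <= threshold).sum())  # members missed
    print(f"threshold {threshold:.2f}: "
          f"{false_positives} false positives, {false_negatives} false negatives")

# The higher threshold gives far fewer false positives but more false
# negatives; the lower threshold does the reverse.
```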

Fig. 1. Face images released by the FBI for two of the 9-11 terrorists: Khalid Al-Midhar, terrorist on American Airlines Flight #77, which crashed into the Pentagon building in Washington, DC; and Mohamed Atta, terrorist on American Airlines Flight #11, which crashed into the North Tower of the World Trade Center in New York. (Images are available at www.fbi.gov/pressrel/penttbom/penttbomb.htm.)

Fig. 2. Americans are familiar with widespread video surveillance. Surveillance cameras are prominent at banks, stores, gas stations, toll booths, road intersections, many public and private buildings, and other locations.

One way of summarizing the potential performance of this type of recognition system is a Cumulative Match Characteristic curve, or CMC curve. The CMC curve illustrates, in a certain way, the tradeoff of true positive versus false positive results. The Y axis is the true positive rate. In our current discussion, this is the fraction of the time that, when someone on the watch list appears in the surveillance image, the system signals an alarm to the operator. The X axis is the cumulative rank, which we can think of as the maximum number of images that the system is allowed to report when giving an alarm for a given probe. If the system is allowed to report a larger number of possible matches, the true positive rate generally increases. However, the number of people that are incorrectly suggested for consideration, the false positive rate, also generally increases. And, of course, the workload on the human operator increases.

The CMC curve concept becomes important when evaluating and comparing the performance of face recognition systems. Improved technology would result in a better CMC curve, one that would run more toward the upper left corner of the plot as it is drawn in Fig. 5. In general, it makes no sense to compare the performance of systems based only on a true positive rate or false positive rate, since either number can be improved at the expense of making the other worse. To be meaningful, a comparison of two systems would need to take into account the CMC curve for each system, and the CMC curves would need to be obtained from the same experimental data.

Fig. 5. Conceptual depiction of a cumulative match characteristic curve. The Y axis is the true positive rate, the fraction of actual terrorists recognized as such; the X axis is the cumulative rank, the rank of gallery image counted as a match, running from 1 to the gallery size G. Improved technology shifts the curve toward the upper left.
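To make the CMC computation concrete, the sketch below derives a CMC curve from a matrix of probe-versus-gallery similarity scores. The scores and labels are toy values invented for illustration; real evaluations such as the Face Recognition Vendor Test [22] use large image sets and standardized protocols.

```python
import numpy as np

def cmc_curve(similarity: np.ndarray, true_match: np.ndarray) -> np.ndarray:
    """Compute a Cumulative Match Characteristic curve.

    similarity[i, j] is the score of probe i against gallery entry j, and
    true_match[i] is the gallery entry that really matches probe i. Returns,
    for each rank r = 1..G, the fraction of probes whose true match appears
    among the top-r ranked gallery entries."""
    n_probes, n_gallery = similarity.shape
    order = np.argsort(-similarity, axis=1)  # gallery indices, best score first
    ranks = np.array([int(np.where(order[i] == true_match[i])[0][0]) + 1
                      for i in range(n_probes)])
    return np.array([(ranks <= r).mean() for r in range(1, n_gallery + 1)])

# Toy example: 3 probes against a 4-person gallery.
sim = np.array([[0.9, 0.2, 0.1, 0.3],
                [0.4, 0.5, 0.3, 0.2],
                [0.1, 0.2, 0.3, 0.8]])
print(cmc_curve(sim, np.array([0, 2, 3])))  # -> [0.667 0.667 1. 1.]
```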

Our discussion here primarily considers a recognition scenario as opposed to a verification scenario. That is, we assume that face recognition is being used to recognize the identity of an unknown person as matching someone on the watch list. In a verification scenario, face recognition is used to confirm that a given person is who they claim to be. The verification scenario is in a sense an easier problem, since the task is "only" to verify a specific claimed identity, rather than to match against the entire gallery. The verification scenario is relevant to applications such as controlling access to a secure building. A person might approach the entry point and claim to be Jane Doe, someone who is on the list of persons allowed to enter. The face recognition system would then acquire the person's image, compare it to a known image of Jane Doe, and allow entry if the similarity is close enough. In this type of scenario, system performance would be described by a Receiver Operating Characteristic curve, or ROC curve. The ROC curve is effectively a special case of the CMC curve [30].
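The contrast between the two scenarios shows up directly in code: verification is a single comparison against one enrolled image, while recognition (the `match_probe` sketch above) searches the entire gallery. A minimal sketch, reusing the hypothetical `cosine_similarity` measure defined earlier:

```python
def verify(probe, claimed_identity, enrolled, threshold):
    """1:1 verification: does the probe match the one enrolled image for the
    claimed identity? `enrolled` maps identity -> feature vector."""
    if claimed_identity not in enrolled:
        return False
    return cosine_similarity(probe, enrolled[claimed_identity]) > threshold

# Sweeping the threshold and plotting the true-accept rate against the
# false-accept rate traces out the verifier's ROC curve.
```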

How Well Does Face Recognition Technology Work?

There are at least two senses in which one can discuss whether face recognition technology "works" – a system-technical sense, and an application-behavioral sense. The system-technical sense is concerned with the CMC curve and how many true and false positives the system produces in some test environment over some time period. The application-behavioral sense is concerned with how behavior is changed by the fact that the system is put into application. For example, say that a face recognition system is installed in an airport and it is known that it has only a 50% chance of correctly recognizing someone on the watch list when they appear in the airport. One might be tempted to think of this technical performance as a failure. However, a 50% chance of being identified might be sufficient to cause a terrorist to avoid the airport. If so, then one might be tempted to think of the system as a success in an application sense.

One often-repeated claim related to the effectiveness of video surveillance systems with face recognition capability is that the introduction of such a system in London caused a major decrease in the crime rate [8]. News articles reported a 20% to 40% drop in crime, with the variation in the reported figures apparently arising from reporting statistics for different categories of crime, and/or adjusting for the increase in crime rate elsewhere in London over the same time period. Those who argue against the use of face recognition technology raise various objections to the reports of this experience.

One objection is that there is some inherent variability in reported crime statistics, and so perhaps the reported decrease was only a random occurrence. Another objection is that the face recognition system was apparently not directly responsible for any arrests of criminals during this time. A related objection is that if the crime rate was reduced, perhaps the crimes were simply displaced to neighboring areas that were not using face recognition technology.

It seems plausible that this could be at least partially true. However, citizens in the area where the technology is deployed may still feel that it "works" for them, and this only raises the question of what would be the effect if the technology were more widely deployed.

One practical evaluation of face recognition technology was carried out at the Palm Beach International Airport. A face recognition system evaluated there captured about 10 000 face images per day, of about 5000 persons, during four weeks of testing. Of 958 images captured of volunteer subjects in the gallery, 455 were successfully matched, for a recognition rate of approximately forty-seven percent. The report obtained from the Palm Beach County Department of Airports states that "the false alarm rate was approximately 0.4% of total face captures, or about 2-3 false alarms per hour" [10]; see Appendix A. A news article related to ACLU publicity about the test stated this same information a little differently: "more than 1000 false alarms over the four weeks of testing" [9]. Problems were noted due to subjects wearing eyeglasses, lighting glare, and getting images that were not direct frontal views of the face.

An ACLU representative was quoted as saying that the results of the Palm Beach airport evaluation showed that "face recognition is a disaster" [9]. However, some observers may not agree with this assessment. A fifty percent chance of detection on a single face image may be sufficient to ward off some terrorists or criminals. Also, to the degree that the failures are due to pseudo-random effects such as lighting glare and viewing direction, acquiring images at multiple independent checkpoints could increase the chances of detecting a subject on a given trip through the airport. In other words, the chances of evading detection at both of two different checkpoints might be as low as one in four, at three different checkpoints as low as one in eight, and so on. Also, researchers and companies in face recognition technology argue that the technology is still in very early stages of development. The systems are improving rapidly [22], and tests like the one at Palm Beach International Airport provide feedback to guide further research and development. A good sense of the current state of the art for commercial face recognition technology can be found in the 2002 Face Recognition Vendor Test [22].
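The checkpoint arithmetic in the preceding paragraph follows from treating each checkpoint as an independent trial: with per-checkpoint detection probability p, the chance of evading n checkpoints is (1 - p)^n. A one-line check of the figures quoted above, at p = 0.5:

```python
p_detect = 0.5  # per-image detection rate reported in the Palm Beach test
for n in (1, 2, 3):
    print(f"{n} checkpoint(s): evasion probability = {(1 - p_detect) ** n}")
# 1 -> 0.5, 2 -> 0.25 (one in four), 3 -> 0.125 (one in eight)
```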

In the end, those on either side of the debate can find interpretations of the performance data to support their argument. Face recognition technology does not yet work as well as desired in a technical sense [10], but the technology is improving [22], and reports such as those from London indicate that it does potentially produce some useful effect in application.

Another way of evaluating how well face recognition technology works is to look at how it has been adopted by private commercial users. One supplier of face recognition technology claims to have installed fifty systems in casinos, where the technology "is used by surveillance operators to identify cheaters and other casino undesirables, as well as casino VIPs" [29]. Casino owners are presumably evaluating the technology on a cost-effectiveness basis, and judging that it will allow them to increase their overall profit.

However, the context for deciding to deploy face recognition technology in such an application may involve important considerations that are not present in public anti-terrorism applications. For the purposes of a casino, the ability to automatically recognize "VIPs" – people who can be motivated to gamble big and lose big and may enjoy being recognized – may be at least as important as the ability to recognize "undesirables." The counter-terrorism scenario contains nothing analogous to the casino VIP. Thus the fact that face recognition technology is deployed in one application area does not automatically mean that it is appropriate for other application areas.

Viewpoints and Analogies

A number of media people, public officials, and special interest groups have gone on record with various opinions and arguments regarding the deployment of face recognition technology. A critical-thinking analysis of some examples can help to sharpen one's ability to discern the essence of the issues.

An article in Time magazine just after face recognition was used at Super Bowl XXXV provides an example of unrestrained belief in the technology [13]: "The beauty of the system is that it is disguise-proof. You can grow a beard and put on sunglasses, and FaceTrac will still pick you out of a crowd."

As any careful observer might suspect, the idea that any face recognition technology is "disguise-proof" is wildly optimistic! While some technologies may handle particular types of disguises better than others, no known current face recognition technology is "disguise-proof" in any general sense of the term. The results of the Face Recognition Vendor Test clearly show that there is much research and development still to be done in the area of face recognition [22]. It is also worth noting that researchers are beginning to look at ways of fooling face recognition systems [28].

Again in the context of face recognition used at Super Bowl XXXV, congressman Ed Markey (D., Mass.) was quoted as saying [13]: "It's chilling, the notion that 100 000 people were subject to video surveillance and had their identities checked by the government." This quote illustrates a subtle misunderstanding of the technology. If we hear that police "checked someone's identity," then we generally understand that police required the person to prove their identity, perhaps by examining the person's driver's license or taking their fingerprints. In using face recognition technology, the only people who can be recognized are those in the watch list, and the only people who might have to have their identities confirmed are those who look like someone on the watch list.

What really happened at the Super Bowl was that the system suggested that 99 980 of the attendees had no close resemblance to anyone on the watch list and that about 20 of the people were a possible match to someone on the list. This is not exactly the same as everyone having their identities checked. The system did not in any sense create a list of the identities of the 99 980 people.

A rather more obvious example of confusion about the technology occurred in a New York Times article following the Super Bowl. The article reported that [14]:

"A woman in Texas who saw the image claimed the man in the picture was wanted for crimes. She called the Tampa police, who questioned the man, a construction worker. It was the wrong person ... The system … is not 100 percent accurate."

The problem here is that the reported incident is a case of mistaken identity by a human, and not by the face recognition system! It is the viewer in Texas who suggested an incorrect identity for the person in the picture.

Another example of misunderstanding the technology occurs in this quote from a New York Times article [31]: "a computer glitch could match the face of an innocent person with the digital image of a criminal." The problem here is that the occurrence of such a false positive is not a "glitch," but instead it is an inescapable element of using the technology. To say that it is a "glitch" implies that it is something that can be eliminated in a debugged version of the system.

The possibility of false positives cannot ever be completely engineered out of the system. In fact, as already mentioned, for any given state of the art of the technology, in order to be more certain of catching the terrorists, a higher rate of false positives has to be accepted. The public should never be encouraged to think that perfect performance – all terrorists caught and no innocent person inconvenienced – is something actually achievable.

A variety of analogies have been used in attempts to characterize the application of face recognition technology. The "police lineup" analogy was used by the ACLU in their protest over the use of face recognition at the Super Bowl [11]: "We do not believe that the public understands or accepts that they will be subjected to a computerized police lineup as a condition of admission." The term "police lineup" brings to mind a scenario of being called to the police station to be viewed along with a handful of other people for possible identification by a witness to a crime. There is a suggestion of a certain level of inconvenience involved, and also a bit of an implied accusation. It is doubtful that people who have walked through an area under video surveillance feel that they have participated in a police lineup.

A different analogy for the application of face recognition systems appears in the following quote [15]: "Police are enthusiastic about the system, saying it is no different from an officer standing on a street corner with a criminal's photograph in hand and checking out a passing crowd. They say it can reduce bias based on race or dress." The idea that a policeman might carry along a photo of a wanted person and scan crowds to try to find the person seems well accepted. And this is essentially the task that is being automated by the use of face recognition systems. Thus this seems like the most appropriate of the analogies mentioned here. However, there is the important difference that the use of face recognition technology makes it feasible to do this type of scanning on a scale that is impossible in practical terms using human observers.

Does Face Recognition Surveillance in Public Spaces Invade Privacy?

The most fundamental argument against government use of face recognition technology in public spaces is that it is a violation of the constitutional right to privacy. This core objection was advanced by the ACLU in the context of the use of face recognition at the Super Bowl [11]: "... this activity raises concerns about the Fourth Amendment right of all citizens to be free of unreasonable searches and seizures." However, essentially all legal commentators agree that use of face recognition systems in public spaces cannot be considered a "search" for constitutional purposes. Woodard's analysis of the issue seems careful and representative [12]:

"Under current law, however, the type of facial recognition used at the Super Bowl would almost certainly be constitutional. The Supreme Court has explained that government action constitutes a search when it invades a person's reasonable expectation of privacy. But the court has also found that a person does not have a reasonable expectation of privacy with regard to physical characteristics that are constantly exposed to the public, such as one's facial features, voice, and handwriting. So although the Fourth Amendment requires that a search conducted by government actors be 'reasonable,' which generally means that there must be some degree of suspicion that the person to be searched is engaged in wrongdoing, the scan of spectators' facial characteristics at the Super Bowl did not constitute a search."

However, this interpretation of the right to privacy was formulated before it was technically conceivable that a system could automatically match the face of every person entering a public space against a gallery of images of people wanted by authorities. Thus, some observers may argue that the scale of operations made possible by computerized face recognition technology should result in a change to the Supreme Court's traditional interpretation of the right to privacy.

Another concern is simply that citizens should be notified when they enter a public space where video surveillance is being used. The idea is apparently that people could then make an informed choice of whether or not to subject themselves to surveillance. Of course, if all airports install face recognition systems then there may be little practical "choice" left for some travelers. However, given the level of screening already in place for passengers boarding an airplane, posing for a picture for a face recognition system would seem to be a rather minimal added inconvenience.

There is also a policy question regarding the decision to add a person's image to the watch list. It seems clear that if there is an arrest warrant for a person, then the person's image could be put in the watch list. But what if a person is only wanted for questioning? Or what if authorities simply want to keep track of where a particular person goes and who they meet with? Should approval from a judge be required in order to place a person's image in the watch list, perhaps in a process similar to the requirement for a telephone wiretap? Should a permanent record be kept of when a person's image was entered into the watch list and at whose request? Clearly some level of official procedure is required in order to guard against abuse. If no approval is required for an individual law enforcement officer to enter images into the watch list, it is easy to imagine scenarios for abuse. A suspicious husband or wife might be tempted to want to keep tabs on his or her spouse. A person running for office might want to know whom an opponent is visiting. Such abuses of power are of course not new with face recognition technology, and so existing rules for other types of surveillance technology may suggest analogies. For example, there are well-developed rules for authorizing, using, and monitoring wiretaps [21], and this may provide a good starting point for developing rules for face recognition technology.

Another potential concern is the wholesale archiving of images for possible later use. One company advertises that the only probe images that are stored by the system are those that register a potential match to someone in the gallery. However, there is no real technical limitation to archiving all face images captured by the system, only the cost consideration for the volume of storage required. The archive of images could then be "mined" in the future to find out if a certain person had been at the checkpoint in the past.

Some observers have noted that "function creep" inevitably occurs after the installation of new technology. For instance, individuals can have an identifying card attached to their car that allows them to pay automatically at toll booths. The original purpose is simply to provide greater convenience to individuals and to generally speed the traffic at the toll booth. But if the record of toll payments made by an individual would later prove useful in another context, for example in a court case to prove or disprove who had been where at what time, then courts would typically order that the information be provided. If all images acquired by a face recognition system are archived, then the temptation toward function creep becomes strong. Again, this is not a new problem, but one that has been noted with earlier technologies and can be expected to arise in similar ways with new technologies.

For all of these reasons and others, the ACLU is one of the most vocal critics of face recognition technology. One press release emphasizes the threat to privacy, as illustrated in quotes such as: "There is an alarming potential for misuse of all of these systems" and "We are extremely troubled by this unprecedented expansion in high-tech surveillance in the United States" [23]. Another press release emphasizes the limitations of the technology, as illustrated in: "it is abundantly clear that the security benefits of such an approach would be minimal to non-existent, for a simple reason: the technology doesn't work" and "Anyone who claims that facial recognition technology is an effective law enforcement tool is probably working for one of the companies trying to sell it to the government" [23].

Of course, there is a competing logic to these viewpoints. If the technology does not work, it can't be a real threat to privacy. And if it is a real threat to privacy, then it would seem to have at least some potential in law enforcement. It seems that it cannot be simultaneously both a technology that does not work and one that presents a serious threat to privacy.

Conceivably, it could be a technology that does not yet work well enough to be of use in fighting terrorism, but that eventually will develop sufficiently to become useful in fighting terrorism and so then also a potential threat to privacy. But this only returns to the question of whether and how to trade off privacy versus security, and does not provide any easy answer to the question.

An issue that generally increases the intensity of all other concerns is that of networked recognition systems and databases. As initially envisioned, a face recognition system provides a way of detecting when a particular person enters a particular surveillance area. Every major airport and every major public building might have its own independent surveillance system. But what if all of these systems were networked together? For example, a person might be on the watch list for crossing the Canadian border into the United States. The person could be detected on crossing the border, and the person's image forwarded to the watch list for the expressway toll booths. The person could then be detected again as they take the exit to the airport, and their image forwarded to the watch list of the airport system, and so on. Each of these events can also be reported back to a central location. The result is a continuous monitoring of the person's whereabouts. While we might think this is good for a non-citizen who is on a list of suspected terrorists, we would see it as an invasion of privacy if done for a person who has simply been a vocal critic of current government policies. And if done on a large scale without due cause, it begins to approach the level of Orwell's "Big Brother."

Of course, the potential for monitoring a person's location and actions through a combination of existing technologies such as credit card records, cell phones, and global positioning systems in automobiles may already exceed the potential invasion of privacy that could come from misuse of face recognition technology. It may be that, to some extent, face recognition technology is evaluated more emotionally simply because it uses images of a person's face.

Policies for Industry and Government

Presently, there are essentially no legal guidelines for where and how face recognition technology can be used in public spaces. Local governments are making decisions on the deployment of face recognition technology on a case-by-case basis, sometimes without a clear understanding of what they are approving [16]. There are important questions to be answered in terms of the policies that are going to control how government makes use of this technology. How should the installation of a face recognition system for a particular public space be proposed, approved, and monitored? What technical performance requirements should there be for such systems? What sort of notice needs to be given about the use of the technology to persons entering the public space? What rules control whose image can be put in the gallery and how long it can remain there? What images acquired by the system can be kept and for how long? At least one company involved in face recognition technology has made initial proposals with regard to some of these issues. The Visionics web pages list a set of proposed "Privacy Protection Principles" (see Appendix B). However, there are many details still to be worked out before any comprehensive answers emerge.

Guidance from Codes of Ethics for the Computing Professions

Each of the codes of ethics relevant to the computing-oriented professions [21] indicates a general concern for privacy. However, none of the codes can be looked to for direct and consistent guidance on the question of whether face recognition systems should be deployed in public spaces. For example, the standards of conduct of the Association of Information Technology Professionals (AITP) contain several statements potentially relevant to the debate over deployment of face recognition surveillance systems:

"In recognition of my obligation to society I shall ...

Protect the privacy and confidentiality of all information entrusted to me. ...

Use my skill and knowledge to inform the public in all areas of my expertise. ...

To the best of my ability, insure that the products of my work are used in a socially responsible way."

The statement about protecting the privacy of information entrusted to the computing professional can be understood to suggest that the developers of a surveillance system have a responsibility to build in protections for the security and privacy of the information in the system. The statement about informing the public can be understood to suggest that computing professionals should be active in the public debate concerning surveillance systems and privacy. Computing professionals should particularly make inputs to the debate based on their special expertise, striving to give a clear understanding of the potential and the limitations of the technology. The statement about insuring that technology is used in a socially responsible way can be understood to suggest that the system should be constructed so as to minimize the possibility of misuse.

The code of ethics for the Association for Computing Machinery (ACM) has, in its list of "general moral imperatives," some similar general statements about obligation to society and respect for privacy:

"As an ACM member I will ...
1.1 – Contribute to society and human well-being.
1.7 – Respect the privacy of others."

Also, as part of its "organizational leadership imperatives," the ACM code of ethics states:

"As an ACM member and an organizational leader, I will ...
3.4 – Ensure that users and those who will be affected by a computing system have their needs clearly articulated during the assessment and design of requirements. Later the system must be validated to meet requirements."

These statements in the ACM code can be understood to support concerns similar to those in the AITP standards of conduct. The point that is a bit more explicit here is that the computing professional should, during the design and validation of the system, consider the needs of those who will be affected by the system.

The Software Engineering code of ethics contains the following statements:

"Software engineers shall act consistently with the public interest. In particular, software engineers shall, as appropriate: …

1.3 Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.

1.4 Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents.

1.5 Cooperate in efforts to address matters of grave public concern caused by software, its installation, maintenance, support or documentation."

These statements in the software engineering code of ethics reflect concerns similar to those in the AITP and ACM codes. Item 1.3 suggests that software engineers involved in the development of surveillance systems should approve the system only if they believe that it does not diminish privacy and that its ultimate effect is to the public good. But one likely scenario in this case is that a software engineer might believe both that the system does diminish privacy and yet also does contribute to the public good by effecting some increase in security. The code of ethics does not say how to balance these competing goals.

In the end, the codes of ethics indicate that 1) the computing professional should have a concern for privacy, 2) the computing professional should give informed input to public debate, and 3) the decision should ultimately be that which is judged to best contribute to the well-being of society. The codes provide a framework to guide decision-making, but do not directly suggest whether or not the proposed application is ethical. We are left with the value judgment of whether the cost in diminished privacy is offset by an increase in security.

Several of the codes of ethics also contain elements that warn against unrealistic statements about the potential of a technology. For example, the Software Engineering Code of Ethics contains the admonition, "Be fair and avoid deception in all statements, particularly public ones, concerning software." Similarly the International Biometric Industry Association advocates this principle [32]: "Accountability in Marketing: Because truth is the key to industry credibility, members attest that their stated claims are accurate and can be independently verified by a competent authority." The CEO of one company in the biometrics industry gave the following quote that some would consider over-optimistic and others might consider as appropriately qualified:


"We know that three out of the nineteen were already known on watch lists and knowing how many checkpoints these people had to go through, we had a high probability to alert, intercept, these individuals maybe August 21st or 23rd when they crossed the Canadian border and we would have perhaps foiled the whole plot" [20].

Despite the presence of qualifiers such as "high probability" and "perhaps," it is this author's opinion that much of the general public would form unrealistic expectations of the technology based on hearing this type of quote.

Trading Liberty for Security

The full depth and meaning of Benjamin Franklin's warning about trading liberty for security is not always appreciated. He posed the tradeoff as one of giving up "essential liberty" in order to obtain a "little temporary safety." Thus we can expect that much of the disagreement in this area comes down to whether a particular liberty is judged as essential or inessential, and whether the increase in safety is judged to be little or much, and temporary or permanent. Perhaps this can be appreciated by considering a sort of complement to Franklin's statement:

"They that insist on keeping an inessential liberty at the cost of a large and permanent threat to safety ...."

The introduction of video surveillance and face recognition systems into public spaces presents our society with important decisions. We must decide: 1) when or whether a sophisticated high-tech application works well enough to be worth deploying, 2) which elements of privacy are essential and which are inessential, and 3) what level of increased safety can come through the introduction of this technology. Especially for those in computing-related professions, who may be tempted to focus only on the technology, it is important that the potential of this technology both in terms of increased security and in terms of potential abuse be properly understood.

Additional Information

For additional information, there are numerous information resources on the web related to this topic. The Biometrics Consortium provides access to the industry viewpoint (www.biometrics.org). The Electronic Frontier Foundation (www.eff.org) and the Electronic Privacy Information Center (EPIC) (www.epics.org/privacy/facerecognition/) provide information from the privacy-advocate viewpoint. Also, an article by Philip Agre attempts to counter essentially all known arguments in favor of face recognition technology (dlis.gseis.ucla.edu/~pagre/bar-code.html). Much of the academic research concerned with advancing face recognition technology appears in journals such as the IEEE Transactions on Pattern Analysis and Machine Intelligence and conferences such as Face and Gesture Recognition and the International Conference on Audio- and Video-based Biometric Person Authentication.

Teachers who wish to cover this topic, or other topics related to social impact of computing, in their classes may find it useful to browse the site www.cse.nd.edu/~kwb/nsf-ufe. This site contains a variety of educational materials related to teaching ethics and computing. It also contains a PowerPoint file that can be used to lead a class discussion along the lines of the material presented in this paper.

Acknowledgments

The author would like to thank Cherie Ann Sherman of Ramapo College of New Jersey, Sudeep Sarkar of the University of South Florida, and Jonathon Phillips of the National Institute of Standards and Technology for reading and commenting on earlier drafts of this paper. The author would also like to thank Patrick Flynn of the University of Notre Dame for numerous helpful discussions in this area.

The author receives funding from the Defense Advanced Research Projects Agency and the National Science Foundation for research in the development and performance evaluation of advanced biometrics. The author has also received funding for the development of curriculum materials to support teaching ethics and computing (DUE 9752792). The author has no financial interest in any company in the biometrics industry. The opinions expressed in this paper are those of the author and do not necessarily reflect those of affiliated researchers, the University of Notre Dame, or any funding agency.

Author Information

The author is Schubmehl-Prein Department Chair, Department of Computer Science & Engineering, University of Notre Dame, South Bend, IN. Email: [email protected].

References
[1] E. Gunderson, "Neil Young is passionate about 9/11 anthem," USA Today, Apr. 11, 2002.
[2] G. Orwell, 1984. www.online-literature.com/orwell/1984/
[3] "Special report: Privacy in an age of terror," Business Week, Nov. 5, 2001, pp. 39-46.
[4] T. Humphrey, "Overwhelming public support for increasing surveillance powers …," Oct. 3, 2001, www.harrisinteractive.com/harris_poll/index.asp?PID=260
[5] T. Humphrey, "Overwhelming public support for increasing surveillance powers …," Apr. 3, 2002, www.harrisinteractive.com/harris_poll/index.asp?PID=293
[6] M.-H. Yang, D. Kriegman, and N. Ahuja, "Detecting faces in images: A survey," IEEE Trans. Pattern Analysis & Machine Intelligence, vol. 24, no. 1, pp. 34-58, Jan. 2002.
[7] R. Chellappa, C.L. Wilson, and S. Sirohey, "Human and machine recognition of faces – A survey," Proc. IEEE, vol. 83, no. 5, pp. 705-740, May 1995.
[8] B. Lack, "Development of facial recognition technologies in CCTV systems," The Source: Public Management J., Oct. 25, 1999, www.sourceuk.net/articles/a00624.html
[9] S. Bonisteel, "Face recognition technology fails again, ACLU claims," May 16, 2002, washingtonpost.com, www.newsbytes.com/news/02/176621.html
[10] www.aclu.org/issues/privacy/FaceRec_data.pdf
[11] D. McCullagh, "Call it Super Bowl Face Scan I," Feb. 2, 2001, www.wired.com/news/print/0,1294,41571,00.html
[12] J.D. Woodard, "Super Bowl surveillance: Facing up to biometrics," www.rand.org/publications/IP/IP209/
[13] L. Grossman, "Welcome to the snooper bowl," Time, Feb. 12, 2001.
[14] "Electronic surveillance: From 'Big Brother' fears to safety tool," New York Times, Dec. 6, 2001.
[15] J. Cienski, "Police cameras denounced as threat to privacy," July 12, 2001, National Post Online, www.nationalpost.com
[16] D. Nguyen, "Council: So that's what we okayed," St. Pete Times Online, July 6, 2001, www.sptimes.com/News/070601/TampaBay/Council__So_that_s_wh.shtml
[17] A.M. Burton, S. Wilson, M. Cowan, and V. Bruce, "Face recognition in poor quality video: Evidence from security surveillance," Psychological Sci., vol. 10, no. 3, pp. 243-248, May 1999.
[18] A. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in a Networked Society. Kluwer, 1999.
[19] R. Beveridge, "Evaluation of face recognition algorithms," www.colostate.edu/evalfacerec/
[20] "Morning Edition," National Public Radio, Feb. 25, 2002. Transcript from Burrelle's Information Services, Box 7, Livingston, NJ 07039.
[21] K. Bowyer, Ethics and Computing: Living Responsibly in a Computerized World, rev. ed. New York, NY: IEEE/Wiley, 2001.
[22] P.J. Phillips, P. Grother, R. Michaels, D.M. Blackburn, E. Tabassi, and J. Bone, "Face Recognition Vendor Test 2002: Evaluation Report," www.frvt.org
[23] "Proliferation of surveillance devices threatens privacy," archive.aclu.org/news/2001/n071101a.html
[24] "ACLU opposes use of face recognition software in airports due to ineffectiveness and privacy concerns," archive.aclu.org/issues/privacy/FaceRec_Feature.html
[25] K. Chang, K. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images for appearance-based biometrics," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160-1165, Sept. 2003.
[26] P.J. Phillips, S. Sarkar, I. Robledo, P. Grother, and K.W. Bowyer, "Baseline results for the challenge problem of Human ID using gait analysis," in Proc. Face and Gesture Recognition 2002, Washington, DC, pp. 137-142.
[27] W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, "Face recognition: A literature survey," ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, Dec. 2003.
[28] J. Alexander and J. Smith, "Engineering privacy in public: Confounding face recognition," presented at Workshop on Privacy Enhancing Technologies (PET 2003), Dresden, Germany, Mar. 2003, www.cis.upenn.edu/~jalex/papers/pet2003.pdf
[29] "Viisage Technology and Biometrica Systems achieve 50th face recognition installation at Mirage Resort, Las Vegas," Viisage Corporation press release, Mar. 29, 2000, www.viisage.com/March29_2000.htm
[30] P. Grother, R. Michaels, and P.J. Phillips, "Face Recognition Vendor Test 2002 performance metrics," presented at Int. Conf. on Audio- and Video-based Biometric Person Authentication, Surrey, U.K., June 2003.
[31] "Super Bowl snooping," New York Times, Feb. 4, 2001.
[32] www.ibia.org/principl.htm

Appendix A

Results of face recognition test at the Palm Beach International Airport.

Facial Recognition System Test (Phase I) Summary.

Palm Beach County, Department of Airports conducted a test of the Visionics "Argus" facial recognition system. The purpose of the test was to ascertain the effectiveness of this technology in an airport checkpoint environment.

Utilizing a test group of 15 airport employees and a database of 250 photographs, the system features that were tested included:

• Face capture rate.
• False alarm rate.
• The ability to successfully identify test group against database photographs.

The data collected and compared to the manufacturer's advertised specifications revealed the following:

• Input photographs populating the database need to be of a good quality to avoid false alarms and insure successful matches.

• Motion of test subject head has a significant effect on the system ability to both capture and alarm on test subject.

• There was a substantial loss in matching if test subject had a pose 15 to 30 degrees (up / down, right / left) off of input camera focal point.

• Eyeglasses were problematic; glare from ambient light and tinted lenses diminished the system's effectiveness.

• System required approximately 250 lux of directional lighting to successfully capture faces and alarm on test subjects.

• Face capture rate was approximately 10,000 face captures per day. The actual traffic through the security checkpoint is approximately 5,000 ticketed passengers and airport employees. There were multiple face captures for each event.

• The false alarm rate was approximately 0.4% of total face captures, or about 2-3 false alarms per hour.

• Of the 958 total combined attempts there were 455 successful matches (47% successful rate).

Test conducted at Palm Beach International Airport Concourse C security checkpoint, March 11 through April 15, 2002. This document was obtained from the Palm Beach County Department of Airports by the American Civil Liberties Union of Florida.

Appendix B

"Privacy protection principles" from the Visionics Corporation web page www.faceit.com/newsroom/biometrics/privacy.html (February 2, 2002).

Privacy Protection Principles

In recent weeks, much media attention has been given to a new tool for combating crime, installed for the first time in the United States in Ybor City, the entertainment district of Tampa, Florida. The system uses FaceIt® face recognition technology from Visionics to alert police to the presence of known criminals in crowds in a public place.

Visionics has pioneered facial recognition and continues to be the worldwide leader in advancing the state-of-the-art. The company is most qualified to understand the power of the technology and its ability to impact society. The massive reduction in crime in the Newham borough of London (up to 40%) and elsewhere is a testament to the benefits that society can reap from the use of facial recognition. At the same time, the company is cognizant that powerful technologies require responsible use.

In keeping with its belief that its leadership must extend beyond the technology itself, Visionics has taken an active role in articulating the most appropriate and ethical ways for society to benefit from its technology while ensuring there are no opportunities for abuse.

Thus far Visionics has formulated responsible use guidelines, secured their acceptance by those adopting its technology, been vigilant in ensuring compliance and, where possible, the company has built technical measures to maintain control over the installations.

So far these measures have been effective. But without systematic oversight and enforcement, they may not be enough in the long run. Consequently, Visionics is calling for federal legislation in the United States to help transform these responsible use principles into responsible public policy.

Among the issues that must be addressed when examining the use of facial recognition as a tool for combating crime in public places:

• Public Knowledge: Guidelines that establish the proper communication mechanisms to the public (such as street signage and media alerts) and the circumstances under which exceptions could be made (e.g., matters of national security at airports and borders).

• Database: Face recognition requires a watch-list database. Specific guidelines must be established for database protocols such as: need, justification for inclusion and removal, valid duration of information, dissemination, review, disclosure and sharing. This should include very explicit guidelines that spell out who can be in the "watch-list" database (e.g., fugitive with warrants, criminals under court supervision, etc.). Technical measures should be in place to ensure control over database size and its integrity.

• No Match – No Memory: Guidelines to ensure that no audit trail is kept of faces that do not match a known criminal or person under active police investigation. Non-matches should be purged instantly.

• Authorized Operation and Access: Technical and physical safeguards such as logon, encryption, control logs, and security to ensure that only authorized, trained individuals have access to the system and to the database.

• Enforcement and Penalty: Oversight procedures and penalties for violation of the above principles should be formulated.

In the months to come, the company will work closely with federal legislators, privacy interest and industry groups to share its knowledge, experience and privacy protection principles pertaining to all applications of face recognition technology. The company will continue to promote an improved public understanding of the creation, use and control of facial recognition systems and will support public policy that ensures that deployments of facial recognition are conducted in a way that upholds all privacy rights.

© 2001 Visionics Corp.
