
Face Recognition, 2.1.2013

Presented by: Galit Levin

First Article: Face Recognition in Hyperspectral Images
Zhihong Pan, Student Member, IEEE, Glenn Healey, Senior Member, IEEE, Manish Prasad, and Bruce Tromberg. December 2003.

The Problem
How to perform accurate face recognition in the presence of changes in facial pose and expression, and over time intervals between images.

Current Face Recognition
Use spatial discriminants that are based on geometric facial features [3][4][5][6].

Perform well on databases acquired under controlled conditions.

Exhibit degradation in the presence of changes in face orientation.

Current Face Recognition
Perform poorly when subjects are imaged at different times.

Significant degradation in recognition performance for images of faces that are rotated more than 32 degrees.

Motivation
Accurate face recognition over time in the presence of changes in facial pose and expression.

Algorithm that performs better than current face recognition for rotated faces.

A Little Biology
Epidermal: the outermost layers of cells in the skin. Dermal: the layer between the epidermis and subcutaneous tissues. The epidermal and dermal layers of human skin contain several pigments: melanin, hemoglobin, bilirubin and beta-carotene. Small changes in the distribution of these pigments induce significant changes in the skin's spectral reflectance!

Penetration Depth
Visible wavelengths: 380-740 nm. Near-infrared wavelengths: 750-2500 nm.
In the near infrared (NIR), skin has a larger penetration depth than at visible wavelengths. Example: optical penetration of 3.57 mm at 850 nm versus 0.48 mm at 550 nm.

The larger penetration depth reveals characteristics that are difficult for a person to modify.

Spectral Change in Human Skin
Right cheek of four subjects over the NIR range: differences in both amplitude and shape.

And Now the Same Subject
Different camera angles and poses.

Spectral Measurements
NIR skin and hair reflectance: 2 subjects in a front view. The illumination is the same for both!

Spectral Measurements
NIR skin and hair reflectance: 2 subjects in a 90-degree side view.

Conclusions
Significant spectral variability from one subject to another.

Spectral characteristics from one subject remain stable over a large change in face orientation.

Skin spectra differences are very pronounced.

Hair spectra differences are also noticeable and valuable for recognition.

Experiments
200 human subjects, using hyperspectral face images.
Each subject imaged (NIR) over a range of poses and expressions.
Several subjects imaged multiple times over several weeks.

Recognition is achieved by combining spectral measurements for different tissue types.

Experiments
All images were captured with 31 spectral bands separated by 0.01 µm (10 nm) over the NIR (700 nm - 1000 nm).

2 light sources provide uniform illumination on the subject.

Hyperspectral Bands
31 bands for one subject, shown in ascending order. All bands are used!

Spectral Reflectance Images
Main idea: convert the hyperspectral images to spectral reflectance images.

Spectral Reflectance Images
Two Spectralon panels were used during calibration.

White Spectralon: a panel with 99% reflectance.

Black Spectralon: a panel with 2% reflectance.

Both panels have nearly constant reflectance over the NIR range.

Some Calculations

The raw measurement obtained by hyperspectral imaging at coordinate (x, y) and wavelength λ is modeled as
I(x, y, λ) = L(λ) S(λ) R(x, y, λ) + O(x, y, λ)
where L is the illumination, S is the system spectral response, R is the reflectance of the viewed surface, and O is an offset.

Some Calculations
For an image of the white Spectralon panel,
I_W(x, y, λ) = L(λ) S(λ) R_W(λ) + O(x, y, λ)
and the same holds for the black Spectralon panel, where R_W is the reflectance function of the white Spectralon (and R_B that of the black one). We average 10 images of the white and black Spectralon panels to estimate E(I_W) and E(I_B).

Some Calculations
Now we can estimate L·S:
L(λ) S(λ) ≈ (E(I_W) − E(I_B)) / (R_W(λ) − R_B(λ))
and then estimate O:
O ≈ E(I_B) − L(λ) S(λ) R_B(λ)
Finally we can estimate R of the subject:
R(x, y, λ) ≈ (I(x, y, λ) − O(x, y, λ)) / (L(λ) S(λ))
This estimate does not depend on L, as long as L does not change during the experiment.
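A minimal sketch of this calibration step, assuming the data is held in NumPy arrays of shape (bands, height, width); the function name and array layout are illustrative assumptions, while the 99% and 2% panel reflectances come from the slides above:

    import numpy as np

    def calibrate_reflectance(raw, white_imgs, black_imgs,
                              white_refl=0.99, black_refl=0.02):
        """Convert a raw hyperspectral cube to spectral reflectance.

        raw        : (B, H, W) raw measurement of the subject
        white_imgs : (N, B, H, W) repeated images of the white Spectralon panel
        black_imgs : (N, B, H, W) repeated images of the black Spectralon panel
        """
        # Average the repeated panel images to estimate E(I_W) and E(I_B).
        e_white = white_imgs.mean(axis=0)
        e_black = black_imgs.mean(axis=0)

        # From I = L*S*R + O:  L*S = (E(I_W) - E(I_B)) / (R_W - R_B)
        ls = (e_white - e_black) / (white_refl - black_refl)

        # Offset: O = E(I_B) - L*S*R_B
        offset = e_black - ls * black_refl

        # Reflectance of the subject: R = (I - O) / (L*S)
        return (raw - offset) / ls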

Data Distribution
200 subjects.

Diverse composition in terms of gender, age and ethnicity.

Images Examples
7 images for each subject, up to 5 tissue types.

fg, fa: front view, neutral expression. All fr images are taken with rotations of 45 and 90 degrees. 20 of the 200 subjects were imaged at different times, up to 5 weeks apart.

Images Examples
Front view taken in 4 different visits.

Image Representation
Each face image is represented by spectral reflectance vectors.

These vectors are extracted from small facial regions which are visible.

The regions are selected manually.

Image Representation
(Figures) Example faces marked with 5 regions and with 2 regions.

Spectral Reflectance Vector
The reflectance for a region t at wavelength λ is estimated by averaging over the N pixels in the region:
r_t(λ) = (1/N) Σ_{(x,y) in region t} R(x, y, λ)

The spectral reflectance vector for each facial region collects these values over the B bands:
v_t = (r_t(λ_1), ..., r_t(λ_B))
The vector is then normalized.

Spectral Distance
The distance between face image i and face image j for tissue type t is a Mahalanobis-type distance:
D_t(i, j) = (v_t(i) − v_t(j))^T Σ_t^{-1} (v_t(i) − v_t(j))
where Σ_t is the B × B covariance matrix representing the variability for tissue type t over the entire database.

In our experiment, we use a single Σ for the entire database.
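A small sketch of how a region's spectral vector and the spectral distance might be computed, assuming reflectance cubes calibrated as above and a boolean mask per facial region; the function names, the mask representation, and the unit-length normalization are illustrative assumptions:

    import numpy as np

    def region_vector(reflectance, mask):
        """Average a (B, H, W) reflectance cube over the pixels in a region
        mask of shape (H, W), then normalize the resulting B-vector."""
        v = reflectance[:, mask].mean(axis=1)     # one value per band
        return v / np.linalg.norm(v)

    def spectral_distance(v_i, v_j, cov):
        """Mahalanobis-type distance between two B-dimensional region vectors,
        using the B x B covariance matrix estimated over the whole database."""
        d = v_i - v_j
        return float(d @ np.linalg.inv(cov) @ d)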

Forehead Spectrum
Larger variance at the ends of the spectral range, due to sensitivity to noise.

Concepts
Gallery (C): a group of hyperspectral images of known identity. Example: the fg image.

Probes: the remaining images of the subjects, used to test the recognition algorithm.

Duplicates: the images taken in the second and subsequent sessions.

Our Experiments
For every image j in the probe set, the same subject is present in the gallery as image Tj.

Calculate D(i, j) for each probe j and each gallery image i in C.

A hit is scored if D(Tj, j) is the smallest of all the distances to gallery images.

Our Experiments
M1: the number of correctly recognized probes.

Mn: the number of probes for which D(Tj, j) is among the n smallest of the gallery distances.

n: the rank.

P: the total number of probes. The rank-n recognition rate is Mn / P.
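A short sketch of this rank-n evaluation, assuming a distance matrix D with one row per gallery image and one column per probe, and an array true_match giving the gallery index Tj of each probe; these names and the data layout are assumptions:

    import numpy as np

    def rank_n_rate(D, true_match, n):
        """Fraction of probes whose true gallery match is among the n smallest
        distances, i.e. the rank-n recognition rate M_n / P."""
        hits = 0
        for j in range(D.shape[1]):                 # loop over probes
            order = np.argsort(D[:, j])             # gallery sorted by distance
            if true_match[j] in order[:n]:
                hits += 1
        return hits / D.shape[1]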

Example
M2 (figure).

Experiments
Skin is the most useful tissue; hair and lips are less useful. 90% of the probes were recognized accurately with 200 images in the DB.

Reminder
fg is the gallery image; fa and fb are the probe images.

Recognition Performance
All tissue types, two probes. fa has the same expression as the gallery, fb a different expression.

Recognition using hyperspectral discriminants is not impacted significantly by changes in facial expression, although a changed expression is harder to identify.

Recognition Performance
Individual tissue types, two probes. Degradation order: forehead, left + right cheek, lips.


All Tissues Recognition
Change in face orientation, over all 200 images in the DB. 75% recognition for 45-degree rotation; 80% of probes have a match in the top 10 for 90-degree rotation.

Face Orientation
Current face recognition systems experience difficulties in recognizing probes that differ from a frontal gallery by more than 32 degrees.

Hyperspectral images achieve accurate recognition results for larger rotations!

Face Orientation Recognition
(Figure) Recognition performance by group: (a) female, (b) male, (c) Asian, (d) Caucasian, (e) Black, (f) ages 18-20, (g) 21-30, (h) 31-40, (i) over 40.

Table Analysis

Four tables are analyzed: front-view neutral-expression probes, front-view changed-expression probes, 45-degree rotation probes, and 90-degree rotation probes, for all categories. Example:

Female probes tend to false-match with female images in the gallery. The same holds for male and Asian probes.

Duplicates
98 probes from 20 subjects, acquired between 3 days and 5 weeks after the gallery images. 92% have a correct match in the top 10.

Duplicates
Performance on duplicates is similar whether they were acquired within one week of the gallery or later.

There is a significant reduction in recognition accuracy for images not acquired on the same day as the gallery.

Possible explanations: drift in sensor characteristics, or changes in the subject's condition, including variation in blood, water or melanin concentration.

Hyperspectral imaging has potential for face recognition over time!

Conclusion
Purpose: face recognition over time in the presence of changes in facial pose and expression.

Implementation: hyperspectral images over the NIR (0.7 µm - 1.0 µm), images for 200 subjects, and spectral comparison of combinations of tissue types.

Conclusion
Results: performs significantly better than current face recognition for rotated faces. Accurate recognition performance for expression changes and for images acquired over time intervals.

Expectations: further improvement by modeling spectral reflectance changes due to face orientation changes. We use only spectral information; improvement can be achieved by incorporating spatial information.

Second Article: Illumination Invariant Face Recognition Using Near Infrared Images
Stan Z. Li, Senior Member, IEEE, RuFeng Chu, ShengCai Liao, and Lun Zhang. April 2007.

The Problem
Lighting conditions drastically change the appearance of a face.

Changes between images of a person under different illumination conditions are larger than those between images of two people under the same illumination.

The Problem
Lighting is the most important issue to solve for reliable face-based applications.

The system should adapt to the environment and not vice versa.

Current Face Recognition Systems
Most current face recognition systems are based on face images captured in the visible light spectrum.

These systems are compromised in accuracy by changes in the environmental illumination.

Most of these systems are designed for indoor use.

Related Work
Most of the related work improved recognition performance but has not led to a face recognition method that is illumination invariant.

Related Work
One good direction is to use 3D data.

Such data captures the geometric shape of the face and, as a result, is less affected by environmental lighting.

It can cope with rotated faces.

Disadvantages: increased cost, slower speed, and not necessarily better recognition results; recognition performance from a 2D image and from a 3D image may be similar.

Motivation
Achieve illumination-invariant face recognition using active near-infrared (active NIR) imaging techniques.

Build accurate and fast face recognition systems.

Control Light Direction
Two strategies to control light direction: 1. provide frontal lighting; 2. minimize environment lighting.

We now present the hardware design of the system.

Frontal Lighting
We would like to produce a clearly frontal-lighted face image.

We build active NIR imaging hardware.

The active lights are mounted on the camera: this provides the best possible straight frontal lighting, better than mounting them anywhere else.

Frontal Lighting
The active lights are chosen in the NIR spectrum (780-1100 nm).

We use LEDs.

The camera-to-face distance is 50-100 cm, a convenient range for the user.

Guideline: the frontal lighting should be stronger than the expected environment illumination, but still safe for human eyes.

Minimize Environment Lighting
Use a filter that cuts off visible light while allowing NIR light to pass.

Our filter passes wavelengths of 720, 800, 850 and 880 nm at rates of 0%, 50%, 88% and 99%, respectively.

This filter cuts off visible environment lights (