A Systematic Way of Hybrid Model Design and Comparative Analysis of EBGM and Eigenvalues for Biometric Face Recognition Using Neural Network

Er. Jagmeet Singh Brar
Lecturer
9988232197
Govt. Polytechnic College
G.T.B. Garh, Moga, Punjab

Er. Sonika Jindal
Assistant Professor
9888605641
Shaheed Bhagat Singh College of Engg. & Tech., Ferozepur, Punjab, India

Abstract: Face recognition plays an essential role in human-machine interfaces, and an automatic face recognition system is naturally an application of great interest. Although the roots of automatic face recognition trace back to the 1960s, a complete system that gives satisfactory results for video streams remains an open problem. Research in the field has intensified over the last decade due to an increasing number of applications that can apply recognition techniques, such as security systems, ATMs, "smart rooms" and other human-machine interfaces. Elastic Bunch Graph Matching (EBGM) [3] is a feature-based face identification method. The algorithm assumes that the positions of certain fiducial points on the faces are known, and stores information about the faces by convolving the images around the fiducial points with 2D Gabor wavelets of varying size. The results of all convolutions form the Gabor jet for that fiducial point. EBGM treats all images as graphs (called face graphs), with each jet forming a node. The training images are all stacked in a structure called the Face Bunch Graph (FBG), which is the model used for identification. For each test image, the first step is to estimate the positions of fiducial points on the face based on the known positions of fiducial points in the FBG. Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby (1987) and used by Turk and Pentland for face classification; it is considered the first successful example of facial recognition technology. The purpose of this paper is the implementation of various methods from two different families of face recognition algorithms, namely EBGM and eigenvalues, for biometric face recognition.
Introduction
For human beings, the task of face identification is fairly straightforward: for the average person, only a few glimpses of an unknown face are needed to place it in memory and just as easily recall it when needed. Although humans perform so well at this task, it is not clear how the desired result is achieved; deducing the underlying mechanisms that enable this process is a different matter altogether, while at the same time being a crucial step in allowing computers to imitate our face recognition capabilities in a reliable and robust manner. When a machine is presented with the face
identification problem, it must process a given image
or video stream and return the most probable
identities of the people present (possibly more than
one), according to the contents of its database (i.e. the
people the machine “knows”). In an effort to
duplicate the human decision process, two main
categories of algorithms have been proposed, relying on either information about the whole face or
specific, easily-located points on it (facial features).
The first of these families of methods is usually
termed appearance-based in the literature, whereas
the second is referred to as the feature-based
approach. Perhaps the best known appearance-based
algorithm is the Principal Component Analysis (PCA,
[1]), which belongs to the family of Subspace
Projection Methods. PCA considers the image as a
whole, arranges all pixel values in a line vector and
regards each pixel as a separate dimension of the
problem. This vector is then projected on a space of
much lower dimension (hence the name of the
family), in an attempt to reduce the problem size
while retaining as much information as possible
about the original image. PCA is usually enhanced
with Linear Discriminant Analysis (LDA, [2]) in an
effort to improve performance. LDA is essentially a
Jagmeet Singh Brar et al., Int. J. Computer Technology & Applications, Vol. 3 (5), pp. 1747-1751, IJCTA, Sept-Oct 2012, ISSN: 2229-6093
supervised training method of the system in the
projected subspace which tries to form tight clusters
of points corresponding to images from the same
subject, while at the same time placing clusters
corresponding to different individuals as far away as
possible. Feature-based approaches, on the other
hand, rely on information about well-defined facial
characteristics and the image area around these points
to represent a face in the problem space and perform recognition. Examples of these facial features are the
eyes, nose, mouth, eyebrows etc. The exact
coordinates of the eyes in particular are ideally given,
although in practice the algorithm can only work with
estimates obtained from a face detection and eye
zone locator module that precedes the recognition
process. An example of a feature-based approach is
the Elastic Bunch Graph Matching (EBGM, [3])
algorithm, which stores spectral information about
the neighborhoods of facial features by convolving
these areas with Gabor wavelets (masks).
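As a concrete illustration of this convolution step, the sketch below builds real-valued Gabor kernels at a few scales and orientations and collects a patch's responses into a jet. The kernel parameterization, scales and function names are illustrative assumptions for this sketch, not taken from [3].

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor wavelet: a Gaussian-windowed cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)  # rotate by orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_jet(patch, wavelengths=(4, 8), orientations=4):
    """Responses of one image patch to Gabor kernels at several scales/orientations."""
    jet = []
    for lam in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(patch.shape[0], lam, theta, sigma=lam)
            jet.append(float(np.sum(patch * kern)))  # response at the patch centre
    return np.array(jet)

patch = np.random.default_rng(0).random((15, 15))  # stand-in for a fiducial-point neighborhood
jet = gabor_jet(patch)
print(jet.shape)  # one coefficient per (scale, orientation) pair
```

With two wavelengths and four orientations, the jet holds eight coefficients; real EBGM implementations use complex wavelets and more scales and orientations per landmark.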
The Process of the Face Recognition System
The ultimate goal of a face recognition system is image understanding: the ability not only to recover image structure but also to know what it represents. A general statement of automatic face recognition can be formulated as follows: given still or video images of a scene, identify or verify one or more persons in the scene using a stored database of faces. The solution to the problem involves segmentation of faces (face detection) from cluttered scenes, feature extraction from the face regions, and recognition or verification.
Figure 1 shows the face recognition steps.
Background Study
Face recognition algorithms can be classified into two broad categories according to the feature extraction schemes used for face representation: feature-based methods and appearance-based methods [9]. In feature-based methods, properties and geometric relations such as the areas, distances, and angles between the facial feature points are used as descriptors for face recognition. Appearance-based methods, on the other hand, consider the global properties of the face image intensity pattern.
Figure 2 shows the first six basis vectors of eigenfaces.
Typically appearance-based face recognition
algorithms proceed by computing basis vectors to
represent the face data efficiently. In the next step,
the faces are projected onto these vectors and the
projection coefficients can be used for representing
the face images. Popular algorithms such as PCA,
LDA, ICA, LFA, Correlation Filters, Manifolds and
Tensorfaces are based on the appearance of the face.
Eigenfaces (PCA):
In this paper, principal component analysis is one of the suggested methods. Eigenfaces [8], also known as Principal Components Analysis (PCA), finds the minimum mean squared error linear subspace that maps the original N-dimensional data space into an M-dimensional feature space. With M << N, eigenfaces achieve dimensionality reduction by using the M eigenvectors of the covariance matrix that correspond to the largest eigenvalues. The resulting basis vectors are the optimal basis vectors that maximize the total variance of the projected data (i.e. the set of basis vectors that best describes the data).
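A minimal sketch of this computation on toy data, using the SVD of the centered data matrix to obtain the covariance eigenvectors; the array sizes and the choice M = 6 are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "face" data: 20 images of 8x8 pixels, each flattened to N = 64 dimensions.
X = rng.random((20, 64))
M = 6  # keep only the M eigenvectors with the largest eigenvalues

mean_face = X.mean(axis=0)
centered = X - mean_face
# SVD of the centered data: rows of Vt are the covariance-matrix eigenvectors,
# ordered by decreasing singular value (i.e. decreasing eigenvalue).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:M]                # M x 64 basis of the M-dimensional "face space"
weights = centered @ eigenfaces.T  # 20 x M projection coefficients, one row per image

print(eigenfaces.shape, weights.shape)
```

Each image is thereafter represented by its M projection coefficients rather than its N pixel values, which is the dimensionality reduction described above.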
Linear Discriminant Analysis (LDA) and Fisherfaces:
Linear Discriminant Analysis (LDA) [10] is better suited for finding projections that best discriminate different classes. It does this by seeking the optimal projection vectors that maximize the ratio of the between-class scatter to the within-class scatter (i.e. maximizing class separation in the projected
space). The optimal basis vectors of LDA can be denoted as

$W_{opt} = \arg\max_{W} \frac{|W^{T} S_{B} W|}{|W^{T} S_{W} W|}$

where $S_B$ and $S_W$ indicate the between-class scatter matrix and the within-class scatter matrix, respectively.
Figure 3 shows the first six basis vectors of Fisherfaces.
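The scatter matrices and the resulting projection can be sketched as follows on synthetic data; the optimal directions are the leading eigenvectors of S_W^{-1} S_B. The class means, dimensions and sample counts here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Three classes (subjects) of 4-dimensional feature vectors, 10 samples each.
classes = [rng.normal(loc=c, size=(10, 4)) for c in (0.0, 2.0, 4.0)]
overall_mean = np.vstack(classes).mean(axis=0)

S_W = np.zeros((4, 4))  # within-class scatter
S_B = np.zeros((4, 4))  # between-class scatter
for Xc in classes:
    mu = Xc.mean(axis=0)
    S_W += (Xc - mu).T @ (Xc - mu)
    d = (mu - overall_mean).reshape(-1, 1)
    S_B += len(Xc) * (d @ d.T)

# Optimal projections: leading eigenvectors of S_W^{-1} S_B.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs.real[:, order[:2]]  # at most C - 1 = 2 useful discriminant axes
print(W.shape)
```

With C classes, S_B has rank at most C - 1, so at most C - 1 discriminant directions carry class-separating information.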
Neural Networks (NN) and Support Vector Machines (SVM)
Neural networks and support vector machines (SVMs) are usually used in low-dimensional feature spaces due to the computational complexity of processing high-dimensional face data. Neural network approaches [11] have been widely explored for feature representation and face recognition. However, as the number of people used for training increases, the computational burden of the NN grows rapidly; fusion of multiple neural networks has been explored to address this.
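As a minimal sketch of a neural classifier operating in such a low-dimensional feature space, the following trains a single-layer logistic unit on two toy subject clusters; the data, dimensionality and learning rate are illustrative assumptions, not the networks surveyed in [11].

```python
import numpy as np

rng = np.random.default_rng(5)
# Low-dimensional features (e.g. projection coefficients) for two subjects.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Single-layer network with a sigmoid output, trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    z = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad_w = X.T @ (z - y) / len(y)         # gradient of the log loss
    grad_b = float(np.mean(z - y))
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())  # training accuracy on the toy data
```

Because the input is only 2-dimensional, training is cheap; the scaling problem noted above appears when the number of subjects (and thus output units and training samples) grows.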
Proposed Algorithmic Steps with Example for EBGM
Step 1: Jets are selected by hand to serve as examples of facial features.
Step 2: A bunch graph is created. Each node of the bunch graph corresponds to a facial landmark and contains a bunch of model jets extracted from the model imagery.
Step 3: Landmark points are located for every image. First, a novel jet is extracted from the novel image. The novel jet's displacement from the actual location is estimated by comparing it to the most similar model jet from the corresponding bunch.
Step 4: A face graph is created for each image by extracting a jet for each landmark. The graph contains the locations of the landmarks and the values of the jets. The original image can then be discarded.
Step 5: Face similarity is computed as a function of landmark locations and jet values.
The EBGM algorithm computes the similarity of two
images. To accomplish this task, the algorithm first
finds landmark locations on the images that
correspond to facial features such as the eyes, nose,
and mouth. It then uses Gabor wavelet convolutions
at these points to describe the features of the landmark. All of the wavelet convolution values at a
single point are referred to as a Gabor jet and are
used to represent a landmark. A face graph is used to
represent each image. The face graph nodes are
placed at the landmark locations, and each node
contains a Gabor jet extracted from that location. The
similarity of two images is a function of the
corresponding face graphs.
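The similarity computation described above can be sketched as the normalized dot product of jets, averaged over corresponding landmarks. This is a simplified magnitude-only similarity that ignores the phase and displacement terms of the full EBGM method, and the landmark names and jet values are illustrative.

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalised dot product of two Gabor jets (magnitude-only similarity)."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2)))

def face_graph_similarity(graph_a, graph_b):
    """Average jet similarity over corresponding landmarks of two face graphs."""
    sims = [jet_similarity(graph_a[name], graph_b[name]) for name in graph_a]
    return sum(sims) / len(sims)

rng = np.random.default_rng(3)
landmarks = ("left_eye", "right_eye", "nose", "mouth")
graph_a = {name: rng.random(8) for name in landmarks}  # one jet per landmark node
graph_b = {name: rng.random(8) for name in landmarks}

print(round(face_graph_similarity(graph_a, graph_a), 3))  # identical graphs -> 1.0
print(round(face_graph_similarity(graph_a, graph_b), 3))
```

Identification then amounts to computing this similarity between the probe's face graph and every stored face graph, and reporting the best match.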
Eigenfaces for Face Detection/Recognition
1. Acquire an initial set of face images (the training set).
2. Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the face space. As new faces are encountered, the eigenfaces can be updated or recalculated.
3. Calculate the corresponding distribution in the M-dimensional weight space for each known individual by projecting their face images onto the "face space".
These operations can also be performed from time to time whenever there is excess computational capacity. Having initialized the system, the following steps are then used to recognize new face images:
1. Calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
2. Determine whether the image is a face at all (known or unknown) by checking whether the image is sufficiently close to the "face space".
3. If it is a face, classify the weight pattern as either a known person or as unknown.
4. (Optional) Update the eigenfaces and/or weight patterns.
5. (Optional) If the same unknown face is seen several times, calculate its characteristic weight pattern and incorporate it into the known faces.
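The recognition steps above can be sketched as follows. The tolerances, subject names and the `classify` helper are hypothetical, and the randomly generated basis and stored weights stand in for the output of the training phase.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 64, 5
# Stand-ins for the training-phase output (steps 1-3 above).
mean_face = rng.random(N)
eigenfaces = np.linalg.qr(rng.normal(size=(N, M)))[0].T  # M orthonormal rows
known_weights = {"alice": rng.normal(size=M), "bob": rng.normal(size=M)}

def classify(image, face_tol, id_tol):
    w = eigenfaces @ (image - mean_face)            # step 1: projection weights
    reconstruction = mean_face + eigenfaces.T @ w
    face_dist = np.linalg.norm(image - reconstruction)
    if face_dist > face_tol:                        # step 2: too far from face space
        return "not a face"
    name, dist = min(((n, np.linalg.norm(w - kw))
                      for n, kw in known_weights.items()), key=lambda t: t[1])
    return name if dist < id_tol else "unknown"     # step 3: known or unknown

# A probe synthesized from alice's stored weight pattern projects back exactly.
probe = mean_face + eigenfaces.T @ known_weights["alice"]
print(classify(probe, face_tol=1.0, id_tol=0.1))  # -> alice
```

The distance from face space (the reconstruction error) implements the "is it a face at all" test, while the distance in weight space implements the known/unknown decision.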
Proposed Steps for the Implementation
In this paper, we have discussed two algorithms, EBGM and eigenvalues, for biometric face recognition. We implement a hybrid model design and also perform a comparative analysis of both algorithms. The following are the proposed steps of our approach:
1. Recognize a sample face from a set of faces.
2. Implement the hybrid model, i.e. a combination of EBGM and eigenvalues.
3. Use the hybrid model for face recognition, applying both face recognition algorithms.
4. Compare EBGM and eigenvalues on the basis of various parameters.
The proposed face recognition system passes through three main phases during a face recognition process. Three major functional units are involved in these phases and they are depicted in the figure. The characteristics of these phases, in conjunction with the three functional units, are given below:
Face library formation phase: In this phase, the acquisition and preprocessing of the face images that are going to be added to the face library are performed. Face images are stored in a face library in the system. We call this face database a "face library" because, at the moment, it does not have the properties of a relational database.
Training phase: After face images are added to the initially empty face library, the system is ready to form the training set and the eigenfaces. The face images that are going to be in the training set are chosen from the entire face library. Because the face library entries are normalized, no further preprocessing is necessary at this step. After the training set is chosen, the eigenfaces are formed and stored for later use. Eigenfaces are calculated from the training set, keeping only the M images that correspond to the highest eigenvalues. These M eigenfaces define the M-dimensional "face space".
Recognition and learning phase: After a training set is chosen and the weight vectors of the face library members are constructed, the system is ready to perform the recognition process. The user initiates the recognition process by choosing a face image. Based on the user request and the acquired image size, preprocessing steps are applied to normalize the acquired image to the face library specifications (if necessary). Once the image is normalized, its weight vector is constructed with the help of the eigenfaces that were already stored during the training phase.
The accompanying figure depicts the functional units involved in the three phases.
Analysis of the Algorithms
The algorithms are analyzed on the basis of the various fiducial points provided with human scan images.
After the fiducial points for a testing image have been
estimated, the algorithm proceeds to extract Gabor
jets from all those positions and construct the Face
Graph, which is then compared against all training
images in the FBG to produce the system’s decision
for the identification problem.
Conclusion
The EBGM algorithm has been studied extensively, both on its own and in comparison with eigenfaces, and their respective merits and shortcomings have been investigated and analyzed. EBGM has proven to be a fairly mathematically involved face identification method that exhibits robustness under illumination variations, image resizing and imperfect eye
localization. It is a very good choice for off-line
applications and cases where training images are
scarce; however, its high computational complexity
makes it inappropriate for real-time applications.
References
1. R. Duda, P. Hart and D. Stork, Pattern
Classification, Wiley-Interscience, New
York, 2000.
2. Laurenz Wiskott, Jean-Marc Fellous,
Norbert Krueger and Christoph von der
Malsburg, “Face Recognition by Elastic
Bunch Graph Matching”, in Intelligent
Biometric Techniques in Fingerprint and
Face Recognition, eds. L.C. Jain et al., publ.
CRC Press, ISBN 0-8493-2055-0, Chapter
11, pp. 355-396, 1999.
3. David S. Bolme, “Elastic Bunch Graph
Matching”, Master’s Thesis, Computer
Science Department, Colorado State
University, Summer 2003.
4. Laurenz Wiskott, Jean-Marc Fellous,
Norbert Krueger and Christoph von der
Malsburg, “Face Recognition by Elastic
Bunch Graph Matching”, in Intelligent
Biometric Techniques in Fingerprint and
Face Recognition, eds. L.C. Jain et al., publ.
CRC Press, ISBN 0-8493-2055-0, Chapter
11, pp. 355-396, 1999.
5. M. Turk and A. Pentland, “Eigenfaces for Recognition”, J. Cognitive Neuroscience,
Vol. 3, March 1991, pp. 71-86.
6. Jun Zhang, Yong Yan and M. Lades, "Face recognition: eigenface, elastic matching, and neural nets", Proceedings of the IEEE, Vol. 85, No. 9, pp. 1423-1435, Sept. 1997.
7. Handbook of Biometrics, by Anil K. Jain (Michigan State University, USA) and Patrick Flynn (University of Notre Dame, USA).
8. M. Turk and A. Pentland. Eigenfaces for
Recognition. Journal of Cognitive
Neuroscience, 3(1):71–86, 1991.
9. W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips. Face Recognition: A Literature
Survey. ACM Computing Surveys, pages
399–458, 2003.
10. R. O. Duda, P. E. Hart, and D. G. Stork.
Pattern Classification. Wiley- Interscience
Publication, 2000.
11. S. Lawrence, C. L. Giles, A. C. Tsoi, and A.
D. Back. Face Recognition: A
Convolutional Neural-Network Approach.
IEEE Transactions on Neural Networks,
8(1):98–113, 1997.