
A Face Authentication system using the Trace Transform

S Srisuk,∗ M Petrou,† W Kurutach,‡ A Kadyrov§

∗ Sanun Srisuk is with the School of Electronics and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom, on leave of absence from the Advanced Machine Intelligence Research Laboratory, Department of Computer Engineering, Mahanakorn University of Technology, Nong Chok, Bangkok 10530, Thailand.
† Maria Petrou is with the Informatics and Telematics Institute, CERTH, PO Box 361, Thessaloniki 57001, Greece, on leave of absence from the School of Electronics and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom. E-mail: [email protected].
‡ Werasak Kurutach is with the Department of Information Technology, Mahanakorn University of Technology, Nong Chok, Bangkok 10530, Thailand.
§ Alexander Kadyrov is with the School of Electronics and Physical Sciences, University of Surrey, Guildford GU2 7XH, United Kingdom.

March 21, 2006

Abstract

In this paper we introduce novel face representations, the masked Trace transform (MTT), the shape Trace transform (STT) and the weighted Trace transform (WTT), for recognising faces in a face authentication system. We first transform the image space to the Trace transform space to produce the MTT. We then identify the points of the MTT which take similar values irrespective of intraclass variations, and this way we create the WTT. Next we threshold the MTT and extract the edges of the thresholded regions to produce some shapes that characterise the person. This is the STT. Therefore, each person in the database is represented by their WTT and STT. We estimate the dissimilarity between two shapes by a new measure we propose, the Hausdorff context. Reinforcement learning is used to search for the optimal parameter values of the algorithm. Shape and features from the MTT are then integrated at the decision level, by a classifier combination algorithm. Our system is evaluated with experiments on the XM2VTS database using 2360 face images. We achieve a Total Error Rate (TER) of 0.18%, which is the lowest error among all other reported methods which used the same data and the same evaluation protocol in a recently published study.

Keywords: Face authentication, Trace transform, Hausdorff context

1 Introduction

Texture and 2D shape play a crucial role in face recognition and authentication systems [5, 7, 9, 10, 16,

17, 20, 31, 35, 36]. They provide significant features which can be used to classify face images of the

same individual as well as discriminate between face images of different individuals. It is, however, inherently difficult to construct an ideal feature representation, because the images involved are


complex and also highly variable. Consider for example the two faces in Fig. 1(a) regarded as vectors

of pixel brightness values and compared using some norm: they are very similar. However, considered as

texture features (Fig. 1(b)) in the Masked Trace transform they look subtly different. Moreover, regarded

as shapes produced by thresholding their Masked Trace transform representations in Fig. 1(c), they appear

quite different. Inspired by this observation, our goal in this paper is to formulate a new face feature

representation that has very high discriminative power and can be used for face authentication. An important characteristic of our approach is that it maximises the between-class variance while minimising the within-class variance. Our system accomplishes this by incorporating

a reinforcement learning algorithm to control the parameters of the shape Trace transform and the weighted

Trace transform.

Figure 1: The idea behind this paper. (a) In terms of pixel-to-pixel comparisons, these two face images are very similar. (b) There are, however, subtle differences between the features in the corresponding face representations by the Masked Trace transform. (c) Moreover, regarded as shapes created from the Masked Trace transform, they appear quite different.

The original contributions of the proposed system are:

1. It offers the capability of constructing robust features which describe the characteristics of face images

without necessarily having a physical or geometric meaning.

2. It incorporates shape and texture information.

3. It proposes a robust shape matching measure which is based on both spatial and structural informa-

tion.

The well-known approaches used for face authentication and recognition are based on the use of eigenfaces [1, 7, 22, 25, 32, 36], elastic matching [9, 16, 17, 31, 35, 36], neural nets [18, 19, 28, 36], waveletfaces [6]

and fisherfaces [1,20]. Most of the old approaches make use of grey values in the image space or features in


a reduced space. It is unlikely that a few characteristics measured from a reduced space will suffice to allow

one to discriminate all faces, particularly in a large face database. Recent approaches apply sophisticated

transformations in order to produce a space where the face classes are separable. One of the characteristics

of the human vision system, supported by physiological evidence [4], is the redundancy built into it: redundancy allows for robustness in performance, so we argue that the use of a large number of features, even

correlated ones, may help solve some difficult recognition problems. In addition, representing the data in a

way different from the conventional way may make explicit variations in the data which are only implicitly

present in the original representation. The Trace transform is an alternative image representation that allows

one to construct thousands of features from an image [12, 13, 27]. Therefore, the Trace transform may be

appropriate for a face authentication system, and that is why we decided to investigate its usefulness in the

particular problem. The Trace transform performs computations along lines scanning the image. The most

closely related work which uses lines scanning the face is the line-based face recognition system [33]. The

authors in [33] use a set of random rectilinear line segments of 2D face image views as the underlying image representation, together with the nearest-neighbour classifier as the line-matching scheme. The Trace

transform on the other hand uses lines criss-crossing the image in all possible directions and computes the

values of various functionals along each line.

2 The Trace Transform for Face Feature Representation

2.1 The Trace Transform

The Trace transform [12,13,27], a generalisation of the Radon transform, is a new tool for image processing.

To produce the Trace transform one computes a functional T along tracing lines of an image. Each line is

characterised by two parameters (p, φ). Parameter p is the distance of the line from the centre of the axes

and φ is the orientation of the normal to the line with respect to the reference direction. With the Trace

transform the image is transformed into another “image”, which is a 2-D function g(φ, p). The resultant

Trace transform depends on the functional used. Different Trace transforms can be produced from an image

using different functionals T. In our current implementation we have 22 different trace functionals, listed in Table 1. The first one is just the integral of the image function f(t) along the tracing line.

This produces the Radon transform of the image. Let us denote by t the variable defined along a tracing

line (φ, p). Let us also denote by n the number of points along the tracing line. Parameter n may be varied

depending on the length of the tracing line. The notation median_x{x, w} means the weighted median of


sequence x with weights in the sequence w. For example, median{{4, 2, 6, 1}, {2, 1, 3, 1}} indicates the

median of numbers 4, 2, 6, and 1 with corresponding weights 2, 1, 3, and 1. This means the standard median

of numbers 4, 4, 2, 6, 6, 6, 1, i.e. the median of the ranked sequence 1, 2, 4, 4, 6, 6, 6, which is 4. See [12] for

more details and for the properties of the Trace transform.
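For concreteness, the following minimal sketch (ours, not the original implementation) computes a Trace transform by sampling the image along each tracing line (φ, p) and applying a functional T. It assumes a grey-scale image array, bilinear interpolation for the line samples, φ spanning [0, π) with signed p so that all line orientations are covered, and the plain sum (functional 1, the Radon transform) as the default functional.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def trace_transform(img, n_phi=180, n_p=100, n_t=100, T=np.sum):
    """Return g(phi, p): functional T applied along every tracing line."""
    h, w = img.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0     # centre of the axes
    r_max = min(cx, cy)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    ps = np.linspace(-r_max, r_max, n_p)      # signed distance from the centre
    ts = np.linspace(-r_max, r_max, n_t)      # position along the line
    g = np.zeros((n_phi, n_p))
    for i, phi in enumerate(phis):
        nx, ny = np.cos(phi), np.sin(phi)     # normal to the line
        dx, dy = -np.sin(phi), np.cos(phi)    # direction of the line
        for j, p in enumerate(ps):
            xs = cx + p * nx + ts * dx
            ys = cy + p * ny + ts * dy
            f = map_coordinates(img, [ys, xs], order=1, cval=0.0)
            g[i, j] = T(f)                    # e.g. np.sum -> Radon transform
    return g
```

Functional 2 of Table 1, for instance, would be T = lambda f: np.sum(np.abs(f) ** 0.5) ** 2, up to the constant sampling step along the line.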

Table 1: The Trace functionals T

No. | Trace functional | Details
1 | T(f(t)) = ∫_0^∞ f(t) dt | Radon transform
2 | T(f(t)) = [∫_0^∞ |f(t)|^p dt]^q | p = 0.5, q = 1/p
3 | T(f(t)) = [∫_0^∞ |f(t)|^p dt]^q | p = 4, q = 1/p
4 | T(f(t)) = ∫_0^∞ |f(t)′| dt | f(t)′ = [(t_2 − t_1), (t_3 − t_2), . . . , (t_n − t_{n−1})]
5 | T(f(t)) = median_t{f(t), |f(t)|} |
6 | T(f(t)) = median_t{f(t), |f(t)′|} |
7 | T(f(t)) = [∫_0^x |F{f(t)}|^p]^q | F means taking the discrete Fourier transform; p = 4, q = 1/p, x = n/2
8 | T(f(t)) = ∫_0^∞ |d/dt M{f(t)}| dt | M is a median filtering operator, using a local window of length 3, and d/dt means taking the difference of successive samples
9 | T(f(t)) = ∫_0^∞ r f(t) dt | r = |l − c|, l = 1, 2, . . . , n, c = median_l{l, f(t)}
10 | T(f(t)) = median_t{√r f(t), |f(t)′|^{1/2}} |
11 | T(f(t)) = ∫_0^∞ r² f(t) dt |
12 | T(f(t)) = ∫_{c∗}^∞ √r f(t) dt | c∗ signifies the nearest integer to c
13 | T(f(t)) = ∫_{c∗}^∞ r f(t) dt |
14 | T(f(t)) = ∫_{c∗}^∞ r² f(t) dt |
15 | T(f(t)) = median_{t∗}{f(t∗), |f(t∗)|^{1/2}} | f(t∗) = [f(t_{c∗}) f(t_{c∗+1}) . . . f(t_n)]
16 | T(f(t)) = median_{t∗}{r f(t∗), |f(t∗)|^{1/2}} | f(t∗), r and c∗ as above; l = c∗, c∗ + 1, . . . , n, c = median_l{l, |f(t)|^{1/2}}
17 | T(f(t)) = |∫_{c∗+1}^∞ e^{i4 log(r)} √r f(t) dt| | r = |l − c|, l = 1, 2, . . . , n, i = √−1
18 | T(f(t)) = |∫_{c∗+1}^∞ e^{i3 log(r)} f(t) dt| | c = median_l{l, |f(t)|^{1/2}}
19 | T(f(t)) = |∫_{c∗+1}^∞ e^{i5 log(r)} r f(t) dt| | c∗ signifies the nearest integer to c
20 | T(f(t)) = ∫_c^∞ √r f(t) dt | r = |l − c|, l = 1, 2, . . . , n
21 | T(f(t)) = ∫_c^∞ r f(t) dt | c = (1/S) ∫_0^∞ r |f(t)| dt
22 | T(f(t)) = ∫_c^∞ r² f(t) dt | S = ∫_0^∞ |f(t)| dt

2.2 The Masked Trace Transform (MTT)

The Trace transform is a global transform, applicable to full images. If we are going to use it to recognise

faces, we must consider a local version of it. The Trace transform is known to be able to pick up shape

as well as texture characteristics of the object it is used to describe. We extract faces following [29], and

represent each extracted face by an elliptical mask; the Trace transform computed within this mask is what we call the masked Trace transform (MTT). We show the result of the Trace transform of the full image, of a rectangular face region and of the elliptical mask in Fig. 2. The MTT

representation may be regarded as expressing combined shape and texture characteristics of the face.


Figure 2: Examples of the Trace transform with different windows. (a) and (d) Full images; (b) and (e) rectangular shapes; (c) and (f) elliptical shapes.

2.3 The Shape Trace Transform (STT)

In this section we introduce a novel face representation using shapes derived from MTT, hereafter simply

called shape Trace transform (STT). As the Trace transform offers an alternative representation of the

image of an object, it is possible to identify a person directly from its Trace transform, without extracting

any features from it. This is only possible if the object that has to be identified is not rotated or scaled with

respect to the reference object. In the face authentication task we may assume that this is the case as we

have control over the image capturing conditions. The Trace transform is a very rich representation of an

image and in order to use it directly for recognition, one has to produce a much simplified version of it. One

way of doing it is by simple thresholding:

B(φ, p) = 1 if g(φ, p) ≥ υ, and 0 otherwise,   (1)

where υ is some threshold. This way we produce a binarised version of the Trace transform. The shapes

of the outlines of the extracted regions may be used for the authentication task. Fig. 3 shows in the

top two rows such representations of two different images of the same person, while in the bottom row the

corresponding representation for a different person. In the task of face authentication one has to discriminate

the images between clients and impostors. The MTTs in Fig. 3(b) (bottom two rows), regarded as textures,

exhibit only subtle differences between the two images. If, however, we regard the shapes extracted from

them, shown in Fig. 3(d), the discriminating power of these face representations for an automatic system is

significantly increased. The outline of the object in Fig. 3(d) can be regarded as a “shape” representation of

the face image. This “shape”, however, is not directly related to the shape we see when we see a person. It

may be thought of as the shape seen through the eyes of “an alien vision system” which, instead of imaging the scene by recording the brightness value at sample points called pixels, images the scene by computing a functional along lines called tracing lines. We may choose threshold υ for each client in such a way as to decrease the within-class variance, while maintaining the between-class variance.

Figure 3: Examples of extracted shapes from MTT. (a) Face examples. (b) MTT. (c) Binarisation of MTT. (d) STT.
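As a sketch of the thresholding and edge-extraction step just described (under the assumption that "the edges of the thresholded regions" are the boundary pixels of the binary mask B), the STT point set can be obtained from an MTT array g as follows:

```python
import numpy as np

def stt_points(g, upsilon):
    """Shape points of the STT: boundary pixels of the thresholded MTT g."""
    B = (g >= upsilon).astype(np.uint8)                 # equation (1)
    pad = np.pad(B, 1)                                  # zero border
    # a set pixel is a boundary pixel if any of its 4-neighbours is zero
    neigh_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                                   pad[1:-1, :-2], pad[1:-1, 2:]])
    edges = (B == 1) & (neigh_min == 0)
    return np.argwhere(edges)                           # (phi, p) index pairs
```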

2.4 The Weighted Trace Transform (WTT)

MTT allows us to define a new way for face coding and representation. From this representation we saw

how we can extract shape information. We may also use the values of the Trace transform itself as some

sort of texture features. However, not all values of a Trace transform are useful or good features. Every

point in the Trace representation of an image represents a tracing line. Here, we shall describe a method

for weighting each tracing line according to the role it plays in recognising the face. We need to find the

persistence of the features in MTT for each person. So, by selecting the features in the Trace transform

which persist for an individual, even when their expression changes, we are identifying those scanning

lines that are most important in the definition of the person. We refer to this method as the weighted Trace

transform (WTT). Suppose that we have 3 training images which were transformed to the Trace transform

space. We first compute the differences between the MTTs of the 3 images.

D_1(φ, p) ≡ |g_1(φ, p) − g_2(φ, p)|,
D_2(φ, p) ≡ |g_1(φ, p) − g_3(φ, p)|,
D_3(φ, p) ≡ |g_2(φ, p) − g_3(φ, p)|,   (2)


where g_i is the MTT of the i-th training image. These image differences can be used to indicate the characteristics of the variations in appearance of images of the same face. We define the weight matrix as

follows

W(φ, p) = 1 if D_1(φ, p) ≤ κ and D_2(φ, p) ≤ κ and D_3(φ, p) ≤ κ, and 0 otherwise,   (3)

where κ is some threshold. In other words, the weight matrix flags only those scanning lines which in all

three images produced values for the Trace transform that differ from each other by only up to a certain

tolerance κ. The values of these lines will constitute the “texture” features from the Trace transform. We

use the weight matrix W (φ, p) to measure the similarities between two images as

r_2(T_r, T_t) ≡ exp[ −(1/n_κ) Σ_{φ,p} |T_r(φ, p) − T_t(φ, p)| W(φ, p) ],   (4)

where T_r is the MTT of one of the training images, all of which are used as reference images, T_t the MTT of the test image, and n_κ the total number of flagged lines in the WTT. This measure is used as the confidence

level of matching two WTTs. It was chosen because it is bounded, varying between 0 for large level of

differences and 1 for absolute agreement, and it is very simple to compute, using the L1 norm between the

values that have weight 1.
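A direct transcription of equations (3) and (4), assuming three training MTTs g1, g2, g3 stored as equally sized arrays, might look as follows:

```python
import numpy as np

def wtt_weights(g1, g2, g3, kappa):
    """Weight matrix W of equation (3) from three training MTTs."""
    D1, D2, D3 = np.abs(g1 - g2), np.abs(g1 - g3), np.abs(g2 - g3)
    return ((D1 <= kappa) & (D2 <= kappa) & (D3 <= kappa)).astype(float)

def r2(T_ref, T_test, W):
    """Similarity measure of equation (4); W flags the persistent lines."""
    n_kappa = W.sum()                       # number of flagged tracing lines
    if n_kappa == 0:
        return 0.0                          # no persistent line to compare on
    return float(np.exp(-np.sum(np.abs(T_ref - T_test) * W) / n_kappa))
```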

3 Shape Matching

If the shapes extracted from the Trace transform are to be used for recognition, we need to employ a

shape matching algorithm. In this section, we propose a novel shape difference (or distance) measure, the

“Hausdorff context”, based on the combination of the Hausdorff distance and shape context.

3.1 The Shape Context

Let us denote by P = {p_1, . . . , p_n} the set of n edge pixels. The shape context of a point p_i is computed from the set of the other n − 1 points as follows [2]: the positions of the remaining points are expressed in terms of log-polar coordinates defined with centre the point p_i. Then the n − 1 points are binned using a 2D histogram array like the one shown in figure 4(a). This histogram h_i is defined to be the shape context of point p_i.

Each shape context, therefore, is a log-polar histogram of the coordinates of the rest of the points measured

using the reference point as the origin, and it constitutes a compact representation of the distribution of


points relative to each point. This log-polar histogram is more sensitive to positions of nearby sample points than to those of points farther away. An example is shown in figure 4(c).

Figure 4: (a) Diagram of the log-polar histogram. (b) STT computed from trace functional 1. (c) Shape context example of the point marked by ◦ in (b) (dark = large value). Here we use 5 bins for log r and 12 bins for θ.

When two points are matched, their contexts should also be matched. The mis-matching of their contexts

constitutes the “cost” of matching two points. We denote this cost of matching points pi and qj by C(pi, qj)

and compute it by using the χ² test statistic:

C(p_i, q_j) = (1/2) Σ_{k=1}^{K} [h_i(k) − h_j(k)]² / (h_i(k) + h_j(k)),   (5)

where p_i is a point of shape P and q_j a point of shape Q, and h_i(k) and h_j(k) denote the K-bin normalised context histograms computed for points p_i and q_j, respectively.
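A compact sketch of this construction, with the bin geometry of Fig. 4 (5 log-radius bins, 12 angle bins) and pairwise distances normalised by their mean (a common choice with shape contexts, stated here as an assumption), is:

```python
import numpy as np

def shape_contexts(P, n_r=5, n_theta=12):
    """Log-polar shape-context histograms, one per point of P (an (n, 2) array)."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    d = P[None, :, :] - P[:, None, :]            # d[i, j] = P[j] - P[i]
    dist = np.hypot(d[..., 0], d[..., 1])
    dist /= dist[dist > 0].mean()                # normalise for scale
    with np.errstate(divide="ignore"):
        log_r = np.log(dist)
    theta = np.arctan2(d[..., 1], d[..., 0]) % (2 * np.pi)
    off = dist > 0                               # exclude each point itself
    r_edges = np.linspace(log_r[off].min(), log_r[off].max(), n_r + 1)
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        m = np.arange(n) != i
        r_bin = np.clip(np.digitize(log_r[i, m], r_edges) - 1, 0, n_r - 1)
        t_bin = (theta[i, m] / (2 * np.pi) * n_theta).astype(int) % n_theta
        np.add.at(hists[i], r_bin * n_theta + t_bin, 1)
    return hists / (n - 1)                       # K-bin normalised histograms
```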

3.2 The Hausdorff Distance

Given two point sets A and B, the modified Hausdorff distance [8, 11] between A and B is defined as

H(A, B) = max(h(A, B), h(B, A)), (6)

where

h(A, B) = (1/n) Σ_{a∈A} min_{b∈B} D(a, b)   (7)

with D(a, b) denoting the distance of points a and b (e.g. their Euclidean distance) and h(B, A) being

defined in a similar way. This is a measure of dissimilarity or difference between two point sets. It can be

calculated without an explicit pairing of points in their respective data sets.
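Equations (6) and (7) translate directly into a few lines; A and B are assumed to be float arrays of 2D points:

```python
import numpy as np

def h_dir(A, B):
    """Directed average distance h(A, B) of equation (7)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).mean()          # for each a: nearest b; then average

def modified_hausdorff(A, B):
    """H(A, B) of equation (6)."""
    return max(h_dir(A, B), h_dir(B, A))
```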

3.3 The Hausdorff context

Consider point a of the first shape and set B of the points which make up the second shape as shown in Fig.

5. Let us assume that A and B represent the same shape, but they may have been distorted by noise and


they may be discontinuous.

Figure 5: The integration of Hausdorff distance and shape context. The grey shade indicates the neighbourhood area. The point marked by ◦ is a sample point a of the first shape A. The points marked by ▹ and □ are the candidate matching points of the second shape B.

The Hausdorff distance measures the distance from point a to all points of set B, and selects the one at

the minimum distance. In this case, the candidate point marked by ▹ is selected. The minimum distance is

therefore based only on spatial information. This may lead to incorrect results when we have to deal with

discontinuities in the shapes represented by the two sets caused by segmentation and edge detection errors.

We propose an alternative way to find the minimum distance between point a and set B to overcome the

above problem: For each point b of set B we compute C(a, b) as defined by equation (5). Among all these

points, we choose the one that has the most similar context with a, i.e. the one with the minimum value of

C(a, b), instead of choosing the one with the minimum Euclidean distance as done in the original Hausdorff

distance calculation. Let us say that this is point b′. So, for every point a we identify a corresponding point

b′ with the most similar context. The Euclidean distance between the corresponding pairs of points (a, b′)

is used as a weight when summing up the values of C(a, b′) in order to produce a distance of set A from set

B:

h_HC(A, B) ≡ [Σ_{a∈A} D(a, b′) C(a, b′)] / [Σ_{a∈A} D(a, b′)]   (8)

If in formula (6) we use h_HC(A, B), defined above, instead of h(A, B), defined by equation (7), we have

a new way of measuring the dissimilarity between two shapes, which we call the “Hausdorff context”.

For practical purposes, and in order to reduce the computational cost, instead of searching among all

points b of set B to find the one that has the most similar context with point a, we may restrict our search

among only those points of set B that are within a neighbourhood N(a) of point a.

We use this new shape dissimilarity measure to define the confidence level with which we match two

STTs:

r_1(A, B) ≡ 1 − H(A, B).   (9)
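Putting the pieces together, a sketch of the Hausdorff context (equations (5), (6), (8) and (9)) could read as follows; it reuses the shape-context histograms sketched in Section 3.1, and the neighbourhood N(a) is taken, as an assumption, to be a disc of fixed radius. Since the χ² costs of normalised histograms lie in [0, 1], so does their weighted average, and r_1 is therefore a proper confidence.

```python
import numpy as np

def chi2_cost(h_i, h_j):
    """Matching cost of equation (5) for two normalised histograms."""
    denom = h_i + h_j
    m = denom > 0
    return 0.5 * float(np.sum((h_i[m] - h_j[m]) ** 2 / denom[m]))

def h_hc(A, B, hA, hB, radius=30.0):
    """Directed Hausdorff-context distance of equation (8)."""
    num = den = 0.0
    for i, a in enumerate(A):
        d = np.hypot(*(B - a).T)                 # distances from a to all of B
        cand = np.where(d <= radius)[0]          # neighbourhood N(a)
        if cand.size == 0:
            cand = np.arange(len(B))             # fall back to the whole set
        costs = np.array([chi2_cost(hA[i], hB[j]) for j in cand])
        k = int(np.argmin(costs))                # most similar context: b'
        num += d[cand[k]] * costs[k]
        den += d[cand[k]]
    return num / den if den > 0 else 0.0

def r1(A, B, hA, hB):
    """Confidence of equation (9), with H built from h_hc as in equation (6)."""
    return 1.0 - max(h_hc(A, B, hA, hB), h_hc(B, A, hB, hA))
```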


4 Threshold Selection

When we use either WTT or the shapes extracted from MTT to solve the problems of face authentication,

we use algorithms which rely either on threshold υ (see equation (1)) or threshold κ (see equation (3)). The

quality of the obtained result depends strongly on the values of these parameters. To choose the values of υ

and κ, we use reinforcement learning. A chess-playing computer program that uses the outcome of a game to improve its performance is an example of a reinforcement learning system. In this framework, we use the so-called REINFORCE algorithm described in [3, 26]. The specific algorithm we used has the following

form [34]: At time step t, after generating output y(t) (i.e. a value for either υ or κ), which produces

matching between the reference and the input training image x with confidence r(t), increment each weight

w_ij between the j-th input and the i-th output unit, by

∆w_ij(t) = α[(r(t) − r̄(t − 1))(y_i(t) − ȳ_i(t − 1))]x_j − δw_ij(t),   (10)

where α is a learning rate and δ a weight decay rate. r̄(t) is the weighted average of prior reinforcement values, r̄(t) ≡ γr̄(t − 1) + (1 − γ)r(t), with r̄(0) = 0, and ȳ_i(t) is an average of past values of y_i, defined as ȳ_i(t) = γȳ_i(t − 1) + (1 − γ)y_i(t). The input values x_j are only 8 randomly selected points of the MTT of

the reference image. Once they have been chosen for a particular image, they are fixed when training with

that image. If the algorithm does not converge for this particular set of points, 8 other points may be chosen

instead, and the process is repeated. The 8 output values y_i are the bits of the binary representation of the

value of the computed threshold. The algorithm we use to learn the best values of υ and κ is:

1. Initialise the weights w_ij randomly, by assigning them values in the range [−1, 1].

2. Initialise the matching confidences r_k to 0 (i.e. r_k = 0, ∀k).

3. For each image k in the tuning set do

(a) Input 8 randomly picked values of its MTT.

(b) Update parameters υ and κ for STT and WTT, and use them on the MTT:

• For STT:

Segment MTT of image k with current segmentation parameter υ, and obtain STT

• For WTT:

Compute weight matrix W (φ, p) with current parameter κ,


(c) Compute matching measures for STT and WTT by comparing with the reference transform,

using either equation (9) or (4) respectively.

(d) Update each weight w_ij using r_k as the reinforcement parameter for the RL algorithm, using

equation (10).

4. Find the minimum matching confidence ζ = min_k r_k.

5. Repeat step 3 until the number of iterations has reached a certain pre-specified number, or ζ ≥ τ_r.

Here r_k is either the r_1 value (for STT) or the r_2 value (for WTT) for image k. The minimum matching confidence ζ is used to terminate the algorithm, ensuring that all tuning images have been matched with the corresponding reference image with a certain minimum confidence τ_r. The parameter values used in all our experiments were α = 0.9, δ = 0.01, γ = 0.9 and τ_r = 0.92.
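The following compact sketch illustrates one possible reading of this loop (it is not the authors' code): a single linear layer maps the 8 fixed MTT samples to 8 stochastic output bits, the bits encode the threshold, and the weights are updated by equation (10). The Bernoulli-logistic output units are our assumption, in the spirit of the algorithm family in [34], and confidence() stands for a hypothetical routine returning r_1 or r_2 for a given threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, delta, gamma = 0.9, 0.01, 0.9         # learning, decay, averaging rates

def learn_threshold(x, confidence, iters=1000):
    """x: the 8 fixed MTT samples; confidence(th): returns r for threshold th."""
    w = rng.uniform(-1.0, 1.0, size=(8, 8))  # step 1: random weights in [-1, 1]
    r_bar, y_bar = 0.0, np.zeros(8)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-w @ x))     # output-unit firing probabilities
        y = (rng.random(8) < p).astype(float)
        th = int(y @ (2 ** np.arange(8)))    # 8 bits -> threshold value
        r = confidence(th)
        # equation (10), using the running averages from the previous step
        w += alpha * (r - r_bar) * np.outer(y - y_bar, x) - delta * w
        r_bar = gamma * r_bar + (1 - gamma) * r
        y_bar = gamma * y_bar + (1 - gamma) * y
    return th
```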

5 Classifier Combination

After the matching scores of STT and WTT have been calculated, they are combined by a classifier combination method [14, 15]. Let us denote by r^(l)_com(x) the overall support class l receives from input x. We assume r^(l)_com(x) to be a linear combination of the supports r^(l)(x) the same class receives from the individual classifiers. We assign different weights for different classes to the individual classifiers:

r^(l)_com(x; α^(l)) = α^(l)ᵀ r^(l)(x),   (11)

where α^(l) = {α^(l)_1, α^(l)_2}ᵀ is the nonuniform weighting factor for class l and r^(l)(x) = {r^(l)_1(x), r^(l)_2(x)}ᵀ is the set of soft decision labels for class l produced by the two classifiers, namely STT and WTT, respectively.
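Equation (11) amounts to a per-class weighted sum of the two scores; a minimal illustration (with hypothetical score values) is:

```python
import numpy as np

def combine(r_stt, r_wtt, alpha):
    """Equation (11): alpha maps each class l to its weight pair (STT, WTT)."""
    return {l: float(np.dot(alpha[l], [r_stt, r_wtt])) for l in alpha}

# the lowest-EER weights reported below: more weight on WTT for the client class
supports = combine(r_stt=0.90, r_wtt=0.95,
                   alpha={1: (0.2, 0.8), 2: (0.85, 0.15)})
```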

6 Experiments and Results

Our method was evaluated on the XM2VTS database, a major, publicly available face database for biometric authentication systems, accessible at http://xm2vtsdb.ee.surrey.ac.uk/. The results of the state-of-the-art techniques from the face authentication contest [23] are also available for comparison in [30].

6.1 Evaluation Protocol and Performance Measures

The XM2VTS face database [24] contains 2360 face images of 295 subjects (8 images per subject) that

were divided into three sets: training set, evaluation set, and test set (see Fig. 6). The images were obtained


during four different sessions over a period of four months. Therefore, plenty of intra-person variation is present in the data, e.g., changes in hairstyle, presence or absence of glasses and beards, facial expressions, 3D pose, etc.

Figure 6: The partitioning of the XM2VTS database according to Lausanne protocol configuration I.

The training set is used to build client models, while the evaluation set is used to produce client and

impostor access scores, and finally the test set is used to simulate real authentication tests. The evaluation

set is also used for choosing the optimal weighting factors α(l). The database was randomly divided into 200

clients, 25 evaluation impostors, and 70 test impostors (see [21] for the subjects’ IDs of the three groups).

Let us call T the threshold of similarity with which we accept or reject a client. According to the evaluation

protocol we follow, we choose three different thresholds. The thresholds are set using the evaluation data

set to obtain certain false acceptance (FAE) and false rejection (FRE) rate values. Experiments are then

conducted using the test set and each one of these thresholds. The three thresholds are chosen so that they

make the two error rates over the evaluation set to be FAE=FRE, FRE=0, or FAE=0. For each threshold,

we compute the false acceptance (FA) and the false rejection (FR) rates over the test set. So we obtain six

scores which can be combined to form three Total Error Rates (TER):

TER_{FAE=FRE} = FA_{FAE=FRE} + FR_{FAE=FRE}
TER_{FRE=0} = FA_{FRE=0} + FR_{FRE=0}
TER_{FAE=0} = FA_{FAE=0} + FR_{FAE=0}.
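A sketch of this protocol logic, assuming score arrays where larger values indicate a more client-like match (an assumption, though the similarity measures r_1 and r_2 have this property):

```python
import numpy as np

def fa_fr(clients, impostors, T):
    """Error rates at threshold T (accept when score >= T)."""
    return float(np.mean(impostors >= T)), float(np.mean(clients < T))

def pick_threshold(clients_eval, impostors_eval, criterion="FAE=FRE"):
    if criterion == "FRE=0":
        return clients_eval.min()               # accept every evaluation client
    if criterion == "FAE=0":
        return impostors_eval.max() + 1e-9      # reject every evaluation impostor
    Ts = np.unique(np.concatenate([clients_eval, impostors_eval]))
    gaps = [abs(np.subtract(*fa_fr(clients_eval, impostors_eval, T))) for T in Ts]
    return Ts[int(np.argmin(gaps))]             # FAE = FRE point
```

Each TER is then simply the sum FA + FR measured on the test set at the chosen threshold.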

6.2 Experimental Results

Figure 7 shows an example of an MTT and the corresponding WTT and STT representations constructed from it, for a particular face. Only the values of the MTT which correspond to the white

points of the WTT are used to characterise the face.

The WTT representations of each image are used by the WTT classifier, and the STT representations

of each image are used by the STT classifier. During training we choose the optimal parameters, υ and κ,

for which the within-class variances of the STT and WTT are minimised while the between-class variance is maintained. We also select the trace functionals that maximise the between-class variance while keeping the within-class variance low.

Figure 7: The role of shape and texture. (a) Face image. (b) MTT. (c) WTT obtained with the optimal thresholding parameter κ; the white areas indicate the most significant tracing lines. (d) STT obtained with the optimal segmentation parameter υ.

It turned out that we may use all 22 trace functionals to build WTTs, while only functionals 1, 2, 7, 9, 11,

12, 13, 14, 20, 21, and 22 are useful for the construction of STTs.

Note that we have 3 training images. One of them is used as the reference image, and the other two as

tuning images. Each tuning image will produce its own optimal value for υ. Both values are kept and used

as thresholds for a test image, when it is to be compared with the corresponding client. The same policy is

followed for the two threshold values of κ computed from the two tuning images.

Suppose now that a person comes and their image X is captured. This person claims to be person C.

Immediately, image X is converted to its 44 WTT representations (22 functionals times 2 different values of

threshold κ) and its 22 STT representations (11 functionals times 2 values of threshold υ). Classifier WTT

measures the similarity of the 44 WTT representations of the image with the corresponding representations

for client C using as similarity measure the value of r2 computed by equation (4). It concludes that person

X is person C with confidence equal to the maximum value of all these similarity measures. Classifier STT

measures the similarity of the 22 STT representations of the image with the corresponding representations

for client C using as similarity measure the value of r1 computed by equation (9). It concludes that person

X is person C with confidence equal to the maximum value of all these similarity measures. The results of

these two classifiers are then combined by the classifier combination module.

The evaluation set was used to find the optimal weights for classifier combination in terms of minimising

the classification errors. When the optimal weights α(l) are identified, we use them on the test set. Figure

8(a) shows the error rates obtained by the combination of STT and WTT as α^(1) varies. α^(1)_1 and α^(2)_1 are the weighting factors for classes 1 (client) and 2 (impostor) for classifier 1 (STT), whereas α^(1)_2 and α^(2)_2 are the weighting factors for client and impostor, respectively, for classifier 2 (WTT). The lowest equal error rate was obtained when we place more weight on WTT than STT with respect to class 1, i.e. for α^(1) = {α^(1)_1, α^(1)_2} = {0.2, 0.8}, with a fixed weighting factor α^(2) = {α^(2)_1, α^(2)_2} = {0.85, 0.15}. However, when a high level of security is needed, the weight for class 1 must be set to α^(1)_1 = 0.75 and α^(1)_2 = 0.25, which gives the lowest FRE for FAE=0.


Figure 8: Error rates obtained from the evaluation and test sets. (a) Combinations of STT and WTT on the evaluation set for which the lowest error rates on FAE=FRE, and on FRE with FAE=0, are obtained: the error rates as α^(1)_1 is varied between 0 and 1, with α^(1)_2 = 1 − α^(1)_1. Note that α^(1) stands for the weighting factors for class 1 (client), while α^(2) stands for the weighting factors for class 2 (impostor); the weighting factors for the impostor class were fixed to α^(2) = {α^(2)_1, α^(2)_2} = {0.85, 0.15}. (b) Receiver operating characteristics computed on the test set for the proposed method against the methods reported in [23].


Table 2 shows the results on combinations of STT and WTT which were computed on the evaluation

and test sets. We fixed the weighting factors to α^(1) = {0.75, 0.25} and α^(2) = {0.85, 0.15}, which means that we place more weight on STT than WTT in this case. The false acceptance rate was fixed to 1.0%, 0.1%, 0.01%, 0.001% and 0.0%, and the corresponding false rejection rate was calculated. We obtained a very low false rejection (0.83% on the evaluation set and 0.0% on the test set) when the false acceptance was set to 1.0%, which means that the method is suitable for a high-security system.

Table 2: False rejection with a fixed false acceptance on the combination of STT and WTT, obtained from the evaluation and test sets. The weighting factors α^(1) = {0.75, 0.25} and α^(2) = {0.85, 0.15} were fixed for this calculation.

False Acceptance (%) | False Rejection (%), Evaluation Set | False Rejection (%), Test Set
1.0 | 0.83 | 0.0
0.1 | 3.42 | 1.25
0.01 | 7.33 | 2.75
0.001 | 8.08 | 6.25
0.0 | 8.5 | 6.75

Our method has been compared with other approaches using the same database and test protocol as


the one presented in [23]. The error rates corresponding to the evaluation and test sets are summarised in

Tables 3 and 4. Here the results of STT and WTT are combined with weighting factors α^(1) = {0.2, 0.8} and α^(2) = {0.85, 0.15}, which correspond to the case of the lowest equal error rate (FAE=FRE). A TER_{FAE=FRE} equal to 0.18% was obtained. Inspection of these tables shows that the proposed method, STT+WTT, ranks first with respect to TER. No other method has achieved such a good performance, particularly when FRE or FAE is zero. The weaker performance of the other methods may be explained as follows: they all make use of grey values in image space or features in a reduced space. Due to the similarity in the structure of face images, these features cannot achieve high discriminatory performance. In addition, the variation in lighting conditions makes reliance on grey-scale brightness values risky. In contrast, our method makes use of the robust shape and texture features derived from MTT. Its performance, therefore, depends on the trace functionals used, and these may be chosen to be relatively insensitive to the variations of an individual, while maintaining discriminability. Fig. 8(b) shows

a comparison between the ROC curves on the test set for the methods reported in [23] (score files have

been made available in [30]) and the proposed method. It is seen that the area under the ROC curve for the

proposed method is much smaller than for all other ones.

Table 3: Error rates for the evaluation set according to protocol configuration I [23] (the results with * are from [23]). The weighting factors α^(1) = {0.2, 0.8} and α^(2) = {0.85, 0.15} were used in the calculation for the combination of STT and WTT.

Experiment | FAE=FRE | FAE (FRE=0) | FRE (FAE=0)
AUT* | 8.1 | 48.4 | 19.0
IDIAP* | 8.0 | 54.9 | 16.0
SYDNEY* | 12.9 | 94.4 | 70.5
UniS-A-G-NC* | 5.7 | 96.4 | 26.7
UniS-S-G-NC* | 3.5 | 81.1 | 16.2
STT | 1.12 | 2.62 | 9.16
WTT | 9.55 | 70.59 | 88.83
STT+WTT | 0.319 | 0.41 | 18.83

7 Computational complexity of the algorithm

The computational complexity of the algorithm can be studied better by examining the training algorithm.

The algorithm has several steps, so we shall examine each one separately. The algorithm depends on the

following parameters:

• N_T: the number of trace functionals;


Table 4: Error rates for the test set according to protocol configuration I [23] (the results with * are from [23]). The weighting factors α^(1) = {0.2, 0.8} and α^(2) = {0.85, 0.15} were used in the calculation for the combination of STT and WTT.

Experiment | FA (FAE=FRE) | FR (FAE=FRE) | FA (FRE=0) | FR (FRE=0) | FA (FAE=0) | FR (FAE=0) | TER (FAE=FRE) | TER (FRE=0) | TER (FAE=0)
AUT* | 8.2 | 6.0 | 46.6 | 0.8 | 0.5 | 20.0 | 14.2 | 47.4 | 20.5
IDIAP* | 8.1 | 8.5 | 54.5 | 0.5 | 0.5 | 20.5 | 16.6 | 55 | 21
SYDNEY* | 13.6 | 12.3 | 94.0 | 0.0 | 0.0 | 81.3 | 25.9 | 94 | 81.3
UniS-A-G-NC* | 7.6 | 6.8 | 96.5 | 0.3 | 0.0 | 27.5 | 14.4 | 96.8 | 27.5
UniS-S-G-NC* | 3.5 | 2.8 | 81.2 | 0.0 | 0.0 | 14.5 | 6.3 | 81.2 | 14.5
STT | 0.97 | 0.5 | 3.3 | 0.0 | 0.0 | 6.5 | 1.47 | 3.3 | 6.5
WTT | 6.5 | 10.25 | 72.39 | 0.0 | 0.0 | 87.5 | 16.75 | 72.39 | 87.5
STT+WTT | 0.18 | 0.0 | 0.25 | 0.0 | 0.0 | 18.75 | 0.18 | 0.25 | 18.75

• n_p: the number of samples of parameter p;

• n_φ: the number of samples of parameter φ;

• n_t: the number of samples of parameter t;

• C_T: the number of operations per sample for each trace functional;

• C_E: the number of operations per sample for edge detection;

• C_sc: the number of operations per sample for shape context;

• n_E: the number of edge pixels;

MTT requires C_T n_t n_p n_φ operations for each trace functional. MTT is binarised by thresholding. This segmentation needs n_p n_φ operations for each trace functional. Edge detection that follows in order to produce the STT requires C_E n_p n_φ operations and results in n_E pixels. We then create the shape context for each point before proceeding to use Hausdorff context matching. This way the Hausdorff context matching requires n_E² log(C_sc) operations. The computational complexity of the reinforcement learning concerns

entirely the learning phase. The main operation of the feed forward step requires M(L + N) operations,

where L, M and N are the number of neurons in the input, hidden and output layers, respectively. The

backward step performs LMN operations and thus the total computational complexity of the reinforcement

learning is of the order of LMN for a single forward and backward pass. Thus, the overall computational

complexity of STT for each iteration of the training algorithm and for each one of the N_T = 22 functionals

we use, is


C_T n_t n_p n_φ + n_p n_φ + C_E n_p n_φ + n_E² log(C_sc) + LMN.   (12)

For WTT, additional operations are needed for the weight matrix computation and template matching.

The weight matrix computation requires n_p n_φ operations and template matching needs n_p n_φ + ln(n_p n_φ)

operations. The overall computational complexity of WTT for each iteration of the training algorithm is

C_T n_t n_p n_φ + n_p n_φ + [n_p n_φ + ln(n_p n_φ)] + LMN.   (13)

Typical values of these parameters are: n_t ≈ n_p ≈ 100, n_φ = 180, n_E ≈ 300, L = 8, M = 2, N = 8, C_T ≈ C_E ≈ 10, C_sc ≈ 100. The number of iterations needed for learning the threshold for STT varies from

1000 to 1500, while the number of iterations needed for learning the thresholds for WTT varies between

200 and 400. Typical sizes of the elliptical faces used had a small semi-axis in the range 95 to 110 and a

large semi-axis in the range 150 to 165 pixels. The whole training process, on a Pentium 4 machine at 1.6 GHz with 512 MB of RAM, using Visual C++ as the compiler, took about 100 hours, with no effort made to optimise the code. Testing a single image, however, took less than 30 seconds.
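With the typical values above, and taking the logarithm in the matching term to be natural (our reading), the first term of expression (12) dominates: C_T n_t n_p n_φ ≈ 10 × 100 × 100 × 180 = 1.8 × 10⁷ operations per functional per iteration, against n_p n_φ = 1.8 × 10⁴ for segmentation, C_E n_p n_φ ≈ 1.8 × 10⁵ for edge detection, n_E² log(C_sc) ≈ 300² × 4.6 ≈ 4.1 × 10⁵ for matching, and LMN = 128 for the learning update.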

8 Discussion and Conclusions

In this paper we introduced novel face representations using shape and texture characteristics derived from

MTT. Our primary contribution in this work is the general framework for constructing robust features

from the Trace transform, which in conjunction with the proposed Hausdorff context distance measure and

reinforcement learning, resulted in a face authentication system that performs best among all methods for which comparative data are available. A very low TER of 0.18% was obtained when we combined STT with

WTT by means of a classifier combination method. Extensive experimental results demonstrated that the

proposed method provides a new way for face feature representation and recognition.

The basic question is why our method works so much better than other methods. We believe that its

strength lies in the multi-representational approach we use, without really reducing the original information.

For example, the first functional we use is the integral of the image function along the tracing line. This

leads to the Radon transform of the image, which is known to contain exactly the same information as

the original representation, only viewed differently. Each of the alternative representations we use makes

explicit a different aspect of the data, and that helps the process of recognition. We used 22 functionals,

i.e. 22 different representations of the images. There is no reason why fewer or more functionals may not


be used. These particular functionals happened to be available from previous implementations of the Trace

transform, where they had been shown to be useful. The Trace transform is known not to be invariant to rotation,

translation and scaling (although it may be used to construct such invariant features [12]). So, our system

is not invariant to these transformations, but due to the redundancy it contains and the training phase it

uses, it can tolerate a reasonable level of pose and expression variation. Indeed, the face database we used

contains such variations. This is demonstrated in figure 9 where we show the faces of three different people

from the database, one shown with and without glasses, one face-on and slightly sideways looking, and one

smiling and frowning. In all cases the algorithm could be trained to pick up thresholds for creating shapes

which are remarkably stable between the two poses. The four columns of shapes shown correspond to four

different tracing functionals. One can see the stability of the extracted shape for the same person, and the

differentiation of this shape from the shapes produced for the other people.

Figure 9: Demonstration of the stability of the extracted shapes over glasses being worn or not, pose variation and expression variation. Each column of shapes corresponds to a different functional.

Acknowledgements

This work was partly supported by EPSRC grant GR/M88600. The authors would like to acknowledge the

suggestions of Dr Khamron Sunat on Computational Complexity.

References

[1] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition using

Class Specific Linear Projection, IEEE Transactions on Pattern Analysis and Machine Intelligence,

Vol. 19, No. 7, pp. 711-720, Jul. 1997.

[2] S. Belongie, J. Malik and J. Puzicha, Shape Matching and Object Recognition using Shape Context,

IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 4, pp. 509-522, Apr.

2002.

[3] B. Bhanu and J. Peng, Adaptive Integrated Image Segmentation and Object Recognition, IEEE Transactions on Systems, Man, and Cybernetics-Part C, Vol. 30, No. 4, pp. 427-441, Nov. 2000.

[4] V. Bruce and P. R. Green, Visual Perception: Physiology, Psychology and Ecology, Lawrence Erlbaum

Associates, London and Hove, 2nd Edition.

[5] R. Brunelli and T. Poggio, Face Recognition: Features versus Templates, IEEE Transactions on Pattern

Analysis and Machine Intelligence, Vol. 15, No. 10, pp. 1042-1052, Oct. 1993.



[6] J.-T. Chien and C.-C. Wu, Discriminant Waveletfaces and Nearest Feature Classifiers for Face Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 12, pp. 1644-1649,

Dec. 2002.

[7] I. Craw, N. Costen, T. Kato and S. Akamatsu, How Should We Represent Faces for Automatic Recognition?, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 8, pp. 725-736,

Aug. 1999.

[8] M. Dubuisson and A. K. Jain, A Modified Hausdorff Distance for Object Matching, In Proc. Int. Conf.

on Pattern Recognition, pp. 566-568, 1994.


[9] B. Duc, S. Fischer and J. Bigun, Face Authentication with Gabor Information on Deformable Graphs,

IEEE Transactions on Image Processing, Vol. 8, No. 4, pp. 504-516, Apr. 1999.

[10] Y. Gao and M. K.H. Leung, Face Recognition using Line Edge Map, IEEE Transactions on Pattern

Analysis and Machine Intelligence, Vol. 24, No. 6, pp. 764-779, Jun. 2002.

[11] D. P. Huttenlocher, G. Klanderman and W. Rucklidge, Comparing Images using the Hausdorff Distance, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, pp. 850-863,

Sep. 1993.

[12] A. Kadyrov and M. Petrou, The Trace Transform and Its Applications, IEEE Transactions on Pattern

Analysis and Machine Intelligence, Vol. 23, No. 8, pp. 811-828, Aug. 2001.

[13] A. Kadyrov and M. Petrou, Object Signatures Invariant to Affine Distortions Derived from the Trace Transform, Image and Vision Computing, Vol. 21, pp. 1135-1143, 2003.

[14] J. Kittler, M. Hatef, R. P.W. Duin and J. Matas, On Combining Classifiers, IEEE Transactions on

Pattern Analysis and Machine Intelligence, Vol. 20, No. 3, pp. 226-239, Mar. 1998.

[15] J. Kittler and S. A. Hojjatoleslami, A Weighted Combination of Classifiers Employing Shared and

Distinct Representations, In Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, pp.

924-929, 1998.

[16] C. L. Kotropoulos, A. Tefas and I. Pitas, Frontal Face Authentication using Discriminating Grids with

Morphological Feature Vectors, IEEE Transactions on Multimedia, Vol. 2, No. 1, pp. 14-26, Mar. 2000.

[17] M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. v.d. Malsburg, R. P. Wurtz and W. Konen,

Distortion Invariant Object Recognition in the Dynamic Link Architecture, IEEE Transactions on Computers, Vol. 42, No. 3, pp. 300-311, Mar. 1993.

[18] S. Lawrence, C. L. Giles, A. C. Tsoi and A. D. Back, Face Recognition: A Convolutional Neural-Network Approach, IEEE Transactions on Neural Networks, Vol. 8, No. 1, pp. 98-113, Jan. 1997.

[19] S.-H. Lin, S.-Y. Kung and L.-J. Lin, Face Recognition/Detection by Probabilistic Decision-Based

Neural Network, IEEE Transactions on Neural Networks, Vol. 8, No. 1, pp. 114-132, Jan. 1997.


[20] C. Liu and H. Wechsler, A Shape- and Texture-Based Enhanced Fisher Classifier for Face Recognition,

IEEE Transactions on Image Processing, Vol. 10, No. 4, pp. 598-608, Apr. 2001.

[21] J. Luettin and G. Maitre, Evaluation Protocol for the Extended M2VTS Database (XM2VTSDB), In

IDIAP Communication 98-05, IDIAP, Martigny, Switzerland, Oct. 1998.

[22] A. M. Martinez and A. C. Kak, PCA versus LDA, IEEE Transactions on Pattern Analysis and Machine

Intelligence, Vol. 23, No. 2, pp. 228-233, Feb. 2001.

[23] J. Matas, M. Hamouz, K. Jonsson, J. Kittler, Y. Li, C. Kotropoulos, A. Tefas, I. Pitas, T. Tan, H. Yan,

F. Smeraldi, J. Bigun, N. Capdevielle, W. Gerstner, S. B. Yacoub, Y. Abdeljaoued and E. Mayoraz,

Comparison of Face Verification Results on the XM2VTS Database, In Proc. Int. Conf. on Pattern

Recognition, pp. 858-863, 2000.

[24] K. Messer, J. Matas, J. Kittler, J. Luettin and G. Maitre, XM2VTSDB: The Extended M2VTS Database,

in Proc. Int. Conf. Audio- and Video-Based Biometric Person Authentication, pp. 72-77, 1999.

[25] B. Moghaddam, Principal Manifolds and Probabilistic Subspaces for Visual Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 6, pp. 780-788, Jun. 2002.

[26] J. Peng and B. Bhanu, Closed-Loop Object Recognition using Reinforcement Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 2, pp. 139-154, Feb. 1998.

[27] M. Petrou and A. Kadyrov, Affine Invariant Features from the Trace Transform, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, pp. 30-44, 2004.

[28] P. J. Phillips, Support Vector Machines Applied to Face Recognition, in Proc. Advances in Neural Information Processing Systems II, pp. 803-809, 1999.

[29] S. Srisuk and W. Kurutach, A New Robust Face Detection in Color Images, in Proc. IEEE Int. Conf.

on Automatic Face and Gesture Recognition, Washington, D.C., USA, pp. 306-311, May 2002.

[30] Surrey Univ. XM2VTS Face Authentication Contest, http://www.ee.surrey.ac.uk/CVSSP/xm2vtsdb/results/face/verification_LP/, 2000.

[31] A. Tefas, C. Kotropoulos and I. Pitas, Using Support Vector Machines to Enhance the Performance of

Elastic Graph Matching for Frontal Face Authentication, IEEE Transactions on Pattern Analysis and

Machine Intelligence, Vol. 23, No. 7, pp. 735-746, Jul. 2001.


[32] M. Turk and A. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, Vol. 3, No.

1, pp. 71-86, 1991.

[33] O. de Vel and S. Aeberhard, Line-Based Face Recognition under Varying Pose, IEEE Transactions on

Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, pp. 1081-1088, Oct. 1999.

[34] R. J. Williams and J. Peng, Function Optimization using Connectionist Reinforcement Learning Algorithms, Connection Science, Vol. 3, No. 3, pp. 241-268, 1991.

[35] L. Wiskott, J.-M. Fellous, N. Kruger and C. v.d. Malsburg, Face Recognition by Elastic Bunch Graph

Matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 775-779, Jul. 1997.

[36] J. Zhang, Y. Yan and M. Lades, Face Recognition: Eigenface, Elastic Matching, and Neural Nets,

Proc. of IEEE, Vol. 85, No. 9, pp. 1423-1435, Sep. 1997.
