

Expert Systems with Applications 37 (2010) 2957–2962


Dual optimal multiband features for face recognition

Yee Wan Wong *, Kah Phooi Seng, Li-Minn Ang
The University of Nottingham Malaysia Campus, Faculty of Engineering, Jalan Broga, 43500 Semenyih, Selangor, Malaysia

Article info

Keywords: Face recognition; Multiband features; Wavelet packet transform; Illumination variation; Adaptive fusion; Neural network

doi:10.1016/j.eswa.2009.09.039

* Corresponding author. E-mail address: [email protected] (Y.W. Wong).

Abstract

Illumination and expression variations degrade the performance of a face recognition system. In this paper, a novel dual optimal multiband features method for face recognition is presented. This method aims to increase the robustness of the face recognition system to both illumination and expression variations. The wavelet packet transform decomposes the image into frequency subbands, and the multiband feature fusion technique is incorporated to select optimal multiband feature sets that are invariant to illumination and expression variation separately. Parallel radial basis function neural networks are employed to classify the two feature sets. The scores they generate are then combined and processed by an adaptive fusion mechanism, which estimates the level of illumination variation in the input image and assigns weights to the scores accordingly. Experiments based on the Yale, YaleB, AR and ORL databases show that the proposed method outperforms the other algorithms tested.


1. Introduction

A face recognition system is an automatic system that identifies a person's identity using human facial characteristics. It has received significant attention because of its wide range of applications (Chellappa, Wilson, & Sirohey, 1995). However, uncontrolled factors such as illumination variation, facial expression, facial occlusion, pose variation and aging impose challenges on face recognition systems. Methods have been proposed to solve these problems, but most of the existing methods consider only a single problem. For example, many methods have been proposed to solve the illumination problem (Ekenel & Sankur, 2005; Shin, Lee, & Kim, 2008; Zhang et al., 2009) without addressing the facial expression problem. This is because illumination variation affects the low-frequency components, or global appearance, of a face image (Adini, Moses, & Ullman, 1997), whereas facial expression variation affects the high-frequency components (Naster & Ayache, 1996). Hence, compensating for one kind of variation has an adverse effect on the other.

Recently, approaches that address both the illumination and the facial expression variation problems have been proposed. Jadhav and Holambe (2008) presented a face recognition system based on a combination of the Radon and wavelet transforms, which is invariant to illumination and facial expression variations. The DC component of the low-frequency subband was removed when the algorithm's performance under illumination variation was tested. They showed that their system achieved high recognition accuracy under facial expression variation and illumination variation separately. However, the system's performance may degrade when it is tested against a combination of variations, due to the removal of the DC component. Xie and Lam (2007) proposed an elastic shape-texture matching (ESTM) method which uses the edge map, Gabor wavelets (GW) (Chui, 1992) and angle information (Gonzalez & Wood, 1993) to represent a face image. The Hausdorff distance (Huttenlocher, Klanderman, & Rucklidge, 1993) was used to compute the similarity of two face images. The experimental results showed high robustness of this algorithm under different conditions. One flaw of this algorithm is that the weights of the three distance measures, which affect its recognition performance, have to be set manually.

In this paper, a novel dual optimal multiband feature (DOMF) method for face recognition is presented. The aims of the proposed DOMF are: (1) to extract the optimal sets of subbands that are invariant to facial expression and illumination, and (2) to avoid the adverse effect described above by introducing an adaptive fusion method to combine the optimal feature sets. Fig. 1.1 shows the block diagram of the proposed DOMF face recognition system. The wavelet packet transform (WPT) (Primer, 1998) decomposes the image into frequency subbands to represent the facial features of the face image. The multiband feature fusion technique which we proposed in an earlier paper (Wong, Seng, & Ang, 2009) is incorporated to search for subbands that are invariant to illumination and facial expression variation separately. The optimal multiband features that were found to be invariant to illumination in Wong et al. (2009) are used here and named the optimal multiband feature for illumination (OMF_I). Subbands that are invariant to facial expression are found by the same technique, and this subband set is named the optimal multiband feature for expression


[Fig. 1.1. Block diagram of the proposed DOMF for face recognition system. Blocks: Wavelet Packet Transform; Multiband Feature Fusion Method (Feature Selection) yielding OMF_I and OMF_E; two RBF Neural Networks producing Score1 and Score2; Illumination Variation Estimation supplying the illumination variation factor to the Adaptive Weight and Adaptive Fusion stages; Output.]


(OMF_E). Parallel radial basis function (RBF) neural networks (Ranganath & Arun, 1997) are used to classify the OMF_I and OMF_E. The decision scores are linearly combined through a set of fusion weights. In this method, the weights are determined by the illumination variation estimator, which assigns an illumination variation factor based on the illumination variation level of the input image. For example, the weight assigned to the score of the OMF_I is higher than that of the OMF_E if the input image exhibits high illumination variation. The DOMF can therefore reduce the effects of expression and illumination and achieve good recognition performance under different variations. Experimental results based on different databases show that DOMF outperforms Radon and wavelet, ESTM, principal component analysis (PCA) (Turk & Pentland, 1991), wavelet (Primer, 1998) and linear discriminant analysis (LDA) (Chen, Liao, Ko, Lin, & Yu, 2000) under illumination and facial expression variation conditions.
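To make the classification stage concrete, the following is a minimal sketch of an RBF network classifier in Python/NumPy. It is not the authors' exact network (the paper only cites Ranganath and Arun (1997)); the random-subset choice of centers, the spread heuristic and the least-squares output layer are all assumptions made for illustration.

```python
import numpy as np

def train_rbf(X, y, n_centers=40, seed=0):
    """Train a minimal RBF network: random-subset centers, Gaussian
    activations, least-squares output weights on one-hot targets."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_centers, len(X)), replace=False)
    centers = X[idx]
    # Spread heuristic (an assumption): mean pairwise distance of centers.
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    sigma = d.mean() + 1e-8
    phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    targets = np.eye(int(y.max()) + 1)[y]            # one-hot class labels
    weights, *_ = np.linalg.lstsq(phi, targets, rcond=None)
    return centers, sigma, weights

def rbf_scores(X, centers, sigma, weights):
    """Per-class scores for each row of X; argmax gives the identity."""
    phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    return phi @ weights
```

Two such networks, one fed with OMF_I features and one with OMF_E features, would yield the two score vectors that the adaptive fusion stage combines.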

This paper is organized as follows. Section 2 describes the multiband feature fusion method, which is used to search for features that are invariant to illumination and facial expression variations. The classification and fusion method, which incorporates the illumination variation estimation and adaptive fusion, is discussed in Section 3. Experimental results are given in Section 4, which compare the performance of our proposed method with other face recognition algorithms on the Yale database (Yale University, http://cvc.yale.edu/projects/yalefaces/yalefaces.html), the YaleB database (Georghiades, Belhumeur, & Jacobs, 2001), the AR database (Martinez & Benavente, 1998) and the ORL database (The ORL in Cambridge, UK, http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html). Finally, concluding remarks are given in Section 5.

2. Multiband feature fusion technique

We first locate the facial features that are invariant to facial expression and illumination variation. Naster and Ayache (1996) and Naster, Moghaddam, and Pentland (1997) found that changes in illumination affect the low-frequency spectrum, which indicates that high-frequency components are invariant to illumination. On the other hand, facial expression variation affects only the high-frequency spectrum. This means that compensation for one variation may have an adverse effect on the other. To avoid this, we proposed a multiband feature fusion technique to search for the frequency subbands that are invariant to illumination variation and facial expression variation separately. The wavelet decomposition tree used in this technique is shown in Fig. 2.1. It is important to note that, in the multiband feature fusion technique, not only LL but also LH, HL and HH are decomposed further at each level.
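For readers who want to reproduce the decomposition, here is a minimal sketch using the PyWavelets package (an assumption; the paper does not name an implementation). A full wavelet packet decomposition recursively splits every subband, matching the tree of Fig. 2.1. PyWavelets labels subbands with the path letters a/h/v/d rather than the paper's A/H/V/D, and the mapping of concatenated labels such as HALL or AALH onto path strings is our reading, not the paper's.

```python
import numpy as np
import pywt  # PyWavelets, assumed implementation

# Stand-in for a 32 x 32 grayscale face image.
image = np.random.rand(32, 32)

# Full 2-D wavelet packet decomposition to 3 levels: unlike the plain
# DWT, the LH, HL and HH subbands are decomposed again at each level.
wp = pywt.WaveletPacket2D(data=image, wavelet='db1',
                          mode='symmetric', maxlevel=3)

ll = wp['a'].data      # level-1 approximation (LL)
all_ = wp['aa'].data   # level-2 approximation of LL (ALL)

# All 4^3 = 64 level-3 subbands, identified by their paths:
paths = [node.path for node in wp.get_level(3)]
```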

Recognition accuracy and a statistical analysis based on class separation are used to evaluate the recognition performance of the subbands. Recognition accuracy shows how well the system can match images of the same person, and class separation shows how well the system can distinguish images of different people (Feng, Yuen, & Dai, 2000). To test the class separation, N face images from a database, one image per person, are used. The face images are chosen randomly from the training and testing sets. The similarity matrix ρ(i, j) of size N × N records the similarity between image i and image j. For a good representation, ρ(i, j) should be close to one if i = j and close to zero if i ≠ j. The Average Unmatched Similarity Value (AUMSV) (Feng et al., 2000), defined as

$$\mathrm{AUMSV} = \frac{1}{N^{2} - N} \sum_{i=1}^{N} \sum_{j=1}^{N} \rho(i, j), \quad i \neq j \qquad (1)$$

is used to give a single numerical value for the similarity performance of a subband. This value shows how well the subband representation distinguishes images of different people; it ranges from 0 to 1, and the smaller the AUMSV value, the higher the discriminatory power.
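A minimal sketch of this computation in Python/NumPy follows. The paper does not state which similarity measure builds ρ(i, j), so cosine similarity is used here purely as an assumption.

```python
import numpy as np

def aumsv(rho):
    """Average Unmatched Similarity Value, Eq. (1): the mean of the
    off-diagonal entries of an N x N similarity matrix rho."""
    n = rho.shape[0]
    return (rho.sum() - np.trace(rho)) / (n * n - n)

def cosine_similarity_matrix(features):
    # Assumed similarity measure; one feature vector per person.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return f @ f.T

feats = np.random.rand(9, 256)   # e.g. N = 9 people, 256-D subband features
print(aumsv(cosine_similarity_matrix(feats)))
```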

Below are the steps proposed to select the optimal subbands; a code sketch of the procedure follows the list:

Step 1: Compute the AUMSV and recognition accuracy of each subband from levels 1 and 2.
Step 2: Subbands with an AUMSV lower than 0.5 and a recognition accuracy higher than half of the recognition rate of the original image are selected for further decomposition to level 3. These threshold values determine the computational complexity of the system: this step reduces complexity by avoiding the decomposition of all level-2 subbands to level 3.
Step 3: Further decompose the subbands that fulfill the selection criteria to level 3.
Step 4: Concatenate the two best performing level-3 subbands in terms of AUMSV.
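The following sketch mirrors these four steps. The helper names evaluate (returning a subband's AUMSV and recognition accuracy), decompose (returning its four children) and baseline_acc (the accuracy on the original image) are hypothetical, introduced only for illustration.

```python
def select_optimal_subbands(subbands_l1_l2, evaluate, decompose, baseline_acc):
    """Steps 1-4 of the subband selection procedure (hypothetical helpers)."""
    # Steps 1-2: score the level-1 and level-2 subbands and keep those
    # with AUMSV < 0.5 and accuracy > half of the original image's rate.
    kept = [s for s in subbands_l1_l2
            if evaluate(s)[0] < 0.5 and evaluate(s)[1] > baseline_acc / 2]
    # Step 3: decompose only the kept subbands to level 3.
    level3 = [child for s in kept for child in decompose(s)]
    # Step 4: concatenate the two level-3 subbands with the lowest AUMSV.
    return sorted(level3, key=lambda s: evaluate(s)[0])[:2]
```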

The optimal feature sets for facial expression and illumination are named OMF_E and OMF_I, respectively. The locations of the frequency subbands forming OMF_E and OMF_I are shown in Fig. 2.2a and b, respectively. The figure shows that the combination of LL and ALL forms the OMF_E, and the combination of HALL and AALH forms the OMF_I.


Fig. 2.1. Wavelet decomposition tree used in multiband feature fusion technique.

[Fig. 2.2. Location of the frequency subbands: (a) LL and ALL, which form OMF_E (shaded boxes); (b) HALL and AALH, which form OMF_I (shaded boxes).]

Fig. 3.1. Sample images of the (a) original image of the YaleB database, (b) logarithm transform of the original image, (c) illumination variation estimation by morphological opening.



3. Classification and fusion

In the previous section, the multiband feature fusion technique used to obtain the OMF_E and OMF_I was discussed. In this section, the illumination variation estimation is presented. This estimator is applied prior to the combination of the OMF_I and OMF_E to obtain the illumination variation factor, which influences the weight assignment of the system. To be robust to face images under different illumination variations, we first take the logarithm transform of the image, giving I′ = log(I), where I is the original image. One reason for transforming I into the logarithm domain is that the logarithm of the image reduces the effect of luminance. Another reason is to reduce the pixel values of the original image.

The illumination variation estimation based on morphological opening is then applied to I′. Morphological opening removes the facial features of the image, so the illumination variation of the image can be estimated. The level of illumination variation is described by the illumination variation factor k. Assuming that a face image containing illumination variation has one side brighter than the other, k can be determined as the difference between the mean pixel values of the left and right sides of the image. Fig. 3.1 depicts some sample images taken under different illumination variations in the YaleB database, together with the corresponding logarithm transforms and the illumination variation estimates obtained by morphological opening.
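A minimal sketch of this estimator follows, assuming SciPy's grey_opening for the morphological opening; the structuring-element size and the use of log1p to guard against log(0) are our choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import grey_opening

def illumination_variation_factor(image, se_size=8):
    """Estimate the illumination variation factor k: log transform,
    morphological opening to suppress facial features, then the
    difference of the left- and right-half mean intensities."""
    log_img = np.log1p(image.astype(np.float64))  # I' = log(I); log1p avoids log(0)
    background = grey_opening(log_img, size=(se_size, se_size))
    half = background.shape[1] // 2
    return abs(background[:, :half].mean() - background[:, half:].mean())
```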

After obtaining the illumination variation factor k, the weights of the system can be determined adaptively based on the k value of the testing image. As discussed earlier, the parallel RBF neural networks generate two sets of scores. The sum rule is incorporated in this system to combine the scores; it computes the final score as (Alexandre, Campilho, & Kamel, 2001)



$$s = \sum_{i=1}^{J} w_i s_i \qquad (2)$$

where J is the number of modalities (two in this case), the w_i are the fusion weights and the s_i are the scores obtained from the J modalities. The fusion weights are adaptive in the sense that the weights assigned to the modalities are based on the illumination variation factor of the testing image. The weight for each image is determined by the following definition:

$$w_i = \begin{cases} w, & k \geq T \\ 1 - w, & k < T \end{cases} \qquad (3)$$

Here T denotes the threshold of the illumination variation factor, determined as the maximum value of the illumination variation factors of the training images. The value of w is fixed and is obtained experimentally; the results are shown in the next section.
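Putting Eqs. (2) and (3) together, the fusion step reduces to a weighted sum of the two score vectors. The sketch below assumes the w branch of Eq. (3) is the weight of the OMF_I score, consistent with the earlier statement that OMF_I is weighted more heavily under strong illumination; w = 0.6 is the experimentally chosen value reported in Section 4.

```python
import numpy as np

def adaptive_fusion(score_omf_i, score_omf_e, k, T, w=0.6):
    """Sum-rule fusion (Eq. (2)) with the adaptive weights of Eq. (3)."""
    w_i = w if k >= T else 1.0 - w      # weight assigned to the OMF_I score
    s = w_i * np.asarray(score_omf_i) + (1.0 - w_i) * np.asarray(score_omf_e)
    return int(np.argmax(s))            # identity with the highest fused score

# T is the maximum illumination variation factor over the training set, e.g.
# T = max(illumination_variation_factor(img) for img in training_images)
```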

Table 1. Recognition rate (%) of the OMF_I on the YaleB and AR databases.

Subband   YaleB   AR
OMF_I     81.6    84.5

Table 2. AUMSV and correct recognition rate of the two best performing subbands and OMF_E in the ORL database.

Subband   AUMSV   Recognition rate (%)
LL        0.299   96.8
ALL       0.356   95.2
OMF_E     0.298   99.0

4. Experimental results

In this section, we evaluate the performance of the DOMF method on different databases: the Yale database, the YaleB database, the AR database and the ORL database. In the Yale database, the lighting comes from either the left or the right of the face, and the database also contains images with different facial expressions. The YaleB database is often used to investigate the effect of illumination variation in face recognition; it contains lighting from different angles. The AR database includes, in addition to lighting from the left and right, lighting from both sides of the face, as well as facial expression variations. The ORL database includes facial expression and perspective variations. Some sample images from these databases are shown in Fig. 4.1. The following sets of experiments are included in this section.

(1) Evaluating the performance of OMF_E and OMF_I.
(2) Parameter selection for the DOMF method.
(3) Evaluating the performance of the DOMF method on different databases.

Fig. 4.1. Some cropped images from (a) Yale database, (b) YaleB database, (c) AR database, (d) ORL database.

4.1. OMF_I and OMF_E under different illumination and facial expression variations

The first experiment shows the recognition performance of OMF_E under different facial expression variations. The multiband feature fusion technique was used to locate the OMF_E, which is invariant to facial expression variations. The nearest neighbor classifier was used for classification in this experiment only. In Wong et al. (2009), we found that the combination of HALL (a high-frequency subband) and AALH (a mid-frequency subband) forms the OMF_I. Hence, in the following experiments, OMF_I refers to this subband combination. The recognition performance of the OMF_I on the YaleB and AR databases is shown in Table 1.

In this experiment, the ORL and AR databases were used to locate the OMF_E. Both databases were divided into sub-classes that contain only facial expression variation. From the ORL database, 9 subjects were chosen; 18 sample images with normal facial expression were used for training and 63 sample images with different facial expressions were used for testing. From the AR database, 100 subjects were used; 200 sample images with normal facial expression were used for training and 200 sample images with facial expression variation were used for testing. All images were scaled to 32 × 32 pixels. To test the AUMSV, N = 9 for the ORL database and N = 100 for the AR database.

The two best performing subbands in the ORL and AR databases are shown in Tables 2 and 3, respectively. The LL subband achieved the lowest AUMSV and the highest recognition rate in both databases.



Table 3. AUMSV and correct recognition rate of the two best performing subbands and OMF_E in the AR database.

Subband   AUMSV   Recognition rate (%)
LL        0.379   79.5
ALL       0.383   76.5
OMF_E     0.371   81.5

Table 5. Face recognition rate (%) of OMF_I, OMF_E and DOMF on different databases.

Database   OMF_I   OMF_E   DOMF
Yale       76.3    83.0    92.6
AR         86.0    46.5    98.0
ORL        16.3    88.0    88.4
YaleB      85.5    53.8    91.0

Table 6. Face recognition rate (%) on different databases.

Database   PCA    Wavelet   LDA    ESTM (Xie & Lam, 2007)   Radon and wavelet   DOMF
Yale       68.0   79.3      81.5   88.7                     73.0                92.6
AR         80.0   64.3      59.0   97.7                     50.0                98.0
ORL        60.3   83.1      83.8   78.3                     74.4                88.4
YaleB      68.6   28.8      60.0   89.5                     37.7                91.0


The second best performing subband is ALL, the low-frequency subband from the second level of decomposition. Combining LL and ALL forms the OMF_E. The results show that OMF_E achieved recognition rates of 99% and 81.5% in the ORL and AR databases, respectively.


4.2. DOMF performance evaluation

After the OMF_E and OMF_I are obtained, they are combined by the DOMF method. In this experiment, the performance of DOMF with different weight values was tested on the different databases. The databases used were not divided into sub-classes. The number of distinct subjects, training images and testing images in each database is tabulated in Table 4. The face images in the different databases were captured under different conditions, such as facial expression variation and illumination variation. In each database, the training images were chosen to be frontal-view face images under even illumination with neutral facial expression; the remaining images formed the testing set. To further reduce the effect of uneven illumination, the logarithm transform was applied to the testing images of the YaleB database. The weight value was chosen based on the experimental results. The recognition rates on the different databases are shown in Fig. 4.2. The results show that a weight value of 0.6 achieved the highest recognition rate on all four databases. Hence, 0.6 was applied as the weight value in the proposed system.

After obtaining the weight, the performance of the OMF_E, OMF_I and DOMF methods was tested on the different databases. RBF neural networks were used for the classification. Table 5

Table 4. The databases used in the experiments.

                            Yale   AR    ORL   YaleB
Number of subjects          15     100   40    10
Number of training images   30     300   80    50
Number of testing images    135    300   320   600

Fig. 4.2. Weight values against recognition rate in the databases.

shows that the proposed DOMF method achieved a higher recognition rate than OMF_I and OMF_E, which demonstrates that the proposed DOMF method is robust to illumination and facial expression variations. As the results show, the adverse effect of compensating for one kind of variation was avoided.

The performance of DOMF was evaluated and compared with PCA, wavelet, LDA, ESTM and Radon with wavelet. For PCA, all the eigenfaces available for each test database were used. Wavelet refers to the discrete wavelet transform, where the low-frequency subband at the first level of decomposition was used. The LDA proposed by Chen et al. (2000) was used. The settings of ESTM as published in Xie and Lam (2007) were used in this experiment. For the Radon with wavelet technique, 180 Radon projections and a three-level Daubechies wavelet transform were used. The relative performances of the different algorithms are shown in Table 6. The DOMF achieved recognition rates of 92.6%, 98%, 88.4% and 91% on the Yale, AR, ORL and YaleB databases, respectively. From Table 6, we see that DOMF outperformed all the other algorithms tested in terms of recognition rate on the different databases. The results also show that the DOMF achieved a high recognition rate on the ORL database, which contains faces rotated out of the image plane.

5. Conclusion

In this paper, a novel dual optimal multiband feature (DOMF) method for face recognition was presented. This method aims to increase the robustness of a face recognition system to illumination and facial expression variations. In our approach, the multiband feature fusion technique was incorporated to search for subbands that are invariant to illumination and facial expression variation separately. The optimal multiband features found to be invariant to illumination variation and facial expression variation, namely OMF_I and OMF_E, respectively, were combined by the adaptive fusion method. Parallel RBF neural networks were employed for the classification of the two sets of features. The adverse effect of compensating for one kind of variation was avoided by estimating the level of illumination variation of the input image and assigning the weights to the modalities accordingly. Experimental results showed that OMF_I and OMF_E achieved high recognition rates under illumination and facial expression variation, respectively. The recognition performance of OMF_E, OMF_I and DOMF was then compared; the results showed that DOMF achieved better performance than OMF_E and OMF_I. The paper also compared the recognition performance of DOMF with other face recognition algorithms on different databases. The experimental results showed that DOMF outperformed the other algorithms tested and achieved consistent and promising performance under different conditions.

References

Adini, Y., Moses, Y., & Ullman, S. (1997). Face recognition: The problem of compensating for changes in illumination direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(11), 1067–1079.
Alexandre, L. A., Campilho, A. C., & Kamel, M. (2001). On combining classifiers using sum and products rules. Pattern Recognition Letters, 22, 1283–1289.
Chellappa, R., Wilson, C. L., & Sirohey, S. (1995). Human and machine recognition of faces: A survey. Proceedings of the IEEE, 83, 705–740.
Chen, L. F., Liao, H.-Y. M., Ko, M.-T., Lin, J.-C., & Yu, G.-J. (2000). A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognition, 33, 1713–1726.
Chui, C. K. (1992). An introduction to wavelets. Boston: Academic Press.
Ekenel, H. K., & Sankur, B. (2005). Multiresolution face recognition. Image and Vision Computing, 23, 469–477.
Feng, G. C., Yuen, P. C., & Dai, D. Q. (2000). Human face recognition using PCA on wavelet subband. Journal of Electronic Imaging, 9, 226–233.
Georghiades, A. S., Belhumeur, P. N., & Jacobs, D. W. (2001). From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6), 630–660.
Gonzalez, R. C., & Wood, R. E. (1993). Digital image processing. Reading, MA: Addison-Wesley.
Huttenlocher, D. P., Klanderman, G. A., & Rucklidge, W. J. (1993). Comparing images using the Hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9), 850–863.
Jadhav, D. V., & Holambe, R. S. (2008). Feature extraction using Radon and wavelet transforms with application to face recognition. Neurocomputing, 1–9.
Martinez, A. M., & Benavente, R. (1998). The AR face database. CVC Tech. Report, 24.
Naster, C., & Ayache, N. (1996). Frequency-based non-rigid motion analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, 1067–1079.
Naster, C., Moghaddam, B., & Pentland, A. (1997). Flexible images: Matching and recognition using learned deformations. Computer Vision and Image Understanding, 65(2), 179–191.
Primer, A. (1998). Introduction to wavelet and wavelet transform. Prentice-Hall.
Ranganath, S., & Arun, K. (1997). Face recognition using transform features and neural networks. Pattern Recognition, 10, 1615–1622.
Shin, D., Lee, H.-S., & Kim, D. (2008). Illumination-robust face recognition using ridge regressive bilinear models. Pattern Recognition Letters, 29, 49–58.
Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71–86.
Wong, Y. W., Seng, K. P., & Ang, L.-M. (2009). The audio-visual authentication system over internet protocol. In 2009 IAENG international conference on imaging engineering, 36(2), 167–174.
Xie, X., & Lam, K.-M. (2007). Elastic shape-texture matching for human face recognition. Pattern Recognition, 41, 398–405.
The ORL in Cambridge, UK. <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>.
Yale University. <http://www.cvc.yale.edu/projects/yalefaces/yalefaces.html>.
Zhang, T., Fang, B., Yuan, Y., Tang, Y. Y., Shang, Z., Li, D., et al. (2009). Multiscale facial structure representation for face recognition under varying illumination. Pattern Recognition, 42, 251–258.