
Neurocomputing 70 (2007) 849–858

www.elsevier.com/locate/neucom

Fingerprint matching based on weighting method and the SVM

Jia Jia, Lianhong Cai, Pinyan Lu, Xuhui Liu

Key Laboratory of Pervasive Computing, Tsinghua University, Ministry of Education, Beijing 100084, PR China

Available online 18 October 2006

Abstract

Fingerprint verification is an important biometric technology. In this paper, an improved fingerprint matching approach that uses both the weighting method and the support vector machine (SVM) is presented. A new weighting feature based on the distance between minutiae is introduced to supplement the minutiae information, which is particularly useful for fingerprint images of poor quality. Furthermore, the traditional minutiae-based matching task is studied as a classification task in the proposed approach by using SVM. To give an objective assessment of the approach, both international and domestic fingerprint verification competition databases have been used for the evaluation. Experimental results show substantial improvements in the accuracy and performance of fingerprint verification.

© 2006 Elsevier B.V. All rights reserved.

Keywords: Fingerprint verification; Minutiae-based matching; Weighting feature; SVM

1. Introduction

Fingerprint authentication is one of the most important biometric technologies [21]. A fingerprint is the pattern of ridges and valleys (furrows) on the surface of the finger. In an automatic fingerprint verification system (AFVS), the characteristic features obtained from the test fingerprint are matched against those from a template fingerprint. As the fingerprint of a person is unique and immutable, AFVSs can be widely used in both anti-criminal and civilian applications where precision is important. Therefore, accuracy and performance improvements are the key points in current AFVS research.

The uniqueness of a fingerprint can be determined by the global pattern of ridges and valleys, and by the local pattern of bifurcations and endings, which are called minutiae (Fig. 1). These two types of minutiae are considered by the Federal Bureau of Investigation for identification purposes [6]. The minutiae are extracted from the thinned image obtained from fingerprint preprocessing [10,11,13]. Usually, the similarity between two fingerprints is determined by computing the total number of matching minutiae, and the corresponding process is called minutiae-based matching [13].


However, general minutiae-based matching algorithms (GMMAs) in AFVS only make use of minutiae localizations (positions and orientations).

Our main work focuses on the minutiae-based matching scheme. We present a fingerprint matching approach that uses not only the minutiae localizations, but also a weight feature, which is the distance between a minutia and its nearest neighbor minutia. Considering that the matching process can be regarded as a two-class classification problem (two fingerprint images are either matched or not), we use the extracted minutiae positions, orientations and weights as features, define a vector standing for the similarity of two fingerprints, and choose the support vector machine (SVM) as the classifier. The proposed approach is motivated by the following observations:

(1) The minutiae information in fingerprint images may not be discriminative because of differing sensors and skin conditions. Most sensors, particularly capacitive sensors, capture only a small area of the fingertip, which means some minutiae information outside that area is missing. Furthermore, in practice a significant percentage of fingerprint images are of poor quality due to variations in skin conditions, like postnatal marks or occupational marks, and in impression conditions. This may lead to a large number of errors in minutiae positions and orientations, which may cause problems in the next matching stage.

Fig. 1. Examples of fingerprint minutiae. (a) A ridge ending. (b) A ridge bifurcation.

Fig. 2. A flowchart showing different phases of the minutiae-based fingerprint matching approach. The highlighted modules show the area of our work.

In such cases, the weight based on the distance between two minutiae is not only an estimate of the fingerprint structure, but also a supplement to the minutiae information.

(2) After getting the total number t of matching minutiae, a judgment must be made: do these two images match? The normal method is to compare t with a certain threshold l: if t ≥ l, the two images match; otherwise, they do not. That means the value of l is critical in the decision-making process. In order to reduce the influence of l, a machine learning technique can be used to determine the threshold for different databases. In addition, SVM is a powerful classification method that can properly label matching results.

In summary, we present a fingerprint matching scheme that uses both minutiae localizations and estimated weights as features, transforming the matching problem into a classification problem and using SVM to solve it. Experiments with both international and domestic fingerprint verification competition data show substantial improvements in the accuracy and performance of fingerprint verification.

The rest of this paper is organized as follows: in the next section we introduce important background information. In Section 3 we give an analysis of GMMA. Section 4 outlines the problems of GMMA and the proposed solutions. In Section 5, our approach based on the weighting method and SVM is presented in detail. Before concluding with discussions, experimental results and analysis are given in Section 6.

2. Related works

Fingerprint matching techniques may be broadly classified as being either minutiae-based or image-based [2] (for a good survey see [18]). Minutiae-based approaches first extract the minutiae from the fingerprint images; the matching between two fingerprints is then made using the two sets of minutiae localizations. Image-based approaches usually extract the features directly from the raw image, since a gray-level fingerprint image is available; the matching decision is then made using these features. AFVSs are usually based on minutiae matching [9,20,23,33]. A minutiae-based fingerprint matching approach involves different stages (see Fig. 2 for an illustration).

Our main work focuses on the minutiae extraction and result decision stages.

Most fingerprint minutiae extraction methods are thinning-based: a skeletonization process converts each ridge contour to one pixel wide, and the minutiae points are detected by tracing the thin ridge contours. When the trace stops, a ridge ending point is marked; bifurcation points are those with more than two neighbors [16,27,28]. In practice, thinning methods have been found to be sensitive to noise, and the skeleton structure does not match up with the intuitive expectation.

Several approaches to automatic minutiae extraction have been proposed in the literature using different types of enhancement. O'Gorman and Nickerson [22] present a technique for enhancement based on convolution of the image with a filter oriented according to the dominant ridge direction. Sherlock et al. [26] and Lee and Wang [15] define a technique for fingerprint enhancement and binarization that performs frequency-domain filtering through position-dependent filters [17]. The most recent works, reported in Ref. [10], are based on the use of Gabor filters: Lee and Wang [15] proposed a Gabor-filter-based method for fingerprint recognition, and Jain et al. [10] present a fast fingerprint enhancement algorithm using Gabor filters. Although the third method employs an isotropic constructing element and as a result keeps the original shape of the fingerprint, the impulsive noise cannot be completely eliminated. Moreover, the time complexity of these three methods is at least O(N²d²) for an image with N × N pixel entries and a filter whose radius is d, which is time consuming. In this paper, we first propose a simple, linear-time method, which employs generalized morphological operators (GMO) [1] based on the distance transform and the integral image [31], to eliminate the impulsive noise.

In the result decision stage, most minutiae-based fingerprint matching approaches use estimation mechanisms to determine the threshold. In addition to these estimation mechanisms, a great deal of literature has presented ingenious technologies related to the use of machine learning methods for the final decision. From a practical point of view, fingerprint verification has also been studied as a two-class classification problem using a number of machine learning paradigms, for example neural networks, decision trees and SVMs [3,7,8,25,30]. For instance, the approach presented in [12] combines the features extracted by FingerCode with a Parzen Window Classifier (PWC) [5]. These studies have shown performance gains with trained classifiers, and favored SVMs over neural networks and decision trees.

Fig. 3. Examples of fake minutiae. (a) The structures of spacing, bridge and pore. (b) Agglomerate fake minutiae points marked by panes.

3. General minutiae-based matching algorithm

All the GMMAs [11,13,32] can generally be divided into the following stages: (1) extracting minutiae, (2) generating the transformation parameters that relate the test image and the template image, (3) aligning the two images under these parameters to get the total number of matching minutiae and (4) determining the final result according to the result of stage (3).

Extracting minutiae: Most feature extraction methods are based on thinned images. The minutiae are detected by tracing the ridge contours. The features include each minutia's location coordinate (x, y) and the orientation of the ridge on which the minutia is detected [24].

Generating transformation parameters: To ensure that the common regions overlap, the two images need to be aligned first. This is done by determining the transformation parameters (t_x, t_y, r, θ), where t_x and t_y indicate the translations along the x-axis and y-axis, r indicates the flex (scaling) coefficient and θ indicates the rotation angle. The parameters (t_x, t_y, r, θ) are computed from the coordinates of two pairs of minutiae (usually delta points and core points) from both images [32]. These two pairs of minutiae are called reference points.

Aligning the test image and the template image: Once the transformation parameters (t_x, t_y, r, θ) are obtained, the test image can be aligned. Let (x, y) represent the original coordinate; then the aligned coordinate (u, v) is obtained as

Aligning the test image and the template image: Once thetransformation parameters (tx; ty;r; y) are obtained, thetest image can be aligned. Let (x,y) represent the originalcoordinate, then the aligned coordinate (u,v) is obtained as

u

v

� �¼ r

cos y sin y

� sin y cos y

� �x

y

" #þ

tx

ty

" #. (1)

After the images are aligned, the total number of matching minutiae can be computed.
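As a concrete illustration of this alignment step, the following sketch (in C++, the implementation language reported in Section 6; the function and variable names are ours, not from the paper) applies Eq. (1) to a single minutia coordinate:

#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Eq. (1): [u v]^T = r * R(theta) * [x y]^T + [tx ty]^T
Point alignPoint(const Point& p, double tx, double ty, double r, double theta) {
    Point q;
    q.x = r * ( std::cos(theta) * p.x + std::sin(theta) * p.y) + tx;
    q.y = r * (-std::sin(theta) * p.x + std::cos(theta) * p.y) + ty;
    return q;
}

int main() {
    Point p = {120.0, 85.0};                        // a minutia of the test image
    Point q = alignPoint(p, 10.0, -5.0, 1.0, 0.1);  // hypothetical (tx, ty, r, theta)
    std::printf("aligned to (%.2f, %.2f)\n", q.x, q.y);
    return 0;
}

The same transformation is applied to every minutia of the test image before the matching count is taken.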

Determining the results: Due to the structures of fingerprints themselves and the conditions of the sensors, some fingerprint images do not have delta points or core points [32]. When this occurs, every possible combination of two pairs of minutiae from the test image and the template image is used as reference points to obtain all of the potential transformation parameters (t_x, t_y, r, θ). Different reference points give different numbers of matching minutiae, and the maximum number is compared with a certain threshold l to decide whether the two images are matched. With this exhaustive method, the determination process needs O(n²m²) calculations, where n and m indicate the numbers of minutiae in the two images.

4. Problem statement and solution

From the analysis of GMMA, we find three noticeable problems: (1) fake minutiae have a negative effect on the result, (2) the determination process presented in Section 3 hurts the algorithm's performance and (3) an unsuitable threshold l will lead to an incorrect conclusion.

4.1. A new weighting method for solving the problem of fake minutiae

Fake minutiae usually come from structures like spacings, bridges and pores created during the thinning process (Fig. 3(a)). Through observation, we find a common property of these structures: the fake minutiae on them are usually much closer to each other than real ones (Fig. 3(b)). In other words, if the distances between a minutia and its neighbors are very small, the minutia in question may be a fake one. Therefore, to supplement the minutiae information, we define a weight feature w in addition to the minutiae localization.

Definition 1. A minutia's weight w is the distance between it and its nearest neighbor minutia.

The value of w is normalized into [0, 1]. The greater the value of a minutia's weight w, the greater the probability of it being a real one.
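A minimal sketch of how these weights might be computed is given below (C++; the brute-force nearest-neighbor search and the normalization by the largest nearest-neighbor distance are our assumptions, since the paper only states that w is the distance to the nearest neighbor, normalized into [0, 1]):

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };

// For each minutia, w = distance to its nearest neighbor, normalized into [0, 1].
std::vector<double> minutiaWeights(const std::vector<Pt>& m) {
    const std::size_t n = m.size();
    std::vector<double> w(n, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        double best = std::numeric_limits<double>::max();
        for (std::size_t j = 0; j < n; ++j) {
            if (i == j) continue;
            double d = std::hypot(m[i].x - m[j].x, m[i].y - m[j].y);
            best = std::min(best, d);
        }
        w[i] = (n > 1) ? best : 0.0;
    }
    // Normalize into [0, 1] by the largest nearest-neighbor distance (our assumption).
    double maxW = *std::max_element(w.begin(), w.end());
    if (maxW > 0.0)
        for (double& v : w) v /= maxW;
    return w;
}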

4.2. A new ranking strategy for improving the performance

As presented in Section 3, for different reference points there will be different transformation parameters and numbers of matching minutiae. Our experiments show that it takes about 30 s to get the maximum number of matching minutiae pairs using an exhaustion algorithm. To reduce the operating time, we present a ranking strategy that sorts the minutiae in descending order according to their weights w, and chooses only the top l = 20 minutiae as the reference points. This strategy can reveal most of the real minutiae. Experimental results show that the computing time for matching two images can be reduced to only about 0.5 s, which means the performance is significantly improved.

Fig. 4. Explanation for SVM. (a) For circles A and B, their projection onto either axis has superposition, but in two-dimensional space they are linearly separable, which means the line l can divide them. (b) An SVM is a hyperplane that separates the positive and negative examples with maximum margin. The examples closest to the hyperplane are called support vectors.

To further improve the performance, the least square method is used to obtain the transformation parameters (t_x, t_y, r, θ). Let P = {p_i} represent the minutiae sequence of the test image and Q = {q_i} represent the sequence of the template image:

$$P = \{(x_1, y_1), \ldots, (x_n, y_n)\}, \quad Q = \{(u_1, v_1), \ldots, (u_n, v_n)\}, \qquad (2)$$

where n represents the number of minutiae in an image. Let a = r cos θ, b = r sin θ, e = t_x, f = t_y; then Eq. (1) can be written as

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} a & b \\ -b & a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} e \\ f \end{bmatrix}. \qquad (3)$$

If $\mathrm{Trans}(x_i, y_i)^T = (u'_i, v'_i)^T$, then the square error ε can be given by

$$\varepsilon^2 = \sum_{i=1}^{n} \left( (u_i - u'_i)^2 + (v_i - v'_i)^2 \right) = \sum_{i=1}^{n} \left( (u_i - a x_i - b y_i - e)^2 + (v_i + b x_i - a y_i - f)^2 \right). \qquad (4)$$

Using the least square method, the transformation parameters a, b, e, f are

$$[a \;\; b \;\; e \;\; f]^T = A^{-1} C, \qquad (5)$$

where A and C are computed as

$$A = \begin{bmatrix} \sum (x_i^2 + y_i^2) & 0 & \sum x_i & \sum y_i \\ 0 & \sum (x_i^2 + y_i^2) & \sum y_i & -\sum x_i \\ \sum x_i & \sum y_i & n & 0 \\ \sum y_i & -\sum x_i & 0 & n \end{bmatrix}, \quad C = \begin{bmatrix} \sum (u_i x_i + v_i y_i) \\ \sum (u_i y_i - v_i x_i) \\ \sum u_i \\ \sum v_i \end{bmatrix}. \qquad (6)$$
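The closed-form solution of Eqs. (5) and (6) can be implemented directly; the sketch below (C++, with our own naming) accumulates A and C from the corresponding reference minutiae and solves the 4×4 system by Gauss–Jordan elimination, then recovers (t_x, t_y, r, θ) from a = r cos θ, b = r sin θ, e = t_x, f = t_y:

#include <array>
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Solve Eq. (5), [a b e f]^T = A^{-1} C, with A and C built as in Eq. (6).
// P holds the reference minutiae of the test image, Q the corresponding ones
// of the template image. Returns (tx, ty, r, theta).
std::array<double, 4> fitTransform(const std::vector<Pt>& P, const std::vector<Pt>& Q) {
    double Sxx = 0, Sx = 0, Sy = 0, C1 = 0, C2 = 0, Su = 0, Sv = 0;
    const double n = static_cast<double>(P.size());
    for (std::size_t i = 0; i < P.size(); ++i) {
        Sxx += P[i].x * P[i].x + P[i].y * P[i].y;
        Sx  += P[i].x;   Sy += P[i].y;
        C1  += Q[i].x * P[i].x + Q[i].y * P[i].y;   // sum(u_i x_i + v_i y_i)
        C2  += Q[i].x * P[i].y - Q[i].y * P[i].x;   // sum(u_i y_i - v_i x_i)
        Su  += Q[i].x;   Sv += Q[i].y;
    }
    double M[4][5] = {                              // augmented matrix [A | C]
        { Sxx, 0.0, Sx,  Sy,  C1 },
        { 0.0, Sxx, Sy, -Sx,  C2 },
        { Sx,  Sy,  n,   0.0, Su },
        { Sy, -Sx,  0.0, n,   Sv }
    };
    for (int col = 0; col < 4; ++col) {             // Gauss-Jordan with partial pivoting
        int piv = col;
        for (int row = col + 1; row < 4; ++row)
            if (std::fabs(M[row][col]) > std::fabs(M[piv][col])) piv = row;
        for (int k = 0; k < 5; ++k) std::swap(M[col][k], M[piv][k]);
        for (int row = 0; row < 4; ++row) {
            if (row == col) continue;
            double factor = M[row][col] / M[col][col];
            for (int k = col; k < 5; ++k) M[row][k] -= factor * M[col][k];
        }
    }
    double a = M[0][4] / M[0][0], b = M[1][4] / M[1][1];
    double e = M[2][4] / M[2][2], f = M[3][4] / M[3][3];
    return { e, f, std::hypot(a, b), std::atan2(b, a) };  // (tx, ty, r, theta)
}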

4.3. SVM classifier

GMMAs normally determine the final result by comparing a threshold l with the total number of matching minutiae t. That is a one-dimensional method, which means the value of l plays a key role in determining the final result. Instead of estimating a threshold l, we present a method that uses a hyperplane and a set of matching vectors, which stand for the similarity of fingerprints. That means we transform a problem that is hard to solve in one-dimensional space into a multidimensional one. This can be explained clearly by Fig. 4(a).

Therefore, we choose the SVM, which has shown outstanding classification performance in practice, as the classifier [4,34]. SVM is based on a solid theoretical foundation, structural risk minimization [29], and its simplest linear form is shown in Fig. 4(b). The large margin between positive and negative examples has been proven to lead to good generalization [29].

5. A matching scheme based on weighting method and SVM classifier

Before the matching steps, fingerprint preprocessing must be accomplished first. Here we use an enhancing algorithm based on estimated local ridge orientation and frequency, and filter the image using Gabor filters [10]. Our matching approach is outlined in the following sections.

5.1. Extracting the minutiae features

For each minutia, we define a 5-tuple (x, y, type, theta, w) to describe its features. The x and y indicate the minutia's coordinate. The value of type, 1 or 2, indicates whether the minutia is an ending or a bifurcation. Theta indicates the tangent angle of the ridge where the minutia is located, and w indicates the minutia's weight. The 5-tuple (x, y, type, theta, w) is obtained using the following steps (Fig. 5). Let Itest represent the test image, and Itemp represent the template image. For Itest and Itemp:

Step 1: Perform image normalization, and then estimate the local orientation and frequency. After that, a threshold image can be obtained by filtering the image [10].

Step 2: Perform ridge thinning, and then extract the coordinate (x, y) and orientation theta of each minutia.

Step 3: Compute the distance between each minutia and its nearest neighbor minutia to obtain each minutia's weight w.

Step 4: Sort the minutiae in descending order according to their weights w.

That means, for each fingerprint image, we get a minutiae sequence p_1, p_2, ..., p_n with descending weights.
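The 5-tuple and the sorting of Step 4 can be expressed compactly as follows (C++; the struct layout is our own reading of the 5-tuple, with w computed as sketched in Section 4.1):

#include <algorithm>
#include <vector>

// The 5-tuple (x, y, type, theta, w) describing one minutia.
struct Minutia {
    double x, y;    // coordinate
    int    type;    // 1 = ridge ending, 2 = bifurcation
    double theta;   // tangent angle of the ridge at the minutia
    double w;       // weight: normalized distance to the nearest neighbor (Step 3)
};

// Step 4: order the minutiae into the sequence p1, p2, ..., pn with descending weights.
void sortByWeightDescending(std::vector<Minutia>& seq) {
    std::sort(seq.begin(), seq.end(),
              [](const Minutia& a, const Minutia& b) { return a.w > b.w; });
}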

Fig. 5. Fingerprint image preprocessing and minutiae extraction. (a) Raw image. (b) Threshold image. (c) Thinned image with minutiae.

5.2. Matching the minutiae under the optimal transformation parameters

Let p_1, p_2, ..., p_n represent the minutiae sequence of Itest and q_1, q_2, ..., q_m represent the sequence of Itemp. We compute the total number of matching minutiae by the following steps:

Step 1: Choose the top l = 20 minutiae {p_i} (1 ≤ i ≤ l) and {q_j} (1 ≤ j ≤ l) from the two sequences as the reference points.

Step 2: For p_i, q_j (1 ≤ i, j ≤ l), if they have the same value of type, add (p_i, q_j) to set A.

For every two members (p_i1, q_j1), (p_i2, q_j2) ∈ A, do Steps 3–5.

Step 3: Compute the transformation parameters (t_x, t_y, r, θ) from (p_i1, q_j1) and (p_i2, q_j2) by Eqs. (5) and (6).

Step 4: Check (t_x, t_y, r, θ); if |r − 1| > 0.1 or θ ≥ π/3, skip Step 5. This selection ensures that the flex coefficient r approaches 1 and the rotation angle θ is less than π/3.

Step 5: For p_1, p_2, ..., p_n, compute their new coordinates and orientations according to Eq. (1), then match them with q_1, q_2, ..., q_m. Record the number of matching minutiae pairs.

After every two members (p_i1, q_j1), (p_i2, q_j2) ∈ A have been used in Steps 3–5, record the maximum number of matching minutiae pairs t and the corresponding transformation parameters (t'_x, t'_y, r', θ').

Algorithm: minutiae matching
Objective: compute the total number of matching minutiae between the test image and the template image.
Input:
    Itest: the test image.
    Itemp: the template image.
    {p_1, ..., p_n}: the minutiae sequence of Itest with descending weights.
    {q_1, ..., q_m}: the minutiae sequence of Itemp with descending weights.
Output:
    t: the maximum number of matching minutiae pairs.
Method:
{
    int l = 20, t = 0;
    // Steps 1-2: pair up reference points of the same type
    for (1 <= i <= l; 1 <= j <= l)
        if (type[p_i] == type[q_j])
            add (p_i, q_j) to set A;
    // Steps 3-5: try every combination of two reference pairs
    for (each (p_i1, q_j1), (p_i2, q_j2) in A)
    {
        compute (tx, ty, r, theta) by Eqs. (5) and (6);
        if (|r - 1| <= 0.1 && theta <= pi/3)
        {
            // align the test minutiae by Eq. (1)
            for (1 <= i <= n)
                compute (x_i, y_i, theta_i)_new by Eq. (1);
            match {p_i}_new with {q_1, ..., q_m};
            count = number of matching minutiae pairs;
            t = max(t, count);
        }
    }
    return t;
}

5.3. Determining the results using matching vector and SVM

We define a matching vector V(n, m, t, ave, err) to describe the similarity of Itest and Itemp. The vector V is obtained using the following steps:

Step 1: Record the minutiae number n of Itest and m of Itemp, and the maximum matching minutiae number t.

Step 2: For the matching minutiae pairs, let v_1, v_2, ..., v_t denote the weights of the minutiae in Itest and u_1, u_2, ..., u_t denote the weights of the corresponding minutiae in Itemp. Let ave represent an average of the weight values of these matching minutiae, calculated by

$$\mathrm{ave} = \sum_{i=1}^{t} \frac{2 u_i v_i}{u_i + v_i}. \qquad (7)$$
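In code, Eq. (7) is a single pass over the matched pairs (a sketch in C++; the guard against a zero denominator is our addition):

#include <vector>

// Eq. (7): ave = sum over the t matched pairs of 2*u_i*v_i / (u_i + v_i),
// where v_i and u_i are the weights of the matched minutiae in Itest and Itemp.
double matchedWeightScore(const std::vector<double>& u, const std::vector<double>& v) {
    double ave = 0.0;
    for (std::size_t i = 0; i < u.size() && i < v.size(); ++i) {
        double s = u[i] + v[i];
        if (s > 0.0) ave += 2.0 * u[i] * v[i] / s;   // skip degenerate zero-weight pairs
    }
    return ave;
}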

Step 3: Get a 100 pixel × 100 pixel subimage I_sub from the center of the threshold image of Itest. Transform I_sub by the parameters (t'_x, t'_y, r', θ') to I'_sub, and let err represent the number of pixels in I'_sub that have the same intensity as the corresponding pixel in Itemp. The value err describes an estimation of the matching error.

For a matching vector V(n, m, t, ave, err), we need to label it with "matching success" or "matching failure". The decision function of an SVM is

$$f(V) = \langle w \cdot V \rangle + b, \qquad (8)$$

where ⟨w · V⟩ is the dot product between w (the normal vector to the hyperplane) and V (the matching vector). The margin for an input vector V_i is y_i f(V_i), where y_i ∈ {−1, 1} is the class label for V_i. Seeking the maximum margin can be expressed as minimizing ⟨w · w⟩ subject to y_i(⟨w · V_i⟩ + b) ≥ 1 for all i. We allow but penalize examples falling on the wrong side of the hyperplane.

The flowchart of our approach is shown in Fig. 6.

Fig. 6. A flowchart showing different phases of our approach.
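Coming back to Eq. (8): once a linear SVM has been trained, labeling a matching vector reduces to evaluating f(V) and taking its sign, as in the short sketch below (C++; the weight vector and bias would in practice come from the trained model, not from the paper):

#include <array>
#include <numeric>

constexpr int DIM = 5;   // V = (n, m, t, ave, err)

// Eq. (8): f(V) = <w, V> + b, with w the normal vector of the hyperplane.
double decision(const std::array<double, DIM>& w, double b,
                const std::array<double, DIM>& V) {
    return std::inner_product(w.begin(), w.end(), V.begin(), b);
}

// +1 means "matching success", -1 means "matching failure".
int predict(const std::array<double, DIM>& w, double b,
            const std::array<double, DIM>& V) {
    return decision(w, b, V) >= 0.0 ? +1 : -1;
}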

Table 1
The number of fingerprint images for each data set

                    The source of the data sets   The number of different fingers   Total images
First database      BVC2004 db1                   40                                400
Second database     BVC2004 db2                   40                                400
Third database      BVC2004 db3                   40                                400
Fourth database     BVC2004 db4                   40                                400
Fifth database      FVC2002 db1                   230                               1840

6. Experiments and discussions

We conducted experiments with data from fingerprint verification competitions to demonstrate the advantages of our proposed approach to fingerprint verification.

6.1. Data sets

We have collected five data sets from FVC2002 (the Second International Fingerprint Verification Competition, http://bias.csr.unibo.it/fvc2002/) and BVC2004 (Biometrics Verification Competition 2004, http://www.sinobiometrics.com/chinese/conferences/sinobiometrics%2704.htm). In order to prove the influence of different image qualities on our matching approach, four subsets of BVC2004 were selected and named db1–db4. Three of these databases were acquired by various sensors, including low-cost and high-quality, optical and capacitive. The fourth database contains synthetically generated images. The fifth database db5, which is a subset of FVC2002, was chosen to prove the influence of the number of fingerprint images. For each data set, we show in Table 1 the number of different fingers and the total number of fingerprint images.

For each data set, we show in Table 2 the information of the sensors and the condition of the images.

Each fingerprint image allows a rotation angle that belongs to [−π/4, π/4] (compared with the vertical line). Every two images from one finger have an overlapping common region, but there may be no delta points or core points in some fingerprint images.

6.2. Experimental setup

We conducted three experiments. All the matching approaches in our experiments used the same image preprocessing algorithms [10]. All the experiments were done using 5-fold cross-validation. For db1–db4, 400 images were divided into five parts, each of which had 80 images. All the algorithms were run five times. Each time, four of the five parts were used as training sets, and the remaining part was used as the test set. The average verification results were reported over these five trials. For db5, 1840 images were divided into five parts, each of which had 368 images. All the algorithms were run five times. Each time, one of the five parts was used as the training set, and the other four parts were used as test sets. The average verification results were reported over these five trials.

Experiment 1: We compared our approach with the GMMA described in Section 3. In GMMA, every two minutiae pairs extracted from the test image and the template image were chosen as reference points to get the corresponding transformation parameters (t_x, t_y, r, θ) and the number of matching minutiae. The maximum number of matching minutiae pairs was compared with a certain threshold l to decide whether the two images are matched; here we simply set l = 13 manually.

Table 2
The information of sensors and images for each data set

                    Sensor type               Manufacturer     Image size   Resolution (dpi)
First database      Optical sensor            URU 4000         412 × 362    500
Second database     CMOS sensor               Fujitsu MBF200   256 × 300    500
Third database      Thermal sweeping sensor   Atmel            300 × 480    500
Fourth database     –                         Fingerpass       380 × 460    500
Fifth database      Optical sensor            TouchView II     388 × 374    500

Table 3
FNMR and FMR obtained from db1 to db4

                    FNMR, our approach (%)   FNMR, GMMA (%)   FMR
First database      2.03                     6.67             ≈0
Second database     12.78                    30.28            ≈0
Third database      7.78                     24.17            ≈0
Fourth database     5.28                     13.05            ≈0

Table 4
FNMR and FMR obtained from db5

FNMR, our approach (%)   FNMR, GMMA (%)   FMR, our approach (%)   Max match memory (K byte)
4.08                     13.84            0.1                     6028

Experiment 2: We compared our approach with our approach without the weighting method.

Experiment 3: We evaluated our approach's performance by choosing the top l (l = 10, 20, 30, n) minutiae from the minutiae sequence p_1, p_2, ..., p_n with descending weights.

Our matching approach was written in C++ and compiled with Microsoft Visual Studio 6.0 (VC++ 6.0). All the experiments were performed on the same computer, configured with a Pentium IV 1.0 GHz CPU and 256 MB DDR memory, running Microsoft Windows XP.

We used SVMlight (http://svmlight.joachims.org/) for the implementation of SVM [34], and used a linear kernel in the experiments.
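For reference, SVMlight reads plain-text training files with one example per line, "label feature:value ...". A small helper such as the one below (C++; the feature numbering and the absence of any scaling are our assumptions, as the paper only states that the matching vectors are fed to SVMlight with a linear kernel) can export the matching vectors:

#include <cstdio>

// Write one matching vector V(n, m, t, ave, err) as an SVMlight training line:
//   <label> 1:<n> 2:<m> 3:<t> 4:<ave> 5:<err>
// label is +1 for a genuine (same-finger) pair and -1 for an impostor pair.
void writeExample(std::FILE* out, int label, int n, int m, int t, double ave, double err) {
    std::fprintf(out, "%+d 1:%d 2:%d 3:%d 4:%g 5:%g\n", label, n, m, t, ave, err);
}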


6.3. Measures

The accuracy of the fingerprint verification algorithm was measured by the false non-match rate (FNMR: each sample in the subset A is matched against the remaining samples of the same finger), the false match rate (FMR: the first sample of each finger in the subset A is matched against the first sample of the remaining fingers in A) and the average time of matching two images in our experiments. For database 5, we also measure the maximum memory size of our approach.
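Given the counts produced by this protocol, the two rates are simple ratios; a minimal helper (C++, our own naming) is shown below:

// FNMR = genuine comparisons declared "non-match" / all genuine comparisons
// FMR  = impostor comparisons declared "match"     / all impostor comparisons
struct ErrorRates { double fnmr, fmr; };

ErrorRates computeRates(long genuineTotal, long genuineRejected,
                        long impostorTotal, long impostorAccepted) {
    ErrorRates r{0.0, 0.0};
    if (genuineTotal  > 0) r.fnmr = static_cast<double>(genuineRejected)  / genuineTotal;
    if (impostorTotal > 0) r.fmr  = static_cast<double>(impostorAccepted) / impostorTotal;
    return r;
}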

Fig. 7. Comparison of the FNMR of our approach and GMMA in five different data sets.

6.4. Experimental results and discussions

We first performed five sets of experiments, all of which examined the accuracy of our approach compared with GMMA. Tables 3 and 4 show the FNMR and FMR results of db1–db5.

The FNMR and FMR results of db1–db4 are shown in Table 3. We see that our approach achieves much better accuracy than GMMA for fingerprint verification. As shown in Table 2, sensors of different types captured the fingerprints of these four data sets, so the images are of varying degrees of quality. This strongly suggests that our feature extraction and SVM methods capture well the information needed for fingerprint verification, and are not significantly influenced by fingerprint image quality.

The FNMR and FMR results of db5 are shown in Table 4. We see that although the proportion of training sets is reduced and the number of test members is increased, our approach still works better than GMMA, and the maximum memory for matching a pair of fingerprints is also acceptable. We think this is because SVM makes effective use of the matching vectors to enhance classification.

Table 5
FNMR and FMR obtained from db1 and db2 choosing different minutiae number l from the minutiae sequence p_1, p_2, ..., p_n with descending weights

                    FNMR, l1 = 10 (%)   FNMR, l2 = 20 (%)   FNMR, l3 = 30 (%)   FNMR, l4 = n (%)   FMR
First database      6.75                2.03                3.31                3.93               ≈0
Second database     29.05               12.78               15.17               16.23              ≈0

Fig. 8. Comparison of the FNMR of our approach using different minutiae sequence number l in two different data sets.

Fig. 9. Comparison of the average matching time of our approach and GMMA in five different data sets.

Table 6
Average time of matching two fingerprints obtained from db1 to db5

               First database (s)   Second database (s)   Third database (s)   Fourth database (s)   Fifth database (s)
Our approach   1.003                0.603                 1.024                0.831                 0.583
GMMA           ≥30                  ≥30                   ≥30                  ≥30                   ≥30

Then we turn to examine the accuracy of our approach compared with our approach without the weighting method. The experimental results are shown in Fig. 7. We can see that our approach outperforms the approach without the weighting method on all five databases, and the FNMR is 4.92% better on average when the minutiae weighting method is used. This shows that the minutiae weighting method is very effective in decreasing the influence of fake minutiae.

We also evaluated our approach's performance by choosing different numbers l of minutiae from the minutiae sequence: l = 10, 20, 30 and n. The performances on db1 and db2 are displayed in Table 5 and Fig. 8. The result is best when l is set to 20 for both data sets. When l is set to more than 20, the FNMR is a little higher, because more fake minutiae are involved in the matching process. When l is set to 10, the results are also not good, because the minutiae selected from the two images are often not matched even though they are usually all real minutiae.

The time performance obtained from db1 to db5 is shown in Fig. 9. The average time for matching two fingerprints using our approach is only about one-thirtieth of that of GMMA (Table 6). This strongly suggests that our weighting method improves the time performance significantly. In addition, the creation of the template of a pair of fingerprints (training stage) takes about 4 s on a Pentium IV 1.0 GHz, 256 MB DDR computer, and the matching of a pair of fingerprints (testing stage) takes about 0.5 s on the same computer.

It must be mentioned that our matching approach successfully uses the weighting features and the ranking strategy to avoid detecting core points when aligning two fingerprints. Core point detection is mostly used to align the test image and the template image, but in some images the core point may be close to the boundary of the image, or may not be present at all [19], which will lead to an error in the final decision.

7. Conclusions

Our main contributions to fingerprint verification in this paper include the following: (1) a weighting method was proposed to supplement the minutiae information for fingerprint images of poor quality; (2) the traditional minutiae-based matching task was translated into a classification task using a powerful SVM classifier.


Future work may include the examination of larger databases to verify the performance of our approach, and the use of transductive SVM (TSVM) [14] instead of SVM to improve the matching performance.

Acknowledgment

We would like to thank the anonymous reviewers for their helpful comments and suggestions.

References

[1] G. Agam, I. Dinstein, Generalized morphological operators applied to map-analysis, in: Proceedings of SSPE'96, 1996, pp. 60–69.
[2] A.M. Bazen, G.T.B. Verwaaijen, Correlation based fingerprint verification system, in: Proceedings Program for Research on Integrated Systems and Circuits, Veldhoven, The Netherlands, 1 November–29 December, 2000, pp. 205–213.
[3] S. Ben-Yacoub, Y. Abdeljaoued, E. Mayoraz, Fusion of face and speech data for person identity verification, IEEE Trans. Neural Networks 10 (5) (1999) 1065–1074.
[4] N. Cristianini, J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
[5] R. Duda, P. Hart, D. Stork, Pattern Classification, Wiley, New York, 2000.
[6] Federal Bureau of Investigation (FBI), The Science of Fingerprints: Classification and Uses, US Government Printing Office, Washington, DC, 1984.
[7] J. Fierrez-Aguilar, J. Ortega-Garcia, D. Garcia-Romero, J. Gonzalez-Rodriguez, A comparative evaluation of fusion strategies for multimodal biometric verification, in: Proceedings of the IAPR International Conference on Audio- and Video-Based Person Authentication, AVBPA, Springer, Berlin, 2003, pp. 830–837.
[8] B. Gutschoven, P. Verlinde, Multi-modal identity verification using support vector machines (SVM), in: Proceedings of the International Conference on Information Fusion, FUSION, IEEE Press, New York, 2000, pp. 3–8.
[9] J. Hollingum, Automated fingerprint analysis offers fast verification, Sens. Rev. 12 (3) (1992) 12–15.
[10] L. Hong, Y.F. Wang, A. Jain, Fingerprint image enhancement: algorithm and performance evaluation, IEEE Trans. Pattern Anal. Mach. Intell. 20 (8) (1998).
[11] A. Jain, L. Hong, On-line fingerprint verification, IEEE Trans. Pattern Anal. Mach. Intell. 19 (4) (1997) 302–314.
[12] A. Jain, S. Prabhakar, L. Hong, S. Pankanti, Filterbank-based fingerprint matching, IEEE Trans. Image Process. 5 (9) (2000) 846–859.
[13] A. Jain, A. Ross, S. Prabhakar, Fingerprint matching using minutiae and texture features, in: International Conference on Image Processing (ICIP), Thessaloniki, Greece, 2001, pp. 282–285.
[14] T. Joachims, Transductive inference for text classification using support vector machines, in: Proceedings of the 16th International Conference on Machine Learning (ICML), Bled, Slovenia, 1999, pp. 200–209.
[15] C.J. Lee, S.D. Wang, A Gabor filter-based approach to fingerprint recognition, in: IEEE Workshop on Signal Processing Systems, SiPS 99, 1999, pp. 371–378.
[16] S. Madhvanath, G. Kim, V. Govindaraju, Chain code processing for handwritten word recognition, IEEE Trans. Pattern Anal. Mach. Intell. 21 (1997) 928–932.
[17] D. Maio, Direct gray-scale minutiae detection in fingerprints, IEEE Trans. Pattern Anal. Mach. Intell. 19 (1) (1997) 27–40.
[18] D. Maio, D. Maltoni, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003.
[19] D. Maio, L. Nanni, An efficient fingerprint verification system using integrated Gabor filters and Parzen Window Classifier, Neurocomputing 68 (2005) 208–216.
[20] B.M. Mehtre, N.N. Murthy, A minutia based fingerprint identification system, in: Proceedings of the Second International Conference in Advances in Pattern Recognition and Digital Techniques, Calcutta, 1986.
[21] E. Newham, The Biometric Report, SJB Services, New York, 1995.
[22] L. O'Gorman, J.V. Nickerson, An approach to fingerprint filter design, Pattern Recognition 22 (1) (1989) 29–38.
[23] F. Pernus, S. Kovacic, L. Gyergyek, Minutiae-based fingerprint recognition, in: Proceedings of the Fifth International Conference on Pattern Recognition, 1980, pp. 1380–1382.
[24] N.K. Ratha, S. Chen, A. Jain, Adaptive flow orientation-based feature extraction in fingerprint images, Pattern Recognition 28 (11) (1995).
[25] A. Ross, A. Jain, Information fusion in biometrics, Pattern Recognition Lett. 24 (13) (2003) 2115–2125.
[26] B.G. Sherlock, D.M. Monro, K. Millard, Algorithm for enhancing fingerprint images, Electron. Lett. 18 (1992) 28.
[27] Z. Shi, V. Govindaraju, Segmentation and recognition of connected handwritten numeral strings, Pattern Recognition 30 (9) (1997) 1501–1504.
[28] D. Simon-Zorita, J. Ortega-Garcia, S. Cruz-Llanas, J.L. Sanchez-Bote, J. Glez-Rodriguez, An improved image enhancement scheme for fingerprint minutiae extraction in biometric identification, in: J. Bigun, F. Smeraldi (Eds.), Proceedings of the Third International Conference on Audio- and Video-Based Biometric Person Authentication, AVBPA'01, Lecture Notes in Computer Science, vol. 2091, 2001, pp. 217–222.
[29] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, Berlin, Heidelberg, New York, 2000.
[30] P. Verlinde, G. Chollet, M. Acheroy, Multi-modal identity verification using expert fusion, Inf. Fusion 1 (1) (2000) 17–33.
[31] P. Viola, M. Jones, Robust real-time face detection, Int. J. Comput. Vision 57 (2) (2004) 137–154.
[32] C.W. Wang, J.W. Li, Fingerprint identification using point pattern matching, J. Chongqing Univ. (Nat. Sci. Edn.) 25 (6) (2002).
[33] J.H. Wegstein, An Automated Fingerprint Identification System, US Government Publication, Washington, 1982.
[34] Y. Yuan, F. Paolo, P. Massimiliano, Fingerprint classification with combinations of support vector machines, Lecture Notes in Computer Science, vol. 2091, Springer, Berlin, Heidelberg, New York, 2001, pp. 253–258.

Jia Jia received the B.E. degree in computer science and technology from Tsinghua University, Beijing, China, in 2003. She is now a Ph.D. candidate of the Key Laboratory of Pervasive Computing (Tsinghua University), Ministry of Education. Her research interests include biometrics, pattern recognition and image processing.

Lianhong Cai is now a professor with the Department of Computer Science and Technology, Tsinghua University. She directs the Human–Computer Speech Interaction Group. Prof. Cai is a member of the Multimedia Committee of the Chinese Graphics and Image Society, and a member of the Chinese Acoustics Society. She has been awarded Scientific Progress Prizes and Invention Prizes from the Ministry of Mechanism and Electronics, and the Ministry of Education.

Pinyan Lu received the B.E. degree in computer science and technology from Tsinghua University, Beijing, China, in 2005. He is currently a first-year graduate student at Tsinghua University in theoretical computer science. His research interests include algorithms and complexity theory.

Xuhui Liu received the B.E. degree in computer science and technology from Tsinghua University, Beijing, China, in 2005. He is currently a first-year graduate student at Tsinghua University in the Database Laboratory, Institute of Software, Department of Computer Science and Technology. His research interests include database and software engineering.