

Journal of Zhejiang University-SCIENCE C (Computers & Electronics)

ISSN 1869-1951 (Print); ISSN 1869-196X (Online)

www.zju.edu.cn/jzus; www.springerlink.com

E-mail: [email protected]

Contact-free and pose-invariant hand-biometric-based personal identification system using RGB and depth data*

Can WANG, Hong LIU‡, Xing LIU
(Engineering Laboratory on Intelligent Perception for Internet of Things (ELIP) and MOE Key Laboratory of Machine Perception, Shenzhen Graduate School, Peking University, Shenzhen 518055, China)

E-mail: [email protected]; [email protected]; [email protected]

Received July 13, 2013; Revision accepted Mar. 24, 2014; Crosschecked May 6, 2014

Abstract: Hand-biometric-based personal identification is considered to be an effective method for automatic recognition. However, existing systems require strict constraints during data acquisition, such as costly devices, specified postures, simple backgrounds, and stable illumination. In this paper, a contactless personal identification system is proposed based on matching hand geometry features and color features. An inexpensive Kinect sensor is used to acquire depth and color images of the hand. During image acquisition, no pegs or surfaces are used to constrain hand position or posture. We segment the hand from the background using depth images, a process insensitive to illumination and background. Then finger orientations and landmark points, such as finger tips and finger valleys, are obtained by geodesic hand contour analysis. Geometric features are extracted from depth images and palmprint features from intensity images. In previous systems, hand features like finger length and width are normalized, which results in the loss of the original geometric features. In our system, we transform 2D image points into real-world coordinates, so that the geometric features remain invariant to distance and perspective effects. Extensive experiments demonstrate that the proposed hand-biometric-based personal identification system is effective and robust in various practical situations.

Key words: Hand biometric, Contact free, Pose invariant, Identification system, Multiple features
doi:10.1631/jzus.C1300190    Document code: A    CLC number: TP391.4

1 Introduction

With rapid development in information technology, biometrics have emerged to provide a greater degree of security for personal identification systems. Among the various biometric systems, hand-biometric-based systems have received increasing interest because of their user acceptability for biometric traits (Kanhangad et al., 2010; Ramalho et al., 2011; Michael et al., 2012).

‡ Corresponding author
* Project supported by the National Natural Science Foundation of China (Nos. 61340046, 60875050, and 60675025), the National High-Tech R&D Program (863) of China (No. 2006AA04Z247), the Scientific and Technical Innovation Commission of Shenzhen Municipality (Nos. JCYJ20120614152234873, CXC201104210010A, JCYJ20130331144631730, and JCYJ20130331144716089), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (No. 20130001110011)
© Zhejiang University and Springer-Verlag Berlin Heidelberg 2014

Shape features extracted from the hand carry only limited information for discrimination. Over the years, various approaches have been proposed to address this problem. Some systems have been developed to eliminate the use of pegs and to avoid the need for the user to make contact with a flat surface. These hand-biometric-based identification systems can be grouped into three categories based on the method of image acquisition, as described by Kanhangad et al. (2011a):

1. Constrained and contact based: This kind of system employs pegs and a flat surface to constrain the position of the user's hand. Many commercial systems and early research systems fall in this category (Sanchez-Reillo et al., 2000; Sanchez-Reillo, 2000).


Sanchez-Reillo et al. (2000) and Sanchez-Reillo (2000) used six pegs for hand geometry implementation. They measured 25 geometric sizes of a hand, obtaining an equal error rate (EER) of 4.9%. Although pegs provide consistent measuring positions, they also cause user inconvenience and raise public-health concerns.

2. Unconstrained and contact based: This kind of system also requires the user to put his/her hand on a flat surface (Woodard and Flynn, 2005; Methani and Namboodiri, 2009) or a digital scanner (Ribaric and Fratric, 2005; Xiong et al., 2005). These systems do not constrain position or posture, which is more user-friendly. Woodard and Flynn (2005) extracted 3D finger surface features from 3D range images of hands placed on a flat surface. Ribaric and Fratric (2005) scanned images of the hand using a low-cost scanner. However, contact-based image acquisition still raises hygiene and public-health concerns.

3. Unconstrained and contact-free: This kind of system requires no peg or flat surface during hand image acquisition. It is believed to be more user-friendly and has therefore attracted much attention recently (Zhang et al., 2003; Kanhangad et al., 2010; 2011a; Dai et al., 2012; Morales et al., 2012). A typical peg-free hand geometry technique uses optical devices to capture the image of a hand. An expensive 3D digitizer is used to capture 3D hand images (Kanhangad et al., 2010; 2011a; 2011b). Although these systems are unconstrained and contact-free, the expensive device makes them inapplicable in real environments. Moreover, clear and simple backgrounds are required for hand segmentation; skin-like backgrounds and poor illumination conditions pose difficulties for these systems. In this study, we focus on eliminating these illumination and background constraints by means of an inexpensive Kinect sensor (Microsoft Corporation, Kinect for Xbox 360).

Another problem to be addressed is the segmentation of hands. Many existing systems cannot segment a hand from the background accurately. Kanhangad et al. (2011b) used an expensive 3D scanning digitizer to acquire color and depth images of hands, but this method also requires a clean and simple background. Morales et al. (2008) modified a webcam to receive infrared emissions; maximizing contrast while keeping brightness low yields a very high contrast between the hand and the background, so the hand can be separated easily. In our system, an inexpensive Kinect sensor is used to acquire color and depth images of a hand. Hand segmentation is based on the depth image of the hand, which is insensitive to changes in illumination and background distractions. So, unlike many other systems, our system can still work in poorly illuminated environments with cluttered backgrounds.

The main contribution of our work lies in two aspects. First, a novel geodesic contour analysis method is proposed to localize landmark points of the hand and extract its geometric features. Besides explicit features like finger length, widely used in previous systems, implicit geometric features can also be obtained by analyzing a curvature matrix which contains details of all hand contour features. Second, 3D hand registration is conducted to ensure that features extracted from hand regions are kept rotation, pose, and depth invariant. This greatly reduces the intra-class difference when describing the same hand with different poses and distances. Extensive experiments were conducted which validate the effectiveness and robustness of the proposed framework for describing hand biometric features and its practicality for personal identification.

2 System description

2.1 Data acquisition

An illustration of the proposed hand-biometric-based identification system framework is given in Fig. 1. The first step in our system is to acquire data from the hand. A Kinect sensor is used to capture color and depth images of the hand. During image acquisition, the user is requested to position his/her palm in front of the Kinect sensor, with the fingers slightly and casually stretched apart. There are no guide pegs or flat surfaces to constrain the user's hand; the user can hold his/her hand naturally above the sensor in various postures. The optimal interaction region in our system is set at 60–100 cm from the Kinect sensor. The background and illumination conditions are not strictly constrained.

2.2 Hand segmentation

The second step in our system is to segment the hand for subsequent feature extraction.

In our system, given depth data captured from the Kinect sensor, the hand region is segmented and its contour extracted for the following analysis.


Fig. 1 A summary illustration of the proposed biometric-feature-based human hand identification system


In this framework, a weighted-depth-histogram-based first-peak hand segmentation method is adopted to segment the hand in front of the depth sensor. Suppose the depth histogram of the depth map is $H_d = (h_d^1, h_d^2, \cdots, h_d^{N_d})$. The weighted depth histogram $H_w$ is given as

$$H_w = (h_w^1, h_w^2, \cdots, h_w^{N_d}), \quad h_w^k = \frac{\omega_d}{k}\, h_d^k, \qquad (1)$$

where $\omega_d$ is a weight-controlling parameter and $h_w^k$ ($k = 1, 2, \cdots, N_d$) indicates the histogram value in the $k$th bin. Based on the weighted histogram, the first peak is extracted, which is regarded as the hand region. A brief illustration is given in Fig. 2: Fig. 2b is the original depth histogram and Fig. 2c the weighted histogram. The histogram values around the hand region become larger after weighting, so the first peak is obvious and the hand region can be separated more accurately and robustly from noise.
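As a rough illustration, the following NumPy sketch implements the weighted-histogram first-peak idea. The bin count, depth range, and the fixed band kept around the peak are our own illustrative assumptions, and the weighting follows Eq. (1) as reconstructed above.

```python
import numpy as np

def segment_hand_first_peak(depth_map, omega_d=1000.0, num_bins=256,
                            max_depth_mm=1500.0, band_mm=60.0):
    """Weighted-depth-histogram first-peak segmentation (Eq. (1)); a sketch.

    num_bins, max_depth_mm, and band_mm are illustrative assumptions,
    not values specified in the paper.
    """
    valid = depth_map[depth_map > 0]               # drop invalid Kinect pixels
    hist, edges = np.histogram(valid, bins=num_bins, range=(0.0, max_depth_mm))
    k = np.arange(1, num_bins + 1, dtype=float)
    hw = (omega_d / k) * hist                      # h_w^k = (omega_d / k) h_d^k
    # first local maximum of the weighted histogram is taken as the hand
    peaks = np.where((hw[1:-1] > hw[:-2]) & (hw[1:-1] >= hw[2:]))[0] + 1
    p = peaks[0] if len(peaks) else int(np.argmax(hw))
    center = 0.5 * (edges[p] + edges[p + 1])
    return (depth_map > 0) & (np.abs(depth_map - center) < band_mm)
```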

After the hand is segmented on the 2D depth image, its contour is extracted based on Canny edge extraction, denoted as $C_h = \{p_1, p_2, \cdots, p_N\}$, where $p_k$ ($k = 1, 2, \cdots, N$) is the $k$th contour point of the hand with 2D coordinates $(u_k, v_k)$ on the depth image. In this framework, a depth-invariant transformation (DIT) is adopted to transform 2D points on the image plane into 3D camera coordinates, formulated as

$$x_c = d \cdot \frac{u - u_0}{f_u}, \quad y_c = d \cdot \frac{v - v_0}{f_v}, \quad z_c = d, \qquad (2)$$

where $u_0, v_0, f_u, f_v$ are intrinsic parameters of the sensor and $d$ is the depth value of pixel $(u, v)$ on the depth map. DIT transforms a point $(u, v)$ in the depth map with depth value $d$ to a point $(x_c, y_c, z_c)$ in the 3D camera coordinates. Fig. 2d shows the segmented hand from Fig. 2a transformed to camera coordinates.

DIT is simple but important for subsequent geometric feature extraction in our framework. It transforms the hand contour to the same scale as the real-world length metric, which keeps all geometric features of the hand real and makes them invariant to depth changes or perspective effects.
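For concreteness, a minimal sketch of DIT, assuming calibrated intrinsics (fu, fv, u0, v0) for the depth camera are available; the vectorized form below is our own arrangement.

```python
import numpy as np

def depth_invariant_transform(u, v, d, fu, fv, u0, v0):
    """DIT (Eq. (2)): pixel (u, v) with depth d -> 3D camera coordinates.

    u, v, d may be scalars or equally shaped arrays of contour points.
    """
    xc = d * (u - u0) / fu
    yc = d * (v - v0) / fv
    zc = d
    return np.stack([np.asarray(xc), np.asarray(yc), np.asarray(zc)], axis=-1)
```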

2.3 Geodesic contour analysis

Based on the hand segmentation described above, geodesic contour analysis is conducted to extract geometric features of the hand, based on our previous work (Wang et al., 2013). Let $R_h$ denote the segmented hand region and $C_h$ its 3D contour. Suppose there are $N$ points on $C_h$ and let $p_i$ denote a contour point; the contour can be represented by $C_h = \{p_1, p_2, \cdots, p_N\}$.

Given any point $p_i$ on the hand region contour $C_h$, a geodesic curve $\Gamma_i^{s_j}$ around $p_i$ at step $s_j$ is defined as a point set:

$$\Gamma_i^{s_j} = \chi(p_i, s_j) = \{p_k \mid i - s_j \le k \le i + s_j\}, \qquad (3)$$

where the step $s_j$ determines the length span of the local geodesic curve $\Gamma_i^{s_j}$.

For each contour point $p_i$, its geodesic curve set $S_i$ is defined, which consists of local geodesic curves at $M$ steps, formulated as

$$S_i = \{\Gamma_i^{s_j} = \chi(p_i, s_j) \mid 1 \le j \le M\}. \qquad (4)$$

Given any local geodesic curve $\Gamma_i^{s_j}$ on the hand contour, we calculate its curvature for biometric feature representation. Let $\kappa_i^{s_j}$ denote the curvature of curve $\Gamma_i^{s_j}$, formulated as

$$\kappa_i^{s_j} = \delta \cdot \frac{l_{p_i}^{s_j}}{|p_{i-s_j}, p_{i+s_j}|}, \qquad (5)$$


Fig. 2 An illustration of the weighted-depth-histogram-based first-peak hand segmentation method: (a) original depth map; (b) its depth histogram; (c) weighted depth histogram; (d) 3D hand contour transformed to the camera coordinates

where $|p_i, p_j|$ is defined as the Euclidean distance between any two points $p_i$ and $p_j$, and $l_{p_i}^{s_j}$ denotes the geodesic length of curve $\Gamma_i^{s_j}$:

$$l_{p_i}^{s_j} = \sum_{k=i-s_j}^{i+s_j-1} |p_k, p_{k+1}|. \qquad (6)$$

To avoid repeated calculation of the Euclidean distance between two points in Eqs. (5) and (6), a Euclidean distance look-up table is adopted in our implementation, which ensures that each pairwise distance used in curvature calculation is computed only once.

In Eq. (5), $\delta$ is defined to indicate whether the local curve $\Gamma_i^{s_j}$ is concave or convex:

$$\delta = \begin{cases} 1, & [p_{i-s_j}, p_{i+s_j}] \in R_h, \\ -1, & \text{otherwise}, \end{cases} \qquad (7)$$

where $[p_i, p_j]$ indicates the line segment between points $p_i$ and $p_j$, and $[p_i, p_j] \in R_h$ indicates that this segment lies inside the hand region $R_h$. In our framework, for simplicity, the middle point $p_m^{s_j}$ of the segment is used to judge whether the segment between $p_{i-s_j}$ and $p_{i+s_j}$ is in the hand region. Thus, Eq. (7) can be reformulated as

$$\delta = \begin{cases} 1, & p_m^{s_j} \in R_h, \\ -1, & \text{otherwise}. \end{cases} \qquad (8)$$

Based on Eqs. (3) and (5), given any point $p_i$ on the hand contour $C_h$, at a certain step $s_j$, the corresponding geodesic curve is denoted as $\Gamma_i^{s_j}$ with curvature value $\kappa_i^{s_j}$. For its geodesic curve set $S_i$ in Eq. (4), let $K_i$ denote the corresponding curvature set, formulated as

$$K_i = \{\kappa_i^{s_j} \mid 1 \le s_j \le M\}. \qquad (9)$$

For $N$ contour points $\{p_1, p_2, \cdots, p_N\}$, their curvature sets $\{K_1, K_2, \cdots, K_N\}$ constitute a 2D curvature matrix with $M$ rows and $N$ columns, denoted as $M_c$. The element on the $j$th row and $i$th column of $M_c$, denoted as $M_c^{ji}$, is the curvature $\kappa_i^{s_j}$ of the local geodesic curve of the $i$th contour point $p_i$. A curvature map with pseudo color is given in Fig. 3. The values vary due to different curvatures of contour points at different curvature steps.
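A minimal NumPy sketch of building the curvature matrix $M_c$ from Eqs. (3)-(9); the closed-contour index wrap-around and the callable inside-region test stand in for implementation details the paper leaves open (and no distance look-up table is used here, for brevity).

```python
import numpy as np

def curvature_matrix(contour, steps, inside_hand):
    """Curvature matrix M_c (Eqs. (3)-(9)).

    contour: (N, 3) array of 3D contour points p_1..p_N (assumed closed).
    steps: list of M step sizes s_1..s_M.
    inside_hand: callable(point) -> bool, True if a 3D point lies inside
        the hand region R_h (a stand-in for the segmented-region test).
    """
    N, M = len(contour), len(steps)
    Mc = np.zeros((M, N))
    for j, s in enumerate(steps):
        for i in range(N):
            idx = [(i + t) % N for t in range(-s, s + 1)]   # Gamma_i^{s_j}
            curve = contour[idx]
            geo_len = np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()  # Eq. (6)
            chord = np.linalg.norm(curve[-1] - curve[0])    # |p_{i-s}, p_{i+s}|
            mid = 0.5 * (curve[0] + curve[-1])              # midpoint, Eq. (8)
            delta = 1.0 if inside_hand(mid) else -1.0       # concave/convex sign
            Mc[j, i] = delta * geo_len / max(chord, 1e-9)   # Eq. (5)
    return Mc
```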

Then, the 2D curvature matrix $M_c$ is analyzed to find contour points on finger tips and finger valleys. A curvature histogram $H$ is constructed to give statistics of the curvatures over various steps for every contour point:

$$H = \{h_1, h_2, \cdots, h_N\}, \quad h_i = \sum_{j=1}^{M} M_c^{ji} = \sum_{j=1}^{M} \kappa_i^{s_j}, \qquad (10)$$


where $h_i$ is the mean value of the curvatures of contour point $p_i$ at various steps. A curvature histogram is shown in Fig. 3b.

Given the curvature histogram $H$, a 1D Gaussian kernel is first adopted to smooth it, and then the candidate points on finger tips and valleys are selected as local extrema:

$$P_{tip} = \{p_i \mid h_i > h_{i-1} \text{ and } h_i > h_{i+1}\},$$
$$P_{val} = \{p_i \mid h_i < h_{i-1} \text{ and } h_i < h_{i+1}\}. \qquad (11)$$
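The histogram analysis of Eqs. (10) and (11), together with the five-largest/four-smallest selection described in the next paragraph, might be sketched as follows; the smoothing width sigma is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def find_tips_and_valleys(Mc, sigma=5.0):
    """Curvature histogram (Eq. (10)) and tip/valley candidates (Eq. (11)).

    Returns contour indices of 5 finger tips and 4 finger valleys.
    """
    h = Mc.sum(axis=0)                          # h_i: curvatures summed over steps
    h = gaussian_filter1d(h, sigma, mode='wrap')  # 1D Gaussian smoothing
    n = len(h)
    maxima = [i for i in range(n)
              if h[i] > h[i - 1] and h[i] > h[(i + 1) % n]]
    minima = [i for i in range(n)
              if h[i] < h[i - 1] and h[i] < h[(i + 1) % n]]
    tips = sorted(maxima, key=lambda i: h[i], reverse=True)[:5]   # 5 largest
    valleys = sorted(minima, key=lambda i: h[i])[:4]              # 4 smallest
    return sorted(tips), sorted(valleys)        # ordered along the contour
```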

Based on point sets $P_{tip}$ and $P_{val}$, the five points with the largest histogram values are selected as the final finger tip points ($P_{tip}$), and the four points with the smallest histogram values are selected as finger valley points ($P_{val}$) (Fig. 3). The five finger tip points and four finger valley points can be ordered by their indexes on the contour. The finger lengths $L_f = \{l_f^{(1)}, l_f^{(2)}, \cdots, l_f^{(5)}\}$ can be obtained by calculating the distance from finger tips to finger valleys:

$$P_{tip} = \{p_{tip}^{(1)}, p_{tip}^{(2)}, p_{tip}^{(3)}, p_{tip}^{(4)}, p_{tip}^{(5)}\},$$
$$P_{val} = \{p_{val}^{(1)}, p_{val}^{(2)}, p_{val}^{(3)}, p_{val}^{(4)}\},$$
$$l_f^{(1)} = |p_{tip}^{(1)} - p_{val}^{(1)}|, \quad l_f^{(5)} = |p_{tip}^{(5)} - p_{val}^{(4)}|,$$
$$l_f^{(k)} = \min\{|p_{tip}^{(k)} - p_{val}^{(k-1)}|, |p_{tip}^{(k)} - p_{val}^{(k)}|\}, \quad k = 2, 3, 4. \qquad (12)$$
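A direct transcription of Eq. (12), assuming the tip and valley indices are already ordered along the contour from little finger to thumb.

```python
import numpy as np

def finger_lengths(contour, tips, valleys):
    """Finger lengths L_f (Eq. (12)) from ordered tip/valley contour indices."""
    d = lambda a, b: float(np.linalg.norm(contour[a] - contour[b]))
    L = [d(tips[0], valleys[0])]                  # little finger, l_f^(1)
    for k in range(1, 4):                         # ring, middle, forefinger
        L.append(min(d(tips[k], valleys[k - 1]), d(tips[k], valleys[k])))
    L.append(d(tips[4], valleys[3]))              # thumb, l_f^(5)
    return L
```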

The order $\{p_{tip}^{(1)}, p_{tip}^{(2)}, p_{tip}^{(3)}, p_{tip}^{(4)}, p_{tip}^{(5)}\}$ corresponds to the little finger, ring finger, middle finger, forefinger, and thumb. Based on this order, the curvature matrix $M_h$ of the hand contour can be re-formulated with the last column corresponding to the thumb's tip point $p_{tip}^{(5)}$: after the five finger tips are determined (Fig. 3d), the curvature matrix in Fig. 3a is re-formulated using $p_{tip}^{(5)}$ as reference. Thus, every hand contour can be represented by this geodesic curve curvature matrix $M_h$, which contains details of the hand's biometric contour features. Geometric features are extracted by analyzing the hand's curvature matrix in our framework, as described in detail in Section 2.5.1.

2.4 Three-dimensional hand registration

2.4.1 Hand rotation invariance

To achieve hand rotation invariance, which is a prerequisite for the subsequent hand feature extraction, the orientation of the hand must be calculated first.

Fig. 3 (a) Original curvature matrix; (b) curvature histogram with five peak points and four valley points marked by circles; (c) illustration of the tip and valley points found along the length of the finger; (d) re-formulated curvature matrix


Given any point $p_i$ on boundary $C_h$, and supposing $p_m^{s_j}$ is the middle point between points $p_{i-s_j}$ and $p_{i+s_j}$, the orientation of point $p_i$ is calculated by averaging local orientations of the contour point at various steps:

$$o_i = \frac{1}{M} \sum_{j=1}^{M} \frac{p_i - p_m^{s_j}}{|p_i - p_m^{s_j}|}. \qquad (13)$$

In our framework, the orientation of point $p_{tip}^{(3)}$ on the middle finger tip is defined as the principal orientation of the hand, denoted as $o_h$ (Fig. 4a).

In this hand rotation invariance module, we consider only 2D in-plane rotation. Suppose the 2D orientation of the hand is $o_h = (o_x, o_y)$. The rotation matrix is given as

$$R_\theta = \begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \theta = \arctan\left(\frac{o_x}{o_y}\right). \qquad (14)$$

The rotation matrix $R_\theta$ is then used to correct the rotation of the hand on the intensity images. Suppose the original intensity data in the 2D image plane are represented as

$$I_{2D} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \\ i_1 & i_2 & \cdots & i_n \end{bmatrix}, \qquad (15)$$

where $i_k$ ($k = 1, 2, \cdots, n$) indicates the intensity value of pixel $(u_k, v_k)$ of intensity image $I_{2D}$, and $(x_k, y_k)$ the corresponding camera coordinates after DIT (Eq. (2)). The rotation-invariant hand data are given by

$$I'_{2D} = R_\theta I_{2D}. \qquad (16)$$
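A sketch of the in-plane correction of Eqs. (14)-(16) applied to DIT coordinates; np.arctan2 is used here as a quadrant-safe stand-in for the arctan in Eq. (14), which is our own substitution.

```python
import numpy as np

def correct_in_plane_rotation(points_xy, o_h):
    """2D in-plane rotation correction (Eqs. (14)-(16)).

    points_xy: (n, 2) camera-coordinate points after DIT.
    o_h: the hand's principal 2D orientation (ox, oy).
    Intensity values ride along unchanged, as in Eq. (15)'s third row.
    """
    ox, oy = o_h
    theta = np.arctan2(ox, oy)                    # quadrant-safe arctan(ox/oy)
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return points_xy @ R.T                        # rotated so the hand points up
```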

2.4.2 Hand pose invariance

To achieve an invariant description of a hand from all viewpoints, besides 2D in-plane rotation invariance, 3D out-of-plane rotation caused by different hand poses must be corrected. In our framework, hand pose invariance is achieved by rotating 3D hand points according to the normal vector of the palm plane. Four valley points $\{p_{val}^{(1)}, p_{val}^{(2)}, p_{val}^{(3)}, p_{val}^{(4)}\}$ are used for palm plane fitting (Fig. 4). The fitting results and the normal vector of the palm plane are shown in Fig. 4c.

Let $n = (n_x, n_y, n_z)$ denote the normal vector of the palm plane.

Fig. 4 (a) Hand 2D orientation; (b) hand 3D orientation and the normal vector of the palm plane; (c) plane fitting for the palm; (d) the final results of the segmented hand with finger tips (red circles), finger valleys (green circles), finger orientations (yellow arrows), and the palm's 3D orientation (purple arrow). References to color refer to the online version of this figure

The 3D rotation matrix can be formulated as

$$R_n = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y & 0 \\ \sin\theta_x \sin\theta_y & \cos\theta_x & -\sin\theta_x \cos\theta_y & 0 \\ -\cos\theta_x \sin\theta_y & \sin\theta_x & \cos\theta_x \cos\theta_y & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad (17)$$

where

$$\theta_x = -\arctan\left(\frac{n_y}{n_z}\right), \quad \theta_y = \arctan\left(\frac{n_x}{n_z}\right). \qquad (18)$$

Suppose the 3D hand points with intensity values are represented as

$$I_{3D} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \\ z_1 & z_2 & \cdots & z_n \\ i_1 & i_2 & \cdots & i_n \end{bmatrix}, \qquad (19)$$

where $(x_k, y_k, z_k)$ ($k = 1, 2, \cdots, n$) are the 3D coordinates of a point in the hand region and $i_k$ its corresponding intensity value. Then, hand pose invariance can be achieved by the following 3D rotation transformation:

$$I'_{3D} = R_n I_{3D}. \qquad (20)$$

Thus, through 2D in-plane rotation and 3D out-of-plane rotation, hands with various poses are registered to the same view. The hand registration results of our framework are evaluated in Section 3.2. The next step is to extract hand biometric features from the registered hand.

2.5 Biometric feature extraction

2.5.1 Geometric features

Geometric features can be applied in recognition as described by Malassiotis et al. (2006) and Kumar and Zhang (2007). As mentioned in Section 2.3, a hand contour can be represented by the geodesic curve curvature matrix $M_h$, which contains details of the hand's geometric contour features. In this framework, we extract the main geometric features of the hand by conducting singular value decomposition (SVD), formulated as follows:

$$M_h = U \begin{pmatrix} \Delta & 0 \\ 0 & 0 \end{pmatrix} V^H, \qquad (21)$$

where $U$ and $V^H$ are unitary matrices and $\Delta$ is a diagonal matrix of the singular values of $M_h$, denoted as

$$\Delta = \mathrm{diag}\{\delta_1, \delta_2, \cdots, \delta_r\}, \quad \delta_i = \sqrt{\lambda_i}, \quad i = 1, 2, \cdots, r. \qquad (22)$$

Here $r$ is the rank of $M_h$ and $\lambda_i$ is the $i$th eigenvalue of $M_h^H M_h$.

Based on the SVD of $M_h$, an implicit geometric feature vector $v_g$ is formulated as a normalized vector:

$$v_g = (\bar\delta_1, \bar\delta_2, \cdots, \bar\delta_r), \quad \bar\delta_i = \frac{\delta_i}{\sum_{k=1}^{r} \delta_k}. \qquad (23)$$

Besides the implicit geometric feature vector $v_g$, an explicit geometric feature vector $v_f$ considering finger length is used in our framework:

$$v_f = (\bar l_f^{(1)}, \bar l_f^{(2)}, \cdots, \bar l_f^{(5)}), \quad \bar l_f^{(i)} = \frac{l_f^{(i)}}{\sum_{k=1}^{5} l_f^{(k)}}, \qquad (24)$$

where $l_f^{(k)}$ is the finger length given in Eq. (12).

The final geometric feature of a hand is described as a combined vector obtained by weighting the two normalized feature vectors $v_g$ and $v_f$, formulated as

$$v_{gf} = \omega_g v_g + \omega_f v_f, \quad \omega_g + \omega_f = 1. \qquad (25)$$
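A sketch of the fused geometric feature of Eqs. (23)-(25). The weighted sum requires $v_g$ and $v_f$ to have matching lengths; how the paper matches the dimensions is not stated, so the truncation/padding below is purely an illustrative assumption.

```python
import numpy as np

def geometric_feature(Mh, finger_lengths, w_g=0.5, w_f=0.5):
    """Fused geometric feature v_gf (Eqs. (23)-(25)); w_g = w_f = 0.5
    matches the experimental setting in Section 3."""
    s = np.linalg.svd(Mh, compute_uv=False)          # singular values, Eq. (22)
    v_g = s / s.sum()                                # normalized, Eq. (23)
    v_f = np.asarray(finger_lengths, dtype=float)
    v_f = v_f / v_f.sum()                            # normalized, Eq. (24)
    k = len(v_f)
    v_g = np.pad(v_g[:k], (0, max(0, k - len(v_g)))) # assumed dimension matching
    return w_g * v_g + w_f * v_f                     # Eq. (25)
```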

2.5.2 Intensity features

To increase the robustness of our system, similar to many previous studies (Choras and Choras, 2006; Kanhangad et al., 2011b; Michael et al., 2012), we combine hand geometry features with palmprint features for hand biometric feature description.

In this framework, geometric features are extracted through geodesic contour analysis using depth data. Through the hand registration module, depth data and intensity data in the hand region are registered with rotation and pose invariance. The next step is to extract palmprint features from the intensity data after hand registration, which ensures that the palmprint features remain invariant to various hand poses and orientations. Another problem worth considering is scale: due to perspective effects, the projection scale of an object on the 2D image plane changes when its depth changes. To eliminate the scale problem, the hand region is often normalized to the same scale. However, this unavoidably loses biometric information in the hand description, as different hands inherently have different sizes.

In our framework, to handle the scale problem for better hand feature extraction, rather than simply normalizing all hands to the same scale, we convert hand regions on the image plane to depth-invariant image coordinates which have the same scale as the real-world coordinates. For any point $(u_i, v_i)$ on the intensity image plane, the transformation is conducted by

$$u_i^{DI} = \alpha' \left( W_{off} + d_i (u_i - u_0) \frac{1}{f_u} \right),$$
$$v_i^{DI} = \alpha' \left( H_{off} + d_i (v_i - v_0) \frac{1}{f_v} \right), \qquad (26)$$

where $(u_0, v_0, f_u, f_v)$ are the camera's intrinsic parameters and $\alpha'$ is a scale factor. $W_{off}$ and $H_{off}$ are offsets set to ensure that the minimum values of $u_i^{DI}$ and $v_i^{DI}$ are 1, and $d_i$ is the depth corresponding to $(u_i, v_i)$. This is similar to the DIT in Eq. (2), which transforms the point $(u_i, v_i)$ on the image plane to the 3D camera coordinates. In fact, this transformation can be regarded as projecting the 3D points with intensity data of the hand region onto a 2D image plane without any perspective effects. That is, the size of the hand projection on the transformed image plane depends only on its real size in the real world, which is depth-invariant.
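A sketch of Eq. (26), with the offsets chosen so that the minimum output coordinate equals 1, as the text requires; alpha = 0.05 follows the experimental setting in Section 3, and the offset formula is our own arrangement satisfying that constraint.

```python
import numpy as np

def depth_invariant_image(us, vs, ds, fu, fv, u0, v0, alpha=0.05):
    """Depth-invariant image coordinates (Eq. (26)).

    us, vs, ds: arrays of pixel coordinates and depths for hand pixels.
    """
    x = ds * (us - u0) / fu                 # metric x, as in DIT (Eq. (2))
    y = ds * (vs - v0) / fv                 # metric y
    w_off = 1.0 / alpha - x.min()           # W_off: makes min(u_DI) == 1
    h_off = 1.0 / alpha - y.min()           # H_off: makes min(v_DI) == 1
    return alpha * (w_off + x), alpha * (h_off + y)
```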

Based on this useful attribute, palmprint features are extracted following the method proposed by Kong and Zhang (2004), which uses a competitive coding scheme for palmprint verification. The competitive coding scheme applies multiple 2D Gabor filters to extract orientation information from palm lines. This information is then stored in a feature vector called the 'competitive code'. Angular matching with an effective implementation is then defined for comparing the codes in nearly real time.

In our framework, feature vectors for hands of the same person can simply be trained and tested using the K-nearest-neighbors (KNN) method, for both geometric and palmprint features. As this module is not the main contribution of our work, its details are omitted; the results of experiments using geometric and palmprint features are given in the experiments section.
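As a hedged illustration of this classification step (the paper gives no implementation details), a KNN identifier over fused feature vectors could look like the following, with scikit-learn and k = 3 as our own choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_identifier(X, y, k=3):
    """Fit a KNN identifier on per-image feature vectors X with user labels y."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(np.asarray(X), np.asarray(y))
    return clf

# Usage sketch: predicted_user = train_identifier(X_train, y_train).predict([v_test])
```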

3 Experiments and discussion

To demonstrate the effectiveness of our proposed method, extensive experiments were conducted on a PC connected to a Kinect sensor. The algorithm ran on an Intel i3-2100 3.10 GHz CPU with 2 GB RAM.

As discussed in Section 2.1, since there is no publicly available 3D hand database in which hand images are obtained in a contactless and posture-unconstrained manner, we developed our own database using an inexpensive Kinect sensor. Participants in the data collection were 30 students in our laboratory who volunteered to have their hand biometric features analyzed by our system. Left and right hand images were acquired from these 30 students. To test our approach under different pose variations, students were instructed to present their left hands in various poses.

For all experiments, parameters were set as follows: in Eq. (1), $\omega_d = 1000$; in Eq. (9), $M = N/16$; in Eq. (25), $\omega_g = \omega_f = 0.5$; in Eq. (26), $\alpha' = 0.05$; and in Eq. (27) (Section 3.3), $N_1 = 30$. The size of the depth image was 640×480, and that of the color image 1280×960.

3.1 Orientation and landmark point detection

The first group of experiments evaluated the performance of finger orientation and landmark point detection. As discussed in Section 2.3, geodesic contour analysis is conducted before extracting hand geometry features. Landmark points and finger orientations were obtained through geodesic contour analysis for hand registration. No strict illumination or background conditions are required in our personal identification system (Fig. 5). The complex and challenging conditions in our testing sequences included unstable illumination, cluttered backgrounds, and various 2D rotations and out-of-plane poses.

Finger orientations and landmark points were detected in depth images and then plotted on the color images in Fig. 5. Finger orientations and landmark points were well detected even in poorly illuminated, cluttered scenes with rotated hands. In the last sequence in Fig. 5, the user's finger orientations and landmark points were detected outdoors under poor illumination. This is a tough problem for many state-of-the-art systems lacking depth information. With finger orientations and landmark points accurately obtained, hand registration and geometric feature extraction can be conducted well.

3.2 Performance of hand registration

The second group of experiments evaluated the performance of hand registration with 2D rotation and various poses. To make the hand geometry extraction more accurate, hand registration was conducted before feature extraction. For various hand rotations in a plane, we rotated the hand back to a position with the hand pointing upwards. An example of hand registration is illustrated in Fig. 6.


Fig. 5 Finger orientation and landmark point detection in six real and challenging environments. Orientations of fingers are plotted as yellow arrows. Landmark points like finger tips and finger valleys are plotted as red and yellow circles, respectively. References to color refer to the online version of this figure


3.3 Hand geometry features

The third group of experiments evaluated the performance of hand geometry feature extraction. The geometric features extracted in our system are the lengths of the fingers and the SVD of the hand curvature matrix. To evaluate the accuracy of finger length calculation, we define an accuracy rate $R$ as

$$R = \frac{1}{N_1} \sum_{i=1}^{N_1} \left( 1 - \frac{|l_i^r - l_i^c|}{l_i^r} \right), \qquad (27)$$

where $N_1$ is the number of users in our experiments, $l_i^c$ is the calculated length of a finger of the $i$th user, and $l_i^r$ is the real length of the finger.
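Eq. (27) is straightforward to compute; a small sketch, with hypothetical length values (in mm) in the usage comment purely for illustration.

```python
import numpy as np

def accuracy_rate(real_lengths, calc_lengths):
    """Accuracy rate R (Eq. (27)) for one finger over N_1 users."""
    lr = np.asarray(real_lengths, dtype=float)
    lc = np.asarray(calc_lengths, dtype=float)
    return float(np.mean(1.0 - np.abs(lr - lc) / lr))

# Usage sketch: accuracy_rate([72.0, 68.5], [70.9, 69.2]) -> mean relative accuracy
```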

To evaluate the effectiveness of the proposed approach, verification experiments were performed on the acquired database. In the first set of experiments, we evaluated the improvement from hand pose registration: hand geometry features and palmprint features were evaluated before and after registration.

Table 1 gives the accuracy rates of four fingers of the left hand, including the little finger, ring finger, middle finger, and forefinger.

Fig. 6 Hand registration based on finger orientation. Twelve real registration examples are given; hands are rotated to face the camera and point upwards. The subfigures on the left are the original color images and those on the right are the color images rotated so that the hand points upwards. To show the performance of 2D rotation registration clearly, we rotated the whole color image to illustrate the hand registration


To evaluate the depth-invariant performance of our system, we tested the accuracy rates at different distances between the hand and the Kinect sensor. The distance did not affect the performance of the finger length calculation (Table 1), because we project the coordinates of points on the hand into the real world, so the length calculation uses real-world coordinates instead of camera coordinates.

Table 1 Accuracy rates R of hand finger length calculations at various distances between the hand and the camera

Distance (cm)   Accuracy rate (%)
                Little finger   Ring finger   Middle finger   Forefinger
60              95.1            94.5          92.7            96.2
70              96.2            95.4          94.6            93.8
80              94.7            93.6          92.9            97.1
90              95.5            94.7          94.4            93.6
100             93.2            95.2          95.8            95.7

To test the discrimination ability of geometric features, the identification rates for six challenging sequences are given in Table 2. The identification rate is an average over a sequence, calculated as $p = n_c/n$, where $n_c$ is the number of correct identifications and $n$ is the number of tests. From Table 2, we find that fusing features increases robustness, as concluded by Michael et al. (2012).

Table 2 Identification rates based on different geometric features for six challenging sequences

Sequence   Identification rate (%)
           Finger length   Hand curvature   Fused
1          81.2            80.4             81.5
2          79.4            82.7             82.4
3          81.1            82.7             83.8
4          84.5            83.8             84.2
5          83.5            84.6             84.7
6          82.3            81.4             82.9

3.4 Performance of the identification system

The fourth group of experiments evaluated the performance of our hand-biometric-based identification system. The correct identification rates p for our identification system, based on hand geometry and color features, are given in Fig. 7.

Fig. 7 Performance of the hand-biometric-based personal identification system. Geometric features, RGB features, and fused features were applied to identify people in six sequences

All experiments followed a leave-one-out strategy. We performed a full matching test on all the hand images, and each experiment was conducted five times.

We evaluated the performance of identification based on geometric features, color features, and fused features. A system based on feature fusion is more robust than systems based on geometric or color features alone (Fig. 7). Geometric features give better results than color features, as geometric features are not sensitive to illumination or background conditions.

4 Conclusions and future work

In this paper, a contactless personal identification system is proposed based on matching hand geometry features and color features. An inexpensive Kinect sensor is used to acquire depth and color images of hands. Previous hand-biometric-based personal identification systems were severely limited by factors such as distance, background, illumination, and viewpoint; our method overcomes these limitations. Extensive experiments have shown that hands can be registered and normalized based on finger and palm orientations. Real hand geometry data can be obtained, unaffected by the distance or viewpoint of the camera. The hand geometry features and color features used in our system show good discrimination ability. In future work, we will focus on developing a more natural and robust hand-biometric-based personal identification system for real environments, and on extracting more robust features to discriminate hands better.

References

Choras, R.S., Choras, M., 2006. Hand shape geometry and palmprint features for the personal identification. 6th Int. Conf. on Intelligent Systems Design and Applications, p.1085-1090. [doi:10.1109/ISDA.2006.253763]

Dai, J., Feng, J., Zhou, J., 2012. Robust and efficient ridge-based palmprint matching. IEEE Trans. Pattern Anal. Mach. Intell., 34(8):1618-1632. [doi:10.1109/TPAMI.2011.237]

Kanhangad, V., Kumar, A., Zhang, D., 2010. Human hand identification with 3D hand pose variations. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops, p.17-21. [doi:10.1109/CVPRW.2010.5543236]

Kanhangad, V., Kumar, A., Zhang, D., 2011a. Contactless and pose invariant biometric identification using hand surface. IEEE Trans. Image Process., 20(5):1415-1424. [doi:10.1109/TIP.2010.2090888]

Kanhangad, V., Kumar, A., Zhang, D., 2011b. A unified framework for contactless hand verification. IEEE Trans. Inform. Forens. Secur., 6(3):1014-1027. [doi:10.1109/TIFS.2011.2121062]

Kong, A., Zhang, D., 2004. Competitive coding scheme for palmprint verification. Proc. 17th Int. Conf. on Pattern Recognition, p.520-523. [doi:10.1109/ICPR.2004.1334184]

Kumar, A., Zhang, D., 2007. Hand geometry recognition using entropy-based discretization. IEEE Trans. Inform. Forens. Secur., 2(2):181-187. [doi:10.1109/TIFS.2007.896915]

Malassiotis, S., Aifanti, N., Strintzis, M.G., 2006. Personal authentication using 3-D finger geometry. IEEE Trans. Inform. Forens. Secur., 1(1):12-21. [doi:10.1109/TIFS.2005.863508]

Methani, C., Namboodiri, A.M., 2009. Pose invariant palmprint recognition. LNCS, 5558:577-586. [doi:10.1007/978-3-642-01793-3_59]

Michael, G.K.O., Connie, T., Teoh, A.B.J., 2012. A contactless biometric system using multiple hand features. J. Vis. Commun. Image Represent., 23(7):1068-1084. [doi:10.1016/j.jvcir.2012.07.004]

Morales, A., Ferrer, M.A., Diaz, F., et al., 2008. Contact-free hand biometric system for real environments. 16th European Signal Processing Conf., p.1-5.

Morales, A., Ferrer, M.A., Travieso, C.M., et al., 2012. Multisampling approach applied to contactless hand biometrics. IEEE Int. Carnahan Conf. on Security Technology, p.224-229. [doi:10.1109/CCST.2012.6393563]

Ramalho, M., Correia, P., Soares, L., 2011. Distributed source coding for securing a hand-based biometric recognition system. 18th IEEE Int. Conf. on Image Processing, p.1825-1828. [doi:10.1109/ICIP.2011.6115820]

Ribaric, S., Fratric, I., 2005. A biometric identification system based on eigenpalm and eigenfinger features. IEEE Trans. Pattern Anal. Mach. Intell., 27(11):1698-1709. [doi:10.1109/TPAMI.2005.209]

Sanchez-Reillo, R., 2000. Hand geometry pattern recognition through Gaussian mixture modelling. Proc. 15th Int. Conf. on Pattern Recognition, p.937-940. [doi:10.1109/ICPR.2000.906228]

Sanchez-Reillo, R., Sanchez-Avila, C., Gonzalez-Macros, A., 2000. Biometric identification through hand geometry measurements. IEEE Trans. Pattern Anal. Mach. Intell., 22(10):1168-1171. [doi:10.1109/34.879796]

Wang, C., Liu, H., Liu, X., 2013. Maximally stable curvature regions for 3D hand tracking. IEEE Int. Conf. on Image Processing, p.3895-3899. [doi:10.1109/ICIP.2013.6738802]

Woodard, D., Flynn, P., 2005. Finger surface as a biometric identifier. Comput. Vis. Image Understand., 100(3):357-384. [doi:10.1016/j.cviu.2005.06.003]

Xiong, W., Toh, K., Yau, W., et al., 2005. Model-guided deformable hand shape recognition without positioning aids. Pattern Recogn., 38(10):1651-1664. [doi:10.1016/j.patcog.2004.07.008]

Zhang, D., Kong, W., You, J., et al., 2003. Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell., 25(9):1041-1050. [doi:10.1109/TPAMI.2003.1227981]