5/25/2018 3D Fingerprint Reconstruction_LiuZhang_Pattern Recognition Letters
3D fingerprint reconstruction system using feature correspondences and prior estimated finger model
Feng Liu, David Zhang *
Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Article info
Article history:
Received 5 January 2013
Received in revised form 21 May 2013
Accepted 10 June 2013
Available online 21 June 2013
Keywords:
3D fingerprint reconstruction
Finger shape model
Fingerprint feature correspondences
Orientation map
Frequency map
Touchless multi-view imaging
Abstract
This paper studies a 3D fingerprint reconstruction technique based on multi-view touchless fingerprint images. The technique offers a solution for 3D fingerprint image generation and application when only multi-view 2D images are available. The main difficulties of 3D fingerprint reconstruction are the establishment of feature correspondences from 2D touchless fingerprint images and the estimation of the finger shape model. In this paper, several popular features, namely the scale invariant feature transformation (SIFT) feature, the ridge feature, and minutiae, are employed to establish correspondences. To extract these fingerprint features accurately, an improved fingerprint enhancement method is proposed that refines the orientation and ridge frequency maps according to the characteristics of 2D touchless fingerprint images. Correspondences can then be established by adopting hierarchical fingerprint matching approaches. The finger shape model is estimated through an analysis of 440 3D point cloud finger data samples (220 fingers, 2 pictures each) collected by a 3D scanning technique, namely the structured light illumination (SLI) method. It is found that the binary quadratic function is more suitable for the finger shape model than the other mixed model tested in this paper. In our experiments, the reconstruction accuracy is illustrated by reconstructing a cylinder. Furthermore, results obtained from different fingerprint feature correspondences are analyzed and compared to show which features are more suitable for 3D fingerprint image generation.
© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
As one of the most widely used biometrics, fingerprints have been investigated for more than a century [1]. Advanced Automated Fingerprint Recognition Systems (AFRSs) are widely available in the market, and most of them capture fingerprint images using the touch-based technique, since it easily yields images with high ridge-valley contrast. However, the touch-based imaging technique introduces distortions and inconsistencies into the images due to the contact of the finger skin with the device surface. In addition, the curved 3D finger surface is flattened into a 2D plane during image acquisition, destroying the 3D nature of fingers. To deal with these problems, 3D fingerprint imaging techniques have started to be considered [2-8]. Usually, these techniques capture fingerprint images at a distance and provide the 3D finger shape feature simultaneously. The advent of these techniques brings new challenges and opportunities to existing AFRSs.
Currently, there are three kinds of popular 3D imaging techniques: multi-view reconstruction [2-4], laser scanning [5,27,28], and structured light scanning [6-8]. Among them, the multi-view reconstruction technique has the advantage of low cost but the disadvantage of low accuracy. Laser scanning normally achieves high-resolution 3D images but costs too much, and the collecting time is long [5,27,28]. As mentioned in Ref. [28], currently available commercial 3D scanning systems cost from $2500 to $240,000 USD. Scanning a turtle figurine (18 cm long) takes from 4 to 30 min on different scanners [27]. The status (wet or dry) of objects also affects the accuracy of 3D images due to surface reflection: the wetter the surface is, the lower the accuracy will be [5]. Different from multi-view reconstruction and laser scanning, structured light imaging has high accuracy as well as moderate cost. However, it also takes much time to collect 3D data and suffers from an instability problem, in that the finger must be kept still while the structured light patterns are projected onto it [6-8]. Thus, it is necessary and important to study the reconstruction technique based on multi-view 2D fingerprint images when considering cost, user friendliness, and the complexity of device design. It is well known that the 3D spatial coordinates of an object are available from two different plane pictures captured at one time, according to binocular stereo vision theory, if some camera parameters and the corresponding matched pairs are provided [3]. In Ref. [2], the authors only briefly introduce the 3D reconstruction method, since it is the same as the methods used to reconstruct any other type of 3D object.
0031-3203/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.patcog.2013.06.009
* Corresponding author. Tel.: +852 27667271; fax: +852 27740842.
E-mail address: [email protected] (D. Zhang).
Pattern Recognition 47 (2014) 178-193
There are several drawbacks to adopting general methods for 3D fingerprint reconstruction. For instance, it is time-consuming, because the coordinates of every pixel need to be calculated. Moreover, only the 3D coordinates of correspondences, which represent the same portion of the skin between a pair of neighboring images, can be calculated; 3D visualization of the finger is unavailable if correspondences cannot be found between two neighboring images.
To overcome the disadvantages mentioned above, a new 3D fingerprint reconstruction system using feature correspondences and a prior estimated finger model is proposed in this paper. Comparatively little research has been carried out into touchless fingerprint matching due to the characteristics of touchless fingerprint imaging, and hardly any work can be found on finger shape model analysis. This paper for the first time analyzes touchless fingerprint features for correspondence establishment and studies the model of the human finger shape. 3D fingerprints are then reconstructed from images captured by a touchless multi-view fingerprint imaging device designed by us [9]. Fig. 1 shows the schematic diagram of our acquisition device and an example of the 2D fingerprint images. Finally, 3D fingerprint reconstruction results based on different feature correspondences are given and compared with those based on manually labeled correspondences. It is concluded that such reconstruction results are helpful to 3D fingerprint recognition.
The paper is organized as follows. In Section 2, the imaging device and the procedure of the proposed 3D fingerprint reconstruction system are briefly introduced. Section 3 is devoted to the methods proposed to establish fingerprint feature correspondences. The approach to estimating the finger shape model is described in Section 4. Experimental results and the reconstruction error analysis are given in Section 5. Section 6 concludes the paper and indicates future work.
2. 3D fingerprint reconstruction system
Before reconstruction, multi-view fingerprint images need to be provided. The images used in this paper are captured by the touchless multi-view fingerprint acquisition device designed by us. The schematic diagram of the acquisition device is shown in Fig. 1(a). One central camera and two side cameras are focused on the finger. Four blue LEDs light the finger and are arranged to give uniform brightness. A hole is designed to place the finger in a fixed position. All three cameras are JAI CV-A50. The lens focal length is 12 mm, and the object-to-lens distance is set to 91 mm in consideration of image quality and device size. The angle between the central camera and each side camera is roughly 30°. The image size of each channel is restricted to 576 × 768 pixels, and the resolution of the images is 400 dpi. The three view images of a finger captured by the device are shown in Fig. 1(b). More details of the parameter settings of the device can be found in Ref. [9].
According to the theory of binocular stereo vision in the computer vision domain [3], the 3D information of an object can be obtained from two different plane pictures captured at one time. As shown in Fig. 2(a), given two images C_l and C_r captured at one time, the 3D coordinate of A can be calculated if some camera parameters (e.g., focal length of the left camera f_l, focal length of the right camera f_r, principal point of the left camera O_l, principal point of the right camera O_r) and the matched pair (a_l(u_l, v_l) ↔ a_r(u_r, v_r), where a_* represents a 2D point in the given image C_l or C_r, u_* is the column axis of the 2D image, and v_* is the row axis of the 2D image) are provided. Once the shape model and several calculated 3D coordinates of the 3D object are known, the shape of the 3D object can be obtained after interpolation. As can be seen in Fig. 2(b), the triangle in 3D space is obtained after computing the 3D coordinates of three vertices and interpolating according to a triangle model. Therefore, the reconstruction method is divided into five steps: camera parameter calculation, correspondence establishment, 3D coordinate computation, shape model estimation, and interpolation.
Fig. 1. Device and captured touchless multi-view fingerprint images. (a) Schematic diagram of our designed touchless multi-view fingerprint acquisition device, (b) images of a finger captured by the device (left, frontal, right).
Fig. 2. An illustration of constructing a 3D triangle based on binocular stereo vision. (a) 3D coordinate calculation in 3D space, (b) 3D triangle reconstruction.
F. Liu, D. Zhang / Pattern Recognition 47 (2014) 178-193
The flow chart of the reconstruction system in this paper is shown in Fig. 3.
Camera calibration is the first step of 3D reconstruction. It provides the intrinsic parameters (focal length, principal point, skew, and distortion) of each camera and the extrinsic parameters (rotation, translation) between cameras that are necessary for reconstruction. It is usually implemented off-line. In this paper, the methodology proposed in Ref. [10] and the improved algorithms coded by Bouguet [11] are employed. The free code can be obtained from the website [11]. Note that three cameras are used in our fingerprint capturing device. The position of the middle camera is chosen as the reference system, because the central part of the fingerprint, where the core and the delta are usually located, is more likely to be captured by this camera. The frontal image captured by the middle camera is also selected as the texture image when the final 3D fingerprint image is generated. To ensure that the frontal view of the finger is captured by the middle camera of the device, a simple guide is given for users to use the device correctly.
Correspondence establishment is of great importance to the 3D reconstruction accuracy. It will be introduced in detail in Section 3.
Once the camera parameters and the matched pairs between fingerprint images of different views are both obtained, the 3D coordinate of each correspondence can be calculated by using the stereo triangulation method [11].
Fig. 3. The flow chart of our reconstruction system.
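Assuming calibrated 3×4 projection matrices are available for a camera pair, the triangulation of one matched pair can be sketched with the standard linear (DLT) method; this is a generic illustration, not the exact routine from Bouguet's toolbox:

```python
import numpy as np

def triangulate(P_l, P_r, pt_l, pt_r):
    """Linear (DLT) triangulation of one matched pair.
    P_l, P_r: 3x4 projection matrices of the left/right cameras.
    pt_l, pt_r: matched image points (u, v) in each view.
    Returns the 3D point as a length-3 array."""
    ul, vl = pt_l
    ur, vr = pt_r
    # Each image point contributes two linear constraints on X.
    A = np.vstack([
        ul * P_l[2] - P_l[0],
        vl * P_l[2] - P_l[1],
        ur * P_r[2] - P_r[0],
        vr * P_r[2] - P_r[1],
    ])
    # The homogeneous solution is the right singular vector with
    # the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In practice the projection matrices come from the off-line calibration step (intrinsics and the rotation/translation between cameras).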
Fig. 4. Example of correspondence establishment based on SIFT features. (a) Original frontal image, (b) extracted SIFT features from (a), (c) original left-side image, (d) extracted SIFT features from (c), (e) initial correspondences established by point-wise matching, (f) final correspondences after refining by the RANSAC method.
Since it is very hard to identify all of the correspondences which represent the same portion of the skin between two neighboring fingerprint images, it is very important to estimate the 3D finger shape for 3D fingerprint visualization. This paper for the first time analyzes finger shape models. The details will be presented in Section 4.
Based on the calculated 3D coordinates of a limited number of feature correspondences and the estimated shape model, a 3D finger shape can finally be reconstructed by interpolation. Here, the classical approach, namely multiple linear regression using least squares [32,33], is adopted for interpolation because of its simplicity and effectiveness.
3. Fingerprint feature correspondence establishment
Fingerprints are distinguished by their features. Different fingerprint features can be observed in fingerprint images of different resolutions. There are three frequently used features for low-resolution fingerprint images, namely the Scale Invariant Feature Transformation (SIFT) feature, the ridge map, and minutiae [12-20]. This paper thus tries to extract such features and establish correspondences between different views of fingerprint images.
3.1. Correspondence establishment based on the SIFT feature
SIFT [21] is popular in object recognition and image retrieval, since it is robust to low-quality images. Touchless fingerprint images have the characteristic of low ridge-valley contrast. This robustness makes it possible to establish true correspondences even when minutiae and ridge features cannot be correctly extracted. Moreover, SIFT is robust to deformation variation and rich in quantity [15,17]. Fig. 4(b) and (d) illustrate the 1911 and 1524 extracted SIFT features, respectively. 108 pairs are matched by applying the point-wise matching method to Fig. 4(a) and (c), as shown in Fig. 4(e). From Fig. 4(e), we can see that false correspondences exist, and hence refinement algorithms need to be employed to select true ones. To this end, the classical RANSAC algorithm, which is insensitive to initial alignment and outliers [22], is utilized. It should be noted that the thin plate spline (TPS) model, which is popularly used in the fingerprint domain [12,19,23], is adopted in the RANSAC algorithm, due to the curved surface of the finger and the distortions introduced by the cameras. Fig. 4(f) gives the final selected true correspondences when RANSAC with the TPS model acts on the initial correspondences of Fig. 4(e).
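The pruning step can be sketched as follows. For brevity, this sketch fits an affine transform inside the RANSAC loop as a simplified stand-in for the TPS model used in the paper; the iteration count and threshold are hypothetical:

```python
import numpy as np

def ransac_prune(src, dst, n_iter=500, thresh=3.0, seed=0):
    """Prune putative matches with RANSAC.
    src, dst: (N, 2) arrays of matched point coordinates.
    Returns a boolean inlier mask over the N putative matches."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    for _ in range(n_iter):
        # Minimal sample: 3 points determine an affine transform.
        idx = rng.choice(len(src), 3, replace=False)
        A = np.hstack([src[idx], np.ones((3, 1))])
        try:
            M = np.linalg.solve(A, dst[idx])  # 3x2 affine parameters
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        residual = np.linalg.norm(src_h @ M - dst, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A non-rigid model such as TPS would replace the affine fit but leave the sample-score-keep structure of the loop unchanged.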
3.2. Correspondence establishment based on the ridge map
Before establishing correspondences between ridge maps, ridges must be extracted and recorded. In general, a ridge map refers to the thinned image in which ridges are one pixel wide, ridge pixels have value 1, and background pixels have value 0. Fig. 5 shows the flowchart of the steps for ridge map extraction. However, touchless fingerprint images have low ridge-valley contrast, and their ridge frequency increases from the center to the sides, as shown in Fig. 4(a) and (c). These characteristics make it difficult to extract the ridge map accurately, because fingerprint enhancement is difficult. Currently, there are a number of fingerprint enhancement approaches, such as Gabor-filter-based, STFT-based, DCT-based, and diffusion-filter-based methods [24,34-41]. Among them, the Gabor-filter-based method is the simplest and most traditional one. It is finally adopted in this paper. Fingerprint images are enhanced by a bank of Gabor filters generated from the given fingerprint orientation and frequency. Orientation and frequency maps play an important role in the enhancement approach. This paper thus tries to improve the orientation map and frequency map so as to acquire better enhancement results.
As introduced in Ref. [1], the gradient-based ridge orientation estimation method is the simplest and most intuitive one. It is efficient and popularly used in fingerprint recognition studies. However, it also has some drawbacks, such as sensitivity to noise when orientation is estimated at too fine a scale, and low accuracy when smoothing factors are applied to the orientation map, as shown in Fig. 6(a) (lower rectangle) and Fig. 6(b) (right rectangle). To keep the estimation accuracy of good-quality areas and correct the orientation where noise exists, a method is proposed that acts on the original orientation map to improve it. The main steps are: (i) partition the original orientation map into eight uniform regions; small blocks in the uniform regions represent wrongly estimated orientation results (see Fig. 7(a), in the red circles); (ii) sort the uniform regions with the same color in descending order of size; regions whose size is smaller than the mean size of all regions with the same color are set to zero (see Fig. 7(b), the dark regions in the ROI); (iii) assign values to the points zeroed in step (ii) according to the nearest neighbor method. The improved orientation map is obtained by following these three steps. Fig. 6(c) shows the improved orientation map based on Fig. 6(a), and Fig. 7(c) gives the partition map according to Fig. 6(c). The results show that the estimation accuracy of good-quality areas is kept and the wrongly oriented areas are corrected (Fig. 6(c), rectangle).
Fig. 5. Flowchart of ridge map extraction.
Fig. 6. Fingerprint ridge orientation maps. (a) Original orientation map, (b) smoothed orientation map of (a), (c) improved orientation map by our proposed method.
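The three steps can be sketched as follows, assuming the orientation map has already been quantized into eight discrete values (0..7). The 4-connected component analysis and the brute-force nearest-neighbour refill are our simplifications for illustration:

```python
import numpy as np
from collections import deque

def clean_orientation(quant):
    """Small-region cleanup of a quantized orientation map (values 0..7):
    (i) find uniform regions per value, (ii) zero regions smaller than the
    mean region size for that value, (iii) refill zeroed pixels from the
    nearest surviving pixel."""
    h, w = quant.shape
    labels = -np.ones((h, w), dtype=int)
    sizes, vals = [], []
    # Step (i): 4-connected components of equal orientation value.
    for i in range(h):
        for j in range(w):
            if labels[i, j] >= 0:
                continue
            lab = len(sizes)
            labels[i, j] = lab
            q, n = deque([(i, j)]), 0
            while q:
                y, x = q.popleft()
                n += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    yy, xx = y + dy, x + dx
                    if (0 <= yy < h and 0 <= xx < w and labels[yy, xx] < 0
                            and quant[yy, xx] == quant[i, j]):
                        labels[yy, xx] = lab
                        q.append((yy, xx))
            sizes.append(n)
            vals.append(quant[i, j])
    sizes = np.array(sizes)
    vals = np.array(vals)
    # Step (ii): mark undersized regions (per orientation value) as holes.
    hole = np.zeros((h, w), dtype=bool)
    for v in np.unique(vals):
        mean_size = sizes[vals == v].mean()
        for lab in np.where((vals == v) & (sizes < mean_size))[0]:
            hole |= labels == lab
    # Step (iii): nearest-neighbour refill (brute force for the sketch).
    out = quant.copy()
    ys, xs = np.where(~hole)
    for y, x in zip(*np.where(hole)):
        k = np.argmin((ys - y) ** 2 + (xs - x) ** 2)
        out[y, x] = quant[ys[k], xs[k]]
    return out
```

An isolated one-pixel blob is thus absorbed into its surroundings, while genuinely large regions of a different orientation survive.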
Frequency maps record the number of ridges per unit length along a hypothetical segment orthogonal to the local ridge orientation. The simplest and most popular ridge frequency estimation method is the x-signature-based method [1]. However, this kind of method does not work in blurry or noisy fingerprint areas. In that situation, interpolation and filtering are used to post-process the originally estimated frequency map. For touchless fingerprint images, frequency maps are harder to estimate than for touch-based fingerprint images due to the low ridge-valley contrast, and simple interpolation or filtering is invalid when the frequency is wrongly estimated over whole neighborhoods. By observing the ridges in touchless fingerprint images, we find that their frequency increases from the central part to the side parts along a horizontal section, and decreases from the fingertip to the distal interphalangeal crease along a vertical section, as shown in Fig. 8 (ridge frequency is calculated with blocks of 32 × 32 pixels). This phenomenon can be explained by the touchless capturing technique and observation of the human finger. As shown in Eq. (1), M is the optical magnification, and p and q are the lens-to-object and lens-to-image distances, respectively. For a fixed q, a large p will lead to a small magnification M. Fig. 9 illustrates three different values of p. It can be seen that the distance from the side parts to the lens (i.e., D2 or D3) is larger than the distance from the central part to the lens (i.e., D1), which leads to a smaller M on the side parts than on the central part. The smaller the magnification M is, the larger the ridge frequency will be. Thus, for the horizontal section, the ridge period is larger in the central part than in the side parts. The vertical distribution of the ridge period increases from the fingertip to the distal interphalangeal crease, because p increases from the tip to the center part of the finger, and by observation the ridges are wider near the distal interphalangeal crease than in the other parts.

M = q / p    (1)

Fig. 7. Partition results according to orientation maps. (a) Partition result according to the original orientation map, (b) partition result according to our improved orientation map.
Fig. 8. Frequency variation of touchless fingerprint images. (a) Original touchless fingerprint image and (b) corresponding frequency map.
Fig. 9. Distance between the lens and different parts of the finger.
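Eq. (1) can be illustrated numerically; the distances below are hypothetical, chosen only to mirror the geometry of Fig. 9:

```python
# Thin-lens magnification, Eq. (1): M = q / p.
def magnification(p: float, q: float) -> float:
    """Optical magnification for lens-to-object distance p and
    lens-to-image distance q (same length units)."""
    return q / p

# Hypothetical distances for illustration only: the central part of the
# finger (D1) is closer to the lens than the sides (D2, D3), so the
# center is imaged with a larger magnification, i.e. a larger ridge
# period (lower ridge frequency) than the sides.
M_center = magnification(p=85.0, q=12.0)  # D1 (hypothetical, mm)
M_side = magnification(p=95.0, q=12.0)    # D2 or D3 (hypothetical, mm)
assert M_center > M_side
```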
According to this distribution of ridge frequency in touchless fingerprint images, this paper proposes to fit the ridge period (1/ridge frequency) map with a monotonically increasing function (a logarithmic function) along the vertical direction and a quadratic curve along the horizontal direction. The improved ridge period map is finally achieved by fitting the original ridge period map with a mixed model of a logarithmic function and a quadratic curve.
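Under these assumptions (period growing logarithmically with the row index and varying quadratically with the column index), the fit can be sketched as a single linear least-squares problem; the additive design-matrix form below is our illustration, not necessarily the authors' exact procedure:

```python
import numpy as np

def fit_period_map(period):
    """Fit a smooth mixed model to a ridge-period map: logarithmic along
    the vertical (row) direction and quadratic along the horizontal
    (column) direction. Design matrix columns: 1, ln(row), col, col^2.
    Returns the fitted (smoothed) period map."""
    h, w = period.shape
    r, c = np.meshgrid(np.arange(1, h + 1), np.arange(w), indexing="ij")
    A = np.column_stack([
        np.ones(h * w),          # constant term
        np.log(r).ravel(),       # logarithmic vertical trend
        c.ravel(),               # linear horizontal term
        (c ** 2).ravel(),        # quadratic horizontal term
    ])
    coef, *_ = np.linalg.lstsq(A, period.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)
```

Because the model is linear in its coefficients, wrongly estimated blocks are overridden by the global trend rather than propagated, which is the point of replacing local interpolation with a model fit.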
Once the orientation and ridge frequency maps are calculated, a bank of Gabor filters can be generated based on them. The enhanced fingerprint image is then obtained, as shown in Fig. 10. After binarizing the enhanced fingerprint image by simple thresholding and morphology approaches, the final ridge map is acquired. Fig. 10(a) and (b) show the ridge maps of Fig. 4(a) enhanced using the original orientation map and the original ridge frequency map interpolated with the mean value of the frequency map. Fig. 10(c) and (d) show the enhanced ridge maps of Fig. 4(a) using the improved orientation and ridge frequency maps. Better results are achieved using the improved orientation and ridge frequency maps, as seen when comparing Fig. 10(c) and (d) with (a) and (b) (labeled in rectangles). It should be noted that the pre-processing steps of ROI extraction and normalization are the same as those proposed in Ref. [9].
Before correspondence establishment, ridges are recorded by tracing, starting from the minutiae where ridges are disconnected. Due to the existence of noise, a ridge image often has some spurs and breaks. In cases of insignificant noise, the ridge structure can be correctly recovered by removing short ridges or connecting broken ridges. However, in cases of strong noise, it is difficult to recover the correct ridge structure in this way, and we remove all related ridges. Finally, the down-sampled ridge point coordinates of each ridge are recorded in a list.
Coarse alignment of two ridge maps is done by using the global transform model calculated in Section 3.1 when the SIFT features are matched. Ridges in the ridge maps are then matched by adopting the Dynamic Programming (DP) method. As shown in Fig. 11 and Table 1, {a1, a2, ..., a10} represents a ridge line in the template ridge map and {b1, b2, ..., b8} denotes a ridge line in the test ridge map. For any ridge in the template and test ridge maps, the Euclidean distance between each pair of compared ridge points is calculated. The status is 1 if the distance of a pair of ridge points is smaller than a threshold (set to 5 points in this paper); otherwise, the status is 0. The DP method is adopted to find the largest number of matched ridge pairs. Coarse ridge correspondences are then established after DP. The RANSAC algorithm introduced in Section 3.1 is then adopted to select the true correspondences from the coarse set. Fig. 12 shows the results of the established ridge correspondences.
Fig. 10. Ridge maps. (a) Ridge map of Fig. 4(a) enhanced using the original orientation and ridge frequency maps, (b) thinned ridge map of (a), (c) ridge map of Fig. 4(a) enhanced using the improved orientation and ridge frequency maps, (d) thinned ridge map of (c).
Fig. 11. Correspondence establishment between two ridges.
Table 1
Record of status among ridge points in Fig. 11.
     a1  a2  a3  a4  a5  a6  a7  a8  a9  a10
b1   0   0   0   0   0   0   0   0   0   0
b2   0   0   0   0   0   0   0   0   0   0
b3   0   0   0   0   1   0   0   0   0   0
b4   0   0   0   0   0   1   1   0   0   0
b5   0   0   0   0   0   0   1   1   0   0
b6   0   0   0   0   0   0   0   1   0   0
b7   0   0   0   0   0   0   0   0   1   0
b8   0   0   0   0   0   0   0   0   0   0
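The status table and the DP search can be sketched as a longest-common-subsequence-style dynamic program over the binary status matrix; this is a generic illustration of the matching step, not the authors' exact implementation:

```python
def dp_match(status):
    """Find the largest set of order-preserving matched point pairs.
    status[i][j] == 1 iff test point b_(i+1) and template point a_(j+1)
    are within the distance threshold. Returns matched (i, j) pairs with
    strictly increasing indices in both sequences."""
    m, n = len(status), len(status[0])
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1],
                           dp[i - 1][j - 1] + status[i - 1][j - 1])
    # Backtrack to recover the matched index pairs.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        if status[i - 1][j - 1] and dp[i][j] == dp[i - 1][j - 1] + 1:
            pairs.append((i - 1, j - 1))
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

Applied to the status matrix of Table 1, this recovers the diagonal run of matches (b3-a5 through b7-a9) while discarding the ambiguous duplicate hits in rows b4 and b5.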
3.3. Correspondence establishment based on minutiae
Due to their distinctiveness, minutiae are widely used for fingerprint recognition and are also considered in this paper. They are extracted from the ridge map calculated in Section 3.2. An example of minutiae extracted using the method introduced in Ref. [25] is shown in Fig. 13.
Since the transformation model is obtained when the SIFT correspondences are established, the minutiae sets can be coarsely aligned by the calculated transformation model. Then, initial minutiae correspondences are established by the nearest neighbor method, and the final result is achieved by the RANSAC algorithm with a TPS model. This kind of minutiae correspondence establishment is demonstrated in Fig. 14.
4. Finger shape model estimation
To reconstruct the finger shape, it is necessary to know the shape model after certain 3D points of the finger have been calculated. Unfortunately, an exact model of the human finger shape is not directly available, and hence it should be estimated. To this end, we propose to estimate the finger shape model by analyzing 440 3D point cloud data samples collected from human fingers (220 fingers, 2 pictures each). The 3D point cloud data are defined as the depth information of each point on the finger. They are collected by a camera together with a projector using the Structured Light Illumination (SLI) method [6,29]. The structure diagram of the collection device is shown in Fig. 15. Thirteen structured light stripes generated by a computer are projected onto the finger surface by a Liquid Crystal Display (LCD) projector. The camera then captures the fingerprint images with the projected stripes on them. The 3D point cloud data, which consist of the depth information of each point on the finger, can be calculated using transition and phase expansion techniques [30]. Since this technique is well studied and proven to acquire the 3D depth information of each point on the finger with high accuracy [6-8,29-31], the 3D point cloud data obtained using it are taken as the ground truth of the human finger to build the database for finger shape model estimation.
Fig. 16(a) displays an example of 3D point cloud data we collected from a thumb. We randomly selected and drew the horizontal profile and the vertical profile of the 3D point cloud data, as shown in Fig. 17 (thick rugged line). The horizontal profile has a parabola-like shape, as shown in Fig. 17(a), while the vertical profile can be represented by a quadratic curve or a logarithmic function (see Fig. 17(b)). Thus, both the binary quadratic function

f1(x, y) = Ax^2 + By^2 + Cxy + Dx + Ey + F    (2)

and the mixed model with a parabola and a logarithmic function

f2(x, y) = Ax^2 + Bx + C ln(y) + D    (3)

are chosen to fit all of our collected 440 3D point cloud finger data by the regression method [32,33]. Note that, in (2) and (3), A, B, C, D, E, and F represent the coefficients of the functions, x is the column-coordinate of the image, and y is the row-coordinate of the image. Fig. 16(b) gives the fitting result of Fig. 16(a) (denoted by V) by the binary quadratic function (denoted by V~_Eq.(2)), while Fig. 16(c) gives the fitting result of Fig. 16(a) by the mixed model (denoted by V~_Eq.(3)). It can be seen that the binary quadratic function is closer to the finger shape. Therefore, the binary quadratic function in Eq. (2) is finally adopted in this paper.
Fig. 12. Ridge correspondence establishment. (a) Initial correspondences and (b) final correspondences after RANSAC.
Fig. 13. Example of a minutiae extraction result.
Fig. 14. Minutiae correspondence establishment. (a) Initial correspondences and (b) final correspondences after RANSAC.
Fig. 15. Structure diagram of the device used to capture 3D point cloud data of the human finger [6].
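Since Eq. (2) is linear in its six coefficients, fitting it to scattered 3D points reduces to ordinary linear least squares; the following is a minimal sketch of that regression step:

```python
import numpy as np

def fit_binary_quadratic(x, y, z):
    """Least-squares fit of the finger shape model of Eq. (2):
    f(x, y) = A x^2 + B y^2 + C xy + D x + E y + F.
    x, y: image column/row coordinates of the 3D points; z: depth values.
    Returns the six coefficients (A, B, C, D, E, F)."""
    A = np.column_stack([x ** 2, y ** 2, x * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef
```

The same call, with the design matrix swapped for [x^2, x, ln(y), 1], would fit the mixed model of Eq. (3), which is how the two candidate models can be compared on the point cloud data.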
5. Experimental results and analysis
5.1. 3D fingerprint reconstruction system error analysis
Reconstruction and system errors are inevitable. To acquire
these errors, the reconstruction of an object with the standard
Fig.16. Example 3D nger point cloud data and its tting results by different models. (a) 3D point cloud data of a thumb, (b) tting result of (a) by binary quadratic function
(c) tting result of (a) by a mixed model with parabola and logarithmic function.
50 100 150 200 250 300 350 400 4500
0.05
0.1
0.15
0.2
0.25
0.3
0.35
0.40.45
Column
Depth
0 50 100 150 200 250 300 350 400-0.2
0
0.2
0.4
0.6
0.8
1
1.2
Row
Depth
Fig. 17. Randomly selected proles ofFig. 16(a). (a) Horizontal prole, thick rugged line depicts real data, thin smooth line is tting by Parabola, (b) vertical prole, thick
rugged line depicts real data, thin smooth lines are tting by Quadratic Curve (closer to real data) and logarithmic Function, respectively.
F. Liu, D. Zhang / Pattern Recognition 47 (2014) 178193 185
-
5/25/2018 3D Fingerprint Reconstruction_LiuZhang_Pattern Recognition Letters
9/16
Fig.18. Reconstruction accuracy analysis of cylinder shape object. (a) Original cylinder shape object wrapped with grid paper, (b) correspondences established between left-
side and frontal images captured by our device, (c) correspondences established between right-side and frontal images captured by our device, (d) 3D space points
corresponding to (b), (e) 3D space points corresponding to (c), (f) tting result corresponding to (d), (g) tting result corresponding to (e), (h) error map corresponding to
(d) when tting by cylinder shape with radius of 10 mm, (i) error map corresponding to (e) when tting by cylinder shape with radius of 10 mm.
F. Liu, D. Zhang / Pattern Recognition 47 (2014) 178193186
-
5/25/2018 3D Fingerprint Reconstruction_LiuZhang_Pattern Recognition Letters
10/16
cylinder shape and of radius 10 mm is given. The example object is
shown in Fig. 18(a). The surface of the object is wrapped with grid
paper to facilitate feature extraction. Three 2D pictures (left-side,
frontal, and right-side) of the cylinder are captured by the
touchless multi-view imaging device we designed. Fig. 18(b) and
(c) shows the two grouped images (left-side & frontal, right-side &
frontal). As mentioned in Section 2, there are five main steps in our
reconstruction technique. Camera parameters are first calculated
off-line. The corner features of the wrapped grid paper are then
labeled and their correspondences between grouped images are
established manually, as shown in Fig. 18(b) and (c). Fig. 18(d) and
(e) illustrates the calculated 3D coordinates corresponding to the
matched pairs shown in Fig. 18(b) and (c) based on the given
camera parameters and feature correspondences. Shape model
estimation is unnecessary since the cylinder model is known as
prior knowledge. By using the calculated 3D coordinates and the
known cylinder shape model, the cylindrical surface is finally
generated by interpolation based on multiple linear regression
using the least squares method [31,32]. Fig. 18(f) and (g) shows
the reconstructed cylinders displayed by a 3D display software
package called Imageware 12.1, which is used for 3D point cloud
data display and analysis. The error maps shown in Fig. 18(h) and
(i) are also obtained by this software. From Fig. 18(f) and (g), we
can see that the radii of the cylinders reconstructed from the 40 3D
points of Fig. 18(d) and (e) are 9.91 mm and 9.84 mm,
compared with the real radius of 10 mm. Fig. 18(h) and (i) gives the
error maps of the 3D points corresponding to Fig. 18(d) and (e) when
fitted by a cylinder shape with radius of 10 mm. The error ranges are
[−0.07 mm, 0.06 mm] and [−0.10 mm, 0.06 mm]. The results
demonstrate that the reconstruction error of our device is within
0.2 mm.
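The cylinder validation above can be illustrated with a minimal sketch. This is not the authors' code: it assumes the cylinder axis is aligned with the z axis and uses an algebraic (Kåsa-style) least-squares circle fit to the (x, y) projection, recovering the radius and the radial error map from synthetic noisy points on a 10 mm cylinder.

```python
import numpy as np

def fit_cylinder_radius(points):
    """Least-squares circle fit to the (x, y) projection of 3D points,
    assuming the cylinder axis is aligned with z.
    Circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    # Signed radial residuals play the role of the error map
    errors = np.hypot(x - cx, y - cy) - r
    return (cx, cy), r, errors

# Synthetic test: 40 noisy points on a cylinder of true radius 10 mm
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 40)
z = rng.uniform(0, 20, 40)
noise = rng.normal(0, 0.03, 40)
pts = np.column_stack([(10 + noise) * np.cos(theta),
                       (10 + noise) * np.sin(theta), z])
center, radius, err = fit_cylinder_radius(pts)
print(round(radius, 2))  # close to 10
```

With noise of this magnitude the estimated radius stays within a few hundredths of a millimetre of the true value, mirroring the 9.91/9.84 mm estimates reported above.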
5.2. Comparison and analysis of reconstruction results based on
different fingerprint feature correspondences
By following the five steps introduced in Section 2, recon-
structed 3D fingerprint images can be obtained. Since three
fingerprint images are captured at one time and the central
camera is selected as the reference system, the proposed recon-
struction system consists of two parts (left-side camera and
central camera, right-side camera and central camera) according
to binocular stereo vision theory. In this paper, these two parts
are combined before the fourth step by normalizing the calcu-
lated depth values of correspondences into [0, 1]. Here, the
Min–Max normalization strategy is used. This combination is
adopted for two reasons. One is that there is only a partly overlapping
region between two adjacent fingerprint images, and the distribu-
tion of correspondences may concentrate on a small part of the
fingerprint images; a larger area of the fingerprint image can be
covered by discrete correspondences through combining the two parts of
the system. The other is that it is very simple to accomplish, and the
system error of combining the two parts before model fitting is
alleviated. Table 2 shows the reconstruction results based on three
different fingerprint feature correspondences using the example
images shown in Fig. 19. The results differ across feature matched
pairs due to the different numbers and distributions of established
fingerprint feature correspondences and the existence of false
correspondences.
To investigate which features are more suitable for 3D finger-
print reconstruction, we also manually labeled fingerprint corre-
spondences, as shown in Fig. 20. The histograms of the error maps
between the reconstructed results in Table 2 and Fig. 20 are shown in
Fig. 21. The results show that when a single feature is used, the
reconstruction based on SIFT features achieves the best result, while
the ridge-feature-based one is the worst. When features are combined,
the best reconstruction results can be generated if all
three features of correspondences are used. However, comparable
results are obtained by using SIFT and minutiae. Considering the
computational complexity, it is recommended to simply use SIFT
and minutiae.
5.3. Validation of the estimated finger shape model
The effectiveness of the proposed finger shape model is
validated by analyzing the fitting errors. Table 3 presents the
errors, measured by the mean distance and the standard variation,
between the estimated finger shape and the original 3D point
cloud data in Fig. 16(a). It can be seen that the error between V and
the Ṽ obtained from Eq. (2) is smaller than the one between V and
the Ṽ obtained from Eq. (3). Next, the errors
between the 3D point cloud data and their corresponding fitting
results for all 440 fingers we collected are computed. It can be seen
from Fig. 22 that the binary quadratic function is more suitable for
the finger shape model, since smaller errors are obtained between
the original 3D point cloud data and their corresponding fitting
results by the binary quadratic function. For this reason, the binary
quadratic function is chosen as the finger shape model in
this paper.
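A binary quadratic surface fit of the kind used for the finger shape model can be sketched with ordinary least squares. This is an illustrative reconstruction on synthetic data, not the authors' implementation; the model form z = c0 + c1·x + c2·y + c3·x² + c4·xy + c5·y² and the mean-distance/standard-variation error measures follow the description in the text.

```python
import numpy as np

def fit_binary_quadratic(x, y, z):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2.
    Returns the coefficients plus the mean and standard deviation
    of the absolute residuals (the error measures used in Table 3)."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = np.abs(z - A @ coeffs)
    return coeffs, residuals.mean(), residuals.std()

# Synthetic finger-like dome: higher in the center, lower at the sides
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)
y = rng.uniform(-1, 1, 500)
z = 1.0 - 0.4 * x**2 - 0.1 * y**2 + rng.normal(0, 0.005, 500)

coeffs, mean_err, std_err = fit_binary_quadratic(x, y, z)
print(round(mean_err, 3), round(std_err, 3))
```

Because the synthetic surface really is quadratic, the residual statistics come out at the noise level, which is the behavior the paper relies on when comparing fitting models in Table 3 and Fig. 22.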
Since the final 3D finger shape is obtained after interpolation
according to the prior estimated finger shape model, we compared
the reconstruction result with the 3D point cloud data of the same
finger to verify the effectiveness of the model. From the results
shown in Fig. 23, it can be seen that the profile of the finger shape
reconstructed from multiple cameras is similar to the 3D point
cloud data, even though it is not as accurate. The real distance
between the upper left core point and the lower left delta point is
also calculated and shown in Fig. 23(a) and (c); the values are
0.357 and 0.386, respectively. As a result, it is concluded that the
estimated finger shape model is effective even though there is an
error between the reconstruction result and the 3D point
cloud data.
5.4. Reconstruction system computation time analysis
There are six main parts in our reconstruction system,
from image acquisition to result generation, as the block diagram
in Fig. 3 shows. The reconstruction method is implemented in
Matlab on a Fujitsu notebook with an Intel Core 2 Duo
T9600 (2.80 GHz) processor. Image acquisition consumes no
more than 100 ms to capture the three views of fingerprint images,
since the frame rate of each camera is 30 frames/s. Because both
the camera parameter calculation and the shape model estimation
are done off-line, they do not occupy any time in the whole
system. The correspondence establishment step consists of fea-
ture extraction and matching, which consumes considerable time.
This time varies for different images, so the average times
statistically calculated over our database are used as the measure;
they are 60.3 s and 24.32 s, respectively. It takes 0.31 s to compute the 3D

Fig. 19. Example fingerprint images captured by our device (left, middle, right).
Table 2
Reconstruction results from different fingerprint feature correspondences of Fig. 20. Each row shows the established correspondences and the resulting reconstructed 3D fingerprint image (images not reproduced in this transcript).

Used feature: SIFT feature | Minutiae | Ridge feature
Feature combination: SIFT feature and minutiae | SIFT and ridge feature
coordinates of feature correspondences. For interpolation, the
code included in the Matlab toolbox is employed, and the time
consumed is 1.21 s. To summarize, it takes approximately 1.5 min to
generate a 3D image using the proposed system. It is believed,
however, that this time will be largely reduced once the code is
compiled in C/C++ and the multithread processing
technique is used.
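The per-stage times reported above can be summed as a quick consistency check against the stated total of roughly 1.5 min (image acquisition is taken as its 100 ms upper bound):

```python
# Per-stage times reported for the prototype pipeline, in seconds
times = {
    "image acquisition": 0.1,          # < 100 ms for three views at 30 fps
    "feature extraction": 60.3,        # average over the database
    "feature matching": 24.32,         # average over the database
    "3D coordinate computation": 0.31,
    "interpolation": 1.21,
}
# Camera calibration and shape model estimation are off-line, so they add 0 s
total = sum(times.values())
print(f"{total:.2f} s ~ {total / 60:.1f} min")  # 86.24 s ~ 1.4 min
```

The on-line total comes to about 86 s, consistent with the "approximately 1.5 min" figure, and it shows that correspondence establishment dominates the run time, which is why compiling that step in C/C++ is the suggested optimization.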
6. Conclusion and future work
This paper investigates a 3D reconstruction technique based on
limited feature correspondences in 2D fingerprint images captured
by the designed multi-view touchless fingerprint imaging device.
Specific to the characteristic of low ridge–valley contrast of touch-
less fingerprint images, an improved fingerprint enhancement
Table 2 (continued)

Feature combination: Minutiae and ridge feature | SIFT feature, minutiae and ridge feature
Fig. 20. Reconstruction of the 3D finger shape of Fig. 19. (a) Manually labeled correspondences between fingerprint images, (b) reconstructed 3D finger shape based on (a).
Table 3
Mean distance and standard variation of the error map between the estimated finger shape and the real finger shape of the example images in Fig. 16.

Fitting model function | Mean distance mean(V − Ṽ) | Standard variation std(V − Ṽ)
f1(x, y) | 0.024 | 0.019
f2(x, y) | 0.082 | 0.057
Fig. 21. Histograms of the error maps between the reconstructed results in Table 2 and Fig. 20(b). (a) Histogram of the error map between Fig. 20(b) and the reconstruction result using SIFT features only, (b) using minutiae only, (c) using ridge features only, (d) using both SIFT features and minutiae, (e) using both SIFT features and ridge features, (f) using both minutiae and ridge features, (g) using SIFT features, minutiae and ridge features.
[Fig. 22: four panels plotting the fitting error, as Error (Mean Distance) and Error (Standard Variation), against the Finger Sample index (0–450) for the two fitting models.]
Fig. 22. Errors between the original 3D point cloud data of all 440 fingers we collected and their corresponding fitting results by different models. (a) Errors represented by the mean distance between the original 3D point cloud data and their corresponding fitting results by the binary quadratic function, (b) errors represented by the standard variation for the binary quadratic function, (c) errors represented by the mean distance for the mixed model, (d) errors represented by the standard variation for the mixed model.
Fig. 23. Comparison of 3D fingerprint images from the same finger but different acquisition techniques. (a) Original fingerprint image captured by the camera when collecting the 3D point cloud, (b) 3D point cloud collected by one camera and a projector using the SLI method, (c) original fingerprint image captured by our device, (d) reconstructed 3D fingerprint image with labeled correspondences.
method is proposed, so as to extract more robust fingerprint
features. Then, three frequently used features, i.e., SIFT feature,
ridge feature and minutiae, having different numbers and various
distributions, are considered for correspondence establishment.
Correspondences are finally established by adopting the hierarch-
ical fingerprint matching approaches. The finger shape model in
this paper is estimated by analyzing 3D point cloud finger data
collected by one camera and a projector using the SLI method.
Results show that the binary quadratic function is more suitable for
the finger shape model compared with another mixed model pro-
posed in the paper. By reconstructing a standard cylinder object, the
methodology of the reconstruction technique, as well as the capturing
device, is shown to be reasonable and feasible.
The comparison and analysis of 3D fingerprint reconstruction results
based on different fingerprint feature correspondences illustrate that
the best reconstruction results can be generated if all three features of
correspondences are used. However, it is recommended to simply use
SIFT and minutiae, since comparable results are achieved by using
them. The effectiveness of the estimated finger shape model is verified
by comparing the reconstructed 3D finger shape with the correspond-
ing 3D point cloud finger data.
Currently, researchers find that 3D fingerprint images provide
more attributes for fingerprint features than 2D fingerprint
images. For instance, a minutia in a 2D fingerprint image is
usually represented by its location {x, y} and orientation θ, while
in the 3D case it may be denoted by {x, y, z, θ, φ}, where x, y and z
are the spatial coordinates, and θ and φ are the two orientation
angles of the ridge in 3D space. Thus, fingerprint recognition with
higher security can be achieved by matching features in 3D space
(e.g., 3D minutia matching [26]). By observing fingerprints in 3D
images, we find that the center part of the finger is higher than the
side parts, and the core point of the fingerprint is located at almost
the highest part of the finger. These characteristics of 3D finger-
print images benefit alignment when two fingerprint images are
compared. Thus, our future work will investigate the application of
such 3D information to fingerprint recognition.
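The {x, y, z, θ, φ} representation can be made concrete with a small sketch. The `Minutia3D` type and the `distance_3d` dissimilarity below are hypothetical illustrations of how the extra attributes could enter a matching score, not the scheme of [26]; the weight `w_angle` is an assumed parameter.

```python
from dataclasses import dataclass
import math

@dataclass
class Minutia3D:
    x: float      # spatial coordinates on the reconstructed finger surface
    y: float
    z: float
    theta: float  # in-plane ridge orientation (radians)
    phi: float    # second orientation angle of the ridge in 3D space

def angular_diff(a, b):
    """Smallest absolute difference between two angles, in radians."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def distance_3d(m1, m2, w_angle=1.0):
    """Toy dissimilarity: Euclidean distance in 3D plus weighted
    angular differences of the two ridge orientation angles."""
    spatial = math.dist((m1.x, m1.y, m1.z), (m2.x, m2.y, m2.z))
    angular = angular_diff(m1.theta, m2.theta) + angular_diff(m1.phi, m2.phi)
    return spatial + w_angle * angular

a = Minutia3D(1.0, 2.0, 0.5, 0.30, 0.1)
b = Minutia3D(1.1, 2.1, 0.5, 0.35, 0.1)
print(round(distance_3d(a, b), 3))  # → 0.191
```

Compared with a 2D minutia {x, y, θ}, the z coordinate and the second angle φ give a matcher two extra dimensions in which impostor pairs can be separated, which is the basis of the higher-security claim above.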
Conflict of interest statement
None declared.
Acknowledgments
The authors would like to thank the editor and the anonymous
reviewers for their help in improving the paper. The work is
partially supported by the GRF fund from the HKSAR Government,
the central fund from Hong Kong Polytechnic University, the NSFC
fund (61020106004, 61272292, 61271344, 61101150), the Shenzhen
Fundamental Research fund (JC201005260184A), the Shenzhen special
fund for the strategic development of emerging industries
(JCYJ20120831165730901), and the Key Laboratory of Network Oriented
Intelligent Computation, Shenzhen, China.
References
[1] D. Maltoni, D. Maio, A. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2009.
[2] G. Parziale, E. Diaz-Santana, The surround imager: a multi-camera touchless device to acquire 3D rolled-equivalent fingerprints, in: Proceedings of the International Conference on Biometrics (ICB), Hong Kong, China, 2006, pp. 244–250.
[3] R. Hartley, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, U.K., 2000.
[4] C. Hernandez, G. Vogiatzis, R. Cipolla, Multiview photometric stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (3) (2008) 548–554.
[5] F. Blais, M. Rioux, J. Beraldin, Practical considerations for a design of a high precision 3-D laser scanner system, Proceedings of SPIE 959 (1988) 225–246.
[6] Y. Wang, L. Hassebrook, D. Lau, Data acquisition and processing of 3-D fingerprints, IEEE Transactions on Information Forensics and Security 5 (4) (2010) 750–760.
[7] G. Stockman, S. Chen, G. Hu, N. Shrikhande, Sensing and recognition of rigid objects using structured light, IEEE Control Systems Magazine 8 (3) (1988) 14–22.
[8] G. Hu, G. Stockman, 3-D surface solution using structured light and constraint propagation, IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (4) (1989) 390–402.
[9] F. Liu, D. Zhang, G. Lu, C. Song, Touchless multi-view fingerprint acquisition and mosaicking, IEEE Transactions on Instrumentation and Measurement, http://dx.doi.org/10.1109/TIM.2013.2258248, submitted for publication.
[10] Z. Zhang, A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (11) (2000) 1330–1334.
[11] J. Bouguet, Camera Calibration Toolbox for Matlab, http://www.vision.caltech.edu/bouguetj/calib_doc/download/index.html.
[12] H. Choi, K. Choi, J. Kim, Mosaicing touchless and mirror-reflected fingerprint images, IEEE Transactions on Information Forensics and Security 5 (1) (2010) 52–61.
[13] D. Zhang, F. Liu, Q. Zhao, G. Lu, N. Luo, Selecting a reference high resolution for fingerprint recognition using minutiae and pores, IEEE Transactions on Instrumentation and Measurement 60 (3) (2011) 863–871.
[14] A. Kumar, Y. Zhou, Contactless fingerprint identification using level zero features, in: Proceedings of CVPR Workshops (CVPRW 2011), Colorado Springs, USA, June 2011, pp. 121–126.
[15] U. Park, S. Pankanti, A. Jain, Fingerprint verification using SIFT features, in: Proceedings of SPIE 6944, 69440K, 2008.
[16] J. Feng, Combining minutiae descriptors for fingerprint matching, Pattern Recognition 41 (1) (2008) 342–352.
[17] S. Malathi, C. Meena, Partial fingerprint matching based on SIFT features, International Journal on Computer Science and Engineering 4 (2) (2010) 1411–1414.
[18] A. Jain, A. Ross, Fingerprint mosaicking, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, Florida, vol. 4, May 2002, pp. IV-4064–IV-4067.
[19] S. Shah, A. Ross, J. Shah, S. Crihalmeanu, Fingerprint mosaicking using thin plate splines, in: Proceedings of the Biometric Consortium Conference, 2005.
[20] K. Choi, H. Choi, S. Lee, J. Kim, Fingerprint image mosaicking by recursive ridge mapping, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Special Issue on Recent Advances in Biometrics Systems) 37 (5) (2007) 1191–1203.
[21] D. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision 60 (2) (2004) 91–110.
[22] M. Fischler, R. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM 24 (6) (1981) 381–395.
[23] A. Ross, S. Dass, A. Jain, A deformable model for fingerprint matching, Pattern Recognition 38 (1) (2005) 95–103.
[24] L. Hong, Y. Wan, A.K. Jain, Fingerprint image enhancement: algorithm and performance evaluation, IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (8) (1998) 777–789.
[25] A. Jain, L. Hong, R. Bolle, On-line fingerprint verification, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (4) (1997) 302–314.
[26] G. Parziale, A. Niel, Fingerprint matching using minutiae triangulation, in: Proceedings of the International Conference on Biometric Authentication (ICBA), LNCS, vol. 3072, 2004, pp. 241–248.
[27] S. Rusinkiewicz, O. Hall-Holt, M. Levoy, Real-time 3D model acquisition, in: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, vol. 21, no. 3, July 2002, pp. 438–446.
[28] B. Bradley, A. Chan, M. Hayes, A simple, low cost, 3D scanning system using the laser light-sectioning method, in: Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, Victoria, Vancouver Island, Canada, May 2002, pp. 299–304.
[29] D. Zhang, V. Kanhangad, N. Luo, A. Kumar, Robust palmprint verification using 2D and 3D features, Pattern Recognition 43 (1) (2010) 358–368.
[30] H.O. Saldner, J.M. Huntley, Temporal phase unwrapping: application to surface profiling of discontinuous objects, Applied Optics 36 (13) (1997) 2770–2775.
[31] D. Zhang, G. Lu, W. Li, Palmprint recognition using 3-D information, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 39 (5) (2009) 505–519.
[32] S. Chatterjee, A. Hadi, Influential observations, high leverage points, and outliers in linear regression, Statistical Science 1 (3) (1986) 379–416.
[33] N. Draper, H. Smith, Applied Regression Analysis, 2nd ed., Wiley, U.S., 1981.
[34] S. Chikkerur, A. Cartwright, V. Govindaraju, Fingerprint enhancement using STFT analysis, Pattern Recognition 40 (1) (2007) 198–211.
[35] S. Jirachaweng, V. Areekul, Fingerprint enhancement based on discrete cosine transform, in: Proceedings of the International Conference on Biometrics, LNCS 4642, 2007, pp. 96–105.
[36] J. Weickert, Coherence-enhancing diffusion filtering, International Journal of Computer Vision 31 (2–3) (1999) 111–127.
[37] H. Chen, G. Dong, Fingerprint image enhancement by diffusion processes, in: Proceedings of the 13th International Conference on Image Processing, 2006, pp. 297–300.
[38] Y. Hao, C. Yuan, Fingerprint image enhancement based on nonlinear anisotropic reverse diffusion equations, in: Proceedings of the 26th Annual
S0031-3203(13)00261-6/othref0035http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0035http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0030http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0030http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0030http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref12http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref12http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref12http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref11http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref11http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0025http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0025http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0020http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0020http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0020http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref10http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref10http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref10http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref9http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref9http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref9http://www.vision.caltech.edu/bouguetj/calib_doc/download/index.htmlhttp://www.vision.caltech.edu/bouguetj/calib_doc/download/index.htmlhttp://refhub.elsevier.com/S0031-3203(13)00261-6/sbref8http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref8http://dx.doi.org/10.1109/TIM.2013.2258248http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0010http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0010http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref7http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref7http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref7http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref6http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref6http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref5http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref5http://refhub.elsevier.com/S0031-
3203(13)00261-6/sbref5http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref4http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref4http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref3http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref3http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref3http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref2http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref2http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0005http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0005http://refhub.elsevier.com/S0031-3203(13)00261-6/othref0005http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref1http://refhub.elsevier.com/S0031-3203(13)00261-6/sbref1 -
Feng Liu received the B.S. and M.S. degrees from the Department of Electrical Engineering, Xidian University, Xi'an, Shaanxi, China, in 2006 and 2009, respectively. She is now a Ph.D. student in Computer Science in the Department of Computing at the Hong Kong Polytechnic University. Her research interests include pattern recognition and image processing, with a particular focus on their applications to fingerprints.
David Zhang graduated in Computer Science from Peking University. He received his M.Sc. in Computer Science in 1982 and his Ph.D. in 1985 from the Harbin Institute of Technology (HIT). From 1986 to 1988 he was a Postdoctoral Fellow at Tsinghua University and then an Associate Professor at the Academia Sinica, Beijing. In 1994 he received his second Ph.D. in Electrical and Computer Engineering from the University of Waterloo, Ontario, Canada. Currently, he is Head of the Department of Computing and a Chair Professor at the Hong Kong Polytechnic University, where he is the Founding Director of the Biometrics Technology Centre (UGC/CRC), supported by the Hong Kong SAR Government in 1998. He also serves as Visiting Chair Professor at Tsinghua University, and as Adjunct Professor at Shanghai Jiao Tong University, Peking University, the Harbin Institute of Technology, and the University of Waterloo. He is the Founder and Editor-in-Chief of the International Journal of Image and Graphics (IJIG); Book Editor of the Springer International Series on Biometrics (KISB); Organizer of the first International Conference on Biometrics Authentication (ICBA); and Associate Editor of more than ten international journals, including IEEE Transactions and Pattern Recognition. He is a Technical Committee Chair of IEEE CIS and the author of more than 10 books and 200 journal papers. Professor Zhang is a Croucher Senior Research Fellow, a Distinguished Speaker of the IEEE Computer Society, and a Fellow of both IEEE and IAPR.
F. Liu, D. Zhang / Pattern Recognition 47 (2014) 178–193