

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 9, SEPTEMBER 2013 3625

Retina Verification System Based on Biometric Graph Matching

Seyed Mehdi Lajevardi, Member, IEEE, Arathi Arakala, Stephen A. Davis, and Kathy J. Horadam

Abstract— This paper presents an automatic retina verification framework based on the biometric graph matching (BGM) algorithm. The retinal vasculature is extracted using a family of matched filters in the frequency domain and morphological operators. Then, retinal templates are defined as formal spatial graphs derived from the retinal vasculature. The BGM algorithm, a noisy graph matching algorithm, robust to translation, non-linear distortion, and small rotations, is used to compare retinal templates. The BGM algorithm uses graph topology to define three distance measures between a pair of graphs, two of which are new. A support vector machine (SVM) classifier is used to distinguish between genuine and imposter comparisons. Using single as well as multiple graph measures, the classifier achieves complete separation on a training set of images from the VARIA database (60% of the data), equaling the state-of-the-art for retina verification. Because the available data set is small, kernel density estimation (KDE) of the genuine and imposter score distributions of the training set is used to measure performance of the BGM algorithm. In the one-dimensional case, the KDE model is validated with the testing set. A 0% EER on testing shows that the KDE model is a good fit for the empirical distribution. For the multiple graph measures, a novel combination of the SVM boundary and the KDE model is used to obtain a fair comparison with the KDE model for the single measure. A clear benefit in using multiple graph measures over a single measure to distinguish genuine and imposter comparisons is demonstrated by a drop in theoretical error of between 60% and more than two orders of magnitude.

Index Terms— Retinal feature extraction, retina vessels detection, multiple graph attributes, biometric graph matching, person verification.

I. INTRODUCTION

THE vascular pattern in the retina is thought to be highly unique across the human population [1]. Apart from its claimed distinctiveness, the retina is robust to changes in human physiology, remaining almost the same throughout the lifespan of an individual. As it is internal to the body, the retinal image is not as accessible as common commercial biometrics like fingerprint and face, though it is claimed to be possible to capture a retina image from up to 1 metre.

Manuscript received August 9, 2012; revised February 20, 2013 and April 23, 2013; accepted May 22, 2013. Date of publication June 4, 2013; date of current version August 13, 2013. This work was supported in part by the Australian Research Council under Grant DP120101188. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Nikolaos Boulgouris.

The authors are with the School of Mathematical and Geospatial Sciences, Royal Melbourne Institute of Technology, Melbourne 3000, Australia (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIP.2013.2266257

While accessibility is desirable in most commercial biometric systems, it also makes the biometric easier to spoof. The retinal vasculature is a biometric that is best suited to high security applications where the user is co-operative [1]. Liveness is automatically ensured if authenticating in real time. These advantages, along with improvement in scanning technology, have contributed to the resurgence of interest in the retinal biometric in the last few years [2]–[5].

This paper proposes an automatic retina verification framework where the vascular pattern of a retinal image is processed to create a spatial graph template. A biometric graph matching algorithm is presented, which compares two graphs derived from two retinal images and returns a set of distance measures between them. The lack of large publicly available retinal databases inhibits large-scale and consistent testing of any retinal verification system. To overcome this issue, the database used is split into two parts: a training set and a testing set. Kernel density estimation is used to model the distribution of distances resulting from comparisons in the training set [6]. The model is then used to select an appropriate operating threshold for the system. The verification system is tested at the chosen threshold using the testing set. Each pair of templates from the testing set is input to the BGM to determine the distance between them. If this distance is less than or equal to the specified threshold, they are labeled as genuine; otherwise, they are labeled as imposter. Figure 1 shows the proposed verification framework.
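The threshold selection and decision rule just described can be sketched with SciPy's kernel density estimator. Everything below — the score values, the grid, and the density-crossing rule for picking the threshold — is an illustrative assumption, not the paper's tuned system:

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical training distances (assumed data): genuine comparisons
# produce small distances, imposter comparisons large ones
genuine = np.array([0.10, 0.15, 0.12, 0.20, 0.18])
imposter = np.array([0.70, 0.80, 0.75, 0.85, 0.78])

# KDE models of the two score distributions
kg, ki = gaussian_kde(genuine), gaussian_kde(imposter)

# operating threshold: grid point where the estimated densities cross
xs = np.linspace(0.0, 1.0, 1001)
t = xs[np.argmin(np.abs(kg(xs) - ki(xs)))]

def verify(distance, threshold=t):
    """Label a comparison genuine iff its distance <= threshold."""
    return "genuine" if distance <= threshold else "imposter"
```

In the paper the threshold is chosen from the KDE model of the training scores; the density-crossing heuristic here is just one plausible way to do that.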

This paper is organized as follows. Section II describes the existing work on retina in biometrics. Retinal feature extraction and representation of the retina graph are described in Section III. The BGM algorithm is detailed in Section IV. The retina database, optimised system parameters and discussion of the experiments are described in Section V. Analysed experimental results are presented in Section VI and Section VII draws some final conclusions.

II. BACKGROUND

For biometric purposes, retinal templates have been created using general image processing techniques on the raw image [2] and entire images have been matched based on correlation of the overall blood vessel pattern from the image [7]. More often, feature extraction techniques have been used for retinal images, including traversing a circular region centred on the fovea [1], an annulus centred on the optical disc [3] and extraction of restricted patterns of vessels crossing the retina boundary [8]. Vessel calibre was added in [9] as an additional

1057-7149/$31.00 © 2013 IEEE


Fig. 1. Block Diagram of Verification Framework.

feature to [3]. Templates based on feature points (positions of vessel branchings and crossovers) have been studied recently in [4], [10]–[15] and the effect of matching such features in fovea-centred versus optical disc-centred images has been studied in [5].

One obstacle to studying retina as a biometric has been the lack of large public databases of clinically normal retinas. Some of the results above have been for very small sample sizes. The VARIA database [16] is the largest currently publicly available, with 139 individuals and 233 images. The studies [4], [5], [13]–[15] use it. They use different samples from the VARIA database, different algorithms to match feature points, and different functions to score comparisons. In [17, Table 1] the scores obtained in [4], [14] are compared with results for 7 scoring functions using the manually extracted feature point templates of [15]. All studies using a function based on the geometric mean of set sizes give an Equal Error Rate (EER) of 0%. That is, using this specific scoring function, there is no overlap between the sample population of genuine comparisons (between two retina images from the same person) and the sample population of imposter comparisons (between retina images from two different people). Ideally, sample sizes should be large enough to permit measurement of the False Match Rate (FMR) and the False Non-Match Rate (FNMR) and so derive Receiver Operating Characteristic (ROC) curves and EERs which better represent the population, but this is not possible using current databases. Kernel density estimation based on manually extracted templates is used to model the population score distributions in [17].

Automatic segmentation of retinal vessels is a very challenging research topic, which has gained much attention during the last few decades [18]–[20]. The presence of noise, the low contrast between vasculature and background, brightness, and the variability of vessel width and shape are the main obstacles. A number of approaches have been proposed for vessel segmentation from retinal images. Early research on retina used matched filtering for blood vessel segmentation [18]. In [19], [20], tracking methods were proposed to obtain the vascular structure of the retina. Region-based threshold probing of the matched filter response [21], morphological operations and a combination of nonlinear filtering and morphological operations [22], [23] were previously used to segment the vessels from the retinal image. In [24], the retinal image is filtered by two-dimensional (2-D) Gabor filters and classified using a Gaussian mixture model (GMM) for vessel segmentation. A combination of matched filters and likelihood ratio vesselness (LRV) was proposed in [25]. In [26], a line detector and support vector machine (SVM) were used for vessel detection. Some of these methods are complicated and time-consuming. Furthermore, most of them have been used in medical science studies for which more detailed information (e.g. shape, width, length) about the vessels is needed. To overcome these issues, a family of matched filters in the frequency domain, combined with morphological operators, is used for retinal vessel segmentation.

The retinal vasculature can be represented as a formal graph when vessel branch and crossing point locations are augmented by information about the connections between them. This information represents the retinal vasculature better than vessel feature point locations alone. Graphs from vasculature are used in medical imaging for vessel registration and vessel annotation [27]. In this paper, a spatial graph extracted from the retinal vasculature as described in [15] is used for the BGM algorithm.

III. RETINAL IMAGE FEATURES

A. Retinal Image Enhancement

The image pre-processing procedure is a very important step in the retina feature extraction task. Its aim is to obtain images which have normalized intensity, uniform size and shape, and depict only enhanced retinal vessels. The retinal image intensity is normalized using histogram equalization [28]. The blood vessels usually have poor local contrast [18], [29]. In order to enhance the vessels, matched filter detection is used to detect piece-wise linear segments of vessels in retinal images. The gray-scale profile of the cross section of a blood vessel can be approximated by a Gaussian filter.

As the retinal vessels have different directions, a family of 2-D matched filters with different orientations is used to search for vessel segments along all possible directions [18]. Given an input image of N_1 × N_2 pixels, i(x, y), represented in gray-scale, for each pixel position the response with maximum modulus over all possible orientations is given by

o(x, y) = max_θ [i(x, y) ∗ g_θ(x, y)]   (1)


where 1 ≤ x ≤ N_1, 1 ≤ y ≤ N_2, "∗" denotes convolution and g_θ(x, y) is a family of matched filters given by

g_θ(x, y) = h_θ(x, y) − (Σ_{x=1}^{N_1} Σ_{y=1}^{N_2} h_θ(x, y)) / N.   (2)

Here h_θ(x, y) = h(x cos θ − y sin θ, x sin θ + y cos θ) represents a 2-D filter with orientation θ = πa/6, a = 0, …, 5, N is the number of pixels in the matched filter and h(x, y) is given by h(x, y) = h_1(x)h_2(y), where

h_1(x) = −e^(−x²/2σ_x²),   h_2(y) = Π(y/L)   (3)

where σ_x represents the bandwidth of the Gaussian filter, L represents the length of the segment for which the vessel is assumed to have a fixed orientation, and Π(y) represents the square function, which is given by

Π(y) = { 0,  |y| ≥ 1/2
         1,  |y| < 1/2.   (4)
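Eqs. (2)–(4) translate almost line-for-line into NumPy. The sketch below builds the zero-mean oriented kernels; the kernel size and the values of σ_x and L are illustrative guesses rather than the paper's tuned settings (the paper uses 16 × 15-pixel filters):

```python
import numpy as np

def matched_filter(sigma_x=2.0, L=9.0, theta=0.0, size=15):
    """Zero-mean matched filter g_theta of Eqs. (2)-(4) (sketch)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotated coordinates: h_theta(x, y) = h(x cos t - y sin t, x sin t + y cos t)
    xr = xs * np.cos(theta) - ys * np.sin(theta)
    yr = xs * np.sin(theta) + ys * np.cos(theta)
    h1 = -np.exp(-xr ** 2 / (2 * sigma_x ** 2))  # inverted Gaussian profile, Eq. (3)
    h2 = (np.abs(yr / L) < 0.5).astype(float)    # square function of y/L, Eq. (4)
    h = h1 * h2
    return h - h.sum() / h.size                  # subtract the mean, Eq. (2)

# the bank of six orientations, theta = pi * a / 6, a = 0..5
bank = [matched_filter(theta=np.pi * a / 6) for a in range(6)]
```

Because each kernel is made zero-mean, flat background regions produce no response.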

Here, in order to reduce the convolution computation, the filtering is implemented in the frequency domain. The Fourier transform (F) of the filter is given by

F(h(x, y)) = ∫∫ h(x, y) e^(−j2π(ux+vy)) dx dy.   (5)

As h_1(x) and h_2(y) are independent functions, their Fourier transforms can be calculated separately as

H_1(u) = ∫ −e^(−x²/2σ_x²) e^(−j2πux) dx = −(√(2π)/σ_u²) e^(−u²/2σ_u²)   (6)

and

H_2(v) = ∫ Π(y/L) e^(−j2πvy) dy = 2 sin(2πvL)/(πv)   (7)

where u and v are the frequency components of the two-dimensional filter and σ_u = 1/σ_x. The frequency domain of h(x, y) is given by

H(u, v) = −(√(2π)/σ_u²) e^(−u²/2σ_u²) · 2 sin(2πvL)/(πv).   (8)

As the frequency domain must be rotated by the same angle θ that was applied to h(x, y), the frequency domain of the matched filter is given by

G_θ(u, v) = H_θ(u, v) − (Σ_{u=1}^{N_u} Σ_{v=1}^{N_v} H_θ(u, v)) / N,   (9)

where H_θ(u, v) = H(u cos θ − v sin θ, u sin θ + v cos θ), N is the number of points in the matched filter and N_u and N_v are the number of points in the u and v frequencies respectively. The filtered image with different orientations is given by

r_θ(x, y) = F⁻¹[I(l_1, l_2) G_θ(u, v)]   (10)

where

I(l_1, l_2) = F(i(x, y)).   (11)

The output is given by

o(x, y) = max_θ ℜ[r_θ(x, y)].   (12)
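Eqs. (10)–(12) amount to multiplying the image spectrum by each zero-padded filter spectrum and keeping the per-pixel maximum of the real inverse transforms. A minimal sketch, where the toy image and the two crude one-dimensional kernels merely stand in for the matched filter bank:

```python
import numpy as np

def filter_bank_response(image, kernels):
    """Max response over orientations via the frequency domain (Eqs. 10-12)."""
    N1, N2 = image.shape
    I = np.fft.fft2(image)              # Eq. (11)
    responses = []
    for g in kernels:
        G = np.fft.fft2(g, s=(N1, N2))  # zero-padded filter spectrum
        r = np.fft.ifft2(I * G)         # Eq. (10)
        responses.append(np.real(r))    # real part, as in Eq. (12)
    return np.max(responses, axis=0)    # maximum over orientations

# toy example: a vertical bright line, one vertical and one horizontal kernel
img = np.zeros((32, 32))
img[:, 16] = 1.0
out = filter_bank_response(img, [np.ones((7, 1)), np.ones((1, 7))])
```

Note that FFT-domain multiplication implements circular convolution, and the real matched filters would be zero-mean, unlike the all-ones toy kernels here.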

Fig. 2. The matched filter with θ = 0, (a) in the spatial domain (g), (b) in the frequency domain (G).

Figure 2 illustrates the spatial and frequency domains of the matched filter.

To distinguish between enhanced vessel segments and the background, the enhanced retinal image, o(x, y), is processed by a specific thresholding scheme. A candidate threshold is defined for each intensity value i as follows:

t_i = |o > i| / |o|,   i = 1, 2, …, max(o)   (13)

where | • | represents the number of elements and t_i is the candidate threshold. It was observed empirically that the veins occupy between 10% and 15% of the pixels in the image o(x, y). Therefore the optimum threshold (T) and binary image (i_b) are given by

T = argmax_i {t_i | t_i < 0.15}   (14)

i_b(x, y) = { 1,  o(x, y) ≥ T
              0,  o(x, y) < T.   (15)
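Since t_i is non-increasing in i, Eq. (14) simply picks the smallest intensity level whose exceedance fraction drops below 15%. A sketch for an integer-valued response image (the 4 × 4 array is invented):

```python
import numpy as np

def binarise(o, frac=0.15):
    """Threshold scheme of Eqs. (13)-(15), assuming integer intensities."""
    levels = np.arange(1, int(o.max()) + 1)
    t = np.array([(o > i).sum() / o.size for i in levels])  # Eq. (13)
    T = levels[np.argmax(t < frac)]   # first level with t_i < frac, Eq. (14)
    return (o >= T).astype(np.uint8)  # Eq. (15)

o = np.array([[0, 1, 5, 9],
              [0, 2, 6, 9],
              [0, 1, 7, 9],
              [0, 2, 8, 9]])
ib = binarise(o)
```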

Figure 3 illustrates the original, enhanced and filtered retinal image. Generating the binary retinal image takes 0.7 seconds per image in the frequency domain, whereas in the spatial domain it takes 1.06 seconds per image.


Fig. 3. (a) Retinal raw image from VARIA database [16], (b) normalized retinal image using histogram equalization, (c) retinal image after applying 6 matched filters in the frequency domain and finding the maximum response using Eq. 12.

B. Retinal Feature Extraction

Feature extraction is the process of generating features to be used in the classification task [30]. The extraction task transforms the rich content of images into various content features. In this research work, the skeleton of the vessels is extracted based on morphological image analysis and the feature points (vessel bifurcations and crossovers) are used for classification [28]. The skeleton of the binary image can be

expressed by

i_s = ⋃_{p=1}^{N_s} i_b^p   (16)

and

i_b^p = (i_b ⊖ pw) − (i_b ⊖ pw) ∘ w   (17)

where N_s = max{p | (i_b ⊖ pw) ≠ ∅}, (i_b ⊖ pw) and ∘ are the p successive morphological erosions of i_b and the opening, respectively, and w is a structuring element.
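Eqs. (16)–(17) are the textbook morphological skeleton. A sketch with SciPy's binary morphology, using an assumed 3 × 3 structuring element:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening

def skeleton(ib, w=np.ones((3, 3), bool)):
    """Morphological skeleton of Eqs. (16)-(17) (sketch).

    Accumulates (i_b eroded p times) minus its opening, for p = 0, 1, ...
    until the eroded image is empty.
    """
    s = np.zeros_like(ib, dtype=bool)
    p = 0
    while True:
        # p successive erosions (p = 0 leaves the image unchanged)
        eroded = binary_erosion(ib, w, iterations=p) if p else ib.astype(bool)
        if not eroded.any():
            break
        s |= eroded & ~binary_opening(eroded, w)  # Eq. (17)
        p += 1
    return s

# a solid 5x5 square reduces to its centre pixel
ib = np.zeros((7, 7), bool)
ib[1:6, 1:6] = True
s = skeleton(ib)
```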

To detect the feature points, the skeleton image (i_s) is scanned with a 9-pixel (3 × 3) mask. If the central pixel has value 1 and has only one neighbour with value 1, then the central pixel is a termination point. If the central pixel has value 1 and has exactly 3 neighbours with value 1, then the central pixel is a feature point. If this leads to neighbouring pixels in the same 3 × 3 region being labelled as feature points, the incorrect labels are removed, as follows. If a central pixel is a feature point and has two or more neighbours which are feature points and are on different sides of the central pixel, then only the central pixel will be listed as a feature point.
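The 3 × 3 scan reduces to counting the 8-neighbours of each skeleton pixel. A sketch of the two counting rules (the de-duplication of adjacent feature-point labels described above is omitted for brevity):

```python
import numpy as np

def classify_points(skel):
    """Termination points (1 neighbour) and feature points (3 neighbours)."""
    s = skel.astype(np.uint8)
    p = np.pad(s, 1)
    # 8-neighbour count via the eight shifted copies of the padded image
    nbrs = sum(p[1 + dy:1 + dy + s.shape[0], 1 + dx:1 + dx + s.shape[1]]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    terminations = (s == 1) & (nbrs == 1)
    features = (s == 1) & (nbrs == 3)
    return terminations, features

# a small Y-shaped skeleton: one bifurcation, three termination points
skel = np.zeros((7, 7), bool)
for y, x in [(3, 3), (2, 3), (1, 3), (4, 2), (5, 1), (4, 4), (5, 5)]:
    skel[y, x] = True
term, feat = classify_points(skel)
```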

From Figure 4(b), it can be seen that there are several spurious feature points (red marks) with short paths (from red marks to termination points). To remove these points, the length of each connected path to a spurious point is calculated by tracing the skeleton of the connected path using a 3 × 3 mask. A connected path ending in a termination point is removed if its length is less than 15 pixels.

The retina graph is generated from the true feature points (yellow points) and the connected paths between them. Figure 4(c) shows the feature points and the resulting straight edges which form the retina graph.

C. The Retina Graph

The retinal template is defined as a spatial graph extracted from the retinal vasculature. The retina graph is defined by g = (V, E, μ, ν), where the vertex set V is the set of feature points extracted from the image skeleton and the edge set E is the set of pairs of feature points connected by a vessel. The vertex labeling function μ : V → ℝ² maps each vertex v to its Cartesian coordinates (q_1, q_2) in the retinal image frame and the edge labeling function ν : E → ℝ² maps each edge e to (l, θ), where the straight line between the pair of connected feature points has length l and slope θ in the retinal image frame. Figure 4(c) shows a typical retina graph.
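The definition above maps directly onto a small data structure. In the sketch below the class and field names are mine, and ν is computed on demand from the vertex labels:

```python
from dataclasses import dataclass, field
import math

@dataclass
class RetinaGraph:
    """Spatial graph g = (V, E, mu, nu) of Section III-C (sketch)."""
    mu: dict = field(default_factory=dict)    # vertex id -> (q1, q2)
    edges: set = field(default_factory=set)   # pairs of vertex ids

    def nu(self, e):
        """Edge label (length, slope) of the straight line between endpoints."""
        (x1, y1), (x2, y2) = self.mu[e[0]], self.mu[e[1]]
        return math.hypot(x2 - x1, y2 - y1), math.atan2(y2 - y1, x2 - x1)

g = RetinaGraph(mu={0: (0.0, 0.0), 1: (3.0, 4.0)}, edges={(0, 1)})
length, slope = g.nu((0, 1))
```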

IV. BIOMETRIC GRAPH MATCHING ALGORITHM

Image samples from the same retinal vascular pattern could vary due to noise, translation, rotation and minor scale changes involved during image capture. Furthermore, the skeleton of the vasculature could be extracted partially when retinal images have slight variation in illumination. These issues cause changes in the structure of the spatial graphs, i.e., changes in the location of the vertices, the number of edges and the number of vertices. Therefore, retina graphs are examples of noisy spatial graphs. The BGM algorithm is a noisy spatial graph matching technique that involves two modules: graph registration and


(a)

(b)

(c)

Fig. 4. (a) Binarised image of the retina in Figure 3(c). (b) The skeleton of the retinal image in (a), with spurious feature points which have short paths ending in termination points (red marks) and true feature points (yellow marks). (c) Retina graph of (b) after removal of spurious points.

error-tolerant graph matching [15]. The graph registration step aligns the graphs by comparing edge and vertex labels. The error-tolerant graph matching stage outputs a spatial graph called the maximum common subgraph (mcs), which will be the intersection of the two compared graphs. Furthermore, the BGM algorithm uses the topological features of the mcs to distinguish between genuine and imposter comparisons.

A. Graph Registration Module

The aim of graph registration is to ensure that the two graphs being compared are on the same spatial frame, free from the effects of translation and rotation of the images. The graph registration module is a type of greedy RANSAC algorithm and is described in Algorithm 1. The first stage of the registration algorithm is edge comparison. Here, every pair of edges from the compared graphs is scored using a dissimilarity measure

Algorithm 1 Graph Registration Module

based on their edge labels. Let N_i denote the number of most similar edge pairs (lowest dissimilarity) which are considered to find the best alignment of the compared graphs (g, g′). For each edge pair, the compared graphs are translated and rotated


to make the start vertices of the respective matching edges the origin. Then, the vertex labels of the translated and rotated graphs (g_o, g′_o) are compared with each other. If the Euclidean distance between the labels of a pair of vertices (v_i, v′_j) is less than a tolerance, ε, the vertices are considered to match. The number of matched vertices (C) is used to calculate a distance score (d_k) between g_o and g′_o. The edge pair giving the smallest d_k is used for alignment. The outputs of the registration module are the pair of aligned graphs (g_a, g′_a). Figures 5(a) and 5(b) show the registration between a typical genuine pair and imposter pair of graphs.
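The inner step of the registration loop — translate an edge's start vertex to the origin, rotate, then count vertices that land within ε of each other — can be sketched as follows. The rotation convention and the nearest-vertex counting are simplifying assumptions, and ranking the N_i candidate edge pairs is omitted:

```python
import numpy as np

def align(points, origin, angle):
    """Translate `origin` to (0, 0), then rotate the point set by -angle."""
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])
    return (points - origin) @ R.T

def match_count(P, Q, eps=5.0):
    """Number of vertices of P whose nearest vertex in Q lies within eps."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return int((d.min(axis=1) <= eps).sum())

# toy check: the same two-vertex edge, rotated by 90 degrees
ref = np.array([[0.0, 0.0], [10.0, 0.0]])
qry = np.array([[0.0, 0.0], [0.0, 10.0]])
aligned = align(qry, qry[0], np.pi / 2)
score = match_count(ref, aligned)
```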

B. Error-Tolerant Graph Matching Module

Once two graphs are aligned, an error-tolerant graph matching module is used to compare the noisy spatial graphs. This module first uses a suboptimal polynomial-time algorithm described in [31], which treats the noisy graph matching problem as an assignment problem in a bipartite graph and is a clever modification of Munkres's algorithm [32]. The input to this algorithm is a cost matrix C which represents the cost of edit operations to convert g_a to g′_a. The permitted edit operations are deleting vertices of g_a, substituting a vertex of g_a with a vertex of g′_a, and inserting vertices of g′_a.

The cost matrix C defined in [31] is given as:

C = [ C_1  C_2
      C_3  C_4 ]   (18)

where C_1 = [c_ij | 1 ≤ i ≤ m, 1 ≤ j ≤ m′] and c_ij represents the cost of substituting vertex v_i of g_a with vertex v′_j of g′_a. The sub-matrices C_2 and C_3 are square matrices with all elements outside the main diagonal equal to ∞. The diagonal elements, c_iε of C_2 and c_εj of C_3, indicate the cost of deleting vertices of g_a and inserting vertices of g′_a respectively. C_4 is an all-zero matrix.

The performance of the matching algorithm is strongly dependent on how well the definitions of c_ij, c_εj and c_iε reflect the characteristics of the graphs being compared. In this paper, the cost functions defined by [15] for spatial biometric graphs are used and are as follows:

c_ij = ||v_i, v′_j|| = √(((q_1)_i − (q′_1)_j)² + ((q_2)_i − (q′_2)_j)²)   (19)

c_εj = C_ID(1 + D(v′_j))   (20)

c_iε = C_ID(1 + D(v_i))   (21)

where C_ID is the cost of deleting a vertex from g_a or inserting a vertex in g′_a, and D(v) is the degree (number of incident edges) of vertex v.
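With the costs in place, the assignment step over the matrix of Eq. (18) can be sketched using SciPy's Hungarian solver, which here merely stands in for the suboptimal bipartite algorithm of [31]. For brevity the degree term of Eqs. (20)–(21) is dropped, a large finite number replaces ∞, and the point sets and C_ID value are invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # stands in for the infinite off-diagonal costs

def edit_assignment(V, Vp, c_id=30.0):
    """Vertex substitutions from the (m+n) x (n+m) cost matrix of Eq. (18)."""
    m, n = len(V), len(Vp)
    C1 = np.linalg.norm(V[:, None, :] - Vp[None, :, :], axis=2)  # Eq. (19)
    C2 = np.full((m, m), BIG); np.fill_diagonal(C2, c_id)        # deletions
    C3 = np.full((n, n), BIG); np.fill_diagonal(C3, c_id)        # insertions
    C4 = np.zeros((n, m))
    C = np.block([[C1, C2], [C3, C4]])
    rows, cols = linear_sum_assignment(C)
    # rows < m matched to cols < n are substitutions; the rest are outliers
    return [(r, c) for r, c in zip(rows, cols) if r < m and c < n]

V = np.array([[0.0, 0.0], [100.0, 0.0]])
Vp = np.array([[1.0, 1.0], [200.0, 200.0]])
pairs = edit_assignment(V, Vp)
```

Only the nearby pair is substituted; the two distant vertices are cheaper to delete and insert at cost C_ID, so they become outliers rather than a forced bad match.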

The algorithm has two outputs, the minimum graph edit distance between g_a and g′_a and the optimal graph edit path. In this paper, the optimal graph edit path is used to define a measure of dissimilarity between g_a and g′_a. The optimal graph edit path indicates the cheapest set of edit operations to convert g_a to g′_a. From these edit operations, the list of substitutions gives the vertex correspondences between g_a and g′_a, while the deletions and insertions identify the outlier vertices in g_a and g′_a respectively. A new graph, called the maximum common subgraph mcs(g_a, g′_a) of g_a and g′_a, is defined from the list of substitutions. Formally, it is a vertex-induced subgraph of g′_a [15] that contains the vertices of g′_a that are substitutions for vertices in g_a. An edge exists in the mcs if an edge exists between a pair of vertices in g_a and the corresponding substituted vertices in g′_a. (Note that defining mcs(g_a, g′_a) as a subgraph of g_a is equivalent.)

A study of the mcss demonstrates striking differences in graph topological characteristics that can be used to distinguish between a genuine and an imposter comparison. Figures 5(c) and 5(d) show the mcss from the genuine and imposter comparisons in Figures 5(a) and 5(b), respectively.

Three measures, two of which are new, are obtained from observations on the topological features of mcs(g_a, g′_a). The first observation is the size of the mcs. A genuine mcs typically has more vertices than an imposter mcs. We use the best-performing distance in the literature (see [17]) based on mcs size, given by

d_v = 1 − |mcs(g_a, g′_a)| / √(|g_a||g′_a|)   (22)

where | • | denotes the number of vertices in the graph. The second, new, observation is that a genuine mcs typically has more connected components than an imposter mcs. Further, the components in a genuine mcs involve more vertices than the components in an imposter mcs. A distance based on the sizes of the two largest components in mcs(g_a, g′_a) is given by

d_c = 1 − (Σ_{i=1}^{2} C_i) / √(|g_a||g′_a|),   (23)

where C_i is the number of vertices in the i-th largest connected component in mcs(g_a, g′_a). The third, new, observation is that a genuine mcs typically has vertices of higher degree than an imposter mcs. The maximum vertex degree in mcs(g_a, g′_a) is given by

D_max = max(D(v_k)),   k = 1, 2, …, |mcs(g_a, g′_a)|.   (24)

One or more of these measures are used in the graph matching process of the BGM algorithm.
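Given the mcs, the three measures are simple arithmetic. A sketch that works from the mcs component sizes and vertex degrees rather than a full graph type (the example numbers are invented):

```python
import math

def graph_distances(component_sizes, n_a, n_b, degrees):
    """The three BGM measures of Eqs. (22)-(24) (sketch)."""
    norm = math.sqrt(n_a * n_b)
    d_v = 1.0 - sum(component_sizes) / norm                            # Eq. (22)
    d_c = 1.0 - sum(sorted(component_sizes, reverse=True)[:2]) / norm  # Eq. (23)
    d_max = max(degrees) if degrees else 0                             # Eq. (24)
    return d_v, d_c, d_max

# mcs with components of 5, 3 and 1 vertices, from graphs of 16 and 25 vertices
d_v, d_c, d_max = graph_distances([5, 3, 1], 16, 25, [1, 2, 3, 2, 1, 1, 1, 1, 1])
```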

V. EXPERIMENTAL PROCEDURE

To evaluate the BGM algorithm, the VARIA database [16] is used for the experiment. It has 233 images from 139 people, with between one and seven samples per person. The images with only one sample and some samples that were identical to each other are excluded from the database. This left 135 retina images from 57 individuals, comprising 42 with 2 samples, 10 with 3 samples, 4 with 4 samples and 1 with 5 samples. The retinal vasculature is extracted using a family of six matched filters (16 × 15 pixels), from which at each pixel only the maximum of their responses is retained, and the binary image is created by thresholding. The spatial graphs are generated from the binary image using the morphological operators described in Section III.

In order to validate the automatic extraction process, the spatial graphs are also created by manually extracting the locations of the feature points using a GUI-based point and click tool [17]. The edges in the graph are identified by tracing along the ridges between two feature points. The BGM algorithm is performed on both automatic and manually extracted retina graphs.

The BGM algorithm has three main parameters that need to be tuned: the number of most similar edge pairs (N_i) to be considered in the registration algorithm, the tolerance (ε) and the cost (C_ID). A large value of N_i significantly increases the time for graph registration, whereas a small value of N_i might overlook the best edge pair. In this experiment, N_i = 50 is chosen empirically, as no improvement is obtained in the matching performance by increasing beyond this number. The value of ε defines the level of tolerance to non-linear distortion that is allowed when matching vertices between two graphs. A large value allows false vertex matches while a small value is too restrictive on the vertex matching process. A small C_ID results in more insertions of vertices in g'_a and deletions of vertices in g_a over substitutions, leading to small and sparse mcss. On the other hand, a large C_ID results in more substitutions of vertices from g_a to g'_a, leading to large and dense mcss.

The VARIA database is divided into two parts: the first 33 individuals form the training set and the rest of the images form the testing set. The training set has a total of 67 genuine comparisons between samples of the same individuals. The first sample of each individual is compared against each other to obtain 528 imposter comparisons. The training set is used to model the distribution of genuine and imposter comparisons. The testing set has a total of 47 genuine comparisons and 253 imposter comparisons and is used to validate the performance of the BGM algorithm.
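The imposter count follows from the protocol: comparing the first sample of each of the 33 training individuals against each other gives all unordered pairs, i.e. C(33, 2). A quick sanity check:

```python
from math import comb

# All unordered pairs of the 33 training individuals' first samples.
assert comb(33, 2) == 528
```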

A. Single Graph Measure Based BGM

The match score between a pair of graphs is defined by a single measure (d_v) from Eq. (22). The training set is used to determine empirically the ε and C_ID values that give the largest separation between the genuine and imposter scores. The genuine scores are expected to be close to 0, while the imposter scores are expected to be close to 1. Complete separation between the genuine and imposter scores is achieved when this distance is positive. The minimum, or nearest neighbour (NN), distance between genuine comparison scores and imposter comparison scores defines the decision range in which the FMR and the FNMR are both 0.
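The NN separation can be computed directly from the two score sets. The scores below are hypothetical stand-ins (genuine distances near 0, imposter distances near 1), not values from the paper:

```python
def nn_separation(genuine_scores, imposter_scores):
    """Nearest-neighbour distance between the two score sets.
    A positive value means complete separation: any threshold inside
    (max genuine, min imposter) gives FMR = FNMR = 0."""
    return min(imposter_scores) - max(genuine_scores)

genuine = [0.31, 0.44, 0.52, 0.47]    # hypothetical d_v scores
imposter = [0.83, 0.91, 0.88, 0.95]
print(nn_separation(genuine, imposter))   # -> 0.31 (positive: complete separation)
```

Parameter tuning then amounts to a grid search over (ε, C_ID) maximising this quantity on the training set.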

The pair (ε, C_ID) that gives the largest NN distance on the training set is used for verification. For the automatically extracted graphs, (ε = 5, C_ID = 5) are the best parameters. The distribution of the scores and their separation is shown in Figure 6(a). For the manually extracted graphs, (ε = 10, C_ID = 5) are the best parameters and the score distribution using these parameters is shown in Figure 6(b).

A complete separation between the genuine and imposter score distributions is achieved on both the automatic and manual training sets. The score distribution from the training set is typically used to select operating thresholds for validation. However, operating threshold selection using this distribution is unreliable, as the training set has limited size. To overcome this issue, a KDE is performed on the observed scores, which gives a statistical estimate of the genuine and imposter score distributions [6]. The bandwidth of the KDE is selected based on the mean integrated squared error [6]. Figures 6(a) and (b) show the KDE curves for the automatically and manually extracted graphs respectively.

Fig. 5. (a) Registration of a pair of graphs in a genuine comparison, (b) registration of a pair of graphs in an imposter comparison, (c) maximum common subgraph (mcs) for the pair of graphs in (a), (d) maximum common subgraph (mcs) for the pair of graphs in (b). In (a) and (b), notice that the aligned edge pair is at (0,0) of the reference frame. The mcs is the result of graph matching where vertices are matched and edges are common to both graphs. The genuine mcs in (c) has longer paths, more vertices and edges, and more and larger components than the imposter mcs in (d).

TABLE I
VALIDATION FOR AUTOMATIC AND MANUAL RETINA GRAPHS USING BGM ALGORITHM

Automatic Graph Extraction
             Training Set (KDE)        Testing Set (Empirical)
             FMR       Threshold       FMR       FNMR
EER          0.005     0.76            0         0
FNMR100      0.002     0.74            0         0
FNMR1000     0.03      0.79            0.02      0

Manual Graph Extraction
             Training Set (KDE)        Testing Set (Empirical)
             FMR       Threshold       FMR       FNMR
EER          0.001     0.76            0.0039    0
FNMR100      0         0.72            0         0
FNMR1000     0.001     0.77            0         0
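A one-dimensional version of this estimation can be sketched with a Gaussian KDE. Note that this sketch uses SciPy's default (Scott) bandwidth rather than the MISE-based bandwidth of the paper, and the scores are synthetic stand-ins for the 67 genuine and 528 imposter training scores:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
genuine = rng.normal(0.45, 0.05, 67)     # stand-in genuine d_v scores
imposter = rng.normal(0.90, 0.03, 528)   # stand-in imposter d_v scores

f_g = gaussian_kde(genuine)
f_i = gaussian_kde(imposter)

def theoretical_rates(t):
    """Theoretical FMR/FNMR at threshold t (comparisons with score <= t accepted)."""
    fmr = f_i.integrate_box_1d(-np.inf, t)        # imposter mass accepted
    fnmr = 1.0 - f_g.integrate_box_1d(-np.inf, t) # genuine mass rejected
    return fmr, fnmr
```

Sweeping t over the gap between the two distributions yields the theoretical thresholds for EER, FNMR100 and FNMR1000 even though the empirical scores are completely separated.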

As retina systems are used in high security applications where false matches must be avoided, the thresholds at FNMR100 and FNMR1000 are used for verification. In this experiment, three operating points are chosen for validation using the testing set: EER, FNMR100 and FNMR1000. The FMRs and corresponding thresholds at these points are obtained from the KDEs and the results are shown in Table I.


Fig. 6. The training set histogram and KDE of genuine and imposter scores for (a) automatic retina graph extraction and (b) manual retina graph extraction. The KDE bandwidth is chosen based on the mean integrated squared error. As there is complete separation between genuine and imposter scores, the EER for both automatic and manual graph extraction is zero when using the distance measure d_v. The theoretical EER using KDE is 0.005 and 0.001 for automatic and manual graph extraction respectively.

B. Multiple Graph Measures Based BGM

While complete separation is obtained using the single measure d_v, the other two measures d_c and D_max are used to improve the discrimination between genuine and imposter comparisons in the training set. Figure 7 plots the three measures resulting from the genuine and imposter comparisons in the training set. The three measures from the training set are input to a linear SVM classifier and the maximum margin plane is used for classification [33]. Figures 8(a) and 8(b) show the classification of genuine and imposter scores by the SVM classifier for automatically and manually extracted graphs.
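This step can be sketched with scikit-learn's linear SVM. The three-column feature matrix below uses synthetic stand-in measures (d_v, d_c, D_max), not the paper's scores; the fitted coefficients define the maximum margin plane w·x + b = 0:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in training measures: rows are (d_v, d_c, D_max) per comparison.
rng = np.random.default_rng(1)
X_gen = np.column_stack([rng.normal(0.45, 0.05, 67),
                         rng.normal(0.50, 0.05, 67),
                         rng.integers(4, 9, 67)])     # genuine: higher max degree
X_imp = np.column_stack([rng.normal(0.90, 0.03, 528),
                         rng.normal(0.92, 0.03, 528),
                         rng.integers(1, 4, 528)])
X = np.vstack([X_gen, X_imp])
y = np.array([0] * 67 + [1] * 528)   # 0 = genuine, 1 = imposter

clf = SVC(kernel='linear').fit(X, y)   # maximum margin plane w.x + b = 0
w, b = clf.coef_[0], clf.intercept_[0]
```

With separable data the plane classifies the training comparisons perfectly, and (w, b) is carried forward as the multi-measure threshold.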

To overcome the limited data size, a KDE of the distribution of the multiple graph measures is used to estimate the verification performance. Two-dimensional KDE surfaces for the three pairs of graph measures, (d_v, d_c), (d_v, D_max) and (d_c, D_max), and a three-dimensional KDE surface for the three graph measures (d_v, d_c, D_max) taken together are calculated. In order to compare the classification performance when using multiple measures over a single measure, we need to compare the misclassification error rates at a comparable threshold on the KDE surfaces and KDE curves respectively. As it is not straightforward to compare a one-dimensional threshold with a multi-dimensional threshold, an innovative technique is introduced to enable a fair comparison. First, an SVM classifier is trained on the multiple graph measures from the genuine and imposter comparisons in the training set to obtain the maximum margin line (two measures) or maximum margin plane (three measures). The maximum margin line or plane is used as the threshold at which the misclassification error is calculated for the multiple measures. Let f_g(x) and f_i(x) denote the KDE surfaces/curves for genuine and imposter comparisons obtained from the training data. The variable x = ⟨x_i⟩, i = 1, 2, ..., D, where D = 1, 2 and 3 for one, two and three graph measures respectively. The KDE surfaces/curves were generated using the kde function in the ks package in R and the bandwidth matrix is generated using an unconstrained data-driven bandwidth matrix plug-in, also available in the ks package. Let Ω denote the region on the genuine side of the decision boundary and Ω′ denote the region on the imposter side of the decision boundary.

Fig. 7. Multiple graph measures of the training set for (a) automatic graph extraction and (b) manual graph extraction. It can be seen that the discrimination between genuine and imposter scores is increased when using the two extra measures d_c and D_max.

For two graph measures, the FMR and FNMR from the KDE surfaces at the maximum margin line are given by:

FMR_{2d} = \iint_{\mathbf{x} \in \Omega} f_i(\mathbf{x})\, dx\, dy   (25)

FNMR_{2d} = \iint_{\mathbf{x} \in \Omega'} f_g(\mathbf{x})\, dx\, dy   (26)
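Eqs. (25)-(26) can be approximated numerically by evaluating each KDE surface on a grid and summing the mass on the wrong side of the SVM line. The two-measure scores and the line coefficients (w, b) below are synthetic placeholders, and the grid bounds are assumed to cover both distributions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in (d_v, d_c) training scores; gaussian_kde expects shape (dims, n).
rng = np.random.default_rng(2)
gen = rng.normal([0.45, 0.50], 0.05, size=(67, 2)).T
imp = rng.normal([0.90, 0.92], 0.03, size=(528, 2)).T
f_g, f_i = gaussian_kde(gen), gaussian_kde(imp)

def fmr_fnmr_2d(w, b, lo=-0.5, hi=2.0, n=200):
    """Riemann-sum approximation of Eqs. (25)-(26): the line w.x + b = 0
    splits the plane, with w.x + b < 0 as the genuine region Omega."""
    xs = np.linspace(lo, hi, n)
    xx, yy = np.meshgrid(xs, xs)
    pts = np.vstack([xx.ravel(), yy.ravel()])
    genuine_side = (w[0] * pts[0] + w[1] * pts[1] + b) < 0
    cell = (xs[1] - xs[0]) ** 2
    fmr = f_i(pts)[genuine_side].sum() * cell      # imposter mass in Omega
    fnmr = f_g(pts)[~genuine_side].sum() * cell    # genuine mass in Omega'
    return fmr, fnmr
```

The tails of the two KDE surfaces cross the line, so both integrals are small but non-trivial even when the empirical scores are completely separated.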

Fig. 8. Discriminating effect of multiple graph measures on the training set of (a) automatic and (b) manual data. The black line illustrates the maximum margin boundary between genuine and imposter scores. Different colors are used for the D_max measure. It can be seen that the distance between genuine and imposter scores with the same D_max is increased compared to the same scores in the d_v measure. Furthermore, the d_c measure improves the separation between genuine and imposter scores.

Fig. 9. The two-dimensional KDE distribution for automatic data with the separating line defined by the SVM classifier. The region below the line is Ω and that above the line is Ω′. When estimating the error, the region under the imposter KDE that lies in Ω contributes to the FMR and the region under the genuine KDE that lies in Ω′ contributes to the FNMR. The axes d*_v and d*_c represent values of d_v and d_c rescaled by the SVM classifier.

TABLE II
IMPROVED PERFORMANCE OF MULTIPLE GRAPH MEASURES OVER SINGLE GRAPH MEASURES

Automatic
FNMR     Multiple Measures      Single Measure     FMR (Multi)     FMR (Single)
0.059    d_v, d_c, D_max        d_v                6.4E-5          4.4E-3
0.039    d_v, d_c               d_v                3.5E-4          1.1E-2
0.052    d_v, D_max             d_v                7.81E-6         6.24E-3
0.25     d_c, D_max             d_c                1.3E-3          3.4E-3

Manual
FNMR     Multiple Measures      Single Measure     FMR (Multi)     FMR (Single)
0.008    d_v, d_c, D_max        d_v                0               5.1E-3
0.014    d_v, d_c               d_v                0               7.3E-4
0.006    d_v, D_max             d_v                0               1.4E-2
0.066    d_c, D_max             d_c                0               7.2E-3

A similar triple integral can be defined for FMR_{3d} and FNMR_{3d} when computing with three graph measures. Figure 8(a) can be treated as a two-dimensional projection of the distribution of the three graph measures onto the (d_v, d_c) plane. Figure 9 shows the corresponding KDE surface for the distribution of (d_v, d_c) as a heat map, with the maximum margin line between the two surfaces. In this case Ω is the region below the line and Ω′ is the region above the line. The KDE surfaces of the genuine and imposter distributions will have their tails in the Ω′ and Ω regions respectively, allowing us to obtain non-trivial FMR and FNMR values at the threshold defined by the maximum margin line. Next, the KDE curves of the single graph measure are used to locate the threshold T where the FNMR is equal to that obtained from the KDE surfaces of multiple graph measures. The corresponding FMR at this threshold T is

FMR_{1d} = \int_0^T f_i(x)\, dx.   (27)
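The matched-FNMR comparison for the single measure can then be sketched: invert the genuine KDE's upper tail to locate T, then read off Eq. (27). The one-dimensional scores are synthetic stand-ins and the root-finding bracket endpoints are placeholder choices:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(3)
f_g = gaussian_kde(rng.normal(0.45, 0.05, 67))    # stand-in genuine d_v KDE
f_i = gaussian_kde(rng.normal(0.90, 0.03, 528))   # stand-in imposter d_v KDE

def matched_threshold_fmr(target_fnmr):
    """Find T with FNMR_1d(T) = target_fnmr, then return (T, FMR_1d(T)),
    where FMR_1d(T) is the integral of f_i from 0 to T (Eq. 27)."""
    fnmr = lambda t: 1.0 - f_g.integrate_box_1d(-np.inf, t)
    T = brentq(lambda t: fnmr(t) - target_fnmr, -1.0, 2.0)   # fnmr is decreasing in t
    return T, f_i.integrate_box_1d(0.0, T)
```

The FMR at this T is the single-measure column of Table II, directly comparable with the multi-measure FMR computed at the same FNMR.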

Four experiments are run on both the automatic and manual training data, comparing three combinations of two graph measures and one combination of three graph measures against their corresponding single graph measure. The FMRs for the multiple and single graph measures are noted in each case at the common FNMR. The results are recorded in Table II.

With regard to execution time, each retina pair comparison, involving graph creation, registration and matching, takes approximately 6 seconds, with the majority of the time consumed by the registration process. Feature extraction from the image takes 0.8 seconds while the graph matching takes only 0.1 seconds. Registration involves testing a maximum of N_i = 50 edge pairs for alignment and takes under 6 seconds on average for every pair. The experiments were run on an Intel 3.4 GHz PC using the software environment R.

VI. DISCUSSION OF RESULTS

When only d_v is used in the BGM, the manually extracted graphs gave a larger separation than the automatic ones. In many cases manual graph extraction captured more features than automatic graph extraction, resulting in a greater distinguishing ability for the manually extracted graphs. If the theoretical model is a good predictor of the behavior of the retinal verification framework, the performance on the testing set must not be significantly worse than the predicted error rates. The results in Table I show that the performance on testing is better than the predicted performance, demonstrating that a reasonable estimate of the performance of the framework can be obtained by using a KDE-based statistical model to predict the score distribution when the size of the database is limited.

The real benefit of the BGM algorithm comes from the ability to use multiple graph measures to support the classification process. As we get complete separation for the retina data with both single and multiple graph measures, the KDE curves/surfaces give a theoretical estimate of the improvement obtained using multiple graph measures. Although we run the risk of extrapolating beyond the data using KDE, this technique is useful for estimating the performance in the absence of large public retina databases. Figures 8 and 9 show correlation between the three graph measures. However, as they are not perfectly correlated, there is potential for improvement in classification when the measures are combined. Observing columns 4 and 5 in Table II, notice that the multiple graph measures give a lower FMR than the single graph measure in every experiment for both automatic and manual data. For the automatic data the drop in FMR on using multiple graph measures ranges from 60% for (d_c, D_max) vs d_c to above two orders of magnitude for the other three comparisons. For the manual data the FMR drops to 0 on using multiple graph measures. The kde function in R estimates zero density for all points on the imposter KDE surface lying below the separating boundary. The lowest non-zero imposter KDE density for the manual data is of the order of 1E-26, and lies above the separating boundary. This implies that a very small FMR value would be found in larger populations.

For comparison with the state of the art: the proposed framework uses multiple characteristics of the scores of retina graphs in the decision making process, whereas previous state-of-the-art retinal verification frameworks used only a single measure to compare the retinal templates [3]-[5], [7], and it achieves complete separation on the VARIA database. State-of-the-art results have shown complete separation between the genuine and imposter scores using the VARIA database [4], [5]. However, only [4] validated against a testing set and none used statistical models of the score distributions to compensate for the small database.

VII. CONCLUSION

An automatic retinal verification system has been presented based on the Biometric Graph Matching algorithm. The retinal image was enhanced using histogram equalization and the retinal vessel skeleton of the enhanced image was extracted using matched filters and morphological operators. A spatial graph was generated from this skeleton of retinal vessels and used as input to the BGM algorithm. The BGM algorithm used multiple graph topological measures, including mcs size, the sizes of the two largest components of the mcs and the maximum vertex degree of the mcs, to compare a pair of noisy spatial graphs and generate a set of scores from the training set. A single graph measure in the BGM algorithm gave complete separation between genuine and imposter scores on the training set. The distribution of these scores was modeled using KDE and the thresholds at EER, FNMR100 and FNMR1000 were chosen to validate the model using the testing set. Experimental results indicated that the KDE model was a good fit to the data and showed marginal difference in performance between the automatically and the manually extracted graphs. Multiple graph measures derived from the BGM algorithm were shown to enhance the matching performance of the verification system. This was demonstrated using a novel combination of SVM classification and KDE curves/surfaces. In every experiment on the manual and automatic data, using two or more graph measures to discriminate genuine comparisons from imposters gave better performance than using a single measure. In many cases there was a drop of over two orders of magnitude in error rate when using multiple measures compared to a single measure.

Future work in this area will include improving the algorithm for the removal of spurious vertices during graph creation, optimising the registration algorithm to increase speed, using additional graph-based edge and vertex attributes derived from the retinal image for matching, and using new distance measures to aid the classification task. This research also seeks to test the verification system on larger retina databases with high-quality color images and multiple samples, when such become available.

REFERENCES

[1] R. Hill, Biometrics: Personal Identification in Networked Society. New York, NY, USA: Springer-Verlag, 1999, ch. 6, pp. 123–141.

[2] C. Mariño, M. G. Penedo, M. J. Carreira, and F. González, “Retinal angiography based authentication,” in Proc. Progr. Pattern Recognit., Speech Image Anal., Lect. Notes Comput. Sci., 2003, pp. 306–313.

[3] H. Farzin, H. Abrishami-Moghaddam, and M. S. Moin, “A novel retinal identification system,” EURASIP J. Adv. Signal Process., vol. 2008, pp. 1–10, Apr. 2008.

[4] M. Ortega, M. G. Penedo, J. Rouco, N. Barreira, and M. J. Carreira, “Retinal verification using a feature points-based biometric pattern,” EURASIP J. Adv. Signal Process., vol. 2009, pp. 1–13, Mar. 2009.

[5] H. Oinonen, H. Forsvik, P. Ruusuvuori, O. Yli-Harja, V. Voipio, and H. Huttunen, “Identity verification based on vessel matching from fundus images,” in Proc. Int. Conf. Image Process., Sep. 2010, pp. 4089–4092.

[6] B. Ulery, W. Fellner, P. Hallinan, A. Hicklin, and C. Watson, “Studies of biometric fusion,” Dept. Commerce, Nat. Inst. Standards Technol., Gaithersburg, MD, USA, Tech. Rep. IR 7346, Sep. 2006.

[7] C. Mariño, M. G. Penedo, M. Penas, M. J. Carreira, and F. González, “Personal authentication using digital retinal images,” J. Pattern Anal. Appl., vol. 9, no. 1, pp. 21–33, 2006.

[8] M. Z. C. Azemin, D. K. Kumar, and H. R. Wu, “Shape signature for retinal biometrics,” in Proc. Int. Conf. Digital Image Comput. Tech. Appl., Dec. 2009, pp. 381–386.

[9] A. Arakala, J. S. Culpepper, J. Jeffers, A. Turpin, B. S. K. J. Horadam, and A. M. McKendrick, “Entropy of the retina template,” in Proc. 3rd Int. Conf. Adv. Biometrics, Jun. 2009, pp. 1250–1259.

[10] Z. Xu, X. Guo, X. Hu, X. Chen, and Z. Wang, “The identification and recognition based on point for blood vessel of ocular fundus,” in Proc. Int. Conf. Biometrics, Jan. 2006, pp. 770–776.

[11] V. Bevilacqua, L. Cariello, D. Columbo, M. D. Fabiano, M. Giannini, G. Mastronardi, and M. Castellano, “Retinal fundus biometric analysis for personal identifications,” in Proc. Int. Conf. Intell. Comput., Sep. 2008, pp. 1229–1237.

[12] D. Calvo, M. Ortega, M. G. Penedo, J. Rouco, and B. Remeseiro, “Characterisation of retinal feature points applied to a biometric system,” in Proc. Int. Conf. Image Anal. Process., Sep. 2009, pp. 355–363.

[13] J. Jeffers, A. Arakala, and K. J. Horadam, “Entropy of feature point-based retina templates,” in Proc. 20th Int. Conf. Pattern Recognit., Aug. 2010, pp. 213–216.

[14] L. Latha and S. Thangasamy, “A robust person authentication system based on score level fusion of left and right irises and retinal features,” Procedia Comput. Sci., vol. 2, pp. 111–120, Jan. 2010.

[15] A. Arakala, K. Horadam, and S. A. Davis, “Retina features based on vessel graph substructures,” in Proc. Int. Joint Conf. Biometrics, Oct. 2011, pp. 1–6.

[16] VARPA. (2006). VARPA Retinal Images for Authentication Database. [Online]. Available: http://www.varpa.es/varia.html

[17] J. Jeffers, S. A. Davis, and K. J. Horadam, “Estimating individuality in feature point based retina templates,” in Proc. 5th IAPR Int. Conf. Biometrics, Apr. 2012, pp. 454–459.

[18] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Trans. Med. Imag., vol. 8, no. 3, pp. 263–269, Sep. 1989.

[19] I. Liu and Y. Sun, “Recursive tracking of vascular networks in angiograms based on the detection-deletion scheme,” IEEE Trans. Med. Imag., vol. 12, no. 2, pp. 334–341, Jun. 1993.

[20] Y. A. Tolias and S. M. Panas, “A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering,” IEEE Trans. Med. Imag., vol. 17, no. 2, pp. 263–273, Apr. 1998.

[21] X. Jiang and D. Mojon, “Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 1, pp. 131–137, Jan. 2003.

[22] F. Zana and J. C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Trans. Image Process., vol. 10, no. 7, pp. 1010–1019, Jul. 2001.

[23] A. Mendonca and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1200–1213, Sep. 2006.

[24] J. Soares, J. Leandro, R. Cesar, H. Jelinek, and M. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1214–1222, Sep. 2006.

[25] M. Sofka and C. V. Stewart, “Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures,” IEEE Trans. Med. Imag., vol. 25, no. 12, pp. 1531–1546, Dec. 2006.

[26] E. Ricci and R. Perfetti, “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Trans. Med. Imag., vol. 26, no. 10, pp. 1357–1365, Oct. 2007.

[27] K. Drechsler and C. Laura, “Hierarchical decomposition of vessel skeletons for graph creation and feature extraction,” in Proc. Int. Conf. Bioinformat. Biomed., Dec. 2010, pp. 456–461.

[28] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB. Upper Saddle River, NJ, USA: Prentice-Hall, 2003.

[29] T. Chanwimaluang and G. Fan, “An efficient algorithm for extraction of anatomical structures in retinal images,” in Proc. Int. Conf. Image Process., vol. 1, Sep. 2003, pp. 1093–1096.

[30] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. New York, NY, USA: Wiley, 2001.

[31] K. Riesen and H. Bunke, “Approximate graph edit distance computation by means of bipartite graph matching,” Image Vis. Comput., vol. 27, no. 7, pp. 950–959, 2009.

[32] J. Munkres, “Algorithms for the assignment and transportation problems,” J. Soc. Ind. Appl. Math., vol. 5, no. 1, pp. 32–38, 1957.

[33] P. Gutschoven and B. Verlinde, “Multi-modal identity verification using support vector machines (SVM),” in Proc. 3rd Int. Conf. Fusion Inf., Jul. 2000, pp. 3–8.

Seyed Mehdi Lajevardi (S'08–M'11) received the B.Eng. degree in electronic engineering from the Central Azad University of Tehran, Tehran, Iran, in 2007, and the Ph.D. degree in electrical and computer engineering from Royal Melbourne Institute of Technology, Melbourne, Australia, in 2011. His current research interests include digital image processing algorithms, computer vision, biometrics, satellite imagery, and artificial intelligence.

Arathi Arakala received the B.Eng. degree in electronic and communications engineering from the Manipal Institute of Technology, Manipal University, Manipal, India, in 2002, the Master's degree in telecommunications engineering from Royal Melbourne Institute of Technology (RMIT), Melbourne, Australia, in 2004, and the Ph.D. degree in mathematics from RMIT University in 2008. Her current research interests include structural pattern recognition, biometric authentication, and information security.

Stephen A. Davis received the B.Sc. (Hons.) degree from the University of Melbourne, Melbourne, Australia, in 1994, and the Ph.D. degree in applied mathematics from the University of New South Wales, Sydney, Australia, in 2000. After Post-Doctoral and Research Associate positions with the University of Antwerp, Antwerp, Belgium, the University of Utrecht, Utrecht, The Netherlands, and Yale University, New Haven, CT, USA, he held an academic position with Royal Melbourne Institute of Technology, Melbourne. His current research interests include mathematical modeling of the contact networks underlying infectious disease transmission, as well as the use of graphs and networks in pattern recognition.

Kathy J. Horadam received the B.Sc. (Hons.) and Ph.D. degrees in pure mathematics from Australian National University, Canberra, Australia, in 1972 and 1977, respectively. She has been a Professor of mathematics with Royal Melbourne Institute of Technology (RMIT), Melbourne, Australia, since 1995, and has worked in academia for 33 years and for three years with the Cryptomathematics Research Group, Australia's Defence Science and Technology Organisation. She leads the Information Security-Informatics Group, RMIT. Her current research interests include information security, particularly biometrics, robust information transmission, especially error-correcting codes and low-correlation communications sequence design, applied algebra and combinatorics, and complex networks.