
An Indexing Method for Color Iris Images

Simona G. Crihalmeanu, Arun A. Ross

Michigan State University, East Lansing, Michigan, 48824 USA

Simona Crihalmeanu: E-mail: [email protected]; Arun Ross: E-mail: [email protected]

Appeared in Proc. of SPIE Biometric and Surveillance Technology for Human and Activity Identification XII, (Baltimore, USA), April 2015

ABSTRACT

In this work, we study the possibility of indexing color iris images. In the proposed approach, a clustering scheme on a training set of iris images is used to determine cluster centroids that capture the variations in chromaticity of the iris texture. An input iris image is indexed by comparing its pixels against these centroids and determining the dominant clusters, i.e., those clusters to which the majority of its pixels are assigned. The cluster indices serve as an index code for the input iris image and are used during the search process, when an input probe has to be compared with a gallery of irides. Experiments using multiple color spaces convey the efficacy of the scheme on good quality images, with hit rates close to 100% being achieved at low penetration rates.

Keywords: iris, classification, clustering, K-means, color spaces

1. INTRODUCTION

Iris recognition refers to the process of recognizing individuals based on their iris pattern. The iris is the annular region of the eye that is bounded by the pupil and the sclera. A typical iris recognition system captures frontal images of the eye in the near-infrared (NIR) spectrum. However, recently, researchers have attempted to perform iris recognition in the visible spectrum, in non-cooperative scenarios with relaxed constraints1-8. Competitions such as NICE I and II have encouraged researchers to develop effective algorithms for iris segmentation and feature extraction in the visible spectrum.9 This increased attention to processing color (i.e., RGB) iris images has been driven by several factors: 1) an interest in performing iris recognition using the color camera present in smart phones; 2) the advent of periocular biometrics, where color images of the periocular region also contain the iris10-12; 3) performing iris recognition from high-resolution RGB face images; 4) performing non-cooperative iris recognition at longer distances (4 to 8 m), since utilizing visible light can circumvent some of the problems associated with NIR imaging, which requires the LEDs to be in close proximity to the ocular region; and 5) the use of multispectral imagery (NIR + Visible), which has been shown to improve iris recognition accuracy due to the availability of additional information13.

In spite of the aforementioned motivations, iris recognition in the visible spectrum is not an easy task. Dark-colored irides are less easily discerned in the visible spectrum due to the absorption characteristics of melanin. Further, specular reflections can occlude portions of the iris and confound the segmentation process. Notwithstanding these concerns, iris recognition in the visible spectrum is an area of active research.

This work focuses on designing a scheme for classifying color iris patterns into multiple categories based on their inherent chromaticity and texture. Different color schemes are investigated to find the most suitable color space that can characterize the irides.

Figure 1. An ocular RGB image showing the iris and surrounding structures.



Such an exercise has several benefits. Firstly, it provides an insight into the variations in color and texture across different irides in multiple color spaces. Secondly, in the context of identification - where an input probe image has to be compared against a database of labeled irides in order to locate a match - iris classification can help reduce the search space, thereby improving the response time of the identification system.

Our proposed method of classification clusters all the iris pixels in an image based on their intensity values to reduce the number of colors, and then computes the dominant colors in the image. A dominant color is the color associated with the highest number of pixels in the image. Furthermore, the algorithm considers solely the chromaticity components of multiple color representation schemes for performance comparison. The study is conducted on an agglomeration of images from three datasets, collected with different cameras, resolutions, and illuminations, thereby exhibiting large variations in colors and shades, in order to assess the applicability of the method to heterogeneous datasets. The experiments reported in this paper do not deal with periocular data or images of the eye from high resolution face images. However, we intend to conduct such experiments in the future.

Related Work: Existing color iris indexing schemes are color-based, texture-based or a combination of the two. Puhan and Sudha14 propose two color indices calculated using the chrominance components Cb and Cr in the YCbCr color space for iris classification. Zhang et al.15 use a texton-based method for classifying color irides; color features from three color spaces, RGB, HSI and L∗a∗b, are used to define the texton. SamSunder and Ross16 use color and texture features extracted from the inner half iris region of the normalized iris for classification. Jayaraman et al.17 search the iris database using two color indices calculated from the chrominance components Cb and Cr of the YCbCr color space, and the candidate list is further narrowed using Speeded Up Robust Features (SURF) to retrieve only those iris images with the maximum number of corresponding points.

The problem of classifying color irides is confounded by two primary factors: (a) the chromaticity of the iris image is impacted by the nature of the illuminant used, the photometric characteristics of the camera and the spatial resolution of the image; and (b) most irides are multichromatic and, therefore, a single color cannot adequately represent them. Figure 2 provides an overview of the proposed method that will be discussed below. The rest of the article is organized as follows: Section 2 presents the clustering and classification algorithm, Section 3 describes the datasets used, and Section 4 discusses the results.

Figure 2. Determining dominant colors in an iris image.

2. CLUSTERING AND INDEXING

A color model is an abstract mathematical model to represent colors, usually as a combination of three numbers, or three components. Examples of color models are RGB and CMYK. RGB is an additive color model in which the primary colors red, green, and blue are added together in various proportions to create a broad array of colors; the combination of the primary colors in equal intensities results in white. CMYK is a subtractive color model in which a large variety of colors is obtained by subtracting from white the pigment primaries cyan, magenta, yellow and black. A color space is a specific organization of colors. Examples of color spaces are L∗a∗b, HSV, YCbCr, HSI, etc. In the L∗a∗b color space, the entire gamut of colors is represented by three parameters, namely, the luminance L with values that range from 0 to 100, and the chromaticity components ∗a and ∗b. Along the ∗a component the colors vary between magenta (positive ∗a) and green (negative ∗a). Along the ∗b component the colors vary between yellow (positive ∗b) and blue (negative ∗b). Compared to other color spaces, L∗a∗b is perceptually linear, meaning that a given change in color value produces a roughly proportional change in the visually perceived color. In the HSI color space, the parameters are the intensity I, the chromaticity hue H and the saturation S. HSI approximates the way in which humans perceive and interpret


colors. It is easy to manipulate colors in the HSI color space since it uses a double hexcone with the highest value of I = 1 (white) and the lowest value I = 0 (black). The hue is the angle around the vertical axis (I), with the color red located at 0 degrees; complementary colors are 180 degrees apart on the hexcone. Saturation is represented by the distance from the vertical axis I to the hexcone surface; it varies from 0 to 1 and signifies the purity of the color. Another popular color space is YCbCr, used mostly in image and video compression. The luminance Y is separated from chroma blue Cb and chroma red Cr. This color space is based on the idea that the human eye is more sensitive to luminance than to color. Therefore, the chromaticity is encoded using fewer bits through a sub-sampling process that results in various YCbCr formats. Figure 3 visualizes an iris when various color spaces are used.

Figure 3. Normalized iris in various color spaces: (a) RGB. (b) HSI. (c) YCbCr. (d) L∗a∗b.
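To make the color-space discussion concrete, the following is a minimal sketch (not the authors' code) of extracting the chromaticity components that are later used for clustering. It assumes scikit-image for the conversions; since scikit-image provides HSV rather than HSI, HSV is used here as a stand-in for the hue/saturation pair.

import numpy as np
from skimage import color

def chromaticity_features(rgb_iris, space="lab"):
    # rgb_iris: float RGB image in [0, 1], shape (rows, cols, 3)
    # returns an (n_pixels, 2) array of chromaticity values
    if space == "lab":
        converted = color.rgb2lab(rgb_iris)      # L, a, b
        chroma = converted[..., 1:3]             # keep a, b
    elif space == "ycbcr":
        converted = color.rgb2ycbcr(rgb_iris)    # Y, Cb, Cr
        chroma = converted[..., 1:3]             # keep Cb, Cr
    elif space == "hsv":
        converted = color.rgb2hsv(rgb_iris)      # H, S, V
        chroma = converted[..., 0:2]             # keep H, S
    else:
        raise ValueError("unsupported color space: " + space)
    return chroma.reshape(-1, 2)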

Figure 4. Classified iris images using 5 clusters and 2 dominant colors, combined collections, ∗a∗b components. (a) Clusters 1,4. (b) Clusters 2,5. (c) Clusters 3,4. (d) Clusters 4,3. (e) Clusters 5,2. First row: unwrapped and labeled iris pixels. Second row: histogram of the number of pixels per cluster.


Our proposed indexing method partitions the color pixels pertaining to the iris region into clusters based on their intensity values. The K-means algorithm is chosen to partition the pixels into clusters. Compared with other clustering methods, such as hierarchical clustering, K-means creates a single level of clusters, and is therefore suitable for fast indexing and retrieval from the database. The Euclidean distance is used to minimize the sum of point-to-centroid distances. The output depends on the selection of the starting points and the number of iterations. In the search for the global minimum, the clustering may be replicated multiple times with different initial values of the cluster centroids, with the best solution chosen in the end. K-means is sensitive to outliers; therefore, an initial outlier removal process may improve the stability of the learned centroids. Our proposed method has three steps:

Training. In this step the cluster centers that correspond to the color information of the iris region are found. Given M images of size m x p x 3 and k clusters, the RGB images are converted to the L∗a∗b, YCbCr or HSI color scheme. The input to the K-means algorithm is a feature matrix that gathers information from all M images, with rows corresponding to individual pixels and columns corresponding to the three components of the converted image. The output of the algorithm comprises k mutually exclusive clusters, the color dataset C = {ci | i = 1, ..., k} that represents the iris color information and forms the index space. Experiments are conducted with 5, 10, 15, 20 and 25 clusters.
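A minimal sketch of this training step, assuming scikit-learn's KMeans and the hypothetical chromaticity_features helper shown earlier. Here only the two chromaticity components are pooled, as in the ∗a∗b / HS / CbCr variants reported in the experiments, and n_init mirrors the idea of replicating the clustering with different initial centroids and keeping the best solution.

import numpy as np
from sklearn.cluster import KMeans

def train_centroids(training_images, k=5, space="lab", n_restarts=10):
    # pool the chromaticity values of every pixel from all M training irides
    features = np.vstack([chromaticity_features(img, space) for img in training_images])
    km = KMeans(n_clusters=k, n_init=n_restarts, random_state=0)
    km.fit(features)
    return km.cluster_centers_   # the color dataset C (k centroids)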

Testing. a) Gallery indexing: After learning the clusters, each pixel in a gallery image is assigned a color cluster index based on its proximity to the cluster centers using a minimum distance rule, see Figure 4. Pixel (x, y) is assigned to cluster cj if j = argmin{L2(I(x, y), cei), i ∈ [1, k]}, where cei is the centroid of cluster ci and L2 is the Euclidean distance. The output is a labeled image Il = {I(x, y) = ci | x ∈ [1, m], y ∈ [1, p], ci ∈ C}. Further, the histogram of the labeled image Il is computed, resulting in a vector n = {n1, n2, ..., nk}, where ni is the number of occurrences of ci within the labeled image, see Figure 4. The dominant colors are represented by the cluster indices with the highest number of occurrences. Denote the index string of an image as SI = {cj | cj ∈ C, j ∈ [1, N], nj ∈ n, nj ≥ nj+1}, where k is the number of clusters, N ∈ {2, 3, 4} is the number of dominant colors considered, and C is the color or cluster dataset. According to N, the template size, the length of the index string is reduced to 2, 3 or 4 integer values. The classification scheme therefore consists of creating k tables, one table for each cluster (dominant color). The table corresponding to a cluster contains the identification numbers of those irides whose index string includes that dominant color or cluster.

b) Searching based on Probe: Similar to the indexing procedure applied to gallery images, each pixel of the probe image is assigned a color cluster, the number of occurrences of each cluster is calculated, and the indices (clusters with the highest occurrences) are associated with the probe image. These indices are used to determine a match. Specifically, only those gallery identities in the tables corresponding to the dominant colors of the probe image are retrieved and compared against the probe image. Figure 6 shows the distribution of the first two dominant clusters for one of the probes and all the gallery images, when the chromaticity components of the L∗a∗b color space are used, with 5 clusters. As observed, the distribution of the dominant colors is almost the same.
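The indexing and retrieval steps described above could be sketched as follows (an illustration, not the authors' implementation). Here centroids is the learned color dataset C, each gallery entry is the (n_pixels, 2) chromaticity array of one enrolled iris, and N is the number of dominant colors kept in the index string.

import numpy as np
from collections import defaultdict

def index_string(chroma_pixels, centroids, N=2):
    # assign every pixel to its nearest centroid (Euclidean distance)
    d = np.linalg.norm(chroma_pixels[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    counts = np.bincount(labels, minlength=len(centroids))
    # cluster indices sorted by decreasing pixel count -> dominant colors
    return tuple(np.argsort(counts)[::-1][:N])

def build_tables(gallery, centroids, N=2):
    # one table per cluster: identities whose index string contains that cluster
    tables = defaultdict(set)
    for identity, chroma in gallery.items():
        for c in index_string(chroma, centroids, N):
            tables[c].add(identity)
    return tables

def search(probe_chroma, tables, centroids, N=2):
    # retrieve only the identities stored under the probe's dominant clusters
    candidates = set()
    for c in index_string(probe_chroma, centroids, N):
        candidates |= tables[c]
    return candidates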

The performance of the algorithm depends on the number of training samples and the diversity of the eye colors. Diverse eye colors result in clusters with a larger distance between the centroids, and larger distances between centroids allow for a larger variance of the colors within the clusters, Figure 5. It is also important how the irides are distributed across the clusters. A reasonably equal distribution of irides across clusters denotes a good classification scheme. However, it is likely that some clusters may be unique and host only a few identities due to the rarity of the chromaticity. It may also happen that some irides will be represented by fewer dominant colors. This shows that, besides the diversity in chromaticity of eyes, irides greatly differ in the number of colors and shades observed on their surface (e.g., 5 clusters used for classification but only 2 color clusters observed on the iris surface).

3. IRIS DATABASES

In our work we used three color iris datasets:

1) UPOL dataset18, displayed in Figure 7 (a). The database contains 384 high resolution RGB frontal iris images, collected from 64 subjects with 3 images/eye/subject. The images are 24-bit depth with a size of 576 x 768 pixels in PNG file format. The images were obtained using a TOPCON TRC50IA optical device connected to a SONY DXC-950P 3CCD camera.

2) UBIRIS.v1 dataset19, displayed in Figure 7 (c). The database contains 1877 high resolution frontal iris images, collected from 241 subjects with 4 images/right eye/subject. The images are 24-bit depth with a size of 800 x 600 pixels in JPEG format and are collected using a Nikon E5700 camera.

3) MS WVU dataset20, displayed in Figure 7 (b). The database contains 496 frontal iris images, collected from 31 subjects, with 4 images/eye/subject. The images are of size 1035 x 1373 x 3, in BMP format, and are collected using a DuncanTech MS 3100 multispectral camera. The R, G, and B components are extracted from the color near-infrared images using a demosaicing algorithm applied to the Bayer pattern.


Figure 5. The distribution of the centroids for the dark colored irides subset (red) vs. the entire dataset (blue) when the chromaticity ∗a∗b is considered. (a) k = 10. (b) k = 15.

Figure 6. The distribution of the dominant colors (∗a∗b components, 5 clusters, 2 dominant colors) on the combined dataset. (a) Gallery. (b) Probe.


The proposed algorithm is evaluated on each individual dataset, as well as on images from all three datasets combined into a single larger dataset, in order to assess its applicability to heterogeneous datasets. Since images in these datasets are captured using cameras with different photometric characteristics, merging the three datasets results in a database exhibiting a higher variation in colors and shades. Across these three collections, iris images are captured under different illumination conditions and have different capture resolutions. Further, the UBIRIS database exhibits several noise factors related to reflections, luminosity, contrast, and focus.

Iris regions in the UPOL and UBIRIS datasets are manually segmented. Iris regions from the MS WVU dataset are segmented using a modified version of the algorithm discussed by Crihalmeanu and Ross20. After segmentation, each iris is geometrically normalized to a 64 x 360 radial and angular resolution by a process termed unwrapping1.
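As an illustration of the unwrapping step, the sketch below maps an annular iris region onto a fixed 64 x 360 grid under the simplifying assumption of circular, concentric pupil and iris boundaries; the boundary parameters (pupil_xy, pupil_r, iris_r) are hypothetical outputs of the segmentation stage, and nearest-neighbor sampling is used instead of interpolation.

import numpy as np

def unwrap_iris(rgb, pupil_xy, pupil_r, iris_r, rows=64, cols=360):
    theta = np.linspace(0.0, 2.0 * np.pi, cols, endpoint=False)
    out = np.zeros((rows, cols, 3), dtype=rgb.dtype)
    for i, t in enumerate(np.linspace(0.0, 1.0, rows)):
        # sample along each ray between the pupil and iris boundaries
        radius = pupil_r + t * (iris_r - pupil_r)
        x = np.clip((pupil_xy[0] + radius * np.cos(theta)).astype(int), 0, rgb.shape[1] - 1)
        y = np.clip((pupil_xy[1] + radius * np.sin(theta)).astype(int), 0, rgb.shape[0] - 1)
        out[i] = rgb[y, x]
    return out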

The classification algorithm is applied to multiple color spaces, viz., RGB, L∗a∗b, HSI and YCbCr. Table 1 presents the training and testing datasets used for the three protocols.


Figure 7. Example of iris images. (a) MS WVU collection. (b) UBIRIS collection. (c) UPOL collection.

In protocol #1, the training set contains all the left eye images (L) from the UPOL and MS WVU datasets and half of the right eyes of the UBIRIS v1 dataset (henceforth referred to as UB1), and is used to find the clusters and the corresponding centroids, while the test set contains the right eyes (R) from the UPOL and MS WVU datasets and the other half of the right eyes of the UBIRIS v1 dataset (henceforth referred to as UB2 ∗). For each eye in the test set, the first image is used as the gallery and the second, third and fourth images are used as probes. In protocol #2, the training set contains the right eyes (R) from the UPOL and MS WVU datasets and half of the right eyes of the UBIRIS v1 dataset (UB2), while the test set contains the left eyes (L) from the UPOL and MS WVU datasets and the other half of the right eyes of the UBIRIS v1 dataset (UB1). In both protocols #1 and #2, it is very likely that the dominant color centroids in the training set capture the color distribution of the test set, owing to the reasonably similar chromatic composition of both eyes. To avoid this bias, we consider a third protocol, where the left eye image dataset (L) of each collection in the database is further divided into two equal subsets, labeled L1 and L2, and the right eye image dataset (R) of each collection is similarly divided into two equal subsets, R1 and R2. We then consider four cross-validation scenarios, as seen in Table 1.

Table 1. The use of left and right eye datasets by protocol.

Dataset      Protocol #1    Protocol #2    Protocol #3
Training     L              R              L1   L2   R1   R2
Testing      R              L              R2   R1   L2   L1

4. RESULTS

The performance of the iris indexing algorithm is assessed using the hit and penetration rates14,21. In our case, the hit rate is defined as the probability that the correct cluster or dominant color is retrieved. The penetration rate is defined as the fraction of the identities retrieved from the database when a probe is submitted. A high hit rate and a low penetration rate indicate a good indexing method. The results obtained are categorized based on color schemes, the number of dominant colors used and the number of clusters generated, and are presented in Tables 2 and 3. The results suggest that the proposed scheme yields very high hit rates and very low penetration rates. In protocol #3, the performance is slightly lower compared with the first two protocols, where the entire set of left or right eye images is used to find the clusters. This is explained by the similar chromatic composition of both eyes, as mentioned earlier: the dominant color centroids in the training set capture the color distribution of the test set. This underscores the importance of the training set, the number of training samples used and the diversity of color composition. Overall, according to the color scheme used, the hit rate either remains constant or slightly decreases by 1% to 3% when the number of clusters is increased, and it is slightly higher by 1% to 2% when the number of dominant colors is increased.
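Under one reading of these definitions, the two measures could be computed as in the following sketch (not the authors' evaluation code), reusing the hypothetical search helper from Section 2; probes is a list of (true_identity, chromaticity_array) pairs and gallery_size is the number of enrolled identities.

def hit_and_penetration(probes, gallery_size, tables, centroids, N=2):
    hits, retrieved_fraction = 0, 0.0
    for true_id, probe_chroma in probes:
        candidates = search(probe_chroma, tables, centroids, N)
        hits += int(true_id in candidates)                  # correct identity among retrieved?
        retrieved_fraction += len(candidates) / gallery_size
    return hits / len(probes), retrieved_fraction / len(probes)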

∗Subjects in UB1 and UB2 are mutually exclusive


An analysis of the results on the combined dataset for different values of N and k, as displayed in Figure 8, shows that the hit rate is in the interval between 96.5% and 100%, with a majority of the instances higher than 99.6%. As presented in Figure 8, the performances of HSI and YCbCr are lower compared with the performances of L∗a∗b, ∗a∗b, HS, and CbCr. Overall, we recommend using L∗a∗b, ∗a∗b, HS, and CbCr for improved performance. The best performances are obtained with 2 dominant colors and 25 clusters. As observed in Tables 2 and 3, the penetration rate increases with the number of dominant colors. However, for the same number of dominant colors, the penetration rate rapidly decreases as the number of clusters is increased from 5 to 25. The penetration rate observed across the color schemes, regardless of the number of dominant colors or the number of clusters, is presented in Figure 9. Most of the rates are between 10% and 30%. The lowest values of the penetration rate are obtained when a higher number of clusters (20, 25) is used. The best results, when considering both the hit and penetration rates, are obtained when using the L∗a∗b color scheme, or the ∗a∗b or CbCr chromaticity components, with 25 clusters and 2 dominant colors.

To verify the robustness of the classification scheme to noisy images, two experiments were conducted.

Table 2. Hit rate (H) and penetration rate (P) for the combined dataset when K clusters are used.

             L∗a∗b               ∗a∗b                HSI                 HS
K      H(%)       P(%)     H(%)       P(%)     H(%)       P(%)     H(%)       P(%)

Two Dominant Colors

5 99.6-100 44.5-53.8 99.3-100 41.6-49.2 99.6-100 42.1-67.2 98.9-100 46.7-53.1

15 98.6-100 16.7-22.6 98.9-99.6 16.6-20.9 99.3-100 17.9-23 98.6-100 17.1-29.6

25 98.6-100 10.9-13.9 97.9-99.3 10.1-14.2 97.2-99.6 11.9-14.4 98.6-99.5 11.5-14.4

Three Dominant Colors

5 100 62.8-68.4 100 62.7-71 100 61-76.9 100 67.7-72.2

15 98.9-100 23.6-30 100 24.5-28.5 99.3-100 26-29.3 99.6-100 25.9-40.4

25 99.6-100 15.2-19.3 98.9-100 15.2-19 98.9-100 16.7-19.8 98.9-100 16.4-19.9

Four Dominant Colors

5 100 75.4-82.8 100 80.6-83.7 100 78.1-89.3 100 84.6-88.4

15 100 30.6-35.7 100 33.1-37.2 100 34.2-37 100 36.1-49.9

25 99.6-100 19.2-23.7 100 20.2-24.3 98.9-100 21.5-24.8 99.6-100 21.4-25

Table 3. Hit rate (H) and penetration rate (P) for the combined dataset when K clusters are used.

             RGB                 YCbCr               CbCr
K      H(%)       P(%)     H(%)       P(%)     H(%)       P(%)

Two Dominant Colors

5 99.3-100 49.4-53.7 99.3-100 45.7-53.4 100 48.8-52.8

15 98.6-100 18-22.1 98.2-100 18-23.3 98.9-100 17.6-20.9

25 97.2-99.3 11.6-15.8 96.5-99.6 11.7-14.6 97.9-99.6 10.7-15.2

Three Dominant Colors

5 100 69.1-71.5 100 68.7-70.5 100 62.8-66.2

15 98.9-100 26.1-30.7 98.9-100 26.8-30.5 100 25.2-28.6

25 98.9-100 16.4-21.5 98.9-100 17.1-21.1 99.6-100 15.4-19.9

Four Dominant Colors

5 100 80-85.4 100 80.3-85.5 100 80-81.9

15 99.3-100 33.5-37.3 99.3-100 32.8-37.6 100 32.3-37.7

25 98.9-100 21.1-26.8 99.3-100 21.5-25.5 100 20-24.5

In the first experiment, multiplicative noise was added to each R, G, B channel of the probe images, after the normalization process. Further, the images were converted to the HSI, YCbCr and L∗a∗b color schemes and the classification performance was evaluated. Given an input image Iin, the noisy image is obtained as follows: Iout = Iin + n x Iin, where n is drawn from a uniform distribution with mean 0 and variance v. The investigation is conducted for left eyes, using multiple values of the variance v ∈ {0.01, 0.02, 0.03, 0.04}, for all the scenarios mentioned in Table 1, and for 2, 3 and 4 dominant colors.


Figure 8. Hit rate distribution across color schemes. The results obtained for 2, 3 or 4 dominant colors and 5 to 25 clusters are pooled together.

Figure 9. Penetration rate distribution across color schemes. The results obtained for 2, 3 or 4 dominant colors and 5 to 25 clusters are pooled together.

In the second experiment, "salt and pepper" noise is added to the R, G, B components of the probe images, with various noise densities sp ∈ {0.04, 0.05, 0.07, 0.1, 0.15}. Results for both experiments are presented in Table 4 and Table 5. Although the hit rate values decrease slightly, the results demonstrate the robustness of the method to the added noise types.
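A minimal sketch of the two perturbations applied to the probe images, assuming images scaled to [0, 1]; a zero-mean uniform distribution with variance v has half-width sqrt(3v), which matches Iout = Iin + n x Iin above.

import numpy as np

def add_multiplicative_noise(img, v, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    half_width = np.sqrt(3.0 * v)                       # Var(U(-a, a)) = a^2 / 3
    n = rng.uniform(-half_width, half_width, size=img.shape)
    return np.clip(img + n * img, 0.0, 1.0)

def add_salt_and_pepper(img, density, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape[:2])                    # one draw per pixel location
    noisy[mask < density / 2.0] = 0.0                   # pepper
    noisy[mask > 1.0 - density / 2.0] = 1.0             # salt
    return noisy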

In order to understand why some color spaces perform better than others, it is necessary to look at their structure. RGB is an additive color model that was designed to match an intuitive human perception of colors. A vast array of colors is obtained by adding red, green and blue light in various proportions. It is device-dependent, meaning that a given set of R, G, and B values does not define the same color across devices and manufacturers unless color management is used. The information in all three channels is correlated. Compared with RGB, L∗a∗b22 is a device-independent color space (obtained from XYZ). This explains its better performance compared with RGB (the three datasets mixed together are collected with different cameras). Moreover, L∗a∗b's gamut exceeds that of the RGB model, since it includes all perceivable colors. The luminance L is separated from the chrominance ∗a (red-green) and ∗b (blue-yellow) components. Using


Table 4. The hit rate and the penetration rate when multiplicative noise is added to the R, G, B channels of the probe image.

       v = 0.01            v = 0.02            v = 0.03            v = 0.04
K      H(%)      P(%)      H(%)      P(%)      H(%)      P(%)      H(%)      P(%)

2 dominant colors

5 99.65 53.00 99.65 51.66 99.65 51.10 99.65 50.58

15 97.54 20.91 94.89 20.74 92.25 20.67 90.14 20.74

25 97.18 16.78 95.25 16.91 88.73 16.67 80.99 16.12

3 dominant colors

5 100 71.66 100 71.72 100 71.34 100 71.51

15 99.30 28.15 98.94 27.27 98.59 26.54 98.24 25.89

25 99.65 20.33 98.77 19.74 98.42 19.35 95.42 19.01

4 dominant colors

5 100 86.92 100 86.96 100 86.68 100 86.44

15 100 35.66 100 34.82 100 33.70 100 32.54

25 100 24.49 99.65 23.81 99.65 23.13 94.37 22.19

Table 5. The hit rate and the penetration rate when "salt and pepper" noise is added to the R, G, B channels of the probe image.

       sp = 0.05           sp = 0.07           sp = 0.1            sp = 0.15
k      H(%)      P(%)      H(%)      P(%)      H(%)      P(%)      H(%)      P(%)

2 dominant colors

5 99.65 52.74 99.65 52.46 99.65 51.61 99.65 50.15

15 97.71 21.08 97.71 20.97 97.71 20.80 96.65 19.76

25 97.89 16.28 97.89 16.29 97.71 16.23 96.48 15.89

3 dominant colors

5 100 71.08 100 70.89 100 70.64 100 70.35

15 99.30 28.36 99.30 28.13 99.30 27.54 99.12 25.54

25 99.82 20.41 99.65 20.37 99.65 20.09 98.94 18.49

4 dominant colors

5 100 87.96 100 88.27 100 88.25 100 88.33

15 100 35.71 100 35.19 100 34.17 100 31.90

25 99.82 24.87 99.82 24.79 99.82 24 99.82 21.60

only the chrominance components of the color schemes (∗a∗b, HS or CbCr) further improves the results, since the variation in illumination is largely reduced. All these attributes explain the performance and suggest the use of L∗a∗b for classification. Another color scheme studied in this work is YCbCr. The Y component represents the intensity of the light, and the Cb and Cr components specify the intensities of the blue and red components relative to the green component. It is based on the human perception of luminance and chrominance: the human eye is more sensitive to changes in luminance and less sensitive to changes in chrominance. Hence, when converting to YCbCr, fewer bits are allocated to chrominance than to luminance, favoring the use of this color scheme in computing (compression and encoding). The fourth color space used for classification is HSI (hue, saturation, intensity)23. Humans tend to describe a color by its hue, the pure color (e.g., red or orange). Saturation is a measure of the degree of dilution of the pure color with white. Therefore, HSI is practical for the human interpretation of colors. Intensity is decoupled from the hue and saturation, which carry the chromatic information. Both YCbCr and HSI are device-dependent. An insight into each of these color spaces suggests that the color schemes in which the luminance, brightness or light intensity is separated from the chrominance, so that only the chrominance is used for classification, will perform the best. Being device-independent is another attribute that results in better performance on heterogeneous datasets.

5. CONCLUSIONS

In this work we explored the possibility of using the dominant colors in the iris region to classify RGB eye images. The proposed method is fast and reliable, with hit rates in the interval from 96.5% to 100% and penetration rates in the interval from 10% to 30%.


The best results based on both the hit and penetration rates are obtained when using the L∗a∗b color scheme, or the ∗a∗b and CbCr chromaticity components, with 25 clusters and 2 dominant colors. The results suggest further investigation into the spatial distribution of the colors within the iris region to classify irides.

In the future, we plan to apply the classification algorithm to a larger dataset acquired with different cameras, with the same subjects across cameras, and with images of the eye re-scaled to different sizes.

REFERENCES

[1] Ross, A., "Iris recognition: The path forward," IEEE Computer, 30-35 (February 2010).

[2] Proenca, H., "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," IEEE Transactions on Pattern Analysis and Machine Intelligence 32, 1502-1516 (August 2010).

[3] Krichen, E., Mellakh, M., Garcia-Salicetti, S., and Dorizzi, B., "Iris identification using wavelet packets," Proceedings of the 17th International Conference on Pattern Recognition (ICPR) 4, 335-338 (2004).

[4] Tankasala, S., Gottemukkula, V., Saripalle, S., Nalamati, V., Derakhshani, R., Pasula, R., and Ross, A., "A video based hyper focal imaging method for iris recognition in the visible spectrum," IEEE International Conference on Technologies for Homeland Security (HST), 214-219 (November 2012).

[5] Santos, G., Bernardo, M., Proenca, H., and Fiadeiro, P., "Iris recognition: Preliminary assessment about the discriminating capacity of visible wavelength data," IEEE International Symposium on Multimedia (ISM), 13-15 (December 2010).

[6] Proenca, H., [Handbook of Iris Recognition], ch. Iris Recognition in the Visible Wavelength, 151-171, Springer (2013).

[7] Vatsa, M., Singh, R., Ross, A., and Noore, A., "Quality-based fusion for multichannel iris recognition," International Conference on Pattern Recognition (ICPR), 1314-1317 (August 2010).

[8] SamSunder, M. and Ross, A., "Iris image retrieval based on macro-features," Proc. of International Conference on Pattern Recognition (ICPR), 1318-1321 (August 2010).

[9] Bowyer, K., "The results of the NICE II iris biometrics competition," Pattern Recognition Letters 33, 965-969 (June 2012).

[10] Park, U., Jillela, R., Ross, A., and Jain, A., "Periocular biometrics in the visible spectrum," IEEE Transactions on Information Forensics and Security (TIFS) 6, 96-106 (March 2011).

[11] Woodard, D., Pundlik, S., Lyle, J., and Miller, P., "Periocular region appearance cues for biometric identification," IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 162-169 (June 2010).

[12] Klontz, J. and Burge, M., [Handbook of Iris Recognition], ch. Periocular Recognition from Low-Quality Iris Images, 309-321, Springer (2013).

[13] Boyce, C., Ross, A., Monaco, M., Hornak, L., and Li, X., "Multispectral iris analysis: A preliminary study," Proceedings of Computer Vision and Pattern Recognition Workshop on Biometrics (CVPRW) (June 2006).

[14] Puhan, B. and Sudha, N., "A novel iris database indexing method using the iris color," IEEE Conference on Industrial Electronics and Applications (ICIEA), 1886-1891 (2008).

[15] Zhang, H., Sun, Z., Tan, T., and Wang, J., "Iris image classification based on color information," International Conference on Pattern Recognition (ICPR), 11-15 (November 2012).

[16] Ross, A. and SamSunder, M., "Block based texture analysis for iris classification and matching," Proc. of IEEE Computer Society Workshop on Biometrics at the Computer Vision and Pattern Recognition (CVPR), 30-37 (June 2010).

[17] Jayaraman, U., Prakash, S., and Gupta, P., "An iris retrieval technique based on color and texture," Proceedings of the 7th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), 93-100 (2010).

[18] Dobes, M., Martinek, J., Skoupil, D., Dobesova, Z., and Pospisil, J., "Human eye localization using the modified Hough transform," Optik 117(10), 468-473 (2006).

[19] Proenca, H. and Alexandre, L., "UBIRIS: A noisy iris image database," in [International Conference on Image Analysis and Processing (ICIAP)], 1, 970-977 (2005).

[20] Crihalmeanu, S. and Ross, A., "Multispectral scleral patterns for ocular biometric recognition," Pattern Recognition Letters 33, 1860-1869 (October 2012).

[21] Mukherjee, R. and Ross, A., "Indexing iris images," Proc. of International Conference on Pattern Recognition (ICPR) (December 2008).

[22] Dubois, E. and Bovik, A. C., [The Structure and Properties of Color Spaces and the Representation of Color Images], Morgan and Claypool Publishers, 1 ed. (2009).

[23] Gonzalez, R. C. and Woods, R. E., [Digital Image Processing], Prentice Hall, Upper Saddle River, New Jersey, 07458, 2 ed. (2001).
