© 2015, IJARCSSE All Rights Reserved Page | 920
Volume 5, Issue 3, March 2015 ISSN: 2277 128X
International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com
Iris Authentication Mechanism for Fake Identification

Sweta Shiwani
Dept. of Electronics and Communications Engg.
Gyan Ganga College of Technology, Jabalpur,
RGTU, Bhopal, India

Neeraj Shukla
Department of Computer Science Engineering
Gyan Ganga College of Technology, Jabalpur,
RGTU, Bhopal, India

Abhishek Kumar
Department of Computer Science Engineering
Satya Sai Institute of Technology, Sehore,
RGTU, Bhopal, India
Abstract— Iris recognition is an automated method of biometric identification that applies mathematical pattern-recognition techniques to images of the irises of an individual's eyes. In this paper we discuss typical image enhancement algorithms such as Histogram Equalization; the system needs to compare a given image with a given template and verify their equivalence. For this, automatic segmentation was achieved through the use of the circular Hough transform for localizing the iris and pupil regions, and the linear Hough transform for localizing occluding eyelids. Thresholding was also employed for isolating eyelashes and reflections. The segmented iris region was then normalized to eliminate dimensional inconsistencies between iris regions. Finally, iris features were encoded by convolving the normalized iris region with the Discrete Wavelet Transform to produce a bitwise template. In this paper we improved the recognition rate using the 2D wavelet transform technique.
Keywords— Biometric, iris recognition, Histogram Equalization, circular Hough transform, linear Hough transform, wavelet transform, Hamming distance
I. INTRODUCTION
Biometric technology deals with recognizing the identity of individuals based on their unique physical or behavioural characteristics. Physical characteristics such as fingerprints, palm prints, hand geometry and iris patterns, or behavioural characteristics such as typing patterns and hand-written signatures, present unique information about a person and can be used in authentication applications.
The human iris, a thin circular diaphragm lying between the cornea and the lens, has an intricate structure with many minute characteristics such as furrows, freckles, crypts and coronas.
Figure 1 Sample of iris image
A biometric system provides automatic identification of an individual based on unique features and characteristics possessed by the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. Most commercial iris recognition systems use patented algorithms developed by Daugman, and these algorithms are able to produce perfect recognition rates. Many issues, including system robustness, consistent performance under variability, speed of enrolment and recognition, and non-cooperative identification, remain to be addressed. In our project, a realizable solution to some of these problems is presented. Here, we investigate a novel method for iris matching using zero crossings of a one-dimensional wavelet transform as a means of feature extraction for later classification. The work involved in our project is developing an iris recognition system in order to verify both the uniqueness of the human iris and its performance as a biometric. For determining the recognition performance of the system, databases of digitized grayscale eye images are used. The iris recognition system consists of three main modules: pre-processing, feature extraction and matching. As part of pre-processing, an automatic segmentation system is developed that is based on the Hough transform and is able to localize the circular iris and pupil regions, as well as eyelids, eyelashes and reflections. The extracted iris region is then normalized into a rectangular block with constant dimensions to account for imaging inconsistencies. Feature extraction is done by obtaining the DWT coefficients, and finally, for comparing two iris codes, a nearest-neighbour approach is taken, where the distance between two feature vectors is measured using the product of sums (POS) of the individual sub-feature Hamming distances (HD). The algorithm is simulated and tested using MATLAB and its image processing tools.
II. NOTEWORTHY CONTRIBUTION IN THE FIELD OF PROPOSED WORK
In this work, we study the following techniques:
I. Circular Hough transform,
II. 2D-Discrete Wavelet transform
III. Linear Hough transform
IV. Histogram Equalization
V. Hamming distance.
We briefly summarize how these algorithms are employed in our system:
I. Linear Hough transform: The Hough transform is a feature extraction technique used in image analysis, computer vision and digital image processing. The linear Hough transform algorithm uses a two-dimensional array, called an accumulator, to detect the existence of a line described by r = x cos θ + y sin θ.
II. Circular Hough transform: The Hough transform is a method for finding shapes in an image; the circular Hough transform is used to find circles within an image. It is almost identical to the Hough transform for lines, but uses the parametric form of a circle: x = x0 + r cos α, y = y0 + r sin α.
III. Hamming distance: It measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other.
IV. Histogram Equalization: It improves the contrast in an image by stretching out the intensity range. Contrast does not always increase, however; there are cases where histogram equalization makes things worse and the contrast is decreased.
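As an illustration of the accumulator idea above, the following sketch votes edge points into an (r, θ) array for the line r = x cos θ + y sin θ. The paper's experiments use MATLAB; this Python/NumPy port, and every name in it, is ours and only indicative.

```python
import numpy as np

def linear_hough(edge_points, shape, n_theta=180):
    """Vote edge points into an (r, theta) accumulator for lines
    r = x*cos(theta) + y*sin(theta)."""
    h, w = shape
    r_max = int(np.ceil(np.hypot(h, w)))              # largest possible |r|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * r_max + 1, n_theta), dtype=int)
    for x, y in edge_points:
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + r_max, np.arange(n_theta)] += 1       # offset rows so negative r fits
    return acc, thetas, r_max

# Edge points lying on the horizontal line y = 5:
pts = [(x, 5) for x in range(20)]
acc, thetas, r_max = linear_hough(pts, (32, 32))
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
# Peak at r = 5 and theta near pi/2, i.e. the line y = 5
```

The accumulator peak identifies the (r, θ) pair that the most edge points agree on, which is exactly the voting process used later for circle detection.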
III. PROPOSED METHOD
The proposed method is based on the following techniques: circular Hough transform, linear Hough transform, dynamic size model and wavelet transform. The method consists of five stages: iris acquisition, segmentation, normalization, feature extraction and matching.
Figure 2 Flow chart of iris recognition system
Segmentation
In segmentation, it is desired to distinguish the iris texture from the rest of the image. An iris is normally segmented by detecting its inner (pupil) and outer (limbus) boundaries. Well-known methods such as the integro-differential operator, Hough transform and active contour models have been successful techniques in detecting these boundaries.
Circular Hough Transform
It is a standard image analysis tool for detecting curves that can be defined in a parametric form, such as lines, polynomials and circles. The recognition of a global pattern is achieved using local patterns. For instance, recognition of a circle can be achieved by considering the strong edges in an image as the local patterns and searching for the maximum value of a circular Hough transform. The localization method, similar to Daugman's method, is also based on the first derivative of the image. In the method proposed by Wildes, an edge map of the image is first obtained by thresholding the magnitude of the image intensity gradient:
|∇G(x, y) * I(x, y)|,

where ∇ ≡ (∂/∂x, ∂/∂y) and

G(x, y) = (1 / 2πσ²) exp(−((x − x0)² + (y − y0)²) / 2σ²).

G(x, y) is a Gaussian smoothing function with scaling parameter σ to select the proper scale of edge analysis. The edge map is then used in a voting process to maximize the defined Hough transform for the desired contour. Considering the obtained edge points as (xj, yj), j = 1, 2, …, n, the Hough transform can be written as:

H(xc, yc, r) = Σ_{j=1..n} h(xj, yj, xc, yc, r),
Shiwani et al., International Journal of Advanced Research in Computer Science and Software Engineering 5(3),
March - 2015, pp. 920-927
© 2015, IJARCSSE All Rights Reserved Page | 922
where

h(xj, yj, xc, yc, r) = 1 if g(xj, yj, xc, yc, r) = 0, and 0 otherwise.

The limbus and pupil are both modelled as circles and the parametric function g is defined as:

g(xj, yj, xc, yc, r) = (xj − xc)² + (yj − yc)² − r².

Assuming a circle with centre (xc, yc) and radius r, the edge points located on the circle result in a zero value of the function. The value of g is then transformed to 1 by the h function, which represents the local pattern of the contour. The local patterns are then used in a voting procedure, using the Hough transform H, in order to locate the proper pupil and limbus boundaries. In order to detect the limbus, only vertical edge information is used. The upper and lower parts, which carry the horizontal edge information, are usually covered by the two eyelids. The horizontal edge information is used for detecting the upper and lower eyelids, which are modelled as parabolic arcs.
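The voting procedure for circles can be sketched in the same spirit: each edge point votes for every candidate centre lying at the given radius from it, and the true centre accumulates the most votes. This is an illustrative Python/NumPy sketch with the radius assumed known, not the authors' MATLAB implementation.

```python
import numpy as np

def circular_hough(edge_points, shape, radius):
    """Each edge point votes for every candidate centre (xc, yc) lying at
    distance `radius` from it; the true centre collects the most votes."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    for x, y in edge_points:
        xc = np.round(x - radius * np.cos(angles)).astype(int)
        yc = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (xc >= 0) & (xc < shape[1]) & (yc >= 0) & (yc < shape[0])
        np.add.at(acc, (yc[ok], xc[ok]), 1)           # unbuffered accumulation
    return acc

# Synthetic edge points on a circle of radius 10 centred at (16, 16):
t = np.linspace(0.0, 2 * np.pi, 80, endpoint=False)
pts = [(16 + 10 * np.cos(a), 16 + 10 * np.sin(a)) for a in t]
acc = circular_hough(pts, (33, 33), radius=10)
yc_hat, xc_hat = np.unravel_index(np.argmax(acc), acc.shape)
# The accumulator peak recovers the true centre (16, 16)
```

In practice the radius is unknown, so the accumulator gains a third dimension over candidate radii, exactly the H(xc, yc, r) of the equations above.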
Normalization
Normalization refers to preparing a segmented iris image for the feature extraction process. In Cartesian coordinates, iris images are highly affected by their distance and angular position with respect to the camera. Moreover, illumination has a direct impact on pupil size and causes non-linear variations of the iris patterns. A proper normalization technique is expected to transform the iris image to compensate for these variations.
Daugman’s Cartesian to Polar Transform
Daugman's normalization method transforms a localized iris texture from Cartesian to polar coordinates. The method is capable of compensating for the unwanted variations due to the distance of the eye from the camera (scale) and its position with respect to the camera (translation). The Cartesian-to-polar transform is defined as:

x(ρ, θ) = (1 − ρ) · xp(θ) + ρ · xi(θ),
y(ρ, θ) = (1 − ρ) · yp(θ) + ρ · yi(θ),

where

xp(θ) = xp0(θ) + rp · cos θ,  yp(θ) = yp0(θ) + rp · sin θ,
xi(θ) = xi0(θ) + ri · cos θ,  yi(θ) = yi0(θ) + ri · sin θ.

The process is inherently dimensionless in the angular direction. In the radial direction, the texture is assumed to change linearly, which is known as the rubber sheet model. The rubber sheet model linearly maps the iris texture in the radial direction from the pupil border to the limbus border into the interval [0, 1] and creates a dimensionless transformation in the radial direction as well.
Figure 3 The normalized iris image using the Cartesian to polar transformation
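The rubber sheet model can be sketched as follows. The pupil and limbus circles are concentric here only for the synthetic test, and the nearest-neighbour sampling is a simplification; Python/NumPy is used for illustration in place of the paper's MATLAB.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, n_radial=20, n_angular=240):
    """Daugman rubber-sheet model: map the region between the pupil circle
    (xp0, yp0, rp) and the limbus circle (xi0, yi0, ri) onto a fixed-size
    rectangular block, dimensionless in both rho and theta."""
    (xp0, yp0, rp), (xi0, yi0, ri) = pupil, iris
    theta = np.linspace(0.0, 2 * np.pi, n_angular, endpoint=False)
    out = np.zeros((n_radial, n_angular), dtype=image.dtype)
    for i, p in enumerate(np.linspace(0.0, 1.0, n_radial)):
        # Linear interpolation between the pupil border and the limbus border
        x = (1 - p) * (xp0 + rp * np.cos(theta)) + p * (xi0 + ri * np.cos(theta))
        y = (1 - p) * (yp0 + rp * np.sin(theta)) + p * (yi0 + ri * np.sin(theta))
        out[i] = image[np.round(y).astype(int) % image.shape[0],
                       np.round(x).astype(int) % image.shape[1]]
    return out

# Synthetic image; concentric pupil (r = 6) and limbus (r = 20) at (32, 32)
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
strip = rubber_sheet(img, pupil=(32, 32, 6), iris=(32, 32, 20))
# strip has the fixed shape (20, 240); row 0 samples the pupil border
```

Whatever the pupil dilation or eye distance, the output block always has the same dimensions, which is what makes the later bitwise template comparison possible.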
Although the normalization method compensates for variations due to scale, translation and pupil dilation, it is not inherently invariant to rotation of the iris. Rotation of an iris in Cartesian coordinates is equivalent to a shift in polar coordinates. In order to compensate for the rotation of iris textures, a best-of-n test of agreement technique is proposed by Daugman in the matching process. In this method, iris templates are shifted and compared in n different directions to compensate for rotational effects.
Figure 4 Normalized fixed rectangular size image
Figure 5 In each radius of iris there are different number of pixels
IMAGE ENHANCEMENT
Image quality is a very important factor in the performance of an iris recognition system. When higher quality images are not available, iris recognition can be compromised by low quality images such as those acquired in a non-invasive, non-cooperative environment, e.g. iris images obtained at a distance and on the move. These images are characterized by abundant degrading factors such as low resolution, poor lighting and contrast, extensive specular reflections, eyelid occlusion, presence of contact lenses, distracting eyewear, etc. Thus, methods for iris image enhancement play an important part in contributing to the accuracy of iris recognition systems. Enhancement of an image can be implemented using different operations such as brightness increment, sharpening, blurring or noise removal. Unfortunately, there is no general theory for determining what constitutes 'good' image enhancement when it comes to human perception: if it looks good, it is good! Image enhancement operations can be divided into two categories:
Figure 6 Operations of image Enhancement
As shown in Fig. 6, image enhancement can be implemented by noise removal or contrast enhancement. Noise removal is an operation that removes unwanted details from an image. Such detail gets attached to an image during the capturing or acquisition process; the noise may be due to environmental particles, limitations of the capturing device, lack of experience of the machine or computer operator, or some other reason. Noise removal helps an image processing system extract the necessary information only. The other image enhancement operation is contrast improvement.
Techniques of contrast enhancement
These techniques can be broadly categorized into two groups:
Direct method
Indirect method
Direct method
In the direct method of contrast enhancement, a contrast measure is first defined and then modified by a mapping function to generate the pixel values of the enhanced image. Various mapping functions, such as the square root function and the exponential function, have been introduced for contrast measure modification. However, these functions do not produce satisfactory contrast enhancement results and are usually sensitive to noise and digitization effects. In addition, they are computationally complex to implement. The polynomial function is easy to implement on digital computers and provides very satisfactory contrast enhancement.
Indirect method
Indirect methods, on the other hand, improve the contrast through exploiting the underutilized regions of the dynamic
range without defining a specific contrast term. Most methods in the literature fall into the second group.
Histogram Equalization
The histogram, in the context of image processing, is the operation by which the occurrence of each intensity value in the image is shown. Normally, a histogram is a graph showing the number of pixels in an image at each different intensity value found in that image. For an 8-bit grayscale image there are 256 different possible intensities, and so the histogram will graphically display 256 numbers showing the distribution of pixels amongst those grayscale values. Histogram equalization is a technique by which the dynamic range of the histogram of an image is increased.
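A minimal sketch of histogram equalization via the normalized cumulative histogram, assuming an 8-bit grayscale input (Python/NumPy used for illustration):

```python
import numpy as np

def hist_equalize(img):
    """Map each intensity through the normalized CDF of the image histogram,
    stretching the occupied part of the range over the full 0..255 interval."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # CDF value of the first occupied bin
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast image that only uses the levels 100..131:
img = (np.arange(64 * 64).reshape(64, 64) % 32 + 100).astype(np.uint8)
eq = hist_equalize(img)
# After equalization the image spans the full dynamic range 0..255
```

Because the mapping follows the cumulative distribution, intensity levels with many pixels are spread apart while sparse levels are compressed, which is the stretching of the dynamic range described above.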
The Discrete Wavelet Transform
Computing wavelet coefficients at every possible scale generates a very large amount of data. That is why we choose only a subset of scales and positions at which to make our calculations. It turns out, rather remarkably, that if we choose scales and positions based on powers of two, our analysis will be much more efficient and just as accurate. We obtain such an analysis from the discrete wavelet transform (DWT), given by:

DWT = Σ_{k=1..∞} Σ_{l=−∞..∞} q(k, l) ψ(2^(−k) t − l)
An efficient way to implement this scheme using filters was developed in 1988. This algorithm is in fact a
classical scheme known in the signal processing community as a two-channel sub band coder. This very practical
filtering algorithm yields a fast wavelet transform a box into which a signal passes, and out of which wavelet coefficients
quickly emerge. Let‟s examine this in more depth.
Let

φ(x) = Σ_n hφ(n) √2 φ(2x − n),
ψ(x) = Σ_n hψ(n) √2 ψ(2x − n).

The 2D transform then produces four sets of coefficients:

Wφ(j, m, n): approximation coefficients
WψH(j, m, n): horizontal detail coefficients
WψV(j, m, n): vertical detail coefficients
WψD(j, m, n): diagonal detail coefficients
2-D Discrete Wavelet Transform
For use of wavelet transform in image processing we must implement a 2D version of the analysis and synthesis filter
banks. In the 2D case, the 1D analysis filter bank is first applied to the columns of the image and then applied to the rows.
If the image has N1 rows and N2 columns, then after applying the 1D analysis filter bank to each column we have two
subband images, each having N1/2 rows and N2 columns; after applying the 1D analysis filter bank to each row of both
of the two subband images, we have four subband images, each having N1/2 rows and N2/2 columns. This is illustrated
in the diagram below. The 2D synthesis filter bank combines the four subband images to obtain the original image of size
N1 by N2.
Figure 7 2D Wavelet Filter
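One level of such a 2D analysis filter bank can be sketched with the Haar filters, chosen here only for brevity; the paper does not fix a particular wavelet, and this Python/NumPy sketch stands in for the MATLAB routines actually used.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D DWT using Haar filters: filter-and-downsample
    along the columns first, then along the rows, producing the
    approximation (LL) and the horizontal, vertical and diagonal detail
    subbands, each half the size in every dimension."""
    img = img.astype(float)
    # Column pass (pairs of rows): lowpass = scaled sum, highpass = scaled difference
    lo = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    # Row pass (pairs of columns) on both intermediate subbands
    LL = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    LH = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    HL = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    HH = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

# A constant 8x8 image: all energy ends up in the approximation subband
LL, LH, HL, HH = haar_dwt2(np.ones((8, 8)))
```

An N1 × N2 input yields four N1/2 × N2/2 subbands, exactly the subband layout described in the text; repeating the pass on LL gives the deeper decomposition levels.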
HAMMING DISTANCE
For comparing two iris codes, a nearest-neighbour approach is taken, where the distance between two feature vectors is measured using the product of sums (POS) of the individual sub-feature Hamming distances (HD).
This can be defined as follows:

HD = ( Π_{i=1..M} [ (1/N) Σ_{j=1..N} (sub-feature1_ij ⊕ sub-feature2_ij) ] )^(1/M)
Here, we consider the iris code as a rectangular block of size M×N, M being the number of bits per sub-feature and N the total number of sub-features in a feature vector. Corresponding sub-feature bits are XORed, and the resultant N-length vector is summed and normalized by dividing by N. This is done for all M sub-feature bits, and the geometric mean of these M sums gives the normalized HD, lying in the range 0 to 1. For a perfect match, where every bit from feature 1 matches the corresponding bit of feature 2, all M sums are 0 and so is the HD, while for a total opposite, where every bit from the first feature is reversed in the second, M sums of N/N = 1 are obtained, with a final HD of 1. Since a total bit reversal is highly unlikely, it is expected that a random pattern difference should produce an HD of around 0.5.
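The product-of-sum matching just described can be sketched as follows, treating each template as an M×N array of bits (Python/NumPy for illustration; the array shape and seed are ours):

```python
import numpy as np

def pos_hamming(code1, code2):
    """Product-of-sum HD for two M x N boolean templates: per-row normalized
    XOR sums combined by their geometric mean, giving a value in [0, 1]."""
    M, N = code1.shape
    sums = np.count_nonzero(code1 ^ code2, axis=1) / N   # M normalized XOR sums
    return float(np.prod(sums) ** (1.0 / M))             # geometric mean

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(8, 64), dtype=np.uint8).astype(bool)
perfect = pos_hamming(a, a)    # identical templates give HD = 0
opposite = pos_hamming(a, ~a)  # a total bit reversal gives HD = 1
```

Note that the geometric mean makes the POS distance stricter than a plain average: a single perfectly matching sub-feature row drives the whole product, and hence the HD, to zero.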
Image pre-processing
For coding, irises are extracted from the eye images and normalized to a standard format for feature extraction, in order to remove variability introduced by pupil dilation, camera-to-eye distance, head tilt, and torsional eye rotation within its socket. Moreover, images acquired by different cameras under different environmental conditions have different resolutions and illumination distributions. All these factors need to be taken into consideration and compensated for in order to generate a final normalized version compliant with the feature extraction input format. Iris images already normalized to a resolution of 512×80 pixels for 308 classes were obtained from CASIA, and the 150 classes from the Bath database were pre-processed in house.
Figure 8 Sample Human Eye
Feature Extraction
As in Fourier-based iris coding work, we start from a general paradigm whereby the feature vectors are derived from the zero crossings of the differences between wavelet coefficients calculated in rectangular image patches, as illustrated by the figure. Averaging across the width of these patches with appropriate windowing helps to smooth the data and mitigate the effects of noise and other image artifacts.
IV. RESULTS AND DISCUSSIONS
The first goal was to confirm the uniqueness of iris patterns. Testing the uniqueness of iris patterns is important, since recognition relies on iris patterns from different eyes being entirely independent, with failure of a test of statistical independence resulting in a match. Uniqueness was determined by comparing templates generated from different eyes to each other, and examining the distribution of Hamming distance values produced. This distribution is known as the inter-class distribution.
According to statistical theory, the mean Hamming distance for comparisons between inter-class iris templates will be 0.5. This is because, if truly independent, the bits in each template can be thought of as being set randomly, so there is a 50% chance of a bit being set to 0 and a 50% chance of it being set to 1. Therefore, half of the bits will agree between two templates and half will disagree, resulting in a Hamming distance of 0.5.
Uniqueness was also determined by measuring the number of degrees of freedom represented by the templates. This gives a measure of the complexity of the iris patterns, and can be calculated by approximating the collection of inter-class Hamming distance values as a binomial distribution. The number of degrees of freedom, DOF, can be calculated as:

DOF = p(1 − p) / σ²,

where p is the mean and σ is the standard deviation of the distribution.
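The estimate can be checked on synthetic data: if inter-class Hamming distances behave as averages of N independent fair bits, the formula recovers N from the sample mean and standard deviation. The template size N_true = 400 below is an arbitrary assumption for the simulation, not a value from the paper.

```python
import numpy as np

# Simulate inter-class comparisons as averages of independent fair bits and
# recover the number of degrees of freedom from the HD distribution via
# DOF = p(1 - p) / sigma^2.
rng = np.random.default_rng(1)
N_true = 400                                           # assumed bits per template
hds = rng.binomial(N_true, 0.5, size=20000) / N_true   # simulated inter-class HDs
p, sigma = hds.mean(), hds.std()
dof = p * (1 - p) / sigma ** 2
# p comes out close to 0.5 and dof close to N_true
```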
Figure 9 Iris recognition system screen
Figure 10 Segmented template screen
Figure 11 Normalized template screen
Figure 12 Encoded screen template
Figure 13 Segmented query screen
Figure 14 Segmented eye with iris screen
Figure 15 Encoded screen
Figure 16 Authentication Demo-1
Figure 17 Segmented query screen
Figure 18 Normalised query screen
Figure 19 Encoded screen
Figure 20 Authentication Demo-2
V. CONCLUSION
In this paper, iris features were encoded by convolving the normalized iris region with the wavelet transform to produce a bitwise template. Using MATLAB and the 2D wavelet transform, we improved the recognition rate. The Hamming distance was chosen as the matching metric, which gives a measure of how many bits disagree between two templates. A failure of statistical independence between two templates results in a match; that is, two templates are deemed to have been generated from the same iris if the Hamming distance produced is lower than a set threshold.
VI. FUTURE WORK
An improvement could be made in the speed of the system. The most computationally intensive stages include performing the Hough transform and calculating Hamming distance values between templates to search for a match. In future, the system could be implemented using other methods that help to improve its recognition rate.
REFERENCES
[1] M. Uenohara and T. Kanade, "Use of Fourier and Karhunen-Loeve Decomposition for Fast Pattern Matching with a Large Set of Templates," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 891-898, 1997.
[2] M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[3] A.K. Jain, S. Pankanti, S. Prabhakar, L. Hong, A. Ross, and J.L. Wayman, "Biometrics: A Grand Challenge," Proc. 17th Int'l Conf. Pattern Recognition, vol. 2, pp. 935-942, 2004.
[4] A.K. Jain, A. Ross, and S. Prabhakar, "An Introduction to Biometric Recognition," IEEE Trans. Circuits and Systems for Video Technology, vol. 14, pp. 4-20, 2004.
[5] A. Ross and A.K. Jain, "Multimodal Biometrics: An Overview," Proc. 12th European Signal Processing Conf., pp. 1221-1224, 2004.
[6] F.H. Adler, Physiology of the Eye. Mosby, 1965.
[7] P. Kronfeld, Gross Anatomy and Embryology of the Eye. Academic Press, 1962.
[8] J. Daugman and C. Downing, "Epigenetic Randomness, Complexity, and Singularity of Human Iris Patterns," Proc. Royal Soc. (London) B: Biological Sciences, vol. 268, pp. 1737-1740, 2001.
[9] J. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.
[10] J. Daugman, "The Importance of Being Random: Statistical Principles of Iris Recognition," Pattern Recognition, vol. 36, pp. 279-291, 2003.
[11] J. Daugman, "Statistical Richness of Visual Phase Information: Update on Recognizing Persons by Iris Patterns," Int'l J. Computer Vision, vol. 45, pp. 25-38, 2001.
[12] R.P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proc. IEEE, vol. 85, pp. 1348-1363, 1997.
[13] W.W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Trans. Signal Processing, vol. 46, pp. 1185-1188, 1998.
[14] W.W. Boles, "A Security System Based on Human Iris Identification Using Wavelet Transform," Proc. First Int'l Conf. Knowledge-Based Intelligent Electronic Systems, vol. 2, pp. 533-541, 1997.
[15] Pradeep Kumar, "Iris Recognition with Fake Identification," Computer Engineering and Intelligent Systems, ISSN 2222-1719 (Paper), ISSN 2222-2863 (Online), vol. 2, no. 4, 2011.