

Technovision-2014: 1st International Conference at SITS, Narhe, Pune on April 5-6, 2014

All copyrights reserved by Technovision-2014, Department of Electronics and Telecommunication Engineering, Sinhgad Institute of Technology and Science, Narhe, Pune. Published by IJECCE (www.ijecce.org).

International Journal of Electronics Communication and Computer Engineering, Volume 5, Issue (4), July, Technovision-2014, ISSN 2249-071X

Accurate Iris Segmentation based on Geometric Method of Pupil Localization

Samina Salim Mujawar
M.E. Electronics (Pursuing)
K. B. P. C. O. E. P., Satara, Maharashtra, India
Email: [email protected]

Prof. Nanaware J. D.
Associate Professor, Dept. of Electronics Engineering
K. B. P. C. O. E. P., Satara, Maharashtra, India
Email: [email protected]

Abstract – Pattern-based analysis of biological data makes iris recognition technology highly reliable for personal identification. Characteristics of the iris signal such as universality, uniqueness, permanence, performance, and acceptability make an iris recognition system robust. The pupil of an eye resembles a black circular disk, so pupil detection amounts to finding a black disk in the input image. This can be achieved by evaluating the circularity and area of the black region of the input image. This paper therefore focuses on the use of pupil parameters, such as pupil circularity and pupil radius, as features of the iris. These features can be used for authentication and are generally called the "dynamic features (DFs)" of the iris. They depend on parameters such as how the human eye reacts to light. Information is extracted using these parameters by a segmentation system that localizes the circular iris and pupil region, including removal of eyelids, eyelashes, and reflections; this information is then used for biometric recognition.

Keywords – Iris, Iris Recognition, Segmentation.

I. INTRODUCTION

A biometrics system uses both hardware and software for authentication. The hardware captures the biometric information, and the software analyzes, manages, and stores it. In general, the software translates these measurements into a mathematical, computer-readable format. When a user first creates a biometric profile, the biometric information is processed into a template, which is stored in a database. The biometrics system then compares this template to the new image created every time the user accesses the system.
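The enroll-then-compare flow above can be sketched in a few lines. This is a toy illustration, not the paper's method: the names `enroll`, `verify`, and `extract_template` are hypothetical, and `extract_template` merely thresholds pixels where a real system would run the full segmentation and encoding pipeline.

```python
# Minimal sketch of the template enroll/compare flow (hypothetical names).

def extract_template(image):
    # Placeholder for the real feature-extraction pipeline:
    # here we just binarize pixel intensities into a bit template.
    return [1 if p > 128 else 0 for p in image]

def enroll(database, user_id, image):
    # First access: process the biometric into a template and store it.
    database[user_id] = extract_template(image)

def verify(database, user_id, image, max_mismatch=0.3):
    # Later accesses: compare the stored template to a fresh one.
    stored = database[user_id]
    probe = extract_template(image)
    mismatches = sum(a != b for a, b in zip(stored, probe))
    return mismatches / len(stored) <= max_mismatch

db = {}
enroll(db, "alice", [200, 90, 150, 30])
print(verify(db, "alice", [210, 80, 140, 25]))  # same eye -> True
```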

John Daugman, a Professor of Computer Vision and Pattern Recognition, is best known for his pioneering work in biometric identification, in particular the development of the Gabor-wavelet-based iris recognition algorithm. He proposed image analysis algorithms to find the iris in a live video image of a person's face and encode its texture into a compact signature, or "iris code." The iris texture is extracted from the image at multiple scales of analysis by a self-similar set of quadrature (2-D Gabor) bandpass filters defined in a dimensionless polar coordinate system [1]. Daugman also described a method for rapid visual recognition of persons based on the failure of a statistical test of independence [2]. He then worked on encoding the visible texture of a person's iris in a real-time video image into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients [3].
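The test of independence in [2] rests on the fractional Hamming distance between two binary iris codes. The sketch below shows only that comparison on toy bit lists, not real Gabor-phase codes, and ignores the rotation shifts and occlusion masks a real matcher would use.

```python
# Sketch: fractional Hamming distance between two binary iris codes,
# the quantity behind the statistical test of independence [2].
# The codes here are toy bit lists, not real Gabor-phase iris codes.

def hamming_distance(code_a, code_b):
    # Fraction of bits that disagree between the two codes.
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

same_eye = hamming_distance([0, 1, 1, 0, 1, 0, 0, 1],
                            [0, 1, 1, 0, 1, 0, 1, 1])
print(same_eye)  # 0.125: far below the ~0.5 expected of independent codes
```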

W. W. Boles and B. Boashash presented a new approach to iris recognition based on the wavelet transform. Zero-crossings of the wavelet transform at various resolution levels are calculated over concentric circles on the iris, and the resulting one-dimensional (1-D) signals are compared with model features using different dissimilarity functions [4]. Zhenan Sun and Tieniu Tan proposed using ordinal measures for iris feature representation, with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures [5]. Ronaldo Martins da Costa and Adilson Gonzaga evaluated the texture features observed during pupil movements and the iris contraction and dilation rates caused by changes in the illumination conditions [6]. Libin Wang, Zhenan Sun, and Tieniu Tan proposed a robust regularized linear programming feature selection method for iris recognition, which uses a compact and effective ordinal feature set [7]. Hugo Proença and Luís A. Alexandre focused on the capture of iris images at large distances, under less controlled lighting conditions, and without the active participation of the subjects, i.e., a non-cooperative view of iris recognition [8].

II. SEGMENTATION

The human eye is sensitive to visible light. When bright light falls on the eye, light-sensitive cells in the retina, including rod and cone photoreceptors and melanopsin ganglion cells, send signals to the oculomotor nerve, which terminates on the circular iris sphincter muscle. When this muscle contracts, the pupil size reduces. This is called the pupillary light reflex. Furthermore, the pupil will dilate if a person sees an object of interest. The pupil contracts and dilates depending on the intensity of visible light, and the iris and the sclera reflect light exceptionally well [9].

The first step in any pattern recognition method is image acquisition. The image of the iris can be captured using a standard camera under both visible and infrared light. The next stage is to isolate the actual iris region in the eye image. The iris region, shown in Figure 1, can be approximated by two circles, one for the iris/sclera boundary and another, interior to the first, for the iris/pupil


boundary. The eyelids and eyelashes normally occlude the upper and lower parts of the iris region. Also, specular reflections can occur within the iris region, corrupting the iris pattern. A technique is required to isolate and exclude these artifacts as well as to locate the circular iris region. The success of segmentation depends on the imaging quality of the eye images.

Fig.1. Sample image of eye from CASIA database (Image No. 001 2_1).

Iris segmentation is one of the important operations in an iris recognition system. Successful and precise feature extraction and recognition, and consequently the high performance of the iris recognition system, depend upon accurate iris segmentation. Iris segmentation requires a detailed study of modeling parameters and characteristics, which prevents effective real-time application and makes the system highly sensitive to noise [10][11]. The three important stages of segmentation are:
- Edge detection
- Finding a circle
- Eyelid detection
A) Edge Detection:

Edges characterize object boundaries. A region where the intensity function changes abruptly is considered an edge. Edges in images are areas with strong intensity contrast, i.e., a jump in intensity from one pixel to the next. Edge detection not only reduces the amount of data from the image and filters out useless information but also preserves the important structural properties of the image. In the segmentation process, to detect the iris boundary it is necessary to create an edge map. The Canny edge detection principle is used to generate the edge map. The Canny edge detector first smoothens the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks along these regions and suppresses any pixel that is not at the maximum (nonmaxima suppression). The nonmaxima array is further reduced by hysteresis, which tracks along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds: if the magnitude is below the lower threshold, it is set to zero; if the magnitude is above the higher threshold, it is treated as an edge; and if the magnitude is between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a value above the lower threshold.
1) Image Smoothing:

The smoothing of the image is done to suppress noise. Noise is associated with high frequencies, so suppression of high frequencies suppresses the noise. The input eye image is first smoothened using a Gaussian filter; sharp edges are blurred by the smoothing. Let I[i, j] denote the image and G[i, j, σ] a Gaussian smoothing operator. The Gaussian smoothing operator is a 2-D convolution operator used to blur images and remove noise, and is given by the 2-D Gaussian equation:

G(i, j) = (1 / (2πσ²)) exp(−(i² + j²) / (2σ²))   (1)

where σ is the standard deviation, i.e., the spread of the Gaussian, and controls the degree of smoothing. σ acts as the deciding factor of the degree of smoothing. Increasing σ makes the gap between different levels of edges larger; this means that a large value of sigma gives more blurring. Decreasing σ makes the gap between different levels of edges smaller; this means that a small value of sigma gives less blurring [12][13]. The convolution of the image I[i, j] with G[i, j, σ] gives an array of smoothed data as:
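The role of σ in Eq. (1) can be sketched by building the discrete Gaussian weights directly. The kernel size of 5 and the σ values are arbitrary choices for illustration; a larger σ spreads the weight away from the centre pixel, giving stronger blurring, as described above.

```python
import math

# Sketch: discrete 2-D Gaussian of Eq. (1); larger sigma -> flatter
# kernel -> more blurring.

def gaussian_kernel(size, sigma):
    half = size // 2
    k = [[math.exp(-(i * i + j * j) / (2 * sigma * sigma))
          / (2 * math.pi * sigma * sigma)
          for j in range(-half, half + 1)]
         for i in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    # Normalize so the weights sum to 1 and smoothing preserves brightness.
    return [[v / total for v in row] for row in k]

narrow = gaussian_kernel(5, 0.8)
wide = gaussian_kernel(5, 2.0)
# Larger sigma puts less weight on the centre pixel -> stronger blur.
print(narrow[2][2] > wide[2][2])  # True
```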

S[i, j] = I[i, j] * G[i, j, σ]   (2)
2) Gradient Calculation:
The application of gradient operators detects changes in the image function. A change in pixel value corresponds to a large gradient value. Gradient operators are based on local derivatives of the image function, so at locations where the image function undergoes rapid change, the derivatives are larger. Gradient operators suppress only the low frequencies in the Fourier transform domain; as noise is generally associated with high frequencies, the gradient operator increases the noise level. First, the gradient of the smoothed array S[i, j] is used to produce the x and y partial derivatives P[i, j] and Q[i, j] respectively. The magnitude and orientation of the gradient can be computed as:

M[i, j] = √(P[i, j]² + Q[i, j]²)
θ[i, j] = arctan(Q[i, j] / P[i, j])   (3)

Figure 2 shows the gradient amplitude image with the outer and inner boundaries of the iris.

Fig.2. Gradient amplitude image showing outer and inner boundary of iris.
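Eq. (3) can be sketched on a toy image. Here P and Q are approximated with simple forward differences, which is an assumption (the paper does not specify the derivative operator); the 4x4 step-edge image is hypothetical.

```python
import math

# Sketch of Eq. (3): gradient magnitude M and orientation theta from
# the partial derivatives P and Q of the smoothed image S.

def gradients(S):
    rows, cols = len(S), len(S[0])
    M = [[0.0] * cols for _ in range(rows)]
    theta = [[0.0] * cols for _ in range(rows)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            P = S[i][j + 1] - S[i][j]   # x partial derivative P[i, j]
            Q = S[i + 1][j] - S[i][j]   # y partial derivative Q[i, j]
            M[i][j] = math.hypot(P, Q)          # sqrt(P^2 + Q^2)
            theta[i][j] = math.atan2(Q, P)      # arctan(Q / P)
    return M, theta

# A vertical step edge: the magnitude is large only at the jump.
S = [[0, 0, 9, 9],
     [0, 0, 9, 9],
     [0, 0, 9, 9]]
M, theta = gradients(S)
print(M[1][1], M[1][0])  # 9.0 at the edge, 0.0 in the flat region
```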

Page 3: Accurate Iris Segmentation based on Geometric Method of ... · of the Gabor wavelet based iris recognition algorithm. He proposed Image analysis algorithms to find the iris in a live

a

Technovision-2014: 1st International Conference at SITS, Narhe, Pune on April 5-6, 2014

All copyrights Reserved by Technovision-2014, Department of Electronics and Telecommunication Engineering,Sinhgad Institute of Technology and Science, Narhe, PunePublished by IJECCE (www.ijecce.org) 233

International Journal of Electronics Communication and Computer EngineeringVolume 5, Issue (4) July, Technovision-2014, ISSN 2249–071X

In order to detect the edges, it is essential to determine intensity changes in the neighborhood of a pixel. Thus contrast enhancement is applied in order to obtain a prominent edge map [4]. Figure 3 shows the contrast-enhanced images used when finding the outer and inner boundaries.

Fig.3. Contrast enhanced image showing outer and inner boundary of iris.
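The paper does not name the exact enhancement used, so the sketch below assumes a simple linear min-max contrast stretch, one common choice for making intensity changes more prominent before edge detection.

```python
# Sketch of a linear min-max contrast stretch (an assumption; the paper
# does not specify which enhancement is applied).

def stretch_contrast(pixels, new_min=0, new_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return pixels[:]
    scale = (new_max - new_min) / (hi - lo)
    # Map the observed range [lo, hi] onto [new_min, new_max].
    return [round((p - lo) * scale) + new_min for p in pixels]

# A low-contrast strip spread over the full 0..255 range.
print(stretch_contrast([100, 110, 120]))  # [0, 128, 255]
```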

3) Nonmaxima Suppression:
In the Canny edge detection method, an edge point is defined as a point whose strength is locally maximum in the direction of the gradient. This means that a zero value is assigned everywhere except at the local maxima points: at the local maxima the value is preserved, and all other values are marked as zero. This process, which results in one-pixel-wide ridges, is called non-maxima suppression [13][15]. A snapshot of the image after non-maxima suppression is shown in Figure 4.

Fig.4. Non-maxima suppressed image showing outer and inner boundary of iris.

The non-maxima suppression follows these steps:
- Find the pixels with non-zero magnitude.
- For each pixel with non-zero magnitude, inspect the two adjacent pixels in the radial direction of the edge.
- If the edge magnitude of either of these two exceeds that of the pixel under inspection, mark the pixel under inspection for deletion.
- When all the pixels have been inspected, rescan the image and erase all edge data points which are marked for deletion [15].
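The steps above can be sketched on a 1-D slice of gradient magnitudes taken along the gradient direction (a simplification of the 2-D case): a pixel survives only if neither neighbour along that direction exceeds it.

```python
# Sketch of non-maxima suppression on a 1-D slice of magnitudes.

def nonmax_suppress(mag):
    out = mag[:]
    for i in range(1, len(mag) - 1):
        # Inspect the two adjacent pixels; if either exceeds this one,
        # mark it for deletion (here: erase immediately in the copy).
        if mag[i] != 0 and (mag[i - 1] > mag[i] or mag[i + 1] > mag[i]):
            out[i] = 0
    return out

# The ridge at value 9 survives; its weaker neighbours are suppressed,
# leaving a one-pixel-wide ridge.
print(nonmax_suppress([0, 3, 9, 4, 0]))  # [0, 0, 9, 0, 0]
```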

4) Hysteresis Thresholding:
The nonmaxima suppressed magnitude image N[i, j] may contain many false edge fragments caused by noise and fine texture. These false edge fragments should be reduced by applying a threshold to N[i, j]. The threshold T is chosen such that a prominent edge map is created: all values below the threshold are set to zero. After applying the threshold to the nonmaxima suppressed magnitude image, an array E[i, j] containing the edges detected in the image I[i, j] is obtained. If T is too low, some false edges will remain; if T is too high, some useful edges will disappear. A more effective thresholding scheme therefore uses two thresholds, T1 (upper threshold) and T2 (lower threshold), to find the edge-mapped image. Hysteresis thresholding is an algorithm proposed by Canny [13] to mark the edges of the underlying image; it is used so that lines that include both strong and weak gradients are not split up. The following statements describe the process:
- If the gradient magnitude at any pixel is above T1, that pixel is immediately marked as part of an edge.
- For a given pixel, if the gradient magnitude is below T2, it is unconditionally set to zero.
- If the gradient is between these two, the pixel is considered part of an edge only when it is connected to a pixel already marked as part of an edge. This step is repeated until no new pixel is marked as part of an edge.
This algorithm simply treats an edge as a connected set of pixels [16][17]. Figure 5 shows the image obtained from Figure 4 after applying thresholding.

Fig.5. Thresholded image showing outer and inner boundary of iris.
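The two-threshold rule above can be sketched on a 1-D row of gradient magnitudes (a simplification of the 2-D connectivity): strong pixels (>= T1) seed edges, and weak pixels between T2 and T1 are kept only when connected to a strong one.

```python
# Sketch of Canny's hysteresis thresholding on a 1-D row of magnitudes.

def hysteresis_1d(mag, t1, t2):
    edge = [m >= t1 for m in mag]        # pixels above T1 seed the edges
    changed = True
    while changed:                       # repeat until no new pixel marked
        changed = False
        for i, m in enumerate(mag):
            if not edge[i] and t2 <= m < t1:
                # Between T2 and T1: keep only if connected to an edge.
                left = i > 0 and edge[i - 1]
                right = i + 1 < len(mag) and edge[i + 1]
                if left or right:
                    edge[i] = True
                    changed = True
    return [1 if e else 0 for e in edge]  # pixels below T2 stay zero

# The weak pixel (5) next to a strong one (9) is kept;
# the isolated weak pixel at the end is dropped.
print(hysteresis_1d([1, 5, 9, 2, 0, 5], t1=8, t2=4))  # [0, 1, 1, 0, 0, 0]
```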

B) Finding a Circle (Hough Transform):
After finding the edge map of the input eye image with the Canny edge detector, the next step is to find the radius and center coordinates of the outer and inner circular boundaries of the iris region. The circular Hough transform is employed to find the radius and centre coordinates of the circular boundary of the pupil and of the iris outer boundary. From the edge map, votes are cast in Hough space for the parameters of circles passing through each edge point. The votes in the circular Hough space are then analyzed to estimate the three parameters of one circle (x0, y0, r). The Hough space is defined as:

H(x0, y0, r) = Σi h(xi, yi, x0, y0, r)   (4)
where (xi, yi) is an edge pixel, and
h(xi, yi, x0, y0, r) = 1 if (xi, yi) is on the circle (x0, y0, r), and 0 otherwise.   (5)
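The voting in Eqs. (4)-(5) can be sketched as follows. To keep the sketch small it varies only r for one fixed candidate centre (a real accumulator sweeps x0 and y0 as well), and the edge pixels and tolerance are hypothetical.

```python
import math

# Sketch of circular Hough voting (Eqs. (4)-(5)) for a fixed centre:
# each edge pixel votes for every radius whose circle passes through it.

def vote_radii(edge_pixels, x0, y0, radii, tol=0.5):
    H = {r: 0 for r in radii}
    for (xi, yi) in edge_pixels:
        d = math.hypot(xi - x0, yi - y0)   # distance from candidate centre
        for r in radii:
            if abs(d - r) <= tol:          # h(...) = 1: pixel on the circle
                H[r] += 1
    return H

# Edge pixels sampled from a circle of radius 5 around (0, 0).
pixels = [(5, 0), (0, 5), (-5, 0), (0, -5), (3, 4), (4, 3)]
H = vote_radii(pixels, 0, 0, radii=[3, 4, 5, 6])
print(max(H, key=H.get))  # 5: the true radius collects the most votes
```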


The location (x0, y0, r) with the maximum value of H(x0, y0, r) is chosen as the parameter vector for the strongest circular boundary [18][19].
C) Eyelid Detection:

Occlusion of the iris region by the eyelids affects the performance of the recognition system. The eyelid detection procedure is nearly the same as edge detection: first, edges are detected using the Canny edge detector, and then horizontal lines are detected. Eyelids are isolated by fitting a line to the upper and lower eyelid using the linear Hough transform [19][20].
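The linear Hough step can be sketched in slope-intercept form: edge pixels vote for (slope, intercept) pairs, and the best-supported near-horizontal line is taken as the eyelid boundary. The quantized integer parameters and the pixel coordinates are hypothetical choices that keep this toy version exact.

```python
# Sketch of linear Hough voting for the eyelid line, in slope-intercept
# form y = m*x + c over a small quantized parameter grid.

def hough_lines(edge_pixels, slopes, intercepts):
    votes = {}
    for (x, y) in edge_pixels:
        for m in slopes:
            c = y - m * x                 # line through (x, y) with slope m
            if c in intercepts:
                votes[(m, c)] = votes.get((m, c), 0) + 1
    return max(votes, key=votes.get)      # best-supported (slope, intercept)

# Pixels along the near-horizontal eyelid edge y = 0*x + 7, plus noise.
pixels = [(0, 7), (1, 7), (2, 7), (3, 7), (5, 2)]
print(hough_lines(pixels, slopes=[-1, 0, 1], intercepts=range(0, 12)))
# (0, 7): the horizontal eyelid line wins the vote
```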

III. ALGORITHM FOR IRIS LOCALIZATION

Accurate iris segmentation gives the Cartesian coordinates of the center of the eye image. Similarly, we can find the inner and outer boundaries of the pupil, and from these parameters we can compute the radii of the boundaries. In this way we can localize the iris by drawing a perfect geometry that fits the boundaries [21]. This can be achieved by the following steps:
1) Consider the center point as parameter 1. Let (x, y) be the coordinates of the center of the eye image.
2) Consider the geometry for the boundaries of the iris as parameter 2. Let r1 be radius 1 and r2 be radius 2.
3) Capture parameter 1.
4) Capture the coordinates of two points on the inner boundary of the iris, pX and pY respectively.
5) Compute radius r1 using the centre of the eye image and the vertical point's coordinates with the formula

r1 = √(x1² + y1²)   (6)

where r1 is the distance between the centre of the eye image and the vertical point's coordinates, with
x1 = pX.x1 − pcenter.x1
y1 = pX.y1 − pcenter.y1
6) Compute radius r2 using the centre of the eye image and the horizontal point's coordinates with the formula

r2 = √(x2² + y2²)   (7)

where r2 is the distance between the centre of the eye image and the horizontal point's coordinates, with
x2 = pY.x2 − pcenter.x2
y2 = pY.y2 − pcenter.y2
7) Using the coordinates of the centre of the eye and the radii r1 and r2, draw an ellipse.
8) Store the coordinates of the centre of the eye image and the computed radii r1 and r2 for the inner boundary.
9) Repeat steps 3 to 5 for the outer boundary.
10) Store the coordinates of the centre of the eye image and the computed radii r1 and r2 for the outer boundary.
11) Stop.

IV. CONCLUSION

Extraction of information is done using a segmentation system that localizes the circular iris and pupil region, including removal of eyelids, eyelashes, and reflections. The performance of the identification system is closely related to the precision of the iris localization step. With the proposed method we can accurately define both the inner and outer boundaries of the iris. From the captured parameters we obtain the fitted geometry, which may be a circle or an ellipse. The method is therefore simple, robust, flexible, and accurate.

REFERENCES

[1] J. G. Daugman, "Biometric personal identification system based on iris analysis," U.S. Patent No. 5,291,560, 1 March 1994.

[2] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, Nov. 1993.

[3] J. G. Daugman, "High confidence recognition of persons by rapid video analysis of iris texture," European Convention on Security and Detection, 16-18 May 1995, Conference Publication No. 408, IEE, 1995.

[4] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, April 1998.

[5] Z. Sun and T. Tan, "Ordinal measures for iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, December 2009.

[6] R. M. da Costa and A. Gonzaga, "Dynamic features for iris recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 42, no. 4, 2012.

[7] L. Wang, Z. Sun, and T. Tan, "Robust regularized feature selection for iris recognition via linear programming," 21st ICPR, November 11-15, 2012, Tsukuba, Japan.

[8] H. Proença and L. A. Alexandre, "Toward noncooperative iris recognition: A classification approach using multiple signatures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, April 2007.

[9] E. Wolff, Anatomy of the Eye and Orbit, 7th edition, H. K. Lewis & Co. Ltd., 1976.

[10] M. Vatsa, R. Singh, and A. Noore, "Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2008.

[11] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, September 1997.

[12] W. Kong and D. Zhang, "Accurate iris segmentation based on novel reflection and eyelash detection model," Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2001.

[13] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, November 1986.

[14] J. M. Gauch, "Image segmentation and analysis via multiscale gradient watershed hierarchies," IEEE Transactions on Image Processing, vol. 8, no. 1, January 1999.

[15] J. Huang, X. You, Y. Tang, L. Du, and Y. Yuan, "A novel iris segmentation using radial-suppression edge detection," Signal Processing 89 (2009) 2630-2643.


[16] R. R. Rakesh and C. A. Murthy, "Thresholding in edge detection: A statistical approach," IEEE Transactions on Image Processing, vol. 13, no. 7, July 2004.

[17] D. Sen and S. K. Pal, "Gradient histogram: Thresholding in a region of interest for edge detection," Image and Vision Computing 28 (2010) 677-695.

[18] P. Verma, M. Dubey, S. Basu, and P. Verma, "Hough transform method for iris recognition - a biometric approach," International Journal of Engineering and Innovative Technology.

[19] W. Kong and D. Zhang, "Accurate iris segmentation based on novel reflection and eyelash detection model," Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, 2001, pp. 263-266.

[20] D. Mirza, I. A. Taj, and A. Khalid, "A robust eyelid and eyelash removal method and a local binarization based feature extraction technique for iris recognition system," IEEE, 2009.

[21] O. C. Abikoye, J. O. Omolehin, and J. S. Sadiku, "Some refinement on iris localization algorithm," International Journal of Engineering and Technology, vol. 2, no. 11, November 2012.