Iris & Periocular Recognition


TRANSCRIPT

1. IRIS RECOGNITION
   Shashank Dhariwal, Aisha Jabeen

2. Scheme
   - Iris Recognition (Fig. 1: Iris Region)
   - Periocular Recognition (Fig. 2: Periocular Region)

3. Final Detection
   - Iris recognition output image and periocular recognition output image, compared by sum of difference
   - Sum of difference = 0: images matched; sum of difference > 0: images not matched

4. Iris Recognition
   - Pipeline: Segmentation, Normalization, Feature Extraction, Template Matching
   - Fig. 3: Segmented Image [1]; Fig. 4: Normalized Image [1]
   - (Minimal MATLAB sketches of the four stages are given after slide 13.)

5. Segmentation: Localization
   - Circular Hough Transform, with Canny edge detection to generate the edge map
   - Gradients biased in the vertical direction for the outer iris/sclera boundary
   - Vertical and horizontal gradients weighted equally for the inner iris/pupil boundary
   - Six parameters stored at the end (centre coordinates and radius for each of the two circles)
   - Fig. 5: Localised Image

6. Segmentation: Eyelid/Eyelash Detection
   - Linear Hough Transform used for fitting a line; a second, horizontal line is then drawn
   - Canny edge detection for the edge map, taking only horizontal gradient information
   - Simple thresholding for isolating eyelashes
   - Fig. 6: Eyelid/Eyelash occlusion

7. Normalization
   - Counters dimensional inconsistencies; produces iris regions with constant dimensions
   - Remaps each point within the iris region to a pair of polar coordinates
   - Fig. 7: Daugman's rubber sheet model [2]

8. Normalization
   - Fixed radial and angular resolution, with the pupil allowed to be non-concentric with the iris
   - Normalized pattern created by backtracking to Cartesian data points
   - 2D arrays for the polar coordinates, and for marking reflections, eyelashes and eyelids
   - Data points occurring along the pupil border are discarded
   - Fig. 8: Result of Normalization

9. Feature Extraction
   - Represent the iris texture as a binary vector: the 2048-bit iris code
   - Fig. 9: Iris code and textured region [2]

10. Feature Extraction
    - 8 bands x 128 textures (1024 positions at two bits each give the 2048-bit code)
    - Fig. 10: Textures [2]

11. Feature Encoding (figure-only slide)

12. Feature Encoding (figure-only slide)

13. Matching
    - Bit-wise comparison, measured by Hamming distance
    - K-nearest-neighbour classification
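The localization of slide 5 can be illustrated in MATLAB. This is a minimal, brute-force sketch of a circular Hough transform over a Canny edge map, not Masek's actual implementation [1]; the file name and radius range are assumptions, and the vertical gradient biasing is omitted for brevity.

    % Minimal sketch: locate a circular boundary via a Canny edge map and a
    % brute-force circular Hough accumulator. Slow but clear; Masek [1]
    % additionally biases gradients vertically for the iris/sclera boundary.
    I = im2double(imread('eye.bmp'));        % hypothetical grayscale eye image
    E = edge(I, 'canny');                    % edge map (Image Processing Toolbox)
    [rows, cols] = find(E);                  % edge pixel coordinates
    rmin = 28; rmax = 75;                    % assumed iris radius range (pixels)
    H = zeros(size(I, 1), size(I, 2), rmax - rmin + 1);
    theta = 0:pi/32:2*pi;                    % coarse angular sampling
    for k = 1:numel(rows)                    % each edge pixel votes for the
        for r = rmin:rmax                    % centres of circles through it
            xc = round(cols(k) - r*cos(theta));
            yc = round(rows(k) - r*sin(theta));
            ok = xc >= 1 & xc <= size(I, 2) & yc >= 1 & yc <= size(I, 1);
            idx = sub2ind(size(H), yc(ok), xc(ok), repmat(r - rmin + 1, 1, sum(ok)));
            H(idx) = H(idx) + 1;
        end
    end
    [~, m] = max(H(:));                      % strongest circle wins
    [cy, cx, ri] = ind2sub(size(H), m);
    irisCircle = [cx, cy, rmin + ri - 1];    % 3 of the 6 stored parameters

Repeating the same search inside the detected iris circle with a smaller radius range would localise the pupil, giving the six stored parameters.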
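The rubber sheet model of slides 7-8 can be sketched as a polar resampling. The sketch below assumes concentric pupil and iris circles and made-up centre/radius values; the real model interpolates between the non-concentric pupil and iris centres.

    % Minimal rubber-sheet sketch: sample the iris annulus into a fixed
    % radialRes x angularRes rectangle. All numeric values are assumptions.
    I = im2double(imread('eye.bmp'));        % hypothetical segmented eye image
    cx = 160; cy = 140;                      % assumed common circle centre
    rp = 30;  ri = 75;                       % assumed pupil and iris radii
    radialRes = 20; angularRes = 240;        % assumed output resolution
    theta = linspace(0, 2*pi, angularRes);   % angular samples
    rho = linspace(0, 1, radialRes)';        % 0 = pupil border, 1 = iris border
    r = rp + rho * (ri - rp);                % sampling radii, radialRes x 1
    X = cx + r * cos(theta);                 % Cartesian sampling grids
    Y = cy + r * sin(theta);
    polarIris = interp2(I, X, Y, 'linear');  % the normalized pattern
    noiseMask = isnan(polarIris);            % points falling outside the image
    polarIris(noiseMask) = 0;                % masked out, as on slide 8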
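For the encoding of slides 9-12, Masek's implementation [1], which this presentation builds on, filters each row of the normalized pattern with a 1D Log-Gabor filter and quantises the phase of the response into two bits per sample. A compact sketch, continuing from the normalization sketch above; the filter wavelength and bandwidth are assumed values.

    % Iris-code sketch: 1D Log-Gabor filtering of each row of polarIris,
    % phase quantised to 2 bits per sample (after [1]). f0 and sigmaOnF
    % are assumed; noise-mask bits are omitted for brevity.
    n = size(polarIris, 2);
    f = (0:n-1) / n;                          % discrete frequencies
    f0 = 1/18; sigmaOnF = 0.5;                % assumed wavelength and bandwidth
    G = exp(-(log(f/f0)).^2 / (2*log(sigmaOnF)^2));
    G(1) = 0;                                 % zero the DC component
    code = false(size(polarIris, 1), 2*n);
    for row = 1:size(polarIris, 1)
        spec = fft(polarIris(row, :)) .* G;   % filter in the frequency domain
        resp = ifft(spec);                    % complex filter response
        code(row, 1:2:end) = real(resp) > 0;  % bit 1: sign of the real part
        code(row, 2:2:end) = imag(resp) > 0;  % bit 2: sign of the imaginary part
    end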
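Matching (slide 13) then reduces to a fractional Hamming distance: the share of disagreeing bits among those not flagged as noise in either code. A minimal sketch; the variable names are assumptions.

    function hd = hammingDist(codeA, maskA, codeB, maskB)
    % Fractional Hamming distance between two iris codes, ignoring any bit
    % flagged as noise (eyelids, eyelashes, reflections) in either code.
    valid = ~maskA & ~maskB;                 % bits usable in both codes
    nValid = max(sum(valid(:)), 1);          % avoid division by zero
    hd = sum(xor(codeA(valid), codeB(valid))) / nValid;
    end

Identification assigns the query the identity of its nearest gallery code(s), the k-nearest-neighbour step of slide 13; a distance of 0 means the codes agree exactly.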
14. PERIOCULAR BIOMETRICS

15. Definition
    - The process of identifying a person based on the study of the area around the eye, namely the edges of the eye, the eyebrows, the eyelashes and the skin [3].
    - The region of interest defines the method to be used for feature extraction and matching; methods are broadly classified as Global Matchers (using information about colour, texture and shape) or Local Matchers (using the information contained in key points).
    - Fig. 1: Area of interest [3]; Fig. 2: Key points obtained using SIFT [3]

16. Why Periocular?
    - Iris: a moving object located in another moving object (the eyeball), which sits in a moving head, which in turn is attached to a moving body (a lot of movement!); small surface area (difficult to capture!); typically imaged in NIR (appropriate illumination required!); requires subject cooperation (as if thugs would cooperate!); occlusion by eyelids and loose hair degrades the results.
    - Retina: typically requires a coherent light source for illumination, and again subject cooperation is required.
    - Face: there is a trade-off between iris-based and face-based recognition; iris requires the subject to be close to the camera, so the facial information is missed, while face requires the subject at some distance from the camera, so the resolution is too low for the iris.
    - Periocular: can use both colour and NIR images, distance to the camera is not a problem and, best of all, no subject cooperation is required.

17. Periocular Success
    - Periocular recognition was introduced by Park in 2009 [3], using colour images from an off-the-shelf Canon camera: accuracy 77% (local and global combined) over 958 images.
    - In 2010, Woodard [4] combined iris with periocular on a 520-image NIR database, using Local Binary Patterns as the global matching method: accuracy 96.5% combined, 13.8% for iris alone and 92.5% for periocular alone.
    - Present project: NIR images (three databases of 40, 36 and 77 images), with SIFT [5] as the local matching technique: accuracy 100%!

18. Extraction and Matching Methods
    - Global feature extraction and matching: gradient orientation (GO) histograms and Local Binary Patterns (LBP), with Euclidean distance used to calculate the match. Fig. 3: Global descriptor construction [3].
    - Local feature extraction and matching: SIFT and SURF, with distance-ratio-based matching on squared Euclidean distance. Fig. 4: Local descriptor construction [3].

19. Implementation: Datasets and SIFT Parameters
    - DB1: 40 NIR images from CASIA V3_2 Iris Twins
    - DB2: 36 NIR images from CASIA V3_2 Iris Lamp
    - DB3: 77 NIR images from CASIA V3_2 Iris Interval
    - Detection (each key point: centre coordinates, size/scale, orientation/theta):
      - Octaves: set dynamically from the image size as log2(min(width, length)), i.e. inversely proportional to image resolution
      - Scales (smoothing levels): 3
      - Peak threshold: 0 (a higher value eliminates more key points)
      - Edge threshold: 10 (a lower value eliminates more key points)
    - Description:
      - Magnification factor: 3 (the descriptor size is the key point scale multiplied by this factor)
      - Gaussian window size: 1.5 x key point scale (smaller values let the centre of the descriptor count more)
    - Matching:
      - Threshold: 1.5, measured by the L2 norm of the minimum difference between two descriptors (squared Euclidean distance ratio)

20. Implementation: System Details and Tests
    - Processor: Intel(R) Xeon(R) @ 2.67 GHz, 64-bit; RAM: 12.0 GB; MATLAB R2011a
    - Test 1: query image from the same dataset
    - Test 2: query image from the other dataset
    - Test 3: effect on identification when noise is added to the query image
    - Test 4: effect on identification when blur is added to the query image
    - Test 5: effect on identification when the query image is rotated
    - Test 6: effect on identification when the query image is scaled
    - (Sketches of the SIFT matching step and of the Test 3-6 degradations follow below.)
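The parameters of slide 19 map directly onto the VLFeat MATLAB toolbox [5]. A sketch of detection and distance-ratio matching under those settings, assuming VLFeat is on the MATLAB path; the file names are hypothetical.

    % SIFT detection and distance-ratio matching via VLFeat [5], using the
    % parameter values from slide 19. The CASIA images are grayscale NIR,
    % and vl_sift expects a single-precision, single-channel image.
    Iq = im2single(imread('query.bmp'));
    Ig = im2single(imread('gallery.bmp'));
    octaves = floor(log2(min(size(Iq, 1), size(Iq, 2))));  % from image size
    opts = {'Octaves', octaves, 'Levels', 3, 'PeakThresh', 0, ...
            'EdgeThresh', 10, 'Magnif', 3, 'WindowSize', 1.5};
    [fq, dq] = vl_sift(Iq, opts{:});   % frames (x, y, scale, theta) + descriptors
    [fg, dg] = vl_sift(Ig, opts{:});
    % Keep a pair only when the nearest gallery descriptor is at least 1.5x
    % closer (in squared Euclidean distance) than the second nearest.
    matches = vl_ubcmatch(dq, dg, 1.5);
    nMatches = size(matches, 2);       % the "No of Matches" the tests report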
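Tests 3-6 degrade the query image before matching. A sketch of the four degradations using the parameter values reported on the results slides (Image Processing Toolbox; the file name is hypothetical).

    % Query degradations for Tests 3-6; each degraded image would then be
    % matched against the database exactly as above.
    Iq = imread('query.bmp');                      % hypothetical query image
    noisySP = imnoise(Iq, 'salt & pepper', 0.20);  % Test 3(a): 20% density
    noisyG  = imnoise(Iq, 'gaussian', 0, 0.05);    % Test 3(b): mean 0, var 0.05
    psf     = fspecial('motion', 20, 35);          % Test 4: linear 20, theta 35
    blurred = imfilter(Iq, psf, 'replicate');      % motion-blurred query
    rotated = imrotate(Iq, 80);                    % Test 5: rotated by 80 deg
    scaled  = imresize(Iq, 0.5);                   % Test 6: scale ratio 0.5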
21. Results: Test 1 (a)
    - Database DB1, query images from DB1, 40 query images: accuracy 100%
    - [Chart: number of matches found and time taken to find the match (s) vs. query image number]

22. Results: Test 1 (b)
    - Database DB2, query images from DB2, 36 query images: accuracy 100%
    - [Chart: number of matches found and time taken to find the match (s) vs. query image number]

23. Results: Test 1 (c)
    - Database DB3, query images from DB3, 77 query images: accuracy 100%
    - [Chart: number of matches found and time taken to find the match (s) vs. query image number]

24. Results: Test 2
    - Test 2 (a): database DB1, query images from DB2, 36 query images: accuracy 100%
    - Test 2 (b): database DB2, query images from DB1, 40 query images: accuracy 100%

25. Results: Test 3 (a), Salt & Pepper Noise
    - 20% salt & pepper noise added to the query image
    - Console output: query image 27; 25 matches; match found, image 27; elapsed time 1.404492 s
    - [Figures: query image and matched image]

26. Results: Test 3 (b), Gaussian Noise
    - Mean = 0, variance = 0.05
    - Console output: query image 10; 30 matches; match found, image 10; elapsed time 1.357519 s
    - [Figures: query image and matched image]

27. Results: Test 4, Blurring
    - Motion blur, linear extent = 20, theta = 35
    - Console output: query image 15; 137 matches; match found, image 15; elapsed time 3.862147 s
    - [Figures: query image and matched image]

28. Results: Test 5, Rotation
    - Rotation by 80 degrees
    - Console output: query image 20; 1034 matches; match found, image 20; elapsed time 4.084018 s
    - [Figures: query image and matched image]

29. Results: Test 6, Scaling
    - Scaling ratio = 0.5
    - Console output: query image 5; 127 matches; match found, image 5; elapsed time 0.646144 s
    - [Figures: query image and matched image]

30. Analysis
    - With databases of this size, the method achieved 100% accuracy even under noise, blur and geometric transformations (however, it would be prudent to test the method on a larger database).
    - The time taken to match is proportional to the number of key points selected, and hence to the number of matches.
    - The time taken is also proportional to the image size: in Test 1 (c) the image size is 280 x 320 against 480 x 640 in Test 1 (a & b), and the matching time is about a quarter of that in Test 1 (a & b).

31. References
    [1] L. Masek, "Recognition of Human Iris Patterns for Biometric Identification", Bachelor's thesis, The University of Western Australia, 2003.
    [2] J. Daugman, "How iris recognition works", IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004.
    [3] U. Park, A. Ross, and A. K. Jain, "Periocular biometrics in the visible spectrum: a feasibility study", in Proceedings of the 3rd IEEE International Conference on Biometrics: Theory, Applications and Systems, 2009, pp. 153-158.
    [4] D. Woodard, S. Pundlik, and P. Miller, "On the Fusion of Periocular and Iris Biometrics in Non-Ideal Imagery", in International Conference on Pattern Recognition, 2010.
    [5] A. Vedaldi and B. Fulkerson, VLFeat, http://www.vlfeat.org

32. DEMO