

[IEEE 2011 First International Conference on Informatics and Computational Intelligence (ICI) - Bandung, Indonesia, 12-14 December 2011]

A Novel Palmprint Segmentation Technique

M. O. Rotinwa-Akinbile
Department of Mechatronics Engineering,
International Islamic University Malaysia,
P.O. Box 10, 50728, Malaysia
[email protected]

A. M. Aibinu and M. J. E. Salami
Department of Mechatronics Engineering,
International Islamic University Malaysia,
P.O. Box 10, 50728, Malaysia
[email protected]

Abstract: The recent paradigm shift from conventional contact-based palmprint recognition to contactless-based systems (CBS) has necessitated the development of a variety of these systems. A major challenge of these systems is their robustness to illumination variation in unconstrained environments, which makes segmentation difficult. In this paper, the acquired image undergoes colour space conversion, and the output is filtered using coefficients obtained from the training of an artificial neural network (ANN) based model coefficient determination technique. Performance analysis of the proposed technique shows better performance in terms of mean square error, true positive rate and accuracy when compared with two other techniques. Furthermore, it has been observed that the proposed method is illumination invariant, hence its suitability for deployment in contactless palmprint recognition systems.

Keywords: Biometrics, Hand, Illumination, Segmentation.

I. INTRODUCTION

Palmprint recognition (PPR), like other biometric recognition technologies, is a versatile technology explored for personal identification through the automated use of human physiological and/or behavioural characteristics. Over the years, various characteristics, including but not limited to the face, iris, retina, dynamic signature, fingerprint and palmprint, have been investigated as recognition solutions deployable in institutions where system and asset security are of major concern. Equally, hand-based biometrics such as fingerprint, hand geometry, vein pattern and palmprint have found real-time commercial deployment as part of access control systems [1]. Fingerprint recognition systems must contend with issues of spoofing and high implementation cost. Similarly, hand geometry is not a sufficiently discriminative feature for identification in a large database. Palmprint, on the other hand, is a biometric feature that has been effectively deployed in the real world for access control and identification (ACI) due to its robustness, accuracy, user friendliness and, most importantly, its cost effectiveness compared with other modalities [2].

A typical pattern recognition flow is depicted in Fig. 1. Image data acquisition can be achieved through the use of sensors, scanners or cameras, depending on the biometric characteristic of interest. Palmprint images acquired through scanners [3] necessitate contact between the user's hand and the device. This conventional approach has gradually been substituted with digital cameras, towards achieving a contactless biometric system (CBS) [2]. The choice of CBS is reinforced by the desire to protect users against infections and to curtail the spread of contagious diseases, which is essentially a public health concern.

Camera-based acquisition in an unconstrained environment usually yields low-quality images. Hence, segmentation of the hand from the background can be quite tedious owing to the influence of noise, which consequently affects feature extraction if not properly resolved. Most research assumes controlled lighting conditions [2] and a specific background type; very few works incorporate algorithms suitable for imaging against complex backgrounds.

This paper presents a new skin segmentation technique suitable for images with complex backgrounds; hence it is limited to the first two blocks in Fig. 1. A concise review of related work is presented in Section II. The proposed methodology and experimental results are discussed in Section III and Section IV respectively. Section V gives the concluding remarks.

Fig. 1: Flow diagram of typical pattern recognition (blocks: Image Acquisition, Image Preprocessing, Feature Extraction, Storing & Matching, Decision level)

II. LITERATURE REVIEW

2011 First International Conference on Informatics and Computational Intelligence, 978-0-7695-4618-6/11 $26.00 © 2011 IEEE, DOI 10.1109/ICI.2011.45, p. 235


Palm detection is a critical step preceding feature extraction in PPR systems. Successful detection of the palm region usually depends on acquisition parameters such as the capturing device, the colour space of the captured image and the background information, among others. In controlled acquisition environments, such as scanner systems [3] or camera-based systems with a specified background, skin detection is less tedious; simple thresholding is sufficient to isolate the palm from the background. Otsu's algorithm is a simple and effective method commonly used in such cases. It computes a global threshold for each image based on the variance of the two classes (i.e. background and foreground). Wong et al. [4] and Poinsot et al. [5] segmented RGB hand images from the background using the generalized Otsu's method on the red channel of the acquired image; the red channel was presumed to contain more information about the hand than the background, since there was relatively high contrast between the two during acquisition. Qichuan et al. [6] used the same technique but considered all three colour channels. Poon et al. [7] employed an adaptive thresholding technique based on the statistical information of the background, which is of uniformly low intensity compared with the hand image.
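As an illustration of the thresholding approach used in [4, 5], Otsu's criterion (maximizing the between-class variance of background and foreground) can be sketched in Python. This is an illustrative sketch, not code from those papers, and the random image below is only a stand-in for a captured red channel:

```python
import numpy as np

def otsu_threshold(channel):
    """Compute Otsu's global threshold for a single 8-bit channel by
    maximizing the between-class variance over all candidate thresholds."""
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()      # grey-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))       # cumulative mean up to level t
    mu_total = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan               # guard: one class empty
    sigma_b2 = (mu_total * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

# Example: threshold the red channel of an RGB image, as in [4, 5].
rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
t = otsu_threshold(rgb[:, :, 0])
mask = rgb[:, :, 0] > t   # foreground, assuming a high-contrast setup
```

In a high-contrast, uniform-background setup this single global threshold is usually sufficient; the limitation discussed next is that it breaks down under varying illumination.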

A generic characteristic of the systems proposed in [4, 5, 6, 7] that makes thresholding worthwhile is the high contrast between the foreground and background of the image. On the contrary, such an algorithm is not robust to illumination variation, because the RGB colour space is sensitive to luminance [8]; it thus cannot be used directly for images acquired against a complex background. Transformations of RGB to other colour spaces (otherwise referred to as skin modelling techniques), such as Hue-Saturation-Value (HSV), normalized RGB (rgb), YCbCr and CIELAB, have been proposed by various researchers to increase the separability between skin and non-skin pixels. Choras et al. [9] proposed an RGB rule-based modelling technique stated in (1):

R > 95 & G > 40 & B > 20 & (max(R, G, B) - min(R, G, B)) > 15 & |R - G| > 15 & R > G & R > B    (1)

where R, G and B are the intensity values of the red, green and blue channels respectively, and '&' is the logical AND operator. Implementation of this approach gave satisfactory performance, but it is only effective for images acquired with controlled illumination and a uniform background.
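Rule (1) maps directly onto a vectorized pixel-wise test. A minimal Python sketch (illustrative only, not the implementation from [9]):

```python
import numpy as np

def skin_mask_rgb(img):
    """Pixel-wise skin mask from the RGB rule in (1)."""
    R = img[..., 0].astype(int)
    G = img[..., 1].astype(int)
    B = img[..., 2].astype(int)
    mx = np.maximum(np.maximum(R, G), B)
    mn = np.minimum(np.minimum(R, G), B)
    return ((R > 95) & (G > 40) & (B > 20) &
            ((mx - mn) > 15) & (np.abs(R - G) > 15) &
            (R > G) & (R > B))
```

For example, a reddish pixel such as (200, 120, 90) satisfies every clause of (1), while a grey pixel such as (80, 80, 80) fails the very first one.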

Doublet et al. [10] deployed a three-input-node ANN architecture to model the RGB domain for skin colour segmentation, together with principal component analysis (PCA) to define the skin space from which a probabilistic map is built. Finally, skin pixels were segmented from the input image by applying a fixed threshold to the map. Although this methodology was quite effective for images embedded in a complex background, the use of a fixed threshold is presumably inappropriate for a large database with high inter-class skin variation, as the threshold value might not be optimal for all images.

In [2], the YCbCr colour space, where Y, Cb and Cr refer to luminance, chromatic blue and chromatic red respectively, was adopted for skin segmentation. Skin pixels were modelled using a Gaussian distribution of the chromatic red (Cr) component, followed by a generalized thresholding technique. Feng et al. [11] proposed a CbCr histogram-based probability classifier for skin detection in a real-time palmprint recognition system. Similarly, the chromatic component (CbCr) histogram statistics of an illumination-compensated image were used in [8] for skin detection. Contrary to the algorithms in [2, 8, 11], where the luminance component was disregarded, Cheddad et al. [12] proposed an algorithm that considers luminance. Image luminance was defined in two ways: the first involves a linear transformation of the R, G and B channels, defined by (2), while the other, (3), disregards the red component by considering only the green and blue channels. The skin was finally detected by determining two extreme boundaries as the threshold.

I(x) = r·R(x) + g·G(x) + b·B(x)    (2)

I'(x) = max(G(x), B(x))    (3)

where r, g and b are the luminance weighting coefficients.

In this paper, the YCbCr colour space is adopted because of its close relationship to the Tint-Saturation-Luminance (TSL) colour space, which gave the peak performance in a comparative study of different colour spaces for face detection [13]. This colour space is also known for its ability to separate luminance from chrominance. Besides, transformation from RGB to HSV or CIELAB is computationally expensive due to the processing time involved. The proposed methodology is discussed in the next section.

III. PROPOSED ALGORITHM

In this paper, hand images are captured in an unconstrained environment. The proposed algorithm is executed in two phases, training and testing, as illustrated in Fig. 2.

Training phase: Acquired images in the default RGB colour space are first converted to the YCbCr colour space using the standard conversion equations:

Y = 0.299R + 0.587G + 0.114B
Cb = 0.56(B - Y)
Cr = 0.71(R - Y)    (4)
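The conversion in (4) vectorizes directly. A minimal Python sketch (illustrative only; the authors worked in MATLAB):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """RGB -> YCbCr channel decomposition using the equations in (4)."""
    R = img[..., 0].astype(float)
    G = img[..., 1].astype(float)
    B = img[..., 2].astype(float)
    Y  = 0.299 * R + 0.587 * G + 0.114 * B   # luminance
    Cb = 0.56 * (B - Y)                      # chromatic blue
    Cr = 0.71 * (R - Y)                      # chromatic red
    return Y, Cb, Cr
```

Note that for a grey pixel (R = G = B) the luminance weights sum to one, so Y equals the grey level and both chromatic components vanish, which is exactly the separation of luminance from chrominance that motivates this colour space.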

Contrary to the methodology proposed in [11], the luminance component of the image is discarded in this paper. The Cb and Cr components of both the skin and non-skin regions are extracted, because the skin information is contained in these two components [12]. For accurate determination of the skin pixels in the two regions, an artificial neural network (ANN) is employed.



Fig. 2: Flow diagram of the proposed technique

An ANN is a generic mathematical computing scheme that emulates the operation of biological neurons. A basic ANN architecture consists of input nodes, hidden layers and an output layer. The back propagation (BP) algorithm was used to train the network; the choice of BP is based on its versatility and ease of implementation. The number of neurons in the hidden layer is set to 5 and the number of neurons in the output layer is 1. Upon convergence, the weights and coefficients of the ANN were extracted as proposed in [13].
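As an illustration of this training phase, a 2-input, 5-hidden-neuron, 1-output back propagation network can be sketched as follows. This is a hedged Python sketch rather than the authors' MATLAB implementation, and the (Cb, Cr) samples below are synthetic stand-ins for cropped skin and non-skin data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: two Gaussian clusters of (Cb, Cr) pairs
# playing the roles of skin (target 1) and non-skin (target 0) samples.
X = np.vstack([rng.normal([ 0.3,  0.4], 0.05, (50, 2)),
               rng.normal([-0.3, -0.2], 0.05, (50, 2))])
t = np.vstack([np.ones((50, 1)), np.zeros((50, 1))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 input nodes -> 5 hidden neurons -> 1 output node, as in the paper.
W1 = rng.normal(0.0, 0.5, (2, 5)); b1 = np.zeros(5)
W2 = rng.normal(0.0, 0.5, (5, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):                    # plain batch back propagation
    h = sigmoid(X @ W1 + b1)             # hidden-layer activations
    y = sigmoid(h @ W2 + b2)             # network output
    d2 = (y - t) * y * (1.0 - y)         # output delta (squared-error loss)
    d1 = (d2 @ W2.T) * h * (1.0 - h)     # hidden delta
    W2 -= lr * (h.T @ d2) / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * (X.T @ d1) / len(X); b1 -= lr * d1.mean(axis=0)

# Upon convergence the weights/biases are retained as the model coefficients.
acc = float(((y > 0.5) == (t > 0.5)).mean())
```

On the toy clusters above the network separates the two classes easily; the paper's actual coefficient extraction follows the technique it cites, not this sketch.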

In general, the training phase is an experimental procedure for determining accurate model coefficients that can be used to extract the skin region from the acquired image during the testing phase, via the convolution (filtering) operation expressed mathematically in (5):

I(x, y) = Σ_{i=-K/2}^{K/2} Σ_{j=-K/2}^{K/2} h(i, j) · f(x - i, y - j)    (5)

where I(x, y), h(i, j) and f(x, y) are the filtered image, the predetermined filter coefficients obtained using the technique proposed in [13], and the transformed YCbCr image respectively. The extracted coefficients thus serve as the finite impulse response (FIR) filter coefficients used in (5) for separating skin from non-skin in the acquired image.

Testing phase: The function of the first two blocks is similar to that in the training phase. Having converted the acquired image from the original RGB to the YCbCr colour space, the Cb and Cr channels are extracted and convolved with the FIR filter coefficients h(i, j). The output image I(x, y) is then transformed into a binary image using the K-means algorithm; that is, the filtered image is clustered into two classes (background and foreground) using the standard K-means algorithm. The pseudo code of the proposed technique is presented in Table 1 and Table 2.
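The testing phase (FIR filtering as in (5), followed by two-class K-means binarization) can be sketched as follows. This is an illustrative Python version, not the authors' code, and it assumes an odd, square coefficient matrix h:

```python
import numpy as np

def fir_filter(f, h):
    """2-D FIR filtering as in (5): I(x,y) = sum_i sum_j h(i,j) f(x-i, y-j).
    Assumes an odd, square coefficient matrix h; borders are zero-padded."""
    K = h.shape[0] // 2
    fp = np.pad(f, K)
    out = np.zeros_like(f, dtype=float)
    rows, cols = f.shape
    for i in range(-K, K + 1):
        for j in range(-K, K + 1):
            # fp[x + K - i, y + K - j] is f(x - i, y - j)
            out += h[i + K, j + K] * fp[K - i:K - i + rows, K - j:K - j + cols]
    return out

def kmeans_2class(values, iters=20):
    """Standard K-means with K = 2 on scalar pixel values; returns 0/1 labels."""
    c = np.array([values.min(), values.max()], dtype=float)  # initial centroids
    labels = np.zeros(values.shape, dtype=int)
    for _ in range(iters):
        labels = (np.abs(values - c[1]) < np.abs(values - c[0])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    return labels

def segment(f, h):
    """Testing phase: filter the transformed channel with h(i,j), then
    cluster the output into background/foreground with K-means."""
    I = fir_filter(f, h)
    return kmeans_2class(I.ravel()).reshape(I.shape)
```

With an identity kernel (a single 1 at the centre of h) the filter passes f through unchanged, which gives a quick sanity check on the index arithmetic.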

Table 1: Pseudo code for the training phase

Step 1: Input the acquired RGB image.
Step 2: Convert the image from RGB to YCbCr colour space.
Step 3: Crop a portion of the skin and non-skin pixels, decompose the components and select the Cb and Cr data.
Step 4: Concatenate the Cb and Cr data of the skin and non-skin pixels and feed them into a 2-input neural network architecture to obtain the model coefficients.

Table 2: Pseudo code for the testing phase

Step 1: Input the acquired RGB image.
Step 2: Convert the image from RGB to YCbCr colour space.
Step 3: Convolve the extracted coefficients with the transformed YCbCr image.
Step 4: Cluster the output of Step 3 into two classes (foreground and background) using the K-means algorithm.

IV. EXPERIMENTAL RESULTS

Images of different subjects were captured in an unconstrained environment and used to test the proposed algorithm. Only one image is needed in the training phase to obtain the generalized skin model coefficients h(i,j). To ensure an objective evaluation, the ground truth of each image was extracted manually. Alongside the implementation of the proposed algorithm, the algorithms reported in [9] and [12] were also executed to appraise the efficiency of this technique; the MATLAB code for [12] was obtained from [15]. The performance of the three techniques was evaluated using:

Accuracy = (TP + TN) / (P + N)    (6)

True Positive rate = TP / Total positive    (7)

Precision = TP / (TP + FP)    (8)

MSE = (1/MN) Σ_{x=1}^{M} Σ_{y=1}^{N} (A(x, y) - B(x, y))²    (9)

Correlation = [ Σ_{i=1}^{n} (A(x, y) - μ_A)(B(x, y) - μ_B) ] / √( Σ_{i=1}^{n} (A(x, y) - μ_A)² · Σ_{i=1}^{n} (B(x, y) - μ_B)² )    (10)

where TP, TN and FP are the true positive, true negative and false positive counts respectively, and P and N are the total numbers of positive and negative pixels. In (9) and (10), A(x, y) denotes a pixel in the original image and B(x, y) the equivalent pixel in the ground truth; M and N denote the dimensions of the image, while μ and n are the mean and vector length respectively.
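The measures (6)-(9) can be computed directly from a predicted binary mask and its ground truth. A Python sketch (illustrative; the paper's experiments were run in MATLAB):

```python
import numpy as np

def evaluate(pred, truth):
    """Accuracy (6), true positive rate (7), precision (8) and MSE (9)
    for a binary segmentation mask `pred` against ground truth `truth`."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    TP = int(np.sum(pred & truth))       # skin called skin
    TN = int(np.sum(~pred & ~truth))     # background called background
    FP = int(np.sum(pred & ~truth))      # background called skin
    P = int(np.sum(truth))               # total positive (skin) pixels
    N = int(np.sum(~truth))              # total negative pixels
    accuracy = (TP + TN) / (P + N)
    tpr = TP / P
    precision = TP / (TP + FP)
    mse = float(np.mean((pred.astype(float) - truth.astype(float)) ** 2))
    return accuracy, tpr, precision, mse
```

Note that for binary masks the per-pixel squared error in (9) reduces to the fraction of misclassified pixels, i.e. MSE = 1 - Accuracy.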



The performance results obtained from four different sample images are shown in Table 3, and the segmented hand images of different subjects are shown in Fig. 3 - Fig. 6. In Table 3, M[9] and M[12] refer to the algorithms reported in [9] and [12] respectively. The results show that the proposed technique gives the best accuracy with the highest true positive rate. However, the precision obtained for M[9] is better than that of the proposed model; this is most likely attributable to the segmentation stage. Similarly, the mean square error (MSE) of the proposed model shows better performance when compared with the other techniques. It is envisioned that subsequent enhancement of the proposed algorithm will further improve its precision.

V. CONCLUSION

In this paper, a new method of hand segmentation in a contactless environment has been proposed. Four images of different individuals were captured under varying illumination conditions and backgrounds. These images were segmented using the proposed algorithm as well as two algorithms from the literature, and benchmarked against the ground truth of each image. The influence of image colour space with a non-uniform background has been validated. The results show that the proposed algorithm yielded the best performance compared with the other techniques.

ACKNOWLEDGEMENT

This research is fully sponsored by International Islamic University Malaysia (IIUM) under the Research Endowment Grant Type B, EDW B11-012-0490.

REFERENCES

[1] Biometric Technology Application Manual (2008). National Biometric Security Project.

[2] G. K. O. Michael, T. Connie, A. B. J. Teoh (2008). Touch-less palm print biometrics: Novel design and implementation. Journal of Image and Vision Computing, 26 (2008). pp 1551-1560.

[3] A. Kong, D. Zhang, M. Kamel (2009). A survey on palmprint recognition. Journal of Pattern Recognition, 42 (2009). pp 1408-1418.

[4] E. Wong, G. Sainarayana, A. Chekima (2007). Palmprint based biometric system: A comparative study on discrete cosine transform energy, wavelet transform energy and Sobel code methods. IJBSCHS, 14(1). pp 11-19.

[5] A. Poinsot, F. Yang, M. Paindavoine (2009). Small sample biometric recognition based on palmprint and face fusion. International Multi-Conference on Computing in the Global Information Technology, IEEE. pp 118-122.

[6] T. Qichuan, L. Ziliang, Z. Yanchun (2010). A novel palmprint segmentation and recognition algorithm. IEEE Computer Society. pp 273-276.

[7] C. Poon, D. Wong, H. Shen (2004). A new method in locating and segmenting palmprint into region of interest. International Conference on Pattern Recognition, IEEE, Vol. 4. pp 533-536.

[8] J. Yun, H. Lee, A. K. Paul, J. Baek (2007). Robust face detection for video summary using illumination-compensation and morphological processing. International Conference on Natural Computing. pp 710-714.

[9] M. Choras, R. Kozik, A. Zelek (2008). A novel shape-texture approach to palmprint detection and identification. IEEE Computer Society. pp 638-643.

[10] J. Doublet, O. Lepetit, M. Revenu (2007). Contactless hand recognition based on distribution estimation. IEEE Biometrics Symposium. pp 1-6.

[11] Y. Feng, J. Li, L. Huang, C. Liu (2011). Real-time ROI acquisition for unsupervised and touch-less palmprint. World Academy of Science, Engineering and Technology (78). pp 823-827.

[12] A. Cheddad, J. Condell, K. Curran, P. Kevitt (2009). A new colour space for skin tone detection. ICIP, IEEE. pp 497-500.

[13] J. Terrillon, M. N. Shirazi, H. Fukamachi, S. Akamatsu (2000). Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images. Automatic Face and Gesture Recognition, IEEE. pp 54-61.

[14] M. J. E. Salami, A. M. Aibinu, S. K. Mohideen and S. A. Mansor (2011). Design of an intelligent robotic donation box: A case study. International Conference on Mechatronics (ICOM), IEEE. pp 1-6.

[15] http://www.mathworks.com/matlabcentral/fileexchange/authors/68043



Table 3: Performance analysis of the proposed algorithm

        |       Accuracy        |  True Positive rate   |       Precision       |          MSE          |      Correlation
 Image  | Model   M[9]   M[12]  | Model   M[9]   M[12]  | Model   M[9]   M[12]  | Model   M[9]   M[12]  | Model   M[9]   M[12]
   1    | 0.8669  0.4674 0.8257 | 0.8712  0.0990 0.7329 | 0.8998  0.9786 0.9627 | 0.1331  0.5326 0.1743 | 0.7273  0.1991 0.6844
   2    | 0.8377  0.5977 0.8309 | 0.9408  0.2645 0.7011 | 0.7980  0.9952 0.9848 | 0.1623  0.4023 0.1691 | 0.6803  0.3718 0.7028
   3    | 0.9851  0.5340 0.9779 | 0.9911  0.0988 0.9616 | 0.9803  0.9998 0.9956 | 0.0149  0.4660 0.0221 | 0.9703  0.2242 0.9564
   4    | 0.9854  0.6636 0.9835 | 0.9938  0.2845 0.9749 | 0.9764  0.9870 0.9908 | 0.0146  0.3464 0.0165 | 0.9710  0.4061 0.9671

Fig. 3: Segmentation of image 1
Fig. 4: Segmentation of image 2
Fig. 5: Segmentation of image 3
Fig. 6: Segmentation of image 4

Fig. 3 - Fig. 6: (a) Original image; results of hand/skin segmentation using (b) the proposed technique, (c) the method proposed in [12], (d) the method proposed in [9].
