Informatica 37 (2013) 429-433

A Novel Similarity Measurement for Iris Authentication

Mohamed Mostafa Abd Allah
Minia University, Faculty of Engineering, Egypt
Department of Electrical, Communications and Electronics
E-mail: mmustafa@yic.edu.sa

Keywords: iris authentication, genuine and impostor pairs, similarity measurement

Received: July 7, 2013

This paper introduces a novel similarity measurement which derives the likelihood ratio between two eyes. The proposed method takes the individual and system error rates of eye features into consideration. It handles two kinds of individual probabilities, the Consistent Probability (CP) and the Inconsistent Probability (IP), to achieve the best matching between two feature sets. While calculating the probabilities, we assume that a reasonable alignment has been obtained before matching begins. The proposed matching algorithm is theoretically proved to be optimal, and experimental results show that the proposed method separates genuine and impostor pairs more effectively.

Povzetek: A novel method for recognizing eye identity is presented.

1 Introduction

The iris is the colored part of the eye behind the cornea and in front of the lens. It is the only internal organ of the body that is normally externally visible, and its unique pattern is stable after the age of one. Compared with other biometric features such as the face and the fingerprint, iris patterns are more stable and reliable. Iris recognition systems are non-invasive to their users, but require a cooperative subject. For this reason, iris recognition is usually used for verification or identification purposes rather than for watch-list screening, that is, comparison against a large database to determine whether an individual belongs to a selected group, such as terrorists. Iris recognition is gaining acceptance as a robust biometric for high-security and large-scale applications [1][2].

Most classical algorithms verify a person's claimed identity by measuring the features of two irises [2] in two stages: alignment and matching. The alignment stage employs a special pattern matching approach to achieve the best alignment between two feature sets. The matching stage compares the feature sets under the estimated transformation parameters and returns a similarity score using a constructed similarity measurement. If the similarity score is larger than an acceptance threshold, the two irises are recognized as a genuine pair; otherwise the claimed identity is rejected. Associated with the similarity threshold are two error rates: the False Match Rate (FMR) and the False Non-match Rate (FNMR). FMR denotes the probability that the score of an impostor pair is larger than the threshold; FNMR denotes the probability that the score of a genuine pair is less than the threshold. The overall FMR and FNMR for a set of eyes are the integration or average of the FMR and FNMR over all individual eyes in the data set. Conventional methods construct the similarity measurement with simple decisions [3] or multi-decisions based on fusing the similarity scores of different features [5], using one unified threshold for all eyes to make the final decision. Their similarity thresholds are experimentally determined to ensure that the average error rates are lower than a required level, yet the individual error rates of some eyes can exceed this level even though the average error rates over all eyes are sufficient.
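To make the two error rates concrete, the following is a minimal sketch (not from the paper; the score distributions, the threshold, and the function name are hypothetical) of how FMR and FNMR are estimated empirically from sets of impostor and genuine similarity scores:

```python
import numpy as np

def error_rates(genuine_scores, impostor_scores, threshold):
    """Estimate FMR and FNMR for a given acceptance threshold.

    FMR:  fraction of impostor pairs whose score exceeds the threshold.
    FNMR: fraction of genuine pairs whose score falls below the threshold.
    """
    genuine_scores = np.asarray(genuine_scores)
    impostor_scores = np.asarray(impostor_scores)
    fmr = np.mean(impostor_scores > threshold)
    fnmr = np.mean(genuine_scores < threshold)
    return fmr, fnmr

# Hypothetical scores: genuine pairs tend to score higher than impostor pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
print(error_rates(genuine, impostor, threshold=0.6))
```

A single global threshold fixes one (FMR, FNMR) trade-off for every eye, which is precisely the limitation the paper addresses with per-pair likelihood ratios.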
The difficulty in constructing the similarity measurement is that the threshold which balances the trade-off between the overall FMR and FNMR may not be optimal for each individual eye, and thus not optimal for the overall FMR and FNMR of all eyes.

The rest of this paper is organized as follows. In Section 2, an iris alignment algorithm with respect to transformation parameters is presented. In Section 3, the iris matching algorithm presents the estimation of the Consistent Probability (CP) and the Inconsistent Probability (IP) under the assumptions of no and high correlation, respectively. Section 4 conducts several experiments to evaluate the proposed method. The conclusion is presented in Section 5.

Figure 1: Examples of iris images.

2 Alignment approach

Most previous matching methods suffer from high memory requirements and time-consuming, computationally exhaustive processes, because the distribution of matching scores is evaluated under every possible transformation. To overcome these problems and provide a fast, memory-efficient matching process, this paper assumes that a reasonable alignment is obtained before matching. The proposed method defines vector representations of the template iris features (T), the input iris features (I), and the transformed iris features (S') as follows: T = {t_1, t_2, ..., t_M}, I = {s_1, s_2, ..., s_N}, and S' = {s'_1, s'_2, ..., s'_N}. Let F_{Δx,Δy,Δθ}, formulated in Eq. (1), be the geometrical transformation function that maps s_j (input iris features) onto s'_j (transformed iris features):

\[
\begin{pmatrix} s'_x \\ s'_y \\ s'_\theta \end{pmatrix}
= \begin{pmatrix} \cos\Delta\theta & -\sin\Delta\theta & 0 \\ \sin\Delta\theta & \cos\Delta\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} s_x \\ s_y \\ s_\theta \end{pmatrix}
+ \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta\theta \end{pmatrix}
\tag{1}
\]

Figure 2: The distance representation between two iris feature vectors.

The Hough transform alignment approach [9] uses an accumulator array A(p, q, r) to count and collect alignment scores for each of the transformation parameters Δx, Δy, Δθ. In practice, each transformation parameter is discretized into a finite set of values: Δx ∈ {Δx_1, ..., Δx_P}, Δy ∈ {Δy_1, ..., Δy_Q} and Δθ ∈ {Δθ_1, ..., Δθ_R}. A direct implementation of a 3-D Hough transform alignment [8] is infeasible for embedded devices with a limited memory budget: with P = 256, Q = 256 and R = 128, such an implementation requires 8,388,608 memory units. Obviously, to overcome this problem and provide a memory-efficient process, a new alignment technique should be proposed.

Figure 3: The distribution of feature positions after reasonable alignment.

2.1 Proposed alignment approach

The proposed alignment approach with a multi-resolution accumulator array greatly reduces the amount of required memory. For each rotation value Δθ_r, there is exactly one shift vector (Δx_p, Δy_q) for each pair (t_i, s_j), as given by Eq. (2):

\[
\begin{pmatrix} \Delta x_p \\ \Delta y_q \end{pmatrix}
= \begin{pmatrix} t_x \\ t_y \end{pmatrix}
- \begin{pmatrix} \cos\Delta\theta_r & -\sin\Delta\theta_r \\ \sin\Delta\theta_r & \cos\Delta\theta_r \end{pmatrix}
\begin{pmatrix} s_x \\ s_y \end{pmatrix}
\tag{2}
\]

Therefore a 2-D accumulator array B with entries B(p, q) is enough to accumulate alignment evidence at rotation Δθ_r. For every possible rotation within the specified tolerance area S_0, the proposed approach accumulates evidence values into the array B, and the maximum alignment score represents the best geometrical transformation between I and T. Applying this computation method reduces the memory requirement to 4,096 memory units. The memory optimization not only reduces the memory requirement of the proposed approach but also speeds up alignment peak detection, since detecting the alignment peak in a smaller Hough space is faster than in the conventional method [4].
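The following is a minimal Python sketch of this multi-resolution voting idea, under stated assumptions: feature positions are given as (x, y) arrays, the binning scheme and parameter defaults are our own choices, and the function name is not from the paper.

```python
import numpy as np

def best_alignment(T, S, rotations, bins=64, xy_range=128):
    """Vote for the best (dx, dy, dtheta) with one small 2-D accumulator
    per rotation (Eq. 2) instead of a full 3-D Hough array.

    T, S      : (M, 2) and (N, 2) arrays of feature positions.
    rotations : candidate rotation angles in radians,
                e.g. np.deg2rad(np.arange(-30, 31, 2)).
    """
    best_score, best_params = 0, (0.0, 0.0, 0.0)
    scale = bins / (2.0 * xy_range)                 # shift -> bin index
    for dtheta in rotations:
        c, s = np.cos(dtheta), np.sin(dtheta)
        R = np.array([[c, -s], [s, c]])
        # Eq. (2): the shift vector for every pair (t_i, s_j) is t - R s.
        shifts = (T[:, None, :] - S[None, :, :] @ R.T).reshape(-1, 2)
        idx = ((shifts + xy_range) * scale).astype(int)
        ok = np.all((idx >= 0) & (idx < bins), axis=1)
        B = np.zeros((bins, bins), dtype=np.int32)  # 2-D accumulator B(p, q)
        np.add.at(B, (idx[ok, 0], idx[ok, 1]), 1)
        p, q = np.unravel_index(B.argmax(), B.shape)
        if B[p, q] > best_score:
            best_score = int(B[p, q])
            best_params = (p / scale - xy_range, q / scale - xy_range, dtheta)
    return best_score, best_params
```

With bins = 64 the accumulator holds 64 × 64 = 4,096 entries, consistent with the memory figure quoted above; the coarse bins play the role of the low-resolution level of the multi-resolution scheme.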
2.2 Two-vector similarity measure

Although several kinds of features can be extracted from an iris image [3][6][8][9], the proposed approach introduces a novel measurement of iris contour features. The feature representation in this proposal offers alternative matching criteria between two vectors, called the similarity measure (sM). The proposed matching criteria are derived by accumulating the spatial differences between the corresponding trace points of two vectors. As shown in Figure 2(a), the proposed iris features are approximately represented by piece-wise linear segments extracted along the iris contour [4]. The vector representation of an iris contour feature S is given as:

\[ S = (P_x,\, P_y,\, \theta_0,\, \phi_1,\, \phi_2,\, \phi_3) \tag{3} \]

where (P_x, P_y) is the feature position, θ_0 the contour orientation, and (φ_1, φ_2, φ_3) the orientation differences of two adjacent linear segments. As shown in Figure 3, let T = (P_tx, P_ty, θ_t0, φ_t1, φ_t2, φ_t3) and S = (P_sx, P_sy, θ_s0, φ_s1, φ_s2, φ_s3) represent the template and input iris vectors in the tolerance overlapped area O. A feature from T is considered mated with a corresponding feature from S if and only if their accumulated spatial difference (aD) is equal to or smaller than the tolerance threshold D_0 and the direction difference (dD) between them is smaller than an angular tolerance θ_0; the number of such mated pairs is denoted K. These tolerance thresholds (D_0 and θ_0) are necessary to compensate for the unavoidable errors of the image processing and feature extraction algorithms. From the accumulated distances aD = Σ_k V(k), where V(k) is the spatial difference of the k-th corresponding trace-point pair, we derive the similarity sM as follows:

\[ aD(t_i, s'_j) = f(Dist,\, \Delta\phi_1,\, \Delta\phi_2,\, \Delta\phi_3) \]

\[ sM(t_i, s'_j) = \begin{cases} f(aD) & aD(t_i, s'_j) \le D_0 \\ 0 & \text{otherwise} \end{cases} \tag{4} \]

where Δφ_1 = φ_s1 - φ_t1, Δφ_2 = φ_s2 - φ_t2, Δφ_3 = φ_s3 - φ_t3, and

\[ Dist(s, t) = |\Delta\phi_1| + |2\Delta\phi_1 + \Delta\phi_2| + |3\Delta\phi_1 + 2\Delta\phi_2 + \Delta\phi_3| \tag{5} \]

The sM function returns a value from 0 (different) up to a constant positive value maxSim (same).
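As a concrete illustration, here is a minimal sketch of Eqs. (4)-(5) in Python; the linear weighting function f, the threshold values, and the example features are hypothetical stand-ins, since the paper leaves f unspecified:

```python
def dist(phi_s, phi_t):
    """Dist(s, t) of Eq. (5): accumulated orientation-difference terms."""
    d1 = phi_s[0] - phi_t[0]
    d2 = phi_s[1] - phi_t[1]
    d3 = phi_s[2] - phi_t[2]
    return abs(d1) + abs(2 * d1 + d2) + abs(3 * d1 + 2 * d2 + d3)

def similarity(t, s, D0=12.0, max_sim=100.0):
    """sM of Eq. (4) for features t, s = (Px, Py, theta0, phi1, phi2, phi3).

    Assumption: f maps small accumulated differences to scores near
    max_sim, decaying linearly to 0 at the tolerance threshold D0.
    """
    aD = dist(s[3:6], t[3:6])
    if aD <= D0:
        return max_sim * (1.0 - aD / D0)   # hypothetical choice of f
    return 0.0

# Example: nearly identical contour features score close to max_sim.
t = (10.0, 20.0, 0.5, 0.10, -0.05, 0.02)
s = (10.5, 19.8, 0.5, 0.11, -0.04, 0.02)
print(similarity(t, s))
```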
3 Probability matching approach

While calculating the probabilities, we assume that in the overlapped area O there are M features from the template iris and N features from the input iris. A tolerance area of feature spatial distance is assigned as S_0. The probability that randomly distributed features from the template iris correspond with features from the input iris in the overlapped area O can be estimated from two aspects: the iris consistent probability and the iris inconsistent probability.

3.1 Iris consistent probability

Assume that the template and input irises originate from different eyes and have no correlation with each other. If the consistent probability is large enough, the two eyes are treated as an impostor pair. Suppose i - 1 arbitrary features from T located in O have all been mated with features from S; the remaining overlapped area is then O - (i - 1)S_0, and the number of unmated, randomly distributed features of S in O is N - (i - 1). Writing E for the overlapped area measured in tolerance units, E = O / S_0, the probability that the i-th randomly distributed feature from T corresponds to one of the N - (i - 1) features from S in the overlapped area O is:

\[ \frac{N-(i-1)}{E-(i-1)}, \qquad i = 1, \ldots, K \tag{6} \]

Given the K corresponding pairs between T and S under the estimated transformation parameters, the remaining features are treated as unmated. The probability that the (K + 1)-th randomly distributed feature from T does not correspond to any feature from S in the overlapped area O is:

\[ \frac{E-N}{E-K} \tag{7} \]

The probability that the (K + j)-th feature from T is randomly distributed in the remaining overlapped area O - (K + j)S_0 and does not correspond to any feature from S in O is:

\[ \frac{E-(N+j)}{E-(K+j)}, \qquad j = 1, \ldots, M-K \tag{8} \]

Therefore, the iris Consistent Probability between the template and input irises, under the assumption that T and S have no correlation, is:

\[ P_{CP}(S \ne T) = C_{M}^{K} \prod_{i=1}^{K} \frac{N-(i-1)}{E-(i-1)} \prod_{j=1}^{M-K} \frac{E-(N+j)}{E-(K+j)} \tag{9} \]
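To make the chance-match computation concrete, here is a minimal sketch of P_CP as reconstructed in Eq. (9); the function name and the example numbers are ours, not the paper's:

```python
from math import comb

def consistent_probability(M, N, K, E):
    """P_CP of Eq. (9): probability that K of the M template features
    mate by chance with N input features in an overlap of E tolerance
    cells (E = O / S0), assuming uncorrelated (impostor) irises.
    """
    p = comb(M, K)                       # choose which K features mate
    for i in range(1, K + 1):            # Eq. (6): the K chance matches
        p *= (N - (i - 1)) / (E - (i - 1))
    for j in range(1, (M - K) + 1):      # Eq. (8): the M-K non-matches
        p *= (E - (N + j)) / (E - (K + j))
    return p

# Example: 25 chance matches between 30 template and 32 input features
# in an overlap of 400 tolerance cells is astronomically unlikely.
print(consistent_probability(M=30, N=32, K=25, E=400))
```

A small P_CP means the observed number of matches K is very unlikely for an impostor pair, which is exactly the evidence the likelihood ratio exploits.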
3.2 Iris inconsistent probability

Assume that the template and input irises originate from the same eye and are highly correlated. If the inconsistent probability is large enough, the two irises are treated as a genuine pair. Considering that poor-quality irises encountered during acquisition and feature extraction may cause some true features to be missing or spurious features to be detected, we assume the numbers of true features from irises T and S in the overlapped area O are m and n, respectively. The numbers of spurious features in T and S are thus M - m and N - n. For the true features of T and S there should be a one-to-one correspondence; however, due to eye deformation, changes in feature position and missing features, there are position gaps between the corresponding features of two irises even for genuine pairs. Let h denote the number of corresponding true-feature pairs whose position gap lies inside the tolerance and g the number whose gap lies outside it, with h + g ≤ max(m, n); the missing true features then number:

\[ m + n - (h + g) \tag{10} \]

Similarly, the mated spurious pairs number K - h, with K - h ≤ max(M - m, N - n), and the unmated spurious features number:

\[ (M-m) + (N-n) - (K-h) \tag{11} \]

In practice, the total feature count in O is thus M + N - K.

3.3 Probability distribution

The probability distribution of the positional differences between corresponding features extracted from mated irises is similar to a Gaussian distribution [3][8]. The probability that the position difference of a corresponding feature pair exceeds the tolerance threshold r_0 is:

\[ P_{PD}(sd > r_0) = 1 - \int_{0}^{r_0} G(r)\, dr \tag{12} \]

where G(r) is the probability density of the position difference for mated features. Therefore, the probability that h true features lie inside r_0 and g true features lie outside r_0 is:

\[ P_{TF} = C_{\max(m,n)}^{\,h+g}\; P_{PD}(sd < r_0)^{\,h}\; P_{PD}(sd > r_0)^{\,g} \tag{13} \]

For the spurious features, since there is no one-to-one correspondence between them, the probability calculation is carried out by replacing M with M - m, N with N - n, and E with E + [h - (m + n)]. The probability that the i-th randomly distributed spurious feature of the M - m from T corresponds to one of the (N - n) - (i - 1) spurious features from I is:

\[ \frac{(N-n)-(i-1)}{E + (h-(m+n)) - (i-1)}, \qquad i = 1, \ldots, K-h \tag{14} \]

For the unmated spurious features, M is replaced by M - m, N by N - n, E by E + [h - (m + n)], and K by K - h. The probability that the (K - h + j)-th spurious feature of the M - m from T is randomly distributed in the remaining overlapped area and does not correspond to any spurious feature of the N - n from I is:

\[ \frac{E + (h-(m+n)) - ((N-n)+j)}{E + (h-(m+n)) - ((K-h)+j)}, \qquad j = 1, \ldots, (M-m)-(K-h) \tag{15} \]

Therefore, the probability that K - h spurious features are mated and (M - m) - (K - h) spurious features are unmated between the M - m and N - n spurious features of T and I is:

\[ P_{SF} = C_{M-m}^{\,K-h} \prod_{i=1}^{K-h} \frac{(N-n)-(i-1)}{E + (h-(m+n)) - (i-1)} \prod_{j=1}^{(M-m)-(K-h)} \frac{E + (h-(m+n)) - ((N-n)+j)}{E + (h-(m+n)) - ((K-h)+j)} \tag{16} \]

The Inconsistent Probability between T and I, under the assumption that T and I are highly correlated, is given by:

\[ P_{IP}(I = T) = \sum_{m=0}^{M} \sum_{n=0}^{N} \sum_{h=0}^{K} \sum_{g=0}^{\max(m,n)-h} V(m,n,h,g) \tag{17} \]

where

\[ V(m,n,h,g) = \begin{cases} P_{TF} \cdot P_{SF} & h+g \le \max(m,n) \text{ and } K-h \le \max(M-m,\,N-n) \\ 0 & \text{else} \end{cases} \]
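The abstract describes deriving a likelihood ratio between the two hypotheses, but the final decision rule is not written out. The following sketch is therefore one natural reading, our own construction in the spirit of [11]: accept the claimed identity when P_IP, the probability under the same-eye hypothesis, sufficiently outweighs P_CP, the probability under the different-eye hypothesis. The function name, threshold, and example values are hypothetical.

```python
def likelihood_ratio_decision(p_ip, p_cp, threshold=1.0, eps=1e-300):
    """Hypothetical decision rule: accept when the likelihood ratio
    P_IP / P_CP (same eye vs. different eyes) exceeds a threshold
    chosen for the target FMR/FNMR trade-off.
    """
    ratio = p_ip / max(p_cp, eps)   # guard against underflow to zero
    return ratio > threshold, ratio

# Example with hypothetical probabilities from Sections 3.1-3.3:
accept, lr = likelihood_ratio_decision(p_ip=1e-6, p_cp=1e-14)
print(accept, lr)   # True, 1e8 -- strong evidence for a genuine pair
```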
4 Experimental results

The proposed technique has been tested on 4,320 images. The iris data were captured from 60 people using three different kinds of iris sensors (BERC, CASIA V1.0, and CASIA-Iris V3), with 24 iris image samples per person for each sensor; the total field test data were therefore 60 persons × 8 irises × 3 samples × 3 sensors = 4,320 iris images. The size of each iris image is 128 × 128 pixels. In the feature extraction process [4], a pattern is extracted from each iris image using the linear predictive analysis of an 8-pole filter.

Firstly, we compare the proposed approach with two existing methods, [8] and [9]. The three methods are implemented in the same iris-based verification system. We use the total field test data for the evaluation, which contains a number of genuine and impostor matches. The performances of the different methods are shown as ROC curves, plotted as FAR against FRR, in Figure 4. From the ROC curves it can be observed that the proposed algorithm yields the largest improvement: for a given FAR, the proposed approach helps the system obtain the lowest FRR. Statistically, compared with the other two systems, the proposed algorithm reduces the system FRR at FAR = 0.01%.

Figure 4: FAR & FRR evaluation result of the proposed approach.

Secondly, we investigate evaluating iris image quality. The quality measure becomes larger for a clear iris image and smaller for a faded one. Figure 5 shows the ROC curves corresponding to the application of the image-quality parameter under three conditions: (a) without examining image quality, i.e. faded images are not rejected; (b) examining both the registered and the verification data (all iris images); (c) examining only the images to be registered. The recognition rate is improved from 95.6% to 99.3%.

Figure 5: ROC curves vs. application of the image-quality parameters.

5 Conclusion

The proposed alignment approach, which uses a feature-vector representation, generates a higher peak in Hough space than a conventional vector representation; hence, an accumulator array of lower resolution can be employed without making alignment more difficult. The FAR and FRR evaluation results of the proposed approach, shown in Figure 4, are as good as or better than those of some previously presented approaches. We have applied the proposed discrimination algorithm to an iris verification device operating in the real world; this evaluation shows that the proposed approach can be implemented in an embedded system, such as a DSP-based iris identification module. As shown in Figure 5, compared with other methods the proposed method obtains the best performance in separating genuine and impostor pairs, which benefits from the utilization of CP and IP to construct the likelihood ratio. This paper also introduces a method that utilizes groups of parameters related to iris image quality and iris image information, so that the enrollment procedure results in the capture of the highest-quality iris images. Another merit of the proposed approach is that it does not depend on the sensor type; the proposed approach is therefore more robust and more practical to implement.

References

[1] A. K. Jain, A. Ross, and S. Prabhakar (2004). An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):4-20.

[2] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn (2008). Image understanding for iris biometrics: A survey. Computer Vision and Image Understanding, 110(2):281-307.

[3] Y. Du (2006). Using 2D log-Gabor spatial filters for iris recognition. In Proc. of the SPIE Biometric Technology for Human Identification III, pages 62020:F1-F8.

[4] Y. Liu, D. Li, T. Isshiki, and H. Kunieda (2010). A novel similarity measurement for minutiae-based fingerprint verification. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):86-94.

[5] K. P. Hollingsworth, K. W. Bowyer, and P. J. Flynn (2009). The best bits in an iris code. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(6):964-973.

[6] L. Ma, T. Tan, Y. Wang, and D. Zhang (2004). Efficient iris recognition by characterizing key local variations. IEEE Transactions on Image Processing, 13(6):739-750.

[7] L. Masek (2003). Recognition of human iris patterns for biometric identification. Master's thesis, University of Western Australia.

[8] C. Rathgeb, A. Uhl, and P. Wild (2011). Shifting score fusion: On exploiting shifting variation in iris recognition. In Proc. of the 26th ACM Symposium on Applied Computing (SAC'11), pages 1-5.

[9] A. Uhl and P. Wild (2010). Enhancing iris matching using Levenshtein distance with alignment constraints. In Proc. of the 6th Int. Symp. on Advances in Visual Computing (ISVC'10), pages 469-479.

[10] S. Ziauddin and M. Dailey (2008). Iris recognition performance enhancement using weighted majority voting. In Proc. of the 15th Int. Conf. on Image Processing (ICIP'08), pages 277-280.

[11] A. M. Bazen and R. N. J. Veldhuis (2004). Likelihood-ratio-based biometric verification. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):86-94.