Informatica 35 (2011) 211-219 211

Expression-robust 3D Face Recognition using Bending Invariant Correlative Features

Yue Ming and Qiuqi Ruan, Senior Member, IEEE
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, P.R. China
E-mail: myname35875235@126.com

Keywords: 3D face recognition, 3D bending invariant correlative features (3D BI-LBP), spectral regression (SR)

Received: July 1, 2010

In this paper, novel 3D Bending Invariant Correlative Features (3D BI-LBP) are used for 3D face recognition to overcome some of the unsolved problems encountered with 3D facial images. In this challenging topic, large expression variations, pose variations and data noise are three major obstacles. We first exploit an automatic procedure for face area extraction, and then process the extracted area to minimize the effect of large pose variations and effectively improve the overall 3D face recognition performance. To overcome large expression variations, the key idea of the proposed algorithm is a representation of the facial surface, called a Bending Invariant (BI), which is invariant to the isometric deformations resulting from changes in expression and posture. In order to encode relationships between neighboring mesh nodes, 3D LBP is applied to the obtained geometric invariant; it has more power to describe the structure of faces than individual points and is effective in characterizing the local details of a signal. The signature images are then decomposed into their principal components using Spectral Regression (SR), resulting in a huge time saving. Our experiments are based on the CASIA and FRGC 3D face databases, which contain large expression and pose variations. Experimental results show that our proposed method provides better effectiveness and efficiency than many commonly used existing methods for 3D face recognition and handles variations in facial expression quite well.

Povzetek: Razvita je nova metoda za prepoznavanje 3D obrazov. (A new method for 3D face recognition is developed.)
1 Introduction

Information and communication technologies are gradually entering all aspects of our life. They are also opening a world where people interact, to an unprecedented extent, with electronic devices embedded in environments that are sensitive and responsive to the presence of users. These scenarios offer the opportunity to exploit the potential of faces as a non-intrusive biometric identifier, not just to regulate access to a controlled environment but also to adapt the provided services to the preferences of a recognized user. Automatic human face recognition is an important research area within the field of biometric identification. Compared with other biometric features, face recognition has the advantages of pro-activity, non-invasiveness and user-friendliness, and has gained great attention during the last decade [1]. While most current efforts are devoted to face recognition using 2D images, they continue to encounter difficulties in handling large facial variations due to head pose, lighting conditions and facial expressions; 2D face recognition systems thus face a strict constraint on improving accuracy. So far it is still quite difficult to build a robust automatic human face recognition system. Many researchers are therefore committed to utilizing three-dimensional information to overcome some of the difficult issues associated with face recognition. Range images, which contain both texture and shape information, are very effective for face recognition when comparing one face with another. There is evidence that range images have the potential to overcome problems inherent in intensity and color images: among their advantages are an explicit representation of the 3D shape and invariance under changes of illumination, pose and object reflectance properties. In view of the shortcomings of the 2D approaches, a number of 3D and 3D+2D multi-modal approaches have recently been proposed.
We extensively examined the prior literature on 3D face recognition, which can be categorized into methods using point cloud representations, depth images, facial surface features or spherical representations [2]. A priori registration of the point clouds is commonly performed by ICP algorithms, with 92.1% rank-one identification reported on a subset of FRGC v2 [3]. Based on depth images, Faltemier et al. [4] introduced a region ensemble approach based on the fusion of results from face regions that have been independently matched. Facial surface features, such as curvature descriptors [5], have also been proposed for 3D face recognition. Alternatively, spherical representations have recently been used for modeling illumination variations [6, 7] or both illumination and pose variations in face images [2, 8]. In addition, Kakadiaris et al. [9] used an annotated face model to fit the changes of the face surface and then obtained a deformation image from the fitted model. A multistage alignment algorithm and advanced wavelet analysis resulted in robust performance; they reported a best performance of 97.0% verification at a 0.1% FAR. Face recognition combining 3D shape and 2D intensity/color information is a developing area of research. Mian et al. [10] handled the expression problem using a fusion scheme in which three kinds of methods, spherical face representation (SFR), scale-invariant feature transform (SIFT)-based matching and a modified ICP, were combined to achieve the final result. Their results showed the potential of appearance-based methods for solving the expression problem in 3D face recognition. Because of the extremely high dimensionality of Gabor features for depth and intensity images, Xu et al. [11] proposed a novel hierarchical selection scheme with embedded LDA and AdaBoost learning for dimensionality reduction, with which an effective classifier can be built.
However, these approaches ignore some details of how depth and intensity information contributes to recognition under expression and pose variations. In this paper, we address the major challenges of field-deployable 3D face recognition systems. We propose a novel framework for expression-robust 3D face recognition; its flowchart is shown in Fig. 1. Our method can be divided into feature extraction, dimension reduction and classification sections. Because expression variations and data noise are major obstacles to good system performance, we preprocess the raw 3D data and extract the face area that is least affected by expression changes. In the feature extraction section, the Bending Invariant and a statistical codebook analysis of its correlative features, denoted 3D BI-LBP, are used to describe the intrinsic geometric information. This procedure very effectively eliminates the effect of expression variations. With dimension reduction based on Spectral Regression, more useful and significant features can be produced for a face than with current methods, resulting in a huge saving in computational cost. Finally, we achieve face recognition using nearest neighbor classifiers.

Figure 1: The framework of 3D face recognition.

The rest of this paper is organized as follows. First, we describe the automatic face registration process that permits alignment of the 3D point clouds before analysis in Section 2. Section 3 describes the Bending Invariant Correlative Features (3D BI-LBP) used in our framework. Section 4 introduces Spectral Regression (SR) for dimension reduction and classifier construction. Section 5 reports the experimental results and gives some comparisons with existing algorithms. Finally, the paper is concluded in Section 6.

2 Automatic preprocessing of 3D face data

In this paper, a face is described by a 3D scattered point cloud from a 3D laser scanner, as illustrated in Fig. 2.
The preprocessing scheme is based on three main tasks: extraction of the facial region, registration of the 3D face, and acquisition of the normalized depth and intensity images. These tasks are fully automated, handle noisy and incomplete input data, are immune to rotation and translation, and are suitable for different resolutions. The main purpose of face extraction is to remove irrelevant information from the 3D point clouds, such as data corresponding to shoulders or hair, and spikes introduced by the laser scanner. First, we estimate a vertical projection curve from the point cloud by computing the column sums of the valid-points matrix [2, 12]. Then, we define two lateral thresholds at the left and right inflexion points of the projection curve and remove data points on the subject's shoulders beyond these thresholds. We further remove the data points corresponding to the subject's chest by thresholding the histogram of depth values. Finally, we remove outlier points that remain in regions disconnected from the main facial area and treat only the largest region as the facial region. After extracting the main facial region from a 3D scan, registration (pose correction) is performed. We present a multistage approach for automatic registration that offers robust and accurate alignment even in the presence of facial expression variations. First, we compute the orthogonal eigenvectors v1, v2, v3 of the covariance matrix of the point cloud, which give the three main axes of the point cloud. We rotate the point cloud so that v1, v2, v3 are parallel to the Y-, X- and Z-axes of the reference coordinate system, respectively. The nose tip, detected by the method of [13], is placed at the origin of the reference coordinate system. This permits construction of an average face model (AFM) by computing, at each grid point, the average value across all training faces.
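The PCA-based coarse pose correction described above can be sketched as follows. This is a simplification: the nose-tip translation, the assignment of specific eigenvectors to the Y-, X- and Z-axes and the sign disambiguation are omitted, and the toy point cloud is made up for illustration.

```python
import numpy as np

def pose_normalize(points):
    # Center the cloud and express it in the basis formed by the
    # eigenvectors of its covariance matrix, so that the main axes of
    # the face align with the reference coordinate axes.
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))  # columns = main axes
    return centered @ vecs

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * [0.5, 2.0, 5.0]  # elongated toy "head"
aligned = pose_normalize(cloud)
# after alignment the covariance of the cloud is (near) diagonal
```

After this step, the rotated cloud has uncorrelated coordinates, which is exactly what aligning to the covariance eigenvectors achieves.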
The AFM is used as a reference face model, and all face signals are further aligned to it by running ICP [14] to avoid the unwanted influence of the mouth and the jaw. Finally, a refinement step employs a global optimization technique [15] to minimize the z-buffer distance. This effectively re-samples the data independently of its triangulation and removes any irrelevant information left over from the previous preprocessing steps.

Figure 2: Main steps in facial region preprocessing.

3 Feature extraction

3.1 Bending invariant

The core of our 3D face recognition framework is a representation of the facial surface that is invariant to isometric deformations, the bending invariant (BI) [16, 17]. This paper extends our previous work [18-20]. The class of transformations that a facial surface can undergo is not arbitrary, and empirical observations show that facial expressions can be modeled as isometric (or length-preserving) transformations [21]. Therefore, we introduce an efficient feature for constructing a signature for isometric surfaces, referred to as a bending invariant. The bending invariant is a polyhedral approximation of the facial surface obtained by performing Isomap on a reduced set of points and interpolating on the full set of points. Given a facial surface M ⊂ R^3, the bending invariant of M is the output of an Isomap algorithm. An isometry is formally a mapping ψ: M → M' such that

d_M(x, y) = d_M'(ψ(x), ψ(y)),  for all x, y ∈ M    (1)

One of the crucial practical requirements for the construction of the invariant feature of a given surface is an efficient algorithm for the computation of the geodesic distance d_M(·, ·) on the surface.
Computation of the geodesic distance effectively reflects the facial shape information and overcomes some of the unsolved problems encountered with 3D facial images, such as large expression and pose variations along with data noise. A numerically consistent algorithm that computes the distances between a surface vertex and the remaining n surface vertices on a regular triangulated domain in O(n) operations is known as fast marching on triangulated domains (FMTD) [16]. After the distance computation, we obtain an approximation of the geodesic distance by sampling the continuous surface on a finite set of points and discretizing the metric associated with the surface. The resulting discrete metric is invariant under isometric surface deformations, but depends on an arbitrary ordering of the points. We would like to obtain a geometric invariant which is both unique for isometric surfaces and allows simple rigid surface matching to compare the invariants. Based on the discussion above, this is equivalent to finding a mapping between two metric spaces.
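The construction above can be made concrete with a small sketch: geodesic distances are approximated by shortest paths along mesh edges (a simple Dijkstra stand-in for the FMTD algorithm cited above, not the authors' implementation), and the ordering-independent flat embedding is obtained by classical multidimensional scaling of the geodesic distance matrix. The tiny square mesh is made up for illustration.

```python
import heapq
import numpy as np

def geodesic_matrix(vertices, edges):
    # All-pairs geodesic distances, approximated by shortest paths
    # along the mesh edges (Dijkstra run from every vertex).
    n = len(vertices)
    adj = [[] for _ in range(n)]
    for i, j in edges:
        w = float(np.linalg.norm(vertices[i] - vertices[j]))
        adj[i].append((j, w))
        adj[j].append((i, w))
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > D[s, u]:
                continue
            for v, w in adj[u]:
                if d + w < D[s, v]:
                    D[s, v] = d + w
                    heapq.heappush(pq, (d + w, v))
    return D

def bending_invariant(D, dim=3):
    # Classical MDS on the squared geodesic distances: the embedding
    # depends only on the intrinsic geometry, so it is invariant under
    # isometric (length-preserving) deformations of the surface.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (0, 3)]
canonical = bending_invariant(geodesic_matrix(verts, edges))
```

Matching the resulting canonical forms then reduces to rigid surface matching, as stated above.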
3.2 3D local binary patterns (3D LBP)

Each depth difference DD between the center pixel and one of its neighbors is encoded by four binary units [24]:

i1 = 1 if DD ≥ 0, i1 = 0 if DD < 0;   |DD| = i2 · 2^2 + i3 · 2^1 + i4 · 2^0    (6)

The method also enhances low-level image features like edges, peaks, valleys and ridges, which is equivalent to enhancing key facial elements such as the nose, eyes and mouth, plus local characteristics like dimples, melanotic nevi and scars. These features not only preserve global facial information but also enhance local characteristics. When the pose, expression and position of a face change, local changes are smaller than global changes, resulting in a very effective face representation.

Figure 3: The flowchart of 3DLBP (example: DD = -2 gives i1 i2 i3 i4 = 0010).

4 Spectral regression (SR)

In the learning section, Spectral Regression is adopted to learn principal components from each 3D facial image based on the 3D Bending Invariant Correlative Features (3D BI-LBP), and these components are stored in the corresponding sub-codebook. Suppose we have m face range images. Let {x_i}_{i=1}^m ⊂ R^n (n = 1024) denote their vector representations. Dimensionality reduction aims at finding {z_i}_{i=1}^m ⊂ R^d, d ≪ n, where z_i can "represent" x_i. In order to better reflect the relationships among different 3D face samples, Spectral Regression (SR) is introduced to reduce dimensions [25]. The algorithm divides into two steps. The first is regularized least squares: find the c-1 vectors a_1, ..., a_{c-1} ∈ R^n, where a_k (k = 1, ..., c-1) is the solution of the regularized least squares problem

a_k = argmin_a Σ_{i=1}^m (a^T x_i - y_i^k)^2 + α‖a‖^2    (7)

where y_i^k is the i-th element of y_k. It is easy to check that a_k is the solution of the linear equation system

(X X^T + αI) a_k = X y_k    (8)

where I is an n × n identity matrix. The canonical Gaussian elimination method can be used to solve this linear system [25].
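The solve step of Eq. (8) can be sketched as follows; the toy dimensions and random data are assumptions for illustration only.

```python
import numpy as np

def sr_regularized_least_squares(X, Y, alpha=0.1):
    # Solve (X X^T + alpha I) a_k = X y_k (Eq. 8) for every response
    # vector y_k (the columns of Y) and stack the solutions a_k as
    # columns of the projection matrix.
    n = X.shape[0]
    return np.linalg.solve(X @ X.T + alpha * np.eye(n), X @ Y)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))   # n = 20 features, m = 50 samples as columns
Y = rng.normal(size=(50, 2))    # c - 1 = 2 response vectors
A = sr_regularized_least_squares(X, Y)   # n x (c - 1) projection matrix
Z = A.T @ X                              # embedded samples
```

Each column a_k of A solves the ridge-regression problem of Eq. (7); the final line applies the learned projection to all samples at once.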
When X is large, efficient iterative algorithms (e.g., LSQR [26]) can be used to directly solve the above regularized least squares problem. The second step is SR embedding. Let A = [a_1, ..., a_{c-1}], an n × (c-1) transformation matrix. The samples can be embedded into the (c-1)-dimensional subspace by

x → z = A^T x    (9)

It is shown in [25] that Spectral Regression decreases the computational complexity from cubic time to linear time, which is a huge speed-up. The dimensionality reduction and subspace projection based on Spectral Regression preserve discriminative facial information, capture salient facial characteristics, and are further enhanced for improved recognition performance by effective matching in the reduced space.

5 Experiments and analysis

In this paper, we evaluate the performance of our framework using the Face Recognition Grand Challenge (FRGC) data corpus, which was organized in 2004 by NIST [12]. The FRGC data contain a variety of facial expressions, which allows the design of additional experiments to evaluate the effect of such variation. In this section, we demonstrate the performance of our proposed scheme through comparative experiments with different algorithms and different dimension reduction schemes. All of our experiments were implemented in Matlab 7.5 and run on a P4 2.1 GHz Windows XP machine with 2 GB memory.

5.1 Experiments with different algorithms

In this experiment, we make detailed comparisons with some existing methods for 3D face recognition to show the performance of the proposed algorithm. The considered features include surface curvature (SC) [5], point signature (PS) [27], Learned Visual Codebook (LVC) [28], UR3D [9] and our proposed framework. The different features are extracted for each node. In the FRGC verification protocol, three masks are defined over the square similarity matrix which holds the similarity values between all subject sessions.
Each mask produces a different Receiver Operating Characteristic (ROC) curve; these are referred to as ROC I, II and III. In ROC I all the data are within semesters, in ROC II they are within the year, while in ROC III the samples are between semesters. The FRGC data corpus can be divided into two disjoint subsets, depending on whether the subject has a neutral facial expression or not. Table 1 shows the verification rates for ROC I, II and III.

Table 1: Verification rates (%) for ROC I, II and III (FAR = 10^-3)

Group 1   SC     PS     LVC    UR3D   Ours
ROC I     49.5   43.1   91.2   95.2   96.2
ROC II    43.2   41.5   88.4   94.8   95.3
ROC III   42.8   41.3   86.2   94.4   94.6

Group 2   SC     PS     LVC    UR3D   Ours
ROC I     39.6   35.8   80.2   80.4   80.7
ROC II    32.7   29.4   77.4   79.2   79.4
ROC III   29.3   27.8   75.1   77.9   78.1

group 1: 5 images with only neutral expression
group 2: 10 images with non-neutral expression

From these results, we can draw the following conclusions. The highest verification rate, 96.2%, was obtained by our framework. Shape variation is important information for characterizing an individual, and the depth feature vector reflecting shape variation distinctly improves the verification rates in Table 1. Expression variations affect performance strongly; the BI feature obtains a representation of the surface which is invariant under geodesic isometries and effectively decreases the influence of expression. The statistical codebook analysis (3D LBP) encodes relationships between neighboring mesh nodes, and 3D LBP features based on BI are likely to be correlated for nearby nodes. The experimental results show that our framework yields consistently better performance than the existing methods on neutral expressions; if we increase the node number, the performance improves significantly. Although under expression variations our performance is not much better than UR3D [9], we use a simpler method which requires less time and memory.
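The verification rate at a fixed FAR reported in Table 1 can be estimated from genuine and impostor similarity scores along the following lines; the synthetic score distributions below are assumptions for illustration.

```python
import numpy as np

def verification_rate_at_far(genuine, impostor, far=1e-3):
    # Choose the decision threshold so that the fraction of impostor
    # scores accepted equals `far`, then measure the fraction of
    # genuine scores accepted at that same threshold.
    thresh = np.quantile(impostor, 1.0 - far)
    return float(np.mean(genuine >= thresh))

rng = np.random.default_rng(2)
genuine = rng.normal(0.8, 0.1, 5000)     # same-subject comparisons
impostor = rng.normal(0.2, 0.1, 100000)  # different-subject comparisons
vr = verification_rate_at_far(genuine, impostor, far=1e-3)
```

Sweeping the threshold instead of fixing the FAR yields the full ROC curve from the same two score sets.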
Second, we made a comparison between 3D BI-LBP and two appearance-based methods: local binary patterns (LBP) [23] and the learned visual codebook (LVC) [28]. LBP is an efficient texture descriptor that has been successfully used for face recognition. LVC is a method which uses K-means clustering to learn basic facial elements. K-fold cross validation was used for all three methods. Because of the large face database, we ran two groups of experiments. In the first, 10 images of each person, consisting of 5 neutral images and 5 images with different expressions, were divided into 10 groups and used for K-fold cross validation. In the second, all of the expression images were divided into 10 groups for K-fold cross validation. The results are presented in Table 2; our method outperformed the other methods. Our method exhibits the desirable characteristics of 3D facial structure, captures local structural characteristics of image regions in multiple directions, and has the properties of orientation and scale invariance. It can effectively estimate the intrinsic dimensions of a data set and preserve local structure information more accurately than the other methods, while being insensitive to the external factors of expression, pose and illumination.

Table 2: Comparison of recognition rates using K-fold cross validation

Method      group 1 (FRGC)    group 2 (FRGC)
LBP         83.32%            85.29%
LVC         91.87%            92.35%
3D BI-LBP   96.75%            97.68%

Method      group 1 (CASIA)   group 2 (CASIA)
LBP         85.17%            88.05%
LVC         93.12%            94.56%
3D BI-LBP   97.08%            98.15%

From the experimental results, we can intuitively see that the actual 3D information has no relation to view and illumination. Finally, compared to 2D face recognition, 3D face recognition has higher accuracy and can overcome the existing problems associated with 2D face recognition.
(Notes to Table 2) group 1: 5 images with neutral expression and 5 images with different expressions; group 2: 10 images with different expressions.

5.2 Experiments with different dimension reduction schemes

Here, we make detailed comparisons between Spectral Regression (SR) [25] and PCA [29], LPP [30], OLPP [31] and SRKDA [32] to show the efficiency of our proposed method for 3D face recognition, especially under expression variations. From the recognition accuracy curves in Fig. 4, we can see that PCA gives a good representation, with a good recognition result at 36 dimensions. As the dimensionality of the feature vector increases, the recognition rate also rapidly increases, but when the dimensionality reaches 50 or higher, the recognition rate nearly stabilizes at a certain level (about 90.4%). LPP obtains better accuracy when there are more than 50 dimensions, improving up to 100 dimensions. This is mainly because LPP models the local structure of the face manifold, which gives it better discriminating power. At lower dimensions, the representation ability of LPP is worse than that of PCA, since its basis functions are non-orthogonal. To overcome this limitation of LPP, OLPP was introduced, based on orthogonal basis functions. It has better representation and discrimination abilities and consequently shows better recognition performance at lower dimensions; on the other hand, OLPP is expensive in both time and memory. The Spectral Regression (SR) approach solves the optimization problem of linear graph embedding while reducing the cubic-time complexity to linear-time complexity [25], whereas Spectral Regression Kernel Discriminant Analysis (SRKDA) has quadratic-time complexity. We have performed extensive experimental comparisons with these state-of-the-art approaches, which demonstrate the effectiveness and efficiency of our method. In this experiment, the SR algorithm is analyzed and used for 3D face recognition.
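For reference, the PCA baseline compared above amounts to projecting each sample onto the leading eigenvectors of the data covariance; a minimal sketch follows (the random toy feature vectors and the SVD route to the eigenvectors are illustrative assumptions).

```python
import numpy as np

def pca_project(X, d):
    # Project the samples (rows of X) onto the d leading principal
    # directions, obtained via SVD of the centered data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

rng = np.random.default_rng(3)
F = rng.normal(size=(100, 64))  # 100 toy feature vectors
Fr = pca_project(F, 36)         # reduce to 36 dimensions, as in the text
```

Unlike SR, this requires an eigen-decomposition (here via SVD) of the full data covariance, which is what drives PCA's higher cost at large n.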
Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning. These methods use information contained in the eigenvectors of a data affinity matrix to reveal low dimensional structure in high dimensional data. SR casts the problem of learning an embedding function into a regression framework, which avoids the eigen-decomposition of dense matrices.

Figure 4: Recognition results of the different dimension reduction schemes (PCA, LPP, OLPP, SR, SRKDA) as a function of dimensionality.

6 Conclusion

In this paper, we propose a novel method for 3D face recognition. We combine a depth feature, the Bending Invariant, with its statistical codebook analysis (3D BI-LBP) as an intrinsic feature. Spectral Regression is used for selecting effective features and combining them for classification. Experimental results show that our framework reflects the shape and geometric properties of 3D data and describes the relational properties of a local shape in a neighborhood. Compared to existing methods, it has demonstrated excellent performance. All these reasons make the face very well suited for Ambient Intelligence applications. Such suitability is especially true for biometric identifiers such as 3D face recognition, which is the most common modality used in visual interactions and allows recognizing the user in a non-intrusive way, without any physical contact with the sensor.

Acknowledgement

This work is supported by the Fundamental Research Funds for the Central Universities (2009YJS025). Portions of the research in this paper use the CASIA 3D Face Database collected by the Institute of Automation, Chinese Academy of Sciences.

References

[1] Bowyer, K. W., Chang, K., Flynn, P. (2006) A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition, Computer Vision and Image Understanding, Elsevier, pp. 1-15.

[2] Llonch, R.
S., Kokiopoulou, E., Tosic, I., Frossard, P. (2010) 3D face recognition with sparse spherical representations, Pattern Recognition, Elsevier, pp. 824-834.

[3] Lu, X., Jain, A. (2006) Deformation modeling for robust 3D face matching, in Proceedings of IEEE Computer Vision and Pattern Recognition, IEEE Computer Society, New York, pp. 1391-1398.

[4] Faltemier, T. C., Bowyer, K. W., Flynn, P. J. (2008) A region ensemble for 3D face recognition, IEEE Transactions on Information Forensics and Security, IEEE Computer Society, pp. 62-73.

[5] Gordon, G. (1991) Face recognition based on depth and curvature features, in Proceedings of IEEE Computer Vision and Pattern Recognition, IEEE Computer Society, Champaign, pp. 234-247.

[6] Wang, H., Wei, H., Wang, Y. (2003) Face representation under different illumination conditions, in Proceedings of IEEE International Conference on Multimedia and Expo, IEEE Computer Society, Maryland, pp. 285-288.

[7] Ramamoorthi, R. (2002) Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, pp. 1322-1333.

[8] Yue, Z., Zhao, W., Chellappa, R. (2008) Pose-encoded spherical harmonics for face recognition and synthesis using a single image, EURASIP Journal on Advances in Signal Processing, Hindawi Publishing Corporation, pp. 1-18.

[9] Kakadiaris, I. A., Passalis, G., Toderici, G., Murtuza, N., Lu, Y., Karampatziakis, N., Theoharis, T. (2007) 3D face recognition in the presence of facial expressions: an annotated deformable model approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, pp. 640-649.

[10] Mian, A. S., Bennamoun, M., Owens, R. (2007) An efficient multimodal 2D-3D hybrid approach to automatic face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, pp. 1927-1943.

[11] Xu, C., Li, S., Tan, T., Quan, L.
(2009) Automatic 3D face recognition from depth and intensity Gabor features, Pattern Recognition, Elsevier, pp. 1895-1905.

[12] Phillips, P. J., Flynn, P. J., Scruggs, T., Bowyer, K. W., Chang, J., Hoffman, K., Marques, J., Min, J., Worek, W. (2005) Overview of the face recognition grand challenge, in Proceedings of Computer Vision and Pattern Recognition, IEEE Computer Society, San Diego, pp. 947-954.

[13] Xu, C., Wang, Y., Tan, T., Quan, L. (2006) A robust method for detecting nose on 3D point cloud, Pattern Recognition Letters, Elsevier, pp. 1487-1497.

[14] Besl, P., McKay, N. (1992) A method of registration of 3D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, pp. 239-256.

[15] Siarry, P., Berthiau, G., Durbin, F., Haussy, J. (1997) Enhanced simulated annealing for globally minimizing functions of many continuous variables, ACM Transactions on Mathematical Software, Association for Computing Machinery, pp. 209-228.

[16] Lu, X., Colbry, D., Jain, A. (2004) Three-dimensional model based face recognition, in Proceedings of International Conference on Pattern Recognition, IEEE Computer Society, Cambridge, pp. 362-366.

[17] Bronstein, A. M., Bronstein, M. M., Kimmel, R. (2003) Expression-invariant 3D face recognition, in Proceedings of Audio- and Video-Based Biometric Person Authentication, Springer, Guildford, pp. 62-70.

[18] Wu, J. Y., Ruan, Q. Q., An, G. Y. (2009) A joint-diffused inpainting model for underexposure image preserving the linear geometric structure, Informatica, Slovenian Society Informatika, pp. 151-163.

[19] Wu, J. Y., Ruan, Q. Q., An, G. Y. (2009) Exemplar-based image completion model employing PDE corrections, Informatica, Slovenian Society Informatika, pp. 259-276.

[20] Ming, Y., Ruan, Q. Q., Ni, R. R. (2010) Learning effective features for 3D face recognition, accepted by ICIP 2010, Hong Kong.

[21] Balasubramanian, M., Schwartz, E. L., Tenenbaum, J. B., de Silva, V., Langford, J. C.
(2002) The Isomap algorithm and topological stability, Science, pp. 9-11.

[22] Ojala, T., Pietikainen, M., Maenpaa, T. (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, pp. 971-987.

[23] Ahonen, T., Hadid, A., Pietikainen, M. (2004) Face recognition with local binary patterns, in Proceedings of European Conference on Computer Vision, Springer, pp. 469-481.

[24] Huang, Y. G., Wang, Y. H., Tan, T. N. (2006) Combining statistics of geometrical and correlative features for 3D face recognition, in Proceedings of British Machine Vision Conference, British Machine Vision Association, pp. 391-395.

[25] Cai, D., He, X., Han, J. (2007) Spectral regression for dimensionality reduction, Technical Report UIUCDCS-R-2007-2856, Computer Science Department, UIUC.

[26] Paige, C. C., Saunders, M. A. (1982) Algorithm 583: LSQR: Sparse linear equations and least squares problems, ACM Transactions on Mathematical Software, Association for Computing Machinery, pp. 195-209.

[27] Chua, C., Han, F., Ho, Y. (2000) 3D human face recognition using point signature, in Proceedings of IEEE International Conference on Automatic Face and Gesture Recognition, IEEE Computer Society, Grenoble, pp. 233-238.

[28] Zhong, C., Sun, Z., Tan, T. (2007) Robust 3D face recognition using Learned Visual Codebook, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, pp. 1-6.

[29] Turk, M., Pentland, A. (1991) Eigenfaces for recognition, Journal of Cognitive Neuroscience, pp. 71-86.

[30] Cai, D., He, X. F., Han, J. W. (2005) Using graph model for face analysis, Technical Report UIUCDCS-R-2005-2636, UIUC.

[31] Cai, D., He, X. F., Han, J. W., Zhang, H. J.
(2006) Orthogonal Laplacianfaces for face recognition, IEEE Transactions on Image Processing, IEEE Computer Society, pp. 3608-3614.

[32] Cai, D., He, X. F., Han, J. W. (2007) Spectral regression for efficient regularized subspace learning, in Proceedings of IEEE International Conference on Computer Vision, IEEE Computer Society, Rio de Janeiro, pp. 1-8.