Computers, Materials & Continua
DOI:10.32604/cmc.2021.020084
Article

Secure Rotation Invariant Face Detection System for Authentication

Amit Verma1, Mohammed Baljon2, Shailendra Mishra2,*, Iqbaldeep Kaur1, Ritika Saini1, Sharad Saxena3 and Sanjay Kumar Sharma4

1Department of Computer Science & Engineering, Chandigarh Group of Colleges, Mohali, 140317, India
2Department of Computer Engineering, College of Computer & Information Science, Majmaah University, 11952, Saudi Arabia
3Department of Computer Science and Engineering, Thapar Institute of Engineering & Technology, Patiala, 147004, India
4Department of Computer Centre & Campus Network Facility Science Complex, PO Central University, Gachibowli, Hyderabad, 500046, Telangana, India
*Corresponding Author: Shailendra Mishra. Email: s.mishra@mu.edu.sa
Received: 07 May 2021; Accepted: 08 June 2021

Abstract: Biometric applications widely use the face as a component for recognition and automatic detection. Face rotation is a variable component and makes face detection a complex and challenging task at varied angles and rotations. This problem has been investigated, and a novel algorithm, namely RIFDS (Rotation Invariant Face Detection System), has been devised. The objective of the paper is to implement a robust method for detecting faces captured at various angles and to achieve better results than known face detection algorithms. In RIFDS, the Polar Harmonic Transform (PHT) technique is combined with the Multi-Block Local Binary Pattern (MBLBP) in a hybrid manner. The MBLBP is used to extract texture patterns from the digital image, and the PHT is used to handle rotation invariance. In this manner, RIFDS can detect human faces at different rotations and with different facial expressions. The RIFDS performance is validated on different face databases like LFW, ORL, CMU, MIT-CBCL, JAFFF, and Lena images. The results show that the RIFDS algorithm can detect faces at varying angles and at different image resolutions with an accuracy of 99.9%. The RIFDS algorithm outperforms previous methods like Viola-Jones, Multi-Block Local Binary Pattern (MBLBP), and Polar Harmonic Transforms (PHTs). The RIFDS approach has further scope with genetic algorithms for detecting faces (by approximation) even from shadows.

Keywords: Pose variations; face detection; frontal faces; facial expressions; emotions

1  Introduction

Face recognition is an important process for facial emotion recognition, face tracking, gender classification, multimedia applications, automatic face recognition, and many others [1,2]. Many algorithms have been proposed for face detection, but many challenges with efficient and fast detection of faces remain. For example, if faces are tilted, varied in angle, and rotated along the axis, their detection is difficult. Hence, a fast and robust detection system is the need of the hour. Google developed a face recognition algorithm and a huge database consisting of 200 million images and eight million unique searching tasks [3]. Another example is Facebook's auto-tagging function, in which the identity of a person can be automatically recognized from an image uploaded to Facebook [4]. Biometric face recognition is also a great utility for verifying a person's identity by comparing the face with an image stored in an ID/passport for security [5]. Apart from that, face recognition has drawn attention in many research areas and applications [6]. Recognition of rotated or tilted images is a great challenge in authentication and pattern recognition systems. This is illustrated in Fig. 1a. When a photo is taken through the camera, it may detect the face and create a rectangle shape with different angles and poses. This is due to pose variations, lighting conditions, and rotations of the camera during shooting. It is a complex task to detect the face in rotated or tilted images. Authors in [7] stated that most recognition algorithms may degrade by 10% in face verification, indicating that pose variation remains a significant challenge in face recognition. Authors in [8] suggested considering pose-invariant feature representations for recognition in surveillance videos.


Figure 1: (a) Rotated face example (b) Facial feature extraction from digital image using MBLBP

Nevertheless, rotated face recognition remains a challenge in practical scenarios [9-11]. The rotation-invariant detection capability of various methodologies is summarized in Tabs. 1 and 2. The Multi-Block LBP (MBLBP) [12] and Polar Harmonic Transform (PHT) [1,2] techniques alone are not sufficient for fast detection of rotated faces. For the picture illustrated in Fig. 1a, the Viola-Jones algorithm [11] is not able to detect the rotated face. LBP [13] and HOG [1,2] features are also used to fetch facial features of the image, but they are not rotation invariant and cannot detect the face in rotated images [14]. To address this problem, the paper proposes a Rotation Invariant Face Detection System (RIFDS) to detect the face from different angles of rotation [15]. RIFDS combines Polar Harmonic Transforms (PHTs) [1,2] with the Multi-Block LBP (MBLBP) [12] technique for fast and accurate detection of rotated faces. MBLBP is used to extract texture features from different angles of the image, and the PHT [1,2] method is applied to recognize the face from any angle. MBLBP [12] extracts features from small blocks, and these features are more precise than features extracted from the image as a whole [16].

[Tab. 1]

[Tab. 2]

Thus the features extracted from small blocks of a single image are more detailed, which leads to more accurate results. RIFDS uses binary images to display the selected facial features. When a test image is uploaded, it is converted into a grayscale image because color increases the complexity through multiple color channels (like RGB and CMYK) [9]. RIFDS is tested on the face databases JAFFF, ORL, CMU, MIT-CBCL, and LFW. These databases contain images with different sizes (i.e., resolutions), poses (i.e., face direction left, right, up, and down), facial expressions (e.g., fear, joy, crying, anger, happiness, sadness, and shyness), and rotations (i.e., rotated at different angles). The paper is structured in four main sections: Section 1 introduces the content of the article, Section 2 presents the proposed method, Section 3 validates it experimentally, and Section 4 concludes the paper.

1.1 Binary Images

A binary image uses two colors (black and white) and two pixel values, 0 and 1. A binary image with m rows and n columns has N = m × n pixels and is defined by Eq. (1). Binary images display the extracted edges and other facial features in the Multi-Block LBP. When the LBP operator is applied to a digital image, detected edges are shown with white pixel values, and the rest of the image is background. Different facial features are extracted from digital images using the LBP operator as shown in Fig. 1b, in which the extracted features (i.e., edges) are shown in white and the rest is background.

$$I : N \rightarrow \{0, 1\} \tag{1}$$
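As a toy illustration of Eq. (1), the following Python sketch builds a binary image from a feature-response map; the array contents and the 0.5 threshold are arbitrary, not values from this paper:

```python
import numpy as np

m, n = 4, 5                              # rows and columns
edges = np.random.rand(m, n)             # stand-in for an LBP/edge response map
binary = (edges > 0.5).astype(np.uint8)  # 1 = detected edge (white), 0 = background
assert binary.size == m * n              # N = m x n pixels, values in {0, 1}
```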

1.2 Multi-Block Local Binary Pattern (MBLBP)

The MBLBP detects faces in digital images through the concept of head and face boundary extraction. It can detect faces at a 15° angle (i.e., an image with the pose to the left or right side) and at 360° (i.e., frontal face) [12]. It is also used to encode the intensity of a rectangular region using a local binary pattern [17]. LBP looks at nine pixels at a time (i.e., a 3 × 3 image window of nine pixel values); comparing the eight neighbors with the center pixel yields 2^8 = 256 possible binary patterns (see Fig. 2). MBLBP thus allows 256 different binary patterns to be formed for edge detection and face detection in images. The MBLBP operator is computed by comparing the average intensity of the central rectangle, k_c, with those of its neighborhood rectangles {k_1, …, k_8}. In this way, a binary sequence is generated, and the MBLBP value is obtained by Eq. (2).

$$\mathrm{MBLBP} = \sum_{i=1}^{8} s(k_i - k_c)\, 2^{i} \tag{2}$$

$$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases} \tag{3}$$

where k_c is the average intensity of the center rectangle and k_i (i = 1, …, 8) are the average intensities of the neighborhood rectangles.
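A minimal Python sketch of Eqs. (2)-(3), assuming a grayscale image stored as a 2-D numpy array; the clockwise ordering of the eight neighbor rectangles is illustrative:

```python
import numpy as np

def mblbp_code(image, top, left, s):
    """MB-LBP code of Eqs. (2)-(3) for the 3s x 3s patch whose top-left corner
    is (top, left): the average intensity of each of the 8 neighboring s x s
    rectangles is compared with that of the central rectangle."""
    # average intensity of each rectangle in the 3 x 3 grid of s x s blocks
    k = [image[top + r * s: top + (r + 1) * s,
               left + c * s: left + (c + 1) * s].mean()
         for r in range(3) for c in range(3)]
    kc = k[4]                                                       # central rectangle k_c
    neighbours = [k[0], k[1], k[2], k[5], k[8], k[7], k[6], k[3]]   # clockwise k_1..k_8
    bits = [1 if ki - kc >= 0 else 0 for ki in neighbours]          # s(k_i - k_c), Eq. (3)
    return sum(b * 2 ** i for i, b in enumerate(bits, start=1))     # Eq. (2)

# example: MB-LBP code of a random 9 x 9 patch with s = 3 (the 9 x 9 operator)
patch = np.random.randint(0, 256, (9, 9))
print(mblbp_code(patch, 0, 0, 3))
```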


Figure 2: Local Binary Pattern (LBP) computation on a 3 × 3 matrix

1.3 Polar Harmonic Transforms (PHTs)

PHTs are used for feature extraction and generate rotation-invariant features. If f(r, θ) represents a continuous image function on the unit disk D = {(r, θ) : 0 ≤ r ≤ 1, 0 ≤ θ ≤ 2π}, the PHT with repetition m and order n is given by Eq. (4).

$$\mathrm{PHT}_{mn} = \lambda \int_{0}^{2\pi}\!\!\int_{0}^{1} f(r,\theta)\, G^{*}_{mn}(r,\theta)\, dr\, d\theta \tag{4}$$

where m, n take the values (+1, −1, +2, −2, …), G* is the complex conjugate of G, and G_mn is given by Eq. (5).

$$G_{mn}(r,\theta) = R_n(r)\, e^{jm\theta}; \quad j = \sqrt{-1} \tag{5}$$

The radial part R_n(r) is given by Eq. (6).

$$R_n(r) = \begin{cases} \cos(\pi n r^{2}), & \text{for PCT} \\ \sin(\pi n r^{2}), & \text{for PST} \end{cases} \tag{6}$$

$$\lambda = \begin{cases} \dfrac{1}{\pi}, & \text{when } n = 0 \\[4pt] \dfrac{2}{\pi}, & \text{when } n \neq 0 \end{cases} \tag{7}$$
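The basis functions of Eqs. (5)-(7) can be sketched in Python as follows; the function names are ours, introduced here only for illustration:

```python
import numpy as np

def pht_basis(r, theta, n, m, kind="PCT"):
    """Basis function G_mn(r, theta) of Eqs. (5)-(6): radial part R_n(r)
    (cosine for PCT, sine for PST) times the angular factor e^{j m theta}.
    Only points with r <= 1 (the unit disk) are meaningful."""
    radial = np.cos(np.pi * n * r ** 2) if kind == "PCT" else np.sin(np.pi * n * r ** 2)
    return radial * np.exp(1j * m * theta)

def pht_lambda(n):
    """Normalization factor lambda of Eq. (7)."""
    return (1.0 if n == 0 else 2.0) / np.pi
```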

With the help of PHTs, non-frontal faces are detected at different angles of face rotation (i.e., ±30°, ±45°, ±60°, ±90°, ±120°, ±135°, ±150°, 180°, ±210°, ±225°, ±240°, ±270°, ±300°, ±315°, ±330°, and 360°). Histogram of Oriented Gradients (HOG) features can also be used for face recognition under non-restrictive conditions [18]. HOG is a feature descriptor used in image and vision processing for face and object detection. The technique measures occurrences of gradient orientation in localized parts of the test image. This method is comparable to edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts. The main difference from other techniques is that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for better accuracy. Tab. 1 summarizes various face detection methods for detecting rotated faces. It shows that Viola-Jones, HOG features, LBP features, and Multi-Block LBP features are not rotation invariant (i.e., unable to detect rotated faces). On the other hand, Polar Harmonic Transforms (PHTs) are rotation invariant (i.e., they detect rotated faces). Tab. 2 presents the features supported by different methods used for face detection.
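For instance, a HOG descriptor can be computed with scikit-image; the file name, cell size, and block size below are illustrative defaults rather than settings taken from this paper:

```python
from skimage import color, io
from skimage.feature import hog

image = color.rgb2gray(io.imread("face.jpg"))       # hypothetical test image
features = hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
```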

2  Proposed Rotation Invariant Face Detection System (RIFDS)

2.1 Pre-Processing Framework

The RIFDS system combines two methods, PHT and MBLBP. MBLBP is used to extract texture patterns from the digital image, while PHT preserves rotation-invariant characteristics [13]. This process is illustrated in Fig. 3. Here, a query image is selected to detect the rotated face from the sample data set. Then, pre-processing operations like morphological operators and classification are applied to the query image for fast processing. The query image is rotated at a 45° angle to make it ready for analysis. The facial entities (i.e., eyes, nose, and mouth) are selected as features from the modified image. Facial features are selected and extracted for the training of the face recognition system. Face detection is applied to the selected features. The rotated face is generated and finally cropped at a 45° angle. The sample dataset is chosen randomly. The proposed system can detect the face in digital images at 30°, 45°, 60°, 90°, 120°, 135°, 150°, 180°, 210°, 225°, 240°, 270°, 300°, 315°, 330°, and 360° angles. Fig. 4 shows the detection of the face at different rotations of a digital image.

2.2 Face Detection at Different Rotations

The PHT technique is used to detect faces at different rotations. PHT is robust to noise, has minimal information redundancy, and provides fast and accurate face detection at different angles. After the test image and rotation angle are selected, the image is processed with initial morphological operations. The PHT technique and cascading are then applied, and finally faces are detected at various angles. The steps of the algorithm are shown in Algorithm 1 and Fig. 5. Here, an image is selected from the dataset and a 140° rotation angle is chosen. The user can choose any angle (30°, −30°, 45°, −45°, 60°, −60°, 140°, −140°, 180°, 270°, −270°, and 360°) for the given dataset and test image. The selected image is cropped into a circle for better detection and rotation. The PHT of Eq. (4) is applied to the selected image, and its zeroth-order approximation is computed by Eq. (8).

$$M_{pq} = \lambda \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} f(x_i, y_k)\, G^{*}_{pq}(x_i, y_k)\, \Delta x_i\, \Delta y_k \tag{8}$$

where x_i = (2i + 1 − N)/D and y_k = (2k + 1 − N)/D for i, k = 0, …, N−1, and

$$\Delta x_i = \Delta y_k = \frac{2}{D} \tag{9}$$


Figure 3: RIFDS system: (a) Sample dataset; (b) Query image; (c) Pre-processing operation applied to the image; (d) Rotated image at a 45° angle; (e) MBLBP facial features are extracted; (f) Combined facial features are selected; (g) The selected features are extracted; (h) Face detector is applied; (i) Face is detected from the query image

For inner circle mapping, D = N, and for outer circle mapping, D = N√2. The image is reconstructed using the inverse transform given in Eq. (10).

$$\hat{f}(x_i, y_k) = \sum_{p=\min}^{\max} \sum_{q=\min}^{\max} M_{pq}\, G_{pq}(x_i, y_k); \quad i, k = 0, 1, \ldots, N-1 \tag{10}$$


Figure 4: Face from query image is detected at various degrees

Algorithm 1: Face detection at different rotations



Figure 5: Level 2 DFD of face detection by polar harmonic transforms

where min and max are the minimum and maximum values of p and q for the PHT, and f̂(x_i, y_k) is the reconstructed version of the original image f(x_i, y_k). The normalized mean-square reconstruction error is computed by Eq. (11).

$$E = \frac{\sum_{i=0}^{N-1} \sum_{k=0}^{N-1} \left[ f(x_i, y_k) - \hat{f}(x_i, y_k) \right]^{2}}{\sum_{i=0}^{N-1} \sum_{k=0}^{N-1} f(x_i, y_k)^{2}} \tag{11}$$
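A self-contained Python sketch of Eqs. (8)-(11) is given below: it computes the zeroth-order PHT moments of a square grayscale image, reconstructs the image from them, and measures the normalized mean-square error. The order and repetition ranges are illustrative, not the values used in the paper:

```python
import numpy as np

def pht_analyse(f, n_max=5, m_max=5, kind="PCT", outer=False):
    """Zeroth-order PHT moments (Eq. (8)) of a square N x N grayscale image f,
    reconstruction from those moments (Eq. (10)), and the normalized
    mean-square reconstruction error (Eq. (11))."""
    N = f.shape[0]
    D = N * np.sqrt(2) if outer else N           # inner vs. outer circle mapping
    u = (2 * np.arange(N) + 1 - N) / D           # x_i and y_k grid, Eq. (9)
    X, Y = np.meshgrid(u, u, indexing="ij")
    R, T = np.hypot(X, Y), np.arctan2(Y, X)
    disk = (R <= 1.0).astype(float)              # unit-disk support
    dA = (2.0 / D) ** 2                          # Delta x_i * Delta y_k, Eq. (9)
    recon = np.zeros_like(f, dtype=complex)
    for n in range(n_max + 1):
        lam = (1.0 if n == 0 else 2.0) / np.pi   # lambda, Eq. (7)
        rad = np.cos(np.pi * n * R ** 2) if kind == "PCT" else np.sin(np.pi * n * R ** 2)
        for m in range(-m_max, m_max + 1):
            G = rad * np.exp(1j * m * T)                     # G_mn, Eqs. (5)-(6)
            M = lam * np.sum(f * disk * np.conj(G)) * dA     # moment M_pq, Eq. (8)
            recon += M * G                                   # reconstruction, Eq. (10)
    fd, rd = f * disk, recon.real * disk
    error = np.sum((fd - rd) ** 2) / np.sum(fd ** 2)         # Eq. (11)
    return recon.real, error
```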

The cascade object detector is used for face detection. Finally, the face is detected in the image rotated at the chosen angle (140°). The results of face detection are shown in Fig. 6.
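A minimal sketch of this detection step, assuming OpenCV's stock Haar cascade as the face detector; the exact detector and parameters used by RIFDS may differ:

```python
import cv2

def detect_rotated_face(path, angle=140):
    """Rotate the query image by the chosen angle and run a cascade face
    detector on the rotated view. A sketch only, not the exact RIFDS
    implementation; the cascade file is OpenCV's frontal-face model."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (w, h))
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return detector.detectMultiScale(rotated, scaleFactor=1.1, minNeighbors=5)
```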


Figure 6: Face detection using polar harmonic transformation

2.3 Facial Features Extraction and Detection

The Multi-Block LBP is used for facial feature extraction and detection. Initially, the test image is selected, rescaled, and divided into blocks. Pixel comparisons are performed and the binary numbers are computed. Then, using MBLBP and cascading of the extracted facial features, faces are detected. The local binary operator is used to calculate binary patterns in digital images, and the extracted features of the input image are displayed as a binary image. The calculation of the local binary pattern is shown in Fig. 2: each neighboring pixel is compared with the center pixel; if the neighbor pixel value is greater than or equal to the center pixel value, it is assigned 1, otherwise 0. The steps to calculate the multi-block local binary pattern for facial feature extraction and detection are given in Algorithm 2. Figs. 7 and 8 show the detection of the face using Multi-Block LBP.

Algorithm 2: Facial feature extraction and detection



Figure 7: Face detection using MB-LBP. (a) Original image, (b) local binary pattern, (c) MBLBP for face detection, (d) output of face detected, (e) MBLBP histogram


Figure 8: Another example of face detection. (a) Original image, (b) local binary pattern, (c) MBLBP for face detection, (d) output of face detected, (e) MBLBP histogram

In MBLBP, feature extraction performance also depends on the number of blocks, or the scale size used to form the filter from the operator. The detection process is shown in Fig. 9. In MBLBP, s denotes the scale of the MBLBP operator. Feature extraction was implemented with four different scales (3 × 3, 9 × 9, 12 × 12, and 21 × 21). By using different block sizes, it can be observed that a small scale (3 × 3) works very effectively but costs more than the others. The medium-size filter (9 × 9) is computed efficiently, works very fast, and also handles noise in the image better. Large filters are easy to implement and cost less, but a large amount of discriminative information is lost. Tab. 3 shows the performance of MBLBP with different numbers of blocks.
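A sketch of computing an MB-LBP code map over a whole image at block scale s, using a box filter for the block averages; the neighbor ordering and border handling are illustrative:

```python
import cv2
import numpy as np

def mblbp_map(gray, s=3):
    """MB-LBP code at every pixel for block size s x s (e.g., s = 3 gives the
    9 x 9 operator). Block averages are obtained with a box filter."""
    avg = cv2.blur(gray.astype(np.float32), (s, s))          # s x s block means
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]             # 8 neighboring blocks
    codes = np.zeros(gray.shape, dtype=np.uint16)
    for i, (dr, dc) in enumerate(offsets, start=1):
        shifted = np.roll(avg, (dr * s, dc * s), axis=(0, 1))
        codes |= ((shifted - avg) >= 0).astype(np.uint16) << i   # s(k_i - k_c) 2^i
    return codes
```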


Figure 9: Flow diagram of multi-block LBP face detection process

[Tab. 3]

2.4 RIFDS Algorithm Description

The RIFDS face detection approach is shown in Algorithm 3 and Fig. 10. It can accurately detect faces at different angles of rotation (i.e., ±30°, ±45°, ±60°, ±90°, ±120°, ±135°, ±150°, 180°, ±210°, ±225°, ±240°, ±270°, ±300°, ±315°, ±330°, and 360°). Data flows of the proposed system are shown in Figs. 11 and 12.
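Conceptually, the RIFDS flow can be sketched by chaining the functions from the earlier sketches (mblbp_map, pht_analyse, detect_rotated_face); this is an illustrative outline under those assumptions, not the authors' implementation:

```python
import cv2

def rifds_detect(path, angles=(30, 45, 60, 90, 120, 135, 150, 180)):
    """Conceptual sketch of the RIFDS flow: MB-LBP texture features, PHT
    analysis of the (square-cropped) image, and cascade detection at each
    candidate rotation. Reuses the sketched helpers defined above."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    texture = mblbp_map(gray)                                        # Section 1.2 features
    side = min(gray.shape)
    _, err = pht_analyse(gray[:side, :side].astype(float) / 255.0)   # Section 1.3 description
    detections = {a: detect_rotated_face(path, angle=a)
                  for a in list(angles) + [-a for a in angles]}
    return texture, err, detections
```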

Algorithm 3: The RIFDS algorithm



Figure 10: Face detection


Figure 11: Level 1 DFD for face detection using PHTs and MBLBP


Figure 12: Level 2 DFD for face detection using PHTs and MBLBP

3  Results and Discussions

The RIFDS technique is compared with previous works on five different face databases: LFW, ORL, MIT-CBCL, JAFFF, and CMU, covering different image resolutions, poses, and rotations [6]. The LFW database contains 13,000 images, each with a resolution of 250 × 250. The JAFFF database has a total of 215 images at 256 × 256 resolution. The ORL face dataset contains 400 images at 92 × 112. The CMU database contains 1,888 images at both 64 × 112 and 128 × 120. The MIT-CBCL face dataset contains 2,000 images at 82 × 82, 105 × 105, 106 × 149, 110 × 110, and 115 × 115. Lena images with resolutions of 32 × 32, 64 × 64, 256 × 256, 512 × 512, and 1024 × 1024 are also used. Fig. 13 shows the face detection results using the PHT and MBLBP techniques with a query image at a 180° angle. The same 180° image is tested using the Viola-Jones algorithm; the results in Fig. 14 show that it is able to detect the frontal face (i.e., at a 360° angle) only. MBLBP also cannot detect the rotated face (i.e., at a 180° angle), as shown in Fig. 15. The proposed RIFDS algorithm can detect the face accurately at 180°, as shown in Fig. 16. Further, it can detect faces at all rotations (i.e., ±30°, ±45°, ±60°, ±90°, ±140°, 180°, ±270°, and 360°). Tab. 4 shows the face detection accuracy of RIFDS on the different face databases. Tab. 5 compares the proposed face detection system with Viola-Jones, LBP, and MBLBP. It has been verified that the proposed face detection system can detect the face at different image resolutions such as 115 × 115, 82 × 82, 105 × 105, 250 × 250, 92 × 118, 128 × 120, 512 × 512, 106 × 149, 64 × 64, 1024 × 1024, and 256 × 256, and with different facial expressions and emotions.


Figure 13: Face detection at 180 degree. (a) Original image (b) PHT (c) PHT+MBLBP applied (d) output image


Figure 14: Viola-Jones can detect the frontal face only at 360-degree


Figure 15: MBLBP cannot detect the rotated face at 180-degree. (a) Original image, (b) MB local binary pattern, (c) MBLBP for face detection, (d) output face detected, (e) MBLBP histogram


Figure 16: RIFDS at 180-degree compared to Viola-Jones and multi-block-LBP. (a) Original image, (b) PHT applied, (c) PHT+MBLBP applied, and (d) final output

[Tab. 4]

[Tab. 5]

The results using RIFDS are shown in Tab. 6. The output of face detection using RIFDS on the CMU face dataset with different angles (i.e., 30°, −30°, 45°, −45°, 60°, −60°, 140°, −140°, 180°, 270°, −270°, and 360°) and resolutions (115 × 115, 82 × 82, 110 × 110, 106 × 149) is shown in Figs. 17 and 19, and on the JAFFF dataset in Fig. 18. In Fig. 20, the results on the Lena image with a resolution of 512 × 512 are shown. Fig. 21 shows the result analysis of the proposed algorithm along with accuracy and time analysis. The face detection time comparison is shown in Tab. 7, and Tab. 8 compares RIFDS with PHT. In the PHT face recognition method, feature extraction is performed on the complete image; one issue with this approach is that it does not extract features from the rotated image. In the RIFDS approach, features are instead extracted from small blocks of a single image using MBLBP, and PHT is then applied for face recognition.

[Tab. 6]

Figure 17: Face detection at different rotations result using RIFDS on CMU face database: (a) 115 × 115 resolution image; (b) 82 × 82 resolution image; (c) 110 × 110 resolution image; (d) 106 × 149 resolution image


Figure 18: Face detection at different rotation on JAFFF face database and resolution 256 × 256


Figure 19: Face detection of the proposal on CMU face database


Figure 20: Face detection of the proposal on LENA face database


Figure 21: Analysis of rotated faces on (a) All algorithm (b) Different database (c) Time analysis

[Tab. 7]

[Tab. 8]

According to Tabs. 4 to 8 and Figs. 13 to 20, the objectives of the paper have been achieved using the RIFDS technique. The algorithm achieves promising results with an accuracy of 99.99%. For test images with rotation angles from 30° to 180°, the results show better performance than the known algorithms and techniques discussed above.

4  Conclusions

This paper presents a new algorithm called the Rotation Invariant Face Detection System (RIFDS) to detect the face from different angles of rotation. It aims to detect rotated faces quickly and accurately by combining Polar Harmonic Transforms (PHTs) with Multi-Block LBP (MBLBP). In the RIFDS approach, texture patterns are extracted from the image using MBLBP, and PHT is used to preserve rotation-invariant characteristics. The proposed face detection system is able to detect faces within a short time and at different angles (i.e., 30°, −30°, 45°, −45°, 60°, −60°, 140°, −140°, 180°, 270°, −270°, and 360°). This hybrid approach has some limitations. Firstly, if the scale of MBLBP is 3 × 3, it is not able to acquire the primary features at a large scale; to address this issue, the process is generalized to use neighborhood information. Secondly, when PHT is used without Bessel functions, no other radial kernel can be defined explicitly, which can increase the computational complexity if not defined properly. The technique has also been tested for face detection at different image resolutions. It has been verified that the proposed RIFDS technique can detect faces with different angles, facial expressions, and emotions speedily and accurately. The accuracy achieved is 99.99%; the margin of 0.01% is due to noise and external uncontrollable factors such as the numeric precision of the algorithm. In the future, the algorithm can be extended in the domains of automation, machine learning, and deep learning, using genetic algorithms for face detection from shadows. Applications of the algorithm include twin face recognition, object and shape recognition, video or live surveillance, detection of the face in the incarnation, and medical image processing for tumor detection by focusing on the detection of malignant cells.

Acknowledgement: The authors sincerely acknowledge the support from Majmaah University, Saudi Arabia for this research.

Funding Statement: The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project Number No-R-2021-154.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. Y. Said and M. Barr, “Human emotion recognition based on facial expressions via deep learning on high-resolution images,” Multimedia Tools and Applications, vol. 80, pp. 1–13, 2021.
  2. A. Bhandari and N. R. Pal, “Can edges help convolution neural networks in emotion recognition?,” Neurocomputing, vol. 433, pp. 162–168, 2021.
  3. A. Norval and E. Prasopoulou, “Public faces a critical exploration of the diffusion of face recognition technologies in online social networks,” New Media & Society, vol. 19, no. 4, pp. 637–654, 2017.
  4. F. Nan, Q. Zeng, Y. Xing and Y. Qian, “Single image super-resolution reconstruction based on the resNeXt network,” Multimedia Tools and Applications, vol. 79, no. 45, pp. 34459–34470, 2020.
  5. X. Y. Li and Z. X. Lin, “Face recognition based on HOG and fast PCA algorithm,” in Proc. of the Euro-China Conf. on Intelligent Data Analysis and Applications, Switzerland, Springer, vol. 682, pp. 10–21, 2017.
  6. S. Zhao, J. Li and J. Wang, “Disentangled representation learning and residual GAN for age-invariant face verification,” Pattern Recognition, vol. 100, pp. 1–10, 2020.
  7. L. Tran, X. Yin and X. Liu, “Disentangled representation learning for pose-invariant face recognition,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp. 1415–1424, 2017.
  8. S. Zhang, H. Wang and W. Huang, “Two-stage plant species recognition by localmean clustering and weighted sparse representation classification,” Cluster Computing, vol. 20, no. 2, pp. 1517–1525, 2017.
  9. Y. H. Li, I. C. Lo and H. H. Chen, “Deep face rectification for 360° dual-fisheye cameras,” IEEE Transactions on Image Processing, vol. 30, pp. 264–276, 2021.
  10. A. Elmahmudi and H. Ugail, “Deep face recognition using imperfect facial data,” Future Generation Computer Systems, vol. 99, pp. 213–225, 2019.
  11. P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proc. of the 2001 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, vol. 1, pp. 511–518, 2001.
  12. T. Li, Y. Gao, L. Zhao and H. Zhou, “Compressed multi-block local binary pattern for object tracking,” in Proc. of Tenth Int. Conf. on Machine Vision (ICMV 2017), Vienna, Austria, vol. 10696, pp. 609–618, 2018.
  13. M. Abdur Rahim, M. Najmul Hossain, M. T. Wahid and M. S. Azam, “Face recognition using local binary patterns (LBP),” Global Journal of Computer Science and Technology Graphics & Vision, vol. 13, no. 4, pp. 1–9, 2013.
  14. S. Zafeiriou, C. Zhang and Z. Zhang, “A survey on face detection in the wild: Past, present and future,” Computer Vision and Image Understanding, vol. 138, pp. 1–24, 2015.
  15. M. Hassaballah, H. A. Alshazly and A. A. Ali, “Robust local oriented patterns for ear recognition,” Multimedia Tools and Applications, vol. 79, no. 41, pp. 31183–31204, 2020.
  16. R. Upneja, M. Pawlak and A. M. Sahan, “An accurate approach for the computation of polar harmonic transforms,” Optik, vol. 158, pp. 623–633, 2018.
  17. X. Y. Li and Z. X. Lin, “Face recognition based on HOG and fast PCA algorithm,” in Proc. of the Euro-China Conf. on Intelligent Data Analysis and Applications, Springer, Cham, pp. 10–21, 2017.
  18. C. Singh and A. Kaur, “Fast computation of polar harmonic transforms,” Journal of Real-Time Image Processing, vol. 10, no. 1, pp. 59–66, 2015.
  19.  ORL Face Database. Kaggle Inc. 2021. [Online]. Available: https://www.kaggle.com/kasikrit/att-database-of-faces.
  20.  UMASS, LFW Face Database. 2009. [Online]. Available: http://vis-www.cs.umass.edu/lfw/#download.
  21.  Simon Baker, CMU Face Database. 2009. [Online]. Available: http://www.cs.cmu.edu/afs/cs/project/vision/vasc/idb/www/html/face/.
  22.  Zenodo, JAFFF Face Database. 2020. [Online]. Available: https://zenodo.org/record/3451524#.YJZA69UzbIU.
  23.  Gonzalez and Woods, Lena Database. 2002. [Online]. Available: http://www.imageprocessingplace.com/root_files_V3/image_database.htm.
  24.  MIT, MIT-CBCL Database. [Online]. Available: http://cbcl.mit.edu/software-datasets/heisele/facerecognition-database.html.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.