A New Descriptor for Smile Classification Based on Cascade Classifier in Unconstrained Scenarios
Figure 1: Facial action units (AUs).
Figure 2: System flowchart.
Figure 3: Facial regions. (a) Input, (b) face detection (Step 1), and (c) region detection (Step 2).
Figure 4: Facial landmarks.
Figure 5: For each smile sample in (1), (2), and (3): (a) from left to right, RGB images after brightness equalization, outside contours, internal contours, outcomes of the outside boundary, outcomes of the internal boundary, and final segmentation outcomes; (b) convergence outcomes of the outside and internal contours; (c) segmentation outcomes of the images in (b).
Figure 6: Different smiles for a person.
Abstract
1. Introduction
Motivation and Contribution
2. Related Work
3. Methodology
3.1. Face Detection
3.2. Feature Extraction
3.2.1. Histogram Feature Extraction
3.2.2. Alpha and Beta Features
3.3. Cascade Classifier
4. Experimental Results
Performance Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Data Set | TP | TN | FP | FN | Accuracy (%) | TPR | TNR | PPV | F1-Score
---|---|---|---|---|---|---|---|---|---
CK+ | 2048.9 | 1916.59 | 33.1 | 33.1 | 98.35 | 0.98 | 0.98 | 0.98 | 0.98
CK+48 | 180.2 | 165.74 | 15.8 | 15.8 | 91.64 | 0.92 | 0.92 | 0.92 | 0.92
JAFFE | 26.5 | 34.22 | 16.5 | 16.5 | 65.02 | 0.62 | 0.68 | 0.62 | 0.62

Data Set | TP | TN | FP | FN | Accuracy (%) | TPR | TNR | PPV | F1-Score
---|---|---|---|---|---|---|---|---|---
CK+ | 2049.2 | 1919.32 | 32.8 | 32.8 | 98.38 | 0.98 | 0.98 | 0.98 | 0.98
CK+48 | 179.8 | 165.66 | 16.2 | 16.2 | 91.44 | 0.92 | 0.91 | 0.92 | 0.98
JAFFE | 27.3 | 34.47 | 15.7 | 15.7 | 66.62 | 0.65 | 0.69 | 0.63 | 0.64

Data Set | TP | TN | FP | FN | Accuracy (%) | TPR | TNR | PPV | F1-Score
---|---|---|---|---|---|---|---|---|---
CK+ | 2047.5 | 1919.19 | 34.5 | 34.5 | 98.29 | 0.98 | 0.98 | 0.98 | 0.98
CK+48 | 179.7 | 165.67 | 16.3 | 16.3 | 91.39 | 0.92 | 0.91 | 0.92 | 0.92
JAFFE | 26.2 | 34.46 | 16.8 | 16.8 | 64.64 | 0.61 | 0.68 | 0.61 | 0.61
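The rates in the tables above follow directly from the confusion counts. As a minimal sketch (the helper name is ours, not the paper's; small discrepancies against the reported values come from rounding):

```python
def classification_metrics(tp, tn, fp, fn):
    """Derive the reported rates from raw confusion counts."""
    total = tp + tn + fp + fn
    accuracy = 100.0 * (tp + tn) / total  # reported as a percentage
    tpr = tp / (tp + fn)                  # sensitivity (recall)
    tnr = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                  # precision
    f1 = 2 * ppv * tpr / (ppv + tpr)      # harmonic mean of precision and recall
    return accuracy, tpr, tnr, ppv, f1

# CK+ row of the first table:
acc, tpr, tnr, ppv, f1 = classification_metrics(2048.9, 1916.59, 33.1, 33.1)
```

The final column of each table matches the F1-score computed this way for nearly every row, which suggests that is the metric it reports.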
Emotions | E1 | E2 | E3 | E4 | E5 | E6 | E7 |
---|---|---|---|---|---|---|---|
E1 | 31 | 0 | 0 | 0 | 0 | 0 | 0 |
E2 | 2 | 5 | 0 | 0 | 0 | 0 | 0 |
E3 | 0 | 5 | 24 | 0 | 0 | 0 | 0 |
E4 | 0 | 0 | 5 | 16 | 0 | 0 | 0 |
E5 | 0 | 0 | 0 | 1 | 35 | 0 | 0 |
E6 | 0 | 0 | 0 | 0 | 3 | 13 | 0 |
E7 | 0 | 0 | 0 | 0 | 0 | 4 | 52 |

Emotions | E1 | E2 | E3 | E4 | E5 | E6 | E7
---|---|---|---|---|---|---|---|
E1 | 15 | 0 | 0 | 0 | 0 | 0 | 0 |
E2 | 3 | 3 | 0 | 0 | 0 | 0 | 0 |
E3 | 0 | 3 | 3 | 0 | 0 | 0 | 0 |
E4 | 0 | 0 | 2 | 2 | 0 | 0 | 0 |
E5 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
E6 | 0 | 0 | 0 | 0 | 1 | 6 | 0 |
E7 | 0 | 0 | 0 | 0 | 0 | 1 | 2 |

Emotions | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 | E11 | E12 | E13
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
E1 | 418 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
E2 | 4 | 371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
E3 | 0 | 3 | 341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
E4 | 0 | 0 | 3 | 329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
E5 | 0 | 0 | 0 | 4 | 231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
E6 | 0 | 0 | 0 | 0 | 3 | 209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
E7 | 0 | 0 | 0 | 0 | 0 | 5 | 85 | 0 | 0 | 0 | 0 | 0 | 0 |
E8 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 25 | 0 | 0 | 0 | 0 | 0 |
E9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 3 | 0 | 0 | 0 | 0 |
E10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 8 | 0 | 0 | 0 |
E11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 14 | 0 | 0 |
E12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 5 | 0 |
E13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 3 |
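Overall and per-class accuracy follow from any of the confusion matrices above by comparing the diagonal against the row totals. A sketch using the first seven-emotion matrix, assuming rows are true labels and columns are predictions (which the near-diagonal structure suggests):

```python
# Rows are true classes E1..E7, columns are predicted classes.
cm = [
    [31, 0, 0, 0, 0, 0, 0],
    [2, 5, 0, 0, 0, 0, 0],
    [0, 5, 24, 0, 0, 0, 0],
    [0, 0, 5, 16, 0, 0, 0],
    [0, 0, 0, 1, 35, 0, 0],
    [0, 0, 0, 0, 3, 13, 0],
    [0, 0, 0, 0, 0, 4, 52],
]

total = sum(sum(row) for row in cm)                 # all classified samples
correct = sum(cm[i][i] for i in range(len(cm)))     # diagonal = correct predictions
overall_accuracy = correct / total                  # 176 / 196 for this matrix

# Per-class recall: diagonal entry divided by its row total.
per_class_recall = [cm[i][i] / sum(cm[i]) for i in range(len(cm))]
```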

Data Set | Number of Samples | Accuracy Rate (%)
---|---|---
CK+ | 1000 | 80.01
CK+ | 2000 | 82.10
CK+ | 3000 | 85.00
CK+ | 4000 | 87.00
CK+ | 5000 | 88.02
CK+ | 6000 | 89.02
CK+ | 7000 | 91.01
CK+ | 8000 | 95.00
CK+ | 9000 | 97.05
CK+ | 10,414 | 98.35
CK+48 | 100 | 70.01
CK+48 | 200 | 75.06
CK+48 | 300 | 76.08
CK+48 | 400 | 79.80
CK+48 | 500 | 81.02
CK+48 | 600 | 81.06
CK+48 | 700 | 85.00
CK+48 | 800 | 89.90
CK+48 | 981 | 91.64
JAFFE | 40 | 48.00
JAFFE | 80 | 49.00
JAFFE | 120 | 60.03
JAFFE | 160 | 62.50
JAFFE | 213 | 65.02

Data Set | Number of Samples | Accuracy Rate (%)
---|---|---
CK+ | 1000 | 81.81
CK+ | 2000 | 83.15
CK+ | 3000 | 84.50
CK+ | 4000 | 86.00
CK+ | 5000 | 85.02
CK+ | 6000 | 88.02
CK+ | 7000 | 90.01
CK+ | 8000 | 92.00
CK+ | 9000 | 96.05
CK+ | 10,414 | 98.38
CK+48 | 100 | 71.01
CK+48 | 200 | 73.05
CK+48 | 300 | 75.28
CK+48 | 400 | 77.85
CK+48 | 500 | 80.52
CK+48 | 600 | 82.46
CK+48 | 700 | 85.50
CK+48 | 800 | 88.95
CK+48 | 981 | 91.44
JAFFE | 40 | 42.00
JAFFE | 80 | 44.00
JAFFE | 120 | 55.03
JAFFE | 160 | 60.50
JAFFE | 213 | 66.62

Data Set | Number of Samples | Accuracy Rate (%)
---|---|---
CK+ | 1000 | 82.82
CK+ | 2000 | 84.35
CK+ | 3000 | 84.58
CK+ | 4000 | 85.55
CK+ | 5000 | 86.08
CK+ | 6000 | 88.72
CK+ | 7000 | 91.51
CK+ | 8000 | 92.07
CK+ | 9000 | 97.07
CK+ | 10,414 | 98.29
CK+48 | 100 | 71.61
CK+48 | 200 | 72.65
CK+48 | 300 | 75.26
CK+48 | 400 | 76.85
CK+48 | 500 | 82.72
CK+48 | 600 | 84.47
CK+48 | 700 | 85.57
CK+48 | 800 | 89.75
CK+48 | 981 | 91.39
JAFFE | 40 | 47.00
JAFFE | 80 | 55.05
JAFFE | 120 | 60.03
JAFFE | 160 | 62.50
JAFFE | 213 | 64.64

Data Set | Noise Amount | Accuracy Rate (%)
---|---|---
CK+ | 1 | 98.01
CK+ | 2 | 96.50
CK+ | 5 | 95.02
CK+ | 7 | 93.54
CK+ | 10 | 92.68
CK+48 | 1 | 90.72
CK+48 | 2 | 89.23
CK+48 | 5 | 87.25
CK+48 | 7 | 85.03
CK+48 | 10 | 84.38
JAFFE | 1 | 65.73
JAFFE | 2 | 62.32
JAFFE | 5 | 61.96
JAFFE | 7 | 61.61
JAFFE | 10 | 60.63
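The robustness figures above come from re-testing on deliberately corrupted images, with accuracy degrading gracefully as the noise level rises. The paper's exact noise model is not restated here; one plausible setup, with salt-and-pepper noise at a density matching the "Noise Amount" column, could look like this (the function name and the noise model are our assumptions):

```python
import numpy as np

def add_salt_and_pepper(image, amount, seed=0):
    """Corrupt `amount` percent of the pixels with salt (255) or pepper (0) noise.

    `image` is a uint8 grayscale array; the noise model is an assumption,
    not taken from the paper.
    """
    noisy = image.copy()
    n = int(image.size * amount / 100.0)           # number of pixels to corrupt
    rng = np.random.default_rng(seed)
    idx = rng.choice(image.size, size=n, replace=False)
    flat = noisy.reshape(-1)                       # flat view into `noisy`
    flat[idx[: n // 2]] = 0                        # pepper
    flat[idx[n // 2 :]] = 255                      # salt
    return noisy

# Corrupt 10% of a uniform gray test image:
img = np.full((10, 10), 128, dtype=np.uint8)
noisy = add_salt_and_pepper(img, 10)
```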
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hassen, O.A.; Abu, N.A.; Abidin, Z.Z.; Darwish, S.M. A New Descriptor for Smile Classification Based on Cascade Classifier in Unconstrained Scenarios. Symmetry 2021, 13, 805. https://doi.org/10.3390/sym13050805