
Research article

Deep convolution network based emotion analysis towards mental health care

Published: 07 May 2020

Abstract

Facial expressions play an important role in communication, allowing information about an individual's emotional state to be conveyed and inferred. Research suggests that automatic facial expression recognition is a promising avenue of enquiry in mental healthcare, as facial expressions can also reflect an individual's mental state. In order to develop user-friendly, low-cost and effective facial expression analysis systems for mental health care, this paper presents a novel deep convolution network based emotion analysis framework to support mental state detection and diagnosis. The proposed system processes facial images and interprets the temporal evolution of emotions through a new solution in which deep features are extracted from fully connected layer 6 (FC6) of AlexNet, with a standard linear discriminant analysis (LDA) classifier used to obtain the final classification outcome. It is tested against five benchmark databases: JAFFE, KDEF and CK+, together with two databases of images collected 'in the wild', FER2013 and AffectNet. Compared with other state-of-the-art methods, the proposed method achieves higher overall facial expression recognition accuracy. Additionally, when compared to state-of-the-art deep learning models such as VGG16, GoogLeNet, ResNet and AlexNet, the proposed method demonstrates better efficiency and lower hardware requirements. The experiments presented in this paper show that the proposed method outperforms the other methods in terms of accuracy and efficiency, which suggests it could act as a smart, low-cost, user-friendly cognitive aid to detect, monitor, and diagnose the mental health of a patient through automatic facial expression analysis.
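The pipeline described in the abstract — deep features taken from a fully connected layer of a convolutional network, then classified with standard LDA — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 4096-dimensional vectors below are randomly simulated stand-ins for real AlexNet FC6 activations, which in practice would be obtained by forwarding detected face crops through a pretrained AlexNet.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated stand-ins for AlexNet FC6 activations (4096-dim per image).
# Each of the 7 basic emotion classes gets a different mean so the toy
# data is separable, mimicking the discriminative deep features.
n_per_class, n_features, n_classes = 40, 4096, 7
X = np.vstack([
    rng.normal(loc=c, scale=5.0, size=(n_per_class, n_features))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A plain LDA classifier on the deep features, matching the abstract's
# description of the final classification stage.
clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"toy test accuracy: {acc:.2f}")
```

The appeal of this design is that the expensive convolutional network is used only as a fixed feature extractor (no fine-tuning), while the lightweight LDA stage does the emotion-specific learning, which is consistent with the paper's claim of lower hardware requirements than end-to-end deep models.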



Published In

Neurocomputing, Volume 388, Issue C (May 2020), 347 pages.

Publisher: Elsevier Science Publishers B.V., Netherlands.

Author Tags

1. Facial expression recognition
2. Deep convolution network
3. Mental health care
4. Emotion analysis
