Review Article

Automated emotion recognition: Current trends and future perspectives

Published: 01 March 2022

Highlights

State-of-the-art automated emotion recognition systems are reviewed.
Input modalities including electroencephalogram (EEG), facial, and speech signals, as well as emotion recognition systems based on the fusion of multiple modalities, are explored.
Both machine learning and deep learning-based techniques are reviewed and tabulated.
Current trends and future perspectives are also presented.

Abstract

Background

Human emotions greatly affect a person's actions. Automated emotion recognition has applications in multiple domains such as health care, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has enabled the automated recognition of human emotions.

Objective

This review paper provides an insight into the various methods that employ electroencephalogram (EEG), facial, and speech signals, coupled with multimodal emotion recognition techniques. In this work, we have reviewed most of the state-of-the-art papers published on this topic.
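To make the EEG modality concrete, the sketch below shows one classic machine-learning-style pipeline of the kind such reviews survey: extracting band-power features (delta through gamma) from a raw EEG epoch via the FFT, which would then feed a downstream classifier. This is an illustrative example under assumed parameters (sampling rate, band edges, synthetic signal), not a method from the paper itself.

```python
import numpy as np

# Canonical EEG frequency bands in Hz (band edges are conventional,
# not taken from the reviewed paper).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch, fs):
    """Mean spectral power of one EEG channel in each canonical band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

# Synthetic 2-second epoch at 128 Hz dominated by a 10 Hz (alpha) rhythm,
# plus a little noise -- a stand-in for a real recording.
fs = 128
t = np.arange(0, 2, 1.0 / fs)
epoch = (np.sin(2 * np.pi * 10 * t)
         + 0.1 * np.random.default_rng(0).standard_normal(t.size))

features = band_powers(epoch, fs)
print(dict(zip(BANDS, features.round(3))))  # alpha power dominates
```

In a full system, such per-channel feature vectors would be concatenated across electrodes and passed to a classifier (e.g. an SVM), which is the shape many of the pre-deep-learning EEG studies covered here take.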

Method

This study was carried out by considering the various emotion recognition (ER) models proposed between 2016 and 2021. The papers were analysed based on the methods employed, classifiers used, and performance obtained.

Results

There has been a significant rise in the application of deep learning techniques for ER. They have been widely applied to EEG, speech, facial expression, and multimodal features to develop accurate ER models.
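One common multimodal strategy in the literature this review covers is decision-level (late) fusion: each modality-specific model emits class probabilities, and a weighted average yields the fused prediction. The sketch below illustrates that idea; the emotion classes, modality weights, and probability values are purely hypothetical.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def late_fusion(prob_by_modality, weights):
    """Weighted average of per-modality class-probability vectors."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                      # normalise so weights sum to 1
    stacked = np.vstack(prob_by_modality)
    return w @ stacked                # fused probability vector

# Hypothetical softmax outputs of an EEG, a facial-expression, and a
# speech model for one test sample.
probs = [
    [0.50, 0.20, 0.20, 0.10],   # EEG model
    [0.60, 0.10, 0.20, 0.10],   # facial-expression model
    [0.30, 0.40, 0.20, 0.10],   # speech model
]
fused = late_fusion(probs, weights=[0.3, 0.5, 0.2])
print(EMOTIONS[int(np.argmax(fused))])  # -> happy
```

Feature-level (early) fusion, by contrast, concatenates per-modality feature vectors before a single classifier; both families appear throughout the surveyed work.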

Conclusion

Our study reveals that most of the proposed machine and deep learning-based systems have yielded good performance for automated ER in a controlled environment. However, high performance has yet to be achieved for ER in uncontrolled environments.

References

[1]
J. Kumar, J.A. Kumar, Machine learning approach to classify emotions using GSR, Adv. Res. Electr. Electron. Eng. 2 (12) (2015) 72–76. https://www.krishisanskriti.org/vol_image/18Dec201512125258 Jyotish kumar(electrical) 227-231.pdf227-231.pdf (consultado a 02-12-2020).
[2]
M. Ménard, P. Richard, H. Hamdi, B. Daucé, T. Yamaguchi, Emotion recognition based on heart rate and skin conductance, in: PhyCS 2015 - 2nd International Conference on Physiological Computing Systems, 2015, pp. 26–32,., Proceedings, January 2015.
[3]
Ekman, P. (1999). Basic emotions. In New York: Sussex U.K.: JohnWiley and Sons,Ltd (pp. 1–6). 10.1007/978-3-319-28099-8_495-1
[4]
Schmidt, P., Reiss, A., Duerichen, R., & Van Laerhoven, K. (2018). Wearable affect and stress recognition: a review. http://arxiv.org/abs/1811.08854
[5]
B. Bontchev, Adaptation in affective video games: a literature review, Cybern. Inform. Technol. 16 (3) (2016) 3–34,.
[6]
M. Ali, A.H. Mosa, F.A. Machot, K Kyamakya, Emotion recognition involving physiological and speech signals: a comprehensive review, In: Stud. Syst. Decis. Control 18 (7) (2018),.
[7]
Candra, H. (2017). Emotion recognition using facial expression and electroencephalography features with support vector machine classifier student.
[8]
D. Liao, W. Zhang, G. Liang, Y. Li, J. Xie, L. Zhu, X. Xu, L. Shu, Arousal evaluation of VR affective scenes based on HR and SAM, in: IEEE MTT-S 2019 International Microwave Biomedical Conference, IMBioC 2019, 2019,.
[9]
A. Gruenewald, D. Kroenert, J. Poehler, R. Brueck, F. Li, J. Littau, K. Schnieber, A. Piet, M. Grzegorzek, H. Kampling, B. Niehaves, Biomedical data acquisition and processing to recognize emotions for affective learning, in: 2018 IEEE 18th International Conference on Bioinformatics and Bioengineering, BIBE 2018, 2018, pp. 126–132,.
[10]
A. Goshvarpour, A. Abbasi, A. Goshvarpour, An accurate emotion recognition system using ECG and GSR signals and matching pursuit method, Biomed. J. 40 (6) (2017) 355–368,.
[11]
A.S. Kanagaraj, A. Shahina, M. Devosh, N. Kamalakannan, EmoMeter: measuring mixed emotions using weighted combinational model, in: 2014 International Conference on Recent Trends in Information Technology, ICRTIT 2014, 2014, pp. 2–7,.
[12]
J.A. Russell, A circumplex model of affect, J. Pers. Soc. Psychol. 39 (6) (1980) 1161–1178,.
[13]
H. Dabas, C. Sethi, C. Dua, M. Dalawat, D. Sethia, Emotion classification using EEG signals, in: ACM International Conference Proceeding Series, 2018, pp. 380–384,. December.
[14]
N.S. Suhaimi, J. Mountstephens, J. Teo, EEG-based emotion recognition: a state-of-the-art review of current trends and opportunities, Comput. Intell. Neurosci. 2020 (2020),.
[15]
J. Zhang, Z. Yin, P. Chen, S. Nichele, Emotion recognition using multi-modal data and machine learning techniques: a tutorial and review, Inform. Fus. 59 (2020) 103–126,. March 2019.
[16]
A. Hassouneh, A.M. Mutawa, M. Murugappan, Development of a real-time emotion recognition system using facial expressions and EEG based on machine learning and deep neural network methods, Inform. Med. Unlocked 20 (2020),.
[17]
X. Wang, X. Chen, C. Cao, Human emotion recognition by optimally fusing facial expression and speech feature, Signal Process. Image Commun. 84 (2020),. January.
[18]
Z. Farhoudi, S. Setayeshi, Fusion of deep learning features with mixture of brain emotional learning for audio-visual emotion recognition, Speech Commun. 127 (2021) 92–103,. June 2020.
[19]
C. Torres-Valencia, M. Álvarez-López, Á. Orozco-Gutiérrez, SVM-based feature selection methods for emotion recognition from multimodal data, J. Multimodal User Interfaces 11 (1) (2017) 9–23,.
[20]
W. Nie, Y. Yan, D. Song, K. Wang, Multi-modal feature fusion based on multi-layers LSTM for video emotion recognition, Multimedia Tools Appl. 80 (11) (2021) 16205–16214,.
[21]
P.A. Gandhi, J. Kishore, Prevalence of depression and the associated factors among the software professionals in Delhi: a cross-sectional study, Indian J. Public Health 64 (4) (2020) 413–416,.
[22]
S. Deb, P.R. Banu, S. Thomas, R.V. Vardhan, P.T. Rao, N. Khawaja, D. Komorowski, S Pietraszek, Depression among Indian university students and its association with perceived university academic environment, living arrangements and personal issues, Asian J. Psychiatry 23 (1) (2016) 1–15,.
[23]
D. Moher, A. Liberati, J. Tetzlaff, D.G. Altman, Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement, BMJ 339 (7716) (2009) 332–336,.
[24]
R. Karthik, R. Menaka, A. Johnson, S. Anand, Neuroimaging and deep learning for brain stroke detection - a review of recent advancements and future prospects, Comput. Methods Programs Biomed. 197 (2020),.
[25]
S. Layeghian Javan, M.M. Sepehri, H. Aghajani, Toward analyzing and synthesizing previous research in early prediction of cardiac arrest using machine learning based on a multi-layered integrative framework, J. Biomed. Inform. 88 (2018) 70–89,. September.
[26]
S.M. Alarcão, M.J. Fonseca, Emotions recognition using EEG signals: a survey, IEEE Trans. Affective Comput. 10 (3) (2019) 374–393,.
[27]
S. Koelstra, I. Patras, Fusion of facial expressions and EEG for implicit affective tagging, Image Vision Comput. 31 (2) (2013) 164–174,.
[28]
W.L. Zheng, J.Y. Zhu, B.L. Lu, Identifying stable patterns over time for emotion recognition from eeg, IEEE Trans. Affect. Comput. 10 (3) (2019) 417–429,.
[29]
T.B. Alakus, M. Gonen, I. Turkoglu, Database for an emotion recognition system based on EEG signals and various computer games – GAMEEMO, Biomed. Signal Process. Control 60 (2020),.
[30]
M. Alex, U. Tariq, F. Al-Shargie, H.S. Mir, H. Al Nashash, Discrimination of genuine and acted emotional expressions using EEG signal and machine learning, IEEE Access 8 (2020) 191080–191089,.
[31]
A.M. Asghar, M.J. Khan, M. Rizwan, M. Shorfuzzaman, R.M. Mehmood, AI inspired EEG ‑ based spatial feature selection method using multivariate empirical mode decomposition for emotion classification, Multimedia Syst. (2021),.
[32]
M.K. Ahirwal, M.R. Kose, Audio-visual stimulation based emotion classification by correlated EEG channels, Health Technol. 10 (1) (2020) 7–23,.
[33]
Y. Liu, G. Fu, Emotion recognition by deeply learned multi-channel textual and EEG features, Future Gen. Comput. Syst. 119 (2021) 1–6,.
[34]
N. Salankar, P. Mishra, L. Garg, Emotion recognition from EEG signals using empirical mode decomposition and second-order difference plot, Biomed. Signal Process. Control 65 (2021),. December 2020.
[35]
T. Tuncer, S. Dogan, A. Subasi, A new fractal pattern feature generation function based emotion recognition method using EEG, Chaos Solitons Fractals 144 (2021),.
[36]
Y. Zhang, J. Chen, J.H. Tan, Y. Chen, Y. Chen, D. Li, L. Yang, J. Su, X. Huang, W. Che, An investigation of deep learning models for EEG-based emotion recognition, Front. Neurosci. 14 (2020) 1–12,. December.
[37]
L. Jin, E.Y. Kim, Interpretable cross-subject EEG-based emotion recognition using channel-wise features †, Sensors 20 (2020) 750–762,.
[38]
T. Song, W. Zheng, P. Song, Z. Cui, EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks, in: IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 11, IEEE, 2020, pp. 532–541.
[39]
Y. Yin, X. Zheng, B. Hu, Y. Zhang, X. Cui, EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM, Appl. Soft Comput. 100 (2021),.
[40]
F. Hasanzadeh, M. Annabestani, S. Moghimi, Continuous emotion recognition during music listening using EEG signals: a fuzzy parallel cascades model, Appl. Soft Comput. 101 (2021),.
[41]
J. Fdez, N. Guttenberg, O. Witkowski, A. Pasquali, Cross-subject EEG-based emotion recognition through neural networks with stratified normalization, Front. Neurosci. 15 (2021),. February.
[42]
F. Shen, G. Dai, G. Lin, J. Zhang, W. Kong, H. Zeng, EEG-based emotion recognition using 4D convolutional recurrent neural network, Cognit. Neurodyn. 14 (6) (2020) 815–828,.
[43]
D. Komorowski, S. Pietraszek, The use of continuous wavelet transform based on the fast fourier transform in the analysis of multi-channel electrogastrography recordings, J. Med. Syst. 40 (1) (2016) 1–15,.
[44]
W.L. Zheng, B.L. Lu, Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks, IEEE Trans. Auton. Ment. Dev. 7 (3) (2015) 162–175,.
[45]
R. Alhalaseh, S. Alasasfeh, Machine-learning-based emotion recognition system using EEG signals, Computers 9 (4) (2020) 1–15,.
[46]
S.M. Ghosh, S. Bandyopadhyay, D. Mitra, Nonlinear classification of emotion from EEG signal based on maximized mutual information, Expert Syst. Appl. 185 (2021),. July.
[47]
H. Choi, M. Hahn, Sequence-to-sequence emotional voice conversion with strength control, IEEE Access 9 (2021) 42674–42687,.
[48]
M.B. Er, A novel approach for classification of speech emotions based on deep and acoustic features, IEEE Access 8 (2020),.
[49]
C. Busso, M. Bulut, C.C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J.N. Chang, S. Lee, S.S. Narayanan, IEMOCAP: interactive emotional dyadic motion capture database, Lang. Resour. Eval. 42 (4) (2008) 335–359,.
[50]
F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, B. Weiss, A database of German emotional speech, in: 9th European Conference on Speech Communication and Technology, September, 2005, pp. 1517–1520,.
[51]
Parthasarathy, S., Member, S., Busso, C., & Member, S. (2020). Semi-supervised speech emotion recognition. 28, 2697–2709.
[52]
S.M. Mustaqeem, S. Kwon, MLT-DNet: speech emotion recognition using 1D dilated CNN based on multi-learning trick approach, Expert Syst. Appl. 167 (2021),. October 2020.
[53]
Z. Zhao, Q. Li, Z. Zhang, N. Cummins, H. Wang, J. Tao, W. Schuller, B, Combining a parallel 2D CNN with a self-attention dilated residual network for CTC-based discrete speech emotion recognition, Neural Netw. 141 (2021) 52–60,.
[54]
J. Zhao, X. Mao, L. Chen, Speech emotion recognition using deep 1D & 2D CNN LSTM networks, Biomed. Signal Process. Control 47 (2019) 312–323,.
[55]
N. Patel, S. Patel, S.H. Mankad, Impact of autoencoder based compact representation on emotion detection from audio, J. Ambient Intell. Human. Comput. (2021),.
[56]
J. Ancilin, A. Milton, Improved speech emotion recognition with mel frequency magnitude coefficient, Appl. Acoust. 179 (2021),.
[57]
T. Tuncer, S. Dogan, U.R. Acharya, Automated accurate speech emotion recognition system using twine shuffle pattern and iterative neighborhood component analysis techniques, Knowl.-Based Syst. 211 (2021),.
[58]
M. Farooq, F. Hussain, N.K. Baloch, F.R. Raja, H. Yu, Y.B. Zikria, Impact of feature selection algorithm on speech emotion recognition using deep convolutional neural network, Sensors 20 (21) (2020) 1–18,.
[59]
Z. Yang, Y. Huang, Algorithm for speech emotion recognition classification based on mel-frequency cepstral coefficients and broad learning system, Evol. Intell. (2021),.
[60]
S.R. Kadiri, P. Alku, Excitation features of speech for speaker-specific emotion detection, IEEE Access 8 (2020) 60382–60391,.
[61]
R. Jahangir, Y.W. Teh, F. Hanif, G. Mujtaba, Deep learning approaches for speech emotion recognition: state of the art and research challenges, In Multimedia Tools Appl. 80 (16) (2021),. Multimedia Tools and Applications.
[62]
Niu, Y., Zou, D., Niu, Y., He, Z., & Tan, H. (2017). A breakthrough in speech emotion recognition using deep retinal convolution neural networks. ArXiv, 1–7.
[63]
Dinakaran, K., & Ashokkrishna, E.M. (2020). Efficient regional multi feature similarity measure based emotion detection system in web portal using artificial neural network. Microprocessors Microsyst., 77. 10.1016/j.micpro.2020.103112
[64]
H. Ghazouani, A genetic programming-based feature selection and fusion for facial expression recognition, Appl. Soft Comput. 103 (2021),.
[65]
Y. Ma, W. Chen, X. Ma, J. Xu, X. Huang, R. Maciejewski, A.K.H. Tung, EasySVM: a visual analysis approach for open-box support vector machines, Computat. Vis. Media 3 (2) (2017) 161–175,.
[66]
H. Jung, S. Lee, J. Yim, S. Park, J. Kim, Joint fine-tuning in deep neural networks for facial expression recognition, in: 2015 International Conference on Computer Vision, 2015, pp. 2983–2991,.
[67]
H. Yang, U. Ciftci, L. Yin, Facial expression recognition by de-expression residue learning, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 2168–2177,.
[68]
D.Y. Choi, B.C. Song, Semi-supervised learning for continuous emotion recognition based on metric learning, IEEE Access 8 (2020) 113443–113455,.
[69]
A.S.D. Devi, C.H. Satyanarayana, An efficient facial emotion recognition system using novel deep learning neural network-regression activation classifier, Multimedia Tools Appl. (2021),.
[70]
P. Lucey, J.F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, I. Matthews, The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression, in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010, 2010, pp. 94–101,. July.
[71]
M. Lyons, “Excavating AI” re-excavated: debunking a fallacious account of the Jaffe dataset, SSRN Electron. J. (2021) 1–20,.
[72]
I.J. Goodfellow, D. Erhan, P. Luc Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. Thaler, D.H. Lee, Y. Zhou, C. Ramaiah, F. Feng, R. Li, X. Wang, D. Athanasakis, J. Shawe-Taylor, M. Milakov, J. Park, Y. Bengio, Challenges in representation learning: a report on three machine learning contests, Neural Netw. 64 (2015) 59–63,. December 2017.
[73]
D.Y. Choi, B.C. Song, Semi-supervised learning for facial expression-based emotion recognition in the continuous domain, Multimedia Tools Appl. 79 (37–38) (2020) 28169–28187,.
[74]
M.K. Chowdary, T.N. Nguyen, D.J. Hemanth, Deep learning-based facial emotion recognition for human – computer interaction applications, Neural Comput. Appl. 8 (2021),.
[75]
N. Mehendale, Facial emotion recognition using convolutional neural networks (FERC), SN Appl. Sci. 2 (3) (2020) 1–8,.
[76]
N. Hajarolasvadi, E. Bashirov, H. Demirel, Video-based person-dependent and person-independent facial emotion recognition, Signal Image Video Process. (2021),.
[77]
D. Lakshmi, R. Ponnusamy, Facial emotion recognition using modified HOG and LBP features with deep stacked autoencoders, Microprocessors Microsyst. 82 (2021),. October 2020.
[78]
D. Liu, L. Chen, L. Wang, Z. Wang, A multi-modal emotion fusion classification method combined expression and speech based on attention mechanism, Multimedia Tools Appl. (2021),.
[79]
R.J.R. Kumar, M. Sundaram, N. Arumugam, Facial emotion recognition using subband selective multilevel stationary wavelet gradient transform and fuzzy support vector machine, Vis. Comput. (2020),.
[80]
P. Tzirakis, G. Trigeorgis, M.A. Nicolaou, W. Schuller, End-to-end multimodal emotion recognition using deep neural networks, IEEE J. Sel. Top. Signal Process. 11 (8) (2017) 1301–1309.
[81]
H. Zhang, Expression-EEG based collaborative multimodal emotion recognition using deep autoencoder, IEEE Access 8 (2020) 164130–164143,.
[82]
E.S. Salama, R.A. El-Khoribi, M.E. Shoman, M.A. Wahby Shalaby, A 3D-convolutional neural network framework with ensemble learning techniques for multi-modal emotion recognition, Egypt. Inform. J. (2021),. xxxx.
[83]
A. Zadeh, P.P. Liang, J. Vanbriesen, S. Poria, E. Tong, E. Cambria, M. Chen, L.P. Morency, Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph, in: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 1, 2018, pp. 2236–2246,.
[84]
O. Martin, I. Kotsia, B. Macq, I. Pitas, The eNTERFACE’ 05 Audio-Visual Emotion Database - IEEE Conference Publication, 1, 2019, pp. 2–9. https://ieeexplore.ieee.org/abstract/document/1623803.
[85]
N.H. Ho, H.J. Yang, S.H. Kim, G. Lee, Multimodal approach of speech emotion recognition using multi-level multi-head fusion attention-based recurrent neural network, IEEE Access 8 (2020) 61672–61686,.
[86]
C.P. Loizou, An automated integrated speech and face imageanalysis system for the identification of human emotions, Speech Commun. 130 (2021) 15–26,. February.
[87]
D. Roy, P. Panda, K. Roy, Tree-CNN: a hierarchical deep convolutional neural network for incremental learning, Neural Netw. 121 (2020) 148–160,.
[88]
Y. Fan, X. Lu, D. Li, Y. Liu, Video-based emotion recognition using CNN-RNN and C3D hybrid networks, in: ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016, pp. 445–450,. October 2017.
[89]
B. Islam, F. Mahmud, A. Hossain, P.B. Goala, M.S. Mia, A facial region segmentation based approach to recognize human emotion using fusion of HOG LBP features and artificial neural network, in: 4th International Conference on Electrical Engineering and Information and Communication Technology, ICEEiCT 2018, 2019, pp. 642–646,.
[90]
B. Li, D. Lima, Facial expression recognition via ResNet-50, Int. J. Cognit. Comput. Eng. 2 (2021) 57–64,. January.
[91]
Mungra, D., Agrawal, A., Sharma, P., Tanwar, S., & Obaidat, M.S. (2020). PRATIT: a CNN-based emotion recognition system using histogram equalization and data augmentation. Multimedia Tools Appl., 79(3–4), 2285–2307. 10.1007/s11042-019-08397-0
[92]
H. Becker, J. Fleureau, P. Guillotel, F. Wendling, I. Merlet, L. Albera, Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources, IEEE Trans. Affective Comput. 11 (2) (2020) 244–257,.
[93]
H. Zhang, A. Jolfaei, M. Alazab, A Face Emotion recognition method using convolutional neural network and image edge computing, IEEE Access 7 (2019) 159081–159089,.
[94]
J. Li, Z. Zhang, H. He, Hierarchical convolutional neural networks for EEG-based emotion recognition, Cognit. Comput. 10 (2) (2018) 368–380,.
[95]
Xiaowei Li, B. Hu, S. Sun, H. Cai, EEG-based mild depressive detection using feature selection methods and classifiers, Comput. Methods Programs Biomed. 136 (2016) 151–161,. November.
[96]
R. Alazrai, R. Homoud, H. Alwanni, M.I. Daoud, EEG-based emotion recognition using quadratic time-frequency distribution, Sensors 18 (8) (2018) 1–32,.
[97]
M. Abdelwahab, C. Busso, Domain adversarial for acoustic emotion recognition, IEEE/ACM Trans. Audio Speech Lang. Process. 26 (12) (2018) 2423–2435,.
[98]
M.S. Akhtar, D.S. Chauhan, D. Ghosal, S. Poria, A. Ekbal, P. Bhattacharyya, Multi-task learning for multi-modal emotion recognition and sentiment analysis, in: NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, 1, 2019, pp. 370–379,.
[99]
F. Al-shargie, T.B. Tang, N. Badruddin, M. Kiguchi, Towards multilevel mental stress assessment using SVM with ECOC: an EEG approach, Med. Biol. Eng. Comput. 56 (1) (2018) 125–136,.
[100]
F. Al-Shargie, U. Tariq, M. Alex, H. Mir, H. Al-Nashash, Emotion recognition based on fusion of local cortical activations and dynamic functional networks connectivity: an EEG study, IEEE Access 7 (2019) 143550–143562,.
[101]
D.A. AL CHANTI, A. Caplier, Deep learning for spatio-temporal modeling of dynamic spontaneous emotions, IEEE Trans. Affect. Comput. 3045(c) (2018) 1–14,.
[102]
S. Alhagry, A.A. Fahmy, R.A. El-Khoribi, Emotion Recognition based on EEG using LSTM Recurrent Neural Network, Int. J. Adv. Comput. Sci. Appl. 8 (10) (2017) 8–11,.
[103]
M. Ali, A.H. Mosa, F. Al Machot, K Kyamakya, EEG-based emotion recognition approach for e-healthcare applications, in: International Conference on Ubiquitous and Future Networks, ICUFN, 2016, pp. 946–950,. 2016-Augus.
[104]
R.S. Alkhawaldeh, DGR: gender recognition of human speech using one-dimensional conventional neural network, Sci. Program. 2019 (2019),.
[105]
A.S. Alphonse, D. Dharma, Novel directional patterns and a generalized supervised dimension reduction system (GSDRS) for facial emotion recognition, Multimedia Tools Appl. 77 (8) (2018) 9455–9488,.
[106]
M. Alsolamy, A. Fattouh, Emotion estimation from EEG signals during listening to Quran using PSD features, in: CSIT 2016: 2016 7th International Conference on Computer Science and Information Technology, 2016, pp. 3–7,.
[107]
Y. An, N. Xu, Z. Qu, Leveraging spatial-temporal convolutional features for EEG-based emotion recognition, Biomed. Signal Process. Control 69 (2021),. June.
[108]
T. Anvarjon, Mustaqeem, S. Kwon, Deep-net: a lightweight cnn-based speech emotion recognition system using deep frequency features, Sensors 20 (18) (2020) 1–16,.
[109]
K.A. Araño, P. Gloor, C. Orsenigo, C. Vercellis, When old meets new: emotion recognition from speech signals, Cognit. Comput. (2021),. October 2020.
[110]
I. Ariav, I. Cohen, An end-to-end multimodal voice activity detection using WaveNet encoder and residual networks, IEEE J. Sel. Top. Signal Process. 13 (2) (2019) 265–274,.
[111]
M. Arora, M. Kumar, AutoFER: PCA and PSO based automatic facial emotion recognition, Multimedia Tools Appl. 80 (2) (2021) 3039–3049,.
[112]
M. Aslan, CNN based efficient approach for emotion recognition, J. King Saud Univ. (2021),. xxxx.
[113]
O. Atila, A. Şengür, Attention guided 3D CNN-LSTM model for accurate speech based emotion recognition, Appl. Acoust. 182 (2021),.
[114]
J. Atkinson, D. Campos, Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers, Expert Syst. Appl. 47 (2016) 35–41,.
[115]
B.T. Atmaja, M. Akagi, Two-stage dimensional emotion recognition by fusing predictions of acoustic and text networks using SVM, Speech Commun. 126 (2021) 9–21,. November 2020.
[116]
D. Ayata, Y. Yaslan, M.E. Kamasak, Emotion recognition from multimodal physiological signals for emotion aware healthcare systems, J. Med. Biol. Eng. 40 (2) (2020) 149–157,.
[117]
A.M. Badshah, J. Ahmad, N. Rahim, S.W. Baik, Speech emotion recognition from spectrograms with deep convolutional neural network, in: 2017 International Conference on Platform Technology and Service, PlatCon 2017, 2017,. - Proceedings.
[118]
A.M. Bhatti, M. Majid, S.M. Anwar, B. Khan, Human emotion recognition and analysis in response to audio music using brain signals, Comput. Hum. Behav. 65 (2016) 267–275,.
[119]
J.D. Bodapati, N. Veeranjaneyulu, Facial emotion recognition using deep CNN based features, Int. J. Innov. Technol. Explor. Eng. 8 (7) (2019) 1928–1931.
[120]
H. Cai, Z. Qu, Z. Li, Y. Zhang, X. Hu, B. Hu, Feature-level fusion approaches based on multimodal EEG data for depression recognition, Inform. Fus. 59 (2020) 127–138,. March 2019.
[121]
W. Cao, Z. Feng, D. Zhang, Y. Huang, Facial expression recognition via a CBAM embedded network, Proc. Comput. Sci. 174 (2020) 463–477,.
[122]
W.Y. Chang, S.H. Hsu, J.H. Chien, FATAUVA-net: an integrated deep learning framework for facial attribute recognition, action unit detection, and valence-arousal estimation, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2017,. 2017-July 1963–1971.
[123]
A. Chatziagapi, G. Paraskevopoulos, D. Sgouropoulos, G. Pantazopoulos, M. Nikandrou, T. Giannakopoulos, A. Katsamanis, A. Potamianos, S. Narayanan, Data augmentation using GANs for speech emotion recognition, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2019-Septe, 2019, pp. 171–175,.
[124]
K.H. Cheah, H. Nisar, V.V. Yap, C.Y. Lee, Short-time-span EEG-based personalized emotion recognition with deep convolutional neural network, in: Proceedings of the 2019 IEEE International Conference on Signal and Image Processing Applications, ICSIPA 2019, 2019, pp. 78–83,.
[125]
J.X. Chen, P.W. Zhang, Z.J. Mao, Y.F. Huang, D.M. Jiang, Y.N. Zhang, Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks, IEEE Access 7 (2019) 44317–44328,.
[126]
M. Chen, X. He, J. Yang, H. Zhang, 3-D convolutional recurrent neural networks with attention model for speech emotion recognition, IEEE Signal Process Lett. 25 (10) (2018) 1440–1444,.
[127]
Chen, P., & Zhang, J. (2017). Performance comparison of machine learning algorithms for EEG-signal-based emotion recognition. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10613 LNCS, 208–216. 10.1007/978-3-319-68600-4_25
[128]
Q. Chen, G. Huang, A novel dual attention-based BLSTM with hybrid features in speech emotion recognition, Eng. Appl. Artif. Intell. 102 (2021),. April.
[129]
T. Chen, H. Yin, Emotion recognition based on fusion of long short-term memory networks and SVMs, Digital Signal Process. 1 (2021) 1–10,.
[130]
Chen, X., Huang, R., Li, X., Xiao, L., Zhou, M., & Zhang, L. (2021). A novel user emotional interaction design model using long and short-term memory networks and deep learning. 12(April), 1–13. 10.3389/fpsyg.2021.674853
[131]
Chernykh, V., & Prikhodko, P. (2018). Emotion recognition from speech with recurrent neural networks. ArXiv.
[132]
A. Christy, S. Vaithyasubramanian, A. Jesudoss, M.D.A. Praveena, Multimodal speech emotion recognition and classification using convolutional neural network techniques, Int. J. Speech Technol. 23 (2) (2020) 381–388,.
[133]
S. Cunningham, H. Ridley, J. Weinel, R. Picking, Supervised machine learning for audio emotion recognition: enhancing film sound design using audio features, regression models and artificial neural networks, Pers. Ubiquitous Comput. (2020),.
[134]
S. Datta, D. Sen, R. Balasubramanian, Integrating geometric and textural features for facial emotion classification using SVM frameworks, in: Proceedings of International Conference on Computer Vision and Image Processing, 78, 2017, pp. 10287–10323,.
[135]
S. Deb, S. Dandapat, Emotion classification using segmentation of vowel-like and non-vowel-like regions, IEEE Trans. Affective Comput. 10 (3) (2019) 360–373.
[136]
J. Deng, X. Xu, Z. Zhang, S. Fruhholz, B. Schuller, Universum autoencoder-based domain adaptation for speech emotion recognition, IEEE Signal Process Lett. 24 (4) (2017) 500–504,.
[137]
P. Dhankhar, N. Delhi, ResNet-50 and VGG-16 for recognizing facial emotions, Int. J. Innov. Eng. Technol. 13 (4) (2019) 126–130.
[138]
L.N. Do, H.J. Yang, H.D. Nguyen, S.H. Kim, G.S. Lee, I.S. Na, Deep neural network-based fusion model for emotion recognition using visual data, J. Supercomput. (2021),.
[139]
A. Dogan, M. Akay, P. Barua, M. Baygin, S. Dogan, T. Tuncer, A. Dogru, U. Acharya, PrimePatNet87: prime pattern and tunable q-factor wavelet transform techniques for automated accurate EEG emotion recognition, Comput. Biol. Med. 138 (2021).
[140]
G. Du, Z. Wang, B. Gao, S. Mumtaz, K.M. Abualnaja, C. Du, A convolution bidirectional long short-term memory neural network for driver emotion recognition, IEEE Trans. Intell. Transp. Syst. (2020) 1–9,.
[141]
M.B. Er, H. Çiğ, İ.B. Aydilek, A new approach to recognition of human emotions using brain signals and music stimuli, Appl. Acoust. 175 (2021),.
[142]
Y. Fang, H. Yang, X. Zhang, H. Liu, B. Tao, Multi-feature input deep forest for EEG-based emotion recognition, Front. Neurorobot. 14 (2021) 1–11,. January.
[143]
Y. Fang, R. Rong, J. Huang, Hierarchical fusion of visual and physiological signals for emotion recognition, Multidimension. Syst. Signal Process. 32 (4) (2021) 1103–1121,.
[144]
S. Farashi, R. Khosrowabadi, EEG based emotion recognition using minimum spanning tree, Phys. Eng. Sci. Med. 43 (3) (2020) 985–996,.
[145]
H.M. Fayek, M. Lech, L. Cavedon, Evaluating deep learning architectures for speech emotion recognition, Neural Netw. 92 (2017) 60–68,.
[146]
R. Fourati, B. Ammar, J. Sanchez-Medina, A.M. Alimi, Unsupervised learning in reservoir computing for EEG-based emotion recognition, IEEE Trans. Affect. Comput. 3045 (c) (2020) 1–13,.
[147]
N. Ganapathy, Y.R. Veeranki, H. Kumar, R Swaminathan, Emotion recognition using electrodermal activity signals and multiscale deep convolutional neural network, J. Med. Syst. 45 (49) (2021) 1–10,.
[148]
Qiang Gao, Y. Yang, Q. Kang, Z. Tian, Y Song, EEG-based emotion recognition with feature fusion networks, Int. J. Mach. Learn. Cybern. (2021),.
[149]
Qinquan Gao, H. Zeng, G. Li, T. Tong, Graph reasoning-based emotion recognition network, IEEE Access 9 (2021) 6488–6497,.
[150]
C. Guanghui, Z. Xiaoping, Multi-modal emotion recognition by fusing correlation features of speech-visual, IEEE Signal Process Lett. 28 (2021) 533–537,.
[151]
L. Guo, L. Wang, J. Dang, Z. Liu, H. Guan, Exploration of complementary features for speech emotion recognition based on kernel extreme learning machine, IEEE Access 7 (2019) 75798–75809,.
[152]
A. Gupta, S. Arunachalam, R. Balakrishnan, Deep self-attention network for facial emotion recognition, Proc. Comput. Sci. 171 (2019) (2020) 1527–1534,.
[153]
R. Gupta, L.K. Vishwamitra, Facial expression recognition from videos using CNN and feature aggregation, Mater. Today (2021),. xxxx.
[154]
V. Gupta, M.D. Chopda, R.B. Pachori, Cross-subject emotion recognition using flexible analytic wavelet transform from EEG signals, IEEE Sensors J. 19 (6) (2019) 2266–2274,.
[155]
A.K. Hassan, S.N. Mohammed, A novel facial emotion recognition scheme based on graph mining, Defence Technol. 16 (5) (2020) 1062–1072,.
[156]
H. He, Y. Tan, J. Ying, W. Zhang, Strengthen EEG-based emotion recognition using firefly integrated optimization algorithm, Appl. Soft Comput. J. 94 (2020),.
[157]
X. He, W. Zhang, Emotion recognition by assisted learning with convolutional neural networks, Neurocomputing 291 (2018) 187–194,.
[158]
F. Hernández-Luquin, H.J. Escalante, Multi-branch deep radial basis function networks for facial emotion recognition, Neural Comput. Appl. (2021),.
[159]
M.S. Hossain, G. Muhammad, Audio-visual emotion recognition using multi-directional regression and Ridgelet transform, Journal on Multimodal User Interfaces 10 (4) (2016) 325–333,.
[160]
M.S. Hossain, G. Muhammad, Emotion recognition using deep learning approach from audio–visual emotional big data, Inform. Fus. 49 (2019) 69–78.
[161]
M. Hu, H. Wang, X. Wang, J. Yang, R. Wang, Video facial emotion recognition based on local enhanced motion history image and CNN-CTSLSTM networks, J. Visual Commun. Image Represent. 59 (2019) 176–185.
[162]
R.H. Huan, J. Shu, S.L. Bao, R.H. Liang, P. Chen, K.K. Chi, Video multimodal emotion recognition based on Bi-GRU and attention fusion, Multimedia Tools Appl. 80 (6) (2021) 8213–8240.
[163]
X. Huang, J. Kortelainen, G. Zhao, X. Li, A. Moilanen, T. Seppänen, M. Pietikäinen, Multi-modal emotion analysis from facial expressions and electroencephalogram, Comput. Vis. Image Underst. 147 (2016) 114–124.
[164]
X. Huang, S.J. Wang, X. Liu, G. Zhao, X. Feng, M. Pietikainen, Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition, IEEE Trans. Affect. Comput. 10 (1) (2019) 32–47.
[165]
X. Huang, G. Zhao, X. Hong, W. Zheng, M. Pietikäinen, Spontaneous facial micro-expression analysis using spatiotemporal completed local quantized patterns, Neurocomputing 175 (Part A) (2016) 564–578.
[166]
M.G. Huddar, S.S. Sannakki, V.S. Rajpurohit, Attention-based multimodal contextual fusion for sentiment and emotion classification using bidirectional LSTM, Multimedia Tools Appl. 80 (9) (2021) 13059–13076.
[167]
M.R. Islam, M.M. Islam, M.M. Rahman, C. Mondal, S.K. Singha, M. Ahmad, A. Awal, M.S. Islam, M.A. Moni, EEG channel correlation based model for emotion recognition, Comput. Biol. Med. 136 (2021).
[168]
D.K. Jain, P. Shamsolmoali, P. Sehdev, Extended deep neural network for facial emotion recognition, Pattern Recognit. Lett. 120 (2019) 69–74.
[169]
A. Jaiswal, A. Krishnama Raju, S. Deb, Facial emotion detection using deep learning, in: 2020 International Conference for Emerging Technology, INCET 2020, 2020, pp. 1–5.
[170]
A. Jalilifard, E.B. Pizzolato, M.K. Islam, Emotion classification using single-channel scalp-EEG recording, in: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, 2016, pp. 845–849.
[171]
M. Javidan, M. Yazdchi, Z. Baharlouei, A. Mahnam, Feature and channel selection for designing a regression-based continuous-variable emotion recognition system with two EEG channels, Biomed. Signal Process. Control 70 (2021).
[172]
J. Jayalekshmi, T. Mathew, Facial expression recognition and emotion classification system for sentiment analysis, in: 2017 International Conference on Networks & Advances in Computational Technologies (NetACT), IEEE, 2017.
[173]
N. Ji, L. Ma, H. Dong, X. Zhang, EEG signals feature extraction based on DWT and EMD combined with approximate entropy, Brain Sci. 9 (8) (2019).
[174]
J. Jia, S. Zhou, Y. Yin, B. Wu, W. Chen, F. Meng, Y. Wang, Inferring emotions from large-scale internet voice data, IEEE Trans. Multimedia 21 (7) (2019) 1853–1866.
[175]
A. Joseph, P. Geetha, Facial emotion detection using modified eyemap–mouthmap algorithm on an enhanced image and classification with tensorflow, Vis. Comput. 36 (3) (2020) 529–539.
[176]
V.M. Joshi, R.B. Ghongade, IDEA: intellect database for emotion analysis using EEG signal, J. King Saud Univ. (2020).
[177]
P. Kaviya, T. Arumugaprakash, Group facial emotion analysis system using convolutional neural network, in: Proceedings of the Fourth International Conference on Trends in Electronics and Informatics (ICOEI 2020), IEEE, 2020, pp. 643–647.
[178]
M. Kheirkhah, S. Brodoehl, L. Leistritz, T. Götz, P. Baumbach, R. Huonker, O.W. Witte, C.M. Klingner, Automated emotion classification in the early stages of cortical processing: an MEG study, Artif. Intell. Med. 115 (2021).
[179]
D. Kollias, S.P. Zafeiriou, Exploiting multi-CNN features in CNN-RNN based dimensional emotion recognition on the OMG in-the-wild dataset, IEEE Trans. Affect. Comput. (2020) 1–12.
[180]
W. Kong, X. Song, J. Sun, Emotion recognition based on sparse representation of phase synchronization features, Multimedia Tools Appl. 80 (14) (2021) 21203–21217.
[181]
R. Kosti, J.M. Alvarez, A. Recasens, A. Lapedriza, Context based emotion recognition using EMOTIC dataset, IEEE Trans. Pattern Anal. Mach. Intell. 42 (11) (2020) 2755–2766.
[182]
P.T. Krishnan, A.N. Joseph Raj, V. Rajangam, Emotion classification from speech signal based on empirical mode decomposition and non-linear features, Complex Intell. Syst. 7 (4) (2021) 1919–1934.
[183]
N. Kumar, K. Khaund, S.M. Hazarika, Bispectral analysis of EEG for emotion recognition, Proc. Comput. Sci. 84 (2016) 31–35.
[184]
U. Kumaran, S. Radha Rammohan, S.M. Nagarajan, A. Prathik, Fusion of mel and gammatone frequency cepstral coefficients for speech emotion recognition using deep C-RNN, Int. J. Speech Technol. 24 (2) (2021) 303–314.
[185]
S. Kuruvayil, S. Palaniswamy, Emotion recognition from facial images with simultaneous occlusion, pose and illumination variations using meta-learning, J. King Saud Univ. (2021).
[186]
Z. Lan, G.R. Muller-Putz, L. Wang, Y. Liu, O. Sourina, R. Scherer, Using support vector regression to estimate valence level from EEG, in: 2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016, 2016, pp. 2558–2563.
[187]
Z. Lan, O. Sourina, L. Wang, Y. Liu, Real-time EEG-based emotion monitoring using stable features, Vis. Comput. 32 (3) (2016) 347–358.
[188]
I. Lasri, A.R. Solh, M.E. Belkacemi, Facial emotion recognition of students using convolutional neural network, in: 2019 Third International Conference on Intelligent Computing in Data Sciences (ICDS), 2019, pp. 1–6.
[189]
S. Latif, R. Rana, S. Younis, J. Qadir, J. Epps, Transfer learning for improving speech emotion classification accuracy, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 257–261.
[190]
D. Le, Z. Aldeneh, E.M. Provost, Discretized continuous speech emotion recognition with multi-task deep recurrent neural network, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2017, pp. 1108–1112.
[191]
M. Lech, M. Stolar, C. Best, R. Bolia, Real-time speech emotion recognition using a pre-trained image classification network: effects of bandwidth reduction and companding, Front. Comput. Sci. 2 (2020) 1–14.
[192]
C. Lee, K. Song, J. Jeong, W. Choi, Convolutional attention networks for multimodal emotion recognition from speech and text data, in: Proceedings of the Grand Challenge and Workshop on Human Multimodal Language, 2018, pp. 28–34.
[193]
S. Lee, D.K. Han, H. Ko, Fusion-ConvBERT: parallel convolution and BERT fusion for speech emotion recognition, Sensors 20 (22) (2020) 1–19.
[194]
D. Li, J. Liu, Z. Yang, L. Sun, Z. Wang, Speech emotion recognition using recurrent neural networks with directional self-attention, Expert Syst. Appl. 173 (2021).
[195]
D. Li, Y. Zhou, Z. Wang, D. Gao, Exploiting the potentialities of features for speech emotion recognition, Inform. Sci. 548 (2021) 328–343.
[196]
H. Li, H. Xu, Deep reinforcement learning for robust emotional classification in facial expression recognition, Knowl.-Based Syst. 204 (2020).
[197]
S. Li, X. Xing, W. Fan, B. Cai, P. Fordson, X. Xu, Spatiotemporal and frequential cascaded attention networks for speech emotion recognition, Neurocomputing 448 (2021) 238–248.
[198]
Xin Li, X.Q. Sun, X.Y. Qi, X.F. Sun, Relevance vector machine based EEG emotion recognition, in: 2016 6th International Conference on Instrumentation and Measurement, Computer, Communication and Control, IMCCC 2016, 2016, pp. 293–297.
[199]
Yang Li, B. Fu, F. Li, G. Shi, W. Zheng, A novel transferability attention neural network model for EEG emotion recognition, Neurocomputing 447 (2021) 92–101.
[200]
Y. Li, T. Zhao, T. Kawahara, Improved end-to-end speech emotion recognition using self attention mechanism and multitask learning, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2019, pp. 2803–2807.
[201]
Z. Li, G. Zhang, J. Dang, L. Wang, J. Wei, Multi-modal emotion recognition based on deep learning of EEG and audio signals, in: Proceedings of the International Joint Conference on Neural Networks, 2021, pp. 1–6.
[202]
D.Y. Liliana, Emotion recognition from facial expression using deep convolutional neural network, J. Phys. Conf. Ser. 1193 (1) (2019).
[203]
W. Lim, D. Jang, T. Lee, Speech emotion recognition using convolutional recurrent neural networks and spectrograms, in: 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2016, pp. 1–4.
[204]
D. Liu, L. Chen, Z. Wang, G. Diao, Speech expression multimodal emotion recognition based on deep belief network, J. Grid Comput. (2021).
[205]
X. Liu, X. Cheng, K. Lee, GA-SVM-based facial emotion recognition, IEEE Sensors J. 21 (10) (2021) 11532–11542.
[206]
Y.J. Liu, M. Yu, G. Zhao, J. Song, Y. Ge, Y. Shi, Real-time movie-induced discrete emotion recognition from EEG signals, IEEE Trans. Affect. Comput. 9 (4) (2018) 550–562.
[207]
Y.J. Liu, J.K. Zhang, W.J. Yan, S.J. Wang, G. Zhao, X. Fu, A main directional mean optical flow feature for spontaneous micro-expression recognition, IEEE Trans. Affect. Comput. 7 (4) (2016) 299–310.
[208]
Y. Liu, X. Yuan, X. Gong, Z. Xie, F. Fang, Z. Luo, Conditional convolution neural network enhanced random forest for facial expression recognition, Pattern Recognit. 84 (2018) 251–261.
[209]
Z.-T. Liu, Q. Xie, M. Wu, W.-H. Cao, D.-Y. Li, S.-H. Li, Electroencephalogram emotion recognition based on empirical mode decomposition and optimal feature selection, IEEE Trans. Cogn. Dev. Syst. 11 (4) (2019) 517–526.
[210]
I. Livieris, E. Pintelas, P. Pintelas, Gender recognition by voice using an improved self-labeled algorithm, Mach. Learn. Knowl. Extract. 1 (1) (2019) 492–503.
[211]
N. Lopes, A. Silva, S.R. Khanal, A. Reis, J. Barroso, V. Filipe, J. Sampaio, Facial emotion recognition in the elderly using a SVM classifier, in: 2018 - 2nd International Conference on Technology and Innovation in Sports, Health and Wellbeing (TISHW), 2018.
[212]
R. Lotfian, C. Busso, Lexical dependent emotion detection using synthetic speech reference, IEEE Access 7 (2019) 22071–22085.
[213]
M.J. Lyons, M. Kamachi, J. Gyoba, Coding facial expressions with Gabor wavelets (IVC special issue), arXiv preprint (2020) 1–13.
[214]
M. Murugappan, A. Mutawa, Facial geometric feature extraction based emotional expression classification using machine learning algorithms, PLoS ONE 16 (2) (2021).
[215]
Y. Ma, Y. Hao, M. Chen, J. Chen, P. Lu, A. Košir, Audio-visual emotion fusion (AVEF): a deep efficient weighted approach, Inform. Fus. 46 (2019) 184–192.
[216]
D. Maheshwari, S.K. Ghosh, R.K. Tripathy, M. Sharma, U.R. Acharya, Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals, Comput. Biol. Med. (2021).
[217]
Q. Mao, G. Xu, W. Xue, J. Gou, Y. Zhan, Learning emotion-discriminative and domain-invariant features for domain adaptation in speech emotion recognition, Speech Commun. 93 (2017) 1–10.
[218]
Q. Mao, W. Xue, Q. Rao, F. Zhang, Y. Zhan, Domain adaptation for speech emotion recognition by sharing priors between related source and target classes, in: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2016, pp. 2608–2612.
[219]
V. Maruthapillai, M. Murugappan, Optimal geometrical set for automated marker placement to virtualized real-time facial emotions, PLoS ONE 11 (2) (2016) 1–18.
[220]
R.M. Mehmood, H.J. Lee, A novel feature extraction method based on late positive potential for emotion recognition in human brain signal patterns, Comput. Electr. Eng. 53 (2016) 444–457.
[221]
N. Mehta, S. Jadhav, Facial emotion recognition using log Gabor filter and PCA, in: 2nd International Conference on Computing, Communication, Control and Automation, ICCUBEA 2016, 2017.
[222]
A. Mert, A. Akan, Emotion recognition from EEG signals by using multivariate empirical mode decomposition, Pattern Anal. Appl. 21 (1) (2018) 81–89.
[223]
S. Mirsamadi, E. Barsoum, C. Zhang, Automatic speech emotion recognition using recurrent neural networks with local attention, in: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2017, pp. 2227–2231.
[224]
Z. Mohammadi, J. Frounchi, M. Amiri, Wavelet-based emotion recognition system using EEG signal, Neural Comput. Appl. 28 (8) (2017) 1985–1990.
[225]
G. Muhammad, M.S. Hossain, Emotion recognition for cognitive edge computing using deep learning, IEEE Internet Things J. (2021).
[226]
R. Munoz, R. Olivares, C. Taramasco, R. Villarroel, R. Soto, T.S. Barcelos, E. Merino, M.F. Alonso-Sánchez, Using black hole algorithm to improve EEG-based emotion recognition, Comput. Intell. Neurosci. 2018 (2018).
[227]
M. Murugappan, V. Maruthapillai, W. Khariunizam, A.M. Mutawa, S. Sruthi, C.W. Yean, Virtual markers based facial emotion recognition using ELM and PNN classifiers, in: 2020 16th IEEE International Colloquium on Signal Processing and Its Applications, CSPA 2020, 2020, pp. 261–265.
[228]
M. Murugappan, A.M. Mutawa, S. Sruthi, A. Hassouneh, A. Abdulsalam, S. Jerritta, R. Ranjana, Facial expression classification using KNN and decision tree classifiers, in: 4th International Conference on Computer, Communication and Signal Processing, ICCCSP 2020, 2020, pp. 15–17.
[229]
M. Murugappan, B.S. Zheng, W. Khairunizam, Recurrent quantification analysis-based emotion classification in stroke using electroencephalogram signals, Arab. J. Sci. Eng. 46 (10) (2021) 9573–9588.
[230]
Mustaqeem, S. Kwon, CLSTM: deep feature-based speech emotion recognition using the hierarchical ConvLSTM network, Mathematics 8 (12) (2020) 1–19.
[231]
Mustaqeem, M. Sajjad, S. Kwon, Clustering-based speech emotion recognition by incorporating learned features and deep BiLSTM, IEEE Access 8 (2020) 79861–79875.
[232]
B. Nakisa, M.N. Rastgoo, A. Rakotonirainy, F. Maire, V. Chandran, Long short term memory hyperparameter optimization for a neural network based emotion recognition framework, IEEE Access 6 (2018) 49325–49338.
[233]
K.J. Noh, C.Y. Jeong, S. Jiyoun, C. Seungeun, K. Gague, L. Jeong Mook, J. Hyuntae, Multi-path and group-loss-based network for speech emotion recognition, Sensors (2021) 1–18.
[234]
F. Noroozi, M. Marjanovic, A. Njegus, S. Escalera, G. Anbarjafari, Audio-visual emotion recognition in video clips, IEEE Trans. Affect. Comput. 10 (1) (2019) 60–75.
[235]
E.N.N. Ocquaye, Q. Mao, H. Song, G. Xu, Y. Xue, Dual exclusive attentive transfer for unsupervised deep convolutional domain adaptation in speech emotion recognition, IEEE Access 7 (2019) 93847–93857.
[236]
J. Oliveira, I. Praca, On the usage of pre-trained speech recognition deep layers to detect emotions, IEEE Access 9 (2021) 9699–9705.
[237]
B. Pan, K. Hirota, Z. Jia, L. Zhao, X. Jin, Y. Dai, Multimodal emotion recognition based on feature selection and extreme learning machine in video clips, J. Ambient Intell. Human. Comput. (2021).
[238]
C. Pan, C. Shi, H. Mu, J. Li, X. Gao, EEG-based emotion recognition using logistic regression with Gaussian kernel and Laplacian prior and investigation of critical frequency bands, Appl. Sci. 10 (5) (2020).
[239]
X. Pan, W. Guo, X. Guo, W. Li, J. Xu, J. Wu, Deep temporal-spatial aggregation for video-based facial expression recognition, Symmetry 11 (1) (2019).
[240]
Y.R. Pandeya, J. Lee, Deep learning-based late fusion of multimodal information for emotion classification of music video, Multimedia Tools Appl. 80 (2) (2021) 2887–2905.
[241]
R. Pathar, A. Adivarekar, A. Mishra, A. Deshmukh, Human emotion recognition using convolutional neural network in real time, in: Proceedings of 1st International Conference on Innovations in Information and Communication Technology, ICIICT 2019, 2019.
[242]
M.D. Pawar, R.D. Kokate, Convolution neural network based automatic speech emotion recognition using mel-frequency cepstrum coefficients, Multimedia Tools Appl. (2021).
[243]
Z. Peng, J. Dang, M. Unoki, M. Akagi, Multi-resolution modulation-filtered cochleagram feature for LSTM-based dimensional emotion recognition from speech, Neural Netw. 140 (2021) 261–273.
[244]
Z. Peng, X. Li, Z. Zhu, M. Unoki, J. Dang, M. Akagi, Speech emotion recognition using 3D convolutions and attention-based sliding recurrent networks with auditory front-ends, IEEE Access 8 (2020) 16560–16572.
[245]
A. Pise, H. Vadapalli, I. Sanders, Facial emotion recognition using temporal relational network: an application to E-learning, Multimedia Tools Appl. (2020).
[246]
D.A. Pitaloka, A. Wulandari, T. Basaruddin, D.Y. Liliana, Enhancing CNN with preprocessing stage in automatic emotion recognition, Proc. Comput. Sci. 116 (2017) 523–529.
[247]
G. Pons, D. Masip, Multitask, multilabel, and multidomain learning with convolutional networks for emotion recognition (2020) 1–8.
[248]
E. Pranav, S. Kamal, S.C. Chandran, M.H. Supriya, Facial emotion recognition using deep convolutional neural network, in: 6th International Conference on Advanced Computing & Communication Systems (ICACCS), 2020, pp. 317–320.
[249]
C. Qing, R. Qiao, X. Xu, Y. Cheng, Interpretable emotion recognition using EEG signals, IEEE Access 7 (2019) 94160–94170.
[250]
A. Raheel, M. Majid, S.M. Anwar, DEAR-MULSEMEDIA: dataset for emotion analysis and recognition in response to multiple sensorial media, Inform. Fus. 65 (2021) 37–49.
[251]
M.A. Rahman, A. Anjum, M.M.H. Milu, F. Khanam, M.S. Uddin, M.N. Mollah, Emotion recognition from EEG-based relative power spectral topography using convolutional neural network, Array 11 (2021).
[252]
A.L. Ramos, B.G. Dadiz, A.B.G. Santos, Classifying emotion based on facial expression analysis using Gabor filter: a basis for adaptive effective teaching strategy, Springer, Singapore, 2020.
[253]
N.I.M. Razi, M. Othman, H. Yaacob, EEG-based emotion recognition in the investment activities, in: 6th International Conference on Information and Communication Technology for the Muslim World, ICT4M 2016, 2016, pp. 325–329.
[254]
M. Ren, W. Nie, A. Liu, Y. Su, Multi-modal correlated network for emotion recognition in speech, Visual Inform. 3 (3) (2019) 150–155.
[255]
M. Rescigno, M. Spezialetti, S. Rossi, Personalized models for facial emotion recognition through transfer learning, Multimedia Tools Appl. 79 (47–48) (2020) 35811–35828.
[256]
Y. Said, M. Barr, Human emotion recognition based on facial expressions via deep learning on high-resolution images, Multimedia Tools Appl. (2021).
[257]
A. Sakalle, P. Tomar, H. Bhardwaj, D. Acharya, A. Bhardwaj, A LSTM based deep learning network for recognizing emotions using wireless brainwave driven system, Expert Syst. Appl. 173 (2021).
[258]
M. Sarma, P. Ghahremani, D. Povey, N.K. Goel, K.K. Sarma, N. Dehak, Emotion identification from raw speech signals using DNNs, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 3097–3101.
[259]
L. Schoneveld, A. Othmani, H. Abdelkawy, Leveraging recent advances in deep learning for audio-visual emotion recognition, Pattern Recognit. Lett. 146 (2021) 1–7.
[260]
A. Sepas-Moghaddam, A. Etemad, F. Pereira, P.L. Correia, Facial emotion recognition using light field images with deep attention-based bidirectional LSTM, in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, pp. 3367–3371.
[261]
H. Shahabi, S. Moghimi, Toward automatic detection of brain responses to emotional music through analysis of EEG effective connectivity, Comput. Hum. Behav. 58 (2016) 231–239.
[262]
F. Shen, Y. Peng, W. Kong, G. Dai, Multi-scale frequency bands ensemble learning for EEG-based emotion recognition, Sensors 21 (4) (2021) 1–20.
[263]
M.F.H. Siddiqui, A.Y. Javaid, A multimodal facial emotion recognition framework through the fusion of speech with visible and infrared images, Multimodal Technol. Interact. 4 (3) (2020) 1–21.
[264]
P. Singh, R. Srivastava, K.P.S. Rana, V. Kumar, A multimodal hierarchical approach to speech emotion recognition from audio and text, Knowl.-Based Syst. 229 (2021).
[265]
R. Singh, H. Puri, N. Aggarwal, V. Gupta, An efficient language-independent acoustic emotion classification system, Arab. J. Sci. Eng. 45 (4) (2020) 3111–3121.
[266]
T. Song, W. Zheng, C. Lu, Y. Zong, X. Zhang, Z. Cui, MPED: a multi-modal physiological emotion database for discrete emotion recognition, IEEE Access 7 (2019) 12177–12191.
[267]
A. Subasi, T. Tuncer, S. Dogan, D. Tanko, U. Sakoglu, EEG-based emotion recognition using tunable Q wavelet transform and rotation forest ensemble classifier, Biomed. Signal Process. Control 68 (2021).
[268]
G. Subramanian, N. Cholendiran, K. Prathyusha, N. Balasubramanain, J. Aravinth, Multimodal emotion recognition using different fusion techniques, in: Proceedings of 2021 IEEE 7th International Conference on Bio Signals, Images and Instrumentation (ICBSII 2021), 2021.
[269]
X. Sun, P. Xia, F. Ren, Multi-attention based deep neural network with hybrid features for dynamic sequential facial expression recognition, Neurocomputing (2020).
[270]
C. Tan, G. Ceballos, N. Kasabov, N.P. Subramaniyam, Fusionsense: emotion classification using feature fusion of multimodal data and deep learning in a brain-inspired spiking neural network, Sensors 20 (18) (2020) 1–27.
[271]
Y. Tan, Z. Sun, F. Duan, J. Solé-Casals, C.F. Caiafa, A multimodal emotion recognition method based on facial expressions and electroencephalography, Biomed. Signal Process. Control 70 (2021).
[272]
D. Tang, J. Zeng, M. Li, An end-to-end deep learning framework with speech emotion recognition of atypical individuals, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 162–166.
[273]
S. Taran, V. Bajaj, Emotion recognition from single-channel EEG signals using a two-stage correlation and instantaneous frequency-based filtering method, Comput. Methods Programs Biomed. 173 (2019) 157–165.
[274]
N. Thammasan, K. Moriyama, K.-i. Fukui, M. Numao, Familiarity effects in EEG-based emotion recognition, Brain Inform. 4 (1) (2017) 39–50.
[275]
S. Thuseethan, S. Rajasegarar, J. Yearwood, Emotion intensity estimation from video frames using deep hybrid convolutional neural networks, in: Proceedings of the International Joint Conference on Neural Networks, 2019, pp. 1–10.
[276]
U. Tiwari, M. Soni, R. Chakraborty, A. Panda, S.K. Kopparapu, Multi-conditioning and data augmentation using generative noise model for speech emotion recognition in noisy conditions, in: 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 7194–7198.
[277]
A. Topic, M. Russo, Emotion recognition based on EEG feature maps through deep learning network, Eng. Sci. Technol. Int. J. (2021).
[278]
S. Tripathi, S. Acharya, R.D. Sharma, S. Mittal, S. Bhattacharya, Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset, in: Proceedings of the Twenty-Ninth AAAI Conference on Innovative Applications (IAAI-17), 2017, pp. 4746–4752.
[279]
H. Ullah, M. Uzair, A. Mahmood, M. Ullah, S.D. Khan, F.A. Cheikh, Internal emotion classification using EEG signal with sparse discriminative ensemble, IEEE Access 7 (2019) 40144–40153.
[280]
Y. Velchev, S. Radeva, S. Sokolov, D. Radev, Automated estimation of human emotion from EEG, in: 2016 Digital Media Industry & Academic Forum (DMIAF), 2016, pp. 40–42.
[281]
A. Verma, P. Singh, J.S.R. Alex, Modified convolutional neural network architecture analysis for facial emotion recognition, in: 2019 International Conference on Systems, Signals and Image Processing (IWSSIP), 2019, pp. 169–173. https://ieeexplore.ieee.org/abstract/document/8787215.
[282]
D. Verma, D. Mukhopadhyay, Age driven automatic speech emotion recognition system, in: IEEE International Conference on Computing, Communication and Automation, ICCCA 2016, 2016, pp. 1005–1010.
[283]
G. Verma, H. Verma, Hybrid-deep learning model for emotion recognition using facial expressions, Rev. Socionetw. Strat. 14 (2) (2020) 171–180.
[284]
M. Wang, Z. Huang, Y. Li, L. Dong, H. Pan, Maximum weight multi-modal information fusion algorithm of electroencephalographs and face images for emotion recognition, Comput. Electr. Eng. 94 (2021).
[285]
S. Wang, L. Hao, Q. Ji, Knowledge-augmented multimodal deep regression Bayesian networks for emotion video tagging, IEEE Trans. Multimedia 22 (4) (2020) 1084–1097.
[286]
X.H. Wang, T. Zhang, X.M. Xu, L. Chen, X.F. Xing, C.L.P. Chen, EEG emotion recognition using dynamical graph convolutional neural networks and broad learning system, in: 2018 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2018, 2018, pp. 1240–1244.
[287]
Y. Wang, Z. Huang, B. McCane, P. Neo, EmotioNet: a 3-D convolutional neural network for EEG-based emotion recognition, in: Proceedings of the International Joint Conference on Neural Networks, 2018.
[288]
Z. Wang, T. Gu, Y. Zhu, D. Li, H. Yang, W. Du, FLDNet: frame level distilling neural network for EEG emotion recognition, IEEE J. Biomed. Health Inform. (2021) 1–12.
[289]
Z.-Q. Wang, I. Tashev, Learning utterance-level representations for speech emotion and age/gender recognition using deep neural networks, in: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 5150–5154.
[290]
Zhongmin Wang, X. Zhou, W. Wang, C. Liang, Emotion recognition using multimodal deep learning in multiple psychophysiological signals and video, Int. J. Mach. Learn. Cybern. 11 (4) (2020) 923–934.
[291]
G. Wen, H. Li, J. Huang, D. Li, E. Xun, Random deep belief networks for recognizing emotions from speech signals, Comput. Intell. Neurosci. 2017 (2017).
[292]
H. Wen, S. You, Y. Fu, Cross-modal dynamic convolution for multi-modal emotion recognition, J. Vis. Commun. Image Represent. 78 (2021).
[293]
L. Wijayasingha, J.A. Stankovic, Robustness to noise for speech emotion classification using CNNs and attention mechanisms, Smart Health 19 (2021).
[294]
T. Wilaiprasitporn, A. Ditthapron, K. Matchaparn, T. Tongbuasirilai, N. Banluesombatkul, E. Chuangsuwanich, Affective EEG-based person identification using the deep learning approach, IEEE Trans. Cognit. Dev. Syst. 12 (3) (2020) 486–496.
[295]
J. Williams, S. Kleinegesse, R. Comanescu, O. Radu, Recognizing emotions in video using multimodal DNN feature fusion, in: Proceedings of the Grand Challenge and Workshop on Human Multimodal Language, 2018, pp. 11–19.
[296]
X. Xia, D. Jiang, H. Sahli, Learning salient segments for speech emotion recognition using attentive temporal pooling, IEEE Access 8 (2020) 151740–151752.
[297]
Y. Xie, R. Liang, Z. Liang, C. Huang, C. Zou, B. Schuller, Speech emotion classification using attention-based LSTM, IEEE/ACM Trans. Audio Speech Lang. Process. 27 (11) (2019) 1675–1685.
[298]
B. Xing, H. Zhang, K. Zhang, L. Zhang, X. Wu, X. Shi, S. Yu, S. Zhang, Exploiting EEG signals and audiovisual feature fusion for video emotion recognition, IEEE Access 7 (2019) 59844–59861.
[299]
X. Xing, Z. Li, T. Xu, L. Shu, B. Hu, X. Xu, SAE+LSTM: a new framework for emotion recognition from multi-channel EEG, Front. Neurorobot. 13 (2019) 1–14.
[300]
G. Xu, W. Li, J. Liu, A social emotion classification approach using multi-model fusion, Future Gen. Comput. Syst. 102 (2020) 347–356.
[301]
S.P. Yadav, Emotion recognition model based on facial expressions, Multimedia Tools Appl. (2021).
[302]
M. Yanagimoto, C. Sugimoto, Recognition of persisting emotional valence from EEG using convolutional neural networks, in: 2016 IEEE 9th International Workshop on Computational Intelligence and Applications, IWCIA 2016, 2016, pp. 27–32.
[303]
S. Yang, Z. Yin, Y. Wang, W. Zhang, Y. Wang, J. Zhang, Assessing cognitive mental workload via EEG signals and an ensemble deep learning classifier based on denoising autoencoders, Comput. Biol. Med. 109 (2019) 159–170.
[304]
K. Yano, T. Suyama, Fixed low-rank EEG spatial filter estimation for emotion recognition induced by movies, in: PRNI 2016 - 6th International Workshop on Pattern Recognition in Neuroimaging, 2016, pp. 3–6.
[305]
P. Yenigalla, A. Kumar, S. Tripathi, C. Singh, S. Kar, J. Vepa, Speech emotion recognition using spectrogram & phoneme embedding, in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 2018, pp. 3688–3692.
[306]
S. Yildirim, Y. Kaya, F. Kılıç, A modified feature selection method based on metaheuristic algorithms for speech emotion recognition, Appl. Acoust. 173 (2021).
[307]
Z. Yin, L. Liu, L. Liu, J. Zhang, Y. Wang, Dynamical recursive feature elimination technique for neurophysiological signal-based emotion recognition, Cognit. Technol. Work 19 (4) (2017) 667–685.
[308]
Z. Yin, Y. Wang, L. Liu, W. Zhang, J. Zhang, Cross-subject EEG feature selection for emotion recognition using transfer recursive feature elimination, Front. Neurorobot. 11 (2017) 1–16.
[309]
Z. Yin, J. Zhang, Cross-session classification of mental workload levels using EEG and an adaptive deep learning model, Biomed. Signal Process. Control 33 (2017) 30–47.
[310]
M.M.T. Zadeh, M. Imani, B. Majidi, Fast facial emotion recognition using convolutional neural networks and Gabor filters, in: 2019 IEEE 5th Conference on Knowledge Based Engineering and Innovation, KBEI 2019, 2019, pp. 577–581.
[311]
A. Zhang, L. Su, Y. Zhang, Y. Fu, L. Wu, S. Liang, EEG data augmentation for emotion recognition with a multiple generator conditional Wasserstein GAN, Complex Intell. Syst. (2021).
[312]
Fan Zhang, H. Meng, M. Li, Emotion extraction and recognition from music, in: 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, ICNC-FSKD 2016, 2016, pp. 1728–1733.
[313]
F. Zhang, Q. Mao, X. Shen, Y. Zhan, M. Dong, Spatially coherent feature learning for pose-invariant facial expression recognition, ACM Trans. Multimed. Comput. Commun. Appl. 14 (1s) (2018).
[314]
J. Zhang, M. Chen, S. Hu, Y. Cao, R. Kozma, PNN for EEG-based emotion recognition, in: 2016 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016, 2016, pp. 2319–2323.
[315]
J. Zhang, P. Chen, S. Nichele, A. Yazidi, Emotion recognition using time-frequency analysis of EEG signals and machine learning, in: IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 404–409.
[316]
S. Zhang, A. Chen, W. Guo, Y. Cui, X. Zhao, L. Liu, Learning deep binaural representations with deep convolutional neural networks for spontaneous speech emotion recognition, IEEE Access 8 (2020) 23496–23505.
[317]
S. Zhang, X. Tao, Y. Chuang, X. Zhao, Learning deep multimodal affective features for spontaneous speech emotion recognition, Speech Commun. 127 (2021) 73–81.
[318]
S. Zhang, S. Zhang, T. Huang, W. Gao, Speech emotion recognition using deep convolutional neural network and discriminant temporal pyramid matching, IEEE Trans. Multimedia 20 (6) (2018) 1576–1590.
[319]
S. Zhang, S. Zhang, T. Huang, W. Gao, Q. Tian, Learning affective features with a hybrid deep model for audio-visual emotion recognition, IEEE Trans. Circ. Syst. Video Technol. 28 (10) (2018) 3030–3043.
[320]
W. Zhang, P. Song, Transfer sparse discriminant subspace learning for cross-corpus speech emotion recognition, IEEE/ACM Trans. Audio Speech Lang. Process. 28 (2020) 307–318.
[321]
Y. Zhang, Y. Liu, F. Weninger, B. Schuller, Multi-task deep neural network with shared hidden layers: breaking down the wall between emotion representations, in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 4990–4994.
[322]
Z. Zhang, C. Lai, H. Liu, Y.F. Li, Infrared facial expression recognition via Gaussian-based label distribution learning in the dark illumination environment for human emotion detection, Neurocomputing 409 (2020) 341–350.
[323]
J. Zhao, X. Mao, L. Chen, Learning deep features to recognise speech emotion using merged deep CNN, IET Signal Proc. 12 (6) (2018) 713–721.
[324]
Y. Zhao, X. Jin, X. Hu, Recurrent convolutional neural network for speech processing, in: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 5300–5304.
[325]
Z. Zhao, K. Wang, Z. Bao, Z. Zhang, N. Cummins, S. Sun, H. Wang, J. Tao, B.W. Schuller, Self-attention transfer networks for speech emotion recognition, Virtual Real. Intell. Hardw. 3 (1) (2021) 43–54.
[326]
W. Zheng, Multichannel EEG-based emotion recognition via group sparse canonical correlation analysis, IEEE Trans. Cognit. Dev. Syst. 9 (3) (2017) 281–290.
[327]
F. Zhou, S. Kong, C.C. Fowlkes, T. Chen, B. Lei, Fine-grained facial expression analysis using dimensional emotion model, Neurocomputing 392 (2020) 38–49.
[328]
L. Zhu, L. Chen, D. Zhao, J. Zhou, W. Zhang, Emotion recognition from Chinese speech for smart affective services using a combination of SVM and DBN, Sensors 17 (7) (2017).
[329]
N. Zhuang, Y. Zeng, K. Yang, C. Zhang, L. Tong, B. Yan, Investigating patterns for self-induced emotion recognition from EEG signals, Sensors 18 (3) (2018) 1–22.
[330]
W.K. Ngai, H. Xie, D. Zou, K.-L. Chou, Emotion recognition based on convolutional neural networks and heterogeneous bio-signal data sources, Inform. Fus. 77 (2022) 107–117.

    Published In

    cover image Computer Methods and Programs in Biomedicine
    Computer Methods and Programs in Biomedicine  Volume 215, Issue C
    Mar 2022
    522 pages

    Publisher

    Elsevier North-Holland, Inc.

    United States

    Publication History

    Published: 01 March 2022

    Author Tags

    1. Human emotions
    2. Electroencephalogram (EEG)
    3. CAD
    4. Machine learning
    5. Facial
    6. Voice

    Qualifiers

    • Review-article