DOI: 10.1145/3340555.3353716

CorrFeat: Correlation-based Feature Extraction Algorithm using Skin Conductance and Pupil Diameter for Emotion Recognition

Published: 14 October 2019

Abstract

To recognize emotions using less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features for participants watching the same video clip. To boost performance given limited data, we implement a learning system without a deep architecture to classify arousal and valence. Our method outperforms not only state-of-the-art approaches, but also widely used traditional and deep learning methods.
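The paper's full CorrFeat algorithm is not reproduced on this page. As a purely hypothetical sketch of the core idea the abstract describes (correlation-based features computed across participants who watched the same video clip), one might score each participant's physiological trace, such as pupil diameter or skin conductance, by its average Pearson correlation with the other participants' traces; the function name and feature definition below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def correlation_features(signals):
    """Given an array of shape (n_participants, n_samples) holding one
    physiological signal (e.g. pupil diameter) recorded while every
    participant watched the same video clip, return each participant's
    mean Pearson correlation with the remaining participants.
    A high value suggests the clip evoked a shared attentional response."""
    n = signals.shape[0]
    # Pairwise Pearson correlation matrix across participants (rows).
    corr = np.corrcoef(signals)
    # Average each row, excluding the diagonal self-correlation of 1.0.
    off_diag = corr - np.eye(n)
    return off_diag.sum(axis=1) / (n - 1)

# Toy example: 3 participants, 100 samples. Participants 0 and 1 share
# an underlying response; participant 2 is pure noise.
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 6 * np.pi, 100))
signals = np.vstack([
    shared + 0.1 * rng.standard_normal(100),
    shared + 0.1 * rng.standard_normal(100),
    rng.standard_normal(100),
])
feats = correlation_features(signals)
```

In this toy setup, the two participants who share the sinusoidal response receive clearly higher scores than the noise-only participant, which is the intuition behind using cross-participant correlation as a feature.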




Published In

ICMI '19: 2019 International Conference on Multimodal Interaction
October 2019
601 pages
ISBN: 9781450368605
DOI: 10.1145/3340555
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Emotion recognition
  2. MAHNOB-HCI database
  3. Machine learning
  4. Pupil diameter
  5. Skin conductance response

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '19

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%


Cited By

  • (2023) Few-Shot Learning for Fine-Grained Emotion Recognition Using Physiological Signals. IEEE Transactions on Multimedia 25, 3773–3787. DOI: 10.1109/TMM.2022.3165715
  • (2023) Orthogonal-Moment-Based Attraction Measurement With Ocular Hints in Video-Watching Task. IEEE Transactions on Computational Social Systems 10, 3, 900–909. DOI: 10.1109/TCSS.2023.3268505
  • (2023) Weakly-Supervised Learning for Fine-Grained Emotion Recognition Using Physiological Signals. IEEE Transactions on Affective Computing 14, 3, 2304–2322. DOI: 10.1109/TAFFC.2022.3158234
  • (2023) Emotion and Stress Recognition Utilizing Galvanic Skin Response and Wearable Technology: A Real-time Approach for Mental Health Care. In 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 1125–1131. DOI: 10.1109/BIBM58861.2023.10386049
  • (2021) Experimental methods of pupillographic analysis based on high-speed video recording devices. Yugra State University Bulletin 17, 3, 59–67. DOI: 10.17816/byusu20210359-67
  • (2021) Effects of pupil area on impression formation in pupil expression media. Transactions of the JSME (in Japanese) 87, 903. DOI: 10.1299/transjsme.21-00187
  • (2021) AC-WGAN-GP: Augmenting ECG and GSR Signals using Conditional Generative Models for Arousal Classification. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers, 21–22. DOI: 10.1145/3460418.3479301
  • (2021) Making Decisions in Intelligent Video Surveillance Systems Based on Modeling the Pupillary Response of a Person. In 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS), 806–811. DOI: 10.1109/ICCCS52626.2021.9449315
  • (2020) CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors. Sensors 21, 1, 52. DOI: 10.3390/s21010052
  • (2020) Eye-Tracking Analysis for Emotion Recognition. Computational Intelligence and Neuroscience 2020. DOI: 10.1155/2020/2909267
