DOI: 10.1145/3544549.3585672
Work in Progress

Interpretable Time-Dependent Convolutional Emotion Recognition with Contextual Data Streams

Published: 19 April 2023

Abstract

Emotion prediction is important when interacting with computers. However, emotions are complex and difficult to assess, understand, and classify. Current emotion classification strategies do not explain why a specific emotion was predicted, complicating the user’s understanding of affective and empathic interface behaviors. Advances in deep learning have shown that convolutional networks can learn powerful time-series patterns while exposing classification decisions and feature importances. We present a novel convolution-based model that classifies emotions robustly. Our model not only offers high emotion-prediction performance but also makes its decisions transparent. It provides a time-aware feature interpretation of classification decisions using saliency maps. We evaluate the system on a contextual, real-world driving dataset involving twelve participants. Our model achieves a mean accuracy of in 5-class emotion classification on unknown roads and outperforms in-car facial expression recognition by . We conclude with a discussion of how emotion prediction can be improved by incorporating emotion sensing into interactive computing systems.
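To make the approach described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a small 1D convolutional classifier over multivariate contextual time-series windows, paired with a Grad-CAM-style temporal saliency map that attributes the predicted emotion class to individual time steps. The channel count, window length, layer sizes, and function names are illustrative assumptions.

```python
# Illustrative sketch only: 1D-CNN emotion classifier over multivariate
# context streams plus a Grad-CAM-style time-dependent saliency map.
# All dimensions (8 channels, 120 time steps, 5 classes) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Conv1dEmotionNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, channels, time)
        fmap = self.features(x)                        # (batch, 64, time)
        logits = self.classifier(fmap.mean(dim=-1))    # global average pooling over time
        return logits, fmap


def gradcam_saliency(model, x, target_class):
    """Grad-CAM-style temporal saliency: weight feature maps by the gradient
    of the target-class logit, sum over channels, and normalize per window."""
    logits, fmap = model(x)
    fmap.retain_grad()                                 # keep gradients of the intermediate maps
    logits[:, target_class].sum().backward()
    weights = fmap.grad.mean(dim=-1, keepdim=True)     # (batch, 64, 1) channel importance
    cam = F.relu((weights * fmap).sum(dim=1))          # (batch, time) saliency over time
    return (cam / (cam.max(dim=-1, keepdim=True).values + 1e-8)).detach()


if __name__ == "__main__":
    model = Conv1dEmotionNet()
    window = torch.randn(1, 8, 120)                    # one window of contextual signals
    logits, _ = model(window)
    pred = logits.argmax(dim=-1).item()
    saliency = gradcam_saliency(model, window, pred)
    print(pred, saliency.shape)                        # class index, torch.Size([1, 120])
```

In this sketch, the per-time-step saliency values indicate which parts of the input window contributed most to the predicted class, which is the kind of time-aware interpretation the abstract refers to.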

Supplementary Material

MP4 File (3544549.3585672-video-figure.mp4)
Video Figure
MP4 File (3544549.3585672-talk-video.mp4)
Pre-recorded Video Presentation

Cited By

  • (2024) Comprehensive study of driver behavior monitoring systems using computer vision and machine learning techniques. Journal of Big Data 11:1. https://doi.org/10.1186/s40537-024-00890-0. Online publication date: 22-Feb-2024
  • (2023) Facial Emotion Recognition for Photo and Video Surveillance Based on Machine Learning and Visual Analytics. Applied Sciences 13:17 (9890). https://doi.org/10.3390/app13179890. Online publication date: 31-Aug-2023

      Published In

      CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
      April 2023
      3914 pages
ISBN: 9781450394222
DOI: 10.1145/3544549
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 19 April 2023

      Author Tags

      1. Affective Computing
      2. Emotion Classification
      3. Explainable AI
      4. Time-Series Classification

      Qualifiers

      • Work in progress
      • Research
      • Refereed limited

      Conference

      CHI '23

      Acceptance Rates

      Overall Acceptance Rate 6,164 of 23,696 submissions, 26%


