DOI: 10.1145/2964284.2967276
Alone versus In-a-group: A Comparative Analysis of Facial Affect Recognition

Published: 01 October 2016

Abstract

Automatic affect analysis and understanding has become a well-established research area in the last two decades. Recent works have started moving from individual to group scenarios. However, little attention has been paid to comparing the affect expressed in individual and group settings. This paper presents a framework to investigate the differences in affect recognition models along the arousal and valence dimensions in individual and group settings. We analyse how a model trained on data collected in an individual setting performs on test data collected in a group setting, and vice versa. A third model combining data from both individual and group settings is also investigated. A set of experiments is conducted to predict affective states along both the arousal and valence dimensions on two newly collected databases that contain sixteen participants watching affective movie stimuli in individual and group settings, respectively. The experimental results show that (1) the affect model trained with group data performs better on individual test data than the model trained with individual data performs on group test data, indicating that facial behaviours expressed in a group setting capture more variation than those expressed in an individual setting; and (2) the combined model does not outperform the affect models trained with a specific type of data (i.e., individual or group), but proves to be a good compromise. These results indicate that in settings where multiple affect models trained with different types of data are not available, using the affect model trained with group data is a viable solution.
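The cross-setting protocol described above can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: the feature matrices X_ind and X_grp and the continuous labels y_ind and y_grp are hypothetical placeholders for per-clip facial features and arousal or valence annotations, and a generic support-vector regressor stands in for whatever learner the paper actually uses. It trains on one setting, tests on the other, and also evaluates a model pooled over both settings.

import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Hypothetical stand-ins: 200 clips per setting, 64-dim facial features,
# continuous labels in [-1, 1] for one affect dimension (arousal or valence).
rng = np.random.default_rng(0)
X_ind, y_ind = rng.normal(size=(200, 64)), rng.uniform(-1, 1, 200)  # individual setting
X_grp, y_grp = rng.normal(size=(200, 64)), rng.uniform(-1, 1, 200)  # group setting

def cross_evaluate(X_train, y_train, X_test, y_test):
    # Train on data from one setting, test on data from another; report RMSE.
    model = SVR(kernel="rbf").fit(X_train, y_train)
    return mean_squared_error(y_test, model.predict(X_test)) ** 0.5

print("ind -> grp :", cross_evaluate(X_ind, y_ind, X_grp, y_grp))
print("grp -> ind :", cross_evaluate(X_grp, y_grp, X_ind, y_ind))

# Combined model: pool both settings. With real data the test sets would be
# held-out splits; the overlap here is accepted for brevity of illustration.
X_all, y_all = np.vstack([X_ind, X_grp]), np.concatenate([y_ind, y_grp])
print("comb -> ind:", cross_evaluate(X_all, y_all, X_ind, y_ind))
print("comb -> grp:", cross_evaluate(X_all, y_all, X_grp, y_grp))

Under the paper's finding (1), the grp -> ind error would be expected to come out lower than the ind -> grp error, and under finding (2) the combined model would sit between the two setting-specific ones.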



Information

Published In

MM '16: Proceedings of the 24th ACM International Conference on Multimedia
October 2016
1542 pages
ISBN: 9781450336031
DOI: 10.1145/2964284
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. affect recognition
  2. arousal and valence recognition
  3. group settings
  4. individual affect recognition

Qualifiers

  • Short-paper

Funding Sources

  • EPSRC under its IDEAS Factory Sandpits call on Digital Personhood

Conference

MM '16: ACM Multimedia Conference
October 15-19, 2016
Amsterdam, The Netherlands

Acceptance Rates

MM '16 paper acceptance rate: 52 of 237 submissions (22%)
Overall acceptance rate: 2,145 of 8,556 submissions (25%)


Cited By

  • (2023) A Framework for Automatic Personality Recognition in Dyadic Interactions. 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1-8. DOI: 10.1109/ACIIW59127.2023.10388205. Online publication date: 10-Sep-2023.
  • (2022) Automatic Prediction of Group Cohesiveness in Images. IEEE Transactions on Affective Computing, 13(3), pp. 1677-1690. DOI: 10.1109/TAFFC.2020.3026095. Online publication date: 1-Jul-2022.
  • (2021) Review and Challenges of Technologies for Real-Time Human Behavior Monitoring. IEEE Transactions on Biomedical Circuits and Systems, 15(1), pp. 2-28. DOI: 10.1109/TBCAS.2021.3060617. Online publication date: Feb-2021.
  • (2021) AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups. IEEE Transactions on Affective Computing, 12(2), pp. 479-493. DOI: 10.1109/TAFFC.2018.2884461. Online publication date: 1-Apr-2021.
  • (2021) Affects in Groups: A review on automated affect processing and estimation in groups. IEEE Signal Processing Magazine, 38(6), pp. 74-83. DOI: 10.1109/MSP.2021.3107811. Online publication date: Nov-2021.
  • (2019) Alone versus In-a-group. ACM Transactions on Multimedia Computing, Communications, and Applications, 15(2), pp. 1-23. DOI: 10.1145/3321509. Online publication date: 10-Jun-2019.
  • (2019) The unobtrusive group interaction (UGI) corpus. Proceedings of the 10th ACM Multimedia Systems Conference, pp. 249-254. DOI: 10.1145/3304109.3325816. Online publication date: 18-Jun-2019.
  • (2019) Your Fellows Matter: Affect Analysis across Subjects in Group Videos. 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), pp. 1-5. DOI: 10.1109/FG.2019.8756514. Online publication date: May-2019.
  • (2019) Role of Group Level Affect to Find the Most Influential Person in Images. Computer Vision – ECCV 2018 Workshops, pp. 518-533. DOI: 10.1007/978-3-030-11012-3_39. Online publication date: 29-Jan-2019.
  • (2018) Unobtrusive Analysis of Group Interactions without Cameras. Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 501-505. DOI: 10.1145/3242969.3264973. Online publication date: 2-Oct-2018.
