
Generating Facial Expression Data: Computational and Experimental Evidence

Published: 01 July 2019

Abstract

It is crucial that natural-looking Embodied Conversational Agents (ECAs) display a variety of verbal and non-verbal behaviors, including facial expressions. The generation of credible facial expressions has been approached with a range of methods, yet it remains difficult because of the limited availability of naturalistic data. To infuse more variability into the facial expressions of ECAs, we proposed a model that treats the temporal dynamics of facial behavior as a countable-state Markov process. Once trained, the model can output new sequences of facial expressions from an existing dataset of facial videos with Action Unit (AU) encodings. The approach was validated by having both computer software and human observers identify the facial emotion shown in videos. Half of the videos employed newly generated sequences of facial expressions produced by the model, while the other half contained sequences selected directly from the original dataset. We found no statistically significant evidence that the newly generated facial expression sequences could be differentiated from the original ones, indicating that the model generates new facial expression data that are indistinguishable from the original data. Our approach could be used to expand the amount of labelled facial expression data and thereby create new training sets for machine learning methods.
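The extended abstract gives no implementation details, but the core idea — modeling the temporal dynamics of facial behavior as a countable-state Markov process over AU encodings and then sampling new sequences — can be sketched as follows. All function names and the state representation (hashable states such as tuples of discretized AU intensities per frame) are assumptions for illustration, not the authors' code.

```python
from collections import defaultdict
import random

def fit_markov_chain(sequences):
    """Estimate first-order transition probabilities between facial states.

    Each sequence is a list of hashable states, e.g. tuples of
    discretized Action Unit (AU) intensities per video frame.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            counts[prev][cur] += 1
    # Normalize counts row by row into transition probabilities.
    return {
        s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
        for s, nxt in counts.items()
    }

def sample_sequence(chain, start, length, rng=random):
    """Generate a new facial-expression state sequence by walking the chain."""
    seq = [start]
    for _ in range(length - 1):
        nxt = chain.get(seq[-1])
        if not nxt:  # no observed outgoing transition from this state
            break
        states, probs = zip(*nxt.items())
        seq.append(rng.choices(states, weights=probs)[0])
    return seq
```

Once fitted on AU-encoded videos, sampling the chain yields new sequences whose frame-to-frame transition statistics match the training data, which is one plausible reading of why the generated sequences were hard to distinguish from the originals.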


Cited By

  • (2022) Backchannel Behavior Influences the Perceived Personality of Human and Artificial Communication Partners. Frontiers in Artificial Intelligence 5. DOI: 10.3389/frai.2022.835298. Online publication date: 30-Mar-2022.
  • (2022) A realistic, multimodal virtual agent for the healthcare domain. Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents, 1-3. DOI: 10.1145/3514197.3551250. Online publication date: 6-Sep-2022.
  • (2020) Spontaneous Facial Behavior Revolves Around Neutral Facial Displays. Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, 1-8. DOI: 10.1145/3383652.3423893. Online publication date: 20-Oct-2020.

Published In

IVA '19: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents
July 2019
282 pages
ISBN:9781450366724
DOI:10.1145/3308532
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. embodied conversational agents
  2. facial action coding system (FACS)
  3. facial expressions
  4. machine learning
  5. non-verbal communication

Qualifiers

  • Extended-abstract

Funding Sources

  • the municipality of Tilburg
  • the Ministry of Economic Affairs
  • the European Union
  • Provincie Noord-Brabant
  • OPZuid

Conference

IVA '19

Acceptance Rates

IVA '19 paper acceptance rate: 15 of 63 submissions, 24%.
Overall acceptance rate: 53 of 196 submissions, 27%.
