HapFACS 3.0: FACS-Based Facial Expression Generator for 3D Speaking Virtual Characters

Published: 01 October 2015

Abstract

With the growing number of researchers interested in modeling the inner workings of affective social intelligence, the need for tools to easily model its associated expressions has emerged. The goal of this article is twofold: (1) we describe HapFACS, a free software and API that we developed to provide the affective computing community with a resource that produces static and dynamic facial expressions for three-dimensional speaking characters; and (2) we discuss the results of multiple experiments that we conducted to scientifically validate our facial expressions and head animations in terms of the widely accepted Facial Action Coding System (FACS) standard and its Action Units (AUs). As a result, users without any 3D-modeling or computer-graphics expertise can animate speaking virtual characters with realistic, FACS-based facial expression animations and embed these expressive characters in their own applications. The HapFACS software and API can also be used to generate repertoires of realistic, FACS-validated facial expressions, which are useful for testing theories of emotion expression generation.
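To make the AU-level parameterization concrete, the short sketch below (in Python, and deliberately not using the actual HapFACS API) illustrates the general idea of describing an expression as a set of Action Unit intensities and ramping it over time into a dynamic animation. The Expression class, the to_keyframes helper, and the 0-100 intensity scale are hypothetical assumptions introduced here for illustration only; the AU combination for happiness (AU6 + AU12) follows standard EMFACS conventions.

# Minimal conceptual sketch (not the HapFACS API): an expression is a set of
# Action Unit -> intensity pairs, in the spirit of FACS-based parameterization.
# All class and function names below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Expression:
    """A static facial expression defined by AU activations (0-100)."""
    name: str
    aus: Dict[int, float] = field(default_factory=dict)  # AU number -> intensity

    def scaled(self, factor: float) -> "Expression":
        """Return a copy with all AU intensities scaled, clamped to 100."""
        return Expression(self.name,
                          {au: min(100.0, v * factor) for au, v in self.aus.items()})


def to_keyframes(expr: Expression, duration_s: float,
                 fps: int = 30) -> List[Tuple[float, Dict[int, float]]]:
    """Linearly ramp AUs from neutral (0) to their targets, producing
    (time, AU-intensity) keyframes for a simple dynamic expression."""
    n = max(1, int(duration_s * fps))
    return [(i / n * duration_s,
             {au: v * i / n for au, v in expr.aus.items()})
            for i in range(n + 1)]


# EMFACS-style happiness: AU6 (cheek raiser) + AU12 (lip corner puller).
happiness = Expression("happiness", {6: 70.0, 12: 80.0})
print(to_keyframes(happiness.scaled(0.5), duration_s=1.0)[-1])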

Published In

IEEE Transactions on Affective Computing, Volume 6, Issue 4 (Oct.-Dec. 2015), 107 pages.
Publisher: IEEE Computer Society Press, Washington, DC, United States.

          Qualifiers

          • Research-article

          Contributors

          Other Metrics

          Bibliometrics & Citations

          Bibliometrics

          Article Metrics

          • Downloads (Last 12 months)0
          • Downloads (Last 6 weeks)0
          Reflects downloads up to 19 Dec 2024

          Other Metrics

          Citations

          Cited By

          View all
          • (2024)An LLM-powered Socially Interactive Agent with Adaptive Facial Expressions for Conversing about HealthCompanion Proceedings of the 26th International Conference on Multimodal Interaction10.1145/3686215.3688378(75-77)Online publication date: 4-Nov-2024
          • (2023)A Review of Affective Computing Research Based on Function-Component-Representation FrameworkIEEE Transactions on Affective Computing10.1109/TAFFC.2021.310451214:2(1655-1674)Online publication date: 1-Apr-2023
          • (2023)A survey on the pipeline evolution of facial capture and tracking for digital humansMultimedia Systems10.1007/s00530-023-01081-229:4(1917-1940)Online publication date: 1-Apr-2023
          • (2019)Time to Go ONLINE! A Modular Framework for Building Internet-based Socially Interactive AgentsProceedings of the 19th ACM International Conference on Intelligent Virtual Agents10.1145/3308532.3329452(227-229)Online publication date: 1-Jul-2019
          • (2019)Sparse modified marginal fisher analysis for facial expression recognitionApplied Intelligence10.1007/s10489-018-1388-749:7(2659-2671)Online publication date: 1-Jul-2019
          • (2018)eEVA as a Real-Time Multimodal Agent Human-Robot InterfaceRoboCup 2018: Robot World Cup XXII10.1007/978-3-030-27544-0_22(262-274)Online publication date: 18-Jun-2018

          View Options

          View options

          Media

          Figures

          Other

          Tables

          Share

          Share

          Share this Publication link

          Share on social media