
How to Make a Robot Smile? Perception of Emotional Expressions from Digitally-Extracted Facial Landmark Configurations

  • Conference paper
Social Robotics (ICSR 2012)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7621)

Abstract

To design robots or embodied conversational agents that can accurately display facial expressions indicating an emotional state, we need both technology that produces those facial expressions and research that investigates how humans socially perceive those artificial faces. Our starting point is assessing human perception of core facial information: moving dots representing the facial landmarks, i.e., the locations and movements of the crucial parts of a face. Earlier research suggested that participants can identify facial expressions relatively accurately when all they can see of a real human face are moving white painted dots representing the facial landmarks (although less accurately than when recognizing full faces). In the current study we investigated the accuracy of recognizing emotions expressed by comparable facial landmarks (compared to the accuracy of recognizing emotions expressed by full faces), but now used face-tracking software to produce the facial landmarks. In line with earlier findings, results suggested that participants could accurately identify emotions expressed by the facial landmarks (though less accurately than those expressed by full faces). These results thereby provide a starting point for further research on the fundamental characteristics of the technology (AI methods) producing facial emotional expressions and its evaluation by human users.
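The point-light stimuli described above reduce a face to the positions of its landmarks. As an illustrative sketch only (the coordinates, function name, and rendering choices below are invented for illustration and are not taken from the paper's face-tracking pipeline), one frame of such a stimulus can be rendered as white dots on a black image:

```python
import numpy as np

def landmarks_to_point_light_frame(landmarks, size=256, radius=2):
    """Render (x, y) landmark coordinates in [0, 1] as white dots on a
    black frame -- a minimal point-light-style stimulus (hypothetical
    sketch, not the authors' implementation)."""
    frame = np.zeros((size, size), dtype=np.uint8)
    for x, y in landmarks:
        # Map normalized coordinates to pixel indices.
        cx, cy = int(x * (size - 1)), int(y * (size - 1))
        # Paint a small square dot, clipped to the frame borders.
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, size)
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, size)
        frame[y0:y1, x0:x1] = 255
    return frame

# Synthetic "smile" configuration: two eyes and an upturned mouth line.
pts = np.array([[0.35, 0.35], [0.65, 0.35],
                [0.30, 0.65], [0.50, 0.70], [0.70, 0.65]])
frame = landmarks_to_point_light_frame(pts)
```

An animated stimulus would simply apply the same rendering to the landmark coordinates of each video frame produced by the face tracker.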





© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, C., Ham, J., Postma, E., Midden, C., Joosten, B., Goudbeek, M. (2012). How to Make a Robot Smile? Perception of Emotional Expressions from Digitally-Extracted Facial Landmark Configurations. In: Ge, S.S., Khatib, O., Cabibihan, JJ., Simmons, R., Williams, MA. (eds) Social Robotics. ICSR 2012. Lecture Notes in Computer Science(), vol 7621. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34103-8_3

  • DOI: https://doi.org/10.1007/978-3-642-34103-8_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34102-1

  • Online ISBN: 978-3-642-34103-8

  • eBook Packages: Computer Science (R0)
