Abstract
To design robots or embodied conversational agents that can accurately display facial expressions of an emotional state, we need both the technology to produce those facial expressions and research into how humans socially perceive such artificial faces. Our starting point is assessing human perception of core facial information: moving dots representing the facial landmarks, i.e., the locations and movements of the crucial parts of a face. Earlier research suggested that participants can identify facial expressions relatively accurately when all they see of a real human face is a set of moving white-painted dots marking the facial landmarks (although less accurately than when they see full faces). In the current study, we investigated how accurately emotions expressed by comparable facial landmarks are recognized (compared to emotions expressed by full faces), but now used face-tracking software to produce the landmark configurations. In line with earlier findings, the results suggest that participants could accurately identify emotions expressed by the facial landmarks alone, though less accurately than those expressed by full faces. These results provide a starting point for further research on the fundamental characteristics of the technologies (AI methods) that produce facial emotional expressions and on their evaluation by human users.
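The paper extracted its landmark configurations with the face tracker of Saragih, Lucey, and Cohn; as an illustrative sketch only (not the authors' pipeline), the following Python snippet shows one way to render digitally extracted facial landmarks as moving white dots on a black background, using dlib's 68-point shape predictor and OpenCV as stand-ins. The file names expression_clip.mp4 and shape_predictor_68_face_landmarks.dat are hypothetical placeholders.

import numpy as np
import cv2
import dlib

# Hypothetical input clip and model path; dlib's 68-point predictor is a
# stand-in for the Saragih et al. tracker used in the paper.
VIDEO_PATH = "expression_clip.mp4"
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()       # HOG-based face detector
predictor = dlib.shape_predictor(PREDICTOR_PATH)  # 68 facial landmarks

cap = cv2.VideoCapture(VIDEO_PATH)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    canvas = np.zeros_like(frame)                 # black background: dots only
    for face in detector(gray):
        shape = predictor(gray, face)
        for i in range(shape.num_parts):
            p = shape.part(i)
            # Draw each landmark as a small white dot, mimicking the
            # point-light stimuli shown to participants.
            cv2.circle(canvas, (p.x, p.y), 2, (255, 255, 255), -1)
    cv2.imshow("landmark configuration", canvas)
    if cv2.waitKey(1) & 0xFF == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()

Showing participants only the canvas in one condition and the full frame in another would support a recognition-accuracy comparison of the kind reported above.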
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Liu, C., Ham, J., Postma, E., Midden, C., Joosten, B., Goudbeek, M. (2012). How to Make a Robot Smile? Perception of Emotional Expressions from Digitally-Extracted Facial Landmark Configurations. In: Ge, S.S., Khatib, O., Cabibihan, J.-J., Simmons, R., Williams, M.-A. (eds.) Social Robotics. ICSR 2012. Lecture Notes in Computer Science, vol. 7621. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34103-8_3
DOI: https://doi.org/10.1007/978-3-642-34103-8_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-34102-1
Online ISBN: 978-3-642-34103-8