Research article · DOI: 10.1145/2522848.2522851

Learning a sparse codebook of facial and body microexpressions for emotion recognition

Published: 09 December 2013

Abstract

Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty lies in capturing microexpressions, i.e., short, involuntary expressions that last only a fraction of a second: at this micro-temporal scale, many other subtle face and body movements occur that carry no semantically meaningful information. We present a novel approach that exploits the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region within very short time windows, e.g., a few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features sparsely, yielding a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments on the AVEC 2012 dataset show that our approach achieves the best published performance on the arousal dimension using visual features alone. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.
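
The pipeline sketched in the abstract (extract local space-time descriptors, learn a codebook by dictionary learning, sparse-code each descriptor, pool the codes into a clip-level representation) can be illustrated with the minimal Python sketch below. It uses scikit-learn's MiniBatchDictionaryLearning and sparse_encode as generic stand-ins for the paper's codebook learning and sparse coding steps; the descriptor dimensionality, codebook size, sparsity penalty, and the random placeholder features are illustrative assumptions, not the authors' actual settings.

```python
# Minimal sketch of a sparse-codebook pipeline for micro-temporal descriptors.
# Assumptions (not from the paper): scikit-learn's dictionary learner stands in
# for the codebook learning step, and random vectors stand in for real
# spatio-temporal interest-point descriptors extracted around the face and body.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.RandomState(0)

# Placeholder for local space-time descriptors pooled from many training clips:
# n_descriptors x descriptor_dim (both values are illustrative).
train_descriptors = rng.randn(5000, 162)

# Learn a codebook (dictionary) of micro-temporal motion patterns.
n_atoms = 256  # codebook size (illustrative)
dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                   batch_size=64, random_state=0)
codebook = dico.fit(train_descriptors).components_  # shape: (n_atoms, 162)

def encode_clip(clip_descriptors, codebook, alpha=1.0):
    """Sparse-code a clip's descriptors and max-pool them into one vector."""
    codes = sparse_encode(clip_descriptors, codebook,
                          algorithm="lasso_lars", alpha=alpha)
    # Max pooling over the clip keeps the most salient activation per atom.
    return np.max(np.abs(codes), axis=0)

# Example: encode one (synthetic) clip into a fixed-length representation.
clip = rng.randn(120, 162)
clip_representation = encode_clip(clip, codebook)
print(clip_representation.shape)  # (256,)
```

In this sketch, max pooling over a clip's sparse codes yields a fixed-length vector that a downstream regressor could map to continuous emotion dimensions such as arousal; the choice of pooling and sparsity level here is assumed for illustration rather than taken from the paper.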

    Published In

    ICMI '13: Proceedings of the 15th ACM on International conference on multimodal interaction
    December 2013
    630 pages
    ISBN:9781450321297
    DOI:10.1145/2522848
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. audio-visual emotion recognition
    2. data fusion
    3. dictionary learning
    4. microexpressions
    5. sparse coding
    6. spatio-temporal interest points

    Qualifiers

    • Research-article

    Conference

    ICMI '13

    Acceptance Rates

    ICMI '13 Paper Acceptance Rate: 49 of 133 submissions, 37%
    Overall Acceptance Rate: 453 of 1,080 submissions, 42%

    Article Metrics

    • Downloads (last 12 months): 18
    • Downloads (last 6 weeks): 0
    Reflects downloads up to 14 Dec 2024

    Cited By

    • (2024) TGMAE: Self-supervised Micro-Expression Recognition with Temporal Gaussian Masked Autoencoder. 2024 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6. DOI: 10.1109/ICME57554.2024.10687556. Online publication date: 15-Jul-2024.
    • (2024) A review of research on micro-expression recognition algorithms based on deep learning. Neural Computing and Applications, 36(29), 17787-17828. DOI: 10.1007/s00521-024-10262-7. Online publication date: 5-Aug-2024.
    • (2023) Relationships Between Social Interactions and Belbin Role Types in Collaborative Agile Teams. IEEE Access, 11, 17002-17020. DOI: 10.1109/ACCESS.2023.3245325. Online publication date: 2023.
    • (2021) Bimodal Emotion Recognition using Kernel Canonical Correlation Analysis and Multiple Kernel Learning. 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1-5. DOI: 10.1109/CISP-BMEI53629.2021.9624428. Online publication date: 23-Oct-2021.
    • (2021) Multimodal temporal machine learning for Bipolar Disorder and Depression Recognition. Pattern Analysis and Applications, 25(3), 493-504. DOI: 10.1007/s10044-021-01001-y. Online publication date: 18-Jun-2021.
    • (2020) Micro-Expression Recognition Based on 2D-3D CNN. 2020 39th Chinese Control Conference (CCC), pp. 3152-3157. DOI: 10.23919/CCC50068.2020.9188920. Online publication date: Jul-2020.
    • (2020) A Multi-Task Neural Approach for Emotion Attribution, Classification, and Summarization. IEEE Transactions on Multimedia, 22(1), 148-159. DOI: 10.1109/TMM.2019.2922129. Online publication date: 1-Jan-2020.
    • (2020) Manifold feature integration for micro-expression recognition. Multimedia Systems. DOI: 10.1007/s00530-020-00663-8. Online publication date: 18-Jun-2020.
    • (2019) Facial Expression Recognition Using Computer Vision: A Systematic Review. Applied Sciences, 9(21), 4678. DOI: 10.3390/app9214678. Online publication date: 2-Nov-2019.
    • (2018) A survey. Multimedia Tools and Applications, 77(15), 19301-19325. DOI: 10.5555/3269690.3269783. Online publication date: 1-Aug-2018.
