DOI: 10.1145/2388676.2388786
Research article

Preserving actual dynamic trend of emotion in dimensional speech emotion recognition

Published: 22 October 2012

Abstract

In this paper, we use the concept of the dynamic trend of emotion to describe how a human's emotion changes over time, which we believe is important for understanding a speaker's stance toward the current topic in an interaction. To the best of our knowledge, however, this concept has not received sufficient attention in the field of speech emotion recognition (SER). This paper therefore aims to draw researchers' attention to the concept and makes an initial effort toward predicting the correct dynamic trend of emotion in SER. Specifically, we propose a novel algorithm named Order Preserving Network (OPNet). First, as the key issue in constructing OPNet, we employ a probabilistic method to define an emotion-trend-sensitive loss function. Then, a nonlinear neural network is trained with gradient descent to minimize the constructed loss. We validated the prediction performance of OPNet on the VAM corpus, using the mean linear error and a rank correlation coefficient γ as measures. Compared with k-nearest-neighbor and support vector regression baselines, the proposed OPNet better preserves the actual dynamic trend of emotion.
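The abstract names only the ingredients of OPNet: a probabilistic, trend-sensitive loss and a network trained by gradient descent to minimize it. As an illustration only, the following NumPy sketch shows one way such an order-preserving loss can work, using a RankNet-style pairwise logistic loss (an assumption; the paper's exact formulation is not given in the abstract) and a linear scorer standing in for the paper's nonlinear network. All data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: per-frame acoustic features X and continuous emotion
# labels y whose ordering over time encodes the dynamic trend (hypothetical).
T, D = 200, 8
X = rng.normal(size=(T, D))
w_true = rng.normal(size=D)
y = X @ w_true + 0.1 * rng.normal(size=T)

def pairwise_loss_grad(w, X, y):
    """Probabilistic order-preserving loss over all frame pairs (i, j):
    penalise the model when P(score_i > score_j) disagrees with y_i > y_j."""
    s = X @ w
    i, j = np.triu_indices(len(y), k=1)
    sign = np.sign(y[i] - y[j])                 # desired order for each pair
    diff = sign * (s[i] - s[j])                 # margin in the right direction
    loss = np.logaddexp(0.0, -diff).mean()      # logistic (RankNet-style) loss
    # d(loss)/d(diff) = -sigmoid(-diff); tanh form is numerically stable.
    coef = -sign * 0.5 * (1.0 - np.tanh(diff / 2.0)) / diff.size
    grad = (X[i] - X[j]).T @ coef
    return loss, grad

# Plain gradient descent, as the abstract states (linear scorer for brevity;
# the paper trains a nonlinear neural network).
w = np.zeros(D)
for _ in range(300):
    loss, grad = pairwise_loss_grad(w, X, y)
    w -= 0.5 * grad

# Rank agreement between predicted and true trends (Spearman-style).
pred_ranks = np.argsort(np.argsort(X @ w))
true_ranks = np.argsort(np.argsort(y))
rank_agreement = np.corrcoef(pred_ranks, true_ranks)[0, 1]
print(f"final pairwise loss {loss:.3f}, rank correlation {rank_agreement:.3f}")
```

Minimizing a pairwise loss of this kind optimizes the ordering of predictions rather than their absolute values, which is exactly what a rank correlation measure such as γ rewards.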

Cited By

  • (2018) Prediction of Emotion Change From Speech. Frontiers in ICT. DOI: 10.3389/fict.2018.00011. Online publication date: 5-Jun-2018
  • (2015) An investigation of emotion changes from speech. Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pages 733-736. DOI: 10.1109/ACII.2015.7344650. Online publication date: 21-Sep-2015



Published In

ICMI '12: Proceedings of the 14th ACM international conference on Multimodal interaction
October 2012
636 pages
ISBN:9781450314671
DOI:10.1145/2388676

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. dynamic trend of emotion
  2. loss function
  3. neural network
  4. speech emotion recognition

Qualifiers

  • Research-article

Conference

ICMI '12: International Conference on Multimodal Interaction
October 22-26, 2012
Santa Monica, California, USA

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)

Article Metrics

  • Downloads (last 12 months): 7
  • Downloads (last 6 weeks): 4
Reflects downloads up to 19 Nov 2024

