DOI: 10.1145/2820426.2820461
Short Paper

Multimodal Sentiment Analysis for Automatic Estimation of Polarity Tension of TV News in TV Newscasts Videos

Published: 27 October 2015

Abstract

This paper presents a multimodal approach to content-based sentiment analysis of TV newscast videos, in order to assist in the automatic estimation of the polarity tension of TV news. The proposed approach aims to contribute to the semiodiscursive study of how those TV shows construct their ethos. To achieve this goal, we propose applying state-of-the-art computational methods that, by processing the newscast videos of interest, perform automatic emotion recognition on facial expressions. Moreover, they extract modulations in the participants' speech (e.g., news anchors, reporters, among others) and apply sentiment analysis techniques to the text obtained from the closed captions, thereby making it possible to estimate the emotional tension level in the enunciation of the TV news. To evaluate the accuracy and applicability of the system, we use a real dataset composed of 358 videos from three Brazilian newscasts. The experimental results are promising and indicate the potential of the approach to support the analysis of TV newscast discourse.
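The abstract describes combining three modalities (facial expressions, speech prosody, and closed-caption text) into a single tension estimate. As a minimal illustrative sketch, one common way to do this is a weighted late fusion of per-modality polarity scores; the function name, weights, and score ranges below are assumptions for illustration, not the paper's actual method (which is not detailed in the abstract).

```python
def fuse_tension(face_score: float, prosody_score: float, text_score: float,
                 weights=(0.4, 0.3, 0.3)) -> float:
    """Hypothetical late fusion: combine per-modality polarity scores,
    each assumed to lie in [-1, 1], into one tension estimate via a
    weighted average. Weights are illustrative, not from the paper."""
    scores = (face_score, prosody_score, text_score)
    if not all(-1.0 <= s <= 1.0 for s in scores):
        raise ValueError("modality scores must lie in [-1, 1]")
    # Weighted average; with weights summing to 1 the result stays in [-1, 1].
    return sum(w * s for w, s in zip(weights, scores))
```

A late-fusion scheme like this keeps each modality's analyzer independent, so one modality (e.g., missing closed captions) can be reweighted or dropped without retraining the others.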


Cited By

  • (2022) Multimodal Sentiment Analysis. Research Anthology on Implementing Sentiment Analysis Across Multiple Disciplines, ch. 98, pp. 1846-1870. DOI: 10.4018/978-1-6684-6303-1.ch098. Online publication date: 10 June 2022.
  • (2022) Galileo, a data platform for viewing news on social networks. El Profesional de la información. DOI: 10.3145/epi.2022.sep.12. Online publication date: 3 October 2022.
  • (2019) Multimodal Sentiment Analysis. International Journal of Service Science, Management, Engineering, and Technology, 10(2): 38-58. DOI: 10.4018/IJSSMET.2019040103. Online publication date: 1 April 2019.
  • (2017) An overview of Multimodal Sentiment Analysis research: Opportunities and Difficulties. 2017 IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 1-6. DOI: 10.1109/ICIVPR.2017.7890858.



Published In

WebMedia '15: Proceedings of the 21st Brazilian Symposium on Multimedia and the Web
October 2015
266 pages
ISBN:9781450339599
DOI:10.1145/2820426

Sponsors

  • CYTED: Ciencia y Tecnología para el Desarrollo
  • SBC: Brazilian Computer Society
  • FAPEAM: Fundação de Amparo à Pesquisa do Estado do Amazonas
  • CNPq: Conselho Nacional de Desenvolvimento Científico e Tecnológico
  • CGIBR: Comitê Gestor da Internet no Brasil
  • CAPES: Brazilian Higher Education Funding Council

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. closed caption
  2. facial expressions
  3. multimodal sentiment analysis
  4. prosody features of speech
  5. tension levels
  6. tv newscasts

Qualifiers

  • Short-paper

Funding Sources

  • FAPEMIG
  • CNPq
  • CEFET-MG

Conference

WebMedia '15

Acceptance Rates

WebMedia '15 Paper Acceptance Rate: 21 of 61 submissions, 34%.
Overall Acceptance Rate: 270 of 873 submissions, 31%.

