DOI: 10.1145/2647868.2654909

Correlating Speaker Gestures in Political Debates with Audience Engagement Measured via EEG

Published: 03 November 2014

Abstract

We hypothesize that certain speaker gestures can convey significant information that is correlated with audience engagement. We propose gesture attributes, derived from speakers' tracked hand motions, to automatically quantify these gestures from video. We then demonstrate a correlation between these gesture attributes and an objective measure of audience engagement, electroencephalography (EEG), in the domain of political debates. We collect 47 minutes of EEG recordings from each of 20 subjects watching clips of the 2012 U.S. Presidential debates. The subjects are examined in aggregate and in subgroups according to gender and political affiliation. We find statistically significant correlations between gesture attributes (particularly extremal pose) and our EEG-derived feature of engagement, both with and without audio. For some stratifications, the Spearman rank correlation reaches as high as rho = 0.283 with p < 0.05, Bonferroni corrected. From these results, we identify the gestures that can be used to measure engagement, principally those that break habitual gestural patterns.




Published In

MM '14: Proceedings of the 22nd ACM international conference on Multimedia
November 2014
1310 pages
ISBN:9781450330633
DOI:10.1145/2647868

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. attributes
  2. electroencephalography (eeg)
  3. engagement
  4. gestures
  5. neuroscience
  6. political debates
  7. video

Qualifiers

  • Research-article

Conference

MM '14
MM '14: 2014 ACM Multimedia Conference
November 3-7, 2014
Orlando, Florida, USA

Acceptance Rates

MM '14 Paper Acceptance Rate: 55 of 286 submissions, 19%
Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%


Cited By

  • (2024)Embodying Similarity and Difference: The Effect of Listing and Contrasting Gestures During U.S. Political SpeechCognitive Science10.1111/cogs.1342848:3Online publication date: 25-Mar-2024
  • (2024)Identifying temporal correlations between natural single-shot videos and EEG signalsJournal of Neural Engineering10.1088/1741-2552/ad2333Online publication date: 26-Jan-2024
  • (2023)Feeling the Temperature of the RoomProceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies10.1145/35808207:1(1-21)Online publication date: 28-Mar-2023
  • (2018)A longitudinal database of Irish political speech with annotations of speaker abilityLanguage Resources and Evaluation10.1007/s10579-017-9401-z52:2(401-432)Online publication date: 1-Jun-2018
  • (2018)Computational EEG Analysis for Hyperscanning and Social NeuroscienceComputational EEG Analysis10.1007/978-981-13-0908-3_10(215-228)Online publication date: 17-Aug-2018
  • (2017)Who Composes the Music?Proceedings of the 25th ACM international conference on Multimedia10.1145/3123266.3123967(826-830)Online publication date: 23-Oct-2017
