DOI: 10.1145/2666242.2666251

Who Will Get the Grant?: A Multimodal Corpus for the Analysis of Conversational Behaviours in Group Interviews

Published: 16 November 2014

Abstract

In recent years, more and more multimodal corpora have been created, and many of them now also include RGB-D sensor data. To our knowledge, however, no publicly available corpus combines accurate gaze tracking with high-quality audio recording for group discussions of varying dynamics. Such a corpus would make it possible to investigate higher-level constructs such as group involvement, individual engagement, or rapport, all of which require multimodal feature extraction. In this paper we describe the design and recording of such a corpus and provide illustrative examples of how it might be exploited in the study of group dynamics.





    Published In

    UM3I '14: Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions
    November 2014
    58 pages
    ISBN:9781450306522
    DOI:10.1145/2666242


    Publisher

    Association for Computing Machinery, New York, NY, United States



    Author Tags

    1. corpus collection
    2. eye-gaze
    3. group dynamics
    4. involvement

    Qualifiers

    • Research-article

    Conference

    ICMI '14

    Acceptance Rates

    UM3I '14 Paper Acceptance Rate 8 of 8 submissions, 100%;
    Overall Acceptance Rate 8 of 8 submissions, 100%


    Cited By

    • (2024)CCDb-HG: Novel Annotations and Gaze-Aware Representations for Head Gesture Recognition2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG)10.1109/FG59268.2024.10581954(1-9)Online publication date: 27-May-2024
    • (2023)Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A SurveyACM Computing Surveys10.1145/3626516Online publication date: 6-Oct-2023
    • (2023)Modelling the “transactive memory system” in multimodal multiparty interactionsJournal on Multimodal User Interfaces10.1007/s12193-023-00426-518:1(103-117)Online publication date: 11-Nov-2023
    • (2023)Empirical Research of Classroom Behavior Based on Online Education: A Systematic ReviewMobile Networks and Applications10.1007/s11036-023-02251-228:5(1793-1805)Online publication date: 5-Oct-2023
    • (2022)Robust Unsupervised Gaze Calibration Using Conversation and Manipulation Attention PriorsACM Transactions on Multimedia Computing, Communications, and Applications10.1145/347262218:1(1-27)Online publication date: 27-Jan-2022
    • (2021)Visual Focus of Attention Estimation in 3D Scene with an Arbitrary Number of Targets2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)10.1109/CVPRW53098.2021.00352(3147-3155)Online publication date: Jun-2021
    • (2020)Engagement in Human-Agent Interaction: An OverviewFrontiers in Robotics and AI10.3389/frobt.2020.000927Online publication date: 4-Aug-2020
    • (2019)A deep learning approach for robust head pose independent eye movements recognition from videosProceedings of the 11th ACM Symposium on Eye Tracking Research & Applications10.1145/3314111.3319844(1-5)Online publication date: 25-Jun-2019
    • (2019)The unobtrusive group interaction (UGI) corpusProceedings of the 10th ACM Multimedia Systems Conference10.1145/3304109.3325816(249-254)Online publication date: 18-Jun-2019
    • (2018)Unobtrusive Analysis of Group Interactions without CamerasProceedings of the 20th ACM International Conference on Multimodal Interaction10.1145/3242969.3264973(501-505)Online publication date: 2-Oct-2018
