DOI: 10.1145/3139513.3139515

Automatic generation of actionable feedback towards improving social competency in job interviews

Published: 13 November 2017

Abstract

Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of a candidate's compatibility with the work environment, their negotiation skills, client-interaction prowess, and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation.
In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback, leaving the candidate with a tangible takeaway at the end of the interview. We approached placement trainers and compiled a list of the most common feedback given during training, which our system attempts to predict directly.
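Predicting trainer feedback directly, rather than intermediate social-indicator scores, can be framed as multi-label classification with one binary label per feedback item. The sketch below illustrates this framing only; the feature matrix, labels, and feedback strings are synthetic stand-ins, not the paper's actual data or model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Hypothetical feedback items of the kind placement trainers might list.
FEEDBACK_ITEMS = ["speak louder", "reduce filler words", "maintain eye contact"]

# Synthetic stand-in data: one row of nonverbal-cue features per candidate,
# one binary column per feedback item (1 = this feedback applies).
rng = np.random.default_rng(42)
X = rng.normal(size=(145, 6))          # 145 participants, 6 cue features
Y = (X[:, :3] > 0).astype(int)         # synthetic labels, one per item

# One binary classifier per feedback item: the model emits the actionable
# feedback directly instead of an intermediate social-indicator score.
clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)

pred = clf.predict(X[:1])              # feedback flags for one candidate
feedback = [item for item, flag in zip(FEEDBACK_ITEMS, pred[0]) if flag]
```

The candidate then receives the selected strings verbatim, which is what makes the output a tangible takeaway rather than a score to interpret.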
Toward this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues, so as to sidestep the challenges of spoken-language understanding and task modelling. We extracted audio and lexical features, and our findings indicate that audio and prosodic features correlate more strongly with candidate assessment. Our best results gave an accuracy of 95% against a baseline accuracy of 77%.
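The evaluation setup described above can be sketched as follows: train a classifier on per-candidate prosodic statistics and compare its cross-validated accuracy against a majority-class baseline (analogous to the 77% baseline reported). Everything below is a minimal illustration with synthetic features, not the authors' feature set or model:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in features: each row is one candidate, columns are
# hypothetical prosodic statistics (e.g. mean pitch, pitch range,
# energy variance, speaking rate).
rng = np.random.default_rng(0)
X = rng.normal(size=(145, 4))                     # 145 participants
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)     # synthetic labels

# Majority-class baseline: always predict the most frequent label.
baseline = cross_val_score(
    DummyClassifier(strategy="most_frequent"), X, y, cv=5
).mean()

# Learned model predicting the label from prosodic cues.
model_acc = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5
).mean()

print(f"baseline={baseline:.2f}, model={model_acc:.2f}")
```

Reporting both numbers makes the gain attributable to the features rather than to class imbalance, which matters when one feedback label dominates the corpus.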


Cited By

  • (2024) Automated Scoring of Asynchronous Interview Videos Based on Multi-Modal Window-Consistency Fusion. IEEE Transactions on Affective Computing 15(3), 799–814. DOI: 10.1109/TAFFC.2023.3294335. Published: Jul 2024.
  • (2021) Multimodal Analysis and Synthesis for Conversational Research. Companion Publication of the 2021 International Conference on Multimodal Interaction, 400–401. DOI: 10.1145/3461615.3486794. Published: 18 Oct 2021.
  • (2019) Slices of Attention in Asynchronous Video Job Interviews. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 1–7. DOI: 10.1109/ACII.2019.8925439. Published: Sep 2019.

        Published In

        MIE 2017: Proceedings of the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education
        November 2017
        75 pages


        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. Automatic feedback prediction
        2. Hiring analytics
        3. Intelligent training platforms
        4. Job interview automation
        5. Multimodal behavior analysis

        Qualifiers

        • Research-article

        Funding Sources

        • SERB Young Scientist grant of Dr. Jayagopi

        Conference

        ICMI '17

