DOI: 10.1145/3382507.3418868

Analyzing Nonverbal Behaviors along with Praising

Published: 22 October 2020

Abstract

In this work, as a first attempt to analyze the relationship between praising skills and human behavior in dialogue, we focus on head and face behavior. We create a new dialogue corpus that includes the face and head behavior of the person who gives praise (the praiser) and the person who receives it (the receiver), together with the degree of success of the praising (the praising score). We also build a machine learning model that uses features related to head and face behavior to estimate the praising score, and we clarify which features of the praiser and receiver are important for this estimation. The analysis shows that features of both the praiser and the receiver contribute to estimating the praising score, and that features related to utterance, head, gaze, and chin are particularly important. Examining the most important features reveals that praising is more successful when the praiser and receiver face each other without turning their heads to the left or right, and when the praiser's utterances are longer.
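The estimation pipeline described above (behavioral features fed to a gradient-boosting model, followed by a feature-importance analysis) can be sketched roughly as follows. This is an illustrative sketch only: the feature names and the synthetic data are assumptions for demonstration, not the paper's actual corpus or feature set.

```python
# Illustrative sketch: estimate a "praising score" from head/face
# features with gradient boosting, then inspect feature importances.
# Feature names and data below are invented for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = [
    "praiser_utterance_length", "praiser_head_yaw",
    "praiser_gaze_at_partner", "receiver_head_yaw",
    "receiver_chin_raise", "receiver_gaze_at_partner",
]
X = rng.normal(size=(200, len(feature_names)))
# Synthetic target: longer utterances and mutual facing (small absolute
# head yaw for both parties) raise the score, mimicking the findings.
y = (0.6 * X[:, 0]
     - 0.4 * np.abs(X[:, 1])
     - 0.3 * np.abs(X[:, 3])
     + 0.1 * rng.normal(size=200))

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Rank features by the model's impurity-based importance scores.
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

With a model like this, the ranked importances play the role of the paper's analysis step: features with near-zero importance can be discarded, and the top-ranked ones are the candidates for behavioral interpretation.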

Supplementary Material

MP4 File (3382507.3418868.mp4)


Cited By

  • (2023) Co-Located Human–Human Interaction Analysis Using Nonverbal Cues: A Survey. ACM Computing Surveys 56, 5 (2023), 1–41. DOI: 10.1145/3626516. Online publication date: 25 Nov 2023.
  • (2022) Modeling Japanese Praising Behavior by Analyzing Audio and Visual Behaviors. Frontiers in Computer Science 4 (2022). DOI: 10.3389/fcomp.2022.815128. Online publication date: 16 Mar 2022.



    Published In

    ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
    October 2020
    920 pages
    ISBN:9781450375818
    DOI:10.1145/3382507

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. communication
    2. gradient boosting
    3. multimodal interaction
    4. praise

    Qualifiers

    • Short-paper

    Conference

    ICMI '20: International Conference on Multimodal Interaction
    October 25–29, 2020
    Virtual Event, Netherlands

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%

