DOI: 10.1145/2993148.2993169
Sound emblems for affective multimodal output of a robotic tutor: a perception study

Published: 31 October 2016

Abstract

Human and robot tutors alike must give careful consideration to how feedback is delivered to students, so as to provide a motivating yet clear learning context. Here, we performed a perception study to investigate attitudes towards negative and positive robot feedback in terms of perceived emotional valence on the dimensions of 'Pleasantness', 'Politeness' and 'Naturalness'. We find that negative feedback is indeed perceived as significantly less polite and pleasant. Unlike humans, who can leverage a variety of paralinguistic cues to convey subtle variations of meaning and emotional climate, robots are at present far less expressive. They do, however, have one advantage: they can combine synthetic robotic sound emblems with verbal feedback. We investigate whether these sound emblems, and their position in the utterance, can be used to modify the perceived emotional valence of the robot's feedback. We discuss this in the context of an adaptive robotic tutor interacting with students in a multimodal learning environment.




Published In

ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction
October 2016
605 pages
ISBN:9781450345569
DOI:10.1145/2993148

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Human-robot interaction
  2. multimodal output
  3. speech synthesis
  4. synthesized sounds

Qualifiers

  • Short-paper

Conference

ICMI '16
Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%


Cited By

  • Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. ACM Transactions on Human-Robot Interaction, 12(4):1–22, 13 December 2023. DOI: 10.1145/3585277
  • Nonverbal Sound in Human-Robot Interaction: A Systematic Review. ACM Transactions on Human-Robot Interaction, 12(4):1–46, 13 December 2023. DOI: 10.1145/3583743
  • The Mediating Effect of Emotions on Trust in the Context of Automated System Usage. IEEE Transactions on Affective Computing, 14(2):1572–1585, 1 April 2023. DOI: 10.1109/TAFFC.2021.3094883
  • Robot-mediated interventions for youth mental health. Design for Health, 6(2):138–162, 2 August 2022. DOI: 10.1080/24735132.2022.2101825
  • Personalization and Localization in Human-Robot Interaction: A Review of Technical Methods. Robotics, 10(4):120, 3 November 2021. DOI: 10.3390/robotics10040120
  • "An Error Occurred!" - Trust Repair With Virtual Robot Using Levels of Mistake Explanation. Proceedings of the 9th International Conference on Human-Agent Interaction, pages 218–226, 9 November 2021. DOI: 10.1145/3472307.3484170
  • Reflecting on the Presence of Science Fiction Robots in Computing Literature. ACM Transactions on Human-Robot Interaction, 8(1):1–25, 6 March 2019. DOI: 10.1145/3303706
