
DOI: 10.5555/2447556.2447611

The Vernissage corpus: a conversational human-robot-interaction dataset

Published: 03 March 2013

Abstract

We introduce a new conversational human-robot-interaction (HRI) dataset in which a realistically behaving robot induces interactive behavior with and between humans. In our scenario, the humanoid robot NAO explains paintings in a room and then quizzes the participants, who are naive users. Because perceiving nonverbal cues, beyond the spoken words, plays a major role in social interaction and in socially interactive robots, we have annotated the dataset extensively. It was recorded and annotated to benchmark many perceptual tasks relevant to enabling a robot to converse with multiple humans: speaker localization and speech segmentation in the auditory domain; tracking, pose estimation, nodding detection, and visual focus of attention estimation in the visual domain; and audio-visual tasks such as addressee detection. The NAO system states are also available. Compared with recordings made with a static camera, this corpus includes the head movement of a humanoid robot (due to gaze changes and nodding), which poses challenges for visual processing. In addition, the significant background noise present in a real HRI setting makes the auditory tasks challenging.


Cited By

  • (2019) "M3B corpus". Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2019 ACM International Symposium on Wearable Computers, pp. 825–834. DOI: 10.1145/3341162.3345588. Online publication date: 9 Sep 2019.
  • (2018) "Deep learning based multi-modal addressee recognition in visual scenes with utterances". Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 1546–1553. DOI: 10.5555/3304415.3304635. Online publication date: 13 Jul 2018.
  • (2013) "Leveraging the robot dialog state for visual focus of attention recognition". Proceedings of the 15th ACM International Conference on Multimodal Interaction, pp. 107–110. DOI: 10.1145/2522848.2522881. Online publication date: 9 Dec 2013.

Published In

HRI '13: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction
March 2013
452 pages
ISBN:9781467330558

In-Cooperation

  • AAAI: American Association for Artificial Intelligence
  • Human Factors & Ergonomics Society

Publisher

IEEE Press

Author Tags

  1. hri corpus
  2. multimodal dataset
  3. social-robotics
  4. vernissage

Qualifiers

  • Abstract

Conference

HRI '13

Acceptance Rates

Overall acceptance rate: 268 of 1,124 submissions (24%)

Article Metrics

  • Downloads (last 12 months): 2
  • Downloads (last 6 weeks): 0
Reflects downloads up to 29 Sep 2024

