
Semantic Scene Understanding for Human-Robot Interaction

Published: 13 March 2023
DOI: 10.1145/3568294.3579960

Abstract

Service robots will be co-located with human users in unstructured, human-centered environments and will benefit from understanding the user's daily activities, preferences, and needs in order to fully assist them. This workshop explores how abstract semantic knowledge of the user's environment can serve as context for understanding and grounding information about the user's instructions, preferences, habits, and needs. While object semantics have primarily been investigated in robotics within the perception and manipulation domains, recent work has shown the benefits of semantic modeling in a Human-Robot Interaction (HRI) context for understanding and assisting human users. The workshop focuses on semantic information that can help generalize and interpret user instructions, model user activities, anticipate user needs, and make a robot's internal reasoning more interpretable to the user. It therefore builds on topics from prior workshops, such as learning in HRI, behavior adaptation for assistance, and learning from humans, and aims to facilitate cross-pollination across these domains through a common thread: using abstract semantics of the physical world to support robot autonomy in assistive applications. We envision the workshop touching on research areas such as unobtrusive learning from observation, preference learning, continual learning, transparency of autonomous robot behavior, and user adaptation. The workshop aims to gather researchers working in these areas and to foster fruitful discussion toward autonomous assistive robots that can learn and ground scene semantics to enhance HRI.
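
To make concrete how scene semantics can serve as grounding context, the following is a deliberately minimal Python sketch. It is illustrative only: the SceneObject structure, the ground_instruction helper, and the toy object vocabulary are our own assumptions, not drawn from any system discussed at the workshop. The idea is simply that a semantic map of the home lets a robot disambiguate an otherwise ambiguous noun using the user's current room as context.

```python
# Illustrative only: a toy semantic map used to ground an ambiguous
# instruction. All names here are hypothetical, not from a real system.
from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class SceneObject:
    name: str   # object category label, e.g. "mug"
    room: str   # semantic location, e.g. "kitchen"


# A minimal semantic map: two identical mugs in different rooms.
SCENE: List[SceneObject] = [
    SceneObject("mug", "kitchen"),
    SceneObject("mug", "office"),
    SceneObject("sofa", "living_room"),
]


def ground_instruction(noun: str,
                       user_room: Optional[str] = None) -> Optional[SceneObject]:
    """Resolve a noun from a user instruction to a unique scene object,
    using the user's current room as disambiguating context."""
    candidates = [obj for obj in SCENE if obj.name == noun]
    if user_room is not None:
        in_room = [obj for obj in candidates if obj.room == user_room]
        if in_room:
            candidates = in_room
    # A deployed system would ask a clarifying question rather than fail.
    return candidates[0] if len(candidates) == 1 else None


if __name__ == "__main__":
    # "Bring me the mug", spoken in the office, resolves uniquely;
    # without room context the same request stays ambiguous.
    print(ground_instruction("mug", user_room="office"))  # office mug
    print(ground_instruction("mug"))                      # None
```

In practice, such grounding would draw on richer representations (scene graphs, commonsense knowledge bases, or learned embeddings) and would fall back to a clarification dialogue when context alone cannot resolve the ambiguity.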




Published In

HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
March 2023, 612 pages
ISBN: 9781450399708
DOI: 10.1145/3568294
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. human-centered autonomy
  2. robot learning
  3. scene semantics

Qualifiers

  • Abstract

Conference

HRI '23

Acceptance Rates

Overall acceptance rate: 268 of 1,124 submissions (24%)

