
DOI: 10.1145/3640792.3675716

Shrinkable Arm-based eHMI on Autonomous Delivery Vehicle for Effective Communication with Other Road Users

Published: 22 September 2024

Abstract

When employing autonomous driving technology in logistics, small autonomous delivery vehicles (aka delivery robots) encounter challenges different from those of passenger vehicles when interacting with other road users. We conducted an online video survey as a pre-study and found that, due to their small size and functional limitations, autonomous delivery vehicles need external human-machine interfaces (eHMIs) to ask for help. Inspired by everyday human communication, we chose arms as the eHMI, conveying the robot’s requests through limb motion and gesture. We held an in-house workshop to identify the arm’s design requirements and designed a specific arm with shrink-ability: conspicuous when delivering messages, but unobtrusive to traffic at other times. We prototyped a small delivery robot with a shrinkable arm and filmed the experiment videos. We conducted two studies (one video-based and one 360-degree-photo VR-based) with 18 participants. We demonstrated that arms on delivery robots can increase interaction efficiency by drawing more attention and communicating specific information.

Supplemental Material

MP4 File - Shrinkable Arm
The fabrication procedure and shrinking demo.



Published In
    AutomotiveUI '24: Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
    September 2024
    438 pages
ISBN: 9798400705106
DOI: 10.1145/3640792

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. arm-based eHMI
    2. autonomous delivery vehicles
    3. delivery robot
    4. transformation design

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • JST CREST

    Conference

    AutomotiveUI '24

    Acceptance Rates

    Overall Acceptance Rate 248 of 566 submissions, 44%

