Navigating Real-World Challenges: A Quadruped Robot Guiding System for Visually Impaired People in Diverse Environments

Published: 11 May 2024

Abstract

Blind and Visually Impaired (BVI) people face challenges when navigating unfamiliar environments, even with assistive tools such as white canes or smart devices. Increasingly affordable quadruped robots offer opportunities to design autonomous guides that could improve how BVI people find their way around unfamiliar environments and maneuver therein. In this work, we designed RDog, a quadruped robot guiding system that supports BVI individuals’ navigation and obstacle avoidance in indoor and outdoor environments. RDog combines an advanced mapping and navigation system to guide users with force feedback and preemptive voice feedback. Using this robot as an evaluation apparatus, we conducted experiments to investigate the difference in BVI people’s ambulatory behaviors using a white cane, a smart cane, and RDog. Results illustrate the benefits of RDog-based ambulation, including faster and smoother navigation with fewer collisions and limitations, and reduced cognitive load. We discuss the implications of our work for multi-terrain assistive guidance systems.
Figure 1:
Figure 1: We present RDog, an indoor and outdoor robotic travel guide for BVI people with preemptive feedback.

1 Introduction

According to the World Health Organization (WHO) [59], there are 290 million Blind and Visually Impaired (BVI) people worldwide, of whom 43 million are blind. Approximately 90% of them require assistance to leave their homes, and the remaining 10% who can go out independently usually follow only one or two routine routes. Navigating independently in unfamiliar places remains a significant challenge for BVI people.
Blind and Visually Impaired (BVI) individuals face two key challenges in navigating unfamiliar places: wayfinding and obstacle avoidance [21, 43]. Wayfinding entails determining one’s current location, charting a path to the destination, and maintaining orientation in between [21]. Obstacle avoidance requires the user to stay away from both static and dynamic objects. In addition, in outdoor environments with different terrains, the user needs to prepare for terrain changes such as going from cement ground to grass or stepping down a curb [21]. These challenges can be especially daunting in crowded or disorganized settings, such as cafeterias or railway stations [21].
Among navigation aids for individuals with visual impairments, the white cane is commonly used. Following orientation and mobility training, BVI individuals employ the white cane to detect obstacles, road edges, and surface variations. They often utilize the ’shorelining’ technique to follow sidewalk edges and continually sweep the cane ahead to ensure a clear path. However, white canes have limitations, as they can’t detect obstacles beyond their length or provide directions, leading to indirect routes and slow progress in crowded spaces. Detecting fast-moving objects like cyclists or cars is also challenging. Moreover, shorelining is inefficient and sometimes impractical in unstructured environments lacking clear boundaries [21].
Guide dogs offer superior obstacle avoidance capabilities, thanks to their extended sensing range and their ability to smoothly guide users around obstacles. However, guide dogs have limitations, as they can only memorize a few routine routes and rely on the user for direction on unfamiliar routes [21]. Furthermore, there is a severe global shortage of guide dogs due to the extensive time required for breeding, training, and matching, along with the high cost of approximately $42,000 [35]. They are also high maintenance and may not suit everyone’s lifestyle.
Electronic travel aids have been proposed to perceive the environment through different sensors and convey information to users through voice or tactile feedback [27, 33, 43]. With short-range ultrasonic sensors, mid/long-range cameras, and Light Detection and Ranging (LiDAR), along with GPS or SLAM algorithms, electronic travel aids show promise in surpassing white canes and guide dogs for obstacle avoidance and wayfinding across diverse environments. However, relaying environmental data via voice and tactile feedback can burden users, as such feedback may not always be clear and timely. This can impose a significant cognitive load on users, limiting their usability [43]. To date, the adoption rate of electronic travel aids is still very low. Recent research also explores autonomous robotic agents with force guidance [15, 29, 42] for safe, independent navigation by BVI users in unfamiliar terrains, but these systems are in early stages and primarily suited to flat indoor environments.
To create a genuinely improved navigation aid for BVI individuals, we conducted user interviews to comprehend their issues with current solutions and collect insights on an ideal travel aid. Using this feedback, we designed a quadruped robot, RDog, optimized for indoor and outdoor navigation. RDog excels in two crucial aspects: Firstly, it efficiently handles obstacle avoidance and wayfinding using advanced mapping and navigation technology, ensuring safe and efficient navigation. Secondly, RDog offers active force feedback and well-timed preemptive cues, enhancing user guidance in unstructured environments with varying terrain.
We compared RDog to traditional and smart cane solutions in various environments, analyzing factors like speed, trajectory, collision distance, cognitive workload, and usability. We also had the opportunity to compare RDog to an animal guide dog brought by one participant. Additionally, we performed a case study to evaluate RDog’s real-world usability, including finding stalls, transporting food, and returning trays. Overall, RDog outperforms other devices across multiple metrics and shows significant potential for real-world applications.

2 Related Work

In this section, we review various assistive devices and technologies designed to aid visually impaired individuals in real-time navigation. These solutions encompass a wide range of embodiments, sensory modalities, and approaches.

2.1 Canes and Wearable Devices

One prominent category of assistive devices for visually impaired people is cane-based navigation systems. These devices enhance traditional white canes with advanced technology to detect obstacles and provide real-time feedback to users. Smart canes like WeWalk [58] and GuideCane [55] are equipped with sonar sensors that detect obstacles and convey information to users through tactile feedback, typically vibrations. Stanford’s augmented cane [49] introduces motorized wheels to guide users away from obstacles, resulting in smoother walking experiences.
Wearable electronic travel aids (ETAs) such as glasses, belts, or wristbands, have also been developed to aid visually impaired individuals in navigation. Normally they convey information to the user through voice or vibration feedback [22, 27, 41, 46, 48]. Auditory feedback plays a significant role in assisting visually impaired individuals during navigation. Several solutions utilize audio cues to provide users with information about their surroundings, directions, path shape descriptions and turn-by-turn action instructions [34, 41, 46]. Other than the navigation related information, devices such as Orcam MyEye [39] and Envision [11] smart glasses adopt camera-based systems to help users recognize signs and surrounding people. Some systems use spatial audio feedback delivered through apps [5, 12], stereo sound [51], or spatial audio technologies [5] to enhance the perception of the environment.
Haptic feedback, which provides tactile sensations through vibrotactile stimulation, is another commonly employed mode of interaction for assisting visually impaired individuals [2, 22, 23, 33, 65]. While haptic feedback may convey limited information compared to audio, it offers a more rapid response [10]. Haptic feedback systems can deliver touch sensations to various body parts, including the head, shoulders, and wrists [19, 23, 33]. Users can be trained to use these tactile cues to interpret their surroundings and navigate.
Despite advancements in assistive technologies, they still have limitations. Audio-based systems rely on text-heavy communication, which is cognitively demanding and can overwhelm users. Vibration signals are timely but convey limited information and can be confusing in crowded environments. Most devices only alert users to obstacles without offering navigation guidance, causing stress and challenges, especially in unfamiliar places. As a result, these digital solutions have low adoption rates among BVI individuals [43].

2.2 Robot Guides

Guiding robots offer a promising approach to assist BVI individuals in terms of obstacle avoidance and wayfinding. Robots not only have sufficient space for sensors and computational units but also have the capability to proactively guide individuals by offering intuitive force feedback. The combination of wayfinding and the proactive obstacle avoidance capabilities potentially offers a more complete solution than smart canes and wearable devices which primarily send alerts for users to react to. Early works [36, 52] introduced prototypes for guidance robots, focusing on obstacle avoidance [36, 38, 45, 56]. Others concentrate on localization and navigation guidance based on an input destination [4, 54, 56, 57, 65]. These robots employ various sensor technologies, including ultrasonic or sonar sensors [36, 55], LiDAR [56, 64], or combinations of multiple sensors [30, 54].
Interaction interfaces between humans and robots have also been explored [38, 42, 45, 57, 61]. For example, [65] explored force sensors and vibration actuators that allow BVI people to control and communicate with the robot in a natural way, and [57] recognized the user’s walking speed and direction using a laser range finder. Some robots, such as those discussed in [9, 17, 61], are connected to users through soft leashes and adapt to the user’s walking behaviour through human modelling. However, it can be challenging for users to maintain a continuous sense of direction through a soft leash due to intermittent pulling forces. While some robots are designed for specific environments, such as hospitals [54], museums [25], and stores [4], most of the previous proposals have focused on indoor navigation.
In general, the existing guiding robots are not a complete solution yet. For example, wheeled robots such as the CaBot [15] can only operate in limited flat terrains, mostly indoors. Recent work has proposed quadruped robots as guiding robots [9, 17, 20, 61], but these prototypes are limited to controlled lab environments and have not been tested with BVI individuals.

2.3 Kinesthetic Devices

Another interesting type of assistive tool is kinesthetic devices [2, 31], which directly simulate muscle or joint sensations through a control moment gyroscope (CMG), offering a sense of force that mimics physical interactions, similar to a virtual guide. These devices effectively convey spatial information to visually impaired individuals. However, they do not inform the user about terrain changes such as stairs, curbs, or inclines like a guide dog would, nor do they provide a physical sense of security. Additionally, there is limited space available to equip them with advanced sensors, and how BVI people perceive these devices has not yet been studied.

3 Participatory Design with BVI People

Previous studies have extensively explored the design and advantages of assistive navigation devices for BVI users [10, 12, 26, 28]. These devices, equipped with advanced sensors like cameras and ultrasonic sensors, offer real-time information about obstacles and environmental cues to BVI users through audio or tactile feedback, thereby augmenting their independence and safety. However, existing devices exhibit certain limitations that impede their real-world effectiveness. In particular, [43] highlighted that the substantial cognitive load associated with processing acoustic and haptic feedback stands as a significant factor constraining the adoption rate of electronic travel aids. To gain deeper insights into the everyday navigational challenges and expectations of BVI individuals, we conducted interviews with six BVI users as part of a participatory study. Combining their feedback with insights from the literature, we formulated the key design requirements for an assistive robotic guidance system. We then iteratively developed a prototype of the system through pilot feedback with BVI people.

3.1 Initial Interviews with BVI users

We conducted in-depth semi-structured interviews with BVI users who have extensive experience traveling (commuting, exploring new places, traveling to different cities, etc.) and testing out various assistive tools. The demographic information of the pilot participants (PP1-6) is listed in Table 1.
Interviews were audio-recorded, transcribed, and finally coded thematically [6] by two authors. We used a deductive approach to identify themes related to existing challenges the visually impaired face in their day-to-day navigation, information about existing digital solutions, and ideas about building a truly helpful solution. Here we summarize the main takeaways (see Appendix for more details).
Table 1:
ID    Age   Gender   Vision Level         Occupation
PP1   23    F        fully blind          salesperson
PP2   24    F        partially sighted    student
PP3   38    M        fully blind          lawyer and guide dog owner
PP4   45    M        fully blind          self-employed
PP5   25    M        partially blind      self-employed
PP6   50    M        fully blind          CEO of a screen-reading company
Table 1: Demographic information of participatory design participants.

3.1.1 Existing Issues with Navigation.

Only a limited number of BVI users regularly use white canes for navigation. While white canes are cost-effective, they are primarily useful for obstacle avoidance in familiar and structured settings. In unstructured or unfamiliar areas like parks or malls, cane navigation can be ineffective, often leading users astray. This restriction hampers the mobility of many BVI people, despite their wish for greater independence. Furthermore, white canes miss overhead obstacles, causing head or shoulder collisions.
PP3: “White canes are only useful to a certain extent. They are like an extension of your hands - you can feel distant obstacles and avoid them. That’s pretty much it. They do not provide any other information, such as where you are, where to go next, or how to detour around obstacles. It takes a lot of practice and courage to use white canes, and not many people can use them independently.”

3.1.2 Experience with Existing Digital Solutions.

Prominent digital aids for the visually impaired include smart canes and wearable glasses, notably WeWalk [58] and KR Vision [27]. Despite being tested by some visually impaired users, these tools see limited daily-life utility. Our interviews identified three primary real-world limitations of these devices:
Limited Guidance: While these devices can alert users about obstacles, they do not provide guidance on how to navigate around these obstacles. As a result, users are required to plan their routes on their own, which can be cognitively demanding and confusing, especially in unstructured environments. Furthermore, because these devices often lack location information, users frequently find themselves getting lost.
Delayed Alerts: These devices have limited obstacle detection range, resulting in delayed alerts. Users may react too late to efficiently avoid obstacles.
Inaccurate/Excessive Obstacle Alerts: These devices often produce false alarms, causing frequent interruptions in complex settings. Solutions using voice feedback can impair users’ hearing which they rely on for safety and spatial awareness, while vibration feedback can be vague and insufficient.
PP2: "I understand the logic behind smart canes, but their value in real-life navigation is limited. They start alerting me to obstacles earlier than my white cane, but when they say something is three meters away, I don’t know what to do. Changing direction immediately might lead me off-trail, and if I don’t, the cane will hit it in a few steps, like a white cane. So, I end up using it like the white cane and ignoring the alerts."

3.1.3 What is a better assistive guidance solution?

Several interviewees (PP1, PP3, & PP6) suggested that a self-propelled robot could be an ideal assistive tool for guiding visually impaired users. This idea also resonated with other interviewees. PP3, who has experience with a guide dog, noted a significant improvement in navigation when led by a self-propelled agent. Instead of merely alerting the user to upcoming obstacles, the agent can actively guide them around obstacles while maintaining an optimal path, resulting in a smoother, less stressful experience. PP3 also emphasized the importance of a rigid leash in transmitting the dog’s motions to the user, preparing them for changes in direction or elevation. Interviewees also expressed a preference for an agent adaptable to different terrains, as they hope to navigate various environments.
PP3: "Imagine you shut your eyes, and you carry a sighted person on your back who keeps telling you where to go. That’s the experience of voice feedback. It provides information, but it can be confusing and overwhelming, and you still need to make a decision on every step. The same goes for tactile feedback, except that the signals are more nebulous. However, following the Kinesthetic feedback is like holding onto someone’s arm. It is simple and effortless, and gives you a strong feeling of security."

3.2 Design Requirements Distillation

Summarizing our initial interviews, users prefer a self-propelled robot that not only alerts them to obstacles but also actively guides them along the correct path, navigating around obstacles. They favor simple, clear, and intuitive guidance signals over complex voice and vibration feedback. This aligns with issues highlighted in the literature regarding existing devices, particularly the challenges associated with unclear and complex communication of navigational information [10, 12, 26, 28]. We thus focused on designing a solution that not only can plan a clear path, but also effectively transmits the information to the user in an intuitive and clear manner.
Besides participants’ feedback, we also consulted online resources and existing literature to assess the viability of using a self-propelled robot. We sought insights from the interaction behavior between guide dogs and handlers [53] and the use of kinesthetic feedback [15, 49] to inform our interaction design. Furthermore, Wigget et al. [60] highlight that guide dogs enhance users’ navigation capabilities and confidence, especially in unfamiliar environments. These findings, together with the feedback from the participatory study, contribute to our formulation of the initial design requirements.
We outlined the following design prerequisites for our guidance system tailored to navigating unfamiliar environments.
D1 - Provide a “walkable path” to the destination. BVI users face challenges finding paths in unfamiliar, unstructured environments, and existing devices which focus on obstacle avoidance do not adequately address this issue. Thus, the system should be able to identify a path that is effective and smooth.
D2 - Avoid obstacles proactively. Current digital assistive devices typically notify users about obstacles but leave the user to independently determine how to navigate around them. This can lead to confusion and stress. Users expressed a preference for guidance that assists them around obstacles in a timely fashion, especially in unfamiliar environments where users lack a mental map and are uncertain about safe navigation routes.
D3 - Navigate both indoor and outdoor environments across various terrains. As interviewees have expressed strong interest in exploring a variety of environments, a helpful assistive device should possess the capability to guide users in both indoor and outdoor settings, and adapt to diverse terrains.
D4 - Provide unambiguous, intuitive, and effective guidance. Guidance provided by existing digital assistive devices tends to be too ambiguous or ineffective for improving the navigation experience. The ideal solution should be easy to process and clear enough for users to decide their actions, thereby conserving cognitive resources and reducing navigation-related stress.
Figure 2:
Figure 2: Early prototypes and our final implementation. (a) A pilot test with PP2 in a park. (b) A pilot test with PP1 in a shopping mall. (c) Final implementation of the RDog system. Yellow boxes are components of the backend system, and green boxes are components of the front-end system.
Based on our design requirements, we developed an initial prototype of the RDog system (Fig.  2). Our design features a quadruped robot, chosen for its versatility across terrains and suitability for indoor and outdoor use compared to wheeled robots (D3). Inspired by real guide dogs, we decided to use kinesthetic feedback to guide users given its simple and intuitive nature (D4). For this, we equipped the prototype with a rigid leash typically used for real guide dogs. We piloted our prototype in the field with PP1, PP2, and PP3 in various locations, including a shopping mall, two parks, and a crowded neighborhood. The robot was controlled remotely in a wizard-of-oz manner to simulate wayfinding and proactive obstacle avoidance (D1, D2).
Users had a positive experience with the robot’s leash guidance system, finding the rigid leash’s kinesthetic feedback intuitive. They desired more comprehensive environmental information to anticipate terrain changes, preferably through voice or vibration feedback, to aid in both preparation and the learning of a cognitive map of the environment. Informal testing revealed difficulties with vibration signals, often drowned out by natural robot movement vibrations, leading us to choose voice feedback in the final prototype. Users stressed the importance of the robot slowing down for direction changes, slopes, and terrain texture variations, and coming to a complete stop at stairs or potentially hazardous terrain, waiting for user commands similar to an animal guide dog. Some users also wanted the option to adjust the robot’s speed to match their walking pace.
Based on feedback from the prototype testing, we incorporated two additional design requirements into our guidance system:
D5 - Provide preemptive information about changes in direction or terrain through voice, paired with slowing down or stopping.
D6 - Provide controls for speed and direction modulation on the leash.

4 RDog Implementation

To meet these design requirements, we need a versatile backend for autonomous navigation in diverse environments and seamless integration with a frontend interface for user feedback. The backend comprises the robot base, sensors, computational units, and navigation system, while the front-end involves the interaction interface. The following sections will provide detailed descriptions of these two components.

4.1 Robot Body, Sensors, and Computers

We built our system on the Unitree Go1 EDU quadruped robot platform (Fig. 2(a)). The robot is equipped with five sets of fisheye stereo cameras and ultrasonic sensors that perceive the surrounding environment, and it integrates three NVIDIA Jetson Nano boards and a Raspberry Pi that execute our navigation and interaction algorithms. Alongside the on-board computational units and sensors, we added an NVIDIA Jetson Orin processing unit and a Livox Mid-360 LiDAR. The Livox Mid-360 is mounted on the front of the robot, with a horizontal field of view (FOV) of 360 degrees, a vertical FOV of 59 degrees, and a sensing range of 40 meters. The LiDAR also embeds an IMU, which enables more accurate state estimation. The NVIDIA Orin runs the wayfinding and obstacle avoidance modules described in Sec. 4.3, based on the point cloud input from the LiDAR. The Orin computer communicates with the onboard computers through ROS (Robot Operating System) [40].
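As a concrete illustration of how a planning computer can send motion commands to onboard controllers over ROS, the minimal Python node below publishes velocity commands at a fixed rate. This is a sketch only: the /cmd_vel topic name, message type, and rates are assumptions, since the paper does not specify the Unitree command interface.

```python
import rospy
from geometry_msgs.msg import Twist

def send_velocity(pub, vx, wz):
    """Publish a forward velocity and yaw rate to the robot's motion controller."""
    cmd = Twist()
    cmd.linear.x = vx    # forward speed in m/s
    cmd.angular.z = wz   # yaw rate in rad/s
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("rdog_cmd_bridge")
    # Topic name is an assumption; the actual Unitree interface may differ.
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)  # 10 Hz command loop
    while not rospy.is_shutdown():
        send_velocity(pub, 0.5, 0.0)  # e.g. walk straight ahead at 0.5 m/s
        rate.sleep()
```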

4.2 Map System

Similar to High-Definition Map (HD Map) used in autonomous driving, our map system is hierarchical and contains a series of layers tailored for our guiding task. Here we detail these layers from bottom to top.

4.2.1 Pointcloud layer.

We utilized a modified FAST-LIO algorithm [62, 63] to create 3D point cloud maps of each testing environment, achieving better map consistency through loop detection and pose-graph optimization. We adjusted the calibration parameters and extrinsics to make it compatible with our Livox Mid-360 LiDAR, benefiting from its high-accuracy scans and integrated IMU. This typically enabled map generation in a single pass; for large environments, we applied voxel-grid filtering with the PCL library [44] to optimize point cloud density and file size.
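For illustration, the snippet below shows voxel-grid downsampling of a saved point cloud map. The paper's pipeline uses the C++ PCL library; this sketch uses Open3D's equivalent filter in Python for brevity, and the file names and 5 cm voxel size are assumptions.

```python
import open3d as o3d

# Read the point cloud map produced by the LiDAR-inertial mapping pass.
pcd = o3d.io.read_point_cloud("canteen_map.pcd")

# Downsample with a 5 cm voxel grid to control density and file size.
down = pcd.voxel_down_sample(voxel_size=0.05)
o3d.io.write_point_cloud("canteen_map_down.pcd", down)
print(f"{len(pcd.points)} points -> {len(down.points)} points")
```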

4.2.2 2D Occupancy Layer.

After constructing the 3D map, we convert it into a 2D occupancy grid map for path planning. Traversable areas are marked as free, while obstacles like walls or chairs are designated as obstructed. To account for varying ground heights and initial coordinate misalignment, we project the point cloud using the robot’s local coordinates during mapping. Examples of occupancy maps are in Fig. 3.
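A minimal sketch of the projection step is shown below: points near the ground mark free cells, and points between the ground band and head height mark obstacles. The height thresholds, resolution, and simple binning scheme are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def pointcloud_to_occupancy(points, resolution=0.05,
                            ground_band=(-0.05, 0.05), obstacle_max=1.8):
    """Project a 3D point cloud (N x 3, robot-local frame) onto a 2D occupancy grid.

    Points within the ground band mark free space; points above it and below the
    user's height mark obstacles. Thresholds are illustrative.
    """
    xy = points[:, :2]
    z = points[:, 2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / resolution).astype(int)
    grid = np.full(idx.max(axis=0) + 1, -1, dtype=np.int8)   # -1 = unknown
    free = (z > ground_band[0]) & (z < ground_band[1])
    grid[idx[free, 0], idx[free, 1]] = 0                     # 0 = free
    obst = (z >= ground_band[1]) & (z < obstacle_max)
    grid[idx[obst, 0], idx[obst, 1]] = 100                   # 100 = occupied (ROS convention)
    return grid
```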

4.2.3 Behavior layer.

The behavior layer plays a pivotal role in bridging the gap between the robot’s navigation functionality and its interaction with users. This section details the key aspects of the behavior layer and its implementation within our system. Essentially, the behavior layer contains a set of nodes extracted from key locations such as intersections, narrow passageways, terrain changes, and curbs.
Its purposes for planning and localization are described below.
Hierarchical path planning. Instead of doing global planning from the start to the goal, we equip the system with a 2-level path planner which improves efficiency. See Section  4.3.1 for more details.
Improved localization performance. A set of relocalization nodes, strategically placed at turns or distinct structural points, offer rich features to improve success rates and reduce localization errors, particularly after navigating long corridors. This approach also reduces computational costs compared to a constant frequency relocalization policy.
Its connections to the interaction interface are described below.
For nodes related to "turn" actions, a corresponding turning maneuver is executed during the navigation process, accompanied by an anticipatory voice notification. The timing of this notification is crucial for building trust. It must be both prompt and consistent, and it is also influenced by the user’s chosen speed.
For nodes related to "terrain changes", such as from pavements to grass or tiles, the robot provides voice feedback, slowing down to allow the user to prepare. When encountering a step-down, the robot produces voice feedback and initiates curb interaction, pausing while the user senses the curb. Afterward, the user presses our joystick button to resume robot movement.
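To make the behavior layer concrete, the sketch below shows one way such nodes could be represented, with fields for the node type, its preemptive announcement, and whether the robot should slow down or pause there. All names and values are illustrative assumptions; the paper does not specify this data structure.

```python
from dataclasses import dataclass

@dataclass
class BehaviorNode:
    """A key location in the behavior layer (fields and names are illustrative)."""
    node_id: int
    position: tuple          # (x, y) in the 2D occupancy map
    kind: str                # "turn", "terrain_change", "curb", "relocalization"
    announcement: str = ""   # preemptive voice cue, e.g. "grass and tiles in front"
    slow_down: bool = False  # reduce speed when approaching this node
    pause: bool = False      # stop and wait for the joystick press (e.g. at a curb)

# Example nodes along a route through the garden environment (hypothetical values):
route = [
    BehaviorNode(1, (4.0, 0.5), "turn", "turning left ahead", slow_down=True),
    BehaviorNode(2, (9.5, 2.0), "terrain_change", "grass and tiles in front", slow_down=True),
    BehaviorNode(3, (14.0, 2.0), "curb",
                 "Mind the step and press the button to continue", pause=True),
]
```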

4.3 Wayfinding and Obstacle Avoidance

After the map has been built, the robot guides the user to the destination through wayfinding and obstacle avoidance.

4.3.1 Hierarchical Wayfinding (D1).

Wayfinding involves hierarchical localization and path planning, functioning through various map layers from bottom to top. Initially, the robot’s pose is sent to the point cloud layer for feature matching with the point cloud map to calculate a matching score; scores surpassing a predetermined threshold indicate successful localization. Subsequently, the robot determines its pose in the grid map layer and identifies the corresponding high-level node in the behavioral layer.
The planning module operates in a top-down manner. First, a sequence of behavioral nodes are extracted from the behavioral layer based on the initial and goal locations. Then, a collision-free path is generated from the robot’s current node to the next node, utilizing traversability information from the 2D occupancy layer. The path planner is implemented using the A* algorithm.
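The sketch below illustrates the two-level idea: a standard A* search over the 2D occupancy grid plans each segment, and segments between consecutive behavior-layer waypoints are chained into the full route. The grid encoding, 4-connectivity, and unit cost model are simplifying assumptions; the actual planner may differ in these details.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 100 = occupied); 4-connected, unit cost."""
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])          # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                                 # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                                      # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 100:                  # occupied cell
                continue
            ng = g + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt, goal), ng, nxt, cur))
    return None

def plan_hierarchical(grid, waypoints):
    """Chain A* segments between consecutive behavior-layer waypoints.

    Assumes every consecutive pair of waypoints is reachable on the grid.
    """
    full_path = []
    for a, b in zip(waypoints, waypoints[1:]):
        segment = astar(grid, a, b)
        full_path.extend(segment if not full_path else segment[1:])
    return full_path
```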

4.3.2 Obstacle avoidance (D2).

A local controller implemented with Dynamic Window Approach  [13] is used for path following while avoiding dynamic obstacles such as pedestrians. Concurrently, the robot maintains a high-frequency update of its current state through the integration of LiDAR and IMU data, as well as the robot’s odometry information. Moreover, at designated relocalization nodes, the robot conducts global relocalization to rectify any accumulated localization errors.
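A compact sketch of a Dynamic Window Approach step is given below: candidate velocity pairs within the reachable window are forward-simulated, trajectories that pass too close to obstacle points are discarded, and the rest are scored on goal progress, clearance, and speed. The top speed matches the 0.9 m/s maximum reported in the study; the weights, acceleration limits, and sampling resolution are illustrative assumptions.

```python
import numpy as np

def dwa_step(state, goal, obstacles, dt=0.1, horizon=1.5,
             v_max=0.9, w_max=1.0, a_v=0.5, a_w=1.5):
    """One simplified Dynamic Window Approach step.

    state = (x, y, yaw, v, w); obstacles is an (N, 2) array of points.
    """
    x, y, yaw, v, w = state
    # Dynamic window: velocities reachable within one control period.
    vs = np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 7)
    ws = np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 11)
    best, best_score = (0.0, 0.0), -np.inf
    for cv in vs:
        for cw in ws:
            # Forward-simulate a short trajectory with constant (cv, cw).
            px, py, pyaw, traj = x, y, yaw, []
            for _ in range(int(horizon / dt)):
                pyaw += cw * dt
                px += cv * np.cos(pyaw) * dt
                py += cv * np.sin(pyaw) * dt
                traj.append((px, py))
            traj = np.array(traj)
            # Clearance: distance from trajectory to the nearest obstacle point.
            if len(obstacles):
                d = np.min(np.linalg.norm(
                    traj[:, None, :] - obstacles[None, :, :], axis=2))
                if d < 0.25:          # would pass too close; discard candidate
                    continue
            else:
                d = np.inf
            heading = -np.hypot(goal[0] - px, goal[1] - py)   # closer to goal is better
            score = 1.0 * heading + 0.2 * min(d, 2.0) + 0.1 * cv
            if score > best_score:
                best_score, best = score, (cv, cw)
    return best  # (linear velocity, yaw rate) to command next
```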

4.4 Interaction Interface

The interaction interface comprises an interactive leash that provides kinesthetic feedback and lets the user control the robot’s speed and pause or resume it at will, and a preemptive feedback system that announces changes in direction or terrain through voice, paired with slowing down or stopping.

4.4.1 Kinesthetic feedback (D4).

We learned from interviews and pilots that it is crucial to establish a rigid connection between the handler and the guide dog, as this enables the user to perceive kinesthetic feedback from the guide dog’s motion during turns or changes in terrain. We thus used a rigid rod as the base of the leash. To accommodate BVI users with varying heights and walking styles, we made the length and the angle of the leash adjustable.

4.4.2 Joystick.

While handling techniques that involve force control, such as leash gestures and collar cues [53], are commonly used to manage guide dogs, they take weeks of training to learn, which is not suitable for the current experimental setting. In addition, our design objective is to provide guidance to the general BVI population rather than only guide dog users. Therefore, we used a joystick with buttons, which serves as a clear and straightforward interface for users.
We embed a 3-axis joystick in the 3D-printed handle. The joystick is multi-functional and has three standard operations: push forward, push back, and press in. The user can increase or decrease the robot’s speed by pushing the joystick forward or backward when the robot is moving (D6). The user can start or pause the robot’s movement anytime by pressing in the joystick. The user can press in the joystick and pause the robot to speak to someone on the road, or, when the robot reaches a curb or a step, it pauses and waits for the user to press in the joystick and continue moving (D5).
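The sketch below shows one way the three joystick gestures could map onto speed and pause state. The discrete speed steps, debounce interval, and starting state are assumptions; the paper specifies only the gestures themselves and the 0.9 m/s top speed.

```python
import time

class LeashJoystick:
    """Maps the 3-axis joystick on the leash handle to robot speed commands.

    Speed steps and debounce time are illustrative, not the paper's values.
    """
    SPEEDS = [0.3, 0.5, 0.7, 0.9]   # selectable walking speeds in m/s

    def __init__(self):
        self.level = 1               # start at 0.5 m/s
        self.paused = True           # robot waits until the first press-in
        self._last_press = 0.0

    def push_forward(self):          # increase speed by one step
        self.level = min(self.level + 1, len(self.SPEEDS) - 1)

    def push_back(self):             # decrease speed by one step
        self.level = max(self.level - 1, 0)

    def press_in(self):              # toggle pause/resume (also used at curbs)
        now = time.monotonic()
        if now - self._last_press < 0.3:   # debounce a single physical press
            return
        self._last_press = now
        self.paused = not self.paused

    def commanded_speed(self):
        return 0.0 if self.paused else self.SPEEDS[self.level]
```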

4.4.3 Preemptive Feedback (D5).

RDog provides anticipatory information about road conditions, including upcoming turns, terrain changes, curbs, and stairs ahead. It issues voice commands like "grass and tiles in front" while slowing down the robot to prepare the user for terrain changes. When encountering a step-down curb, the robot issues the command "Mind the step and press the button to continue," alerting the user to the curb ahead and pausing until the user presses the joystick button to resume. The complete list of voice feedback can be found in the appendix.
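As a sketch of how announcement timing could be tied to the user's chosen speed (Section 4.2.3 notes that timing is influenced by speed), the snippet below fires each cue once, a fixed lead time before the robot reaches the corresponding behavior node. The 4-second lead time, the speed-scaled trigger distance, and the print-based speech stub are illustrative assumptions.

```python
def speak(text):
    """Stub for the robot's text-to-speech output (actual TTS backend not specified)."""
    print(f"[voice] {text}")

def announce_if_due(distance_to_node, user_speed, announcement,
                    announced, node_id, lead_time=4.0):
    """Fire a preemptive voice cue a fixed lead time before reaching a behavior node."""
    trigger_distance = lead_time * max(user_speed, 0.1)   # metres ahead of the node
    if node_id not in announced and distance_to_node <= trigger_distance:
        announced.add(node_id)                            # announce each node only once
        speak(announcement)

# Example: walking at 0.7 m/s, the cue fires once the node is ~2.8 m away.
announced = set()
announce_if_due(2.5, 0.7, "grass and tiles in front", announced, node_id=2)
```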

5 Study 1 - Comparing RDog with traditional Guidance systems

The objective of this study is to validate RDog’s efficacy for guiding users in diverse unfamiliar environments and compare it with the standard assistive device, i.e., the white cane, and a popular digital assistive solution, the WeWalk smart cane [58]. To the best of our knowledge, our investigation is the first to empirically compare a robotic guidance system with traditional solutions in real-world environments. We hypothesized the following:
H1. Navigation Efficiency, Smoothness, and Safety: RDog navigates faster and has fewer path deviations and collisions compared to the white cane and the smart cane.
Walking smoothly at a comfortable speed significantly enhances the overall user experience [49]. Challenges in surveying for obstacles and identifying safe routes with a white cane or smart cane may result in reduced walking speed and increased collisions. In contrast, RDog autonomously plans a navigable path, guiding users swiftly and smoothly to their destination.
H2. Perceived Workload: Users rate RDog as requiring less workload than the white cane and the smart cane on the NASA-TLX measures.
The required workload of an assistive device is a crucial parameter determining its usability, commonly assessed in studies such as  [42, 49]. Interviews and literature reviews indicate challenges for blind and visually impaired (BVI) users in detouring around obstacles and finding walkable paths with the white cane, especially in new or unstructured environments. Additionally, BVI users often face difficulties parsing voice or tactile feedback of smart devices in complex settings. Consequently, we anticipated that users would perceive the kinesthetic feedback provided by RDog as the least cognitively demanding.
H3. Terrain Feedback: A preemptive feedback system is critical in helping users navigate environments with terrain changes.
Few existing studies have explored the performance of assistive devices in diverse terrains, making it difficult to directly anticipate the effectiveness of preemptive feedback on terrain changes from the literature. However, our pilot studies unveiled a user preference for advanced information about upcoming terrain shifts, as it allows users to proactively adjust speed and posture, ensuring a smooth transition and averting potential injuries. As a result, we expected that delivering preemptive notifications through voice would contribute to building trust between users and the robot, ultimately enhancing the overall navigation experience.
H4. SUS & Trust: The ratings of RDog on usability and trustworthiness measures are higher than the smart cane and comparable to the white cane.
Usability and trustworthiness are key metrics for assessing assistive devices  [8, 29, 49]. Our interviews unveiled concerns among users about delayed responses and false alerts with smart canes, leading us to anticipate lower ratings in usability and trustworthiness. In contrast, RDog is designed for ease of use, requiring minimal training, suggesting higher ratings in usability and trustworthiness. As participants are familiar with white canes used daily, we did not expect RDog ratings to surpass those of the white cane.
Figure 3:
Figure 3: Navigation tasks in three environments : (a) indoor, (b) unstructured canteen, and (c) a garden with multiple terrains such as smooth concrete floor (cyan region), brick walkway (grey region), grass patch (green region), and irregularly spaced stones with grass (yellow region).

5.1 Navigation tasks

We considered three realistic navigation tasks in different on-campus environments:
(1)
Navigation in an indoor environment (Path A, Fig. 3 a): The path is situated in an indoor environment with narrow corridors in a building. The path has two intersections (indicated in blue dots) and its length is approximately 55 m. It also has furniture, dustbins, projecting walls/pillars, and overhanging fire extinguishers.
(2)
Navigation in an unstructured environment (Path B, Fig. 3 b): This task requires navigation to the restroom by crossing a semi-indoor canteen environment, Path B, shown in Fig. 3 b. The environment consisted of dining tables, chairs, and a tree with low branches that are potential overhead obstacles. The user had to navigate through several narrow spaces between tables that had 6 intersections. We also placed chairs (indicated in pink dots) along the path as static obstacles, and one confederate that walked slowly in front of the BVI user as a dynamic obstacle. The path had an approximate length of 60 m.
(3)
Navigation on uneven terrain (Path C, Fig. 3 c): This task involved navigating a semi-indoor garden environment with changing terrains such as a smooth concrete floor, brick walkway, grass patch, irregularly spaced stones with grass (yellow region), and a curb (labelled).
For all navigation tasks, the direction and order of path traversals were counterbalanced across participants to reduce possible effects of path familiarity.

5.1.1 Guidance methods.

We compared three guidance methods in our study: the RDog system, the WeWalk smart cane, and the white cane. While RDog automatically guides users along the planned path, participants cannot do the same with the white cane or smart cane since they do not know the route. Hence, for a fair comparison, we informed the participant of the direction to take at each intersection point via external audio feedback in a wizard-of-oz manner in the white cane and smart cane conditions. The audio instructions were standardised: "full right" referred to a nearly 90° right turn, "slight right turn" to a partial right turn, and likewise for left turns.

5.2 Experiment Design

We conducted a within-subjects controlled study where users navigated the real-world navigation situations. We had guidance methods as a main factor. In each situation, users were guided to the goal location using the three guidance methods: white cane, smart cane, and RDog in counterbalanced order.

5.3 Metrics

We used the following metrics to evaluate and compare the usage of the proposed system.

5.3.1 Navigation Efficiency.

We measured task completion times, starting from when the participant initiated the joystick and began following the robot, and ending when the user reached the goal position. This reflects the navigation efficiency of using these devices.

5.3.2 Smoothness - Number of Deviations.

Deviations are quantified by counting the instances when users significantly veered off the intended path and got stuck for more than five seconds, necessitating intervention from our experimenter to guide them back on course. This reflects the navigation smoothness when using each device.

5.3.3 Safety - Number of Collisions.

A collision is counted when the user has physical contact with static or dynamic obstacles. This reflects the navigation safety through the journey.

5.3.4 Perceived workload.

We used NASA-TLX [18] to record users’ perceived workload after navigating with each of the guidance method. The scale measures mental demand, physical demand, temporal demand, performance, effort, and frustration.

5.3.5 Usability & Trust.

We employed the SUS survey [7] to measure the usability of each device after all three routes were completed. We used Muir’s questionnaire [37] with a seven-point Likert scale to measure users’ trust in RDog at the beginning and end of the formal experiments.
Table 2:
ID    Age   Gender   Vision Level
P1    25    M        Fully blind on right, 30% vision field on left
P2    62    M        Fully blind on right, slight shadows in left
P3    35    M        Fully blind
P4    68    M        Fully blind
P5    34    F        Partially blind
P6    43    M        Partially blind with some peripheral vision
P7    54    F        Fully blind
P8    62    M        Fully blind
P9    72    M        Partially blind
P10   49    M        Fully blind
P11   27    M        Partially blind
P12   27    M        Fully blind
Table 2: Demographic statistics of Study 1 participants.

5.4 Participants & Apparatus

We conducted the study with twelve BVI people (P01–P12 in Table 2). This study was approved by the institutional review board (IRB) of our institution, and an informed consent was obtained from every participant. Each study took 150 minutes and participants were compensated 35 USD.
For the white cane and smart cane conditions, the experimenter provided audio feedback to the blind user via voice notifications using a phone placed in an equipment backpack during a remote Zoom session. The experimenter followed behind the participant at a distance and gave instructions in real-time.
To capture the participant’s trajectory, we affixed a Livox-Mid360 LiDAR, similar to the one on our robot dog, atop a helmet. Details about the trajectory capture setup can be found in the Appendix.

5.5 Procedure

We began by introducing RDog and explaining its navigation and control features. Subsequently, we conducted a 10-minute training session where users practiced reaching a goal location with RDog while learning to adjust its speed as needed. This training occurred in an indoor space on a route different from the one used for the study. Participants then completed a trust survey to gauge their level of trust in RDog for assisting them in navigation following their initial usage.
Following this, participants were tasked with navigating through each scenario using the guidance methods in a counterbalanced order. We recorded the time of traversal and the number of collisions and deviations for each traversal manually, and cross-validated the results against the video recordings. After each navigation task, participants rated the perceived workload of using the guidance method for navigation using the NASA-TLX scale.
At the end of the study, participants evaluated the usability of the guidance methods using the SUS scale [7] and completed another round of the trust survey. In addition to the default setup, users were asked to walk through the uneven terrain environment again without preemptive feedback, and then complete a survey about feedback on terrain changes. The order of trials with and without preemptive feedback was counterbalanced. Open-ended interviews were conducted after all the aforementioned surveys were completed.

6 Results

The data collected from twelve BVI users were used for analysis. We lack two data points for the indoor setting in NASA-TLX, navigation analyses, and terrain surveys, as two participants were unable to complete the experiments due to schedule conflicts.
In evaluating the NASA-TLX total score across three distinct conditions (Unstructured, Uneven Terrain, and Indoor), we applied the Shapiro-Wilk test to check the data’s normality. If the data is normal (p > 0.05), we correct the degrees of freedom when sphericity is violated (p < 0.05) under Mauchly’s test, and then apply a one-way RM ANOVA followed by post-hoc analysis using multiple means comparisons with Bonferroni correction. If the data is non-normal, we use Friedman’s test and then conduct post-hoc analysis using paired Wilcoxon signed-rank tests with Bonferroni correction. Before proceeding with our data analysis, we replaced the missing data using multiple imputation.
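A sketch of this analysis pipeline in Python/SciPy is shown below for one environment's NASA-TLX totals. SciPy has no built-in repeated-measures ANOVA or sphericity correction, so a one-way ANOVA stands in for that branch and multiple imputation is omitted; the data in the example are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_devices(scores, alpha=0.05):
    """Compare NASA-TLX totals for white cane, smart cane, and RDog (one row per participant).

    Shapiro-Wilk normality check, then RM-ANOVA (approximated here by a one-way
    ANOVA) or Friedman's test, with Bonferroni-corrected post-hoc comparisons.
    """
    labels = ["white cane", "smart cane", "RDog"]
    normal = all(stats.shapiro(scores[:, i]).pvalue > alpha for i in range(3))
    if normal:
        omnibus = stats.f_oneway(scores[:, 0], scores[:, 1], scores[:, 2])
        posthoc = stats.ttest_rel          # paired t-tests as post-hoc
    else:
        omnibus = stats.friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
        posthoc = stats.wilcoxon           # paired Wilcoxon signed-rank tests
    print(f"omnibus: stat={omnibus.statistic:.2f}, p={omnibus.pvalue:.4f}")
    pairs = [(0, 2), (1, 2), (0, 1)]
    for i, j in pairs:
        p = posthoc(scores[:, i], scores[:, j]).pvalue * len(pairs)   # Bonferroni
        print(f"{labels[i]} vs {labels[j]}: corrected p = {min(p, 1.0):.4f}")

# Hypothetical NASA-TLX totals for 10 participants (columns: white, smart, RDog).
rng = np.random.default_rng(0)
demo = np.column_stack([rng.normal(55, 10, 10),
                        rng.normal(60, 10, 10),
                        rng.normal(30, 8, 10)])
compare_devices(demo)
```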
Figure 4:
Figure 4: Navigation trajectories of all devices in three environments: (a) unstructured canteen, (b) garden with uneven terrain, and (c) indoor. Blue, orange, and pink correspond to the trajectories of RDog, the smart cane, and the white cane, respectively. The trajectories of RDog are the smoothest in the canteen and garden.
Figure 5:
Figure 5: Average navigation time in three environments: (a) Unstructured Canteen, (b) Uneven Terrain, and (c) Indoor. RDog has significantly shorter travelling time compared to the other two devices.
Table 3:
            
Canteen          W-Cane         S-Cane         RDog
Time (s)         125.9 ± 33.4   144.3 ± 42.7   83.9 ± 12.1
Deviation        1.25 ± 0.9     1.9 ± 1.2      0.0 ± 0.0
Collision        1.6 ± 0.9      1.3 ± 0.8      0.3 ± 0.6

Terrain          W-Cane         S-Cane         RDog
Time (s)         137.7 ± 49.7   132.9 ± 44.3   98.3 ± 15.3
Deviation        0.5 ± 0.8      0.7 ± 0.8      0.0 ± 0.0
Collision        0.5 ± 0.5      0.3 ± 0.5      0.0 ± 0.0

Indoor           W-Cane         S-Cane         RDog
Time (s)         94.2 ± 17.2    113.8 ± 20.2   78.1 ± 10.1
Deviation        0.0 ± 0.0      0.0 ± 0.0      0.0 ± 0.0
Collision        1.1 ± 1.1      0.4 ± 0.8      0.0 ± 0.0
Table 3: Results for navigation efficiency, smoothness, and safety.

6.1 Navigation Efficiency, Smoothness, and Safety

6.1.1 Navigation efficiency.

Navigation efficiency is measured by the participants’ travelling time in each trial. We found a significant main effect of guidance method on travelling time in the Unstructured environment (F(2, 22) = 19.41, p < .001). Pairwise t-tests with Bonferroni correction showed that RDog’s travelling time was significantly shorter than the smart cane’s (T(22) = −4.94, p < .001) and the white cane’s (T(22) = −4.10, p = .001). There was also a significant main effect in the Uneven Terrain environment (F(2, 22) = 8.63, p < .01): RDog was faster than the smart cane (T(22) = −3.04, p = 0.018) and the white cane (T(22) = −2.75, p = 0.034). Lastly, in the Indoor environment, there was also a significant difference in travelling time (F(2, 22) = 26.56, p < .001), with RDog travelling faster than the smart cane (T = −5.37, p < .001) and the white cane (T = −3.09, p < .001).
RDog demonstrates the largest performance advantage in the Unstructured environment. As shown in Fig. 4, users with canes tend to get stuck and take detours there, which significantly slows them down. In the terrain environment, the gap between RDog and the other two devices is smaller. This is likely because RDog pauses at the curb and waits for the user to press the button to continue, which adds around 10 extra seconds.
Note that since the user could choose the speed of RDog, a shorter travelling time indicates that most users used the fastest speed (0.9 m/s) at least during parts of the testing, which reflects their trust in RDog.

6.1.2 Smoothness.

Navigation smoothness is measured by how often users deviate from the path and require the experimenter’s intervention to get back on track. As shown in Table 3, the number of deviations for RDog is 0 in all cases, significantly lower than for the white cane or the smart cane, indicating a more seamless overall navigation experience. In the unstructured environment, an ANOVA test revealed a significant difference between guidance methods (F = 19.38, p < .001). Subsequent pairwise comparisons revealed significant differences between RDog and the smart cane (p < .001) and between RDog and the white cane (p < .001). For the uneven terrain condition, the results also indicated a significant difference (χ2 = 7.63, p = 0.022); no pairwise comparisons were highlighted as significant, likely due to the close-to-zero values across conditions. No test was performed for the indoor condition, as all three conditions had 0 deviations. This is because the indoor environment is highly structured, and the walls act as boundaries of the path that stop participants from deviating.

6.1.3 Safety - Collision.

Table 3 shows that with RDog, participants experienced fewer collisions with the environment than with the white cane or the smart cane. Additionally, RDog maintained a greater average distance from static obstacles such as walls or trees, resulting in a safer and more natural path. We used Friedman’s test to analyze collision data across environments. In the unstructured condition, the difference among devices is significant (χ2 = 10.05, p = 0.007), with a significant difference between the white cane and RDog (p = 0.008) but not between the smart cane and RDog. In the uneven terrain environment, there is a significant difference among devices (χ2 = 7.91, p = 0.019), but no significant results in pairwise comparisons. Similarly, for the indoor condition, the overall Friedman’s test revealed a significant result (χ2 = 8.22, p = 0.016), but no pairwise comparisons were significant. The lack of significant pairwise results is likely caused by the low numbers of errors across conditions, i.e., a floor effect. Nevertheless, the numerical trends indicate an advantage for RDog.
Figure 6:
Figure 6: Average NASA-TLX results in three environments: (a) Unstructured Canteen, (b) Uneven Terrain, and (c) Indoor. RDog significantly reduces users’ workload in Unstructured Canteen and Uneven Terrain compared to other devices.

6.2 Perceived Workload

Fig.  6 shows the results of the NASA-TLX scores (individual scales and overall) in three testing environments: canteen, terrain, indoor. Lower scores indicate a lower workload while performing the task. In general, RDog received the lowest scores in all three environments.
We found a significant main effect for the Unstructured environment (Friedman’s test: χ2 = 18.67, p < .001). Pairwise comparisons (with Bonferroni correction) revealed that RDog scored significantly lower than the white cane (p < .001) and the smart cane (p < .001). There was also a significant main effect for the Uneven Terrain environment (Friedman’s test: χ2 = 15.17, p < .001). Pairwise comparisons with Bonferroni correction indicated that RDog scored significantly lower than the white cane (p = 0.002) and the smart cane (p < .001). Lastly, in the Indoor environment, there was a notable group difference (F(2, 18) = 8.23, p = 0.003). Pairwise comparisons showed that RDog had lower scores than the smart cane (p = 0.027), but there was no difference between RDog and the white cane. This is likely because the white cane is well suited to navigating structured environments.
Figure 7:
Figure 7: Terrain Survey results. The trust and confidence of the users are significantly improved with our preemptive feedback system.

6.3 Terrain Feedback

One of the key advantages of using a quadruped robot is its capability to navigate outdoor environments with different terrains, such as pavement, grass, and curbs. We evaluated the experience of the users through a separate terrain survey. Feedback modes involve preemptive voice feedback about the terrain types and direction, slowing down at terrain changes, and pausing when encountering a curb.
Fig.  7 shows the user’s opinions on the interaction with the robot at the places with terrain changes. For Q1 (paired t-test, T = -7.25, p <.001) and Q2 (Wilcoxon signed rank test, p = 0.011) which ask the users to compare the experience between having feedback and not having feedback, users report a significant improvement in experience when having feedback. For Q3-5, users find all three feedback modes very important (6 out of 7) in helping them successfully navigate through the environments.

6.4 SUS & Trust

Usability Analysis. Fig. 8(a) shows the SUS scores across devices. The score of RDog is higher than those of the smart cane and the white cane. An ANOVA test on the System Usability Scale scores indicated a significant difference among the groups (F(2, 20) = 21.59, p < .001). Post-hoc tests with Bonferroni correction showed significant disparities between RDog and the smart cane (T = 5.01, p < .001) and between the smart cane and the white cane (T = −5.09, p < .001). The contrast between RDog and the white cane did not reach significance (p = 0.841). This is understandable, as most participants have been using white canes for years but are new to RDog.
Trust Analysis. Fig. 8(b) shows trust in the robot before and after the formal study procedure. There is a slight but non-significant increase in trust, which is expected given that each user walked with the robot for less than ten minutes. Nevertheless, overall trust in the robot both before and after remains high, at approximately 5 out of 7; it is understandable that users retain some reservations towards a new robot system. The detailed analyses are in the Appendix.
Figure 8:
Figure 8: SUS Results (a) and trust survey results (b). RDog has a comparable usability (a) to white canes. Users feel a slight increase in trust (b) with the robot after all the experiments.

6.5 Discussion

Figure 9:
Figure 9: Typical navigation challenges for BVI users include narrow passageway (a), atypical turn (b), overhanging obstacles (c), dynamic obstacles (d), and grass and tiles (e).

6.5.1 Navigation Performance and User Experience.

In terms of navigation efficiency and perceived workload, RDog outperformed both the white cane and the smart cane in all three environments. The most significant advantage was observed in the unstructured environment, known to be a challenging scenario for BVI individuals. In our study, this unstructured environment, an open cafeteria with densely placed chairs and tables, often resulted in challenges and mental stress when using canes. Videos indicated that both white canes and smart canes frequently got stuck between chair and table legs, leading to non-smooth navigation and frequent stops or decelerations. As seen in Fig.  4 a, the paths of white canes and smart canes were winding with many detours, whereas RDog followed a smoother and more optimal path.
Post-study interviews with participants revealed the difficulties they faced with canes, including obstacle avoidance (P3: "I have to keep hitting everything to figure out a path, and I feel embarrassed when the cane hits a chair that someone is sitting on.") and wayfinding (P4: "When I hear the voice command, I’m not sure where precisely to turn or how big a turn I should take."). The smart cane encountered similar issues as the white cane, particularly with low-lying obstacles like chair legs, which are hard for the sensor to detect. The challenges reported in the interviews include open environments, atypical turns, irregular structures, narrow passageways, dynamic obstacles like chairs and pedestrians, and overhanging obstacles. Sample images of those places are shown in Fig. 9. Clearly, in such environments, a self-propelled guide like RDog proves much more helpful. It not only follows an efficient path but also communicates information smoothly to the user through the leash, resulting in a less stressful experience, as indicated by our measurements of perceived workload.
Most users find structured indoor environments easier to navigate compared to unstructured spaces like canteens. However, obstacles such as protruding wall pillars can disrupt navigation, necessitating abrupt path adjustments. Notable safety concerns include potential collisions with fixtures like fire extinguishers (see Figure  9(c)) while shorelining with the white cane. The delayed and often inaccurate warnings of smart canes were also not practically useful in these situations. In contrast, the RDog navigates around such obstacles in advance, ensuring a safer, smoother journey.
In the terrain environment, the primary challenge lies in wayfinding within relatively open spaces. Shorelining with white canes becomes more challenging without distinct road boundaries, such as corridor walls in indoor environments. Notably, a tiled pavement leading into the grass (Fig.  9 e) posed difficulties for most users in finding the starting point based on the experimenter’s voice commands. Beyond navigation performance, an important observation relates to the trust instilled by the self-propelled robot. For instance, after the initial attempt to traverse the grass with a white cane, P7 expressed a sense of dread due to uncertainty, stating, "I’m not sure whether it is walkable in front of me. I was very cautious of anything brushing against my legs." However, when following RDog through the same route, she felt pleasantly surprised that her fear was not as pronounced. P7 noted, "A robot guiding me in front gives me much more confidence going through the grass. I know it is safe in front of me, as the robot is already there."

6.5.2 Preemptive Direction & Terrain Feedback.

We assessed users’ feedback on voice feedback related to terrain characteristics and direction changes, including terrain type and orientation, slowing down at terrain transitions, and pausing when encountering a curb. Users found this feedback to be highly beneficial.
The inclusion of such feedback stems from the fact that BVI individuals do not receive directional information about terrain when led by another agent, such as a person, a guide dog, or, in this study, RDog. In contrast, when using a white cane, haptic feedback directly signals terrain changes or curbs. However, it can be challenging to discern terrain changes directly from kinesthetic feedback provided by the guiding agent. Therefore, supplementing this feedback with other modalities, such as voice and movement changes, is essential.
Guide dogs face a similar challenge in transmitting terrain information directly. While guide dogs are trained to halt at terrain changes like stairs and curbs, users often need to deduce the specific type of change themselves. Unlike real guide dogs, our RDog has the unique ability to convey terrain information effectively through voice feedback, a feature users found highly beneficial (see Fig.  7).
P2:"White canes can detect the terrain change, but it does not tell the user what kind of terrain it is, and it is late. In contrast, the robot can tell me the exact terrain type in advance, and I find it very helpful". P5:"I found the slow-down when stepping up the curb is particularly useful. The pause at the interaction is useful too, but the slow-down is more important". P1 made an interesting comparison between white canes and RDog, noting, "Although white canes provide information about terrain, it can be excessive, as I do not need to feel the grass at every moment. In comparison, when the robot informs me that there is grass ahead, I lift my feet higher while walking, and that is enough."

6.5.3 Usability.

The average usability rating of RDog surpasses that of the smart cane and is comparable to that of the white cane. This score is surprising and encouraging, considering that participants were new to RDog while having years of experience with the white cane. In fact, RDog even outperforms the white cane in certain aspects. For example, most users rated the statement "I needed to learn a lot of things before I could get going with this system" lower for RDog than for the white cane. This is because using a white cane requires training and familiarization, whereas following RDog is relatively intuitive.
Users also expressed appreciation for the speed change feature of the joystick, as it provides a sense of stability and control in various situations. The low usability score of the smart cane confirms some of the negative feedback obtained during our initial round of interviews with BVI individuals. Participants found the alert signals from the smart cane to be ambiguous and confusing. Based on the results from the interviews and the experiment, it is evident that providing BVI individuals with obstacle alerts alone offers limited value in improving their navigation experience.

6.5.4 Comparison to Animal Guide Dogs.

During the study, one participant, P11, brought his guide dog with him, providing an opportunity for an unplanned comparison between RDog and a real guide dog in the experimental setting. Although this comparison is based on a single data point and was not initially part of the study design, we include a qualitative discussion of our findings here.
Both the animal guide dog and RDog produced similar workload scores: 17 for the animal guide dog versus 19 for RDog in the unstructured environment, and 28 versus 21 in the terrain environment. This similarity is expected, as both employ a similar force feedback mechanism.
In terms of navigation performance, both the animal guide dog and RDog achieved zero collisions and zero interventions. However, the real guide dog achieved slightly faster navigation times than RDog in both environments (77 s vs. 90 s and 70 s vs. 94 s). This is probably because the user did not select the fastest speed setting for RDog; the chosen speed may increase as the user becomes more familiar with RDog.
While animal guide dogs generally performed well in navigation, they displayed some notable unexpected behaviors. For instance, when encountering a chair obstructing the path, the guide dog would pause, seemingly considering leading the user to the chair for seating. A gesture from the user was required to prompt the dog to continue. Additionally, the guide dog exhibited hesitancy when transitioning between tile and grass paths, pausing several times. This hesitation was attributed to the dog’s lack of familiarity with such environments. These observations highlight that even animal guide dogs can exhibit cautious behavior in unfamiliar environments and may pause when uncertain. The "stop and wait for a command" behavior demonstrated by guide dogs in such situations could inform the future design of RDog to handle unexpected corner cases more effectively.
In terms of cost, our RDog system hardware, including the robot base, sensors, and computational units, amounts to around $6,000. This cost is significantly lower than the expense associated with training an animal guide dog, which can be as much as $50,000 [16]. This lower cost opens up opportunities for our RDog system to be adopted by a wider population.

6.5.5 Difference in Available Data.

The differences between the data gathered by each device could affect its performance. The smart cane has access to an ultrasonic sensor with an onboard computational chip. RDog, on the other hand, has access to LiDAR and an IMU, and is equipped with an Nvidia Orin computer. The combination of more advanced sensors and computational units enables the robot to collect richer data in real time, including information about far-away obstacles, and to plan paths ahead of time, providing preemptive voice feedback. While the disparity in available data could be a factor contributing to the performance difference, it also underscores the importance of the choice of embodiment: the robot's body offers space for mounting a richer set of sensors, whereas adding a LiDAR to the smart cane would significantly increase its weight and hurt its usability.

7 Case Study: Using RDog in a Canteen Environment

In Study 1, we demonstrated our robot's ability to enhance navigation performance in controlled environments. This study aims to validate that our robot can assist users in independently navigating a canteen and successfully procuring food without requiring intervention from the experimenter or others. We selected the canteen environment as it was identified as the most challenging environment in Study 1. Besides the unstructured nature and irregular obstacles, users may also struggle to locate specific stalls and find seats. The study in this section was approved by the institutional review board (IRB) of our institution, and informed consent was obtained from every participant.

7.1 Improved Design

7.1.1 Force Interaction.

Through Study 1, we learned that while some users appreciated the accuracy and certainty provided by joystick-based control (P2, P3, P5, P10), others (P4, P8, P9) preferred a more intuitive and natural speed control interface. Drawing inspiration from previous work on force interaction [57, 65] and discussions with the real guide dog user from Study 1, we integrated a load cell into the interactive leash to measure the force applied between the robot and the user, as illustrated in Fig. 2 c. The sequence of force signals within a specific time window is categorized into four primary actions: slight pull, slight push, strong push, and tug. A slight pull reduces the robot's speed, while a slight push accelerates it. A tug brings the robot to an immediate halt, and a strong push resumes its movement. It is worth noting that a similar "tugging" operation to halt the dog exists in real-world interactions between guide dogs and their handlers.
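The listing below is a minimal sketch of how a short window of load-cell readings could be mapped to these four actions; the sign convention (negative values for pulling back) and the threshold values are illustrative assumptions rather than our calibrated tuning.

```python
# Minimal sketch of classifying a window of leash load-cell readings (in
# newtons) into the four leash actions described above. Thresholds and sign
# convention are assumptions, not the system's actual parameters.

from statistics import mean

PULL_THRESHOLD_N = -5.0     # gentle backward pull: slow down
PUSH_THRESHOLD_N = 5.0      # gentle forward push: speed up
STRONG_PUSH_N = 15.0        # strong forward push: resume movement
TUG_PEAK_N = -20.0          # sharp backward spike: immediate halt

def classify_force_window(samples_n):
    """Classify one time window of force samples into a leash action."""
    if min(samples_n) <= TUG_PEAK_N:
        return "tug"
    avg = mean(samples_n)
    if avg >= STRONG_PUSH_N:
        return "strong_push"
    if avg >= PUSH_THRESHOLD_N:
        return "slight_push"
    if avg <= PULL_THRESHOLD_N:
        return "slight_pull"
    return "neutral"

# Example: a window dominated by a sharp backward spike is treated as a tug.
assert classify_force_window([2.0, -8.0, -25.0, -12.0]) == "tug"
```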

7.1.2 Dialogue communication.

To enable users to complete the entire task independently, we integrated voice recognition using the SU-03T chip from  [50], offering three primary functions. Initially, a wake-up phrase, "guide dog ready," precedes all other recognition functions. Subsequently, it recognizes key command keywords, with four configured destinations in our case study: bus stop, food stall, empty table, and tray return point. Upon recognizing these commands, it responds with "ready, going to" followed by the destination name, such as "food stall." Additionally, to assist users in locating the robot, RDog responds to the triggering phrase "where are you, Lucky?" with a series of barking sounds. Users can also inquire about possible destinations with "where can I go?".
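The following sketch illustrates this dialogue flow in simplified form. The speak, bark, and navigate_to callbacks are hypothetical stand-ins for the SU-03T integration and the navigation stack, and the exact phrasing and ordering of the wake-up and locating phrases are assumptions based on the description above.

```python
# Simplified sketch of the dialogue flow. `speak`, `bark`, and `navigate_to`
# are hypothetical callbacks standing in for the voice module and navigation
# stack; keyword spotting and ordering are assumptions, not the actual design.

DESTINATIONS = ["bus stop", "food stall", "empty table", "tray return point"]

def handle_utterance(text, speak, bark, navigate_to, state):
    text = text.lower().strip()
    if "where are you" in text:                   # e.g. "where are you, Lucky?"
        bark()                                    # barking helps the user localize the robot
        return
    if "guide dog ready" in text:                 # wake-up phrase
        state["awake"] = True
        return
    if not state.get("awake"):
        return                                    # ignore commands before wake-up
    if "where can i go" in text:
        speak("You can go to: " + ", ".join(DESTINATIONS))
        return
    for dest in DESTINATIONS:
        if dest in text:                          # keyword spotting on destinations
            speak(f"ready, going to {dest}")      # echo the recognized command
            navigate_to(dest)
            return
```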

7.2 Procedure

For this study, we chose a canteen environment that encompasses both indoor and semi-outdoor areas, along with a few curbs. Participants were assigned a series of tasks, including locating the robot, navigating to a food stall to collect food, finding an available seat for dining, and disposing of waste at a designated garbage disposal point. This experiment involved participants P1 and P13, and the specific workflow is detailed below:
(1)
The robot was positioned at the canteen entrance, awaiting the user's arrival.
(2)
After the user disembarked from a cab or bus, they initiated the process by issuing the voice command, "where are you, Lucky (the temporary name for RDog)?" to locate the robot. In response, the robot emitted a barking sound to indicate its location. The robot was placed within a five-meter radius of the user’s disembarkation point.
(3)
As the user approached RDog and held onto the handle, RDog vocalized "guide dog ready," indicating its readiness for further voice commands.
(4)
The user initiated the process by saying "where can I go," prompting the robot to provide a list of available destinations: food stall, an empty seat, and the tray return point.
(5)
After specifying the desired destination through a voice command, the user pushed the handle to commence movement. The robot autonomously headed to the destination, detouring around obstacles along the way. Users could adjust the speed or pause the robot using the force interaction interface described above.
(6)
Upon reaching the chosen destination, the robot came to a halt and provided information about the destination, such as "Reaching the food stall. The window is on your left."
We tested with two case study participants, CP1 and CP2. CP1 was a participant (P1) in Study 1 (Table 2), while CP2 joined exclusively for Study 2. CP2 is a 23-year-old female with Peters anomaly.

7.3 Qualitative Results and Discussion

7.3.1 Overall Experience.

CP1 and CP2 expressed positive feedback regarding this user case. They shared that they often face challenges locating stalls and seats in canteens, typically relying on assistance from sighted guides or friends when visiting such places. This marked the first time they were able to independently navigate to a destination in such an environment. These findings indicate a promising use case for deploying the robot in public spaces like canteens and shopping malls, serving a shared purpose to benefit a broader range of visitors.

7.3.2 Finding the Robot.

Locating the robot is a crucial initial step for BVI individuals. Users in our studies successfully found the robot within a 5-meter range using voice commands. However, both P1 and P13 expressed identity concerns. P13 questioned, "What if the robot responds to others? Can it identify the user?" In the future, speaker identification or contactless communication methods may be needed to address this concern.

7.3.3 Reaching the goal.

Participants generally expressed satisfaction with the robot’s stopping positions but also offered valuable suggestions for improvement. One participant appreciated the robot’s rotation to a user-friendly orientation and found the voice feedback helpful. Furthermore, P1 suggested improvements related to stopping positions at chairs, proposing that the robot could stop at the back of chairs for easier access and specify the type of chair in front. These suggestions have the potential to enhance the usability and informativeness of RDog.

7.3.4 Force Interaction.

The user found the interface intuitive and learned to control the speed almost instantly. We also informally tested the force interaction system with two other participants at the end of Study 1. One user still preferred the joystick interface, as it made her feel more certain about the change and the resulting speed. In future work, we will further explore this issue and consider alternatives that combine the advantages of both forms.

7.3.5 Voice Interaction.

Users expressed satisfaction with the voice interaction function. P1 commented, "The robot echoing my voice command is very important. I appreciate it when I say ’go to the food stall,’ and the robot responds with ’ready to go to the food stall.’ This clear communication reassures me that my voice is correctly received and understood by the robot." This feedback reaffirms the significance of incorporating voice interaction features for effective communication and trust-building.

8 Overall Discussion

In this paper, we introduced RDog, a robotic guide dog system designed to assist individuals with visual impairments (BVI) in navigating unfamiliar environments independently and safely. We began by conducting interviews with experienced BVI individuals to gather insights for designing an effective guiding system. Subsequently, we developed a prototype, incorporating feedback from pilot trials with the interviewees, such as the addition of a LiDAR sensor and the leash design.
We evaluated RDog’s performance in comparison to traditional tools such as the white cane and a state-of-the-art smart cane. The results showed that RDog outperformed both the white cane and the smart cane in terms of navigation efficiency, smoothness, safety, and perceived workload. Users found the terrain feedback system valuable, and RDog received high usability and trust ratings.
The advantages of RDog were particularly pronounced in unstructured environments with obstacles and narrow passages, highlighting its effectiveness in complex scenarios. A case study further demonstrated RDog’s ability to help users complete everyday tasks independently, such as dining in a canteen. This suggests the potential for RDog to be extended to various scenarios and developed into a practical product.

8.1 Comparison to Other Navigation Aids

Overall, the findings support our key assumptions and design requirements. Most users felt positive about the experience and stated that they would consider using RDog beyond the scope of the study.
Participants favored the smart cane the least among the devices. Despite producing only slightly fewer collisions than the white cane (see Table 3), the smart cane scored the lowest in NASA-TLX and usability ratings, largely due to its ambiguous and often delayed vibrational feedback, which was particularly unreliable in narrow spaces with lateral obstacles. This inadequacy frequently resulted in users approaching obstacles too closely and struggling to maintain the correct path because of the lack of wayfinding capabilities. Users reported similar issues with wearable bands equipped with ultrasonic sensors.
When comparing our RDog to previous guiding robot projects, several significant distinctions emerge. Firstly, we have successfully demonstrated the feasibility of guiding users through outdoor environments with diverse terrains, marking a notable advancement from the majority of prior projects [9, 17, 20, 61], which primarily conducted testing in controlled and confined settings. While some projects like Cabot [15], Glide [42], and others [25, 29] have ventured into more realistic experimental scenarios, their primary focus remains on indoor environments, using wheeled-robot platforms distinct from ours.
In essence, our RDog system represents a novel category of assistive devices that complements traditional tools like white canes and animal guide dogs. White canes provide environmental feedback through tactile sensations, aiding users in understanding their surroundings and avoiding obstacles. Electronic Travel Aids (ETAs) offer additional information, including sign reading and pedestrian recognition, through voice and vibration. Animal guide dogs excel in obstacle avoidance and smooth navigation. RDog harnesses advanced navigation and AI technologies, excelling in route planning and environmental understanding. It intuitively and accurately conveys path information to users, allowing them to predominantly follow the robotic system. At crucial moments, users receive voice prompts that enhance their environmental awareness. With a rich array of sensors and powerful computational capability, additional functionalities can be seamlessly integrated in the future.

8.2 Voice Feedback

This work highlights the importance of voice feedback in navigation devices. In Study 1, we emphasized the value of voice feedback, especially in challenging environments, and for building user trust. Case Study 2 expanded this interaction by incorporating voice commands, enabling the robot to guide users through a canteen dining experience with two-way voice interaction.
During informal interviews at the end of Study 1, we inquired about participants’ preferences for additional voice feedback in future models. These preferences included environmental descriptions (such as details about nearby objects and landmarks), distance to the destination, and alerts for critical road conditions (like potholes). Participants favored automatic announcements for road conditions and on-demand information about distance and the environment.
In summary, intermittent voice feedback for critical situations and the option to request information about location and the environment can significantly enhance the navigation experience. Future research can further improve the usability of assistive devices for BVI individuals by refining and expanding the voice interaction system.

8.3 Perceived Workload of Navigation

While sighted individuals may not consider the cognitive workload associated with walking, it can be highly demanding for BVI people. Using assistive devices should aim to reduce mental and physical effort to enhance convenience and safety. In Study 1, we confirmed that using RDog reduces cognitive load compared to white canes and smart canes. This reduced workload not only improves the travel experience but also enables multitasking. For example, P2 mentioned that using a white cane prevented him from walking and talking on the phone simultaneously, requiring frequent stops. However, when using RDog, he could comfortably manage both tasks, as he no longer needed to focus on navigating road conditions and directions. Other participants also reported similar benefits during informal post-experiment walks, highlighting the potential for users to perform additional tasks while navigating, such as enhancing safety awareness, taking calls, or simply enjoying their surroundings.

8.4 Collaborative Guidance

In our studies, we explored various interaction modes between the robot and users. For example, the robot pauses at curbs and waits for the user’s input to proceed, and we used speech-based dialogue for tasks like ordering food. During interviews, participants suggested additional collaboration modes. Some preferred a shared guidance mode over strict robot-led navigation, allowing them to actively participate in decision-making when necessary. P11 expressed a desire for the robot to mimic real guide dogs in unfamiliar environments, seeking user input when uncertain about direction or walkability. Leveraging the robot’s voice interface, future communication could involve the robot pausing and asking the user for help in uncertain situations, fostering collaboration in dealing with various scenarios.

8.5 Multi-Terrain Navigation

Using a quadruped robot as an assistive guide offers terrain adaptability, essential for diverse real-world terrains. While our formal studies did not cover staircases, we conducted informal tests with participant P1. This involved a seven-step staircase, each step measuring approximately 15cm in height. The robot proactively notified the user about the upcoming staircase from around 5 meters away. As the user approached, the robot seamlessly transitioned into stair-climbing mode, starting the switch about 1 meter before the first step, guiding the user through to the last step.
This use case intrigued P1, offering valuable insights and recommendations. He highlighted the importance of preemptive voice notifications for terrain changes like stairs, providing advanced notice that white canes lack. P1 also suggested brief pauses or slowing down at the first and last steps to help users adapt to the terrain change, aligning with animal guide dog behavior.
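As an illustration, the sketch below captures this staircase behavior: the robot announces the stairs at roughly the distances described above and switches its gait shortly before the first step. The announce and set_gait interfaces, as well as the optional pause P1 suggested, are hypothetical and not our implemented controller.

```python
# Illustrative sketch (not the implemented controller) of the staircase
# handling described above. `announce` and `set_gait` are hypothetical
# interfaces to the voice feedback and locomotion controller.

STAIR_NOTICE_DISTANCE_M = 5.0   # preemptive voice notification distance
STAIR_MODE_DISTANCE_M = 1.0     # distance at which the gait switches

def update_stair_behavior(distance_to_stairs_m, announce, set_gait, state):
    """Call periodically with the remaining distance to the first step."""
    if distance_to_stairs_m <= STAIR_NOTICE_DISTANCE_M and not state.get("notified"):
        announce("staircase ahead")
        state["notified"] = True
    if distance_to_stairs_m <= STAIR_MODE_DISTANCE_M and state.get("gait") != "stairs":
        set_gait("stairs")        # switch the quadruped to stair-climbing mode
        state["gait"] = "stairs"
        # Optionally pause or slow down briefly at the first step, as P1 suggested.
```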

8.6 Social Acceptability

In order to employ this assistive device in a wider range of environments, it is important to design the robot in a way that it is well-received by BVI users as well as the general public. Prior work has studied the social acceptability issues of existing assistive technologies [1, 3, 32, 47]. While the vast majority feel positive about the presence of assistive robots in public space [24, 25], there are some concerns regarding privacy and safety [1, 24].
In our experiments, we did not formally study social acceptability from the perspective of surrounding pedestrians. However, during the formal experiments we neither observed nor received reports of any disturbance to surrounding pedestrians, even though the robot operated in a crowded canteen environment. A possible explanation is that our algorithm accounts for social conventions by setting the speed range to match that of pedestrians in the area and maintaining a reasonable distance from walls and other obstacles. This allows users to walk alongside other pedestrians without interfering with their normal walking behavior or their predictions about the surroundings. In fact, P3 commented that RDog could help him reduce embarrassing incidents of hitting chairs that others sat on, making him feel that his behavior was more socially acceptable than when using canes. On the other hand, some BVI users or pedestrians may be apprehensive about an animal-like robot, or a moving robot in general, as it is an unfamiliar category of technology. In the future, we plan to conduct dedicated experiments to investigate the social acceptance of RDog from the perspective of both stakeholders. We will measure the acceptance of the public by having people watch videos of the robot in action, interviewing individuals in the robot's operating environment, and observing implicit behaviors of nearby individuals. We will also evaluate whether BVI users feel comfortable and accepted when using the robot. Based on the gathered insights, we will further adjust the design requirements and enhance BVI users' comfort while using the robot.

8.7 Limitations

While our quadruped robot excels in guiding users across diverse terrains, it has limitations. Firstly, the motors consume more power than wheeled robots, potentially straining the battery. Secondly, user opinions vary about the robot’s walking sound. Some find it reassuring for understanding road conditions, while others worry about its appropriateness in quiet places like museums or theaters. Thirdly, RDog currently requires a map that has been built a priori, which limits its navigation range. In the future, we plan to integrate the system with floor maps [14] or consumer navigation maps so that RDog can navigate in a wider range of environments. Fourthly, our current system utilizes LiDAR as the primary sensor. To ensure consistent performance in a broader range of environments, such as those with glass doors, we will implement a multi-sensory fusion strategy that includes cameras and ultrasonic sensors. Lastly, our studies had a moderate sample size. In particular, only two of the participants were guide dog users due to the difficulty of recruitment given the small number of guide dog users (around 10 in the local city). In the future, we aim to interview and test more visually impaired users (especially guide dog users) in a larger-scale study. Additionally, we hope to involve orientation and mobility (O&M) trainers in future iterations of participatory studies to gather insights on system design and user training.

9 Conclusion

Our work introduces RDog, an autonomous quadruped robot designed to enhance navigation for Blind and Visually Impaired (BVI) individuals in unfamiliar environments. The RDog system combines an advanced mapping and navigation system to guide users with force feedback and preemptive voice feedback through various terrains and environments. Compared to existing tools like white canes and smart canes, RDog offers faster, smoother navigation with fewer collisions, reduced cognitive load, and a more user-friendly experience. This research points toward a promising future in assistive technology, paving the way for multi-terrain guidance systems that empower BVI individuals with greater independence and confidence in various environments.

Acknowledgments

This research is supported in part by the National Research Foundation (NRF), Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2-RP-2020-016). This research project is partially supported by the Ministry of Education, Singapore, under its MOE Academic Research Fund Tier 2 programme (MOE-T2EP20221-0010). We also thank the reviewers for their insightful comments that helped to improve the paper.

Footnotes

Corresponding Authors
The author is now at Synteraction Lab, School of Creative Media, Department of Computer Science, City University of Hong Kong

Supplemental Material

MP4 File - Video Preview
MP4 File - Video Presentation
MP4 File - Video Figure
PDF File - Appendix: additional experimental data, the setup of the data capture device, a complete list of preemptive feedback, and the interview questions asked in the pilot study

References

[1]
Taslima Akter, Tousif Ahmed, Apu Kapadia, and Swami Manohar Swaminathan. 2020. Privacy considerations of the visually impaired with camera based assistive technologies: Misrepresentation, impropriety, and fairness. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, NY, USA, 1–14. https://doi.org/10.1145/3373625.3417003
[2]
Michele Antolini, Monica Bordegoni, and Umberto Cugini. 2011. A haptic direction indicator using the gyro effect. In 2011 IEEE World Haptics Conference. IEEE, 251–256. https://doi.org/10.1109/WHC.2011.5945494
[3]
Mauro Avila Soto and Markus Funk. 2018. Look, a guidance drone! assessing the social acceptability of companion drones for blind travelers in public spaces. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, NY, USA, 417–419. https://doi.org/10.1145/3234695.3241019
[4]
Shiri Azenkot, Catherine Feng, and Maya Cakmak. 2016. Enabling building service robots to guide blind people a participatory design approach. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 3–10. https://doi.org/10.1109/HRI.2016.7451727
[5]
Jeffrey R Blum, Mathieu Bouchard, and Jeremy R Cooperstock. 2012. What’s around me? Spatialized audio augmented reality for blind users with a smartphone. In Mobile and Ubiquitous Systems: Computing, Networking, and Services: 8th International ICST Conference, MobiQuitous 2011, Copenhagen, Denmark, December 6-9, 2011, Revised Selected Papers 8. Springer, 49–62. https://doi.org/10.1007/978-3-642-30973-1_5
[6]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
[7]
John Brooke. 1996. SUS: A 'quick and dirty' usability scale. Usability Evaluation in Industry 189, 3 (1996), 189–194. https://doi.org/10.1201/9781498710411-35
[8]
Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. 2018. Planning with trust for human-robot collaboration. In Proceedings of the 2018 ACM/IEEE international conference on human-robot interaction. ACM, New York, NY, USA, 307–315. https://doi.org/10.1109/HRI.2016.7451727
[9]
Yanbo Chen, Zhengzhe Xu, Zhuozhu Jian, Gengpan Tang, Liyunong Yang, Anxing Xiao, Xueqian Wang, and Bin Liang. 2023. Quadruped guidance robot for the visually impaired: A comfort-based approach. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, IEEE, Piscataway, NJ, USA, 12078–12084. https://doi.org/10.1109/ICRA48891.2023.10160854
[10]
Dimitrios Dakopoulos and Nikolaos G Bourbakis. 2009. Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 40, 1 (2009), 25–35.
[11]
Envision 2023. Envision Smart Glasses. Envision. Retrieved Aug 1, 2023 from https://www.letsenvision.com/glasses
[12]
Hugo Fernandes, Paulo Costa, Vitor Filipe, Hugo Paredes, and João Barroso. 2019. A review of assistive spatial orientation and navigation technologies for the visually impaired. Universal Access in the Information Society 18 (2019), 155–168.
[13]
Dieter Fox, Wolfram Burgard, and Sebastian Thrun. 1997. The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine 4, 1 (1997), 23–33.
[14]
Wei Gao, David Hsu, Wee Sun Lee, Shengmei Shen, and Karthikk Subramanian. 2017. Intention-net: Integrating planning and deep learning for goal-directed autonomous navigation. In Conference on robot learning. PMLR, 185–194.
[15]
João Guerreiro, Daisuke Sato, Saki Asakawa, Huixu Dong, Kris M Kitani, and Chieko Asakawa. 2019. Cabot: Designing and evaluating an autonomous navigation robot for blind people. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, NY, USA, 68–82. https://doi.org/10.1145/3308561.3353771
[16]
Guiding Eyes for the Blind 2023. Guide Dogs 101. Guiding Eyes for the Blind. Retrieved July 16, 2023 from https://www.guidingeyes.org/guide-dogs-101/
[17]
Kaveh Akbari Hamed, Vinay R Kamidi, Wen-Loong Ma, Alexander Leonessa, and Aaron D Ames. 2019. Hierarchical and safe motion control for cooperative locomotion of robotic guide dogs and humans: A hybrid systems approach. IEEE Robotics and Automation Letters 5, 1 (2019), 56–63.
[18]
Sandra G Hart and Lowell E Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in psychology. Vol. 52. Elsevier, 139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
[19]
Wilko Heuten, Niels Henze, Susanne Boll, and Martin Pielot. 2008. Tactile wayfinder: a non-visual support system for wayfinding. In Proceedings of the 5th Nordic conference on Human-computer interaction: building bridges. ACM, New York, NY, USA, 172–181. https://doi.org/10.1145/1463160.1463179
[20]
Hochul Hwang, Tim Xia, Ibrahima Keita, Ken Suzuki, Joydeep Biswas, Sunghoon I Lee, and Donghyun Kim. 2023. System configuration and navigation of a guide dog robot: Toward animal guide dog-level guiding work. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, IEEE, Piscataway, NJ, USA, 9778–9784. https://doi.org/10.1109/ICRA48891.2023.10160573
[21]
Watthanasak Jeamwatthanachai, Mike Wald, and Gary Wills. 2019. Indoor navigation by blind people: Behaviors and challenges in unfamiliar spaces and buildings. British Journal of Visual Impairment 37, 2 (2019), 140–153. https://doi.org/10.1177/0264619619833723
[22]
Lise A Johnson and Charles M Higgins. 2006. A navigation aid for the blind using tactile-visual sensory substitution. In 2006 International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, IEEE, United States, 6289–6292.
[23]
Robert K Katzschmann, Brandon Araki, and Daniela Rus. 2018. Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, 3 (2018), 583–593.
[24]
Seita Kayukawa, Daisuke Sato, Masayuki Murata, Tatsuya Ishihara, Akihiro Kosugi, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2022. How Users, Facility Managers, and Bystanders Perceive and Accept a Navigation Robot for Visually Impaired People in Public Buildings. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, IEEE Press, 546–553. https://doi.org/10.1109/RO-MAN53752.2022.9900717
[25]
Seita Kayukawa, Daisuke Sato, Masayuki Murata, Tatsuya Ishihara, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2023. Enhancing Blind Visitor’s Autonomy in a Science Museum Using an Autonomous Navigation Robot. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–14. https://doi.org/10.1145/3544548.3581220
[26]
Sulaiman Khan, Shah Nazir, and Habib Ullah Khan. 2021. Analysis of navigation assistants for blind and visually impaired people: A systematic review. IEEE access 9 (2021), 26712–26734. https://doi.org/10.1109/ACCESS.2021.3052415
[27]
KRVision 2023. KRVision Smart Glasses. KRVision. Retrieved August 20, 2023 from http://www.krvision.cn/cpjs/
[28]
Bineeth Kuriakose, Raju Shrestha, and Frode Eika Sandnes. 2022. Tools and technologies for blind and visually impaired navigation support: a review. IETE Technical Review 39, 1 (2022), 3–18. https://doi.org/10.1080/02564602.2020.1819893
[29]
Masaki Kuribayashi, Tatsuya Ishihara, Daisuke Sato, Jayakorn Vongkulbhisal, Karnik Ram, Seita Kayukawa, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2023. PathFinder: Designing a Map-less Navigation System for Blind People in Unfamiliar Buildings. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–16. https://doi.org/10.1145/3544548.3580687
[30]
Gerard Lacey and Shane MacNamara. 2000. Context-aware shared control of a robot mobility aid for the elderly blind. The International Journal of Robotics Research 19, 11 (2000), 1054–1065. https://doi.org/10.1177/02783640022067968
[31]
Nick Lavars. 2020. Mobility device for the blind works like a handheld robotic guide dog. Retrieved May 1, 2023 from https://newatlas.com/robotics/theia-blind-handheld-robotic-guide-dog/
[32]
Kyungjun Lee, Daisuke Sato, Saki Asakawa, Hernisa Kacorri, and Chieko Asakawa. 2020. Pedestrian detection with wearable cameras for the blind: A two-way perspective. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376398
[33]
Guanhong Liu, Tianyu Yu, Chun Yu, Haiqing Xu, Shuchang Xu, Ciyuan Yang, Feng Wang, Haipeng Mi, and Yuanchun Shi. 2021. Tactile compass: Enabling visually impaired people to follow a path with continuous directional feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–13. https://doi.org/10.1145/3411764.3445644
[34]
Jack M Loomis, Reginald G Golledge, and Roberta L Klatzky. 1998. Navigation system for the blind: Auditory display modes and guidance. Presence 7, 2 (1998), 193–203. https://doi.org/10.1162/105474698565677
[35]
Lorna Marquès-Brocksopp. 2015. How does a dog attack on a guide dog affect the wellbeing of a guide dog owner? British Journal of Visual Impairment 33, 1 (2015), 5–18. https://doi.org/10.1177/0264619614553859
[36]
A Allan Melvin, B Prabu, R Nagarajan, Bukhari Illias, A Allan Melvin, B Prabu, R Nagarajan, I Bukhari, and AA Melvin. 2009. ROVI: a robot for visually impaired for collision-free navigation. In Proc. of the International Conference on Man-Machine Systems (ICoMMS 2009). 3B5–1.
[37]
B.M. Muir. 1989. Operators' Trust in and Use of Automatic Controllers in a Supervisory Process Control Task. University of Toronto. https://books.google.com.sg/books?id=lESrjgEACAAJ
[38]
Amal Nanavati, Xiang Zhi Tan, and Aaron Steinfeld. 2018. Coupled indoor navigation for people who are blind. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 201–202. https://doi.org/10.1145/3173386.3176976
[39]
Orcam 2023. Orcam MyEye. Orcam. Retrieved September 1, 2023 from https://www.orcam.com/en-us/orcam-myeye
[40]
Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, Andrew Y Ng, 2009. ROS: an open-source Robot Operating System. In ICRA workshop on open source software, Vol. 3. Kobe, Japan, 5.
[41]
Lisa Ran, Sumi Helal, and Steve Moore. 2004. Drishti: an integrated indoor/outdoor blind navigation system and service. In Second IEEE Annual Conference on Pervasive Computing and Communications, 2004. Proceedings of the. IEEE, 23–30.
[42]
Vinitha Ranganeni, Mike Sinclair, Eyal Ofek, Amos Miller, Jonathan Campbell, Andrey Kolobov, and Edward Cutrell. 2023. Exploring Levels of Control for a Navigation Assistant for Blind Travelers. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA, 4–12. https://doi.org/10.1145/3568162.3578630
[43]
Santiago Real and Alvaro Araujo. 2019. Navigation systems for the blind and visually impaired: Past work, challenges, and open problems. Sensors 19, 15 (2019), 3404. https://doi.org/10.3390/s19153404
[44]
Radu Bogdan Rusu and Steve Cousins. 2011. 3d is here: Point cloud library (pcl). In 2011 IEEE international conference on robotics and automation. IEEE, 1–4. https://doi.org/10.1109/ICRA.2011.5980567
[45]
Shozo Saegusa, Yuya Yasuda, Yoshitaka Uratani, Eiichirou Tanaka, Toshiaki Makino, and Jen-Yuan Chang. 2011. Development of a guide-dog robot: human–robot interface considering walking conditions for a visually handicapped person. Microsystem Technologies 17 (2011), 1169–1174.
[46]
Daisuke Sato, Uran Oh, Kakuya Naito, Hironobu Takagi, Kris Kitani, and Chieko Asakawa. 2017. Navcog3: An evaluation of a smartphone-based blind indoor navigation assistant with semantic features in a large-scale environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, NY, USA, 270–279. https://doi.org/10.1145/3340319
[47]
Kristen Shinohara and Jacob O Wobbrock. 2011. In the shadow of misperception: assistive technology use and social interactions. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York, NY, USA, 705–714. https://doi.org/10.1145/1978942.1979044
[48]
Shraga Shoval, Iwan Ulrich, and Johann Borenstein. 2003. NavBelt and the Guide-Cane [obstacle-avoidance systems for the blind and visually impaired]. IEEE robotics & automation magazine 10, 1 (2003), 9–20. https://doi.org/10.1109/MRA.2003.1191706
[49]
Patrick Slade, Arjun Tambe, and Mykel J Kochenderfer. 2021. Multimodal sensing and intuitive steering assistance improve navigation and mobility for people with impaired vision. Science Robotics 6, 59 (2021), eabg6594. https://doi.org/10.1126/scirobotics.abg6594
[50]
Machine intelligence 2020. Smartpi: one stop intelligent voice customization platform. Machine intelligence. Retrieved July 20, 2023 from https://www.aimachip.com/index.php?lang=en
[51]
Pawel Strumillo, Michal Bujacz, Przemyslaw Baranski, Piotr Skulimowski, Piotr Korbel, Mateusz Owczarek, Krzysztof Tomalczyk, Alin Moldoveanu, and Runar Unnthorsson. 2018. Different approaches to aiding blind persons in mobility and navigation in the “Naviton” and “Sound of Vision” projects. Mobility of Visually Impaired People: Fundamentals and ICT Assistive Technologies (2018), 435–468. https://doi.org/10.1007/978-3-319-54446-5_15
[52]
Susumu Tachi and Kiyoshi Komoriya. 1984. Guide dog robot. Autonomous Mobile Robots: Control, Planning, and Architecture (1984), 360–367.
[53]
From Kay to Zee. 2019. Zenith in Action. Watching my Guide Dog Work. Youtube. Retrieved July 15, 2023 from https://www.youtube.com/watch?v=IRdRQ2KTkLI
[54]
Kazuteru Tobita, Katsuyuki Sagayama, and Hironori Ogawa. 2017. Examination of a guidance robot for visually impaired people. Journal of Robotics and Mechatronics 29, 4 (2017), 720–727. https://doi.org/10.20965/jrm.2017.p0720
[55]
Iwan Ulrich and Johann Borenstein. 2001. The GuideCane-applying mobile robot technologies to assist the visually impaired. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 31, 2 (2001), 131–136. https://doi.org/10.1109/3468.911370
[56]
Andreas Wachaja, Pratik Agarwal, Mathias Zink, Miguel Reyes Adame, Knut Möller, and Wolfram Burgard. 2017. Navigating blind people with walking impairments using a smart walker. Autonomous Robots 41 (2017), 555–573. https://doi.org/10.1007/s10514-016-9595-8
[57]
Yuanlong Wei, Xiangxin Kou, and Min Cheol Lee. 2014. A new vision and navigation research for a guide-dog robot system in urban system. In 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics. IEEE, 1290–1295. https://doi.org/10.1109/AIM.2014.6878260
[58]
WeWalk 2020. WeWalk Smart Cane. WeWalk. Retrieved Aug 15, 2023 from https://wewalk.io/en/
[59]
World Health Organization 2023. Blindness and vision impairment. World Health Organization. Retrieved Sep 1, 2023 from https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
[60]
Cindy Wiggett-Barnard and Henry Steel. 2008. The experience of owning a guide dog. Disability and Rehabilitation 30, 14 (2008), 1014–1026. https://doi.org/10.1080/09638280701466517
[61]
Anxing Xiao, Wenzhe Tong, Lizhi Yang, Jun Zeng, Zhongyu Li, and Koushil Sreenath. 2021. Robotic guide dog: Leading a human with leash-guided hybrid physical interaction. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, IEEE, Piscataway, NJ, USA, 11470–11476. https://doi.org/10.1109/icra48506.2021.9561786
[62]
Wei Xu, Yixi Cai, Dongjiao He, Jiarong Lin, and Fu Zhang. 2022. Fast-lio2: Fast direct lidar-inertial odometry. IEEE Transactions on Robotics 38, 4 (2022), 2053–2073. https://doi.org/10.1109/tro.2022.3141876
[63]
Wei Xu and Fu Zhang. 2021. Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter. IEEE Robotics and Automation Letters 6, 2 (2021), 3317–3324. https://doi.org/10.1109/LRA.2021.3064227
[64]
Kumar Yelamarthi, Daniel Haas, Daniel Nielsen, and Shawn Mothersell. 2010. RFID and GPS integrated navigation system for the visually impaired. In 2010 53rd IEEE International Midwest Symposium on Circuits and Systems. IEEE, 1149–1152.
[65]
Limin Zeng, Björn Einert, Alexander Pitkin, and Gerhard Weber. 2018. Hapticrein: Design and development of an interactive haptic rein for a guidance robot. In Computers Helping People with Special Needs: 16th International Conference, ICCHP 2018, Linz, Austria, July 11-13, 2018, Proceedings, Part II 16. Springer, 94–101. https://doi.org/10.1007/978-3-319-94274-2_14
