
Deep trail-following robotic guide dog in pedestrian environments for people who are blind and visually impaired-learning from virtual and real worlds

Chuang et al., 2018

Document ID
17073194904710946127
Author
Chuang T
Lin N
Chen J
Hung C
Huang Y
Teng C
Huang H
Yu L
Giarré L
Wang H
Publication year
2018
Publication venue
2018 IEEE International Conference on Robotics and Automation (ICRA)

Snippet

Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide …
Continue reading at nichinglin.github.io (PDF)
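The snippet stops before describing the method, but the title frames the system as a learned, camera-based trail follower. Purely as an illustration of that general idea, and not as the authors' architecture or training setup, the sketch below treats trail following as a small convolutional network that maps one camera frame to a discrete heading; the network shape, image size, and the three heading labels are assumptions made only for this example.

```python
# Minimal sketch: image-based trail following as a 3-way heading classifier.
# Everything specific here (layer sizes, labels, input resolution) is assumed,
# not taken from the paper.

import torch
import torch.nn as nn

HEADINGS = ["turn_left", "go_straight", "turn_right"]  # assumed discrete actions


class TrailFollowNet(nn.Module):
    def __init__(self, num_classes: int = len(HEADINGS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a 64-dim image descriptor
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB frames, shape (N, 3, H, W), values in [0, 1]
        h = self.features(x).flatten(1)
        return self.classifier(h)  # unnormalised scores over headings


if __name__ == "__main__":
    model = TrailFollowNet()
    frame = torch.rand(1, 3, 120, 160)  # one placeholder 160x120 camera frame
    logits = model(frame)
    decision = HEADINGS[int(logits.argmax(dim=1))]
    print("predicted heading:", decision)
```

In this kind of setup, the classifier would be trained on labelled frames collected from both simulated and real pedestrian scenes, which is presumably what "learning from virtual and real worlds" in the title refers to.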

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00624 Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/36 Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K 9/46 Extraction of features or characteristics of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0255 Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Similar Documents

Chuang et al. Deep trail-following robotic guide dog in pedestrian environments for people who are blind and visually impaired-learning from virtual and real worlds
Islam et al. Person-following by autonomous robots: A categorical overview
Manjari et al. A survey on assistive technology for visually impaired
Bauer et al. The autonomous city explorer: Towards natural human-robot interaction in urban environments
Gharani et al. Context-aware obstacle detection for navigation by visually impaired
Martinez-Gomez et al. A taxonomy of vision systems for ground mobile robots
Chatterjee et al. Vision based autonomous robot navigation: algorithms and implementations
Pradeep et al. A wearable system for the visually impaired
Kumar et al. A Deep Learning Based Model to Assist Blind People in Their Navigation.
Kannapiran et al. Go-CHART: A miniature remotely accessible self-driving car robot
US20240181637A1 (en) Autonomous humanoid robot
Tan et al. Flying guide dog: Walkable path discovery for the visually impaired utilizing drones and transformer-based semantic segmentation
Madake et al. A qualitative and quantitative analysis of research in mobility technologies for visually impaired people
Silva et al. Navigation and obstacle avoidance: A case study using Pepper robot
Zhang et al. An egocentric vision based assistive co-robot
Kleiner et al. Robocuprescue-robot league team rescuerobots freiburg (germany)
Farias et al. Navigation control of the Khepera IV model with OpenCV in V-REP simulator
Yang et al. Research into the application of AI robots in community home leisure interaction
Shruthi et al. Path planning for autonomous car
Tapu et al. ALICE: A smartphone assistant used to increase the mobility of visual impaired people
Shakeel Service robot for the visually impaired: Providing navigational assistance using Deep Learning
Khusheef Investigation on the mobile robot navigation in an unknown environment
Tawil Towards only-vision autonomous wheelchair: A deep learning obstacle detection and image-based avoidance
Charalampous et al. Social mapping on RGB-D scenes
Menegatti et al. Intelligent Autonomous Systems 13: Proceedings of the 13th International Conference IAS-13