Zhang et al., 2013 - Google Patents
An egocentric vision based assistive co-robot (Zhang et al., 2013)
- Document ID
- 6891501784316817514
- Author
- Zhang J
- Zhuang L
- Wang Y
- Zhou Y
- Meng Y
- Hua G
- Publication year
- 2013
- Publication venue
- 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR)
Snippet
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward-looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision …
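The snippet describes the system only at a high level: a user-worn forward-looking camera whose output keeps the user in the robot's navigation control loop. The sketch below is a minimal, hypothetical illustration of such a user-in-the-loop pipeline; the class and function names, the brightest-pixel "target detector", and the bearing-to-steering mapping are all illustrative assumptions, not details taken from the paper or from any specific robot platform.

```python
# Hypothetical sketch of a user-in-the-loop egocentric navigation pipeline.
# Camera capture, target detection, and the robot interface are stand-ins so
# the example runs without hardware or external libraries.

import math
import random
from dataclasses import dataclass


@dataclass
class NavCommand:
    """A simple differential-drive style command: forward speed and turn rate."""
    linear: float   # m/s
    angular: float  # rad/s


def capture_egocentric_frame(width: int = 64, height: int = 48) -> list[list[int]]:
    """Stand-in for grabbing a frame from the glasses-mounted camera.

    Returns a synthetic grayscale image (list of pixel rows).
    """
    return [[random.randint(0, 255) for _ in range(width)] for _ in range(height)]


def detect_user_target(frame: list[list[int]]) -> tuple[float, float] | None:
    """Stand-in for the egocentric perception step (e.g. a recognized gesture
    or gazed-at landmark) that selects a navigation target.

    Returns normalized image coordinates (x, y) in [0, 1], or None if nothing
    is detected. Here we simply pretend the brightest pixel is the target.
    """
    best, best_xy = -1, None
    for row_idx, row in enumerate(frame):
        for col_idx, value in enumerate(row):
            if value > best:
                best, best_xy = value, (col_idx / len(row), row_idx / len(frame))
    return best_xy


def target_to_command(target: tuple[float, float],
                      fov_rad: float = math.radians(60)) -> NavCommand:
    """Map the target's horizontal image position to a steering command.

    A target left of the image center yields a left turn, right of center a
    right turn; forward speed is constant while a target is visible.
    """
    x, _ = target
    bearing = (x - 0.5) * fov_rad  # approximate bearing from image x-coordinate
    return NavCommand(linear=0.3, angular=-1.5 * bearing)


def send_to_robot(cmd: NavCommand) -> None:
    """Stand-in for the robot's velocity interface (e.g. a serial link or topic)."""
    print(f"cmd: linear={cmd.linear:.2f} m/s, angular={cmd.angular:+.2f} rad/s")


def control_loop(num_steps: int = 5) -> None:
    """Perceive, interpret the user's intent, command the robot, repeat."""
    for _ in range(num_steps):
        frame = capture_egocentric_frame()
        target = detect_user_target(frame)
        if target is None:
            send_to_robot(NavCommand(linear=0.0, angular=0.0))  # stop without user input
        else:
            send_to_robot(target_to_command(target))


if __name__ == "__main__":
    control_loop()
```

Stopping the robot whenever no user input is detected is one simple way to keep the wearer actively engaged in the loop, which is the interaction style the snippet emphasizes.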
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00335—Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
- G06K9/00355—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
- G06K9/00288—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; Recognising body parts, e.g. hand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
Similar Documents
Publication | Title |
---|---|
Islam et al. | Person-following by autonomous robots: A categorical overview |
Wang et al. | Enabling independent navigation for visually impaired people through a wearable vision-based feedback system |
US11126257B2 (en) | System and method for detecting human gaze and gesture in unconstrained environments |
Chuang et al. | Deep trail-following robotic guide dog in pedestrian environments for people who are blind and visually impaired-learning from virtual and real worlds |
Van den Bergh et al. | Real-time 3D hand gesture interaction with a robot for understanding directions from humans |
Gharani et al. | Context-aware obstacle detection for navigation by visually impaired |
Riek | The social co-robotics problem space: Six key challenges |
CN114080583A (en) | Visual teaching and repetitive motion manipulation system |
Martinez-Gomez et al. | A taxonomy of vision systems for ground mobile robots |
WO2021143543A1 (en) | Robot and method for controlling same |
Randelli et al. | Knowledge acquisition through human–robot multimodal interaction |
Zhang et al. | An egocentric vision based assistive co-robot |
Taylor et al. | Robot perception of human groups in the real world: State of the art |
Bengtson et al. | A review of computer vision for semi-autonomous control of assistive robotic manipulators (ARMs) |
Grewal et al. | Autonomous wheelchair navigation in unmapped indoor environments |
Shih et al. | Dlwv2: A deep learning-based wearable vision-system with vibrotactile-feedback for visually impaired people to reach objects |
Graf et al. | Toward holistic scene understanding: A transfer of human scene perception to mobile robots |
US11654573B2 (en) | Methods and systems for enabling human robot interaction by sharing cognition |
Marinov et al. | Pose2drone: A skeleton-pose-based framework for human-drone interaction |
Lidoris et al. | The autonomous city explorer project: Aims and system overview |
Shahria et al. | Vision-Based Object Manipulation for Activities of Daily Living Assistance Using Assistive Robot |
Ghidary et al. | Multi-modal human robot interaction for map generation |
Hanheide et al. | Combining environmental cues & head gestures to interact with wearable devices |
Hamlet et al. | A gesture recognition system for mobile robots that learns online |
Kumar et al. | Deep Learning and Fuzzy Decision Support System for visually impaired persons |