Medeiros et al., 2021 - Google Patents
UAV target-selection: 3D pointing interface system for large-scale environment
- Document ID: 11636427556555879096
- Authors: A. Medeiros, P. Ratsamee, J. Orlosky, Y. Uranishi, M. Higashida, H. Takemura
- Publication year: 2021
- Publication venue: 2021 IEEE International Conference on Robotics and Automation (ICRA)
Snippet
This paper presents a 3D pointing interface application to signal a UAV's target in a large-scale environment. This system enables UAVs equipped with a monocular camera to determine which window of a building is selected by a human user in large-scale indoor or …
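The snippet describes selecting a building window from a human pointing gesture observed by a monocular camera. The paper's own pipeline is not reproduced here; as an illustrative sketch only, one common approach is to cast a ray along the user's arm (e.g. shoulder to wrist) and intersect it with the building facade, modeled as a plane, then pick the nearest known window center. All coordinates, names, and parameters below are hypothetical.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane; return the 3D hit point, or None on a miss."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the facade
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # facade lies behind the user
    return origin + t * direction

def select_window(shoulder, wrist, facade_point, facade_normal, window_centers):
    """Return the index of the window center closest to where the
    shoulder->wrist ray hits the facade plane, or None if there is no hit."""
    direction = wrist - shoulder
    direction = direction / np.linalg.norm(direction)
    hit = ray_plane_intersection(shoulder, direction, facade_point, facade_normal)
    if hit is None:
        return None
    dists = [np.linalg.norm(hit - w) for w in window_centers]
    return int(np.argmin(dists))

# Hypothetical scene: user 10 m from a facade lying in the plane z = 10.
shoulder = np.array([0.0, 1.5, 0.0])
wrist = np.array([0.3, 1.6, 0.5])
windows = [np.array([2.0, 3.0, 10.0]), np.array([6.0, 3.5, 10.0])]
idx = select_window(shoulder, wrist,
                    facade_point=np.array([0.0, 0.0, 10.0]),
                    facade_normal=np.array([0.0, 0.0, 1.0]),
                    window_centers=windows)
```

In this toy configuration the pointing ray exits the shoulder toward (6.0, 3.5, 10.0), so the second window is selected. In practice the arm keypoints would come from pose estimation and the facade plane from a map or structure-from-motion, which is where most of the real system's difficulty lies.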
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06T7/00—Image analysis
- G06K9/46—Extraction of features or characteristics of the image
- G06K9/62—Methods or arrangements for recognition using electronic means
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G06T2207/00—Indexing scheme for image analysis or image enhancement
Similar Documents
Publication | Title
---|---
KR101865655B1 (en) | Method and apparatus for providing service for augmented reality interaction
Wan et al. | Teaching robots to do object assembly using multi-modal 3d vision
US20180190014A1 | Collaborative multi sensor system for site exploitation
Peng et al. | Globally-optimal contrast maximisation for event cameras
Taryudi et al. | Eye to hand calibration using ANFIS for stereo vision-based object manipulation system
Ye et al. | 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
KR102075844B1 (en) | Localization system merging results of multi-modal sensor based positioning and method thereof
Medeiros et al. | 3D pointing gestures as target selection tools: guiding monocular UAVs during window selection in an outdoor environment
JP2022502791A (en) | Systems and methods for estimating robot posture, robots, and storage media
JP2021177144A (en) | Information processing apparatus, information processing method, and program
Fauadi et al. | Intelligent vision-based navigation system for mobile robot: A technological review
McGreavy et al. | Next best view planning for object recognition in mobile robotics
Ishihara et al. | Deep radio-visual localization
CN104182747A (en) | Object detection and tracking method and device based on multiple stereo cameras
Manns et al. | Identifying human intention during assembly operations using wearable motion capturing systems including eye focus
Hakim et al. | Goal location prediction based on deep learning using RGB-D camera
US11366450B2 | Robot localization in a workspace via detection of a datum
Chen et al. | Design and Implementation of AMR Robot Based on RGBD, VSLAM and SLAM
Jia et al. | Autonomous vehicles navigation with visual target tracking: Technical approaches
Chikhalikar et al. | An object-oriented navigation strategy for service robots leveraging semantic information
Medeiros et al. | UAV target-selection: 3D pointing interface system for large-scale environment
Vega et al. | Robot evolutionary localization based on attentive visual short-term memory
Kondaxakis et al. | Real-time recognition of pointing gestures for robot to robot interaction
Singh et al. | Efficient deep learning-based semantic mapping approach using monocular vision for resource-limited mobile robots
Qian et al. | An improved ORB-SLAM2 in dynamic scene with instance segmentation