Coppola et al., 2022 - Google Patents
Master of puppets: multi-modal robot activity segmentation from teleoperated demonstrations (Coppola et al., 2022)
- Document ID
- 16942435071281289211
- Author
- Coppola C
- Jamone L
- Publication year
- 2022
- Publication venue
- 2022 IEEE International Conference on Development and Learning (ICDL)
Snippet
Programming robots for complex tasks in unstructured settings (e.g., light manufacturing, extreme environments) cannot be accomplished solely by analytical methods. Learning from teleoperated human demonstrations is a promising approach to decrease the programming …
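The snippet and the classification codes listed below point to PCA-based feature extraction and clustering applied to multi-modal data. As a rough illustration of that kind of pipeline only — not the authors' implementation — the following minimal sketch stacks several teleoperation modalities per timestep, reduces them with PCA, and clusters timesteps into activity segments; all function names, array shapes, and parameter values are assumptions made for the example.

```python
# Minimal, hypothetical sketch (NOT the authors' method): segment a
# multi-modal teleoperation stream into activities by stacking modalities,
# reducing them with PCA, and clustering timesteps. All shapes and
# parameter values are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def segment_demonstration(arm_joints, gripper_state, wrist_wrench, n_activities=4):
    """Return one activity label per timestep of a teleoperated demonstration.

    arm_joints    : (T, J) joint positions of the teleoperated arm
    gripper_state : (T, 1) gripper opening
    wrist_wrench  : (T, 6) force/torque readings
    """
    # Concatenate the modalities into a single per-timestep feature vector.
    features = np.hstack([arm_joints, gripper_state, wrist_wrench])
    # Project onto a low-dimensional subspace before clustering (cf. G06K9/6247).
    reduced = PCA(n_components=5).fit_transform(features)
    # Cluster timesteps; contiguous runs of one label form activity segments.
    return KMeans(n_clusters=n_activities, n_init=10).fit_predict(reduced)

# Toy usage with synthetic data: 300 timesteps, a 7-DoF arm.
T = 300
labels = segment_demonstration(
    arm_joints=np.random.randn(T, 7),
    gripper_state=np.random.rand(T, 1),
    wrist_wrench=np.random.randn(T, 6),
)
# Turn per-timestep labels into (start, end, label) segments.
boundaries = np.flatnonzero(np.diff(labels)) + 1
segments = [(int(s[0]), int(s[-1]), int(labels[s[0]]))
            for s in np.split(np.arange(T), boundaries)]
print(segments)
```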
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
- G06K9/6247—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6267—Classification techniques
- G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/00335—Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; Lip-reading
- G06K9/00355—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
- G06K9/46—Extraction of features or characteristics of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39376—Hierarchical, learning, recognition and skill level and adaptation servo level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
Similar Documents
Publication | Title
---|---
Newbury et al. | Deep learning approaches to grasp synthesis: A review
Mohammed et al. | Review of deep reinforcement learning-based object grasping: Techniques, open challenges, and recommendations
Sahbani et al. | An overview of 3D object grasp synthesis algorithms
Li et al. | Survey on mapping human hand motion to robotic hands for teleoperation
Kaboli et al. | A tactile-based framework for active object learning and discrimination using multimodal robotic skin
Bekiroglu et al. | Learning tactile characterizations of object- and pose-specific grasps
Madan et al. | Recognition of haptic interaction patterns in dyadic joint object manipulation
Kasaei et al. | Towards lifelong assistive robotics: A tight coupling between object perception and manipulation
Faria et al. | Extracting data from human manipulation of objects towards improving autonomous robotic grasping
Geng et al. | RLAfford: End-to-end affordance learning for robotic manipulation
Simão et al. | Natural control of an industrial robot using hand gesture recognition with neural networks
Ruppel et al. | Learning object manipulation with dexterous hand-arm systems from human demonstration
Panda et al. | Single and multiple view support order prediction in clutter for manipulation
Dong et al. | A review of robotic grasp detection technology
Coppola et al. | Master of puppets: multi-modal robot activity segmentation from teleoperated demonstrations
Do et al. | Inter-finger small object manipulation with DenseTact optical tactile sensor
Eiband et al. | Identification of common force-based robot skills from the human and robot perspective
El-Khoury et al. | 3D objects grasps synthesis: A survey
Amatya et al. | Real time Kinect based robotic arm manipulation with five degrees of freedom
Palm et al. | Recognition of human grasps by time-clustering and fuzzy modeling
Wang et al. | Robot programming by demonstration with a monocular RGB camera
Qiu et al. | Hand pose-based task learning from visual observations with semantic skill extraction
Romero et al. | Human-to-robot mapping of grasps
Bhattacharjee et al. | Inferring object properties from incidental contact with a tactile-sensing forearm
Fresnillo et al. | A method for understanding and digitizing manipulation activities using programming by demonstration in robotic applications