Tee et al., 2022 - Google Patents
A framework for tool cognition in robots without prior tool learning or observation
- Document ID
- 15841632774882831208
- Authors
- Tee K
- Cheong S
- Li J
- Ganesh G
- Publication year
- 2022
- Publication venue
- Nature Machine Intelligence
Snippet
Human tool use prowess distinguishes us from other animals. In many scenarios, a human is able to recognize objects seen for the first time as potential tools for a task and use them without requiring any learning. Here we propose a framework to enable similar abilities in …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/50—Computer-aided design
      - G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
        - G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
          - G06K9/46—Extraction of features or characteristics of the image
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
      - G06N5/00—Computer systems utilising knowledge based models
        - G06N5/04—Inference methods or devices
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
Similar Documents
Publication | Title
---|---
Newbury et al. | Deep learning approaches to grasp synthesis: A review
Levine et al. | Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
Levine et al. | End-to-end training of deep visuomotor policies
Finn et al. | Deep visual foresight for planning robot motion
Saxena et al. | Robotic grasping of novel objects using vision
Bohg et al. | Data-driven grasp synthesis—a survey
Martinez-Hernandez et al. | Feeling the shape: Active exploration behaviors for object recognition with a robotic hand
Tee et al. | A framework for tool cognition in robots without prior tool learning or observation
Ottenhaus et al. | Visuo-haptic grasping of unknown objects based on Gaussian process implicit surfaces and deep learning
Chen et al. | Learning robust real-world dexterous grasping policies via implicit shape augmentation
Pozzi et al. | Hand closure model for planning top grasps with soft robotic hands
Staretu et al. | Leap Motion device used to control a real anthropomorphic gripper
Faria et al. | Knowledge-based reasoning from human grasp demonstrations for robot grasp synthesis
Liu et al. | RDT-1B: a diffusion foundation model for bimanual manipulation
Xu et al. | Dexterous manipulation from images: Autonomous real-world RL via substep guidance
Bütepage et al. | From visual understanding to complex object manipulation
Tang et al. | Selective object rearrangement in clutter
Matsushima et al. | World Robot Challenge 2020 – Partner Robot: a data-driven approach for room tidying with mobile manipulator
Zhang et al. | Digital twin-enabled grasp outcomes assessment for unknown objects using visual-tactile fusion perception
Kent et al. | Construction of a 3D object recognition and manipulation database from grasp demonstrations
Hu et al. | REBOOT: Reuse data for bootstrapping efficient real-world dexterous manipulation
Alaaudeen et al. | Intelligent robotics harvesting system process for fruits grasping prediction
Li et al. | Interactive learning for multi-finger dexterous hand: A model-free hierarchical deep reinforcement learning approach
de La Bourdonnaye et al. | Stage-wise learning of reaching using little prior knowledge
Kang et al. | Team Tidyboy at the WRS 2020: A modular software framework for home service robots