Li et al., 2012 - Google Patents
Robot semantic mapping through wearable sensor-based human activity recognition (Li et al., 2012)
- Document ID
- 17614994293391227420
- Author
- Li G
- Zhu C
- Du J
- Cheng Q
- Sheng W
- Chen H
- Publication year
- 2012
- Publication venue
- 2012 IEEE International Conference on Robotics and Automation
Snippet
Semantic information can help both humans and robots to understand their environments better. In order to obtain semantic information efficiently and link it to a metric map, we present a semantic mapping approach through human activity recognition in an indoor …
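The snippet describes linking recognized human activities to positions in a metric map so that room-level semantic labels can be inferred. As a purely illustrative sketch (not the authors' actual pipeline), the toy Python below assumes a hypothetical activity-to-room prior and simple per-grid-cell voting:

```python
from collections import Counter, defaultdict

# Hypothetical mapping from recognized daily activities (e.g., from wearable
# inertial sensors) to the room label they most plausibly indicate. These
# priors are illustrative assumptions, not values from the paper.
ACTIVITY_TO_ROOM = {
    "cooking": "kitchen",
    "eating": "dining room",
    "watching_tv": "living room",
    "sleeping": "bedroom",
    "brushing_teeth": "bathroom",
}

def semantic_labels(observations, cell_size=1.0):
    """Vote a room label for each metric-map grid cell.

    observations: iterable of (x, y, activity) tuples, where (x, y) is the
    person's position in the robot's metric map at the time the activity
    was recognized.
    Returns {(cell_x, cell_y): most_common_room_label}.
    """
    votes = defaultdict(Counter)
    for x, y, activity in observations:
        room = ACTIVITY_TO_ROOM.get(activity)
        if room is None:
            continue  # unrecognized or uninformative activity
        cell = (int(x // cell_size), int(y // cell_size))
        votes[cell][room] += 1
    return {cell: counts.most_common(1)[0][0] for cell, counts in votes.items()}

if __name__ == "__main__":
    obs = [
        (1.2, 0.8, "cooking"),
        (1.4, 0.9, "cooking"),
        (1.3, 0.7, "eating"),
        (5.6, 3.2, "sleeping"),
    ]
    print(semantic_labels(obs))
    # e.g. {(1, 0): 'kitchen', (5, 3): 'bedroom'}
```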
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/00221—Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
          - G06K9/00268—Feature extraction; Face representation
            - G06K9/00281—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/00624—Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
          - G06K9/00771—Recognising scenes under surveillance, e.g. with Markovian modelling of scene activity
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/00362—Recognising human body or animal bodies, e.g. vehicle occupant, pedestrian; Recognising body parts, e.g. hand
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6267—Classification techniques
            - G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/20—Analysis of motion
Similar Documents
Publication | Title
---|---
Li et al. | Robot semantic mapping through wearable sensor-based human activity recognition
Liu et al. | Robotic room-level localization using multiple sets of sonar measurements
Piyathilaka et al. | Human activity recognition for domestic robots
Field et al. | Motion capture in robotics review
Piyathilaka et al. | Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features
Jalal et al. | Individual detection-tracking-recognition using depth activity images
Sheng et al. | Robot semantic mapping through human activity recognition: A wearable sensing and computing approach
Hernandez-Penaloza et al. | A multi-sensor fusion scheme to increase life autonomy of elderly people with cognitive problems
TW200933538A (en) | Nursing system
Yan et al. | A hybrid probabilistic neural model for person tracking based on a ceiling-mounted camera
Zhu et al. | Recognizing human daily activity using a single inertial sensor
Hafeez et al. | Multi-fusion sensors for action recognition based on discriminative motion cues and random forest
Azzam et al. | A stacked LSTM-based approach for reducing semantic pose estimation error
Chamorro et al. | Neural network based lidar gesture recognition for realtime robot teleoperation
Mahmud et al. | A vision based voice controlled indoor assistant robot for visually impaired people
Hu et al. | Bayesian fusion of ceiling mounted camera and laser range finder on a mobile robot for people detection and localization
Weinrich et al. | Appearance-based 3D upper-body pose estimation and person re-identification on mobile robots
Kibria et al. | Creation of a Cost-Efficient and Effective Personal Assistant Robot using Arduino & Machine Learning Algorithm
Tenguria et al. | Design framework for general purpose object recognition on a robotic platform
Arndt et al. | Optimized mobile indoor robot navigation through probabilistic tracking of people in a wireless sensor network
Sanjay et al. | Person follower robotic system
Lafuente-Arroyo et al. | LIDAR signature based node detection and classification in graph topological maps for indoor navigation
Sonia et al. | A voting-based sensor fusion approach for human presence detection
Rashed et al. | Robustly tracking people with lidars in a crowded museum for behavioral analysis
Roberti et al. | An energy saving approach to active object recognition and localization