CN106863355A - Object recognition method for a robot, and robot - Google Patents
Object recognition method for a robot, and robot
- Publication number
- CN106863355A CN106863355A CN201611222770.6A CN201611222770A CN106863355A CN 106863355 A CN106863355 A CN 106863355A CN 201611222770 A CN201611222770 A CN 201611222770A CN 106863355 A CN106863355 A CN 106863355A
- Authority
- CN
- China
- Prior art keywords
- sample
- sample object
- images
- image
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Databases & Information Systems (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an object recognition method for a robot, and a robot. The method includes: acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images; saving the different sample images belonging to the same sample object in association with an identification label of that sample object; acquiring an image to be recognized for an object to be recognized, the image to be recognized being image data of the object to be recognized at the current viewing angle; searching all saved sample images for a sample image that matches the image to be recognized; and extracting the identification label associated with the matching sample image to complete recognition of the object to be recognized. Compared with the prior art, the method reduces the influence of viewing-angle changes on the recognition process when objects are recognized from images, greatly extends the recognition range and recognition efficiency of the robot, and improves the recognition accuracy.
Description
Technical field
The present invention relates to the field of robotics, and in particular to an object recognition method for a robot, and to a robot.
Background technology
With the continuing development of robot technology, intelligent robots are increasingly used in people's family life.

At present, most intelligent robots have a vision acquisition function. Based on this function, a robot can perform image recognition of the object currently to be recognized. A common image recognition process is to save sample image data in advance; during recognition, the image data of the object to be recognized is matched and searched against all saved sample image data to obtain the corresponding sample image data, and the image description corresponding to that sample image data is used as the recognition result.

In the above image recognition process, the key point is the matching between the sample image data and the image data of the object to be recognized. However, because both the sample image data and the image data of the object to be recognized are planar (two-dimensional) image data, the sample image data only records the object's appearance from a certain angle. When the object is imaged from another angle, even for the same object, the acquired image data differs from the previously obtained sample image data.

Therefore, even if the robot has previously collected sample image data of the current object, under the interference of viewing-angle changes the robot may still fail to recognize the object.
Summary of the invention
The invention provides an object recognition method for a robot. The method includes:
acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images;
saving the different sample images belonging to the same sample object in association with an identification label of the sample object;
acquiring an image to be recognized for an object to be recognized, the image to be recognized being image data of the object to be recognized at the current viewing angle;
searching all saved sample images for a sample image that matches the image to be recognized;
extracting the identification label associated with the matching sample image to complete recognition of the object to be recognized.
In one embodiment, images of the same sample object are acquired from a plurality of different viewing angles to obtain a plurality of different sample images by:
fixing the image acquisition position;
rotating the sample object;
performing multi-view image acquisition of the sample object while the sample object is being rotated.
In one embodiment, images of the same sample object are acquired from a plurality of different viewing angles to obtain a plurality of different sample images by:
keeping the sample object stationary;
rotating around the sample object;
performing multi-view image acquisition of the sample object while rotating around the sample object.
In one embodiment, images of the same sample object are acquired from a plurality of different viewing angles to obtain a plurality of different sample images by:
determining whether the sample object can be rotated;
when the sample object can be rotated, fixing the image acquisition position and rotating the sample object, and performing multi-view image acquisition of the sample object while the sample object is being rotated;
when the sample object cannot be rotated, rotating around the sample object, and performing multi-view image acquisition of the sample object while rotating around it.
In one embodiment, images of the same sample object are acquired from a plurality of different viewing angles to obtain a plurality of different sample images by:
changing the viewing angle of the sample object through 360 degrees in a horizontal plane;
acquiring images of the sample object at preset angular intervals during the viewing-angle change.
The invention also proposes an intelligent robot. The robot includes:
an acquisition viewing-angle transformation module configured to change the robot's viewing angle of the same sample object;
a contrast image acquisition module configured to acquire images of the sample object from a plurality of different viewing angles, while the robot changes its viewing angle of the sample object, to obtain a plurality of different sample images;
a contrast image storage module configured to save the different sample images belonging to the same sample object in association with an identification label of the sample object;
an image recognition acquisition module configured to acquire an image to be recognized for an object to be recognized, the image to be recognized being image data of the object to be recognized at the current viewing angle;
an image recognition matching module configured to search all saved sample images for a sample image that matches the image to be recognized;
an identification-label extraction module configured to extract the identification label associated with the matching sample image to complete recognition of the object to be recognized.
In one embodiment:
the acquisition viewing-angle transformation module includes a sample object rotating device configured to fix the image acquisition position and rotate the sample object;
the contrast image acquisition module is configured to perform multi-view image acquisition of the sample object while the sample object is being rotated.
In one embodiment:
the acquisition viewing-angle transformation module includes a displacement device configured to keep the sample object stationary and drive the robot to rotate around the sample object;
the contrast image acquisition module is configured to perform multi-view image acquisition of the sample object while the robot rotates around the sample object.
In one embodiment, the acquisition viewing-angle transformation module includes:
a sample object analysis module configured to determine whether the sample object can be rotated;
a sample object rotating device configured to fix the image acquisition position and rotate the sample object when the sample object can be rotated;
a displacement device configured to drive the robot to rotate around the sample object when the sample object cannot be rotated;
the contrast image acquisition module is configured to perform multi-view image acquisition of the sample object while the sample object is being rotated or while the robot rotates around the sample object.
In one embodiment:
the acquisition viewing-angle transformation module is configured to change the viewing angle of the sample object through 360 degrees in a horizontal plane;
the contrast image acquisition module is configured to acquire images of the sample object at preset angular intervals during the viewing-angle change.
Compared with the prior art, the method of the present invention reduces the influence of viewing-angle changes on the recognition process when objects are recognized from images, greatly extends the recognition range and recognition efficiency of the robot, and improves the recognition accuracy.
Further features or advantages of the invention will be set forth in the following description. Some features or advantages of the invention will become apparent from the description, or may be learned by practicing the invention. The objects and some advantages of the invention may be realized or obtained by the steps particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the invention and constitute a part of the description. Together with the embodiments of the invention, they serve to explain the invention and are not to be construed as limiting the invention. In the drawings:
Fig. 1 is a method flowchart according to an embodiment of the invention;
Fig. 2 to Fig. 4 are partial flowcharts of methods according to embodiments of the invention;
Fig. 5 is a schematic structural diagram of a robot system according to an embodiment of the invention;
Fig. 6 to Fig. 8 are schematic diagrams of partial structures of robot systems according to embodiments of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below with reference to the drawings and examples, so that those implementing the invention can fully understand how the invention applies technical means to solve technical problems, how the technical effects are achieved, and how the invention is implemented accordingly. It should be noted that, as long as no conflict arises, the embodiments of the invention and the features in those embodiments may be combined with one another, and the resulting technical solutions all fall within the scope of protection of the invention.
With the continuing development of robot technology, intelligent robots are increasingly used in people's family life.

At present, most intelligent robots have a vision acquisition function. Based on this function, a robot can perform image recognition of the object currently to be recognized. A common image recognition process is to save sample image data in advance; during recognition, the image data of the object to be recognized is matched and searched against all saved sample image data to obtain the corresponding sample image data, and the image description corresponding to that sample image data is used as the recognition result.

In the above image recognition process, the key point is the matching between the sample image data and the image data of the object to be recognized. However, because both the sample image data and the image data of the object to be recognized are planar (two-dimensional) image data, the sample image data only records the object's appearance from a certain angle. When the object is imaged from another angle, even for the same object, the acquired image data differs from the previously obtained sample image data.

Therefore, even if the robot has previously collected sample image data of the current object, under the interference of viewing-angle changes the robot may still fail to recognize the object. To address this problem, the present invention proposes an object recognition method for a robot.
In an embodiment of the present invention, recognition of an object is achieved by comparison against sample images. Specifically, when an object needs to be recognized, an image to be recognized of the object is acquired; the image to be recognized is then matched and searched against the saved sample image set to obtain the sample image in the set that matches it, thereby obtaining the object identification label (for example, the object name) corresponding to that sample image and thus recognizing the object.
In the above steps, the difference from the prior art is that, in the saved sample image set, the sample images for a given sample object are not an image sampled from a single viewing angle of that object, but a set of image samples taken from a plurality of different viewing angles.
Take a specific application environment as an example. Assume that the front view A1, the rear view A2, the left side view A3, the right side view A4, and the top view A5 of an object A differ from one another (there are relatively large differences between them).

If only the front view A1 of object A is saved as a sample image in the sample image set, then in a recognition scene, only when the robot faces the front of object A can the acquired image to be recognized possibly be matched to the front view A1, so that the identification label of object A corresponding to A1 is obtained and recognition is completed. When the robot faces the back, a side, or the top of object A, the acquired image to be recognized cannot be matched to the front view A1 even with a looser similarity match, and object A cannot be recognized.

In an embodiment of the present invention, for object A, the front view A1, the rear view A2, the left side view A3, the right side view A4, and the top view A5 are all saved in the sample set as sample images. Then, no matter whether the robot faces the front, the back, a side, or the top of object A, the acquired image to be recognized can find, through the matching search, a sample image associated with the identification label of A (the front view A1, the rear view A2, the left side view A3, the right side view A4, or the top view A5), so that the identification label of object A is obtained and object recognition is completed.
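Purely as an illustrative sketch (not part of the patent disclosure), the association described above — several views per object, all pointing at one identification label — can be pictured as a label-keyed store. The names `MultiViewSampleStore`, `add_view`, and `best_label`, and the histogram-based similarity, are assumptions chosen for brevity rather than anything specified by the embodiment.

```python
import numpy as np

class MultiViewSampleStore:
    """Keeps every saved sample image together with the identification label it belongs to."""

    def __init__(self):
        self.samples = []  # list of (label, feature_vector) pairs, one entry per saved view

    @staticmethod
    def features(image):
        # Placeholder descriptor: a normalized grayscale histogram of the view.
        hist, _ = np.histogram(image, bins=32, range=(0, 255))
        return hist / max(hist.sum(), 1)

    def add_view(self, label, image):
        # Saving one view of the object `label`; front, rear, side and top views all map to the same label.
        self.samples.append((label, self.features(image)))

    def best_label(self, query_image, threshold=0.9):
        # Search every saved view; whichever saved view matches best decides the label.
        q = self.features(query_image)
        best_label, best_score = None, threshold
        for label, feat in self.samples:
            score = 1.0 - 0.5 * float(np.abs(feat - q).sum())  # similarity of two normalized histograms
            if score >= best_score:
                best_label, best_score = label, score
        return best_label  # None means no saved view matched the query
```

With all five hypothetical views A1 to A5 registered under the same label for object A, a query taken from any of those directions can match its own view and still return the same identification label.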
The method according to the invention reduces the influence of viewing-angle changes on the recognition process when objects are recognized from images, greatly extends the recognition range and recognition efficiency of the robot, and improves the recognition accuracy.
The detailed flow of the method according to embodiments of the invention is described next with reference to the drawings. The steps shown in the flowcharts of the drawings may be executed in a computer system containing, for example, a set of computer-executable instructions. Although a logical order of the steps is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that given here.
In one embodiment, the sample images of the objects to be recognized in future object recognition are acquired first. As shown in Fig. 1, multi-view sample images are acquired first (step S100). Specifically, in step S100, images of the same sample object (a sample of an object that is to be recognized in subsequent object recognition) are acquired from a plurality of different viewing angles to obtain a plurality of different sample images.

Next, the different sample images belonging to the same sample object are saved in association with the identification label of the sample object (step S110); a sample image set for recognizing a particular object has thus been obtained. By repeating step S100 and step S110 for different sample objects, a sample image set for recognizing a plurality of different objects can be obtained.
After the sample image set is ready, the objects corresponding to the sample images contained in the set can be recognized on the basis of that set. Specifically, during recognition, an image to be recognized is acquired first (step S120); the image to be recognized is image data of the object to be recognized at the current viewing angle. Then the sample image set (all saved sample images) is searched for a sample image that matches the image to be recognized (step S130).

Step S130 has two possible outcomes: either a sample image matching the image to be recognized exists in the sample image set, or no such sample image exists. Therefore, after step S130, step S140 is executed to determine whether a matching sample image exists in the sample image set (i.e., the result of the matching search is evaluated).

When a sample image matching the image to be recognized exists in the sample image set, the identification label associated with the matching sample image found by the search is extracted to complete recognition of the object to be recognized (the identification label serves as the recognition result) (step S150).

When no sample image matching the image to be recognized exists in the sample image set, the object to be recognized is marked as unrecognizable (step S160).

Further, after steps S150 and S160, the next operation is performed according to the recognition result (a specific identification label, or the unrecognizable mark).
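As a minimal, non-authoritative sketch of the control flow of steps S100 to S160, the following functions might be written; the names `build_sample_set`, `recognize`, `acquire_views`, and `matches` are assumptions, and the matching itself is reduced to a boolean similarity test supplied by the caller.

```python
UNRECOGNIZABLE = "unrecognizable"

def build_sample_set(acquire_views, objects):
    # Steps S100 and S110: acquire multi-view images of each sample object and save them with its label.
    sample_set = []
    for label, sample_object in objects:
        for view in acquire_views(sample_object):   # S100: multi-view acquisition
            sample_set.append((label, view))         # S110: associate the view with the label
    return sample_set

def recognize(sample_set, image_to_recognize, matches):
    # Steps S120 to S160: search the saved views for the acquired query image and report the result.
    for label, view in sample_set:                   # S130: matching search over all saved sample images
        if matches(view, image_to_recognize):        # S140: a matching sample image exists
            return label                             # S150: the extracted identification label is the result
    return UNRECOGNIZABLE                            # S160: mark the object as unrecognizable
```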
Among the steps shown in Fig. 1, one key point is to collect sample images from a sufficient number of viewing angles in step S100. Because a sample image is used to recognize a specific object, the depth-of-field relation between the object and its surroundings and the distance between the image acquisition position and the sample object have essentially no influence on object recognition based on the sample images. Therefore, in one embodiment, focusing on a single sample object, the position and orientation of the image acquisition device are left unchanged, and images of the same sample object are acquired from a plurality of different viewing angles by changing the orientation of the sample object (changing which face of the sample object faces the image acquisition device).
Specifically, as shown in Fig. 2, in one embodiment, the image acquisition position is fixed first (the position and orientation of the robot's image acquisition device are fixed) (step S200); then the sample object is rotated (the distance between the sample object and the image acquisition position is kept constant while the face of the sample object turned towards the image acquisition device is continually changed) (step S210); images are acquired while step S210 is being executed, thereby completing the multi-view image acquisition of the sample object (step S220).
Further, in some application scenarios the sample object cannot be rotated (for example, the sample object is too large). Therefore, in one embodiment, focusing on a single sample object, the position and orientation of the sample object are left unchanged, and images of the same sample object are acquired from a plurality of different viewing angles by changing the position of the image acquisition device.
Specifically, as shown in Fig. 3, in one embodiment, the position of the sample object is fixed first (the sample object is kept stationary) (step S300); then the robot moves around the sample object (the image acquisition device rotates around the sample object while remaining facing it) (step S310); images are acquired while step S310 is being executed, thereby completing the multi-view image acquisition of the sample object (step S320).
Further, in one embodiment, multi-view image acquisition is performed by combining the methods shown in Fig. 2 and Fig. 3. As shown in Fig. 4, in one embodiment, it is first determined whether the sample object can be rotated (step S400). When the sample object can be rotated, the image acquisition position is fixed (step S411) and the sample object is rotated (step S412), and multi-view image acquisition of the sample object is performed while the sample object is being rotated (step S413). When the sample object cannot be rotated, the position of the sample object is fixed (no displacement operation is applied to the sample object) (step S421) and the robot rotates around the sample object (step S422), and multi-view image acquisition of the sample object is performed while rotating around it (step S423).
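Only to illustrate the branch of Fig. 4 (step S400 followed by one of the two capture strategies), a hedged sketch follows; `can_be_rotated`, `rotate_object_and_capture`, and `orbit_robot_and_capture` are hypothetical callables standing in for the hardware-dependent steps, not names taken from the patent.

```python
def capture_multiview(sample_object, can_be_rotated,
                      rotate_object_and_capture, orbit_robot_and_capture):
    # Step S400: decide which viewing-angle transformation the sample object allows.
    if can_be_rotated(sample_object):
        # Steps S411-S413: keep the camera still and spin the object in front of it.
        return rotate_object_and_capture(sample_object)
    # Steps S421-S423: leave the object untouched and drive the robot around it.
    return orbit_robot_and_capture(sample_object)
```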
Further, in the vast majority of recognition application scenarios, the robot is in the same horizontal-plane environment as the object to be recognized (object recognition rarely takes place with the robot directly above or directly below the object). Therefore, in order to reduce the amount of sample image data to be acquired and improve the matching search speed during recognition, in one embodiment only a 360-degree multi-view image acquisition in the horizontal plane is performed for the same sample object. Specifically, when performing the multi-view acquisition of the sample images, the viewing angle of the sample object is changed through 360 degrees in a horizontal plane, and images of the sample object are acquired at preset angular intervals during the viewing-angle change.
Taking the steps shown in Fig. 2 as an example, in one embodiment, the image acquisition position of the robot is fixed in step S200 so that the image acquisition device and the sample object lie in the same horizontal plane and the camera of the image acquisition device faces the sample object. In step S210 the sample object makes a 360-degree rotation about a central axis perpendicular to the horizontal plane. In step S220, each time the sample object rotates through a preset angle (for example, 10 degrees), the image acquisition device captures one sample image of the sample object.
Taking the steps shown in Fig. 3 as an example, in one embodiment, the position of the sample object is fixed in step S300 (no operation that would disturb its position is applied to the sample object), so that the image acquisition device and the sample object lie in the same horizontal plane and the camera of the image acquisition device faces the sample object. In step S310 the image acquisition device is kept in the same horizontal plane as the sample object, the distance between the sample object and the image acquisition device is kept constant, and the image acquisition device makes a 360-degree rotation about a central axis perpendicular to the horizontal plane. In step S320, each time the viewing angle changes by a preset angle (for example, 10 degrees), the image acquisition device captures one sample image of the sample object.
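A brief sketch of the horizontal-plane, fixed-interval acquisition described for Figs. 2 and 3 is given below; `capture_frame` and `turn_by_degrees` are hypothetical callables, and the 10-degree step is only the example value used above.

```python
def capture_full_circle(capture_frame, turn_by_degrees, angle_step=10):
    """Collect one sample image every `angle_step` degrees over a full 360-degree sweep.

    `turn_by_degrees` either rotates the sample object (Fig. 2) or moves the robot
    around it (Fig. 3); the capture loop itself is the same in both cases.
    """
    views = []
    for angle in range(0, 360, angle_step):
        views.append((angle, capture_frame()))  # one view of the sample object at this viewing angle
        turn_by_degrees(angle_step)              # advance to the next preset viewing angle
    return views
```

With the 10-degree example interval, the sweep yields 36 sample images per object, which are then saved in association with that object's identification label.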
Based on the method for the present invention, the invention allows for a kind of intelligent robot.As shown in figure 5, in one embodiment,
Robot includes:
Collection view transformation module 500, it is configured to convert visual angle of the robot to same sample object;
Contrast images acquisition module 510, it is configured to during robot is converted to the visual angle of sample object
IMAQ is carried out to sample object from multiple different visual angles respectively to obtain multiple different sample images;
Contrast images memory module 530, its different sample image and the sample for being configured to be subordinated to same sample object
The identification label association of this object is preserved;
Image recognition acquisition module 520, it is configured to gather images to be recognized for object to be identified, wherein, it is to be identified
Image is view data of the object to be identified under current visual angle;
Image recognition matching module 540, it is configured to be searched in the sample image preserved from contrast images memory module 530
Rope goes out the sample image matched with images to be recognized;
Identification tag extraction module 550, it is configured to obtain the sample image pair that image recognition matching module 540 is searched out
The identification label answered is to complete the identification to object to be identified.
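For orientation only (not an implementation mandated by the patent), the recognition-side modules 540 and 550 can be pictured as thin wrappers over the stored samples; the class and method names and the `similarity` callable below are assumptions introduced for illustration.

```python
from typing import Any, Callable, Dict, List, Optional, Tuple

class ImageRecognitionMatchingModule:
    def __init__(self, storage: Dict[str, List[Any]], similarity: Callable[[Any, Any], float]):
        self.storage = storage          # contrast image storage: identification label -> saved views
        self.similarity = similarity    # assumed scoring function between two images

    def find_match(self, image_to_recognize, threshold: float = 0.9) -> Optional[Tuple[str, Any]]:
        # Search every saved view of every object for the best match to the query image.
        best: Optional[Tuple[str, Any]] = None
        best_score = threshold
        for label, views in self.storage.items():
            for view in views:
                score = self.similarity(view, image_to_recognize)
                if score >= best_score:
                    best, best_score = (label, view), score
        return best

class IdentificationLabelExtractionModule:
    @staticmethod
    def extract(match: Optional[Tuple[str, Any]]) -> Optional[str]:
        # The recognition result is simply the label associated with the matched sample view.
        return match[0] if match is not None else None
```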
Further, in one embodiment, the contrast image acquisition module 510 and the image recognition acquisition module 520 are constructed from the same image acquisition device.

Further, in one embodiment, the acquisition viewing-angle transformation module 500 changes the viewing angle of the sample object through 360 degrees in a horizontal plane, and the contrast image acquisition module 510 is configured to acquire images of the sample object at preset angular intervals during the viewing-angle change.
Further, as shown in Fig. 6, in one embodiment, the acquisition viewing-angle transformation module 600 includes a sample object rotating device 611 configured to fix the image acquisition position and rotate the sample object; the contrast image acquisition module 610 is configured to perform multi-view image acquisition of the sample object while the sample object rotating device 611 rotates it, and to save the acquisition result into the contrast image storage module 630. Further, in one embodiment, the sample object rotating device 611 is constructed from the robot's manipulator, and the sample object is rotated by the manipulator.
Further, as shown in Fig. 7, in one embodiment, the acquisition viewing-angle transformation module 700 includes a displacement device 711 configured to keep the sample object stationary and drive the robot to rotate around the sample object; the contrast image acquisition module 710 is configured to perform multi-view image acquisition of the sample object while the robot rotates around it, and to save the acquisition result into the contrast image storage module. Further, in one embodiment, the displacement device 711 is constructed from the robot's own displacement mechanism.
Further, as shown in Fig. 8, in one embodiment, the acquisition viewing-angle transformation module 800 includes a sample object analysis module 813, a sample object rotating device 812, and a displacement device 811, wherein:
the sample object analysis module 813 is configured to determine whether the sample object can be rotated;
the sample object rotating device 812 is configured to fix the image acquisition position and rotate the sample object when the sample object can be rotated;
the displacement device 811 is configured to drive the robot to rotate around the sample object when the sample object cannot be rotated;
the contrast image acquisition module 810 is configured to perform multi-view image acquisition of the sample object while the sample object is being rotated or while the robot rotates around it, and to save the acquisition result into the contrast image storage module 830.
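As an informal sketch of how the Fig. 8 decomposition could be wired together in software (the module classes, method names, and parameters are assumptions, not the patent's terminology for any concrete API):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class AcquisitionViewTransformModule:
    can_rotate: Callable[[Any], bool]          # role of the sample object analysis module (813)
    rotate_object: Callable[[Any, int], None]  # role of the sample object rotating device (812)
    move_robot: Callable[[Any, int], None]     # role of the displacement device (811)

    def step(self, sample_object, angle_step: int) -> None:
        # Apply whichever viewing-angle change the sample object allows.
        if self.can_rotate(sample_object):
            self.rotate_object(sample_object, angle_step)
        else:
            self.move_robot(sample_object, angle_step)

@dataclass
class ContrastImageStorageModule:
    samples: Dict[str, List[Any]] = field(default_factory=dict)

    def save(self, label: str, views: List[Any]) -> None:
        # Associate every acquired view with the object's identification label.
        self.samples.setdefault(label, []).extend(views)

@dataclass
class ContrastImageAcquisitionModule:
    camera: Callable[[], Any]                  # shared image acquisition device
    view_transform: AcquisitionViewTransformModule
    storage: ContrastImageStorageModule

    def acquire(self, sample_object, label: str, angle_step: int = 10) -> None:
        views = []
        for _ in range(0, 360, angle_step):    # 360-degree sweep at preset angular intervals
            views.append(self.camera())
            self.view_transform.step(sample_object, angle_step)
        self.storage.save(label, views)
```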
Although embodiments have been disclosed above, the content described is merely an implementation adopted to facilitate understanding of the invention and is not intended to limit the invention. The method according to the invention may also have various other embodiments. Without departing from the essence of the invention, those of ordinary skill in the art can make various corresponding changes or modifications according to the invention, and all such corresponding changes or modifications shall fall within the scope of protection of the claims of the invention.
Claims (10)
1. An object recognition method for a robot, characterized in that the method comprises:
acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images;
saving the different sample images belonging to the same sample object in association with an identification label of the sample object;
acquiring an image to be recognized for an object to be recognized, the image to be recognized being image data of the object to be recognized at the current viewing angle;
searching all saved sample images for a sample image that matches the image to be recognized;
extracting the identification label associated with the matching sample image to complete recognition of the object to be recognized.
2. The method according to claim 1, characterized in that acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images comprises:
fixing the image acquisition position;
rotating the sample object;
performing multi-view image acquisition of the sample object while the sample object is being rotated.
3. The method according to claim 1, characterized in that acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images comprises:
keeping the sample object stationary;
rotating around the sample object;
performing multi-view image acquisition of the sample object while rotating around the sample object.
4. The method according to claim 1, characterized in that acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images comprises:
determining whether the sample object can be rotated;
when the sample object can be rotated, fixing the image acquisition position and rotating the sample object, and performing multi-view image acquisition of the sample object while the sample object is being rotated;
when the sample object cannot be rotated, rotating around the sample object, and performing multi-view image acquisition of the sample object while rotating around it.
5. The method according to any one of claims 1 to 4, characterized in that acquiring images of the same sample object from a plurality of different viewing angles to obtain a plurality of different sample images comprises:
changing the viewing angle of the sample object through 360 degrees in a horizontal plane;
acquiring images of the sample object at preset angular intervals during the viewing-angle change.
6. An intelligent robot, characterized in that the robot comprises:
an acquisition viewing-angle transformation module configured to change the robot's viewing angle of the same sample object;
a contrast image acquisition module configured to acquire images of the sample object from a plurality of different viewing angles, while the robot changes its viewing angle of the sample object, to obtain a plurality of different sample images;
a contrast image storage module configured to save the different sample images belonging to the same sample object in association with an identification label of the sample object;
an image recognition acquisition module configured to acquire an image to be recognized for an object to be recognized, the image to be recognized being image data of the object to be recognized at the current viewing angle;
an image recognition matching module configured to search all saved sample images for a sample image that matches the image to be recognized;
an identification-label extraction module configured to extract the identification label associated with the matching sample image to complete recognition of the object to be recognized.
7. The robot according to claim 6, characterized in that:
the acquisition viewing-angle transformation module includes a sample object rotating device configured to fix the image acquisition position and rotate the sample object;
the contrast image acquisition module is configured to perform multi-view image acquisition of the sample object while the sample object is being rotated.
8. The robot according to claim 6, characterized in that:
the acquisition viewing-angle transformation module includes a displacement device configured to keep the sample object stationary and drive the robot to rotate around the sample object;
the contrast image acquisition module is configured to perform multi-view image acquisition of the sample object while the robot rotates around the sample object.
9. The robot according to claim 6, characterized in that the acquisition viewing-angle transformation module includes:
a sample object analysis module configured to determine whether the sample object can be rotated;
a sample object rotating device configured to fix the image acquisition position and rotate the sample object when the sample object can be rotated;
a displacement device configured to drive the robot to rotate around the sample object when the sample object cannot be rotated;
the contrast image acquisition module is configured to perform multi-view image acquisition of the sample object while the sample object is being rotated or while the robot rotates around the sample object.
10. The robot according to any one of claims 6 to 9, characterized in that:
the acquisition viewing-angle transformation module is configured to change the viewing angle of the sample object through 360 degrees in a horizontal plane;
the contrast image acquisition module is configured to acquire images of the sample object at preset angular intervals during the viewing-angle change.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611222770.6A CN106863355A (en) | 2016-12-27 | 2016-12-27 | Object recognition method for a robot, and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611222770.6A CN106863355A (en) | 2016-12-27 | 2016-12-27 | Object recognition method for a robot, and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106863355A (en) | 2017-06-20 |
Family
ID=59165043
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611222770.6A Pending CN106863355A (en) | 2016-12-27 | 2016-12-27 | Object recognition method for a robot, and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106863355A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006005990A1 (en) * | 2006-02-08 | 2007-08-09 | Vision Tools Hard- Und Software Entwicklungs-Gmbh | Workpiece characteristics position measuring method involves measuring position by camera, which is movably fixed to robot, for multiple picture recordings |
CN101859371A (en) * | 2009-04-10 | 2010-10-13 | 鸿富锦精密工业(深圳)有限公司 | Pick-up device and object identification method thereof |
CN102254169A (en) * | 2011-08-23 | 2011-11-23 | 东北大学秦皇岛分校 | Multi-camera-based face recognition method and multi-camera-based face recognition system |
US20130136300A1 (en) * | 2011-11-29 | 2013-05-30 | Qualcomm Incorporated | Tracking Three-Dimensional Objects |
CN103473529A (en) * | 2013-08-26 | 2013-12-25 | 昆明学院 | Method and device for recognizing faces through multi-angle imaging |
CN103984942A (en) * | 2014-05-28 | 2014-08-13 | 深圳市中兴移动通信有限公司 | Object recognition method and mobile terminal |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109558868A (en) * | 2017-09-27 | 2019-04-02 | 缤果可为(北京)科技有限公司 | Image automatic collection and tagging equipment and method |
CN108555909A (en) * | 2018-04-17 | 2018-09-21 | 子歌教育机器人(深圳)有限公司 | A kind of target seeking method, AI robots and computer readable storage medium |
CN114769021A (en) * | 2022-04-24 | 2022-07-22 | 广东天太机器人有限公司 | Robot spraying system and method based on full-angle template recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11051000B2 (en) | Method for calibrating cameras with non-overlapping views | |
CN104793620B (en) | The avoidance robot of view-based access control model feature binding and intensified learning theory | |
US8768071B2 (en) | Object category recognition methods and robots utilizing the same | |
CN102999918B (en) | Multi-target object tracking system of panorama video sequence image | |
CN109684925B (en) | Depth image-based human face living body detection method and device | |
US10832078B2 (en) | Method and system for concurrent reconstruction of dynamic and static objects | |
EP3499414B1 (en) | Lightweight 3d vision camera with intelligent segmentation engine for machine vision and auto identification | |
CN110084243B (en) | File identification and positioning method based on two-dimensional code and monocular camera | |
Zhang et al. | Robust visual odometry in underwater environment | |
CN106863355A (en) | Object recognition method for a robot, and robot | |
EP3035242B1 (en) | Method and electronic device for object tracking in a light-field capture | |
EP3107007B1 (en) | Method and apparatus for data retrieval in a lightfield database | |
CN113658039A (en) | Method for determining splicing sequence of label images of medicine bottles | |
Gu et al. | OSSID: online self-supervised instance detection by (and for) pose estimation | |
CN110097504A (en) | A kind of image vision acquisition system for tunnel crusing robot | |
CN113505629A (en) | Intelligent storage article recognition device based on light weight network | |
CN110889460A (en) | Mechanical arm specified object grabbing method based on cooperative attention mechanism | |
US8538142B2 (en) | Face-detection processing methods, image processing devices, and articles of manufacture | |
CN115546021A (en) | Multi-camera image splicing method applied to cold bed shunting scene detection | |
Pahwa et al. | Tracking objects using 3D object proposals | |
CN109359649A (en) | A kind of recognition methods of access object, storage medium and the article-storage device of article-storage device | |
Li et al. | A novel automatic image stitching algorithm for ceramic microscopic images | |
Sadeghi-Tehran et al. | ATDT: autonomous template-based detection and tracking of objects from airborne camera | |
CN107273850B (en) | Autonomous following method based on mobile robot | |
Tang et al. | Real-time recognition and localization method for deep-sea underwater object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | Application publication date: 2017-06-20 |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |