CN109344786A - Target identification method, device and computer readable storage medium - Google Patents
- Publication number
- CN109344786A (application number CN201811187520.2A)
- Authority
- CN
- China
- Prior art keywords
- target
- point cloud
- dimensional point
- dimensional
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Abstract
The invention discloses a target identification method, comprising: obtaining a three-dimensional point cloud training sample of a target to be sorted, and performing a two-dimensional projection on the three-dimensional point cloud training sample to obtain a corresponding two-dimensional grayscale image; determining, by a quadric surface mesh approximation method based on principal manifolds, a two-dimensional image corresponding to the three-dimensional point cloud training sample; inputting the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for offline training, to determine the target category corresponding to the target to be sorted; and obtaining a three-dimensional point cloud of a target to be matched and inputting the three-dimensional point cloud into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified. The invention also discloses a target identification device and a computer-readable storage medium. The invention improves the accuracy of target identification under three-dimensional point clouds.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a target identification method, a device, and a computer-readable storage medium.
Background technique
Enabling a robot to identify and automatically grasp objects from a disordered pile can greatly improve industrial efficiency. Currently, target identification methods under three-dimensional point clouds are commonly used. Such a method generally comprises two stages, feature extraction and feature matching: feature extraction includes recognition methods using global features and recognition methods using local features, while the feature matching stage uses direct feature-point matching methods and indirect feature-point matching methods. These methods have the following disadvantages: they generally describe the target with features such as corners, cavities, and histograms, so recognition is strongly affected by the target's own structure and the sensor accuracy. In particular, when the target is partially occluded, it is difficult to identify accurately, and the methods are very sensitive to target deformation and to targets made incomplete by occlusion. They are therefore ill-suited to factories that need to identify parts of many types, with large shape differences and frequent changes.
Summary of the invention
The main purpose of the present invention is to provide a target identification method, a device, and a computer-readable storage medium, aiming to solve the technical problem that existing target identification methods under three-dimensional point clouds are not accurate enough.
To achieve the above object, the present invention provides a target identification method, the target identification method comprising:
obtaining a three-dimensional point cloud training sample of a target to be sorted, and performing a two-dimensional projection on the three-dimensional point cloud training sample to obtain a corresponding two-dimensional grayscale image;
determining, by a quadric surface mesh approximation method based on principal manifolds, a two-dimensional image corresponding to the three-dimensional point cloud training sample;
inputting the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for offline training, to determine the target category corresponding to the target to be sorted;
obtaining a three-dimensional point cloud of a target to be matched, and inputting the three-dimensional point cloud into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified.
Optionally, the step of obtaining a three-dimensional point cloud training sample of a target to be sorted includes:
shooting a sample with a three-dimensional point cloud acquisition system to obtain a sample image, and segmenting the sample image based on an image segmentation algorithm, to determine a single target to be sorted from the sample;
shooting the target to be sorted with the three-dimensional point cloud acquisition system at different poses and different degrees of occlusion, to obtain the three-dimensional point cloud samples of the target to be sorted;
performing augmentation processing on the three-dimensional point cloud samples, to obtain the three-dimensional point cloud training samples of the target to be sorted.
Optionally, the step of performing augmentation processing on the three-dimensional point cloud sample, to obtain the three-dimensional point cloud training samples of the target to be sorted, includes:
performing three-dimensional modeling on the three-dimensional point cloud sample, and determining the CAD model corresponding to the three-dimensional point cloud sample;
performing augmentation operations on the CAD model, to obtain the three-dimensional point cloud training samples of the target to be sorted, wherein the augmentation operations include random rotation, translation, partial occlusion, and noise addition.
Optionally, the step of performing a two-dimensional projection on the three-dimensional point cloud training sample to obtain the corresponding two-dimensional grayscale image includes:
projecting the three-dimensional point cloud training sample onto multiple planes based on the projection feature of height-value projection, to obtain the corresponding two-dimensional grayscale images.
Optionally, the step of determining, by the quadric surface mesh approximation method based on principal manifolds, the two-dimensional image corresponding to the three-dimensional point cloud training sample includes:
constructing the quadratic principal manifold of the three-dimensional point cloud training sample, and obtaining the quadric surface mesh corresponding to the three-dimensional point cloud training sample based on the quadratic principal manifold;
optimizing the quadric surface mesh based on distance, area, and flatness, and converting the optimized quadric surface mesh into the two-dimensional image corresponding to the three-dimensional point cloud training sample.
Optionally, the step of obtaining a three-dimensional point cloud of a target to be matched, and inputting the three-dimensional point cloud into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified, includes:
shooting a sample to be sorted with the three-dimensional point cloud acquisition system to obtain the irregular point cloud corresponding to the sample, and segmenting the irregular point cloud, to obtain the three-dimensional point cloud of a single target to be matched;
determining the two-dimensional grayscale image and the two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched, and inputting them into the convolutional neural network model after offline training, so that the model outputs the target category of the target to be matched.
Optionally, the step of determining the two-dimensional grayscale image and the two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched includes:
applying translation and rotation transformations to the three-dimensional point cloud of the target to be matched, to obtain multiple three-dimensional point clouds;
performing two-dimensional projections on the multiple three-dimensional point clouds to obtain the two-dimensional grayscale images corresponding to the target to be matched, and determining, by the quadric surface mesh approximation method based on principal manifolds, the two-dimensional images corresponding to the target to be matched.
Optionally, the step of inputting the two-dimensional grayscale image and the two-dimensional image corresponding to the three-dimensional point cloud into the convolutional neural network model after offline training, so that the model outputs the target category of the target to be matched, includes:
inputting the two-dimensional grayscale images and the two-dimensional images corresponding to the three-dimensional point clouds into the convolutional neural network model after offline training, and receiving the multiple target categories that the model outputs for the target to be matched;
counting the multiple target categories, and taking the category with the largest count as the target category of the target to be matched.
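The majority vote over the per-view predictions can be sketched as follows; this is a minimal illustration, and the function name, category strings, and data layout are assumptions, not from the patent:

```python
from collections import Counter

def vote_category(predicted_categories):
    """Return the category that occurs most often among the
    per-view predictions for one target to be matched."""
    counts = Counter(predicted_categories)
    # most_common(1) yields [(category, count)] for the top category
    return counts.most_common(1)[0][0]

# e.g. predictions from several projected views of the same part
print(vote_category(["bolt", "nut", "bolt", "bolt", "washer"]))  # prints "bolt"
```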
In addition, to achieve the above object, the present invention also provides a target identification device, the target identification device comprising a memory, a processor, and a target identification program stored on the memory and runnable on the processor, wherein the target identification program, when executed by the processor, implements the steps of the target identification method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which a target identification program is stored, wherein the target identification program, when executed by a processor, implements the steps of the target identification method described above.
In the target identification method proposed by the present invention, a three-dimensional point cloud training sample of a target to be sorted is first obtained and projected into two dimensions to obtain a corresponding two-dimensional grayscale image; meanwhile, the two-dimensional image corresponding to the three-dimensional point cloud training sample is determined by the quadric surface mesh approximation method based on principal manifolds. The two-dimensional grayscale image and the two-dimensional image obtained above are input into a preset convolutional neural network model for offline training, to determine the target category corresponding to the target to be sorted. Finally, the three-dimensional point cloud of a target to be matched is obtained and input into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified online. In the proposed method, the two-dimensional grayscale image is determined by projecting the three-dimensional point cloud training sample of the target to be sorted, the two-dimensional image is determined by principal manifold analysis, and both are input into the preset convolutional neural network model for offline training. The trained convolutional neural network model then recognizes the target to be matched online, without the need to manually extract features of the target, which improves the accuracy of target identification under three-dimensional point clouds.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the device in the hardware running environment involved in the embodiments of the present invention;
Fig. 2 is a schematic flowchart of the first embodiment of the target identification method of the present invention;
Fig. 3 is a schematic flowchart of the second embodiment of the target identification method of the present invention;
Fig. 4 is a schematic structural diagram of the convolutional neural network in an embodiment of the target identification method of the present invention.
The realization of the object of the present invention, its functional characteristics, and its advantages will be further described in conjunction with the embodiments and with reference to the accompanying drawings.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The primary solution of the embodiments of the present invention is: obtaining a three-dimensional point cloud training sample of a target to be sorted, and performing a two-dimensional projection on the three-dimensional point cloud training sample to obtain a corresponding two-dimensional grayscale image; determining, by a quadric surface mesh approximation method based on principal manifolds, a two-dimensional image corresponding to the three-dimensional point cloud training sample; inputting the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for offline training, to determine the target category corresponding to the target to be sorted; and obtaining a three-dimensional point cloud of a target to be matched, and inputting the three-dimensional point cloud into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified. The technical solution of the embodiments of the present invention solves the technical problem that existing target identification methods under three-dimensional point clouds are not accurate enough.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the device in the hardware running environment involved in the embodiments of the present invention.
The device of the embodiment of the present invention may be a PC, or a portable terminal device with a display function such as a smart phone, a tablet computer, or a portable computer.
As shown in Fig. 1, the device may include: a processor 1001 (such as a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 realizes the connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a magnetic disk storage; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the device may also include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a Wi-Fi module, and so on. Of course, the device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
Those skilled in the art will understand that the device structure shown in Fig. 1 does not limit the device; it may include more or fewer components than illustrated, combine certain components, or use a different component arrangement.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a target identification program.
In the device shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server and to communicate data with it; the user interface 1003 is mainly used to connect to a client (user terminal) and to communicate data with it; the processor 1001 and the memory 1005 may be provided in the target identification device, which calls, via the processor 1001, the target identification program stored in the memory 1005 and performs the following operations:
obtaining a three-dimensional point cloud training sample of a target to be sorted, and performing a two-dimensional projection on the three-dimensional point cloud training sample to obtain a corresponding two-dimensional grayscale image;
determining, by a quadric surface mesh approximation method based on principal manifolds, a two-dimensional image corresponding to the three-dimensional point cloud training sample;
inputting the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for offline training, to determine the target category corresponding to the target to be sorted;
obtaining a three-dimensional point cloud of a target to be matched, and inputting the three-dimensional point cloud into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operations:
shooting a sample with a three-dimensional point cloud acquisition system to obtain a sample image, and segmenting the sample image based on an image segmentation algorithm, to determine a single target to be sorted from the sample;
shooting the target to be sorted with the three-dimensional point cloud acquisition system at different poses and different degrees of occlusion, to obtain the three-dimensional point cloud samples of the target to be sorted;
performing augmentation processing on the three-dimensional point cloud samples, to obtain the three-dimensional point cloud training samples of the target to be sorted.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operations:
performing three-dimensional modeling on the three-dimensional point cloud sample, and determining the CAD model corresponding to the three-dimensional point cloud sample;
performing augmentation operations on the CAD model, to obtain the three-dimensional point cloud training samples of the target to be sorted, wherein the augmentation operations include random rotation, translation, partial occlusion, and noise addition.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operation:
projecting the three-dimensional point cloud training sample onto multiple planes based on the projection feature of height-value projection, to obtain the corresponding two-dimensional grayscale images.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operations:
constructing the quadratic principal manifold of the three-dimensional point cloud training sample, and obtaining the quadric surface mesh corresponding to the three-dimensional point cloud training sample based on the quadratic principal manifold;
optimizing the quadric surface mesh based on distance, area, and flatness, and converting the optimized quadric surface mesh into the two-dimensional image corresponding to the three-dimensional point cloud training sample.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operations:
shooting a sample to be sorted with the three-dimensional point cloud acquisition system to obtain the irregular point cloud corresponding to the sample, and segmenting the irregular point cloud, to obtain the three-dimensional point cloud of a single target to be matched;
determining the two-dimensional grayscale image and the two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched, and inputting them into the convolutional neural network model after offline training, so that the model outputs the target category of the target to be matched.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operations:
applying translation and rotation transformations to the three-dimensional point cloud of the target to be matched, to obtain multiple three-dimensional point clouds;
performing two-dimensional projections on the multiple three-dimensional point clouds to obtain the two-dimensional grayscale images corresponding to the target to be matched, and determining, by the quadric surface mesh approximation method based on principal manifolds, the two-dimensional images corresponding to the target to be matched.
Further, the processor 1001 may call the target identification program stored in the memory 1005 and also perform the following operations:
inputting the two-dimensional grayscale images and the two-dimensional images corresponding to the three-dimensional point clouds into the convolutional neural network model after offline training, and receiving the multiple target categories that the model outputs for the target to be matched;
counting the multiple target categories, and taking the category with the largest count as the target category of the target to be matched.
The scheme provided by this embodiment first obtains a three-dimensional point cloud training sample of the target to be sorted and projects it into two dimensions to obtain the corresponding two-dimensional grayscale image, while the two-dimensional image corresponding to the three-dimensional point cloud training sample is determined by the quadric surface mesh approximation method based on principal manifolds. The two-dimensional grayscale image and the two-dimensional image obtained above are input into the preset convolutional neural network model for offline training, to determine the target category corresponding to the target to be sorted. Finally, the three-dimensional point cloud of the target to be matched is obtained and input into the convolutional neural network model after offline training, so that the target category of the target to be matched is identified online. In the target identification method proposed by the present invention, the two-dimensional grayscale image is determined by projecting the three-dimensional point cloud training sample of the target to be sorted, the two-dimensional image is determined by principal manifold analysis, and both are input into the preset convolutional neural network model for offline training; the trained convolutional neural network model then recognizes the target to be matched online, without the need to manually extract features of the target, which improves the accuracy of target identification under three-dimensional point clouds.
Based on the above hardware structure, embodiments of the target identification method of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the first embodiment of the target identification method of the present invention. In this embodiment, the method includes:
Step S10 obtains the three-dimensional point cloud training sample of target to be sorted, and carries out to the three-dimensional point cloud training sample
Two-dimensional projection obtains corresponding 2-D gray image;
In view of the problems of prior-art target identification methods under three-dimensional point clouds, the invention proposes a target identification method, more specifically an intelligent three-dimensional point cloud target identification method for a robot sorting system based on convolutional neural networks. A two-dimensional projection image is computed from the three-dimensional point cloud training sample of the target to be sorted, and the quadric surface corresponding to the point cloud sample to be trained is extracted by the quadric surface mesh approximation method based on manifolds, to obtain a corresponding two-dimensional image. The two kinds of images thus obtained are input into a preset convolutional neural network model for offline training, after which the trained convolutional neural network model can recognize and sort targets to be matched online. No manual feature extraction is required and the accuracy is high, which is particularly suitable for the actual conditions of industrial robotic part grasping, where part shapes differ greatly, are difficult to classify, and change frequently.
First, the three-dimensional point cloud training sample of the target to be sorted is obtained using a three-dimensional point cloud acquisition system; in this embodiment, the three-dimensional point cloud acquisition system may include a three-dimensional laser scanner, a structured-light point cloud acquisition system, and the like. Specifically, step S10 includes:
Step a: shooting a sample with the three-dimensional point cloud acquisition system to obtain a sample image, and segmenting the sample image based on an image segmentation algorithm, to determine a single target to be sorted from the sample;
The disordered pile of samples is shot by the 3D camera of the three-dimensional point cloud acquisition system to obtain the corresponding sample image. Since the image contains multiple samples, it is further segmented using an image segmentation algorithm to extract a single sample target. It can be understood that the single sample target may be a sample target that is not occluded at all, or a sample target that is partially occluded.
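The patent does not name a specific segmentation algorithm; as one simple illustration of separating a multi-sample image into single targets, the following sketch labels 4-connected foreground regions of a binary mask (the binarization step and the data layout are assumptions):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected foreground regions in a binary mask
    (list of lists of 0/1); returns a list of pixel-coordinate sets,
    one per component -- each set is one candidate single target."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = set(), deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)
    return components

# two separate blobs -> two single-sample targets
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
print(len(connected_components(mask)))  # prints 2
```

In practice a production system would use a more robust segmentation (e.g. depth-aware clustering), but the connected-component step conveys how one image of piled parts yields several single-target regions.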
Step b: shooting the target to be sorted with the three-dimensional point cloud acquisition system at different poses and different degrees of occlusion, to obtain the three-dimensional point cloud samples of the target to be sorted;
Further, by shooting a single target with the three-dimensional point cloud acquisition system at various poses and different degrees of occlusion, three-dimensional point cloud samples of the single target at multiple poses and different degrees of occlusion are obtained; these are the three-dimensional point cloud samples of the target to be sorted in this embodiment.
Step c: performing augmentation processing on the three-dimensional point cloud samples, to obtain the three-dimensional point cloud training samples of the target to be sorted.
To improve the diversity of the three-dimensional point cloud samples of the target to be sorted, the samples are further augmented. The detailed process is as follows: after obtaining the three-dimensional point cloud samples of the training target, three-dimensional modeling is performed on them, and the CAD model of each three-dimensional point cloud sample is determined accordingly. The parameters of the 3D imaging system are set in the computer system, the generation of the point cloud is simulated using an imaging method, and four operations are applied to the CAD model: random rotation, translation, partial occlusion (i.e., removing part of the point cloud data of the sample), and addition of a certain amount of noise. After repeating the simulation many times in this way, a point cloud data set augmented per target category is obtained.
Similarly, in this embodiment, the same four operations (random rotation, translation, partial occlusion by removing part of the point cloud data, and addition of a certain amount of noise) may also be applied to the three-dimensional point cloud samples of the single target obtained by shooting; after repeating the simulation many times, a point cloud data set augmented per target category is likewise obtained.
Through the above two augmentation steps, each target obtains multiple point cloud samples, and the sample sizes of the various targets are balanced, which matters because the subsequent neural network is very sensitive to the balance of the data set. The augmentation also makes the samples of each target richer and closer to the actual field environment, so that the subsequent deep learning model is more robust. Further, all augmented three-dimensional point cloud training samples are labeled with the target category to which they belong, and the degree of occlusion may be labeled at the same time.
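The four augmentation operations can be sketched in NumPy as follows; the rotation axis, translation range, drop fraction, and noise level are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def augment_cloud(points, rng, drop_fraction=0.2, noise_std=0.01):
    """Apply the four augmentation operations to an (N, 3) point cloud:
    random rotation (here about the z axis), random translation,
    partial occlusion (dropping a contiguous run of points), and
    additive Gaussian noise."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    cloud = points @ rot.T                       # random rotation
    cloud = cloud + rng.uniform(-0.1, 0.1, 3)    # random translation
    keep = int(len(cloud) * (1.0 - drop_fraction))
    start = rng.integers(0, len(cloud) - keep + 1)
    cloud = cloud[start:start + keep]            # partial occlusion
    cloud = cloud + rng.normal(0.0, noise_std, cloud.shape)  # noise
    return cloud

rng = np.random.default_rng(0)
sample = rng.random((100, 3))
out = augment_cloud(sample, rng)
print(out.shape)  # prints (80, 3)
```

Running this repeatedly with fresh random draws yields the many simulated variants per target described above.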
Further, two-dimensional projection is carried out to determining three-dimensional point cloud training sample, to obtain corresponding two dimensional gray figure
Picture.It specifically, in the present embodiment, is the projection properties based on height value projection, by above-mentioned three-dimensional point cloud training sample more
It is projected in a plane, to obtain several corresponding 2-D gray images, projection plane can be customized, for example, flat at 6
It is projected on face, that is, can determine 6 width 2-D gray images.It is understood that in addition to described based on height in the present embodiment
Journey value projection projection properties carry out two-dimensional projection except, can also be based on the projection properties such as radial distance, grid angle into
Row projection, to obtain corresponding 2-D gray image.
The two-dimensional grayscale images obtained by projecting the three-dimensional point cloud are one kind of feature description of the target. Mapping the randomly posed three-dimensional point cloud samples uniformly onto a two-dimensional Euclidean space gives the grayscale images a high discriminative power, which facilitates the subsequent training of the convolutional neural network. However, two-dimensional projection discards the fine three-dimensional structure of the model, so the quadratic surface mesh approximation method based on the principal manifold is also needed to make up for that loss.
Step S20: determine the two-dimensional image corresponding to the three-dimensional point cloud training sample by the quadratic surface mesh approximation method based on the principal manifold.
In this embodiment, the three-dimensional point cloud is approximated by a quadratic surface mesh based on the principal manifold, and the resulting quadratic surface mesh can be converted into a two-dimensional image. Specifically, the quadratic principal manifold of the three-dimensional point cloud training sample is constructed first, and the quadratic surface mesh corresponding to the training sample is obtained from it.
Further, the quadratic surface mesh may be optimized with respect to parameters such as distance, area and flatness, and the optimized quadratic surface mesh is converted into the corresponding two-dimensional image. In this embodiment, converting the three-dimensional point cloud training sample into a two-dimensional image by the quadratic surface mesh approximation method based on the principal manifold realizes a shape description of the three-dimensional point cloud model.
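The patent does not spell out how the quadratic principal manifold is constructed. As a hedged illustration, the following sketch shows only the standard least-squares fit of a quadratic surface z = ax^2 + bxy + cy^2 + dx + ey + f to a point cloud, which is the kind of regression such a quadratic mesh could be built on; it is not the patented construction itself.

```python
import numpy as np

def fit_quadratic_surface(points):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    to an (N, 3) point cloud. Returns the coefficient vector (a..f)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
    return coeffs

def eval_quadratic(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a * x * x + b * x * y + c * y * y + d * x + e * y + f

# Recover known coefficients from noise-free samples on a quadratic surface.
rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
true = np.array([1.0, 0.5, -0.3, 0.2, 0.0, 1.0])
z = eval_quadratic(true, xy[:, 0], xy[:, 1])
fitted = fit_quadratic_surface(np.column_stack([xy, z]))
```

Evaluating the fitted surface on a regular grid would yield the mesh that is then flattened into a two-dimensional image.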
Step S30: input the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for off-line training, so as to determine the target category corresponding to the target to be sorted.
The two-dimensional grayscale images obtained by projection and the two-dimensional images corresponding to the quadratic surfaces obtained by mesh approximation are input into the preset convolutional neural network model for training; the input is the two-dimensional images of a three-dimensional point cloud, and the output is the predicted target category of that target.
Specifically, in this embodiment the convolutional neural network has a hybrid structure, as shown in Fig. 4. For the two-dimensional grayscale images obtained by projection, multiple convolution-pooling-stochastic delta (SDR) layers are built, forming the network structure Cov-MP-SDR-Net1, whose last pooling layer outputs C1 feature maps {F11, F12, ..., F1C1}. For the two-dimensional images corresponding to the quadratic surfaces obtained by mesh approximation, multiple convolution-pooling-SDR layers are likewise built, forming the network structure Cov-MP-SDR-Net2, whose last pooling layer outputs C2 feature maps {F21, F22, ..., F2C2}. It will be understood that, because the two inputs differ significantly, Cov-MP-SDR-Net2 and Cov-MP-SDR-Net1 have different network structure parameter settings.
Further, the feature maps {F11, F12, ..., F1C1} and {F21, F22, ..., F2C2} are taken as input to the fully connected neural network FNet1 and finally classified by a Softmax layer, which outputs the predicted target category of the target, thereby forming an end-to-end deep learning target classification network running from Cov-MP-SDR-Net1 and Cov-MP-SDR-Net2 through FNet1 to Softmax.
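The hybrid two-branch structure can be illustrated with a toy NumPy forward pass. Everything below is made up for illustration: the kernel sizes and counts, the 16 x 16 inputs and the 4 output classes are not from the patent, the SDR layers (which act like dropout and matter only during training) are omitted, and no training is performed.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image, loop version."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(img, size=2):
    h, w = img.shape[0] // size * size, img.shape[1] // size * size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def branch(img, kernels):
    """One convolution-pooling branch producing a flat feature vector."""
    maps = [maxpool2d(np.maximum(conv2d(img, k), 0.0)) for k in kernels]
    return np.concatenate([m.ravel() for m in maps])

rng = np.random.default_rng(3)
gray_img, mesh_img = rng.random((16, 16)), rng.random((16, 16))
k1 = [rng.standard_normal((3, 3)) for _ in range(2)]  # branch-1 kernels
k2 = [rng.standard_normal((5, 5)) for _ in range(2)]  # branch 2 uses different settings
# Concatenate both branches' feature maps, then fully connected + softmax.
features = np.concatenate([branch(gray_img, k1), branch(mesh_img, k2)])
w = rng.standard_normal((4, features.size))           # fully connected layer, 4 classes
probs = softmax(w @ features)
```

The two branches deliberately use different kernel sizes, echoing the text's point that the two networks have different parameter settings for their different inputs.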
Step S40: obtain the three-dimensional point cloud of the target to be matched, and input the three-dimensional point cloud into the convolutional neural network model after the off-line training, so as to identify the target category of the target to be matched.
After the off-line training of the convolutional neural network model is complete, the three-dimensional point cloud acquisition system shoots, online and in real time, the irregular point clouds of targets of all kinds piled up in disorder, and these targets are separated using point cloud segmentation, from which the three-dimensional point cloud to be matched is obtained.
Further, this three-dimensional point cloud is input into the well-trained convolutional neural network model, which recognizes it online and outputs the target category corresponding to the three-dimensional point cloud.
In this embodiment, the three-dimensional point cloud training samples of the targets to be sorted are obtained first and projected in two dimensions to obtain the corresponding grayscale images; at the same time, the two-dimensional images corresponding to the training samples are determined by the quadratic surface mesh approximation method based on the principal manifold. The grayscale images and two-dimensional images obtained above are then input into a preset convolutional neural network model for off-line training, so as to determine the target categories of the targets to be sorted. Finally, the three-dimensional point cloud of a target to be matched is obtained and input into the convolutional neural network model after the off-line training, which recognizes its target category online. In the target identification method proposed by the present invention, two-dimensional grayscale images are determined by projecting the three-dimensional point cloud training samples, two-dimensional images are determined by principal manifold analysis, and both are input into a preset convolutional neural network model for off-line training, so that the trained model can recognize targets to be matched online without their features having to be extracted manually, improving the accuracy of target identification on three-dimensional point clouds.
Further, referring to Fig. 3, a second embodiment of the target identification method of the present invention is proposed on the basis of the above embodiment. In this embodiment, step S40 includes:
Step S41: shoot the sample to be sorted with the three-dimensional point cloud acquisition system to obtain the irregular point cloud corresponding to the sample, and segment the irregular point cloud to obtain the three-dimensional point cloud of a single target to be matched.
In this embodiment, after the preset convolutional neural network model has been trained off-line on the three-dimensional point cloud training samples of the targets to be sorted, the trained convolutional neural network model is used for online recognition of the targets to be matched.
Specifically, the online three-dimensional point cloud acquisition system first shoots the sample to be sorted, in which targets of all kinds are piled up in disorder, to obtain the corresponding irregular point cloud; segmenting this irregular point cloud yields the three-dimensional point cloud of each single target to be matched.
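The patent does not specify the segmentation algorithm. As one hedged illustration, a simple Euclidean (distance-based) clustering can separate well-spaced objects in such an irregular point cloud; real systems would use a k-d tree rather than this O(N^2) loop, but the idea is the same:

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.2):
    """Naive Euclidean clustering: points closer than `radius` (directly
    or through a chain of neighbors) share a cluster label."""
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        queue = deque([seed])   # flood-fill from an unlabeled seed
        labels[seed] = current
        while queue:
            p = queue.popleft()
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.nonzero((d < radius) & (labels == -1))[0]:
                labels[q] = current
                queue.append(q)
        current += 1
    return labels

# Two well-separated blobs should come out as two clusters.
rng = np.random.default_rng(4)
a = rng.normal(0.0, 0.03, size=(50, 3))
b = rng.normal(5.0, 0.03, size=(50, 3))
labels = euclidean_cluster(np.vstack([a, b]))
```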
Step S42: determine the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched, and input them into the convolutional neural network model after the off-line training, so that the convolutional neural network model after the off-line training outputs the target category of the target to be matched.
Further, the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched are determined in the same way. In this embodiment the determination proceeds as follows: the three-dimensional point cloud of the target to be matched is translated and rotated to obtain multiple three-dimensional point clouds; the grayscale images corresponding to these point clouds are determined by the two-dimensional projection method of the first embodiment; the quadratic surface mesh corresponding to the point cloud is determined by the quadratic surface mesh approximation method based on the principal manifold and likewise converted into the corresponding two-dimensional image.
Once the grayscale image and two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched have been determined, they are input into the trained convolutional neural network model, which outputs the target category of the target to be matched.
Further, in this embodiment, when the grayscale images and two-dimensional images are input into the trained convolutional neural network model for online recognition of the target category, multiple target categories may be output for each target to be matched. The occurrences of these categories are therefore counted, and the category that appears most often, i.e. the one with the largest count, is taken as the target category of the target to be matched.
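The majority vote over the categories predicted for the several views or transformed copies of one object can be written directly with `collections.Counter` (the category names below are, of course, only placeholders):

```python
from collections import Counter

def vote(predicted_categories):
    """Return the category that occurs most often among the predictions
    for one object; ties fall to the first-encountered category, which
    Counter.most_common preserves."""
    return Counter(predicted_categories).most_common(1)[0][0]

winner = vote(["bolt", "nut", "bolt", "bolt", "washer"])
```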
In this embodiment, the three-dimensional point cloud of the target to be matched is obtained and recognized online by the convolutional neural network model after the off-line training, which outputs the corresponding target category, thereby realizing accurate target identification on three-dimensional point clouds in robotic grasping and sorting systems.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium on which a target identification program is stored. When executed by a processor, the target identification program implements the following operations:
obtaining a three-dimensional point cloud training sample of a target to be sorted, and performing two-dimensional projection on the three-dimensional point cloud training sample to obtain a corresponding two-dimensional grayscale image;
determining a two-dimensional image corresponding to the three-dimensional point cloud training sample by the quadratic surface mesh approximation method based on the principal manifold;
inputting the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for off-line training, so as to determine the target category corresponding to the target to be sorted;
obtaining a three-dimensional point cloud of a target to be matched, and inputting the three-dimensional point cloud into the convolutional neural network model after the off-line training, so as to identify the target category of the target to be matched.
Further, when executed by the processor, the target identification program also implements the following operations:
shooting a sample with the three-dimensional point cloud acquisition system to obtain a sample image, and segmenting the sample image with an image segmentation algorithm to determine a single target to be sorted within the sample;
shooting the target to be sorted with the three-dimensional point cloud acquisition system in different poses and with different degrees of occlusion, to obtain a three-dimensional point cloud sample of the target to be sorted;
performing augmentation on the three-dimensional point cloud sample to obtain the three-dimensional point cloud training sample of the target to be sorted.
Further, when executed by the processor, the target identification program also implements the following operations:
performing three-dimensional modeling on the three-dimensional point cloud sample to determine a CAD model corresponding to the three-dimensional point cloud sample;
performing augmentation operations on the CAD model to obtain the three-dimensional point cloud training sample of the target to be sorted, wherein the augmentation operations include random rotation, translation, partial occlusion and noise addition.
Further, when executed by the processor, the target identification program also implements the following operation:
projecting the three-dimensional point cloud training sample onto multiple planes based on height-value projection features, to obtain the corresponding two-dimensional grayscale image.
Further, when executed by the processor, the target identification program also implements the following operations:
constructing the quadratic principal manifold of the three-dimensional point cloud training sample, and obtaining the quadratic surface mesh corresponding to the three-dimensional point cloud training sample from the quadratic principal manifold;
optimizing the quadratic surface mesh with respect to distance, area and flatness, and converting the optimized quadratic surface mesh into the two-dimensional image corresponding to the three-dimensional point cloud training sample.
Further, when executed by the processor, the target identification program also implements the following operations:
shooting a sample to be sorted with the three-dimensional point cloud acquisition system to obtain the irregular point cloud corresponding to the sample to be sorted, and segmenting the irregular point cloud to obtain the three-dimensional point cloud of a single target to be matched;
determining the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched, and inputting them into the convolutional neural network model after the off-line training, so that the model outputs the target category of the target to be matched.
Further, when executed by the processor, the target identification program also implements the following operations:
translating and rotating the three-dimensional point cloud of the target to be matched to obtain multiple three-dimensional point clouds;
performing two-dimensional projection on the multiple three-dimensional point clouds to obtain the two-dimensional grayscale images corresponding to the target to be matched, and determining the two-dimensional images corresponding to the target to be matched by the quadratic surface mesh approximation method based on the principal manifold.
Further, when executed by the processor, the target identification program also implements the following operations:
inputting the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud into the convolutional neural network model after the off-line training, and receiving the multiple target categories that the model outputs for the target to be matched;
counting the multiple target categories, and determining the category with the largest count as the target category of the target to be matched.
In the scheme provided in this embodiment, the three-dimensional point cloud training samples of the targets to be sorted are obtained first and projected in two dimensions to obtain the corresponding grayscale images; at the same time, the two-dimensional images corresponding to the training samples are determined by the quadratic surface mesh approximation method based on the principal manifold. The grayscale images and two-dimensional images obtained above are then input into a preset convolutional neural network model for off-line training, so as to determine the target categories of the targets to be sorted. Finally, the three-dimensional point cloud of a target to be matched is obtained and input into the convolutional neural network model after the off-line training, which recognizes its target category online. In the target identification method proposed by the present invention, two-dimensional grayscale images are determined by projecting the three-dimensional point cloud training samples, two-dimensional images are determined by principal manifold analysis, and both are input into a preset convolutional neural network model for off-line training, so that the trained model can recognize targets to be matched online without their features having to be extracted manually, improving the accuracy of target identification on three-dimensional point clouds.
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or system. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes it.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
From the description of the embodiments above, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk or optical disc), including several instructions that cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (10)
1. A target identification method, characterized in that the target identification method comprises the following steps:
obtaining a three-dimensional point cloud training sample of a target to be sorted, and performing two-dimensional projection on the three-dimensional point cloud training sample to obtain a corresponding two-dimensional grayscale image;
determining a two-dimensional image corresponding to the three-dimensional point cloud training sample by a quadratic surface mesh approximation method based on the principal manifold;
inputting the two-dimensional grayscale image and the two-dimensional image into a preset convolutional neural network model for off-line training, so as to determine the target category corresponding to the target to be sorted;
obtaining a three-dimensional point cloud of a target to be matched, and inputting the three-dimensional point cloud into the convolutional neural network model after the off-line training, so as to identify the target category of the target to be matched.
2. The target identification method according to claim 1, characterized in that the step of obtaining the three-dimensional point cloud training sample of the target to be sorted comprises:
shooting a sample with a three-dimensional point cloud acquisition system to obtain a sample image, and segmenting the sample image with an image segmentation algorithm to determine a single target to be sorted within the sample;
shooting the target to be sorted with the three-dimensional point cloud acquisition system in different poses and with different degrees of occlusion, to obtain a three-dimensional point cloud sample of the target to be sorted;
performing augmentation on the three-dimensional point cloud sample to obtain the three-dimensional point cloud training sample of the target to be sorted.
3. The target identification method according to claim 2, characterized in that the step of performing augmentation on the three-dimensional point cloud sample to obtain the three-dimensional point cloud training sample of the target to be sorted comprises:
performing three-dimensional modeling on the three-dimensional point cloud sample to determine a CAD model corresponding to the three-dimensional point cloud sample;
performing augmentation operations on the CAD model to obtain the three-dimensional point cloud training sample of the target to be sorted, wherein the augmentation operations include random rotation, translation, partial occlusion and noise addition.
4. The target identification method according to claim 1, characterized in that the step of performing two-dimensional projection on the three-dimensional point cloud training sample to obtain the corresponding two-dimensional grayscale image comprises:
projecting the three-dimensional point cloud training sample onto multiple planes based on height-value projection features, to obtain the corresponding two-dimensional grayscale image.
5. The target identification method according to claim 1, characterized in that the step of determining the two-dimensional image corresponding to the three-dimensional point cloud training sample by the quadratic surface mesh approximation method based on the principal manifold comprises:
constructing the quadratic principal manifold of the three-dimensional point cloud training sample, and obtaining the quadratic surface mesh corresponding to the three-dimensional point cloud training sample from the quadratic principal manifold;
optimizing the quadratic surface mesh with respect to distance, area and flatness, and converting the optimized quadratic surface mesh into the two-dimensional image corresponding to the three-dimensional point cloud training sample.
6. The target identification method according to claim 1, characterized in that the step of obtaining the three-dimensional point cloud of the target to be matched and inputting the three-dimensional point cloud into the convolutional neural network model after the off-line training, so as to identify the target category of the target to be matched, comprises:
shooting a sample to be sorted with the three-dimensional point cloud acquisition system to obtain an irregular point cloud corresponding to the sample to be sorted, and segmenting the irregular point cloud to obtain the three-dimensional point cloud of a single target to be matched;
determining the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched, and inputting them into the convolutional neural network model after the off-line training, so that the convolutional neural network model after the off-line training outputs the target category of the target to be matched.
7. The target identification method according to claim 6, characterized in that the step of determining the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud of the target to be matched comprises:
translating and rotating the three-dimensional point cloud of the target to be matched to obtain multiple three-dimensional point clouds;
performing two-dimensional projection on the multiple three-dimensional point clouds to obtain the two-dimensional grayscale images corresponding to the target to be matched, and determining the two-dimensional images corresponding to the target to be matched by the quadratic surface mesh approximation method based on the principal manifold.
8. The target identification method according to any one of claims 1 to 7, characterized in that the step of inputting the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud into the convolutional neural network model after the off-line training, so that the convolutional neural network model after the off-line training outputs the target category of the target to be matched, comprises:
inputting the two-dimensional grayscale image and two-dimensional image corresponding to the three-dimensional point cloud into the convolutional neural network model after the off-line training, and receiving the multiple target categories that the model outputs for the target to be matched;
counting the multiple target categories, and determining the target category with the largest count as the target category of the target to be matched.
9. A target identification device, characterized in that the target identification device comprises: a memory, a processor, and a target identification program stored on the memory and executable on the processor, wherein the target identification program implements, when executed by the processor, the steps of the target identification method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a target identification program is stored on the computer-readable storage medium, and the target identification program implements, when executed by a processor, the steps of the target identification method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811187520.2A CN109344786A (en) | 2018-10-11 | 2018-10-11 | Target identification method, device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109344786A true CN109344786A (en) | 2019-02-15 |
Family
ID=65308862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811187520.2A Pending CN109344786A (en) | 2018-10-11 | 2018-10-11 | Target identification method, device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344786A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110069993A (en) * | 2019-03-19 | 2019-07-30 | 同济大学 | A kind of target vehicle detection method based on deep learning |
CN110097047A (en) * | 2019-03-19 | 2019-08-06 | 同济大学 | A vehicle detection method using single-line lidar based on deep learning |
CN110414374A (en) * | 2019-07-08 | 2019-11-05 | 深兰科技(上海)有限公司 | A kind of determination method, apparatus, equipment and the medium of barrier pose |
CN110781932A (en) * | 2019-10-14 | 2020-02-11 | 国家广播电视总局广播电视科学研究院 | Ultrahigh-definition film source color gamut detection method for multi-class image conversion and comparison |
CN111414809A (en) * | 2020-02-28 | 2020-07-14 | 上海牙典软件科技有限公司 | Three-dimensional graph recognition method, device, equipment and storage medium |
CN111652085A (en) * | 2020-05-14 | 2020-09-11 | 东莞理工学院 | Object recognition method based on the combination of 2D and 3D features |
CN112016638A (en) * | 2020-10-26 | 2020-12-01 | 广东博智林机器人有限公司 | Method, device and equipment for identifying steel bar cluster and storage medium |
CN112395962A (en) * | 2020-11-03 | 2021-02-23 | 北京京东乾石科技有限公司 | Data augmentation method and device, and object identification method and system |
CN112613551A (en) * | 2020-12-17 | 2021-04-06 | 东风汽车有限公司 | Automobile part identification method, storage medium and system |
CN112700455A (en) * | 2020-12-28 | 2021-04-23 | 北京超星未来科技有限公司 | Laser point cloud data generation method, device, equipment and medium |
CN112926432A (en) * | 2021-02-22 | 2021-06-08 | 杭州优工品科技有限公司 | Training method and device suitable for industrial component recognition model and storage medium |
WO2021142843A1 (en) * | 2020-01-19 | 2021-07-22 | Oppo广东移动通信有限公司 | Image scanning method and device, apparatus, and storage medium |
WO2021169498A1 (en) * | 2020-09-18 | 2021-09-02 | 平安科技(深圳)有限公司 | Three-dimensional point cloud augmentation method and apparatus, storage medium, and computer device |
CN113449574A (en) * | 2020-03-26 | 2021-09-28 | 上海际链网络科技有限公司 | Method and device for identifying content on target, storage medium and computer equipment |
CN113936269A (en) * | 2021-11-17 | 2022-01-14 | 深圳市镭神智能系统有限公司 | Method for identifying staying object and method for controlling motor vehicle |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102222240A (en) * | 2011-06-29 | 2011-10-19 | 东南大学 | DSmT (Dezert-Smarandache Theory)-based image target multi-characteristic fusion recognition method |
US8553989B1 (en) * | 2010-04-27 | 2013-10-08 | Hrl Laboratories, Llc | Three-dimensional (3D) object recognition system using region of interest geometric features |
CN103810747A (en) * | 2014-01-29 | 2014-05-21 | 辽宁师范大学 | Three-dimensional point cloud object shape similarity comparing method based on two-dimensional mainstream shape |
CN103985116A (en) * | 2014-04-28 | 2014-08-13 | 辽宁师范大学 | Method for describing three-dimensional auricle shape features based on local salience and two-dimensional main manifold |
CN104298971A (en) * | 2014-09-28 | 2015-01-21 | 北京理工大学 | Method for identifying objects in 3D point cloud data |
CN105930382A (en) * | 2016-04-14 | 2016-09-07 | 严进龙 | Method for searching for 3D model with 2D pictures |
CN106874955A (en) * | 2017-02-24 | 2017-06-20 | 深圳市唯特视科技有限公司 | A kind of 3D shape sorting technique based on depth convolutional neural networks |
CN106951923A (en) * | 2017-03-21 | 2017-07-14 | 西北工业大学 | A 3D shape recognition method for robots based on multi-view information fusion |
- 2018-10-11 CN CN201811187520.2A patent/CN109344786A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8553989B1 (en) * | 2010-04-27 | 2013-10-08 | Hrl Laboratories, Llc | Three-dimensional (3D) object recognition system using region of interest geometric features |
CN102222240A (en) * | 2011-06-29 | 2011-10-19 | 东南大学 | DSmT (Dezert-Smarandache Theory)-based image target multi-characteristic fusion recognition method |
CN103810747A (en) * | 2014-01-29 | 2014-05-21 | 辽宁师范大学 | Three-dimensional point cloud object shape similarity comparing method based on two-dimensional mainstream shape |
CN103985116A (en) * | 2014-04-28 | 2014-08-13 | 辽宁师范大学 | Method for describing three-dimensional auricle shape features based on local salience and two-dimensional main manifold |
CN104298971A (en) * | 2014-09-28 | 2015-01-21 | 北京理工大学 | Method for identifying objects in 3D point cloud data |
CN105930382A (en) * | 2016-04-14 | 2016-09-07 | 严进龙 | Method for searching for 3D model with 2D pictures |
CN106874955A (en) * | 2017-02-24 | 2017-06-20 | 深圳市唯特视科技有限公司 | A kind of 3D shape sorting technique based on depth convolutional neural networks |
CN106951923A (en) * | 2017-03-21 | 2017-07-14 | 西北工业大学 | A 3D shape recognition method for robots based on multi-view information fusion |
Non-Patent Citations (6)
Title |
---|
BAOGUANG SHI et al.: "DeepPano: Deep Panoramic Representation for 3-D Shape Recognition", IEEE Signal Processing Letters, 2015 * |
F. GOMEZ-DONOSO et al.: "LonchaNet: A Sliced-based CNN Architecture for Real-time 3D Object Recognition", IJCNN 2017 * |
GUAN PANG et al.: "Fast and Robust Multi-View 3D Object Recognition in Point Clouds", 2015 International Conference on 3D Vision * |
HANG SU et al.: "Multi-view Convolutional Neural Networks for 3D Shape Recognition", ICCV 2015 * |
FENG Yuanli et al.: "3D Shape Recognition Based on Spherical Depth Panorama Representation", Journal of Computer-Aided Design & Computer Graphics * |
SUN Xiaopeng et al.: "Two-Dimensional Principal Manifold Description of 3D Point Cloud Shape Features", Journal of Software * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097047A (en) * | 2019-03-19 | 2019-08-06 | 同济大学 | A vehicle detection method using single-line lidar based on deep learning |
CN110069993A (en) * | 2019-03-19 | 2019-07-30 | 同济大学 | A target vehicle detection method based on deep learning |
CN110097047B (en) * | 2019-03-19 | 2021-10-08 | 同济大学 | A vehicle detection method based on deep learning using single-line lidar |
CN110414374A (en) * | 2019-07-08 | 2019-11-05 | 深兰科技(上海)有限公司 | Method, apparatus, device and medium for determining obstacle pose |
CN110414374B (en) * | 2019-07-08 | 2021-12-17 | 深兰科技(上海)有限公司 | Method, device, equipment and medium for determining obstacle position and attitude |
CN110781932A (en) * | 2019-10-14 | 2020-02-11 | 国家广播电视总局广播电视科学研究院 | Ultrahigh-definition film source color gamut detection method for multi-class image conversion and comparison |
CN110781932B (en) * | 2019-10-14 | 2022-03-11 | 国家广播电视总局广播电视科学研究院 | Ultrahigh-definition film source color gamut detection method for multi-class image conversion and comparison |
WO2021142843A1 (en) * | 2020-01-19 | 2021-07-22 | Oppo广东移动通信有限公司 | Image scanning method and device, apparatus, and storage medium |
CN111414809A (en) * | 2020-02-28 | 2020-07-14 | 上海牙典软件科技有限公司 | Three-dimensional graph recognition method, device, equipment and storage medium |
CN111414809B (en) * | 2020-02-28 | 2024-03-05 | 上海牙典软件科技有限公司 | Three-dimensional pattern recognition method, device, equipment and storage medium |
CN113449574A (en) * | 2020-03-26 | 2021-09-28 | 上海际链网络科技有限公司 | Method and device for identifying content on target, storage medium and computer equipment |
CN111652085A (en) * | 2020-05-14 | 2020-09-11 | 东莞理工学院 | Object recognition method based on the combination of 2D and 3D features |
WO2021169498A1 (en) * | 2020-09-18 | 2021-09-02 | 平安科技(深圳)有限公司 | Three-dimensional point cloud augmentation method and apparatus, storage medium, and computer device |
CN112016638B (en) * | 2020-10-26 | 2021-04-06 | 广东博智林机器人有限公司 | Method, device and equipment for identifying steel bar cluster and storage medium |
CN112016638A (en) * | 2020-10-26 | 2020-12-01 | 广东博智林机器人有限公司 | Method, device and equipment for identifying steel bar cluster and storage medium |
CN112395962A (en) * | 2020-11-03 | 2021-02-23 | 北京京东乾石科技有限公司 | Data augmentation method and device, and object identification method and system |
CN112613551A (en) * | 2020-12-17 | 2021-04-06 | 东风汽车有限公司 | Automobile part identification method, storage medium and system |
CN112613551B (en) * | 2020-12-17 | 2024-08-20 | 东风汽车有限公司 | Automobile part identification method, storage medium and system |
CN112700455A (en) * | 2020-12-28 | 2021-04-23 | 北京超星未来科技有限公司 | Laser point cloud data generation method, device, equipment and medium |
CN112926432A (en) * | 2021-02-22 | 2021-06-08 | 杭州优工品科技有限公司 | Training method and device for an industrial component recognition model, and storage medium |
CN112926432B (en) * | 2021-02-22 | 2023-08-15 | 杭州优工品科技有限公司 | Training method, device and storage medium suitable for industrial part identification model |
CN113936269A (en) * | 2021-11-17 | 2022-01-14 | 深圳市镭神智能系统有限公司 | Method for identifying a lingering object and method for controlling a motor vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109344786A (en) | Target identification method, device and computer readable storage medium | |
Wu et al. | Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments | |
Xu et al. | Wheat ear counting using K-means clustering segmentation and convolutional neural network | |
CN107808143B (en) | Computer Vision-Based Dynamic Gesture Recognition Method | |
CN110060237B (en) | Fault detection method, device, equipment and system | |
Cai et al. | SMT solder joint inspection via a novel cascaded convolutional neural network | |
CN107316058A (en) | Method for improving target detection performance by improving target classification and localization accuracy | |
CN110070072A (en) | A method of generating an object detection model | |
CN110135503A (en) | A deep learning recognition method for assembly robot parts | |
CN110070101A (en) | Plant species recognition method and device, storage medium and computer equipment | |
CN107239790A (en) | A service robot target detection and localization method based on deep learning | |
CN103020885B (en) | Depth image compression | |
CN109272016A (en) | Target detection method, device, terminal equipment and computer readable storage medium | |
CN110399888B (en) | Weiqi judging system based on MLP neural network and computer vision | |
CN111507134A (en) | Human-shaped posture detection method and device, computer equipment and storage medium | |
CN109816634B (en) | Detection method, model training method, device and equipment | |
CN104112143A (en) | Image classification method based on a weighted hypersphere support vector machine algorithm | |
CN111611889B (en) | Miniature insect pest recognition device in farmland based on improved convolutional neural network | |
CN110991444A (en) | Complex scene-oriented license plate recognition method and device | |
CN112215861A (en) | Football detection method and device, computer readable storage medium and robot | |
CN109871821A (en) | Pedestrian re-identification method, device, device and storage medium for adaptive network | |
CN106682681A (en) | Recognition algorithm automatic improvement method based on relevance feedback | |
CN114266967B (en) | Target recognition method for cross-source remote sensing data based on signed distance feature | |
CN114509785A (en) | Three-dimensional object detection method, device, storage medium, processor and system | |
CN104036294A (en) | Spectral tag based adaptive multi-spectral remote sensing image classification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-02-15 |