CN110796694A - Fruit three-dimensional point cloud real-time acquisition method based on KinectV2 - Google Patents
Fruit three-dimensional point cloud real-time acquisition method based on KinectV2
- Publication number
- CN110796694A (application CN201910970724.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- fruit
- point
- kinect
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a fruit three-dimensional point cloud real-time acquisition method based on Kinect V2. The fruit point cloud is collected online at the front end: the user films the crop fruit with a handheld Kinect V2 camera while the shooting process is displayed on screen in real time through simultaneous localization and mapping; at the back end the pose is optimized and noise is removed, and the point cloud information of the fruit is finally obtained. To obtain a clean fruit point cloud, outlier points are first removed with filter-based methods such as pass-through filtering, the fruit point cloud is then finely denoised in a feature-preserving way using the Voronoi joint covariance, and finally a complete, noise-free fruit point cloud is obtained through curvature-based adaptive mean shift. By using the Kinect V2 as the acquisition tool for crop point clouds, the invention provides a convenient and inexpensive solution for three-dimensional reconstruction of fruits and for digital fruit research.
Description
Technical Field
The present invention belongs to the application of computer graphics and computer vision technology to the acquisition of three-dimensional digital fruit shape, and relates to a fruit three-dimensional point cloud real-time acquisition method based on Kinect V2 equipment.
Background
With the development of agricultural technology, plant phenomics is moving towards intelligence, visualization and digitization. Three-dimensional fruit modeling is an important component of plant digitization research: reverse modeling of the fruit surface makes it possible, on the one hand, to explore plant growth and development rules by simulating the influence of environmental change on plant models and thereby guide agricultural production, and on the other hand to accelerate breakthroughs in key agricultural core technologies. Faithful construction of plant models further promotes the development of digital fruit and agricultural e-commerce and strongly supports technologies such as intelligent agriculture, precise gene-expression monitoring and intelligent breeding, and therefore has important technical value for accelerating digital agriculture and improving the price and quality of agricultural products.
Although point cloud acquisition with a three-dimensional scanner is convenient and efficient, such scanners cost tens of thousands of yuan or more, which hinders the spread of the technology. ToF-based devices on the market such as the Kinect V2 can acquire point cloud data conveniently and cheaply, but the points they acquire are sparse and noisy and the camera position is uncertain, so that later-stage registration cannot be made to correspond completely.
Disclosure of Invention
Aiming at the problems and defects of the prior art, the invention aims to provide a fruit three-dimensional point cloud real-time acquisition method based on KinectV2 equipment, in which the user acquires the fruit point cloud with a handheld KinectV2 device, accurate point cloud information is extracted from the color and depth images captured by the Kinect V2, and the scanning progress is tracked in real time during scanning.
The technical solution adopted to achieve the purpose of the invention is as follows: a handheld Kinect V2-based fruit three-dimensional point cloud real-time acquisition method comprises the following steps:
Step 1: drive and configure the Kinect V2 using libfreenect2; before use, the user must replace the Kinect V2 driver with libusbK (v3.0.7.0) using Zadig.
Step 2: display the mapping state and the real-time camera position throughout the scanning process using simultaneous localization and mapping.
Step 3: acquire an RGB image and the corresponding depth image from the Kinect V2 every second; the best source material is obtained when the viewing angle between two adjacent captures changes by no more than 10 degrees.
Step 4: obtain the point cloud by matching the color and depth images from the Kinect V2 pixel by pixel.
Step 5: use the feature extraction and feature matching modules of OpenMVG to extract and match features of the imported two-dimensional images and generate a sparse point cloud model of the fruit crop; according to the feature matching result, perform sparse reconstruction (Sparse Reconstruction) with scene information such as camera positions computed via the projective theorem based on the SfM algorithm.
Step 6: generate a dense point cloud model of the fruit crop: using the obtained scene information and the original photos, patches are continuously generated and screened with the patch-based multi-view stereo (PMVS) algorithm so that the existing data points are diffused for dense reconstruction (Dense Reconstruction); curvature mean shift under an L0-norm constraint then solves the problem of removing mixed noise points on the original object while preserving the sharp, thorn-like features of the fruit.
Step 7: stitch the point cloud model of the fruit crop. The storage and manipulation of the camera intrinsic and extrinsic matrices are implemented with the C++ Eigen, OpenCV and PCL libraries; each three-dimensional point cloud patch is rotated and translated according to the camera intrinsics and extrinsics and added to a single point cloud model, completing the point cloud stitching.
Step 8: perform fine fusion of the point clouds with the ICP (Iterative Closest Point) algorithm to obtain a dense, hole-free point cloud.
Step 9: to remove background noise points from the fruit point cloud, filter-based methods such as pass-through filtering are applied first, the fruit point cloud is then finely denoised in a feature-preserving way using the Voronoi joint covariance, and finally the complete, noise-free fruit point cloud is obtained through curvature-based adaptive mean shift.
Based on the depth and RGB images acquired by the Kinect V2, the invention combines the SfM algorithm into a fruit point cloud acquisition system that is suitable for plant fruits, low-cost, real-time and efficient. The SfM algorithm extracts three-dimensional point information from the RGB images obtained by the Kinect to compensate for the sparseness of the points derived from the Kinect V2 depth data, and also helps determine the camera position and the registration of the point cloud patches acquired by the Kinect V2.
By using the Kinect V2 as the main device, the invention greatly reduces the cost of point cloud acquisition; by separating the front end from the back end, the computation can run offline, which saves labour cost and speeds up the display at the front end.
The invention has important technical theoretical value and application significance for research and development of fruit point cloud three-dimensional scanning systems, establishment of fruit three-dimensional phenotype databases and the like.
The invention obtains crop fruit point clouds with the inexpensive RGB-D device KinectV2 while ensuring an accurate restoration of the fruit point cloud, providing a broader research basis for digital fruit research and reverse engineering of fruit point clouds.
Drawings
FIG. 1 is a basic flow of a system for acquiring three-dimensional point cloud in real time based on SfM and machine learning according to the present invention;
FIG. 2 is a basic flow of motion recovery structure (SfM) for acquiring a sparse point cloud according to the present invention;
FIG. 3 is a schematic diagram of a process in which 3D points in a motion recovery structure are projected onto a 2D image plane of a camera;
FIG. 4 is a diagram of KinectV2 initialization routine operation;
FIG. 5 is a process of the system acquiring images from KinectV2, extracting key points, and estimating a rotation pose matrix;
FIG. 6 is a system-computed keypoint visualization;
FIG. 7 is a raw point cloud captured directly, before denoising by the present system;
FIG. 8 is a diagram of the denoised but not yet rotated point cloud produced by the present system;
FIG. 9 is a diagram of the result of applying the rotation matrix estimated by the present system;
fig. 10 is a diagram of the result after refinement of the rotation matrix by ICP in the present system.
Detailed Description
The principles, features and processes of the present invention are explained below in conjunction with the drawings, which are presented by way of example only and not to limit the scope of the invention.
In the invention, after the user has configured the Kinect driver as described above, he or she can hold the KinectV2 and move it slowly; during this process the program automatically acquires the RGB and depth images used to compute the three-dimensional point information.
As shown in fig. 1, a fruit three-dimensional point cloud real-time obtaining method based on Kinect V2 includes the following steps:
Step 1: before running, the user performs the Zadig driver replacement: the Xbox NUI Sensor device is selected in Zadig and its driver is replaced with libusbK (v3.0.7.0).
Step 2: calibrate the camera, complete feature extraction, feature matching and related processes, and then solve for the camera pose.
Step 2.1: calibrate the camera. The image acquired by the camera is distorted by the lens, so distortion correction is required. First, checkerboard pictures are taken from multiple angles and directions (20 pictures are acquired), and then the camera intrinsic calibration is completed with the MATLAB Camera Calibrator.
Step 2.2: feature extraction. The simultaneous localization and mapping of the invention is only used to display the scanning process and has no accuracy requirement, so the ORB feature, which is rotation- and scale-invariant, is adopted, and Oriented FAST key points are extracted with the FAST-12 algorithm. For higher efficiency, the invention adds a pretest: for each pixel, the 1st, 5th, 9th and 13th pixels on the neighbourhood circle are checked directly, and the remaining pixels are examined only when at least three of these four exceed the threshold range. The feature points found in this way may cluster together, so non-maximum suppression is used to check whether the brightness of the central pixel is the maximum within its neighbourhood circle, avoiding overly concentrated feature points.
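As an illustrative sketch (not the invention's exact program), the Oriented FAST keypoint extraction of this step can be reproduced with OpenCV's ORB implementation; the feature count, pyramid settings and FAST threshold below are assumed placeholder values.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Oriented FAST keypoints + rotated BRIEF descriptors (descriptors are used in step 2.3).
// OpenCV's FAST test internally uses a quick pretest on circle pixels and non-maximum
// suppression, matching the acceleration and de-clustering described in step 2.2.
void detectOrbFeatures(const cv::Mat& gray,
                       std::vector<cv::KeyPoint>& keypoints, cv::Mat& descriptors)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(/*nfeatures=*/500,
                                           /*scaleFactor=*/1.2f,
                                           /*nlevels=*/8);
    orb->setFastThreshold(20);   // assumed FAST corner threshold
    orb->detectAndCompute(gray, cv::noArray(), keypoints, descriptors);
}
```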
Step 2.3: feature matching. A descriptor is computed with the BRIEF algorithm for each key point extracted in step 2.2.
Step 2.3.1: randomly select n point pairs (p, q) within an M × M neighbourhood of the feature point, where p and q are sampled according to the same isotropic Gaussian distribution.
Step 2.3.2: compare the intensities of p and q; take 1 if p is larger than q and 0 otherwise, finally obtaining an n-dimensional vector of 0s and 1s, i.e. the Steered BRIEF descriptor.
Step 2.3.3: since the number of feature points obtained in this process is small, a brute-force matcher (Brute-Force Matcher) is used: for each feature point in image I_t, the Hamming distance between its descriptor and the descriptors of all feature points extracted in image I_{t+1} is computed.
Step 2.3.4: find the minimum distance among all matches, keep the matches whose Hamming distance is less than twice this minimum, and obtain two groups of well-matched 3D points p and p′.
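A minimal sketch of steps 2.3.3-2.3.4, assuming the ORB/BRIEF descriptors of two consecutive frames have already been computed with OpenCV; the lower bound of 30 on the distance threshold is an added safeguard for near-zero minimum distances, not a value stated in the original.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

std::vector<cv::DMatch> matchOrbDescriptors(const cv::Mat& desc_t, const cv::Mat& desc_t1)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);           // brute-force Hamming matcher
    std::vector<cv::DMatch> matches, good;
    matcher.match(desc_t, desc_t1, matches);            // step 2.3.3: all-pairs matching

    double min_dist = 1e9;
    for (const auto& m : matches) min_dist = std::min(min_dist, double(m.distance));

    for (const auto& m : matches)                        // step 2.3.4: keep d < 2 * min_dist
        if (m.distance < std::max(2.0 * min_dist, 30.0))
            good.push_back(m);
    return good;
}
```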
Step 2.3.5: define the error term of the i-th point pair as e_i = p_i − (R p_i′ + t).
Step 2.3.7: decompose the resulting least-squares problem and solve for the optimal R with the SVD (singular value decomposition) method; t is then obtained from p_i = R p_i′ + t, giving the camera pose.
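The SVD solution referenced in step 2.3.7 can be sketched with Eigen as follows; this is the standard closed-form alignment of two matched 3D point sets, not a quotation of the invention's code.

```cpp
#include <Eigen/Dense>
#include <vector>

// Recover R, t minimizing sum_i || p_i - (R q_i + t) ||^2 for matched points p_i (P) and p_i' (Q).
void estimateRigidTransform(const std::vector<Eigen::Vector3d>& P,
                            const std::vector<Eigen::Vector3d>& Q,
                            Eigen::Matrix3d& R, Eigen::Vector3d& t)
{
    const size_t n = P.size();
    Eigen::Vector3d cP = Eigen::Vector3d::Zero(), cQ = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < n; ++i) { cP += P[i]; cQ += Q[i]; }
    cP /= double(n); cQ /= double(n);                    // centroids

    Eigen::Matrix3d W = Eigen::Matrix3d::Zero();         // cross-covariance of centred points
    for (size_t i = 0; i < n; ++i)
        W += (P[i] - cP) * (Q[i] - cQ).transpose();

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(W, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU(), V = svd.matrixV();
    R = U * V.transpose();
    if (R.determinant() < 0) {                           // reflection correction
        V.col(2) *= -1.0;
        R = U * V.transpose();
    }
    t = cP - R * cQ;                                     // from p_i = R * p_i' + t
}
```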
Step 3: the depth map and color map are acquired through the libfreenect2 library, as shown in fig. 4.
Step 3.1: the color and depth frames (libfreenect2::Frame::Color and libfreenect2::Frame::Depth) are obtained from the libfreenect2 frame objects and saved into OpenCV cv::Mat objects.
Step 3.2: save the depth and color images. The RGB color pictures and depth pictures are saved in jpg format with OpenCV's imwrite(const String& filename, InputArray img) function.
Step 3 is repeated every second until shooting is finished; the best source material is obtained when the viewing angle between two adjacent captures changes by no more than 10 degrees.
Step 4: the point cloud is formed by mapping the acquired depth map and color map; the process is shown in fig. 5.
Step 4.1: the x, y, z and rgb information of each point is obtained through libfreenect2::Registration::getPointXYZRGB(const Frame* undistorted, const Frame* registered, int r, int c, float& x, float& y, float& z, float& rgb), as shown in fig. 6.
Step 4.2: declare a point object p of type pcl::PointXYZRGBA and pass the point information obtained in step 4.1 to p; finally, put it into the point cloud with the push_back() function.
Step 4.3: save the point cloud obtained in step 4.2 locally in pcd format through the pcl::io::savePCDFile(const std::string& file_name, const pcl::PointCloud<PointT>& cloud) function and display it on the user interface in real time, as shown in fig. 7.
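Steps 4.1-4.3 can be sketched as follows, assuming the libfreenect2 registration object and the undistorted/registered frames have already been prepared as in the libfreenect2 examples; the 512 × 424 resolution is the Kinect V2 depth default, and the color unpacking follows the usual BGRX byte layout of the registered frame.

```cpp
#include <libfreenect2/registration.h>
#include <libfreenect2/frame_listener.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <cmath>
#include <cstdint>
#include <string>

void saveFrameAsPointCloud(libfreenect2::Registration* registration,
                           const libfreenect2::Frame* undistorted,
                           const libfreenect2::Frame* registered,
                           const std::string& file_name)
{
    pcl::PointCloud<pcl::PointXYZRGBA> cloud;
    for (int r = 0; r < 424; ++r) {                        // Kinect V2 depth is 512 x 424
        for (int c = 0; c < 512; ++c) {
            float x, y, z, rgb;
            registration->getPointXYZRGB(undistorted, registered, r, c, x, y, z, rgb);
            if (!std::isfinite(z) || z <= 0.0f) continue;  // skip invalid depth pixels
            pcl::PointXYZRGBA p;                           // step 4.2: declare the point
            p.x = x; p.y = y; p.z = z;
            const uint8_t* bgrx = reinterpret_cast<const uint8_t*>(&rgb);
            p.b = bgrx[0]; p.g = bgrx[1]; p.r = bgrx[2]; p.a = 255;
            cloud.push_back(p);                            // add it to the cloud
        }
    }
    pcl::io::savePCDFile(file_name, cloud);                // step 4.3: save as .pcd
}
```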
Step 4 is repeated every second until shooting is finished; the best source material is obtained when the viewing angle between two adjacent captures changes by no more than 10 degrees.
In the invention, the user uses a Kinect V2 to acquire a series of two-dimensional pictures, three-dimensional point cloud patches and depth maps of the target fruit crop at different views, with about 10 degrees of change between views, and imports them into the computer program of the invention; feature extraction and picture matching are performed on the series of input two-dimensional images, and scene information such as key points and camera positions obtained from the matching result is used for sparse point cloud reconstruction; dense point cloud reconstruction is then performed on the basis of the sparse point cloud; finally, three-dimensional point cloud model stitching is performed on the basis of the known camera positions.
Step 5: extract and match features from the two-dimensional image set of the fruit crop obtained in step 3: accurately identify local features of the object, find the feature points in each image, and perform fast and accurate pairwise matching to obtain key points.
Step 5.1: considering that fruits are small, the SIFT descriptor, which is scale- and rotation-invariant and highly robust, is used in the feature detection step to improve accuracy; since this is an offline algorithm, the time cost is not a concern. The SIFT algorithm computes a 4 × 4 × 8 = 128-dimensional feature vector using difference-of-Gaussian (DoG) filters of different scales and performs feature matching on this vector, which is more accurate than other methods.
Step 5.2: the second step is matching and building the feature matching tracks, as shown in fig. 3; the images are matched pairwise using the Euclidean distance. Since the scale of the problem is small, a coarse matching approach is adopted: the distance is computed exhaustively for all feature points. The feature points extracted from each picture acquired at the front end are matched between every pair of pictures, with F(I) denoting the feature points of image I. For each pair of images I and J, each feature f ∈ F(I) is considered in order to find its nearest-neighbour feature vector c ∈ F(J).
When all pairs of matching images have been determined, the algorithm of the invention connects common feature matching points that appear in multiple images to form image feature matching tracks. A breadth-first search (BFS) is then used to find the complete track of each feature point, and an image connection graph is constructed for bundle adjustment (Bundle Adjustment).
Step 5.3: select a suitable initialization image pair; to avoid falling into a local optimum, the extrinsic parameters of the initial matching pair are estimated with the five-point method, and after the tracks are triangulated, initial 3D points are provided for the first bundle adjustment so that the reconstruction result is accurate enough.
Step 5.4: the invention uses sparse bundle adjustment (SBA) for the bundle adjustment. Bundle adjustment is an iterative process that removes three-dimensional points with large errors according to certain conditions. When the process finishes, the camera poses and key point information are obtained, and the sparse three-dimensional point cloud model is reconstructed from them.
The beneficial effect of this extension and improvement is that data and features can be accurately extracted from multiple two-dimensional images and the feature points of the target object can be found, so that pairwise matching can be performed quickly and accurately.
Step 6: use the sparse point cloud module with the obtained key points, camera parameters and other information to generate the sparse point cloud; then use the dense point cloud generation module to continuously generate and screen patches according to the camera parameters, so that the existing sparse point cloud data points are diffused and a dense point cloud is obtained, as shown in fig. 2.
Step 6.1: initial feature matching. Sparse patches are generated as seed points using the DoG and Harris operators; a grid of 32 × 32 pixel cells is drawn on each image, and η = 4 points with locally maximal interest values are selected in each cell (four feature points from each of the two operators).
Step 6.2: select a reference image and the other images. Each image is taken in turn as the reference image R(p), and images whose angle with its principal optical axis is less than 60 degrees are selected from the remaining images; the reference image is then matched against these images.
Step 6.3: select matching points and solve for the model point coordinates to generate the centre coordinates and normal vector of the patch, then optimize the patch.
Step 6.4: maximize the average correlation coefficient and update V(p); then diffuse, obtaining dense patches by diffusion from the sparse seed points. The goal is to have at least one patch in every grid cell.
The beneficial effect of this extension and improvement is that the dense point cloud can be obtained accurately from the sparse point cloud, so that the generated point cloud model is closer to the target object.
Step 7: use the point cloud stitching module to stitch the discrete point clouds of the fruit crop.
Step 7.1: implement reading, writing and composition of the point clouds with the PCD I/O functions of the PCL and OpenCV libraries: pcl::PCDReader, pcl::io::savePCDFileASCII and cv::Mat.
Step 7.2: use the matrix classes and the point cloud transform function of the Eigen and PCL libraries, Eigen::Matrix4f and pcl::transformPointCloud, to store and operate on the camera extrinsic matrix and to rotate and translate the point clouds.
Step 7.3: for pcd point cloud files, after each cloud has been transformed with pcl::transformPointCloud, the transformed clouds are appended one by one to the finally output point cloud, completing the stitching simply and efficiently at the point cloud level.
Step 7.4: for the point cloud formed by reading the depth map and RGB map, the RGB and depth images are stored with the vector container and the cv::Mat type, and the camera poses are stored with the Eigen::Isometry3d type and the vector container.
Step 7.4.1: read an RGB image from vector<cv::Mat>, read the corresponding depth map, access the pixel information row by row and column by column, and construct a three-dimensional point according to the following formula:
where d is the depth value, depthScale is the depth scale, x and y are the two-dimensional pixel coordinates, cx and cy are the coordinates of the camera principal point, and fx and fy are the camera focal lengths.
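The formula itself is not reproduced in the text above; given these variable definitions it corresponds to the standard pinhole back-projection and can be reconstructed (as a reconstruction from the definitions, not a quotation of the original) as

$$z = \frac{d}{\mathrm{depthScale}}, \qquad X = \frac{(x - c_x)\,z}{f_x}, \qquad Y = \frac{(y - c_y)\,z}{f_y},$$

where (X, Y, z) are the coordinates of the three-dimensional point in the camera frame.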
After the three-dimensional point coordinates have been assigned, the point is multiplied by the transformation matrix T and transformed into the world coordinate system according to a = T a′, where a′ is the coordinate vector before transformation, a is the coordinate vector after transformation, and the transformation matrix T consists of a rotation matrix R and a translation vector t whose specific values come from the SfM step.
After the transformation is completed, as shown in FIG. 9, since the transformed point and the points in the cloud have different formats, a point of type pcl::PointXYZRGB is defined, its coordinates and RGB value are set equal to those of the three-dimensional point, and it is added to the point cloud to be finally output, completing a higher-precision point cloud stitching at the point level.
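A sketch of steps 7.2-7.3 (illustrative, not the invention's exact code), with the rotation R and translation t assumed to come from the earlier SfM step:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Dense>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

// Rotate and translate one point cloud patch with the camera pose and append it
// to the output cloud that accumulates the stitched model.
void appendTransformed(const Cloud::Ptr& patch, Cloud::Ptr& output,
                       const Eigen::Matrix3d& R, const Eigen::Vector3d& t)
{
    Eigen::Matrix4f T = Eigen::Matrix4f::Identity();    // homogeneous transform T = [R t; 0 1]
    T.block<3,3>(0,0) = R.cast<float>();
    T.block<3,1>(0,3) = t.cast<float>();

    Cloud transformed;
    pcl::transformPointCloud(*patch, transformed, T);   // step 7.2: rotate + translate
    *output += transformed;                              // step 7.3: append to the stitched cloud
}
```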
Step 8: precise registration of the fruit point clouds. In the experiment, the registration is performed with the ICP (Iterative Closest Point) method of the PCL (Point Cloud Library).
Step 8.1: set the original fruit point cloud as the input of the icp object.
Step 8.2: set the maximum correspondence distance with the function setMaxCorrespondenceDistance(); only point pairs whose distance is smaller than this value are used in the ICP computation.
Step 8.3: set the iteration termination conditions. Termination condition 1 sets the maximum number of iterations with the function setMaximumIterations(); termination condition 2 sets the maximum tolerance between the transformation matrices of two successive iterations with the function setTransformationEpsilon(); termination condition 3 sets the maximum tolerance of the mean Euclidean distance of the point pairs of two successive iterations with the function setEuclideanFitnessEpsilon().
Step 8.4: obtain the final registration transformation matrix with the function getFinalTransformation() and compute the rotations R_x(θ), R_y(α), R_z(β); the rotation matrix is then computed from them as
R = R_z(β) R_y(α) R_x(θ)
Step 8.5: perform the ICP registration with the function align(PointCloudSource& output), outputting the transformed fruit point cloud as shown in fig. 10.
Step 8.6: obtain the convergence state; the function returns true whenever the iteration meets one of the three termination conditions described above.
Step 8.7: min_number_correspondences_ is the minimum number of matched point pairs; a rigid transformation can be determined from three non-collinear points in space.
Step 8.8: obtain the point cloud at the final iteration stop.
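Step 8 can be sketched with PCL's ICP interface as follows; the numeric thresholds are placeholders, since the patent does not state its concrete values.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

// Fine registration of two fruit point cloud patches; returns the final transform.
Eigen::Matrix4f registerPatches(const Cloud::Ptr& source, const Cloud::Ptr& target,
                                Cloud& aligned)
{
    pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
    icp.setInputSource(source);                    // step 8.1: clouds to register
    icp.setInputTarget(target);
    icp.setMaxCorrespondenceDistance(0.05);        // step 8.2: max point-pair distance (m)
    icp.setMaximumIterations(50);                  // step 8.3: termination condition 1
    icp.setTransformationEpsilon(1e-8);            // termination condition 2
    icp.setEuclideanFitnessEpsilon(1e-6);          // termination condition 3
    icp.align(aligned);                            // step 8.5: run the registration
    // step 8.6: icp.hasConverged() is true once one of the conditions above is met
    return icp.getFinalTransformation();           // step 8.4: final transformation matrix
}
```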
Step 9: a feature-preserving denoising method is adopted to remove noise points around the thorn-shaped fruit point cloud. Compared with other methods, this feature-preserving denoising better retains the characteristics of the thorn-shaped fruit point cloud. The denoising process is divided into three parts: large-scale denoising, feature detection and optimization, and mixed denoising of the original object.
Step 9.1: large-scale denoising. The invention classifies the background noise into distant-view noise and near-view noise, which solves the problem of removing large-scale noise points such as the background and outliers.
Step 9.1.1: distant-view denoising. The point cloud is analysed to find the coordinate axis along which the distant-view noise points are most widely distributed, which facilitates the subsequent large-scale denoising; for the fruit point cloud analysed in the invention, the axis with the most noise is the X axis.
Step 9.1.2: according to the result of the previous step, obtain the denoising dimension X and its value range (Xmin, Xmax) by analysis.
Step 9.1.3: traverse each point in the point cloud, check whether its value in the specified dimension lies within the value range, delete the points whose values fall outside it, and keep the points within the threshold.
Step 9.1.4: check the result; if the integrity of the fruit point cloud has been damaged, reset the value range; if many distant-view noise points remain, readjust the x threshold; if the distant-view noise is reduced and the fruit is complete, output the processed point cloud.
Step 9.1.5 further sets the specified dimension y, z and its threshold limits.
Step 9.1.6 traverses each point in the point cloud and points not within range are removed by determining the relationship between y, z and their thresholds for each point.
Step 9.1.7: check the effect; if the desired result is not met, return to step 9.1.5 and reselect the threshold limits. If the result is satisfactory, the background-free point cloud C_f1 is obtained.
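A sketch of the pass-through filtering of steps 9.1.2-9.1.7; the axis ranges below are placeholders chosen by inspection, not values from the patent.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

// Remove background points whose coordinate lies outside the chosen range,
// applied successively to the selected dimensions.
Cloud::Ptr removeBackground(const Cloud::Ptr& input)
{
    Cloud::Ptr tmp(new Cloud), cf1(new Cloud);
    pcl::PassThrough<pcl::PointXYZRGB> pass;

    pass.setInputCloud(input);
    pass.setFilterFieldName("x");         // step 9.1.2: denoising dimension X
    pass.setFilterLimits(-0.3f, 0.3f);    // (Xmin, Xmax), chosen by inspection
    pass.filter(*tmp);                    // step 9.1.3: keep points inside the range

    pass.setInputCloud(tmp);              // step 9.1.5: repeat for another dimension
    pass.setFilterFieldName("z");
    pass.setFilterLimits(0.4f, 1.2f);
    pass.filter(*cf1);
    return cf1;                           // step 9.1.7: background-free cloud C_f1
}
```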
Step 9.1.8: near-view denoising. Suppose the data points in the fruit point cloud model are p_i, i = 1, 2, …, S. For each point p_i, let d_i be its distance to any other point and consider its k-neighbourhood; the average distance from p_i to all of its k neighbours is then assumed to follow a Gaussian distribution.
Step 9.1.9: set a threshold range Tmax according to the mean μ and the standard deviation σ.
Step 9.1.10 removes points that do not meet the threshold based on the average distance and the standard threshold range Tmax.
Step 9.1.11: traverse each point in the point cloud, specify the minimum number of neighbours M that each point must have within a given radius r, and check each point's true neighbour count m against M; the point is kept if m is at least M, otherwise it is deleted. A point cloud C_f2 with the background completely removed is thereby obtained.
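A sketch of the near-view denoising of steps 9.1.8-9.1.11 using PCL's statistical and radius outlier removal filters; the neighbourhood size, standard-deviation multiplier, radius r and minimum count M below are assumed example values.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/radius_outlier_removal.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

Cloud::Ptr removeNearNoise(const Cloud::Ptr& cf1)
{
    Cloud::Ptr tmp(new Cloud), cf2(new Cloud);

    // Steps 9.1.8-9.1.10: Gaussian model of the mean neighbour distance
    pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
    sor.setInputCloud(cf1);
    sor.setMeanK(50);                 // k-neighbourhood size
    sor.setStddevMulThresh(1.0);      // Tmax = mu + 1.0 * sigma
    sor.filter(*tmp);

    // Step 9.1.11: require at least M neighbours within radius r
    pcl::RadiusOutlierRemoval<pcl::PointXYZRGB> ror;
    ror.setInputCloud(tmp);
    ror.setRadiusSearch(0.01);        // radius r
    ror.setMinNeighborsInRadius(5);   // minimum neighbour count M
    ror.filter(*cf2);
    return cf2;                       // background-free cloud C_f2
}
```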
The beneficial effect of this large-scale denoising method is that large areas of noise can be removed accurately, so that the processed data points are suitable for the next stage and errors caused by the background are prevented.
Step 9.2: feature detection and optimization. The thorn-shaped feature detection and optimization algorithm proposed by the invention detects and optimizes thorn-shaped fruit point clouds during feature-preserving denoising, and is an important precondition for the subsequent mixed denoising of the original object.
Step 9.2.1: compute the PCA normal of each point in the background-free fruit point cloud C_f1; the normals can then be used to compute other geometric attributes.
Step 9.2.2: compute the curvature from the obtained normal vectors, and define the variation of each point in C_f1 as σ(p_i), where λ denotes the eigenvalues of the covariance matrix and λ_1 is the maximum eigenvalue; this variation is taken as the weight of the initial feature detection index w_c-max.
Step 9.2.3: the invention combines the curvature of a point with a weighting of the normal-vector angles, improving the weight of step 9.2.2 into a second, global feature index for judging possible sharp feature points, where n_i denotes the neighbourhood of the point and nr and cv denote the weights of the normal and the curvature respectively. This prepares for the subsequent refinement of the feature points.
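Steps 9.2.1-9.2.2 can be sketched with PCL's normal estimation, which also returns a per-point surface-variation value (the ratio of the smallest eigenvalue to the eigenvalue sum of the local covariance) usable as the initial feature weight; the neighbourhood size is an assumed value.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

// PCA normals plus a per-point surface-variation value stored in Normal::curvature.
pcl::PointCloud<pcl::Normal>::Ptr estimateNormals(const Cloud::Ptr& cf1)
{
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>());

    pcl::NormalEstimation<pcl::PointXYZRGB, pcl::Normal> ne;
    ne.setInputCloud(cf1);
    ne.setSearchMethod(tree);
    ne.setKSearch(30);         // k-nearest-neighbour PCA neighbourhood (assumed)
    ne.compute(*normals);      // normals[i].curvature approximates the variation sigma(p_i)
    return normals;
}
```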
Step 9.2.4: based on a finite set of points {p_1, …, p_n} in the Euclidean plane, the invention defines the Voronoi cell V_P, consisting of all points of the space X whose distance to the site p_k is not greater than their distance to any other site p_j, expressed as
V_P = { x ∈ X | d(x, p_k) ≤ d(x, p_j) for all j ≠ k }
Step 9.2.5: compute the covariance matrix of the Voronoi cell V_P as an equation, where V_P denotes the finite Voronoi cell associated with the sample point p_i and is characterized by its second moment about its centroid.
Step 9.2.6: a joint-neighbourhood solution is adopted to combat the effect of noise, in which a single cell is replaced by a union of Voronoi cells. From the anisotropy of V(p_i), based on the ratio of the minimum to the maximum eigenvalue of the covariance matrix of step 9.2.5, the anisotropy factor σ ∈ [0, 1] of each point of the fruit model is computed. The k nearest neighbours q_j (j = 1, 2, …, k) of the point are selected, and V(p_i) ∪ V(q_j) (j = 1, 2, …, k) is grown step by step until either the anisotropy reaches the α threshold or the number of neighbours of p_i reaches a maximum value.
Step 9.2.7: determine the likelihood of the feature points using the ratio of the eigenvalues of the joint matrix, where c_f denotes the feature-point likelihood and λ_min and λ_max the minimum and maximum eigenvalues respectively:
c_f = λ_min / λ_max
Step 9.2.8: the invention amplifies this key feature-determination index for refined features, where R_rescale denotes the magnification factor; w_f raises the feature intensity reasonably, so that a better threshold can be set to identify the feature point cloud:
w_f = R_rescale × c_f
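An illustrative sketch of the eigenvalue-ratio feature index of steps 9.2.7-9.2.8, with an ordinary neighbourhood covariance standing in for the Voronoi-cell second moments described above:

```cpp
#include <Eigen/Dense>
#include <vector>

// Feature index w_f = R_rescale * (lambda_min / lambda_max) of one point,
// computed from the covariance of its neighbourhood offsets.
double featureIndex(const std::vector<Eigen::Vector3d>& neighbourhood,
                    double rescale /* R_rescale */)
{
    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    for (const auto& q : neighbourhood) centroid += q;
    centroid /= double(neighbourhood.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();        // second moment about the centroid
    for (const auto& q : neighbourhood)
        cov += (q - centroid) * (q - centroid).transpose();

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
    const double lmin = es.eigenvalues()(0);               // eigenvalues in ascending order
    const double lmax = es.eigenvalues()(2);
    const double cf = lmax > 0.0 ? lmin / lmax : 0.0;      // step 9.2.7: c_f
    return rescale * cf;                                   // step 9.2.8: w_f
}
```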
Step 9.2.9: the invention proposes a basic diffusion method that allows a point to move along its normal direction, and the operator H(v) is improved accordingly; x_i denotes the centre vertex, A the sum of the Voronoi cell areas, and α_j, β_j the two angles opposite the shared edge e_ij of the Voronoi cell.
Step 9.2.10: the invention proposes an anisotropic mean-curvature flow based on error correction, which preserves features while smoothing by means of a combined selector, where g(x) = 1/(1 + k·x²), k is chosen by the user, and g and (1 − g) act as control factors.
The beneficial effect of this extension and improvement is that, through multi-scale global geometric feature detection and the anisotropic curvature feature based on the joint VCM, the intrinsic characteristics of the thorn-shaped fruit can be extracted more accurately and misjudgements of feature points can be optimized, providing a foundation for the next step of mixed denoising of the original object.
Step 9.3: mixed denoising of the original thorn-shaped fruit. The invention uses curvature mean shift under an L0-norm constraint to remove mixed noise points on the original object while preserving the sharp, thorn-like features of the fruit.
Step 9.3.1: compute the normal vector by taking the eigenvector corresponding to the minimum of the three eigenvalues obtained by diagonalizing the covariance matrix, expressed as:
cov(U) · n_i = λ_min · n_i
Step 9.3.2: to construct the constraint model, the invention denotes the constraint at the i-th sample point as C_i, which measures the smoothness of the surface and the sparsity of the features in the local neighbourhood. The final constraint is expressed as:
Step 9.3.3: the L0 sparse constraint of the whole fruit model is constructed by accumulating the constraints over the points, where w_i defines the size of the neighbourhood. It is expressed as:
Step 9.3.4: with the L0 optimization thus defined, the final problem is solved with an alternating optimization method; by constraining ||n_i|| = 1 in the previous step, it reduces to a constrained quadratic problem, expressed as:
step 9.3.5 the present invention passes the following normal n in step 9.3.1iEstimates are used to recover features that were lost in the previous denoising process.
Step 9.3.6 the present invention is based on feature point set CFF(pi) gives the curvature displacement optimization of the fruit features under the l0morn constraint, while D (-) is used to determine the degree of deviation of each point. Expressed as:
step 9.3.7 the invention calculates the local mean curvature HiAs a parameter, the deviation becomes larger as the curvature increases. By polynomial fittingDefining the polynomial fitting curvature of each point asAnd gives the local mean curvature Hi. Expressed as:
step 9.3.8 defines a curvature-based mean shift. Vector p in the present inventioniHas a dimension of 7, p isiK nearest neighbors of (a) is N (p)i)={qi1,qi2,,…qikRepresents it. g (-) is a Gaussian nucleus, M (p)i) Is piMean displacement point of correlation, Mv(pi) Is a reaction of with piThe associated average shift vector. P is to beiIs defined as:
Step 9.3.9: the mean-shift process of the invention is shown below; after mean shift is applied to every point, a clean fruit point cloud C_f3 is obtained, as shown in fig. 8.
M(p_i) := M(p_i) + M_v(p_i)
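A sketch of the curvature-based mean-shift update of steps 9.3.8-9.3.9, simplified to 3D coordinates instead of the invention's 7-dimensional vectors; the bandwidth and convergence tolerance are assumed values.

```cpp
#include <Eigen/Dense>
#include <vector>
#include <cmath>

// Gaussian-kernel mean shift of a point towards the weighted mean of its neighbours,
// iterating M(p_i) := M(p_i) + M_v(p_i) until the shift vector becomes small.
Eigen::Vector3d meanShift(Eigen::Vector3d p,
                          const std::vector<Eigen::Vector3d>& neighbours, // N(p_i)
                          double bandwidth, int maxIter = 20)
{
    for (int it = 0; it < maxIter; ++it) {
        Eigen::Vector3d num = Eigen::Vector3d::Zero();
        double den = 0.0;
        for (const auto& q : neighbours) {
            const double w = std::exp(-(p - q).squaredNorm() /
                                      (2.0 * bandwidth * bandwidth));  // Gaussian kernel g(.)
            num += w * q;
            den += w;
        }
        const Eigen::Vector3d Mv = num / den - p;   // mean-shift vector M_v(p_i)
        p += Mv;                                    // update the mean-shift point
        if (Mv.norm() < 1e-6) break;                // converged
    }
    return p;
}
```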
The beneficial effect of this extension and improvement is that the problem of removing mixed noise points on the original object while preserving the sharp, thorn-like features of the fruit is solved: the noise is removed as cleanly as possible while the sharp features of the thorn-shaped fruit are well retained.
Claims (1)
1. A fruit three-dimensional point cloud real-time obtaining method based on Kinect V2 is characterized by comprising the following steps:
step 1: the Kinect V2 is driven and configured by utilizing libfreenect2;
step 2: displaying the image construction condition and the real-time position of a camera of the whole scanning process by a synchronous positioning and image construction technology;
and step 3: collecting an RGB image from the Kinect V2 every 1 second and simultaneously collecting a depth image corresponding to the RGB image, wherein the optimal material is obtained when the angle change amplitude of the adjacent 2 materials is not more than 10 degrees;
and 4, step 4: matching the color image and the depth image obtained by the Kinect V2 pixel by pixel to obtain a point cloud;
and 5: utilizing a feature extraction and feature matching module of OpenMVG to extract and match features of a plurality of imported two-dimensional images to generate a fruit crop sparse point cloud model, and utilizing a projective theorem to calculate scene information such as camera positions and the like to carry out sparse reconstruction according to a feature matching result and based on an SfM algorithm;
step 6: constructing a dense point cloud model of the fruit crops, continuously generating and screening a patch according to the obtained scene information and the original photo by adopting a patch-based PMVS algorithm, and further diffusing the existing data points so as to realize dense point cloud reconstruction;
and 7: splicing a point cloud model of the fruit crops, splicing the point clouds by using C++ point cloud-related library functions, and performing rotation and translation operations on the point cloud patches one by one using the camera intrinsic and extrinsic parameters obtained in the previous steps, so that they are merged into one large point cloud and the splicing is completed;
and 8: carrying out fine fusion of the point clouds by using the ICP (Iterative Closest Point) algorithm to obtain a dense, hole-free point cloud;
and step 9: removing outlier points of the fruit point cloud by using filter-based methods such as pass-through filtering, then performing feature-preserving fine denoising on the fruit point cloud by using the Voronoi joint covariance, and finally obtaining a complete and noise-free fruit point cloud by curvature-based adaptive mean shift.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910970724.1A CN110796694A (en) | 2019-10-13 | 2019-10-13 | Fruit three-dimensional point cloud real-time acquisition method based on KinectV2 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910970724.1A CN110796694A (en) | 2019-10-13 | 2019-10-13 | Fruit three-dimensional point cloud real-time acquisition method based on KinectV2 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110796694A true CN110796694A (en) | 2020-02-14 |
Family
ID=69440173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910970724.1A Pending CN110796694A (en) | 2019-10-13 | 2019-10-13 | Fruit three-dimensional point cloud real-time acquisition method based on KinectV2 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110796694A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111429490A (en) * | 2020-02-18 | 2020-07-17 | 北京林业大学 | Agricultural and forestry crop three-dimensional point cloud registration method based on calibration ball |
CN112150606A (en) * | 2020-08-24 | 2020-12-29 | 上海大学 | Thread surface three-dimensional reconstruction method based on point cloud data |
CN112834970A (en) * | 2020-12-31 | 2021-05-25 | 苏州朗润医疗系统有限公司 | Method for improving TOF3D resolution by k-space enhancement for magnetic resonance imaging |
CN113052880A (en) * | 2021-03-19 | 2021-06-29 | 南京天巡遥感技术研究院有限公司 | SFM sparse reconstruction method, system and application |
CN113284197A (en) * | 2021-07-22 | 2021-08-20 | 浙江华睿科技股份有限公司 | TOF camera external reference calibration method and device for AGV, and electronic equipment |
CN114429497A (en) * | 2020-10-14 | 2022-05-03 | 西北农林科技大学 | Living body Qinchuan cattle body ruler measuring method based on 3D camera |
CN115035327A (en) * | 2022-08-15 | 2022-09-09 | 北京市农林科学院信息技术研究中心 | Plant production line phenotype acquisition platform and plant phenotype fusion analysis method |
WO2023078052A1 (en) * | 2021-11-02 | 2023-05-11 | 中兴通讯股份有限公司 | Three-dimensional object detection method and apparatus, and computer-readable storage medium |
CN116416223A (en) * | 2023-03-20 | 2023-07-11 | 北京国信会视科技有限公司 | Complex equipment debugging method, system, electronic equipment and storage medium |
CN116626706A (en) * | 2023-05-12 | 2023-08-22 | 北京交通大学 | Rail transit tunnel intrusion detection method and system |
CN116817754A (en) * | 2023-08-28 | 2023-09-29 | 之江实验室 | Soybean plant phenotype extraction method and system based on sparse reconstruction |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170193692A1 (en) * | 2015-12-30 | 2017-07-06 | Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences | Three-dimensional point cloud model reconstruction method, computer readable storage medium and device |
WO2018048353A1 (en) * | 2016-09-09 | 2018-03-15 | Nanyang Technological University | Simultaneous localization and mapping methods and apparatus |
CN108198230A (en) * | 2018-02-05 | 2018-06-22 | 西北农林科技大学 | A kind of crop and fruit three-dimensional point cloud extraction system based on image at random |
CN108734728A (en) * | 2018-04-25 | 2018-11-02 | 西北工业大学 | A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image |
CN109544681A (en) * | 2018-11-26 | 2019-03-29 | 西北农林科技大学 | A kind of fruit three-dimensional digital method based on cloud |
CN109741382A (en) * | 2018-12-21 | 2019-05-10 | 西安科技大学 | A kind of real-time three-dimensional method for reconstructing and system based on Kinect V2 |
WO2019133922A1 (en) * | 2017-12-29 | 2019-07-04 | Flir Systems, Inc. | Point cloud denoising systems and methods |
-
2019
- 2019-10-13 CN CN201910970724.1A patent/CN110796694A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170193692A1 (en) * | 2015-12-30 | 2017-07-06 | Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences | Three-dimensional point cloud model reconstruction method, computer readable storage medium and device |
WO2018048353A1 (en) * | 2016-09-09 | 2018-03-15 | Nanyang Technological University | Simultaneous localization and mapping methods and apparatus |
WO2019133922A1 (en) * | 2017-12-29 | 2019-07-04 | Flir Systems, Inc. | Point cloud denoising systems and methods |
CN108198230A (en) * | 2018-02-05 | 2018-06-22 | 西北农林科技大学 | A kind of crop and fruit three-dimensional point cloud extraction system based on image at random |
CN108734728A (en) * | 2018-04-25 | 2018-11-02 | 西北工业大学 | A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image |
CN109544681A (en) * | 2018-11-26 | 2019-03-29 | 西北农林科技大学 | A kind of fruit three-dimensional digital method based on cloud |
CN109741382A (en) * | 2018-12-21 | 2019-05-10 | 西安科技大学 | A kind of real-time three-dimensional method for reconstructing and system based on Kinect V2 |
Non-Patent Citations (4)
Title |
---|
HAOPENG ZHANG ET AL.: "3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor", 《SENSORS(BASEL)》 * |
LOUIS CUEL ET AL.: "Robust Geometry Estimation using the Generalized Voronoi Covariance Measure", 《ARXIV》 * |
ZHANG Hongxin et al.: "3D reconstruction method for castings based on monocular image sequences", China Mechanical Engineering *
LI Guojun et al.: "Reconstructing implicit surfaces using the Voronoi covariance matrix", Journal of Image and Graphics *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111429490A (en) * | 2020-02-18 | 2020-07-17 | 北京林业大学 | Agricultural and forestry crop three-dimensional point cloud registration method based on calibration ball |
CN112150606A (en) * | 2020-08-24 | 2020-12-29 | 上海大学 | Thread surface three-dimensional reconstruction method based on point cloud data |
CN112150606B (en) * | 2020-08-24 | 2022-11-08 | 上海大学 | Thread surface three-dimensional reconstruction method based on point cloud data |
CN114429497A (en) * | 2020-10-14 | 2022-05-03 | 西北农林科技大学 | Living body Qinchuan cattle body ruler measuring method based on 3D camera |
CN112834970B (en) * | 2020-12-31 | 2022-12-20 | 苏州朗润医疗系统有限公司 | Method for improving TOF3D resolution by k-space enhancement for magnetic resonance imaging |
CN112834970A (en) * | 2020-12-31 | 2021-05-25 | 苏州朗润医疗系统有限公司 | Method for improving TOF3D resolution by k-space enhancement for magnetic resonance imaging |
CN113052880A (en) * | 2021-03-19 | 2021-06-29 | 南京天巡遥感技术研究院有限公司 | SFM sparse reconstruction method, system and application |
CN113052880B (en) * | 2021-03-19 | 2024-03-08 | 南京天巡遥感技术研究院有限公司 | SFM sparse reconstruction method, system and application |
CN113284197A (en) * | 2021-07-22 | 2021-08-20 | 浙江华睿科技股份有限公司 | TOF camera external reference calibration method and device for AGV, and electronic equipment |
WO2023078052A1 (en) * | 2021-11-02 | 2023-05-11 | 中兴通讯股份有限公司 | Three-dimensional object detection method and apparatus, and computer-readable storage medium |
CN115035327A (en) * | 2022-08-15 | 2022-09-09 | 北京市农林科学院信息技术研究中心 | Plant production line phenotype acquisition platform and plant phenotype fusion analysis method |
CN116416223A (en) * | 2023-03-20 | 2023-07-11 | 北京国信会视科技有限公司 | Complex equipment debugging method, system, electronic equipment and storage medium |
CN116416223B (en) * | 2023-03-20 | 2024-01-09 | 北京国信会视科技有限公司 | Complex equipment debugging method, system, electronic equipment and storage medium |
CN116626706A (en) * | 2023-05-12 | 2023-08-22 | 北京交通大学 | Rail transit tunnel intrusion detection method and system |
CN116626706B (en) * | 2023-05-12 | 2024-01-16 | 北京交通大学 | Rail transit tunnel intrusion detection method and system |
CN116817754A (en) * | 2023-08-28 | 2023-09-29 | 之江实验室 | Soybean plant phenotype extraction method and system based on sparse reconstruction |
CN116817754B (en) * | 2023-08-28 | 2024-01-02 | 之江实验室 | Soybean plant phenotype extraction method and system based on sparse reconstruction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110796694A (en) | Fruit three-dimensional point cloud real-time acquisition method based on KinectV2 | |
Müller-Linow et al. | The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool | |
CN109146948B (en) | Crop growth phenotype parameter quantification and yield correlation analysis method based on vision | |
Gibbs et al. | Plant phenotyping: an active vision cell for three-dimensional plant shoot reconstruction | |
Li et al. | A leaf segmentation and phenotypic feature extraction framework for multiview stereo plant point clouds | |
CN111899172A (en) | Vehicle target detection method oriented to remote sensing application scene | |
Wu et al. | Passive measurement method of tree diameter at breast height using a smartphone | |
CN107192350A (en) | A kind of three-dimensional laser scanner intrinsic parameter scaling method and device | |
Santos et al. | 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera | |
CN110009745B (en) | Method for extracting plane from point cloud according to plane element and model drive | |
CN110070567A (en) | A kind of ground laser point cloud method for registering | |
CN115375842A (en) | Plant three-dimensional reconstruction method, terminal and storage medium | |
CN108195736A (en) | A kind of method of three-dimensional laser point cloud extraction Vegetation canopy clearance rate | |
CN105488541A (en) | Natural feature point identification method based on machine learning in augmented reality system | |
CN116862955A (en) | Three-dimensional registration method, system and equipment for plant images | |
Yao et al. | Relative camera refinement for accurate dense reconstruction | |
CN116883480A (en) | Corn plant height detection method based on binocular image and ground-based radar fusion point cloud | |
Jiang et al. | Learned local features for structure from motion of uav images: A comparative evaluation | |
Xinmei et al. | Passive measurement method of tree height and crown diameter using a smartphone | |
Remondino et al. | Evaluating hand-crafted and learning-based features for photogrammetric applications | |
Yao et al. | Registrating oblique SAR images based on complementary integrated filtering and multilevel matching | |
Srivastava et al. | Drought stress classification using 3D plant models | |
CN110969650B (en) | Intensity image and texture sequence registration method based on central projection | |
CN112509142A (en) | Rapid bean plant three-dimensional reconstruction method based on phenotype-oriented accurate identification | |
CN110009726B (en) | Method for extracting plane from point cloud according to structural relationship between plane elements |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200214 |