
CN114612412B - Processing method of three-dimensional point cloud data, application of processing method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114612412B
Authority
CN
China
Prior art keywords
point cloud
cloud data
point
matching
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210215815.6A
Other languages
Chinese (zh)
Other versions
CN114612412A (en)
Inventor
Zhang Liang (张亮)
Li Xinlin (李新霖)
Zhu Guangming (朱光明)
Ye Linjie (叶林杰)
Zhu Luming (朱炉明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Adaptive Technology Co ltd
Original Assignee
Hangzhou Adaptive Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Adaptive Technology Co., Ltd.
Priority to CN202210215815.6A
Publication of CN114612412A
Application granted
Publication of CN114612412B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30152 Solder

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a processing method of three-dimensional point cloud data, an application thereof, electronic equipment, and a storage medium. The processing method comprises the following steps: S1, applying a pass-through filter to the point cloud data of the workpiece to be measured and of the standard workpiece; S2, taking the distance value from the line-scan camera to each cloud point as a pixel value, obtaining a first depth image corresponding to the point cloud data of the workpiece to be measured and a second depth image corresponding to the point cloud data of the standard workpiece; S3, calculating the offset and rotation angle between the first depth image and the second depth image, and registering the point cloud data of the workpiece to be measured according to the offset and the rotation angle. The method reduces the difficulty of processing three-dimensional point cloud data and improves the speed and efficiency of data processing.

Description

Processing method of three-dimensional point cloud data, application of processing method, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of automatic detection, and particularly relates to a processing method of three-dimensional point cloud data, application of the processing method, electronic equipment and a storage medium.
Background
The weld seam parameters of a product mainly comprise the minimum width, maximum width, average width, minimum height, maximum height, average height, and so on; whether the current workpiece is qualified can be judged by evaluating the difference in width and height between it and a standard workpiece. The arc parameters of a product mainly comprise the upper arc, the lower arc, and so on; whether the current workpiece is qualified can be judged by evaluating the difference between its arc and the standard arc.
At present, weld seam detection and arc detection are mainly divided into manual detection and automatic detection. In manual detection, a worker measures with a standard gauge, records the results, and finally compares the data statistically; the detection efficiency is low, and the worker's precision continuously decreases as the work intensity rises and the working time lengthens. Automatic detection mainly obtains the surface data of a product through a parameter-measurement technique, and then performs statistics and calculation on that surface data with a corresponding algorithm to obtain the height, width, and arc parameters of the region of interest.
Existing product parameter-measurement techniques fall into two modes: two-dimensional visual inspection and three-dimensional visual inspection. When a traditional two-dimensional visual inspection method is used to inspect a three-dimensional object, the complex structure and numerous parameters of the object impose large limitations, and information such as the height and width of the three-dimensional object is difficult to measure. Three-dimensional visual inspection uses advanced vision-processing techniques and image-capture devices, and is fast and accurate in measuring the height, width, and arc of a three-dimensional object; however, compared with a two-dimensional image algorithm, a three-dimensional image algorithm involves a larger data volume, a longer processing time, and lower efficiency, and it is difficult to delimit a region of interest in three-dimensional space.
Disclosure of Invention
The embodiment of the invention aims to provide a processing method of three-dimensional point cloud data and an application thereof. Three-dimensional point cloud data of a workpiece to be measured is obtained through a line-scan camera, a series of processing steps is performed on the point cloud data, and the simplified three-dimensional point cloud data is mapped to two dimensions; this makes it convenient to perform statistical calculation on the region of interest in the three-dimensional point cloud data, reduces the processing difficulty of the three-dimensional point cloud data, and improves the speed and efficiency of data processing.
The embodiment of the invention also aims to provide the electronic equipment and the storage medium.
In order to solve the technical problems, the technical scheme adopted by the invention is that the processing method of the three-dimensional point cloud data comprises the following steps:
S1, acquiring the point cloud data of a workpiece to be measured and the point cloud data of a standard workpiece, and applying a pass-through filter so that the Z-axis coordinates of the retained point cloud data lie in the range 0-1;
S2, taking the distance value from the line scanning camera to each point cloud as a pixel value, and obtaining a first depth image corresponding to the point cloud data of the workpiece to be detected and a second depth image corresponding to the point cloud data of the standard workpiece;
And S3, calculating the offset and the rotation angle of the first depth image compared with the second depth image, and registering the point cloud data of the workpiece to be detected according to the offset and the rotation angle.
Further, the step S3 includes the following steps:
s31, respectively acquiring a first image pyramid of a first depth image and a second image pyramid of a second depth image by using downsampling;
S32, a rectangular frame is used to circle a region with obvious texture features in the bottommost layer of the first image pyramid as the region of interest ROI_1; the coordinates, width, and height of ROI_1 are converted layer by layer to obtain the region of interest ROI_i in each layer of the first image pyramid, together with the pixel values and feature vectors of all pixel points in ROI_i;
ROI_i = (P_i(x_i, y_i), W_i, H_i), where i is the layer-number variable of the first and second image pyramids, i = 1, …, I, I is the total number of layers of the two pyramids (layer I is the highest layer), P_i(x_i, y_i) is the upper-left corner point of the i-th-layer region of interest of the first image pyramid, W_i is its width, and H_i is its height;
S33, taking each pixel point in the highest layer of the second image pyramid as a center point and W_I, H_I as width and height, a matching region is constructed; new matching regions of each matching region under different rotation angles are obtained; and the matching regions together with their rotated versions are taken as elements to jointly form a data set;
according to the pixel values and feature vectors of the pixel points in the region of interest ROI_I of the highest layer of the first image pyramid, the optimal matching region in the data set corresponding to ROI_I is obtained; the center point of the optimal matching region is taken as the optimal matching point, and expansion and coordinate-value conversion are performed on the optimal matching point to obtain the optimal matching point set of layer I-1 of the second image pyramid;
S34, traversing each optimal matching point in layer I-1 of the second image pyramid, a matching region is constructed with each optimal matching point as center and W_{I-1}, H_{I-1} as width and height, and new matching regions of each matching region under different rotation angles are obtained to jointly form a matching-region data set;
according to the pixel values and feature vectors of the pixel points in the region of interest ROI_{I-1} of layer I-1 of the first image pyramid, the optimal matching region and optimal matching point corresponding to ROI_{I-1} in layer I-1 of the second image pyramid are obtained, and the optimal matching point is expanded and its coordinate values converted to obtain the optimal matching point set of layer I-2 of the second image pyramid;
repeating the step S34 to obtain the coordinate value of the optimal matching point in the bottommost layer of the second image pyramid;
S35, calculating the offset and rotation angle of the first depth image relative to the second depth image according to the coordinate value of the center point of the region of interest ROI_1 and the coordinate value of the optimal matching point in the bottommost layer of the second image pyramid;
S36, acquiring a transformation matrix between the point cloud data of the workpiece to be measured and the point cloud data of the standard workpiece according to the offset and the rotation angle, and registering the point cloud data of the workpiece to be measured based on the transformation matrix.
Further, the process of obtaining the optimal matching area is as follows:
Traversing each optimal matching point in the i-th layer of the second image pyramid, a matching region is constructed with each optimal matching point as center and W_i, H_i as width and height, and new matching regions of each matching region under different rotation angles are obtained to jointly form a matching-region data set;
the similarity S between each matching region in the data set and the region of interest ROI_i is then calculated: if the similarity of every matching region to ROI_i is less than or equal to 0.9, the match fails; if the similarity of a matching region to ROI_i is greater than 0.9, that matching region is the optimal matching region and its center point is the optimal matching point.
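As an illustration (not part of the patent text), the selection rule above can be sketched in Python; `select_best_match` and its inputs are hypothetical names, with the candidate similarities assumed to be precomputed:

```python
def select_best_match(similarities, threshold=0.9):
    """Pick the optimal matching region from a candidate set.

    similarities maps a candidate's center (m, n, theta) to its similarity S
    with the region of interest ROI_i. If no candidate's similarity exceeds
    the threshold (0.9 in the text), matching fails and None is returned;
    otherwise the center of the most similar region is the optimal matching
    point.
    """
    best = max(similarities, key=similarities.get)
    return best if similarities[best] > threshold else None
```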
Further, the process of expanding the optimal matching point and converting its coordinate values is as follows:
the optimal matching point in the i-th layer of the second image pyramid is expanded into a set of candidate matching points; coordinate-value conversion is performed on each matching point in this set to obtain the optimal matching point set U in the next (higher-resolution) layer of the depth image, where the elements of U are P_{i-1}(m_{i-1}, n_{i-1}, θ);
the coordinate-value conversion formula is:
P_{i-1}(m_{i-1}, n_{i-1}, θ) = k · P_i(m_i, n_i, θ)
where P_i(m_i, n_i, θ) is the optimal matching point of the i-th layer of the image pyramid, P_{i-1}(m_{i-1}, n_{i-1}, θ) is the optimal matching point of the (i-1)-th layer, k is the downsampling coefficient used when constructing the second image pyramid (k = 2), (m_i, n_i) and (m_{i-1}, n_{i-1}) are the coordinate values of the respective points, and θ is the rotation angle of the matching region.
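A minimal sketch of the conversion formula, together with a hypothetical `expand_point` helper for the expansion step (the patent does not specify the neighbourhood shape, so a square neighbourhood is assumed here purely for illustration):

```python
def convert_to_lower_layer(m, n, theta, k=2):
    """Map an optimal matching point P_i(m_i, n_i, theta) of pyramid layer i
    to layer i-1 (which has k times the resolution): the pixel coordinates
    are multiplied by the downsampling coefficient k, while the rotation
    angle theta carries over unchanged."""
    return k * m, k * n, theta

def expand_point(m, n, radius=1):
    """Expand a matching point into the set of candidate points in its
    neighbourhood (here a (2*radius+1)^2 square), as done before the
    coordinate conversion to avoid propagating a slightly-off match."""
    return {(m + dm, n + dn) for dm in range(-radius, radius + 1)
                             for dn in range(-radius, radius + 1)}
```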
Further, the similarity S is calculated as follows (reconstructed here as the normalized inner product of the gradient vectors, averaged over the region, consistent with the symbol definitions below):
Sim(x_i, y_i) = (DerX_R · DerX_M + DerY_R · DerY_M) / (sqrt(DerX_R^2 + DerY_R^2) · sqrt(DerX_M^2 + DerY_M^2))
S = (1 / (M_i · N_i)) · Σ Sim(x_i, y_i)
where M_i, N_i are the total numbers of rows and columns of pixel points in the matching region in the i-th layer of the second image pyramid; DerX_R, DerY_R are the X- and Y-direction derivative components of the pixel point Pixel_R(x_i, y_i); DerX_M, DerY_M are the X- and Y-direction derivative components of the pixel point Pixel_M(m_i, n_i); Pixel_M(m_i, n_i) is the pixel point at coordinate (m_i, n_i) in the matching region; Pixel_R(x_i, y_i) is the pixel point in ROI_i corresponding to coordinate (m_i, n_i); and Sim(x_i, y_i) is the similarity between Pixel_M(m_i, n_i) and Pixel_R(x_i, y_i).
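Assuming the per-pixel similarity is the normalized inner product of the gradient vectors (DerX, DerY) and S is its mean over the region, the calculation can be sketched as follows; the function name and array layout are illustrative only:

```python
import numpy as np

def gradient_similarity(der_r, der_m, eps=1e-12):
    """Average normalized dot product between gradient vectors of a
    reference ROI and a candidate matching region.

    der_r, der_m: arrays of shape (H, W, 2) holding the X- and Y-direction
    derivatives (DerX, DerY) of each pixel in the ROI and in the matching
    region respectively.
    """
    dot = (der_r * der_m).sum(axis=-1)                  # DerX_R*DerX_M + DerY_R*DerY_M
    norm = np.linalg.norm(der_r, axis=-1) * np.linalg.norm(der_m, axis=-1)
    sim = dot / np.maximum(norm, eps)                   # per-pixel Sim(x_i, y_i)
    return float(sim.mean())                            # S: mean over M_i x N_i pixels
```

Identical gradient fields score 1.0, while gradient fields rotated 90 degrees everywhere score 0, which is the behaviour one would expect of a shape-based matching score.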
The processing method of the three-dimensional point cloud data is applied to measuring the weld seam in a workpiece to be measured, and specifically comprises the following steps:
S41, generating a third depth image from the registered point cloud data of the workpiece to be measured, where the region of interest in the third depth image is the weld seam, and dividing the third depth image into a left region and a right region with the weld seam as the center line;
S42, mapping the left region and the right region back to point cloud data to obtain the point cloud data on the left side and on the right side of the weld seam;
s43, performing plane fitting on the point cloud data by using a RANSAC algorithm to obtain a left fitting plane corresponding to the point cloud data on the left side of the welding line and a right fitting plane corresponding to the point cloud data on the right side;
S44, obtaining the point cloud data of the weld-seam region through a coordinate-conversion algorithm, calculating the distance d_li between each cloud point in the weld-seam region and the left fitting plane and the distance d_ri between it and the right fitting plane, and filtering the point cloud data according to d_li and d_ri to obtain the final point cloud data of the weld-seam region;
S45, traversing each point cloud in the weld point cloud data, and counting the distribution of each point cloud on the X axis, the Y axis and the Z axis to obtain the width, the length and the height information of the weld.
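The plane-fitting and statistics steps S43-S45 might be sketched as follows; this is a minimal RANSAC loop and an axis-aligned extent computation, not the patent's implementation, and all names are illustrative:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, tol=0.01, rng=None):
    """Fit a plane n.p + d = 0 to an (N, 3) point array with a minimal
    RANSAC loop: sample 3 points, build the plane, count inliers."""
    rng = np.random.default_rng(rng)
    best = (None, None, -1)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:        # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = -n.dot(p0)
        count = int((np.abs(points @ n + d) < tol).sum())
        if count > best[2]:
            best = (n, d, count)
    return best[0], best[1]

def point_plane_distance(points, n, d):
    """Distances d_li / d_ri of seam points to a fitted plane."""
    return np.abs(points @ n + d)

def weld_extents(points):
    """Width / length / height of the seam cloud from its X/Y/Z extents."""
    span = points.max(axis=0) - points.min(axis=0)
    return {"width_x": span[0], "length_y": span[1], "height_z": span[2]}
```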
The processing method of the three-dimensional point cloud data is also applied to measuring the arc in a workpiece to be measured, and specifically comprises the following steps:
S51, filtering out the Z-axis information of the point cloud data of the workpiece to be measured and projecting the points onto the XOY plane to obtain planar point cloud data;
S52, generating a fourth depth image corresponding to the plane point cloud data, carrying out graying and binarization processing on the fourth depth image to obtain a binary image of the fourth depth image, extracting edge information in the binary image by using a Canny operator, and obtaining a circular arc pixel point set;
s53, obtaining coordinate values of the arc pixel points, and performing curve fitting by using a least square method to obtain a curve fitting equation;
S54, calculating the curvature of each circular arc pixel point to obtain radian information of the workpiece to be measured.
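Steps S53-S54 can be illustrated with a least-squares quadratic fit and the standard curvature formula κ = |y''| / (1 + y'^2)^(3/2); this sketch assumes the arc pixel coordinates have already been extracted (e.g. by a Canny operator), and all names are hypothetical:

```python
import numpy as np

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c to arc-pixel coordinates."""
    A = np.column_stack([xs**2, xs, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return a, b, c

def curvature(a, b, x):
    """Curvature of y = a*x^2 + b*x + c at x: |y''| / (1 + y'^2)^(3/2)."""
    y1 = 2 * a * x + b          # first derivative
    return abs(2 * a) / (1 + y1**2) ** 1.5
```

At the vertex of the parabola the first derivative vanishes, so the curvature there reduces to |2a|.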
An electronic device comprises a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the method when executing the program stored in the memory.
A computer readable storage medium having stored therein a computer program which when executed by a processor performs the above-described method steps.
The beneficial effects of the invention are as follows: the shortcomings of using a three-dimensional image algorithm alone to measure the shape of an object in space are overcome, and the efficiency of the three-dimensional image algorithm is improved by using a two-dimensional image algorithm to assist it; line-structured light improves the stability and precision of image acquisition; the positioning technique of the two-dimensional image algorithm assists the registration of the three-dimensional point cloud data and greatly improves the registration speed; and the mapping relationship between the two-dimensional image and the three-dimensional point cloud data realizes the selection of the region of interest, greatly improving the efficiency of weld-seam measurement and arc detection.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a flow chart of the registration of the point cloud data of the workpiece to be measured.
FIG. 3 is a flow chart of a weld seam measurement in accordance with an embodiment of the present invention.
FIG. 4 is a flow chart of the arc measurement according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the processing method of the three-dimensional point cloud data specifically includes the following steps:
Step 1: establish a coordinate system taking the platform on which the bottom surface of the workpiece rests as the XOY plane and the direction perpendicular to the platform as the Z axis, and scan the workpiece to be measured and the standard workpiece with a line-scan camera to obtain the point cloud data of each.
Step 2: because the structured light of the line-scan camera is linear, the obtained original point cloud data is distributed over a limited range in the X and Y directions but over a wide range along the Z axis, and much redundant and noisy data exists in the Z-axis direction, which reduces the accuracy and efficiency of weld-seam measurement and arc detection performed with these data. Therefore, the point cloud data of the workpiece to be measured and the point cloud data of the standard workpiece are each filtered and simplified with a pass-through filter.
During pass-through filtering, the Z-axis filtering range of the point cloud data is set to 0-1, which retains most features of the point cloud data while simplifying the original point cloud data to a great extent.
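A pass-through filter of this kind reduces to a simple coordinate mask; the sketch below is illustrative (in practice PCL's `PassThrough` filter would typically be used):

```python
import numpy as np

def passthrough_filter(points, z_min=0.0, z_max=1.0):
    """Pass-through filter: keep only points of an (N, 3) cloud whose Z
    coordinate lies in [z_min, z_max]; everything outside the range is
    treated as redundant or noisy and discarded."""
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[mask]
```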
Step 3: to accelerate the subsequent region-of-interest selection and registration, map the point cloud data of the workpiece to be measured into the first depth image and the point cloud data of the standard workpiece into the second depth image. The specific process is as follows:
Set the parameters of the image collector and generate a depth image at a specific angle using a library function of the PCL point cloud library: that is, acquire the distance from the line-scan camera to each cloud point (at the camera's default shooting angle, the square root of the sum of the squares of the point's three coordinate components) and take that distance value as the pixel value of the corresponding pixel point in the depth image, obtaining the first depth image corresponding to the point cloud data of the workpiece to be measured and the second depth image corresponding to the point cloud data of the standard workpiece.
The parameters include:
1. The angle, i.e. the pose (sensor_pose), of the image collector. This generally defaults to the shooting angle of the line-scan camera; the parameter is mainly set to acquire a depth image of the workpiece from a top-down view, and if the measurement requirement changes, a depth image of the front or side view of the workpiece can be acquired by rotating the three-dimensional point cloud model on the preview page.
2. The angular resolution, i.e. the angle subtended by one pixel point in the depth image.
3. The horizontal and vertical maximum sampling angles of the depth sensor. Because of the characteristics of the line-scan camera, no three-dimensional point cloud data exists behind the sensor, so both are set to 180 degrees.
4. The default coordinate system, set to CAMERA_FRAME.
5. The level of influence of neighboring points on a query point's distance value when acquiring the depth of the depth image, set to 5: the average value of all points within 5 cm in the Z-axis buffer is used as the pixel value of the pixel point, to ensure that no data is lost.
6. The minimum acquisition distance, generally defaulting to 0: anything closer than this value in the Z-axis direction lies in the blind zone of the image collector and is not processed.
In general, the data scanned by the line-scan camera can be converted directly into a depth image and processed; if the data acquired by the line-scan camera are not ideal, or depth images of the acquired data from other angles are desired, they can be acquired by adjusting the above parameters.
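The mapping from point cloud to depth image can be illustrated with a simplified orthographic (top-down) sketch in which each grid cell stores the camera-to-point distance; this stands in for the PCL range-image function, and all names and the gridding scheme are assumptions:

```python
import numpy as np

def cloud_to_depth_image(points, camera, resolution=0.01):
    """Quantize X/Y onto a pixel grid; each pixel stores the Euclidean
    distance from the camera position to the (nearest) point in that cell.
    points: (N, 3) cloud; camera: (3,) camera position; resolution: cell
    size in the same units as the cloud."""
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) / resolution).astype(int)
    h, w = xy[:, 1].max() + 1, xy[:, 0].max() + 1
    img = np.full((h, w), np.nan)                     # NaN marks empty cells
    dist = np.linalg.norm(points - camera, axis=1)    # camera-to-point distance
    for (col, row), d in zip(xy, dist):
        if np.isnan(img[row, col]) or d < img[row, col]:
            img[row, col] = d                         # keep the nearest point
    return img
```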
Step 4: calculate the offset and rotation angle of the first depth image relative to the second depth image using a two-dimensional image algorithm. The specific process of registering the point cloud data of the workpiece to be measured is shown in Fig. 2:
Step 41: generate the first image pyramid of the first depth image and the second image pyramid of the second depth image by downsampling. The two image pyramids have the same number of layers, and the bottommost layer of each is the original depth image (the first and second depth images respectively). The higher the pyramid layer, the fewer the pixel points in the depth image: each layer is 1/2 the size of the layer below it. Reducing the number of pyramid layers slows the algorithm down but improves calculation precision; increasing the number of layers can speed the algorithm up appropriately, but more feature information in the depth images is lost and the processing result is worse. Considering processing speed and effect together, the number of pyramid layers is set between 2 and 6.
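The pyramid construction in step 41 can be sketched with 2x2 block-average downsampling (downsampling coefficient k = 2); this is an illustrative stand-in, not the patent's exact downsampling kernel:

```python
import numpy as np

def build_pyramid(img, levels):
    """Bottom level is the original depth image; each higher level halves
    the resolution by 2x2 block averaging (downsampling factor k = 2)."""
    pyr = [img]
    for _ in range(levels - 1):
        cur = pyr[-1]
        h, w = (cur.shape[0] // 2) * 2, (cur.shape[1] // 2) * 2
        cur = cur[:h, :w]                              # crop to even size
        pyr.append((cur[0::2, 0::2] + cur[1::2, 0::2]
                    + cur[0::2, 1::2] + cur[1::2, 1::2]) / 4.0)
    return pyr
```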
Step 42: circle a region with obvious texture features in the bottommost layer of the first image pyramid as the region of interest ROI_1 using a rectangular frame, and convert ROI_1 layer by layer using formula (1) to obtain the region of interest ROI_i of each layer of the first image pyramid:
P_{i+1}(x_{i+1}, y_{i+1}) = P_i(x_i, y_i) / k, W_{i+1} = W_i / k, H_{i+1} = H_i / k (1)
The region of interest in the i-th layer of the first image pyramid is ROI_i = (P_i(x_i, y_i), W_i, H_i), where i is the layer-number variable of the image pyramid, I is the total number of layers, i = 1, 2, …, I; P_i(x_i, y_i) and P_{i+1}(x_{i+1}, y_{i+1}) are the upper-left corner points of the regions of interest of layers i and i+1 of the first image pyramid; W_i and W_{i+1} are the widths, and H_i and H_{i+1} the heights, of those regions of interest; and k is the downsampling coefficient used when constructing the first image pyramid, k = 2.
Step 43: obtain the pixel value Px_i of each pixel point in the region of interest ROI_i of each layer of the first image pyramid, and process the depth image with the Sobel operator to obtain the feature vector Ve_i of each pixel point.
Step 44: construct matching regions in the highest layer of the second image pyramid and, according to the pixel values and feature vectors of the pixel points in the region of interest ROI_I, obtain the optimal matching region with the highest similarity to ROI_I; the center point of the optimal matching region is taken as the optimal matching point.
Step 45: to avoid calculation errors, expand the obtained optimal matching point into a set of candidate matching points, and perform coordinate-value conversion on the matching points obtained in the highest layer to obtain the optimal matching point set U of the next layer of the second image pyramid, the elements of U being P_{I-1}(m_{I-1}, n_{I-1}, θ);
similarly, in layer I-1 of the second image pyramid, a matching region is constructed with each pixel point in the optimal matching point set U as center and W_{I-1}, H_{I-1} as width and height, and all the matching regions together with the matching regions under different rotation angles form a new matching-region set;
the similarity calculation is repeated to obtain a new optimal matching point, which is in turn expanded and its coordinate values converted, until the bottommost layer of the second image pyramid is reached and the coordinate value P_res(m_res, n_res, θ_res) of the optimal matching point in the bottommost layer is obtained.
The coordinate value conversion formula is as follows:
P_{i-1}(m_{i-1}, n_{i-1}, θ) = k · P_i(m_i, n_i, θ)
where P_i(m_i, n_i, θ) is the optimal matching point of the i-th layer of the image pyramid, P_{i-1}(m_{i-1}, n_{i-1}, θ) is the optimal matching point of the (i-1)-th layer, and k is the downsampling coefficient used when constructing the second image pyramid, k = 2.
Step 46: calculate the offset and the rotation angle r of the first depth image relative to the second depth image according to the coordinate value P_roi(x_roi, y_roi, θ_roi) of the center point of the region of interest ROI_1 and the coordinate value P_res(m_res, n_res, θ_res) of the optimal matching point in the bottommost layer of the second image pyramid:
offset = (m_res - x_roi, n_res - y_roi)
r = θ_res - θ_roi
Since the rotation of the workpiece to be measured occurs only in the plane in which the X axis and Y axis lie, the offset of the first depth image relative to the second depth image can be rewritten as D(a, b, c) = (offset_x, offset_y, 0), where offset_x is the offset of the workpiece in the X direction and offset_y its offset in the Y direction, with offset_x = m_res - x_roi and offset_y = n_res - y_roi.
The transformation matrix constructed from the offset D(a, b, c) and the rotation angle of the workpiece to be measured is (reconstructed here as a homogeneous rigid transform with r_x = r_y = 0, since the rotation occurs only in the XY plane):
T = [[cos r_z, -sin r_z, 0, a], [sin r_z, cos r_z, 0, b], [0, 0, 1, c], [0, 0, 0, 1]]
where r_x, r_y, r_z are the rotation angles of the point cloud data of the workpiece to be measured about the X, Y, and Z axes, and a, b, c are the offsets of the point cloud data in the X, Y, and Z directions, with a = offset_x, b = offset_y, and c = 0.
The transformation matrix is then used to transform the point cloud data of the workpiece to be measured, obtaining registered point cloud data. In this embodiment, the offset and rotation angle of the point cloud data of the workpiece to be measured relative to the point cloud data of the standard workpiece are calculated indirectly from the depth images, which increases the processing speed of the algorithm and makes the three-dimensional point cloud data easier to process.
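The construction and application of such a transformation can be sketched as follows (an illustrative NumPy sketch, not part of the patent text; it assumes, as stated above, that the rotation occurs only about the Z axis and the Z offset is zero, and the function names are hypothetical):

```python
import numpy as np

def build_transform(offset_x, offset_y, r_deg):
    """Build a 4x4 homogeneous transform: a rotation of r_deg about the
    Z axis plus a translation (offset_x, offset_y, 0), matching the
    assumption that the workpiece only moves in the X-Y plane."""
    r = np.deg2rad(r_deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([
        [c,  -s,  0.0, offset_x],
        [s,   c,  0.0, offset_y],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

def apply_transform(T, points):
    """Apply T to an (N, 3) array of point cloud coordinates."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homo.T).T[:, :3]
```

Registering the point cloud then amounts to calling apply_transform with the matrix built from the measured offset and rotation angle.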
The process of constructing the matching region and obtaining the optimal matching region in step 44 is as follows:
Pixel values and feature vectors of pixel points beyond the edge position are assigned the value 0. Each pixel point P_i(m_i, n_i) in the i-th layer depth image of the second image pyramid is traversed; taking each pixel point P_i(m_i, n_i) as a center point, a matching area is constructed with W_i and H_i as width and height, new matching areas of the matching area under different rotation angles within the preset angle range of 0–180° are obtained, and all the matching areas together with the new matching areas under different rotation angles form, as elements, the data set Match_Rotate_i of matching areas in the i-th layer.
The rotation formula of the matching region is as follows:
(m_i′, n_i′) = (m_i·cos θ − n_i·sin θ, m_i·sin θ + n_i·cos θ)
where (m_i, n_i) represents the coordinates of a pixel point in the i-th layer before rotation, (m_i′, n_i′) represents the coordinates of that pixel point in the i-th layer after rotation; the matching region is constructed with the pixel point (m_i, n_i) as center point, and rotating that matching region by the angle θ yields a new matching area.
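The coordinate rotation used to generate the rotated matching regions can be sketched as follows (an illustrative NumPy sketch with hypothetical names; the optional rotation center defaults to the origin):

```python
import numpy as np

def rotate_coords(m, n, theta_deg, cm=0.0, cn=0.0):
    """Rotate pixel coordinates (m, n) by theta_deg around the center
    (cm, cn), producing the coordinates of the rotated matching region."""
    t = np.deg2rad(theta_deg)
    dm, dn = m - cm, n - cn
    m2 = cm + dm * np.cos(t) - dn * np.sin(t)
    n2 = cn + dm * np.sin(t) + dn * np.cos(t)
    return m2, n2
```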
Each element in the data set Match_Rotate_i is matched against the region of interest ROI_i: the similarity between each pixel point Pixel_M(m_i, n_i) in the matching region and the corresponding pixel point Pixel_R(x_i, y_i) in the region of interest ROI_i is calculated. The width and height of ROI_i and of the matching region are the same, and the positions of Pixel_M(m_i, n_i) and Pixel_R(x_i, y_i) in their respective depth images correspond, i.e. m_i = x_i and n_i = y_i, so as to obtain the similarity S between each matching region and ROI_i.
S = (1/(M_i·N_i)) · Σ (DerX_R·DerX_M + DerY_R·DerY_M) / (√(DerX_R² + DerY_R²) · √(DerX_M² + DerY_M²))
where the sum runs over all pixel positions in the matching region, M_i and N_i represent the total number of rows and columns of the pixel points in the matching region in the i-th layer of the second image pyramid, DerX_R and DerY_R represent the derivative vectors of the pixel point Pixel_R(x_i, y_i) in the X direction and the Y direction, and DerX_M and DerY_M represent the derivative vectors of the pixel point Pixel_M(m_i, n_i) in the X direction and the Y direction, respectively.
When the similarity between all the matching areas in the data set and ROI_i is smaller than or equal to 0.9, the matching is considered to have failed; when the similarity between a certain matching region and ROI_i is higher than 0.9, that matching region is regarded as the optimal matching region, and its center point is the optimal matching point.
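The gradient-based similarity between two equally sized patches described above can be sketched in NumPy as follows (an illustrative sketch; np.gradient stands in for the X/Y derivative vectors DerX and DerY, and the function name is hypothetical):

```python
import numpy as np

def gradient_similarity(roi, match):
    """Mean normalized dot product of the per-pixel gradient vectors of
    two equally sized depth-image patches; 1.0 for identical gradients,
    -1.0 for opposite ones. Flat pixels with no gradient are ignored."""
    gy_r, gx_r = np.gradient(roi)
    gy_m, gx_m = np.gradient(match)
    dot = gx_r * gx_m + gy_r * gy_m
    norm = np.hypot(gx_r, gy_r) * np.hypot(gx_m, gy_m)
    valid = norm > 1e-12
    if not valid.any():
        return 0.0
    return float((dot[valid] / norm[valid]).mean())
```

A matching region would then be accepted as the optimal match when this score exceeds the 0.9 threshold given above.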
As shown in fig. 3, the weld measurement is performed by using the point cloud data processed by the method, and the specific process is as follows:
And step 51, generating a corresponding third depth image according to the registered point cloud data of the workpiece to be detected, wherein the region of interest in the third depth image is a welding line, and dividing the third depth image into a left side region and a right side region by taking the plane of the welding line as a central line.
And step 52, obtaining point cloud data on the left side and point cloud data on the right side of the welding line according to the position coordinates of the left side area and the right side area in the third depth image.
The coordinate conversion relation between the position coordinates of the third depth image and the point cloud data is as follows:
x = (x′ − c_x)·D / f_x, y = (y′ − c_y)·D / f_y, z = D
where x, y, z represent the three-dimensional coordinates of the point cloud, x′, y′ represent the position coordinates of the pixel point in the third depth image, D represents the depth value, and f_x, f_y, c_x, c_y represent the camera internal parameters (focal lengths and principal point).
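The back-projection from a depth-image pixel to a 3-D point can be sketched as follows (an illustrative NumPy sketch; the pinhole intrinsic symbols fx, fy, cx, cy are an assumption, since the text only states that camera internal parameters are used):

```python
import numpy as np

def pixel_to_point(xp, yp, depth, fx, fy, cx, cy):
    """Back-project a depth-image pixel (xp, yp) with depth value D into
    a 3-D point using pinhole intrinsics (focal lengths fx, fy and
    principal point cx, cy)."""
    x = (xp - cx) * depth / fx
    y = (yp - cy) * depth / fy
    return np.array([x, y, depth])
```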
Step 53, respectively performing plane fitting according to the point cloud data on the left side and the point cloud data on the right side of the welding line, and obtaining a left fitting plane and a right fitting plane according to the following processes:
creating a model parameter object for saving the plane fitting result;
setting the plane segmentation threshold to 0.7, so that points whose distance from the fitting plane exceeds this threshold are judged to be invalid data;
performing plane fitting on the point cloud data with the RANSAC algorithm, and obtaining the best plane fitting result by iteratively updating the model.
The RANSAC algorithm specifically comprises the following steps:
Specifically, the parameters of a mathematical model are estimated iteratively from a set of point cloud data of the workpiece to be measured that contains outliers. The RANSAC algorithm assumes that the data contain both correct data and abnormal data (also called noise); the correct data are denoted inliers and the abnormal data outliers. The RANSAC algorithm further assumes that, given a set of correct data, there exists a method to calculate model parameters that fit these data.
The core ideas of the RANSAC algorithm are randomness and hypothesis. Randomness means that sampling data are selected at random according to the probability of occurrence of correct data; by the law of large numbers, random simulation can approximate the correct result. Hypothesis means that the selected sampling data are assumed to all be correct data; a model satisfying the problem is computed from them and used to evaluate the remaining points, and the result is scored. If too few inliers are found, the candidate model is discarded and the previous model is kept; if the candidate scores better than the existing model, the current model is selected.
The reasons for selecting the RANSAC algorithm in this embodiment are as follows: the data volume has been reduced by the direct filtering, and the RANSAC algorithm is robust, able to estimate high-precision parameters from data sets containing a large number of outliers. In order to count the point cloud data of the key region more accurately, the point cloud data of the non-key regions must first be removed, so plane fitting is performed on the point cloud data on both sides of the key region to obtain the fitting plane of the point cloud data on the left of the weld and the fitting plane of the point cloud data on the right, whose equations are A_l·x + B_l·y + C_l·z = D_l and A_r·x + B_r·y + C_r·z = D_r, where A_l, B_l, C_l, D_l and A_r, B_r, C_r, D_r are the plane constants calculated by the RANSAC algorithm for the left and right fitting planes, respectively.
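A minimal RANSAC plane fit can be sketched as follows (an illustrative NumPy sketch; in practice PCL's segmentation module would be used, and the function name and iteration count are assumptions, while the 0.7 distance threshold mirrors the segmentation threshold above):

```python
import numpy as np

def ransac_plane(points, threshold=0.7, iters=200, rng=None):
    """Fit a plane A*x + B*y + C*z = D by RANSAC: repeatedly sample 3
    points, derive the plane through them, count inliers within
    `threshold`, and keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_model, best_count = None, -1
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        nlen = np.linalg.norm(normal)
        if nlen < 1e-12:          # degenerate (collinear) sample
            continue
        normal = normal / nlen
        d = normal @ sample[0]
        dist = np.abs(points @ normal - d)
        count = int((dist <= threshold).sum())
        if count > best_count:
            best_model, best_count = (normal, d), count
    return best_model, best_count
```

Running this once on the left-side points and once on the right-side points would yield the two fitting planes used below.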
Step 54, according to the position coordinates of the weld region in the third depth image, the corresponding point cloud data are obtained with the coordinate system conversion relation of step 52; the distance d_li between each point cloud datum of the weld region and the left fitting plane and the distance d_ri to the right fitting plane are calculated, and the point cloud data are filtered to obtain the final weld point cloud data.
The filtering threshold of the weld point cloud data is set to 0.1. When the distance between a point cloud and the left or the right fitting plane is smaller than or equal to 0.1, the point is considered a plane point, i.e. an edge burr of the weld, and is filtered out; when its distances to both the left and the right fitting plane are greater than 0.1, it is considered a non-plane point and retained. If the filtering threshold is too small, burr points at the weld edge are counted and produce redundant data; if it is too large, part of the weld point cloud data is filtered out, affecting the final statistics.
Step 55, each point in the weld point cloud data is traversed, and the coordinate values of each point on the X axis, Y axis and Z axis are counted to obtain the maximum, minimum and average distance values between points on the X and Y axes and the maximum, minimum and average coordinate values on the Z axis, yielding the width, length and height information of the weld.
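The filtering and statistics of steps 54–55 can be sketched as follows (an illustrative NumPy sketch with hypothetical names; each plane is given as (A, B, C, D) following the fitted plane equations above, and the 0.1 threshold mirrors the filtering threshold in the text):

```python
import numpy as np

def filter_weld_points(points, plane_l, plane_r, threshold=0.1):
    """Keep only the points farther than `threshold` from BOTH fitting
    planes, i.e. drop the planar edge-burr points. The point-plane
    distance is |A*x + B*y + C*z - D| / |(A, B, C)|."""
    def dist(plane):
        n = np.asarray(plane[:3], dtype=float)
        return np.abs(points @ n - plane[3]) / np.linalg.norm(n)
    keep = (dist(plane_l) > threshold) & (dist(plane_r) > threshold)
    return points[keep]

def weld_extents(points):
    """Coordinate ranges on the X, Y and Z axes, i.e. the width, length
    and height of the weld point cloud."""
    return points.max(axis=0) - points.min(axis=0)
```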
As shown in fig. 4, the arc radian measurement is performed by using the point cloud data processed by the method, and the specific process is as follows:
Step 61, the point cloud data to be measured are projected along the Z-axis direction onto the XOY plane, filtering out the Z-axis information of the point cloud data to be measured and obtaining plane point cloud data.
And 62, dragging the 3D model through a visual window, adjusting the pose of the plane point cloud to a front view angle, generating a fourth depth image corresponding to the pose by using the method in the step 3 under the condition of the pose, and carrying out graying and binarization processing on the fourth depth image to obtain a binary image of the fourth depth image.
The input of Canny is set to the binary image and the output to an arc image, with a low threshold of 90, a high threshold of 110 and one image channel; the Canny interface is called to extract the edge information in the binary image and obtain the set of arc pixel points.
And step 63, performing curve fitting by using a least square method according to the coordinate value of each pixel point in the circular arc pixel point set.
Specifically, assume the fitting polynomial of the least squares method is y = a_0 + a_1·x + … + a_k·x^k. The sum of the distances from each pixel point to the curve, i.e. the sum of squared deviations, is:
E = Σ_{q=1}^{Q} [y_q − (a_0 + a_1·x_q + … + a_k·x_q^k)]²
Under a suitable set of coefficients (a_0, a_1, …, a_k) this deviation sum is minimized; the smaller it is, the better the fitted curve.
where k denotes the degree of the fitting polynomial, a_0, a_1, …, a_k denote the coefficients of its terms (whose values are unknown), q denotes the index of the arc pixel points, Q denotes the total number of arc pixel points, q = 1, 2, …, Q, and y_q denotes the ordinate of the q-th arc pixel point entering the least squares fit.
To find the fitting polynomial coefficients that satisfy the conditions, the partial derivative of the deviation sum with respect to each coefficient is taken and set to zero, which yields the normal equations. In matrix form the polynomial coefficients satisfy:
[ Q         Σx_q        …  Σx_q^k     ] [a_0]   [ Σy_q       ]
[ Σx_q      Σx_q²       …  Σx_q^(k+1) ] [a_1] = [ Σx_q·y_q   ]
[ …         …           …  …          ] [ … ]   [ …          ]
[ Σx_q^k    Σx_q^(k+1)  …  Σx_q^(2k)  ] [a_k]   [ Σx_q^k·y_q ]
where x_q represents the X-axis coordinate of the q-th arc pixel point and x_q^k is the contribution of the k-th order term when computing the distance from the q-th arc pixel point to the fitted curve. Solving this system gives the coefficients a_0–a_k and thus determines the fitted equation of the curve.
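Solving the normal equations for the polynomial coefficients can be sketched as follows (an illustrative NumPy sketch; the function name is hypothetical, and np.vander builds the matrix of powers of the arc pixel X-coordinates):

```python
import numpy as np

def polyfit_normal_equations(x, y, k):
    """Least-squares polynomial fit by solving the normal equations
    (X^T X) a = X^T y, where X is the Vandermonde matrix with columns
    1, x, ..., x^k. Returns a with a[j] the coefficient of x^j."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.vander(x, k + 1, increasing=True)
    return np.linalg.solve(X.T @ X, X.T @ y)
```

For higher degrees, numerically stabler solvers such as np.linalg.lstsq are usually preferred over forming X^T X explicitly.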
Step 64, the curvature κ_q of each pixel point on the fitted curve is obtained by the following formula, giving the radian information of the workpiece to be measured:
κ_q = |y″_q| / (1 + y′_q²)^(3/2)
where y′_q represents the first derivative of y_q and y″_q represents its second derivative. In this embodiment, the highest curvature value on the fitted curve is taken as the curvature characteristic value of the curve.
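The curvature formula above can be evaluated on the fitted polynomial as follows (an illustrative NumPy sketch; the function name is hypothetical, and coefficients are passed lowest degree first, as returned by the fit):

```python
import numpy as np

def curvature_on_curve(a, xs):
    """Curvature of y = a0 + a1*x + ... + ak*x^k at the abscissas `xs`,
    using kappa = |y''| / (1 + y'^2)^(3/2)."""
    a = np.asarray(a, dtype=float)
    d1 = np.polyder(a[::-1])        # np.poly* helpers expect highest degree first
    d2 = np.polyder(d1)
    y1 = np.polyval(d1, xs)
    y2 = np.polyval(d2, xs)
    return np.abs(y2) / (1.0 + y1**2) ** 1.5
```

Taking the maximum of the returned values over the sampled abscissas gives the curvature characteristic value mentioned above.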
In the initial stage of this embodiment, the template_alignment and alignment_prerejective methods were tried to match the 3D point cloud data directly and then calibrate the point cloud data; testing showed that each measurement of the weld width and height took up to 30 minutes. Using the two-dimensional image algorithm to assist the three-dimensional point cloud calibration on the same computer configuration, the overall measurement time dropped to about 20 seconds, greatly accelerating the algorithm. Comparing the processing results of the two modes, the error between the two methods is controlled within 1%.
Similarly, in the initial stage of this embodiment, the surface curvature was computed by directly reconstructing the surface from the three-dimensional point cloud data, using the kd-tree method for the surface reconstruction and the getModelCurvatures interface in PCL for the curvature calculation; the whole computation took one hour. With the 3D-to-2D assisted curvature calculation on the same computer configuration, the whole computation dropped to about 20 seconds, greatly accelerating the algorithm.
The invention also encompasses an electronic device comprising a memory for storing computer program instructions and a processor for executing the computer program instructions to perform all or part of the steps described above. The electronic device may communicate with one or more external devices, with one or more devices that enable a user to interact with the electronic device, and/or with any device that enables the electronic device to communicate with one or more other computing devices, as well as with one or more networks (e.g., local area, wide area, and/or public networks) via a network adapter.
The present invention also includes a computer-readable medium having a computer program stored thereon and executable by a processor. The computer-readable medium may include, but is not limited to, magnetic storage devices, optical discs, digital versatile discs, smart cards, and flash memory devices. The readable storage medium of the present invention may represent one or more devices and/or other machine-readable media for storing information; the term "machine-readable medium" includes, but is not limited to, wireless channels and various other media (and/or storage media) capable of storing, containing, and/or carrying code and/or instructions and/or data.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (8)

1. The processing method of the three-dimensional point cloud data is characterized by comprising the following steps of:
S1, acquiring point cloud data of a workpiece to be measured and point cloud data of a standard workpiece, and performing direct filtering on the point cloud data so that the Z-axis coordinates of the point cloud data lie within 0–1;
S2, taking the distance value from the line scanning camera to each point cloud as a pixel value, and obtaining a first depth image corresponding to the point cloud data of the workpiece to be detected and a second depth image corresponding to the point cloud data of the standard workpiece;
S3, calculating offset and rotation angle of the first depth image compared with the second depth image, and registering the point cloud data of the workpiece to be detected according to the offset and rotation angle;
The step S3 comprises the following steps:
S31, respectively acquiring a first image pyramid of the first depth image and a second image pyramid of the second depth image by using downsampling;
S32, using a rectangular frame to enclose a region with obvious texture features in the bottommost layer of the first image pyramid as a region of interest ROI_1; for the region of interest ROI_1, converting its coordinates, width and height layer by layer to obtain the region of interest ROI_i in each layer of the first image pyramid and the pixel values and feature vectors of all pixel points within it;
wherein i denotes the layer index of the first and second image pyramids, I denotes the total number of layers, the I-th layer being the highest layer; for the i-th layer of the first image pyramid, the conversion yields the upper-left corner coordinates of the region of interest, its width W_i and its height H_i;
S33, taking each pixel point in the highest layer of the second image pyramid as a center point and W_I, H_I as width and height, constructing matching areas, obtaining new matching areas of each matching area under different rotation angles, and forming a data set with each matching area and the new matching areas under different rotation angles as elements;
according to the pixel values and feature vectors of the pixel points within the region of interest ROI_I at the highest layer of the first image pyramid, obtaining the optimal matching area in the corresponding data set, taking its center point as the optimal matching point, and expanding and converting the coordinate value of the optimal matching point to obtain the set of optimal matching points of the (I−1)-th layer of the second image pyramid;
S34, traversing each optimal matching point in the i-th layer of the second image pyramid, taking each optimal matching point as a center and W_i, H_i as width and height, constructing matching areas, and obtaining new matching areas of each matching area under different rotation angles, which together form a matching area data set;
according to the pixel values and feature vectors of the pixel points within the region of interest ROI_i of the i-th layer of the first image pyramid, obtaining the optimal matching area and optimal matching point in the i-th layer of the second image pyramid corresponding to ROI_i, and expanding the optimal matching point and converting its coordinate value to obtain the set of optimal matching points of the (i−1)-th layer of the second image pyramid;
repeating the step S34 to obtain the coordinate value of the optimal matching point in the bottommost layer of the second image pyramid;
S35, calculating the offset and the rotation angle of the first depth image relative to the second depth image according to the coordinate value of the center point of the region of interest ROI_1 and the coordinate value of the optimal matching point in the bottommost layer of the second image pyramid;
S36, acquiring a transformation matrix between the point cloud data of the workpiece to be detected and the standard point cloud data of the workpiece according to the offset and the rotation angle, and registering the point cloud data of the workpiece to be detected based on the transformation matrix.
2. The method for processing three-dimensional point cloud data according to claim 1, wherein the process of obtaining the optimal matching area is as follows:
calculating the similarity S between each matching region in the matching region data set and the region of interest ROI_i; if the similarity between all matching regions and ROI_i is less than or equal to 0.9, the matching fails; if the similarity between a matching region and ROI_i is greater than 0.9, that matching region is the optimal matching region and its center point is the optimal matching point.
3. The method for processing three-dimensional point cloud data according to claim 1, wherein the process of expanding and converting coordinate values of the optimal matching points is as follows:
for the i-th layer of the second image pyramid, expanding the optimal matching point in the layer to obtain a set of matching points; performing coordinate value conversion on each matching point in this set to obtain the optimal matching point set of the (i−1)-th layer of depth image, whose elements are the points P_{i−1}(m_{i−1}, n_{i−1}, θ).
The coordinate value conversion formula is as follows:
P_{i−1}(m_{i−1}, n_{i−1}, θ) = k·P_i(m_i, n_i, θ)
wherein P_i(m_i, n_i, θ) represents an optimal matching point of the i-th layer of the second image pyramid, P_{i−1}(m_{i−1}, n_{i−1}, θ) represents the corresponding optimal matching point of the (i−1)-th layer, k represents the downsampling coefficient used when constructing the second image pyramid, (m_i, n_i) and (m_{i−1}, n_{i−1}) are the respective coordinate values, and θ indicates the rotation angle of the matching region.
4. The method for processing three-dimensional point cloud data according to claim 2, wherein the similarity S is calculated as follows:
S = (1/(M_i·N_i)) · Σ (DerX_R·DerX_M + DerY_R·DerY_M) / (√(DerX_R² + DerY_R²) · √(DerX_M² + DerY_M²))
wherein the sum runs over all pixel positions in the matching region, M_i and N_i respectively represent the total number of rows and columns of pixel points in the matching area in the i-th layer of the second image pyramid, DerX_R and DerY_R respectively represent the derivative vectors of the pixel point Pixel_R(x_i, y_i) in the X direction and the Y direction, DerX_M and DerY_M respectively represent the derivative vectors of the pixel point Pixel_M(m_i, n_i) in the X direction and the Y direction, Pixel_M(m_i, n_i) represents the pixel point at coordinates (m_i, n_i) in the matching region, Pixel_R(x_i, y_i) represents the corresponding pixel point in ROI_i, and each summand is the similarity of the two pixel points.
5. Use of the three-dimensional point cloud data processing method according to any one of claims 1 to 4 for measuring a weld in a workpiece to be measured, in particular further comprising the steps of:
S41, generating a third depth image according to the point cloud data of the workpiece to be detected after registration, wherein an interested region in the third depth image is a welding line, and dividing the third depth image into a left side region and a right side region by taking the welding line as a central line;
S42, mapping the left area and the right area back to point cloud data to obtain point cloud data positioned at the left side and point cloud data positioned at the right side of the welding line;
S43, performing plane fitting on the point cloud data by using a RANSAC algorithm to obtain a left fitting plane corresponding to the point cloud data on the left side of the weld and a right fitting plane corresponding to the point cloud data on the right side;
S44, obtaining the point cloud data of the weld region through a coordinate conversion algorithm, calculating the distance d_li from each point cloud datum in the weld region to the left fitting plane and the distance d_ri to the right fitting plane, and filtering the point cloud data according to the distances d_li and d_ri to obtain the final point cloud data of the weld region;
S45, traversing each point cloud in the weld point cloud data, and counting the distribution of each point cloud on the X axis, the Y axis and the Z axis to obtain the width, the length and the height information of the weld.
6. The use of the method for processing three-dimensional point cloud data according to any one of claims 1 to 4, wherein the method is used for measuring arc radian in a workpiece to be measured, and specifically comprises the following steps:
S51, filtering Z-axis information of the point cloud data of the workpiece to be measured, and projecting it onto the XOY plane to obtain plane point cloud data;
S52, generating a fourth depth image corresponding to the plane point cloud data, carrying out graying and binarization processing on the fourth depth image to obtain a binary image of the fourth depth image, extracting edge information in the binary image by using a Canny operator, and obtaining a circular arc pixel point set;
S53, obtaining coordinate values of the arc pixel points, and performing curve fitting by using a least squares method to obtain a curve fitting equation;
S54, calculating the curvature of each circular arc pixel point to obtain radian information of the workpiece to be measured.
7. An electronic device is characterized by comprising a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
A processor for carrying out the method steps of any one of claims 1-6 when executing a program stored on a memory.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-6.
CN202210215815.6A 2022-03-07 2022-03-07 Processing method of three-dimensional point cloud data, application of processing method, electronic equipment and storage medium Active CN114612412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210215815.6A CN114612412B (en) 2022-03-07 2022-03-07 Processing method of three-dimensional point cloud data, application of processing method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210215815.6A CN114612412B (en) 2022-03-07 2022-03-07 Processing method of three-dimensional point cloud data, application of processing method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114612412A CN114612412A (en) 2022-06-10
CN114612412B true CN114612412B (en) 2024-08-23

Family

ID=81861018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210215815.6A Active CN114612412B (en) 2022-03-07 2022-03-07 Processing method of three-dimensional point cloud data, application of processing method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114612412B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439480B (en) * 2022-11-09 2023-02-28 成都运达科技股份有限公司 Bolt abnormity detection method and system based on 3D depth image template matching
CN116416223B (en) * 2023-03-20 2024-01-09 北京国信会视科技有限公司 Complex equipment debugging method, system, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN112894832A (en) * 2019-11-19 2021-06-04 广东博智林机器人有限公司 Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN110335295B (en) * 2019-06-06 2021-05-11 浙江大学 Plant point cloud acquisition registration and optimization method based on TOF camera

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN112894832A (en) * 2019-11-19 2021-06-04 广东博智林机器人有限公司 Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114612412A (en) 2022-06-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant