
CN115797354B - Method for detecting appearance defects of laser welding seam - Google Patents

Method for detecting appearance defects of laser welding seam

Info

Publication number
CN115797354B
Authority
CN
China
Prior art keywords
image
point
points
defect
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310083936.4A
Other languages
Chinese (zh)
Other versions
CN115797354A (en)
Inventor
林福赐
李佐霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Weiya Intelligent Technology Co ltd
Original Assignee
Xiamen Weiya Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Weiya Intelligence Technology Co ltd
Priority to CN202310083936.4A
Publication of CN115797354A
Application granted
Publication of CN115797354B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E 60/00: Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E 60/10: Energy storage using batteries
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to the technical field of weld appearance defect detection for laser welding, and in particular to a method for detecting appearance defects of a laser welding seam, comprising the following steps: acquiring a depth image and a 2D image of a welded product; converting the depth image into a 3D point cloud image, processing it, and identifying defects in the processed point cloud to obtain first defect data; processing the 2D image and identifying defects in the processed image to obtain second defect data; and jointly analyzing the first defect data and the second defect data to output a weld appearance defect detection result. Based on computer vision and combining a point cloud algorithm with a deep learning algorithm, the invention reliably identifies appearance defects of laser welding seams; compared with manual visual inspection, it greatly improves the efficiency of weld appearance defect detection and can improve the quality consistency and yield of products.

Description

Method for detecting appearance defects of laser welding seam
Technical Field
The invention relates to the technical field of weld appearance defect detection for laser welding, and in particular to a method for detecting appearance defects of a laser welding seam.
Background
Sealing nails for lithium batteries are generally laser-welded. Because of energy abnormalities or impurities in the weld during laser welding, abnormal concave-convex defects are sometimes produced after welding, and these defects seriously affect battery quality; see, for example, U.S. patent publication US20080253410A1, which discloses a laser apparatus and a battery manufacturing method. The traditional approach identifies defects by manual visual inspection, which has the following drawbacks: the ramp-up period is long, since inspectors must be trained for some time before starting work, and the training is labor-intensive; defect judgment is subjective, and different inspectors apply inconsistent standards; some defects are too small to be observed with the naked eye; and the judgment time per product cannot be controlled precisely, so the takt of the production line becomes inconsistent. These drawbacks seriously affect overall battery quality and production efficiency, so defect identification by manual visual inspection no longer meets current production requirements.
In recent years, more and more industries have introduced computer vision to identify defects. Compared with manual visual inspection, computer vision offers a fast ramp-up, a single detection method reusable for the same product, objective judgment standards, and a relatively fixed detection time; it thus improves detection efficiency, ensures the consistency of defect detection results, and raises product yield. However, how to introduce computer vision into the detection of weld appearance defects in laser welding, so as to improve the consistency of defect detection results and the product yield, remains a problem to be solved urgently.
Disclosure of Invention
In order to solve this technical problem, the invention provides a method for detecting appearance defects of a laser welding seam, comprising the following steps:
S100: acquiring a depth image and a 2D image of a welded product;
S200: converting the depth image into a 3D point cloud image, processing it, and identifying defects in the processed point cloud to obtain first defect data;
S300: processing the 2D image and identifying defects in the processed image to obtain second defect data;
S400: jointly analyzing the first defect data and the second defect data and outputting a weld appearance defect detection result.
Optionally, in step S100, the depth image is acquired by a 3D sensor; the 2D image is obtained by shooting with a CCD camera.
Optionally, in step S200, converting the depth image into a 3D point cloud image includes:
determining the conversion coefficients of the depth image: x_res denotes the actual distance corresponding to one pixel along a row, y_res denotes the actual distance corresponding to one pixel along a column, and z_res denotes the ratio converting a pixel value of the depth image to an actual height;
implementing the spatial point cloud conversion: traversing the pixels of the depth image and converting each pixel into a spatial point of the 3D point cloud image via the conversion coefficients, using the formulas X = i * x_res, Y = j * y_res, Z = value * z_res, where (X, Y, Z) are the coordinates of the spatial point in the 3D point cloud image, i is the row index of the depth image, j is the column index, and value is the pixel value at pixel (i, j) of the depth image.
Optionally, in step S200, after the depth image is converted into a 3D point cloud image, downsampling is performed: the number of points is reduced by a voxel-grid downsampling method, specifically as follows:
let A be the point cloud set before downsampling and B the set after downsampling; the space is divided into cubes of equal size, and whenever one or more points of A fall inside a cube, a single point at the centre of that cube is assigned to B, thereby achieving the downsampling.
Optionally, in step S200, processing the 3D point cloud image includes:
S210: removing outliers from the 3D point cloud image;
S220: identifying convex and concave points in the 3D point cloud image, specifically: first, fitting a reference plane; second, traversing all points of the 3D point cloud image and calculating the distance from each point to the reference plane; finally, a point whose distance exceeds the set distance threshold belongs to the convex or concave points;
S230: quantizing the convex or concave points, specifically: clustering them into sets with a clustering algorithm to obtain point set images, calculating a first characteristic size of each point set image, and taking the first characteristic size as the first defect data.
Optionally, in step S220, the reference plane is fitted with a random sample consensus (RANSAC) plane-fitting algorithm, as follows:
S221: randomly selecting 3 non-collinear target points from the 3D point cloud image and solving the equation of the plane they define to obtain a plane model;
S222: for each remaining point of the 3D point cloud image, calculating its point-to-plane distance to the plane model determined by the plane equation and comparing it with a preset minimum distance threshold; a point closer than the threshold is an inlier, otherwise an outlier; the number of inliers of the plane model is recorded;
S223: repeating steps S221 and S222, and recording the current plane model if its inlier count exceeds that of the previous plane model;
S224: iterating steps S221 to S223 until the iteration ends; the plane model with the most inliers is taken as the reference plane.
Optionally, in step S300, processing the 2D image includes:
S310: capturing the planar defects with an object detection technique, comprising: first, collecting a number of planar defect pictures and annotating the positions of the defects to obtain a training set; second, training a YOLO object detection network on the training set to obtain a detection model; finally, applying the detection model to planar defect detection on the 2D image to obtain the undercut-shaped planar defects and their specific positions on the 2D image;
S320: quantifying the planar defects, comprising: performing confidence analysis on the planar defects to obtain a confidence; performing minimum-bounding-box analysis on the planar defects and screening out those whose second characteristic size exceeds the set second size threshold; and taking the second characteristic size and confidence of the screened planar defects as the second defect data.
Optionally, in step S400, the combination analysis is as follows:
S410: screening the 3D point cloud image for point set images whose first characteristic size exceeds the first size threshold, and taking the convex or concave regions corresponding to the screened point set images as first appearance defects;
S420: taking the regions of planar defects whose confidence exceeds the confidence threshold as second appearance defects;
S430: for a planar defect whose confidence does not exceed the confidence threshold, searching the corresponding position in the 3D point cloud image, according to the defect's position on the 2D image, for a convex or concave point set image; if none exists, discarding the planar defect; if the point set image at the corresponding position exceeds a third size threshold, merging the point regions of the planar defect and that point set image, and taking the merged region as a third appearance defect.
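As an illustration, the decision logic of steps S410 to S430 can be sketched in a few lines. This is a non-authoritative simplification: the dictionary records, the exact-position match between the 2D and 3D branches, and the parameter names are illustrative assumptions, not the patent's data structures.

```python
def combine_defects(point_sets, planar_defects,
                    first_size_th, conf_th, third_size_th):
    """Sketch of the combination analysis of steps S410-S430.

    point_sets: dicts with 'size' and 'position' from the 3D branch.
    planar_defects: dicts with 'confidence' and 'position' from the 2D
    branch.  Positions are assumed directly comparable between the two
    images (a simplification of the patent's position lookup).
    """
    results = []
    # S410: large convex/concave point set images are defects outright.
    for ps in point_sets:
        if ps["size"] > first_size_th:
            results.append(("first_defect", ps))
    for pd in planar_defects:
        # S420: high-confidence 2D detections are defects outright.
        if pd["confidence"] > conf_th:
            results.append(("second_defect", pd))
            continue
        # S430: low-confidence 2D detections need 3D corroboration.
        match = next((ps for ps in point_sets
                      if ps["position"] == pd["position"]), None)
        if match is not None and match["size"] > third_size_th:
            # Merged region of the planar defect and the point set image.
            results.append(("third_defect", (pd, match)))
    return results

point_sets = [{"size": 5, "position": (1, 1)}, {"size": 2, "position": (3, 3)}]
planar = [{"confidence": 0.9, "position": (2, 2)},
          {"confidence": 0.4, "position": (3, 3)},   # kept: 3D evidence exists
          {"confidence": 0.3, "position": (9, 9)}]   # discarded: no 3D evidence
results = combine_defects(point_sets, planar,
                          first_size_th=4, conf_th=0.5, third_size_th=1)
```

The low-confidence detection at (9, 9) is eliminated because no convex or concave point set exists at that position, while the one at (3, 3) is corroborated by the 3D branch and becomes a third appearance defect.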
Optionally, in step S320, the confidence analysis includes:
determining a confidence interval for the planar defect;
using a normal distribution to analyze the probability that the reliability of the planar defect capture falls within the confidence interval;
and determining the confidence of the planar defect from the probability that the capture reliability falls within the confidence interval.
Optionally, in step S300, processing the 2D image further includes image preprocessing, specifically:
performing graying on the 2D image to obtain a grayed 2D image: for each pixel, the gray value is obtained by adding the product of the R channel pixel value and the R channel weight, the product of the G channel pixel value and the G channel weight, and the product of the B channel pixel value and the B channel weight;
applying, to the pixels of the grayed 2D image, a Gaussian filtering algorithm that combines the inherent variation and the total variation of an image window into a structure-texture decomposition regularizer, and smoothing to obtain a smoothed 2D image;
performing image enhancement on the smoothed 2D image, that is, supervised model training of the smoothed 2D image with an image enhancement model, to obtain an enhanced 2D image;
and using the enhanced 2D image for planar defect capture.
According to the method for detecting appearance defects of a laser welding seam, a depth image and a 2D image of the welded product are acquired and processed separately to obtain first defect data and second defect data respectively; on that basis the two sets of defect data are jointly analyzed, and the weld appearance defect detection result is output according to the analysis result. Based on computer vision and combining a point cloud algorithm with a deep learning algorithm, the invention reliably identifies appearance defects of laser welding seams; judged by the detection results, it greatly improves the efficiency of weld appearance defect detection compared with manual visual inspection and can improve the quality consistency and yield of products.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a method for detecting defects in the appearance of a laser welded weld in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart of converting a depth image into a 3D point cloud image employed in an embodiment of a method of the present invention for detecting defects in the appearance of a laser welded weld;
FIG. 3 is a flow chart of processing a 3D point cloud image in an embodiment of a method of the present invention for detecting defects in the appearance of a laser welded weld;
FIG. 4 is a flow chart of a fitted reference plane in an embodiment of a method of the present invention for detecting defects in the appearance of a laser welded weld;
FIG. 5 is a flow chart of processing a 2D image in an embodiment of a method of the present invention for detecting defects in the appearance of a laser welded weld;
FIG. 6 is a flow chart of the combination analysis employed in an embodiment of the method of the present invention for detecting defects in the appearance of a laser welded weld;
FIG. 7 is an application flow chart of the method for detecting appearance defects of a laser welding seam, as applied to weld appearance defect detection in laser welding of lithium batteries.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for detecting an appearance defect of a laser welding seam, including:
S100: acquiring a depth image and a 2D image of a welded product;
S200: converting the depth image into a 3D point cloud image, processing it, and identifying defects in the processed point cloud to obtain first defect data;
S300: processing the 2D image and identifying defects in the processed image to obtain second defect data;
S400: jointly analyzing the first defect data and the second defect data and outputting a weld appearance defect detection result.
The working principle and beneficial effects of this technical scheme are as follows: the scheme acquires a depth image and a 2D image of the welded product, processes them separately to obtain first defect data and second defect data respectively, jointly analyzes the two on that basis, and outputs the weld appearance defect detection result according to the analysis result. Based on computer vision and combining a point cloud algorithm with a deep learning algorithm, the scheme reliably identifies appearance defects of laser welding seams and, judged by the detection results, greatly improves detection efficiency and product yield compared with manual visual inspection; for the lithium battery industry, for example, it can improve the quality consistency and yield of lithium battery products.
In one embodiment, in step S100, a depth image is acquired by a 3D sensor; the 2D image is obtained by shooting with a CCD camera.
The working principle and beneficial effects of this technical scheme are as follows: the 3D sensor may be a 3D camera, and the acquired depth image implicitly carries three-dimensional data, which facilitates the subsequent conversion to and processing of the 3D point cloud image; shooting the 2D image with a CCD camera lets finer surface conditions be reflected in the image. This lays a good foundation for the subsequent image analysis and helps improve the accuracy of the analysis results.
In one embodiment, in step S200, converting the depth image into a 3D point cloud image includes:
determining the conversion coefficients of the depth image: x_res denotes the actual distance corresponding to one pixel along a row, y_res denotes the actual distance corresponding to one pixel along a column, and z_res denotes the ratio converting a pixel value of the depth image to an actual height;
implementing the spatial point cloud conversion: traversing the pixels of the depth image and converting each pixel into a spatial point of the 3D point cloud image via the conversion coefficients, using the formulas X = i * x_res, Y = j * y_res, Z = value * z_res, where (X, Y, Z) are the coordinates of the spatial point in the 3D point cloud image, i is the row index of the depth image, j is the column index, and value is the pixel value at pixel (i, j) of the depth image.
The working principle and beneficial effects of this technical scheme are as follows: the depth image can be regarded as one representation of a 3D point cloud, so a fixed set of conversion coefficients links the two. The three coefficients x_res, y_res and z_res are fixed parameters of the 3D camera setup and can be obtained from the camera. Given a depth image and its conversion coefficients, the conversion to a 3D point cloud image proceeds, for example, as follows: obtain the numbers of rows and columns of the depth image, with i the row index and j the column index, and start with i = 0 and j = 0; for the pixel in row i and column j with pixel value value, the corresponding spatial point is X = i * x_res, Y = j * y_res, Z = value * z_res, and this point is added to the point cloud set; when j > cols, where cols is the total number of columns of the depth image, proceed to the next step, otherwise set j = j + 1 and convert the next pixel of the same row; when i > rows, where rows is the total number of rows, finish and output the point cloud set, otherwise set i = i + 1 and convert the pixels of the next row. The calculation flow of the conversion into a 3D point cloud image is shown in fig. 2.
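The per-pixel loop of fig. 2 can also be written in vectorized form. The following is a sketch using NumPy; the resolutions (0.5 mm per pixel, 0.25 mm per gray level) are hypothetical example values standing in for the camera's fixed x_res, y_res, z_res parameters.

```python
import numpy as np

def depth_to_point_cloud(depth, x_res, y_res, z_res):
    """Vectorized form of the per-pixel conversion loop of fig. 2.

    Pixel (i, j) with value v maps to the spatial point
    (i * x_res, j * y_res, v * z_res).
    """
    rows, cols = depth.shape
    i, j = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    return np.stack([i * x_res, j * y_res, depth * z_res],
                    axis=-1).reshape(-1, 3)

# Hypothetical resolutions: 0.5 mm per pixel, 0.25 mm per gray level.
depth = np.array([[100.0, 100.0],
                  [100.0, 200.0]])
cloud = depth_to_point_cloud(depth, x_res=0.5, y_res=0.5, z_res=0.25)
```

The 2 x 2 depth image yields four spatial points; the brighter pixel at (1, 1) maps to the highest point of the cloud.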
In one embodiment, in step S200, after the depth image is converted into a 3D point cloud image, downsampling is performed: the number of points is reduced by a voxel-grid downsampling method, specifically as follows:
let A be the point cloud set before downsampling and B the set after downsampling; the space is divided into cubes of equal size, and whenever one or more points of A fall inside a cube, a single point at the centre of that cube is assigned to B, thereby achieving the downsampling.
The working principle and beneficial effects of this technical scheme are as follows: after the depth map is converted into a point cloud, the amount of point data is large, detection takes too long, and the raw data contains redundancy, while reducing the number of points does not affect the detection result. This scheme therefore reduces the point count by voxel-grid downsampling, which shortens detection time; the size of the cube determines the number of points remaining after downsampling.
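The cube-centre assignment described above can be sketched as follows; this is a minimal NumPy version, with an illustrative voxel size, rather than a production implementation (libraries such as Open3D provide equivalent routines).

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce set A to set B: one point, at the cube centre, per occupied cube."""
    idx = np.floor(points / voxel_size).astype(np.int64)   # cube index per point
    occupied = np.unique(idx, axis=0)                      # one entry per cube
    return (occupied + 0.5) * voxel_size                   # cube centres

pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],    # same cube as the first point
                [1.6, 0.1, 0.1]])   # a second cube
small = voxel_downsample(pts, voxel_size=1.0)
```

Three input points occupy two cubes, so two centre points remain; a larger voxel size removes more points.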
In one embodiment, as shown in fig. 3, in step S200, processing the 3D point cloud image includes:
S210: removing outliers from the 3D point cloud image;
S220: identifying convex and concave points in the 3D point cloud image, specifically: first, fitting a reference plane; second, traversing all points of the 3D point cloud image and calculating the distance from each point to the reference plane; finally, a point whose distance exceeds the set distance threshold belongs to the convex or concave points;
S230: quantizing the convex or concave points, specifically: clustering them into sets with a clustering algorithm to obtain point set images, calculating a first characteristic size of each point set image, and taking the first characteristic size as the first defect data.
The working principle and beneficial effects of this technical scheme are as follows: noise is unavoidable during image acquisition, and abnormal noise easily interferes with algorithmic detection, making the detection result inaccurate, so noise must be identified and removed from the point cloud set. Noise typically appears in the point cloud set as discrete, isolated outliers, which can be identified and removed as follows: traverse all points of the set; with a set detection radius r, take each point as a sphere centre and count the points inside the sphere of radius r; compute the average of these counts over all points of the set; when the count for some point falls below the average by more than a set threshold (the average point-count threshold), delete that point from the set. This achieves the removal of outliers. Protrusions and depressions on the sealing nail weld affect the sealing quality, so such defects must be identified. Under normal conditions the appearance of a laser weld is a plane coinciding with the reference plane; if a protrusion or depression exists on the weld bead, its defect points are off the plane and comparatively far from it. Therefore the reference plane is fitted first, the points of the set are traversed, the distance from each point to the reference plane is calculated, and points whose distance exceeds a certain value belong to the convex or concave points.
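The radius-based outlier removal just described can be sketched as follows. This is a brute-force NumPy version suitable only for small clouds (it builds an O(N^2) distance matrix), and the radius and threshold values are illustrative assumptions.

```python
import numpy as np

def remove_outliers(points, radius, deficit_threshold):
    """Delete points whose neighbour count inside a sphere of the given
    radius falls below the set-wide average by more than the threshold."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (dists <= radius).sum(axis=1) - 1   # exclude the point itself
    keep = (counts.mean() - counts) <= deficit_threshold
    return points[keep]

cluster = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                    [0.1, 0.1, 0.0], [0.0, 0.0, 0.1]])
noisy = np.vstack([cluster, [[10.0, 10.0, 10.0]]])   # one isolated noise point
clean = remove_outliers(noisy, radius=1.0, deficit_threshold=2.0)
```

The isolated point at (10, 10, 10) has no neighbours within the radius, falls far below the average count, and is removed; the dense cluster survives.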
A single point has no length, width or height features and cannot be quantized, so the points must first be clustered into sets, and only sets containing more than a certain number of points are processed. First, a clustering algorithm groups the defect points into separate point sets; then the length, width, height and other features of each set are calculated for quantization. Clustering uses the Euclidean distance between points of the cloud: when the Euclidean distance between two points is below the set threshold they belong to the same set, so the defect points are traversed and clustered into defect point sets by Euclidean distance. The length, width and height defect features of each defect point set are then calculated; these are the first characteristic size. This yields the first defect data, obtained from the convex and concave points with the 3D algorithm.
In one embodiment, in step S230, the clustering algorithm establishes a three-dimensional coordinate system and calculates the Euclidean distance with the following formula:
d(i, j) = sqrt((x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2)
where d(i, j) denotes the Euclidean distance between the i-th and j-th convex or concave points, (x_i, y_i, z_i) denotes the three-dimensional coordinates of the i-th convex or concave point, and (x_j, y_j, z_j) denotes the three-dimensional coordinates of the j-th convex or concave point.
If the Euclidean distance between two convex or concave points is not greater than the preset Euclidean distance threshold, the two points are clustered into the same set, and all the convex or concave points contained in one set form a point set image.
The working principle and beneficial effects of this technical scheme are as follows: the scheme calculates the Euclidean distance between two convex or concave points of the 3D point cloud image with the above formula and decides, via the preset Euclidean distance threshold, whether the two points are clustered together. The Euclidean distance criterion is simple, effective and easy to implement, with a small computational load and high speed, which improves the efficiency of appearance defect detection.
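The threshold-based clustering described above amounts to simple region growing: points chained together by below-threshold Euclidean distances end up in the same set. A minimal NumPy sketch, with an illustrative distance threshold, not the patent's exact implementation:

```python
import numpy as np

def euclidean_cluster(points, dist_threshold):
    """Region growing: points chained by Euclidean distances below the
    threshold receive the same cluster label."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            p = stack.pop()
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.where((d <= dist_threshold) & (labels == -1))[0]:
                labels[q] = current
                stack.append(q)
        current += 1
    return labels

defect_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],    # first defect
                       [5.0, 5.0, 5.0], [5.1, 5.0, 5.0]])   # second defect
labels = euclidean_cluster(defect_pts, dist_threshold=0.5)
```

The two nearby pairs receive labels 0 and 1; each label then corresponds to one point set image whose length, width and height can be quantized.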
In one embodiment, as shown in fig. 4, in step S220 the reference plane is fitted with a random sample consensus (RANSAC) plane-fitting algorithm, as follows:
S221: randomly selecting 3 non-collinear target points from the 3D point cloud image and solving the equation of the plane they define to obtain a plane model;
S222: for each remaining point of the 3D point cloud image, calculating its point-to-plane distance to the plane model determined by the plane equation and comparing it with a preset minimum distance threshold; a point closer than the threshold is an inlier, otherwise an outlier; the number of inliers of the plane model is recorded;
S223: repeating steps S221 and S222, and recording the current plane model if its inlier count exceeds that of the previous plane model;
S224: iterating steps S221 to S223 until the iteration ends; the plane model with the most inliers is taken as the reference plane.
The working principle and beneficial effects of the technical scheme are as follows: the reference plane of the present solution is a plane in space, expressed, for example, in the form

Ax + By + Cz + D = 0

The plane is fitted by a random sample consensus (RANSAC) plane fitting algorithm. The method uses the computer to perform random probability experiments; through iteration and precision control, the result of each experiment is recorded, and once the stopping condition is met, the optimal plane in the data is found among the experiments, so the plane can be obtained stably while resisting abnormal interference in the data. Firstly, 3 non-collinear points are randomly selected and the plane equation is solved, obtaining the plane model determined by that equation. Secondly, for each of the remaining points the distance to the plane (the point-to-plane distance) is calculated and compared with the set minimum distance precision (minimum distance threshold); if the distance is smaller than the threshold, the point is an inner point, otherwise it is an outer point, and the number of all inner points of the model is recorded. Then the previous two steps are repeated: if the number of inner points of the current plane model exceeds that of the previous model, the current plane model is better and is recorded as the current optimal model. Finally, the previous three steps are repeated until the iteration is finished, and the plane model with the largest number of inner points is found; this plane model is the optimal reference plane. After the model parameters are obtained, all points in the weld joint set are traversed and the distance between each point and the reference plane is calculated. For example, let a point be (x0, y0, z0) and the reference plane equation be Ax + By + Cz + D = 0; then the distance from the point to the reference plane is

dis = (Ax0 + By0 + Cz0 + D) / √(A² + B² + C²)

The absolute value of dis is calculated; if it is larger than a certain threshold (the distance threshold), the point is proved to belong to a convex point or a concave point.
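Steps S221-S224 and the point-to-plane distance above can be sketched as follows (an illustrative Python/NumPy sketch; the function names, iteration count and threshold defaults are assumptions, not taken from the patent):

```python
import numpy as np

def fit_plane_ransac(points, min_dist=0.05, iterations=200, seed=0):
    """Fit a reference plane Ax + By + Cz + D = 0 to an Nx3 point cloud
    with random sample consensus, following steps S221-S224."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, -1
    for _ in range(iterations):
        # S221: randomly pick 3 points and solve the plane equation.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # collinear sample, draw again
            continue
        normal = normal / norm
        d = -normal.dot(p1)
        # S222: inner points are those within the minimum distance threshold.
        inliers = int((np.abs(points @ normal + d) < min_dist).sum())
        # S223: keep the plane model with the most inner points so far.
        if inliers > best_inliers:
            best_model = (normal[0], normal[1], normal[2], d)
            best_inliers = inliers
    return best_model

def point_plane_distance(point, model):
    """dis = (A*x0 + B*y0 + C*z0 + D) / sqrt(A^2 + B^2 + C^2)."""
    a, b, c, d = model
    return (a * point[0] + b * point[1] + c * point[2] + d) / np.sqrt(a * a + b * b + c * c)
```

A point whose |dis| exceeds the distance threshold would then be reported as a convex or concave point.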
In one embodiment, as shown in fig. 5, in step S300, processing the 2D image includes:
s310: grabbing the planar defect through a target detection technology comprises the following steps:
firstly, a certain number of planar defect pictures are collected, and the positions of the planar defects are marked to obtain a training set; secondly, a YOLO target detection network is trained on the training set to obtain a detection model; finally, the detection model is used for planar defect detection on the 2D image, obtaining the undercut-shaped planar defect and the specific position of the planar defect on the 2D image;
s320: quantifying the planar defect, including:
carrying out confidence analysis on the plane defects to obtain confidence; performing minimum bounding box algorithm analysis on the planar defects, and screening out planar defects with second characteristic sizes larger than a set second size threshold; and taking the second characteristic size and the confidence of the screened plane defect as second defect data.
The working principle and beneficial effects of the technical scheme are as follows: this scheme adopts the 2D image, so the color information of the object surface can be acquired more accurately. The appearance of the laser weld seam has undercut-shaped black holes on the inner or outer side of the seam, and this defect has an obvious form on the 2D image; it is therefore detected through the detection model trained by the YOLO target detection network, namely the undercut-shaped black hole defect (planar defect) is detected on the 2D image, so as to obtain the specific position of the planar defect on the 2D image. After the planar defect of the 2D image is identified, the confidence of the planar defect result and the minimum bounding box of the planar defect are obtained; the confidence of the planar defect result is the probability value of the planar defect, and the closer it is to 1, the more reliable the result. The length, width and pixel area of the minimum bounding box are calculated; these are the second characteristic size of the minimum bounding box. When the length, width and pixel area are all larger than a certain threshold (namely the second size threshold), the result is classified as a screened planar defect; otherwise the defect is not considered. The second characteristic size of the screened planar defect is extracted to obtain the second defect data.
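The quantization of step S320 can be sketched as below; for simplicity the sketch approximates the minimum bounding box by the axis-aligned bounding box of a binary defect mask, and the function name and threshold defaults are illustrative assumptions:

```python
import numpy as np

def quantify_plane_defect(mask, confidence, min_len=5, min_wid=5, min_area=20):
    """Quantize one detected planar defect (a simplified sketch of S320).

    `mask` is a binary image of the defect region. Returns the second
    characteristic size (length, width, pixel area) together with the
    confidence, or None when any size fails its threshold.
    """
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    length = rows.max() - rows.min() + 1      # bounding-box length
    width = cols.max() - cols.min() + 1       # bounding-box width
    area = int(mask.sum())                    # pixel area of the defect
    # Screen: all three sizes must exceed the second size threshold.
    if length > min_len and width > min_wid and area > min_area:
        return {"length": int(length), "width": int(width),
                "area": area, "confidence": confidence}
    return None
```

In practice the rotated minimum bounding box of the detection (e.g. via a contour-based min-area rectangle) would replace the axis-aligned approximation.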
In one embodiment, as shown in fig. 6, in step S400, the binding analysis is as follows:
s410: screening out point set images with the first characteristic size larger than a first size threshold value from the 3D point cloud images; taking the convex area or the concave area corresponding to the screened point set image as a first appearance defect;
s420: taking the plane defect corresponding area with the confidence coefficient larger than the confidence coefficient threshold value as a second appearance defect;
s430: for the planar defect with the confidence coefficient not larger than the confidence coefficient threshold value, searching whether a convex or concave point set image exists at a corresponding position in the 3D point cloud image according to the position of the planar defect on the 2D image; if not, eliminating the plane defect; if the point set image at the corresponding position in the 3D point cloud image is larger than a third size threshold, merging the point location areas of the plane defect and the point set image at the corresponding position, and taking the merged point location areas as third appearance defects.
The working principle and beneficial effects of the technical scheme are as follows: concave-convex defects are obvious on the depth image, while some of them show no feature on the 2D image; therefore, as long as the length, width and height of a concave-convex defect detected on the 3D image are simultaneously larger than a certain threshold (namely the first size threshold), it belongs to the defect point set and can be directly classified as a final output defect. As for the black hole defects on the inner or outer side of the weld seam detected on the 2D image, the weld seam is interfered by some black spots; these black spots are only abnormal in color and do not affect product quality, but they are similar to the black hole defects in color and area and easily cause misjudgment. However, the edges of the black spots transition slowly from black to the normal color, while the edge change of the undercut black hole defect is sharp, so the confidence of the 2D detection result of a black spot is lower; moreover, the undercut black hole defect is slightly concave-convex in 3D, whereas the black spot shows no concavity or convexity in 3D. The planar defects should therefore be distinguished and judged: when a defect is detected on the 2D image and its confidence is larger than a certain value, it is classified into the final output result; otherwise, the combined judgment of the 2D image and the 3D image is carried out, the length, width and height of the 3D point cloud at the corresponding position are calculated, and when all three values are larger than a certain value (the third size threshold), the defect is classified as a final output defect, otherwise it is filtered out. The final output defects are obtained through the combined analysis of the 3D image and the 2D image.
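The combined decision logic of steps S410-S430 can be sketched as follows (the data layout — dictionaries holding sizes, positions and confidences — is an assumption made for illustration):

```python
def classify_defects(point_sets, plane_defects, first_size_thr, conf_thr, third_size_thr):
    """Combined 2D/3D analysis sketch of steps S410-S430.

    `point_sets`: 3D point-set images with `sizes` (length/width/height) and `position`.
    `plane_defects`: 2D detections with `confidence`, `position`, and optional
    `sizes_3d` for the matching 3D region (None when 3D shows no bump/pit there).
    """
    defects = []
    # S410: 3D point-set images whose first characteristic size exceeds the threshold.
    for ps in point_sets:
        if all(ps["sizes"][k] > first_size_thr[k] for k in ("length", "width", "height")):
            defects.append(("first", ps["position"]))
    for pd in plane_defects:
        # S420: confident 2D detections are accepted directly.
        if pd["confidence"] > conf_thr:
            defects.append(("second", pd["position"]))
        # S430: otherwise a sufficiently large bump/pit must exist at the same 3D position.
        elif pd.get("sizes_3d") is not None and all(
                pd["sizes_3d"][k] > third_size_thr[k] for k in ("length", "width", "height")):
            defects.append(("third", pd["position"]))
        # else: the planar defect is eliminated (e.g. a harmless black spot)
    return defects
```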
In one embodiment, in step S320, the confidence analysis includes:
determining a confidence interval of the planar defect;
analyzing probability that the planar defect grabbing reliability falls into a confidence interval by adopting normal distribution;
and determining the confidence level of the planar defect according to the probability that the planar defect grabbing reliability falls into the confidence interval.
The working principle and beneficial effects of the technical scheme are as follows: according to the scheme, the confidence interval of the planar defect is determined, and with reference to normal distribution theory, the probability that the grabbing reliability of the planar defect falls into the confidence interval is examined; the confidence of the planar defect is then obtained from the examination result. The confidence analysis adopts probability evaluation to determine the confidence of the planar defect, thereby reducing the misjudgment rate of planar defects and improving the accuracy of planar defect detection.
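The patent does not fix the distribution parameters; assuming the grabbing reliability is modelled as a normal variable N(mu, sigma^2), the probability of falling into a confidence interval [low, high] can be computed with the error function:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def interval_probability(low, high, mu, sigma):
    """Probability that an N(mu, sigma^2) variable falls into [low, high]."""
    return normal_cdf(high, mu, sigma) - normal_cdf(low, mu, sigma)
```

For the standard normal distribution, the interval [-1.96, 1.96] captures about 95% of the probability mass, which is the usual basis for a 95% confidence interval.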
In one embodiment, in step S300, processing the 2D image further includes image preprocessing, specifically:
carrying out graying treatment on the 2D image to obtain a corresponding 2D image after graying treatment;
based on each pixel point in the 2D image after graying, the gray value of the pixel point is obtained by adding the product of the R channel pixel value and the R channel weighting value, the product of the G channel pixel value and the G channel weighting value, and the product of the B channel pixel value and the B channel weighting value; then, for the pixel points of the grayed 2D image, a Gaussian filtering algorithm is adopted to combine the inherent variation of an image window and the total variation of the image window, forming a structure and texture decomposition regularizer, and smoothing is carried out to obtain the smoothed 2D image;
performing image enhancement processing on the smoothed 2D image, namely performing supervised model training on the smoothed 2D image by adopting an image enhancement model to obtain an image enhanced 2D image;
and using the 2D image after image enhancement for planar defect grabbing.
The working principle and beneficial effects of the technical scheme are as follows: the preprocessing of the 2D image comprises graying processing, smoothing processing and image enhancement processing, wherein the graying processing can reduce the data volume of the 2D image processing and improve the processing efficiency; fusing meaningful structures and texture units in the 2D image together through smoothing treatment, so that the textures of the image are clear; performing supervised model training by adopting an image enhancement model, and improving the quality and the identifiability of a 2D image through image enhancement processing, thereby improving the accuracy of appearance defect detection; wherein inherent variation refers to gradients in one image window that have more uniform directions than complex textures contained in another image window; the total variation of the image window is a parameter reflecting the quality of the 2D image.
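The weighted graying step can be sketched as follows; the BT.601 channel weights used as defaults are an illustrative assumption, since the text only requires a weighted sum of the R, G and B channels:

```python
import numpy as np

def to_gray(image, w_r=0.299, w_g=0.587, w_b=0.114):
    """Weighted graying of an H x W x 3 RGB image.

    The gray value of each pixel is the sum of the R, G and B channel
    pixel values multiplied by their channel weighting values.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return w_r * r + w_g * g + w_b * b
```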
In one embodiment, in the structure and texture decomposition regularizer employed in image preprocessing, the image window total variation is represented by the following total variation model:

Q = Σ_k [ (S_k − I_k)² + D_x(k)/(L_x(k) + ε) + D_y(k)/(L_y(k) + ε) ]

where the windowed total variations in the two directions are

D_x(k) = Σ_{q∈R(k)} g_{k,q}·|∂_x S_q|,  D_y(k) = Σ_{q∈R(k)} g_{k,q}·|∂_y S_q|

and the windowed inherent variations are

L_x(k) = |Σ_{q∈R(k)} g_{k,q}·∂_x S_q|,  L_y(k) = |Σ_{q∈R(k)} g_{k,q}·∂_y S_q|

In the above formulas, Q represents the total variation model of the image window; P represents the output structure image; S_k represents the gray value of pixel point k in the output structure image; I_k represents the gray value of pixel point k in the input image; k represents a pixel point; q represents the index of the pixel points in the square area R(k) centered on pixel point k; x and y represent the lateral and longitudinal pixel coordinates of the image, respectively; ε represents a small positive correction factor; g_{k,q} represents a Gaussian kernel function, with

g_{k,q} ∝ exp( −((x_k − x_q)² + (y_k − y_q)²) / (2σ²) )

where x_k and y_k represent the lateral and longitudinal pixel coordinates of pixel point k; x_q and y_q represent the lateral and longitudinal pixel coordinates of pixel point q; and σ represents the Gaussian spatial scale.
The working principle and beneficial effects of the technical scheme are as follows: in this scheme, when the 2D image is smoothed, the above model is adopted to represent the total variation of the image window. The model depends only on the data of the local 2D image window and does not require the gradients there to be isotropic: as long as gradients with opposite directions within one window cancel each other out, the model is effective whether the gradient pattern is isotropic or not, so the effect of sharpening edges can be achieved. The structure image is obtained through the processing of this model; when its edges are detected, edge extraction is convenient, the identifiability of the 2D image is improved, and the accuracy of planar defect grabbing is improved.
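The distinction between the windowed total variation (absolute gradients summed, so texture accumulates) and the inherent variation (signed gradients summed first, so opposite gradients cancel) can be sketched as follows for the x direction; the window size, σ and discrete gradient choices are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian weights g_{k,q} on a size x size window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

def windowed_variations(S, size=5, sigma=1.5):
    """Windowed total variation D and inherent variation L of image S (x direction).

    D sums the Gaussian-weighted absolute x-gradients inside each window;
    L takes the absolute value of the Gaussian-weighted signed sum, so
    gradients with opposite directions cancel each other.
    """
    gx = np.diff(S, axis=1, append=S[:, -1:])   # forward x-gradient
    g = gaussian_kernel(size, sigma)
    half = size // 2
    H, W = S.shape
    D = np.zeros_like(S, dtype=float)
    L = np.zeros_like(S, dtype=float)
    pad_abs = np.pad(np.abs(gx), half, mode="edge")
    pad_sgn = np.pad(gx, half, mode="edge")
    for i in range(H):
        for j in range(W):
            D[i, j] = (g * pad_abs[i:i + size, j:j + size]).sum()
            L[i, j] = abs((g * pad_sgn[i:i + size, j:j + size]).sum())
    return D, L
```

On a smooth ramp both measures agree, while on an oscillating texture D stays large and L collapses — exactly the property used to separate structure from texture.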
With the vigorous development of the manufacturing industry in China, the improvement of the production efficiency and the control of the product quality are more and more required, an artificial intelligence technology is introduced, and the method becomes an important link for transformation of the manufacturing industry.
For laser welding of lithium batteries, the weld seam appearance defects are detected with the method after welding, as shown in fig. 7. Firstly, a depth image of the laser-welded lithium battery is obtained by shooting with a 3D camera, and a 2D image of the laser-welded lithium battery is shot with a CCD camera. Secondly, the depth image is processed in sequence: conversion into a point cloud, downsampling, outlier removal, extraction of abnormal convex and concave points, and 3D defect quantization; the 2D image is processed in sequence: defect grabbing through the target detection technology and 2D defect quantization. Finally, the processing results of the depth image and the 2D image are combined for analysis to obtain the weld appearance defect detection result of the lithium battery, and the final result is output. The invention is based on computer vision technology and combines a point cloud algorithm with a deep learning algorithm; it can successfully identify weld seam appearance defects in laser welding of lithium batteries, and compared with the manual visual inspection method, the detection results show that it greatly improves the detection efficiency of weld appearance defects and can improve the product yield.
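The overall flow of fig. 7 can be sketched as one orchestration function; the stage functions supplied through `pipeline` stand for the processing steps described above and are assumptions made for illustration:

```python
def detect_weld_defects(depth_image, image_2d, pipeline):
    """End-to-end flow of fig. 7 (illustrative sketch).

    `pipeline` supplies the concrete processing functions; only the
    ordering of the stages is taken from the text.
    """
    # 3D branch: depth image -> point cloud -> downsample -> remove outliers
    # -> extract bump/pit points -> quantize to first defect data.
    cloud = pipeline["to_point_cloud"](depth_image)
    cloud = pipeline["downsample"](cloud)
    cloud = pipeline["remove_outliers"](cloud)
    bumps_pits = pipeline["extract_bumps_pits"](cloud)
    first = pipeline["quantize_3d"](bumps_pits)
    # 2D branch: detect planar defects with the trained model, then quantize.
    detections = pipeline["detect_2d"](image_2d)
    second = pipeline["quantize_2d"](detections)
    # Combined analysis outputs the final weld appearance defect result.
    return pipeline["combine"](first, second)
```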
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. A method for detecting defects in the appearance of a laser welded seam, comprising:
s100: acquiring a depth image and a 2D image of a welded product;
s200: converting the depth image into a 3D point cloud image, processing the 3D point cloud image, and identifying the processed 3D point cloud image to obtain first defect data; processing the 3D point cloud image includes:
s210: removing outliers in the 3D point cloud image;
s220: identifying convex points and concave points in the 3D point cloud image, wherein the method specifically comprises the following steps of: firstly, fitting a reference plane; secondly, traversing all points in the 3D point cloud image, calculating the distance between each point and a reference plane, and finally, if the distance is larger than a set distance threshold value, indicating that the point belongs to a convex point or a concave point; and fitting a reference plane by using a random sampling consistency fitting plane algorithm, wherein the process of fitting the reference plane is as follows:
s221: randomly selecting 3 non-collinear target points from the 3D point cloud image, solving a plane equation of a plane formed by the 3 target points, and obtaining a plane model;
s222: selecting other points in the 3D point cloud image, calculating the point-to-surface distance from the selected points to the plane model determined by the plane equation, comparing the point-to-surface distance with a preset minimum distance threshold, if the point-to-surface distance is smaller than the minimum distance threshold, the selected points are inner points, otherwise, the selected points are outer points, and recording the number of all the inner points in the plane model;
s223: repeating the steps S221 and S222, and recording the current plane model if the number of the inner points of the current plane model exceeds the number of the inner points of the previous plane model;
s224: repeating the steps S221-S223 for iteration until the iteration is finished, and obtaining a plane model with the largest inner points, namely a reference plane;
s230: the convex points or the concave points are quantized, specifically: clustering the convex points or the concave points into a set by adopting a clustering algorithm to obtain a point set image; calculating a first characteristic size of the point set image, and taking the first characteristic size as first defect data; by establishing a three-dimensional coordinate system, the clustering algorithm calculates Euclidean distance by adopting the following formula:
D_{i,j} = √( (X_i − X_j)² + (Y_i − Y_j)² + (Z_i − Z_j)² )

in the above formula, D_{i,j} indicates the Euclidean distance between the ith and jth convex or concave points; X_i, Y_i and Z_i represent the three-dimensional coordinate values of the ith convex point or concave point; X_j, Y_j and Z_j represent the three-dimensional coordinate values of the jth convex point or concave point;
if the Euclidean distance between two convex points or concave points is not greater than the preset Euclidean distance threshold value, clustering the two convex points or concave points into the same set, wherein all the convex points or concave points contained in the same set form a point set image;
s300: processing the 2D image, and recognizing the processed 2D image to obtain second defect data; processing the 2D image includes:
s310: grabbing the planar defect through a target detection technology comprises the following steps:
firstly, collecting a certain number of plane defect pictures, and then marking the positions of the plane defects to obtain a training set; secondly, training the training set by using a YOLO target detection network to obtain a detection model; finally, using the detection model for detecting the planar defect of the 2D image to obtain the planar defect in the undercut shape and the specific position of the planar defect on the 2D image;
s320: quantifying the planar defect, including:
carrying out confidence analysis on the plane defects to obtain confidence; performing minimum bounding box algorithm analysis on the planar defects, and screening out planar defects with second characteristic sizes larger than a set second size threshold; taking the second characteristic size and the confidence coefficient of the screened plane defect as second defect data;
s400: outputting weld appearance defect detection results by carrying out combination analysis on the first defect data and the second defect data; the binding assay was performed as follows:
s410: screening out point set images with the first characteristic size larger than a first size threshold value from the 3D point cloud images; taking the convex area or the concave area corresponding to the screened point set image as a first appearance defect;
s420: taking the plane defect corresponding area with the confidence coefficient larger than the confidence coefficient threshold value as a second appearance defect;
s430: for the planar defect with the confidence coefficient not larger than the confidence coefficient threshold value, searching whether a convex or concave point set image exists at a corresponding position in the 3D point cloud image according to the position of the planar defect on the 2D image; if not, eliminating the plane defect; if the point set image at the corresponding position in the 3D point cloud image is larger than a third size threshold, merging the point location areas of the plane defect and the point set image at the corresponding position, and taking the merged point location areas as third appearance defects.
2. The method for detecting defects in appearance of a laser welded seam according to claim 1, wherein in step S100, depth images are acquired by a 3D sensor; the 2D image is obtained by shooting with a CCD camera.
3. The method for detecting defects in appearance of a laser welded seam according to claim 1, wherein in step S200, converting the depth image into a 3D point cloud image comprises:
determining conversion coefficients of the depth image: adopting x_res to represent the distance actually corresponding to a pixel on the same row, y_res to represent the distance actually corresponding to a pixel on the same column, and z_res to represent the conversion ratio of the pixel value size on the depth image relative to the actual height;
implementing spatial point cloud conversion: traversing pixel points on the depth image, converting each pixel point into a space point in the 3D point cloud image through a conversion coefficient, wherein the conversion formula is as follows: x=a×x_res, y=b×y_res, z=value×z_res; where X, Y, Z represents the spatial point coordinates in the 3D point cloud image, a represents the row of the depth image, B represents the column of the depth image, and value represents the pixel value of the pixel point (a, B) on the depth image.
4. The method for detecting defects in appearance of a laser welded seam according to claim 1, wherein in step S200, after converting the depth image into a 3D point cloud image, downsampling is performed, that is, the number of point clouds is reduced by a voxel grid downsampling method, in which:
let the point cloud set before downsampling be A, the point cloud set after downsampling be B, and the process of downsampling by adopting the voxel grid is as follows: and dividing the space into a plurality of cubes with equal size, if one or more points exist in the cube by the point cloud set A, namely, assigning a point to the center of the cube at the corresponding position of the point cloud set B, thereby achieving the purpose of downsampling.
5. The method for detecting defects in appearance of a laser welded seam according to claim 1, wherein in step S320, the confidence analysis comprises:
determining a confidence interval of the planar defect;
analyzing probability that the planar defect grabbing reliability falls into a confidence interval by adopting normal distribution;
and determining the confidence level of the planar defect according to the probability that the planar defect grabbing reliability falls into the confidence interval.
6. The method for detecting defects in the appearance of a laser welded seam according to claim 1, characterized in that in step S300 the processing of the 2D image further comprises an image pre-processing, in particular:
carrying out graying treatment on the 2D image to obtain a corresponding 2D image after graying treatment;
based on each pixel point in the 2D image after the graying treatment, the product of the R channel pixel value and the R channel weighting value, the product of the G channel pixel value and the G channel weighting value and the product of the B channel pixel value and the B channel weighting value are added to obtain the gray value of the pixel point, a Gaussian filter algorithm is adopted to combine the inherent variation of an image window and the total variation of the image window for the pixel point of the 2D image after the graying treatment, a structure and texture decomposition regularizer is formed, and smoothing treatment is carried out to obtain the 2D image after the smoothing treatment;
performing image enhancement processing on the smoothed 2D image, namely performing supervised model training on the smoothed 2D image by adopting an image enhancement model to obtain an image enhanced 2D image;
and using the 2D image after image enhancement for planar defect grabbing.
CN202310083936.4A 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam Active CN115797354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310083936.4A CN115797354B (en) 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam

Publications (2)

Publication Number Publication Date
CN115797354A CN115797354A (en) 2023-03-14
CN115797354B true CN115797354B (en) 2023-05-30

Family

ID=85430482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310083936.4A Active CN115797354B (en) 2023-02-09 2023-02-09 Method for detecting appearance defects of laser welding seam

Country Status (1)

Country Link
CN (1) CN115797354B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309576B (en) * 2023-05-19 2023-09-08 厦门微亚智能科技股份有限公司 Lithium battery weld defect detection method, system and storage medium
CN116818780B (en) * 2023-05-26 2024-03-26 深圳市大德激光技术有限公司 Visual 2D and 3D detection system for button cell shell after laser welding
CN116703914B (en) * 2023-08-07 2023-12-22 浪潮云洲工业互联网有限公司 Welding defect detection method, equipment and medium based on generation type artificial intelligence
CN117078665B (en) * 2023-10-13 2024-04-09 东声(苏州)智能科技有限公司 Product surface defect detection method and device, storage medium and electronic equipment
CN117078666B (en) * 2023-10-13 2024-04-09 东声(苏州)智能科技有限公司 Two-dimensional and three-dimensional combined defect detection method, device, medium and equipment
CN118275450B (en) * 2024-05-30 2024-09-10 菲特(天津)检测技术有限公司 Weld joint detection method and device
CN118537340B (en) * 2024-07-26 2024-10-25 江苏航运职业技术学院 Cloud computing-based weld defect intelligent detection method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516660A (en) * 2021-09-15 2021-10-19 江苏中车数字科技有限公司 Visual positioning and defect detection method and device suitable for train
CN115601359A (en) * 2022-12-12 2023-01-13 广州超音速自动化科技股份有限公司(Cn) Welding seam detection method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907528B (en) * 2021-02-09 2021-11-09 南京航空航天大学 Point cloud-to-image-based composite material laying wire surface defect detection and identification method
CN112967243B (en) * 2021-02-26 2023-01-13 清华大学深圳国际研究生院 Deep learning chip packaging crack defect detection method based on YOLO
CN115147370A (en) * 2022-06-30 2022-10-04 章鱼博士智能技术(上海)有限公司 Battery top cover welding defect detection method and device, medium and electronic equipment
CN115009794B (en) * 2022-06-30 2023-04-18 佛山豪德数控机械有限公司 Full-automatic plate conveying production line and production control system thereof
CN115619738A (en) * 2022-10-18 2023-01-17 宁德思客琦智能装备有限公司 Detection method for module side seam welding after welding
CN115496746A (en) * 2022-10-20 2022-12-20 复旦大学 Method and system for detecting surface defects of plate based on fusion of image and point cloud data


Also Published As

Publication number Publication date
CN115797354A (en) 2023-03-14


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 361000 room 201a, Jinfeng Building, information photoelectric Park, Xiamen Torch hi tech Zone, Xiamen City, Fujian Province

Patentee after: Xiamen Weiya Intelligent Technology Co.,Ltd.

Address before: 361000 room 201a, Jinfeng Building, information photoelectric Park, Xiamen Torch hi tech Zone, Xiamen City, Fujian Province

Patentee before: XIAMEN WEIYA INTELLIGENCE TECHNOLOGY Co.,Ltd.