
CN113506243A - PCB welding defect detection method and device and storage medium - Google Patents

PCB welding defect detection method and device and storage medium

Info

Publication number
CN113506243A
Authority
CN
China
Prior art keywords
detection point
defect
sample
point image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110622967.3A
Other languages
Chinese (zh)
Other versions
CN113506243B (en)
Inventor
陆唯佳
刘鹏
李兵洋
刘创
沈崴
葛欢
金昱
张洁
王琦
曹雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United Automotive Electronic Systems Co Ltd
Original Assignee
United Automotive Electronic Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by United Automotive Electronic Systems Co Ltd filed Critical United Automotive Electronic Systems Co Ltd
Priority to CN202110622967.3A
Publication of CN113506243A
Application granted
Publication of CN113506243B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application discloses a PCB welding defect detection method, a PCB welding defect detection device and a storage medium, and relates to the technical field of automatic detection. The PCB welding defect detection method comprises the following steps: acquiring a detection point image; inputting the detection point image into a pixel classifier to obtain a pixel-level segmentation result corresponding to the detection point image, the pixel-level segmentation result being used for indicating the boundary between the pad and the filling abnormal area in the detection point image and the region where the pin is located, and the pixel classifier introducing prior knowledge in the construction process; and obtaining defect judgment information of the detection point, including a product identification result, according to the pixel-level segmentation result corresponding to the detection point image and a defect detection picture classifier. The method solves the problem that solder filling abnormality defects and pin-not-through-board defects are easily misreported in current automatic PCB detection, and achieves the effect of improving the defect detection accuracy.

Description

PCB welding defect detection method and device and storage medium
Technical Field
The present application relates to the technical field of automatic detection, and in particular to a PCB welding defect detection method and device and a storage medium.
Background
With the continuous development of science, technology and the economy, demand for electronic products keeps increasing, and PCB (printed circuit board) products are used ever more widely.
The manufacturing process of PCB products generally involves a reflow soldering process for chip components and a selective wave soldering process for discrete packaged components. After soldering, the welded PCB products need to be inspected to ensure product quality. Currently, AOI (Automated Optical Inspection) has replaced most manual inspection. Line-side AOI equipment on the production line detects possible welding defects, such as lifted leads, missing components, solder bridges, reversed components, and the like. Conventional AOI equipment suppliers (the supplier name appears as an inline image in the original publication) use traditional machine vision detection technology and rely largely on manually designed features as the detection basis.
In the automatic defect detection of selective soldering, two welding defects are difficult for a machine to interpret: one is solder filling abnormality (solder skip), and the other is the pin failing to come through the board (pin-not-through-board). The region of the image that the machine attends to when detecting these two defects is unstable, so the machine's detection results are easily misreported.
Disclosure of Invention
In order to solve the problems in the related art, the application provides a method and a device for detecting the welding defects of the PCB and a storage medium. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for detecting a PCB welding defect, where the method includes:
acquiring a detection point image;
inputting the detection point image into a pixel classifier to obtain a pixel-level segmentation result corresponding to the detection point image; the pixel-level segmentation result is used for indicating the pin, the boundary between the pad and the filling abnormal area, and the region where the background is located in the detection point image; the pixel classifier introduces prior knowledge in the construction process;
and obtaining defect judgment information of the detection points according to the pixel-level segmentation results corresponding to the detection point images and the defect detection picture classifier, wherein the defect judgment information comprises product identification results.
In the method, a detection point image is acquired and input into a pixel classifier to obtain the pixel-level segmentation result corresponding to the detection point image, and the defect judgment information of the detection point is obtained according to the pixel-level segmentation result corresponding to the detection point image and a defect detection picture classifier; the pixel classifier introduces prior knowledge in the construction process and stabilizes the attention region of the detection point image in the detection process, which solves the problem that solder filling abnormality defects and pin-not-through-board defects are easily misreported in current automatic PCB detection, and achieves the effect of improving the defect detection accuracy.
Optionally, before the detected point image is input to the pixel classifier, the method further includes:
acquiring a sample detection point image;
manually marking the sample detection point image to obtain a marked sample sheet corresponding to the sample detection point image;
generating a pixel level segmentation result corresponding to the sample detection point image according to the marked sample sheet corresponding to the sample detection point image;
constructing a semantic segmentation network model;
and training a semantic segmentation network model based on the sample detection point images and pixel level segmentation results corresponding to the sample detection point images to obtain a pixel classifier.
Optionally, when the defect type of the detection point corresponding to the sample detection point image is solder filling abnormality, the marked sample sheet includes a sample sheet marked with a pin, a sample sheet marked with a pad, and a sample sheet marked with a filling abnormality region;
and when the defect type of the detection point corresponding to the sample detection point image is pin-not-through-board, the marked sample sheets include a sample sheet marked with the pin.
Optionally, the semantic segmentation network model includes a coding module, an embedding module, and a decoding module, where the input of the coding module is a detection point image, the coding module has n coding outputs, and an embedding module is arranged between each coding output and the decoding module; n is a positive integer and 2^n is not larger than the size of the detection point image;
in the decoding module, the n coded outputs are respectively processed by convolution; the ith coded output is subjected to i convolutions, i = 1, 2, ..., n;
the output of the decoding module is a pixel level segmentation result;
each embedding module comprises 2 cascaded embedding blocks, and each embedding block sequentially comprises a pixel convolution layer, a batch normalization layer and a nonlinear activation layer.
Optionally, the processing performed by the encoding module on the detection point image includes Max Pooling processing, Conv processing, and Bottleneck processing.
Optionally, the processing performed by the decoding module is convolution processing, Conv processing, Softmax processing, and Argmax processing in sequence.
Optionally, the processing performed by the decoding module is convolution processing, Conv processing, and Softmax processing in sequence.
Optionally, obtaining the defect judgment result of the detection point according to the pixel-level segmentation result corresponding to the detection point image and the defect detection picture classifier includes:
extracting manual characteristics according to a pixel level segmentation result corresponding to the detection point image, wherein the manual characteristics at least comprise aperture size, imaging algorithm type, the number of pixels of a prediction pin area, the outer perimeter of a convex hull area of the prediction pin area, the roundness of the prediction pin area, the number of pixels of a prediction pad and a filling abnormal area boundary part, a divergence angle of the prediction pad and the filling abnormal area part, skewness of picture pixel gray scale distribution, a foreground ratio of a picture and pin normal extension characteristics;
inputting the manual characteristics into a defect detection picture classifier to obtain defect judgment information of detection points;
the pin normal expansion characteristic is the difference between the gray median in the expansion area and the maximum gray value in the pin area after expanding the pins by K pixels along the outer contour normal direction in the pixel level segmentation result.
Optionally, before inputting the manual features into the defect detection picture classifier and obtaining the defect judgment information of the detection points, the method further includes:
extracting sample manual characteristics according to pixel level segmentation results corresponding to the sample detection point images and meta information of the sample detection point images;
and designing a defect detection picture classifier according to the sample manual characteristics and the artificial experience.
Optionally, before inputting the manual features into the defect detection picture classifier and obtaining the defect judgment information of the detection points, the method further includes:
extracting sample manual characteristics according to pixel level segmentation results corresponding to the sample detection point images and meta information of the sample detection point images;
constructing a defect detection classification model;
and training a defect detection classification model based on the manual sample characteristics and the product identification result of the sample detection points to obtain a defect detection picture classifier.
Optionally, before obtaining the defect judgment information of the detection point according to the pixel-level segmentation result corresponding to the detection point image and the defect detection picture classifier, the method further includes:
acquiring a sample detection point image and a pixel level segmentation result corresponding to the sample detection point image;
constructing a defect detection classification model;
and training a defect detection classification model based on a pixel level segmentation result corresponding to the sample detection point image and a product identification result of the sample detection point to obtain a defect detection picture classifier.
In a second aspect, an embodiment of the present application provides a PCB welding defect detection apparatus, including:
the image acquisition module is used for acquiring a detection point image;
the segmentation module is used for inputting the detection point images into the pixel classifier to obtain pixel level segmentation results corresponding to the detection point images; the pixel level segmentation result is used for indicating the boundary of the pad and the filling abnormal area in the detection point image and the area where the pin is located; the pixel classifier introduces prior knowledge in the construction process;
and the judging module is used for obtaining the defect judging information of the detection points according to the pixel-level segmentation result corresponding to the detection point image and the defect detection image classifier, wherein the defect judging information comprises a product identification result.
In a third aspect, an embodiment of the present application provides a PCB welding defect detection apparatus, which includes a processor and a memory; the memory has stored therein a program that is loaded and executed by the processor to implement the method of the first aspect described above.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the method shown in the first aspect.
Drawings
In order to more clearly illustrate the detailed description of the present application or the technical solutions in the prior art, the drawings needed to be used in the detailed description of the present application or the prior art description will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1, FIG. 2 and FIG. 3 are sample pictures of detection point images;
FIG. 4 is a flowchart of a PCB welding defect detection method provided by an embodiment of the present application;
FIG. 5 is a sample PCB picture;
FIG. 6, FIG. 7 and FIG. 8 are labeled samples after expert labeling;
FIG. 9 is a sample PCB picture;
FIG. 10 is a labeled sample after expert labeling;
FIG. 11 is a sample PCB picture;
FIG. 12 is a visualization of a pixel-level segmentation result provided by an embodiment of the present application;
FIG. 13 is a sample PCB picture;
FIG. 14 is a visualization of a pixel-level segmentation result provided by an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a semantic segmentation network model provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a semantic segmentation network model provided by another embodiment of the present application;
FIG. 17 is a sample PCB picture;
FIG. 18 is a diagram illustrating defect judgment information according to an embodiment of the present application;
FIG. 19 is a block diagram of a PCB welding defect detection apparatus according to an embodiment of the present application;
FIG. 20 is a block diagram of a PCB welding defect detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present application. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; the connection can be mechanical connection or electrical connection; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
In addition, the technical features mentioned in the different embodiments of the present application described below may be combined with each other as long as they do not conflict with each other.
When PCB selective soldering defects are detected by combining automatic optical inspection with deep learning, the solder filling abnormality (solder skip) defect and the pin-not-through-board defect are affected by imaging factors (such as illumination): the attention region on the picture is unstable during the machine's judgment process, which easily causes misjudgment. In order to improve the accuracy of automatic optical inspection of PCBs, the PCB welding defect detection method provided by the embodiments of the present application introduces expert prior knowledge into the detection.
In a PCB selective soldering scenario, for the solder filling abnormality defect and the pin-not-through-board defect, the judgment approach of an expert during manual inspection is as follows:
1. Solder filling abnormality defect
In the detection of solder filling abnormality defects, it is relatively difficult to delineate the extent of the abnormal filling region. In actual sample pictures of such defects, many pins pass through the board normally, while the solder filling is insufficient only in some areas.
Fig. 1 shows a sample picture corresponding to a certain detection point on a PCB, where the defect type of the detection point is the filling abnormality defect. Referring to fig. 1, each pad appears as an arc-shaped bright region, and region 11 is the region where the solder filling is abnormal.
When inspecting manually, the expert first observes whether an obvious arc-shaped highlight region exists in the picture, then judges whether a crescent-shaped shadow region exists inside the arc-shaped highlight region, and if so, decides whether the detection point has a filling abnormality defect according to features such as the shape and size of the crescent-shaped shadow region.
2. Pin-not-through-board defect
In the detection of pin-not-through-board defects, it is relatively difficult to delineate the pin region.
Fig. 2 and fig. 3 are pictures actually acquired during PCB detection; fig. 2 is a sample picture corresponding to a certain detection point on a PCB, and fig. 3 is a sample picture corresponding to another detection point on a PCB. The detection point in fig. 2 really has a pin-not-through-board defect, whereas the detection point in fig. 3 does not, but it was misreported by the machine as having one. As shown in fig. 2, the pad is overfilled with solder and the pin does not come through the board, yet scattered highlight regions can be seen in the central area of the pad; as shown in fig. 3, the pin does come through the board, but the gray level of the pin pixels differs little from that of the surrounding solder, and the two are partially adhered.
When inspecting manually, the expert first tries to identify the pin part in the picture and then judges whether a pin-not-through-board defect exists.
Since an expert's manual judgments of the solder filling abnormality defect and the pin-not-through-board defect are more accurate, when PCB selective soldering defects are detected by automatic optical inspection combined with deep learning, the attention region of the machine on the picture during discrimination should be as close as possible to the region an expert attends to during manual inspection.
Referring to fig. 4, a flowchart of a method for detecting a soldering defect of a PCB according to an embodiment of the present application is shown, where the method at least includes the following steps:
in step 101, a detected point image is acquired.
On a PCB packaging production line, a detection device shoots a PCB picture according to a preset imaging algorithm, and the PCB picture is used for detecting whether a welding defect exists in a detection point.
Optionally, the PCB picture is shot by AOI equipment, the PCB picture shot by the AOI equipment is stored in a res file, and the PCB picture is obtained from the res file; the res file also includes meta information of the PCB picture.
The PCB picture comprises detection points and areas except the detection points.
Optionally, the detection point image is obtained from a PCB picture. One detection point image corresponds to one detection point on the PCB.
Optionally, the detection point image is a gray scale image.
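For illustration only, the following sketch shows one possible way to obtain a grayscale detection point image from a full PCB picture; the res-file format and the way detection-point coordinates are stored are proprietary, so the ROI coordinates here are a hypothetical input rather than the patent's actual interface.

```python
# Sketch only: the res file format is proprietary, so the ROI coordinates of a
# detection point are assumed to be available as a hypothetical (x, y, w, h) tuple.
import cv2

def crop_detection_point(pcb_image_path, roi):
    """Cut one grayscale detection point image out of a full PCB picture."""
    pcb = cv2.imread(pcb_image_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = roi
    return pcb[y:y + h, x:x + w]
```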
In step 102, the detected point image is input to a pixel classifier to obtain a pixel level segmentation result corresponding to the detected point image.
The pixel level segmentation result is used for indicating the boundary of the pad and the filling abnormal area in the detection point image and the area of the pin.
The pixel level segmentation result also includes a background.
The boundary between the pad and the filling abnormal area in the pixel level segmentation result and the pin are the attention area for machine judgment.
In the process of detecting PCB welding defects, the pixel-level segmentation result helps stabilize the picture attention region involved in the machine discrimination process, such as the hot-spot regions of a heat map.
Since the boundary of the pad region highlighted in the inspection point image and the filled abnormal region characterized by the shaded region is generally recognized relatively easily, the boundary of the pad and the boundary of the filled abnormal region are targeted for recognition.
The pixel-level segmentation result corresponding to each detection point image is a mask of the same size as the detection point image; for each pixel on the mask, the pixel classifier predicts a label value, and each label value corresponds to one class. For example: label value 1 corresponds to the pin, label value 2 corresponds to the boundary between the pad and the filling abnormal area, and label value 0 corresponds to the background.
The pixel classifier introduces the prior knowledge of an expert (technician) in the construction process; as a result, the pixel-level segmentation result that the pixel classifier produces for a detection point image is closer to the region the expert attends to during manual discrimination.
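As a hedged illustration of step 102, the sketch below assumes the trained pixel classifier is a PyTorch segmentation network producing per-pixel scores for the three classes described above; taking the argmax over classes yields the mask of label values.

```python
# Illustrative inference for step 102, assuming a trained PyTorch segmentation
# network with three output channels (0 = background, 1 = pin,
# 2 = boundary of pad / filling abnormal area).
import numpy as np
import torch

def segment_detection_point(model, gray_image):
    """Return a label mask with the same size as the detection point image."""
    x = torch.from_numpy(gray_image).float().div(255.0)   # H x W in [0, 1]
    x = x.unsqueeze(0).unsqueeze(0)                        # 1 x 1 x H x W
    with torch.no_grad():
        logits = model(x)                                  # 1 x 3 x H x W scores
    mask = logits.argmax(dim=1).squeeze(0)                 # H x W label values
    return mask.cpu().numpy().astype(np.uint8)
```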
In step 103, defect judgment information of the detection point is obtained according to the pixel level segmentation result corresponding to the detection point image and the defect detection image classifier, and the defect judgment information includes a product identification result.
And the defect detection picture classifier judges the defects based on the pixel-level segmentation result corresponding to the detection point image and outputs a product identification result, wherein the product identification result is a good product or a defective product.
Optionally, the defect discrimination information includes a product identification result and a pixel-level segmentation result.
In summary, in the PCB welding defect detection method provided by the embodiments of the present application, a PCB picture is obtained, a detection point image is obtained from the PCB picture, the detection point image is input into the pixel classifier to obtain the pixel-level segmentation result corresponding to the detection point image, and the defect judgment information of the detection point is obtained according to the pixel-level segmentation result corresponding to the detection point image and the defect detection picture classifier; the pixel classifier introduces prior knowledge in the construction process and stabilizes the attention region of the detection point image in the detection process, which solves the problem that solder filling abnormality defects and pin-not-through-board defects are easily misreported in current automatic PCB detection, and achieves the effect of improving the defect detection accuracy.
Before the defect detection is carried out on the PCB by utilizing the pixel classifier and the defect detection picture classifier, the PCB welding defect detection method further comprises the step of designing the pixel classifier and the defect detection picture classifier, wherein the pixel classifier is designed before the defect detection picture classifier, and the design parameters of the defect detection picture classifier are determined based on the output result of the pixel classifier.
Design of pixel classifier
In an alternative embodiment based on the embodiment shown in fig. 1, the pixel classifier needs to be designed before the detected point image is input into the pixel classifier, that is, before step 101, the method further includes the following steps:
in step 201, a sample detection point image is acquired.
Select several sample PCB pictures corresponding to detection points with the solder filling abnormality defect and several sample PCB pictures corresponding to detection points with the pin-not-through-board defect; record the sample PCB pictures corresponding to detection points with the solder filling abnormality defect as SP pictures, and record the sample PCB pictures corresponding to detection points with the pin-not-through-board defect as WP pictures.
And acquiring sample detection point images from the SP picture and the WP picture.
In step 202, the sample detection point image is manually marked to obtain a marked sample sheet corresponding to the sample detection point image.
An expert performs fine manual marking on the sample detection point images, marking the pin, the pad, and the filling abnormal area in the sample detection point images corresponding to SP pictures, and marking the pin in the sample detection point images corresponding to WP pictures.
When the defect type of the detection point corresponding to the sample detection point image is solder filling abnormality, the marked sample sheets include a sample sheet marking the pin, a sample sheet marking the pad, and a sample sheet marking the filling abnormal area; that is, for an SP picture, three marked sample sheets are obtained after expert marking, corresponding respectively to the pin, the pad, and the filling abnormal area.
When the defect type of the detection point corresponding to the sample detection point image is pin-not-through-board, the marked sample sheets include a sample sheet marking the pin; that is, for a WP picture, one marked sample sheet, corresponding to the pin, is obtained after expert marking.
In one example, fig. 5 is an SP picture, and the framed part 51 in the SP picture is a sample detection point image. The marked sample sheets after expert marking are shown in fig. 6, fig. 7 and fig. 8: fig. 6 is the marked sample sheet after marking the pin, fig. 7 is the marked sample sheet after marking the pad, and fig. 8 is the marked sample sheet after marking the filling abnormal area.
In another example, fig. 9 is a WP picture, and the framed part 61 in the WP picture is a sample detection point image. The marked sample sheet after expert marking is shown in fig. 10; fig. 10 is the marked sample sheet after marking the pin.
In step 203, a pixel-level segmentation result corresponding to the sample detection point image is generated according to the marked sample corresponding to the sample detection point image.
And for the detection point image corresponding to the SP picture, generating a pixel level segmentation result corresponding to the detection point image according to the three marked samples obtained after marking.
And for the detection point image corresponding to the WP picture, generating a pixel level segmentation result corresponding to the detection point image according to one marked sample obtained after marking.
For SP pictures, the labeling of the filling abnormal area depends heavily on the expert's subjective judgment, and the labeled regions contain considerable noise. In the detection point image, the boundary of the highlighted pad and the boundary of the abnormal filling region characterized by the shadow area are easily identified, and these boundaries have obvious semantic features; therefore, in order to reduce the influence of labeling noise on the pixel classifier, the boundary of the pad and the boundary of the abnormal filling region labeled by the expert are treated as one class of region.
When designing a pixel classifier, target areas needing to be learned of a network model corresponding to the pixel classifier are divided into two types, one type is the boundary of a bonding pad and the boundary of a filling abnormal area, and the other type is a pin.
Optionally, the boundary of the pad in the marked sample sheet and the boundary of the filled abnormal region are extracted by using an image processing method.
The semantic part in the expert annotation is extracted by using an image processing method, so that the influence of annotation noise on subsequent semantic segmentation network model training is reduced.
In one example, fig. 11 is an SP picture, fig. 12 is a visual picture of a pixel level segmentation result corresponding to the sample detection point image in fig. 11, a region 13 represents a boundary between a pad region and a filling abnormal region, and a region 14 represents a pin region; in another example, fig. 13 is an SP picture, fig. 14 is a visual picture of a pixel-level segmentation result corresponding to the sample detection point image in fig. 13, a region 15 indicates a boundary between a pad region and a pad filling abnormality region, and a region 16 indicates a pin region.
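The following is only one plausible image-processing routine for turning the expert's region annotations into the pixel-level segmentation target; the patent does not specify the exact operations, so the morphological gradient used here to keep a thin band around each annotated region is an assumption.

```python
# One plausible routine (not necessarily the patent's) for building the
# three-class segmentation target from the expert's marked sample sheets.
import cv2
import numpy as np

def region_boundary(region_mask, thickness=3):
    """region_mask: uint8 binary mask (non-zero inside the annotated region)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (thickness, thickness))
    # morphological gradient = dilation minus erosion, i.e. a thin contour band
    return cv2.morphologyEx(region_mask, cv2.MORPH_GRADIENT, kernel)

def build_segmentation_target(pin_mask, pad_mask, fill_mask):
    """Label map: 0 background, 1 pin, 2 pad / filling-abnormality boundary."""
    target = np.zeros(pin_mask.shape, dtype=np.uint8)
    boundary = cv2.bitwise_or(region_boundary(pad_mask), region_boundary(fill_mask))
    target[boundary > 0] = 2
    target[pin_mask > 0] = 1
    return target
```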
In step 204, a semantic segmentation network model is constructed.
It should be noted that step 204 may also be performed before step 201, and this is not limited in this embodiment of the application.
In one example, the semantic segmentation network model comprises an encoding module, an embedding module and a decoding module; the input of the coding module is a detection point image, the coding module has n coding outputs, and an embedding module is arranged between each coding output and the decoding module, as shown in fig. 15; n is a positive integer and 2^n is not larger than the size of the detection point image.
The size of the feature map of each coded output is different.
In the decoding module, the n coded outputs are respectively subjected to convolution processing, and each coded output yields a convolution result after its convolution processing. The ith coded output is subjected to i convolutions, i = 1, 2, ..., n.
And outputting the codes subjected to the convolution processing into feature maps with the same size.
The output of the decoding module is the pixel level segmentation result.
Each embedding module comprises 2 cascaded embedding blocks, and each embedding block sequentially comprises a pixel convolution layer, a batch normalization layer and a nonlinear activation layer.
Through the embedding module, the description capability of the semantic segmentation network model is increased.
The processing of the detection point image by the encoding module comprises Max Pooling processing, Conv processing and Bottleneck processing.
Optionally, the processing performed by the decoding module is convolution processing, Conv processing, Softmax processing, and Argmax processing in sequence.
Optionally, the processing performed by the decoding module is convolution processing, Conv processing, and Softmax processing in sequence.
In one example, taking n as 6, the processing performed by the decoding module is the convolution processing, Conv processing, Softmax processing, and Argmax processing in this order, and the semantic segmentation network model has a structure as shown in fig. 16, and the processing performed by the encoding module on the detected point image is: max Pooling processing, Conv processing, Max Pooling processing, Bottleneck processing, Max Pooling processing and Bottleneck processing, wherein the Conv processing outputs the 1 st coded output, each subsequent Bottleneck processing outputs one coded output, and the sizes of the convolution results #1 to #6 are the same.
In another example, the semantic segmentation network model is a Unet network model.
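A rough PyTorch sketch of the described encoder / embedding / decoder layout is given below. The channel widths, the plain Conv+MaxPool encoder stages (instead of Bottleneck blocks) and the single 1x1 head per branch (in place of the i-fold convolutions of the decoding module) are simplifications and assumptions, not the patent's exact architecture.

```python
# Rough sketch under assumptions stated above; not the patent's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingBlock(nn.Sequential):
    # pixel (1x1) convolution -> batch normalization -> non-linear activation
    def __init__(self, ch):
        super().__init__(nn.Conv2d(ch, ch, kernel_size=1),
                         nn.BatchNorm2d(ch),
                         nn.ReLU(inplace=True))

class SegmentationSketch(nn.Module):
    def __init__(self, num_classes=3, widths=(16, 32, 64)):
        super().__init__()
        self.encoders = nn.ModuleList()
        self.embeddings = nn.ModuleList()
        in_ch = 1
        for w in widths:                      # n = len(widths) encoded outputs
            self.encoders.append(nn.Sequential(
                nn.Conv2d(in_ch, w, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2)))
            # each embedding module = 2 cascaded embedding blocks
            self.embeddings.append(nn.Sequential(EmbeddingBlock(w), EmbeddingBlock(w)))
            in_ch = w
        self.heads = nn.ModuleList([nn.Conv2d(w, num_classes, 1) for w in widths])

    def forward(self, x):
        size, branches = x.shape[-2:], []
        for encoder, embedding, head in zip(self.encoders, self.embeddings, self.heads):
            x = encoder(x)
            # bring every branch back to the input size before fusing them
            branches.append(F.interpolate(head(embedding(x)), size=size,
                                          mode='bilinear', align_corners=False))
        return torch.stack(branches).sum(dim=0)   # class scores; argmax gives mask
```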
In step 205, a semantic segmentation network model is trained based on the sample detection point images and the pixel level segmentation results corresponding to the sample detection point images, so as to obtain a pixel classifier.
The input of the pixel classifier is a detection point image, and the output of the pixel classifier is a pixel-level segmentation result.
And training a semantic segmentation network model by using the sample detection point images and pixel level segmentation results corresponding to the sample detection point images as sample data, and determining parameters of the semantic segmentation network model to obtain the pixel classifier.
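A minimal training sketch is shown below, assuming a model like the one sketched earlier and a data loader of (detection point image, label mask) pairs; the optimizer, learning rate and epoch count are placeholders, not values from the patent.

```python
# Minimal training sketch with assumed hyper-parameters.
import torch
import torch.nn as nn

def train_pixel_classifier(model, loader, epochs=20, lr=1e-3, device="cpu"):
    """loader yields (image, mask): image B x 1 x H x W float, mask B x H x W long."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()          # per-pixel three-class loss
    for _ in range(epochs):
        for image, mask in loader:
            image, mask = image.to(device), mask.to(device)
            optimizer.zero_grad()
            loss = criterion(model(image), mask)
            loss.backward()
            optimizer.step()
    return model
```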
Design of defect detection picture classifier
The design of the defect detection picture classifier can be realized by the following 3 ways:
(1) Mode 1: extract manual features according to the pixel-level segmentation results corresponding to the sample detection point images and the meta information of the sample detection point images, and design the defect detection picture classifier according to the manual features and human experience. The specific steps are as follows:
in step 301, a sample manual feature is extracted according to a pixel level segmentation result corresponding to the sample detection point image and meta information of the sample detection point image.
And acquiring a sample detection point image, and acquiring a pixel level segmentation result corresponding to the sample detection point image by using a designed pixel classifier.
It should be noted that the sample detection point images used for designing the defect inspection picture classifier may be different from the sample detection point images used for designing the pixel classifier.
The manual features at least comprise aperture size, imaging algorithm category, pixel number of a prediction pin area, the outer perimeter of a convex hull area of the prediction pin area, roundness of the prediction pin area, pixel number of a prediction pad and a part filling an abnormal area boundary, divergence angle of the prediction pad and the part filling the abnormal area, skewness of picture pixel gray scale distribution, foreground ratio of the picture and pin normal extension features.
The predicted pin region refers to the region labeled as a pin in the pixel-level segmentation result.
The sample manual features are automatically extracted from the pixel-level segmentation results corresponding to the sample detection point images; the manual features corresponding to a sample detection point image are that sample's manual features.
And acquiring the meta information of the detection point image from the res file corresponding to the detection point.
Optionally, the aperture size is obtained from an original res file corresponding to the detected point image.
Optionally, the imaging algorithm class is obtained from an original res file corresponding to the detection point image.
Optionally, the number of pixels in the predicted lead area, the outer perimeter of the convex hull area of the predicted lead area, the roundness of the predicted lead area, the number of pixels at the boundary part of the predicted pad and the filled abnormal area, the divergence angle of the predicted pad and the filled abnormal area, and the skewness of the picture pixel gray scale distribution are obtained from the pixel level segmentation result.
Optionally, the foreground ratio of the picture is the foreground ratio of the picture after binarization by the Otsu method.
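The sketch below computes a few of the manual features listed above from a segmentation mask and the grayscale detection point image. The exact definitions used in the patent (for example how roundness or the divergence angle are computed) are not given, so the formulas here are common conventions rather than the patent's own.

```python
# Sketch of a few manual features; roundness etc. use common conventions that
# are assumptions here. OpenCV 4.x is assumed (findContours returns two values).
import cv2
import numpy as np
from scipy.stats import skew

def extract_manual_features(seg_mask, gray_image):
    pin = (seg_mask == 1).astype(np.uint8)
    boundary = (seg_mask == 2).astype(np.uint8)
    features = {"pin_pixels": int(pin.sum()), "boundary_pixels": int(boundary.sum())}
    contours, _ = cv2.findContours(pin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(largest)
        perimeter = cv2.arcLength(hull, True)
        features["hull_perimeter"] = float(perimeter)
        # one common roundness definition: 4*pi*area / perimeter^2
        features["pin_roundness"] = float(
            4 * np.pi * cv2.contourArea(largest) / (perimeter ** 2 + 1e-6))
    features["gray_skewness"] = float(skew(gray_image.ravel()))
    # foreground ratio of the picture after Otsu binarization
    _, foreground = cv2.threshold(gray_image, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    features["foreground_ratio"] = float((foreground > 0).mean())
    return features
```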
Optionally, a pin normal extension feature (ddmedia) is obtained from the pixel level segmentation result.
In the detection point image corresponding to a normally soldered detection point, there is a ring-shaped shadow region around the pin, formed because the solder wraps conically along the pin and is illuminated by the light source; based on this prior knowledge, the pin normal extension feature is introduced into the manual features.
The pin normal expansion characteristic is the difference between the gray level median value in the expansion area and the maximum gray level value in the pin area after expanding the pins by K pixels along the outer contour normal direction in the pixel level segmentation result. K is a positive integer.
The expansion area is defined by a boundary 1 and a boundary 2, the boundary 1 is the outline of the pin, and the boundary 2 is the position where the pin expands by K pixels along the normal direction of the outline.
If the solder normally wraps around the pin to form a shadow, the pin normal expansion feature will be a relatively small negative number. For the pin normal extension feature, a threshold is selected for determining whether the pin activation region is a real pin or a false activation.
The pin area is activated by mistake, which means that a certain area indicated as a pin on the pixel-level segmentation result corresponds to a position on the PCB which is not a real pin.
The influence of false activation of the pin area on defect judgment can be effectively prevented by setting the normal extension characteristic of the pin in combination with the prior knowledge of an expert.
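As an illustration of the pin normal extension feature, the sketch below approximates "expand by K pixels along the outer contour normal" with a morphological dilation of the pin region, which is an assumption rather than the patent's exact construction.

```python
# Sketch of the pin normal extension feature under the dilation approximation.
import cv2
import numpy as np

def pin_normal_extension_feature(seg_mask, gray_image, K=5):
    pin = (seg_mask == 1).astype(np.uint8)
    if pin.sum() == 0:
        return 0.0
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * K + 1, 2 * K + 1))
    ring = cv2.dilate(pin, kernel) - pin          # K-pixel band outside the pin
    ring_median = np.median(gray_image[ring > 0])
    pin_max = gray_image[pin > 0].max()
    # negative when the conical solder shadow around a real pin is dark
    return float(ring_median - pin_max)
```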
In step 302, a defect inspection picture classifier is designed based on sample manual features and manual experience.
And directly designing parameters of the defect detection image classifier according to sample manual characteristics and artificial experiences without using a deep learning algorithm to obtain the defect detection image classifier.
(2) Mode 2: extract manual features according to the pixel-level segmentation results corresponding to the sample detection point images and the meta information of the sample detection point images, and train a model based on a deep learning algorithm using the manual features and the product identification results of the sample detection points to obtain the defect detection picture classifier. The specific steps are as follows:
in step 401, a sample manual feature is extracted according to the pixel level segmentation result corresponding to the sample detection point image and the meta-information of the sample detection point image.
This step is explained in step 301 above and will not be described here.
In step 402, a defect detection classification model is constructed.
Optionally, a defect detection classification model is constructed based on a deep learning algorithm.
It should be noted that step 402 may also be performed before step 401, and this is not limited in this embodiment of the application.
In step 403, training a defect detection classification model based on the manual sample features and the product identification results of the sample detection points to obtain a defect detection picture classifier.
The input of the defect detection picture classifier is the manual features, and the output is a product identification result.
And the product identification result is a good product or a defective product.
The product identification result of the sample detection point is known, and the product identification result of the sample detection point is the real defect condition of the product.
And training a defect detection classification model by taking the manual sample characteristics and the product identification results of the sample detection points as sample data to obtain parameters of the defect detection classification model, and thus obtaining the defect detection picture classifier.
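An illustrative sketch of this training step follows; the patent does not name a specific classification model, so a scikit-learn random forest is used here purely as an example.

```python
# Illustrative mode-2 training step; the random forest is an assumed model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_picture_classifier(feature_rows, labels):
    """feature_rows: equal-length manual feature vectors of the sample detection
    points; labels: 1 for defective (NG), 0 for good (OK)."""
    X = np.asarray(feature_rows, dtype=np.float32)
    y = np.asarray(labels)
    classifier = RandomForestClassifier(n_estimators=100, random_state=0)
    classifier.fit(X, y)
    return classifier
```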
(3) Mode 3: train a model based on a deep learning algorithm, using the pixel-level segmentation results corresponding to the sample detection point images and the product identification results of the sample detection points, to obtain the defect detection picture classifier. The specific steps are as follows:
in step 501, a sample detection point image and a pixel level segmentation result corresponding to the sample detection point image are acquired.
And acquiring a sample detection point image.
It should be noted that the sample detection point images used for designing the defect inspection picture classifier may be different from the sample detection point images used for designing the pixel classifier.
And acquiring a pixel level segmentation result corresponding to the sample detection point image by using the designed pixel classifier.
In step 502, a defect detection classification model is constructed.
Optionally, the defect detection classification model is constructed based on a deep learning algorithm.
It should be noted that step 502 may also be performed before step 501, which is not limited in this embodiment of the application.
In step 503, a defect detection classification model is trained based on the pixel level segmentation result corresponding to the sample detection point image and the product identification result of the sample detection point, so as to obtain a defect detection picture classifier.
The input of the defect detection picture classifier is a pixel level segmentation result, and the output is a product identification result.
And the product identification result is a good product or a defective product.
And training a defect detection classification model by taking a pixel level segmentation result corresponding to the sample detection point image and a product identification result of the sample detection point as sample data to obtain parameters of the defect detection classification model, and thus obtaining the defect detection picture classifier.
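For mode 3, one possible (assumed) form of the defect detection classification model is a small convolutional network that takes the one-hot encoded pixel-level segmentation result and outputs the good/defective decision, as sketched below.

```python
# Assumed mode-3 model form: a small CNN over the one-hot segmentation result.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskClassifier(nn.Module):
    def __init__(self, num_labels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_labels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 2)              # two classes: good / defective

    def forward(self, mask):
        # mask: B x H x W long tensor of label values {0, 1, 2}
        x = F.one_hot(mask, num_classes=3).permute(0, 3, 1, 2).float()
        return self.head(self.features(x).flatten(1))
```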
In an alternative embodiment based on the embodiment shown in fig. 1, when step 103 is executed, that is, step "obtaining the defect judgment information of the detected point according to the pixel level segmentation result corresponding to the detected point image and the defect detection picture classifier, the defect judgment information including the product identification result", the defect detection picture classifier used is designed by using the above-mentioned mode 1, mode 2 or mode 3.
If the defect detection picture classifier is designed using mode 1 or mode 2, step 103 can be implemented as follows:
in step 1031, the manual features are extracted according to the pixel-level segmentation result corresponding to the detected point image and the meta-information corresponding to the detected point image.
The manual features at least comprise aperture size, imaging algorithm category, pixel number of a prediction pin area, the outer perimeter of a convex hull area of the prediction pin area, roundness of the prediction pin area, pixel number of a prediction pad and a part filling an abnormal area boundary, divergence angle of the prediction pad and the part filling the abnormal area, skewness of picture pixel gray scale distribution, foreground ratio of the picture and pin normal extension features.
Optionally, the manual features are automatically extracted by using a preset manual feature extraction algorithm.
This step is explained in step 301 above and will not be described here.
In step 1032, the manual features are input into the defect detection picture classifier to obtain defect judgment information of the detection points.
Inputting the manual characteristics into a defect detection picture classifier, outputting a product identification result of a detection point by the defect detection picture classifier, and generating defect judgment information according to the product identification result.
Optionally, the defect detection picture classifier outputs a pixel level segmentation result and a product identification result of the detection point; and the pixel level segmentation result of the detection point and the product identification result jointly form defect judgment information output by the defect detection picture classifier.
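Putting steps 1031 and 1032 together, a hypothetical end-to-end inference routine could look like the sketch below; it reuses the illustrative helpers defined earlier and assumes the meta information has already been encoded numerically.

```python
# Hypothetical end-to-end routine for steps 1031-1032, built on the sketches above.
def inspect_detection_point(pixel_classifier, picture_classifier, gray_image, meta):
    seg_mask = segment_detection_point(pixel_classifier, gray_image)
    features = extract_manual_features(seg_mask, gray_image)
    features["pin_normal_extension"] = pin_normal_extension_feature(seg_mask, gray_image)
    features.update(meta)                         # aperture size, imaging algorithm, ...
    vector = [features[name] for name in sorted(features)]   # fixed feature order
    verdict = picture_classifier.predict([vector])[0]         # 0 = OK, 1 = NG
    return {"segmentation": seg_mask, "result": "NG" if verdict else "OK"}
```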
In one example, the PCB picture is shown in fig. 17, and the framed part of the PCB picture is the detection point image 71. After detection by the pixel classifier and the defect detection picture classifier, the obtained defect judgment information of the detection point is shown in fig. 18; the defect judgment information includes a pixel-level segmentation result 72 and a product identification result, and the product identification result is NG (defective product). The defect judgment information further includes some manual features; for example, the value of the pin normal extension feature (denoted as dd in fig. 18) is -4.3, and the roundness of the predicted pin region (denoted as pin round in fig. 18) is 0.68.
In the PCB corresponding to the PCB picture shown in fig. 17, the detection point corresponding to the detection point image 71 has a pin-not-through-board defect. Because the solder surface is not smooth, a cord-like shadow appears in fig. 17, and part of the pin region is erroneously activated in the pixel-level segmentation result output by the pixel classifier. However, because the manual features "pin normal extension feature" and "roundness of the predicted pin region" are extracted, the defect detection picture classifier still outputs the correct product identification result (NG); that is, the machine detection does not misjudge.
If the defect detection picture classifier is designed using mode 3, step 103 can be implemented as follows:
in step 1031a, the pixel-level segmentation result corresponding to the detected point image is input to a defect detection picture classifier to obtain defect discrimination information of the detected point.
And inputting the pixel-level segmentation result corresponding to the detection point image into a defect detection picture classifier, and outputting defect judgment information of the detection point by the defect detection picture classifier.
Optionally, the defect detection picture classifier outputs a pixel level segmentation result of the detection point and a product identification result.
Fig. 19 is a block diagram of a PCB soldering defect detecting apparatus according to an embodiment of the present application. The device at least comprises the following modules: an image acquisition module 310, a segmentation module 320, and a discrimination module 330.
An image obtaining module 310, configured to obtain a detection point image;
the segmentation module 320 is configured to input the detected point image into the pixel classifier to obtain a pixel level segmentation result corresponding to the detected point image; the pixel level segmentation result is used for indicating the boundary of the pad and the filling abnormal area in the detection point image and the area where the pin is located; the pixel classifier introduces prior knowledge in the construction process;
the judging module 330 is configured to obtain defect judging information of the detection point according to the pixel-level segmentation result corresponding to the detection point image and the defect detection image classifier, where the defect judging information includes a product identification result.
Optionally, the apparatus further includes a model building module, configured to obtain a sample detection point image;
manually marking the sample detection point image to obtain a marked sample sheet corresponding to the sample detection point image;
generating a pixel level segmentation result corresponding to the sample detection point image according to the marked sample sheet corresponding to the sample detection point image;
constructing a semantic segmentation network model;
and training a semantic segmentation network model based on the sample detection point images and pixel level segmentation results corresponding to the sample detection point images to obtain a pixel classifier.
Optionally, when the defect type of the detection point corresponding to the sample detection point image is solder filling abnormality, the marked sample sheet includes a sample sheet marked with a pin, a sample sheet marked with a pad, and a sample sheet marked with a filling abnormality region;
and when the defect type of the detection point corresponding to the sample detection point image is pin-not-through-board, the marked sample sheets include a sample sheet marked with the pin.
Optionally, the semantic segmentation network model includes an encoding module, an embedding module, and a decoding module;
the input of the coding module is a detection point image, the coding module has n coding outputs, and an embedding module is arranged between each coding output and the decoding module; n is a positive integer and 2^n is not larger than the size of the detection point image;
in the decoding module, convolution processing is respectively carried out on the n coded outputs; the ith coded output is subjected to i convolutions, i = 1, 2, ..., n;
the output of the decoding module is a pixel level segmentation result;
each embedding module comprises 2 cascaded embedding blocks, and each embedding block sequentially comprises a pixel convolution layer, a batch normalization layer and a nonlinear activation layer.
Optionally, the processing performed by the encoding module on the detection point image includes Max Pooling processing, Conv processing, and Bottleneck processing.
Optionally, the processing performed by the decoding module is convolution processing, Conv processing, Softmax processing, and Argmax processing in sequence.
Optionally, the processing performed by the decoding module is convolution processing, Conv processing, and Softmax processing in sequence.
Optionally, the determining module is configured to extract manual features according to a pixel level segmentation result corresponding to the detection point image and meta information of the detection point image, where the manual features at least include an aperture size, an imaging algorithm type, a number of pixels in a prediction pin region, an outer perimeter of a convex hull region of the prediction pin region, a roundness of the prediction pin region, a number of pixels at a boundary portion of a prediction pad and a filling abnormal region, a divergence angle of the prediction pad and the filling abnormal region, a skewness of a picture pixel gray scale distribution, a foreground ratio of a picture, and a pin normal expansion feature;
inputting the manual characteristics into a defect detection picture classifier to obtain defect judgment information of detection points;
the pin normal expansion characteristic is the difference between the gray median in the expansion area and the maximum gray value in the pin area after expanding the pins by K pixels along the outer contour normal direction in the pixel level segmentation result.
Optionally, the apparatus further comprises a model building module;
the model building module is used for extracting sample manual characteristics according to pixel level segmentation results corresponding to the sample detection point images and meta information of the sample detection point images;
and designing a defect detection picture classifier according to the sample manual characteristics and the artificial experience.
Optionally, the apparatus further comprises a model building module;
the model building module is used for extracting sample manual characteristics according to pixel level segmentation results corresponding to the sample detection point images and meta information of the sample detection point images;
constructing a defect detection classification model;
and training a defect detection classification model based on the manual sample characteristics and the product identification result of the sample detection points to obtain a defect detection picture classifier.
Optionally, the apparatus further comprises a model building module;
the model building module is used for obtaining the sample detection point image and the pixel level segmentation result corresponding to the sample detection point image;
constructing a defect detection classification model;
and training a defect detection classification model based on a pixel level segmentation result corresponding to the sample detection point image and a product identification result of the sample detection point to obtain a defect detection picture classifier.
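In this variant the classifier can consume the pixel-level segmentation result directly, for example as a one-hot label map fed to a small convolutional network. The sketch below shows a single training step; the architecture, input size, and class counts are illustrative assumptions rather than part of the embodiment.

```python
# Sketch of a CNN classifier trained directly on pixel-level segmentation results (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

num_seg_classes, num_defect_classes = 4, 2
model = nn.Sequential(
    nn.Conv2d(num_seg_classes, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, num_defect_classes),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake batch: segmentation label maps one-hot encoded as channels, plus product labels.
seg = torch.randint(0, num_seg_classes, (8, 128, 128))
x = F.one_hot(seg, num_seg_classes).permute(0, 3, 1, 2).float()
y = torch.randint(0, num_defect_classes, (8,))

loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("one training step done, loss =", float(loss))
```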
It should be noted that, when the PCB welding defect detection device provided in the above embodiment performs PCB welding defect detection, the division into the above functional modules is only used as an example. In practical applications, the above functions may be distributed to different functional modules as needed; that is, the internal structure of the PCB welding defect detection device may be divided into different functional modules to complete all or part of the functions described above. In addition, the PCB welding defect detection device provided by the above embodiment belongs to the same concept as the PCB welding defect detection method embodiments; its specific implementation process is detailed in the method embodiments and is not described here again.
Referring to fig. 20, a block diagram of a PCB welding defect detection apparatus according to an exemplary embodiment of the present application is shown. The terminal in the present application may include one or more of the following components: a processor 410 and a memory 420.
The processor 410 may include one or more processing cores. The processor 410 connects various parts within the terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 420 and by calling data stored in the memory 420. Optionally, the processor 410 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 410 may integrate one of, or a combination of, a Central Processing Unit (CPU) and a modem. The CPU mainly handles the operating system, application programs, and the like; the modem is used to handle wireless communication. It is understood that the modem may not be integrated into the processor 410 and may instead be implemented by a separate chip.
Optionally, when executing the program instructions in the memory 420, the processor 410 implements the PCB welding defect detection method provided by the above method embodiments.
The memory 420 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 420 includes a non-transitory computer-readable medium. The memory 420 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 420 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function, instructions for implementing the various method embodiments described above, and the like; the data storage area may store data created according to the use of the terminal, and the like.
It should be added that the above terminal structure is only illustrative; in actual implementation, the terminal may include fewer or more components, such as a touch display screen, a communication component, a sensor component, and the like. This embodiment does not limit this.
It should be noted that the apparatus performing steps 101 to 103 may be the same as, or different from, the apparatus performing steps 201 to 205; likewise, the apparatus performing steps 101 to 103 may be the same as, or different from, the apparatus performing steps 301, 401 to 403, or 501 to 503. The embodiments of the present application do not limit this.
Optionally, the present application further provides a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the PCB welding defect detection method of the above method embodiment.
Optionally, the present application further provides a computer product, which includes a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the PCB welding defect detection method of the above-mentioned method embodiment.
It should be understood that the above embodiments are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications in different forms will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom are still within the protection scope of the present invention.

Claims (24)

1. A PCB welding defect detection method is characterized by comprising the following steps:
acquiring a detection point image;
inputting the detection point image into a pixel classifier to obtain a pixel level segmentation result corresponding to the detection point image; the pixel level segmentation result is used for indicating the boundary of the pad and the filling abnormal area in the detection point image and the area where the pin is located; the pixel classifier introduces prior knowledge in the construction process;
and obtaining defect judgment information of the detection point according to the pixel-level segmentation result corresponding to the detection point image and a defect detection picture classifier, wherein the defect judgment information comprises a product identification result.
2. The method of claim 1, wherein prior to inputting the detection point images into a pixel classifier, the method further comprises:
acquiring a sample detection point image;
manually marking the sample detection point image to obtain a marked sample sheet corresponding to the sample detection point image;
generating a pixel level segmentation result corresponding to the sample detection point image according to the marked sample sheet corresponding to the sample detection point image;
constructing a semantic segmentation network model;
training the semantic segmentation network model based on the sample detection point images and the pixel level segmentation results corresponding to the sample detection point images to obtain the pixel classifier.
3. The method according to claim 2, characterized in that when the defect type of the detection point corresponding to the sample detection point image is solder filling abnormality, the marked sample sheet comprises a sample sheet of a marked pin, a sample sheet of a marked pad, and a sample sheet of a marked filling abnormality area;
and when the defect type of the detection point corresponding to the sample detection point image is that the pin is not over-board, the marked sample sheet comprises a sample sheet of the marked pin.
4. The method of claim 2, wherein the semantic segmentation network model comprises an encoding module, an embedding module, and a decoding module;
the input of the coding module is a detection point image, the coding module is provided with n coding outputs, and an embedding module is arranged between each coding output and the decoding module; n is a positive integer, and 2^n is not larger than the size of the detection point image;
in the decoding module, the n coded outputs are respectively subjected to convolution processing; the i-th coded output is subjected to the i-th convolution, i = 1, 2, ..., n;
the output of the decoding module is a pixel level segmentation result;
each embedding module comprises 2 cascaded embedding blocks, and each embedding block sequentially comprises a pixel convolution layer, a batch normalization layer and a nonlinear activation layer.
5. The method of claim 4, wherein the processing of the checkpoint image by the encoding module comprises Max Pooling processing, Conv processing, Bottleneck processing.
6. The method according to claim 4 or 5, characterized in that the processing by the decoding module is in turn a convolution processing, a Conv processing, a Softmax processing, an Argmax processing.
7. The method according to claim 4 or 5, wherein the processing by the decoding module is in turn a convolution processing, a Conv processing, a Softmax processing.
8. The method according to any one of claims 1 to 7, wherein the obtaining the defect judgment information of the detection point according to the pixel level segmentation result corresponding to the detection point image and the defect detection picture classifier comprises:
extracting manual features according to pixel level segmentation results corresponding to the detection point images and meta information of the detection point images, wherein the manual features at least comprise aperture size, imaging algorithm type, the number of pixels of a prediction pin area, the outer perimeter of a convex hull area of the prediction pin area, the roundness of the prediction pin area, the number of pixels of a prediction pad and a filling abnormal area boundary part, the divergence angle of the prediction pad and the filling abnormal area part, the skewness of picture pixel gray scale distribution, the foreground ratio of a picture and pin normal expansion features;
inputting the manual characteristics into the defect detection picture classifier to obtain defect judgment information of the detection points;
the pin normal expansion characteristic is the difference between the gray level median value in the expansion area and the maximum gray level value in the pin area obtained after expanding the pin by K pixels along the outer contour normal direction in the pixel level segmentation result.
9. The method of claim 8, wherein before the inputting the manual features into the defect inspection picture classifier to obtain defect discriminating information of the inspection points, the method further comprises:
extracting sample manual characteristics according to a pixel level segmentation result corresponding to a sample detection point image and meta information of the sample detection point image;
and designing the defect detection picture classifier according to the sample manual characteristics and manual experience.
10. The method of claim 8, wherein before the inputting the manual features into the defect inspection picture classifier to obtain defect discriminating information of the inspection points, the method further comprises:
extracting sample manual characteristics according to a pixel level segmentation result corresponding to a sample detection point image and meta information of the sample detection point image;
constructing a defect detection classification model;
and training the defect detection classification model based on the sample manual characteristics and the product identification result of the sample detection point to obtain the defect detection picture classifier.
11. The method according to any one of claims 1 to 7, wherein before obtaining the defect discrimination information of the detection points according to the pixel-level segmentation result corresponding to the detection point image and the defect detection picture classifier, the method further comprises:
acquiring a sample detection point image and a pixel level segmentation result corresponding to the sample detection point image;
constructing a defect detection classification model;
and training the defect detection classification model based on the pixel level segmentation result corresponding to the sample detection point image and the product identification result of the sample detection point to obtain the defect detection picture classifier.
12. A PCB welding defect detection device, characterized in that the device includes:
the image acquisition module is used for acquiring a detection point image;
the segmentation module is used for inputting the detection point images into a pixel classifier to obtain pixel level segmentation results corresponding to the detection point images; the pixel level segmentation result is used for indicating the areas where the bonding pads, the filling abnormal areas and the boundaries of the pins are located in the detection point image; the pixel classifier introduces prior knowledge in the construction process;
and the judging module is used for obtaining the defect judging information of the detection point according to the pixel-level segmentation result corresponding to the detection point image and the defect detection image classifier, wherein the defect judging information comprises a product identification result.
13. The apparatus of claim 12, further comprising a model building module for obtaining a sample detection point image;
manually marking the sample detection point image to obtain a marked sample sheet corresponding to the sample detection point image;
generating a pixel level segmentation result corresponding to the sample detection point image according to the marked sample sheet corresponding to the sample detection point image;
constructing a semantic segmentation network model;
training the semantic segmentation network model based on the sample detection point images and the pixel level segmentation results corresponding to the sample detection point images to obtain the pixel classifier.
14. The apparatus according to claim 13, wherein when the defect type of the detection point corresponding to the sample detection point image is solder filling abnormality, the marked proof includes a proof marking pin, a proof marking pad, and a proof marking filling abnormality area;
and when the defect type of the detection point corresponding to the sample detection point image is that the pin is not over-board, the marked sample sheet comprises a sample sheet of the marked pin.
15. The apparatus of claim 13, wherein the semantic segmentation network model comprises an encoding module, an embedding module, and a decoding module;
the input of the coding module is a detection point image, the coding module is provided with n coding outputs, and an embedding module is arranged between each coding output and the decoding module; n is a positive integer, and 2^n is not larger than the size of the detection point image;
in the decoding module, the n coded outputs are respectively subjected to convolution processing; the i-th coded output is subjected to the i-th convolution, i = 1, 2, ..., n;
the output of the decoding module is a pixel level segmentation result;
each embedding module comprises 2 cascaded embedding blocks, and each embedding block sequentially comprises a pixel convolution layer, a batch normalization layer and a nonlinear activation layer.
16. The apparatus of claim 15, wherein the processing of the checkpoint image by the encoding module comprises Max Pooling processing, Conv processing, Bottleneck processing.
17. The apparatus according to claim 15 or 16, wherein the processing by the decoding module is in turn a convolution processing, a Conv processing, a Softmax processing, an Argmax processing.
18. The apparatus according to claim 15 or 16, wherein the processing by the decoding module is in turn a convolution processing, a Conv processing, a Softmax processing.
19. The apparatus according to any one of claims 12 to 18, wherein the determining module is configured to extract manual features according to a pixel-level segmentation result corresponding to the detected point image and meta-information of the detected point image, where the manual features at least include an aperture size, a category of an imaging algorithm, a number of pixels in a predicted pin region, an outer perimeter of a convex hull region of the predicted pin region, a roundness of the predicted pin region, a number of pixels at a boundary portion of a predicted pad and a filled abnormal region, a divergence angle of the predicted pad and the filled abnormal region, a skewness of a picture pixel gray scale distribution, a foreground ratio of the picture, and a pin normal extension feature;
inputting the manual characteristics into the defect detection picture classifier to obtain defect judgment information of the detection points;
the pin normal expansion characteristic is the difference between the gray level median value in the expansion area and the maximum gray level value in the pin area obtained after expanding the pin by K pixels along the outer contour normal direction in the pixel level segmentation result.
20. The apparatus of claim 19, further comprising a model building module;
the model construction module is used for extracting sample manual characteristics according to pixel level segmentation results corresponding to the sample detection point images and the meta information of the sample detection point images;
and designing the defect detection picture classifier according to the sample manual characteristics and manual experience.
21. The apparatus of claim 19, further comprising a model building module;
the model construction module is used for extracting sample manual characteristics according to pixel level segmentation results corresponding to the sample detection point images and the meta information of the sample detection point images;
constructing a defect detection classification model;
and training the defect detection classification model based on the sample manual characteristics and the product identification result of the sample detection point to obtain the defect detection picture classifier.
22. The apparatus of any one of claims 13 to 19, further comprising a model building module;
the model building module is used for obtaining a sample detection point image and a pixel level segmentation result corresponding to the sample detection point image;
constructing a defect detection classification model;
and training the defect detection classification model based on the pixel level segmentation result corresponding to the sample detection point image and the product identification result of the sample detection point to obtain the defect detection picture classifier.
23. A PCB welding defect detection device is characterized by comprising a processor and a memory; stored in the memory is a program that is loaded and executed by the processor to implement the method according to any of claims 1 to 11.
24. A computer-readable storage medium, in which a program is stored, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 11.
CN202110622967.3A 2021-06-04 2021-06-04 PCB welding defect detection method, device and storage medium Active CN113506243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110622967.3A CN113506243B (en) 2021-06-04 2021-06-04 PCB welding defect detection method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113506243A true CN113506243A (en) 2021-10-15
CN113506243B CN113506243B (en) 2024-09-06

Family

ID=78009027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110622967.3A Active CN113506243B (en) 2021-06-04 2021-06-04 PCB welding defect detection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113506243B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1869667A (en) * 2006-06-08 2006-11-29 李贤伟 Profile analysing method for investigating defect of printed circuit board
WO2014168694A1 (en) * 2013-04-10 2014-10-16 Teradyne, Inc. Electronic assembly test system
US20200160083A1 (en) * 2018-11-15 2020-05-21 International Business Machines Corporation Efficient defect localization/segmentation for surface defect inspection
CN109615609A (en) * 2018-11-15 2019-04-12 北京航天自动控制研究所 A kind of solder joint flaw detection method based on deep learning
CN109886950A (en) * 2019-02-22 2019-06-14 北京百度网讯科技有限公司 The defect inspection method and device of circuit board
WO2020191391A2 (en) * 2019-03-21 2020-09-24 Illumina, Inc. Artificial intelligence-based sequencing
CN109886964A (en) * 2019-03-29 2019-06-14 北京百度网讯科技有限公司 Circuit board defect detection method, device and equipment
CN110610199A (en) * 2019-08-22 2019-12-24 南京理工大学 Automatic optical detection method for printed circuit board resistance element welding spots based on svm and xgboost
CN110636715A (en) * 2019-08-27 2019-12-31 杭州电子科技大学 Self-learning-based automatic welding and defect detection method
CN110992317A (en) * 2019-11-19 2020-04-10 佛山市南海区广工大数控装备协同创新研究院 PCB defect detection method based on semantic segmentation
CN111681232A (en) * 2020-06-10 2020-09-18 厦门理工学院 Industrial welding image defect detection method based on semantic segmentation
CN111784673A (en) * 2020-06-30 2020-10-16 创新奇智(上海)科技有限公司 Defect detection model training and defect detection method, device and storage medium
CN111951225A (en) * 2020-07-20 2020-11-17 南京南瑞继保电气有限公司 PCB welding abnormity detection method and device and storage medium
CN112288724A (en) * 2020-10-30 2021-01-29 北京市商汤科技开发有限公司 Defect detection method and device, electronic equipment and storage medium
CN112508846A (en) * 2020-10-30 2021-03-16 北京市商汤科技开发有限公司 Defect detection method and device, electronic equipment and storage medium
CN112730460A (en) * 2020-12-08 2021-04-30 北京航天云路有限公司 Welding defect and intensive rosin joint detection technology for communication IC chip
CN112598627A (en) * 2020-12-10 2021-04-02 广东省大湾区集成电路与系统应用研究院 Method, system, electronic device and medium for detecting image defects
CN112614105A (en) * 2020-12-23 2021-04-06 东华大学 Depth network-based 3D point cloud welding spot defect detection method
CN112651966A (en) * 2021-01-18 2021-04-13 厦门大学嘉庚学院 Printed circuit board micro-defect detection method based on ACYOLOV4_ CSP

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAIHUA ZHANG et al.: "Solder Joint Defect Detection in the Connectors Using Improved Faster-RCNN Algorithm", APPLIED SCIENCES, pages 1 - 15 *
ZHANG Xian; SHI Canghong; LI Xiaojie: "Research on Visual Feature Attribution Network Based on Feature Adversarial Pairs", Journal of Computer Research and Development, no. 03 *
ZHAO Ruixiang: "Research on Bubble Defect Detection Technology for BGA Solder Balls", China Master's Theses Full-text Database, Information Science and Technology, pages 135 - 431 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299036A (en) * 2021-12-30 2022-04-08 中国电信股份有限公司 Electronic component detection method and device, storage medium and electronic equipment
CN115713499A (en) * 2022-11-08 2023-02-24 哈尔滨工业大学 Quality detection method for surface mounted components
CN115713499B (en) * 2022-11-08 2023-07-14 哈尔滨工业大学 Quality detection method for mounted patch element
CN116423003A (en) * 2023-06-13 2023-07-14 苏州松德激光科技有限公司 Tin soldering intelligent evaluation method and system based on data mining
CN116423003B (en) * 2023-06-13 2023-10-31 苏州松德激光科技有限公司 Tin soldering intelligent evaluation method and system based on data mining

Also Published As

Publication number Publication date
CN113506243B (en) 2024-09-06

Similar Documents

Publication Publication Date Title
CN113506243A (en) PCB welding defect detection method and device and storage medium
CN110021005B (en) Method and device for screening defects of circuit board and computer readable recording medium
CN110880175B (en) Welding spot defect detection method, system and equipment
CN105510348A (en) Flaw detection method and device of printed circuit board and detection equipment
CN111583216A (en) Defect detection method for PCBA
CN115294114A (en) Quality detection method based on ECU circuit welding
CN113591965A (en) AOI detection image processing method and device, storage medium and computer equipment
CN114266743B (en) FPC defect detection method, system and storage medium based on HSV and CNN
CN117152165B (en) Photosensitive chip defect detection method and device, storage medium and electronic equipment
CN112115948A (en) Chip surface character recognition method based on deep learning
CN115775246A (en) Method for detecting defects of PCB (printed circuit board) components
CN116168218A (en) Circuit board fault diagnosis method based on image recognition technology
CN115100166A (en) Welding spot defect detection method and device
WO2014103617A1 (en) Alignment device, defect inspection device, alignment method, and control program
US9607234B2 (en) Method and apparatus for processing images, and storage medium storing the program
CN113724218B (en) Method, device and storage medium for identifying chip welding defect by image
CN114998217A (en) Method for determining defect grade of glass substrate, computer device and storage medium
JP3589424B1 (en) Board inspection equipment
EP2063259A1 (en) Method for inspecting mounting status of electronic component
CN116091503B (en) Method, device, equipment and medium for discriminating panel foreign matter defects
KR20220026439A (en) Apparatus and method for checking whether a part is inserted in PCB
CN116993654B (en) Camera module defect detection method, device, equipment, storage medium and product
JP2003086919A (en) Pattern inspection device
CN111935480B (en) Detection method for image acquisition device and related device
KR101383827B1 (en) System and method for automatic extraction of soldering regions in pcb

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant