
CN116977257A - Defect detection method, device, electronic apparatus, storage medium, and program product - Google Patents

Defect detection method, device, electronic apparatus, storage medium, and program product

Info

Publication number
CN116977257A
Authority
CN
China
Prior art keywords
defect
image
detected
positioning
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310245929.XA
Other languages
Chinese (zh)
Inventor
张博深
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310245929.XA
Publication of CN116977257A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/0006: Industrial image inspection using a design-rule based approach
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/60: Analysis of geometric attributes
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a defect detection method, device, electronic apparatus, storage medium, and program product, in which defect detection of an object to be detected is divided into two stages. In the first stage, coarse defect positioning is performed on the acquired image of the object to be detected: the approximate positions where defects may exist are identified, and a reference defect region where the object image is defective is determined. In the second stage, the object image is cropped at multiple different scales according to the reference defect region, yielding a plurality of sub-images to be detected at different scales; precise defect positioning and classification is then performed on these cropped sub-images, identifying the positions and categories of possible defects, and determining the target defect region where the object to be detected is defective and the corresponding target defect category. The application can improve the accuracy of defect detection.

Description

Defect detection method, device, electronic apparatus, storage medium, and program product
Technical Field
The present application relates to the field of defect detection technologies, and in particular, to a defect detection method, device, electronic apparatus, storage medium, and program product.
Background
To ensure the functional and cosmetic integrity of a product, the product is typically inspected for defects during production. For example, display panels, as an important embodiment of device intelligence, are widely used in many electronic devices such as mobile phones, tablet computers, televisions, and in-vehicle computers. To ensure that a display panel can display normally, defect detection must be performed on it.
In the related art, manual visual inspection is generally used to detect possible defects in an article to be inspected, such as a display panel. However, factors such as subjective human judgment affect the accuracy of the defect detection results.
Disclosure of Invention
The embodiments of the application provide a defect detection method, a defect detection device, an electronic device, a computer-readable storage medium, and a computer program product, which can improve the accuracy of defect detection of a display panel.
In a first aspect, the present application provides a defect detection method, including:
acquiring an article image of an article to be detected;
performing defect positioning according to the object image, and determining a reference defect region where the object image is defective;
performing multi-scale cropping on the object image according to the reference defect region, to obtain a plurality of sub-images to be detected at different scales;
and performing defect positioning and classification according to the plurality of sub-images to be detected, and determining a target defect region where the object to be detected is defective and a corresponding target defect category.
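The multi-scale cropping step above can be sketched as follows. This is an illustrative sketch only: the claims fix neither the scale factors nor the image representation, so the `multi_scale_crop` name, the (cx, cy, w, h) box format, and the list-of-lists image are assumptions.

```python
def multi_scale_crop(image, box, scales=(1.0, 1.5, 2.0)):
    """Crop the reference defect region `box` = (cx, cy, w, h) from `image`
    (a list of pixel rows) at several scales, clamping to the image bounds."""
    cx, cy, w, h = box
    height, width = len(image), len(image[0])
    crops = []
    for s in scales:
        half_w, half_h = w * s / 2, h * s / 2
        x0, y0 = max(0, int(cx - half_w)), max(0, int(cy - half_h))
        x1, y1 = min(width, int(cx + half_w)), min(height, int(cy + half_h))
        # Each crop covers the same reference region with progressively wider context.
        crops.append([row[x0:x1] for row in image[y0:y1]])
    return crops
```

Each returned sub-image shows the same reference region at a different scale, which is what the positioning classification step consumes.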
In a second aspect, the present application provides a defect detection apparatus, including:
an image acquisition module, configured to acquire an object image of an object to be detected;
a defect positioning module, configured to perform defect positioning according to the object image and determine a reference defect region where the object image is defective;
an image cropping module, configured to perform multi-scale cropping on the object image according to the reference defect region, to obtain a plurality of sub-images to be detected at different scales;
and a positioning classification module, configured to perform defect positioning and classification according to the plurality of sub-images to be detected, and determine a target defect region where the object to be detected is defective and a corresponding target defect category.
In an alternative embodiment, the defect positioning module is configured to input the object image into a defect positioning model for defect positioning, and determine a reference defect region where the object image is defective.
In an alternative embodiment, the defect positioning module is configured to input the object image into the defect positioning model for defect positioning, to obtain candidate defect regions of the object image output by the defect positioning model and corresponding defect confidences, where a defect confidence indicates the degree of reliability that a defect exists in the corresponding candidate defect region; and to determine, according to the candidate defect regions and the corresponding defect confidences, a reference defect region where the object image is defective.
In an alternative embodiment, the defect positioning module is configured to determine a defect confidence threshold according to the defect confidences corresponding to the candidate defect regions, and to determine the candidate defect regions whose defect confidence is greater than or equal to the threshold as reference defect regions.
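A minimal sketch of this thresholding embodiment. The patent does not specify how the threshold is derived from the candidate confidences, so the mean used below, like the `select_reference_regions` name, is an illustrative assumption.

```python
def select_reference_regions(candidates):
    """Keep candidate regions whose confidence meets an adaptive threshold.

    `candidates` is a list of (x, y, w, h, conf) tuples. The threshold here is
    the mean confidence over all candidates (an assumed choice); regions with
    conf >= threshold become reference defect regions.
    """
    if not candidates:
        return []
    threshold = sum(c[4] for c in candidates) / len(candidates)
    return [c for c in candidates if c[4] >= threshold]
```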
In an alternative embodiment, the defect positioning module is used for performing downsampling processing on the object image to obtain a downsampled image of the object image; and inputting the downsampled image into a defect positioning model to perform defect positioning, and determining a reference defect area of the object image with defects.
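The downsampling in this embodiment could be sketched as follows. The patent does not specify a downsampling method, so the simple stride-based (nearest-neighbour) sampling below is an illustrative assumption.

```python
def downsample(image, factor=2):
    """Stride-based downsampling of a 2-D image (list of pixel rows):
    keep every `factor`-th row and every `factor`-th column."""
    return [row[::factor] for row in image[::factor]]
```

Feeding the smaller downsampled image to the defect positioning model reduces the cost of the coarse first stage, which only needs the approximate defect position.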
In an alternative embodiment, the positioning classification module is configured to input the plurality of sub-images to be detected into a trained defect positioning classification model for defect positioning and classification, and determine a target defect region where the object to be detected is defective and a corresponding target defect category, where the defect positioning classification model and the defect positioning model are obtained through joint training.
In an optional embodiment, the defect positioning and classifying model includes a feature extraction network, a feature fusion network and a positioning and classifying network, wherein the feature extraction network includes a plurality of feature extraction branches corresponding to different scales, and the positioning and classifying module is used for respectively inputting a plurality of sub-images to be detected into the feature extraction branches corresponding to the scales to perform feature extraction so as to obtain image features of the plurality of sub-images to be detected; inputting the image features of the multiple sub-images to be detected into a feature fusion network to perform feature fusion, so as to obtain fusion features; and inputting the fusion characteristics into a positioning classification network to perform defect positioning classification, and determining a target defect area and a corresponding target defect category of the object to be detected with defects.
In an alternative embodiment, the feature fusion network comprises a weighting operation module and a splicing module, wherein the weighting operation module comprises a plurality of weighting operation branches corresponding to different scales, and the positioning classification module is used for respectively inputting the image features of the plurality of sub-images to be detected into the weighting operation branches corresponding to the scales for weighting operation to obtain the weighting features of the plurality of sub-images to be detected; and inputting the weighted features of the multiple sub-images to be detected into a splicing module for feature splicing to obtain fusion features.
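The weighting-then-splicing fusion described in this embodiment can be sketched as follows. The per-scale scalar weights and the `fuse_features` name are illustrative assumptions; the patent states only that each scale's features pass through a weighting branch before the splicing (concatenation) module.

```python
import numpy as np

def fuse_features(features, weights):
    """Weight each scale's feature vector by its weighting branch, then splice.

    `features`: one 1-D feature vector per scale (from the feature extraction
    branches). `weights`: one scalar per scale, standing in for the weighting
    operation branches. Concatenation models the splicing module.
    """
    weighted = [w * f for w, f in zip(weights, features)]
    return np.concatenate(weighted)
```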
In an alternative embodiment, the defect detection device further comprises a model training module, which is used for acquiring a positive sample image with defects, and a defect type label and a defect position label thereof; acquiring a negative sample image without defects; training a defect positioning classification model by adopting a gradient descent mode and a reinforcement learning mode according to the negative sample image, the positive sample image, the defect type label and the defect position label thereof; and training the defect positioning model in a gradient descent mode according to the negative sample image, the positive sample image and the defect position labels thereof.
In an alternative embodiment, the model training module is configured to update weight parameters of the feature extraction network and the location classification network in the defect location classification model in a gradient descent manner according to the negative sample image, the positive sample image, the defect category label and the defect location label thereof, and update weight parameters of the feature extraction branch in the feature fusion network in a reinforcement learning manner.
In an alternative embodiment, the model training module is configured to obtain a first training loss of the defect localization model and obtain a second training loss of the defect localization classification model; and fusing the first training loss and the second training loss to obtain fused training loss; determining reinforcement learning rewards according to the fusion training losses, performing reinforcement learning according to the rewards, and updating weight parameters of feature extraction branches in a feature fusion network.
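The reward construction in this embodiment can be sketched as follows. The weighted-sum fusion coefficient and the negative-loss reward mapping are illustrative assumptions; the patent states only that the two training losses are fused and that the reinforcement-learning reward is determined from the fused loss.

```python
def fused_loss_reward(loss1, loss2, alpha=0.5):
    """Fuse the defect positioning model's loss (`loss1`) and the defect
    positioning classification model's loss (`loss2`), then map the fused
    loss to a reward: lower fused loss gives a higher reward.  The weighted
    sum and the sign flip are assumed forms, not specified by the patent."""
    fused = alpha * loss1 + (1.0 - alpha) * loss2
    return -fused
```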
In a third aspect, the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program in the memory, to implement the steps in the defect detection method provided by the present application.
In a fourth aspect, the present application provides a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor for implementing the steps in the defect detection method provided by the present application.
In a fifth aspect, the present application provides a computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps in the defect detection method provided by the present application.
In the method, defect detection of an object to be detected is divided into two stages. In the first stage, coarse defect positioning is performed on the acquired image of the object to be detected: the approximate positions where defects may exist are identified, and a reference defect region where the object image is defective is determined. In the second stage, the object image is cropped at multiple different scales according to the reference defect region to obtain a plurality of sub-images to be detected at different scales; precise defect positioning and classification is then performed on these cropped sub-images, and the target defect region where the object to be detected is defective and the corresponding target defect category are determined. On the one hand, replacing traditional manual visual inspection with image recognition avoids subjective human judgment and improves the accuracy of defect detection results. On the other hand, by dividing defect detection into two stages, first locating the reference defect region and then cropping sub-images at different scales around it, features at more appropriate scales can be provided for defect positioning and classification, while the influence of image content outside the reference defect region is avoided as much as possible, further improving the accuracy of defect detection results.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1a is a schematic diagram of a defect detection system according to an embodiment of the present application;
FIG. 1b is a schematic flow chart of a defect detection method according to an embodiment of the present application;
FIG. 1c is a schematic diagram of defect localization performed by a defect localization model according to an embodiment of the present application;
FIG. 1d is a schematic diagram of a multi-scale cut in an embodiment of the present application;
FIG. 1e is a schematic diagram of a defect localization classification model according to an embodiment of the present application;
FIG. 1f is a schematic diagram of a feature fusion network according to an embodiment of the present application;
FIG. 1g is a schematic diagram of reinforcement learning in an embodiment of the application;
FIG. 2 is a schematic diagram of another process of the defect detection method according to the embodiment of the present application;
FIG. 3 is a schematic diagram of two-stage defect detection in an embodiment of the present application;
FIG. 4 is a schematic diagram of a defect detecting apparatus according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
It should be noted that the principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on illustrative embodiments of the application and should not be taken as limiting other embodiments of the application not described in detail herein.
In the following description of the present application reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or a different subset of all possible embodiments and can be combined with each other without conflict.
In the following description of the present application, the terms "first", "second" and "third" are merely used to distinguish similar objects and do not represent a particular ordering of the objects. It should be understood that, where permitted, "first", "second" and "third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In order to improve the efficiency and accuracy of defect detection, embodiments of the present application provide a defect detection method, a defect detection apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The defect detection method may be performed by a defect detection apparatus or by an electronic device integrated with the defect detection apparatus.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
Referring to fig. 1a, the present application further provides a defect detection system. As shown in fig. 1a, the defect detection system includes an electronic device 100, in which the defect detection apparatus provided by the present application is integrated. For example, when the electronic device 100 is configured with a camera, the object to be detected may be photographed directly by the camera to obtain an object image of the object to be detected. Defect positioning is then performed according to the object image to determine a reference defect region where the object image is defective; multi-scale cropping is further performed on the object image according to the reference defect region to obtain a plurality of sub-images to be detected at different scales; finally, defect positioning and classification is performed according to the plurality of sub-images to be detected, to determine a target defect region where the object to be detected is defective and a corresponding target defect category.
The electronic device 100 may be any device with processing capability, for example a mobile electronic device with a processor, such as a smartphone, tablet computer, palmtop computer, or notebook computer, or a stationary electronic device with a processor, such as a desktop computer, television, server, or industrial device.
In addition, as shown in fig. 1a, the defect detection system may further include a memory 200 for storing raw data, intermediate data, and result data in the defect detection process, for example, the electronic device 100 stores the acquired object image (raw data) of the object to be detected, the indication information indicating the reference defect area, the sub-image to be detected (intermediate data), and the indication information (result data) indicating the target defect area and the corresponding target defect type of the object to be detected in the memory 200.
It should be noted that, the schematic view of the scenario of the defect detection system shown in fig. 1a is only an example, and the defect detection system and scenario described in the embodiment of the present application are for more clearly describing the technical solution of the embodiment of the present application, and do not constitute a limitation on the technical solution provided by the embodiment of the present application, and those skilled in the art can know that, with the evolution of the defect detection system and the appearance of a new service scenario, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
The following will describe in detail. The numbers of the following examples are not intended to limit the preferred order of the examples.
Referring to fig. 1b, fig. 1b is a schematic flow chart of a defect detection method according to an embodiment of the present application, and as shown in fig. 1b, the flow chart of the defect detection method according to the present application is as follows:
in S110, an article image of an article to be detected is acquired.
It should be noted that, the object to be detected refers to any object that needs to be subjected to defect detection, and the matched defect detection is correspondingly performed according to different types of the object to be detected. For example, if the object to be detected is a display panel, it is possible to detect whether the display panel has appearance defects such as a breakage defect and a scratch defect, and functional defects such as a bright point defect, a dark point defect and a line defect; for another example, if the object to be tested is a circuit board, it can be detected whether the circuit board has appearance defects such as scratch defects, silk screen defects, dirt defects, and functional defects such as cold joint defects and adhesion defects.
The mode of acquiring the object image of the object to be detected is not particularly limited in this embodiment, for example, when the electronic device executing the defect detection method of the present application is configured with the image acquisition component, the electronic device may directly acquire the image of the object to be detected through the configured image acquisition component, so as to obtain the object image of the object to be detected; in addition, the electronic device can also acquire the object image of the object to be detected from other electronic devices provided with the image acquisition component; the electronic equipment can also acquire the pre-acquired article image of the article to be detected from the server, and the article image is uploaded to the server after the image acquisition of the article to be detected is carried out by other image acquisition components.
The obtained object image of the object to be detected is used for the subsequent defect detection of the object to be detected.
In S120, defect localization is performed based on the article image, and a reference defect area in which the article image is defective is determined.
Defect positioning refers to identifying regions in an image where defects may exist, without regard to the defect category of the identified regions. In this embodiment, defect detection of the object to be detected is divided into two stages, namely a primary prediction stage and a secondary prediction stage. The goal of the primary prediction stage is to locate the rough position of the defect. For this prediction target, defect positioning on the object image can be implemented either in a traditional way based on defect template matching, or in an artificial-intelligence way based on machine vision.
Correspondingly, after the article image of the article to be detected is obtained, performing defect positioning according to the article image in a configured defect positioning mode, and determining the area possibly having the defect in the article image, and marking the area as a reference defect area.
In some embodiments, performing defect positioning according to the object image and determining a reference defect region where the object image is defective includes:
inputting the object image into a defect positioning model for defect positioning, and determining a reference defect region where the object image is defective.
In this embodiment, the defect positioning of the object image is implemented by adopting an artificial intelligence defect positioning mode based on machine vision. Artificial intelligence (Artificial Intelligence, AI) is a theory, method, technique, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and extend human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes machine learning (ML), of which deep learning (DL) is a new research direction, introduced to bring machine learning closer to its original goal, artificial intelligence. At present, deep learning is mainly applied in fields such as machine vision and natural language processing.
Deep learning learns the inherent regularities and representation hierarchies of sample data, and the information obtained during such learning greatly aids the interpretation of data such as text, images and sound. Using deep learning techniques and corresponding training sets, network models realizing different functions can be trained; for example, a gender classification model can be obtained based on one training set, an image optimization model based on another, and so on. Accordingly, in the present embodiment, sample images of defective sample objects are used as a training set to train the defect positioning model, which is configured to take the object image of the object to be detected as input and to output regions of the object image where defects may exist; the model structure and training method of the defect positioning model are not particularly limited.
In this embodiment, referring to fig. 1c, after the object image of the object to be detected is obtained, it may be input into the trained defect positioning model for defect positioning, to determine a reference defect region where the object image is defective. For example, a defect positioning model may be obtained by training a YOLO model as the base model on a pre-acquired training set, where YOLO models include, but are not limited to, YOLOv1, YOLOv2, YOLOv3, YOLOv4, YOLOv5, YOLOv6, and their improved variants.
For example, since only the approximate position of the defect needs to be located in the primary prediction stage, a lightweight model, the PP-YOLO Tiny model with a deployment size of only 1.3 MB, can be adopted as the base model and trained on a pre-acquired training set with the L1 loss until a preset stop condition is met, yielding the defect positioning model. The preset stop condition may be that the number of parameter iterations of the PP-YOLO Tiny model during training reaches a preset count, or that the L1 loss function converges, etc. The L1 loss function, also known as least absolute deviations or least absolute error, minimizes the sum of the absolute differences between the label values and the predicted values.
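The L1 loss described above can be expressed for a batch of scalar predictions as follows; the mean (rather than sum) reduction is an illustrative choice.

```python
def l1_loss(preds, targets):
    """Mean absolute error: average of |target - prediction| over the batch."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)
```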
The PP-YOLO Tiny model consists of three parts: a backbone network, a neck network, and a detection head network. The backbone network is the basis of the model and extracts features at several different scales, producing feature maps at different scales that include, but are not limited to, features of shape edges, features of color shades, and the like. The neck network connects the backbone network and the detection head network; it fuses the feature maps at different scales extracted by the backbone network, and the fused feature map serves as the input of the detection head network. The detection head network performs defect positioning on the input fused feature map and outputs a positioning result describing the position of the defect region. It should be noted that the detection head network of the PP-YOLO Tiny model includes two branches, a positioning branch for the positioning function and a classification branch for the classification function; since only defect positioning needs to be implemented in the primary prediction stage of this embodiment, the classification branch in the detection head network is removed.
In some embodiments, inputting the image of the item into a defect localization model for defect localization, determining a reference defect region where the image of the item is defective, includes:
inputting the object image into the defect positioning model for defect positioning to obtain candidate defect regions of the object image output by the defect positioning model and their corresponding defect confidences, wherein a defect confidence indicates how reliably the corresponding candidate defect region contains a defect;
and determining the reference defect area where the object image has defects according to the candidate defect regions and their corresponding defect confidences.
In the present embodiment, the defect positioning model is denoted as f1(x, θ1), where x represents the input of the defect positioning model, i.e., the object image, and θ1 represents the parameters of the defect positioning model.
The process of inputting the object image into the defect positioning model to perform defect positioning to obtain a series of candidate defect areas and corresponding defect confidence of the object image output by the defect positioning model can be expressed as follows:
Pred_box = f1(x, θ1);

Pred_box = {Pred_box_1, Pred_box_2, …, Pred_box_N};

Pred_box_i ∈ [x, y, w, h, conf];

where Pred_box represents the set of rectangular candidate defect regions, of which there are N in total (N being a positive integer greater than or equal to 1). Each Pred_box_i is defined by the predicted defect region's coordinate position and confidence: x (abscissa of the center of the candidate defect region), y (ordinate of the center of the candidate defect region), w (width of the candidate defect region), h (height of the candidate defect region), and conf (defect confidence of the candidate defect region). For example, a predicted candidate defect region of [100, 100, 20, 30, 0.6] indicates a rectangular candidate defect region of width 20 and height 30 centered at coordinates (100, 100) with a defect confidence of 0.6, i.e., the region has a 60% probability of containing a defect.
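As an illustration, a minimal sketch (with a hypothetical helper name, not part of the claimed method) of converting the center-based [x, y, w, h, conf] representation above into corner coordinates:

```python
def box_to_corners(pred_box):
    """Convert [x_center, y_center, w, h, conf] into (x1, y1, x2, y2, conf)."""
    x, y, w, h, conf = pred_box
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2, conf)

# The example region from the text: centered at (100, 100), width 20, height 30.
corners = box_to_corners([100, 100, 20, 30, 0.6])  # (90.0, 85.0, 110.0, 115.0, 0.6)
```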
In order to reduce the amount of computation in the subsequent secondary prediction stage and thereby improve the overall efficiency of defect detection, this embodiment does not use all candidate defect areas output by the defect positioning model as reference defect areas, but only a subset of them. The candidate defect areas output by the defect positioning model are screened according to the candidate defect areas and their corresponding defect confidences, and the screened candidate defect areas are determined as the reference defect areas where the object image has defects. For example, the k candidate defect areas with the highest defect confidence may be screened out and determined as the reference defect areas, or the candidate defect areas whose defect confidence is greater than or equal to a defect confidence threshold may be screened out and determined as the reference defect areas.
In one embodiment, determining a reference defect region of the article image having a defect according to the candidate defect region and its corresponding defect confidence comprises:
determining a defect confidence threshold according to the defect confidence corresponding to the candidate defect region;
And determining the candidate defect area with the corresponding defect confidence coefficient larger than or equal to the defect confidence coefficient threshold value as a reference defect area.
In order to screen the reference defect areas out of the candidate defect areas more accurately, this embodiment dynamically determines the defect confidence threshold and determines the reference defect areas based on that threshold.
The defect confidence threshold is determined according to the defect confidences corresponding to the candidate defect areas; that is, the threshold is not fixed across the object images of different objects to be detected, but is computed from the defect confidences of the predicted candidate defect areas. For example, the defect confidence threshold may be configured as the median of the defect confidences corresponding to the candidate defect areas, or as their average, and so on.
As above, after determining the defect confidence threshold, the candidate defect region having the corresponding defect confidence greater than or equal to the defect confidence threshold is further determined as the reference defect region according to the defect confidence threshold.
For example, for an object image, the image is input into the defect positioning model for defect positioning, and the model outputs 5 candidate defect regions with corresponding defect confidences of 0.4, 0.9, 0.1, 0.2 and 0.8. If the median 0.4 is taken as the defect confidence threshold, the candidate defect regions with confidences 0.4, 0.8 and 0.9 are determined as reference defect areas; if the average 0.48 is taken as the threshold, the candidate defect regions with confidences 0.8 and 0.9 are determined as reference defect areas.
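The dynamic-threshold screening described above can be sketched as follows (function name hypothetical; the text leaves the exact strategy to the implementer):

```python
import statistics

def screen_reference_regions(boxes, strategy="median"):
    """Keep candidate defect regions [x, y, w, h, conf] whose confidence is
    greater than or equal to a dynamically chosen threshold (the median or
    mean of all predicted confidences)."""
    confs = [b[4] for b in boxes]
    if strategy == "median":
        threshold = statistics.median(confs)
    else:
        threshold = sum(confs) / len(confs)
    return [b for b in boxes if b[4] >= threshold]

# The five confidences from the example above.
boxes = [[0, 0, 1, 1, c] for c in (0.4, 0.9, 0.1, 0.2, 0.8)]
by_median = screen_reference_regions(boxes, "median")  # keeps 0.4, 0.9, 0.8
by_mean = screen_reference_regions(boxes, "mean")      # keeps 0.9, 0.8
```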
In one embodiment, to further improve the overall efficiency of defect detection, inputting the object image into the defect positioning model for defect positioning and determining the reference defect area where the object image has defects includes:
downsampling the object image to obtain a downsampled image of the object image;
and inputting the downsampled image into a defect positioning model to perform defect positioning, and determining a reference defect area of the object image with defects.
In this embodiment, the originally acquired object image is not directly input into the defect positioning model for defect positioning. Instead, the object image is first downsampled to obtain a downsampled image of smaller scale than the original object image, where the scale difference between the two depends on the downsampling factor used. The downsampled image is then input into the defect positioning model for defect positioning, and the reference defect area of the downsampled image is determined accordingly; since the downsampled image differs from the object image only in scale, defect positioning on the downsampled image can be implemented in the same way as the defect positioning on the object image described in the embodiments above, which is not repeated here. Finally, the reference defect area of the downsampled image is scaled back according to the downsampling factor to determine the reference defect area where the object image has defects.
For example, assuming that the dimensions of an object image of an object to be detected are 800x640, applying 2x downsampling yields a downsampled image of dimensions 400x320. The downsampled image has the same content as the object image, but its area is reduced to one quarter of the original, so the data amount is reduced accordingly.
It can be understood that, since the downsampled image obtained by downsampling is smaller in scale than the original object image, its data amount is correspondingly smaller, and the amount of computation required when the defect positioning model performs defect positioning is reduced, thereby improving the overall efficiency of defect detection.
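The downsample-then-rescale flow can be illustrated with a minimal sketch (naive stride-based downsampling stands in for whatever resampling the implementation uses; helper names are hypothetical):

```python
import numpy as np

def downsample(image, factor=2):
    """Naive stride-based downsampling stand-in."""
    return image[::factor, ::factor]

def upscale_box(box, factor=2):
    """Scale a [x, y, w, h, conf] region located on the downsampled image
    back to the coordinate system of the original object image."""
    x, y, w, h, conf = box
    return [x * factor, y * factor, w * factor, h * factor, conf]

image = np.zeros((640, 800))           # an 800x640 object image (rows x columns)
small = downsample(image, 2)           # 400x320, one quarter of the original area
box_on_small = [50, 50, 10, 15, 0.7]
box_on_full = upscale_box(box_on_small, 2)  # [100, 100, 20, 30, 0.7]
```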
In S130, according to the reference defect area, the object image is subjected to multi-scale clipping, so as to obtain a plurality of sub-images to be detected with different scales.
As described above, the embodiment of the present application divides defect detection of an article to be detected into two stages, namely, a primary prediction stage and a secondary prediction stage. The target of the secondary prediction stage is to locate the accurate position of the defect and determine the defect type.
In this embodiment, after the reference defect area where a defect may exist is determined, the article image is cropped at multiple scales according to a configured multi-scale cropping strategy based on the reference defect area, obtaining a plurality of images of different scales, recorded as sub-images to be detected. Subject to the constraint that the image content of each sub-image to be detected at least partially overlaps the content of the reference defect area of the article image, the multi-scale cropping strategy can be configured by those skilled in the art according to actual needs.
For example, referring to fig. 1d, a multi-scale clipping strategy may be configured as follows:
and cutting out K sub-images to be detected with preset scales by taking the center of the reference defect area as a cutting center, wherein K is a positive integer greater than or equal to 2.
For another example, a multi-scale clipping strategy may also be configured to:
and cutting out K sub-images to be detected with preset scales by taking the upper left corner of the reference defect area as the cut upper left corner.
In S140, defect locating classification is performed according to the plurality of sub-images to be detected, and a target defect area and a corresponding target defect category of the object to be detected with defects are determined.
Defect positioning classification refers to identifying the defect regions in an image that may contain defects and determining what type of defect each identified region is. Given the target of the secondary prediction stage, defect positioning classification of the image can be implemented either with a conventional approach based on defect-template matching, or with a machine-vision-based artificial-intelligence approach.
In an embodiment, performing defect localization and classification according to a plurality of sub-images to be detected, determining a target defect area and a corresponding target defect category of an object to be detected, where the target defect area and the corresponding target defect category are defective, includes:
Inputting a plurality of sub-images to be detected into a trained defect positioning and classifying model to perform defect positioning and classifying, and determining a target defect area and a corresponding target defect category of the object to be detected;
wherein the defect positioning classification model is obtained by joint training with the defect positioning model.
In this embodiment, a machine-vision-based artificial-intelligence approach is adopted to perform defect positioning classification on the plurality of sub-images to be detected. Correspondingly, sample article images of sample articles with defects are used as the training set to jointly train the defect positioning model and the defect positioning classification model. The defect positioning classification model is configured to take as input the plurality of sub-images to be detected cropped from the article image of the article to be detected, and to output the defect regions the article image may contain and the defect types of those regions; the model structure and training method of the defect positioning classification model are not specifically limited here.
When performing defect positioning classification according to the plurality of sub-images to be detected, the sub-images of different scales are input into the trained defect positioning classification model for defect positioning classification, and the target defect area of the object image and its corresponding target defect category, that is, the target defect area where the object to be detected has a defect and the corresponding target defect category, are determined accordingly.
In an embodiment, referring to fig. 1e, a defect location classification model includes a feature extraction network, a feature fusion network, and a location classification network, wherein the feature extraction network includes a plurality of feature extraction branches corresponding to different scales, and inputs a plurality of sub-images to be detected into a trained defect location classification model to perform defect location classification, and determines a target defect area and a corresponding target defect category of a defect of an object to be detected, including:
respectively inputting the plurality of sub-images to be detected into feature extraction branches with corresponding scales to perform feature extraction, so as to obtain image features of the plurality of sub-images to be detected;
inputting the image features of the multiple sub-images to be detected into a feature fusion network to perform feature fusion, so as to obtain fusion features;
and inputting the fusion characteristics into a positioning classification network to perform defect positioning classification, and determining a target defect area and a corresponding target defect category of the object to be detected.
The feature extraction network is a basis of a defect positioning classification model and comprises a plurality of feature extraction branches corresponding to different scales, and each feature extraction branch is configured to perform feature extraction on a sub-image to be detected of the corresponding scale to obtain image features, including but not limited to features of shape edges, features of color shades and the like.
The feature fusion network connects the feature extraction network and the positioning classification network, and is configured to perform feature fusion on the plurality of image features of different scales produced by the feature extraction network; the fused features are used as the input of the positioning classification network.
The positioning classification network is configured to perform defect positioning classification on the input fusion features, and correspondingly output positioning classification results for describing the positions and the categories of the defect areas.
In this embodiment, after the multi-scale cropping of the object image according to the reference defect area is completed and a plurality of sub-images to be detected of different scales are obtained, the sub-images are input into the feature extraction branches of the corresponding scales for feature extraction, obtaining the image features of the sub-images to be detected; the image features of the sub-images are then input into the feature fusion network for feature fusion, obtaining fused features; finally, the fused features are input into the positioning classification network for defect positioning classification, and the target defect area where the object to be detected has a defect and the corresponding target defect category are determined according to the positioning classification result output by the network.
In this embodiment, the feature fusion modes of the image features with different scales are not particularly limited, and can be selected by those skilled in the art according to actual needs.
In an embodiment, please refer to fig. 1f, the feature fusion network includes a weighting operation module and a stitching module, the weighting operation module includes a plurality of weighting operation branches corresponding to different scales, the feature fusion network is used for feature fusion of image features of a plurality of sub-images to be detected, and the feature fusion method includes:
respectively inputting the image features of the plurality of sub-images to be detected into weighting operation branches of corresponding scales to perform weighting operation, so as to obtain the weighting features of the plurality of sub-images to be detected;
and inputting the weighted features of the multiple sub-images to be detected into a splicing module for feature splicing to obtain fusion features.
In this embodiment, a weight parameter w_i is preconfigured for the image features of each scale to measure their quality. The configuration of the weight parameters w_i is not specifically limited here; for example, they may be set empirically by those skilled in the art.
Correspondingly, this embodiment uses the weight parameters w_i to fuse the multi-scale image features. The process of extracting the image features of the sub-images to be detected at different scales and inputting them into the weighting operation branches of the corresponding scales to obtain the weighted features of each sub-image to be detected can be expressed as:

F_i' = F_i * w_i;

where F_i represents the image features of the i-th sub-image to be detected, w_i represents the weight parameter of the weighting operation branch of the corresponding scale, and F_i' represents the weighted features of the i-th sub-image to be detected.
As above, after the weighting operation is completed to obtain the weighting characteristics of the multiple sub-images to be detected, the weighting characteristics of the multiple sub-images to be detected are further input into the splicing module to perform characteristic splicing, that is, characteristic fusion is realized in a characteristic splicing manner, so as to obtain fusion characteristics, and the process can be expressed as follows:
F_m = concat(F_i');

where F_m represents the fused feature and concat() denotes the splicing operation, i.e., the concat operation.
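A minimal numeric sketch of the weighted splicing above (toy feature vectors and weights, chosen only for illustration):

```python
import numpy as np

def weighted_fuse(features, weights):
    """F_i' = F_i * w_i for each scale, then F_m = concat(F_i')."""
    weighted = [f * w for f, w in zip(features, weights)]
    return np.concatenate(weighted)

# Three scales of (flattened) image features with empirically set weights w_i.
features = [np.array([1.0, 2.0]), np.array([3.0]), np.array([4.0, 5.0])]
fused = weighted_fuse(features, [0.5, 1.0, 2.0])  # [0.5, 1.0, 3.0, 8.0, 10.0]
```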
In one embodiment, an optional joint training scheme of the defect localization model and the defect localization classification model is provided, and before acquiring the article image of the article to be detected, the method further includes:
acquiring a positive sample image with defects, and a defect type label and a defect position label of the positive sample image;
acquiring a negative sample image without defects;
training a defect positioning classification model by adopting a gradient descent mode and a reinforcement learning mode according to the negative sample image, the positive sample image, the defect type label and the defect position label;
and training the defect positioning model in a gradient descent mode according to the negative sample image, the positive sample image and the defect position label of the negative sample image.
In this embodiment, for an article of a type, according to different defect types that may exist in the article of the type, a sample defect image of a sample article in which defects of the different defect types exist is acquired, recorded as a positive sample image, and a defect position label for describing a defect region in the sample defect image in which defects exist, and a defect type label for describing a defect type of the defect region are acquired. For example, for a display panel, the image acquisition component is used for shooting the display panel with different types of defects such as breakage defects, scratch defects, bright point defects, dark point defects and/or line defects, and the like, so as to correspondingly obtain a positive sample image.
In addition, for one type of article, a sample image without defects is also acquired and recorded as a negative sample image. For example, the image acquisition component shoots the display panel without any defect, and accordingly a negative sample image is obtained.
It should be noted that, in this embodiment, the defect types and the number of the obtained positive sample images are not particularly limited, and the number of the negative sample images may be configured by those skilled in the art according to actual needs.
In this embodiment, for the defect positioning model, only the position with the defect is concerned, but not what type of defect is concerned, and correspondingly, according to the negative sample image, the positive sample image and the defect position label thereof, the defect positioning model is trained by adopting a gradient descent mode; meanwhile, for the defect positioning and classifying model, as the positions and the types of defects are focused at the same time, the defect positioning and classifying model is trained by adopting a gradient descent mode and a reinforcement learning mode according to the negative sample image, the positive sample image, the defect type labels and the defect position labels.
In an embodiment, training the defect localization classification model according to the negative sample image, the positive sample image, the defect class label and the defect position label by adopting a gradient descent mode and a reinforcement learning mode comprises the following steps:
according to the negative sample image, the positive sample image, the defect type label and the defect position label thereof, the weight parameters of the feature extraction network and the positioning classification network in the defect positioning classification model are updated in a gradient descent mode, and the weight parameters of the feature extraction branches in the feature fusion network are updated in a reinforcement learning mode.
Referring to fig. 1g, reinforcement learning is an optimization strategy: based on the current state, the agent's actions change the environment, and a reward measures how good the current state is; the higher the reward, the more correct the agent's actions were. On this basis, the agent can learn a policy that maximizes the reward.
In this embodiment, according to the negative sample image, the positive sample image, and their defect category and defect position labels, the weight parameters of the feature extraction network and the positioning classification network in the defect positioning classification model are updated by gradient descent, where gradient descent methods include, but are not limited to, stochastic gradient descent (Stochastic Gradient Descent), mini-batch gradient descent (Mini Batch Gradient Descent), and the like.
In this embodiment, the reinforcement-learned agent corresponds to a defect localization model and a defect localization classification model, the states correspond to weight parameters of the defect localization model and the defect localization classification model, the actions represent a process of updating the weight parameters by using a gradient descent method, the environment represents a current optimization state of the model, such as gradient and optimizer parameters, and rewards can be determined according to the loss of the defect localization model and the defect localization classification model as a whole.
In an embodiment, updating weight parameters of feature extraction branches in a feature fusion network by reinforcement learning includes:
acquiring a first training loss of the defect positioning model and acquiring a second training loss of the defect positioning classification model;
fusing the first training loss and the second training loss to obtain fused training loss;
determining reinforcement learning rewards according to fusion training losses, reinforcement learning is carried out according to rewards, and weight parameters of feature extraction branches in a feature fusion network are updated.
The first training loss is used for describing the difference between a determined sample reference defect area and a defect area indicated by a corresponding defect position label when the defect positioning model performs defect positioning on the sample image. For how to use the defect positioning model to perform defect positioning on the sample image, please refer to the description related to using the defect positioning model to perform defect positioning on the object image in the above embodiments, which is not repeated herein.
The second training loss is used for describing the difference between the determined sample target defect and the defect area indicated by the defect position label corresponding to the sample image and the difference between the determined sample target defect category and the defect category indicated by the defect category label corresponding to the sample image. For how to use the defect locating classification model to locate defects in the sample image, please refer to the description related to using the defect locating model to locate defects in the object image (i.e. according to the reference defect area determined by the defect locating model, cutting the object images in multiple scales, and inputting the multiple sub-images to be detected obtained by cutting into the defect locating classification model to locate and classify).
In this embodiment, a first training loss of the defect positioning model is obtained, a second training loss of the defect positioning classification model is obtained, and the first training loss and the second training loss are fused to obtain a fused training loss, where the fusion mode of the first training loss and the second training loss is not specifically limited, and may be configured by a person skilled in the art according to actual needs.
And after the fusion training loss is obtained through fusion, determining reinforcement learning rewards according to the fusion training loss by taking the negative correlation of the rewards and the fusion training loss as constraint, performing reinforcement learning according to the determined rewards, and updating weight parameters of feature extraction branches in a feature fusion network.
For example, the negative of the fusion training loss may be directly employed as the reinforcement learning reward, expressed as:

R = -L_all;

where R represents the reinforcement learning reward and L_all represents the fusion training loss.
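A sketch of the reward computation (the fusion of the two losses is assumed to be a simple sum, since the text leaves the fusion method unspecified):

```python
def fuse_losses(first_loss, second_loss):
    """Fuse the defect positioning model's loss with the defect positioning
    classification model's loss; a plain sum is one possible choice."""
    return first_loss + second_loss

def reward(fused_loss):
    """Reinforcement learning reward R = -L_all: the lower the fused loss,
    the higher the reward."""
    return -fused_loss

r = reward(fuse_losses(0.3, 0.5))  # -0.8
```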
In this embodiment, according to the fusion training loss, the gradient descent method is used to update the weight parameters of the defect positioning model, and update the weight parameters of the feature extraction network and the positioning classification network in the defect positioning classification model.
As above, by jointly training the defect positioning model and the defect positioning classification model on the same training set with a combination of gradient descent and reinforcement learning, the amount of computation of defect detection can be reduced while improving its overall efficiency.
As can be seen from the above, the embodiment of the present application divides the defect detection of the object to be detected into two stages. In the first stage, coarse defect positioning is performed on the acquired object image of the object to be detected: the approximate positions where defects may exist are identified, and the reference defect areas where the object image has defects are determined. In the second stage, the object image is cropped at a plurality of different scales based on the reference defect areas to obtain a plurality of sub-images to be detected of different scales; accurate defect positioning classification is then performed on these cropped sub-images, the target positions where defects may exist and the defect types are identified, and the target defect areas where the object to be detected has defects and the corresponding target defect categories are determined. On the one hand, replacing traditional manual visual inspection with image recognition avoids subjective manual judgment and improves the accuracy of defect detection results. On the other hand, by dividing defect detection into two stages, first locating the reference defect areas and then cropping sub-images to be detected of different scales from them, features of more accurate scale can be provided for defect positioning classification; at the same time, the influence of image content outside the reference defect areas on defect positioning classification is avoided as much as possible, which further improves the accuracy of the defect detection results.
According to the defect detection method provided in the above embodiments, the object to be detected is taken as a display panel, and the defect detection device is integrated in an electronic device for example for further detailed description.
Referring to fig. 2 and 3 in combination, the flow of the defect detection method may further be as follows:
in S210, the electronic device acquires a negative sample image of the display panel without the defect, a positive sample image of the display panel with the defect, a defect type label thereof, and a defect position label.
In this embodiment, for a display panel, the electronic device obtains, according to different defect types that may exist in the display panel, a sample defect image of the display panel in which defects of the different defect types exist, marks as a positive sample image, and obtains a defect position label for describing a defect area in the sample defect image in which defects exist, and a defect type label describing a defect type of the defect area. For example, the electronic device shoots the display panel with different types of defects such as breakage defects, scratch defects, bright point defects, dark point defects and/or line defects through the image acquisition component, and accordingly obtains a positive sample image.
In addition, for the display panel, the electronic device also acquires a sample image without defects, noted as a negative sample image. For example, the electronic device shoots the display panel without any defect through the image acquisition component, and accordingly a negative sample image is obtained.
It should be noted that, in this embodiment, the defect types and the number of the positive sample images obtained by the electronic device are not particularly limited, and the number of the negative sample images may be configured by those skilled in the art according to actual needs. The obtained negative sample image, the positive sample image, the defect type label and the defect position label are used for forming a training set for the combined training of the follow-up defect positioning model and the defect positioning classification model.
In S220, the electronic device updates the weight parameters of the defect localization model in a gradient descent manner according to the negative sample image, the positive sample image and the defect position labels thereof.
In this embodiment, for the defect positioning model, since only the positions where defects exist are of concern, not the defect types, the electronic device updates the weight parameters of the defect positioning model by gradient descent according to the negative sample image, the positive sample image, and their defect position labels. Gradient descent methods include, but are not limited to, stochastic gradient descent (Stochastic Gradient Descent), mini-batch gradient descent (Mini Batch Gradient Descent), and the like.
In S230, the electronic device updates the weight parameters of the feature extraction network and the location classification network in the defect location classification model in a gradient descent manner according to the negative sample image, the positive sample image, the defect category label and the defect location label thereof.
In this embodiment, the electronic device updates the weight parameters of the feature extraction network and the location classification network in the defect location classification model in a gradient descent manner according to the negative sample image, the positive sample image, the defect category label and the defect location label thereof.
In S240, the electronic device obtains a first training loss of the defect localization model and obtains a second training loss of the defect localization classification model, and fuses the first training loss and the second training loss to obtain a fused training loss.
In S250, the electronic device determines reinforcement learning rewards according to the fusion training loss, performs reinforcement learning according to the rewards, and updates the weight parameters of feature extraction branches in the feature fusion network in the defect positioning classification model.
In this embodiment, the agent of reinforcement learning corresponds to the defect positioning model and the defect positioning classification model; the state corresponds to the weight parameters of the two models; the action represents the process of updating the weight parameters by gradient descent; the environment represents the current optimization state of the models, such as the gradients and the optimizer parameters; and the reward can be determined according to the overall loss of the defect positioning model and the defect positioning classification model.
The first training loss is used for describing the difference between a determined sample reference defect area and a defect area indicated by a corresponding defect position label when the defect positioning model performs defect positioning on the sample image. For how to use the defect positioning model to perform defect positioning on the sample image, please refer to the description related to using the defect positioning model to perform defect positioning on the object image in the above embodiments, which is not repeated herein.
The second training loss is used for describing the difference between the determined sample target defect area and the defect area indicated by the defect position label corresponding to the sample image, and the difference between the determined sample target defect category and the defect category indicated by the defect category label corresponding to the sample image. For how to use the defect positioning classification model to perform defect positioning classification on the sample image, please refer to the above description of performing defect positioning classification on the object image (i.e., cropping the object image at multiple scales according to the reference defect area determined by the defect positioning model, and inputting the plurality of cropped sub-images to be detected into the defect positioning classification model for positioning and classification), which is not repeated herein.
In this embodiment, the electronic device obtains a first training loss of the defect positioning model and obtains a second training loss of the defect positioning classification model, fuses the first training loss and the second training loss to obtain a fused training loss, and the fusion mode of the first training loss and the second training loss is not specifically limited and can be configured by a person skilled in the art according to actual needs.
And after the fusion training loss is obtained through fusion, the electronic equipment takes the negative correlation of the rewards and the fusion training loss as constraint, determines the rewards of reinforcement learning according to the fusion training loss, performs reinforcement learning according to the determined rewards, and updates the weight parameters of the feature extraction branches in the feature fusion network.
For example, the electronic device may directly adopt the negative of the fusion training loss as the reinforcement learning reward, expressed as:

R = -L_all;

where R represents the reinforcement learning reward and L_all represents the fusion training loss.
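The fusion of the two training losses and the derivation of the reward can be sketched as follows; the weighted-sum fusion and the names used are assumptions for illustration, since the embodiment leaves the fusion mode configurable:

```python
def fuse_losses(localization_loss, classification_loss, alpha=0.5):
    """One possible fusion: a weighted sum of the two training losses."""
    return alpha * localization_loss + (1 - alpha) * classification_loss

def reinforcement_reward(fused_loss):
    """Reward negatively correlated with the fused loss: R = -L_all."""
    return -fused_loss

# Lower fused loss yields a higher (less negative) reward.
r = reinforcement_reward(fuse_losses(2.0, 4.0))
```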
In this embodiment, the electronic device updates the weight parameters of the defect location model in a gradient descent manner according to the fusion training loss, and updates the weight parameters of the feature extraction network and the location classification network in the defect location classification model.
As above, by jointly training the defect positioning model and the defect positioning classification model on the same training set in a gradient descent manner and a reinforcement learning manner, the overall efficiency of defect detection can be improved while the computational cost of defect detection is reduced.
In S260, the electronic device obtains a display panel image of the display panel to be detected, inputs the display panel image into a defect positioning model to perform defect positioning, and determines a reference defect area where the display panel image has defects.
The manner of acquiring the display panel image of the display panel to be detected is not particularly limited. For example, when the electronic device is configured with an image acquisition component, it can directly perform image acquisition on the display panel to be detected through that component to obtain the display panel image; the electronic device can also acquire the display panel image from another electronic device provided with an image acquisition component; or the electronic device can acquire, from a server, a display panel image of the display panel to be detected that was acquired in advance by another electronic device configured with an image acquisition component and uploaded to the server.
The obtained display panel image of the display panel to be detected is used for the subsequent defect detection of the display panel to be detected.
In this embodiment, referring to fig. 1c, after the display panel image of the display panel to be detected is obtained, the obtained display panel image is input into a trained defect positioning model to perform defect positioning, so as to determine a reference defect area where the display panel image has defects.
In S270, the electronic device performs multi-scale clipping on the display panel image according to the reference defect area, to obtain a plurality of sub-images to be detected with different scales.
In this embodiment, after determining a reference defect area where a defect may exist, the electronic device performs multi-scale clipping on the display panel image according to a configured multi-scale clipping policy based on the reference defect area, to obtain a plurality of images with different scales, and records the images as sub-images to be detected. The multi-scale clipping strategy can be configured by a person skilled in the art according to actual needs by taking the image content of the sub-image to be detected and the content of the reference defect area of the display panel image as constraints which are at least partially overlapped.
For example, referring to fig. 1d, a multi-scale clipping strategy may be configured as follows:
And cutting out K sub-images to be detected with preset scales by taking the center of the reference defect area as a cutting center, wherein K is a positive integer greater than or equal to 2.
For another example, a multi-scale clipping strategy may also be configured to:
and cutting out K sub-images to be detected with preset scales by taking the upper left corner of the reference defect area as the cut upper left corner.
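A minimal sketch of the first, center-anchored clipping strategy (the function below and its crop-window representation are illustrative assumptions, not the actual implementation):

```python
def multi_scale_crop(image_w, image_h, box, scales):
    """Cut K crop windows centered on the reference defect box (x0, y0, x1, y1),
    clipped to the image bounds, one window per preset scale."""
    cx = (box[0] + box[2]) / 2  # cutting center = center of the defect box
    cy = (box[1] + box[3]) / 2
    crops = []
    for s in scales:
        half = s / 2
        crops.append((max(0, cx - half), max(0, cy - half),
                      min(image_w, cx + half), min(image_h, cy + half)))
    return crops

# Two sub-images to be detected, at scales 20 and 40, around a 20x20 defect box.
crops = multi_scale_crop(100, 100, (40, 40, 60, 60), [20, 40])
```

The upper-left-corner variant of the strategy would anchor `(x0, y0)` of each window at `box[0], box[1]` instead of centering it.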
In S280, the electronic device inputs the plurality of sub-images to be detected into feature extraction branches of corresponding scales to perform feature extraction, so as to obtain image features of the plurality of sub-images to be detected.
Through the above joint training, each feature extraction branch of the feature extraction network is configured to perform feature extraction on the sub-image to be detected at its corresponding scale to obtain image features, including but not limited to shape and edge features, color and shade features, and the like.
In S290, the electronic device inputs the image features of the multiple sub-images to be detected into weighting operation branches of corresponding scales in the feature fusion network respectively to perform weighting operation, so as to obtain weighting features of the multiple sub-images to be detected, inputs the weighting features of the multiple sub-images to be detected into a splicing module in the feature fusion network to perform feature splicing, so as to obtain fusion features, inputs the fusion features into a positioning classification network to perform defect positioning classification, and determines a target defect region of the display panel to be detected with defects and a corresponding target defect class.
In this embodiment, the feature fusion network is connected to the feature extraction network and the positioning classification network, and is configured to perform feature fusion on a plurality of image features with different scales obtained by encoding the feature extraction network, and take the fused features obtained by fusion as input of the positioning classification network.
The positioning classification network is configured to perform defect positioning classification on the input fusion features, and correspondingly output positioning classification results for describing the positions and the categories of the defect areas.
In this embodiment, for the image features extracted from the sub-images to be detected with different scales, the electronic device inputs the image features into weighting branches with corresponding scales to perform weighting operation, so as to obtain weighting features of each sub-image to be detected, where the process may be expressed as:
F_i' = F_i * w_i;

where F_i represents the image features of the i-th sub-image to be detected, w_i represents the weight parameter of the weighting operation branch at the scale corresponding to the i-th sub-image to be detected, and F_i' represents the weighted features of the i-th sub-image to be detected.
As above, after the weighting operation is completed to obtain the weighting characteristics of the multiple sub-images to be detected, the electronic device further inputs the weighting characteristics of the multiple sub-images to be detected into the splicing module to perform characteristic splicing, that is, the characteristic fusion is realized in a characteristic splicing manner, so as to obtain fusion characteristics, and the process can be expressed as:
F_m = concat(F_i');

where F_m represents the fusion feature and concat() represents the splicing operation, i.e., the concat operation.
And after the fusion characteristics are obtained by splicing, the electronic equipment inputs the fusion characteristics into a positioning and classifying network to perform defect positioning and classifying, and a target defect area and a corresponding target defect category of the object to be detected with defects are determined according to the positioning and classifying result output by the positioning and classifying network.
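Taken together, the weighting operation F_i' = F_i * w_i and the concat-based splicing can be sketched as follows (flat feature lists stand in for the real feature tensors; the names are illustrative assumptions):

```python
def weight_and_fuse(branch_features, branch_weights):
    """Scale each branch's features by its learned weight parameter,
    then splice (concatenate) them into one fusion feature."""
    fused = []
    for features, w in zip(branch_features, branch_weights):
        fused.extend(v * w for v in features)  # F_i' = F_i * w_i
    return fused                               # F_m = concat(F_i')

# Two branches: a 2-element feature and a 1-element feature, weights 0.5 and 2.0.
fusion_feature = weight_and_fuse([[1.0, 2.0], [3.0]], [0.5, 2.0])
```

In practice the weights are learned through the reinforcement-learning updates described above, and the concatenation is performed along the channel dimension of the feature maps.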
In order to facilitate better implementation of the defect detection method provided by the embodiments of the present application, an embodiment of the present application further provides a defect detection device based on the defect detection method. The meanings of the terms are the same as those in the defect detection method above; for specific implementation details, refer to the description of the method embodiments.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a defect detecting device according to an embodiment of the present application, where the defect detecting device may include an image acquisition module 310, a defect positioning module 320, an image clipping module 330 and a positioning classification module 340,
an image acquisition module 310, configured to acquire an article image of an article to be detected;
the defect positioning module 320 is configured to perform defect positioning according to the object image, and determine a reference defect area where the object image has a defect;
The image clipping module 330 is configured to clip the object image in multiple scales according to the reference defect area, so as to obtain multiple sub-images to be detected with different scales;
the positioning and classifying module 340 is configured to perform defect positioning classification according to the plurality of sub-images to be detected, and determine a target defect area where the object to be detected has a defect and a corresponding target defect category.
In an alternative embodiment, defect localization module 320 is configured to input the image of the object into a defect localization model for defect localization, and determine a reference defect area where the image of the object is defective.
In an alternative embodiment, the defect positioning module 320 is configured to input the object image into the defect positioning model for performing defect positioning, so as to obtain a candidate defect area of the object image output by the defect positioning model and a corresponding defect confidence level, where the defect confidence level is used to indicate a reliability degree of defects in the corresponding candidate defect area; and determining a reference defect area with defects of the object image according to the candidate defect area and the corresponding defect confidence coefficient.
In an alternative embodiment, the defect locating module 320 is configured to determine a defect confidence threshold according to the defect confidence corresponding to the candidate defect region; and determining the candidate defect region with the corresponding defect confidence greater than or equal to the defect confidence threshold as a reference defect region.
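A sketch of this confidence-based filtering; deriving the threshold as the mean confidence is one illustrative choice, since the embodiment does not fix how the threshold is determined from the candidate confidences:

```python
def filter_reference_regions(candidates, threshold=None):
    """candidates: list of (region, confidence) pairs. Keep candidate defect
    regions whose confidence is at least the threshold (here: the mean)."""
    if threshold is None:
        threshold = sum(conf for _, conf in candidates) / len(candidates)
    return [region for region, conf in candidates if conf >= threshold]

kept = filter_reference_regions([("box_a", 0.9), ("box_b", 0.2), ("box_c", 0.6)])
```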
In an alternative embodiment, the defect localization module 320 is configured to perform a downsampling process on the object image to obtain a downsampled image of the object image; and inputting the downsampled image into a defect positioning model to perform defect positioning, and determining a reference defect area of the object image with defects.
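Nearest-neighbour subsampling is one simple way to realize the downsampling step (illustrative only; the embodiment does not prescribe the downsampling method):

```python
def downsample(image, factor=2):
    """Keep every `factor`-th pixel in each dimension; image is a 2-D list."""
    return [row[::factor] for row in image[::factor]]

small = downsample([[1, 2, 3, 4],
                    [5, 6, 7, 8],
                    [9, 10, 11, 12],
                    [13, 14, 15, 16]])
```

Running the coarse first-stage localization on the smaller image reduces its computation; the subsequent multi-scale cropping still operates on the original object image.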
In an alternative embodiment, the positioning classification module 340 is configured to input the plurality of sub-images to be detected into a trained defect positioning classification model to perform defect positioning classification, and determine a target defect area where the object to be detected has a defect and a corresponding target defect class; wherein, defect location classification model and defect location model joint training obtain.
In an alternative embodiment, the defect positioning and classifying model includes a feature extraction network, a feature fusion network, and a positioning and classifying network, where the feature extraction network includes a plurality of feature extraction branches corresponding to different scales, and the positioning and classifying module 340 is configured to input the plurality of sub-images to be detected into the feature extraction branches corresponding to the scales respectively to perform feature extraction, so as to obtain image features of the plurality of sub-images to be detected; inputting the image features of the multiple sub-images to be detected into a feature fusion network to perform feature fusion, so as to obtain fusion features; and inputting the fusion characteristics into a positioning classification network to perform defect positioning classification, and determining a target defect area and a corresponding target defect category of the object to be detected with defects.
In an alternative embodiment, the feature fusion network includes a weighting operation module and a stitching module, where the weighting operation module includes a plurality of weighting operation branches corresponding to different scales, and the positioning classification module 340 is configured to input image features of a plurality of sub-images to be detected into the weighting operation branches corresponding to the scales to perform weighting operation respectively, so as to obtain weighting features of the plurality of sub-images to be detected; and inputting the weighted features of the multiple sub-images to be detected into a splicing module for feature splicing to obtain fusion features.
In an alternative embodiment, the defect detection device further comprises a model training module, which is used for acquiring a positive sample image with defects, and a defect type label and a defect position label thereof; acquiring a negative sample image without defects; training a defect positioning classification model by adopting a gradient descent mode and a reinforcement learning mode according to the negative sample image, the positive sample image, the defect type label and the defect position label thereof; and training the defect positioning model in a gradient descent mode according to the negative sample image, the positive sample image and the defect position labels thereof.
In an alternative embodiment, the model training module is configured to update weight parameters of the feature extraction network and the location classification network in the defect location classification model in a gradient descent manner according to the negative sample image, the positive sample image, the defect category label and the defect location label thereof, and update weight parameters of the feature extraction branch in the feature fusion network in a reinforcement learning manner.
In an alternative embodiment, the model training module is configured to obtain a first training loss of the defect localization model and obtain a second training loss of the defect localization classification model; and fusing the first training loss and the second training loss to obtain fused training loss; determining reinforcement learning rewards according to the fusion training losses, performing reinforcement learning according to the rewards, and updating weight parameters of feature extraction branches in a feature fusion network.
The specific implementation of each module can be referred to the previous embodiments, and will not be repeated here.
In this embodiment, the defect detection of the object to be detected is divided into two stages, and in the first stage, the defect positioning module 320 performs rough defect positioning by using the object image of the detected object obtained by the image obtaining module 310, identifies the approximate position where the defect may exist, and determines the reference defect area where the defect exists in the object image; in the second stage, the image cropping module 330 utilizes the reference defect area to crop the object image at a plurality of different scales to obtain a plurality of sub-images to be detected with different scales, and the positioning classification module 340 utilizes the plurality of sub-images to be detected with different scales obtained by cropping to perform accurate defect positioning classification, identify the target position and defect category where defects may exist, and determine the target defect area and corresponding target defect category where defects exist in the object to be detected. On the one hand, traditional manual visual detection is replaced by an image recognition mode, so that manual subjective judgment can be avoided, and the accuracy of defect detection results is improved. On the other hand, by dividing the defect detection into two stages, firstly locating the reference defect area and then cutting out sub-images to be detected with different scales according to the reference defect area, the characteristics with more accurate scales can be provided for defect locating and classifying, meanwhile, the influence of image content outside the reference defect area on defect locating and classifying can be avoided as much as possible, and the accuracy of defect detection results can be further improved.
The embodiment of the application also provides an electronic device, which comprises a memory and a processor, wherein the processor is used for executing the steps in the defect detection method provided by the embodiment by calling the computer program stored in the memory.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application.
The electronic device may include a processor 101 having one or more processing cores, a memory 102 including one or more computer-readable storage media, a power supply 103, and an input unit 104, among other components. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 5 does not limit the electronic device, and the electronic device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. Wherein:
the processor 101 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 102, and invoking data stored in the memory 102. Optionally, processor 101 may include one or more processing cores; alternatively, the processor 101 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 101.
The memory 102 may be used to store software programs and modules, and the processor 101 executes various functional applications and data processing by executing the software programs and modules stored in the memory 102. The memory 102 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the electronic device, etc. In addition, memory 102 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 102 may also include a memory controller to provide access to the memory 102 by the processor 101.
The electronic device further comprises a power supply 103 for powering the various components, optionally, the power supply 103 may be logically connected to the processor 101 by a power management system, whereby the functions of managing charging, discharging, and power consumption are performed by the power management system. The power supply 103 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The electronic device may further comprise an input unit 104, which input unit 104 may be used for receiving input digital or character information and for generating keyboard, mouse, joystick, optical or trackball signal inputs in connection with user settings and function control.
Although not shown, the electronic device may further include a display unit, an image acquisition component, and the like, which are not described herein. In particular, in this embodiment, the processor 101 in the electronic device loads executable codes corresponding to one or more computer programs into the memory 102 according to the following instructions, and the steps in the defect detection method provided by the present application are executed by the processor 101, for example:
acquiring an article image of an article to be detected;
performing defect positioning according to the object image, and determining a reference defect area with defects of the object image;
according to the reference defect area, carrying out multi-scale cutting on the object image to obtain a plurality of sub-images to be detected with different scales;
and carrying out defect positioning classification according to the plurality of sub-images to be detected, and determining a target defect area and a corresponding target defect category of the object to be detected, wherein the target defect area is defective.
It should be noted that, the electronic device provided in the embodiment of the present application and the defect detection method in the foregoing embodiment belong to the same concept, and detailed implementation processes of the electronic device are described in the foregoing related embodiments, which are not repeated herein.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed on a processor of an electronic device provided by an embodiment of the present application, causes the processor of the electronic device to execute the steps in the defect detection method provided by the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform various alternative implementations of the defect detection method described above.
The principles and embodiments of the present application have been described above with reference to specific examples, which are intended only to aid in understanding the method of the present application and its core ideas. Meanwhile, those skilled in the art may make variations in the specific embodiments and the scope of application in light of the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.

Claims (15)

1. A defect detection method, comprising:
acquiring an article image of an article to be detected;
performing defect positioning according to the object image, and determining a reference defect area with defects of the object image;
performing multi-scale cutting on the object image according to the reference defect area to obtain a plurality of sub-images to be detected with different scales;
and carrying out defect positioning classification according to the plurality of sub-images to be detected, and determining a target defect area and a corresponding target defect category of the object to be detected, wherein the target defect area is defective.
2. The defect detection method of claim 1, wherein the determining a reference defect area of the article image in which the defect exists based on the defect localization of the article image comprises:
and inputting the object image into a defect positioning model to perform defect positioning, and determining a reference defect area where the object image has defects.
3. The defect detection method of claim 2, wherein the inputting the object image into a defect localization model for defect localization, determining a reference defect region in which the object image is defective, comprises:
inputting the object image into the defect positioning model for defect positioning, and obtaining a candidate defect region of the object image output by the defect positioning model and a corresponding defect confidence level, wherein the defect confidence level is used for indicating the reliability degree of defects in the corresponding candidate defect region;
And determining a reference defect area with defects of the object image according to the candidate defect area and the corresponding defect confidence coefficient.
4. A defect detection method according to claim 3, wherein said determining a reference defect region of said object image having defects according to said candidate defect regions and their corresponding defect confidence levels comprises:
determining a defect confidence threshold according to the defect confidence corresponding to the candidate defect region;
and determining the candidate defect area with the corresponding defect confidence coefficient larger than or equal to the defect confidence coefficient threshold value as the reference defect area.
5. The defect detection method of claim 2, wherein the inputting the object image into a defect localization model for defect localization, determining a reference defect region in which the object image is defective, comprises:
performing downsampling processing on the object image to obtain a downsampled image of the object image;
and inputting the downsampled image into the defect positioning model to perform defect positioning, and determining a reference defect area where the object image has defects.
6. The defect detection method according to claim 2, wherein the performing defect localization classification according to the plurality of sub-images to be detected, determining a target defect area and a corresponding target defect class of the object to be detected having defects, includes:
Inputting a plurality of sub-images to be detected into a trained defect positioning and classifying model to perform defect positioning and classifying, and determining a target defect area and a corresponding target defect category of the object to be detected with defects;
wherein the defect positioning classification model and the defect positioning model are obtained through combined training.
7. The defect detection method according to claim 6, wherein the defect localization classification model includes a feature extraction network, a feature fusion network, and a localization classification network, the feature extraction network includes a plurality of feature extraction branches corresponding to different scales, the inputting the plurality of sub-images to be detected into the trained defect localization classification model performs defect localization classification, and determining a target defect area and a corresponding target defect class where the defect exists in the object to be detected includes:
respectively inputting the plurality of sub-images to be detected into feature extraction branches with corresponding scales to perform feature extraction, so as to obtain image features of the plurality of sub-images to be detected;
inputting the image features of the multiple sub-images to be detected into the feature fusion network to perform feature fusion, so as to obtain fusion features;
and inputting the fusion characteristics into the positioning classification network to perform defect positioning classification, and determining a target defect area and a corresponding target defect category of the object to be detected, wherein the target defect area is defective.
8. The defect detection method according to claim 7, wherein the feature fusion network comprises a weighting operation module and a stitching module, the weighting operation module comprises a plurality of weighting operation branches corresponding to different scales, and the inputting the image features of the plurality of sub-images to be detected into the feature fusion network for feature fusion to obtain fusion features comprises:
respectively inputting the image features of the plurality of sub-images to be detected into the weighting operation branches of corresponding scales for weighting, to obtain weighted features of the plurality of sub-images to be detected;
and inputting the weighted features of the plurality of sub-images to be detected into the stitching module for feature stitching, to obtain the fusion features.
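The fusion of claim 8 (per-scale weighting operation branches followed by a stitching step) can be illustrated as below; the scalar per-branch weights and the feature shapes are assumptions made for the sketch:

```python
import numpy as np

def weighted_fusion(per_scale_feats, branch_weights):
    # Each scale's feature passes through its own "weighting operation
    # branch" (here, multiplication by a scalar weight), and the
    # "stitching module" concatenates the weighted features.
    weighted = [w * f for w, f in zip(branch_weights, per_scale_feats)]
    return np.concatenate(weighted)

feats = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
branch_weights = [0.5, 0.3, 0.2]  # assumed learned per-scale weights
fused = weighted_fusion(feats, branch_weights)
```

The weights let training emphasize whichever scale is most informative before the features are concatenated.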
9. The defect detection method according to claim 8, wherein before the acquiring the object image of the object to be detected, the method further comprises:
acquiring a positive sample image with defects, and a defect type label and a defect position label of the positive sample image;
acquiring a negative sample image without defects;
training the defect positioning classification model in a gradient descent manner and a reinforcement learning manner according to the negative sample image, the positive sample image, and the defect type label and defect position label of the positive sample image;
and training the defect positioning model in a gradient descent manner according to the negative sample image, the positive sample image, and the defect position label of the positive sample image.
10. The defect detection method of claim 9, wherein the training the defect localization classification model according to the negative sample image, the positive sample image, and the defect class labels and defect location labels thereof in a gradient descent manner and a reinforcement learning manner comprises:
updating weight parameters of the feature extraction network and the positioning classification network in the defect positioning classification model in a gradient descent manner according to the negative sample image, the positive sample image, and the defect type label and defect position label of the positive sample image, and updating weight parameters of the feature extraction branches in the feature fusion network in a reinforcement learning manner.
11. The defect detection method of claim 10, wherein updating the weight parameters of the feature extraction branches in the feature fusion network by reinforcement learning comprises:
acquiring a first training loss of the defect positioning model, and acquiring a second training loss of the defect positioning classification model;
fusing the first training loss and the second training loss to obtain a fused training loss;
and determining a reinforcement learning reward according to the fused training loss, performing reinforcement learning according to the reward, and updating the weight parameters of the feature extraction branches in the feature fusion network.
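The reward construction of claim 11 (fuse the two training losses, derive a reward, update the per-branch fusion weights) might look like the following toy sketch. The convex loss combination, the negative-loss reward, and the advantage-style update rule are all illustrative assumptions, not the patent's disclosed scheme:

```python
import numpy as np

def fused_loss(loss_positioning, loss_positioning_classification, alpha=0.5):
    # Assumed fusion of the two training losses: a convex combination.
    return alpha * loss_positioning + (1 - alpha) * loss_positioning_classification

def reward_from_loss(loss):
    # Lower fused loss -> higher reward.
    return -loss

def update_branch_weights(weights, rewards, lr=0.1):
    # Toy policy-gradient-style step: shift weight toward branches
    # whose reward beat the mean, then renormalize to a simplex.
    w = weights + lr * (rewards - rewards.mean())
    w = np.clip(w, 1e-6, None)
    return w / w.sum()

weights = np.array([1 / 3, 1 / 3, 1 / 3])
# Hypothetical (first loss, second loss) pairs observed per branch.
rewards = np.array([
    reward_from_loss(fused_loss(0.4, 0.6)),
    reward_from_loss(fused_loss(0.2, 0.3)),
    reward_from_loss(fused_loss(0.5, 0.7)),
])
new_w = update_branch_weights(weights, rewards)
```

After the step, the branch with the lowest fused loss (the second) carries the largest fusion weight, which matches the claim's intent of rewarding the more useful branches.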
12. A defect detection apparatus, comprising:
the image acquisition module is used for acquiring an article image of the article to be detected;
the defect positioning module is used for performing defect positioning according to the object image and determining a reference defect area in which the object image is defective;
the image cropping module is used for cropping the object image at multiple scales according to the reference defect area to obtain a plurality of sub-images to be detected of different scales;
and the positioning classification module is used for performing defect positioning and classification according to the plurality of sub-images to be detected and determining a target defect area in which the object to be detected is defective and a corresponding target defect category.
13. An electronic device comprising a memory storing a computer program and a processor for running the computer program in the memory to perform the steps of the defect detection method of any of claims 1 to 11.
14. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the defect detection method of any of claims 1 to 11.
15. A computer program product comprising a computer program or instructions which, when executed by a processor, carries out the steps of the defect detection method of any of claims 1 to 11.
CN202310245929.XA 2023-03-06 2023-03-06 Defect detection method, device, electronic apparatus, storage medium, and program product Pending CN116977257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310245929.XA CN116977257A (en) 2023-03-06 2023-03-06 Defect detection method, device, electronic apparatus, storage medium, and program product


Publications (1)

Publication Number Publication Date
CN116977257A true CN116977257A (en) 2023-10-31

Family

ID=88482025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310245929.XA Pending CN116977257A (en) 2023-03-06 2023-03-06 Defect detection method, device, electronic apparatus, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN116977257A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853826A (en) * 2024-03-07 2024-04-09 誊展精密科技(深圳)有限公司 Object surface precision identification method based on machine vision and related equipment
CN117972487A (en) * 2024-01-26 2024-05-03 常州润来科技有限公司 Copper pipe milling face cutter disc defect detection method and system
CN118334031A (en) * 2024-06-14 2024-07-12 深圳思谋信息科技有限公司 Appearance defect detection method and device, storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN110555481B (en) Portrait style recognition method, device and computer readable storage medium
CN116977257A (en) Defect detection method, device, electronic apparatus, storage medium, and program product
WO2018121690A1 (en) Object attribute detection method and device, neural network training method and device, and regional detection method and device
CN110264444B (en) Damage detection method and device based on weak segmentation
CN111815564B (en) Method and device for detecting silk ingots and silk ingot sorting system
CN110517259A (en) A kind of detection method, device, equipment and the medium of product surface state
CN111160469A (en) Active learning method of target detection system
TW202009681A (en) Sample labeling method and device, and damage category identification method and device
CN115131283B (en) Defect detection and model training method, device, equipment and medium for target object
CN111310826B (en) Method and device for detecting labeling abnormality of sample set and electronic equipment
CN112528908B (en) Living body detection method, living body detection device, electronic equipment and storage medium
CN107808126A (en) Vehicle retrieval method and device
CN110175519B (en) Method and device for identifying separation and combination identification instrument of transformer substation and storage medium
CN113052295A (en) Neural network training method, object detection method, device and equipment
CN113065634B (en) Image processing method, neural network training method and related equipment
CN111967490A (en) Model training method for map detection and map detection method
CN116958512A (en) Target detection method, target detection device, computer readable medium and electronic equipment
CN111784053A (en) Transaction risk detection method, device and readable storage medium
CN115131339A (en) Factory tooling detection method and system based on neural network target detection
CN109598712A (en) Quality determining method, device, server and the storage medium of plastic foam cutlery box
CN114462526B (en) Classification model training method and device, computer equipment and storage medium
WO2022247628A1 (en) Data annotation method and related product
CN117173154A (en) Online image detection system and method for glass bottle
CN116977271A (en) Defect detection method, model training method, device and electronic equipment
CN116433936A (en) Image detection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination