CN112884691A - Data enhancement method and device, data enhancement equipment, and storage medium - Google Patents
Data enhancement method and device, data enhancement equipment, and storage medium
- Publication number: CN112884691A (application CN202110260102.7A)
- Authority: CN (China)
- Prior art keywords: image, training, workpiece, detection model, target detection
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/0002 — Inspection of images, e.g. flaw detection; G06T7/0004 — Industrial image inspection
- G06T2207/20081 — Training; Learning
- G06T2207/20212 — Image combination; G06T2207/20221 — Image fusion; Image merging
Abstract
A data enhancement method, a data enhancement apparatus, a data enhancement device, and a non-volatile computer-readable storage medium are provided. The data enhancement method includes: acquiring a first workpiece image containing a defect; identifying and cropping the image region where the defect is located in the first workpiece image to serve as a first fused image; acquiring a defect-free second workpiece image to serve as a second fused image; and fusing the first fused image and the second fused image to obtain a training image. In this way, the number of training images for defect types with a low probability of occurrence can be increased, so that the numbers of training images for the different defect types are substantially equal. Training a target detection model on this larger set of training images improves the training effect, the generalization performance of the model, and ultimately its defect detection performance.
Description
Technical Field
The present application relates to the field of detection technologies, and in particular, to a data enhancement method, a data enhancement apparatus, a data enhancement device, and a non-volatile computer-readable storage medium.
Background
At present, workpiece defects are generally detected with template-matching algorithms, which suffer from over-detection (false positives) and low recognition accuracy. By comparison, neural network models offer higher detection accuracy and are therefore increasingly favored.
Disclosure of Invention
The present application provides a data enhancement method, a data enhancement apparatus, a data enhancement device, and a non-volatile computer-readable storage medium.
The data enhancement method of the embodiments of the present application includes: acquiring a first workpiece image containing a defect; identifying and cropping the image region where the defect is located in the first workpiece image to serve as a first fused image; acquiring a defect-free second workpiece image to serve as a second fused image; and fusing the first fused image and the second fused image to obtain a training image.
The data enhancement apparatus of the embodiments of the present application includes a first acquisition module, an identification module, a second acquisition module, and a fusion module. The first acquisition module is configured to acquire a first workpiece image containing a defect; the identification module is configured to identify and crop the image region where the defect is located in the first workpiece image to serve as a first fused image; the second acquisition module is configured to acquire a defect-free second workpiece image to serve as a second fused image; and the fusion module is configured to fuse the first fused image and the second fused image to obtain a training image.
The data enhancement device of the embodiments of the present application includes a processor. The processor is configured to: acquire a first workpiece image containing a defect; identify and crop the image region where the defect is located in the first workpiece image to serve as a first fused image; acquire a defect-free second workpiece image to serve as a second fused image; and fuse the first fused image and the second fused image to obtain a training image.
The non-transitory computer-readable storage medium of the embodiments of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform the data enhancement method: acquiring a first workpiece image containing a defect; identifying and cropping the image region where the defect is located in the first workpiece image to serve as a first fused image; acquiring a defect-free second workpiece image to serve as a second fused image; and fusing the first fused image and the second fused image to obtain a training image.
According to the data enhancement method, the data enhancement apparatus, the data enhancement device, and the non-volatile computer-readable storage medium described above, more defective workpiece images are generated by fusing the image region where the defect is located in a first workpiece image with a defect-free second workpiece image, yielding a sufficient number of training images. The number of training images for defect types with a low probability of occurrence can thus be increased, so that the numbers of training images for the different defect types are substantially equal. Subsequently training the target detection model on this larger set of training images improves the training effect, the generalization performance of the model, and its defect detection performance.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for describing them are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a data enhancement method according to some embodiments of the present application;
FIG. 2 is a block diagram of a data enhancement device according to some embodiments of the present application;
FIG. 3 is a schematic plan view of a data enhancement device and detection device according to certain embodiments of the present application;
FIGS. 4-9 are schematic flow diagrams of data enhancement methods according to certain embodiments of the present application;
FIGS. 10-14 are schematic illustrations of a data enhancement method according to certain embodiments of the present application; and
FIG. 15 is a schematic diagram of a connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, a data enhancement method according to an embodiment of the present application includes the following steps:
011: acquiring a first workpiece image containing a defect;
012: identifying and cropping the image region where the defect is located in the first workpiece image to serve as a first fused image;
013: acquiring a defect-free second workpiece image to serve as a second fused image; and
014: fusing the first fused image and the second fused image to obtain a training image.
The data enhancement apparatus 10 of the embodiments of the present application includes a first acquisition module 11, an identification module 12, a second acquisition module 13, and a fusion module 14. The first acquisition module 11 is configured to acquire a first workpiece image containing a defect; the identification module 12 is configured to identify and crop the image region where the defect is located in the first workpiece image to serve as a first fused image; the second acquisition module 13 is configured to acquire a defect-free second workpiece image as a second fused image; and the fusion module 14 is configured to fuse the first fused image and the second fused image to obtain a training image. That is, step 011 can be implemented by the first acquisition module 11, step 012 by the identification module 12, step 013 by the second acquisition module 13, and step 014 by the fusion module 14.
The data enhancement device 100 of the embodiments of the present application includes a processor 20. The processor 20 is configured to: acquire a first workpiece image containing a defect; identify and crop the image region where the defect is located in the first workpiece image to serve as a first fused image; acquire a defect-free second workpiece image to serve as a second fused image; and fuse the first fused image and the second fused image to obtain a training image. That is, steps 011, 012, 013, and 014 may be performed by the processor 20.
Specifically, the data enhancement device 100 is connected to the detection device 200. The sensor 210 of the detection device 200 photographs the workpiece 300 to obtain image data, and the processor 20 reads the image data from the sensor 210 to obtain a workpiece image. For example, when the workpiece 300 is to be inspected, it is placed on the motion platform 220, and the sensor 210 collects information from the workpiece 300 to generate a workpiece image.
The motion platform 220 carries the workpiece 300, and its movement drives the workpiece 300 so that the sensor 210 can acquire information from the workpiece 300.
For example, the motion platform 220 includes an XY motion stage 221 and a Z motion stage 222, and the sensor 210 is disposed on the motion platform 220, specifically on the Z motion stage 222. The XY motion stage 221 moves the workpiece 300 within the horizontal plane, changing the relative position of the workpiece 300 and the sensor 210 in that plane, while the Z motion stage 222 moves the sensor 210 along the direction perpendicular to the horizontal plane. Together, the XY motion stage 221 and the Z motion stage 222 can change the three-dimensional position of the sensor 210 relative to the workpiece 300 (i.e., their relative position both within the horizontal plane and perpendicular to it).
It will be appreciated that the motion platform 220 is not limited to the above configuration; any structure capable of changing the three-dimensional position of the sensor 210 relative to the workpiece 300 may be used.
There may be one or more sensors 210, and multiple sensors 210 may be of different types; for example, the sensors 210 may include visible-light cameras, depth cameras, and the like. In the present embodiment, the sensor 210 is a visible-light camera.
When capturing the workpiece image, the processor 20 may adjust the distance between the sensor 210 and the workpiece 300 according to the sensor's field of view so that the entire workpiece 300 falls within it, in which case a single capture yields an image of the whole workpiece 300. Alternatively, the field of view of the sensor 210 may cover only part of the workpiece 300, and the motion platform 220 moves the sensor over different regions of the workpiece 300 to capture multiple workpiece images. In the present embodiment, the sensor 210 captures only a partial region of the workpiece 300 at a time, producing a plurality of workpiece images.
In the present embodiment, the workpiece 300 is exemplified by a wafer; typical wafer defects include foreign objects, residual glue, oxidation, bubbles, wrinkles, cracks, and the like.
After the workpiece images are acquired, the first workpiece images (with defects) and the second workpiece images (without defects) can be determined accurately by a detection model or with manual assistance. The image region containing the defect in each first workpiece image is then identified and cropped to serve as a first fused image. The first workpiece images contain defects of different types, so the cropped first fused images are defect images of different types, or of the same type but with different characteristics (such as size and color).
Defects may be identified manually: inspectors can generally recognize common defect types accurately, and once a defect is identified its edges can be traced to crop out the first fused image. Alternatively, identification may be performed by the processor 20, which locates and classifies the defect using a preset detection model and then crops the image region at the defect's position to obtain the first fused image.
To improve the training effect, when acquiring the second workpiece images, images of several wafers with different background patterns can be selected, so that a plurality of second workpiece images with different image backgrounds is obtained. This increases the diversity of the second workpiece images, improves the training effect, and reduces the sensitivity of the trained target detection model to the image background, so that complex backgrounds degrade defect detection accuracy less and the model still detects defects accurately across different backgrounds.
The first fused images are then fused with the second workpiece images (serving as second fused images) to obtain a plurality of defective training images, enhancing the training data in both quantity and variety and thereby improving the detection performance of the trained target detection model. The target detection model may be, but is not limited to, a two-stage detector (e.g., Faster R-CNN and its variants), a one-stage detector (e.g., YOLOv3 and its variants), or an anchor-free detector (e.g., CenterNet and its variants).
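As a concrete illustration of this crop-and-paste fusion step, the following is a minimal Python sketch. The patent does not specify a blending method, so the sketch simply overwrites pixels at a random position; the function name, the use of NumPy, and the returned bounding box (which anticipates the automatic labeling described later) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(defect_patch, clean_image):
    """Paste a cropped defect patch (a first fused image) onto a
    defect-free workpiece image (a second fused image) at a random
    position, returning the fused training image plus the bounding
    box (x, y, w, h) of the pasted defect for later labeling."""
    ph, pw = defect_patch.shape[:2]
    ih, iw = clean_image.shape[:2]
    assert pw < iw and ph < ih, "patch must be smaller than the background"
    x = int(rng.integers(0, iw - pw))
    y = int(rng.integers(0, ih - ph))
    fused = clean_image.copy()
    # Naive overwrite; alpha or Poisson blending could soften the seams.
    fused[y:y + ph, x:x + pw] = defect_patch
    return fused, (x, y, pw, ph)
```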
According to the data enhancement method, the data enhancement apparatus 10, and the data enhancement device 100 described above, more defective workpiece images are generated by fusing the image region where the defect is located in a first workpiece image with a defect-free second workpiece image, yielding enough training images. The number of training images for defect types with a low probability of occurrence can thus be increased so that the numbers of training images for the different defect types are substantially equal; subsequently training the target detection model on the enlarged set improves the training effect, the generalization performance of the model, and its defect detection performance.
Referring to fig. 2, 3 and 4, in some embodiments, the data enhancement method further includes:
015: before the first fused image and the second fused image are fused, performing transformation processing on the first fused image, the transformation processing including at least one of mirroring, translation, rotation, shearing, and deformation.
In certain embodiments, the data enhancement apparatus 10 further includes a transformation module 15. The transformation module 15 is configured to perform transformation processing on the first fused image before the first fused image and the second fused image are fused, the transformation processing including at least one of mirroring, translation, rotation, shearing, and deformation. That is, step 015 may be performed by the transformation module 15.
In some embodiments, the processor 20 is further configured to perform transformation processing on the first fused image before fusing the first fused image and the second fused image, the transformation processing including at least one of mirroring, translation, rotation, shearing, and deformation. That is, step 015 may be performed by the processor 20.
Specifically, the number of first fused images is limited. To increase their diversity, the first fused image may be transformed before being fused with the second fused image: mirroring, translating, rotating, shearing, or deforming the defect produces different first fused images, which in turn increases the diversity of the training images obtained by fusion.
For example, the processor 20 may mirror, translate, rotate, shear, or deform the first fused image. Of course, the processor 20 may also apply translation and rotation together; or translation, rotation, and mirroring; or translation, rotation, mirroring, and shearing; or it may apply translation, rotation, and mirroring simultaneously and repeat them with different distances, angles, and symmetry axes. The combinations are not listed exhaustively here.
When transforming, the processor 20 may randomly generate a transformation to apply to the first fused image. Alternatively, the processor 20 may record the transformations already applied to each first fused image and, before each new transformation, choose one that differs from all previous ones, so that every transformed first fused image differs from those generated before. This further increases the diversity of the first fused images and improves the training effect.
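A minimal sketch of this transformation step is given below, assuming OpenCV and NumPy. The specific parameter choices (angles, shear factors) and the `seen`-set bookkeeping that avoids repeating a transformation are illustrative assumptions; deformation is omitted for brevity.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def transform_patch(patch, seen):
    """Apply one randomly chosen transformation (mirror, rotation, or
    shear) whose descriptor is not yet in `seen`, so repeated calls on
    the same patch keep producing new variants."""
    h, w = patch.shape[:2]
    for _ in range(100):  # guard against exhausting every descriptor
        kind = rng.choice(["mirror", "rotate", "shear"])
        if kind == "mirror":
            axis = int(rng.integers(0, 2))     # 0: flip vertically, 1: horizontally
            key, out = ("mirror", axis), cv2.flip(patch, axis)
        elif kind == "rotate":
            angle = float(rng.choice([10, 30, 60, 90, 140]))
            m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            key, out = ("rotate", angle), cv2.warpAffine(patch, m, (w, h))
        else:
            k = float(rng.choice([0.1, 0.2]))  # shear factor
            m = np.float32([[1, k, 0], [0, 1, 0]])
            key, out = ("shear", k), cv2.warpAffine(patch, m, (w, h))
        if key not in seen:
            seen.add(key)                      # record it, as described above
            return out
    return patch  # every recorded descriptor was already used
```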
Referring to fig. 2, 3 and 5, in some embodiments there are a plurality of first fused images, and step 014 includes:
0141: selecting a target fused image from the plurality of first fused images; and
0142: fusing the target fused image and the second fused image to obtain a training image.
In some embodiments, the fusion module 14 is further configured to select a target fused image from the plurality of first fused images and to fuse the target fused image with the second fused image to obtain a training image. That is, steps 0141 and 0142 may be performed by the fusion module 14.
In some embodiments, the processor 20 is further configured to select a target fused image from the plurality of first fused images and to fuse the target fused image with the second fused image to obtain a training image. That is, steps 0141 and 0142 may be performed by the processor 20.
Specifically, since defective wafers are generally scarce, the number of first fused images is usually small. To guarantee the training effect, the first fused images are selected so that all defect types are represented. For example, for each defect type, one typical first workpiece image of that type may be selected to yield one first fused image of the type; or several (e.g., two or three) typical first workpiece images of that type may be selected to yield several first fused images of the type.
When fusing, the first fused images may first be transformed, one or more of the transformed first fused images then selected as target fused images, and the target fused images fused with the second fused images. Alternatively, one or more first fused images may be randomly selected as target fused images, then transformed, and finally fused with the second fused images. In this way each second fused image is fused with a different first fused image; for example, 1000 second fused images can generate 1000 different training images, increasing the diversity of the generated training images.
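The per-type selection could look like the following sketch; `patches_by_type` and `per_type` are hypothetical names, and random sampling stands in for the manual choice of "typical" images described above.

```python
import random

def pick_targets(patches_by_type, per_type=1):
    """Select target fused images so that every defect type is
    represented; patches_by_type maps a defect-type name to the list
    of first fused images (defect patches) of that type."""
    targets = []
    for defect_type, patches in patches_by_type.items():
        k = min(per_type, len(patches))        # never ask for more than exist
        targets.extend(random.sample(patches, k))
    return targets
```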
Referring to fig. 2, fig. 3 and fig. 6, in some embodiments, the data enhancement method further includes:
016: inputting all the training images as a training set to the target detection model for training, so that the target detection model converges.
In certain embodiments, the data enhancement apparatus 10 further includes a training module 16. The training module 16 is configured to input all the training images as a training set to the target detection model for training, so that the target detection model converges. That is, step 016 can be implemented by the training module 16.
In some embodiments, the processor 20 is further configured to input all training images as a training set to the target detection model for training, so that the target detection model converges. That is, step 016 can be implemented by processor 20.
Specifically, after the training images are obtained, all of them may be input to the target detection model for training until the model converges. Convergence here means that the model trained on the training images can detect defects of the workpiece 300 accurately, for example when the detection accuracy reaches a predetermined level (e.g., 90%, 95%, or 98%).
Referring to fig. 2, 3 and 7, in some embodiments, step 016 includes:
0161: labeling the type and the position of the defect corresponding to the first fused image in the training image to generate a verification image;
0162: inputting the training set to the target detection model to output a detection result;
0163: determining a loss value according to the verification image and the detection result; and
0164: adjusting the target detection model according to the loss value so that the target detection model converges.
In some embodiments, the training module 16 is further configured to label the type and location of the defect in the training image corresponding to the first fused image to generate a verification image; inputting a training set to a target detection model to output a detection result; determining a loss value according to the verification image and the detection result; and adjusting the target detection model according to the loss value so as to make the target detection model converge. That is, steps 0161 through 0164 may be performed by training module 16.
In some embodiments, the processor 20 is further configured to label the type and location of the defect in the training image corresponding to the first fused image to generate a verification image; inputting a training set to a target detection model to output a detection result; determining a loss value according to the verification image and the detection result; and adjusting the target detection model according to the loss value so as to make the target detection model converge. That is, steps 0161 to 0164 may be performed by processor 20.
Specifically, after a training image is obtained, the defect in it may be labeled in advance. For example, during fusion the processor 20 obtains the position of the first fused image within the second fused image and the defect type corresponding to the first fused image; the processor 20 can therefore label the defect in the training image automatically, i.e., its type and position. After all training images are labeled, a verification image corresponding to each training image is generated, and the processor 20 builds the training set from the training images and verification images, for example as the set of all training images together with all verification images.
During training, a training image is first input to the target detection model, which outputs a detection result containing the type and position of the defect found in the image. The detection result is then compared with the corresponding verification image, for example by checking whether the detected defect type matches that of the verification image and by measuring the positional deviation, from which the loss value is determined.
The processor 20 adjusts the target detection model according to the loss value so that the model converges. For example, the type-detection parameters are adjusted according to whether the detected defect type matches that of the corresponding verification image, and the position-detection parameters are adjusted according to the deviation between the detected and labeled defect positions. Training the target detection model on a training set comprising both training images and verification images in this way drives the model to convergence and ensures its detection performance.
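The automatic labeling during fusion can be sketched as follows, reusing the hypothetical fuse() helper from the earlier sketch; the dictionary layout of the label is an assumption.

```python
def make_sample(defect_patch, defect_type, clean_image):
    """Fuse a defect patch onto a clean image and return the training
    image together with its label (defect type plus bounding box),
    i.e. the content of the corresponding verification image."""
    fused, bbox = fuse(defect_patch, clean_image)  # fuse() from the earlier sketch
    return fused, {"type": defect_type, "bbox": bbox}
```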
Referring to fig. 2, 3 and 8, in some embodiments, step 0163 includes:
01631: comparing the type of the defect in the detection result with the type of the defect in the corresponding verification image to determine a type loss value;
01632: comparing the position of the defect in the detection result with the position of the defect in the corresponding verification image to determine a position loss value; and
01633: determining the loss value according to the type loss value and the position loss value.
In some embodiments, the training module 16 is further configured to compare the type of the defect in the detection result with the type of the defect in the corresponding verification image to determine a type loss value; compare the position of the defect in the detection result with the position of the defect in the corresponding verification image to determine a position loss value; and determine the loss value according to the type loss value and the position loss value. That is, steps 01631 to 01633 may be performed by the training module 16.
In some embodiments, the processor 20 is further configured to compare the type of the defect in the detection result with the type of the defect in the corresponding verification image to determine a type loss value; compare the position of the defect in the detection result with the position of the defect in the corresponding verification image to determine a position loss value; and determine the loss value according to the type loss value and the position loss value. That is, steps 01631 to 01633 may be performed by the processor 20.
Specifically, when determining the loss value, the type of the defect in the detection result is first compared with the type of the defect in the corresponding verification image to determine the type loss value: if the two types are the same, the type loss value is 0; otherwise it is 1.
The position of the defect in the detection result is then compared with the position of the defect in the corresponding verification image to determine the position loss value. If the defect in the detection result is marked by a first defect frame, the defect in the verification image is marked by a second defect frame, and both frames are rectangular, the difference between their position coordinates (e.g., the distance between their centers) can be computed, and the position loss value determined from this difference; the larger the difference, the larger the position loss value.
Since correctly determining the defect type is more important, the type loss value may be given the larger weight when combining the two losses, for example loss value = a × type loss value + b × position loss value, where a > b. This ensures the accuracy of defect-type detection after the processor 20 adjusts the target detection model according to the loss value.
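As a sketch under these assumptions (a real detector would typically use cross-entropy and an IoU-based box loss instead), the weighted combination might look like this; the dictionary keys and default weights are illustrative.

```python
def combined_loss(pred, truth, a=0.7, b=0.3):
    """Weighted sum of the type loss (0 when the predicted defect type
    matches the verification label, else 1) and the position loss
    (distance between the centers of the two defect frames); a > b
    gives the type loss the larger weight."""
    type_loss = 0.0 if pred["type"] == truth["type"] else 1.0
    (px, py), (tx, ty) = pred["center"], truth["center"]
    pos_loss = ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
    return a * type_loss + b * pos_loss
```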
Referring to fig. 2, 3 and 9, in some embodiments, step 016 further includes:
0165: when the loss value is smaller than a preset threshold, determining that the target detection model has converged; and
0166: when the loss value is greater than the preset threshold, transforming the training set and training the target detection model again on the transformed training set, until the target detection model converges.
In some embodiments, the training module 16 is further configured to determine that the target detection model has converged when the loss value is smaller than the preset threshold, and, when the loss value is greater than the preset threshold, to transform the training set and train the target detection model again on the transformed training set until the model converges. That is, steps 0165 and 0166 may be performed by the training module 16.
In some embodiments, the processor 20 is further configured to determine that the target detection model converges when the loss value is less than a preset threshold; and when the loss value is greater than the preset threshold value, carrying out transformation processing on the training set, and training the target detection model again according to the training set after the transformation processing until the target detection model is converged. That is, step 0165 and step 0166 may be performed by processor 20.
Specifically, after the target detection model has been adjusted according to the loss value, whether it has converged is checked: the training set is input to the model and the model outputs a loss value. If a loss value is produced for each training image, the average over all training images is taken as the final output loss. The processor 20 then compares this loss value with the preset threshold. If the loss value is less than or equal to the threshold, the detection loss is small, the detection accuracy meets the requirement, and the target detection model is judged to have converged.
If the loss value is greater than the preset threshold, the detection loss is too large, the detection accuracy does not yet meet the requirement, and the model is judged not to have converged: training must continue. In that case the training set is first transformed; the transformation of a training image proceeds as follows:
referring to fig. 10, for example, the processor 20 mirrors each training image P1 to obtain a mirrored image P2 as a new training image P1. The mirrored image P2 is mirror-symmetric to the training image P1, and the symmetry axis can be arbitrary: mirroring may use any side of the training image P1 as the axis (in fig. 10, the rightmost side), or a diagonal of P1, or the line joining the midpoints of any two sides, so that a number of new training images are obtained.
Referring to fig. 11, for another example, the processor 20 translates each training image P1 to obtain a translated image P3 as a new training image P1. Specifically, a predetermined image region (the region occupied by the training image P1) is fixed, the training image P1 is then translated, e.g., to the left, to the right, or to the upper left (to the right in fig. 11), and the image within the predetermined region (i.e., the translated image P3) is taken as the new training image P1. Since the position of the defect in the image changes after translation, a number of new training images P1 are obtained.
Referring to fig. 12, for another example, the processor 20 rotates each training image P1 to obtain a rotated image P4 as a new training image P1. Specifically, a predetermined image region is fixed, the training image P1 is rotated, e.g., clockwise or counterclockwise by 10, 30, 60, 90, or 140 degrees (30 degrees counterclockwise in fig. 12), and the image within the predetermined region (i.e., the rotated image P4) is taken as the new training image P1. The position of the defect changes after rotation, yielding a number of new training images P1.
Referring to fig. 13, for another example, the processor 20 crops each training image P1 to obtain a cropped image P5 as a new training image P1. Specifically, a predetermined image region is fixed and part of the training image P1 is cut away, e.g., 1/4, 1/3, or 1/2 of it (1/2 in fig. 13); the image within the predetermined region (i.e., the cropped image P5) is taken as the new training image P1, yielding a number of new training images P1.
Referring to fig. 14, for another example, the processor 20 deforms each training image P1 to obtain a deformed image P6 as a new training image P1. Specifically, a predetermined image region is fixed and the training image P1 is deformed, e.g., compressed in the transverse direction so that the originally rectangular training image P1 takes on a distorted shape; the image within the predetermined region (i.e., the deformed image P6) is taken as the new training image P1. Both the position and the shape of the defect change after deformation, yielding a number of new training images P1.
Of course, the processor 20 may also apply translation and rotation to the training image together; or translation, rotation, and mirroring; or translation, rotation, mirroring, and shearing; or it may apply translation, rotation, and mirroring simultaneously and repeat them with different distances, angles, and symmetry axes. The combinations are not listed exhaustively here.
Similarly, when the training images are transformed, the corresponding verification images can be transformed synchronously using the same transformations, so that after the training set is transformed each training image still corresponds to its verification image, preserving the subsequent training effect on the target detection model.
The processor 20 then performs a second round of training on the transformed training set, again judges from the loss value whether the target detection model has converged, and, if not, transforms the training set once more and performs a third round of training. This repeats until the trained target detection model converges.
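The round-by-round procedure can be summarized in the following sketch. The `model.fit` method returning the mean loss and the `augment` callback (which would apply one of the mirror/translate/rotate/crop/deform operations to an image and the same transform to its label, in sync) are hypothetical stand-ins, not an API defined by the patent.

```python
def train_until_converged(model, train_set, threshold, augment, max_rounds=50):
    """Run training rounds until the mean loss drops to the preset
    threshold; after an unsuccessful round, transform every training
    image and its verification label in sync before retraining."""
    for _ in range(max_rounds):
        loss = model.fit(train_set)   # assumed to return the mean loss over the set
        if loss <= threshold:
            return True               # converged
        # Transform the whole training set (images and labels together).
        train_set = [augment(image, label) for image, label in train_set]
    return False
```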
In other embodiments, to judge convergence more reliably, a preset verification set is input after the target detection model has been trained on the training set, and the model outputs a loss value on it. The images in the verification set differ from the training images in the training set, so the verification set can verify convergence accurately. For example, when there are many training images, one part may serve as the training set and another part as the verification set; or the verification set may be obtained by transforming the training set.
Referring to fig. 2 and 3, in some embodiments the processor 20 is further configured to inspect an image of the workpiece 300 with the converged target detection model to determine the type, position, and confidence of each defect, and to output the type, position, and confidence of a defect when its confidence exceeds the confidence threshold corresponding to that defect type.
Specifically, after training of the target detection model is complete, the sensor 210 acquires an image of the workpiece 300 to be inspected, and the processor 20 applies the target detection model to the image to determine the type, position, and confidence of each defect. When the confidence exceeds the threshold corresponding to the current defect's type, the defect is considered accurately detected, and its type, position, and confidence are output as the detection result.
The confidence threshold is tied to the defect type: different defect types use different thresholds, which improves detection accuracy for each type in a targeted way. In addition, the target detection model is an end-to-end model, using a single model and a single objective function. In a multi-module pipeline, slight mismatches between the modules' training targets make it hard to reach an optimal overall state, and errors in different modules compound and degrade the final detection accuracy; an end-to-end model, by contrast, is simple to implement and maintain, can be trained to an optimal effect, detects well, and has low engineering complexity.
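Per-type thresholding at inference time could be as simple as the following sketch; the threshold values and dictionary layout are hypothetical.

```python
# Hypothetical per-type confidence thresholds; real values would be tuned
# separately for each defect class.
THRESHOLDS = {"foreign_object": 0.6, "residual_glue": 0.5, "oxidation": 0.7}

def filter_detections(detections, default=0.5):
    """Keep a detection only when its confidence exceeds the threshold
    configured for its defect type."""
    return [d for d in detections
            if d["confidence"] > THRESHOLDS.get(d["type"], default)]
```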
Referring to fig. 15, the present application also provides one or more non-transitory computer-readable storage media 300 containing a computer program 302. When the computer program 302 is executed by one or more processors 20, the processors 20 perform the data enhancement method of any of the embodiments described above.
For example, referring to fig. 1-3, the computer program 302, when executed by the one or more processors 20, causes the processors 20 to perform the steps of:
011: acquiring a first workpiece image containing a defect;
012: identifying and cropping the image region where the defect is located in the first workpiece image to serve as a first fused image;
013: acquiring a defect-free second workpiece image to serve as a second fused image; and
014: fusing the first fused image and the second fused image to obtain a training image.
As another example, referring to fig. 2, 3 and 4 in conjunction, when the computer program 302 is executed by the one or more processors 20, the processors 20 may further perform the steps of:
015: before the first fused image and the second fused image are fused, performing transformation processing on the first fused image, the transformation processing including at least one of mirroring, translation, rotation, shearing, and deformation.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples and features of the various embodiments or examples described in this specification can be combined and combined by those skilled in the art without contradiction.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as will be understood by those skilled in the art of the embodiments of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that they are exemplary and not to be construed as limiting the present application; those of ordinary skill in the art can make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (12)
1. A method of data enhancement, comprising:
acquiring a first workpiece image containing a defect;
identifying and cropping an image region where the defect is located in the first workpiece image to serve as a first fused image;
acquiring a defect-free second workpiece image to serve as a second fused image; and
fusing the first fused image and the second fused image to obtain a training image.
2. The data enhancement method of claim 1, further comprising:
before the first fused image and the second fused image are fused, performing transformation processing on the first fused image, the transformation processing comprising at least one of mirroring, translation, rotation, shearing, and deformation.
3. The data enhancement method of claim 1, wherein there are a plurality of first fused images, and the fusing the first fused image and the second fused image to obtain a training image comprises:
selecting a target fused image from the plurality of first fused images; and
fusing the target fused image and the second fused image to obtain the training image.
4. The data enhancement method according to any one of claims 1 to 3, further comprising:
inputting all the training images as a training set to a target detection model for training, so that the target detection model converges.
5. The data enhancement method of claim 4, wherein the inputting all the training images as a training set to a target detection model for training so that the target detection model converges comprises:
labeling the type and the position of the defect corresponding to the first fused image in the training image to generate a verification image;
inputting the training set to the target detection model to output a detection result;
determining a loss value according to the verification image and the detection result; and
adjusting the target detection model according to the loss value so that the target detection model converges.
6. The data enhancement method of claim 5, wherein the determining a loss value according to the verification image and the detection result comprises:
comparing the type of the defect in the detection result with the type of the defect in the corresponding verification image to determine a type loss value;
comparing the position of the defect in the detection result with the position of the defect in the corresponding verification image to determine a position loss value; and
determining the loss value according to the type loss value and the position loss value.
7. The data enhancement method of claim 5 or 6, wherein the inputting all the training images as a training set to the target detection model so that the target detection model converges further comprises:
when the loss value is smaller than a preset threshold, determining that the target detection model converges; and
when the loss value is greater than the preset threshold, transforming the training set, and training the target detection model again on the transformed training set until the target detection model converges.
8. The data enhancement method of claim 1, wherein there are a plurality of second workpiece images, and the image backgrounds of the plurality of second workpiece images are different from one another.
9. The data enhancement method of claim 1, wherein the workpiece comprises a wafer and the defect comprises at least one of a foreign object, residual glue, and oxidation.
10. A data enhancement apparatus, comprising:
a first acquisition module, configured to acquire a first workpiece image containing a defect;
an identification module, configured to identify and crop an image region where the defect is located in the first workpiece image to serve as a first fused image;
a second acquisition module, configured to acquire a defect-free second workpiece image to serve as a second fused image; and
a fusion module, configured to fuse the first fused image and the second fused image to obtain a training image.
11. A data enhancement device, comprising a processor configured to:
acquire a first workpiece image containing a defect;
identify and crop an image region where the defect is located in the first workpiece image to serve as a first fused image;
acquire a defect-free second workpiece image to serve as a second fused image; and
fuse the first fused image and the second fused image to obtain a training image.
12. A non-transitory computer-readable storage medium containing a computer program which, when executed by a processor, causes the processor to perform the data enhancement method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110260102.7A CN112884691B (en) | 2021-03-10 | 2021-03-10 | Data enhancement device, data enhancement apparatus, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112884691A true CN112884691A (en) | 2021-06-01 |
CN112884691B CN112884691B (en) | 2024-09-10 |
Family
ID=76054058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110260102.7A Active CN112884691B (en) | 2021-03-10 | 2021-03-10 | Data enhancement device, data enhancement apparatus, and storage medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359071A (en) * | 2021-12-10 | 2022-04-15 | 中科星图空间技术有限公司 | Target data fusion enhancement method, system and device based on progressive learning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020054293A1 (en) * | 2000-04-18 | 2002-05-09 | Pang Kwok-Hung Grantham | Method of and device for inspecting images to detect defects |
CN109583489A (en) * | 2018-11-22 | 2019-04-05 | 中国科学院自动化研究所 | Defect classifying identification method, device, computer equipment and storage medium |
US20190377972A1 (en) * | 2018-06-08 | 2019-12-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for training, classification model, mobile terminal, and readable storage medium |
CN111640091A (en) * | 2020-05-14 | 2020-09-08 | 阿丘机器人科技(苏州)有限公司 | Method for detecting product defects and computer storage medium |
CN111709948A (en) * | 2020-08-19 | 2020-09-25 | 深兰人工智能芯片研究院(江苏)有限公司 | Method and device for detecting defects of container |
CN111814867A (en) * | 2020-07-03 | 2020-10-23 | 浙江大华技术股份有限公司 | Defect detection model training method, defect detection method and related device |
US20210295483A1 (en) * | 2019-02-26 | 2021-09-23 | Tencent Technology (Shenzhen) Company Limited | Image fusion method, model training method, and related apparatuses |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||