
CN111476750B - Method, device, system and storage medium for detecting stain of imaging module - Google Patents


Info

Publication number
CN111476750B
CN111476750B (application number CN201910006562.XA)
Authority
CN
China
Prior art keywords
pixel
test image
stain
pixels
pixel value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910006562.XA
Other languages
Chinese (zh)
Other versions
CN111476750A (en)
Inventor
马江敏
黄宇
吴高德
金壮壮
廖海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Sunny Opotech Co Ltd
Original Assignee
Ningbo Sunny Opotech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Sunny Opotech Co Ltd filed Critical Ningbo Sunny Opotech Co Ltd
Priority to CN201910006562.XA priority Critical patent/CN111476750B/en
Publication of CN111476750A publication Critical patent/CN111476750A/en
Application granted granted Critical
Publication of CN111476750B publication Critical patent/CN111476750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application provides a method, a device, a system and a storage medium for detecting stains on an imaging module. The method comprises the following steps: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, wherein in the binarization processing each pixel of the enhanced test image is set as either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and determining the stain pixels adjacent to or connected with each stain pixel so as to determine the size and position of the stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.

Description

Method, device, system and storage medium for detecting stain of imaging module
Technical Field
The application relates to the field of quality detection of imaging modules, in particular to a method, a device, a system and a storage medium for detecting stains of an imaging module.
Background
With the increasing demand for electronic products equipped with camera functions, such as smart phones, the demand for imaging modules is increasing, and so are the quality requirements placed on them. In order to ensure the quality of the imaging module, detection is necessary in its production process, and stain detection is an important part of this. When performing stain detection on an imaging module, factors such as image noise and ambient brightness can cause the stain test result to be misjudged. In order to meet the detection requirement, a method is needed that can accurately judge imaging module stains under the influence of factors such as image noise and ambient brightness.
Disclosure of Invention
The invention provides a method for detecting stains on an imaging module, which comprises the following steps: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, wherein in the binarization processing each pixel of the enhanced test image is set as either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and determining the stain pixels adjacent to or connected with each stain pixel so as to determine the size and position of the stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
In one embodiment, performing image enhancement processing on the test image to obtain an enhanced test image includes: reducing the minimum pixel value in the test image to a target minimum pixel value; increasing the maximum pixel value in the test image to a target maximum pixel value; the pixel values between the minimum pixel value and the maximum pixel value are adjusted to obtain an enhanced test image by:
stretched pixel value = stretching coefficient * (pixel value - minimum pixel value) + target minimum pixel value
wherein the stretching coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretched pixel value represents the adjusted pixel value.
In one embodiment, performing image enhancement processing on the test image to obtain an enhanced test image includes: performing Fourier transform on the test image to obtain a Fourier spectrum; moving a zero frequency point of the Fourier spectrum to a central position; removing predetermined frequencies in the fourier spectrum; moving the zero frequency point of the Fourier spectrum back to the original position; and performing inverse Fourier transform on the Fourier spectrum, and performing one of taking a real part, taking an absolute value and taking a square root on a pixel value of each pixel in the image obtained by the inverse Fourier transform to obtain an enhanced test image.
In one embodiment, removing the predetermined frequencies in the fourier spectrum comprises: the predetermined frequencies in the fourier spectrum are removed by a gaussian low pass filter function or a gaussian band pass filter function.
In one embodiment, before performing image enhancement processing on the test image to obtain an enhanced test image, the method further includes: and performing dimension reduction processing on the test image by one of a region average dimension reduction method, a downsampling dimension reduction method and a bilinear dimension reduction method.
In one embodiment, the method further comprises: expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels, wherein the pixel values of the pixels in the expanded region are determined by: determining the optical center of the test image; obtaining a brightness decreasing relation of the imaging module according to the pixel value of the pixel in the test image, the distance from the optical center and the brightness value of the optical center; and determining pixel values of the pixels in the extended region according to the brightness decreasing relation and the distance between the pixels in the extended region and the optical center.
In one embodiment, the method further comprises: and expanding the boundary of the test image subjected to the dimension reduction processing outwards by a predetermined pixel number, wherein the pixel value of the pixel in the expansion area is determined according to the pixel at the boundary of the test image subjected to the dimension reduction processing or the pixel within a predetermined range at the boundary.
In one embodiment, the method further comprises: the pixel value of each pixel in the binarized test image is adjusted according to the pixel values within a predetermined range around each pixel in the binarized test image.
In one embodiment, adjusting the pixel value of each pixel in the binarized test image according to pixel values within a predetermined range around each pixel comprises: setting the average of the pixel values within the predetermined range around each pixel as that pixel's value; or taking a weighted average of the pixel values within the predetermined range around each pixel and determining that pixel's value according to the weighted average result.
In one embodiment, adjusting the pixel value of each pixel in the binarized test image according to pixel values within a predetermined range around each pixel in the binarized test image comprises: the median value of the pixel values in a predetermined range around each pixel in the binarized test image is set as the pixel value of each pixel in the binarized test image.
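As an illustration of the median-filtering embodiment above, here is a minimal NumPy sketch; the function name and the edge-padding choice are my own assumptions, not specified by the patent:

```python
import numpy as np

def median_filter(img, radius=1):
    """Set each pixel to the median of the (2*radius+1) x (2*radius+1)
    neighborhood around it, which removes isolated noise pixels from
    the binarized test image."""
    padded = np.pad(img, radius, mode='edge')  # replicate border pixels (assumed policy)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = np.median(window)
    return out
```

An isolated stain pixel surrounded by non-stain pixels is suppressed, since the median of its neighborhood is the non-stain value.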
In one embodiment, the method further comprises: for each pixel in the test image subjected to the dimension reduction processing, comparing the pixel value of the pixel with the average pixel value of other pixels in a preset window around the pixel, and judging whether the pixel is a stain pixel or not according to a comparison result; finding a stain pixel adjacent to or communicating with the stain pixel to obtain the size and position of the stain region; and outputting the obtained size and position as a luminance difference detection result.
In one embodiment, the method further comprises: and combining the detection result and the brightness difference detection result as a final detection result.
In one embodiment, comparing the pixel value of the pixel to the average pixel value of the other pixels within a predetermined window around the pixel comprises: summing the pixel values of the pixels within the predetermined window around the pixel to obtain a window sum; and subtracting the pixel value of the pixel from the window sum and dividing by the number of other pixels in the window to obtain the average pixel value of the other pixels, wherein, when the predetermined window of a pixel overlaps the predetermined window of another pixel whose window sum is already known, the window sum of the pixel is obtained from the known window sum by subtracting the sum of the pixel values in the part of the known window that does not overlap and adding the sum of the pixel values in the part of the pixel's own window that does not overlap.
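The overlapping-window optimization described above can be sketched as follows in NumPy; the function name and the row-major sliding order are illustrative assumptions:

```python
import numpy as np

def window_sums(img, k):
    """Sum of every k x k window, reusing the previous window's sum as the
    window slides one column to the right: subtract the column that left
    the window, add the column that entered it."""
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1), dtype=np.int64)
    for y in range(h - k + 1):
        s = int(img[y:y + k, 0:k].sum())  # first window in the row: full sum
        out[y, 0] = s
        for x in range(1, w - k + 1):
            s -= int(img[y:y + k, x - 1].sum())      # column no longer covered
            s += int(img[y:y + k, x + k - 1].sum())  # newly covered column
            out[y, x] = s
    return out
```

The average of the other pixels around a center pixel is then `(window_sum - center_value) / (k * k - 1)`.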
In one embodiment, the method further comprises verifying the detection result and removing stain pixels determined to be false detections from the detection result, wherein verifying the detection result comprises verifying each stain pixel by: determining the pixel in the test image corresponding to the stain pixel; comparing the pixel value of the determined pixel with the average pixel value of the other pixels within a predetermined window around it; and judging whether the verified stain pixel is a false detection according to the comparison result.
The invention also provides a device for detecting stains on an imaging module, which comprises: an image intensifier for performing image enhancement processing on the test image obtained by the imaging module to obtain an enhanced test image; a binarizer for binarizing the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and a stain region determiner for determining the stain pixels adjacent to or connected with each stain pixel so as to determine the size and position of the stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
The invention also provides a system for detecting stains on an imaging module, which comprises: a processor; and a memory coupled to the processor and storing machine-readable instructions executable by the processor to perform operations comprising: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, wherein in the binarization processing each pixel of the enhanced test image is set as either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and determining the stain pixels adjacent to or connected with each stain pixel so as to determine the size and position of the stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
The present application also provides a non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to: obtain a test image through an imaging module; perform image enhancement processing on the test image to obtain an enhanced test image; perform binarization processing on the enhanced test image to obtain a binarized test image, wherein in the binarization processing each pixel of the enhanced test image is set as either a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and determine the stain pixels adjacent to or connected with each stain pixel so as to determine the size and position of the stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
Drawings
Other features, objects and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 illustrates a flowchart of a method of spot detection of an imaging module according to an exemplary embodiment of the present application;
FIG. 2 illustrates a flowchart of a method of spot detection of an imaging module according to another exemplary embodiment of the present application;
FIG. 3 shows a schematic diagram for explaining a bilinear interpolation dimension reduction method;
FIG. 4 shows a schematic diagram for illustrating boundary expansion of a test image;
FIG. 5 shows a schematic diagram for illustrating boundary expansion using the brightness decrementing feature of an imaging module;
FIG. 6 illustrates a flowchart of a method of spot detection of an imaging module according to yet another exemplary embodiment of the present application;
FIG. 7 shows a schematic diagram for explaining obtaining a sum of pixel values within a predetermined window around a pixel; and
fig. 8 shows a schematic diagram of a computer system suitable for use in implementing the terminal device or server of the present application.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various exemplary embodiments. It may be evident, however, that the various exemplary embodiments may be practiced without these specific details or with one or more equivalent arrangements. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the various exemplary embodiments.
In the drawings, the size and relative sizes of layers, films, panels, regions, etc. may be exaggerated for clarity and description. Furthermore, like reference numerals refer to like elements.
When an element or layer is referred to as being "on," "connected to" or "coupled to" another element or layer, it can be directly on, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being "directly on," "directly connected to" or "directly coupled to" another element or layer, there are no intervening elements or layers present. For purposes of this disclosure, "at least one of X, Y and Z" and "at least one selected from the group consisting of X, Y and Z" may be interpreted as any combination of two or more of X only, Y only, Z only, or X, Y and Z (such as, for example, XYZ, XYY, YZ and ZZ). Like numbers refer to like elements throughout. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, first component, first region, first layer, and/or first section discussed below could be termed a second element, second component, second region, second layer, and/or second section without departing from the teachings of the present disclosure.
For purposes of description, spatially relative terms such as "below," "lower," "above," "upper," and the like may be used herein and thus describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. In addition to the orientations depicted in the drawings, spatially relative terms are intended to encompass different orientations of the device in use, operation, and/or manufacture. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" may encompass both an orientation of above and below. Furthermore, the device may be otherwise oriented (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Various embodiments are described herein with reference to cross-sectional illustrations that are schematic illustrations of idealized exemplary embodiments and/or intermediate structures. As such, variations in the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Accordingly, the exemplary embodiments disclosed herein should not be construed as limited to the particular illustrated shapes of regions but are to include deviations in shapes that result, for example, from manufacturing. For example, an implanted region shown as a rectangle will typically have rounded or curved features and/or gradients of implant concentration at its edges rather than a binary change from implanted to non-implanted region. Likewise, a buried region formed by implantation may result in some implantation in the region between the buried region and the surface through which implantation occurs. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to be limiting.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Unless specifically so defined herein, terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense.
In the invention, when performing stain detection on an imaging module, the imaging module is used to capture a test image, and the image is examined for stains in order to detect whether the imaging module has flaws.
Fig. 1 illustrates a flowchart of a method of performing spot detection on an imaging module according to an exemplary embodiment of the present invention.
Referring to fig. 1, a method 100 of performing a stain detection on an imaging module according to an exemplary embodiment of the present invention includes: step S101, obtaining a test image through an imaging module; step S102, performing image enhancement processing on a test image; step S103, performing binarization processing on the enhanced test image; and step S104, determining a stain area in the binarized test image.
The above steps will be described in detail below.
In step S101, an imaging module may be used to capture a test image, and luminance information may be included in the obtained test image. In some embodiments, a test image including only luminance information may be obtained by extracting the luminance information, and in such a test image, a pixel value of each pixel may be a luminance value.
In step S102, an image enhancement process may be performed on the test image to obtain an enhanced test image. Due to the different production line environments and the different causes of the stains, the degree and type of the stains may be different, for example, the type of the stains may include deep stains, shallow stains, ultra-shallow stains, etc., and the positions of the stains may include edge positions, center positions, etc. To distinguish these complex stains from the test image, it is necessary to effectively distinguish the foreground stain from the background. The aim of the image enhancement processing of the test image is to make the stain more prominent against the background to facilitate the detection of the stain in a subsequent step.
Enhancement of the test image may be achieved, for example, by a linear stretching method or a frequency domain based enhancement method.
In the linear stretching method, a maximum pixel value and a minimum pixel value in the test image may be calculated first, and then a target maximum pixel value and a target minimum pixel value in the enhanced test image are determined, wherein the target maximum pixel value may be greater than the maximum pixel value in the test image, and the target minimum pixel value may be less than the minimum pixel value in the test image, and the range of values of the target maximum pixel value and the target minimum pixel value may be: 0 to 255. After determining the target maximum pixel value and the target minimum pixel value in the enhanced test image, the linear stretch coefficient may be determined by the following equation (1):
lineCoef = (dstImgMax - dstImgMin) / (imgMax - imgMin) (1)
wherein lineCoef represents the linear stretch coefficient, dstImgMax and dstImgMin represent the target maximum and target minimum pixel values, respectively, and imgMax and imgMin represent the maximum and minimum pixel values in the test image, respectively.
After determining the linear stretch coefficient, the pixel value of each pixel in the enhanced test image may be determined by the following equation (2):
dstValue_k = lineCoef * (srcValue_k - imgMin) + dstImgMin (2)
wherein dstValue_k represents the pixel value of the k-th pixel in the enhanced test image, and srcValue_k represents the pixel value of the k-th pixel in the test image before enhancement.
After linear stretching, the differences between the pixel values in the test image are increased, so that pixels with different pixel values are more clearly distinguished, which is advantageous for the subsequent stain determination.
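Equations (1) and (2) can be sketched as follows, assuming the test image is a NumPy array; the function name is illustrative, not from the patent:

```python
import numpy as np

def linear_stretch(img, dst_min=0, dst_max=255):
    """Linearly stretch pixel values to [dst_min, dst_max]:
    lineCoef = (dstImgMax - dstImgMin) / (imgMax - imgMin)        # equation (1)
    dstValue = lineCoef * (srcValue - imgMin) + dstImgMin         # equation (2)
    """
    img = img.astype(np.float64)
    img_min, img_max = img.min(), img.max()
    line_coef = (dst_max - dst_min) / (img_max - img_min)
    return line_coef * (img - img_min) + dst_min
```

For example, stretching an image whose values span 10 to 40 onto 0 to 255 maps 10 to 0, 40 to 255, and values in between proportionally.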
In the frequency domain based enhancement method, the test image may first be fourier transformed to obtain a fourier spectrum F (u, v). In the obtained fourier spectrum, the zero frequency point is not usually located at the center position, and therefore, the zero frequency point can be moved to the center position of the fourier spectrum after the fourier spectrum is obtained. Then, predetermined frequencies in the fourier spectrum may be removed, e.g. frequencies below the predetermined frequency, or frequencies within a certain frequency range.
Removing the predetermined frequencies in the fourier spectrum may be achieved by gaussian low pass filtering. The gaussian low pass filter frequency domain design function may be the following equation (3):
H_GL(u, v) = exp(-D^2(u, v) / (2 * sigma^2)), where D(u, v) = ((u - M/2)^2 + (v - N/2)^2)^(1/2) (3)
wherein (u, v) represents coordinates in the Fourier spectrum, M and N represent the dimensions of the Fourier spectrum, (M/2, N/2) is the center of the spectrum, and sigma is the standard deviation of the Gaussian function, whose value range may be 2 to 200.
Removing predetermined frequencies in the fourier spectrum may also be accomplished by gaussian bandpass filtering. The gaussian bandpass filter frequency domain design function may be the following equation (4):
H_GB(u, v) = H_GL1(u, v) - H_GL2(u, v) (4)
wherein H_GL1(u, v) and H_GL2(u, v) are two gaussian low pass frequency domain filters of the form of equation (3), whose standard deviations sigma_1 and sigma_2 may each take values in the range 2 to 200, with sigma_1 > sigma_2.
Removing the predetermined frequencies from the Fourier spectrum may then be achieved by computing G(u, v) = F(u, v) * H(u, v), where H(u, v) may be H_GL(u, v) or H_GB(u, v) as described above.
After obtaining G(u, v), the zero frequency point in G(u, v) may be shifted back to its original position, and an inverse Fourier transform may be performed on G(u, v) to obtain an image g(x, y). The pixel values in this image are complex, so one of taking the real part, taking the absolute value, and taking the square root may be applied to each pixel value to obtain the enhanced test image.
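The frequency-domain enhancement steps above can be sketched with NumPy's FFT routines; the low-pass variant is shown, and the default sigma and the choice of taking the absolute value are illustrative assumptions:

```python
import numpy as np

def frequency_enhance(img, sigma=30):
    """Fourier transform, shift zero frequency to the center, apply the
    Gaussian low-pass H_GL of equation (3), shift back, inverse transform,
    and take the absolute value of the complex result."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))      # zero frequency moved to center
    u = np.arange(M)[:, None]
    v = np.arange(N)[None, :]
    D2 = (u - M / 2) ** 2 + (v - N / 2) ** 2   # squared distance from spectrum center
    H = np.exp(-D2 / (2 * sigma ** 2))         # Gaussian low-pass filter
    G = F * H
    g = np.fft.ifft2(np.fft.ifftshift(G))      # zero frequency moved back, then inverse FFT
    return np.abs(g)
```

A band-pass variant per equation (4) would use the difference of two such filters with sigma_1 > sigma_2.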
After the image enhancement processing is performed on the test image, the enhancement test image may be subjected to binarization processing in step S103 to obtain a binarized test image. In this step, a predetermined stain threshold value may be determined first, and a pixel value higher than the predetermined stain threshold value may be set as a stain pixel value, for example, set to 1, and a pixel value lower than the predetermined stain threshold value may be set as a non-stain pixel value, for example, set to 0. However, the present invention is not limited thereto, and for example, the stain pixel value may be 0 and the non-stain pixel value may be 1. After the binarization process, the pixel values in the enhanced test image are adjusted to 0 or 1, i.e., the test image is binarized. This makes the distinction between stains and background more pronounced, since the pixel values in the binarized test image only comprise two values.
Next, in step S104, a stain region may be detected based on the binarized test image.
Specifically, the stain region may be detected by a region growing method. First, each pixel in the binarized test image may be traversed to check its pixel value. For a pixel whose value is the stain pixel value, stain pixels adjacent to or connected with it are searched for starting from that pixel, wherein an adjacent stain pixel is a stain pixel directly bordering it, a connected stain pixel is a stain pixel reachable through other stain pixels, and the region composed of adjacent or connected stain pixels may be referred to as a stain region. The size and position of the stain region in the test image determined by this detection serve as the detection result.
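A minimal breadth-first-search sketch of the region growing described above; 4-connectivity is an assumed notion of adjacency, and the function name is illustrative:

```python
import numpy as np
from collections import deque

def find_stain_regions(binary_img):
    """Group adjacent or connected stain pixels (value 1) into regions.
    Each region is returned as a list of (row, col) coordinates, from which
    its size (pixel count) and position can be read off."""
    h, w = binary_img.shape
    visited = np.zeros((h, w), dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if binary_img[y, x] == 1 and not visited[y, x]:
                queue = deque([(y, x)])       # grow a new region from this seed
                visited[y, x] = True
                pixels = []
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary_img[ny, nx] == 1
                                and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(pixels)
    return regions
```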
Fig. 2 illustrates a flowchart of a method of spot detection for an imaging module according to another exemplary embodiment of the present invention.
The method shown in fig. 2 differs from the method shown in fig. 1 in that the method in fig. 2 adds steps of image dimension reduction, boundary expansion, and image filtering. Steps S201, S204, S205, and S207 in fig. 2 are the same as S101 to S104 in fig. 1, respectively, and their description will not be repeated here; the differences between fig. 2 and fig. 1 will mainly be described hereinafter.
As the number of pixels of the imaging module increases, the size of the image obtained therefrom increases, and the time required to process the image increases. In order to save processing time and computing resources, the test image may be subjected to a dimension reduction process, i.e., step S202 is performed.
The dimension reduction processing may be implemented by, for example, a region average dimension reduction method, a downsampling dimension reduction method, or a bilinear dimension reduction method.
In the area average dimension reduction method, firstly, the width reduction multiple and the height reduction multiple of an image can be determined, and then the width and the height of the dimension reduction image can be obtained according to the following formulas (5) and (6):
resImgW=round(imgW/zoom_W)(5)
resImgH=round(imgH/zoom_H)(6)
wherein, resImgW and resImgH represent the width and height of the dimension-reduced image respectively, imgW and imgH represent the width and height of the test image before dimension reduction respectively, zoom_W and zoom_H represent the reduction factors in width and height respectively, and round () is a rounding function.
Then, for each pixel in the dimension-reduced image, a corresponding region thereof in the test image can be obtained from the position and reduction multiple thereof, and this region can be determined by the following formulas (7) - (10):
startW=x*zoom_W(7)
endW=(x+1)*zoom_W(8)
startH=y*zoom_H(9)
endH=(y+1)*zoom_H(10)
wherein x and y represent coordinates of a pixel in the dimension-reduced image, startW and endW represent a width start position and a width end position of an area in the test image corresponding to the pixel, respectively, and startH and endH represent a height start position and a height end position of the area in the test image corresponding to the pixel, respectively. By the width start position, the width end position, the height start position, and the height end position, an area corresponding to a pixel in the dimension-reduced image can be determined in the test image.
For each pixel in the dimension-reduced image, its pixel value may be the average of the pixels in the region of the test image corresponding thereto.
By reducing the test image and filling the pixels in the dimension-reduced image with the average value of the pixels in the corresponding region within the test image, a dimension-reduced image smaller than the original test image size can be obtained, and in the subsequent processing, various processes can be performed using the dimension-reduced image as the test image.
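For illustration, the area average dimension reduction of formulas (5)-(10) can be sketched in Python as follows (the function and variable names are illustrative and not part of the present application):

```python
import numpy as np

def area_average_reduce(img, zoom_w, zoom_h):
    """Shrink `img` by averaging each zoom_h x zoom_w region (formulas (5)-(10))."""
    img_h, img_w = img.shape
    res_w = round(img_w / zoom_w)   # formula (5)
    res_h = round(img_h / zoom_h)   # formula (6)
    out = np.empty((res_h, res_w), dtype=np.float64)
    for y in range(res_h):
        for x in range(res_w):
            # Region in the test image corresponding to pixel (x, y),
            # per formulas (7)-(10).
            start_w, end_w = int(x * zoom_w), int((x + 1) * zoom_w)
            start_h, end_h = int(y * zoom_h), int((y + 1) * zoom_h)
            out[y, x] = img[start_h:end_h, start_w:end_w].mean()
    return out

# A 4x4 image reduced by a factor of 2 in width and height.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
small = area_average_reduce(img, 2, 2)
```

Each pixel of the dimension-reduced image is the average of its 2x2 source region, matching the filling rule described above.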
In the downsampling dimension reduction method, a reduction multiple of the image, which is the N-th power of 2, may first be determined; the number of downsampling passes then corresponds to N. After the number of downsampling passes is determined, the image may be downsampled. Specifically, the odd or even rows and columns are extracted from the original test image, and a dimension-reduced image is composed of these rows and columns. If the number of downsampling passes is greater than 1, each downsampling after the first is performed on the dimension-reduced image obtained in the previous pass. After the determined number of passes has been reached, the resulting image is taken as the final dimension-reduced image, and various processes can be performed using it as the test image.
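A minimal sketch of this downsampling scheme (illustrative only; here the even-indexed rows and columns are kept on each pass):

```python
import numpy as np

def downsample_reduce(img, n):
    """Halve the image n times by keeping the even-indexed rows and columns,
    i.e. reduce by a factor of 2**n (the reduction multiple is the N-th power of 2)."""
    out = img
    for _ in range(n):
        # each pass operates on the result of the previous pass
        out = out[::2, ::2]
    return out

img = np.arange(64).reshape(8, 8)
small = downsample_reduce(img, 2)  # reduction multiple 4
```

After two passes, only the rows and columns with original indices 0 and 4 remain.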
In the bilinear interpolation dimension reduction method, first, the width reduction multiple and the height reduction multiple of the image may be determined, and then the width and the height of the dimension reduction image may be obtained by the formulas (5) and (6) as described above. Next, for each pixel in the dimension-reduced image, the position in the original test image corresponding thereto can be determined by the following formulas (11) and (12):
i=x*zoom_W(11)
j=y*zoom_H(12)
where x and y represent the coordinates of the pixel in the dimension-reduced image, zoom_W and zoom_H represent the reduction multiples in width and height, respectively, and i and j represent the position in the original test image corresponding to the pixel in the dimension-reduced image.
Then, the four pixels closest to the determined position are found in the test image, and the pixel value of the corresponding pixel in the dimension-reduced image is determined from the pixel values of these four pixels.
A process of determining the pixel value of the corresponding pixel in the reduced-dimension image from the determined four pixels will be described below with reference to fig. 3.
Fig. 3 schematically shows a part of a test image. As shown in fig. 3, the point P represents the position corresponding to one pixel in the reduced-dimension image determined by the above-described formulas (11) and (12), and the points I11, I21, I12, and I22 represent the four pixels nearest to the point P, respectively.
First, linear interpolation is performed in the X direction from the pixel values of the pixels I11 and I21 to obtain the pixel value at the position R2. Similarly, linear interpolation may be performed in the X direction based on the pixel values of pixels I12 and I22 to obtain the pixel value at position R1. Then, linear interpolation may be performed in the Y direction based on the pixel values at the positions R1 and R2 to obtain a pixel value at the position P, which may be a pixel value of a corresponding pixel in the reduced-dimension image. It should be noted that although the above-described process describes that linear interpolation in the X direction is performed first and then linear interpolation in the Y direction is performed, the present invention is not limited thereto, and linear interpolation in the Y direction may be performed first, for example, pixel values at positions between the pixels I11 and I12 and pixel values at positions between the pixels I21 and I22 are obtained first, respectively, and then linear interpolation in the X direction is performed based on these pixel values to obtain pixel values at the point P.
The pixel value of each pixel in the reduced-dimension image can be determined by the method described above. After the pixel values of all the pixels are determined, a final reduced-dimension image is obtained, and various processes can be performed using the reduced-dimension image as a test image.
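The bilinear interpolation dimension reduction described above can be sketched as follows (illustrative only; boundary positions are clamped to the image, an assumption not spelled out in the text):

```python
import numpy as np

def bilinear_reduce(img, zoom_w, zoom_h):
    """Bilinear dimension reduction: each target pixel (x, y) is mapped to
    (i, j) = (x*zoom_W, y*zoom_H) in the source image (formulas (11)-(12))
    and interpolated from the four nearest source pixels."""
    img_h, img_w = img.shape
    res_w = round(img_w / zoom_w)
    res_h = round(img_h / zoom_h)
    out = np.empty((res_h, res_w))
    for y in range(res_h):
        for x in range(res_w):
            i = min(x * zoom_w, img_w - 1)   # formula (11), clamped
            j = min(y * zoom_h, img_h - 1)   # formula (12), clamped
            x0, y0 = int(i), int(j)
            x1, y1 = min(x0 + 1, img_w - 1), min(y0 + 1, img_h - 1)
            fx, fy = i - x0, j - y0
            # Interpolate in the X direction on both rows, then in the Y
            # direction; the reverse order gives the same value.
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            out[y, x] = (1 - fy) * top + fy * bottom
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
small = bilinear_reduce(img, 2.0, 2.0)
```

For an integer reduction multiple the mapped positions fall on source pixels, so the sample values are taken directly; for fractional multiples the four-pixel interpolation of fig. 3 applies.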
After the dimension reduction process is performed on the test image, the boundary of the dimension-reduced test image may be expanded in step S203.
Fig. 4 shows a schematic diagram of boundary expansion of a test image. As shown in FIG. 4, region 400 represents an unexpanded test image and regions 401-404 represent outwardly expanded regions. In the expansion process, the number of pixels whose boundaries are expanded in the width direction and the height direction may be determined first, and then the width and the height of the expanded test image may be determined according to the following formulas (13) and (14):
newImgW=imgW+2*edgeNum_W(13)
newImgH=imgH+2*edgeNum_H(14)
where newImgW and newImgH represent the width and height of the extended test image, respectively, imgW and imgH represent the width and height of the unexpanded test image, respectively, and edgenum_w and edgenum_h are the numbers of pixels whose boundaries are extended in the width direction and the height direction, respectively.
The boundary expansion may be implemented, for example, by a fixed value filling boundary expansion method, a copy outer boundary value expansion method, a mirror boundary expansion method, or a boundary expansion method based on a module brightness characteristic.
In the fixed value filling boundary extension method, the extension regions 401-404 may be filled with fixed pixel values, respectively, and the fixed pixel values may have a value range of: 0 to 255.
In the copy outer boundary value extension method, as shown in fig. 4, the extension area 401 may be filled with the pixel value of the C1 column (i.e., the pixel value of a column of pixels located at the left edge of the test image), the extension area 402 may be filled with the pixel value of the C2 column in the test image (i.e., the pixel value of a column of pixels located at the right edge of the test image), the extension area 403 may be filled with the pixel value of the R1 row in the test image (i.e., the pixel value of a row of pixels located at the upper edge of the test image), and the extension area 404 may be filled with the pixel value of the R2 row in the test image (i.e., the pixel value of a row of pixels located at the lower edge of the test image).
In the mirror boundary extension method, as shown in FIG. 4, the C1-1 columns (i.e., the column in the extension region 401 that is closest to the test image) may be filled with pixel values of the C1 columns. The C1-2 columns (i.e., the column next adjacent to the test image in the extended region 401) may be filled with pixel values for the C1+1 columns. Similarly, the entire extension region 401 may be filled. In other words, in the mirror boundary extension method, the pixel values in the test image are symmetrically filled into extension regions symmetrical about the symmetry axis with the four edges of the test image as symmetry axes, respectively. Regions 402 through 404 may be filled in using similar methods.
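The fixed-value, copy-outer-boundary, and mirror extension methods above correspond, under the interpretation given here, to numpy's "constant", "edge", and "symmetric" padding modes; a short sketch (illustrative, not the patent's implementation):

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])

# Copy outer boundary value extension: repeat the edge rows/columns outward.
copied = np.pad(img, 2, mode="edge")

# Mirror boundary extension: reflect about the image edges, so the column
# nearest the image repeats C1 and the next one repeats C1+1, as described.
mirrored = np.pad(img, 2, mode="symmetric")

# Fixed value filling extension: fill with a constant in the range 0 to 255.
fixed = np.pad(img, 2, mode="constant", constant_values=0)
```

With an extension of edgeNum_W = edgeNum_H = 2, the padded images have size imgW+2*edgeNum_W by imgH+2*edgeNum_H, matching formulas (13) and (14).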
A boundary expansion method based on the brightness characteristics of the module will be described below with reference to fig. 5, which shows a test image and an expansion portion.
First, the optical center of the test image may be determined, and the optical center O of the test image is schematically shown in fig. 5, and in fig. 5, a region 500 represents the test image and a region 501 represents the extension region. Then, the brightness decreasing characteristic of the imaging module is determined according to the pixel value of a pixel (for example, the pixel P1) in the test image, the brightness value of the optical center O and the distance between the pixel and the optical center O. Taking the pixel as the pixel P1 as an example, the difference between the pixel value of the pixel P1 and the luminance value of the optical center O may be divided by the distance D1 between the pixel P1 and the optical center O to determine the luminance decreasing characteristic of the imaging module. Then, the pixel value to be filled at the position P2 can be determined according to the distance D2 of the position P2 from the optical center O in the extension area and the brightness decreasing characteristic of the imaging module. The pixel values at other locations in the extended region 501 may be determined using a method similar to the pixel value determination method of location P2 described above.
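The brightness-based fill value computation can be sketched as follows (illustrative; a linear falloff model with hypothetical names and values):

```python
def fill_by_luminance_falloff(center_value, p1_value, d1, d2):
    """Estimate the fill value at distance d2 from the optical center O, given a
    reference pixel P1 with value p1_value at distance d1 from O.
    The slope is the brightness decreasing characteristic described above."""
    slope = (p1_value - center_value) / d1
    return center_value + slope * d2

# Optical center brightness 200; a pixel 100 px away reads 150, so brightness
# drops by 0.5 per pixel of distance; fill position P2 is 120 px from O.
fill = fill_by_luminance_falloff(200.0, 150.0, 100.0, 120.0)
```

The same computation is repeated for every position in the extension region 501, using its own distance from the optical center.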
It should be noted that, although the boundary expansion is performed on the test image subjected to the dimension reduction processing is described above, the present invention is not limited thereto. For example, in the method according to the invention, the dimension reduction process may be omitted, and the boundary expansion may be performed directly on the test image obtained by the imaging module.
Through boundary expansion, pixels at the boundary of the original test image become interior pixels, which facilitates detecting whether a stain exists at the boundary of the original test image and can reduce erroneous judgments at the boundary.
For the above-described test image subjected to the down-scaling and/or boundary expansion, the enhancement processing and the binarization processing (i.e., step S204 and step S205) as described above may be performed.
In step S206, the binarized test image may be subjected to filtering processing. In the filtering process, the pixel value of each pixel in the binarized test image may be adjusted according to the pixel values within a predetermined range around each pixel in the binarized test image.
The filtering process may be implemented by a stencil filtering method or a statistical ordering filtering method.
In the template filtering method, the width and height of the template may first be determined, both of which are odd numbers. The template may then be designed; for example, an average template or a Gaussian template may be used.
The average template can be represented by the following formula (15):
w(i,j)=1/(tempW*tempH)(15)
wherein w(i,j) represents the template function, and tempW and tempH represent the template width and template height, respectively. The function of the average template is to average the pixel values within the template range.
The Gaussian template may be represented by the following formula (16):
w(i,j)=(1/(2*pi*sigma^2))*exp(-(i^2+j^2)/(2*sigma^2))(16)
wherein w(i,j) represents the template function, (i,j) represents a position in the template relative to its center, and sigma is the standard deviation of the Gaussian function, with a value range of 0.1 to 20. The function of the Gaussian template is to take a weighted average of the pixel values within the template range.
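A minimal numpy sketch of such a Gaussian weighting template (illustrative only; here the weights are normalized to unit sum, which absorbs the constant factor so the weighted average preserves overall brightness):

```python
import numpy as np

def gaussian_template(temp_w, temp_h, sigma):
    """Build a tempW x tempH Gaussian template as in formula (16);
    sigma is expected in the range 0.1 to 20, temp_w and temp_h are odd."""
    half_w, half_h = temp_w // 2, temp_h // 2
    ii, jj = np.meshgrid(np.arange(-half_h, half_h + 1),
                         np.arange(-half_w, half_w + 1),
                         indexing="ij")
    w = np.exp(-(ii ** 2 + jj ** 2) / (2 * sigma ** 2))
    return w / w.sum()   # normalize so the weights sum to 1

tpl = gaussian_template(3, 3, 1.0)
```

The template is largest at its center and symmetric, so nearer pixels receive larger weights in the weighted average.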
For each pixel in the binarized test image, its pixel value can be determined by the following formula (17):
filt(i,j)=Σ_(s,t)w(s,t)*srcImg(i+s,j+t)(17)
where the sum runs over all positions (s,t) in the template, filt(i,j) is the pixel value of the pixel with coordinates (i,j) in the filtered binarized test image, and srcImg() represents taking a pixel value from the binarized test image. Formula (17) represents: for the pixel at (i,j) in the binarized test image, its pixel value is determined from the pixel values within the range of the template w. The pixel values of all pixels in the binarized test image can be adjusted by the above formula (17).
In the statistical ordering filtering method, the width and height of the template may first be determined, both of which are odd numbers. The pixel value of each pixel in the binarized test image may then be determined by the following formula (18):
filtImg(i,j)=median_{(s,t)∈S_xy}srcImg(s,t)(18)
wherein (i,j) is a coordinate in the binarized test image, filtImg is the filtered image, srcImg is the binarized test image, S_xy is the set of coordinates centered on (i,j) with size tempW×tempH, and median() is the filter function, which computes the pixel value at the middle position of the ordered pixel values within the region S_xy, i.e., the median pixel value. Formula (18) represents: the median of the pixel values within a predetermined range around each pixel in the binarized test image is set as the pixel value of that pixel. The pixel values of all pixels in the binarized test image can be adjusted by the above formula (18).
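Both filtering variants can be sketched with a single windowed helper (illustrative; replicate padding at the edges is an assumption, and np.mean/np.median stand in for the average template of formula (15) and the median filter of formula (18)):

```python
import numpy as np

def window_filter(img, temp_w, temp_h, reducer):
    """Apply `reducer` over a temp_w x temp_h window around each pixel.
    temp_w and temp_h must be odd; edges use replicate padding."""
    pw, ph = temp_w // 2, temp_h // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = reducer(padded[y:y + temp_h, x:x + temp_w])
    return out

# A single bright outlier (noise) in a dark image.
img = np.array([[0., 0., 0.],
                [0., 255., 0.],
                [0., 0., 0.]])
mean_f = window_filter(img, 3, 3, np.mean)      # average template
median_f = window_filter(img, 3, 3, np.median)  # statistical ordering filter
```

The median filter removes the isolated outlier entirely, while the average template only spreads it out, which illustrates why filtering reduces the influence of noise on the test image.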
By filtering, the influence of light source, noise and other external environments in the production process on the test image can be reduced.
After the filtering process, the size and position of the stain region in the binarized test image may be determined as a detection result in step S207.
Although the method shown in fig. 2 adds three steps S202, S203 and S206 to the method shown in fig. 1, the present invention is not limited thereto, and only one or more of the above steps may be added on the basis of the method shown in fig. 1.
Although not shown in fig. 2, the method according to the present disclosure may further include verification of the detection result to prevent erroneous judgment or detection.
The verification may be performed based on the test image that has not undergone enhancement processing. Specifically, the corresponding pixels in the non-enhanced test image may first be determined from the stain pixels included in the detection result. For each corresponding pixel, its pixel value is compared with the average pixel value of the other pixels within a predetermined window around it, and whether the verified stain pixel is a false detection is judged according to the comparison result. In general, the pixel value of a stain pixel is lower than the pixel values of the surrounding normal pixels; therefore, when the pixel value of a pixel is lower than the average pixel value of the other pixels within the surrounding predetermined window, the pixel can be regarded as a stain pixel, and otherwise as a normal pixel. When the pixel in the non-enhanced test image corresponding to a stain pixel in the detection result is judged to be a normal pixel, that stain pixel in the detection result can be considered an erroneous judgment or false detection. By this verification method, falsely detected stain pixels can be found and removed from the detection result.
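A sketch of this verification check on the non-enhanced test image (illustrative; the window size of 5 and all names are assumptions):

```python
import numpy as np

def is_false_detection(raw_img, y, x, win=5):
    """Verify a candidate stain pixel against the un-enhanced test image.
    Returns True (false detection) when the pixel is NOT darker than the
    average of the other pixels in the surrounding win x win window."""
    h, w = raw_img.shape
    half = win // 2
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    window = raw_img[y0:y1, x0:x1]
    others_avg = (window.sum() - raw_img[y, x]) / (window.size - 1)
    return bool(raw_img[y, x] >= others_avg)

raw = np.full((7, 7), 200.0)
raw[3, 3] = 120.0                          # genuinely dark pixel
real_stain = is_false_detection(raw, 3, 3)  # dark vs. neighbourhood: kept
normal_px = is_false_detection(raw, 0, 0)   # normal pixel: false detection
```

Candidates flagged as false detections by this check are removed from the detection result.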
Since the above-described verification process verifies the result in the enhanced test image by the test image that is not enhanced, it can avoid erroneous judgment of the stained pixels due to the enhancement processing of the image.
The detection methods described above are generally more effective for detecting shallow or very shallow stains. In order to comprehensively detect various types of stains, another detection process may be performed in parallel while the above-described method is being performed.
Fig. 6 shows a flow chart in which two detection processes are performed in parallel. Steps S6011 to S6017 are the same as steps S201 to S207 shown in fig. 2, and their detailed description is not repeated. Fig. 6 differs from fig. 2 in that fig. 6 includes steps S6021 to S6023 and step S603. The differences between fig. 6 and fig. 2 will mainly be described in detail hereinafter.
Referring to fig. 6, in step S6021, the test image obtained by the imaging module may be subjected to a dimension reduction process, which is the same as step S202 in fig. 2, and thus, a detailed description thereof is omitted herein.
In step S6022, each pixel in the reduced-dimension test image may be detected by a luminance difference detection method. In the luminance difference detection method, for each pixel, the pixel value of the pixel is compared with the average pixel value of other pixels in a predetermined window around the pixel, and if the pixel value of the pixel is lower than the average pixel value of other pixels in the predetermined window around the pixel, the pixel can be considered as a stain pixel.
In some embodiments, when calculating the pixel average value within one predetermined window, step S6022 may reuse the calculation results of overlapping predetermined windows around it to increase the calculation speed and save computing resources.
Specifically, in calculating the pixel average value of other pixels in a predetermined window around one pixel, the pixel value sum of all pixels in the predetermined window may be calculated first, and then the pixel value of the one pixel is subtracted from the pixel value sum and divided by the number of other pixels in the predetermined window to obtain the pixel average value in the predetermined window. When a predetermined window of one pixel overlaps with a predetermined window of another pixel, the sum of pixel values of pixels in the overlapping region may not be repeatedly calculated. This embodiment will be described below with reference to fig. 7.
As shown in fig. 7, block 701 represents a predetermined window around a pixel, for which the sum of pixel values has already been calculated. After the detection of the pixel corresponding to block 701 is completed, detection of an adjacent pixel (for example, the pixel to its right) may be performed; the predetermined window of that adjacent pixel is represented by block 702. As can be seen from fig. 7, block 702 partially overlaps block 701. When calculating the sum of pixel values in block 702, the sum over the overlapping portion can be retained, and only the sum over the newly added portion needs to be computed. For example, the sum of pixel values in block 702 may be calculated by the following formula (19):
Sum_702=Sum_701+(Sum_right-Sum_left)(19)
wherein Sum_701 and Sum_702 represent the sums of pixel values in blocks 701 and 702, respectively, and Sum_left and Sum_right represent the sums of pixel values in the two non-overlapping portions, i.e., the column strip of block 701 that leaves the window and the column strip of block 702 that newly enters it, respectively.
If the sum of pixel values within the box above box 701 is to be calculated, then equation (19) above may be modified to equation (20) below:
Sum_new=Sum_701+(Sum_upper-Sum_lower)(20)
wherein Sum_new represents the sum of pixel values in the new block to be calculated, and Sum_lower and Sum_upper represent the sums of pixel values in the two non-overlapping portions, i.e., the row strip of block 701 that leaves the window and the row strip of the new block that enters it, respectively.
If the block to be calculated is moved relative to block 701 both in the left-right direction and in the up-down direction (for example, one pixel to the right and one pixel upward), the two updates can be combined, and the sum of pixel values within the block may be calculated as:
Sum_new=Sum_701+(Sum_right-Sum_left)+(Sum_upper-Sum_lower)(21)
by the method, unnecessary repeated calculation can be reduced when pixel value sums in a preset window around the pixel are calculated, the calculation efficiency is improved, the calculation time is reduced, and therefore the current required inspection speed can be better met.
In step S6023 following step S6022, other stain pixels adjacent to or communicating with the detected stain pixels may be found to determine a stain region in the test image, for example, to determine the size and position of the stain region as a luminance difference detection result.
Next, in step S603, the detection results obtained through steps S6011 to S6017 may be combined with the luminance difference detection results obtained through steps S6021 to S6023 to obtain a final detection result.
By the method shown in fig. 6, both more pronounced spots and shallower spots can be detected effectively, resulting in a more comprehensive detection result.
Although steps S6011 to S6017 shown in fig. 6 are the same as steps S201 to S207 shown in fig. 2, the present application is not limited thereto, and for example, steps S6011 to S6017 in fig. 6 may be replaced with steps S101 to S104 in fig. 1.
The application also provides a device for detecting the stain of the imaging module, which comprises: the image intensifier is used for carrying out image intensification processing on the test image obtained by the imaging module so as to obtain an intensified test image; a binarizer for binarizing the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value, respectively, by comparing a pixel value of the enhanced test image with a predetermined stain threshold value; and a stain region determiner for determining stain pixels adjacent to or communicating with the respective stain pixels to determine a size and a position of a stain region of the test image as a detection result, wherein the two communicating stain pixels are stain pixels connected by other stain pixels.
In one embodiment, the image intensifier is for: reducing the minimum pixel value in the test image to a target minimum pixel value; increasing the maximum pixel value in the test image to a target maximum pixel value; the pixel values between the minimum pixel value and the maximum pixel value are adjusted to obtain an enhanced test image by:
stretching pixel value = stretching coefficient * (pixel value - minimum pixel value) + target minimum pixel value
The stretching coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretching pixel value represents the adjusted pixel value.
In one embodiment, the image intensifier is for: performing Fourier transform on the test image to obtain a Fourier spectrum; moving a zero frequency point of the Fourier spectrum to a central position; removing predetermined frequencies in the fourier spectrum; moving the zero frequency point of the Fourier spectrum back to the original position; and performing inverse Fourier transform on the Fourier spectrum, and performing one of taking a real part, taking an absolute value and taking a square root on a pixel value of each pixel in the image obtained by the inverse Fourier transform to obtain an enhanced test image.
In one embodiment, the image intensifier is for: the predetermined frequencies in the fourier spectrum are removed by a gaussian low pass filter function or a gaussian band pass filter function.
In one embodiment, the apparatus further comprises: and the dimension reducing device is used for carrying out dimension reduction processing on the test image by one of a regional average dimension reduction method, a downsampling dimension reduction method and a bilinear dimension reduction method.
In one embodiment, the apparatus further comprises: a boundary expander for expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels, wherein the pixel values of the pixels in the expanded region are determined by: determining the optical center of the test image; obtaining a brightness decreasing relation of the imaging module according to the pixel value of the pixel in the test image, the distance from the optical center and the brightness value of the optical center; and determining pixel values of the pixels in the extended region according to the brightness decreasing relation and the distance between the pixels in the extended region and the optical center.
In one embodiment, the apparatus further comprises: and a boundary expander for expanding the boundary of the dimension-reduced test image outward by a predetermined number of pixels, wherein the pixel values of the pixels in the expanded region are determined according to the pixels at the boundary of the dimension-reduced test image or the pixels within a predetermined range at the boundary.
In one embodiment, the apparatus further comprises: and an image filter for adjusting the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around each pixel in the binarized test image.
In one embodiment, an image filter is used to: setting an average pixel value in a preset range around each pixel in the binarization test image as the pixel value of each pixel in the binarization test image, or carrying out weighted average on the pixel values of the pixels in the preset range around each pixel in the binarization test image, and determining the pixel value of each pixel in the binarization test image according to the weighted average result.
In one embodiment, an image filter is used to: the median value of the pixel values in a predetermined range around each pixel in the binarized test image is set as the pixel value of each pixel in the binarized test image.
In one embodiment, the apparatus further comprises a parallel detection actuator for: for each pixel in the test image subjected to the dimension reduction processing, comparing the pixel value of the pixel with the average pixel value of other pixels in a preset window around the pixel, and judging whether the pixel is a stain pixel or not according to a comparison result; finding a stain pixel adjacent to or communicating with the stain pixel to obtain the size and position of the stain region; and outputting the obtained size and position as a luminance difference detection result.
In one embodiment, the apparatus further comprises a combiner for: and combining the detection result and the brightness difference detection result as a final detection result.
In one embodiment, the parallel detection executor performs the comparison of the pixel value of the pixel with the average pixel value of other pixels within a predetermined window around the pixel by: summing the pixel values of the pixels within the predetermined window around the pixel to obtain a window pixel value sum; and subtracting the pixel value of the pixel from the window pixel value sum and dividing by the number of other pixels within the predetermined window to obtain the average pixel value of the other pixels within the predetermined window, wherein, when the predetermined window of the pixel overlaps the predetermined window of another pixel, the window pixel value sum of the pixel is obtained from the window pixel value sum of the other pixel by subtracting the sum of pixel values in the portion of the other pixel's window that does not overlap and adding the sum of pixel values in the portion of the pixel's own window that does not overlap.
In one embodiment, the apparatus further comprises a validator for validating the detection result and removing from the detection result the pixel stain determined to be false detected, wherein validating the detection result comprises validating each pixel stain by: determining pixels in the test image corresponding to each of the stain pixels; comparing the determined pixel value of each pixel with the average pixel values of other pixels within a predetermined window around the pixel value; and judging whether the verified stained pixels are false detection or not according to the comparison result.
The application also provides a computer system, which can be a mobile terminal, a personal computer (PC), a tablet computer, a server, and the like. Referring now to fig. 8, there is illustrated a schematic diagram of a computer system 800 suitable for implementing a terminal device or server of the present application. As shown in fig. 8, the computer system 800 includes one or more processors, a communication part, and the like, such as: one or more central processing units (CPUs) 801, and/or one or more graphics processors (GPUs) 813, etc., which can perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 802 or loaded from a storage section 808 into a random access memory (RAM) 803. The communication part 812 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processor may be in communication with the ROM 802 and/or the RAM 803 to execute executable instructions, and is connected to the communication portion 812 through the bus 804, and is in communication with other target devices through the communication portion 812, so as to perform operations corresponding to any of the methods provided in the embodiments of the present application, for example: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, in the binarization processing, setting each pixel of the enhanced test image as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value, respectively, by comparing the pixel value of the enhanced test image with a predetermined stain threshold value; and determining stain pixels adjacent to or communicating with each stain pixel to determine the size and position of the stain region of the test image as a detection result, wherein two communicating stain pixels are stain pixels connected by other stain pixels.
In addition, in the RAM 803, various programs and data required for device operation can also be stored. The CPU 801, ROM 802, and RAM 803 are connected to each other by a bus 804. ROM 802 is an optional module in the presence of RAM 803. The RAM 803 stores executable instructions that cause the processor 801 to perform operations corresponding to the communication methods described above, or write executable instructions to the ROM 802 at the time of execution. An input/output (I/O) interface 805 is also connected to the bus 804. The communication unit 812 may be integrally provided or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards) and be connected to the bus link.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
It should be noted that the architecture shown in fig. 8 is only an alternative implementation, and in a specific practical process, the number and types of components in fig. 8 may be selected, deleted, added or replaced according to actual needs; in the setting of different functional components, implementation manners such as separation setting or integration setting can also be adopted, for example, the GPU and the CPU can be separated or the GPU can be integrated on the CPU, the communication part can be separated or the communication part can be integrated on the CPU or the GPU, and the like. Such alternative embodiments fall within the scope of the present disclosure.
In addition, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the present application provides a non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to perform the method steps provided by the present application, such as: obtaining a test image through an imaging module; performing image enhancement processing on the test image to obtain an enhanced test image; performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold; and determining the stain pixels adjacent to or connected with each stain pixel to determine the size and position of each stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels.
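The steps just enumerated can be sketched in code. This is a minimal illustration, not the claimed implementation: the linear-stretch targets, the stain threshold, and 4-connectivity are assumptions chosen for the example, and stains are assumed to appear darker than their surroundings.

```python
import numpy as np

def detect_stains(test_image, stretch_lo=0.0, stretch_hi=255.0, stain_threshold=128.0):
    """Sketch of the pipeline: linear-stretch enhancement, thresholding into
    stain/non-stain pixels, then grouping connected stain pixels into regions."""
    img = test_image.astype(np.float64)
    lo, hi = img.min(), img.max()
    coeff = (stretch_hi - stretch_lo) / max(hi - lo, 1e-12)
    enhanced = coeff * (img - lo) + stretch_lo
    # Binarize: pixels darker than the predetermined threshold become stain pixels.
    binary = (enhanced < stain_threshold).astype(np.uint8)
    # 4-connected component labelling by depth-first flood fill.
    labels = np.zeros_like(binary, dtype=np.int32)
    regions, next_label = [], 0
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                stack, pixels = [(y, x)], []
                labels[y, x] = next_label
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                regions.append({"size": len(pixels),
                                "bbox": (min(ys), min(xs), max(ys), max(xs))})
    return regions
```

Each returned region carries the size and position (bounding box) that the method outputs as the detection result.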
In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the functions defined in the methods of the present application are performed.
The methods, apparatus, and devices of the present application may be implemented in numerous ways, for example by software, hardware, firmware, or any combination thereof. The above-described order of steps is for illustration only; the steps of the methods of the present application are not limited to that order unless specifically stated otherwise. Furthermore, in some embodiments, the present application may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing the methods according to the present application. Thus, the present application also covers a recording medium storing a program for executing the methods according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the application to the forms disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles and practical applications of the present application, and to enable others of ordinary skill in the art to understand its various embodiments with the various modifications suited to the particular use contemplated.

Claims (22)

1. A method for detecting stains in an imaging module, the method comprising:
obtaining a test image through the imaging module;
performing image enhancement processing on the test image by a linear stretching method or a frequency domain-based enhancement method to obtain an enhanced test image;
performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold;
adjusting, through filtering processing, the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around that pixel; and
determining the stain pixels adjacent to or connected with each stain pixel by traversing each pixel in the adjusted binarized test image, to determine the size and position of each stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels,
wherein the method further comprises verifying the detection result and removing from the detection result the stain pixels determined to be falsely detected, wherein verifying the detection result comprises verifying each stain pixel by:
determining the pixel in the test image corresponding to each stain pixel;
comparing the pixel value of the determined pixel with the average pixel value of the other pixels within a predetermined window around the pixel; and
judging, according to the comparison result, whether the verified stain pixel is a false detection;
wherein the method further comprises:
performing dimension reduction processing on the test image;
for each pixel in the dimension-reduced test image, comparing the pixel value of the pixel with the average pixel value of the other pixels within a predetermined window around the pixel,
judging, according to the comparison result, whether the pixel is a stain pixel;
finding the stain pixels adjacent to or connected with the stain pixel to obtain the size and position of a stain region;
outputting the obtained size and position as a brightness difference detection result; and
combining the detection result and the brightness difference detection result as a final detection result.
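The verification step recited above, comparing each candidate stain pixel with the average of the other pixels in a window around it in the original test image, can be sketched as follows. The window size and rejection tolerance are illustrative assumptions, not values fixed by the claim.

```python
import numpy as np

def verify_stain_pixel(test_image, y, x, window=5, tolerance=10.0):
    """Return True if pixel (y, x) looks like a genuine stain: it must be darker
    than the mean of the other pixels in its window by more than `tolerance`.
    Otherwise the detection is treated as a false positive."""
    h, w = test_image.shape
    r = window // 2
    y0, y1 = max(0, y - r), min(h, y + r + 1)   # clip the window at the borders
    x0, x1 = max(0, x - r), min(w, x + r + 1)
    patch = test_image[y0:y1, x0:x1].astype(np.float64)
    n_other = patch.size - 1
    mean_other = (patch.sum() - float(test_image[y, x])) / n_other
    return mean_other - float(test_image[y, x]) > tolerance
```

Detections for which this returns False would be removed from the detection result, as the claim describes.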
2. The method of claim 1, wherein performing image enhancement processing on the test image by a linear stretching method to obtain an enhanced test image comprises:
reducing the minimum pixel value in the test image to a target minimum pixel value;
increasing the maximum pixel value in the test image to a target maximum pixel value;
adjusting pixel values between the minimum pixel value and the maximum pixel value to obtain the enhanced test image by:
stretched pixel value = stretching coefficient × (pixel value − minimum pixel value) + target minimum pixel value
wherein the stretching coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretched pixel value represents the adjusted pixel value.
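A direct transcription of the formula in claim 2, assuming a floating-point grayscale image:

```python
import numpy as np

def linear_stretch(img, target_min=0.0, target_max=255.0):
    """Linear stretch: coefficient = (target_max - target_min) / (max - min),
    stretched = coefficient * (pixel - min) + target_min."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    coeff = (target_max - target_min) / (hi - lo)  # assumes hi > lo
    return coeff * (img - lo) + target_min
```

After stretching, the darkest pixel sits at the target minimum and the brightest at the target maximum, which widens the contrast between faint stains and their background.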
3. The method of claim 1, wherein performing image enhancement processing on the test image by a frequency domain based enhancement method to obtain an enhanced test image comprises:
performing Fourier transform on the test image to obtain a Fourier spectrum;
moving a zero frequency point of the Fourier spectrum to a central position;
removing predetermined frequencies in the Fourier spectrum;
moving the zero frequency point of the Fourier spectrum back to the original position; and
performing inverse Fourier transform on the Fourier spectrum, and taking one of the real part, the absolute value, and the square root of the pixel value of each pixel in the image obtained by the inverse Fourier transform, to obtain the enhanced test image.
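The frequency-domain route of claims 3 and 4 can be sketched with NumPy's FFT helpers; the Gaussian low-pass cutoff `sigma` is an illustrative assumption, not a value from the claims.

```python
import numpy as np

def freq_enhance(img, sigma=10.0):
    """FFT the image, shift the zero-frequency point to the centre, attenuate
    high frequencies with a Gaussian low-pass mask, shift back, inverse FFT,
    then take the absolute value of each pixel."""
    spec = np.fft.fftshift(np.fft.fft2(img))        # zero frequency to the centre
    h, w = img.shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    d2 = y[:, None] ** 2 + x[None, :] ** 2          # squared distance from centre
    spec *= np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian low-pass filter function
    back = np.fft.ifft2(np.fft.ifftshift(spec))     # zero frequency back, inverse FFT
    return np.abs(back)
```

A Gaussian band-pass mask could be substituted for the low-pass mask in the same place, as claim 4 allows.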
4. The method of claim 3, wherein removing predetermined frequencies in the fourier spectrum comprises:
the predetermined frequencies in the Fourier spectrum are removed by a Gaussian low-pass filter function or a Gaussian band-pass filter function.
5. The method of any of claims 1-4, further comprising, prior to performing image enhancement processing on the test image to obtain an enhanced test image:
performing the dimension reduction processing on the test image by one of a region average dimension reduction method, a downsampling dimension reduction method, and a bilinear dimension reduction method.
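The region-average option of claim 5 reduces each factor×factor block to its mean; a minimal sketch, assuming the image dimensions are divisible by the factor:

```python
import numpy as np

def area_average_downscale(img, factor):
    """Region-average dimension reduction: each output pixel is the mean of a
    factor x factor block of the input."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

Averaging over blocks both shrinks the image (speeding up the later per-pixel passes) and suppresses single-pixel sensor noise before enhancement.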
6. The method of claim 5, wherein the method further comprises:
expanding the boundaries of the test image subjected to the dimension reduction processing outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extension area are determined by:
determining an optical center of the test image;
obtaining a brightness decreasing relation of the imaging module according to the pixel values of the pixels in the test image, their distances from the optical center, and the brightness value at the optical center; and
determining the pixel value of each pixel in the expansion area according to the brightness decreasing relation and the distance between that pixel and the optical center.
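A sketch of the boundary expansion of claim 6. The brightness decreasing relation is modeled here as a linear falloff with distance from the optical center, and the optical center is assumed to sit at the image center; a real module would use its calibrated falloff curve.

```python
import numpy as np

def expand_border_radial(img, pad):
    """Extend the image outward by `pad` pixels, filling the new ring from a
    radial luminance-falloff model fitted to the existing pixels."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0          # assumed optical centre
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx).ravel()
    v = img.astype(np.float64).ravel()
    slope, intercept = np.polyfit(r, v, 1)          # brightness ~ slope*r + intercept
    out = np.empty((h + 2 * pad, w + 2 * pad))
    yy2, xx2 = np.mgrid[0:h + 2 * pad, 0:w + 2 * pad]
    r2 = np.hypot(yy2 - (cy + pad), xx2 - (cx + pad))
    out[:] = slope * r2 + intercept                 # fill the whole canvas from the model
    out[pad:pad + h, pad:pad + w] = img             # keep the original pixels intact
    return out
```

Expanding the border this way lets the later sliding-window comparisons treat edge pixels like interior pixels instead of clipping their windows.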
7. The method of claim 5, wherein the method further comprises:
expanding the boundaries of the test image subjected to the dimension reduction processing outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extension area are determined according to the pixels at the boundary of the test image subjected to the dimension reduction processing or the pixels within a predetermined range at the boundary.
8. The method of claim 1, wherein adjusting the pixel value of each pixel in the binarized test image according to pixel values within a predetermined range around each pixel in the binarized test image comprises:
setting the average pixel value within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel; or
performing weighted averaging on the pixel values of the pixels within a predetermined range around each pixel in the binarized test image, and determining the pixel value of that pixel according to the weighted average result.
9. The method of claim 1, wherein adjusting the pixel value of each pixel in the binarized test image according to pixel values within a predetermined range around each pixel in the binarized test image comprises:
setting the median of the pixel values within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel.
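The median filtering of claim 9, applied to the binarized image, suppresses isolated single-pixel detections; a minimal sketch with edge padding (an assumption, since the claim does not specify boundary handling):

```python
import numpy as np

def median_filter_binary(binary, window=3):
    """Replace each pixel of the binarized image with the median of its window,
    removing isolated single-pixel false detections while keeping solid blobs."""
    h, w = binary.shape
    r = window // 2
    padded = np.pad(binary.astype(np.float64), r, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + window, x:x + window])
    return out
```

An isolated stain pixel is outvoted by its eight non-stain neighbours, while a pixel inside a genuine stain region keeps its value.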
10. The method of claim 1, wherein comparing the pixel value of the pixel to average pixel values of other pixels within a predetermined window around the pixel comprises:
summing pixel values of pixels within a predetermined window around the pixel to obtain a sum of pixel values within the window; and
subtracting the pixel value of the pixel from the sum of pixel values within the window and dividing by the number of the other pixels within the predetermined window around the pixel, to obtain the average pixel value of the other pixels within the predetermined window around the pixel,
wherein when the predetermined window of a pixel overlaps the predetermined window of another pixel, the sum of pixel values within the pixel's window is obtained from the sum within the other pixel's window by subtracting the pixel values in the region covered only by the other pixel's window and adding the pixel values in the region covered only by the pixel's own window.
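The overlapping-window reuse of claim 10 can be sketched as a sliding column sum: when the window moves one pixel to the right, only the column that leaves and the column that enters are touched, instead of re-summing the whole window.

```python
import numpy as np

def sliding_window_sums(img, window):
    """Sum over every window-sized square, reusing the previous window's sum:
    subtract the departing column, add the arriving column."""
    h, w = img.shape
    out = np.empty((h - window + 1, w - window + 1))
    for y in range(h - window + 1):
        s = img[y:y + window, 0:window].sum()             # full sum once per row
        out[y, 0] = s
        for x in range(1, w - window + 1):
            s -= img[y:y + window, x - 1].sum()           # column leaving the window
            s += img[y:y + window, x + window - 1].sum()  # column entering the window
            out[y, x] = s
    return out
```

Dividing (sum − centre pixel) by the number of other pixels then yields the neighbourhood average the claim compares against, at a per-step cost proportional to the window height rather than its area.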
11. An apparatus for stain detection of an imaging module, the apparatus comprising:
a dimension reducer for performing dimension reduction processing on the test image;
an image enhancer for performing image enhancement processing on the test image obtained by the imaging module by a linear stretching method or a frequency domain-based enhancement method to obtain an enhanced test image;
a binarizer for binarizing the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value, respectively, by comparing a pixel value of the enhanced test image with a predetermined stain threshold value;
an image filter for adjusting a pixel value of each pixel in the binarized test image according to a pixel value within a predetermined range around each pixel in the binarized test image;
a stain region determiner for determining the stain pixels adjacent to or connected with each stain pixel by traversing each pixel in the adjusted binarized test image, to determine the size and position of each stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels;
a verifier configured to verify the detection result and remove, from the detection result, the stain pixels determined to be falsely detected, wherein verifying the detection result comprises verifying each stain pixel by:
determining the pixel in the test image corresponding to each stain pixel;
comparing the pixel value of the determined pixel with the average pixel value of the other pixels within a predetermined window around the pixel; and
judging, according to the comparison result, whether the verified stain pixel is a false detection;
a parallel detection executor for comparing, for each pixel in the dimension-reduced test image, the pixel value of the pixel with the average pixel value of the other pixels within a predetermined window around the pixel, and judging whether the pixel is a stain pixel according to the comparison result; finding the stain pixels adjacent to or connected with the stain pixel to obtain the size and position of a stain region; and outputting the obtained size and position as a brightness difference detection result; and
a combiner for combining the detection result and the brightness difference detection result as a final detection result.
12. The apparatus of claim 11, wherein the image enhancer is to:
reducing the minimum pixel value in the test image to a target minimum pixel value;
increasing the maximum pixel value in the test image to a target maximum pixel value;
adjusting pixel values between the minimum pixel value and the maximum pixel value to obtain the enhanced test image by:
stretched pixel value = stretching coefficient × (pixel value − minimum pixel value) + target minimum pixel value
wherein the stretching coefficient is the ratio of the difference between the target maximum pixel value and the target minimum pixel value to the difference between the maximum pixel value and the minimum pixel value, and the stretched pixel value represents the adjusted pixel value.
13. The apparatus of claim 11, wherein the image enhancer is to:
performing Fourier transform on the test image to obtain a Fourier spectrum;
moving a zero frequency point of the Fourier spectrum to a central position;
removing predetermined frequencies in the Fourier spectrum;
moving the zero frequency point of the Fourier spectrum back to the original position; and
performing inverse Fourier transform on the Fourier spectrum, and taking one of the real part, the absolute value, and the square root of the pixel value of each pixel in the image obtained by the inverse Fourier transform, to obtain the enhanced test image.
14. The apparatus of claim 13, wherein the image enhancer is to:
the predetermined frequencies in the Fourier spectrum are removed by a Gaussian low-pass filter function or a Gaussian band-pass filter function.
15. The apparatus of any one of claims 11-14, wherein the apparatus further comprises:
a dimension reducer for performing the dimension reduction processing on the test image by one of a region average dimension reduction method, a downsampling dimension reduction method, and a bilinear dimension reduction method.
16. The apparatus of claim 15, wherein the apparatus further comprises:
a boundary expander for expanding the boundary of the test image subjected to the dimension reduction process outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extension area are determined by:
determining an optical center of the test image;
obtaining a brightness decreasing relation of the imaging module according to the pixel values of the pixels in the test image, their distances from the optical center, and the brightness value at the optical center; and
determining the pixel value of each pixel in the expansion area according to the brightness decreasing relation and the distance between that pixel and the optical center.
17. The apparatus of claim 15, wherein the apparatus further comprises:
a boundary expander for expanding the boundary of the test image subjected to the dimension reduction process outward by a predetermined number of pixels,
wherein the pixel values of the pixels in the extension area are determined according to the pixels at the boundary of the test image subjected to the dimension reduction processing or the pixels within a predetermined range at the boundary.
18. The apparatus of any one of claims 11-14, wherein the apparatus further comprises:
an image filter for adjusting the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around each pixel in the binarized test image.
19. The apparatus of claim 18, wherein the image filter is to:
setting the average pixel value within a predetermined range around each pixel in the binarized test image as the pixel value of that pixel; or
performing weighted averaging on the pixel values of the pixels within a predetermined range around each pixel in the binarized test image, and determining the pixel value of that pixel according to the weighted average result.
20. The apparatus of claim 11, wherein the parallel detection executor is configured to compare the pixel value of the pixel with the average pixel value of the other pixels within a predetermined window around the pixel by:
summing pixel values of pixels within a predetermined window around the pixel to obtain a sum of pixel values within the window; and
subtracting the pixel value of the pixel from the sum of pixel values within the window and dividing by the number of the other pixels within the predetermined window around the pixel, to obtain the average pixel value of the other pixels within the predetermined window around the pixel,
wherein when the predetermined window of a pixel overlaps the predetermined window of another pixel, the sum of pixel values within the pixel's window is obtained from the sum within the other pixel's window by subtracting the pixel values in the region covered only by the other pixel's window and adding the pixel values in the region covered only by the pixel's own window.
21. A system for stain detection of an imaging module, the system comprising:
a processor; and
a memory coupled to the processor and storing machine-readable instructions executable by the processor to perform operations comprising:
obtaining a test image through the imaging module;
performing image enhancement processing on the test image by a linear stretching method or a frequency domain-based enhancement method to obtain an enhanced test image;
performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold;
adjusting, through filtering processing, the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around that pixel; and
determining the stain pixels adjacent to or connected with each stain pixel by traversing each pixel in the adjusted binarized test image, to determine the size and position of each stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels,
wherein the operations further comprise verifying the detection result and removing from the detection result the stain pixels determined to be falsely detected, wherein verifying the detection result comprises verifying each stain pixel by:
determining the pixel in the test image corresponding to each stain pixel;
comparing the pixel value of the determined pixel with the average pixel value of the other pixels within a predetermined window around the pixel; and
judging, according to the comparison result, whether the verified stain pixel is a false detection;
wherein the operations further comprise:
performing dimension reduction processing on the test image; for each pixel in the dimension-reduced test image, comparing the pixel value of the pixel with the average pixel value of the other pixels within a predetermined window around the pixel,
judging, according to the comparison result, whether the pixel is a stain pixel;
finding the stain pixels adjacent to or connected with the stain pixel to obtain the size and position of a stain region;
outputting the obtained size and position as a brightness difference detection result; and
combining the detection result and the brightness difference detection result as a final detection result.
22. A non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to perform operations comprising:
obtaining a test image through an imaging module;
performing image enhancement processing on the test image by a linear stretching method or a frequency domain-based enhancement method to obtain an enhanced test image;
performing binarization processing on the enhanced test image to obtain a binarized test image, in which each pixel of the enhanced test image is set as a stain pixel having a stain pixel value or a non-stain pixel having a non-stain pixel value by comparing its pixel value with a predetermined stain threshold;
adjusting, through filtering processing, the pixel value of each pixel in the binarized test image according to the pixel values within a predetermined range around that pixel; and
determining the stain pixels adjacent to or connected with each stain pixel by traversing each pixel in the adjusted binarized test image, to determine the size and position of each stain region of the test image as a detection result, wherein two connected stain pixels are stain pixels linked through other stain pixels,
wherein the operations further comprise verifying the detection result and removing from the detection result the stain pixels determined to be falsely detected, wherein verifying the detection result comprises verifying each stain pixel by:
determining the pixel in the test image corresponding to each stain pixel;
comparing the pixel value of the determined pixel with the average pixel value of the other pixels within a predetermined window around the pixel; and
judging, according to the comparison result, whether the verified stain pixel is a false detection;
wherein the operations further comprise:
performing dimension reduction processing on the test image;
for each pixel in the dimension-reduced test image, comparing the pixel value of the pixel with the average pixel value of the other pixels within a predetermined window around the pixel,
judging, according to the comparison result, whether the pixel is a stain pixel;
finding the stain pixels adjacent to or connected with the stain pixel to obtain the size and position of a stain region;
outputting the obtained size and position as a brightness difference detection result; and
combining the detection result and the brightness difference detection result as a final detection result.
CN201910006562.XA 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module Active CN111476750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910006562.XA CN111476750B (en) 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910006562.XA CN111476750B (en) 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module

Publications (2)

Publication Number Publication Date
CN111476750A CN111476750A (en) 2020-07-31
CN111476750B true CN111476750B (en) 2023-09-26

Family

ID=71743159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910006562.XA Active CN111476750B (en) 2019-01-04 2019-01-04 Method, device, system and storage medium for detecting stain of imaging module

Country Status (1)

Country Link
CN (1) CN111476750B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862832B (en) * 2020-12-31 2022-07-12 盛泰光电科技股份有限公司 Dirt detection method based on concentric circle segmentation positioning
CN112967208B (en) * 2021-04-23 2024-05-14 北京恒安嘉新安全技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113838003B (en) * 2021-08-30 2024-04-30 歌尔科技有限公司 Image speckle detection method, apparatus, medium and computer program product
CN116008294B (en) * 2022-12-13 2024-03-08 无锡微准科技有限公司 Key cap surface particle defect detection method based on machine vision
CN118678049A (en) * 2024-08-09 2024-09-20 广东星云开物科技股份有限公司 Shared vehicle parking camera anomaly detection method, device, equipment and medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0518909A (en) * 1991-07-15 1993-01-26 Fuji Electric Co Ltd Apparatus for inspecting inner surface of circular container
JP2003032490A (en) * 2001-07-11 2003-01-31 Ricoh Co Ltd Image processing apparatus
JP2008170325A (en) * 2007-01-12 2008-07-24 Seiko Epson Corp Stain flaw detection method and stain flaw detection device
JP2008232639A (en) * 2007-03-16 2008-10-02 Seiko Epson Corp Stain defect detection method and device
JP2008241407A (en) * 2007-03-27 2008-10-09 Mitsubishi Electric Corp Defect detecting method and defect detecting device
JP2008292256A (en) * 2007-05-23 2008-12-04 Fuji Xerox Co Ltd Device, method and program for image quality defect detection
US7903864B1 (en) * 2007-01-17 2011-03-08 Matrox Electronic Systems, Ltd. System and methods for the detection of irregularities in objects based on an image of the object
JPWO2010055815A1 (en) * 2008-11-13 2012-04-12 株式会社日立メディコ Medical image processing apparatus and method
CN104104945A (en) * 2014-07-22 2014-10-15 西北工业大学 Star sky image defective pixel robustness detection method
CN104867159A (en) * 2015-06-05 2015-08-26 北京大恒图像视觉有限公司 Stain detection and classification method and device for sensor of digital camera
KR20160108644A (en) * 2015-03-04 2016-09-20 주식회사 에이치비테크놀러지 Device for detecting defect of device
CN106412573A (en) * 2016-10-26 2017-02-15 歌尔科技有限公司 Method and device for detecting lens stain
CN106815821A (en) * 2017-01-23 2017-06-09 上海兴芯微电子科技有限公司 The denoising method and device of near-infrared image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010055815A (en) * 2008-08-26 2010-03-11 Sony Corp Fuel cartridge, fuel cell and electronic equipment


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
廖苗, 刘毅志, 欧阳军林, 余建勇, 肖文辉, 彭理: "Automatic detection of Mura defects on mobile-phone TFT-LCD panels based on adaptive local enhancement", Chinese Journal of Liquid Crystals and Displays, No. 06; entire document *
徐德: Microscopic Vision Measurement and Control, Beijing: National Defense Industry Press, 2014, pp. 54-56 *
牛金海: Principles of Ultrasound and Its Biomedical Engineering Applications: Biomedical Ultrasonics, Shanghai: Shanghai Jiao Tong University Press, 2017, pp. 169-171 *
胡章芳: "MATLAB Simulation and Its Application in Optics Courses, 2nd ed.", Beijing: Beihang University Press, 2018, pp. 121-122 *

Also Published As

Publication number Publication date
CN111476750A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN111476750B (en) Method, device, system and storage medium for detecting stain of imaging module
CN109872304B (en) Image defect detection method and device, electronic device and storage medium
CN110766736B (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN108805023B (en) Image detection method, device, computer equipment and storage medium
US7783103B2 (en) Defect detecting device, image sensor device, image sensor module, image processing device, digital image quality tester, and defect detecting method
CN112381727B (en) Image denoising method and device, computer equipment and storage medium
CN114529459A (en) Method, system and medium for enhancing image edge
CN113781406B (en) Scratch detection method and device for electronic component and computer equipment
CN108665495B (en) Image processing method and device and mobile terminal
CN117152165B (en) Photosensitive chip defect detection method and device, storage medium and electronic equipment
CN110399873A (en) ID Card Image acquisition methods, device, electronic equipment and storage medium
CN114820594A (en) Method for detecting edge sealing defects of plate based on image, related equipment and storage medium
CN107895371B (en) Textile flaw detection method based on peak coverage value and Gabor characteristics
CN114298985B (en) Defect detection method, device, equipment and storage medium
JP5705611B2 (en) Apparatus and method for detecting rotation angle from normal position of image
CN113744294A (en) Image processing method and related device
CN111415365B (en) Image detection method and device
CN116993609A (en) Image noise reduction method, device, equipment and medium
JP7258509B2 (en) Image processing device, image processing method, and image processing program
CN111028215A (en) Method for detecting end surface defects of steel coil based on machine vision
CN105844593A (en) Automated processing method for single interference round fringe pre-processing
CN115564727A (en) Method and system for detecting abnormal defects of exposure development
CN116485702A (en) Image processing method, device and storage medium
CN111369491B (en) Image stain detection method, device, system and storage medium
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant