CN116563279A - Measuring switch detection method based on computer vision - Google Patents
- Publication number: CN116563279A
- Application number: CN202310825640.5A
- Authority
- CN
- China
- Prior art keywords
- gradient
- sliding window
- reference pixel
- obtaining
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T7/13 — Segmentation; edge detection
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/60 — Analysis of geometric attributes
- G06T2207/10004 — Image acquisition modality: still image; photographic image
- G06T2207/20081 — Special algorithmic details: training; learning
- Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention relates to the technical field of image processing, and in particular to a measuring switch detection method based on computer vision. A sliding window is set in the measuring switch surface image, and an effective degree is obtained from the gray value distribution characteristics within the window. The gradient fusion weight of each reference pixel point is then obtained by combining the effective degree with the consistency and the difference distance between the reference pixel points in each direction of the window and the corresponding reference pixel points in other sliding windows, and the gradient amplitudes in all directions are weighted and fused to obtain the final gradient amplitude. Defect detection is performed on the measuring switch surface according to the edge image obtained from the final gradient amplitudes. By weighting and fusing the gradient amplitudes in all directions, the invention suppresses the interference caused by gradient information of pits on the measuring switch surface, yielding an edge image of strong reference value for quality detection and hence an accurate measuring switch quality detection result.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a measuring switch detection method based on computer vision.
Background
The measuring switch is an electrical component used in power distribution systems, mainly for measuring and controlling parameters such as current, voltage and electric power. Its appearance strongly influences how its quality is judged, and a good appearance improves the purchasing experience, so appearance inspection is very important. The surface defects of the measuring switch are varied; the main defects arising in production are scratches and cracks, so accurate detection of scratch and crack defects on the measuring switch surface is required.
Scratches and cracks exhibit obvious edge texture features on an object surface, and in the prior art a machine vision method is generally adopted in which scratch or crack defects are screened out through the edge information reflected in images. However, the surface of a measuring switch carries many granular pits that make it unsmooth, so the defect type and defect degree cannot be detected accurately by using the surface edge image of the measuring switch directly.
Disclosure of Invention
In order to solve the technical problem that the defect information cannot be accurately detected due to the fact that the surface of a measuring switch is not smooth, the invention aims to provide a measuring switch detection method based on computer vision, and the adopted technical scheme is as follows:
the invention provides a measuring switch detection method based on computer vision, which comprises the following steps:
obtaining a surface image of a measuring switch; setting a sliding window of a preset size in the measurement switch surface image, obtaining gradient amplitudes in each preset direction within the sliding window, and obtaining reference pixel points and their gradient information in each preset direction according to the positions of the pixel points in the sliding window;
obtaining the chaotic degree in the sliding window according to the gray value corresponding to each reference pixel point; obtaining the effective degree of the sliding window according to the chaotic degree and the gray value distribution in the sliding window; obtaining gradient fusion weights of each reference pixel point according to gradient information differences and difference distances between the reference pixel point in each sliding window and the reference pixel points at corresponding positions in other adjacent sliding windows and the effective degrees of the corresponding sliding windows;
weighting and fusing the gradient amplitude values in each preset direction according to the gradient fusion weight values of each reference pixel point to obtain a final gradient amplitude value of the central point of the sliding window; traversing the whole surface image of the measuring switch to obtain the final gradient amplitude of each pixel point;
obtaining an edge image of the surface image of the measuring switch according to the final gradient amplitude; and detecting the defects of the surface of the measuring switch according to the edge information in the edge image.
Further, the measuring switch surface image includes:
acquiring an initial measurement switch surface image; and removing the background information of the initial measurement switch surface image to obtain a measurement switch surface image only containing measurement switch information.
Further, the method for obtaining the chaotic degree comprises the following steps:
dividing the gray value in the sliding window into two gray value categories according to threshold segmentation, and obtaining a category connected domain corresponding to each gray value category in the sliding window; counting the number of the class connected domains corresponding to the gray value class to which each pixel point belongs in the sliding window, and taking the number as a distribution characteristic value of the corresponding pixel point; and averaging the distribution characteristic values of all the pixel points in the sliding window, and then normalizing to obtain the chaotic degree.
Further, the method for acquiring the effectiveness degree comprises the following steps:
in the sliding window, taking the number of the pixel points of the gray value category other than that to which the reference pixel point belongs as a comparison value; performing negative correlation mapping on the product of the number of the pixel points of the gray value category to which the reference pixel point belongs and the comparison value to obtain a first mapping value; performing negative correlation mapping and normalization on the chaotic degree to obtain a second mapping value; and multiplying the first mapping value by the second mapping value and then normalizing to obtain the effective degree.
Further, the method for acquiring the gradient fusion weight comprises the following steps:
obtaining, according to a sub-gradient fusion weight formula, the sub-gradient fusion weight of the reference pixel point of the sliding window in the j-th preset direction relative to each other adjacent sliding window, wherein the sub-gradient fusion weight formula is:

$$Q_j=\frac{Y}{\left(\left|G_j^{1}-G_j^{2}\right|+a\right)\left(\left|\theta_j^{1}-\theta_j^{2}\right|+b\right)\,d_j}$$

wherein $Q_j$ is the sub-gradient fusion weight in the $j$-th preset direction, $G_j^{1}$ is the first reference gradient amplitude of the reference pixel point of the sliding window in the $j$-th preset direction, $G_j^{2}$ is the second reference gradient amplitude of the reference pixel point at the corresponding position in the other adjacent sliding window, $\theta_j^{1}$ is the first reference gradient direction angle of the reference pixel point of the sliding window in the $j$-th preset direction, $\theta_j^{2}$ is the second reference gradient direction angle of the reference pixel point at the corresponding position in the other adjacent sliding window, $d_j$ is the difference distance between the corresponding reference pixel points of the two sliding windows, $Y$ is the effective degree of the sliding window, $a$ is the first fitting parameter, and $b$ is the second fitting parameter;
optionally selecting one direction as a target direction, and acquiring sub-gradient fusion weights between the sliding window and all other adjacent sliding windows in the target direction, wherein the maximum sub-gradient fusion weight is used as a gradient fusion weight of a reference pixel point corresponding to the target direction; and changing the target direction to obtain the gradient fusion weight of each reference pixel point.
Further, the method for obtaining the final gradient amplitude value comprises the following steps:
obtaining a preset direction according to a preset fixed angle interval and the number of directions; the number of the directions is even;
taking the product of the gradient fusion weight corresponding to each preset direction and the gradient amplitude as the weighted gradient amplitude in that direction; summing the weighted gradient amplitudes over all preset directions and then taking the square root to obtain an initial final gradient amplitude; and dividing the initial final gradient amplitude by 2 to obtain the final gradient amplitude.
Further, the obtaining of an edge image of the measurement switch surface image from the final gradient amplitudes comprises:
obtaining a gradient image formed by the final gradient amplitude values; changing gradient thresholds in a preset threshold interval, and processing the gradient image by each gradient threshold to obtain a corresponding sub-edge image; and obtaining all the sub-edge images corresponding to the gradient threshold values, and obtaining the edge image of the measurement switch surface image by averaging after overlapping the sub-edge images.
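The multi-threshold edge fusion above can be sketched as follows. This is an illustrative reading of the claim, not the patent's exact implementation: every gradient threshold in a preset interval yields a binary sub-edge image, and the sub-edge images are averaged. The gradient values and the threshold list are hypothetical.

```python
# Hedged sketch: edge image as the average of binary sub-edge images
# produced by sweeping a gradient threshold over a preset interval.
def edge_image(grad, thresholds):
    """Average the binary edge maps obtained at each gradient threshold."""
    h, w = len(grad), len(grad[0])
    acc = [[0.0] * w for _ in range(h)]
    for t in thresholds:
        for i in range(h):
            for j in range(w):
                acc[i][j] += 1.0 if grad[i][j] > t else 0.0
    n = len(thresholds)
    return [[v / n for v in row] for row in acc]

# Toy final-gradient image; thresholds are illustrative assumptions.
grad = [[5.0, 60.0], [20.0, 90.0]]
edges = edge_image(grad, [10, 30, 50, 70])
```

Strong gradients survive more thresholds and so receive values close to 1, which makes the averaged edge image robust to any single threshold choice.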
Further, the obtaining the reference pixel point in each preset direction according to the position of the pixel point in the sliding window includes:
searching other pixel points in each preset direction by taking the central point of the sliding window as a starting point, and taking the first pixel point which is found as a reference pixel point in the corresponding direction.
The invention has the following beneficial effects:
according to the embodiment of the invention, the sliding window is arranged in the surface image of the measuring switch, so that gradient amplitude values in multiple directions can be obtained based on the sliding window. And further determining gradient information of the reference pixel point according to the position of the pixel point in each direction, and taking the reference pixel point as a reference object, so that analysis between the sliding window and other adjacent sliding windows can be facilitated in a subsequent analysis process. In the analysis process, firstly, the effective degree of the corresponding reference pixel point is obtained according to the distribution of the gray values corresponding to the reference pixel point, the effective degree can indicate the credibility of the gradient information corresponding to the reference pixel point, and therefore the gradient fusion weight of each reference pixel point is obtained by combining the reference pixel point information of the same position in other adjacent sliding windows. The gradient fusion weight can represent the information credibility of the gradient information in the gradient direction corresponding to the reference pixel point, the gradient amplitude values in all directions are subjected to weighted fusion through the gradient fusion weight, and the obtained final gradient amplitude information contains more credible information, so that the influence of the irregular gradient information of the pits on the edge image is avoided. Accurate defect detection information can be obtained through the edge image.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for detecting a measuring switch based on computer vision according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the invention to achieve its intended purpose, the following describes in detail the specific implementation, structure, features and effects of the computer vision-based measuring switch detection method according to the invention, with reference to the accompanying drawings and preferred embodiments. In the following description, "one embodiment" and "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the computer vision-based measuring switch detection method provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for detecting a measuring switch based on computer vision according to an embodiment of the invention is shown, and the method includes:
step S1: obtaining a surface image of a measuring switch; the measurement switch surface image comprises a sliding window with a preset size, gradient amplitude values in all preset directions in the sliding window are obtained, and reference pixel points and gradient information thereof in all preset directions are obtained according to the positions of the pixel points in the sliding window.
In the embodiment of the invention, detection of the measuring switch takes place in the quality inspection stage after production, i.e. after manufacture of the measuring switch is finished, and the surface image of the measuring switch on the production line is acquired by a camera. It should be noted that because the measuring switch is a three-dimensional object with several surfaces, multiple cameras may be set up to collect surface images from multiple viewing angles; the image processing method is the same for each angle, and the surface images from all angles may either be combined into one panoramic image for analysis or analyzed separately, which is not limited herein.
Preferably, in another embodiment of the present invention, in consideration of the fact that a large amount of background information exists in the image directly acquired by the camera, for example, production line information, in order to prevent the background information from affecting the subsequent analysis, the background information in the obtained initial measurement switch surface image needs to be removed, and a measurement switch surface image only containing measurement switch information needs to be obtained.
In one embodiment of the invention, the measurement switch information is extracted by adopting a semantic segmentation network, and then the initial measurement switch surface image is intercepted based on the segmentation result, so as to obtain the measurement switch surface image. It should be noted that, the structure and the training method of the semantic segmentation network are technical means well known to those skilled in the art, specific structural details and training processes are not described and limited herein, and only the basic training method of the semantic segmentation network in one embodiment of the present invention is briefly described:
the image containing the measuring switch is used as training data. And labeling the pixels of the measuring switch as 1, and labeling other pixels as 0 to obtain label data.
The semantic segmentation network adopts an encoder-decoder structure, and the training data and label data are normalized before being input into the network. The encoder extracts features from the input data to obtain a feature map. The decoder up-samples the feature map and outputs the semantic segmentation result.
The network is trained using a cross entropy loss function.
In other embodiments of the present invention, the measuring switch area may instead be segmented with a threshold segmentation algorithm, based on the principle that the gray-scale features of the measuring switch surface differ significantly from those of the background information. The specific algorithm is a technical means well known to those skilled in the art and is not described herein.
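The threshold-based alternative for separating the measuring switch from the background can be sketched as below. This is a minimal sketch, not the patent's pipeline: it uses Otsu's between-class-variance threshold on a tiny hypothetical grayscale image to produce a foreground mask.

```python
# Hedged sketch: foreground/background separation via Otsu thresholding.
def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]            # pixels at or below t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0          # pixels above t (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy image: dark background (~20) with a bright "switch" region (~200).
img = [[20, 22, 21, 20],
       [19, 200, 210, 22],
       [21, 205, 198, 20],
       [20, 21, 22, 19]]
t = otsu_threshold(img)
mask = [[1 if v > t else 0 for v in row] for row in img]
```

The mask would then be used to crop the initial image down to the region containing only measuring switch information.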
And setting a sliding window in the measurement switch surface image according to the preset size, wherein the sliding window is centered on each pixel point of the measurement switch surface image, and the gradient amplitude values in each preset direction in the sliding window can be obtained because the sliding window area contains pixel point information in each direction. Note that in the embodiment of the present invention, the size of the sliding window is set to 3×3, and the preset directions are set to 8 directions and are uniformly and equally spaced, that is, the preset directions include 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, and 315 °.
Because gradient amplitude values in all directions in the window area are obtained in the sliding window area, reference pixel points in all preset directions are obtained according to the positions of the pixel points in the sliding window in order to facilitate the subsequent comparison analysis between the sliding window area and other adjacent sliding window areas.
Preferably, in one embodiment of the present invention, obtaining the reference pixel point in each preset direction according to the position of the pixel point in the sliding window includes:
searching other pixel points in each preset direction by taking the central point of the sliding window as a starting point, and taking the first pixel point which is found as a reference pixel point in the corresponding direction. It should be noted that, because the size of the sliding window is set to 3×3 and eight equally spaced directions are set in one embodiment of the present invention, the reference pixel points corresponding to each direction are all the pixel points except the center point in the sliding window.
Gradient information of each reference pixel point is further obtained for subsequent comparison analysis.
It should be noted that, the gradient amplitude corresponding to each direction in the sliding window and the gradient information of each reference pixel point are well known technical means for those skilled in the art, and are not described herein.
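The window setup described above can be sketched as follows. The eight preset directions and the "first pixel found from the center" rule come from the text; taking the directional gradient as the absolute gray difference between the center and that neighbor is an assumption made for illustration, since the patent does not specify the gradient operator.

```python
# Hedged sketch: 3x3 sliding window, eight preset directions, reference
# pixel per direction, and a simple (assumed) directional gradient.
# Offsets (drow, dcol) for 0, 45, ..., 315 degrees (row axis points down).
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def window_gradients(img, r, c):
    """Return [(reference_pixel_position, gradient_amplitude)] for the
    eight preset directions around the window center (r, c)."""
    center = img[r][c]
    out = []
    for dr, dc in DIRECTIONS:
        rr, cc = r + dr, c + dc   # first pixel met in that direction
        out.append(((rr, cc), abs(img[rr][cc] - center)))
    return out

# Toy 3x3 window with a bright pixel to the east of the center.
img = [[10, 10, 10],
       [10, 20, 90],
       [10, 10, 10]]
grads = window_gradients(img, 1, 1)
```

In a 3×3 window the reference pixels are exactly the eight non-center pixels, matching the embodiment's remark.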
Step S2: obtaining the chaotic degree in the sliding window according to the gray value corresponding to each reference pixel point; obtaining the effective degree of the sliding window according to the chaotic degree and the gray value distribution in the sliding window; and obtaining the gradient fusion weight of each reference pixel point according to the gradient information difference and the difference distance between the reference pixel point in each sliding window and the reference pixel points at the corresponding positions in other adjacent sliding windows and the effective degree of the corresponding sliding windows.
For the pits on the surface of the measuring switch, the surface of the measuring switch is not smooth due to the irregular gray scale distribution characteristics of the pits. According to priori knowledge, granular pits on the surface of the measuring switch are generally in aggregation distribution, so that gradient information of pits at one position and gradient information at other positions are obviously different, the degree of confusion can be obtained according to the distribution of gray values corresponding to reference pixels in a sliding window, the probability of presenting pit characteristics in the direction corresponding to the reference pixels is expressed according to the degree of confusion, namely, the greater the degree of confusion is, the more the distribution of pits is presented in the direction.
Preferably, the method for specifically obtaining the confusion degree in one embodiment of the invention comprises the following steps:
dividing the gray values in the sliding window into two gray value categories by threshold segmentation, and obtaining the category connected domains corresponding to each gray value category in the sliding window; counting the number of category connected domains corresponding to the gray value category to which each pixel point belongs, and taking this number as the distribution characteristic value of that pixel point; and averaging the distribution characteristic values of all pixel points in the sliding window and then normalizing to obtain the chaotic degree. In one embodiment of the present invention, the chaotic degree is obtained as:

$$R=\mathrm{Norm}\!\left(\frac{1}{n}\sum_{i=1}^{n} f_i\right)$$

wherein $R$ is the chaotic degree, $\mathrm{Norm}$ is the normalization function, $n$ is the number of pixel points within the sliding window, and $f_i$ is the distribution characteristic value of the $i$-th pixel point.
In the chaotic degree formula, the larger the distribution characteristic value of the pixel point is, the more the class connected domain corresponding to the gray value class is, the more chaotic the gray distribution in the window is, the larger the probability of presenting the pock gray characteristic distribution in the window is, so that the information of all the pixel points in the sliding window is integrated in a mode of calculating an average value, and the chaotic degree in the sliding window is obtained.
In one embodiment of the present invention, the threshold segmentation adopts the Otsu threshold segmentation method, and the normalization function may be implemented by basic mathematical operations such as range normalization or by a function mapping; the specific operations are technical means well known to those skilled in the art and are not described herein.
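The chaotic-degree computation for one window can be sketched as below. The two-class split, per-class connected-domain counting, and averaging follow the text; 4-connectivity and normalizing the mean by the window pixel count are assumptions, since the patent only says "averaged and then normalized".

```python
# Hedged sketch of the chaotic degree within a single sliding window.
def count_components(mask, target):
    """Count 4-connected components of value `target` in a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] == target and not seen[i][j]:
                count += 1
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           mask[ny][nx] == target and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

def chaos_degree(window, threshold):
    """Mean distribution characteristic value, normalized (assumed: by n)."""
    mask = [[1 if v > threshold else 0 for v in row] for row in window]
    comps = {0: count_components(mask, 0), 1: count_components(mask, 1)}
    feats = [comps[mask[i][j]]
             for i in range(len(mask)) for j in range(len(mask[0]))]
    n = len(feats)
    return (sum(feats) / n) / n

# A solid split is calm; a checkerboard-like window is chaotic (pit-like).
calm = [[10, 10, 90], [10, 10, 90], [10, 10, 90]]
messy = [[10, 90, 10], [90, 10, 90], [10, 90, 10]]
```

The checkerboard window fragments both gray classes into many connected domains, so its chaotic degree is much higher than that of the cleanly split window.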
The degree of confusion can represent whether the gray feature distribution of the pits exists in the sliding window, and the larger the degree of confusion is, the more the distribution of the pits is met in the window, and the lower the credibility of the corresponding gradient information is. The effective degree of the sliding window is thus obtained from the degree of confusion and the distribution of gray values within the sliding window. The effective degree characterizes the credibility of information in the window, and the larger the effective degree is, the more credible gradient information in the window is indicated, and the influence of pocking marks can be avoided by introducing the effective degree in the subsequent fusion process.
Preferably, in one embodiment of the present invention, obtaining the validity degree of each reference pixel point according to the degree of confusion includes:
in the sliding window, the number of pixel points of the gray value category other than that to which the reference pixel point belongs is taken as the comparison value; the product of the number of pixel points of the gray value category to which the reference pixel point belongs and the comparison value is negatively mapped to obtain a first mapping value; the chaotic degree is negatively mapped and normalized to obtain a second mapping value; and the product of the first mapping value and the second mapping value is normalized to obtain the effective degree. In one embodiment of the invention, the effective degree is formulated as:

$$Y=\mathrm{Norm}\!\left(e^{-N_s\cdot N_d}\cdot e^{-R}\right)$$

wherein $N_s$ is the number of pixel points of the gray value category to which the reference pixel point belongs, $N_d$ is the comparison value, $R$ is the chaotic degree, $e$ is the natural exponential base, $\mathrm{Norm}$ is the normalization function, and $Y$ is the effective degree.
As can be seen from the effective degree formula, the smaller the gap between $N_s$ and $N_d$, the more evenly the two gray value categories are split within the sliding window region, the more irregular the gray value distribution, and the lower the effective degree; conversely, the larger the gap, the more concentrated the gray value distribution of the pixel points in the window, indicating a centripetal or divergent gradient distribution, high distinguishability of the pixel points within the window, and good information credibility. The greater the chaotic degree, the closer the gray distribution within the sliding window is to the gray distribution of pits, and the lower the effective degree. It should be noted that in other embodiments, other basic mathematical operations may be used for the negative correlation mapping and normalization operations; such operations are well known to those skilled in the art and are not limited or described herein.
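The effective degree can be sketched as below, following the reconstructed formula $Y = e^{-N_s N_d} \cdot e^{-R}$. Since both exponentials already lie in (0, 1], the outer normalization function is taken as the identity here, which is an assumption; the inputs are hypothetical.

```python
import math

# Hedged sketch of the effective degree of a sliding window.
def effective_degree(n_same, n_other, chaos):
    """n_same: pixels in the reference pixel's gray class; n_other: the
    comparison value; chaos: the window's chaotic degree."""
    first_map = math.exp(-n_same * n_other)   # even class split -> small
    second_map = math.exp(-chaos)             # chaotic window   -> small
    return first_map * second_map

# An uneven split (8 vs 1) in a calm window scores higher than an even
# split (5 vs 4) in a chaotic one.
calm_uneven = effective_degree(8, 1, 0.1)
chaotic_even = effective_degree(5, 4, 0.5)
```

This matches the stated behavior: evenly split, chaotic windows (pit-like) receive low effective degrees, so their gradient information is down-weighted later.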
Because the gray scale and gradient change of the pock pixels are irregular, the consistency between the gradient information of the pock region and other regions is low, and therefore the gradient fusion weight in the corresponding direction can be judged through the consistency of the gradient information between the sliding window region and other adjacent sliding window regions. Specifically, according to the gradient information difference and the difference distance between the reference pixel point in the sliding window and the reference pixel point at the corresponding position in other adjacent sliding windows and the effectiveness degree of the corresponding sliding window, the method specifically includes:
obtaining, according to the sub-gradient fusion weight formula, the sub-gradient fusion weight of the reference pixel point of the sliding window in the j-th preset direction relative to each other adjacent sliding window:

$$Q_j=\frac{Y}{\left(\left|G_j^{1}-G_j^{2}\right|+a\right)\left(\left|\theta_j^{1}-\theta_j^{2}\right|+b\right)\,d_j}$$

wherein $Q_j$ is the sub-gradient fusion weight in the $j$-th preset direction, $G_j^{1}$ is the first reference gradient amplitude of the reference pixel point of the sliding window in the $j$-th preset direction, $G_j^{2}$ is the second reference gradient amplitude of the reference pixel point at the corresponding position in the other adjacent sliding window, $\theta_j^{1}$ is the first reference gradient direction angle of the reference pixel point of the sliding window in the $j$-th preset direction, $\theta_j^{2}$ is the second reference gradient direction angle of the reference pixel point at the corresponding position in the other adjacent sliding window, $d_j$ is the difference distance between the corresponding reference pixel points of the two sliding windows, $Y$ is the effective degree of the sliding window, $a$ is the first fitting parameter, and $b$ is the second fitting parameter. In the embodiment of the invention, both the first fitting parameter and the second fitting parameter take the value 1.
According to the sub-gradient fusion weight formula, the gradient information difference is divided into a gradient amplitude difference and a gradient direction difference of the reference information, and negative correlation processing is carried out on both in reciprocal form, so that the larger the difference, the lower the consistency, and the smaller the corresponding sub-gradient fusion weight. Likewise, the larger the difference distance between two reference pixel points, the smaller the reference value of the information between them, and the smaller the sub-gradient fusion weight. By further combining the effective degree, the sub-gradient fusion weight also contains validity information, so that the influence of pock marks can be eliminated in the subsequent weighted fusion process.
The sub-gradient fusion weight can represent the information availability and information consistency in the direction corresponding to the reference pixel point: the higher the information consistency, the more important the gradient amplitude in the corresponding direction; the higher the information availability, the more reliable the gradient amplitude in the corresponding direction.
Optionally selecting one direction as a target direction, acquiring sub-gradient fusion weights between the sliding window and all other adjacent sliding windows in the target direction, and taking the maximum sub-gradient fusion weight as a gradient fusion weight of a reference pixel point corresponding to the target direction; and changing the target direction to obtain the gradient fusion weight of each reference pixel point.
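The per-direction selection just described — keep, for each preset direction, the largest sub-gradient fusion weight over all adjacent sliding windows — can be sketched as follows; the list-of-lists layout is an assumption:

```python
def gradient_fusion_weights(sub_weights):
    """sub_weights[j][k]: sub-gradient fusion weight of the reference pixel in
    the j-th preset direction of a sliding window against its k-th adjacent
    sliding window. For each direction, the maximum sub-weight is kept as the
    gradient fusion weight of that direction's reference pixel point."""
    return [max(per_direction) for per_direction in sub_weights]
```

Taking the maximum favours the adjacent window whose gradient information is most consistent with the current one, so a single pock-marked neighbour cannot drag the weight down.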
Step S3: weighting and fusing the gradient amplitudes in each preset direction according to the gradient fusion weights of each reference pixel point to obtain the final gradient amplitude of the central point of the sliding window; traversing the whole measuring switch surface image to obtain the final gradient amplitude of each pixel point.
Each reference pixel point corresponds to one gradient fusion weight, namely, each preset direction corresponds to one gradient fusion weight, so that the gradient amplitude values in each preset direction can be weighted and fused according to the gradient fusion weights, and the final gradient amplitude value of the central point of the sliding window is obtained. Because the gradient fusion weight contains information of information consistency and information availability, the influence of pits on the gradient can be eliminated after weighted fusion, and the difference characteristics among gradient information are improved, so that the characteristics of the final gradient amplitude obtained finally are more obvious.
Preferably, the method for obtaining the final gradient amplitude comprises the following steps:
Because, in one embodiment of the present invention, the preset directions of the sliding window are obtained according to a preset fixed angle interval and a preset number of directions, and the number of directions is even, the preset directions can be divided into a forward direction and a reverse direction.
Taking the product of the gradient fusion weight corresponding to each preset direction and the gradient amplitude as the weighted gradient amplitude in the corresponding direction; accumulating the weighted gradient amplitudes in all preset directions, and then squaring, to obtain an initial final gradient amplitude. Since the initial final gradient amplitude includes both the forward direction and the reverse direction, the initial final gradient amplitude is divided by 2 to obtain the final gradient amplitude, i.e., the superimposed gradient amplitude in one direction is used as the final gradient amplitude.
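A sketch of this weighted fusion step, taking the description literally: the accumulated sum of weighted amplitudes is squared and then halved. (Whether the translated "squaring" is intended as a square root is unclear from the source, so the literal reading is used here.)

```python
def final_gradient_magnitude(fusion_weights, magnitudes):
    """fusion_weights and magnitudes are indexed by the preset directions
    (an even count, covering forward and reverse directions).

    Per the description: multiply weight by amplitude per direction,
    accumulate, square, then divide by 2 because every direction is
    counted in both its forward and reverse form."""
    weighted = [w * m for w, m in zip(fusion_weights, magnitudes)]  # weighted gradient amplitudes
    initial = sum(weighted) ** 2                                    # accumulate, then square
    return initial / 2.0                                            # halve: forward + reverse counted
```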
Step S4: obtaining an edge image of the surface image of the measuring switch according to the final gradient amplitude; and detecting the defects of the surface of the measuring switch according to the edge information in the edge image.
Because the obtained final gradient amplitude has obvious edge gradient characteristics and eliminates the error influence caused by pock-mark information, the edge image obtained from the final gradient amplitude can be used to accurately detect defects on the measuring switch surface.
Preferably, in one embodiment of the present invention, obtaining an edge image of the measuring switch surface image from the final gradient amplitude includes:
obtaining a gradient image formed by the final gradient amplitudes; changing the gradient threshold within a preset threshold interval, and processing the gradient image with each gradient threshold to obtain a corresponding sub-edge image; and obtaining the sub-edge images corresponding to all gradient thresholds, overlapping the sub-edge images, and then averaging to obtain the edge image of the measuring switch surface image. Because the edge image is obtained by averaging the superimposed sub-edge images, it retains small edge details with obvious characteristics; that is, the edge information in the edge image is richer and of greater reference value.
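The multi-threshold edge extraction can be sketched as follows; pure-Python lists stand in for the image, and the `>=` binarisation rule is an assumption:

```python
def multi_threshold_edge_image(gradient_image, thresholds):
    """Binarise the final-gradient image at each gradient threshold in the
    preset interval, then average the stacked sub-edge images. Strong edges
    survive every threshold and stay bright; faint detail edges keep a lower
    value instead of being discarded outright by a single cutoff."""
    h, w = len(gradient_image), len(gradient_image[0])
    n = len(thresholds)
    edge = [[0.0] * w for _ in range(h)]
    for t in thresholds:
        for i in range(h):
            for j in range(w):
                if gradient_image[i][j] >= t:   # pixel survives this gradient threshold
                    edge[i][j] += 1.0 / n       # averaging the superimposed sub-edge images
    return edge
```

A pixel's value in the result is the fraction of thresholds it survives, which is exactly the superimpose-then-average behaviour the text describes.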
It should be noted that, the method of overlapping and averaging images is a technical means well known to those skilled in the art, and will not be described herein.
After the edge image is obtained, cracks and scratches can be identified: a scratch extends along a certain direction, while a crack shows an irregular trend or bifurcation. Therefore, based on the characteristics of these two defects, the detection of scratch and crack information can be realized by combining the edge information in the edge image.
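A toy illustration (not from the source) of how the stated scratch/crack characteristics could be turned into a classifier: measure how far an edge's points spread off their principal direction via the eigenvalues of their 2×2 covariance matrix; the 0.1 ratio threshold is an assumed value.

```python
import math

def classify_defect(edge_points):
    """A scratch extends along one direction (points hug a line, so the
    off-axis spread is tiny); a crack wanders or bifurcates (large spread
    off the principal axis). edge_points is a list of (x, y) tuples."""
    n = len(edge_points)
    mx = sum(x for x, _ in edge_points) / n
    my = sum(y for _, y in edge_points) / n
    # covariance matrix [[sxx, sxy], [sxy, syy]] of the centred points
    sxx = sum((x - mx) ** 2 for x, _ in edge_points) / n
    syy = sum((y - my) ** 2 for _, y in edge_points) / n
    sxy = sum((x - mx) * (y - my) for x, y in edge_points) / n
    # closed-form eigenvalues of a symmetric 2x2 matrix
    root = math.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2)
    lam_max = (sxx + syy + root) / 2.0
    lam_min = (sxx + syy - root) / 2.0
    ratio = lam_min / (lam_max + 1e-12)  # small ratio -> line-like, i.e. a scratch
    return "scratch" if ratio < 0.1 else "crack"
```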
In summary, in the embodiment of the present invention, a sliding window is set in the measurement switch surface image, and the effective degree is obtained according to the gray value distribution characteristics in the window. The gradient fusion weight of each reference pixel point is then obtained by combining the consistency and the difference distance between the reference pixel point corresponding to each direction in the window and the corresponding reference pixel points in other sliding windows, so that the gradient amplitudes in each direction can be weighted and fused to obtain the final gradient amplitude. Defect detection on the measuring switch surface is performed according to the edge image obtained from the final gradient amplitude. By weighting and fusing the gradient amplitudes in all directions, the embodiment of the invention eliminates the influence of the gradient information of pock marks on the measuring switch surface, thereby obtaining an edge image of high reference value for quality detection and an accurate measuring switch quality detection result.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
Claims (8)
1. A computer vision-based measuring switch detection method, the method comprising:
obtaining a surface image of a measuring switch; setting a sliding window of a preset size in the measurement switch surface image, obtaining gradient amplitudes in each preset direction in the sliding window, and obtaining reference pixel points and gradient information thereof in each preset direction according to the positions of the pixel points in the sliding window;
obtaining the chaotic degree in the sliding window according to the gray value corresponding to each reference pixel point; obtaining the effective degree of the sliding window according to the chaotic degree and the gray value distribution in the sliding window; obtaining gradient fusion weights of each reference pixel point according to gradient information differences and difference distances between the reference pixel point in each sliding window and the reference pixel points at corresponding positions in other adjacent sliding windows and the effective degrees of the corresponding sliding windows;
weighting and fusing the gradient amplitude values in each preset direction according to the gradient fusion weight values of each reference pixel point to obtain a final gradient amplitude value of the central point of the sliding window; traversing the whole surface image of the measuring switch to obtain the final gradient amplitude of each pixel point;
obtaining an edge image of the surface image of the measuring switch according to the final gradient amplitude; and detecting the defects of the surface of the measuring switch according to the edge information in the edge image.
2. The method of claim 1, wherein obtaining the measuring switch surface image comprises:
acquiring an initial measurement switch surface image; and removing the background information of the initial measurement switch surface image to obtain a measurement switch surface image only containing measurement switch information.
3. The method for detecting a measuring switch based on computer vision according to claim 1, wherein the method for obtaining the chaotic degree comprises:
dividing the gray value in the sliding window into two gray value categories according to threshold segmentation, and obtaining a category connected domain corresponding to each gray value category in the sliding window; counting the number of the class connected domains corresponding to the gray value class to which each pixel point belongs in the sliding window, and taking the number as a distribution characteristic value of the corresponding pixel point; and averaging the distribution characteristic values of all the pixel points in the sliding window, and then normalizing to obtain the chaotic degree.
4. The method for detecting a measuring switch based on computer vision according to claim 3, wherein the method for obtaining the effective degree comprises:
in the sliding window, taking the number of the pixel points corresponding to the other gray value category except the gray value category to which the reference pixel point belongs as a comparison value; performing negative correlation mapping on the product of the number of the pixel points of the gray value class to which the reference pixel points belong and the comparison number value to obtain a first mapping value; carrying out negative correlation mapping and normalization on the chaotic degree to obtain a second mapping value; and multiplying the first mapping value and the second mapping value and then normalizing to obtain the effective degree.
5. The method for detecting a measuring switch based on computer vision according to claim 1, wherein the method for acquiring the gradient fusion weight comprises the following steps:
obtaining a sub-gradient fusion weight of the reference pixel point of the sliding window in the j-th preset direction relative to another adjacent sliding window according to a sub-gradient fusion weight formula; the sub-gradient fusion weight formula comprises:

W_j = E × [1 / (|M_j − M′_j| + a)] × [1 / (|θ_j − θ′_j| + b)] × (1 / d)

wherein W_j is the sub-gradient fusion weight in the j-th preset direction, M_j is the first reference gradient amplitude corresponding to the reference pixel point of the sliding window in the j-th preset direction, M′_j is the second reference gradient amplitude of the reference pixel point at the corresponding position on the other adjacent sliding window, θ_j is the first reference gradient direction angle corresponding to the reference pixel point of the sliding window in the j-th preset direction, θ′_j is the second reference gradient direction angle of the reference pixel point at the corresponding position on the other adjacent sliding window, d is the difference distance between the reference pixel point corresponding to the sliding window and the corresponding reference pixel point of the other adjacent sliding window, E is the effective degree of the sliding window, a is a first fitting parameter, and b is a second fitting parameter;
optionally selecting one direction as a target direction, and acquiring sub-gradient fusion weights between the sliding window and all other adjacent sliding windows in the target direction, wherein the maximum sub-gradient fusion weight is used as a gradient fusion weight of a reference pixel point corresponding to the target direction; and changing the target direction to obtain the gradient fusion weight of each reference pixel point.
6. The method for detecting a measuring switch based on computer vision according to claim 1, wherein the method for obtaining the final gradient amplitude comprises:
obtaining a preset direction according to a preset fixed angle interval and the number of directions; the number of the directions is even;
taking the product of the gradient fusion weight corresponding to each preset direction and the gradient amplitude as a weighted gradient amplitude in the corresponding direction; accumulating the weighted gradient amplitudes in all preset directions, and then squaring to obtain an initial final gradient amplitude; dividing the initial final gradient amplitude by 2 to obtain the final gradient amplitude.
7. The method of claim 1, wherein obtaining the edge image of the measuring switch surface image according to the final gradient amplitude comprises:
obtaining a gradient image formed by the final gradient amplitude values; changing gradient thresholds in a preset threshold interval, and processing the gradient image by each gradient threshold to obtain a corresponding sub-edge image; and obtaining all the sub-edge images corresponding to the gradient threshold values, and obtaining the edge image of the measurement switch surface image by averaging after overlapping the sub-edge images.
8. The method for detecting a measuring switch based on computer vision according to claim 1, wherein the obtaining the reference pixel points in each preset direction according to the positions of the pixel points in the sliding window comprises:
searching other pixel points in each preset direction by taking the central point of the sliding window as a starting point, and taking the first pixel point which is found as a reference pixel point in the corresponding direction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310825640.5A CN116563279B (en) | 2023-07-07 | 2023-07-07 | Measuring switch detection method based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116563279A true CN116563279A (en) | 2023-08-08 |
CN116563279B CN116563279B (en) | 2023-09-19 |