
CN117495791A - Surface defect positioning method - Google Patents

Surface defect positioning method

Info

Publication number
CN117495791A
CN117495791A
Authority
CN
China
Prior art keywords
image
matrix
texture
roi
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311392936.9A
Other languages
Chinese (zh)
Inventor
牛伟龙
伍子卿
徐许
宋铖
姜杰
王勇超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN202311392936.9A
Publication of CN117495791A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application provides a surface defect positioning method comprising the following steps: obtaining a target image and extracting its textures to obtain texture matrices; differencing the initial image with the X-direction and Y-direction texture matrices to obtain an X-direction confusion matrix and a Y-direction confusion matrix; accumulating the two matrices to obtain ROI arrays; intersecting the ROI regions of different ROI arrays and performing preliminary positioning of the defect area according to the ROI intersection; removing misjudged regions from the preliminarily positioned defect area to obtain the target defect area; and displaying the positioning result of the target defect area on a visual application interface for monitoring personnel. The method is highly sensitive to the contrast between the defect region and the normal region of the image, suppresses the contrast between image noise points and the normal region relative to that of true defects, can locate defects in images with heavy noise, and can effectively extract defects from images with complex textures.

Description

Surface defect positioning method
Technical Field
The application relates to the technical field of detection and positioning, in particular to a surface defect positioning method.
Background
Quality control in industrial production is crucial: product inspection is an important step in ensuring that product quality and performance meet standards, and detecting fine, inconspicuous surface defects is one of the difficult problems in the industry. With the rapid development of computers, advanced machine inspection has gradually replaced the role of the human quality inspector in the quality control link. Commonly used detection means include, but are not limited to:
1) Visual inspection: a common method that captures and analyzes product surface images with high-resolution cameras and image-processing software to detect defects such as cracks, scratches, spots, and bubbles. Advances in computer vision and deep learning are widely used in automated visual inspection, enabling systems to learn and identify a variety of surface defects automatically.
2) Nondestructive testing: non-destructive testing (NDT) methods include X-rays, magnetic particles, eddy currents, etc., for detecting internal and surface defects of materials. These methods are useful for quality control and defect detection of a variety of materials such as metals, composites, and glass.
3) Ultrasonic detection: ultrasonic techniques can detect surface and subsurface defects, such as cracks, bubbles, and inclusions, in materials such as metals and composites, and can be applied to both the inner and outer surfaces of workpieces.
4) Infrared thermal imaging: possible defects, such as thermal cracks or heat-leak points, are identified by detecting surface temperature differences; suitable for metals, composites, and similar materials.
5) Optical film inspection: used to detect defects such as bubbles, scratches, and stains in film coatings for optical elements, display screens, and coated products.
6) Electromagnetic induction: suitable for detecting metal surface defects; defects in conductive materials are detected by measuring eddy-current induction.
The continual advances and applications of the above-described techniques help to improve product quality, reduce scrap rates, and ensure that the product meets standards and customer requirements. Surface defect detection technology plays a key role in various manufacturing fields, and ensures the reliability and consistency of production.
However, apart from visual inspection, most of the above means are complex to implement and inefficient, and cannot meet the requirement for rapid and efficient defect detection.
In view of the above, no effective solution has been found in the prior art.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
In view of the above drawbacks and shortcomings of the prior art, the present application provides a surface defect positioning method, so as to solve the problems in the prior art that detection means are complex to implement, inefficient, and unable to perform defect detection rapidly and efficiently.
In order to solve the above technical problems, the present application provides a surface defect positioning method, including:
obtaining a target image, and performing texture extraction on the target image through a texture extraction algorithm to obtain a texture matrix; the texture matrix comprises an X-direction texture matrix and a Y-direction texture matrix of the target image;
respectively carrying out difference between the initial image and the X-direction texture matrix and the Y-direction texture matrix to obtain an X-direction confusion matrix and a Y-direction confusion matrix;
accumulating the X-direction confusion matrix and the Y-direction confusion matrix to obtain an ROI array; at least two ROI arrays are provided;
obtaining ROI intersections of ROI areas in different ROI arrays, and performing preliminary defect area positioning according to the ROI intersections; the ROI area is a position area which is non-zero and continuous in each ROI array;
removing the misjudgment area in the defect area after preliminary positioning through a similarity measurement algorithm to obtain a target defect area;
and displaying the positioning result of the target defect area on a visual application interface of a monitoring person.
In one embodiment of the present invention, acquiring the target image includes:
acquiring an image of the surface of the detection object to obtain the initial image;
converting the initial image into a gray scale image;
and carrying out fuzzy denoising on the gray level image through a Gaussian filter to obtain a target image.
In one embodiment of the present invention, performing image acquisition on a surface of a detection object, and acquiring the initial image includes:
and carrying out image acquisition on the surface of the detection object through an image acquisition system and a light supplementing system.
In one embodiment of the invention, the image acquisition system consists of a CCD line camera; the light supplementing system consists of an annular LED lamp fixed on the CCD linear array camera lens.
In one embodiment of the present invention, the formula of the Gaussian filter is:
G(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²));
where G is the value of the Gaussian function, x and y are independent variables representing the positions on the x-axis and y-axis respectively, and σ is the standard deviation, which determines the width and shape of the Gaussian distribution.
In one embodiment of the present invention, the texture extraction algorithm formula is:
where M(i, j) is the texture feature matrix in the j direction, p(i, j) is the gray-level pixel matrix of the image, i denotes the image direction with similar texture features, j denotes the texture direction with complex variation, m is the length in the i direction, and n is the length in the j direction.
In one embodiment of the present invention, the similarity measurement algorithm formula is:
where V is a pixel characteristic value, row is the transverse length of the matrix, col is the longitudinal length of the matrix, V_ROI is the pixel characteristic value of the region of interest, V_global is the global contrast pixel characteristic value corresponding to the current region of interest, S is the similarity, G is a judging function, ∅ denotes the empty set, and γ is a threshold.
In one embodiment of the present invention, when S > γ, the current region is regarded as a meaningful region, determined to be a defect region, and output;
when S < γ, the current region is regarded as a meaningless region and is rejected.
In one embodiment of the present invention, after the X-direction confusion matrix and the Y-direction confusion matrix are obtained, noise points in the matrix are further denoised, where a specific formula is as follows:
where M_chaos is the confusion matrix, M_Ori is the original image matrix, and δ is the threshold value corresponding to each direction.
In one embodiment of the present invention, the X-direction confusion matrix and the Y-direction confusion matrix are accumulated to obtain an ROI array, where the specific formula is:
where L is a one-dimensional array named the ROI array, its length is j, and L(j) is an element of the array.
Compared with the prior art, the technical scheme of the application has the following advantages:
the surface defect positioning method is a positioning method based on the characteristics of an input image, has relatively strong sensitivity to the contrast ratio of an image defect area and an image normal area, can weaken the contrast ratio of the noise point of the image and the image normal area according to the contrast ratio of the defect area and the normal area, and is consistent with the mode of sensing external light information by naked eyes of people.
The method positions most defects accurately and quickly, can locate defect positions in images with heavy noise, and can effectively extract defects from images with complex textures. Compared with defect positioning methods based on deep learning, it is simpler to compute and more efficient in detection, and can be applied well to quality control in industrial production.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings, in which:
FIG. 1 is a flow chart of a method for locating surface defects according to an embodiment of the present application;
FIG. 2 illustrates a defect localization effect diagram provided by an embodiment of the present application;
fig. 3 shows an effect diagram before eliminating a misjudgment area according to an embodiment of the present application;
fig. 4 shows an effect diagram after eliminating a misjudgment area according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the present application are only for the purpose of illustration and description, and are not intended to limit the protection scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this application, illustrates operations implemented according to some embodiments of the present application. It should be appreciated that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to the flow diagrams and one or more operations may be removed from the flow diagrams as directed by those skilled in the art.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
In order to enable those skilled in the art to use the present application, the following embodiments are given in connection with a specific application scenario: the positioning link in defect detection for scenes such as metal surface defects, textile surface defects, and rail surface defects. Those skilled in the art may apply the general principles defined herein to other embodiments and application scenarios without departing from the spirit and scope of the present application.
The method described in the embodiments of the present application may be applied to any scenario that requires a positioning link in defect detection, such as metal surface defects, textile surface defects, and track surface defects. The embodiments are not limited to a specific application scenario, and any scheme that uses the positioning link provided herein in such defect detection falls within the scope of protection of the present application.
In order to facilitate understanding of the present application, the technical solutions provided in the present application are described in detail below in conjunction with specific embodiments.
The application provides a surface defect positioning method, aiming to overcome the complex implementation and low efficiency of defect detection in the prior art.
The surface defect detection and positioning algorithm is simple and convenient to calculate, good in positioning effect and high in universality. The method is used for a positioning link in defect detection of scenes such as metal surface defects, textile surface defects, track surface defects and the like.
Referring to fig. 1, which is a flowchart of the surface defect positioning method provided in the present application, the method includes:
s101: obtaining a target image, and performing texture extraction on the target image through a texture extraction algorithm to obtain a texture matrix; the texture matrix comprises an X-direction texture matrix and a Y-direction texture matrix of the target image.
In some possible embodiments, acquiring the target image in step S101 includes:
acquiring an image of the surface of the detection object to obtain the initial image;
converting the initial image into a gray scale image;
and carrying out fuzzy denoising on the gray level image through a Gaussian filter to obtain a target image.
In some possible embodiments, image acquisition of the surface of the test object, acquiring the initial image includes:
and carrying out image acquisition on the surface of the detection object through an image acquisition system and a light supplementing system.
In some possible embodiments, the image acquisition system consists of a CCD line camera; the light supplementing system consists of an annular LED lamp fixed on the CCD linear array camera lens.
The image acquisition system, composed of CCD line-scan cameras, acquires images of the inspected surface, and the light supplementing system is switched on accordingly to obtain higher image quality and guarantee accuracy in subsequent processing.
Specifically, the light supplementing system is a ring-shaped LED lamp fixed on the camera lens. When a unidirectional LED fill light is used to enhance illumination, smooth surfaces such as metal and glass reflect easily, so the collected images contain light spots that interfere with defect positioning. A ring-shaped fill light is therefore used: the annular light illuminates the scene uniformly without causing excessive reflection.
Further, after the image from the image acquisition system is obtained, it needs to be further processed: the acquired BGR image is converted into a gray-level image, and the conversion formula is:
V_gray = 0.3R + 0.59G + 0.11B (1);
where V_gray is the gray pixel value, R is the R-channel pixel value of the BGR image, G is the G-channel pixel value, and B is the B-channel pixel value.
It should be noted that a gray-level image contains only brightness information, which simplifies the image-processing task and makes processing more efficient than working directly on the BGR image. The present application focuses mainly on the luminance information of the image.
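As an illustration of the conversion in formula (1), the following sketch uses OpenCV and NumPy; the helper name to_gray is illustrative and not part of the application.

```python
import cv2
import numpy as np

def to_gray(bgr_image: np.ndarray) -> np.ndarray:
    """Convert a BGR image to gray levels using the channel weights of formula (1)."""
    b, g, r = cv2.split(bgr_image.astype(np.float32))   # OpenCV stores channels as B, G, R
    gray = 0.3 * r + 0.59 * g + 0.11 * b                 # V_gray = 0.3R + 0.59G + 0.11B
    return np.clip(gray, 0, 255).astype(np.uint8)
```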
Further, after the gray-level image is obtained, a Gaussian filter is applied to blur and denoise the image in order to further improve its quality.
Specifically, the Gaussian filter is a linear smoothing filter whose weights are chosen according to the shape of the Gaussian function. Because the noise in the input image is assumed to largely follow a Gaussian distribution, the Gaussian filter is very effective at suppressing it; at the same time, the Gaussian kernel is rotationally symmetric, single-valued, and single-lobed, so the edge information of the image is preserved while the image is smoothed.
In some possible embodiments, the formula of the Gaussian filter is:
G(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²)) (2);
where G is the value of the Gaussian function, x and y are independent variables representing the positions on the x-axis and y-axis respectively, and σ is the standard deviation, which determines the width and shape of the Gaussian distribution.
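A minimal sketch of the Gaussian smoothing step follows, assuming a square kernel whose size and σ are free parameters; the helper name gaussian_kernel is illustrative, and cv2.GaussianBlur gives an equivalent result in practice.

```python
import numpy as np
import cv2

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Sample G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2) on a size x size grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return kernel / kernel.sum()      # normalize so the weights sum to 1

# target = cv2.filter2D(gray, -1, gaussian_kernel(5, 1.0))
# or, equivalently: target = cv2.GaussianBlur(gray, (5, 5), 1.0)
```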
Further, after blur denoising with the Gaussian filter, texture extraction is performed on the image according to formula (3) below.
In particular, texture extraction can be performed in a single direction or in multiple directions depending on the application scenario; for an image with a simple background, extraction in multiple directions makes positioning more accurate, and the maximum number of selectable extraction directions is less than or equal to the number of directions with similar features. In this embodiment, texture extraction and subsequent positioning are performed only in the horizontal X direction and the vertical Y direction. Texture extraction aims to capture the approximate background texture characteristics of the image; after extracting the textures in the two directions, an X-direction texture matrix and a Y-direction texture matrix are obtained.
In some possible embodiments, the texture extraction algorithm formula is:
where M(i, j) is the texture feature matrix in the j direction, p(i, j) is the gray-level pixel matrix of the image, i denotes the image direction with similar texture features, j denotes the texture direction with complex variation, m is the length in the i direction, and n is the length in the j direction.
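Since the image of formula (3) is not reproduced here, the following sketch shows only one plausible reading of the texture-extraction step: average the gray values along the direction with similar texture features and broadcast the resulting profile back to the image size. The function name, the axis convention, and the use of a simple mean are all assumptions.

```python
import numpy as np

def texture_matrix(gray: np.ndarray, similar_axis: int) -> np.ndarray:
    """Assumed texture extraction: mean gray value along the texture-similar direction,
    broadcast back to full size so it can be differenced with the original image."""
    profile = gray.astype(np.float32).mean(axis=similar_axis, keepdims=True)
    return np.broadcast_to(profile, gray.shape).copy()

# tex_x = texture_matrix(target, similar_axis=0)   # X-direction texture matrix (assumed convention)
# tex_y = texture_matrix(target, similar_axis=1)   # Y-direction texture matrix (assumed convention)
```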
S102: and respectively carrying out difference on the initial image and the X-direction texture matrix and the Y-direction texture matrix to obtain an X-direction confusion matrix and a Y-direction confusion matrix.
Illustratively, after the texture matrices are obtained, a new matrix is obtained by differencing the original (initial) image with each of the two directional texture matrices; each resulting matrix is named a confusion matrix.
It should be noted that these matrices contain a large number of noise points, so the thresholds α and β are introduced to denoise the two matrices respectively: smaller differences are filtered out by the threshold and larger differences are retained, as expressed in formula (4).
In some possible embodiments, after the X-direction confusion matrix and the Y-direction confusion matrix are obtained, noise points in the matrix are further denoised, where a specific formula is as follows:
where M_chaos is the confusion matrix, M_Ori is the original image matrix, and δ is the threshold value corresponding to each direction.
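A hedged sketch of the differencing and thresholding of formula (4) follows; whether the difference is taken as an absolute value, and the names alpha and beta for the per-direction thresholds δ, are assumptions.

```python
import numpy as np

def confusion_matrix(initial: np.ndarray, texture: np.ndarray, delta: float) -> np.ndarray:
    """Difference the initial image with a texture matrix and suppress small (noise) differences."""
    diff = np.abs(initial.astype(np.float32) - texture.astype(np.float32))  # assumed absolute difference
    diff[diff < delta] = 0                                                  # keep only larger differences
    return diff

# m_chaos_x = confusion_matrix(initial, tex_x, delta=alpha)   # X-direction confusion matrix
# m_chaos_y = confusion_matrix(initial, tex_y, delta=beta)    # Y-direction confusion matrix
```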
S103: accumulating the X-direction confusion matrix and the Y-direction confusion matrix to obtain an ROI array; the number of the ROI arrays is at least two.
Illustratively, the obtained X-direction and Y-direction confusion matrices are accumulated, so that the position of the defect is amplified as much as possible, as shown in the following equation (5).
In some possible embodiments, the X-direction confusion matrix and the Y-direction confusion matrix are accumulated to obtain an ROI array, where the specific formula is:
where L is a one-dimensional array named the ROI array, its length is j, and L(j) is an element of the array.
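One reading of formula (5) is sketched below: each confusion matrix is accumulated along one axis into a one-dimensional ROI array. The axis convention is an assumption.

```python
import numpy as np

def roi_array(chaos: np.ndarray, axis: int) -> np.ndarray:
    """Accumulate a confusion matrix along one axis into a 1-D ROI array (assumed reading of formula (5))."""
    return chaos.sum(axis=axis)

# roi_x = roi_array(m_chaos_x, axis=0)   # accumulation for the X direction (assumption)
# roi_y = roi_array(m_chaos_y, axis=1)   # accumulation for the Y direction (assumption)
```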
For example, after two or more ROI arrays are obtained, the positions in each array with values greater than 0 are marked, and each run of consecutive non-zero positions is an ROI region. The ROI regions from different ROI arrays are then intersected, and the result is named the ROI intersection; the number of its elements is the number of combinations of ROI regions. After the ROI intersection is obtained, the first, preliminary positioning is carried out.
S104: obtaining the ROI intersection of ROI regions in different ROI arrays, and performing preliminary defect area positioning according to the ROI intersection; an ROI region is each continuous non-zero position region in an ROI array.
Specifically, fig. 2 is a defect positioning effect diagram provided by an embodiment of the present application. As shown in fig. 2, the method can locate defect positions in images with heavy noise and can effectively extract defects from images with complex textures.
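The marking of continuous non-zero runs and the pairing of regions across arrays could look like the following sketch; the rectangle interpretation of the ROI intersection is an assumption.

```python
import numpy as np

def roi_regions(roi: np.ndarray):
    """Return (start, end) index pairs of each continuous non-zero run in an ROI array."""
    regions, start = [], None
    for idx, value in enumerate(roi):
        if value > 0 and start is None:
            start = idx
        elif value <= 0 and start is not None:
            regions.append((start, idx - 1))
            start = None
    if start is not None:
        regions.append((start, len(roi) - 1))
    return regions

# ROI intersection: every pairing of an X-region with a Y-region gives a candidate
# defect rectangle for preliminary positioning (assumed interpretation).
# candidates = [(rx, ry) for rx in roi_regions(roi_x) for ry in roi_regions(roi_y)]
```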
For example, after preliminary positioning, misjudged regions need to be removed. The removal criterion is given in formula (6) below, which removes misjudged regions effectively.
S105: and removing the misjudgment area in the defect area after preliminary positioning through a similarity measurement algorithm to obtain a target defect area.
In some possible embodiments, the similarity measurement algorithm formula is:
where V is a pixel characteristic value, row is the transverse length of the matrix, col is the longitudinal length of the matrix, V_ROI is the pixel characteristic value of the region of interest, V_global is the global contrast pixel characteristic value corresponding to the current region of interest, S is the similarity, G is a judging function, ∅ denotes the empty set, and γ is a threshold.
In some possible embodiments, when S > γ, the current region is regarded as a meaningful region, determined to be a defect region, and output; when S < γ, the current region is regarded as a meaningless region and is rejected.
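Because the image of formula (6) is not reproduced, the following sketch only illustrates the rejection logic: a pixel characteristic value is computed over the candidate region and over its global contrast region, a contrast-style similarity S is formed, and the region is kept only when S > γ. The mean-based characteristic value and the ratio form of S are assumptions.

```python
import numpy as np

def pixel_feature(patch: np.ndarray) -> float:
    """Assumed pixel characteristic value V: mean gray level over a row x col patch."""
    return float(patch.mean())

def keep_region(roi_patch: np.ndarray, global_patch: np.ndarray, gamma: float) -> bool:
    """Keep a candidate defect region only when its contrast measure S exceeds gamma."""
    v_roi = pixel_feature(roi_patch)
    v_global = pixel_feature(global_patch)
    s = abs(v_roi - v_global) / (v_global + 1e-6)   # contrast-style similarity S (assumed form)
    return s > gamma                                # S > gamma -> meaningful (defect) region
```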
Further, comparing the effect diagram before misjudged regions are removed (fig. 3) with the diagram after removal (fig. 4) shows that misjudged regions can be eliminated accurately, yielding a more accurate positioning result.
S106: and displaying the positioning result of the target defect area on a visual application interface of a monitoring person.
The final positioning result is obtained, and the positioning result is displayed on a visual application interface for monitoring staff.
In summary, the surface defect positioning method provided by the application locates defects based on the characteristics of the input image. It is highly sensitive to the contrast between the defect region and the normal region of the image, and can suppress the contrast between image noise points and the normal region relative to that of the defect region, which is consistent with the way the human eye perceives external light information.
The method positions most defects accurately and quickly, can locate defect positions in images with heavy noise, and can effectively extract defects from images with complex textures. Compared with defect positioning methods based on deep learning, it is simpler to compute and more efficient in detection, and can be applied well to quality control in industrial production.
It should be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or architecture. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
It is apparent that the above examples are given by way of illustration only and do not limit the embodiments. Other variations and modifications will be apparent to those of ordinary skill in the art in light of the foregoing description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom fall within the scope of the present invention.

Claims (10)

1. A method of locating surface defects, comprising:
obtaining a target image, and performing texture extraction on the target image through a texture extraction algorithm to obtain a texture matrix; the texture matrix comprises an X-direction texture matrix and a Y-direction texture matrix of the target image;
respectively carrying out difference between the initial image and the X-direction texture matrix and the Y-direction texture matrix to obtain an X-direction confusion matrix and a Y-direction confusion matrix;
accumulating the X-direction confusion matrix and the Y-direction confusion matrix to obtain an ROI array; the number of the ROI arrays is at least two;
obtaining ROI intersections of ROI areas in different ROI arrays, and performing preliminary defect area positioning according to the ROI intersections; the ROI area is a position area which is not 0 and is continuous in each ROI array;
removing the misjudgment area in the defect area after preliminary positioning through a similarity measurement algorithm to obtain a target defect area;
and displaying the positioning result of the target defect area on a visual application interface of a monitoring person.
2. The method of claim 1, wherein acquiring the target image comprises:
acquiring an image of the surface of the detection object to obtain the initial image;
converting the initial image into a gray scale image;
and carrying out fuzzy denoising on the gray level image through a Gaussian filter to obtain a target image.
3. The method of claim 2, wherein performing image acquisition on the surface of the detection object to obtain the initial image comprises:
and carrying out image acquisition on the surface of the detection object through an image acquisition system and a light supplementing system.
4. A surface defect localization method as claimed in claim 3, wherein the image acquisition system is composed of a CCD line camera; the light supplementing system consists of an annular LED lamp fixed on the CCD linear array camera lens.
5. The method for locating surface defects according to claim 2, wherein the formula of the Gaussian filter is:
G(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²));
where G is the value of the Gaussian function, x and y are independent variables representing the positions on the x-axis and y-axis respectively, and σ is the standard deviation, which determines the width and shape of the Gaussian distribution.
6. The method of claim 1, wherein the texture extraction algorithm formula is:
where M(i, j) is the texture feature matrix in the j direction, p(i, j) is the gray-level pixel matrix of the image, i denotes the image direction with similar texture features, j denotes the texture direction with complex variation, m is the length in the i direction, and n is the length in the j direction.
7. The surface defect localization method of claim 1, wherein the similarity measurement algorithm formula is:
where V is a pixel characteristic value, row is the transverse length of the matrix, col is the longitudinal length of the matrix, V_ROI is the pixel characteristic value of the region of interest, V_global is the global contrast pixel characteristic value corresponding to the current region of interest, S is the similarity, G is a judging function, ∅ denotes the empty set, and γ is a threshold.
8. The surface defect positioning method according to claim 7, wherein:
when S > γ, the current region is regarded as a meaningful region, determined to be a defect region, and output;
when S < γ, the current region is regarded as a meaningless region and is rejected.
9. The method for locating surface defects according to claim 1, wherein after obtaining the X-direction confusion matrix and the Y-direction confusion matrix, denoising noise points in the matrix, wherein the specific formula is as follows:
where M_chaos is the confusion matrix, M_Ori is the original image matrix, and δ is the threshold value corresponding to each direction.
10. The surface defect positioning method according to claim 1, wherein the accumulating of the X-direction confusion matrix and the Y-direction confusion matrix is performed to obtain an ROI array, and the specific formula is:
where L is a one-dimensional array named the ROI array, its length is j, and L(j) is an element of the array.
CN202311392936.9A 2023-10-25 2023-10-25 Surface defect positioning method Pending CN117495791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311392936.9A CN117495791A (en) 2023-10-25 2023-10-25 Surface defect positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311392936.9A CN117495791A (en) 2023-10-25 2023-10-25 Surface defect positioning method

Publications (1)

Publication Number Publication Date
CN117495791A (en) 2024-02-02

Family

ID=89679225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311392936.9A Pending CN117495791A (en) 2023-10-25 2023-10-25 Surface defect positioning method

Country Status (1)

Country Link
CN (1) CN117495791A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118052818A (en) * 2024-04-15 2024-05-17 宝鸡中海机械设备有限公司 Visual detection method for surface quality of sand mold 3D printer
CN118052818B (en) * 2024-04-15 2024-06-28 宝鸡中海机械设备有限公司 Visual detection method for surface quality of sand mold 3D printer


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination