
CN113869300A - Workpiece surface defect and character recognition method and system based on multi-vision fusion - Google Patents


Info

Publication number
CN113869300A
Authority
CN
China
Prior art keywords
workpiece
image
detected
light source
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111161598.9A
Other languages
Chinese (zh)
Inventor
周显恩
王耀南
朱青
毛建旭
汪志成
王飞文
余秋伟
周新城
刘世福
杨林
陈锐
李达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Communication Terminal Industry Technology Research Institute Co., Ltd.
Original Assignee
Jiangxi Communication Terminal Industry Technology Research Institute Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Communication Terminal Industry Technology Research Institute Co., Ltd.
Priority to CN202111161598.9A
Publication of CN113869300A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a workpiece surface defect and character recognition method and system based on multi-vision fusion. Images acquired by a plurality of industrial cameras are first feature-fused and stitched; coarse positioning of the surface label of the workpiece to be detected is completed through SIFT feature matching, and image enhancement through a Retinex algorithm further reduces the influence of illumination. Gaussian filtering then removes image noise, and a GANomaly network performs defect detection even in the absence of a large number of defect samples. The filtered image is segmented by adaptive thresholding; characters are located with a CTPN network and recognized with a CRNN network, the CRNN being trained and verified on a training data set augmented with defective-character samples, which improves the recognition of defective characters. Finally, the GANomaly defect detection result and the CRNN character recognition result are output visually, and the workpieces to be detected are sorted according to the results. The invention has the advantages of accurate detection and quick response.

Description

Workpiece surface defect and character recognition method and system based on multi-vision fusion
Technical Field
The invention belongs to the technical field of recognition of surface defects and characters on reflective workpieces, and particularly relates to a method and system for workpiece surface defect and character recognition based on multi-vision fusion.
Background
In industrial production, finished-product inspection of workpieces is very important. Most workpiece surfaces carry characters of various types that convey production information. In actual detection, however, there are many sources of interference: cluttered workshop light sources cause the workpiece to reflect light; the workpiece surface may have cracks and wear; workpieces come in different materials such as stainless steel, plastic and rubber; and characters are imprinted in various ways, including inkjet coding, stamping and embossing (engraving). All of this makes surface defect detection and character recognition difficult, especially for cylindrical workpieces. Solving these problems is of great significance for achieving intelligent industrial production of workpiece surface defect and character recognition.
Although researchers have studied character recognition on the surfaces of planar workpieces, there is little research on character recognition on arc-shaped or cylindrical workpieces. Character information on cylindrical workpieces is usually read manually and entered by hand into a computer for management and control, which consumes labor and is inefficient. Existing detection systems generally image with a single camera and require the workpiece to be placed at a specific position in advance; otherwise the character information cannot be acquired completely, or at all. As to recognition methods, the traditional approach obtains a binary image by threshold segmentation, segments individual characters by vertical projection, and recognizes them by template matching. Deep-learning character recognition has good robustness and high accuracy, but no existing method trains with an added defective-character training set, so recognition accuracy on defective characters is poor. Workpiece surface defect detection mainly uses deep learning as well, but current mainstream defect detection methods require large numbers of defect samples, and collecting many defective workpiece samples in actual production is difficult.
Disclosure of Invention
In order to solve at least one of the above technical problems, the invention provides a workpiece surface defect and character recognition method and system based on multi-vision fusion, which adapts to the material and character-imprinting type of cylindrical workpieces, quickly recognizes surface defects and incomplete characters under reflective lighting interference, and has the advantages of accurate detection and quick response.
One of the purposes of the invention is realized by the following technical scheme: a system comprising an industrial control computer, an object stage, a conveyor belt, a plurality of industrial cameras, a plurality of color-adjustable light sources and a plurality of light source controllers, the industrial control computer being connected to each light source controller and each industrial camera; the object stage is carried on the conveyor belt and holds the workpiece to be detected, which is a cylindrical workpiece; the conveyor belt conveys the workpiece to be detected; the industrial cameras are distributed in pairs around the workpiece to be detected at a first preset angle and photograph its surface; the color-adjustable light sources are distributed around the workpiece at intervals of a second preset angle, each having three channels corresponding to red, green and blue light, and each light source controller is connected to one color-adjustable light source to control it; the intensities of red, green and blue light are adjusted on the industrial control computer through the light source controllers to obtain the color light source required from each color-adjustable light source.
As a further improvement, the number of industrial cameras and color-adjustable light sources is three; the three industrial cameras are uniformly distributed around the workpiece to be detected at 120° intervals, and the three color-adjustable light sources are likewise distributed around the workpiece at 120° intervals.
The second purpose of the invention is realized by the following technical scheme: a workpiece surface defect and character recognition method based on multi-vision fusion is provided, which performs detection with the above system for workpiece surface defect and character recognition based on multi-vision fusion and comprises the following steps:
s1, placing the workpiece to be detected on an object stage, respectively starting each industrial camera, each color-adjustable light source, each light source controller and an industrial control computer, and adjusting the light source color of each corresponding color-adjustable light source through the industrial control computer and each light source controller to ensure the imaging effect of the workpiece to be detected;
s2, acquiring images of the workpiece to be detected through a plurality of industrial cameras;
s3, carrying out image feature fusion and splicing on the images acquired by the plurality of industrial cameras to obtain spliced images;
S4, scaling the stitched image and completing coarse positioning of the surface label of the workpiece to be detected using SIFT feature matching;
s5, performing image enhancement by using a Retinex algorithm, further reducing the influence of illumination on a workpiece to be detected, and improving the image quality;
S6, removing image noise with a Gaussian spatial filtering method, and detecting surface defects of the workpiece to be detected with a GANomaly network to obtain the defect information of the workpiece to be detected;
S7, obtaining a binary image through an adaptive threshold segmentation algorithm;
s8, positioning the surface characters of the workpiece to be detected by utilizing a CTPN network;
s9, re-acquiring the image sample of the workpiece to be detected, making a training data set, performing CRNN network training and verification, and sending the characters positioned in the step S8 into the trained and verified CRNN network for character recognition to obtain a surface character recognition result of the workpiece to be detected;
s10, visually outputting the defect information of the workpiece to be detected obtained in the step S6 and the surface character recognition result of the workpiece to be detected obtained in the step S9 through an industrial control computer, and displaying the information in the spliced image obtained in the step S3;
S11, judging from the defect information and character recognition result displayed on the stitched image whether the workpiece to be detected must be sorted out: if the workpiece has defects and/or erroneous characters, it is sorted out; otherwise no action is taken.
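By way of a hedged illustration (not part of the patent text), the adaptive threshold segmentation of step S7 can be sketched as local-mean thresholding; the function name and the block/offset parameters below are assumptions for the sketch:

```python
import numpy as np

def adaptive_threshold(img, block=15, c=0.02):
    """Binarize img (float values in [0, 1]): a pixel becomes 1 when it
    exceeds the mean of its block-by-block neighbourhood minus offset c.
    Edge pixels reuse the nearest image values (edge padding)."""
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            out[y, x] = 1 if img[y, x] > local_mean - c else 0
    return out
```

OpenCV's `cv2.adaptiveThreshold` implements the same idea efficiently; the loop version above is only for clarity.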
As a further improvement, in step S1, the specific method for adjusting the light source color of each corresponding color-adjustable light source through the industrial control computer and each light source controller to ensure the imaging effect of the workpiece to be detected is as follows:
firstly, a light source color complementary to the appearance color and label color of the workpiece to be detected is selected to enhance contrast, or a light source color adjacent to the background of the workpiece is selected to remove the interference of unwanted information; secondly, the red, green or blue intensity of each corresponding color-adjustable light source is adjusted on the industrial control computer through its light source controller so that each color-adjustable light source produces the required light source color, guaranteeing on the imaging side the quality of the characters on the surface of the workpiece to be detected.
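The complementary-color rule above can be sketched in a simplified form; the helper names below are hypothetical, and inverting 8-bit channels is only one common model of "complementary" (real installations tune the light color empirically):

```python
def complementary_color(rgb):
    """Simplified model of 'complementary color': invert each 8-bit channel.
    (Hypothetical helper; actual setups refine the choice by eye.)"""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

def channel_intensities(rgb, max_level=255):
    """Convert a chosen light color into relative intensity settings for the
    red, green and blue channels of one color-adjustable light source."""
    return tuple(round(v / max_level, 3) for v in rgb)
```

For example, a red label (255, 0, 0) would receive cyan light (0, 255, 255) to maximize contrast.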
As a further improvement, step S3 specifically comprises the following steps:
S31, each image acquired by an industrial camera is scaled, preserving its aspect ratio, so that its shortest side is 800 pixels, and feature points are extracted from each scaled image with the SURF (speeded-up robust features) algorithm;
S32, the extracted feature points are matched with a fast approximate nearest-neighbour algorithm;
S33, a homography matrix is established between the images, and a random sample consensus (RANSAC) algorithm iteratively screens reliable matching points, completing image registration;
S34, image fusion is realized with a multiband blending strategy, eliminating stitching seams and ghosting.
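The RANSAC screening of step S33 can be sketched as follows. This is a hedged illustration, not the patent's code: it uses a simplified translation model in place of a full homography (the sampling/consensus logic is the same), and all names are assumptions:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Screen reliable matches with RANSAC under a translation model.

    src, dst: (N, 2) arrays of matched feature coordinates.
    Returns (best_shift, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_mask, best_shift = None, None
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one match
        shift = dst[i] - src[i]              # hypothesised translation
        err = np.linalg.norm(src + shift - dst, axis=1)
        mask = err < tol                     # consensus set for this hypothesis
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_shift = mask, shift
    # refit on all inliers for the final estimate
    best_shift = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return best_shift, best_mask
```

In a real stitching pipeline one would sample four matches per iteration and estimate a homography (e.g. `cv2.findHomography(..., cv2.RANSAC)`).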
As a further improvement, the image enhancement by the Retinex algorithm in step S5 is specifically represented as:
S51, based on the Retinex theory, the original image S(x, y) is transformed into the logarithmic domain, converting the product relationship into a sum: log S(x, y) = log R(x, y) + log L(x, y), where R(x, y) denotes the reflection image of the original image S(x, y), L(x, y) denotes the illumination image, and x and y denote the image abscissa and ordinate.
S52, performing gaussian convolution on the original image S (x, y) to estimate the illumination image L (x, y), estimating L (x, y) from the original image S (x, y), and removing the illumination image L (x, y) to obtain the reflection image R (x, y), that is:
Figure BDA0003290096630000051
in the formula, R (x, y) represents a logarithmic domain expression of the reflection image R (x, y), σ represents a gaussian ambient constant, λ represents a gaussian distribution constant coefficient, and exp represents an exponential operation with a natural coefficient as a base.
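The single-scale Retinex formula can be sketched in a few lines of NumPy. This is an illustrative implementation (circular FFT convolution, simplified edge handling), not the patent's code:

```python
import numpy as np

def single_scale_retinex(img, sigma=80.0):
    """r(x, y) = log S - log(F * S), with F a normalised Gaussian surround."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    F = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (sigma ** 2))
    F /= F.sum()  # lambda chosen so that F sums to 1
    # circular convolution via FFT; ifftshift moves the kernel peak to (0, 0)
    L = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(F))))
    eps = 1e-6    # avoid log(0)
    return np.log(img + eps) - np.log(np.clip(L, eps, None))
```

On a uniformly lit region the estimated illumination equals the signal, so the output is close to zero, which is exactly the illumination-removal behaviour the step describes.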
As a further improvement, the GANomaly network is composed of a generation network, a discriminator D and a reconstruction encoder GE'(·), the generation network comprising an encoder GE(x) and a decoder GD(z). The detection of surface defects of the workpiece to be detected with the GANomaly network in step S6 specifically comprises the following steps:
S61, shooting N images of normal workpieces to be detected, N being a positive integer greater than zero;
S62, sending the image x_i to the encoder GE(x) to obtain the latent variable z, which is passed through the decoder GD(z) to obtain the reconstructed image x̂_i of x_i, i ≤ N;
S63, the discriminator D judges the image x_i as true and the reconstructed image x̂_i as false, thereby continuously reducing the difference between x̂_i and x_i; an ideal reconstructed image has no difference from x_i;
S64, the reconstruction encoder GE'(·) encodes the reconstructed image x̂_i again to obtain the latent variable ẑ of the reconstructed image;
S65, the difference between the latent variable z obtained by the encoder GE(x) and the latent variable ẑ obtained by the reconstruction encoder GE'(·) is compared with a preset threshold φ: when the difference is greater than φ, the image x_i is determined to be an abnormal sample; otherwise x_i is considered a normal sample. An abnormal sample corresponds to a photographed workpiece surface with defects, and a normal sample to a defect-free surface.
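The decision rule of steps S62-S65 can be sketched as a small function; in the real network GE, GD and GE' are trained convolutional networks, whereas here they are arbitrary callables supplied by the caller (all names are illustrative assumptions):

```python
import numpy as np

def ganomaly_decision(x, G_E, G_D, G_E2, phi):
    """Sketch of steps S62-S65: encode the image, decode a reconstruction,
    re-encode it, and compare the two latent codes against threshold phi."""
    z = G_E(x)                      # latent code of the input image x_i
    x_hat = G_D(z)                  # reconstructed image of x_i
    z_hat = G_E2(x_hat)             # latent code of the reconstruction
    score = np.linalg.norm(z - z_hat)
    return ("abnormal" if score > phi else "normal"), score
```

The key design point is that the generator is trained only on normal samples, so reconstructions of defective surfaces land far from the input in latent space, pushing the score above φ.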
As a further improvement, the positioning of the surface characters of the workpiece to be detected with the CTPN network in step S8 specifically comprises:
S81, extracting features with a VGG16 network, taking the output of the third convolutional layer of the fifth convolutional block as the feature map, of size W × H × C, where W represents the width, H the height and C the number of channels;
S82, sliding a 3 × 3 window over the feature map, each window yielding a feature vector of length 3 × 3 × C; the center of each sliding window predicts offsets relative to k anchor boxes, k being a positive integer greater than zero;
S83, feeding the features extracted in step S81 into a bidirectional long short-term memory (BiLSTM) network to obtain an output of length W × 256, followed by a 512-dimensional fully connected layer in preparation for output;
S84, attaching an output layer after the fully connected layer, with three outputs: 2k vertical coordinates, 2k scores and k side-refinement values, where the vertical coordinates represent the height and center y-coordinate of each prediction box, the scores represent the category information of the k anchor boxes, and the side-refinement values represent the horizontal translation of each prediction box;
S85, filtering redundant text prediction regions with a standard non-maximum suppression algorithm;
S86, merging the remaining text prediction regions into text lines with a graph-based text-line construction algorithm, completing character positioning.
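The vertical-coordinate decoding of step S84 follows the standard CTPN regression, and the line construction of step S86 can be sketched greedily. This is an illustrative simplification (real CTPN grouping also checks vertical overlap between proposals); all names are assumptions:

```python
import math

def decode_vertical(vc, vh, anchor_cy, anchor_h):
    """CTPN vertical regression (step S84): recover the center y and height
    of a proposal from the 2 predicted vertical coordinates of an anchor."""
    cy = vc * anchor_h + anchor_cy        # predicted box center y
    h = anchor_h * math.exp(vh)           # predicted box height
    return cy, h

def group_text_lines(boxes, max_gap=50):
    """Greedy sketch of step S86: merge x-sorted proposals into text lines
    when the horizontal gap between neighbours is below max_gap pixels."""
    bs = sorted(boxes)                    # boxes given as (x_left, x_right)
    lines, cur = [], [bs[0]]
    for b in bs[1:]:
        if b[0] - cur[-1][1] < max_gap:
            cur.append(b)
        else:
            lines.append(cur)
            cur = [b]
    lines.append(cur)
    return lines
```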
As a further improvement, the specific method for making the training data set in step S9 is as follows:
S911, download from GitHub 1000 images of the Chinese natural-scene text data set released by Tsinghua University and Tencent;
S912, download from GitHub 1000 images of the ICDAR 2015 English text data set;
S913, generate by script 1000 images of ordinary complete Chinese and English characters and 2000 images of incomplete (defective) characters, and form a data set from these images;
S914, organize the data set. The CRNN training data format is LMDB, storing two kinds of data: picture data and label data. The picture data are pictures bearing characters, the character height occupying 80%-90% of the picture height; the label data are txt text files whose content is the characters on the corresponding picture, i.e. each text file name matches its picture name.
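A minimal sketch of the S914 naming convention (every picture paired with a same-named label file) is shown below; the LMDB packing itself requires the third-party `lmdb` package and is omitted, and the function and extension names are assumptions:

```python
from pathlib import Path

def check_label_pairs(folder, img_ext=".png", label_ext=".txt"):
    """Verify the S914 convention: every picture has a label text file with
    the same base name. Returns the names of pictures missing a label."""
    folder = Path(folder)
    return [p.name for p in sorted(folder.glob(f"*{img_ext}"))
            if not (folder / (p.stem + label_ext)).exists()]
```

Running such a check before packing the LMDB catches mislabeled samples early, when they are still cheap to fix.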
As a further improvement, the trained and verified CRNN network comprises a convolutional layer, a recurrent layer and a transcription layer, and the recognition of the surface characters of the workpiece to be detected in step S9 decomposes into the following processes:
S921, the convolutional layer uses a CNN to extract a feature sequence from the input image;
S922, the recurrent layer uses an RNN to predict the label distribution of the feature sequence obtained from the convolutional layer;
S923, the transcription layer uses CTC to convert the label distribution obtained from the recurrent layer into the final recognition result through de-duplication and integration.
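The de-duplication step of S923 is CTC greedy decoding: collapse consecutive repeats of the per-frame best labels, then drop the blank symbol. A minimal sketch (names are illustrative):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Step S923 sketch: CTC post-processing of per-frame argmax labels --
    collapse consecutive repeats, then drop the blank symbol."""
    decoded, prev = [], None
    for t in frame_labels:
        if t != prev and t != blank:
            decoded.append(t)
        prev = t
    return decoded
```

Note the ordering: collapsing before blank removal is what lets CTC represent genuinely doubled characters (e.g. "11") by separating the repeats with a blank.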
With the workpiece surface defect and character recognition method and system based on multi-vision fusion provided by the invention, images collected by a plurality of industrial cameras are first feature-fused and stitched; coarse positioning of the surface label of the workpiece to be detected is completed through SIFT feature matching, and image enhancement through a Retinex algorithm further reduces the influence of illumination. Gaussian filtering then removes image noise, and a GANomaly network performs defect detection even in the absence of a large number of defect samples. The filtered image is segmented by adaptive thresholding; characters are located with a CTPN network and recognized with a CRNN network trained and verified on a training data set augmented with defective-character samples, improving the recognition of defective characters. Finally, the GANomaly defect detection result and the CRNN character recognition result are output visually, and the workpieces to be detected are sorted according to the results. The method adapts to the material and character-imprinting type of cylindrical workpieces, quickly recognizes surface defects and incomplete characters under reflective interference, and has the advantages of accurate detection and quick response.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
FIG. 1 is a schematic diagram of an embodiment of a system for workpiece surface defect and character recognition based on multi-vision fusion.
FIG. 2 is a flow chart of an embodiment of a method for workpiece surface defect and character recognition based on multi-vision fusion.
FIG. 3 is a diagram of image feature fusion and stitching effects of an embodiment of a workpiece surface defect and character recognition method based on multi-vision fusion.
FIG. 4 is an effect diagram of an embodiment of a workpiece surface defect and character recognition method based on multi-vision fusion.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings and specific embodiments, and it is to be noted that the embodiments and features of the embodiments of the present application can be combined with each other without conflict.
As shown in fig. 1, an embodiment of the present invention provides a system for workpiece surface defect and character recognition based on multi-vision fusion, including an industrial control computer 6, an object stage 2, a conveyor belt 4, a plurality of industrial cameras, a plurality of color-adjustable light sources and a plurality of light source controllers 3, where the industrial control computer 6 is connected to each light source controller 3 and each industrial camera respectively; the object stage 2 is carried on the conveying belt 4 and used for placing a workpiece to be detected, and the workpiece to be detected is a cylindrical workpiece to be detected; the conveying belt 4 is used for conveying workpieces to be detected; the industrial cameras are distributed around the workpiece to be detected at preset angles, and are used for shooting the surface image of the workpiece to be detected; the plurality of color-adjustable light sources are distributed around the workpiece to be detected at intervals of a second preset angle, each color-adjustable light source is provided with three channels which respectively correspond to red light, green light and blue light, and each light source controller 3 is connected with one color-adjustable light source and used for controlling the color-adjustable light sources; the intensity of red light, green light and blue light is adjusted on an industrial control computer 6 through a light source controller 3 to obtain a color light source required by a corresponding color-adjustable light source.
To address the difficulty of workpiece surface defect and character recognition on industrial production lines caused by cluttered workshop illumination and severe workpiece reflections, the invention provides a workpiece surface defect and character recognition system based on multi-vision fusion, comprising industrial cameras 1, color-adjustable light sources 2, light source controllers 3, an industrial control computer 6 and a conveyor belt 4. A plurality of industrial cameras 1 are placed in pairs around the workpiece to be detected at preset angles to obtain a complete image of the workpiece surface, and a plurality of color-adjustable light sources 2 are likewise placed in pairs around the workpiece at preset angles. According to the specific material, character-imprinting mode and appearance color of the workpiece to be detected, the system adjusts the red, green and blue intensities on the industrial control computer 6 through the light source controllers 3 to obtain the color required from each color-adjustable light source 2, reducing the influence of uneven illumination at the hardware level and obtaining a clear image of the workpiece surface.
As a preferred embodiment of the invention, the number of industrial cameras 1 and color-adjustable light sources 2 is three; the three industrial cameras 1 are uniformly distributed around the workpiece to be inspected at 120° intervals, and the three color-adjustable light sources 2 are likewise distributed around the workpiece at 120° intervals. Each color-adjustable light source 2 is preferably disposed between two adjacent industrial cameras 1. Of course, many other numbers of industrial cameras 1 and color-adjustable light sources 2 are possible while still achieving the technical effects of the invention.
Meanwhile, as shown in fig. 2, an embodiment of the present invention further provides a workpiece surface defect and character recognition method based on multi-vision fusion; the method performs detection with the above system for workpiece surface defect and character recognition based on multi-vision fusion and comprises the following steps:
s1, placing the workpiece to be detected on an object stage, respectively starting each industrial camera, each color-adjustable light source, each light source controller and an industrial control computer, and adjusting the light source color of each corresponding color-adjustable light source through the industrial control computer and each light source controller to ensure the imaging effect of the workpiece to be detected;
Preferably, in this step the specific method of ensuring the imaging effect of the workpiece to be detected by adjusting the light color of each color-adjustable light source through the industrial control computer and the light source controllers is as follows:
First, a light color complementary to the appearance color and label color of the workpiece to be detected is selected to enhance contrast, or a light color adjacent to the background color of the workpiece is selected to suppress unnecessary information. Then the intensity of the red, green or blue light of each color-adjustable light source is adjusted on the industrial control computer through the corresponding light source controller, so that each light source emits the required light color and the characters on the workpiece surface image clearly. The rationale is that a light color complementary to the target object enhances the contrast between the object and the background, while a light color adjacent to the background filters the background out;
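As a rough illustration of the complementary-color rule above (a minimal sketch; the 0–255 channel scale and the simple per-channel complement are assumptions for illustration, not the patent's exact controller protocol):

```python
def complementary_rgb(rgb):
    """Return the RGB complement of a color on the 0-255 scale.

    Lighting a workpiece with the complement of its label color
    maximizes label/background contrast in the captured image.
    """
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# Example: for a red label (255, 0, 0) the controller would raise the
# green and blue channels and switch off the red channel.
print(complementary_rgb((255, 0, 0)))  # (0, 255, 255) -> cyan light
```

In practice the three channel intensities would be sent to the light source controller rather than printed.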
S2, acquiring images of the workpiece to be detected through the plurality of industrial cameras;
S3, carrying out image feature fusion and splicing on the images acquired by the plurality of industrial cameras to obtain a spliced image;
S4, scaling the spliced image, and completing coarse positioning of the label on the surface of the workpiece to be detected using SIFT (Scale-Invariant Feature Transform) feature matching;
S5, performing image enhancement with the single-scale Retinex (SSR) algorithm, further reducing the influence of illumination on the workpiece to be detected and improving image quality;
S6, removing image noise with Gaussian spatial filtering, and detecting surface defects of the workpiece to be detected with a GANomaly network (a generative-adversarial-network-based anomaly detection model) to obtain the defect information of the workpiece to be detected;
S7, obtaining a binary image through an adaptive threshold segmentation algorithm;
S8, positioning the surface characters of the workpiece to be detected with a CTPN (Connectionist Text Proposal Network);
S9, re-acquiring image samples of the workpiece to be detected, making a training data set, training and verifying a CRNN (Convolutional Recurrent Neural Network), and sending the characters positioned in step S8 into the trained and verified CRNN for character recognition to obtain the surface character recognition result of the workpiece to be detected;
S10, visually outputting, through the industrial control computer, the defect information of the workpiece obtained in step S6 and the surface character recognition result obtained in step S9, and displaying the information on the spliced image obtained in step S3;
S11, judging, from the defect information and character recognition result displayed on the spliced image, whether the workpiece to be detected needs to be sorted out: if the workpiece has defects and/or its characters are wrong, it is sorted out; otherwise, no action is taken.
In this method, the images acquired by the plurality of cameras are first feature-fused and spliced, coarse positioning of the label on the workpiece surface is completed through SIFT feature matching, and image enhancement is performed with the Retinex method to further reduce the influence of illumination at the algorithm level. Gaussian filtering then removes image noise and, in the absence of a large number of defect samples, the GANomaly network is used for defect detection. Next, the filtered image is segmented with an adaptive threshold, characters are positioned with the CTPN network, and character recognition is performed with the CRNN network; specifically, the CRNN network is trained and verified on a training data set that includes defective-character samples, which improves the method's recognition of defective characters. Finally, the GANomaly defect detection result and the CRNN character recognition result are visually output, and the workpieces to be detected are sorted according to those results. The method adapts to the material and character-marking type of cylindrical workpieces, can quickly identify surface defects and incomplete characters under reflective interference, and offers accurate detection with fast response.
Meanwhile, as shown in fig. 3, step S3 specifically includes the following steps:
S31, scaling each image acquired by an industrial camera, preserving its aspect ratio, so that its shortest side is 800 pixels, and extracting feature points from each scaled image with the Speeded-Up Robust Features (SURF) algorithm;
S32, matching the extracted feature points with the Fast Library for Approximate Nearest Neighbors (FLANN);
S33, establishing homography matrices between the images, and screening reliable matching points iteratively with the Random Sample Consensus (RANSAC) algorithm to complete image registration;
S34, fusing the images according to a multi-band blending strategy, eliminating splicing seams and ghosting.
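Before RANSAC screening, FLANN-based matching is commonly pre-filtered with Lowe's ratio test to discard ambiguous correspondences; a minimal sketch of that filtering step (the 0.75 ratio and the tuple representation of k-nearest-neighbor matches are assumptions, not details taken from the patent):

```python
def ratio_test(knn_matches, ratio=0.75):
    """Keep a match only when its best distance is clearly smaller
    than the distance to the second-best candidate.

    knn_matches: list of (query_idx, train_idx, d1, d2) tuples,
    where d1 <= d2 are the two nearest-neighbor distances.
    """
    good = []
    for query_idx, train_idx, d1, d2 in knn_matches:
        if d1 < ratio * d2:  # unambiguous match -> reliable
            good.append((query_idx, train_idx))
    return good

matches = [(0, 3, 0.2, 0.9),   # clearly best -> kept
           (1, 7, 0.5, 0.55)]  # ambiguous    -> dropped
print(ratio_test(matches))     # [(0, 3)]
```

The surviving pairs would then feed the homography estimation of step S33.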
As a preferred embodiment of the present invention, the image enhancement using the Retinex algorithm in step S5 is specifically represented as:
S51, transforming the original image S(x, y) into the log domain based on the Retinex algorithm, so that the product relationship becomes a sum: log S(x, y) = log R(x, y) + log L(x, y), where R(x, y) denotes the reflection image of the original image S(x, y), L(x, y) denotes the illumination image, x denotes the abscissa of the image, and y denotes the ordinate of the image. It should be noted that the basic assumption of the Retinex algorithm is that the original image S(x, y) is the product of the illumination image L(x, y) and the reflection image R(x, y), that is: S(x, y) = R(x, y) × L(x, y).
S52, performing Gaussian convolution on the original image S(x, y) to estimate the illumination image L(x, y), and removing the illumination image L(x, y) to obtain the reflection image R(x, y), that is:

r(x, y) = log S(x, y) − log[S(x, y) * G(x, y)],  with  G(x, y) = λ exp(−(x² + y²) / σ²)

where r(x, y) denotes the log-domain expression of the reflection image R(x, y), * denotes convolution, σ denotes the Gaussian surround scale constant, λ denotes a normalization constant (chosen so that G integrates to 1), and exp denotes exponentiation with the natural constant e as base.
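A minimal single-scale Retinex sketch in NumPy following the log-domain relation r(x, y) = log S(x, y) − log(S * G)(x, y); the separable edge-padded convolution, the kernel radius of 3σ, and the small default σ are implementation assumptions, not details from the patent:

```python
import numpy as np

def _gaussian_kernel(sigma):
    radius = int(3 * sigma)
    ax = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(ax ** 2) / (sigma ** 2))  # surround function; lambda fixed by normalization
    return k / k.sum()

def single_scale_retinex(image, sigma=2.0):
    """r(x, y) = log S(x, y) - log(S * G)(x, y) for a 2-D gray image."""
    s = image.astype(np.float64) + 1.0  # +1 avoids log(0)
    k = _gaussian_kernel(sigma)
    pad = (len(k) - 1) // 2
    # Separable Gaussian blur: rows, then columns, with edge padding.
    blur = np.apply_along_axis(
        lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, mode="valid"), 1, s)
    blur = np.apply_along_axis(
        lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, mode="valid"), 0, blur)
    return np.log(s) - np.log(blur)

# A uniformly lit flat patch has flat reflectance: r is ~0 everywhere.
flat = np.full((10, 10), 100.0)
print(np.allclose(single_scale_retinex(flat), 0.0))  # True
```

For real images, σ would be far larger (the patent does not fix a value), and the result is usually rescaled to the display range.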
Furthermore, it is worth mentioning that the above GANomaly network is composed of a generator network, a discriminator D and a reconstruction encoder E. The generator network comprises an encoder GE(x) and a decoder GD(z). Detecting the surface defects of the workpiece to be detected with the GANomaly network in step S6 specifically comprises the following steps:
S61, capturing N images of normal workpieces to be detected, N being a positive integer greater than zero; the value of N is preferably 500, although it is not limited thereto and may be 600, 700 or more;
S62, feeding the acquired image x_i (i ≤ N) to the encoder GE(x) to obtain the latent variable z, and passing z through the decoder GD(z) to obtain the reconstructed image x̂_i of x_i;
S63, the discriminator D judging the image x_i to be real and the reconstructed image x̂_i to be fake, thereby continuously driving optimization of the reconstructed image x̂_i toward the image x_i; an ideal reconstruction shows no difference from x_i;
S64, the reconstruction encoder E encoding the reconstructed image x̂_i again to obtain the latent variable ẑ of the reconstruction;
S65, comparing the difference between the latent variable z obtained by the encoder GE(x) and the latent variable ẑ obtained by the reconstruction encoder E against a preset threshold φ. When the difference is greater than φ, the input image x_i is determined to be an abnormal sample; otherwise, x_i is considered a normal sample. An abnormal sample is a captured image of a workpiece whose surface has defects, whereas a normal sample is a captured image of a workpiece whose surface is defect-free.
In step S65, the encoder GE(x), the decoder GD(z) and the reconstruction encoder E are all fitted to normal samples. When an abnormal sample is received, the encoder GE(x) and decoder GD(z) do not generalize to it, so the difference between the encoded latent variable z and the latent variable ẑ obtained by the reconstruction encoder becomes large, i.e., greater than the preset threshold φ.
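The decision rule of steps S62–S65 reduces to comparing a latent-space distance with φ. A minimal sketch with placeholder callables (the L1 distance and the toy identity functions are assumptions for illustration — in GANomaly these are trained networks):

```python
import numpy as np

def anomaly_score(x, encode, decode, re_encode):
    """||z - z_hat||_1 where z = GE(x) and z_hat = E(GD(GE(x)))."""
    z = encode(x)             # latent code of the input image
    x_hat = decode(z)         # reconstructed image
    z_hat = re_encode(x_hat)  # latent code of the reconstruction
    return float(np.abs(z - z_hat).sum())

def is_defective(x, encode, decode, re_encode, phi):
    """Flag the sample as abnormal when the latent difference exceeds phi."""
    return anomaly_score(x, encode, decode, re_encode) > phi

# Toy stand-ins: identity mappings reconstruct "normal" samples perfectly,
# so the score is 0 and the sample is classified as normal.
identity = lambda v: v
x = np.array([0.2, 0.7, 0.1])
print(is_defective(x, identity, identity, identity, phi=0.1))  # False
```

On a trained model, φ would be calibrated on held-out normal samples so that their scores fall below it.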
As a preferred embodiment of the present invention, the positioning of the surface characters of the workpiece to be detected by using the CTPN network in step S8 specifically includes:
S81, extracting features with a VGG16 network, taking the features produced by the third convolutional layer (conv5_3) of the fifth convolutional block as the feature map, of size W × H × C, where W denotes width, H denotes height and C denotes the number of channels; the VGG16 network itself is prior art and is not described further here;
S82, sliding a 3 × 3 window over the feature map, each window position yielding a feature vector of length 3 × 3 × C, the center of each window predicting offsets relative to k anchor boxes, k being a positive integer greater than zero;
S83, inputting the features extracted in step S81 into a bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain an output of length W × 256, followed by a 512-dimensional fully connected layer to prepare for output;
S84, connecting an output layer after the fully connected layer, the output layer having three outputs: 2k vertical coordinates giving the height and center y-axis coordinate of each prediction box (bounding box), which determine the upper and lower boundaries; 2k scores giving the category information (character or not) of the k anchor boxes; and k side-refinement values giving the horizontal translation of each prediction box;
S85, filtering redundant text proposals with the standard non-maximum suppression algorithm;
S86, merging the remaining text proposals into text lines with a graph-based text line construction algorithm, completing character positioning.
It should be noted that the bounding box is obtained by neural-network iteration and reflects the position coordinates of the target object; the anchor serves as the reference from which a bounding box is computed during iteration, so the prediction boxes generated by the algorithm only need to be adjusted relative to the anchor boxes; a proposal is the region enclosed by a bounding box.
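The 2k vertical coordinates of step S84 are relative offsets; decoding them back to an absolute box center and height follows the usual CTPN parameterization. A sketch under that assumption (the patent does not spell out its regression targets, so the standard v_c/v_h formulas are assumed):

```python
import math

def decode_vertical(v_c, v_h, anchor_cy, anchor_h):
    """Recover a proposal's center-y and height from CTPN-style offsets.

    v_c: predicted relative offset of the box center
    v_h: predicted log-scale height offset
    """
    cy = v_c * anchor_h + anchor_cy  # shift center by a fraction of anchor height
    h = anchor_h * math.exp(v_h)     # scale anchor height
    return cy, h

# Zero offsets reproduce the anchor itself.
print(decode_vertical(0.0, 0.0, anchor_cy=50.0, anchor_h=11.0))  # (50.0, 11.0)
```

Each decoded (cy, h) pair, combined with the fixed 16-pixel proposal width used by CTPN, gives one upright text proposal for the NMS step.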
Further, the specific method for making the training data set in step S9 is as follows:
S911, downloading from GitHub 1000 images of the Chinese natural-text data set released jointly by Tsinghua University and Tencent;
S912, downloading from GitHub 1000 images of the ICDAR 2015 English text data set;
S913, generating by code 1000 images of conventional complete Chinese and English characters and 2000 images of incomplete characters, the 1000 complete-character images and the 2000 incomplete-character images forming a data set;
S914, organizing the data set. The CRNN training data format is LMDB, which stores two kinds of data: picture data and label data. The picture data are pictures bearing characters, with the character height occupying 80%–90% of the picture height; the label data are txt text files whose content is the characters on the corresponding picture, i.e., each text file name matches its picture name.
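The LMDB layout used by common CRNN training code stores interleaved image/label records under indexed keys. A sketch of that record layout, using a plain dict as a stand-in for an LMDB transaction (the `image-%09d`/`label-%09d`/`num-samples` key scheme follows the widely used CRNN reference implementation and is an assumption about this patent's exact on-disk format):

```python
def build_crnn_records(samples):
    """samples: list of (image_bytes, label_text) pairs.

    Returns a dict mimicking the key/value pairs a CRNN LMDB
    dataset writer would commit in one transaction.
    """
    records = {}
    for i, (image_bytes, label) in enumerate(samples, start=1):
        records[b"image-%09d" % i] = image_bytes            # raw encoded picture
        records[b"label-%09d" % i] = label.encode("utf-8")  # its character string
    records[b"num-samples"] = str(len(samples)).encode("utf-8")
    return records

db = build_crnn_records([(b"<jpeg bytes>", "ABC123")])
print(sorted(db))  # [b'image-000000001', b'label-000000001', b'num-samples']
```

With the real `lmdb` package, the same pairs would be written via `env.begin(write=True)` and `txn.put(key, value)`.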
Meanwhile, the trained and verified CRNN network of the present invention includes a convolutional layer, a recurrent layer and a transcription layer, and recognizing the surface characters of the workpiece to be detected with the trained and verified CRNN network in step S9 decomposes into the following process:
S921, the convolutional layer extracting a feature sequence from the input image with a CNN (convolutional neural network);
S922, the recurrent layer predicting the label distribution of the feature sequence obtained from the convolutional layer with a Recurrent Neural Network (RNN);
S923, the transcription layer converting the label distribution obtained from the recurrent layer into the final recognition result through a de-duplication and integration operation using Connectionist Temporal Classification (CTC).
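The de-duplication of step S923 (collapse repeated labels, then drop blanks) can be sketched as a greedy CTC decoder; the blank index 0 and the per-frame argmax input are assumptions for illustration:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse repeats, then remove blanks.

    frame_labels: per-frame argmax label indices from the recurrent layer.
    """
    decoded, previous = [], None
    for label in frame_labels:
        if label != previous and label != blank:
            decoded.append(label)
        previous = label  # track the last frame to collapse repeats
    return decoded

# 'h h _ e _ l l _ l o' -> 'h e l l o'  (indices stand in for characters)
print(ctc_greedy_decode([1, 1, 0, 2, 0, 3, 3, 0, 3, 4]))  # [1, 2, 3, 3, 4]
```

Note how the blank between the two runs of label 3 preserves the double letter — this is why CTC inserts blanks rather than merely deleting repeats.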
FIG. 4 is an effect diagram of an embodiment of the workpiece surface defect and character recognition method based on multi-vision fusion, showing surface defect and character recognition on a vehicle-mounted USB charger workpiece. As can be seen from the figure, the surface defects and characters are recognized clearly.
In summary, compared with the prior art, the invention has the advantages that:
(1) The invention uses multi-camera imaging and adjusts the light color of the color-adjustable light sources according to the appearance and label color of the workpiece to be detected, ensuring imaging quality and reducing illumination interference at the hardware level; meanwhile, after the multi-camera images are fused and spliced, the Retinex image enhancement algorithm corrects uneven illumination, further reducing the influence of illumination at the algorithm level and solving the problem of reflection from the workpiece surface.
(2) The invention performs defect detection on the surface of the workpiece to be detected with the GANomaly network, which achieves a marked defect-detection effect after training on normal samples only, without requiring a large number of defect samples;
(3) The training data set for character recognition comprises 1000 images of the Chinese natural-text data set released jointly by Tsinghua University and Tencent, 1000 images of the ICDAR 2015 English text data set, 1000 images of conventional complete Chinese and English characters, and 2000 images of defective characters.
In the description above, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore should not be construed as limiting the scope of the present invention.
In conclusion, although the present invention has been described with reference to preferred embodiments, various changes and modifications may be made by those skilled in the art without departing from the scope of the invention, and such changes and modifications shall fall within the scope of the present invention.

Claims (10)

1. The system for identifying the surface defects and the characters of the workpiece based on multi-vision fusion is characterized by comprising an industrial control computer, an object stage, a conveying belt, a plurality of industrial cameras, a plurality of color-adjustable light sources and a plurality of light source controllers, wherein the industrial control computer is respectively connected with each light source controller and each industrial camera; the object stage is carried on the conveying belt and is used for holding a workpiece to be detected, the workpiece to be detected being a cylindrical workpiece; the conveying belt is used for conveying the workpiece to be detected; the industrial cameras are distributed around the workpiece to be detected at a first preset angle from one another and are used for capturing surface images of the workpiece to be detected; the color-adjustable light sources are distributed around the workpiece to be detected at a second preset angle from one another, each color-adjustable light source having three channels respectively corresponding to red, green and blue light, and each light source controller is connected with one color-adjustable light source and is used for controlling it; the intensities of the red, green and blue light are adjusted on the industrial control computer through the light source controllers to obtain the light color required from each color-adjustable light source.
2. The system for workpiece surface defect and character recognition based on multi-vision fusion according to claim 1, wherein the number of industrial cameras and of color-adjustable light sources is three, the three industrial cameras being evenly distributed around the workpiece to be detected with 120° between adjacent cameras, and the three color-adjustable light sources likewise being distributed at 120° intervals.
3. The method for workpiece surface defect and character recognition based on multi-vision fusion is characterized in that detection is performed with the system for workpiece surface defect and character recognition based on multi-vision fusion, and the method comprises the following steps:
S1, placing the workpiece to be detected on the object stage, starting each industrial camera, each color-adjustable light source, each light source controller and the industrial control computer, and adjusting the light color of each color-adjustable light source through the industrial control computer and the light source controllers to ensure the imaging effect of the workpiece to be detected;
S2, acquiring images of the workpiece to be detected through the plurality of industrial cameras;
S3, carrying out image feature fusion and splicing on the images acquired by the plurality of industrial cameras to obtain a spliced image;
S4, scaling the spliced image, and completing coarse positioning of the surface label of the workpiece to be detected using SIFT feature matching;
S5, performing image enhancement with the Retinex algorithm, further reducing the influence of illumination on the workpiece to be detected and improving image quality;
S6, removing image noise with Gaussian spatial filtering, and detecting surface defects of the workpiece to be detected with a GANomaly network to obtain the defect information of the workpiece to be detected;
S7, obtaining a binary image through an adaptive threshold segmentation algorithm;
S8, positioning the surface characters of the workpiece to be detected with a CTPN network;
S9, re-acquiring image samples of the workpiece to be detected, making a training data set, training and verifying a CRNN network, and sending the characters positioned in step S8 into the trained and verified CRNN network for character recognition to obtain the surface character recognition result of the workpiece to be detected;
S10, visually outputting, through the industrial control computer, the defect information of the workpiece obtained in step S6 and the surface character recognition result obtained in step S9, and displaying the information on the spliced image obtained in step S3;
S11, judging, from the defect information and character recognition result displayed on the spliced image, whether the workpiece to be detected needs to be sorted out: if the workpiece has defects and/or its characters are wrong, it is sorted out; otherwise, no action is taken.
4. The method for workpiece surface defect and character recognition based on multi-vision fusion according to claim 3, wherein in step S1 the specific method of ensuring the imaging effect of the workpiece to be detected by adjusting the light color of each color-adjustable light source through the industrial control computer and the light source controllers is as follows:
first, a light color complementary to the appearance color and label color of the workpiece to be detected is selected to enhance contrast, or a light color adjacent to the background color of the workpiece is selected to suppress unnecessary information; then the intensity of the red, green or blue light of each color-adjustable light source is adjusted on the industrial control computer through the corresponding light source controller, so that each color-adjustable light source emits the required light color and the imaging quality of the characters on the surface of the workpiece to be detected is ensured.
5. The method for workpiece surface defect and character recognition based on multi-vision fusion according to claim 3, wherein step S3 comprises the following steps:
S31, scaling each image acquired by an industrial camera, preserving its aspect ratio, so that its shortest side is 800 pixels, and extracting feature points from each scaled image with the Speeded-Up Robust Features algorithm;
S32, matching the extracted feature points with the fast approximate nearest-neighbor algorithm;
S33, establishing homography matrices among the images, and screening reliable matching points iteratively with the random sample consensus algorithm to complete image registration;
S34, fusing the images according to the multi-band blending strategy, eliminating splicing seams and ghosting.
6. The method for workpiece surface defect and character recognition based on multi-vision fusion as claimed in claim 3, wherein the image enhancement using Retinex algorithm in step S5 is embodied as:
S51, transforming the original image S(x, y) into the log domain based on the Retinex algorithm, so that the product relationship becomes a sum: log S(x, y) = log R(x, y) + log L(x, y), where R(x, y) denotes the reflection image of the original image S(x, y), L(x, y) denotes the illumination image, x denotes the abscissa of the image, and y denotes the ordinate of the image.
S52, performing Gaussian convolution on the original image S(x, y) to estimate the illumination image L(x, y), and removing the illumination image L(x, y) to obtain the reflection image R(x, y), that is:

r(x, y) = log S(x, y) − log[S(x, y) * G(x, y)],  with  G(x, y) = λ exp(−(x² + y²) / σ²)

where r(x, y) denotes the log-domain expression of the reflection image R(x, y), * denotes convolution, σ denotes the Gaussian surround scale constant, λ denotes a normalization constant, and exp denotes exponentiation with the natural constant e as base.
7. The method of claim 3, wherein the GANomaly network is composed of a generator network, a discriminator D and a reconstruction encoder E, the generator network comprising an encoder GE(x) and a decoder GD(z), and detecting the surface defects of the workpiece to be detected with the GANomaly network in step S6 specifically comprises the following steps:
S61, capturing N images of normal workpieces to be detected, N being a positive integer greater than zero;
S62, sending the acquired image x_i (i ≤ N) to the encoder GE(x) to obtain the latent variable z, which is passed through the decoder GD(z) to obtain the reconstructed image x̂_i of x_i;
S63, the discriminator D judging the image x_i to be real and the reconstructed image x̂_i to be fake, thereby continuously driving optimization of the reconstructed image x̂_i toward the image x_i, an ideal reconstruction showing no difference from x_i;
S64, the reconstruction encoder E encoding the reconstructed image x̂_i again to obtain the latent variable ẑ of the reconstruction;
S65, comparing the difference between the latent variable z obtained by the encoder GE(x) and the latent variable ẑ obtained by the reconstruction encoder E with a preset threshold φ; when the difference is greater than the preset threshold φ, the input image x_i is determined to be an abnormal sample; otherwise, the image x_i is considered a normal sample, an abnormal sample being a captured image of a workpiece whose surface has defects and a normal sample being a captured image of a workpiece whose surface is defect-free.
8. The workpiece surface defect and character recognition method based on multi-vision fusion as claimed in claim 7, wherein the step S8 of locating the surface characters of the workpiece to be detected by using CTPN network specifically comprises:
S81, extracting features with a VGG16 network, taking the features produced by the third convolutional layer of the fifth convolutional block as the feature map, of size W × H × C, where W denotes width, H denotes height and C denotes the number of channels;
S82, sliding a 3 × 3 window over the feature map, each window position yielding a feature vector of length 3 × 3 × C, the center of each window predicting offsets relative to k anchor boxes, k being a positive integer greater than zero;
S83, inputting the features extracted in step S81 into a bidirectional Long Short-Term Memory network to obtain an output of length W × 256, followed by a 512-dimensional fully connected layer to prepare for output;
S84, connecting an output layer after the fully connected layer, the output layer having three outputs: 2k vertical coordinates, 2k scores and k side-refinement values, the vertical coordinates representing the height and center y-axis coordinate of each prediction box, the scores representing the category information of the k anchor boxes, and the side-refinement values representing the horizontal translation of each prediction box;
S85, filtering redundant text proposals with the standard non-maximum suppression algorithm;
S86, merging the remaining text proposals into text lines with a graph-based text line construction algorithm, completing character positioning.
9. The method for workpiece surface defect and character recognition based on multi-vision fusion as claimed in claim 8, wherein the specific method for training data set generation in step S9 is as follows:
S911, downloading from GitHub 1000 images of the Chinese natural-text data set released jointly by Tsinghua University and Tencent;
S912, downloading from GitHub 1000 images of the ICDAR 2015 English text data set;
S913, generating by code 1000 images of conventional complete Chinese and English characters and 2000 images of incomplete characters, the 1000 complete-character images and the 2000 incomplete-character images forming a data set;
S914, organizing the data set, the CRNN training data format being LMDB, which stores two kinds of data: picture data and label data, the picture data being pictures bearing characters with the character height occupying 80%–90% of the picture height, and the label data being txt text files whose content is the characters on the corresponding picture, i.e., each text file name matching its picture name.
10. The method for workpiece surface defect and character recognition based on multi-vision fusion according to claim 9, wherein the trained and verified CRNN network comprises a convolutional layer, a recurrent layer and a transcription layer, and recognizing the surface characters of the workpiece to be detected with the trained and verified CRNN network in step S9 decomposes into the following process:
S921, the convolutional layer extracting a feature sequence from the input image with a CNN;
S922, the recurrent layer predicting the label distribution of the feature sequence obtained from the convolutional layer with an RNN;
S923, the transcription layer converting the label distribution obtained from the recurrent layer into the final recognition result through a de-duplication and integration operation using CTC.
CN202111161598.9A 2021-09-30 2021-09-30 Workpiece surface defect and character recognition method and system based on multi-vision fusion Pending CN113869300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111161598.9A CN113869300A (en) 2021-09-30 2021-09-30 Workpiece surface defect and character recognition method and system based on multi-vision fusion


Publications (1)

Publication Number Publication Date
CN113869300A true CN113869300A (en) 2021-12-31



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103293167A (en) * 2013-06-08 2013-09-11 湖南竟宁科技有限公司 Automatic detection method and system of copper grains in foam nickel
CN109949305A (en) * 2019-03-29 2019-06-28 北京百度网讯科技有限公司 Method for detecting surface defects of products, device and computer equipment
CN210807443U (en) * 2019-12-27 2020-06-19 苏州康代智能科技股份有限公司 Arch-shaped lighting device and imaging system with same
CN112966841A (en) * 2021-03-18 2021-06-15 深圳闪回科技有限公司 Offline automatic order examining system
CN113327237A (en) * 2021-06-09 2021-08-31 合肥中科星翰科技有限公司 Visual detection system suitable for power supply circuit board

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463757A (en) * 2022-01-28 2022-05-10 上海电机学院 Industrial scene character end-side reasoning training device and method based on machine vision
CN114511704A (en) * 2022-04-19 2022-05-17 科大智能物联技术股份有限公司 Spray printing code identification and detection method based on high-speed production line
CN114511704B (en) * 2022-04-19 2022-07-12 科大智能物联技术股份有限公司 Spray printing code identification and detection method based on high-speed production line
CN115588204A (en) * 2022-09-23 2023-01-10 神州数码系统集成服务有限公司 Single character image matching and identifying method based on DS evidence theory
CN115588204B (en) * 2022-09-23 2023-06-13 神州数码系统集成服务有限公司 Single character image matching recognition method based on DS evidence theory
CN116012378A (en) * 2023-03-24 2023-04-25 湖南东方钪业股份有限公司 Quality detection method for alloy wire used for additive manufacturing
CN116485795A (en) * 2023-06-19 2023-07-25 湖南隆深氢能科技有限公司 Coil coating production line flaw detection method and system
CN116485795B (en) * 2023-06-19 2023-09-01 湖南隆深氢能科技有限公司 Coil coating production line flaw detection method and system
CN118014933A (en) * 2023-12-29 2024-05-10 山东福茂装饰材料有限公司 Defect detection and identification method and device based on image detection
CN117523543A (en) * 2024-01-08 2024-02-06 成都大学 Metal stamping character recognition method based on deep learning
CN117523543B (en) * 2024-01-08 2024-03-19 成都大学 Metal stamping character recognition method based on deep learning
CN118010751A (en) * 2024-04-08 2024-05-10 杭州汇萃智能科技有限公司 Machine vision detection method and system for workpiece defect detection

Similar Documents

Publication Publication Date Title
CN113869300A (en) Workpiece surface defect and character recognition method and system based on multi-vision fusion
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN108918536B (en) Tire mold surface character defect detection method, device, equipment and storage medium
CN110008909B (en) Real-name system business real-time auditing system based on AI
CN110175603B (en) Engraved character recognition method, system and storage medium
CN111709909A (en) General printing defect detection method based on deep learning and model thereof
CN105740910A (en) Vehicle object detection method and device
CN111339902B (en) Liquid crystal display indication recognition method and device for digital display instrument
CN116703911B (en) LED lamp production quality detecting system
CN111681215A (en) Convolutional neural network model training method, and workpiece defect detection method and device
CN114926441A (en) Defect detection method and system for machining and molding injection molding part
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
CN118275449A (en) Copper strip surface defect detection method, device and equipment
CN113591583A (en) Intelligent boron ore beneficiation system and method
CN115375991A (en) Strong/weak illumination and fog environment self-adaptive target detection method
CN114972246A (en) Die-cutting product surface defect detection method based on deep learning
CN115511775A (en) Light-weight ceramic tile surface defect detection method based on semantic segmentation
Zhao et al. MSC-AD: A Multiscene Unsupervised Anomaly Detection Dataset for Small Defect Detection of Casting Surface
TWM606740U (en) Defect detection system
CN116994049A (en) Full-automatic flat knitting machine and method thereof
CN116385353A (en) Camera module abnormality detection method
CN116485992A (en) Composite three-dimensional scanning method and device and three-dimensional scanner
CN115239663A (en) Method and system for detecting defects of contact lens, electronic device and storage medium
JP7511882B2 (en) Method and system for determining grade of electronic device screen
Cerezci et al. Online metallic surface defect detection using deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination