CN117095005A - Plastic master batch quality inspection method and system based on machine vision - Google Patents
Plastic master batch quality inspection method and system based on machine vision
- Publication number
- CN117095005A (application number CN202311359574.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- adopting
- sub
- master batch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of image analysis, and in particular to a plastic master batch quality inspection method and system based on machine vision, comprising the following steps: based on a plastic master batch sample, a multi-sensor synchronous acquisition technology is adopted to acquire visible light, infrared and ultraviolet spectrogram images simultaneously and generate a multi-mode original image data set. In the invention, the multi-sensor synchronous acquisition technology integrates visible light, infrared and ultraviolet images, expanding the detection dimensions. A convolutional neural network and a multi-source image fusion algorithm improve image quality and detail to create a high-definition comprehensive image. Point cloud data are processed to construct a three-dimensional model that visually displays the morphology of the plastic master batch. A deep convolutional neural network improves defect detection accuracy and reduces false alarms, deep reinforcement learning optimizes production parameters in real time and enhances self-adaptive anomaly detection, and reinforcement learning supports dynamic optimization of image acquisition parameters, improving image quality and detection accuracy.
Description
Technical Field
The invention relates to the technical field of image analysis, in particular to a plastic master batch quality inspection method and system based on machine vision.
Background
Machine vision is a technique that allows computers and other vision equipment to "see" and interpret their surroundings. In image analysis, this typically involves using cameras and computer algorithms to analyze images and extract useful information from them. Image analysis is used in a variety of fields, such as industrial inspection, medical image analysis, and environmental perception for autonomous vehicles.
The machine-vision-based quality inspection method for plastic master batch uses machine vision technology to inspect the quality of plastic master batch. The method aims to improve monitoring and control of plastic master batch quality during production and to ensure that the produced master batch meets preset quality standards. Through a machine vision system, characteristics such as the appearance, colour, shape, size and surface defects of the plastic master batch are inspected at high speed and with high accuracy. The method can automatically inspect large quantities of plastic master batch, improving production efficiency, lowering labour costs and reducing quality control errors.
Existing plastic master batch quality inspection methods often rely on a single sensor for image acquisition, so defects visible only in particular wavebands may be missed. Without advanced image fusion technology, image detail and quality are limited, making high-precision detection difficult. The lack of three-dimensional morphology modelling and analysis leaves blind spots for defects with complex shapes. Conventional defect detection algorithms often fail to fully exploit deep learning techniques, resulting in relatively low recognition and accuracy rates. In addition, existing methods lack the ability to adjust production parameters intelligently; when an abnormality is detected, manual intervention is required, reducing production efficiency. Finally, fixed image acquisition parameter settings yield poor image quality under some conditions, making the requirements of high-precision detection difficult to meet.
Disclosure of Invention
The invention aims to solve the defects in the prior art, and provides a plastic master batch quality inspection method and system based on machine vision.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a plastic master batch quality inspection method based on machine vision comprises the following steps:
S1: based on a plastic master batch sample, adopting a multi-sensor synchronous acquisition technology, and simultaneously acquiring visible light, infrared and ultraviolet spectrogram images to generate a multi-mode original image data set;
S2: based on the multi-mode original image data set, performing image preprocessing by adopting a convolutional neural network, and generating a comprehensive fusion image through a multi-source image fusion algorithm;
S3: based on the comprehensive fusion image, a three-dimensional model of the plastic master batch is established by adopting a point cloud data processing method, and shape and surface analysis is carried out to generate a three-dimensional morphological analysis report;
S4: based on the comprehensive fusion image and the three-dimensional morphological analysis report, performing defect detection and classification on the plastic master batch by adopting a deep convolutional neural network to generate an automatic defect labeling chart;
S5: based on the automatic defect labeling diagram, adopting a deep reinforcement learning algorithm to dynamically adjust production parameters, and carrying out self-adaptive anomaly detection and strategy updating to generate a quality anomaly report and an optimized detection strategy;
S6: and dynamically optimizing image acquisition parameters, including exposure time, focal length and light source intensity, by adopting a reinforcement learning method based on the quality anomaly report, and generating optimized image acquisition parameter settings.
As a further scheme of the invention, based on a plastic master batch sample, a multi-sensor synchronous acquisition technology is adopted, and simultaneously visible light, infrared and ultraviolet spectrogram images are acquired, and the steps for generating a multi-mode original image data set are specifically as follows:
S101: based on the selective strategy, adopting a sensor selection algorithm to perform optimal sensor configuration and generating sensor parameter configuration;
S102: based on the sensor parameter configuration, a synchronous trigger mechanism is applied to synchronously start a plurality of groups of sensors to obtain synchronous acquisition control signals;
S103: based on the synchronous acquisition control signal, adopting a high-speed imaging technology to acquire a visible light image, and generating a visible light original image;
S104: based on the synchronous acquisition control signal, an infrared imaging technology is utilized to acquire an infrared image, and an infrared original image is generated;
S105: based on the synchronous acquisition control signal, an ultraviolet imaging technology is used for acquiring an ultraviolet image, and an ultraviolet original image is generated;
S106: based on the visible light original image, the infrared original image and the ultraviolet original image, a data integration technology is applied, and the multi-mode images are combined to generate a multi-mode original image data set.
As a further scheme of the invention, based on the multi-mode original image data set, a convolutional neural network is adopted to perform image preprocessing, and a multi-source image fusion algorithm is adopted to generate a comprehensive fusion image, wherein the steps of the method specifically comprise:
S201: based on the multi-mode original image data set, adopting an image enhancement algorithm to improve the image quality and obtaining an enhanced image data set;
S202: based on the enhanced image dataset, applying a median filtering algorithm to perform image noise reduction and generating a noise-reduced image dataset;
S203: based on the noise-reduced image data set, performing spatial alignment by adopting an image registration algorithm to obtain a spatially aligned image data set;
S204: performing brightness and contrast adjustment based on the spatially aligned image dataset by using a histogram equalization technique to generate a contrast-adjusted image dataset;
S205: based on the image dataset with the adjusted contrast, extracting key features by using a SIFT feature extraction algorithm to obtain a key image feature set;
S206: and based on the key image feature set, applying a multi-source image fusion algorithm, and combining the multi-mode images to generate a comprehensive fusion image.
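A minimal NumPy sketch of the S202, S204 and S206 stages (denoise, equalize, fuse); the 3x3 median filter, histogram equalization and pixel-wise weighted fusion below are illustrative stand-ins, and the registration and SIFT steps are omitted, so this is not the patented fusion algorithm itself:

```python
import numpy as np

def median_denoise(img, k=3):
    # S202: k x k median filter via edge-padded sliding windows.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

def equalize(img):
    # S204: simplified histogram equalization for an 8-bit image.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

def fuse(images, weights):
    # S206: pixel-wise weighted fusion of already-registered modalities.
    stack = np.stack([w * np.asarray(im, dtype=float)
                      for im, w in zip(images, weights)])
    return stack.sum(axis=0) / sum(weights)

# Toy 4x4 "modalities" standing in for registered visible/IR frames.
rng = np.random.default_rng(0)
vis = rng.integers(0, 256, (4, 4), dtype=np.uint8)
ir = rng.integers(0, 256, (4, 4), dtype=np.uint8)
fused = fuse([median_denoise(vis), median_denoise(ir)], [0.6, 0.4])
assert fused.shape == (4, 4)
```

Production pipelines would typically use library implementations (e.g. OpenCV's median blur and histogram equalization) and a learned or multi-scale fusion rule rather than a fixed weighted average.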
As a further scheme of the invention, based on the comprehensive fusion image, a point cloud data processing method is adopted to establish a three-dimensional model of the plastic master batch, shape and surface analysis is carried out, and the step of generating a three-dimensional morphological analysis report specifically comprises the following steps:
S301: based on the comprehensive fusion image, adopting a three-dimensional reconstruction algorithm to perform three-dimensional reconstruction to obtain a preliminary three-dimensional model;
S302: based on the preliminary three-dimensional model, performing model meshing by utilizing a three-dimensional mesh generation technology to generate a meshed three-dimensional model;
S303: based on the meshed three-dimensional model, applying a point cloud filtering algorithm to perform data noise reduction to obtain a noise-reduced three-dimensional model;
S304: based on the three-dimensional model after noise reduction, adopting a surface fitting technology to analyze the shape and the surface, and generating three-dimensional shape and surface data;
S305: based on the three-dimensional shape and the surface data, extracting and analyzing a specific form by using a morphological operation technology to obtain a form characteristic report;
S306: integrating the morphological feature report with the three-dimensional shape and surface data, and generating a three-dimensional morphological analysis report by using a data integration technology.
As a further scheme of the invention, based on the comprehensive fusion image and the three-dimensional morphological analysis report, the depth convolution neural network is adopted to detect and classify the defects of the plastic master batch, and the steps for generating the automatic defect labeling chart are specifically as follows:
S401: based on the comprehensive fusion image, adopting an image enhancement algorithm to perform image preprocessing to generate an enhanced plastic master batch image;
S402: based on the enhanced plastic master batch image, performing feature extraction by adopting the SIFT technique to obtain a plastic master batch feature set;
S403: based on the plastic master batch feature set, performing defect identification by adopting a deep convolutional neural network to obtain a defect identification label;
S404: combining the defect identification label and the enhanced plastic master batch image, and adopting a classification algorithm to classify defects to generate an automatic defect labeling chart.
As a further scheme of the invention, based on the automatic defect labeling diagram, a deep reinforcement learning algorithm is adopted to dynamically adjust production parameters, and self-adaptive anomaly detection and strategy updating are carried out, and the steps of generating a quality anomaly report and an optimized detection strategy are specifically as follows:
S501: analyzing defect distribution by adopting a data analysis technology based on the automatic defect labeling diagram to obtain a defect analysis report;
S502: based on the defect analysis report, adopting a deep reinforcement learning algorithm to adjust production parameters and generating adjusted production parameters;
S503: monitoring the production process by using the adjusted production parameters and adopting a self-adaptive abnormality detection technology to generate an abnormality detection report;
S504: optimizing the detection strategy by adopting a strategy updating algorithm according to the abnormality detection report to obtain a quality abnormality report.
As a further scheme of the invention, based on the quality anomaly report, the image acquisition parameters are dynamically optimized by adopting a reinforcement learning method, including exposure time, focal length and light source intensity, and the steps for generating the optimized image acquisition parameter settings are specifically as follows:
S601: according to the quality abnormality report, adopting an analysis technology to analyze the abnormality cause and obtain an abnormality cause analysis report;
S602: based on the abnormality cause analysis report, applying a reinforcement learning method, and proposing an image acquisition optimization scheme to generate a parameter optimization scheme;
S603: combining the parameter optimization scheme, adjusting exposure time, focal length and light source intensity, and generating preliminarily optimized image acquisition parameters;
S604: carrying out image acquisition again by utilizing the preliminarily optimized image acquisition parameters, evaluating the effect of the image acquisition parameters and generating an effect evaluation report;
S605: according to the effect evaluation report, carrying out parameter refinement and perfection to generate optimized image acquisition parameter settings.
The plastic master batch quality inspection system based on machine vision is used for executing the plastic master batch quality inspection method based on machine vision, and comprises an image acquisition module, an image processing module, a three-dimensional modeling module, a characteristic recognition module, a quality analysis module, a parameter adjustment module and an optimization acquisition module.
As a further scheme of the invention, the image acquisition module acquires visible light, infrared and ultraviolet images by adopting a synchronous trigger mechanism based on a sensor selection algorithm to generate a multi-mode original image data set;
the image processing module generates an image dataset with improved quality by adopting an image enhancement algorithm, a median filtering algorithm, an image registration algorithm and a histogram equalization technology based on the multi-mode original image dataset, and finally generates a comprehensive fusion image;
the three-dimensional modeling module generates a meshed three-dimensional model and three-dimensional shape surface data by utilizing a three-dimensional reconstruction algorithm and a three-dimensional mesh generation technology based on the comprehensive fusion image;
The feature recognition module is used for preprocessing the comprehensive fusion image, extracting features, performing defect recognition by using a deep convolutional neural network, and generating a defect recognition tag;
the quality analysis module classifies defects based on the defect identification labels, generates an automatic defect labeling chart, analyzes defect distribution and generates a defect analysis report;
the parameter adjustment module adjusts production parameters based on the defect analysis report by using a deep reinforcement learning algorithm, generates adjusted production parameters, monitors the production process, generates an abnormal detection report, optimizes the detection strategy by a strategy updating algorithm, and finally generates a quality abnormal report;
the optimization acquisition module analyzes the abnormality causes, proposes an optimization scheme, generates a parameter optimization scheme, adjusts image acquisition parameters, generates preliminarily optimized image acquisition parameter settings, re-acquires images, evaluates the effect and finally obtains the optimized image acquisition parameter settings.
As a further scheme of the invention, the image acquisition module comprises a sensor configuration sub-module, a synchronous control sub-module, an image acquisition sub-module and a data integration sub-module;
the image processing module comprises an image enhancement sub-module, an image noise reduction sub-module, a space alignment sub-module and a first feature extraction sub-module;
The three-dimensional modeling module comprises a three-dimensional reconstruction sub-module, a model gridding sub-module, a data noise reduction sub-module and a shape analysis sub-module;
the feature recognition module comprises an image preprocessing sub-module, a second feature extraction sub-module and a defect recognition sub-module;
the quality analysis module comprises a defect classification sub-module and a defect analysis sub-module;
the parameter adjustment module comprises a first parameter adjustment sub-module, a production monitoring sub-module and a strategy updating sub-module;
the optimization acquisition module comprises an anomaly analysis sub-module, an optimization scheme sub-module, a second parameter adjustment sub-module and an effect evaluation sub-module.
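The module decomposition above can be pictured as a linear pipeline in which each module consumes the previous module's output; the sketch below wires hypothetical stand-in callables in that order (the module names are illustrative, not an API from the patent):

```python
def inspection_pipeline(sample, modules):
    """Sketch of the seven-module system flow, mirroring steps S1-S6:
    each module consumes the previous module's output."""
    raw = modules["image_acquisition"](sample)
    fused = modules["image_processing"](raw)
    model_3d = modules["three_dimensional_modeling"](fused)
    labels = modules["feature_recognition"](fused, model_3d)
    report = modules["quality_analysis"](labels)
    strategy = modules["parameter_adjustment"](report)
    return modules["optimization_acquisition"](strategy)

# Stub modules that just record the call order.
trace = []
def stub(name):
    def run(*args):
        trace.append(name)
        return name
    return run

module_names = ["image_acquisition", "image_processing",
                "three_dimensional_modeling", "feature_recognition",
                "quality_analysis", "parameter_adjustment",
                "optimization_acquisition"]
result = inspection_pipeline("pellet_sample", {n: stub(n) for n in module_names})
assert trace == module_names
```

Note the one feedback edge implied by the claims: the optimized acquisition parameters produced at the end feed back into the image acquisition module on the next inspection cycle.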
Compared with the prior art, the invention has the advantages and positive effects that:
in the invention, the multi-sensor synchronous acquisition technology is adopted to realize the joint acquisition of visible light, infrared and ultraviolet spectrogram images, so that the detection is more comprehensive. By using the convolutional neural network and the multi-source image fusion algorithm, the image quality is improved, the details are enhanced, and the comprehensive fusion image with higher definition is obtained. In combination with point cloud data processing, the three-dimensional model provides a more visual representation of the morphological features of the plastic master batch. With deep convolutional neural networks, defect detection and classification become more accurate, reducing the false detection rate. The application of deep reinforcement learning ensures real-time and intelligent adjustment of production parameters, enhances the capability of self-adaptive anomaly detection, and ensures continuous iterative optimization of strategies. The adoption of the reinforcement learning method provides powerful technical support for dynamic optimization of image acquisition parameters, and improves image quality and detection accuracy.
Drawings
FIG. 1 is a schematic workflow diagram of the present invention;
FIG. 2 is a S1 refinement flowchart of the present invention;
FIG. 3 is a S2 refinement flowchart of the present invention;
FIG. 4 is a S3 refinement flowchart of the present invention;
FIG. 5 is a S4 refinement flowchart of the present invention;
FIG. 6 is a S5 refinement flowchart of the present invention;
FIG. 7 is a S6 refinement flowchart of the present invention;
FIG. 8 is a system flow diagram of the present invention;
FIG. 9 is a schematic diagram of a system framework of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Furthermore, in the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Embodiment one.
Referring to fig. 1, the present invention provides a technical solution: a plastic master batch quality inspection method based on machine vision comprises the following steps:
S1: based on a plastic master batch sample, adopting a multi-sensor synchronous acquisition technology, and simultaneously acquiring visible light, infrared and ultraviolet spectrogram images to generate a multi-mode original image data set;
S2: based on a multi-mode original image dataset, performing image preprocessing by adopting a convolutional neural network, and generating a comprehensive fusion image through a multi-source image fusion algorithm;
S3: based on the comprehensive fusion image, a three-dimensional model of the plastic master batch is established by adopting a point cloud data processing method, and shape and surface analysis is carried out to generate a three-dimensional morphological analysis report;
S4: based on the comprehensive fusion image and the three-dimensional morphological analysis report, performing defect detection and classification on the plastic master batch by adopting a deep convolutional neural network to generate an automatic defect labeling diagram;
S5: based on the automatic defect labeling diagram, adopting a deep reinforcement learning algorithm to dynamically adjust production parameters, and carrying out self-adaptive anomaly detection and strategy updating to generate a quality anomaly report and an optimized detection strategy;
S6: based on the quality anomaly report, dynamically optimizing image acquisition parameters including exposure time, focal length and light source intensity by adopting a reinforcement learning method, and generating optimized image acquisition parameter settings.
The method overcomes the limitation of a single image source in recognition and detection by comprehensively utilizing the multi-mode original image dataset, and greatly improves the analysis precision and the detection reliability of the plastic master batch by fusion analysis of visible light, infrared and ultraviolet spectrograms.
By means of deep learning and application of a convolutional neural network, automatic defect detection and classification of the plastic master batch can be achieved. Through automatic defect labeling and intelligent recognition, the efficiency and the accuracy of quality detection are remarkably improved, and meanwhile, the labor intensity and the skill requirement of manual detection are reduced.
In the method, not only is the comprehensive fusion analysis of the images carried out, but also the establishment of a three-dimensional model of the plastic master batch and the comprehensive analysis of the shape and the surface of the plastic master batch are realized, which is helpful for enterprises to know the quality characteristics of products deeply and comprehensively and discover potential process defects.
The production parameters are dynamically adjusted through the deep reinforcement learning algorithm, with self-adaptive anomaly detection and strategy updating, so the method not only responds quickly to quality anomalies in the production process and reduces defective output, but also allows optimization adjustments to be carried out immediately, improving production efficiency and first-pass yield.
On the basis of quality anomaly report, the image acquisition parameters (such as exposure time, focal length, light source intensity and the like) are dynamically optimized by using a reinforcement learning method, so that the image acquisition process can be pertinently improved, and the control precision and flexibility of the whole production process can be further enhanced.
Through accurate defect identification and automatic detection flow, the method can timely and accurately find out the problems in the production process, and timely perform parameter adjustment and strategy update, so that the resource waste and the reworking cost caused by defective products are reduced.
The high-efficiency and intelligent quality inspection method not only can ensure the high quality of the product, but also can continuously improve the performance and the production efficiency of the product by continuously optimizing the production parameters, thereby enhancing the competitiveness of the enterprise product in the market and the customer satisfaction.
Referring to fig. 2, based on a plastic master batch sample, a multi-sensor synchronous acquisition technology is adopted to simultaneously acquire visible light, infrared and ultraviolet spectrogram images, and the steps for generating a multi-mode original image dataset are specifically as follows:
S101: Based on the selective strategy, adopting a sensor selection algorithm to perform optimal sensor configuration and generating sensor parameter configuration;
S102: Based on sensor parameter configuration, a synchronous trigger mechanism is applied, and a plurality of groups of sensors are synchronously started to obtain synchronous acquisition control signals;
S103: Based on the synchronous acquisition control signal, adopting a high-speed imaging technology to acquire a visible light image, and generating a visible light original image;
S104: Based on the synchronous acquisition control signal, an infrared imaging technology is utilized to acquire an infrared image, and an infrared original image is generated;
S105: Based on the synchronous acquisition control signal, an ultraviolet imaging technology is used for acquiring an ultraviolet image, and an ultraviolet original image is generated;
S106: Based on the visible light original image, the infrared original image and the ultraviolet original image, a data integration technology is applied to combine the multi-mode images to generate a multi-mode original image data set.
Firstly, according to the characteristics and requirements of the plastic master batch sample, a selectivity strategy is adopted and a sensor selection algorithm is applied. A specific algorithm may be based on information entropy or a greedy strategy to evaluate and select the optimal sensor configuration. Given a sensor set (S) and a plastic master batch sample feature set (F), the aim is to find a subset (S') that maximizes feature coverage:
S^* = \arg\max_{S' \subseteq S} \sum_{s \in S'} \sum_{f \in F} I(s; f),
where (I(s; f)) represents the information gain of sensor (s) over feature (f). In this way, the parameter configuration of the sensors is obtained.
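As an illustration, the subset-selection objective above is commonly approximated with a greedy algorithm. The following is a minimal sketch assuming the per-sensor, per-feature information gains I(s; f) have already been estimated offline; the sensor names and gain values are hypothetical illustration data, not values from the patent:

```python
# Greedy approximation of the sensor-subset selection objective:
# repeatedly add the sensor whose marginal improvement in feature
# coverage (summed information gain) is largest.

def select_sensors(gain, budget):
    """Pick up to `budget` sensors from `gain`, a dict mapping
    sensor -> {feature: information gain}."""
    chosen = []
    covered = {}  # feature -> best gain achieved by chosen sensors
    for _ in range(budget):
        best, best_delta = None, 0.0
        for s in set(gain) - set(chosen):
            # marginal improvement over what is already covered
            delta = sum(max(0.0, g - covered.get(f, 0.0))
                        for f, g in gain[s].items())
            if delta > best_delta:
                best, best_delta = s, delta
        if best is None:  # no sensor adds coverage
            break
        chosen.append(best)
        for f, g in gain[best].items():
            covered[f] = max(covered.get(f, 0.0), g)
    return chosen

# Hypothetical gains of three sensors over three masterbatch features.
gain = {
    "visible":  {"color": 0.9, "shape": 0.7, "thermal": 0.1},
    "infrared": {"color": 0.1, "shape": 0.3, "thermal": 0.9},
    "uv":       {"color": 0.4, "shape": 0.2, "thermal": 0.1},
}
print(select_sensors(gain, 2))  # ['visible', 'infrared']
```

The greedy choice picks "visible" first (largest total gain), then "infrared" for its thermal coverage, mirroring how complementary modalities are favoured.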
Next, based on the sensor parameter configuration described above, a synchronous trigger mechanism is applied to ensure that all selected sensors can operate synchronously. This can be achieved by a central control unit which, when receiving the start signal, sends control signals to all sensors for synchronous acquisition.
With the synchronization control signal, the next three steps correspond to different imaging techniques, respectively. Adopting a high-speed imaging technology to collect visible light; collecting infrared images by utilizing an infrared imaging technology; ultraviolet imaging techniques are used to acquire ultraviolet images. Among these three processes, factors that need to be considered include exposure time, sensitivity, light intensity, and the like.
The final step is to integrate the three types of images into a multi-modal raw image dataset. This dataset may be represented by a three-dimensional tensor (T), wherein each slice corresponds to the visible, infrared and ultraviolet image, respectively. Each image can be seen as a two-dimensional matrix, for example:
T = [I_{visible}, I_{infrared}, I_{ultraviolet}], wherein (I_{visible}), (I_{infrared}) and (I_{ultraviolet}) represent the visible light, infrared and ultraviolet image data, respectively.
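The stacking step of S106 can be sketched with NumPy as follows; the small ramp arrays are stand-ins for real, spatially registered sensor frames:

```python
import numpy as np

# Assemble the multi-modal tensor T = [I_visible, I_infrared, I_ultraviolet]
# by stacking three registered single-channel frames along a new axis.
h, w = 4, 5
i_visible = np.linspace(0.0, 1.0, h * w).reshape(h, w)  # stand-in frame
i_infrared = i_visible[::-1].copy()                     # stand-in frame
i_ultraviolet = 1.0 - i_visible                         # stand-in frame

T = np.stack([i_visible, i_infrared, i_ultraviolet], axis=0)
print(T.shape)  # (3, 4, 5): modality x height x width
```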
Referring to fig. 3, based on a multi-mode original image dataset, the steps of performing image preprocessing by using a convolutional neural network and generating a comprehensive fused image by using a multi-source image fusion algorithm are specifically as follows:
S201: Based on the multi-mode original image data set, adopting an image enhancement algorithm to improve the image quality and obtain an enhanced image dataset;
S202: Based on the enhanced image dataset, applying a median filtering algorithm to perform image noise reduction and generating a noise-reduced image dataset;
S203: Based on the noise-reduced image dataset, performing spatial alignment by adopting an image registration algorithm to obtain a spatially aligned image dataset;
S204: Performing brightness and contrast adjustment based on the spatially aligned image dataset by using a histogram equalization technique to generate a contrast-adjusted image dataset;
S205: Based on the contrast-adjusted image dataset, extracting key features by using a SIFT feature extraction algorithm to obtain a key image feature set;
S206: Based on the key image feature set, a multi-source image fusion algorithm is applied, and the multi-mode images are combined to generate a comprehensive fusion image.
S201-image enhancement:
Image enhancement is performed on the multi-modal raw image dataset. Common enhancement algorithms are data augmentations (e.g., rotation, cropping, flipping), but image enhancement using a generative adversarial network (GAN) is also contemplated. The enhancement operation is:
I_{enhanced} = Augment(I_{original})。
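A minimal sketch of the Augment operation using plain NumPy geometric transforms; the particular transform list is illustrative, and a GAN-based enhancer would replace this function in a full system:

```python
import numpy as np

# I_enhanced = Augment(I_original): generate rotated and flipped views
# of one frame, a common lightweight data-augmentation scheme.
def augment(img):
    return [img,
            np.rot90(img),    # 90-degree rotation
            np.fliplr(img),   # horizontal flip
            np.flipud(img)]   # vertical flip

img = np.arange(6).reshape(2, 3)
views = augment(img)
print([v.shape for v in views])  # [(2, 3), (3, 2), (2, 3), (2, 3)]
```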
S202 - Image noise reduction:
A median filtering algorithm is applied: given an image window (such as 3×3 or 5×5), the pixel values in the window are sorted, and the median is taken as the value of the central pixel. The formula is:
I_{denoised}(x,y) = median{I_{enhanced}(x+i,y+j)|i,j \in W},
where (W) is the filter window.
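The S202 formula can be implemented directly in NumPy. This sketch assumes a single-channel float image and uses reflective padding at the borders:

```python
import numpy as np

def median_filter(img, k=3):
    """Median filter with a k x k window (k odd): each output pixel is
    the median of the padded window centred on it."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # single hot pixel (impulse noise)
                  [10, 10, 10]], dtype=float)
print(median_filter(noisy))  # the 255 spike is suppressed to 10
```

Median filtering is preferred over mean filtering for this impulse-type noise because a single outlier never reaches the middle of the sorted window.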
S203 - Image registration:
Spatial alignment is performed using image registration algorithms such as ORB or SIFT. After keypoint matching, the images are aligned using an affine transformation or a homography matrix:
M = Match(I_{denoised_1}, I_{denoised_2}),
I_{aligned} = Transform(I_{denoised}, M)。
S204 - Histogram equalization:
Image brightness and contrast are adjusted using the histogram equalization technique. For each pixel, the luminance value is reassigned as:
I_{hist_eq}(x,y) = T(I_{aligned}(x,y)),
where (T) is a transfer function based on the cumulative histogram of the image.
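A plain-NumPy sketch of the transfer function T for an 8-bit grayscale image, using the classic scaled-cumulative-histogram mapping (assumes a non-constant image):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization: the transfer function T is the scaled
    cumulative histogram (CDF), applied as a 256-entry lookup table."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    scale = img.size - cdf_min
    lut = np.round(np.clip(cdf - cdf_min, 0, None) / scale * 255)
    return lut.astype(np.uint8)[img]

img = np.array([[50, 50], [100, 200]], dtype=np.uint8)
out = hist_equalize(img)
print(out)  # intensity levels spread toward the full 0..255 range
```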
S205-SIFT feature extraction: extracting key image features by using a SIFT algorithm:
K = SIFT(I_{hist_eq}),
where (K) is the keypoint and descriptor of the image.
S206 - Multi-source image fusion:
The multi-source image fusion algorithm involves fusion at the pixel level, feature level, or decision level. Here, a weighted fusion at the pixel level is considered:
I_{fused}(x,y) = w_1 \cdot I_{source1}(x,y) + w_2 \cdot I_{source2}(x,y),
Where (w_1) and (w_2) are predefined weights.
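The pixel-level weighted fusion is a one-line array operation; the weights below are arbitrary illustrative values summing to 1:

```python
import numpy as np

# I_fused = w1 * I_source1 + w2 * I_source2, applied per pixel.
def fuse(img1, img2, w1=0.6, w2=0.4):
    return w1 * img1 + w2 * img2

a = np.full((2, 2), 100.0)  # stand-in for one modality
b = np.full((2, 2), 200.0)  # stand-in for another modality
print(fuse(a, b))  # every pixel: 0.6*100 + 0.4*200 = 140
```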
Referring to fig. 4, based on the comprehensive fusion image, a three-dimensional model of the plastic master batch is established by adopting a point cloud data processing method, and shape and surface analysis is performed, so that a three-dimensional morphological analysis report is generated specifically by the following steps:
S301: Based on the comprehensive fusion image, adopting a three-dimensional reconstruction algorithm to perform three-dimensional reconstruction to obtain a preliminary three-dimensional model;
S302: Based on the preliminary three-dimensional model, performing model gridding by utilizing a three-dimensional grid generation technology to generate a gridded three-dimensional model;
S303: Based on the gridded three-dimensional model, applying a point cloud filtering algorithm to perform data noise reduction to obtain a noise-reduced three-dimensional model;
S304: Based on the noise-reduced three-dimensional model, adopting a surface fitting technology to analyze the shape and the surface, and generating three-dimensional shape and surface data;
S305: Based on the three-dimensional shape and surface data, a morphological operation technology is used for extracting and analyzing specific forms to obtain a morphological feature report;
S306: Integrating the morphological feature report with the three-dimensional shape and surface data, and generating a three-dimensional morphological analysis report by using a data integration technology.
S301 - Three-dimensional reconstruction: a three-dimensional reconstruction algorithm, such as binocular stereo matching or a structured-light method, is adopted to perform three-dimensional reconstruction of the comprehensive fusion image:
P_{3D} = StereoReconstruct(I_{fused1}, I_{fused2}),
where (P_{3D}) is the reconstructed point cloud data.
S302 - Three-dimensional mesh generation: three-dimensional mesh generation techniques such as Delaunay triangulation or the Marching Cubes algorithm are applied:
M_{3D} = MeshGeneration(P_{3D}),
S303 - Point cloud filtering: noise removal using statistical outlier removal or radius-based filtering:
P_{filtered} = PointCloudFilter(P_{3D}, \theta),
where (\theta) is the filtering parameter.
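A minimal statistical-outlier-removal sketch in NumPy (it computes all pairwise distances, which is fine for small clouds; the k and std_ratio parameters are illustrative):

```python
import numpy as np

def remove_outliers(points, k=3, std_ratio=1.0):
    """For each point, compute the mean distance to its k nearest
    neighbours; drop points whose mean distance exceeds
    global_mean + std_ratio * global_std."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # columns 1..k of the sorted row skip the point's zero self-distance
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

cloud = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                  [0.1, 0.1, 0], [5, 5, 5]], dtype=float)  # last point: outlier
print(remove_outliers(cloud).shape)  # (4, 3): the far point is dropped
```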
S304-surface fitting: shape and surface analysis is performed using surface fitting techniques, such as least squares or RANSAC algorithms:
S_{3D} = SurfaceFit(P_{filtered}, \phi),
wherein, (\phi) is the fitting parameter.
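As one concrete instance of the SurfaceFit step, a least-squares plane z = a·x + b·y + c can be fitted to the filtered point cloud with NumPy:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (n, 3) point array;
    returns the coefficients (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

# Points lying exactly on the plane z = 2x + 3y + 1, so the fit is exact.
pts = np.array([[0, 0, 1], [1, 0, 3], [0, 1, 4],
                [1, 1, 6], [2, 1, 8]], dtype=float)
print(fit_plane(pts))  # close to [2. 3. 1.]
```

A RANSAC variant, as named in S304, would wrap this same least-squares core in a loop that repeatedly fits on random minimal subsets and keeps the fit with the most inliers.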
S305 - Morphological operations: morphological operation techniques such as erosion, dilation, opening and closing are applied to extract and analyze specific morphological features:
F_{morph} = MorphologyOperation(S_{3D}, \kappa),
wherein (\kappa) is the structuring element (kernel) of the morphological operation.
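Binary erosion with a square structuring element, as one example of the morphological operations named above, can be sketched in plain NumPy:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element: a pixel
    stays set only if every pixel under the element is set."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")  # zero (background) border
    out = np.zeros_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].all()
    return out

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True            # 3x3 solid square
print(erode(mask).astype(int))   # only the centre pixel survives
```

Dilation is the dual (`.any()` instead of `.all()`), and opening/closing chain the two operations.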
S306-generating a three-dimensional morphological analysis report: combining the morphology feature report with the three-dimensional shape and surface data, integrating all the data to generate a final analysis report:
R_{3D} = IntegrateReport(F_{morph}, S_{3D}),
referring to fig. 5, based on the comprehensive fusion image and the three-dimensional morphological analysis report, the method adopts the deep convolutional neural network to detect and classify the defects of the plastic master batch, and the steps for generating the automatic defect labeling chart specifically include:
S401: Based on the comprehensive fusion image, adopting an image enhancement algorithm to perform image preprocessing to generate an enhanced plastic master batch image;
S402: Based on the enhanced plastic master batch image, performing feature extraction by adopting a SIFT technology to obtain a plastic master batch feature set;
S403: Based on the plastic master batch feature set, performing defect identification by adopting a deep convolutional neural network to obtain a defect identification label;
S404: Combining the defect identification label and the enhanced plastic master batch image, and adopting a classification algorithm to classify the defects, generating an automatic defect labeling diagram.
S401 - Image enhancement: the comprehensive fusion image is preprocessed using common image enhancement algorithms, such as histogram equalization: I_{enhanced} = HistogramEqualization(I_{fused}).
S402, feature extraction: extracting key points and feature descriptors from the enhanced image by adopting a Scale Invariant Feature Transform (SIFT) algorithm:
(K, D) = SIFT(I_{enhanced}),
wherein (K) represents a key point and (D) represents a descriptor.
S403, defect identification:
Defect identification is performed on the SIFT feature descriptors using a deep convolutional neural network (DCNN). Consider a pre-trained DCNN model (M_{DCNN}):
L = M_{DCNN}(D),
wherein, (L) is a defect identification tag.
S404, defect classification and marking:
based on the defect identification tag and the enhanced image, a classification algorithm such as a Support Vector Machine (SVM) is applied:
T_{defect} = SVM(L, I_{enhanced}),
And then carrying out defect marking on the enhanced image:
I_{annotated} = Annotate(I_{enhanced}, T_{defect}),
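The patent names an SVM for the S404 classification step. As a dependency-free stand-in, a nearest-centroid classifier over hypothetical two-dimensional defect descriptors illustrates the same mapping from feature vectors to defect classes:

```python
import numpy as np

# Toy substitute for the SVM classifier: learn one centroid per defect
# class and assign a new descriptor to the nearest centroid. The defect
# names and descriptor values below are invented for illustration.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

X = np.array([[0.1, 0.2], [0.2, 0.1],    # "scratch" training descriptors
              [0.9, 0.8], [0.8, 0.9]])   # "black_spot" training descriptors
y = np.array(["scratch", "scratch", "black_spot", "black_spot"])

centroids = fit_centroids(X, y)
print(classify(centroids, np.array([0.85, 0.9])))  # black_spot
```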
referring to fig. 6, based on the automatic defect labeling chart, the method adopts a deep reinforcement learning algorithm to dynamically adjust production parameters, and performs self-adaptive anomaly detection and strategy update, and the steps of generating a quality anomaly report and an optimized detection strategy are specifically as follows:
S501: Analyzing defect distribution by adopting a data analysis technology based on the automatic defect labeling diagram to obtain a defect analysis report;
S502: Based on the defect analysis report, adopting a deep reinforcement learning algorithm to adjust production parameters and generating adjusted production parameters;
S503: Monitoring the production process by using the adjusted production parameters and adopting a self-adaptive abnormality detection technology to generate an abnormality detection report;
S504: Optimizing the detection strategy by adopting a strategy updating algorithm according to the abnormality detection report to obtain a quality abnormality report.
S501 - Defect analysis: first, data analysis is performed based on the automatic defect labeling diagram:
R_{defect} = DataAnalysis(I_{annotated}),
wherein (R_{defect}) represents the defect analysis report.
S502-production parameter adjustment:
and adjusting production parameters by using a deep reinforcement learning model. Assuming that the state is the current production environment and the action is the adjustment parameter, the rewards are based on the degree of defect reduction:
P_{adjusted} = DRLModel(R_{defect}, S_{current}),
where (P_{adjusted}) is the adjusted production parameter and (S_{current}) is the current production state.
S503-anomaly detection: using the adjusted production parameters, applying an adaptive anomaly detection technique:
R_{anomaly} = AnomalyDetection(P_{adjusted}, S_{current}),
wherein (R_{anomaly}) is the anomaly detection report.
S504, strategy updating and exception reporting: updating the detection strategy according to the abnormal detection report:
R_{quality}, S_{update} = PolicyUpdate(R_{anomaly}, P_{adjusted}),
wherein (R_{quality}) is the quality anomaly report and (S_{update}) is the optimized detection strategy.
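The deep-reinforcement-learning adjustment loop of S502-S504 can be illustrated at toy scale with tabular Q-learning. The states, actions, transition model and rewards below are entirely hypothetical stand-ins for a real production environment; a deep RL agent would replace the Q table with a neural network:

```python
import random

# Toy tabular Q-learning: states are coarse defect levels, actions nudge
# one (hypothetical) production parameter, and the reward favours
# transitions toward the "low" defect state.
random.seed(0)

states = ["high", "medium", "low"]
actions = ["raise_temp", "lower_temp"]
Q = {(s, a): 0.0 for s in states for a in actions}

def step(state, action):
    # Hypothetical process model: lowering temperature reduces defects.
    if action == "lower_temp" and state != "low":
        nxt = states[states.index(state) + 1]
    else:
        nxt = state
    reward = {"high": -1.0, "medium": 0.0, "low": 1.0}[nxt]
    return nxt, reward

alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
for _ in range(500):                # episodes
    s = random.choice(states)
    for _ in range(10):             # steps per episode
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in actions)
                              - Q[(s, a)])
        s = nxt

print(max(actions, key=lambda a: Q[("high", a)]))  # learns lower_temp
```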
Referring to fig. 7, based on the quality anomaly report, the image acquisition parameters including exposure time, focal length, and light source intensity are dynamically optimized by using a reinforcement learning method, and the steps for generating the optimized image acquisition parameter settings are specifically as follows:
S601: According to the quality abnormality report, adopting an analysis technology to examine the abnormality cause and obtain an abnormality cause analysis report;
S602: Based on the abnormality cause analysis report, applying a reinforcement learning method, and proposing an image acquisition optimization scheme to generate a parameter optimization scheme;
S603: Combining the parameter optimization scheme, adjusting exposure time, focal length and light source intensity, and generating preliminarily optimized image acquisition parameters;
S604: The image acquisition is carried out again by utilizing the preliminarily optimized image acquisition parameters, the effect is evaluated, and an effect evaluation report is generated;
S605: and (3) carrying out parameter refinement and perfecting according to the effect evaluation report, and generating optimized image acquisition parameter setting.
S601-analysis of abnormality cause: first, analysis is performed on a quality anomaly report:
R_{cause} = AnomalyAnalysis(R_{quality}),
wherein (R_{cause}) represents the abnormality cause analysis report and (R_{quality}) is the quality anomaly report.
S602 - Parameter optimization scheme: the abnormality cause report is analyzed using a reinforcement learning model to propose an image acquisition optimization scheme:
S_{suggestion} = RLModel(R_{cause}),
wherein (S_{suggestion}) is the parameter optimization scheme.
S603-preliminary parameter adjustment: according to the parameter optimization scheme, adjusting image acquisition parameters:
P_{initial} = AdjustParameters(S_{suggestion}),
where (P_{initial}) is the initially optimized image acquisition parameters.
S604-effect evaluation: and (3) re-carrying out image acquisition by utilizing the primarily optimized image acquisition parameters, and evaluating the effect:
R_{evaluation} = ImageEvaluation(P_{initial}),
where (R_{evaluation}) is the effect evaluation report.
S605-parameter refinement and perfecting: according to the effect evaluation report, refining and perfecting image acquisition parameters:
P_{optimized} = RefineParameters(R_{evaluation}),
wherein (P_{optimized}) is the optimized image acquisition parameter setting.
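The S603-S605 adjust-evaluate-refine cycle can be sketched as a simple coordinate hill-climb. The quadratic quality function, its optimum, and the parameter names are hypothetical stand-ins for the real image-evaluation step:

```python
# Refine acquisition parameters one coordinate at a time, keeping any
# change that improves the (hypothetical) quality score.

def quality(params):
    # Made-up optimum: exposure 8 ms, focus 50, light intensity 70.
    targets = {"exposure_ms": 8.0, "focus": 50.0, "intensity": 70.0}
    return -sum((params[k] - targets[k]) ** 2 for k in targets)

def refine(params, step=1.0, rounds=200):
    params = dict(params)
    best = quality(params)
    for _ in range(rounds):
        improved = False
        for k in params:
            for delta in (+step, -step):
                trial = dict(params, **{k: params[k] + delta})
                if quality(trial) > best:
                    params, best, improved = trial, quality(trial), True
        if not improved:  # local optimum reached
            break
    return params

start = {"exposure_ms": 2.0, "focus": 40.0, "intensity": 90.0}
print(refine(start))  # converges to exposure 8.0, focus 50.0, intensity 70.0
```

In the patented system the evaluation step (S604) plays the role of `quality`, so each trial setting would require a fresh acquisition rather than a function call.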
Referring to fig. 8, a plastic master batch quality inspection system based on machine vision is used for executing the plastic master batch quality inspection method based on machine vision, and the system comprises an image acquisition module, an image processing module, a three-dimensional modeling module, a feature recognition module, a quality analysis module, a parameter adjustment module and an optimization acquisition module.
The image acquisition module acquires visible light, infrared and ultraviolet images by adopting a synchronous trigger mechanism based on a sensor selection algorithm to generate a multi-mode original image data set;
the image processing module generates an image dataset with improved quality by adopting an image enhancement algorithm, a median filtering algorithm, an image registration algorithm and a histogram equalization technology based on the multi-mode original image dataset, and finally generates a comprehensive fusion image;
the three-dimensional modeling module generates a meshed three-dimensional model and three-dimensional shape surface data by utilizing a three-dimensional reconstruction algorithm and a three-dimensional mesh generation technology based on the comprehensive fusion image;
the feature recognition module is used for preprocessing the comprehensive fusion image, extracting features, performing defect recognition by using a deep convolutional neural network, and generating a defect recognition tag;
The quality analysis module classifies defects based on the defect identification labels, generates an automatic defect labeling chart, analyzes defect distribution and generates a defect analysis report;
the parameter adjustment module adjusts production parameters based on the defect analysis report by using a deep reinforcement learning algorithm, generates adjusted production parameters, monitors the production process, generates an abnormal detection report, optimizes the detection strategy by a strategy updating algorithm, and finally generates a quality abnormal report;
the optimization acquisition module analyzes the abnormal reasons, proposes an optimization scheme, generates a parameter optimization scheme, adjusts image acquisition parameters, generates primarily optimized image acquisition parameter settings, re-acquires images, evaluates effects and finally obtains the optimized image acquisition parameter settings.
Firstly, the multi-mode image acquisition function of the system enables the characteristics of the plastic master batch to be observed under a plurality of spectrums, so that accurate and reliable data information can be obtained under different environments and physical conditions. By capturing visible, infrared and ultraviolet images, nuances and potential defects of the material can be captured from multiple dimensions, enhancing the accuracy and reliability of the detection.
And secondly, various image processing technologies such as an image enhancement algorithm, a median filtering algorithm and the like are introduced, so that the quality and resolution of the image are obviously improved, the subsequent feature identification and quality analysis have a clearer and more accurate data base, and the possibility of false detection and omission is greatly reduced.
Meanwhile, by implementing a three-dimensional modeling technology, the system can more accurately capture a plurality of physical characteristics such as the shape, the size, the surface texture and the like of the plastic master batch, and provides omnibearing data support for accurate quality detection. In particular, three-dimensional modeling techniques have shown their irreplaceable importance for spatial defects and deformations that are not perceptible in two-dimensional images.
In the aspect of feature recognition, the introduction of the deep convolutional neural network greatly widens the recognition capability of complex and tiny defects. Compared with the traditional method, the deep learning brings higher level of automation and intelligence, greatly lightens the labor burden and also remarkably improves the detection efficiency and the detection precision.
In terms of quality analysis and parameter adjustment, the use of deep reinforcement learning enables the system not only to perform accurate quality detection and analysis based on current data, but also to intelligently optimize production parameters and detection strategies through continuous learning, ensuring the stability of the production process and issuing timely early warnings when problems are encountered, thereby preventing potential quality accidents.
Referring to fig. 9, the image acquisition module includes a sensor configuration sub-module, a synchronization control sub-module, an image acquisition sub-module, and a data integration sub-module; the image processing module comprises an image enhancer module, an image noise reduction sub-module, a space alignment sub-module and a first characteristic extraction sub-module;
The three-dimensional modeling module comprises a three-dimensional reconstruction sub-module, a model gridding sub-module, a data noise reduction sub-module and a shape analysis sub-module;
the feature recognition module comprises an image preprocessing sub-module, a second feature extraction sub-module and a defect recognition sub-module;
the quality analysis module comprises a defect classification sub-module and a defect analysis sub-module;
the parameter adjustment module comprises a first parameter adjustment sub-module, a production monitoring sub-module and a strategy updating sub-module;
the optimization acquisition module comprises an anomaly analysis sub-module, an optimization scheme sub-module, a second parameter adjustment sub-module and an effect evaluation sub-module.
And an image acquisition module:
sensor configuration submodule: by optimizing the layout and performance configuration of the sensor, higher resolution and quality image data can be obtained.
And a synchronization control submodule: data acquisition synchronization between different sensors and devices is ensured to avoid data inconsistencies.
An image acquisition sub-module: is responsible for acquiring the original image data and provides an initial source of data.
A data integration sub-module: the data from the different sensors and devices are integrated together to provide a complete data set for subsequent processing.
An image processing module:
An image enhancer module: by enhancing contrast, brightness, etc., the visual quality of the image can be improved, contributing to a clearer analysis of the image.
An image noise reduction sub-module: noise in the image is removed, and the image quality and accuracy are improved.
Spatial alignment submodule: the images of different viewing angles or sensors are accurately spatially aligned to provide accurate input for three-dimensional modeling.
A first feature extraction sub-module: and extracting key features, and facilitating subsequent processing and analysis.
And a three-dimensional modeling module:
three-dimensional reconstruction sub-module: the image data shot from a plurality of angles are converted into a three-dimensional model, so that three-dimensional restoration of the target object is realized.
Model meshing submodule: the three-dimensional model is converted into grid or point cloud data, which provides a data format for further analysis.
And a data noise reduction sub-module: noise in the three-dimensional data is removed, and the precision of the model is improved.
Shape analysis submodule: the geometry of the model is analyzed to provide information about the target object.
And the characteristic recognition module is used for:
an image preprocessing sub-module: the input image is further processed in preparation for feature extraction.
A second feature extraction sub-module: higher level features, such as texture, shape, etc., are extracted from the image.
Defect identification sub-module: detecting and identifying defects or anomalies in the image facilitates quality control.
And a mass analysis module:
defect classification sub-module: the detected defects are classified to determine their severity.
Defect analysis sub-module: the nature and location of the defect is analyzed, providing more detailed information for improving the process.
Parameter adjustment module:
a first parameter adjustment sub-module: system parameters are adjusted to optimize the image acquisition and processing process.
Production monitoring submodule: the production process is monitored in real time, problems are found in time, and measures are taken.
Policy update sub-module: and updating the system strategy according to the monitoring result to improve the performance.
And (3) an optimization acquisition module:
an anomaly analysis sub-module: abnormal conditions, such as equipment failure or quality problems, are identified to take appropriate action.
Optimization scheme sub-module: an improvement scheme is provided to improve the production efficiency and quality.
And a second parameter adjustment sub-module: the parameters are further adjusted to achieve better performance.
An effect evaluation sub-module: the system effect is evaluated to verify the effect of the improvement.
The present invention is not limited to the above embodiments; equivalent embodiments changed or modified according to the technical disclosure described above may be applied to other fields, and any simple modification, equivalent change or variation made to the above embodiments according to the technical substance of the present invention still falls within the scope of the technical disclosure.
Claims (10)
1. The plastic master batch quality inspection method based on machine vision is characterized by comprising the following steps of:
based on a plastic master batch sample, adopting a multi-sensor synchronous acquisition technology, and simultaneously acquiring visible light, infrared and ultraviolet spectrogram images to generate a multi-mode original image data set;
based on the multi-mode original image data set, performing image preprocessing by adopting a convolutional neural network, and generating a comprehensive fusion image through a multi-source image fusion algorithm;
based on the comprehensive fusion image, a three-dimensional model of the plastic master batch is established by adopting a point cloud data processing method, and shape and surface analysis is carried out to generate a three-dimensional morphological analysis report;
based on the comprehensive fusion image and the three-dimensional morphological analysis report, performing defect detection and classification on the plastic master batch by adopting a deep convolutional neural network to generate an automatic defect labeling chart;
based on the automatic defect labeling diagram, adopting a deep reinforcement learning algorithm to dynamically adjust production parameters, and carrying out self-adaptive anomaly detection and strategy updating to generate a quality anomaly report and an optimized detection strategy;
and dynamically optimizing image acquisition parameters, including exposure time, focal length and light source intensity, by adopting a reinforcement learning method based on the quality anomaly report, and generating optimized image acquisition parameter settings.
2. The machine vision-based plastic master batch quality inspection method according to claim 1, wherein the step of simultaneously acquiring visible light, infrared and ultraviolet spectrogram images and generating a multi-mode original image data set is specifically as follows:
based on the selective strategy, adopting a sensor selection algorithm to perform optimal sensor configuration and generating sensor parameter configuration;
based on the sensor parameter configuration, a synchronous trigger mechanism is applied to synchronously start a plurality of groups of sensors to obtain synchronous acquisition control signals;
based on the synchronous acquisition control signal, adopting a high-speed imaging technology to acquire a visible light image, and generating a visible light original image;
based on the synchronous acquisition control signal, an infrared imaging technology is utilized to acquire an infrared image, and an infrared original image is generated;
based on the synchronous acquisition control signal, an ultraviolet imaging technology is used for acquiring an ultraviolet image, and an ultraviolet original image is generated;
based on the visible light original image, the infrared original image and the ultraviolet original image, a data integration technology is applied, and the multi-mode images are combined to generate a multi-mode original image data set.
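The synchronous trigger mechanism of claim 2 requires that all three modalities start capture from the same control signal. A minimal sketch (not part of the claims; the frames themselves are stubbed) can model this with a thread barrier acting as the synchronization point:

```python
import threading
import time

def capture(modality, barrier, results):
    # Each sensor thread blocks at the barrier; all three are released
    # together, modeling claim 2's synchronous acquisition control signal.
    barrier.wait()  # synchronized start
    results[modality] = {"frame": [[0]], "t": time.monotonic()}

def acquire_dataset():
    results = {}
    barrier = threading.Barrier(3)
    threads = [threading.Thread(target=capture, args=(m, barrier, results))
               for m in ("visible", "infrared", "ultraviolet")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Data-integration step: combine the per-modality frames into one
    # multi-mode original image data set.
    return {m: results[m]["frame"] for m in results}
```

In a real system the barrier would be a hardware trigger line; the software barrier only illustrates the control structure.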
3. The machine vision-based plastic master batch quality inspection method according to claim 1, wherein the steps of performing image preprocessing by using a convolutional neural network based on the multi-mode original image dataset and generating a comprehensive fusion image by using a multi-source image fusion algorithm are specifically as follows:
based on the multi-mode original image data set, adopting an image enhancement algorithm to improve image quality and obtain an enhanced image data set;
based on the enhanced image dataset, applying a median filtering algorithm to perform image noise reduction and generating a noise-reduced image dataset;
based on the noise-reduced image data set, performing spatial alignment by adopting an image registration algorithm to obtain a spatially aligned image data set;
performing brightness and contrast adjustment based on the spatially aligned image dataset by using a histogram equalization technique to generate a contrast-adjusted image dataset;
based on the image dataset with the adjusted contrast, extracting key features by using a SIFT feature extraction algorithm to obtain a key image feature set;
and based on the key image feature set, applying a multi-source image fusion algorithm, and combining the multi-mode images to generate a comprehensive fusion image.
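Two of the preprocessing steps in claim 3 are classical and easy to show concretely. The following pure-Python sketch (illustrative only, operating on a grayscale image as a list of lists) implements the median-filter noise-reduction step and the histogram-equalization contrast step:

```python
import statistics

def median_filter(img, k=3):
    # k x k median filter (claim 3's noise-reduction step);
    # border pixels are left unchanged for simplicity.
    h, w, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = statistics.median(window)
    return out

def equalize(img, levels=256):
    # Histogram equalization (claim 3's brightness/contrast step):
    # build the cumulative histogram and remap intensities through it.
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

A production system would use optimized library routines; the registration and SIFT steps of claim 3 are omitted here as they are substantially more involved.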
4. The machine vision-based plastic master batch quality inspection method according to claim 1, wherein the step of creating a three-dimensional model of the plastic master batch and performing shape and surface analysis based on the comprehensive fusion image by using a point cloud data processing method to generate a three-dimensional morphological analysis report is specifically as follows:
based on the comprehensive fusion image, adopting a three-dimensional reconstruction algorithm to perform three-dimensional reconstruction to obtain a preliminary three-dimensional model;
based on the preliminary three-dimensional model, performing model gridding by utilizing a three-dimensional grid generation technology to generate a gridding three-dimensional model;
based on the meshed three-dimensional model, applying a point cloud filtering algorithm to perform data noise reduction to obtain a noise-reduced three-dimensional model;
based on the three-dimensional model after noise reduction, adopting a surface fitting technology to analyze the shape and the surface, and generating three-dimensional shape and surface data;
based on the three-dimensional shape and the surface data, extracting and analyzing a specific form by using a morphological operation technology to obtain a form characteristic report;
integrating the morphological feature report with the three-dimensional shape and surface data, and generating a three-dimensional morphological analysis report by using a data integration technology.
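The point cloud filtering step of claim 4 is commonly realized as statistical outlier removal. As a hedged, library-free sketch (one common scheme, not necessarily the one the patent intends): drop any point whose nearest-neighbor distance is far above the cloud's average.

```python
import math

def nearest_dist(p, cloud):
    # Distance from point p to its nearest neighbour in the cloud.
    return min(math.dist(p, q) for q in cloud if q is not p)

def remove_outliers(cloud, z_thresh=2.0):
    # Statistical outlier removal: compute each point's nearest-neighbour
    # distance, then discard points more than z_thresh standard
    # deviations above the mean distance (stand-in for claim 4's
    # point-cloud noise-reduction step).
    dists = [nearest_dist(p, cloud) for p in cloud]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    std = math.sqrt(var)
    return [p for p, d in zip(cloud, dists) if d <= mean + z_thresh * std]
```

The quadratic nearest-neighbor search is fine for illustration; real point-cloud pipelines use spatial indices (k-d trees, voxel grids).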
5. The machine vision-based plastic master batch quality inspection method according to claim 1, wherein the step of performing defect detection and classification on plastic master batches by using a deep convolutional neural network based on the comprehensive fusion image and the three-dimensional morphological analysis report to generate an automatic defect labeling chart is specifically as follows:
based on the comprehensive fusion image, adopting an image enhancement algorithm to perform image preprocessing to generate an enhanced plastic master batch image;
based on the reinforced plastic master batch image, performing feature extraction by adopting a SIFT technology to obtain a plastic master batch feature set;
based on the plastic master batch feature set, performing defect identification by adopting a deep convolutional neural network to obtain a defect identification label;
and combining the defect identification label and the reinforced plastic master batch image, and adopting a classification algorithm to classify defects to generate an automatic defect labeling chart.
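Claim 5's deep convolutional neural network is built from stacked 2-D convolutions. The elementary operation can be shown in isolation (an illustrative sketch, not the claimed network: a single fixed Laplacian-like kernel standing in for many learned kernels):

```python
def conv2d(img, kernel):
    # Valid-mode 2-D convolution: the building block of the deep
    # convolutional network used for defect recognition in claim 5.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(img[y + i][x + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

def defect_score(img):
    # A Laplacian-like kernel responds strongly to local anomalies,
    # e.g. a bright speck on an otherwise uniform pellet surface.
    lap = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
    resp = conv2d(img, lap)
    return max(abs(v) for row in resp for v in row)
```

A uniform surface scores zero; any local irregularity produces a strong response, which is the intuition behind learned convolutional defect detectors.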
6. The machine vision-based plastic master batch quality inspection method according to claim 1, wherein the steps of dynamically adjusting production parameters by adopting a deep reinforcement learning algorithm based on the automatic defect labeling chart, performing self-adaptive anomaly detection and strategy updating, and generating a quality anomaly report and an optimized detection strategy are specifically as follows:
analyzing defect distribution by adopting a data analysis technology based on the automatic defect labeling chart to obtain a defect analysis report;
based on the defect analysis report, adopting a deep reinforcement learning algorithm to adjust production parameters and generating adjusted production parameters;
monitoring the production process by using the adjusted production parameters and adopting a self-adaptive anomaly detection technology to generate an anomaly detection report;
and based on the anomaly detection report, optimizing the detection strategy by adopting a strategy updating algorithm to obtain a quality anomaly report and an optimized detection strategy.
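Claim 6's deep reinforcement learning step can be illustrated, in heavily simplified form, with tabular Q-learning over a toy process model (a sketch only: the state, actions, and reward model below are hypothetical stand-ins for the claimed deep RL agent and the real production line):

```python
import random

def train_adjuster(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    # Tabular Q-learning stand-in for claim 6's deep reinforcement
    # learning step. State: coarse defect rate ("high"/"low"); actions:
    # hypothetical production-parameter moves. The toy environment
    # rewards lowering temperature when the defect rate is high and
    # leaving the process alone when it is already good.
    rng = random.Random(seed)
    actions = ("lower_temp", "raise_temp", "keep")
    q = {(s, a): 0.0 for s in ("high", "low") for a in actions}
    for _ in range(episodes):
        state = rng.choice(("high", "low"))
        if rng.random() < eps:  # epsilon-greedy exploration
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(state, x)])
        if state == "high":
            reward, nxt = (1.0, "low") if a == "lower_temp" else (-1.0, "high")
        else:
            reward, nxt = (1.0, "low") if a == "keep" else (-0.5, "high")
        best_next = max(q[(nxt, x)] for x in actions)
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
    return q
```

After training, the greedy policy derived from the Q-table adjusts parameters exactly as the toy reward model intends; a deep RL agent replaces the table with a neural network over a far richer state.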
7. The machine vision-based plastic master batch quality inspection method according to claim 1, wherein the step of dynamically optimizing image acquisition parameters including exposure time, focal length, light source intensity by using reinforcement learning method based on the quality anomaly report, and generating optimized image acquisition parameter settings specifically comprises the steps of:
according to the quality anomaly report, adopting an analysis technology to investigate the cause of the anomaly and obtain an anomaly cause analysis report;
based on the abnormal cause analysis report, applying a reinforcement learning method, and providing an image acquisition optimization scheme to generate a parameter optimization scheme;
combining the parameter optimization scheme, adjusting exposure time, focal length and light source intensity, and generating primarily optimized image acquisition parameters;
carrying out image acquisition again by utilizing the primarily optimized image acquisition parameters, evaluating the effect of the image acquisition parameters and generating an effect evaluation report;
and according to the effect evaluation report, carrying out parameter refinement and perfecting to generate optimized image acquisition parameter setting.
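The acquisition-parameter tuning loop of claim 7 (adjust, re-acquire, evaluate, refine) can be sketched as greedy coordinate ascent over a quality score; this is a simplification of the claimed reinforcement learning method, and the score function and parameter names below are hypothetical:

```python
def tune_acquisition(score_fn, start, steps, iterations=100):
    # Greedy coordinate-ascent sketch of claim 7: nudge each parameter
    # (exposure time, focal length, light intensity) up and down, keep
    # any change that improves the image-quality score, stop when no
    # single-step change helps.
    params = dict(start)
    best = score_fn(params)
    for _ in range(iterations):
        improved = False
        for key, delta in steps.items():
            for d in (delta, -delta):
                trial = dict(params)
                trial[key] += d
                s = score_fn(trial)
                if s > best:
                    params, best, improved = trial, s, True
        if not improved:
            break
    return params, best

def toy_quality(p):
    # Hypothetical quality score peaking at exposure=12 ms,
    # focal=50 mm, light=70 %.
    return -((p["exposure_ms"] - 12) ** 2
             + (p["focal_mm"] - 50) ** 2
             + (p["light_pct"] - 70) ** 2)
```

In practice `score_fn` would re-acquire an image with the trial parameters and evaluate it (sharpness, contrast, defect-detection confidence), matching the claim's effect-evaluation step.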
8. A machine vision-based plastic master batch quality inspection system, characterized in that the system is configured to execute the machine vision-based plastic master batch quality inspection method according to any one of claims 1-7, and comprises an image acquisition module, an image processing module, a three-dimensional modeling module, a feature recognition module, a quality analysis module, a parameter adjustment module and an optimization acquisition module.
9. The machine vision-based plastic master batch quality inspection system according to claim 8, wherein the image acquisition module acquires visible light, infrared and ultraviolet images by adopting a synchronous trigger mechanism based on a sensor selection algorithm to generate a multi-mode original image dataset;
the image processing module generates an image dataset with improved quality by adopting an image enhancement algorithm, a median filtering algorithm, an image registration algorithm and a histogram equalization technology based on the multi-mode original image dataset, and finally generates a comprehensive fusion image;
the three-dimensional modeling module generates a meshed three-dimensional model and three-dimensional shape and surface data by utilizing a three-dimensional reconstruction algorithm and a three-dimensional mesh generation technology based on the comprehensive fusion image;
the feature recognition module is used for preprocessing the comprehensive fusion image, extracting features, performing defect recognition by using a deep convolutional neural network, and generating a defect recognition tag;
the quality analysis module classifies defects based on the defect identification labels, generates an automatic defect labeling chart, analyzes defect distribution and generates a defect analysis report;
the parameter adjustment module adjusts production parameters based on the defect analysis report by using a deep reinforcement learning algorithm, generates adjusted production parameters, monitors the production process, generates an abnormal detection report, optimizes the detection strategy by a strategy updating algorithm, and finally generates a quality abnormal report;
the optimization acquisition module analyzes the abnormal reasons, proposes an optimization scheme, generates a parameter optimization scheme, adjusts image acquisition parameters, generates primarily optimized image acquisition parameter settings, re-acquires images, evaluates effects and finally obtains the optimized image acquisition parameter settings.
10. The machine vision-based plastic master batch quality inspection system of claim 8, wherein the image acquisition module comprises a sensor configuration sub-module, a synchronization control sub-module, an image acquisition sub-module, and a data integration sub-module;
the image processing module comprises an image enhancement sub-module, an image noise reduction sub-module, a spatial alignment sub-module and a first feature extraction sub-module;
the three-dimensional modeling module comprises a three-dimensional reconstruction sub-module, a model gridding sub-module, a data noise reduction sub-module and a shape analysis sub-module;
the feature recognition module comprises an image preprocessing sub-module, a second feature extraction sub-module and a defect recognition sub-module;
the quality analysis module comprises a defect classification sub-module and a defect analysis sub-module;
the parameter adjustment module comprises a first parameter adjustment sub-module, a production monitoring sub-module and a strategy updating sub-module;
the optimization acquisition module comprises an anomaly analysis sub-module, an optimization scheme sub-module, a second parameter adjustment sub-module and an effect evaluation sub-module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311359574.3A CN117095005B (en) | 2023-10-20 | 2023-10-20 | Plastic master batch quality inspection method and system based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311359574.3A CN117095005B (en) | 2023-10-20 | 2023-10-20 | Plastic master batch quality inspection method and system based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117095005A true CN117095005A (en) | 2023-11-21 |
CN117095005B CN117095005B (en) | 2024-02-02 |
Family
ID=88775681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311359574.3A Active CN117095005B (en) | 2023-10-20 | 2023-10-20 | Plastic master batch quality inspection method and system based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117095005B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170217102A1 (en) * | 2016-01-29 | 2017-08-03 | Siemens Medical Solutions Usa, Inc. | Multi-Modality Image Fusion for 3D Printing of Organ Morphology and Physiology |
CN111079556A (en) * | 2019-11-25 | 2020-04-28 | 航天时代飞鸿技术有限公司 | Multi-temporal unmanned aerial vehicle video image change area detection and classification method |
CN112505065A (en) * | 2020-12-28 | 2021-03-16 | 上海工程技术大学 | Method for detecting surface defects of large part by indoor unmanned aerial vehicle |
CN115184359A (en) * | 2022-06-01 | 2022-10-14 | 贵州开放大学(贵州职业技术学院) | Surface defect detection system and method capable of automatically adjusting parameters |
US20230135512A1 (en) * | 2020-04-03 | 2023-05-04 | Speed Space-Time Information Technology Co., Ltd | Method for updating road signs and markings on basis of monocular images |
CN116128820A (en) * | 2022-03-30 | 2023-05-16 | 国网河北省电力有限公司雄安新区供电公司 | Pin state identification method based on improved YOLO model |
CN116843650A (en) * | 2023-07-04 | 2023-10-03 | 上海交通大学 | SMT welding defect detection method and system integrating AOI detection and deep learning |
Non-Patent Citations (3)
Title |
---|
CATHERINE LOLLETT: "Driver’s Drowsiness Classifier using a Single-Camera Robust to Mask-wearing Situations using an Eyelid, Lower-Face Contour, and Chest Movement Feature Vector GRU-based Model", 《2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV)》 * |
HE Yunze: "A Review of the Development and Application of Image-Based Multimodal Perception and Multi-Source Fusion Technology", Measurement & Control Technology, vol. 42, no. 6, pages 10 - 21 *
ZHANG Xuzhong; ZHAI Daoyuan; CHEN Jun: "Research on Wood Defect Image Recognition and Segmentation Models Based on Deep Reinforcement Learning", Electronic Measurement Technology, no. 17 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117409003A (en) * | 2023-12-14 | 2024-01-16 | 四川宏亿复合材料工程技术有限公司 | Detection method for backing plate of rail damping fastener |
CN117409003B (en) * | 2023-12-14 | 2024-02-20 | 四川宏亿复合材料工程技术有限公司 | Detection method for backing plate of rail damping fastener |
CN117422936A (en) * | 2023-12-15 | 2024-01-19 | 广州蓝图地理信息技术有限公司 | Remote sensing image classification method and system |
CN117422712A (en) * | 2023-12-15 | 2024-01-19 | 青岛合丰新材料有限公司 | Plastic master batch visual detection method and system based on image filtering processing |
CN117422712B (en) * | 2023-12-15 | 2024-03-01 | 青岛合丰新材料有限公司 | Plastic master batch visual detection method and system based on image filtering processing |
CN117422936B (en) * | 2023-12-15 | 2024-04-02 | 广州蓝图地理信息技术有限公司 | Remote sensing image classification method and system |
CN117934354A (en) * | 2024-03-21 | 2024-04-26 | 共幸科技(深圳)有限公司 | Image processing method based on AI algorithm |
CN117934354B (en) * | 2024-03-21 | 2024-06-11 | 共幸科技(深圳)有限公司 | Image processing method based on AI algorithm |
CN118090743A (en) * | 2024-04-22 | 2024-05-28 | 山东浪潮数字商业科技有限公司 | Porcelain winebottle quality detection system based on multi-mode image recognition technology |
CN118537339A (en) * | 2024-07-25 | 2024-08-23 | 宁波荣新汽车零部件有限公司 | Method and system for evaluating surface quality of part based on sander |
CN118658727A (en) * | 2024-08-16 | 2024-09-17 | 深圳市中电熊猫磁通电子有限公司 | Winding method and device for transformer processing |
CN118658727B (en) * | 2024-08-16 | 2024-11-05 | 深圳市中电熊猫磁通电子有限公司 | Winding method and device for transformer processing |
Also Published As
Publication number | Publication date |
---|---|
CN117095005B (en) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117095005B (en) | Plastic master batch quality inspection method and system based on machine vision | |
CN101443817B (en) | Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene | |
CN104966304B (en) | Multi-target detection tracking based on Kalman filtering and nonparametric background model | |
Schmugge et al. | Detection of cracks in nuclear power plant using spatial-temporal grouping of local patches | |
CN110033431B (en) | Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge | |
EP2549759B1 (en) | Method and system for facilitating color balance synchronization between a plurality of video cameras as well as method and system for obtaining object tracking between two or more video cameras | |
Adam et al. | Construction of accurate crack identification on concrete structure using hybrid deep learning approach | |
CN117274258B (en) | Method, system, equipment and storage medium for detecting defects of main board image | |
Zhao et al. | Recognition of flooding and sinking conditions in flotation process using soft measurement of froth surface level and QTA | |
CN115222884A (en) | Space object analysis and modeling optimization method based on artificial intelligence | |
CN109781737A (en) | A kind of detection method and its detection system of hose surface defect | |
CN117456195A (en) | Abnormal image identification method and system based on depth fusion | |
CN108364306B (en) | Visual real-time detection method for high-speed periodic motion | |
CN105844282A (en) | Method for detecting defects of fuel injection nozzle O-Ring through line scanning camera | |
CN104966283A (en) | Imaging layered registering method | |
JP3053512B2 (en) | Image processing device | |
CN111582076A (en) | Picture freezing detection method based on pixel motion intelligent perception | |
CN117274843A (en) | Unmanned aerial vehicle front end defect identification method and system based on lightweight edge calculation | |
CN117330582A (en) | Polymer PE film surface crystal point detecting system | |
EP4352451A1 (en) | Texture mapping to polygonal models for industrial inspections | |
CN114494134A (en) | Industrial defect detection system based on component point cloud registration detection | |
CN113516654A (en) | Method and system for identifying abnormal part of inner wall of core hole based on vision | |
CN108171168B (en) | Intelligent image detection method and device for small and weak target change | |
CN118470099B (en) | Object space pose measurement method and device based on monocular camera | |
Priya et al. | A Novel Computer Vision Framework for the Automated Visual Inspection for Quality Control of Automotive Fasteners |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||