CN116912335A - Color difference detection method, system, equipment and storage medium for tile image - Google Patents
- Publication number
- CN116912335A (application CN202310873498.1A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- feature
- tile image
- color difference
- difference detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The application relates to the field of image detection, and in particular to a color difference detection method, device, computer device and storage medium for tile images. A tile image to be detected and a standard tile image are obtained; the two images are respectively input into a twin deep learning network, and first convolution feature maps at a plurality of preset scales are obtained for the tile image to be detected and for the standard tile image; the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image are input into a feature fusion network to obtain a fusion difference feature map; and the fusion difference feature map is input into a color difference detection network for color difference detection, obtaining a color difference detection result of the tile image to be detected. The method improves the accuracy and efficiency of color difference detection and reduces its cost.
Description
Technical Field
The application relates to the field of image detection, and in particular to a color difference detection method, device, computer device and storage medium for tile images.
Background
Ceramic tile is an important building material that is widely used in modern life. Quality inspection and grading of the product, a key detail of the tile production process, directly affects product quality.
At present, quality inspection and grading are done manually. During detection, inspectors observe the tile surface while adjusting the angle between the tile and a fluorescent tube, and judge the color difference of the tile from what they observe. The result and efficiency of such detection depend mainly on the experience of the inspector; affected by subjective factors, proficiency and the like, manual detection has low precision, poor efficiency and high cost.
Disclosure of Invention
Based on the above, the application aims to provide a color difference detection method, device, computer device and storage medium for tile images. Through a preset color difference detection model, a twin deep learning network is used to extract feature information of different levels from the tile image to be detected and from a standard tile image, feature fusion is performed, and color difference detection of the tile image to be detected is carried out according to the obtained fused feature information, which improves the accuracy and efficiency of color difference detection and reduces its cost.
In a first aspect, an embodiment of the present application provides a color difference detection method for a tile image, including the following steps:
obtaining a tile image to be detected, a standard tile image and a preset color difference detection model, wherein the color difference detection model comprises a twin deep learning network, a feature fusion network and a color difference detection network;
respectively inputting the tile image to be detected and the standard tile image into the twin deep learning network, and obtaining, according to a plurality of preset scales, first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image;
inputting the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network to obtain a fusion difference feature map;
and inputting the fusion difference feature map into the color difference detection network for color difference detection, and obtaining a color difference detection result of the tile image to be detected.
In a second aspect, an embodiment of the present application provides a color difference detection apparatus for tile image, including:
a data acquisition module, configured to obtain a tile image to be detected, a standard tile image and a preset color difference detection model, wherein the color difference detection model comprises a twin deep learning network, a feature fusion network and a color difference detection network;
a feature extraction module, configured to respectively input the tile image to be detected and the standard tile image into the twin deep learning network, and obtain, according to a plurality of preset scales, first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image;
a feature fusion module, configured to input the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network for difference processing, obtain feature difference maps at the respective scales, and splice the feature difference maps at the respective scales to obtain a fusion difference feature map;
and a color difference detection module, configured to input the fusion difference feature map into the color difference detection network for color difference detection, and obtain a color difference detection result of the tile image to be detected.
In a third aspect, an embodiment of the present application provides a computer apparatus, including: a processor, a memory, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor performs the steps of the color difference detection method of tile images as described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium storing a computer program, which when executed by a processor implements the steps of the color difference detection method of a tile image according to the first aspect.
In the embodiments of the application, a color difference detection method, device, computer device and storage medium for tile images are provided. Through a preset color difference detection model, a twin deep learning network is used to extract feature information of different levels from the tile image to be detected and from the standard tile image, feature fusion is performed, and color difference detection of the tile image to be detected is carried out according to the obtained fused feature information, which improves the accuracy and efficiency of color difference detection and reduces its cost.
For a better understanding and implementation, the present application is described in detail below with reference to the drawings.
Drawings
Fig. 1 is a flow chart of a color difference detection method of a tile image according to an embodiment of the present application;
fig. 2 is a schematic flow chart of S2 in a color difference detection method of a tile image according to an embodiment of the present application;
fig. 3 is a schematic flow chart of S3 in the color difference detection method of a tile image according to an embodiment of the present application;
fig. 4 is a schematic flow chart of S32 in the color difference detection method of a tile image according to an embodiment of the present application;
fig. 5 is a schematic flow chart of S4 in the color difference detection method of a tile image according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a color difference detection device for tile image according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
Referring to fig. 1, fig. 1 is a flowchart of a color difference detection method for tile images according to an embodiment of the application, the method includes the following steps:
s1: and obtaining a tile image to be detected, a standard tile image and a preset color difference detection model.
The color difference detection method of the tile image is executed by a detection device (hereinafter referred to as the detection device). In an alternative embodiment, the detection device may be a computer device, a server, or a server cluster formed by combining multiple computer devices.
In this embodiment, the detection device may obtain the tile image to be detected and the standard tile image input by the user, or may obtain them from a preset database.
The detection equipment obtains a preset color difference detection model, wherein the color difference detection model comprises a twin deep learning network, a characteristic fusion network and a color difference detection network, the twin deep learning network is used for extracting characteristics of a tile image to be detected and a standard tile image, the characteristic fusion network is used for fusing the extracted characteristics of the tile image to be detected and the standard tile image, and the color difference detection network is used for carrying out color difference detection according to the obtained fusion characteristics.
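The patent itself discloses no source code; purely as an illustrative sketch of how the three sub-networks described above could be composed, the following PyTorch-style code (all class and argument names are our own assumptions, not taken from the patent) wires a shared-weight twin backbone, a fusion network and a detection head together:

```python
import torch.nn as nn

class ColorDiffModel(nn.Module):
    """Hypothetical composition of the three sub-networks described in S1.

    The backbone, fusion and head modules are sketched in later sections;
    their names and interfaces are illustrative assumptions."""
    def __init__(self, backbone, fusion, head):
        super().__init__()
        self.backbone = backbone   # twin deep learning network (shared weights)
        self.fusion = fusion       # feature fusion network
        self.head = head           # color difference detection network

    def forward(self, img_test, img_std):
        # The "twin" property: one backbone instance (hence one set of
        # weights) is applied to both the image under test and the
        # standard reference image.
        feats_test = self.backbone(img_test)   # list of multi-scale feature maps
        feats_std = self.backbone(img_std)
        fused = self.fusion(feats_test, feats_std)
        return self.head(fused)                # color difference probability
```

The key design point is that a single backbone instance processes both images, so the twin branches share weights by construction.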
S2: and respectively inputting the tile image to be detected and the standard tile image into the twin deep learning network, and obtaining a first convolution characteristic image corresponding to the tile image to be detected and a first convolution characteristic image corresponding to the standard tile image according to a plurality of preset scales.
In this embodiment, the detection device inputs the tile image to be detected and the standard tile image into the twin deep learning network, and obtains a first convolution feature map corresponding to the tile image to be detected and a first convolution feature map corresponding to the standard tile image according to a plurality of preset scales.
The twin deep learning network comprises a central differential convolution layer and a plurality of central differential convolution modules which are sequentially connected, wherein the central differential convolution module comprises a plurality of central differential sub-convolution layers and a maximum pooling layer which are sequentially connected; referring to fig. 2, fig. 2 is a schematic flow chart of step S2 in the color difference detection method of tile image according to an embodiment of the present application, including steps S21 to S24, specifically including the following steps:
s21: and respectively taking the tile image to be detected and the standard tile image as input feature images of the central differential convolution layer, dividing the input feature images into a plurality of units according to a preset window, and obtaining the neighborhood of each unit and a plurality of adjacent units corresponding to the neighborhood according to the position coordinates of each unit in the input feature images and the preset neighborhood range.
In order to improve the efficiency and the accuracy of feature extraction, in this embodiment, the detection device uses the tile image to be detected and the standard tile image as the input feature images of the central differential convolution layer respectively, divides the input feature images into a plurality of units according to a preset window, and obtains the neighborhood of each unit and a plurality of adjacent units corresponding to the neighborhood according to the position coordinates of each unit in the input feature images and the preset neighborhood range.
S22: according to the neighborhood of each unit, a plurality of adjacent units corresponding to the neighborhood and a preset central differential convolution algorithm, a first central differential convolution feature diagram of each unit of the input feature diagram is obtained, and a first central differential convolution feature diagram of the input feature diagram is constructed, so that a first central differential convolution feature diagram output by the central differential convolution layer is obtained.
The central differential convolution algorithm is as follows:

y(p₀) = ∑_{pₙ∈R} w(pₙ) · (x(p₀+pₙ) − θ·x(p₀))

where y(p₀) is the first central differential convolution feature value at position p₀ of the input feature map; pₙ is the position offset of an adjacent cell within the neighborhood of the cell at p₀; R is the neighborhood; w(pₙ) is the weight of the adjacent cell at offset pₙ; x(p₀+pₙ) is the pixel value of the input feature map at position p₀+pₙ; θ is the central differential gradient information parameter; and x(p₀) is the pixel value of the input feature map at position p₀.
In this embodiment, the detection device obtains the first central differential convolution feature map of each unit of the input feature map according to the neighborhood of each unit, the plurality of adjacent units corresponding to the neighborhood and the preset central differential convolution algorithm, and constructs the first central differential convolution feature map of the input feature map, obtaining the first central differential convolution feature map output by the central differential convolution layer. By adopting central differential convolution, intensity information and gradient information are aggregated to enhance the image features and improve the accuracy of color difference detection.
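As a minimal sketch of this step (not the patent's implementation), the formula can be realized by noting that the θ·x(p₀) term is equivalent to a 1×1 convolution whose weights are the spatial sums of the original kernel. Kernel size 3, stride 1 and θ = 0.7 below are assumptions, since the patent fixes none of these hyper-parameters:

```python
import torch.nn as nn
import torch.nn.functional as F

class CentralDiffConv2d(nn.Module):
    """Central differential convolution following the patent's formula
    y(p0) = sum_n w(p_n) * (x(p0 + p_n) - theta * x(p0)).
    Kernel size and theta are assumed values, not taken from the patent."""
    def __init__(self, in_ch, out_ch, kernel_size=3, theta=0.7):
        super().__init__()
        self.theta = theta
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Vanilla convolution term: sum_n w(p_n) * x(p0 + p_n)
        out_vanilla = self.conv(x)
        # Central term: theta * x(p0) * sum_n w(p_n), computed as a 1x1
        # convolution whose weights are the spatial sums of the kernel.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum)
        return out_vanilla - self.theta * out_center
```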
S23: taking a first central difference convolution feature map output by the central difference convolution layer as an input feature map of a first central difference sub convolution module, and obtaining a second central difference convolution feature map output by a last central difference sub convolution layer according to a central difference convolution algorithm in each central difference sub convolution layer; and inputting the second center difference convolution characteristic map to the maximum pooling layer for pooling treatment, and obtaining the maximum pooling map output by the maximum pooling layer as a first convolution characteristic map output by the first center difference molecular convolution module.
In this embodiment, the detection device uses the first central difference convolution feature map output by the central difference convolution layer as the input feature map of the first central difference convolution module, and obtains the second central difference convolution feature map output by the last central difference sub-convolution layer according to the central difference convolution algorithm in each central difference convolution layer, which will not be described in detail herein, referring to step S22.
The detection equipment inputs the second center difference convolution characteristic diagram to the maximum pooling layer for pooling treatment, obtains the maximum pooling diagram output by the maximum pooling layer, and uses the maximum pooling diagram as the first convolution characteristic diagram output by the first center difference molecular convolution module, and performs further dimension reduction on the extracted information of the second center difference convolution characteristic diagram so as to reduce the calculated amount, enhance the invariance and the robustness of the image characteristics and improve the efficiency of color difference detection.
S24: and taking the first convolution feature image output by the first center difference molecule convolution module as an input feature image of the next center difference molecule convolution module, repeating the steps to obtain the first convolution feature image output by each center difference molecule convolution module, and taking the first convolution feature image output by each center difference molecule convolution module as a first convolution feature image of a plurality of scales to obtain a first convolution feature image corresponding to the tile image to be detected of the plurality of scales and a first convolution feature image corresponding to the standard tile image.
In this embodiment, the detection device uses the first convolution feature map output by the first center difference molecule convolution module as the input feature map of the next center difference molecule convolution module, and repeats the above steps to obtain the first convolution feature maps output by the center difference molecule convolution modules, and uses the first convolution feature maps output by the center difference molecule convolution modules as the first convolution feature maps of several scales to obtain the first convolution feature maps corresponding to the tile images to be detected of the several scales and the first convolution feature maps corresponding to the standard tile images. The multi-scale feature extraction of the tile image to be detected and the standard tile image is realized, so that the feature information of finer and larger receptive fields is obtained, and the accuracy of color difference detection is improved.
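Continuing the sketch above and reusing the hypothetical CentralDiffConv2d, a backbone matching this description could look as follows. The number of modules, the two sub-layers per module, the channel widths and the ReLU activations are all assumptions, as the patent leaves them open:

```python
import torch.nn as nn

class CDCModule(nn.Module):
    """One central differential convolution module (S23): several central
    differential sub-convolution layers followed by max pooling. Two
    sub-layers and 2x2 pooling are assumptions; the patent only says
    'a plurality' of sub-layers."""
    def __init__(self, in_ch, out_ch, num_sublayers=2):
        super().__init__()
        layers = []
        for i in range(num_sublayers):
            layers.append(CentralDiffConv2d(in_ch if i == 0 else out_ch, out_ch))
            layers.append(nn.ReLU(inplace=True))  # activation is our assumption
        self.sublayers = nn.Sequential(*layers)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.sublayers(x))

class CDCBackbone(nn.Module):
    """Twin backbone: an initial central differential convolution layer
    followed by a chain of CDC modules; the output of every module is
    kept as one scale of the feature pyramid (S24)."""
    def __init__(self, channels=(3, 32, 64, 128, 256)):  # widths assumed
        super().__init__()
        self.stem = CentralDiffConv2d(channels[0], channels[1])
        self.stages = nn.ModuleList(
            CDCModule(channels[i], channels[i + 1])
            for i in range(1, len(channels) - 1))

    def forward(self, x):
        feat = self.stem(x)
        pyramid = []
        for stage in self.stages:
            feat = stage(feat)
            pyramid.append(feat)  # one first convolution feature map per scale
        return pyramid
```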
S3: and inputting the first convolution feature images corresponding to the tile images to be detected with the multiple scales and the first convolution feature images corresponding to the standard tile images into the feature fusion network to obtain fusion difference feature images.
In this embodiment, the detection device inputs the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network to obtain a fusion difference feature map. Feature fusion is performed on the extracted feature information of different levels of the two images, yielding a fused feature map with richer details for more accurate color difference detection.
The feature fusion network comprises an attention conversion module; referring to fig. 3, fig. 3 is a schematic flow chart of step S3 in the color difference detection method of tile image according to an embodiment of the present application, including steps S31 to S32, specifically including the following steps:
s31: and performing difference processing on the first convolution characteristic image corresponding to the tile image to be detected and the first convolution characteristic image corresponding to the standard tile image in the same scale to obtain characteristic difference images in all scales.
In this embodiment, the detection device performs difference processing on the first convolution feature map corresponding to the tile image to be detected and the first convolution feature map corresponding to the standard tile image at the same scale, so as to obtain the feature difference maps at the respective scales.
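A one-line sketch of this step, assuming plain subtraction (the patent says only "difference processing"; an absolute difference would be an equally plausible reading):

```python
def scale_wise_difference(feats_test, feats_std):
    # One feature difference map per scale (S31); plain subtraction assumed.
    return [f_t - f_s for f_t, f_s in zip(feats_test, feats_std)]
```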
S32: inputting the feature difference graphs of all the scales into the attention conversion module to obtain feature difference graphs of all the scales after attention conversion, and splicing the feature difference graphs of all the scales after attention conversion to obtain a fusion difference feature graph.
In this embodiment, the detection device inputs the feature difference maps of the respective scales into the attention conversion module, obtains the attention-converted feature difference maps, and splices them to obtain a fusion difference feature map. Refining the spatial attention of the feature difference map at each scale yields a fused feature map with richer details for more accurate color difference detection.
The attention conversion module comprises a pooling layer, a convolution layer and an activation layer which are sequentially connected; referring to fig. 4, fig. 4 is a schematic flow chart of step S32 in the color difference detection method of tile image according to an embodiment of the present application, including steps S321 to S323, specifically including the following steps:
s321: and inputting the characteristic difference graphs of the scales into the pooling layer to obtain an average pooling graph and a maximum pooling graph of the scales output by the pooling layer.
In this embodiment, the detection device inputs the feature difference map of each scale into the pooling layer, and obtains the average pooling map A(Fᵢ) and the maximum pooling map M(Fᵢ) output by the pooling layer, where A(·) is the average pooling function, M(·) is the maximum pooling function, and Fᵢ is the feature difference map at the i-th scale.
S322: and inputting the average pooling graph and the maximum pooling graph of each scale into the convolution layer to obtain a second convolution characteristic graph of each scale.
In this embodiment, the detection device inputs the average pooling map and the maximum pooling map of each scale into the convolution layer to obtain the second convolution feature map Cᵢ([A(Fᵢ), M(Fᵢ)]) of each scale, where Cᵢ(·) is the convolution function corresponding to the i-th scale.
S323: and inputting the feature difference graphs of the scales and the corresponding second convolution feature graphs into the activation layer to obtain the feature difference graphs of the scales after the attention conversion.
In this embodiment, the detection device inputs the feature difference map of each scale and the corresponding second convolution feature map into the activation layer to obtain the attention-converted feature difference map of each scale, specifically:

F′ᵢ = σ(Cᵢ([A(Fᵢ), M(Fᵢ)])) ⊙ Fᵢ

where F′ᵢ is the attention-converted feature difference map at the i-th scale, ⊙ is the Hadamard product, and σ(·) is the activation function.
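A possible realization of the attention conversion module and the subsequent splicing, continuing the earlier sketches (scale_wise_difference is reused from above). Channel-axis pooling, the 7×7 convolution and bilinear upsampling to a common resolution before concatenation are assumptions borrowed from common spatial attention designs; the patent specifies only the pooling → convolution → activation order and the Hadamard product:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionConversion(nn.Module):
    """Attention conversion module (S321-S323), computing
    F'_i = sigma(C_i([A(F_i), M(F_i)])) ⊙ F_i.
    Pooling axis and kernel size are assumptions, not patent values."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, f):
        avg_map = f.mean(dim=1, keepdim=True)         # A(F_i)
        max_map = f.max(dim=1, keepdim=True).values   # M(F_i)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return attn * f                               # Hadamard product

class FusionNetwork(nn.Module):
    """Applies attention conversion per scale, then splices the results
    into one fusion difference feature map (S32). Upsampling all maps to
    the largest scale before concatenation is our assumption; the patent
    only says the maps are 'spliced'."""
    def __init__(self, num_scales):
        super().__init__()
        self.attn = nn.ModuleList(AttentionConversion() for _ in range(num_scales))

    def forward(self, feats_test, feats_std):
        diffs = scale_wise_difference(feats_test, feats_std)
        refined = [a(d) for a, d in zip(self.attn, diffs)]
        target = refined[0].shape[-2:]   # largest scale in the pyramid
        refined = [F.interpolate(r, size=target, mode='bilinear',
                                 align_corners=False) for r in refined]
        return torch.cat(refined, dim=1)
```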
S4: and inputting the fusion difference characteristic diagram into the color difference detection network to perform color difference detection, and obtaining a color difference detection result of the tile image to be detected.
In this embodiment, the detection device inputs the fused difference feature map to the color difference detection network to perform color difference detection, so as to obtain a color difference detection result of the tile image to be detected.
The color difference detection network comprises a full-connection module and a classification module; referring to fig. 5, fig. 5 is a schematic flow chart of step S4 in the color difference detection method of tile image according to an embodiment of the present application, including steps S41 to S42, specifically as follows:
s41: and inputting the fusion difference feature map to the full-connection module, and obtaining fusion difference feature vectors output by the full-connection module according to a plurality of full-connection layers in the full-connection module.
The fully-connected layer (FC) acts as a "classifier", mapping input data from a high-dimensional space to a low-dimensional one.
In this embodiment, the detection device inputs the fusion difference feature map to the fully-connected module, and obtains the fusion difference feature vector output by the fully-connected module according to a plurality of fully-connected layers in the fully-connected module.
S42: and inputting the fusion difference feature vector into the classification module to perform color difference detection, so as to obtain color difference detection probability, and obtaining a color difference detection result of the tile image to be detected according to the color difference detection probability and a preset color difference detection threshold.
The color difference detection result is either a success result or a failure result of color difference detection. In this embodiment, the detection device inputs the fusion difference feature vector into the classification module for color difference detection and obtains a color difference detection probability. When the color difference detection probability is greater than the color difference detection threshold, a color difference detection success result is obtained for the tile image to be detected; when the probability is less than or equal to the threshold, a color difference detection failure result is obtained. Color difference detection of the tile image to be detected is thus realized, improving the accuracy and efficiency of color difference detection and reducing its cost.
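Finally, a sketch of the color difference detection network under the same caveats: the adaptive pooling used to obtain a fixed-length vector, the hidden width and the 0.5 threshold are assumptions; the patent states only that several fully-connected layers produce a fusion difference feature vector and that a classification module compares a probability against a preset threshold:

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Color difference detection network: fully-connected module followed
    by a classification module (S41-S42). Pooling step, layer widths and
    the threshold value are illustrative assumptions."""
    def __init__(self, in_ch, hidden=256, threshold=0.5):
        super().__init__()
        self.threshold = threshold
        self.pool = nn.AdaptiveAvgPool2d(1)   # collapse spatial dimensions
        self.fc = nn.Sequential(              # fully-connected module
            nn.Linear(in_ch, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1))

    def forward(self, fused):
        vec = self.pool(fused).flatten(1)     # fusion difference feature vector
        prob = torch.sigmoid(self.fc(vec)).squeeze(1)  # detection probability
        # Per S42: probability above the preset threshold -> success result
        return prob, prob > self.threshold
```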
Referring to fig. 6, fig. 6 is a schematic structural diagram of a color difference detection device for tile images according to an embodiment of the present application. The device may be implemented, in whole or in part, by software, hardware or a combination of the two. The device 6 includes:
the data acquisition module 61 is configured to obtain a tile image to be detected, a standard tile image, and a preset color difference detection model, where the color difference detection model includes a twin deep learning network, a feature fusion network, and a color difference detection network;
the feature extraction module 62 is configured to input the tile image to be detected and the standard tile image into the twin deep learning network, and obtain a first convolution feature map corresponding to the tile image to be detected and a first convolution feature map corresponding to the standard tile image according to a plurality of preset scales;
a feature fusion module 63, configured to input the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network for difference processing, obtain feature difference maps at the respective scales, and splice the feature difference maps at the respective scales to obtain a fusion difference feature map;
and a color difference detection module 64, configured to input the fusion difference feature map into the color difference detection network for color difference detection, and obtain a color difference detection result of the tile image to be detected.
In the embodiment of the application, a tile image to be detected, a standard tile image and a preset color difference detection model are obtained through the data acquisition module, wherein the color difference detection model comprises a twin deep learning network, a feature fusion network and a color difference detection network. The tile image to be detected and the standard tile image are respectively input into the twin deep learning network through the feature extraction module, and first convolution feature maps at a plurality of preset scales are obtained for the tile image to be detected and for the standard tile image. The first convolution feature maps at the plurality of scales are input into the feature fusion network through the feature fusion module for difference processing, feature difference maps at the respective scales are obtained, and the feature difference maps are spliced to obtain a fusion difference feature map. The fusion difference feature map is input into the color difference detection network through the color difference detection module for color difference detection, and a color difference detection result of the tile image to be detected is obtained. Through the preset color difference detection model, the twin deep learning network extracts feature information of different levels from the tile image to be detected and the standard tile image, feature fusion is performed, and color difference detection is carried out according to the obtained fused feature information, improving the accuracy and efficiency of color difference detection and reducing its cost.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 7 includes: a processor 71, a memory 72, and a computer program 73 stored on the memory 72 and executable on the processor 71. The computer device may store a plurality of instructions adapted to be loaded by the processor 71 to perform the method steps shown in fig. 1 to 5; for the specific implementation process, refer to the descriptions of fig. 1 to 5, which are not repeated here.
The processor 71 may include one or more processing cores. Using various interfaces and lines to connect the various parts of the device, the processor 71 performs the various functions of the computer device 7 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 72 and by calling the data in the memory 72. Alternatively, the processor 71 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA) or programmable logic array (PLA). The processor 71 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen; the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 71, and may instead be implemented by a separate chip.
The memory 72 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 72 includes a non-transitory computer-readable storage medium. The memory 72 may be used to store instructions, programs, code, code sets or instruction sets. The memory 72 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 72 may also be at least one storage device located remotely from the processor 71.
The embodiment of the present application further provides a storage medium, where the storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executed by the processor, and the specific execution process may refer to the specific description shown in fig. 1 to 5, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc.
The present application is not limited to the above-described embodiments; any modifications or variations that do not depart from the spirit and scope of the present application are intended to fall within the scope of the claims of the present application and their equivalents.
Claims (8)
1. The color difference detection method of the tile image is characterized by comprising the following steps of:
obtaining a tile image to be detected, a standard tile image and a preset color difference detection model, wherein the color difference detection model comprises a twin deep learning network, a feature fusion network and a color difference detection network;
respectively inputting the tile image to be detected and the standard tile image into the twin deep learning network, and obtaining, according to a plurality of preset scales, first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image;
inputting the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network to obtain a fusion difference feature map;
and inputting the fusion difference feature map into the color difference detection network for color difference detection, and obtaining a color difference detection result of the tile image to be detected.
2. The color difference detection method of tile image according to claim 1, wherein: the twin deep learning network comprises a central differential convolution layer and a plurality of central differential convolution modules which are sequentially connected, wherein the central differential convolution module comprises a plurality of central differential sub-convolution layers and a maximum pooling layer which are sequentially connected;
the step of respectively inputting the tile image to be detected and the standard tile image into the twin deep learning network and obtaining, according to a plurality of preset scales, the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image comprises:
respectively taking the tile image to be detected and the standard tile image as the input feature maps of the central differential convolution layer, dividing the input feature map into a plurality of units according to a preset window, and obtaining the neighborhood of each unit and the plurality of adjacent units corresponding to the neighborhood according to the position coordinates of each unit in the input feature map and a preset neighborhood range;
obtaining a first central differential convolution feature map of each unit of the input feature map according to the neighborhood of each unit, the plurality of adjacent units corresponding to the neighborhood and a preset central differential convolution algorithm, and constructing the first central differential convolution feature map of the input feature map to obtain the first central differential convolution feature map output by the central differential convolution layer, wherein the central differential convolution algorithm is as follows:
y(p₀) = ∑_{pₙ∈R} w(pₙ) · (x(p₀+pₙ) − θ·x(p₀))

wherein y(p₀) is the first central differential convolution feature value at position p₀ of the input feature map, pₙ is the position offset of an adjacent cell within the neighborhood of the cell at p₀, R is the neighborhood, w(pₙ) is the weight of the adjacent cell at offset pₙ, x(p₀+pₙ) is the pixel value of the input feature map at position p₀+pₙ, θ is the central differential gradient information parameter, and x(p₀) is the pixel value of the input feature map at position p₀;
taking the first central differential convolution feature map output by the central differential convolution layer as the input feature map of the first central differential convolution module, and obtaining, according to the central differential convolution algorithm applied in each central differential sub-convolution layer, the second central differential convolution feature map output by the last central differential sub-convolution layer; inputting the second central differential convolution feature map into the maximum pooling layer for pooling, and obtaining the maximum pooling map output by the maximum pooling layer as the first convolution feature map output by the first central differential convolution module;
and taking the first convolution feature map output by the first central differential convolution module as the input feature map of the next central differential convolution module, repeating the above steps to obtain the first convolution feature map output by each central differential convolution module, and taking the first convolution feature maps output by the central differential convolution modules as the first convolution feature maps at the plurality of scales, to obtain the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image.
3. The color difference detection method of tile image according to claim 2, wherein: the feature fusion network comprises an attention conversion module;
the step of inputting the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network to obtain a fusion difference feature map comprises:
performing difference processing on the first convolution feature map corresponding to the tile image to be detected and the first convolution feature map corresponding to the standard tile image at the same scale, to obtain feature difference maps at the respective scales;
inputting the feature difference maps of the respective scales into the attention conversion module to obtain attention-converted feature difference maps of the respective scales, and splicing the attention-converted feature difference maps to obtain a fusion difference feature map.
4. A color difference detection method of tile image according to claim 3, wherein: the attention conversion module comprises a pooling layer, a convolution layer and an activation layer which are sequentially connected;
the step of inputting the feature difference maps of the respective scales into the attention conversion module to obtain the attention-converted feature difference maps of the respective scales comprises:
inputting the feature difference maps of the respective scales into the pooling layer to obtain the average pooling map and the maximum pooling map of each scale output by the pooling layer;
inputting the average pooling map and the maximum pooling map of each scale into the convolution layer to obtain the second convolution feature map of each scale;
and inputting the feature difference map of each scale and the corresponding second convolution feature map into the activation layer to obtain the attention-converted feature difference map of each scale.
5. The color difference detection method of tile image according to claim 4, wherein: the color difference detection network comprises a full-connection module and a classification module;
the step of inputting the fusion difference feature map into the color difference detection network for color difference detection to obtain a color difference detection result of the tile image to be detected, comprises the following steps:
inputting the fusion difference feature map to the full-connection module, and obtaining fusion difference feature vectors output by the full-connection module according to a plurality of full-connection layers in the full-connection module;
and inputting the fusion difference feature vector into the classification module to perform color difference detection, so as to obtain color difference detection probability, and obtaining a color difference detection result of the tile image to be detected according to the color difference detection probability and a preset color difference detection threshold.
6. A color difference detection device for tile images, comprising:
a data acquisition module, configured to obtain a tile image to be detected, a standard tile image and a preset color difference detection model, wherein the color difference detection model comprises a twin deep learning network, a feature fusion network and a color difference detection network;
a feature extraction module, configured to respectively input the tile image to be detected and the standard tile image into the twin deep learning network, and obtain, according to a plurality of preset scales, first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image;
a feature fusion module, configured to input the first convolution feature maps at the plurality of scales corresponding to the tile image to be detected and to the standard tile image into the feature fusion network for difference processing, obtain feature difference maps at the respective scales, and splice the feature difference maps at the respective scales to obtain a fusion difference feature map;
and a color difference detection module, configured to input the fusion difference feature map into the color difference detection network for color difference detection, and obtain a color difference detection result of the tile image to be detected.
7. A computer device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor implements the steps of the color difference detection method of a tile image as claimed in any one of claims 1 to 5.
8. A storage medium, characterized by: the storage medium stores a computer program which, when executed by a processor, implements the steps of the color difference detection method of a tile image as claimed in any one of claims 1 to 5.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310873498.1A | 2023-07-14 | 2023-07-14 | Color difference detection method, system, equipment and storage medium for tile image

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310873498.1A | 2023-07-14 | 2023-07-14 | Color difference detection method, system, equipment and storage medium for tile image

Publications (1)

Publication Number | Publication Date
---|---
CN116912335A | 2023-10-20

Family

ID=88356004

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202310873498.1A (CN116912335A, pending) | Color difference detection method, system, equipment and storage medium for tile image | 2023-07-14 | 2023-07-14

Country Status (1)

Country | Link
---|---
CN | CN116912335A (en)

2023-07-14: Application CN202310873498.1A filed in China; published as CN116912335A, status pending.
Legal Events

Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination