CN114972871A - Image registration-based few-sample image anomaly detection method and system - Google Patents
Image registration-based few-sample image anomaly detection method and system
- Publication number
- CN114972871A (application number CN202210617656.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- features
- transformed
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 68
- 238000009826 distribution Methods 0.000 claims abstract description 131
- 230000009466 transformation Effects 0.000 claims abstract description 100
- 239000011159 matrix material Substances 0.000 claims description 56
- 238000013528 artificial neural network Methods 0.000 claims description 46
- 230000002159 abnormal effect Effects 0.000 claims description 38
- 238000011156 evaluation Methods 0.000 claims description 31
- 238000013527 convolutional neural network Methods 0.000 claims description 26
- 230000005856 abnormality Effects 0.000 claims description 23
- 238000000605 extraction Methods 0.000 claims description 21
- 230000001131 transforming effect Effects 0.000 claims description 13
- 230000002441 reversible effect Effects 0.000 claims description 6
- 230000017105 transposition Effects 0.000 claims description 6
- 230000006870 function Effects 0.000 description 30
- 238000000034 method Methods 0.000 description 15
- 238000012549 training Methods 0.000 description 15
- 238000004364 calculation method Methods 0.000 description 5
- 238000013480 data collection Methods 0.000 description 5
- 230000006399 behavior Effects 0.000 description 4
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000013145 classification model Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000009776 industrial production Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000006748 scratching Methods 0.000 description 1
- 230000002393 scratching effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a few-sample image anomaly detection method and system based on image registration, comprising the following steps: extracting high-dimensional image features from the support image and the image to be tested; performing feature space transformation on the high-dimensional image features to obtain transformed image features; performing feature coding on the transformed image features; performing feature registration on the coding features; fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model; and performing image anomaly evaluation using the transformed image features and the feature distribution model. Addressing the shortcomings of current anomaly detection methods, the invention provides a few-sample anomaly detection method based on image registration.
Description
Technical Field
The invention relates to the field of computer vision and image processing, in particular to a few-sample image anomaly detection method and system based on image registration.
Background
Currently, deep-learning-based techniques have achieved significant success in object classification tasks; for anomaly detection tasks, however, collecting sufficient anomalous data for model training is too costly. In a standard anomaly detection setting, only normal data is provided for model training, and the anomaly detection method is required to detect anomalous data without ever being trained on anomalous examples. The few-sample anomaly detection task mainly targets anomaly detection over multi-class data, where only a limited number of normal images is provided for each class during training. Popularizing few-sample anomaly detection helps reduce the burden of large-scale training-data collection and the data collection effort of data-intensive application tasks.
Existing few-sample anomaly detection methods all follow the standard one-model-per-class learning paradigm: a separate model is trained on the data of each object class, and each model can only perform the anomaly detection task for that single class. Such methods must retrain the model before performing anomaly detection on an unseen object class, consuming considerable time and computing resources. In fact, by designing a generalizable few-sample anomaly detection algorithm, the anomaly detection capability of the model can be generalized to unseen object classes; such an algorithm greatly reduces computational overhead, can quickly detect anomalies in new object classes, and promotes the application and deployment of anomaly detection in fields such as industrial production.
Patent document CN106951899A (application number: CN201710192706.6) discloses an anomaly detection method based on image recognition, comprising: normalizing the picture containing the detected target to obtain a grayscale image; using a trained target recognition model to matte the detected target image out of the grayscale image; performing binary classification on the detected target image with a trained binary classification model and determining its credibility score; and if the credibility score of the detected target image is not higher than a preset anomaly threshold, judging the detected target image to be an anomalous target. Converting the picture containing the detected target into a grayscale image effectively reduces the feature dimensionality of the picture without reducing its feature information; matting the detected target image out of the grayscale image effectively reduces the interference from non-target image information. However, that invention cannot be applied to the anomaly detection task of a new object class given only a small sample of new-class data.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a few-sample image anomaly detection method and system based on image registration.
The invention provides a few-sample image anomaly detection method based on image registration, which comprises the following steps:
step S1: extracting high-dimensional characteristics of the images from the support image and the image to be detected;
step S2: carrying out feature spatial transformation on the high-dimensional features of the image to obtain transformed image features;
step S3: feature coding is carried out on the transformed image features;
step S4: implementing feature registration on the coding features;
step S5: fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model;
step S6: performing image anomaly evaluation using the transformed image features and the feature distribution model.
Preferably, in the step S1:
taking the support image and the image to be tested as input, high-dimensional feature information of the images is extracted using a deep convolutional neural network; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C_1, C_2 and C_3, which respectively yield three multi-scale high-dimensional features, denoted f^1, f^2 and f^3.
in the step S2:
for the high-dimensional image features f^1, f^2 and f^3 obtained by image feature extraction, spatial transformation of the features is performed using a spatial transformation neural network; with the high-dimensional feature f^i as input, i = 1, 2, 3, the spatial transformation neural network performs the spatial coordinate transformation using the following formula:

(x_k^s, y_k^s)^T = A_i (x_k^t, y_k^t, 1)^T,  A_i = [θ_11 θ_12 θ_13; θ_21 θ_22 θ_23]

where (x_k^s, y_k^s) are the original coordinates of the input feature before transformation, (x_k^t, y_k^t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformation neural network, A_i is the coordinate transformation matrix, and the parameters θ of the affine coordinate transformation matrix are continuously corrected by the spatial transformation neural network, implemented as a convolutional neural network, according to the error back-propagation algorithm; this yields the transformed image features f_a and f_b, where f_a denotes the features of the image to be tested and f_b denotes the features of the support image.
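The coordinate mapping above can be sketched in a few lines of NumPy (an illustrative sketch only, not the patent's implementation; the θ values and the coordinate grid are made-up examples):

```python
import numpy as np

def transform_coords(A, coords_t):
    """Map target coordinates (x_t, y_t) to source coordinates (x_s, y_s)
    through the homogeneous affine transform (x_s, y_s)^T = A (x_t, y_t, 1)^T."""
    ones = np.ones((coords_t.shape[0], 1))
    homog = np.hstack([coords_t, ones])  # (K, 3) homogeneous target coords
    return homog @ A.T                   # (K, 2) source coords

# Example 2x3 affine parameters theta: a slight rotation plus a small translation
theta = np.array([[0.98, -0.17, 0.05],
                  [0.17,  0.98, -0.03]])

# Four normalized target grid corners
coords_t = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0]])
coords_s = transform_coords(theta, coords_t)
```

In a full spatial transformer, the sampled source coordinates would then drive bilinear interpolation of the input feature map, and θ itself would be regressed by a small convolutional network trained by back-propagation.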
Preferably, in the step S3:
for the transformed image features f_a and f_b obtained by feature space transformation, feature coding is realized using a deep convolutional neural network; for the features f_a of the image to be tested, the coding features are obtained using an encoder and a predictor:

z_a = E(f_a)
p_a = P(z_a)

where E is an encoder consisting of three layers of convolution operations, P is a predictor consisting of two layers of convolution operations, z_a is the high-dimensional feature of the image to be tested after encoding by the encoder, and p_a is the encoded high-dimensional feature of the image to be tested obtained from the predictor; for the features f_b of the support image, the coding features are obtained using the same encoder and predictor:

z_b = E(f_b)
p_b = P(z_b)

where the encoder E and the predictor P share weights with the encoder and predictor acting on the features of the image to be tested, z_b is the high-dimensional feature of the support image after encoding by the encoder, and p_b is the encoded high-dimensional feature of the support image obtained from the predictor;
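A minimal sketch of the shared-weight encoder/predictor data flow (the patent's E is three convolution layers and P two; here both are stand-in linear maps applied per feature location, purely to illustrate the weight sharing, with made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # channel dimension (example value)

# Stand-in "encoder" and "predictor" weights, shared between both branches
W_E = rng.standard_normal((C, C))
W_P = rng.standard_normal((C, C))

def E(f):  # encoder: the same weights act on both images
    return f @ W_E.T

def P(z):  # predictor
    return z @ W_P.T

f_a = rng.standard_normal((16, C))  # features of the image to be tested (16 locations)
f_b = rng.standard_normal((16, C))  # features of the support image

z_a, z_b = E(f_a), E(f_b)   # encoder outputs
p_a, p_b = P(z_a), P(z_b)   # predictor outputs
```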
in the step S4:
for the obtained coding features p_a and z_b, feature registration is realized using the image feature registration loss function:

D(p_a, z_b) = − (p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 regularization (normalization) operation;

the symmetric image feature registration loss function L is defined as:

L = (1/2) D(p_a, z_b) + (1/2) D(p_b, z_a)
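Written out, the loss D is a negative cosine similarity between L2-normalized features; a sketch under that reading (vectors stand in for flattened coding features):

```python
import numpy as np

def D(p, z):
    """Negative cosine similarity between L2-normalized features:
    D(p, z) = -(p / ||p||_2) . (z / ||z||_2)."""
    p_n = p / np.linalg.norm(p, ord=2)
    z_n = z / np.linalg.norm(z, ord=2)
    return -np.dot(p_n, z_n)

def registration_loss(p_a, z_a, p_b, z_b):
    """Symmetric registration loss L = 1/2 D(p_a, z_b) + 1/2 D(p_b, z_a)."""
    return 0.5 * D(p_a, z_b) + 0.5 * D(p_b, z_a)

# Perfectly registered features give the minimum loss of -1
p = np.array([1.0, 0.0])
z = np.array([2.0, 0.0])
print(D(p, z))  # -1.0
```

Minimizing L pushes the coding features of the test and support branches toward alignment, which is the registration objective.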
preferably, in the step S5:
according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
for the transformed features f_a and f_b obtained by feature space transformation, the normal distribution of the features is estimated using a statistics-based estimator, and a probabilistic representation of the corresponding features of normal images is obtained using a multivariate Gaussian distribution; assume the image is divided into a grid of positions (i, j) ∈ [1, W] × [1, H], where W × H is the resolution of the features used to estimate the normal distribution; at each grid position (i, j), let F_ij = {f_ij^k, k = 1, …, N} denote the set of transformed features from the N support images; F_ij is assumed to be generated by the multivariate Gaussian distribution N(μ_ij, Σ_ij), whose sample mean μ_ij and sample covariance Σ_ij are:

μ_ij = (1/N) · Σ_k f_ij^k, k = 1, …, N
Σ_ij = (1/(N−1)) · Σ_k (f_ij^k − μ_ij)(f_ij^k − μ_ij)^T + εI

where (·)^T is the matrix transposition operation and the regularization term εI makes the sample covariance matrix full-rank and invertible; the multivariate Gaussian distributions at all positions together constitute the feature distribution model.
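The per-location Gaussian fitting can be sketched directly from the formulas (N, W, H, the channel count C and ε are made-up example values):

```python
import numpy as np

rng = np.random.default_rng(0)
N, W, H, C = 8, 4, 4, 3   # support images, feature grid, channels (example values)
eps = 0.01                 # regularization constant epsilon

feats = rng.standard_normal((N, W, H, C))  # transformed support features f_ij^k

mu = feats.mean(axis=0)                    # sample mean mu_ij, shape (W, H, C)
sigma = np.empty((W, H, C, C))
for i in range(W):
    for j in range(H):
        d = feats[:, i, j, :] - mu[i, j]   # (N, C) deviations from the mean
        # unbiased covariance plus eps*I, keeping Sigma_ij full-rank and invertible
        sigma[i, j] = d.T @ d / (N - 1) + eps * np.eye(C)
```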
Preferably, in the step S6:
according to the transformed image features obtained by feature space transformation and a feature distribution model obtained by feature distribution estimation, using an anomaly evaluation function to realize image anomaly evaluation;
for the image to be tested, the features obtained by feature space transformation are compared with the feature distribution model obtained by feature distribution estimation, and the following anomaly evaluation function is calculated:

M(f_ij) = sqrt( (f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij) )

where Σ_ij^{−1} is the inverse of the sample covariance Σ_ij; the Mahalanobis distances M = (M(f_ij))_{1≤i≤W, 1≤j≤H} form an anomaly score matrix;

positions in the matrix whose values are larger than a preset value indicate anomalous regions, and the anomaly score of the whole image is the maximum value of the anomaly matrix.
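Given the fitted μ_ij and Σ_ij, anomaly evaluation reduces to a per-location Mahalanobis distance followed by a maximum over the score matrix; a toy sketch with an identity covariance:

```python
import numpy as np

def anomaly_map(test_feats, mu, sigma_inv):
    """M(f_ij) = sqrt((f_ij - mu_ij)^T Sigma_ij^{-1} (f_ij - mu_ij)) per grid cell."""
    W, H, C = test_feats.shape
    M = np.empty((W, H))
    for i in range(W):
        for j in range(H):
            d = test_feats[i, j] - mu[i, j]
            M[i, j] = np.sqrt(d @ sigma_inv[i, j] @ d)
    return M

W, H, C = 2, 2, 3
mu = np.zeros((W, H, C))
sigma_inv = np.tile(np.eye(C), (W, H, 1, 1))   # identity covariance for the toy case
test_feats = np.zeros((W, H, C))
test_feats[1, 1] = [3.0, 4.0, 0.0]              # inject one anomalous location

M = anomaly_map(test_feats, mu, sigma_inv)
image_score = M.max()                           # whole-image anomaly score
print(image_score)  # 5.0: sqrt(3^2 + 4^2) under identity covariance
```

Thresholding M localizes anomalous regions, while the maximum gives the image-level score described above.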
The invention provides a few-sample image anomaly detection system based on image registration, which comprises:
module M1: extracting high-dimensional characteristics of the images from the support image and the image to be detected;
module M2: carrying out feature spatial transformation on the high-dimensional features of the image to obtain transformed image features;
module M3: feature coding is carried out on the transformed image features;
module M4: implementing feature registration on the coding features;
module M5: fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model;
module M6: performing image anomaly evaluation using the transformed image features and the feature distribution model.
Preferably, in said module M1:
taking the support image and the image to be tested as input, high-dimensional feature information of the images is extracted using a deep convolutional neural network; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C_1, C_2 and C_3, which respectively yield three multi-scale high-dimensional features, denoted f^1, f^2 and f^3.
in the module M2:
for the high-dimensional image features f^1, f^2 and f^3 obtained by image feature extraction, spatial transformation of the features is performed using a spatial transformation neural network; with the high-dimensional feature f^i as input, i = 1, 2, 3, the spatial transformation neural network performs the spatial coordinate transformation using the following formula:

(x_k^s, y_k^s)^T = A_i (x_k^t, y_k^t, 1)^T,  A_i = [θ_11 θ_12 θ_13; θ_21 θ_22 θ_23]

where (x_k^s, y_k^s) are the original coordinates of the input feature before transformation, (x_k^t, y_k^t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformation neural network, A_i is the coordinate transformation matrix, and the parameters θ of the affine coordinate transformation matrix are continuously corrected by the spatial transformation neural network, implemented as a convolutional neural network, according to the error back-propagation algorithm; this yields the transformed image features f_a and f_b, where f_a denotes the features of the image to be tested and f_b denotes the features of the support image.
Preferably, in said module M3:
for the transformed image features f_a and f_b obtained by feature space transformation, feature coding is realized using a deep convolutional neural network; for the features f_a of the image to be tested, the coding features are obtained using an encoder and a predictor:

z_a = E(f_a)
p_a = P(z_a)

where E is an encoder consisting of three layers of convolution operations, P is a predictor consisting of two layers of convolution operations, z_a is the high-dimensional feature of the image to be tested after encoding by the encoder, and p_a is the encoded high-dimensional feature of the image to be tested obtained from the predictor; for the features f_b of the support image, the coding features are obtained using the same encoder and predictor:

z_b = E(f_b)
p_b = P(z_b)

where the encoder E and the predictor P share weights with the encoder and predictor acting on the features of the image to be tested, z_b is the high-dimensional feature of the support image after encoding by the encoder, and p_b is the encoded high-dimensional feature of the support image obtained from the predictor;
in the module M4:
for the obtained coding features p_a and z_b, feature registration is realized using the image feature registration loss function:

D(p_a, z_b) = − (p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 regularization (normalization) operation;

the symmetric image feature registration loss function L is defined as:

L = (1/2) D(p_a, z_b) + (1/2) D(p_b, z_a)
preferably, in said module M5:
according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
for the transformed features f_a and f_b obtained by feature space transformation, the normal distribution of the features is estimated using a statistics-based estimator, and a probabilistic representation of the corresponding features of normal images is obtained using a multivariate Gaussian distribution; assume the image is divided into a grid of positions (i, j) ∈ [1, W] × [1, H], where W × H is the resolution of the features used to estimate the normal distribution; at each grid position (i, j), let F_ij = {f_ij^k, k = 1, …, N} denote the set of transformed features from the N support images; F_ij is assumed to be generated by the multivariate Gaussian distribution N(μ_ij, Σ_ij), whose sample mean μ_ij and sample covariance Σ_ij are:

μ_ij = (1/N) · Σ_k f_ij^k, k = 1, …, N
Σ_ij = (1/(N−1)) · Σ_k (f_ij^k − μ_ij)(f_ij^k − μ_ij)^T + εI

where (·)^T is the matrix transposition operation and the regularization term εI makes the sample covariance matrix full-rank and invertible; the multivariate Gaussian distributions at all positions together constitute the feature distribution model.
Preferably, in said module M6:
according to the transformed image features obtained by feature space transformation and a feature distribution model obtained by feature distribution estimation, using an anomaly evaluation function to realize image anomaly evaluation;
for the image to be tested, the features obtained by feature space transformation are compared with the feature distribution model obtained by feature distribution estimation, and the following anomaly evaluation function is calculated:

M(f_ij) = sqrt( (f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij) )

where Σ_ij^{−1} is the inverse of the sample covariance Σ_ij; the Mahalanobis distances M = (M(f_ij))_{1≤i≤W, 1≤j≤H} form an anomaly score matrix;

positions in the matrix whose values are larger than a preset value indicate anomalous regions, and the anomaly score of the whole image is the maximum value of the anomaly matrix.
Compared with the prior art, the invention has the following beneficial effects:
1. Aiming at the problems of existing anomaly detection methods, the invention provides a few-sample anomaly detection method based on image registration; by training a generalizable model on data of known object classes, the method can be applied to the anomaly detection task of new object classes using only a few samples of new-class data, without retraining the model for each new class;
2. By drawing on how humans actually detect anomalies and introducing training based on image registration, the invention improves the generalization and detection capability of the anomaly detection algorithm and greatly reduces the data collection and high-performance computing overhead required for training anomaly detection models, thereby achieving better performance on anomaly detection tasks.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications that would be obvious to those skilled in the art can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example 1:
the invention provides a few-sample image anomaly detection method based on image registration, comprising the following steps. An image feature extraction step: extracting high-dimensional features of the support image and the image to be tested using a deep convolutional neural network. A feature space transformation step: performing spatial transformation on the high-dimensional image features obtained in the image feature extraction step using a spatial transformation neural network, obtaining transformed image features. A feature coding step: performing feature coding on the transformed image features obtained in the feature space transformation step using a deep convolutional neural network. A feature registration step: performing feature registration on the coding features obtained in the feature coding step using an image feature registration loss function. A feature distribution estimation step: fitting the feature distribution of the support image with a distribution estimation model, based on the transformed image features obtained in the feature space transformation step, to obtain a feature distribution model. An anomaly evaluation step: performing image anomaly evaluation with an anomaly evaluation function, based on the transformed image features obtained in the feature space transformation step and the feature distribution model obtained in the feature distribution estimation step. By drawing on how humans actually detect anomalies and introducing training based on image registration, the invention improves the generalization and detection capability of the anomaly detection algorithm and greatly reduces the data collection and high-performance computing overhead required for training anomaly detection models, thereby achieving better performance on anomaly detection tasks.
As shown in FIGS. 1-2, the image-registration-based few-sample image anomaly detection method includes:
step S1: extracting high-dimensional characteristics of the images from the support image and the image to be detected;
specifically, in the step S1:
taking the support image and the image to be tested as input, high-dimensional feature information of the images is extracted using a deep convolutional neural network; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C_1, C_2 and C_3, which respectively yield three multi-scale high-dimensional features, denoted f^1, f^2 and f^3.
step S2: carrying out feature spatial transformation on the high-dimensional features of the image to obtain transformed image features;
in the step S2:
for the high-dimensional image features f^1, f^2 and f^3 obtained by image feature extraction, spatial transformation of the features is performed using a spatial transformation neural network; with the high-dimensional feature f^i as input, i = 1, 2, 3, the spatial transformation neural network performs the spatial coordinate transformation using the following formula:

(x_k^s, y_k^s)^T = A_i (x_k^t, y_k^t, 1)^T,  A_i = [θ_11 θ_12 θ_13; θ_21 θ_22 θ_23]

where (x_k^s, y_k^s) are the original coordinates of the input feature before transformation, (x_k^t, y_k^t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformation neural network, A_i is the coordinate transformation matrix, and the parameters θ of the affine coordinate transformation matrix are continuously corrected by the spatial transformation neural network, implemented as a convolutional neural network, according to the error back-propagation algorithm; this yields the transformed image features f_a and f_b, where f_a denotes the features of the image to be tested and f_b denotes the features of the support image.
Step S3: feature coding is carried out on the transformed image features;
specifically, in the step S3:
for the transformed image features f_a and f_b obtained by feature space transformation, feature coding is realized using a deep convolutional neural network; for the features f_a of the image to be tested, the coding features are obtained using an encoder and a predictor:

z_a = E(f_a)
p_a = P(z_a)

where E is an encoder consisting of three layers of convolution operations, P is a predictor consisting of two layers of convolution operations, z_a is the high-dimensional feature of the image to be tested after encoding by the encoder, and p_a is the encoded high-dimensional feature of the image to be tested obtained from the predictor; for the features f_b of the support image, the coding features are obtained using the same encoder and predictor:

z_b = E(f_b)
p_b = P(z_b)

where the encoder E and the predictor P share weights with the encoder and predictor acting on the features of the image to be tested, z_b is the high-dimensional feature of the support image after encoding by the encoder, and p_b is the encoded high-dimensional feature of the support image obtained from the predictor;
step S4: implementing feature registration on the coding features;
in the step S4:
for the obtained coding features p_a and z_b, feature registration is realized using the image feature registration loss function:

D(p_a, z_b) = − (p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 regularization (normalization) operation;

the symmetric image feature registration loss function L is defined as:

L = (1/2) D(p_a, z_b) + (1/2) D(p_b, z_a)
step S5: fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model;
specifically, in the step S5:
according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
for the transformed features f_a and f_b obtained by feature space transformation, the normal distribution of the features is estimated using a statistics-based estimator, and a probabilistic representation of the corresponding features of normal images is obtained using a multivariate Gaussian distribution; assume the image is divided into a grid of positions (i, j) ∈ [1, W] × [1, H], where W × H is the resolution of the features used to estimate the normal distribution; at each grid position (i, j), let F_ij = {f_ij^k, k = 1, …, N} denote the set of transformed features from the N support images; F_ij is assumed to be generated by the multivariate Gaussian distribution N(μ_ij, Σ_ij), whose sample mean μ_ij and sample covariance Σ_ij are:

μ_ij = (1/N) · Σ_k f_ij^k, k = 1, …, N
Σ_ij = (1/(N−1)) · Σ_k (f_ij^k − μ_ij)(f_ij^k − μ_ij)^T + εI

where (·)^T is the matrix transposition operation and the regularization term εI makes the sample covariance matrix full-rank and invertible; the multivariate Gaussian distributions at all positions together constitute the feature distribution model.
Step S6: performing image anomaly evaluation using the transformed image features and the feature distribution model.
Specifically, in the step S6:
according to the transformed image features obtained by feature space transformation and a feature distribution model obtained by feature distribution estimation, using an anomaly evaluation function to realize image anomaly evaluation;
for the image to be tested, the features obtained by feature space transformation are compared with the feature distribution model obtained by feature distribution estimation, and the following anomaly evaluation function is calculated:

M(f_ij) = sqrt( (f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij) )

where Σ_ij^{−1} is the inverse of the sample covariance Σ_ij; the Mahalanobis distances M = (M(f_ij))_{1≤i≤W, 1≤j≤H} form an anomaly score matrix;

positions in the matrix whose values are larger than a preset value indicate anomalous regions, and the anomaly score of the whole image is the maximum value of the anomaly matrix.
Example 2:
example 2 is a preferred example of example 1, and the present invention will be described in more detail.
Those skilled in the art will understand that the image-registration-based few-sample image anomaly detection system provided by the invention can be implemented by executing the step flow of the image-registration-based few-sample image anomaly detection method; that is, the method can be understood as a preferred embodiment of the system.
The invention provides a few-sample image anomaly detection system based on image registration, which comprises:
module M1: extracting high-dimensional characteristics of the images from the support image and the image to be detected;
specifically, in the module M1:
Taking the support image and the image to be tested as input, a deep convolutional neural network is used to extract high-dimensional feature information of the image; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C 1 , C 2 and C 3 , which produce three multi-scale high-dimensional features, denoted f_1, f_2 and f_3, respectively;
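As a rough illustration of the cascaded multi-scale extraction (not the patent's actual residual CNN modules), the following NumPy sketch uses toy per-pixel linear maps with skip connections in place of C 1 , C 2 and C 3 ; all shapes, weights, and function names are hypothetical stand-ins:

```python
import numpy as np

def avg_pool2(x):
    """Halve the spatial resolution of a (C, H, W) map by 2x2 average pooling."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def residual_stage(x, W):
    """Toy residual stage: per-pixel linear map with a skip connection,
    then ReLU and 2x downsampling (stands in for one residual CNN module)."""
    C, H, Wd = x.shape
    y = (W @ x.reshape(C, -1)).reshape(C, H, Wd)
    return avg_pool2(np.maximum(0.0, x + y))

rng = np.random.default_rng(0)
img_feat = rng.standard_normal((16, 32, 32))          # toy input feature map
W1, W2, W3 = (rng.standard_normal((16, 16)) for _ in range(3))

f1 = residual_stage(img_feat, W1)  # multi-scale feature from stage 1: (16, 16, 16)
f2 = residual_stage(f1, W2)        # from stage 2: (16, 8, 8)
f3 = residual_stage(f2, W3)        # from stage 3: (16, 4, 4)
```

The three outputs mimic the shrinking spatial resolutions of the multi-scale features f_1, f_2, f_3.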
module M2: carrying out feature spatial transformation on the high-dimensional features of the image to obtain transformed image features;
in the module M2:
The high-dimensional image features f_1, f_2 and f_3 obtained by the image feature extraction are spatially transformed using spatial transformer networks. With the high-dimensional feature f_i as input, i = 1, 2, 3, the i-th spatial transformer network performs the following spatial coordinate transformation:

(x_s, y_s)^T = A_i (x_t, y_t, 1)^T

where (x_s, y_s) are the original coordinates of the input feature before transformation, (x_t, y_t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformer network, A_i is the 2 × 3 coordinate transformation matrix, and θ_ij are the parameters of the coordinate affine transformation matrix. The spatial transformer network continuously corrects the parameters of the affine transformation matrix through the convolutional neural network according to the error back-propagation algorithm, yielding the transformed image features f_a and f_b, where f_a are the features of the image to be tested and f_b are the features of the support image.
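The coordinate transformation above is a multiplication of homogeneous target coordinates by the 2 × 3 affine matrix A_i. A minimal NumPy sketch (the function name and the test points are illustrative, not from the patent):

```python
import numpy as np

def affine_map(coords_t, A):
    """Map (N, 2) target coordinates (x_t, y_t) to source coordinates (x_s, y_s)
    via the 2x3 affine matrix A = [[th11, th12, th13], [th21, th22, th23]],
    i.e. (x_s, y_s)^T = A (x_t, y_t, 1)^T, as in a spatial transformer sampling grid."""
    homo = np.hstack([coords_t, np.ones((len(coords_t), 1))])  # homogeneous coords
    return homo @ A.T

# Identity parameters leave the grid unchanged.
A_id = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
# Pure translation by (0.5, -0.25).
A_shift = np.array([[1.0, 0.0, 0.5],
                    [0.0, 1.0, -0.25]])

pts = np.array([[0.0, 0.0], [1.0, -1.0]])
same = affine_map(pts, A_id)        # equals pts
shifted = affine_map(pts, A_shift)  # pts translated by (0.5, -0.25)
```

In a trained network the six θ parameters would be regressed by the localization branch rather than fixed by hand.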
Module M3: feature coding is carried out on the transformed image features;
specifically, in the module M3:
Feature coding is applied to the transformed image features f_a and f_b obtained by the feature space transformation, using a deep convolutional neural network. For the features f_a of the image to be tested, the coding features are obtained using the encoder and the predictor:

z_a = E(f_a)

p_a = P(z_a)

where E is the encoder consisting of three convolution layers, P is the predictor consisting of two convolution layers, z_a is the high-dimensional feature of the image to be tested after encoding by the encoder, and p_a is the coded high-dimensional feature of the image to be tested obtained by the predictor. For the features f_b of the support image, the coding features are obtained using the encoder and predictor:

z_b = E(f_b)

p_b = P(z_b)

where the encoder E and the predictor P share weights with the encoder and predictor acting on the features of the image to be tested, z_b is the high-dimensional feature of the support image after encoding by the encoder, and p_b is the coded high-dimensional feature of the support image obtained by the predictor;
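A toy sketch of the shared-weight encoder/predictor branches; single 1 × 1 layers stand in for the three-layer encoder E and two-layer predictor P, and all shapes and weights are assumptions for illustration:

```python
import numpy as np

def conv1x1(x, W):
    """A 1x1 convolution over a (C, H, W) feature map is a per-pixel linear map."""
    C, H, Wd = x.shape
    return (W @ x.reshape(C, -1)).reshape(W.shape[0], H, Wd)

rng = np.random.default_rng(0)
W_E = rng.standard_normal((32, 64))  # encoder weights, shared by both branches
W_P = rng.standard_normal((32, 32))  # predictor weights, shared by both branches

f_a = rng.standard_normal((64, 8, 8))  # transformed feature of the image under test
f_b = rng.standard_normal((64, 8, 8))  # transformed feature of a support image

z_a, z_b = conv1x1(f_a, W_E), conv1x1(f_b, W_E)  # z = E(f)
p_a, p_b = conv1x1(z_a, W_P), conv1x1(z_b, W_P)  # p = P(z)
```

Weight sharing is expressed simply by reusing W_E and W_P on both inputs.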
module M4: implementing feature registration on the coding features;
in the module M4:
For the obtained coding features p_a and z_b, feature registration is achieved using the image feature registration loss function:

L(p_a, z_b) = −(p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 normalization operation; the symmetric image feature registration loss function L is defined as:

L = (1/2) L(p_a, z_b) + (1/2) L(p_b, z_a)
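The registration loss can be read as a negative cosine similarity between L2-normalized features, applied symmetrically across the two branches. A sketch on flattened feature vectors (flattening to vectors is an assumption for illustration; gradient stopping and batching are omitted):

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity -(p/||p||_2) . (z/||z||_2) of two vectors."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def registration_loss(p_a, z_b, p_b, z_a):
    """Symmetric loss L = 1/2 * L(p_a, z_b) + 1/2 * L(p_b, z_a)."""
    return 0.5 * neg_cosine(p_a, z_b) + 0.5 * neg_cosine(p_b, z_a)

v = np.array([1.0, 2.0, 3.0])
perfect = registration_loss(v, v, v, v)  # identical features give the minimum, -1.0
```

Minimizing this loss pulls the registered test and support features toward alignment.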
module M5: fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model;
specifically, in the module M5:
according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
For the transformed features f_a and f_b obtained by the feature space transformation, a normal distribution of the features is estimated using a statistics-based estimator, and a probabilistic representation of the corresponding features of normal images is obtained using a multivariate Gaussian distribution. Suppose the image is divided into a grid of positions (i, j) ∈ [1, W] × [1, H], where W × H is the resolution of the features used to estimate the normal distribution. At each grid position (i, j), let F_ij denote the set of transformed features f_ij^k, k = 1, ..., N, from the N support images; F_ij is assumed to be generated by a multivariate Gaussian distribution N(μ_ij, Σ_ij), whose sample mean μ_ij and sample covariance Σ_ij are:

μ_ij = (1/N) · sum_{k=1..N} f_ij^k

Σ_ij = (1/(N−1)) · sum_{k=1..N} (f_ij^k − μ_ij)(f_ij^k − μ_ij)^T + εI

where (·)^T denotes matrix transposition, and the regularization term εI makes the sample covariance matrix full-rank and invertible; the multivariate Gaussian distributions at all grid positions together constitute the feature distribution model.
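At one grid position, the sample mean and regularized sample covariance over the N support features can be computed as follows; ε = 0.01 is an illustrative choice, the patent only requires that εI make Σ_ij full-rank and invertible:

```python
import numpy as np

def fit_gaussian(F, eps=0.01):
    """Fit N(mu, Sigma) at one grid position.
    F: (N, C) array of transformed features from N support images.
    Returns the sample mean (C,) and the regularized sample covariance (C, C)."""
    N, C = F.shape
    mu = F.mean(axis=0)
    diff = F - mu
    sigma = diff.T @ diff / (N - 1) + eps * np.eye(C)  # + eps*I: full-rank, invertible
    return mu, sigma

rng = np.random.default_rng(0)
F = rng.standard_normal((8, 4))  # N = 8 support features, C = 4 channels
mu, sigma = fit_gaussian(F)
```

Repeating this at every (i, j) yields the W × H grid of Gaussians that forms the feature distribution model.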
Module M6: performing image anomaly evaluation based on the transformed image features and the feature distribution model.
Specifically, in the module M6:
according to the transformed image features obtained by feature space transformation and a feature distribution model obtained by feature distribution estimation, using an anomaly evaluation function to realize image anomaly evaluation;
For the image to be tested, the features of the image to be tested obtained by the feature space transformation are compared with the feature distribution model obtained by the feature distribution estimation, and the following anomaly evaluation function is computed:

M(f_ij) = sqrt((f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij))

where Σ_ij^{−1} is the inverse of the sample covariance Σ_ij, M(f_ij) is the Mahalanobis distance at grid position (i, j), and the Mahalanobis distance matrix M = (M(f_ij))_{1≤i≤W, 1≤j≤H} constitutes the anomaly score matrix; positions in the matrix whose values exceed a preset threshold indicate anomalous regions, and the anomaly score of the whole image is the maximum value of the anomaly score matrix.
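Anomaly scoring then reduces to a per-position Mahalanobis distance, with the image-level score taken as the maximum over the score matrix. A sketch assuming one fitted (μ_ij, Σ_ij) per grid position (array layouts are illustrative):

```python
import numpy as np

def mahalanobis(f, mu, sigma_inv):
    """Mahalanobis distance sqrt((f - mu)^T Sigma^{-1} (f - mu)) at one position."""
    d = f - mu
    return float(np.sqrt(d @ sigma_inv @ d))

def anomaly_scores(feats, mus, sigma_invs):
    """feats, mus: (W, H, C); sigma_invs: (W, H, C, C).
    Returns the (W, H) anomaly score matrix and the image-level score (its max)."""
    W, H, _ = feats.shape
    M = np.empty((W, H))
    for i in range(W):
        for j in range(H):
            M[i, j] = mahalanobis(feats[i, j], mus[i, j], sigma_invs[i, j])
    return M, M.max()

# A feature equal to the mean scores 0; deviation along a unit-variance axis scores
# its magnitude.
mu = np.zeros(3)
sigma_inv = np.eye(3)
score_at_mean = mahalanobis(mu, mu, sigma_inv)
score_off = mahalanobis(np.array([3.0, 0.0, 0.0]), mu, sigma_inv)
```

Thresholding the matrix localizes anomalous regions; the maximum gives the whole-image anomaly score.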
Example 3:
example 3 is a preferred example of example 1, and the present invention will be described in more detail.
FIG. 1 is a flowchart of an embodiment of a method for detecting an anomaly in a sample-less image based on image registration according to the present invention, in which a deep convolutional neural network is used to extract high-dimensional features of an image for a support image and an image to be detected; performing spatial transformation on the high-dimensional features of the image obtained in the image feature extraction step by using a spatial transformation neural network to obtain transformed image features; feature coding is realized on the transformed image features obtained in the feature space transformation step by using a deep convolutional neural network; for the coding features obtained in the feature coding step, using an image feature registration loss function to realize feature registration; according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model; and according to the transformed image characteristics obtained in the characteristic space transformation step and the characteristic distribution model obtained in the characteristic distribution estimation step, using an abnormality evaluation function to realize image abnormality evaluation.
The present invention provides a few-sample anomaly detection method based on image registration, addressing the shortcomings of existing anomaly detection methods. The method trains a generalizable model on data from known object categories and requires no retraining for a new category: only a few samples of the new category are needed, so the method can be applied directly to anomaly detection on new object categories. By taking the actual behavior of human anomaly inspection as a reference and introducing image-registration-based training, the invention improves the generalization and detection capability of the anomaly detection algorithm and greatly reduces the data collection and high-performance computation overhead required to train anomaly detection models, thereby achieving better performance on anomaly detection tasks.
Specifically, with reference to fig. 1, the method comprises the steps of:
an image feature extraction step: extracting high-dimensional features of the images from the support image and the image to be detected by using a deep convolutional neural network;
and (3) feature space transformation: performing spatial transformation on the high-dimensional features of the image obtained in the image feature extraction step by using a spatial transformation neural network to obtain transformed image features;
and (3) feature coding: feature coding is realized on the transformed image features obtained in the feature space transformation step by using a deep convolutional neural network;
a characteristic registration step: for the coding features obtained in the feature coding step, using an image feature registration loss function to realize feature registration;
a characteristic distribution estimation step: according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
an abnormality evaluation step: and according to the transformed image characteristics obtained in the characteristic space transformation step and the characteristic distribution model obtained in the characteristic distribution estimation step, using an abnormality evaluation function to realize image abnormality evaluation.
Corresponding to the above method, the present invention further provides an embodiment of a system for detecting an abnormality of a few-sample image based on image registration, including:
an image feature extraction module: extracting high-dimensional features of the images from the support image and the image to be detected by using a deep convolutional neural network;
a feature space transformation module: performing spatial transformation on the high-dimensional features of the image obtained by the image feature extraction module by using a spatial transformation neural network to obtain transformed image features;
a feature encoding module: feature coding is realized on the transformed image features obtained by the feature space transformation module by using a deep convolutional neural network;
a feature registration module: the coding features obtained by the feature coding module are subjected to feature registration by using an image feature registration loss function;
a feature distribution estimation module: according to the transformed image features obtained by the feature space transformation module, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
an anomaly assessment module: and according to the transformed image characteristics obtained by the characteristic space transformation module and the characteristic distribution model obtained by the characteristic distribution estimation module, using an anomaly evaluation function to realize image anomaly evaluation.
Technical features realized by each module of the image registration-based small-sample image anomaly detection system can be the same as technical features realized by corresponding steps in the image registration-based small-sample image anomaly detection method.
Specific implementations of the above steps and modules are described in detail below to facilitate understanding of the technical solutions of the present invention.
In some embodiments of the present invention, the image feature extraction step includes: extracting high-dimensional features of the images from the support image and the image to be detected by using a deep convolutional neural network;
in some embodiments of the present invention, the feature space transforming step, wherein: performing spatial transformation on the high-dimensional features of the image obtained in the image feature extraction step by using a spatial transformation neural network to obtain transformed image features;
in some embodiments of the present invention, the feature encoding step, wherein: feature coding is realized on the transformed image features obtained in the feature space transformation step by using a deep convolutional neural network;
in some embodiments of the invention, the feature registration step, wherein: for the coding features obtained in the feature coding step, using an image feature registration loss function to realize feature registration;
in some embodiments of the present invention, the feature distribution estimating step includes: according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
in some embodiments of the invention, the anomaly assessment step, wherein: and according to the transformed image characteristics obtained in the characteristic space transformation step and the characteristic distribution model obtained in the characteristic distribution estimation step, using an abnormality evaluation function to realize image abnormality evaluation.
Specifically, a network framework of a training system composed of an image feature extraction module, a feature space transformation module, a feature coding module, a feature registration module, a feature distribution estimation module and an anomaly evaluation module is shown in fig. 2, and the whole system framework can be trained end to end.
In the system framework of the embodiment shown in fig. 2, a support image (i.e., a small number of normal images) and an image to be tested are taken as input, and a deep convolutional neural network is used to extract high-dimensional feature information of the image; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C 1 , C 2 and C 3 , which produce three multi-scale high-dimensional features, denoted f_1, f_2 and f_3, respectively.
In the system framework of the embodiment shown in FIG. 2, the high-dimensional image features f_1, f_2 and f_3 obtained in the image feature extraction step are spatially transformed using spatial transformer networks. With the high-dimensional feature f_i as input, the i-th spatial transformer network performs the following spatial coordinate transformation:

(x_s, y_s)^T = A_i (x_t, y_t, 1)^T

where (x_s, y_s) are the original coordinates of the input feature before transformation, (x_t, y_t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformer network, A_i is the coordinate transformation matrix, and θ_ij are the specific parameters of the coordinate affine transformation matrix. The spatial transformer network continuously corrects the parameters of the affine transformation matrix through the convolutional neural network according to the error back-propagation algorithm, finally obtaining the transformed image features f_a (features of the image to be tested) and f_b (features of the support image).
In the system framework of the embodiment shown in fig. 2, feature coding is applied to the transformed image features f_a (features of the image to be tested) and f_b (features of the support image) obtained in the feature space transformation step, using a deep convolutional neural network. For the features f_a of the image to be tested, the coding features are obtained using the encoder and the predictor:

z_a = E(f_a)

p_a = P(z_a)

where E is the encoder consisting of three convolution layers, P is the predictor consisting of two convolution layers, and p_a is the final encoded high-dimensional feature of the image to be tested. For the features f_b of the support image, the encoding features are obtained using only the encoder:

z_b = E(f_b)

where the encoder E shares weights with the encoder acting on the features of the image to be tested, and z_b is the final encoded high-dimensional feature of the support image.
In the system framework of the embodiment shown in fig. 2, for the coding features p_a and z_b obtained in the feature coding step, feature registration is achieved using the image feature registration loss function

L(p_a, z_b) = −(p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 normalization operation. Finally, the symmetric image feature registration loss function is defined as

L = (1/2) L(p_a, z_b) + (1/2) L(p_b, z_a)
In the system framework of the embodiment shown in fig. 2, for the transformed features f_a and f_b obtained in the feature space transformation step, a normal distribution of the features is estimated using a statistics-based estimator, and a probabilistic representation of the corresponding features of normal images is obtained using a multivariate Gaussian distribution. Suppose the image is divided into a grid of positions (i, j) ∈ [1, W] × [1, H], where W × H is the resolution of the features used to estimate the normal distribution. At each grid position (i, j), let F_ij denote the set of transformed features f_ij^k, k = 1, ..., N, from the N support images; F_ij is assumed to be generated by a multivariate Gaussian distribution N(μ_ij, Σ_ij), with sample mean μ_ij and sample covariance Σ_ij:

μ_ij = (1/N) · sum_{k=1..N} f_ij^k

Σ_ij = (1/(N−1)) · sum_{k=1..N} (f_ij^k − μ_ij)(f_ij^k − μ_ij)^T + εI

where the regularization term εI makes the sample covariance matrix full-rank and invertible. The multivariate Gaussian distributions at all possible positions together constitute the final feature distribution model.
In the system framework of the embodiment shown in fig. 2, for the image to be tested, the features obtained in the feature space transformation step are compared with the feature distribution model obtained in the feature distribution estimation step, and the following anomaly evaluation function is computed:

M(f_ij) = sqrt((f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij))

The Mahalanobis distance matrix M = (M(f_ij))_{1≤i≤W, 1≤j≤H} constitutes the final anomaly score matrix, in which positions with larger values indicate anomalous regions; the final anomaly score of the entire image is the maximum value of this anomaly matrix.
In summary, the present invention provides a few-sample anomaly detection method based on image registration, addressing the shortcomings of existing anomaly detection methods. The method trains a generalizable model on data from known object categories and requires no retraining for a new category: only a few samples of the new category are needed, so the method can be applied directly to anomaly detection on new object categories. By taking the actual behavior of human anomaly inspection as a reference and introducing image-registration-based training, the invention improves the generalization and detection capability of the anomaly detection algorithm and greatly reduces the data collection and high-performance computation overhead required to train anomaly detection models, thereby achieving better performance on anomaly detection tasks.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A few-sample image anomaly detection method based on image registration is characterized by comprising the following steps:
step S1: extracting high-dimensional characteristics of the images from the support image and the image to be detected;
step S2: carrying out feature spatial transformation on the high-dimensional features of the image to obtain transformed image features;
step S3: feature coding is carried out on the transformed image features;
step S4: realizing feature registration on the coding features;
step S5: fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model;
step S6: performing image anomaly evaluation based on the transformed image features and the feature distribution model.
2. The image registration-based few-sample image anomaly detection method according to claim 1, characterized in that:
in the step S1:
taking the support image and the image to be tested as input, a deep convolutional neural network is used to extract high-dimensional feature information of the image; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C 1 , C 2 and C 3 , which produce three multi-scale high-dimensional features, denoted f_1, f_2 and f_3, respectively;
in the step S2:
the high-dimensional image features f_1, f_2 and f_3 obtained by the image feature extraction are spatially transformed using spatial transformer networks; with the high-dimensional feature f_i as input, i = 1, 2, 3, the i-th spatial transformer network performs the following spatial coordinate transformation:

(x_s, y_s)^T = A_i (x_t, y_t, 1)^T

where (x_s, y_s) are the original coordinates of the input feature before transformation, (x_t, y_t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformer network, A_i is the coordinate transformation matrix, and θ_ij are the parameters of the coordinate affine transformation matrix; the spatial transformer network continuously corrects the parameters of the affine transformation matrix through the convolutional neural network according to the error back-propagation algorithm, yielding the transformed image features f_a and f_b, where f_a are the features of the image to be tested and f_b are the features of the support image.
3. The image registration-based few-sample image anomaly detection method according to claim 1, characterized in that:
in the step S3:
feature coding is applied to the transformed image features f_a and f_b obtained by the feature space transformation, using a deep convolutional neural network; for the features f_a of the image to be tested, the coding features are obtained using the encoder and the predictor:

z_a = E(f_a)

p_a = P(z_a)

where E is the encoder consisting of three convolution layers, P is the predictor consisting of two convolution layers, z_a is the high-dimensional feature of the image to be tested after encoding by the encoder, and p_a is the coded high-dimensional feature of the image to be tested obtained by the predictor; for the features f_b of the support image, the coding features are obtained using the encoder and predictor:

z_b = E(f_b)

p_b = P(z_b)

where the encoder E and the predictor P share weights with the encoder and predictor acting on the features of the image to be tested, z_b is the high-dimensional feature of the support image after encoding by the encoder, and p_b is the coded high-dimensional feature of the support image obtained by the predictor;
in the step S4:
for the obtained coding features p_a and z_b, feature registration is achieved using the image feature registration loss function:

L(p_a, z_b) = −(p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 normalization operation; the symmetric image feature registration loss function L is defined as:

L = (1/2) L(p_a, z_b) + (1/2) L(p_b, z_a)
4. the image registration-based few-sample image abnormality detection method according to claim 1, wherein in the step S5:
according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
for the transformed features f_a and f_b obtained by the feature space transformation, a normal distribution of the features is estimated using a statistics-based estimator, and a probabilistic representation of the corresponding features of normal images is obtained using a multivariate Gaussian distribution; suppose the image is divided into a grid of positions (i, j) ∈ [1, W] × [1, H], where W × H is the resolution of the features used to estimate the normal distribution; at each grid position (i, j), let F_ij denote the set of transformed features f_ij^k, k = 1, ..., N, from the N support images; F_ij is assumed to be generated by a multivariate Gaussian distribution N(μ_ij, Σ_ij), whose sample mean μ_ij and sample covariance Σ_ij are:

μ_ij = (1/N) · sum_{k=1..N} f_ij^k

Σ_ij = (1/(N−1)) · sum_{k=1..N} (f_ij^k − μ_ij)(f_ij^k − μ_ij)^T + εI

where (·)^T denotes matrix transposition, and the regularization term εI makes the sample covariance matrix full-rank and invertible; the multivariate Gaussian distributions at all grid positions together constitute the feature distribution model.
5. The image registration-based few-sample image abnormality detection method according to claim 1, wherein in the step S6:
according to the transformed image features obtained by feature space transformation and a feature distribution model obtained by feature distribution estimation, using an anomaly evaluation function to realize image anomaly evaluation;
for the image to be tested, the features of the image to be tested obtained by the feature space transformation are compared with the feature distribution model obtained by the feature distribution estimation, and the following anomaly evaluation function is computed:

M(f_ij) = sqrt((f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij))

where Σ_ij^{−1} is the inverse of the sample covariance Σ_ij, M(f_ij) is the Mahalanobis distance at grid position (i, j), and the Mahalanobis distance matrix M = (M(f_ij))_{1≤i≤W, 1≤j≤H} constitutes the anomaly score matrix; positions in the matrix whose values exceed a preset threshold indicate anomalous regions, and the anomaly score of the whole image is the maximum value of the anomaly score matrix.
6. A system for detecting abnormalities in a sample-less image based on image registration, comprising:
module M1: extracting high-dimensional characteristics of the images from the support image and the image to be detected;
module M2: carrying out feature spatial transformation on the high-dimensional features of the image to obtain transformed image features;
module M3: feature coding is carried out on the transformed image features;
module M4: implementing feature registration on the coding features;
module M5: fitting the feature distribution of the support image to the transformed image features to obtain a feature distribution model;
module M6: performing image anomaly evaluation based on the transformed image features and the feature distribution model.
7. The system for few-sample image anomaly detection based on image registration according to claim 6, characterized by:
in the module M1:
taking the support image and the image to be tested as input, a deep convolutional neural network is used to extract high-dimensional feature information of the image; the image feature extraction network consists of three cascaded residual-based convolutional neural network modules, denoted C 1 , C 2 and C 3 , which produce three multi-scale high-dimensional features, denoted f_1, f_2 and f_3, respectively;
in the module M2:
the high-dimensional image features f_1, f_2 and f_3 obtained by the image feature extraction are spatially transformed using spatial transformer networks; with the high-dimensional feature f_i as input, i = 1, 2, 3, the i-th spatial transformer network performs the following spatial coordinate transformation:

(x_s, y_s)^T = A_i (x_t, y_t, 1)^T

where (x_s, y_s) are the original coordinates of the input feature before transformation, (x_t, y_t) are the target coordinates of the transformed output feature, S_i is the i-th spatial transformer network, A_i is the coordinate transformation matrix, and θ_ij are the parameters of the coordinate affine transformation matrix; the spatial transformer network continuously corrects the parameters of the affine transformation matrix through the convolutional neural network according to the error back-propagation algorithm, yielding the transformed image features f_a and f_b, where f_a are the features of the image to be tested and f_b are the features of the support image.
8. The system for few-sample image anomaly detection based on image registration according to claim 6, characterized by:
in the module M3:
feature coding is applied to the transformed image features f_a and f_b obtained by the feature space transformation, using a deep convolutional neural network; for the features f_a of the image to be tested, the coding features are obtained using the encoder and the predictor:

z_a = E(f_a)

p_a = P(z_a)

where E is the encoder consisting of three convolution layers, P is the predictor consisting of two convolution layers, z_a is the high-dimensional feature of the image to be tested after encoding by the encoder, and p_a is the coded high-dimensional feature of the image to be tested obtained by the predictor; for the features f_b of the support image, the coding features are obtained using the encoder and predictor:

z_b = E(f_b)

p_b = P(z_b)

where the encoder E and the predictor P share weights with the encoder and predictor acting on the features of the image to be tested, z_b is the high-dimensional feature of the support image after encoding by the encoder, and p_b is the coded high-dimensional feature of the support image obtained by the predictor;
in the module M4:
for the obtained coding features p_a and z_b, feature registration is achieved using the image feature registration loss function:

L(p_a, z_b) = −(p_a / ‖p_a‖_2) · (z_b / ‖z_b‖_2)

where ‖·‖_2 is the L2 normalization operation; the symmetric image feature registration loss function L is defined as:

L = (1/2) L(p_a, z_b) + (1/2) L(p_b, z_a)
9. the system for few-sample image anomaly detection based on image registration according to claim 6, characterized in that in said module M5:
according to the transformed image features obtained in the feature space transformation step, fitting the feature distribution of the support image by using a distribution estimation model to obtain a feature distribution model;
transforming the feature space to obtain transformed featuresAndestimating a normal distribution of features using a statistics-based estimator, obtaining a probabilistic representation of corresponding features of a normal image using a multivariate Gaussian distribution, assuming that the image is divided into grids (i, j) e [1, W]×[1,H]Where, WXH is the resolution of the feature used to estimate the normal distribution; at the position (i, j) of each grid, noteFor transformed features from N support imagesSet of (2), F ij Distributed by multiple gaussians of N (mu) ij ,Σ ij ) Generation with sample mean μ ij Sample covariance ∑ ij Comprises the following steps:
wherein, (.) T For matrix transposition operation, a regularization item belongs to I, so that a sample covariance matrix is full-rank and reversible; the multivariate gaussian distribution of each possible location together constitutes a feature distribution model.
10. The system for few-sample image anomaly detection based on image registration according to claim 6, characterized in that in said module M6:
according to the transformed image features obtained by feature space transformation and a feature distribution model obtained by feature distribution estimation, using an anomaly evaluation function to realize image anomaly evaluation;
for the image to be tested, the features obtained by the feature space transformation are compared with the feature distribution model obtained by the feature distribution estimation, and the following anomaly evaluation function is calculated:

M(f_ij) = sqrt( (f_ij − μ_ij)^T Σ_ij^{−1} (f_ij − μ_ij) )

wherein Σ_ij^{−1} is the inverse of the sample covariance Σ_ij; the Mahalanobis distances form the anomaly score matrix M = (M(f_ij))_{1≤i≤W, 1≤j≤H};

positions of the matrix whose values exceed a preset threshold indicate abnormal regions, and the anomaly score of the whole image is the maximum value of the anomaly score matrix.
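The anomaly evaluation above — a per-position Mahalanobis distance of the test-image features against the fitted Gaussians, with the image-level score taken as the maximum over the score matrix — can be sketched as below; the array layout mirrors the (assumed) layout of the distribution-estimation sketch:

```python
import numpy as np

def anomaly_map(test_feats, mu, cov):
    """test_feats: (C, H, W) transformed features of the image to be tested;
    mu: (H, W, C) per-position means; cov: (H, W, C, C) per-position
    covariances. Returns the (H, W) anomaly score matrix of
    Mahalanobis distances."""
    f = test_feats.transpose(1, 2, 0)      # -> (H, W, C)
    diff = f - mu
    cov_inv = np.linalg.inv(cov)           # batched per-position inverse
    # Squared Mahalanobis distance diff^T * cov_inv * diff at each position.
    m2 = np.einsum('hwc,hwcd,hwd->hw', diff, cov_inv, diff)
    return np.sqrt(m2)

# Image-level anomaly score: maximum over the score matrix.
# score = anomaly_map(test_feats, mu, cov).max()
```

Thresholding the returned matrix localizes abnormal regions, while its maximum gives the whole-image anomaly score.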
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210617656.2A CN114972871A (en) | 2022-06-01 | 2022-06-01 | Image registration-based few-sample image anomaly detection method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210617656.2A CN114972871A (en) | 2022-06-01 | 2022-06-01 | Image registration-based few-sample image anomaly detection method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114972871A true CN114972871A (en) | 2022-08-30 |
Family
ID=82959159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210617656.2A Pending CN114972871A (en) | 2022-06-01 | 2022-06-01 | Image registration-based few-sample image anomaly detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114972871A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117574136A (en) * | 2024-01-16 | 2024-02-20 | 浙江大学海南研究院 | Convolutional neural network calculation method based on multi-element Gaussian function space transformation |
CN117574136B (en) * | 2024-01-16 | 2024-05-10 | 浙江大学海南研究院 | Convolutional neural network calculation method based on multi-element Gaussian function space transformation |
CN118470442A (en) * | 2024-07-10 | 2024-08-09 | 华东交通大学 | Small sample anomaly detection method based on multi-scale hypergraph and feature registration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860670B (en) | Domain adaptive model training method, image detection method, device, equipment and medium | |
CN109118473B (en) | Angular point detection method based on neural network, storage medium and image processing system | |
CN109766992B (en) | Industrial control abnormity detection and attack classification method based on deep learning | |
CN114972871A (en) | Image registration-based few-sample image anomaly detection method and system | |
CN111738054B (en) | Behavior anomaly detection method based on space-time self-encoder network and space-time CNN | |
CN105488456A (en) | Adaptive rejection threshold adjustment subspace learning based human face detection method | |
CN112070058A (en) | Face and face composite emotional expression recognition method and system | |
CN111079847A (en) | Remote sensing image automatic labeling method based on deep learning | |
CN111241924B (en) | Face detection and alignment method, device and storage medium based on scale estimation | |
CN105528620B (en) | method and system for combined robust principal component feature learning and visual classification | |
CN114170184A (en) | Product image anomaly detection method and device based on embedded feature vector | |
CN117155706B (en) | Network abnormal behavior detection method and system | |
CN116910752B (en) | Malicious code detection method based on big data | |
CN114494242A (en) | Time series data detection method, device, equipment and computer storage medium | |
CN116704431A (en) | On-line monitoring system and method for water pollution | |
CN111462184B (en) | Online sparse prototype tracking method based on twin neural network linear representation model | |
CN114565605A (en) | Pathological image segmentation method and device | |
CN113313179B (en) | Noise image classification method based on l2p norm robust least square method | |
Mai et al. | Vietnam license plate recognition system based on edge detection and neural networks | |
CN116188445A (en) | Product surface defect detection and positioning method and device and terminal equipment | |
CN111291712A (en) | Forest fire recognition method and device based on interpolation CN and capsule network | |
CN114842506A (en) | Human body posture estimation method and system | |
CN111815658B (en) | Image recognition method and device | |
CN111696070A (en) | Multispectral image fusion power internet of things fault point detection method based on deep learning | |
CN113506272B (en) | False video detection method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||