CN113808180B - Heterologous image registration method, system and device - Google Patents
- Publication number
- CN113808180B (application CN202111098634.1A / CN202111098634A)
- Authority
- CN
- China
- Prior art keywords
- image
- training
- matching
- matching network
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a heterologous image registration method, system, and device, wherein the method comprises the following steps: filtering the SAR image and forming image block pairs with the corresponding optical image; inputting the image block pair samples into a deep convolutional generative adversarial network for training; performing data augmentation on the training samples and dividing them; training a deep Siamese matching network based on the training set; generating matching point pairs with the trained matching network; and calculating a transformation matrix from the matching point pairs and registering the images. The system comprises: an image pair sample module, a training sample module, a division module, a training module, a matching module, and a registration module. The device includes a memory and a processor for performing the above heterologous image registration method. With the method, system, and device, the accuracy of heterologous registration can be improved, and they can be widely applied in the field of image registration.
Description
Technical Field
The present application relates to the field of image registration, and in particular, to a method, system, and apparatus for registration of heterologous images.
Background
Existing image registration techniques fall broadly into three categories: region-based methods, feature-based methods, and the network-based methods that have emerged in recent years. Each has significant drawbacks. Differences in imaging principles and imaging conditions cause nonlinear intensity differences between SAR images and optical images, so gray-level-based methods perform poorly. The imaging principle of SAR introduces severe speckle noise into SAR images, making it difficult for point-feature-based methods to extract reliable feature points, and registration methods that work well on optical images generally fail to achieve the expected results on heterologous images. Convolutional-neural-network-based methods require a large amount of training data to obtain a good model and prevent overfitting, but in practical applications the available optical and SAR image datasets are often far too small to train a good network.
Disclosure of Invention
In order to solve the technical problems, the application aims to provide a method, a system and a device for registering heterologous images, which improve the accuracy of heterologous registration.
The first technical scheme adopted by the application is as follows: a heterologous image registration method, comprising the steps of:
filtering the SAR image and forming image block pairs with the corresponding optical image to obtain image block pair samples;
inputting the image block pair samples into a deep convolutional generative adversarial network for training to obtain training samples;
performing data augmentation on the training samples and dividing them to obtain a training set and a test set;
training a deep Siamese matching network based on the training set to obtain a trained matching network;
extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs;
calculating a transformation matrix from the matching point pairs and registering the images.
Further, filtering the SAR image specifically includes the following steps:
reading the image data matrix of the SAR image;
sliding a preset filter window over the SAR image and computing the in-window parameters from the image data matrix;
outputting a filtering result based on the in-window parameters and the filter equation to obtain the filtered SAR image.
Further, the filter equation is expressed as follows:

$$\hat{m} = \bar{m} + b\,(m - \bar{m})$$

In the above formula, $\hat{m}$ represents the filtered value, $b$ represents the preset parameter (the filter weight), $m$ represents the observed value, and $\bar{m}$ represents the mean of the pixels within the local window.
Further, two groups of deep convolutional generative adversarial networks are provided, and the step of inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples specifically includes:
inputting the image block pairs into one group of deep convolutional generative adversarial networks, generating pseudo SAR images based on the generator and discrimination labels based on the discriminator;
inputting the filtered SAR images of the image block pair samples into the other group of deep convolutional generative adversarial networks, generating pseudo optical images based on the generator and discrimination labels based on the discriminator;
forming a training set from the SAR images, the pseudo SAR images, the optical images, the pseudo optical images, and the discrimination labels.
Further, the step of performing data augmentation on the training samples and dividing them to obtain a training set and a test set specifically includes:
performing geometric transformations on the training samples to obtain augmented training samples;
the geometric transformations include flipping, rotation, cropping, translation, and scaling;
dividing the augmented training samples into a training set and a test set at a 7:3 ratio.
Further, the step of training the deep Siamese matching network based on the training set to obtain a trained matching network specifically includes:
training one branch of the deep Siamese matching network with the pseudo SAR images generated from the optical images in the training set and the corresponding SAR images;
training the other branch of the deep Siamese matching network with the pseudo optical images generated from the SAR images in the training set and the corresponding optical images;
performing loss calculation on the deep Siamese matching network in combination with the discrimination labels to obtain the trained matching network.
Further, the step of extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs specifically includes:
detecting feature points of the pictures in the test set based on the SIFT method;
extracting image blocks from the SAR image and the optical image around the feature points and inputting them into the two branches of the trained matching network to obtain a matching result;
processing the matching result based on the progressive sample consensus (PROSAC) method to obtain the matching point pairs.
Further, the formula for the transformation matrix calculated from the matching point pairs is expressed as follows:

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

In the above formula, $T$ represents the geometric transformation matrix between image $I_1$ and image $I_2$, $s$ represents the scale factor of $I_2$ relative to $I_1$, $\theta$ represents the rotation angle of $I_2$ relative to $I_1$, $t_x$ represents the horizontal displacement parameter of $I_2$ relative to $I_1$, and $t_y$ represents the vertical displacement parameter of $I_2$ relative to $I_1$.
The second technical scheme adopted by the application is as follows: a heterologous image registration system comprising:
the image pair sample module is used for filtering the SAR image and forming image block pairs with the corresponding optical image to obtain image block pair samples;
the training sample module is used for inputting the image block pair samples into a deep convolutional generative adversarial network for training to obtain training samples;
the division module is used for performing data augmentation on the training samples and dividing them to obtain a training set and a test set;
the training module trains a deep Siamese matching network based on the training set to obtain a trained matching network;
the matching module is used for extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs;
and the registration module is used for calculating a transformation matrix from the matching point pairs and registering the images.
The third technical scheme adopted by the application is as follows: a heterologous image registration device, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a heterologous image registration method as described above.
The method, system, and device have the following beneficial effects: through data augmentation, a large number of labeled image block pairs can be generated for network training, solving the problem of dataset sizes insufficient for training a deep network; and by generating adversarial networks, the heterologous image registration problem is converted into a homologous image registration problem, thereby improving the accuracy of heterologous registration.
Drawings
FIG. 1 is a flow chart of steps of a method for registration of a heterologous image in accordance with the present application;
fig. 2 is a block diagram of a heterologous image registration system of the present application.
Detailed Description
The application will now be described in further detail with reference to the drawings and to specific examples. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
Referring to fig. 1, the present application provides a heterologous image registration method comprising the steps of:
filtering the SAR image and forming image block pairs with the corresponding optical image to obtain image block pair samples;
inputting the image block pair samples into a deep convolutional generative adversarial network for training to obtain training samples;
performing data augmentation on the training samples and dividing them to obtain a training set and a test set;
training a deep Siamese matching network based on the training set to obtain a trained matching network;
extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs;
calculating a transformation matrix from the matching point pairs and registering the images.
Further as a preferred embodiment of the method, the SAR image is filtered with the Lee filtering method, which specifically includes the following steps:
reading the image data matrix of the SAR image;
sliding a preset filter window over the SAR image and computing the in-window parameters from the image data matrix;
specifically, a filter window size (7×7 sliding window), a sliding window, and parameters within the sliding window are set. Assuming that the a priori mean and variance can be derived from computing the local mean and variance,and->The prior mean and variance of the Lee filtering method are calculated by the following two formulas:
where v is the noise in the local window,
the linear mathematical model of Lee filtering isWherein:
and outputting a filtering result based on the parameters in the window and the filtering equation to obtain a filtered SAR image.
Specifically, lee filter equation
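The Lee filtering steps above can be sketched in plain Python. This is an illustrative sketch, not the patented implementation: the window size, the assumed noise variance `noise_var` (for unit-mean multiplicative speckle), and the border handling are all assumptions.

```python
def lee_filter(img, win=7, noise_var=0.25):
    """Lee filter sketch over a 2-D list `img` of pixel values.

    For each pixel, compute the local mean and variance in a win x win
    window, estimate the prior scene variance var(x), form the weight
    b = var(x)/var(m), and output mean + b * (observation - mean).
    """
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # gather the local window, clipped at the image borders
            vals = [img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            # prior variance of the scene under multiplicative speckle
            var_x = max(0.0, (var - noise_var * mean * mean) / (1.0 + noise_var))
            b = var_x / var if var > 0 else 0.0
            out[i][j] = mean + b * (img[i][j] - mean)
    return out

flat = [[5.0] * 8 for _ in range(8)]   # homogeneous region: fully smoothed
filtered = lee_filter(flat)
```

On a homogeneous region the local variance is zero, so `b` is zero and the filter returns the local mean, which is the expected speckle-suppressing behavior.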
Further, as a preferred embodiment of the method, two groups of deep convolutional generative adversarial networks are provided, and the step of inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples specifically includes:
inputting the image block pairs into one group of deep convolutional generative adversarial networks, generating pseudo SAR images based on the generator and discrimination labels based on the discriminator;
inputting the filtered SAR images of the image block pair samples into the other group of deep convolutional generative adversarial networks, generating pseudo optical images based on the generator and discrimination labels based on the discriminator;
forming a training set from the SAR images, the pseudo SAR images, the optical images, the pseudo optical images, and the discrimination labels.
Specifically, two groups of generative adversarial networks are provided; the two groups have the same structure but different weights. Each group of networks consists of 4 convolutional layers. Regarding activation functions: the discriminator uses leaky rectified linear units (LeakyReLU) to prevent the gradient from becoming too sparse; the generator still uses the linear rectification function (ReLU), but its final output layer uses the Tanh (hyperbolic tangent) function. Training uses an adaptive-learning-rate optimization algorithm with the learning rate set to 0.0002. The generative adversarial network is trained according to the following loss function:

$$L_{DCGAN} = \min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right] + \lambda\, L_{L1}(G)$$

$L_{DCGAN}$ represents the adversarial loss constraint between the generator and the discriminator, with an added pixel-level term weighted by $\lambda$, where $L_{L1}(G)$ represents the pixel-level loss constraint between the generated image and the real image, $p_{data}$ is the real data distribution, $p_z$ is the noise distribution, $G(z)$ is the pseudo image simulating a real image that the generative model produces from random noise $z$, $D(x)$ is the probability that the discriminative model judges a real image to be real, and $D(G(z))$ is the probability that the discriminative model judges the generator's pseudo image to be real.
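As an illustrative sketch of the loss terms above (not the patented training code), the adversarial and pixel-level L1 terms can be computed for toy discriminator outputs and flattened "images" as follows; the weight `lam` is an assumption in the style of pix2pix-type training:

```python
import math

def gan_loss(d_real, d_fake):
    # E[log D(x)] + E[log(1 - D(G(z)))], averaged over a batch
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

def l1_loss(generated, real):
    # pixel-level L1 constraint between generated and real images
    return sum(abs(g - r) for g, r in zip(generated, real)) / len(generated)

# toy discriminator outputs and flattened toy "images"
d_real = [0.9, 0.8]          # D(x): probability real images are judged real
d_fake = [0.2, 0.3]          # D(G(z)): probability fakes are judged real
fake_img = [0.1, 0.5, 0.9]
real_img = [0.2, 0.4, 1.0]

lam = 100.0                  # assumed L1 weight
total = gan_loss(d_real, d_fake) + lam * l1_loss(fake_img, real_img)
```

In actual training the discriminator ascends `gan_loss` while the generator descends it together with the weighted L1 term.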
Further, as a preferred embodiment of the method, the step of performing data augmentation on the training samples and dividing them to obtain a training set and a test set specifically includes:
performing geometric transformations on the training samples to obtain augmented training samples;
the geometric transformations include flipping, rotation, cropping, translation, and scaling;
dividing the augmented training samples into a training set and a test set at a 7:3 ratio.
Specifically, data augmentation is completed through the above five geometric transformations, increasing the number of pictures in the training set.
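A minimal sketch of the augmentation and 7:3 split, under stated assumptions: each "sample" is a tiny 2-D matrix, and only flip and 90° rotation are shown (real augmentation would also crop, translate, and scale); the function names are illustrative, not from the patent.

```python
import random

def augment(samples):
    """Return each sample plus a horizontally flipped and a rotated copy."""
    out = []
    for m in samples:
        out.append(m)
        out.append([row[::-1] for row in m])          # horizontal flip
        out.append([list(r) for r in zip(*m[::-1])])  # rotate 90 degrees
    return out

def split_7_3(samples, seed=0):
    """Shuffle and divide samples into a 7:3 train/test split."""
    rng = random.Random(seed)
    s = samples[:]
    rng.shuffle(s)
    cut = int(round(len(s) * 0.7))
    return s[:cut], s[cut:]

samples = [[[1, 2], [3, 4]] for _ in range(10)]
augmented = augment(samples)              # 3x the original count
train_set, test_set = split_7_3(augmented)
```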
Further as a preferred embodiment of the method, the step of training the deep Siamese matching network based on the training set to obtain a trained matching network specifically includes:
training one branch of the deep Siamese matching network with the pseudo SAR images generated from the optical images in the training set and the corresponding SAR images;
training the other branch of the deep Siamese matching network with the pseudo optical images generated from the SAR images in the training set and the corresponding optical images;
performing loss calculation on the deep Siamese matching network in combination with the discrimination labels to obtain the trained matching network.
Specifically, the two branches of the deep Siamese network are used to train on the optical images and the SAR images respectively; the two branches have the same structure but do not share weights, and the network adopts a cross-entropy loss function with the SGD optimization algorithm. Clearly, the number of non-matching point pairs in an image far exceeds the number of matching pairs during training; to avoid the accuracy degradation caused by this imbalance between positive and negative samples, a random sampling strategy is adopted to keep the numbers of positive and negative samples consistent.
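The random sampling strategy for balancing positive and negative pairs can be sketched as random undersampling of the majority class; this is an assumed, illustrative realization rather than the patent's exact procedure.

```python
import random

def balance(pairs, labels, seed=0):
    """Undersample the majority class so positive (1) and negative (0)
    pairs occur in equal numbers, then shuffle the result."""
    rng = random.Random(seed)
    pos = [p for p, y in zip(pairs, labels) if y == 1]
    neg = [p for p, y in zip(pairs, labels) if y == 0]
    n = min(len(pos), len(neg))
    pos = rng.sample(pos, n)
    neg = rng.sample(neg, n)
    balanced = [(p, 1) for p in pos] + [(p, 0) for p in neg]
    rng.shuffle(balanced)
    return balanced

pairs = list(range(10))
labels = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1]   # 3 positives, 7 negatives
batch = balance(pairs, labels)            # 3 positives + 3 negatives
```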
The cross-entropy loss function is:

$$L = -\frac{1}{n}\sum_{i=1}^{n}\left[ y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right) \right]$$

where $y_i$ is the label of the input image pair $x_i$: 1 represents a match and 0 represents a mismatch. $\hat{y}_i$ represents the predicted matching probability, which can be calculated from the two nodes $v_0(x_i)$ and $v_1(x_i)$ on the FC3 layer. The calculation formula is:

$$\hat{y}_i = \frac{e^{v_1(x_i)}}{e^{v_0(x_i)} + e^{v_1(x_i)}}$$
the matching network can be divided into a feature extraction network and a metric learning network. The feature extraction network adopts a dual-branch structure, and each branch comprises 5 convolution layers and 3 multiplied by 3 pooling layers. Wherein, three pooling layers are respectively positioned behind the first convolution layer, the second convolution layer and the fifth convolution layer. A bottleneck layer (real full connection layer) is connected between the feature extraction network and the metric learning network, and is used for reducing the dimension of the feature expression vector and avoiding over fitting. The metric learning network is composed of two full-connection layers connected with a ReLU activation function and one full-connection layer connected with a softmax function, and outputs probability values representing the similarity of image blocks.
Further as a preferred embodiment of the method, the step of extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs specifically includes:
detecting feature points of the pictures in the test set based on the SIFT method;
extracting image blocks from the SAR image and the optical image around the feature points and inputting them into the two branches of the trained matching network to obtain a matching result;
processing the matching result based on the progressive sample consensus (PROSAC) method to obtain the matching point pairs.
Further as a preferred embodiment of the method, the transformation matrix calculated from the matching point pairs is expressed as follows:

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

In the above formula, $T$ represents the geometric transformation matrix between image $I_1$ and image $I_2$, $s$ represents the scale factor of $I_2$ relative to $I_1$, $\theta$ represents the rotation angle of $I_2$ relative to $I_1$, $t_x$ represents the horizontal displacement parameter of $I_2$ relative to $I_1$, and $t_y$ represents the vertical displacement parameter of $I_2$ relative to $I_1$.
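The similarity transform above can be built and applied to points as follows (an illustrative sketch; function names are assumptions, and homogeneous coordinates are used for the mapping):

```python
import math

def similarity_matrix(s, theta, tx, ty):
    """Build the 3x3 similarity transform T from scale s, rotation theta
    (radians), and translations (tx, ty)."""
    c, r = math.cos(theta), math.sin(theta)
    return [[s * c, -s * r, tx],
            [s * r,  s * c, ty],
            [0.0,    0.0,   1.0]]

def apply(T, x, y):
    """Map a point (x, y) through T using homogeneous coordinates."""
    u = T[0][0] * x + T[0][1] * y + T[0][2]
    v = T[1][0] * x + T[1][1] * y + T[1][2]
    return u, v

T = similarity_matrix(2.0, math.pi / 2, 1.0, 0.0)
u, v = apply(T, 1.0, 0.0)   # scale by 2, rotate 90 degrees, shift x by 1
```

Registration then resamples $I_2$ through $T^{-1}$ so that it aligns with $I_1$.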
The method also has the following beneficial effects: the labels of all training samples are known, and by learning images subjected to different geometric transformations, the matching network becomes robust to rotation, translation, and scale changes; the metric function is learned by the network and the similarity metric is integrated into the matching network, which directly outputs matching labels.
As shown in fig. 2, a heterologous image registration system, comprising:
the image pair sample module is used for filtering the SAR image and forming image block pairs with the corresponding optical image to obtain image block pair samples;
the training sample module is used for inputting the image block pair samples into a deep convolutional generative adversarial network for training to obtain training samples;
the division module is used for performing data augmentation on the training samples and dividing them to obtain a training set and a test set;
the training module trains a deep Siamese matching network based on the training set to obtain a trained matching network;
the matching module is used for extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs;
and the registration module is used for calculating a transformation matrix from the matching point pairs and registering the images.
A heterologous image registration device, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a heterologous image registration method as described above.
The content of the method embodiment is applicable to the system and device embodiments; the functions specifically realized by these embodiments are the same as those of the method embodiment, and the beneficial effects achieved are the same as those of the method embodiment.
While the preferred embodiment of the present application has been described in detail, the application is not limited to the embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.
Claims (7)
1. A method of heterologous image registration comprising the steps of:
filtering the SAR image and forming image block pairs with the corresponding optical image to obtain image block pair samples;
inputting the image block pair samples into a deep convolutional generative adversarial network for training to obtain training samples;
performing data augmentation on the training samples and dividing them to obtain a training set and a test set;
training a deep Siamese matching network based on the training set to obtain a trained matching network;
extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs;
calculating a transformation matrix from the matching point pairs and registering the images;
wherein the step of performing data augmentation on the training samples and dividing them to obtain a training set and a test set specifically comprises:
performing geometric transformations on the training samples to obtain augmented training samples;
the geometric transformations include flipping, rotation, cropping, translation, and scaling;
dividing the augmented training samples into a training set and a test set at a 7:3 ratio;
wherein the step of training the deep Siamese matching network based on the training set to obtain a trained matching network specifically comprises:
training one branch of the deep Siamese matching network with the pseudo SAR images generated from the optical images in the training set and the corresponding SAR images;
training the other branch of the deep Siamese matching network with the pseudo optical images generated from the SAR images in the training set and the corresponding optical images;
performing loss calculation on the deep Siamese matching network in combination with the discrimination labels to obtain the trained matching network;
wherein the step of extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs specifically comprises:
detecting feature points of the pictures in the test set based on the SIFT method;
extracting image blocks from the SAR image and the optical image around the feature points and inputting them into the two branches of the trained matching network to obtain a matching result;
processing the matching result based on the progressive sample consensus (PROSAC) method to obtain the matching point pairs.
2. The heterologous image registration method according to claim 1, wherein filtering the SAR image comprises the following steps:
reading the image data matrix of the SAR image;
sliding a preset filter window over the SAR image and computing the in-window parameters from the image data matrix;
outputting a filtering result based on the in-window parameters and the filter equation to obtain the filtered SAR image.
3. The heterologous image registration method according to claim 2, wherein the filter equation is expressed as follows:

$$\hat{m} = \bar{m} + b\,(m - \bar{m})$$

In the above formula, $\hat{m}$ represents the filtered value, $b$ represents the preset parameter, $m$ represents the observed value, and $\bar{m}$ represents the mean of the pixels within the local window.
4. The heterologous image registration method according to claim 3, wherein two groups of deep convolutional generative adversarial networks are provided, and the step of inputting the image block pair samples into the deep convolutional generative adversarial networks for training to obtain training samples specifically comprises:
inputting the image block pairs into one group of deep convolutional generative adversarial networks, generating pseudo SAR images based on the generator and discrimination labels based on the discriminator;
inputting the filtered SAR images of the image block pair samples into the other group of deep convolutional generative adversarial networks, generating pseudo optical images based on the generator and discrimination labels based on the discriminator;
forming a training set from the SAR images, the pseudo SAR images, the optical images, the pseudo optical images, and the discrimination labels.
5. The heterologous image registration method according to claim 4, wherein the formula for computing the transformation matrix from the matching point pairs is as follows:

$$T = \begin{bmatrix} s\cos\theta & -s\sin\theta & t_x \\ s\sin\theta & s\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

In the above formula, $T$ represents the geometric transformation matrix between image $I_1$ and image $I_2$, $s$ represents the scale factor of $I_2$ relative to $I_1$, $\theta$ represents the rotation angle of $I_2$ relative to $I_1$, $t_x$ represents the horizontal displacement parameter of $I_2$ relative to $I_1$, and $t_y$ represents the vertical displacement parameter of $I_2$ relative to $I_1$.
6. A heterologous image registration system comprising:
the image pair sample module is used for filtering the SAR image and forming image block pairs with the corresponding optical image to obtain image block pair samples;
the training sample module is used for inputting the image block pair samples into a deep convolutional generative adversarial network for training to obtain training samples;
the division module is used for performing data augmentation on the training samples and dividing them to obtain a training set and a test set;
the training module trains a deep Siamese matching network based on the training set to obtain a trained matching network;
the matching module is used for extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs;
the registration module is used for calculating a transformation matrix from the matching point pairs and registering the images;
wherein performing data augmentation on the training samples and dividing them to obtain a training set and a test set specifically comprises: performing geometric transformations on the training samples to obtain augmented training samples; the geometric transformations include flipping, rotation, cropping, translation, and scaling; dividing the augmented training samples into a training set and a test set at a 7:3 ratio;
wherein training the deep Siamese matching network based on the training set to obtain a trained matching network specifically comprises: training one branch of the deep Siamese matching network with the pseudo SAR images generated from the optical images in the training set and the corresponding SAR images; training the other branch with the pseudo optical images generated from the SAR images in the training set and the corresponding optical images; performing loss calculation on the deep Siamese matching network in combination with the discrimination labels to obtain the trained matching network;
wherein extracting image blocks from the pictures in the test set and inputting them into the trained matching network to obtain matching point pairs specifically comprises: detecting feature points of the pictures in the test set based on the SIFT method; extracting image blocks from the SAR image and the optical image around the feature points and inputting them into the two branches of the trained matching network to obtain a matching result; and processing the matching result based on the progressive sample consensus (PROSAC) method to obtain the matching point pairs.
7. A heterologous image registration device, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a heterologous image registration method as claimed in any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111098634.1A CN113808180B (en) | 2021-09-18 | 2021-09-18 | Heterologous image registration method, system and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111098634.1A CN113808180B (en) | 2021-09-18 | 2021-09-18 | Heterologous image registration method, system and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113808180A CN113808180A (en) | 2021-12-17 |
CN113808180B true CN113808180B (en) | 2023-10-17 |
Family
ID=78939711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111098634.1A Active CN113808180B (en) | 2021-09-18 | 2021-09-18 | Heterologous image registration method, system and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113808180B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114565653B (en) * | 2022-03-02 | 2023-07-21 | 哈尔滨工业大学 | Heterologous remote sensing image matching method with rotation change and scale difference |
CN114972453B (en) * | 2022-04-12 | 2023-05-05 | 南京雷电信息技术有限公司 | Improved SAR image region registration method based on LSD and template matching |
CN115356599B (en) * | 2022-10-21 | 2023-04-07 | 国网天津市电力公司城西供电分公司 | Multi-mode urban power grid fault diagnosis method and system |
CN116563569B (en) * | 2023-04-17 | 2023-11-17 | 昆明理工大学 | Hybrid twin network-based heterogeneous image key point detection method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510532A (en) * | 2018-03-30 | 2018-09-07 | 西安电子科技大学 | Optics and SAR image registration method based on depth convolution GAN |
CN111462012A (en) * | 2020-04-02 | 2020-07-28 | 武汉大学 | SAR image simulation method for generating countermeasure network based on conditions |
2021
- 2021-09-18: Application CN202111098634.1A filed in China (CN); patent CN113808180B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN113808180A (en) | 2021-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113808180B (en) | Heterologous image registration method, system and device | |
CN110738697B (en) | Monocular depth estimation method based on deep learning | |
CN113450307B (en) | Product edge defect detection method | |
CN111709909B (en) | General printing defect detection method based on deep learning and model thereof | |
CN106875381B (en) | Mobile phone shell defect detection method based on deep learning | |
CN111445459B (en) | Image defect detection method and system based on depth twin network | |
CN111340738B (en) | Image rain removing method based on multi-scale progressive fusion | |
CN113239954B (en) | Attention mechanism-based image semantic segmentation feature fusion method | |
CN112183501A (en) | Depth counterfeit image detection method and device | |
CN105160686B (en) | A kind of low latitude various visual angles Remote Sensing Images Matching Method based on improvement SIFT operators | |
CN112801141B (en) | Heterogeneous image matching method based on template matching and twin neural network optimization | |
CN115471682A (en) | Image matching method based on SIFT fusion ResNet50 | |
CN114119987A (en) | Feature extraction and descriptor generation method and system based on convolutional neural network | |
CN113920516A (en) | Calligraphy character skeleton matching method and system based on twin neural network | |
CN112766340A (en) | Depth capsule network image classification method and system based on adaptive spatial mode | |
CN114241469A (en) | Information identification method and device for electricity meter rotation process | |
CN114241194A (en) | Instrument identification and reading method based on lightweight network | |
CN117115359A (en) | Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion | |
CN115205155A (en) | Distorted image correction method and device and terminal equipment | |
CN114299323A (en) | Printed matter defect detection method combining machine vision and deep learning technology | |
CN117611925A (en) | Multi-source remote sensing image classification method based on graph neural network and convolution network | |
CN117315670A (en) | Water meter reading area detection method based on computer vision | |
CN116778164A (en) | Semantic segmentation method for improving deep V < 3+ > network based on multi-scale structure | |
CN116630160A (en) | Cell image super-resolution reconstruction method and system based on convolution network | |
Dai et al. | Intelligent ammeter reading recognition method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||