
CN113793263B - Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution

Info

Publication number
CN113793263B
CN113793263B (application CN202110967124.7A)
Authority
CN
China
Prior art keywords
convolution
resolution image
layer
residual error
image
Prior art date
Legal status
Active
Application number
CN202110967124.7A
Other languages
Chinese (zh)
Other versions
CN113793263A
Inventor
仇傲
张伟
罗欣怡
李志鹏
李焱骏
师奕兵
郭一多
罗斌
谢雨洁
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CN202110967124.7A, filed 2021-08-23
Publication of CN113793263A: 2021-12-14
Application granted; publication of CN113793263B: 2023-04-07

Classifications

    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/10: Segmentation; Edge detection
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging


Abstract

The invention discloses a parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution (i.e. dilated convolution). Shallow features are first extracted by a convolution layer with a kernel size of 9 × 9 and 64 channels. Multi-scale cavity convolution blocks are then constructed by exploiting the property that cavity convolution enlarges the receptive field without changing the number of parameters; each block is combined with an ordinary 3 × 3 convolution layer and a BN layer to form a residual block, and 16 residual blocks are connected in series to form a residual network. The features are nonlinearly mapped through a multi-path parallel structure to obtain high-level features, and finally the feature maps are rearranged by a sub-pixel convolution layer to obtain the high-resolution image SR.

Description

Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution.
Background
In the field of oil-well logging, borehole-wall imaging logging is an important branch. It reflects the condition of the well through an intuitive borehole-wall image, on which the development of cracks and holes can be clearly seen, and it is therefore an important means of evaluating an oil well; the clarity of the obtained borehole-wall image directly affects how well logging personnel can evaluate the well. High-resolution reconstruction is one of the main research directions in image enhancement. Traditional methods such as bilinear interpolation and bicubic interpolation provide only limited enhancement, whereas super-resolution reconstruction based on deep learning is studied by more and more researchers because of its obvious enhancement effect. For example, the SRCNN algorithm for image super-resolution reconstruction first upsamples the input with bicubic interpolation and then constructs a three-layer neural network for feature learning and reconstruction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution, so as to achieve rapid enhancement of logging images and help logging personnel observe the images better and analyze the downhole situation in real time.
In order to achieve the above purpose, the invention discloses a parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution, characterized by comprising the following steps:
(1) Acquiring and preprocessing an image;
acquiring a plurality of original high-resolution well-logging images with a borehole ultrasonic imaging instrument, cropping each high-resolution image to obtain high-resolution images HR of the same size, and performing n-fold down-sampling on each high-resolution well-logging image to obtain a low-resolution image LR of size H × W, where n is the sampling multiple;
(2) Constructing an image reconstruction network based on multi-scale cavity convolution and training;
(2.1) extracting a feature map containing shallow features;
inputting the low-resolution image LR into a convolution layer v1 with a kernel size of 9 × 9 and 64 channels, and performing shallow feature extraction with a PReLU activation function to obtain 64 feature maps of size H × W;
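Purely as an illustration, and not as part of the disclosed method, the shallow feature extraction of step (2.1) could be sketched in PyTorch as follows; the class name, the padding of 4 (chosen so that the 9 × 9 convolution preserves the H × W size) and the example input size are assumptions by the editor:

```python
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    """Sketch of step (2.1): 9x9 convolution, 64 channels, PReLU activation."""
    def __init__(self, in_channels: int = 3, features: int = 64):
        super().__init__()
        # padding=4 keeps the H x W spatial size (an assumption; the text only
        # states that 64 feature maps of size H x W are obtained).
        self.conv = nn.Conv2d(in_channels, features, kernel_size=9, padding=4)
        self.act = nn.PReLU(num_parameters=features)

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(lr))

# Example: a 24x24 RGB low-resolution patch -> 64 feature maps of size 24x24.
x = torch.randn(1, 3, 24, 24)
print(ShallowFeatureExtractor()(x).shape)  # torch.Size([1, 64, 24, 24])
```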
(2.2) constructing two parallel residual error networks;
(2.2.1) constructing a multi-scale cavity convolution block;
extracting features from the 64 feature maps simultaneously with a convolution layer v2 of 64 3 × 3 convolution kernels and a convolution layer v3 of 64 3 × 3 convolution kernels with a dilation rate of 2, then adding the outputs of v2 and v3 and feeding the sum into v2 and v3 again, and finally fusing the outputs of v2 and v3 with a 1 × 1 convolution kernel and adding the fused result directly to the 64 input feature maps to construct a multi-scale cavity convolution block;
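The following PyTorch sketch shows one possible reading of this multi-scale cavity convolution block. The interpretation that the second pass reuses the same layers v2/v3 (shared weights), and that the 1 × 1 fusion acts on the concatenation of the two branch outputs, are assumptions, as are all class and attribute names:

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Sketch of a multi-scale cavity (dilated) convolution block: an ordinary
    k x k conv and a k x k conv with dilation 2 run in parallel, their sum is
    passed through both layers again, the two outputs are fused by a 1x1 conv,
    and the block input is added back as a skip connection."""
    def __init__(self, channels: int = 64, kernel_size: int = 3, dilation: int = 2):
        super().__init__()
        k = kernel_size
        self.plain = nn.Conv2d(channels, channels, k, padding=k // 2)
        self.dilated = nn.Conv2d(channels, channels, k,
                                 padding=dilation * (k // 2), dilation=dilation)
        # Fusion by concatenation followed by a 1x1 conv is an assumption;
        # the text only says the two outputs are fused with a 1x1 kernel.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.plain(x) + self.dilated(x)                 # first pass, outputs added
        y = torch.cat([self.plain(s), self.dilated(s)], 1)  # fed into v2 and v3 again
        return self.fuse(y) + x                             # 1x1 fusion + skip connection
```

With kernel_size=3 this corresponds to the v2/v3 block described here; with kernel_size=5 it gives the v5/v6 block of step (2.2.3).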
(2.2.2) constructing a single-path residual error network: connecting a multi-scale cavity convolution block, a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer in sequence, adding the result to the 64 input feature maps to form a residual block, and connecting 16 residual blocks in series to form a single-path residual network;
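A hedged sketch of this residual block and of the 16-block single-path residual network, reusing the MultiScaleDilatedBlock class from the previous sketch; the padding choice and the helper name are assumptions:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Sketch of step (2.2.2): multi-scale cavity block, then a 3x3 conv (v4)
    and a BatchNorm layer, plus a skip connection from the block input."""
    def __init__(self, channels: int = 64, ms_kernel: int = 3):
        super().__init__()
        self.msd = MultiScaleDilatedBlock(channels, kernel_size=ms_kernel)
        self.v4 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.bn(self.v4(self.msd(x))) + x

def make_single_path(channels: int = 64, ms_kernel: int = 3) -> nn.Sequential:
    """16 residual blocks in series form one single-path residual network."""
    return nn.Sequential(*[ResidualBlock(channels, ms_kernel) for _ in range(16)])
```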
(2.2.3) constructing two parallel residual error networks: extracting features simultaneously with a convolution layer v5 of 64 5 × 5 convolution kernels and a convolution layer v6 of 64 5 × 5 convolution kernels with a dilation rate of 2, adding the outputs of v5 and v6 and feeding the sum into v5 and v6 again, then fusing the outputs of v5 and v6 with a 1 × 1 convolution kernel and adding the result directly to the 64 input feature maps to construct another multi-scale cavity convolution block; this block is then combined with a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer, the 64 input feature maps are added to form another residual block, and 16 such residual blocks are connected in series to form another single-path residual network;
connecting the two single-path residual error networks in parallel to the convolution layer of step (2.1); the results output by the two parallel residual error networks of step (2.2) are fused with a 1 × 1 convolution kernel, input into a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer, and the result is added directly to the output of the convolution layer of step (2.1) to obtain 64 feature maps containing high-level features;
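A sketch of this two-path parallel structure, reusing make_single_path from the previous sketch; fusing the two branch outputs by concatenation followed by the 1 × 1 convolution is an assumption, and all names are the editor's:

```python
import torch
import torch.nn as nn

class ParallelResidualTrunk(nn.Module):
    """Sketch of step (2.2.3): two single-path residual networks (one built from
    3x3 multi-scale blocks, one from 5x5 blocks) run in parallel on the shallow
    features; their outputs are fused by a 1x1 conv, passed through a 3x3 conv
    (v4) and BatchNorm, and added back to the shallow features (global skip)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.path_a = make_single_path(channels, ms_kernel=3)
        self.path_b = make_single_path(channels, ms_kernel=5)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.v4 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, shallow: torch.Tensor) -> torch.Tensor:
        a = self.path_a(shallow)
        b = self.path_b(shallow)
        fused = self.fuse(torch.cat([a, b], dim=1))
        return self.bn(self.v4(fused)) + shallow   # add the shallow features back
```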
(2.3) reconstructing a high-resolution image;
inputting the 64 feature maps containing high-level features into a convolution layer v7 with 64 × n² channels to widen the number of channels, and then into a sub-pixel convolution layer, so that single pixels from multiple channel feature maps are combined and rearranged into a group of pixels on one channel feature map, i.e. H × W × (r·n²) → (n·H) × (n·W) × r, where r is the number of channels output by the preceding stage; finally, a reconstructed high-resolution image SR of size (n × H) × (n × W) with 3 channels is output through a convolution layer v8 with a kernel size of 9 × 9 and 3 channels;
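The sub-pixel reconstruction of step (2.3) maps naturally onto nn.PixelShuffle. In the sketch below, the 3 × 3 kernel size of v7 is an assumption (the text only fixes its channel count of 64 × n²), while the 9 × 9 kernel and 3 output channels of v8 follow the text:

```python
import torch
import torch.nn as nn

class SubPixelReconstruction(nn.Module):
    """Sketch of step (2.3): a conv layer (v7) widens 64 feature maps to 64*n^2
    channels, nn.PixelShuffle rearranges H x W x (64*n^2) into (n*H) x (n*W) x 64,
    and a final 9x9 conv (v8) outputs the 3-channel SR image."""
    def __init__(self, channels: int = 64, scale: int = 4, out_channels: int = 3):
        super().__init__()
        self.v7 = nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # sub-pixel convolution layer
        self.v8 = nn.Conv2d(channels, out_channels, kernel_size=9, padding=4)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.v8(self.shuffle(self.v7(feats)))

# Example: 64 high-level feature maps of 24x24 -> a 3-channel 96x96 SR image (n = 4).
feats = torch.randn(1, 64, 24, 24)
print(SubPixelReconstruction(scale=4)(feats).shape)  # torch.Size([1, 3, 96, 96])
```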
(2.4) calculating a loss function value;
calculating pixel mean square error MSE of the reconstructed high-resolution image SR and the original high-resolution image HR, and taking the MSE as a loss function value;
$$\mathrm{MSE}=\frac{1}{(nH)(nW)}\sum_{i=1}^{nH}\sum_{j=1}^{nW}\bigl(SR(i,j)-HR(i,j)\bigr)^{2}$$
wherein SR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image SR, and HR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image HR;
(2.5) repeating steps (2.1)-(2.4) to continue training the image reconstruction network, performing parameter optimization with the Adam optimization algorithm to minimize the MSE, and finally obtaining a trained image reconstruction network model;
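An illustrative training loop for step (2.5), combining the modules sketched above into one network; the learning rate, the number of epochs and the data loader are placeholders, and only the MSE loss and the Adam optimizer come from the text:

```python
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """End-to-end sketch: shallow extraction, parallel residual trunk,
    sub-pixel reconstruction (classes from the previous sketches)."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.shallow = ShallowFeatureExtractor()
        self.trunk = ParallelResidualTrunk()
        self.recon = SubPixelReconstruction(scale=scale)

    def forward(self, lr_img):
        return self.recon(self.trunk(self.shallow(lr_img)))

model = ReconstructionNet(scale=4)
criterion = nn.MSELoss()                                   # pixel-wise MSE of step (2.4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption

def train(loader, epochs: int = 100):
    """`loader` is assumed to yield (LR, HR) tensor pairs prepared as in step (1)."""
    model.train()
    for _ in range(epochs):
        for lr_img, hr_img in loader:
            optimizer.zero_grad()
            sr = model(lr_img)
            loss = criterion(sr, hr_img)   # MSE between SR and HR
            loss.backward()
            optimizer.step()               # Adam updates the parameters to minimize MSE
```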
(3) Acquiring a logging image in real time and inputting it into the trained image reconstruction network to output a reconstructed high-resolution image.
The object of the invention is achieved as follows:
The parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution of the invention first performs shallow feature extraction with a convolution layer of kernel size 9 × 9 and 64 channels; it then exploits the property that cavity convolution enlarges the receptive field without changing the number of parameters to construct multi-scale cavity convolution blocks, combines each block with an ordinary 3 × 3 convolution layer and a BN layer to form a residual block, and connects 16 residual blocks in series to form a residual network; the features are nonlinearly mapped through a multi-path parallel structure to obtain high-level features, and finally the feature maps are rearranged by a sub-pixel convolution layer to obtain the high-resolution image SR.
Meanwhile, the parallel residual error network high-resolution image reconstruction method based on the multi-scale cavity convolution also has the following beneficial effects:
(1) Cavity convolution enlarges the receptive field without changing the number of parameters, so that more global features are obtained.
(2) Feature information at different scales is complemented through the parallel network structure.
(3) Compared with traditional bicubic interpolation and the classical deep-learning super-resolution algorithms SRCNN, VDSR, SRResNet and the like, the multi-scale parallel network based on cavity convolution significantly improves the objective image-reconstruction indexes PSNR and SSIM.
Drawings
FIG. 1 is a flow chart of a parallel residual error network high-resolution image reconstruction method based on multi-scale hole convolution according to the invention;
FIG. 2 is a block diagram of a peri-borehole ultrasound imager;
FIG. 3 shows the structure of a multi-scale cavity convolution block;
FIG. 4 shows the parallel network structure built from multi-scale cavity convolution blocks;
FIG. 5 is an analysis of the mean indexes for 4-fold reconstruction on the test set for this algorithm and other classical algorithms;
FIG. 6 is an analysis of the mean indexes for 2-fold reconstruction on the test set for this algorithm and other classical algorithms;
FIG. 7 is a comparison of the reconstruction effects of this algorithm and other classical algorithms on several individual well-logging images.
Detailed Description
The following describes specific embodiments of the invention with reference to the accompanying drawings, so that those skilled in the art can better understand the invention. It is expressly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the main content of the invention.
Examples
In this embodiment, as shown in fig. 1, the parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution of the invention includes the following steps:
S1, image acquisition and preprocessing;
As shown in fig. 2, the borehole ultrasonic imaging instrument comprises a ground control system and a downhole logging circuit system. During logging, the ultrasonic transducer probe is driven by the motor transmission device to rotate through 360 degrees; for each revolution, the tooth sensor generates 250 pulses and the body-mark sensor generates 1 pulse. The main control module of the downhole logging circuit then shapes these signals: the shaped tooth signal serves as the transmit trigger so that the FPGA drives the transmitting circuit to generate high-voltage pulses and excite the ultrasonic transducer, while the shaped body-mark signal serves as the acquisition-cycle synchronization signal marking the starting point of each new revolution of acquisition.
The ultrasonic transducer collects the full-waveform data of the echoes, which are sent to the ground control system over an EDIB bus; the upper computer receives the data converted through the USB port, parses the data packets, extracts the echo amplitude and arrival-time data, and synthesizes the final borehole-wall image in software.
After actual logging, 127 images with resolution 140 × 140, 152 images with resolution 180 × 180 and 860 images with resolution 352 × 352 were obtained as the training set, and 366 images with resolution 352 × 352, 265 images with resolution 180 × 180 and 30 images with resolution 140 × 140 were obtained as the test set;
The original images in the training set are taken as original high-resolution images; each is cropped to obtain an original high-resolution image HR of size 96 × 96, and HR is down-sampled four-fold to obtain a low-resolution image LR of size H × W = 24 × 24 for network training.
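A minimal preprocessing sketch for this embodiment; the random crop position and the bicubic down-sampling kernel are assumptions, since the text only specifies a 96 × 96 crop and four-fold down-sampling:

```python
import torch
import torch.nn.functional as F

def make_training_pair(image: torch.Tensor, patch: int = 96, scale: int = 4):
    """Crop a 96x96 HR patch and down-sample it 4-fold to a 24x24 LR patch.
    `image` is a (C, H, W) tensor with values in [0, 1]."""
    _, h, w = image.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    hr = image[:, top:top + patch, left:left + patch]
    # Bicubic down-sampling is an assumption; the text does not name the kernel.
    lr = F.interpolate(hr.unsqueeze(0), scale_factor=1 / scale,
                       mode="bicubic", align_corners=False).squeeze(0)
    return lr.clamp(0, 1), hr
```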
S2, constructing an image reconstruction network based on multi-scale hole convolution and training;
S2.1, extracting a feature map containing shallow features;
In this embodiment, 20 LR images are randomly selected each time and fed into the feature extraction layer; each low-resolution image LR is input into the convolution layer v1 with a kernel size of 9 × 9 and 64 channels, and shallow feature extraction is performed with the PReLU activation function to obtain 64 feature maps of size H × W;
S2.2, constructing two parallel residual error networks;
S2.2.1, constructing a multi-scale cavity convolution block;
Features are extracted from the 64 feature maps simultaneously with a convolution layer v2 of 64 3 × 3 convolution kernels and a convolution layer v3 of 64 3 × 3 convolution kernels with a dilation rate of 2; the outputs of v2 and v3 are added and fed into v2 and v3 again, and finally the outputs of v2 and v3 are fused with a 1 × 1 convolution kernel and the fused result is added directly to the 64 input feature maps to construct a multi-scale cavity convolution block, as shown in fig. 3;
S2.2.2, constructing a single-path residual error network: a multi-scale cavity convolution block, a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer are connected in sequence, the result is added to the 64 input feature maps to form a residual block, and 16 residual blocks are connected in series to form a single-path residual network;
S2.2.3, as shown in FIG. 4, two parallel residual error networks are constructed: features are extracted simultaneously with a convolution layer v5 of 64 5 × 5 convolution kernels and a convolution layer v6 of 64 5 × 5 convolution kernels with a dilation rate of 2; the outputs of v5 and v6 are added and fed into v5 and v6 again, then the outputs of v5 and v6 are fused with a 1 × 1 convolution kernel and the result is added directly to the 64 input feature maps to construct another multi-scale cavity convolution block; this block is then combined with a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer, the 64 input feature maps are added to form another residual block, and 16 such residual blocks are connected in series to form another single-path residual network;
The two single-path residual error networks are connected in parallel to the convolution layer of step S2.1; the results output by the two parallel residual error networks of step S2.2 are fused with a 1 × 1 convolution kernel, input into a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer, and the result is added directly to the output of the convolution layer of step S2.1 to obtain 64 feature maps containing high-level features;
S2.3, reconstructing a high-resolution image;
The 64 feature maps containing high-level features are input into the convolution layer v7 with 64 × 4² channels to widen the number of channels, and then into the sub-pixel convolution layer, so that single pixels from multiple channel feature maps are combined and rearranged into a group of pixels on one channel feature map, i.e. H × W × (r·4²) → (4·H) × (4·W) × r; in this embodiment r, the number of channels output by the preceding stage, is 64. Finally, the reconstructed high-resolution image SR of size (4 × H) × (4 × W) with 3 channels is output through the convolution layer v8 with a kernel size of 9 × 9 and 3 channels;
S2.4, calculating a loss function value;
calculating pixel mean square error MSE of the reconstructed high-resolution image SR and the original high-resolution image HR, and taking the MSE as a loss function value;
$$\mathrm{MSE}=\frac{1}{(4H)(4W)}\sum_{i=1}^{4H}\sum_{j=1}^{4W}\bigl(SR(i,j)-HR(i,j)\bigr)^{2}$$
wherein SR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image SR, and HR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image HR;
S2.5, repeating steps S2.1-S2.4 to continue training the image reconstruction network, performing parameter optimization with the Adam optimization algorithm to minimize the MSE, and finally obtaining a trained image reconstruction network model;
and S3, collecting a logging image in real time, and inputting the logging image into the trained image reconstruction network so as to output a reconstructed high-resolution image.
Verification
In this embodiment, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), two indexes commonly used to evaluate high-resolution reconstruction algorithms, are compared between the reconstructed high-resolution image (SR) and the original high-resolution image (HR). PSNR compares the images at the pixel level: the higher the PSNR, the less distorted the reconstructed image and the more similar its pixels are to those of HR. The peak signal-to-noise ratio is calculated as follows:
$$\mathrm{PSNR}=10\log_{10}\frac{(2^{n}-1)^{2}}{\mathrm{MSE}}$$
where n here denotes the number of bits per pixel, typically 8.
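For reference, the PSNR defined above can be computed directly from the MSE; the sketch below assumes integer pixel values in [0, 2^n − 1]:

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, bits: int = 8) -> float:
    """PSNR as defined above: 10 * log10((2^bits - 1)^2 / MSE)."""
    mse = torch.mean((sr.float() - hr.float()) ** 2)
    peak = (2 ** bits - 1) ** 2
    return (10 * torch.log10(peak / mse)).item()
```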
SSIM compares the two images in terms of luminance, contrast and structure; the closer the SSIM is to 1, the more similar the images and the better the reconstruction effect. The structural similarity is calculated as follows:
$$l(X,Y)=\frac{2\mu_{X}\mu_{Y}+c_{1}}{\mu_{X}^{2}+\mu_{Y}^{2}+c_{1}}$$
$$c(X,Y)=\frac{2\sigma_{X}\sigma_{Y}+c_{2}}{\sigma_{X}^{2}+\sigma_{Y}^{2}+c_{2}}$$
$$s(X,Y)=\frac{\sigma_{XY}+c_{3}}{\sigma_{X}\sigma_{Y}+c_{3}}$$
$$\mathrm{SSIM}(X,Y)=l(X,Y)\cdot c(X,Y)\cdot s(X,Y)$$
where X and Y denote the images SR and HR respectively, $\mu_X$ and $\mu_Y$ are their means, $\sigma_X$ and $\sigma_Y$ their standard deviations, $\sigma_{XY}$ their covariance, and $c_1$, $c_2$, $c_3$ are constants.
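The SSIM equations above can be evaluated globally over the two images as sketched below; the constants c1 and c2 are the common defaults for 8-bit images and c3 = c2 / 2 is the usual simplification, none of which are specified in the text (practical implementations typically also compute these statistics over local windows rather than whole images):

```python
import torch

def global_ssim(x: torch.Tensor, y: torch.Tensor,
                c1: float = 6.5025, c2: float = 58.5225) -> float:
    """SSIM computed globally with the formulas above, using c3 = c2 / 2 so
    that the contrast and structure terms collapse into one expression."""
    x, y = x.float(), y.float()
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    lum = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    cs = (2 * cov_xy + c2) / (var_x + var_y + c2)   # c(X,Y) * s(X,Y) with c3 = c2/2
    return (lum * cs).item()
```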
To verify the reconstruction effect of the algorithm, it is compared with the classical deep-learning super-resolution algorithms SRCNN, VDSR and SRResNet, and with a variant of SRResNet in which the convolution layers are replaced by cavity convolution layers with a dilation rate of 2; the average PSNR and average SSIM for four-fold reconstruction on the test set are shown in fig. 5. The comparison is then repeated for two-fold reconstruction, additionally including a variant in which the single-path residual network of SRResNet is replaced by two parallel residual paths; the average PSNR and average SSIM on the test set are shown in fig. 6. The results show that both the multi-scale cavity convolution blocks and the parallel network structure are effective, and that the parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution proposed here improves the reconstruction effect.
As shown in fig. 7, the algorithm is compared with traditional bicubic interpolation and the deep-learning methods SRCNN, VDSR, ESPCN and SRResNet on four well-logging images randomly selected from the test set, in terms of the objective indexes PSNR and SSIM as well as subjective visual quality.
Although illustrative embodiments of the invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and all inventions making use of the inventive concept are protected.

Claims (1)

1. A parallel residual error network high-resolution image reconstruction method based on multi-scale cavity convolution is characterized by comprising the following steps:
(1) Acquiring and preprocessing an image;
acquiring a plurality of original high-resolution well-logging images with a borehole ultrasonic imaging instrument, cropping each high-resolution image to obtain high-resolution images HR of the same size, and performing n-fold down-sampling on each high-resolution well-logging image to obtain a low-resolution image LR of size H × W, where n is the sampling multiple;
(2) Constructing an image reconstruction network based on multi-scale cavity convolution and training;
(2.1) extracting a feature map containing shallow features;
inputting the low-resolution image LR into a convolution layer v1 with a kernel size of 9 × 9 and 64 channels, and performing shallow feature extraction with a PReLU activation function to obtain 64 feature maps of size H × W;
(2.2) constructing two parallel residual error networks;
(2.2.1) constructing a multi-scale cavity convolution block;
extracting features from the 64 feature maps simultaneously with a convolution layer v2 of 64 3 × 3 convolution kernels and a convolution layer v3 of 64 3 × 3 convolution kernels with a dilation rate of 2, then adding the outputs of v2 and v3 and feeding the sum into v2 and v3 again, and finally fusing the outputs of v2 and v3 with a 1 × 1 convolution kernel and adding the fused result directly to the 64 input feature maps to construct a multi-scale cavity convolution block;
(2.2.2) constructing a single-path residual error network: connecting a multi-scale cavity convolution block, a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer in sequence, adding the result to the 64 input feature maps to form a residual block, and connecting 16 residual blocks in series to form a single-path residual network;
(2.2.3) constructing two parallel residual error networks: extracting features simultaneously with a convolution layer v5 of 64 5 × 5 convolution kernels and a convolution layer v6 of 64 5 × 5 convolution kernels with a dilation rate of 2, adding the outputs of v5 and v6 and feeding the sum into v5 and v6 again, then fusing the outputs of v5 and v6 with a 1 × 1 convolution kernel and adding the result directly to the 64 input feature maps to construct another multi-scale cavity convolution block; this block is then combined with a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer, the 64 input feature maps are added to form another residual block, and 16 such residual blocks are connected in series to form another single-path residual network;
connecting the two single-path residual error networks in parallel to the convolution layer of step (2.1); the results output by the two parallel residual error networks of step (2.2) are fused with a 1 × 1 convolution kernel, input into a convolution layer v4 of 3 × 3 convolution kernels and a normalization layer, and the result is added directly to the output of the convolution layer of step (2.1) to obtain 64 feature maps containing high-level features;
(2.3) reconstructing a high-resolution image;
inputting the 64 feature maps containing high-level features into a convolution layer v7 with 64 × n² channels to widen the number of channels, and then into a sub-pixel convolution layer, so that single pixels from multiple channel feature maps are combined and rearranged into a group of pixels on one channel feature map, i.e. H × W × (r·n²) → (n·H) × (n·W) × r, where r is the number of channels output by the preceding stage; finally, a reconstructed high-resolution image SR of size (n × H) × (n × W) with 3 channels is output through a convolution layer v8 with a kernel size of 9 × 9 and 3 channels;
(2.4) calculating a loss function value;
calculating pixel mean square error MSE of the reconstructed high-resolution image SR and the original high-resolution image HR, and taking the MSE as a loss function value;
$$\mathrm{MSE}=\frac{1}{(nH)(nW)}\sum_{i=1}^{nH}\sum_{j=1}^{nW}\bigl(SR(i,j)-HR(i,j)\bigr)^{2}$$
wherein SR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image SR, and HR (i, j) represents the pixel value of the pixel point with the coordinate (i, j) in the high-resolution image HR;
(2.5) repeating steps (2.1)-(2.4) to continue training the image reconstruction network, performing parameter optimization with the Adam optimization algorithm to minimize the MSE, and finally obtaining a trained image reconstruction network model;
(3) Acquiring a logging image in real time and inputting it into the trained image reconstruction network to output a reconstructed high-resolution image.

Publications (2)

CN113793263A, published 2021-12-14
CN113793263B, granted 2023-04-07



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant