CN107154064B - Natural image compressed sensing method for reconstructing based on depth sparse coding - Google Patents
- Publication number: CN107154064B (application CN201710306725.7A)
- Authority: CN (China)
- Prior art keywords: training, observation, iteration, model, image
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G — Physics
- G06 — Computing; Calculating or Counting
- G06T — Image Data Processing or Generation, in General
- G06T11/00 — 2D [Two Dimensional] image generation
Abstract
The invention discloses a natural image compressed sensing reconstruction method based on deep sparse coding, which mainly solves the problem that existing methods have difficulty reconstructing natural images quickly and accurately from few coefficients. The implementation is: 1) partition the image into blocks, transform them in an orthogonal transform domain, and compute the observation vector of the transform coefficients; 2) recover the transform coefficients from the observation vector by an iterative method and update the calculation parameters; 3) compute the observation vector of the transform coefficients from 2) and find its residual against the observation vector from 1); 4) repeat steps 1)-3) to obtain a trained model and save the model parameters; 5) input test observation data and the model parameters into the trained model to obtain the image transform coefficients corresponding to the test observation data; 6) apply the inverse transform to the transform coefficients of 5) to obtain the final reconstructed natural image. The natural image reconstructed by the invention is clear and the reconstruction speed is fast, so the method can be used for the restoration of natural images.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a natural image compressed sensing reconstruction method which can be used for sampled natural image restoration.
Background
With the development of media technology, massive image data faces huge challenges in real-time transmission and storage. Compressed sensing theory opens a new theoretical path for solving these problems effectively: if a signal is sparse under some transform basis, it can be observed by random projection and accurately reconstructed from relatively few observed values using its prior information; the corresponding model is a norm-minimization problem under an observation-data fidelity constraint.
For the above compressed sensing model, different norm constraints lead to different reconstruction algorithms and different reconstruction performance. With an l0-norm constraint, the orthogonal matching pursuit (OMP) algorithm is usually used for reconstruction. Although the l0 norm matches the original idea of compressed sensing, namely finding the sparsest solution of the optimization problem, this is an NP-hard problem and the solution is often inaccurate. Researchers therefore proposed replacing the l0 norm with the l1 norm, which turns the model into a convex optimization problem, and developed a series of iterative reconstruction algorithms: the iterative shrinkage-thresholding algorithm (ISTA), the fast iterative shrinkage-thresholding algorithm (FISTA), the approximate message passing algorithm (AMP), and so on. Although these algorithms alleviate the problem, they are all solvers based on optimization theory and still suffer from limited reconstruction accuracy, complicated iteration processes, and slow convergence.
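As a concrete illustration of this iterative family, the following is a minimal NumPy sketch of ISTA for the l1-regularized model; the problem sizes, step size rule, and parameter values here are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding: shrink each entry of u toward zero by lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def ista(y, A, lam=0.1, step=None, n_iter=100):
    """Basic ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1.

    A sketch of the iterative-shrinkage family mentioned above; FISTA and
    AMP refine this same template with momentum / an Onsager correction.
    """
    if step is None:
        # step = 1/L with L the squared spectral norm of A ensures
        # monotone decrease of the objective
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x
```

With an undersampled Gaussian matrix and a sparse ground truth, a few hundred iterations already drive the objective well below its starting value, which is exactly the slow-but-steady behavior the text criticizes.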
With the development of deep learning in recent years, researchers have proposed a compressed sensing reconstruction method based on a convolutional neural network, Recon-Net, which avoids the complicated iteration process but has two problems: 1) the image reconstructed directly by the model contains considerable noise, and a good-quality reconstruction is obtained only after denoising; 2) the model converges slowly during training; even on a high-performance computer, training it takes on the order of a day.
Disclosure of Invention
The invention aims to provide a natural image compressed sensing reconstruction method based on deep sparse coding that addresses the problems of both the traditional optimization-based reconstruction methods and the current deep-learning-based reconstruction methods, so as to simplify the model, reduce its training time, and improve the reconstruction speed and reconstruction quality of the image.
The technical scheme of the invention is as follows: compressed sensing coding is performed on the natural image using its sparse prior information in a transform domain, and the coded information is recovered and reconstructed by combining the learned approximate message passing algorithm (LAMP) with a recurrent neural network (RNN) model. The implementation comprises the following steps:
(1) model training:
(1a) inputting a plurality of pictures, and taking n training image blocks X from the pictures;
(1b) carrying out compressed sensing observation on the n training image blocks X in step (1a) to obtain n observation coefficients Y, and forming n training sample pairs from the training image blocks and the corresponding observation coefficients: {(X = x_1, x_2, ..., x_{n-1}, x_n), (Y = y_1, y_2, ..., y_{n-1}, y_n)};
(1c) setting the number of model training rounds K = 100, randomly selecting r training samples y_r, x_r from Y, X each time, and training by gradient descent, where the cutoff condition flag of each training round is that the model error value has not decreased for 50 iterations;
(1d) setting the parameters of the sparse coding algorithm: initialize the iteration number T = 10, the initial transform coefficients α^0 = 0, and the observed-data residual v^0 = y, where α^t denotes the transform coefficients of the training image block computed at the t-th iteration and v^t is the observation residual of the t-th iteration;
(1e) calculating the transform coefficients of the training image block at iteration t+1: α^{t+1} = η_st(α^t + A^T C_t v^t; λ_t), with threshold λ_t = (α_t/√M)·||v^t||_2, where α^t are the transform coefficients of the t-th iteration, A^T C_t v^t are the transform coefficients of the observation residual of the t-th iteration, A^T is the transpose of the observation matrix A, C_t is the parameter matrix to be optimized at the t-th iteration, λ_t is the threshold of the t-th iteration, α_t is the scalar parameter value of the t-th update, M is the dimension of the observed data y, ||v^t||_2 denotes the l2 norm of v^t, and η_st(·) is the threshold shrinkage function;
(1f) calculating the observation residual of iteration t+1: v^{t+1} = y − A α^{t+1} + b_{t+1} v^t, where b_{t+1} v^t is the Onsager correction term;
(1g) executing (1e)-(1f) in a loop T times to obtain the transform coefficients α^T;
(1h) when the transform coefficients α^T obtained in (1g) meet the iteration cutoff condition flag, saving the model parameters of this training round;
(1i) executing (1d)-(1h) in a loop K times to finish the model training;
(2) the testing steps are as follows:
(2a) inputting the observation data y_test and the T parameter matrices {C_1, ..., C_T} and scalar parameter values {α_1, ..., α_T} obtained by model training into the trained model, to obtain the test image transform coefficients α_test corresponding to the input observation data y_test;
(2b) applying the inverse PCA transform to the transform coefficients α_test to obtain the image Img corresponding to the observation data y_test: Img = Ψ^{-1} α_test, where Ψ^{-1} denotes the inverse PCA transform matrix;
(2c) using the relation Ψ^{-1} = Ψ^T, which holds by the orthogonality of the PCA transform, rewriting the image corresponding to y_test as Img = Ψ^T α_test, thereby completing the reconstruction from the observation data y_test, where Ψ^T denotes the transpose of the PCA forward transform matrix Ψ.
Compared with the prior art, the invention has the following advantages:
First, the invention introduces the sparse prior information of natural images and combines the advantages of deep neural networks and sparse coding, reducing the complexity of the model; this shortens the reconstruction time of the image, realizes fast compressed sensing reconstruction, and improves the reconstruction quality of the natural image.
Second, a transfer-learning training strategy improves the model training speed, so the model can be trained quickly.
Drawings
FIG. 1 is a general flow chart of an implementation of the present invention;
FIG. 2 is a model training sub-flow diagram of the present invention;
FIG. 3 is a sub-flowchart of image reconstruction in the present invention;
FIG. 4 is an original natural image of Barbara used in the simulation experiment of the present invention;
fig. 5 is a graph showing the effect of the conventional TVAL3 method on the reconstruction of a Barbara image at a compression ratio of 0.25;
FIG. 6 is a graph showing the effect of reconstruction of a Barbara image at a compression ratio of 0.25 using a conventional Recon-Net method;
fig. 7 is a graph showing the effect of the present invention on the reconstruction of a Barbara image at a compression ratio of 0.25.
Detailed Description
the embodiments and effects of the present invention are described in detail below with reference to the accompanying drawings:
Referring to fig. 1, the natural image reconstruction method based on deep sparse coding of the present invention includes two parts: model training and testing. First, n training image blocks X are input and training sample pairs are constructed; model training then produces a trained model; finally, test observation data are input into the trained model for testing, yielding the reconstructed natural image.
The following describes the two parts of model training and testing of the present invention in detail:
first, model training part
Referring to fig. 2, the implementation steps of this section are as follows:
Step 1: input n training image blocks X and construct training sample pairs.
(1a) Input n training image blocks X, and for each input training image block x_i apply the principal component analysis (PCA) transform to obtain its transform coefficients α_i = Ψ x_i, where Ψ denotes the PCA forward transform matrix; by the orthogonality of the PCA transform, Ψ^{-1} = Ψ^T, so x_i = Ψ^T α_i, where Ψ^{-1} and Ψ^T denote the inverse PCA transform matrix and the transpose of the forward transform matrix Ψ, respectively;
(1b) observe each training image block according to the compressed sensing observation model to obtain the observation data y_i: y_i = A α_i + w, where A = Φ Ψ^T, Φ is an undersampled Gaussian random observation matrix, and the vector w is zero-mean white Gaussian noise;
(1c) the n training image blocks X and the corresponding n observation coefficients Y form n training sample pairs: {(X = x_1, x_2, ..., x_{n-1}, x_n), (Y = y_1, y_2, ..., y_{n-1}, y_n)}.
Step 2: set the number of model training rounds and randomly select r training sample pairs.
(2a) Set the number of model training rounds K = 100;
(2b) at each training round, randomly select r training sample pairs {x_r, y_r} from Y, X.
Step 3: set the parameters of the sparse coding algorithm and initialize the relevant variables.
(3a) Initialize the iteration number T = 10;
(3b) set the initial transform coefficients α^0 = 0 and the observed-data residual v^0 = y, where α^t denotes the image block transform coefficients computed at the t-th iteration and v^t is the observation residual of the t-th iteration.
Step 4: calculate the transform coefficients α^{t+1} of the training image block at iteration t+1.
(4a) Compute the sparse coefficients A^T C_t v^t of the observation residual v^t of the t-th iteration, where A^T is the transpose of the observation matrix A and C_t is the parameter matrix to be optimized at the t-th iteration, initialized to the identity matrix;
(4b) compute the sparse coefficients of the training image block at iteration t+1, μ^{t+1} = α^t + A^T C_t v^t, and the threshold λ_t = (α_t/√M)·||v^t||_2, where α^t are the transform coefficients of the t-th iteration, λ_t is the threshold of the t-th iteration, α_t is the scalar parameter value of the t-th update, M is the dimension of the observed data y, and ||v^t||_2 denotes the l2 norm of v^t;
(4c) compute the transform coefficients of the training image block at iteration t+1, α^{t+1} = η_st(μ^{t+1}; λ_t), where η_st(·) is the threshold shrinkage function: it compares the sparse coefficients μ^{t+1} with the threshold λ_t, sets coefficients whose magnitude is below the threshold λ_t to zero, and sets coefficients whose magnitude exceeds λ_t to their sign times the absolute value of the difference between μ^{t+1} and the threshold λ_t.
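The threshold shrinkage function η_st described in (4c) is the standard soft-thresholding operator, which can be written in one line of NumPy:

```python
import numpy as np

def eta_st(mu, lam):
    """Threshold shrinkage (soft-thresholding) as described in (4c):
    entries with |mu| <= lam are set to zero; the rest keep their sign
    and are shrunk toward zero by lam, i.e. sign(mu) * (|mu| - lam)."""
    return np.sign(mu) * np.maximum(np.abs(mu) - lam, 0.0)
```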
Step 5: calculate the observation residual v^{t+1} of iteration t+1.
(5a) First set every nonzero coefficient of the α^{t+1} obtained in step 4 to 1 and average by columns to obtain the zero norm ||α^{t+1}||_0, then calculate the weight b_{t+1} = ||α^{t+1}||_0 / M, where N is the dimension of the transform coefficients α^{t+1} and M is the dimension of the observation residual v^t;
(5b) multiply the weight b_{t+1} by the observation residual v^t to obtain the Onsager correction term b_{t+1} v^t;
(5c) substitute the b_{t+1} v^t obtained in (5b) into the formula v^{t+1} = y − A α^{t+1} + b_{t+1} v^t to calculate the observation residual of iteration t+1, where y is the observation data of the training image block and A is the observation matrix.
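Steps 4-5 together form one layer of the learned AMP recursion. The following sketch combines them into a single function under my reading of the reconstructed formulas; C_t and α_t are the learnable per-layer parameters, and all names are illustrative rather than the patent's own implementation:

```python
import numpy as np

def eta_st(mu, lam):
    """Soft-thresholding, as in step (4c)."""
    return np.sign(mu) * np.maximum(np.abs(mu) - lam, 0.0)

def lamp_iteration(y, A, alpha_hat, v, C_t, alpha_t):
    """One iteration of steps 4-5 (a sketch of the LAMP recursion).

    alpha_hat : current transform-coefficient estimate alpha^t
    v         : current observation residual v^t
    C_t       : learnable parameter matrix of this layer
    alpha_t   : learnable scalar controlling the threshold lambda_t
    """
    M = y.shape[0]
    lam = alpha_t / np.sqrt(M) * np.linalg.norm(v)   # lambda_t = (alpha_t/sqrt(M))*||v||_2
    mu = alpha_hat + A.T @ C_t @ v                   # (4b) sparse coefficients mu^{t+1}
    alpha_next = eta_st(mu, lam)                     # (4c) shrinkage
    b_next = np.count_nonzero(alpha_next) / M        # (5a) weight b_{t+1} = ||alpha||_0 / M
    v_next = y - A @ alpha_next + b_next * v         # (5c) Onsager-corrected residual
    return alpha_next, v_next
```

Unrolling this function T = 10 times, with a distinct (C_t, α_t) per layer trained by gradient descent, is what turns the iterative solver into the trainable network described in step (1c).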
Step 6: execute steps 4 to 5 in a loop T = 10 times to obtain the transform coefficients α^T.
Step 7: save the model parameters of this training round.
When the transform coefficients α^T obtained in step 6 satisfy the iteration cutoff condition flag, save the model parameters of this training round, where the cutoff condition flag of each training round is that the model error value has not decreased for 50 iterations.
Step 8: execute steps 3 to 7 in a loop K times to finish the model training.
Second, test part
Referring to fig. 3, the implementation steps of this section are as follows:
Step 9: input the test observation data and the parameters saved by the model training part to obtain the image transform coefficients α_test corresponding to the input observation data y_test.
Take out the T matrices C_t and scalars α_t saved by the model training part, denoted {C_1, ..., C_T} and {α_1, ..., α_T}, and input these two sets of parameters together with the test observation data y_test collected in a real scene into the trained model; the model outputs the image transform coefficients α_test corresponding to the observation data y_test.
Step 10: obtain the natural image Img corresponding to the observation data y_test by the inverse principal component analysis (PCA) transform.
(10a) Apply the inverse PCA transform to the transform coefficients α_test to obtain the natural image corresponding to the observation data y_test: Img = Ψ^{-1} α_test, where Ψ^{-1} denotes the inverse PCA transform matrix;
(10b) by the orthogonality of the PCA transform, Ψ^{-1} = Ψ^T, so the natural image Img corresponding to the observation data y_test can be rewritten as Img = Ψ^T α_test, completing the reconstruction from the observation data y_test, where Ψ^T denotes the transpose of the PCA forward transform matrix Ψ.
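The inverse-transform step can be checked numerically: for an orthogonal Ψ, applying Ψ^T to the forward coefficients recovers the block exactly. A small sketch, with a random orthogonal matrix standing in for the learned PCA basis:

```python
import numpy as np

# Sketch of step 10: recover an image block from its transform
# coefficients using the orthogonality relation Psi^{-1} = Psi^T.
rng = np.random.default_rng(0)
N = 64
# a random orthogonal matrix stands in for the learned PCA transform Psi
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
Psi = Q.T                     # rows play the role of principal axes

x = rng.random(N)             # original image block (vectorized)
alpha = Psi @ x               # forward transform: alpha = Psi x
img = Psi.T @ alpha           # Img = Psi^T alpha  (= Psi^{-1} alpha)
```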
The effect of the invention can be specifically illustrated by the following simulation experiment:
1. simulation conditions are as follows:
1) the programming platform used for the simulation experiments is PyCharm 2016;
2) the natural image data used by the simulation experiment come from standard training and testing data sets;
3) the size of a training image block used in the simulation experiment is 25 multiplied by 25, and the number n of training samples is 52650;
4) in the simulation experiments, the peak signal-to-noise ratio (PSNR) index is adopted to evaluate the compressed sensing results, defined as: PSNR = 10·log10(MAX_I^2 / MSE), where MAX_I is the maximum pixel value of the reconstructed natural image Img, MSE = (1/N)·Σ_{i=1}^{N}(Img_i − X_i)^2 is its mean square error with respect to the original image, and N is the number of pixels.
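A direct NumPy implementation of this PSNR definition (assuming 8-bit images, so MAX_I = 255):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX_I^2 / MSE),
    with MSE the per-pixel mean squared error between the images."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```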
2. Simulation content:
simulation 1, reconstructing the natural image Barbara at a compression rate of 0.25 by using the TVAL3 method, and the reconstruction result is shown in fig. 5.
Simulation 2, by using the Recon-Net method, reconstructs the natural image Barbara at a compression rate of 0.25, and the reconstruction result is shown in fig. 6.
Simulation 3, adopting the method of the invention to reconstruct the natural image Barbara when the compression rate is 0.25, and the reconstruction result is shown in FIG. 7.
As can be seen from the reconstruction results for the natural image Barbara shown in FIGS. 5-7, the image reconstructed by the method of the invention is clearer than the images reconstructed by the other methods, with sharper edges and better visual quality.
The peak signal-to-noise ratio PSNR obtained by respectively carrying out reconstruction simulation on a natural image Barbara by using the conventional TVAL3 method, NLR-CS method, Recon-Net method and the method disclosed by the invention is shown in Table 1.
TABLE 1 PSNR values for different reconstruction methods
As can be seen from Table 1, the peak signal-to-noise ratio PSNR of the present invention is 3.73dB higher than that of the existing TVAL3 method when the compression ratio is 0.25, and 2.00dB higher than that of the existing Recon-Net.
Claims (1)
1. The natural image reconstruction method based on the depth sparse coding comprises the following steps:
(1) model training:
(1a) inputting a plurality of pictures, and taking n training image blocks X from the pictures;
(1b) carrying out compressed sensing observation on n training image blocks X in the step (1a) to obtain n observation coefficients Y, and forming n training sample pairs by using the training image blocks and the corresponding observation coefficients:
{(X=x1,x2,...,xn-1,xn),(Y=y1,y2,...,yn-1,yn)},
the compressed sensing observation of the n training image blocks X in (1a) is implemented as follows:
(1b1) for each image block data xiCarrying out PCA transformation to obtain a transformation coefficient:wherein, Ψ and Ψ-1Respectively representing the PCA forward transform and the inverse transform thereof, and satisfying Ψ due to the orthogonality of the PCA transform-1=ΨΨTThen, there is,ΨTa transpose representing the PCA positive transformation matrix Ψ;
(1b2) observing each image block according to a compressed sensing observation model to obtain observation data:wherein A ═ phi ΨTThe matrix phi is an undersampled Gaussian random observation matrix, and the vector w is Gaussian white noise with zero mean;
(1c) setting the number of model training rounds K = 100, randomly selecting r training samples y_r, x_r from Y, X each time, and training by gradient descent, where the cutoff condition flag of each training round is that the model error value has not decreased for 50 iterations;
(1d) setting the parameters of the sparse coding algorithm: initialize the iteration number T = 10, the initial transform coefficients α^0 = 0, and the observed-data residual v^0 = y, where α^t denotes the transform coefficients of the training image block computed at the t-th iteration and v^t is the observation residual of the t-th iteration;
(1e) calculating the transform coefficients of the training image block at iteration t+1: α^{t+1} = η_st(α^t + A^T C_t v^t; λ_t), with threshold λ_t = (α_t/√M)·||v^t||_2, where α^t are the transform coefficients of the t-th iteration, A^T C_t v^t are the transform coefficients of the observation residual of the t-th iteration, A^T is the transpose of the observation matrix A, C_t is the parameter matrix to be optimized at the t-th iteration, λ_t is the threshold of the t-th iteration, α_t is the scalar parameter value of the t-th update, M is the dimension of the observed data y, ||v^t||_2 denotes the l2 norm of v^t, and η_st(·) is the threshold shrinkage function;
(1f) calculating the observation residual of iteration t+1: v^{t+1} = y − A α^{t+1} + b_{t+1} v^t, where b_{t+1} v^t is the Onsager correction term, computed as follows:
(1f1) first set every nonzero coefficient of α^{t+1} to 1 and average by columns to obtain the zero norm ||α^{t+1}||_0, then calculate the weight b_{t+1} = ||α^{t+1}||_0 / M, where N is the dimension of the transform coefficients α^{t+1} and M is the dimension of the observation residual v^t;
(1f2) multiply the weight b_{t+1} by the observation residual v^t to obtain the Onsager correction term b_{t+1} v^t;
(1g) executing (1e)-(1f) in a loop T times to obtain the transform coefficients α^T;
(1h) when the transform coefficients α^T obtained in (1g) meet the iteration cutoff condition flag, saving the model parameters of this training round;
(1i) executing (1d)-(1h) in a loop K times to finish the model training;
(2) the testing steps are as follows:
(2a) inputting the observation data y_test and the T parameter matrices {C_1, ..., C_T} and scalar parameter values {α_1, ..., α_T} obtained by model training into the trained model, to obtain the test image transform coefficients α_test corresponding to the input observation data y_test;
(2b) applying the inverse PCA transform to the transform coefficients α_test to obtain the image Img corresponding to the observation data y_test: Img = Ψ^{-1} α_test, where Ψ^{-1} denotes the inverse PCA transform matrix;
(2c) using the relation Ψ^{-1} = Ψ^T, which holds by the orthogonality of the PCA transform, rewriting the image corresponding to y_test as Img = Ψ^T α_test, thereby completing the reconstruction from the observation data y_test, where Ψ^T denotes the transpose of the PCA forward transform matrix Ψ.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201710306725.7A CN107154064B (en) | 2017-05-04 | 2017-05-04 | Natural image compressed sensing method for reconstructing based on depth sparse coding |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN107154064A (en) | 2017-09-12 |
| CN107154064B (en) | 2019-07-23 |
Family: ID=59793689

Family Applications (1)

| Application Number | Priority Date | Filing Date |
| --- | --- | --- |
| CN201710306725.7A CN107154064B (en), Active | 2017-05-04 | 2017-05-04 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN (1) | CN107154064B (en) |
Families Citing this family (7)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN107610192B (en) * | 2017-09-30 | 2021-02-12 | Xidian University | Self-adaptive observation compressed sensing image reconstruction method based on deep learning |
| CN108629409B (en) * | 2018-04-28 | 2020-04-10 | Institute of Computing Technology, Chinese Academy of Sciences | Neural network processing system for reducing IO overhead based on principal component analysis |
| CN108629410B (en) * | 2018-04-28 | 2021-01-22 | Institute of Computing Technology, Chinese Academy of Sciences | Neural network processing method based on principal component analysis dimension reduction and/or dimension increase |
| CN110717949A (en) * | 2018-07-11 | 2020-01-21 | Tianjin Polytechnic University | Interference hyperspectral image sparse reconstruction based on TROMP |
| CN110097497B (en) * | 2019-05-14 | 2023-03-24 | University of Electronic Science and Technology of China | Multi-scale image transformation and inverse transformation method based on residual multisystemlets |
| CN110381313B (en) * | 2019-07-08 | 2021-08-31 | Donghua University | Video compression sensing reconstruction method based on LSTM network and image group quality blind evaluation |
| CN113034414B (en) * | 2021-03-22 | 2022-11-11 | Shanghai Jiao Tong University | Image reconstruction method, system, device and storage medium |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN102722896A (en) * | 2012-05-22 | 2012-10-10 | Xidian University | Adaptive compressed sensing-based non-local reconstruction method for natural image |
| CN105430416A (en) * | 2015-12-04 | 2016-03-23 | Sichuan University | Fingerprint image compression method based on adaptive sparse domain coding |
| CN106530321A (en) * | 2016-10-28 | 2017-03-22 | Southern Medical University | Multi-graph image segmentation based on direction and scale descriptors |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US7770016B2 * | 1999-07-29 | 2010-08-03 | Intertrust Technologies Corporation | Systems and methods for watermarking software and other media |
Application timeline:
- 2017-05-04: CN application CN201710306725.7A filed (patent CN107154064B, active)
Non-Patent Citations (1)

| Title |
| --- |
| "Compressed sensing image reconstruction over multiple sparse spaces" (多稀疏空间下的压缩感知图像重构); Wang Liangjun et al.; Journal of Xidian University; June 2013; Vol. 40, No. 3; full text |
Also Published As

| Publication number | Publication date |
| --- | --- |
| CN107154064A (en) | 2017-09-12 |
Similar Documents

| Publication | Title |
| --- | --- |
| CN107154064B (en) | Natural image compressed sensing method for reconstructing based on depth sparse coding |
| CN106204467B (en) | Image denoising method based on cascade residual error neural network |
| CN103400402B (en) | Based on the sparse compressed sensing MRI image rebuilding method of low-rank structure |
| CN111369487B (en) | Hyperspectral and multispectral image fusion method, system and medium |
| CN105631807B (en) | The single-frame image super-resolution reconstruction method chosen based on sparse domain |
| CN107133930A (en) | Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix |
| CN106127688B (en) | A kind of super-resolution image reconstruction method and its system |
| CN106204447A (en) | The super resolution ratio reconstruction method with convolutional neural networks is divided based on total variance |
| CN103279933B (en) | A kind of single image super resolution ratio reconstruction method based on bilayer model |
| CN104867119B (en) | The structural missing image fill method rebuild based on low-rank matrix |
| CN110490832A (en) | A kind of MR image reconstruction method based on regularization depth image transcendental method |
| CN106952228A (en) | The super resolution ratio reconstruction method of single image based on the non local self-similarity of image |
| CN111127325B (en) | Satellite video super-resolution reconstruction method and system based on cyclic neural network |
| CN110675321A (en) | Super-resolution image reconstruction method based on progressive depth residual error network |
| CN105513033B (en) | A kind of super resolution ratio reconstruction method that non local joint sparse indicates |
| CN110136060B (en) | Image super-resolution reconstruction method based on shallow dense connection network |
| CN110796622B (en) | Image bit enhancement method based on multi-layer characteristics of series neural network |
| CN110139046B (en) | Tensor-based video frame synthesis method |
| CN110728728B (en) | Compressed sensing network image reconstruction method based on non-local regularization |
| CN105957022A (en) | Recovery method of low-rank matrix reconstruction with random value impulse noise deletion image |
| Xia et al. | Meta-learning-based degradation representation for blind super-resolution |
| CN105184742B (en) | A kind of image de-noising method of the sparse coding based on Laplce's figure characteristic vector |
| CN105590296B (en) | A kind of single-frame images Super-Resolution method based on doubledictionary study |
| CN114202459A (en) | Blind image super-resolution method based on depth prior |
| CN106296583B (en) | Based on image block group sparse coding and the noisy high spectrum image ultra-resolution ratio reconstructing method that in pairs maps |
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |