
Deep Learning for Single Image Super-Resolution: A Brief Review

Published: 01 December 2019

Abstract

Single image super-resolution (SISR) is a notoriously challenging ill-posed problem that aims to obtain a high-resolution output from one of its low-resolution versions. Recently, powerful deep learning algorithms have been applied to SISR and have achieved state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods and group them into two categories according to their contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, we first establish a baseline and summarize several of its critical limitations. We then present representative works that overcome these limitations, together with our critical exposition and analyses, and conduct comparisons from a variety of perspectives. Finally, we conclude this review with some current challenges and future trends in SISR that leverage deep learning algorithms.
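The ill-posedness the abstract refers to can be made concrete with a small, illustrative sketch (not from the paper itself): if the low-resolution image is modeled as a blurred and downsampled version of the high-resolution one — here simplified to 2x2 average pooling — then distinct HR images can collapse to the exact same LR image, so no upscaler can recover the original from the LR input alone.

```python
# Illustrative sketch of the SISR setting, assuming a simplified
# average-pool degradation model (a stand-in for blur + decimation).

def downsample(hr, scale=2):
    """Degrade HR -> LR by scale x scale average pooling."""
    h, w = len(hr), len(hr[0])
    return [[sum(hr[y * scale + dy][x * scale + dx]
                 for dy in range(scale) for dx in range(scale)) / scale ** 2
             for x in range(w // scale)]
            for y in range(h // scale)]

def upsample_nearest(lr, scale=2):
    """Naive baseline: replicate each LR pixel scale x scale times."""
    return [[lr[y // scale][x // scale]
             for x in range(len(lr[0]) * scale)]
            for y in range(len(lr) * scale)]

# Two different HR patches...
hr_a = [[0, 4], [4, 0]]
hr_b = [[2, 2], [2, 2]]
# ...degrade to the same single LR pixel, so the inverse problem
# has no unique solution: this is what "ill-posed" means here.
assert downsample(hr_a) == downsample(hr_b) == [[2.0]]
```

Deep SISR methods address this by learning a prior over natural images from training pairs, so that among the many HR images consistent with a given LR input, a plausible one is produced.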


Cited By

  • (2025) "A review of deep-learning-based super-resolution," Pattern Recognition, vol. 157, no. C, Jan. 2025, doi: 10.1016/j.patcog.2024.110935.
  • (2024) "Near-realtime Facial Animation by Deep 3D Simulation Super-Resolution," ACM Transactions on Graphics, vol. 43, no. 5, pp. 1–20, Aug. 2024, doi: 10.1145/3670687.
  • (2024) "Learning Cross-Spectral Prior for Image Super-Resolution," Proc. 32nd ACM Int. Conf. Multimedia, pp. 1447–1455, Oct. 2024, doi: 10.1145/3664647.3681169.


Published In

IEEE Transactions on Multimedia, Volume 21, Issue 12, Dec. 2019, 258 pages.

Publisher: IEEE Press


    Qualifiers

    • Research-article


