Abstract
Generative adversarial networks (GANs) have achieved remarkable success in many computer vision applications owing to their ability to learn complex data distributions. In particular, they can generate realistic images from a latent space with a simple and intuitive structure. The main focus of existing models has been on improving performance, while little attention has been paid to building a robust model. In this paper, we investigate solutions to the super-resolution problem, in particular perceptual quality, by proposing a robust GAN. Unlike the standard GAN, the proposed model employs two generators and two discriminators: one discriminator determines whether samples come from the real data or are generated, while the other acts as a classifier that returns wrongly generated samples to their corresponding generators. The generators learn a mixture of distributions, mapping the prior to the complex data distribution. The model is trained with a feature matching loss, which allows wrong samples to be returned to their corresponding generators so that realistic-looking samples can be regenerated. Experimental results on various datasets show the superiority of the proposed model over state-of-the-art methods.
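To make the abstract's architecture concrete, the sketch below shows, in a PyTorch-style form, how a feature matching loss and a second, classifier-style discriminator could be wired around two generators. This is a minimal illustration under our own assumptions, not the authors' implementation: the toy networks, the names g1, g2, d_adv, d_cls, and the 32x32 shapes are illustrative only.

```python
# Minimal sketch (not the authors' code) of a dual-GAN training step with a
# feature matching loss, assuming a PyTorch-style setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyG(nn.Module):
    """Toy generator: latent vector -> 32x32 single-channel image."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 32 * 32), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 1, 32, 32)

class TinyD(nn.Module):
    """Toy discriminator exposing intermediate features for feature matching."""
    def __init__(self, out_dim=1):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(),
                                      nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2))
        self.head = nn.Linear(256, out_dim)
    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

def feature_matching_loss(d, real, fake):
    # Match the mean discriminator features of real and generated batches.
    _, f_real = d(real)
    _, f_fake = d(fake)
    return F.mse_loss(f_fake.mean(dim=0), f_real.mean(dim=0).detach())

# One illustrative generator step: d_adv supplies the real/fake signal,
# d_cls (a 2-way head) decides which generator a "wrong" sample came from.
g1, g2 = TinyG(), TinyG()
d_adv, d_cls = TinyD(out_dim=1), TinyD(out_dim=2)
opt_g = torch.optim.Adam(list(g1.parameters()) + list(g2.parameters()), lr=2e-4)

real = torch.rand(8, 1, 32, 32) * 2 - 1      # stand-in for real HR patches
z = torch.randn(8, 64)
fake1, fake2 = g1(z), g2(z)

g_loss = feature_matching_loss(d_adv, real, fake1) + \
         feature_matching_loss(d_adv, real, fake2)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Routing idea: samples that d_cls attributes to the wrong generator are sent
# back to that generator for regeneration in the next iteration.
logits, _ = d_cls(torch.cat([fake1, fake2]).detach())
wrong_for_g1 = (logits.argmax(dim=1)[:8] != 0)  # indices g1 should regenerate
```

In the paper's super-resolution setting the generators would take low-resolution inputs rather than pure noise vectors; the sketch only isolates the feature matching and routing mechanics described in the abstract.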
Acknowledgements
This research was partly supported by the NSFC, China (U1803261, 61876107, 61572315) and the 973 Plan, China (2015CB856004). H. Zhou was supported by the UK EPSRC under Grant EP/N011074/1, the Royal Society-Newton Advanced Fellowship under Grant NA160342, and the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No. 720325.
Ethics declarations
Conflict of interest
We have no conflict of interest to declare.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Zareapoor, M., Zhou, H. & Yang, J. Perceptual image quality using dual generative adversarial network. Neural Comput & Applic 32, 14521–14531 (2020). https://doi.org/10.1007/s00521-019-04239-0