Research on image Inpainting algorithm of improved GAN based on two-discriminations networks

Abstract

Existing neural-network-based image inpainting methods suffer from structural distortions and blurred textures that break visual connectivity, and overfitting can easily arise during the inpainting process. To address the shortcomings of current inpainting algorithms, such as long iteration times, poor adaptability, and unsatisfactory repair quality, this paper proposes an improved Generative Adversarial Network for image inpainting built on a deep learning architecture with two discrimination networks. The proposed method fuses an image inpainting network, a global discrimination network, and a local discrimination network into a single network that operates on the completed images. During training, the inpainting network fills the damaged region with similar patches and uses the result as its training input, which greatly improves both the speed and the quality of inpainting. The global discrimination network judges the completed image from its global structure, using edge and feature information, so that visual connectivity is assessed comprehensively. The local discrimination network judges the completed region and is additionally trained with auxiliary feature patches drawn from multiple images. Together, these components strengthen the discriminative capability and mitigate the overfitting that occurs when the available features are too concentrated and too few in number. Experimental results show that the proposed algorithm adapts better across several image categories than state-of-the-art methods.
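
As a rough illustration of the three-network fusion described above, the following minimal PyTorch sketch wires an inpainting generator to a global and a local discriminator and combines a reconstruction loss on the hole with the two adversarial terms. All layer sizes, the loss weight lam, and the crop_local helper (assumed to extract the completed region) are illustrative assumptions for exposition, not the authors' published configuration.

    # Minimal sketch (assumption: PyTorch; layer sizes, names, and loss weighting
    # are illustrative, not the published architecture of the paper).
    import torch
    import torch.nn as nn

    class InpaintingGenerator(nn.Module):
        """Encoder-decoder that fills the masked (broken) region of an image."""
        def __init__(self, channels=3, features=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(channels + 1, features, 4, stride=2, padding=1),  # image + mask
                nn.ReLU(inplace=True),
                nn.Conv2d(features, features * 2, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(features * 2, features, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(features, channels, 4, stride=2, padding=1),
                nn.Tanh(),
            )

        def forward(self, image, mask):
            # Broken input: known pixels kept, hole zeroed, mask appended as a channel.
            x = torch.cat([image * (1 - mask), mask], dim=1)
            return self.decoder(self.encoder(x))

    def make_discriminator(in_channels=3, features=64):
        """Convolutional real/fake scorer; one instance is used on the whole image
        (global discrimination) and one on the completed patch (local discrimination)."""
        return nn.Sequential(
            nn.Conv2d(in_channels, features, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(features, features * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(features * 2, 1),  # unnormalised real/fake score
        )

    def generator_step(gen, d_global, d_local, image, mask, crop_local, bce, l1, lam=0.001):
        """One generator update: reconstruction on the hole plus adversarial
        feedback from both discriminators. crop_local is a hypothetical helper
        that cuts out the completed region for the local discriminator."""
        completed = gen(image, mask)
        merged = image * (1 - mask) + completed * mask      # keep known pixels fixed
        score_g = d_global(merged)
        score_l = d_local(crop_local(merged, mask))
        adv = bce(score_g, torch.ones_like(score_g)) + bce(score_l, torch.ones_like(score_l))
        rec = l1(completed * mask, image * mask)            # reconstruct only the hole
        return rec + lam * adv

In this sketch bce would be nn.BCEWithLogitsLoss() and l1 would be nn.L1Loss(); a matching discriminator step (not shown) would score merged outputs against real images with the same two networks. Keeping the known pixels fixed in merged is what lets the global discriminator judge visual connectivity over the whole frame while the local discriminator concentrates on the filled region.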

Acknowledgments

This research was funded by the National Natural Science Foundation of China [61972056, 61402053], the Natural Science Foundation of Hunan Province of China [2020JJ4623], the Scientific Research Fund of Hunan Provincial Education Department [17A007, 19C0028, 19B005], the Changsha Science and Technology Planning [KQ1703018, KQ1706064, KQ1703018-01, KQ1703018-04], the Junior Faculty Development Program Project of Changsha University of Science and Technology [2019QJCZ011], the “Double First-class” International Cooperation and Development Scientific Research Project of Changsha University of Science and Technology [2019IC34], the Practical Innovation and Entrepreneurship Ability Improvement Plan for Professional Degree Postgraduate of Changsha University of Science and Technology [SJCX202072], the Postgraduate Training Innovation Base Construction Project of Hunan Province [2019-248-51, 2020-172-48], the Beidou Micro Project of Hunan Provincial Education Department [XJT[2020] No.149].

Author information

Corresponding author

Correspondence to Yuantao Chen.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chen, Y., Zhang, H., Liu, L. et al. Research on image Inpainting algorithm of improved GAN based on two-discriminations networks. Appl Intell 51, 3460–3474 (2021). https://doi.org/10.1007/s10489-020-01971-2
