Transferring GANs: Generating Images from Limited Data

Published: 08 September 2018
DOI: 10.1007/978-3-030-01231-1_14

Abstract

Transferring knowledge of pre-trained networks to new domains by means of fine-tuning is a widely used practice for applications based on discriminative models. To the best of our knowledge, this practice has not been studied within the context of generative deep networks. Therefore, we study domain adaptation applied to image generation with generative adversarial networks. We evaluate several aspects of domain adaptation, including the impact of target domain size, the relative distance between source and target domain, and the initialization of conditional GANs. Our results show that using knowledge from pre-trained networks can shorten the convergence time and significantly improve the quality of the generated images, especially when target data is limited. We show that these conclusions also hold for conditional GANs, even when the pre-trained model was trained without conditioning. Our results further suggest that density is more important than diversity: a source model trained on one or a few densely sampled classes transfers better than models trained on more diverse datasets such as ImageNet or Places.
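
The transfer setting the abstract describes reduces to a one-line change in a standard GAN training loop: instead of random initialization, both the generator and the discriminator start from weights pre-trained on a dense source domain and are then fine-tuned adversarially on the small target set. Below is a minimal, hypothetical PyTorch sketch of that recipe. The DCGAN-style networks, the checkpoint name source_gan.pt, the hyperparameters, and the plain BCE adversarial loss (standing in for the Wasserstein-based objective the paper builds on) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of GAN transfer, assuming 32x32 RGB images and a
# DCGAN-style architecture. All names and settings are hypothetical.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 4, 4, 1, 0), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh(),  # -> (B, 3, 32, 32)
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 4, 1, 4, 1, 0),  # -> one logit per image
        )

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()

# The transfer step: initialize from source-domain weights instead of from
# scratch. "source_gan.pt" is a hypothetical checkpoint; we fabricate one
# here so the sketch runs end to end.
torch.save({"G": G.state_dict(), "D": D.state_dict()}, "source_gan.pt")
ckpt = torch.load("source_gan.pt")
G.load_state_dict(ckpt["G"])
D.load_state_dict(ckpt["D"])

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def finetune_step(real):
    """One adversarial fine-tuning step on a batch from the small target set."""
    b = real.size(0)
    fake = G(torch.randn(b, 100))
    # Discriminator update: real batch vs. detached fakes.
    loss_d = bce(D(real), torch.ones(b)) + bce(D(fake.detach()), torch.zeros(b))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator update: try to fool the updated discriminator.
    loss_g = bce(D(fake), torch.ones(b))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example: losses = finetune_step(torch.randn(8, 3, 32, 32))  # stand-in batch
```

The substantive difference from training from scratch is the pair of load_state_dict calls; everything downstream is ordinary adversarial fine-tuning, which, per the abstract, converges faster and yields better samples than random initialization when target data is scarce.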



      Published In

      Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8–14, 2018, Proceedings, Part VI
      Sep 2018
      724 pages
      ISBN: 978-3-030-01230-4
      DOI: 10.1007/978-3-030-01231-1

      Publisher

      Springer-Verlag, Berlin, Heidelberg

      Author Tags

      1. Generative adversarial networks
      2. Transfer learning
      3. Domain adaptation
      4. Image generation


      Cited By

      • (2024) High-Fidelity Cellular Network Control-Plane Traffic Generation without Domain Knowledge. Proceedings of the 2024 ACM on Internet Measurement Conference, pp. 530–544. DOI: 10.1145/3646547.3688422. Online: 4 Nov 2024
      • (2024) RefineStyle: Dynamic Convolution Refinement for StyleGAN. Pattern Recognition and Computer Vision, pp. 422–436. DOI: 10.1007/978-981-97-8692-3_30. Online: 18 Oct 2024
      • (2023) NICE. Proceedings of the 37th International Conference on Neural Information Processing Systems, pp. 13773–13801. DOI: 10.5555/3666122.3666729. Online: 10 Dec 2023
      • (2023) Analyzing the interplay between transferable GANs and gradient optimizers. Proceedings of the Companion Conference on Genetic and Evolutionary Computation, pp. 1777–1784. DOI: 10.1145/3583133.3596406. Online: 15 Jul 2023
      • (2023) SolarDetector: Automatic Solar PV Array Identification using Big Satellite Imagery Data. Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation, pp. 117–129. DOI: 10.1145/3576842.3582384. Online: 9 May 2023
      • (2022) DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains. SIGGRAPH Asia 2022 Conference Papers, pp. 1–8. DOI: 10.1145/3550469.3555416. Online: 29 Nov 2022
      • (2022) FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs. Computer Vision – ECCV 2022, pp. 598–615. DOI: 10.1007/978-3-031-19784-0_35. Online: 23 Oct 2022
      • (2021) Mind2Mind: Transfer Learning for GANs. Geometric Science of Information, pp. 851–859. DOI: 10.1007/978-3-030-80209-7_91. Online: 21 Jul 2021
