Abstract
This paper reviews the AIM 2020 challenge on extreme image inpainting. The report focuses on the proposed solutions and results for two tracks: classical image inpainting and semantically guided image inpainting. The goal of track 1 is to inpaint a large part of the image with no supervision; the goal of track 2 is to inpaint the image with access to the full semantic segmentation map of the input. The two tracks had 88 and 74 registered participants, respectively, with 11 and 6 teams competing in the final phase. This report gauges current solutions and sets a benchmark for future extreme image inpainting methods.
E. Ntavelis (entavelis@ethz.ch, ETH Zurich and CSEM SA), A. Romero, S. Bigdeli, and R. Timofte are the AIM 2020 challenge organizers, while the other authors participated in the challenge.
Appendix A contains the authors' teams and affiliations.
AIM webpage: http://www.vision.ee.ethz.ch/aim20/.
Github webpage: https://github.com/vglsd/AIM2020-Image-Inpainting-Challenge.
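The two tracks differ only in the conditioning information given to a model: track 1 provides the masked image and the mask, while track 2 additionally provides the full semantic segmentation map. A minimal sketch of how such inputs might be assembled is shown below; the function name, array shapes, and the zero-fill convention are illustrative assumptions, not the challenge's actual data pipeline.

```python
import numpy as np

def make_inpainting_input(image, mask, seg_map=None):
    """Assemble a model input for the two challenge tracks.

    image:   (H, W, 3) uint8 RGB image
    mask:    (H, W) bool array, True where pixels are removed
    seg_map: (H, W) int array of semantic labels (track 2 only)
    """
    masked = image.copy()
    masked[mask] = 0  # zero out the region to be inpainted
    if seg_map is None:
        # Track 1: classical inpainting, only the masked image and mask
        return masked, mask
    # Track 2: semantically guided, the segmentation map is also given
    return masked, mask, seg_map

# Toy usage: remove a large central block ("extreme" masking)
img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
m = np.zeros((8, 8), dtype=bool)
m[2:6, 2:6] = True
masked, mask = make_inpainting_input(img, m)
```

In this sketch the masked region covers a large fraction of the image, which is what distinguishes the "extreme" setting from conventional inpainting benchmarks with small holes.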
Acknowledgements
We thank the AIM 2020 sponsors: Huawei, MediaTek, Qualcomm AI Research, NVIDIA, Google and Computer Vision Lab/ETH Zürich.
Appendix A: Teams and affiliations
1.1 AIM2020 organizers
Members: Evangelos Ntavelis1,2 (entavelis@ethz.ch), Siavash Bigdeli2 (siavash.bigdeli@csem.ch), Andrés Romero1 (roandres@ethz.ch), Radu Timofte1 (radu.timofte@vision.ee.ethz.ch).
Affiliations: 1Computer Vision Lab, ETH Zürich. 2CSEM.
1.2 Rainbow
Title: Image fine-grained inpainting.
Members: Zheng Hui, Xiumei Wang, Xinbo Gao.
Affiliations: School of Electronic Engineering, Xidian University.
1.3 Yonsei-MVPLab
Title: Image Inpainting based on Edge and Frequency Guided Recurrent Convolutions.
Members: Chajin Shin, Taeoh Kim, Hanbin Son, Sangyoun Lee.
Affiliations: Image and Video Pattern Recognition Lab., School of Electrical and Electronic Engineering, Yonsei University, Seoul, South Korea.
1.4 BossGao
Title: Image Inpainting with Mask Awareness.
Members: Chao Li, Fu Li, Dongliang He, Shilei Wen, Errui Ding.
Affiliations: Department of Computer Vision (VIS), Baidu Inc.
1.5 ArtIst
Title: Fast Light-Weight Network for Image Inpainting.
Members: Mengmeng Bai, Shuchen Li.
Affiliations: Samsung R&D Institute China-Beijing (SRC-Beijing).
1.6 DLUT
Title: Iterative Confidence Feedback and Guided Upsampling for Filling Large Holes and Inpainting High-Resolution Images.
Members: Yu Zeng1, Zhe Lin2, Jimei Yang2, Jianming Zhang2, Eli Shechtman2, Huchuan Lu1.
Affiliations: 1Dalian University of Technology, 2Adobe.
1.7 AI-Inpainting Group
Title: MSEM: Multi-Scale Semantic-Edge Merged Model for Image Inpainting.
Members: Weijian Zeng, Haopeng Ni, Yiyang Cai, Chenghua Li.
Affiliations: Rensselaer Polytechnic Institute.
1.8 qwq
Title: Markovian Discriminator Guided Attentive Fractal Network.
Members: Dejia Xu, Haoning Wu, Yu Han.
Affiliations: Peking University.
1.9 CVIP Inpainting Team
Title: Global Spatial-Channel Attention and Inter-Layer GRU-Based Image Inpainting.
Members: Uddin S. M. Nadim, Hae Woong Jang, Soikat Hasan Ahmed, Jungmin Yoon, Yong Ju Jung.
Affiliations: Computer Vision and Image Processing (CVIP) Lab, Gachon University.
1.10 DeepInpaintingT1
Title: Deep Generative Inpainting Network for Extreme Image Inpainting.
Members: Chu-Tak Li, Zhi-Song Liu, Li-Wen Wang, Wan-Chi Siu, Daniel P.K. Lun.
Affiliations: Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong.
1.11 IPCV IITM
Title: Contextual Residual Aggregation Network.
Members: Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan.
Affiliations: Indian Institute of Technology Madras, India.
1.12 MultiCog
Title: Pix2Pix for Image Inpainting.
Members: Pratik Narang1, Murari Mandal2, Pranjal Singh Chauhan1.
Affiliations: 1BITS Pilani, 2MNIT Jaipur.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Ntavelis, E. et al. (2020). AIM 2020 Challenge on Image Extreme Inpainting. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12537. Springer, Cham. https://doi.org/10.1007/978-3-030-67070-2_43
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-67069-6
Online ISBN: 978-3-030-67070-2
eBook Packages: Computer Science, Computer Science (R0)