Abstract
New sensors and imaging systems are an indispensable foundation for mobile intelligent photography and imaging. Under-display cameras (UDCs) have become practical in smartphones, laptops, tablets, and other scenarios. However, images captured by UDCs suffer from complex degradations such as flare, haze, blur, and noise. To address these issues, we present an Enhanced Coarse-to-Fine Network (ECFNet) that effectively restores UDC images: it takes multi-scale images as input and gradually generates multi-scale results from coarse to fine. We design two enhanced core components in ECFNet, the Enhanced Residual Dense Block (ERDB) and the multi-scale Cross-Gating Fusion Module (CGFM), and further introduce progressive training and model ensemble strategies to improve the results. Experimental results show superior performance over existing state-of-the-art methods both quantitatively and visually, and ECFNet achieves the best performance on all evaluation metrics in the MIPI 2022 Under-Display Camera Image Restoration challenge track. Our source code is available at this repository.
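To make the coarse-to-fine idea from the abstract concrete, the following is a minimal sketch of a multi-scale restoration pipeline: the degraded image is downsampled into a pyramid, the coarsest scale is restored first, and each coarse estimate is upsampled and fused into the next, finer stage. This is not the authors' implementation; the stage module (a stand-in for the ERDB/CGFM-based sub-network), the simple additive fusion, and all hyper-parameters are illustrative assumptions.

```python
# Hedged sketch of a coarse-to-fine, multi-scale restoration forward pass.
# SimpleStage and the additive fusion are placeholders, NOT the paper's ERDB/CGFM.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleStage(nn.Module):
    """Placeholder for one restoration stage (assumed residual conv block)."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction and add it back at this scale.
        return x + self.body(x)


class CoarseToFineSketch(nn.Module):
    def __init__(self, num_scales=3):
        super().__init__()
        self.num_scales = num_scales
        self.stages = nn.ModuleList(SimpleStage() for _ in range(num_scales))

    def forward(self, img):
        # Build the multi-scale input pyramid, then order it coarse -> fine.
        pyramid = [img]
        for _ in range(self.num_scales - 1):
            pyramid.append(F.interpolate(pyramid[-1], scale_factor=0.5,
                                         mode='bilinear', align_corners=False))
        pyramid = pyramid[::-1]

        outputs, prev = [], None
        for scale_input, stage in zip(pyramid, self.stages):
            if prev is not None:
                # Upsample the coarser result and fuse it with the finer input
                # (crude additive stand-in for the paper's cross-gating fusion).
                prev_up = F.interpolate(prev, size=scale_input.shape[-2:],
                                        mode='bilinear', align_corners=False)
                scale_input = scale_input + prev_up
            prev = stage(scale_input)
            outputs.append(prev)
        return outputs  # multi-scale restored results, coarse to fine


if __name__ == "__main__":
    model = CoarseToFineSketch()
    results = model(torch.rand(1, 3, 128, 128))
    print([tuple(r.shape) for r in results])  # 32x32, 64x64, 128x128 outputs
```

In a setup like this, each scale's output can be supervised against a correspondingly downsampled ground-truth image, which is the usual way multi-scale coarse-to-fine restoration networks are trained.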
Y. Zhu—This work was done during his internship at Shanghai AI Laboratory.
Acknowledgement
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61901433 and in part by the USTC Research Funds of the Double First-Class Initiative under Grant YD2100002003. It was also supported in part by the Shanghai Committee of Science and Technology (Grant No. 21DZ1100100).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhu, Y., Wang, X., Fu, X., Hu, X. (2023). Enhanced Coarse-to-Fine Network for Image Restoration from Under-Display Cameras. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13805. Springer, Cham. https://doi.org/10.1007/978-3-031-25072-9_9
DOI: https://doi.org/10.1007/978-3-031-25072-9_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25071-2
Online ISBN: 978-3-031-25072-9
eBook Packages: Computer Science, Computer Science (R0)