
ZRDNet: zero-reference image defogging by physics-based decomposition–reconstruction mechanism and perception fusion

Published: 09 October 2023

Abstract

This paper investigates the challenging fully unsupervised defogging problem: how to remove fog by feeding only foggy images into deep neural networks, rather than using paired or unpaired synthetic images, and how to overcome the insufficient recovery of structure and detail in existing unsupervised defogging methods. To this end, a zero-reference image defogging method (ZRDNet) is proposed. Specifically, we develop an unsupervised defogging network consisting of a layer decomposition network and a perceptual fusion network, which are optimized separately by a joint multiple-loss strategy based on stage-wise learning. The decomposition network guides the image decomposition–reconstruction process through carefully constructed loss functions. The fusion network further enhances the details and contrast of the defogged images by fusing the decomposition–reconstruction results. The joint multiple-loss optimization strategy guides the decomposition and fusion tasks, which are completed stage by stage. Additionally, a non-reference loss is constructed to prevent the artifacts and distortion induced by transmission value deviations. Our method is completely unsupervised: training relies only on foggy images and information derived from those images themselves. Experiments demonstrate that ZRDNet achieves favorable performance, overcoming both the insufficient recovery of structure and detail and the domain shift induced by synthetic images.
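The "physics-based decomposition–reconstruction" mentioned in the abstract refers to the standard atmospheric scattering model, in which a foggy image I is composed from the clear scene J, a transmission map t, and a global airlight A as I = J·t + A·(1 − t). The paper's networks are not reproduced on this page; as a minimal illustrative sketch (function names and values are my own, not the authors'), the decomposition and reconstruction round trip can be written as:

```python
import numpy as np

def reconstruct_fog(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t).

    J: clear scene radiance, t: transmission map, A: global airlight.
    """
    t = np.clip(t, 1e-3, 1.0)  # keep transmission in a valid range
    return J * t + A * (1.0 - t)

def recover_clear(I, t, A):
    """Invert the model to estimate the clear layer J from a foggy image I."""
    t = np.clip(t, 1e-3, 1.0)  # avoid division by near-zero transmission
    return (I - A * (1.0 - t)) / t

# Round trip: decomposing a foggy image and re-composing it is consistent.
J = np.random.rand(4, 4, 3)        # toy clear image
t = np.full((4, 4, 1), 0.6)        # uniform transmission (broadcast over RGB)
A = 0.9                            # bright global airlight
I = reconstruct_fog(J, t, A)
J_hat = recover_clear(I, t, A)
assert np.allclose(J, J_hat)
```

In the actual method, t, A, and J are predicted by the decomposition network and constrained by the non-reference losses described above; clipping t away from zero mirrors the abstract's concern that transmission value deviations induce artifacts and distortion.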


Cited By

View all
  • (2024) Jdlmask: joint defogging learning with boundary refinement for foggy scene instance segmentation. The Visual Computer: International Journal of Computer Graphics 40(11), 8155–8172. https://doi.org/10.1007/s00371-023-03230-0. Online publication date: 1-Nov-2024


    Published In

    The Visual Computer: International Journal of Computer Graphics  Volume 40, Issue 8
    Aug 2024
    782 pages

    Publisher

    Springer-Verlag, Berlin, Heidelberg

    Publication History

    Published: 09 October 2023
    Accepted: 09 September 2023

    Author Tags

    1. Unsupervised image defogging
    2. Deep neural networks
    3. Decomposition–reconstruction mechanism
    4. Perceptual fusion

    Qualifiers

    • Research-article
