Nighttime Image Dehazing by Render
Figure 1. Comparison of nighttime dehazing results. Our method outperforms several state-of-the-art algorithms, including NDIM [1], NHRG [2], MRP [3], and OSFD [4], in removing haze produced by directional light sources while maintaining color consistency. The data generated with our method preserve reliable scene details and realistic fog effects, providing a more accurate representation of real-world scenes.

Figure 2. Overview of the experimental framework. We obtain the haze intensity index from the foggy image, select the corresponding parameters in the engine, and build a three-dimensional scene to create a paired dataset. A network trained on this new dataset then processes the foggy image. The portion without a dashed box shows the correspondence between image dehazing and computer rendering.

Figure 3. Preview of our simulation dataset. The dataset is designed to accurately reproduce the various scattering effects of nighttime haze under artificial light sources. From top to bottom, the haze gradually increases in intensity; the dataset can be adjusted to multiple haze levels, making it an effective tool for modeling a wide range of real-world scenarios.

Figure 4. Haze characteristics of our generated images. We compare different features within the same picture and mark the position of each patch on the left.

Figure 5. Degradation kernel estimation of haze rendering. The inset in the upper-left corner illustrates the relationship between haze intensity and the actual haze point spread function (HPSF); haze intensity increases from left to right (0–0.06). The upper images show the haze degradation of a point light source, while the lower images show the haze change of a directional light source in a simulated scene (taken from the red box in the left scene).

Figure 6. Image dehazing result comparison with other nighttime dehazing methods; the test set is based on the Flickr dataset [3].
Abstract
1. Introduction
- We convert nighttime haze, which is difficult to handle accurately in two-dimensional space, into three-dimensional space, and propose a method for accurately describing nighttime haze with a three-dimensional scattering formula. Through the derivation of radiative transfer equations and volume rendering formulas, we show that our three-dimensional haze rendering approach conforms to the scattering relationship.
- We use a rendering engine based on this three-dimensional scattering formulation to create a simulation dataset for nighttime dehazing. Training our existing network on this dataset yields strong nighttime dehazing performance.
- We propose a haze point spread test method, based on the optical point spread function, to accurately characterize haze intensity levels, ensuring that the haze density of the training data matches that of real scenes. Building on the unique texture characteristics of nighttime haze, we also propose several network structures and data augmentation methods; our ablation experiments demonstrate the effectiveness of these improvements.
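For orientation, the classical single-scattering haze model that such a three-dimensional formulation generalizes can be written as follows (this is the standard formulation, not the paper's exact equations):

```latex
% Classical atmospheric scattering model:
% I(x): observed hazy image, J(x): scene radiance, A: airlight,
% t(x): transmission, beta: scattering coefficient, d(x): scene depth.
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```

At night the global airlight A is replaced by spatially varying artificial light sources, which is what motivates moving the simulation into three-dimensional space.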
2. Related Work
2.1. Image Dehazing
2.2. Nighttime Dehazing
2.3. Haze Image Dataset and Rendering
3. Proposed Method
3.1. Nighttime Image Haze Model
3.2. Construct Nighttime Dehazing Data
3.3. Haze Concentration and Haze Parameters
3.4. Improved Night Image Dehazing
4. Experimental Results
4.1. Ablation Study with Different HPSF Concentrations
4.2. Ablation Study with Data Preprocessing
4.3. Ablation Study with Loss
4.4. Testing on Synthetic Images
4.5. Evaluation on Real Photographs
5. Conclusions
- We propose a new compositing method that leverages rendering techniques to recreate complex real-world lighting and haze. Compared with other data construction methods, our results are more consistent with real scenes and with optical scattering models.
- To match hazy images of different concentrations and render datasets at the corresponding concentrations, we propose a haze point spread function (HPSF) based on the optical point spread function (PSF).
- We improve the data preprocessing method and loss function of the neural network based on the image characteristics of nighttime haze images.
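The HPSF idea can be illustrated with a minimal radial-profile measurement around a bright point source. The function names and the Gaussian glow stand-in below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def radial_profile(img, center):
    """Mean intensity as a function of integer distance from `center`."""
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - center[0], x - center[1]).astype(int)
    # Average all pixels that share the same integer radius.
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def synthetic_glow(size=65, sigma=3.0):
    """Stand-in for a hazy point light: an isotropic Gaussian glow."""
    c = size // 2
    y, x = np.indices((size, size))
    return np.exp(-((y - c) ** 2 + (x - c) ** 2) / (2 * sigma ** 2))

# A stronger haze level is modeled here as a wider glow (larger sigma):
light = radial_profile(synthetic_glow(sigma=2.0), (32, 32))
heavy = radial_profile(synthetic_glow(sigma=6.0), (32, 32))
print(light[10] < heavy[10])  # the heavy-haze profile decays more slowly
```

Comparing such profiles between a rendered scene and a real photograph is one way to check that the rendered haze density matches the real one.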
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Jing, Z.; Yang, C.; Wang, Z. Nighttime haze removal based on a new imaging model. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014.
2. Yu, L.; Tan, R.T.; Brown, M.S. Nighttime Haze Removal with Glow and Multiple Light Colors. In Proceedings of the IEEE International Conference on Computer Vision, Washington, DC, USA, 7–13 December 2015.
3. Jing, Z.; Yang, C.; Shuai, F.; Yu, K.; Chang, W.C. Fast Haze Removal for Nighttime Image Using Maximum Reflectance Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
4. Zhang, J.; Cao, Y.; Zha, Z.J.; Tao, D. Nighttime Dehazing with a Synthetic Benchmark. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, New York, NY, USA, 12–16 October 2020; pp. 2355–2363.
5. Tang, K.; Yang, J.; Wang, J. Investigating Haze-relevant Features in a Learning Framework for Image Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014.
6. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
7. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198.
8. Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M.H. Single Image Dehazing via Multi-scale Convolutional Neural Networks with Holistic Edges. Int. J. Comput. Vis. 2020, 128, 240–259.
9. Chen, D.; He, M.; Fan, Q.; Liao, J.; Hua, G. Gated Context Aggregation Network for Image Dehazing and Deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019.
10. Song, Y.; He, Z.; Qian, H.; Du, X. Vision transformers for single image dehazing. IEEE Trans. Image Process. 2023, 32, 1927–1941.
11. Vashishth, S.; Joshi, R.; Prayaga, S.S.; Bhattacharyya, C.; Talukdar, P. RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018.
12. Ancuti, C.O.; Ancuti, C.; Timofte, R.; Vleeschouwer, C.D. O-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free Outdoor Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018.
13. Ancuti, C.; Ancuti, C.O.; Vleeschouwer, C.D.; Bovik, A. Night-time dehazing by fusion. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016.
14. Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor Segmentation and Support Inference from RGBD Images. In Proceedings of the ECCV, Florence, Italy, 7–13 October 2012.
15. Ancuti, C.O.; Ancuti, C.; Timofte, R.; Vleeschouwer, C.D. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. In Proceedings of the Advanced Concepts for Intelligent Vision Systems: 19th International Conference, ACIVS 2018, Poitiers, France, 24–27 September 2018.
16. Ancuti, C.O.; Ancuti, C.; Sbert, M.; Timofte, R. Dense Haze: A benchmark for image dehazing with dense-haze and haze-free images. arXiv 2019, arXiv:1904.02904.
17. Ancuti, C.O.; Ancuti, C.; Timofte, R. NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and Haze-Free Images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020.
18. Xie, C.; Mousavian, A.; Xiang, Y.; Fox, D. RICE: Refining Instance Masks in Cluttered Environments with Graph Neural Networks. Conf. Robot. Learn. 2021, 164, 1655–1665.
19. Novák, J.; Georgiev, I.; Hanika, J.; Křivánek, J.; Jarosz, W. Monte Carlo methods for physically based volume rendering. In Proceedings of the ACM SIGGRAPH 2018 Courses, New York, NY, USA, 12–16 August 2018.
20. Hillaire, S. A Scalable and Production Ready Sky and Atmosphere Rendering Technique. Comput. Graph. Forum 2020, 39, 13–22.
21. EPIC. 2020. Available online: https://docs.unrealengine.com/5.0/en-US (accessed on 1 January 2023).
22. Unrealengine. 2019. Available online: https://docs.unrealengine.com/4.27/en-US/BuildingWorlds/FogEffects/ (accessed on 1 January 2023).
23. Afifi, M.; Abdelhamed, A.; Abuolaim, A.; Punnappurath, A.; Brown, M.S. CIE XYZ Net: Unprocessing Images for Low-Level Computer Vision Tasks, 2020.
24. Chen, S.; Feng, H.; Gao, K.; Xu, Z.; Chen, Y. Extreme-Quality Computational Imaging via Degradation Framework. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 2632–2641.
25. Chang, M.; Li, Q.; Feng, H.; Xu, Z. Spatial-Adaptive Network for Single Image Denoising. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020.
26. Rabbani, M.; Jones, P.W. Digital Image Compression Techniques; SPIE Press: Bellingham, WA, USA, 1991; Volume 7.
27. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
28. Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30.
Train Concentration | 0.1 | 0.2 | 0.3 | 0.4 |
---|---|---|---|---|
test 0.1 (PSNR↑/SSIM↑) | 30.94/0.956 | 29.58/0.955 | 28.31/0.950 | 28.23/0.933 |
test 0.2 (PSNR↑/SSIM↑) | 27.64/0.932 | 28.59/0.937 | 28.03/0.928 | 26.84/0.920 |
test 0.3 (PSNR↑/SSIM↑) | 23.97/0.895 | 25.65/0.903 | 27.16/0.915 | 26.48/0.912 |
test 0.4 (PSNR↑/SSIM↑) | 20.10/0.845 | 20.23/0.852 | 21.69/0.861 | 26.02/0.905 |
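The PSNR values above follow the standard definition, 10 log10(MAX²/MSE). A minimal sketch (not the paper's evaluation code):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)  # uniform error of 0.1 -> MSE = 0.01
print(psnr(a, b))         # ~20.0 dB
```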
Augmentation | Setting 1 | Setting 2 | Setting 3 | Setting 4 |
---|---|---|---|---|
Horizontal Flip | ✓ | ✓ | | |
Vertical Flip | ✓ | ✓ | | |
PSNR↑ | 29.17 (2%) | 29.37 (0%) | 29.02 (4%) | 28.91 (5%) |
SSIM↑ | 0.860 (6%) | 0.869 (0%) | 0.862 (5%) | 0.864 (4%) |
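The flip augmentations ablated above amount to simple array operations; a sketch using NumPy (the paper's actual training pipeline is not shown here):

```python
import numpy as np

def augment_flips(img, horizontal=True, vertical=False):
    """Apply optional horizontal/vertical flips to an H x W x C image."""
    if horizontal:
        img = img[:, ::-1, :]  # mirror left-right
    if vertical:
        img = img[::-1, :, :]  # mirror top-bottom
    return img

x = np.arange(12).reshape(2, 2, 3)
h = augment_flips(x, horizontal=True, vertical=False)
print(h[0, 0, 0])  # was x[0, 1, 0], i.e. 3
```

For paired dehazing data, the same flip must be applied to the hazy input and its clean ground truth so the correspondence is preserved.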
Haze Concentration | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 |
---|---|---|---|---|---|---|
L1 Loss (PSNR/SSIM) | 30.59/0.956 | 27.54/0.933 | 23.68/0.890 | 21.43/0.851 | 19.04/0.796 | 17.80/0.757 |
LightLoss (PSNR/SSIM) | 30.94/0.956 | 28.59/0.937 | 27.16/0.915 | 26.02/0.905 | 25.23/0.861 | 24.89/0.852 |
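The exact LightLoss formula is not reproduced in this excerpt. As an illustration only, one plausible form is an L1 loss with extra weight on bright (light-source) regions; the weighting scheme and the parameter `k` below are assumptions, not the paper's definition:

```python
import numpy as np

def light_weighted_l1(pred, target, k=2.0):
    """L1 loss with per-pixel weights that grow with target luminance.

    `k` (assumed) controls how strongly bright regions are emphasized;
    k = 0 reduces to plain mean absolute error.
    """
    luminance = target.mean(axis=-1, keepdims=True)  # crude luminance estimate
    weights = 1.0 + k * luminance                    # brighter -> heavier penalty
    return np.mean(weights * np.abs(pred - target))

pred = np.zeros((4, 4, 3))
dark = np.full((4, 4, 3), 0.1)    # dim target region
bright = np.full((4, 4, 3), 0.9)  # bright target region
# The same per-pixel error is penalized more in bright regions:
print(light_weighted_l1(pred, dark), light_weighted_l1(pred, bright))
```

This kind of weighting is consistent with the table above, where the gain over plain L1 grows as haze (and hence glow around lights) gets denser.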
Method | Input | NDIM | NHRG | MRP | OSFD | SADNet (Ours) | Restormer (Ours) |
---|---|---|---|---|---|---|---|
PSNR↑ | 26.17 (44%) | 20.56 (176%) | 23.95 (87%) | 26.43 (40%) | 27.72 (21%) | 28.44 (11%) | 29.37 (0%) |
SSIM↑ | 0.737 (101%) | 0.680 (144%) | 0.754 (88%) | 0.792 (59%) | 0.837 (24%) | 0.854 (11%) | 0.869 (0%) |
CIE2000↓ | 146.3 (42%) | 130.1 (26%) | 118.7 (15%) | 132.6 (28%) | 128.9 (25%) | 103.3 (0%) | 102.8 (0%) |
Time/s↓ | – | 20.58 | 25.03 | 1.56 | 0.772 | 1.692 | 1.988 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jin, Z.; Feng, H.; Xu, Z.; Chen, Y. Nighttime Image Dehazing by Render. J. Imaging 2023, 9, 153. https://doi.org/10.3390/jimaging9080153