Unsupervised Dark-Channel Attention-Guided CycleGAN for Single-Image Dehazing
- Figure 1. Our network can convert hazy images, misty images, and haze-free images into the results we expect.
- Figure 2. The original dark-channel map (a), and the dark-channel map after training and enhancement (b).
- Figure 3. The structure of the proposed network and the intermediate process of image conversion.
- Figure 4. Dehazing results on the O-HAZE dataset [10]. The proposed method shows improved dehazing and color restoration compared with other methods [3,9,11,12,20,26].
- Figure 5. Comparison of the results generated by the proposed method and CycleDehaze [9].
- Figure 6. Dehazing results of the proposed method and CycleDehaze [9] on the O-HAZE dataset [10].
- Figure 7. Qualitative results on natural hazy images in comparison with those generated by CycleDehaze [9].
- Figure 8. Results obtained with the proposed attention map and with the method of Mejjati et al. [26] during network training.
- Figure 9. Translation results of our method and other methods on haze-free images [3,9,11,12,20,26].
- Figure 10. Comparison of the dehazing effects when applied to misty images [3,9,11,12,20,26].
Abstract
1. Introduction
- Aiming at the characteristics of uneven concentrations of haze in different areas of an actual image, the proposed method enhances the network structure of CycleGAN [8], introduces a related attention mechanism, and completes an end-to-end partitioned component-dehazing process.
- The proposed method innovatively uses the enhanced dark channel as the attention map and introduces the dark-channel enhancement coefficient. Through the training of the network, the enhanced dark-channel map can more effectively and accurately mark the hazy areas and their concentrations in an image, as well as increase the difference between different haze concentrations.
- Benefiting from the characteristics of the dark channel and the attention mechanism, the proposed network better retains the original features and details of an image, and it is more robust when translating misty and haze-free images.
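The contributions above build on the dark channel prior of He et al. [3]. The sketch below shows how a dark-channel-based attention map could be computed; the function names and the fixed enhancement exponent `gamma` are illustrative stand-ins, since the paper's dark-channel enhancement coefficient is learned during training rather than fixed by hand.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior of He et al. [3]: per-pixel minimum over the
    RGB channels, followed by a minimum filter over a local patch."""
    min_rgb = image.min(axis=2)           # minimum across color channels
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):                    # local minimum filter
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def attention_map(image, patch=15, gamma=2.0):
    """Hypothetical enhanced dark-channel attention map: raising the
    normalized dark channel to a power widens the gap between low and
    high haze concentrations, so hazier regions stand out more."""
    d = dark_channel(image, patch)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize to [0, 1]
    return d ** gamma
```

In this sketch, bright (hazy) regions keep attention values near 1 while dark (haze-free) regions are pushed toward 0, which is the qualitative behavior the enhanced dark-channel map is described as having.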
2. Related Work
3. Proposed Approach
3.1. Attention Mechanism
3.2. Cyclic Generation Process
3.3. Loss Function
4. Experiments and Results
4.1. Dataset
4.2. Implementation
4.3. Results
4.4. Dehazing Robustness
4.5. Result Analysis
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Yang, D.; Sun, J. Proximal Dehaze-Net: A Prior Learning-Based Deep Network for Single Image Dehazing. In Lecture Notes in Computer Science; Springer Science and Business Media LLC: Berlin, Germany, 2018; pp. 729–746. [Google Scholar]
- Zhang, H.; Patel, V.M. Densely Connected Pyramid Dehazing Network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3194–3203. [Google Scholar]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [PubMed]
- Ancuti, C.O.; Ancuti, C.; Hermans, C.; Bekaert, P. A Fast Semi-inverse Approach to Detect and Remove the Haze from a Single Image. In Asian Conference on Computer Vision; Springer Science and Business Media: Berlin, Germany, 2011; pp. 501–514. [Google Scholar]
- Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C.; Bovik, A.C. Night-time dehazing by fusion. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2256–2260. [Google Scholar]
- Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 617–624. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; ACM Digital Library: New York, NY, USA, 2014; pp. 2672–2680. [Google Scholar]
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Engin, D.; Genc, A.; Ekenel, H.K. Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 938–9388. [Google Scholar] [CrossRef] [Green Version]
- Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free Outdoor Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 867–8678. [Google Scholar] [CrossRef] [Green Version]
- Fattal, R. Dehazing Using Color-Lines. ACM Trans. Graph. 2014, 34, 1–14. [Google Scholar] [CrossRef]
- Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1674–1682. [Google Scholar]
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems; ACM Digital Library: New York, NY, USA, 2015; pp. 91–99. [Google Scholar]
- Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single Image Dehazing via Multi-scale Convolutional Neural Networks. In Lecture Notes in Computer Science; Springer Science and Business Media: Cham, Switzerland, 2016; pp. 154–169. [Google Scholar]
- Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
- Reed, S.; Akata, Z.; Yan, X.; Logeswaran, L.; Schiele, B.; Lee, H. Generative adversarial text to image synthesis. arXiv 2016, arXiv:1605.05396. [Google Scholar]
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar]
- Swami, K.; Das, S.K. CANDY: Conditional Adversarial Networks based End-to-End System for Single Image Haze Removal. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3061–3067. [Google Scholar]
- Zhang, H.; Sindagi, V.; Patel, V.M. Joint Transmission Map Estimation and Dehazing using Deep Networks. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1975–1986. [Google Scholar] [CrossRef] [Green Version]
- Li, R.; Pan, J.; Li, Z.; Tang, J. Single Image Dehazing via Conditional Generative Adversarial Network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8202–8211. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
- Wang, X.; Qiu, S.; Liu, K.; Tang, X. Web Image Re-Ranking Using Query-Specific Semantic Signature. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 810–823. [Google Scholar] [CrossRef] [PubMed]
- Mejjati, Y.A.; Richardt, C.; Tompkin, J.; Cosker, D.; Kim, K.I. Unsupervised Attention-guided Image to Image Translation. In Advances in Neural Information Processing Systems; ACM Digital Library: New York, NY, USA, 2018; pp. 3693–3703. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
- Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor Segmentation and Support Inference from RGBD Images. In European Conference on Computer Vision; Springer: Berlin, Germany, 2012; pp. 746–760. [Google Scholar]
- Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. arXiv 2018, arXiv:1804.05091. [Google Scholar]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. Tensorflow: Large-scale machine learning on heterogeneous systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
- Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
| Metric | Fattal [11] | He et al. [3] | Berman et al. [12] | Isola et al. [20] | Mejjati et al. [26] | Engin et al. [9] | Proposed Method |
|---|---|---|---|---|---|---|---|
| PSNR ↑ | 15.502 | 13.267 | 15.235 | 14.371 | 18.430 | 18.664 | 18.732 |
| SSIM ↑ | 0.577 | 0.631 | 0.620 | 0.555 | 0.645 | 0.595 | 0.674 |
| CIEDE2000 ↓ | 17.710 | 23.505 | 18.073 | 20.790 | 11.808 | 11.175 | 10.955 |
| Metric | Engin et al. [9] | Proposed Method |
|---|---|---|
| PSNR ↑ | 16.428 | 17.125 |
| SSIM ↑ | 0.768 | 0.783 |
| CIEDE2000 ↓ | 13.072 | 11.965 |
| Metric | Engin et al. [9] (Train: I-HAZE, Test: O-HAZE) | Engin et al. [9] (Train: O-HAZE, Test: I-HAZE) | Proposed (Train: I-HAZE, Test: O-HAZE) | Proposed (Train: O-HAZE, Test: I-HAZE) |
|---|---|---|---|---|
| PSNR ↑ | 14.721 | 15.754 | 16.791 | 17.502 |
| SSIM ↑ | 0.528 | 0.692 | 0.621 | 0.753 |
| CIEDE2000 ↓ | 15.706 | 14.568 | 13.861 | 11.416 |
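The tables report PSNR (higher is better, ↑), SSIM (higher is better, ↑), and the CIEDE2000 color difference (lower is better, ↓). For reference, PSNR is the standard full-reference metric below; this is the textbook definition, not code from the paper.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image
    and a restored (dehazed) image; higher means closer to the truth."""
    mse = np.mean((reference.astype(np.float64) -
                   restored.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM and CIEDE2000 are usually taken from library implementations rather than written by hand; for example, scikit-image provides `skimage.metrics.structural_similarity` for SSIM.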
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, J.; Wu, C.; Chen, H.; Cheng, P. Unsupervised Dark-Channel Attention-Guided CycleGAN for Single-Image Dehazing. Sensors 2020, 20, 6000. https://doi.org/10.3390/s20216000