Low-Light Sparse Polarization Demosaicing Network (LLSPD-Net): Polarization Image Demosaicing Based on Stokes Vector Completion in Low-Light Environment
Figure 1. Polarization sensor structure. (a) An illustration of the polarization filter and Bayer filter distribution for the DoFP sensor. (b) The color polarization sensor used for the dataset acquisition in this paper.
Figure 2. (a) The 0° image; the lower left is the low-light image, and the upper right is the image after direct luminance magnification. (b) The S0 image obtained from linearly amplified low-light images at 0°, 45°, 90°, and 135°. (c,d) The DoLP and AoP images, respectively, under low-light conditions.
Figure 3. Noise creates polarization artifacts. Photo credit: Tim Caro.
Figure 4. The inputs to the network are the RGB image obtained by demosaicing the raw image, the sparse polarization components S1,2, and the masked image M.
Figure 5. Structure of the residual module (RM). It contains six convolutional layers plus one shortcut-connection convolutional layer.
Figure 6. Polarization completion network, which contains an RGB image guidance branch and a Stokes vector generation branch; the repetitive guidance (RG) module is used to refine the Stokes vector.
Figure 7. Dataset sample richness and completeness, shown in 2D and 3D by normalizing the Stokes parameter values of the polarization dataset. The first row is a categorical representation of the LLCP dataset samples; the second row is a categorical representation of the self-constructed L-Polarization dataset in this paper. The vertical coordinate of the second column indicates the median intensity value S0; the vertical coordinate of the third column indicates the median LBP value of S0.
Figure 8. Results of RGB image enhancement, Stokes vector completion, and DoLP and AoP imaging in low-light environments. (a) Indoor environment; (b) outdoor environment.
Figure 9. Results of qualitative comparison with several classical PDM models.
Figure 10. Ablation validation of different components of the LLSPD-Net model. W/O Intensity Network denotes direct polarization imaging without the low-light image enhancement stage, W/O DRM denotes removing the detail enhancement module from the intensity network, and W/O RG denotes removing the repetitive guidance module from the Stokes completion network.
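The Stokes parameters and the derived DoLP and AoP referenced in these figures follow the standard linear-polarimetry relations. The snippet below is a minimal NumPy sketch of that computation; function and variable names are ours, not from the paper.

```python
import numpy as np

def stokes_from_angles(i0, i45, i90, i135, eps=1e-8):
    """Linear Stokes parameters, DoLP, and AoP from the four analyzer-angle
    intensity images of a DoFP sensor (all arrays of the same shape)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)               # total intensity (average of the two pairs)
    s1 = i0 - i90                                    # 0°/90° difference
    s2 = i45 - i135                                  # 45°/135° difference
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                   # angle of polarization, radians in [-pi/2, pi/2)
    return s0, s1, s2, dolp, aop
```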
Abstract
1. Introduction
- (1) An imaging model for polarized images in low-light environments, LLSPD-Net, is designed to generate polarization images with less noise while enhancing the low-light polarization intensity images.
- (2) To obtain high-quality RGB images and polarization images simultaneously, we design a Stokes-vector completion method built on an hourglass network structure and simulate the sparse arrangement of the polarization filters (a hedged sketch of such a sparse-mask simulation follows this list).
- (3) We collected a low-light polarization dataset, L-Polarization, covering different materials and indoor and outdoor scenes; it is a paired dataset containing 300 sets of images captured in low-light and normal-light environments.
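As a rough illustration of what simulating the sparse arrangement of polarization filters can look like, the sketch below builds binary per-angle masks for a hypothetical sparse layout; the actual filter arrangement and mask construction used by LLSPD-Net may differ.

```python
import numpy as np

def sparse_polarization_masks(height, width, cell=4):
    """Binary masks for a *hypothetical* sparse DoFP layout: in every
    `cell` x `cell` block, one pixel carries an analyzer at each of the four
    angles (0, 45, 90, 135 degrees) and the remaining pixels are ordinary
    color pixels. The layout used in the paper may differ."""
    masks = {angle: np.zeros((height, width), dtype=np.float32)
             for angle in (0, 45, 90, 135)}
    offsets = {0: (0, 0), 45: (0, 1), 90: (1, 0), 135: (1, 1)}  # illustrative placement only
    for angle, (dy, dx) in offsets.items():
        masks[angle][dy::cell, dx::cell] = 1.0
    return masks

# The combined mask M marks pixels where any polarization measurement exists:
# masks = sparse_polarization_masks(1024, 1224); M = sum(masks.values())
```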
2. Related Works
2.1. Low-Light Image Enhancement Methods
2.2. Polarization Demosaicing and Depth Completion Methods
3. Method
3.1. Overview
3.2. Network Architecture
3.2.1. Intensity Network
3.2.2. Polarization Completion Network
3.3. Loss Function
- (1) Intensity loss: a content loss combined with a perceptual loss, where the perceptual loss is computed on features extracted by the pre-trained VGG-16 model before the activation layer.
- (2) Stokes loss: defined on the completed Stokes components S1,2. A hedged sketch of both loss terms is given after this list.
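The paper's exact loss equations are not reproduced in this extract; the following is a hedged sketch of their typical form, assuming an L1 content term, a VGG-16 feature-space perceptual term with weight λp, and an L1 penalty on the completed Stokes components. The actual norms and weights may differ.

```latex
% Hedged reconstruction; the paper's exact norms and weights may differ.
\mathcal{L}_{\mathrm{int}}    = \big\lVert \hat{I} - I_{\mathrm{gt}} \big\rVert_{1}
                              + \lambda_{p}\,\big\lVert \phi(\hat{I}) - \phi(I_{\mathrm{gt}}) \big\rVert_{2}^{2},
\qquad
\mathcal{L}_{\mathrm{Stokes}} = \sum_{k\in\{1,2\}} \big\lVert \hat{S}_{k} - S_{k,\mathrm{gt}} \big\rVert_{1},
```

where φ(·) denotes the VGG-16 features taken before the activation layer and λp weights the perceptual term.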
4. Experimental Section
4.1. Experimental Configurations
- (1) Dataset:
- (2) Training details:
4.2. Experimental Results
4.2.1. Comparison with Low-Light Image Enhancement Methods
4.2.2. Comparison with the Basic Polarization Demosaicing Method
4.2.3. Ablation Experiments
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Gao, D.; Li, Y.; Ruhkamp, P.; Skobleva, I.; Wysocki, M.; Jung, H.; Wang, P.; Guridi, A.; Busam, B. Polarimetric pose prediction. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 735–752. [Google Scholar]
- Kalra, A.; Taamazyan, V.; Rao, S.K.; Venkataraman, K.; Raskar, R.; Kadambi, A. Deep polarization cues for transparent object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8602–8611. [Google Scholar]
- Pang, Y.; Yuan, M.; Fu, Q.; Ren, P.; Yan, D.M. Progressive polarization based reflection removal via realistic training data generation. Pattern Recognit. 2022, 124, 108497. [Google Scholar] [CrossRef]
- Ono, T.; Kondo, Y.; Sun, L.; Kurita, T.; Moriuchi, Y. Degree-of-linear-polarization-based color constancy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 19740–19749. [Google Scholar]
- Kupinski, M.; Li, L. Evaluating the Utility of Mueller Matrix Imaging for Diffuse Material Classification. J. Imaging Sci. Technol. 2020, 64, 060409-1–060409-7. [Google Scholar] [CrossRef]
- Qiu, S.; Fu, Q.; Wang, C.; Heidrich, W. Linear polarization demosaicking for monochrome and colour polarization focal plane arrays. Comput. Graph. Forum. 2021, 40, 77–89. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar]
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 2, pp. 60–65. [Google Scholar]
- Priyanka, S.A.; Wang, Y.K.; Huang, S.Y. Low-light image enhancement by principal component analysis. IEEE Access 2018, 7, 3082–3092. [Google Scholar] [CrossRef]
- Liu, J.; Xu, D.; Yang, W.; Fan, M.; Huang, H. Benchmarking low-light image enhancement and beyond. Int. J. Comput. Vis. 2021, 129, 1153–1184. [Google Scholar] [CrossRef]
- Lv, F.; Li, Y.; Lu, F. Attention guided low-light image enhancement with a large scale low-light simulation dataset. Int. J. Comput. Vis. 2021, 129, 2175–2193. [Google Scholar] [CrossRef]
- Wen, S.; Zheng, Y.; Lu, F.; Zhao, Q. Convolutional demosaicing network for joint chromatic and polarimetric imagery. Opt. Lett. 2019, 44, 5646–5649. [Google Scholar] [CrossRef] [PubMed]
- Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Beyond color difference: Residual interpolation for color image demosaicking. IEEE Trans. Image Process. 2016, 25, 1288–1300. [Google Scholar] [CrossRef] [PubMed]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; IEEE: Piscataway, NJ, USA, 2007; Volume 1, pp. I-313–I-316. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
- Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recognit. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646. [Google Scholar]
- Le Teurnier, B.; Boffety, M.; Goudail, F. Error model for linear DoFP imaging systems perturbed by spatially varying polarization states. Appl. Opt. 2022, 61, 7273–7282. [Google Scholar] [CrossRef] [PubMed]
- Luo, Y.; Zhang, J.; Tian, D. Sparse representation-based demosaicking method for joint chromatic and polarimetric imagery. Opt. Lasers Eng. 2023, 164, 107526. [Google Scholar] [CrossRef]
- Li, N.; Zhao, Y.; Pan, Q.; Kong, S.G. Demosaicking DoFP images using Newton’s polynomial interpolation and polarization difference model. Opt. Express 2019, 27, 1376–1391. [Google Scholar] [CrossRef] [PubMed]
- Zeng, X.; Luo, Y.; Zhao, X.; Ye, W. An end-to-end fully-convolutional neural network for division of focal plane sensors to reconstruct S0, DoLP, and AoP. Opt. Express 2019, 27, 8566–8577. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Funkhouser, T. Deep depth completion of a single rgb-d image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 175–185. [Google Scholar]
- Hegde, G.; Pharale, T.; Jahagirdar, S.; Nargund, V.; Tabib, R.A.; Mudenagudi, U.; Vandrotti, B.; Dhiman, A. Deepdnet: Deep dense network for depth completion task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 2190–2199. [Google Scholar]
- Chipman, R.; Lam, W.S.T.; Young, G. Polarized Light and Optical Systems; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Liu, J.; Duan, J.; Hao, Y.; Chen, G.; Zhang, H.; Zheng, Y. Polarization image demosaicing and RGB image enhancement for a color polarization sparse focal plane array. Opt. Express 2023, 31, 23475–23490. [Google Scholar] [CrossRef] [PubMed]
- Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2rnet: Low-light image enhancement via real-low to real-normal network. J. Vis. Commun. Image Represent. 2023, 90, 103712. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. Learning a discriminative feature network for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1857–1866. [Google Scholar]
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, J.; Yang, J. RigNet: Repetitive image guided network for depth completion. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 214–230. [Google Scholar]
- Xu, X.; Wan, M.; Ge, J.; Chen, H.; Zhu, X.; Zhang, X.; Chen, Q.; Gu, G. ColorPolarNet: Residual dense network-based chromatic intensity-polarization imaging in low-light environment. IEEE Trans. Instrum. Meas. 2022, 71, 5025210. [Google Scholar] [CrossRef]
- Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the Computer Vision-ECCV 2004: 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Proceedings, Part I 8. Springer: Berlin/Heidelberg, Germany, 2004; pp. 469–481. [Google Scholar]
- He, R.; Guan, M.; Wen, C. SCENS: Simultaneous contrast enhancement and noise suppression for low-light images. IEEE Trans. Ind. Electron. 2020, 68, 8687–8697. [Google Scholar] [CrossRef]
- Morimatsu, M.; Monno, Y.; Tanaka, M.; Okutomi, M. Monochrome and color polarization demosaicking using edge-aware residual interpolation. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 2571–2575. [Google Scholar]
- Gao, S.; Gruev, V. Bilinear and bicubic interpolation methods for division of focal plane polarimeters. Opt. Express 2011, 19, 26161–26173. [Google Scholar] [CrossRef] [PubMed]
| Method | RGB PSNR | RGB SSIM | RGB PCQI | S1,2 PSNR | S1,2 SSIM | S1,2 PCQI | DoLP PSNR | DoLP SSIM | DoLP PCQI | AoP PSNR | AoP SSIM | AoP PCQI | AoP Error [°] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Input | 26.32 | 0.63 | 0.51 | 36.81 | 0.58 | 0.42 | 28.42 | 0.53 | 0.41 | 12.41 | 0.33 | 0.11 | 15.32 |
| C-BM3D | 28.59 | 0.81 | 0.83 | 38.26 | 0.79 | 0.69 | 30.16 | 0.57 | 0.46 | 12.52 | 0.26 | 0.37 | 19.51 |
| EnlightenGAN | 38.23 | 0.89 | 0.63 | 55.37 | 0.83 | 0.53 | 36.26 | 0.68 | 0.69 | 19.13 | 0.31 | 0.38 | 12.33 |
| LightenNet | 39.91 | 0.91 | 0.71 | 55.46 | 0.86 | 0.62 | 36.63 | 0.72 | 0.61 | 20.26 | 0.39 | 0.41 | 11.05 |
| LLSPD-Net (ours) | 41.21 | 0.94 | 0.88 | 56.21 | 0.89 | 0.77 | 37.15 | 0.79 | 0.69 | 22.43 | 0.49 | 0.47 | 9.14 |
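The AoP "Error [°]" column reports an angular error. Its precise definition is not spelled out in this extract, but a common way to compute it, shown in the hedged sketch below (function name ours), folds the 180° ambiguity of the AoP before averaging.

```python
import numpy as np

def aop_error_degrees(aop_pred, aop_gt):
    """Mean absolute AoP error in degrees, folding the 180-degree periodicity
    of the angle of polarization (e.g., 179 deg vs 1 deg counts as 2 deg).
    Inputs are AoP maps in radians in [-pi/2, pi/2)."""
    diff = np.abs(aop_pred - aop_gt)        # raw difference in radians, in [0, pi)
    diff = np.minimum(diff, np.pi - diff)   # fold the pi-periodic ambiguity
    return np.degrees(diff).mean()
```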
| Method | DoLP PSNR | DoLP SSIM | DoLP PCQI | AoP PSNR | AoP SSIM | AoP PCQI | AoP Error [°] |
|---|---|---|---|---|---|---|---|
| Newton's | 27.42 | 0.51 | 0.41 | 12.21 | 0.34 | 0.13 | 15.33 |
| Bicubic | 31.05 | 0.53 | 0.46 | 13.64 | 0.26 | 0.36 | 18.59 |
| ForkNet | 35.87 | 0.63 | 0.73 | 20.03 | 0.38 | 0.45 | 10.86 |
| LLSPD-Net | 36.95 | 0.78 | 0.76 | 21.43 | 0.44 | 0.42 | 9.13 |
| Method | RGB PSNR | RGB SSIM | RGB PCQI | S1,2 PSNR | S1,2 SSIM | S1,2 PCQI | DoLP PSNR | DoLP SSIM | DoLP PCQI | AoP PSNR | AoP SSIM | AoP PCQI | AoP Error [°] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| W/O Intensity Network | 34.97 | 0.89 | 0.81 | 33.17 | 0.83 | 0.77 | 35.26 | 0.67 | 0.68 | 17.13 | 0.41 | 0.43 | 15.25 |
| W/O DRM | 31.47 | 0.75 | 0.76 | 31.41 | 0.78 | 0.79 | 31.39 | 0.58 | 0.61 | 18.46 | 0.42 | 0.39 | 20.23 |
| W/O RG | 35.68 | 0.94 | 0.85 | 34.38 | 0.85 | 0.81 | 35.57 | 0.69 | 0.73 | 22.87 | 0.47 | 0.45 | 11.73 |
| LLSPD-Net | 36.84 | 0.96 | 0.87 | 35.47 | 0.88 | 0.84 | 36.87 | 0.74 | 0.76 | 23.85 | 0.51 | 0.47 | 10.15 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).