Subjective and Objective Quality Evaluation for Underwater Image Enhancement and Restoration
<p>Figure 1. Real underwater images collected in the UIQE database.</p>
<p>Figure 2. Results of 9 different underwater image enhancement methods. (<b>a</b>) Raw. (<b>b</b>) CycleGAN. (<b>c</b>) FGAN. (<b>d</b>) RB. (<b>e</b>) RED. (<b>f</b>) UDCP. (<b>g</b>) UGAN. (<b>h</b>) UIBLA. (<b>i</b>) UWCNN-SD. (<b>j</b>) WSCT.</p>
<p>Figure 3. Comparison of the mean and standard deviation of the <span class="html-italic">MOS</span> values of the different enhancement algorithms.</p>
<p>Figure 4. Results of 9 different underwater image enhancement methods.</p>
<p>Figure 5. GM maps of the different enhancement methods. (<b>a</b>) the original image, (<b>b</b>) the GM map of the original image, (<b>c</b>) the UDCP map, (<b>d</b>) the GM map of the UDCP map, (<b>e</b>) the UBCP map, and (<b>f</b>) the GM map of the UBCP map. Rows, top to bottom: CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.</p>
<p>Figure 6. LOG maps of the different enhancement methods. (<b>a</b>) the original image, (<b>b</b>) the LOG map of the original image, (<b>c</b>) the UDCP map, (<b>d</b>) the LOG map of the UDCP map, (<b>e</b>) the UBCP map, and (<b>f</b>) the LOG map of the UBCP map. Rows, top to bottom: CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.</p>
<p>Figure 7. Marginal probabilities between the normalized GM and LOG features of the UBCP and UDCP maps. (<b>a</b>) Histogram of <span class="html-italic">P<sub>G</sub></span> on UBCP, (<b>b</b>) histogram of <span class="html-italic">P<sub>L</sub></span> on UBCP, (<b>c</b>) histogram of <span class="html-italic">P<sub>G</sub></span> on UDCP, (<b>d</b>) histogram of <span class="html-italic">P<sub>L</sub></span> on UDCP. The abscissa is the index of the 10 feature dimensions; the ordinate is the sum of the marginal distribution. Rows, top to bottom: CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.</p>
<p>Figure 8. Independence distributions between the normalized GM and LOG features of the UBCP and UDCP maps. (<b>a</b>) Histogram of <span class="html-italic">Q<sub>G</sub></span> on UBCP, (<b>b</b>) histogram of <span class="html-italic">Q<sub>L</sub></span> on UBCP, (<b>c</b>) histogram of <span class="html-italic">Q<sub>G</sub></span> on UDCP, (<b>d</b>) histogram of <span class="html-italic">Q<sub>L</sub></span> on UDCP. The abscissa is the index of the 10 feature dimensions; the ordinate is the sum of the conditional distribution. Rows, top to bottom: CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.</p>
<p>Figure 9. SROCC values for different dimensions M = N = {5, 10, 15, 20} on the database.</p>
<p>Figure 10. Performance comparison of BRISQUE, NIQE, UCIQE, UIQM, CCF, ILNIQE, and UIQEI in three scenes. (<b>a</b>) Blue scene, (<b>b</b>) green scene, (<b>c</b>) haze scene.</p>
<p>Figure 11. Performance (SROCC and PLCC) of each feature class on the UIQE database; f1–f84 are the feature IDs given in <a href="#symmetry-14-00558-t002" class="html-table">Table 2</a>.</p>
Abstract
1. Introduction
- The underwater image quality evaluation (UIQE) database: The paper constructs the UIQE database by collecting 45 real underwater images and processing each with 9 enhancement methods, yielding a total of 405 underwater images. A subjective study of the UIQE database then leads to an important finding: although existing enhancement methods perform well, they still struggle to balance the removal of color cast against the preservation of detail when producing better underwater images.
- An objective UIQE index (UIQEI): Based on the underwater dark channel prior (UDCP) and bright channel prior (BCP), an objective method is proposed to accurately evaluate the quality of enhanced and restored underwater images. Such images usually show different degrees of degradation in different local regions, which makes overall quality assessment difficult. To address this, an underwater dark channel map is used to describe the darker areas, and an underwater bright channel map is developed to expose regions of brightness supersaturation. Features extracted from the joint statistics of the normalized gradient magnitude (GM) and Laplacian of Gaussian (LOG) maps are then fused to capture these local differences. Finally, color, fog density, and global contrast features are incorporated.
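As a rough illustration of the two channel maps, the sketch below computes a local-patch minimum over the green and blue channels (the underwater dark channel prior discards the heavily absorbed red channel) and a local-patch maximum for the bright channel. The patch size, channel choices, and synthetic test image are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def channel_extreme(img, patch=15, mode="dark", channels=(1, 2)):
    """Local-patch min (dark) or max (bright) over selected channels.

    img: H x W x 3 float array in [0, 1]. channels defaults to G and B,
    following the underwater dark channel prior, which skips the
    heavily absorbed red channel.
    """
    h, w, _ = img.shape
    if mode == "dark":
        base = img[:, :, channels].min(axis=2)
    else:
        base = img[:, :, channels].max(axis=2)
    r = patch // 2
    padded = np.pad(base, r, mode="edge")
    out = np.empty_like(base)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + patch, j:j + patch]
            out[i, j] = win.min() if mode == "dark" else win.max()
    return out

# Synthetic greenish "underwater" image for demonstration
rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.4, 0.1, (32, 32, 3)), 0, 1)
udcp_map = channel_extreme(img, patch=7, mode="dark")                        # darker areas
ubcp_map = channel_extreme(img, patch=7, mode="bright", channels=(0, 1, 2))  # oversaturated areas
```

By construction the dark channel value in a window never exceeds the bright channel value for the same window, so the two maps bracket the local intensity range that the quality features summarize.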
2. Related Work
2.1. Underwater Imaging Model
2.2. Underwater Image Enhancement and Restoration
3. Construction of Subjective Evaluation Database
3.1. Underwater Image Enhancement Database (UIQE)
3.2. Sets of Subjective Quality Evaluation
3.3. Subjective Data Processing
4. Objective Model of UIQEI
4.1. Local Contrast
4.1.1. Underwater Dark Channel Prior Theory and Bright Channel Prior Theory
4.1.2. Gradient Amplitude and Laplacian of Gaussian
4.2. Color Feature
4.3. Fog Density
4.4. Global Contrast
4.5. Regression
5. Experimental Comparison
5.1. Experimental Details
5.2. Evaluation Criteria
5.3. Feature Analysis
5.4. Ablation Experiment
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- McConnell, J.; Martin, J.D.; Englot, B. Fusing concurrent orthogonal wide-aperture sonar images for dense underwater 3D reconstruction. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 1653–1660. [Google Scholar]
- Bobkov, V.; Kudryashov, A.; Inzartsev, A. Method for the Coordination of Referencing of Autonomous Underwater Vehicles to Man-Made Objects Using Stereo Images. J. Mar. Sci. Eng. 2021, 9, 1038. [Google Scholar] [CrossRef]
- Zhuang, Y.; Wu, C.; Wu, H. Event coverage hole repair algorithm based on multi-AUVs in multi-constrained three-dimensional underwater wireless sensor networks. Symmetry 2020, 12, 1884. [Google Scholar] [CrossRef]
- Fu, X.; Zhuang, P.; Huang, Y. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576. [Google Scholar]
- Ancuti, C.; Ancuti, C.O.; Haber, T. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 81–88. [Google Scholar]
- Henke, B.; Vahl, M.; Zhou, Z. Removing color cast of underwater images through non-constant color constancy hypothesis. In Proceedings of the 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), Trieste, Italy, 4–6 September 2013; pp. 20–24. [Google Scholar]
- Ji, T.; Wang, G. An approach to underwater image enhancement based on image structural decomposition. J. Ocean. Univ. China 2015, 14, 255–260. [Google Scholar] [CrossRef]
- Gao, F.; Wang, K.; Yang, Z. Underwater image enhancement based on local contrast correction and multi-scale fusion. J. Mar. Sci. Eng. 2021, 9, 225. [Google Scholar] [CrossRef]
- Drews, P.L.J.; Nascimento, E.R.; Botelho, S.S.C. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35. [Google Scholar] [CrossRef] [PubMed]
- Galdran, A.; Pardo, D.; Picón, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef] [Green Version]
- Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
- Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean. Eng. 2015, 94, 163–172. [Google Scholar] [CrossRef]
- Zhu, J.Y.; Park, T.; Isola, P. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 7159–7165. [Google Scholar]
- Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef] [Green Version]
- Li, H.; Li, J.; Wang, W. A fusion adversarial underwater image enhancement network with a public test dataset. arXiv 2019, arXiv:1906.06819. [Google Scholar]
- Wu, S.; Luo, T.; Jiang, G. A Two-Stage underwater enhancement network based on structure decomposition and characteristics of underwater imaging. IEEE J. Ocean. Eng. 2021, 46, 1213–1227. [Google Scholar] [CrossRef]
- Khaustov, P.A.; Spitsyn, V.G.; Maksimova, E.I. Algorithm for improving the quality of underwater images based on the neuro-evolutionary approach. Fundam. Res. 2016, 2016, 328–332. [Google Scholar]
- Wu, D.; Yuan, F.; Cheng, E. Underwater no-reference image quality assessment for display module of ROV. Sci. Program. 2020, 2, 1–15. [Google Scholar] [CrossRef]
- Ma, M.; Feng, X.; Chao, L.; Huang, D.; Xia, Z.; Jiang, X. A new database for evaluating underwater image processing methods. In Proceedings of the 2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA), Xi’an, China, 7–10 November 2018; pp. 1–6. [Google Scholar]
- Yang, N.; Zhong, Q.; Li, K. A reference-free underwater image quality assessment metric in frequency domain. Signal Process. Image Commun. 2021, 94, 116218. [Google Scholar] [CrossRef]
- Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
- Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
- Wang, Y.; Li, N.; Li, Z. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2018, 70, 904–913. [Google Scholar] [CrossRef]
- Jaffe, J.S. Underwater optical imaging: The past, the present, and the prospects. IEEE J. Ocean. Eng. 2014, 40, 683–700. [Google Scholar] [CrossRef]
- Drews, P.; Nascimento, E.; Moraes, F. Transmission estimation in underwater single images. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013; pp. 825–830. [Google Scholar]
- Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038. [Google Scholar] [CrossRef]
- Uplavikar, P.M.; Wu, Z.; Wang, Z. All-in-one underwater image enhancement using domain-adversarial learning. CVPR Workshops 2019, 1–8. [Google Scholar]
- Li, C.; Guo, C.; Ren, W. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Liu, R.; Fan, X.; Zhu, M. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875. [Google Scholar] [CrossRef]
- Russakovsky, O.; Deng, J.; Su, H. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
- Xiao, J.; Hays, J.; Ehinger, K.A. Sun database: Large-scale scene recognition from abbey to zoo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; Volume 1, pp. 3485–3492. [Google Scholar]
- ITU-R. Methodology for the Subjective Assessment of the Quality of Television Pictures; Recommendation ITU-R BT.500-13; International Telecommunication Union: Geneva, Switzerland, 2012. [Google Scholar]
- Seshadrinathan, K.; Soundararajan, R.; Bovik, A.C. Study of subjective and objective quality assessment of video. IEEE Trans. Image Process. 2010, 19, 1427–1441. [Google Scholar] [CrossRef]
- Ma, L.; Lin, W.; Deng, C. Image retargeting quality assessment: A study of subjective scores and objective metrics. IEEE J. Sel. Top. Signal Process. 2012, 6, 626–639. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar]
- Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868. [Google Scholar] [CrossRef]
- Wen, H.; Tian, Y.; Huang, T. Single underwater image enhancement with a new optical model. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 753–756. [Google Scholar]
- Gao, Y.; Li, H.; Wen, S. Restoration and enhancement of underwater images based on bright channel prior. Math. Probl. Eng. 2016, 2016, 3141478. [Google Scholar] [CrossRef]
- Wang, Y.; Zhuo, S.; Tao, D. Automatic local exposure correction using bright channel prior for under-exposed images. Signal Process. 2013, 93, 3227–3238. [Google Scholar] [CrossRef]
- Lin, M.; Wang, Z.; Zhang, D. Color compensation based on bright channel and fusion for underwater image enhancement. Acta Opt. Sin. 2018, 38, 1110003. [Google Scholar]
- Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. B Biol. Sci. 1980, 207, 187–217. [Google Scholar]
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
- Xue, W.; Mou, X.; Zhang, L. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. 2014, 23, 4850–4862. [Google Scholar] [CrossRef] [PubMed]
- Fairchild, M.D. Color Appearance Models; John Wiley & Sons: Chichester, UK, 2013; pp. 1–34. [Google Scholar]
- Hasler, D.; Suesstrunk, S.E. Measuring colorfulness in natural images. Human vision and electronic imaging VIII. Int. Soc. Opt. Photonics 2003, 5007, 87–95. [Google Scholar]
- Choi, L.K.; You, J.; Bovik, A.C. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901. [Google Scholar] [CrossRef]
- Matkovic, K.; Neumann, L.; Neumann, A.; Psik, T.; Purgathofer, W. Global contrast factor-a new approach to image contrast. Comput. Aesthet. 2005, 159–168. [Google Scholar]
- Caviedes, J.; Philips, F. Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment, March 2000. In Proceedings of the VQEG Meeting, Ottawa, ON, Canada, 13–17 March 2000. [Google Scholar]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Bovik, A.C. A feature-enriched completely blind image quality evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591. [Google Scholar] [CrossRef] [Green Version]
- Oszust, M. Decision fusion for image quality assessment using an optimization approach. IEEE Signal Process. Lett. 2015, 23, 65–69. [Google Scholar] [CrossRef]
- Yue, G.; Yan, W.; Zhou, T. Referenceless quality evaluation of tone-mapped HDR and multiexposure fused images. IEEE Trans. Ind. Inform. 2019, 16, 1764–1775. [Google Scholar] [CrossRef]
Score | Description |
---|---|
1 | No color recovery, low contrast, texture distortion, edge artifacts, poor visibility. |
2 | Partial color restoration, improved contrast, texture distortion, edge artifacts, and poor visibility. |
3 | Color recovery, contrast enhancement, realistic texture, local edge artifacts, and acceptable visibility. |
4 | Color recovery, contrast enhancement, texture reality, better edge artifact recovery, and better visibility. |
5 | Color restoration, contrast enhancement, realistic texture, edge artifacts removed, and good visibility of underwater images. |
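Ratings on the 1–5 scale above are averaged into per-image MOS values after subject screening. A minimal sketch of such processing follows; the z-score rejection threshold and the toy ratings are illustrative assumptions, not the paper's exact BT.500-style procedure.

```python
import numpy as np

def mos_with_outlier_rejection(scores, z_thresh=2.0):
    """scores: subjects x images raw ratings on the 1-5 scale.

    Rejects ratings more than z_thresh sample standard deviations from
    each image's mean, then returns the per-image MOS and spread.
    """
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean(axis=0)
    std = scores.std(axis=0, ddof=1)
    std_safe = np.where(std == 0, 1.0, std)      # avoid divide-by-zero on unanimous images
    keep = np.abs(scores - mean) <= z_thresh * std_safe
    kept = np.where(keep, scores, np.nan)        # mask rejected ratings
    return np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)

ratings = [[5, 2, 3],
           [4, 2, 3],
           [5, 3, 3],
           [1, 2, 3]]   # subject 4 disagrees strongly on image 1
mos, spread = mos_with_outlier_rejection(ratings, z_thresh=1.0)
```

With the tight threshold, the dissenting rating of 1 on the first image is discarded, so its MOS reflects only the three consistent subjects.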
Index | Feature Type | Symbol | Feature Description | Feature ID |
---|---|---|---|---|
1 | GM and LOG of the underwater dark channel map | PG, PL, QG, QL | Measure the local contrast of the image | |
2 | GM and LOG of the underwater bright channel map | PG, PL, QG, QL | Measure the local contrast of the image | |
3 | Color | Isaturation(i, j), CF | Measure the color of the image | |
4 | Fog density | D | Measure the fog density of the image | |
5 | Global contrast | GCF | Measure the global contrast of the image |
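The GM and LOG maps behind features 1 and 2 can be sketched as below, following the general GM-LOG approach of Xue et al. The Gaussian scale, the joint normalization constant, and the 10-bin histogram are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np
from scipy import ndimage

def gm_log_maps(gray, sigma=0.5):
    """Gradient-magnitude (GM) and Laplacian-of-Gaussian (LOG) maps,
    jointly normalized by a smoothed local RMS of their combined energy."""
    gx = ndimage.gaussian_filter(gray, sigma, order=(0, 1))  # d/dx
    gy = ndimage.gaussian_filter(gray, sigma, order=(1, 0))  # d/dy
    gm = np.hypot(gx, gy)
    log = ndimage.gaussian_laplace(gray, sigma)
    energy = np.sqrt(ndimage.gaussian_filter(gm ** 2 + log ** 2, 2 * sigma)) + 0.2
    return gm / energy, log / energy

def marginal_hist(feat, bins=10):
    """Marginal probability histogram (e.g., P_G or P_L) over `bins` levels."""
    hist, _ = np.histogram(feat, bins=bins, range=(feat.min(), feat.max() + 1e-12))
    return hist / hist.sum()

rng = np.random.default_rng(1)
gray = rng.random((64, 64))          # stand-in for a channel map
gm_n, log_n = gm_log_maps(gray)
p_g = marginal_hist(gm_n)            # one 10-dimensional marginal feature vector
```

Applying `marginal_hist` to the GM and LOG maps of the UDCP and UBCP channel maps yields the four 10-dimensional marginal vectors plotted in Figure 7.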
Method | SROCC | PLCC | RMSE |
---|---|---|---|
BRISQUE [50] | 0.5495 | 0.5446 | 0.6188 |
NIQE [51] | 0.3850 | 0.4079 | 0.6736 |
UCIQE [22] | 0.2680 | 0.3666 | 0.6864 |
UIQM [23] | 0.5755 | 0.5898 | 0.5958 |
CCF [24] | 0.2680 | 0.3666 | 0.6864 |
ILNIQE [52] | 0.1591 | 0.1749 | 0.7264 |
UIQEI | 0.8568 | 0.8705 | 0.3600 |
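The three criteria in the table can be computed as sketched below. In practice PLCC and RMSE are reported after a nonlinear logistic mapping of objective scores to MOS; a plain linear fit is used here as a simplifying assumption, and the example scores are synthetic.

```python
import numpy as np
from scipy import stats

def evaluation_criteria(objective, mos):
    """SROCC, PLCC, and RMSE between objective scores and subjective MOS."""
    objective = np.asarray(objective, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srocc, _ = stats.spearmanr(objective, mos)          # rank correlation
    slope, intercept = np.polyfit(objective, mos, 1)    # linear surrogate mapping
    mapped = slope * objective + intercept
    plcc = stats.pearsonr(mapped, mos)[0]               # linear correlation
    rmse = np.sqrt(np.mean((mapped - mos) ** 2))        # prediction error
    return srocc, plcc, rmse

pred = [0.2, 0.5, 0.4, 0.9, 0.7]   # hypothetical objective scores
mos  = [1.5, 3.0, 2.4, 4.8, 4.0]   # hypothetical MOS values
srocc, plcc, rmse = evaluation_criteria(pred, mos)
```

Higher SROCC and PLCC (closer to 1) and lower RMSE indicate better agreement with human judgments, which is how UIQEI's advantage in the table should be read.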
Index | Feature |
---|---|
G1 | GM under UBCP |
G2 | LOG under UBCP |
G3 | GM under UDCP |
G4 | LOG under UDCP |
G5 | Color feature |
G6 | Fog density feature |
G7 | Global contrast feature |
G8 | Excluding the global contrast feature |
Group | SROCC | PLCC | RMSE |
---|---|---|---|
G1 | 0.7850 | 0.8029 | 0.4359 |
G2 | 0.7742 | 0.8001 | 0.4381 |
G3 | 0.8134 | 0.8211 | 0.4184 |
G4 | 0.8085 | 0.8193 | 0.4196 |
G5 | 0.6360 | 0.7519 | 0.4825 |
G6 | 0.7304 | 0.7849 | 0.4552 |
G7 | 0.3981 | 0.4438 | 0.6579 |
G8 | 0.8455 | 0.8603 | 0.3714 |
Train-Test | SROCC | PLCC | RMSE |
---|---|---|---|
80–20% | 0.8568 | 0.8705 | 0.3600 |
70–30% | 0.8494 | 0.8605 | 0.3736 |
60–40% | 0.8433 | 0.8528 | 0.3841 |
50–50% | 0.8313 | 0.8424 | 0.3965 |
40–60% | 0.8157 | 0.8282 | 0.4120 |
30–70% | 0.7941 | 0.8115 | 0.4315 |
20–80% | 0.7845 | 0.7833 | 0.4576 |
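The split experiment above can be reproduced generically as follows: repeatedly partition the images, fit a regressor on the training portion, and report the median test SROCC. Ridge regression stands in for the paper's learned regressor (an assumption), and the feature data here are synthetic.

```python
import numpy as np

def split_evaluate(features, mos, train_frac=0.8, trials=100, lam=1e-3, seed=0):
    """Median test SROCC over random train-test splits of a feature table."""
    rng = np.random.default_rng(seed)
    n = len(mos)
    X = np.hstack([features, np.ones((n, 1))])  # add bias column
    sroccs = []
    for _ in range(trials):
        idx = rng.permutation(n)
        cut = int(train_frac * n)
        tr, te = idx[:cut], idx[cut:]
        # Ridge regression: (X'X + lam*I) w = X'y
        w = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                            X[tr].T @ mos[tr])
        pred = X[te] @ w
        # Spearman correlation = Pearson correlation of the ranks
        rt = pred.argsort().argsort().astype(float)
        rm = mos[te].argsort().argsort().astype(float)
        sroccs.append(np.corrcoef(rt, rm)[0, 1])
    return float(np.median(sroccs))

rng = np.random.default_rng(42)
feats = rng.random((50, 5))
quality = feats @ np.array([1.0, 0.5, -0.3, 0.2, 0.1]) + 0.05 * rng.normal(size=50)
score = split_evaluate(feats, quality, train_frac=0.8)
```

As in the table, shrinking `train_frac` from 0.8 toward 0.2 leaves less data for fitting, so the median SROCC degrades gradually rather than collapsing.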
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Li, W.; Lin, C.; Luo, T.; Li, H.; Xu, H.; Wang, L. Subjective and Objective Quality Evaluation for Underwater Image Enhancement and Restoration. Symmetry 2022, 14, 558. https://doi.org/10.3390/sym14030558