Search Results (4)

Search Parameters:
Keywords = underwater bright channel prior

18 pages, 4837 KiB  
Article
Rethinking Underwater Crab Detection via Defogging and Channel Compensation
by Yueping Sun, Bikang Yuan, Ziqiang Li, Yong Liu and Dean Zhao
Fishes 2024, 9(2), 60; https://doi.org/10.3390/fishes9020060 - 30 Jan 2024
Viewed by 1754
Abstract
Crab aquaculture is an important component of the freshwater aquaculture industry in China, encompassing an expansive farming area of over 6000 km² nationwide. Currently, crab farmers rely on manually monitored feeding platforms to count the crabs and assess their distribution in the pond, a method that is inefficient and lacks automation. To enable efficient, rapid detection of crabs by machine-vision systems in low-brightness underwater environments, this paper proposes a two-step color correction and improved dark channel prior approach to underwater image processing for crab detection. Firstly, the parameters of the dark channel prior are optimized with guided filtering and quadtrees to address blurred underwater images and artificial lighting. Then, the gray world assumption, the perfect reflection assumption, and compensation of the weak channels by the strong channel are applied to improve the red and blue channel pixels, correct the color of the defogged image, optimize the visual effect of the image, and enrich the image information. Finally, ShuffleNetV2 is applied to optimize the target detection model, improving detection speed and real-time performance. The experimental results show that the proposed method achieves a detection rate of 90.78% and an average confidence level of 0.75. Compared with the improved YOLOv5s detection results on the original images, the detection rate of the proposed method is increased by 21.41% and the average confidence level by 47.06%, meeting a good standard. This approach can effectively build an underwater crab distribution map and provide scientific guidance for crab farming.
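The abstract combines a dark channel prior defogging step with compensation of the weak red and blue channels. Below is a minimal sketch of those two building blocks, assuming NumPy/SciPy and float RGB images in [0, 1]; the patch size, the use of the green channel as the reference, and the compensation formula are illustrative assumptions, not the authors' exact pipeline (which additionally uses guided filtering, quadtrees, and the gray world and perfect reflection assumptions).

```python
# Minimal sketch: dark channel prior + weak-channel compensation.
# Assumptions (not from the paper): patch size 15, green channel as the
# "strong" reference, compensation weight alpha = 1.0.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over R, G, B followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def compensate(weak, strong, alpha=1.0):
    """Lift a weak channel toward a stronger one in proportion to their mean gap."""
    gap = strong.mean() - weak.mean()
    return np.clip(weak + alpha * gap * (1.0 - weak) * strong, 0.0, 1.0)

def preprocess(img):
    """img: float RGB in [0, 1]. Returns the color-compensated image and its dark channel."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    out = np.stack([compensate(r, g), g, compensate(b, g)], axis=-1)
    return out, dark_channel(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.random((120, 160, 3)) * np.array([0.3, 0.8, 0.5])  # greenish underwater cast
    compensated, dark = preprocess(demo)
    print(compensated.shape, dark.shape, round(float(dark.mean()), 3))
```

In the paper's pipeline, the defogged and color-corrected image is then passed to the ShuffleNetV2-based YOLOv5s detector.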
Figures
Figure 1. An example of underwater images of crabs in a pond. (a) Original image of an underwater crab; (b) 3D color distribution map; (c) 3-channel pixel histogram. The red, green, and blue colors in (b) and (c) represent the pixels of the red, green, and blue channels of the image, respectively.
Figure 2. Underwater crab video acquisition methods. (a) China's crab freshwater aquaculture production distribution in 2022; (b) automatic baiting boat operation trajectory; (c) automatic baiting boat and its video collection equipment deployment; (d) manual collection equipment.
Figure 3. Systematic overview of the proposed approach.
Figure 4. Structure diagram of the improved YOLOv5s model. The model includes four parts: input, backbone, neck, and output. ShuffleNetV2 acts as the backbone to make the network more lightweight. The color selection does not carry any specific meaning.
Figure 5. Comparison of processing results among algorithms: (a) original image, (b) DCP, (c) AHE, (d) MSRCP, (e) GCANet, (f) proposed method.
Figure 6. Comparison of recognition results of the improved YOLOv5s model before and after enhancement: (a) original image, (b) recognition results of the original image, (c) enhanced image, (d) recognition results of the enhanced image.
Figure 7. Heat map visualization results after multiple image processing algorithms.
Figure 8. The process of the improved YOLOv5 algorithm: (a) feature maps after SPPF and C3 processing; (b) the average loss during training for each model.
16 pages, 7031 KiB  
Article
Enhancement and Optimization of Underwater Images and Videos Mapping
by Chengda Li, Xiang Dong, Yu Wang and Shuo Wang
Sensors 2023, 23(12), 5708; https://doi.org/10.3390/s23125708 - 19 Jun 2023
Cited by 4 | Viewed by 1795
Abstract
Underwater images tend to suffer from critical quality degradation, such as poor visibility, reduced contrast, and color deviation, owing to light absorption and scattering in the water medium. Enhancing visibility, improving contrast, and eliminating color cast in these images is a challenging problem. This paper proposes an effective, high-speed enhancement and restoration method based on the dark channel prior (DCP) for underwater images and video. Firstly, an improved background light (BL) estimation method is proposed to estimate the BL accurately. Secondly, the R channel's transmission map (TM) is estimated coarsely based on the DCP, and a TM optimizer integrating the scene depth map and an adaptive saturation map (ASM) is designed to refine this coarse TM. The TMs of the G and B channels are then computed from the ratios of their attenuation coefficients to that of the red channel. Finally, an improved color correction algorithm is adopted to improve visibility and brightness. Several typical image-quality assessment indexes are employed to verify that the proposed method restores low-quality underwater images more effectively than other advanced methods. An underwater video real-time measurement is also conducted on a flipper-propelled underwater vehicle-manipulator system to verify the effectiveness of the proposed method in a real scene.
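As a rough illustration of the underlying image-formation model, the sketch below inverts I_c = J_c t_c + B_c (1 - t_c) per channel and derives the G and B transmission maps from the red-channel TM via attenuation-coefficient ratios (a Beer-Lambert assumption). The background light, the red-channel TM, and the ratio values are assumed inputs here; the paper's BL estimator, DCP-based TM estimation, and ASM-based optimizer are not reproduced.

```python
# Minimal sketch of the restoration step, assuming the background light B and
# the red-channel transmission map t_red are already estimated. The coefficient
# ratios beta_g/beta_r and beta_b/beta_r are placeholders, not the paper's values.
import numpy as np

def restore(img, background_light, t_red, ratio_g=0.9, ratio_b=0.95, t_min=0.1):
    """Invert I_c = J_c * t_c + B_c * (1 - t_c) for each color channel.

    img: HxWx3 float RGB in [0, 1]; background_light: length-3 array;
    t_red: HxW red-channel transmission. Under a Beer-Lambert model,
    t_c = t_red ** (beta_c / beta_r) for the G and B channels.
    """
    t = np.stack([t_red, t_red ** ratio_g, t_red ** ratio_b], axis=-1)
    t = np.clip(t, t_min, 1.0)          # avoid division by near-zero transmission
    B = np.asarray(background_light, dtype=float)
    return np.clip((img - B) / t + B, 0.0, 1.0)

# Example call with pre-computed inputs (names are hypothetical):
# restored = restore(raw_img, background_light=[0.6, 0.8, 0.8], t_red=t_map)
```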
Figures
Figure 1. Several classical underwater images and their TMs of the R color channel based on the DCP. The upper row shows the original images, and the lower row shows the corresponding transmission maps. (a–f) represent different underwater images.
Figure 2. Flowchart of the proposed method.
Figure 3. The inaccurate TMs of the R channel for Figure 1d–f are refined with the proposed optimizer. (a) The underestimated TM for the clay pot in the foreground region is accurately corrected; (b) the overestimated TM for the back of the statue in the background region is accurately corrected; (c) the TM in the shark-back area with non-uniform illumination is improved.
Figure 4. The whole processing pipeline of the proposed underwater image restoration method. (a) Raw image; (b) the TM of the R channel based on the DCP; (c) the TM refined by the proposed TM optimizer; (d) the restored image; (e) the enhanced image with color correction.
Figure 5. Image restoration results with different TMs for greenish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) ULAP; (g) NUDCP; (h) ours.
Figure 6. Image restoration results with different TMs for bluish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) ULAP; (g) NUDCP; (h) ours.
Figure 7. Comparative results for greenish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) NUDCP; (g) O_WCC; (h) DCP + HE; (i) UDCP + HE; (j) MIP + HE; (k) IBLA + HE; (l) ULAP + HE; (m) NU_CC; (n) ours.
Figure 8. Comparative results for bluish images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) NUDCP; (g) O_WCC; (h) DCP + HE; (i) UDCP + HE; (j) MIP + HE; (k) IBLA + HE; (l) ULAP + HE; (m) NU_CC; (n) ours.
Figure 9. Comparative results for challenging underwater images. (a) Raw images; (b) DCP; (c) UDCP; (d) MIP; (e) IBLA; (f) ULAP; (g) NUDCP; (h) O_WCC; (i) DCP + HE; (j) UDCP + HE; (k) MIP + HE; (l) IBLA + HE; (m) ULAP + HE; (n) NU_CC; (o) ours.
23 pages, 9384 KiB  
Article
Subjective and Objective Quality Evaluation for Underwater Image Enhancement and Restoration
by Wenxia Li, Chi Lin, Ting Luo, Hong Li, Haiyong Xu and Lihong Wang
Symmetry 2022, 14(3), 558; https://doi.org/10.3390/sym14030558 - 10 Mar 2022
Cited by 4 | Viewed by 2827
Abstract
Since underwater imaging is affected by the complex water environment, underwater images often suffer severe distortion. To improve their quality, underwater image enhancement and restoration methods have been proposed. However, many of these methods produce over-enhancement or under-enhancement, which limits their application. To better design underwater image enhancement and restoration methods, it is necessary to research underwater image quality evaluation (UIQE) for such methods. Therefore, a subjective evaluation dataset for underwater image enhancement and restoration methods is constructed, and on this basis, an objective quality evaluation method for underwater images, based on the relative symmetry of the underwater dark channel prior (UDCP) and the underwater bright channel prior (UBCP), is proposed. Specifically, considering underwater image enhancement in different scenarios, a UIQE dataset is constructed, which contains 405 underwater images generated from 45 different real underwater images using 9 representative underwater image enhancement methods. Then, a subjective quality evaluation of the UIQE database is studied. To quantitatively measure the quality of the enhanced and restored underwater images with different characteristics, an objective UIQE index (UIQEI) is used, extracting and fusing the following groups of features: (1) the joint statistics of normalized gradient magnitude (GM) and Laplacian of Gaussian (LOG) features based on the underwater dark channel map; (2) the joint statistics of normalized GM and LOG features based on the underwater bright channel map; (3) the saturation and colorfulness features; (4) the fog density feature; and (5) the global contrast feature. These features capture key aspects of underwater images. Finally, the experimental results are analyzed qualitatively and quantitatively to illustrate the effectiveness of the proposed UIQEI method.
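The sketch below illustrates the raw ingredients the index is built from: the underwater dark/bright channel maps and the gradient-magnitude (GM) and Laplacian-of-Gaussian (LOG) responses computed on them. The patch size, the Gaussian sigma, and the choice to take the channel minimum/maximum over all three color channels are simplifying assumptions (some UDCP variants exclude the red channel); the paper's joint-statistics modeling and feature fusion are not shown.

```python
# Minimal sketch of UDCP/UBCP-style maps and the GM / LOG responses on them.
# Assumptions: patch size 15, sigma 0.5, min/max taken over all three channels.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter, gaussian_laplace, sobel

def channel_priors(img, patch=15):
    """img: float RGB in [0, 1]. Returns (dark_map, bright_map)."""
    dark = minimum_filter(img.min(axis=2), size=patch)
    bright = maximum_filter(img.max(axis=2), size=patch)
    return dark, bright

def gm_log_features(chan, sigma=0.5):
    """Gradient magnitude (Sobel) and Laplacian-of-Gaussian response of a 2D map."""
    gm = np.hypot(sobel(chan, axis=1), sobel(chan, axis=0))
    log = gaussian_laplace(chan, sigma=sigma)
    return gm, log

# Hypothetical usage on an enhanced image `enhanced_img`:
# dark, bright = channel_priors(enhanced_img)
# gm_d, log_d = gm_log_features(dark)    # statistics of these feed the quality index
# gm_b, log_b = gm_log_features(bright)
```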
Figures
Figure 1. Real underwater images collected in the UIQE database.
Figure 2. The effects of 9 different underwater image enhancement methods. (a) Raw. (b) CycleGAN. (c) FGAN. (d) RB. (e) RED. (f) UDCP. (g) UGAN. (h) UIBLA. (i) UWCNN-SD. (j) WSCT.
Figure 3. Comparison of the mean and standard deviation of MOS values for different enhancement algorithms.
Figure 4. The effects of 9 different methods on underwater image enhancement.
Figure 5. GM maps of different enhancement methods. (a) The original image, (b) the GM map of the original image, (c) the UDCP map, (d) the GM map of the UDCP map, (e) the UBCP map, and (f) the GM map of the UBCP map. Rows one through nine correspond to CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.
Figure 6. LOG maps of different enhancement methods. (a) The original image, (b) the LOG map of the original image, (c) the UDCP map, (d) the LOG map of the UDCP map, (e) the UBCP map, and (f) the LOG map of the UBCP map. Rows one through nine correspond to CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.
Figure 7. Marginal probabilities between normalized GM and LOG features for the UBCP and UDCP maps. (a) Histogram of P_G on UBCP, (b) histogram of P_L on UBCP, (c) histogram of P_G on UDCP, (d) histogram of P_L on UDCP. The abscissa indicates that each feature is divided into 10 dimensions, and the ordinate indicates the sum of the marginal distribution. Rows one through nine correspond to CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.
Figure 8. Independency distributions between normalized GM and LOG features for the UBCP and UDCP maps. (a) Histogram of Q_G on UBCP, (b) histogram of Q_L on UBCP, (c) histogram of Q_G on UDCP, (d) histogram of Q_L on UDCP. The abscissa indicates that each feature is divided into 10 dimensions, and the ordinate indicates the sum of the conditional distribution. Rows one through nine correspond to CycleGAN, FGAN, RB, RED, UDCP, UGAN, UIBLA, UWCNN-SD, and WSCT.
Figure 9. SROCC values for different dimensions M = N = {5, 10, 15, 20} in the database.
Figure 10. Comparison of the performance of BRISQUE, NIQE, UCIQE, UIQM, CCF, ILNIQE, and UIQEI in three scenes: (a) blue scene, (b) green scene, (c) haze scene.
Figure 11. The performance of a class of features (SROCC and PLCC) in the UIQE database. f1–f84 are the feature IDs given in Table 2.
12469 KiB  
Article
Enhancement of Low Contrast Images Based on Effective Space Combined with Pixel Learning
by Gengfei Li, Guiju Li and Guangliang Han
Information 2017, 8(4), 135; https://doi.org/10.3390/info8040135 - 1 Nov 2017
Cited by 4 | Viewed by 6066
Abstract
Images captured in bad conditions often suffer from low contrast. In this paper, we propose a simple but efficient linear restoration model to enhance low-contrast images. The model's design is based on the effective space of the 3D surface graph of the image. Effective space is defined as the minimum space containing the 3D surface graph of the image, and the proportion of a pixel value within the effective space is considered to reflect the details of the image. The bright channel prior and the dark channel prior are used to estimate the effective space; however, they may cause block artifacts. We designed pixel learning to solve this problem. Pixel learning takes the input image as the training example and the low-frequency component of the input as the label, learning pixel by pixel based on a look-up table model. The proposed method is very fast and can restore a high-quality image with fine details. Experimental results on a variety of images captured in bad conditions, such as non-uniform light, night, hazy, and underwater scenes, demonstrate the effectiveness and efficiency of the proposed method.
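A minimal sketch of the effective-space idea as the abstract describes it: estimate per-pixel lower and upper envelopes with the dark and bright channel priors, then linearly re-map each pixel by its proportion within that space. The mean-filter smoothing used here to suppress block artifacts is only a stand-in for the paper's pixel-learning (look-up-table) refinement, and all parameter values are illustrative.

```python
# Minimal sketch of effective-space stretching with dark/bright channel envelopes.
# The uniform_filter smoothing substitutes for the paper's pixel-learning
# refinement; patch and smoothing sizes are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter, uniform_filter

def effective_space_enhance(img, patch=15, smooth=31, eps=1e-3):
    """img: float image in [0, 1], grayscale (HxW) or RGB (HxWx3)."""
    dark_base = img if img.ndim == 2 else img.min(axis=2)     # dark-channel base
    bright_base = img if img.ndim == 2 else img.max(axis=2)   # bright-channel base
    lower = uniform_filter(minimum_filter(dark_base, size=patch), size=smooth)
    upper = uniform_filter(maximum_filter(bright_base, size=patch), size=smooth)
    span = np.maximum(upper - lower, eps)
    if img.ndim == 3:
        lower, span = lower[..., None], span[..., None]
    return np.clip((img - lower) / span, 0.0, 1.0)
```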
Figures
Figure 1. Images and the projections of their 3D surface graphs on the x-z plane. In the 2D projection, the x-axis is the image width, and the y-axis is the pixel value: (a) the clear images; (b) the low-illumination images; (c) the hazy images; (d) the underwater images.
Figure 2. Comparison of α fusion and the guided filter: (a) α fusion, with thred1 = 10 and a mean-filter radius of 40; (b) the guided filter.
Figure 3. Comparison of refinement: (a) pixel learning, with thred1 = 10; (b) α fusion, with thred1 = 10; (c) the guided filter.
Figure 4. Comparison of refinement from different labels: (a) the label is Ibm, with thred1 = 10; (b) the label is Ib, with thred1 = 10.
Figure 5. Comparison of refinement: (a) pixel learning, with thred1 = 10 and thred2 = 20; (b) the guided filter.
Figure 6. Over-enhancement and truncation: (a) the original image; (b) the result without truncation, with t_D = 255 and t_U = 0; (c) the result with truncation, with t_D = 150 and t_U = 70.
Figure 7. Color casts: (a) the original image, which needs white balance; (b) the result without white balance; (c) the result with white balance.
Figure 8. Flow diagram of the proposed algorithm: for grayscale images, bright_p and dark_p should be skipped, which means the input gray image I = Ip_U = Ip_D.
Figure 9. Results with different thred1: (a) the original hazy image; (b) thred1 = 1; (c) thred1 = 10; (d) thred1 = 20.
Figure 10. Results of enhancement for color images: (a) the original hazy image; (b) our results.
Figure 11. Results of enhancement for infrared images (14 bits): thred1 = 800, thred2 = 1600, t_D = 4000, t_U = 14,000.
Figure 12. Comparison of haze removal: (a) input image; results from the methods of (b) Zhang [28], (c) He [23], (d) Kim [29], and (e) Tan [21]; (f) ours. Our parameters were set as thred1 = 1 for D and thred1 = 20 for U.
Figure 13. Comparison on a non-uniform-illumination image: (a) input image; results from (b) multiscale Retinex (MSR) [5] and (c) Wang [10]; (d) ours.
Figure 14. Comparison of approaches for a nighttime image: (a) input image; results from (b) MSR [5] and (c) Lin [11]; (d) ours.
Figure 15. Comparison of approaches for an underwater image: (a) input image; results from (b) He [23], (c) MSR [5], (d) Ancuti [32], and (e) Zhang [8]; (f) ours.
Figure 16. Comparison of approaches for an underwater image: (a) input image; results from (b) He [23], (c) Li [30], (d) Lin [11], and (e) Ancuti [32]; (f) ours.