
Article

Sand Dust Images Enhancement Based on Red and Blue Channels

1 School of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
2 Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1918; https://doi.org/10.3390/s22051918
Submission received: 29 January 2022 / Revised: 20 February 2022 / Accepted: 27 February 2022 / Published: 1 March 2022
(This article belongs to the Topic Data Science and Knowledge Discovery)
Figure 1. Formation model for degraded images.
Figure 2. Flowchart of the sand-dust image restoration method based on the red and blue channels.
Figure 3. Sand dust images and histograms.
Figure 4. Sand dust image correction based on the red channel correction function: (a) Sand-dust images; (b) Corrected images.
Figure 5. Atmospheric light position selected by two algorithms: (a) Proposed algorithm; (b) Dark channel prior algorithm.
Figure 6. Qualitative comparison results of sand dust images with weak color cast: (a) Sand-dust images; (b) TFO [9]; (c) NGT [11]; (d) BCGF [13]; (e) AWC [17]; (f) VRSI [19]; (g) SBT [20]; (h) FBE [29]; (i) GDCP [30]; (j) HDCP [31]; (k) RBCP [32]; (l) Proposed.
Figure 7. Qualitative comparison results of various sandstorm images: (a) Sand-dust images; (b) TFO [9]; (c) NGT [11]; (d) BCGF [13]; (e) AWC [17]; (f) VRSI [19]; (g) SBT [20]; (h) FBE [29]; (i) GDCP [30]; (j) HDCP [31]; (k) RBCP [32]; (l) Proposed.

Abstract

The scattering and absorption of light degrade images captured in sandstorm scenes, which suffer from color casting, low contrast and lost detail, resulting in poor visual quality. Under such conditions, traditional image restoration methods cannot fully restore images, because color casting persists and the scene transmission map and atmospheric light are poorly estimated. To correct color casting and enhance the visibility of such sand dust images effectively, we propose a sand dust image enhancement algorithm based on the red and blue channels. It consists of two modules: a red channel-based correction function (RCC) and blue channel-based dust particle removal (BDPR); the RCC module corrects color casting, and the BDPR module removes sand dust particles. After a dust image is processed by these two modules, a clear, visible image is produced. The experimental results were analyzed qualitatively and quantitatively, showing that this method significantly improves image quality under sandstorm weather and outperforms state-of-the-art restoration algorithms.

1. Introduction

Images or videos captured in sandstorm scenes usually have low contrast, poor visibility and yellowish tones. This is because sand dust particles scatter and absorb specific spectra of light between the imaging devices and the observed objects. These degraded sand dust images therefore lose much of their quality and degrade the performance of computer vision systems that typically work outdoors in inclement weather, such as video surveillance systems for public security monitoring [1,2], intelligent transportation systems for license plate recognition [3,4], and visual recognition systems for automatic driving [5]. Hence, an effective sand dust image restoration method that restores color and contrast for computer vision application systems is desirable. To improve the performance of computer vision systems and restore the visibility of degraded images, several restoration algorithms for degraded sand dust images have been proposed. Huang [6] presented a transformation method that enhances the contrast of degraded images via gamma correction and the probability distribution of bright pixels. AlRuwaili [7] proposed an enhancement scheme in which the degraded input image is first converted into the HSI color space, and then color cast correction and contrast stretching are performed. Zhi [8] restored vivid sand dust images by using color correction, SVD and the contrast-limited adaptive histogram equalization algorithm. Al-Ameen [9] introduced tri-threshold fuzzy operators to enhance contrast. Yan [10] enhanced dust images by improving the sub-block partially overlapping histogram equalization algorithm. Shi [11] enhanced images by combining contrast-limited adaptive histogram equalization (CLAHE) with gray world theory. Xu [12] proposed a tensor least squares method to enhance sand dust images. Cheng [13] used white balance and guided filtering technologies. Park [14] proposed a coincidence histogram method. Although these traditional algorithms have some effect on the restoration of sand dust images, the restored images appear over-enhanced or under-enhanced, and the color is distorted.
To provide better visual quality for degraded sand dust images, several visibility restoration methods using the atmospheric transmission model have been presented. Yu [15] introduced a method for restoring a single sand dust image that relies on the atmospheric transmission model and on constraining information loss: the atmospheric light is first estimated using the grey-world assumption and the scattering model; then, a fast guided filter is used to suppress halos in post-processing. Wang [16] first considered multiple scattering factors and then used particle swarm optimization to tune the exposure parameters and the atmospheric light in order to obtain better restored images. Peng [17] assumed that the ambient light was known and restored images by calculating the difference between the light intensity observed in the degraded scene and the ambient light intensity. Huang [18] presented a novel Laplacian-based image restoration method. Yang [19] introduced the minimum filter and a Gaussian adaptive transform. Kim [20] proposed a method based on saturation-based transmission estimation. However, the above-mentioned methods make the processed image appear blocky, haloed or over-enhanced, and they cannot handle sand dust images that contain heavy yellow tones.
Image dehazing algorithms have attracted great attention in recent years. One attractive solution is the neural network approach: for instance, Cai [21] designed a novel Ranking-CNN for single image dehazing, Yang [22] proposed a Region Detection Network model for regional detection in a single hazy image, and Li [23] proposed a PDR-Net for single image dehazing. Their findings show that a neural network can estimate the ambient light and the transmission better than other approaches. However, training a neural network requires large datasets, and no dataset of sand dust images exists, so neural networks are currently unsuitable for sand dust image processing. Another effective solution for image dehazing is the dark channel prior (DCP) method proposed by He [24], an observation on natural images that one of the RGB channels has very low intensity for most pixels. The DCP method is very useful for haze removal, and its calculation is simple. Many improved DCP methods have been applied in various fields, such as image dehazing [25,26], underwater image enhancement [27,28], and sand dust image restoration [29,30,31,32,33].
To effectively remove atmospheric particles from sand dust images, Fu [29] proposed a restoration method based on a fusion strategy: the input sand dust image is first color-corrected with a statistical scheme; gamma correction with two different coefficients and the DCP are then applied; finally, the input images and weight maps are fused to obtain the enhanced image. Peng [30] proposed a generalized dark channel prior method to restore sand dust, haze and underwater images, estimating the ambient light through depth-dependent color change and then calculating the difference between the ambient light and the scene intensity. Shi [31] proposed a DCP method for enhancing sand dust images and reducing halos, consisting of three modules: color casting is first corrected using gray world theory in LAB space; sand particles are then removed by an improved DCP-based dehazing method; finally, a gamma function stretches the contrast. Gao [32] proposed reversing the blue channel. Cheng [33] combined white balance with the reversed blue channel technique to enhance sand dust images. However, the improved DCP algorithms mentioned above create block artifacts, color distortion and yellow tones when restoring degraded images taken in sandstorm weather.
In this paper, we propose a method for restoring sand dust images using the red and blue channels, which takes advantage of the proposed red channel correction function (RCC) module and blue channel dust particle removal (BDPR) module. By combining them, the color deviation problem and the underestimation of scene depth can be effectively overcome. Unlike the other improved DCP algorithms, our algorithm is founded upon the imaging characteristics of sand dust images, and with this scheme it can effectively generate clear images. In summary, the contributions of this paper are reflected in the following three aspects:
  • The red channel correction function (RCC) effectively avoids insufficient or excessive color cast adjustment in real sand dust images. It restores the lost color channels from the other channels, because dust particles absorb little red light under dusty weather conditions, so the red ray decays the slowest.
  • After the input image is processed by the correction function, the blue channel dust particle removal (BDPR) module removes atmospheric particles from the degraded image. We assume that dust particles absorb blue rays quickly, so the intensity of the blue channel is lower. The proposed method removes sand dust particles more effectively and eliminates the bluish tone of the restored image.
  • To obtain more accurate transmission and atmospheric light, the sand dust image and the corrected image are fed to the BDPR module simultaneously: the sand dust image is used for atmospheric light estimation, and the corrected image is used for calculating the transmission.
The rest of the paper is organized as follows. Section 2 reviews the dark channel prior method. Section 3 introduces the proposed methods and algorithms in detail. Section 4 presents the experimental results of this method and of state-of-the-art sand dust image restoration algorithms and analyzes them in detail. Finally, Section 5 summarizes the advantages and limitations of the proposed method and suggests directions for further research.

2. Background

In this section, we will briefly review the dehazing method based on the dark channel prior [24], which has been widely improved and applied in the restoration of hazy, underwater and sandstorm images.
In the fields of computer graphics and computer vision, imaging models that describe the light scattering and absorption between the camera and the observed scene are widely used, as shown in Figure 1.
Assuming that the light decay is homogeneous, the formation model for a hazy image is given by [34]:
$$D_c(x) = R_c(x)\,t(x) + A_c\big(1 - t(x)\big), \quad c \in \{r, g, b\} \tag{1}$$
where $D_c(x)$ is the observed intensity of color channel $c$ at pixel $x$, $R_c(x)$ is the intensity of the haze-free scene, $t(x)$ represents the medium transmission, $A_c$ is the ambient light, $c$ indicates one of the RGB color channels, and the values of $D_c(x)$, $R_c(x)$ and $A_c$ lie in $[0, 1]$.
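As a concrete illustration, the formation model of Equation (1) can be simulated directly in NumPy. This is only a sketch: the scene, transmission map and ambient light values below are made up for the example.

```python
import numpy as np

def degrade(R, t, A):
    """Apply the atmospheric scattering model D = R*t + A*(1 - t)
    per channel (Eq. (1)); R is HxWx3 in [0, 1], t is HxW, A is length-3."""
    t3 = t[..., None]                       # broadcast transmission over channels
    return R * t3 + np.asarray(A) * (1.0 - t3)

# Toy example: uniform grey scene, transmission falling with "depth".
R = np.full((4, 4, 3), 0.5)
t = np.linspace(1.0, 0.2, 16).reshape(4, 4)
D = degrade(R, t, A=[0.9, 0.8, 0.6])        # dusty ambient light: strong red, weak blue
```

With a reddish ambient light such as this, pixels with small $t(x)$ drift toward the yellow-red tones typical of sandstorm images, while pixels with $t(x) = 1$ keep the scene radiance unchanged.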
The DCP is based on observations of outdoor haze-free images, which show that for approximately 75% of the pixels in non-sky areas, the intensity of at least one of the RGB color channels is close to zero. The DCP is as follows:
$$R^{\mathrm{dark}}_{rgb}(x) = \min\Big(\min_{y \in \Omega(x)} R^{R}(y),\ \min_{y \in \Omega(x)} R^{G}(y),\ \min_{y \in \Omega(x)} R^{B}(y)\Big) \to 0 \tag{2}$$
Applying the minimum operator over a local patch of the RGB channels in Equation (1) and dividing both sides of Equation (1) by $A_c$, the transmission $t(x)$ can be roughly calculated as:
$$t(x) = 1 - \omega \min_{y \in \Omega(x)} \Big\{ \min_{c \in \{r,g,b\}} \frac{D_c(y)}{A_c} \Big\} \tag{3}$$
where $\omega = 0.95$ leaves some haze in the restored scene brightness to make it look natural. Because local minimum filtering generates block artifacts, the result can be refined by the soft matting method [35] or by a guided filter [36]. The atmospheric light $A_c$ is chosen from the brightest 0.1% of the pixels selected by the DCP on the hazy image.
Finally, according to the formation model for hazy images expressed in Equation (1), by solving the image formation process inversely, the restored image R c ( x ) is calculated as:
$$R_c(x) = \frac{D_c(x) - A_c}{\max\big(t(x),\, t_0\big)} + A_c, \quad c \in \{r, g, b\} \tag{4}$$
where $t_0$ is an empirical parameter, set to 0.1, which improves the exposure of the scene radiance.
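The whole of Section 2, Equations (2) to (4), can be condensed into a short NumPy sketch. This is an illustrative implementation under simplifying assumptions: a plain local minimum filter stands in for soft matting or guided filtering, and the airlight is taken as the mean colour of the brightest dark-channel pixels.

```python
import numpy as np

def local_min(img, k):
    """Minimum filter over a k x k neighbourhood (pure NumPy, edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    H, W = img.shape
    out = np.full((H, W), np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + H, dx:dx + W])
    return out

def dcp_dehaze(D, patch=15, omega=0.95, t0=0.1):
    """Minimal dark-channel-prior dehazing (Eqs. (2)-(4)); D is HxWx3 in [0, 1]."""
    dark = local_min(D.min(axis=2), patch)               # Eq. (2): dark channel
    n = max(1, int(dark.size * 0.001))                   # brightest 0.1% of pixels
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = D[idx].mean(axis=0)                              # atmospheric light estimate
    t = 1.0 - omega * local_min((D / A).min(axis=2), patch)   # Eq. (3)
    t = np.maximum(t, t0)
    R = (D - A) / t[..., None] + A                       # Eq. (4): invert the model
    return np.clip(R, 0.0, 1.0), t
```

Refining the raw patch-wise $t(x)$ with a guided filter [36] instead of using it directly would suppress the block artifacts mentioned above.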

3. Proposed Algorithm

The image formation model in Section 2 suggests that estimating the medium transmission and atmospheric light is crucial to restoring degraded images. However, the inherent color cast of degraded sand dust images makes traditional dehazing algorithms, which are built on the atmospheric propagation model for hazy images, unable to process them. To this end, we propose a sand dust removal method based on the red and blue channels to recover the visual and color quality. The proposed method consists of an RCC module and a BDPR module, for which novel algorithms are proposed here. The flowchart of our method is shown in Figure 2.
First, the original sand dust image is corrected by the red channel correction function module to overcome the yellow or red color deviation of sand dust images. Second, the blue channel sand dust removal module restores image details and removes atmospheric particles, based on the characteristic that blue rays are quickly absorbed in sandstorm weather. However, the image processed by the BDPR module is darker and has some blue tones, so we use the contrast enhancement method of [37,38] to increase the contrast of the color-corrected image. Finally, the clear restored image is generated by fusing the enhanced image with the dust-removed image using wavelet fusion technology.
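The final fusion step can be illustrated with a single-level Haar wavelet fusion. This is a toy stand-in for the wavelet fusion referenced above; the wavelet family, number of decomposition levels and fusion rules used in the paper may differ. Here the approximation subbands are averaged and the larger-magnitude detail coefficient is kept at each position.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (exact for even-sized inputs)."""
    H, W = ll.shape
    lo, hi = np.empty((H, 2 * W)), np.empty((H, 2 * W))
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * H, 2 * W))
    out[0::2, :], out[1::2, :] = lo + hi, lo - hi
    return out

def wavelet_fuse(x, y):
    """Fuse two single-channel images: average the approximations,
    keep the larger-magnitude detail coefficient at each position."""
    cx, cy = haar_dwt2(x), haar_dwt2(y)
    ll = (cx[0] + cy[0]) / 2
    details = [np.where(np.abs(dx) >= np.abs(dy), dx, dy)
               for dx, dy in zip(cx[1:], cy[1:])]
    return haar_idwt2(ll, *details)
```

Averaging the approximations balances the overall brightness of the two inputs, while the max-magnitude rule preserves whichever input carries the sharper edge at each location.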

3.1. RCC Module

For image color correction, gamma correction [6] and gray world theory [7] are widely used. However, sand dust images have severe color casting because the green and blue rays in the atmosphere are absorbed and scattered by sand dust particles. Gamma correction and the gray world hypothesis cannot be applied directly to correct this color casting, as they may cause color distortion and over-enhancement.
The color casting of sand dust images is caused by light attenuation: the red ray decays slowest and the blue ray is absorbed fastest in sandstorm weather, so the RGB channel histograms of the degraded image are shifted sequentially. Two histograms of original sand dust images are shown in Figure 3, where the images on the left are sand dust images and those on the right are the histograms of their RGB channels.
Based on the above observation, we propose a color correction function module, which depends on the red channel to effectively adjust the image. First, the histogram of the red channel is used as a reference in the RCC module, and the histograms of the blue and green channels are translated as follows:
$$I'_r(x) = I_r(x), \quad I'_g(x) = I_g(x) + (\mu_r - \mu_g), \quad I'_b(x) = I_b(x) + (\mu_r - \mu_b) \tag{5}$$
Then, color is stretched in RGB color space, which is described as follows:
$$I_c^{\max} = \mu_c + k\,\sigma_c, \quad I_c^{\min} = \mu_c - k\,\sigma_c, \quad I''_c = 255\,\frac{I'_c - I_c^{\min}}{I_c^{\max} - I_c^{\min}} \tag{6}$$
where $\mu_c$, $c \in \{r, g, b\}$ is the mean value of each channel in RGB color space, $\sigma_c$, $c \in \{r, g, b\}$ is its standard deviation, and $k$ is an adjustment factor, set to 2 in our experiments. Images processed by the RCC module are shown in Figure 4. The method is simple and effective for correcting sand dust images, but the corrected image is still blurry.
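Equations (5) and (6) translate into a few lines of NumPy. The sketch below assumes 8-bit RGB input scaled to [0, 255] and uses the per-channel standard deviation for $\sigma_c$:

```python
import numpy as np

def rcc(img, k=2.0):
    """Red-channel-based colour correction (Eqs. (5)-(6)).
    img: HxWx3 float RGB in [0, 255]."""
    out = img.astype(np.float64).copy()
    mu = out.reshape(-1, 3).mean(axis=0)       # per-channel means
    # Eq. (5): shift green and blue histograms to align with the red mean
    out[..., 1] += mu[0] - mu[1]
    out[..., 2] += mu[0] - mu[2]
    # Eq. (6): stretch each channel over [mu - k*sigma, mu + k*sigma]
    for c in range(3):
        m, s = out[..., c].mean(), out[..., c].std()
        lo, hi = m - k * s, m + k * s
        out[..., c] = 255.0 * (out[..., c] - lo) / max(hi - lo, 1e-6)
    return np.clip(out, 0.0, 255.0)
```

After the shift in Equation (5), all three channel means coincide with the red mean, so the subsequent stretch treats the channels symmetrically and removes the sequential histogram offset seen in Figure 3.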

3.2. BDPR Module

The first term in Equation (1), $R_c(x)\,t(x)$, $c \in \{r, g, b\}$, is the direct decay component, and $A_c\,(1 - t(x))$, $c \in \{r, g, b\}$, is the atmospheric light attenuation. The direct decay component describes the radiation and attenuation of the scene in the medium, while the atmospheric light attenuation describes the scene changes caused by light scattering. Moreover, Equation (1) shows that the radiant light of the scene first undergoes multiplicative attenuation and then additive attenuation.
According to the Beer Lambert law, the propagation of light decreases exponentially with increasing distance. Assuming the atmosphere is uniform, the medium transmission map t ( x ) can be indicated as:
$$t(x) = e^{-\beta d(x)} \tag{7}$$
where $d(x)$ is the scene depth and $\beta$ is the atmospheric scattering coefficient. As $d(x)$ approaches 0, $t(x)$ approaches 1, so the atmospheric light attenuation has no effect. In contrast, when $d(x)$ is not 0, $t(x)$ decreases as $d(x)$ increases, and the atmospheric light attenuation plays the dominant role. However, Equation (1) cannot be used directly for sand dust images; hence, we transform Equation (1) into Equation (8) to exploit the inherent characteristic that the intensity of the blue channel in a color sand dust image is very low:
$$D_c(x) = R_c(x)\,t(x) + A_c\big(1 - t(x)\big), \quad c \in \{r, g\}; \qquad 1 - D_b(x) = \big(1 - R_b(x)\big)\,t(x) + \big(1 - A_b\big)\big(1 - t(x)\big) \tag{8}$$
where $D_c$ and $R_c$ represent the degraded sand dust image and the original image, respectively. Note that Equation (8) is equivalent to Equation (1) and thus still reflects the fact that light decays with distance, which is what actually happens in sandstorm weather. The only difference we must account for is that the blue intensity attenuates faster as distance increases. Hence, we modify the DCP method following [27,32], which states that:
$$R^{\mathrm{blue}}(x) = \min\Big(\min_{y \in \Omega(x)} R^{R}(y),\ \min_{y \in \Omega(x)} R^{G}(y),\ \min_{y \in \Omega(x)} \big(1 - R^{B}(y)\big)\Big) \to 0 \tag{9}$$
for a non-degraded sand dust image, where $\Omega(x)$ represents the neighborhood of pixel $x$. Note that in the part of the degraded image near the observer, the blue channel intensity is high, so its reverse $1 - R^{B}(y)$ is low and the prior holds. However, the blue intensity attenuates rapidly as the distance increases, so the prior gradually fails. This fact helps to estimate the depth map of the scene and the atmospheric light.
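A sketch of the blue channel prior map of Equation (9): it is the dark channel of Equation (2) computed after reversing the blue channel. The patch size is an illustrative choice.

```python
import numpy as np

def local_min(img, k=15):
    """Grey-scale minimum filter over a k x k neighbourhood (pure NumPy)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    H, W = img.shape
    out = np.full((H, W), np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + H, dx:dx + W])
    return out

def blue_prior(img, k=15):
    """Blue-channel prior map (Eq. (9)): like the dark channel, but the
    blue channel enters reversed (1 - B). img: HxWx3 float RGB in [0, 1]."""
    flipped = np.dstack([img[..., 0], img[..., 1], 1.0 - img[..., 2]])
    return local_min(flipped.min(axis=2), k)
```

For near-scene pixels with high blue intensity, the reversed blue term is small and the map stays close to zero, exactly as the prior predicts; for distant, blue-depleted dust the map grows, signalling depth.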
Restoration for a single degraded sand dust image using Equation (9) is a very challenging task, because there is little image information available, and the estimation accuracy of the medium transmission and atmospheric light is related to the recovery quality of the image.
In previous studies, the atmospheric light was chosen from the most opaque haze area of the image. In this paper, the atmospheric light is estimated through Equations (8) and (9) from the input degraded sand dust image, and the brightest 0.1% of pixels are selected as the atmospheric light estimate, as suggested in [24]. Figure 5 shows the atmospheric light positions selected by the proposed algorithm and the dark channel prior algorithm in two images.
The red area in Figure 5 marks the estimated position of the atmospheric light. Clearly, the dark channel algorithm selects the brightest area of the sky, while the proposed algorithm chooses the most opaque sand dust area rather than the brightest sky area or other white objects (such as the white cars in the picture). This shows that the proposed method is better at choosing the atmospheric light.
After the atmospheric light is estimated, the other key step is to calculate the medium transmission. Since the degraded sand dust image has low contrast and color distortion, it cannot be used to estimate the transmission directly. Instead, we use the bright and dark channels of the image corrected by the RCC module, and their difference, to estimate the transmission [17]. This method assumes that the density of the sand dust is related to the maximum and minimum of the channels and their difference, which is defined as:
$$d \propto I^{bright}\big(I^{bright} - I^{dark}\big); \quad I^{dark}(x) = \min\big\{I_r^{RCC}(x),\ I_g^{RCC}(x),\ 1 - I_b^{RCC}(x)\big\}, \quad I^{bright}(x) = \max\big\{I_r^{RCC}(x),\ I_g^{RCC}(x),\ 1 - I_b^{RCC}(x)\big\} \tag{10}$$
where $I_r^{RCC}(x)$, $I_g^{RCC}(x)$ and $I_b^{RCC}(x)$ represent the RGB channels of the image corrected by the RCC module. The sand dust density is then expressed as:
$$d(x) = \min_{\Omega(x)} \frac{I^{dark}(x)}{1 - \dfrac{I^{bright}(x) - I^{dark}(x)}{\max\big(1,\ I^{bright}(x)\big)}} \tag{11}$$
Assuming that the medium transmission is locally homogeneous and inversely related to $d(x)$, the medium transmission $t(x)$ is estimated as:
$$t(x) = 1 - \omega\, G\big(d(x)\big) \tag{12}$$
where $G(\cdot)$ is the guided image filtering function [36], and $\omega$ is a parameter for retaining the naturalness of the restored image, set to 0.95 in this paper.
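Equations (10) to (12) can be sketched as below. Two assumptions are made loudly here: the density formula is our reading of Equation (11), and a simple box filter stands in for the guided filter $G$ used in the paper.

```python
import numpy as np

def box_blur(img, k=15):
    """Box filter, used here only as a stand-in for guided filtering [36]."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    H, W = img.shape
    acc = np.zeros((H, W))
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + H, dx:dx + W]
    return acc / (k * k)

def bdpr_transmission(img_rcc, omega=0.95, k=15):
    """Transmission from the RCC-corrected image (Eqs. (10)-(12)).
    img_rcc: HxWx3 float RGB in [0, 1]. The density expression below is
    an interpretation of Eq. (11), not a verbatim reproduction."""
    ch = np.dstack([img_rcc[..., 0], img_rcc[..., 1], 1.0 - img_rcc[..., 2]])
    i_dark, i_bright = ch.min(axis=2), ch.max(axis=2)
    dens = i_dark / (1.0 - (i_bright - i_dark) / np.maximum(1.0, i_bright))
    d = box_blur(np.clip(dens, 0.0, 1.0), k)    # smoothed, as guided filtering would
    return np.clip(1.0 - omega * d, 0.05, 1.0)
```

A guided filter driven by the corrected image would preserve depth edges that the box blur smears, which is why the paper adopts it despite the extra cost noted in Section 4.3.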
Finally, after the atmospheric light $A_c$ and the transmission $t(x)$ are calculated, the dust-free image is obtained through Equation (4).

4. Experimental Results

In this section, we assess the proposed method qualitatively and quantitatively. We compare it with 10 state-of-the-art image restoration algorithms: tri-threshold fuzzy operators (TFO) [9], normalized gamma transformation (NGT) [11], blue channel compensation and guided image filtering (BCGF) [13], airlight white correction (AWC) [17], visibility restoration of single image (VRSI) [19], saturation-based transmission map estimation (SBT) [20], fusion-based enhancing approach (FBE) [29], generalization of the dark channel prior (GDCP) [30], halo-reduced dark channel prior (HDCP) [31] and reversing the blue channel prior (RBCP) [32], whose source codes are provided by the authors. The experimental results comprise three parts. The first part qualitatively discusses the restoration results on images captured in sandstorm weather. In the second part, the three evaluation metrics of [39], the natural image quality evaluator (NIQE) [40], the distortion identification-based image verity and integrity evaluation (DIIVINE) index [41] and the natural scene statistics and perceptual characteristics-based quality index (NPQI) [42] are used to quantitatively analyze the restoration results of the presented algorithm and the 10 state-of-the-art algorithms. The third part analyzes the execution time of the algorithms. All algorithms except SBT [20] use MATLAB code and are run on a computer with a 2.7 GHz Intel Core i5 CPU and 32 GB of RAM.

4.1. Qualitative Assessment

Figure 6 and Figure 7 show the restoration results of the presented method and the benchmark methods on sand dust images with weak color cast and on various sandstorm images. As shown in Figure 6, TFO [9], BCGF [13], AWC [17] and GDCP [30] do not eliminate the color shift, and NGT [11] and VRSI [19] cannot effectively remove sand dust. FBE [29] eliminates the undesirable color cast, but the restored image is dark and details are lost. HDCP [31] over-enhances the contrast, and its restoration results are severely distorted. The image obtained by RBCP [32] is dark and blue.
For the sandstorm images shown in Figure 7, TFO [9], SBT [20], GDCP [30] and RBCP [32] achieve a poor enhancement effect. NGT [11] does not remove the sand dust particles in the image. BCGF [13], AWC [17] and HDCP [31] cannot remove the color veils of sandstorm images. VRSI [19] over-enhances the images, and the image restored by FBE [29] is dark, with the sand dust not effectively removed.
Compared with the above 10 state-of-the-art methods, our method removes the color cast with the RCC module and removes sand dust particles with the BDPR module. Our restored results are more natural in color, clearer in detail and more similar to real images.

4.2. Quantitative Assessment

In general, an objective evaluation mechanism is used to quantify the accuracy of restoration results. Because no clear, dust-free reference images exist, it is very difficult to analyze restored sand dust images quantitatively. Therefore, to better evaluate the performance of this method on sand dust images, the paper uses no-reference metrics: the three well-known indicators proposed in [39], namely the visible edges recovery percentage $e$, the saturation $\sigma$ and the contrast restoration percentage $\bar{r}$, together with the natural image quality evaluator (NIQE) [40], the distortion identification-based image verity and integrity evaluation (DIIVINE) index [41] and the natural scene statistics and perceptual characteristics-based quality index (NPQI) [42]. For these metrics, a larger $e$ and a $\sigma$ close to zero suggest better performance, and a greater $\bar{r}$ implies stronger contrast in the restored image. Smaller NIQE, DIIVINE and NPQI values all indicate better restored image quality.
These metrics are used to quantitatively evaluate the restoration performance of the proposed algorithm and the ten state-of-the-art algorithms on the 12 sand dust images from Figure 6 and Figure 7. The results are shown in Table 1. Compared with the other 10 algorithms, the images restored by the proposed method achieve a greater $e$, a $\sigma$ closer to 0 and a top-ranked $\bar{r}$, and the proposed method also obtains better DIIVINE and NPQI scores. The experimental results in Table 1 show that the proposed method performs better in restoring sand dust images, with better contrast, tones and saturation. Figure 6l and Figure 7l also demonstrate that the images restored by the proposed algorithm achieve good effects.
To further verify the performance and robustness of the proposed method, we used 375 sand dust images collected from the Internet. The average scores of the six metrics on the images restored by the proposed and compared methods are listed in Table 2. The $e$ obtained by BCGF [13] and FBE [29] is greater than that of the proposed algorithm, but their other results are lower. Although the $\sigma$ and $\bar{r}$ of HDCP [31] are better than those of our method, the actual restoration effect of HDCP [31] is clearly worse. Moreover, compared with the other methods, the proposed algorithm obtains better NIQE results. As Table 2 shows, it is not surprising that our method achieves top-ranked DIIVINE scores and the best NPQI scores over the 375 sand dust images, which is mainly due to the corrective ability of the RCC and BDPR modules. The results on this large sand dust image dataset indicate that the proposed method achieves better performance in the restoration of sand dust images.

4.3. Running Time

Table 3 lists the average run time of the different methods over 20 execution rounds on real-world sand dust images of different sizes. The experiment is conducted under Windows 10 on an Intel i5 CPU with 32 GB of RAM. All codes are provided by the authors and are written in MATLAB except that of SBT [20]; to ensure a fair comparison, SBT [20] is excluded because its code is written in Python. As shown in Table 3, the proposed method has a high time cost, so its application to real-time systems needs improvement. The guided filtering and local filtering used in this method are responsible for the high time cost. Inspired by Kim [20], using a pixel-by-pixel compensation approach to estimate the transmission could significantly reduce the time cost and satisfy real-time video processing applications.

5. Conclusions

In this paper, a single sand dust image restoration algorithm is proposed that can effectively recover sand dust images. First, because red light attenuates most slowly in the degraded image, color correction is performed using a red-channel-based correction algorithm. Then, the DCP algorithm is improved based on the blue channel, which attenuates fastest, to remove sand dust particles. Quantitative and qualitative analyses of the restoration results on a large number of degraded sand dust images with different scenes and color casts show that the proposed algorithm performs satisfactorily on most sand dust images and produces reasonable restoration results. However, the proposed algorithm has the disadvantage of a high time cost. In the future, we will study more efficient sand dust restoration algorithms to meet the needs of real-time vision application systems. A further goal is to develop methods for sand dust video restoration using spatio-temporal data modeling [43].
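As a rough sketch of the blue-channel idea (not the full BDPR implementation; the function name, the default patch size, and the assumption of an H×W×3 RGB array scaled to [0, 1] are ours), a dark-channel-style minimum restricted to the blue channel can be written as:

```python
import numpy as np

def blue_dark_channel(img, patch=15):
    """Local minimum of the blue channel over patch x patch windows,
    a simplified stand-in for the BDPR transmission cue: blue light
    attenuates fastest in sand dust, so its local minima are informative."""
    blue = img[..., 2]                       # H x W blue plane of an RGB image
    pad = patch // 2
    padded = np.pad(blue, pad, mode='edge')  # replicate borders
    out = np.empty_like(blue)
    h, w = blue.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A production version would vectorize the window minimum (e.g., with an erosion filter) and refine the resulting transmission map with guided filtering, as done in the paper.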

Author Contributions

Conceptualization, F.S. and Z.J.; methodology, F.S. and Z.J.; software, F.S.; validation, F.S., S.S. and J.W.; formal analysis, F.S. and H.L.; investigation, F.S.; data curation, F.S., S.S. and J.W.; writing—original draft preparation, F.S.; writing—review and editing, F.S. and Z.J.; supervision, Z.J.; funding acquisition, F.S. and Z.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the International Science and Technology Cooperation Project of the Ministry of Education of the People’s Republic of China under Grant DICE 2016–2196, the National Natural Science Foundation of China under Grant U1803261, the Scientific Research Plan of Universities in Xinjiang Uygur Autonomous Region under Grant XJEDU2019Y006, and the Natural Science Foundation of Xinjiang under Grant 2021D01C057. We sincerely thank the editors and reviewers for taking the time to review our manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. Data are not publicly available due to privacy considerations.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RCC: Red-channel-based correction function
BDPR: Blue-channel-based dust particle removal
SVD: Singular value decomposition
CLAHE: Contrast limited adaptive histogram equalization
DCP: Dark channel prior
TFO: Tri-threshold fuzzy operators
NGT: Normalized gamma transformation
BCGF: Blue channel compensation and guided image filtering
AWC: Airlight white correction
VRSI: Visibility restoration of single image
SBT: Saturation-based transmission map estimation
FBE: Fusion-based enhancing approach
GDCP: Generalization of the dark channel prior
HDCP: Halo-reduced dark channel prior
RBCP: Reversing the blue channel prior
NIQE: Natural image quality evaluator
DIIVINE: Distortion identification-based image verity and integrity evaluation index
NPQI: Natural scene statistics and perceptual characteristics-based quality index

References

1. Zhang, X.; Hu, W.; Chen, S.; Maybank, S. Graph-embedding-based learning for robust object tracking. IEEE Trans. Ind. Electron. 2013, 61, 1072–1084.
2. Castanon, G.; Elgharib, M.; Saligrama, V.; Jodoin, P.M. Retrieval in long-surveillance videos using user-described motion and object attributes. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 2313–2327.
3. Chaturvedi, M.; Srivastava, S. Multi-modal design of an intelligent transportation system. IEEE Trans. Intell. Transp. Syst. 2016, 18, 2017–2027.
4. Ferreira, D.L.; Nunes, B.A.A.; Obraczka, K. Scale-free properties of human mobility and applications to intelligent transportation systems. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3736–3748.
5. Fang, P.; Zecong, W.; Zhang, X. Vehicle automatic driving system based on embedded and machine learning. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), Chongqing, China, 10–12 July 2020; pp. 281–284.
6. Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041.
7. Alruwaili, M.; Gupta, L. A statistical adaptive algorithm for dust image enhancement and restoration. In Proceedings of the 2015 IEEE International Conference on Electro/Information Technology (EIT), Dekalb, IL, USA, 21–23 May 2015; pp. 286–289.
8. Zhi, N.; Mao, S.; Li, M. Visibility restoration algorithm of sand dust degraded images. J. Image Graph. 2016, 21, 1585–1592.
9. Al-Ameen, Z. Visibility enhancement for images captured in dusty weather via tuned tri-threshold fuzzy intensification operators. Int. J. Intell. Syst. Appl. 2016, 8, 10.
10. Yan, T.; Wang, L.; Wang, J. Method to Enhance Degraded Image in Dust Environment. J. Softw. 2014, 9, 2672–2677.
11. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand–dust image enhancement. IET Image Process. 2019, 14, 747–756.
12. Xu, G.; Wang, X.; Xu, X. Single image enhancement in sandstorm weather via tensor least square. IEEE/CAA J. Autom. Sin. 2020, 7, 1649–1661.
13. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. A Fast Sand-Dust Image Enhancement Algorithm by Blue Channel Compensation and Guided Image Filtering. IEEE Access 2020, 8, 196690–196699.
14. Park, T.H.; Eom, I.K. Sand-Dust Image Enhancement Using Successive Color Balance with Coincident Chromatic Histogram. IEEE Access 2021, 9, 19749–19760.
15. Yu, S.; Zhu, H.; Wang, J.; Fu, Z.; Xue, S.; Shi, H. Single sand-dust image restoration using information loss constraint. J. Mod. Opt. 2016, 63, 2121–2130.
16. Wang, Y.; Li, Y.; Zhang, T. The method of image restoration in the environments of dust. In Proceedings of the 2010 IEEE International Conference on Mechatronics and Automation, Xi’an, China, 4–7 August 2010; pp. 294–298.
17. Peng, Y.T.; Lu, Z.; Cheng, F.C.; Zheng, Y.; Huang, S.C. Image haze removal using airlight white correction, local light filter, and aerial perspective prior. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1385–1395.
18. Huang, S.C.; Ye, J.H.; Chen, B.H. An advanced single-image visibility restoration algorithm for real-world hazy scenes. IEEE Trans. Ind. Electron. 2014, 62, 2962–2972.
19. Yang, Y.; Zhang, C.; Liu, L.; Chen, G.; Yue, H. Visibility restoration of single image captured in dust and haze weather conditions. Multidimens. Syst. Signal Process. 2020, 31, 619–633.
20. Kim, S.E.; Park, T.H.; Eom, I.K. Fast single image dehazing using saturation based transmission map estimation. IEEE Trans. Image Process. 2019, 29, 1985–1998.
21. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 2016, 25, 5187–5198.
22. Yang, X.; Li, H.; Fan, Y.L.; Chen, R. Single image haze removal via region detection network. IEEE Trans. Multimed. 2019, 21, 2545–2560.
23. Li, C.; Guo, C.; Guo, J.; Han, P.; Fu, H.; Cong, R. PDR-Net: Perception-inspired single image dehazing network with refinement. IEEE Trans. Multimed. 2019, 22, 704–716.
24. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
25. Liu, P.J.; Horng, S.J.; Lin, J.S.; Li, T. Contrast in haze removal: Configurable contrast enhancement model based on dark channel prior. IEEE Trans. Image Process. 2018, 28, 2212–2227.
26. Ling, Z.; Gong, J.; Fan, G.; Lu, X. Optimal transmission estimation via fog density perception for efficient single image defogging. IEEE Trans. Multimed. 2017, 20, 1699–1711.
27. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145.
28. Song, W.; Wang, Y.; Huang, D.; Liotta, A.; Perra, C. Enhancement of underwater images with statistical model of background light and optimization of transmission map. IEEE Trans. Broadcast. 2020, 66, 153–169.
29. Fu, X.; Huang, Y.; Zeng, D.; Zhang, X.P.; Ding, X. A fusion-based enhancing approach for single sandstorm image. In Proceedings of the 2014 IEEE 16th International Workshop on Multimedia Signal Processing (MMSP), Jakarta, Indonesia, 22–24 September 2014; pp. 1–5.
30. Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868.
31. Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Let you see in sand dust weather: A method based on halo-reduced dark channel prior dehazing for sand-dust image enhancement. IEEE Access 2019, 7, 116722–116733.
32. Gao, G.; Lai, H.; Jia, Z.; Liu, Y.; Wang, Y. Sand-dust image restoration based on reversing the blue channel prior. IEEE Photonics J. 2020, 12, 1–16.
33. Cheng, Y.; Jia, Z.; Lai, H.; Yang, J.; Kasabov, N.K. Blue channel and fusion for sandstorm image enhancement. IEEE Access 2020, 8, 66931–66940.
34. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
35. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 228–242.
36. He, K.; Sun, J.; Tang, X. Guided image filtering. In Computer Vision—ECCV 2010, Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–14.
37. Chang, Y.; Jung, C.; Ke, P.; Song, H.; Hwang, J. Automatic contrast-limited adaptive histogram equalization with dual gamma correction. IEEE Access 2018, 6, 11782–11792.
38. Aboshosha, S.; Zahran, O.; Dessouky, M.I.; Abd El-Samie, F.E. Resolution and quality enhancement of images using interpolation and contrast limited adaptive histogram equalization. Multimed. Tools Appl. 2019, 78, 18751–18786.
39. Hautiere, N.; Tarel, J.P.; Aubert, D.; Dumont, E. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal. Stereol. 2008, 27, 87–95.
40. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
41. Moorthy, A.K.; Bovik, A.C. Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality. IEEE Trans. Image Process. 2011, 20, 3350–3364.
42. Liu, Y.; Gu, K.; Li, X.; Zhang, Y. Blind image quality assessment by natural scene statistics and perceptual characteristics. ACM Trans. Multimed. Comput. Commun. Appl. 2020, 16, 1–91.
43. Kasabov, N.K. Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence; Springer: New York, NY, USA, 2019.
Figure 1. Formation model for degraded images.
Figure 2. Flowchart of sand-dust image restoration method based on red and blue channel.
Figure 3. Sand dust images and histograms.
Figure 4. Sand dust image correction based on the red channel correction function: (a) Sand-dust images; (b) Corrected image.
Figure 5. Atmospheric light position selected by two algorithms: (a) Proposed algorithm; (b) Dark channel prior algorithm.
Figure 6. Qualitative comparison results of sand dust images with weak color cast: (a) Sand dust images; (b) TFO [9]; (c) NGT [11]; (d) BCGF [13]; (e) AWC [17]; (f) VRSI [19]; (g) SBT [20]; (h) FBE [29]; (i) GDCP [30]; (j) HDCP [31]; (k) RBCP [32]; (l) Proposed.
Figure 7. Qualitative comparison results of various sandstorm images: (a) Sand dust images; (b) TFO [9]; (c) NGT [11]; (d) BCGF [13]; (e) AWC [17]; (f) VRSI [19]; (g) SBT [20]; (h) FBE [29]; (i) GDCP [30]; (j) HDCP [31]; (k) RBCP [32]; (l) Proposed.
Table 1. Average results of non-reference evaluation of 12 sand dust images.
Method       e        σ        r̄        NIQE     DIIVINE   NPQI
TFO [9]      0.4268   0.0693   1.5123   3.5329   32.6236   10.3987
NGT [11]     0.4268   0.0693   1.5123   3.3223   26.7959   9.5502
BCGF [13]    0.8281   0.3943   2.8015   3.3952   29.9300   9.3376
AWC [17]     0.8281   0.3943   2.8015   3.3859   27.5216   9.8819
VRSI [19]    0.4074   0.0002   2.0769   3.445    31.4305   9.8026
SBT [20]     0.7134   0.0011   1.6713   3.4191   27.1865   11.7016
FBE [29]     0.9944   0.3065   2.1572   3.3427   28.7320   9.6831
GDCP [30]    0.7125   0.0132   1.5251   3.4118   30.1233   10.9681
HDCP [31]    0.7485   0.0054   4.4502   3.6401   27.6498   10.2809
RBCP [32]    0.9136   0.0023   1.4073   3.6153   31.9274   12.3408
Proposed     0.7808   0.0231   2.1968   3.311    27.6903   9.5006
Table 2. Average results of non-reference evaluation of 375 sand dust images.
Method       e        σ         r̄        NIQE     DIIVINE   NPQI
TFO [9]      1.7826   0.0611    1.7884   3.8503   35.0397   11.3340
NGT [11]     0.8204   0.00001   1.9330   3.7331   26.5634   11.1022
BCGF [13]    2.9887   0.6527    3.1582   3.7324   26.5031   10.8567
AWC [17]     1.9980   0.1666    1.5084   3.9112   27.5216   12.6198
VRSI [19]    1.3441   0.1070    1.7008   3.8898   33.4292   11.7494
SBT [20]     2.1681   0.0038    1.8638   3.7687   29.7283   11.9398
FBE [29]     2.6453   0.221     2.3218   3.7060   26.5445   10.7949
GDCP [30]    1.7376   0.1066    1.7405   3.8393   29.3818   12.1313
HDCP [31]    2.2070   0.0566    4.6496   4.0680   24.8841   11.6375
RBCP [32]    1.3951   0.1299    1.6007   3.9928   34.2842   12.3572
Proposed     2.4519   0.0780    2.3671   3.7154   25.3551   10.7368
Table 3. The running times of various methods (unit: second).
Method       500 × 300   640 × 480   1200 × 800   2000 × 1500   3648 × 1824
TFO [9]      0.0416      0.0977      0.4166       2.6107        9.1139
NGT [11]     0.6134      0.7759      1.3610       3.4490        7.1490
BCGF [13]    0.3669      0.5213      1.3794       4.1963        9.5850
AWC [17]     0.3729      0.5819      2.5781       34.183        61.833
VRSI [19]    0.6609      1.4398      4.6200       13.978        31.675
FBE [29]     1.1384      1.6348      3.6335       10.166        21.727
GDCP [30]    2.4017      4.3347      13.602       36.702        89.085
HDCP [31]    4.5432      7.8935      24.229       72.765        165.34
RBCP [32]    0.7722      1.5918      6.8674       35.4307       151.25
Proposed     1.0625      1.6491      4.8029       13.307        32.021

Shi, F.; Jia, Z.; Lai, H.; Song, S.; Wang, J. Sand Dust Images Enhancement Based on Red and Blue Channels. Sensors 2022, 22, 1918. https://doi.org/10.3390/s22051918

