
1 Introduction

Remote sensing images with high spatial resolution and abundant information have played a significant role in both civil and military fields in recent years. However, because the observed images are taken at a considerable distance from the surface of the earth, not all of the electromagnetic radiation reaches the sensor after passing through the atmosphere. Consequently, these images are usually vulnerable to climate effects [1, 2] and atmospheric conditions, such as cloud and haze.

In this paper, we mainly address remote sensing images degraded by haze, which exhibit two distinctive features: first, regions polluted by haze have low contrast; second, these regions often suffer from color distortion to a certain extent. Haze removal targets these two features as well; in other words, it aims to promote the contrast and recover the true color in hazy regions.

Nevertheless, haze removal is a challenging task since the polluted regions contain both haze and ground-feature information [3], which means the information of ground features may be altered when haze is removed. Fortunately, in recent years, removing haze from remote sensing images has attracted increasing attention, and a number of studies have been undertaken. A brief summary follows.

Richter [4] developed a haze removal method using haze transition (haze-free to haze) regions. The algorithm separates cloud, hazy and clear regions by matching the histogram of the clear regions to the hazy part, then a haze boundary region is introduced to generate a smoother transition from clear regions to haze.

The dark object subtraction (DOS) method [5] is a typical image-based atmospheric correction method: a satisfactory haze-removed image can be acquired by taking advantage of the darkest pixel in a homogeneously hazy image. However, this method fails to process images in which haze is non-uniform.

Zhang et al. [6] put forward a haze optimized transformation (HOT) to characterize the spatial distribution of haze in Landsat TM/ETM+ images; DOS based on HOT is then applied to remove haze [7]. But the effect of the HOT method relies to a great extent on the selected hazy and haze-free regions, and it usually takes several attempts to acquire an ideal result.

He et al. [8] proposed the dark channel prior based on statistics of 5,000 outdoor haze-free images: in most of the non-sky patches, at least one color channel has some pixels whose intensities are very low and close to zero. Combined with the haze image model, this approach produces very impressive results. But it brings about two problems when applied to remote sensing images: first, the visual effect of the dehazed image is not appealing when processing remote sensing images with uneven haze; second, when scene objects are inherently similar to the atmospheric light [8], such as bright bare soil, the dark channel prior leads to severe color distortion.

In this paper, we put forward a fast and efficient spatially and spectrally self-adaptive haze removal method for remote sensing images. Experimental results demonstrate that our method not only successfully deals with images with uneven haze, but also keeps good color consistency in bright areas.

The remainder of this letter is organized as follows. In Sect. 2, the haze image model and the dark channel prior are introduced in detail. In Sect. 3, a detailed description of our method is given. In Sect. 4, experimental results and a comparison with He's method and HOT are shown. Finally, conclusions are drawn in Sect. 5.

2 Background

2.1 The Haze Image Model

In computer graphics and computer vision, the haze image model widely used to describe the formation of a haze image is [8–12]:

$$ I\left( x \right) = J\left( x \right)t\left( x \right) + A\left[ {1 - t\left( x \right)} \right] $$
(1)

where I stands for the observed intensity, J is the scene radiance, A stands for the atmospheric light of the whole image, and t is the medium transmission coefficient, standing for the portion of the light that is not scattered and reaches the imaging instrument.

In general, the haze image model indicates that the observed image is composed of two parts, the scene radiance and the global atmospheric light, whose proportions are \( t \) and \( ( 1- t) \) respectively. The term \( A\left( {1 - t} \right) \) is called the airlight [8, 11, 13, 14], and the term \( Jt \) is called the direct attenuation [8, 11, 14].

For Eq. (1), in order to remove the haze in the image, it is necessary to recover the scene radiance J from the hazy image I. However, without knowing the values of A and t, this is obviously an ill-posed problem since only I is given. Therefore, A and t need to be estimated if we want to recover J. Once A and t are obtained, we can calculate the scene radiance J with the following formula:

$$ J\left( x \right) = \frac{I\left( x \right) - A}{t\left( x \right)} + A $$
(2)
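As an illustration, Eq. (2) can be sketched in a few lines of NumPy. The lower bound on t (here 0.1) is a common safeguard against amplifying noise where the estimated transmission is near zero; the value is our assumption rather than part of the model.

```python
import numpy as np

def recover_radiance(I, A, t, t_min=0.1):
    """Recover scene radiance J from hazy image I via Eq. (2).

    I : float RGB array in [0, 1], shape (H, W, 3)
    A : per-channel atmospheric light, shape (3,)
    t : transmission map, shape (H, W)
    t_min clamps t to avoid amplifying noise where haze is dense.
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast t over color channels
    J = (I - A) / t + A                     # Eq. (2)
    return np.clip(J, 0.0, 1.0)
```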

2.2 The Dark Channel Prior Method

  • A. Dark Channel Prior

In order to recover the scene radiance J in Eq. (1), the dark channel prior method proposed by He et al. [8] can estimate A and t relatively accurately, and has been a milestone in this research field. A brief introduction is presented next.

The dark channel prior is a statistical rule based on observations of outdoor haze-free images: in most non-sky patches of an outdoor image, at least one color channel has some pixels whose intensities are very low and close to zero [8]. Equivalently, in haze-free images the minimum intensity of such a patch is close to zero.

The dark channel of an arbitrary image is formally defined as follows:

$$ J^{dark} \left( x \right) = \mathop {\hbox{min} }\limits_{y \in \varOmega \left( x \right)} \left[ {\mathop {\hbox{min} }\limits_{{c \in \left\{ {r,g,b} \right\}}} J^{c} \left( y \right)} \right] $$
(3)

where \( \varOmega \left( x \right) \) is the local neighborhood of pixel x, and \( J^{c} \) is one of the RGB channels. The dark channel \( J^{dark} \left( x \right) \) of a haze-free image tends to zero, which is called the dark channel prior (Fig. 1).

$$ J^{dark} \to 0 $$
(4)
Fig. 1.

Calculation of a dark channel. (a) An arbitrary image J. (b) Calculating the minimum of (r, g, b) values of each pixel. (c) The dark channel of J. The patch size of \( {\varOmega } \) is 15 × 15 (Color figure online).

To verify the theory, the authors randomly selected 5,000 outdoor haze-free images and resized them to 500 × 500 pixels. Then, the dark channels were computed using a patch size of 15 × 15. It turns out that 75 % of the pixels in the dark channels are 0, while 86 % are below 16. Figure 2 shows the detailed distribution of the pixel intensity of the dark channels.
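The dark channel of Eq. (3) amounts to a per-pixel minimum over the color channels followed by a local minimum filter over \( \varOmega \left( x \right) \); a minimal sketch with SciPy:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(J, patch=15):
    """Dark channel of Eq. (3): per-pixel min over RGB, then a local
    minimum filter over a patch x patch neighborhood Omega(x)."""
    min_rgb = J.min(axis=2)                      # min over {r, g, b}
    return minimum_filter(min_rgb, size=patch)   # min over Omega(x)
```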

Fig. 2.

Statistic results of the dark channels. (a) Cumulative distribution. (b) Histogram of the average intensity of each dark channel.

  • B. Estimate the Atmospheric Light A

The atmospheric light A stands for the most haze-opaque region in the whole image. He et al. [8] estimated A based on the dark channel, in two major steps expounded below:

First, the top 0.1 % brightest pixels in the dark channel are picked; these pixels usually lie in the most haze-opaque region.

Then, among these extracted pixels, the pixel with the highest intensity in the input image is selected as the atmospheric light A.

It is necessary to note that the atmospheric light may not be the brightest pixel in the whole image.
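The two steps above can be sketched as follows. This is a minimal NumPy version; `top=0.001` encodes the 0.1 % fraction, and taking the channel sum as the overall intensity is our assumption.

```python
import numpy as np

def estimate_atmospheric_light(I, dark, top=0.001):
    """Estimate A: take the top 0.1% brightest dark-channel pixels,
    then among them pick the input pixel with the highest intensity."""
    h, w = dark.shape
    n = max(1, int(h * w * top))
    flat_idx = np.argsort(dark.ravel())[-n:]      # most haze-opaque pixels
    candidates = I.reshape(-1, 3)[flat_idx]
    brightest = candidates.sum(axis=1).argmax()   # highest overall intensity
    return candidates[brightest]
```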

  • C. Estimate the Transmission \( \varvec{t}\left( \varvec{x} \right) \)

In Eq. (1), assuming the atmospheric light A is given, the haze image model can be normalized by dividing both sides by A. The normalized haze image model in each color channel can thus be rewritten as:

$$ \frac{{I^{c} (x)}}{{A^{c} }} = \frac{{J^{c} (x)}}{{A^{c} }} * t(x) + 1 - t(x) $$
(5)

Furthermore, He assumed that the transmission \( t^{{\prime }} \left( x \right) \) in a local patch \( \varOmega \left( x \right) \) is constant, and then took the minimum operation on both sides of (5) to compute the dark channel:

$$ \mathop {\hbox{min} }\limits_{{c \in \left\{ {r,g,b} \right\}}} \left[ {\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \frac{{I^{c} \left( y \right)}}{{A^{c} }}} \right] = t'(x)\mathop {\hbox{min} }\limits_{{c \in \left\{ {r,g,b} \right\}}} \left[ {\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \frac{{J^{c} \left( y \right)}}{{A^{c} }}} \right] + 1 - t'(x) $$
(6)

According to the dark channel prior, the dark channel of a local patch is close to zero:

$$ J^{dark} (x) = \mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \left[ {\mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} J^{c} (y)} \right] = 0 $$
(7)

As \( A^{c} \) is always positive, this leads to:

$$ \mathop {\hbox{min} }\limits_{c \in (r,g,b)} \left[ {\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \frac{{J^{c} (y)}}{{A^{c} }}} \right] = 0 $$
(8)

Substituting (8) into (6), we can eliminate the multiplicative term and estimate the transmission \( t^{{\prime }} \left( x \right) \) by Eq. (9):

$$ t'(x) = 1 - \mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} \left[ {\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \frac{{I^{c} (y)}}{{A^{c} }}} \right] $$
(9)

However, the image tends to be unnatural if we remove the haze thoroughly [8]. To solve this problem, the authors introduce a constant parameter \( \omega = 0.95 \) into (9):

$$ t'(x) = 1 - \omega \mathop {\hbox{min} }\limits_{{c \in \{ r,g,b\} }} \left[ {\mathop {\hbox{min} }\limits_{y \in \varOmega (x)} \frac{{I^{c} (y)}}{{A^{c} }}} \right] $$
(10)
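A direct sketch of Eq. (10), assuming I is a float RGB image and A a per-channel vector:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(I, A, omega=0.95, patch=15):
    """Coarse transmission t'(x) of Eq. (10)."""
    norm = I / A                                   # divide each channel by A^c
    min_rgb = norm.min(axis=2)                     # min over color channels
    return 1.0 - omega * minimum_filter(min_rgb, size=patch)
```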
  • D. Recover the Scene Radiance \( \varvec{J}\left( \varvec{x} \right) \)

The transmission estimated by (10) is not accurate enough and may produce block artifacts even where no abrupt discontinuities occur in the image [8]. Therefore, to avoid block artifacts, a guided filter [15] is used to refine the estimated transmission. The filter kernel is explicitly expressed as:

$$ W_{ij} \left( I \right) = \frac{1}{{\left| w \right|^{2} }}\sum \limits_{{k:\left( {i,j} \right) \epsilon w_{k} }} \left( {1 + \frac{{\left( {I_{i} - \mu_{k} } \right)\left( {I_{j} - \mu_{k} } \right)}}{{\sigma_{k}^{2} + \epsilon }}} \right) $$
(11)

where I stands for the guidance image, \( w_{k} \) is a window centered at pixel \( k \), \( \left| w \right| \) is the number of pixels in the window \( w_{k} \), \( \mu_{k} \) and \( \sigma_{k}^{2} \) stand for the mean and variance of I in \( w_{k} \), and \( \epsilon \) is a regularization parameter. The original hazy image is used as the guidance image in this letter.
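A compact sketch of the guided filter [15] using box (uniform) filters on a grayscale guidance image; the window radius `r` and `eps` are illustrative values, not the parameters used in this letter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=40, eps=1e-3):
    """Refine p (e.g. the coarse transmission) with guidance I.
    Standard guided filter: q = mean(a) * I + mean(b)."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    var_I = uniform_filter(I * I, size) - mean_I**2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```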

Since the atmospheric light A and the refined transmission \( t\left(x \right) \) can be estimated by the dark channel prior method, the scene radiance \( J\left(x \right) \) can be recovered from the hazy image according to Eq. (2).

3 Spatial and Spectral Self-Adaptive Haze Removal Model

3.1 Spectral Adaptive: Processing Intensity Varies with Haze Density

When the dark channel prior is applied to remote sensing images with uneven haze, the effect is not ideal: as shown below, thin haze is removed effectively while dense haze is not removed completely.

In the original paper, we notice that He applied a constant parameter ω = 0.95 to reduce the unnatural appearance of the dehazed image. By assigning different values to ω, we found that different processing intensities result; furthermore, the processing intensity increases as the value of ω grows (Fig. 3).

Fig. 3.

Haze removal extent varies with ω. (a) Original hazy image. (b) \( \upomega= 1 \). (c) \( \upomega= 0.8 \). (d) \( \upomega= 0.6 \).

In the next step, we replace the constant \( \omega \) with a matrix \( \omega \) whose values vary from 0 to 1 and grow as the density of haze in the corresponding region increases. By taking advantage of this matrix \( \omega \), we can apply different processing intensities to regions of different haze densities and successfully remove the inhomogeneous haze in an image.
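The letter does not specify how haze density is mapped to ω; as one hypothetical realization, the dark channel can serve as a haze-density proxy and be scaled linearly into a chosen range, so that denser haze receives stronger processing:

```python
import numpy as np

def omega_matrix(dark, low=0.6, high=0.95):
    """Illustrative per-pixel omega: normalize the dark channel
    (a haze-density proxy) and scale it linearly into [low, high].
    The linear mapping and the bounds are assumptions for this sketch."""
    d = (dark - dark.min()) / (dark.max() - dark.min() + 1e-6)
    return low + (high - low) * d
```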

3.2 Spatial Adaptive: Separate Different Landforms

He discovered that when dealing with scene objects inherently similar to the atmospheric light, such as sky regions in outdoor images, the dark channel prior is invalid and always leads to color distortion. Similar to sky regions in outdoor images, bright bare soil in remote sensing images also resembles the atmospheric light. From Fig. 4, we can see that after being processed by the dark channel prior, these bright regions tend to be unnaturally darker than they should be.

Fig. 4.

Color distortion after haze removal with the original dark channel prior. (a) Input image. (b) Haze removal with dark channel prior (Color figure online).

The reason this phenomenon occurs is that, unlike other landforms such as vegetation and rivers, which often have a low intensity in one of the RGB channels, bare land tends to have a high intensity in all RGB channels, similar to the spectral features of dense haze. So after haze removal with the dark channel prior method, one of the RGB channels in these bright regions is forced to zero, and they eventually appear unnaturally darker, which is apparently incorrect.

Therefore, these bright ground features should be separated from other landforms and then processed alone. In this case, these regions become more visually natural after haze removal.

After repeated statistics and verification, we found a judgment condition (12) that separates bright landforms from other ground features; experimental results demonstrate this judgment is effective and stable for aerial images. For remote sensing images with a near-infrared band, we can also classify different landforms with the NDVI and NDWI indexes, which are extensively applied to extract vegetation and water from remote sensing images.

$$ \frac{{G + B - 2R}}{{R - 0.5B}} \le 1.1 $$
(12)
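A minimal sketch of condition (12) as a bright-landform mask; the small `eps` added to the denominator is our assumption to guard against division by zero, not part of the condition itself:

```python
import numpy as np

def bright_landform_mask(I, eps=1e-6):
    """Flag pixels satisfying judgment condition (12) as bright
    landforms (e.g. bare soil) to be processed separately.
    I : float RGB array, shape (H, W, 3)."""
    R, G, B = I[..., 0], I[..., 1], I[..., 2]
    ratio = (G + B - 2.0 * R) / (R - 0.5 * B + eps)   # Eq. (12)
    return ratio <= 1.1
```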

Finally, we apply different processing intensities to different landforms by assigning different values to ω. In general, the processing intensity for bright landforms is weaker than that for other landforms, such as vegetation and water.

4 Experimental Results and Analysis

To demonstrate the effectiveness of our algorithm, we manually selected several aerial remote sensing images for experiments. The proposed method is implemented on a PC with a 3.4 GHz Intel Core i3 processor using Visual C++ 6.0 in a Windows 7 environment. The recovered images are visually appealing.

Figures 5 and 6 compare the results recovered by He's original dark channel prior [8], HOT [6], and our algorithm. From the two comparisons, we notice that the original dark channel prior cannot completely remove dense haze in the image; at the same time, bright bare land suffers from color distortion and becomes unnaturally darker after haze removal. The HOT method is able to remove all the haze; however, the dehazed image suffers from slight color distortion, and the resolution of the image decreases compared with the original. In addition, the effect of the HOT method relies to a great extent on the selected regions of interest, and it usually takes several attempts to acquire an ideal result. The proposed method can not only remove the non-uniform haze in the image, but also increase the contrast and restore the true color in hazy regions, and the image is visually appealing after haze removal.

Fig. 5.

Haze removal. (a) Input hazy image. (b) He’s result. (c) HOT’s result. (d) Our result (Color figure online).

Fig. 6.

Haze removal. (a) Input hazy image. (b) He’s result. (c) HOT’s result. (d) Our result.

Besides subjective visual effects, we also assess the experimental results with quantitative evaluation. In this paper, we adopt entropy [16] and average gradient as evaluation criteria. Entropy reflects the abundance of information in an image, while average gradient measures clarity: the higher the average gradient of an image, the clearer the image. Judging from Tables 1 and 2 below, it is obvious that both entropy and average gradient have the highest values for our proposed method, which proves that our method recovers clearer and more natural images.

Table 1. Quantitative evaluation of various methods in Fig. 5
Table 2. Quantitative evaluation of various methods in Fig. 6
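The two criteria can be computed as follows; this is a standard formulation of Shannon entropy and average gradient for an 8-bit grayscale image, not code from the letter:

```python
import numpy as np

def entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins (0 * log 0 := 0)
    return -(p * np.log2(p)).sum()

def average_gradient(gray):
    """Mean magnitude of horizontal/vertical intensity gradients."""
    gx, gy = np.gradient(gray.astype(float))
    return np.sqrt((gx**2 + gy**2) / 2.0).mean()
```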

In addition, our method can also remove haze from remote sensing images with a near-infrared band. Taking a ZY1-02C image as an example, we can remove the uneven haze in the image while keeping good color consistency (Fig. 7).

Fig. 7.

Haze removal for ZY1-02C image. (a) Input hazy image. (b) Our result.

5 Conclusion

In this paper, an efficient method is put forward to remove haze from remote sensing images. Compared with previous methods, our method has two main advantages: first, by modifying the original constant ω to a matrix, we can successfully remove non-uniform haze in remote sensing images; second, our method avoids severe color distortion and recovers natural color in bright regions. Experiments indicate that the proposed method can achieve appealing haze-free images in both visual effect and quantitative evaluation. However, our method still causes slight color distortion [17, 18]; a solution to this problem will be our future research direction.