Article

Improving Urban Land Cover Classification in Cloud-Prone Areas with Polarimetric SAR Images

1 Department of Geography, The University of Hong Kong, Pokfulam, Hong Kong 999077, China
2 HKU Shenzhen Institute of Research and Innovation, Nanshan District, Shenzhen 518057, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(22), 4708; https://doi.org/10.3390/rs13224708
Submission received: 28 September 2021 / Revised: 19 November 2021 / Accepted: 19 November 2021 / Published: 21 November 2021
(This article belongs to the Special Issue Applications of Remote Sensing Imagery for Urban Areas)
Figure 1. The geographic location of the study area.
Figure 2. The methodological framework of this study.
Figure 3. Land cover classification with different levels of cloud coverage. (a,e,i,m) are the cloud-contaminated optical images with cloud coverage of 0, 6%, 30%, and 50%, respectively; (b,f,j,n) are the land cover classification maps of single optical images; (c,g,k,o) are the land cover classification maps of single SAR images; and (d,h,l,p) are the land cover classification maps of fused optical and SAR images.
Figure 4. Detailed classification maps on cloud-free and cloud-covered images. (a,f,k,p) are the cloud-free images; (b,g,l,q) are classification maps of the cloud-free images; (c,h,m,r) are the cloud-covered images; (d,i,n,s) are the classification maps of the single cloud-covered images; and (e,j,o,t) are the classification maps of fused optical and SAR images.
Figure 5. The overall accuracy of three classifiers under different cloud coverage using single optical data and combined optical and SAR data.
Figure 6. The spectral values of the five cloud-free and cloud-covered land cover samples in 12 bands.
Figure 7. The overall accuracy of training with/without cloud-covered samples under different types of cloud coverage using single optical data and combined optical and SAR data.
Figure 8. The spectral values of five land cover samples covered by “thin cirrus” and “high-probability clouds” in 12 bands.

Abstract

Urban land cover (ULC) serves as fundamental environmental information for urban studies, while accurate and timely ULC mapping remains challenging due to cloud contamination in tropical and subtropical areas. Synthetic aperture radar (SAR) has excellent all-weather working capability to overcome this challenge, but optical–SAR data fusion is often required because of the limited land surface information provided by SAR alone. However, the mechanism by which SAR can compensate for cloud-contaminated optical images, and thereby improve ULC mapping, remains unexplored. To address this issue, this study proposes a framework, based on various sampling strategies and three typical supervised classification methods, to quantify ULC classification accuracy using optical and SAR data with various cloud levels. The land cover confusions were investigated in detail to understand the role of SAR in distinguishing land cover under different types of cloud coverage. Several interesting experimental results were found. First, 50% cloud coverage over the optical images decreased the overall accuracy by 10–20%, while the incorporation of SAR images improved the overall accuracy by approximately 4% by increasing the recognition of cloud-covered ULC information, particularly water bodies. Second, if none of the training samples were contaminated by clouds, the cloud coverage had a greater impact, with a reduction of 35% in the overall accuracy, whereas the incorporation of SAR data contributed an increase of approximately 5%. Third, the thickness of clouds also had different impacts on the results, with an approximately 10% larger reduction from thick clouds than from thin clouds, indicating that certain spectral information might still be available in areas covered by thin clouds. These findings provide useful references for the accurate monitoring of ULC over cloud-prone areas, such as tropical and subtropical cities, where cloud contamination is often unavoidable.

1. Introduction

Urban land cover (ULC) is a fundamental Earth observation parameter for understanding the underlying environmental, ecological, and social processes in urban growth [1]. ULC dynamics are essential for understanding the structure and function of the Earth’s ecosystem [2,3], providing vital information for numerous urban studies in urban environmental monitoring, urban planning, and urban transport [4,5,6]. Owing to the rapid development of Earth observation technologies, various remote sensing data with increasing spatial and spectral resolutions have been widely used for ULC monitoring [7,8,9]. However, despite increasing satellite observations, one common problem in the optical remote sensing domain, cloud cover, still hinders the effective application of continuous and accurate urban land change monitoring. As optical remote sensing is a passive technique for Earth observation that relies on solar illumination, the spectral signatures of land covers are affected by the incident component and the reflected component [10]. Therefore, clouds can easily interfere with the reflectance signals and block the view of the surface underneath.
An analysis of over 12 years of continuous observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) showed that, on average, 67% of the Earth’s surface is covered by clouds [11]. Cloudless remote sensing images are therefore scarce, especially in tropical and subtropical areas that are cloudy and rainy. Consequently, some research focuses on cloud removal, trying to recover the land surface under clouds [10,12], while most research simply ignores the image regions covered by clouds by preprocessing with cloud masks [8,13,14], which causes a loss of information. However, only a few studies have quantitatively explored the influence of cloud cover on ULC classification [15], and the specific impacts of cloud cover on recognizing the surface underneath, especially the confusion between different land covers, remain underexplored; such an analysis may help elucidate the impact of clouds and provide useful information for the subsequent processing of cloud-contaminated images.
On the other hand, synthetic aperture radar (SAR), with its all-weather and day-and-night imaging capability, has become a useful supplement to optical remote sensing, especially in cloudy areas. Researchers have proposed many methods for fusing optical and SAR data, mainly at three levels: pixel level, feature level, and decision level [16,17]. Pixel-level fusion refers to the pixel-level overlay of optical and SAR data without performing feature extraction. Some studies have claimed that optical and SAR data are not suitable for fusion at the pixel level due to their completely different imaging mechanisms [18]. Feature-level fusion refers to the fusion of features extracted from optical and SAR data. Owing to its theoretical feasibility and relatively mature technology, most existing studies are based on feature-level fusion; commonly adopted methods include the support vector machine (SVM) [19], random forest (RF) [20], and deep network methods [21]. Decision-level fusion refers to classifying land covers with optical and SAR data separately and then making decisions based on the classification results from the two sources; it mainly includes voting, Dempster–Shafer theory, and RF [22,23,24]. However, in decision-level fusion it is difficult to determine which source provides the more reliable classification result and to set a strategy for merging the two results. Some studies have also fused SAR and optical data for image classification or target recognition in cloudy areas, and several have reported an improvement in accuracy compared with using optical data alone [15,25,26,27,28,29]. For example, Zhang et al. proposed combining optical and SAR data for classification in cloudy areas [15], Sun et al. proposed a crop classification method based on the optical and SAR response mechanism in cloudy and rainy areas [25], Yang et al. proposed an object-based classification method for cloudy coastal areas using optical and SAR images for vulnerability assessment of marine disasters [27], and Shrestha et al. combined topographic parameters and SAR data to solve the problem of cloud cover in classification [28]. However, a comprehensive study of the role of SAR in this process, i.e., its effect on optical images with different degrees of cloud contamination and on distinguishing land covers under different types of cloud coverage, is still lacking. In general, although it is recognized that clouds affect the ULC classification of optical images and that the fusion of SAR and optical images helps improve ULC classification accuracy, several questions remain: How exactly do clouds affect land cover recognition? What is the quantitative relationship between cloud content and ULC classification accuracy? How does SAR help to identify land cover when the surface is covered by clouds? How does the role of SAR differ between cloudy and cloud-free conditions? To fill these research gaps, this study designs a “cloud impact” methodological framework to investigate the mechanism of cloud impact on ULC classification and to quantify the supplementary effect of SAR data on the discrimination of land covers under different cloud covers.

2. Study Area and Datasets

2.1. Study Area

The Pearl River Delta (PRD) in southern China was chosen as the study area. This region, located downstream of the Pearl River, has undergone a dramatic urbanization process in recent decades. However, increasing human activities have resulted in a series of environmental problems in this area. As a critical indicator of urban development and related environmental issues, the measurement of ULC is of great significance for environmental studies of the PRD and has received much attention from the local government. However, the PRD region is located in the subtropical, humid climate belt, with long periods of rainy and cloudy weather throughout the year, making land cover classification using remote sensing images difficult [30]. From 2016 to 2021, the numbers of image acquisitions covering this study area were 66, 86, 147, 97, 91, and 123, respectively; among them, the numbers of cloud-free images were 5, 9, 14, 13, 7, and 10, respectively, indicating that cloud-free images account for only about 7.6% and that most of them were acquired between November and January. Therefore, exploring the impact of clouds on ULC classification is of high significance for environmental studies of the PRD and supports the necessity of fusing optical and SAR data under cloud-covered conditions. In this study, a study site located on the southwest boundary of Shenzhen and the western part of Hong Kong, with an approximate area of 750 km², was selected. Figure 1 shows the study area (together with the spatial distribution of the labelled samples). The land covers in this region mainly include vegetation, soil, bright impervious surfaces (newly built concrete roads and roofs), dark impervious surfaces (asphalt roads, old concrete roads, and roofs), and water (rivers, lakes, and seas) [31].

2.2. Satellite Data

In this study, one scene of the fully polarimetric PALSAR-2 Standard Product Level 1.1 image, acquired on 11 January 2017 with a 6 m resolution, was selected as the SAR data due to its capability for fully polarimetric observation with finer spatial resolution and a wider swath [32]. The full polarimetry mode of the PALSAR-2 sensor aboard the ALOS-2 satellite transmits/receives two orthogonal linear polarizations, i.e., vertical (V)/horizontal (H). It can measure all four polarimetry combinations (HH, HV, VV, and VH, where VH represents V transmission and H reception) and describe the complete polarimetric characteristics of the ground [1].
For optical data, multiple images with different cloud coverages were needed to explore the impacts of cloud cover. To reduce the possible effects of land cover changes, the selected optical data were required to have acquisition dates close to that of the ALOS-2 data. With these two considerations, four scenes of Sentinel-2 images with cloud contents of approximately 0, 6%, 30%, and 50% were downloaded from the United States Geological Survey (USGS) Earth Explorer platform [33], with acquisition dates of 31 December, 14 February, 2 October, and 25 May 2017, respectively. These images were in the Level-1C processing format, meaning that they had undergone geometric and radiometric correction but not atmospheric correction [34]. Therefore, the Sen2Cor (v2.8) toolbox was used to perform atmospheric correction and convert the top-of-atmosphere reflectance Level-1C data to a bottom-of-atmosphere (BOA) reflectance Level-2A product [35]. Among the original 13 bands of Sentinel-2, the SWIR-Cirrus band does not participate in the atmospheric correction by Sen2Cor and is not provided in the Level-2A product; coupled with its poor quality in Level-1C, it was not adopted. Finally, Level-2A Sentinel-2 data with 12 optical bands at spatial resolutions of 10, 20, and 60 m were obtained. Moreover, the pixel properties of “thin cirrus” and “high-probability clouds” were identified from the quality bands of the Sentinel-2 data and used to determine whether a pixel was covered by clouds in this study.
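For readers who want to reproduce this cloud-labelling step, the sketch below shows one possible way to derive a per-pixel cloud flag from the Level-2A scene classification layer (SCL) using rasterio and NumPy. The file path, the use of the 20 m SCL band, and the resampling choice are illustrative assumptions rather than details taken from the paper; the SCL codes 10 (thin cirrus) and 9 (cloud, high probability) follow the standard Sentinel-2 Level-2A class definitions.

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling

# Hypothetical path to the 20 m scene classification layer (SCL) of a Level-2A product.
SCL_PATH = "T49QGE_20170525_SCL_20m.jp2"

# SCL class codes used here: 9 = cloud high probability, 10 = thin cirrus.
THIN_CIRRUS, HIGH_PROB_CLOUD = 10, 9

with rasterio.open(SCL_PATH) as src:
    # Upsample the 20 m SCL to the 10 m analysis grid with nearest-neighbour resampling
    # so that the integer class codes are preserved.
    scl = src.read(
        1,
        out_shape=(src.height * 2, src.width * 2),
        resampling=Resampling.nearest,
    )

cloud_mask = np.isin(scl, [THIN_CIRRUS, HIGH_PROB_CLOUD])  # True where a pixel is cloud covered
thin_mask = scl == THIN_CIRRUS                              # used later to separate cloud types
thick_mask = scl == HIGH_PROB_CLOUD

print("cloud fraction: %.1f%%" % (100 * cloud_mask.mean()))
```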

3. Methodology

To measure the influence of clouds on ULC classification and the enhancement effect of SAR on ULC classification in the presence of clouds, a novel research framework is designed in this paper. In this framework, a new sampling strategy is proposed to extract samples with different cloud contents and to construct datasets that allow the impact of cloud content to be quantified. By using single-source or multisource data and considering cloud-free or cloud-covered samples as well as cloud types, this paper further explores the mechanism by which clouds influence ULC classification and the supplementary effect of SAR on land cover recognition.

3.1. Framework of the Research

The general methodological framework is demonstrated in Figure 2. First, both the optical and SAR images were preprocessed individually using corresponding processing techniques before they were coregistered. After that, various features were extracted from the coregistered optical and SAR data. Meanwhile, from labelled pixels, different proportions of cloud-covered and cloud-free samples were randomly selected to construct datasets with various levels of cloud coverage. Then, for each case of cloud coverage, with the use of the corresponding dataset and inputting single optical features or fused optical and SAR features, three representative classifiers were applied to conduct the ULC classification. Finally, the results of validation and accuracy assessment were applied to evaluate the impact of cloud cover and the supplementary effect of SAR data on the discrimination of land covers under various proportions of cloud coverage.

3.2. Feature Extraction from Both Optical and SAR Images

In this section, the processes of feature extraction from both optical and SAR images are elaborated. A series of preprocessing steps were conducted before the ALOS-2 data were geocoded and coregistered with the Sentinel-2 data. First, a radiometric calibration procedure was applied to the ALOS-2 data using the SNAP software. Then, to reduce speckle noise in the SAR data, speckle filtering with an enhanced Lee filter was conducted, as it is more suitable for preserving radiometric and textural information than other speckle filters [36]. Finally, the processed ALOS-2 data, together with the extracted polarimetric features, were geocoded with a digital elevation model and coregistered with the four Sentinel-2 images to a 10 × 10 m resolution under the WGS84 georeferenced system and UTM projection (Zone 49N). More than 20 ground control points were selected, and the root mean square error (RMSE) of the coregistration was less than one pixel.
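To illustrate the speckle-filtering step, the following is a minimal sketch of the classical Lee filter implemented with NumPy and SciPy; the study itself used the enhanced Lee filter in SNAP, and the window size and number of looks below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, looks=4):
    """Basic Lee speckle filter (the study used the enhanced Lee filter in SNAP).

    img   : 2-D intensity image (one polarization channel)
    size  : moving-window size in pixels (assumed value)
    looks : equivalent number of looks, controlling the assumed speckle strength
    """
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)                 # local mean
    mean_sq = uniform_filter(img * img, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)     # local variance

    cu2 = 1.0 / looks                                # squared speckle coefficient of variation
    ci2 = var / np.maximum(mean * mean, 1e-12)       # squared local coefficient of variation

    # Weight ~0 in homogeneous areas (keep the local mean), ~1 near edges (keep the pixel).
    w = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)
```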
For optical features, the original spectral signatures in the optical satellite data are not sufficient for discriminating different land covers due to their material diversity and spectral confusion. The normalized difference vegetation index (NDVI) [37] and the normalized difference water index (NDWI) [38] were therefore also used to describe the spectral features. In addition, textural information can provide complementary information, which is essential for compensating for spectral information and reducing spectral confusion. Therefore, the popular grey-level co-occurrence matrix (GLCM) approach [39] was utilized to extract four texture features in this study: homogeneity, dissimilarity, entropy, and angular second moment. After applying the GLCM texture extraction to each spectral layer, 12 × 4 = 48 feature layers were obtained. The first four principal components of these features were retained as the final texture features, using principal component analysis (PCA) to remove redundant information while keeping the most valuable information. In this way, optical features with 18 layers were extracted.
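As a concrete illustration of these optical features, the sketch below computes NDVI, the Gao (1996) form of NDWI, and the four GLCM texture measures for one band, followed by the PCA reduction. The band names, quantization levels, and whole-band GLCM (rather than the per-pixel moving window used in the study) are simplifying assumptions, and the graycomatrix/graycoprops functions assume scikit-image 0.19 or later.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-12)

def ndwi_gao(nir, swir):
    # NDWI following Gao (1996), sensitive to liquid water.
    return (nir - swir) / (nir + swir + 1e-12)

def glcm_measures(band, levels=32):
    """Homogeneity, dissimilarity, entropy, and angular second moment (ASM) of one band.

    For brevity the GLCM is computed over the whole band; the study computes these
    measures for every spectral layer in a moving window around each pixel.
    """
    edges = np.quantile(band, np.linspace(0, 1, levels + 1)[1:-1])
    q = np.digitize(band, edges).astype(np.uint8)            # quantize to `levels` grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                                # average over distances/angles
    entropy = -np.sum(p * np.log(p + 1e-12))
    return [graycoprops(glcm, "homogeneity").mean(),
            graycoprops(glcm, "dissimilarity").mean(),
            entropy,
            graycoprops(glcm, "ASM").mean()]

# texture_stack: hypothetical (n_pixels, 48) array of the 12 bands x 4 GLCM measures per pixel.
# The study keeps the first four principal components as the final texture features:
# texture_features = PCA(n_components=4).fit_transform(texture_stack)
```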
To further enhance the discrimination of different types of land cover, especially in the case of cloud cover, features extracted from SAR data are also essential. Therefore, for the fully polarimetric SAR image (ALOS-2), six types of polarimetric features that have shown good performance in other studies were extracted [31]: the backscattering coefficients of the HH, HV, VH, and VV channels; the polarization ratio; the coherence coefficient; the Freeman–Durden decomposition parameters [40]; the Cloude–Pottier decomposition parameters [41]; and the Yamaguchi four-component decomposition parameters [42]. Finally, by layer stacking the 18-layer optical features and the 26-layer SAR features, a 44-dimensional feature set was obtained. Normalization was then applied to the layer-stacked optical and SAR features to avoid bias in the following processes.
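A minimal sketch of the final stacking and normalization step is given below; the array names and the choice of per-layer min–max scaling are assumptions, since the paper does not state which normalization scheme was used.

```python
import numpy as np

def stack_and_normalize(optical_features, sar_features):
    """Layer-stack optical (rows, cols, 18) and SAR (rows, cols, 26) features,
    both already coregistered to the same 10 m grid, and scale each layer to [0, 1]."""
    stacked = np.concatenate([optical_features, sar_features], axis=-1)   # (rows, cols, 44)
    flat = stacked.reshape(-1, stacked.shape[-1]).astype(np.float64)
    lo = flat.min(axis=0)
    hi = flat.max(axis=0)
    flat = (flat - lo) / np.maximum(hi - lo, 1e-12)                        # per-layer min-max
    return flat.reshape(stacked.shape)
```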

3.3. Urban Land Cover Classification

To evaluate the impact of cloud cover on the land cover classification, different classification algorithms were employed for comparison. As typical supervised machine learning methods, support vector machine (SVM) [19] and random forest (RF) [20] were selected due to their stable performance in previous studies [43,44]. In addition, GoogLeNet [45] was selected as the baseline deep convolutional network method due to its light size and its proven effectiveness in previous studies [46,47,48]. This section provides descriptions of the algorithms used in the land cover classification experiments.

3.3.1. Support Vector Machine (SVM)

SVM is a supervised machine learning method that has been widely employed in various remote sensing applications. The SVM classifier uses a decision boundary to separate classes. This decision boundary is referred to as the optimal separating hyperplane, which maximizes the margin between two groups, and the support vectors are the points closest to the hyperplane. The algorithm finds the best hyperplane in the n-dimensional classification space by iteratively learning patterns from the training data and then applying the same configuration to a separate evaluation dataset. In this study, the dimension is the number of feature bands, and the vectors are individual pixels in a multiband composite [49]. A detailed mathematical description of this algorithm can be found in [19]. In the experiments of this study, an SVM classifier from the LIBSVM library using the C-SVC type and the radial basis function (RBF) kernel was employed [50]. The kernel parameter gamma (G) and the penalty parameter (C) were chosen using a grid search and set to 0.8 and 100, respectively. The model was executed in the MATLAB R2020b development environment, and each experiment was repeated five times. All processing was performed on a 20-core 2.40 GHz Xeon server with 32 GB of RAM running Windows 10 64-bit.
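The sketch below reproduces this classifier setup in Python with scikit-learn (whose SVC wraps LIBSVM); the study itself used LIBSVM in MATLAB, and the grid values and random seed here are illustrative assumptions.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# X: (n_samples, 44) normalized feature vectors; y: class labels (VEG, SOI, BIS, DIS, WAT).
def train_svm(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)

    # Grid search over C and gamma for the RBF kernel; the grids are illustrative,
    # and the paper reports the selected values C = 100 and gamma = 0.8.
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [1, 10, 100, 1000], "gamma": [0.1, 0.4, 0.8, 1.6]},
        cv=3, n_jobs=-1)
    grid.fit(X_train, y_train)
    print("best parameters:", grid.best_params_)
    print("test OA: %.4f" % grid.score(X_test, y_test))
    return grid.best_estimator_
```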

3.3.2. Random Forest (RF)

The ensemble learning strategy of RF was proposed by Breiman in 2001 [20] and is based on the idea that a combination of bootstrap-aggregated classifiers performs better than a single classifier. The bootstrap component means that, to build each decision tree, a set of training samples is left out by random selection; these are called out-of-bag samples [51]. The out-of-bag samples can then be used as testing data for the decision tree, which helps to decorrelate the trees and thereby reduce multicollinearity. In this way, several decision trees can be built on different groupings of the input data and used to construct the random forest, which serves as the classifier. The resulting output is assigned by the majority vote over all decision trees. A detailed mathematical description of RF can be found in [20]. The setting of RF included two key parameters: the number of decision trees (T) and the number of variables (m) considered for splitting each node. This study followed the recommendation that T should be larger than 20 and that m should be determined by Equation (1):
m = √M + 1      (1)
where M is the total number of features for each sample [45]. The model was implemented in the MATLAB R2020b development environment in this study, with the parameters T and m set to 300 and 7, respectively.
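A scikit-learn equivalent of these settings is sketched below (the study used MATLAB; variable names are assumptions). With M = 44 features, Equation (1) gives m ≈ 7.

```python
from sklearn.ensemble import RandomForestClassifier

# X_train, y_train as in the SVM sketch above; M = 44 features, so m = int(sqrt(M)) + 1 = 7.
rf = RandomForestClassifier(
    n_estimators=300,   # T: number of decision trees
    max_features=7,     # m: variables considered at each split
    oob_score=True,     # use out-of-bag samples for an internal accuracy estimate
    n_jobs=-1,
    random_state=0)
# rf.fit(X_train, y_train)
# print("out-of-bag OA: %.4f" % rf.oob_score_)
```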

3.3.3. GoogLeNet

GoogLeNet [45] is a popular pretrained deep convolutional neural network (CNN) architecture with 22 layers developed by Google. It introduces the Inception architecture, which is based on parallel convolutional layers with filters of different sizes. In the Inception module, the input data are fed simultaneously into multiple convolutions with different kernels and into max pooling, so the network learns optimal weights and more useful features. Each Inception module contains three parallel convolution branches with 1 × 1, 3 × 3, and 5 × 5 kernels to capture spatial information of the input data at different scales, together with a 3 × 3 max-pooling branch that helps extract more discriminative features while reducing the channel dimension and the size of the input data. This design reduces the number of parameters in GoogLeNet and makes it faster in computation, lighter in size, and higher in performance compared with many other popular architectures. The model was implemented with the TensorFlow open-source deep learning Python library [52], running on an NVIDIA GeForce RTX 3070 under Windows 10 64-bit in this study, with the learning rate set to 1 × 10⁻², the batch size set to 10, and the number of epochs set to 20, following previous studies [53].
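To make the Inception idea concrete, the following is a toy Keras sketch of a single Inception-style block applied to small patches of the 44-layer feature stack; the patch size, filter counts, and the single-block network are illustrative assumptions and not the full 22-layer GoogLeNet used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_block(x, f1, f3, f5, fpool):
    """Simplified Inception-style block: parallel 1x1, 3x3, 5x5 convolutions and a
    3x3 max-pooling branch, concatenated along the channel axis (filter counts are assumed)."""
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(x)
    bp = layers.MaxPooling2D(pool_size=3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fpool, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])

# Toy network over small patches with 44 feature layers (patch size is an assumption).
inputs = tf.keras.Input(shape=(32, 32, 44))
x = inception_block(inputs, 16, 32, 8, 8)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(5, activation="softmax")(x)   # five land cover classes
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```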

3.4. Sample Collection and Results Validation

In this study, datasets with different cloud contents were needed to explore the performance of land cover classification under various levels of cloud coverage. Specifically, to keep the proportion of cloud-covered pixels among the labelled pixels consistent with the cloud content of the whole image, a total of 43,665 pixels of five land cover classes were carefully labelled by visual interpretation over the four images and very high-resolution Google Earth images acquired near the satellite acquisition dates, including 9520 pixels of vegetation (VEG), 6970 pixels of soil (SOI), 6228 pixels of bright impervious surface (BIS), 10,969 pixels of dark impervious surface (DIS), and 9978 pixels of water (WAT). The spatial distribution of these labelled samples is shown in Figure 1. Then, for each experiment with a specific cloud coverage (0, 6%, 30%, and 50%), these 43,665 labelled pixels in the corresponding image were used to construct the dataset, with cloud-covered samples accounting for the corresponding proportion. For the image with 6% cloud content, there were 3038 cloud-covered samples and 40,627 cloud-free samples. Among the cloud-covered samples, the numbers of samples in the five categories of VEG, SOI, BIS, DIS, and WAT were 475, 379, 436, 694, and 1054, respectively; among the cloud-free samples, the numbers were 9045, 6591, 5792, 10,275, and 8924, respectively. For the image with 30% cloud content, there were 14,276 cloud-covered samples and 29,389 cloud-free samples, with cloud-covered and cloud-free sample sizes of the five land covers of [3282, 2728, 2762, 3616, 1888] and [6238, 4242, 3466, 7353, 8090]. For the image with 50% cloud content, there were 21,382 cloud-covered samples and 22,283 cloud-free samples, with cloud-covered and cloud-free sample sizes of [5026, 3794, 3184, 7302, 2977] and [4494, 3176, 3044, 3667, 7001]. In each experiment with a specific cloud coverage, half of the samples were randomly selected as training samples, and the other half were used as testing samples.
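The sampling logic can be summarized by the short sketch below, which randomly splits the labelled pixels of one image 50/50 into training and testing sets while carrying along a per-pixel cloud flag; the function and variable names are illustrative assumptions.

```python
import numpy as np

def split_half(labels, cloud_flags, seed=0):
    """Randomly split labelled pixels 50/50 into training and testing sets.

    labels      : (n,) class codes of the labelled pixels in one image
    cloud_flags : (n,) booleans, True where the pixel is cloud covered in that image
    Because every labelled pixel keeps its cloud flag, the cloud proportion of the
    resulting dataset matches the cloud content of the image it was drawn from.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    half = len(labels) // 2
    train_idx, test_idx = idx[:half], idx[half:]
    return train_idx, test_idx, cloud_flags[train_idx], cloud_flags[test_idx]
```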
To evaluate the results, apart from qualitative visual inspection, quantitative metrics were also employed in this study, including the standard accuracy metrics of the overall accuracy (OA), the confusion matrix, the producer’s accuracy (PA), and the user’s accuracy (UA). The confusion matrix was also calculated by comparing predicted land covers against ground truth land cover data, where the row entries represent the actual classes of the testing data and the column entries represent the predicted classes from the classifier. The OA, PA, and UA were calculated as follows:
OA = (Σ_{k=1}^{j} n_kk) / N
PA(k) = n_kk / n_k+
UA(k) = n_kk / n_+k
where k denotes the k-th category, j is the number of classes (rows in the matrix), n_kk is the number of observations in row k and column k, n_k+ and n_+k are the marginal totals of row k and column k, respectively, and N is the total number of observations.
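As a quick check of these definitions, the sketch below computes OA, PA, and UA from a confusion matrix laid out as described above (rows = actual classes, columns = predicted classes); the toy numbers are illustrative only and not taken from the paper.

```python
import numpy as np

def accuracy_metrics(conf):
    """Overall, producer's, and user's accuracy from a confusion matrix.

    conf[i, j] = number of testing samples whose actual class is i (row)
    and predicted class is j (column).
    """
    conf = np.asarray(conf, dtype=float)
    n_kk = np.diag(conf)
    oa = n_kk.sum() / conf.sum()
    pa = n_kk / conf.sum(axis=1)   # divide by row totals (actual class totals)
    ua = n_kk / conf.sum(axis=0)   # divide by column totals (predicted class totals)
    return oa, pa, ua

# Toy three-class example:
conf = [[50, 2, 3],
        [4, 45, 1],
        [2, 3, 40]]
oa, pa, ua = accuracy_metrics(conf)
print(oa, pa, ua)
```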
Moreover, two types of evaluation were employed to assess the impacts of clouds on ULC classification. First, for each level of cloud coverage, OA was used to quantify the impact of that level of cloud coverage on ULC classification. Second, to further investigate the impact of cloud cover on the five types of land cover, the confusion of land covers in cloud-free and cloud-covered areas was evaluated. To achieve this, all testing samples in the dataset with 0% cloud coverage and all cloud-covered testing samples in the dataset with 50% cloud coverage were used as evaluation samples representing cloud-free and cloud-covered areas, respectively. Then, the confusion matrices with PA and UA were calculated to reveal the confusion of land covers in cloud-free and cloud-covered areas.

4. Results and Discussion

4.1. Overview of the Impacts of Clouds

To understand the results of land cover classification under cloud cover in a qualitative manner, the estimated land covers of the four images with different cloud content are illustrated in Figure 3 and Figure 4 (in detail). For each image with different cloud coverage (0, 6%, 30%, and 50%), the first column in Figure 3 shows the original optical image, and the second, third, and fourth columns represent the classification maps based on optical images, SAR images, and fusion images, respectively.
First, from the third column, it can be seen that although the general distribution of the estimated land covers based on the SAR image alone was close to the classification map of the cloud-free image, there were serious category discontinuities that appeared as scattered spots, even though denoising had been performed during image preprocessing. Therefore, the classification results using only SAR were not ideal. Second, the first row in Figure 3 shows the classification results of the cloud-free image for reference. By comparing the classification results based on the different cloud-covered optical images in the second column, it can be seen that the areas covered by clouds did exhibit land cover confusion, which appeared as randomly distributed spots. However, the fourth column in Figure 3 shows that the confusion of land covers under clouds in the optical images was alleviated to a certain extent by fusing the optical and SAR data; more details are shown in Figure 4.
In Figure 4, the first and second columns show the cloud-free optical images and their corresponding classification maps for reference, respectively. The third, fourth, and fifth columns show the cloud-covered optical images at the same locations and the corresponding classification maps based on single optical data and on the fused data, respectively. The first row of Figure 4 shows that under cloud cover, land covers could be confused: DIS was mistakenly identified as BIS and WAT. However, through the fusion of SAR data, part of the DIS and VEG could be accurately identified, and the misjudgment of WAT could be eliminated. From the second row, it can be seen that under cloud cover, WAT was also misjudged as other land covers, while the fusion of SAR data significantly reduced the confusion and accurately identified most water bodies. In the third row, the cloud cover made it hard to identify WAT, DIS, and VEG accurately, but the SAR data helped to identify most of the VEG, and the river was also roughly visible.
As the classification performance of SAR data by itself is not perfect, the fusion of SAR and cloud-covered optical images cannot achieve fully correct recognition of the land cover under clouds. However, what this study aims to explore is the following: in cloudy areas, or in emergency cases where cloud-free optical data are scarce, how clouds affect ULC classification and how the classification can be improved as much as possible to provide a reference for the distribution of land covers, rather than achieving the same high classification accuracy as cloud-free images. In this respect, the fusion of optical and SAR data is meaningful, as it indeed achieves a considerable improvement in land cover classification compared with using cloud-covered optical or SAR data alone. On the other hand, further study is needed to avoid propagating erroneous information from either source when fusing optical and SAR data. Since the focus of this study is not the fusion algorithm itself, a common feature-level fusion method of feature layer stacking is adopted here.

4.2. Quantifying the Impacts of Different Levels of Cloud Coverage

To explore the impact of cloud cover on the discrimination of different land covers, classification experiments were conducted under different cloud covers to quantitatively evaluate the impact of cloud cover on the land cover classification accuracy with optical data, applying three different classification algorithms: SVM, RF, and GoogLeNet. Furthermore, to explore the relationship between cloud cover and the supplementary effect of SAR data on land cover classification, two experiments were performed for each classifier: one with the combined use of optical and SAR data and one with single optical data. The overall accuracy was calculated and is shown in Figure 5 to give a general understanding of the results. “Sen2+ALOS2” denotes the fused Sentinel-2 and ALOS-2 data, drawn with solid lines, and “Sen2” denotes single Sentinel-2 data, drawn with dashed lines. It is worth noting that since the images with different cloud contents were acquired on different dates, there may be differences in the classification of the images themselves due to factors such as different vegetation growth conditions. Therefore, the classification accuracy of the cloud-free samples under the SVM classifier for each cloud-content image was used as a reference to evaluate the impact of cloud content, shown as the long-dashed line in Figure 5. Compared with the 0% cloud-content image, the classification accuracy of the cloud-free samples in the other months was not much different and was relatively stable. The impact of clouds can then be further analyzed by comparing the classification accuracy of the cloud-free samples with the overall sample classification accuracy for each cloud-content image.
Figure 5 comprehensively illustrates the overall accuracy of land cover classification under different cloud coverages with different classifiers and data. There are three main findings. First, it is obvious that in all cases, cloud cover heavily affected the land cover classification accuracy, as the OA gradually decreased with increasing cloud coverage, by up to approximately 20%. This finding is consistent with previous studies showing that clouds reduce the accuracy of land cover classification [15], while this paper quantitatively measures the ULC classification accuracy under different proportions of cloud coverage. In addition, although the accuracy decreased sharply with increasing cloud coverage, all three classifiers could still achieve an OA of more than 90% when the cloud coverage was less than 10%. This means that in cases where satellite data are scarce, or for applications that require time-series images, images with cloud coverage below 10% still have a certain reference value, despite a small loss of accuracy.
Second, for all three classifiers, the combined use of SAR and optical data achieved higher classification accuracy than using optical data alone. This confirms the supplemental effect of SAR data on land cover classification, owing to the electromagnetic characteristics of SAR data, which help reduce the confusion between different land cover types. These results are consistent with the findings of previous studies exploring the effect of integrating SAR and optical data [15,17]. Furthermore, in the case of cloud cover, the enhancement from combining SAR data was larger than in the cloud-free case: the gap between the two curves (Sen2+ALOS2 and Sen2) widened with increasing cloud coverage, reaching approximately 4% in OA when the cloud coverage reached 50%. This indicates that SAR data may help discriminate land cover covered by clouds, thus reducing the impact of cloud cover on land cover classification accuracy. For further analysis, an experiment using single SAR data was also conducted in this study, which gave an OA of 82.51%. This indicates that the classification accuracy of SAR by itself is not very high, which is why the improvement over the optical images was not enormous. Nevertheless, SAR still provided supplementary information for the optical images and brought a noticeable improvement to the cloud-covered optical images, supporting the effectiveness of fusing optical and SAR data. Although previous studies have shown the effectiveness of combining cloud-free optical images and SAR images in improving classification accuracy [15,17], this finding further demonstrates the effect of combining SAR data in the case of cloud cover.
Third, although GoogLeNet, as a deep learning method, is known for its outstanding performance, which was indeed shown in the cloud-free case with an OA as high as 96.66%, its performance on land cover classification was not as good as that of SVM and RF in the presence of clouds. The accuracy curves of GoogLeNet dropped sharply with increasing cloud coverage and showed obvious fluctuations, which is perhaps not surprising since deep learning models are easily influenced by the input data [54,55]. In comparison, although the performances of SVM and RF were not as good as GoogLeNet in the cloud-free case, they were more robust and could still achieve acceptable classification accuracy under higher cloud coverage. Regarding computational cost, with the experimental settings in Section 3.3, the model training times for SVM, RF, and GoogLeNet were 44.63 s, 69.87 s, and 7628 s, respectively, and the testing times were 14.87 s, 1.09 s, and 328 s. Therefore, although the accuracy of GoogLeNet is higher than that of SVM and RF when there is no cloud, the trade-off between improved accuracy and higher computational cost is an issue worth considering. When there is cloud cover, both SVM and RF achieve relatively stable performance, with the accuracy of SVM slightly higher, while RF takes less time to classify the testing data.

4.3. Investigating the Mechanism of Cloud Impact during ULC Classification

To further investigate the impact of cloud cover on the five types of land cover and the role of combining SAR data in distinguishing different land covers, the confusion matrices with PA and UA are shown in Table 1. The four confusion matrices show the land cover classification results of the cloud-free samples and the cloud-covered samples with single optical data and combined optical and SAR data. The experiment was conducted on the SVM classifier, as it showed the best performance.
In general, two important conclusions can be drawn from the results. First, the impact of clouds on land cover classification is verified by the sharp decreases in UA, PA, and OA, both with single optical data and with fused optical and SAR data. The results with Sentinel-2 images in Table 1 show that the overall classification accuracy for the cloud-covered samples was 81.11%, significantly lower than the 95.45% OA obtained for the cloud-free samples. Additionally, the two “Sen2” confusion matrices in Table 1 show that the confusion between different land covers was greatly intensified, with sharp reductions in PA and UA for all five land covers. This is because the ground surface is blocked by clouds, and the originally distinct spectral reflections of different land covers are replaced by the similar high reflections of the clouds, making the underlying surface hard to distinguish. To explore this further, the spectral values of the five cloud-free and cloud-covered land cover samples in 12 bands are plotted in Figure 6. In the absence of clouds, the spectral characteristics of VEG, DIS, and WAT are relatively distinct, so they are easier to separate from the other categories. In contrast, the spectral curves of BIS and SOI overlap more, so they are more likely to be confused, as also shown in Table 1; this tendency for BIS and SOI to be confused has also been reported in other studies [17,56]. In the presence of clouds, different land covers show similarly high reflections due to the blocking effect of clouds, which makes them difficult to distinguish, as reflected by the sharp decreases in PA and UA in Table 1. However, water is an exception: WAT achieved the highest recognition accuracy among the five land covers, still reaching approximately 90%. One possible reason is the unique spectral characteristics of WAT compared with the other land covers. Another reason can be identified from Figure 6: the spectral values of the water samples remained more distinguishable than those of the other land covers. This may be because most of the water samples were labelled over the sea, and in our satellite image (Figure 3m) the clouds over the sea were relatively thin, so the spectral reflectance over the water samples was only slightly affected.
Second, the effectiveness of combining the optical and SAR images is further confirmed by the increases in the four evaluation indicators, especially in the case of cloud cover, with an improvement of approximately 4% in OA. In the cloud-free case, the PA and UA of the land covers were slightly improved by the supplementation of SAR data: the UA of SOI and the PA of BIS increased by approximately 2% with the fusion of SAR data, resulting in less BIS being misclassified as SOI. In the case of cloud cover, there were two main findings. First, after combining optical and SAR data, both the UA and PA of water increased greatly, by approximately 7%, reaching relatively high values of 98.29%, which fully verifies the supplementary effect of SAR on optical data under cloud cover. This considerable gain may be due to the unique radiation characteristics of water, which help to correctly distinguish water from other land covers when its spectral features are affected by clouds; the effectiveness of SAR in distinguishing water has also been reported in previous works [17,24]. Although the gain from SAR data is less apparent in the absence of clouds, since water can already be recognized with high accuracy from single optical data, its role under cloud cover is noticeable. This finding may also support water-related studies by showing that combining optical and SAR data is an effective means of dealing with cloud cover and can still achieve acceptably high accuracy even when the surface is covered by clouds. Second, for soil, bright and dark impervious surfaces, and vegetation, the combined use of optical and SAR data also reduced the number of samples misidentified as other types of land cover. Although vegetation was badly affected by cloud cover, the supplementary effect of combining optical and SAR data was noticeable, as 120 more samples were correctly identified compared with single optical data. This may be because SAR is sensitive to surface roughness and moisture, which makes it easier to separate vegetation from other land covers than with single optical data, whose spectral signals are interfered with by clouds. In general, the confusion of all five land covers increased greatly under cloud cover, but the combined use of optical and SAR data reduced this confusion, especially for vegetation and water.
For further comparison, the confusion matrix of testing samples with single SAR features is also shown in Table 2. It could be found that the water body is still the land cover with the highest recognition accuracy. Compared to only using cloud-covered optical data shown in Table 1, SAR has a stronger ability to recognize land covers in some cases, such as vegetation and water bodies. However, the accuracy of SAR itself was not exceptionally high, and the fusion of optical and SAR data achieved better performance in ULC classification than using cloud-covered optical or SAR data alone, which was also consistent with the findings in Figure 3.

4.4. Further Exploration of the Impacts of Clouds

4.4.1. The Impact of Training Samples

In this study, the experimental samples were labelled by visual interpretation over the four coregistered Sentinel-2 images and very high-resolution images from Google Earth. Because the four images had been coregistered, the same set of geographic locations was used to extract corresponding samples and construct datasets with different cloud contents. Therefore, even if a sample is covered by clouds in a certain image, its category is still known, as it may be cloud-free in the other images. In the experiments, for datasets with different cloud contents, half of the samples were used for training and the other half for testing, which means that samples covered by clouds also participated in the training process. Based on the discussion of Figure 6, some samples covered by clouds may still be distinguishable, so cloud-covered samples participating in training might help to identify similar samples in the testing set. In practice, however, when only one image is available, sample labelling is usually limited to land covers that are not covered by clouds, as it is difficult to identify the category of land covers under clouds. Taking these factors into consideration, another experiment was conducted using the same training and testing sets as in Section 4.2, with SVM as the classifier, except that the cloud-covered samples were removed from the training set. The results are shown in Figure 7, with the results of training with cloud-covered samples included for comparison.
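Building on the sampling sketch in Section 3.4 (whose variable names are assumptions), the modification for this experiment amounts to dropping the cloud-covered samples from the training indices only, leaving the testing set unchanged so that accuracies remain comparable with Section 4.2.

```python
import numpy as np

def cloud_free_training_indices(train_idx: np.ndarray, cloud_flags: np.ndarray) -> np.ndarray:
    """Keep only the cloud-free samples of the training split (the testing set is untouched)."""
    return train_idx[~cloud_flags[train_idx]]
```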
Obviously, when cloud-covered samples did not participate in training, the OA was greatly reduced. It dropped sharply with the increase in cloud coverage and only achieved approximately 60% overall accuracy when the cloud coverage reached 50%, where the improvement effect of cloud-covered samples participating in the training was as high as 25–28%. This is quite a valuable finding. It indicates that although cloud cover prevents accurate land cover classification, if some cloud-covered samples with category labels can be obtained and are input into the training process, the land cover classification accuracy can be greatly improved. In addition, it is also worth noting that the supplementary effect of SAR is even more significant (with an approximate 5% increase in OA) when training with pure cloud-free samples due to the lack of effective optical information identifying cloud-covered testing samples. However, since the classification accuracy is much lower than the 82.51% classification accuracy of the SAR itself in this situation, whether or not to use optical data and fusion data still needs to be considered according to practical needs. Taking this into consideration, under cloud coverage, the strategy of acquiring class labels of the land covers under the cloud and adding these cloud-covered samples into model training may bring about more significant accuracy improvements than simply fusing optical and SAR data.

4.4.2. The Impacts of Cloud Types

This section further explores the influence of cloud type. The cloud types were roughly divided using the quality band of Sentinel-2; here we simply considered two types, “thin cirrus” and “high-probability clouds”, the latter usually being thicker clouds. With the same experimental setup and dataset as in Section 4.3, among the cloud-covered testing samples of the 50% cloud-coverage image, there were 71 VEG, 64 SOI, 55 BIS, 79 DIS, and 169 WAT samples covered by “thin cirrus”, and 1526 VEG, 1281 SOI, 1266 BIS, 2060 DIS, and 782 WAT samples covered by “high-probability clouds”. The classification results of the testing samples covered by these two kinds of clouds were extracted and compared with the ground truth. The results show that although the classification accuracy of the samples covered by thick clouds was only 84.76%, similar to the results in Table 1, the classification accuracy of the samples covered by thin clouds was as high as 95.89%. The spectral values of the five land cover samples covered by these two kinds of clouds in 12 bands are plotted in Figure 8, with the solid lines representing samples covered by “high-probability clouds” and the dashed lines representing “thin cirrus”. Under the occlusion of thick clouds, the spectral values of different land covers were very similar, whereas under thin clouds the spectral values, although affected, were still quite distinguishable. Hence, the data provide valuable information: although cloud obstruction is recognized to reduce the accuracy of ULC classification, the impact is highly related to the type of cloud. For studies in which cloud-free images are difficult to obtain, images covered by thin clouds may still yield relatively satisfactory results. Of course, research on cloud type recognition from satellite images is still not well developed, and this is worth further exploration.
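The per-cloud-type evaluation can be expressed compactly as below; the variable names and the SCL codes (10 for thin cirrus, 9 for high-probability clouds) are assumptions used for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score

# y_true, y_pred: labels of the cloud-covered testing samples; scl_codes: their SCL values.
def per_cloud_type_oa(y_true, y_pred, scl_codes):
    results = {}
    for name, code in [("thin cirrus", 10), ("high-probability clouds", 9)]:
        sel = scl_codes == code
        results[name] = accuracy_score(y_true[sel], y_pred[sel])
    return results
```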

5. Conclusions

This work proposes a research framework for studying the impact of clouds and the effect of polarimetric SAR images on ULC classification in cloud-prone areas. Through a designed sampling strategy, a quantitative assessment of the impact of clouds on ULC classification was carried out, and the effect of SAR on cloud-contaminated areas was further explored. Experiments found that ULC classification accuracy decreases with increasing cloud content, cloud cover greatly increases the confusion of land covers, and water is less likely to be confused than other land covers. The fusion of SAR and cloud-contaminated optical data could help reduce confusion between land covers under clouds, and improve the ULC classification accuracy of cloud-covered areas, especially for vegetation and water. In addition, SVM and RF show more robustness and less time consumption than GoogLeNet, while GoogLeNet performs best in cloud-free areas. Finally, through the analysis of training samples and cloud type, it is further found that the participation of cloud-covered samples in the training process could significantly improve ULC classification accuracy, and ULC classification accuracy is related to the type of clouds. In the case of thin-cloud cover, ULC classification may still achieve reliable accuracy. Future research on distinguishing cloud types may help further elucidate this issue.

Author Contributions

Conceptualization, J.L. and H.Z.; methodology, J.L., H.Z. and Y.L.; software, J.L.; validation, J.L.; formal analysis, J.L.; investigation, J.L.; resources, H.Z.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L., H.Z. and Y.L.; visualization, J.L.; supervision, H.Z.; project administration, H.Z.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was jointly supported by the National Natural Science Foundation of China (42022061 and 42071390), the Shenzhen Science and Technology Program (JCYJ20210324124013037), the Research Grants Council (RGC) of Hong Kong (HKU27602020, HKU14605917), and The University of Hong Kong (201909185015 and 202011159112).

Data Availability Statement

The Sentinel-2 data that support the findings of this study are openly available from the USGS Earth Explorer platform at https://earthexplorer.usgs.gov/ (accessed on 12 November 2021) [33].

Acknowledgments

The authors would like to thank Professor Maurizio Migliaccio and Ferdinando Nunziata from the Università di Napoli Parthenope, Italy, for providing the ALOS-2 data, and the editor and three anonymous reviewers for their critical comments and suggestions that improved the original manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ohki, M.; Shimada, M. Large-area land use and land cover classification with quad, compact, and dual polarization SAR data by PALSAR-2. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5550–5557.
2. Dronova, I.; Gong, P.; Wang, L.; Zhong, L. Mapping dynamic cover types in a large seasonally flooded wetland using extended principal component analysis and object-based classification. Remote Sens. Environ. 2015, 158, 193–206.
3. Huang, H.; Chen, Y.; Clinton, N.; Wang, J.; Wang, X.; Liu, C.; Gong, P.; Yang, J.; Bai, Y.; Zheng, Y. Mapping major land cover dynamics in Beijing using all Landsat images in Google Earth Engine. Remote Sens. Environ. 2017, 202, 166–176.
4. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86.
5. Huang, X.; Wang, Y.; Li, J.; Chang, X.; Cao, Y.; Xie, J.; Gong, J. High-resolution urban land-cover mapping and landscape analysis of the 42 major cities in China using ZY-3 satellite images. Sci. Bull. 2020, 65, 1039–1048.
6. Rosenberg, B.D.; Schroth, A.W. Coupling of reactive riverine phosphorus and iron species during hot transport moments: Impacts of land cover and seasonality. Biogeochemistry 2017, 132, 103–122.
7. Li, X.; Lei, L.; Sun, Y.; Li, M.; Kuang, G. Collaborative Attention-Based Heterogeneous Gated Fusion Network for Land Cover Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3829–3845.
8. Luo, X.; Tong, X.; Pan, H. Integrating Multiresolution and Multitemporal Sentinel-2 Imagery for Land-Cover Mapping in the Xiongan New Area, China. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1029–1040.
9. Ji, S.; Wang, D.; Luo, M. Generative Adversarial Network-Based Full-Space Domain Adaptation for Land Cover Classification From Multiple-Source Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3816–3828.
10. Meraner, A.; Ebel, P.; Zhu, X.X.; Schmitt, M. Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 333–346.
11. King, M.D.; Platnick, S.; Menzel, W.P.; Ackerman, S.A.; Hubanks, P.A. Spatial and temporal distribution of clouds observed by MODIS onboard the Terra and Aqua satellites. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3826–3852.
12. Gao, J.; Yuan, Q.; Li, J.; Zhang, H.; Su, X. Cloud removal with fusion of high resolution optical and SAR images using generative adversarial networks. Remote Sens. 2020, 12, 191.
13. Paris, C.; Bruzzone, L.; Fernández-Prieto, D. A novel approach to the unsupervised update of land-cover maps by classification of time series of multispectral images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4259–4277.
14. Li, J.; Huang, X.; Hu, T.; Jia, X.; Benediktsson, J.A. A novel unsupervised sample collection method for urban land-cover mapping using Landsat imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3933–3951.
15. Zhang, R.; Tang, X.; You, S.; Duan, K.; Xiang, H.; Luo, H. A novel feature-level fusion framework using optical and SAR remote sensing images for land use/land cover (LULC) classification in cloudy mountainous area. Appl. Sci. 2020, 10, 2928.
16. Zhang, H.; Xu, R. Exploring the optimal integration levels between SAR and optical data for better urban land cover mapping in the Pearl River Delta. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 87–95.
17. Zhang, Y.; Zhang, H.; Lin, H. Improving the impervious surface estimation with combined use of optical and SAR remote sensing images. Remote Sens. Environ. 2014, 141, 155–167.
18. Zhang, L.; Zou, B.; Zhang, J.; Zhang, Y. Classification of polarimetric SAR image based on support vector machine using multiple-component scattering model and texture features. EURASIP J. Adv. Signal Process. 2009, 2010, 1–9.
19. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
20. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
21. Diao, W.; Sun, X.; Zheng, X.; Dou, F.; Wang, H.; Fu, K. Efficient saliency-based object detection in remote sensing images using deep belief networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 137–141.
22. Clinton, N.; Yu, L.; Gong, P. Geographic stacking: Decision fusion to increase global land cover map accuracy. ISPRS J. Photogramm. Remote Sens. 2015, 103, 57–65.
23. Sun, Z.; Xu, R.; Du, W.; Wang, L.; Lu, D. High-resolution urban land mapping in China from Sentinel 1A/2 imagery based on Google Earth Engine. Remote Sens. 2019, 11, 752.
24. Shao, Z.; Fu, H.; Fu, P.; Yin, L. Mapping urban impervious surface by fusing optical and SAR data at the decision level. Remote Sens. 2016, 8, 945.
25. Sun, Y.; Luo, J.; Wu, Z.; Wu, T.; Zhou, Y.N.; Gao, L.; Dong, W.; Liu, H.; Liu, W.; Yang, Y. Crop classification in cloudy and rainy areas based on the optical-synthetic aperture radar response mechanism. J. Appl. Remote Sens. 2020, 14, 028501.
26. Zhang, R.; Tang, Z.; Luo, D.; Luo, H.; You, S.; Zhang, T. Combined Multi-Time Series SAR Imagery and InSAR Technology for Rice Identification in Cloudy Regions. Appl. Sci. 2021, 11, 6923.
27. Yang, F.; Yang, X.; Wang, Z.; Lu, C.; Li, Z.; Liu, Y. Object-based classification of cloudy coastal areas using medium-resolution optical and SAR images for vulnerability assessment of marine disaster. J. Oceanol. Limnol. 2019, 37, 1955–1970.
28. Shrestha, D.P.; Saepuloh, A.; van der Meer, F. Land cover classification in the tropics, solving the problem of cloud covered areas using topographic parameters. Int. J. Appl. Earth Obs. Geoinf. 2019, 77, 84–93.
29. Zhou, N.; Li, X.; Shen, Z.; Wu, T.; Luo, J. Geo-Parcel-Based Change Detection Using Optical and SAR Images in Cloudy and Rainy Areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1326–1332.
30. Fan, F.; Wang, Y.; Wang, Z. Temporal and spatial change detecting (1998–2003) and predicting of land use and land cover in Core corridor of Pearl River Delta (China) by using TM and ETM+ images. Environ. Monit. Assess. 2008, 137, 127–147.
31. Zhang, H.; Li, J.; Wang, T.; Lin, H.; Zheng, Z.; Li, Y.; Lu, Y. A manifold learning approach to urban land cover classification with optical and radar data. Landsc. Urban Plan. 2018, 172, 11–24.
32. Okada, Y.; Nakamura, S.; Iribe, K.; Yokota, Y.; Tsuji, M.; Tsuchida, M.; Hariu, K.; Kankaku, Y.; Suzuki, S.; Osawa, Y. System design of wide swath, high resolution, full polarimetric L-band SAR onboard ALOS-2. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 2408–2411.
33. USGS Earth Explorer. Available online: https://earthexplorer.usgs.gov/ (accessed on 12 November 2021).
34. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36.
35. Sen2Cor v2.8. Available online: http://step.esa.int/main/snap-supported-plugins/sen2cor/sen2cor_v2-8/ (accessed on 10 May 2021).
36. Lopes, A.; Touzi, R.; Nezry, E. Adaptive speckle filters and scene heterogeneity. IEEE Trans. Geosci. Remote Sens. 1990, 28, 992–1000.
37. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA/GSFC Type III Final Report; NASA/GSFC: Greenbelt, MD, USA, 1974; Volume 371.
38. Gao, B.-C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
39. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
  40. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef] [Green Version]
  41. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  42. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  43. Shao, Y.; Lunetta, R.S. Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS J. Photogramm. Remote Sens. 2012, 70, 78–87. [Google Scholar] [CrossRef]
  44. Mansaray, L.R.; Wang, F.; Huang, J.; Yang, L.; Kanu, A.S. Accuracies of support vector machine and random forest in rice mapping with Sentinel-1A, Landsat-8 and Sentinel-2A datasets. Geocarto Int. 2020, 35, 1088–1108. [Google Scholar] [CrossRef]
  45. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  46. Yao, X.; Wang, X.; Karaca, Y.; Xie, J.; Wang, S. Glomerulus Classification via an Improved GoogLeNet. IEEE Access 2020, 8, 176916–176923. [Google Scholar] [CrossRef]
  47. Li, C.; Zhang, H.; Wu, P.; Yin, Y.; Liu, S. A complex junction recognition method based on GoogLeNet model. Trans. GIS 2020, 24, 1756–1778. [Google Scholar] [CrossRef]
  48. Kim, J.-H.; Seo, S.-Y.; Song, C.-G.; Kim, K.-S. Assessment of electrocardiogram rhythms by GoogLeNet deep neural network architecture. J. Healthc. Eng. 2019, 2019, 2826901. [Google Scholar] [CrossRef] [Green Version]
  49. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  50. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 1–27. [Google Scholar] [CrossRef]
  51. Yu, X.; Hyyppä, J.; Vastaranta, M.; Holopainen, M.; Viitala, R. Predicting individual tree attributes from airborne laser point clouds based on the random forests technique. ISPRS J. Photogramm. Remote Sens. 2011, 66, 28–37. [Google Scholar] [CrossRef]
  52. Girija, S.S. Tensorflow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2016. Available online: www.tensorflow.org (accessed on 25 May 2021).
  53. Zhang, H.; Luoma, W.; Ting, W.; Yinyi, L.; Hui, L.; Zheng, Z. A Comparative Study of Impervious Surface Estimation from Optical and Sar Data Using Deep Convolutional Networks. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2018), Valencia, Spain, 22–27 July 2018; pp. 1656–1659. [Google Scholar]
  54. Hao, D.; Zhang, L.; Sumkin, J.; Mohamed, A.; Wu, S. Inaccurate labels in weakly-supervised deep learning: Automatic identification and correction and their impact on classification performance. IEEE J. Biomed. Health Inform. 2020, 24, 2701–2710. [Google Scholar] [CrossRef] [PubMed]
  55. Jalilian, E.; Uhl, A. Finger-vein recognition using deep fully convolutional neural semantic segmentation networks: The impact of training data. In Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, China, 11–13 December 2018; pp. 1–8. [Google Scholar]
  56. Zhang, H.; Zhang, Y.; Lin, H. Compare different levels of fusion between optical and SAR data for impervious surfaces estimation. In Proceedings of the 2012 Second International Workshop on Earth Observation and Remote Sensing Applications, Shanghai, China, 8–11 June 2012; pp. 26–30. [Google Scholar]
Figure 1. The geographic location of the study area.
Figure 2. The methodological framework of this study.
Figure 3. Land cover classification with different levels of cloud coverage. (a,e,i,m) are the optical images with cloud coverage of 0%, 6%, 30%, and 50%, respectively; (b,f,j,n) are the land cover classification maps of single optical images; (c,g,k,o) are the land cover classification maps of single SAR images; and (d,h,l,p) are the land cover classification maps of fused optical and SAR images.
Figure 4. Detailed classification maps of cloud-free and cloud-covered images. (a,f,k,p) are the cloud-free images; (b,g,l,q) are the classification maps of the cloud-free images; (c,h,m,r) are the cloud-covered images; (d,i,n,s) are the classification maps of the single cloud-covered images; and (e,j,o,t) are the classification maps of fused optical and SAR images.
Figure 5. The overall accuracy of the three classifiers under different levels of cloud coverage, using single optical data and combined optical and SAR data.
Figure 6. The spectral values of the five cloud-free and cloud-covered land cover samples in 12 bands.
Figure 7. The overall accuracy of training with/without cloud-covered samples under different types of cloud coverage using single optical data and combined optical and SAR data.
Figure 8. The spectral values of five land cover samples covered by "thin cirrus" and "high-probability clouds" in 12 bands.
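As context for Figures 6–8, the sketch below shows one way the cloud-free and cloud-covered sample groups could be separated before comparing their spectra. It is an illustration under stated assumptions rather than the authors' procedure: it presumes a per-sample Sen2Cor scene classification layer (SCL) is available, treats SCL classes 8, 9, and 10 as medium-probability cloud, high-probability cloud, and thin cirrus, and uses placeholder reflectance values.

    import numpy as np

    # Hypothetical inputs: per-sample Sen2Cor SCL codes and 12-band spectra (n_samples x 12).
    scl = np.array([4, 9, 10, 5, 8, 6])      # SCL codes for six example samples (placeholder)
    spectra = np.random.rand(6, 12)          # placeholder 12-band reflectance values

    THIN_CIRRUS = {10}                       # SCL 10: thin cirrus
    THICK_CLOUD = {8, 9}                     # SCL 8/9: medium/high-probability cloud

    is_thin = np.isin(scl, list(THIN_CIRRUS))
    is_thick = np.isin(scl, list(THICK_CLOUD))
    is_clear = ~(is_thin | is_thick)

    # Mean spectrum per group, e.g., to compare cloud-free vs. cloud-covered signatures
    for name, mask in [("clear", is_clear), ("thin cirrus", is_thin), ("thick cloud", is_thick)]:
        if mask.any():
            print(name, spectra[mask].mean(axis=0).round(3))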
Table 1. The confusion matrix of cloud-free samples and cloud-covered samples with single optical data and combined optical and SAR data. Columns represent classification labels, and rows represent the actual labels of the testing data.
Sen2 + ALOS-2, cloud-free samples
        VEG     SOI     BIS     DIS     WAT     UA (%)
VEG     4632    61      13      77      19      96.46
SOI     43      3215    77      56      0       94.81
BIS     3       128     2918    94      0       92.84
DIS     82      121     140     5227    3       93.79
WAT     4       5       0       0       4915    99.82
PA (%)  97.23   91.08   92.69   95.84   99.55
OA: 95.76%

Sen2 + ALOS-2, cloud-covered samples
        VEG     SOI     BIS     DIS     WAT     UA (%)
VEG     1976    129     120     243     8       79.81
SOI     97      1626    67      104     4       85.67
BIS     58      76      1589    211     11      81.70
DIS     232     137     154     2855    3       84.44
WAT     10      3       5       8       1494    98.29
PA (%)  83.27   82.50   82.12   83.46   98.29
OA: 85.03%

Sen2, cloud-free samples
        VEG     SOI     BIS     DIS     WAT     UA (%)
VEG     4626    67      12      80      19      96.29
SOI     43      3206    142     71      0       92.61
BIS     5       133     2850    77      0       92.99
DIS     85      121     144     5226    4       93.66
WAT     5       3       0       0       4914    99.84
PA (%)  97.10   90.82   90.53   95.82   99.53
OA: 95.37%

Sen2, cloud-covered samples
        VEG     SOI     BIS     DIS     WAT     UA (%)
VEG     1858    139     167     275     16      75.68
SOI     82      1559    72      116     68      82.18
BIS     109     64      1515    215     26      78.54
DIS     284     168     158     2782    24      81.44
WAT     40      41      23      33      1386    91.00
PA (%)  78.30   79.10   78.29   81.32   91.18
OA: 81.11%
Table 2. The confusion matrix of testing samples with single SAR features. Columns represent classification labels, and rows represent the actual labels of the testing data.
ALOS-2
        VEG     SOI     BIS     DIS     WAT     UA (%)
VEG     3836    339     136     393     93      79.97
SOI     410     2629    61      191     89      77.78
BIS     208     98      2303    626     34      70.45
DIS     329     78      474     4442    43      82.78
WAT     66      91      17      42      4805    95.70
PA (%)  79.11   81.27   77.00   78.01   94.89
OA: 82.51%
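For readers who wish to reproduce the accuracy figures above, the short sketch below (illustrative only, not the authors' code) derives the per-class UA and PA and the overall accuracy from a confusion matrix. It uses the Table 2 counts and assumes the class order VEG, SOI, BIS, DIS, WAT, following the tables' convention of reporting the row-wise ratio as UA and the column-wise ratio as PA.

    import numpy as np

    # Confusion matrix from Table 2 (single ALOS-2 SAR features); rows are the
    # actual labels of the testing data, columns are the classification labels.
    cm = np.array([
        [3836,  339,  136,  393,   93],   # VEG
        [ 410, 2629,   61,  191,   89],   # SOI
        [ 208,   98, 2303,  626,   34],   # BIS
        [ 329,   78,  474, 4442,   43],   # DIS
        [  66,   91,   17,   42, 4805],   # WAT
    ])

    diag = np.diag(cm).astype(float)
    user_acc = diag / cm.sum(axis=1)      # reported as UA (%) at the end of each row
    prod_acc = diag / cm.sum(axis=0)      # reported as PA (%) at the bottom of each column
    overall_acc = diag.sum() / cm.sum()   # OA: trace divided by the grand total

    for name, ua, pa in zip(["VEG", "SOI", "BIS", "DIS", "WAT"], user_acc, prod_acc):
        print(f"{name}: UA = {ua:.2%}, PA = {pa:.2%}")
    print(f"OA = {overall_acc:.2%}")      # prints 82.51% for this matrix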