Remote Sens., Volume 13, Issue 17 (September-1 2021) – 208 articles

Cover Story: A highwall is the core of most mines, as the mineral feeding mining production originates there. Unexpected rock and earth falls can endanger human lives and economic activity; hence, continuous and detailed highwall monitoring is required. Topographic surveys of a highwall are very complex due to a variety of challenging conditions: highwalls are vertical and long, and they often lack easy and safe access paths. We demonstrate, based on SfM methodology, that a facade drone flight mode, combined with a nadir camera angle and automatically programmed with computer-based mission planning software, provides the most accurate and detailed topographies in the shortest time and with increased flight safety. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 17200 KiB  
Article
Evaluation of a Statistical Approach for Extracting Shallow Water Bathymetry Signals from ICESat-2 ATL03 Photon Data
by Heidi Ranndal, Philip Sigaard Christiansen, Pernille Kliving, Ole Baltazar Andersen and Karina Nielsen
Remote Sens. 2021, 13(17), 3548; https://doi.org/10.3390/rs13173548 - 6 Sep 2021
Cited by 41 | Viewed by 4612
Abstract
In this study we present and validate a simple empirical method to obtain bathymetry profiles using the geolocated photon data from the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) mission, which was launched by NASA in September 2018. The satellite carries the Advanced Topographic Laser Altimeter System (ATLAS), a lidar that can detect single photons and calculate their bounce point positions. ATLAS uses a green laser, so some of the photons penetrate the air–water interface. Under the right conditions and in shallow waters (<40 m), these photons are reflected back to ATLAS after interacting with the ocean bottom. Using ICESat-2 data from four different overflights above Heron Reef, Australia, a comparison with satellite-derived bathymetry (SDB) data showed a median absolute deviation of approximately 18 cm and Root Mean Square Errors (RMSEs) down to 28 cm. Crossovers between two different overflights above Heron Reef showed a median absolute difference of 13 cm. For an area north-west of Sisimiut, Greenland, the comparison was done with multibeam echo sounding data, with RMSEs down to 35 cm and median absolute deviations between 33 and 49 cm. The proposed method works well under good conditions with clear waters such as in the Great Barrier Reef; however, for more difficult areas a more advanced machine learning technique should be investigated in order to find an automated method that can distinguish bathymetry from other signals and noise.
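As a rough illustration of the depth correction the abstract alludes to, the sketch below applies a first-order vertical refraction correction to subsurface photon heights, assuming near-nadir incidence and a nominal refractive index of about 1.34 for sea water at 532 nm. The variable names and the constant are illustrative; the paper's full correction also depends on salinity and temperature (see Figures 5 and 6).

```python
import numpy as np

def refraction_corrected_depth(photon_height, surface_height, n_water=1.34, n_air=1.0):
    """First-order vertical refraction correction for near-nadir lidar photons.

    photon_height : geolocated photon heights (m)
    surface_height: local median water-surface height (m)
    Returns corrected depths (m, positive downward) for subsurface photons, NaN otherwise.
    """
    apparent_depth = surface_height - photon_height        # positive below the surface
    subsurface = apparent_depth > 0
    # light travels slower in water, so the apparent depth overestimates the true depth
    corrected = np.where(subsurface, apparent_depth * n_air / n_water, np.nan)
    return corrected

# toy example: surface at 0 m, photons between +1 m (noise) and -30 m (apparent)
photons = np.array([1.0, -5.0, -12.0, -30.0])
print(refraction_corrected_depth(photons, surface_height=0.0))
```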
Figure 1. (Left) SDB bathymetry of Heron Reef wrt. the mean sea level, and (right) north of Sisimiut, Greenland, wrt. the latoid.
Figure 2. Multibeam echo sounding reference data in the area around Sisimiut, Greenland.
Figure 3. Illustration of the discrimination procedure using a track crossing Heron Reef on 15 September 2019 (beam gt1r). Surface points are shown in grey, with the median surface height in black; subsurface points before the refraction correction are shown in green. Blue points correspond to the subsurface data after the refraction correction.
Figure 4. Illustration of refraction of the laser beams at the air–sea interface. Inspiration from [8].
Figure 5. Refraction index for sea water at various salinities and temperatures. Only valid for the ATLAS instrument, which uses a green laser with a wavelength of 532 nm.
Figure 6. Vertical refraction correction as a function of temperature for three different salinities and at two different depths of 10 m (left y-axis) and 35 m (right y-axis).
Figure 7. ICESat-2 track crossing the Yongle Atoll in the South China Sea on 22 October 2018. The beam shown in this plot is gt3r. (Top left) Location of the Yongle Atoll in the South China Sea, (top right) location of beam gt3r across the Yongle Atoll, and (bottom) the derived bathymetry signal compared to uncorrected ATL03 data.
Figure 8. ICESat-2 track crossing Heron Reef on 15 September 2019. ICESat-2 data from ATL13 and ATL03 shown along with SDB data and the GEBCO model.
Figure 9. (Left) Location of Heron Reef, Australia. (Right) Map of ICESat-2 tracks; pink highlights areas with detected bathymetry. A: 8 April 2019, beam gt3l (strong); B: 15 September 2019, beams gt3l/gt3r (strong); C: 8 April 2019, beams gt2l/gt2r (weak).
Figure 10. Bias plot comparing the depth of Heron Reef, Australia from ICESat-2 and EOMAP. Depths are metres below the EGM2008 geoid. A perfect fit line (black) is shown along with a linear fit of all high confidence data (grey).
Figure 11. Histogram showing the absolute differences between ICESat-2 and EOMAP depth estimates for Heron Reef, Australia.
Figure 12. Crossover points over Heron Reef between two beam pairs from tracks 154 and 1213, on 9 April 2019 and 15 September 2019, respectively.
Figure 13. (Left) Location of Sisimiut, Greenland. (Right) The different ICESat-2 beams used for this study.
Figure 14. The different bathymetry profiles as well as all the ATL03 photon data from ICESat-2 track 529. The ICESat-2 data were acquired on 2 November 2018.
Figure 15. The different bathymetry profiles as well as all the ATL03 photon data from ICESat-2 track 529. The ICESat-2 data were acquired on 1 February 2019.
Figure 16. (Left) Bias plots comparing the obtained ICESat-2 bathymetry around Sisimiut, Greenland with bathymetry from EOMAP SDB and (right) MBES. Depths are metres below the EGM2008 geoid.
22 pages, 22566 KiB  
Article
Modifications of the Multi-Layer Perceptron for Hyperspectral Image Classification
by Xin He and Yushi Chen
Remote Sens. 2021, 13(17), 3547; https://doi.org/10.3390/rs13173547 - 6 Sep 2021
Cited by 24 | Viewed by 3615
Abstract
Recently, many convolutional neural network (CNN)-based methods have been proposed to tackle the classification task of hyperspectral images (HSI). In fact, CNN has become the de facto standard for HSI classification, and it seems that traditional neural networks such as the multi-layer perceptron (MLP) are not competitive for this task. In this study, however, we try to show that an MLP can achieve good HSI classification performance if it is properly designed and improved. The proposed Modified-MLP for HSI classification contains two special parts: spectral–spatial feature mapping and spectral–spatial information mixing. Specifically, for spectral–spatial feature mapping, each input HSI sample is divided into a sequence of 3D patches of fixed length, and a linear layer then maps the 3D patches to spectral–spatial features. For spectral–spatial information mixing, all the spectral–spatial features within a single sample are fed into an MLP-only architecture to model the spectral–spatial information across patches for the subsequent HSI classification. Furthermore, to obtain abundant spectral–spatial information at different scales, Multiscale-MLP is proposed to aggregate neighboring patches with multiscale shapes. In addition, Soft-MLP is proposed to further enhance the classification performance by applying a soft split operation, which flexibly captures the global relations of patches at different positions in the input HSI sample. Finally, label smoothing is introduced to mitigate the overfitting problem in Soft-MLP (Soft-MLP-L), which greatly improves the classification performance of the MLP-based method. The proposed Modified-MLP, Multiscale-MLP, Soft-MLP, and Soft-MLP-L are tested on three widely used hyperspectral datasets. The proposed Soft-MLP-L leads to the highest OA, outperforming CNN by 5.76%, 2.55%, and 2.5% on the Salinas, Pavia, and Indian Pines datasets, respectively. The results show that the proposed models are competitive with state-of-the-art methods, demonstrating that MLP-based methods remain viable for HSI classification.
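The two stages named in the abstract (patch-to-token linear mapping, then MLP-only mixing across patches) can be sketched in a few lines of PyTorch. The patch size, widths and the mixer-style residual blocks below are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class TinySpectralSpatialMLP(nn.Module):
    """Minimal sketch of the two stages described in the abstract:
    (1) a linear layer maps flattened 3D patches to spectral-spatial tokens,
    (2) plain MLP layers mix information across and within tokens for classification."""

    def __init__(self, n_patches=9, patch_dim=3 * 3 * 30, embed_dim=64, n_classes=16):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, embed_dim)          # feature mapping
        self.token_mix = nn.Sequential(                             # mixing across patches
            nn.LayerNorm(n_patches), nn.Linear(n_patches, n_patches), nn.GELU())
        self.channel_mix = nn.Sequential(                           # mixing within tokens
            nn.LayerNorm(embed_dim), nn.Linear(embed_dim, embed_dim), nn.GELU())
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, patches):                   # patches: (B, n_patches, patch_dim)
        x = self.patch_embed(patches)             # (B, n_patches, embed_dim)
        x = x + self.token_mix(x.transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mix(x)
        return self.head(x.mean(dim=1))           # average tokens, then classify

x = torch.randn(4, 9, 3 * 3 * 30)                 # 4 samples, 9 patches of 3x3x30
print(TinySpectralSpatialMLP()(x).shape)          # torch.Size([4, 16])
```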
Figure 1. The overview architecture of the proposed Modified-MLP for HSI classification.
Figure 2. The framework of the proposed Multiscale-MLP for HSI classification.
Figure 3. An example of the soft split operation in the Soft-MLP.
Figure 4. The Salinas dataset. (a) False-color composite image; (b) ground truth map.
Figure 5. The Pavia University dataset. (a) False-color composite image; (b) ground truth map.
Figure 6. The Indian Pines dataset. (a) False-color composite image; (b) ground truth map.
Figure 7. Curves on the Salinas dataset.
Figure 8. Curves on the Pavia dataset.
Figure 9. Curves on the Indian Pines dataset.
Figure 10. Test accuracy of key parameters on the Salinas dataset.
Figure 11. Test accuracy of key parameters on the Pavia dataset.
Figure 12. Test accuracy of key parameters on the Indian Pines dataset.
Figure 13. Results of Multiscale-MLP with different window sizes.
Figure 14. Test accuracy (%) comparisons under different methods on the three datasets with 150 training samples.
Figure 15. Test accuracy (%) comparisons under different methods on the three datasets with 300 training samples.
Figure 16. Results of different methods with cross-validation.
Figure 17. Salinas. (a) False-color composite image. The classification maps using (b) EMP-SVM; (c) CNN; (d) SSRN; (e) VGG; (f) Modified-MLP; (g) Multiscale-MLP; (h) Soft-MLP; (i) Soft-MLP-L.
Figure 18. Pavia. (a) False-color composite image. The classification maps using (b) EMP-SVM; (c) CNN; (d) SSRN; (e) VGG; (f) Modified-MLP; (g) Multiscale-MLP; (h) Soft-MLP; (i) Soft-MLP-L.
Figure 19. Indian Pines. (a) False-color composite image. The classification maps using (b) EMP-SVM; (c) CNN; (d) SSRN; (e) VGG; (f) Modified-MLP; (g) Multiscale-MLP; (h) Soft-MLP; (i) Soft-MLP-L.
18 pages, 6261 KiB  
Article
A New Approach for the Development of Grid Models Calculating Tropospheric Key Parameters over China
by Ge Zhu, Liangke Huang, Lilong Liu, Chen Li, Junyu Li, Ling Huang, Lv Zhou and Hongchang He
Remote Sens. 2021, 13(17), 3546; https://doi.org/10.3390/rs13173546 - 6 Sep 2021
Cited by 8 | Viewed by 2461
Abstract
Pressure, water vapor pressure, temperature, and weighted mean temperature (Tm) are tropospheric parameters that play an important role in high-precision Global Navigation Satellite System (GNSS) navigation. As accurate tropospheric parameters are obligatory in GNSS navigation and GNSS water vapor detection, high-precision modeling of tropospheric parameters has gained widespread attention in recent years. A new approach is introduced to develop an empirical tropospheric delay model, named the China Tropospheric (CTrop) model, which provides meteorological parameters based on the sliding window algorithm. Radiosonde data from 2017 are treated as reference values to validate the performance of the CTrop model, which is compared to the canonical Global Pressure and Temperature 3 (GPT3) model. The accuracies of the CTrop model for pressure, water vapor pressure, temperature, and weighted mean temperature are 5.51 hPa, 2.60 hPa, 3.09 K, and 3.35 K, respectively, an improvement of 6%, 9%, 10%, and 13% over the GPT3 model. Moreover, three different resolutions of the CTrop model based on the sliding window algorithm are developed to reduce the amount of gridded data provided to users and to speed up the troposphere delay computation, so that users can access model parameters at the resolution that suits their requirements. With better accuracy in estimating the tropospheric parameters than the GPT3 model, the CTrop model is recommended for improving the performance of GNSS positioning and navigation.
(This article belongs to the Special Issue BDS/GNSS for Earth Observation)
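Empirical grid models of this family typically store, for each grid point, a mean value plus annual and semi-annual harmonics, and reduce the interpolated value to the user height with a lapse rate or decay factor. The sketch below shows only that generic structure; the coefficients and the exact CTrop/GPT3 parameterizations are placeholders, not values from the paper.

```python
import numpy as np

def seasonal_parameter(doy, a0, a1, d1, a2, d2):
    """Generic seasonal model used by GPT-type grids (illustrative form only):
    mean value plus annual and semi-annual harmonics; a0..d2 would normally be
    read from the model's grid file for the point of interest."""
    t = 2.0 * np.pi * doy / 365.25
    return (a0
            + a1 * np.cos(t - d1)            # annual term
            + a2 * np.cos(2.0 * t - d2))     # semi-annual term

def pressure_at_height(p_grid, h_grid, h_user, decay=-0.000118):
    """Reduce a grid-level pressure to the user height with an exponential decay;
    the decay constant is a placeholder, not the CTrop coefficient."""
    return p_grid * np.exp(decay * (h_user - h_grid))

p0 = seasonal_parameter(doy=180, a0=1013.0, a1=5.0, d1=0.5, a2=1.0, d2=1.0)
print(pressure_at_height(p0, h_grid=50.0, h_user=350.0))
```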
Figure 1. Distribution of 89 radiosonde stations over China. Blue dots are radiosonde stations, and red triangles are representative grid points.
Figure 2. Relationships between temperature and geopotential height at four MERRA-2 grid points over China in 2016: (a) 42°N, 90°E; (b) 42°N, 120°E; (c) 30°N, 90°E; (d) 30°N, 120°E. Blue dots are the temperature of MERRA-2 at each height, and red lines are the linear fit to them.
Figure 3. Time series of tropospheric parameters for pressure (a), water vapor pressure (b), temperature (c), and Tm (d) provided by MERRA-2 data from 2012 to 2016. The dots shown are the mean values of each latitude interval for each epoch. Blue dots are at high latitude, green dots are at middle latitude, red dots are at low latitude, and orange dots are the mean value.
Figure 4. Distribution of the annual mean of the tropospheric parameters for pressure (a), water vapor pressure (b), temperature (c), and Tm (d), calculated from MERRA-2 data.
Figure 5. Distribution of the annual mean of the lapse rate for pressure (a), the decrease factor for water vapor pressure (b), the lapse rate for temperature (c), and the lapse rate for Tm (d).
Figure 6. Realization process of the sliding window algorithm over China. The red rectangles denote the size of the sliding windows, and the red dots denote the center point of each window. The new grid over China consists of red dots and blue dashed lines.
Figure 7. Distribution of the performance of pressure at each radiosonde site in 2017 by the CTrop and GPT3 models: (a) bias of GPT3; (b) bias of CTrop; (c) RMS of GPT3; (d) RMS of CTrop. A positive bias means the model outputs are larger than the reference values; a negative bias means they are smaller.
Figure 8. Distribution of the performance of water vapor pressure at each radiosonde site in 2017 by the CTrop and GPT3 models: (a) bias of GPT3; (b) bias of CTrop; (c) RMS of GPT3; (d) RMS of CTrop.
Figure 9. Distribution of the performance of Tm at each radiosonde site in 2017 by the CTrop and GPT3 models: (a) bias of GPT3; (b) bias of CTrop; (c) RMS of GPT3; (d) RMS of CTrop.
Figure 10. Distribution of the performance of temperature at each radiosonde site in 2017 by the CTrop and GPT3 models: (a) bias of GPT3; (b) bias of CTrop; (c) RMS of GPT3; (d) RMS of CTrop.
Figure 11. Distribution of the performance of water vapor pressure at different resolutions of the CTrop and GPT3 models, validated against radiosonde sites in 2017: (a) bias of GPT3-5; (b) bias of CTrop-5; (c) bias of CTrop-2; (d) RMS of GPT3-5; (e) RMS of CTrop-5; (f) RMS of CTrop-2. A positive bias means the model outputs are larger than the reference values; a negative bias means they are smaller.
Figure 12. Distribution of the performance of pressure at different resolutions of the CTrop and GPT3 models, validated against radiosonde sites in 2017: (a) bias of GPT3-5; (b) bias of CTrop-5; (c) bias of CTrop-2; (d) RMS of GPT3-5; (e) RMS of CTrop-5; (f) RMS of CTrop-2.
Figure 13. Distribution of the performance of Tm at different resolutions of the CTrop and GPT3 models, validated against radiosonde sites in 2017: (a) bias of GPT3-5; (b) bias of CTrop-5; (c) bias of CTrop-2; (d) RMS of GPT3-5; (e) RMS of CTrop-5; (f) RMS of CTrop-2.
Figure 14. Distribution of the performance of temperature at different resolutions of the CTrop and GPT3 models, validated against radiosonde sites in 2017: (a) bias of GPT3-5; (b) bias of CTrop-5; (c) bias of CTrop-2; (d) RMS of GPT3-5; (e) RMS of CTrop-5; (f) RMS of CTrop-2.
15 pages, 953 KiB  
Technical Note
Micro-Motion Parameter Extraction for Ballistic Missile with Wideband Radar Using Improved Ensemble EMD Method
by Nannan Zhu, Jun Hu, Shiyou Xu, Wenzhen Wu, Yunfan Zhang and Zengping Chen
Remote Sens. 2021, 13(17), 3545; https://doi.org/10.3390/rs13173545 - 6 Sep 2021
Cited by 12 | Viewed by 2856
Abstract
Micro-motion parameter extraction is crucial for recognizing ballistic missiles with a wideband radar. It is known that the phase-derived range (PDR) method can provide sub-wavelength accuracy; however, it is sensitive and unstable when the signal-to-noise ratio (SNR) is low. In this paper, an improved PDR method is proposed to reduce the impact of low SNRs. First, the high range resolution profile (HRRP) is divided into a series of segments so that each segment contains a single scattering point. Then, the peak values of each segment are viewed as non-stationary signals, which are further decomposed into a series of intrinsic mode functions (IMFs) with different energy using the ensemble empirical mode decomposition with complementary adaptive noise (EEMDCAN) method. In the EEMDCAN decomposition, positive and negative adaptive noise pairs are added to each IMF layer to effectively eliminate the mode-mixing phenomenon that exists in the original empirical mode decomposition (EMD) method. An energy threshold is designed to select proper IMFs to reconstruct the envelope with high estimation accuracy and low noise effects. Finally, a least-squares algorithm is used to unwrap the ambiguous phases and obtain the micro-curve, which can be further used to estimate the micro-motion parameters of the warhead. Simulation results show that the proposed method performs well at an SNR of −5 dB with sub-wavelength accuracy.
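The phase-derived range idea the method builds on can be illustrated with a small NumPy sketch: the ambiguous echo phase is unwrapped and converted to range, and the remaining half-wavelength ambiguity is resolved against the coarse envelope range. The carrier frequency and the synthetic micro-motion below are assumptions; the segmentation and the EEMDCAN envelope smoothing described in the abstract are omitted.

```python
import numpy as np

C = 3e8                       # propagation speed (m/s)
FC = 10e9                     # assumed X-band carrier frequency (Hz); illustrative only
LAMBDA = C / FC

def phase_derived_range(coarse_range, wrapped_phase):
    """Combine the coarse envelope range with the ambiguous peak phase to get a
    sub-wavelength range estimate: unwrap the phase history, convert it to range,
    then pick the integer half-wavelength offset that best matches the coarse
    range over the whole track (a least-squares fit of a constant offset)."""
    phase = np.unwrap(wrapped_phase)                     # remove 2*pi jumps
    fine = -phase * LAMBDA / (4.0 * np.pi)               # phase -> range (m)
    k = np.round(np.mean(coarse_range - fine) / (LAMBDA / 2.0))
    return fine + k * LAMBDA / 2.0

t = np.linspace(0, 1, 200)
true_r = 0.002 * np.sin(2 * np.pi * 2 * t)               # mm-level micro-motion
wrapped = np.angle(np.exp(-1j * 4 * np.pi * true_r / LAMBDA))
coarse = true_r + 0.01 * np.random.randn(200)            # cm-level envelope range
est = phase_derived_range(coarse, wrapped)
print(np.std(est - true_r))                              # sub-wavelength residual
```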
Figure 1. Target model: (a) cone target model and (b) target micro-motion model.
Figure 2. Flowchart of the proposed method.
Figure 3. HRRP sequences.
Figure 4. Envelope extraction: (a) the cone-top scattering point and (b) the cone-bottom scattering point.
Figure 5. Decomposition process: (a) the cone-top scattering point decomposition using the EEMDCAN method, (b) the cone-bottom scattering point decomposition using the EEMDCAN method, (c) the cone-top scattering point decomposition using the original EMD method, and (d) the cone-bottom scattering point decomposition using the original EMD method.
Figure 6. Phase hopping phenomenon: (a) phase distributions of the cone-top scattering point; (b) the difference between two adjacent pulses of panel (a); (c) phase after compensation of panel (a); (d) phase distributions of the cone-bottom scattering point; (e) the difference between two adjacent pulses of panel (d); (f) phase after compensation of panel (d).
Figure 7. Comparison of the estimated value with the theoretical value for phase ranging: (a) the cone-top scattering point using the EEMDCAN method; (b) the cone-bottom scattering point using the EEMDCAN method; (c) the cone-top scattering point using the original EMD method; (d) the cone-bottom scattering point using the original EMD method.
Figure 8. RMSE comparison of the proposed method with the MKF method under different SNR conditions: (a) the cone-top scattering point and (b) the cone-bottom scattering point.
22 pages, 1483 KiB  
Article
A Spatial Variant Motion Compensation Algorithm for High-Monofrequency Motion Error in Mini-UAV-Based BiSAR Systems
by Zhanze Wang, Feifeng Liu, Simin He and Zhixiang Xu
Remote Sens. 2021, 13(17), 3544; https://doi.org/10.3390/rs13173544 - 6 Sep 2021
Cited by 3 | Viewed by 2187
Abstract
High-frequency motion errors can drastically decrease image quality in mini-unmanned-aerial-vehicle (UAV)-based bistatic synthetic aperture radar (BiSAR), where the spatial variance is much more complex than in monostatic SAR. High-monofrequency motion error is a special BiSAR case in which the different motion errors from the transmitter and the receiver combine into a monofrequency motion error. Furthermore, neither classic BiSAR nor monoSAR processors can compensate for the coupled high-monofrequency motion errors. In this paper, a spatial variant motion compensation algorithm for high-monofrequency motion errors is proposed. First, the bistatic rotation error model that causes high-monofrequency motion error is re-established to account for the bistatic spatial variance of image formation. Second, the corresponding parameters of the error model's nonlinear gradient are obtained by joint estimation over subimages. Third, the bistatic spatial variance is adaptively compensated for, based on the nonlinear gradient of the error, through contour projection. Simulation and experimental results suggest that the proposed algorithm can effectively compensate for high-monofrequency motion error under mini-UAV-based BiSAR system conditions.
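For intuition, the sketch below removes a single-tone (monofrequency) azimuth phase error of the form a·sin(2πft + φ0) from a toy azimuth signal, assuming the error parameters are known; in the paper these parameters are spatially variant and are estimated per subimage and projected along the nonlinear gradient, which is not reproduced here.

```python
import numpy as np

def compensate_monofrequency_error(signal, t, f_err, a, phi0):
    """Remove a single-tone azimuth phase error a*sin(2*pi*f_err*t + phi0).
    In practice a and phi0 vary across the scene and must be estimated first."""
    phase_err = a * np.sin(2.0 * np.pi * f_err * t + phi0)
    return signal * np.exp(-1j * phase_err)

t = np.linspace(0, 1, 1024)
clean = np.exp(1j * np.pi * 50 * t**2)                          # toy azimuth chirp
corrupted = clean * np.exp(1j * 0.8 * np.sin(2 * np.pi * 30 * t + 0.3))
restored = compensate_monofrequency_error(corrupted, t, f_err=30, a=0.8, phi0=0.3)
print(np.max(np.abs(restored - clean)))                         # ~0 when parameters are exact
```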
Figure 1. Schematic diagram of static Euler angles.
Figure 2. The configuration of the UAV-based BiSAR systems.
Figure 3. Spatial variance of high frequency motion error of monostatic and bistatic SAR systems. (a) is the monostatic system. (b) is the bistatic system.
Figure 4. The flowchart of the proposed spatial variant high-monofrequency MOCO algorithm.
Figure 5. The contour and the nonlinear gradient of the amplitude a and initial phase φ0. (a) is the result of a. (b) is the result of φ0.
Figure 6. Nonlinear spatial variance gradient projection model in a UAV-based BiSAR system.
Figure 7. The targets of the simulation experiment and the bistatic configuration.
Figure 8. The bistatic spatial variance of the amplitude and initial phase under simulation. (a) is the result of amplitude. (b) is the result of the initial phase. The contour lines are shown as solid lines with values attached and the gradient lines are shown as blue dotted lines.
Figure 9. The difference between the fitted value and the true value of the parameter on the gradient. (a) is the result of amplitude. (b) is the result of the initial phase.
Figure 10. Compensation results. (a–c) show the simulation of target 1, the image result before and after MOCO, and the cross-range result after MOCO. (d–f) show the simulation results of target 25.
Figure 11. The range direction in the simulation configuration.
Figure 12. The compensation results of the traditional algorithm [11,23]. (a) is the result of target 1. (b) is the result of target 25.
Figure 13. Image results of different MOCO algorithms. (a) is the image result without MOCO. (b) is the image result with the traditional MOCO algorithm [11,23]. (c) is the result with the proposed MOCO algorithm.
Figure 14. The range direction in the experiment configuration.
Figure 15. The bistatic spatial variance of the amplitude and initial phase under the BiSAR experiment. (a) is the result of amplitude. (b) is the result of the initial phase. The contour lines are shown as solid lines with values attached and the gradient lines are shown as blue dotted lines.
Figure 16. Enlarged image results of different MOCO algorithms. (a) is the image result of two transponders. (b) is the image result of the wharf. (c) is the image result of a road. From left to right are the image results without MOCO, with the traditional MOCO algorithm [11,23], and with the proposed MOCO algorithm.
11 pages, 3452 KiB  
Technical Note
Motion Phase Compensation Methods for Azimuth Ambiguity Suppression in HRWS SAR
by Junying Yang, Xiaolan Qiu, Mingyang Shang, Lihua Zhong and Chibiao Ding
Remote Sens. 2021, 13(17), 3543; https://doi.org/10.3390/rs13173543 - 6 Sep 2021
Cited by 1 | Viewed by 1884
Abstract
The azimuth multi-channel synthetic aperture radar (SAR) is widely used in marine observation because of its excellent high-resolution and wide-swath (HRWS) imaging capability. Unlike for static targets, the azimuth ambiguity of ships on the open sea resulting from their radial motion seriously degrades SAR image quality and ship detection probability. Therefore, two methods of azimuth ambiguity suppression for moving ships based on motion phase compensation are proposed for different practical application conditions. Simulation and real measured data experiments verify the image quality improvement achieved by the proposed methods.
(This article belongs to the Section Remote Sensing Communications)
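As a minimal illustration of motion phase compensation for a moving ship, the sketch below removes the extra azimuth phase produced by a constant radial velocity, assuming the velocity is known; the carrier frequency is an arbitrary placeholder, and the multi-channel HRWS reconstruction context of the paper is not modeled.

```python
import numpy as np

C = 3e8
FC = 5.4e9                     # assumed C-band carrier frequency; illustrative only
LAMBDA = C / FC

def compensate_radial_motion(azimuth_signal, t, v_r):
    """Remove the extra azimuth phase caused by a ship's radial velocity v_r.
    A constant radial velocity adds roughly 4*pi*v_r*t/lambda of phase, which
    defocuses the azimuth spectrum and raises the ambiguities; multiplying by
    the conjugate phase undoes it."""
    phase = 4.0 * np.pi * v_r * t / LAMBDA
    return azimuth_signal * np.exp(-1j * phase)

t = np.linspace(-0.5, 0.5, 2048)
clean = np.exp(1j * np.pi * 200 * t**2)                       # toy azimuth chirp
moving = clean * np.exp(1j * 4 * np.pi * 2.0 * t / LAMBDA)    # v_r = 2 m/s
print(np.max(np.abs(compensate_radial_motion(moving, t, v_r=2.0) - clean)))
```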
Figure 1. Imaging geometry of the HRWS SAR system.
Figure 2. The relationship between the azimuth time of each channel and that of the reconstructed equivalent single channel.
Figure 3. Simulation imaging results. There is no phase imbalance between channels: (a–c); there is residual phase error between channels: (d–f).
Figure 4. The omitted phase terms in (16) of each channel for the simulated azimuth four-channel SAR system.
Figure 5. Imaging result of GF-3 UFS mode: (a) before motion phase compensation; (b) after motion phase compensation by Method 1; (c) after motion phase compensation by Method 2.
27 pages, 5070 KiB  
Project Report
Air Quality over China
by Gerrit de Leeuw, Ronald van der A, Jianhui Bai, Yong Xue, Costas Varotsos, Zhengqiang Li, Cheng Fan, Xingfeng Chen, Ioannis Christodoulakis, Jieying Ding, Xuewei Hou, Georgios Kouremadas, Ding Li, Jing Wang, Marina Zara, Kainan Zhang and Ying Zhang
Remote Sens. 2021, 13(17), 3542; https://doi.org/10.3390/rs13173542 - 6 Sep 2021
Cited by 15 | Viewed by 4199
Abstract
The strong economic growth in China in recent decades, together with meteorological factors, has resulted in serious air pollution problems, in particular over large industrialized areas with high population density. To reduce the concentrations of pollutants, air pollution control policies have been successfully implemented, resulting in a gradual decrease of air pollution in China during the last decade, as evidenced by both satellite and ground-based measurements. The aims of the Dragon 4 project "Air quality over China" were to determine trends in the concentrations of aerosols and trace gases, to quantify emissions using a top-down approach, and to gain a better understanding of the sources, transport and underlying processes contributing to air pollution. This was achieved through (a) satellite observations of trace gases and aerosols to study the temporal and spatial variability of air pollutants; (b) derivation of trace gas emissions from satellite observations to study sources of air pollution and improve air quality modeling; and (c) studies of the effects of haze on air quality. In these studies, the satellite observations are complemented with ground-based observations and modeling.
(This article belongs to the Special Issue ESA - NRSCC Cooperation Dragon 4 Final Results)
Figure 1. Views of the ACAS exposure site equipped with a metal rack on the roof of a building located in the Athens center, facing south. The rack consists of an inclined plane, for displaying samples of material under unsheltered conditions, and an aluminum box with an open bottom, for the exposure of materials' specimens under sheltered conditions. In addition, there is a mast with an arm that holds two rain shields, in the form of discs, under which passive particle collectors and diffusive samplers for gaseous pollutants are exposed [80,81].
Figure 2. Annual distribution of space-based NOx emissions according to the DECSO v5 algorithm applied to OMI observations. This image shows the average situation in the year 2009.
Figure 3. (a) Trends in SO2 emissions (normalized to the reference year 2005). The grey lines (within the grey area) represent the individual trends of the 10 provinces with the highest SO2 emissions. The red line shows the average of these 10 provinces. (b) Trends in NO2 emissions (normalized to the reference year 2007). The red line is based on the average for East China.
Figure 4. Map of the annual mean AOD (at 550 nm) over China in 2018, produced using the MAIAC MODIS/Terra and MODIS/Aqua merged AOD product MCD19A2. The data were plotted with a spatial resolution of 1 × 1 km².
Figure 5. Time series of annual mean AOD (at 550 nm) over Shanghai and Zhengzhou, derived from the MAIAC MODIS/Terra and MODIS/Aqua merged AOD product MCD19A2, for 2011–2020. Note that AOD is plotted on a logarithmic scale. The annual mean values were calculated from de-seasonalized monthly values.
Figure 6. AOD retrieved over the study area using the ITS algorithm with data from the AHI sensor on the geostationary satellite Himawari-8, on 22 March 2021, from 00:00 until 05:00 UTC (08:00 am to 01:00 pm local time). The AOD values range from 0 to 1.8 as indicated in the color bars to the right. The data were plotted with a spatial resolution of 5 km. See text for further explanation.
Figure 7. (a) Limestone recession estimates using appropriate DRF along with ground data (blue columns) and satellite data (red columns) for 10 European sites during different exposure periods. The relative approximation error (in %) between these two values is shown with yellow columns. (b) Same as (a) but for zinc. (c) The same as (a) for carbon steel. (d) The same as (a) for modern glass.
18 pages, 45464 KiB  
Article
Efficient and Flexible Aggregation and Distribution of MODIS Atmospheric Products Based on Climate Analytics as a Service Framework
by Jianyu Zheng, Xin Huang, Supriya Sangondimath, Jianwu Wang and Zhibo Zhang
Remote Sens. 2021, 13(17), 3541; https://doi.org/10.3390/rs13173541 - 6 Sep 2021
Cited by 4 | Viewed by 2831
Abstract
MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument onboard NASA's Terra (launched in 1999) and Aqua (launched in 2002) satellite missions, part of the larger Earth Observation System (EOS). By measuring the reflection and emission of the Earth–atmosphere system in 36 spectral bands from the visible to the thermal infrared, with near-daily global coverage and high spatial resolution (250 m to 1 km at nadir), MODIS plays a vital role in developing validated, global, interactive Earth system models. MODIS products are processed into three levels, i.e., Level-1 (L1), Level-2 (L2) and Level-3 (L3). To move beyond the current static, "one-size-fits-all" way of providing MODIS products, in this paper we propose a service-oriented, flexible and efficient MODIS aggregation framework. Using this framework, users obtain aggregated MODIS L3 data tailored to their unique requirements, and the aggregation can run in parallel to achieve a speedup. The experiments show that our aggregation results are almost identical to the current MODIS L3 products and that our parallel execution with 8 computing nodes runs 88.63 times faster than a serial execution on a single node.
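A minimal sketch of the data-partitioning idea behind the parallel aggregation (cf. Figure 1): each worker aggregates its granules into per-cell sums and counts, and the partial results are then reduced into a mean grid. The toy granules, the 1-degree grid and the use of multiprocessing are assumptions for illustration; the actual framework reads MODIS L2 files and runs inside the Stratus service, which is omitted here.

```python
import numpy as np
from multiprocessing import Pool

GRID = (180, 360)          # 1-degree global grid (rows: latitude, cols: longitude)

def aggregate_granule(granule):
    """Aggregate one granule's pixels onto the grid; return per-cell sums and counts
    so that partial results from different workers can be merged afterwards."""
    lats, lons, values = granule
    rows = np.clip((90.0 - lats).astype(int), 0, GRID[0] - 1)
    cols = np.clip((lons + 180.0).astype(int), 0, GRID[1] - 1)
    sums, counts = np.zeros(GRID), np.zeros(GRID)
    np.add.at(sums, (rows, cols), values)
    np.add.at(counts, (rows, cols), 1)
    return sums, counts

def parallel_mean(granules, n_workers=4):
    """Data-partitioning parallelism: granules are aggregated independently, then
    the partial sums/counts are reduced into a single mean grid."""
    with Pool(n_workers) as pool:
        partials = pool.map(aggregate_granule, granules)
    total_sum = sum(p[0] for p in partials)
    total_cnt = sum(p[1] for p in partials)
    return np.where(total_cnt > 0, total_sum / np.maximum(total_cnt, 1), np.nan)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    granules = [(rng.uniform(-90, 90, 1000), rng.uniform(-180, 180, 1000),
                 rng.uniform(200, 300, 1000)) for _ in range(8)]
    print(np.nanmean(parallel_mean(granules)))
```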
Figure 1. Illustration of data partitioning-based parallel aggregation with one-month data.
Figure 2. Integration of the Stratus service framework with parallel MODIS aggregation.
Figure 3. The comparison of the mean value and pixel counts of cloud top temperature between the python-based flexible aggregation algorithm and the MYD08_D3 product on 1 January 2008.
Figure 4. The comparison of the cloud fraction between the python-based algorithm and the MYD08_D3 product on 1 January 2008.
Figure 5. The flow chart of the aggregation of the desired cloud top temperature using the flexible aggregation framework (left) and the current L3 product (right). The dashed box indicates the internal process in the python-based flexible aggregation method.
Figure 6. Execution time results (in seconds) for scalability evaluation by (a) increasing the number of processes within a node and (b) increasing the number of nodes with 32 processes per node.
24 pages, 7153 KiB  
Article
Multidimensional Assessment of Lake Water Ecosystem Services Using Remote Sensing
by Donghui Shi, Yishao Shi and Qiusheng Wu
Remote Sens. 2021, 13(17), 3540; https://doi.org/10.3390/rs13173540 - 6 Sep 2021
Cited by 4 | Viewed by 4766
Abstract
Freshwater is becoming scarce worldwide with the rapidly growing population, developing industries, burgeoning agriculture, and increasing consumption. Assessment of ecosystem services has been regarded as a promising way to reconcile the increasing demand with depleting natural resources. In this paper, we propose a multidimensional assessment framework for evaluating water provisioning ecosystem services by integrating multi-source remote sensing products. We applied the framework to assess lake water ecosystem services in the state of Minnesota, US. We found that: (1) the water provisioning ecosystem services degraded during 1998–2018 from all three assessment perspectives; (2) the output, efficiency, and trend indices have stable distributions and varied spatial clustering patterns from 1998 to 2018; (3) high-level efficiency depends on high-level output, and low-level output relates to low-level efficiency; (4) western Minnesota, including the Northwest, West Central, and Southwest zones, degraded more severely in water provisioning services than other zones; (5) human activities impact water provisioning services in Minnesota more than climate change does. These findings can benefit policymakers by identifying priorities for better protection, conservation, and restoration of lake ecosystems. Our multidimensional assessment framework can be adapted to evaluate ecosystem services in other regions.
(This article belongs to the Special Issue Remote Sensing of Ecosystems)
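Lake water storage of the kind illustrated in Figure 7 can be approximated from a bathymetric DEM and a water-surface level by summing the water column over all inundated cells. The sketch below shows that calculation on a toy DEM; the cell size and elevations are made-up values for illustration.

```python
import numpy as np

def lake_water_storage(bed_elevation, water_level, cell_area_m2):
    """Estimate lake water storage (m^3) from a bathymetric DEM and a water level:
    sum the water column over every inundated cell.

    bed_elevation : 2-D array of lake-bed elevations (m), NaN outside the lake
    water_level   : scalar water-surface elevation (m)
    """
    depth = water_level - bed_elevation
    depth = np.where(np.isnan(depth) | (depth < 0), 0.0, depth)   # dry or outside cells
    return float(np.sum(depth) * cell_area_m2)

# toy 5x5 bathymetric DEM (m) with a 10 m cell size
bed = np.array([[np.nan, 99, 98, 99, np.nan],
                [99,     97, 95, 97, 99],
                [98,     95, 93, 95, 98],
                [99,     97, 95, 97, 99],
                [np.nan, 99, 98, 99, np.nan]], dtype=float)
print(lake_water_storage(bed, water_level=100.0, cell_area_m2=10 * 10))
```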
Figure 1. The flowchart of this study.
Figure 2. Our study area with lake bathymetric data in the state of Minnesota.
Figure 3. An example of the lake bathymetric DEM of Turtle Lake in the state of Minnesota.
Figure 4. Multi-temporal water areas of Lake Louisa derived from the JRC Global Surface Water (GSW) dataset.
Figure 5. The trend of population and gross domestic product (GDP) growth in the state of Minnesota.
Figure 6. Lake ecosystem services defined by the Millennium Ecosystem Assessment, including provisioning, regulating, supporting, and cultural services.
Figure 7. An illustration of lake water storage calculation.
Figure 8. The multidimensional framework for evaluating ecosystem services.
Figure 9. The temporal trend of total lake water storage and lake surface area in Minnesota (1998–2018).
Figure 10. P-score distribution and spatial clustering.
Figure 11. Q-score distribution and spatial clustering.
Figure 12. D-score distribution and spatial clustering.
Figure 13. Multidimensional assessment results projected on the P plane at the county level.
Figure 14. Multidimensional assessment results projected on the Q plane at the county level.
Figure 15. Multidimensional assessment results projected on the D plane at the county level.
Figure 16. The spatial distribution of degradation zones.
30 pages, 96630 KiB  
Article
Boosting Few-Shot Hyperspectral Image Classification Using Pseudo-Label Learning
by Chen Ding, Yu Li, Yue Wen, Mengmeng Zheng, Lei Zhang, Wei Wei and Yanning Zhang
Remote Sens. 2021, 13(17), 3539; https://doi.org/10.3390/rs13173539 - 6 Sep 2021
Cited by 13 | Viewed by 3237
Abstract
Deep neural networks have underpinned much of the recent progress in hyperspectral image (HSI) classification owing to their powerful ability to learn discriminative features. However, training a deep neural network often requires a large number of labeled samples to mitigate over-fitting, and such labels are not always available in practical applications. To adapt deep neural network-based HSI classification to cases in which only a very limited number of labeled samples (i.e., few or even only one labeled sample) are provided, we propose a novel few-shot deep learning framework for HSI classification. To mitigate over-fitting, the framework borrows supervision from an auxiliary set of unlabeled samples with soft pseudo-labels to assist the training of the feature extractor on the few labeled samples. By considering each labeled sample as a reference agent, the soft pseudo-label is assigned by computing the distances between the unlabeled sample and all agents. To demonstrate the effectiveness of the proposed method, we evaluate it on three benchmark HSI classification datasets. The results indicate that our method achieves better performance than existing competitors in few-shot and one-shot settings.
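The soft pseudo-label assignment described in the abstract can be sketched as follows: distances from each unlabeled feature to all labeled "agents" are turned into class probabilities, with nearer agents contributing more mass. The softmax-over-negative-distance mapping and the feature dimensions below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def soft_pseudo_labels(unlabeled_feats, agent_feats, agent_labels, n_classes, tau=1.0):
    """Assign soft pseudo-labels to unlabeled samples from their distances to the
    labeled agents: nearer agents contribute more probability mass to their class."""
    # pairwise Euclidean distances: (n_unlabeled, n_agents)
    d = np.linalg.norm(unlabeled_feats[:, None, :] - agent_feats[None, :, :], axis=-1)
    w = np.exp(-d / tau)                                 # closer agent -> larger weight
    probs = np.zeros((len(unlabeled_feats), n_classes))
    for c in range(n_classes):
        probs[:, c] = w[:, agent_labels == c].sum(axis=1)
    return probs / probs.sum(axis=1, keepdims=True)      # normalise to soft labels

rng = np.random.default_rng(1)
agents = rng.normal(size=(6, 8))                         # 6 labeled samples, 8-D features
labels = np.array([0, 0, 1, 1, 2, 2])
unlabeled = rng.normal(size=(4, 8))
print(soft_pseudo_labels(unlabeled, agents, labels, n_classes=3).round(3))
```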
Figure 1. Pipeline of the proposed two-branch few-shot deep learning framework for HSI classification.
Figure 2. Illustration of the adopted 3D-CNN architecture for the feature extractor sub-network, together with parameter settings (such as dimension of feature (f), convolution kernels (k) and stride (s)) for all convolutional layers and feature maps.
Figure 3. Illustrations of the adopted SSRN architecture for the feature extractor sub-network, together with parameter settings (such as dimension of feature (f), convolution kernels (k) and stride (s)) for all layers and feature maps. (a) Spectral residual learning block, (b) spatial residual learning block. The two residual learning blocks contain shortcut connections between the two convolutional layers.
Figure 4. The false color image and ground truth map of the PaviaU dataset. The original image size is 610 × 340 pixels.
Figure 5. The false color image and ground truth map of the Salinas dataset. The original image size is 512 × 217 pixels.
Figure 6. The false color image and ground truth map of the Indian Pines dataset. The original image size is 145 × 145 pixels.
Figure 7. The classification maps of all methods with three samples on the PaviaU dataset. (a) SVM; (b) SS-LapSVM; (c) 3D-DENSEnet; (d) 3D-Gan; (e) SS-CNN; (f) 3D-CNN; (g) 3D-CNN (k-means); (h) 3D-CNN (ours); (i) 3D-CNN (hard label); (j) SSRN; (k) SSRN (k-means); (l) SSRN (ours); (m) SSRN (hard label).
Figure 8. The classification maps of all methods with five samples on the PaviaU dataset. Panels (a)–(m) as in Figure 7.
Figure 9. The classification maps of all methods with three samples on the Salinas dataset. Panels (a)–(m) as in Figure 7.
Figure 10. The classification maps of all methods with five samples on the Salinas dataset. Panels (a)–(m) as in Figure 7.
Figure 11. The classification maps of all methods with three samples on the Indian Pines dataset. Panels (a)–(m) as in Figure 7.
Full article ">Figure 12
<p>The classification maps of all methods with five samples on Indian Pines dataset. (<b>a</b>) SVM; (<b>b</b>) SS-LapSVM; (<b>c</b>) 3D-DENSEnet; (<b>d</b>) 3D-Gan; (<b>e</b>) SS-CNN; (<b>f</b>) 3D-CNN; (<b>g</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math> (<b>h</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>i</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>h</mi> <mi>a</mi> <mi>r</mi> <mi>d</mi> <mi>l</mi> <mi>a</mi> <mi>b</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>j</b>) SSRN; (<b>k</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>l</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>m</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>h</mi> <mi>a</mi> <mi>r</mi> <mi>d</mi> <mi>l</mi> <mi>a</mi> <mi>b</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 13
<p>The classification maps of all methods with one sample on PaviaU dataset. (<b>a</b>) SVM; (<b>b</b>) SS-LapSVM; (<b>c</b>) 3D-DENSEnet; (<b>d</b>) 3D-Gan; (<b>e</b>) SS-CNN; (<b>f</b>) 3D-CNN; (<b>g</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math> (<b>h</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>i</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>h</mi> <mi>a</mi> <mi>r</mi> <mi>d</mi> <mi>l</mi> <mi>a</mi> <mi>b</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>j</b>) SSRN; (<b>k</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>l</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>m</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>h</mi> <mi>a</mi> <mi>r</mi> <mi>d</mi> <mi>l</mi> <mi>a</mi> <mi>b</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 14
<p>The classification maps of all methods with one sample on Salinas dataset. (<b>a</b>) SVM; (<b>b</b>) SS-LapSVM; (<b>c</b>) 3D-DENSEnet; (<b>d</b>) 3D-Gan; (<b>e</b>) SS-CNN; (<b>f</b>) 3D-CNN; (<b>g</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>-means method, soft label with the proposed method. The experiments were conducted with five training samples per class on the Indian Pines and PaviaU dataset.</p>
Full article ">Figure 15
<p>The classification maps of all methods with one sample on Indian Pines dataset. (<b>a</b>) SVM; (<b>b</b>) SS-LapSVM; (<b>c</b>) 3D-DENSEnet; (<b>d</b>) 3D-Gan; (<b>e</b>) SS-CNN; (<b>f</b>) 3D-CNN; (<b>g</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math> (<b>h</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>i</b>) <math display="inline"><semantics> <mrow> <mn>3</mn> <mi mathvariant="normal">D</mi> <mtext>-</mtext> <mi>CN</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>h</mi> <mi>a</mi> <mi>r</mi> <mi>d</mi> <mi>l</mi> <mi>a</mi> <mi>b</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>j</b>) SSRN; (<b>k</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>k</mi> <mtext>-</mtext> <mi>m</mi> <mi>e</mi> <mi>a</mi> <mi>n</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>l</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math>; (<b>m</b>) <math display="inline"><semantics> <mrow> <mi>SSR</mi> <msub> <mi mathvariant="normal">N</mi> <mrow> <mi>h</mi> <mi>a</mi> <mi>r</mi> <mi>d</mi> <mi>l</mi> <mi>a</mi> <mi>b</mi> <mi>e</mi> <mi>l</mi> </mrow> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 16
<p>Illustration of the sensitivity of <math display="inline"><semantics> <mi>λ</mi> </semantics></math>. It reflects the classification accuracy of OA, AA and <math display="inline"><semantics> <mi>k</mi> </semantics></math> with different values of <math display="inline"><semantics> <mi>λ</mi> </semantics></math>. (<b>a</b>) OA; (<b>b</b>) AA; (<b>c</b>) <math display="inline"><semantics> <mi>k</mi> </semantics></math>.</p>
Full article ">
22 pages, 26327 KiB  
Article
Deriving Aerodynamic Roughness Length at Ultra-High Resolution in Agricultural Areas Using UAV-Borne LiDAR
by Katerina Trepekli and Thomas Friborg
Remote Sens. 2021, 13(17), 3538; https://doi.org/10.3390/rs13173538 - 6 Sep 2021
Cited by 7 | Viewed by 3165
Abstract
The aerodynamic roughness length (Z0) and surface geometry at ultra-high resolution in precision agriculture and agroforestry have substantial potential to improve aerodynamic process modeling for sustainable farming practices and recreational activities. We explored the potential of unmanned aerial vehicle (UAV)-borne LiDAR [...] Read more.
The aerodynamic roughness length (Z0) and surface geometry at ultra-high resolution in precision agriculture and agroforestry have substantial potential to improve aerodynamic process modeling for sustainable farming practices and recreational activities. We explored the potential of unmanned aerial vehicle (UAV)-borne LiDAR systems to provide Z0 maps with the level of spatiotemporal resolution demanded by precision agriculture by generating the 3D structure of vegetated surfaces and linking the derived geometry with morphometric roughness models. We evaluated the performance of three filtering algorithms to segment the LiDAR-derived point clouds into vegetation and ground points in order to obtain the vegetation height metrics and density at a 0.10 m resolution. The effectiveness of three morphometric models to determine the Z0 maps of Danish cropland and the surrounding evergreen trees was assessed by comparing the results with corresponding Z0 values from a nearby eddy covariance tower (Z0_EC). A morphological filter performed satisfactorily over a homogeneous surface, whereas the progressive triangulated irregular network densification algorithm produced fewer errors with a heterogeneous surface. Z0 from UAV-LiDAR-driven models converged with Z0_EC at the source area scale. The Raupach roughness model appropriately simulated temporal variations in Z0 conditioned by vertical and horizontal vegetation density. The Z0 calculated as a fraction of vegetation height or as a function of vegetation height variability resulted in greater differences with the Z0_EC. Deriving Z0 in this manner could be highly useful in the context of surface energy balance and wind profile estimations for micrometeorological, hydrologic, and ecologic applications in similar sites. Full article
(This article belongs to the Special Issue Remote Sensing for Agrometeorology)
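As an illustration of the simplest morphometric models compared in this article, the sketch below derives Z0 from a canopy height model patch using the rule-of-thumb estimate (a fixed fraction of vegetation height) and a height-variability variant. It is a minimal sketch in Python; the coefficients, function names, and synthetic patch are assumptions for illustration, not the paper's exact formulation (which also evaluates the Raupach model).

```python
import numpy as np

def z0_rule_of_thumb(canopy_height, k=0.1):
    """Rule-of-thumb roughness length as a fixed fraction of vegetation height.
    The fraction k ~ 0.1 is a commonly quoted value; the paper's coefficient may differ."""
    return k * np.asarray(canopy_height)

def z0_height_variability(canopy_height):
    """Roughness length from the variability (standard deviation) of vegetation
    height within a grid cell (illustrative scaling only)."""
    return 0.5 * np.nanstd(np.asarray(canopy_height))

# Example: a 0.10 m resolution canopy height model patch (heights in metres)
chm_patch = np.random.uniform(0.2, 0.6, size=(100, 100))
print(z0_rule_of_thumb(chm_patch.mean()), z0_height_variability(chm_patch))
```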
Show Figures
Graphical abstract
Figure 1">
Figure 1: The agricultural area (56.037644° N, 9.159383° E) surveyed by the UAV-LiDAR system (30.68 ha) and the two subscenes with different roughness element densities: Plot 1 (yellow, left rectangle), with more homogeneous heights, corresponding to the 190° to 347° wind regime, and Plot 2 (red, right rectangle), with more heterogeneous heights, corresponding to the 90° to 190° wind regime. The orange label marks the eddy covariance tower.
Figure 2: (a) Part of the UAV-surveyed area covered by potato plants and (b) the LiDAR instrumentation mounted on a Matrice 600 Pro UAV.
Figure 3: Example of rasterized point clouds after interpolation, representing part of the agricultural field.
Figure 4: Filter sensitivity in terms of total errors to (a) window size and threshold for the morphological filter (MF), (b) iterative distance and angle for the progressive triangulated irregular network densification (PTD), and (c) grid step and spikes for the triangulated irregular network densification (TIN); all filters were applied to the Plot 1 subscene.
Figure 5: (a) Canopy height model (CHM) of the agricultural field with the locations of the experimental plots (yellow points), and profile views of terrain-normalized point clouds showing the height of (b) vegetation in June, July, and August, (c) a building, and (d) trees.
Figure 6: Anemometric roughness length Z0_EC for different turbulent source areas (25–27 June, 13–15 July, 12–13 August); the central mark is the median, the box edges are the 25th and 75th percentiles, and the whiskers are the most extreme data points.
Figure 7: Canopy height model of the agricultural site from the June UAV-LiDAR survey; the ellipsoids indicate the probable surface areas contributing to the turbulent flux measurements under the prevailing meteorological conditions.
Figure 8: Scatterplots of the anemometric roughness length (Z0_EC) against the morphometric roughness length (Z0) from the Menenti and Ritchie (MR), Raupach (RAP), and rule of thumb (RT) methods.
Figure 9: (a) Frontal area index (fai) and (b) planar area index (pai) as wind roses for winds from the west (Plot 1) and east (Plot 2) in June.
Figure 10: Contours of the anemometric roughness length for June, July, and August with the frontal and planar area indices as predictor variables, indicating that Z0 depends on fai and pai and is maximized for values close to 0.08 and 0.75, respectively.
Figure 11: Maps of roughness length for a relatively homogeneous agricultural site estimated by the (a) RT, (b) RAP, and (c) MR methods; the CHM was acquired by the UAV-LiDAR survey of 26 June at 12:00 local time.
Figure 12: Maps of Z0_RAP for a subset of the CHM covered by field crops and trees for view angles corresponding to wind directions of (a) 205° and (b) 105° from north; the CHM was acquired by the UAV-LiDAR survey of 12 August at 12:30 local time.
16 pages, 1012 KiB  
Technical Note
Data-Driven Interpolation of Sea Surface Suspended Concentrations Derived from Ocean Colour Remote Sensing Data
by Jean-Marie Vient, Frederic Jourdin, Ronan Fablet, Baptiste Mengual, Ludivine Lafosse and Christophe Delacourt
Remote Sens. 2021, 13(17), 3537; https://doi.org/10.3390/rs13173537 - 6 Sep 2021
Cited by 5 | Viewed by 3177
Abstract
Due to complex natural and anthropogenic interconnected forcings, the dynamics of suspended sediments within the ocean water column remains difficult to understand and monitor. Numerical models still lack capabilities to account for the variabilities depicted by in situ and satellite-derived datasets. Besides, the [...] Read more.
Due to complex natural and anthropogenic interconnected forcings, the dynamics of suspended sediments within the ocean water column remains difficult to understand and monitor. Numerical models still lack capabilities to account for the variabilities depicted by in situ and satellite-derived datasets. Besides, the irregular space-time sampling associated with satellite sensors makes the development of efficient interpolation methods crucial. Optimal Interpolation (OI) remains the state-of-the-art approach for most operational products. With the steadily growing volume of information available from in situ and satellite measurements, as well as from simulation models, data-driven schemes have emerged as possibly relevant alternatives with increased capabilities to recover finer-scale processes. In this study, we investigate and benchmark three state-of-the-art data-driven schemes, namely an EOF-based technique, an analog data assimilation scheme, and a neural network approach, against an OI scheme. We rely on an Observing System Simulation Experiment based on high-resolution numerical simulations and simulated satellite observations using real satellite sampling patterns. The neural network approach, which relies on a variational data assimilation formulation of the interpolation problem, clearly outperforms both the OI and the other data-driven schemes, both in terms of reconstruction performance and in its greater ability to recover high-frequency events. We further discuss how these results could transfer to real data, as well as to other problems beyond interpolation issues, especially short-term forecasting problems from partial satellite observations. Full article
(This article belongs to the Section Ocean Remote Sensing)
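For readers unfamiliar with the baseline the data-driven schemes are benchmarked against, the following is a minimal one-dimensional Optimal Interpolation sketch: the analysis at each grid point is a weighted combination of observation anomalies, with weights derived from a background covariance model. The Gaussian covariance, length scale, and error levels below are illustrative assumptions, not the operational settings of the paper.

```python
import numpy as np

def optimal_interpolation(x_obs, y_obs, x_grid, sigma_b=1.0, L=0.2, sigma_o=0.1):
    """Minimal 1-D Optimal Interpolation with a Gaussian background covariance.
    x_obs, y_obs: observation locations and anomaly values; x_grid: target locations.
    L is the covariance length scale; sigma_b, sigma_o are background and
    observation error standard deviations (all values illustrative)."""
    def cov(a, b):
        return sigma_b**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)

    B_oo = cov(x_obs, x_obs) + sigma_o**2 * np.eye(len(x_obs))  # HBH^T + R
    B_go = cov(x_grid, x_obs)                                   # BH^T
    weights = np.linalg.solve(B_oo, y_obs)                      # (HBH^T + R)^-1 d
    return B_go @ weights                                       # analysis anomaly

x_obs = np.array([0.1, 0.4, 0.7])
y_obs = np.array([0.5, -0.2, 0.3])
x_grid = np.linspace(0, 1, 11)
print(optimal_interpolation(x_obs, y_obs, x_grid))
```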
Show Figures
Figure 1: (a) Bathymetry of the Bay of Biscay; black lines are the 40, 70, 100, and 130 m isobaths, and the thick white line (the 180 m isobath) approximately delimits the shelf edge. (b) Mean spatial distribution of log10 SSSC (g/L) from the MARS-MUSTANG hydrosedimentary model.
Figure 2: Comparison of daily satellite SSSC [29] (SSSC_sat) with model results (SSSC_mod): (a) SSSC time series (mg/L), constrained by the daily frequency of satellite observations, over 2007–2011, averaged over the 10–70 m bathymetric range of the area of interest plotted in (b) and masked when cloud cover exceeds 90% over the area; (c) correlations between modeled and observed SSSC, where the continuous red line marks model underestimation (with respect to observations) by a factor of 2.
Figure 3: Key features of the OSSE dataset: (a) map of pixel-wise available data rates for the simulated SSSC dataset, with the three reference stations 'SaO', 'Gas', and 'Con' shown in white; (b) time series of the missing data rate; (c) reference SSSC time series (green) at the three stations, with black crosses marking dates when the corresponding pixel is observed by the satellite.
Figure 4: Daily reconstruction RMSE for OI, DINEOF, AnDA, AE-4DVarNet, and GE-4DVarNet; red bars indicate periods with a daily missing data rate above 95%.
Figure 5: Reconstruction of SSSC gradient fields: (a) mean reconstructions by OI, AnDA, DINEOF, AE-4DVarNet, and GE-4DVarNet over the whole 100-day validation period; (b) maps of the mean reconstruction error over the same period for the five methods.
Figure 6: Time series of the SSSC reconstruction for OI, DINEOF, AnDA, AE-4DVarNet, and GE-4DVarNet; the green series is the true state, and the marks indicate the observation inputs.
21 pages, 2605 KiB  
Article
A Comparison of ALS and Dense Photogrammetric Point Clouds for Individual Tree Detection in Radiata Pine Plantations
by Irfan A. Iqbal, Jon Osborn, Christine Stone and Arko Lucieer
Remote Sens. 2021, 13(17), 3536; https://doi.org/10.3390/rs13173536 - 6 Sep 2021
Cited by 5 | Viewed by 3766
Abstract
Digital aerial photogrammetry (DAP) has emerged as a potentially cost-effective alternative to airborne laser scanning (ALS) for forest inventory methods that employ point cloud data. Forest inventory derived from DAP using area-based methods has been shown to achieve accuracy similar to that of [...] Read more.
Digital aerial photogrammetry (DAP) has emerged as a potentially cost-effective alternative to airborne laser scanning (ALS) for forest inventory methods that employ point cloud data. Forest inventory derived from DAP using area-based methods has been shown to achieve accuracy similar to that of ALS data. At the tree level, individual tree detection (ITD) algorithms have been developed to detect and/or delineate individual trees either from ALS point cloud data or from ALS- or DAP-based canopy height models. An examination of the application of ITDs to DAP-based point clouds has not yet been reported. In this research, we evaluate the suitability of DAP-based point clouds for individual tree detection in Pinus radiata plantations. Two ITD algorithms designed to work with point cloud data are applied to dense point clouds generated from small- and medium-format photography and to an ALS point cloud. Performance of the two ITD algorithms, the influence of stand structure on tree detection rates, and the relationship between tree detection rates and canopy structural metrics are investigated. Overall, we show that there is good agreement between ALS- and DAP-based ITD results (the proportion of false negatives for ALS, SFP, and MFP was always lower than 29.6%, 25.3%, and 28.6%, respectively, whereas the proportion of false positives for ALS, SFP, and MFP was always lower than 39.4%, 30.7%, and 33.7%, respectively). Differences between small- and medium-format DAP results were minor (for SFP and MFP, differences between recall, precision, and F-score were always less than 0.08, 0.03, and 0.05, respectively), suggesting that DAP point cloud data are robust for ITD. Our results show that among all the canopy structural metrics, the number of trees per hectare has the greatest influence on tree detection rates. Full article
(This article belongs to the Special Issue Advances in LiDAR Remote Sensing for Forestry and Ecology)
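The detection statistics quoted in the abstract follow the usual recall, precision, and F-score definitions over matched (true positive), spurious (false positive), and omitted (false negative) trees; a minimal sketch with hypothetical counts:

```python
def itd_scores(tp, fp, fn):
    """Tree-detection rates from matched (TP), unmatched detected (FP),
    and omitted reference (FN) trees."""
    recall = tp / (tp + fn)                 # fraction of reference trees detected
    precision = tp / (tp + fp)              # fraction of detections matching a reference tree
    f_score = 2 * recall * precision / (recall + precision)
    return recall, precision, f_score

# Hypothetical plot: 90 reference trees matched, 10 spurious detections, 20 omitted trees
print(itd_scores(tp=90, fp=10, fn=20))
```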
Show Figures
Graphical abstract
Figure 1">
Figure 1: Location and map of the study area showing the spatial distribution of field plots within stand boundaries and of ground control points (GCPs).
Figure 2: (a) Aerial view of an MRI patch; (b) illustration of canopy condition within an MRI stand; (c) distribution of ALS, SFP, and MFP points in 2 m height bins, showing that a higher percentage of ALS points penetrates the canopy to the ground, whereas DAP-based points are mostly contained in the upper canopy, a consequence of the reliance of DAP on discrete photo exposure stations and multi-image matching compared with the continuous capture pattern and single-return method of LiDAR (see Iqbal et al. [45] for details); in the upper canopy, the proportion of MFP points is higher than that of SFP.
Figure 3: Example (plot MRI_05) of manual treetop identification in CloudCompare (red, ALS; blue, SFP; green, MFP; black, reference treetops); (a,b) show the same plot from the front and back (tilted), respectively.
Figure 4: Comparison of PCITD and Li2012 in terms of (a) overall detection percentage relative to the total number of reference trees, (b) proportions of reference trees detected (TP_r) and omitted (FN), which sum to the total number of reference trees, and (c) proportions of detected trees matched (TP_d) and not matched (FP) with a reference tree, which sum to the total number of trees detected by an algorithm.
Figure 5: Comparison of tree detection rates of the two ITDs: (a) recall (r), (b) precision (p), and (c) F-score; recall values in the MRI plots are lower than in the PHI plots, while precision values in the MRI plots are higher.
30 pages, 16310 KiB  
Article
Exploiting High Geopositioning Accuracy of SAR Data to Obtain Accurate Geometric Orientation of Optical Satellite Images
by Zhongli Fan, Li Zhang, Yuxuan Liu, Qingdong Wang and Sisi Zlatanova
Remote Sens. 2021, 13(17), 3535; https://doi.org/10.3390/rs13173535 - 6 Sep 2021
Cited by 24 | Viewed by 4069
Abstract
Accurate geopositioning of optical satellite imagery is a fundamental step for many photogrammetric applications. Considering the imaging principle and data processing manner, SAR satellites can achieve high geopositioning accuracy. Therefore, SAR data can be a reliable source for providing control information in the [...] Read more.
Accurate geopositioning of optical satellite imagery is a fundamental step for many photogrammetric applications. Considering the imaging principle and data processing manner, SAR satellites can achieve high geopositioning accuracy. Therefore, SAR data can be a reliable source for providing control information in the orientation of optical satellite images. This paper proposes a practical solution for the accurate orientation of optical satellite images using SAR reference images to take advantage of the merits of SAR data. Firstly, we propose an accurate and robust multimodal image matching method to match SAR and optical satellite images. This approach includes the development of a new structure-based feature descriptor applicable to multimodal images, which employs angle-weighted oriented gradients (AWOG), and the utilization of a three-dimensional phase correlation similarity measure. Secondly, we put forward a general optical satellite imagery orientation framework based on multiple SAR reference images, which uses the matches between the SAR and optical satellite images as virtual control points. A large number of experiments not only demonstrate the superiority of the proposed matching method compared to state-of-the-art methods but also prove the effectiveness of the proposed orientation framework. In particular, the matching performance is improved by about 17% compared with the latest multimodal image matching method, namely CFOG, and the geopositioning accuracy of optical satellite images is improved from more than 200 m to around 8 m. Full article
(This article belongs to the Section Earth Observation Data)
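The paper's similarity measure is a three-dimensional phase correlation computed over dense descriptor volumes; the sketch below shows only the underlying two-dimensional, single-channel version, where the peak of the inverse FFT of the normalised cross-power spectrum gives the translation between two patches. It is a simplified stand-in for the AWOG pipeline, with synthetic inputs.

```python
import numpy as np

def phase_correlation(template, search):
    """2-D, single-channel phase correlation: the peak of the inverse FFT of the
    normalised cross-power spectrum gives the shift of `search` relative to
    `template` (modulo wrap-around). The paper applies the same idea in 3-D to
    AWOG descriptor volumes; this is only a plain sketch."""
    F1 = np.fft.fft2(template)
    F2 = np.fft.fft2(search)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.real(np.fft.ifft2(cross_power))
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, 5), axis=(0, 1))   # shift b by (3, 5) relative to a
print(phase_correlation(a, b))              # -> (3, 5)
```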
Show Figures
Figure 1: A typical optical satellite image (a) and its corresponding SAR image (b) of the same area.
Figure 2: The calculation process of gradient magnitude and gradient direction.
Figure 3: Generation of the feature orientation index table (FOI) for an image with nine pixels: (a) nine feature orientations with index numbers 0–8, dividing the range [0°, 180°) into eight subranges evenly; (b) determination of the lower bound (red arrow) and upper bound (black arrow) of a pixel's gradient direction (green arrow); (c) the FOI.
Figure 4: The statistical process of a feature vector with nine elements; WGML and WGMU refer to the two weighted gradient magnitude images.
Figure 5: The processing pipeline of the AWOG descriptor.
Figure 6: Comparative matching of PC and NCC on an optical–SAR image pair: (a) optical satellite image with a 61 × 61 pixel template window; (b) SAR image with a 121 × 121 pixel search area; (c,d) AWOG descriptors generated from the optical and SAR images; (e,g) 3D visualizations of the PC and NCC similarity values with AWOG descriptors; (f,h) the corresponding 2D similarity maps.
Figure 7: Image reshaping.
Figure 8: Flowchart of the proposed geometric orientation framework.
Figure 9: The eight experimental image pairs used in Section 5.1.1; (a–h) correspond to image pairs 1–8.
Figure 10: Correct matching ratio (CMR) results of all matching methods under different template sizes; (a–h) correspond to image pairs 1–8.
Figure 11: RMSE results of all methods with a template window of 91 × 91 pixels.
Figure 12: Average running times of all methods under different template window sizes.
Figure 13: Matching results of the proposed method on all experimental image pairs; (a–h) correspond to image pairs 1–8.
Figure 14: The study area (yellow rectangle) and the areas covered by SAR reference images (red polygon).
Figure 15: Overlap of the optical satellite images with the SAR reference images: (a) distribution of the SAR reference images; (b–d) overlaps of GF-1, GF-2, and ZY-3 with the SAR reference images (the number at the end of an image name indicates a specific image in the corresponding collection).
Figure 16: Matches obtained from all SAR reference images for the optical satellite image GF-1-3: (a–g) locations of matches on the SAR reference images; (h) locations of all matches on GF-1-3, with the color of each point indicating the SAR image to which it is matched.
Figure 17: Registration checkerboard overlays of optical and SAR images with 300 × 300 m tiles before and after the orientation process: (a,c,e) the optical satellite images before, and (b,d,f) after, geometric orientation with the proposed framework.
Figure 18: Running time for extracting AWOG descriptors with and without the FOI on the experimental image pairs of Section 5.1.1.
13 pages, 2050 KiB  
Communication
3D Point Cloud Reconstruction Using Inversely Mapping and Voting from Single Pass CSAR Images
by Shanshan Feng, Yun Lin, Yanping Wang, Fei Teng and Wen Hong
Remote Sens. 2021, 13(17), 3534; https://doi.org/10.3390/rs13173534 - 6 Sep 2021
Cited by 10 | Viewed by 2630
Abstract
3D reconstruction has raised much interest in the field of CSAR. However, three-dimensional imaging results with single-pass CSAR data reveal that the 3D resolution of the system is poor for anisotropic scatterers. According to the imaging mechanism of CSAR, different targets [...] Read more.
3D reconstruction has raised much interest in the field of CSAR. However, three-dimensional imaging results with single-pass CSAR data reveal that the 3D resolution of the system is poor for anisotropic scatterers. According to the imaging mechanism of CSAR, different targets located on the same iso-range line in the zero-Doppler plane fall into the same cell, while the imaging point of a given target falls at different positions under different aspect angles. In this paper, we propose a method for 3D point cloud reconstruction using projections on 2D sub-aperture images. The target and background in the sub-aperture images are separated and binarized. For each projection point of the target, given a series of offsets, the point is mapped inversely to the 3D mesh along its iso-range line, yielding candidate points of the target. The intersection of iso-range lines can be regarded as a voting process: the more intersections a candidate receives, the more votes it accumulates, and candidates with enough votes are retained. This fully exploits the information contained in the angle dimension of CSAR. The proposed approach is verified with the Gotcha Volumetric SAR Data Set. Full article
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Remote Sensing)
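A schematic of the inverse mapping and voting idea described above: each binarised projection point from a sub-aperture image is mapped back to candidate 3D cells along a simplified iso-range/layover line, one candidate per trial height, and cells hit from many aspect angles accumulate votes. The geometry, grid, and example coordinates below are illustrative assumptions only, not the paper's exact projection model.

```python
import numpy as np

def vote_candidates(projections, heights, accumulator_shape):
    """Accumulate votes in a 3-D grid: every 2-D projection point is mapped back
    to candidate (x, y, z) cells, one per trial height, using a schematic layover
    offset towards the look direction; cells hit from several aspect angles
    receive more votes and are kept as target points."""
    votes = np.zeros(accumulator_shape, dtype=int)
    for aspect_deg, points in projections:                       # one sub-aperture
        ux, uy = np.cos(np.radians(aspect_deg)), np.sin(np.radians(aspect_deg))
        for (px, py) in points:                                   # binarised target pixels
            for iz, h in enumerate(heights):                      # candidate heights
                x = int(round(px - h * ux))                       # schematic layover shift
                y = int(round(py - h * uy))
                if 0 <= x < accumulator_shape[0] and 0 <= y < accumulator_shape[1]:
                    votes[x, y, iz] += 1
    return votes

# Toy example: one target at (50, 50) and height 2.0, seen from three aspect angles
votes = vote_candidates(
    projections=[(0, [(52, 50)]), (90, [(50, 52)]), (180, [(48, 50)])],
    heights=np.arange(0, 3.0, 0.5),
    accumulator_shape=(100, 100, 6),
)
print(votes.max(), np.unravel_index(votes.argmax(), votes.shape))  # 3 votes at (50, 50, height index 4)
```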
Show Figures
Graphical abstract
Figure 1">
Figure 1: Imaging diagram under two viewing angles; T is a target above the ground plane, and P and Q are its projection points from different viewing angles.
Figure 2: Imaging geometry under viewing angle B; T is a target above the ground plane and P is a projection point.
Figure 3: Inverse mapping and voting process; P is a projection point, P_i are the target points in 3D space corresponding to different offsets along the iso-range line L′, and Q is another projection point mapped inversely along its iso-range line under another aspect angle.
Figure 4: Algorithm flow chart of the proposed method.
Figure 5: Full scene of the entire 360° aperture after RPCA.
Figure 6: Non-coherent accumulation of 10° sub-aperture images without RPCA.
Figure 7: Non-coherent accumulation of the sparse portion of 10° sub-aperture images after RPCA.
Figure 8: Comparison between the real target and the 3D reconstruction result for Vehicle C: (a) optical image; (b) reconstructed 3D point cloud.
Figure 9: Comparison between the real target and the 3D reconstruction result for Vehicle B: (a) optical image; (b) reconstructed 3D point cloud.
Figure 10: Comparison between the real target and the 3D reconstruction result for Vehicle F: (a) optical image; (b) reconstructed 3D point cloud.
Figure 11: 3D imaging using 3D BP with single-pass CSAR data: (a) Vehicle C; (b) Vehicle B; (c) Vehicle F.
26 pages, 16067 KiB  
Article
Mapping Multi-Temporal Population Distribution in China from 1985 to 2010 Using Landsat Images via Deep Learning
by Haoming Zhuang, Xiaoping Liu, Yuchao Yan, Jinpei Ou, Jialyu He and Changjiang Wu
Remote Sens. 2021, 13(17), 3533; https://doi.org/10.3390/rs13173533 - 6 Sep 2021
Cited by 16 | Viewed by 3699
Abstract
Fine knowledge of the spatiotemporal distribution of the population is fundamental in a wide range of fields, including resource management, disaster response, public health, and urban planning. The United Nations’ Sustainable Development Goals also require the accurate and timely assessment of where people [...] Read more.
Fine knowledge of the spatiotemporal distribution of the population is fundamental in a wide range of fields, including resource management, disaster response, public health, and urban planning. The United Nations’ Sustainable Development Goals also require the accurate and timely assessment of where people live to formulate, implement, and monitor sustainable development policies. However, due to the lack of appropriate auxiliary datasets and effective methodological frameworks, there are rarely continuous multi-temporal gridded population data over a long historical period to aid in our understanding of the spatiotemporal evolution of the population. In this study, we developed a framework integrating a ResNet-N deep learning architecture, considering neighborhood effects with a vast number of Landsat-5 images from Google Earth Engine for population mapping, to overcome both the data and methodology obstacles associated with rapid multi-temporal population mapping over a long historical period at a large scale. Using this proposed framework in China, we mapped fine-scale multi-temporal gridded population data (1 km × 1 km) of China for the 1985–2010 period with a 5-year interval. The produced multi-temporal population data were validated with available census data and achieved comparable performance. By analyzing the multi-temporal population grids, we revealed the spatiotemporal evolution of population distribution from 1985 to 2010 in China with the characteristic of concentration of the population in big cities and the contraction of small- and medium-sized cities. The framework proposed in this study demonstrates the feasibility of mapping multi-temporal gridded population distribution at a large scale over a long period in a timely and low-cost manner, which is particularly useful in low-income and data-poor areas. Full article
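As a rough illustration of the patch-to-count regression idea behind the framework (not the authors' ResNet-N architecture), the sketch below fuses a small CNN encoding of a Landsat patch with a vector of neighborhood features and regresses a population count for a 1 km cell. All layer sizes, the neighborhood features, and the input shapes are assumptions made for this example.

```python
import torch
import torch.nn as nn

class PatchPopulationRegressor(nn.Module):
    """Toy stand-in for the ResNet-N idea: a small CNN encodes a Landsat patch,
    a parallel branch encodes coarse neighborhood statistics, and the two are
    fused to regress a population count (illustrative layer sizes)."""
    def __init__(self, n_bands=6, n_neighbour_feats=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.neighbour = nn.Sequential(nn.Linear(n_neighbour_feats, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(64 + 16, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, patch, neighbour_feats):
        z = torch.cat([self.encoder(patch), self.neighbour(neighbour_feats)], dim=1)
        return self.head(z).squeeze(-1)     # estimated population count per cell

model = PatchPopulationRegressor()
patch = torch.randn(4, 6, 32, 32)           # 4 cells, 6 Landsat bands, 32x32 pixels (assumed shapes)
neigh = torch.randn(4, 8)                   # coarse features of neighbouring cells
print(model(patch, neigh).shape)            # torch.Size([4])
```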
Show Figures
Graphical abstract
Figure 1">
Figure 1: Flowchart of the proposed framework for mapping the population distribution of China by integrating the ResNet-N model and Landsat-5 images from GEE.
Figure 2: Flowchart of collecting the closest ground-truth population grid cell samples at a 1 km resolution.
Figure 3: Spatial distribution of the ground-truth population samples.
Figure 4: Cloud-free Landsat-5 composites of China from 1985 to 2010.
Figure 5: Probability density distribution of population count in the ground-truth samples and example RS image patches corresponding to various population counts.
Figure 6: An end-to-end ResNet-N model to estimate population count from RS images by embedding neighborhood knowledge into ResNet.
Figure 7: Scatterplots and probability density distributions of ground-truth and estimated population count from ResNet-N and ResNet.
Figure 8: Scatterplots of test samples between the true population count and the estimation error (estimated minus true population count) for ResNet-N and ResNet.
Figure 9: RS image patches (top row) and corresponding heatmaps produced by Grad-CAM (bottom row) in 12 typical grid cells: (a) built-up areas bordering natural areas; (b) interiors of built-up areas.
Figure 10: Scatterplots of true and estimated population count from RSPop, WorldPop, and GPWv4 at the town scale.
Figure 11: Scatterplots of true and estimated population count at the county scale from 1990 to 2010.
Figure 12: Scatterplots of true and estimated population count at the town scale based on county-, city-, province-, and country-scale census data.
Figure 13: Variation in the accuracy of the gridded population data based on county-, city-, province-, and country-scale census data in terms of six accuracy metrics.
Figure 14: Gridded population data (1 km × 1 km) of China from 1985 to 2010.
Figure 15: Population distributions (bottom row) and landscape variations (top row) of three regions in large urban agglomerations in China from 1985 to 2010: (a) Beijing-Tianjin-Hebei; (b) the Yangtze River Delta; (c) the Pearl River Delta.
Figure A1: Illustration of how the input RS image evolves to the output population count in ResNet-N for an example image patch; the activations of the first three and the last feature map of each layer are visualized after PCA compression to three RGB channels [77], showing that shallow layers (Conv1, Conv2) extract concrete features such as texture, shape, and edge from natural landscapes, while deep layers (Conv3–Conv5) extract informative abstract features for population estimation.
Figure A2: The movement path of the population center of China from 1985 to 2010.
16 pages, 3586 KiB  
Article
Detection of Microplastics in Water and Ice
by Seohyun Jang, Joo-Hyung Kim and Jihyun Kim
Remote Sens. 2021, 13(17), 3532; https://doi.org/10.3390/rs13173532 - 6 Sep 2021
Cited by 2 | Viewed by 4568
Abstract
It is possible to detect various microplastics (MPs) floating on water or contained in ice due to the unique optical characteristics of plastics of various chemical compositions and structures. When the MPs are measured in the spectral region between 800 and 1000 nm, [...] Read more.
It is possible to detect various microplastics (MPs) floating on water or contained in ice due to the unique optical characteristics of plastics of various chemical compositions and structures. When the MPs are measured in the spectral region between 800 and 1000 nm, which is relatively insensitive to temperature changes in water, they are frequently perceived as noise or obscured by the surrounding reflection spectra because of the small number and low intensity of the representative peak wavelengths. In this study, we have applied several mathematical methods, including the convex hull, Gaussian deconvolution, and curve fitting, to amplify and normalize the reflectance and thereby find the spectral properties of each polymer, namely polypropylene (PP), polyethylene terephthalate (PET), poly(methyl methacrylate) (PMMA), and polyethylene (PE). Blunt-shaped spectra with a relatively large maximum of normalized reflectance (NRmax) can be decomposed into several Gaussian peak wavelengths: 889, 910, and 932 nm for PP and 898 and 931 nm for PE. Moreover, unique peak wavelengths with a meaningful measure at 868 and 907 nm for PET and 887 nm for PMMA were also obtained. Based on the results of the study, each plastic can be identified with up to 81% precision by compensating based on these spectral properties, even when it is hidden in water or ice. Full article
(This article belongs to the Section Environmental Remote Sensing)
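A hedged sketch of the two processing steps named in the abstract, continuum removal with a convex hull followed by Gaussian peak fitting, applied to a synthetic 800–1000 nm spectrum. The hull construction, normalisation, and initial guesses are illustrative and not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def continuum_removed(wavelengths, reflectance):
    """Divide the spectrum by its upper convex hull (monotone-chain construction)
    to normalise the reflectance and emphasise local spectral features."""
    pts = list(zip(wavelengths, reflectance))
    hull = [pts[0]]
    for p in pts[1:]:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it does not bulge above the chord (upper hull)
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    hull_y = np.interp(wavelengths, [x for x, _ in hull], [y for _, y in hull])
    return reflectance / hull_y

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

wl = np.linspace(800, 1000, 201)
spectrum = 0.4 + 0.001 * (wl - 800) + gaussian(wl, -0.05, 910, 8)   # synthetic dip near 910 nm
depth = 1.0 - continuum_removed(wl, spectrum)                        # normalised feature depth
popt, _ = curve_fit(gaussian, wl, depth, p0=[0.05, 905, 10])
print(popt)   # fitted amplitude, peak wavelength (~910 nm), width
```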
Show Figures
Figure 1: Schematic and pictures of the sample materials: (a) plastic polymers, (b) plastic polymers exposed to ice, (c) plastic polymers covered with ice, (d) plastic polymers floating in water.
Figure 2: Schematic of the experimental setup (SCC: signal conditioning components, DAQ: data acquisition).
Figure 3: Spectral analysis of PE in 50 × 50 mm² (OriginPro 2020 (Academic)): (a,b) continuum removal with the 2D convex hull; (c,d) LM algorithm with peak deconvolution.
Figure 4: The in-house program for microplastic classification; for instance, water is selected as the 'Environment' in Part 1, and the classified peak wavelengths in Part 2 yield one non-identified peak (None) and two identified peaks (PP) in Part 3.
Figure 5: Normalized reflectance of PP, PET, PMMA, and PE, showing the characteristic peaks of each material and the core components of the Gaussian functions of the plastic polymers (λp: Gaussian peak wavelength, NRmax: maximum of normalized reflectance, FWHM: full width at half maximum in nm).
Figure 6: Spectra of the characteristic peak wavelengths of plastic polymers (a) exposed to ice, (b) covered with ice, and (c) floating on water.
Figure 7: Degree of identification and classification of plastics under various conditions: (a) single species and (b) polymer mixture.
19 pages, 5768 KiB  
Article
Projecting Future Vegetation Change for Northeast China Using CMIP6 Model
by Wei Yuan, Shuang-Ye Wu, Shugui Hou, Zhiwei Xu, Hongxi Pang and Huayu Lu
Remote Sens. 2021, 13(17), 3531; https://doi.org/10.3390/rs13173531 - 6 Sep 2021
Cited by 12 | Viewed by 3900
Abstract
Northeast China lies in the transition zone from the humid monsoonal to the arid continental climate, with diverse ecosystems and agricultural land highly susceptible to climate change. This region has experienced significant greening in the past three decades, but future trends remain uncertain. [...] Read more.
Northeast China lies in the transition zone from the humid monsoonal to the arid continental climate, with diverse ecosystems and agricultural land highly susceptible to climate change. This region has experienced significant greening in the past three decades, but future trends remain uncertain. In this study, we provide a quantitative assessment of how vegetation, indicated by the leaf area index (LAI), will change in this region in response to future climate change. Based on the output of eleven CMIP6 global climate models, Northeast China is likely to get warmer and wetter in the future, corresponding to an increase in regional LAI. Under the medium emissions scenario (SSP245), the average LAI is expected to increase by 0.27 for the mid-century (2041–2070) and 0.39 for the late century (2071–2100). Under the high emissions scenario (SSP585), the corresponding increases are 0.40 for the mid-century and 0.70 for the late century. Despite the increase in the regional mean, the LAI trend shows significant spatial heterogeneity, with likely decreases for the arid northwest and some sandy fields in this region. Therefore, climate change could pose additional challenges for long-term ecological and economic sustainability. Our findings could provide useful information to local decision makers for developing effective sustainable land management strategies in Northeast China. Full article
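The projections rely on the geographically weighted regression (GWR) of LAI against climate and terrain shown in the figures below; the sketch here fits a locally weighted least-squares model at a single target location using a Gaussian spatial kernel. The bandwidth, kernel, and synthetic predictors (standing in for temperature, precipitation, and elevation) are assumptions for illustration.

```python
import numpy as np

def gwr_point(coords, X, y, target, bandwidth=1.0):
    """GWR-style local regression at one target location: observations are
    weighted by a Gaussian kernel of their distance to the target, and a
    weighted least-squares fit gives the local coefficients."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)               # Gaussian spatial kernel
    Xd = np.column_stack([np.ones(len(X)), X])            # add intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)   # weighted least squares
    return beta                                            # [intercept, b_T, b_P, b_elev]

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))                 # grid-cell locations
X = rng.normal(size=(200, 3))                              # scaled T, P, elevation (synthetic)
y = 0.5 + 0.3 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] + 0.05 * rng.normal(size=200)
print(gwr_point(coords, X, y, target=np.array([5.0, 5.0])))
```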
Show Figures
Graphical abstract
Figure 1">
Figure 1: Study area land cover (a), growing season mean (GSM) LAI from 1982 to 2013 (b), and elevation (c) in Northeast China; the land cover data come from MOD12Q1 provided by the NASA Land Processes Distributed Active Archive Center (LP DAAC).
Figure 2: GWR model results: spatial distribution of the local R² value (a), model residual (b), and coefficients for temperature (c), precipitation (d), and elevation (e).
Figure 3: Taylor diagram for monthly temperature (a) and precipitation (b).
Figure 4: Projected future changes in GSM temperature (a–d) and precipitation (e–h) for the mid-century (2041–2070) and late century (2071–2100) under the SSP245 and SSP585 scenarios relative to 1982–2013; regional mean change values are given in the upper right corner of each plot.
Figure 5: Future GSM LAI changes (a–d) relative to the historical period (1982–2013) for the mid-century and late century under the SSP245 and SSP585 scenarios; regional mean change values are given in the upper right corner of each plot, and red lines mark the boundaries of the four major sandy fields in NE China.
Figure 6: Future GSM LAI change values for the four major sandy fields in NE China relative to the present (1982–2013) for MidSSP245 (pink), MidSSP585 (green), LateSSP245 (blue), and LateSSP585 (purple).
Figure 7: Taylor diagram for the leaf area index derived from the CMIP6 ESMs.
Figure 8: LAI change values predicted by CMIP6 ESMs (brown) and GWR models (yellow) in NE China; the numbers at the top of the bars are the regional mean change values over NE China.
Figure 9: Future GSM LAI changes (a–d) relative to 1982–2013 in NE China derived from the CMIP6 ESMs for the mid-century and late century under the SSP245 and SSP585 scenarios; regional mean change values are given in the upper right corner of each plot, and red lines mark the four major sandy fields.
Figure A1: GSM LAI residual simulated by the CMIP6 ESMs (multi-model mean) relative to the observed LAI in NE China for 1982–2013.
Figure A2: Change (%) in the area with GSM LAI less than 0.2 in NE China under the different emissions scenarios and time periods; the red numbers give the exact area percentage of the study region.
27 pages, 5078 KiB  
Article
Fog Season Risk Assessment for Maritime Transportation Systems Exploiting Himawari-8 Data: A Case Study in Bohai Sea, China
by Pei Du, Zhe Zeng, Jingwei Zhang, Lu Liu, Jianchang Yang, Chuanping Qu, Li Jiang and Shanwei Liu
Remote Sens. 2021, 13(17), 3530; https://doi.org/10.3390/rs13173530 - 5 Sep 2021
Cited by 13 | Viewed by 3737
Abstract
Sea fog is a disastrous marine phenomenon for ship navigation. Sea fog reduces visibility at sea and has a great impact on the safety of ship navigation, which may lead to catastrophic accidents. Geostationary orbit satellites such as Himawari-8 make it possible to [...] Read more.
Sea fog is a disastrous marine phenomenon for ship navigation. Sea fog reduces visibility at sea and has a great impact on the safety of ship navigation, which may lead to catastrophic accidents. Geostationary orbit satellites such as Himawari-8 make it possible to monitor sea fog over large areas of the sea. In this paper, a framework for marine navigation risk evaluation in fog seasons is developed based on Himawari-8 satellite data, which includes: (1) a sea fog identification method for Himawari-8 satellite data based on a multilayer perceptron; (2) a navigation risk evaluation model based on the CRITIC objective weighting method, which, along with the sea fog identification method, allows us to combine historical sea fog data with marine environmental data, such as properties related to wind, waves, ocean currents, and water depth, to evaluate navigation risks; and (3) a way to determine shipping routes based on the Delaunay triangulation method to carry out risk analyses of specific navigation areas. This paper uses geographic information system (GIS) mapping technology to produce navigation risk maps for different seasons in the Bohai Sea and its surrounding waters. The proposed sea fog identification method is verified with CALIPSO vertical feature mask data, and the navigation risk evaluation model is verified with historical accident data. The probability of detection is 81.48% for sea fog identification, and the accident matching rate of the navigation risk evaluation model is 80% in fog seasons. Full article
(This article belongs to the Special Issue Remote Sensing for Marine Environmental Disaster Response)
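The CRITIC objective weighting step mentioned above has a compact closed form: each criterion is weighted by its dispersion times its conflict (one minus correlation) with the other criteria, computed on normalised columns of the decision matrix. A minimal sketch with a hypothetical set of risk criteria (the criteria and values below are invented for illustration):

```python
import numpy as np

def critic_weights(decision_matrix):
    """CRITIC objective weights: each criterion's weight is proportional to its
    standard deviation times its total conflict (1 - correlation) with the other
    criteria, computed on min-max normalised columns."""
    X = np.asarray(decision_matrix, dtype=float)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    std = X.std(axis=0, ddof=1)
    corr = np.corrcoef(X, rowvar=False)
    info = std * (1.0 - corr).sum(axis=0)      # information carried by each criterion
    return info / info.sum()

# Hypothetical criteria per grid cell: fog frequency, wind speed, wave height, water depth
matrix = np.array([[0.3, 6.0, 1.2, 20.0],
                   [0.6, 8.0, 1.8, 15.0],
                   [0.1, 5.0, 0.9, 40.0],
                   [0.4, 7.5, 1.5, 25.0]])
print(critic_weights(matrix))
```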
Show Figures
Graphical abstract
Figure 1">
Figure 1: Study area.
Figure 2: Flow chart of the navigation risk evaluation method.
Figure 3: Flow chart of the sea fog identification method.
Figure 4: Sample selection using CALIPSO-VFM data: (a) AHI image on 15 June 2018 at 05:10 UTC; (b) object classification of CALIPSO-VFM data from 15 June 2018 at 05:17 UTC.
Figure 5: Spectral analysis: (a) spectral reflectivity of bands 1–6 and NDSI for AHI data; (b) brightness temperature of bands 7–16 for AHI data, where the red box marks bands 13, 14, and 15, which show clear brightness temperature differences between objects.
Figure 6: Distribution of accident points.
Figure 7: Shipping route distribution.
Figure 8: Channel area division process: (a) simplified shipping routes and points; (b) regional Thiessen polygons built with the Delaunay triangulation method using the route points as sources; (c) shipping route areas after merging and trimming.
Figure 9: Distribution of CALIPSO-VFM verification points around 05:00 UTC from January to September 2018.
Figure 10: Frequency distribution of sea fog in (a) spring, (b) summer, (c) autumn, and (d) winter.
Figure 11: Navigation risk assessment and accident point matching results in (a) spring, (b) summer, (c) autumn, and (d) winter.
Figure 12: Sensitivity analysis results.
Figure 13: Risk distribution in the waterway areas in (a) spring, (b) summer, (c) autumn, and (d) winter.
16 pages, 3203 KiB  
Technical Note
Impact of Large-Scale Ocean–Atmosphere Interactions on Interannual Water Storage Changes in the Tropics and Subtropics
by Shengnan Ni, Zhicai Luo, Jianli Chen and Jin Li
Remote Sens. 2021, 13(17), 3529; https://doi.org/10.3390/rs13173529 - 5 Sep 2021
Viewed by 2459
Abstract
Satellite observations from the Gravity Recovery and Climate Experiment (GRACE) provide unique measurements of global terrestrial water storage (TWS) changes at different spatial and temporal scales. Large-scale ocean–atmosphere interactions might have significant impacts on the global hydrological cycle, resulting in considerable influences on TWS changes. Quantifying the contributions of large-scale ocean–atmosphere interactions to TWS changes would be beneficial to improving our understanding of water storage responses to climate variability. In this study, we investigate the impact of three major global ocean–atmosphere interactions, namely the El Niño–Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD), and the Atlantic Meridional Mode (AMM), on interannual TWS changes in the tropics and subtropics, using GRACE measurements and climate indices. Based on the least squares principle, these climate indices and their Hilbert transforms, together with a linear trend and annual and semi-annual terms, are fitted to the TWS time series on global 1° × 1° grids. Using the fitted results, we analyze the connections between interannual TWS changes and the ENSO, IOD, and AMM indices, and estimate the quantitative contributions of these climate phenomena to TWS changes. The results indicate that interannual TWS changes in the tropics and subtropics are related to the ENSO, IOD, and AMM climate phenomena. The contribution of each climate phenomenon to TWS changes may vary between regions, but in most parts of the tropics and subtropics, the ENSO contribution to TWS changes is found to be more dominant than those from the IOD and AMM. Full article
(This article belongs to the Special Issue Carbon, Water and Climate Monitoring Using Space Geodesy Observations)
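The fitting scheme summarized in the abstract (trend, annual and semi-annual harmonics, plus each climate index and its Hilbert transform) can be written down directly for a single grid cell as a linear least squares problem. The snippet below is a minimal illustration with synthetic monthly indices and a synthetic TWS series; the series, the coefficient values, and the amplitude definition are assumptions made for the example only.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
n_months = 180
t = np.arange(n_months) / 12.0                       # time in years

# Synthetic climate indices standing in for Nino 3.4, the IOD (DMI) and the AMM
enso = np.sin(2 * np.pi * t / 3.7) + 0.3 * rng.standard_normal(n_months)
iod  = np.sin(2 * np.pi * t / 2.3) + 0.3 * rng.standard_normal(n_months)
amm  = np.sin(2 * np.pi * t / 5.1) + 0.3 * rng.standard_normal(n_months)

def with_quadrature(x):
    """Return the index and its Hilbert transform (90-degree phase-shifted copy)."""
    return x, np.imag(hilbert(x))

# Design matrix: offset, trend, annual and semi-annual harmonics,
# then each index together with its Hilbert transform.
cols = [np.ones_like(t), t,
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
        np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)]
for idx in (enso, iod, amm):
    cols.extend(with_quadrature(idx))
A = np.column_stack(cols)

# Synthetic TWS series for one grid cell (cm of equivalent water height)
tws = 1.5 * enso - 0.8 * iod + 3.0 * np.sin(2 * np.pi * t) + rng.standard_normal(n_months)

coef, *_ = np.linalg.lstsq(A, tws, rcond=None)
enso_amp = np.hypot(coef[6], coef[7])                # amplitude from index + quadrature term
print("fitted ENSO coefficient:", round(coef[6], 2), "ENSO amplitude:", round(enso_amp, 2))
```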
Figure 1: ENSO, IOD, and AMM indices and the corresponding Hilbert-transformed time series.
Figure 2: ENSO, IOD, and AMM amplitudes from GRACE spherical harmonics (SH) and mascon solutions. (a,b) show ENSO amplitudes from GRACE SH and mascon solutions computed with Equation (3); (c,d) show IOD amplitudes from GRACE SH and mascon solutions computed with Equation (4); (e,f) show AMM amplitudes from GRACE SH and mascon solutions computed with Equation (5).
Figure 3: Comparisons between climate-induced TWS changes defined in Equation (6) and interannual TWS changes: (a,b) standard deviations of climate-induced TWS changes from GRACE SH and mascon solutions; (c,d) standard deviations of interannual TWS changes from GRACE SH and mascon solutions; (e,f) RMS reductions between climate-induced TWS changes and interannual TWS changes from GRACE SH and mascon solutions.
Figure 4: ENSO coefficients a7 of least squares fitting and zero-phase-lag cross-correlation coefficients between interannual TWS changes and the Niño 3.4 index. (a,b) show ENSO coefficients a7 obtained from GRACE SH and mascon solutions, respectively; (c) shows correlation coefficients between TWS changes from GRACE SH solutions and the Niño 3.4 index; (d) shows the cross-correlation coefficients between TWS changes from GRACE mascon solutions and the Niño 3.4 index.
Figure 5: The same as Figure 4, but for ENSO coefficients a8 and cross-correlation coefficients between interannual TWS flux and the Niño 3.4 index. The TWS flux in this study means the quantitative difference of TWS changes between two adjacent months.
Figure 6: The interannual and climate-induced TWS changes from GRACE SH solutions in the three selected regions shown in Figure 3a. (a–c) show the interannual TWS changes (blue curves) and climate-induced TWS changes (red curves) for Regions 1–3; (d–f) show the corresponding linear fitting relationship between the two curves in the three regions.
Figure 7: Selected major river basins in the tropics and subtropics (1—Amazon, 2—La Plata, 3—Orinoco, 4—Colorado, 5—Mississippi, 6—Congo, 7—Zambezi, 8—Niger, 9—Nile, 10—Ganges, 11—Yangtze, 12—Mekong).
16 pages, 8907 KiB  
Article
Analysis of Temporal and Spatial Variability of Fronts on the Amery Ice Shelf Automatically Detected Using Sentinel-1 SAR Data
by Tingting Zhu, Xiangbin Cui and Yu Zhang
Remote Sens. 2021, 13(17), 3528; https://doi.org/10.3390/rs13173528 - 5 Sep 2021
Cited by 1 | Viewed by 2692
Abstract
The Amery Ice Shelf (AIS) dynamics and mass balance caused by iceberg calving and basal melting are significant in the ocean climate system. Using satellite imagery from Sentinel-1 SAR, we monitored the temporal and spatial variability of the frontal positions of the Amery Ice Shelf, Antarctica, from 2015 to 2021. In this paper, we propose an automatic algorithm for frontal line extraction based on the SO-CFAR strategy and a profile cumulative method. To improve the accuracy of the extracted frontal lines, we developed a framework combining the Constant False Alarm Rate (CFAR) and morphological image-processing strategies. A visual comparison between the proposed algorithm and a state-of-the-art algorithm shows that our algorithm remains effective in challenging cases involving rifts, icebergs, and crevasses as well as ice-shelf surface structures. We present a detailed analysis of the temporal and spatial variability of the AIS fronts and find an advance of the AIS frontal line before the D28 calving event and a continued advance after the event. The study reveals that the AIS front has advanced at a rate of 1015 m/year and that its frontal location has expanded continuously. From March 2015 to May 2021, the frontal location of the AIS advanced by 6.5 km; although the length of the AIS frontal line differs noticeably after the D28 event, it increased by about 7.5% between 2015 and 2021 (from 255.03 km to 273.5 km). We found a substantial increase in summer advance rates and a decrease in winter advance rates, reflecting seasonal characteristics. This variability of the AIS frontal line is in good agreement with the ice flow velocity. Full article
(This article belongs to the Special Issue The Cryosphere Observations Based on Using Remote Sensing Techniques)
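The SO-CFAR (smallest-of constant false alarm rate) detection at the heart of the frontal extraction can be sketched on a one-dimensional backscatter profile: the clutter level at each cell is the smaller of the leading- and lagging-window means, which keeps the threshold tight at strong transitions such as the ice-shelf front. The window sizes, scaling factor, and synthetic profile below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def so_cfar(profile, train=16, guard=4, scale=3.0):
    """Smallest-of CFAR detector on a 1-D intensity profile.

    For each cell under test, the clutter power is the smaller of the means
    of the leading and lagging training windows (guard cells excluded), and
    a detection is declared when the cell exceeds scale * clutter.
    """
    x = np.asarray(profile, dtype=float)
    n = x.size
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        lead = x[i - guard - train:i - guard]
        lag = x[i + guard + 1:i + guard + train + 1]
        clutter = min(lead.mean(), lag.mean())
        detections[i] = x[i] > scale * clutter
    return detections

# Synthetic SAR-like profile: dark open water, then a bright ice shelf
rng = np.random.default_rng(1)
water = rng.exponential(1.0, 200)
ice = rng.exponential(6.0, 200)
profile = np.concatenate([water, ice])

hits = so_cfar(profile)
front_index = np.argmax(hits)        # first detected cell approximates the front position
print("first detection at sample", front_index)
```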
Figure 1: Flowchart of the frontal line detection algorithm. The left panel is for ice shelf detection and the right panel is for delineating frontal lines.
Figure 2: Flowchart of the SO-CFAR method for ice shelf detection.
Figure 3: Frontal point extraction criterion for different ice conditions. In (a,b), the red line is the boundary defined in this study and the magenta line is the defined profile line. Panels (c–e) illustrate the cumulative values for three different frontal point extraction situations: (c) the profile crosses the main body of floating ice; (d) the profile crosses a small part of floating ice; (e) no floating ice along the profile.
Figure 4: Research area and data set used in this study. The general trend of the ice frontal position between 2015 and 2021 for the Amery Ice Shelf; the colors indicate the time difference. Three regions are defined for dynamical and seasonal analysis. The right panel shows the data used for the spatio-temporal analysis. The background is from single-polarized Sentinel-1 data acquired on 26 March 2015.
Figure 5: Frontal point extraction on sample Sentinel-1 SAR images using the proposed algorithm (upper) and the comparison method (bottom): (a) 26 March 2015; (b) 24 October 2019; (c) 22 September 2015.
Figure 6: Time series variability in frontal lines of the AIS monitored during March 2015–May 2021, and relative changes in frontal lines of the AIS during the D28 calving event in September 2019. The length and extension distance are coded as shown. (a) Monthly cumulative extension distance and length of the frontal line for the AIS from 2015 to 2021; (b) cumulative extension distance and length of the frontal line for the AIS during the D28 calving event.
Figure 7: The advance of the AIS fronts from March 2018. The rift in Region 2 became progressively wider; the length of the rift is marked in the respective panel.
Figure 8: Frontal lines in the three regions from 2015 to 2021. All background images were acquired in March of each year, except for 2019 (September). For the last panel, the background images were acquired in March 2015 for Regions 1, 2, and 3. (a) Region 1; (b) Region 2; (c) Region 3.
Figure 9: Seasonal advance rate with the expansion distance of the AIS frontal line in Region 3. The monthly average advance rate for each year is color-coded.
Figure 10: Correlation analysis between the frontal extension rate of the AIS and the MEaSUREs ice velocity product, shown as scatter plots of the extension rate against the MEaSUREs ice velocity. (a) MEaSUREs ice velocity product; (b) ice velocity at the intersection of Lambert Glacier (LG), Fisher Glacier (FG), and Mellor Glacier (MG); (c) ice velocity in the front of the AIS; (d) scatter plot of the extension rate in the front of the AIS and the MEaSUREs ice velocity product (yellow and light blue points); (e) scatter plot of the extended distance at the frontal position of the D28 disintegration area and the MEaSUREs ice velocity product (yellow points).
24 pages, 2399 KiB  
Article
Wildfire Segmentation Using Deep Vision Transformers
by Rafik Ghali, Moulay A. Akhloufi, Marwa Jmal, Wided Souidene Mseddi and Rabah Attia
Remote Sens. 2021, 13(17), 3527; https://doi.org/10.3390/rs13173527 - 5 Sep 2021
Cited by 62 | Viewed by 9619
Abstract
In this paper, we address the problem of forest fires' early detection and segmentation in order to predict their spread and help with fire fighting. Techniques based on Convolutional Networks are the most widely used and have proven to be efficient at solving such a problem. However, they remain limited in modeling the long-range relationships between objects in the image, due to the intrinsic locality of convolution operators. In order to overcome this drawback, Transformers, designed for sequence-to-sequence prediction, have emerged as alternative architectures. They have recently been used to determine the global dependencies between input and output sequences using the self-attention mechanism. In this context, we present the first study exploring the potential of vision Transformers for forest fire segmentation. Two vision-based Transformers are used, TransUNet and MedT. We design two frameworks based on these image Transformers, adapted to our complex, unstructured environment, which we evaluate with varying backbones and optimize for forest fire segmentation. Extensive evaluations of both frameworks revealed a performance superior to current methods. The proposed approaches achieved a state-of-the-art performance with an F1-score of 97.7% for the TransUNet architecture and 96.0% for the MedT architecture. The analysis of the results showed that these models reduce fire pixel misclassifications thanks to the extraction of both global and local features, which provide finer detection of the fire's shape. Full article
(This article belongs to the Special Issue Data Mining in Multi-Platform Remote Sensing)
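The reported F1-scores are pixel-level statistics computed between predicted and reference fire masks. A minimal sketch of that evaluation is shown below; the 0.5 threshold and the toy masks are assumptions, and the snippet is independent of the TransUNet and MedT implementations themselves.

```python
import numpy as np

def fire_segmentation_scores(prob_map, gt_mask, threshold=0.5):
    """Pixel-level precision, recall, F1 and IoU for a binary fire mask."""
    pred = np.asarray(prob_map) >= threshold
    gt = np.asarray(gt_mask).astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou

# Toy example: a noisy probability map around a square fire region
rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[20:40, 25:45] = True
prob = np.clip(gt * 0.9 + rng.normal(0, 0.15, gt.shape), 0, 1)
print([round(v, 3) for v in fire_segmentation_scores(prob, gt)])
```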
Fig">
Figure 1: The proposed TransUNet architecture.
Figure 2: The proposed MedT architecture.
Figure 3: The proposed U2-Net architecture.
Figure 4: The proposed EfficientSeg architecture.
Figure 5: Examples from the CorsicanFire dataset. From top to bottom: RGB images and their corresponding masks.
Figure 6: Results of TransUNet-Res50-ViT. From top to bottom: RGB images, their corresponding masks, and the predicted images of TransUNet-Res50-ViT.
Figure 7: Results of TransUNet-ViT. From top to bottom: RGB images, their corresponding masks, and the predicted images of TransUNet-ViT.
Figure 8: Results of MedT. From top to bottom: RGB images, their corresponding masks, and the predicted images of MedT.
Figure 9: Results of U-Net. From top to bottom: RGB images, their corresponding masks, and the predicted images of U-Net.
Figure 10: Results of U2-Net. From top to bottom: RGB images, their corresponding masks, and the predicted images of U2-Net.
Figure 11: Results of EfficientSeg. From top to bottom: RGB images, their corresponding masks, and the predicted images of EfficientSeg.
Figure 12: Results of TransUNet and MedT using web images. From top to bottom: real RGB images, TransUNet-Res50-ViT results, TransUNet-ViT results, and MedT results.
17 pages, 7592 KiB  
Article
A New Method for Crop Row Detection Using Unmanned Aerial Vehicle Images
by Pengfei Chen, Xiao Ma, Fangyong Wang and Jing Li
Remote Sens. 2021, 13(17), 3526; https://doi.org/10.3390/rs13173526 - 5 Sep 2021
Cited by 14 | Viewed by 3851
Abstract
Crop row detection using unmanned aerial vehicle (UAV) images is very helpful for precision agriculture, enabling one to delineate site-specific management zones and to perform precision weeding. For crop row detection in UAV images, the commonly used Hough transform-based method is not sufficiently accurate. Thus, the purpose of this study is to design a new method for crop row detection in orthomosaic UAV images. For this purpose, a nitrogen field experiment involving cotton and a nitrogen-and-water field experiment involving wheat were conducted to create different scenarios for crop rows. During the peak square growth stage of cotton and the jointing growth stage of wheat, multispectral UAV images were acquired. Based on these data, a new crop row detection method based on least squares fitting was proposed and compared with a Hough transform-based method that uses the same strategy to preprocess images. The crop row detection accuracy (CRDA) was used to evaluate the performance of the different methods. The results showed that the newly proposed method had CRDA values between 0.99 and 1.00 for different nitrogen levels of cotton and CRDA values between 0.66 and 0.82 for different nitrogen and water levels of wheat. In contrast, the Hough transform method had CRDA values between 0.93 and 0.98 for different nitrogen levels of cotton and CRDA values between 0.31 and 0.53 for different nitrogen and water levels of wheat. Thus, the newly proposed method outperforms the Hough transform method and provides an effective tool for crop row detection in orthomosaic UAV images. Full article
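The least squares idea can be illustrated with a small sketch: given a binary vegetation/soil mask in which crop rows run roughly vertically, approximate row centres are located from the column-sum profile, vegetation pixels are assigned to the nearest centre, and a straight line is fitted to each group by least squares. The peak-picking rule and the synthetic mask below are simplifying assumptions, not the exact preprocessing used in the paper.

```python
import numpy as np

def detect_crop_rows(veg_mask, n_rows):
    """Fit one straight line per crop row to a binary vegetation mask.

    Approximate row centres are taken from the largest well-separated peaks
    of the column-sum profile; every vegetation pixel is assigned to the
    nearest centre and a line x = a*y + b is fitted per row by least squares.
    """
    mask = np.asarray(veg_mask, dtype=bool)
    col_profile = mask.sum(axis=0)
    order = np.argsort(col_profile)[::-1]
    centres = []
    for c in order:
        if all(abs(c - k) > 10 for k in centres):
            centres.append(c)
        if len(centres) == n_rows:
            break
    centres = np.sort(centres)

    ys, xs = np.nonzero(mask)
    labels = np.argmin(np.abs(xs[:, None] - centres[None, :]), axis=1)
    lines = []
    for r in range(n_rows):
        sel = labels == r
        a, b = np.polyfit(ys[sel], xs[sel], deg=1)   # x = a*y + b
        lines.append((a, b))
    return lines

# Synthetic plot: three slightly tilted vegetation rows in a 200 x 120 image
h, w = 200, 120
rng = np.random.default_rng(2)
mask = np.zeros((h, w), dtype=bool)
for x0 in (30, 60, 90):
    for y in range(h):
        x = int(round(x0 + 0.02 * y + rng.normal(0, 0.8)))
        mask[y, max(0, x - 1):x + 2] = True

for a, b in detect_crop_rows(mask, n_rows=3):
    print(f"row line: x = {a:.3f}*y + {b:.1f}")
```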
Fig">
Figure 1: Location and layout of the cotton field experiment (a) (N1: 0 kg N/ha; N2: 120 kg N/ha; N3: 240 kg N/ha; N4: 360 kg N/ha; N5: 480 kg N/ha) and the wheat field experiment (b) (W1: 90 mm irrigation; W2: 60 mm irrigation; N1: 0 kg/ha nitrogen fertilizer; N2: 15,000 kg/ha of farmyard manure; N3: 15,000 kg/ha of farmyard manure and 100 kg/ha of nitrogen fertilizer; N4: 15,000 kg/ha of farmyard manure and 200 kg/ha of nitrogen fertilizer; N5: 15,000 kg/ha of farmyard manure and 300 kg/ha of nitrogen fertilizer), with orthomosaic UAV images.
Figure 2: Visually identified row lines of cotton (a) and wheat (b).
Figure 3: Main structure and flow chart of the data analysis procedure used in this study.
Figure 4: Flow chart of the newly proposed method for row identification.
Figure 5: Number of representative row points for each column of images from one experimental plot of wheat. (a) An image of the wheat plot; (b) image rotated to make the direction of crop rows perpendicular to the direction of image rows; (c) image not rotated to make the direction of crop rows perpendicular to the direction of image rows.
Figure 6: Examples of cotton images under different amounts of nitrogen fertilizer: 0 kg N/ha (a); 120 kg N/ha (b); 240 kg N/ha (c); 360 kg N/ha (d); 480 kg N/ha (e).
Figure 7: Examples of wheat images under different amounts of nitrogen fertilizer: 0 kg/ha nitrogen fertilizer (a); 15,000 kg/ha of farmyard manure (b); 15,000 kg/ha of farmyard manure and 100 kg/ha of nitrogen fertilizer (c); 15,000 kg/ha of farmyard manure and 200 kg/ha of nitrogen fertilizer (d); 15,000 kg/ha of farmyard manure and 300 kg/ha of nitrogen fertilizer (e).
Figure 8: An example of a cotton plot image (a), the corresponding vegetation/soil binary image (b), and the representative row point image (c).
Figure 9: An example of detected cotton row lines based on the newly proposed method and the Hough transform-based method, with vegetation/soil binary images as the background.
Figure 10: CRDA values for different methods when detecting cotton rows under different nitrogen scenarios.
Figure 11: An example of a wheat plot image (a), the corresponding vegetation/soil binary image (b), and the representative row point image (c).
Figure 12: An example of wheat row lines detected by the newly proposed method and by the Hough transform-based method, with vegetation/soil binary images as the background.
Figure 13: CRDA values for different row detection methods when detecting wheat rows in different nitrogen and irrigation treatments.
28 pages, 14436 KiB  
Article
Continuous Monitoring of the Flooding Dynamics in the Albufera Wetland (Spain) by Landsat-8 and Sentinel-2 Datasets
by Carmela Cavallo, Maria Nicolina Papa, Massimiliano Gargiulo, Guillermo Palau-Salvador, Paolo Vezza and Giuseppe Ruello
Remote Sens. 2021, 13(17), 3525; https://doi.org/10.3390/rs13173525 - 5 Sep 2021
Cited by 32 | Viewed by 4088
Abstract
Satellite data are very useful for the continuous monitoring of ever-changing environments, such as wetlands. In this study, we investigated the use of multispectral imagery to monitor the winter evolution of land cover in the Albufera wetland (Spain), using Landsat-8 and Sentinel-2 datasets. With multispectral data, the frequency of observation is limited by the possible presence of clouds. To overcome this problem, the data acquired by the two missions, Landsat-8 and Sentinel-2, were jointly used, thus roughly halving the revisit time. The varied types of land cover were grouped into four classes: (1) open water, (2) mosaic of water, mud and vegetation, (3) bare soil and (4) vegetated soil. The automatic classification of the four classes was obtained through a rule-based method that combined the NDWI, MNDWI and NDVI indices. Point information, provided by geo-located ground pictures, was spatially extended with the help of a very high-resolution image (GeoEye-1). In this way, surfaces with known land cover were obtained and used for the validation of the classification method. The overall accuracy was found to be 0.96 and 0.98 for Landsat-8 and Sentinel-2, respectively. The consistency evaluation between Landsat-8 and Sentinel-2 was performed on six days on which acquisitions from both missions were available. The observed dynamics of the land cover were highly variable in space. For example, the open water condition lasted for around 60–80 days in the areas closest to the Albufera lake and progressively decreased towards the boundaries of the park. The study demonstrates the feasibility of using moderate-resolution multispectral images to monitor land cover changes in wetland environments. Full article
(This article belongs to the Special Issue Earth Observation Technologies for Monitoring of Water Environments)
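The rule-based classification can be sketched as per-pixel band arithmetic followed by threshold rules. The band roles follow common Landsat-8/Sentinel-2 usage (green, red, NIR, SWIR), but the thresholds and the ordering of the rules below are illustrative assumptions rather than the values calibrated in the paper.

```python
import numpy as np

def classify_wetland(green, red, nir, swir,
                     t_water=0.2, t_mixed=0.0, t_veg=0.4):
    """Rule-based land cover classes: 1 open water (W), 2 mosaic (M),
    3 bare soil (S), 4 vegetated soil (V). Thresholds are illustrative."""
    ndwi = (green - nir) / (green + nir + 1e-9)
    mndwi = (green - swir) / (green + swir + 1e-9)
    ndvi = (nir - red) / (nir + red + 1e-9)

    classes = np.full(green.shape, 3, dtype=np.uint8)          # default: bare soil
    classes[ndvi >= t_veg] = 4                                  # vegetated soil
    classes[(mndwi > t_mixed) & (ndwi <= t_water)] = 2          # mixed water/mud/vegetation
    classes[(ndwi > t_water) & (mndwi > t_water)] = 1           # open water
    return classes

# Toy 2 x 2 reflectance arrays (green, red, nir, swir)
green = np.array([[0.08, 0.10], [0.12, 0.09]])
red   = np.array([[0.05, 0.09], [0.11, 0.06]])
nir   = np.array([[0.03, 0.12], [0.35, 0.08]])
swir  = np.array([[0.02, 0.10], [0.20, 0.05]])
print(classify_wetland(green, red, nir, swir))
```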
Fig">
Figure 1: Overview of the Albufera wetland (© Google 2020) and outline of the rice fields with lower (Tancats) and higher elevation (highlands). The points represent the locations of field surveys performed on 5 February 2020 and 14 February 2020.
Figure 2: Spectral reflectance of clear and turbid water compared to the L8 and S2 acquired bands.
Figure 3: Photographs of land cover classes taken in the Albufera wetland on 14 February 2020; (a,b) water; (c) shallow turbid water; (d) shallow water with dead vegetation; (e,f) mosaic of water ponds, mud and vegetation; (g,h) flooded area partially covered by vegetation; (i) wet soil; (j) dry soil; (k,l) vegetated soil.
Figure 4: Distribution of index values for the four land cover classes (W, M, S, V).
Figure 5: (a) RGB GeoEye-1 image of 14 February 2020 provided by ESA (© TPMO 2020); (b) land cover classification extracted from the S2 image of 15 February 2020; (c) land cover classification extracted from the L8 image of 13 February 2020.
Figure 6: (a) Land cover classification extracted from the S2 image of 15 February 2020 with field survey locations of 14 February (points) and the position of the frames displayed at larger scales in panels 1, 2, 3, 4 and 5; (b) zoom of the land cover classification; (c) zoom of the RGB GeoEye-1 image (© TPMO 2020) of 14 February; (d,e) ground pictures taken on 14 February.
Figure 7: Comparison between the GE-1 VHR image of 25 October 2018 (a–c) and the land cover classification extracted from the S2 image of 28 October 2018 (d–f).
Figure 8: (a) Extent of W and M areas extracted from L8 and S2 in 2019/2020; (b) extent of S and V areas extracted from L8 and S2 in 2019/2020.
Figure 9: Number of days between two usable images of the L8 and S2 datasets taken individually and of the merged dataset in 2019.
Figure 10: (a) Temporal trend of the area in the W class (open water); (b) temporal trend of the area in the W or M class (open water or mosaic of water, mud and vegetation).
Figure 11: Maps of land cover classes in February 2020 obtained from L8 and S2. The continuous line is the boundary of the Tancats area.
Figure 12: Days of water presence from October to April in 2019/20; (a) W class; (b) W class plus M class.
Figure 13: Average number of days of water presence in the W and W + M classes, in the Tancats and in the highlands.
22 pages, 7872 KiB  
Article
Detecting the Responses of CO2 Column Abundances to Anthropogenic Emissions from Satellite Observations of GOSAT and OCO-2
by Mengya Sheng, Liping Lei, Zhao-Cheng Zeng, Weiqiang Rao and Shaoqing Zhang
Remote Sens. 2021, 13(17), 3524; https://doi.org/10.3390/rs13173524 - 5 Sep 2021
Cited by 29 | Viewed by 3760
Abstract
The continuing increase in atmospheric CO2 concentration caused by anthropogenic CO2 emissions significantly contributes to climate change driven by global warming. Satellite measurements of long-term CO2 data with global coverage improve our understanding of global carbon cycles. However, the sensitivity of the space-borne measurements to anthropogenic emissions on a regional scale is less explored because of data sparsity in space and time caused by impacts from geophysical factors such as aerosols and clouds. Here, we used global land mapping column averaged dry-air mole fractions of CO2 (XCO2) data (Mapping-XCO2), generated from a spatio-temporal geostatistical method using GOSAT and OCO-2 observations from April 2009 to December 2020, to investigate the responses of XCO2 to anthropogenic emissions at both global and regional scales. Our results show that the long-term trend of global XCO2 growth rate from Mapping-XCO2, which is consistent with that from ground observations, shows interannual variations caused by the El Niño Southern Oscillation (ENSO). The spatial distributions of XCO2 anomalies, derived from removing background from the Mapping-XCO2 data, reveal XCO2 enhancements of about 1.5–3.5 ppm due to anthropogenic emissions and seasonal biomass burning in the wintertime. Furthermore, a clustering analysis applied to seasonal XCO2 clearly reveals the spatial patterns of atmospheric transport and terrestrial biosphere CO2 fluxes, which help better understand and analyze regional XCO2 changes that are associated with atmospheric transport. To quantify regional anomalies of CO2 emissions, we selected three representative urban agglomerations as our study areas, including the Beijing-Tian-Hebei region (BTH), the Yangtze River Delta urban agglomerations (YRD), and the high-density urban areas in the eastern USA (EUSA). The results show that the XCO2 anomalies in winter well capture the several-ppm enhancement due to anthropogenic CO2 emissions. For BTH, YRD, and EUSA, regional positive anomalies of 2.47 ± 0.37 ppm, 2.20 ± 0.36 ppm, and 1.38 ± 0.33 ppm, respectively, can be detected during winter months from 2009 to 2020. These anomalies are slightly higher than model simulations from CarbonTracker-CO2. In addition, we compared the variations in regional XCO2 anomalies and NO2 columns during the lockdown of the COVID-19 pandemic from January to March 2020. Interestingly, the results demonstrate that the variations of XCO2 anomalies have a positive correlation with the decline of NO2 columns during this period. These correlations, moreover, are associated with the features of emitting sources. These results suggest that we can use simultaneously observed NO2, because of its high detectivity and co-emission with CO2, to assist the analysis and verification of CO2 emissions in future studies. Full article
(This article belongs to the Section Urban Remote Sensing)
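A simplified version of the anomaly computation can be sketched as removal of a regional background from the gridded XCO2 field. In the example below the background is the median of a latitude band, which is an assumption standing in for the paper's background definition; the small synthetic field is likewise illustrative.

```python
import numpy as np

def xco2_anomaly(xco2_grid, lats, band_width=10.0):
    """XCO2 anomaly as the departure from a latitude-band background.

    The background here is the median XCO2 of all grid cells within the
    same latitude band, a simplified stand-in for the paper's background
    definition; anomalies are the per-cell departures from that background.
    """
    grid = np.asarray(xco2_grid, dtype=float)
    anomalies = np.full_like(grid, np.nan)
    bands = np.floor(lats / band_width) * band_width
    for b in np.unique(bands):
        rows = bands == b
        background = np.nanmedian(grid[rows, :])
        anomalies[rows, :] = grid[rows, :] - background
    return anomalies

# Toy 4 x 6 monthly XCO2 field (ppm) on 1-degree rows centred at the given latitudes
rng = np.random.default_rng(3)
lats = np.array([30.5, 31.5, 32.5, 33.5])
field = 410 + rng.normal(0, 0.5, (4, 6))
field[1, 2] += 2.5                       # a local enhancement over an emitting source
anom = xco2_anomaly(field, lats)
print(np.round(anom, 2))
```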
Fig">
Figure 1: Spatial distributions of the averaged dXCO2 from 2009 to 2018 calculated from Mapping-XCO2.
Figure 2: Spatial distributions of seasonal dXCO2 in winter (a) and in summer (b) calculated using Mapping-XCO2 from 2009 to 2020.
Figure 3: Long-term average of global CO2 emissions in a 1° grid from ODIAC during the period 2009 to 2019.
Figure 4: Time series of the global CO2 growth rate from 2009 to 2020 and comparison with ENSO indices. (a) Global growth rates of the long-term CO2 trend from Mapping-XCO2, CT-XCO2, and ground-based observations of CO2 data; (b) comparison of the satellite-derived growth rate (red line) and ENSO indices. The 1σ uncertainty range of the growth rates is shown as vertical lines. The original ENSO indices are shown as solid lines and time-shifted data as dotted lines.
Figure 5: Clustering results of seasonal XCO2 changes based on Mapping-XCO2 from 2009 to 2020 (a) and the temporal variations of clusters in the Northern Hemisphere (b). The line colors correspond to the clusters in (a).
Figure 6: Spatial distribution of potential temperature contours at 1000 hPa from 2009 to 2020.
Figure 7: Spatial distribution of correlation coefficients between seasonal XCO2 changes based on Mapping-XCO2 and NDVI from 2009 to 2020.
Figure 8: Location of source areas in China and the USA. (a) The areas for BTH and YRD; (b) the area for EUSA. The red boundary represents source areas. The clustering results from Figure 5 and the lines of potential temperature are also indicated.
Figure 9: Time series of regional XCO2 anomalies (ΔXCO2) in the source areas derived from Mapping-XCO2. The 1σ uncertainty estimate of the regional XCO2 anomalies is represented by the error bar, which is computed from the average mapping uncertainty and the standard deviation of the regional statistics.
Figure 10: Time series of NO2 columns and the differences of NO2 relative to the previous year. (a) Regional NO2 columns every 16 days, with the 1σ uncertainty estimate represented by the error bar; (b) contemporaneous differences of NO2 between 2019 and 2020.
Figure 11: Spatial distribution of changes in XCO2 anomalies and NO2 columns from January to March in 2020 and 2019. (a) Variations of XCO2 anomalies and (b) variations of NO2 columns. The bold gray lines represent provincial boundaries, while the thin gray lines represent city boundaries.
Figure 12: Comparison of NO2 variations and the changes of XCO2 anomalies for cities in (a) BTH and (b) YRD. The variations are relative differences in CO2 anomalies and NO2 columns from January to March in 2020 and 2019.
Figure 13: Spatio-temporal distribution of mapping uncertainties from Mapping-XCO2. (a) Averaged mapping uncertainty in a 1° latitudinal band and 1 month; (b) long-term averaged uncertainty from 2010 to 2020.
Figure A1: Workflow chart for generating Mapping-XCO2 using satellite XCO2 retrievals.
Figure A2: Spatial distributions of averaged dXCO2 from 2009 to 2018 calculated from CT-XCO2 following the same approach adopted for the Mapping-XCO2 dataset.
Figure A3: Comparison of Mapping-XCO2 and CT-XCO2 from 2010 to 2018. (a) Absolute mean difference of monthly gridded XCO2 between Mapping-XCO2 and CT-XCO2 from 2010 to 2018; (b) time series of the mean difference in the regions of the red boxes shown in (a), in which the shaded colors represent one standard deviation.
Figure A4: Spatial distributions of long-term averaged seasonal dXCO2 in winter (a) and in summer (b) calculated from CT-XCO2 from 2009 to 2018.
Figure A5: Clustering results of seasonal XCO2 changes using CT-XCO2 data from 2009 to 2019.
Figure A6: Spatial distribution of correlation coefficients in seasonal XCO2 changes between CT-XCO2 and NDVI from 2009 to 2019.
Figure A7: Time series of regional CO2 anomalies (ΔXCO2) in the source areas derived from CT-XCO2. The 1σ uncertainty estimate of the regional XCO2 anomalies is represented by the error bar, which is one standard deviation of the regional statistics.
25 pages, 11179 KiB  
Article
Mapping Crop Types and Cropping Systems in Nigeria with Sentinel-2 Imagery
by Esther Shupel Ibrahim, Philippe Rufin, Leon Nill, Bahareh Kamali, Claas Nendel and Patrick Hostert
Remote Sens. 2021, 13(17), 3523; https://doi.org/10.3390/rs13173523 - 5 Sep 2021
Cited by 43 | Viewed by 8540
Abstract
Reliable crop type maps from satellite data are an essential prerequisite for quantifying crop growth, health, and yields. However, such maps do not exist for most parts of Africa, where smallholder farming is the dominant system. Prevalent cloud cover, small farm sizes, and mixed cropping systems pose substantial challenges when creating crop type maps for sub-Saharan Africa. In this study, we provide a mapping scheme based on freely available Sentinel-2A/B (S2) time series and very high-resolution SkySat data to map the main crops—maize and potato—and intercropping systems including these two crops on the Jos Plateau, Nigeria. We analyzed the spectral-temporal behavior of mixed crop classes to improve our understanding of inter-class spectral mixing. Building on the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE), we preprocessed S2 time series and derived spectral-temporal metrics from S2 spectral bands for the main temporal cropping windows. These STMs were used as input features in a hierarchical random forest classification. Our results provide the first wall-to-wall crop type map for this key agricultural region of Nigeria. Our cropland identification had an overall accuracy of 84%, while the crop type map achieved an average accuracy of 72% for the five relevant crop classes. Our crop type map shows distinctive regional variations in the distribution of crop types. Maize is the dominant crop, followed by mixed cropping systems, including maize–cereals and potato–maize cropping; potato was found to be the least prevalent class. Plot analyses based on a sample of 1166 fields revealed largely homogeneous mapping patterns, demonstrating the effectiveness of our classification system also for intercropped classes, which are temporally and spatially highly heterogeneous. Moreover, we found that small field sizes were dominant in all crop types, regardless of whether or not intercropping was used. Maize–legume and maize exhibited the largest plots, with an area of up to 3 ha and slightly more than 10 ha, respectively; potato was mainly cultivated on fields smaller than 0.5 ha and only a few plots were larger than 1 ha. Besides providing the first spatially explicit map of cropping practices in the core production area of the Jos Plateau, Nigeria, the study also offers guidance for the creation of crop type maps for smallholder-dominated systems with intercropping. Critical temporal windows for crop type differentiation will enable the creation of mapping approaches in support of future smart agricultural practices for aspects such as food security, early warning systems, policies, and extension services. Full article
(This article belongs to the Collection Sentinel-2: Science and Applications)
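The classification chain (spectral-temporal metrics as features, random forest as classifier) can be sketched with synthetic time series. The metrics chosen, the class structure, and the reflectance curves below are assumptions for illustration; only the overall pattern of deriving per-pixel metrics over a time window and feeding them to a random forest follows the description in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def spectral_temporal_metrics(ts):
    """Spectral-temporal metrics for a (n_samples, n_dates) reflectance series:
    mean, std, and the 25th/50th/75th percentiles over the time axis."""
    return np.column_stack([
        ts.mean(axis=1),
        ts.std(axis=1),
        np.percentile(ts, 25, axis=1),
        np.percentile(ts, 50, axis=1),
        np.percentile(ts, 75, axis=1),
    ])

# Synthetic red and NIR time series for three crop classes (0 maize, 1 potato, 2 mixed)
rng = np.random.default_rng(4)
n_per_class, n_dates = 200, 12
X_list, y_list = [], []
for label, nir_peak in enumerate((0.55, 0.45, 0.50)):
    season = nir_peak * np.sin(np.linspace(0, np.pi, n_dates))
    nir = season + rng.normal(0, 0.04, (n_per_class, n_dates))
    red = 0.25 - 0.3 * season + rng.normal(0, 0.04, (n_per_class, n_dates))
    X_list.append(np.hstack([spectral_temporal_metrics(nir), spectral_temporal_metrics(red)]))
    y_list.append(np.full(n_per_class, label))
X, y = np.vstack(X_list), np.concatenate(y_list)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```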
Fig">
Figure 1: Overview map of Nigeria and a close-up of the Jos Plateau study area. The close-up includes the S2 tiling system as used in this study and the SkySat areas of interest (AOIs). See Section 2.2 for details on the satellite data.
Figure 2: Crop type classes: (A) maize; (B) potato; (C) potato–maize; (D) maize–legumes (in this case, soybean).
Figure 3: Cropping calendar showing critical windows (CW) and the 2019 mean monthly precipitation on the Jos Plateau (precipitation data accessed from http://www.worldweatheronline.com/, accessed on 10 November 2020).
Figure 4: NDVI-based phenology of maize, potato, and mixed crops for 2019 (±1 standard deviation).
Figure 5: Methodological flow chart: (A) preprocessing, STMs, and phenological analysis; (B) land cover mapping and cropland mask generation; (C) crop type mapping and field-level analysis.
Figure 6: Overall and per-class accuracies of crop type predictions. The boxplot shows the median (red line) and the lower and upper quartiles (black whiskers). Accuracies are derived from a 20-fold cross-validation using RF model parameterization.
Figure 7: Random forest prediction probabilities.
Figure 8: Cropping systems of the Jos Plateau. The "others" class includes grasses, fonio, yam, rice, and vegetables.
Figure 9: Samples of crop type predictions at the plot cluster level (see AOIs in Figure 1).
Figure 10: Density distribution of field size by crop type. The 25th, 50th, and 75th percentiles (boxplot in black) and the median (white line) are shown on top of the violin plots.
Figure 11: Field-level analysis of the fields sampled on the Jos Plateau: (A) distribution of field size; (B) distribution of field-level homogeneity.
27 pages, 8624 KiB  
Article
An Improved Cloud Gap-Filling Method for Longwave Infrared Land Surface Temperatures through Introducing Passive Microwave Techniques
by Thomas P. F. Dowling, Peilin Song, Mark C. De Jong, Lutz Merbold, Martin J. Wooster, Jingfeng Huang and Yongqiang Zhang
Remote Sens. 2021, 13(17), 3522; https://doi.org/10.3390/rs13173522 - 5 Sep 2021
Cited by 8 | Viewed by 3740
Abstract
Satellite-derived land surface temperature (LST) data are most commonly observed in the longwave infrared (LWIR) spectral region. However, such data suffer frequent gaps in coverage caused by cloud cover. Filling these ‘cloud gaps’ usually relies on statistical re-constructions using proximal clear sky LST pixels, whilst this is often a poor surrogate for shadowed LSTs insulated under cloud. Another solution is to rely on passive microwave (PM) LST data that are largely unimpeded by cloud cover impacts, the quality of which, however, is limited by the very coarse spatial resolution typical of PM signals. Here, we combine aspects of these two approaches to fill cloud gaps in the LWIR-derived LST record, using Kenya (East Africa) as our study area. The proposed “cloud gap-filling” approach increases the coverage of daily Aqua MODIS LST data over Kenya from <50% to >90%. Evaluations were made against the in situ and SEVIRI-derived LST data respectively, revealing root mean square errors (RMSEs) of 2.6 K and 3.6 K for the proposed method by mid-day, compared with RMSEs of 4.3 K and 6.7 K for the conventional proximal-pixel-based statistical re-construction method. We also find that such accuracy improvements become increasingly apparent when the total cloud cover residence time increases in the morning-to-noon time frame. At mid-night, cloud gap-filling performance is also better for the proposed method, though the RMSE improvement is far smaller (<0.3 K) than in the mid-day period. The results indicate that our proposed two-step cloud gap-filling method can improve upon performances achieved by conventional methods for cloud gap-filling and has the potential to be scaled up to provide data at continental or global scales as it does not rely on locality-specific knowledge or datasets. Full article
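A much-simplified version of the two-step idea (fill cloudy fine-resolution pixels from the coarse passive microwave LST, then bias-adjust using the clear-sky offset within the same coarse cell) can be sketched as follows. The block size, the synthetic fields, and the bias model are assumptions for the illustration and do not reproduce the paper's STDF or PMBC procedures.

```python
import numpy as np

def fill_lst_gaps(modis_lst, pm_lst_coarse, block=5):
    """Two-step cloud gap-filling sketch for a fine-resolution LST grid.

    Step 1: cloudy (NaN) fine pixels take the value of the overlapping coarse
    passive-microwave (PM) cell. Step 2: that value is bias-adjusted using the
    mean clear-sky offset between fine pixels and the PM cell, when available.
    """
    filled = modis_lst.copy()
    n_blocks_y, n_blocks_x = pm_lst_coarse.shape
    for by in range(n_blocks_y):
        for bx in range(n_blocks_x):
            sl = (slice(by * block, (by + 1) * block), slice(bx * block, (bx + 1) * block))
            fine = modis_lst[sl]
            pm_value = pm_lst_coarse[by, bx]
            clear = ~np.isnan(fine)
            bias = (fine[clear] - pm_value).mean() if clear.any() else 0.0
            filled[sl] = np.where(np.isnan(fine), pm_value + bias, fine)
    return filled

# Synthetic 10 x 10 MODIS LST field (K) with a cloud gap, and a 2 x 2 PM LST field
rng = np.random.default_rng(5)
truth = 300 + rng.normal(0, 1.0, (10, 10))
modis = truth.copy()
modis[2:6, 3:8] = np.nan                                 # cloud-contaminated pixels
pm = truth.reshape(2, 5, 2, 5).mean(axis=(1, 3)) - 1.5   # coarse PM LST with a cold bias
filled = fill_lst_gaps(modis, pm, block=5)
print("remaining gaps:", np.isnan(filled).sum())
```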
Fig">
Figure 1: Location and geographical settings of the study area, Kenya, Eastern Africa, and the layout of the in situ measurement sites whose LST data records are used as validation data herein. (a) Upper left: location of the ILRI Kapiti Research Station in Kenya and the locations of the LST validation masts at Kapiti; inset right: location of Kenya in Africa; lower pane: locations of the four LST measurement mast sites at ILRI Kapiti Research Station with the approximate response areas of the SEVIRI and MODIS pixels over the research station. (b) NDVI time series from the MODIS MYD13A2 16-day composite product for five typical pixels (A, B, C, D, and E) across Kenya over an entire phenological year from October 2018 to September 2019. (c) A 300 m resolution land cover classification map of Kenya in 2018 produced by the European Space Agency (ESA) Climate Change Initiative (CCI). (d) An NDVI map of Kenya at 500 m resolution on 22 October 2018, calculated using the MODIS MCD43A4 product; pixels with NDVI < -0.2 (which might indicate water) were screened out. (e) The resampled 1 km resolution Digital Elevation Model (DEM) of Kenya, derived from the 90 m NASA Shuttle Radar Topography Mission (SRTM) dataset (http://srtm.csi.cgiar.org, accessed on 7 February 2019).
Figure 2: Flowchart of the proposed cloud gap-filling approach for MODIS LWIR LST.
Figure 3: Comparison between land surface temperature (LST) data derived from passive microwave observations made by AMSR-2 and gridded into 25-km pixels, and LSTs derived from near-simultaneous Aqua MODIS observations (i.e., the MYD21A1 LST product) averaged over the same 25-km grid cells during the study period. Ordinary least squares linear best fits and the 1:1 line are shown in solid orange and black, respectively.
Figure 4: Demonstration of the cloud gap-filling approach for the Aqua MODIS LST data record. (a) Time series of LST fractional coverage of Kenya before and after gap-filling. (b) Proportional distribution of the bias adjustment fraction (0–1) in the gap-filled pixels of different dates. (c–e) Daytime LST maps from the different spaceborne sensors recorded near-simultaneously on 22 October 2018, and (f,g) the MODIS LST data of (c) now cloud gap-filled using the methodology detailed herein; all MODIS LST pixels with an NDVI < -0.2 (indicating the likely presence of surface water) have been screened out in these latter two datasets. (h) A bias adjustment map of cloud gap-filled LST (clear sky pixels eliminated) generated by subtracting (f) from (g).
Figure 5: Comparison of MODIS_Clear (green 'x'), MODIS_STDF (orange plus sign) and MODIS_PMBC (blue dot) data to corresponding in situ LSTs recorded using the mast-mounted IR radiometers of the ILRI Kapiti Research Station detailed in Figure 1. Day-time data are collected around 1:30 p.m. local solar time, whereas night-time data are around 1:30 a.m. The 1:1 line is also shown, and comparison statistics are presented in Table 3. The clearest benefit of the bias-adjusted gap filling is seen in the coolest temperatures of the day-time record.
Figure 6: Time series of MODIS cloud gap-filled daytime LSTs and corresponding in situ LSTs recorded at the locations of (a) Mast 1, (b) Masts 2 and 3 (averaged), and (c) Mast 4 of the ILRI Kapiti Research Station. Cloud gap-filled LSTs are only shown when both Step 1 and Step 2 of the cloud gap-filling methodology were applied as detailed in Section 3 ("non-bias adjusted" and "bias adjusted", respectively). Dates marked by the box and arrows are those examined in Section 4.4.
Figure 7: Time series of MODIS cloud gap-filled night-time LSTs and corresponding in situ LSTs recorded at the locations of (a) Mast 1, (b) Masts 2 and 3 (averaged), and (c) Mast 4 of the ILRI Kapiti Research Station.
Figure 8: Comparison of Aqua MODIS and Meteosat SEVIRI-derived land surface temperatures (LSTs): (a) day-time data and (b) night-time data. MODIS data include both the clear-sky MODIS LSTs from MYD11A and the output from Step 1 and Step 2 of the cloud gap-filling methodology detailed in Section 3. Colour bars indicate the number of co-located observations within the plotting space.
Figure 9: RMSE of MODIS_Clear, MODIS_STDF, and MODIS_PMBC against SEVIRI LST for different land cover types in the day (top) and at night (bottom). Numbers in brackets denote the pixel numbers of each land cover type.
Figure 10: Difference between the RMSE for MODIS_STDF (or MODIS_PMBC) and the RMSE for MODIS_Clear against SEVIRI LST (left y-axis), and the fraction of clear-sky LST pixels (0–100%) on a daily basis (length of the red bar, right y-axis). Daytime and night-time results are shown in the top and bottom panels, respectively.
Figure 11: Time series of cloud state reported using the in situ data record from the upward-pointing LWIR radiometer installed at the ILRI Kapiti Research Station between 7:30 a.m. and 1:30 p.m. for the four dates selected and reported in Figure 6: (a) 22 October 2018; (b) 2 November 2018; (c) 19 December 2018; (d) 15 January 2019. A value of 1 = stable cloud was present for at least the prior 15 min; a value of 0 = stable cloud was not present for the 25 min prior to the LST derivation. Cloud duration fraction (CDF) and Δbias values are reported for each sub-figure and defined at the start of Section 4.4.
Figure 12: Outcome of the cloud duration analysis for the whole of Kenya with SEVIRI. Mean (solid point) and standard deviation (dotted line) of the evaluation metric Δbias of the cloud gap-filled MODIS LST dataset for different cloud duration fraction (CDF) groups. The evaluation is made at the pixel level of the SEVIRI observations (pixel sizes of around 4 km over Kenya). This analysis highlights that during the day, the STDF+PMBC approach results in improved performance over the STDF-only approach, and this improvement increases with cloud residence time, as indicated by the CDF. At night, bias correction offers little additional performance improvement.
23 pages, 8342 KiB  
Article
CCT: Conditional Co-Training for Truly Unsupervised Remote Sensing Image Segmentation in Coastal Areas
by Bo Fang, Gang Chen, Jifa Chen, Guichong Ouyang, Rong Kou and Lizhe Wang
Remote Sens. 2021, 13(17), 3521; https://doi.org/10.3390/rs13173521 - 5 Sep 2021
Cited by 7 | Viewed by 3160
Abstract
As the fastest growing trend in big data analysis, deep learning has proven to be both an unprecedented breakthrough and a powerful tool in many fields, particularly for image segmentation tasks. Nevertheless, most achievements depend on high-quality pre-labeled training samples, which are labor-intensive and time-consuming to produce. Furthermore, unlike conventional natural images, coastal remote sensing images generally carry far more complicated and extensive land cover information, making it difficult to produce pre-labeled references for supervised image segmentation. Motivated by this observation, we investigate in depth the use of neural networks for unsupervised learning and propose a novel method, conditional co-training (CCT), specifically for truly unsupervised remote sensing image segmentation in coastal areas. A multi-model framework consisting of two parallel data streams, superpixel-based over-segmentation and pixel-level semantic segmentation, is proposed to perform the pixel-level classification jointly. The former processes the input image into multiple over-segments, providing self-constrained guidance for model training. Meanwhile, with this guidance, the latter continuously processes the input image into multi-channel response maps until the model converges. Driven by multiple conditional constraints, our framework learns to extract high-level semantic knowledge and produce full-resolution segmentation maps without pre-labeled ground truths. Compared to black-box solutions trained in conventional supervised manners, this method offers stronger explainability and transparency owing to its specific architecture and mechanism. Experimental results on two representative real-world coastal remote sensing segmentation datasets, and comparisons with other state-of-the-art truly unsupervised methods, validate the convincing performance and excellent efficiency of the proposed CCT. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI) in Remote Sensing Big Data)
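The self-constrained guidance between the two streams can be illustrated with a toy pseudo-labeling step: each superpixel imposes its majority predicted class on all of its pixels. This is a simplified stand-in for the conditional constraints used in CCT, not the method itself; the label maps below are invented for the example.

```python
import numpy as np

def superpixel_pseudo_labels(pred_labels, superpixels):
    """Assign every pixel the majority predicted class of its superpixel.

    This mimics the self-constrained guidance idea: the over-segmentation
    stream provides region-level consistency targets for the pixel-level
    stream. Both inputs are integer label maps of the same shape.
    """
    pseudo = np.empty_like(pred_labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        votes = np.bincount(pred_labels[mask])
        pseudo[mask] = votes.argmax()
    return pseudo

# Toy 4 x 6 example: two superpixels, noisy pixel-level predictions
superpixels = np.array([[0, 0, 0, 1, 1, 1]] * 4)
pred = np.array([[0, 0, 1, 2, 2, 2],
                 [0, 1, 0, 2, 0, 2],
                 [0, 0, 0, 2, 2, 2],
                 [1, 0, 0, 2, 2, 1]])
print(superpixel_pseudo_labels(pred, superpixels))
```

In a full framework these pseudo-labels would drive a loss for the pixel-level decoder; here they are only materialized as arrays.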
Fig">
Figure 1: Illustration of specific characteristics in coastal remote sensing images.
Figure 2: Framework of our proposed conditional co-training (CCT) for truly unsupervised remote sensing image segmentation in coastal areas.
Figure 3: Network architectures of our encoder and decoders.
Figure 4: Overview of the coastal remote sensing datasets: (a) Shanghai dataset, (b) Zhejiang dataset.
Figure 5: Training visualization of our multi-model framework; (a–e) are the periodic image segmentation results at iterations (a) 50, (b) 100, (c) 150, (d) 200, and (e) max. The left three images are from the Shanghai dataset, while the right ones are from the Zhejiang dataset.
Figure 6: Representative examples of unsupervised remote sensing image segmentation results on the Shanghai dataset: (a) gPb, (b) UCM, (c) BackProp, (d) DFC, (e) our CCT.
Figure 7: Representative examples of unsupervised remote sensing image segmentation results on the Zhejiang dataset: (a) gPb, (b) UCM, (c) BackProp, (d) DFC, (e) our CCT.
Figure 8: Representative examples of image segmentation results on the Shanghai dataset, influenced by diverse settings of our image filter: (a) superpixel number equals K/2, (b) superpixel number equals K, (c) superpixel number equals 2K. The left image in each group is the superpixel-based over-segmentation map, while the right one is the image segmentation result.
Figure 9: Representative examples of image segmentation results on the Zhejiang dataset, influenced by diverse settings of our image filter: (a) superpixel number equals K/2, (b) superpixel number equals K, (c) superpixel number equals 2K. The left image in each group is the superpixel-based over-segmentation map, while the right one is the image segmentation result.
Figure 10: Representative examples of image segmentation results driven by diverse designs of our three deep learning models: (a) no down- and up-sampling operations, (b) one down- and up-sampling operation, (c) two down- and up-sampling operations. The left three images are from the Shanghai dataset, while the right ones are from the Zhejiang dataset.
25 pages, 8977 KiB  
Article
SVG-Loop: Semantic–Visual–Geometric Information-Based Loop Closure Detection
by Zhian Yuan, Ke Xu, Xiaoyu Zhou, Bin Deng and Yanxin Ma
Remote Sens. 2021, 13(17), 3520; https://doi.org/10.3390/rs13173520 - 5 Sep 2021
Cited by 18 | Viewed by 3642
Abstract
Loop closure detection is an important component of visual simultaneous localization and mapping (SLAM). However, most existing loop closure detection methods are vulnerable to complex environments and use limited information from images. As higher-level image information and multi-information fusion can improve the robustness [...] Read more.
Loop closure detection is an important component of visual simultaneous localization and mapping (SLAM). However, most existing loop closure detection methods are vulnerable to complex environments and use limited information from images. As higher-level image information and multi-information fusion can improve the robustness of place recognition, a semantic–visual–geometric information-based loop closure detection algorithm (SVG-Loop) is proposed in this paper. In detail, to reduce the interference of dynamic features, a semantic bag-of-words model was first constructed by connecting visual features with semantic labels. Second, to improve detection robustness in different scenes, a semantic landmark vector model was designed by encoding the geometric relationships of the semantic graph. Finally, semantic, visual, and geometric information was integrated by fusing the outputs of the two modules. Compared with state-of-the-art methods, experiments on the TUM RGB-D dataset, the KITTI odometry dataset, and a practical environment show that SVG-Loop has advantages in complex environments with varying light, changeable weather, and dynamic interference. Full article
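The fusion step described above can be pictured with a small score-level sketch: one similarity computed from a bag-of-words vector and one from a vector encoding semantic landmarks and their rough geometry, combined into a single loop-closure score. The landmark encoding, the weighted-sum fusion, and the threshold below are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of fusing visual and semantic-geometric similarities for
# loop closure scoring. All weights and thresholds are illustrative assumptions.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors; 0 if either is all zeros."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def semantic_landmark_vector(labels, positions, n_classes):
    """Encode which semantic classes appear and their mean image position,
    a crude stand-in for the paper's geometric encoding of the semantic graph."""
    vec = np.zeros(3 * n_classes)
    for c in range(n_classes):
        pts = positions[labels == c]
        if len(pts):
            vec[3 * c] = len(pts)                          # class frequency
            vec[3 * c + 1: 3 * c + 3] = pts.mean(axis=0)   # mean (x, y) location
    return vec

def loop_closure_score(bow_query, bow_cand, slv_query, slv_cand, w_visual=0.6):
    """Fuse the two similarities with an assumed weighted sum."""
    s_visual = cosine(bow_query, bow_cand)
    s_semantic = cosine(slv_query, slv_cand)
    return w_visual * s_visual + (1.0 - w_visual) * s_semantic

# Usage: flag a loop candidate when the fused score exceeds a tuned threshold.
# is_loop = loop_closure_score(q_bow, c_bow, q_slv, c_slv) > 0.8
```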
Show Figures

Graphical abstract
Figure 1: The pipeline of the proposed SVG-Loop. The blue box on the left is the semantic bag-of-words model and the purple box on the right is the semantic landmark vector model.
Figure 2: Extraction process of semantic–visual words. Corners are extracted by feature extraction and semantic labels are acquired by panoptic segmentation. Similar descriptors with the same semantic labels are classified as semantic–visual words through clustering.
Figure 3: Example of vocabulary structure and index information. The tree structure mainly includes the semantic layer and the feature layer. The inverse index records the weight of the words in images containing them. The direct index stores features of the images and semantic labels of associated nodes at different levels of the vocabulary tree.
Figure 4: Process of the semantic landmark vector model.
Figure 5: Generation of semantic descriptor and semantic landmark vector: (a) result of panoptic segmentation, (b) semantic graph, (c) semantic descriptor generation, and (d) semantic landmark vector of the image.
Figure 6: Experimental result for the fr2_desk sequence. (a) The trajectory graph of LDSO, ORB-SLAM3, and ORB-SLAM3 + SVG-Loop. (b) Images of the loop closure scene.
Figure 7: Experimental result for the fr3_long_office sequence. (a) The trajectory graph of LDSO, ORB-SLAM3, and ORB-SLAM3 + SVG-Loop. (b) Images of the loop closure scene.
Figure 8: Evaluation results of the trajectory in the fr2_desk sequence. (a) Histogram of APE. (b) Box diagram of APE.
Figure 9: Evaluation results of the trajectory in the fr3_long_office sequence. (a) Histogram of APE. (b) Box diagram of APE.
Figure 10: POR curve of DBoW2, OpenFABMAP, SRLCD, BoTW-LCD, and SVG-Loop on the KITTI dataset. (a) Sequence 00. (b) Sequence 02. (c) Sequence 05. (d) Sequence 06.
Figure 11: Performance of the SVG-Loop method on the KITTI odometry dataset. The horizontal position is maintained, and the vertical axis is replaced with frame ID. (a) Results of loop closure detection in sequence 00. (b) Results in sequence 02. (c) Results in sequence 05. (d) Results in sequence 06.
Figure 12: Data acquisition sensor and collection scenes. (a) Data acquisition sensor in different environments. (b) Collection scenes in the outdoor test.
Figure 13: Example of experimental results with light change in an indoor environment. (a) Process of loop closure detection without light change. (b) Process of loop closure detection with light change.
Figure 14: Trajectory of the data collection vehicle.
Figure 15: Performance of the SVG-Loop method on the practical outdoor dataset. (a) Two-dimensional position trajectory over time and detection result of SVG-Loop in loop 1. (b) Three-dimensional position trajectory and detection result of SVG-Loop in loop 1.
Figure 16: Examples of correct loop scenes detected by SVG-Loop. Each column represents a pair of loop scenes. (a) Results of SVG-Loop in loop 1. (b) Results of SVG-Loop in loop 2. (c) Results of SVG-Loop in loop 3.
13 pages, 3375 KiB  
Case Report
Systematic Approach for Tunnel Deformation Monitoring with Terrestrial Laser Scanning
by Dongfeng Jia, Weiping Zhang and Yanping Liu
Remote Sens. 2021, 13(17), 3519; https://doi.org/10.3390/rs13173519 - 4 Sep 2021
Cited by 24 | Viewed by 4207
Abstract
The use of terrestrial laser scanning (TLS) point clouds for tunnel deformation measurement has elicited much interest. However, general methods of point-cloud processing in tunnels are still under investigation, given the high accuracy and efficiency requirements in this area. This study discusses a [...] Read more.
The use of terrestrial laser scanning (TLS) point clouds for tunnel deformation measurement has elicited much interest. However, general methods of point-cloud processing in tunnels are still under investigation, given the high accuracy and efficiency requirements in this area. This study discusses a systematic method of analyzing tunnel deformation. Point clouds from different stations need to be registered rapidly and with high accuracy before point-cloud processing. An orientation method of TLS in tunnels (OMTLS) that uses a laboratory-made positioning base is proposed for fast point-cloud registration, and the calibration methods of the positioning base are demonstrated herein. In addition, an improved moving least-squares method is proposed to reconstruct the centerline of a tunnel from unorganized point clouds. The normal planes of the centerline are then calculated and serve as reference planes for point-cloud projection. The convergence of the tunnel cross-section is analyzed based on each point-cloud slice to determine the safety status of the tunnel. Furthermore, the results of the deformation analysis of a particular shield tunnel site are briefly discussed. Full article
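As a rough illustration of the cross-section step described above, the sketch below slices a point cloud with a plane normal to an already reconstructed centerline point and fits a circle to the projected points; comparing fitted radii across epochs gives a simple convergence indicator. The slab width and the algebraic circle fit are illustrative choices, not the paper's exact procedure.

```python
# Minimal sketch: extract a tunnel cross-section slice and fit a circle to it.
# Assumes the centerline point and tangent are already available (e.g., from a
# moving least-squares reconstruction); parameters are illustrative.
import numpy as np

def cross_section_slice(points, center, tangent, half_width=0.05):
    """Select cloud points within +/- half_width of the plane through `center`
    with normal `tangent`, and project them into 2D plane coordinates."""
    t = tangent / np.linalg.norm(tangent)
    d = (points - center) @ t                        # signed distance along tangent
    slab = points[np.abs(d) < half_width]
    # Build an orthonormal basis (u, v) spanning the normal plane.
    u = np.cross(t, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:                     # tangent nearly vertical
        u = np.cross(t, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(t, u)
    rel = slab - center
    return np.column_stack((rel @ u, rel @ v))       # (N, 2) section points

def fit_circle(xy):
    """Algebraic least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack((x, y, np.ones_like(x)))
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r                               # section center and radius

# Comparing fitted radii of the same section between scanning epochs gives a
# simple indicator of cross-section convergence.
```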
Show Figures

Graphical abstract
Figure 1: Flow diagram of the proposed method.
Figure 2: Principle of OMTLS.
Figure 3: PB model.
Figure 4: Rigorous calibration field.
Figure 5: Point cloud section extraction.
Figure 6: Feature point generation and curve approximation.
Figure 7: Generation of the tunnel cross-section.
Figure 8: Cross-section generation.
Figure 9: Test tunnel site.
Figure 10: Implementation of OMTLS.
Figure 11: Cross-section generation.
Figure 12: Deviation using the different methods.