
Remote Sens., Volume 13, Issue 16 (August-2 2021) – 287 articles

Cover Story (view full-size image): Management of fertilizers is an important agricultural practice and field of research to minimize environmental impacts and cost of production. Applying fertilizer at the right rate, time, and place depends on the crop type, desired yield, and field conditions. In this paper, unmanned aerial vehicle multispectral imagery, vegetation indices, crop height, field topographic metrics, and soil properties were combined in machine learning models to predict canopy nitrogen (N). Compared to common modeling using only spectral variables, the inclusion of crop and environmental parameters improved N prediction allowing for more effective and efficient N fertilizer applications. The cover image was taken during ground data collection in the corn field, highlighting both the contrast and combination of nature and technology in agriculture today. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
39 pages, 34205 KiB  
Article
DV-LOAM: Direct Visual LiDAR Odometry and Mapping
by Wei Wang, Jun Liu, Chenjie Wang, Bin Luo and Cheng Zhang
Remote Sens. 2021, 13(16), 3340; https://doi.org/10.3390/rs13163340 - 23 Aug 2021
Cited by 37 | Viewed by 8424
Abstract
Self-driving cars have experienced rapid development in the past few years, and Simultaneous Localization and Mapping (SLAM) is considered one of their basic capabilities. In this article, we propose a direct visual LiDAR fusion SLAM framework that consists of three modules. Firstly, a two-staged direct visual odometry module, consisting of a frame-to-frame tracking step and an improved sliding-window-based thinning step, is proposed to estimate the pose of the camera accurately while maintaining efficiency. Secondly, every time a keyframe is generated, a LiDAR mapping module that accounts for dynamic objects is used to refine the pose of the keyframe for higher positioning accuracy and better robustness. Finally, a Parallel Global and Local Search Loop Closure Detection (PGLS-LCD) module that combines visual Bag of Words (BoW) and LiDAR-Iris features is applied for place recognition to correct the accumulated drift and maintain a globally consistent map. We conducted extensive experiments on public datasets and our mobile robot dataset to verify the effectiveness of each module in our framework. Experimental results show that the proposed algorithm achieves more accurate pose estimation than state-of-the-art methods. Full article
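The frame-to-frame tracking step above minimizes photometric error over patches anchored at salient LiDAR points (see Figures 3 and 4 below). As a hedged illustration only, and not the paper's actual implementation (which uses patch patterns, robust weighting and a Gauss-Newton solver over SE(3)), here is a minimal sketch of the per-point photometric residual; all names and the nearest-neighbour sampling are our own simplifications:

```python
import numpy as np

def photometric_residuals(img_ref, img_cur, pts_ref, K, T):
    """Photometric residuals of salient LiDAR points for direct tracking.

    img_ref, img_cur : grayscale images as 2-D float arrays.
    pts_ref          : (N, 3) LiDAR points expressed in the reference camera frame.
    K                : (3, 3) camera intrinsic matrix.
    T                : (4, 4) candidate pose of the current frame w.r.t. the reference.
    """
    # Project the reference points into the reference image to read intensities.
    uv_ref = (K @ pts_ref.T).T
    uv_ref = uv_ref[:, :2] / uv_ref[:, 2:3]

    # Transform the points into the current frame and project them.
    pts_h = np.hstack([pts_ref, np.ones((len(pts_ref), 1))])
    pts_cur = (T @ pts_h.T).T[:, :3]
    uv_cur = (K @ pts_cur.T).T
    uv_cur = uv_cur[:, :2] / uv_cur[:, 2:3]

    def sample(img, uv):
        # Nearest-neighbour lookup; a real tracker would interpolate bilinearly.
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, img.shape[1] - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, img.shape[0] - 1)
        return img[v, u]

    # One residual per point; minimizing a robust norm of these over T yields the pose.
    return sample(img_ref, uv_ref) - sample(img_cur, uv_cur)
```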
Graphical abstract
Figure 1: Algorithm overview of the proposed Direct Visual LiDAR Odometry and Mapping. Our main contributions are highlighted in green.
Figure 2: Salient point selection. (a) All LiDAR points projected onto the image plane without selection. (b) The selected salient points.
Figure 3: (a) The patch pattern used in our approach: the colored cells are the pixels used in the direct tracking process, and all pixels share the depth of the central dark cell, obtained from the LiDAR points. (b) Illustration of the patch-based direct tracking problem.
Figure 4: Illustration of sliding-window-based optimization. The light blue area represents the local keyframe window W_KF, which consists of N keyframes, each holding its image data and LiDAR points. Black dots represent the salient LiDAR points and colored rectangles the patches in the image. A point p_k^1 in the first keyframe of W_KF is projected into the current frame with the transform T_1^c. A photometric residual is then calculated between the patch at the existing point and the patch at the projected point.
Figure 5: LiDAR feature extraction flowchart. Yellow represents the input, green the output, gray rectangles the processing steps, and rounded rectangles the variables.
Figure 6: Feature extraction for a scan (000000.bin) of sequence 00 of the KITTI odometry benchmark. (a) All laser points in the current frame. (b) Ground segmentation result. (c) Segmentation of the non-ground point cloud, with colors distinguishing the clusters. (d,e) The extracted planar points and edge points, respectively. (f) Visualization of the ground points (green), planar points (yellow) and edge points (blue).
Figure 7: (a) Illustration of scan-to-map matching and map registration. (b) Point-to-line residual. (c) Point-to-plane residual.
Figure 8: Affinity matrices obtained by the BoW and LiDAR-Iris methods on two sequences. The first row corresponds to KITTI sequence 05, the second to KITTI sequence 08. (a) Ground-truth loop closures with trajectory. (b) Ground-truth affinity matrix. (c) BoW affinity matrix. (d) LiDAR-Iris affinity matrix.
Figure 9: Flowchart of the proposed PGLS loop closure detection.
Figure 10: Feature extraction for loop closure detection. (a) Encoding the height information of surrounding objects into the LiDAR-Iris image. (b) Two LiDAR-Iris images (bottom row) extracted from the bird's-eye views of two LiDAR point clouds (top row) in a 3D (x, y, yaw) pose space. (c) ORB features extracted from the raw image. (d) Rotation invariance when matching the two point clouds shown in (b).
Figure 11: Flowchart of the geometry consistency check. The source and target point clouds are stitched in the coordinate frames of the query and candidate keyframes using their neighboring keyframes. From the FPFH feature extraction and matching step onward, the yellow point cloud is the source and the gray one the target; red and blue points are FPFH feature points, and green lines show their matching relationships. For clarity, only some of the feature points and matches are shown.
Figure 12: Trajectories on the training set; the corresponding sequence error results are given in Table 2. The gray dotted trajectory is the ground truth, collected with differential GPS whose localization precision is at the cm level. The optimized odometry of our scheme is shown in blue (Ours-) and the trajectory with loop closure in orange (Ours*); all trajectories are aligned with the ground truth.
Figure 13: Error analysis on the KITTI training dataset for A-LOAM, DEMO, DVL-SLAM, LIMO, Huang et al. [35], and ours. Since we only obtained LIMO's error curves with respect to vehicle speed (a,b), the curves in terms of path length (c,d) show A-LOAM, DEMO, DVL-SLAM, Huang et al. [35], and ours. (e,f) show the distributions of path segment length and vehicle speed.
Figure 14: Change of position and orientation relative to the ground truth on KITTI sequence 00. The trajectories were transformed into the coordinate system of camera 0 through the extrinsic parameters (xyz→zxy). (a) Camera position. (b) Camera Euler angles.
Figure 15: Mapping results of our approach on (a) sequence 01, (b) sequence 05, (c) sequence 06, (d) sequence 07 and (e) sequence 00 of the KITTI Vision Benchmark, with some representative regions.
Figure 16: Mapping results for representative scenes of the nuScenes dataset. In each column, from left to right: a sample image of the scene, the point cloud map color-encoded with images, and the full LiDAR point cloud map. The red spheres represent the estimated camera trajectory.
Figure 17: Data recording platform and collection environment. (a) Data collection environment; (b) data collection platform; (c,d) sample data collected at the red and yellow boxes in (a), with points color-coded by depth.
Figure 18: Trajectories of the proposed method and state-of-the-art LiDAR-based methods (A-LOAM and LeGO-LOAM): (a) 2D trajectories in the x-z plane; (b) the corresponding 3D trajectories.
Figure 19: Top, side and zoomed views of the final 3D campus maps obtained with (a) our method and (b) LeGO-LOAM.
Figure 20: Mapping result of our approach on the campus dataset, with representative regions (A–F).
Figure 21: Motion tracking trajectories on several representative KITTI training sequences: our direct frame-to-frame visual LiDAR odometry (Ours FF), the direct visual LiDAR odometry with the sliding-window optimization module (Ours FF_SW), and LiDAR scan-matching frame-to-frame odometry (A-LOAM FF), each compared with the ground truth.
Figure 22: Visual odometry error analysis on the KITTI training dataset.
Figure 23: Loop closure detection results on several KITTI sequences: (a) 00, (b) 05, (c) 08 and (d) 02. The ground-truth trajectory is plotted as a dashed gray line; the blue and red lines are the stereo ORB-SLAM2 and our trajectories, respectively. Loop closures detected by our approach are drawn as red circles; those of LeGO-LOAM and stereo ORB-SLAM2 as triangles and pentagons, respectively.
Figure 24: Ground-truth loop closure segments of several KITTI sequences: (a) sequence 00 with 4 segments; (b) sequence 02 with 3; (c) sequence 05 with 3; (d) sequence 08 with 3. The top row shows the GPS trajectory plotted from light to dark over time; the bottom row shows the corresponding ground-truth affinity matrices, with matching colors indicating the correspondence between trajectory and affinity matrix.
Figure 25: Processing time of each module on KITTI sequence 00.
Figure 26: (a) Sample images of our campus dataset with a moving vehicle. (b) Trajectories of LiDAR-based odometry and direct visual-LiDAR fusion odometry. (c) Edge points extracted from the LiDAR point cloud; they tend to lie on object boundaries.
23 pages, 3686 KiB  
Article
Urban Growth Derived from Landsat Time Series Using Harmonic Analysis: A Case Study in South England with High Levels of Cloud Cover
by Matthew Nigel Lawton, Belén Martí-Cardona and Alex Hagen-Zanker
Remote Sens. 2021, 13(16), 3339; https://doi.org/10.3390/rs13163339 - 23 Aug 2021
Cited by 4 | Viewed by 2923
Abstract
Accurate detection of spatial patterns of urban growth is crucial to the analysis of urban growth processes. A common practice is to use post-classification change analysis, overlaying multiple independently derived land cover layers. This approach is problematic, as propagation of classification errors can lead to overestimation of change by an order of magnitude. This paper contributes to the growing literature on change classification using pixel-based time series analysis. In particular, we have developed a method that identifies change in the urban fabric at the pixel level based on breaks in the seasonal and year-on-year trend of the normalised difference vegetation index (NDVI). The method is applied to a case study area in the south of England that is characterised by high levels of cloud cover, using the Landsat data archive over the period 1984–2018. The performance of the method was assessed using 500 ground truth points, randomly selected and manually assessed for change using high-resolution earth observation imagery. The method identifies pixels where a land cover change occurred with a user's accuracy for the change class of 45.3 ± 4.45% and outperforms a post-classification analysis of an otherwise more advanced land cover product, which achieved a user's accuracy of 17.8 ± 3.42%. The method performs better where changes exhibit large differences in NDVI dynamics amongst land cover types, such as the transition from agricultural to suburban, and less so where small differences of NDVI are observed, such as changes within pixels that are already densely built up. The method proved relatively robust to outliers and missing data, for example under high levels of cloud cover, but does rely on a period of data availability before and after the change event. Future developments to improve the method are to incorporate spectral information other than NDVI and to consider multiple change events per pixel over the analysed period. Full article
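The core of the method is a harmonic (seasonal) model fitted separately before and after each candidate time of change, with the candidate minimizing the RMSE taken as the detected break (cf. Figure 3D,E below). A minimal sketch under our own simplifying assumptions: a single annual harmonic plus a linear trend, and candidate break times that leave enough observations on each side for a stable fit.

```python
import numpy as np

def harmonic_design(t):
    # Annual harmonic plus linear trend: NDVI(t) ~ a + b*t + c*sin(2*pi*t) + d*cos(2*pi*t),
    # with t in decimal years.
    return np.column_stack([np.ones_like(t), t,
                            np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

def rmse_of_break(t, ndvi, t_break):
    """RMSE of separate harmonic fits before and after an assumed time of change."""
    total = 0.0
    for mask in (t < t_break, t >= t_break):
        X, y = harmonic_design(t[mask]), ndvi[mask]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        total += np.sum((y - X @ coef) ** 2)
    return np.sqrt(total / len(t))

def detect_change(t, ndvi, candidates):
    """Return the candidate break time with the lowest RMSE and that RMSE."""
    scores = [rmse_of_break(t, ndvi, tb) for tb in candidates]
    return candidates[int(np.argmin(scores))], min(scores)
```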
Figure 1: Location of Swindon in the UK relative to London. Yellow A: study area. Red B: Haydon Wick; C: Blunsdon Bypass; D: South Marston industrial complex; E: East Wichel.
Figure 2: Locations of the ground truth and accuracy assessment points. (A) The 500 points used to assess the threshold values. (B) The 500 points used in the accuracy assessment of the model and PCC. (C) The 500 training points selected from the change class for classification. (D) The 300 points used in the accuracy assessment of the change classification. (E) The 100 points from the Haydon Wick area used to assess the dating of change (location B in Figure 1). (F) The single point from Set B used to demonstrate the methodology (Figure 3).
Figure 3: Illustration of the method on a single pixel. (A–C) High-resolution images across the change occurrence; the spot marks the pixel centre. (D) Goodness-of-fit as a function of the assumed time of change; the threshold is h = 0.93. (E) NDVI trend of the observations, along with the fitted functions; the black line corresponds to the lowest RMSE, indicating the time of change.
Figure 4: PA and UA of the change class, and OA and WK comparison, for all values of h. Note that for UA, OA, K and the F1 score, partial-change is counted as no-change; for WK, partial-change is in half agreement with change and full agreement with no-change.
Figure 5: Land cover change map produced for the period 2006–2015.
Figure 6: Comparison of the harmonic model output with a PCC land cover map from [35,36]. Uncoloured areas were detected as no-change by both methods. The grey line is the outline of the urban extent as of 2015.
Figure 7: Bars show the range of high-resolution images denoting the period of possible change; grey bars show change and black bars partial-change. The yellow line marks the middle of the high-resolution image range. Abbreviations correspond to Table 2.
Figure 8: Box plots of the parameters of the training data (Set C) used to classify the type of change: (A) mean value of the NDVI trend before and after the change; (B) amplitude before and after the change.
Figure 9: Classified change map produced using a random forest classifier. The grey line represents the urban extent as of 2015 and may be used to qualitatively assign urban change to edge-expansion, infill and leapfrog growth.
20 pages, 1939 KiB  
Article
Hybrid Spatial–Temporal Graph Convolutional Networks for On-Street Parking Availability Prediction
by Xiao Xiao, Zhiling Jin, Yilong Hui, Yueshen Xu and Wei Shao
Remote Sens. 2021, 13(16), 3338; https://doi.org/10.3390/rs13163338 - 23 Aug 2021
Cited by 24 | Viewed by 3772
Abstract
With the development of sensors and the Internet of Things (IoT), smart cities can provide people with a variety of information for a more convenient life. Effective on-street parking availability prediction can improve parking efficiency and, at times, alleviate city congestion. Conventional methods of parking availability prediction often do not consider the spatial–temporal features of parking duration distributions. To this end, we propose a parking space prediction scheme called the hybrid spatial–temporal graph convolution networks (HST-GCNs). We use graph convolutional networks and gated linear units (GLUs) with a 1D convolutional neural network to obtain the spatial features and the temporal features, respectively. Then, we construct a spatial–temporal convolutional block to obtain the instantaneous spatial–temporal correlations. Based on the similarity of the parking duration distributions, we propose an attention mechanism called distAtt to measure the similarity of parking duration distributions. Through the distAtt mechanism, we add the long-term spatial–temporal correlations to our spatial–temporal convolutional block, and thus we can capture complex hybrid spatial–temporal correlations to achieve a higher accuracy of parking availability prediction. Based on real-world datasets, we compare the proposed scheme with the benchmark models. The experimental results show that the proposed scheme has the best performance in predicting the parking occupancy rate. Full article
(This article belongs to the Special Issue Data Mining and Machine Learning in Urban Applications)
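Figure 3 below uses a gated linear unit: a linear path modulated element-wise (Hadamard product ⊙) by a sigmoid gate σ(·). A minimal PyTorch sketch of a GLU-gated temporal convolution applied independently per graph node; the class name, tensor layout and kernel size are our illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class GLUTemporalConv(nn.Module):
    """Gated temporal convolution: (X * W1 + b1) ⊙ σ(X * W2 + b2).

    Input shape: (batch, channels, num_nodes, time). The convolution runs
    along the time axis independently for every parking area (graph node).
    """
    def __init__(self, c_in, c_out, kernel_size=3):
        super().__init__()
        # A single Conv2d with 2*c_out channels produces both the linear
        # path and the gate in one pass; kernel (1, k) convolves time only.
        self.conv = nn.Conv2d(c_in, 2 * c_out, kernel_size=(1, kernel_size))

    def forward(self, x):
        p, q = self.conv(x).chunk(2, dim=1)  # linear path and gate
        return p * torch.sigmoid(q)          # element-wise Hadamard product

# Example: 8 input channels, 16 output channels, 30 parking areas, 12 time steps.
x = torch.randn(4, 8, 30, 12)
print(GLUTemporalConv(8, 16)(x).shape)  # torch.Size([4, 16, 30, 10])
```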
Figure 1: The distributions of the parking durations in two different areas of Melbourne (Mint and Queensberry) in July 2017.
Figure 2: The distributions of the parking durations in the Queensberry area of Melbourne in each month of 2017.
Figure 3: The framework of the proposed HST-GCN model. ⊙ denotes the element-wise Hadamard product and σ(·) the sigmoid function.
Figure 4: Map of the distribution of parking bays and areas in Melbourne.
Figure 5: Heat map of the weighted matrix W, determined by the distance between two areas. The color indicates the weight value of a pair of parking areas, calculated with Equation (2).
Figure 6: Heat map of the weighted matrix W_λ, determined by the "distribution distance" between two areas. The color indicates the weight value of a pair of parking areas, calculated with Equation (3).
Figure 7: Prediction for a randomly selected area with a 15-min time horizon.
Figure 8: Prediction for a randomly selected area with a 30-min time horizon.
Figure 9: Prediction for a randomly selected area with a 60-min time horizon.
26 pages, 6161 KiB  
Article
Retrieval of Land-Use/Land Cover Change (LUCC) Maps and Urban Expansion Dynamics of Hyderabad, Pakistan via Landsat Datasets and Support Vector Machine Framework
by Shaker Ul Din and Hugo Wai Leung Mak
Remote Sens. 2021, 13(16), 3337; https://doi.org/10.3390/rs13163337 - 23 Aug 2021
Cited by 59 | Viewed by 7405
Abstract
Land-use/land cover change (LUCC) is an important problem in developing and under-developed countries with regard to global climatic change and urban morphological distribution. Since the 1900s, urbanization has become an underlying cause of LUCC, and more than 55% of the world's population resides in cities. The rapid growth and expansion of urban centers, population growth, land scarcity, the need for more production, and advances in technology remain among the main drivers of LUCC around the globe at present. In this study, the urban expansion and spatial dynamics of Hyderabad, Pakistan over the last four decades were investigated, based on remotely sensed Landsat images from 1979 to 2020. In particular, radiometric and atmospheric corrections were applied to the raw images, and the Gaussian Radial Basis Function (RBF) kernel was used for training within a 10-fold support vector machine (SVM) supervised classification framework. After the spatial LUCC maps were retrieved, metrics such as Producer's Accuracy (PA), User's Accuracy (UA) and the KAPPA coefficient (KC) were adopted for spatial accuracy assessment to ensure the reliability of the proposed satellite-based retrieval mechanism. The Landsat-derived results showed an increase in built-up area and a decrease in vegetation and agricultural land. Built-up area covered only 30.69% of the total area in 1979, but reached 65.04% four decades later. In contrast, a continuous reduction of agricultural land, vegetation, waterbody and barren land was observed: over the four-decade period they decreased by 13.74%, 46.41%, 49.64% and 85.27%, respectively. These remotely observed changes highlight the spatial characteristics of the "rural to urban transition" and socioeconomic development within a modernized city, Hyderabad, which opens new windows for detecting potential land-use changes and laying down feasible future urban development and planning strategies. Full article
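For readers who want the gist of the classification step, a minimal scikit-learn sketch of an RBF-kernel SVM tuned by 10-fold cross-validation; the synthetic training arrays, grid values and band count are our stand-ins, not the paper's settings:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))     # stand-in for 6-band Landsat spectra
y_train = rng.integers(0, 5, size=500)  # 5 LUCC classes (built-up, water, ...)

# RBF-kernel SVM with its two hyper-parameters tuned by 10-fold cross-validation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1, 0.01]}
search = GridSearchCV(model, grid, cv=10, scoring="accuracy").fit(X_train, y_train)

print(search.best_params_, search.best_score_)
# search.predict(...) would then label every pixel to build the LUCC map.
```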
Figure 1: Geographical location of the study area, Hyderabad, Pakistan, together with the latest land-use distribution, based on a false color combination of Landsat datasets.
Figure 2: Overall flowchart of this study for LUCC spatial assessment in Hyderabad, with the components labelled in different colors and the intermediate connections stated on the arrows. FLAASH: Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes; SVM: Support Vector Machine; ENVI: The Environment for Visualizing Images; GTP: Ground Truth Points; LUCC: Land-use/Land Cover Change.
Figure 3: Land cover classification maps of Hyderabad in 1979, 1990, 2000, 2010 and 2020, retrieved from Landsat images via the statistical and data analytic framework illustrated in Figure 2.
Figure 4: Spatial areas allocated to each of the 5 key land-use types of Hyderabad in 1979, 1990, 2000, 2010 and 2020, represented as bars.
Figure 5: Thematic change maps of land-use patterns in Hyderabad across four periods, namely 1979–1990, 1990–2000, 2000–2010 and 2010–2020, retrieved via remote sensing approaches. Each color indicates the respective LUCC transition.
Figure 6: Spatial distribution of built-up area from 1979–2020 within major parts of Hyderabad, Pakistan, retrieved and computed via remote sensing and SVM approaches.
Figure 7: Area of built-up land (left) and population (right) in Hyderabad over the 1979–2020 and 1950–2020 periods, respectively. Population figures were obtained from the Pakistan Bureau of Statistics [118]; both the blue and orange lines assume that built-up area and population increase uniformly within each 10-year period.
15 pages, 6105 KiB  
Article
Dual-Task Semantic Change Detection for Remote Sensing Images Using the Generative Change Field Module
by Shao Xiang, Mi Wang, Xiaofan Jiang, Guangqi Xie, Zhiqi Zhang and Peng Tang
Remote Sens. 2021, 13(16), 3336; https://doi.org/10.3390/rs13163336 - 23 Aug 2021
Cited by 27 | Viewed by 3705
Abstract
With the advent of very-high-resolution remote sensing images, semantic change detection (SCD) based on deep learning has become a research hotspot in recent years. SCD aims to observe changes on the Earth's land surface and plays a vital role in monitoring the ecological environment, land use and land cover. Existing research mainly focuses on single-task semantic change detection; such methods cannot identify which change type has occurred in each multi-temporal image. In addition, few methods use the binary change region to help train a deep SCD network. Hence, we propose a dual-task semantic change detection network (GCF-SCD-Net) that uses a generative change field (GCF) module to locate and segment the change region; moreover, the proposed network is end-to-end trainable. Meanwhile, because of the influence of imbalanced labels, we propose a separable loss function to alleviate the over-fitting problem. Extensive experiments are conducted in this work to validate the performance of our method. Finally, our work achieves a 69.9% mIoU and 17.9 SeK on the SECOND dataset. Compared with traditional networks, GCF-SCD-Net achieves the best results and promising performances. Full article
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)
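To make the dual-task idea concrete, the sketch below combines per-date semantic cross-entropy with a binary change-region term. This generic objective is our illustrative stand-in, not the paper's separable loss; all names and the ignore-index convention are assumptions:

```python
import torch
import torch.nn.functional as F

def dual_task_loss(sem_logits_t1, sem_logits_t2, sem_t1, sem_t2,
                   change_logits, change_mask, w_change=1.0):
    """Generic dual-task SCD objective: per-date semantic cross-entropy
    plus a binary change term (a stand-in for the paper's separable loss).

    sem_logits_t*: (B, C, H, W) class scores for each acquisition date.
    sem_t*:        (B, H, W) integer semantic labels (255 = ignore).
    change_logits: (B, 1, H, W) scores from the change-field branch.
    change_mask:   (B, 1, H, W) binary ground-truth change region (float).
    """
    sem_loss = (F.cross_entropy(sem_logits_t1, sem_t1, ignore_index=255) +
                F.cross_entropy(sem_logits_t2, sem_t2, ignore_index=255))
    change_loss = F.binary_cross_entropy_with_logits(change_logits, change_mask)
    return sem_loss + w_change * change_loss
```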
Figure 1: Architecture of the proposed network.
Figure 2: General networks for SCD: (A) the UNet-based SCD network; (B) the PSPNet-based SCD network.
Figure 3: Comparisons with state-of-the-art methods on the SECOND dataset. Columns c1 and c2 show the image pairs and the ground truth; columns c3–c8 show the semantic segmentation results of the various change detection methods. (A–E) are the image pairs.
Figure 4: Semantic change maps and binary change maps generated by GCF-SCD-Net. Column c1 shows an image pair; c2 and c3 the semantic label and prediction; c4 the raw images fused with the semantic prediction masks; c5–c7 the binary change label, the binary change prediction and the binary fusion results.
17 pages, 5053 KiB  
Article
Quantification of Phycocyanin in Inland Waters through Remote Measurement of Ratios and Shifts in Reflection Spectral Peaks
by Gibeom Nam, Hyunjoo Shin, Rim Ha, Hyunoh Song, Jaehyun Yoo, Hyuk Lee, Sanghyun Park, Taegu Kang and Kyunghyun Kim
Remote Sens. 2021, 13(16), 3335; https://doi.org/10.3390/rs13163335 - 23 Aug 2021
Cited by 4 | Viewed by 3027
Abstract
This study introduces a semi-empirical algorithm to estimate the phycocyanin (PC) concentration in eutrophic freshwater bodies; this is achieved by studying the reflectance characteristics of the red and near-red spectral regions, especially the shifting of the peak near 700 nm towards longer wavelengths. Spectral measurements in a darkroom environment over the pure-cultured cyanobacteria Microcystis showed that the shift is proportional to the algal biomass. A similar proportional trend was found from extensive field measurement data. The data also showed that the correlation of the magnitude of the shift with the PC concentration was greater than that with chlorophyll-a. This indicates that the characteristic can be a useful index to quantify cyanobacterial biomass. Based on these observations, a new PC algorithm was proposed that uses the remote sensing reflectance of the peak band around 700 nm and the trough band around 620 nm, and the magnitude of the peak shift near 700 nm. The efficacy of the algorithm was tested with 300 sets of field data, and the results were compared with selected algorithms for PC concentration prediction. The new algorithm performed better than the other algorithms with respect to most error indices, especially the mean relative error, indicating that the algorithm can reduce errors when PC concentrations are low. The algorithm was also applied to a hyperspectral dataset obtained through aerial imaging, in order to predict the spatial distribution of the PC concentration in an approximately 86 km long reach of the Nakdong River. Full article
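The quantities the algorithm relies on, the position (and shift) of the reflectance peak near 700 nm and the trough band near 620 nm, are straightforward to extract from a measured spectrum. A minimal sketch; the 690–730 nm search window and the function names are our illustrative choices, not the paper's exact bands:

```python
import numpy as np

def peak_shift_and_ratio(wl, rrs):
    """Locate the reflectance peak near 700 nm and the trough band near 620 nm.

    wl  : wavelengths in nm (1-D, ascending).
    rrs : remote sensing reflectance at those wavelengths.
    Returns the shift of the ~700 nm peak (nm) and the Rrs(peak)/Rrs(620) ratio.
    """
    win = (wl >= 690) & (wl <= 730)          # illustrative search window
    peak_wl = wl[win][np.argmax(rrs[win])]
    shift = peak_wl - 700.0                  # magnitude of the peak shift

    r620 = rrs[np.argmin(np.abs(wl - 620.0))]  # trough band near 620 nm
    r_peak = rrs[win].max()
    return shift, r_peak / r620
```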
Figure 1: River sections (marked as rectangles) representing the sampling zones along the Nakdong River (left), with an enlarged image of zones E and F (right).
Figure 2: Darkroom setup for the spectral measurements of the algae: (left) schematic of the experimental design; (right) sensing geometry of each sample in the beaker.
Figure 3: Spectral measurement results from the darkroom experiment. The reflectance spectra of samples at six PC levels show a peak shift proportional to the PC concentration in cultured Microcystis aeruginosa.
Figure 4: Relationship between the near-700 nm peak shift (Δλ_peak) and the phycocyanin or chlorophyll-a concentration.
Figure 5: Results of applying the four algorithms to the calibration datasets. BRPD performed better than the other three methods on all error indices except RMSE, which was slightly higher than the Si05 value.
Figure 6: Results of applying the four algorithms to the validation datasets. As in calibration, BRPD performed better than the other three methods on all error indices except RMSE and MAE, which were slightly higher than those of Si05.
Figure 7: Comparison of the relative error (log %) of each algorithm. Blue circles, red squares, red triangles and green circles represent errors of the BRPD, Hu10, MM09 and Si05 algorithms, respectively.
Figure 8: Regression using only the band-ratio part of BRPD, to evaluate the effect of including the peak shift on prediction accuracy. The same 200 calibration datasets as in Figure 4 were used.
Figure 9: Spatial distribution of phycocyanin concentrations estimated with the calibrated BRPD algorithm for a mid- to downstream section of the Nakdong River, based on reflectance data observed on 11 August 2016.
Figure 10: Comparison of estimated PC concentrations (right) and corresponding RGB images (left) at selected areas along the Nakdong River.
19 pages, 3181 KiB  
Article
Diurnal Cycle Model of Lake Ice Surface Albedo: A Case Study of Wuliangsuhai Lake
by Zhijun Li, Qingkai Wang, Mingguang Tang, Peng Lu, Guoyu Li, Matti Leppäranta, Jussi Huotari, Lauri Arvola and Lijuan Shi
Remote Sens. 2021, 13(16), 3334; https://doi.org/10.3390/rs13163334 - 23 Aug 2021
Cited by 5 | Viewed by 2491
Abstract
Ice surface albedo is an important factor in various optical remote sensing technologies used to determine the distribution of snow or melt water on the ice, and to judge the formation or melting of lake ice in winter, especially in cold and arid areas. In this study, field measurements were conducted at Wuliangsuhai Lake, a typical lake in the semi-arid cold area of China, to investigate the diurnal variation of the ice surface albedo. Observations showed that the diurnal variations of the ice surface albedo exhibit bimodal characteristics, with peaks occurring after sunrise and before sunset. The curve of ice surface albedo over time is affected by weather conditions: the first peak occurs later on cloudy days than on sunny days, whereas the second peak appears earlier on cloudy days. Four probability density functions (Laplace, Gauss, Gumbel and Cauchy) were combined linearly to model the daily variation of the lake ice albedo on a sunny day. The simulations of diurnal variation in the albedo from sunrise to sunset with a solar altitude angle higher than 5° indicate that the Laplace combination is the optimal statistical model. The Laplace combination can not only describe the bimodal characteristic of the diurnal albedo cycle when the solar altitude angle is higher than 5°, but also reflect the U-shaped distribution of the diurnal albedo as the solar altitude angle exceeds 15°. The scale of the model is about half the length of the day, and the position of the two peaks is closely related to the moment of sunrise, which reflects the asymmetry of the two peaks of the ice surface albedo. This study provides a basis for the development of parameterization schemes of the diurnal variation of lake ice albedo in semi-arid cold regions. Full article
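The Laplace combination described above can be reproduced with a standard nonlinear least-squares fit: a linear mix of two Laplace density terms plus a baseline, one term per albedo peak. A minimal sketch with synthetic data; the peak times, amplitudes and initial guesses are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def laplace_pdf(t, mu, b):
    # Laplace probability density with location mu and scale b.
    return np.exp(-np.abs(t - mu) / b) / (2.0 * b)

def bimodal_laplace(t, a1, mu1, b1, a2, mu2, b2, c):
    # Linear combination of two Laplace terms plus a baseline: one peak
    # after sunrise, one before sunset.
    return a1 * laplace_pdf(t, mu1, b1) + a2 * laplace_pdf(t, mu2, b2) + c

# Synthetic diurnal albedo samples (local solar time in hours).
rng = np.random.default_rng(0)
t = np.linspace(8.0, 18.0, 60)
alpha = bimodal_laplace(t, 0.6, 9.5, 0.8, 0.5, 16.5, 0.9, 0.25)
alpha += rng.normal(0.0, 0.005, t.size)

p0 = [0.5, 9.0, 1.0, 0.5, 17.0, 1.0, 0.2]  # illustrative initial guesses
params, _ = curve_fit(bimodal_laplace, t, alpha, p0=p0, maxfev=10000)
print(params)
```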
Graphical abstract
Figure 1: (a) Location of Wuliangsuhai Lake; (b) the Trios spectral irradiance sensors at the observation site on the ice surface; (c) the incident and reflected spectral irradiance measured at 12:00 on 22 January. The arrows in (b) indicate the incident directions of light to the sensors.
Figure 2: Daily variation of the incident and reflected irradiances, and of the ice surface albedo, on 22 January.
Figure 3: Daily variation of the ice albedo under different weather conditions.
Figure 4: Simulated albedo curves of the four combined statistical models for 29 January: (a) daily variation of the albedo; (b) comparison of the measured albedo with the simulated albedo.
Figure 5: Results of the four combined statistical models vs. the measured albedo values for 12 sunny days: (a) Laplace, (b) Gauss, (c) Gumbel, (d) Cauchy.
Figure 6: Simulated albedo curves from the Laplace combination for 12 sunny days with a solar altitude angle of ≥5°: (a) 16 January and 1 February; (b) 29 January and 4 February; (c) 20 January and 3 February; (d) 22 January and 8 February; (e) 23 January and 31 January; (f) 25 January and 1 February. Circles are measured data and lines are fitted curves.
Figure 7: Correlation between cloud cover and solar radiation (circles), with the linear fit (black) from [39]; the parabolic fit (blue) and the logistic lines (red) are shown for comparison.
20 pages, 20331 KiB  
Article
Interseismic Slip and Coupling along the Haiyuan Fault Zone Constrained by InSAR and GPS Measurements
by Xin Qiao, Chunyan Qu, Xinjian Shan, Dezheng Zhao and Lian Liu
Remote Sens. 2021, 13(16), 3333; https://doi.org/10.3390/rs13163333 - 23 Aug 2021
Cited by 11 | Viewed by 3054
Abstract
The Haiyuan fault zone is an important tectonic boundary and a belt of strong seismic activity in northeastern Tibet, yet no major earthquake has occurred there in the past ∼100 years, since the M8.5 Haiyuan event in 1920. The current state of strain accumulation and the seismic potential along the fault zone have therefore attracted significant attention. In this study, we obtained the interseismic deformation field along the Haiyuan fault zone using Envisat/ASAR data for the period 2003–2010, and inverted for fault kinematic parameters, including the long-term slip rate, locking degree and slip deficit distribution, based on InSAR and GPS individually and jointly. The results show near-surface creep along about 19 km of the Laohushan segment. The locking depth changes significantly along strike, reaching 17 km in the western part and 3–7 km in the eastern part. The long-term slip rate gradually decreases from 4.7 mm/yr in the west to 2.0 mm/yr in the east. As such, there is large strain accumulation along the western part of the fault and shallow creep along the Laohushan segment, while in the eastern section the degree of strain accumulation is low, which suggests the rupture segments of the 1920 earthquake may not have been completely relocked. Full article
Graphical abstract
Figure 1: Tectonic setting of the Haiyuan fault zone. Red dots denote the epicentres of the 1920 M8.5 Haiyuan earthquake and the 1927 M8.0 Gulang earthquake. The red solid line denotes the rupture section of the Haiyuan earthquake, and the red dotted ellipse the possible rupture area of the Gulang earthquake. The blue line denotes the Tianzhu seismic gap. Brown dots denote earthquakes of M5 or above (1920–2016), and white dots earthquakes below M5 (1970–2016). White squares mark cities. The black rectangle shows the coverage of the InSAR data. LLL: Lenglongling segment; JQH: Jinqianghe segment; MMS: Maomaoshan segment; LHS: Laohushan segment.
Figure 2: Temporal-spatial baseline networks of (a) T333 and (b) T290.
Figure 3: (a) Average line-of-sight (LOS) velocity map from tracks 333 and 290. (b,c) Local zoom areas across the fault. Red dots denote the epicentres of the 1920 M8.5 Haiyuan and 1927 M8.0 Gulang earthquakes. Grey solid lines denote other fault traces. Black dotted lines denote 2-km-wide velocity profiles. Purple triangles represent global positioning system (GPS) stations. White squares mark cities. HY F.: the 1920 Haiyuan earthquake rupture segment; GL F.: Gulang fault; XS-TJS F.: Xiangshan-Tianjingshan fault; JQH: Jinqianghe segment; MMS: Maomaoshan segment; LHS: Laohushan segment.
Figure 4: Fault-parallel velocity profiles and screw dislocation model fits. (AA'–HH') Profiles shown in Figure 3. Grey points are fault-parallel velocities transformed from the InSAR LOS observations. Red diamonds are average values of the data within 2 km along the profile, with 1-σ error bars. Blue lines are the best-fit curves, V is the best-fit slip rate at depth, D is the optimal locking depth, and the dashed lines mark the fault surface trace.
Figure 5: Global positioning system (GPS) velocity field and block division of the Haiyuan fault zone. (a) GPS velocity field along the Haiyuan fault zone relative to the stable Eurasian plate; (b) tectonic divisions of the surrounding area (Qilianshan, Alaxan, Lanzhou and Ordos blocks). The red box in (b) marks the area shown in (a). Blue arrows are 1999–2016 observations at GPS stations of the China Crustal Movement Observation Network (CMONOC); red arrows are 2013–2016 campaign-mode observations by our group [37]. Black rectangles show the coverage of tracks 333 and 290. Error ellipses represent a 95% confidence level.
Figure 6: Inversion results using all global positioning system (GPS) data: (a) locking degree of the fault; (b) relative slip rate; (c) slip rate deficit. LLL: Lenglongling segment; JQH: Jinqianghe segment; MMS: Maomaoshan segment; LHS: Laohushan segment; HY F.: the 1920 Haiyuan earthquake rupture segment; PHI: fault locking degree.
Figure 7: Joint inversion results constrained by all GPS and InSAR data: (a) locking degree along the fault; (b) relative slip rate; (c) slip rate deficit. Abbreviations as in Figure 6.
Figure 8: Fault locking degree distributions inverted from different data combinations: (a) GPS data from CMONOC only; (b) GPS data from CMONOC plus the local dense sites built by our group; (c) CMONOC GPS data and InSAR data; (d) all GPS and InSAR data. PHI: fault locking degree.
Figure 9: Data fitting residuals of the GPS and InSAR joint inversions. (a,c) GPS and InSAR residuals of the joint inversion using all GPS and InSAR data, respectively; (b,d) GPS and InSAR residuals of the joint inversion using the CMONOC sites and InSAR data, respectively. Black and grey solid lines denote fault traces. Red dots denote the epicentres of the 1920 M8.5 Haiyuan and 1927 M8.0 Gulang earthquakes. White squares mark cities.
Figure 10: Comparison of the deformation rate map across the Haiyuan fault with previous results: (a,c) results from this study; (b,d) deformation fields obtained by [22].
Figure 11: Comparison of InSAR LOS velocities along cross-fault profiles: (a,c) results of [22]; (b,d) results of this study. Profile locations are shown in Figure 10a. Red diamonds denote average values within 1 km along the profiles, with error bars giving their standard deviations.
Figure 12: Evolution of fault-perpendicular velocity profiles along the Laohushan (LHS) segment based on observations of track 333. The 28 profiles run from west to east; profiles (1) and (28) are the CC' and EE' profiles in Figure 3, respectively. Grey points are fault-parallel velocities transformed from the InSAR LOS velocities. Red diamonds denote average values within 2 km along the profile, with 1-σ error bars. Black dashed lines mark the fault location.
Figure 13: Comparison of InSAR LOS observations and converted GPS values: (a) track 333; (b) track 290. Shaded areas denote errors within ±1 mm/yr.
Figure 14: Summary of the slip rate along the Haiyuan fault zone. The red solid line is the long-term slip rate obtained using the back-slip model; red diamonds are the slip rates obtained using the screw dislocation model. Other symbols and lines denote the results of previous studies (see legend). LLL: Lenglongling segment; JQH: Jinqianghe segment; MMS: Maomaoshan segment; LHS: Laohushan segment; HY F.: the 1920 Haiyuan earthquake rupture segment.
29 pages, 13083 KiB  
Article
Estimating Rainfall with Multi-Resource Data over East Asia Based on Machine Learning
by Yushan Zhang, Kun Wu, Jinglin Zhang, Feng Zhang, Haixia Xiao, Fuchang Wang, Jianyin Zhou, Yi Song and Liang Peng
Remote Sens. 2021, 13(16), 3332; https://doi.org/10.3390/rs13163332 - 23 Aug 2021
Cited by 13 | Viewed by 4419
Abstract
The lack of accurate estimation of intense precipitation is a universal limitation in precipitation retrieval. Therefore, a new rainfall retrieval technique based on the Random Forest (RF) algorithm is presented, using Advanced Himawari Imager-8 (Himawari-8/AHI) infrared spectrum data and NCEP operational Global Forecast System (GFS) forecast information; the gauge-calibrated rainfall estimates from the Global Precipitation Measurement (GPM) product served as the ground truth to train the model. A two-step RF classification model was established for (1) rain area delineation and (2) precipitation grade estimation, to improve the accuracy of moderate and heavy rain. In view of the imbalanced category distribution in the datasets, resampling techniques, including the Random Under-Sampling algorithm and the Synthetic Minority Over-Sampling Technique (SMOTE), were applied throughout the training process to fully learn the characteristics of the samples. Among the features used, the contributions of the meteorological variables to the trained models were generally greater than those of the infrared information; in particular, the contribution of precipitable water was the largest, indicating the necessity of water vapor conditions in rainfall forecasting. The simulation results of the RF model were compared with the GPM product pixel by pixel. To prove the universality of the model, we used independent validation sets not used for training and two independent testing sets from periods different from the training set. In addition, the algorithm was validated against independent rain gauge data and compared with GFS model rainfall. Consequently, the RF model identified rainfall areas with a Probability Of Detection (POD) of around 0.77 and a False-Alarm Ratio (FAR) of around 0.23 in validation, and a POD of 0.60–0.70 and a FAR of around 0.30 in testing. For precipitation grade estimation, the classification accuracy was 0.70 in validation and 0.60 in testing, despite a certain overestimation. In summary, the performance on the validation and test data indicates the adaptability and superiority of the RF algorithm for rainfall retrieval in East Asia. To a certain extent, our study provides a meaningful range division and powerful guidance for quantitative precipitation estimation. Full article
(This article belongs to the Special Issue Optical and Laser Remote Sensing of Atmospheric Composition)
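A compact sketch of the two-step scheme with scikit-learn and imbalanced-learn; the synthetic predictor arrays, class counts and hyper-parameters are our stand-ins, not the paper's configuration:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))          # stand-ins for AHI brightness
y_rain = rng.integers(0, 2, size=2000)   # temperatures + GFS predictors
y_grade = rng.integers(1, 4, size=2000)  # rain grades (light/moderate/heavy)

# Step 1: rain / no-rain, with random under-sampling of the majority class.
X1, y1 = RandomUnderSampler(random_state=0).fit_resample(X, y_rain)
rain_clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X1, y1)

# Step 2: precipitation grade on raining samples, with SMOTE over-sampling
# of the rarer moderate/heavy grades.
raining = y_rain == 1
X2, y2 = SMOTE(random_state=0).fit_resample(X[raining], y_grade[raining])
grade_clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X2, y2)

# Inference: a grade is predicted only where rain is detected.
is_rain = rain_clf.predict(X) == 1
grades = np.zeros(len(X), dtype=int)
grades[is_rain] = grade_clf.predict(X[is_rain])
```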
Figure 1: Distribution of the rain gauge stations in the study area.
Figure 2: Framework of the two-step RF-based precipitation retrieval model.
Figure 3: Parameter tuning of the RF model for rain area determination.
Figure 4: Ranking of the predictor variables in the RF model for rain area determination.
Figure 5: Diurnal variation of the evaluation scores for rain area determination on the testing dataset during August 2018. Half-hourly statistics are shown as box plots indicating the 25th, 50th and 75th percentiles, with whiskers extending to 1.5 times the interquartile range (25th–75th percentile) and outliers shown as black dots.
Figure 6: Comparison of the rain area delineation of the GPM (left) and the RF model (right) at 0700 UTC 21 August 2018.
Figure 7: Diurnal variation of the evaluation scores for rain area determination on the testing dataset in summer 2019; half-hourly box plots as in Figure 5.
Figure 8: Comparison of the rain area delineation of the GPM (left) and the RF model (right) at 0630 UTC 1 July 2019.
Figure 9: Parameter tuning of the RF model for precipitation grade estimation.
Figure 10: Ranking of the predictor variables in the RF model for precipitation grade estimation.
Figure 11: Diurnal variation of the ACC for precipitation grade estimation on the testing dataset during August 2018; half-hourly box plots as in Figure 5.
Figure 12: Comparison of the precipitation grade estimation of the GPM (left) and the RF model (right) at 0700 UTC 21 August 2018.
Figure 13: Diurnal variation of the ACC for precipitation grade estimation on the testing dataset in summer 2019; half-hourly box plots as in Figure 5.
Figure 14: Comparison of the precipitation grade estimation of the GPM (left) and the RF model (right) at 0630 UTC 1 July 2019.
Full article ">Figure 15
<p>The diurnal variation of the accuracy for rainfall retrieval in the testing dataset during August 2018; the distributions of the statistical values by half-an-hour are interpreted in box plots. Box diagrams indicate the 25th, 50th, and 75th percentiles, whereas the periphery of the box extends to 1.5-times the quartile deviation (25th–75th percentile). Outliers are indicated by black dots.</p>
Full article ">Figure 16
<p>Comparisons of the rainfall retrieval integrated model between the GPM (<b>left</b>) and RF model (<b>right</b>) at 0700 UTC 21 August 2018.</p>
Full article ">Figure 17
<p>The diurnal variation of the accuracy for rainfall retrieval in the testing dataset in summer 2019; the distributions of the statistical values by half-an-hour are interpreted in box plots. Box diagrams indicate the 25th, 50th, and 75th percentiles, whereas the periphery of the box extends to 1.5-times the quartile deviation (25th–75th percentile). Outliers are indicated by black dots.</p>
Full article ">Figure 18
<p>Comparisons of the rainfall retrieval integrated model between the GPM (<b>left</b>) and RF model (<b>right</b>) at 0630 UTC 1 July 2019.</p>
Full article ">Figure 19
<p>The diurnal variation of evaluation scores for rain area determination and precipitation grades estimation on RF model against gauge stations during August 2018, distributions of statistical values at one hour intervals are interpreted in box plots. Box diagrams indicate the 25th, 50th, and 75th percentiles, whereas the periphery of the box extends to 1.5 times of the quartile deviation (25th–75th percentile). Outliers are indicated by black dots.</p>
Full article ">Figure 20
<p>The diurnal variation of evaluation scores for rain area determination and precipitation grades estimation on RF model against gauge stations during summer 2019, distributions of statistical values at one hour intervals are interpreted in box plots. Box diagrams indicate the 25th, 50th, and 75th percentiles whereas the periphery of the box extends to 1.5 times of the quartile deviation (25th–75th percentile). Outliers are indicated by black dots.</p>
Full article ">Figure 21
<p>The diurnal variation of evaluation scores for rain area determination and precipitation grades estimation on RF model and GFS model against GPM data during August 2018, distributions of statistical values at six hour intervals are interpreted in box plots. Box diagrams indicate the 25th, 50th, and 75th percentiles whereas the periphery of the box extends to 1.5 times of the quartile deviation (25th–75th percentile). Outliers are indicated by black dots.</p>
Full article ">Figure 22
<p>The diurnal variation of evaluation scores for rain area determination and precipitation grades estimation on RF model and GFS model against GPM data during summer 2019, distributions of statistical values at six hour intervals are interpreted in box plots. Box diagrams indicate the 25th, 50th, and 75th percentiles whereas the periphery of the box extends to 1.5 times of the quartile deviation (25th–75th percentile). Outliers are indicated by black dots.</p>
Full article ">
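As an informal illustration of the diurnal evaluation shown in Figures 19–22, the following sketch groups per-sample verification results by time of day and draws box plots with the same whisker convention (1.5 times the interquartile range). All data and column names here are synthetic placeholders, not the paper's dataset.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic match-up table: one row per (time, pixel) verification sample,
# with a binary correctness flag for rain area determination.
rng = np.random.default_rng(0)
times = pd.date_range("2018-08-01", "2018-08-31 23:30", freq="30min")
df = pd.DataFrame({
    "time": rng.choice(np.asarray(times), size=5000),
    "correct": rng.random(5000) < 0.8,  # stand-in for per-sample hits
})

# Accuracy per half-hour slot of the day, computed separately for each day,
# so that each slot yields a distribution of daily accuracies.
df["slot"] = df["time"].dt.hour + df["time"].dt.minute / 60.0
daily = df.groupby([df["time"].dt.date, "slot"])["correct"].mean().reset_index()

# One box per half-hour slot; whiskers at 1.5x the interquartile range
# and outliers as black dots, matching the figure convention.
groups = [g["correct"].to_numpy() for _, g in daily.groupby("slot")]
plt.boxplot(groups, whis=1.5, flierprops={"marker": ".", "markerfacecolor": "k"})
plt.xlabel("Half-hour slot of day (UTC)")
plt.ylabel("Accuracy")
plt.tight_layout()
plt.show()
```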
32 pages, 11818 KiB  
Article
Understanding Spatio-Temporal Patterns of Land Use/Land Cover Change under Urbanization in Wuhan, China, 2000–2019
by Han Zhai, Chaoqun Lv, Wanzeng Liu, Chao Yang, Dasheng Fan, Zikun Wang and Qingfeng Guan
Remote Sens. 2021, 13(16), 3331; https://doi.org/10.3390/rs13163331 - 23 Aug 2021
Cited by 114 | Viewed by 6425
Abstract
Exploring land use structure and dynamics is critical for urban planning and management. This study attempts to understand the Wuhan development mode since the beginning of the 21st century by investigating in depth the spatio-temporal patterns of land use/land cover (LULC) change under urbanization in Wuhan, China, from 2000 to 2019, based on continuous time series mapping using Landsat observations with a support vector machine. The results indicated rapid urbanization that triggered large LULC changes. The built-up area increased by 982.66 km2 (228%) at the expense of a 717.14 km2 (12%) reduction in cropland, which threatens food security to some degree. In addition, the natural habitat shrank to some extent, with reductions of 182.52 km2, 23.92 km2 and 64.95 km2 for water, forest and grassland, respectively. Generally, Wuhan experienced a typical urbanization course that first sped up, then slowed down and then accelerated again, with an obvious internal imbalance among the 13 administrative districts. Hanyang, Hongshan and Dongxihu in particular presented more significant land dynamicity, with Hanyang being the active center. Over the past 19 years, Wuhan mainly developed toward the east and south, with the urban gravity center transferred from the northwest to the southeast of Jiang’an district. Lastly, based on the land allocation of Wuhan in 2029 predicted by the patch-generating land use simulation (PLUS) model, the future landscape dynamic pattern was further explored; the result shows a rise of the northern suburbs, which provides meaningful guidance for urban planners and managers to promote urban sustainability. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas)
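The per-pixel SVM classification summarized in the abstract can be sketched generically as follows; this is a minimal scikit-learn example with synthetic band values and class names standing in for the authors' Landsat training samples, not their actual processing chain.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
classes = ["built-up", "cropland", "water", "forest", "grassland"]

# Synthetic training pixels: 6 surface-reflectance bands per sample,
# drawn from class-specific distributions (placeholders for real samples).
X = np.vstack([rng.normal(loc=i * 0.1, scale=0.05, size=(200, 6))
               for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; feature scaling matters because SVMs are distance-based.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Classify a (rows, cols, bands) image by flattening to (n_pixels, bands).
image = rng.normal(0.2, 0.1, size=(100, 100, 6))
labels = clf.predict(image.reshape(-1, 6)).reshape(100, 100)
```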
Figure 1. Location and topography of the study area.
Figure 2. The workflow of this study.
Figure 3. The working mechanism of the support vector machine (SVM): (a) the linear model; (b) the non-linear model.
Figure 4. The LULC map of Wuhan for each year during the period 2000–2019: (a–t) 2000 to 2019, one map per year.
Figure 5. Post-processing of two typical scenes: (a) the false color image of scene 1 in 2006; (b) the false color image of scene 1 in 2007; (c) the LULC result before temporal consistency analysis; (d) the LULC result after temporal consistency analysis; (e) the false color image of scene 2 in 2009; (f) the LULC result before spatial coherence analysis; (g) the LULC result after spatial coherence analysis.
Figure 6. Built-up land and cropland changes.
Figure 7. Natural habitat change.
Figure 8. The spatial pattern of single land use dynamicity of different districts in various periods: (a) 2000–2004; (b) 2004–2009; (c) 2009–2014; (d) 2014–2019; (e) 2000–2019.
Figure 9. The spatial pattern of landscape activity of different districts in Wuhan in various periods: (a) 2000–2004; (b) 2004–2009; (c) 2009–2014; (d) 2014–2019; (e) 2000–2019.
Figure 10. The LULC results of four representative districts at five typical time nodes: (a–e) Jianghan in 2000, 2004, 2009, 2014 and 2019; (f–j) Hongshan, same years; (k–o) Huangpi, same years; (p–t) Jiangxia, same years.
Figure 11. Urban land crowding out other land uses in different periods: (a) 2000–2004; (b) 2004–2009; (c) 2009–2014; (d) 2014–2019; (e) 2000–2019.
Figure 12. The urban scope and gravity center transition: (a) the urban scope of typical years; (b) the urban gravity center transfer route.
Figure 13. Radar maps of urban development direction in different periods: (a) 2000–2004; (b) 2004–2009; (c) 2009–2014; (d) 2014–2019; (e) 2000–2019. The values represent the percentage of increase relative to the original built-up land area.
Figure 14. The LULC simulation results of Wuhan: (a) the reference LULC map in 2019; (b) the simulated LULC result in 2019; (c) the simulated LULC result in 2029.
Figure 15. Importance analysis of various driving factors for the growth of built-up land: (a) the built-up land expansion analysis, with the most important factor overlapped; (b) the importance of each factor relative to the growth of built-up land.
Figure 16. The dynamicity of major land uses in 2029: (a) built-up land; (b) cropland; (c) water; (d) forest; (e) grassland.
Figure 17. The future urbanization mode of Wuhan during the period 2019–2029: (a) the urban scope of typical years; (b) the urban gravity center transfer route; (c) the radar map of urban development direction; (d) urban land crowding out other land uses.
12 pages, 3066 KiB  
Article
Reconstruction of the Radar Reflectivity of Convective Storms Based on Deep Learning and Himawari-8 Observations
by Mingshan Duan, Jiangjiang Xia, Zhongwei Yan, Lei Han, Lejian Zhang, Hanmeng Xia and Shuang Yu
Remote Sens. 2021, 13(16), 3330; https://doi.org/10.3390/rs13163330 - 23 Aug 2021
Cited by 22 | Viewed by 4265
Abstract
Radar reflectivity (RR) greater than 35 dBZ usually indicates the presence of severe convective weather, which affects a variety of human activities, including aviation. However, RR data are scarce, especially in regions with poor radar coverage or substantial terrain obstructions. Fortunately, the radiance data of space-based satellites with universal coverage can be converted into a proxy field of RR. In this study, a convolutional neural network-based data-driven model is developed to convert the radiance data (infrared bands 07, 09, 13, 16, and 16–13) of Himawari-8 into the radar combined reflectivity factor (CREF). A weighted loss function is designed to solve the data imbalance problem due to the sparse convective pixels in the sample. The developed model demonstrates an overall reconstruction capability and performs well in terms of classification scores with 35 dBZ as the threshold. A five-channel input is more efficient in reconstructing the CREF than the commonly used one-channel input. In a case study of a convective event over North China in the summer using the test dataset, U-Net reproduces the location, shape and strength of the convective storm well. The present RR reconstruction technology based on deep learning and Himawari-8 radiance data is shown to be an efficient tool for producing high-resolution RR products, which are especially needed for regions without or with poor radar coverage. Full article
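A minimal sketch of a weighted pixel-wise loss of the kind the abstract describes for sparse convective pixels; the threshold and weight below are illustrative assumptions, not the paper's tuned values.

```python
import torch

def weighted_mse(pred, target, threshold=35.0, convective_weight=10.0):
    """Pixel-wise MSE in which pixels at or above `threshold` dBZ are
    up-weighted to counter the scarcity of convective samples."""
    weight = torch.where(target >= threshold,
                         torch.full_like(target, convective_weight),
                         torch.ones_like(target))
    return (weight * (pred - target) ** 2).mean()

# Toy usage: a batch of 8 single-channel 64x64 CREF fields (dBZ).
pred = torch.rand(8, 1, 64, 64) * 60.0
target = torch.rand(8, 1, 64, 64) * 60.0
print(float(weighted_mse(pred, target)))
```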
Graphical abstract
Figure 1. A topographic map of the study area. The black box represents North China (32°N–42°N, 110°E–120°E).
Figure 2. Workflow of the DL model-based CREF data reconstruction algorithm.
Figure 3. U-Net architecture with the infrared bands as the input and the CREF as the output.
Figure 4. (a) PDF distributions of the RMSEs in the training set and test set. (b) Changes in the RMSE with increasing convection coverage in all samples.
Figure 5. Evaluation scores of U-Net_full and U-Net_single in the period from July to August 2018.
Figure 6. Performance diagram for CREF categories 20, 25, …, and 50 dBZ. The black dotted line is the CSI and the gray dotted line is the categorical bias. The two solid lines show the model performance with the different test datasets as inputs, and the points with the same color as each solid line are the scores of the model at thresholds of 20, 25, …, and 50 dBZ.
Figure 7. Case study of a heavy rainfall event on 16 July 2018 from the test dataset showing the model reconstruction results at (a–c) 7:00, (d–f) 8:00, and (g–i) 9:00. The false colors of (a,d,g) are cloud clusters with a BT below 198 K in Band 13 superimposed on a grayscale image. (b,e,h) are the observed CREF values, and (c,f,i) are the reconstructed CREF values.
20 pages, 5302 KiB  
Article
High Speed Maneuvering Platform Squint TOPS SAR Imaging Based on Local Polar Coordinate and Angular Division
by Bowen Bie, Yinghui Quan, Kaijie Xu, Guangcai Sun and Mengdao Xing
Remote Sens. 2021, 13(16), 3329; https://doi.org/10.3390/rs13163329 - 23 Aug 2021
Cited by 1 | Viewed by 1910
Abstract
This paper proposes an imaging algorithm for synthetic aperture radar (SAR) mounted on a high-speed maneuvering platform with squint terrain observation by progressive scan (TOPS) mode. To overcome the mismatch between the range model and the signal after range walk correction, the range history is calculated in local polar format. The Doppler ambiguity is resolved by nonlinear derotation and zero-padding. The recovered signal is divided into several blocks in Doppler according to the angular division. Keystone transform is used to remove the space-variant range cell migration (RCM) components, so the residual RCM terms can be compensated by a unified phase function. Frequency domain perturbation terms are introduced to correct the space-variant Doppler chirp rate term. The focusing parameters are calculated according to the scene center of each angular block, and the signal of each block can be processed in parallel. The image of each block is focused in the range-Doppler domain. After geometric correction, the final focused image is obtained by directly combining the images of all angular blocks. Simulated SAR data have verified the effectiveness of the proposed algorithm. Full article
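For readers unfamiliar with the keystone transform (KT) step, the sketch below shows a generic slow-time rescaling by interpolation in the range-frequency domain; the radar parameters are placeholders, and the paper's full chain additionally involves derotation, angular blocking, and perturbation terms.

```python
import numpy as np

def keystone_transform(S, fc, fr):
    """Keystone transform of a range-frequency / slow-time matrix S[k, m]
    (k: range-frequency bin, m: slow-time sample). Substituting
    t_m = fc / (fc + f_r) * tau removes the linear coupling between
    range frequency and slow time that causes linear range cell
    migration. Linear interpolation performs the resampling."""
    n_f, n_t = S.shape
    t = np.arange(n_t) - n_t // 2                  # centred slow-time axis
    out = np.zeros_like(S)
    for k in range(n_f):
        scale = fc / (fc + fr[k])
        out[k].real = np.interp(t * scale, t, S[k].real, left=0.0, right=0.0)
        out[k].imag = np.interp(t * scale, t, S[k].imag, left=0.0, right=0.0)
    return out

# Toy usage with placeholder radar parameters.
rng = np.random.default_rng(0)
fc = 10e9                                          # carrier frequency (Hz)
fr = np.linspace(-50e6, 50e6, 128)                 # range-frequency axis (Hz)
S = rng.normal(size=(128, 256)) + 1j * rng.normal(size=(128, 256))
S_kt = keystone_transform(S, fc, fr)
```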
Figure 1. Imaging geometry of maneuvering platform SAR with TOPS mode.
Figure 2. Approximation phase errors of slant range terms before angular division: (a) the first-order term; (b) the second-order term; (c) the third-order term; (d) the fourth-order term.
Figure 3. Approximation phase errors of slant range terms after angular division: (a) the first-order term; (b) the second-order term.
Figure 4. Phase of the space-variant third-order slant range term: (a) before angular division; (b) after angular division.
Figure 5. Flow chart of the proposed algorithm.
Figure 6. Spectrum recovery analysis using time-frequency diagrams: (a) echo signal; (b) signal after RWC and linear derotation; (c) signal after nonlinear derotation and zero-padding; (d) recovered signal.
Figure 7. Angular division analysis: (a) block division in the angular domain; (b) mapping relation between yaw angle and Doppler.
Figure 8. RCM lines of targets sharing the same reference range: (a) after unified RWC; (b) after residual RWC; (c) after KT; (d) after RCMC.
Figure 9. Azimuth focusing analysis of targets sharing the same reference range: (a) time-frequency lines of targets with different Doppler chirp rates; (b) time-frequency lines after frequency perturbation; (c) time-frequency lines after deramp; (d) azimuth focused in the Doppler domain.
Figure 10. Simulation geometry: (a) point targets in polar format; (b) point target matrix.
Figure 11. RCM lines of the targets in Figure 10a: (a) target A without KT; (b) target B without KT; (c) target C without KT; (d) target A with KT; (e) target B with KT; (f) target C with KT.
Figure 12. Imaging results of different angular blocks: (a) angular block 1; (b) angular block 2; (c) angular block 3.
Figure 13. Images after geometric correction: (a) angular block 1; (b) angular block 2; (c) angular block 3; (d) the image after angular block combination.
Figure 14. Contour images of the point targets selected in Figure 12, with contour lines at −3, −15 and −30 dB, by the reference method: (a) A1; (b) A2; (c) A3; (d) B1; (e) B2; (f) B3; (g) C1; (h) C2; (i) C3.
Figure 15. Contour images of the point targets selected in Figure 12, with contour lines at −3, −15 and −30 dB, by the proposed method: panels as in Figure 14.
16 pages, 3296 KiB  
Article
A New Multi-Scale Sliding Window LSTM Framework (MSSW-LSTM): A Case Study for GNSS Time-Series Prediction
by Jian Wang, Weiping Jiang, Zhao Li and Yang Lu
Remote Sens. 2021, 13(16), 3328; https://doi.org/10.3390/rs13163328 - 23 Aug 2021
Cited by 35 | Viewed by 6032
Abstract
GNSS time-series prediction plays an important role in the monitoring of crustal plate movement and dam or bridge deformation, and in the maintenance of global or regional coordinate frames. Deep learning is a state-of-the-art approach for extracting high-level abstract features from big data without any prior knowledge, and long short-term memory (LSTM) networks are a form of recurrent neural network with significant potential for processing time series. In this study, a novel prediction framework is proposed that combines a multi-scale sliding window (MSSW) with LSTM. Specifically, MSSW is applied for data preprocessing to effectively extract the feature relationships at different scales while mining the deep characteristics of the dataset. Then, multiple LSTM neural networks are used to predict, and the final result is obtained by weighting. To verify the performance of MSSW-LSTM, 1000 daily solutions of the XJSS station in the Up component were selected for prediction experiments. Compared with the traditional LSTM method, the results of three groups of controlled experiments showed that the RMSE was reduced by 2.1%, 23.7%, and 20.1%, and the MAE decreased by 1.6%, 21.1%, and 22.2%, respectively. These results show that the MSSW-LSTM algorithm achieves higher prediction accuracy and smaller errors, and can be applied to GNSS time-series prediction. Full article
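To make the multi-scale sliding window idea concrete, here is a toy construction of per-scale training sets; the window widths, the naive per-scale predictor, and the inverse-error weighting are illustrative assumptions, with LSTMs standing behind each scale in the actual framework.

```python
import numpy as np

def sliding_windows(series, width):
    """Each window of `width` past values predicts the next value."""
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X, y

# Stand-in for a 1000-day GNSS Up-component series.
rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=1000))

scales = [10, 30, 60]                       # hypothetical window widths
datasets = {w: sliding_windows(series, w) for w in scales}

# Naive per-scale predictor (window mean) standing in for an LSTM;
# per-scale predictions are then combined by inverse-error weighting.
preds = {w: X.mean(axis=1) for w, (X, _) in datasets.items()}
errs = {w: np.mean((preds[w] - datasets[w][1]) ** 2) for w in scales}
total = sum(1.0 / e for e in errs.values())
weights = {w: (1.0 / errs[w]) / total for w in scales}
print({w: round(v, 3) for w, v in weights.items()})
```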
Figure 1. (a) Long short-term memory architecture; (b) typical structure of LSTM (1 layer).
Figure 2. Multilayered neural networks of LSTM.
Figure 3. Single-scale sliding window.
Figure 4. Constructing data with k sliding windows of different scales.
Figure 5. Framework of the multi-scale sliding window LSTM (MSSW-LSTM).
Figure 6. (a) Flowchart of MSSW-LSTM; (b) data processing of XJSS using MSSW-LSTM.
Figure 7. (a) Training loss curve; (b) data training and prediction of XJSS using LSTM Net(1).
Figure 8. (a) Training loss curve; (b) data training and prediction of XJSS using LSTM Net(2).
Figure 9. (a) Training loss curve; (b) data training and prediction of XJSS using LSTM Net(3).
Figure 10. Comparison of the MSSW-LSTM prediction and the true values.
17 pages, 9022 KiB  
Article
Rupture Kinematics and Coseismic Slip Model of the 2021 Mw 7.3 Maduo (China) Earthquake: Implications for the Seismic Hazard of the Kunlun Fault
by Han Chen, Chunyan Qu, Dezheng Zhao, Chao Ma and Xinjian Shan
Remote Sens. 2021, 13(16), 3327; https://doi.org/10.3390/rs13163327 - 23 Aug 2021
Cited by 42 | Viewed by 3729
Abstract
The 21 May 2021 Maduo earthquake was the largest event to occur on a secondary fault in the interior of the active Bayanhar block on the north-central Tibetan plateau in the last twenty years. A detailed kinematic study of the Maduo earthquake helps us to better understand the seismogenic environments of the secondary faults within the block and their relationship with the block-bounding faults. In this study, SAR images are first used to obtain the coseismic deformation fields. Secondly, we use a strain model-based method and the steepest descent method (SDM) to resolve the three-dimensional displacement components and to invert the coseismic slip distribution constrained by the coseismic displacement fields, respectively. The three-dimensional displacement fields reveal a dominant left-lateral strike-slip motion, local horizontal displacement variations, and widely distributed near-fault subsidence/uplift deformation. We prefer a five-segment fault slip model, with well-constrained fault geometry featuring different dip angles and strikes, constrained by the InSAR observations. The peak coseismic slip is estimated to be ~5 m near longitude 98.9°E at a depth of ~4–7 km. Overall, the distribution of coseismic slip on the fault is highly correlated with the measured surface displacement offsets along the entire rupture. We observe a moderate shallow slip deficit and limited afterslip deformation following the Maduo earthquake, which may indicate the effects of off-fault deformation during the earthquake and stable interseismic creep on the fault. The occurrence of the Maduo earthquake on a subsidiary fault updates the importance and the traditional estimate of the seismic hazard of the Kunlun fault. Full article
(This article belongs to the Special Issue Remote Sensing Monitoring for Tectonic Deformation)
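As a generic, assumption-laden analogue of the slip inversion described above (the paper uses the steepest descent method, SDM; here a direct regularized least squares with random placeholder Green's functions is used instead), the sketch shows how a smoothing factor trades misfit against model roughness, cf. Figure 4.

```python
import numpy as np

# Generic regularized least-squares analogue of a geodetic slip inversion
# (illustrative only; all matrices below are synthetic placeholders).
rng = np.random.default_rng(1)
n_obs, n_patch = 300, 50

G = rng.normal(size=(n_obs, n_patch))        # placeholder Green's functions
m_true = np.exp(-((np.arange(n_patch) - 25) / 6.0) ** 2) * 5.0  # ~5 m peak
d = G @ m_true + rng.normal(scale=0.01, size=n_obs)             # + noise

# Second-difference smoothing operator (controls model roughness).
L = np.diff(np.eye(n_patch), n=2, axis=0)
lam = 0.11                                   # smoothing factor, cf. Figure 4f

A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

rms = np.sqrt(np.mean((G @ m_est - d) ** 2))
print(f"RMS misfit: {rms:.4f}, peak slip: {m_est.max():.2f} m")
```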
Graphical abstract
Figure 1. Tectonic setting of the 2021 Mw 7.3 Maduo earthquake. (a) Tectonic setting of the Bayanhar block. Blue arrows show GPS-measured interseismic velocities, with ellipses indicating uncertainties [18,19]. The red line indicates the boundary of the Bayanhar block. Light blue dots indicate M > 2 earthquakes (1 January 1900–20 May 2021, https://earthquake.usgs.gov/earthquakes/search/, accessed on 21 May 2021). (b) Enlarged tectonic map around the Maduo area. Blue arrows are the same as in (a). Light purple circles indicate the relocated aftershocks (22 May 2021–28 May 2021) [20]. Light blue dots indicate M > 3 historical earthquakes (1 January 2011–20 May 2021, http://www.ceic.ac.cn/history, accessed on 21 May 2021). The thick red line indicates the surface ruptures of the Maduo earthquake. Thin red lines denote the rough extent of the historical ruptures on the Kunlun fault. Light yellow and light blue dashed boxes mark the spatial coverage of ascending and descending Sentinel-1 SAR images, respectively. The white dashed line illustrates the range of (c). (c) Digital elevation model (https://earthexplorer.usgs.gov/, accessed on 21 May 2021) around the Maduo earthquake. Red lines demonstrate the surface ruptures of the Maduo earthquake.
Figure 2. Interferograms and displacement fields of the 2021 Maduo earthquake. The black box (AA′) indicates a 200 km by 8 km profile swath. (a,b) Interferogram and displacement field on the ascending track. (c,d) Interferogram and displacement field on the descending track. (e,f) Ascending and descending displacement fields in the ground range direction resolved by the offset tracking method. (g,i) Displacement profiles of the LOS deformation fields; green and red indicate observations on the ascending and descending tracks, respectively. (h,j) Profiles of displacement in the ground range direction; green and red likewise denote the ascending and descending tracks.
Figure 3. Three-dimensional displacement fields of the Maduo earthquake. (a) East-west displacement component (positive to the east). (b) North-south displacement component (positive to the north). (c) Vertical displacement component (upward positive) and the horizontal deformation vector field; arrows indicate horizontal displacement. Red lines indicate the ground trace of the fault plane from our fault model. Black lines indicate the surface projection of the fault plane from our fault model.
Figure 4. (a–e) Trade-off curves between root-mean-square (RMS) misfit and dip angle for the five fault segments in the slip model. The red point indicates the best-fitting dip angle used to invert the coseismic slip distribution shown in Figure 5. (f) Trade-off curve with model roughness plotted as a function of RMS misfit. The red dot is the preferred smoothing factor (0.11) used in our coseismic slip distribution inversion.
Figure 5. Coseismic slip distribution of the Maduo earthquake. (a) Estimated coseismic slip along the dip direction; (b) estimated coseismic slip along the fault strike; (c) Coulomb stress change on the fault plane. The purple dots indicate the relocated aftershocks [20].
Figure 6. Model fit to the InSAR coseismic observations on the ascending track ((a) observation; (b) model prediction; (c) residuals) and the descending track ((d) observation; (e) model prediction; (f) residuals). The black lines are the up-dip trace of the fault plane used in the fault model.
Figure 7. (a) Surface rupture traces of the Maduo earthquake detected from the InSAR deformation fields (red line) and their curvature (green bars); only the magnitude of fault curvature is used, to show the local variations of fault strike. (b–d) Coseismic displacement offsets in the east-west (positive east), north-south (positive north), and fault-strike-parallel (positive for right-lateral strike slip) directions. The colored lines represent mean values using different numbers (30–100) of surrounding points in the calculation, and the colored band is the 95% confidence interval. The black line indicates the strike angle along the fault. (e) Inverted coseismic slip distribution on the fault plane, with aftershocks marked by pink points.
Figure 8. Depth distribution of the accumulative moment (red curve) and the number of aftershocks for the five fault segments.
Figure 9. Postseismic deformation following the Maduo earthquake observed in Sentinel-1 SAR images on the ascending track.
19 pages, 89711 KiB  
Article
Profiling of Dust and Urban Haze Mass Concentrations during the 2019 National Day Parade in Beijing by Polarization Raman Lidar
by Zhuang Wang, Cheng Liu, Yunsheng Dong, Qihou Hu, Ting Liu, Yizhi Zhu and Chengzhi Xing
Remote Sens. 2021, 13(16), 3326; https://doi.org/10.3390/rs13163326 - 23 Aug 2021
Cited by 12 | Viewed by 2465
Abstract
Polarization-Raman Lidar combined with a sun photometer is a powerful method for separating dust and urban haze backscatter, extinction, and mass concentrations. Observations were performed in Beijing during the 2019 National Day parade; the particle depolarization ratio at 532 nm and the Lidar ratio at 355 nm were 0.13 ± 0.05 and 52 ± 9 sr, respectively, typical values for a mixture of dust and urban haze. Here we quantify the contributions of cross-regionally transported natural dust and urban haze mass concentrations to Beijing's air quality. There is a significant correlation between urban haze mass concentrations and surface PM2.5 (R = 0.74, p < 0.01). The contributions of local emissions to air pollution during the 2019 National Day parade were insignificant; air quality was mainly affected by regional transport, including urban haze from the North China Plain and the Guanzhong Plain (Hebei, Tianjin, Shandong, and Shanxi) and dust aerosol from the Mongolia regions and Xinjiang. Moreover, the trans-regional transport of natural dust dominated the air pollution during the 2019 National Day parade, with a relative contribution to particulate matter mass concentrations exceeding 74% below 4 km. Our results highlight that controlling anthropogenic emissions over regional scales and focusing on the effects of natural dust are crucial and effective means to improve Beijing's air quality. Full article
(This article belongs to the Special Issue Optical and Laser Remote Sensing of Atmospheric Composition)
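Dust/haze separation of the kind quantified here is commonly done with the two-component depolarization method; below is a minimal sketch of that standard formula, with assumed pure-dust and non-dust depolarization ratios (0.31 and 0.05 are typical literature values, not necessarily those used by the authors).

```python
import numpy as np

def separate_dust(beta_total, delta_p, delta_dust=0.31, delta_nd=0.05):
    """Two-component separation of the particle backscatter coefficient
    into dust and non-dust (urban haze) parts from the measured particle
    depolarization ratio delta_p, assuming fixed pure-dust (delta_dust)
    and non-dust (delta_nd) depolarization ratios."""
    f = ((delta_p - delta_nd) * (1 + delta_dust)
         / ((delta_dust - delta_nd) * (1 + delta_p)))
    beta_dust = np.clip(f, 0.0, 1.0) * beta_total
    return beta_dust, beta_total - beta_dust

# Toy profile: particle backscatter (Mm^-1 sr^-1) and PLDR at 532 nm.
beta = np.array([2.0, 1.5, 1.0, 0.5])
pldr = np.array([0.13, 0.20, 0.08, 0.28])
b_dust, b_haze = separate_dust(beta, pldr)
print(b_dust, b_haze)
```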
Figure 1. PRL-observed (a) backscatter coefficient at 532 nm (Bac532), (b) PLDR at 532 nm (PLDR532), and (c) dust mass concentrations at 0:01 on 1 October 2019 (LT). The PRL-observed profile is the solid red line and the envelope represents the errors at each altitude. Error bars are calculated from the law of error propagation, which primarily depends on the signal-to-noise ratio of the PRL backscatter signal and the input parameters given in Table 1.
Figure 2. Air pollution cycles in Beijing from 13 September to 9 October 2019. Time-height plots of the (a) Bac532 and (b) PLDR532 measured by PRL. (c) Time series of surface PM2.5 and PM10 mass concentrations.
Figure 3. Scatter plots showing the correlation between the (a) hourly average Bac532 at 250 m and surface PM2.5, (b) hourly average urban haze mass concentrations at 250 m and surface PM2.5, and (c) hourly average total mass concentrations at 250 m and surface PM10.
Figure 4. Air pollution in Beijing during the 2019 National Day military parade. The vertical structure of (a) horizontal winds simulated by WRF-Chem, (b) dust mass concentrations retrieved by PRL, and (c) urban haze mass concentrations retrieved by PRL. (d) Daily average NO2 concentrations and SO2/CO ratios. (e) Daily average PM2.5 and PM10 mass concentrations. The black arrows in (a) represent the wind direction; a downward arrow indicates the north wind.
Figure 5. The CALIPSO measurements in North China on 1 October 2019. Latitude-height plots of the (a) extinction coefficient at 532 nm, (b) PLDR532, and (c) VFM at 2:24 LT. Latitude-height plots of the (d) extinction coefficient at 532 nm, (e) PLDR532, and (f) VFM at 13:01 LT. The left two panels are the CALIPSO ground tracks color-coded by AOD. The red star on the map indicates BMOC.
Figure 6. Height profiles of the daily average dust mass concentrations (black) and urban haze mass concentrations (blue). The envelopes represent one standard deviation at each height. The date is displayed at the top right of each panel.
Figure 7. Height profiles of the daily average PLDR532 (black) and night-time average LR355 (blue). The envelopes for PLDR532 represent one standard deviation at each height, and the envelopes for LR355 indicate the errors at each height. The date is displayed at the top right of each panel.
Figure 8. The average relative contributions of dust (yellow shade) and urban haze (red shade) mass concentrations to the total mass concentrations (a) on 1 October and (b) from 24 September to 3 October.
Figure 9. Spatial distribution of the daily AOD from 24 September to 3 October 2019 retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS). The black arrows overlaid on the map represent the surface wind speed and direction. The date is displayed at the top left of each panel.
Figure 10. Spatial distribution of the correlation coefficients between the observation site and each grid's daily AOD at lags of (a) 0 days, (b) 1 day, and (c) 2 days from 24 September to 3 October 2019. Seventy-two-hour backward trajectories arriving at the observation site at 10:00 LT on 1 October 2019 at 500 m, 1500 m, 2500 m, and 3500 m are overlaid on (a).
25 pages, 6199 KiB  
Article
Comparative Evaluation of Algorithms for Leaf Area Index Estimation from Digital Hemispherical Photography through Virtual Forests
by Jing Liu, Longhui Li, Markku Akerblom, Tiejun Wang, Andrew Skidmore, Xi Zhu and Marco Heurich
Remote Sens. 2021, 13(16), 3325; https://doi.org/10.3390/rs13163325 - 23 Aug 2021
Cited by 11 | Viewed by 4212
Abstract
In situ leaf area index (LAI) measurement plays a vital role in calibrating and validating satellite LAI products. Digital hemispherical photography (DHP) is a widely used in situ forest LAI measurement method, and many software programs encompassing a variety of algorithms exist to estimate LAI from DHP. However, there is no conclusive study comparing their accuracy, due to the difficulty of acquiring forest LAI reference values. In this study, we use virtual (i.e., computer-simulated) broadleaf forests for the accuracy assessment of the LAI algorithms in commonly used LAI software programs. Three commonly used DHP programs (Can_Eye, CIMES, and Hemisfer) were selected since they provide estimates of both effective LAI and true LAI. Individual tree models with and without leaves were first reconstructed based on terrestrial LiDAR point clouds, and various stands were then created from these models. A ray-tracing technique was combined with the virtual forests to model synthetic DHP for both leaf-on and leaf-off conditions. Afterward, the three programs were applied to estimate the plant area index (PAI) from leaf-on DHP and the woody area index (WAI) from leaf-off DHP. Finally, by subtracting WAI from PAI, true LAI estimates from 37 different algorithms were obtained for evaluation. The performance of these algorithms was compared with pre-defined LAI and PAI values in the virtual forests. The results demonstrated that without correcting for the vegetation clumping effect, Can_Eye, CIMES, and Hemisfer estimated effective PAI and effective LAI consistently with each other (R2 > 0.8, RMSD < 0.2). After correcting for the vegetation clumping effect, there was a large inconsistency. In general, Can_Eye estimated true LAI more accurately than CIMES and Hemisfer (with R2 = 0.88 > 0.72, 0.49; RMSE = 0.45 < 0.7, 0.94; nRMSE = 15.7% < 24.21%, 32.81%). There was a systematic underestimation of PAI and LAI using Hemisfer. The most accurate algorithm for estimating LAI was identified as the P57 algorithm in Can_Eye, which uses the 57.5° gap fraction inversion combined with the finite-length averaging clumping correction. These results demonstrate the inconsistency of LAI estimates from DHP using different algorithms, highlighting the importance of, and providing a reference for, standardizing the algorithm protocol for in situ forest LAI measurement using DHP. Full article
(This article belongs to the Special Issue Virtual Forest)
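To make the P57-style inversion named in the abstract concrete, the sketch below inverts gap fractions at the 57.5° hinge angle, where the projection function is approximately 0.5 for any leaf angle distribution, and contrasts the lumped (effective) inversion with finite-length (segment-wise) log averaging; the gap fractions are toy numbers, and real programs add masking and weighting steps.

```python
import numpy as np

def lai_hinge(gap_fractions):
    """Effective and clumping-corrected LAI from gap fractions in the
    57.5-degree zenith ring. From P = exp(-G * LAI / cos(theta)) with
    G ~ 0.5, LAI = -ln(P) * cos(57.5 deg) / 0.5. The effective value
    inverts the mean gap fraction; the finite-length (Lang & Xiang
    style) correction averages ln(P) over segments before inverting."""
    g = np.asarray(gap_fractions, dtype=float)
    k = np.cos(np.deg2rad(57.5)) / 0.5
    lai_eff = -np.log(g.mean()) * k            # lumped inversion
    lai_true = -np.log(g).mean() * k           # segment-wise averaging
    return lai_eff, lai_true

# Toy segment gap fractions from one DHP ring (clumped canopy example).
gaps = [0.60, 0.05, 0.40, 0.08, 0.55, 0.10]
print(lai_hinge(gaps))   # lai_true > lai_eff when foliage is clumped
```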
Figure 1. Flowchart of the accuracy comparison of different forest LAI estimation algorithms in digital hemispherical photography programs (Can_Eye, CIMES, and Hemisfer).
Figure 2. Examples of the 3D individual tree models used for virtual forest construction: (a) a 10 m beech tree without leaves; (b) a 10 m beech tree with leaves; (c) a 30 m oak tree without leaves; (d) a 30 m oak tree with leaves.
Figure 3. The distribution of trees of different heights in all 30 virtual forest stands (H5, H10, H15, H20, H25 and H30 refer to trees of 5, 10, 15, 20, 25, and 30 m height, respectively).
Figure 4. The plot extent and the DHP acquisition locations inside the virtual forest.
Figure 5. Synthetic digital hemispherical photography (DHP) of (a) Plot2 in leaf-on condition, (b) Plot2 in leaf-off condition, (c) Plot30 in leaf-on condition, and (d) Plot30 in leaf-off condition.
Figure 6. Correlation of the effective plant area index estimates (PAI_eff-est) and effective leaf area index estimates (LAI_eff-est) using the three digital hemispherical photography programs (on average, PAI_eff-est was 55.8% of the PAI_true-ref values, while LAI_eff-est was 51.22% of the LAI_true-ref values).
Figure 7. PAI_true results from Can_Eye using different algorithms: (a) Miller; (b) v5.1; (c) v6.1; (d) P57. The best result was produced by the Can_Eye P57 algorithm. A smaller symbol indicates a smaller average leaf inclination angle (ALA), while a larger symbol indicates a higher ALA.
Figure 8. PAI_true results from CIMES using different algorithms: (a) CAM_LX; (b) CMP_WT; (c) LOGCAM; (d) LANG_LX; (e) MLR; (f) Miller_CC57; (g) Miller_CC; (h) Miller_CLX57; (i) Miller_CLX. The best result was produced by the CIMES CAM_LX algorithm. Symbol size as in Figure 7; the Miller_CLX method only produced estimates for 26 out of 30 plots.
Figure 9. PAI_true results from Hemisfer using different algorithms: (a) CC_2000; (b) CC_Gonsamo; (c) CC_Lang; (d) CC_Miller; (e) CC_NC; (f) CC_Thimonier; (g) LX_2000; (h) LX_Gonsamo; (i) LX_Lang; (j) LX_Miller; (k) LX_NC; (l) LX_Thimonier; (m) SCC_2000; (n) SCC_Gonsamo; (o) SCC_Lang; (p) SCC_Miller; (q) SCC_NC; (r) SCC_Thimonier; (s) WT_2000; (t) WT_Gonsamo; (u) WT_Lang; (v) WT_Miller; (w) WT_NC; (x) WT_Thimonier. The best result was produced by the Hemisfer LX_Miller algorithm. Symbol size as in Figure 7.
Figure 10. Accuracy of the true leaf area index estimates (LAI_true-est, calculated as PAI_true-est minus WAI_true-est) using the three digital hemispherical photography programs (a) Can_Eye, (b) CIMES, and (c) Hemisfer, compared to ground reference values (LAI_true-ref). Symbol size as in Figure 7.
16 pages, 3827 KiB  
Article
High Wind Speed Inversion Model of CYGNSS Sea Surface Data Based on Machine Learning
by Yun Zhang, Jiwei Yin, Shuhu Yang, Wanting Meng, Yanling Han and Ziyu Yan
Remote Sens. 2021, 13(16), 3324; https://doi.org/10.3390/rs13163324 - 23 Aug 2021
Cited by 13 | Viewed by 3117
Abstract
In response to the limited ability of traditional remote sensing means (scatterometers, microwave radiometers, etc.) to detect high wind speeds above 25 m/s, this paper proposes a GNSS-R technique combined with machine learning to invert high wind speed at the sea surface. The L1-level satellite-based data from the Cyclone Global Navigation Satellite System (CYGNSS), together with the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) data, constitute the original sample set, which is processed and trained with Support Vector Regression (SVR), the combination of Principal Component Analysis and SVR (PCA-SVR), and a Convolutional Neural Network (CNN), respectively, to construct a sea surface high wind speed inversion model. The three models are verified against test data collected during Typhoon Bavi in 2020. The results show that all three machine learning models can be used for high wind speed inversion at the sea surface, among which the CNN method has the highest inversion accuracy, with a mean absolute error of 2.71 m/s and a root mean square error of 3.80 m/s. The experimental results largely meet the operational accuracy requirements for high wind speed inversion. Full article
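A minimal scikit-learn sketch of the PCA-SVR variant described above, with synthetic features standing in for the CYGNSS/ECMWF/NCEP sample set; the component count and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in for CYGNSS L1 observables matched to reference winds.
rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 12))
wind = 15 + 10 * np.tanh(X[:, 0]) + rng.normal(scale=2.0, size=3000)

X_tr, X_te, y_tr, y_te = train_test_split(X, wind, test_size=0.25,
                                          random_state=0)

# PCA-SVR: decorrelate/reduce the feature space, then regress with RBF SVR.
model = make_pipeline(StandardScaler(), PCA(n_components=6),
                      SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```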
Graphical abstract
Figure 1. PCA-SVR model structure.
Figure 2. CNN model structure (Conv: convolution layer; FC: fully connected layer).
Figure 3. High wind speed inversion process based on machine learning.
Figure 4. (a) Histogram of the original training samples; (b) histogram of the final training samples.
Figure 5. (a) Location of the region for performance evaluation (world map: preview number: GS (2016) 1563); (b) moving track of Typhoon Bavi (22–26 August 2020) and the daily area of interest.
Figure 6. (a) SVR model wind speed inversion results; (b) PCA-SVR model wind speed inversion results; (c) CNN model wind speed inversion results. The color bar on the right represents data density.
Figure 7. CYGNSS satellite flight tracks and the corresponding CNN wind speed on (a) 23 August 2020, (b) 24 August 2020, (c) 25 August 2020, and (d) 26 August 2020. The color bar on the right represents the wind speed value (m/s).
Figure 8. CYGNSS satellite flight tracks and the corresponding inversion error on (a) 23 August 2020, (b) 24 August 2020, (c) 25 August 2020, and (d) 26 August 2020. The color bar on the right represents the wind speed value (m/s).
21 pages, 4454 KiB  
Article
Identifying Individual Nutrient Deficiencies of Grapevine Leaves Using Hyperspectral Imaging
by Sourabhi Debnath, Manoranjan Paul, D. M. Motiur Rahaman, Tanmoy Debnath, Lihong Zheng, Tintu Baby, Leigh M. Schmidtke and Suzy Y. Rogiers
Remote Sens. 2021, 13(16), 3317; https://doi.org/10.3390/rs13163317 - 23 Aug 2021
Cited by 22 | Viewed by 5457
Abstract
The efficiency of a vineyard management system is directly related to the effective management of nutritional disorders, which significantly degrade vine growth, crop yield and wine quality. To detect nutritional disorders, we successfully extracted a wide range of features from hyperspectral (HS) images to identify healthy leaves and individual nutrient deficiencies in grapevine leaves. Features such as mean reflectance, mean first derivative reflectance, variation index, mean spectral ratio, normalised difference vegetation index (NDVI) and standard deviation (SD) were employed at various stages in the ultraviolet (UV), visible (VIS) and near-infrared (N.I.R.) regions for our experiment. Leaves were examined visually in the laboratory and grouped as either healthy (i.e., control) or unhealthy, and the features of these two groups were then extracted. In a second experiment, features of individual nutrient-deficient leaves (e.g., N, K and Mg) were also analysed and compared with those of control leaves. Furthermore, a customised support vector machine (SVM) was used to demonstrate that these features can be utilised with a high degree of effectiveness, not only to distinguish control from nutrient-deficient leaves but also to identify individual nutrient deficiencies. The proposed work therefore corroborates that HS imaging has excellent potential for analysing the health status and individual nutrient deficiencies of grapevine leaves. Full article
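The feature set described in the abstract can be illustrated with a small sketch over a synthetic hyperspectral cube; the band positions used for NDVI (nearest 800 nm and 670 nm) and all numbers are assumptions for illustration, not the authors' exact definitions.

```python
import numpy as np

def hs_features(cube, wavelengths, reference_spectrum):
    """Per-leaf features from a hyperspectral cube (rows, cols, bands):
    mean reflectance, mean first-derivative reflectance, NDVI, and the
    mean spectral ratio to a reference (e.g., control) spectrum."""
    spectrum = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
    first_deriv = np.gradient(spectrum, wavelengths)

    # NDVI from the bands nearest 800 nm (NIR) and 670 nm (red).
    nir = spectrum[np.argmin(np.abs(wavelengths - 800))]
    red = spectrum[np.argmin(np.abs(wavelengths - 670))]
    ndvi = (nir - red) / (nir + red)

    ratio = spectrum / reference_spectrum
    return {"mean_reflectance": spectrum.mean(),
            "mean_first_derivative": first_deriv.mean(),
            "ndvi": ndvi,
            "mean_spectral_ratio": ratio.mean()}

# Toy cube: 50x50 pixels, 200 bands between 400 and 1000 nm.
wl = np.linspace(400, 1000, 200)
cube = np.random.rand(50, 50, 200) * 0.6
control = np.full(200, 0.3)
print(hs_features(cube, wl, control))
```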
Figure 1. (a) Hyperspectral camera; (b) samples of grapevine leaves, including the benchmark leaf; (c) mean first derivative spectra of control and benchmark leaves between ~675 nm and ~775 nm, showing the similarity between the curve shapes of control and benchmark leaves.
Figure 2. Reflectance vs. wavelength (nm) for three control, potassium (K), magnesium (Mg) and nitrogen (N) leaves.
Figure 3. Unhealthy leaf with white spots and a few brown spots, and its selected areas for data cube acquisition.
Figure 4. Sample of unhealthy leaves containing visible defects, used for the study of visibly unhealthy leaves.
Figure 5. Mean first derivative of the reflectance of healthy (benchmark) and unhealthy (whole and selective areas) leaves in the 650 nm to 800 nm range, comparing the curve patterns of healthy and unhealthy leaves.
Figure 6. Mean spectral ratios of different unhealthy leaves (whole area) to the benchmark leaf, comparing the change in curve shape of unhealthy leaves with respect to a healthy benchmark leaf. (a) Unhealthy leaves with many small white and few brown spots and unhealthy leaves with few brown spots exhibit almost similar ratio curve trends from ~380 nm to 1000 nm. (b) Unhealthy leaves with several large brown spots and unhealthy leaves with brownish regions have almost similar ratio curves between 491 nm and 825 nm. (c) Similarly shaped ratio curves of an unhealthy leaf with brown spots and holes and an unhealthy leaf with brownish regions and holes in the ~400 nm to 1000 nm range. (d) The unhealthy leaf with brown and yellowish regions and the unhealthy leaf with many large brown regions have similar ratio curves, with higher ratio values for the latter, between ~570 nm and 1000 nm.
Figure 7. Variation index (v_i) of healthy and unhealthy leaves in the N.I.R. region, indicating the difference between healthy and unhealthy leaves.
Figure 8. Mean first derivative spectra of control and nutrient-deficient leaves between ~675 nm and ~775 nm, showing the difference between the curve shapes of control and nutrient-deficient leaves.
Figure 9. A plot of the mean first derivative reflectance ratio of control and different nutrient-deficient leaves, representing the difference between control and nutrient-deficient leaves.
Figure 10. Mean spectral ratios of different nutrient-deficient leaves to the control leaf, distinguishing the individual nutrient-deficient leaves.
Figure 11. A plot of NDVI for control and different nutrient-deficient leaves, used to distinguish control and different nutrient-deficient leaves.
Figure 12. A plot of the SD of control and different nutrient-deficient leaves in the UV, VIS and N.I.R. regions, representing the difference between control and nutrient-deficient leaves in different wavelength regions.
Figure 13. Variation index (v_i) of leaves in the UV, VIS and N.I.R. regions for distinguishing control and different nutrient-deficient leaves.
Figure 14. Confusion matrix for the binary and multiclass classifiers.
27 pages, 16812 KiB  
Article
Studying a Subsiding Urbanized Area from a Multidisciplinary Perspective: The Inner Sector of the Sarno Plain (Southern Apennines, Italy)
by Ettore Valente, Vincenzo Allocca, Umberto Riccardi, Giovanni Camanni and Diego Di Martire
Remote Sens. 2021, 13(16), 3323; https://doi.org/10.3390/rs13163323 - 22 Aug 2021
Cited by 7 | Viewed by 2933
Abstract
Defining the origin of ground deformation can be a very challenging task that may be approached through several investigative techniques. Ground deformation can originate in response to both natural (e.g., tectonic) and anthropic (e.g., groundwater pumping) contributions, which may either act simultaneously or be somewhat correlated in space and time. For example, the location of structurally controlled basins may be the locus of enhanced human-induced subsidence. In this paper, we investigate the natural and anthropic contributions to ground deformation in the urbanized area of the inner Sarno plain, in the Southern Apennines. We used a multidisciplinary approach based on the collection and analysis of a combination of geomorphological, stratigraphical, structural, hydrogeological, GPS, and DInSAR datasets. Geomorphological, stratigraphical, and structural data suggested the occurrence of a graben-like depocenter, the Sarno basin, bounded by faults with evidence of activity in the last 39 ka. Geodetic data indicated that the Sarno basin also experienced ground deformation (mostly subsidence) in the last 30 years, with a possible anthropogenic contribution due to groundwater pumping. Hydrogeological data suggested that a significant portion of the subsidence detected by the geodetic data can be ascribed to groundwater pumping from the alluvial plain aquifer, rather than to a re-activation of faults in the last 30 years. Our interpretation suggests that positive feedback exists between fault activity and the location of the areas affected by human-induced subsidence: fault activity caused the accumulation of poorly consolidated deposits within the Sarno basin, which enhanced groundwater-induced subsidence. The multidisciplinary approach used here proved successful within the study area and could therefore be an effective tool for investigating ground deformation in other urbanized areas worldwide. Full article
(This article belongs to the Special Issue GNSS for Geosciences)
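As a rough illustration of the trend fitting shown in Figure 9 below, the following sketch estimates a linear subsidence rate from a GNSS up-component time series by least squares; the synthetic series, rate, and variable names are stand-ins, not the authors' data or code.

```python
import numpy as np

# Hypothetical daily GNSS up-component series (metres); in the paper these are
# the PACA permanent-station solutions, here replaced by synthetic data.
t_years = np.arange(0, 8, 1 / 365.25)                 # 8 years of daily epochs
up = -0.004 * t_years + 0.002 * np.random.randn(t_years.size)

# Least-squares linear trend up(t) = v * t + b, where v is the vertical rate
# in m/yr (negative values indicate subsidence).
v, b = np.polyfit(t_years, up, deg=1)
print(f"estimated vertical rate: {v * 1000:.2f} mm/yr")
```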
Graphical abstract
Figure 1. (A) Geological map of the sector of the Southern Apennines including the Sarno plain (modified from [20,34]). The white box indicates the location of the study area, shown in inset map B. Active faults are derived from the ITHACA database [35] and from Cinque et al. [36]. Earthquake epicentres and magnitudes are derived from the Italian Earthquake Catalogue CPTI15 [37]. (B) Geological map of the inner sector of the Sarno plain (modified from [20]).
Figure 2. Geomorphological map of the inner sector of the Sarno plain and the adjoining western slope of the Sarno mountains (modified from [20]).
Figure 3. (A) Panoramic view of the Sarno plain from the Lavorate embayment. The white arrow near Sarno town indicates the location of the rectilinear scarp shown in insets B and C; a portion of the rectilinear scarp seen from the footwall (B) and hanging-wall (C) blocks.
Figure 4. Geological map of the inner sector of the Sarno plain (modified from [20]).
Figure 5. Geological cross-sections showing the spatial distribution of the Quaternary marine, continental, and volcanic deposits and the pre-Quaternary carbonate substratum. Cross-section traces are reported in Figure 4. Cross-section A–A′ is centred in the Lavorate embayment and is modified from Valente et al. [20]; cross-section B–B′ is centred in the Episcopio embayment; cross-section C–C′ is centred in the Sarno urban area.
Figure 6. Geological map of the Sarno Mountains with plots showing the spatial arrangement of bedding and fault data throughout the map area.
Figure 7. Piezometric level variation (m) in the period 1992–2003 for the study area ([38], integrated). Dashed black lines indicate the faults mapped in Figure 4.
Figure 8. Monthly rainfall (mm) at the Sarno rain gauge station (see Figure 1B for location) in the period 1992–2003. The dotted red line indicates the long-term mean trend.
Figure 9. (a) GNSS daily solutions for the up component collected at the PACA permanent station located in Palma Campania. The linear trend, which quantifies the subsidence rate, is plotted as a red line, together with a tentative model of the hydrological contribution to the observed ground displacement based on the MERRA2 and GLDAS models. (b) GNSS daily solutions after correction of the hydrological contribution and final assessment of the subsidence rate; the black brackets mark the time range for which the DInSAR dataset is available for comparison.
Figure 10. Mean displacement rate maps for: ERS1/2, (A) ascending and (B) descending; ENVISAT, (C) ascending and (D) descending; (E) Cosmo-SkyMed images in descending orbit; and (F) SENTINEL-1 images in descending orbit (2016–2020 time span). Dashed black lines indicate faults inferred from the geomorphological and stratigraphic analysis (see Figure 4).
Figure 11. Mean displacement rate maps for ENVISAT images (2003–2010 period): (a) ascending and (b) descending. The red triangle indicates the location of the Palma Campania GNSS station (PACA).
Figure 12. Comparison of the GNSS and DInSAR time series (vertical component) collected in Palma Campania, with the aim of validating the DInSAR data.
Figure 13. Time series of the DInSAR data (up component) used to characterise the vertical displacements within the Sarno basin (a–c). Refer to Figure 11 for the locations of the selected points.
Figure 14. (a) Total monthly groundwater pumping rate (L/s) at the SF, MP, and SM well fields, located within the Sarno Mountains karst aquifer, and (b) monthly precipitation (mm) for the period 2003–2020. The dotted red line indicates the long-term mean trend.
Full article
18 pages, 12872 KiB  
Article
Improving Potato Yield Prediction by Combining Cultivar Information and UAV Remote Sensing Data Using Machine Learning
by Dan Li, Yuxin Miao, Sanjay K. Gupta, Carl J. Rosen, Fei Yuan, Chongyang Wang, Li Wang and Yanbo Huang
Remote Sens. 2021, 13(16), 3322; https://doi.org/10.3390/rs13163322 - 22 Aug 2021
Cited by 35 | Viewed by 6326
Abstract
Accurate high-resolution yield maps are essential for identifying spatial yield variability patterns, determining key factors influencing yield variability, and providing site-specific management insights in precision agriculture. Cultivar differences can significantly influence potato (Solanum tuberosum L.) tuber yield prediction using remote sensing technologies. The objective of this study was to improve potato yield prediction from unmanned aerial vehicle (UAV) remote sensing by incorporating cultivar information with machine learning methods. Small plot experiments involving different cultivars and nitrogen (N) rates were conducted in 2018 and 2019. UAV-based multi-spectral images were collected throughout the growing season. Machine learning models, i.e., random forest regression (RFR) and support vector regression (SVR), were used to combine different vegetation indices with cultivar information. It was found that UAV-based spectral data from the early growing season at the tuber initiation stage (late June) were more strongly correlated with potato marketable yield than spectral data from the later growing season at the tuber maturation stage. However, the best-performing vegetation indices and the best timing for potato yield prediction varied with cultivar. The performance of the RFR and SVR models using only remote sensing data was unsatisfactory (R² = 0.48–0.51 for validation) but improved significantly when cultivar information was incorporated (R² = 0.75–0.79 for validation). It is concluded that combining high-spatial-resolution UAV images and cultivar information using machine learning algorithms can significantly improve potato yield prediction compared with methods that do not use cultivar information. More studies are needed to improve potato yield prediction using more detailed cultivar information, soil and landscape variables, and management information, as well as more advanced machine learning models. Full article
(This article belongs to the Special Issue Remote Sensing of Crop Lands and Crop Production)
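The following minimal sketch illustrates the general idea of combining vegetation indices with one-hot-encoded cultivar information in a random forest regression, as the abstract describes; the data, feature layout, and hyperparameters are invented for illustration and are not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 240                                   # plots across site-years (synthetic)
vis = rng.random((n, 5))                  # five normalized vegetation indices
cultivar = rng.integers(0, 6, n)          # six cultivars coded 0..5
cultivar_onehot = np.eye(6)[cultivar]     # one-hot cultivar information
yield_t_ha = 40 + 20 * vis[:, 0] + 3 * cultivar + rng.normal(0, 2, n)

# Feature matrix = spectral variables plus cultivar dummies
X = np.hstack([vis, cultivar_onehot])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t_ha, random_state=0)

rfr = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("validation R2:", r2_score(y_te, rfr.predict(X_te)))
```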
Figure 1. Layouts of the potato field experiments in 2018 (left) and 2019 (right), involving six cultivars (MN13142 (MN), Russet Burbank (RB), Umatilla Russet (UM), Lamoka (LA), Clearwater Russet (CW), and Ivory Russet (IR)) and three nitrogen rates (A = 134.5 kg ha⁻¹, B = 269.0 kg ha⁻¹, and C = 403.5 kg ha⁻¹).
Figure 2. True-colour composite images of the potato fields on the different sensing dates: 26 June 2018 (a), 10 July 2018 (b), 18 July 2018 (c), 2 August 2018 (d), 26 June 2019 (e), 23 July 2019 (f), 6 August 2019 (g), and 19 August 2019 (h).
Figure 3. Process flow diagram of the study.
Figure 4. Correlation coefficients between vegetation indices and potato marketable tuber yield on different dates during the growing seasons of 2018 (a) and 2019 (b).
Figure 5. Scores of mean decrease in impurity and mean decrease in accuracy for the top nine variables, and the cumulative explained variance calculated from the random forest algorithm: (a) scores of mean decrease in impurity; (b) cumulative explained variance (%) of the top nine variables selected by the scores of mean decrease in impurity; (c) scores of mean decrease in accuracy; (d) cumulative explained variance (%) of the top nine variables selected by the scores of mean decrease in accuracy. SGI_1 denotes the Normalized Sum Green Index (N_SGI) calculated from the SGI and GDD on the first sensing date; SGI_2, the same on the second sensing date; VARI_3 denotes the Normalized Visible Atmospherically Resistant Index (N_VARI) calculated from the VARI and GDD on the third sensing date. The other normalized VIs are named analogously.
Figure 6. Relationships between the measured and estimated yield for the SVR and RFR models using the selected N_VI data without cultivar information: scatterplots of measured versus estimated yield for the calibration (a) and prediction (b) datasets of the SVR model, and for the calibration (c) and prediction (d) datasets of the RFR model.
Figure 7. Relationships between the measured and estimated yield for the SVR and RFR models using the pooled vegetation indices across site-years, sensing dates, and cultivar information: scatterplots for the calibration (a) and validation (b) datasets of the SVR model, and for the calibration (c) and validation (d) datasets of the RFR model.
Figure 8. Predicted potato yield maps based on the RFR model developed with the selected normalized vegetation indices (N_VI) and cultivar information for 2018 (left) and 2019 (right).
Full article
23 pages, 5572 KiB  
Article
Field Observations of Breaking of Dominant Surface Waves
by Pavel D. Pivaev, Vladimir N. Kudryavtsev, Aleksandr E. Korinenko and Vladimir V. Malinovsky
Remote Sens. 2021, 13(16), 3321; https://doi.org/10.3390/rs13163321 - 22 Aug 2021
Cited by 8 | Viewed by 2557
Abstract
The results of field observations of the breaking of surface spectral peak waves, taken from an oceanographic research platform, are presented. Whitecaps generated by breaking surface waves were detected using video recordings of the sea surface, accompanied by co-located measurements of waves and wind velocity. Whitecaps were separated according to the speed of their movement, c, and then described in terms of spectral distributions of their areas and lengths over c. The contribution of dominant waves to the whitecap coverage varies with the wave age and exceeds 50% when seas are young. It was found that the whitecap coverage and the total length of whitecaps generated by dominant waves exhibit a strong dependence on the dominant wave steepness, ϵ_p, the former being proportional to ϵ_p^6. This result supports a parameterization of the dissipation term used in the WAM model. A semi-empirical model of the whitecap coverage, in which the contributions of breaking of dominant and equilibrium-range waves are separated, is suggested. Full article
(This article belongs to the Special Issue Passive Remote Sensing of Oceanic Whitecaps)
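The reported proportionality to ϵ_p^6 is the kind of relationship obtained from a power-law fit in log–log space; the sketch below shows such a fit on synthetic data, where the values and the assumed exponent are illustrative only, not the paper's measurements.

```python
import numpy as np

# Synthetic stand-ins for the observed dominant-wave steepness and the
# active whitecap coverage of dominant waves, with Q_p ~ eps_p**6 plus noise.
eps_p = np.linspace(0.04, 0.12, 30)
Q_p = 1e3 * eps_p**6 * np.exp(np.random.normal(0, 0.3, eps_p.size))

# Fit Q_p = A * eps_p**n by linear regression in the log-log domain
n_exp, logA = np.polyfit(np.log(eps_p), np.log(Q_p), deg=1)
print(f"fitted exponent n = {n_exp:.2f} (expected near 6)")
```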
Graphical abstract
Figure 1. Location of the Black Sea research platform and a general view of the instrumentation.
Figure 2. (left column) Frequency spectra S(f), spectral distributions of the total length of whitecaps Λ(f), and the detected number of breaking events. Dashed vertical lines indicate the spectral peak frequencies f_p (f_pPM corresponds to the Pierson–Moskowitz frequency), and solid vertical lines indicate the spectral peak intervals (f_p(1 − δ), f_p(1 + δ)) built from the swell and wind-wave f_p. The shaded blue area around S(f) shows the 95% confidence intervals. (right column) Directional wave spectra S(f, ϕ), where arrows show the wind direction. The top row is for a time interval of onshore winds on 11 September 2019, and the bottom row is for a time interval of offshore winds on 25 September 2013.
Figure 3. (a) An example of a grayscale image of the sea surface with breaking waves on 24 September 2013 at around 12:34 (UTC + 3 h). (b) The same image with foam detected using a brightness threshold, and (c) with the extracted active foam. The breaking crest outlined in plot (c) is shown in Figure 4.
Figure 4. (top) A sequence of orthorectified frames with the whitecap outlined in Figure 3c. (bottom) The area, the distance of the whitecap's centroid from an arbitrary origin, and the length of the whitecap as functions of time. Red and blue markers indicate stage A and stage B breaking, respectively. Vertical black lines show the time moments corresponding to the frames above.
Figure 5. (a,b) The Λ distributions as functions of c and c/c_p, where c_p is the peak phase speed. (c,d) The same, but for the q distributions.
Figure 6. (top) Time series of wind speed U_10 and direction ϕ_U for a video record in onshore-wind conditions. (bottom) The same, but in offshore-wind conditions. Black lines are the 15-min-average wind speed. Thick black vertical lines indicate the time intervals of continuous video recording of the sea surface. Shaded gray rectangles indicate the time intervals for which the wave spectra and the statistical wave-breaking parameters of dominant waves were estimated and shown in Figure 2.
Figure 7. (a) The contribution of dominant waves to the observed total active whitecap coverage Q as a function of the inverse wave age α. (b) The same, but as a function of the wind speed U_10. Squares and triangles correspond to the onshore- and offshore-wind conditions, respectively. Lines indicate calculations from the semi-empirical model of Q introduced in Section 4, for different U_10 and dominant wave steepness ϵ_p.
Figure 8. (a) The active whitecap coverage of dominant waves Q_p as a function of the wind speed U_10, (b) the inverse wave age α, and (c) the dominant wave steepness ϵ_p. (d) α as a function of ϵ_p. In the subfigures, the squared correlation coefficient r² and the confidence bounds are based on the best linear fit in the log–log domain. See the legend for the colour coding.
Figure 9. (a) The total length of whitecaps of dominant waves per unit surface area L_p, normalised by the peak wavenumber k_p, as a function of the dominant wave steepness ϵ_p. (b) The same quantity, but as a function of the active whitecap coverage of dominant waves Q_p. Notation and colour coding are the same as in Figure 8. Gray markers are for the recovered L_p* at offshore winds, which represents the total length of the breaking fronts of dominant waves. The gray line corresponds to the power fit to the total cloud of points when the red markers are replaced with the gray ones.
Figure 10. Estimated probability of dominant wave breaking P_p as a function of the dominant wave steepness ϵ_p, together with previously reported field data shown as black markers. In the legend: SO—Southern Ocean, BS—Black Sea, LG—Lake George, LW—Lake Washington data described in [30,31], and BGF—data points of [32]. Notation and colour coding are the same as in Figures 8 and 9. Gray markers are for P_p* at offshore winds. The gray line corresponds to the power fit to the total cloud of points when the red markers are replaced with the gray ones.
Figure 11. (a) The semi-empirical total active whitecap coverage Q as a function of the wind speed U_10 for different inverse wave ages α. (b) The same, but for different dominant wave steepness ϵ_p. Markers represent our experimental estimates of Q; squares and triangles correspond to the onshore- and offshore-wind conditions, respectively. The thick gray line indicates the parameterization of [35].
Figure A1. (left) An example of the separation of swell and wind-wave systems in the frequency domain. The spectrum is from the upper-left panel of Figure 2. Orange circles show the points used in the polynomial fit in the log–log domain. (right) Relative change in the energy of wind waves after partitioning.
Figure A2. (a) Measured Λ̃(c) (gray lines) with their arithmetic average (black line). The vertical black line divides the range of phase speeds into the part used for curve fitting (solid coloured lines) and the part used for extrapolation (dashed coloured lines). (b) The correction function Φ(c) for both power approximations.
Full article
19 pages, 12170 KiB  
Article
Fog Measurements with IR Whole Sky Imager and Doppler Lidar, Combined with In Situ Instruments
by Ayala Ronen, Tamir Tzadok, Dorita Rostkier-Edelstein and Eyal Agassi
Remote Sens. 2021, 13(16), 3320; https://doi.org/10.3390/rs13163320 - 22 Aug 2021
Cited by 2 | Viewed by 2692
Abstract
This study describes comprehensive measurements performed over four consecutive nights during a regional-scale radiation fog event in Israel's central and southern areas in January 2021. Our data included both in situ measurements of droplet size distribution, visibility range, and meteorological parameters and remote sensing with a thermal IR Whole Sky Imager and a Doppler Lidar. This work is the first extensive field campaign aimed at characterizing fog properties in Israel and a pioneering endeavor that encompasses simultaneous remote sensing measurements and analysis of a fog event with a thermal IR Whole Sky Imager. Radiation fog, as monitored in the sensor's field of view, reveals three distinctive properties that make it possible to identify it. First, it exhibits an azimuthally symmetrical shape during the buildup phase. Second, the zenith brightness temperature is very close to the ground-level air temperature. Lastly, the rate of increase in cloud cover up to a completely overcast sky is very fast. Additionally, we validated the use of a Doppler Lidar as a tool for monitoring fog by showing that the measured backscatter-attenuation vertical profile agrees with the calculation of the Lidar equation fed with data measured by the in situ instruments. It is shown that fog can be monitored with these two off-the-shelf stand-off sensing technologies, neither of which was originally designed for fog measurement. This enables the monitoring of fog properties such as type, evolution in time and vertical depth, and opens the path for future studies of the different types of fog events. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
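A minimal sketch of the single-scattering lidar-equation calculation that this kind of validation relies on: the attenuated backscatter profile β_att(r) = β(r)·exp(−2∫₀ʳ σ dr′), computed here with illustrative extinction and backscatter profiles standing in for the values derived from the in situ droplet measurements; the lidar ratio and layer depth are assumptions.

```python
import numpy as np

# Attenuated backscatter from a single-scattering lidar equation:
#   beta_att(r) = beta(r) * exp(-2 * integral_0^r sigma(r') dr')
# The profiles below are illustrative, standing in for values derived from
# measured droplet size distributions.
r = np.linspace(0.0, 300.0, 301)               # range gates (m)
sigma = np.where(r < 150, 0.02, 0.001)         # extinction (1/m): fog layer below 150 m
beta = sigma / 50.0                            # backscatter via an assumed lidar ratio of 50 sr

tau = np.cumsum(sigma) * (r[1] - r[0])         # one-way optical depth (rectangle rule)
beta_att = beta * np.exp(-2.0 * tau)           # two-way attenuated backscatter profile
```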
Figure 1. Heavy fog covering Tel Aviv buildings during the early morning hours of 3 January 2021 [6].
Figure 2. (a) IMS radiosonde measurements before and during the fog periods, from left to right: horizontal wind speed and direction, air and dew-point temperatures, mixing ratio, and relative humidity. (b) Visibility values in Ness Ziona before and during the fog event.
Figure 3. Map of the field-experiment area [Google] showing the measurement site in Ness Ziona and the nearby meteorological stations.
Figure 4. Field-experiment instrumentation: (a) sky imager [ASIS1i, SOLOMIRUS, Colorado Springs, CO, USA]; (b) droplet-size measurements [FSSP-100, PMS, Boulder, CO, USA]; (c) visibility-range sensor [SWS250, Biral, Bristol, UK]; (d) Doppler Lidar [StreamLine XR, Halo Photonics, Leigh, UK].
Figure 5. An example of the qualitative JPEG images produced by the sensor for a quick look at sky conditions. (a) An image of a midsummer clear sky from 26 July 2020 on a colour scale of brightness temperature; the scale runs yellow–orange–red–green–blue from high to low temperatures, and the full colour scale in this image corresponds to a brightness-temperature span of about 230–300 K (red is hot). Non-valid pixels appear in black. The calibration mast with two blackbodies is visible at the top of the image. (b) A visual graph of the corresponding scatter plot of brightness temperature (expressed as normalized radiance with respect to the ground-level temperature) as a function of air mass, used for clear-sky estimation. The three coloured lines correspond to the different clear-sky radiance estimation methods, which typically overlap under clear-sky conditions. Note that these images do not hold quantitative data and are used only for basic interpretation and for analysing the spatial and temporal evolution of the cloud and fog field. The quantitative data were used to obtain the results depicted in Figures 9–11.
Figure 6. Sky images on 4 January 2021. (a) 03:06—the fog is low and the whole sky image is clear. (b) 03:07—the fog is rising rapidly, as indicated by the increased brightness temperature of the sky at low elevation angles. (c) 03:09—the fog continues to rise, but clear sky is still present near the zenith. (d) 03:16—the fog covers the sensor's entire field of view, but not very densely, as the sky image is not completely homogeneous.
Figure 7. Sky images on 6 January 2021. (a) 03:42—the fog is low and the sky image is clear. (b) 03:44—the fog is rising rapidly; the slight asymmetry shows that the fog developed from the northeast. (c) 03:54—the fog covers the sensor's entire field of view, but not very densely; there is still a temperature gradient that follows the elevation angle. (d) 04:14—fully developed dense fog; the sky brightness temperature is homogeneous and equal to the air temperature.
Figure 8. Sky images on 6 January 2021. (a) 09:57—fully developed dense fog. (b) 10:03—the fog begins to dissipate; low-density patches are clearly visible. (c) 10:22—a snapshot in the middle of the rapid clearing process; half of the sky is clear. (d) 10:24—the sky is almost completely clear; some residual fog patches remain close to the ground.
Figure 9. Relative humidity and air temperature, along with the zenith temperature and thick-cloud cover, during the January 2021 event, taken from the "Rehovot" station. The x-axis labels denote the time (noon or midnight) and date between 3–7 January 2021.
Figure 10. Cloud cover and zenith brightness temperature during the fog event of 6 January 2021, calculated from WSI data. Note that the relative humidity and air temperature are taken directly from the sensors embedded in the ASIS1i and therefore represent the true values near the sensor much more accurately; however, the humidity sensor cannot measure RH above 97.8% (compare with Figure 9).
Figure 11. Cloud cover and zenith brightness temperature during 1 March 2021, calculated from WSI data. The relative humidity and air temperature are obtained from the sensors embedded in the ASIS1i.
Figure 12. Sky images on 1 March 2021. (a) 06:52—the very initial buildup phase; the sky begins to cloud over from the northwest. (b) 07:15—the middle of the buildup phase; the original buildup direction is still apparent. (c) 07:51—the end of the buildup phase; some isolated clear-sky patches are clearly seen. (d) 09:39—cloud covers the sensor's entire field of view.
Figure 13. Doppler Lidar backscatter during the fog events of 4–6 January 2021 in Ness Ziona.
Figure 14. Evolution of the fog in time during the morning of 6 January 2021: (a) visibility range; (b) temperature (Ness Ziona) and relative humidity (Beit Dagan); (c) droplet size distribution.
Figure 15. (a) StreamLine XR Lidar measurement of β during the fog event of 6 January 2021, between 4 a.m. and 7 a.m. local time. (b) Calculated values of β according to the Lidar equation, based on mass extinction and backscatter coefficients.
Full article
20 pages, 7062 KiB  
Article
Cloud Detection Algorithm for Multi-Satellite Remote Sensing Imagery Based on a Spectral Library and 1D Convolutional Neural Network
by Nan Ma, Lin Sun, Chenghu Zhou and Yawen He
Remote Sens. 2021, 13(16), 3319; https://doi.org/10.3390/rs13163319 - 22 Aug 2021
Cited by 18 | Viewed by 3894
Abstract
Automatic cloud detection in remote sensing images is of great significance. Deep-learning-based methods can achieve cloud detection with high accuracy; however, network training relies heavily on a large number of labels. Manually labelling pixel-level cloud and non-cloud annotations for many remote sensing images is laborious and requires expert-level knowledge. Different types of satellite images cannot share a set of training data, due to the differences in spectral range and spatial resolution between them. Hence, labelled samples are required in each upcoming satellite image to train a new deep-learning-based model. In order to overcome this limitation, a novel cloud detection algorithm based on a spectral library and convolutional neural network (CD-SLCNN) is proposed in this paper. In this method, residual learning and a one-dimensional CNN (Res-1D-CNN) are used to accurately capture the spectral information of the pixels based on the prior spectral library, effectively preventing errors due to the uncertainties in thin clouds, broken clouds, and clear-sky pixels during remote sensing interpretation. Benefiting from data simulation, the method is suitable for cloud detection on different types of multispectral data. A total of 62 Landsat-8 Operational Land Imager (OLI), 25 Moderate Resolution Imaging Spectroradiometer (MODIS), and 20 Sentinel-2 satellite images, acquired at different times and over different types of underlying surfaces, such as high vegetation coverage, urban areas, bare soil, water, and mountains, were used for cloud detection validation and quantitative analysis, and the cloud detection results were compared with the results from the function of mask (Fmask), the MODIS cloud mask, a support vector machine, and a random forest. The comparison revealed that the CD-SLCNN method achieved the best performance, with a higher overall accuracy (95.6%, 95.36%, 94.27%) and mean intersection over union (77.82%, 77.94%, 77.23%) on the Landsat-8 OLI, MODIS, and Sentinel-2 data, respectively. The CD-SLCNN algorithm produced consistent results with more accurate cloud contours on thick, thin, and broken clouds over diverse underlying surfaces, and performed stably on bright surfaces such as buildings, ice, and snow. Full article
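A toy sketch of a residual 1D convolutional block operating on per-pixel spectra, in the spirit of the Res-1D-CNN described above; the layer sizes, band count, and PyTorch framing are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """A residual 1D-convolution block of the kind a Res-1D-CNN stacks over
    per-pixel spectra (layer sizes here are illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut

# Toy pipeline: a batch of 8 pixel spectra with 7 bands -> cloud / non-cloud
spectra = torch.randn(8, 1, 7)
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),
    ResBlock1D(16), ResBlock1D(16), ResBlock1D(16),
    nn.Flatten(), nn.Linear(16 * 7, 2),
)
logits = net(spectra)                          # shape (8, 2)
```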
Figure 1. Framework of the cloud detection algorithm based on a spectral library and a convolutional neural network (the CD-SLCNN algorithm).
Figure 2. Spectra of different objects: (a) cloud; (b) vegetation [30]; (c) soil [30]; (d) man-made surface [30]; (e) snow/ice [30]; (f) water [30].
Figure 3. Spectra of thick, thin, and broken clouds in different surface environments. The red, green, and blue dots represent thick, broken, and thin clouds, respectively.
Figure 4. Comparison of actual and simulated cloud pixel spectra: (a) actual cloud pixel spectrum from an AVIRIS image and the simulated cloud pixel spectrum of Landsat-8; (b) actual cloud pixel spectrum from an AVIRIS image and the simulated cloud pixel spectrum of MODIS; (c) actual cloud pixel spectrum from a Landsat-8 image and the simulated cloud pixel spectrum of Landsat-8; (d) actual cloud pixel spectrum from a MODIS image and the simulated cloud pixel spectrum of MODIS.
Figure 5. Network structure of the cloud detection algorithm. Block1, Block2, and Block3 represent three identical residual modules; each block contains residual learning (Conv 1D, 1) and a dual-channel convolutional network.
Figure 6. Examples of false-colour (RGB: bands 5, 4, and 3) composite images and cloud (masked white) detection results for Landsat-8 full scenes and zoomed-in images (areas outlined by yellow boxes) over diverse underlying surfaces: (a) vegetation and inland water areas, acquired on 3 July 2019, at Path 23 and Row 33; (b) bare soil areas, acquired on 29 July 2019, at Path 40 and Row 34; (c) ocean, sand, vegetation, and urban areas, acquired on 20 May 2017, at Path 118 and Row 38; (d) vegetation and urban areas, acquired on 30 June 2019, at Path 123 and Row 32.
Figure 7. Examples of false-colour composite images and cloud (masked white) detection results for Landsat-8 subimages over diverse underlying surfaces: (a) vegetation and inland water areas; (b) bare soil areas; (c) cultivated land areas; (d) urban areas; (e) snow/ice areas; (f) vegetation areas; (g) rock and ocean areas; (h) vegetation and urban areas. The annotations on the left and right sides indicate the acquisition times.
Figure 8. Comparison of the Mask, RF, Fmask 4.0, SVM, and CD-SLCNN cloud detection results on the Landsat-8 Biome dataset for: (a) forest and ocean areas, acquired on 26 August 2014, with centre longitude 149°E and centre latitude 60°N, at Path 108 and Row 18; (b) urban and ocean areas, acquired on 29 August 2014, with centre longitude 121°E and centre latitude 4°S, at Path 113 and Row 63; (c) cultivated land areas, acquired on 31 August 2013, with centre longitude 101°E and centre latitude 36°N, at Path 132 and Row 35; (d) forest and inland water areas, acquired on 21 May 2014, with centre longitude 56°W and centre latitude 4°N, at Path 229 and Row 57.
Figure 9. Qualitative comparison of cloud detection results in MODIS images over diverse underlying surfaces: (a) vegetation and inland water areas, acquired on 15 January 2013; (b) ocean areas, acquired on 8 October 2018; (c) forest areas, acquired on 29 December 2017; (d) vegetation and ocean areas, acquired on 8 October 2018; (e) ocean areas, acquired on 9 October 2018; (f) mountain and building areas, acquired on 28 November 2016.
Figure 10. Qualitative comparison of cloud detection results in Sentinel-2A images over diverse underlying surfaces: (a) forest areas, acquired on 21 January 2020; (b) shrubland areas, acquired on 20 April 2019; (c) grass and inland water areas, acquired on 27 July 2020; (d) urban and ocean areas, acquired on 23 March 2020; (e) bare soil areas, acquired on 23 March 2020; (f) urban areas, acquired on 30 March 2020.
Full article
24 pages, 14020 KiB  
Article
SAR Imaging Distortions Induced by Topography: A Compact Analytical Formulation for Radiometric Calibration
by Pasquale Imperatore
Remote Sens. 2021, 13(16), 3318; https://doi.org/10.3390/rs13163318 - 22 Aug 2021
Cited by 10 | Viewed by 2824
Abstract
Modeling of the synthetic aperture radar (SAR) imaging distortions induced by topography is addressed, and a novel radiometric calibration method is proposed in this paper. An analytical formulation of the problem is first provided in purely geometrical terms, by adopting the theoretical notions of the differential geometry of surfaces. The novel and conceptually simple formulation relies on a cylindrical coordinate system whose longitudinal axis corresponds to the sensor flight direction. A 3D representation of the terrain shape is then incorporated into the SAR imaging model by resorting to a suitable parametrization of the observed ground surface. Within this analytical framework, the area-stretching function quantitatively expresses the inherent local radiometric distortions in geometrical terms. This paper establishes its analytical expression in terms of the magnitude of the gradient of the look-angle function uniquely defined in the image domain, which makes it mathematically concise and amenable to straightforward implementation. The practical relevance of the formulation is also illustrated from a computational perspective, by elucidating its effective discrete implementation. In particular, an inverse cylindrical mapping approach is adopted, thus avoiding the pixel-area fragmentation and integration required in forward-mapping-based approaches. The effectiveness of the proposed SAR radiometric calibration method is experimentally demonstrated using COSMO-SkyMed SAR data acquired over a mountainous area in Italy. Full article
(This article belongs to the Special Issue Electromagnetic Modeling in Microwave Remote Sensing)
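Since the area-stretching function is expressed through the magnitude of the gradient of the look-angle function in the image domain, a discrete implementation reduces to a per-pixel gradient computation; the sketch below illustrates this on an arbitrary look-angle raster, where the array contents and pixel spacings are placeholders, not the paper's data or exact formulation.

```python
import numpy as np

# Gradient magnitude of a discretised look-angle function theta(r, a),
# the quantity the paper's area-stretching factor is built from.
theta = np.random.rand(512, 512)              # look-angle raster (radians), placeholder
d_az, d_rg = 3.0, 1.5                         # azimuth / range pixel spacing (m), assumed

# np.gradient returns the derivative along axis 0 (azimuth) then axis 1 (range)
dtheta_da, dtheta_dr = np.gradient(theta, d_az, d_rg)
grad_mag = np.hypot(dtheta_da, dtheta_dr)     # per-pixel |grad theta|
```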
Figure 1. Cylindrical coordinate system.
Figure 2. Geometric scheme of the SAR imaging process.
Figure 3. 3D geometrical scheme of a ground surface patch: n̂′ is the surface unit normal, χ_l is the local incidence angle, ω is the projection angle, and ẑ and â (= x̂) are the vertical and azimuth directions, respectively.
Figure 4. Schematic illustration of the discrete mapping for the spatial transformation q = τ(s), with s = (u, v) and q = (r, a).
Figure 5. 3 × 3 grid kernel.
Figure 6. Processing scheme.
Figure 7. Elevation (m) of the DEM, represented in the image space. The range direction is from left to right; the azimuth direction is from bottom to top.
Figure 8. The range direction is from left to right; the azimuth direction is from bottom to top: (a) look-angle function (LAF) (degrees), θ = θ(r, a); (b) a mask identifying layover (red) and shadow (blue) areas.
Figure 9. Magnitude of the (range-weighted) partial derivative of the look-angle function along: (a) the azimuth direction (dB), |r ∂θ/∂a|; (b) the range direction (dB), |r ∂θ/∂r|.
Figure 10. (a) Simulated radiometric-distortion image (dB) associated with the ground surface area; (b) local incidence angle (LIA) (degrees), χ_l = χ_l(r, a).
Figure 11. (a) σ̃⁰ (dB) image obtained from SAR data without compensation of the topography-induced radiometric distortions; (b) σ⁰ (dB) image obtained from SAR data including the compensation of the topography-induced radiometric distortions. A mask identifying layover (red) and shadow (blue) areas is superimposed.
Figure 12. Distribution of the backscattering coefficient σ⁰ (dB): (a) obtained without compensation of the topography-induced radiometric distortions; (b) obtained by including the compensation of the topography-induced radiometric distortions.
Figure 13. Distribution of the local incidence angle (LIA) (degrees).
Figure 14. Backscattering coefficient σ⁰ (dB) without the compensation of topography-induced distortions vs. local incidence angle (degrees).
Figure 15. Backscattering coefficient σ⁰ (dB) after the compensation of topography-induced distortions vs. local incidence angle (degrees).
Full article
21 pages, 3424 KiB  
Article
Self-Attention-Based Conditional Variational Auto-Encoder Generative Adversarial Networks for Hyperspectral Classification
by Zhitao Chen, Lei Tong, Bin Qian, Jing Yu and Chuangbai Xiao
Remote Sens. 2021, 13(16), 3316; https://doi.org/10.3390/rs13163316 - 21 Aug 2021
Cited by 17 | Viewed by 6813
Abstract
Hyperspectral classification is an important technique for remote sensing image analysis. For current classification methods, limited training data restrict the achievable results. Recently, the Conditional Variational Autoencoder Generative Adversarial Network (CVAEGAN) has been used to generate virtual samples to augment the training data, which can improve classification performance. To further improve classification performance, we propose, based on the CVAEGAN, a Self-Attention-Based Conditional Variational Autoencoder Generative Adversarial Network (SACVAEGAN). Compared with CVAEGAN, we first use random latent vectors to obtain more enhanced virtual samples, which improves generalization performance. We then introduce a self-attention mechanism into our model to force the training process to pay more attention to global information, which achieves better classification accuracy. Moreover, we explore model stability by incorporating the WGAN-GP loss function into our model to reduce the probability of mode collapse. Experiments on three data sets show that SACVAEGAN has clear advantages in accuracy over state-of-the-art HSI classification methods. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
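A compact sketch of a SAGAN-style self-attention module of the kind Figure 1 below depicts; the 1/8 channel reduction and the learnable gate γ are common choices in the self-attention literature, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2D(nn.Module):
    """Self-attention over spatial positions of a feature map; the output is
    gated by a learnable scalar gamma and added back residually."""
    def __init__(self, c: int):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)      # query projection
        self.k = nn.Conv2d(c, c // 8, 1)      # key projection
        self.v = nn.Conv2d(c, c, 1)           # value projection
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.k(x).flatten(2)                   # (b, c/8, hw)
        attn = F.softmax(q @ k, dim=-1)            # (b, hw, hw) attention map
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                # residual gating

x = torch.randn(2, 32, 9, 9)                       # e.g. HSI patch features
y = SelfAttention2D(32)(x)                         # same shape as x
```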
Figure 1. The self-attention module for SACVAEGAN. The ⊗ denotes matrix multiplication.
Figure 2. The structure of the proposed SACVAEGAN.
Figure 3. The structure of the discriminator D.
Figure 4. The structure of the VAE.
Figure 5. The structure of the classifier C.
Figure 6. Classification results on the Indian Pines data set: (a) ground truth, (b) SVM-RBF (51.69%), (c) Two-CNN (90.10%), (d) 3D-CNN (77.80%), (e) DCGAN (91.11%), (f) DBGAN (84.42%), (g) CVAEGAN (93.47%), and (h) SACVAEGAN (95.98%).
Figure 7. Classification results on the PaviaU data set: (a) ground truth, (b) SVM-RBF (76.00%), (c) Two-CNN (95.80%), (d) 3D-CNN (96.55%), (e) DCGAN (93.88%), (f) DBGAN (96.99%), (g) CVAEGAN (98.07%), and (h) SACVAEGAN (98.30%).
Figure 8. Classification results on the Salinas data set: (a) ground truth, (b) SVM-RBF (75.25%), (c) Two-CNN (95.79%), (d) 3D-CNN (91.05%), (e) DCGAN (90.39%), (f) DBGAN (94.13%), (g) CVAEGAN (98.49%), and (h) SACVAEGAN (98.92%).
Figure 9. OA (%) of the different methods with different training percentages of samples: (a) Indian Pines, (b) PaviaU, (c) Salinas.
Figure 10. Illustration of the spatial features on the Indian Pines (a–e), PaviaU (f–j), and Salinas (k–o) data sets.
Figure 11. Illustration of the spectral features on the Indian Pines (a–e), PaviaU (f–j), and Salinas (k–o) data sets. The orange line represents the mean, the green line the mean plus the variance, and the blue line the mean minus the variance.
Full article
18 pages, 12944 KiB  
Article
InSAR Coherence Analysis for Wetlands in Alberta, Canada Using Time-Series Sentinel-1 Data
by Meisam Amani, Valentin Poncos, Brian Brisco, Fatemeh Foroughnia, Evan R. DeLancey and Sadegh Ranjbar
Remote Sens. 2021, 13(16), 3315; https://doi.org/10.3390/rs13163315 - 21 Aug 2021
Cited by 14 | Viewed by 4559
Abstract
Wetlands are valuable natural resources that provide numerous services to the environment. Many studies have demonstrated the potential of various types of remote sensing datasets and techniques for wetland mapping and change analysis. However, relatively few studies have investigated the application of Interferometric Synthetic Aperture Radar (InSAR) coherence products for wetland studies, especially over large areas. Therefore, in this study, coherence products over the entire province of Alberta, Canada (~661,000 km²) were generated using Sentinel-1 data acquired from 2017 to 2020. These products, along with a large number of wetland reference samples, were then employed to assess the separability of different wetland types and their trends over time. Overall, our analyses showed that coherence can be considered an added-value feature for wetland classification and monitoring. The Treed Bog and Shallow Open Water classes showed the highest and lowest coherence values, respectively. The Treed Wetland and Open Wetland classes were easily distinguishable. When analyzing the wetland subclasses, it was observed that the Treed Bog and Shallow Open Water classes can be easily discriminated from the other subclasses. However, there were overlaps between the signatures of the other wetland subclasses, although there were still dates on which these classes were also distinguishable. The analysis of the multi-temporal coherence products also showed that the coherence products generated in the spring/fall (e.g., May and October) and summer (e.g., July) seasons had the highest and lowest coherence values, respectively. It was also observed that wetland classes preserved coherence during the leaf-off season (15 August–15 October), while they had relatively lower coherence during the leaf-on season (i.e., 15 May–15 August). Finally, several suggestions for future studies are provided. Full article
(This article belongs to the Special Issue Radar Interferometry in Big Data Era)
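For reference, interferometric coherence is commonly estimated as the normalized cross-correlation of two co-registered complex SLC images over a moving window; the sketch below implements that standard estimator (the paper's exact processing chain may differ, and the window size and test data here are placeholders).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1: np.ndarray, s2: np.ndarray, win: int = 5) -> np.ndarray:
    """Sample coherence magnitude of two co-registered complex SLC images,
    estimated over a win x win moving window."""
    cross = s1 * np.conj(s2)
    # uniform_filter works on real arrays, so filter real and imaginary parts
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    p1 = uniform_filter(np.abs(s1) ** 2, win)
    p2 = uniform_filter(np.abs(s2) ** 2, win)
    return np.abs(num) / np.sqrt(p1 * p2 + 1e-12)

# Toy example with random complex scenes
rng = np.random.default_rng(1)
s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
s2 = s1 + 0.3 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
gamma = coherence(s1, s2)          # values near 1 indicate high coherence
```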
Graphical abstract
Figure 1. Study area of Alberta, Canada, and the distribution of the reference samples (shown in pink).
Figure 2. The five tracks of the Sentinel-1 orbit covering the entire study area. Tracks 20, 49, 122, 151, and 166 are indicated by the red, pink, green, yellow, and purple colours, respectively. White indicates the boundary of the province of Alberta.
Figure 3. Sentinel-1 coherence map of Alberta in June 2020.
Figure 4. (a) Temporal average coherence and (b) temporal standard deviation of the coherence maps for a selected area in Alberta. The more yellowish parts show higher mean and standard deviation of the coherence values.
Figure 5. Monthly coherence values of wetland classes based on different categories: (a) all wetland classes as one category, (b) Treed vs. Open Wetlands, (c) five wetland classes based on the CWCS, and (d) seven individual wetland classes (see Section 2.5 for more details).
Figure 6. Seasonal coherence values of wetland classes based on different categories: (a) all wetland classes as one category, (b) Treed vs. Open Wetlands, (c) five wetland classes based on the CWCS, and (d) seven individual wetland classes (see Section 2.5 for more details).
Figure 7. Coherence values of wetland classes in the leaf-on/off seasons based on different categories: (a) all wetland classes as one category, (b) Treed vs. Open Wetlands, (c) five wetland classes based on the CWCS, and (d) seven individual wetland classes (see Section 2.5 for more details).
Figure 8. Violin plot of the wetland classes from the coherence average products.
Full article
22 pages, 7780 KiB  
Article
Assessment of Sentinel-2 Images, Support Vector Machines and Change Detection Algorithms for Bark Beetle Outbreaks Mapping in the Tatra Mountains
by Robert Migas-Mazur, Marlena Kycko, Tomasz Zwijacz-Kozica and Bogdan Zagajewski
Remote Sens. 2021, 13(16), 3314; https://doi.org/10.3390/rs13163314 - 21 Aug 2021
Cited by 26 | Viewed by 4818
Abstract
Cambiophagous insects, fires and windthrow cause significant forest disturbances, generating ecological changes and economic losses. The bark beetle (Ips typographus L.), which inhabits coniferous forests and eliminates weakened trees, poses a major threat to tree stands dominated [...] Read more.
Cambiophagous insects, fires and windthrow cause significant forest disturbances, generating ecological changes and economic losses. The bark beetle (Ips typographus L.), which inhabits coniferous forests and eliminates weakened trees, poses a major threat to tree stands dominated by Norway spruce (Picea abies), which cover a large part of the mountain areas, as well as the lowlands, of Northern, Central and Eastern Europe. Due to the dynamics of these phenomena, the EU recommends constant monitoring of forests for large-area disturbances and for factors affecting tree stands' susceptibility to damage. Multispectral satellite images are the right tool for this task, as they regularly provide up-to-date information on environmental changes free of charge. The aim of this study was to develop a method of identifying disturbances of spruce stands, including the identification of bark beetle outbreaks. Sentinel-2 images from 2015–2018 were used for this purpose; the reference data were high-resolution aerial images, WorldView-2 satellite imagery, and field verification data. Support Vector Machines (SVM) distinguished six classes: deciduous forests, coniferous forests, grasslands, rocks, snags (dieback of standing trees) and cuts/windthrow. Remote sensing vegetation indices, Multivariate Alteration Detection (MAD), Multivariate Alteration Detection/Maximum Autocorrelation Factor (MAD/MAF), iteratively re-weighted Multivariate Alteration Detection (iMAD), SVM signatures trained on another year, and stacked band rasters allowed us to identify: (1) no changes; (2) dieback of standing trees; (3) logging or falling down of trees. The overall accuracy of the SVM classification ranged from 97% to 99%; it was observed that in 2015–2018, as a result of windthrow, bark beetle outbreaks, and the consequences of those natural disturbances (e.g., sanitary cuts), approximately 62.5 km2 of coniferous stands (29%) died in the studied area of the Tatra Mountains. Full article
(This article belongs to the Special Issue Remote Sensing for Mountain Ecosystems)
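For readers who want the flavor of the classification step, below is a minimal scikit-learn sketch of an RBF-kernel SVM applied to a per-pixel feature matrix of band values and vegetation indices. The random data, the 12-feature layout and the hyperparameters are stand-in assumptions, not the configuration tuned in the paper:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Stand-in data: rows are pixels, columns are Sentinel-2 band values plus
# vegetation indices (e.g., SAVI); labels 0..5 are the six land-cover classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 12))
y = rng.integers(0, 6, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Scaling matters because bands and indices live on very different numeric
# ranges; the RBF kernel handles nonlinear class boundaries.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

The same fitted signatures can then be applied to an image from another year, which is the trick the abstract describes for flagging pixels that change class between dates.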
Full article ">Figure 1
<p>Location of the research area. Explanation: (a) the border between Poland and Slovakia; it also runs along the border between the Polish Tatra National Park and the Slovak TANAP; (b) the border of the Polish Tatra National Park; (c) border of the Slovak TANAP; (d) Sentinel-2 image range; brown polygon range of Figure 4. A map from OpenStreetMap was used in the background.</p>
Full article ">Figure 2
<p>Research schema.</p>
Full article ">Figure 3
<p>Sentinel-2 image-based map of the Tatra land cover: (a) country border; (b) border of the Polish TPN; (c) border of the Slovak TANAP; (1) snags; (2) coniferous forests; (3) deciduous forests; (4) grasslands; (5) rocks; (6) cuts or windthrow areas. A map from OpenStreetMap was used in the background and Sentinel-2 RGB 432 composition, 3 October 2015.</p>
Full article ">Figure 4
<p>A visual presentation of the results obtained from used algorithms on a fragment of the Sentinel-2 scene. Fresh snags are marked in yellow (mainly as a result of bark beetle outbreak), and the cuts/windthrow area class in red. The green line marks the area of strict protection of the Tatra National Park. The geographical range of the image is presented on <a href="#remotesensing-13-03314-f001" class="html-fig">Figure 1</a>; background: orthophotomap from 2017.</p>
Full article ">Figure 5
<p>Increase of informationality of remote-sensing indices measured by OA and F1-score for analyzed classes (indices are presented in the order of informationality: from the most significant SAVI on the left, to the least significant DSWI). Red marked indices were not used for analyses.</p>
Full article ">Figure 6
<p>Map of the bark beetle outbreak in 2017 for the Tatras: (a) country border; (b) border of the Polish TPN; (c) border of the Slovak TANAP; (1) snags; (2) undisturbed coniferous forest; (3) cuts or windthrow areas. A map from OpenStreetMap was used in the background and Sentinel-2 RGB 432 composition acquired on 2 October 2017.</p>
Full article ">Figure 7
<p>Share of undisturbed forest, snags and cuts/windthrow in the following years for the entire analyzed area.</p>
Full article ">Figure 8
<p>Map of coniferous forest degradation in the period of 2015–2018: (a) country border; (b) border of the Polish TPN; (c) border of the Slovak TANAP; (1) undisturbed coniferous forests; (2) damages observed in 2015; (3) damages caused in 2016; (4) damages caused in 2017; (5) damages caused in 2018. A map from OpenStreetMap was used in the background and Sentinel-2 RGB 432 composition, 15 October 2018.</p>
Full article ">
19 pages, 12313 KiB  
Article
LSTM-Based Remote Sensing Inversion of Largescale Sand Wave Topography of the Taiwan Banks
by Yujin Zhao, Liaoying Zhao, Huaguo Zhang and Bin Fu
Remote Sens. 2021, 13(16), 3313; https://doi.org/10.3390/rs13163313 - 21 Aug 2021
Cited by 1 | Viewed by 2745
Abstract
Shallow underwater topography has important practical applications in fisheries, navigation, and pipeline laying. Traditional multibeam bathymetry is limited by the high cost of large-scale topographic surveys in large, shallow sand wave areas. Remote sensing inversion methods for detecting shallow sand wave topography in [...] Read more.
Shallow underwater topography has important practical applications in fisheries, navigation, and pipeline laying. Traditional multibeam bathymetry is limited by the high cost of large-scale topographic surveys in large, shallow sand wave areas, and remote sensing inversion methods for detecting the shallow sand wave topography of the Taiwan Banks rely heavily on measured water depth data. To address these problems, this study proposes a large-scale remote sensing inversion model of sand wave topography based on a long short-term memory (LSTM) machine learning network. Using multi-angle sun glitter remote sensing to obtain sea surface roughness (SSR) information, and by training on SSR and its corresponding water depth information, the model extracts the sand wave topography of a large shallow-sea sand wave region. The accuracy of the model is validated through its application to a 774 km2 area of the sand wave topography of the Taiwan Banks. The model obtains a root mean square error of 3.31–3.67 m, indicating that the method generalizes well and can recover large-scale shallow sand wave topography with only limited training on measured bathymetry data. Sand wave topography is widely present in tidal environments; our method has low requirements for ground data and therefore high application value. Full article
(This article belongs to the Special Issue GIS and RS in Ocean, Island and Coastal Zone)
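Because each measurement line is naturally a sequence of SSR samples with a depth at every step, a sequence-to-sequence LSTM regressor is a natural fit. The PyTorch sketch below shows one plausible shape for such a model; the layer sizes, loss, and synthetic tensors are our assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

class SSRDepthLSTM(nn.Module):
    """Map a sequence of sea-surface-roughness (SSR) samples along a
    profile line to a water depth estimate at each step."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, seq_len, 1) SSR values
        out, _ = self.lstm(x)      # out: (batch, seq_len, hidden)
        return self.head(out)      # (batch, seq_len, 1) predicted depths

model = SSRDepthLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training step on synthetic tensors; real inputs would be filtered SSR
# profiles paired with multibeam depths along the 500 m interval lines.
ssr = torch.randn(8, 200, 1)
depth = torch.randn(8, 200, 1)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(ssr), depth)
    loss.backward()
    opt.step()
print(float(loss))
```

Training on a subset of surveyed lines and predicting along unsurveyed ones is the sense in which the method needs only "some training on measured bathymetry data".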
Full article ">Figure 1
<p>Taiwan Strait shallow sand wave topography in full view. The red boundary indicates the outer envelope range of the shallow Taiwan Strait with an area of ~16,400 km<sup>2</sup>, and the black box indicates the remote sensing image range as in <a href="#remotesensing-13-03313-f002" class="html-fig">Figure 2</a> with an area of ~3600 km<sup>2</sup>.</p>
Full article ">Figure 2
<p>(<b>a</b>) Experimental area of A1, A2, A3, and A4. The purple line is the measured water depth in 2012, and the blue line is the measured water depth in 2017. The legend indicates the value of the SSR. (<b>b</b>) Distribution of 25 measurement lines with 500 m interval in A1 area.</p>
Full article ">Figure 3
<p>(<b>a</b>) Water depth in area A1 obtained by interpolating with 500 m interval bathymetric lines, (<b>b</b>) roughness in area A1, (<b>c</b>) variations in water depth on the profile line, (<b>d</b>) variations in SSR on the profile line.</p>
Full article ">Figure 4
<p>Experimental flow. Slanted boxes indicate input data, straight boxes indicate actions, rounded boxes indicate output data, and arrows indicate sequence.</p>
Full article ">Figure 5
<p>(<b>a</b>) Cross-filtered template and (<b>b</b>) original SSR on the profile line and filtered SSR.</p>
Full article ">Figure 6
<p>Node structure of the LSTM.</p>
Full article ">Figure 7
<p>LSTM neural network topology.</p>
Full article ">Figure 8
<p>Train loss and test loss variation.</p>
Full article ">Figure 9
<p>(<b>a</b>) Water depth in region A1 obtained by interpolating with 500 m interval bathymetric lines. The blue line and green line indicate the 50% and 75% demarcation lines of the A1 region, respectively, (<b>b</b>) predicted water depth obtained from Model I, (<b>c</b>) predicted water depth obtained from Model II, and (<b>d</b>) predicted water depth obtained from Model III.</p>
Full article ">Figure 10
<p>Scatter density distribution of predicted water depths for (<b>a</b>) Model I, (<b>b</b>) Model II, and (<b>c</b>) Model III.</p>
Full article ">Figure 11
<p>Histograms of the difference frequency distribution of predicted water depths for (<b>a</b>) Model I, (<b>b</b>) Model II, and (<b>c</b>) Model III.</p>
Full article ">Figure 12
<p>(<b>a</b>) Roughness of area A2, (<b>b</b>) predicted water depth of area A2, (<b>c</b>) roughness of area A3, (<b>d</b>) predicted water depth of area A3, (<b>e</b>) roughness of area A4, (<b>f</b>) predicted water depth of area A4. Blue lines indicate measured profile lines.</p>
Full article ">Figure 12 Cont.
<p>(<b>a</b>) Roughness of area A2, (<b>b</b>) predicted water depth of area A2, (<b>c</b>) roughness of area A3, (<b>d</b>) predicted water depth of area A3, (<b>e</b>) roughness of area A4, (<b>f</b>) predicted water depth of area A4. Blue lines indicate measured profile lines.</p>
Full article ">Figure 13
<p>Sections of areas (<b>a</b>) A2, (<b>b</b>) A3, and (<b>c</b>) A4.</p>
Full article ">Figure 14
<p>Scatter density plots of predicted water depths in areas (<b>a</b>) A2, (<b>b</b>) A3, and (<b>c</b>) A4.</p>
Full article ">Figure 15
<p>Difference distribution of predicted water depths in areas (<b>a</b>) A2, (<b>b</b>) A3, and (<b>c</b>) A4.</p>
Full article ">
28 pages, 8907 KiB  
Article
Dual-Satellite Alternate Switching Ranging/INS Integrated Navigation Algorithm for Broadband LEO Constellation Independent of Altimeter and Continuous Observation
by Lvyang Ye, Yikang Yang, Xiaolun Jing, Hengnian Li, Haifeng Yang and Yunxia Xia
Remote Sens. 2021, 13(16), 3312; https://doi.org/10.3390/rs13163312 - 21 Aug 2021
Cited by 11 | Viewed by 2871
Abstract
In challenging environments such as forests, valleys and high-latitude areas, there are usually fewer than four visible satellites. For cases with only two visible satellites, we propose a dual-satellite alternate switching ranging integrated navigation algorithm based on a broadband low earth orbit [...] Read more.
In challenging environments such as forests, valleys and high-latitude areas, there are usually fewer than four visible satellites. For cases with only two visible satellites, we propose a dual-satellite alternate switching ranging integrated navigation algorithm based on a broadband low earth orbit (LEO) constellation with integrated communication and navigation (ICN) technology. Unlike traditional dual-satellite integrated navigation algorithms, it achieves precise real-time navigation and positioning without an altimeter or continuous observation. First, we present the principle of our algorithm. Second, with the help of an unscented Kalman filter (UKF), we give the observation and state equations of the algorithm and establish a mathematical model of multipath/non-line-of-sight (NLOS) and noise interference. Finally, based on the SpaceX constellation, we analyze the performance of our algorithm through simulations of various scenarios. The results show that our algorithm effectively suppresses the divergence of the inertial navigation system (INS), remains robust in the face of different multipath/NLOS interference and various noise environments, and outperforms traditional dual-satellite positioning algorithms and some existing advanced three-satellite positioning algorithms across various indicators. These results show that our algorithm can meet real-time location service requirements in harsh and challenging environments, and it provides a new navigation and positioning method when only two satellites are visible. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
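To make the alternate switching idea concrete, the toy sketch below runs an unscented Kalman filter (via the filterpy library) in which each epoch updates on a single pseudorange from whichever of the two satellites is active that epoch. It deliberately omits the paper's INS mechanization, receiver clock states, and multipath/NLOS model; the satellite positions, noise levels, and state layout are illustrative assumptions:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 1.0  # filter step, seconds

def fx(x, dt):
    """Constant-velocity motion model; x = [px, py, pz, vx, vy, vz]."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    return F @ x

def hx(x, sat_pos=None):
    """Single pseudorange to whichever satellite is active this epoch."""
    return np.array([np.linalg.norm(sat_pos - x[:3])])

points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=1, dt=DT, fx=fx, hx=hx, points=points)
ukf.x = np.array([100.0, -100.0, 0.0, 10.0, 0.0, 0.0])  # rough initial guess
ukf.P *= 1e4
ukf.R = np.array([[25.0]])  # 5 m range noise, variance in m^2
ukf.Q = np.eye(6) * 0.1

# Two fixed stand-in satellite positions; real LEO satellites move fast and
# would be propagated from ephemerides at every epoch.
sats = [np.array([7.0e6, 0.0, 0.0]), np.array([0.0, 7.0e6, 2.0e6])]

rng = np.random.default_rng(1)
truth = np.array([0.0, 0.0, 0.0, 10.0, 0.0, 0.0])
for k in range(60):
    truth = fx(truth, DT)
    sat = sats[k % 2]                       # alternate the ranging satellite
    z = np.linalg.norm(sat - truth[:3]) + rng.normal(0.0, 5.0)
    ukf.predict()
    ukf.update(np.array([z]), sat_pos=sat)  # extra kwargs are forwarded to hx
print(ukf.x[:3] - truth[:3])                # position error after 60 epochs
```

Alternating which satellite is ranged each epoch is what substitutes for having both ranges (or an altimeter) simultaneously: over time the geometry of the switching observations constrains the full state.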
Full article ">Figure 1
<p>Schematic diagram of the standard INS+LEO2-satellite integrated navigation algorithm.</p>
Full article ">Figure 2
<p>Schematic diagram of INS+LEO2-satellite alternate switching ranging integrated navigation algorithm.</p>
Full article ">Figure 3
<p>Standard INS+LEO2-satellite integrated navigation positioning results.</p>
Full article ">Figure 4
<p>Error curve of the INS+LEO2-satellite alternate switching ranging integrated navigation algorithm based on the same orbital surface.</p>
Full article ">Figure 5
<p>Error curve of the INS+LEO2-alternate switching ranging integrated navigation algorithm based on adjacent orbit surfaces.</p>
Full article ">Figure 6
<p>Navigation and positioning comparison curve of the INS+LEO2-satellite alternate switching ranging algorithm in the same orbit and adjacent orbits.</p>
Full article ">Figure 7
<p>Error statistics.</p>
Full article ">Figure 8
<p>Comparison curve between the INS+LEO2-satellite 5 s alternate switching ranging algorithm and INS+MEO2-satellite 5 s alternate switching algorithm.</p>
Full article ">Figure 8 Cont.
<p>Comparison curve between the INS+LEO2-satellite 5 s alternate switching ranging algorithm and INS+MEO2-satellite 5 s alternate switching algorithm.</p>
Full article ">Figure 9
<p>Error statistics.</p>
Full article ">Figure 10
<p>Error under different MSR.</p>
Full article ">Figure 11
<p>Error statistics.</p>
Full article ">Figure 12
<p>Errors under Different Noise Intensities.</p>
Full article ">Figure 13
<p>Error statistics.</p>
Full article ">
12 pages, 1874 KiB  
Article
First Estimation of Global Trends in Nocturnal Power Emissions Reveals Acceleration of Light Pollution
by Alejandro Sánchez de Miguel, Jonathan Bennie, Emma Rosenfeld, Simon Dzurjak and Kevin J. Gaston
Remote Sens. 2021, 13(16), 3311; https://doi.org/10.3390/rs13163311 - 21 Aug 2021
Cited by 69 | Viewed by 25550
Abstract
The global spread of artificial light is eroding the natural night-time environment. Estimating the pattern and rate of growth of light pollution on multi-decadal scales has nonetheless proven challenging. Here we show that the power of global satellite-observable light emissions [...] Read more.
The global spread of artificial light is eroding the natural night-time environment. Estimating the pattern and rate of growth of light pollution on multi-decadal scales has nonetheless proven challenging. Here we show that the power of global satellite-observable light emissions increased by at least 49% from 1992 to 2017. We estimate the hidden impact of the transition to solid-state light-emitting diode (LED) technology, which increases emissions at visible wavelengths that existing satellite sensors cannot detect, and find that the true increase in radiance in the visible spectrum may be as high as 270% globally and 400% in specific regions. These dynamics vary by region, but there is limited evidence that advances in lighting technology have led to decreased emissions. Full article
(This article belongs to the Special Issue Light Pollution Monitoring Using Remote Sensing Data)
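The "hidden" growth arises because DMSP-OLS and VIIRS-DNB are largely blind to the blue light that white LEDs emit. A back-of-envelope way to see this is to compare the fraction of a lamp's output that falls within a sensor's response curve; the sketch below does so with crude Gaussian stand-ins for the real spectra (the actual curves, such as those in Figure 1, must come from measured data):

```python
import numpy as np

wl = np.linspace(400, 1000, 601)  # wavelength grid in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Crude stand-in shapes only -- real sensor responses and lamp spectra are
# measured curves, not these Gaussians.
sensor = gauss(700, 110)                     # red-shifted broadband sensor
hps = gauss(589, 40)                         # high pressure sodium lamp
led = gauss(450, 20) + 0.8 * gauss(580, 70)  # LED: blue peak + phosphor hump

def detected_fraction(lamp):
    """Share of the lamp's emitted power falling inside the sensor band."""
    return np.trapz(lamp * sensor, wl) / np.trapz(lamp, wl)

f_hps, f_led = detected_fraction(hps), detected_fraction(led)
# Swapping HPS for LED at constant *detected* radiance multiplies the
# *emitted* power by f_hps / f_led, so part of the increase goes unseen.
print(f"HPS: {f_hps:.2f}  LED: {f_led:.2f}  hidden factor: {f_hps / f_led:.2f}")
```

The ratio f_hps/f_led exceeding 1 is the qualitative effect that the shaded uncertainty ranges in Figures 2 and 3 quantify with measured spectra.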
Figures
Figure 1. Typical normalized spectral responses of satellite sensors and selected streetlighting types: (a) the DMSP-OLS (solid line) and VIIRS-DNB (dashed line) sensors, compared with (b) the normalized output of high pressure sodium (solid line), 1800 K LED (short dashes) and 4000 K LED (long dashes) streetlights.
Figure 2. Rate of change in artificial light at night, represented as satellite-detectable power output from 1992 to 2017, assessed across countries (map) and continents (inset plots). In the plots, open squares represent annual DMSP-OLS composite data, filled squares radiance calibrated DMSP-OLS data, and filled circles VIIRS Day/Night band data; plotted points assume a constant spectral composition of light emissions. The shaded areas represent the possible range of undetected light assuming a recent phased transition from high pressure sodium lighting to LEDs of color temperature 3000 K (dark grey) or 4000 K (light grey).
Figure 3. Global emitted power (Nikon blue band [14]) detected by satellites from artificial light sources from 1992 to 2017. Symbols and shaded areas as in Figure 2.