Remote Sens., Volume 12, Issue 3 (February-1 2020) – 248 articles

Cover Story: A multidecadal Landsat data record is a unique tool for global land cover and land use change analysis. Here, we present consistently processed and temporally aggregated Landsat Analysis Ready Data produced by the Global Land Analysis and Discovery team at the University of Maryland (GLAD ARD). The GLAD ARD represents a 16-day time series of tiled, normalized Landsat surface reflectance from 1997 to the present, updated annually, and suitable for land cover monitoring at any scale from local to global. A set of tools for multitemporal data processing and characterization using machine learning accompanies the GLAD ARD, forming an end-to-end solution for Landsat-based natural resource assessment and monitoring. The GLAD ARD data and tools are provided free of charge from the GLAD website (https://glad.umd.edu/ard/home). View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 11584 KiB  
Article
Identification of Active Gully Erosion Sites in the Loess Plateau of China Using MF-DFA
by Jianjun Cao, Guoan Tang, Xuan Fang, Yongjuan Liu, Ying Zhu, Jinlian Li and Wolfgang Wagner
Remote Sens. 2020, 12(3), 589; https://doi.org/10.3390/rs12030589 - 10 Feb 2020
Cited by 12 | Viewed by 4585
Abstract
Gullies of different scales and types have developed in the Loess Plateau, China. Differences in the amount of gully erosion influence the development, evolution, morphology, and spatial distribution of these gullies. The strengths of headward erosion on the gully shoulder line are used to dictate soil and water conservation measures. In this study, six typical loess landforms in the Loess Plateau were selected as sampling sites: Shenmu, Suide, Ganquan, Yanchuan, Yijun, and Chunhua, which respectively represent loess–aeolian and dune transition zones, loess hills, loess ridge hills, loess ridges, loess long-ridge fragmented tablelands, and loess tablelands. Using 5 m resolution digital elevation model data from the National Basic Geographic Information Database, a small representative watershed was selected from each sampling site to obtain elevation data on the terrain profiles of gully shoulder lines. Multifractal detrended fluctuation analysis (MF-DFA) was used to conduct statistical and comparative analysis of the elevation fluctuation characteristics of these profiles. The results show that MF-DFA is capable of detecting active gully erosion sites. Sites of active gully erosion are concentrated in Shenmu and Suide but more widely distributed in the other five sites. The results provide a scientific basis for small watershed management planning and the design of soil and water conservation measures. Full article
(This article belongs to the Special Issue Advances in Global Digital Elevation Model Processing)
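The MF-DFA machinery named in the abstract builds on detrended fluctuation analysis: the elevation profile is integrated, detrended in windows of varying scale, and the log–log slope of fluctuation versus scale yields a Hurst-type exponent. A minimal monofractal (order-1 DFA) sketch follows; the function names and scale choices are illustrative and make no claim to match the authors' multifractal implementation:

```python
import numpy as np

def dfa_fluctuation(profile, scale):
    """Order-1 DFA: RMS fluctuation of the integrated profile after
    removing a linear trend in non-overlapping windows of `scale` samples."""
    y = np.cumsum(profile - np.mean(profile))      # integrated profile
    n_win = len(y) // scale
    x = np.arange(scale)
    rms = []
    for i in range(n_win):
        seg = y[i * scale:(i + 1) * scale]
        trend = np.polyval(np.polyfit(x, seg, 1), x)
        rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
    return float(np.mean(rms))

def hurst_exponent(profile, scales=(8, 16, 32, 64, 128)):
    """Slope of log F(s) versus log s estimates the Hurst exponent H."""
    f = [dfa_fluctuation(profile, s) for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(f), 1)
    return float(slope)

white = np.random.default_rng(42).standard_normal(4096)
h = hurst_exponent(white)   # ≈ 0.5 for uncorrelated noise
```

In the multifractal extension (MF-DFA), the fluctuation is raised to a range of moment orders q before averaging, which is what lets the method characterize local erosion activity rather than a single global exponent.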
Figures

Figure 1. Spatial distribution of the six sampling sites and gully shoulder lines: (a) location of China in Asia, (b) location of Shaanxi in China, (c) location map of the sampling sites, (d) Chunhua, (e) Yijun, (f) Ganquan, (g) Yanchuan, (h) Suide, and (i) Shenmu. Shoulder lines are shown in red.
Figure 2. Workflow of the method used in this study.
Figure 3. Multifractal spectrum of gully shoulder lines at the six study sites: (a) Shenmu, (b) Suide, (c) Yanchuan, (d) Ganquan, (e) Yijun, and (f) Chunhua.
Figure 4. Local Hurst exponents of shoulder line profiles at the Shenmu sampling site for different analysis scales.
Figure 5. Local Hurst exponents of shoulder line profiles at the Suide site for different analysis scales.
Figure 6. Local Hurst exponents of shoulder line profiles at the Yanchuan site for different analysis scales.
Figure 7. Local Hurst exponents of shoulder line profiles at the Ganquan site for different analysis scales.
Figure 8. Local Hurst exponents of shoulder line profiles at the Yijun sampling site for different analysis scales.
Figure 9. Local Hurst exponents of shoulder line profiles at the Chunhua sampling site for different analysis scales.
Figure 10. Calibration of points of active and stable gully erosion derived from Ht values of gully shoulder lines from the corresponding DEM data for the six sampling sites. Blue points represent stable points; yellow points represent active points. (a) Shenmu, (b) Suide, (c) Yanchuan, (d) Ganquan, (e) Yijun, and (f) Chunhua.
Figure 11. Local root mean square (RMS) values for different analysis scales at the six sampling sites: (a) Shenmu, (b) Suide, (c) Yanchuan, (d) Ganquan, (e) Yijun, and (f) Chunhua.
Figure 12. Probability distribution of the local Hurst exponent (Ht) of shoulder line profiles for the six sampling sites: (a) Shenmu, (b) Suide, (c) Yanchuan, (d) Ganquan, (e) Yijun, and (f) Chunhua.
Figure 13. Field analysis at the Suide sampling site used to validate the calculated sites of active and stable gully erosion: (a) Suide study site, (b) general overview of points of active gully erosion in Suide, (c–e) detail of points of active gully erosion, and (f) general overview of points of stable gully erosion.
13 pages, 573 KiB  
Letter
A Unified Framework for Depth Prediction from a Single Image and Binocular Stereo Matching
by Wei Chen, Xin Luo, Zhengfa Liang, Chen Li, Mingfei Wu, Yuanming Gao and Xiaogang Jia
Remote Sens. 2020, 12(3), 588; https://doi.org/10.3390/rs12030588 - 10 Feb 2020
Cited by 4 | Viewed by 4075
Abstract
Depth information has long been an important issue in computer vision. Methods for obtaining it can be categorized into (1) depth prediction from a single image and (2) binocular stereo matching. However, these two methods are generally regarded as separate tasks, accomplished with different network architectures in deep learning-based methods. This study argues that both tasks can be achieved using a single network with the same weights. We modify existing stereo matching networks to perform the two tasks. We first make the network capable of accepting either a single image or an image pair by duplicating the left image when the right image is absent. Then, we introduce a training procedure that alternately selects training samples for single-image depth prediction and binocular stereo matching. In this manner, the trained network can perform both tasks, and single-image depth prediction even benefits from stereo matching to achieve better performance. Experimental results on the KITTI raw dataset show that our model achieves state-of-the-art performance on both depth prediction from a single image and binocular stereo matching within the same architecture. Full article
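The monocular trick described in the abstract, duplicating the left image when the right view is absent so one stereo network serves both tasks, can be sketched as follows. The function name and array layout are illustrative, not the paper's code:

```python
import numpy as np

def make_stereo_input(left, right=None):
    """Build the network input pair. In the monocular case (no right
    view), the left image is duplicated so the same stereo network
    can handle both single-image depth prediction and stereo matching."""
    if right is None:
        right = left.copy()
    return np.stack([left, right])   # shape: (2, H, W, C)

mono_pair = make_stereo_input(np.zeros((8, 8, 3)))   # right view absent
```

At training time, batches of true stereo pairs and of duplicated monocular "pairs" would then be interleaved, which is what the alternating sample-selection procedure amounts to.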
Figures

Figure 1. Framework of the proposed method.
Figure 2. Architecture of DispNetC.
Figure 3. Architecture of PSMNet.
Figure 4. Qualitative results on the KITTI dataset.
32 pages, 11862 KiB  
Article
Deep Learning-Based Drivers Emotion Classification System in Time Series Data for Remote Applications
by Rizwan Ali Naqvi, Muhammad Arsalan, Abdul Rehman, Ateeq Ur Rehman, Woong-Kee Loh and Anand Paul
Remote Sens. 2020, 12(3), 587; https://doi.org/10.3390/rs12030587 - 10 Feb 2020
Cited by 63 | Viewed by 8817
Abstract
Aggressive driving is indeed one of the major causes of traffic accidents throughout the world. Real-time classification of abnormal and normal driving from time-series data is a keystone to avoiding road accidents. Existing work on driving behaviors in time-series data has some limitations and causes discomfort for users, which needs to be addressed. We propose a multimodal method to remotely detect driver aggressiveness in order to deal with these issues. The proposed method is based on changes in the gaze and facial emotions of drivers while driving, using near-infrared (NIR) camera sensors and an illuminator installed in the vehicle. Aggressive and normal driving time-series data were collected while drivers played car racing and truck driving computer games, respectively, on a driving game simulator. The Dlib program is used to process driver image data, extracting face, left eye, and right eye images to detect changes in gaze with a convolutional neural network (CNN). Similarly, CNN-based facial emotions are obtained from lip, left eye, and right eye images extracted with Dlib. Finally, score-level fusion is applied to the scores obtained from gaze change and facial emotions to classify aggressive and normal driving. The accuracy of the proposed method was measured in experiments using a self-constructed large-scale testing database; the results show that the classification accuracy based on the driver's change in gaze and facial emotions is high for both aggressive and normal driving, and that the performance is superior to that of previous methods. Full article
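Score-level fusion as described, combining the gaze-change score and the facial-emotion score into one decision, is commonly a weighted sum followed by a threshold. A minimal sketch; the weight and threshold values are illustrative, not the paper's tuned parameters:

```python
def fuse_scores(gaze_score, emotion_score, w_gaze=0.5):
    """Score-level fusion: weighted sum of the gaze-change score and
    the facial-emotion score (weight chosen for illustration)."""
    return w_gaze * gaze_score + (1.0 - w_gaze) * emotion_score

def classify_driving(gaze_score, emotion_score, threshold=0.5):
    """Label the drive 'aggressive' when the fused score crosses the
    (illustrative) threshold, otherwise 'normal'."""
    fused = fuse_scores(gaze_score, emotion_score)
    return "aggressive" if fused > threshold else "normal"
```

Fusing at the score level rather than the feature level keeps the two CNN branches independent, so either modality can be retrained or replaced without touching the other.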
Figures

Figure 1. Flowchart of the proposed method.
Figure 2. Experimental environment and proposed prototype for driving behavior classification.
Figure 3. Examples of images captured by the near-infrared (NIR) camera.
Figure 4. Examples of detected facial feature points and their corresponding index numbers.
Figure 5. The 25 gaze regions defined for training on the 24-inch monitor screen.
Figure 6. Gaze change obtained by (a) defining a region of interest (ROI) on the original image based on facial landmarks and (b) forming a three-channel image that combines the left eye, the right eye, and both eyes together.
Figure 7. NIR images showing facial emotions: (a) ROIs selected for emotions; (b) difference-image generation for facial emotions.
Figure 8. Block diagram of the proposed convolutional neural network (CNN) structure for driver behavior classification.
Figure 9. CNN architectures used for classifying driving behavior.
Figure 10. Steps followed during the experimental procedure. Normal and aggressive driving images were collected while operating the Euro Truck Simulator 2 [58] and Need for Speed [57] simulators, respectively.
Figure 11. Mean and standard deviation of three features for normal versus aggressive driving: (a) horizontal gaze change, (b) vertical gaze change, and (c) change in facial emotions.
Figure 12. Accuracy and loss curves of training according to the number of epochs: (a) proposed method; (b) AlexNet.
Figure 13. Examples of facial images from the open database: (a) normal emotion; (b) aggressive emotion.
Figure 14. Receiver operating characteristic (ROC) curves of the proposed method compared with previous methods.
Figure 15. ROC curves of the proposed method compared with previous methods on the open database [69].
28 pages, 3444 KiB  
Article
An Assessment of the GOCE High-Level Processing Facility (HPF) Released Global Geopotential Models with Regional Test Results in Turkey
by Bihter Erol, Mustafa Serkan Işık and Serdar Erol
Remote Sens. 2020, 12(3), 586; https://doi.org/10.3390/rs12030586 - 10 Feb 2020
Cited by 11 | Viewed by 4571
Abstract
The launch of dedicated satellite missions at the beginning of the 2000s led to significant improvement in the determination of Earth gravity field models. As a consequence of this progress, both the accuracies and the spatial resolutions of the global geopotential models increased. However, the spectral behaviors and the accuracies of the released models vary mainly depending on their computation strategies. These strategies are briefly explained in this article. Comprehensive quality assessment of the gravity field models by means of spectral and statistical analyses provides a comparison of the gravity field mapping accuracies of these models, as well as an understanding of their progress. The practical benefit of these assessments, choosing an optimal model with the highest accuracy and best resolution for a specific application, is obvious for a broad range of geoscience applications, including geodesy and geophysics, that employ Earth gravity field parameters. From this perspective, this study aims to evaluate the GOCE High-Level Processing Facility geopotential models, including the recently published sixth releases, using different validation methods recommended in the literature, and to investigate their performances comparatively and alongside other models such as GOCO05S, GOGRA04S and EGM2008. In addition to the validation statistics from various countries, the study specifically emphasizes the numerical test results in Turkey. It is concluded that the performance improves from the first-generation RL01 models toward the final RL05 models, which were based on the entire mission data. This outcome was confirmed when the releases of different computation approaches were considered. The accuracies of the RL05 models were found to be similar to GOCO05S, GOGRA04S and even the RL06 versions, but better than EGM2008, at their maximum expansion degrees. Regarding the results obtained from these tests using the GPS/leveling observations in Turkey, the contribution of the GOCE data to the models was significant, especially between the expansion degrees of 100 and 250. In the study, the tested geopotential models were also considered for detailed geoid modeling using the remove-compute-restore method. It was found that the best-fitting geopotential model at its optimal expansion degree (please see the definition of optimal degree in the article) improved the high-frequency regional geoid model accuracy by almost 15%. Full article
(This article belongs to the Special Issue Remote Sensing by Satellite Gravimetry)
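The validation statistic used throughout the figures, the standard deviation of the geoid undulation residuals N_GPS/lev. − N_GGM at the benchmark points, can be computed as sketched below. The helper name is ours, and the paper's spectral enhancement of the GGM-derived heights is not reproduced:

```python
import numpy as np

def undulation_residual_stats(n_gps_lev, n_ggm):
    """Mean and sample standard deviation (ddof=1) of the geoid
    undulation residuals N_GPS/lev - N_GGM at the benchmarks."""
    r = np.asarray(n_gps_lev, float) - np.asarray(n_ggm, float)
    return float(np.mean(r)), float(np.std(r, ddof=1))
```

A smaller residual standard deviation over the GPS/leveling benchmarks is what ranks one geopotential model above another in the comparisons.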
Figures

Figure 1. Accumulated error amplitudes as a function of the spherical harmonic degree ℓ in terms of geoid heights: (a) direct (DIR RLx) models; (b) time-wise (TIM RLx) models; (c) space-wise (SPW RLx) models; (d) GOGRA04S, GOCO05S and EGM2008 models.
Figure 2. Distribution of 30 benchmarks (Dataset I) on Shuttle Radar Topography Mission (SRTM) topography.
Figure 3. Distribution of 81 benchmarks in northwest Turkey (Dataset II) on SRTM topography.
Figure 4. Validation of global geopotential models (GGMs) regarding the standard deviations of the geoid undulation residuals (N_GPS/lev. − N_GGM^enh.) in meters (Dataset I): (a) DIR RLx models; (b) TIM RLx models; (c) SPW RLx models; (d) comparison among the last versions of the satellite-based and combined models.
Figure 5. Validation of GGMs regarding the standard deviations of the geoid undulation residuals (N_GPS/lev. − N_GGM^enh.) in meters (Dataset II): (a) DIR RLx models; (b) TIM RLx models; (c) SPW RLx models; (d) comparison among the last versions of the satellite-based and combined models.
Figure 6. Standard deviations of the geoid height residuals in cm; the geoid heights (N_GGM^enh.) are derived from the enhanced GGMs at their optimum (blue bars) and maximum (orange bars) degrees for (a) the first dataset; (b) the second dataset.
Figure 7. Comparison of RL05 and RL06 models: (a) standard deviations of the geoid undulation residuals (N_GPS/lev. − N_GGM^enh.) in meters (Dataset II); (b) standard deviations of geoid undulations in cm, with the geoid heights (N_GGM^enh.) derived from the enhanced GGMs at the models' optimum (blue bars) and maximum (orange bars) degrees.
Figure 8. The experimental Turkey geoid, calculated using the remove–compute–restore (RCR) approach based on the reference geopotential model DIR-RL5.
20 pages, 7554 KiB  
Article
A New Individual Tree Crown Delineation Method for High Resolution Multispectral Imagery
by Lin Qiu, Linhai Jing, Baoxin Hu, Hui Li and Yunwei Tang
Remote Sens. 2020, 12(3), 585; https://doi.org/10.3390/rs12030585 - 10 Feb 2020
Cited by 39 | Viewed by 6454
Abstract
In current individual tree crown (ITC) delineation methods for high-resolution multispectral imagery, either a spectral band or a brightness component of the multispectral image is employed in delineation with reference to the edges or shapes of crowns, whereas the spectra of tree crowns are seldom taken into account. Such methods normally perform well in coniferous forests with obvious between-crown shadows, but fail in dense deciduous or mixed forests, in which tree crowns are close to each other and between-crown shadows and boundaries are indistinct, even though adjacent tree crowns may have distinguishable spectra. In order to effectively delineate crowns in dense deciduous or mixed forests, a new ITC delineation method using both the brightness and the spectra of the image is proposed in this study. In this method, a morphological gradient map of the image is first generated; treetops of multi-scale crowns are extracted from the gradient map and refined according to the spectral differences between neighboring crowns; the gradient map is segmented using a watershed approach with treetops as markers; and the resulting segmentation map is refined to yield a crown map. Evaluated on images of a rainforest and a deciduous forest, the proposed method delineated adjacent broad-leaved tree crowns with similar brightness but different spectra more accurately than two other typical ITC delineation algorithms, achieving a delineation accuracy of up to 76% in the rainforest and 63% in the deciduous forest. Full article
(This article belongs to the Special Issue Individual Tree Detection and Characterisation from UAV Data)
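The Spectral Angle Mapper (SAM) used to compare neighboring crown spectra measures the angle between two spectral vectors, so it is insensitive to overall brightness, which is exactly what lets equally bright but spectrally different crowns be separated. A minimal sketch (the function name is ours):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM): angle in radians between two pixel
    spectra. A small angle means similar spectral shape regardless of
    brightness; a large angle suggests two distinct crowns."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

In a treetop-refinement step of the kind the abstract describes, two candidate treetops would be merged when the SAM between their crown spectra falls below a threshold and kept separate otherwise.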
Figures

Figure 1. Locations of the two experimental plots: the rainforest (a) and the deciduous forest (b).
Figure 2. The overall process of the proposed method.
Figure 3. The first principal component (PC1) (a) and the inverse gradient image (b).
Figure 4. Integration of the series of slices: initial layers (a), updated layers (b), and final layer (c).
Figure 5. Integrating two layers of slices at different scales: fine layer (a), coarse layer (b), sifted coarse layer (c), combined layer (d), and re-sifted combined layer (e).
Figure 6. Image objects composed of one (a), two (b), three (c), or four (d) identical crowns.
Figure 7. Demonstration of the judgment process when there is only one treetop in the fine layer: combined layer (a), fine layer (b), coarse layer (c), final output layer (d).
Figure 8. Demonstration of the judgment process when there is more than one treetop in the fine layer: combined layer (a), fine layer (b), coarse layer (c), final output layer if all Spectral Angle Mapper (SAM) values are greater than the threshold (d), final output layer if any SAM value is less than the threshold (e).
Figure 9. Tree crown maps of the rainforest plot obtained by traditional watershed segmentation (a), Crown Slices from Imagery (CSI) (b), the commonly used marker-controlled watershed segmentation (MWT) (c), and the proposed spectral and multi-scale features (SMS) (d) methods, respectively.
Figure 10. Subsets of the rainforest plot (a–d) and the corresponding subsets of the tree crown maps obtained by the CSI (a1–d1), MWT (a2–d2), and proposed SMS (a3–d3) methods, respectively.
Figure 11. Correctly delineated crowns (in blue) obtained by the CSI (a), MWT (b), and proposed SMS (c) methods, respectively, with the corresponding references (in white), for the rainforest plot.
Figure 12. Tree crown maps of the deciduous forest plot obtained by traditional watershed segmentation (a), CSI (b), MWT (c), and the proposed SMS (d) methods, respectively.
Figure 13. Subsets of the deciduous forest plot (a–d) and the corresponding subsets of the tree crown maps obtained by the CSI (a1–d1), MWT (a2–d2), and proposed SMS (a3–d3) methods.
Figure 14. Correctly delineated crowns (in blue) obtained by the CSI (a), MWT (b), and proposed SMS (c) methods, respectively, with the corresponding references (in white), for the deciduous forest plot.
22 pages, 12003 KiB  
Article
Mapping and Quantifying the Human-Environment Interactions in Middle Egypt Using Machine Learning and Satellite Data Fusion Techniques
by José Manuel Delgado Blasco, Fabio Cian, Ramon F. Hanssen and Gert Verstraeten
Remote Sens. 2020, 12(3), 584; https://doi.org/10.3390/rs12030584 - 10 Feb 2020
Cited by 9 | Viewed by 5434
Abstract
Population growth in rural areas of Egypt is rapidly transforming the landscape. New cities are appearing in desert areas, while existing cities and villages within the Nile floodplain are growing and pushing agricultural areas into the desert. To enable control and planning of this urban transformation, these rapid changes need to be mapped with high precision and frequency. Urban detection in rural areas with optical remote sensing is problematic when urban structures are built from the same materials as their surroundings. To overcome this limitation, we propose a multi-temporal classification approach based on satellite data fusion and artificial neural networks. We applied the proposed methodology to data for the Egyptian region of El-Minya and part of the Asyut governorate collected from 1998 until 2015. The produced multi-temporal land cover maps capture the evolution of the area and improve on the urban detection of the European Space Agency (ESA) Climate Change Initiative Sentinel-2 Prototype Land Cover 20 m map of Africa and the Global Human Settlements Layer from the Joint Research Center (JRC). Urban and agricultural areas expanded by over 65 km2 and 200 km2, respectively, during the entire period, with an accelerated increase during the last period (2010–2015). Finally, we identified the trends in urban population density as well as the relationship between farmed and built-up land. Full article
(This article belongs to the Special Issue Remote Sensing of Human-Environment Interactions)
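The data-fusion step, stacking SAR backscatter and multispectral features per pixel before classification, can be sketched as below. A nearest-centroid classifier stands in for the paper's artificial neural network, and all names and values are illustrative:

```python
import numpy as np

def fuse_features(sar, optical):
    """Per-pixel feature vectors: SAR backscatter bands stacked with
    optical reflectance bands (the data-fusion step; a neural network
    would consume these vectors in the paper's approach)."""
    return np.concatenate([sar, optical], axis=-1)

def nearest_centroid(x, centroids):
    """Stand-in classifier: assign each pixel to the closest class
    centroid in the fused feature space."""
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

# Two toy pixels: one dark (desert-like), one bright (urban-like).
fused = fuse_features(np.array([[0.1], [0.9]]),
                      np.array([[0.2, 0.2], [0.8, 0.8]]))
labels = nearest_centroid(fused, np.array([[0., 0., 0.], [1., 1., 1.]]))
```

Fusing SAR with optical bands is what lets the classifier distinguish built structures from the surrounding desert when their optical reflectance alone is nearly identical.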
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>(<b>A</b>) Google Earth image of Egypt indicating the location of (<b>B</b>) within the red rectangle. (<b>B</b>) False-color Landsat 8 OLITIRS image over the South-Rayan dune field and the direction of sand dune movement (source [<a href="#B29-remotesensing-12-00584" class="html-bibr">29</a>]), indicating the interaction area within the yellow polygon.</p>
Full article ">Figure 2
<p>(<b>A</b>) Location of the study area in Middle Egypt (black polygon), over the satellite imagery footprints (background image Google Earth). (<b>B</b>) Detailed map of the study area highlighting the main cities and simplified representation of the landform regions.</p>
Full article ">Figure 3
<p>Overview of the methodological approach with the three steps discussed in the text.</p>
Full article ">Figure 4
<p>Google Earth image overlaid with the Egyptian administrative boundaries of its governorates (left), and zoom over the location of our study area with the district boundaries (right).</p>
Full article ">Figure 5
<p>LULC maps obtained over the study area with a data fusion approach using SAR and multi-spectral data for 1998 and 2015. Black rectangles highlight areas with an urban increase, cyan rectangles correspond with fields increase in the desert area and red rectangles with crop increase within the Nile Valley and interaction area.</p>
Figure 6
<p>Urban expansion in the study area from 1998 to 2015. Green areas were already urban in 1998 whereas other colours show where urban expansion occurred during the corresponding time period.</p>
Figure 7
<p>Changes in the spatial extent of agricultural fields for the El-Minya governorate from 1998 to 2015. Permanent fields in the edges indicate the limits of the Nile valley floodplain.</p>
Figure 8
<p>Changes in population, urban population density and agricultural land per person in El-Minya governorate in the period 1998–2015.</p>
Figure 9
<p>Urban/built-up area of El-Minya city and surroundings from the different datasets: (<b>A</b>) 2010, this study, (<b>B</b>) GUF for 2012, (<b>C</b>) GHSL-L8 2014, (<b>D</b>) 2015, this study, (<b>E</b>) ESA-CCI Prototype 2016 and (<b>F</b>) GHSL-S1 2016. Coloured circles highlight areas with different classification performances. The blue rectangle highlights our results.</p>
Figure 10
<p>Urban/built-up area of Mallawi city and surroundings from the different datasets: (<b>A</b>) 2010, this study, (<b>B</b>) GUF for 2012, (<b>C</b>) GHSL-L8 2014, (<b>D</b>) 2015, this study, (<b>E</b>) ESA-CCI Prototype 2016 and (<b>F</b>) GHSL-S1 2016. The top left map shows the location of modern cemeteries [<a href="#B55-remotesensing-12-00584" class="html-bibr">55</a>]. Coloured circles highlight areas with different classification performances. The blue rectangle highlights our results.</p>
Figure 11
<p>Urban classes detected for 1998, 2004, 2010 and 2015 in Mallawi province overlaid on optical image (right) and optical image over the area (left).</p>
Figure 12
<p>Field classes detected for 1998, 2004, 2010 and 2015 in the Eastern South-Rayan dune field area overlaid on high-resolution optical image (right) and optical image over the area (left).</p>
Figure 13
<p>Field classes detected for 1998, 2004, 2010 and 2015 in the Western Dalija region overlaid on a high-resolution optical image (right) and optical image over the area (left).</p>
18 pages, 27040 KiB  
Article
Comparisons of Diurnal Variations of Land Surface Temperatures from Numerical Weather Prediction Analyses, Infrared Satellite Estimates and In Situ Measurements
by Xiaoni Wang and Catherine Prigent
Remote Sens. 2020, 12(3), 583; https://doi.org/10.3390/rs12030583 - 10 Feb 2020
Cited by 15 | Viewed by 3780
Abstract
This study evaluates the diurnal cycle of Land Surface Temperature (LST) from Numerical Weather Prediction (NWP) reanalyses (ECMWF ERA5 and ERA Interim), as well as from infrared satellite estimates (ISCCP and SEVIRI/METEOSAT), against in situ measurements. Data covering a full seasonal cycle in 2010 are studied. Careful collocations and cloud filtering are applied. We first compare the reanalysis and satellite products at continental and regional scales, and then we concentrate on comparisons with the in situ observations under a large variety of environments. SEVIRI shows better agreement with the in situ measurements than the other products, with biases often less than ±2 K and a correlation of 0.99. Over snow or arid surfaces, ISCCP tends to have more systematic errors than the other products. ERA5 agrees better with the in situ measurements over barren land than ERA Interim, particularly at night time, thanks to its new surface model. However, over vegetated surfaces, both reanalyses tend to have higher/lower temperatures at night/day time than the in situ measurements, probably related to surface processes and their interactions with the atmosphere in the NWP model. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
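The station comparisons above reduce to three statistics per LST product: mean bias, root-mean-square of the differences, and correlation with the in situ series (summarized in Figure 10). A minimal Python sketch of these metrics; the function and variable names are ours, not the authors'.

```python
import math

def lst_agreement(sat_lst, insitu_lst):
    """Bias, RMS difference, and Pearson correlation between two
    collocated LST series (in kelvin), as used to compare ERA5,
    ERA Interim, ISCCP, and SEVIRI against ground stations."""
    n = len(sat_lst)
    diffs = [s - g for s, g in zip(sat_lst, insitu_lst)]
    bias = sum(diffs) / n
    rms = math.sqrt(sum(d * d for d in diffs) / n)
    mean_s = sum(sat_lst) / n
    mean_g = sum(insitu_lst) / n
    cov = sum((s - mean_s) * (g - mean_g)
              for s, g in zip(sat_lst, insitu_lst))
    var_s = sum((s - mean_s) ** 2 for s in sat_lst)
    var_g = sum((g - mean_g) ** 2 for g in insitu_lst)
    corr = cov / math.sqrt(var_s * var_g)
    return bias, rms, corr
```

A constant warm offset of 1 K, for example, yields a bias and RMS of 1 K with a correlation of 1.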
Figure 1
<p>Monthly mean differences of LST (unit: K) for ERA5-ERA Interim, ERA5-ISCCP, and ERA5-SEVIRI at 00 UTC and 12 UTC of two seasons, i.e., January–February and July–August in 2010. The first two rows show the results in January–February, with 00 UTC at top and 12 UTC below. The last two rows show the results at the same UTCs in July–August. There are three columns: the left one shows ERA5-ERA Interim, the middle one ERA5-ISCCP, and the right one ERA5-SEVIRI.</p>
Figure 2
<p>Number of clear-sky collocations between ISCCP and ERA5 for January–February (<b>left</b>) and July–August (<b>right</b>) 2010. From north to south, the selected regions are in: (1) Portugal, (2) Congo, (3) Namibia and (4) South Africa.</p>
Figure 3
<p>Diurnal LST variations (unit: K) for ERA5, ERA Interim, ISCCP, and SEVIRI over four selected vegetation types in January–February, April–May, July–August, and October–November in 2010. <span class="html-italic">X</span>-axis represents local time. The columns represent four selected regions, and the rows for the four seasons.</p>
Figure 4
<p>Diurnal LST bias variations (unit: K) for ERA5-ERA Interim, ERA5-ISCCP, ERA5-SEVIRI over four selected vegetation types in January–February, April–May, July–August, and October–November in 2010. <span class="html-italic">X</span>-axis represents local time. The columns represent four selected regions, and the rows for the four seasons.</p>
Figure 5
<p>The locations of the ground stations and corresponding satellite pixel when available. Black: ISCCP pixel. Green: SEVIRI. Yellow: ground station. The satellite image is from Google map.</p>
Figure 6
<p>Emissivities used at the ground station location in the LST retrieval from ISCCP and SEVIRI (when available), and from the estimates of CIMSS. Black: ISCCP. Green: SEVIRI. Yellow: CIMSS.</p>
Figure 7
<p>Example observations at GBB, KAL and EVO from ERA5, ERA Interim, ISCCP, SEVIRI, and in situ respectively in 2010.</p>
Figure 8
<p>Diurnal variations of monthly mean LST (unit: K) over selected stations, from ERA5, ERA Interim, ISCCP and SEVIRI (when available), and in situ measurements. <span class="html-italic">X</span>-axis represents local time.</p>
Figure 9
<p>Diurnal variations of monthly mean differences of LST (unit: K) at selected stations, i.e., In situ—ERA5, In situ—ERA Interim, In situ—ISCCP and In situ—SEVIRI (when available). <span class="html-italic">X</span>-axis represents local time.</p>
Figure 10
<p>(<b>a</b>) Correlation of LST products to the in situ measurements at each station over all the seasons. (<b>b</b>) Mean of the differences (Unit: K) in comparison to the in situ measurements at each station over all the seasons. (<b>c</b>) Root-mean-square (RMS) of the differences (Unit: K) in comparison to the in situ measurements at each station over all the seasons.</p>
25 pages, 22103 KiB  
Article
Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network
by Rui Li, Shunyi Zheng, Chenxi Duan, Yang Yang and Xiqi Wang
Remote Sens. 2020, 12(3), 582; https://doi.org/10.3390/rs12030582 - 10 Feb 2020
Cited by 384 | Viewed by 14410
Abstract
In recent years, researchers have paid increasing attention to hyperspectral image (HSI) classification using deep learning methods. To improve the accuracy and reduce the number of training samples required, we propose a double-branch dual-attention mechanism network (DBDA) for HSI classification in this paper. Two branches are designed in DBDA to capture the abundant spectral and spatial features contained in HSI. Furthermore, a channel attention block and a spatial attention block are applied to these two branches respectively, which enables DBDA to refine and optimize the extracted feature maps. A series of experiments on four hyperspectral datasets show that the proposed framework has superior performance to state-of-the-art algorithms, especially when training samples are severely lacking. Full article
(This article belongs to the Special Issue Deep Learning and Feature Mining Using Hyperspectral Imagery)
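The channel attention block mentioned above reweights spectral feature maps by their global responses. The toy sketch below illustrates only the squeeze-and-reweight idea (global average pooling followed by softmax weighting); DBDA's actual blocks use learned parameters, and all names here are hypothetical.

```python
import math

def channel_attention(channels):
    """Toy channel-attention step: pool each channel (a 2D list) to a
    scalar by global averaging, convert the pooled values to softmax
    weights, and rescale each channel by its weight. In DBDA the
    weights are learned; here they come directly from the pooling."""
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in channels]
    peak = max(pooled)                      # subtract max for stability
    exps = [math.exp(p - peak) for p in pooled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [[[w * v for v in row] for row in ch]
            for ch, w in zip(channels, weights)]
```

Two identical channels receive equal weights of 0.5, so each channel's values are halved.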
Figure 1
<p>The structure of 3D-convolutional neural networks (CNN) with a batch normalization (BN) layer.</p>
Figure 2
<p>The architecture of residual network (ResNet) and dense convolutional network (DenseNet).</p>
Figure 3
<p>The structure of the dense block used in our framework.</p>
Figure 4
<p>The details of the spectral attention block and the spatial attention block.</p>
Figure 5
<p>The procedure of our proposed double-branch dual-attention (DBDA) framework.</p>
Figure 6
<p>The structure of the DBDA network. The upper spectral branch, composed of the dense spectral block and the channel attention block, is designed to capture spectral features. The lower spatial branch, constituted by the dense spatial block and the spatial attention block, is designed to exploit spatial features.</p>
Figure 7
<p>The flowchart for the DBDA methodology. The 3D-cube is fed into the spectral branch (top) and spatial branch (bottom) respectively. The obtained features are concatenated to classify the target pixel.</p>
Figure 8
<p>The graph of the activation functions (Mish and ReLU).</p>
Figure 9
<p>Classification maps for the IP dataset using 3% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p>
Figure 10
<p>Classification maps for the UP dataset using 0.5% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p>
Figure 11
<p>Classification maps for the UP dataset using 0.5% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p>
Figure 12
<p>Classification maps for the BS dataset using 1.2% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p>
Figure 13
<p>The OA results of SVM, CDCNN, SSRN, FDSSC, DBMA and our proposed method with varying proportions of training samples on the (<b>a</b>) IP, (<b>b</b>) UP, (<b>c</b>) SV and (<b>d</b>) BS.</p>
Figure 14
<p>Effectiveness of the attention mechanism (results of different attention mechanisms).</p>
Figure 15
<p>Effectiveness of the activation function (results on different activation functions).</p>
15 pages, 8311 KiB  
Article
Enhanced Delaunay Triangulation Sea Ice Tracking Algorithm with Combining Feature Tracking and Pattern Matching
by Ming Zhang, Jubai An, Jie Zhang, Dahua Yu, Junkai Wang and Xiaoqi Lv
Remote Sens. 2020, 12(3), 581; https://doi.org/10.3390/rs12030581 - 10 Feb 2020
Cited by 11 | Viewed by 4179
Abstract
Sea ice drift detection plays a key role in global climate analysis and waterway planning. The ability to detect sea ice drift in real time also contributes to the safe navigation of ships and the prevention of offshore oil platform accidents. In this paper, an Enhanced Delaunay Triangulation (EDT) algorithm for sea ice tracking is proposed for dual-polarization sequential Synthetic Aperture Radar (SAR) images, implemented by combining feature tracking with pattern matching based on integrating HH and HV polarization feature information. A sea ice retrieval algorithm for feature detection, matching, fusion, and outlier detection was specifically developed to increase the system's accuracy and robustness. In comparison with several state-of-the-art sea ice drift retrieval algorithms, including the Speeded Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB) methods, the experimental results provide compelling evidence that our algorithm has higher accuracy than the SURF and ORB methods. Furthermore, the results of our method were compared with the drift vectors and directions of buoy data. The drift direction is consistent with the buoys, and the drift deviation was about 10 m. This proves that the method can be applied effectively to the retrieval of sea ice drift. Full article
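Figure 9 of this article compares outlier removal of drift estimates using (μ − kσ, μ + kσ) intervals for k = 1, 2, 3 against RANSAC. A minimal sketch of the sigma-interval rule on one-dimensional displacements; the names and the 1-D simplification are ours, not the authors' implementation.

```python
def sigma_filter(displacements, k=3.0):
    """Flag drift displacements outside the (mu - k*sigma, mu + k*sigma)
    interval as outliers, mirroring the interval rules compared in the
    paper (k = 1, 2, 3). Returns (inliers, outliers)."""
    n = len(displacements)
    mu = sum(displacements) / n
    sigma = (sum((d - mu) ** 2 for d in displacements) / n) ** 0.5
    lo, hi = mu - k * sigma, mu + k * sigma
    inliers = [d for d in displacements if lo <= d <= hi]
    outliers = [d for d in displacements if not (lo <= d <= hi)]
    return inliers, outliers
```

With a tight k = 1 band, a single gross mismatch (e.g., a 50-pixel jump among ~10-pixel drifts) is rejected while the consistent vectors are kept.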
Figure 1
<p>Location of the study area and the Synthetic Aperture Radar (SAR) data for the sea ice drift retrieval.</p>
Figure 2
<p>Flowchart of the sea ice drift retrieval algorithm.</p>
Figure 3
<p>(<b>a</b>) Original SAR image, of size 512 × 512 pixels; a row of SAR image data, marked with a red line, was selected to compare the spatial luminance profile changes. (<b>b</b>) Filtered results with the SAR block-matching 3-D (SAR-BM3D) method. (<b>c</b>) Filtered results with the image despeckling convolutional neural network (ID-CNN) method. Spatial luminance profile changes at the red row of the SAR image: (<b>d</b>) Unfiltered SAR image, (<b>e</b>) filtered via SAR-BM3D method, and (<b>f</b>) filtered via ID-CNN method.</p>
Figure 4
<p>Constructed a triangular network by the Delaunay Triangulation algorithm.</p>
Figure 5
<p>Combination of feature tracking and pattern matching algorithm.</p>
Figure 6
<p>Illustration of the maximum cross-correlation (MCC) method. (<b>a</b>) First image. (<b>b</b>) Second image. (<b>c</b>) The cross-correlation matrix between sub-image with a green empty square and the sub-image with a yellow empty square.</p>
Figure 7
<p>Comparison of successfully matched key points from (<b>a</b>) HH polarization, (<b>b</b>) HV polarization, and (<b>c</b>) HH + HV polarization. The key points are extracted from case 1.</p>
Figure 8
<p>Boxplot of distances between neighbor vector, tracked with HH, HV, and HH + HV polarization for case 1.</p>
Figure 9
<p>Comparing the performance of the outlier removal with the horizontal displacements by different methods for case 2. The x-axis represents the simulated start position x1, and the y-axis represents the end position x2. (<b>a</b>) Original image. (<b>b</b>) Result of (μ − 3σ, μ + 3σ). (<b>c</b>) Result of (μ − 2σ, μ + 2σ). (<b>d</b>) Result of (μ − σ, μ + σ). (<b>e</b>) Result of Random Sample Consensus (RANSAC). Red points are identified as outliers.</p>
Figure 10
<p>The results of feature points tracking with SURF, ORB, and EDT methods. Results of (<b>a</b>) SURF, (<b>b</b>) ORB and (<b>c</b>) EDT methods for case 1. Results of (<b>d</b>) SURF, (<b>e</b>) ORB and (<b>f</b>) EDT methods for case 2.</p>
Figure 11
<p>Spatial Density of feature points tracking with the SURF, ORB, and EDT methods. Results of (<b>a</b>) SURF, (<b>b</b>) ORB and (<b>c</b>) EDT methods for case 1. Results of (<b>d</b>) SURF, (<b>e</b>) ORB and (<b>f</b>) EDT methods for case 2.</p>
Figure 12
<p>Visualization of sea ice drift vectors with the SURF, ORB, and EDT methods. Results of (<b>a</b>) SURF, (<b>b</b>) ORB and (<b>c</b>) EDT methods for case 1. Results of (<b>d</b>) SURF, (<b>e</b>) ORB and (<b>f</b>) EDT methods for case 2.</p>
16 pages, 12262 KiB  
Article
A Spatio-Temporal Analysis of Rainfall and Drought Monitoring in the Tharparkar Region of Pakistan
by Muhammad Usman and Janet E. Nichol
Remote Sens. 2020, 12(3), 580; https://doi.org/10.3390/rs12030580 - 10 Feb 2020
Cited by 16 | Viewed by 5090
Abstract
The Tharparkar desert region of Pakistan supports a population approaching two million, dependent on rain-fed agriculture as the main livelihood. The almost doubling of the population in the last two decades, coupled with low and variable rainfall, makes this one of the world's most food-insecure regions. This paper examines satellite-based rainfall estimates and biomass data as a means to supplement sparsely distributed rainfall stations and to provide timely estimates of seasonal growth indicators in farmlands. Satellite dekadal and monthly rainfall estimates gave good correlations with ground station data, ranging from R = 0.75 to R = 0.97 over a 19-year period, with a tendency toward overestimation from the Tropical Rainfall Measuring Mission (TRMM) and underestimation from the Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) datasets. CHIRPS was selected for further modeling, as overestimation from TRMM implies the risk of under-predicting drought. The use of satellite rainfall products from CHIRPS was also essential for the derivation of spatial estimates of phenological variables and rainfall criteria for comparison with normalized difference vegetation index (NDVI)-based biomass productivity. This is because, in this arid region where drought is common and rainfall unpredictable, determination of phenological thresholds based on vegetation indices proved unreliable. Mapped rainfall distributions across Tharparkar were found to differ substantially from those of maximum biomass (NDVImax), often showing low NDVImax in zones of higher annual rainfall, and vice versa. This mismatch occurs in both wet and dry years. Maps of rainfall intensity suggest that low yields often occur in areas with intense rain causing damage to ripening crops, and that total rainfall in a season is less important than sustained water supply.
Correlations between rainfall variables and NDVImax indicate the difficulty of predicting drought early in the growing season in this region of extreme climatic variability. Mapped rainfall and biomass distributions can be used to recommend settlement in areas of more consistent rainfall. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
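The gauge–satellite comparison above is carried out at daily, dekadal (10-day), and monthly aggregation levels (Figure 4). A minimal dekadal aggregation sketch, assuming the daily series starts on the first day of a dekad; the function name is ours.

```python
def dekadal_totals(daily_rain_mm):
    """Sum a daily rainfall series (mm) into consecutive 10-day
    (dekadal) totals, the aggregation level at which CHIRPS and TRMM
    estimates are compared with the rain gauges. A trailing partial
    dekad is summed as-is."""
    return [sum(daily_rain_mm[i:i + 10])
            for i in range(0, len(daily_rain_mm), 10)]
```

For example, thirty days of 1 mm each aggregate to three dekadal totals of 10 mm.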
Figure 1
<p>Tharparkar region and its location in Pakistan, showing location of continuous rain gauges.</p>
Figure 2
<p>Mean monthly rainfall and temperature at Mithi station, Tharparkar.</p>
Figure 3
<p>Comparison of four rain gauge stations (June–September) over the period 2007–2012, showing approximate dates of first seasonal rainfall. Climate stations at Badin, Hyderabad, and Chhor are located west and north of Tharparkar in the Indus flood plain region.</p>
Figure 4
<p>Comparison of daily (<b>a</b>,<b>b</b>), dekadal (10 days) (<b>c</b>,<b>d</b>), and monthly (<b>e</b>,<b>f</b>) rainfall between gauge and satellite-based rainfall estimates for four weather stations for the period 1998–2014.</p>
Figure 5
<p>(<b>a</b>) Average annual rainfall (mm) 2000–2018 from Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) data over Tharparkar (5-km resolution), and (<b>b</b>) correlations between annual rainfall (CHIRPS) and Maximum Biomass (NDVI<sub>max</sub>) for the period 2000–2018 (250-m resolution). Pixels with insignificant values are masked as white (95% confidence interval (<span class="html-italic">p</span> = 0.05)).</p>
Figure 6
<p>CHIRPS-derived annual rainfall (<b>a</b>–<b>d</b>), and value of NDVI<sub>max</sub> (<b>e</b>–<b>h</b>) for four dry years (rainfall below 200 mm).</p>
Figure 7
<p>CHIRPS-derived annual rainfall (<b>a</b>–<b>d</b>), and NDVI<sub>max</sub> (<b>e</b>–<b>h</b>) for four wet years (rainfall above 450 mm).</p>
Figure 8
<p>Timing of NDVI<sub>max</sub> for four dry years (<b>a</b>–<b>d</b>) and four wet years (<b>e</b>–<b>h</b>). White areas represent not enough rainfall from July to December; thus, there is no maximum NDVI value.</p>
Figure 9
<p>Rainfall intensity in four dry years (<b>a</b>–<b>d</b>) and four wet years (<b>e</b>–<b>h</b>).</p>
19 pages, 5482 KiB  
Article
Evaluation of Landsat 8 OLI/TIRS Level-2 and Sentinel 2 Level-1C Fusion Techniques Intended for Image Segmentation of Archaeological Landscapes and Proxies
by Athos Agapiou
Remote Sens. 2020, 12(3), 579; https://doi.org/10.3390/rs12030579 - 10 Feb 2020
Cited by 23 | Viewed by 6212
Abstract
The use of medium resolution, open access, and freely distributed satellite images, such as those of Landsat, is still understudied in the domain of archaeological research, mainly due to restrictions of spatial resolution. This investigation aims to showcase how the synergistic use of Landsat and Sentinel optical sensors can efficiently support archaeological research through object-based image analysis (OBIA), a relatively new scientific trend in the domain of remote sensing archaeology, as highlighted in the relevant literature. Initially, the fusion of a 30 m spatial resolution Landsat 8 OLI/TIRS Level-2 image and a 10 m spatial resolution Sentinel 2 Level-1C optical image, over the archaeological site of "Nea Paphos" in Cyprus, is evaluated in order to improve the spatial resolution of the Landsat image. At this step, various known fusion models are implemented and evaluated, namely the Gram–Schmidt, Brovey, principal component analysis (PCA), and hue-saturation-value (HSV) algorithms. In addition, all four 10 m spectral bands of the Sentinel 2 sensor, namely the blue, green, red, and near-infrared bands (Bands 2 to 4 and Band 8, respectively), were assessed for each of the different fusion models. On the basis of these findings, the next step of the study focused on the image segmentation process through the evaluation of different scale factors. The segmentation process is an important step in moving from pixel-based to object-based image analysis. The overall results show that the Gram–Schmidt fusion method based on the near-infrared band of Sentinel 2 (Band 8), with a segmentation scale factor of 70, provides the optimum parameters for the detection of standing visible monuments, monitoring excavated areas, and detecting buried archaeological remains, without any significant spectral distortion of the original Landsat image.
The new 10 m fused Landsat 8 image provides further spatial details of the archaeological site and depicts, through the segmentation process, important details within the landscape under examination. Full article
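Of the four fusion models evaluated above, the Brovey transform is the simplest to state: each fused band is the multispectral band scaled, pixel by pixel, by the ratio of the high-resolution band to the sum of the multispectral bands. A sketch under that standard definition (names are ours; it assumes the bands are already co-registered and resampled to the 10 m grid, with the Sentinel 2 band playing the panchromatic role).

```python
def brovey_fuse(ms_bands, pan):
    """Brovey pansharpening on flat pixel lists:
    fused_b[i] = ms_b[i] * pan[i] / sum_over_bands(ms[i]).
    ms_bands: list of bands, each a list of pixel values;
    pan: the high-resolution band on the same grid."""
    fused = []
    for band in ms_bands:
        out = []
        for i, p in enumerate(pan):
            total = sum(b[i] for b in ms_bands)
            out.append(band[i] * p / total if total else 0.0)
        fused.append(out)
    return fused
```

The transform preserves band ratios: a pixel with band values (0.2, 0.3, 0.5) and a pan value of 2.0 is sharpened to (0.4, 0.6, 1.0).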
Figure 1
<p>(<b>a</b>) The archaeological site of “Nea Paphos” located at the western part of Cyprus; (<b>b</b>) area at the northern part of the site, indicating the standing defensive wall (yellow arrows), as well as other archaeological proxies in the area (white arrows); and (<b>c</b>) area at the central part of the archaeological site, where significant archaeological excavations have been carried out in the past (background: orthoimage from the Department of Land and Surveyors, Cyprus).</p>
Figure 2
<p>Overall methodology implemented in the study.</p>
Figure 3
<p>Fusion results of the whole archaeological site of “Nea Paphos” using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, at first to fourth row, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the panchromatic image (columns). The last column indicates the original 30 m Landsat 8 and Sentinel 2 10 m pixel resolution images at the red, green, and blue (RGB), and near-infrared green and blue (NIR-R-G) pseudo composites.</p>
Figure 4
<p>Fusion results of the whole archaeological site of “Nea Paphos” (area b, <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a>) using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, at the first to fourth row, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the panchromatic image (columns). The last column indicates the original 30 m Landsat 8 and Sentinel 2 10 m pixel resolution images at the red, green, and blue (RGB), and near-infrared green and blue (NIR-R-G) pseudo composites. The standing defensive wall is indicated with yellow arrows, and the archaeological proxies are indicated with white arrows.</p>
Figure 5
<p>Fusion results of the whole archaeological site of “Nea Paphos” (area c, <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a>) using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, at the first to fourth row, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the panchromatic image (columns). The last column indicates the original 30 m Landsat 8 and Sentinel 2 10 m pixel resolution images at the red, green, and blue (RGB), and near-infrared green and blue (NIR-R-G) pseudo composites. Excavated areas are indicated with yellow arrows and the boundary of the archaeological site with the modern city of Paphos is indicated with white arrows.</p>
Figure 6
<p>Segmentation results for area b of <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a> using a scale factor from 10 to 50, with a step of 10, applied to the Gram–Schmidt pansharpened Landsat 8 image (left) and the original Landsat 8 image (right). Groups of pixels (segments) are visualized in both images with a blue polygon. Similar results for scales 60 to 100 are shown in <a href="#remotesensing-12-00579-f007" class="html-fig">Figure 7</a>. The standing defensive wall is indicated with yellow arrows and the archaeological proxies are indicated with black arrows.</p>
Figure 7
<p>Segmentation results for area b of <a href="#remotesensing-12-00579-f001" class="html-fig">Figure 1</a> using a scale factor from 60 to 100, with a step of 10, applied at the Gram–Schmidt pansharpened Landsat 8 image (left) and the original Landsat 8 image (right). Groups of pixels (segments) are visualized in both images with a blue polygon. Similar results for scales 10 to 50 are shown in <a href="#remotesensing-12-00579-f006" class="html-fig">Figure 6</a>. The standing defensive wall is indicated with yellow arrows and the archaeological proxies are indicated with black arrows.</p>
Figure 8
<p>Various ground truth areas digitized for the needs of the segmentation performance of the fused Landsat image. In the background, the Gram–Schmidt pansharpened Landsat 8 image using the NIR band of Sentinel is shown.</p>
Figure 9
<p>(<b>a</b>) Segmentation of the fused Landsat 8, after the application of the Gram–Schmidt method (indicated with yellow polygons) over the high-resolution aerial photograph of the archaeological site of “Nea Paphos”; (<b>b</b>) segmentation of the original Landsat 30 m pixel resolution. For the details of areas (a) to (f) refer to the text.</p>
Figure 10
<p>Difference of the segmentation image of the original Landsat 8 image and the fused Gram–Schmidt pansharpened image, at the scale factor 60. For the details of areas (a) to (f) refer to the text.</p>
22 pages, 19243 KiB  
Article
Simulating the Impact of Urban Surface Evapotranspiration on the Urban Heat Island Effect Using the Modified RS-PM Model: A Case Study of Xuzhou, China
by Yuchen Wang, Yu Zhang, Nan Ding, Kai Qin and Xiaoyan Yang
Remote Sens. 2020, 12(3), 578; https://doi.org/10.3390/rs12030578 - 10 Feb 2020
Cited by 23 | Viewed by 4592
Abstract
As an important energy absorption process in the Earth's surface energy balance, evapotranspiration (ET) from vegetation and bare soil plays an important role in regulating environmental temperatures. However, little research has explored the cooling effect of ET on the urban heat island (UHI), due to the lack of appropriate remote-sensing-based estimation models for the complex urban surface. Here, we apply the modified remote sensing Penman–Monteith (RS-PM) model (also known as the urban RS-PM model), which provides a regional ET estimation method with better accuracy for the complex urban underlying surface. Focusing on the city of Xuzhou in China, ET and land surface temperature (LST) were retrieved from 10 Landsat 8 images acquired during 2014–2018. The impact of ET on LST was then analyzed and quantified through statistical and spatial analyses. The results indicate that: (1) the alleviating effect of ET on the UHI was stronger during the warmest months of the year (May–October) but not during the colder months (November–March); (2) ET had the most significant alleviating effect on the UHI in those regions with the highest ET intensities; and (3) in regions with high ET intensities and their surrounding areas (within a radius of 150 m), variation in ET was a key factor for UHI regulation; a 10 W·m−2 increase in ET corresponded to a 0.56 K decrease in LST. These findings provide a new perspective for the improvement of urban thermal comfort and can be applied to urban management, planning, and natural design. Full article
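The sensitivity reported in finding (3), a 0.56 K LST decrease per 10 W·m−2 ET increase, can serve as a simple rule of thumb for high-ET regions and their 150 m surroundings. The sketch below treats that slope as locally linear, which is our simplification for illustration, not the paper's model; the names are ours.

```python
def lst_change(delta_et_wm2, k_per_10wm2=-0.56):
    """Approximate LST change (K) for a given change in ET (W·m^-2),
    using the reported sensitivity of -0.56 K per +10 W·m^-2 of ET,
    treated here as a locally linear slope."""
    return k_per_10wm2 * delta_et_wm2 / 10.0
```

Under this linearization, adding 10 W·m−2 of ET cools LST by 0.56 K, while removing 20 W·m−2 (e.g., paving over vegetation) warms it by about 1.12 K.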
Figure 1
<p>Location and satellite image of study area: (<b>a</b>) the location of Jiangsu province in China; (<b>b</b>) the location of study area in Jiangsu province; (<b>c</b>) the border and satellite image of study area (Landsat 8 false color image of study area on 2 September 2016 with 7, 5, 3 bands fusion).</p>
Figure 2
<p>The flux tower in China University of Mining and Technology (CUMT): (<b>a</b>) location of the flux tower; (<b>b</b>) photograph of the flux tower and the EC.</p>
Full article ">Figure 3
<p>Evapotranspiration estimation for an urban mixed pixel based on the urban remote sensing Penman–Monteith RS-PM model: (<b>a</b>) estimation for transpiration of vegetation component <math display="inline"><semantics> <mrow> <mi>E</mi> <msub> <mi>T</mi> <mrow> <mi>v</mi> <mo>,</mo> <mi>P</mi> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math> (P2 refers to a mixed urban pixel, and P1 refers to a pure vegetation pixel with the same environmental and meteorological conditions as the vegetation component in P2); (<b>b</b>) estimation for water evaporation of soil component <math display="inline"><semantics> <mrow> <mi>E</mi> <msub> <mi>T</mi> <mrow> <mi>s</mi> <mo>,</mo> <mi>P</mi> <mn>4</mn> </mrow> </msub> </mrow> </semantics></math> (P4 refers to a mixed urban pixel, and P3 refers to a pure soil pixel with the same environmental and meteorological conditions as the soil component in P4).</p>
Full article ">Figure 4
<p>(<b>a</b>–<b>j</b>): Ten periods of instant evapotranspiration (ET) results from 2014 to 2018 estimated by applying urban RS-PM model (water body in each period has been masked in dark blue color).</p>
Full article ">Figure 5
<p>(<b>a</b>) Footprint function schematic (adapted from Schmid [<a href="#B70-remotesensing-12-00578" class="html-bibr">70</a>]); (<b>b</b>) modeled latent heat flux (ET, gray image) overlaid with the flux source area (<math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>p</mi> </msub> </mrow> </semantics></math>, color graphic) of eddy covariance (EC) observation on 1 May 2014.</p>
Full article ">Figure 6
<p>Accuracy analysis between modeled source area ET and EC-observed ET: (<b>a</b>) error value and error rate between <math display="inline"><semantics> <mrow> <mi>E</mi> <msub> <mi>T</mi> <mi>s</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>E</mi> <msub> <mi>T</mi> <mi>o</mi> </msub> </mrow> </semantics></math>; (<b>b</b>) linear regression between <math display="inline"><semantics> <mrow> <mi>E</mi> <msub> <mi>T</mi> <mi>s</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>E</mi> <msub> <mi>T</mi> <mi>o</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Scatter plots and linear regression between urban ET and land surface temperature (LST) in 10 periods.</p>
Full article ">Figure 8
<p>The area proportion statistics of each urban heat island intensity (UHI<sub>i</sub>) level in the regions of ET intensity (ET<sub>i</sub>) level 1 to level 5.</p>
Full article ">Figure 9
<p>Establishment of buffer zone with the high ET intensity regions as the center (the data of 3 May 2018 are taken as an example here).</p>
Full article ">Figure 10
<p>Variation tendencies of the average values of ET and LST in each layer of buffer zone (0 on the abscissa represents the core area).</p>
Full article ">Figure 11
<p>(<b>a</b>) Variation trends of normalized <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>E</mi> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>b</mi> <mi>u</mi> <mi>f</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>b</mi> <mi>u</mi> <mi>f</mi> </mrow> </msub> </mrow> </semantics></math> for each period; (<b>b</b>) linear regression of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>E</mi> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>b</mi> <mi>u</mi> <mi>f</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>b</mi> <mi>u</mi> <mi>f</mi> </mrow> </msub> </mrow> </semantics></math> between the adjacent buffer layers.</p>
Full article ">
22 pages, 22907 KiB  
Article
Detection and Localisation of Life Signs from the Air Using Image Registration and Spatio-Temporal Filtering
by Asanka G. Perera, Fatema-Tuz-Zohra Khanam, Ali Al-Naji and Javaan Chahl
Remote Sens. 2020, 12(3), 577; https://doi.org/10.3390/rs12030577 - 9 Feb 2020
Cited by 10 | Viewed by 5789
Abstract
In search and rescue operations, it is crucial to rapidly distinguish people who are alive from those who are not. With this information, emergency teams can prioritize their operations to save more lives. However, in some natural disasters, people may be lying on the ground covered with dust, debris, or ashes, making them difficult to detect by video analysis tuned to human shapes. We present a novel method to estimate the locations of people from aerial video using image and signal processing designed to detect breathing movements. We show that this method can successfully detect clearly visible people as well as people fully occluded by debris. First, the aerial videos were stabilized using the key points of adjacent image frames. Next, the stabilized video was decomposed into tile videos, and the temporal frequency bands of interest were motion magnified while the other frequencies were suppressed. Image differencing and temporal filtering were performed on each tile video to detect potential breathing signals. Finally, the detected frequencies were remapped to the image frame, creating a life-signs map that indicates possible human locations. The proposed method was validated with both aerial and ground-recorded videos in a controlled environment. On this dataset, the results showed good reliability for aerial videos and no errors for ground-recorded videos, with average precision measures of 0.913 and 1, respectively. Full article
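The per-tile temporal filtering step can be sketched as a band-pass around typical breathing rates. The snippet below is an illustrative stand-in, not the authors' pipeline: it assumes a 30 fps camera and a 0.2–0.7 Hz breathing band, applies an FFT-based band-pass to a synthetic tile-intensity signal, and reads off the dominant in-band frequency.

```python
import numpy as np

# Assumptions: 30 fps frame rate, breathing band of 0.2–0.7 Hz. The "tile
# signal" is synthetic; in the paper it is the temporal intensity change
# of the difference-image sequence for one tile.
fs = 30.0                                    # frame rate, Hz (assumption)
t = np.arange(0, 30, 1 / fs)                 # 30 s of frames
signal = 0.5 * np.sin(2 * np.pi * 0.3 * t)   # simulated 0.3 Hz breathing
signal += 0.2 * np.random.default_rng(1).normal(size=t.size)  # sensor noise

# FFT-based band-pass: keep only 0.2–0.7 Hz, suppress everything else
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
spectrum[(freqs < 0.2) | (freqs > 0.7)] = 0
filtered = np.fft.irfft(spectrum, n=signal.size)

# A dominant in-band frequency flags a potential breathing signal
peak_freq = freqs[np.argmax(np.abs(np.fft.rfft(filtered)))]
print(round(peak_freq, 2))   # ≈ 0.3 Hz, i.e. about 18 breaths per minute
```

Tiles whose filtered signal shows a strong periodic peak in this band would be remapped to the frame as candidate human locations.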
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Figure 1
<p>Schematic diagram of the proposed approach.</p>
Full article ">Figure 2
<p>Video stabilization steps of the raw video. Stabilization between the image frame 1 and 10 are illustrated here for the demonstration purpose as they have a noticeable offset. However, in the experiments, adjacent images are stabilized. (<b>a</b>) Both images are overlaid to show the offset in the frames in red and cyan colors; (<b>b</b>,<b>c</b>) are the first and tenth frames of the video respectively with their detected key points; (<b>d</b>) The matching points between the two images are shown in red and green markers; (<b>e</b>) Outlier points are removed and only inlier points are used for the similarity transform; (<b>f</b>) The stabilized image.</p>
Full article ">Figure 3
<p>Random frames from the stabilized video A1 (video names are given in <a href="#remotesensing-12-00577-t001" class="html-table">Table 1</a>). Black colored areas represent the missing areas after the image transformation.</p>
Full article ">Figure 4
<p>The stabilized video is divided in to tiles as shown in the figure. Each yellow coloured box indicates a tile. The red grid shows the locations where the tiles were selected using a sliding window approach. Each tile has 50% overlap with neighbouring tiles in its row and column. The extracted tile from the yellow colour box are shown on the right.</p>
Full article ">Figure 5
<p>The extracted signal from the video A1. It corresponds to the tile shown in <a href="#remotesensing-12-00577-f004" class="html-fig">Figure 4</a>. The first signal is the temporal intensity changes in the difference image sequence. The band-passed and smoothed signal is shown in the middle and the detected peaks are marked on the third signal. x and y axes represent the number of frames and the intensity difference respectively.</p>
Full article ">Figure 6
<p>The first column of the images are the first frames of each aerial video (A1–A6). Aerial videos A7–A10 are shown in <a href="#remotesensing-12-00577-f007" class="html-fig">Figure 7</a>. The second column of images show their localization maps drawn over the first frame. Ground truth bounding boxes are shown in red. Detected bounding boxes are colored green.</p>
Full article ">Figure 7
<p>Aerial videos A7–A10. In these videos, the human subject was covered with a camouflage net. The simulated scenarios were A7: a camouflaged person and clearly visible person, A8: a person covered with a camouflaged net, A9: a camouflaged person and a mannequin and A10: a camouflaged person and mannequin partly covered with a sheet metal. A1–A6 are shown in <a href="#remotesensing-12-00577-f006" class="html-fig">Figure 6</a>.</p>
Full article ">Figure 8
<p>The first column of the images are the first frames of each ground video. The second column of images show their localization maps drawn over the first frame.</p>
Full article ">Figure A1
<p>Some selected mean images after the video stabilization of video A1. (<b>a</b>) The raw input mean (left) and corrected sequence mean (right) of video A1. The corrected sequence mean is calculated after the first level of stabilization. (<b>b</b>) Mean images after the second level of stabilization. Each pair of images represents the raw input mean (left) and corrected sequence mean (right) of a randomly selected tile video of A1.</p>
Full article ">
24 pages, 12339 KiB  
Article
Spatio-Temporal Mapping of Multi-Satellite Observed Column Atmospheric CO2 Using Precision-Weighted Kriging Method
by Zhonghua He, Liping Lei, Yuhui Zhang, Mengya Sheng, Changjiang Wu, Liang Li, Zhao-Cheng Zeng and Lisa R. Welp
Remote Sens. 2020, 12(3), 576; https://doi.org/10.3390/rs12030576 - 9 Feb 2020
Cited by 51 | Viewed by 6167
Abstract
Column-averaged dry-air mole fraction of atmospheric CO2 (XCO2), obtained by multiple satellite observations since 2003, such as ENVISAT/SCIAMACHY, GOSAT, and OCO-2, is valuable for understanding the spatio-temporal variations of atmospheric CO2 concentrations, which are related to carbon uptake and emissions. To construct a long-term, spatio-temporally continuous XCO2 record from multiple satellites with different temporal and spatial observation periods, we developed a precision-weighted spatio-temporal kriging method for integrating and mapping multi-satellite observed XCO2. The approach integrates XCO2 from different sensors while accounting for differences in vertical sensitivity, overpass time, field of view, repeat cycle, and measurement precision. We produced globally mapped XCO2 (GM-XCO2) at a spatial/temporal resolution of 1 × 1 degree every eight days from 2003 to 2016, with corresponding data precision and interpolation uncertainty in each grid cell. The predicted GM-XCO2 precision improved in most grid cells compared with conventional spatio-temporal kriging results, especially during the satellite overlap period (0.3–0.5 ppm). The method showed good reliability, with an R2 of 0.97 from cross-validation. GM-XCO2 showed good accuracy, with a standard deviation of bias from total carbon column observing network (TCCON) measurements of 1.05 ppm. This method has potential applications for integrating and mapping XCO2 or other similar datasets observed from multiple satellite sensors. The resulting GM-XCO2 product may also be used in different carbon cycle research applications with different precision requirements. Full article
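The core idea of precision weighting can be illustrated with inverse-variance weights: each sensor's observation is weighted by 1/σ², so more precise instruments dominate the merged estimate and the combined precision beats any single sensor. The snippet below is a minimal sketch with hypothetical co-located retrievals, not the paper's kriging implementation (which additionally models spatio-temporal correlation).

```python
import numpy as np

# Hypothetical co-located XCO2 retrievals and per-sensor precisions
# (illustrative stand-ins for e.g. SCIAMACHY, GOSAT, OCO-2 values).
xco2 = np.array([398.6, 399.4, 399.1])   # ppm
sigma = np.array([2.5, 2.0, 0.5])        # 1-sigma precision, ppm

w = 1.0 / sigma**2                            # inverse-variance weights
merged = np.sum(w * xco2) / np.sum(w)         # precision-weighted estimate
merged_sigma = np.sqrt(1.0 / np.sum(w))       # combined 1-sigma precision
print(round(merged, 2), round(merged_sigma, 2))
```

Here the merged value sits close to the most precise sensor's retrieval, and the combined σ is smaller than the best individual σ, which is the behavior the precision-weighted kriging exploits during satellite overlap periods.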
(This article belongs to the Special Issue Remote Sensing of Greenhouse Gases and Air Pollution)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Example of XCO<sub>2</sub> from SCIAMACHY, GOSAT, and OCO-2. Green and blue points represent SCI-XCO<sub>2</sub> and GOS-XCO<sub>2</sub> from 1–8 June 2009. Black and red points are GOS-XCO<sub>2</sub> and OCO-XCO<sub>2</sub> from 1–8 September 2014. Total carbon column observing network (TCCON) sites used for validation are shown with a pink star.</p>
Full article ">Figure 2
<p>Workflow chart of spatio-temporal integration of multi-satellite observed XCO<sub>2</sub> using a precision-weighted kriging method.</p>
Full article ">Figure 3
<p>One example of the optimized spatio-temporal semi-variogram surface (Zone 1: Latitude center: 55°N). Grey, black, and red points represent spatio-temporal semi-variogram that was calculated from experimental data, fitted models of the conventional and optimized correlation structure.</p>
Full article ">Figure 4
<p>Latitudinal-temporal change of integrated XCO<sub>2</sub> (<b>a</b>) and XCO<sub>2</sub> adjustments made during the integration processing, integrated XCO<sub>2</sub> (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mrow> <mi>int</mi> </mrow> </msub> </mrow> </msub> </mrow> </semantics></math>) minus original XCO<sub>2</sub> (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mrow> <mi>ret</mi> </mrow> </msub> </mrow> </msub> </mrow> </semantics></math>) (<b>b</b>) from SCIAMACHY, GOSAT, and OCO-2.</p>
Full article ">Figure 5
<p>Latitudinal and temporal variability of global mapped XCO<sub>2</sub> (GM-XCO<sub>2</sub>, top panel), the uncertainty of the prediction (standard deviation, middle panel), and precision (bottom panel).</p>
Full article ">Figure 6
<p>Latitudinal and temporal difference between results from precision-weighted and conventional spatio-temporal kriging methods for global mapped XCO<sub>2</sub> (GM-XCO<sub>2</sub>, top panel), the difference in the uncertainty of the prediction (standard deviation, middle panel) and the difference in GM-XCO<sub>2</sub> precision (bottom panel). Positive values indicate precision-weighted results are higher and vice versa.</p>
Full article ">Figure 7
<p>Spatial-temporal distribution of mean seasonal globally-mapped XCO<sub>2</sub> (GM-XCO<sub>2</sub>) during spring (March, April, May), summer (June, July, August), autumn (September, October, November) and winter (December, January, February) of 2003 (top-left), 2008 (top-right), 2013 (bottom-left), and 2015 (bottom-right). Color bars for different years assume an annual increase of 2 ppm.</p>
Full article ">Figure 8
<p>Spatial-temporal distribution of mean GM-XCO<sub>2</sub> in 2016.</p>
Full article ">Figure 9
<p>Results of cross-validation using the precision-weighted spatio-temporal kriging method. The relationship between predicted XCO<sub>2</sub> (GM-XCO<sub>2</sub>) and reserved integrated XCO<sub>2</sub> (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mi>int</mi> </msub> </mrow> </msub> </mrow> </semantics></math>) is shown in the left panel. The distribution of predicted bias (absolute difference between GM-XCO<sub>2</sub> and reserved <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mi>int</mi> </msub> </mrow> </msub> </mrow> </semantics></math> ) and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mi>int</mi> </msub> </mrow> </msub> </mrow> </semantics></math> precision is shown in the right panel. The black and red lines in the right panel represent the slope of 1 and 2.</p>
Full article ">Figure 10
<p>Temporal variation comparison of GM-XCO<sub>2</sub> at 12 TCCON sites. Grey, red, and blue points represent <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mi>int</mi> </msub> </mrow> </msub> </mrow> </semantics></math>, GM-XCO<sub>2,</sub> and XCO<sub>2</sub> from TCCON measurements, respectively. <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>XCO</mi> </mrow> <mrow> <msub> <mn>2</mn> <mi>int</mi> </msub> </mrow> </msub> </mrow> </semantics></math> was retrieved within 500 km of TCCON sites. TCCON measurements from 11:00 to 15:00 local time were selected for comparison.</p>
Full article ">Figure 11
<p>Latitudinal and temporal variability of the difference between GM-XCO<sub>2</sub> and CT-XCO<sub>2</sub> (<b>a</b>): GM-XCO<sub>2</sub> minus CT-XCO<sub>2</sub>) and a histogram of the differences (<b>b</b>).</p>
Full article ">Figure 12
<p>Temporal variation of XCO<sub>2</sub> from integrated and global mapped results (grey and red points) and CarbonTracker (blue points) over latitude in the range of 30 to 45°N and longitude of 60 to 125°W and 60 to 125°E.</p>
Full article ">Figure A1
<p>Latitudinal-temporal change of mean XCO<sub>2</sub> averaging kernel from SCIAMACHY (January–March 2003), GOSAT (June 2009–May 2014), and OCO-2 (September 2014–December 2016).</p>
Full article ">Figure A2
<p>Latitudinal-temporal change of CT-XCO<sub>2</sub> from 2003 to 2016</p>
Full article ">
24 pages, 6574 KiB  
Article
A Method for Vehicle Detection in High-Resolution Satellite Images that Uses a Region-Based Object Detector and Unsupervised Domain Adaptation
by Yohei Koga, Hiroyuki Miyazaki and Ryosuke Shibasaki
Remote Sens. 2020, 12(3), 575; https://doi.org/10.3390/rs12030575 - 9 Feb 2020
Cited by 59 | Viewed by 10356 | Correction
Abstract
Recently, object detectors based on deep learning have become widely used for vehicle detection and have contributed to drastic improvements in performance. However, deep learning requires much training data, and detection performance notably degrades when the target area of vehicle detection (the target domain) differs from the training data (the source domain). To address this problem, we propose an unsupervised domain adaptation (DA) method that does not require labeled training data and thus can maintain detection performance in the target domain at low cost. We applied Correlation alignment (CORAL) DA and adversarial DA to our region-based vehicle detector and improved the detection accuracy by over 10% in the target domain. We further improved adversarial DA by utilizing a reconstruction loss to facilitate learning semantic features. Our proposed method achieved slightly better performance than training on the labeled data of the target domain itself. We demonstrated that our improved DA method can achieve almost the same level of accuracy at a lower cost than non-DA methods trained with a sufficient amount of labeled data from the target domain. Full article
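The CORAL objective referenced here aligns the second-order statistics of source and target features: the squared Frobenius distance between the two feature covariance matrices, normalized by 4d² (Sun and Saenko's formulation). A minimal numpy sketch, using random feature matrices as stand-ins for the detector's feature maps:

```python
import numpy as np

# Random stand-ins for d-dimensional detector features from two domains;
# the target is shifted and rescaled to simulate domain gap.
rng = np.random.default_rng(42)
d = 8
src = rng.normal(0.0, 1.0, (500, d))      # source-domain features
tgt = rng.normal(0.5, 1.5, (500, d))      # shifted target-domain features

def coral_loss(xs, xt):
    """Squared Frobenius distance between feature covariances / (4 d^2)."""
    cs = np.cov(xs, rowvar=False)         # source covariance (d x d)
    ct = np.cov(xt, rowvar=False)         # target covariance (d x d)
    return np.sum((cs - ct) ** 2) / (4.0 * xs.shape[1] ** 2)

print(coral_loss(src, tgt) > coral_loss(src, src))  # True: domains differ
```

In training, this loss is minimized jointly with the detection loss so the feature extractor produces domain-invariant statistics; the adversarial variant replaces the covariance matching with a learned domain discriminator.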
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Single Shot Multibox Detector (SSD) architecture.</p>
Full article ">Figure 2
<p>Overview of Correlation alignment (CORAL) domain adaptation (DA).</p>
Full article ">Figure 3
<p>Overview of adversarial DA: (<b>a</b>) algorithm and (<b>b</b>) discriminator network. Leaky ReLU is a variant of the activation function called rectified linear unit (ReLU).</p>
Full article ">Figure 4
<p>Image examples from the four study regions from the Cars Overhead with Context (COWC) dataset: (<b>a</b>) Toronto, Canada; (<b>b</b>) Selwyn, New Zealand; (<b>c</b>) Potsdam, Germany; and (<b>d</b>) Utah, United States.</p>
Full article ">Figure 5
<p>Study area in Japan. (<b>a</b>) Map of Tokyo. The red rectangle outlines the study area for which aerial images were obtained. We extracted non-labeled training images for DA from the area outlined by the blue rectangles and labeled training and test images from the area outlined by the red rectangle (excluding the blue rectangles) where the latter images were manually annotated. (<b>b</b>) An example image.</p>
Full article ">Figure 6
<p>Detection procedure for an image larger than 300 × 300 pixels. The image is separated into 300 × 300 pixel tiles with an overlap of 50 pixels, and the detection results of each tile are merged.</p>
Full article ">Figure 7
<p>Detection accuracy improvement in each condition by increment of labeled training data of the target domain. S and T represent the source and target domains, respectively.</p>
Full article ">Figure 8
<p>SSD training curve with the Dataset S_train. “Loss” represents <math display="inline"> <semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>S</mi> <mi>S</mi> <mi>D</mi> </mrow> </msub> </mrow> </semantics> </math>, which is the sum of the localization loss (“loss/loc”) and the classification loss (“loss/conf”).</p>
Full article ">Figure 9
<p>Training and validation curves of each DA method: (<b>a</b>) CORAL DA loss, (<b>b</b>) adversarial DA loss, (<b>c</b>) reconstruction loss in Adversarial DA with reconstruction, (<b>d</b>) reconstruction loss in the reconstruction-only method, (<b>e</b>) validation accuracy (1000 iterations moving average of mean of average precision (AP) and F1 measure (F1)) of each DA method. Best viewed in color. The black, purple, yellow, red, green, gray, orange, blue, and pink lines represent CORAL loss, SSD loss, extractor loss, discriminator loss, reconstruction loss, scores of mean of AP and F1 in adversarial DA with reconstruction, adversarial DA, CORAL DA, and reconstruction only, respectively.</p>
Full article ">Figure 9 Cont.
<p>Training and validation curves of each DA method: (<b>a</b>) CORAL DA loss, (<b>b</b>) adversarial DA loss, (<b>c</b>) reconstruction loss in Adversarial DA with reconstruction, (<b>d</b>) reconstruction loss in the reconstruction-only method, (<b>e</b>) validation accuracy (1000 iterations moving average of mean of average precision (AP) and F1 measure (F1)) of each DA method. Best viewed in color. The black, purple, yellow, red, green, gray, orange, blue, and pink lines represent CORAL loss, SSD loss, extractor loss, discriminator loss, reconstruction loss, scores of mean of AP and F1 in adversarial DA with reconstruction, adversarial DA, CORAL DA, and reconstruction only, respectively.</p>
Full article ">Figure 10
<p>Average scores with various <math display="inline"> <semantics> <mi>γ</mi> </semantics> </math>.</p>
Full article ">Figure 11
<p>Examples of detection results: (<b>a</b>) before applying DA and (<b>b</b>) after applying Adv with rec. The red, blue, and green bounding boxes represent true positives (correct detections), false positives (misdetections), and false negatives (undetected groundtruth vehicles), respectively.</p>
Full article ">Figure 12
<p>Reconstructed image samples. (<b>a</b>) An original source domain image, (<b>b</b>) a reconstructed source domain image, (<b>c</b>) an original target domain image, (<b>d</b>) a reconstructed target domain image.</p>
Full article ">Figure 12 Cont.
<p>Reconstructed image samples. (<b>a</b>) An original source domain image, (<b>b</b>) a reconstructed source domain image, (<b>c</b>) an original target domain image, (<b>d</b>) a reconstructed target domain image.</p>
Full article ">Figure 13
<p>Validation curve of methods in <a href="#sec4dot7-remotesensing-12-00575" class="html-sec">Section 4.7</a>: (<b>a</b>) without reconstruction, (<b>b</b>) with reconstruction. Blue, green, and red lines represent F1, AP, and mean of F1 and AP, respectively.</p>
Full article ">Figure 14
<p>The curve of reconstruction loss in the method with reconstruction in <a href="#sec4dot7-remotesensing-12-00575" class="html-sec">Section 4.7</a>.</p>
Full article ">
10 pages, 236 KiB  
Letter
Advancing High-Throughput Phenotyping of Wheat in Early Selection Cycles
by Yuncai Hu, Samuel Knapp and Urs Schmidhalter
Remote Sens. 2020, 12(3), 574; https://doi.org/10.3390/rs12030574 - 9 Feb 2020
Cited by 29 | Viewed by 4419
Abstract
Enhancing plant breeding to ensure global food security requires new technologies. For wheat phenotyping, only limited seed and resources are available in early selection cycles. This forces breeders to use small plots with single or multiple rows in order to include the maximum number of genotypes/lines in their assessment. High-throughput phenotyping through remote sensing may meet the requirements for phenotyping thousands of genotypes grown in small plots in early selection cycles. Therefore, the aim of this study was to compare the performance of an unmanned aerial vehicle (UAV) with ground-based spectral sensing for assessing the grain yield of wheat genotypes grown at different row numbers per plot in early selection cycles. A field experiment consisting of 32 wheat genotypes with four plot designs (1, 2, 3, and 12 rows per plot) was conducted. Near-infrared (NIR)-based spectral indices showed significant correlations (p < 0.01) with grain yield from flowering to grain filling, regardless of row number, indicating the potential of spectral indices as indirect selection traits for wheat grain yield. Compared with terrestrial sensing, aerial sensing from the UAV showed consistently higher levels of association with grain yield, indicating that increased precision may be obtained, which is expected to increase the efficiency of high-throughput phenotyping in large-scale plant breeding programs. Our results suggest that high-throughput sensing from UAVs may become a convenient and efficient tool for breeders to promote a more efficient selection of improved genotypes in early selection cycles. Such new information may support the calibration of genomic information by providing additional data on other complex traits that can be ascertained by spectral sensing. Full article
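An NIR-based spectral index used as an indirect yield trait can be sketched with NDVI, the classic normalized difference of NIR and red reflectance, correlated against per-plot grain yield. The values below are entirely synthetic (the abstract names neither the specific indices nor the data), so this is only a schematic of the screening workflow.

```python
import numpy as np

# Synthetic per-plot reflectances and yields for 32 genotypes (one plot
# each, for illustration); not the study's measurements.
rng = np.random.default_rng(7)
n_plots = 32
nir = rng.uniform(0.35, 0.55, n_plots)        # NIR reflectance
red = rng.uniform(0.04, 0.10, n_plots)        # red reflectance
ndvi = (nir - red) / (nir + red)              # normalized difference index
grain_yield = 4.0 + 6.0 * ndvi + rng.normal(0, 0.1, n_plots)  # t/ha

# Pearson correlation between index and yield: the basis for using the
# index as an indirect selection trait.
r = np.corrcoef(ndvi, grain_yield)[0, 1]
print(round(r, 2))
```

In a breeding program, genotypes would be ranked by the index at flowering to grain filling, with high index-yield correlation justifying selection before harvest data exist.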
Show Figures

Graphical abstract
Full article ">
22 pages, 4385 KiB  
Article
Formation Design for Single-Pass GEO InSAR Considering Earth Rotation Based on Coordinate Rotational Transformation
by Zhiyang Chen, Xichao Dong, Yuanhao Li and Cheng Hu
Remote Sens. 2020, 12(3), 573; https://doi.org/10.3390/rs12030573 - 8 Feb 2020
Cited by 10 | Viewed by 4281
Abstract
Single-pass geosynchronous synthetic aperture radar interferometry (GEO InSAR) adopts a formation in which a slave satellite accompanies the master satellite, which can reduce the temporal decorrelation caused by atmospheric disturbance and the observation time gap between repeated tracks. Current formation design methods for spaceborne SAR are based on the Relative Motion Equation (RME) in the Earth-Centered-Inertial (ECI) coordinate system (referred to as ECI-RME). Since Earth's rotation is not taken into account, these methods lead to a significant baseline calculation error when applied to formation design for GEO InSAR. In this paper, a formation design method for single-pass GEO InSAR based on Coordinate Rotational Transformation (CRT) is proposed. Through CRT, the RME in the Earth-Centered-Earth-Fixed (ECEF) coordinate system (referred to as ECEF-RME) is derived. The ECEF-RME can be used to describe the accurate baseline of close-flying satellites for different orbital altitudes, not limited to geosynchronous orbit. To address the problem that ECEF-RME does not have a regular geometry as ECI-RME does, a numerical formation design method based on the minimum baseline error criterion is proposed. Then, an analytical formation design method is proposed for GEO InSAR, based on the Minimum Along-track Baseline Criterion (MABC) subject to a fixed root mean square of the perpendicular baseline. Simulation results verify the validity of the ECEF-RME and the analytical formation design method. The simulation results also show that the proposed method can help alleviate atmospheric phase impacts and improve the retrieval accuracy of the digital elevation model (DEM) compared with the ECI-RME-based approach. Full article
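The coordinate rotational transformation at the heart of the method is, in its simplest form, a rotation about the z-axis by the Earth rotation angle θ = ω⊕·t (neglecting precession, nutation, and polar motion, which the full CRT would include). A minimal sketch, using a GEO-like position vector as an illustrative input:

```python
import numpy as np

OMEGA_E = 7.2921159e-5            # Earth rotation rate, rad/s

def eci_to_ecef(r_eci, t):
    """Rotate an ECI position into ECEF by the Earth rotation angle.

    Simplified CRT: z-axis rotation only; precession/nutation neglected.
    """
    theta = OMEGA_E * t           # Earth rotation angle since epoch
    rot_z = np.array([
        [ np.cos(theta), np.sin(theta), 0.0],
        [-np.sin(theta), np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    return rot_z @ r_eci

r_geo = np.array([42164.0, 0.0, 0.0])        # km, on the ECI x-axis
# After a quarter sidereal day the ECEF frame has rotated 90 degrees, so
# the same inertial point appears near the ECEF -y axis.
quarter_day = 0.25 * 2 * np.pi / OMEGA_E
print(np.round(eci_to_ecef(r_geo, quarter_day), 3))
```

Applying this rotation to both satellites of the formation before differencing their positions is what makes the ECEF baseline, and hence the interferometric geometry, come out correctly for slow-moving GEO orbits, where the ECI picture is badly distorted by Earth rotation.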
Show Figures

Graphical abstract
Full article ">Figure 1
<p>GEO InSAR formation. (<b>a</b>) Sketch map; (<b>b</b>) Nadir tracks.</p>
Full article ">Figure 2
<p>Spacecraft orbit coordinate system.</p>
Full article ">Figure 3
<p>Angle between velocities in ECF and ECI coordinate. (<b>a</b>) Definition of the angle; (<b>b</b>) the angle in LEO SAR; (<b>c</b>) the angle in GEO SAR.</p>
Full article ">Figure 4
<p>The auxiliary coordinate system.</p>
Full article ">Figure 5
<p>Shape of perpendicular baseline for different inclinations in GEO SAR.</p>
Full article ">Figure 6
<p>Nadir-point tracks of LEO SAR, MEO SAR, and GEO SAR (by STK).</p>
Full article ">Figure 7
<p>Baseline calculated using ECF equation, ECI equation, and STK data. (<b>a</b>) LEO SAR formation; (<b>b</b>) MEO SAR formation; (<b>c</b>) GEO SAR formation.</p>
Full article ">Figure 8
<p>Baseline comparison of numerical optimization and proposed analytical expression for GEO InSAR. (<b>a</b>) Along-track baseline; (<b>b</b>) Perpendicular baseline.</p>
Full article ">Figure 9
<p>Formation design results and DEM setting. (<b>a</b>) Formation designed by the proposed method; (<b>b</b>) Formation designed according to ECI-RME; (<b>c</b>) DEM setting for simulation.</p>
Full article ">Figure 10
<p>InSAR simulation results. (<b>a</b>) The correlation coefficient (corr. coe.), with a mean value of 0.53; (<b>b</b>) Retrieved height; (<b>c</b>) Retrieval error, with a mean value of 0.67 m, of the SAR pairs acquired from the formation by the proposed method; (<b>d</b>) Corr. coe., with a mean value of 0.33; (<b>e</b>) Retrieved height; (<b>f</b>) Retrieval error, with a mean value of 7.62 m, of the SAR pairs acquired from the formation by ECI-RME, without optimal data acquisition; (<b>g</b>) Corr. coe., with a mean value of 0.90; (<b>h</b>) Retrieved height; (<b>i</b>) Retrieval error, with a mean value of 1.01 m, of the SAR pairs acquired from the formation by ECI-RME, with optimal data acquisition.</p>
Full article ">Figure A1
<p>Part of the simulation verification of numerical optimization with fixed <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>1</mn> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>2</mn> </msub> </mrow> </semantics></math>. The corresponding parameters are: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>30</mn> <mo>°</mo> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>20</mn> <mo>°</mo> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>8</mn> <mo>°</mo> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0</mn> <mo>°</mo> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>90</mn> <mo>°</mo> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>°</mo> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>90</mn> <mo>°</mo> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>90</mn> <mo>°</mo> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>16</mn> <mo>°</mo> </mrow> </semantics></math>.</p>
Full article ">Figure A2
<p>Part of simulation verification of numerical optimization with fixed <math display="inline"><semantics> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> </mrow> </semantics></math>. The corresponding parameters are: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>8</mn> <mo>°</mo> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>=</mo> <mo>−</mo> <mn>200</mn> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>°</mo> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>150</mn> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>16</mn> <mo>°</mo> </mrow> </semantics></math>.</p>
Full article ">
19 pages, 3184 KiB  
Article
Trend Evolution of Vegetation Phenology in China during the Period of 1981–2016
by Fusheng Jiao, Huiyu Liu, Xiaojuan Xu, Haibo Gong and Zhenshan Lin
Remote Sens. 2020, 12(3), 572; https://doi.org/10.3390/rs12030572 - 8 Feb 2020
Cited by 35 | Viewed by 4144
Abstract
The trend of vegetation phenology dynamics is of crucial importance for understanding vegetation growth and its responses to climate change. However, it remains unclear how the trends of vegetation phenology changed over the past decades. By analyzing phenology data including start (SOS), end [...] Read more.
The trend of vegetation phenology dynamics is of crucial importance for understanding vegetation growth and its responses to climate change. However, it remains unclear how the trends of vegetation phenology have changed over the past decades. By analyzing phenology data, including the start (SOS), end (EOS), and length (LOS) of the growing season, with ensemble empirical mode decomposition (EEMD), we revealed the trend evolution of vegetation phenology in China during 1981–2016. Our study suggests that: (1) On the national scale, with the EEMD method, the change rates of SOS and LOS increased with time, while that of EOS decreased. Moreover, the EEMD rates of SOS and LOS exceeded the linear rates in the early 2000s, while that of EOS dropped below the linear rate in the mid-1980s. (2) For each phenological event, shifted trends took up a large area (~30%), close to the sum of all monotonic trends and larger than any single monotonic trend type. The shifted trends mainly occurred in north-eastern China, the eastern Qinghai-Tibetan Plateau, the eastern Sichuan Basin, the North China Plain, and the Loess Plateau. (3) For each phenological event, the areas at high latitudes experienced trends contrary to those of the other areas. The amplitude and frequency of phenology variation at mid-latitudes were stronger than those at high and low latitudes. Meanwhile, LOS at high latitudes was driven by SOS. (4) Based on the evolution of trends with longitude, the country can be divided into an eastern region (east of 121°E), a central region (92°E–121°E), and a western region (west of 92°E). The east experienced a delayed SOS and a shortened LOS, which differed from the other regions. The magnitudes of the delayed trends in EOS and the prolonged trends in LOS strengthened from east to west.
The variation characteristics of LOS with longitude were mainly caused by SOS in the eastern region, and by SOS and EOS together in the western and central regions. (5) All land cover types, except Needleleaf Forests, experienced the same trends. For most land cover types, the advance of SOS, the delay of EOS, and the extension of LOS began in the 1980s, the 1990s, and the 1990s, respectively, and were enhanced several times thereafter. Moreover, the magnitudes of the SOS trends in Grasslands and the EOS trends in Evergreen Broadleaf Forest were much greater than those of the others, while croplands showed the smallest magnitude in each phenological event. Our results show that analyzing trend evolution with a nonlinear method is very important for accurately revealing the variation characteristics of phenology trends and for extracting the inherent trend shifts. Full article
(This article belongs to the Special Issue Monitoring Vegetation Phenology: Trends and Anomalies)
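The abstract's central contrast — a time-varying (EEMD-style) trend rate that can cross the single linear rate — can be illustrated without EEMD itself. Below is a minimal pure-Python sketch; the synthetic SOS series and the 11-year sliding window are illustrative assumptions, not the authors' data or method:

```python
def ols_slope(t, y):
    """Ordinary least-squares slope of y against t (closed form)."""
    n = len(t)
    tm = sum(t) / n
    ym = sum(y) / n
    num = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
    den = sum((ti - tm) ** 2 for ti in t)
    return num / den

def windowed_rates(t, y, width):
    """Slope in each sliding window -- a crude stand-in for the
    time-varying rate of an EEMD-extracted trend."""
    return [ols_slope(t[i:i + width], y[i:i + width])
            for i in range(len(t) - width + 1)]

# Synthetic SOS series (day of year), 1981-2016: the advance accelerates,
# so the local rate eventually exceeds the single linear rate in magnitude.
years = list(range(1981, 2017))
sos = [120 - 0.01 * (yr - 1981) ** 2 for yr in years]

linear_rate = ols_slope(years, sos)          # one number for 36 years
local = windowed_rates(years, sos, 11)       # one number per 11-year window
```

Because the synthetic advance accelerates, the late-period windowed rate exceeds the whole-period linear rate in magnitude, mirroring how the EEMD rate of SOS overtook the linear rate around the early 2000s.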
Show Figures

Figure 1
<p>Selected land cover map of China from MODIS MCD12C1 products from 2001–2016.</p>
Full article ">Figure 2
<p>The left three are the mean SOS, EOS and LOS and their EEMD and linear trends. The green solid line and red dotted line indicate the EEMD trend and the linear trend, respectively. The right three are the linear rates of SOS, EOS and LOS and their EEMD trend rates. The green solid line and red dotted line indicate the EEMD trend rates and the linear trend rates, respectively.</p>
Full article ">Figure 3
<p>The spatial pattern of different types of phenology trends. (<b>a</b>–<b>c</b>) show the spatial pattern estimated by EEMD of SOS, EOS and LOS, respectively. (<b>d</b>–<b>f</b>) show the spatial pattern estimated by M-K of SOS, EOS and LOS, respectively. ‘Nonsig’, ‘Mon In’, ‘Mon De’, ‘In to De’ and ‘De to In’ are abbreviations for ‘Nonsignificant’, ‘Monotonic Increase’, ‘Monotonic Decrease’, ‘Increase to Decrease’ and ‘Decrease to Increase’.</p>
Full article ">Figure 4
<p>Variation of the trends estimated by EEMD in each latitudinal band. (<b>a</b>–<b>c</b>) show the results of SOS, EOS and LOS, respectively. In each subfigure, a moving average over 0.5° in the meridional direction was used to eliminate possible noise.</p>
Full article ">Figure 5
<p>Variation of the trends estimated by EEMD in each longitudinal band. (<b>a</b>–<b>c</b>) show the results of SOS, EOS and LOS, respectively. In each subfigure, an average over 0.5° in the equatorial direction was used to eliminate possible noise.</p>
Full article ">Figure 6
<p>Variation of the trends with time estimated by EEMD in each land cover type. (<b>a</b>–<b>c</b>) show the results of SOS, EOS and LOS, respectively.</p>
Full article ">
20 pages, 3179 KiB  
Article
Individual Tree Position Extraction and Structural Parameter Retrieval Based on Airborne LiDAR Data: Performance Evaluation and Comparison of Four Algorithms
by Wei Chen, Haibing Xiang and Kazuyuki Moriya
Remote Sens. 2020, 12(3), 571; https://doi.org/10.3390/rs12030571 - 8 Feb 2020
Cited by 23 | Viewed by 4832
Abstract
Information for individual trees (e.g., position, treetop, height, crown width, and crown edge) is beneficial for forest monitoring and management. Light Detection and Ranging (LiDAR) data have been widely used to retrieve these individual tree parameters from different algorithms, with varying successes. In [...] Read more.
Information for individual trees (e.g., position, treetop, height, crown width, and crown edge) is beneficial for forest monitoring and management. Light Detection and Ranging (LiDAR) data have been widely used to retrieve these individual tree parameters with different algorithms, with varying success. In this study, we used an iterative Triangulated Irregular Network (TIN) algorithm to separate ground and canopy points in airborne LiDAR data, and generated Digital Elevation Models (DEM) by Inverse Distance Weighted (IDW) interpolation, thin spline interpolation, and trend surface interpolation, as well as by the Kriging algorithm. The height of the point cloud was assigned to a Digital Surface Model (DSM), and a Canopy Height Model (CHM) was acquired. Then, four algorithms (a point-cloud-based local maximum algorithm, a CHM-based local maximum algorithm, a watershed algorithm, and a template-matching algorithm) were used comparatively to extract the structural parameters of individual trees. The results indicated that the two local maximum algorithms can effectively detect the treetop; the watershed algorithm can accurately extract individual tree height and determine the tree crown edge; and the template-matching algorithm works well to extract accurate crown width. This study provides a reference for the selection of algorithms in individual tree parameter inversion based on airborne LiDAR data and is of great significance for LiDAR-based forest monitoring and management. Full article
(This article belongs to the Section Forest Remote Sensing)
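The CHM-based local maximum algorithm evaluated here is, at its core, a moving-window peak test: a cell is a treetop candidate if it is strictly taller than every neighbour within a search radius. A minimal sketch follows; the toy CHM, the one-cell radius, and the 2 m height threshold are assumptions for illustration, not the paper's settings:

```python
def local_maxima(chm, radius=1, min_height=2.0):
    """Treetop candidates: cells strictly taller than every neighbour
    within `radius` cells and above `min_height` (to skip ground/shrubs)."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue
            neighbours = [
                chm[rr][cc]
                for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                for cc in range(max(0, c - radius), min(cols, c + radius + 1))
                if (rr, cc) != (r, c)
            ]
            if all(h > nb for nb in neighbours):
                tops.append((r, c, h))
    return tops

# Toy 5x5 CHM (metres) with two crowns
chm = [
    [0.0, 0.5, 0.4, 0.0, 0.0],
    [0.5, 8.0, 6.0, 0.5, 0.0],
    [0.4, 6.0, 5.0, 4.0, 0.3],
    [0.0, 0.5, 4.0, 9.5, 4.2],
    [0.0, 0.0, 0.3, 4.2, 0.2],
]
tops = local_maxima(chm)
```

On this toy grid the test isolates the two crown apices while ignoring the shoulders of each crown.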
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of the study area and investigated plots: (<b>a</b>,<b>b</b>) the study area of Dayekou forestry site in Gansu, China; (<b>c</b>) Digital Elevation Model (DEM) map; (<b>d</b>) distribution of LiDAR points and field data; (<b>e</b>) distribution of individual trees in the field measurements; the numbers in this figure are the super plot numbers.</p>
Full article ">Figure 2
<p>The profile of ground point separation results (orange points indicate ground points, and white points indicate non-ground points).</p>
Full article ">Figure 3
<p>The trend of the determination coefficient for the correlation of the treetop and crown with the cosine function at different radii.</p>
Full article ">Figure 4
<p>The error histogram of the four interpolation algorithms: (<b>a</b>) inverse distance weighted (IDW), (<b>b</b>) thin spline, (<b>c</b>) trend surface, and (<b>d</b>) Kriging algorithms (<span class="html-italic">X</span>-axis = the difference between the measured and interpolated elevation; <span class="html-italic">Y</span>-axis = frequency).</p>
Full article ">Figure 5
<p>The canopy height model (CHM) of the study area.</p>
Full article ">Figure 6
<p>The distribution of the individual trees extracted by five different algorithms: (<b>a</b>) Max_H, (<b>b</b>) Max_CHM, (<b>c</b>) watershed, (<b>d</b>) cosine-template-matching, and (<b>e</b>) cone-template-matching algorithms.</p>
Full article ">Figure 7
<p>The scatter plot of the measured and extracted tree heights by (<b>a</b>) Max_H, (<b>b</b>) Max_CHM, (<b>c</b>) watershed, (<b>d</b>) cosine-template-matching, and (<b>e</b>) cone-template-matching algorithms (<span class="html-italic">X</span>-axis is measured tree height; <span class="html-italic">Y</span>-axis is extracted tree height).</p>
Figure 7 Cont.">
Full article ">
19 pages, 5947 KiB  
Article
Assessment of Multi-Scale SMOS and SMAP Soil Moisture Products across the Iberian Peninsula
by Gerard Portal, Thomas Jagdhuber, Mercè Vall-llossera, Adriano Camps, Miriam Pablos, Dara Entekhabi and Maria Piles
Remote Sens. 2020, 12(3), 570; https://doi.org/10.3390/rs12030570 - 8 Feb 2020
Cited by 32 | Viewed by 5729
Abstract
In the last decade, technological advances led to the launch of two satellite missions dedicated to measure the Earth’s surface soil moisture (SSM): the ESA’s Soil Moisture and Ocean Salinity (SMOS) launched in 2009, and the NASA’s Soil Moisture Active Passive (SMAP) launched [...] Read more.
In the last decade, technological advances led to the launch of two satellite missions dedicated to measuring the Earth’s surface soil moisture (SSM): the ESA’s Soil Moisture and Ocean Salinity (SMOS), launched in 2009, and the NASA’s Soil Moisture Active Passive (SMAP), launched in 2015. The two satellites have an L-band microwave radiometer on board to measure the Earth’s surface emission. These measurements (brightness temperatures TB) are then used to generate global maps of SSM every three days with a spatial resolution of about 30–40 km and a target accuracy of 0.04 m3/m3. To meet local application needs, different approaches have been proposed to spatially disaggregate SMOS and SMAP TB or their SSM products. They rely on synergies between multi-sensor observations and are built upon different physical assumptions. In this study, temporal and spatial characteristics of six operational SSM products derived from SMOS and SMAP are assessed in order to diagnose their distinct features and the rationale behind them. The study is focused on the Iberian Peninsula and covers the period from April 2015 to December 2017. A temporal inter-comparison analysis is carried out using in situ SSM data from the Soil Moisture Measurements Station Network of the University of Salamanca (REMEDHUS) to evaluate the impact of the spatial scale of the different products (1, 3, 9, 25, and 36 km) and their correspondence in terms of temporal dynamics. A spatial analysis is conducted for the whole Iberian Peninsula, with emphasis on the added value that the enhanced-resolution products provide based on the microwave-optical (SMOS/ERA5/MODIS) or the active–passive microwave (SMAP/Sentinel-1) sensor fusion. Our results show overall agreement among the time series of the products regardless of their spatial scale when compared to in situ measurements.
Still, higher spatial resolutions would be needed to capture local features such as small irrigated areas that are not dominant at the 1-km pixel scale. The degree to which spatial features are resolved by the enhanced-resolution products depends on the multi-sensor synergies employed (at TB or soil moisture level) and on the nature of the fine-scale information used. The largest disparities between these products occur in forested areas, which may be related to the reduced sensitivity of high-resolution active microwave and optical data to soil properties under dense vegetation. Full article
(This article belongs to the Special Issue Ten Years of Remote Sensing at Barcelona Expert Center)
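Temporal inter-comparison of SSM products against in situ data typically reports the Pearson correlation and, often, the unbiased RMSE, which separates skill in reproducing the temporal dynamics from a constant product bias. A small sketch, assuming toy soil-moisture values (not data from this study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def ubrmse(obs, sat):
    """Unbiased RMSE: RMSE after removing the mean bias between products."""
    n = len(obs)
    bias = sum(s - o for o, s in zip(obs, sat)) / n
    return math.sqrt(sum((s - o - bias) ** 2
                         for o, s in zip(obs, sat)) / n)

# In situ vs. satellite SSM (m3/m3): the satellite track has a constant
# dry bias, so R is perfect and ubRMSE is zero even though RMSE is not.
in_situ = [0.10, 0.20, 0.30, 0.25, 0.15]
satellite = [o - 0.04 for o in in_situ]
r = pearson_r(in_situ, satellite)
u = ubrmse(in_situ, satellite)
```

The toy case makes the point of the metric pair: a biased product can still track the in situ dynamics perfectly.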
Show Figures

Graphical abstract
Full article ">Figure 1
<p>CCI land cover map (at 300 m) over the Iberian Peninsula (left) and a close-up of the REMEDHUS area (right). Black dots depict the 20 in situ SSM stations of the REMEDHUS network available for the study period (from April 2015 to December 2017). The distribution of the land cover within the REMEDHUS area is: agriculture, 95.45% (cropland, 75.44%; irrigated, 16.11%; other, 3.90%); forest, 2.70%; grassland, 0.63%; wetland, 0%; settlement, 0.26%; and other, 0.95%.</p>
Full article ">Figure 2
<p>Daily evolution of the in situ SSM (black) and the three low-resolution (radiometer-only) SSM (SMAPL2_E, red; SMAPL2, green; and SMOSL3, blue) at three REMEDHUS stations with different land use: (<b>a</b>) J3 (vineyard), (<b>b</b>) K13 (irrigated), and (<b>c</b>) O7 (rainfed/fallow).</p>
Full article ">Figure 3
<p>Daily evolution of in situ SSM (black) and the three high-resolution SSM products (SMAP_AP1 at 1 km, red; SMAP_AP3 at 3 km, green; and SMOSL4 at 1 km, blue) after averaging time series of rainfed/fallow stations (F11, H13, J12, J14, K10, M09, and O07) and the pixel time series that contain these stations.</p>
Full article ">Figure 4
<p>Temporally-averaged map of daily SMAP (<b>a</b>) and SMOS (<b>b</b>) products at 1 km over the Iberian Peninsula for the period December 2016 to February 2017.</p>
Full article ">Figure 5
<p>(<b>a</b>) Temporally-averaged map of daily SSM differences between SMAP and SMOS at 1 km (SMAP_AP1 minus SMOSL4) and (<b>b</b>) histogram of daily SSM differences maps, for the period April 2015 to December 2017.</p>
Full article ">Figure 6
<p>(First row) The three most common land cover types over the Iberian Peninsula ((<b>a</b>) agriculture; (<b>b</b>) forest; and (<b>c</b>) grassland) according to the CCI LC map. (Second row) Histograms of the daily SSM differences (SMAP_AP1 minus SMOSL4) for the respective land covers.</p>
Full article ">Figure 7
<p>(<b>a</b>) Temporally averaged map of daily T<sub>B</sub> differences between SMAP (40° incidence angle) and SMOS (42.5° incidence angle) at 25 km and (<b>b</b>) histogram of temporally-averaged daily T<sub>B</sub> differences, for the period April 2015 to December 2017.</p>
Full article ">Figure 8
<p>Temporally-averaged map (<b>a</b>) and histogram (<b>b</b>) of daily SMAP SSM differences (SMAP_AP1 at 1 km minus SMAPL2 at 36 km), for the period April 2015 to December 2017.</p>
Full article ">Figure 9
<p>Temporally averaged map (<b>a</b>) and histogram (<b>b</b>) of daily SMOS SSM differences (SMOSL4 at 1 km minus SMOSL3 at 25 km), for the period April 2015 to December 2017.</p>
Full article ">
24 pages, 9559 KiB  
Article
Evapotranspiration in the Tono Reservoir Catchment in Upper East Region of Ghana Estimated by a Novel TSEB Approach from ASTER Imagery
by Abdullah Alhassan and Menggui Jin
Remote Sens. 2020, 12(3), 569; https://doi.org/10.3390/rs12030569 - 8 Feb 2020
Cited by 4 | Viewed by 4053
Abstract
Evapotranspiration (ET) is dynamic and influences water resource distribution. Sustainable management of water resources requires accurate estimations of the individual components that result in evapotranspiration, including the daily net radiation (DNR). Daily ET is more useful than the evaporative fraction (EF) provided by [...] Read more.
Evapotranspiration (ET) is dynamic and influences water resource distribution. Sustainable management of water resources requires accurate estimations of the individual components that result in evapotranspiration, including the daily net radiation (DNR). Daily ET is more useful than the evaporative fraction (EF) provided by remote sensing ET models, and to account for daily variations, EF is usually combined with the DNR. DNR exhibits diurnal and spatiotemporal variations due to landscape heterogeneity. In the modified Two-Source Energy Balance (TSEB) approach by Zhuang and Wu (2015), ecophysiological constraint functions of temperature and moisture of plants based on atmospheric moisture and vegetation indices were introduced, but the DNR was not spatially accounted for in the estimation of the daily ET. This research adopted a novel approach that accounts for spatiotemporal variations in estimated daily ET by incorporating the Bisht and Bras DNR model in the modified version of the TSEB model. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite imagery over the Tono irrigation watershed within the Upper East Region of Ghana and Southern Burkina Faso was used. We estimated the energy fluxes of latent and sensible heat, as well as the net radiation and soil heat fluxes, from the satellite images and compared our results with ground-based measurements from an eddy covariance (EC) station established by the West African Science Service Center on Climate Change and Adapted Land Use (WASCAL) within the watershed area. The model-estimated fluxes and ET were similar to the ground-based EC station measurements. Eight different land use/cover types were identified in the study area, and each of these contributed significantly to the overall ET variations between the two study periods: December 2009 and December 2017.
For instance, due to a higher leaf area index (LAI) for all vegetation types in December 2009 than in December 2017, the ET for December 2017 was higher than that for December 2009. We also noticed that only six of the eight land use/cover types occurred within the footprint area of the EC station. Generally, all the surface energy fluxes increased from December 2009 to December 2017. Mean ET varied from 3.576 to 4.486 mm/d for December 2009 and from 4.502 to 5.280 mm/d for December 2017 across the different land use/cover classes. Knowledge of the dynamics of evapotranspiration, together with the adoption of cost-effective methods to estimate its individual components effectively and efficiently, is critical to water resources management. Our findings provide a tool for all water stakeholders within watersheds to manage water resources in an engaging and cost-effective way. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
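The EF-to-daily-ET upscaling the abstract describes reduces to scaling the daily available energy by the evaporative fraction and converting energy to water depth via the latent heat of vaporization (1 kg of evaporated water per m² equals 1 mm). A sketch, assuming λ ≈ 2.45 MJ/kg and illustrative EF/DNR values rather than the paper's numbers:

```python
def daily_et_mm(ef, rn_daily_mj, g_daily_mj=0.0, lam_mj_per_kg=2.45):
    """Daily ET (mm/day) from the evaporative fraction and daily available
    energy (net radiation minus soil heat flux, MJ m-2 day-1)."""
    return ef * (rn_daily_mj - g_daily_mj) / lam_mj_per_kg

# EF = 0.7 and DNR = 15 MJ m-2 day-1 are assumed values for illustration
et = daily_et_mm(0.7, 15.0)
```

With these assumed inputs the result lands in the same few-mm/day range as the December means reported in the abstract.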
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Map of Africa showing neighboring countries and contour lines of flux footprint from the Kayoro eddy covariance (EC) station within the study location of the Tono Catchment in Ghana and Burkina Faso.</p>
Full article ">Figure 2
<p>Map of soil classification of the study area (Source: ISRIC World Soil Information, 2017 available at <a href="https://soilgrids.org/" target="_blank">https://soilgrids.org/</a>).</p>
Full article ">Figure 3
<p>Flow chart of the novel Two-Source Energy Balance (TSEB) approach to estimation of daily evapotranspiration from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) L1 T satellite imagery.</p>
Full article ">Figure 4
<p>Footprint climatology for the Kayoro flux tower, Tono catchment, Navrongo, Ghana, 17–26 December 2017.</p>
Full article ">Figure 5
<p>ASTER-derived surface radiometric temperatures (T<sub>rad</sub>) at the study field.</p>
Full article ">Figure 6
<p>Scatterplot of the ASTER-derived T<sub>rad</sub> versus actual temperatures from sonic anemometer at study field for December 2017.</p>
Full article ">Figure 7
<p>Variation in the meteorological variables of temperature, precipitation, and evapotranspiration for the year 2017, extracted from <a href="https://www.worldweatheronline.com/" target="_blank">https://www.worldweatheronline.com/</a>.</p>
Full article ">Figure 8
<p>Land use map of Tono reservoir watershed based on multispectral satellite remote sensing data from ASTER L1T with 15 m spatial resolution.</p>
Full article ">Figure 9
<p>Scatterplots of the novel TSEB-derived energy fluxes versus EC tower-based measurements.</p>
Full article ">Figure 10
<p>Net radiation (Rn) and soil heat flux (G) for December 2017 and 2009.</p>
Full article ">Figure 11
<p>Latent heat flux (LE) and sensible heat flux (H) for December 2017 and 2009.</p>
Full article ">Figure 12
<p>Comparison of the daily evapotranspiration (ET) derived from the novel TSEB with EC tower-based measurements.</p>
Full article ">Figure 13
<p>Maps of the daily evapotranspiration (ET) from ASTER images for December 2017 and December 2009 (WGS 84 UTM Zone 30N).</p>
Full article ">
24 pages, 17230 KiB  
Article
Geometric Accuracy Improvement Method for High-Resolution Optical Satellite Remote Sensing Imagery Combining Multi-Temporal SAR Imagery and GLAS Data
by Quansheng Zhu, Wanshou Jiang, Ying Zhu and Linze Li
Remote Sens. 2020, 12(3), 568; https://doi.org/10.3390/rs12030568 - 8 Feb 2020
Cited by 8 | Viewed by 4751
Abstract
With the widespread availability of satellite data, a single region can be described using multi-source and multi-temporal remote sensing data, such as high-resolution (HR) optical imagery, synthetic aperture radar (SAR) imagery, and space-borne laser altimetry data. These have become the main source of [...] Read more.
With the widespread availability of satellite data, a single region can be described using multi-source and multi-temporal remote sensing data, such as high-resolution (HR) optical imagery, synthetic aperture radar (SAR) imagery, and space-borne laser altimetry data. These have become the main sources of data for geopositioning. However, due to the limited direct geometric accuracy of HR optical imagery and the effect of the small intersection angle of HR optical imagery in stereo pair orientation, the geometric accuracy of HR optical imagery cannot meet the requirements for geopositioning without ground control points (GCPs), especially in uninhabited areas such as forests, plateaus, or deserts. Without satellite attitude error, SAR usually provides higher geometric accuracy than optical satellites. Space-borne laser altimetry technology can collect global laser footprints with high altitude accuracy. Therefore, this paper presents a geometric accuracy improvement method for HR optical satellite remote sensing imagery that combines multi-temporal SAR imagery and GLAS data without GCPs. Based on the imaging mechanisms, the differences in the weight matrix determination of the HR optical imagery and SAR imagery were analyzed. The laser altimetry data with high altitude accuracy were selected and applied as height control points in combined geopositioning. To validate the combined geopositioning approach, GaoFen2 (GF2) optical imagery, GaoFen6 (GF6) optical imagery, GaoFen3 (GF3) SAR imagery, and the Geoscience Laser Altimeter System (GLAS) footprint were tested. The experimental results show that the proposed model can be effectively applied to combined geopositioning to improve the geometric accuracy of HR optical imagery. Moreover, we found that the distribution and weight matrix determination of SAR images and the distribution of GLAS footprints are the crucial factors influencing geometric accuracy.
Combined geopositioning using multi-source remote sensing data can achieve a plane accuracy of 1.587 m and an altitude accuracy of 1.985 m, which is similar to the geometric accuracy of GF2 geopositioning with GCPs. Full article
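The weight matrix determination the abstract highlights amounts to giving each observation an influence proportional to its expected accuracy. Below is a minimal inverse-variance sketch of that idea for a single height unknown; the σ values for the SAR and GLAS observations are illustrative assumptions, not values from the paper:

```python
def weighted_estimate(observations):
    """Inverse-variance weighting: each (value, sigma) observation gets
    weight 1/sigma^2 -- the diagonal-weight-matrix idea used when mixing
    observations of different accuracy in a joint adjustment."""
    w_sum = sum(1.0 / s ** 2 for _, s in observations)
    return sum(v / s ** 2 for v, s in observations) / w_sum

# A terrain height seen by a SAR-derived point (sigma = 5 m, assumed) and
# a GLAS footprint (sigma = 0.2 m, assumed): the estimate is pulled
# strongly toward the far more accurate GLAS value.
height = weighted_estimate([(100.0, 5.0), (102.0, 0.2)])
```

Giving both observations the same weight instead would average them to 101.0, which is the degradation the abstract's "same weight" comparison probes.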
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The imaging mechanism of optical and SAR satellite: (<b>a</b>) the central projection of optical satellite. (<b>b</b>) The slant range projection of SAR satellite.</p>
Full article ">Figure 2
<p>The image point extraction for GLAS footprint: (<b>a</b>) digital orthophotograph model (DOM), (<b>b</b>) high-resolution (HR) optical image, and (<b>c</b>) synthetic aperture radar (SAR) image. The image points calculated by back-projection are indicated by a green cross. The final image points located in obvious texture area are indicated by a red cross. The circle footprint with a diameter of 70 m is marked by a green ellipse (the line resolution is 0.331 m and sample resolution is 0.562 m). The geometric accuracies of the used DOM, HR optical image, and SAR image were about 0.3, 30, and 5 m, respectively.</p>
Full article ">Figure 3
<p>An illustration of the coverage area of the experimental data.</p>
Full article ">Figure 4
<p>Experimental data for combined geopositioning: (<b>a</b>) GF2-1, (<b>b</b>) GF2-2, (<b>c</b>) GF2-3, (<b>d</b>) GF6-1, (<b>e</b>) GF6-2, (<b>f</b>) GF3-1, and (<b>g</b>) GF3-2.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>The reference evaluation data: (<b>a</b>) DOM and (<b>b</b>) digital elevation models (DEM).</p>
Full article ">Figure 6
<p>Geometric accuracy analysis of GF2 and GF3: (<b>a</b>) GF2 geopositioning, (<b>b</b>) GF3 geopositioning. (<b>c</b>) GF2 + GF3-1 combined geopositioning, (<b>d</b>) GF2 + GF3-1 + GF3-2 combined geopositioning, and (<b>e</b>) GF2 + GF3-1 + GF3-2 combined geopositioning with the same weight. Check points (CKPs) are represented by circles. The plane error and altitude error are represented by the vectors marked by the solid line and the dotted line, respectively.</p>
Full article ">Figure 7
<p>Geometric accuracy analysis of GF6 and GF3: (<b>a</b>) GF6 geopositioning, (<b>b</b>) GF6 + GF3-1 combined geopositioning, (<b>c</b>) GF6 + GF3-1 + GF3-2 combined geopositioning, and (<b>d</b>) GF6 + GF3-1 + GF3-2 combined geopositioning with the same weight.</p>
Full article ">Figure 8
<p>Geometric accuracy analysis of GF2 and GF3 combined geopositioning: (<b>a</b>) plane geometric accuracy analysis with different offset parameters, (<b>b</b>) altitude geometric accuracy analysis with different offset parameters, (<b>c</b>) plane geometric accuracy analysis with different scale and rotation parameters, and (<b>d</b>) altitude geometric accuracy analysis with different scale and rotation parameters.</p>
Full article ">Figure 9
<p>Evaluation of the GLAS footprint extraction with the proposed criterion. The GLAS footprints are marked with red points.</p>
Full article ">Figure 10
<p>Geometric accuracy analysis of GF2 and GLAS combined geopositioning: (<b>a</b>) with one GLAS footprint, (<b>b</b>) with four GLAS footprints, (<b>c</b>) with nine GLAS footprints and (<b>d</b>) with four height control points. The GLAS footprints and height control points are represented as triangles. The plane error and altitude error are represented by the vectors marked with the solid line and the dotted line, respectively.</p>
Full article ">Figure 11
<p>Geometric accuracy analysis of GF6 and GLAS combined geopositioning: (<b>a</b>) with one GLAS footprint, (<b>b</b>) with four GLAS footprints, (<b>c</b>) with nine GLAS footprints, and (<b>d</b>) with four height control points.</p>
Full article ">Figure 12
<p>Geometric accuracy improvement analysis: (<b>a</b>) GF2, GF3, and GLAS combined geopositioning and (<b>b</b>) GF2 with four GCPs.</p>
Full article ">Figure 13
<p>Geometric accuracy improvement analysis: (<b>a</b>) GF6, GF3, and GLAS combined geopositioning and (<b>b</b>) GF6 with four GCPs.</p>
Full article ">
16 pages, 3410 KiB  
Review
Determination of Phycocyanin from Space—A Bibliometric Analysis
by Igor Ogashawara
Remote Sens. 2020, 12(3), 567; https://doi.org/10.3390/rs12030567 - 8 Feb 2020
Cited by 27 | Viewed by 5710
Abstract
Over the past few decades, there has been an increase in the number of studies about the estimation of phycocyanin derived from remote sensing techniques. Since phycocyanin is a unique pigment of inland water cyanobacteria, the quantification of its concentration from earth observation [...] Read more.
Over the past few decades, there has been an increase in the number of studies on the estimation of phycocyanin derived from remote sensing techniques. Since phycocyanin is a unique pigment of inland water cyanobacteria, the quantification of its concentration from earth observation data is important for water quality monitoring, as some species can produce toxins. Because of the growth of this field in the past decade, several reviews and algorithm-comparison studies have been published. Thus, instead of focusing on algorithm comparison or description, the goal of the present study is to systematically analyze and visualize the evolution of publications. Using the Web of Science database, this study analyzed the existing publications on remote sensing of phycocyanin decade by decade for the period 1991–2020. The bibliometric analysis showed how research topics evolved from measuring pigments to the quantification of optical properties, and from laboratory experiments to measuring entire temperate and tropical aquatic systems. This study provides the status quo and development trend of the field and points out possible directions for future research. Full article
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)
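A co-citation network like the ones analyzed here is built by counting how often two references are cited together by the same paper; those counts become the edge weights of the network. A minimal sketch with hypothetical reference labels:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of references appears together in one
    paper's reference list -- the edge weights of a co-citation network."""
    pairs = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Three hypothetical papers and the references each one cites
papers = [
    ["RefA", "RefB", "RefC"],
    ["RefA", "RefB"],
    ["RefA", "RefC"],
]
edges = cocitation_counts(papers)
```

Clustering and labelling these edges (as the LLR-labelled clusters in the figures do) is then a graph-analysis step on top of this counting.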
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Remote sensing of phycocyanin publication numbers from the search in each database. (<b>A</b>) From Web of Science, (<b>B</b>) From Science Direct.</p>
Full article ">Figure 2
<p>The co-citation network of remote sensing of PC and the most cited publications.</p>
Full article ">Figure 3
<p>The co-citation network of remote sensing of PC publications clusters (1991–2020) and clusters’ labels extracted by the LLR method.</p>
Full article ">Figure 4
<p>The co-citation network of remote sensing of PC publications clusters (1991–2000) and clusters’ labels extracted by the LLR method.</p>
Full article ">Figure 5
<p>The co-citation network of remote sensing of PC publications clusters (2001–2010) and clusters’ labels extracted by the LLR method.</p>
Full article ">Figure 6
<p>The co-citation network of remote sensing of PC publications clusters (2011–2020) and clusters’ labels extracted by the LLR method.</p>
18 pages, 2797 KiB  
Article
Determination of Leaf Nitrogen Concentrations Using Electrical Impedance Spectroscopy in Multiple Crops
by Rinku Basak, Khan Wahid and Anh Dinh
Remote Sens. 2020, 12(3), 566; https://doi.org/10.3390/rs12030566 - 8 Feb 2020
Cited by 16 | Viewed by 4356
Abstract
In this work, crop leaf nitrogen concentration (LNC) is predicted by leaf impedance measurements made by electrical impedance spectroscopy (EIS). This method uses portable equipment and is noninvasive, as are other available nondestructive methods, such as hyperspectral imaging, near-infrared spectroscopy, and soil-plant analyses development (SPAD). An EVAL-AD5933EBZ evaluation board is used to measure the impedances of four different crop leaves, i.e., canola, wheat, soybeans, and corn, in the frequency range of 5 to 15 kHz. Multiple linear regression using the least squares method is employed to obtain a correlation between leaf nitrogen concentrations and leaf impedances. A strong correlation is found between nitrogen concentrations and measured impedances for multiple features using EIS. The results are obtained by PrimaXL Data Analysis ToolPak and validated by analysis of variance (ANOVA) tests. Optimized regression models are determined by selecting features using the backward elimination method. After a comparative analysis among the four different crops, the best multiple regression results are found for canola, with an overall correlation coefficient (R) of 0.99, a coefficient of determination (R2) of 0.98, and a root mean square error (RMSE) of 0.54% in the frequency range of 8.7–12 kHz. The performance of EIS is also compared with available SPAD readings, which are moderately correlated with LNC. A high correlation coefficient of 0.94, a coefficient of determination of 0.89, and an RMSE of 1.12% are obtained using EIS, whereas a maximum correlation coefficient of 0.72, a coefficient of determination of 0.53, and an RMSE of 1.52% are obtained using SPAD for the same number of combined observations. The proposed multiple linear regression models based on EIS measurements sensitive to LNC can be used on a very local scale to develop a simple, rapid, inexpensive, and effective instrument for determining the leaf nitrogen concentrations in crops. Full article
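A minimal sketch of the multiple-linear-regression step described above, using synthetic impedance data rather than the paper's measurements (feature values, coefficients, and noise level are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: leaf impedance (ohms) at three frequencies for 20 leaves,
# and the lab-measured leaf nitrogen concentration (%) to be predicted.
Z = rng.uniform(1e3, 5e3, size=(20, 3))
lnc = Z @ np.array([1.2e-3, -0.4e-3, 0.8e-3]) + 2.0 + rng.normal(0, 0.05, 20)

# Multiple linear regression by ordinary least squares (intercept + 3 slopes).
X = np.column_stack([np.ones(len(Z)), Z])
beta, *_ = np.linalg.lstsq(X, lnc, rcond=None)
pred = X @ beta

rmse = np.sqrt(np.mean((lnc - pred) ** 2))  # root mean square error
r = np.corrcoef(lnc, pred)[0, 1]            # overall correlation coefficient
```

Backward elimination, as used in the paper, would repeat this fit while iteratively dropping the least significant frequency feature.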
Show Figures

Graphical abstract
Figure 1
<p>(<b>a</b>) Schematic diagram of EVAL-AD5933EBZ evaluation board [<a href="#B28-remotesensing-12-00566" class="html-bibr">28</a>]; (<b>b</b>) plants in the Agriculture and Agri-Food Canada (AAFC) greenhouse; (<b>c</b>) impedance measurement of plant leaves using electrical impedance spectroscopy (EIS); and (<b>d</b>) boxplots of actual nitrogen concentrations, measured by the laboratory experiments, for four different plant species.</p>
Figure 2
<p>Plots of frequency versus leaf impedance. (<b>a</b>) canola; (<b>b</b>) wheat; (<b>c</b>) soybeans; and (<b>d</b>) corn at different nitrogen fertilization levels. The impedance profile for a few samples of wheat and soybeans could not be taken because of the effects of high nitrogen fertilization.</p>
Figure 3
<p>Correlations between leaf impedance and leaf nitrogen concentration (LNC) for four different plant species. The coefficient of determination (<span class="html-italic">R</span><sup>2</sup>) is extracted for canola (<b>a</b>) linear 0.03; (<b>b</b>) polynomial 0.03; for wheat (<b>c</b>) linear 0.13; (<b>d</b>) polynomial 0.15; for soybeans (<b>e</b>) linear 0.08; (<b>f</b>) polynomial 0.11; and for corn (<b>g</b>) linear 0.18; (<b>h</b>) polynomial 0.19.</p>
Figure 4
<p>Residuals in different number of observations for (<b>a</b>) canola; (<b>b</b>) wheat; (<b>c</b>) soybeans; and (<b>d</b>) corn with nitrogen concentrations.</p>
Figure 5
<p>Multiple regression analysis for (<b>a</b>) canola; (<b>b</b>) wheat; (<b>c</b>) soybeans; and (<b>d</b>) corn. The extracted coefficient of determination (<span class="html-italic">R</span><sup>2</sup>) is for canola 0.98, for wheat 0.95, for soybeans 0.75, and for corn 0.68.</p>
Figure 6
<p>Plots of (<b>a</b>) number of observations versus value of residuals; and (<b>b</b>) actual versus predicted LNC for canola + wheat + soybeans + corn using EIS. The extracted coefficient of determination (<span class="html-italic">R</span><sup>2</sup>) is 0.89 and the corresponding overall correlation coefficient (<span class="html-italic">R</span>) is 0.94.</p>
Figure 7
<p>Plots of (<b>a</b>) soil-plant analyses development (SPAD) reading versus LNC; (<b>b</b>) number of observations versus value of residuals; and (<b>c</b>) actual LNC versus predicted LNC for canola+wheat+soybeans+corn using SPAD. The extracted coefficient of determination (<span class="html-italic">R</span><sup>2</sup>) is 0.53 and the corresponding overall correlation coefficient (<span class="html-italic">R</span>) is 0.72.</p>
21 pages, 1583 KiB  
Article
Broad-Scale Weather Patterns Encountered during Flight Influence Landbird Stopover Distributions
by Hannah L. Clipp, Emily B. Cohen, Jaclyn A. Smolinsky, Kyle G. Horton, Andrew Farnsworth and Jeffrey J. Buler
Remote Sens. 2020, 12(3), 565; https://doi.org/10.3390/rs12030565 - 8 Feb 2020
Cited by 18 | Viewed by 6954
Abstract
The dynamic weather conditions that migrating birds experience during flight likely influence where they stop to rest and refuel, particularly after navigating inhospitable terrain or large water bodies, but the effects of weather on stopover patterns remain poorly studied. We examined the influence of broad-scale weather conditions encountered by nocturnally migrating Nearctic-Neotropical birds during northward flight over the Gulf of Mexico (GOM) on subsequent coastal stopover distributions. We categorized nightly weather patterns using historic maps and quantified region-wide densities of birds in stopover habitat with data collected by 10 weather surveillance radars from 2008 to 2015. We found that spring weather patterns over the GOM were most often favorable for migrating birds, with winds assisting northward flight, and we document regional stopover patterns in response to specific unfavorable weather conditions. For example, the Midwest Continental High is characterized by strong northerly winds over the western GOM, resulting in high-density concentrations of migrants along the immediate coastlines of Texas and Louisiana. We show, for the first time, that broad-scale weather experienced during flight influences when and where birds stop to rest and refuel. Linking synoptic weather patterns encountered during flight with stopover distributions contributes to the emerging macro-ecological understanding of bird migration, which is critical to consider in systems undergoing rapid human-induced changes. Full article
(This article belongs to the Special Issue Radar Aeroecology)
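The abstract's core operation, compositing radar-derived stopover densities by synoptic weather type, can be sketched as a simple group-and-average; the weather-type names are from the paper, but the density values are invented:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical nightly records: (synoptic weather type, radar-derived stopover density).
nights = [
    ("Midwest Continental High", 820), ("Midwest Continental High", 910),
    ("Bermuda High", 310), ("Bermuda High", 290),
    ("Western Gulf Front", 640),
]

# Group densities by weather type, then average within each group.
by_type = defaultdict(list)
for weather, density in nights:
    by_type[weather].append(density)

composite = {w: mean(d) for w, d in by_type.items()}
```

The paper's boosted regression tree model goes further, letting weather type interact with longitude and distance from the coast, but this grouping is the underlying composite.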
Show Figures

Graphical abstract
Figure 1
<p>The locations and coverage of the 10 NEXRAD sites (the circles represent a 100-km radius sampling area) within the northern Gulf of Mexico coastal region, which encompasses six states: Texas (TX), Louisiana (LA), Mississippi (MS), Alabama (AL), Georgia (GA), and Florida (FL).</p>
Figure 2
<p>Generalized diagrams of the eight defined synoptic weather types considered in this study, with labeled pressure systems (“L” = low pressure, “H” = high pressure), pressure isobars (black lines), frontal system boundaries (blue lines with triangles denoting direction of movement), and general wind direction over the coast and Gulf of Mexico (indicated by the arrows). The first five synoptic weather types were considered unfavorable (Western Gulf Front, Central Gulf Front, Eastern Gulf Front, East Coast Low, and Midwest Continental High) and the last three favorable (Eastern Continental High, Bermuda High, and Gulf High).</p>
Figure 3
<p>Canonical correspondence analysis plot showing the mean canonical variate values of the nine daily synoptic weather types and the canonical vectors of four continuous weather variables measured throughout the night over the Gulf of Mexico including zonal (blowing east or west) and meridional (blowing north or south) wind speeds (m s<sup>−1</sup>) at 925 mb, surface air pressure (kPa), and accumulated total precipitation (kg m<sup>−2</sup>) from North American Regional Reanalysis points over the Gulf of Mexico relative to the first two canonical axes (CCA1 &amp; CCA2). The zonal wind component estimates wind speed in the east–west direction (positive if blowing towards the east and negative if blowing towards the west), and the meridional wind component estimates wind speed in the north–south direction (positive if blowing towards the north and negative if blowing towards the south). All measurements were taken at 6:00 UTC the night prior to migrants departing stopover sites. Ellipses denote the 95% confidence intervals of means for the synoptic weather types, which include five unfavorable for northward migration (e.g., winds blowing south and/or moderate to high amounts of precipitation) in red: Western Gulf Front (GFW), Central Gulf Front (GFC), Eastern Gulf Front (GFE), East Coast Low (ELOW), and Midwest Continental High (MCH); and three favorable for northwards migration (e.g., winds blowing north and little to no precipitation) in blue: Eastern Continental High (ECH), Bermuda High (BH), and Gulf High (GH). The last synoptic weather type of “Other” consisted of a subset of instances that did not fit into one of the eight prior categories.</p>
Figure 4
<p>Relative influence of (<b>a</b>) en route synoptic weather (i.e., encountered during migration over the Gulf of Mexico [GOM]) summed across weather types compared to other ecological predictor variables (i.e., excluding distance from the radar and relative elevation) and (<b>b</b>) individual en route synoptic weather types on bird stopover density from the boosted regression tree model (percent deviance explained = 63.2%, CV correlation = 0.655). Other predictor variables pertained to geography (longitude, distance from the GOM coast) and landscape (proportion of hardwood forest within 5 km). CV correlation is the mean correlation of predictions using cross-validated (i.e., out-of-bag) data.</p>
Figure 5
<p>Plots of interactions among longitude (<b>a</b>,<b>c</b>,<b>e</b>) and distance from the Gulf of Mexico (GOM) coast (<b>b</b>,<b>d</b>,<b>f</b>) and the three most influential en route synoptic weather types (<b>a</b>,<b>b</b>: Midwest Continental High; <b>c</b>,<b>d</b>: Western Gulf Front; <b>e</b>,<b>f</b>: East Coast Low) from a boosted regression tree model predicting mean bird stopover density within the northern GOM coastal region. The solid line represents the combined response of all the other synoptic weather types. The shaded bars underneath the longitude interactions indicate the state, with Texas on the far left, followed by Louisiana, Mississippi/Alabama, and Florida.</p>
15 pages, 4616 KiB  
Article
Ultrahigh Resolution Scatterometer Winds near Hawaii
by Nolan Hutchings, Thomas Kilpatrick and David G. Long
Remote Sens. 2020, 12(3), 564; https://doi.org/10.3390/rs12030564 - 8 Feb 2020
Cited by 3 | Viewed by 3261
Abstract
Hawaii regional climate model (HRCM), QuikSCAT, and ASCAT wind estimates are compared in the lee of Hawaii’s Big Island with the goal of understanding ultrahigh resolution (UHR) scatterometer wind retrieval capabilities in this area, which includes a reverse flow toward the island in the lee of the predominant flow. A comparison of scatterometer-measured σ0 and model-predicted σ0 suggests that scatterometers can detect the reverse flow in the lee of the island; however, neither QuikSCAT- nor ASCAT-estimated winds consistently report this flow. Furthermore, the scatterometer UHR winds do not resolve the wind direction features predicted by the HRCM. Differences between scatterometer-measured σ0 and HRCM-predicted σ0 indicate possible error in the placement of key reverse-flow features predicted by the HRCM. We find that the coarse initialization fields and large median filter windows used in ambiguity selection can impede the accuracy of UHR wind direction retrieval in this area, suggesting the need for further development of improved near-coastal ambiguity selection algorithms. Full article
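The ambiguity-selection issue above hinges on spatial filtering of a wind-direction field. The sketch below is not the QuikSCAT/ASCAT processing chain; it is a simplified smoother that averages unit vectors in a window (avoiding the 359°→1° wrap-around that breaks a plain scalar median on angles) and illustrates how a filter window can erase a small reverse-flow feature:

```python
import numpy as np

def smooth_directions(deg, k=5):
    """Smooth a wind-direction field (degrees) by averaging unit vectors in a
    k-by-k window; vector averaging sidesteps the 359 -> 1 degree wrap-around."""
    rad = np.deg2rad(deg)
    pad = k // 2
    u = np.pad(np.cos(rad), pad, mode="edge")
    v = np.pad(np.sin(rad), pad, mode="edge")
    out = np.empty(deg.shape, dtype=float)
    for i in range(deg.shape[0]):
        for j in range(deg.shape[1]):
            out[i, j] = np.rad2deg(
                np.arctan2(v[i:i + k, j:j + k].mean(), u[i:i + k, j:j + k].mean())
            ) % 360
    return out

# A uniform 45-degree trade-wind flow with one pixel of opposing "reverse" flow:
field = np.full((10, 10), 45.0)
field[4, 4] = 225.0
smoothed = smooth_directions(field)   # the lone reverse-flow pixel is wiped out
```

Scaled up, the same effect explains the paper's finding: a 17 × 17 wind-vector-cell median window removes direction features smaller than the window, including a narrow near-coastal reverse flow.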
Show Figures

Graphical abstract
Figure 1
<p>An example HRCM 3 km hourly wind vector field from 3:00 a.m. 26 June 2003. Wind speed is shown in color and wind direction quivers are downsampled and unit length. The land mask is shown in white. The reverse flow can be seen on the west side of the Big Island.</p>
Figure 2
<p>Density plots of QuikSCAT UHR wind speeds (<b>a</b>) and directions (<b>b</b>) plotted versus HRCM winds. A <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mi>x</mi> </mrow> </semantics></math> line is included in each plot for reference.</p>
Figure 3
<p>ASCAT UHR wind speeds (<b>a</b>) and directions (<b>b</b>) collocated with HRCM winds. A <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mi>x</mi> </mrow> </semantics></math> line is included in each plot for reference.</p>
Figure 4
<p>Panels (<b>a–d</b>) show the average difference in linear values between measured QuikSCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and HRCM predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> for each flavor of <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math>; (<b>e–h</b>) show the standard deviation of the difference between measured QuikSCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and HRCM predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> normalized by QuikSCAT average wind speeds in linear values. The first row shows values for VF, the second is VA, the third HF, and the fourth row is HA. Land is shown in gray and the land contamination buffer in white.</p>
Figure 5
<p>ASCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> from multiple revs from a point (see Figure 8a) in the north high wind speed region and a point in the south high wind speed region are plotted for fore (<b>a</b>), mid (<b>b</b>), and aft (<b>c</b>) looks. Corresponding NWP predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> are shown in (<b>d–f</b>) for the same beams. <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> curves at 50<math display="inline"><semantics> <msup> <mrow/> <mo>∘</mo> </msup> </semantics></math> incidence angle for CMOD5 for 5, 10, and 15 m/s are plotted in each panel for reference. Error bars showing the average difference between ASCAT measured <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and NWP predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and the standard deviation of the differences are plotted in black.</p>
Figure 6
<p>Panels (<b>a</b>–<b>d</b>) show the average difference in linear values between measured QuikSCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and NWP predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> for different flavors of <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math>; (<b>e</b>–<b>h</b>) show the normalized standard deviation in linear values between measured QuikSCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and NWP predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math>. The first row is VF, second is VA, third HF, and the fourth row HA. The land is shown in gray and the land contamination buffer is shown in white.</p>
Figure 7
<p>The average difference between linear values of measured ASCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and HRCM predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> shown for fore (<b>a</b>), mid (<b>b</b>), and aft (<b>c</b>) beams. Corresponding normalized standard deviation of the difference values are shown to the right in panels (<b>d</b>–<b>f</b>). The land is shown in gray and the land contamination buffer is shown in white.</p>
Figure 8
<p>The average difference between linear values of measured ASCAT <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> and NWP predicted <math display="inline"><semantics> <msup> <mi>σ</mi> <mn>0</mn> </msup> </semantics></math> shown for fore (<b>a</b>), mid (<b>b</b>), and aft (<b>c</b>) beams. Corresponding normalized standard deviation of the difference values are shown to the right in panels (<b>d–f</b>). The land is shown in gray and the land contamination buffer is shown in white. The Xs in (<b>a</b>) indicate where the wind speeds are taken from for the plots in <a href="#remotesensing-12-00564-f005" class="html-fig">Figure 5</a>.</p>
Figure 9
<p>Simulated HRCM wind field nudged with an L2B field (<b>a</b>), simulated field nudged with L2B and median filtered (<b>b</b>), simulated field nudged with the true field (<b>c</b>), simulated field nudged with true field and median filtered (<b>d</b>). The median filter window size is 42.5 km (17 x 17 UHR WVC). Downsampled wind direction quivers are shown in all panels.</p>
Figure 10
<p>QuikSCAT UHR-derived wind direction fields for different median filter window sizes. (<b>a</b>) shows a swath oriented wind direction field of a simulated HRCM wind field nudged with the true wind field. (<b>b</b>) shows (<b>a</b>) processed with a 42.5 km (17 x 17 UHR WVC) median filter window. Note how the dark features that differ from the mean flow (circled within the red box) in (<b>a</b>) disappear in (<b>b</b>) after filtering. The land mask is shown in white and the colorbar denotes the wind direction in degrees.</p>
20 pages, 13294 KiB  
Article
High-Frequency Variations in Pearl River Plume Observed by Soil Moisture Active Passive Sea Surface Salinity
by Xiaomei Liao, Yan Du, Tianyu Wang, Shuibo Hu, Haigang Zhan, Huizeng Liu and Guofeng Wu
Remote Sens. 2020, 12(3), 563; https://doi.org/10.3390/rs12030563 - 8 Feb 2020
Cited by 8 | Viewed by 3778
Abstract
River plumes play an important role in the cross-margin transport of phytoplankton and nutrients, which have profound impacts on coastal ecosystems. Using recently available Soil Moisture Active Passive (SMAP) sea surface salinity (SSS) data and high-resolution ocean color products, this study investigated summertime high-frequency variations in the Pearl River plume of China and its biological response. The SMAP SSS captures well the intraseasonal oscillations in the offshore transport of the Pearl River plume, which have distinct 30–60 day variations from mid-May to late September. The offshore transport of freshwater varies concurrently with southwesterly wind anomalies and is roughly in phase with the Madden–Julian Oscillation (MJO) index in phases 1–5, thus implying that the MJO exerts a significant influence. During MJO phases 1–2, the southwest wind anomalies in the northeastern South China Sea (SCS) enhanced cross-shore Ekman transport, while the northeast wind anomalies during MJO phases 3–5 favored the subsequent southwestward transport of the plume. The high chlorophyll-a concentration coincided well with the low-salinity water variations, emphasizing the important role of the offshore transport of the Pearl River plume in sustaining biological production over the oligotrophic northern SCS. The strong offshore transport of the plume in June 2015 clearly revealed that the proximity of a cyclonic eddy plays a role in the plume’s dispersal pathway. In addition, heavy rainfall related to the landfall of tropical cyclones in the Pearl River Estuary region contributed to the episodic offshore transport of the plume. Full article
(This article belongs to the Special Issue Remote Sensing of Ocean-Atmosphere Interactions)
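The 30–90 day bandpass filtering used to isolate intraseasonal SSS anomalies (see the figure captions) can be sketched with a Butterworth filter; the daily time series below is synthetic, not SMAP data:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_30_90(x, fs=1.0):
    """Butterworth bandpass keeping 30-90 day periods (fs in samples per day)."""
    sos = butter(4, [1 / 90, 1 / 30], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)  # zero-phase forward-backward filtering

# Two years of synthetic daily SSS: a 45-day intraseasonal signal to keep,
# plus a seasonal cycle and short-period noise that the filter should remove.
t = np.arange(720)
intraseasonal = 0.5 * np.sin(2 * np.pi * t / 45)
sss = intraseasonal + np.sin(2 * np.pi * t / 365) + 0.1 * np.sin(2 * np.pi * t / 5)

filtered = bandpass_30_90(sss)
```

Second-order-sections form and zero-phase filtering are standard choices here: the former for numerical stability at low cutoff frequencies, the latter so the filtered anomalies stay time-aligned with the MJO index.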
Show Figures

Figure 1
<p>(<b>a</b>) The bathymetry (contours; m) of the northern South China Sea. Climatological means for (<b>b</b>) Moderate Resolution Imaging Spectroradiometer (MODIS)-Aqua chlorophyll-<span class="html-italic">a</span> (Chl<span class="html-italic">a</span>) (shadings; mg/m<sup>3</sup>), (<b>c</b>) sea surface temperature (SST) (shadings; °C) and surface wind (vectors; m/s), (<b>d</b>) sea surface salinity (SSS; shadings; psu) and precipitation minus evaporation (P-E; contours; mm/day), and (<b>e</b>) surface nitrate concentration (shadings; mmol/l) and nitracline depth (contours; m) in June–August.</p>
Figure 2
<p>Longitude–time sections of (<b>a</b>–<b>d</b>) SSS (shadings; psu), and (<b>e</b>–<b>h</b>) 8-day merged Chl<span class="html-italic">a</span> (shadings; mg/m<sup>3</sup>) averaged meridionally over 21–23°N during May 10 to September 20 from 2015 to 2018.</p>
Figure 3
<p>Snapshots of MODIS Chl<span class="html-italic">a</span> (shadings; mg/m<sup>3</sup>) and SSS (contours; psu) from (<b>a</b>,<b>b</b>) June 16–17 and August 18–19, 2015, (<b>c</b>,<b>d</b>) June 19–20 and July 23–24, 2016, (<b>e</b>,<b>f</b>) June 28–29 and August 14–15, 2017, and (<b>g</b>) July 9–10, 2018.</p>
Figure 4
<p>Longitude–time sections of (<b>a</b>–<b>d</b>) SSS (shadings; psu) and (<b>e</b>–<b>h</b>) 30–90 day bandpass-filtered SSS anomalies (shadings; psu) averaged meridionally over 21–23°N during May 10 to September 20, from 2015 to 2018. The 30–90 day bandpass-filtered (<b>i</b>–<b>l</b>) alongshore wind anomalies (curves; m/s; positive values indicating southwest wind anomalies) averaged between 113 and 117°N, and (<b>m</b>–<b>p</b>) precipitation anomalies (shadings; mm) averaged over the region of 110–118°E, 21–26°S, and the Madden–Julian Oscillation (MJO) index (curves) for the corresponding periods. The red curves in (<b>m</b>–<b>p</b>) highlight MJO in phases 1–5 with amplitudes greater than 1.</p>
Figure 5
<p>(<b>a</b>–<b>h</b>) Boreal summer (June–August) MJO composite of 30–90 day bandpass-filtered sea surface wind (vectors; m/s) and outgoing longwave radiation (OLR; shading; W/m<sup>2</sup>) anomalies based on the real-time multivariate MJO (RMM) index.</p>
Figure 6
<p>(<b>a</b>–<b>h</b>) As per <a href="#remotesensing-12-00563-f004" class="html-fig">Figure 4</a> but for the SSS anomalies (psu; shading) in the northern South China Sea.</p>
Figure 7
<p>Composite of 30–90 day bandpass-filtered SSS anomalies (shading; psu) and surface wind anomalies (vectors; m/s) on (<b>a</b>) June 1–10 (MJO phases 1–2), (<b>b</b>) June 11–21 (MJO phases 3–4), and (<b>c</b>) June 22–26 (MJO phase 5), 2015.</p>
Figure 8
<p>Snapshots of MODIS Chl<span class="html-italic">a</span> (shadings; mg/m<sup>3</sup>) on (<b>a</b>) June 8–9, (<b>b</b>) June 16–17, (<b>c</b>) July 1–3, and (<b>d</b>) July 24, 2015. (<b>e</b>–<b>h</b>) and (<b>i</b>–<b>l</b>) are the same as (<b>a</b>–<b>d</b>) but for SSS (shading; psu), SST (shading; °C), and surface wind (vectors; m/s).</p>
Figure 9
<p>As per <a href="#remotesensing-12-00563-f007" class="html-fig">Figure 7</a> but for (<b>a</b>–<b>d</b>) Chl<span class="html-italic">a</span> (shadings; mg/m<sup>3</sup>) and finite-size Lyapunov exponents (FSLEs; contours), and (<b>e</b>–<b>h</b>) sea surface height (SSH; shadings; cm) and geostrophic currents (vectors; m/s). The purple crosses in (<b>e</b>–<b>h</b>) denote the eddy center.</p>
Figure 10
<p>(<b>a</b>) The trajectory of the selected cyclonic eddy and time series of (<b>b</b>) the averaged Chl<span class="html-italic">a</span> (bars; mg/m<sup>3</sup>) for twice the eddy radius and eddy kinetic energy (EKE; black curve; cm<sup>2</sup>/s<sup>2</sup>) within the eddy, (<b>c</b>) averaged SSS (dashed curve; psu) for twice the eddy radius and eddy rotational speeds (dotted curve; cm/s), and (<b>d</b>) the eddy radius (black curve; km) and amplitude (blue dashed curve; cm).</p>
Figure 11
<p>Time series of precipitation (curve; mm) averaged over the region of 110–118°E, 20–25°S (black box in (<b>c</b>)), and the spatial distribution of typhoon-induced precipitation on (<b>a</b>,<b>b</b>) August 22–23, (<b>d</b>,<b>e</b>) August 27–28, and (<b>f</b>,<b>g</b>) September 3–4, 2018.</p>
Figure 12
<p>Snapshots of Visible Infrared Imaging Radiometer Suite (VIIRS) Chl<span class="html-italic">a</span> (shadings; mg/m<sup>3</sup>) and geostrophic currents (vectors; m/s) on (<b>a</b>) August 18–19, (<b>b</b>) August 26–27, (<b>c</b>) August 29–30, and (<b>d</b>) July 8–9, 2018. (<b>e</b>–<b>h</b>) and (<b>i</b>–<b>l</b>) are the same as (<b>a</b>–<b>d</b>) but for SSS (shading; psu) and SSH (contours; cm), SST (shading; °C) and surface wind (vectors; m/s).</p>
18 pages, 3494 KiB  
Article
Predicting Microhabitat Suitability for an Endangered Small Mammal Using Sentinel-2 Data
by Francesco Valerio, Eduardo Ferreira, Sérgio Godinho, Ricardo Pita, António Mira, Nelson Fernandes and Sara M. Santos
Remote Sens. 2020, 12(3), 562; https://doi.org/10.3390/rs12030562 - 8 Feb 2020
Cited by 30 | Viewed by 8535
Abstract
Accurate mapping is a main challenge for endangered small-sized terrestrial species. Freely available spatio-temporal data at high resolution from multispectral satellites offer excellent opportunities for improving predictive distribution models of such species based on fine-scale habitat features, thus making it easier to achieve comprehensive biodiversity conservation goals. However, there are still few examples showing the utility of remote-sensing-based products in mapping microhabitat suitability for small species of conservation concern. Here, we address this issue using Sentinel-2 sensor-derived habitat variables, used in combination with more commonly used explanatory variables (e.g., topography), to predict the distribution of the endangered Cabrera vole (Microtus cabrerae) in agrosilvopastoral systems. Based on vole surveys conducted in two different seasons over a ~176,000 ha landscape in Southern Portugal, we assessed the significance of each predictor in explaining Cabrera vole occurrence using the Boruta algorithm, a random forest variant for dealing with the high dimensionality of explanatory variables. Overall, results showed a strong contribution of Sentinel-2-derived variables for predicting microhabitat suitability of Cabrera voles. In particular, we found that photosynthetic activity (NDI45), specific spectral signal (SWIR1), and landscape heterogeneity (Rao’s Q) were good proxies of Cabrera voles’ microhabitat, mostly during temporally greener and wetter conditions. In addition to remote-sensing-based variables, the presence of road verges was also an important driver of voles’ distribution, highlighting their potential role as refuges and/or corridors. Overall, our study supports the use of remote-sensing data to predict microhabitat suitability for endangered small-sized species in marginal areas that potentially hold most of the biodiversity found in human-dominated landscapes.
We believe our approach can be widely applied to other species, for which detailed habitat mapping over large spatial extents is difficult to obtain using traditional descriptors. This would certainly contribute to improving conservation planning, thereby contributing to global conservation efforts in landscapes that are managed for multiple purposes. Full article
(This article belongs to the Special Issue Remote Sensing for Biodiversity Mapping and Monitoring)
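The Boruta idea mentioned above, comparing each feature's random-forest importance against permuted "shadow" copies of the features, can be sketched as follows (all data are simulated; only the NDI45 name is borrowed from the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Simulated predictors for 300 survey points: one informative Sentinel-2 index
# (named NDI45 after the abstract) and two uninformative noise columns.
n = 300
ndi45 = rng.normal(0.4, 0.1, n)
X = np.column_stack([ndi45, rng.normal(size=n), rng.normal(size=n)])
presence = (ndi45 + rng.normal(0, 0.05, n) > 0.4).astype(int)

# Boruta's core trick: append independently shuffled "shadow" copies of every
# feature, then keep only real features whose importance beats the best shadow.
shadows = rng.permuted(X, axis=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    np.hstack([X, shadows]), presence
)
imp = rf.feature_importances_
keep = [i for i in range(X.shape[1]) if imp[i] > imp[X.shape[1]:].max()]
```

The full Boruta algorithm repeats this shadow comparison over many random-forest runs with a statistical test, but a single pass already shows the informative index clearing the shadow baseline while pure noise does not.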
Show Figures

Graphical abstract
Figure 1
<p>Location of the study area: (<b>a</b>) Iberian Peninsula and actual Cabrera vole distribution range are represented jointly with the study area, located within the Alentejo region (Southern Portugal); and (<b>b</b>) Cabrera vole sampling points layered with the T29SNC, T29SND, T29SPC, and T29SPD Sentinel-2A RGB composite imageries delimited by the study area.</p>
Figure 2
<p>The relative contribution of retained variables (%) in the final habitat suitability model, layered with respective groups (grey dot: Distance to landscape element; green dots: Spectral indices; cyan dots: Spectral bands; orange dots: Textural and diversity indices) and overlapped with a dashed line representing mean importance value.</p>
Figure 3
<p>Interactive effects (partial dependence curves) of most important variables: (<b>a</b>) “Distance to paved roads”, (<b>b</b>) “NDI45 (Spring)”, (<b>c</b>) “SWIR1 (Autumn)”, and (<b>d</b>) “RAO’s Q (Spring)”, on probability of Cabrera vole occurrence. The average 10-fold cross-validation results are depicted by the blue lines. The grey area limits ± standard error.</p>
Figure 4">
Figure 4
<p>High-resolution Cabrera vole habitat suitability map in Southern part of Portugal, layered with paved roads and presences (blue dots). Zoomed areas are depicted as examples of identified sites of conservation interest namely (<b>a</b>) road verges, (<b>b</b>) pond banks, and (<b>c</b>) field margins. Purple areas: Low suitability; Green areas: high suitability).</p>
">
19 pages, 16535 KiB  
Article
A Semiautomatic Pixel-Object Method for Detecting Landslides Using Multitemporal ALOS-2 Intensity Images
by Bruno Adriano, Naoto Yokoya, Hiroyuki Miura, Masashi Matsuoka and Shunichi Koshimura
Remote Sens. 2020, 12(3), 561; https://doi.org/10.3390/rs12030561 - 8 Feb 2020
Cited by 26 | Viewed by 6722
Abstract
The rapid and accurate mapping of large-scale landslides and other mass movement disasters is crucial for prompt disaster response efforts and immediate recovery planning. As such, remote sensing information, especially from synthetic aperture radar (SAR) sensors, has significant advantages over cloud-covered optical imagery and conventional field survey campaigns. In this work, we introduced an integrated pixel-object image analysis framework for landslide recognition using SAR data. The robustness of our proposed methodology was demonstrated by mapping two landslide events with different triggers, namely, the debris flows following the torrential rainfall that fell over Hiroshima, Japan, in early July 2018 and the coseismic landslides that followed the 2018 Mw 6.7 Hokkaido earthquake. For both events, only a pair of SAR images acquired before and after each disaster by the Advanced Land Observing Satellite-2 (ALOS-2) was used. Additional information, such as a digital elevation model (DEM) and land cover data, was employed only to constrain the detected damage to the affected areas. We verified the accuracy of our method by comparing it with the available reference data. The detection results showed an acceptable correlation with the reference data in terms of the locations of damage. Numerical evaluations indicated that our methodology could detect landslides with an accuracy exceeding 80%. In addition, the kappa coefficients for the Hiroshima and Hokkaido events were 0.30 and 0.47, respectively. Full article
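The pixel-level stage described above — differencing pre- and post-event SAR intensity and flagging increasing/decreasing pixels with an adaptive (locally computed) threshold — can be sketched on toy data. The window size and the 1.5-sigma rule below are illustrative choices, not the paper's parameters:

```python
# Toy sketch of pixel-level SAR change detection: difference two
# speckle-like intensity images in dB and flag increasing/decreasing
# pixels against a local mean +/- k * local std (adaptive threshold).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
pre = rng.gamma(shape=4.0, scale=0.05, size=(64, 64))  # speckle-like intensity
post = pre.copy()
post[20:30, 20:30] *= 6.0    # patch of increased backscatter (e.g., bare soil)
post[40:50, 40:50] *= 0.15   # patch of decreased backscatter (e.g., shadowing)

diff_db = 10 * np.log10(post) - 10 * np.log10(pre)     # log-ratio change image
local_mean = uniform_filter(diff_db, size=15)
local_var = uniform_filter(diff_db**2, size=15) - local_mean**2
local_std = np.sqrt(np.maximum(local_var, 0.0))        # clip tiny negatives
increase = diff_db > local_mean + 1.5 * local_std
decrease = diff_db < local_mean - 1.5 * local_std
print(increase.sum(), decrease.sum())
```

In the paper's framework, these pixel-level masks are then refined by object-based operations and constrained by DEM slope and land cover, which this sketch omits.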
Show Figures

Figure 1

<p>(<b>a</b>) Locations of the target areas where the studied events occurred. (<b>b</b>,<b>c</b>) show the target regions (black dashed rectangles) encompassing the Hokkaido and Hiroshima regions, respectively. (<b>d</b>) Hokkaido study area. The red star shows the location of the epicenter. The blue rectangle shows the footprint covered by Advanced Land Observing Satellite-2 (ALOS-2). (<b>e</b>) Hiroshima study area. The blue rectangles show the footprints covered by ALOS-2.</p>
Figure 2">
Figure 2
<p>(<b>a</b>) JAXA’s land cover maps for the Hiroshima and Hokkaido areas (shown in the left and right panels, respectively). Land cover labels: DBF: Deciduous broadleaf forest, DNF: Deciduous needle-leaved forest, EBF: Evergreen broadleaf forest, and ENF: Evergreen needle-leaved forest. (<b>b</b>) Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) data used for the two study areas.</p>
Figure 3">
Figure 3
<p>Pre-event PALSAR-2, post-event PALSAR-2, RGB color-coded PALSAR-2 images (R: Pre-event, G and B: Post-event), and optical images. (<b>a</b>) Debris flow in Hiroshima after torrential rainfall. (<b>b</b>) Coseismic landslide in the Hokkaido study area.</p>
Figure 4">
Figure 4
<p>Schematic cross-sections of the landslide (debris flow) areas observed in synthetic aperture radar (SAR) intensity images.</p>
Figure 5">
Figure 5
<p>Research workflow for the detection and mapping of landslides using multitemporal SAR intensity images.</p>
Figure 6">
Figure 6
<p>(<b>a</b>) PALSAR-2 RGB color-coded images (top: Hiroshima area, bottom: Hokkaido area). (<b>b</b>) Result of applying an adaptive threshold to the PALSAR-2 dataset. Increasing and decreasing segments are indicated by cyan and red colors, respectively. (<b>c</b>) Masked increasing and decreasing intensity images using the land cover information and slope angles computed from the SRTM DEM dataset. (<b>d</b>) Result of applying object-based operations.</p>
Figure 7">
Figure 7
<p>Cumulative frequency function of the segment size from each target area (INC: Increasing segments; DEC: Decreasing segments). The segments detected from the Hiroshima site are smaller than those from the Hokkaido site.</p>
Figure 8">
Figure 8
<p>Comparison between the detected coseismic landslide (left panel) and the Geospatial Information Authority of Japan (GSI) reference data (right panel) for the Hokkaido study area.</p>
Figure 9">
Figure 9
<p>Debris flow detection results in the Hiroshima study area. The top and bottom panels show the areas around Kamiboda in the town of Kurose and the town of Kumano, respectively. The left panels show the detection results, the center panels correspond to the AJG reference data, and the right panels show the Sentinel-2 images.</p>
Figure 10">
Figure 10
<p>Comparison between the detected coseismic landslide (<b>left panel</b>) and the GSI reference data (<b>right panel</b>) for the Hokkaido study area.</p>
Figure 11">
Figure 11
<p>Analysis of the optimal values for the slope angle (left panel) and the kernel size used for adaptive thresholding in the proposed framework. The vertical frames show the range of recommended values for detecting landslides using ALOS-2 intensity images.</p>
">
24 pages, 7597 KiB  
Article
A New Framework for Automatic Airports Extraction from SAR Images Using Multi-Level Dual Attention Mechanism
by Lifu Chen, Siyu Tan, Zhouhao Pan, Jin Xing, Zhihui Yuan, Xuemin Xing and Peng Zhang
Remote Sens. 2020, 12(3), 560; https://doi.org/10.3390/rs12030560 - 7 Feb 2020
Cited by 24 | Viewed by 4570
Abstract
The detection of airports from Synthetic Aperture Radar (SAR) images is of great significance in various research fields. However, it is challenging to distinguish the airport from surrounding objects in SAR images. In this paper, a new framework, multi-level and densely dual attention (MDDA) network is proposed to extract airport runway areas (runways, taxiways, and parking lots) in SAR images to achieve automatic airport detection. The framework consists of three parts: down-sampling of original SAR images, MDDA network for feature extraction and classification, and up-sampling of airports extraction results. First, down-sampling is employed to obtain a medium-resolution SAR image from the high-resolution SAR images to ensure the samples (500 × 500) can contain adequate information about airports. The dataset is then input to the MDDA network, which contains an encoder and a decoder. The encoder uses ResNet_101 to extract four-level features with different resolutions, and the decoder performs fusion and further feature extraction on these features. The decoder integrates the chained residual pooling network (CRP_Net) and the dual attention fusion and extraction (DAFE) module. The CRP_Net module mainly uses chained residual pooling and multi-feature fusion to extract advanced semantic features. In the DAFE module, position attention module (PAM) and channel attention mechanism (CAM) are combined with weighted filtering. The entire decoding network is constructed in a densely connected manner to enhance the gradient transmission among features and take full advantage of them. Finally, the airport results extracted by the decoding network were up-sampled by bilinear interpolation to accomplish airport extraction from high-resolution SAR images. To verify the proposed framework, experiments were performed using Gaofen-3 SAR images with 1 m resolution, and three different airports were selected for accuracy evaluation. 
The results showed that the mean pixel accuracy (MPA) and mean intersection over union (MIoU) of the MDDA network were 0.98 and 0.97, respectively, much higher than those of RefineNet and DeepLabV3. Therefore, MDDA can achieve automatic airport extraction from high-resolution SAR images with satisfactory accuracy. Full article
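The channel attention mechanism (CAM) mentioned above can be illustrated with a minimal numpy sketch: channels are reweighted by a softmax over their pairwise affinities, plus a scaled residual connection. This is a generic form of such a module; the shapes and the residual scale gamma are illustrative, not the paper's exact implementation:

```python
# Minimal numpy sketch of a channel attention module (CAM): each output
# channel is a softmax-weighted mixture of all input channels, based on
# their pairwise affinities, added back to the input via a residual term.
import numpy as np

def channel_attention(x, gamma=0.5):
    """x: feature map of shape (C, H, W); returns the same shape."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                            # (C, N)
    energy = flat @ flat.T                                # (C, C) channel affinity
    energy = energy - energy.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(energy) / np.exp(energy).sum(axis=1, keepdims=True)
    out = attn @ flat                                     # mix channels by attention
    return (gamma * out + flat).reshape(c, h, w)          # residual connection

x = np.random.default_rng(2).normal(size=(8, 16, 16))
y = channel_attention(x)
print(y.shape)
```

In a trained network, gamma would be a learned parameter (typically initialized at zero so attention is phased in gradually); the position attention module (PAM) follows the same pattern with the affinity computed between spatial positions instead of channels.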
(This article belongs to the Special Issue Deep Learning Approaches for Urban Sensing Data Analytics)
Show Figures

Figure 1

<p>The structure of the residual unit.</p>
Figure 2">
Figure 2
<p>The dense connection.</p>
Figure 3">
Figure 3
<p>The position attention module (PAM).</p>
Figure 4">
Figure 4
<p>Channel attention module (CAM).</p>
Figure 5">
Figure 5
<p>The multi-level densely double attention network for airport extraction.</p>
Figure 6">
Figure 6
<p>The internal structure of CRP_Net_x. (<b>a</b>) is the overall structure of CRP_Net_x; (<b>b</b>) is the MRF structure; (<b>c</b>) is the Chained Residual Pooling (CRP) structure.</p>
Figure 6 Cont.">
Figure 7">
Figure 7
<p>The detailed implementation process of Position Attention Mechanism (PAM) and Channel Attention Mechanism (CAM). (<b>a</b>) PAM; (<b>b</b>) CAM.</p>
Figure 8">
Figure 8
<p>Airport images and corresponding ground truth. (<b>a</b>–<b>c</b>) denote the SAR image, ground truth, and the corresponding optical remote sensing image for Hongqiao Airport of China. (<b>d</b>–<b>f</b>) are the SAR image, ground truth, and optical remote sensing image for Capital Airport.</p>
Figure 9">
Figure 9
<p>The experiment result for Airport I. (<b>a</b>) SAR image of Airport I from Gaofen-3. (<b>b</b>) The down-sampled SAR image of (<b>a</b>) by 5 times. (<b>c</b>) The ground truth of the airport for (<b>b</b>). (<b>d</b>) The extraction result of (<b>b</b>) by RefineNet. (<b>e</b>) The extraction result of (<b>b</b>) by DeepLabV3. (<b>f</b>) The extraction result of (<b>b</b>) by MDDA. (<b>g</b>) The fusion map of (<b>d</b>,<b>b</b>). (<b>h</b>) The fusion map of (<b>e</b>,<b>b</b>). (<b>i</b>) The fusion map of (<b>f</b>,<b>b</b>). (<b>j</b>) The fusion map of (<b>a</b>,<b>d</b>) up-sampled by 5 times. (<b>k</b>) The fusion map of (<b>a</b>,<b>e</b>) up-sampled by 5 times. (<b>l</b>) The fusion map of (<b>a</b>,<b>f</b>) up-sampled by 5 times.</p>
Figure 9 Cont.">
Figure 10">
Figure 10
<p>The experiment result for Airport II. (<b>a</b>) SAR image of Airport II from Gaofen-3. (<b>b</b>) The down-sampled SAR image of (<b>a</b>) by 5 times. (<b>c</b>) The ground truth of the airport for (<b>b</b>). (<b>d</b>) The extraction result of (<b>b</b>) by RefineNet. (<b>e</b>) The extraction result of (<b>b</b>) by DeepLabV3. (<b>f</b>) The extraction result of (<b>b</b>) by MDDA. (<b>g</b>) The fusion map of (<b>d</b>,<b>b</b>). (<b>h</b>) The fusion map of (<b>e</b>,<b>b</b>). (<b>i</b>) The fusion map of (<b>f</b>,<b>b</b>). (<b>j</b>) The fusion map of (<b>a</b>,<b>d</b>) up-sampled by 5 times. (<b>k</b>) The fusion map of (<b>a</b>,<b>e</b>) up-sampled by 5 times. (<b>l</b>) The fusion map of (<b>a</b>,<b>f</b>) up-sampled by 5 times.</p>
Figure 10 Cont.">
Figure 11">
Figure 11
<p>The experiment result for Airport III. (<b>a</b>) SAR image of Airport III from Gaofen-3. (<b>b</b>) The down-sampled SAR image of (<b>a</b>) by 5 times. (<b>c</b>) The ground truth of the airport for (<b>b</b>). (<b>d</b>) The extraction result of (<b>b</b>) by RefineNet. (<b>e</b>) The extraction result of (<b>b</b>) by DeepLabV3. (<b>f</b>) The extraction result of (<b>b</b>) by MDDA. (<b>g</b>) The fusion map of (<b>d</b>,<b>b</b>). (<b>h</b>) The fusion map of (<b>e</b>,<b>b</b>). (<b>i</b>) The fusion map of (<b>f</b>,<b>b</b>). (<b>j</b>) The fusion map of (<b>a</b>,<b>d</b>) up-sampled by 5 times. (<b>k</b>) The fusion map of (<b>a</b>,<b>e</b>) up-sampled by 5 times. (<b>l</b>) The fusion map of (<b>a</b>,<b>f</b>) up-sampled by 5 times.</p>
Figure 12">
Figure 12
<p>The experiment result for Airport IV. (<b>a</b>) SAR image of Airport IV from Gaofen-3. (<b>b</b>) The down-sampled SAR image of (<b>a</b>) by 5 times. (<b>c</b>) The ground truth of the airport for (<b>b</b>). (<b>d</b>) The extraction result of (<b>b</b>) by RefineNet. (<b>e</b>) The extraction result of (<b>b</b>) by DeepLabV3. (<b>f</b>) The extraction result of (<b>b</b>) by MDDA. (<b>g</b>) The fusion map of (<b>d</b>,<b>b</b>). (<b>h</b>) The fusion map of (<b>e</b>,<b>b</b>). (<b>i</b>) The fusion map of (<b>f</b>,<b>b</b>). (<b>j</b>) The fusion map of (<b>a</b>,<b>d</b>) up-sampled by 5 times. (<b>k</b>) The fusion map of (<b>a</b>,<b>e</b>) up-sampled by 5 times. (<b>l</b>) The fusion map of (<b>a</b>,<b>f</b>) up-sampled by 5 times.</p>
Figure 12 Cont.">
Figure 13">
Figure 13
<p>The enlarged view of a small part of Airport I. (<b>a</b>) SAR image of a part of Airport I from Gaofen-3. (<b>b</b>) The down-sampled SAR image of (<b>a</b>) by 5 times. (<b>c</b>) The ground truth of the airport for (<b>b</b>). (<b>d</b>) The extraction result of (<b>b</b>) by RefineNet. (<b>e</b>) The extraction result of (<b>b</b>) by DeepLabV3. (<b>f</b>) The extraction result of (<b>b</b>) by MDDA. (<b>g</b>) The fusion map of (<b>d</b>,<b>b</b>). (<b>h</b>) The fusion map of (<b>e</b>,<b>b</b>). (<b>i</b>) The fusion map of (<b>f</b>,<b>b</b>). (<b>j</b>) The fusion map of (<b>a</b>,<b>d</b>) up-sampled by 5 times. (<b>k</b>) The fusion map of (<b>a</b>,<b>e</b>) up-sampled by 5 times. (<b>l</b>) The fusion map of (<b>a</b>,<b>f</b>) up-sampled by 5 times.</p>
Figure 13 Cont.">
Figure 14">
Figure 14
<p>The experimental results for the horizontally flipped Airport I. (<b>a</b>) SAR image from Gaofen-3. (<b>b</b>) The down-sampled SAR image of (<b>a</b>) by 5 times. (<b>c</b>) The ground truth of the airport for (<b>b</b>). (<b>d</b>) The extraction result of (<b>b</b>) by RefineNet. (<b>e</b>) The extraction result of (<b>b</b>) by DeepLabV3. (<b>f</b>) The extraction result of (<b>b</b>) by MDDA. (<b>g</b>) The fusion map of (<b>d</b>,<b>b</b>). (<b>h</b>) The fusion map of (<b>e</b>,<b>b</b>). (<b>i</b>) The fusion map of (<b>f</b>,<b>b</b>). (<b>j</b>) The fusion map of (<b>a</b>,<b>d</b>) up-sampled by 5 times. (<b>k</b>) The fusion map of (<b>a</b>,<b>e</b>) up-sampled by 5 times. (<b>l</b>) The fusion map of (<b>a</b>,<b>f</b>) up-sampled by 5 times.</p>
">