Remote Sens., Volume 12, Issue 15 (August-1 2020) – 164 articles

Cover Story: With a decline in the number of operational river gauges monitoring sediments, a viable means of quantifying sediment transport is needed. In this study, we address this issue by applying relationships between the hydraulic geometry of river channels, water discharge, water-leaving surface reflectance, and suspended sediment concentration (SSC) to quantify sediment discharge with the aid of space-based observations. We examined 5490 Landsat scenes to estimate water discharge, SSC, and sediment discharge for the period 1984–2017 at nine gauging sites along the Upper Mississippi River. The results show that the water discharge and SSC retrievals from Landsat imagery yield reasonable sediment discharge estimates for the Upper Mississippi River.
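The cover story's rating-curve idea reduces to a unit conversion once discharge and SSC are known. Below is a minimal Python sketch of that conversion (our illustration, not the authors' code); the function name and example numbers are ours.

```python
# Hedged sketch: converts water discharge and suspended sediment
# concentration into sediment discharge. Illustrates the cover story's
# rating-style relationship; variable names are ours, not the authors'.

def sediment_discharge_tpd(q_m3s: float, ssc_mgL: float) -> float:
    """Sediment discharge in tonnes/day from discharge (m^3/s) and SSC (mg/L).

    Q [m^3/s] * SSC [mg/L] = g/s; times 86400 s/day / 1e6 g/tonne = 0.0864.
    """
    return 0.0864 * q_m3s * ssc_mgL

# Example: 3000 m^3/s carrying 150 mg/L -> 38,880 t/day
print(sediment_discharge_tpd(3000.0, 150.0))
```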
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
17 pages, 9231 KiB  
Article
Use of Remote Sensing in Comprehending the Influence of Urban Landscape’s Composition and Configuration on Land Surface Temperature at Neighbourhood Scale
by Ifeanyi R. Ejiagha, M. Razu Ahmed, Quazi K. Hassan, Ashraf Dewan, Anil Gupta and Elena Rangelova
Remote Sens. 2020, 12(15), 2508; https://doi.org/10.3390/rs12152508 - 4 Aug 2020
Cited by 24 | Viewed by 5601
Abstract
The spatial composition and configuration of land use land cover (LULC) in the urban landscape impact the land surface temperature (LST). In this study, we assessed such impacts at the neighbourhood level of the City of Edmonton. We employed Landsat-8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) satellite images to derive LULC and LST maps, respectively. We used three classification methods, ISODATA, random forest, and an indices-based approach, to map LULC classes including built-up, water, and green. The indices-based method yielded the highest overall accuracies of 98.53% and 97.90%, with kappa values of 0.96 and 0.92, for the 2018 and 2015 LULC maps, respectively. In addition, we estimated the LST map from the brightness temperature using a single-channel algorithm. Our analysis showed that the highest contributors to LST were the industrial (303.51 K in 2018 and 295.99 K in 2015) and residential (303.47 K in 2018 and 296.56 K in 2015) neighbourhoods, and the lowest contributor was the riverine/creek class (298.77 K in 2018 and 292.89 K in 2015), during the 2018 late summer and 2015 early spring seasons. We also found that residential neighbourhoods exhibited higher LST than industrial ones with the same LULC composition. This result was supported by our surface albedo analysis, in which industrial and residential neighbourhoods gave higher and lower albedo values, respectively, indicating that rooftop materials played a further role in shaping the LST. In addition, our spatial autocorrelation (local Moran's I) and proximity (near distance) analyses revealed that structural configuration also contributes substantially to neighbourhood LST. For example, the clustered pattern in residential neighbourhoods, with gaps as small as 2.4 m between structures, showed higher LST than the sparse pattern, with large gaps between structures, in the industrial areas. Wide passages for wind flow through the large gaps would be responsible for cooling the LST in the industrial neighbourhoods. The outcomes of this study should help planners in designing urban neighbourhoods, and policymakers and stakeholders in developing strategies to balance surface energy and mitigate local warming.
(This article belongs to the Special Issue Understanding Urban Systems Using Remote Sensing)
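For readers who want to experiment with the kind of brightness-temperature-to-LST step the abstract mentions, here is a minimal Python sketch using one widely cited single-channel emissivity correction (Artis and Carnahan); the paper's exact algorithm and coefficients may differ, and the constants below are standard values for TIRS band 10, not taken from the paper.

```python
import numpy as np

# Hedged sketch: emissivity correction of Landsat-8 band-10 brightness
# temperature, in one common single-channel form. The paper's exact
# single-channel variant may differ.

WAVELENGTH_B10 = 10.895e-6   # effective wavelength of TIRS band 10 (m)
RHO = 1.438e-2               # h*c/k_B (m K)

def lst_from_bt(bt_kelvin, emissivity):
    """Land surface temperature (K) from brightness temperature and emissivity."""
    bt = np.asarray(bt_kelvin, dtype=float)
    return bt / (1.0 + (WAVELENGTH_B10 * bt / RHO) * np.log(emissivity))

# Example: BT = 300 K over a built-up surface with assumed emissivity 0.95
print(lst_from_bt(300.0, 0.95))  # ~303.5 K
```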
Figure 1. Study area showing the City of Edmonton's extent in the Canadian Province of Alberta, where the city comprises 400 neighbourhoods. The neighbourhood boundaries were overlaid on a Landsat-8 OLI image with a false colour composite (RGB:543) acquired on 7 September 2018.
Figure 2. Schematic diagram of the proposed methods in the scope of this study.
Figure 3. (a) Indices-based LULC map showing the spatial distribution of the three LULC classes (2018), where the built-up class occupies the core, surrounded by green at the fringes, and the major water body is the river crossing diagonally from southwest to northeast; (b) the LST map (2018) representing temperature variations over the study area, with higher LST in the built-up class and lower LST along the green and water classes.
Figure 4. (a) Map showing the neighbourhoods grouped into five categories, overlaid on the Landsat-8 OLI image (2018) with a natural colour composite (RGB:432); (b) map showing the 12 neighbourhood subcategories (2018), where 'I' denotes the industrial and 'R' the residential categories.
Figure 5. Scatterplot of mean LST values comparing the two subcategory sets of the industrial and residential neighbourhood categories for 2018 (late summer) and 2015 (early spring), where each set consists of six subcategories. The linear regression coefficients of both neighbourhood categories represent the relation of mean surface temperature to the configuration of the neighbourhood subcategories.
Figure 6. (a) Map of estimated surface albedo values (2018), where red tones represent higher albedo, blue tones lower albedo, and yellow tones intermediate albedo; (b) surface albedo of the subcategorized industrial neighbourhoods was high (both 2018 and 2015), while that of the residential subcategories was low (both 2018 and 2015).
23 pages, 4837 KiB  
Review
Integrated Satellite–Terrestrial Connectivity for Autonomous Ships: Survey and Future Research Directions
by Marko Höyhtyä and Jussi Martio
Remote Sens. 2020, 12(15), 2507; https://doi.org/10.3390/rs12152507 - 4 Aug 2020
Cited by 32 | Viewed by 8299
Abstract
An autonomous vessel uses multiple radio technologies, such as satellites, mobile networks, and dedicated narrowband systems, to connect to other ships, services, and the remote operations center (ROC). In-ship communication is mainly implemented with wired technologies, but wireless links can also be used. In this survey paper, we provide a short overview of autonomous and remote-controlled systems and review 5G-related standardization in the maritime domain, covering the main use cases and the roles of both autonomous ships and people onboard. We discuss the concept of a connectivity manager, an intelligent entity that manages a complex set of technologies, integrating satellite and terrestrial systems and ensuring robust in-ship connections and connections from the ship to the outside world in any environment. The paper describes the architecture and functionalities of the connectivity management required for an autonomous ship to operate globally. As a specific case example, we have implemented a research environment consisting of ship simulators with connectivity components. Our simulation results on the effects of delays on collision avoidance confirm the role of reliable connectivity in safety. Finally, we outline future research directions for autonomous ship connectivity, providing ideas for further work.
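As an illustration of the connectivity-manager concept, the sketch below selects the preferred available link by a simple policy (priority, then latency). It is a toy example under assumed link names and fields, not the paper's architecture, which handles far more (QoS prediction, vertical handover, in-ship links).

```python
# Hedged sketch of the connectivity-manager idea: pick the best currently
# available link by policy. Purely illustrative; link names, fields, and the
# selection rule are our assumptions.

from dataclasses import dataclass

@dataclass
class Link:
    name: str        # e.g., "5G coastal", "LEO satellite", "GEO satellite"
    available: bool
    latency_ms: float
    priority: int    # lower = preferred by policy

def select_link(links):
    """Return the preferred available link, or None if the ship is isolated."""
    candidates = [l for l in links if l.available]
    return min(candidates, key=lambda l: (l.priority, l.latency_ms), default=None)

links = [
    Link("5G coastal", available=False, latency_ms=20, priority=0),
    Link("LEO satellite", available=True, latency_ms=50, priority=1),
    Link("GEO satellite", available=True, latency_ms=600, priority=2),
]
best = select_link(links)
print(best.name if best else "no connectivity")  # -> "LEO satellite"
```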
Figure 1. Key use case groups in 3GPP maritime standardization.
Figure 2. Network architecture for an autonomous ship.
Figure 3. Connectivity manager for an integrated satellite–terrestrial system in a future maritime scenario.
Figure 4. Ship-to-ship maximum link distances according to antenna heights.
Figure 5. Vertical handover for autonomous ship connectivity.
Figure 6. VTT ship simulator for connectivity research.
Figure 7. Ship motion directions.
Figure 8. Track and track limits.
Figure 9. Selected simulator scenario and two locations for connectivity delay testing. Simulated vessels are shown in magnified insets. The whole area is approximately 3.5 km wide and 5 km long, located close to Helsinki, Finland.
Figure 10. Actual simulated distances between 'own ship' and 'Ship1' as a function of time for simulation scenario cases A, B, and C: the first encounter situation.
Figure 11. Selected actual simulated distances between 'own ship' and 'Ship2' as a function of time for simulation scenario cases A, B, and C: the second encounter situation.
Figure 12. Future maritime connections.
Figure 13. Ship mega-constellation for global connectivity.
31 pages, 7726 KiB  
Article
Remote Sensing Analysis of Surface Temperature from Heterogeneous Data in a Maize Field and Related Water Stress
by Marinella Masina, Alessandro Lambertini, Irene Daprà, Emanuele Mandanici and Alberto Lamberti
Remote Sens. 2020, 12(15), 2506; https://doi.org/10.3390/rs12152506 - 4 Aug 2020
Cited by 12 | Viewed by 5991
Abstract
Precision agriculture aims at optimizing crop production by adapting management actions to real needs; it requires a reliable and extensive description of soil and crop conditions, which multispectral satellite images can provide. The purpose of the present study, based on activities carried out in 2019 on an agricultural area north of Ravenna (Italy) within the project LIFE AGROWETLANDS II, is to evaluate the potential and limitations of freely available satellite thermal images for identifying water stress conditions and optimizing irrigation management practices, especially in agricultural areas and wetlands affected by saline soils and capillary rise of salt water. Point field surveys and a very high-resolution (5 cm) thermal survey by a thermal camera carried on an unmanned aerial vehicle (UAV) were performed on a maize field, tentatively at every Landsat-8 passage, to check the land surface temperature (LST) and canopy cover (CC) estimated from satellite. Temperatures measured in the soil near the ground surface and from the UAV flying at 100 m altitude were compared with LST estimated from satellite measurements using three conversion methods: the top-of-atmosphere brightness temperature based on Landsat-8 band 10 (SB), corrected only for surface emissivity; the radiative transfer equation (RTE) for atmospheric correction; and the original split-window method (SW) using both Thermal Infrared Sensor (TIRS) bands. The comparison shows discrepancies due to the extreme difference in resolution, the fixed hour of the satellite passage (11 a.m. solar time), and systematic differences between methods, besides the unavoidable inaccuracy of UAV measurements. Satellite-derived temperatures are usually lower than UAV measurements; SB produced the lowest values, SW the best (difference = −1.7 ± 1.7), and RTE the median (difference = −2.7 ± 1.6). The correlation between contemporary 30 m resolution temperature values of nearby pixels and the corresponding tile-average temperatures was not significant, owing to the purely numerical interpolation from the 100 m resolution TIRS images, whereas the temporal pattern along the season is consistent among methods, with correlation coefficients always greater than 0.85. Correlation coefficients among temperatures obtained from Landsat-8 by the different methods are close to 1, showing that the values are almost strictly related by a linear transformation. All the methods are therefore useful for estimating water stress, since the associated Crop Water Stress Index (CWSI) is, by definition, insensitive to linear transformations of temperature. Actual evapotranspiration (ETa) maps were evaluated with the Surface Energy Balance Algorithm for Land (SEBAL) based on the three Landsat-8-derived LSTs; the higher the LST, the lower the ETa. The resulting ETa estimates are related to LST but not strictly, owing to variations in vegetation cover and soil; the patterns are therefore similar but not equivalent, and the values depend on the atmospheric correction method. RTE and SW proved to be the best of the tested methods, and the derived ETa values are reliable and appropriate to user needs. For real-time applications, the Normalized Difference Moisture Index (NDMI), which can also be derived from the more frequent Sentinel-2 passages, can profitably be used in combination with, or as a substitute for, the CWSI.
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry)
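One claim in the abstract, that the CWSI is insensitive to a linear transformation of temperatures, can be verified directly from the standard CWSI definition. A minimal Python check follows (the example values are ours, not the paper's data):

```python
# Hedged sketch illustrating the abstract's claim that CWSI is insensitive
# to a linear transformation of temperature. The definition is the standard
# normalized form; a and b below are an arbitrary recalibration.

def cwsi(t, t_wet, t_dry):
    """Crop Water Stress Index: 0 = well watered, 1 = fully stressed."""
    return (t - t_wet) / (t_dry - t_wet)

t, t_wet, t_dry = 305.0, 298.0, 312.0
a, b = 0.97, 8.5  # any linear recalibration T' = a*T + b

original = cwsi(t, t_wet, t_dry)
transformed = cwsi(a * t + b, a * t_wet + b, a * t_dry + b)
print(original, transformed)  # identical: 0.5 0.5 (the factor a cancels)
```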
Figure 1. Location of the study area, with the unmanned aerial vehicle (UAV) survey area marked in blue. North and south parcels are highlighted. Red squares are measurement points in the area. Special network nodes: P02 and P07 are also equipped with agro-meteorological stations; P09 monitors soil and groundwater properties. The AGEA 2011 orthophoto displayed as base map is provided by the Regione Emilia-Romagna Geoportal. Grid coordinates are UTM-ETRS89 Zone 33T.
Figure 2. Reference area for UAV surveys, highlighted with a red polygon. The Landsat-8 grid of 30 × 30 m pixels at ground level in path 192 is overlaid in light blue, with the 12 selected tiles discussed in this paper highlighted. The orthophoto was produced from images acquired on 19 July, geo-referenced with 10 ground control point (GCP) thermal targets (a). Grid coordinates are UTM-ETRS89 Zone 33T. The Reno river is visible at the NE end of the strip.
Figure 3. Calendar of survey activities and satellite passages. Grey and black dots represent surveys or passages that for any reason yielded unusable images (cloud cover, hazy sky at high altitude, or technical problems).
Figure 4. Thermal orthomosaic acquired on 19 July 2019. Reported temperatures are derived with a constant 0.95 emissivity. Grid coordinates are UTM-ETRS89 Zone 33T.
Figure 5. HYDRUS simulation of soil temperature based on meteorological conditions, compared with Sentek temperature measurements.
Figure 6. Image of the tile (left) and corresponding temperature distribution before calibration and after edge removal.
Figure 7. Empirical correlation between Landsat- and UAV-derived temperatures (left), and between Landsat-derived NDVI and UAV-derived vegetation cover (right). Different atmosphere corrections are tested to relate top-of-atmosphere and land surface temperatures (left). NDVI is evaluated using either top-of-atmosphere (TOA) data or level-2 surface reflectance (SR) data.
Figure 8. Temperature distribution over the experimental field derived with the SB method for a dry (left) and a wet (right) condition.
Figure 9. Land surface temperature (LST) maps derived with the RTE and SW methods from the 3 July Landsat-8 images.
Figure 10. Evapotranspiration and water stress maps at and around the survey field on 3 July, 4 August, and 20 August, obtained from the Surface Energy Balance Algorithm for Land (SEBAL) using temperatures derived with the SB method.
Figure 11. Evapotranspiration around the survey field on 3 July obtained from SEBAL with the RTE (left) and SW (right) methods for correcting atmosphere effects on LST.
Figure 12. Normalized Difference Moisture Index (NDMI) maps derived from Landsat-8 at 30 m resolution (left) and from Sentinel-2 at 10 m resolution, from 3 July 2019 near-infrared (NIR) and shortwave infrared (SWIR1) images.
28 pages, 11778 KiB  
Article
Landslide Susceptibility Assessment of Wildfire Burnt Areas through Earth-Observation Techniques and a Machine Learning-Based Approach
by Mariano Di Napoli, Palmira Marsiglia, Diego Di Martire, Massimo Ramondini, Silvia Liberata Ullo and Domenico Calcaterra
Remote Sens. 2020, 12(15), 2505; https://doi.org/10.3390/rs12152505 - 4 Aug 2020
Cited by 44 | Viewed by 6611
Abstract
Climate change has increased the likelihood of disasters like wildfires, floods, storms, and landslides worldwide in recent years. Weather conditions change continuously and rapidly, and wildfires are occurring repeatedly and spreading with higher intensity. In many parts of the world, burnt catchments are known as areas particularly prone to debris flows with different trigger mechanisms (runoff-initiated and debris slide-initiated debris flows). The large number of studies produced in recent decades has shown that the response of a watershed to precipitation can be extremely variable, depending on several on-site conditions as well as on the duration and intensity of precipitation. Moreover, the availability of satellite data has significantly improved the ability to identify areas affected by wildfires and, even more importantly, to carry out post-fire assessment of burnt areas. Many difficulties must be faced in assessing landslide risk in burnt areas, which present a higher likelihood of occurrence; in densely populated neighbourhoods, human activities can be the origin of the fires. Fire is, in fact, one of the main means used by man to remove vegetation along slopes in an attempt to claim new land for pasture or construction. In the study area, the Camaldoli and Agnano hills (Naples, Italy), fires seem to act as a predisposing factor, while the triggering factor is usually precipitation. Eleven predisposing factors were chosen and estimated according to previous knowledge of the territory, and a database of 400 landslides was adopted. The present work aimed to expand knowledge of the relationship between landslide triggering and burnt areas through the following phases: (1) processing thematic maps of the burnt areas through band compositions of satellite images; and (2) landslide susceptibility assessment through the application of a new statistical approach (machine learning techniques). The analysis aims to support decision makers and local agencies in urban planning and environmental safety monitoring.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
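To make the "machine learning on predisposing factors" step concrete, here is a minimal, hypothetical Python sketch: a single random forest fitted on stand-in factor data and used to map susceptibility as a probability. The authors use an ensemble of several models on real thematic layers; the feature semantics and all numbers below are placeholders.

```python
# Hedged sketch of a susceptibility workflow in the spirit of the paper:
# fit a classifier on predisposing factors, then map predicted probability.
# The random-forest choice and synthetic data are our assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in design matrix: rows = terrain cells, columns = 11 predisposing
# factors (e.g., slope, aspect, lithology, land cover, burnt-area flag).
X = rng.random((400, 11))
y = rng.integers(0, 2, size=400)  # 1 = mapped landslide, 0 = stable cell

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]  # per-cell probability in [0, 1]
print(susceptibility[:5])
```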
Figure 1. Location of the study area. In the top-left inset, the study area in detail; in red, the Camaldoli and Agnano hills. In the bottom-left inset, post-fire erosion phenomena (15 September 2001); in the bottom-right inset, the post-fire landslides triggered in the summer of 1996. Both images refer to the Soccavo slopes (photo: D. Calcaterra).
Figure 2. Geological sketch map of the Naples municipality area. Caldera rims associated with two large Plinian and Phreatoplinian eruptions (CI and NYT) are also illustrated (modified after [45]).
Figure 3. Flow chart of the implemented approach.
Figure 4. (a,b) Pre- and post-fire conditions visible with a 7-4-2 band composition; (c,d) pre- and post-fire conditions visible with a 7-5-3 band composition. In (b,d), red polygons highlight where the wildfire occurred. Note that with the 7-4-2 band composition, the identification of the burnt areas is more evident than with the 7-5-3 band composition.
Figure 5. "Base" ensemble susceptibility maps. (a) Weighted mean ensemble susceptibility map of the Camaldoli hill; (b) weighted mean ensemble susceptibility map of the Agnano hills.
Figure 6. Landslide and areal extension distribution in susceptibility classes for the Agnano hills (a,b) and the Camaldoli hill (c,d).
Figure 7. Landslide susceptibility maps that also consider the burnt areas. (a,b) Susceptibility map considering the fires of 1995–1996 and the rainfall-triggered landslides of 1997 (Camaldoli hill); (c,d) susceptibility maps considering the fires of 1999–2000 and the rainfall-triggered landslides of 2001 (Camaldoli hill); (e,f) susceptibility map considering the fires of 2017–2018 and the rainfall-triggered landslides of 2019 (Agnano hills).
Figure 8. (a) Landslide susceptibility distribution of the wildfire area across the different classes; (b) areal extension distribution of the susceptibility maps of the wildfire area for each susceptibility level.
Figure 9. (a) Percentage of landslides falling in pre- and post-fire susceptibility maps; (b) percentage of pre- and post-fire areal extension.
24 pages, 6149 KiB  
Article
Using UAV Collected RGB and Multispectral Images to Evaluate Winter Wheat Performance across a Site Characterized by Century-Old Biochar Patches in Belgium
by Ramin Heidarian Dehkordi, Victor Burgeon, Julien Fouche, Edmundo Placencia Gomez, Jean-Thomas Cornelis, Frederic Nguyen, Antoine Denis and Jeroen Meersmans
Remote Sens. 2020, 12(15), 2504; https://doi.org/10.3390/rs12152504 - 4 Aug 2020
Cited by 21 | Viewed by 5721
Abstract
Remote sensing data play a crucial role in monitoring crop dynamics in the context of precision agriculture by characterizing the spatial and temporal variability of crop traits. At present, there is special interest in assessing the long-term impacts of biochar in agro-ecosystems. Despite the growing body of literature on monitoring the potential effects of biochar on harvested crop yield and aboveground productivity, studies focusing on detailed crop performance as a consequence of long-term biochar enrichment are still lacking. The primary objective of this research was to evaluate crop performance based on high-resolution unmanned aerial vehicle (UAV) imagery, considering crop growth and health through RGB and multispectral analysis, respectively. More specifically, this approach allowed monitoring of century-old biochar impacts on winter wheat performance. Seven Red-Green-Blue (RGB) and six multispectral flights were executed over 11 century-old biochar patches of a cultivated field. UAV-based RGB imagery exhibited a significant positive impact of century-old biochar on the evolution of winter wheat canopy cover (p-value = 0.00007). The multispectral optimized soil adjusted vegetation index indicated better crop development over the century-old biochar plots at the beginning of the season (p-values < 0.01), while there was no impact towards the end of the season. Plant height, derived from the RGB imagery, was slightly higher for century-old biochar plots. Crop health maps were computed based on principal component analysis and k-means clustering. To our knowledge, this is the first attempt to quantify century-old biochar effects on crop performance during the entire growing period using remotely sensed data. Ground-based measurements showed a significant positive impact of century-old biochar on crop growth stages (p-value = 0.01265), whereas harvested crop yield was not affected. The multispectral simplified canopy chlorophyll content index and normalized difference red edge index were found to be good linear estimators of harvested crop yield (p-value(Kendall) = 0.001 and 0.0008, respectively). The present research highlights that other factors (e.g., inherent pedological variations) are more important than the presence of century-old biochar in determining crop health and yield variability.
(This article belongs to the Special Issue UAVs for Vegetation Monitoring)
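The OSAVI used throughout the paper has a simple closed form (Rondeaux et al.); below is a minimal Python sketch, with placeholder reflectance values standing in for the UAV band mosaics.

```python
import numpy as np

# Hedged sketch: OSAVI in its usual form (Rondeaux et al.), computed from
# reflectance bands; the scalar inputs are placeholders for raster arrays.

def osavi(nir, red, soil_factor=0.16):
    """Optimized Soil Adjusted Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + soil_factor)

print(osavi(0.45, 0.08))  # dense canopy -> ~0.54
```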
Figure 1. (a) Schematic layout of the experimental pairs (reference versus century-old biochar plots) in the winter wheat field and distribution of the ground control points. The background image is the Red-Green-Blue (RGB) orthomosaic captured by the unmanned aerial vehicle on 16 April 2019. The magnified windows on the right present an example plot (reference plot 5) in visual RGB (b) and in the multispectral weighted difference vegetation index (c), monitored on 16 April 2019.
Figure 2. Methodological flowchart for crop trait retrieval from the unmanned aerial vehicle (UAV) imagery in combination with ground-based data.
Figure 3. Methodological workflow to derive plant height from RGB images collected by the unmanned aerial vehicle. DTM is the digital terrain model acquired on 22 February, when the field was completely bare soil. DSM represents the digital surface model at each acquisition date. Ground control point 2 was used as a reference levelling point to calibrate the DSM images.
Figure 4. (a) Temporal profiles of canopy cover, derived from the RGB imagery, comparing the 11 century-old biochar plots with the corresponding reference plots throughout the season. (b) Box plot of the area under the canopy-cover curve for the century-old biochar and reference plots (i.e., obtained from the graphs in sub-plot a for each site). The horizontal black line displays the median value, surrounded by box edges representing the 25th and 75th percentiles. The black circles show all of the experimental plots, and the dark green lines connect the corresponding pairs of reference and century-old biochar plots.
Figure 5. Comparison of plant height for all 11 century-old biochar plots (red) versus the 11 reference plots at each acquisition date. The black circle and cross represent the mean and median height, respectively; the error bar shows the corresponding standard deviation; the bottom and top black pluses (+) indicate the minimum and maximum height. Outside the plot, asterisks *, **, ***, and **** denote the statistical levels of significance; NS stands for non-significant.
Figure 6. Box plot of winter wheat height development for the 11 experimental pairs of century-old biochar versus reference plots. The horizontal black line displays the median value, surrounded by box edges representing the 25th and 75th percentiles. Outside the plot, asterisks *, **, ***, and **** denote the statistical levels of significance; NS stands for non-significant. The bottom magnified window displays an example height comparison between century-old biochar and reference plot 5 (on 16 April 2019).
Figure 7. Comparison of the optimized soil adjusted vegetation index (OSAVI) for all 11 century-old biochar plots (red) versus the 11 reference plots at each acquisition date. The black circle and cross represent the mean and median OSAVI, respectively; the error bar shows the corresponding standard deviation; the bottom and top black pluses (+) indicate the minimum and maximum OSAVI. Outside the plot, asterisks *, **, ***, and **** denote the statistical levels of significance; NS stands for non-significant.
Figure 8. Box plot of the optimized soil adjusted vegetation index for the 11 experimental pairs of century-old biochar versus reference plots over the 2019 growing season. The horizontal black line displays the median value, surrounded by box edges representing the 25th and 75th percentiles. Outside the plot, asterisks *, **, ***, and **** denote the statistical levels of significance; NS stands for non-significant. No spectral information is available for reference plot 1 on 20 March because of a flight planning constraint.
Figure 9. Data from 24 June 2019: (a) map of the optimized soil adjusted vegetation index (OSAVI); (b) crop health map derived from k-means clustering of OSAVI, with the magnified window showing an example area of poor crop health in the study field; (c) RGB orthomosaic image (left), the green and red spectral responses of the visible RGB orthomosaic (middle), and the first-component (PC1) raster of the principal component analysis (PCA) of the green and red bands (right); (d) crop health map derived from k-means clustering of the PC1 raster of the PCA of the green and red spectral channels.
Figure 10. Impact of century-old biochar on ground-based crop traits, including crop growth stages, expressed on the Biologische Bundesanstalt, Bundessortenamt and Chemical industry (BBCH) scale, on 28 May 2019 (a), and yield on 20 July 2019 (b). The horizontal black line displays the median value, surrounded by box edges representing the 25th and 75th percentiles. The black circles show the experimental plots of each pair, and the dark green lines connect the corresponding reference and century-old biochar plots of each pair.
Figure 11. (a) Relationship between the multispectral (MSP) simplified canopy chlorophyll content index (s-CCCI) on 24 June and the harvested crop yield on 20 July. The black circles represent the harvested samples and the brown line indicates the linear regression fit. (b) Map of the predicted MSP crop yield computed from the relationship between the MSP s-CCCI and the harvested crop yield. (c) Comparison of the predicted MSP crop yield for all 11 century-old biochar plots (red) versus the 11 reference plots. (d,e) Comparison of the predicted MSP crop yield for the good (green), moderate (yellow), and poor (red) crop health classes derived from the MSP (d) and RGB (e) sensors. In (c–e), the black circle and cross represent the mean and median MSP crop yield, respectively; the error bar shows the corresponding standard deviation; the bottom and top black pluses (+) indicate the minimum and maximum MSP crop yield. Outside the plot, asterisks *, **, ***, and **** denote the statistical levels of significance; NS stands for non-significant.
29 pages, 22416 KiB  
Article
Classification of Urban Area Using Multispectral Indices for Urban Planning
by Philip Lynch, Leonhard Blesius and Ellen Hines
Remote Sens. 2020, 12(15), 2503; https://doi.org/10.3390/rs12152503 - 4 Aug 2020
Cited by 25 | Viewed by 11934
Abstract
An accelerating trend of global urbanization accompanying population growth makes frequently updated land use and land cover (LULC) maps critical. LULC maps have been widely created through the classification of remotely sensed imagery. Maps of urban areas have been both dichotomous (urban or non-urban) and categorical, distinguishing discrete urban types. This study incorporated multispectral built-up indices, designed to enhance satellite imagery, to introduce new urban classification schemes. The indices examined are the new built-up index (NBI), the built-up area extraction index (BAEI), and the normalized difference concrete condition index (NDCCI). Landsat Level-2 data covering the city of Miami, FL, USA were leveraged with geographic data from the Florida Geospatial Data Library and the Florida Department of Environmental Protection to develop and validate new methods of supervised and unsupervised classification of urban areas. NBI was used to extract discrete urban features through object-oriented image analysis. BAEI was found to possess properties suitable for visualizing and tracking urban development as a low-high gradient. NDCCI was composited with NBI and BAEI as the basis for a robust urban intensity classification scheme superior to that of the United States Geological Survey National Land Cover Database 2016. BAEI, implemented as a shadow index, was incorporated in a novel infill geosimulation of high-rise construction. The findings suggest that the proposed classification schemes are advantageous for creating the more detailed cartography needed to meet increasing global demand.
(This article belongs to the Special Issue Remote Sensing-Based Urban Planning Indicators)
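For orientation, two of the named indices have widely cited closed forms, sketched below in Python. These are the forms common in the literature (NBI as in Jieli et al., BAEI as in Bouzekri et al.); we have not checked them against this paper's exact equations, and the reflectance values are placeholders.

```python
import numpy as np

# Hedged sketch of two built-up indices named in the abstract, in commonly
# cited forms; the paper's exact definitions may differ.

def nbi(red, nir, swir1):
    """New Built-up Index: tends to be high over built surfaces."""
    return (red * swir1) / np.maximum(nir, 1e-6)

def baei(red, green, swir1):
    """Built-up Area Extraction Index."""
    return (red + 0.3) / (green + swir1)

red, green, nir, swir1 = 0.18, 0.12, 0.22, 0.25  # placeholder reflectances
print(nbi(red, nir, swir1), baei(red, green, swir1))
```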
Figure 1. Miami sits on the southeastern tip of Florida. The city is enclosed by Biscayne Bay and the Atlantic Ocean to the east, and by urbanized parts of Miami-Dade County inland.
Figure 2. Select spectral enhancements derived from Landsat 8 OLI 22 October 2016 data: (a) NBI, (b) BAEI, (c) NDCCI.
Figure 3. USGS NLCD 2016 urban intensities with noticeable errors denoted.
Figure 4. Urban intensity classification workflow diagram.
Figure 5. NBI object primitives derived from Landsat 8 OLI 22 October 2016 data: (a) small segment size emphasizing surface brightness (spectral detail: 13, spatial detail: 8, minimum segment size in pixels: 1); (b) moderate segment size emphasizing surface objects (spectral detail: 15, spatial detail: 10, minimum segment size in pixels: 5); (c) large segment size emphasizing surface texture (spectral detail: 17, spatial detail: 12, minimum segment size in pixels: 10).
Figure 6. NBI-derived object of industrialized land in Little Haiti.
Figure 7. BAEI change detection time-series, giving mean index values, derived from Landsat data: (a) 5 TM 2 November 1985, (b) 6 November 1998, (c) 20 November 2003, (d) 10 November 2011, (e) 8 OLI 17 October 2014, (f) 22 October 2016.
Figure 8. Change detection time-series, giving mean index values, derived from Landsat 5 TM data: (a) 2 November 1985 BAEI, (b) 6 November 1998 BAEI, (c) 10 November 2011 BAEI, (d) 2 November 1985 NDVI, (e) 6 November 1998 NDVI, (f) 10 November 2011 NDVI.
Figure 9. Iso cluster classifications of urban land use intensity based on spectral index composites derived from 22 October 2016 Landsat 8 OLI data: (a) NDCCI/NBI/BAEI composite, (b) NDVI/NBI/BAEI composite.
Figure 10. 2016 Miami urban land use classifications derived from Landsat 8 OLI data: (a) NDCCI/NBI/BAEI composite SVM, (b) NLCD 2016 Percent Developed Imperviousness, (c) fusion of NDCCI/NBI/BAEI SVM and Percent Developed Imperviousness, (d) NDCCI/NBI/BAEI composite iso cluster, (e) five-class fusion of Percent Developed Imperviousness with NDCCI/NBI/BAEI SVM and iso cluster, (f) four-class fusion of Percent Developed Imperviousness with NDCCI/NBI/BAEI SVM and iso cluster.
Figure 11. BAEI (as a shadow index) change detection time-series of the Brickell neighborhood derived from Landsat data: (a) 5 TM 20 November 2003, (b) 17 November 2008, (c) 8 OLI 22 October 2016. The area of interest overlaid with a focalized classification (transparent) based on thresholding: (d) 5 TM 20 November 2003, (e) 17 November 2008, (f) 8 OLI 22 October 2016.
Figure 12. Classifications, created by manual and semi-automated methods, of the Brickell neighborhood derived from Landsat 5 TM data and used as LCM inputs: (a) 20 November 2003, (b) 17 November 2008. Polynomial transition trends created in LCM: (c) 2nd order, (d) 3rd order.
Figure 13. Results of the LCM 2016 prediction for the Brickell neighborhood area of interest based on 2003 and 2008 data: (a) predicted 2016 area-of-interest land cover, (b) vector derived from the 2016 prediction referencing the formation of shadows in the area of interest.
23 pages, 8483 KiB  
Article
Vegetation Detection Using Deep Learning and Conventional Methods
by Bulent Ayhan, Chiman Kwan, Bence Budavari, Liyun Kwan, Yan Lu, Daniel Perez, Jiang Li, Dimitrios Skarlatos and Marinos Vlachos
Remote Sens. 2020, 12(15), 2502; https://doi.org/10.3390/rs12152502 - 4 Aug 2020
Cited by 60 | Viewed by 13967
Abstract
Land cover classification focused on chlorophyll-rich vegetation detection plays an important role in urban growth monitoring and planning, autonomous navigation, drone mapping, biodiversity conservation, etc. Conventional approaches usually apply the normalized difference vegetation index (NDVI) for vegetation detection. In this paper, we investigate the performance of deep learning and conventional methods for vegetation detection. Two deep learning methods, DeepLabV3+ and our customized convolutional neural network (CNN), were evaluated with respect to their detection performance when training and testing datasets originated from different geographical sites with different image resolutions. A novel object-based vegetation detection approach, which utilizes NDVI, computer vision, and machine learning (ML) techniques, is also proposed. The vegetation detection methods were applied to high-resolution airborne color images consisting of RGB and near-infrared (NIR) bands. RGB color images alone were also used with the two deep learning methods to examine their detection performance without the NIR band. The detection performance of the deep learning methods relative to the object-based detection approach is discussed, and sample images from the datasets are used for demonstration.
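The conventional baseline the paper compares against is easy to state: compute NDVI and threshold it at zero. A minimal Python sketch follows (the paper's NDVI-ML approach adds object-based computer vision and ML steps on top of this; the arrays below are placeholders).

```python
import numpy as np

# Hedged sketch of NDVI thresholding, the conventional vegetation-detection
# baseline; band values are placeholder reflectances, not the paper's data.

def ndvi(nir, red):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

def vegetation_mask(nir, red, threshold=0.0):
    """Boolean map: True where NDVI exceeds the threshold."""
    return ndvi(nir, red) > threshold

nir = np.array([0.45, 0.20, 0.30])
red = np.array([0.08, 0.25, 0.10])
print(vegetation_mask(nir, red))  # [ True False  True]
```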
Figure 1. Block diagram showing the dataset used and the methods applied. mIoU is the mean intersection-over-union metric.
Figure 2. Sample images from the Vasiliko dataset and their annotations (in the land cover map annotations, silver corresponds to barren land, green to tree/shrub/grass, red to urban land, and blue to water): (a) color image for mari_20_3_8; (b) ground truth land cover map for mari_20_3_8; (c) color image for mari_20_42_4; (d) ground truth land cover map for mari_20_42_4.
Figure 3. The color Kimisala images at 10 and 20 cm resolution and their ground truth land cover annotations (yellow: vegetation (tree/shrub); blue: non-vegetation (barren land and archaeological sites)): (a) color image (Kimisala-10); (b) color image (Kimisala-20); (c) land cover map (Kimisala-10); (d) land cover map (Kimisala-20).
Figure 4. Block diagram of DeepLabV3+ [42].
Figure 5. Our customized convolutional neural network (CNN) model structure.
Figure 6. Block diagram for the NDVI-ML method.
Figure 7. DeepLabV3+ vegetation detection results for the two Kimisala test images using the model trained with the Vasiliko dataset: (a) DeepLabV3+ results for Kimisala-10 (green: tree/shrub/grass, silver: barren land, red: urban land); (b) DeepLabV3+ results for Kimisala-20 (same colors); (c) DeepLabV3+ estimated vegetation map for Kimisala-10 (yellow: vegetation, blue: non-vegetation); (d) DeepLabV3+ estimated vegetation map for Kimisala-20 (yellow: vegetation, blue: non-vegetation); (e) detected vegetation with DeepLabV3+ for Kimisala-10; (f) detected vegetation with DeepLabV3+ for Kimisala-20.
Figure 8. Total loss plot for DeepLabV3+ model training from scratch for NDVI-GB bands.
Figure 9. Total loss plot for DeepLabV3+ model training with the NDVI band replaced with the R band and with the pre-trained RGB model as the initial model.
Figure 10. CNN vegetation detection results for the Kimisala-20 test image using the CNN model trained with the Vasiliko dataset: (a) estimated vegetation binary map for Kimisala-20 using CNN (yellow: vegetation, blue: non-vegetation); (b) detected vegetation using CNN for Kimisala-20.
Figure 11. NDVI-only (threshold = 0) vegetation detection results for the two Kimisala test images: (a) NDVI-only estimated vegetation map (Kimisala-10) (yellow: vegetation, blue: non-vegetation); (b) NDVI-only estimated vegetation map (Kimisala-20); (c) detected vegetation with NDVI only (Kimisala-10); (d) detected vegetation with NDVI only (Kimisala-20).
Figure 12. NDVI + ML vegetation detection results for the two Kimisala test images: (a) NDVI + ML estimated vegetation map (Kimisala-10) (yellow: vegetation, blue: non-vegetation); (b) NDVI + ML estimated vegetation map (Kimisala-20); (c) detected vegetation with NDVI + ML (Kimisala-10); (d) detected vegetation with NDVI + ML (Kimisala-20).
Figure 13. Zoomed-in sections of the Vasiliko (training) and Kimisala (test) images.
26 pages, 10839 KiB  
Article
YOLO-Fine: One-Stage Detector of Small Objects Under Various Backgrounds in Remote Sensing Images
by Minh-Tan Pham, Luc Courtrai, Chloé Friguet, Sébastien Lefèvre and Alexandre Baussard
Remote Sens. 2020, 12(15), 2501; https://doi.org/10.3390/rs12152501 - 4 Aug 2020
Cited by 99 | Viewed by 16229
Abstract
Object detection from aerial and satellite remote sensing images has been an active research topic over the past decade. Thanks to the increase in computational resources and data availability, deep learning-based object detection methods have achieved numerous successes in computer vision, and more recently in remote sensing. However, the ability of current detectors to deal with (very) small objects remains limited. In particular, the fast detection of small objects in a large observed scene is still an open question. In this work, we address this challenge and introduce an enhanced one-stage deep learning-based detection model, called You Only Look Once (YOLO)-fine, based on the structure of YOLOv3. Our detector is designed to detect small objects with high accuracy and high speed, enabling real-time applications in operational contexts. We also investigate its robustness to the appearance of new backgrounds in the validation set, thus tackling the domain adaptation issue that is critical in remote sensing. Experimental studies conducted on both aerial and satellite benchmark datasets show significant improvements of YOLO-fine over other state-of-the-art object detectors.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)
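The output geometry that Figure 1 (below) describes for YOLOv3-style detectors can be computed in a few lines. Here is a minimal Python sketch using the caption's default values (not YOLO-fine's own configuration, which modifies the heads for small objects):

```python
# Hedged sketch of YOLOv3-style output geometry: three detection heads at
# strides 32/16/8, each predicting B boxes with (x, y, w, h, objectness)
# plus C class scores per grid cell. Defaults follow the Figure 1 caption.

def head_shapes(input_size=416, num_anchors=3, num_classes=80):
    """Output tensor shapes (S, S, B*(5+C)) for the three YOLOv3 scales."""
    depth = num_anchors * (5 + num_classes)
    return [(input_size // s, input_size // s, depth) for s in (32, 16, 8)]

print(head_shapes())  # [(13, 13, 255), (26, 26, 255), (52, 52, 255)]
```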
Figure 1. Overview of the state-of-the-art You Only Look Once (YOLO) family for one-stage object detection. (a) In YOLOv1, the output is a tensor of dimension (S, S, B × 5 + C), with (S, S) the size of the grid, B the number of predicted boxes per cell, and C the number of classes. By default in [10], S = 7, B = 2, and C = 20 for the PASCAL VOC dataset; for an input image of 448 × 448 pixels, the output is a tensor of size 7 × 7 × 30. (b) In YOLOv2, the output is a tensor of dimension (S, S, B × (5 + C)); the difference is that the class probabilities are calculated for each anchor box. By default in [11], S = 13, B = 5 anchor boxes, and C = 20 for the PASCAL VOC dataset; for an input image of 416 × 416 pixels, the output is a tensor of size 13 × 13 × 125. (c) In YOLOv3, the output consists of three tensors of dimension (S, S, B × (5 + C)), (2S, 2S, B × (5 + C)), and (4S, 4S, B × (5 + C)), corresponding to the three detection levels (scales). By default in [12], S = 13, B = 3 anchor boxes, and C = 80 for the COCO dataset; for an input image of 416 × 416 pixels, the outputs are three tensors of size 13 × 13 × 255, 26 × 26 × 255, and 52 × 52 × 255.
Figure 2. The YOLO-fine framework, including residual blocks (yellow), detection layers (magenta), and upsampling layers (cyan).
Figure 3. Illustrations of the VEDAI dataset [31] in color version (top) and infrared version (bottom).
Figure 4. Example of an image crop from the MUNICH dataset [32].
Figure 5. Examples of 30-cm resolution images in the XVIEW data [33].
Figure 6. Comparison of recall/precision curves for the 25-cm VEDAI512 dataset: (a) color version; (b) infrared version.
Figure 7. Illustration of detection results on VEDAI1024 (color version).
Figure 8. Illustration of detection results on VEDAI1024 (infrared version).
Figure 9. Illustration of detection results on MUNICH.
Figure 10. Illustration of detection results on XVIEW.
Figure 11. Appearance of new backgrounds in validation sets: the 25-cm VEDAI512 dataset (color version) is divided into three sets: one training set with images acquired in rural, forest, and desert environments, and two validation sets with new backgrounds from urban and very dense urban areas.
28 pages, 6414 KiB  
Article
Development of Land Surface Albedo Algorithm for the GK-2A/AMI Instrument
by Kyeong-Sang Lee, Sung-Rae Chung, Changsuk Lee, Minji Seo, Sungwon Choi, Noh-Hun Seong, Donghyun Jin, Minseok Kang, Jong-Min Yeom, Jean-Louis Roujean, Daeseong Jung, Suyoung Sim and Kyung-Soo Han
Remote Sens. 2020, 12(15), 2500; https://doi.org/10.3390/rs12152500 - 4 Aug 2020
Cited by 13 | Viewed by 4562
Abstract
The Korea Meteorological Administration successfully launched Korea's next-generation meteorological satellite, Geo-KOMPSAT-2A (GK-2A), on 5 December 2018. It belongs to the new generation of geostationary Earth orbit (GEO) satellites, which offer high spatial- (0.5–2 km) and high temporal-resolution (10 min) observations over a broad area, here a geographic disk encompassing the Asia–Oceania region. The targeted objective is to enhance our understanding of climate change through a wealth of coherent observations. To that end, we developed an algorithm to map the land surface albedo (LSA), a major Essential Climate Variable (ECV). The retrieval algorithm devoted to GK-2A/Advanced Meteorological Imager (AMI) data was prototyped with Japan's Himawari-8/Advanced Himawari Imager (AHI) data, as the latter has specifications similar to AMI. Our proposed algorithm comprises three major steps: atmospheric correction; bidirectional reflectance distribution function (BRDF) modeling and angular integration; and narrow-to-broadband conversion. For BRDF modeling, an optimization method using normalized reflectance was applied, which improved the quality of the BRDF modeling results, particularly when the number of observations was less than 15. A quality assessment was performed against Moderate Resolution Imaging Spectroradiometer (MODIS) LSA products and ground measurements from Aerosol Robotic Network (AERONET) sites, an Australian and New Zealand flux tower network (OzFlux) site, and a Korea Flux Network (KoFlux) site throughout 2017. Our results show dependable spatial and temporal consistency with MODIS broadband LSA data, and rapid changes in LSA due to snowfall and snowmelt were well expressed in the temporal profile of our results. Our outcomes also show good agreement with ground measurements from the AERONET sites and the OzFlux and KoFlux sites, with root mean square errors (RMSE) of 0.0223 and 0.0306, respectively, close to the accuracy of MODIS broadband LSA. Moreover, our results remain reliable even when clouds are frequently present, such as during the summer monsoon season, showing that they are useful for continuous LSA monitoring.
(This article belongs to the Special Issue Earth Monitoring from A New Generation of Geostationary Satellites)
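The "angular integration" step of such algorithms has a convenient closed form under a kernel-driven BRDF model: white-sky albedo is a fixed linear combination of the kernel weights. Below is a minimal Python sketch using the bihemispherical kernel integrals from the MODIS RossThick/LiSparse-R formulation; the GK-2A implementation may use different kernels or constants, and the example weights are invented.

```python
# Hedged sketch of angular integration under a RossThick-LiSparse-R BRDF:
# white-sky (diffuse) albedo = f_iso + f_vol*I_vol + f_geo*I_geo, with the
# kernels' bihemispherical integrals as fixed constants (MODIS values shown;
# this paper's constants may differ).

K_VOL_WSA = 0.189184    # bihemispherical integral of the RossThick kernel
K_GEO_WSA = -1.377622   # bihemispherical integral of the LiSparse-R kernel

def white_sky_albedo(f_iso, f_vol, f_geo):
    """Diffuse (white-sky) albedo from BRDF model parameters."""
    return f_iso + f_vol * K_VOL_WSA + f_geo * K_GEO_WSA

print(white_sky_albedo(0.20, 0.10, 0.02))  # ~0.191 with invented weights
```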
Figure 1. Map of the AHI full-disk area. The area used in this study is shaded light blue. The red "v", green "+", and blue "x" symbols indicate the locations of the 27 AERONET sites, the Boyagin site of OzFlux, and the KoFlux site, respectively, used in the validation stage.
Figure 2. Flowchart of the GK-2A retrieval algorithm.
Figure 3. Scheme of the synthesis period and retrieval cycle for BRDF modeling in this algorithm. Each horizontal line represents a synthesis period, and the star represents the date produced through that synthesis period.
Figure 4. Scheme of the BRDF inversion optimization process.
Figure 5. Changes in BRDF parameters, normalized reflectance (a,c,e) and TOC reflectance (b,d,f), according to the number of optimization iterations for channel 3.
Figure 6. (a–e) The mean (solid line) and standard deviation (dashed line) of RMSE in BRDF modeling before (red) and after (blue-green) the optimization process, according to the number of observations in the BRDF composite period, for the five channels. (f) The ratio of the number of observations to all valid pixels throughout 2017.
Figure 7. Density scatter plots of AHI LSA versus MODIS LSA: (a) black-sky, snow-free; (b) black-sky, snow-covered; (c) white-sky, snow-free; (d) white-sky, snow-covered. The color bar denotes the frequency ratio in 0.005 × 0.005 bins. "Slope" and "offset" are the slope and intercept of the regression line, and "N_data" is the number of data entries used in this analysis. The regression line and 1:1 line are shown as dashed and solid gray lines, respectively.
Figure 8. Spatial maps of mean bias and RMSE between the AHI and MODIS LSA products in 2017 for black-sky and white-sky albedo: (a) bias for black-sky; (b) bias for white-sky; (c) RMSE for black-sky; (d) RMSE for white-sky. In the vicinity of the equator and in southern China there is no good-quality MODIS LSA, so these areas are marked with the fill value (white).
Figure 9. Temporal comparison of the black-sky AHI and MODIS broadband LSA for five land types throughout 2017: (a) barren; (b) croplands; (c) grasslands; (d) mixed forest; (e) open shrub. The red and blue squares at the top of each plot present LSA from AHI and MODIS, respectively. At the bottom of each plot, a green bar denotes the ratio of observations covered with snow from AHI, and the orange circle indicates a snow flag in the MODIS LSA product.
Figure 10. RMSE (blue) and mean bias (red) of black-sky and white-sky AHI broadband LSA compared to MODIS broadband LSA according to land type and month: (a) RMSE according to land type for black-sky broadband LSA; (b) RMSE according to land type for white-sky broadband LSA; (c) RMSE according to month for black-sky broadband LSA; (d) RMSE according to month for white-sky broadband LSA.
Figure 11. Histogram of the difference between satellite-derived LSA and ground measurements from the 27 AERONET sites and the Boyagin site. N indicates the number of observations used.
Figure 12. Temporal profile of AHI (red) and MODIS (blue) broadband LSA against ground measurements (green) for the CRK site. "N" indicates the number of broadband LSA retrievals over 2017. The validation metrics shown in this figure were obtained from direct comparison.
25 pages, 15582 KiB  
Article
Water Stress Estimation in Vineyards from Aerial SWIR and Multispectral UAV Data
by Zacharias Kandylakis, Alexandros Falagas, Christina Karakizi and Konstantinos Karantzalos
Remote Sens. 2020, 12(15), 2499; https://doi.org/10.3390/rs12152499 - 4 Aug 2020
Cited by 24 | Viewed by 5898
Abstract
Mapping water stress in vineyards at the parcel level is of significant importance for supporting crop management decisions and applying precision agriculture practices. In this paper, a novel methodology based on aerial Shortwave Infrared (SWIR) data is presented for the estimation of water stress in vineyards at canopy scale for entire parcels. In particular, aerial broadband spectral data were collected with an integrated SWIR and multispectral instrument onboard an unmanned aerial vehicle (UAV). Concurrently, in-situ leaf stomatal conductance measurements and supplementary data for radiometric and geometric corrections were acquired. A processing pipeline was designed, developed, and validated, able to execute the required analysis, including data pre-processing, data co-registration, reflectance calibration, canopy extraction and water stress estimation. Experiments were performed in two viticultural regions in Greece, for several vine parcels of four different vine varieties: Sauvignon Blanc, Merlot, Syrah and Xinomavro. The qualitative and quantitative assessment indicated that a single model for the estimation of water stress across all studied vine varieties could not be established (r2 < 0.30). Relatively high correlation rates (r2 > 0.80) were achieved per variety and per individual variety clone. The overall root mean square error (RMSE) for the estimated canopy water stress was less than 29 mmol m−2 s−1, spanning from no-stress to severe canopy stress levels. Overall, the experimental results and validation indicated the high potential of the proposed instrumentation and methodology. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
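As a rough illustration of the per-variety modeling described above, the sketch below maps canopy SWIR reflectance to stomatal conductance g_s with a linear model and bins the result into the four stress levels used in the paper's maps (0–50, 50–150, 150–500, >500 mmol m−2 s−1); the slope and intercept are hypothetical, since the paper fits them separately per variety.

```python
import numpy as np

# Hypothetical linear SWIR-to-g_s model plus the four-level stress
# classification used in the water-stress maps; slope/intercept are
# placeholders for the per-variety regressions (r2 > 0.80 in the paper).

def swir_to_gs(swir_reflectance, slope=-2500.0, intercept=900.0):
    """Estimate stomatal conductance g_s (mmol m-2 s-1) from SWIR reflectance (0-1)."""
    return slope * swir_reflectance + intercept

def stress_class(gs):
    # Class breaks as in the paper's stress maps (mmol m-2 s-1).
    if gs < 50:
        return "severe"
    if gs < 150:
        return "moderate"
    if gs < 500:
        return "mild"
    return "none"

for r in np.array([0.12, 0.22, 0.30]):   # example canopy SWIR reflectances
    print(r, stress_class(swir_to_gs(r)))
```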
Show Figures
Figure 1. Overview of the study areas and studied parcels: (a) the location of the study areas, i.e., Drama and Naousa, on a map of Greece divided into the country's 13 administrative regions; (b) the study area in Drama containing the Sauvignon Blanc (SB) parcel; (c) the study area in Naousa, containing the Xinomavro (X1, X2), Syrah (SY) and Merlot (M1, M2) parcels. Parcel extents are outlined in blue and overlaid on NIR-Red-Green composites of the Parrot Sequoia images and a Bing Satellite background, projected in WGS84 UTM zone 34N.
Figure 2. An overview of the activities performed during the data acquisition field campaigns.
Figure 3. Distribution of in-situ data acquisition points (black dots) on the extracted canopy (green) of the studied vine parcels. Zoomed-in views offer a better outlook of the exact location of measurements on the canopy. Overlaid on Bing Aerial (SY, SB, X2) and Google Satellite (M2, M1, X1) imagery.
Figure 4. Analysis-ready data for parcel SB: (a) multispectral orthoimage reflectance values; (b) Normalized Difference Vegetation Index (NDVI); (c) Shortwave Infrared (SWIR) orthoimage reflectance values; (d) SWIR orthoimage reflectance values in pseudocolor.
Figure 5. The extracted canopy of parcel SB superimposed on the SWIR reflectance mosaic (left). The corresponding Leaf Area Index (LAI) map (right).
Figure 6. (a) The resulting water-stress map for parcel SB in Drama; (b) zoomed-in view: two rows at the northwestern corner of (a); (c) synthetic representation of (b) with the median stress along the vine row. Colormap: severe water stress 0–50 mmol m−2 s−1 (red), moderate water stress 50–150 mmol m−2 s−1 (brown), mild water stress 150–500 mmol m−2 s−1 (cyan), no water stress >500 mmol m−2 s−1 (dark blue).
Figure 7. The produced water stress synthetic representation for the Xinomavro #1 vineyard (X1, Naousa), presented along with the corresponding Leaf Area Index (LAI) map, the established correlation and the computed Digital Surface Model (DSM) in pseudocolor.
Figure 8. The produced water stress synthetic representation for the Xinomavro #2 vineyard (X2, Naousa), presented along with the corresponding LAI map, the established correlation and the computed DSM in pseudocolor.
Figure 9. The produced water stress synthetic representations for the Merlot #1 and Merlot #2 vineyards (M1 & M2, Naousa), presented along with the corresponding LAI maps, the established correlation and the computed DSMs in pseudocolor.
Figure 10. The produced water stress synthetic representation for the Syrah vineyard (SY, Naousa), presented along with the corresponding LAI map, the established correlation and the computed DSM in pseudocolor.
Figure 11. The produced water stress synthetic representation for the Sauvignon Blanc vineyard (SB, Drama), presented along with the corresponding LAI map, the established correlation and the computed DSM in pseudocolor.
Scheme 1. The flowchart of the proposed methodology. Data processing and modeling steps are marked with rounded green frames. Data and geospatial products are marked in purple.
Figure A1. Mean temperature and rainfall graphs for July 2017 at the Mikrokampos weather station (left) and the Naousa weather station (right).
Figure A2. Unregistered (left) and co-registered (right) SWIR and multispectral data. Vine canopy from the SWIR data is displayed in green, and that from the multispectral data in red. Several mis-registration cases (left) can be observed, e.g., as marked with a white dashed ellipse. Data after successful alignment are presented on the right-hand side.
Figure A3. Linear regression model between SWIR reflectance and g_s for the entire dataset.
20 pages, 6809 KiB  
Article
Multi-Year Comparison of CO2 Concentration from NOAA Carbon Tracker Reanalysis Model with Data from GOSAT and OCO-2 over Asia
by Farhan Mustafa, Lingbing Bu, Qin Wang, Md. Arfan Ali, Muhammad Bilal, Muhammad Shahzaman and Zhongfeng Qiu
Remote Sens. 2020, 12(15), 2498; https://doi.org/10.3390/rs12152498 - 4 Aug 2020
Cited by 30 | Viewed by 7232
Abstract
Accurate knowledge of the carbon budget on global and regional scales is critically important for designing mitigation strategies aimed at stabilizing atmospheric carbon dioxide (CO2) emissions. For a better understanding of CO2 variation trends over Asia, in this study, the column-averaged CO2 dry air mole fraction (XCO2) derived from the National Oceanic and Atmospheric Administration (NOAA) CarbonTracker (CT) was compared with that of the Greenhouse Gases Observing Satellite (GOSAT) from September 2009 to August 2019 and with the Orbiting Carbon Observatory 2 (OCO-2) from September 2014 to August 2019. Moreover, monthly averaged time-series and seasonal climatology comparisons were performed separately over five regions of Asia: Central Asia, East Asia, South Asia, Southeast Asia, and Western Asia. The results show that XCO2 from GOSAT is higher than the XCO2 simulated by CT by 0.61 ppm, whereas OCO-2 XCO2 is lower than CT by 0.31 ppm on average over Asia. Mean spatial correlations of 0.93 and 0.89 and average root mean square deviations (RMSDs) of 2.61 and 2.16 ppm were found between CT and GOSAT, and between CT and OCO-2, respectively, implying good agreement between CT and the two satellite datasets. The spatial distribution of the datasets shows that the largest uncertainties exist over the southwestern part of China. Over Asia, NOAA CT shows good agreement with GOSAT and OCO-2 in terms of spatial distribution, monthly averaged time series, and seasonal climatology, with small biases. These results suggest that CO2 from either dataset can be used to understand its role in the carbon budget, climate change, and air quality at regional to global scales. Full article
(This article belongs to the Special Issue Remote Sensing of Greenhouse Gases and Air Pollution)
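The comparison metrics reported above (mean bias, RMSD and spatial correlation) reduce to a few lines once the model and satellite XCO2 fields are co-located on a common grid; the sketch below uses synthetic stand-in arrays for CT and GOSAT.

```python
import numpy as np

# Bias, RMSD and correlation between co-located gridded XCO2 fields;
# the arrays below are synthetic placeholders, not real CT/GOSAT data.

def compare_xco2(ct, sat):
    valid = ~np.isnan(ct) & ~np.isnan(sat)        # compare only shared cells
    diff = ct[valid] - sat[valid]
    bias = diff.mean()
    rmsd = np.sqrt((diff ** 2).mean())
    r = np.corrcoef(ct[valid], sat[valid])[0, 1]
    return bias, rmsd, r

rng = np.random.default_rng(1)
ct = 400.0 + rng.normal(0, 2, (30, 40))           # ppm, synthetic grid
gosat = ct + 0.61 + rng.normal(0, 2.5, (30, 40))  # GOSAT higher by ~0.61 ppm
print(compare_xco2(ct, gosat))                    # bias should be near -0.61
```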
Show Figures
Figure 1. Regional division of Asia.
Figure 2. Distribution of 10-year mean XCO2 over Asia for (a) CarbonTracker (CT) 2019 and (b) the Greenhouse Gases Observing Satellite (GOSAT); (c) their differences (CT-GOSAT); and (d) the total number of GOSAT data points in each grid cell.
Figure 3. Distribution of 5-year mean XCO2 over Asia from (a) CT and (b) the Orbiting Carbon Observatory 2 (OCO-2); (c) their differences (CT-OCO2); and (d) the total number of OCO-2 data points in each grid cell.
Figure 4. The scatter density plot between the XCO2 derived from (a) CT and GOSAT and (b) CT and OCO-2.
Figure 5. Spatial distributions of correlations between (a) CarbonTracker and GOSAT and (b) CarbonTracker and OCO-2, and the mean a posteriori estimate of XCO2 uncertainty in (c) GOSAT and (d) OCO-2.
Figure 6. The time-series variations of monthly averaged XCO2 derived from CT and the two satellite datasets for a period of 5 years, from September 2014 to August 2019, over (a) Asia, (b) Central Asia, (c) East Asia, (d) South Asia, (e) Southeast Asia, and (f) Western Asia. The gaps in the graph show missing data.
Figure 7. The annual growth rate of XCO2 concentration for CT and the satellite datasets over (a) Asia, (b) Central Asia, (c) East Asia, (d) South Asia, (e) Southeast Asia, and (f) Western Asia.
Figure 8. Seasonal distribution of XCO2 from (a) CT and (b) GOSAT, and (c) their differences (CT-GOSAT).
Figure 9. Seasonal distribution of XCO2 from CT (left panel), OCO-2 (middle panel), and their differences (CT-OCO2) (right panel).
8 pages, 211 KiB  
Editorial
Remote Sensing for Land Administration
by Rohan Bennett, Peter van Oosterom, Christiaan Lemmen and Mila Koeva
Remote Sens. 2020, 12(15), 2497; https://doi.org/10.3390/rs12152497 - 4 Aug 2020
Cited by 18 | Viewed by 6992
Abstract
Land administration constitutes the socio-technical systems that govern land tenure, use, value and development within a jurisdiction. The land parcel is the fundamental unit of analysis. Each parcel has identifiable boundaries, associated rights, and linked parties. Spatial information is fundamental. It represents the boundaries between land parcels and is embedded in cadastral sketches, plans, maps and databases. The boundaries are expressed in these records using mathematical or graphical descriptions. They are also expressed physically with monuments or natural features. Ideally, the recorded and physical expressions should align; in practice, however, this may not occur. This means some boundaries may be physically invisible, lacking accurate documentation, or both. Emerging remote sensing tools and techniques offer great potential here. Historically, the measurements used to produce recorded boundary representations were generated from ground-based surveying techniques. The approach was, and remains, entirely appropriate in many circumstances, although it can be time-consuming and costly, and may capture only very limited contextual boundary information. Meanwhile, advances in remote sensing and photogrammetry offer improved measurement speeds, reduced costs, higher image resolutions, and enhanced sampling granularity. Applications of unmanned aerial vehicles (UAV), airborne and terrestrial laser scanning (LiDAR), radar interferometry, machine learning, and artificial intelligence techniques all provide examples. Coupled with emergent societal challenges relating to poverty reduction, rapid urbanisation, vertical development, and complex infrastructure management, the contemporary motivation to use these new techniques is high. Fundamentally, they enable more rapid, cost-effective, and tailored approaches to 2D and 3D land data creation, analysis, and maintenance. This Special Issue hosts papers focusing on this intersection of emergent remote sensing tools and techniques, applied to the domain of land administration. Full article
(This article belongs to the Special Issue Remote Sensing for Land Administration)
15 pages, 6552 KiB  
Article
Preliminary Evaluation and Correction of Sea Surface Height from Chinese Tiangong-2 Interferometric Imaging Radar Altimeter
by Lin Ren, Jingsong Yang, Xiao Dong, Yunhua Zhang and Yongjun Jia
Remote Sens. 2020, 12(15), 2496; https://doi.org/10.3390/rs12152496 - 4 Aug 2020
Cited by 19 | Viewed by 3198
Abstract
In this study, we performed a preliminary comparative evaluation and correction of two-dimensional sea surface height (SSH) data from the Chinese Tiangong-2 Interferometric Imaging Radar Altimeter (InIRA), with the goal of advancing its retrieval. Data from the InIRA were compared with one-dimensional SSH data from the traditional altimeters Jason-2, Saral/AltiKa, and Jason-3. Because the sea state bias (SSB) of the distributed InIRA data has not yet been considered, consistency was maintained by neglecting the SSB for the traditional altimeters. The comparisons show that the InIRA captures the same SSH trends as those obtained by traditional altimeters. However, there is a significant deviation between InIRA and traditional altimeter SSHs; consequently, systematic and parametric biases were analyzed. The parametric bias was found to be related to the incidence angle and the significant wave height. Upon correcting the two biases, the standard deviation was significantly reduced, to 8.1 cm. This value is slightly higher than those of traditional altimeters, which are typically ~7.0 cm. The results indicate that the InIRA is promising for providing a wide swath of SSH measurements. Moreover, we recommend that the InIRA retrieval algorithm consider the two biases to improve SSH accuracy. Full article
(This article belongs to the Section Ocean Remote Sensing)
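The two-stage correction described above can be sketched as follows, assuming the parametric bias is modeled as a first-order polynomial in incidence angle and significant wave height (SWH); the actual functional form fitted in the paper may differ, and all numbers below are synthetic.

```python
import numpy as np

# Sketch: remove a constant systematic bias, then fit and remove a
# parametric bias b(theta, H) = c0 + c1*theta + c2*H by least squares.

def correct_ssh(ssh_inira, ssh_ref, incidence, swh):
    diff = ssh_inira - ssh_ref
    systematic = diff.mean()                      # constant offset
    resid = diff - systematic
    A = np.column_stack([np.ones_like(incidence), incidence, swh])
    c, *_ = np.linalg.lstsq(A, resid, rcond=None) # parametric bias fit
    return ssh_inira - systematic - A @ c

rng = np.random.default_rng(3)
theta, swh = rng.uniform(1, 8, 200), rng.uniform(0.5, 4, 200)
truth = rng.normal(0, 0.05, 200)                  # reference SSH (m)
obs = truth + 0.30 + 0.01 * theta - 0.02 * swh + rng.normal(0, 0.08, 200)
corrected = correct_ssh(obs, truth, theta, swh)
print(np.std(corrected - truth))                  # residual scatter (m)
```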
Show Figures
Figure 1. Location of the Interferometric Imaging Radar Altimeter (InIRA) and colocated traditional altimeter data. The red circles represent the centre locations of the InIRA data, while the blue lines represent the ground tracks of the colocated traditional altimeter data.
Figure 2. Comparison of sea surface height (SSH) measurements between the InIRA and the colocated Saral/AltiKa altimeter: (a) SSH map from the InIRA (large rectangle) and Saral/AltiKa (small circles), (b) SSH profile along the Saral/AltiKa track, and (c) SSH difference along the Saral/AltiKa track.
Figure 3. All colocated SSH maps of the InIRA and traditional altimeters, except for the case described in Figure 2. (a,c,e) show colocations with Jason-2, (b,d) show colocations with Saral/AltiKa, and (f) shows colocations with Jason-3.
Figure 4. SSH comparisons between the InIRA and each type of traditional altimeter: (a) all Jason-2 passes, (b) all Saral/AltiKa passes, and (c) a single Jason-3 pass.
Figure 5. Statistics of the SSH difference between the InIRA and all traditional altimeters after systematic bias correction.
Figure 6. SSH parametric bias analysis for (a) incidence angle, (b) wind speed, and (c) significant wave height.
Figure 7. Parametric model details: (a) residual evolution with fitting degree, (b) fitting, and (c) error bars. In (b), the colours represent the different SWH bins; the dashed line represents the InIRA SSH mean bias (BIAS), while the solid line represents the fitting.
Figure 8. Error analysis after further parametric bias correction. Mean bias (BIAS) plotted against (a) incidence angle, (c) wind speed, and (e) significant wave height. Root mean square error (RMSE) plotted against (b) incidence angle, (d) wind speed, and (f) significant wave height.
Figure 9. SSH difference statistics between the InIRA and traditional altimeters after further parametric bias correction.
32 pages, 1205 KiB  
Review
Deep Learning for Land Use and Land Cover Classification Based on Hyperspectral and Multispectral Earth Observation Data: A Review
by Ava Vali, Sara Comai and Matteo Matteucci
Remote Sens. 2020, 12(15), 2495; https://doi.org/10.3390/rs12152495 - 3 Aug 2020
Cited by 239 | Viewed by 21437
Abstract
Lately, with deep learning outpacing other machine learning techniques in classifying images, we have witnessed a growing interest of the remote sensing community in employing these techniques for land use and land cover classification based on multispectral and hyperspectral images; the number of related publications, almost doubling each year since 2015, attests to that. The advances in remote sensing technologies, and hence the fast-growing volume of timely data available at the global scale, offer new opportunities for a variety of applications. Deep learning, being significantly successful in dealing with Big Data, seems to be a great candidate for exploiting the potential of such complex massive data. However, there are some challenges related to the ground-truth, resolution, and the nature of the data that strongly impact the performance of classification. In this paper, we review the use of deep learning in land use and land cover classification based on multispectral and hyperspectral images, and we introduce the available data sources and datasets used by the literature; we provide readers with a framework to interpret the state-of-the-art of deep learning in this context and offer a platform to approach the methodologies, data, and challenges of the field. Full article
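For orientation, a minimal 3D convolutional patch classifier of the kind surveyed in this review is sketched below in PyTorch; the patch size, band count, layer widths and number of classes are illustrative assumptions, not drawn from any particular study in the review.

```python
import torch
import torch.nn as nn

# Minimal 3D-CNN for hyperspectral patch-level classification, assuming
# 9x9 patches with 30 bands and 10 land-cover classes (all illustrative).

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=(7, 3, 3)),   # joint spectral-spatial filters
    nn.ReLU(),
    nn.MaxPool3d(kernel_size=(2, 1, 1)),      # pool along the spectral axis
    nn.Conv3d(8, 16, kernel_size=(5, 3, 3)),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 5 * 5, 10),            # logits for 10 LULC classes
)

patches = torch.randn(4, 1, 30, 9, 9)         # (batch, channel, bands, h, w)
print(model(patches).shape)                   # -> torch.Size([4, 10])
```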
Show Figures
Figure 1. The three upper-level categories in the land cover classification system (LCCS) hierarchy.
Figure 2. The publication trends in LULC classification of remote sensing data. The graph shows a consistent increase in the number of publications, as well as the portions of publications dedicated to hyperspectral image classification and to the use of deep learning techniques (data retrieved in May 2020).
Figure 3. Left: the wavelength acquisition of spectral bands for multispectral (below) and hyperspectral sampling (above) (taken from [30]). Right: a schema of multispectral and hyperspectral images in the spatial-spectral domain.
Figure 4. The most popular datasets for land cover classification employing deep learning techniques, based on the number of papers referencing the datasets by May 2020.
Figure 5. The machine learning classification frameworks. The upper one shows the common steps of conventional approaches; the lower one shows the modern end-to-end structure, in which feature engineering is replaced with feature learning as part of the classifier training phase.
Figure 6. An example of a convolutional neural network with two layers of convolution and two layers of pooling, for (a) patch-level classification, (b) pixel-level classification and (c) an image-reconstructive model. The resulting cubes after each layer of convolution and pooling are called feature maps.
Figure 7. An illustration of different convolution operations: (a) 1D convolution (with a 1D filter), (b) 2D convolution (with a 2D filter) and (c) 3D convolution (with a 3D filter). For each image, the left part is the input of the convolution and the right is the output. The filter is shown in red.
Figure 8. The general schema of a residual block with the skip or identity connection. The skip connection lets the training process bypass learning the parameters of the inner weight layers (convolutions with/without pooling).
Figure 9. The U-Net model for semantic segmentation. The model is composed of three steps: contraction, with convolutional layers and max pooling; a bottleneck, with a couple of convolutional layers and a drop-out; and expansion, with deconvolutional and convolutional layers and feature map concatenations.
Figure 10. Data augmentation to enlarge the training dataset (ground-truth). The augmented dataset is composed of the original dataset together with its rotated, flipped or translated versions.
Figure 11. A general schema of a generative adversarial network (GAN), depicting how a generative model gets trained and how the trained generator is used to create the ground-truth.
Figure 12. Transfer learning: a model pre-trained on another dataset is employed as a starting point to extract the initial representations from another (smaller) dataset.
Figure 13. An example of a 3D auto-encoder with convolution layers followed by pooling layers in the encoder, and up-sampling layers followed by convolutional layers in the decoder, which learns representations from an unlabelled set of data. In such an unsupervised learning strategy, the encoder learns to encode the data into a set of representations, and the decoder evaluates whether the representations are good enough to reconstruct the original data using the same convolutions.
Figure 14. The general schema of multi-modal data fusion at three major stages of the machine learning pipeline: (a) data fusion at the data preparation stage (early fusion); (b) data fusion at the feature engineering stage (feature fusion); (c) data fusion at the final decision-making level (late fusion).
22 pages, 13695 KiB  
Article
Seasonal Comparisons of Himawari-8 AHI and MODIS Vegetation Indices over Latitudinal Australian Grassland Sites
by Ngoc Nguyen Tran, Alfredo Huete, Ha Nguyen, Ian Grant, Tomoaki Miura, Xuanlong Ma, Alexei Lyapustin, Yujie Wang and Elizabeth Ebert
Remote Sens. 2020, 12(15), 2494; https://doi.org/10.3390/rs12152494 - 3 Aug 2020
Cited by 17 | Viewed by 5242
Abstract
The Advanced Himawari Imager (AHI) on board the Himawari-8 geostationary (GEO) satellite offers spectral and spatial resolutions comparable to low earth orbiting (LEO) sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS), but with hypertemporal image acquisition capability. This raises the possibility of improved monitoring of highly dynamic ecosystems, such as grasslands, including fine-scale phenology retrievals from vegetation index (VI) time series. However, how GEO VI temporal profiles differ from traditional LEO VIs needs to be identified and understood, especially with the new generation of geostationary satellites, whose observation geometries are unfamiliar relative to MODIS, VIIRS, or Advanced Very High Resolution Radiometer (AVHRR) VI time series data. The objectives of this study were to investigate the variations in AHI reflectances and the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and two-band EVI (EVI2) in relation to diurnal phase angle variations, and to compare AHI VI seasonal datasets with MODIS VIs (standard and sun- and view-angle-adjusted VIs) over a functional range of dry grassland sites in eastern Australia. Strong NDVI diurnal variations and negative NDVI hotspot effects were found due to differential red and NIR band sensitivities to diurnal phase angle changes. In contrast, EVI and EVI2 were nearly insensitive to diurnal phase angle variations and displayed nearly flat diurnal profiles without noticeable hotspot influences. At seasonal time scales, AHI NDVI values were consistently lower than MODIS NDVI values, while AHI EVI and EVI2 values were significantly higher than MODIS EVI and EVI2 values, respectively. We attributed the cross-sensor differences in VI patterns to the year-round smaller phase angles and backscatter observations from AHI, in which the sunlit canopies induced a positive EVI/EVI2 response and a negative NDVI response. BRDF adjustments of MODIS VIs to solar noon and to the oblique view zenith angle of AHI resulted in strong cross-sensor convergence of VI values (R2 > 0.94, mean absolute difference < 0.02). These results highlight the importance of accounting for cross-sensor observation geometries in generating compatible AHI and MODIS annual VI time series. The strong agreement found in this study shows promise for cross-sensor applications and suggests that a denser time series can be formed through combined GEO and LEO measurement synergies. Full article
(This article belongs to the Special Issue Earth Monitoring from A New Generation of Geostationary Satellites)
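The three indices compared in this study follow the standard formulations, sketched below for scalar reflectances; these are the usual NDVI, EVI (G = 2.5, C1 = 6, C2 = 7.5, L = 1) and two-band EVI2 equations, rather than anything specific to AHI or MODIS processing.

```python
# Standard vegetation index formulations from red, NIR and blue
# surface reflectances (values in 0-1).

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Enhanced vegetation index with the usual G=2.5, C1=6, C2=7.5, L=1.
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def evi2(nir, red):
    # Two-band EVI (no blue band required).
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

nir, red, blue = 0.35, 0.08, 0.04
print(ndvi(nir, red), evi(nir, red, blue), evi2(nir, red))
```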
Show Figures
Figure 1. Location of the four Australian grassland/pasture sites on the Dynamic Land Cover Data (DLCD) map across a range of latitudes. The DLCD map (version 2.1) is from Geoscience Australia, http://www.ga.gov.au/scientific-topics/earth-obs/accessing-satellite-imagery/landcover.
Figure 2. Diagram of Himawari-8 diurnal sun-sensor view geometry variations, with fixed view zenith angle (VZA) and diurnally varying solar zenith angle (SZA) and relative azimuth angle (RAA). The angle between the illumination source and the sensor detector is called the phase angle (PA).
Figure 3. Workflow diagram of Advanced Himawari Imager (AHI) and Moderate Resolution Imaging Spectroradiometer (MODIS) data extraction, BRDF corrections, AHI daily compositing, and cross-sensor analyses.
Figure 4. Diurnal and seasonal variations in solar zenith angles (left) and relative azimuth angles (right) for four seasonal times of the year at the Redesdale grassland pasture site.
Figure 5. Diurnal variations of solar zenith angles (left) and relative azimuth angles (right) across the four grassland sites for one date (1 July 2017).
Figure 6. Example of the overlap and orientation of the geostationary AHI 1 km pixel sampling area (3 × 3, blue gridded lines) with polar-orbiting MODIS 250 m pixels (yellow area with black grid lines).
Figure 7. AHI 10-minute NDVI, EVI and EVI2 values over the Redesdale site for the 2016 and 2017 grass growing seasons. The color scale depicts the diurnal and seasonal solar zenith angle (SZA) geometries of the observations.
Figure 8. Diurnal patterns of AHI spectral reflectances and vegetation indices during four seasonal grass growing periods and across the four grassland sites: austral autumn (end of March), winter (July), austral spring (late September), and early summer (December).
Figure 9. Comparison of grass seasonal NDVI (left), EVI (middle), and EVI2 (right) profiles at the four grassland sites (2016 and 2017), as depicted by AHI 10-minute data (green points), daily composited data (red points), and 3-day smoothed time series (blue line).
Figure 10. Comparison of smoothed, solar-noon daily composited (A) AHI NDVI, (B) AHI EVI, and (C) AHI EVI2 with the respective standard MODIS 16-day NDVI, EVI and EVI2 over the 2016 and 2017 growing seasons at the Redesdale grassland site. Associated MODIS and AHI observation geometries are shown for (D) solar zenith angle (SZA), (E) view zenith angle (VZA), and (F) phase angle.
Figure 11. Comparison of smoothed, daily composite AHI NDVI (left), EVI (center), and EVI2 (right) with the standard Terra MODIS VIs, the MODIS VIs BRDF-adjusted to solar noon, and the MODIS VIs BRDF-adjusted to solar noon and the AHI view angle, over the 2016 and 2017 growing seasons at the four grassland sites.
Figure 12. Global cross-site relationships between AHI and MODIS VI values using the Terra MODIS standard product (left), MODIS sun-angle-adjusted to solar noon/nadir view (middle), and MODIS adjusted to solar noon and the AHI fixed view angle (right). R2 was calculated using data points across the four sites (p-values were lower than 2.2 × 10−16 for all cases).
Figure 13. Global cross-site relationships (mean absolute difference, R2, and slope) between daily composited AHI reflectances (red, NIR, blue) and VIs (NDVI, EVI, EVI2) and the equivalent MODIS values using the Terra and Aqua MODIS standard products, MODIS values BRDF-adjusted to solar noon/nadir view, and values BRDF-adjusted to solar noon and the AHI fixed view zenith angle.
18 pages, 5059 KiB  
Article
Crop Mapping from Sentinel-1 Polarimetric Time-Series with a Deep Neural Network
by Yang Qu, Wenzhi Zhao, Zhanliang Yuan and Jiage Chen
Remote Sens. 2020, 12(15), 2493; https://doi.org/10.3390/rs12152493 - 3 Aug 2020
Cited by 30 | Viewed by 6088
Abstract
Timely and accurate agricultural information is essential for food security assessment and agricultural management. Synthetic aperture radar (SAR) systems are increasingly used in crop mapping, as they provide all-weather imagery. In particular, the Sentinel-1 sensor provides dense time-series data, thus offering a unique opportunity for crop mapping. However, in most studies, the Sentinel-1 complex backscatter coefficient is used directly, which limits the potential of Sentinel-1 in crop mapping; meanwhile, most existing methods are not tailored to the task of crop classification on time-series polarimetric SAR data. To solve these problems, we present a novel deep learning strategy in this research. Specifically, we collected Sentinel-1 time-series data over two study areas. The Sentinel-1 image covariance matrix is used as the input to maintain the integrity of the polarimetric information. Then, a depthwise separable convolution recurrent neural network (DSCRNN) architecture is proposed to characterize crop types from multiple perspectives and achieve better classification results. The experimental results indicate that the proposed method achieves better accuracy in complex agricultural areas than other classical methods. Additionally, the variable importance provided by a random forest (RF) illustrated that the covariance vector has a far greater influence than the backscatter coefficient. Consequently, the strategy proposed in this research is effective and promising for crop mapping. Full article
(This article belongs to the Special Issue Deep Learning and Remote Sensing for Agriculture)
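A depthwise separable convolution followed by a recurrent layer, in the spirit of the DSCRNN described above, can be sketched as follows in PyTorch; the layer sizes, the use of an LSTM, and the number of classes are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Depthwise separable 1D convolution over a polarimetric time series,
# followed by a recurrent layer; shapes and widths are illustrative.

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Per-channel temporal filtering (groups=in_ch), then 1x1 mixing.
        self.depthwise = nn.Conv1d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

conv = DepthwiseSeparableConv1d(4, 32)   # 4 covariance-matrix features
rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
head = nn.Linear(64, 8)                  # e.g., 8 crop classes

x = torch.randn(2, 4, 20)                # 2 pixels, 4 features, 20 dates
h = conv(x).transpose(1, 2)              # -> (batch, time, channels)
out, _ = rnn(h)
logits = head(out[:, -1])                # classify from the last time step
print(logits.shape)                      # -> torch.Size([2, 8])
```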
Show Figures
Figure 1. The study areas in California. The crop areas of interest (AOI) in study areas 1 and 2 with true-color composites of Sentinel-2: (a) study area 2, dated 2019/02/10; (b) study area 1, dated 2018/03/12.
Figure 2. Data acquisition dates in the two study areas.
Figure 3. Ground truth maps of the study areas. (a) RGB image of study area 1 from Sentinel-2 on 2018/03/12; (d) RGB image of study area 2 from Sentinel-2 on 2019/02/10. (b,e) The CDL data (major crop types) for 2018 and 2019, respectively. (c,f) Manually labeled ground reference data.
Figure 4. The general view of the proposed depthwise separable convolution recurrent neural network (DSCRNN).
Figure 5. Comparison between (a) conventional convolutional neural networks (CNNs) and (b) depthwise separable CNNs.
Figure 6. Classification of (a) Net A, (b) Net B, and (c) Net C.
Figure 7. Maps of study area 1: (a) ground truth map, (b) SVM, (c) RF, (d) Conv1D, (e) LSTM, (f) Net A, (g) Net B, (h) Net C, and (i) DSCRNN.
Figure 8. Maps of study area 2: (a) ground truth map, (b) SVM, (c) RF, (d) Conv1D, (e) LSTM, (f) Net A, (g) Net B, (h) Net C, and (i) DSCRNN.
Figure 9. Importance validation with the random forest (RF) classifier. Each bar represents the sum of the importance of all variables; for example, the first orange bar represents the sum of the importance of the four variables in the Sentinel-1 covariance matrix on January 5.
24 pages, 6651 KiB  
Article
Automated Geometric Quality Inspection of Prefabricated Housing Units Using BIM and LiDAR
by Yi Tan, Silin Li and Qian Wang
Remote Sens. 2020, 12(15), 2492; https://doi.org/10.3390/rs12152492 - 3 Aug 2020
Cited by 43 | Viewed by 6799
Abstract
Traditional quality inspection of prefabricated components is labor intensive, time-consuming, and error prone. This study developed an automated geometric quality inspection technique for prefabricated housing units using building information modeling (BIM) and light detection and ranging (LiDAR). The proposed technique collects 3D laser-scanned data of the prefabricated unit using LiDAR; these data contain accurate as-built surface geometries of the unit. The BIM model of the prefabricated unit, in turn, contains its as-designed geometries. The scanned data and BIM model are then automatically processed to inspect the geometric quality of individual elements of the prefabricated unit, including both structural elements and mechanical, electrical, and plumbing (MEP) elements. To validate the proposed technique, experiments were conducted on two prefabricated bathroom units (PBUs). The inspection results showed that the proposed technique provides accurate quality inspection, with 0.7 mm and 0.9 mm accuracy for structural and MEP elements, respectively. In addition, the experiments showed that the proposed technique greatly improves inspection efficiency in terms of time and labor. Full article
(This article belongs to the Section Engineering Remote Sensing)
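One of the surface-quality checks in this pipeline, flatness, reduces to a plane fit plus point-to-plane distances. The sketch below fits the plane by SVD and reports a peak-to-valley flatness on synthetic scan points; the SVD fit and the peak-to-valley definition are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

# Flatness of one planar element surface: fit a best-fit plane to the
# scan points, then measure the spread of signed point-to-plane distances.

def flatness(points):
    """points: (N, 3) laser scan points of one extracted planar surface."""
    centroid = points.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    dist = (points - centroid) @ normal   # signed point-to-plane distances
    return dist.max() - dist.min()        # peak-to-valley flatness

rng = np.random.default_rng(4)
pts = np.column_stack([rng.uniform(0, 2.0, 1000),
                       rng.uniform(0, 2.4, 1000),
                       rng.normal(0, 0.0007, 1000)])  # ~0.7 mm scan noise
print(f"flatness: {flatness(pts) * 1000:.1f} mm")
```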
Show Figures
Figure 1. Manual geometric quality inspection of prefabricated components.
Figure 2. Prefabricated housing units in Singapore: (a) prefabricated prefinished volumetric construction (PPVC) and (b) prefabricated bathroom unit (PBU).
Figure 3. Measurement of dimensions (e.g., L1 to L4) of a structural element.
Figure 4. Measurement of the dimensions (d1 to d4) and positions (p1 to p4) of an opening on a structural element.
Figure 5. Measurement of straightness d_st of a structural element.
Figure 6. Measurement of squareness d_sq of a structural element.
Figure 7. Measurement of twist d_tw of a prefabricated element.
Figure 8. Measurement of flatness d_fl of a prefabricated element.
Figure 9. Measurement of the dimension (d_p) and position (p_h and p_v) of a cylindrical pipe.
Figure 10. The overview and detailed five-step process of the developed geometric quality inspection technique.
Figure 11. Scanned data quality with regard to (a) ranging error and (b) spatial resolution.
Figure 12. Relationships between scanning parameters and scanned data quality.
Figure 13. Data cleansing: (a) selection of scanned data surrounding the target unit, (b) 3D view of the selected scanned data, (c) scanned data after removing the roof and floor, and (d) extraction of the target unit using region growing.
Figure 14. Scan-BIM registration: (a) scanned data in the LiDAR's coordinate system, (b) as-designed BIM in a user-defined coordinate system, and (c) scanned data after registration.
Figure 15. Extraction of as-built surfaces and edges: (a) extraction of surfaces, (b) extraction of the last scan points in each row/column for an edge, and (c) estimation of an edge as a set of edge points.
Figure 16. Extraction of estimated edge lines and corner points from surface points and edge points for the inspection of dimensions and locations of structural elements and openings.
Figure 17. Surface quality inspection for structural elements: (a) straightness inspection, (b) squareness inspection, (c) twist inspection, and (d) flatness inspection.
Figure 18. Extraction of pipes: (a) scanned data near the element and (b) cylinder fitting.
Figure 19. Example of inspection-related properties, and their values, of a structural element.
Figure 20. Two PBUs for validation experiments.
Figure 21. Optimal scanning location for PBU B obtained from simulations.
Figure 22. Experimental process for the validation experiments: (a) scanned data for PBU B after data pre-processing, (b) extraction of three planar surfaces S4 to S6 for structural element inspection, (c) recognition of MEP4 and MEP5 (as shown in Figure 20) for MEP element inspection, and (d) storage of inspection information of MEP4 in the BIM model.
24 pages, 16818 KiB  
Article
Multi-Sensor Approach to Improve Bathymetric Lidar Mapping of Semi-Arid Groundwater-Dependent Streams: Devils River, Texas
by Kutalmis Saylam, Aaron R. Averett, Lucie Costard, Brad D. Wolaver and Sarah Robertson
Remote Sens. 2020, 12(15), 2491; https://doi.org/10.3390/rs12152491 - 3 Aug 2020
Cited by 6 | Viewed by 5513
Abstract
Remote sensing technology enables detecting, acquiring, and recording information about objects and locations from a distance. Airborne Lidar bathymetry (ALB) is an active, non-imaging remote sensing technology for measuring the depths of shallow and relatively transparent water bodies using light beams from an airborne platform. In this study, we acquired Lidar datasets at near-infrared and visible (green) wavelengths with the Leica Airborne Hydrography AB Chiroptera-I system over the Devils River basin of southwestern Texas. Devils River is a highly groundwater-dependent stream that flows 150 km from source springs to Lake Amistad on the lower Rio Grande. To improve spatially distributed stream bathymetry in aquatic habitats of species of state and federal conservation interest, we conducted supplementary water-depth observations using other remote sensing technologies integrated with the airborne Lidar datasets. Ground penetrating radar (GPR) mapped the river bottom where vegetation impeded other active sensors from attaining depth measurements. We confirmed the accuracy of the bathymetric Lidar datasets with a differential global positioning system (GPS) and compared the findings to sonar and GPR measurements. The study revealed that seamless bathymetric and geomorphic mapping of karst environments in complex settings (e.g., aquatic vegetation, entrained air bubbles, riparian zone obstructions) requires the integration of a variety of terrestrial and remotely operated survey methods. We apply this approach to the Devils River of Texas; however, the methods are applicable to similar streams globally. Full article
(This article belongs to the Special Issue Remote Sensing for Biodiversity Mapping and Monitoring)
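The core ALB depth computation follows from the timing of two returns: the NIR return marks the water surface and the green return the bottom, with the extra travel time slowed by the water's refractive index and projected through the refracted beam angle (Snell's law). The sketch below assumes a refractive index of 1.33 for fresh water and this simple two-return model; it is an illustration of the principle, not the Chiroptera-I processing chain.

```python
import math

# Depth from the surface-to-bottom return lag of a bathymetric lidar pulse.

C = 299_792_458.0   # speed of light in air (m/s), approximately
N_WATER = 1.33      # refractive index of fresh water (assumption)

def lidar_depth(dt_seconds, incidence_deg):
    """Depth from the NIR(surface)-to-green(bottom) time lag dt, for a beam
    hitting the surface at the given off-nadir incidence angle."""
    theta_w = math.asin(math.sin(math.radians(incidence_deg)) / N_WATER)
    slant_in_water = (C / N_WATER) * dt_seconds / 2.0   # one-way in-water path
    return slant_in_water * math.cos(theta_w)           # vertical depth

# A 30 ns lag at 20 degrees off-nadir corresponds to roughly 3.3 m of water.
print(round(lidar_depth(30e-9, 20.0), 2))
```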
Show Figures
Figure 1. Scattering and refraction occur when light beams traverse the air–water interface. Near-infrared (NIR) beams are absorbed either at the surface or in the immediate water column, whereas some reflect back to the receiver and indicate the water surface. Distances (range) from the surface and bottom can be computed by measuring the flight duration of the pulses and adjusting for the varying speed of light in air and water.
Figure 2. Survey area and the Devils River watershed by Dolan Falls. In situ locations are marked L1-11, and purple lines indicate the airborne survey paths.
Figure 3. A representation of the Palmer scanner and its elliptical scanning pattern on Leica AHAB Chiroptera-I sensors.
Figure 4. (A) TxDOT-owned Cessna 206 (aircraft registration N147TX) and (B) Leica AHAB Chiroptera-I installation.
Figure 5. River bottom captured with a submerged GoPro HERO5 camera.
Figure 6. Surveying stream depth using a dual-beam Garmin echosounder.
Figure 7. Dense submerged aquatic habitat in the upper basin.
Figure 8. GSSI SIR-3000 GPR mounted on an inflatable boat.
Figure 9. Graph indicating the minimal absorption coefficient in the 200–800 nm spectral region, attained in pure freshwater (modified after Beć and Huck [40]).
Figure 10. Conceptual representation of Lidar-, sonar-, GPS- and GPR-derived elevation comparisons using Delaunay triangulation and a point-to-patch relationship.
Figure 11. Quadratic relationship between Kd (m−1) and Dmax (m) in Devils River, indicating the lessening of Dmax in parallel with the escalation of Kd.
Figure 12. (A) In situ Location 7, showing transparent and shallow water (turbidity: 1.46 NTU). (B) In situ Location 9, showing an invisible bottom and 3.3 m depth (turbidity: 2.79 NTU).
Figure 13. Representation of Class 0 (NIR + green) versus Class 5 (green wavelength only) returns, indicating the water-surface detection challenges in inland pools. Note the "noisy" characteristic of the Class 5 returns, whose median surface is 9 cm lower than that of the Class 0 returns.
Figure 14. Goodness-of-fit between Lidar-derived patch elevations and GPS measurements in the upper basin.
Figure 15. Regression of sonar-derived elevations versus Lidar-derived TIN patch (W_B) elevations in the upper basin. Aquatic vegetation is evident at the surface and in deeper parts of the water column.
Figure 16. Regression of sonar-derived elevations to Lidar-derived TIN patch (W_B) elevations in the lower basin. Sonar measured deeper than Lidar due to the depth of the Dolan Falls plunge pool area.
Figure 17. GPR measurements compared to Lidar TIN patches using the least-squares method.
Figure 18. Solid linear-fit line revealing an improved regression between Lidar- and GPR-derived depths (3.3 cm/ns versus 3.7 cm/ns) using the adjusted EM propagation.
Figure 19. GPR EM "B-scan" representing survey line #117. Lidar pulses (blue) were blocked by submerged vegetation, while GPR (orange) penetrated to measure the riverbed.
Figure 20. Maps showing upper basin depths measured using (A) GPS, (B) sonar, and (C) GPR, and (D) a final depth map with all measurements combined.
29 pages, 13310 KiB  
Article
Segmentation of Vegetation and Flood from Aerial Images Based on Decision Fusion of Neural Networks
by Loretta Ichim and Dan Popescu
Remote Sens. 2020, 12(15), 2490; https://doi.org/10.3390/rs12152490 - 3 Aug 2020
Cited by 8 | Viewed by 4646
Abstract
The detection and evaluation of flood damage in rural zones are of great importance for farmers, local authorities, and insurance companies. To this end, the paper proposes an efficient system based on five neural networks to assess the degree of flooding and the remaining vegetation. After a preliminary analysis, the following neural networks were selected as primary classifiers: the you only look once network (YOLO), a generative adversarial network (GAN), AlexNet, LeNet, and a residual network (ResNet). Their outputs were connected in a decision fusion scheme, as a new convolutional layer, considering two sets of components: (a) the weights, corresponding to the proven accuracy of the primary neural networks in the validation phase, and (b) the probabilities generated by the neural networks as primary classification results in the operational (testing) phase. Thus, a subjective behavior (individual interpretation by single neural networks) was transformed into a more objective behavior (interpretation based on fusion of information). The images, which are difficult to segment, were obtained from an unmanned aerial vehicle photogrammetry flight after a moderate flood in a rural region of Romania and make up our database. For segmentation and evaluation of the flooded zones and vegetation, the images were first decomposed into patches and, after classification, the resulting marked patches were re-composed into segmented images. From the performance analysis point of view, better results were obtained with the proposed system than with the neural networks taken separately, and with respect to related works in the literature. Full article
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Applications)
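The decision-fusion scheme described above combines each primary classifier's class probabilities with weights derived from its validation accuracy; a minimal sketch with illustrative weights and probabilities follows (the actual accuracies and outputs are those of the five trained networks, not the numbers below).

```python
import numpy as np

# Weighted decision fusion: fused score per class = sum_i w_i * p_i,
# where w_i is network i's validation accuracy (normalized) and p_i its
# per-patch class-probability vector. Numbers below are illustrative.

def fuse(probs, weights):
    """probs: (n_nets, n_classes) per-patch outputs; weights: (n_nets,)."""
    w = np.asarray(weights) / np.sum(weights)     # normalize the accuracies
    fused = w @ np.asarray(probs)                 # weighted sum per class
    return fused, int(np.argmax(fused))

# Five networks (YOLO, GAN, AlexNet, LeNet, ResNet), three classes (F, V, R).
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.4, 0.5, 0.1],
         [0.5, 0.4, 0.1], [0.8, 0.1, 0.1]]
weights = [0.95, 0.90, 0.88, 0.85, 0.93]          # validation accuracies
fused, label = fuse(probs, weights)
print(fused, ["flood", "vegetation", "rest"][label])
```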
Show Figures
Figure 1. System components: (a) UAV fixed wing; (b) ground terminal; and (c) payload photo.
Figure 2. Image acquisitions: (a) the UAV trajectory—photogrammetry (cropped); (b) the orthophotoplan (cropped).
Figure 3. Proposed Decision YOLO architecture.
Figure 4. The architecture of the generative adversarial network (GAN) generator: (a) the encoder; (b) the decoder. R—rectified linear unit, LR—leaky rectified linear unit, C—convolutional layer, BN—batch normalization, DO—dropout layer, and T—tanh function.
Figure 5. The architecture of the GAN discriminator. LR—leaky rectified linear unit, C—convolutional layer, BN—batch normalization, and S—sigmoid function.
Figure 6. Block diagram of the cGAN-based system for flood and vegetation detection. Notation: IL—image for learning; RM—real mask; G—generator; FM—fake mask; RP—real pair; FP—fake pair; D—discriminator; UM—unit matrix; UC—unit comparator; DW—weights for the discriminator; Σ—adder; NC—null comparator; NM—null matrix; GW—weights optimizer for the generator; DC—comparator for D with UM; GC—comparator for G between RM and FM.
Figure 7. LeNet architecture.
Figure 8. AlexNet architecture.
Figure 9. ResNet architecture (A and B—repetitive modules described in Figure 10, FC—fully connected layer, F—flood-type patch, V—vegetation-type patch, and ×n—number of module repetitions).
Figure 10. Repetitive modules in the ResNet configuration.
Figure 11. Flow chart of the global system (F—flood class, V—vegetation class, and R—rest class).
Figure 12. Architecture of the proposed image segmentation system. I—input image, ID—image decomposition module, P_ij—patch (i,j) as input, PC_i—primary classifier i, p_i—probability provided by PC_i, w_i—the weight associated with PC_i, FBC—fusion-based classifier, S_ij—patch (i,j) as output (segmented), SIC—segmented image recomposition module, and SI—segmented image.
Figure 13. Examples of patches for flood (F), vegetation (V), and rest (R) for the learning phase (our dataset).
Figure 14. Flood and vegetation segmentation (manual and predicted) for individual classifiers (validation phase)—our dataset.
Figure 15. Confusion matrices for flood (a) and vegetation (b) detection in the case of primary classifier PC_1.
Figure 16. Examples of flood and vegetation segmentation by the fusion-based classifier (testing phase)—our dataset.
13 pages, 3640 KiB  
Article
Causal Analysis of Accuracy Obtained Using High-Resolution Global Forest Change Data to Identify Forest Loss in Small Forest Plots
by Yusuke Yamada, Toshihiro Ohkubo and Katsuto Shimizu
Remote Sens. 2020, 12(15), 2489; https://doi.org/10.3390/rs12152489 - 3 Aug 2020
Cited by 10 | Viewed by 3519
Abstract
Identifying areas of forest loss is a fundamental aspect of sustainable forest management. Global Forest Change (GFC) datasets developed by Hansen et al. (in Science 342:850–853, 2013) are publicly available, but the accuracy of these datasets for small forest plots has not been assessed. We used a forest-wide polygon-based approach to assess the accuracy of using GFC data to identify areas of forest loss in an area containing numerous small forest plots. We evaluated the accuracy of detection of individual forest-loss polygons in the GFC dataset in terms of a “recall ratio”, the ratio of the spatial overlap of a forest-loss polygon determined from the GFC dataset to the area of a corresponding reference forest-loss polygon, which we determined by visual interpretation of aerial photographs. We analyzed the structural relationships of recall ratio with area of forest loss, tree species, and slope of the forest terrain by using linear non-Gaussian acyclic modelling. We showed that only 11.1% of forest-loss polygons in the reference dataset were successfully identified in the GFC dataset. The inferred structure indicated that recall ratio had the strongest relationships with area of forest loss, forest tree species, and height of the forest canopy. Our results indicate the need for careful consideration of structural relationships when using GFC datasets to identify areas of forest loss in regions where there are small forest plots. Moreover, further studies are required to examine the structural relationships for accuracy of land-use classification in forested areas in various regions and with different forest characteristics. Full article
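The recall and precision ratios defined in this abstract are plain area ratios between overlapping polygons. The sketch below shows one way to compute them with shapely, assuming both polygons are in the same projected coordinate system; the coordinates are invented for illustration and are not the authors' data.

```python
from shapely.geometry import Polygon

# Reference forest-loss polygon (from aerial-photo interpretation) and a
# corresponding GFC forest-loss polygon; coordinates are illustrative only.
reference = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
gfc = Polygon([(40, 40), (160, 40), (160, 160), (40, 160)])

# Recall ratio: overlap area relative to the reference polygon's area.
recall_ratio = reference.intersection(gfc).area / reference.area
# Precision ratio: overlap area relative to the GFC polygon's area.
precision_ratio = reference.intersection(gfc).area / gfc.area

print(f"recall = {recall_ratio:.2f}, precision = {precision_ratio:.2f}")
```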
Show Figures

Graphical abstract

Figure 1: (a,b) Location map and (c) forest type distribution map of the study area. Forest type distribution was visually interpreted from aerial photographs acquired in 2014.
Figure 2: Examples of forest-loss polygons of (a,b) the reference dataset and (c,d) the GFC dataset, with the Landsat composite images in (a,c) 2014 and (b,d) 2017 that cover June 2014 to December 2016. The Landsat composite images were used only for visualization.
Figure 3: Schematic illustration of a forest-loss polygon including non-forest. Four percent of the forest-loss polygon in this figure is covered with non-forest in the lower-resolution land classification map.
Figure 4: Schematic illustration of the method for calculating recall and precision ratios.
Figure 5: Comparison of violin and box plots of the distributions of areas of forest loss during 2014 to 2016 derived from the GFC and reference datasets.
Figure 6: Comparison of violin and box plots of the distribution of slope for the entire forested part of the study area with the distributions for areas of forest loss during 2014 to 2016 derived from the reference and GFC datasets.
Figure 7: Comparison of the proportions of forest type in the entire forested part of the study area in 2014 (reference dataset) with those in areas of forest loss from 2014 to 2016 derived from the GFC and reference datasets.
Figure 8: Comparison of violin plots of the distributions of recall ratios for forest-loss polygons in the reference dataset in five size ranges. Dashed horizontal lines show quartile and median values. For each size range, the black dots mark the proportion of forest-loss polygons successfully detected (recall ratio >50%) in the reference dataset.
Figure 9: Comparison of violin plots of the distributions of precision ratios for forest-loss polygons in the reference dataset in five size ranges. Dashed horizontal lines show quartile and median values. For each size range, the black dots mark the proportion of forest-loss polygons successfully detected (precision ratio >50%) in the GFC dataset.
Figure 10: Structural relationships inferred by application of the DirectLiNGAM algorithm. The arrows show directions of influence, and the numbers beside the arrows are the inferred values of the coefficients (B) of Equation (2). Positive (negative) values indicate positive (negative) effects. A higher |B| indicates a stronger relationship.
19 pages, 5838 KiB  
Article
Mapping the Essential Urban Land Use in Changchun by Applying Random Forest and Multi-Source Geospatial Data
by Shouzhi Chang, Zongming Wang, Dehua Mao, Kehan Guan, Mingming Jia and Chaoqun Chen
Remote Sens. 2020, 12(15), 2488; https://doi.org/10.3390/rs12152488 - 3 Aug 2020
Cited by 39 | Viewed by 5449
Abstract
Understanding the urban spatial pattern of land use is of great significance to urban land management and resource allocation. Urban space has strong heterogeneity, and many studies have therefore focused on the identification of urban land use. The emergence of multiple new types of geospatial data provides an opportunity to investigate methods of mapping essential urban land use. The popularization of street view images, represented by Baidu Maps, is beneficial to the rapid acquisition of high-precision street view data, which has attracted the attention of scholars in the field of urban research. In this study, OpenStreetMap (OSM) was used to delineate parcels, which were treated as the basic mapping units. Semantic segmentation of street view images was combined with point of interest (POI), Sentinel-2A, and Luojia-1 nighttime light data to enrich the multi-dimensional description of urban parcels. Furthermore, random forest (RF) was applied to determine the urban land use categories. The results show that street view elements are related to urban land use in terms of spatial distribution. It is reasonable and feasible to describe urban parcels according to the characteristics of street view elements. With the participation of street view features, the overall accuracy reached 79.13%. The contribution of street view features to the optimal classification model reached 20.6%, and these features were more stable than the POI features. Full article
(This article belongs to the Special Issue Urban Land Use Mapping and Analysis in the Big Data Era)
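The classification step, describing each OSM parcel by a feature vector (street-view element fractions, POI statistics, spectral indices, nighttime-light values) and labelling it with a random forest, follows the standard scikit-learn pattern. Below is a schematic sketch on synthetic data; the feature layout and parameters are assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per OSM parcel: [sky, road, vegetation, building fractions from
# street view; POI density; NDVI; NDBI; nighttime-light DN] (synthetic).
X = rng.random((500, 8))
y = rng.integers(0, 5, 500)          # 5 essential land-use categories

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# oob_score mirrors the OOB_SCORE tuning shown in the paper's Figure 7.
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

print("OOB score:", rf.oob_score_)
print("feature importances:", rf.feature_importances_.round(3))
```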
Show Figures

Graphical abstract

Figure 1: Location of the study area. The base map is from ChinaOnlineCommunityENG (MapServer).
Figure 2: OpenStreetMap (OSM) data processing flow diagram. (a) Filtered OSM; (b) buffer zones; (c) centerlines.
Figure 3: Schematic diagram of the street view acquisition.
Figure 4: Main flowchart of using street view data and random forest to identify essential urban land use. DN: digital number; NDBI: normalized difference built-up index; NDVI: normalized difference vegetation index.
Figure 5: Example of the street view image segmentation results. (a) Original image; (b) segmentation result.
Figure 6: Spatial distribution characteristics of the street view elements. (a) Sky; (b) roads; (c) vegetation; (d) buildings.
Figure 7: OOB_SCORE values under different parameter combinations. (a) S1: street view feature participation; (b) S2: no street view feature participation.
Figure 8: The importance of the different categories of features in the classification. Dens_Res: density of residential points; M_Buil: average value of building elements; Pro_Edu: proportion of educational points; Pro_Res: proportion of residential points; Std_GVR: standard deviation of GVR.
Figure 9: Essential urban land use map of the downtown area of Changchun.
Figure 10: Distribution of inconsistent parcels.
14 pages, 10663 KiB  
Letter
Automatic Mapping of Landslides by the ResU-Net
by Wenwen Qi, Mengfei Wei, Wentao Yang, Chong Xu and Chao Ma
Remote Sens. 2020, 12(15), 2487; https://doi.org/10.3390/rs12152487 - 3 Aug 2020
Cited by 82 | Viewed by 6931
Abstract
Massive landslides over large regions can be triggered by heavy rainfalls or major seismic events. Mapping regional landslides quickly is important for disaster mitigation. In recent years, deep learning methods have been successfully applied in many fields, including automatic landslide identification. In this work, we propose a deep learning approach, the ResU-Net, to map regional landslides automatically. This method and a baseline model (U-Net) were tested in Tianshui city, Gansu province, where a heavy rainfall triggered more than 10,000 landslides in July 2013. All models were run on a 3-band (near infrared, red, and green) GeoEye-1 image with a spatial resolution of 0.5 m. At such a fine spatial resolution, the study area is spatially heterogeneous. The tested study area is 128 km2, 80% of which was used to train the models and the remaining 20% to validate their accuracy. The proposed ResU-Net achieved higher accuracy than the baseline U-Net model in this mountain region, improving F1 by 0.09. Compared with the U-Net model, the proposed ResU-Net performs better in discriminating landslides from bare floodplains along river valleys and unplanted terraces. By incorporating environmental information, the ResU-Net may also be applied to other landslide mapping tasks, such as landslide susceptibility and hazard assessment. Full article
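The element that distinguishes the ResU-Net from the plain U-Net baseline is the residual learning block, in which a convolutional stack's input is added back to its output through an identity shortcut. A minimal PyTorch sketch of such a block follows; the layer sizes and ordering are assumptions and may differ from the authors' configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as in residual learning."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # identity shortcut

# A feature map derived from a 3-band (NIR, red, green) patch, batch of one.
x = torch.randn(1, 64, 128, 128)
print(ResidualBlock(64)(x).shape)   # torch.Size([1, 64, 128, 128])
```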
Show Figures

Graphical abstract

Figure 1: Maps of the study area. (a,b) Location of the study area and (c) the landslide inventory map and boundary of the study area.
Figure 2: Landslide photos (a–c) taken during field reconnaissance. The locations and directions of these photos are shown on GeoEye-1 RGB (red-green-blue) composite images in the right column.
Figure 3: Residual learning block.
Figure 4: Architecture of the ResU-Net.
Figure 5: Comparison of results in the validation area using different models. (a1–a7) GeoEye-1 images with ground truth of landslides (red boundary polygons); landslides extracted by the U-Net (b1–b7) and the ResU-Net (c1–c7). The yellow, red, and blue polygons are correctly detected landslides, omission errors, and commission errors, respectively.
19 pages, 16219 KiB  
Article
Proof of Concept for Sea Ice Stage of Development Classification Using Deep Learning
by Ryan Kruk, M. Christopher Fuller, Alexander S. Komarov, Dustin Isleifson and Ian Jeffrey
Remote Sens. 2020, 12(15), 2486; https://doi.org/10.3390/rs12152486 - 3 Aug 2020
Cited by 19 | Viewed by 4517
Abstract
Accurate maps of ice concentration and ice type are needed to address increased interest in commercial marine transportation through the Arctic. RADARSAT-2 SAR imagery is the primary source of data used by expert ice analysts at the Canadian Ice Service (CIS) to produce sea ice maps over the Canadian territory. This study serves as a proof of concept that neural networks can be used to accurately predict ice type from SAR data. Datasets of SAR images served as inputs, and CIS ice charts served as labelled outputs, to train a neural network to classify sea ice type. Our results show that DenseNet achieves the highest overall classification accuracy, 94.0% including water, and the highest ice classification accuracy, 91.8%, on a three-class dataset using a fusion of HH and HV SAR polarizations for the input samples. The 91.8% ice classification accuracy validates the premise that a neural network can be used to effectively categorize different ice types based on SAR data. Full article
(This article belongs to the Special Issue Polar Sea Ice: Detection, Monitoring and Modeling)
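The input fusion of HH and HV polarizations amounts to stacking the two SAR channels into a single multi-channel sample before the network. Below is a schematic sketch using torchvision's DenseNet with its stem adapted to two input channels; this illustrates the idea only and is not the authors' exact model or training setup.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

# Three classes in the fused HH+HV experiment (water plus two ice classes).
model = densenet121(num_classes=3)

# Replace the stock 3-channel RGB stem with a 2-channel (HH, HV) one.
model.features.conv0 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                 padding=3, bias=False)

# One SAR sub-image sample: channel 0 = HH, channel 1 = HV (random here).
sample = torch.randn(1, 2, 224, 224)
print(model(sample).shape)   # torch.Size([1, 3])
```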
Show Figures

Graphical abstract

Figure 1: (a) Hudson Bay region. Input–output data sources: (b) a single raw SCW image; the input data consist of 350 RADARSAT-2 ScanSAR Wide images; (c) the scheme in Table 1; the output data consist of 172 image analysis charts.
Figure 2: Candidate dataset samples consist of SAR image sub-regions, highlighted in the figure, centred about ice chart image analysis sample coordinates. The background SAR image shows the southwestern region of Hudson Bay and Churchill, Manitoba, indicated by the blue dot. The red hatched area denotes land.
Figure 3: U-Net based model: the model consists of the encoder (down-sampling) portion of a U-Net that maps a SAR sub-image to a prediction label. A representative model used in this work consists of approximately 1.2 million parameters.
Figure 4: DenseNet model: the model consists of a number of dense blocks connected by transition blocks and maps a SAR sub-image to a prediction label. A representative model used in this work consists of approximately 7 million parameters.
Figure 5: A visual representation of training and testing samples from three SAR scenes and the predictions from DenseNet. (a) Training samples from 2018/10/16 11:27:17. (b) Testing samples from 2018/10/16 11:27:17. (c) Training samples from 2018/10/17 22:29:32. (d) Testing samples from 2018/10/17 22:29:32. (e) Training samples from 2018/11/04 12:14:50. (f) Testing samples from 2018/11/04 12:14:50.
2 pages, 155 KiB  
Erratum
Erratum: Skoneczny, H., et al. Fire Blight Disease Detection for Apple Trees: Hyperspectral Analysis of Healthy, Infected and Dry Leaves. Remote Sensing 2020, 12(13), 2101
by Hubert Skoneczny, Katarzyna Kubiak, Marcin Spiralski, Jan Kotlarz, Artur Mikiciński and Joanna Puławska
Remote Sens. 2020, 12(15), 2485; https://doi.org/10.3390/rs12152485 - 3 Aug 2020
Viewed by 2498
Abstract
The authors wish to make the following corrections to this paper [...] Full article
(This article belongs to the Special Issue Spectroscopic Analysis of Plants and Vegetation)
20 pages, 4092 KiB  
Article
Comparison of CORINE Land Cover Data with National Statistics and the Possibility to Record This Data on a Local Scale—Case Studies from Slovakia
by Vladimír Falťan, František Petrovič, Ján Oťaheľ, Ján Feranec, Michal Druga, Matej Hruška, Jozef Nováček, Vladimír Solár and Veronika Mechurová
Remote Sens. 2020, 12(15), 2484; https://doi.org/10.3390/rs12152484 - 3 Aug 2020
Cited by 15 | Viewed by 5665
Abstract
Monitoring of land cover (LC) provides important information on actual land use (LU) and landscape dynamics. LC research results depend on the size of the area, the purpose, and the applied methodology. CORINE Land Cover (CLC) data are one of the most important sources of LU data from a European perspective. Our research compares official CLC data (third hierarchical level of the nomenclature at a scale of 1:100,000) and national statistics (NS) of LU in Slovakia between 2000 and 2018 at the national, county, and local levels. The most significant differences occurred in arable land and permanent grassland, which is also related to the recording method and the development of agricultural land management. Owing to the abandonment of agricultural areas, a real increase in forest cover driven by forest succession was not introduced into the official records of the Land register. A new modification of the CLC methodology for identifying LC classes at a scale of 1:10,000 and the fifth hierarchical level of CLC is applied for the first time to local case studies representing lowland, basin, and mountain landscapes. The minimum identified and recorded area was established at 0.1 ha, the minimum width of a polygon at 10 m, and the minimum recorded width of linear elements, such as communications, at 2 m. The use of the fifth CLC level in the case study areas generated an average boundary density of 17.2 km/km2, compared with 2.6 km/km2 at the third level. Therefore, when measuring the density of spatial information by polygon boundary lengths, the fifth level carries 6.6 times more information than the third level. Detailed investigation of LU affords better verification of national statistics data at a local level. This study also contributes to a more detailed recording of the current state of the Central European landscape and its changes. Full article
Show Figures

Graphical abstract

Figure 1: Localization of the investigated counties and municipalities with the CORINE Land Cover (CLC3) inventory in the Slovak Republic in 2018. The legend of land cover classes is available at the following link: https://land.copernicus.eu/pan-european/corine-land-cover/clc2018.
Figure 2: Development of relative land use (LU) values for the Slovak Republic according to recorded national statistics (NS) data and comparison with CLC3 data in 2000, 2006, 2012, and 2018. LU codes according to NS: 2 arable land, 3 hop fields, 4 vineyards, 5 gardens, 6 orchards, 7 permanent grassland, 10 forest land, 11 water areas, 13 built-up areas and courtyards, and 14 other areas.
Figure 3: Development of relative LU values for the county of Senec according to recorded NS and CLC3 data in 2000, 2006, 2012, and 2018. LU codes according to NS: 2 arable land, 3 hop fields, 4 vineyards, 5 gardens, 6 orchards, 7 permanent grassland, 10 forest land, 11 water bodies, 13 built-up areas and courtyards, and 14 other areas.
Figure 4: Development of relative LU values for the counties of Poprad and Námestovo according to recorded NS and CLC3 data in 2000, 2006, 2012, and 2018. LU codes according to NS: 2 arable land, 5 gardens, 7 permanent grassland, 10 forest land, 11 water bodies, 13 built-up areas and courtyards, and 14 other areas.
Figure 5: Case studies with the CLC5 maps in 2018 (left) and examples of detailed outputs of this mapping at a scale of 1:10,000 (right). The colours correspond to the legend in Figure 1 (see above).
11 pages, 778 KiB  
Letter
The T Index: Measuring the Reliability of Accuracy Estimates Obtained from Non-Probability Samples
by François Waldner
Remote Sens. 2020, 12(15), 2483; https://doi.org/10.3390/rs12152483 - 3 Aug 2020
Cited by 5 | Viewed by 4457
Abstract
In remote sensing, the term accuracy typically expresses the degree of correctness of a map. Best practices in accuracy assessment have been widely researched and include guidelines on how to select validation data using probability sampling designs. In practice, however, probability samples may be lacking and, instead, cross-validation using non-probability samples is common. This practice is risky because the resulting accuracy estimates can easily be mistaken for map accuracy. The following question arises: to what extent are accuracy estimates obtained from non-probability samples representative of map accuracy? This letter introduces the T index to answer this question. Certain cross-validation designs (such as the common single-split or hold-out validation) provide representative accuracy estimates when hold-out sets are simple random samples of the map population. The T index essentially measures the probability that a hold-out set of unknown sampling design is a simple random sample. To that end, we compare its spread in the feature space against the spread of random unlabelled samples of the same size. Data spread is measured by a variant of Moran's I autocorrelation index. Consistent interpretation of the T index is proposed through the prism of significance testing, with T values < 0.05 indicating unreliable accuracy estimates. Its relevance and interpretation guidelines are also illustrated in a case study on crop-type mapping. Uptake of the T index by the remote-sensing community will help inform about—and sometimes caution against—the representativeness of accuracy estimates obtained by cross-validation, so that users can better decide whether a map is fit for their purpose or how its accuracy impacts their application. Subsequently, the T index will build trust and improve the transparency of accuracy assessment in conditions which deviate from best practices. Full article
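The construction behind the T index is a permutation-style test: compute a spread statistic for the labelled hold-out set, build the empirical distribution of the same statistic over random samples drawn from the map population, and read off a probability. The sketch below is schematic only; a stand-in spread statistic (mean nearest-neighbour distance) replaces the normalised Moran's I variant used in the letter, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

def spread_statistic(points: np.ndarray) -> float:
    """Stand-in for the normalised Moran's I of a sample's feature-space
    spread; here simply the mean nearest-neighbour distance (illustrative)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(d.min(axis=1).mean())

population = rng.random((10_000, 2))     # map population in feature space
holdout = population[:200] * 0.3         # a deliberately clustered hold-out set

# Empirical distribution of the statistic for random samples of the same size.
null = np.array([
    spread_statistic(population[rng.choice(len(population), 200, replace=False)])
    for _ in range(500)
])

# Two-sided probability that the hold-out set is a simple random sample.
stat = spread_statistic(holdout)
p = (np.abs(null - null.mean()) >= abs(stat - null.mean())).mean()
print(f"T = {p:.3f}  ->  {'unreliable' if p < 0.05 else 'plausibly random'}")
```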
Show Figures

Graphical abstract

Figure 1: The Normalised Moran's I Index (I_B) takes the value of +1 for clustered samples (left), 0 for random samples (centre) and −1 for spatially-balanced samples (right).
Figure 2: Procedure to construct the T index: (1) calculate the Normalised Moran's I Index of the labelled hold-out set, (2) generate random unlabelled samples from the map population, (3) calculate the Normalised Moran's I Index of all random unlabelled samples and (4) compute the probability of the labelled set belonging to the empirical distribution of random unlabelled samples.
Figure 3: Area of interest in Kansas and the 16 stratification layers. The stratification layers were used to introduce sample selection biases in the reference data. Colours represent different strata. Grey areas were not considered.
Figure 4: Relationship between the Normalised Moran's I Index (I_B) and map accuracy. (a) Empirical distribution of I_B values for random unlabelled data sets. (b) Relationship between the I_B of the labelled sets, their bias and the corresponding T index. There is a strong linear relationship between I_B and bias (y = 0.03 + 1.67x; R² = 0.79). The dashed line indicates I_B values for which the T index is 0.05.
Figure 5: Ability of the T index to accurately identify representative (random) hold-out sets. The results validate the proposed nomenclature for interpreting the T index.
23 pages, 6450 KiB  
Article
Coastline Fractal Dimension of Mainland, Island, and Estuaries Using Multi-temporal Landsat Remote Sensing Data from 1978 to 2018: A Case Study of the Pearl River Estuary Area
by Xinyi Hu and Yunpeng Wang
Remote Sens. 2020, 12(15), 2482; https://doi.org/10.3390/rs12152482 - 3 Aug 2020
Cited by 10 | Viewed by 4159
Abstract
The Pearl River Estuary Area (PREA) was selected for this study. For the past 40 years, it has been one of the most complex coasts in China, yet few studies have analyzed the complexity and variations of the area's different coastlines. In this investigation, the coastlines of the PREA were extracted from multi-temporal Landsat remote sensing data from 1978, 1988, 1997, 2008, and 2018. The coastline of this area was classified into mainland, island, and estuarine types. To obtain more detailed results for the mainland and islands, we regarded this area as the main body and rezoned it into different parts. The box-counting dimension was applied to compute the bidimensional (2D) fractal dimension. The coastline length and fractal dimension of the different coastline types and the different parts of the main body were calculated and compared. The fractal dimension of the PREA was found to have increased significantly, from 1.228 to 1.263, and coastline length also increased during the study period. The island and mainland coastlines were the most complex, while the estuaries showed the least complexity over the past forty years. A positive correlation was found between length and 2D fractal dimension in some parts of the study area. Land reclamation had the strongest influence on fractal dimension variations. Full article
(This article belongs to the Special Issue Coastal Environments and Coastal Hazards)
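The box-counting dimension used here is estimated by covering the rasterized coastline with square boxes of side s, counting the occupied boxes N(s), and taking the slope of log N(s) against log(1/s). Below is a minimal sketch for a binary coastline raster; the box sizes and the toy coastline are illustrative.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the 2D fractal dimension of a binary coastline raster."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes, then count the
        # boxes that contain at least one coastline pixel.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    # Slope of log N(s) vs log(1/s) gives the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# Toy coastline: the diagonal of a 256 x 256 raster (dimension close to 1).
mask = np.eye(256, dtype=bool)
print(round(box_counting_dimension(mask), 2))
```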
Show Figures

Graphical abstract

Figure 1: Location of the study area with (a) rezoning coastline types and (b) reclassification of (c) coastal sectors from Part I–Part IV.
Figure 2: Flow chart of the entire process used in this study.
Figure 3: Images involved in the precision test, with (a) Landsat 8 OLI_TIRS collected on February 12, 2018, (b) Google Earth image collected on January 18, 2018, and (c) corresponding points No. 9, No. 10, and No. 11 on Landsat and Google Earth.
Figure 4: Box-counting method to calculate FD with different sizes of squares. When the size of the squares is changed, the number of squares needed to cover the coastline changes, with (a) 1200 m squares and 85 squares involved, and (b) 2700 m squares and 35 squares involved.
Figure 5: Variations of FD among different types of coastlines in the PREA from 1978 to 2018.
Figure 6: Mainland FD of Part I–Part IV and the PREA from 1978 to 2018.
Figure 7: Island FD of Part I–Part IV and the PREA from 1978 to 2018.
Figure 8: Variations of FD of the four parts in the main body of the PREA from 1978 to 2018. Fractal dimension of total, mainland, and island coastlines of (a) Part I, (b) Part II, (c) Part III, and (d) Part IV.
Figure 9: Changing trend of coastline length and FD of different coastline types from 1978 to 2018, with variations of coastline length and fractal dimension of (a) the PREA, (b) mainland coastline, (c) island coastline, (d) Humen, (e) Jiaomen, (f) Hongqimen, (g) Hengmen, (h) Modaomen, (i) Jitimen, (j) Hutiaomen, and (k) Yamen.
Figure 10: Changing trend of coastline length and FD of Parts I–IV, with variations of coastline length and FD (1978–2018) for (a) Part I, (b) mainland of Part I, (c) island of Part I, (d) Part II, (e) mainland of Part II, (f) island of Part II, (g) Part III, (h) mainland of Part III, (i) island of Part III, (j) Part IV, (k) mainland of Part IV, and (l) island of Part IV.
Figure 11: Relationship between coastline length and FD of different coastline types during the study period for (a) the PREA, (b) mainland coastline, (c) island coastline, (d) Humen, (e) Jiaomen, (f) Hongqimen, (g) Hengmen, (h) Modaomen, (i) Jitimen, (j) Hutiaomen, and (k) Yamen.
Figure 12: Relationship between coastline length and FD during the study period for Part I–Part IV: (a) Part I, (b) mainland of Part I, (c) island of Part I, (d) Part II, (e) mainland of Part II, (f) island of Part II, (g) Part III, (h) mainland of Part III, (i) island of Part III, (j) Part IV, (k) mainland of Part IV, and (l) island of Part IV.
18 pages, 4528 KiB  
Article
Apple Shape Detection Based on Geometric and Radiometric Features Using a LiDAR Laser Scanner
by Nikos Tsoulias, Dimitrios S. Paraforos, George Xanthopoulos and Manuela Zude-Sasse
Remote Sens. 2020, 12(15), 2481; https://doi.org/10.3390/rs12152481 - 3 Aug 2020
Cited by 52 | Viewed by 7385
Abstract
Yield monitoring systems in fruit production mostly rely on color features, making the discrimination of fruits challenging under varying light conditions. The implementation of geometric and radiometric features in three-dimensional (3D) space analysis can alleviate such difficulties and improve fruit detection. In this study, a light detection and ranging (LiDAR) system was used to scan apple trees before (TL) and after defoliation (TD) four times during seasonal tree growth. An apple detection method is proposed based on calibrated apparent backscattered reflectance intensity (RToF) and geometric features capturing linearity (L) and curvature (C), derived from the LiDAR 3D point cloud. The iterative discrimination of the apple class from leaves and woody parts was obtained at RToF > 76.1%, L < 15.5%, and C > 73.2%. The positions of fruit centers in TL and in TD were compared, showing a root mean square error (RMSE) of 5.7%. The diameter of apples estimated from the foliated trees was related to reference values based on the perimeter of the fruits, revealing an adjusted coefficient of determination (R2adj) of 0.95 and an RMSE of 9.5% at DAFB120. When comparing the results obtained on foliated and defoliated trees' data, the estimated numbers of fruits on foliated trees at DAFB42, DAFB70, DAFB104, and DAFB120 were 88.6%, 85.4%, 88.5%, and 94.8% of the ground truth values, respectively. The algorithm resulted in maximum values of 88.2% precision, 91.0% recall, and 89.5% F1 score at DAFB120. The results point to the high capacity of the LiDAR variables [RToF, C, L] to localize fruit and estimate fruit size by means of remote sensing. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
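The point-wise discrimination reported in the abstract is a conjunction of thresholds on the three LiDAR variables. Below is a minimal sketch of that filter over per-point [RToF, L, C] values, using the thresholds quoted above; the sample points are invented.

```python
import numpy as np

# Per-point LiDAR features: apparent reflectance RToF [%], linearity L [%],
# curvature C [%] (invented sample values).
points = np.array([
    [80.2, 10.1, 78.0],   # apple-like
    [60.5, 40.0, 50.0],   # leaf-like
    [78.9, 12.3, 75.5],   # apple-like
    [82.0, 20.0, 60.0],   # woody-part-like
])
rtof, lin, curv = points.T

# Thresholds reported for the apple class: RToF > 76.1, L < 15.5, C > 73.2.
is_apple = (rtof > 76.1) & (lin < 15.5) & (curv > 73.2)
print(is_apple)   # [ True False  True False]
```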
Show Figures

Graphical abstract

Figure 1: Flowchart showing the protocol for apple detection and sizing using defoliated trees (T_D), starting with the threshold application in the registered point cloud pair, through the filtering and partitioning, to the counting of clusters, which were compared to reference data from manual measurements and applied as ground truth when analysing the foliated trees. The cells with shading and dashed frames point out the differences from apple detection in foliated trees.
Figure 2: Flowchart of the validation showing the protocol for apple detection and sizing using foliated trees (T_L), starting with the threshold application in the registered point cloud, through the filtering and partitioning, to the counting of cluster centers. The cell with shading and a dashed frame points out the inclusion of the leaf class in the analysis.
Figure 3: Representation of 3D point clouds of (a) RGB image, (b) reflectance (R_ToF) [%], (c) curvature (C) [%] and (d) linearity (L) [%] of trees measured with leaves at DAFB_120.
Figure 4: Representation of 3D point clouds of (a) RGB image, (b) reflectance (R_ToF) [%], (c) curvature (C) [%] and (d) linearity (L) [%] in defoliated trees at DAFB_120.
Figure 5: The probability density of (a) calibrated reflectance intensity (R_ToF) [%], (b) curvature (C) [%], and (c) linearity (L) [%] for wood, leaves, and apples.
Figure 6: Box-whisker plot of segmented points of wood (W), leaves (L) and apples (A) based on reflectance (R_ToF), curvature (C) and linearity (L), showing the mean and mode values; the maximum, minimum and standard deviation are represented by the lower and upper edges of the box, and the dash in each box indicates the median.
Figure 7: (a) Detection of apple centers (M_D) using sphere segmentation; (b) enlargement of the upper zone of the tree with points of the apple class enveloped in the sphere. The blue color depicts the apple points of the defoliated trees (A_D).
Figure 8: Representation of (a) detected centers in foliated trees (M_L) and (b) after defoliation (M_D). The red color depicts the apple points of the foliated tree (A_L), while the blue color shows the points of the fruits in the defoliated tree (A_D).
Figure 9: Representation of (a) the worst- and (b) the best-case scenario of the distance between the centers of clusters in defoliated (M_D) and foliated (M_L) trees at DAFB_42 and DAFB_120. The red color depicts the apple points of the foliated trees (A_L), while the blue color depicts the points of the defoliated tree (A_D).
21 pages, 6106 KiB  
Article
Species Monitoring Using Unmanned Aerial Vehicle to Reveal the Ecological Role of Plateau Pika in Maintaining Vegetation Diversity on the Northeastern Qinghai-Tibetan Plateau
by Yu Qin, Yi Sun, Wei Zhang, Yan Qin, Jianjun Chen, Zhiwei Wang and Zhaoye Zhou
Remote Sens. 2020, 12(15), 2480; https://doi.org/10.3390/rs12152480 - 3 Aug 2020
Cited by 22 | Viewed by 4258
Abstract
The plateau pika (Ochotona curzoniae, hereafter pika) is considered to exert a profound impact on the vegetation species diversity of alpine grasslands. Great efforts have been made at the mound or quadrat scale; nevertheless, there is still controversy about the effect of the pika. It is vital to monitor vegetation species composition in naturally heterogeneous ecosystems at a large scale to accurately evaluate the real role of the pika. In this study, we performed field surveys at 55 alpine grassland sites across the Shule River Basin using a combination of aerial photographing with an unmanned aerial vehicle (UAV) and traditional ground measurement. Aerial images were acquired with our UAV operation system, Fragmentation Monitoring and Analysis with aerial Photography (FragMAP). Plot-scale vegetation species were visually identified, and total pika burrow exits were automatically retrieved using self-developed image processing software. We found significant linear relationships between the vegetation species diversity indexes obtained by the two methods. Additionally, the UAV method identified 71 species in total, more than the 63 recognized by the Quadrat method. Our results indicate that the UAV is suitable for the long-term, repeated monitoring of the vegetation species composition of multiple alpine grasslands at the plot scale. With the merits of the UAV, it was confirmed that the pika's disturbance was at a medium level, with densities ranging from 30.17 to 65.53 ha−1. At this density level, the pika had a positive effect on vegetation species diversity, particularly on the species richness of sedges and forbs. These findings show that the UAV is an efficient and economical tool for species monitoring to reveal the role of the pika in alpine grasslands. Full article
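The diversity indexes compared between the UAV and Quadrat methods (species richness, Shannon, Simpson, and Pielou's J) are standard functions of per-species abundances. A minimal sketch with an invented abundance vector shows how each is computed.

```python
import numpy as np

# Abundances of species identified in one plot (invented values).
abundance = np.array([30, 12, 8, 5, 5, 2])
p = abundance / abundance.sum()

richness = len(abundance)                    # species richness S
shannon = -np.sum(p * np.log(p))             # Shannon index H'
simpson = 1.0 - np.sum(p ** 2)               # Simpson index 1 - D
pielou = shannon / np.log(richness)          # Pielou's evenness J = H'/ln(S)

print(richness, round(shannon, 3), round(simpson, 3), round(pielou, 3))
```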
Show Figures

Graphical abstract

Figure 1: Source region of the Shule River Basin and its location on the Qinghai-Tibetan Plateau, China. Red pentagrams are the 21 sites where aerial photographing and field sampling were performed simultaneously, and green pentagrams are the other 34 sites with only aerial photographing.
Figure 2: Flowchart of the study scheme. N_APB, N_TPB and N_PD denote the number of active pika burrows, total pika burrows and pika density per hectare, respectively.
Figure 3: (a) Image showing the coverage area for aerial photographing and field sampling; there is one GRID flight path with 16 fixed waypoints. (b) Photo showing the coverage area of the BELT flight path with 16 fixed waypoints. (c) Picture presenting the synchronous survey of vegetation species composition by aerial photographing and traditional quadrat sampling; the white frame is the area for quadrat sampling, with a size of 0.5 m × 0.5 m.
Figure 4: Relationships of diversity indexes ((a) species richness, (b) Shannon index, (c) Simpson index and (d) Pielou's J index) between the Quadrat method and the UAV method. The shaded area shows the 95% confidence interval of the fit. ASwM, AM, AStM and AS indicate alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively.
Figure 5: Species richness within sampling units of the Quadrat method and the UAV method. * and ns indicate a significant effect at p < 0.05 and no significant difference, respectively. ASwM, AM, AStM and AS indicate alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively.
Figure 6: (a) Species richness, (b) Shannon index, (c) Simpson index and (d) Pielou's J index of the four types of alpine grasslands. Red dots represent the mean value of different types of functional groups. Blue dots represent the values of all sampling sites (n = 55). The bold solid horizontal line marks the median value. ASwM, AM, AStM and AS indicate alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively.
Figure 7: Total pika burrows (a), active pika burrows (b) and pika density (c) of the four typical alpine grasslands. Red dots mark the average value. Blue dots are the values of all sampling sites (n = 55). The blue bold solid horizontal lines are the median values. ASwM, AM, AStM and AS indicate alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively.
Figure 8: Fractional vegetation cover (a) and bare patch area fraction (b) of the four typical alpine grasslands. Red dots mark the average value. Blue dots are the values of all sampling sites (n = 55). The blue bold solid horizontal lines are the median values. ASwM, AM, AStM and AS indicate alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively.
Figure 9: Vegetation biomass of four functional groups in (a) alpine swamp meadow (ASwM), (b) alpine meadow (AM), (c) alpine steppe meadow (AStM) and (d) alpine steppe. Red dots represent the mean value of different types of functional groups. Blue dots represent the values of all sampling sites (n = 21). The bold solid horizontal line marks the median value. Grass is a monocotyledon characterized by alternate leaves, stem sections and a caryopsis. Legume, belonging to the order Rosales, is a dicotyledon that can fix nitrogen through symbiotic rhizobia. Sedge is also a monocotyledon, with a solid stem, leaf sheath and small flowers clustered into a spikelet. Forb is the general name for all herbage plants other than grass, legume and sedge.
Figure 10: Relationships between pika density and species richness (a), Shannon index (b), Simpson index (c) and Pielou's J index (d). The shaded area shows the 95% confidence interval of the fit. ASwM, AM, AStM and AS indicate alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively.
Figure 11: Redundancy analysis (RDA) triplot presenting the relationship of the vegetation species richness of functional groups with pika burrows and pika density. The blue solid arrows are the vegetation species richness of functional groups. The hollow red arrows are pika burrows and pika density. Green, orange, yellow and blue solid circles are the plots for the vegetation species richness of functional groups investigated in the alpine swamp meadow, alpine meadow, alpine steppe meadow and alpine steppe, respectively. TPB, APB and PD are total pika burrows, active pika burrows and pika density, respectively.
Figure 12: (a,b) Synchronous survey of vegetation species composition by the UAV and Quadrat methods, (c,d) monitoring of vegetation species composition by the UAV, (e) ground validation of vegetation species composition and (f) vegetation biomass investigation of four functional groups. Both the UAV aerial photographing and the Quadrat survey were carried out from the end of July to the middle of August 2019. Each aerial image covered a ground area of nearly 2.6 m × 3.5 m.
20 pages, 33575 KiB  
Article
Linear and Non-Linear Models for Remotely-Sensed Hyperspectral Image Visualization
by Radu-Mihai Coliban, Maria Marincaş, Cosmin Hatfaludi and Mihai Ivanovici
Remote Sens. 2020, 12(15), 2479; https://doi.org/10.3390/rs12152479 - 2 Aug 2020
Cited by 10 | Viewed by 4946
Abstract
The visualization of hyperspectral images still constitutes an open question and may have an important impact on the consequent analysis tasks. The existing techniques fall mainly into the following categories: band selection, PCA-based approaches, linear approaches, approaches based on digital image processing techniques, and machine/deep learning methods. In this article, we propose the use of a linear model for color formation, to emulate the image acquisition process of a digital color camera. We show how the choice of spectral sensitivity curves impacts the visualization of hyperspectral images as RGB color images. In addition, we propose a non-linear model based on an artificial neural network. We objectively assess the impact and the intrinsic quality of the hyperspectral image visualization from the point of view of the amount of information and complexity: (i) to objectively quantify the amount of information present in the image, we use the color entropy as a metric; (ii) to evaluate the complexity of the scene, we employ the color fractal dimension, as an indication of the detail and texture characteristics of the image. For comparison, we use several state-of-the-art visualization techniques. We present experimental results on visualization using both the linear and non-linear color formation models, in comparison with four other methods, and report on the superiority of the proposed non-linear model. Full article
(This article belongs to the Special Issue Advances in Hyperspectral Data Exploitation)
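A linear color-formation model of this kind forms each RGB channel, per pixel, by projecting the spectral radiance (reflectance times illuminant) onto the corresponding sensitivity curve. Below is a schematic sketch with Gaussian curves standing in for the camera sensitivities of Figure 4; the wavelength grid, curve parameters, and flat illuminant are illustrative assumptions.

```python
import numpy as np

wl = np.arange(430, 861, 10.0)                   # band centres [nm]
cube = np.random.rand(64, 64, wl.size)           # toy hyperspectral cube

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Gaussian stand-ins for the R, G, B spectral sensitivity curves.
S = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(465, 30)])
illuminant = np.ones_like(wl)                    # flat stand-in for D65

# Linear model: each channel is the reflectance x illuminant spectrum
# projected onto the corresponding sensitivity curve.
weights = S * illuminant                         # (3, n_bands)
rgb = cube @ weights.T                           # (64, 64, 3)
rgb /= rgb.max()                                 # normalise for display
print(rgb.shape, rgb.min() >= 0)
```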
Show Figures

Figure 1: RGB representations of the five hyperspectral images used in our experiments. Top row: images acquired by the ROSIS-3 sensor; bottom row: images acquired by the AVIRIS sensor.
Figure 2: The D65 illuminant.
Figure 3: Spectral sensitivities of human cone cells.
Figure 4: Spectral sensitivity functions for five digital cameras.
Figure 5: Gaussian spectral sensitivity functions based on the functions of the Canon 5D camera from Figure 4a.
Figure 6: Architecture of the ANN.
Figure 7: The McBeth color chart.
Figure 8: Spectral reflectance curves of the color patches in each row of the McBeth color chart.
Figure 9: Experimental results on the Pavia University image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 10: Experimental results on the Pavia Centre image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 11: Experimental results on the Indian Pines image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 12: Experimental results on the SalinasA image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 13: Experimental results on the Cuprite image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].