Search Results (506)

Search Parameters:
Keywords = mixed pixels

20 pages, 15992 KiB  
Article
Analysis and Application of Particle Backtracking Algorithm in Wind–Sand Two-Phase Flow Using SPH Method
by Wenxiu Gao, Afang Jin, Zhenguo An and Ming Yan
Appl. Sci. 2024, 14(22), 10370; https://doi.org/10.3390/app142210370 - 11 Nov 2024
Abstract
Due to the high sensitivity of grid-based micro-scale wind–sand flow models to deformation and distortion, this study employs the Smoothed Particle Hydrodynamics (SPH) method for numerical simulation. The advantage of the SPH method is that it can dynamically track the entire trajectory of each particle, allowing the initial positions of sand-buried particles to be traced. Exploiting this advantage, this study develops a particle backtracking algorithm based on the SPH method, implemented in the C language, and analyzes the initial location distribution, concentration, velocity, and particle size distribution of sand-buried particles in order to formulate targeted measures against wind–sand disasters. Meanwhile, this paper improves a particle modeling algorithm that realizes arbitrary mixed particle sizes and mixing ratios by combining C-language programming with pixel recognition technology. Finally, the particle backtracking algorithm is applied to the classical embankment wind–sand flow field, and effective measures for embankment wind–sand disaster management are proposed based on the observed sand particle movement characteristics.
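The backtracking idea is simple to sketch: because SPH is Lagrangian, every particle's full trajectory is available, so deposited particles can be traced back to their release positions. Below is a minimal Python sketch of that step (the authors implement theirs in C; the function and variable names here are illustrative assumptions, not their code):

```python
import numpy as np

def backtrack_deposited(trajectories, in_deposit_zone):
    """Trace sand-buried particles back to their initial positions.

    trajectories: array of shape (n_steps, n_particles, 2) holding the
    (x, z) position of every SPH particle at each saved time step.
    in_deposit_zone: predicate (x, z) -> bool marking the burial area.
    """
    final = trajectories[-1]                               # last time step
    buried = np.array([in_deposit_zone(x, z) for x, z in final])
    return trajectories[0][buried]                         # initial positions

# Hypothetical usage: source distribution of particles buried just
# leeward of an embankment between x = 30 m and x = 40 m (stand-in data).
traj = np.random.default_rng(0).uniform(0, 50, size=(100, 500, 2))
sources = backtrack_deposited(traj, lambda x, z: 30.0 < x < 40.0 and z < 1.0)
print(sources.shape)
```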
(This article belongs to the Section Fluid Science and Technology)
Figures:
Figure 1: Selection of support domains and smoothing factors.
Figure 2: Complex-diameter particle generation. (a) Geometric modeling of embankments; (b) modeling of single-size particles on embankments; (c) modeling of mixed-size particles on embankments; (d) geometric modeling of roadway trenches; (e) modeling of single-size particles in road trenches; (f) modeling of mixed-size particles in road trenches.
Figure 3: Leapfrog trajectories of sand particles with different grain sizes.
Figure 4: Comparison of sand concentration for different types of mixed grain.
Figure 5: Mixed-grain-size sand transport rate distribution along height.
Figure 6: Concentration distribution of single-grain-size sand particles.
Figure 7: Distribution of initial positions of sand-buried particles.
Figure 8: Particle model of windbreak embankment.
Figure 9: Distribution of windbreak embankment particle concentration.
Figure 10: Distribution of windbreak embankment particle concentration.
Figure 11: Distribution of particle concentration under different working conditions.
Figure 12: Horizontal velocity streamline diagram.
Figure 13: Comparison of sand velocity distribution.
Figure 14: Distribution of sand concentration at different wind speeds.
Figure 15: Distribution of initial positions of sand-buried particles at different wind speeds.
Figure 16: Distribution of sand concentration in embankment windbreaks.
Figure 17: Comparison of sand velocity distribution.
Figure 18: Particle source distribution of sand-embedded particles in the road embankment.
Figure 19: Distribution of windbreak embankment sand concentration.
Figure 20: Sand particle size distribution around windbreaks.
Figure 21: Sand particle size distribution at the foot of leeward slopes.
23 pages, 14074 KiB  
Article
Comprehensive Representations of Subpixel Land Use and Cover Shares by Fusing Multiple Geospatial Datasets and Statistical Data with Machine-Learning Methods
by Yuxuan Chen, Rongping Li, Yuwei Tu, Xiaochen Lu and Guangsheng Chen
Land 2024, 13(11), 1814; https://doi.org/10.3390/land13111814 - 1 Nov 2024
Abstract
Land use and cover change (LUCC) is a key factor influencing global environmental and socioeconomic systems. Many long-term geospatial LUCC datasets have been developed at various scales during recent decades owing to the availability of long-term satellite data, statistical data and computational techniques. However, most existing LUCC products cannot accurately reflect the spatiotemporal change patterns of LUCC at the regional scale in China. Based on these geospatial LUCC products, the normalized difference vegetation index (NDVI), socioeconomic data and statistical data, we developed multiple procedures to represent both the spatial and temporal changes of the major LUC types by applying machine-learning, regular decision-tree and hierarchical assignment methods, using northeastern China (NEC) as a case study. In this approach, each individual LUC type was developed in sequence under different schemes and methods. The accuracy evaluation using sampling plots indicated that our approach can accurately reflect the actual spatiotemporal patterns of LUC shares in NEC, with an overall accuracy of 82%, a Kappa coefficient of 0.77 and a regression coefficient of 0.82. Further comparisons with existing LUCC datasets and statistical data also confirmed the accuracy of our approach and datasets. Our approach unfolded the mixed-pixel issue of LUC types and integrated the strengths of existing LUCC products through multiple fusion processes. The analysis based on our developed dataset indicated that forest, cropland and built-up land area increased by 17.11 × 10⁴ km², 15.19 × 10⁴ km² and 2.85 × 10⁴ km², respectively, during 1980–2020, while grassland, wetland, shrubland and bare land decreased by 26.06 × 10⁴ km², 4.24 × 10⁴ km², 3.97 × 10⁴ km² and 0.92 × 10⁴ km², respectively, in NEC. Our developed approach accurately reconstructed the shares and spatiotemporal patterns of all LUC types during 1980–2020 in NEC. This approach can be further applied to the entirety of China and worldwide, and our products can provide accurate data support for studying LUCC consequences and making effective land use policies.
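As a rough illustration of the machine-learning fusion step described above, the sketch below trains a regressor to predict a per-pixel land-cover share from NDVI plus the shares reported by several existing products. This is a minimal sketch on synthetic data; the feature set, model choice and training samples are assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins: column 0 mimics NDVI, columns 1-3 mimic the forest
# shares reported by three existing LUCC products for the same pixel.
X = rng.uniform(0.0, 1.0, size=(1000, 4))
y = np.clip(0.6 * X[:, 0] + 0.4 * X[:, 1:].mean(axis=1)
            + rng.normal(0.0, 0.05, 1000), 0.0, 1.0)   # "true" forest share

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
shares = np.clip(model.predict(X[:5]), 0.0, 1.0)       # keep shares in [0, 1]
print(np.round(shares, 2))
```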
(This article belongs to the Special Issue Advances in Land Use and Land Cover Mapping (Second Edition))
Figures:
Figure 1: The study area and the distribution of major land use and cover types (Source: [17]) and the training and validation sample plots. Note: the triangle points are visually interpreted plots at 1 km spatial resolution; the dark circle points are field-investigated plots at 30 m resolution (Source: [37]).
Figure 2: The dataset development procedure for fractional cropland area and crop types. Note: TA: summed total cropland area; RA: reference cropland area (provincial inventory data); m: iteration number; n: total number of iterations; areaT: identified cropland area; areaP: possible cropland area.
Figure 3: The forest share dataset development procedure. Note: areaF: the summed forest area after each iteration run; Fshare: the fitted forest share (%).
Figure 4: The wetland share dataset development procedure. Note: areaT: summed total wetland area; RA: reference wetland area (provincial inventory data).
Figure 5: The grassland, shrubland and bare land share dataset development procedure. Note: R: the remaining area for allocation in each pixel; i: the iteration number; n: the existing dataset number; Gi, Si and Bi: calculated relative fractions of grassland, shrubland and bare land, respectively, in each pixel in 2020 based on multiple existing LUCC products; Gsh, Ssh and Bsh: the generated grassland, shrubland and bare land shares.
Figure 6: The correlations between the reconstructed and the visually interpreted (observed) area shares (%) for different land use and cover types within each pixel at 1 km spatial resolution.
Figure 7: The evaluation of our developed land use and cover shares (%) within each pixel at 1 km spatial resolution against visually interpreted shares based on high-resolution images. Note: F: forest share (green); B: built-up land share (purple); C: cropland share (orange); We: wetland share (cyan); G: grassland share (yellow); Wa: water body share (blue).
Figure 8: The total area (10⁴ km²) of all land use and cover types for 1980–2020 in NEC.
Figure 9: The spatial distribution of cropland, forest, wetland, grassland and shrubland shares (‰) in 1980, 2000 and 2020 in NEC.
Figure 10: The spatial changes (%) of cropland, forest, wetland, grassland and shrubland shares for 1980–2000, 2000–2020 and 1980–2020 in NEC.
Figure 11: Intercomparisons of our developed LUC share dataset with two existing LUCC products and visually interpreted data.
Figure 12: Comparisons of the LUC shares of our developed dataset with other existing LUCC products for pixels with visually interpreted LUC shares.
26 pages, 4756 KiB  
Article
An Adaptive Unmixing Method Based on Iterative Multi-Objective Optimization for Surface Water Fraction Mapping (IMOSWFM) Using Zhuhai-1 Hyperspectral Images
by Cong Lei, Rong Liu, Zhiyuan Kuang and Ruru Deng
Remote Sens. 2024, 16(21), 4038; https://doi.org/10.3390/rs16214038 - 30 Oct 2024
Abstract
Surface water fraction mapping is an essential preprocessing step for the subpixel mapping (SPM) of surface water, providing valuable prior knowledge about surface water distribution at the subpixel level. In recent years, spectral mixture analysis (SMA) has been extensively applied to estimate surface water fractions in multispectral images by decomposing each mixed pixel into endmembers and their corresponding fractions using linear or nonlinear spectral mixture models. However, challenges emerge when introducing existing surface water fraction mapping methods to hyperspectral images (HSIs) due to insufficient exploration of spectral information. Additionally, inaccurate extraction of endmembers can result in unsatisfactory water fraction estimations. To address these issues, this paper proposes an adaptive unmixing method based on iterative multi-objective optimization for surface water fraction mapping (IMOSWFM) using Zhuhai-1 HSIs. In IMOSWFM, a modified normalized difference water fraction index (MNDWFI) was developed to fully exploit the spectral information. Furthermore, an iterative unmixing framework was adopted to dynamically extract high-quality endmembers and estimate their corresponding water fractions. Experimental results on the Zhuhai-1 HSIs from three test sites around Nanyi Lake indicate that water fraction maps obtained by IMOSWFM are closest to the reference maps compared with the other three SMA-based surface water fraction estimation methods, with the highest overall accuracy (OA) of 91.74%, 93.12%, and 89.73% in terms of pure water extraction and the lowest root-mean-square errors (RMSE) of 0.2506, 0.2403, and 0.2265 in terms of water fraction estimation. This research provides a reference for adapting existing surface water fraction mapping methods to HSIs.
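The core SMA operation named in this abstract, decomposing each mixed pixel into endmember fractions, can be written compactly. The sketch below solves a linear mixing model with non-negative fractions and a soft sum-to-one constraint via non-negative least squares; this is a standard formulation with toy endmember values, not the IMOSWFM algorithm itself:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, delta=1e3):
    """Linear spectral unmixing with non-negativity and soft sum-to-one.

    endmembers: (n_bands, n_endmembers); pixel: (n_bands,).
    The appended row of ones (weighted by delta) enforces sum-to-one.
    """
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    p = np.append(pixel, delta)
    fractions, _ = nnls(E, p)
    return fractions

# Toy example: water/soil/vegetation endmembers over 4 bands (assumed values).
E = np.array([[0.02, 0.25, 0.05],
              [0.03, 0.30, 0.08],
              [0.02, 0.35, 0.45],
              [0.01, 0.40, 0.50]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 2]        # 70% water, 30% vegetation
print(np.round(unmix(mixed, E), 3))          # approximately [0.7, 0.0, 0.3]
```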
Figures:
Figure 1: Locations of study areas and the corresponding Zhuhai-1 OHS false-color images composed of R: band 21, G: band 10, and B: band 7.
Figure 2: Framework of IMOSWFM.
Figure 3: Distribution of the selected solution in the normalized objective space.
Figure 4: Spectra of typical ground features in the Zhuhai-1 OHS image.
Figure 5: Illustration of the double-threshold method from the histogram of MNDWFI.
Figure 6: Sketch map of the iterative estimation of water fraction in IMOSWFM.
Figure 7: Surface water fraction maps of Areas 1-3 from IMOSWFM and the compared methods with Zhuhai-1 OHS images.
Figure 8: Surface water fraction maps of Areas 4-6 from IMOSWFM and the compared methods with Zhuhai-1 OHS images.
Figure 9: Comparison of NDWI and MNDWFI maps of Area 4. (a) Zhuhai-1 OHS false-color image of Area 4; (b) NDWI map; (c) MNDWFI map; (d) spectra of representative pixels.
Figure 10: Histograms of NDWI, NDWFI, and MNDWFI for the three areas around Nanyi Lake.
Figure 11: RMSE and SE of the four different combinations of components on the three areas. (a) RMSE; (b) SE.
Figure 12: Convergence curves of objective function values at different iterations. First row: convergence curves of the volume inverse; second row: convergence curves of RMSE; (a-e) correspond to iterations 1 to 5.
Figure 13: Variation in the number of remaining mixed pixels as iteration increases.
24 pages, 15178 KiB  
Article
Sentinel-2A Image Reflectance Simulation Method for Estimating the Chlorophyll Content of Larch Needles with Pest Damage
by Le Yang, Xiaojun Huang, Debao Zhou, Junsheng Zhang, Gang Bao, Siqin Tong, Yuhai Bao, Dashzebeg Ganbat, Dorjsuren Altanchimeg, Davaadorj Enkhnasan and Mungunkhuyag Ariunaa
Forests 2024, 15(11), 1901; https://doi.org/10.3390/f15111901 - 28 Oct 2024
Abstract
With the development of remote sensing technology, the estimation of the chlorophyll content (CHLC) of vegetation via satellite data has become an important means of monitoring vegetation health, and high-precision estimation has been the focus of research in this field. In this study, we used larch affected by Yarl's larch looper (Erannis jacobsoni Djak) in the boundary region of Mongolia as the research object. We simulated multispectral reflectance, downscaled Sentinel-2A satellite data, performed mixed-pixel decomposition, and analyzed the potential of Sentinel-2A data for estimating the chlorophyll content by calculating the spectral indices (SIs) and spectral derivatives (SDFs) of the images; spectral features sensitive to the chlorophyll content were then extracted to establish the training set, and, finally, the chlorophyll content estimation model for larch was constructed on the basis of the partial least squares regression (PLSR) algorithm. The results revealed that SIs and SDFs based on simulated remote sensing data were highly sensitive to the chlorophyll content under the influence of pests, with the SAVI and EVI2 spectral indices as well as the D_B2 and D_B5 spectral derivatives being the most sensitive. The estimation models based on simulated data performed significantly better than models without simulated data in terms of accuracy, especially those based on SDF-PLSR. The simulated spectral reflectance well reflected the spectral characteristics of the larch canopy and was sensitive to damaged larch, especially in the green, red-edge, and near-infrared bands. The proposed approach improves the accuracy of chlorophyll content estimation via Sentinel-2A data and enhances the ability to monitor changes in the chlorophyll content under complex forest conditions through simulation, providing new technical means and a theoretical basis for forestry pest monitoring and vegetation health management.
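The PLSR step at the end of this pipeline is standard and easy to sketch. Below is a minimal example with scikit-learn on synthetic data; the number of components and the feature set are assumptions, whereas the paper's training set uses the sensitive SIs and SDFs it extracts:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for spectral indices / derivatives per sample plot.
X = rng.normal(size=(120, 6))
chlc = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0.0, 0.2, 120)  # toy CHLC

pls = PLSRegression(n_components=3).fit(X, chlc)
print(round(pls.score(X, chlc), 3))   # R-squared of the fit on training data
```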
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Figures:
Figure 1: Experimental area: (a) topography of Mongolia; (b) Sentinel-2A image of the test area; (c) UAV RGB map of the test area; (d) schematic diagram of the sample trees used for sample plot selection; (e) healthy sample tree; (f) damaged sample tree.
Figure 2: Sentinel-2A image processing and chlorophyll content estimation methodological framework.
Figure 3: Linear fits to downscaled reflectance in the B5, B6, B7, and B8A bands.
Figure 4: RF model classification results.
Figure 5: Healthy and damaged larch canopy: (a) healthy; (b) damaged.
Figure 6: Effectiveness of fitting the multispectral simulation models: (a-h) show the fitting effects of the respective equations.
Figure 7: Simulated and nonsimulated reflectance curves for each spectral band.
Figure 8: Spectral features and CHLC correlation.
Figure 9: 1:1 linear fit of the CHLC estimation models: (a,b) fitted plots of the model results for the simulated remote sensing data; (c,d) fitted plots for the nonsimulated data.
Figure 10: Estimation of CHLC in insect-infested stands based on spectral features from simulated (a) and nonsimulated (b) remote sensing data.
Figure 11: Comparison of images before and after Sentinel-2A mixed-pixel decomposition.
18 pages, 3655 KiB  
Article
Investigating the Role of Cover-Crop Spectra for Vineyard Monitoring from Airborne and Spaceborne Remote Sensing
by Michael Williams, Niall G. Burnside, Matthew Brolly and Chris B. Joyce
Remote Sens. 2024, 16(21), 3942; https://doi.org/10.3390/rs16213942 - 23 Oct 2024
Abstract
The monitoring of grape quality parameters within viticulture using airborne remote sensing is an increasingly important aspect of precision viticulture. Airborne remote sensing allows high volumes of spatially consistent data to be collected with improved efficiency over ground-based surveys. Spectral data can be used to understand the characteristics of vineyards, including the characteristics and health of the vines. Within viticultural remote sensing, the use of cover-crop spectra for monitoring is often overlooked due to the perceived noise it generates within imagery. However, within viticulture, the cover crop is a widely used and important management tool. This study uses multispectral data acquired by a high-resolution uncrewed aerial vehicle (UAV) and Sentinel-2 MSI to explore the benefit that cover-crop pixels could have for grape yield and quality monitoring. The study was undertaken across three growing seasons at a large commercial wine producer in the southeast of England. The site was split into a number of vineyards, with sub-blocks for different vine varieties and rootstocks. Pre-harvest multispectral UAV imagery was collected across three vineyard parcels, radiometrically corrected, and stitched to create orthomosaics (red, green, and near-infrared) for each vineyard and survey date. Orthomosaics were segmented into pure cover-crop_uav and pure vine_uav pixels, removing the impact that mixed pixels could have upon the analysis, and three vegetation indices (VIs) were constructed from the segmented imagery. Sentinel-2 Level 2A bottom-of-atmosphere scenes were also acquired as close to the UAV surveys as possible. In parallel, yield and quality surveys were undertaken one to two weeks prior to harvest, and laboratory refractometry was performed to determine the grape total acid, total soluble solids, alpha amino acids, and berry weight. Extreme gradient boosting (XGBoost v2.1.1) was used to determine the ability of the remote sensing data to predict the grape yield and quality parameters. Results suggested that pure cover-crop_uav was a successful predictor of grape yield and quality parameters (R² = 0.37–0.45), with model evaluation results comparable to the pure vine_uav and Sentinel-2 models. The analysis also showed that, whilst the structural similarity between the UAV and Sentinel-2 data was high, the cover crop is the most influential spectral component within the Sentinel-2 data. This research presents novel evidence for the ability of cover-crop_uav to predict grape yield and quality, a finding which in turn explains the success of the Sentinel-2 modelling of grape yield and quality. For growers and wine producers, creating grape yield and quality prediction models from moderate-resolution satellite imagery would be a significant innovation: proving more cost-effective than UAV monitoring for large vineyards, such methodologies could bring substantial cost savings to vineyard management.
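The modelling step is conventional gradient boosting, so a minimal sketch of fitting one quality parameter from cover-crop spectral features is straightforward. The paper reports using XGBoost v2.1.1; the synthetic data, feature names and hyperparameters below are assumptions:

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-block cover-crop predictors (e.g. NDVI mean,
# NDVI spread, and an NIR statistic; assumed features, not the paper's).
X = rng.uniform(size=(90, 3))
total_acid = 5.0 + 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.3, 90)

model = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X, total_acid)
print(np.round(model.predict(X[:3]), 2))
```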
Figures:
Figure 1: Three sample vineyards (subfigures (A) top left, (B) right, (C) bottom left). (A): 62 vine rows, study area of 20,000 m²; (B): 8 vine rows, centre coordinates 51.047926, 0.789715, study area of 10,000 m²; (C): 25 vine rows, centre coordinates, study area of 15,400 m².
Figure 2: Left: an example of the sampling strategy at Bottom Camp; right: an example of vine and cover-crop segmentation at an individual block of ten vines.
Figure 3: Near-infrared (NIR), red, and green bandwidths from UAV (top) and Sentinel-2 (bottom) used in this study. The difference in resolution between the platforms is evident, with the vine rows indiscernible in the Sentinel-2 imagery.
Figure 4: Sentinel-2 XGBoost regression outputs for four grape yield and quality parameters: (a) total acid; (b) total soluble solids; (c) alpha amino acids; (d) berry weight. Plots show the predicted Y variable from remote sensing data against the observed laboratory-derived quality parameter, with 95% confidence intervals.
Figure 5: UAV vine XGBoost regression outputs for the same four parameters, plotted as in Figure 4.
Figure 6: UAV-derived cover-crop XGBoost regression outputs for the same four parameters, plotted as in Figure 4.
Figure 7: Structural similarity index and difference between Sentinel-2 (S2) near-infrared (NIR) and uncrewed aerial vehicle (UAV) NIR data at Butness (BT).
Figure 8: Structural similarity index and difference between S2 NIR and UAV NIR data at Bottom Camp (BC).
Figure 9: Structural similarity index and difference between S2 NIR and UAV NIR data at Boothill (BH).
Figure 10: The relationship between S2 NDVI and (a) vine_uav NDVI and (b) cover-crop_uav NDVI, with data points classified by sample vineyard: Bottom Camp (bc), Boothill (bh), and Butness (bt).
17 pages, 5140 KiB  
Article
Does It Matter Whether to Use Circular or Square Plots in Forest Inventories? A Multivariate Comparison
by Efrain Velasco-Bautista, Antonio Gonzalez-Hernandez, Martin Enrique Romero-Sanchez, Vidal Guerra-De La Cruz and Ramiro Perez-Miranda
Forests 2024, 15(11), 1847; https://doi.org/10.3390/f15111847 - 22 Oct 2024
Abstract
The design of a sampling unit, whether a simple plot or a subplot within a clustered structure, including its shape and size, has received little attention in inferential forestry research. The use of auxiliary variables from remote sensing impacts the precision of estimators from both model-assisted and model-based inference perspectives. In both cases, model parameters are estimated from a sample of field plots and information from the pixels corresponding to these units. In studies assisted by remote sensing, the shape of the plot used to fit regression models (typically circular) often differs from the shape of the population elements used for prediction, where the area of interest is divided into equal tessellated parts. This raises interest in understanding the effect of sampling unit shape on the means of variables in forest stands of interest. The objective of this study was therefore to evaluate the effect of circular and square subplots, concentrically overlapped and arranged in an inverted-Y cluster structure, on tree density, basal area, and aboveground biomass in a managed temperate forest in central Mexico. We used a multivariate generalised linear mixed model, which considers the Gamma distribution of the variables and accounts for spatial correlation between secondary sampling units (SSUs) nested within the primary sampling unit (PSU). The main findings of this study indicate that the type of secondary sampling unit of the same area and centroid, whether circular or square, does not significantly affect the mean tree density, basal area (m²), or aboveground biomass.
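The shape comparison keeps area and centroid fixed, so the two subplot types are related by a one-line identity: a square of side s has the same area as a circle of radius s/√π. A quick numerical check follows (the 20 m side is an assumed value, not one from the paper):

```python
import math

s = 20.0                       # assumed square side length in metres
r = s / math.sqrt(math.pi)     # radius of the equal-area circle
print(round(r, 2))             # 11.28 m
print(round(math.pi * r**2, 1), s * s)   # both areas equal 400.0 m²
```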
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Figures:
Figure 1: Localisation of the study area.
Figure 2: Localisation of the sampling plots and sampling design.
Figure 3: Total height and diameter at breast height for all species recorded (blue dots: sacred fir; red dots: Moctezuma pine; green dots: oaks; purple dots: ocote pine; brown dots: other species).
Figure 4: Density (trees/SSU), basal area (m²/SSU), and aboveground biomass (kg/SSU) in circular and square plots.
Figure 5: Scatter plots for density (r = 0.9961), basal area (r = 0.9875), and aboveground biomass (r = 0.9811) from square and circular subplots. The colours identify the SSUs within each PSU (cluster).
Figure 6: Empirical cumulative distribution function (blue) and fitted distributions: Gamma (green), Burr (magenta), and Weibull (pink) for density; Gamma (green), Lognormal (red), and Burr (magenta) for basal area; Gamma (green), Weibull (red), and Burr (magenta) for aboveground biomass. FDA: cumulative distribution function.
Figure 7: Means and confidence intervals for aboveground biomass, density, and basal area resulting from the joint statistical model.
Figure 8: Histogram and normal distribution of residuals of the multivariate statistical analysis of density, basal area, and aboveground biomass.
Figure 9: Observed and normal percentiles of Pearson's residuals.
20 pages, 6388 KiB  
Article
Extraction of Winter Wheat Planting Plots with Complex Structures from Multispectral Remote Sensing Images Based on the Modified Segformer Model
by Chunshan Wang, Shuo Yang, Penglei Zhu and Lijie Zhang
Agronomy 2024, 14(10), 2433; https://doi.org/10.3390/agronomy14102433 - 20 Oct 2024
Abstract
As winter wheat is one of the major global food crops, the monitoring and management of its planting area is of great significance for agricultural production and food security worldwide. The development of high-resolution remote sensing imaging technology has provided rich data sources for extracting winter wheat planting information. However, existing research mostly focuses on extracting planting plots with a simple terrain structure. In the face of diverse terrain features combining mountainous areas, plains, and saline-alkali land, as well as small-scale but complex planting structures, the extraction of planting plots from remote sensing imagery faces great challenges in terms of recognition accuracy and model complexity. In this paper, we propose a modified Segformer model for extracting winter wheat planting plots with complex structures in rural areas based on the 0.8 m high-resolution multispectral data obtained from the Gaofen-2 satellite, which significantly improves extraction accuracy and efficiency under complex conditions. In the encoder and decoder of this method, new modules were developed to optimize the feature extraction and fusion process. Specifically, the improvements are as follows: (1) the MixFFN module in the original Segformer model is replaced with the Multi-Scale Feature Fusion Fully-connected Network (MSF-FFN) module, which enhances the model's representation ability for complex terrain features through multi-scale feature extraction and position embedding convolution; furthermore, the DropPath mechanism is introduced to reduce the possibility of overfitting while improving the model's generalization ability. (2) In the decoder, a CoordAttention module is added after the fusion of features at four different scales; by exploiting the coordinate attention mechanism, it precisely locates and enhances important regions in the images, further improving the model's extraction accuracy. (3) The model's input data are strengthened by incorporating multispectral indices, which also improves overall extraction accuracy. The experimental results show that the accuracy of the modified Segformer model in extracting winter wheat planting plots is significantly higher than that of traditional segmentation models, with the mean Intersection over Union (mIoU) and mean Pixel Accuracy (mPA) reaching 89.88% and 94.67%, respectively (increases of 1.93 and 1.23 percentage points over the baseline model), while the parameter count and computational complexity are significantly lower than those of other similar models. Furthermore, when multispectral indices are input into the model, the mIoU and mPA reach 90.97% and 95.16%, respectively (increases of 3.02 and 1.72 percentage points over the baseline model).
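Of the listed changes, the coordinate-attention step is the most self-contained to sketch. Below is a simplified PyTorch version of a coordinate-attention block of the kind added after feature fusion; it is adapted from the published CoordAttention design, and the layer sizes and reduction ratio are assumptions rather than the authors' exact module:

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Simplified coordinate attention: pool along H and W separately,
    transform jointly, then reweight the feature map per coordinate."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                        # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (b, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))   # joint transform
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                            # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))        # (b, c, 1, w)
        return x * a_h * a_w                                     # coordinate reweighting

feats = torch.randn(1, 64, 32, 32)        # fused decoder features (stand-in)
print(CoordAttention(64)(feats).shape)    # torch.Size([1, 64, 32, 32])
```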
(This article belongs to the Section Precision and Digital Agriculture)
Figures:
Figure 1: From left to right, images with resolutions of 0.8 m, 2 m, and 16 m, respectively.
Figure 2: Sample villages in Shijiazhuang.
Figure 3: Sample villages in Cangzhou.
Figure 4: Schematic diagram of unmanned aerial vehicle (UAV) aerial photography and the handheld coordinate meter.
Figure 5: Flowchart of the winter wheat remote sensing image dataset creation process.
Figure 6: Structure of the original Segformer model (C1, C2, C3, C4 denote the number of channels at each stage; H and W denote the height and width of the feature maps).
Figure 7: Structure of the modified Segformer model.
Figure 8: The MixFFN module and the MSF-FFN module.
Figure 9: The CoordAttention module.
Figure 10: Comparison of prediction results among different models [14,15,16,38,39,40,41]. (a,b) show imagery from the Shijiazhuang region; (c-f) show imagery from the Cangzhou region. The yellow dashed boxes indicate areas where the prediction results of different models differ significantly.
Figure 11: Comparison of ablation test results. (a,b) show imagery from the Shijiazhuang region; (c,d) show imagery from the Cangzhou region. The yellow dashed boxes indicate areas where the prediction results of different models differ significantly.
Figure 12: Comparison of results before and after spectral index enhancement. (a,b) show imagery from the Cangzhou region; (c) shows imagery from the Shijiazhuang region. The yellow dashed boxes indicate areas where the prediction results of different models differ significantly.
31 pages, 19893 KiB  
Article
A Low-Measurement-Cost-Based Multi-Strategy Hyperspectral Image Classification Scheme
by Yu Bai, Dongmin Liu, Lili Zhang and Haoqi Wu
Sensors 2024, 24(20), 6647; https://doi.org/10.3390/s24206647 - 15 Oct 2024
Abstract
The cost of hyperspectral image (HSI) classification primarily stems from the annotation of image pixels. In real-world classification scenarios, the measurement and annotation process is both time-consuming and labor-intensive. Therefore, reducing the number of labeled pixels while maintaining classification accuracy is a key research focus in HSI classification. This paper introduces a multi-strategy triple network classifier (MSTNC) to address the issue of limited labeled data in HSI classification by improving learning strategies. First, we use a contrastive learning strategy to design a lightweight triple network classifier (TNC) with low sample dependence. Because triple sample pairs are constructed, the number of labeled samples can be increased, which is beneficial for extracting intra-class and inter-class features of pixels. Second, an active learning strategy is used to label the most valuable pixels, improving the quality of the labeled data. To address the difficulty of sampling effectively under extremely limited labeling budgets, we propose a new feature-mixed active learning (FMAL) method to query valuable samples. Fine-tuning is then used to help the MSTNC learn a more comprehensive feature distribution, reducing the model's dependence on accuracy when querying samples and thereby improving sample quality. Finally, we propose an innovative dual-threshold pseudo-active learning (DSPAL) strategy, screening pseudo-labeled samples with both high confidence and high uncertainty. Extending the training set without increasing the labeling cost further improves the classification accuracy of the model. Extensive experiments are conducted on three benchmark HSI datasets. Across various labeling ratios, the MSTNC outperforms several state-of-the-art methods. In particular, under extreme small-sample conditions (five samples per class), the overall accuracy reaches 82.97% (IP), 87.94% (PU), and 86.57% (WHU).
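For context on the querying strategies discussed here, the best-versus-second-best (BvSB) margin criterion that the paper compares against is easy to state: query the unlabeled pixels whose top two class probabilities are closest. A minimal sketch follows; FMAL and DSPAL themselves are more involved and are not reproduced here:

```python
import numpy as np

def bvsb_query(probs, budget):
    """Select the `budget` most ambiguous pixels by BvSB margin.

    probs: (n_pixels, n_classes) classifier softmax outputs.
    A small gap between the two largest probabilities means the
    classifier is uncertain, so those pixels are worth labeling.
    """
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]       # best minus second-best
    return np.argsort(margin)[:budget]     # indices of pixels to label

probs = np.random.default_rng(0).dirichlet(np.ones(5), size=1000)
print(bvsb_query(probs, budget=10))
```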
Figures:
Figure 1: The overall architecture of the proposed MSTNC for HSI classification. ⊕: add the selected sample to the training set; ⊖: remove the selected sample from the test set; filter 1: the active-learning filter that screens samples using the FMAL method; filter 2: the pseudo-active-learning filter, comprising two sub-filters that select high-confidence and high-uncertainty samples, respectively.
Figure 2: Detailed TNC model structure. In the notation k0@k1 k2 k3 for the i-th convolution layer, k0 denotes the convolution kernel size and k1, k2, k3 the three dimensions of the input data.
Figure 3: Feature-mixing-based active learning (FMAL) illustration. ⊕: add the selected sample to the training set; ⊖: remove the selected sample from the test set; Repeat: number of iterations.
Figure 4: The proposed DSPAL architecture. A filter screens the samples classified by the classifier; samples are labeled with the classifier's predictions, and qualifying pseudo-labeled samples are added to the training set D^train for iterative training.
Figure 5: Indian Pines dataset: (a) false-color map; (b) ground-truth map. The numbers in parentheses give the total number of samples in each class.
Figure 6: Pavia University dataset: (a) false-color map; (b) ground-truth map. The numbers in parentheses give the total number of samples in each class.
Figure 7: Wuhan HanChuan dataset: (a) false-color map; (b) ground-truth map. The numbers in parentheses give the total number of samples in each class.
Figure 8: Classification maps of the different models for the Indian Pines dataset: (a) 3DCNN; (b) DFSL-NN; (c) DFSL-SVM; (d) Gia-CFSL; (e) DBDAFSL; (f) CapsGLOM; (g) MSTNC.
Figure 9: Classification maps of the different models for the Pavia University dataset: (a) 3DCNN; (b) DFSL-NN; (c) DFSL-SVM; (d) Gia-CFSL; (e) DBDAFSL; (f) CapsGLOM; (g) MSTNC.
Figure 10: Classification maps of the different models for the WHU-Hi-HanChuan dataset: (a) 3DCNN; (b) DFSL-NN; (c) DFSL-SVM; (d) Gia-CFSL; (e) DBDAFSL; (f) CapsGLOM; (g) MSTNC.
Figure 11: Evolution of OA as a function of the number of training samples per class: (a) IP dataset; (b) PU dataset; (c) WHU dataset.
Figure 12: Evolution of AA as a function of the number of training samples per class: (a) IP dataset; (b) PU dataset; (c) WHU dataset.
Figure 13: Evolution of Kappa as a function of the number of training samples per class: (a) IP dataset; (b) PU dataset; (c) WHU dataset.
Figure 14: Training time and testing time of the different methods.
Figure 15: Ablation experiments on the triplet contrastive learning.
Figure 16: Ablation experiments on the proposed method.
Figure 17: Comparison of the accuracy of three different sample selection methods: (a) OA; (b) AA; (c) Kappa.
Figure 18: Classification maps of different methods for Indian Pines, with seven labeled samples selected per class. Light blue dots: randomly selected samples; red dots: samples selected with the given method; dark blue dots: incorrectly predicted samples. (a) Random method; (b) BvSB method; (c) FMAL method.
Figure 19: Analysis of the role of iteration in active learning methods; the numbers in parentheses on the horizontal axis give the number of iterations of ALn. (a) OA; (b) AA; (c) Kappa.
Figure 20: Effect of the number of DSPAL iterations on accuracy: (a) IP dataset; (b) PU dataset; (c) WHU dataset.
24 pages, 15074 KiB  
Article
The Standardized Spectroscopic Mixture Model
by Christopher Small and Daniel Sousa
Remote Sens. 2024, 16(20), 3768; https://doi.org/10.3390/rs16203768 - 11 Oct 2024
Abstract
The standardized spectral mixture model combines the specificity of a physically based representation of a spectrally mixed pixel with the generality and portability of a spectral index. Earlier studies have used spectrally and geographically diverse collections of broadband and spectroscopic imagery to show that the reflectance of the majority of ice-free landscapes on Earth can be represented as linear mixtures of rock and soil substrates (S), photosynthetic vegetation (V) and dark targets (D) composed of shadow and spectrally absorptive/transmissive materials. However, both broadband and spectroscopic studies of the topology of spectral mixing spaces raise questions about the completeness and generality of the Substrate, Vegetation, Dark (SVD) model for imaging spectrometer data. This study uses a spectrally diverse collection of 40 granules from the EMIT imaging spectrometer to verify the generality and stability of the spectroscopic SVD model and characterize the SVD topology and plane of substrates to assess linearity of spectral mixing. New endmembers for soil and non-photosynthetic vegetation (NPV; N) allow the planar SVD model to be extended to a tetrahedral SVDN model to better accommodate the 3D topology of the mixing space. The SVDN model achieves smaller misfit than the SVD, but does so at the expense of implausible fractions beyond [0, 1]. However, a refined spectroscopic SVD model still achieves small (<0.03) RMS misfit, negligible sensitivity to endmember variability and strongly linear scaling over more than an order of magnitude range of spatial resolution.
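The underlying operation is a linear least-squares inversion of each pixel spectrum onto the endmember spectra, with RMS misfit as the quality metric. A minimal sketch follows, using toy endmember values rather than the EMIT-derived S, V, D endmembers; unconstrained least squares is used here, which is also why fractions can fall outside [0, 1], as the abstract discusses:

```python
import numpy as np

def invert(pixels, em):
    """Invert a linear mixture model by least squares.

    pixels: (n_pixels, n_bands) reflectance spectra.
    em: (n_bands, n_endmembers) endmember spectra.
    Returns per-pixel endmember fractions and RMS misfit.
    """
    fractions, *_ = np.linalg.lstsq(em, pixels.T, rcond=None)
    resid = em @ fractions - pixels.T
    rms = np.sqrt((resid ** 2).mean(axis=0))
    return fractions.T, rms

em = np.array([[0.30, 0.05, 0.02],     # columns: substrate, vegetation, dark
               [0.35, 0.08, 0.02],
               [0.40, 0.45, 0.01],
               [0.45, 0.50, 0.01]])
pix = (em @ np.array([0.5, 0.4, 0.1]))[None, :]   # known 50/40/10 mixture
f, rms = invert(pix, em)
print(np.round(f, 3), np.round(rms, 4))           # recovers [0.5 0.4 0.1], ~0 misfit
```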
Figures:
Figure 1: Index map for EMIT sample sites. Each of the 33 agricultural sites was chosen on the basis of climate, biome, geologic substrate and cropping stage to maximize diversity of vegetation cover density and soil exposure.
Figure 2: (A) EMIT mosaic of sites used in this study, with a common linear stretch [0, 0.8] for all granules. (B) The same mosaic with a scene-specific 2% linear stretch for each granule.
Figure 3: Spectral mixing space formed by low-order principal components of the EMIT mosaic (Figure 2). Orthogonal projections of PCs 1-3 clearly show prominent apexes corresponding to substrate, vegetation and dark reflectances. The outward convexity in PC 3 reveals an additional non-photosynthetic vegetation (NPV) endmember. Substrate endmember S corresponds to sandy soils, but pure sands have distinct reflectances and form separate mixing trends with the dark endmember. A linear mixture model using the S, V, D and N endmembers projects the mixing space into a tetrahedron bounded by a convex hull of 6 linear mixing trends, excluding sands, cloud and turbid water.
Figure 4: Spectral mixing space and joint characterization for the EMIT mosaic. A 3D embedding derived from Uniform Manifold Approximation and Projection (UMAP) reveals two distinct continua for substrates and vegetation surrounded by a constellation of distinct sand and water body reflectances (top). The joint characterization using UMAP and PC projections combines the global structure of the orthogonal PCs with the local structure preserved by UMAP (bottom). Distinct soil and NPV continua increase in reflectance amplitude, with PCs distinguishing the substrates (PC1) and vegetation (PC2). NPV spans both continua. A single continuum spanning multiple sample sites splits to yield general soil (S1) and NPV (N1) endmembers, while many other site-specific soil continua yield endmembers corresponding to the spectrally distinct sands shown in Figure 5. In contrast to the distinct soil and sand endmembers, all vegetation forms a single continuum spanned by photosynthetic and non-photosynthetic vegetation endmembers.
Figure 5: Reflectance spectra from the soil and NPV continua in Figure 4. Two distinct NPV continua (N3 and N4) converge to a single continuum that branches (N2) from the soil continuum to a single higher-amplitude NPV endmember (N1). The soil continuum extends from the branch point to a single higher-amplitude soil endmember (S1). In parallel to this main soil continuum, seven different soil continua (S3-S9) extend to spectrally distinct sand endmembers. Isolated clusters correspond to geographically distinct pure sands in the Negev desert (S2, S10) and Anza-Borrego desert (S11).
Figure 6: (A) EMIT SVD composite from the SVDN model, with a common linear stretch [0, 1]. (B) EMIT NPV + misfit composite from the SVDN model, with a common linear stretch [0, 1] for NPV and [0, 0.05] for misfit.
Figure 7: Endmember fraction spaces for the SVD, SVDN and NVD models. All models are subsets of the same SVDN endmembers, differing only in the inclusion of the S and N endmembers. Comparing the left and center columns, it is apparent that including the N endmember increases the negative fractions for all endmembers. For the SVDN model, RMS misfit diminishes with increasing NPV fraction but is greatest for spectra with negative NPV fractions. Note the much wider ranges of all fraction distributions for the NVD model.
Figure 8: SVD versus SVDN model comparison. The substrate fraction is most sensitive to the addition of the NPV endmember, the vegetation fraction is quite insensitive, and the dark fraction is most sensitive at low fractions. As expected, the SVD model has somewhat higher misfit, although still well under 0.04 for the vast majority of spectra in the mosaic. The negligible number of higher-misfit spectra are associated with clouds and high-albedo sands, which are not represented in either model. The inset covariability matrix shows endmember correlations on/above the diagonal and Mutual Information (MI) scores below. Note the high correlation and MI for S and N.
Figure 9: Raw and modeled EMIT spectra with the SVD vs. SVDN model misfit space. The NPV-dominant spectra are modeled more accurately with the SVDN than the SVD model. Sands (1, 2, 3) have higher misfits for both models because neither has a sand endmember. The SVD and SVDN models have 90% and 95% (respectively) of spectra with less than 0.03 misfit. Note the expanded reflectance scale on example 7.
Figure 10: Endmember sensitivity analysis. Three peripheral spectral endmembers (upper left) for S, V and D yield 3³ = 27 SVD model permutations. Pairwise combinations of each resulting endmember fraction distribution for the EMIT mosaic yield C(27, 2) = 351 model inversion correlations (inset) for each SVD endmember. S and V fraction distribution correlations are > 0.99, but D fractions go as low as 0.98 because differences among S and V endmembers are amplified in D fractions. Standard deviations among model pairs are < 0.05 for each fraction for 99.8% of all 63,692,800 spectra.
Figure 11: Linearity of scaling for the SVDN model. The 4 m resolution AVIRIS-3 line was collected the day after the 47 × 60 m resolution EMIT scene. All fractions scale linearly over the order-of-magnitude difference in resolution. Dispersion is greater for the substrate and NPV fractions. The slight bias of the fractions relative to 1:1 is consistent with the lower solar elevation at the time of EMIT collection. Some of the dispersion about the 1:1 lines results from orthographic displacements between images.
Figure A1: SVDN fraction spaces for the AVIRIS-3 and EMIT acquisitions compared in Figure 11. As expected, the 4.4 m AVIRIS pixel fractions span a wider range than the more spectrally mixed EMIT pixels. As with the EMIT mosaic, the S, V and D fractions are well bounded, while NPV shows a significant percentage of negative fractions.
17 pages, 9539 KiB  
Article
A Chaos-Based Encryption Algorithm to Protect the Security of Digital Artwork Images
by Li Shi, Xiangjun Li, Bingxue Jin and Yingjie Li
Mathematics 2024, 12(20), 3162; https://doi.org/10.3390/math12203162 - 10 Oct 2024
Abstract
Due to the security weaknesses of chaos-based pseudorandom number generators, in this paper, a new pseudorandom number generator (PRNG) based on mixing the three-dimensional variables of a cat chaotic map is proposed. A uniformly distributed chaotic sequence generated by a logistic map is used in the mixing step. Both statistical tests and a security analysis indicate that our PRNG has good randomness and is more complex than any one-dimensional variable of a cat map. Furthermore, a new image encryption algorithm based on the chaotic PRNG is provided to protect the content of artwork images. The core of the algorithm is to use the sequence generated by the pseudorandom number generator to permute and diffuse the image pixels, thereby obfuscating and encrypting the image content. Several security tests demonstrate that this image encryption algorithm has a high security level.
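The overall scheme, a chaotic keystream driving pixel diffusion, can be illustrated in a few lines. The toy sketch below uses a logistic-map keystream and XOR diffusion; this is illustrative only, since the paper's PRNG mixes the three dimensions of a cat map and its permutation-diffusion stages are more elaborate:

```python
import numpy as np

def logistic_keystream(x0, n, mu=3.99):
    """Iterate the logistic map x <- mu*x*(1-x) and quantize to bytes."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
ks = logistic_keystream(0.3141592, img.size).reshape(img.shape)
cipher = img ^ ks                          # diffusion by XOR (encryption)
print(np.array_equal(cipher ^ ks, img))    # True: decryption restores the image
```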
(This article belongs to the Special Issue Chaos-Based Secure Communication and Cryptography, 2nd Edition)
Show Figures

Figure 1

Figure 1
<p>The state variables of cat chaotic map. (<b>a</b>) <span class="html-italic">xy</span>-dimensional; (<b>b</b>) <span class="html-italic">xz</span>-dimensional; (<b>c</b>) <span class="html-italic">yz</span>-dimensional; (<b>d</b>) chaotic attractor.</p>
Full article ">Figure 2
<p>The main frame of our PRNG.</p>
Full article ">Figure 3
<p>Comparison of ApEn for sequences {<span class="html-italic">b<sub>i</sub></span>}, {<span class="html-italic">x<sub>i</sub></span>}, {<span class="html-italic">y<sub>i</sub></span>}, and {<span class="html-italic">z<sub>i</sub></span>}.</p>
Full article ">Figure 4
<p>Linear complexity.</p>
Full article ">Figure 5
<p>Encryption and decryption tests: (<b>a</b>,<b>d</b>,<b>g</b>) denotes the original image; (<b>b</b>,<b>e</b>,<b>h</b>) denotes the encrypted image; (<b>c</b>,<b>f</b>,<b>i</b>) denotes the decrypted image.</p>
Figure 6
<p>Histograms: (<b>a</b>–<b>c</b>) original images; (<b>d</b>–<b>f</b>) cryptographic images (the red, green and blue appearing in the diagram correspond to the three RGB colour channels).</p>
Figure 7
<p>Pixel correlation analysis: (<b>a</b>–<b>i</b>) denote the pixel correlations of the original images of images 1–3 in different colour channels, horizontally, vertically and diagonally; (<b>j</b>–<b>r</b>) correspond to the pixel correlations of their secret images in different colour channels, horizontally, vertically and diagonally.</p>
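The adjacent-pixel correlation test shown in Figure 7 can be reproduced in a few lines; a sketch for a single-channel image is given below, with the pair count and seed as arbitrary choices.

```python
import numpy as np

def adjacent_pixel_correlation(img, direction="horizontal", n=2000, seed=0):
    # Sample n pairs of adjacent pixels and return their Pearson correlation.
    # Plaintext images typically score near 1; a good cipher image, near 0.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    ys = rng.integers(0, h - dy, n)
    xs = rng.integers(0, w - dx, n)
    a = img[ys, xs].astype(float)
    b = img[ys + dy, xs + dx].astype(float)
    return np.corrcoef(a, b)[0, 1]
```

Running the test per colour channel and per direction reproduces the nine-panel layout of the figure.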
Figure 8
<p>Key sensitivity analysis: (<b>a</b>) correctly decrypted image; (<b>b</b>) x<sub>0</sub> + 2<sup>−12</sup>; (<b>c</b>) y<sub>0</sub> + 2<sup>−12</sup>; (<b>d</b>) z<sub>0</sub> + 2<sup>−12</sup>.</p>
Figure 9
<p>(<b>a</b>–<b>c</b>) denote the corresponding decrypted images after adding 5% salt and pepper noise to the three encrypted images, respectively.</p>
19 pages, 11653 KiB  
Article
Influence of Vegetation Phenology on the Temporal Effect of Crop Fractional Vegetation Cover Derived from Moderate-Resolution Imaging Spectroradiometer Nadir Bidirectional Reflectance Distribution Function–Adjusted Reflectance
by Yinghao Lin, Tingshun Fan, Dong Wang, Kun Cai, Yang Liu, Yuye Wang, Tao Yu and Nianxu Xu
Agriculture 2024, 14(10), 1759; https://doi.org/10.3390/agriculture14101759 - 5 Oct 2024
Viewed by 625
Abstract
Moderate-Resolution Imaging Spectroradiometer (MODIS) Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) products are being increasingly used for the quantitative remote sensing of vegetation. However, the assumption underlying the MODIS NBAR product’s inversion model—that surface anisotropy remains unchanged over the 16-day retrieval period—may be unreliable, especially since the canopy structure of vegetation undergoes stark changes at the start of season (SOS) and the end of season (EOS). Therefore, to investigate the temporal effect of the MODIS NBAR product on the quantitative remote sensing of crops, this study selected typical phenological parameters, namely the SOS, the EOS, and the intervening stable growth of season (SGOS). The PROBA-V bioGEOphysical product Version 3 (GEOV3) Fractional Vegetation Cover (FVC) served as verification data, and the Pearson correlation coefficient (PCC) was used to compare and analyze the retrieval accuracy of FVC derived from the MODIS NBAR product and the MODIS Surface Reflectance product. The Anisotropic Flat Index (AFX) was further employed to explore the influence of vegetation type and mixed-pixel distribution characteristics on the BRDF shape at different stages of the growing seasons and different FVC levels; this was then combined with an NDVI spatial distribution map to assess the feasibility of using reflectance from characteristic directions other than nadir for FVC correction. The results revealed the following: (1) Generally, at the SOSs and EOSs, the differences in PCCs before vs. after the NBAR correction mainly ranged from 0 to 0.1, implying that the accuracy of FVC derived from MODIS NBAR is lower than that derived from MODIS Surface Reflectance. Conversely, during the SGOSs, the differences in PCCs before vs. after the NBAR correction ranged between −0.2 and 0, suggesting that the accuracy of FVC derived from MODIS NBAR surpasses that derived from MODIS Surface Reflectance. (2) As vegetation phenology shifts, the ensuing differences in NDVI patterning and AFX can offer auxiliary information for enhanced vegetation classification and interpretation of mixed-pixel distribution characteristics, which, when combined with the NDVI of characteristic directional reflectance, could enable the accurate retrieval of FVC. Our results quantify the timescale effect of BRDF correction across the stages of the growing season, highlighting the importance of considering how these stages differentially influence the temporal effect of the NBAR correction before using the MODIS NBAR product to monitor vegetation. Full article
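For readers who want to reproduce the FVC retrieval and accuracy comparison in outline, the sketch below derives FVC from NDVI with the commonly used dimidiate pixel model and scores two FVC maps with the PCC. The NDVI endmember values and the synthetic reflectances are placeholders, not the paper's calibration.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-12)

def fvc_dimidiate(ndvi_img, ndvi_soil=0.05, ndvi_veg=0.86):
    # Dimidiate pixel model: linear stretch of NDVI between bare-soil and
    # full-vegetation endmembers (the endmember values here are illustrative).
    return np.clip((ndvi_img - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

def pcc(a, b):
    # Pearson correlation coefficient between two FVC maps.
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Toy comparison standing in for FVC(MOD09GA) vs. FVC(MCD43A4)
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.20, (64, 64))
nir = rng.uniform(0.20, 0.50, (64, 64))
fvc_sr = fvc_dimidiate(ndvi(nir, red))                  # surface reflectance
fvc_nbar = fvc_dimidiate(ndvi(nir * 0.98, red * 1.02))  # stand-in for NBAR
print("PCC:", pcc(fvc_sr, fvc_nbar))
```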
Show Figures

Figure 1
<p>Spatial extent of the Wancheng District study area (in Henan Province, China). (<b>a</b>) Map of land cover types showing the location of sampling points across the study area. This map came from MCD12Q1 (v061). (<b>b</b>–<b>d</b>) True-color images of the three mixed pixels, obtained from Sentinel-2. The distribution characteristics are as follows: crops above with buildings below (<b>b</b>); crops below with buildings above (<b>c</b>); and buildings in the upper-left corner, crops in the remainder (<b>d</b>).</p>
Figure 2
<p>Monthly average temperature and monthly total precipitation in the study area, from 2017 to 2021.</p>
Figure 3
<p>Data processing flow chart. The green rectangles from top to the bottom represent three steps: crop phenological parameters extraction with TIMESAT; Fractional Vegetation Cover (FVC) derived from MOD09GA and MCD43A4; and accuracy evaluation, respectively. Blue solid rectangles refer to a used product or derived results, while blue dashed rectangles refer to the software or model used in this study. NDVI<sub>MOD09GA</sub>: NDVI derived from MOD09GA, NDVI<sub>MCD43A4</sub>: NDVI derived from MCD43A4, FVC<sub>MOD09GA</sub>: FVC derived from MOD09GA, FVC<sub>MCD43A4</sub>: FVC derived from MCD43A4. PCC<sub>MOD09GA</sub>: Pearson correlation coefficient (PCC) calculated for FVC<sub>MOD09GA</sub> and GEOV3 FVC, PCC<sub>MCD43A4</sub>: PCC calculated for FVC<sub>MCD43A4</sub> and GEOV3 FVC.</p>
Figure 4
<p>NDVI and EVI time series fitted curves and phenological parameters of crops. SOS: start of season; EOS: end of season; SGOS: stable growth of season.</p>
Figure 5
<p>Spatial distribution of Fractional Vegetation Cover (FVC) derived from MOD09GA and MCD43A4, and the difference images of FVC. FVC<sub>MOD09GA</sub>: FVC derived from MOD09GA, FVC<sub>MCD43A4</sub>: FVC derived from MCD43A4. (<b>a</b>–<b>c</b>) FVC derived from MOD09GA, MCD43A4, and the difference between FVC<sub>MOD09GA</sub> and FVC<sub>MCD43A4</sub> on 15 November 2020, respectively; (<b>d</b>–<b>f</b>) FVC derived from MOD09GA, MCD43A4, and the difference between FVC<sub>MOD09GA</sub> and FVC<sub>MCD43A4</sub> on 10 February 2021, respectively; (<b>g</b>–<b>i</b>) FVC derived from MOD09GA, MCD43A4, and the difference between FVC<sub>MOD09GA</sub> and FVC<sub>MCD43A4</sub> on 30 September 2021, respectively.</p>
Figure 6
<p>Pearson correlation coefficients (PCCs) of Fractional Vegetation Cover (FVC) derived before and after the NBAR correction with GEOV3 FVC at different stages of the growing seasons. FVC<sub>MOD09GA</sub>: FVC derived from MOD09GA. FVC<sub>MCD43A4</sub>: FVC derived from MCD43A4. PCC<sub>MOD09GA</sub>: PCC calculated for FVC<sub>MOD09GA</sub> and GEOV3 FVC, PCC<sub>MCD43A4</sub>: PCC calculated for FVC<sub>MCD43A4</sub> and GEOV3 FVC. (<b>a</b>) PCC<sub>MOD09GA</sub> and PCC<sub>MCD43A4</sub> in 2018–2021; (<b>b</b>) Scatterplot of numerical differences between PCC<sub>MOD09GA</sub> and PCC<sub>MCD43A4</sub>. SOS: start of season; EOS: end of season; SGOS: stable growth of season.</p>
Figure 7
<p>NDVI spatial distribution maps of crop pixel, savanna pixel, and grassland pixel in different stages of the growing seasons. (<b>a</b>–<b>d</b>) Crop. (<b>e</b>–<b>h</b>) Savanna. (<b>i</b>–<b>l</b>) Grassland. SZA: Solar Zenith Angle, FVC: Fractional Vegetation Cover, AFX_RED: Anisotropic Flat Index (AFX) in the red band, AFX_NIR: AFX in the near-infrared band.</p>
Figure 8
<p>NDVI spatial distribution maps of mixed pixels in different stages of the growing seasons. (<b>a</b>–<b>d</b>) Crops above and buildings below. (<b>e</b>–<b>h</b>) Crops below and buildings above. (<b>i</b>–<b>l</b>) Buildings in the upper-left corner and crops in the remainder. SZA: Solar Zenith Angle, FVC: Fractional Vegetation Cover, AFX_RED: Anisotropic Flat Index (AFX) in the red band, AFX_NIR: AFX in the near-infrared band.</p>
29 pages, 6780 KiB  
Article
Phenological and Biophysical Mediterranean Orchard Assessment Using Ground-Based Methods and Sentinel 2 Data
by Pierre Rouault, Dominique Courault, Guillaume Pouget, Fabrice Flamain, Papa-Khaly Diop, Véronique Desfonds, Claude Doussan, André Chanzy, Marta Debolini, Matthew McCabe and Raul Lopez-Lozano
Remote Sens. 2024, 16(18), 3393; https://doi.org/10.3390/rs16183393 - 12 Sep 2024
Viewed by 938
Abstract
A range of remote sensing platforms provide high spatial and temporal resolution insights which are useful for monitoring vegetation growth. Very few studies have focused on fruit orchards, largely due to the inherent complexity of their structure. Fruit trees are mixed with inter-rows that can be grassed or non-grassed, and there are no standard protocols for ground measurements suitable for the range of crops. The assessment of biophysical variables (BVs) for fruit orchards from optical satellites remains a significant challenge. The objectives of this study are as follows: (1) to address the challenges of extracting and better interpreting biophysical variables from optical data by proposing new ground measurements protocols tailored to various orchards with differing inter-row management practices, (2) to quantify the impact of the inter-row at the Sentinel pixel scale, and (3) to evaluate the potential of Sentinel 2 data on BVs for orchard development monitoring and the detection of key phenological stages, such as the flowering and fruit set stages. Several orchards in two pedo-climatic zones in southeast France were monitored for three years: four apricot and nectarine orchards under different management systems and nine cherry orchards with differing tree densities and inter-row surfaces. We provide the first comparison of three established ground-based methods of assessing BVs in orchards: (1) hemispherical photographs, (2) a ceptometer, and (3) the Viticanopy smartphone app. The major phenological stages, from budburst to fruit growth, were also determined by in situ annotations on the same fields monitored using Viticanopy. In parallel, Sentinel 2 images from the two study sites were processed using a Biophysical Variable Neural Network (BVNET) model to extract the main BVs, including the leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and fraction of green vegetation cover (FCOVER). The temporal dynamics of the normalised FAPAR were analysed, enabling the detection of the fruit set stage. A new aggregative model was applied to data from hemispherical photographs taken under trees and within inter-rows, enabling us to quantify the impact of the inter-row at the Sentinel 2 pixel scale. The resulting value compared to BVs computed from Sentinel 2 gave statistically significant correlations (0.57 for FCOVER and 0.45 for FAPAR, with respective RMSE values of 0.12 and 0.11). Viticanopy appears promising for assessing the PAI (plant area index) and FCOVER for orchards with grassed inter-rows, showing significant correlations with the Sentinel 2 LAI (R2 of 0.72, RMSE 0.41) and FCOVER (R2 0.66 and RMSE 0.08). Overall, our results suggest that Sentinel 2 imagery can support orchard monitoring via indicators of development and inter-row management, offering data that are useful to quantify production and enhance resource management. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Show Figures

Figure 1
<p>Schematic of the three approaches used to monitor orchard development at different spatial scales throughout the year (from tree level for phenological observations to watershed level using Sentinel 2 data).</p>
Figure 2
<p>(<b>a</b>) Locations of the monitored orchards in the Ouvèze–Ventoux watershed (green points at right) and in the La Crau area (yellow points at left), (<b>b</b>) pictures of 2 cherry orchards (13 September and 22 July 2022): top, non-grassed orchard drip-irrigated by two rows of drippers and bottom, grassed orchard drip-irrigated in summer, (<b>c</b>) pictures of 2 orchards in La Crau (top, nectarine tree in spring 22 March 2023 and bottom, in summer 26 June 2022).</p>
Figure 3
<p>(<b>a</b>) Main steps in processing the hemispherical photographs. (<b>b</b>) The three methods of data acquisition around the central tree. (<b>c</b>) Protocol used with hemispherical photographs. (<b>d</b>) Protocol used with the Viticanopy application, with 3 trees monitored in the four directions (blue arrows). (<b>e</b>) Protocols used with the ceptometer: P1 measured in the shadow of the trees and (blue) P2 in the inter-rows (black).</p>
Figure 4
<p>Protocol for the monitoring of the phenological stages of cherry trees. (<b>a</b>) Phenology of cherry trees according to BBCH; (<b>b</b>) at plot scale, in an orchard, three trees in red monitored by observations (BBCH scale); (<b>c</b>) at tree scale, two locations are selected to classify flowering stage in the tree; and (<b>d</b>) flowering stage of a cherry tree in April 2022.</p>
Figure 5
<p>Comparison of temporal profiles of interpolated Sentinel 2 LAI (black line) and PAI obtained from the ceptometer (blue line, P2 protocol) and Viticanopy (green line) for three orchards: (<b>a</b>) 3099 (cherry—grassed—Ouvèze), (<b>b</b>) 183 (cherry—non-grassed—Ouvèze), and (<b>c</b>) 4 (nectarine—La Crau) at the beginning of 2023.</p>
Figure 6
<p>Comparison between Sentinel 2 LAI and PAI from (<b>a</b>) ceptometer measurements taken at all orchards of the two areas (La Crau and Ouvèze), (<b>b</b>) Viticanopy measurements at all orchards, and (<b>c</b>) Viticanopy measurements excluding 2 non-grassed orchards (183, 259). The black line represents the optimal correlation 1:1; the red line represents the results from linear regression.</p>
Figure 7
<p>(<b>a</b>)—(top graphs) Proportion of tree (orange <span class="html-italic">100*FCOVER<sub>t</sub>/FCOVER<sub>c</sub></span>, see Equation (1)) and of inter-row (green <span class="html-italic">100*((1-FCOVER<sub>t</sub>)*FCOVER<sub>g</sub>)/FCOVER<sub>c</sub></span>) components computed from hemispherical photographs used to estimate FCOVER for two dates, 22 March 2022 (doy:81) and 21 June 2022 (doy 172), for all the monitored fields. (<b>b</b>)—(bottom graphs) For two plots, left, field 183.2 and right, field 3099.1, temporal variations in proportion of tree and inter-row components for the different observation dates in 2022.</p>
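A minimal sketch of the two-component aggregation implied by Equation (1) in this caption, assuming the pixel-scale cover takes the form FCOVER_c = FCOVER_t + (1 − FCOVER_t) × FCOVER_g, i.e., the grassed inter-row contributes only where the tree canopy does not; the input values are illustrative.

```python
def fcover_pixel(fcover_tree, fcover_grass):
    # Aggregate tree and grassed inter-row cover to the Sentinel 2 pixel scale:
    # the inter-row is only visible through gaps in the tree canopy.
    return fcover_tree + (1.0 - fcover_tree) * fcover_grass

def component_shares(fcover_tree, fcover_grass):
    # Percent contributions of the tree and inter-row components to the
    # pixel-scale FCOVER, matching the proportions plotted in Figure 7.
    total = fcover_pixel(fcover_tree, fcover_grass)
    tree = 100.0 * fcover_tree / total
    grass = 100.0 * (1.0 - fcover_tree) * fcover_grass / total
    return tree, grass

print(component_shares(0.35, 0.60))  # ~47% tree, ~53% inter-row
```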
Figure 8
<p>(<b>a</b>) Averaged percentage of grass contribution on FAPAR computed from hemispherical photographs according to Equation (1) for all grassed orchard plots in 2022. Examples of Sentinel 2 FAPAR dynamics (black lines) for plots at (<b>b</b>) non-grassed site 183 and (<b>c</b>) grassed site 1418. Initial values of FAPAR, as computed from BVNET, are provided in black. The green line represents adjusted FAPAR after subtracting the grass contribution (percentage obtained from hemispherical photographs). It corresponds to FAPAR only for the trees. The percentage of grass contribution is in red.</p>
Figure 9
<p>Correlation between (<b>a</b>) FCOVER obtained from hemispherical photographs (from Equation (1)) for all orchards of the two studied areas and FCOVER from Sentinel 2 computed with BVNET (<b>b</b>) FAPAR from hemispherical photographs and FAPAR from Sentinel 2 for all orchards and for the 3 years. (<b>c</b>) Correlation between FCOVER from Viticanopy and Sentinel 2 for all orchards for the two areas, except 183 and 259. (<b>d</b>) Correlation between FCOVER from upward-aimed hemispherical photographs and from Viticanopy for all plots.</p>
Figure 10
<p>(<b>a</b>) LAI temporal profiles obtained from BVNET applied to Sentinel 2 data averaged at plot and field scales (field 3099) for the year 2022 and (<b>b</b>) soil water stock (in mm in blue) computed at 0–50 cm using capacitive sensors (described in <a href="#sec2dot1-remotesensing-16-03393" class="html-sec">Section 2.1</a>), with rainfall recorded at the Carpentras station (see <a href="#app1-remotesensing-16-03393" class="html-app">Supplementary Part S1 and Table S1</a>).</p>
Figure 11
<p>Time series of FCOVER (mean value at field scale) for the cherry trees in field 3099 in Ouvèze area from 2016 to 2023.</p>
Figure 12
<p>Sentinel 2 FAPAR evolution in 2022 for two cherry tree fields, with the date of flowering observation (in green) and the date of fruit set observation (in red) for (<b>a</b>) plot 183 (non-grassed cherry trees) and (<b>b</b>) plot 3099 (grassed cherry trees).</p>
Figure 13
<p>Variability in dates for the phenological stages of a cherry tree orchard (plot 3099) observed in 2022.</p>
Figure 14
<p>(<b>a</b>) Normalised FAPAR computed for all observed cherry trees relative to observation dates for BBCH stages in the Ouvèze area in 2021 for five plots. (<b>b</b>) Map of dates distinguishing between flowering and fruit set stages for 2021 obtained by thresholding FAPAR images.</p>
16 pages, 3639 KiB  
Article
Time-of-Flight Camera Intensity Image Reconstruction Based on an Untrained Convolutional Neural Network
by Tian-Long Wang, Lin Ao, Na Han, Fu Zheng, Yan-Qiu Wang and Zhi-Bin Sun
Photonics 2024, 11(9), 821; https://doi.org/10.3390/photonics11090821 - 30 Aug 2024
Viewed by 1056
Abstract
As science and technology continue to develop, laser ranging technology is becoming more efficient, convenient, and widespread, and it has been widely used in the fields of medicine, engineering, video games, and three-dimensional imaging. A time-of-flight (ToF) camera is a three-dimensional stereo imaging device with the advantages of small size, small measurement error, and strong anti-interference ability. However, compared to traditional sensors, ToF cameras typically exhibit lower resolution and signal-to-noise ratio due to inevitable noise from multipath interference and mixed pixels during usage. Additionally, in environments with scattering media, light from objects is scattered multiple times, making it challenging for ToF cameras to obtain effective object information. To address these issues, we propose a solution that combines ToF cameras with single-pixel imaging theory. Leveraging the intensity information acquired by ToF cameras, we apply various reconstruction algorithms to reconstruct the object’s image. Under undersampling conditions, our reconstruction approach yields a higher peak signal-to-noise ratio than the raw camera image, significantly improving the quality of the target object’s image. Furthermore, where ToF cameras fail in environments with scattering media, our proposed approach still successfully reconstructs the object’s image through the scattering medium. This experimental demonstration effectively reduces the noise and direct ambient light affecting the ToF camera, while opening up the potential application of ToF cameras in challenging environments, such as scattering media or underwater. Full article
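Of the four reconstruction algorithms compared in this work (CGI, BP, TVAL3, DL), correlation ghost imaging is the simplest to sketch. Below, a simulated bucket signal stands in for the ToF intensity measurements, and a standard PSNR function scores the result; the target, pattern count, and seed are arbitrary stand-ins.

```python
import numpy as np

def cgi_reconstruct(patterns, intensities):
    # CGI estimator G = <I * P> - <I><P>, averaged over the M patterns.
    corr = np.tensordot(intensities, patterns, axes=1) / len(intensities)
    return corr - intensities.mean() * patterns.mean(axis=0)

def psnr(ref, img):
    # Peak signal-to-noise ratio after normalizing both images to [0, 1].
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)
    mse = np.mean((norm(ref) - norm(img)) ** 2)
    return 10.0 * np.log10(1.0 / (mse + 1e-12))

# Toy target and random illumination patterns at a 25% sampling ratio
rng = np.random.default_rng(1)
obj = np.zeros((32, 32)); obj[8:24, 12:20] = 1.0
m = int(0.25 * obj.size)
patterns = rng.random((m, 32, 32))
bucket = patterns.reshape(m, -1) @ obj.ravel()  # simulated ToF intensities
print("PSNR:", psnr(obj, cgi_reconstruct(patterns, bucket)))
```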
Show Figures

Figure 1
<p>Flight time measurement in continuous sinusoidal wave modulation mode.</p>
Figure 2
<p>Schematic diagram of the image reconstruction using a neural network. (<b>a</b>) Schematic diagram of network operation, (<b>b</b>) images reconstructed by the neural network with different sampling rates and different numbers of iterations.</p>
Figure 3
<p>The schematic diagrams of SPI.</p>
Figure 4
<p>The schematic diagrams of SPI based on a ToF camera.</p>
Figure 5
<p>Experimental results of imaging reconstruction using intensity images at different SRs. (<b>a</b>) Target object, (<b>b</b>) ToF image, (<b>c</b>–<b>f</b>) the recovered images by CGI, BP, TVAL3, and DL. The SRs from left to right are 6.25%, 12.5%, 18.75%, 25%, 31.25%, and 37.5%.</p>
Figure 6
<p>Plots of the PSNRs of the reconstructed intensity images versus the SRs by different algorithms. The black, red, blue, and green lines denote the PSNRs by CGI, BP, TVAL3, and DL.</p>
Figure 7
<p>Experimental results of reconstruction using the intensity images through the scattering media at different SRs. (<b>a</b>) ToF image, (<b>b</b>–<b>e</b>) the recovered images by CGI, BP, TVAL3, and DL. The SRs from left to right are 6.25%, 12.5%, 18.75%, 25%, 31.25%, and 37.5%.</p>
Figure 8
<p>Plots comparing the PSNR and SRs for the reconstruction of intensity images through scattering media using different algorithms.</p>
Figure 9
<p>Experimental results of reconstruction using the intensity images through the scattering media at different SRs. (<b>a</b>) ToF image, (<b>b</b>) ToF image with added Gaussian noise. (<b>c</b>–<b>f</b>) the recovered images by CGI, BP, TVAL3, and DL. The SRs from left to right are 6.25%, 12.5%, 18.75%, 25%, 31.25%, and 37.5%.</p>
Figure 10
<p>Plots comparing the PSNR and SRs for the reconstruction of intensity images through scattering media using different algorithms.</p>
32 pages, 14893 KiB  
Article
Mapping of Clay Montmorillonite Abundance in Agricultural Fields Using Unmixing Methods at Centimeter Scale Hyperspectral Images
by Etienne Ducasse, Karine Adeline, Audrey Hohmann, Véronique Achard, Anne Bourguignon, Gilles Grandjean and Xavier Briottet
Remote Sens. 2024, 16(17), 3211; https://doi.org/10.3390/rs16173211 - 30 Aug 2024
Viewed by 972
Abstract
The composition of clay minerals in soils, and more particularly the presence of montmorillonite (as part of the smectite family), is a key factor in soil swell–shrinking as well as off-road vehicle mobility. Detecting these topsoil clay minerals and quantifying the montmorillonite abundance are a challenge, since they are usually intimately mixed with other minerals, soil organic carbon, and soil moisture content. Imaging spectroscopy coupled with unmixing methods can address these issues, but the quality of the estimation degrades as the spatial resolution coarsens, due to pixel heterogeneity. With the advent of UAV-borne and proximal hyperspectral acquisitions, it is now possible to acquire images at a centimeter scale. Thus, the objective of this paper is to evaluate the accuracy and limitations of unmixing methods to retrieve montmorillonite abundance from very-high-resolution hyperspectral images (1.5 cm) acquired from a camera installed on top of a bucket truck over three different agricultural fields in the Loiret department, France. Two automatic endmember detection methods based on the assumption that materials are linearly mixed, namely the Simplex Identification via Split Augmented Lagrangian (SISAL) and the Minimum Volume Constrained Non-negative Matrix Factorization (MVC-NMF), were tested prior to unmixing. Then, two linear unmixing methods, the fully constrained least squares method (FCLS) and the multiple endmember spectral mixture analysis (MESMA), and two nonlinear unmixing ones, the generalized bilinear method (GBM) and the multi-linear model (MLM), were performed on the images. In addition, several spectral preprocessings coupled with these unmixing methods were applied in order to improve the performances. Results showed that our selected automatic endmember detection methods were not suitable in this context. However, unmixing methods with endmembers taken from available spectral libraries performed successfully. The nonlinear method, MLM, without prior spectral preprocessing or with the application of the first Savitzky–Golay derivative, gave the best accuracies for montmorillonite abundance estimation using the USGS library (RMSE between 2.2–13.3% and 1.4–19.7%). Furthermore, the abundance estimations at this scale were mainly affected by (i) the high variability of the soil composition, (ii) the soil roughness, which induces large variations in the illumination conditions and multiple surface scatterings, and (iii) multiple volume scatterings coming from the intimate mixture. Finally, these results offer a new opportunity for mapping expansive soils from imaging spectroscopy at very high spatial resolution. Full article
(This article belongs to the Special Issue Remote Sensing for Geology and Mapping)
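As a reference point for the linear case, FCLS is often implemented as non-negative least squares with the abundance sum-to-one constraint enforced by a heavily weighted appended row of ones, as sketched below. The three-endmember library is a random stand-in, not a real mineral library such as the USGS one.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, delta=1e3):
    # Fully constrained least squares: solve y ≈ E a subject to a >= 0 and
    # sum(a) = 1, the latter imposed softly via an appended weighted row.
    n_em = E.shape[1]
    E_aug = np.vstack([E, delta * np.ones((1, n_em))])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# Toy mixture of three hypothetical endmembers (e.g. montmorillonite, quartz, calcite)
rng = np.random.default_rng(2)
E = rng.uniform(0.1, 0.9, (200, 3))                          # 200-band endmember matrix
y = E @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.005, 200)  # observed pixel spectrum
print(fcls(E, y))                                            # approximately [0.5, 0.3, 0.2]
```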
Show Figures

Figure 1
<p>Site locations from AGEOTHYP project depicted with colored squares on: (<b>a</b>) topographic map by IGN (National Institute of Geographic and Forest Information) overlaid with smectite abundance from XRD analyses and (<b>b</b>) BRGM swelling hazard map. Soil digital photos of the three selected sites: (<b>c</b>) “Le Buisson” located in Coinces, (<b>d</b>) “Les Laps” located in Gémigny and (<b>e</b>) “La Malandière” located in Mareau.</p>
Figure 2
<p>Acquisition setup with the HySpex cameras, RGB composite image from HySpex VNIR camera on Gémigny, Coinces and Mareau sites, with the sampling grid composed of 15 subzones (named after “SUB”), samples collected for laboratory soil characterization in subzones are delimited by red squares (<b>right</b>).</p>
Figure 3
<p>NDVI and CAI values for the Mareau hyperspectral image. In red: the thresholds chosen for each index in order to characterize four classes.</p>
Figure 4
<p>Grain size and SOC for each site (<b>left</b>), texture triangle for all samples (<b>right</b>).</p>
Figure 5
<p>Processing scheme to estimate montmorillonite abundance.</p>
Figure 6
<p>Endmembers from laboratory spectral libraries: (<b>a</b>) montmorillonite, (<b>b</b>) kaolinite, (<b>c</b>) illite, (<b>d</b>) quartz and (<b>e</b>) calcite.</p>
Figure 7
<p>EM estimates over the Gémigny image. Comparison of the detected and Ducasse EM spectra and graphs of mixture simplex in the first two components space (PC 1 and PC 2) for (<b>a</b>) SISAL to detect 4 EM, (<b>b</b>) SISAL to detect 5 EM, (<b>c</b>) MVC-NMF to detect 4 EM and (<b>d</b>) MVC-NMF to detect 5 EM.</p>
Figure 8
<p>Montmorillonite abundance estimations over all the subzones per site (gray boxplots with the median highlighted by a red line) compared to the XRD dataset (boxplots with a red square depicting the median). The inputs are the USGS library, the six preprocessings and REF followed by MLM.</p>
Figure 9
<p>Montmorillonite abundance estimations over all the subzones per site (gray boxplots with the median highlighted by a red line) compared to the XRD dataset (boxplots with a red square depicting the median). The inputs are the Ducasse library, the six preprocessings and REF followed by MLM.</p>
Figure 10
<p>Performances of Montmorillonite abundance estimations (wt%) obtained with (<b>a</b>) REF-MLM and (<b>b</b>) 1stSGD-MLM with the USGS library (red) and Ducasse spectral library (blue). Bars in the x axis correspond to the accuracy of XRD analysis, and bars in the y axis correspond to the standard deviation of estimated montmorillonite abundances.</p>
Figure 11
<p>Results on Gémigny-SUB14: (<b>a</b>) RGB image (in black: masked areas), (<b>b</b>) hillshade map, (<b>c</b>) hillshade histogram (the red vertical line represents the median), (<b>d</b>) difference between the estimated montmorillonite abundance map obtained with REF-MLM and the XRD measured value (in white: masked areas), (<b>g</b>) the same for 1stSGD-MLM, (<b>e</b>) <span class="html-italic">p</span> value maps for REF-MLM (in white: masked areas), (<b>h</b>) the same for 1stSGD-MLM, (<b>f</b>) <span class="html-italic">p</span> value histogram for REF-MLM (the red vertical line represents the median) and (<b>i</b>) the same for 1stSGD-MLM.</p>
Figure 12
<p>Results on Coinces-SUB2: (<b>a</b>) RGB image (in black: masked areas), (<b>b</b>) hillshade map, (<b>c</b>) hillshade histogram (the red vertical line represents the median), (<b>d</b>) difference between the estimated montmorillonite abundance map obtained with REF-MLM and the XRD measured value (in white: masked areas), (<b>g</b>) the same for 1stSGD-MLM, (<b>e</b>) <span class="html-italic">p</span> value maps for REF-MLM (in white: masked areas), (<b>h</b>) the same for 1stSGD-MLM, (<b>f</b>) <span class="html-italic">p</span> value histogram for REF-MLM (the red vertical line represents the median) and (<b>i</b>) the same for 1stSGD-MLM.</p>
Figure 13
<p>Performances for Montmorillonite abundance estimation with REF-MLM for all subsites (gray boxplots with the median highlighted by a red line) plotted with the XRD dataset (boxplots with a red square depicting the median).</p>
Figure 14
<p>Maps for Gémigny site (<b>a</b>) RGB composite image, (<b>b</b>) composite mask and (<b>c</b>) abundance map of montmorillonite obtained with the REF-MLM and USGS library.</p>
Figure 15
<p>Maps for Coinces with wet area SUB10 site (<b>a</b>) RGB composite image, (<b>b</b>) composite mask and (<b>c</b>) abundance map of montmorillonite obtained with the REF-MLM and USGS library.</p>
Figure 16
<p>Maps for Mareau site with wet area SUB15 (<b>a</b>) RGB composite image, (<b>b</b>) composite mask and (<b>c</b>) abundance map of montmorillonite obtained with the REF-MLM and USGS library.</p>
Figure 17
<p>Comparison between mineral abundance estimations with REF-MLM and USGS library and the XRD dataset for each site: (<b>a</b>) Coinces, (<b>b</b>) Gémigny, (<b>c</b>) Mareau.</p>
27 pages, 79059 KiB  
Article
Unsupervised Noise-Resistant Remote-Sensing Image Change Detection: A Self-Supervised Denoising Network-, FCM_SICM-, and EMD Metric-Based Approach
by Jiangling Xie, Yikun Li, Shuwen Yang and Xiaojun Li
Remote Sens. 2024, 16(17), 3209; https://doi.org/10.3390/rs16173209 - 30 Aug 2024
Viewed by 848
Abstract
The detection of change in remote-sensing images is broadly applicable to many fields. In recent years, both supervised and unsupervised methods have demonstrated excellent capacity to detect changes in high-resolution images. However, most of these methods are sensitive to noise, and their performance significantly deteriorates when dealing with remote-sensing images that have been contaminated by mixed random noise. Moreover, supervised methods require that samples be manually labeled for training, which is time-consuming and labor-intensive. This study proposes a new unsupervised change-detection (CD) framework that is resilient to mixed random noise, called self-supervised denoising network-based unsupervised change detection coupling FCM_SICM and EMD (SSDNet-FSE). It consists of two components, namely a denoising module and a CD module. The proposed method first utilizes a self-supervised denoising network with a full 3-D weight attention mechanism to reconstruct the noisy images. Then, a noise-resistant fuzzy C-means clustering algorithm (FCM_SICM) is used to decompose the mixed pixels of the reconstructed images into multiple signal classes by exploiting local spatial information, spectral information, and membership linkage. Next, the noise-resistant Earth mover’s distance (EMD) is used to calculate the distance between the signal-class centers and the corresponding fuzzy memberships of bitemporal pixels and generate a change-magnitude map. Finally, automatic thresholding is undertaken to binarize the change-magnitude map into the final CD map. The results of experiments conducted on five public datasets prove the superior noise resistance of the proposed method over six state-of-the-art CD competitors and confirm its effectiveness and potential for practical application. Full article
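To illustrate the distance step of the CD module: below, each date's fuzzy signature (signal-class centers weighted by the pixel's FCM_SICM memberships) is compared with SciPy's 1D Wasserstein distance, the simplest form of the EMD. The class centers are reduced to scalar values so the sketch stays runnable; the full method operates on multi-band centers, and all numbers here are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def change_magnitude(centers_t1, mem_t1, centers_t2, mem_t2):
    # EMD between the two dates' fuzzy signatures: a large mass shift between
    # signal classes signals a change at this pixel.
    return wasserstein_distance(centers_t1, centers_t2, mem_t1, mem_t2)

c1 = np.array([0.12, 0.30, 0.55, 0.80])  # signal-class centers, date 1
c2 = np.array([0.10, 0.33, 0.58, 0.82])  # signal-class centers, date 2
m1 = np.array([0.70, 0.20, 0.05, 0.05])  # one pixel's fuzzy memberships, date 1
m2 = np.array([0.10, 0.15, 0.25, 0.50])  # same pixel, date 2
print(change_magnitude(c1, m1, c2, m2))  # large value -> likely change
```

Automatic thresholding of the per-pixel magnitudes (for instance with Otsu's method, given here only as an example) then yields the binary CD map.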
Show Figures

Figure 1
<p>Flowchart of the proposed SSDNet-FSE framework.</p>
Figure 2
<p>Graphical illustration of the SimAM attention mechanism, in which complete 3-D weights are used for attention.</p>
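SimAM is parameter-free, so the mechanism in this figure can be written in a few lines. The NumPy sketch below follows the published SimAM formulation (an energy function over channel statistics yielding a full 3-D weight per activation), with λ as the usual stabilizing constant; the feature-map shape is arbitrary.

```python
import numpy as np

def simam(x, lam=1e-4):
    # SimAM attention over a feature map x of shape (C, H, W): each activation
    # receives its own 3-D weight from an energy function of the channel
    # statistics, then gates x through a sigmoid.
    n = x.shape[1] * x.shape[2] - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    var = d.sum(axis=(1, 2), keepdims=True) / n
    e_inv = d / (4.0 * (var + lam)) + 0.5
    return x * (1.0 / (1.0 + np.exp(-e_inv)))  # sigmoid gating

feat = np.random.default_rng(0).standard_normal((8, 16, 16))
print(simam(feat).shape)  # (8, 16, 16)
```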
Figure 3
<p>Network structure of SSDNet.</p>
Figure 4
<p>Coupling mechanism of FCM_SICM and EMD.</p>
Figure 5
<p>CD results of competitive methods obtained on Shangtang. (<b>a</b>) Time 1 image with mixed noises. (<b>b</b>) Time 2 image with mixed noises. (<b>c</b>) Ground truth. (<b>d</b>) GMCD. (<b>e</b>) KPCAMNet. (<b>f</b>) DCVA. (<b>g</b>) PCAKMeans. (<b>h</b>) ASEA. (<b>i</b>) INLPG. (<b>j</b>) Ours.</p>
Figure 6
<p>CD Results of competitive methods obtained on DSIFN-CD. (<b>a</b>) Time 1 image with mixed noises. (<b>b</b>) Time 2 image with mixed noises. (<b>c</b>) Ground truth. (<b>d</b>) GMCD. (<b>e</b>) KPCAMNet. (<b>f</b>) DCVA. (<b>g</b>) PCAKMeans. (<b>h</b>) ASEA. (<b>i</b>) INLPG. (<b>j</b>) Ours.</p>
Figure 7
<p>CD results of competitive methods obtained on LZ. (<b>a</b>) Time 1 image with mixed noises. (<b>b</b>) Time 2 image with mixed noises. (<b>c</b>) Ground truth. (<b>d</b>) GMCD. (<b>e</b>) KPCAMNet. (<b>f</b>) DCVA. (<b>g</b>) PCAKMeans. (<b>h</b>) ASEA. (<b>i</b>) INLPG. (<b>j</b>) Ours.</p>
Figure 8
<p>CD Results of competitive methods obtained on CDD. (<b>a</b>) Time 1 image with mixed noises. (<b>b</b>) Time 2 image with mixed noises. (<b>c</b>) Ground truth. (<b>d</b>) GMCD. (<b>e</b>) KPCAMNet. (<b>f</b>) DCVA. (<b>g</b>) PCAKMeans. (<b>h</b>) ASEA. (<b>i</b>) INLPG. (<b>j</b>) Ours.</p>
Figure 9
<p>CD Results of competitive methods obtained on GZ. (<b>a</b>) Time 1 image with mixed noises. (<b>b</b>) Time 2 image with mixed noises. (<b>c</b>) Ground truth. (<b>d</b>) GMCD. (<b>e</b>) KPCAMNet. (<b>f</b>) DCVA. (<b>g</b>) PCAKMeans. (<b>h</b>) ASEA. (<b>i</b>) INLPG. (<b>j</b>) Ours.</p>
Figure 10
<p>Noise-resistance performance of competitive methods on the five datasets.</p>
Figure 11
<p>Change maps obtained by nine ablation methods on GZ dataset.</p>
Figure 12
<p>Change-magnitude maps obtained by nine ablation methods on the GZ dataset (real change areas are marked with yellow boundaries).</p>
Figure 13
<p>Change-magnitude maps obtained by nine ablation methods on the LZ dataset (real change areas are marked with yellow boundaries).</p>
Figure 14
<p>Fuzzy level sensitivity on the five datasets.</p>
Figure 15
<p>FCM_SICM loss value vs. iteration number.</p>