Satellite Image Processing and Object Recognition for Agriculture and Food Security Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (15 November 2024) | Viewed by 25109

Special Issue Editors


Guest Editor
Independent Scientist, Overijssel, The Netherlands
Interests: remote sensing; earth observation; machine learning; artificial intelligence; computer vision; feature engineering; big data visualization

Guest Editor
College of Computer and Information Engineering, Henan University, Kaifeng 475004, China
Interests: radar systems; SAR; image processing; remote sensing; earth observation; satellite image processing

Special Issue Information

Dear Colleagues,

Plant agriculture faces immense challenges due to climate change. By 2050, more than nine billion people are expected to live on our planet, and feeding them will require food production to increase by approximately 70%. At the same time, there is a growing demand for sustainable agriculture with a far smaller ecological footprint than current agricultural practices. It is therefore important to find new ways to increase productivity while reducing the use of harmful chemicals.

In this Special Issue, we invite researchers to propose new approaches for processing remote sensing satellite images with object detection, machine learning, and artificial intelligence methods, in order to create opportunities for sustainable plant agriculture and food security applications. We welcome submissions that apply novel methods to real-life use cases and evaluate them in specific test scenarios. We look forward to receiving manuscripts dedicated to helping our planet and extending the state of the art in this field. Topics include, but are not limited to, the following:

  • The identification of agricultural infrastructures;
  • The mapping of crop plantation and distribution;
  • The monitoring of crop growth;
  • The monitoring of crop diseases and insect pests;
  • The inversion of farmland soil moisture and other key parameters;
  • The models and methods for predicting crop yield;
  • The protection and monitoring of farmland biodiversity;
  • Food security and sustainable agriculture;
  • Novel image processing methods for agricultural and food security applications.

Dr. Beril Kallfelz-Sirmacek
Prof. Dr. Ning Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • geoscience and earth observation
  • artificial intelligence
  • machine learning
  • food security
  • water security
  • biodiversity protection
  • big data
  • visualization and mapping
  • automation and robotics
  • soil quality
  • water quality
  • yield protection
  • yield prediction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research


20 pages, 22475 KiB  
Article
Assessing the Potential of Multi-Temporal Conditional Generative Adversarial Networks in SAR-to-Optical Image Translation for Early-Stage Crop Monitoring
by Geun-Ho Kwak and No-Wook Park
Remote Sens. 2024, 16(7), 1199; https://doi.org/10.3390/rs16071199 - 29 Mar 2024
Cited by 1 | Viewed by 1089
Abstract
The incomplete construction of optical image time series caused by cloud contamination is one of the major limitations facing the application of optical satellite images in crop monitoring. Thus, the construction of a complete optical image time series via image reconstruction of cloud-contaminated regions is essential for thematic mapping in croplands. This study investigates the potential of multi-temporal conditional generative adversarial networks (MTcGANs) that use a single synthetic aperture radar (SAR) image acquired on a prediction date and a pair of SAR and optical images acquired on a reference date in the context of early-stage crop monitoring. MTcGAN has an advantage over conventional SAR-to-optical image translation methods as it allows input data of various compositions. As the prediction performance of MTcGAN depends on the input data composition, the variations in the prediction performance should be assessed for different input data combination cases. Such an assessment was performed through experiments using Sentinel-1 and -2 images acquired in the US Corn Belt. MTcGAN outperformed existing SAR-to-optical image translation methods, including Pix2Pix and supervised CycleGAN (S-CycleGAN), in cases representing various input compositions. In particular, MTcGAN was substantially superior when there was little change in crop vitality between the reference and prediction dates. For the SWIR1 band, the root mean square error of MTcGAN (0.021) for corn was significantly improved by 54.4% and 50.0% compared to Pix2Pix (0.046) and S-CycleGAN (0.042), respectively. Even when there were large changes in crop vitality, the prediction accuracy of MTcGAN was more than twice that of Pix2Pix and S-CycleGAN. Without considering the temporal intervals between input image acquisition dates, MTcGAN was found to be beneficial when crops were visually distinct in both SAR and optical images. These experimental results demonstrate the potential of MTcGAN in SAR-to-optical image translation for crop monitoring during the early growth stage and can serve as a guideline for selecting appropriate input images for MTcGAN. Full article
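As an illustration of how the per-crop error comparisons reported above are typically computed, the following minimal Python sketch (not the authors' code; the array names swir1_mtcgan, swir1_real, and cdl are hypothetical) restricts the RMSE calculation to pixels of a single crop class:

    import numpy as np

    def per_class_rmse(pred, ref, class_map, class_id):
        # RMSE of a predicted band against the reference band, restricted to one crop class
        mask = class_map == class_id
        diff = pred[mask].astype(float) - ref[mask].astype(float)
        return np.sqrt(np.mean(diff ** 2))

    # Hypothetical usage with a translated and a real SWIR1 band plus a crop-type raster:
    # rmse_mtcgan = per_class_rmse(swir1_mtcgan, swir1_real, cdl, CORN_ID)
    # rmse_pix2pix = per_class_rmse(swir1_pix2pix, swir1_real, cdl, CORN_ID)
    # improvement = 100.0 * (rmse_pix2pix - rmse_mtcgan) / rmse_pix2pix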
Figures (graphical abstract and Figures 1–11): assessment procedure for MTcGAN-based SAR-to-optical translation; input-data differences between Pix2Pix/S-CycleGAN and MTcGAN; study-area location with training and test areas and the 2022 cropland data layer; multi-temporal Sentinel-1 and Sentinel-2 images of the test region; NDVI and VH backscattering distributions of corn and soybean; hypothetical versus real Sentinel-2 images for the A, B, and C cases; and quantitative accuracy measures (RMSE, rRMSE, SSIM, CC).
20 pages, 7614 KiB  
Article
Mapping Main Grain Crops and Change Analysis in the West Liaohe River Basin with Limited Samples Based on Google Earth Engine
by Zhenxing Wang, Dong Liu and Min Wang
Remote Sens. 2023, 15(23), 5515; https://doi.org/10.3390/rs15235515 - 27 Nov 2023
Cited by 1 | Viewed by 1252
Abstract
It is an important issue to explore achieving high accuracy long-term crop classification with limited historical samples. The West Liaohe River Basin (WLRB) serves as a vital agro-pastoral ecotone of Northern China, which experiences significant changes in crop planting structure due to a range of policy. Taking WLRB as a case study, this study constructed multidimensional features for crop classification suitable for Google Earth Engine cloud platform and proposed a method to extract main grain crops using sample augmentation and model migration in case of limited samples. With limited samples in 2017, the method was employed to train and classify crops (maize, soybean, and rice) in other years, and the spatiotemporal changes in the crop planting structure in WLRB from 2014 to 2020 were analyzed. The following conclusions were drawn: (1) Integrating multidimensional features could discriminate subtle differences, and feature optimization could ensure the accuracy and efficiency of classification. (2) By augmenting the original sample size by calculating the similarity of the time series NDVI (normalized difference vegetation index) curves, migrating the random forest model, and reselecting the samples for other years based on the model accuracy scores, it was possible to achieve a high crop classification accuracy with limited samples. (3) The main grain crops in the WLRB were primarily distributed in the northeastern and southern plains with lower elevations. Maize was the most predominant crop type with a wide distribution. The planting area of main grain crops in the WLRB exhibited an increasing trend, and national policies primarily influenced the variations of planting structure in maize and soybean. This study provides a scheme for extracting crop types from limited samples with high accuracy and can be applied for long-term crop monitoring and change analysis to support crop structure adjustment and food security. Full article
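The sample-augmentation step described above (selecting new training pixels whose NDVI time series resemble those of the limited labeled samples) can be illustrated with the following sketch; it is not the authors' implementation, and the Euclidean-distance criterion and threshold are assumptions for illustration only:

    import numpy as np

    def augment_by_ndvi_similarity(candidate_series, labeled_series, labels, threshold):
        # candidate_series: (n_candidates, n_dates) NDVI curves of unlabeled pixels
        # labeled_series:   (n_samples, n_dates) NDVI curves of the limited labeled samples
        # labels:           (n_samples,) crop class of each labeled sample
        # threshold:        maximum distance accepted for pseudo-labeling (assumed criterion)
        new_x, new_y = [], []
        for cls in np.unique(labels):
            class_mean = labeled_series[labels == cls].mean(axis=0)
            dist = np.linalg.norm(candidate_series - class_mean, axis=1)
            keep = dist < threshold
            new_x.append(candidate_series[keep])
            new_y.append(np.full(keep.sum(), cls))
        return np.concatenate(new_x), np.concatenate(new_y)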
Figures 1–10: geographic location of the study area with county-level units; classification flowchart for the WLRB, 2014–2020; Landsat revisit frequency after cloud removal; Landsat NDVI curves from different sources; feature dimension versus accuracy; distribution of main grain crops in 2017 and in 2014/2020 with change areas; crop area trends from 2014 to 2020; overall accuracy of different feature sets and classifiers; and comparison of classified versus statistical areas.
17 pages, 6867 KiB  
Article
An Improved UAV-Based ATI Method Incorporating Solar Radiation for Farm-Scale Bare Soil Moisture Measurement
by Renhao Jia, Jianli Liu, Jiabao Zhang, Yujie Niu, Yifei Jiang, Kefan Xuan, Can Wang, Jingchun Ji, Bin Ma and Xiaopeng Li
Remote Sens. 2023, 15(15), 3769; https://doi.org/10.3390/rs15153769 - 29 Jul 2023
Viewed by 1619
Abstract
The use of UAV-based remote sensing for soil moisture has developed rapidly in recent decades, with advantages such as high spatial resolution, flexible work arrangement, and ease of operation. In bare and low-vegetation-covered soils, the apparent thermal inertia (ATI) method, which adopts thermal infrared data from UAV-based remote sensing, has been widely used for soil moisture estimation at the field scale. However, the ATI method may not perform well under inconsistent weather conditions due to inconsistency in the intensity of the soil surface energy input. In this study, an improvement of the ATI method (ATI-R), considering the variation in soil surface energy input, was developed by incorporating solar radiation measurements. The performances of the two methods were compared using field experiment data during multiple heating processes under various weather conditions. On consistently sunny days, both the ATI-R and ATI methods obtained good correlations with the volumetric water contents (VWC) (ATI-R: R² = 0.775, RMSE = 0.023 cm³·cm⁻³; ATI: R² = 0.778, RMSE = 0.018 cm³·cm⁻³). On cloudy days or a combination of sunny and cloudy days, as long as there were significant soil-heating processes despite the different energy input intensities, the ATI-R method performed better than the ATI method (cloudy: ATI-R R² = 0.565, RMSE = 0.024 cm³·cm⁻³ versus ATI R² = 0.156, RMSE = 0.033 cm³·cm⁻³; combined: ATI-R R² = 0.673, RMSE = 0.028 cm³·cm⁻³ versus ATI R² = 0.310, RMSE = 0.032 cm³·cm⁻³). On overcast days, neither method performed satisfactorily (ATI-R: R² = 0.027, RMSE = 0.024 cm³·cm⁻³; ATI: R² = 0.027, RMSE = 0.031 cm³·cm⁻³). The results indicate that supplemental solar radiation data could effectively expand applications of the ATI method, especially under inconsistent weather conditions. Full article
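For context, the classic apparent thermal inertia is ATI = (1 − albedo) / (Tmax − Tmin). The sketch below additionally scales by the cumulative solar radiation received during the heating process; this normalization is an assumption made here for illustration and may differ from the ATI-R formulation used in the paper:

    def ati(albedo, t_max, t_min):
        # Classic apparent thermal inertia: (1 - albedo) / diurnal temperature amplitude
        return (1.0 - albedo) / (t_max - t_min)

    def ati_r(albedo, t_max, t_min, cumulative_radiation):
        # Illustrative radiation-adjusted variant (assumed form): scale by the cumulative
        # solar radiation of the heating process so that days with weaker energy input
        # remain comparable
        return (1.0 - albedo) * cumulative_radiation / (t_max - t_min)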
Figures 1–7: UAV Vis-NIR image of the experimental area with sensor locations and watering treatments; instruments (DJI M300 RTK, Zenmuse H20T thermal camera, RedEdge-MX multispectral camera, temperature logger, SP-110 radiation sensor); air temperature and solar radiation during the heating processes; soil VWC and temperature increments; R², RMSE, and MAE of ATI and ATI-R validation; and correlations of ATI/ATI-R with VWC under sunny, cloudy, and overcast conditions.
20 pages, 5036 KiB  
Article
N-STGAT: Spatio-Temporal Graph Neural Network Based Network Intrusion Detection for Near-Earth Remote Sensing
by Yalu Wang, Jie Li, Wei Zhao, Zhijie Han, Hang Zhao, Lei Wang and Xin He
Remote Sens. 2023, 15(14), 3611; https://doi.org/10.3390/rs15143611 - 20 Jul 2023
Cited by 4 | Viewed by 2198
Abstract
With the rapid development of the Internet of Things (IoT)-based near-Earth remote sensing technology, the problem of network intrusion for near-Earth remote sensing systems has become more complex and large-scale. Therefore, seeking an intelligent, automated, and robust network intrusion detection method is essential. Many researchers have researched network intrusion detection methods, such as traditional feature-based and machine learning methods. In recent years, network intrusion detection methods based on graph neural networks (GNNs) have been proposed. However, there are still some practical issues with these methods. For example, they have not taken into consideration the characteristics of near-Earth remote sensing systems, the state of the nodes, and the temporal features. Therefore, this article analyzes the factors of existing near-Earth remote sensing systems and proposes a spatio-temporal graph attention network (N-STGAT) that considers the state of nodes and applies them to the network intrusion detection of near-Earth remote sensing systems. Finally, the proposed method in this article is validated using the latest flow-based datasets NF-BoT-IoT-v2 and NF-ToN-IoT-v2. The results demonstrate that the binary classification accuracy for network intrusion detection exceeds 99%, while the multi-classification accuracy exceeds 93%. These findings provide substantial evidence that the proposed method outperforms existing intrusion detection techniques. Full article
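The graph-attention building block that GAT-based detectors such as N-STGAT rely on can be summarized as follows; this is the standard GAT attention computation (Velickovic et al.), not the paper's full spatio-temporal model, and all names are illustrative:

    import numpy as np

    def gat_layer(h, W, a, adjacency):
        # h: (n_nodes, in_dim) node features, e.g., per-device flow and state statistics
        # W: (in_dim, out_dim) shared linear transform; a: (2 * out_dim,) attention vector
        # adjacency: (n_nodes, n_nodes) boolean matrix, assumed to include self-loops
        z = h @ W
        n = z.shape[0]
        e = np.full((n, n), -np.inf)
        for i in range(n):
            for j in range(n):
                if adjacency[i, j]:
                    s = a @ np.concatenate([z[i], z[j]])
                    e[i, j] = s if s > 0 else 0.2 * s     # LeakyReLU
        alpha = np.exp(e - e.max(axis=1, keepdims=True))  # row-wise softmax over neighbours
        alpha /= alpha.sum(axis=1, keepdims=True)
        return alpha @ z                                  # aggregated node representations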
Figures (graphical abstract and Figures 1–12): IoT-based near-ground remote sensing system and its agricultural application; IoT structure characteristics; attention-coefficient and hidden-layer calculations; the LSTM core cell; the STGAT structure fusing LSTM and GAT; problem definition; the N-STGAT workflow (preprocessing, graph construction, training, testing); dataset selection; and accuracy and loss curves on NF-BoT-IoT-v2 and NF-ToN-IoT-v2.
19 pages, 15585 KiB  
Article
Land Cover Classification of SAR Based on 1DCNN-MRF Model Using Improved Dual-Polarization Radar Vegetation Index
by Yabo Huang, Mengmeng Meng, Zhuoyan Hou, Lin Wu, Zhengwei Guo, Xiajiong Shen, Wenkui Zheng and Ning Li
Remote Sens. 2023, 15(13), 3221; https://doi.org/10.3390/rs15133221 - 21 Jun 2023
Cited by 2 | Viewed by 2019
Abstract
Accurate land cover classification (LCC) is essential for studying global change. Synthetic aperture radar (SAR) has been used for LCC due to its advantage of weather independence. In particular, the dual-polarization (dual-pol) SAR data have a wider coverage and are easier to obtain, which provides an unprecedented opportunity for LCC. However, the dual-pol SAR data have a weak discrimination ability due to limited polarization information. Moreover, the complex imaging mechanism leads to the speckle noise of SAR images, which also decreases the accuracy of SAR LCC. To address the above issues, an improved dual-pol radar vegetation index based on multiple components (DpRVIm) and a new LCC method are proposed for dual-pol SAR data. Firstly, in the DpRVIm, the scattering information of polarization and terrain factors were considered to improve the separability of ground objects for dual-pol data. Then, the Jeffries-Matusita (J-M) distance and one-dimensional convolutional neural network (1DCNN) algorithm were used to analyze the effect of difference dual-pol radar vegetation indexes on LCC. Finally, in order to reduce the influence of the speckle noise, a two-stage LCC method, the 1DCNN-MRF, based on the 1DCNN and Markov random field (MRF) was designed considering the spatial information of ground objects. In this study, the HH-HV model data of the Gaofen-3 satellite in the Dongting Lake area were used, and the results showed that: (1) Through the combination of the backscatter coefficient and dual-pol radar vegetation indexes based on the polarization decomposition technique, the accuracy of LCC can be improved compared with the single backscatter coefficient. (2) The DpRVIm was more conducive to improving the accuracy of LCC than the classic dual-pol radar vegetation index (DpRVI) and radar vegetation index (RVI), especially for farmland and forest. (3) Compared with the classic machine learning methods K-nearest neighbor (KNN), random forest (RF), and the 1DCNN, the designed 1DCNN-MRF achieved the highest accuracy, with an overall accuracy (OA) score of 81.76% and a Kappa coefficient (Kappa) score of 0.74. This study indicated the application potential of the polarization decomposition technique and DEM in enhancing the separability of different land cover types in SAR LCC. Furthermore, it demonstrated that the combination of deep learning networks and MRF is suitable to suppress the influence of speckle noise. Full article
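The Jeffries–Matusita (J-M) distance used above to compare feature combinations has a standard closed form for Gaussian class distributions; a minimal sketch (with hypothetical inputs, not the authors' code) is:

    import numpy as np

    def jeffries_matusita(x1, x2):
        # x1, x2: (n_samples, n_features) feature vectors of two land cover classes,
        # e.g., backscatter coefficients plus vegetation indexes; returns a value in [0, 2]
        m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
        c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
        c = 0.5 * (c1 + c2)
        d = m1 - m2
        b = 0.125 * d @ np.linalg.solve(c, d) + 0.5 * np.log(
            np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
        return 2.0 * (1.0 - np.exp(-b))   # values near 2 indicate well-separated classes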
Figures (graphical abstract and Figures 1–9): study-area location and topography; land cover sample distribution; flowcharts of the proposed method and the two-stage 1DCNN-MRF model; feature-value distributions of the four land cover types (HH/HV backscatter, RVI, DpRVI, DpRVIm); J-M distances of feature combinations; and classification results of the different feature combinations and of RF, KNN, 1DCNN, and 1DCNN-MRF.
25 pages, 7447 KiB  
Article
Synergy of Sentinel-1 and Sentinel-2 Imagery for Crop Classification Based on DC-CNN
by Kaixin Zhang, Da Yuan, Huijin Yang, Jianhui Zhao and Ning Li
Remote Sens. 2023, 15(11), 2727; https://doi.org/10.3390/rs15112727 - 24 May 2023
Cited by 6 | Viewed by 2877
Abstract
Over the years, remote sensing technology has become an important means to obtain accurate agricultural production information, such as crop type distribution, due to its advantages of large coverage and a short observation period. Nowadays, the cooperative use of multi-source remote sensing imagery has become a new development trend in the field of crop classification. In this paper, the polarimetric components of Sentinel-1 (S-1) decomposed by a new model-based decomposition method adapted to dual-polarized SAR data were introduced into crop classification for the first time. Furthermore, a Dual-Channel Convolutional Neural Network (DC-CNN) with feature extraction, feature fusion, and encoder-decoder modules for crop classification based on S-1 and Sentinel-2 (S-2) was constructed. The two branches can learn from each other by sharing parameters so as to effectively integrate the features extracted from multi-source data and obtain a high-precision crop classification map. In the proposed method, firstly, the backscattering components (VV, VH) and polarimetric components (volume scattering, remaining scattering) were obtained from S-1, and the multispectral feature was extracted from S-2. Four candidate combinations of multi-source features were formed with the above features. Following that, the optimal one was found on a trial. Next, the characteristics of optimal combinations were input into the corresponding network branches. In the feature extraction module, the features with strong collaboration ability in multi-source data were learned by parameter sharing, and they were deeply fused in the feature fusion module and encoder-decoder module to obtain more accurate classification results. The experimental results showed that the polarimetric components, which increased the difference between crop categories and reduced the misclassification rate, played an important role in crop classification. Among the four candidate feature combinations, the combination of S-1 and S-2 features had a higher classification accuracy than using a single data source, and the classification accuracy was the highest when two polarimetric components were utilized simultaneously. On the basis of the optimal combination of features, the effectiveness of the proposed method was verified. The classification accuracy of DC-CNN reached 98.40%, with Kappa scoring 0.98 and Macro-F1 scoring 0.98, compared to 2D-CNN (OA reached 94.87%, Kappa scored 0.92, and Macro-F1 scored 0.95), FCN (OA reached 96.27%, Kappa scored 0.94, and Macro-F1 scored 0.96), and SegNet (OA reached 96.90%, Kappa scored 0.95, and Macro-F1 scored 0.97). The results of this study demonstrated that the proposed method had significant potential for crop classification. Full article
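The accuracy figures quoted above (overall accuracy, Kappa, Macro-F1) are all derived from a confusion matrix; the following minimal sketch shows the computation (not the authors' code):

    import numpy as np

    def classification_scores(confusion):
        # confusion: square matrix, rows = reference classes, columns = predicted classes
        confusion = confusion.astype(float)
        total = confusion.sum()
        oa = np.trace(confusion) / total                       # overall accuracy
        pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
        kappa = (oa - pe) / (1.0 - pe)                         # Cohen's kappa
        tp = np.diag(confusion)
        precision = tp / np.maximum(confusion.sum(axis=0), 1e-12)
        recall = tp / np.maximum(confusion.sum(axis=1), 1e-12)
        f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
        return oa, kappa, f1.mean()                            # macro-F1 = mean per-class F1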
Figures (graphical abstract and Figures 1–13): location map and Sentinel-1/Sentinel-2 pseudo-color images of Tongxiang; weather conditions from April to June; crop sample distribution with field photos; flowchart of the proposed method and the DC-CNN structure; selected typical test areas; classification results for the candidate feature combinations; comparison of evaluation indicators; feature visualizations; the crop classification map of the study area; and local comparisons and confusion matrices for 2D-CNN, FCN, SegNet, and DC-CNN.
22 pages, 7353 KiB  
Article
Inversion of Soil Moisture on Farmland Areas Based on SSA-CNN Using Multi-Source Remote Sensing Data
by Ran Wang, Jianhui Zhao, Huijin Yang and Ning Li
Remote Sens. 2023, 15(10), 2515; https://doi.org/10.3390/rs15102515 - 10 May 2023
Cited by 7 | Viewed by 2601
Abstract
Soil moisture is a crucial factor in the field of meteorology, hydrology, and agricultural sciences. In agricultural production, surface soil moisture (SSM) is crucial for crop yield estimation and drought monitoring. For SSM inversion, a synthetic aperture radar (SAR) offers a trustworthy data source. However, for agricultural fields, the use of SAR data alone to invert SSM is susceptible to the influence of vegetation cover. In this paper, based on Sentinel-1 microwave remote sensing data and Sentinel-2 optical remote sensing data, a convolution neural network optimized by sparrow search algorithm (SSA-CNN) was suggested to invert farmland SSM. The feature parameters were first extracted from pre-processed remote sensing data. Then, the correlation analysis between the extracted feature parameters and field measured SSM data was carried out, and the optimal combination of feature parameters for SSM inversion was selected as the input data of the subsequent models. To enhance the performance of the CNN, the hyper-parameters of CNN were optimized using SSA, and the SSA-CNN model was built for SSM inversion based on the obtained optimal hyper-parameter combination. Three typical machine learning approaches, including generalized regression neural network, random forest, and CNN, were used for comparison to show the efficacy of the suggested method. With an average coefficient of determination of 0.80, an average root mean square error of 2.17 vol.%, and an average mean absolute error of 1.68 vol.%, the findings demonstrated that the SSA-CNN model with the optimal feature combination had a better accuracy among the 4 models. In the end, the SSM of the study region was inverted throughout four phenological periods using the SSA-CNN model. The inversion results indicated that the suggested method performed well in local situations. Full article
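The correlation-based feature screening step mentioned above can be sketched as follows (illustrative only; feature_names and the top-k selection rule are hypothetical):

    import numpy as np

    def rank_features_by_correlation(features, ssm, feature_names):
        # features: (n_samples, n_features) candidate feature parameters
        # ssm:      (n_samples,) field-measured surface soil moisture
        r = np.array([np.corrcoef(features[:, i], ssm)[0, 1]
                      for i in range(features.shape[1])])
        order = np.argsort(-np.abs(r))                 # strongest |Pearson r| first
        return [(feature_names[i], float(r[i])) for i in order]

    # Hypothetical usage: keep the top-k ranked features as the inversion-model inputs
    # ranked = rank_features_by_correlation(X, ssm_measured, feature_names)
    # selected = [name for name, _ in ranked[:k]]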
Figures (graphical abstract and Figures 1–15): study-area location and sampling points; winter wheat growth states during the ten field surveys; technology roadmap; structure of the SSA-CNN model; SSM predictions of the four models on two testing sets; model performance versus number of feature parameters; mean SSM and NDVI dynamics over the two growing seasons; per-date RMSE; and regional SSM inversion maps with measured-versus-retrieved differences for five dates.
22 pages, 12296 KiB  
Article
Classification of Land Cover in Complex Terrain Using Gaofen-3 SAR Ascending and Descending Orbit Data
by Hongxia Wang, Haoran Yang, Yabo Huang, Lin Wu, Zhengwei Guo and Ning Li
Remote Sens. 2023, 15(8), 2177; https://doi.org/10.3390/rs15082177 - 20 Apr 2023
Cited by 5 | Viewed by 1756
Abstract
Synthetic aperture radar (SAR) image is an effective remote sensing data source for geographic surveys. However, accurate land cover mapping based on SAR image in areas of complex terrain has become a challenge due to serious geometric distortions and the inadequate separation ability of dual-polarization data. To address these issues, a new land cover mapping framework which is suitable for complex terrain is proposed based on Gaofen-3 data of ascending and descending orbits. Firstly, the geometric distortion area is determined according to the local incident angle, based on analysis of the SAR imaging mechanism, and the correct polarization information of the opposite track is used to compensate for the geometric distortion area, including layovers and shadows. Then, the dual orbital polarization characteristics (DOPC) and dual polarization radar vegetation index (DpRVI) of dual-pol SAR data are extracted, and the optimal feature combination is found by means of Jeffries–Matusita (J-M) distance analysis. Finally, the deep learning method 2D convolutional neural network (2D-CNN) is applied to classify the compensated images. The proposed method was applied to a mountainous region of the Danjiangkou ecological protection area in China. The accuracy and reliability of the method were experimentally compared using the uncompensated images and the images without DpRVI. Quantitative evaluation revealed that the proposed method achieved better performance in complex terrain areas, with an overall accuracy (OA) score of 0.93, and a Kappa coefficient score of 0.92. Compared with the uncompensated image, OA increased by 5% and Kappa increased by 6%. Compared with the images without DpRVI, OA increased by 4% and Kappa increased by 5%. In summary, the results demonstrate the importance of ascending and descending orbit data to compensate geometric distortion and reveal the effectiveness of optimal feature combination including DpRVI. Its simple and effective polarization information compensation capability can broaden the promising application prospects of SAR images. Full article
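The core compensation idea (replacing layover and shadow pixels of one orbit with the co-registered image of the opposite orbit) can be sketched as below; the thresholds on the local incidence angle are assumptions for illustration, whereas the paper derives its masks from the SAR imaging geometry and a DEM:

    import numpy as np

    def compensate_geometric_distortion(primary, secondary, local_incidence_deg):
        # primary, secondary: co-registered backscatter images from descending and
        #                     ascending passes (same polarization)
        # local_incidence_deg: local incidence angle of the primary acquisition, in degrees
        # Assumed criterion: layover where the angle is <= 0 deg, shadow where it is >= 90 deg
        distorted = (local_incidence_deg <= 0.0) | (local_incidence_deg >= 90.0)
        compensated = primary.copy()
        compensated[distorted] = secondary[distorted]
        return compensated, distorted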
Figures (graphical abstract and Figures 1–17): study-area location and slope; sample distribution; SAR imaging and geometric distortion (layover, shadow) models; framework of the proposed method; layover/shadow detection flowchart; local incidence angle model; feature extraction and 2D-CNN frameworks; ascending and descending Gaofen-3 images; detected and compensated distortion areas; J-M distances of feature combinations; and confusion matrices and classification results for the compensated and uncompensated feature combinations, including enlarged views of regions A and B.
17 pages, 6025 KiB  
Article
Soil Moisture Inversion Based on Data Augmentation Method Using Multi-Source Remote Sensing Data
by Yinglin Wang, Jianhui Zhao, Zhengwei Guo, Huijin Yang and Ning Li
Remote Sens. 2023, 15(7), 1899; https://doi.org/10.3390/rs15071899 - 31 Mar 2023
Cited by 5 | Viewed by 2521
Abstract
Soil moisture is an important land environment characteristic that connects agriculture, ecology, and hydrology. Surface soil moisture (SSM) prediction can be used to plan irrigation, monitor water quality, manage water resources, and estimate agricultural production. Multi-source remote sensing is a crucial tool for assessing SSM in agricultural areas. The field-measured SSM sample data are required in model building and accuracy assessment of SSM inversion using remote sensing data. When the SSM samples are insufficient, the SSM inversion accuracy is severely affected. An SSM inversion method suitable for a small sample size was proposed. The alpha approximation method was employed to expand the measured SSM samples to offer more training data for SSM inversion models. Then, feature parameters were extracted from Sentinel-1 microwave and Sentinel-2 optical remote sensing data, and optimized using three methods, which were Pearson correlation analysis, random forest (RF), and principal component analysis. Then, three common machine learning models suitable for small sample training, which were RF, support vector regression, and genetic algorithm-back propagation neural network, were built to retrieve SSM. Comparison experiments were carried out between various feature optimization methods and machine learning models. The experimental results showed that after sample augmentation, SSM inversion accuracy was enhanced, and the combination of utilizing RF for feature screening and RF for SSM inversion had a higher accuracy, with a coefficient of determination of 0.7256, a root mean square error of 0.0539 cm3/cm3, and a mean absolute error of 0.0422 cm3/cm3, respectively. The proposed method was finally used to invert the regional SSM of the study area. The inversion results indicated that the proposed method had good performance in regional applications with a small sample size. Full article
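One of the feature-optimization options mentioned above, random-forest importance ranking, can be sketched with scikit-learn as follows (illustrative only; the estimator settings and top_k rule are placeholders, not the paper's configuration):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def rf_feature_screening(X, y, feature_names, top_k):
        # X: (n_samples, n_features) features from Sentinel-1/Sentinel-2
        # y: (n_samples,) measured surface soil moisture
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X, y)
        order = np.argsort(-rf.feature_importances_)[:top_k]   # most important first
        return [feature_names[i] for i in order], rf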
Figures (graphical abstract and Figures 1–6): the Danjiangkou ecological service area, study area, and sampling points; technology roadmap of the proposed method; feature correlations obtained by PCA; and regional SSM inversion results (spatial and frequency distributions) for 11 September, 23 September, and 5 October 2021.
11 pages, 3743 KiB  
Communication
Which Vegetation Index? Benchmarking Multispectral Metrics to Hyperspectral Mixture Models in Diverse Cropland
by Daniel Sousa and Christopher Small
Remote Sens. 2023, 15(4), 971; https://doi.org/10.3390/rs15040971 - 10 Feb 2023
Cited by 11 | Viewed by 4269
Abstract
The monitoring of agronomic parameters like biomass, water stress, and plant health can benefit from synergistic use of all available remotely sensed information. Multispectral imagery has been used for this purpose for decades, largely with vegetation indices (VIs). Many multispectral VIs exist, typically relying on a single feature—the spectral red edge—for information. Where hyperspectral imagery is available, spectral mixture models can use the full VSWIR spectrum to yield further insight, simultaneously estimating area fractions of multiple materials within mixed pixels. Here we investigate the relationships between VIs and mixture models by comparing hyperspectral endmember fractions to six common multispectral VIs in California’s diverse crops and soils. In so doing, we isolate spectral effects from sensor- and acquisition-specific variability associated with atmosphere, illumination, and view geometry. Specifically, we compare: (1) fractional area of photosynthetic vegetation (Fv) from 64,000,000 3–5 m resolution AVIRIS-ng reflectance spectra; and (2) six popular VIs (NDVI, NIRv, EVI, EVI2, SR, DVI) computed from simulated Planet SuperDove reflectance spectra derived from the AVIRIS-ng spectra. Hyperspectral Fv and multispectral VIs are compared using both parametric (Pearson correlation, ρ) and nonparametric (Mutual Information, MI) metrics. Four VIs (NIRv, DVI, EVI, EVI2) showed strong linear relationships with Fv (ρ > 0.94; MI > 1.2). NIRv and DVI showed strong interrelation (ρ > 0.99, MI > 2.4), but deviated from a 1:1 correspondence with Fv. EVI and EVI2 were strongly interrelated (ρ > 0.99, MI > 2.3) and more closely approximated a 1:1 relationship with Fv. In contrast, NDVI and SR showed a weaker, nonlinear, heteroskedastic relation to Fv (ρ < 0.84, MI = 0.69). NDVI exhibited both especially severe sensitivity to unvegetated background (–0.05 < NDVI < +0.6) and saturation (0.2 < Fv < 0.8 for NDVI = 0.7). The self-consistent atmospheric correction, radiometry, and sun-sensor geometry allows this simulation approach to be further applied to indices, sensors, and landscapes worldwide. Full article
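For reference, the six indices compared above have simple closed forms; the sketch below computes them from surface reflectance bands (standard formulations, not code from the paper), and the commented line shows how a Pearson correlation against the hyperspectral vegetation fraction Fv could be obtained:

    import numpy as np

    def vegetation_indices(red, nir, blue=None):
        # red, nir, blue: surface reflectance arrays on [0, 1]
        ndvi = (nir - red) / (nir + red)
        out = {
            "NDVI": ndvi,
            "DVI": nir - red,
            "SR": nir / red,
            "NIRv": ndvi * nir,
            "EVI2": 2.5 * (nir - red) / (nir + 2.4 * red + 1.0),
        }
        if blue is not None:
            out["EVI"] = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
        return out

    # rho = np.corrcoef(vegetation_indices(red, nir)["NIRv"].ravel(), fv.ravel())[0, 1]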
Figure 1. Index map. (Left): 15 flight lines from the 2020 AVIRIS-ng campaign (red) span broad crop and soil diversity in one of the most productive agricultural regions on Earth. (Right): false color composite mosaic image compiled from 125 subsets, each of 600 × 800 pixels (~2.4 × 3.2 km). Ground sampling distance for all lines is approximately 3 m. For further information about soil diversity, see [31,37]; for agricultural diversity, see [30].
Figure 2. Spectral feature space and endmembers. The first three dimensions of the spectral feature space, visualized here, account for over 97% of the variance in the AVIRIS-ng spectra. Substrate, vegetation, and dark (S, V, D) endmembers bound the first two dimensions of this feature space. Non-photosynthetic vegetation (N) extends the plane of substrates in the third dimension. Compare to previous results from mosaics of AVIRIS-classic and Landsat [31]. Colors represent point density, from sparser (violet) to denser (green).
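The vegetation fraction Fv used throughout this comparison comes from inverting a linear spectral mixture model against the S, V, D (and N) endmembers identified in this feature space. As a hedged illustration only—endmember handling and the sum-to-one treatment are assumptions, not the authors' exact procedure—per-pixel fractions can be estimated with nonnegative least squares:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fractions(spectra, endmembers):
    """Estimate per-pixel endmember fractions with nonnegative least squares.

    spectra:    (n_pixels, n_bands) reflectance array
    endmembers: (n_endmembers, n_bands) array, e.g. rows for S, V, D (and N)
    Fractions are post-normalized to sum to one, an illustrative simplification
    rather than a full constrained inversion.
    """
    A = endmembers.T                              # (n_bands, n_endmembers)
    fractions = np.empty((spectra.shape[0], endmembers.shape[0]))
    for i, pixel in enumerate(spectra):
        f, _ = nnls(A, pixel)                     # nonnegative fractions
        s = f.sum()
        fractions[i] = f / s if s > 0 else f
    return fractions
```

The vegetation column of the returned array would then play the role of Fv on the x-axis of Figure 3.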
Figure 3. Bivariate distributions of spectral indices versus vegetation fraction. Six commonly used multispectral indices (y-axis) are compared to photosynthetic vegetation fraction (Fv) computed directly from hyperspectral AVIRIS-ng reflectances (x-axis). DVI and NIRv are highly correlated with each other (0.99) and with Fv (0.95), but not near the 1:1 line (red). Similarly, EVI and EVI2 are also highly correlated with each other (0.99) and with Fv (0.94 or 0.95). NDVI and SR exhibit substantially reduced correlation to Fv (0.84 and 0.81). Mutual information (MI) generally agrees with these correlations. MI values (relative to Fv) for DVI, NIRv, EVI, and EVI2 are all 1.35 ± 0.1. NDVI and SR MI values are lower, each at 0.69. SR* indicates that values are scaled by 0.1 for visualization. Spectra with values < −0.2 or > 1.2 are excluded. All Pearson correlation values are significantly different from the uncorrelated null hypothesis (p < 0.01). Bootstrapping via random selection of 30% of data values resulted in MI variability on the order of 0.01 or less.
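The two agreement measures reported in this caption can be reproduced in outline as follows. This is a sketch under assumptions—a histogram-based MI estimate with an arbitrary bin count and a small number of 30% subsamples—not the authors' implementation, so the absolute MI values will depend on the binning choice.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def compare_index_to_fv(index_vals, fv, bins=100, n_boot=20, subsample=0.3, seed=0):
    """Pearson correlation and a histogram-based MI estimate between a VI and Fv,
    plus MI variability from repeated random 30% subsamples.
    Bin count and number of repeats are assumptions for this sketch."""
    rng = np.random.default_rng(seed)
    index_vals, fv = np.asarray(index_vals), np.asarray(fv)
    rho = np.corrcoef(index_vals, fv)[0, 1]

    def mi(x, y):
        # Discretize both variables, then compute MI between the bin labels.
        cx = np.digitize(x, np.histogram_bin_edges(x, bins))
        cy = np.digitize(y, np.histogram_bin_edges(y, bins))
        return mutual_info_score(cx, cy)

    mi_full = mi(index_vals, fv)
    boots = []
    for _ in range(n_boot):
        idx = rng.choice(len(fv), size=int(subsample * len(fv)), replace=False)
        boots.append(mi(index_vals[idx], fv[idx]))
    return rho, mi_full, float(np.std(boots))
```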
Figure 4. Spectral index distributions for unvegetated spectra. (Left): histograms show VI values for all spectra with <5% photosynthetic vegetation cover, as estimated from inversion of the AVIRIS-ng spectral mixture model. NIRv values are closest to zero, with nearly all values < 0.1 and a mode near 0.05. DVI is characterized by a mode near 0.08 and greater dispersion. EVI and EVI2 both show modes near 0.12. The SR modal value is near that of EVI, but with less dispersion. NDVI shows, by far, the highest mode (> 0.2) and largest dispersion (NDVI for some unvegetated spectra as large as 0.5). (Right): histogram of NIRv values after linear regression is applied. For the distribution of regressed NIRv, mean = 0.08 and standard deviation = 0.026.

Other


16 pages, 6331 KiB  
Technical Note
Early Identification of Cotton Fields Based on GF-6 Images in Arid and Semiarid Regions (China)
by Chen Zou, Donghua Chen, Zhu Chang, Jingwei Fan, Jian Zheng, Haiping Zhao, Zuo Wang and Hu Li
Remote Sens. 2023, 15(22), 5326; https://doi.org/10.3390/rs15225326 - 12 Nov 2023
Cited by 1 | Viewed by 1316
Abstract
Accurately mapping the distribution and area of cotton in arid and semiarid regions is of great significance for agricultural irrigation scheduling, intensive and efficient management of water resources, and yield estimation. In this paper, taking the Xinjiang Shihezi oasis agriculture region as the study area, spectral features (R, G, B, panchromatic), texture features (entropy, mean, variance, contrast, homogeneity, angular second moment, correlation, and dissimilarity), and vegetation indices (normalized difference vegetation index/NDVI, ratio vegetation index/RVI, difference vegetation index/DVI) were extracted from GF-6 images acquired before and after the cotton flowering period. Four models, namely random forest (RF) and three deep learning approaches (U-Net, the DeepLabV3+ network, and a DeepLabV3+ model based on an attention mechanism), were used to identify cotton and to compare their accuracies. The results show that the deep learning models perform better than the random forest model. Among the deep learning models with the three kinds of feature sets, the DeepLabV3+ model based on the attention mechanism achieves the highest recognition accuracy and credibility, with an overall cotton recognition accuracy of 98.23% and a kappa coefficient of 96.11. Using the same attention-based DeepLabV3+ model with different input feature sets (all features versus spectral features only), the identification accuracy of the former is much higher than that of the latter. GF-6 satellite image data therefore have great application potential and prospects for crop type recognition.
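As an illustration of the texture part of this feature set, the eight grey-level co-occurrence matrix (GLCM) statistics named above can be computed per image window with scikit-image. The window quantization, offset, and the direct-from-GLCM computation of mean, variance, and entropy are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32, distance=1, angle=0.0):
    """GLCM texture features for one image window.

    window: 2-D array of unsigned integers with values in [0, levels).
    Mean, variance, and entropy are computed directly from the normalized GLCM
    because they are not standard graycoprops properties in all versions.
    """
    glcm = graycomatrix(window, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                                   # normalized co-occurrence matrix
    i, _ = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    feats = {name: float(graycoprops(glcm, name)[0, 0])
             for name in ("contrast", "dissimilarity", "homogeneity", "ASM", "correlation")}
    feats["mean"] = float(np.sum(i * p))
    feats["variance"] = float(np.sum((i - feats["mean"]) ** 2 * p))
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats
```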
Graphical abstract
Figure 1. The location of the study area and distribution map of field sampling points.
Figure 2. Distribution map of visual interpretation samples.
Figure 3. Network architecture diagram of the DeepLabV3+ semantic segmentation model based on the attention mechanism.
Figure 4. Schematic diagram of the DAM structure.
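The paper's DAM is not specified in this listing, so the following PyTorch block is only a generic illustration of how a channel-plus-spatial attention module can reweight DeepLabV3+ feature maps; the block structure, reduction ratio, and kernel size are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class SimpleDualAttention(nn.Module):
    """Illustrative channel + spatial attention block (CBAM-style sketch).

    This is NOT the paper's DAM; it only shows the general idea of reweighting
    feature maps first across channels, then across spatial positions.
    """
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                             # channel attention weights
        avg = x.mean(dim=1, keepdim=True)                       # spatial statistics
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial_conv(torch.cat([avg, mx], 1))   # spatial attention weights
```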
Figure 5. Loss changes corresponding to epoch settings in the different models.
Figure 6. Comparison of the cotton identification effect of different models in some areas. (A) GF-6 remote sensing images, (B) visual interpretation samples, (C) the DeepLabV3+ DAM with full features, (D) the DeepLabV3+ network, (E) the DeepLabV3+ DAM with 4-band input, (F) the U-Net model, (G) the RF model. White indicates identified non-cotton in panels (B) to (G), and the red solid line outlines areas with differences in panel (A).
Figure 7. Comparison of the cotton identification effect of different models in some areas. (A) GF-6 remote sensing images, (B) visual interpretation samples, (C) the DeepLabV3+ DAM with full features, (D) the DeepLabV3+ network, (E) the DeepLabV3+ DAM with 4-band input, (F) the U-Net model, (G) the RF model. White indicates identified non-cotton in panels (B) to (G), and the red solid line outlines areas with differences in panel (A).