CroplandCDNet: Cropland Change Detection Network for Multitemporal Remote Sensing Images Based on Multilayer Feature Transmission Fusion of an Adaptive Receptive Field
Figure 1. The structure of CroplandCDNet.
Figure 2. Effect of data augmentation. (a) Original image. (b) Rotation. (c) Horizontal flip. (d) Vertical flip. (e) Cropping. (f) Translation. (g) Contrast change. (h) Brightness change. (i) Addition of Gaussian noise. (j) Data augmentation for multiple images.
Figure 3. Process of the feature extraction module.
Figure 4. Change detection module.
Figure 5. The structure of SKA.
Figure 6. Sample images in the CLCD dataset. (a) T1. (b) T2. (c) Ground truth: the white area indicates a change, and the black area indicates no change.
Figure 7. Visualization results for Scene 1. (a) T1. (b) T2. (c) Ground truth. (d) CDNet. (e) DSIFN. (f) SNUNet. (g) BIT. (h) L-UNet. (i) P2V-CD. (j) CroplandCDNet (ours).
Figure 8. Visualization results for Scene 2. (a) T1. (b) T2. (c) Ground truth. (d) CDNet. (e) DSIFN. (f) SNUNet. (g) BIT. (h) L-UNet. (i) P2V-CD. (j) CroplandCDNet (ours).
Figure 9. Visualization results for Scene 3. (a) T1. (b) T2. (c) Ground truth. (d) CDNet. (e) DSIFN. (f) SNUNet. (g) BIT. (h) L-UNet. (i) P2V-CD. (j) CroplandCDNet (ours).
Figure 10. Visualization results for Scene 4. (a) T1. (b) T2. (c) Ground truth. (d) CDNet. (e) DSIFN. (f) SNUNet. (g) BIT. (h) L-UNet. (i) P2V-CD. (j) CroplandCDNet (ours).
Figure 11. Visualization results of ablation experiments of the proposed method using the CLCD dataset. (a) T1. (b) T2. (c) Ground truth. (d) Base. (e) Base + SKA. (f) Base + SKA, +layer2. (g) Base + SKA, +layer2,3. (h) Base +layer2,3,4. (i) Base + SKA, +layer2,3,4. (j) CroplandCDNet (ours).
Figure 12. Partial visualization results of the proposed method using the Jilin-1 cropland change detection dataset. (a) T1. (b) T2. (c) Ground truth: white areas indicate changes; black areas indicate no changes. (d) CroplandCDNet (ours).
Abstract
1. Introduction
- (1) Traditional methods of cropland change detection mainly fall into two categories: pixel-based statistical analysis methods [3,4] and machine-learning-based post-classification comparison methods. Pixel-based statistical analysis methods mainly use medium- and low-spatial-resolution remote sensing images as the data source, apply simple algebraic operations to the corresponding bands of multitemporal remote sensing images to obtain a difference map, and then segment the difference map with an adaptive or manually determined threshold to obtain the final change detection result [5]. However, the accuracy of these methods is largely limited by the threshold, making it difficult to meet the needs of fine-grained cropland change extraction. Given the widespread use of machine learning in remote sensing image classification, post-classification comparison methods can significantly enhance the accuracy of cropland change detection [6]. Various machine learning methods, including support vector machines (SVMs) [7], decision trees (DTs) [8], random forests (RFs) [9,10], the maximum likelihood method [11], and artificial neural networks [12], have been employed for this purpose. However, post-classification comparison often leads to accumulated errors [13], thereby degrading change detection accuracy [5,14]. Additionally, the manual feature engineering required by machine learning methods limits their applicability to cropland change detection.
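To make the pixel-based pipeline above concrete (band differencing followed by threshold segmentation), the following is a minimal sketch. The mean + k·std rule is only one common adaptive threshold choice, and the toy images, the per-band absolute-difference map, and the parameter `k` are illustrative assumptions, not procedures from the cited works.

```python
import numpy as np

def change_map(t1: np.ndarray, t2: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Pixel-based change detection: band differencing + threshold segmentation.

    t1, t2: co-registered images of shape (H, W, bands).
    mean + k*std is one simple adaptive threshold; k is tunable.
    """
    diff = np.abs(t1.astype(float) - t2.astype(float)).sum(axis=-1)  # difference map
    thr = diff.mean() + k * diff.std()  # adaptive threshold
    return diff > thr  # binary change mask

# Toy bitemporal pair: a 2 x 2 block of pixels changes between dates.
t1 = np.zeros((6, 6, 3))
t2 = t1.copy()
t2[2:4, 2:4, :] = 10.0
mask = change_map(t1, t2)
```

As the surrounding text notes, the result is highly sensitive to the threshold rule: a different `k` can merge or erase the detected change region entirely.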
- (2) Methods based on deep learning. With their good self-learning ability for features, deep learning methods have been widely used in the field of cropland change detection. The development of deep-learning-based cropland change detection has been closely tied to improvements in the quality and quantity of remote sensing data and in computing power. Among these methods, network models based on convolutional neural networks (CNNs) have shown good performance in cropland change detection. Bhattad et al. [15] used a UNet-based encoder to extract parameters and features of cropland from remote sensing images, employing the decoder to accurately locate cropland changes. Several CNN-based methods also perform well in detecting other ground objects [16,17,18]. Bai et al. [19] integrated discriminative information and edge structure prior information into a single CNN framework to improve change detection results. Additionally, to enhance the performance of change detection networks, an increasing number of researchers have added attention modules to these networks [20,21]. Xu et al. [22] and Zhang et al. [23] used a cross-attention module and a multilevel change-aware deformable attention module, respectively, to improve detection performance. Although CNNs have good overall feature extraction ability, this ability grows with network depth, and depth in turn determines inference speed; a CNN with many layers therefore takes a long time on large datasets. Unlike CNNs, transformers capture global dependencies through their self-attention mechanism. Moreover, a transformer computes attention weights for all positions in parallel during training, so it can be more efficient to train than a CNN in some tasks [24]. Liu et al. [25] proposed a transformer-based multiscale context aggregation module that encodes and decodes multiscale context information, realizing the modeling and fusion of multiscale cropland information in remote sensing images. Wu et al. [26] applied a transformer-based union attention module in the decoding layer to extract global and local context information while maintaining the rich spatial details of croplands in remote sensing images. In addition, combining CNNs and transformers has been shown to effectively improve detection performance in change detection [27,28]. Moreover, generative adversarial networks have been used to augment change detection samples, reducing the dependence of deep learning change detection methods on large labeled datasets [29,30]. The above research provides a good basis for constructing cropland change detection networks. In recent years, significant progress has been made in deep-learning-based cropland change detection, but the following challenges remain: (1) To obtain deep cropland features from remote sensing images, mainstream CNN-based cropland change detection networks often stack a large number of convolution and pooling operations, and the accumulation of irrelevant features degrades detection accuracy as deeper features are mined. (2) Although combining a CNN with a transformer compensates for the CNN's limited receptive field, a fixed convolution kernel size makes it difficult to fully capture multiscale features and to make effective use of spatial context information.
- (1) A novel CroplandCDNet is proposed that combines an adaptive receptive field with a multiscale feature transmission fusion module. CroplandCDNet maximizes the use of the deep features of bitemporal remote sensing images and effectively outputs cropland change results.
- (2) An adaptive receptive field attention module is introduced into the feature transmission layer. This module enhances the representation of useful feature channels and effectively extracts cropland change information while suppressing irrelevant information. In addition, it dynamically adjusts the convolution kernel size according to the multiscale features of cropland, so that the network can effectively use the spatial context information of cropland in remote sensing images and improve detection accuracy.
- (3) Six advanced change detection networks were used for comparative experiments on the cropland change detection dataset (CLCD). Furthermore, generalization experiments were carried out on the Jilin-1 cropland change detection dataset. The results show that CroplandCDNet achieves the best overall performance.
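The adaptive receptive field in contribution (2) follows the selective kernel idea (Li et al., "Selective kernel networks"): features from branches with different kernel sizes are fused by channel-wise softmax attention, so each channel effectively chooses its receptive field. Below is a minimal NumPy sketch of only the fusion step; the branch convolutions that would produce `u1` and `u2`, and all weight shapes, are illustrative assumptions rather than the paper's SKA implementation.

```python
import numpy as np

def sk_fuse(u1, u2, w_reduce, w_a, w_b):
    """Fuse two branch feature maps (C, H, W) with channel-wise softmax
    attention, in the spirit of selective kernel networks. The branch
    convolutions (e.g. 3x3 vs. 5x5) that produce u1 and u2 are omitted.
    """
    s = (u1 + u2).mean(axis=(1, 2))        # squeeze: global average pooling -> (C,)
    z = np.maximum(w_reduce @ s, 0.0)      # channel reduction + ReLU -> (d,)
    logits = np.stack([w_a @ z, w_b @ z])  # per-channel branch scores, shape (2, C)
    att = np.exp(logits - logits.max(axis=0))
    att /= att.sum(axis=0)                 # softmax across branches, per channel
    return att[0][:, None, None] * u1 + att[1][:, None, None] * u2

rng = np.random.default_rng(0)
C, d, H, W = 8, 4, 5, 5
u1 = rng.normal(size=(C, H, W))
u2 = rng.normal(size=(C, H, W))
w_reduce = rng.normal(size=(d, C))
fused = sk_fuse(u1, u2, w_reduce, rng.normal(size=(C, d)), rng.normal(size=(C, d)))
```

Because the attention weights are a softmax over branches, they sum to one per channel; when both branch weight matrices are identical, the fusion degenerates to a plain average of the two branches.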
2. Methodology
2.1. Data Augmentation
2.2. Feature Extraction Module
- (1) Two identical 3 × 3 × 64 convolution layers learn the shallow features of cropland in the remote sensing image. After ReLU activation, a max pooling layer with a 2 × 2 kernel and a stride of 2 screens the important features and reduces the number of parameters; the feature map size becomes 128 × 128 × 64;
- (2) After two 3 × 3 × 128 convolution layers and ReLU activation, a max pooling layer with a 2 × 2 kernel and a stride of 2 is applied, and the feature map size becomes 64 × 64 × 128;
- (3) After three 3 × 3 × 256 convolution layers and ReLU activation, a max pooling layer with a 2 × 2 kernel and a stride of 2 is applied, and the feature map size becomes 32 × 32 × 256;
- (4) After three 3 × 3 × 512 convolution layers and ReLU activation, a max pooling layer with a 2 × 2 kernel and a stride of 2 is applied, and the feature map size becomes 16 × 16 × 512;
- (5) Finally, three 3 × 3 × 512 convolution layers yield a 16 × 16 × 512 feature map. The multiscale features from all five layers are extracted and input into the change detection module.
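The five stages above can be sanity-checked with a small shape tracer. This sketch assumes a 256 × 256 input (consistent with the 128 × 128 output of step (1)) and 3 × 3 convolutions with size-preserving padding; it traces shapes only and is not the network code.

```python
def stage_shapes(size: int = 256):
    """Trace feature-map sizes through the five-stage extractor:
    3x3 convolutions preserve spatial size, each 2x2 stride-2 max pool
    halves it, and stage 5 has no pooling."""
    channels = [64, 128, 256, 512, 512]
    has_pool = [True, True, True, True, False]
    shapes = []
    for c, pool in zip(channels, has_pool):
        if pool:
            size //= 2  # 2x2 max pool, stride 2
        shapes.append((size, size, c))
    return shapes

print(stage_shapes())  # matches the sizes listed in steps (1)-(5)
```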
2.3. Change Detection Module
2.4. Selective Kernel Attention
2.5. Loss Function
3. Experiment
3.1. Dataset
3.2. Comparative Experiments
3.3. Parameter Setting and Evaluation Metrics
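The evaluation metrics reported in the tables (Pre, Rec, F1, OA) follow their standard definitions from a binary change/no-change confusion matrix. The helper below is a textbook-definition sketch, not code from the paper, and the example counts are made up.

```python
def metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision (Pre), recall (Rec), F1, and overall accuracy (OA),
    in percent, from binary change-detection counts."""
    pre = tp / (tp + fp)                    # fraction of predicted changes that are real
    rec = tp / (tp + fn)                    # fraction of real changes detected
    f1 = 2 * pre * rec / (pre + rec)        # harmonic mean of Pre and Rec
    oa = (tp + tn) / (tp + fp + fn + tn)    # fraction of all pixels classified correctly
    return tuple(round(100 * v, 2) for v in (pre, rec, f1, oa))

print(metrics(50, 10, 20, 920))  # (83.33, 71.43, 76.92, 97.0)
```

Note how OA stays high even when F1 is modest: unchanged pixels dominate, which is why the tables report all four metrics rather than OA alone.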
3.4. Experimental Results
- Scene 1: From cropland to buildings
- Scene 2: From cropland to roads
- Scene 3: From cropland to bare land
- Scene 4: From cropland to water body
4. Discussion
4.1. Ablation Analysis
4.2. Generalization Analysis
4.3. Potential and Planning
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Han, H.; Peng, H.; Li, S.; Yang, J.; Yan, Z. The Non-Agriculturalization of Cultivated Land in Karst Mountainous Areas in China. Land 2022, 11, 1727.
2. Zhang, Y.; Shao, Z. Assessing of Urban Vegetation Biomass in Combination with LiDAR and High-resolution Remote Sensing Images. Int. J. Remote Sens. 2020, 42, 964–985.
3. Sharma, N.; Chawla, S. Digital Change Detection Analysis Criteria and Techniques used for Land Use and Land Cover Classification in Agriculture. In Proceedings of the 2023 3rd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 12–13 May 2023; pp. 331–335.
4. Useya, J.; Chen, S.; Murefu, M. Cropland Mapping and Change Detection: Toward Zimbabwean Cropland Inventory. IEEE Access 2019, 7, 53603–53620.
5. Liu, B.; Song, W.; Meng, Z.; Liu, X. Review of Land Use Change Detection—A Method Combining Machine Learning and Bibliometric Analysis. Land 2023, 12, 1050.
6. Chughtai, A.H.; Abbasi, H.; Karas, I.R. A review on change detection method and accuracy assessment for land use land cover. Remote Sens. Appl. Soc. Environ. 2021, 22, 100482.
7. Xie, G.; Niculescu, S. Mapping and Monitoring of Land Cover/Land Use (LCLU) Changes in the Crozon Peninsula (Brittany, France) from 2007 to 2018 by Machine Learning Algorithms (Support Vector Machine, Random Forest, and Convolutional Neural Network) and by Post-classification Comparison (PCC). Remote Sens. 2021, 13, 3899.
8. Sebbar, B.; Moumni, A.; Lahrouni, A. Decisional Tree Models for Land Cover Mapping and Change Detection Based on Phenological Behaviors: Application Case: Localization of Non-Fully-Exploited Agricultural Surfaces in the Eastern Part of the Haouz Plain in the Semi-Arid Central Morocco. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIV-4/W3-2020, 365–373.
9. Phalke, A.R.; Özdoğan, M.; Thenkabail, P.S.; Erickson, T.; Gorelick, N.; Yadav, K.; Congalton, R.G. Mapping croplands of Europe, Middle East, Russia, and Central Asia using Landsat, Random Forest, and Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2020, 167, 104–122.
10. Pande, C.B. Land use/land cover and change detection mapping in Rahuri watershed area (MS), India using the google earth engine and machine learning approach. Geocarto Int. 2022, 37, 13860–13880.
11. Hailu, A.; Mammo, S.; Kidane, M. Dynamics of land use, land cover change trend and its drivers in Jimma Geneti District, Western Ethiopia. Land Use Policy 2020, 99, 105011.
12. Taiwo, B.E.; Kafy, A.A.; Samuel, A.A.; Rahaman, Z.A.; Ayowole, O.E.; Shahrier, M.; Duti, B.M.; Rahman, M.T.; Peter, O.T.; Abosede, O.O. Monitoring and predicting the influences of land use/land cover change on cropland characteristics and drought severity using remote sensing techniques. Environ. Sustain. Indic. 2023, 18, 100248.
13. Bai, T.; Wang, L.; Yin, D.; Sun, K.; Chen, Y.; Li, W.; Li, D. Deep learning for change detection in remote sensing: A review. Geo Spat. Inf. Sci. 2022, 26, 262–288.
14. Dahiya, N.; Gupta, S.; Singh, S. Qualitative and quantitative analysis of artificial neural network-based post-classification comparison to detect the earth surface variations using hyperspectral and multispectral datasets. J. Appl. Remote Sens. 2023, 17, 032403.
15. Bhattad, R.; Patel, V.; Patel, S. Novel H-Unet Approach for Cropland Change Detection Using CLCD. In Proceedings of the IGARSS 2023—2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 6692–6695.
16. Peng, D.; Bruzzone, L.; Zhang, Y.; Guan, H.; He, P. SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102465.
17. Li, X.; He, M.; Li, H.; Shen, H. A Combined Loss-Based Multiscale Fully Convolutional Network for High-Resolution Remote Sensing Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 8017505.
18. Peng, X.; Zhong, R.; Li, Z.; Li, Q. Optical Remote Sensing Image Change Detection Based on Attention Mechanism and Image Difference. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7296–7307.
19. Bai, B.; Fu, W.; Lu, T.; Li, S. Edge-Guided Recurrent Convolutional Neural Network for Multitemporal Remote Sensing Image Building Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5610613.
20. Shi, Q.; Liu, M.; Li, S.; Liu, X.; Wang, F.; Zhang, L. A Deeply Supervised Attention Metric-Based Network and an Open Aerial Image Dataset for Remote Sensing Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5604816.
21. Liu, Y.; Pang, C.; Zhan, Z.; Zhang, X.; Yang, X. Building Change Detection for Remote Sensing Images Using a Dual-Task Constrained Deep Siamese Convolutional Network Model. IEEE Geosci. Remote Sens. Lett. 2021, 18, 811–815.
22. Xu, C.; Ye, Z.; Mei, L.; Shen, S.; Sun, S.; Wang, Y.; Yang, W. Cross-Attention Guided Group Aggregation Network for Cropland Change Detection. IEEE Sens. J. 2023, 23, 13680–13691.
23. Zhang, X.; Yu, W.; Pun, M.-O. Multilevel Deformable Attention-Aggregated Networks for Change Detection in Bitemporal Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 7827–7856.
24. Zhang, C.; Wang, L.; Cheng, S.; Li, Y. SwinSUNet: Pure Transformer Network for Remote Sensing Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5224713.
25. Liu, M.; Chai, Z.; Deng, H.; Liu, R. A CNN-Transformer Network with Multiscale Context Aggregation for Fine-Grained Cropland Change Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4297–4306.
26. Wu, Z.; Chen, Y.; Meng, X.; Huang, Y.; Li, T.; Sun, J. SwinUCDNet: A UNet-like Network with Union Attention for Cropland Change Detection of Aerial Images. In Proceedings of the 2023 30th International Conference on Geoinformatics, London, UK, 19–21 July 2023; pp. 1–7.
27. Chen, H.; Qi, Z.; Shi, Z. Remote Sensing Image Change Detection with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5607514.
28. Li, Q.; Zhong, R.; Du, X.; Du, Y. TransUNetCD: A Hybrid Transformer Network for Change Detection in Optical Remote-Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5622519.
29. Chen, J.; Zhao, W.; Chen, X. Cropland Change Detection with Harmonic Function and Generative Adversarial Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 2500205.
30. Chen, H.; Li, W.; Shi, Z. Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5603216.
31. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 510–519.
32. Zhang, C.; Yue, P.; Tapete, D.; Jiang, L.; Shangguan, B.; Huang, L.; Liu, G. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 183–200.
33. Rebuffi, S.-A.; Gowal, S.; Calian, D.A.; Stimberg, F.; Wiles, O.; Mann, T. Data augmentation can improve robustness. Adv. Neural Inf. Process. Syst. 2021, 34, 29935–29948.
34. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556.
35. Zheng, D.; Wu, Z.; Liu, J.; Hung, C.-C.; Wei, Z. Detail Enhanced Change Detection in VHR Images Using a Self-Supervised Multiscale Hybrid Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 3181–3196.
36. Chen, Z.; Song, Y.; Ma, Y.; Li, G.; Wang, R.; Hu, H. Interaction in Transformer for Change Detection in VHR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12.
37. Fang, H.; Guo, S.; Zhang, P.; Zhang, W.; Wang, X.; Liu, S.; Du, P. Scene Change Detection by Differential Aggregation Network and Class Probability-Based Fusion Strategy. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5406918.
38. Li, W.; Xue, L.; Wang, X.; Li, G. MCTNet: A multi-scale CNN-transformer network for change detection in optical remote sensing images. In Proceedings of the 2023 26th International Conference on Information Fusion (FUSION), Charleston, SC, USA, 27–30 June 2023; pp. 1–5.
39. Li, J.; Li, S.; Wang, F. Adaptive Fusion NestedUNet for Change Detection Using Optical Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5374–5386.
40. Zhang, J.; Shao, Z.; Ding, Q.; Huang, X.; Wang, Y.; Zhou, X.; Li, D. AERNet: An Attention-Guided Edge Refinement Network and a Dataset for Remote Sensing Building Change Detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5617116.
41. De Bem, P.P.; de Carvalho Júnior, O.A.; de Carvalho, O.L.F.; Gomes, R.A.T.; Fontes Guimarães, R. Performance Analysis of Deep Convolutional Autoencoders with Different Patch Sizes for Change Detection from Burnt Areas. Remote Sens. 2020, 12, 2576.
42. Alcantarilla, P.; Stent, S.; Ros, G.; Arroyo, R.; Gherardi, R. Street-view change detection with deconvolutional networks. Auton. Robot. 2018, 42, 1301–1322.
43. Fang, S.; Li, K.; Shao, J.; Li, Z. SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 8007805.
44. Papadomanolaki, M.; Vakalopoulou, M.; Karantzalos, K. A deep multitask learning framework coupling semantic segmentation and fully convolutional LSTM networks for urban change detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7651–7668.
45. Lin, M.; Yang, G.; Zhang, H. Transition Is a Process: Pair-to-Video Change Detection Networks for Very High Resolution Remote Sensing Images. IEEE Trans. Image Process. 2023, 32, 57–71.
46. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2015, arXiv:1412.6980v9.
47. Jilin-1 Net. 2023 "Jilin-1" Cup Satellite Remote Sensing Application Youth Innovation and Entrepreneurship Competition. Available online: http://archive.today/2024.01.23-024742/https://www.jl1mall.com/contest/match (accessed on 3 July 2023).
48. Kulkarni, S.C.; Rege, P.P. Pixel level fusion techniques for SAR and optical images: A review. Inf. Fusion 2020, 59, 13–29.
49. Mardian, J.; Berg, A.; Daneshfar, B. Evaluating the temporal accuracy of grassland to cropland change detection using multitemporal image analysis. Remote Sens. Environ. 2021, 255, 112292.
50. Chen, D.; Wang, Y.; Shen, Z.; Liao, J.; Chen, J.; Sun, S. Long Time-Series Mapping and Change Detection of Coastal Zone Land Use Based on Google Earth Engine and Multi-Source Data Fusion. Remote Sens. 2021, 14, 1.
51. Liu, B.; Song, W. Mapping abandoned cropland using Within-Year Sentinel-2 time series. Catena 2023, 223, 106924.
52. Shi, C.; Zhang, Z.; Zhang, W.; Zhang, C.; Xu, Q. Learning Multiscale Temporal–Spatial–Spectral Features via a Multipath Convolutional LSTM Neural Network for Change Detection with Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5529816.
53. Sun, Z.; Di, L.; Fang, H. Using long short-term memory recurrent neural network in land cover classification on Landsat and Cropland data layer time series. Int. J. Remote Sens. 2019, 40, 593–614.
Comparative results of different change detection methods on the CLCD test set.

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
CDNet | 77.24 | 63.95 | 69.97 | 93.64 |
DSIFN | 69.47 | 76.81 | 72.96 | 93.40 |
SNUNet | 69.14 | 69.87 | 69.50 | 92.89 |
BIT | 75.65 | 68.09 | 71.67 | 93.76 |
L-UNet | 65.10 | 70.57 | 67.73 | 92.20 |
P2V-CD | 77.91 | 65.89 | 71.40 | 93.88 |
CroplandCDNet (ours) | 76.46 | 75.63 | 76.04 | 94.47 |
Quantitative results for Scene 1 (from cropland to buildings).

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
CDNet | 99.49 | 74.64 | 85.29 | 90.41 |
DSIFN | 98.39 | 80.40 | 88.49 | 92.21 |
SNUNet | 93.63 | 95.74 | 94.67 | 95.99 |
BIT | 99.27 | 78.59 | 87.73 | 91.81 |
L-UNet | 94.54 | 97.84 | 96.16 | 97.09 |
P2V-CD | 98.63 | 92.61 | 95.52 | 96.77 |
CroplandCDNet (ours) | 99.32 | 93.84 | 96.50 | 97.47 |
Quantitative results for Scene 2 (from cropland to roads).

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
CDNet | 51.10 | 40.92 | 45.45 | 95.59 |
DSIFN | 31.08 | 64.99 | 42.05 | 91.96 |
SNUNet | 48.00 | 63.97 | 54.84 | 95.27 |
BIT | 70.10 | 65.26 | 67.59 | 97.19 |
L-UNet | 39.97 | 82.73 | 53.90 | 93.65 |
P2V-CD | 42.68 | 44.70 | 43.67 | 94.82 |
CroplandCDNet (ours) | 89.86 | 71.38 | 79.56 | 98.35 |
Quantitative results for Scene 3 (from cropland to bare land).

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
CDNet | 96.90 | 50.45 | 66.35 | 90.63 |
DSIFN | 93.43 | 74.13 | 82.67 | 94.31 |
SNUNet | 93.93 | 76.63 | 84.40 | 94.81 |
BIT | 87.95 | 91.77 | 89.82 | 96.19 |
L-UNet | 93.61 | 88.85 | 91.17 | 96.85 |
P2V-CD | 99.67 | 10.21 | 15.53 | 83.55 |
CroplandCDNet (ours) | 94.27 | 94.04 | 94.16 | 97.86 |
Quantitative results for Scene 4 (from cropland to water body).

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
CDNet | 84.17 | 17.60 | 29.11 | 95.61 |
DSIFN | 78.58 | 75.93 | 77.23 | 97.71 |
SNUNet | 82.57 | 96.78 | 89.11 | 98.79 |
BIT | 90.40 | 94.36 | 92.34 | 99.20 |
L-UNet | 96.33 | 91.62 | 93.92 | 99.39 |
P2V-CD | 95.55 | 65.94 | 78.03 | 98.10 |
CroplandCDNet (ours) | 93.61 | 98.75 | 96.11 | 99.59 |
Ablation results of the proposed method on the CLCD dataset.

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
Base | 68.85 | 62.96 | 65.77 | 92.40 |
+SKA | 69.96 | 68.23 | 69.08 | 92.92 |
+SKA, +layer2 | 74.65 | 73.66 | 74.15 | 94.05 |
+SKA, +layer2,3 | 77.10 | 72.74 | 74.86 | 94.33 |
+layer2,3,4 | 77.04 | 72.14 | 74.51 | 94.28 |
+SKA, +layer2,3,4 | 77.84 | 72.53 | 75.09 | 94.42 |
CroplandCDNet (ours) | 76.46 | 75.63 | 76.04 | 94.47 |
Generalization results on the Jilin-1 cropland change detection dataset.

Methods | Pre (%) | Rec (%) | F1 (%) | OA (%) |
---|---|---|---|---|
CDNet | 76.72 | 73.47 | 75.06 | 86.36 |
DSIFN | 86.75 | 81.07 | 83.81 | 91.25 |
SNUNet | 80.49 | 76.79 | 78.60 | 88.32 |
BIT | 80.71 | 77.99 | 79.32 | 88.64 |
L-UNet | 75.11 | 72.21 | 73.63 | 85.55 |
P2V-CD | 85.79 | 78.25 | 81.85 | 90.31 |
CroplandCDNet (ours) | 89.03 | 85.22 | 87.08 | 92.94 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, Q.; Huang, L.; Tang, B.-H.; Cheng, J.; Wang, M.; Zhang, Z. CroplandCDNet: Cropland Change Detection Network for Multitemporal Remote Sensing Images Based on Multilayer Feature Transmission Fusion of an Adaptive Receptive Field. Remote Sens. 2024, 16, 1061. https://doi.org/10.3390/rs16061061