Farmland Segmentation in Landsat 8 Satellite Images Using Deep Learning and Conditional Generative Adversarial Networks
Figure 1. Region of Emilia-Romagna, Italy.
Figure 2. Landsat 8 OLI scene covering the region of Emilia-Romagna in two tiles (paths 192–193, row 029). Image obtained from USGS EarthExplorer [47].
Figure 3. Dark object correction (DOC). (A) Original dark raw Landsat 8 scene with differing intensity values at each pixel. (B) DOS- and colour-corrected Landsat tile.
Figure 4. The overall analysis plan for the paper.
Figure 5. Landsat image converted into 256 × 256 non-overlapping patches.
Figure 6. Adapted U-Net architecture.
Figure 7. Atrous convolution spatial pyramid pooling: filters with different dilation rates explore the image to enlarge the receptive fields [27].
Figure 8. DeepLabV3+ architecture.
Figure 9. A 256 × 256 patch of an NDVI image randomly selected from Emilia-Romagna, covering around 1966.0 km². Pixels are coloured by vegetation level over the standard NDVI range: no/low-vegetation areas lie close to −1, while highly vegetated areas lie close to +1.
Figure 10. The same 256 × 256 patch as in Figure 9, shown as an agriculture composite image built from the SWIR, NIR, and blue bands. Healthy vegetation (vibrant green) appears distinct from bare earth (magenta) and non-crop vegetation (pale green) [59].
Figure 11. A 256 × 256 patch randomly selected from Emilia-Romagna, covering around 1966.0 km². The original image patch and the results of applying various types of noise to it are shown, illustrating how the training images were augmented.
Figure 12. Geometric transformation.
Figure 13. Modified CGAN with two-branched generators.
Figure 14. (A) The generator is bifurcated into two branches: the top branch (orange) produces the generated image, and the bottom branch (inside the grey box) produces the matching label/mask. (B) The discriminator takes both the real and the generated image–mask pairs as inputs to make its predictions.
Figure 15. (A) Generated images and (B) the corresponding generated masks. White (1) labels represent farms, and black (0) represents non-farms.
Figure 16. Test accuracy comparison between the U-Net and DeepLabV3+ models.
Figure 17. Resultant segmentation maps. In the mask arrays, black (0) represents non-farms and white (1) represents farms. The models' outputs are colour-coded: segmented farms are green, whereas segmented non-farms are red in the top row and blue in the bottom row.
Figure 18. The RGB image patches for the train/test/validation sets were chosen randomly from different geo-locations in the ROI to avoid unintentional data leakage.
Figure 19. Comparison of testing accuracy for random data sampling and selected geo-location-based sampling.
Figure 20. Variance (plot A) and maximum-of-Laplacian (plot B) measures calculated for the real and CGAN-generated images. The two groups differ clearly: values are high for the real image population, whereas the synthetic data show low values, indicating blurriness in the GAN-generated imagery.
Figure 21. The RGB colour histograms of the real population and the generated images. The overall colour schemes are nearly identical.
Figure 22. Accuracy as synthetic data are added incrementally to the training set. Accuracy increases up to a point but degrades after more than 40 synthetic samples are added, due to drift in the data population. The number of original real images was kept constant (175).
Abstract
1. Introduction
- We develop a new dataset of 30 m resolution Landsat 8 images with labelled farm and non-farm areas from the region of Emilia-Romagna in Italy.
- We compare two encoder–decoder-based semantic segmentation pipelines using two different convolution strategies.
- We compare the effects of different band combinations on segmentation results, such as RGB, the normalised difference vegetation index (NDVI), and the combination of the NDVI with other visible bands.
- We tackle the problem of label scarcity through data augmentation, generating both images and their masks with a CGAN, and by systematically adding the synthetic images to avoid drastic data shifts in the training samples.
2. Background and Previous Work
2.1. Farm Area Segmentation in Agricultural Studies
2.2. Traditional Semantic Segmentation Techniques
2.3. Deep Learning Strategies in Remote Sensing
2.4. Addressing Data Scarcity and Quality
3. Data Description and Pre-Processing
3.1. Study Area: Emilia-Romagna, Italy
3.2. Experimental Data
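Each scene is tiled into 256 × 256 non-overlapping patches (Figure 5). A minimal numpy sketch of that tiling step (the function name and edge handling are our assumptions, not the authors' code):

```python
import numpy as np

def tile_patches(image: np.ndarray, size: int = 256) -> np.ndarray:
    """Cut an (H, W, C) image into non-overlapping size x size patches,
    discarding any partial patches at the right/bottom edges."""
    h, w = image.shape[:2]
    rows, cols = h // size, w // size
    patches = [
        image[r * size:(r + 1) * size, c * size:(c + 1) * size]
        for r in range(rows)
        for c in range(cols)
    ]
    return np.stack(patches)  # shape: (rows * cols, size, size, C)
```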
3.3. Satellite Imagery Pre-Processing
3.3.1. Radiometric Band Correction
3.3.2. Dark Object Correction (DOC)
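To illustrate the idea behind dark object correction, the sketch below applies the classic dark object subtraction recipe: the darkest valid pixel in a band is taken as an estimate of atmospheric haze and subtracted from the whole band. This is a textbook simplification, not necessarily the exact procedure used in the paper.

```python
import numpy as np

def dark_object_subtraction(band: np.ndarray) -> np.ndarray:
    """Basic DOS: assume the darkest valid pixel should be ~0 reflectance,
    so subtract its value (the 'dark object') from the whole band."""
    dark = band[band > 0].min()           # ignore zero-valued nodata fill
    return np.clip(band - dark, 0, None)  # keep corrected values non-negative
```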
4. Methodology
4.1. Supervised Semantic Segmentation
4.1.1. Multi-Scale Feature Fusion Based on U-Net
4.1.2. Contextual Features Based on Atrous Filtering
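For intuition, an atrous (dilated) convolution is an ordinary convolution whose taps are spaced `dilation_rate` pixels apart, widening the receptive field at no extra parameter cost (Figure 7). A one-line Keras example; the filter count and rate here are illustrative only:

```python
from tensorflow.keras import layers

# A 3x3 filter sampling inputs 6 px apart covers a 13x13 receptive field
# with only 3x3 parameters. ASPP runs several such rates in parallel.
aspp_branch = layers.Conv2D(256, kernel_size=3, dilation_rate=6, padding="same")
```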
4.2. Spectral Images for Semantic Segmentation
4.2.1. Normalized Difference Vegetation Index (NDVI)
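The NDVI contrasts near-infrared and red reflectance, NDVI = (NIR − Red) / (NIR + Red); for Landsat 8 these are bands 5 and 4 (Table 1). A minimal sketch (the epsilon guard is our addition):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; Landsat 8 bands 5 and 4.
    The small epsilon avoids division by zero over nodata pixels."""
    nir, red = nir.astype(np.float32), red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-9)
```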
4.2.2. Agriculture Band Composite Imagery
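The agriculture composite of Figure 10 maps SWIR, NIR, and blue into the display channels; for Landsat 8 the usual choice is bands 6, 5, and 2. A display-oriented sketch, where the per-band min-max stretch is our assumption:

```python
import numpy as np

def agriculture_composite(swir1, nir, blue):
    """False-colour 'agriculture' composite: SWIR1 (band 6) -> R,
    NIR (band 5) -> G, Blue (band 2) -> B. Healthy crops render vivid green."""
    stack = np.dstack([swir1, nir, blue]).astype(np.float32)
    for b in range(3):                    # simple per-band min-max stretch
        band = stack[..., b]
        stack[..., b] = (band - band.min()) / (np.ptp(band) + 1e-9)
    return stack
```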
4.3. Data Augmentation
4.3.1. Image Augmentation Based on Transformation and Noise
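Since scikit-image is used for image processing, a plausible sketch of the label-preserving augmentations shown in Figures 11 and 12 follows; the specific noise modes and probabilities are assumptions, not the paper's exact settings. Geometric operations must be applied to the image and its mask together, whereas noise touches the image only:

```python
import numpy as np
from skimage.util import random_noise

def augment(image, mask, rng=np.random.default_rng(0)):
    """One random augmentation: rotations/flips transform image AND mask,
    noise corrupts the image only so the labels stay valid."""
    k = rng.integers(0, 4)                       # 0-3 quarter rotations
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    mode = rng.choice(["gaussian", "speckle", "s&p"])
    image = random_noise(image, mode=mode)       # returns floats in [0, 1]
    return image, mask
```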
4.3.2. Conditional Generative Adversarial Networks (cGANs) for Data Augmentation
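To make the two-branched generator of Figures 13 and 14 concrete, here is a minimal Keras sketch. The latent size, layer widths, and the single shared trunk with two output heads are our assumptions, not the authors' exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_two_branch_generator(latent_dim: int = 100) -> tf.keras.Model:
    """Hypothetical two-branch generator: one head emits a synthetic RGB
    patch, the other emits the matching binary farm/non-farm mask."""
    z = layers.Input(shape=(latent_dim,))
    x = layers.Dense(16 * 16 * 256)(z)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Reshape((16, 16, 256))(x)
    for filters in (128, 64, 32, 16):      # 16 -> 256 px via four 2x upsamplings
        x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    image = layers.Conv2D(3, 3, padding="same", activation="tanh", name="image")(x)
    mask = layers.Conv2D(1, 3, padding="same", activation="sigmoid", name="mask")(x)
    return tf.keras.Model(z, [image, mask], name="two_branch_generator")
```

The matching discriminator (Figure 14B) would concatenate an image and its mask along the channel axis before classifying the pair as real or generated.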
4.3.3. The Proposed Augmentation Strategy
4.4. Evaluation Metrics
4.4.1. Pixel Accuracy
4.4.2. Intersection over Union (IoU)
4.4.3. Matthews Correlation Coefficient (MCC)
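For reference, all three metrics reduce to the confusion-matrix counts TP, TN, FP, FN: pixel accuracy = (TP + TN) / total, IoU = TP / (TP + FP + FN), and MCC = (TP·TN − FP·FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)). A compact sketch for binary farm masks (the epsilon guards are ours):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Pixel accuracy, foreground IoU, and MCC for binary masks
    (1 = farm, 0 = non-farm), from the four confusion-matrix counts."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn + 1e-12)
    denom = np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (float(tp) * tn - float(fp) * fn) / (denom + 1e-12)
    return accuracy, iou, mcc
```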
5. Results
5.1. Supervised Semantic Segmentation
5.2. Model Sensitivity Analysis Using Randomly Sampled versus Specific Geo-Location Training Data
5.3. Spectral Bands for Image Segmentation
5.4. Synthetic Data Augmentation
5.4.1. Noise and Geometric Augmentation Results
5.4.2. Testing the Quality of GAN-Generated Images
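Figure 20 compares variance-of-Laplacian and maximum-of-Laplacian sharpness measures for real versus generated patches. A sketch of how such measures can be computed (the SciPy usage is our assumption; the paper does not specify an implementation):

```python
import numpy as np
from scipy import ndimage

def sharpness_measures(gray: np.ndarray):
    """Laplacian-based blur indicators: blurrier images yield a flatter
    Laplacian response, i.e., lower variance and a lower maximum."""
    lap = ndimage.laplace(gray.astype(np.float64))
    return lap.var(), np.abs(lap).max()
```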
5.4.3. Segmentation Results Using Synthetic Imagery
6. Discussion
6.1. Effect of Deep Learning Architectures
6.2. Effect of IR Bands on Semantic Segmentation of Farmlands
6.3. Effect of Data Augmentation
6.4. Effect of Training Data Sample Strategy
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Gap Report. Virginia Tech CALS Global. 28 September 2022. Available online: https://globalagriculturalproductivity.org/ (accessed on 10 May 2023).
- Decuyper, M.; Chávez, R.O.; Lohbeck, M.; Lastra, J.A.; Tsendbazar, N.; Hackländer, J.; Herold, M.; Vågen, T.G. Continuous Monitoring of Forest Change Dynamics With Satellite Time Series. Remote Sens. Environ. 2022, 269, 112829. [Google Scholar] [CrossRef]
- Hall, D.K.; Chang, A.T.; Siddalingaiah, H. Reflectances of Glaciers as Calculated Using Landsat-5 Thematic Mapper Data. Remote Sens. Environ. 1988, 25, 311–321. [Google Scholar] [CrossRef]
- Hong, X.; Chen, L.; Sun, S.; Sun, Z.; Chen, Y.; Mei, Q.; Chen, Z. Detection of Oil Spills in the Northern South China Sea Using Landsat-8 OLI. Remote Sens. 2022, 14, 3966. [Google Scholar] [CrossRef]
- Pandey, P.C.; Pandey, M. Highlighting the Role of Agriculture and Geospatial Technology in Food Security and Sustainable Development Goals. Sustain. Dev. 2023, 31, 3175–3195. [Google Scholar] [CrossRef]
- Landsat Satellite Missions|U.S. Geological Survey. Available online: https://www.usgs.gov/landsat-missions/landsat-known-issues (accessed on 10 May 2023).
- Sharifzadeh, S.; Tata, J.; Sharifzadeh, H.; Tan, B. Farm Area Segmentation in Satellite Images Using DeepLabv3+ Neural Networks. In Data Management Technologies and Applications; Communications in Computer and Information Science Book Series, CCIS; Springer: Cham, Switzerland, 2020; Volume 1255. [Google Scholar] [CrossRef]
- Chen, T.-H.K.; Qiu, C.; Schmitt, M.; Zhu, X.X.; Sabel, C.E.; Prishchepov, A.V. Mapping Horizontal and Vertical Urban Densification in Denmark with Landsat Time-Series from 1985 to 2018: A Semantic Segmentation Solution. Remote Sens. Environ. 2020, 251, 112096. [Google Scholar] [CrossRef]
- Zhong, L.; Hu, L.; Zhou, H. Deep Learning Based Multi-Temporal Crop Classification. Remote Sens. Environ. 2018, 221, 430–443. [Google Scholar] [CrossRef]
- Dou, P.; Shen, H.; Li, Z.; Guan, X. Time series remote sensing image classification framework using combination of deep learning and multiple classifiers system. Int. J. Appl. Earth Obs. Geoinform. 2021, 103, 102477. [Google Scholar] [CrossRef]
- Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015. [Google Scholar] [CrossRef]
- Chen, L.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Masek, J.G.; Wulder, M.A.; Markham, B.; McCorkel, J.; Crawford, C.J.; Storey, J.; Jenstrom, D.T. Landsat 9: Empowering Open Science and Applications through Continuity. Remote Sens. Environ. 2020, 248, 111968. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Rosten, E.; Drummond, T. Fusing Points and Lines for High Performance Tracking. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005. [Google Scholar]
- Dorj, U.O.; Lee, M.; Yun, S.S. An Yield Estimation in Citrus Orchards via Fruit Detection and Counting Using Image Processing. Comput. Electron. Agric. 2017, 140, 103–112. [Google Scholar] [CrossRef]
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
- Ling, P.P.; Ruzhitsky, V.N. Machine vision techniques for measuring the canopy of tomato seedling. J. Agric. Eng. Res. 1996, 65, 85–95. [Google Scholar] [CrossRef]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
- Chen, J.; Yang, C.; Xu, G.; Ning, L. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization. J. Phys. Conf. Ser. 2018, 1004, 012035. [Google Scholar] [CrossRef]
- Yi, F.; Inkyu, M. Image Segmentation: A Survey of Graph-Cut Methods. In Proceedings of the 2012 International Conference on Systems and Informatics (ICSAI2012), Yantai, China, 19–20 May 2012; pp. 1936–1941. [Google Scholar] [CrossRef]
- Chen, M.; Artières, T.; Denoyer, L. Unsupervised Object Segmentation by Redrawing. arXiv 2019, arXiv:1905.13539. [Google Scholar]
- Xia, X.; Kulis, B. W-Net: A Deep Model for Fully Unsupervised Image Segmentation. arXiv 2017, arXiv:1711.08506. [Google Scholar]
- Teichmann, M.T.; Cipolla, R. Convolutional CRFs for Semantic Segmentation. arXiv 2018, arXiv:1805.04777. [Google Scholar]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv 2018, arXiv:1703.06870. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. arXiv 2017, arXiv:1606.00915. [Google Scholar]
- Strudel, R.; Garcia, R.; Laptev, I.; Schmid, C. Segmenter: Transformer for Semantic Segmentation. arXiv 2021, arXiv:2105.05633. [Google Scholar]
- Giraud, R.; Ta, V.T.; Papadakis, N. Robust Superpixels Using Color And Contour Features Along Linear Path. Comput. Vis. Image Underst. 2018, 170, 1–13. [Google Scholar] [CrossRef]
- Wu, Z.; Gao, Y.; Li, L.; Xue, J.; Li, Y. Semantic segmentation of high-resolution remote sensing images using fully convolutional network with adaptive threshold. Connect. Sci. 2019, 31, 169–184. [Google Scholar] [CrossRef]
- Wu, J.; Chen, X.Y.; Zhang, H.; Xiong, L.D.; Lei, H.; Deng, S.H. Hyperparameter Optimization for Machine Learning Models Based on Bayesian Optimization. J. Electron. Sci. Technol. 2019, 17, 26–40. [Google Scholar] [CrossRef]
- Passos, D.; Mishra, P. A Tutorial on Automatic Hyperparameter Tuning of Deep Spectral Modelling for Regression and Classification Tasks. Chemom. Intell. Lab. Syst. 2022, 223, 104520. [Google Scholar] [CrossRef]
- Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep Learning in Remote Sensing Applications: A Meta-Analysis and Review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
- He, Y.; Wang, C.; Chen, F.; Jia, H.; Liang, D.; Yang, A. Feature Comparison and Optimization for 30-M Winter Wheat Mapping Based on Landsat-8 and Sentinel-2 Data Using Random Forest Algorithm. Remote Sens. 2019, 11, 535. [Google Scholar] [CrossRef]
- Wang, L.; Wang, J.; Liu, Z.; Zhu, J.; Qin, F. Evaluation of a Deep-Learning Model for Multispectral Remote Sensing of Land Use and Crop Classification. Crop J. 2022, 10, 1435–1451. [Google Scholar] [CrossRef]
- Peng, D.; Zhang, Y.; Guan, H. End-to-End Change Detection for High-Resolution Satellite Images Using Improved UNet++. Remote Sens. 2019, 11, 1382. [Google Scholar] [CrossRef]
- Kotaridis, I.; Lazaridou, M. Remote Sensing Image Segmentation Advances: A Meta-Analysis. ISPRS J. Photogramm. Remote Sens. 2021, 173, 309–322. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Bai, J.; Al-Sabaawi, A.; Santamaría, J.; Albahri, A.S.; Al-dabbagh, B.S.N.; Fadhel, M.A.; Manoufali, M.; Zhang, J.; Al-Timemy, A.H.; et al. A Survey on Deep Learning Tools Dealing with Data Scarcity: Definitions, Challenges, Solutions, Tips, and Applications. J. Big Data 2023, 10, 46. [Google Scholar] [CrossRef]
- Hao, X.; Liu, L.; Yang, R.; Yin, L.; Zhang, L.; Li, X. A Review of Data Augmentation Methods of Remote Sensing Image Target Recognition. Remote Sens. 2023, 15, 827. [Google Scholar] [CrossRef]
- Safarov, F.; Temurbek, K.; Jamoljon, D.; Temur, O.; Chedjou, J.C.; Abdusalomov, A.B.; Cho, Y.-I. Improved Agricultural Field Segmentation in Satellite Imagery Using TL-ResUNet Architecture. Sensors 2022, 22, 9784. [Google Scholar] [CrossRef] [PubMed]
- Abady, L.; Horváth, J.; Tondi, B.; Delp, E.J.; Barni, M. Manipulation and Generation of Synthetic Satellite Images Using Deep Learning Models. J. Appl. Remote. Sens. 2022, 16, 046504. [Google Scholar] [CrossRef]
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. arXiv 2018, arXiv:1611.07004. [Google Scholar]
- Marín, J.; Escalera, S. SSSGAN: Satellite Style and Structure Generative Adversarial Networks. Remote Sens. 2021, 13, 3984. [Google Scholar] [CrossRef]
- Singh, P.; Komodakis, N. Cloud-Gan: Cloud Removal for Sentinel-2 Imagery Using a Cyclic Consistent Generative Adversarial Networks. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1772–1775. [Google Scholar] [CrossRef]
- Weather Emilia-Romagna. 10 May 2023. Available online: https://www.meteoblue.com/en/weather/week/emilia-romagna_italy_3177401 (accessed on 10 May 2023).
- Regione Emilia-Romagna. Agriculture and Food. Available online: https://www.regione.emilia-romagna.it/en/agriculture-and-food (accessed on 10 May 2023).
- EarthExplorer. Available online: https://earthexplorer.usgs.gov/ (accessed on 10 May 2023).
- Young, N.E.; Anderson, R.S.; Chignell, S.M.; Vorster, A.G.; Lawrence, R.; Evangelista, P.H. A Survival Guide to Landsat Preprocessing. Ecology 2017, 98, 920–932. [Google Scholar] [CrossRef]
- Landsat 8 Data Users Handbook|U.S. Geological Survey. Available online: https://www.usgs.gov/landsat-missions/landsat-8-data-users-handbook/ (accessed on 10 May 2023).
- GISGeography. Landsat 8 Bands and Band Combinations. GIS Geography. 18 October 2019. Available online: https://gisgeography.com/landsat-8-bands-combinations/ (accessed on 10 May 2023).
- Chávez, P.S.J.; Mitchell, W.B. Computer Enhancement Techniques of Landsat MSS Digital Images for Land Use/Land Cover Assessments. 1977. Available online: http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=PASCAL7930201432 (accessed on 10 May 2023).
- Armstrong, R.A. Remote Sensing of Submerged Vegetation Canopies for Biomass Estimation. Int. J. Remote Sens. 1993, 14, 621–627. [Google Scholar] [CrossRef]
- QGIS—A Free and Open Source Geographic Information System, Version 3.30.2. Available online: https://qgis.org/en/site/ (accessed on 10 May 2023).
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. arXiv 2015, arXiv:1411.4038. [Google Scholar]
- Hou, Y.; Liu, Z.; Zhang, T.; Li, Y. C-UNet: Complement UNet for Remote Sensing Road Extraction. Sensors 2021, 21, 2153. [Google Scholar] [CrossRef]
- Chen, Z.; Shi, B.E. Appearance-Based Gaze Estimation Using Dilated-Convolutions. arXiv 2019, arXiv:1903.07296. [Google Scholar]
- Chen, L.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W.; Harlan, J.C. Monitoring the vernal advancements and retrogradation of natural vegetation. In NASA/GSFC; Final Report; NASA: Greenbelt, MD, USA, 1974; pp. 1–137. [Google Scholar]
- Agriculture Satellite Bands: Healthy Vegetation Band Overview. 6 June 2022. Available online: https://eos.com/make-an-analysis/agriculture-band/ (accessed on 10 May 2023).
- Negassi, M.; Wagner, D.; Reiterer, A. Smart(Sampling)Augment: Optimal and Efficient Data Augmentation for Semantic Segmentation. arXiv 2021, arXiv:2111.00487. [Google Scholar]
- Liu, S.; Zhang, J.; Chen, Y.; Liu, Y.; Qin, Z.; Wan, T. Pixel Level Data Augmentation for Semantic Image Segmentation Using Generative Adversarial Networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1902–1906. [Google Scholar] [CrossRef]
- Ma, R.; Tao, P.; Tang, H. Optimizing data augmentation for semantic segmentation on small-scale dataset. In Proceedings of the 2nd International Conference on Control and Computer Vision, Jeju Island, Republic of Korea, 15–18 June 2019; pp. 77–81. [Google Scholar] [CrossRef]
- Wong, S.C.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding Data Augmentation for Classification: When to Warp? In Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, QLD, Australia, 30 November–2 December 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Van der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T.; The Scikit-Image Contributors. scikit-image: Image processing in Python. PeerJ 2014, 2, e453. [Google Scholar] [CrossRef]
- Neff, T.; Payer, C.; Stern, D.; Urschler, M. Generative Adversarial Network Based Synthesis for Supervised Medical Image Segmentation. In Proceedings of the OAGM&ARW Joint Workshop 2017, Vienna, Austria, 10–12 May 2017. [Google Scholar] [CrossRef]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2016, arXiv:1511.06434. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method For Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- Maas, A.L.; Awni, Y.H.; Andrew, Y.N. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), Atlanta, GA, USA, 16–21 June 2013; Volume 30, p. 3. [Google Scholar]
- Dubey, A.; Vanita, J. Comparative Study of Convolution Neural Network’s Relu and Leaky-Relu Activation Functions. arXiv 2019, arXiv:1511.06434. [Google Scholar]
- Goodfellow, I. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv 2016, arXiv:1701.00160. [Google Scholar]
- Sampath, V.; Maurtua, I.; Martín, J.J.A.; Gutierrez, A. A Survey on Generative Adversarial Networks for Imbalance Problems in Computer Vision Tasks. J. Big Data 2021, 8, 27. [Google Scholar] [CrossRef] [PubMed]
- Chicco, D.; Jurman, G. The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef] [PubMed]
- Cai, C.; Tan, J.; Zhang, P.; Ye, Y.; Zhang, J. Determining Strawberries’ Varying Maturity Levels by Utilizing Image Segmentation Methods of Improved DeepLabV3+. Agronomy 2022, 12, 1875. [Google Scholar] [CrossRef]
- Naushad, R.; Kaur, T.; Ghaderpour, E. Deep Transfer Learning for Land Use and Land Cover Classification: A Comparative Study. Sensors 2021, 21, 8083. [Google Scholar] [CrossRef] [PubMed]
Band No. | Name | Wavelength (μm) | Resolution (m) | Sensor |
---|---|---|---|---|
1 | Coastal aerosol | 0.43–0.45 | 30 | OLI |
2 | Blue | 0.45–0.51 | 30 | OLI |
3 | Green | 0.53–0.59 | 30 | OLI |
4 | Red | 0.63–0.67 | 30 | OLI |
5 | Near-Infrared (NIR) | 0.85–0.88 | 30 | OLI |
6 | Short-wave Infrared (SWIR) 1 | 1.57–1.65 | 30 | OLI |
7 | Short-wave Infrared (SWIR) 2 | 2.11–2.29 | 30 | OLI |
8 | Panchromatic | 0.50–0.68 | 15 | OLI |
9 | Cirrus | 1.36–1.38 | 30 | OLI |
10 | TIRS 1 | 10.60–11.19 | 30 (100) | TIRS |
11 | TIRS 2 | 11.50–12.51 | 30 (100) | TIRS |
Exp. No. | Pre-Trained Network | Train Accuracy (%) | Test Accuracy (%) | MIoU (%) | MCC |
---|---|---|---|---|---|
1 | VGG16 | 79.57 | 76.77 | 73.30 | 0.647 |
2 | ResNet50 | 89.34 | 86.92 | 83.12 | 0.763 |
3 | ResNet101 | 87.32 | 83.41 | 79.20 | 0.714 |
4 | MobileNetV2 | 74.29 | 70.47 | 68.38 | 0.608 |
Exp. No. | Pre-Trained Network | Train Accuracy (%) | Test Accuracy (%) | MIoU (%) | MCC |
---|---|---|---|---|---|
5 | VGG16 | 76.34 | 74.29 | 70.44 | 0.682 |
6 | ResNet50 | 69.59 | 67.32 | 65.73 | 0.638 |
7 | ResNet101 | 62.51 | 60.99 | 60.18 | 0.651 |
8 | MobileNetV2 | 73.94 | 71.45 | 68.24 | 0.619 |
Bands | Train Accuracy (%) | Test Accuracy (%) | MIoU (%) | MCC |
---|---|---|---|---|
R-G-B | 87.84 | 82.77 | 79.30 | 0.689 |
NDVI-G-B | 92.96 | 90.49 | 72.90 | 0.700 |
NIR, SWIR1 and Blue | 88.23 | 84.42 | 68.76 | 0.652 |
Real | Synthetic | Total | Train Accuracy (%) | Test Accuracy (%) | MIoU (%) | MCC |
---|---|---|---|---|---|---|
175 | 10 | 185 | 80.92 | 88.45 | 77.25 | 0.640 |
175 | 25 | 200 | 80.92 | 77.25 | 74.34 | 0.634 |
175 | 40 | 215 | 91.12 | 90.71 | 88.30 | 0.716 |
175 | 55 | 230 | 90.65 | 86.52 | 82.53 | 0.686 |
175 | 70 | 245 | 89.26 | 85.95 | 80.72 | 0.649 |
175 | 95 | 270 | 78.64 | 74.73 | 75.11 | 0.582 |
175 | 110 | 285 | 71.26 | 68.18 | 72.96 | 0.532 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).