Graph Neural Network-Based Method of Spatiotemporal Land Cover Mapping Using Satellite Imagery
Figure 1. The proposed method's workflow with four main steps.
Figure 2. Examples of applying Felzenszwalb's image segmentation algorithm (σ = 0.5) on a Sentinel-2 13-layer image with 10 m spatial resolution of an example region of Graz, Austria, in July 2017. A true-colour (RGB) composite of the multispectral image is shown.
Figure 3. An example of overlap portion calculations between segments, shown in (a), and creation of G with τ = 0.3 in (b) and τ = 0.4 in (c). (a) Overlap portion calculations between the red segment at time t and coloured segments at time t + 1; (b) τ = 0.3; (c) τ = 0.4.
Figure 4. Examples of G_sub construction for a v_target (red) at time t with T_lookback = [0, 2]. The G was constructed for a TS_mask with 3 images of a small example subregion using τ = 0.2. The orange edges connect all the included (enlarged) nodes in the G_sub. Each included v has a bbox drawn around the coloured s it represents. The G_sub in (a) includes only the v_target, while the G_sub in (b) contains v_target and 3 nodes with 3 edges between them.
Figure 5. Target node classification pipeline, which outputs the land cover class of the s_selected by classifying the v_target based on the input G_sub.
Figure 6. Intermonthly TS_image for the region of Graz and the region of Portorož, Izola and Koper. Each multispectral image contains C = 17 layers. The images in TS_image are visualised with a true-colour (RGB) composite.
Figure 7. Examples of the segmented regions. Images (a,c) show the true-colour (RGB) composites, while (b,d) show their respective segmentation masks. (a) The region of Graz in January 2019; (b) the mask for the region of Graz in January 2019; (c) the region of Portorož, Izola and Koper in November 2018; (d) the mask for the region of Portorož, Izola and Koper in November 2018.
Figure 8. Examples of CLC level 2 classification outputs, obtained with the UNet model by Esri, are shown in (a,c). The manually corrected ground truths, derived from the respective UNet model outputs, are shown in (b,d). (a) Classification output of Esri's UNet model for the region of Graz in January 2019; (b) manually corrected ground truth for the region of Graz in January 2019; (c) classification output of Esri's UNet model for the region of Portorož, Izola and Koper in November 2018; (d) manually corrected ground truth for the region of Portorož, Izola and Koper in November 2018.
Figure 9. Number of pixels per land cover class for each region in the dataset. Images (a,c) show the class distribution in TS_train_gt_cover, while (b,d) show the class distribution in TS_test_gt_cover. (a) Distribution of ground truth land cover labels of pixels in TS_train_gt_cover for the region of Graz; (b) distribution of ground truth land cover labels of pixels in TS_test_gt_cover for the region of Graz; (c) distribution of ground truth land cover labels of pixels in TS_train_gt_cover for the region of Portorož, Izola and Koper; (d) distribution of ground truth land cover labels of pixels in TS_test_gt_cover for the region of Portorož, Izola and Koper.
Figure 10. Number of nodes per land cover class for each region in the dataset. Images (a,c) show the class distribution in G_train, while (b,d) show the class distribution in G_test. (a) Distribution of ground truth land cover labels of nodes in G_train for the region of Graz; (b) distribution of ground truth land cover labels of nodes in G_test for the region of Graz; (c) distribution of ground truth land cover labels of nodes in G_train for the region of Portorož, Izola and Koper; (d) distribution of ground truth land cover labels of nodes in G_test for the region of Portorož, Izola and Koper.
Figure 11. CLC level 2 classification results for the region of Portorož, Izola and Koper, depending on the selection of GNN and the value of T_lookback.
Figure 12. Confusion matrices for classification of the region of Graz, obtained with the best performing classification model of the proposed GNN-based method. (a) Confusion matrix for CLC level 2 classification; (b) confusion matrix for CLC level 1 classification.
Figure 13. Confusion matrices for classification of the region of Portorož, Izola and Koper, obtained with the best performing classification model of the proposed GNN-based method. (a) Confusion matrix for CLC level 2 classification; (b) confusion matrix for CLC level 1 classification.
Figure 14. Individual weighted F1-scores for CLC level 2 classification of the consecutive images in TS_test_image for both regions of the dataset, obtained with the best performing corresponding classification model of the proposed GNN-based method. (a) Results for the 11 consecutive images of the region of Graz; (b) results for the 12 consecutive images of the region of Portorož, Izola and Koper.
Figure 15. Pixel-based heatmaps of cumulative incorrect CLC level 2 classifications for both regions of the dataset, derived from classifying the TS_test_image with the best performing corresponding classification model of the proposed GNN-based method. The colour transition from dark purple to bright yellow represents the frequency of misclassifications, with intensifying brightness signifying a higher count of errors. (a) Heatmap for the region of Graz; (b) heatmap for the region of Portorož, Izola and Koper.
Figure 16. Examples of CLC level 2 ground truth (a,c,e,g) and predictions (b,d,f,h) for both regions of the dataset, obtained with the best performing corresponding classification model of the proposed GNN-based method. (a) Ground truth, May 2020, region of Graz; (b) predicted land cover, May 2020, region of Graz; (c) ground truth, June 2021, region of Graz; (d) predicted land cover, June 2021, region of Graz; (e) ground truth, January 2020, region of Portorož, Izola and Koper; (f) predicted land cover, January 2020, region of Portorož, Izola and Koper; (g) ground truth, May 2021, region of Portorož, Izola and Koper; (h) predicted land cover, May 2021, region of Portorož, Izola and Koper.
Figure 17. Average node count and cumulative spatial coverage (total size of all segments) in a G_sub, along with metric scores, depending on the value of T_lookback. (a) Subgraph-related statistics for the region of Graz; (b) subgraph-related statistics for the region of Portorož, Izola and Koper.
Figure 18. Class-specific CLC level 2 classification accuracies (sourced from the confusion matrices in Figure 12a and Figure 13a) for both dataset regions, in relation to the distribution of ground truth land cover labels in G_train derived from Figure 10a,c.
Abstract
1. Introduction
- A new representation of the sequential satellite images as a directed graph, built by connecting segmented land regions through time based on sequential spatial segment overlaps.
- A new land cover mapping method as node classification in the derived directed graph using the GNN. The proposed method allows selection of a target node’s neighbourhood, which contains historical temporal context information of connected land region segments. The size of the neighbourhood determines the volume of input spatial and temporal information that the selected GNN uses for node classification.
- A modular target node classification pipeline, which offers flexible selection of a CNN for image feature extraction and a GNN for node classification.
- The first application of EfficientNetV2 as a feature extractor for GraphSAGE classification models to perform intermonthly land cover classification.
- Complete intermonthly land cover classification maps for the given regions using Sentinel-2 imagery, as shown in Section 5.
2. Related Work
2.1. Object-Based Land Cover Classification of Satellite Imagery
2.2. GNNs for Land Cover Classification
3. Methodology
3.1. Superpixel Segmentation
- σ: the standard deviation of the Gaussian kernel used to smooth the image in the preprocessing stage;
- k: a scale of observation for the threshold function, which controls the degree of required difference between two adjacent superpixels (a larger k causes a preference for larger superpixels);
- the minimum superpixel size: the minimum number of pixels inside a superpixel, which controls the merging of neighbouring superpixels in the postprocessing stage (see the usage sketch after this list).
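A minimal usage sketch of this segmentation step, assuming scikit-image ≥ 0.19, whose felzenszwalb function exposes the three parameters as sigma, scale and min_size and accepts multi-layer images through channel_axis; the image contents and all parameter values other than σ = 0.5 are illustrative, not the paper's exact settings:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Hypothetical 13-layer Sentinel-2 image (height, width, layers), scaled to [0, 1].
image = np.random.rand(512, 512, 13).astype(np.float32)

mask = felzenszwalb(
    image,
    sigma=0.5,        # Gaussian smoothing in preprocessing, as in Figure 2
    scale=100,        # the threshold-function scale k (value illustrative)
    min_size=50,      # smallest allowed superpixel (value illustrative)
    channel_axis=-1,  # treat the last axis as spectral layers
)
print(f"{mask.max() + 1} superpixels")  # integer label mask, one id per pixel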
3.2. Graph Construction
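Figure 3 illustrates the core rule of this step: segments of consecutive monthly masks become nodes v of the directed graph G, and an edge is created from a segment at time t to a segment at time t + 1 whenever their overlap portion reaches the threshold τ. Below is a hedged sketch of that rule, assuming the masks are integer label arrays of equal shape; the function name and the normalisation by the earlier segment's area are our illustrative choices, not the authors' implementation:

```python
import numpy as np

def temporal_edges(mask_t, mask_t_next, tau):
    """Directed edges (segment at t -> segment at t+1) whose overlap portion >= tau."""
    edges = []
    for seg_id in np.unique(mask_t):
        footprint = mask_t == seg_id            # pixels of this segment at time t
        area = footprint.sum()
        next_ids, counts = np.unique(mask_t_next[footprint], return_counts=True)
        for next_id, count in zip(next_ids, counts):
            portion = count / area              # share of the footprint covered by next_id
            if portion >= tau:
                edges.append((seg_id, next_id, portion))
    return edges
```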
3.3. Segment Representation and Subgraph Construction
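Following the description in Figure 4, a G_sub for a v_target gathers the nodes reachable backwards through G for at most T_lookback time steps, together with the edges between them. A minimal sketch, assuming G is a networkx DiGraph whose edges point forward in time and T_lookback is given as a step count; the function name is illustrative:

```python
import networkx as nx

def build_subgraph(G, v_target, t_lookback):
    """Collect v_target plus all nodes reachable backwards within t_lookback steps."""
    keep = {v_target}
    frontier = {v_target}
    for _ in range(t_lookback):
        # Predecessors are earlier-in-time segments whose overlap with the
        # frontier's segments reached the threshold tau during graph construction.
        frontier = {u for v in frontier for u in G.predecessors(v)}
        keep |= frontier
    return G.subgraph(keep).copy()  # induced subgraph with its internal edges
```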
3.4. Node Classification Pipeline
4. Dataset Preparation
4.1. Intermonthly Satellite Imagery Acquisition
4.2. Land Cover Ground Truth Creation
5. Results and Discussion
5.1. Application of the Proposed Method and Experimental Parameters
- Individual v contained the bbox, which was extended by 20 pixels in both width and height. The bbox was resized to a width of 48 and a height of 48 pixels. It also contained 7 layers, specifically, B04, B03, B02, NDVI, NDMI, NDWI and NDSI.
- The feature extraction part of the target node classification pipeline starts by passing the segment's bbox through the trainable 2D convolution with 3 kernels, which outputs a tensor with 3 feature maps. These are then passed into the CNN; the state-of-the-art EfficientNetV2-S [51] was selected. It had the fully-connected output layer removed, and all of its layers were trainable. Before being passed into the EfficientNetV2-S, the 3 feature maps were passed through the EfficientNetV2-S's preprocessing transforms. The final output of the EfficientNetV2-S was a vector of 1280 extracted features (see the pipeline sketch after this list).
- Each layer but the last in the GNN had a hidden dimension of 256, with a dropout of 0.5. The output dimension of the final layer in the GNN was set to match the number of classification classes N; specifically, it was set to 12 for the region of Graz and 13 for the region of Portorož, Izola and Koper. All the GNN layers were trainable.
- Before training, the EfficientNetV2-S model for feature extraction was initialised with ImageNet pretrained weights, because preliminary experiments showed that this led to lower training loss. The classification pipeline forms a single model, which was, as in [58], trained in a single training process using the Adam optimiser [74]. The initial learning rate was set to 0.001, and the model was trained for 10 epochs with a random shuffle of the training samples (i.e., subgraphs) in between. The batch size was 30. The loss function was categorical cross-entropy with class weights, due to the unbalanced ground truth land cover labels of the nodes in G_train. The edge weights in G were ignored during message passing through the GNN, because initial empirical experiments had shown that this led to lower training loss (see the training sketch after this list).
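Putting the items above together, a minimal sketch of the target node classification pipeline, assuming torchvision (for EfficientNetV2-S with ImageNet weights) and PyTorch Geometric (for GraphSAGE); the input convolution's kernel size and the use of exactly two GraphSAGE layers are illustrative assumptions, and the EfficientNetV2-S preprocessing transforms are omitted for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights
from torch_geometric.nn import SAGEConv

class NodeClassificationPipeline(nn.Module):
    def __init__(self, num_classes=12, hidden=256, dropout=0.5):
        super().__init__()
        # Trainable 2D convolution: 7 input layers (B04, B03, B02, NDVI, NDMI,
        # NDWI, NDSI) -> 3 feature maps; the kernel size is an assumption.
        self.to_three = nn.Conv2d(7, 3, kernel_size=3, padding=1)
        # EfficientNetV2-S with ImageNet weights; dropping the classifier keeps
        # the 1280-dimensional feature vector after global average pooling.
        backbone = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.IMAGENET1K_V1)
        backbone.classifier = nn.Identity()
        self.backbone = backbone
        # Two GraphSAGE layers (count assumed): hidden width 256, dropout 0.5.
        self.sage1 = SAGEConv(1280, hidden)
        self.sage2 = SAGEConv(hidden, num_classes)
        self.dropout = dropout

    def forward(self, images, edge_index):
        # images: (num_nodes, 7, 48, 48) resized bbox tensors of the subgraph's nodes
        x = self.backbone(self.to_three(images))   # (num_nodes, 1280) features
        x = F.relu(self.sage1(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.sage2(x, edge_index)           # per-node class logits
```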
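A hedged sketch of the corresponding training setup (Adam, initial learning rate 0.001, 10 epochs, batch size 30, weighted categorical cross-entropy); class_weights and subgraph_loader are assumed placeholders for the computed class weights and for a loader that yields shuffled batches of subgraphs with node images, edges, labels and a target-node mask:

```python
import torch

model = NodeClassificationPipeline(num_classes=12)  # 12 classes for the region of Graz
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)  # assumed precomputed tensor

model.train()
for epoch in range(10):                    # 10 epochs; the loader reshuffles samples
    for batch in subgraph_loader:          # assumed: yields batches of 30 subgraphs
        optimizer.zero_grad()
        logits = model(batch.images, batch.edge_index)
        # Only the target nodes carry the classification objective in each subgraph.
        loss = criterion(logits[batch.target_mask], batch.y[batch.target_mask])
        loss.backward()
        optimizer.step()
```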
5.2. Analysis of GNN Usage in the Proposed Method
- GAT [57], with the outputs of hidden layers passed through the exponential linear unit (ELU) activation function (a minimal sketch follows this list).
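For illustration, such a GAT classifier could be sketched in PyTorch Geometric as follows; the two-layer depth and the head count are assumptions, and only the ELU on hidden-layer outputs follows the description above:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class GATClassifier(torch.nn.Module):
    def __init__(self, in_dim=1280, hidden=256, num_classes=12, heads=1):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden, heads=heads)           # attention layer
        self.conv2 = GATConv(hidden * heads, num_classes, heads=1)  # output layer

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))  # ELU on the hidden layer's outputs
        return self.conv2(x, edge_index)      # raw logits for each node
```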
5.3. Classification Performance and Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bhandari, A.; Joshi, R.; Thapa, M.S.; Sharma, R.P.; Rauniyar, S.K. Land Cover Change and Its Impact in Crop Yield: A Case Study from Western Nepal. Sci. World J. 2022, 2022, 5129423. [Google Scholar] [CrossRef] [PubMed]
- Hussain, S.; Mubeen, M.; Ahmad, A.; Majeed, H.; Qaisrani, S.; Hammad, H.; Amjad, M.; Ahmad, I.; Fahad, S.; Ahmad, N.; et al. Assessment of land use/land cover changes and its effect on land surface temperature using remote sensing techniques in Southern Punjab, Pakistan. Environ. Sci. Pollut. Res. 2022. [Google Scholar] [CrossRef] [PubMed]
- Som-ard, J.; Immitzer, M.; Vuolo, F.; Ninsawat, S.; Atzberger, C. Mapping of crop types in 1989, 1999, 2009 and 2019 to assess major land cover trends of the Udon Thani Province, Thailand. Comput. Electron. Agric. 2022, 198, 107083. [Google Scholar] [CrossRef]
- Koetz, B.; Morsdorf, F.; van der Linden, S.; Curt, T.; Allgöwer, B. Multi-source land cover classification for forest fire management based on imaging spectrometry and LiDAR data. For. Ecol. Manag. 2008, 256, 263–271. [Google Scholar] [CrossRef]
- Hao, L.; van Westen, C.; Rajaneesh, A.; Sajinkumar, K.; Martha, T.R.; Jaiswal, P. Evaluating the relation between land use changes and the 2018 landslide disaster in Kerala, India. CATENA 2022, 216, 106363. [Google Scholar] [CrossRef]
- Shuaishuai, J.; Yang, C.; Wang, M.; Failler, P. Heterogeneous Impact of Land-Use on Climate Change: Study From a Spatial Perspective. Front. Environ. Sci. 2022, 10, 1–17. [Google Scholar]
- Aslam, S.; Chak, Y.C.; Hussain Jaffery, M.; Varatharajoo, R.; Ahmad Ansari, E. Model predictive control for Takagi–Sugeno fuzzy model-based Spacecraft combined energy and attitude control system. Adv. Space Res. 2023, 71, 4155–4172. [Google Scholar] [CrossRef]
- Yahya, N.; Varatharajoo, R.; Mohd Harithuddin, A.S. Satellite Formation Flying Relative Geodesic and Latitudinal Error Measures. J. Aeronaut. Astronaut. Aviat. Ser. A 2020, 52, 83–94. [Google Scholar]
- Li, J.; Chen, B. Global Revisit Interval Analysis of Landsat-8/-9 and Sentinel-2A/-2B Data for Terrestrial Monitoring. Sensors 2020, 20, 6631. [Google Scholar] [CrossRef]
- Yin, J.; Dong, J.; Hamm, N.A.; Li, Z.; Wang, J.; Xing, H.; Fu, P. Integrating remote sensing and geospatial big data for urban land use mapping: A review. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102514. [Google Scholar] [CrossRef]
- Hansen, M.C.; Loveland, T.R. A review of large area monitoring of land cover change using Landsat data. Remote Sens. Environ. 2012, 122, 66–74. [Google Scholar] [CrossRef]
- Phiri, D.; Simwanda, M.; Salekin, S.; Nyirenda, V.R.; Murayama, Y.; Ranagalage, M. Sentinel-2 Data for Land Cover/Use Mapping: A Review. Remote Sens. 2020, 12, 2291. [Google Scholar] [CrossRef]
- Zhu, Z.; Wang, S.; Woodcock, C.E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sens. Environ. 2015, 159, 269–277. [Google Scholar] [CrossRef]
- Frantz, D.; Haß, E.; Uhl, A.; Stoffels, J.; Hill, J. Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects. Remote Sens. Environ. 2018, 215, 471–481. [Google Scholar] [CrossRef]
- Dupuy, S.; Gaetano, R. Reunion Island-2019, Land Cover Map (Spot6/7)-1.5 m. 2020. Available online: https://dataverse.cirad.fr/dataset.xhtml?persistentId=doi:10.18167/DVN1/YZJQ7Q (accessed on 12 July 2023).
- Censi, A.M.; Ienco, D.; Gbodjo, Y.J.E.; Pensa, R.G.; Interdonato, R.; Gaetano, R. Attentive Spatial Temporal Graph CNN for Land Cover Mapping From Multi Temporal Remote Sensing Data. IEEE Access 2021, 9, 23070–23082. [Google Scholar] [CrossRef]
- Heymann, Y.; Steenmans, C.; Croisille, G.; Bossard, M.; Lenco, M.; Wyatt, B.; Jean-Louis, W.; O’Brian, C.; Cornaert, M.-H.; Nicolas, S. Corine Land Cover Technical Guide, Part I; Commission of the European Communities: Mestreech, The Netherlands, 1994. [Google Scholar]
- Copernicus Land Monitoring Service 2018. 2018. Available online: https://land.copernicus.eu/pan-european/corine-land-cover/clc2018 (accessed on 12 July 2023).
- Soukup, T.; Feranec, J.; Hazeu, G.; Jaffrain, G.; Jindrova, M.; Kopecky, M.; Orlitova, E. Chapter 10 CORINE Land Cover 1990 (CLC1990): Analysis and Assessment: CORINE Land Cover Data. In European Landscape Dynamics; Feranec, J., Soukup, T., Hazeu, G., Jaffrain, G., Eds.; Taylor & Francis Group: London, UK, 2016; pp. 69–78. [Google Scholar]
- Buttner, G.; Feranec, J.; Jaffrain, G.; Mari, L.; Maucha, G.; Soukup, T. The CORINE land cover 2000 project. EARSeL Eproceedings 2004, 3, 331–346. [Google Scholar]
- Soukup, T.; Feranec, J.; Hazeu, G.; Jaffrain, G.; Jindrova, M.; Kopecky, M.; Orlitova, E. Chapter 12 CORINE Land Cover 2006 (CLC2006): Analysis and Assessment: CORINE Land Cover Data. In European Landscape Dynamics; Feranec, J., Soukup, T., Hazeu, G., Jaffrain, G., Eds.; Taylor & Francis Group: London, UK, 2016; pp. 87–92. [Google Scholar]
- Soukup, T.; Büttner, G.; Feranec, J.; Hazeu, G.; Jaffrain, G.; Jindrova, M.; Kopecky, M.; Orlitova, E. Chapter 13 CORINE Land Cover 2012 (CLC2012): Analysis and Assessment: CORINE Land Cover Data. In European Landscape Dynamics; Feranec, J., Soukup, T., Hazeu, G., Jaffrain, G., Eds.; Taylor & Francis Group: London, UK, 2016; pp. 93–98. [Google Scholar]
- Aune-Lundberg, L.; Strand, G.H. The content and accuracy of the CORINE Land Cover dataset for Norway. Int. J. Appl. Earth Obs. Geoinf. 2020, 96, 102266. [Google Scholar] [CrossRef]
- Eurostat. LUCAS—EU Land Use and Cover Area Survey—2021 Edition; EU: Mestreech, The Netherlands, 2021. [Google Scholar]
- Landa, M.; Brodský, L.; Halounová, L.; Bouček, T.; Pešek, O. Open Geospatial System for LUCAS In Situ Data Harmonization and Distribution. ISPRS Int. J. Geo-Inf. 2022, 11, 361. [Google Scholar] [CrossRef]
- Brown, C.; Brumby, S.; Guzder-Williams, B.; Birch, T.; Hyde, S.; Mazzariello, J.; Czerwinski, W.; Pasquarella, V.; Haertel, R.; Ilyushchenko, S.; et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Sci. Data 2022, 9, 251. [Google Scholar] [CrossRef]
- Hofierka, J.; Gallay, M.; Onačillová, K.; Hofierka, J. Physically-based land surface temperature modeling in urban areas using a 3-D city model and multispectral satellite data. Urban Clim. 2020, 31, 100566. [Google Scholar] [CrossRef]
- Li, S.; Xiong, L.; Tang, G.; Strobl, J. Deep learning-based approach for landform classification from integrated data sources of digital elevation model and imagery. Geomorphology 2020, 354, 107045. [Google Scholar] [CrossRef]
- Gaur, S.; Singh, R. A Comprehensive Review on Land Use/Land Cover (LULC) Change Modeling for Urban Development: Current Status and Future Prospects. Sustainability 2023, 15, 903. [Google Scholar] [CrossRef]
- Gašparović, M.; Jogun, T. The Effect of Fusing Sentinel-2 Bands on Land-Cover Classification. Int. J. Remote Sens. 2018, 39, 822–841. [Google Scholar] [CrossRef]
- Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef]
- Zhang, C.; Yue, P.; Tapete, D.; Shangguan, B.; Wang, M.; Wu, Z. A multi-level context-guided classification method with object-based convolutional neural network for land cover classification using very high resolution remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102086. [Google Scholar] [CrossRef]
- Mongus, D.; Žalik, B. Segmentation schema for enhancing land cover identification: A case study using Sentinel 2 data. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 56–68. [Google Scholar] [CrossRef]
- Yang, C.; Rottensteiner, F.; Heipke, C. A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases. ISPRS J. Photogramm. Remote Sens. 2021, 177, 38–56. [Google Scholar] [CrossRef]
- Fitton, D.; Laurens, E.; Hongkarnjanakul, N.; Schwob, C.; Mezeix, L. Land cover classification through Convolutional Neural Network model assembly: A case study of a local rural area in Thailand. Remote Sens. Appl. Soc. Environ. 2022, 26, 100740. [Google Scholar]
- Fu, J.; Yi, X.; Wang, G.; Mo, L.; Wu, P.; Kapula, K.E. Research on Ground Object Classification Method of High Resolution Remote-Sensing Images Based on Improved DeeplabV3+. Sensors 2022, 22, 7477. [Google Scholar] [CrossRef]
- Li, M.; Lu, Y.; Cao, S.; Wang, X.; Xie, S. A Hyperspectral Image Classification Method Based on the Nonlocal Attention Mechanism of a Multiscale Convolutional Neural Network. Sensors 2023, 23, 3190. [Google Scholar] [CrossRef]
- Li, J.; Wang, H.; Zhang, A.; Liu, Y. Semantic Segmentation of Hyperspectral Remote Sensing Images Based on PSE-UNet Model. Sensors 2022, 22, 9678. [Google Scholar] [CrossRef]
- Abidi, A.; Ienco, D.; Abbes, A.B.; Farah, I.R. Combining 2D encoding and convolutional neural network to enhance land cover mapping from Satellite Image Time Series. Eng. Appl. Artif. Intell. 2023, 122, 106152. [Google Scholar] [CrossRef]
- Qiu, C.; Mou, L.; Schmitt, M.; Zhu, X.X. Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network. ISPRS J. Photogramm. Remote Sens. 2019, 154, 151–162. [Google Scholar] [CrossRef]
- Dantas, C.F.; Marcos, D.; Ienco, D. Counterfactual Explanations for Land Cover Mapping in a Multi-class Setting. arXiv 2023, arXiv:cs.LG/2301.01520. [Google Scholar]
- Chen, B.; Zheng, H.; Wang, L.; Hellwich, O.; Chen, C.; Yang, L.; Liu, T.; Luo, G.; Bao, A.; Chen, X. A joint learning Im-BiLSTM model for incomplete time-series Sentinel-2A data imputation and crop classification. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102762. [Google Scholar] [CrossRef]
- Jiang, Z.; Yang, S.; Liu, Z.; Xu, Y.; Xiong, Y.; Qi, S.; Pang, Q.; Xu, J.; Liu, F.; Xu, T. Coupling machine learning and weather forecast to predict farmland flood disaster: A case study in Yangtze River basin. Environ. Model. Softw. 2022, 155, 105436. [Google Scholar] [CrossRef]
- Aamir, M.; Ali, T.; Irfan, M.; Shaf, A.; Azam, M.Z.; Glowacz, A.; Brumercik, F.; Glowacz, W.; Alqhtani, S.; Rahman, S. Natural Disasters Intensity Analysis and Classification Based on Multispectral Images Using Multi-Layered Deep Convolutional Neural Network. Sensors 2021, 21, 2648. [Google Scholar] [CrossRef]
- Siddiqui, M.K.; Imran, M.; Ahmad, A. On Zagreb indices, Zagreb polynomials of some nanostar dendrimers. Appl. Math. Comput. 2016, 280, 132–139. [Google Scholar] [CrossRef]
- Ahmad, A.; Bača, M.; Siddiqui, M.K. On Edge Irregular Total Labeling of Categorical Product of Two Cycles. Theory Comput. Syst. 2013, 54, 1–12. [Google Scholar] [CrossRef]
- Azeem, M.; Imran, M.; Nadeem, M.F. Sharp bounds on partition dimension of hexagonal Möbius ladder. J. King Saud Univ. Sci. 2022, 34, 101779. [Google Scholar] [CrossRef]
- Pang, H.E.; Biljecki, F. 3D building reconstruction from single street view images using deep learning. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102859. [Google Scholar] [CrossRef]
- Ding, Y.; Zhang, Z.; Zhao, X.; Hong, D.; Li, W.; Cai, W.; Zhan, Y. AF2GNN: Graph convolution with adaptive filters and aggregator fusion for hyperspectral image classification. Inf. Sci. 2022, 602, 201–219. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations, Virtual, 3–7 May 2021. [Google Scholar]
- Tan, M.; Le, Q.V. EfficientNetV2: Smaller Models and Faster Training. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021. [Google Scholar]
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2020, arXiv:cs.LG/1905.11946. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. arXiv 2017, arXiv:1611.05431v2. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. arXiv 2016, arXiv:1512.00567. [Google Scholar]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K. Densely Connected Convolutional Networks. arXiv 2017, arXiv:1608.06993. [Google Scholar]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
- Hamilton, W.; Ying, Z.; Leskovec, J. Inductive Representation Learning on Large Graphs. In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Kaiser, P.; Wegner, J.; Lucchi, A.; Jaggi, M.; Hofmann, T.; Schindler, K. Learning Aerial Image Segmentation From Online Maps. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6054–6068. [Google Scholar] [CrossRef]
- Nasir, S.M.; Kamran, K.V.; Blaschke, T.; Karimzadeh, S. Change of land use/land cover in Kurdistan Region of Iraq: A semi-automated object-based approach. Remote Sens. Appl. Soc. Environ. 2022, 26, 100713. [Google Scholar] [CrossRef]
- Liu, T.; Abd-Elrahman, A. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification. ISPRS J. Photogramm. Remote Sens. 2018, 139, 154–170. [Google Scholar] [CrossRef]
- Herawan, A.; Julzarika, A.; Hakim, P.; Asti, E. Object-Based on Land Cover Classification on LAPAN-A3 Satellite Imagery Using Tree Algorithm (Case Study: Rote Island). Int. J. Adv. Sci. Eng. Inf. Technol. 2021, 11, 2254–2260. [Google Scholar] [CrossRef]
- Vizzari, M. PlanetScope, Sentinel-2, and Sentinel-1 Data Integration for Object-Based Land Cover Classification in Google Earth Engine. Remote Sens. 2022, 14, 2628. [Google Scholar] [CrossRef]
- Hedayati, A.; Vahidnia, M.H.; Behzadi, S. Paddy lands detection using Landsat-8 satellite images and object-based classification in Rasht city, Iran. Egypt. J. Remote Sens. Space Sci. 2022, 25, 73–84. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017. [Google Scholar]
- Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; Li, H. T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3848–3858. [Google Scholar] [CrossRef]
- Zhang, Z.; Huang, J.; Tan, Q. SR-HGAT: Symmetric Relations Based Heterogeneous Graph Attention Network. IEEE Access 2020, 8, 165631–165645. [Google Scholar] [CrossRef]
- Hu, B.; Guo, K.; Wang, X.; Zhang, J.; Zhou, D. RRL-GAT: Graph Attention Network-Driven Multilabel Image Robust Representation Learning. IEEE Internet Things J. 2022, 9, 9167–9178. [Google Scholar] [CrossRef]
- Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, 19–23 August 2018; pp. 974–983. [Google Scholar]
- Zhao, W.; Peng, S.; Chen, J.; Peng, R. Contextual-Aware Land Cover Classification With U-Shaped Object Graph Neural Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6510705. [Google Scholar] [CrossRef]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
- Stutz, D.; Hermans, A.; Leibe, B. Superpixels: An evaluation of the state-of-the-art. Comput. Vis. Image Underst. 2018, 166, 1–27. [Google Scholar] [CrossRef]
- Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Grandini, M.; Bagli, E.; Visani, G. Metrics for Multi-Class Classification: An Overview. arXiv 2020, arXiv:abs/2008.05756. [Google Scholar]
- Lever, J.; Krzywinski, M.; Altman, N. Points of Significance: Classification evaluation. Nat. Methods 2016, 13, 603–604. [Google Scholar] [CrossRef]
| Classification Method | T_lookback | Accuracy | Precision (Weighted) | Precision (Macro) | Precision (Micro) | Recall (Weighted) | Recall (Macro) | Recall (Micro) | F1-Score (Weighted) | F1-Score (Macro) | F1-Score (Micro) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Esri's UNet | / | 0.818 | 0.841 | 0.349 | 0.818 | 0.818 | 0.465 | 0.818 | 0.824 | 0.371 | 0.818 |
| Proposed GNN-based method | 0 | 0.821 ± 0.016 | 0.862 ± 0.004 | 0.432 ± 0.021 | 0.821 ± 0.016 | 0.821 ± 0.016 | 0.552 ± 0.024 | 0.821 ± 0.016 | 0.834 ± 0.012 | 0.458 ± 0.020 | 0.821 ± 0.016 |
| | 1 | 0.828 ± 0.013 | 0.865 ± 0.005 | 0.441 ± 0.030 | 0.828 ± 0.013 | 0.828 ± 0.013 | 0.535 ± 0.029 | 0.828 ± 0.013 | 0.841 ± 0.010 | 0.465 ± 0.032 | 0.828 ± 0.013 |
| | 2 | 0.831 ± 0.004 | 0.858 ± 0.003 | 0.442 ± 0.031 | 0.831 ± 0.004 | 0.831 ± 0.004 | 0.534 ± 0.040 | 0.831 ± 0.004 | 0.841 ± 0.003 | 0.468 ± 0.034 | 0.831 ± 0.004 |
| | 3 | 0.823 ± 0.016 | 0.855 ± 0.005 | 0.409 ± 0.009 | 0.823 ± 0.016 | 0.823 ± 0.016 | 0.509 ± 0.017 | 0.823 ± 0.016 | 0.836 ± 0.010 | 0.432 ± 0.012 | 0.823 ± 0.016 |
| | 4 | 0.800 ± 0.015 | 0.840 ± 0.015 | 0.392 ± 0.009 | 0.800 ± 0.015 | 0.800 ± 0.015 | 0.494 ± 0.016 | 0.800 ± 0.015 | 0.815 ± 0.014 | 0.412 ± 0.012 | 0.800 ± 0.015 |
| | 5 | 0.788 ± 0.020 | 0.836 ± 0.010 | 0.401 ± 0.027 | 0.788 ± 0.020 | 0.788 ± 0.020 | 0.508 ± 0.028 | 0.788 ± 0.020 | 0.805 ± 0.016 | 0.422 ± 0.030 | 0.788 ± 0.020 |
| | 6 | 0.737 ± 0.030 | 0.810 ± 0.022 | 0.386 ± 0.033 | 0.737 ± 0.030 | 0.737 ± 0.030 | 0.482 ± 0.058 | 0.737 ± 0.030 | 0.763 ± 0.030 | 0.393 ± 0.040 | 0.737 ± 0.030 |
| | 7 | 0.636 ± 0.085 | 0.737 ± 0.075 | 0.391 ± 0.045 | 0.636 ± 0.085 | 0.636 ± 0.085 | 0.478 ± 0.044 | 0.636 ± 0.085 | 0.665 ± 0.091 | 0.395 ± 0.050 | 0.636 ± 0.085 |
| Classification Method | T_lookback | Accuracy | Precision (Weighted) | Precision (Macro) | Precision (Micro) | Recall (Weighted) | Recall (Macro) | Recall (Micro) | F1-Score (Weighted) | F1-Score (Macro) | F1-Score (Micro) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Esri's UNet | / | 0.792 | 0.827 | 0.578 | 0.792 | 0.792 | 0.677 | 0.792 | 0.801 | 0.589 | 0.792 |
| Proposed GNN-based method | 0 | 0.741 ± 0.021 | 0.770 ± 0.011 | 0.520 ± 0.020 | 0.741 ± 0.021 | 0.741 ± 0.021 | 0.577 ± 0.014 | 0.741 ± 0.021 | 0.746 ± 0.019 | 0.529 ± 0.021 | 0.741 ± 0.021 |
| | 1 | 0.727 ± 0.022 | 0.762 ± 0.013 | 0.512 ± 0.018 | 0.727 ± 0.022 | 0.727 ± 0.022 | 0.575 ± 0.013 | 0.727 ± 0.022 | 0.730 ± 0.022 | 0.522 ± 0.015 | 0.727 ± 0.022 |
| | 2 | 0.709 ± 0.012 | 0.736 ± 0.009 | 0.491 ± 0.029 | 0.709 ± 0.012 | 0.709 ± 0.012 | 0.561 ± 0.011 | 0.709 ± 0.012 | 0.709 ± 0.011 | 0.504 ± 0.021 | 0.709 ± 0.012 |
| | 3 | 0.656 ± 0.051 | 0.714 ± 0.021 | 0.439 ± 0.038 | 0.656 ± 0.051 | 0.656 ± 0.051 | 0.531 ± 0.029 | 0.656 ± 0.051 | 0.660 ± 0.047 | 0.448 ± 0.051 | 0.656 ± 0.051 |
| | 4 | 0.584 ± 0.037 | 0.674 ± 0.022 | 0.369 ± 0.023 | 0.584 ± 0.037 | 0.584 ± 0.037 | 0.492 ± 0.020 | 0.584 ± 0.037 | 0.585 ± 0.033 | 0.362 ± 0.027 | 0.584 ± 0.037 |
| | 5 | 0.476 ± 0.042 | 0.595 ± 0.043 | 0.311 ± 0.028 | 0.476 ± 0.042 | 0.476 ± 0.042 | 0.421 ± 0.023 | 0.476 ± 0.042 | 0.466 ± 0.049 | 0.285 ± 0.025 | 0.476 ± 0.042 |
| | 6 | 0.385 ± 0.089 | 0.557 ± 0.044 | 0.249 ± 0.029 | 0.385 ± 0.089 | 0.385 ± 0.089 | 0.279 ± 0.053 | 0.385 ± 0.089 | 0.395 ± 0.090 | 0.199 ± 0.036 | 0.385 ± 0.089 |
| | 7 | 0.369 ± 0.046 | 0.463 ± 0.040 | 0.192 ± 0.018 | 0.369 ± 0.046 | 0.369 ± 0.046 | 0.206 ± 0.021 | 0.369 ± 0.046 | 0.368 ± 0.035 | 0.157 ± 0.020 | 0.369 ± 0.046 |
| Classification Method | T_lookback | Accuracy | Precision (Weighted) | Precision (Macro) | Precision (Micro) | Recall (Weighted) | Recall (Macro) | Recall (Micro) | F1-Score (Weighted) | F1-Score (Macro) | F1-Score (Micro) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Esri's UNet | / | 0.857 | 0.863 | 0.568 | 0.857 | 0.857 | 0.570 | 0.857 | 0.857 | 0.567 | 0.857 |
| Proposed GNN-based method | 0 | 0.881 ± 0.007 | 0.883 ± 0.006 | 0.867 ± 0.006 | 0.881 ± 0.007 | 0.881 ± 0.007 | 0.901 ± 0.010 | 0.881 ± 0.007 | 0.880 ± 0.008 | 0.881 ± 0.007 | 0.881 ± 0.007 |
| | 1 | 0.888 ± 0.006 | 0.890 ± 0.006 | 0.877 ± 0.008 | 0.888 ± 0.006 | 0.888 ± 0.006 | 0.907 ± 0.008 | 0.888 ± 0.006 | 0.888 ± 0.007 | 0.889 ± 0.006 | 0.888 ± 0.006 |
| | 2 | 0.886 ± 0.003 | 0.888 ± 0.002 | 0.872 ± 0.009 | 0.886 ± 0.003 | 0.886 ± 0.003 | 0.906 ± 0.007 | 0.886 ± 0.003 | 0.886 ± 0.003 | 0.886 ± 0.002 | 0.886 ± 0.003 |
| | 3 | 0.882 ± 0.009 | 0.883 ± 0.008 | 0.860 ± 0.010 | 0.882 ± 0.009 | 0.882 ± 0.009 | 0.903 ± 0.009 | 0.882 ± 0.009 | 0.882 ± 0.009 | 0.878 ± 0.009 | 0.882 ± 0.009 |
| | 4 | 0.866 ± 0.015 | 0.868 ± 0.014 | 0.846 ± 0.017 | 0.866 ± 0.015 | 0.866 ± 0.015 | 0.885 ± 0.026 | 0.866 ± 0.015 | 0.865 ± 0.015 | 0.861 ± 0.020 | 0.866 ± 0.015 |
| | 5 | 0.859 ± 0.011 | 0.863 ± 0.010 | 0.838 ± 0.012 | 0.859 ± 0.011 | 0.859 ± 0.011 | 0.884 ± 0.011 | 0.859 ± 0.011 | 0.859 ± 0.011 | 0.855 ± 0.013 | 0.859 ± 0.011 |
| | 6 | 0.822 ± 0.027 | 0.832 ± 0.024 | 0.789 ± 0.035 | 0.822 ± 0.027 | 0.822 ± 0.027 | 0.839 ± 0.031 | 0.822 ± 0.027 | 0.821 ± 0.028 | 0.798 ± 0.042 | 0.822 ± 0.027 |
| | 7 | 0.756 ± 0.071 | 0.770 ± 0.065 | 0.723 ± 0.069 | 0.756 ± 0.071 | 0.756 ± 0.071 | 0.796 ± 0.059 | 0.756 ± 0.071 | 0.756 ± 0.074 | 0.738 ± 0.074 | 0.756 ± 0.071 |
| Classification Method | T_lookback | Accuracy | Precision (Weighted) | Precision (Macro) | Precision (Micro) | Recall (Weighted) | Recall (Macro) | Recall (Micro) | F1-Score (Weighted) | F1-Score (Macro) | F1-Score (Micro) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Esri's UNet | / | 0.864 | 0.869 | 0.862 | 0.864 | 0.864 | 0.794 | 0.864 | 0.862 | 0.821 | 0.864 |
| Proposed GNN-based method | 0 | 0.864 ± 0.023 | 0.873 ± 0.011 | 0.847 ± 0.014 | 0.864 ± 0.023 | 0.864 ± 0.023 | 0.882 ± 0.016 | 0.864 ± 0.023 | 0.864 ± 0.024 | 0.859 ± 0.018 | 0.864 ± 0.023 |
| | 1 | 0.871 ± 0.014 | 0.876 ± 0.010 | 0.844 ± 0.020 | 0.871 ± 0.014 | 0.871 ± 0.014 | 0.890 ± 0.010 | 0.871 ± 0.014 | 0.872 ± 0.013 | 0.862 ± 0.016 | 0.871 ± 0.014 |
| | 2 | 0.861 ± 0.012 | 0.865 ± 0.009 | 0.830 ± 0.019 | 0.861 ± 0.012 | 0.861 ± 0.012 | 0.882 ± 0.007 | 0.861 ± 0.012 | 0.861 ± 0.012 | 0.850 ± 0.016 | 0.861 ± 0.012 |
| | 3 | 0.841 ± 0.026 | 0.843 ± 0.024 | 0.793 ± 0.045 | 0.841 ± 0.026 | 0.841 ± 0.026 | 0.859 ± 0.022 | 0.841 ± 0.026 | 0.840 ± 0.026 | 0.814 ± 0.047 | 0.841 ± 0.026 |
| | 4 | 0.793 ± 0.027 | 0.803 ± 0.022 | 0.712 ± 0.037 | 0.793 ± 0.027 | 0.793 ± 0.027 | 0.814 ± 0.019 | 0.793 ± 0.027 | 0.788 ± 0.030 | 0.722 ± 0.040 | 0.793 ± 0.027 |
| | 5 | 0.734 ± 0.033 | 0.750 ± 0.026 | 0.646 ± 0.039 | 0.734 ± 0.033 | 0.734 ± 0.033 | 0.752 ± 0.028 | 0.734 ± 0.033 | 0.730 ± 0.025 | 0.644 ± 0.046 | 0.734 ± 0.033 |
| | 6 | 0.623 ± 0.072 | 0.710 ± 0.033 | 0.571 ± 0.034 | 0.623 ± 0.072 | 0.623 ± 0.072 | 0.586 ± 0.088 | 0.623 ± 0.072 | 0.638 ± 0.056 | 0.515 ± 0.042 | 0.623 ± 0.072 |
| | 7 | 0.572 ± 0.067 | 0.615 ± 0.050 | 0.484 ± 0.047 | 0.572 ± 0.067 | 0.572 ± 0.067 | 0.461 ± 0.048 | 0.572 ± 0.067 | 0.572 ± 0.063 | 0.451 ± 0.053 | 0.572 ± 0.067 |