Empirical Evidence Regarding Few-Shot Learning for Scene Classification in Remote Sensing Images
Figure 1. An arrangement for a 2-way 1-shot few-shot task.
Figure 2. The workflow of the method related to this study.
Figure 3. Samples from the EuroSAT (top row: (a–e)) and XAI4SAR (bottom row: (f–j)) datasets.
Figure 4. Samples from the UC Merced (top row: (a–e)) and WHU-RS19 (bottom row: (f–j)) datasets; resid. = residential.
Figure 5. Samples from the AID (top row: (a–e)) and RESISC-45 (bottom row: (f–j)) datasets.
Figure 6. Q-Q plots for the 1-shot and 5-shot sets: (a) 1-shot set; (b) 5-shot set.
Figure 7. Average accuracies for the 1-shot and 5-shot sets.
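The first caption above describes the arrangement of a 2-way 1-shot task: a labelled support set with one image per unseen class and a query set to be classified. As a minimal sketch of that arrangement (the class names, file identifiers, and the `sample_episode` helper below are illustrative assumptions, not taken from the paper):

```python
import random

def sample_episode(images_by_class, n_way=2, n_shot=1, n_query=15, rng=random):
    """Draw one N-way K-shot episode: a labelled support set and a query set
    taken from n_way classes that were unseen during training."""
    classes = rng.sample(sorted(images_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = rng.sample(images_by_class[cls], n_shot + n_query)
        support += [(img, label) for img in picks[:n_shot]]  # one image per class when n_shot=1
        query += [(img, label) for img in picks[n_shot:]]    # labels kept only for scoring
    return support, query

# Illustrative only: two held-out scene classes with dummy image identifiers.
images_by_class = {"beach": [f"beach_{i}.png" for i in range(50)],
                   "forest": [f"forest_{i}.png" for i in range(50)]}
support, query = sample_episode(images_by_class)  # a 2-way 1-shot task
```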
Abstract
1. Introduction
1. RQ_1—Which of the inference approaches is more suitable: inductive or transductive?
2. RQ_2—Considering classical training, how does the number of epochs used during training on the meta-training (base) dataset relate to the number of unseen classes during inference?
3. RQ_3—Is relying on 5-shot tasks statistically significantly better than relying on 1-shot tasks?
4. RQ_4—Would a higher similarity between the unseen classes (during inference) and some of the existing classes in the base training dataset improve technique performance?
2. Background
2.1. Related Work
3. Materials and Methods
3.1. Research Questions
1. RQ_1—Which of the inference approaches is more suitable: inductive or transductive?
2. RQ_2—Considering classical training, how does the number of epochs used during training on the meta-training (base) dataset relate to the number of unseen classes during inference?
3. RQ_3—Is relying on 5-shot tasks statistically significantly better than relying on 1-shot tasks? (A minimal statistical-test sketch follows this list.)
4. RQ_4—Would a higher similarity between the unseen classes (during inference) and some of the existing classes in the base training dataset improve technique performance?
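RQ_3 concerns the statistical significance of the 5-shot versus 1-shot comparison. The paper's exact procedure is not reproduced in this outline; the Q-Q plots among the figures suggest a normality check before choosing a test. As a hedged sketch only, a paired comparison over matched approach/dataset accuracies could be run as follows (the accuracy values are placeholders, not the paper's results):

```python
import numpy as np
from scipy import stats

# Matched (approach, dataset) accuracies -- placeholder values, NOT the paper's results.
acc_1shot = np.array([0.51, 0.62, 0.57, 0.65, 0.41, 0.37])
acc_5shot = np.array([0.53, 0.67, 0.69, 0.78, 0.53, 0.47])

diff = acc_5shot - acc_1shot
stats.probplot(diff, dist="norm")                      # quantile pairs behind a Q-Q plot normality check
t_stat, p_t = stats.ttest_rel(acc_5shot, acc_1shot)    # paired t-test if the differences look normal
w_stat, p_w = stats.wilcoxon(acc_5shot, acc_1shot)     # non-parametric alternative otherwise
print(f"paired t-test p = {p_t:.4f}; Wilcoxon signed-rank p = {p_w:.4f}")
```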
3.2. Datasets
3.3. FSL Approaches
3.3.1. Inductive Approaches
3.3.2. Transductive Approaches
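The body text of Sections 3.3.1 and 3.3.2 is not reproduced in this outline. As a minimal, hedged illustration of the inductive/transductive distinction that RQ_1 asks about: the inductive rule below is the standard prototype classifier (as in Prototypical Networks), while the transductive variant is only a generic illustration that folds the unlabelled queries back into the prototypes; it is not an implementation of any specific approach listed in the result tables.

```python
import torch
import torch.nn.functional as F

def inductive_predict(support_feats, support_labels, query_feats, n_way):
    """Inductive rule: class prototype = mean of its support embeddings;
    each query is assigned to the nearest prototype."""
    protos = torch.stack([support_feats[support_labels == c].mean(0) for c in range(n_way)])
    return torch.cdist(query_feats, protos).argmin(dim=1)

def transductive_predict(support_feats, support_labels, query_feats, n_way, steps=5):
    """Transductive variant (illustration only): the unlabelled queries' soft
    assignments are folded back into the prototypes before the final decision."""
    protos = torch.stack([support_feats[support_labels == c].mean(0) for c in range(n_way)])
    for _ in range(steps):
        soft = F.softmax(-torch.cdist(query_feats, protos), dim=1)          # (n_query, n_way)
        query_means = (soft.t() @ query_feats) / soft.sum(0).unsqueeze(1)   # soft class means of the queries
        protos = 0.5 * protos + 0.5 * query_means                           # blend support and query evidence
    return torch.cdist(query_feats, protos).argmin(dim=1)

# Toy usage: a 2-way 1-shot task with random 64-dimensional features.
sup, sup_y, qry = torch.randn(2, 64), torch.tensor([0, 1]), torch.randn(30, 64)
print(inductive_predict(sup, sup_y, qry, 2)[:5], transductive_predict(sup, sup_y, qry, 2)[:5])
```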
3.4. Experiment Design Options and Metric
4. Results
4.1. RQ_1
4.2. RQ_2
4.3. RQ_3
4.4. RQ_4
5. Discussion
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Dataset | # Classes (Training) | # Classes (Validation) | # Classes (Test) | # Tasks (Validation) | # Tasks (Test) |
|---|---|---|---|---|---|
| EuroSAT | 6 | 2 | 2 | 50 | 1000 |
| XAI4SAR | 4 | 2 | 2 | 50 | 1000 |
| UC Merced | 13 | 4 | 4 | 50 | 1000 |
| WHU-RS19 | 11 | 4 | 4 | 50 | 1000 |
| AID | 18 | 6 | 6 | 50 | 1000 |
| RESISC-45 | 27 | 9 | 9 | 50 | 1000 |
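The table above fixes, for each dataset, how many classes are reserved for meta-training, validation, and test, and how many validation and test tasks are drawn (50 and 1000, respectively). A minimal sketch of turning these counts into disjoint class splits (the `split_classes` helper and the seed are illustrative assumptions; validation and test episodes would then be sampled only from the corresponding class lists):

```python
import random

# Class and task counts from the table above (validation: 50 tasks, test: 1000 tasks per dataset).
SPLITS = {
    "EuroSAT":   {"train": 6,  "val": 2, "test": 2},
    "XAI4SAR":   {"train": 4,  "val": 2, "test": 2},
    "UC Merced": {"train": 13, "val": 4, "test": 4},
    "WHU-RS19":  {"train": 11, "val": 4, "test": 4},
    "AID":       {"train": 18, "val": 6, "test": 6},
    "RESISC-45": {"train": 27, "val": 9, "test": 9},
}
N_VAL_TASKS, N_TEST_TASKS = 50, 1000

def split_classes(class_names, sizes, seed=0):
    """Assign a dataset's classes disjointly to train/val/test following the table's counts.
    Assumes len(class_names) equals the sum of the three counts (illustrative helper)."""
    rng = random.Random(seed)
    names = list(class_names)
    rng.shuffle(names)
    n_train, n_val = sizes["train"], sizes["val"]
    return names[:n_train], names[n_train:n_train + n_val], names[n_train + n_val:]
```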
| Approach | EuroSAT 1-Shot | EuroSAT 5-Shot | XAI4SAR 1-Shot | XAI4SAR 5-Shot | UC Merced 1-Shot | UC Merced 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.509 | 0.526 | 0.618 | 0.674 | 0.571 | 0.687 |
| SS | 0.512 | 0.527 | 0.650 | 0.712 | 0.552 | 0.666 |
| FT | 0.510 | 0.518 | 0.670 | 0.767 | 0.548 | 0.659 |
| BD-CSPN | 0.501 | 0.520 | 0.656 | 0.705 | 0.557 | 0.663 |
| LS | 0.505 | 0.523 | 0.600 | 0.630 | 0.570 | 0.685 |
| PT-MAP | 0.492 | 0.507 | 0.654 | 0.748 | 0.655 | 0.742 |
| TIM | 0.510 | 0.524 | 0.694 | 0.769 | 0.557 | 0.674 |
| TFT | 0.509 | 0.526 | 0.618 | 0.679 | 0.571 | 0.687 |
| Approach | WHU-RS19 1-Shot | WHU-RS19 5-Shot | AID 1-Shot | AID 5-Shot | RESISC-45 1-Shot | RESISC-45 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.649 | 0.775 | 0.405 | 0.528 | 0.368 | 0.474 |
| SS | 0.674 | 0.782 | 0.427 | 0.537 | 0.389 | 0.480 |
| FT | 0.668 | 0.785 | 0.424 | 0.538 | 0.387 | 0.487 |
| BD-CSPN | 0.682 | 0.787 | 0.422 | 0.518 | 0.386 | 0.472 |
| LS | 0.670 | 0.785 | 0.415 | 0.506 | 0.381 | 0.465 |
| PT-MAP | 0.679 | 0.785 | 0.416 | 0.505 | 0.399 | 0.502 |
| TIM | 0.679 | 0.787 | 0.428 | 0.543 | 0.393 | 0.485 |
| TFT | 0.649 | 0.776 | 0.405 | 0.528 | 0.367 | 0.475 |
| Approach | EuroSAT 1-Shot | EuroSAT 5-Shot | XAI4SAR 1-Shot | XAI4SAR 5-Shot | UC Merced 1-Shot | UC Merced 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.525 | 0.558 | 0.535 | 0.573 | 0.623 | 0.744 |
| SS | 0.527 | 0.563 | 0.577 | 0.662 | 0.659 | 0.760 |
| FT | 0.526 | 0.566 | 0.568 | 0.654 | 0.657 | 0.760 |
| BD-CSPN | 0.521 | 0.555 | 0.559 | 0.633 | 0.661 | 0.735 |
| LS | 0.513 | 0.540 | 0.520 | 0.543 | 0.652 | 0.722 |
| PT-MAP | 0.525 | 0.553 | 0.522 | 0.547 | 0.756 | 0.829 |
| TIM | 0.529 | 0.569 | 0.584 | 0.686 | 0.664 | 0.772 |
| TFT | 0.525 | 0.559 | 0.534 | 0.574 | 0.623 | 0.744 |
| Approach | WHU-RS19 1-Shot | WHU-RS19 5-Shot | AID 1-Shot | AID 5-Shot | RESISC-45 1-Shot | RESISC-45 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.774 | 0.886 | 0.570 | 0.722 | 0.422 | 0.594 |
| SS | 0.773 | 0.866 | 0.634 | 0.714 | 0.482 | 0.635 |
| FT | 0.775 | 0.870 | 0.634 | 0.775 | 0.481 | 0.638 |
| BD-CSPN | 0.807 | 0.871 | 0.658 | 0.779 | 0.490 | 0.635 |
| LS | 0.811 | 0.879 | 0.656 | 0.772 | 0.463 | 0.600 |
| PT-MAP | 0.749 | 0.833 | 0.481 | 0.631 | 0.391 | 0.515 |
| TIM | 0.782 | 0.874 | 0.640 | 0.779 | 0.487 | 0.639 |
| TFT | 0.774 | 0.886 | 0.602 | 0.762 | 0.422 | 0.594 |
| Approach | EuroSAT 1-Shot | EuroSAT 5-Shot | XAI4SAR 1-Shot | XAI4SAR 5-Shot | UC Merced 1-Shot | UC Merced 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.538 | 0.590 | 0.633 | 0.722 | 0.669 | 0.816 |
| SS | 0.541 | 0.589 | 0.699 | 0.806 | 0.721 | 0.849 |
| FT | 0.540 | 0.595 | 0.680 | 0.805 | 0.721 | 0.852 |
| BD-CSPN | 0.536 | 0.581 | 0.688 | 0.803 | 0.734 | 0.847 |
| LS | 0.529 | 0.576 | 0.639 | 0.719 | 0.723 | 0.828 |
| PT-MAP | 0.552 | 0.606 | 0.620 | 0.707 | 0.777 | 0.854 |
| TIM | 0.545 | 0.595 | 0.719 | 0.815 | 0.728 | 0.858 |
| TFT | 0.538 | 0.590 | 0.632 | 0.724 | 0.669 | 0.816 |
| Approach | WHU-RS19 1-Shot | WHU-RS19 5-Shot | AID 1-Shot | AID 5-Shot | RESISC-45 1-Shot | RESISC-45 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.763 | 0.892 | 0.578 | 0.771 | 0.418 | 0.594 |
| SS | 0.783 | 0.882 | 0.599 | 0.770 | 0.467 | 0.619 |
| FT | 0.786 | 0.889 | 0.597 | 0.765 | 0.469 | 0.624 |
| BD-CSPN | 0.832 | 0.893 | 0.623 | 0.779 | 0.480 | 0.616 |
| LS | 0.834 | 0.896 | 0.640 | 0.784 | 0.463 | 0.592 |
| PT-MAP | 0.746 | 0.813 | 0.482 | 0.633 | 0.425 | 0.554 |
| TIM | 0.794 | 0.889 | 0.607 | 0.775 | 0.473 | 0.624 |
| TFT | 0.763 | 0.892 | 0.577 | 0.771 | 0.417 | 0.594 |
| Approach | EuroSAT 1-Shot | EuroSAT 5-Shot | XAI4SAR 1-Shot | XAI4SAR 5-Shot | UC Merced 1-Shot | UC Merced 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.548 | 0.615 | 0.673 | 0.782 | 0.717 | 0.864 |
| SS | 0.554 | 0.621 | 0.704 | 0.803 | 0.754 | 0.869 |
| FT | 0.554 | 0.623 | 0.702 | 0.806 | 0.753 | 0.869 |
| BD-CSPN | 0.558 | 0.615 | 0.717 | 0.807 | 0.774 | 0.867 |
| LS | 0.551 | 0.603 | 0.694 | 0.777 | 0.774 | 0.869 |
| PT-MAP | 0.560 | 0.627 | 0.647 | 0.739 | 0.670 | 0.816 |
| TIM | 0.558 | 0.627 | 0.714 | 0.810 | 0.759 | 0.874 |
| TFT | 0.548 | 0.615 | 0.673 | 0.783 | 0.717 | 0.864 |
| Approach | WHU-RS19 1-Shot | WHU-RS19 5-Shot | AID 1-Shot | AID 5-Shot | RESISC-45 1-Shot | RESISC-45 5-Shot |
|---|---|---|---|---|---|---|
| PNs | 0.783 | 0.908 | 0.660 | 0.827 | 0.467 | 0.660 |
| SS | 0.831 | 0.916 | 0.703 | 0.837 | 0.503 | 0.676 |
| FT | 0.835 | 0.919 | 0.704 | 0.838 | 0.503 | 0.679 |
| BD-CSPN | 0.868 | 0.919 | 0.750 | 0.847 | 0.526 | 0.674 |
| LS | 0.856 | 0.912 | 0.746 | 0.842 | 0.519 | 0.663 |
| PT-MAP | 0.739 | 0.806 | 0.366 | 0.534 | 0.235 | 0.390 |
| TIM | 0.839 | 0.919 | 0.709 | 0.840 | 0.507 | 0.679 |
| TFT | 0.783 | 0.908 | 0.660 | 0.827 | 0.467 | 0.660 |
| Approach | 1st (1-Shot) | 2nd (1-Shot) | 1st (5-Shot) | 2nd (5-Shot) | 1st (Total) | 2nd (Total) |
|---|---|---|---|---|---|---|
| PNs | 0 | 1 | 1 | 2 | 1 | 3 |
| SS | 1 | 4 | 1 | 6 | 2 | 10 |
| FT | 0 | 2 | 3 | 10 | 3 | 12 |
| BD-CSPN | 9 | 5 | 4 | 3 | 13 | 8 |
| LS | 4 | 4 | 2 | 4 | 6 | 8 |
| PT-MAP | 6 | 1 | 5 | 2 | 11 | 3 |
| TIM | 5 | 10 | 15 | 2 | 20 | 12 |
| TFT | 0 | 1 | 1 | 2 | 1 | 3 |
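The counts above tally how often each approach placed first or second across the evaluated configurations. A minimal sketch of producing such a tally from per-configuration average accuracies (the values below are placeholders, not the paper's numbers, and ties are not handled):

```python
from collections import Counter

# accuracies[(dataset, shot)] maps each approach to its average accuracy.
# Placeholder values for two configurations -- NOT results from the paper.
accuracies = {
    ("EuroSAT", "1-shot"): {"PNs": 0.51, "TIM": 0.53, "BD-CSPN": 0.52},
    ("EuroSAT", "5-shot"): {"PNs": 0.55, "TIM": 0.58, "BD-CSPN": 0.57},
}

first, second = Counter(), Counter()
for config, per_approach in accuracies.items():
    ranked = sorted(per_approach, key=per_approach.get, reverse=True)
    first[ranked[0]] += 1    # best approach for this dataset/shot configuration
    second[ranked[1]] += 1   # runner-up for this configuration

print(first.most_common(), second.most_common())
```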
| Comparison | EuroSAT 1-Shot | EuroSAT 5-Shot | XAI4SAR 1-Shot | XAI4SAR 5-Shot | UC Merced 1-Shot | UC Merced 5-Shot |
|---|---|---|---|---|---|---|
| | 3.549% | 7.013% | −14.729% | −14.226% | 15.630% | 11.075% |
| | 6.733% | 13.200% | 2.938% | 7.554% | 25.561% | 23.168% |
| | 9.480% | 18.582% | 7.187% | 11.384% | 29.820% | 26.347% |
| Comparison | WHU-RS19 1-Shot | WHU-RS19 5-Shot | AID 1-Shot | AID 5-Shot | RESISC-45 1-Shot | RESISC-45 5-Shot |
|---|---|---|---|---|---|---|
| | 16.749% | 11.242% | 45.791% | 41.197% | 18.496% | 26.429% |
| | 17.814% | 12.530% | 40.692% | 43.873% | 17.648% | 25.600% |
| | 22.140% | 15.087% | 58.412% | 51.887% | 21.717% | 32.704% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Santiago Júnior, V.A.d. Empirical Evidence Regarding Few-Shot Learning for Scene Classification in Remote Sensing Images. Appl. Sci. 2024, 14, 10776. https://doi.org/10.3390/app142310776