CESA-MCFormer: An Efficient Transformer Network for Hyperspectral Image Classification by Eliminating Redundant Information
Figure 1. Overall architecture of CESA-MCFormer.
Figure 2. Overall architecture of the EMB Block. The symbol "T" denotes the transpose of a matrix, "·" denotes element-wise multiplication of matrices, and "×" denotes matrix multiplication.
Figure 3. Overall architecture of CESA.
Figure 4. Overall architecture of Soft CESA.
Figure 5. Overall architecture of the Transformer Encoder.
Figure 6. Overall architecture of the Spectral Morph Block and Spatial Morph Block.
Figure 7. Overall architecture of the dilation block.
Figure 8. Visualization results on the IP dataset.
Figure 9. Visualization results on the UP dataset.
Abstract
1. Introduction
- We designed a flexible and efficient Center Enhanced Spatial Attention (CESA) module specifically for hyperspectral image feature extraction. This module can be easily integrated into various models, enhancing focus on areas around the center pixel while considering global spatial information;
- We introduced Morphological Convolution (MC) to replace the traditional linear layer feature extraction mechanism in the transformer encoder. MC selects fine-grained features through a strategy of separating and then integrating spatial and spectral features, significantly reducing the number of parameters and enhancing the model’s robustness;
- Utilizing these modules, we developed the CESA-MCFormer feature extractor, capable of effectively extracting key features from a multitude of channels, supporting various downstream classification tasks. We conducted in-depth ablation experiments to provide practical and theoretical insights for researchers exploring and applying similar modules.
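The Morphological Convolution mentioned above builds on the classical grayscale morphological operators. As a reference point, here is a minimal NumPy sketch of grayscale dilation and erosion with a flat structuring element; the 3 × 3 window and the structuring-element values are illustrative only, not the paper's learned parameters:

```python
import numpy as np

def gray_dilate(x, se):
    """Grayscale dilation: max over each window of (patch + structuring element)."""
    k = se.shape[0]
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    return np.array([[np.max(xp[i:i + k, j:j + k] + se)
                      for j in range(x.shape[1])]
                     for i in range(x.shape[0])])

def gray_erode(x, se):
    """Grayscale erosion: min over each window of (patch - structuring element)."""
    k = se.shape[0]
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    return np.array([[np.min(xp[i:i + k, j:j + k] - se)
                      for j in range(x.shape[1])]
                     for i in range(x.shape[0])])

# A single bright pixel: dilation spreads it, erosion suppresses it.
img = np.array([[0., 0., 0.],
                [0., 1., 0.],
                [0., 0., 0.]])
se = np.zeros((3, 3))  # flat 3x3 structuring element
dilated = gray_dilate(img, se)
eroded = gray_erode(img, se)
```

In a learnable MC layer, the structuring element `se` would be a trainable parameter rather than a fixed array; the max/min selection is what gives morphological operators their robustness to small spatial perturbations.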
2. Methodology
2.1. EMB Block
2.1.1. Hard CESA
2.1.2. Soft CESA
2.2. Transformer Encoder
3. Results and Discussion
3.1. Dataset Description
3.1.1. Semantic Segmentation Task
3.1.2. Few-Shot Learning Task
3.2. Training Details and Evaluation Indicators
3.2.1. Configuration
3.2.2. Training Details
3.2.3. Evaluation Indicators
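The three indicators reported in the result tables (overall accuracy OA, average accuracy AA, and the kappa coefficient κ) can all be derived from a confusion matrix. A minimal sketch, using an illustrative 2-class matrix rather than results from the paper:

```python
import numpy as np

def evaluation_indicators(cm):
    """OA, AA and Cohen's kappa from a confusion matrix (rows = ground truth)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = float(np.trace(cm) / n)                      # overall accuracy
    aa = float(np.mean(np.diag(cm) / cm.sum(axis=1)))  # mean per-class accuracy
    # Chance agreement from the marginals, then Cohen's kappa.
    pe = float((cm.sum(axis=0) * cm.sum(axis=1)).sum()) / n ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Illustrative only: 60 samples of class 1, 40 of class 2.
oa, aa, kappa = evaluation_indicators([[50, 10],
                                       [5, 35]])
```

The tables below report all three as percentages (i.e., multiplied by 100).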
3.3. Semantic Segmentation Task Experimental Results
3.3.1. Classification Results
3.3.2. Ablation Experiment
3.4. Few-Shot Learning Task Experimental Results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
Dataset | Image Size (pixels) | Number of Classes | Number of Bands |
---|---|---|---|
IP | 145 × 145 | 16 | 220 |
UP | 610 × 340 | 9 | 103 |
Chikusei | 2517 × 2335 | 19 | 128 |
Botswana | 1476 × 256 | 14 | 145 |
KSC | 512 × 614 | 13 | 176 |
Salinas Valley | 512 × 217 | 16 | 224 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Alfalfa | 3 | 43 |
2 | Corn-notill | 72 | 1356 |
3 | Corn-mintill | 42 | 788 |
4 | Corn | 12 | 225 |
5 | Grass-pasture | 25 | 458 |
6 | Grass-trees | 37 | 693 |
7 | Grass-pasture-mowed | 2 | 26 |
8 | Hay-windrowed | 24 | 454 |
9 | Oats | 1 | 19 |
10 | Soybean-notill | 49 | 923 |
11 | Soybean-mintill | 123 | 2332 |
12 | Soybean-clean | 30 | 563 |
13 | Wheat | 11 | 194 |
14 | Woods | 64 | 1201 |
15 | Buildings-Grass-Trees-Drives | 20 | 366 |
16 | Stone-Steel-Towers | 5 | 88 |
Total | | 520 | 9729 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Asphalt | 67 | 6564 |
2 | Meadows | 187 | 18,462 |
3 | Gravel | 21 | 2078 |
4 | Trees | 31 | 3033 |
5 | Painted metal sheets | 14 | 1331 |
6 | Bare Soil | 51 | 4978 |
7 | Bitumen | 14 | 1316 |
8 | Self-Blocking Bricks | 37 | 3645 |
9 | Shadows | 10 | 937 |
Total | | 432 | 42,344 |
Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Water | 29 | 2816 |
2 | Bare soil (school) | 29 | 2830 |
3 | Bare soil (park) | 3 | 283 |
4 | Bare soil (farmland) | 49 | 4830 |
5 | Natural plants | 43 | 4254 |
6 | Weeds in farmland | 12 | 1096 |
7 | Forest | 206 | 20,310 |
8 | Grass | 66 | 6449 |
9 | Rice field (grown) | 134 | 13,235 |
10 | Rice field (first stage) | 13 | 1255 |
11 | Row crops | 60 | 5901 |
12 | Plastic house | 22 | 2171 |
13 | Manmade (non-dark) | 13 | 1207 |
14 | Manmade (dark) | 77 | 7587 |
15 | Manmade (blue) | 5 | 426 |
16 | Manmade (red) | 3 | 219 |
17 | Manmade grass | 11 | 1029 |
18 | Asphalt | 9 | 792 |
19 | Paved ground | 2 | 143 |
Total | | 786 | 76,806 |
Metric | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer | CESA-MCFormer * |
---|---|---|---|---|---|---|---|
FLOPs (M) | 44.80 | 75.34 | 24.91 | 29.88 | 41.78 | 32.24 | 27.67 |
Params (M) | 1.12 | 0.61 | 0.11 | 0.51 | 0.25 | 0.36 | 0.25 |
Class No. | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer |
---|---|---|---|---|---|---|
1 | 62.79 | 39.53 | 83.72 | 58.14 | 97.67 | 93.02 |
2 | 73.45 | 74.78 | 64.16 | 91.81 | 92.92 | 96.61 |
3 | 70.05 | 69.29 | 50.25 | 94.54 | 88.07 | 95.81 |
4 | 67.56 | 57.33 | 53.78 | 77.78 | 91.11 | 90.22 |
5 | 76.86 | 46.51 | 57.42 | 79.91 | 89.74 | 95.85 |
6 | 94.95 | 86.29 | 81.53 | 98.70 | 99.42 | 100.00 |
7 | 46.15 | 42.31 | 76.92 | 100.00 | 100.00 | 100.00 |
8 | 96.70 | 95.15 | 92.51 | 100.00 | 99.34 | 99.34 |
9 | 63.16 | 26.32 | 73.68 | 89.47 | 31.58 | 84.21 |
10 | 76.60 | 73.67 | 76.71 | 87.22 | 95.56 | 95.12 |
11 | 81.86 | 84.95 | 83.40 | 97.51 | 97.13 | 97.64 |
12 | 44.76 | 51.33 | 45.83 | 82.77 | 88.28 | 95.38 |
13 | 100.00 | 100.00 | 98.97 | 100.00 | 97.94 | 100.00 |
14 | 98.17 | 96.92 | 99.25 | 98.83 | 99.75 | 99.17 |
15 | 59.84 | 73.50 | 84.43 | 85.79 | 91.53 | 87.98 |
16 | 55.68 | 95.45 | 51.14 | 97.73 | 100.00 | 100.00 |
OA (%) | 79.24 | 78.38 | 75.59 | 93.15 | 94.96 | 96.82 |
AA (%) | 73.04 | 69.58 | 73.36 | 90.01 | 91.25 | 95.65 |
κ (%) | 76.28 | 75.17 | 71.96 | 92.17 | 94.26 | 96.38 |
Class No. | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer |
---|---|---|---|---|---|---|
1 | 89.98 | 91.97 | 83.32 | 94.70 | 98.00 | 98.35 |
2 | 95.54 | 93.90 | 95.38 | 99.70 | 99.69 | 99.92 |
3 | 75.07 | 47.50 | 59.58 | 87.73 | 87.05 | 89.56 |
4 | 94.16 | 92.81 | 89.48 | 96.24 | 96.74 | 95.12 |
5 | 96.39 | 99.85 | 89.41 | 99.92 | 93.09 | 100.00 |
6 | 84.39 | 84.23 | 85.01 | 99.38 | 99.08 | 100.00 |
7 | 82.14 | 62.54 | 63.91 | 90.20 | 97.04 | 98.33 |
8 | 92.76 | 91.39 | 87.46 | 95.80 | 95.69 | 99.18 |
9 | 98.40 | 99.79 | 99.89 | 98.51 | 97.87 | 97.55 |
OA (%) | 91.70 | 89.23 | 88.36 | 97.40 | 97.85 | 98.67 |
AA (%) | 89.87 | 84.89 | 83.72 | 95.80 | 96.27 | 97.56 |
κ (%) | 89.00 | 85.75 | 84.59 | 96.56 | 97.15 | 98.24 |
Class No. | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer |
---|---|---|---|---|---|---|
1 | 97.41 | 94.74 | 91.58 | 99.61 | 99.96 | 99.15 |
2 | 95.23 | 94.03 | 99.01 | 95.30 | 99.82 | 99.72 |
3 | 0.00 | 18.37 | 23.67 | 46.64 | 22.61 | 85.16 |
4 | 99.79 | 97.67 | 98.44 | 97.48 | 98.69 | 99.88 |
5 | 99.79 | 96.94 | 97.32 | 99.98 | 100.00 | 99.98 |
6 | 97.90 | 97.26 | 93.89 | 91.88 | 98.54 | 91.79 |
7 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
8 | 99.74 | 98.71 | 99.18 | 100.00 | 100.00 | 99.94 |
9 | 99.81 | 99.89 | 99.99 | 100.00 | 99.77 | 100.00 |
10 | 97.37 | 99.20 | 95.30 | 100.00 | 100.00 | 100.00 |
11 | 100.00 | 99.75 | 100.00 | 99.83 | 99.76 | 100.00 |
12 | 96.96 | 93.74 | 98.11 | 96.32 | 97.14 | 96.78 |
13 | 95.53 | 95.53 | 94.86 | 95.53 | 95.53 | 95.53 |
14 | 99.79 | 98.87 | 99.00 | 99.97 | 99.91 | 100.00 |
15 | 99.06 | 93.19 | 95.54 | 100.00 | 99.30 | 99.53 |
16 | 100.00 | 98.17 | 99.54 | 93.61 | 100.00 | 100.00 |
17 | 99.03 | 86.10 | 97.57 | 100.00 | 100.00 | 100.00 |
18 | 79.80 | 91.41 | 88.76 | 90.03 | 97.85 | 99.37 |
19 | 16.78 | 31.47 | 42.66 | 88.11 | 88.11 | 95.10 |
OA (%) | 98.65 | 97.97 | 98.38 | 99.01 | 99.34 | 99.59 |
AA (%) | 88.10 | 88.69 | 90.23 | 94.44 | 94.58 | 98.00 |
κ (%) | 98.44 | 97.65 | 98.13 | 98.85 | 99.24 | 99.53 |
Class No. | HybridSN | SF | SSFTT | CESA-MCFormer | CESA-MCFormer * |
---|---|---|---|---|---|
1 | 72.09 | 51.16 | 100.00 | 76.74 | 93.02 |
2 | 92.99 | 78.17 | 94.91 | 96.02 | 96.61 |
3 | 98.98 | 73.86 | 98.35 | 95.18 | 95.81 |
4 | 94.67 | 61.78 | 100.00 | 88.89 | 90.22 |
5 | 90.83 | 84.50 | 98.69 | 90.17 | 95.85 |
6 | 99.86 | 93.94 | 99.13 | 99.42 | 100.00 |
7 | 57.69 | 11.54 | 73.08 | 100.00 | 100.00 |
8 | 94.49 | 100.00 | 99.56 | 100.00 | 99.34 |
9 | 68.42 | 10.53 | 52.63 | 100.00 | 84.21 |
10 | 91.12 | 85.59 | 97.29 | 94.58 | 95.12 |
11 | 96.57 | 87.78 | 97.60 | 98.37 | 97.64 |
12 | 86.50 | 69.45 | 84.37 | 88.28 | 95.38 |
13 | 97.94 | 98.45 | 100.00 | 100.00 | 100.00 |
14 | 98.92 | 90.67 | 99.67 | 99.92 | 99.17 |
15 | 96.45 | 62.02 | 93.44 | 90.16 | 87.98 |
16 | 48.86 | 81.82 | 95.45 | 100.00 | 100.00 |
OA (%) | 94.60 | 83.33 | 96.78 | 96.23 | 96.82 |
AA (%) | 86.65 | 71.33 | 92.76 | 94.86 | 95.65 |
κ (%) | 93.84 | 80.97 | 96.33 | 95.70 | 96.38 |
Class No. | HybridSN | SF | SSFTT | CESA-MCFormer | CESA-MCFormer * |
---|---|---|---|---|---|
1 | 98.86 | 88.82 | 98.28 | 98.49 | 98.35 |
2 | 99.96 | 96.59 | 99.97 | 99.87 | 99.92 |
3 | 85.51 | 71.80 | 90.38 | 91.77 | 89.56 |
4 | 92.91 | 91.16 | 97.36 | 95.19 | 95.12 |
5 | 100.00 | 95.87 | 100.00 | 100.00 | 100.00 |
6 | 99.76 | 79.23 | 98.49 | 99.16 | 100.00 |
7 | 97.04 | 75.08 | 98.94 | 98.02 | 98.33 |
8 | 92.76 | 84.94 | 91.44 | 98.30 | 99.18 |
9 | 93.17 | 92.32 | 93.06 | 95.73 | 97.55 |
OA (%) | 97.69 | 89.95 | 97.96 | 98.56 | 98.67 |
AA (%) | 95.55 | 86.20 | 96.44 | 97.39 | 97.56 |
κ (%) | 96.93 | 86.58 | 97.29 | 98.09 | 98.24 |
CESA | MC+ | OA (%) | AA (%) | κ (%) |
---|---|---|---|---|
– | – | 94.96 | 91.25 | 94.26 |
√ | – | 95.86 | 93.06 | 95.27 |
– | √ | 95.80 | 91.50 | 95.21 |
√ | √ | 96.82 | 95.65 | 96.38 |
Attention | OA (%) | AA (%) | κ (%) |
---|---|---|---|
SA | 95.79 | 91.76 | 95.19 |
CAM | 95.94 | 93.17 | 95.37 |
CESA | 96.82 | 95.65 | 96.38 |
Metric | HybridSN | SSFTT | morphFormer | CESA-MCFormer |
---|---|---|---|---|
OA (%) | 68.90 | 69.56 | 71.26 | 73.29 |
AA (%) | 81.10 | 81.87 | 82.97 | 84.61 |
κ (%) | 65.17 | 65.85 | 67.82 | 70.05 |
Metric | HybridSN | SSFTT | morphFormer | CESA-MCFormer |
---|---|---|---|---|
OA (%) | 70.81 | 74.43 | 74.03 | 77.54 |
AA (%) | 77.90 | 79.39 | 78.90 | 83.55 |
κ (%) | 63.44 | 67.17 | 66.71 | 71.46 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, S.; Yin, C.; Zhang, H. CESA-MCFormer: An Efficient Transformer Network for Hyperspectral Image Classification by Eliminating Redundant Information. Sensors 2024, 24, 1187. https://doi.org/10.3390/s24041187