MEA-EFFormer: Multiscale Efficient Attention with Enhanced Feature Transformer for Hyperspectral Image Classification
Figure 1. The architecture of the proposed MEA-EFFormer network. The network comprises three stages: data preprocessing, feature extraction and processing, and the transformer encoder. The data preprocessing stage applies principal component analysis (PCA) to extract the main bands from the raw HSI and performs local binary pattern (LBP) extraction. The feature extraction and processing stage consists mainly of a multiscale efficient attention feature extraction module and a spectral–spatial enhancement attention module. Finally, the refined features are fed into the transformer encoder for classification.
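As a concrete illustration of the PCA preprocessing step, the sketch below reduces a toy hyperspectral cube to a fixed number of principal-component bands. The function name, band counts and shapes are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def pca_reduce(cube, n_components=30):
    """Reduce the spectral dimension of an HSI cube (H, W, B) to
    n_components principal-component bands. Pure-numpy sketch of the
    PCA band-extraction step."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    X -= X.mean(axis=0)                        # center each band
    cov = X.T @ X / (X.shape[0] - 1)           # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]   # strongest components first
    return (X @ top).reshape(H, W, n_components)

cube = np.random.rand(10, 10, 200)             # toy 200-band image
reduced = pca_reduce(cube, n_components=30)
print(reduced.shape)                           # (10, 10, 30)
```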
Figure 2. Illustration of ECA. It applies global average pooling and a one-dimensional convolution with an adaptively sized kernel to compute a weight for each band, followed by an activation function that maps these values to attention weights.
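The ECA computation described in the caption can be sketched in a few lines. The uniform kernel stands in for the learned one-dimensional convolution weights, and the adaptive kernel-size rule follows the common ECA-Net formulation, which this paper is assumed to adopt.

```python
import numpy as np

def eca(x, gamma=2, b=1):
    """Efficient channel attention on a (C, H, W) feature map:
    global average pooling, a 1-D convolution whose kernel size
    adapts to the channel count, then sigmoid gating. Numpy sketch."""
    C = x.shape[0]
    # adaptive kernel size: nearest odd integer to (log2(C) + b) / gamma
    t = int(abs((np.log2(C) + b) / gamma))
    k = t if t % 2 else t + 1
    y = x.mean(axis=(1, 2))                    # global average pooling -> (C,)
    w = np.ones(k) / k                         # stand-in for learned conv weights
    pad = k // 2
    y = np.convolve(np.pad(y, pad, mode="edge"), w, mode="valid")[:C]
    a = 1.0 / (1.0 + np.exp(-y))               # sigmoid attention weights
    return x * a[:, None, None]                # reweight each band
```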
Figure 3. Illustration of SSEA. It consists of three branches that establish dependencies between the spectral dimension C and the spatial dimensions H and W by means of rotation operations, computing attention weights along each of the three directions.
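A heavily simplified reading of the three-branch idea: each branch gates the feature tensor with a sigmoid of a map pooled along one pair of axes (H-W, C-W, C-H), and the branch outputs are averaged. This is an interpretation for illustration only, not the module's actual parameterization.

```python
import numpy as np

def _gate(pooled):
    # sigmoid over a pooled 2-D map -> attention weights in (0, 1)
    return 1.0 / (1.0 + np.exp(-pooled))

def ssea(x):
    """Three-branch spectral-spatial enhancement attention on a
    (C, H, W) tensor: one branch per axis pair, outputs averaged.
    Illustrative numpy sketch, not the trained module."""
    hw = x * _gate(x.mean(axis=0))[None, :, :]   # H-W branch: pool over channels
    cw = x * _gate(x.mean(axis=1))[:, None, :]   # C-W branch: pool over height
    ch = x * _gate(x.mean(axis=2))[:, :, None]   # C-H branch: pool over width
    return (hw + cw + ch) / 3.0
```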
Figure 4. Graphical representation of the transformer encoder: (a) the general structure of the encoder blocks; (b) the multi-head self-attention mechanism.
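The multi-head self-attention block of panel (b) can be sketched as follows, with per-head slices of the input standing in for the learned Q/K/V and output projections:

```python
import numpy as np

def multi_head_self_attention(x, num_heads=4):
    """Scaled dot-product multi-head self-attention over a token
    sequence x of shape (N, D). Identity projections stand in for the
    learned weights; sketch of the encoder's attention block."""
    N, D = x.shape
    assert D % num_heads == 0
    d = D // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * d:(h + 1) * d]        # per-head slice
        scores = q @ k.T / np.sqrt(d)              # (N, N) similarity
        scores -= scores.max(axis=-1, keepdims=True)
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1)          # (N, D)
```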
Figure 5. Indian Pines dataset: (a) false-color map; (b) ground-truth map.
Figure 6. Salinas dataset: (a) false-color map; (b) ground-truth map.
Figure 7. Pavia University dataset: (a) false-color map; (b) ground-truth map.
Figure 8. Effect of patch size on OA, AA and the kappa coefficient: (a) Indian Pines (IP); (b) Salinas (SA); (c) Pavia University (PU).
Figure 9. Effect of reducing spectral dimensionality on OA, AA and the kappa coefficient: (a) Indian Pines (IP); (b) Salinas (SA); (c) Pavia University (PU).
Figure 10. Effect of learning rate on OA, AA and the kappa coefficient: (a) Indian Pines (IP); (b) Salinas (SA); (c) Pavia University (PU).
Figure 11. Effect of the number of attention heads on OA, AA and the kappa coefficient: (a) Indian Pines (IP); (b) Salinas (SA); (c) Pavia University (PU).
Figure 12. Classification maps for the Indian Pines dataset obtained by different methods: (a) ground truth; (b) RF; (c) SVM; (d) 1D-CNN; (e) 2D-CNN; (f) 3D-CNN; (g) HybridSN; (h) GAHT; (i) SpectralFormer; (j) SSFTT; (k) GSC-ViT; (l) MEA-EFFormer.
Figure 13. Classification maps for the Salinas dataset obtained by different methods: (a) ground truth; (b) RF; (c) SVM; (d) 1D-CNN; (e) 2D-CNN; (f) 3D-CNN; (g) HybridSN; (h) GAHT; (i) SpectralFormer; (j) SSFTT; (k) GSC-ViT; (l) MEA-EFFormer.
Figure 14. Classification maps for the Pavia University dataset obtained by different methods: (a) ground truth; (b) RF; (c) SVM; (d) 1D-CNN; (e) 2D-CNN; (f) 3D-CNN; (g) HybridSN; (h) GAHT; (i) SpectralFormer; (j) SSFTT; (k) GSC-ViT; (l) MEA-EFFormer.
Abstract
1. Introduction
- (1) MEA-EFFormer incorporates a multiscale efficient attention feature extraction module that combines an efficient channel attention mechanism with multiscale convolution. It facilitates the mining of detail in spectral–spatial information and mitigates the loss of fine-grained features caused by single-scale sampling.
- (2) MEA-EFFormer employs an SSEA module that captures the dependencies between spectral–spatial LBP information along three directions (C-H, C-W and H-W), refines the scale of the features and improves the perception of the attention mechanism.
- (3) MEA-EFFormer outperforms several classical and state-of-the-art (SOTA) methods: experiments on three well-known datasets demonstrate its excellent classification performance.
2. Materials and Methods
2.1. Spectral–Spatial Multi-Feature Convolution Extraction
2.1.1. Multiscale Efficient Attention Feature Extraction Module
2.1.2. LBP Convolution Feature Processing
2.2. Spectral–Spatial Enhancement Attention Module
2.3. Transformer Encoder Module
2.4. Algorithm Summarization for MEA-EFFormer
Algorithm 1: MEA-EFFormer network.
Require:
Ensure: Predicted classification labels for the test dataset.
3. Experiment and Analysis
3.1. Data Description
3.1.1. Indian Pines
3.1.2. Salinas
3.1.3. Pavia University
3.2. Experimental Setting
3.2.1. Evaluation Criteria
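The three indicators used throughout the experiments, overall accuracy (OA), average accuracy (AA) and the kappa coefficient, can be computed from a confusion matrix as follows. The helper name is illustrative, not from the paper's code.

```python
import numpy as np

def classification_scores(y_true, y_pred, num_classes):
    """OA, AA (mean per-class accuracy) and Cohen's kappa from
    predicted labels. Plain-numpy sketch."""
    cm = np.zeros((num_classes, num_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean of per-class recalls
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

oa, aa, k = classification_scores([0, 0, 1, 1], [0, 0, 1, 0], 2)
print(round(oa, 2), round(aa, 2), round(k, 2))     # 0.75 0.75 0.5
```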
3.2.2. Environment Configuration
3.2.3. Parameter Setting Adjustment
3.3. Ablation Study
3.4. Classification Results
3.5. Visual Evaluation
3.6. Model Complexity and Efficiency Analysis
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
ID | Land Cover Class (Indian Pines) | Training | Test | Land Cover Class (Salinas) | Training | Test | Land Cover Class (Pavia University) | Training | Test |
---|---|---|---|---|---|---|---|---|---|
C01 | Alfalfa | 3 | 43 | Brocoli_green_weeds_1 | 11 | 1998 | Asphalt | 67 | 6564 |
C02 | Corn-notill | 72 | 1356 | Brocoli_green_weeds_2 | 19 | 3707 | Meadows | 187 | 18,462 |
C03 | Corn-mintill | 42 | 788 | Fallow | 10 | 1966 | Gravel | 21 | 2078 |
C04 | Corn | 12 | 225 | Fallow_rough_plow | 7 | 1387 | Trees | 31 | 3033 |
C05 | Grass-pasture | 25 | 458 | Fallow_smooth | 14 | 2664 | Painted metal sheets | 14 | 1331 |
C06 | Grass-tree | 37 | 693 | Stubble | 20 | 3939 | Bare Soil | 51 | 4978 |
C07 | Grass-pasture-mowed | 2 | 26 | Celery | 18 | 3561 | Bitumen | 14 | 1316 |
C08 | Hay-windrowed | 24 | 454 | Grapes_untrained | 57 | 11,214 | Self-Blocking Bricks | 37 | 3645 |
C09 | Oats | 1 | 19 | Soil_vinyard_develop | 32 | 6171 | Shadows | 10 | 937 |
C10 | Soybean-notill | 49 | 923 | Corn_senesced_green_weeds | 17 | 3261 | |||
C11 | Soybean-mintill | 123 | 2332 | Lettuce_romaine_4wk | 6 | 1062 | |||
C12 | Soybean-clean | 30 | 563 | Lettuce_romaine_5wk | 10 | 1917 | |||
C13 | Wheat | 11 | 194 | Lettuce_romaine_6wk | 5 | 911 | |||
C14 | Woods | 64 | 1201 | Lettuce_romaine_7wk | 6 | 1064 | |||
C15 | Buildings-Grass-Trees | 20 | 366 | Vinyard_untrained | 37 | 7231 | |||
C16 | Stone-Steel-Towers | 5 | 88 | Vinyard_vertical_trellis | 10 | 1797 | |||
Total | 513 | 9736 | Total | 271 | 53,858 | Total | 428 | 42,348 |
Cases | PCA | MS | ECA | LBP | SSEA | TE | OA (%) | AA (%) | k × 100 |
---|---|---|---|---|---|---|---|---|---|
1 | × | ✓ | ✓ | ✓ | ✓ | ✓ | 97.27 | 94.19 | 96.88 |
2 | ✓ | × | ✓ | ✓ | ✓ | ✓ | 96.92 | 93.07 | 96.48 |
3 | ✓ | ✓ | × | ✓ | ✓ | ✓ | 97.03 | 92.66 | 96.61 |
4 | ✓ | ✓ | ✓ | × | ✓ | ✓ | 97.18 | 92.04 | 96.78 |
5 | ✓ | ✓ | ✓ | ✓ | × | ✓ | 97.14 | 93.61 | 96.74 |
6 | ✓ | ✓ | ✓ | ✓ | ✓ | × | 95.31 | 90.97 | 92.24 |
7 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 97.44 | 94.69 | 97.07 |
Cases | Spe-C Spa-W | Spe-C Spa-H | Spa-W Spa-H | OA (%) | AA (%) | k × 100 |
---|---|---|---|---|---|---|
1 | × | ✓ | ✓ | 97.11 | 94.43 | 96.76 |
2 | ✓ | × | ✓ | 97.03 | 92.66 | 96.61 |
3 | ✓ | ✓ | × | 97.18 | 92.04 | 96.78 |
4 | ✓ | ✓ | ✓ | 97.44 | 94.69 | 97.07 |
No. | RF | SVM | 1D-CNN | 2D-CNN | 3D-CNN | HybridSN | GAHT | SpectralFormer | SSFTT | GSC-ViT | MEA-EFFormer |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1.95 ± 1.83 | 3.41 ± 2.49 | 22.09 ± 6.6 | 30.0 ± 7.39 | 43.72 ± 11.76 | 32.79 ± 4.7 | 62.56 ± 11.32 | 27.67 ± 6.93 | 72.56 ± 17.18 | 40.7 ± 4.91 | 87.67 ± 7.79 |
2 | 57.73 ± 18.2 | 48.14 ± 4.75 | 90.16 ± 1.16 | 90.16 ± 2.06 | 92.19 ± 1.48 | 89.93 ± 2.52 | 94.26 ± 1.38 | 80.06 ± 1.83 | 94.67 ± 0.69 | 95.76 ± 1.12 | 95.19 ± 0.54 |
3 | 34.24 ± 10.38 | 37.99 ± 3.2 | 94.94 ± 2.19 | 94.65 ± 2.05 | 96.12 ± 1.51 | 91.04 ± 2.45 | 95.73 ± 2.88 | 91.65 ± 2.61 | 96.5 ± 1.14 | 98.64 ± 0.98 | 98.81 ± 0.69 |
4 | 4.98 ± 1.5 | 11.36 ± 1.93 | 82.41 ± 7.27 | 82.63 ± 5.2 | 78.66 ± 4.03 | 77.5 ± 6.6 | 90.94 ± 2.82 | 68.3 ± 4.45 | 92.9 ± 2.7 | 94.91 ± 3.04 | 94.73 ± 2.15 |
5 | 64.46 ± 5.99 | 61.2 ± 2.17 | 98.87 ± 1.45 | 98.67 ± 1.51 | 99.54 ± 0.6 | 98.85 ± 0.7 | 97.73 ± 3.26 | 97.45 ± 0.8 | 99.93 ± 0.1 | 98.82 ± 0.84 | 99.98 ± 0.07 |
6 | 55.76 ± 17.52 | 61.62 ± 12.91 | 98.67 ± 0.7 | 98.99 ± 0.42 | 99.74 ± 0.17 | 99.06 ± 0.68 | 97.75 ± 1.22 | 97.12 ± 0.56 | 98.87 ± 0.53 | 98.29 ± 0.58 | 99.22 ± 0.48 |
7 | 4.0 ± 5.06 | 1.6 ± 1.96 | 54.44 ± 23.49 | 59.26 ± 22.28 | 98.52 ± 1.81 | 55.19 ± 15.93 | 61.48 ± 31.91 | 12.59 ± 7.26 | 94.44 ± 9.83 | 88.89 ± 10.61 | 97.04 ± 6.79 |
8 | 54.98 ± 21.31 | 57.81 ± 12.68 | 99.96 ± 0.09 | 99.87 ± 0.18 | 100.0 ± 0.0 | 99.85 ± 0.4 | 99.4 ± 0.49 | 98.15 ± 2.3 | 99.6 ± 0.64 | 99.98 ± 0.07 | 99.45 ± 0.51 |
9 | 2.22 ± 4.44 | 1.11 ± 2.22 | 63.16 ± 12.23 | 61.58 ± 11.05 | 72.63 ± 16.94 | 64.74 ± 16.48 | 86.32 ± 9.76 | 26.32 ± 7.06 | 79.47 ± 10.38 | 80.0 ± 10.99 | 80.53 ± 4.74 |
10 | 50.79 ± 19.74 | 38.4 ± 5.3 | 94.45 ± 1.7 | 93.7 ± 1.77 | 91.84 ± 2.58 | 94.21 ± 1.77 | 92.18 ± 9.13 | 86.04 ± 1.25 | 96.46 ± 1.04 | 97.38 ± 1.12 | 97.8 ± 0.52 |
11 | 75.19 ± 14.49 | 68.72 ± 3.11 | 95.94 ± 1.88 | 95.72 ± 0.88 | 95.02 ± 1.65 | 95.46 ± 0.88 | 97.39 ± 0.59 | 91.89 ± 1.24 | 98.76 ± 0.29 | 98.06 ± 0.42 | 98.73 ± 0.32 |
12 | 23.11 ± 8.69 | 19.74 ± 3.6 | 83.57 ± 6.9 | 82.2 ± 4.48 | 81.76 ± 3.29 | 79.86 ± 4.15 | 90.53 ± 2.67 | 66.71 ± 5.55 | 91.47 ± 1.31 | 92.86 ± 1.23 | 90.39 ± 1.77 |
13 | 18.37 ± 7.58 | 45.0 ± 15.07 | 99.38 ± 0.5 | 99.38 ± 0.79 | 98.67 ± 0.7 | 98.56 ± 0.99 | 88.41 ± 8.34 | 98.21 ± 1.88 | 100.0 ± 0.0 | 97.18 ± 1.38 | 100.0 ± 0.0 |
14 | 84.84 ± 10.0 | 75.72 ± 6.73 | 99.56 ± 0.31 | 99.7 ± 0.28 | 98.5 ± 1.53 | 96.57 ± 1.7 | 98.8 ± 0.49 | 98.95 ± 0.65 | 99.17 ± 0.31 | 99.85 ± 0.10 | 99.26 ± 0.32 |
15 | 13.93 ± 4.63 | 27.75 ± 4.64 | 85.48 ± 5.82 | 83.95 ± 5.92 | 86.57 ± 2.96 | 81.61 ± 3.24 | 91.66 ± 3.52 | 90.0 ± 3.15 | 94.69 ± 3.8 | 90.84 ± 3.31 | 95.86 ± 1.65 |
16 | 1.69 ± 1.8 | 10.12 ± 3.01 | 92.07 ± 6.31 | 92.76 ± 5.95 | 88.28 ± 6.68 | 85.98 ± 11.75 | 63.91 ± 8.97 | 56.21 ± 5.33 | 85.29 ± 4.23 | 87.13 ± 3.55 | 80.46 ± 6.93 |
OA (%) | 56.08 ± 8.24 | 52.67 ± 1.96 | 93.98 ± 0.84 | 93.78 ± 0.54 | 93.91 ± 0.68 | 92.66 ± 0.73 | 95.12 ± 1.28 | 88.57 ± 0.62 | 96.97 ± 0.28 | 97.01 ± 0.30 | 97.44 ± 0.17 |
AA (%) | 34.26 ± 4.64 | 35.61 ± 2.24 | 84.70 ± 2.45 | 85.20 ± 2.19 | 88.86 ± 1.79 | 83.82 ± 1.77 | 88.07 ± 2.42 | 74.21 ± 0.89 | 93.43 ± 1.74 | 91.21 ± 1.50 | 94.69 ± 0.57 |
k × 100 | 49.48 ± 9.09 | 46.02 ± 2.26 | 93.13 ± 0.96 | 92.90 ± 0.62 | 93.05 ± 0.77 | 91.62 ± 0.84 | 94.42 ± 1.48 | 86.92 ± 0.70 | 96.54 ± 0.32 | 96.59 ± 0.34 | 97.07 ± 0.19 |
No. | RF | SVM | 1D-CNN | 2D-CNN | 3D-CNN | HybridSN | GAHT | SpectralFormer | SSFTT | GSC-ViT | MEA-EFFormer |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 23.82 ± 18.27 | 56.82 ± 28.67 | 99.71 ± 0.37 | 99.88 ± 0.17 | 99.99 ± 0.02 | 99.78 ± 0.3 | 99.93 ± 0.16 | 99.38 ± 0.3 | 99.98 ± 0.03 | 99.32 ± 0.5 | 99.98 ± 0.03 |
2 | 50.95 ± 42.42 | 75.92 ± 19.78 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.85 ± 0.42 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 |
3 | 14.88 ± 22.89 | 57.54 ± 5.5 | 99.99 ± 0.02 | 100.0 ± 0.0 | 99.99 ± 0.02 | 99.47 ± 0.45 | 100.0 ± 0.0 | 99.98 ± 0.05 | 99.98 ± 0.04 | 99.83 ± 0.21 | 100.0 ± 0.0 |
4 | 4.18 ± 6.32 | 34.52 ± 13.48 | 96.28 ± 1.44 | 94.26 ± 2.68 | 93.82 ± 2.74 | 94.86 ± 2.82 | 98.48 ± 1.53 | 94.58 ± 1.49 | 98.33 ± 0.99 | 92.54 ± 4.17 | 97.62 ± 0.99 |
5 | 6.73 ± 8.32 | 59.23 ± 7.03 | 99.95 ± 0.11 | 99.86 ± 0.21 | 99.21 ± 0.64 | 99.21 ± 0.47 | 99.65 ± 0.24 | 98.86 ± 0.47 | 99.33 ± 1.25 | 97.36 ± 0.76 | 99.92 ± 0.14 |
6 | 68.04 ± 34.3 | 74.01 ± 5.14 | 100.0 ± 0.0 | 99.76 ± 0.7 | 99.9 ± 0.24 | 99.94 ± 0.11 | 97.67 ± 0.71 | 99.95 ± 0.09 | 99.85 ± 0.14 | 99.87 ± 0.16 | 99.96 ± 0.07 |
7 | 91.08 ± 2.45 | 79.04 ± 13.67 | 99.97 ± 0.06 | 99.95 ± 0.06 | 99.92 ± 0.14 | 99.58 ± 0.37 | 99.87 ± 0.11 | 99.91 ± 0.09 | 99.16 ± 1.01 | 99.89 ± 0.12 | 99.1 ± 0.62 |
8 | 79.07 ± 16.99 | 67.52 ± 8.16 | 88.55 ± 1.04 | 88.02 ± 1.47 | 90.42 ± 1.79 | 86.82 ± 3.63 | 94.65 ± 0.8 | 84.85 ± 0.69 | 93.46 ± 0.94 | 89.99 ± 1.47 | 95.09 ± 0.94 |
9 | 77.78 ± 38.86 | 64.66 ± 15.31 | 100.0 ± 0.0 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.83 ± 0.14 | 100.0 ± 0.0 | 100.0 ± 0.0 | 99.99 ± 0.02 | 99.92 ± 0.05 | 100.0 ± 0.0 |
10 | 33.34 ± 36.65 | 46.85 ± 2.77 | 98.25 ± 0.74 | 97.74 ± 0.92 | 98.79 ± 0.39 | 97.88 ± 1.78 | 99.94 ± 0.07 | 98.23 ± 0.48 | 98.73 ± 0.49 | 98.03 ± 0.46 | 98.36 ± 0.86 |
11 | 18.51 ± 33.65 | 10.01 ± 11.83 | 99.38 ± 0.47 | 99.13 ± 1.21 | 99.33 ± 0.64 | 98.43 ± 1.67 | 97.61 ± 1.17 | 98.51 ± 0.81 | 99.74 ± 0.26 | 97.06 ± 1.91 | 99.75 ± 0.19 |
12 | 5.41 ± 8.79 | 39.61 ± 13.54 | 99.81 ± 0.25 | 99.84 ± 0.21 | 99.61 ± 0.37 | 98.73 ± 1.07 | 99.66 ± 0.3 | 99.81 ± 0.32 | 99.48 ± 0.37 | 95.61 ± 4.25 | 99.39 ± 0.37 |
13 | 17.27 ± 34.49 | 18.33 ± 12.4 | 94.34 ± 10.6 | 95.63 ± 5.78 | 94.96 ± 4.1 | 92.13 ± 6.54 | 95.15 ± 2.85 | 97.14 ± 1.94 | 95.42 ± 3.07 | 74.4 ± 14.45 | 94.25 ± 2.64 |
14 | 15.62 ± 31.09 | 31.65 ± 13.26 | 97.82 ± 2.18 | 97.81 ± 2.28 | 97.06 ± 2.13 | 96.57 ± 4.11 | 86.67 ± 1.14 | 96.07 ± 1.65 | 95.86 ± 1.9 | 93.03 ± 3.55 | 95.39 ± 2.28 |
15 | 27.18 ± 19.38 | 56.11 ± 6.93 | 87.09 ± 1.01 | 87.33 ± 1.72 | 85.67 ± 4.22 | 86.27 ± 4.79 | 92.68 ± 1.16 | 85.17 ± 1.46 | 89.49 ± 1.18 | 82.6 ± 2.82 | 91.79 ± 1.15 |
16 | 21.14 ± 27.89 | 49.0 ± 13.96 | 99.28 ± 0.16 | 99.16 ± 0.11 | 99.09 ± 0.14 | 98.6 ± 1.2 | 99.99 ± 0.02 | 99.94 ± 0.05 | 99.49 ± 0.17 | 96.89 ± 1.03 | 99.72 ± 0.19 |
OA (%) | 49.29 ± 9.65 | 59.91 ± 3.89 | 95.48 ± 0.33 | 95.32 ± 0.42 | 95.60 ± 0.42 | 94.70 ± 0.61 | 97.26 ± 0.21 | 94.35 ± 0.19 | 96.81 ± 0.22 | 94.19 ± 0.55 | 97.42 ± 0.27 |
AA (%) | 34.69 ± 14.32 | 51.30 ± 4.10 | 97.53 ± 0.67 | 97.40 ± 0.44 | 97.36 ± 0.41 | 96.75 ± 0.53 | 97.62 ± 0.25 | 97.02 ± 0.17 | 98.02 ± 0.23 | 94.77 ± 1.12 | 98.15 ± 0.23 |
k × 100 | 41.70 ± 10.89 | 55.34 ± 4.33 | 94.97 ± 0.37 | 94.79 ± 0.46 | 95.10 ± 0.48 | 94.10 ± 0.67 | 96.95 ± 0.24 | 93.72 ± 0.21 | 96.44 ± 0.24 | 93.53 ± 0.62 | 97.13 ± 0.30 |
No. | RF | SVM | 1D-CNN | 2D-CNN | 3D-CNN | HybridSN | GAHT | SpectralFormer | SSFTT | GSC-ViT | MEA-EFFormer |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 63.15 ± 27.63 | 40.51 ± 12.69 | 95.42 ± 2.52 | 96.09 ± 1.17 | 95.43 ± 1.24 | 94.54 ± 2.07 | 96.68 ± 2.36 | 93.78 ± 0.84 | 97.05 ± 1.21 | 89.17 ± 1.62 | 97.37 ± 0.69 |
2 | 93.89 ± 9.44 | 78.54 ± 4.51 | 99.41 ± 0.47 | 99.74 ± 0.15 | 99.52 ± 0.74 | 99.36 ± 0.47 | 99.82 ± 0.11 | 99.77 ± 0.16 | 99.89 ± 0.05 | 99.78 ± 0.18 | 99.94 ± 0.03 |
3 | 4.08 ± 2.83 | 21.1 ± 6.26 | 77.62 ± 6.96 | 82.52 ± 5.78 | 73.12 ± 4.41 | 76.41 ± 4.93 | 87.29 ± 4.38 | 72.25 ± 3.19 | 85.35 ± 6.65 | 76.03 ± 4.42 | 88.67 ± 2.07 |
4 | 19.31 ± 10.7 | 39.51 ± 4.94 | 91.32 ± 2.74 | 90.35 ± 1.98 | 87.72 ± 3.04 | 86.44 ± 3.64 | 82.98 ± 3.72 | 84.94 ± 2.36 | 93.65 ± 1.04 | 88.48 ± 1.51 | 93.33 ± 1.22 |
5 | 28.12 ± 26.35 | 48.56 ± 17.35 | 99.76 ± 0.33 | 99.68 ± 0.37 | 99.85 ± 0.15 | 99.81 ± 0.26 | 99.5 ± 0.35 | 98.64 ± 0.44 | 99.55 ± 0.42 | 99.89 ± 0.15 | 99.13 ± 0.48 |
6 | 28.18 ± 25.37 | 31.82 ± 4.73 | 93.84 ± 3.22 | 94.54 ± 1.17 | 97.19 ± 1.71 | 96.18 ± 1.86 | 98.84 ± 1.19 | 91.58 ± 1.54 | 99.5 ± 0.39 | 91.13 ± 2.68 | 99.98 ± 0.04 |
7 | 25.13 ± 19.44 | 15.78 ± 9.6 | 93.15 ± 5.25 | 93.12 ± 3.74 | 96.26 ± 3.08 | 94.31 ± 4.03 | 99.02 ± 1.15 | 93.55 ± 2.99 | 99.86 ± 0.26 | 88.56 ± 4.77 | 99.96 ± 0.09 |
8 | 19.26 ± 11.6 | 25.26 ± 1.97 | 85.58 ± 4.57 | 82.65 ± 8.29 | 89.18 ± 3.03 | 86.27 ± 4.78 | 95.35 ± 2.63 | 81.06 ± 1.04 | 92.3 ± 2.77 | 78.65 ± 2.58 | 92.85 ± 1.1 |
9 | 10.75 ± 12.87 | 27.65 ± 26.7 | 89.36 ± 6.71 | 88.77 ± 4.79 | 92.34 ± 6.66 | 89.1 ± 4.32 | 73.62 ± 4.18 | 71.51 ± 4.91 | 94.4 ± 2.52 | 92.78 ± 3.33 | 92.4 ± 3.08 |
OA (%) | 59.18 ± 8.44 | 52.93 ± 1.59 | 94.89 ± 0.72 | 95.12 ± 0.55 | 95.33 ± 0.50 | 94.69 ± 0.57 | 96.40 ± 0.63 | 93.00 ± 0.33 | 97.46 ± 0.35 | 92.82 ± 0.34 | 97.72 ± 0.24 |
AA (%) | 32.43 ± 9.57 | 36.53 ± 2.81 | 91.72 ± 1.62 | 91.94 ± 1.05 | 92.29 ± 1.34 | 91.38 ± 1.16 | 92.57 ± 1.03 | 87.45 ± 0.66 | 95.73 ± 0.86 | 89.38 ± 0.88 | 95.96 ± 0.38 |
k × 100 | 42.45 ± 12.58 | 37.36 ± 2.00 | 93.19 ± 0.96 | 93.50 ± 0.73 | 93.79 ± 0.67 | 92.93 ± 0.76 | 95.21 ± 0.84 | 90.64 ± 0.45 | 96.62 ± 0.47 | 90.42 ± 0.46 | 96.97 ± 0.32 |
Metric | 3D-CNN | HybridSN | GAHT | SpectralFormer | SSFTT | GSC-ViT | MEA-EFFormer |
---|---|---|---|---|---|---|---|
Testing Time (s) | 6.39 | 7.29 | 13.69 | 18.52 | 7.24 | 10.85 | 8.54 |
Params. (K) | 462.486 | 797.57 | 946.83 | 128.8 | 148.3 | 77.90 | 436.625 |
OA (%) | 95.33 ± 0.50 | 94.69 ± 0.57 | 96.40 ± 0.63 | 93.00 ± 0.33 | 97.46 ± 0.35 | 92.82 ± 0.34 | 97.72 ± 0.24 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, Q.; Zhao, G.; Fang, Y.; Fang, C.; Sun, L.; Li, X. MEA-EFFormer: Multiscale Efficient Attention with Enhanced Feature Transformer for Hyperspectral Image Classification. Remote Sens. 2024, 16, 1560. https://doi.org/10.3390/rs16091560