Self-Supervised Assisted Semi-Supervised Residual Network for Hyperspectral Image Classification
"> Figure 1
<p>Overview of the proposed SSRNet.</p> "> Figure 2
<p>The details structure of the RNet.</p> "> Figure 3
<p>The details structure of the base module.</p> "> Figure 4
<p>The classification maps of the Indian Pines with 10 labeled samples.</p> "> Figure 5
<p>The classification maps of the University of Pavia with 10 labeled samples.</p> "> Figure 6
<p>The classification maps of the Salinas with 10 labeled samples.</p> "> Figure 7
<p>The classification maps of the Houston 2013 with 10 labeled samples.</p> "> Figure 8
<p>The effects of spectral feature shift and masked bands reconstruction auxiliary task under different hyper-parameter choices on the Indian Pines dataset.</p> ">
Abstract
1. Introduction
2. Methodology
2.1. The Overall Framework of the Proposed SSRNet
2.2. Semi-Supervised Learning Branch
2.2.1. Mean-Teacher Framework
2.2.2. The RNet Overview
2.2.3. Data Random Perturbation
2.3. Self-Supervised Learning Branch
2.3.1. Masked Bands Reconstruction
2.3.2. Spectral Order Forecast
2.4. Overall Loss
3. Experiments
3.1. Dataset Description
3.2. Experiment Setup
3.3. Experimental Results
3.4. Ablation Study
3.4.1. Complementarity between Components
- SSRNet-S-R-O: The spectral feature shift in the semi-supervised branch and both self-supervised auxiliary tasks are discarded;
- SSRNet-S: Only the spectral feature shift in the semi-supervised branch is discarded;
- SSRNet-R: Only the masked bands reconstruction in the self-supervised branch is discarded;
- SSRNet-O: Only the spectral order forecast in the self-supervised branch is discarded;
- SSRNet (ALL): No components are discarded.
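The five ablation settings above amount to toggling individual loss terms on and off. A minimal sketch of this idea, with illustrative flag and term names (not the authors' actual implementation):

```python
# Hypothetical sketch: the ablation variants expressed as loss-term toggles.
# Names (l_sup, l_shift, l_recon, l_order) are illustrative only.
def total_loss(l_sup, l_shift, l_recon, l_order,
               use_shift=True, use_recon=True, use_order=True):
    """Combine the supervised loss with the optional auxiliary terms."""
    loss = l_sup
    if use_shift:   # spectral feature shift (semi-supervised branch)
        loss += l_shift
    if use_recon:   # masked bands reconstruction (self-supervised branch)
        loss += l_recon
    if use_order:   # spectral order forecast (self-supervised branch)
        loss += l_order
    return loss

# The five settings from the list above:
variants = {
    "SSRNet-S-R-O": dict(use_shift=False, use_recon=False, use_order=False),
    "SSRNet-S":     dict(use_shift=False),
    "SSRNet-R":     dict(use_recon=False),
    "SSRNet-O":     dict(use_order=False),
    "SSRNet (ALL)": dict(),
}
```

Each variant then trains the same network, differing only in which auxiliary terms contribute to the objective.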
3.4.2. Choice of Hyper-Parameters
3.4.3. Choice of Patch Size
3.5. Investigation on Running Time
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Li, Z.; Huang, L.; He, J. A multiscale deep middle-level feature fusion network for hyperspectral classification. Remote Sens. 2019, 11, 695.
- Awad, M.; Jomaa, I.; Arab, F. Improved capability in stone pine forest mapping and management in Lebanon using hyperspectral CHRIS-Proba data relative to Landsat ETM+. Photogramm. Eng. Remote Sens. 2014, 80, 725–731.
- Ibrahim, A.; Franz, B.; Ahmad, Z.; Healy, R.; Knobelspiesse, K.; Gao, B.C.; Proctor, C.; Zhai, P.W. Atmospheric correction for hyperspectral ocean color retrieval with application to the Hyperspectral Imager for the Coastal Ocean (HICO). Remote Sens. Environ. 2018, 204, 60–75.
- Foglini, F.; Angeletti, L.; Bracchi, V.; Chimienti, G.; Grande, V.; Hansen, I.M.; Meroni, A.N.; Marchese, F.; Mercorella, A.; Prampolini, M.; et al. Underwater hyperspectral imaging for seafloor and benthic habitat mapping. In Proceedings of the 2018 IEEE International Workshop on Metrology for the Sea; Learning to Measure Sea Health Parameters (MetroSea), Bari, Italy, 8–10 October 2018; pp. 201–205.
- Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78.
- Villa, A.; Benediktsson, J.A.; Chanussot, J.; Jutten, C. Hyperspectral image classification with independent component discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876.
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
- Ghamisi, P.; Maggiori, E.; Li, S.; Souza, R.; Tarablaka, Y.; Moser, G.; De Giorgi, A.; Fang, L.; Chen, Y.; Chi, M.; et al. New frontiers in spectral–spatial hyperspectral image classification: The latest advances based on mathematical morphology, Markov random fields, segmentation, sparse representation, and deep learning. IEEE Geosci. Remote Sens. Mag. 2018, 6, 10–43.
- Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25.
- Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962.
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 12.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Cheng, C.; Li, H.; Peng, J.; Cui, W.; Zhang, L. Hyperspectral image classification via spectral–spatial random patches network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4753–4764.
- Dópido, I.; Li, J.; Marpu, P.R.; Plaza, A.; Dias, J.M.B.; Benediktsson, J.A. Semisupervised self-learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4032–4044.
- Li, F.; Clausi, D.A.; Xu, L.; Wong, A. ST-IRGS: A region-based self-training algorithm applied to hyperspectral image classification and segmentation. IEEE Trans. Geosci. Remote Sens. 2017, 56, 3–16.
- Wu, Y.; Mu, G.; Qin, C.; Miao, Q.; Ma, W.; Zhang, X. Semi-supervised hyperspectral image classification via spatial-regulated self-training. Remote Sens. 2020, 12, 159.
- He, Z.; Liu, H.; Wang, Y.; Hu, J. Generative adversarial networks-based semi-supervised learning for hyperspectral image classification. Remote Sens. 2017, 9, 1042.
- Feng, J.; Ye, Z.; Li, D.; Liang, Y.; Tang, X.; Zhang, X. Hyperspectral image classification based on semi-supervised dual-branch convolutional autoencoder with self-attention. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1267–1270.
- Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image classification using soft sparse multinomial logistic regression. IEEE Geosci. Remote Sens. Lett. 2012, 10, 318–322.
- Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054.
- De Morsier, F.; Borgeaud, M.; Gass, V.; Thiran, J.P.; Tuia, D. Kernel low-rank and sparse graph for unsupervised and semi-supervised classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3410–3420.
- Ding, Y.; Zhao, X.; Zhang, Z.; Cai, W.; Yang, N.; Zhan, Y. Semi-supervised locality preserving dense graph neural network with ARMA filters and context-aware learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12.
- Sun, Q.; Liu, X.; Bourennane, S. Unsupervised multi-level feature extraction for improvement of hyperspectral classification. Remote Sens. 2021, 13, 1602.
- Zhao, B.; Ulfarsson, M.O.; Sveinsson, J.R.; Chanussot, J. Unsupervised and supervised feature extraction methods for hyperspectral images based on mixtures of factor analyzers. Remote Sens. 2020, 12, 1179.
- Zhu, M.; Fan, J.; Yang, Q.; Chen, T. SC-EADNet: A self-supervised contrastive efficient asymmetric dilated network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17.
- Yue, J.; Fang, L.; Rahmani, H.; Ghamisi, P. Self-supervised learning with adaptive distillation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13.
- Miyato, T.; Maeda, S.-I.; Koyama, M.; Ishii, S. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 1979–1993.
- Tarvainen, A.; Valpola, H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Adv. Neural Inf. Process. Syst. 2017, 30, 1195–1204.
- Wang, X.; Kihara, D.; Luo, J.; Qi, G.J. EnAET: Self-trained ensemble autoencoding transformations for semi-supervised learning. arXiv 2019, arXiv:1911.09265.
- Berthelot, D.; Carlini, N.; Goodfellow, I.; Papernot, N.; Oliver, A.; Raffel, C.A. MixMatch: A holistic approach to semi-supervised learning. Adv. Neural Inf. Process. Syst. 2019, 32, 5050–5060.
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Laine, S.; Aila, T. Temporal ensembling for semi-supervised learning. arXiv 2016, arXiv:1610.02242.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Tao, C.; Pan, H.; Li, Y.; Zou, Z. Unsupervised spectral–spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2438–2442.
- Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
- Liu, L.; Wang, Y.; Peng, J.; Zhang, L.; Zhang, B.; Cao, Y. Latent relationship guided stacked sparse autoencoder for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3711–3725.
- Liu, B.; Yu, A.; Yu, X.; Wang, R.; Gao, K.; Guo, W. Deep multiview learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7758–7772.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- Zhou, F.; Hang, R.; Liu, Q.; Yuan, X. Hyperspectral image classification using spectral–spatial LSTMs. Neurocomputing 2019, 328, 39–47.
- Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multi-attention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307.
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281.
- Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Class No. | Land Cover Type | Training | Testing |
---|---|---|---|
1 | Alfalfa | 10 | 27 |
2 | Corn-notill | 10 | 1133 |
3 | Corn-mintill | 10 | 655 |
4 | Corn | 10 | 180 |
5 | Grass-pasture | 10 | 377 |
6 | Grass-tree | 10 | 575 |
7 | Grass-pasture-mowed | 10 | 13 |
8 | Hay-windrowed | 10 | 373 |
9 | Oats | 10 | 7 |
10 | Soybean-notill | 10 | 768 |
11 | Soybean-mintill | 10 | 1955 |
12 | Soybean-clean | 10 | 465 |
13 | Wheat | 10 | 155 |
14 | Woods | 10 | 1003 |
15 | Buildings-Grass-Trees | 10 | 299 |
16 | Stone-Steel-Towers | 10 | 65 |
Total | | 160 | 8050 |
Class No. | Land Cover Type | Training | Testing |
---|---|---|---|
1 | Asphalt | 10 | 5295 |
2 | Meadows | 10 | 14,910 |
3 | Gravel | 10 | 1670 |
4 | Trees | 10 | 2442 |
5 | Metal Sheets | 10 | 1067 |
6 | Bare Soil | 10 | 4014 |
7 | Bitumen | 10 | 1055 |
8 | Bricks | 10 | 2936 |
9 | Shadows | 10 | 748 |
Total | | 90 | 34,137 |
Class No. | Land Cover Type | Training | Testing |
---|---|---|---|
1 | Brocoli-green-weeds-1 | 10 | 1598 |
2 | Brocoli-green-weeds-2 | 10 | 2971 |
3 | Fallow | 10 | 1571 |
4 | Fallow-rough-plow | 10 | 1106 |
5 | Fallow-smooth | 10 | 2133 |
6 | Stubble | 10 | 3158 |
7 | Celery | 10 | 2854 |
8 | Grapes-untrained | 10 | 9007 |
9 | Soil-vinyard-develop | 10 | 9007 |
10 | Corn-senesced-green-weeds | 10 | 4953 |
11 | Lettuce-romaine-4wk | 10 | 2613 |
12 | Lettuce-romaine-5wk | 10 | 845 |
13 | Lettuce-romaine-6wk | 10 | 1532 |
14 | Lettuce-romaine-7wk | 10 | 723 |
15 | Vinyard-untrained | 10 | 847 |
16 | Vinyard-vertical-trellis | 10 | 5805 |
Total | | 160 | 43,152 |
Class No. | Land Cover Type | Training | Testing |
---|---|---|---|
1 | Healthy grass | 10 | 991 |
2 | Stressed grass | 10 | 994 |
3 | Synthetic grass | 10 | 548 |
4 | Trees | 10 | 986 |
5 | Soil | 10 | 984 |
6 | Water | 10 | 251 |
7 | Residential | 10 | 1005 |
8 | Commercial | 10 | 986 |
9 | Road | 10 | 992 |
10 | Highway | 10 | 972 |
11 | Railway | 10 | 979 |
12 | Parking Lot 1 | 10 | 977 |
13 | Parking Lot 2 | 10 | 366 |
14 | Tennis Court | 10 | 333 |
15 | Running Track | 10 | 519 |
Total | | 150 | 11,883 |
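The four tables above all use the same split protocol: 10 labeled pixels are drawn from each class for training and the rest are held out for testing. A minimal sketch of such a per-class split (illustrative code, not the authors' implementation; function name and seed are assumptions):

```python
import numpy as np

def split_per_class(labels, n_train=10, seed=0):
    """Per-class train/test split for a labeled HSI ground-truth map.

    labels: 1-D array of class ids, with 0 meaning unlabeled/background.
    Returns index arrays (train_idx, test_idx) into `labels`.
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        if c == 0:  # skip unlabeled pixels
            continue
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```

Note that with only 10 samples per class, small classes such as Oats in Indian Pines (17 labeled pixels in total) leave very few test samples, which explains the 7 test pixels in that row.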
Class No. | SVM [7] | SSLSTM [41] | CDCNN [44] | 3DCAE [37] | SSRN [40] | HybridSN [43] | DBMA [42] | Proposed |
---|---|---|---|---|---|---|---|---|
1 | 20.59 ± 4.55 | 30.55 ± 10.8 | 35.84 ± 13.8 | 52.98 ± 28.9 | 71.49 ± 19.3 | 34.37 ± 25.6 | 74.96 ± 15.2 | 98.76 ± 1.74 |
2 | 42.21 ± 4.31 | 55.35 ± 7.89 | 55.86 ± 13.3 | 50.15 ± 19.4 | 76.25 ± 7.47 | 58.12 ± 8.74 | 65.09 ± 9.11 | 66.66 ± 11.7 |
3 | 35.40 ± 10.0 | 37.91 ± 11.1 | 39.87 ± 8.63 | 48.59 ± 14.1 | 69.08 ± 16.1 | 45.15 ± 15.0 | 57.39 ± 14.8 | 64.98 ± 7.70 |
4 | 23.26 ± 3.96 | 37.88 ± 12.2 | 34.71 ± 9.83 | 31.33 ± 10.6 | 57.53 ± 15.3 | 35.23 ± 15.7 | 54.05 ± 15.9 | 95.36 ± 4.72 |
5 | 63.52 ± 8.94 | 64.48 ± 19.2 | 65.62 ± 15.5 | 78.46 ± 12.0 | 93.90 ± 7.17 | 70.62 ± 18.0 | 92.35 ± 4.55 | 85.76 ± 5.11 |
6 | 87.13 ± 2.95 | 78.57 ± 10.9 | 85.22 ± 6.22 | 72.74 ± 17.8 | 96.64 ± 3.54 | 82.52 ± 10.9 | 97.43 ± 2.97 | 98.60 ± 0.37 |
7 | 26.52 ± 12.4 | 18.81 ± 6.41 | 23.40 ± 0.11 | 35.70 ± 23.1 | 45.81 ± 21.9 | 27.74 ± 24.3 | 25.73 ± 9.42 | 100.0 ± 0.00 |
8 | 95.51 ± 1.08 | 95.07 ± 4.30 | 96.42 ± 11.9 | 82.92 ± 28.1 | 98.63 ± 3.22 | 77.09 ± 31.8 | 99.86 ± 0.22 | 99.55 ± 0.63 |
9 | 13.41 ± 5.06 | 14.39 ± 6.69 | 17.52 ± 3.09 | 20.37 ± 20.3 | 46.15 ± 18.9 | 20.35 ± 21.0 | 9.61 ± 4.89 | 100.0 ± 0.00 |
10 | 46.77 ± 7.51 | 46.61 ± 7.86 | 17.52 ± 11.6 | 49.53 ± 21.4 | 67.42 ± 12.3 | 57.34 ± 9.61 | 67.32 ± 12.5 | 79.16 ± 3.40 |
11 | 62.14 ± 4.69 | 63.90 ± 6.00 | 64.98 ± 10.1 | 64.19 ± 21.8 | 79.76 ± 6.15 | 69.35 ± 9.09 | 78.54 ± 8.25 | 75.91 ± 4.42 |
12 | 28.09 ± 2.84 | 31.50 ± 6.74 | 31.74 ± 7.28 | 42.24 ± 17.1 | 59.93 ± 14.6 | 35.25 ± 14.6 | 52.58 ± 18.6 | 79.92 ± 3.78 |
13 | 82.81 ± 5.26 | 76.39 ± 8.12 | 83.83 ± 7.09 | 73.12 ± 19.3 | 94.87 ± 4.78 | 77.85 ± 16.0 | 91.57 ± 8.62 | 99.13 ± 1.22 |
14 | 89.44 ± 4.23 | 83.15 ± 6.27 | 87.19 ± 11.0 | 86.55 ± 9.08 | 97.12 ± 1.75 | 86.37 ± 10.1 | 94.90 ± 4.42 | 94.94 ± 0.44 |
15 | 42.56 ± 6.75 | 47.70 ± 9.20 | 53.70 ± 4.00 | 48.42 ± 17.2 | 75.73 ± 10.2 | 41.17 ± 9.81 | 60.55 ± 9.66 | 92.52 ± 5.06 |
16 | 91.39 ± 8.22 | 67.35 ± 11.6 | 68.15 ± 18.9 | 61.69 ± 12.3 | 84.12 ± 9.81 | 50.83 ± 6.68 | 81.72 ± 9.11 | 100.0 ± 0.00 |
OA (%) | 54.22 ± 2.47 | 56.57 ± 3.49 | 58.50 ± 3.36 | 57.24 ± 4.66 | 77.48 ± 3.96 | 57.31 ± 5.83 | 70.73 ± 4.83 | 81.65 ± 1.71 |
AA (%) | 55.09 ± 1.74 | 53.10 ± 2.25 | 55.89 ± 3.53 | 56.19 ± 6.37 | 75.90 ± 3.71 | 54.34 ± 4.87 | 68.98 ± 2.67 | 86.46 ± 1.17 |
Kappa×100 | 48.88 ± 2.64 | 51.08 ± 3.62 | 53.20 ± 3.56 | 52.75 ± 4.71 | 74.61 ± 4.24 | 52.66 ± 6.05 | 67.09 ± 5.25 | 79.21 ± 1.92 |
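The three summary rows in each results table are the standard accuracy metrics: overall accuracy (OA), average (per-class) accuracy (AA), and Cohen's kappa. A short sketch of their textbook definitions, computed from a confusion matrix (standard formulas, not code from the paper):

```python
import numpy as np

def oa_aa_kappa(C):
    """OA, AA and Cohen's kappa from confusion matrix C,
    where C[i, j] counts samples of true class i predicted as class j."""
    C = np.asarray(C, dtype=float)
    n = C.sum()
    oa = np.trace(C) / n                          # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))      # mean per-class accuracy
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n**2   # expected chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

AA weights every class equally, which is why methods that only do well on large classes can show a high OA but a much lower AA; the tables report kappa multiplied by 100.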
Class No. | SVM [7] | SSLSTM [41] | CDCNN [44] | 3DCAE [37] | SSRN [40] | HybridSN [43] | DBMA [42] | Proposed |
---|---|---|---|---|---|---|---|---|
1 | 92.38 ± 2.05 | 92.66 ± 1.65 | 90.29 ± 3.17 | 80.00 ± 13.1 | 98.18 ± 1.16 | 54.58 ± 30.3 | 95.64 ± 2.06 | 86.09 ± 9.03 |
2 | 81.67 ± 8.25 | 88.39 ± 1.30 | 92.00 ± 2.28 | 93.54 ± 1.32 | 96.29 ± 2.13 | 73.87 ± 37.2 | 96.78 ± 2.40 | 90.80 ± 2.75 |
3 | 40.65 ± 4.72 | 52.57 ± 5.11 | 51.01 ± 14.8 | 58.49 ± 5.70 | 64.42 ± 11.9 | 34.77 ± 20.9 | 77.21 ± 12.1 | 91.01 ± 7.41 |
4 | 60.97 ± 11.8 | 70.93 ± 11.8 | 75.75 ± 16.1 | 66.91 ± 10.1 | 79.92 ± 16.9 | 60.83 ± 23.3 | 85.22 ± 18.0 | 93.51 ± 5.09 |
5 | 90.77 ± 6.92 | 92.17 ± 3.92 | 93.47 ± 5.77 | 63.32 ± 44.8 | 99.10 ± 1.48 | 94.49 ± 4.96 | 98.86 ± 1.21 | 99.75 ± 0.35 |
6 | 34.24 ± 5.09 | 45.63 ± 6.35 | 51.85 ± 16.1 | 78.07 ± 4.04 | 73.38 ± 15.3 | 58.08 ± 19.7 | 65.15 ± 13.5 | 96.51 ± 3.04 |
7 | 44.62 ± 5.84 | 50.90 ± 6.05 | 52.51 ± 14.5 | 65.12 ± 11.9 | 66.81 ± 16.1 | 53.73 ± 17.8 | 85.52 ± 15.6 | 97.75 ± 1.51 |
8 | 70.31 ± 6.73 | 79.22 ± 2.64 | 73.13 ± 6.80 | 49.34 ± 1.70 | 79.65 ± 6.81 | 46.58 ± 11.6 | 81.79 ± 7.17 | 64.77 ± 31.3 |
9 | 99.88 ± 0.10 | 99.89 ± 0.09 | 61.63 ± 24.7 | 65.24 ± 19.5 | 99.55 ± 0.86 | 57.31 ± 20.7 | 92.01 ± 4.25 | 98.74 ± 1.24 |
OA (%) | 64.32 ± 6.29 | 74.29 ± 2.72 | 74.39 ± 6.50 | 76.54 ± 4.68 | 85.24 ± 4.32 | 63.05 ± 12.5 | 85.66 ± 4.55 | 89.38 ± 1.14 |
AA (%) | 68.39 ± 1.85 | 74.71 ± 2.08 | 71.29 ± 5.36 | 68.89 ± 5.49 | 84.14 ± 2.90 | 59.36 ± 9.42 | 86.46 ± 3.74 | 90.99 ± 1.88 |
Kappa×100 | 55.47 ± 6.63 | 67.36 ± 3.13 | 67.68 ± 7.60 | 70.13 ± 5.45 | 81.08 ± 5.25 | 55.59 ± 12.4 | 81.72 ± 5.49 | 86.18 ± 1.52 |
Class No. | SVM [7] | SSLSTM [41] | CDCNN [44] | 3DCAE [37] | SSRN [40] | HybridSN [43] | DBMA [42] | Proposed |
---|---|---|---|---|---|---|---|---|
1 | 98.49 ± 1.37 | 81.94 ± 21.2 | 85.02 ± 20.2 | 81.68 ± 16.2 | 87.74 ± 30.0 | 95.46 ± 6.99 | 99.10 ± 2.68 | 99.70 ± 0.23 |
2 | 98.95 ± 0.41 | 85.61 ± 15.4 | 96.29 ± 6.37 | 89.38 ± 4.85 | 99.96 ± 0.07 | 94.02 ± 6.43 | 99.99 ± 0.02 | 97.76 ± 2.87 |
3 | 86.03 ± 5.29 | 93.98 ± 4.91 | 91.71 ± 8.09 | 64.88 ± 45.8 | 92.56 ± 4.19 | 95.29 ± 4.58 | 97.60 ± 1.10 | 99.95 ± 0.06 |
4 | 97.30 ± 1.06 | 96.62 ± 2.38 | 93.23 ± 8.56 | 74.28 ± 23.9 | 92.98 ± 13.2 | 92.07 ± 8.11 | 90.69 ± 2.57 | 98.79 ± 1.28 |
5 | 97.14 ± 1.70 | 99.09 ± 0.48 | 96.47 ± 3.94 | 93.84 ± 4.28 | 98.47 ± 2.32 | 93.55 ± 4.99 | 98.75 ± 1.64 | 97.24 ± 0.05 |
6 | 99.94 ± 0.06 | 98.93 ± 0.66 | 97.01 ± 2.17 | 95.86 ± 1.90 | 99.94 ± 0.06 | 97.65 ± 3.46 | 99.58 ± 0.53 | 99.61 ± 0.47 |
7 | 95.38 ± 2.65 | 98.67 ± 1.05 | 97.25 ± 2.91 | 94.80 ± 5.35 | 96.50 ± 6.35 | 97.62 ± 2.17 | 97.85 ± 2.71 | 99.45 ± 0.77 |
8 | 70.82 ± 2.56 | 80.39 ± 7.40 | 68.33 ± 21.7 | 80.99 ± 3.23 | 82.33 ± 4.75 | 83.57 ± 5.78 | 89.42 ± 5.00 | 84.84 ± 4.97 |
9 | 98.83 ± 1.15 | 98.27 ± 1.54 | 99.41 ± 0.44 | 92.80 ± 1.82 | 97.27 ± 6.38 | 95.79 ± 4.68 | 99.36 ± 0.41 | 99.86 ± 0.16 |
10 | 78.67 ± 9.37 | 87.99 ± 1.93 | 85.23 ± 5.82 | 90.92 ± 6.56 | 94.13 ± 4.28 | 86.57 ± 10.2 | 90.55 ± 4.97 | 89.34 ± 6.07 |
11 | 79.57 ± 7.85 | 81.93 ± 8.79 | 72.03 ± 11.2 | 74.02 ± 13.6 | 95.38 ± 2.02 | 83.11 ± 16.6 | 91.55 ± 7.81 | 99.84 ± 0.22 |
12 | 93.88 ± 3.65 | 96.57 ± 1.31 | 95.91 ± 3.22 | 96.74 ± 2.85 | 99.34 ± 0.65 | 86.96 ± 29.1 | 99.40 ± 0.95 | 98.86 ± 1.41 |
13 | 91.47 ± 5.19 | 92.15 ± 3.22 | 88.81 ± 7.92 | 54.86 ± 8.18 | 96.05 ± 8.74 | 33.20 ± 42.2 | 91.69 ± 6.83 | 98.42 ± 1.43 |
14 | 83.71 ± 9.73 | 95.53 ± 3.42 | 92.89 ± 4.44 | 71.57 ± 8.54 | 88.55 ± 22.8 | 53.57 ± 24.5 | 92.71 ± 7.94 | 98.97 ± 0.54 |
15 | 54.96 ± 5.77 | 44.81 ± 3.07 | 52.23 ± 7.04 | 70.13 ± 14.7 | 66.87 ± 9.12 | 78.20 ± 7.75 | 64.27 ± 12.8 | 83.68 ± 3.80 |
16 | 90.53 ± 5.75 | 92.04 ± 9.08 | 94.44 ± 3.91 | 83.88 ± 7.25 | 99.37 ± 0.82 | 89.07 ± 9.26 | 98.35 ± 1.97 | 99.44 ± 0.46 |
OA (%) | 83.53 ± 1.81 | 79.39 ± 2.82 | 81.00 ± 3.91 | 83.27 ± 5.72 | 88.32 ± 5.76 | 87.35 ± 3.30 | 88.84 ± 4.03 | 93.47 ± 1.04 |
AA (%) | 88.48 ± 1.22 | 89.03 ± 2.76 | 87.89 ± 2.37 | 81.91 ± 6.86 | 92.96 ± 5.18 | 84.73 ± 6.41 | 93.80 ± 1.33 | 96.61 ± 0.05 |
Kappa×100 | 81.73 ± 1.97 | 77.31 ± 3.09 | 78.97 ± 4.24 | 81.52 ± 6.25 | 87.03 ± 6.35 | 85.97 ± 3.64 | 87.67 ± 4.40 | 92.75 ± 1.15 |
Class No. | SVM [7] | SSLSTM [41] | CDCNN [44] | 3DCAE [37] | SSRN [40] | HybridSN [43] | DBMA [42] | Proposed |
---|---|---|---|---|---|---|---|---|
1 | 88.65 ± 4.32 | 79.20 ± 10.8 | 81.77 ± 9.50 | 87.56 ± 6.25 | 83.84 ± 5.78 | 75.18 ± 26.5 | 88.33 ± 4.24 | 83.18 ± 4.24 |
2 | 91.38 ± 6.54 | 94.13 ± 7.35 | 89.77 ± 10.7 | 93.06 ± 3.65 | 94.26 ± 5.24 | 77.59 ± 12.4 | 92.36 ± 4.33 | 87.58 ± 9.24 |
3 | 87.88 ± 11.2 | 75.87 ± 23.5 | 88.05 ± 15.7 | 87.45 ± 10.6 | 99.64 ± 0.43 | 92.56 ± 6.99 | 99.94 ± 0.11 | 98.17 ± 2.06 |
4 | 96.09 ± 3.53 | 91.05 ± 10.6 | 94.96 ± 5.69 | 91.28 ± 3.33 | 97.07 ± 2.25 | 81.88 ± 12.0 | 94.66 ± 8.99 | 91.74 ± 4.71 |
5 | 90.91 ± 2.62 | 92.36 ± 2.71 | 95.36 ± 2.75 | 88.39 ± 2.51 | 93.15 ± 4.83 | 84.62 ± 8.58 | 93.91 ± 3.11 | 99.96 ± 0.05 |
6 | 93.99 ± 5.88 | 95.70 ± 3.31 | 85.34 ± 6.97 | 93.55 ± 5.95 | 94.97 ± 8.88 | 79.39 ± 18.9 | 95.22 ± 3.26 | 99.46 ± 0.75 |
7 | 67.72 ± 9.40 | 79.74 ± 7.14 | 74.40 ± 4.42 | 70.47 ± 14.3 | 75.90 ± 10.1 | 63.50 ± 13.0 | 78.17 ± 12.6 | 75.28 ± 2.72 |
8 | 66.75 ± 10.9 | 84.54 ± 4.33 | 81.29 ± 9.67 | 56.72 ± 5.72 | 80.49 ± 21.4 | 49.88 ± 30.8 | 94.07 ± 6.12 | 64.56 ± 13.0 |
9 | 62.88 ± 9.74 | 72.80 ± 7.18 | 70.61 ± 9.83 | 61.88 ± 14.9 | 66.63 ± 7.93 | 64.06 ± 12.3 | 75.42 ± 8.31 | 70.46 ± 7.82 |
10 | 59.57 ± 7.49 | 67.72 ± 9.71 | 61.68 ± 9.19 | 39.36 ± 28.2 | 69.39 ± 12.3 | 57.21 ± 24.7 | 70.02 ± 10.7 | 78.22 ± 4.01 |
11 | 58.80 ± 6.83 | 64.71 ± 7.71 | 65.33 ± 9.44 | 83.24 ± 3.79 | 77.09 ± 6.22 | 74.47 ± 9.72 | 61.67 ± 11.1 | 85.52 ± 6.27 |
12 | 59.63 ± 4.29 | 75.72 ± 7.38 | 74.23 ± 5.62 | 57.91 ± 8.05 | 73.57 ± 9.88 | 62.53 ± 10.5 | 75.75 ± 8.49 | 82.01 ± 8.35 |
13 | 31.88 ± 10.3 | 83.53 ± 8.37 | 80.28 ± 8.39 | 64.04 ± 19.4 | 93.11 ± 2.99 | 71.87 ± 11.2 | 77.75 ± 13.8 | 90.97 ± 5.78 |
14 | 79.28 ± 8.88 | 76.69 ± 12.1 | 73.72 ± 11.9 | 89.26 ± 6.98 | 85.64 ± 16.7 | 70.68 ± 30.2 | 98.46 ± 3.06 | 74.97 ± 31.6 |
15 | 99.26 ± 0.53 | 91.15 ± 4.17 | 88.54 ± 6.66 | 85.45 ± 2.88 | 95.90 ± 1.96 | 66.02 ± 25.4 | 92.70 ± 4.77 | 100.0 ± 0.00 |
OA (%) | 74.36 ± 0.02 | 79.05 ± 3.59 | 78.32 ± 1.74 | 75.08 ± 1.76 | 81.79 ± 4.77 | 71.37 ± 3.86 | 82.16 ± 3.27 | 83.93 ± 0.88 |
AA (%) | 75.64 ± 0.01 | 81.66 ± 3.18 | 80.36 ± 1.63 | 76.64 ± 3.69 | 85.38 ± 3.84 | 71.43 ± 5.31 | 85.90 ± 2.11 | 85.54 ± 0.43 |
Kappa×100 | 72.29 ± 0.02 | 77.38 ± 3.86 | 76.58 ± 1.87 | 73.10 ± 1.88 | 80.31 ± 5.15 | 69.10 ± 4.14 | 80.72 ± 3.53 | 82.65 ± 0.94 |
Datasets | Methods | OA (L = 10) (%) | OA (L = 20) (%) |
---|---|---|---|
Indian Pines | SSRNet-S-R-O | 72.19 | 79.26 |
Indian Pines | SSRNet-S | 73.74 | 82.40 |
Indian Pines | SSRNet-R | 76.37 | 83.51 |
Indian Pines | SSRNet-O | 81.13 | 86.86 |
Indian Pines | SSRNet (ALL) | 83.96 | 87.34 |
Houston 2013 | SSRNet-S-R-O | 77.39 | 86.49 |
Houston 2013 | SSRNet-S | 79.33 | 89.72 |
Houston 2013 | SSRNet-R | 81.67 | 87.88 |
Houston 2013 | SSRNet-O | 83.30 | 91.63 |
Houston 2013 | SSRNet (ALL) | 85.54 | 92.79 |
Patch Size | Indian Pines | PaviaU | Salinas | Houston 2013 |
---|---|---|---|---|
7 × 7 | 81.13 | 86.45 | 93.78 | 83.30 |
9 × 9 | 82.50 | 88.56 | 94.57 | 83.32 |
11 × 11 | 83.96 | 90.99 | 94.58 | 85.54 |
13 × 13 | 83.34 | 90.55 | 94.91 | 85.18 |
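The patch sizes in the table above refer to the spatial window extracted around each labeled pixel from the HSI cube. A minimal sketch of patch extraction with border padding (illustrative; padding mode and function name are assumptions, not the authors' code):

```python
import numpy as np

def extract_patch(cube, row, col, s):
    """Extract the s x s x B patch centered at (row, col) from an
    HSI cube of shape (H, W, B), replicating edge pixels at borders."""
    r = s // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="edge")
    # After padding, padded[row + r, col + r] is the original center pixel.
    return padded[row:row + s, col:col + s, :]
```

Larger patches bring in more spatial context but also more pixels from neighboring classes, which is consistent with accuracy peaking around 11 × 11 rather than growing monotonically.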
Dataset | Method | Training Time (s) | Test Time (s) |
---|---|---|---|
Indian Pines | SVM [7] | 3.12 | 0.88 |
Indian Pines | CDCNN [44] | 13.28 | 1.89 |
Indian Pines | 3DCAE [37] | 15.29 | 1.79 |
Indian Pines | SSRN [40] | 56.84 | 11.09 |
Indian Pines | SSLSTM [41] | 65.35 | 6.55 |
Indian Pines | HybridSN [43] | 4.48 | 0.85 |
Indian Pines | DBMA [42] | 85.38 | 13.42 |
Indian Pines | Proposed | 211.84 | 4.33 |
Dataset | Method | Training Time (s) | Test Time (s) |
---|---|---|---|
PaviaU | SVM [7] | 1.24 | 2.95 |
PaviaU | CDCNN [44] | 9.53 | 9.27 |
PaviaU | 3DCAE [37] | 21.69 | 7.52 |
PaviaU | SSRN [40] | 43.87 | 23.77 |
PaviaU | SSLSTM [41] | 41.26 | 31.60 |
PaviaU | HybridSN [43] | 4.26 | 3.67 |
PaviaU | DBMA [42] | 48.31 | 32.06 |
PaviaU | Proposed | 624.84 | 17.48 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Song, L.; Feng, Z.; Yang, S.; Zhang, X.; Jiao, L. Self-Supervised Assisted Semi-Supervised Residual Network for Hyperspectral Image Classification. Remote Sens. 2022, 14, 2997. https://doi.org/10.3390/rs14132997