Hyperspectral Image Classification via Deep Structure Dictionary Learning
<p>Figure 1. Workflow of the proposed feature extraction model.</p>
<p>Figure 2. Network architectures for our encoder: (<b>a</b>) a block of residual networks; (<b>b</b>) main structure of CNNs.</p>
<p>Figure 3. Overview of the dictionary built by the developed model. Shared constraints are used to describe the common features of all classes of HSI samples.</p>
<p>Figure 4. Loss-function value on the training samples and classification accuracy of the developed model versus the number of epochs.</p>
<p>Figure 5. Classification OA under different numbers of atoms per sub-dictionary.</p>
<p>Figure 6. Classification OA under different regularization parameters <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math>.</p>
<p>Figure 7. Confusion matrix of the developed model on the Center of Pavia dataset.</p>
<p>Figure 8. Classification maps of the Center of Pavia dataset for the compared methods: (<b>a</b>) pseudo-color image; (<b>b</b>) FDDL; (<b>c</b>) DPL; (<b>d</b>) ResNet; (<b>e</b>) RNN; (<b>f</b>) CNN; (<b>g</b>) ours; (<b>h</b>) ground truth. The yellow and red rectangles mark building and water areas.</p>
<p>Figure 9. Confusion matrix of the developed model on the Botswana dataset.</p>
<p>Figure 10. Classification maps of the Botswana dataset for the compared methods: (<b>a</b>) pseudo-color image; (<b>b</b>) FDDL; (<b>c</b>) DPL; (<b>d</b>) ResNet; (<b>e</b>) RNN; (<b>f</b>) CNN; (<b>g</b>) ours; (<b>h</b>) ground truth. The red and yellow rectangles mark mountain and grassland areas.</p>
<p>Figure 11. Confusion matrix of the developed model on the Houston 2013 dataset.</p>
<p>Figure 12. Classification maps of the Houston 2013 dataset for the compared methods: (<b>a</b>) pseudo-color image; (<b>b</b>) FDDL; (<b>c</b>) DPL; (<b>d</b>) ResNet; (<b>e</b>) RNN; (<b>f</b>) CNN; (<b>g</b>) ours; (<b>h</b>) ground truth. The red and yellow rectangles mark parking-lot and building areas.</p>
<p>Figure 13. Confusion matrix of the developed model on the Houston 2018 dataset.</p>
<p>Figure 14. Classification maps of the Houston 2018 dataset for the compared methods: (<b>a</b>) pseudo-color image; (<b>b</b>) FDDL; (<b>c</b>) DPL; (<b>d</b>) ResNet; (<b>e</b>) RNN; (<b>f</b>) CNN; (<b>g</b>) ours; (<b>h</b>) ground truth. The yellow and red rectangles mark grassland and building areas.</p>
Abstract
1. Introduction
- (1) We devise an effective feature learning framework that adopts convolutional neural networks (CNNs) to capture abundant spectral information and constructs a structure dictionary to predict HSI samples.
- (2) We design a novel shared constraint on the sub-dictionaries. In this way, the common and class-specific features of HSI samples are learned separately, yielding a more discriminative feature representation.
- (3) We carefully design two kinds of loss functions, i.e., a coding loss and a discriminating loss, on the coding coefficients to enhance classification performance.
- (4) Extensive experiments on several hyperspectral datasets demonstrate the superiority of the proposed method in terms of performance and efficiency compared with state-of-the-art techniques.
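As a hedged illustration of contributions (2) and (3), not the authors' implementation: the structure dictionary codes an encoder feature over a shared sub-dictionary plus one sub-dictionary per class, and the training objective combines a coding (reconstruction) loss with a discriminating loss that penalizes coefficient energy falling on other classes' sub-dictionaries. The closed-form ridge coder and all function names below are assumptions made for this sketch.

```python
import numpy as np

def code_over_dictionary(x, D, lam):
    """Ridge-regularized coding of feature x over dictionary D (columns are atoms).
    A simple stand-in for the paper's coder: regularized least squares."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)

def structure_dict_losses(x, y, D_shared, D_class, lam1=0.1, lam2=0.1):
    """Total loss for one encoder feature x with class label y (assumed form).
    Coding loss: reconstruction error over [D_shared | D_1 | ... | D_C].
    Discriminating loss: coefficient energy on other classes' sub-dictionaries."""
    D = np.hstack([D_shared] + D_class)            # full structured dictionary
    a = code_over_dictionary(x, D, lam2)
    coding_loss = np.sum((x - D @ a) ** 2) + lam2 * np.sum(a ** 2)
    # split the code back into its shared / per-class blocks
    sizes = [D_shared.shape[1]] + [Dc.shape[1] for Dc in D_class]
    blocks = np.split(a, np.cumsum(sizes)[:-1])    # blocks[0] shared, blocks[c+1] class c
    disc_loss = sum(np.sum(b ** 2) for c, b in enumerate(blocks[1:]) if c != y)
    return coding_loss + lam1 * disc_loss
```

In this sketch, λ1 weights the discriminating term and λ2 the coding regularizer, mirroring the two constraint coefficients tuned in Section 3.2.2; at test time, a sample would be assigned to the class whose sub-dictionary (together with the shared one) reconstructs it best.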
2. Materials and Methodology
2.1. Experimental Datasets
2.2. Methodology
2.2.1. Residual Networks Encoder
2.2.2. Dictionary Learning
2.2.3. Loss Functions
3. Experimental Results and Analysis
3.1. Sample Selection
3.2. Parameter Setting
3.2.1. Number of Dictionary Atoms
3.2.2. Constraint Coefficients λ1 and λ2
3.3. Classification Performance Analysis for Different Datasets
3.3.1. Center of Pavia
3.3.2. Botswana
3.3.3. Houston 2013
3.3.4. Houston 2018
4. Discussion
4.1. Influence of Imbalanced Samples
4.2. Influence of Small Training Samples
4.3. Computational Cost
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced spectral classifiers for hyperspectral images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32.
- Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An augmented linear mixing model to address spectral variability for hyperspectral unmixing. IEEE Trans. Image Process. 2018, 28, 1923–1938.
- Li, J.; Huang, X.; Gamba, P.; Bioucas-Dias, J.M.; Zhang, L.; Benediktsson, J.A.; Plaza, A. Multiple feature learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1592–1606.
- Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; Zhang, B. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4340–4354.
- Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; Zhang, B. Joint and progressive subspace analysis (JPSA) with spatial-spectral manifold alignment for semi-supervised hyperspectral dimensionality reduction. IEEE Trans. Cybern. 2021, 51, 3602–3615.
- Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978.
- Hong, D.; Gao, L.; Yao, J.; Yokoya, N.; Chanussot, J.; Heiden, U.; Zhang, B. Endmember-guided unmixing network (EGU-Net): A general deep learning framework for self-supervised hyperspectral unmixing. IEEE Trans. Neural Netw. Learn. Syst. 2021.
- Liu, X.; Deng, C.; Chanussot, J.; Hong, D.; Zhao, B. StfNet: A two-stream convolutional neural network for spatiotemporal image fusion. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6552–6564.
- Kumar, S.; Ghosh, J.; Crawford, M.M. Best-bases feature extraction algorithms for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1368–1379.
- Rashwan, S.; Dobigeon, N. A split-and-merge approach for hyperspectral band selection. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1378–1382.
- Jolliffe, I.T. Principal component analysis. Technometrics 2003, 45, 276.
- Senthilnath, J.; Omkar, S.; Mani, V.; Karnwal, N.; Shreyas, P. Crop stage classification of hyperspectral data using unsupervised techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 6, 861–866.
- Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74.
- Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319.
- Mei, F.; Zhao, C.; Wang, L.; Huo, H. Anomaly detection in hyperspectral imagery based on kernel ICA feature extraction. In Proceedings of the 2008 Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 20–22 December 2008; Volume 1, pp. 869–873.
- Fauvel, M.; Chanussot, J.; Benediktsson, J.A. Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas. EURASIP J. Adv. Signal Process. 2009, 2009, 1–14.
- Marchesi, S.; Bruzzone, L. ICA and kernel ICA for change detection in multispectral remote sensing images. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 2.
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
- Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362.
- Bruzzone, L.; Chi, M.; Marconcini, M. A novel transductive SVM for semisupervised classification of remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3363–3373.
- Zhao, C.; Liu, W.; Xu, Y.; Wen, J. A spectral-spatial SVM-based multi-layer learning algorithm for hyperspectral image classification. Remote Sens. Lett. 2018, 9, 218–227.
- Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
- Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
- Shi, C.; Pun, C.M. Multi-scale hierarchical recurrent neural networks for hyperspectral image classification. Neurocomputing 2018, 294, 82–93.
- Hang, R.; Liu, Q.; Hong, D.; Ghamisi, P. Cascaded recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5384–5394.
- Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature extraction for hyperspectral imagery: The evolution from shallow to deep: Overview and toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88.
- Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 210–227.
- Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985.
- Gao, L.; Yu, H.; Zhang, B.; Li, Q. Locality-preserving sparse representation-based classification in hyperspectral imagery. J. Appl. Remote Sens. 2016, 10, 042004.
- Yang, M.; Zhang, L.; Yang, J.; Zhang, D. Metaface learning for sparse representation based face recognition. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 1601–1604.
- Yang, M.; Zhang, L.; Feng, X.; Zhang, D. Fisher discrimination dictionary learning for sparse representation. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 543–550.
- Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Projective dictionary pair learning for pattern classification. Adv. Neural Inf. Process. Syst. 2014, 27.
- Akhtar, N.; Mian, A. Nonparametric coupled Bayesian dictionary and classifier learning for hyperspectral classification. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 4038–4050.
- Tu, X.; Shen, X.; Fu, P.; Wang, T.; Sun, Q.; Ji, Z. Discriminant sub-dictionary learning with adaptive multiscale superpixel representation for hyperspectral image classification. Neurocomputing 2020, 409, 131–145.
- Tang, H.; Liu, H.; Xiao, W.; Sebe, N. When dictionary learning meets deep learning: Deep dictionary learning and coding network for image recognition with limited data. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2129–2141.
- Tao, L.; Zhou, Y.; Jiang, X.; Liu, X.; Zhou, Z. Convolutional neural network-based dictionary learning for SAR target recognition. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1776–1780.
- Liu, Q.; Zhou, F.; Hang, R.; Yuan, X. Bidirectional-convolutional LSTM based spectral-spatial feature learning for hyperspectral image classification. Remote Sens. 2017, 9, 1330.
- Yang, H.L.; Crawford, M.M. Spectral and spatial proximity-based manifold alignment for multitemporal hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 51–64.
- Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418.
- Xu, Y.; Du, B.; Zhang, L.; Cerra, D.; Pato, M.; Carmona, E.; Prasad, S.; Yokoya, N.; Hänsch, R.; Le Saux, B. Advanced multi-sensor optical remote sensing for urban land use and land cover classification: Outcome of the 2018 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1709–1724.
- Kong, S.; Wang, D. A brief summary of dictionary learning based approach for classification (revised). arXiv 2012, arXiv:1205.6544.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
- Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the effective receptive field in deep convolutional neural networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29.
- Sinha, B.; Yimprayoon, P.; Tiensuwan, M. Cohen’s kappa statistic: A critical appraisal and some modifications. Calcutta Stat. Assoc. Bull. 2006, 58, 151–170.
- Chang, C.I.; Ma, K.Y.; Liang, C.C.; Kuo, Y.M.; Chen, S.; Zhong, S. Iterative random training sampling spectral spatial classification for hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3986–4007.
Classification accuracy of different methods on the Center of Pavia dataset.

Class | SVM | FDDL | DPL | ResNet | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|---|---|---|---|
1 | 0.9866 | 0.9882 | 0.9856 | 0.9845 | 0.9997 | 0.9836 | 0.9966 | 0.9999 | 1.0000 |
2 | 0.6302 | 0.2319 | 0.3743 | 0.6641 | 0.9752 | 0.4118 | 0.7496 | 0.9861 | 0.9662 |
3 | 0.9708 | 0.9851 | 0.9682 | 0.9644 | 0.8884 | 0.9902 | 0.9669 | 0.8994 | 0.9579 |
4 | 0.5055 | 0.3760 | 0.2568 | 0.4877 | 0.8675 | 0.4646 | 0.5256 | 0.8500 | 0.8619 |
5 | 0.9969 | 0.9848 | 0.9729 | 0.9835 | 0.9680 | 0.9924 | 0.9905 | 0.9809 | 0.9785 |
6 | 0.6659 | 0.6944 | 0.8576 | 0.7035 | 0.9597 | 0.8335 | 0.9331 | 0.9696 | 0.9776 |
7 | 0.9163 | 0.8811 | 0.9143 | 0.9363 | 0.9443 | 0.9465 | 0.9503 | 0.9604 | 0.9556 |
8 | 0.9416 | 0.9595 | 0.9711 | 0.9504 | 0.9812 | 0.9794 | 0.9904 | 0.9961 | 0.9925 |
9 | 0.9965 | 0.9643 | 0.9825 | 0.9895 | 0.9980 | 0.9930 | 0.9874 | 0.9980 | 0.9995 |
OA | 0.9234 | 0.9057 | 0.9244 | 0.9289 | 0.9828 | 0.9331 | 0.9663 | 0.9864 | 0.9875 |
AA | 0.8456 | 0.7850 | 0.8093 | 0.8515 | 0.9535 | 0.8439 | 0.8989 | 0.9600 | 0.9655 |
kappa | 0.8927 | 0.8677 | 0.8937 | 0.9004 | 0.9704 | 0.9060 | 0.9524 | 0.9767 | 0.9785 |
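The OA, AA, and kappa rows in the tables above follow the standard definitions: overall accuracy is the fraction of correctly labeled pixels, average accuracy is the mean of per-class accuracies, and Cohen's kappa corrects agreement for chance. A minimal sketch of how these are typically computed from a confusion matrix (standard formulas, not the authors' code; the function name is an assumption):

```python
import numpy as np

def oa_aa_kappa(cm):
    """OA, AA, and Cohen's kappa from a confusion matrix
    (rows = ground-truth class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                               # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))          # mean per-class accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2     # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)                      # Cohen's kappa
    return oa, aa, kappa
```

For example, on the toy 2-class matrix `[[50, 10], [5, 35]]` this gives OA = 0.85, while kappa is lower because roughly half the agreement would occur by chance.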
Classification accuracy of different methods on the Botswana dataset.

Class | SVM | FDDL | DPL | ResNet | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|---|---|---|---|
1 | 0.9465 | 0.9712 | 0.9794 | 0.9835 | 0.8934 | 0.9346 | 0.9492 | 0.9529 | 1.0000 |
2 | 1.0000 | 0.8571 | 0.9341 | 0.9890 | 0.7126 | 0.9189 | 0.8333 | 0.9048 | 0.9136 |
3 | 0.8451 | 0.7920 | 0.8496 | 0.8274 | 0.9426 | 0.8366 | 0.9264 | 0.9770 | 0.9701 |
4 | 0.8918 | 0.7887 | 0.9175 | 0.8918 | 0.6111 | 0.7846 | 0.9323 | 0.9479 | 0.9709 |
5 | 0.7037 | 0.6831 | 0.7284 | 0.7572 | 0.7880 | 0.7704 | 0.8219 | 0.8200 | 0.8935 |
6 | 0.6831 | 0.6461 | 0.6379 | 0.6214 | 0.6552 | 0.6250 | 0.7861 | 0.7471 | 0.7222 |
7 | 0.9615 | 0.7479 | 0.9316 | 0.9017 | 0.9462 | 0.9234 | 0.9607 | 0.9735 | 0.9808 |
8 | 0.8852 | 0.9126 | 0.9836 | 0.9781 | 0.7784 | 0.8214 | 0.9005 | 0.9394 | 0.9816 |
9 | 0.7279 | 0.7032 | 0.6784 | 0.7739 | 0.7877 | 0.7651 | 0.7651 | 0.8750 | 0.9405 |
10 | 0.7321 | 0.4777 | 0.8348 | 0.8527 | 0.7919 | 0.7704 | 0.8071 | 0.8768 | 0.8543 |
11 | 0.7418 | 0.7564 | 0.8945 | 0.8836 | 0.7233 | 0.8404 | 0.8517 | 0.8897 | 0.9221 |
12 | 0.9080 | 0.8037 | 0.8834 | 0.9816 | 0.7353 | 0.7746 | 0.8580 | 0.7927 | 0.9379 |
13 | 0.5785 | 0.7810 | 0.8554 | 0.7397 | 0.8522 | 0.7371 | 0.8966 | 0.8899 | 0.8930 |
14 | 0.9070 | 0.6628 | 0.7907 | 0.7907 | 0.7468 | 0.7404 | 0.8901 | 0.7900 | 1.0000 |
OA | 0.8017 | 0.7515 | 0.8420 | 0.8444 | 0.7884 | 0.8017 | 0.8676 | 0.8846 | 0.9220 |
AA | 0.8223 | 0.7560 | 0.8500 | 0.8552 | 0.7832 | 0.8031 | 0.8699 | 0.8840 | 0.9271 |
kappa | 0.7854 | 0.7311 | 0.8289 | 0.8316 | 0.7706 | 0.7850 | 0.8566 | 0.8751 | 0.9156 |
Classification accuracy of different methods on the Houston 2013 dataset.

Class | SVM | FDDL | DPL | ResNet | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|---|---|---|---|
1 | 0.8890 | 0.9076 | 0.9831 | 0.9387 | 0.9166 | 0.9538 | 0.9224 | 0.9659 | 0.9920 |
2 | 0.9353 | 0.9477 | 0.9814 | 0.9752 | 0.9856 | 0.9628 | 0.9824 | 0.9443 | 0.9811 |
3 | 0.9586 | 0.9984 | 0.9825 | 0.9904 | 1.0000 | 0.9857 | 0.9888 | 0.9952 | 0.9892 |
4 | 0.8875 | 0.9446 | 0.8634 | 0.9598 | 0.9480 | 0.9714 | 0.9435 | 0.9962 | 0.9980 |
5 | 0.9284 | 0.9776 | 0.9902 | 0.9723 | 0.9563 | 0.9785 | 0.9663 | 0.9779 | 0.9930 |
6 | 0.8703 | 0.9829 | 0.9693 | 0.9590 | 0.9288 | 0.9249 | 0.9691 | 0.9898 | 0.9846 |
7 | 0.6261 | 0.7881 | 0.6996 | 0.7977 | 0.8341 | 0.7820 | 0.8567 | 0.9389 | 0.9369 |
8 | 0.7250 | 0.5188 | 0.6571 | 0.5634 | 0.7907 | 0.4223 | 0.7945 | 0.8488 | 0.9578 |
9 | 0.5510 | 0.6557 | 0.7329 | 0.7063 | 0.7158 | 0.7045 | 0.7269 | 0.8580 | 0.9152 |
10 | 0.6389 | 0.4244 | 0.8462 | 0.7747 | 0.7982 | 0.7738 | 0.7808 | 0.8489 | 0.9460 |
11 | 0.5117 | 0.4317 | 0.5926 | 0.7752 | 0.7840 | 0.8354 | 0.7889 | 0.8781 | 0.9008 |
12 | 0.5396 | 0.5315 | 0.6595 | 0.6036 | 0.7023 | 0.7450 | 0.7348 | 0.8550 | 0.9422 |
13 | 0.2766 | 0.5414 | 0.2884 | 0.6430 | 0.7911 | 0.5745 | 0.4879 | 0.6250 | 0.7313 |
14 | 0.9689 | 0.9948 | 0.9896 | 0.9896 | 0.9450 | 0.9908 | 0.9721 | 0.9807 | 0.9854 |
15 | 0.9545 | 0.9882 | 0.9848 | 0.9562 | 0.9966 | 0.9781 | 0.9351 | 0.9866 | 0.9943 |
OA | 0.7409 | 0.7476 | 0.8103 | 0.8255 | 0.8600 | 0.8280 | 0.8549 | 0.9127 | 0.9539 |
AA | 0.7508 | 0.7756 | 0.8147 | 0.8404 | 0.8729 | 0.8381 | 0.8579 | 0.9126 | 0.9499 |
kappa | 0.7199 | 0.7271 | 0.7949 | 0.8114 | 0.8485 | 0.8142 | 0.8431 | 0.9056 | 0.9502 |
Classification accuracy of different methods on the Houston 2018 dataset.

Class | SVM | FDDL | DPL | ResNet | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|---|---|---|---|
1 | 0.9922 | 0.9295 | 0.9813 | 0.9486 | 0.8319 | 0.7925 | 0.6037 | 0.8157 | 0.9399 |
2 | 0.9371 | 0.8008 | 0.9064 | 0.7504 | 0.9275 | 0.9277 | 0.9305 | 0.8898 | 0.9288 |
3 | 0.9821 | 1.0000 | 1.0000 | 1.0000 | 0.9968 | 0.9951 | 0.9968 | 0.9952 | 0.9984 |
4 | 0.9717 | 0.8972 | 0.9647 | 0.9367 | 0.9028 | 0.8421 | 0.8520 | 0.8738 | 0.9653 |
5 | 0.8774 | 0.8387 | 0.8902 | 0.7701 | 0.7485 | 0.5942 | 0.4048 | 0.7697 | 0.8604 |
6 | 0.9670 | 0.8305 | 0.9784 | 0.9754 | 0.9143 | 0.9493 | 0.8131 | 0.9073 | 0.9847 |
7 | 0.9208 | 0.9958 | 0.9958 | 0.9625 | 0.8885 | 0.9424 | 0.8131 | 0.9795 | 0.9625 |
8 | 0.7535 | 0.6829 | 0.7247 | 0.8043 | 0.6741 | 0.7473 | 0.6975 | 0.8328 | 0.8802 |
9 | 0.6341 | 0.4107 | 0.6423 | 0.8066 | 0.9432 | 0.9380 | 0.9835 | 0.9262 | 0.9277 |
10 | 0.4501 | 0.2693 | 0.3757 | 0.4452 | 0.6765 | 0.6803 | 0.6394 | 0.6583 | 0.7109 |
11 | 0.4591 | 0.3753 | 0.4358 | 0.4852 | 0.6809 | 0.6073 | 0.4301 | 0.6844 | 0.6975 |
12 | 0.5091 | 0.4499 | 0.5611 | 0.3416 | 0.2762 | 0.2680 | 0.1945 | 0.2513 | 0.3738 |
13 | 0.4700 | 0.2544 | 0.4235 | 0.4389 | 0.7362 | 0.7378 | 0.6273 | 0.7633 | 0.7184 |
14 | 0.8117 | 0.7870 | 0.8380 | 0.7528 | 0.7166 | 0.7144 | 0.6478 | 0.6986 | 0.8308 |
15 | 0.9643 | 0.7154 | 0.9387 | 0.9366 | 0.9409 | 0.9223 | 0.8968 | 0.9090 | 0.9819 |
16 | 0.8934 | 0.8179 | 0.8843 | 0.8129 | 0.8468 | 0.7942 | 0.7226 | 0.8878 | 0.9080 |
17 | 0.9621 | 0.8939 | 0.9848 | 0.9924 | 0.8618 | 0.9754 | 0.9034 | 0.9470 | 1.0000 |
18 | 0.8203 | 0.6360 | 0.7125 | 0.7161 | 0.5774 | 0.5450 | 0.4096 | 0.6660 | 0.8003 |
19 | 0.8912 | 0.6545 | 0.8899 | 0.6725 | 0.7739 | 0.7704 | 0.3747 | 0.8632 | 0.9185 |
20 | 0.9531 | 0.9438 | 0.9424 | 0.8110 | 0.9205 | 0.8817 | 0.6037 | 0.8902 | 0.9824 |
OA | 0.6646 | 0.4938 | 0.6498 | 0.7193 | 0.8352 | 0.8298 | 0.7433 | 0.8451 | 0.8667 |
AA | 0.8110 | 0.7092 | 0.8035 | 0.7679 | 0.7918 | 0.7813 | 0.6773 | 0.8105 | 0.8685 |
kappa | 0.5988 | 0.4277 | 0.5825 | 0.6478 | 0.7874 | 0.7798 | 0.6849 | 0.7979 | 0.8281 |
Classification accuracy of deep models under imbalanced training samples.

Class No. | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|
1 | 0.9449 | 0.9059 | 0.9230 | 0.9654 | 0.9654 |
2 | 0.9610 | 0.9442 | 0.9521 | 0.9731 | 0.9734 |
3 | 0.9777 | 0.9984 | 0.9823 | 0.9952 | 1.0000 |
4 | 0.9893 | 0.9571 | 0.9502 | 0.9595 | 0.9866 |
5 | 0.9776 | 0.9857 | 0.9338 | 0.9879 | 0.9839 |
6 | 0.8805 | 0.9795 | 0.9727 | 0.9861 | 0.9966 |
7 | 0.7539 | 0.8914 | 0.7038 | 0.9203 | 0.9247 |
8 | 0.6223 | 0.5955 | 0.7475 | 0.7830 | 0.9125 |
9 | 0.7232 | 0.7143 | 0.6891 | 0.7823 | 0.8784 |
10 | 0.8389 | 0.6769 | 0.7188 | 0.7935 | 0.9258 |
11 | 0.8156 | 0.5414 | 0.7443 | 0.7694 | 0.8687 |
12 | 0.7622 | 0.5595 | 0.6189 | 0.7658 | 0.8991 |
13 | 0.2931 | 0.4090 | 0.5517 | 0.6459 | 0.3191 |
14 | 0.9508 | 0.7332 | 0.8843 | 0.9343 | 0.9793 |
15 | 0.9832 | 0.9916 | 0.9848 | 0.9609 | 0.9916 |
OA | 0.8387 | 0.7892 | 0.8155 | 0.9754 | 0.9213 |
AA | 0.8316 | 0.7922 | 0.8238 | 0.8815 | 0.9070 |
kappa | 0.8255 | 0.7720 | 0.8002 | 0.8653 | 0.9148 |
Classification accuracy of deep models with 10, 20, and 30 training samples per class (three column groups, left to right; within each group the columns are AE, RNN, CNN, CRNN, and ours).

Class No. | AE | RNN | CNN | CRNN | Ours | AE | RNN | CNN | CRNN | Ours | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0.33 | 0.57 | 0.77 | 0.82 | 0.84 | 0.73 | 0.92 | 0.77 | 0.93 | 0.96 | 0.89 | 0.72 | 0.86 | 0.94 | 0.99 |
2 | 0.39 | 0.46 | 0.92 | 0.95 | 0.96 | 0.59 | 0.74 | 0.81 | 0.81 | 0.94 | 0.87 | 0.86 | 0.98 | 0.84 | 0.93 |
3 | 0.56 | 0.26 | 0.52 | 0.99 | 0.99 | 0.93 | 0.90 | 0.98 | 0.94 | 0.99 | 0.99 | 0.84 | 0.99 | 0.95 | 0.99 |
4 | 0.75 | 0.84 | 0.88 | 0.96 | 0.91 | 0.77 | 0.85 | 0.97 | 0.92 | 0.86 | 0.97 | 0.91 | 0.97 | 0.97 | 0.86 |
5 | 0.86 | 0.74 | 0.90 | 0.95 | 0.96 | 0.96 | 0.89 | 0.97 | 0.99 | 0.98 | 0.96 | 0.88 | 0.98 | 0.98 | 0.97 |
6 | 0.69 | 0.41 | 0.92 | 0.87 | 0.97 | 0.83 | 0.80 | 0.76 | 0.75 | 0.97 | 0.97 | 0.96 | 0.89 | 0.90 | 0.98 |
7 | 0.49 | 0.35 | 0.60 | 0.81 | 0.80 | 0.50 | 0.52 | 0.49 | 0.80 | 0.75 | 0.56 | 0.40 | 0.71 | 0.78 | 0.74 |
8 | 0.23 | 0.24 | 0.55 | 0.49 | 0.66 | 0.45 | 0.32 | 0.41 | 0.65 | 0.67 | 0.66 | 0.60 | 0.81 | 0.75 | 0.75 |
9 | 0.46 | 0.35 | 0.41 | 0.48 | 0.63 | 0.61 | 0.61 | 0.65 | 0.68 | 0.78 | 0.64 | 0.53 | 0.66 | 0.68 | 0.76 |
10 | 0.18 | 0.00 | 0.61 | 0.57 | 0.61 | 0.41 | 0.26 | 0.48 | 0.66 | 0.85 | 0.61 | 0.45 | 0.71 | 0.72 | 0.83 |
11 | 0.51 | 0.30 | 0.52 | 0.67 | 0.69 | 0.48 | 0.65 | 0.53 | 0.75 | 0.74 | 0.62 | 0.54 | 0.60 | 0.75 | 0.79 |
12 | 0.38 | 0.26 | 0.31 | 0.45 | 0.38 | 0.19 | 0.34 | 0.63 | 0.64 | 0.65 | 0.58 | 0.50 | 0.61 | 0.67 | 0.80 |
13 | 0.15 | 0.57 | 0.15 | 0.26 | 0.46 | 0.18 | 0.12 | 0.22 | 0.36 | 0.52 | 0.19 | 0.10 | 0.43 | 0.39 | 0.51 |
14 | 0.72 | 0.89 | 0.85 | 0.86 | 0.93 | 0.90 | 0.66 | 0.95 | 0.93 | 0.96 | 0.83 | 0.83 | 0.90 | 0.85 | 0.95 |
15 | 0.96 | 0.66 | 0.96 | 0.97 | 0.99 | 0.96 | 0.66 | 0.96 | 0.95 | 0.99 | 0.93 | 0.92 | 0.98 | 0.93 | 0.99 |
OA | 0.53 | 0.45 | 0.63 | 0.72 | 0.77 | 0.62 | 0.63 | 0.68 | 0.79 | 0.83 | 0.74 | 0.64 | 0.80 | 0.81 | 0.85 |
AA | 0.51 | 0.46 | 0.66 | 0.74 | 0.79 | 0.63 | 0.62 | 0.70 | 0.78 | 0.84 | 0.75 | 0.67 | 0.80 | 0.81 | 0.86 |
kappa | 0.49 | 0.41 | 0.61 | 0.70 | 0.75 | 0.61 | 0.61 | 0.66 | 0.77 | 0.82 | 0.72 | 0.61 | 0.78 | 0.79 | 0.84 |
Computational cost of different methods on each dataset.

Dataset | SVM | FDDL | DPL | ResNet | AE | RNN | CNN | CRNN | Ours |
---|---|---|---|---|---|---|---|---|---|
Center of Pavia | 6.8 | 346.1 | 3.4 | 16.1 | 3.5 | 67.2 | 8.7 | 52.8 | 3.6 |
Botswana | 1.8 | 40.2 | 2.9 | 0.8 | 0.3 | 3.5 | 0.4 | 3.5 | 0.2 |
Houston 2013 | 2.1 | 69.1 | 4.5 | 2.5 | 0.8 | 13.8 | 1.6 | 13.8 | 0.5 |
Houston 2018 | 16.3 | 3310.8 | 1.3 | 79.1 | 32.2 | 155.2 | 51.6 | 137.7 | 10.9 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, W.; Han, Y.; Deng, C.; Li, Z. Hyperspectral Image Classification via Deep Structure Dictionary Learning. Remote Sens. 2022, 14, 2266. https://doi.org/10.3390/rs14092266