Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network
"> Figure 1
<p>The structure of 3D-convolutional neural networks (CNN) with a batch normalization (BN) layer.</p> "> Figure 2
<p>The architecture of residual network (ResNet) and dense convolutional network (DenseNet).</p> "> Figure 3
<p>The structure of the dense block used in our framework.</p> "> Figure 4
<p>The details of the spectral attention block and the spatial attention block.</p> "> Figure 5
<p>The procedure of our proposed double-branch dual-attention (DBDA) framework.</p> "> Figure 6
<p>The structure of the DBDA network. The upper spectral branch composed of the dense spectral block and channel attention block is designed to capture spectral features. The lower spatial branch constituted by dense spatial block, and spatial attention block is designed to exploit spatial features.</p> "> Figure 7
<p>The flowchart for the DBDA methodology. The 3D-cube is fed into the spectral branch (top) and spatial branch (bottom) respectively. The obtained features are concatenated to classify the target pixel.</p> "> Figure 8
<p>The graph of the activation functions (Mish and ReLU).</p> "> Figure 9
<p>Classification maps for the IP dataset using 3% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p> "> Figure 10
<p>Classification maps for the UP dataset using 0.5% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p> "> Figure 11
<p>Classification maps for the UP dataset using 0.5% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p> "> Figure 12
<p>Classification maps for the BS dataset using 1.2% training samples. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth (GT). (<b>c</b>–<b>h</b>) The classification maps with disparate algorithms.</p> "> Figure 13
<p>The OA results of SVM, CDCNN, CDCNN, SSRN, FDSSC, DBMA and our proposed method with varying proportions of training samples on the (<b>a</b>) IP, (<b>b</b>) UP, (<b>c</b>) SV and (<b>d</b>) BS.</p> "> Figure 14
<p>Effectiveness of the attention mechanism (results of different attention mechanisms).</p> "> Figure 15
<p>Effectiveness of the activation function (results on different activation functions).</p> ">
Abstract
1. Introduction
- Based on DenseNet and 3D-CNN, we propose an end-to-end double-branch dual-attention mechanism network (DBDA). The spectral branch and the spatial branch of the proposed framework exploit spectral and spatial features, respectively, without any feature engineering.
- A flexible and adaptive self-attention mechanism is introduced to both the spectral and spatial dimensions. The channel-wise attention block is designed to focus on the information-rich spectral bands, and the spatial-wise attention block is built to concentrate on the information-rich pixels.
- DBDA obtains state-of-the-art classification accuracy on four datasets with limited training data. Furthermore, our proposed network consumes less time than the two compared deep-learning algorithms.
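For illustration, the channel-wise attention described in the second contribution can be sketched with numpy in the style of the dual attention network (Fu et al.) that this line of work builds on. The array shapes, the residual scale `gamma`, and the function names below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat, gamma=1.0):
    """DANet-style channel attention over a (C, H, W) feature cube.

    Each spectral channel is re-expressed as an attention-weighted
    mixture of all channels, then added back as a residual, so that
    information-rich bands are emphasized.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, -1)           # (C, N) with N = H * W
    energy = x @ x.T                  # (C, C) channel-affinity matrix
    attn = softmax(energy, axis=-1)   # each row sums to 1
    out = attn @ x                    # channels mixed by attention
    return (gamma * out + x).reshape(c, h, w)
```

A spatial attention block swaps the roles of channels and positions, producing an (N, N) affinity over pixels instead of a (C, C) affinity over bands.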
2. Related Work
2.1. HSI Classification Framework Based on 3D-Cube
2.2. 3D-CNN with Batch Normalization
2.3. ResNet and DenseNet
2.4. Attention Mechanism
2.4.1. Spectral Attention Block
2.4.2. Spatial Attention Block
3. Methodology
3.1. The Framework of the DBDA Network
3.1.1. Spectral Branch with the Channel Attention Block
3.1.2. Spatial Branch with the Spatial Attention Block
3.1.3. Spectral and Spatial Fusion for HSI Classification
3.2. Measures Taken to Prevent Overfitting
3.2.1. A Strong and Appropriate Activation Function
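The activation discussed in this subsection, Mish (Misra, 2019), is defined as x·tanh(softplus(x)). A minimal numpy version, shown alongside ReLU for comparison (the vectorized form is ours, not the paper's code):

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).

    np.logaddexp(0, x) computes softplus = ln(1 + e^x) without
    overflowing for large x.
    """
    return x * np.tanh(np.logaddexp(0.0, x))

def relu(x):
    """ReLU, for comparison with Mish."""
    return np.maximum(x, 0.0)
```

Unlike ReLU, Mish is smooth and non-monotonic: it lets small negative values pass through slightly attenuated instead of zeroing them out.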
3.2.2. Dropout Layer, Early Stopping Strategy and Dynamic Learning Rate Adjustment
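The dynamic learning-rate adjustment referenced here follows the cosine-annealing idea of SGDR (Loshchilov and Hutter), which decays the rate with a cosine curve inside each restart cycle. A small sketch; whether the paper uses exactly this schedule and these hyperparameters is an assumption:

```python
import math

def cosine_annealing_lr(eta_min, eta_max, t_cur, t_i):
    """SGDR-style learning rate: cosine decay from eta_max to eta_min
    over one cycle of t_i epochs, with t_cur epochs elapsed."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t_cur / t_i))
```

At t_cur = 0 the rate restarts at eta_max; at t_cur = t_i it has decayed to eta_min.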
4. Experimental Results
4.1. Introduction to the Datasets
4.2. Experimental Setting
4.3. Classification Maps and Categorized Results
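The tables in this section summarize per-class accuracy with overall accuracy (OA), average accuracy (AA) and the kappa coefficient. A minimal numpy sketch of how these three scores derive from a confusion matrix (rows taken as ground truth; the helper name is ours):

```python
import numpy as np

def oa_aa_kappa(conf):
    """OA, AA and Cohen's kappa from a confusion matrix.

    conf[i, j] counts ground-truth class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                          # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))   # mean per-class recall
    pe = conf.sum(axis=0) @ conf.sum(axis=1) / n**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```

Kappa discounts the agreement expected by chance, which is why it is reported on a 0–1 scale alongside the percentage accuracies.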
4.3.1. Classification Maps and Categorized Results for the IP Dataset
4.3.2. Classification Maps and Categorized Results for the UP Dataset
4.3.3. Classification Maps and Categorized Results for the SV Dataset
4.3.4. Classification Maps and Categorized Results for the BS Dataset
4.4. Investigation of Running Time
5. Discussion
5.1. Investigation of the Proportion of Training Samples
5.2. Effectiveness of the Attention Mechanism
5.3. Effectiveness of the Activation Function
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Zhong, Y.; Ma, A.; Ong, Y.; Zhu, Z.; Zhang, L. Computational intelligence in optical remote sensing image processing. Appl. Soft Comput. 2018, 64, 75–93. [Google Scholar] [CrossRef]
- Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119. [Google Scholar] [CrossRef] [Green Version]
- Pipitone, C.; Maltese, A.; Dardanelli, G.; Brutto, M.; Loggia, G. Monitoring water surface and level of a reservoir using different remote sensing approaches and comparison with dam displacements evaluated via GNSS. Remote Sens. 2018, 10, 71. [Google Scholar] [CrossRef] [Green Version]
- Zhao, C.; Wang, Y.; Qi, B.; Wang, J. Global and local real-time anomaly detectors for hyperspectral remote sensing imagery. Remote Sens. 2015, 7, 3966–3985. [Google Scholar] [CrossRef] [Green Version]
- Li, Z.; Huang, L.; He, J. A Multiscale Deep Middle-level Feature Fusion Network for Hyperspectral Classification. Remote Sens. 2019, 11, 695. [Google Scholar] [CrossRef] [Green Version]
- Awad, M.; Jomaa, I.; Arab, F. Improved Capability in Stone Pine Forest Mapping and Management in Lebanon Using Hyperspectral CHRIS-Proba Data Relative to Landsat ETM+. Photogramm. Eng. Remote Sens. 2014, 80, 725–731. [Google Scholar] [CrossRef]
- Ibrahim, A.; Franz, B.; Ahmad, Z.; Healy, R.; Knobelspiesse, K.; Gao, B.; Proctor, C.; Zhai, P. Atmospheric correction for hyperspectral ocean color retrieval with application to the Hyperspectral Imager for the Coastal Ocean (HICO). Remote Sens. Environ. 2018, 204, 60–75. [Google Scholar] [CrossRef] [Green Version]
- Marinelli, D.; Bovolo, F.; Bruzzone, L. A novel change detection method for multitemporal hyperspectral images based on binary hyperspectral change vectors. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4913–4928. [Google Scholar] [CrossRef]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
- Li, J.; Bioucas-Dias, J.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098. [Google Scholar] [CrossRef] [Green Version]
- Li, J.; Bioucas-Dias, J.; Plaza, A. Spectral–spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. IEEE Trans. Geosci. Remote Sens. 2011, 50, 809–823. [Google Scholar] [CrossRef]
- Du, B.; Zhang, L. Random-selection-based anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2010, 49, 1578–1589. [Google Scholar] [CrossRef]
- Du, B.; Zhang, L. Target detection based on a dynamic subspace. Pattern Recognit. 2014, 47, 344–358. [Google Scholar] [CrossRef]
- Li, J.; Marpu, P.; Plaza, A.; Bioucas-Dias, J.; Benediktsson, J. Generalized composite kernel framework for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4816–4829. [Google Scholar] [CrossRef]
- Li, W.; Du, Q. Gabor-filtering-based nearest regularized subspace for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1012–1022. [Google Scholar] [CrossRef]
- Fang, L.; Li, S.; Duan, W.; Ren, J.; Benediktsson, J. Classification of hyperspectral images by exploiting spectral–spatial information of superpixel via multiple kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674. [Google Scholar] [CrossRef] [Green Version]
- Camps-Valls, G.; Gomez-Chova, L.; Muñoz-Marí, J.; Vila-Frances, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97. [Google Scholar] [CrossRef]
- Li, P.; Chen, X.; Shen, S. Stereo r-cnn based 3d object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 7644–7652. [Google Scholar]
- Zhang, W.; Feng, Y.; Meng, F.; You, D.; Liu, Q. Bridging the Gap between Training and Inference for Neural Machine Translation. arXiv 2019, arXiv:1906.02448. [Google Scholar]
- Durand, T.; Mehrasa, N.; Mori, G. Learning a Deep ConvNet for Multi-label Classification with Partial Labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 647–657. [Google Scholar]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Tao, C.; Pan, H.; Li, Y.; Zou, Z. Unsupervised spectral–spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2438–2442. [Google Scholar]
- Ma, X.; Wang, H.; Geng, J. Spectral–spatial classification of hyperspectral image based on deep auto-encoder. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4073–4085. [Google Scholar] [CrossRef]
- Zhang, X.; Liang, Y.; Li, C.; Hu, N.; Jiao, L.; Zhou, H. Recursive autoencoders-based unsupervised feature learning for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1928–1932. [Google Scholar] [CrossRef] [Green Version]
- Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
- Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M.; Weinberger, Q. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
- Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068. [Google Scholar] [CrossRef] [Green Version]
- Fang, B.; Li, Y.; Zhang, H.; Chan, J. Hyperspectral Images Classification Based on Dense Convolutional Networks with Spectral-Wise Attention Mechanism. Remote Sens. 2019, 11, 159. [Google Scholar] [CrossRef] [Green Version]
- Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification. Remote Sens. 2019, 11, 1307. [Google Scholar] [CrossRef] [Green Version]
- Woo, S.; Park, J.; Lee, J.; Kweon, I. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Mou, L.; Ghamisi, P.; Zhu, X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
- Tan, K.; Hu, J.; Li, J.; Du, P. A novel semi-supervised hyperspectral image classification approach based on spatial neighborhood information and classifier combination. ISPRS J. Photogramm. Remote Sens. 2015, 105, 19–29. [Google Scholar] [CrossRef]
- Zhang, M.; Gong, M.; Mao, Y.; Li, J.; Wu, Y. Unsupervised feature extraction in hyperspectral images based on wasserstein generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2669–2688. [Google Scholar] [CrossRef]
- Haut, J.; Paoletti, M.; Plaza, J.; Li, J.; Plaza, A. Active learning with convolutional neural networks for hyperspectral image classification using a new bayesian approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461. [Google Scholar] [CrossRef]
- Paoletti, M.; Haut, J.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2145–2160. [Google Scholar] [CrossRef]
- Yang, S.; Feng, Z.; Wang, M.; Zhang, K. Self-paced learning-based probability subspace projection for hyperspectral image classification. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 630–635. [Google Scholar] [CrossRef]
- Kemker, R.; Kanan, C. Self-taught feature learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2693–2705. [Google Scholar] [CrossRef]
- Chen, Z.; Jiang, J.; Zhou, C.; Fu, S.; Cai, Z. SuperBF: Superpixel-Based Bilateral Filtering Algorithm and Its Application in Feature Extraction of Hyperspectral Images. IEEE Access 2019, 7, 147796–147807. [Google Scholar] [CrossRef]
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 3146–3154. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the ICML 2015 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 1–9. [Google Scholar]
- Rensink, R. The dynamic representation of scenes. Vis. Cogn. 2000, 7, 17–42. [Google Scholar] [CrossRef]
- Mnih, V.; Heess, N.; Graves, A. Recurrent models of visual attention. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2204–2212. [Google Scholar]
- Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A.; Salakhutdinov, R.; Zemel, R.; Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2048–2057. [Google Scholar]
- Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; He, X. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1316–1324. [Google Scholar]
- Misra, D. Mish: A Self Regularized Non-Monotonic Neural Activation Function. arXiv 2019, arXiv:1908.08681. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
Layer Name | Kernel Size | Output Size |
---|---|---|
Input | - | ( |
Conv | (1 | ( |
BN-Mish-Conv | (1 | ( |
Concatenate | - | ( |
BN-Mish-Conv | (1 | ( |
Concatenate | - | ( |
BN-Mish-Conv | (1 | ( |
Concatenate | - | ( |
BN-Mish-Conv | (1 | ( |
Channel Attention Block | - | ( |
BN-Dropout-GlobalAveragePooling | - | ( |
Layer Name | Kernel Size | Output Size |
---|---|---|
Input | - | ( |
Conv | (1 | ( |
BN-Mish-Conv | (3 | ( |
Concatenate | - | ( |
BN-Mish-Conv | (3 | ( |
Concatenate | - | ( |
BN-Mish-Conv | (3 | ( |
Concatenate | - | ( |
Spatial Attention Block | - | (
BN-Dropout-GlobalAveragePooling | - | ( |
Order | Class | Total Number | Train | Val | Test |
---|---|---|---|---|---|
1 | Alfalfa | 46 | 3 | 3 | 40 |
2 | Corn-notill | 1428 | 42 | 42 | 1344 |
3 | Corn-mintill | 830 | 24 | 24 | 782 |
4 | Corn | 237 | 7 | 7 | 223 |
5 | Grass-pasture | 483 | 14 | 14 | 455 |
6 | Grass-trees | 730 | 21 | 21 | 688 |
7 | Grass-pasture-mowed | 28 | 3 | 3 | 22 |
8 | Hay-windrowed | 478 | 14 | 14 | 450 |
9 | Oats | 20 | 3 | 3 | 14 |
10 | Soybean-notill | 972 | 29 | 29 | 914 |
11 | Soybean-mintill | 2455 | 73 | 73 | 2309 |
12 | Soybean-clean | 593 | 17 | 17 | 559 |
13 | Wheat | 205 | 6 | 6 | 193 |
14 | Woods | 1265 | 37 | 37 | 1191 |
15 | Buildings-Grass-Trees-Drives | 386 | 11 | 11 | 364 |
16 | Stone-Steel-Towers | 93 | 3 | 3 | 87 |
Total | 10,249 | 307 | 307 | 9635 |
Order | Class | Total Number | Train | Val | Test |
---|---|---|---|---|---|
1 | Asphalt | 6631 | 33 | 33 | 6565 |
2 | Meadows | 18,649 | 93 | 93 | 18,463 |
3 | Gravel | 2099 | 10 | 10 | 2079 |
4 | Trees | 3064 | 15 | 15 | 3034 |
5 | Painted metal sheets | 1345 | 6 | 6 | 1333 |
6 | Bare Soil | 5029 | 25 | 25 | 4979 |
7 | Bitumen | 1330 | 6 | 6 | 1318 |
8 | Self-Blocking Bricks | 3682 | 18 | 18 | 3646 |
9 | Shadows | 947 | 4 | 4 | 939 |
Total | 42,776 | 210 | 210 | 42,356 |
Order | Class | Total Number | Train | Val | Test |
---|---|---|---|---|---|
1 | Brocoli-green-weeds-1 | 2009 | 10 | 10 | 1989 |
2 | Brocoli-green-weeds-2 | 3726 | 18 | 18 | 3690 |
3 | Fallow | 1976 | 9 | 9 | 1958 |
4 | Fallow-rough-plow | 1394 | 6 | 6 | 1382 |
5 | Fallow-smooth | 2678 | 13 | 13 | 2652 |
6 | Stubble | 3959 | 19 | 19 | 3921 |
7 | Celery | 3579 | 17 | 17 | 3545 |
8 | Grapes-untrained | 11,271 | 56 | 56 | 11,159 |
9 | Soil-vinyard-develop | 6203 | 31 | 31 | 6141 |
10 | Corn-senesced-green-weeds | 3278 | 16 | 16 | 3246 |
11 | Lettuce-romaine-4wk | 1068 | 5 | 5 | 1058 |
12 | Lettuce-romaine-5wk | 1927 | 9 | 94 | 1824 |
13 | Lettuce-romaine-6wk | 916 | 4 | 4 | 908 |
14 | Lettuce-romaine-7wk | 1070 | 5 | 5 | 1060 |
15 | Vinyard-untrained | 7268 | 36 | 36 | 7196 |
16 | Vinyard-vertical-trellis | 1807 | 9 | 9 | 1789 |
Total | 54,129 | 263 | 348 | 53,603 |
Order | Class | Total Number | Train | Val | Test |
---|---|---|---|---|---|
1 | Water | 270 | 3 | 3 | 264 |
2 | Hippo grass | 101 | 2 | 2 | 97 |
3 | Floodplain grasses1 | 251 | 3 | 3 | 245 |
4 | Floodplain grasses2 | 215 | 3 | 3 | 209 |
5 | Reeds1 | 269 | 3 | 3 | 263 |
6 | Riparian | 269 | 3 | 3 | 263 |
7 | Firescar2 | 259 | 3 | 3 | 253 |
8 | Island interior | 203 | 3 | 3 | 197 |
9 | Acacia woodlands | 314 | 4 | 4 | 306 |
10 | Acacia shrublands | 248 | 3 | 3 | 242 |
11 | Acacia grasslands | 305 | 4 | 4 | 297 |
12 | Short mopane | 181 | 2 | 2 | 177 |
13 | Mixed mopane | 268 | 3 | 3 | 262 |
14 | Exposed soils | 95 | 1 | 1 | 93 |
Total | 3248 | 40 | 40 | 3168 |
Class | SVM | CDCNN | SSRN | FDSSC | DBMA | Proposed |
---|---|---|---|---|---|---|
1 | 24.24 | 0.00 | 100.0 | 85.42 | 93.48 | 100.0 | |
2 | 58.10 | 62.36 | 89.14 | 97.20 | 91.15 | 88.49 | |
3 | 64.37 | 57.00 | 77.49 | 94.45 | 99.58 | 97.12 | |
4 | 37.07 | 37.50 | 88.95 | 100.0 | 98.57 | 100.0 | |
5 | 87.67 | 88.16 | 96.48 | 100.0 | 97.45 | 100.0 | |
6 | 84.02 | 79.63 | 98.15 | 100.0 | 95.66 | 97.18 | |
7 | 56.10 | 0.00 | 0.00 | 73.53 | 40.00 | 92.59 | |
8 | 89.62 | 84.02 | 84.54 | 99.78 | 100.0 | 99.78 | |
9 | 21.21 | 0.00 | 0.00 | 100.0 | 38.10 | 100.0 | |
10 | 65.89 | 37.50 | 92.07 | 89.25 | 85.98 | 89.87 | |
11 | 62.32 | 53.25 | 90.89 | 93.97 | 94.39 | 99.33 | |
12 | 52.40 | 42.96 | 84.19 | 95.41 | 89.92 | 98.50 | |
13 | 94.30 | 49.47 | 98.47 | 100.0 | 99.48 | 96.02 | |
14 | 90.15 | 76.71 | 94.56 | 93.14 | 92.81 | 93.22 | |
15 | 63.96 | 62.60 | 84.11 | 90.61 | 89.66 | 96.99 | |
16 | 98.46 | 83.70 | 91.40 | 96.55 | 96.55 | 94.38 | |
OA | 69.41 | 62.32 | 89.81 | 94.87 | 93.15 | 95.38 | |
AA | 65.62 | 50.93 | 79.40 | 94.33 | 87.67 | 96.47 | |
kappa | 0.6472 | 0.5593 | 0.8839 | 0.9414 | 0.9219 | 0.9474 |
Class | SVM | CDCNN | SSRN | FDSSC | DBMA | Proposed |
---|---|---|---|---|---|---|
1 | 82.87 | 85.74 | 99.15 | 96.88 | 94.86 | 89.03 | |
2 | 88.07 | 94.45 | 98.06 | 97.57 | 96.57 | 98.32 | |
3 | 70.84 | 32.59 | 96.64 | 89.97 | 100.0 | 98.70 | |
4 | 95.61 | 97.46 | 99.86 | 99.21 | 97.44 | 98.42 | |
5 | 92.24 | 99.10 | 99.85 | 99.55 | 95.69 | 99.78 | |
6 | 76.98 | 80.88 | 96.88 | 97.97 | 96.78 | 98.57 | |
7 | 68.98 | 88.83 | 73.24 | 100.0 | 95.69 | 95.84 | |
8 | 71.14 | 66.19 | 82.36 | 70.97 | 78.93 | 89.47 | |
9 | 99.89 | 96.01 | 100.0 | 100.0 | 99.55 | 99.89 | |
OA | 84.29 | 87.70 | 95.59 | 94.43 | 94.72 | 96.00 | |
AA | 82.96 | 82.36 | 94.01 | 94.68 | 95.49 | 96.45 | |
kappa | 0.7883 | 0.8359 | 0.9415 | 0.9257 | 0.9295 | 0.9467 |
Class | SVM | CDCNN | SSRN | FDSSC | DBMA | Proposed |
---|---|---|---|---|---|---|
1 | 99.85 | 0.00 | 100.0 | 100.0 | 100.0 | 100.0 | |
2 | 98.95 | 64.82 | 100.0 | 100.0 | 99.51 | 99.17 | |
3 | 89.88 | 94.69 | 89.72 | 99.44 | 98.92 | 97.74 | |
4 | 97.30 | 82.99 | 94.85 | 98.57 | 96.39 | 95.95 | |
5 | 93.56 | 98.24 | 99.39 | 99.87 | 96.39 | 99.29 | |
6 | 99.89 | 96.51 | 99.95 | 99.97 | 99.17 | 99.92 | |
7 | 91.33 | 95.98 | 99.75 | 99.75 | 96.80 | 99.83 | |
8 | 74.73 | 88.23 | 88.60 | 99.60 | 95.60 | 95.97 | |
9 | 97.69 | 99.26 | 98.48 | 99.69 | 99.22 | 99.37 | |
10 | 90.01 | 67.39 | 98.81 | 99.02 | 96.20 | 96.72 | |
11 | 75.92 | 72.03 | 93.30 | 92.77 | 82.29 | 93.72 | |
12 | 95.19 | 75.49 | 99.95 | 99.64 | 99.17 | 100.0 | |
13 | 94.87 | 95.71 | 100.0 | 100.0 | 98.91 | 100.0 | |
14 | 89.26 | 94.92 | 97.86 | 98.05 | 98.22 | 96.89 | |
15 | 75.86 | 51.88 | 89.96 | 74.58 | 84.71 | 93.42 | |
16 | 99.03 | 99.62 | 100.0 | 100.0 | 100.0 | 100.0 | |
OA | 88.09 | 77.79 | 94.72 | 94.99 | 95.44 | 97.51 | |
AA | 91.45 | 79.86 | 96.66 | 97.56 | 96.34 | 98.00 | |
kappa | 0.8671 | 0.7547 | 0.9412 | 0.9444 | 0.9493 | 0.9723 |
Class | SVM | CDCNN | SSRN | FDSSC | DBMA | Proposed |
---|---|---|---|---|---|---|
1 | 100.0 | 94.60 | 94.95 | 97.41 | 97.77 | 95.64 | |
2 | 97.56 | 68.64 | 100.0 | 98.95 | 88.89 | 98.99 | |
3 | 86.35 | 81.11 | 91.42 | 100.0 | 100.0 | 100.0 | |
4 | 63.51 | 65.45 | 97.34 | 93.03 | 92.51 | 91.30 | |
5 | 84.33 | 89.10 | 92.42 | 80.74 | 93.51 | 95.58 | |
6 | 61.27 | 69.28 | 66.39 | 84.93 | 68.94 | 82.23 | |
7 | 82.09 | 80.07 | 100.0 | 84.62 | 100.0 | 100.0 | |
8 | 63.46 | 89.36 | 100.0 | 93.36 | 96.10 | 95.63 | |
9 | 63.53 | 55.53 | 90.75 | 88.44 | 85.15 | 96.50 | |
10 | 65.74 | 81.69 | 86.83 | 99.59 | 97.60 | 98.79 | |
11 | 93.91 | 92.48 | 100.0 | 99.67 | 99.66 | 99.67 | |
12 | 90.70 | 90.91 | 100.0 | 100.0 | 97.79 | 100.0 | |
13 | 73.62 | 88.59 | 94.83 | 81.59 | 100.0 | 100.0 | |
14 | 92.98 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
OA | 77.21 | 80.90 | 91.89 | 91.57 | 93.43 | 96.24 | |
AA | 79.93 | 81.92 | 93.92 | 93.02 | 94.14 | 96.74 | |
kappa | 0.7532 | 0.7930 | 0.9121 | 0.9086 | 0.9289 | 0.9593
Dataset | Algorithm | Training Time (s) | Testing Time (s)
---|---|---|---
Indian Pines | SVM | 20.10 | 0.66
Indian Pines | CDCNN | 11.13 | 1.54
Indian Pines | SSRN | 46.03 | 2.71
Indian Pines | FDSSC | 105.05 | 4.86
Indian Pines | DBMA | 94.69 | 6.35
Indian Pines | Proposed | 69.83 | 5.60
Dataset | Algorithm | Training Time (s) | Testing Time (s)
---|---|---|---
Pavia University | SVM | 3.38 | 2.29
Pavia University | CDCNN | 10.26 | 4.92
Pavia University | SSRN | 9.93 | 6.41
Pavia University | FDSSC | 26.01 | 11.56
Pavia University | DBMA | 21.02 | 11.17
Pavia University | Proposed | 18.46 | 13.32
Dataset | Algorithm | Training Time (s) | Testing Time (s)
---|---|---|---
Salinas | SVM | 9.35 | 3.89
Salinas | CDCNN | 9.82 | 6.14
Salinas | SSRN | 73.75 | 13.99
Salinas | FDSSC | 99.91 | 25.57
Salinas | DBMA | 105.30 | 31.82
Salinas | Proposed | 71.18 | 23.93
Dataset | Algorithm | Training Time (s) | Testing Time (s)
---|---|---|---
Botswana | SVM | 0.93 | 0.15
Botswana | CDCNN | 11.10 | 1.33
Botswana | SSRN | 8.87 | 1.37
Botswana | FDSSC | 17.84 | 1.45
Botswana | DBMA | 13.67 | 2.04
Botswana | Proposed | 17.19 | 1.90
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network. Remote Sens. 2020, 12, 582. https://doi.org/10.3390/rs12030582