Three-Dimensional ResNeXt Network Using Feature Fusion and Label Smoothing for Hyperspectral Image Classification
Figures
Figure 1. Overall structure of the proposed Hyperspectral Image (HSI) classification framework.
Figure 2. The end-to-end HSI classification flowchart.
Figure 3. Three-dimensional spectral residual block to extract spectral features.
Figure 4. A block of ResNet (left) and ResNeXt with cardinality = 8 (right). A layer is shown as (# in channels, filter size, # out channels).
Figure 5. General structure of a ResNeXt block with cardinality = 8 (taking Block2_1 as an example).
Figure 6. Precision, Recall, and F1-Score of the classes with the smallest number of samples under different training ratios in the three HSI datasets (Class 9 Oats for the IN dataset, Class 9 Shadows for the UP dataset, and Class 7 Swamp for the KSC dataset).
Figure 7. Precision, Recall, and F1-Score of the classes with the smallest number of samples under different input spatial sizes in the three HSI datasets (Class 9 Oats for the IN dataset, Class 9 Shadows for the UP dataset, and Class 7 Swamp for the KSC dataset).
Figure 8. Precision, Recall, and F1-Score of the classes with the smallest number of samples under different cardinalities in the three HSI datasets (Class 9 Oats for the IN dataset, Class 9 Shadows for the UP dataset, and Class 7 Swamp for the KSC dataset).
Figure 9. Precision, Recall, and F1-Score of the classes with the smallest number of samples for four models using 3-D Convolutional Neural Networks (CNNs) in the three HSI datasets (Class 9 Oats for the IN dataset, Class 9 Shadows for the UP dataset, and Class 7 Swamp for the KSC dataset).
Figure 10. Classification results of the compared models for the IN dataset: (a) false color image, (b) ground-truth labels, (c)–(f) classification results of 3D-CNN, SSRN, 3D-ResNet, and 3D-ResNeXt.
Figure 11. Classification results of the compared models for the UP dataset: (a) false color image, (b) ground-truth labels, (c)–(f) classification results of 3D-CNN, SSRN, 3D-ResNet, and 3D-ResNeXt.
Figure 12. Classification results of the compared models for the KSC dataset: (a) false color image, (b) ground-truth labels, (c)–(f) classification results of 3D-CNN, SSRN, 3D-ResNet, and 3D-ResNeXt.
Figure 13. The OA and loss of models with different loss functions for the IN dataset: (a) the original cross-entropy loss function, (b) the cross-entropy loss function modified by the label smoothing strategy.
Figure 14. The OA and loss of models with different loss functions for the UP dataset: (a) the original cross-entropy loss function, (b) the cross-entropy loss function modified by the label smoothing strategy.
Figure 15. The OA and loss of models with different loss functions for the KSC dataset: (a) the original cross-entropy loss function, (b) the cross-entropy loss function modified by the label smoothing strategy.
Figure 16. Precision, Recall, and F1-Score of the classes with the smallest number of samples under different loss functions in the three HSI datasets (Class 9 Oats for the IN dataset, Class 9 Shadows for the UP dataset, and Class 7 Swamp for the KSC dataset).
Figure 17. OAs of 3D-ResNet and 3D-ResNeXt with different ratios of training samples for the IN dataset.
Figure 18. OAs of 3D-ResNet and 3D-ResNeXt with different ratios of training samples for the UP dataset.
Figure 19. OAs of 3D-ResNet and 3D-ResNeXt with different ratios of training samples for the KSC dataset.
Abstract
1. Introduction
- The designed HSI classification model adopts a highly modularized network structure based on residual connections and group convolutions to mitigate the accuracy-degradation phenomenon and reduce the number of parameters. The whole network consists of two consecutive 3D blocks, which improves the classification accuracy of classes with relatively small numbers of samples.
- Because HSIs have many spectral bands, the HSI data cubes after dimensionality reduction by the convolutional layer, combined with the spectral features learned by 3D-ResNet, are used as the input of the 3D-ResNeXt spectral-spatial feature learning network. This approach enriches the information in the network input, especially for classes with few samples, and helps the network learn more effectively.
- Owing to the class imbalance among HSI categories, the proposed network adopts a cross-entropy loss function modified by a label smoothing strategy to further improve the classification results.
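The label smoothing strategy replaces the one-hot target q(k) with (1 − ε)q(k) + ε/K before the cross-entropy is computed, so the network is never pushed toward fully confident predictions on scarce classes. A minimal NumPy sketch (the smoothing factor `eps = 0.1` and the toy 3-class example are illustrative, not the paper's settings):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Blend one-hot targets with a uniform distribution over the K classes."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def cross_entropy(probs, targets):
    """Mean cross-entropy between predicted probabilities and (soft) targets."""
    return float(np.mean(-np.sum(targets * np.log(probs + 1e-12), axis=-1)))

# Toy example: 3 classes, one sample labelled class 0.
one_hot = np.array([[1.0, 0.0, 0.0]])
probs = np.array([[0.7, 0.2, 0.1]])
hard_loss = cross_entropy(probs, one_hot)                       # standard CE
soft_loss = cross_entropy(probs, smooth_labels(one_hot, 0.1))   # label-smoothed CE
```

The smoothed targets still sum to one, and the smoothed loss penalizes over-confident predictions slightly more than the standard loss does.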
2. Proposed Framework
2.1. Overview of Proposed Network Architecture
2.2. Data Preprocessing
2.3. Block 1: Spectral Feature Extraction and Feature Fusion
2.4. Block 2: 3D-ResNeXt Spectral-Spatial Feature Learning
2.5. Modified Loss Function
3. Experiments and Results
3.1. Experimental Datasets
3.2. Experimental Setup
3.3. Experimental Parameter Discussion
3.3.1. Effect of Different Ratios of Training, Validation and Test Datasets
3.3.2. Effect of Cardinality
3.4. Classification Results Comparison with State-of-the-Art
3.5. Discussion
4. Conclusion
Author Contributions
Funding
Conflicts of Interest
References
- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
- Notesco, G.; Ben Dor, E.; Brook, A. Mineral mapping of Makhtesh Ramon in Israel using hyperspectral remote sensing day and night LWIR images. In Proceedings of the 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014; pp. 1–4.
- Villa, P.; Pepe, M.; Boschetti, M.; de Paulis, R. Spectral mapping capabilities of sedimentary rocks using hyperspectral data in Sicily, Italy. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 2625–2628.
- Foucher, P.-Y.; Poutier, L.; Déliot, P.; Puckrin, E.; Chataing, S. Hazardous and noxious substance detection by hyperspectral imagery for marine pollution application. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 7694–7697.
- Zhou, K.; Cheng, T.; Deng, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W. Assessment of spectral variation between rice canopy components using spectral feature analysis of near-ground hyperspectral imaging data. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016; pp. 1–4.
- Chang, C.-I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Springer Science & Business Media: Berlin, Germany, 2003; Volume 1.
- Hossain, M.A.; Hasin-E-Jannat; Ahmed, B.; Mamun, M.A. Feature mining for effective subspace detection and classification of hyperspectral images. In Proceedings of the 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox's Bazar, Bangladesh, 16–18 February 2017; pp. 544–547.
- Gan, Y.; Luo, F.; Liu, J.; Lei, B.; Zhang, T.; Liu, K. Feature Extraction Based Multi-Structure Manifold Embedding for Hyperspectral Remote Sensing Image Classification. IEEE Access 2017, 5, 25069–25080.
- Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2012, 101, 652–675.
- Huang, H.; Duan, Y.; Shi, G.; Lv, Z. Fusion of Weighted Mean Reconstruction and SVMCK for Hyperspectral Image Classification. IEEE Access 2018, 6, 15224–15235.
- Paul, S.; Kumar, D.N. Partial informational correlation-based band selection for hyperspectral image classification. J. Appl. Remote Sens. 2019, 13, 046505.
- Fauvel, M.; Benediktsson, J.A.; Chanussot, J.; Sveinsson, J.R. Spectral and Spatial Classification of Hyperspectral Data Using SVMs and Morphological Profiles. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3804–3814.
- Zhang, Y.; He, J. Bilateral texture filtering for spectral-spatial hyperspectral image classification. J. Eng. 2019, 2019, 9173–9177.
- Makantasis, K.; Doulamis, A.D.; Doulamis, N.D.; Nikitakis, A. Tensor-Based Classification Models for Hyperspectral Data Analysis. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6884–6898.
- Makantasis, K.; Voulodimos, A.; Doulamis, A.; Doulamis, N.; Georgoulas, I. Hyperspectral Image Classification with Tensor-Based Rank-R Learning Models. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 3125–3148.
- Li, C.; Wang, Y.; Zhang, X.; Gao, H.; Yang, Y.; Wang, J. Deep belief network for spectral–spatial classification of hyperspectral remote sensor data. Sensors 2019, 19, 204.
- Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962.
- Liu, Y.; Cao, G.; Sun, Q.; Siegel, M. Hyperspectral classification via deep networks and superpixel segmentation. Int. J. Remote Sens. 2015, 36, 3459–3482.
- Li, H.; Su, A.; Liu, C.; Wu, Y.; Chen, S. Bisupervised network with pyramid pooling module for land cover classification of satellite remote sensing imagery. J. Appl. Remote Sens. 2019, 13, 048502.
- Li, S.; Zhu, X.; Liu, Y.; Bao, J. Adaptive spatial-spectral feature learning for hyperspectral image classification. IEEE Access 2019, 7, 61534–61547.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
- Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
- Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2016, 55, 844–853.
- Mei, S.; Ji, J.; Hou, J.; Li, X.; Du, Q. Learning Sensor-Specific Spatial-Spectral Features of Hyperspectral Images via Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4520–4533.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- Lee, H.; Kwon, H. Going Deeper with Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
- Bai, Y.; Zhang, Q.; Lu, Z.; Zhang, Y. SSDC-DenseNet: A Cost-Effective End-to-End Spectral-Spatial Dual-Channel Dense Network for Hyperspectral Image Classification. IEEE Access 2019, 7, 84876–84889.
- Jia, S.; Zhao, B.; Tang, L.; Feng, F.; Wang, W. Spectral–spatial classification of hyperspectral remote sensing image based on capsule network. J. Eng. 2019, 2019, 7352–7355.
- Feng, F.; Wang, S.; Wang, C.; Zhang, J. Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN. Sensors 2019, 19, 5276.
- Zhao, W.; Du, S. Spectral–Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
- Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
- Li, S.; Zhu, X.; Bao, J. Hierarchical Multi-Scale Convolutional Neural Networks for Hyperspectral Image Classification. Sensors 2019, 19, 1714.
- Zhong, Z.; Li, J.; Ma, L.; Jiang, H.; Zhao, H. Deep residual networks for hyperspectral image classification. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1824–1827.
- Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral image classification with deep feature fusion network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Zhang, C.; Li, G.; Du, S.; Tan, W.; Gao, F. Three-dimensional densely connected convolutional network for hyperspectral remote sensing image classification. J. Appl. Remote Sens. 2019, 13, 016519.
- Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2145–2160.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, CA, USA, 3–8 December 2012; pp. 1097–1105.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- He, T.; Zhang, Z.; Zhang, H.; Zhang, Z.; Xie, J.; Li, M. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 558–567.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Müller, R.; Kornblith, S.; Hinton, G.E. When does label smoothing help? In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 4696–4705.
- Computational Intelligence Group of the Basque University (UPV/EHU). Hyperspectral Remote Sensing Scenes. Available online: http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 20 October 2019).
- Luo, F.; Du, B.; Zhang, L.; Zhang, L.; Tao, D. Feature learning using spatial-spectral hypergraph discriminant analysis for hyperspectral image. IEEE Trans. Cybern. 2018, 49, 2406–2419.
- Xu, Y.; Zhang, L.; Du, B.; Zhang, F. Spectral–spatial unified networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5893–5909.
No. | Class | No. of Samples |
---|---|---|
1 | Alfalfa | 46 |
2 | Corn-notill | 1428 |
3 | Corn-mintill | 830 |
4 | Corn | 237 |
5 | Grass-pasture | 483 |
6 | Grass-trees | 730 |
7 | Grass-pasture-mowed | 28 |
8 | Hay-windrowed | 478 |
9 | Oats | 20 |
10 | Soybean-notill | 972 |
11 | Soybean-mintill | 2455 |
12 | Soybean-clean | 593 |
13 | Wheat | 205 |
14 | Woods | 1265 |
15 | Buildings-Grass-Trees-Drives | 386 |
16 | Stone-Steel-Towers | 93 |
Total | 10249 |
No. | Class | No. of Samples |
---|---|---|
1 | Asphalt | 6631 |
2 | Meadows | 18649 |
3 | Gravel | 2099 |
4 | Trees | 3064 |
5 | Painted metal sheets | 1345 |
6 | Bare Soil | 5029 |
7 | Bitumen | 1330 |
8 | Self-Blocking Bricks | 3682 |
9 | Shadows | 947 |
Total | 42776 |
No. | Class | No. of Samples |
---|---|---|
1 | Scrub | 761 |
2 | Willow swamp | 243 |
3 | CP hammock | 256 |
4 | Slash pine | 252 |
5 | Oak/Broadleaf | 161 |
6 | Hardwood | 229 |
7 | Swamp | 105 |
8 | Graminoid marsh | 431 |
9 | Spartina marsh | 520 |
10 | Cattail marsh | 404 |
11 | Salt marsh | 419 |
12 | Mud flats | 503 |
13 | Water | 927 |
Total | 5211 |
Layer | Output Size | 3D-ResNeXt | Connected to |
---|---|---|---|
Input | | | |
CONV1 | | | Input |
CONV2 | same | | CONV1 |
CONV3 | same | | CONV2 |
CONV4 | | | Input |
Add | | | CONV3, CONV4 |
Block2_1 | same | | Add |
Block2_2 | | | Block2_1 |
Block2_3 | | | Block2_2 |
Block2_4 | | | Block2_3 |
Flatten | 26624 | | Block2_4 |
Dense1 | 1024 | 1024 | Flatten |
Dense2 (SoftMax) | 16 | 16 | Dense1 |
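The parameter savings of the grouped (cardinality) convolutions in the table's Block2 layers follow from simple arithmetic: splitting a convolution into C groups divides its weight count by C, since each group only connects 1/C of the input channels to 1/C of the output channels. A small sketch (the channel counts and 3×3×3 kernel are illustrative, not the paper's exact configuration):

```python
def conv3d_params(c_in, c_out, k, groups=1):
    """Weight count of a 3-D convolution with a k x k x k kernel, bias ignored.
    With groups > 1, each group connects c_in/groups inputs to c_out/groups
    outputs; summed over all groups this gives (c_in/groups) * k^3 * c_out."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * (k ** 3) * c_out

dense = conv3d_params(256, 256, 3)              # ordinary convolution
grouped = conv3d_params(256, 256, 3, groups=8)  # cardinality C = 8
# The grouped layer uses 8x fewer weights than the dense one.
```

The same channel widths therefore cost C times fewer parameters per grouped layer, which is why the 3D-ResNeXt models in the cardinality table stay smaller than comparable 3D-ResNet variants.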
Ratios | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|
2:1:7 | 2751.91 | 24.05 | 99.22 |
3:1:6 | 3140.21 | 20.88 | 99.82 |
4:1:5 | 2709.78 | 17.41 | 99.90 |
5:1:4 | 3977.16 | 14.86 | 99.96 |
Ratios | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|
2:1:7 | 6077.30 | 54.85 | 99.93 |
3:1:6 | 7095.93 | 52.18 | 99.98 |
4:1:5 | 6857.42 | 39.32 | 99.99 |
5:1:4 | 8630.59 | 40.39 | 99.99 |
Ratios | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|
2:1:7 | 1260.70 | 10.27 | 99.53 |
3:1:6 | 1384.68 | 8.66 | 99.90 |
4:1:5 | 1352.76 | 7.25 | 99.96 |
5:1:4 | 1656.15 | 5.88 | 99.99 |
Spatial Size | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|
7×7 | 1503.56 | 5.38 | 99.90 |
9×9 | 2173.70 | 8.75 | 99.90 |
11×11 | 3977.16 | 14.86 | 99.96 |
13×13 | 4721.14 | 19.41 | 99.95 |
15×15 | 6034.21 | 23.28 | 99.90 |
Spatial Size | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|
7×7 | 3230.44 | 18.43 | 99.93 |
9×9 | 4432.76 | 25.46 | 99.95 |
11×11 | 6857.42 | 39.32 | 99.99 |
13×13 | 9039.73 | 80.44 | 99.99 |
15×15 | 11318.89 | 78.97 | 99.99 |
Spatial Size | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|
7×7 | 719.82 | 2.51 | 99.95 |
9×9 | 1045.13 | 3.87 | 99.98 |
11×11 | 1656.15 | 5.88 | 99.99 |
13×13 | 2278.41 | 8.72 | 99.99 |
15×15 | 2807.50 | 11.92 | 99.99 |
Datasets | C | Params | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|---|---|
IN | 6 | 21,562,960 | 2912.32 | 11.82 | 99.88 |
8 | 28,825,456 | 3977.16 | 14.86 | 99.96 | |
10 | 36,130,960 | 4351.49 | 15.55 | 99.90 | |
UP | 6 | 12,118,608 | 5533.12 | 31.61 | 99.98 |
8 | 16,235,376 | 6857.42 | 39.32 | 99.99 | |
10 | 20,395,152 | 8077.80 | 43.65 | 99.99 | |
KSC | 6 | 18,414,160 | 1337.70 | 4.87 | 99.97 |
8 | 24,628,080 | 1656.15 | 5.88 | 99.99 | |
10 | 30,885,008 | 1994.49 | 6.78 | 99.95 |
Method | SVM | Rank-1 FNN | 1D-CNN | 2D-CNN-LR | 3D-CNN | SSRN | 3D-ResNet | 3D-ResNeXt |
---|---|---|---|---|---|---|---|---|
IN | 81.67 | 92.82 | 87.81 | 89.99 | 99.76 | 99.19 | 99.68 | 99.96 |
UP | 90.58 | 93.50 | 92.28 | 94.04 | 99.50 | 99.79 | 99.93 | 99.99 |
KSC | 80.29 | 95.51 | 89.23 | 94.11 | 99.81 | 99.61 | 99.86 | 99.99 |
3D-CNN | SSRN | 3D-ResNet | 3D-ResNeXt | |
---|---|---|---|---|
OA (%) | 99.76 | 99.19 | 99.68 | 99.96 |
AA (%) | 99.59 | 98.93 | 99.62 | 99.80 |
Kappa ×100 | 99.72 | 99.07 | 99.64 | 99.95 |
1 | 100.0 | 97.82 | 100.0 | 100.0 |
2 | 100.0 | 99.17 | 99.65 | 100.0 |
3 | 100.0 | 99.53 | 99.38 | 100.0 |
4 | 98.94 | 97.79 | 97.89 | 100.0 |
5 | 98.95 | 99.24 | 98.96 | 100.0 |
6 | 99.30 | 99.51 | 100.0 | 100.0 |
7 | 100.0 | 98.70 | 100.0 | 100.0 |
8 | 100.0 | 99.85 | 100.0 | 100.0 |
9 | 100.0 | 98.50 | 100.0 | 100.0 |
10 | 100.0 | 98.74 | 100.0 | 100.0 |
11 | 99.69 | 99.30 | 100.0 | 100.0 |
12 | 100.0 | 98.43 | 98.01 | 100.0 |
13 | 100.0 | 100.0 | 100.0 | 100.0 |
14 | 100.0 | 99.31 | 100.0 | 100.0 |
15 | 99.37 | 99.20 | 100.0 | 100.0 |
16 | 97.22 | 97.82 | 100.0 | 97.22 |
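The OA, AA, and Kappa values reported in the tables above all derive from a confusion matrix: OA is the fraction of correctly labelled samples, AA averages the per-class accuracies (so small classes such as Oats weigh equally), and Cohen's kappa corrects OA for chance agreement. A minimal NumPy sketch (illustrative, not the paper's evaluation code):

```python
import numpy as np

def classification_scores(cm):
    """OA, AA, and Cohen's kappa from a confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                                 # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))            # mean per-class accuracy
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Toy 2-class example.
oa, aa, kappa = classification_scores([[40, 10], [5, 45]])
```

For the toy matrix this yields OA = AA = 0.85 and kappa = 0.7; the tables report kappa scaled by 100.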
3D-CNN | SSRN | 3D-ResNet | 3D-ResNeXt | |
---|---|---|---|---|
OA (%) | 99.50 | 99.79 | 99.93 | 99.99 |
AA (%) | 99.38 | 99.66 | 99.91 | 99.99 |
Kappa ×100 | 99.34 | 99.72 | 99.91 | 99.98 |
1 | 99.67 | 99.92 | 99.94 | 99.98 |
2 | 99.89 | 99.96 | 100.0 | 100.0 |
3 | 99.80 | 98.46 | 100.0 | 99.98 |
4 | 99.87 | 99.69 | 99.93 | 100.0 |
5 | 99.71 | 99.99 | 99.86 | 100.0 |
6 | 99.52 | 99.94 | 99.80 | 100.0 |
7 | 99.26 | 99.82 | 100.0 | 100.0 |
8 | 96.73 | 99.22 | 99.67 | 99.98 |
9 | 100.0 | 99.95 | 100.0 | 100.0 |
3D-CNN | SSRN | 3D-ResNet | 3D-ResNeXt | |
---|---|---|---|---|
OA (%) | 99.81 | 99.61 | 99.86 | 99.99 |
AA (%) | 99.74 | 99.33 | 99.81 | 99.99 |
Kappa ×100 | 99.79 | 99.56 | 99.84 | 99.99 |
1 | 100.0 | 99.70 | 100.0 | 100.0 |
2 | 100.0 | 99.88 | 100.0 | 100.0 |
3 | 100.0 | 99.00 | 99.01 | 100.0 |
4 | 97.25 | 98.26 | 99.06 | 99.99 |
5 | 100.0 | 99.03 | 100.0 | 100.0 |
6 | 100.0 | 99.43 | 100.0 | 100.0 |
7 | 100.0 | 97.03 | 100.0 | 100.0 |
8 | 99.42 | 99.54 | 100.0 | 100.0 |
9 | 100.0 | 99.70 | 99.52 | 100.0 |
10 | 100.0 | 99.96 | 100.0 | 100.0 |
11 | 100.0 | 99.80 | 100.0 | 100.0 |
12 | 100.0 | 100.0 | 100.0 | 100.0 |
13 | 100.0 | 100.0 | 100.0 | 100.0 |
Method | Training Time (s) | Test Time (s) |
---|---|---|
3D-CNN | 1157.71 | 4.19 |
SSRN | 767.33 | 3.53 |
3D-ResNet | 2604.53 | 9.99 |
3D-ResNeXt | 3977.16 | 14.86 |
Method | Training Time (s) | Test Time (s) |
---|---|---|
3D-CNN | 1872.98 | 12.53 |
SSRN | 1368.08 | 12.45 |
3D-ResNet | 5042.26 | 28.39 |
3D-ResNeXt | 6857.42 | 39.32 |
Method | Training Time (s) | Test Time (s) |
---|---|---|
3D-CNN | 535.18 | 2.05 |
SSRN | 386.14 | 1.73 |
3D-ResNet | 1207.17 | 4.24 |
3D-ResNeXt | 1656.15 | 5.88 |
3D-ResNeXt (Cross-Entropy) | 3D-ResNeXt (with Label Smoothing) | ||
---|---|---|---|
IN | OA (%) | 99.83 | 99.96 |
AA (%) | 99.70 | 99.80 | |
Kappa ×100 | 99.81 | 99.95 | |
UP | OA (%) | 99.93 | 99.99 |
AA (%) | 99.91 | 99.99 | |
Kappa ×100 | 99.91 | 99.98 | |
KSC | OA (%) | 99.71 | 99.99 |
AA (%) | 99.76 | 99.99 | |
Kappa ×100 | 99.68 | 99.99 |
Datasets | Method | Params | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|---|---|
IN | 3D-ResNet-4 | 32,176,496 | 2604.53 | 9.99 | 99.68 |
3D-ResNet-6 | 34,472,560 | 5230.70 | 19.42 | 99.29 | |
3D-ResNeXt-4 | 28,825,456 | 3977.16 | 14.86 | 99.96 | |
3D-ResNeXt-6 | 29,268,080 | 5957.54 | 22.09 | 99.99 | |
UP | 3D-ResNet-4 | 19,586,416 | 5042.26 | 28.39 | 99.93 |
3D-ResNet-6 | 21,882,480 | 9582.73 | 54.62 | 99.92 | |
3D-ResNeXt-4 | 16,235,376 | 6857.42 | 39.32 | 99.99 | |
3D-ResNeXt-6 | 16,678,000 | 11088.90 | 64.23 | 99.98 | |
KSC | 3D-ResNet-4 | 27,979,120 | 1207.17 | 4.24 | 99.86 |
3D-ResNet-6 | 30,275,184 | 2402.87 | 8.72 | 99.62 | |
3D-ResNeXt-4 | 24,628,080 | 1656.15 | 5.88 | 99.99 | |
3D-ResNeXt-6 | 25,070,704 | 2707.75 | 9.82 | 99.96 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, P.; Cui, Z.; Gan, Z.; Liu, F. Three-Dimensional ResNeXt Network Using Feature Fusion and Label Smoothing for Hyperspectral Image Classification. Sensors 2020, 20, 1652. https://doi.org/10.3390/s20061652