Joint Alternate Small Convolution and Feature Reuse for Hyperspectral Image Classification
Figure 1. The diagram of the data pre-processing procedure.
Figure 2. Proposed convolutional neural network (CNN) architecture.
Figure 3. The Indian Pines image: (a) the 21st band image; (b) the ground truth of Indian Pines, where the white area represents the unlabeled pixels.
Figure 4. Mean spectral signatures of 16 classes in the Indian Pines dataset.
Figure 5. 2D spectral feature maps of 16 classes in the Indian Pines dataset.
Figure 6. The Salinas image: (a) the 21st band image; (b) the ground truth of Salinas, where the white area represents the unlabeled pixels.
Figure 7. Mean spectral signatures of 16 classes in the Salinas dataset.
Figure 8. 2D spectral feature maps of 16 classes in the Salinas dataset.
Figure 9. The Pavia University image: (a) the 21st band image; (b) the ground truth of Pavia University, where the white area represents the unlabeled pixels.
Figure 10. Mean spectral signatures of 9 classes in the Pavia University dataset.
Figure 11. 2D spectral feature maps of 9 classes in the Pavia University dataset.
Figure 12. Classification results of the proposed method when nc_layer = 2 and g = 8/20/32: (a) classification accuracy per class; (b) overall accuracy/average accuracy/kappa coefficient (OA/AA/Kappa).
Figure 13. Classification results of the proposed method when g = 20 and nc_layer = 2/4/8: (a) classification accuracy per class; (b) OA/AA/Kappa.
Figure 14. Spectral curves of 100 bands and 144 bands in the Indian Pines dataset.
Figure 15. Classification maps of different CNN architectures on the Indian Pines dataset.
Figure 16. Classification maps of different CNN architectures on the Salinas dataset.
Figure 17. Classification maps of different CNN architectures on the Pavia University dataset.
Abstract
1. Introduction
- Unlike existing HSI classification methods, this work transforms the 1D spectral vector of each hyperspectral pixel into a 2D spectral feature matrix. Mapping the spectral features from 1D to 2D space highlights the variations among samples, especially those from different classes. This enables the CNN to fully exploit the spectral information of each band and to extract the spectral features of the hyperspectral data accurately, while weakening the interference of highly correlated bands in HSI classification.
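This 1D-to-2D mapping can be sketched in a few lines of NumPy; the zero-padding scheme and row-major reshape used here are illustrative assumptions (the 196-band Indian Pines vectors happen to fill a 14 × 14 matrix exactly):

```python
import numpy as np

def spectral_vector_to_matrix(spectrum):
    """Map a 1D spectral vector to a 2D spectral feature matrix.

    The vector is zero-padded to the next perfect square and reshaped
    row by row (the padding scheme is an assumption; a 196-band vector
    reshapes to 14 x 14 with no padding needed).
    """
    n = len(spectrum)
    side = int(np.ceil(np.sqrt(n)))
    padded = np.zeros(side * side, dtype=np.float32)
    padded[:n] = spectrum
    return padded.reshape(side, side)

# One 196-band Indian Pines pixel becomes a 14 x 14 matrix.
matrix = spectral_vector_to_matrix(np.random.rand(196))
print(matrix.shape)  # (14, 14)
```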
- The entire network is built from convolutional layers with small 3 × 3 or 1 × 1 kernels. Converting the 1D spectral vector to a 2D spectral feature matrix weakens, but does not eliminate, the interference of highly correlated bands in HSI classification. Kernels of different sizes provide local receptive fields of varying sizes, so that after multilayer abstraction the correlation among spectral bands is gradually weakened and the network learns the features of HSIs meticulously and robustly. Furthermore, cascaded 1 × 1 convolutional layers increase the non-linearity of the network and make the learned spectral features increasingly abstract, which further weakens the correlation among bands and allows the features of hyperspectral data to be learned effectively.
- 1 × 1 and 3 × 3 convolutional layers are cascaded to form a special composite layer. The 1 × 1 convolutional layer integrates the high-level spectral features output by the preceding layer from a global perspective and increases the compactness of the proposed CNN architecture, while the 3 × 3 convolutional layer learns the integrated features in detail from multiple local perspectives. Multiple composite layers are cascaded so that 1 × 1 and 3 × 3 convolutional layers alternate throughout the network. Through a cross-layer connection, the input and output of each composite layer are concatenated along the feature dimension and passed to the next composite layer, thus accomplishing feature reuse. This combination of alternating small convolutions and feature reuse is called the ASC–FR module. When extracting features from hyperspectral data, the ASC–FR module constantly switches between global and local perspectives, ensuring that the spectral features are fully utilized after multilayer abstraction, that the deep features of hyperspectral data are extracted comprehensively and meticulously, and that the adverse effect of strong inter-band correlation on classification is weakened. To a certain extent, it also alleviates overfitting and vanishing gradients in the proposed CNN architecture, thereby improving the accuracy of HSI classification.
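One composite layer with feature reuse can be sketched in plain NumPy. The weight shapes, ReLU placement, and initialization below are illustrative assumptions rather than the paper's exact configuration; the essential structure is 1 × 1 convolution, then 3 × 3 convolution, then concatenation of input and output along the channel axis:

```python
import numpy as np

def conv2d(x, kernels, pad):
    """'Same'-padded 2D convolution with ReLU.

    x: (C_in, H, W); kernels: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = kernels.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, w = x.shape[1:]
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernels[o])
    return np.maximum(out, 0.0)

def composite_layer(x, w1, w3):
    """1x1 conv integrates channels globally, 3x3 conv learns locally;
    the layer's input is concatenated with its output (feature reuse)."""
    y = conv2d(x, w1, pad=0)  # 1 x 1: cross-channel integration
    y = conv2d(y, w3, pad=1)  # 3 x 3: local spatial learning
    return np.concatenate([x, y], axis=0)

rng = np.random.default_rng(0)
g = 20                                      # width expansion rate
x = rng.standard_normal((4, 14, 14))        # 4 input feature maps
w1 = rng.standard_normal((g, 4, 1, 1)) * 0.1
w3 = rng.standard_normal((g, g, 3, 3)) * 0.1
out = composite_layer(x, w1, w3)
print(out.shape)  # (24, 14, 14): 4 reused input channels + g = 20 new ones
```

Because the input passes through unchanged, later layers can still see early features directly, which is the cross-layer connection described above.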
2. Related Work
2.1. Small Convolution
2.2. Convolutional Neural Network (CNN)-Based Classification for Hyperspectral Image (HSI)
3. HSI Classification Method Based on Alternating Small Convolutions and Feature Reuse (ASC–FR)
3.1. Data Pre-processing
3.2. Proposed CNN Architecture
4. Experiments and Analysis
4.1. Datasets and Data Pre-Processing
4.1.1. Indian Pines Dataset
4.1.2. Salinas Dataset
4.1.3. Pavia University Dataset
4.2. Experimental Design
- (1) Comparison of the classification performance of the proposed method under different parameter settings. Two comparisons are needed because different width expansion rates (g) and network depths lead to different classification performance. ① With the number of composite layers, denoted nc_layer, set to 2, the classification performance is compared for g = 8/20/32. ② With g set to 20, the classification performance is compared for nc_layer = 2/4/8.
- (2) Comparison with other methods. The classification performance of the proposed method is compared with that of deep learning and non-deep-learning methods on HSIs.
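The interplay of g and nc_layer determines how wide the concatenated features grow through the network. A small bookkeeping sketch (the single-channel starting width c0 = 1 is an assumption for illustration) shows the channel count entering each composite layer when every layer appends g new feature maps to its input:

```python
def asc_fr_channels(c0, g, nc_layer):
    """Channel widths entering each composite layer under feature reuse:
    each layer's output is its input concatenated with g new maps."""
    widths = [c0]
    for _ in range(nc_layer):
        widths.append(widths[-1] + g)
    return widths

# Comparison 1: nc_layer fixed at 2, g varied.
for g in (8, 20, 32):
    print(g, asc_fr_channels(1, g, 2))
# Comparison 2: g fixed at 20, nc_layer varied.
for nc in (2, 4, 8):
    print(nc, asc_fr_channels(1, 20, nc))
```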
4.3. Experimental Results and Analyses
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Hughes, G. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 1968, 14, 55–63.
- Tong, Q.; Zhang, B.; Zheng, L. Hyperspectral Remote Sensing—Principle, Technology and Application; Higher Education Press: Beijing, China, 2006.
- Wang, M.; Gao, K.; Wang, L.J.; Miu, X.H. A Novel Hyperspectral Classification Method Based on C5.0 Decision Tree of Multiple Combined Classifiers. In Proceedings of the Fourth International Conference on Computational and Information Sciences, Chongqing, China, 17–19 August 2012; pp. 373–376.
- Rojas-Moraleda, R.; Valous, N.A.; Gowen, A.; Esquerre, C.; Härtel, S.; Salinas, L.; O’Donnell, C. A frame-based ANN for classification of hyperspectral images: Assessment of mechanical damage in mushrooms. Neural Comput. Appl. 2017, 28, 969–981.
- Sun, W.; Liu, C.; Xu, Y.; Tian, L.; Li, W.Y. A Band-Weighted Support Vector Machine Method for Hyperspectral Imagery Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1710–1714.
- Li, J.; Du, Q.; Li, W.; Li, Y.S. Optimizing extreme learning machine for hyperspectral image classification. J. Appl. Remote Sens. 2015, 9, 097296.
- Wei, Y.; Xiao, G.; Deng, H.; Chen, H.; Tong, M.G.; Zhao, G.; Liu, Q.T. Hyperspectral image classification using FPCA-based kernel extreme learning machine. Optik Int. J. Light Electron Opt. 2015, 126, 3942–3948.
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. arXiv 2017, arXiv:1707.01083v2.
- Shrivastava, A.; Sukthankar, R.; Malik, J.; Gupta, A. Beyond Skip Connections: Top-Down Modulation for Object Detection. arXiv 2016, arXiv:1612.06851.
- Dai, J.F.; Qi, H.Z.; Xiong, Y.W.; Li, Y.; Zhang, G.D.; Hu, H.; Wei, Y.C. Deformable Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
- Lin, M.; Chen, Q.; Yan, S. Network in Network. arXiv 2014, arXiv:1312.4400v3.
- Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Sun, Y.; Wang, X.; Tang, X. Deeply learned face representations are sparse, selective, and robust. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 2892–2900.
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Chen, Y.S.; Jiang, H.L.; Li, C.Y.; Jia, X.P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Yue, Q.; Ma, C. Deep Learning for Hyperspectral Data Classification through Exponential Momentum Deep Convolution Neural Networks. J. Sens. 2016, 2016, 3150632.
- Hu, W.; Huang, Y.Y.; Wei, L.; Zhang, F.; Li, H.C. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619.
- Lee, H.; Kwon, H. Going Deeper with Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
- Lee, H.; Kwon, H. Contextual deep CNN based hyperspectral classification. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016.
- Li, Y.; Zhang, H.; Shen, Q. Spectral-Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67.
- Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Alam, F.I.; Zhou, J.; Liew, W.C.; Jia, X.P. CRF learning with CNN features for hyperspectral image segmentation. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6890–6893.
- Yuan, Q.Q.; Zhang, Q.; Li, J.; Shen, H.F.; Zhang, L.P. Hyperspectral Image Denoising Employing a Spatial-Spectral Deep Residual Convolutional Neural Network. arXiv 2018, arXiv:1806.00183.
- Liu, Q.S.; Hang, R.L.; Song, H.H.; Zhu, F.P.; Plaza, J.; Plaza, A. Adaptive Deep Pyramid Matching for Remote Sensing Scene Classification. arXiv 2016, arXiv:1611.03589v1.
- Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2016, 219, 88–98.
- Makantasis, K.; Doulamis, A.D.; Doulamis, N.D.; Nikitakis, A. Tensor-Based Classification Models for Hyperspectral Data Analysis. IEEE Trans. Geosci. Remote Sens. 2018, 99, 1–15.
- Chen, Y.S.; Zhu, L.; Ghamisi, P.; Jia, X.P.; Li, G.Y.; Tang, L. Hyperspectral Images Classification with Gabor Filtering and Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2355–2359.
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
- Chen, Y.; Zhao, X.; Jia, X. Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
- Fu, Q.Y.; Yu, X.C.; Tan, X.; Wei, X.P.; Zhao, J.L. Classification of Hyperspectral Imagery Based on Denoising Autoencoders. J. Geomat. Sci. Technol. 2016, 33, 485–489.
Class | Name | Number | Train | Test |
---|---|---|---|---|
C1 | Alfalfa | 46 | 10 | 36 |
C2 | Corn-notill | 1428 | 345 | 1083 |
C3 | Corn-mintill | 830 | 219 | 611 |
C4 | Corn | 237 | 64 | 173 |
C5 | Grass-pasture | 483 | 133 | 350 |
C6 | Grass-trees | 730 | 188 | 542 |
C7 | Grass-pasture-mowed | 28 | 7 | 21 |
C8 | Hay-windrowed | 478 | 115 | 363 |
C9 | Oats | 20 | 8 | 12 |
C10 | Soybean-notill | 972 | 243 | 729 |
C11 | Soybean-mintill | 2455 | 626 | 1829 |
C12 | Soybean-clean | 593 | 136 | 457 |
C13 | Wheat | 205 | 46 | 159 |
C14 | Woods | 1265 | 311 | 954 |
C15 | Buildings-Grass-Trees-Drives | 386 | 85 | 301 |
C16 | Stone-Steel-Towers | 93 | 26 | 67 |
Total | | 10,249 | 2562 | 7687 |
Class | Name | Number | Train | Test |
---|---|---|---|---|
C1 | green_weeds_1 | 2009 | 524 | 1485 |
C2 | green_weeds_2 | 3726 | 933 | 2793 |
C3 | Fallow | 1976 | 514 | 1462 |
C4 | Fallow rough plow | 1394 | 343 | 1051 |
C5 | Fallow smooth | 2678 | 671 | 2007 |
C6 | Stubble | 3959 | 977 | 2982 |
C7 | Celery | 3579 | 931 | 2648 |
C8 | Grapes untrained | 11,271 | 2826 | 8445 |
C9 | Soil vinyard | 6203 | 1536 | 4667 |
C10 | Corn senesced | 3278 | 813 | 2465 |
C11 | Lettuce_romaine_4wk | 1068 | 263 | 805 |
C12 | Lettuce_romaine_5wk | 1927 | 493 | 1434 |
C13 | Lettuce_romaine_6wk | 916 | 211 | 705 |
C14 | Lettuce_romaine_7wk | 1070 | 238 | 832 |
C15 | Vinyard_untrained | 7268 | 1806 | 5462 |
C16 | Vinyard_vertical | 1807 | 453 | 1354 |
Total | | 54,129 | 13,532 | 40,597 |
Class | Name | Number | Train | Test |
---|---|---|---|---|
C1 | Asphalt | 6631 | 1668 | 4963 |
C2 | Meadows | 18,649 | 4583 | 14,066 |
C3 | Gravel | 2099 | 512 | 1587 |
C4 | Trees | 3064 | 831 | 2233 |
C5 | Painted metal sheets | 1345 | 331 | 1014 |
C6 | Bare Soil | 5029 | 1270 | 3759 |
C7 | Bitumen | 1330 | 334 | 996 |
C8 | Self-Blocking Bricks | 3682 | 936 | 2746 |
C9 | Shadows | 947 | 229 | 718 |
Total | | 42,776 | 10,694 | 32,082 |
Band Number | OA | AA | Kappa |
---|---|---|---|
100 | 86.94% | 85.73% | 0.8510 |
144 | 89.76% | 88.37% | 0.8838 |
196 | 89.88% | 90.92% | 0.8845 |
 | nc_layer = 2, g = 8 | nc_layer = 2, g = 20 | nc_layer = 2, g = 32 | g = 20, nc_layer = 2 | g = 20, nc_layer = 4 | g = 20, nc_layer = 8 |
---|---|---|---|---|---|---|
C1 | 100.00% | 99.93% | 100.00% | 99.93% | 100.00% | 100.00% |
C2 | 99.61% | 99.82% | 99.78% | 99.82% | 99.78% | 99.80% |
C3 | 99.86% | 99.73% | 99.86% | 99.73% | 100.00% | 100.00% |
C4 | 99.33% | 99.81% | 99.62% | 99.81% | 99.71% | 99.92% |
C5 | 99.20% | 99.45% | 99.60% | 99.45% | 98.96% | 99.20% |
C6 | 99.90% | 99.93% | 99.98% | 99.93% | 99.90% | 100.00% |
C7 | 99.85% | 99.96% | 100.00% | 99.96% | 99.96% | 100.00% |
C8 | 87.65% | 90.91% | 89.75% | 90.91% | 89.60% | 89.95% |
C9 | 99.59% | 99.57% | 99.42% | 99.57% | 99.59% | 99.72% |
C10 | 98.35% | 98.18% | 98.45% | 98.18% | 98.96% | 99.06% |
C11 | 98.76% | 99.25% | 99.01% | 99.25% | 99.14% | 98.68% |
C12 | 99.17% | 99.30% | 99.79% | 99.30% | 99.37% | 99.24% |
C13 | 99.15% | 100.00% | 100.00% | 100.00% | 99.86% | 100.00% |
C14 | 97.16% | 97.29% | 96.96% | 97.29% | 99.28% | 98.26% |
C15 | 83.38% | 86.75% | 86.92% | 86.75% | 84.49% | 84.52% |
C16 | 99.48% | 99.78% | 99.92% | 99.78% | 99.63% | 99.96% |
OA | 94.82% | 96.01% | 95.81% | 96.01% | 95.49% | 95.68% |
AA | 97.53% | 98.11% | 98.07% | 98.11% | 98.02% | 98.12% |
Kappa | 0.9423 | 0.9555 | 0.9533 | 0.9555 | 0.9497 | 0.9493 |
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 |
---|---|---|---|---|---|---|---|---|---|
C1 | 4790 | 0 | 31 | 0 | 2 | 1 | 108 | 70 | 2 |
C2 | 6 | 13,871 | 2 | 58 | 0 | 169 | 0 | 7 | 0 |
C3 | 29 | 1 | 1341 | 0 | 0 | 0 | 1 | 179 | 0 |
C4 | 1 | 42 | 0 | 2172 | 0 | 2 | 0 | 0 | 0 |
C5 | 1 | 0 | 0 | 0 | 1011 | 0 | 0 | 0 | 0 |
C6 | 12 | 150 | 0 | 3 | 1 | 3580 | 0 | 10 | 0 |
C7 | 60 | 0 | 1 | 0 | 0 | 0 | 887 | 2 | 0 |
C8 | 64 | 2 | 212 | 0 | 0 | 7 | 0 | 2478 | 0 |
C9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 716 |
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 |
---|---|---|---|---|---|---|---|---|---|
C1 | 95.72% | 0.00% | 0.62% | 0.00% | 0.04% | 0.02% | 2.16% | 1.40% | 0.04% |
C2 | 0.04% | 98.29% | 0.01% | 0.41% | 0.00% | 1.20% | 0.00% | 0.05% | 0.00% |
C3 | 1.87% | 0.06% | 86.46% | 0.00% | 0.00% | 0.00% | 0.06% | 11.54% | 0.00% |
C4 | 0.05% | 1.89% | 0.00% | 97.97% | 0.00% | 0.09% | 0.00% | 0.00% | 0.00% |
C5 | 0.10% | 0.00% | 0.00% | 0.00% | 99.90% | 0.00% | 0.00% | 0.00% | 0.00% |
C6 | 0.32% | 3.99% | 0.00% | 0.08% | 0.03% | 95.31% | 0.00% | 0.27% | 0.00% |
C7 | 6.32% | 0.00% | 0.11% | 0.00% | 0.00% | 0.00% | 93.37% | 0.21% | 0.00% |
C8 | 2.32% | 0.07% | 7.67% | 0.00% | 0.00% | 0.25% | 0.00% | 89.69% | 0.00% |
C9 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 100.00% |
Layer | Output Size | Proposed CNN | NIN | LeNet-5 |
---|---|---|---|---|
Conv | 14 × 14 | | | |
Pool | 7 × 7 | 3 × 3 max pool, stride 2 | | |
Conv | 7 × 7 | | | |
Pool | 3 × 3 | 2 × 2 average pool, stride 2 | | |
Classification | 1 × 1 | global average pool | global average pool | 120-D FC, 84-D FC |
 | | 16-D FC, Softmax | | |
FLOPs | | 12,693,600 | 13,869,600 | 175,704 |
Dataset | Index | LeNet-5 | NIN | Proposed CNN |
---|---|---|---|---|
Indian Pines | OA | 85.78% | 88.97% | 89.95% |
AA | 85.33% | 86.36% | 90.92% | |
Kappa | 0.8379 | 0.8742 | 0.8845 | |
Salinas | OA | 94.28% | 95.09% | 96.01% |
AA | 97.38% | 97.94% | 98.11% | |
Kappa | 0.9363 | 0.9453 | 0.9555 | |
Pavia University | OA | 93.37% | 95.24% | 96.15% |
AA | 92.12% | 94.58% | 95.19% | |
Kappa | 0.9120 | 0.9359 | 0.9488 |
Methods | Training Time | Testing Time | OA |
---|---|---|---|
LeNet-5 | 74 s | 5.37 s | 85.78% |
NIN | 174.36 s | 27.68 s | 88.97% |
Proposed CNN | 131.53 s | 24.39 s | 89.95% |
References | Baseline | Feature | Training Set | Accuracy |
---|---|---|---|---|
Hu et al. [19] | CNN | Spectral | 8 classes, 200 samples per class | 90.16% (90.74%) |
Chen et al. [17] | CNN | Spectral | 150 samples per class | 87.81% (88.16%) |
Chen et al. [32] | DBN | Spectral–Spatial | 50% | 91.34% (92.58%) |
Fu et al. [33] | DAE | Spectral | 30% | 89.82% (90.18%) |
Sun et al. [5] | BWSVM | Spectral | 25% | 88% (89.95%) |
Li et al. [6] | KELM | Spectral | 10% | 80.37% (84.53%) |
Li et al. [6] | KSVM | Spectral | 10% | 79.17% (84.53%) |
Wei et al. [7] | FPCA+KELM | Spectral | 15% | 87.62% (87.92%) |
Hu et al. [19] | RBF-SVM | Spectral | 8 classes, 200 samples per class | 87.60% (90.74%) |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Gao, H.; Yang, Y.; Li, C.; Zhou, H.; Qu, X. Joint Alternate Small Convolution and Feature Reuse for Hyperspectral Image Classification. ISPRS Int. J. Geo-Inf. 2018, 7, 349. https://doi.org/10.3390/ijgi7090349