Multiscale Densely-Connected Fusion Networks for Hyperspectral Images Classification

Published: 01 January 2021

Abstract

Convolutional neural networks (CNNs) have proven to be powerful tools for hyperspectral image (HSI) classification. However, previous CNN-based HSI classification methods adopt only fixed-size patches to train the CNN model, and such single-scale patches may not capture the complex spatial structural information in HSIs. In addition, although different layers of a CNN extract features at multiple scales, the traditional CNN model uses only the features from the highest layer for classification, ignoring the strong complementary yet correlated information among different layers. To address these issues, this paper proposes a multiscale densely-connected convolutional network (MS-DenseNet) framework that fully exploits multiscale information for HSI classification. Specifically, for each pixel, the MS-DenseNet first extracts its surrounding patches at multiple scales, which separately constitute training and testing samples at each scale. Within each scale, instead of plain forward convolutional layers, the MS-DenseNet adopts dense blocks, which connect each layer to every other layer in a feed-forward fashion and can thus exploit the information among different layers for training and testing. Furthermore, since patches of different scales are highly correlated, the MS-DenseNet introduces several additional dense blocks to fuse the multiscale information among different layers for the final classification. Experimental results on several real HSIs demonstrate the superiority of the proposed MS-DenseNet over the single-scale CNN classification model and several well-known classification methods.
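The dense connectivity described above, where each layer receives the channel-wise concatenation of all preceding feature maps, can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the seeded random linear maps stand in for learned convolutions, and the `growth` and `num_layers` values are illustrative only.

```python
import numpy as np

def conv_layer(x, out_channels, seed=0):
    """Stand-in for a convolutional layer: a fixed random channel-mixing
    linear map followed by ReLU. `x` has shape (channels, pixels)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[0], out_channels))
    return np.maximum(w.T @ x, 0)

def dense_block(x, num_layers=3, growth=4):
    """Dense connectivity: layer i takes the channel-wise concatenation of
    the block input and every earlier layer's output, and contributes
    `growth` new feature maps."""
    features = [x]
    for i in range(num_layers):
        inp = np.concatenate(features, axis=0)        # concat along channels
        features.append(conv_layer(inp, growth, seed=i))
    # The block output concatenates everything, so later stages (e.g. the
    # fusion blocks for different scales) see features from every depth.
    return np.concatenate(features, axis=0)

# A toy 5x5 patch with 8 spectral bands, flattened to (channels, pixels).
patch = np.ones((8, 25))
out = dense_block(patch)
print(out.shape)  # (20, 25): 8 input channels + 3 layers x 4 new maps each
```

In the multiscale setting, one such block would be applied per patch scale (e.g. 3x3, 5x5, 7x7 neighborhoods of the pixel), with further dense blocks concatenating and fusing the per-scale outputs before the classifier.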





        Published In

        IEEE Transactions on Circuits and Systems for Video Technology, Volume 31, Issue 1
        Jan. 2021
        424 pages

        Publisher

        IEEE Press


        Qualifiers

        • Research-article


        Cited By

        • (2025) "WHANet: Wavelet-Based Hybrid Asymmetric Network for Spectral Super-Resolution From RGB Inputs," IEEE Transactions on Multimedia, vol. 27, pp. 414–428, Jan. 2025. DOI: 10.1109/TMM.2024.3521713
        • (2024) "Dimensionality Reduction via Multiple Neighborhood-Aware Nonlinear Collaborative Analysis for Hyperspectral Image Classification," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 10, pp. 9356–9370, Oct. 2024. DOI: 10.1109/TCSVT.2024.3397086
        • (2024) "Multimodal Informative ViT: Information Aggregation and Distribution for Hyperspectral and LiDAR Classification," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 8, pp. 7643–7656, Aug. 2024. DOI: 10.1109/TCSVT.2024.3375511
        • (2024) "FSNA: Few-Shot Object Detection via Neighborhood Information Adaption and All Attention," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 8, pp. 7121–7134, Aug. 2024. DOI: 10.1109/TCSVT.2024.3370600
        • (2024) "AdaptorNAS: A New Perturbation-Based Neural Architecture Search for Hyperspectral Image Segmentation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 3, pp. 1559–1571, Mar. 2024. DOI: 10.1109/TCSVT.2023.3298796
        • (2024) "A Lightweight Multiscale-Multiobject Deep Segmentation Architecture for UAV-Based Consumer Applications," IEEE Transactions on Consumer Electronics, vol. 70, no. 1, pp. 3740–3753, Feb. 2024. DOI: 10.1109/TCE.2024.3367531
        • (2023) "HMFT," Computational Intelligence and Neuroscience, vol. 2023, Jan. 2023. DOI: 10.1155/2023/4725986
        • (2023) "Hyperspectral Meets Optical Flow: Spectral Flow Extraction for Hyperspectral Image Classification," IEEE Transactions on Image Processing, vol. 32, pp. 5181–5196, 2023. DOI: 10.1109/TIP.2023.3312928
        • (2023) "SpecTr: Spectral Transformer for Microscopic Hyperspectral Pathology Image Segmentation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 6, pp. 4610–4624, Oct. 2023. DOI: 10.1109/TCSVT.2023.3326196
        • (2023) "OASNet: Object Affordance State Recognition Network With Joint Visual Features and Relational Semantic Embeddings," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 5, pp. 3368–3382, Oct. 2023. DOI: 10.1109/TCSVT.2023.3324595
