
Spectral-Spatial Boundary Detection in Hyperspectral Images

Published: 01 January 2022

Abstract

In this paper, we propose a novel method for boundary detection in close-range hyperspectral images. The method can effectively predict the boundaries of objects of similar colour but different materials. To extract the material information in the image, the spatial distribution of the spectral responses of the different materials, or endmembers, is first estimated by hyperspectral unmixing. The resulting abundance map gives the fraction of each endmember at each pixel. The abundance map is used as a supportive feature: the spectral signature and the abundance vector of each pixel are fused to form a new spectral feature vector. Different spectral similarity measures are then adopted to construct a sparse spectral-spatial affinity matrix that characterizes the similarity between the spectral feature vectors of neighbouring pixels within a local neighbourhood. After that, a spectral clustering method is applied to produce eigenimages. Finally, the boundary map is constructed from the most informative eigenimages. We created a new HSI dataset and used it to compare the proposed method with four alternative methods, one designed for hyperspectral images and three for RGB images. The results show that our method outperforms the alternatives and can cope with several scenarios that methods based on colour images cannot handle.
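The pipeline summarised in the abstract (unmixing, feature fusion, sparse spectral-spatial affinity, spectral clustering, boundary map) can be sketched end-to-end on a tiny synthetic cube. This is a minimal illustration, not the paper's implementation: it assumes a simple multiplicative-update NMF for unmixing, a Gaussian similarity over 4-connected neighbours as the sparse affinity, and an unnormalised graph Laplacian for the clustering step; the paper's actual similarity measures and eigenimage selection differ.

```python
import numpy as np

# Synthetic stand-in for a close-range hyperspectral image.
rng = np.random.default_rng(0)
H, W, B = 8, 8, 16                       # height, width, spectral bands
cube = rng.random((H, W, B))
X = cube.reshape(-1, B)                  # pixels x bands

# 1) Unmixing: estimate per-pixel abundances with a basic
#    multiplicative-update NMF (illustrative choice, not the paper's).
def simple_nmf(X, k, iters=200, seed=0):
    r = np.random.default_rng(seed)
    Wm = r.random((X.shape[0], k)) + 1e-3   # abundances
    Hm = r.random((k, X.shape[1])) + 1e-3   # endmember spectra
    for _ in range(iters):
        Hm *= (Wm.T @ X) / (Wm.T @ Wm @ Hm + 1e-9)
        Wm *= (X @ Hm.T) / (Wm @ Hm @ Hm.T + 1e-9)
    return Wm

abundances = simple_nmf(X, k=3)          # pixels x endmembers (abundance map)

# 2) Feature fusion: concatenate each pixel's spectrum and abundance vector.
features = np.hstack([X, abundances])

# 3) Sparse spectral-spatial affinity: Gaussian similarity between each
#    pixel and its 4-connected spatial neighbours only.
def local_affinity(features, H, W, sigma=1.0):
    n = H * W
    A = np.zeros((n, n))
    for r in range(H):
        for c in range(W):
            i = r * W + c
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < H and cc < W:
                    j = rr * W + cc
                    d2 = np.sum((features[i] - features[j]) ** 2)
                    A[i, j] = A[j, i] = np.exp(-d2 / (2 * sigma ** 2))
    return A

A = local_affinity(features, H, W)

# 4) Spectral clustering step: low-order eigenvectors of the graph
#    Laplacian, reshaped into "eigenimages" (skip the trivial eigenvector).
L = np.diag(A.sum(axis=1)) - A
_, vecs = np.linalg.eigh(L)
eigenimages = vecs[:, 1:4].reshape(H, W, 3)

# 5) Boundary map: gradient magnitude of the selected eigenimages.
gy, gx = np.gradient(eigenimages, axis=(0, 1))
boundary = np.sqrt((gy ** 2 + gx ** 2).sum(axis=2))
print(boundary.shape)  # (8, 8)
```

On real data, the affinity would be stored as a sparse matrix and the most informative eigenimages selected rather than simply taking the first few, but the structure of the computation is the same.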


Cited By

  • (2023) "QTN: Quaternion Transformer Network for Hyperspectral Image Classification," IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 12, pp. 7370–7384, Dec. 2023, doi: 10.1109/TCSVT.2023.3283289.


Published In

IEEE Transactions on Image Processing, Volume 31, 2022, 3518 pages

Publisher

IEEE Press


Qualifiers

  • Research-article
