
Article

Multiscale Superpixel-Based Sparse Representation for Hyperspectral Image Classification

1 College of Electrical and Information Engineering, Hunan University, Changsha 418002, China
2 School of Information Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(2), 139; https://doi.org/10.3390/rs9020139
Submission received: 30 November 2016 / Revised: 18 January 2017 / Accepted: 25 January 2017 / Published: 7 February 2017
Figure 1. Schematic illustration of the multiscale superpixel-based sparse representation (MSSR) algorithm for HSI classification.
Figure 2. Indian Pines image: (a) false-color image; and (b) reference image.
Figure 3. University of Pavia image: (a) false-color image; and (b) reference image.
Figure 4. Salinas image: (a) false-color image; and (b) reference image.
Figure 5. Washington DC image: (a) false-color image; and (b) reference image.
Figure 6. Superpixel segmentation results of the Indian Pines image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 3200 and the power exponent n an integer ranging from −3 to 3: (a) n = −3; (b) n = −2; (c) n = −1; (d) n = 0; (e) n = 1; (f) n = 2; and (g) n = 3.
Figure 7. Classification results of the Indian Pines image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 3200 and the power exponent n an integer ranging from −3 to 3: (a) n = −3, OA = 93.14%; (b) n = −2, OA = 96.42%; (c) n = −1, OA = 96.62%; (d) n = 0, OA = 97.08%; (e) n = 1, OA = 95.64%; (f) n = 2, OA = 95.61%; and (g) n = 3, OA = 93.65%.
Figure 8. Classification maps for the Indian Pines image by different algorithms (OA values in parentheses): (a) SVM (78.01%); (b) EMP (92.71%); (c) SRC (68.91%); (d) JSRC (94.42%); (e) MASR (98.27%); (f) SCMK (97.96%); (g) SBDSM (97.08%); and (h) MSSR (98.58%).
Figure 9. Superpixel segmentation results of the University of Pavia image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 3200 and the power exponent n an integer ranging from −3 to 3: (a) n = −3; (b) n = −2; (c) n = −1; (d) n = 0; (e) n = 1; (f) n = 2; and (g) n = 3.
Figure 10. Classification results of the University of Pavia image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 3200 and the power exponent n an integer ranging from −3 to 3: (a) n = −3, OA = 91.74%; (b) n = −2, OA = 91.42%; (c) n = −1, OA = 92.39%; (d) n = 0, OA = 92.60%; (e) n = 1, OA = 92.12%; (f) n = 2, OA = 91.54%; and (g) n = 3, OA = 91.35%.
Figure 11. Classification maps for the University of Pavia image by different algorithms (OA values in parentheses): (a) SVM (86.52%); (b) EMP (91.80%); (c) SRC (77.90%); (d) JSRC (86.78%); (e) MASR (89.45%); (f) SCMK (94.96%); (g) SBDSM (92.60%); and (h) MSSR (95.47%).
Figure 12. Superpixel segmentation results of the Salinas image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 1600 and the power exponent n an integer ranging from −2 to 2: (a) n = −2; (b) n = −1; (c) n = 0; (d) n = 1; and (e) n = 2.
Figure 13. Classification results of the Salinas image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 1600 and the power exponent n an integer ranging from −2 to 2: (a) n = −2, OA = 95.21%; (b) n = −1, OA = 97.04%; (c) n = 0, OA = 98.38%; (d) n = 1, OA = 97.70%; and (e) n = 2, OA = 97.00%.
Figure 14. Superpixel segmentation results of the Washington DC image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 12,800 and the power exponent n an integer ranging from −5 to 5: (a) n = −5; (b) n = −4; (c) n = −3; (d) n = −2; (e) n = −1; (f) n = 0; (g) n = 1; (h) n = 2; (i) n = 3; (j) n = 4; and (k) n = 5.
Figure 15. Classification results of the Washington DC image under different scales. The number of single-scale superpixels is obtained from Equation (5), with the fundamental superpixel number set to 12,800 and the power exponent n an integer ranging from −5 to 5: (a) n = −5, OA = 86.48%; (b) n = −4, OA = 90.29%; (c) n = −3, OA = 92.56%; (d) n = −2, OA = 91.43%; (e) n = −1, OA = 91.84%; (f) n = 0, OA = 93.33%; (g) n = 1, OA = 92.25%; (h) n = 2, OA = 92.49%; (i) n = 3, OA = 93.12%; (j) n = 4, OA = 92.55%; and (k) n = 5, OA = 92.31%.
Figure 16. Classification maps for the Salinas image by different algorithms (OA values in parentheses): (a) SVM (80.23%); (b) EMP (85.84%); (c) SRC (81.94%); (d) JSRC (84.79%); (e) MASR (92.21%); (f) SCMK (94.53%); (g) SBDSM (98.38%); and (h) MSSR (99.41%).
Figure 17. Classification maps for the Washington DC image by different algorithms (OA values in parentheses): (a) SVM (90.98%); (b) EMP (90.28%); (c) SRC (91.95%); (d) JSRC (92.79%); (e) MASR (95.62%); (f) SCMK (94.55%); (g) SBDSM (93.33%); and (h) MSSR (96.60%).
Figure 18. Classification accuracy OA versus different fundamental superpixel numbers $S_f$ on the four test images.
Figure 19. Relationship among the OA value, the number of multiscale superpixels and the fundamental superpixel number $S_f$: (a) Indian Pines image; (b) University of Pavia image; (c) Salinas image; and (d) Washington DC image.
Figure 20. Felzenszwalb-Huttenlocher (FH) segmentation results of the Indian Pines image under different scales. Multiscale superpixels are generated with various scale and smoothing parameters, σ and k_S: (a) σ = 0.2, k_S = 43; (b) σ = 0.2, k_S = 31; (c) σ = 0.2, k_S = 23; (d) σ = 0.3, k_S = 17; (e) σ = 0.4, k_S = 12; (f) σ = 0.4, k_S = 9; and (g) σ = 0.3, k_S = 7.
Figure 21. Simple linear iterative clustering (SLIC) segmentation results of the Indian Pines image under different scales. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations n_S: (a) n_S = 147; (b) n_S = 206; (c) n_S = 288; (d) n_S = 415; (e) n_S = 562; (f) n_S = 780; and (g) n_S = 1055.
Figure 22. Classification results of the Indian Pines image under different scales. The FH segmentation method is applied in the SBDSM algorithm. Multiscale superpixels are generated with various scale and smoothing parameters, σ and k_S: (a) σ = 0.2, k_S = 43, OA = 83.56%; (b) σ = 0.2, k_S = 31, OA = 93.21%; (c) σ = 0.2, k_S = 23, OA = 93.52%; (d) σ = 0.3, k_S = 17, OA = 96.25%; (e) σ = 0.4, k_S = 12, OA = 94.52%; (f) σ = 0.4, k_S = 9, OA = 94.32%; and (g) σ = 0.3, k_S = 7, OA = 94.28%.
Figure 23. Classification results of the Indian Pines image under different scales. The SLIC segmentation method is applied in the SBDSM algorithm. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations n_S: (a) n_S = 147, OA = 89.10%; (b) n_S = 206, OA = 91.45%; (c) n_S = 288, OA = 93.16%; (d) n_S = 415, OA = 96.81%; (e) n_S = 562, OA = 96.52%; (f) n_S = 780, OA = 95.32%; and (g) n_S = 1055, OA = 95.24%.
Figure 24. Classification maps by the MSSR_FH, MSSR_SLIC and MSSR_ERS algorithms: (a) MSSR_FH, OA = 96.54%; (b) MSSR_SLIC, OA = 97.38%; and (c) MSSR_ERS, OA = 98.58%.
Figure 25. Effect of the number of training samples on SVM, EMP, SRC, JSRC, MASR, SCMK, SBDSM and MSSR for the: (a) Indian Pines image; (b) University of Pavia image; (c) Salinas image; and (d) Washington DC image.

Abstract

Recently, superpixel segmentation has been proven to be a powerful tool for hyperspectral image (HSI) classification. Nonetheless, the selection of the optimal superpixel size is a nontrivial task. In addition, compared with single-scale superpixel segmentation, segmenting the same image at different scales yields different structural information. To overcome this drawback while exploiting such structural information, a multiscale superpixel-based sparse representation (MSSR) algorithm for HSI classification is proposed. Specifically, a modified multiscale superpixel segmentation strategy is first applied to the HSI. Once the superpixels at different scales are obtained, joint sparse representation classification is used to classify the multiscale superpixels. Furthermore, majority voting is utilized to fuse the labels of the different-scale superpixels into the final classification result. The MSSR offers two merits. First, multiscale information fusion can more effectively exploit the spatial information of the HSI. Second, in the multiscale superpixel segmentation, except for the first scale, the superpixel number at each scale can be adaptively changed for different HSI datasets based on the spatial complexity of the corresponding HSI. Experiments on four real HSI datasets demonstrate the qualitative and quantitative superiority of the proposed MSSR algorithm over several well-known classifiers.


Graphical Abstract

1. Introduction

A hyperspectral sensor captures hundreds of narrow, contiguous spectral bands from the visible to the infrared spectrum for each image pixel. Therefore, hyperspectral images contain rich spectral-spatial information, which has attracted great attention in different application domains, such as national defense [1], urban planning [2], precision agriculture [3,4] and environmental monitoring [5,6,7].
In the last few decades, HSI classification has been an important issue in remote sensing. In earlier research, different pixel-wise approaches were developed [8,9,10,11]. However, without considering spatial information, the classification results obtained by these approaches usually contain much noise. To further improve classification performance, methods incorporating the spatial information of the HSI have been proposed recently. In these methods, pixels in a small region are assumed to belong to the same material and to have similar spectral properties. Various contextual feature extraction methods used for traditional two-dimensional images have been extended to HSI to improve classification performance, such as the Gabor filter [12], the local binary pattern (LBP) filter [13], the edge-preserving filter (EPF) [14], the two-dimensional Gaussian derivative (GD) filter [15] and the extended morphological profiles (EMPs) filter [16,17]. In addition, to exploit nonlinear information, kernel techniques have been widely used for HSI. For instance, the generalized composite kernel [18], graphic kernel [19], spatial-spectral derivative-aided kernel [20] and probabilistic kernel [21] have been introduced to HSI classification. Furthermore, an HSI can be regarded as a data "cube", and tensor-based classification methods have accordingly been applied to HSI [22,23,24]. Moreover, with the rise of deep learning, spectral-spatial deep learning algorithms [25,26,27] have also been applied to HSI classification, since they can extract latent and invariant features of the HSI.
In light of the operating mechanism of human vision, the key information of a natural image can be captured by learning a sparse coding, and the sparse representation technique has been developed on the basis of this theory. In the last few years, this technique has been extensively employed in the computer vision domain, for example in face recognition [28], feature matching [29,30] and image fusion [31], where it usually achieves state-of-the-art performance. Recently, sparse representation classification (SRC) has also attracted much attention for HSI classification [27,32,33,34,35]. SRC assumes that a test pixel can be approximately represented by a linear combination of all training samples, and the class label of the test pixel is determined by the class that leads to the minimum reconstruction error. For pixel-wise SRC, the HSI classification result usually appears very noisy. To obtain better classification accuracy, Chen et al. [36] proposed joint sparse representation classification (JSRC) for HSI. JSRC assumes that pixels in a fixed window belong to the same class and can thus be simultaneously represented by a set of common atoms from a training dictionary. In the past few years, several modified versions of JSRC have been proposed [20,37,38,39,40,41]. Although these approaches achieve improved performance, the neighborhood of the test pixel is a fixed square window. That is, if the test pixel is located at an image edge or in a detailed region, the neighborhood may contain pixels from different classes, and the classification result is usually unsatisfactory. Therefore, to solve this problem, the shape of the regions should be adaptively changed according to the spatial information of the HSI.
In the image processing field, various superpixel segmentation methods have been widely used [42,43,44], and the superpixel has also been introduced for HSI classification in recent years [45,46,47,48,49]. Each superpixel of an image is a segmentation region that adapts to the spatial structure. Therefore, compared with a fixed window centered at the test pixel, it can more effectively exploit the spatial information. Meanwhile, developing classification methods in a superpixel-by-superpixel manner has lower computational complexity than pixel-wise approaches. However, for single-scale superpixel-based algorithms, the accuracy of the superpixel segmentation directly affects the final results [45,46,50], so the choice of the superpixel size is important. Choosing the optimal superpixel size is not trivial: a size that is too small may not capture enough information, whereas a size that is too large may result in erroneous segmentation. In fact, for single-scale superpixel segmentation, some mixed superpixels consisting of pixels from different classes will still exist in the segmented image. In addition, for the same region of an image, different structural information can be explored by segmenting superpixels at different scales. For this reason, multiscale superpixel-based methods have been used for feature representation, target detection and recognition in some very recent works [50,51,52]. For different applications, the superpixel information of different scales is usually integrated via different strategies, such as adopting the similarity between a pixel and the average of the pixels within the superpixel [50], converting the problem to a sparsity-constrained one [51] and utilizing a convolutional neural network (CNN) [52]. These methods can effectively integrate multiscale information to obtain the optimal result.
In this paper, a modified multiscale superpixel segmentation strategy is proposed, in which the number of superpixels at each scale is related to the complexity of the first principal component of the HSI. Adopting this segmentation strategy, a multiscale superpixel-based sparse representation (MSSR) algorithm is proposed.
The rest of this paper is organized as follows. In Section 2, the JSRC algorithm for HSI classification is briefly introduced. The proposed MSSR for HSI classification is detailed in Section 3. In Section 4, the experimental results and discussion are given. Finally, Section 5 summarizes the paper and suggests future work.

2. JSRC Algorithm for HSI Classification

For HSI, pixels in a fixed window are assumed to come from the same ground materials and share the same spectral characteristics. According to sparse representation theory, the correlations among the pixels within the window can be represented by the joint sparse regularization. Specifically, we denote one pixel of an HSI with $B$ bands as $y_c \in \mathbb{R}^{B}$ and the pixels in a $p \times p$ window as $Y_c = [y_{c1}, y_{c2}, \ldots, y_{cp}] \in \mathbb{R}^{B \times p}$. Let $D = [D_1, D_2, \ldots, D_M] \in \mathbb{R}^{B \times T}$ represent the structure dictionary with $T$ training samples from $M$ distinct classes, where the $D_j$ $(j = 1, 2, \ldots, M)$ are sub-dictionaries. Let $t_j$ be the number of training samples of the $j$-th class, with $\sum_{j=1}^{M} t_j = T$. Then, the pixels $Y_c$ in the window can be represented as:
$Y_c = D A + N$ (1)
where $N$ is the possible noise and $A = [A_1, A_2, \ldots, A_M]$ is the sparse coefficient matrix of $Y_c$. Each $A_j$ $(j = 1, 2, \ldots, M)$ is the component of $A$ corresponding to the sub-dictionary $D_j$ $(j = 1, 2, \ldots, M)$.
According to the JSRC algorithm, the sparse regularization places an $\ell_{\mathrm{row},0}$-norm on the sparse matrix $A$, which amounts to selecting a small number of the most representative nonzero rows of $A$. The joint sparse matrix $A$ can be obtained by solving the following optimization problem:
$\hat{A} = \arg\min_{A} \| Y_c - D A \|_2 \quad \text{s.t.} \quad \| A \|_{\mathrm{row},0} \leq K$ (2)
where $K$ represents the sparsity level. The simultaneous orthogonal matching pursuit (SOMP) algorithm [53] can efficiently solve (2). After the sparse coefficient matrix $\hat{A}$ is obtained, the class label of $Y_c$ is determined by the minimum residual error:
$\mathrm{Class}(Y_c) = \arg\min_{j = 1, 2, \ldots, M} E_j(Y_c)$ (3)
where $E_j(Y_c) = \| Y_c - D_j \cdot \hat{A}_j \|_2$, $j = 1, 2, \ldots, M$, is the reconstruction residual error of the $j$-th class.
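To make Equations (1)-(3) concrete, the following is a minimal Python sketch of a SOMP-based joint sparse classifier. It is an illustrative reconstruction, not the authors' implementation; the function names (somp, jsrc_label), the greedy least-squares update and the NumPy usage are assumptions, and the dictionary-label array is expected as a NumPy integer array of length T.

import numpy as np

def somp(D, Y, K):
    """Greedy SOMP: select at most K shared atoms to jointly represent all columns of Y."""
    residual = Y.copy()
    support = []
    A_sub = np.zeros((0, Y.shape[1]))
    for _ in range(K):
        corr = D.T @ residual                               # (T, p) correlations with all columns
        atom = int(np.argmax(np.linalg.norm(corr, axis=1))) # atom with the largest joint energy
        if atom not in support:
            support.append(atom)
        # Least-squares fit on the current support, then update the shared residual.
        A_sub = np.linalg.lstsq(D[:, support], Y, rcond=None)[0]
        residual = Y - D[:, support] @ A_sub
    A = np.zeros((D.shape[1], Y.shape[1]))
    A[support, :] = A_sub
    return A

def jsrc_label(D, labels, Y, K):
    """Assign the window (or superpixel) Y to the class with the smallest residual, as in Equation (3)."""
    A = somp(D, Y, K)
    classes = np.unique(labels)
    errors = [np.linalg.norm(Y - D[:, labels == j] @ A[labels == j, :]) for j in classes]
    return classes[int(np.argmin(errors))]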

3. Proposed MSSR for HSI Classification

Compared with the fixed-shape neighborhood in the JSRC method, a superpixel is an adaptive spatial region, which helps to obtain better classification performance [45,46,47]. However, as previously mentioned, it is difficult to determine the optimal superpixel size. Meanwhile, the land covers in an HSI have very complex structures with different sizes. Therefore, a multiscale superpixel-based approach is applied in the MSSR algorithm, which can more effectively exploit the spatial information of the HSI. The proposed MSSR algorithm for HSI classification consists of three parts: (1) the generation of multiscale superpixels in the HSI; (2) sparse representation of the HSI with multiscale superpixels; and (3) the fusion of the multiscale classification results. The algorithmic schematic is shown in Figure 1, and a detailed description is given below.

3.1. Generation of Multiscale Superpixels in HSI

As shown in Figure 1, to reduce the computational cost, the principal component analysis (PCA) algorithm [54] is first applied to the original HSI. Since the first principal component contains the major information of the HSI, we denote it as the fundamental image. Then, multiscale superpixel segmentation is applied to the fundamental image. For the multiscale superpixel segmentation, let $F$ represent the fundamental image, let $S_n$ denote the number of superpixels at the $n$-th scale, and let $Y_k^n$ represent the $k$-th superpixel at the $n$-th scale. Then, the fundamental image $F$ can be described as:
$F = \bigcup_{k=1}^{S_n} Y_k^n, \; (n = 0, \pm 1, \pm 2, \ldots, \pm N) \quad \text{and} \quad Y_k^n \cap Y_g^n = \varnothing, \; (k \neq g)$ (4)
In terms of Equation (4), the total number of superpixel scales is $(2N+1)$.
In general, the more complicated the structure of the fundamental image is, the greater the number of segmented superpixels should be. Therefore, in the MSSR algorithm, we relate the number of superpixels at the $n$-th scale, $S_n$, to the complexity of the fundamental image. To be specific, the Canny operator [55] is applied to the fundamental image $F$ to obtain the corresponding edge image. The edge ratio $C$ [56], which is the proportion of nonzero pixels among all pixels in the edge image, reflects the complexity of the fundamental image. Then, $S_n$ is defined as:
$S_n = 2^{n/2} \times S_f \times C, \quad (n = 0, \pm 1, \pm 2, \ldots, \pm N)$ (5)
where $S_f$ is the fundamental number of superpixels, which is selected empirically. In general, the more complicated the fundamental image is, the larger the value of $S_f$ should be. In addition, according to Equation (5), when the fundamental image is more complicated, the number of superpixels at the same scale is also larger. Therefore, the step length between consecutive scales is related to the complexity of the fundamental image, and the variation range of the multiscale superpixel numbers is likewise connected with the image complexity. It should be noted that other, more advanced measures of image complexity might be applied to enhance the performance, but they would increase the computational cost [57,58].
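A minimal sketch of this scale-number computation is given below: the first principal component of the HSI cube is taken as the fundamental image, its complexity is estimated with a Canny edge ratio, and the superpixel number of each scale is derived from Equation (5). scikit-image's Canny detector is assumed here as the edge operator; the original implementation may differ, and all function and variable names are illustrative.

import numpy as np
from skimage import feature

def fundamental_image(hsi_cube):
    """First principal component of an (H, W, B) cube, reshaped to an (H, W) image."""
    H, W, B = hsi_cube.shape
    X = hsi_cube.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)                                # center each band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal directions
    return (X @ Vt[0]).reshape(H, W)                   # first PC scores as a 2-D image

def multiscale_superpixel_numbers(F, S_f, N):
    """S_n = 2^(n/2) * S_f * C for n = -N, ..., N, with C the Canny edge ratio of F."""
    edges = feature.canny(F)                           # binary edge map
    C = edges.mean()                                   # proportion of edge pixels
    return {n: max(1, int(round(2 ** (n / 2) * S_f * C))) for n in range(-N, N + 1)}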
According to the number of superpixels at the $n$-th scale, $S_n$, a graph-based segmentation algorithm is used to generate the $n$-th scale superpixel segmentation. Graph-based image segmentation algorithms are widely used for superpixel segmentation [59,60,61]; among them, the entropy rate superpixel (ERS) [61] method has been demonstrated to be very efficient. Specifically, the fundamental image $F$ is first mapped to a graph $G = (V, E)$, where $V$ is the vertex set denoting the pixels of the fundamental image and $E$ is the edge set representing the pairwise similarities given in the form of a similarity matrix. In the ERS, for the $n$-th scale of superpixel segmentation, the graph is partitioned into $S_n$ connected subgraphs by choosing a subset of edges $A_n \subseteq E$. To obtain compact and homogeneous superpixels, an entropy rate term $H_n(A_n)$ is adopted; meanwhile, a balancing term $B_n(A_n)$ is utilized to encourage superpixels of similar sizes. Therefore, the objective function of the ERS method is given by:
$\max_{A_n} \; H_n(A_n) + \omega_n B_n(A_n) \quad \text{s.t.} \quad A_n \subseteq E$ (6)
where $\omega_n \geq 0$ is the weight of the balancing term. As described in [62], a greedy algorithm effectively solves the optimization problem in (6). After multiscale superpixel segmentation, each test pixel belongs to $(2N+1)$ corresponding superpixels, one per scale. The spatial information of each superpixel is then combined with the spectral information of the pixels within the superpixel for HSI classification. Therefore, each test pixel receives $(2N+1)$ classification results.

3.2. Sparse Representation for HSI with Multiscale Superpixels

The multiscale superpixel segmentation results are combined with the original HSI to obtain a group of HSIs marked with multiscale superpixels. Therefore, there are $(2N+1)$ different marked regions corresponding to each test pixel in the HSI. The pixels within each region are assumed to have similar spectral characteristics; hence, they are simultaneously represented by a few common atoms from a structure dictionary. Assume the superpixel $Y_k^n$ contains $p$ spectral pixels, i.e., $Y_k^n = [y_1, y_2, \ldots, y_p] \in \mathbb{R}^{B \times p}$. Let $A_k^n = [A_{k1}^n, A_{k2}^n, \ldots, A_{kM}^n]$ be the sparse coefficient matrix of $Y_k^n$, where $A_{kj}^n$ $(j = 1, 2, \ldots, M)$ is the component corresponding to the sub-dictionary $D_j$ $(j = 1, 2, \ldots, M)$ in $A_k^n$. The joint sparse matrix $A_k^n$ can be obtained by applying (2):
$\hat{A}_k^n = \arg\min_{A_k^n} \| Y_k^n - D A_k^n \|_2 \quad \text{s.t.} \quad \| A_k^n \|_{\mathrm{row},0} \leq K$ (7)
The reconstruction residual error of each class can be described as:
$E_j(Y_k^n) = \| Y_k^n - D_j \cdot \hat{A}_{kj}^n \|_2, \quad j = 1, 2, \ldots, M$ (8)
The class label of Y k n is represented as:
$\mathrm{Class}(Y_k^n) = \arg\min_{j = 1, 2, \ldots, M} E_j(Y_k^n)$ (9)
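As an illustration of this step, the following sketch classifies every superpixel at every scale with the joint sparse model of Equations (7)-(9). It reuses the jsrc_label function from the sketch in Section 2 and assumes a user-supplied segment(image, n_superpixels) function standing in for ERS (or any other over-segmentation method) that returns an integer label map; none of these names come from the original implementation.

import numpy as np

def classify_scales(hsi_cube, fundamental, D, dict_labels, scale_numbers, K, segment):
    """Return one (H, W) class-label map per scale."""
    H, W, B = hsi_cube.shape
    pixels = hsi_cube.reshape(-1, B).T                     # (B, H*W) spectral matrix
    label_maps = []
    for n, S_n in sorted(scale_numbers.items()):
        seg = segment(fundamental, S_n).reshape(-1)        # superpixel index of every pixel
        labels = np.zeros(H * W, dtype=int)
        for k in np.unique(seg):
            idx = np.where(seg == k)[0]
            Y_k = pixels[:, idx]                           # all pixels of superpixel k
            labels[idx] = jsrc_label(D, dict_labels, Y_k, K)   # joint sparse label, Equations (7)-(9)
        label_maps.append(labels.reshape(H, W))
    return label_maps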

3.3. Fusion of Multiscale Classification Results

For each test pixel, the class labels of the corresponding multiscale superpixels may differ; that is, there are $(2N+1)$ different classification results for an HSI. For these multiscale classification results, a quick and effective decision fusion strategy (i.e., majority voting) is utilized to obtain the final classification result. Specifically, assume the class labels of a test pixel at the different scales are $l_1, l_2, \ldots, l_{2N+1}$, respectively. We count the number of occurrences of each class and denote these counts as $L_1, L_2, \ldots, L_M$, where $\sum_{j=1}^{M} L_j = 2N+1$. The class label of the test pixel is then obtained by:
$L_y = \arg\max \, (L_1, L_2, \ldots, L_M)$ (10)
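A minimal sketch of this fusion step follows, assuming the list of scale-wise label maps produced by the previous sketch; the per-pixel loop is written for clarity rather than speed, and the function name is illustrative.

import numpy as np

def majority_vote(label_maps):
    """Fuse (2N+1) label maps into one map by per-pixel majority voting, as in Equation (10)."""
    stack = np.stack(label_maps, axis=0)          # shape (2N+1, H, W), integer class labels
    _, H, W = stack.shape
    fused = np.zeros((H, W), dtype=int)
    for i in range(H):
        for j in range(W):
            votes = np.bincount(stack[:, i, j])   # occurrence count of each class label
            fused[i, j] = int(np.argmax(votes))   # most frequent label wins
    return fused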

4. Experimental Results and Discussion

In this section, the effectiveness of the proposed MSSR algorithm is tested on the classification of four hyperspectral datasets, i.e., the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Indian Pines image, the Reflective Optics System Imaging Spectrometer (ROSIS-03) University of Pavia image, the AVIRIS Salinas image and the Hyperspectral Digital Image Collection Experiment (HYDICE) Washington DC image. The performance of the proposed MSSR algorithm is compared with those of seven competing classification algorithms, i.e., SVM [8], EMP [16], SRC [36], JSRC [36], multiscale adaptive sparse representation (MASR) [63], superpixel-based classification via multiple kernels (SCMK) [46] and the superpixel-based discriminative sparse model (SBDSM) [64]. The EMP, JSRC, MASR, SCMK, SBDSM and MSSR algorithms take advantage of spectral-spatial information for HSI classification, while the SVM and SRC algorithms exploit only the spectral information. It should be noted that, for the SBDSM algorithm, the sparse dictionary is built by directly extracting pixels from the HSI; therefore, compared with the MSSR algorithm, the SBDSM algorithm is based on single-scale superpixels and sparse representation.

4.1. Datasets Description

The Indian Pines image was acquired by the AVIRIS sensor over the agricultural Indian Pines site in northwestern Indiana. The size of this image is 145 × 145 × 220, of which 20 water absorption bands are discarded. The spatial resolution of the image is 20 m per pixel, and the spectral coverage ranges from 0.2 to 2.4 μm. The reference data of this image contain sixteen classes, most of which are different kinds of crops. Figure 2 shows the false-color composite of the Indian Pines image and the corresponding reference data.
The University of Pavia image was captured by the ROSIS-03 sensor over an urban area surrounding the University of Pavia, Italy. The ROSIS-03 sensor generated an image with a geometric resolution of 1.3 m per pixel and a spectral coverage ranging from 0.43 to 0.86 μm. This image has a size of 610 × 340 × 120, of which 12 spectral bands are removed due to high noise. The reference data of this image contain nine ground-truth classes. Figure 3 shows the false-color composite of the University of Pavia image and the corresponding reference data.
The Salinas image was captured by the AVIRIS sensor over Salinas Valley, California. The image consists of 512 × 217 pixels and 224 spectral bands, of which 20 water absorption bands were removed. The geometric resolution of this image is 3.7 m. The reference data of this image contain sixteen ground-truth classes. Figure 4 shows the false-color composite of the Salinas image and the corresponding reference data.
The Washington DC image was recorded by the HYDICE sensor over the Washington DC Mall. The image consists of 280 × 307 pixels, each pixel containing 210 spectral bands. The spectral coverage ranges from 0.4 to 2.5 μm, and the spatial resolution of the image is 3 m per pixel. In the experiments, the bands from 0.9 to 1.4 μm, where the atmosphere is opaque, are discarded from the dataset, leaving 191 bands. Figure 5 shows the false-color composite of the Washington DC image and the corresponding reference data, which consider six classes of interest.

4.2. Comparison of Results

In the experiments, the SVM algorithm adopting a spectral Gaussian kernel is implemented with the LIBSVM package [65], accelerated with Visual C++ software (Version 6.0). The parameters C and σ of the SVM are obtained by ten-fold cross-validation. For the EMP algorithm, the feature extraction parameters are set to the defaults in [16]; once the morphological features are acquired, an SVM classifier is applied for HSI classification. For the MASR and SCMK algorithms, the parameters are set to the same values as in [63] and [46], respectively. The parameters of the SRC and JSRC algorithms are tuned to reach the best results in these experiments. For the MSSR algorithm, the fundamental number of superpixels $S_f$ is set to 3200, 3200, 1600 and 12,800 for the Indian Pines, University of Pavia, Salinas and Washington DC images, respectively, and the number of scales for the four images is set to 7, 7, 5 and 11, respectively. For the SBDSM algorithm, the number of superpixels is obtained by applying Equation (5), in which the fundamental numbers of superpixels for the four images are the same as in the MSSR algorithm and the power exponent n is set to zero. In the following subsections, the parameters of the proposed MSSR algorithm and of the SBDSM algorithm are analyzed further. In addition, the different algorithms are compared based on the overall accuracy (OA), average accuracy (AA) and kappa coefficient; these quantitative values are averaged over ten runs for each algorithm to diminish possible bias.
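For completeness, a short sketch of how the reported metrics can be computed from true and predicted labels is given below; the formulas for OA, AA and the kappa coefficient are standard, and the function name and integer-encoded labels are illustrative assumptions.

import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy, average accuracy and kappa coefficient from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                                 # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))                # mean of per-class accuracies
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa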
In the experiment on the Indian Pines image, 10% of the labeled samples of each class are randomly selected as the training set and the remainder as the test set (see Table 1). Figure 6 and Figure 7, respectively, show the superpixel segmentation maps and classification maps at different single scales. In the two figures, the number of single-scale superpixels is obtained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer ranging from −3 to 3. Table 2 lists the quantitative values at different scales. Obviously, different scales yield different performances for different classes. For example, when the power exponent is set to −3, the OA value is the lowest; however, at this scale, the optimal classification performances are obtained for the sixth, eighth, ninth and thirteenth classes. Conversely, although the optimal OA is acquired when the power exponent n is zero, the classification accuracy of the third class at this scale is the lowest. These results demonstrate that, for HSI classification, no single optimal scale suits all spatial structures, and multiscale information fusion may be a better approach. In addition, the classification maps from the compared classifiers are illustrated in Figure 8, and the quantitative results are tabulated in Table 3. As can be seen, the SVM and SRC algorithms, which consider only the spectral information, deliver classification maps with much noise. Meanwhile, the spectral-spatial classification algorithms (EMP, JSRC, MASR, SCMK, SBDSM and MSSR) significantly outperform the pixel-wise algorithms. Compared with the SBDSM algorithm, the MSSR algorithm achieves more accurate estimations in detailed areas. This result indicates that the multiscale strategy can overcome the non-uniformity of the spatial structure; at the same time, the problem arising from mixed superpixels in single-scale superpixel segmentation can be well alleviated. Moreover, in terms of OA, AA and the kappa coefficient, the proposed MSSR algorithm also outperforms the other compared algorithms.
In the experiment on the University of Pavia image, we randomly select 1% of the labeled pixels of each class as training samples and the remaining labeled pixels as testing samples (see Table 4). The superpixel segmentation maps and classification maps at different scales are shown in Figure 9 and Figure 10, and the quantitative results are given in Table 5. As can be seen from Figure 9, Figure 10 and Table 5, as the superpixel number increases, more details are presented in the segmentation and classification maps. In this case, the classification performance improves for regions containing an abundance of details. For example, the regions of the fourth and ninth classes are small, and their classification accuracies improve as the superpixel number increases. Meanwhile, the classification accuracy of the seventh class is always 100%; the reason is that the region of this class is relatively smooth, so the superpixel-based classification method obtains a good result. In addition, the classification maps and quantitative results of the different classifiers on the University of Pavia image are shown in Figure 11 and Table 6. As shown there, the proposed MSSR algorithm achieves competitive results in terms of both visual quality and quantitative metrics. Moreover, the MSSR algorithm obtains higher classification accuracies than the SBDSM algorithm for almost all classes, and for some classes the increase is substantial. For instance, in Table 6, the classification accuracy of pixels representing gravel climbs from 78.19% to 98.78%. These results demonstrate the superiority of multiple scales over a single scale, which allows pixels in detailed or near-edge regions to be classified more accurately.
The third and fourth experiments are conducted on the Salinas and Washington DC images. For the Salinas image, 0.2% of the labeled data are randomly chosen for training and the remaining 99.8% for testing (see Table 7); given the very small proportion of training samples, this experiment is quite challenging. For the Washington DC image, 2% of the labeled pixels are selected as the training set and the remaining 98% as the testing set (see Table 8). For the Salinas image, Figure 12 and Figure 13 respectively show the superpixel segmentation maps and classification maps at different scales, and the corresponding quantitative classification results are tabulated in Table 9. For the Washington DC image, the superpixel segmentation maps and classification maps at different scales are illustrated in Figure 14 and Figure 15, and Table 10 shows the classification accuracies at different scales. As can be seen, for the Salinas image, although the proportion of training samples is very small, high OA values are obtained at all scales. Meanwhile, some classes, such as the first, third and ninth classes, reach 100% classification accuracy. This is because the Salinas image has many large homogeneous regions, which makes superpixel segmentation easy. Obviously, erroneous segmentation appears when the number of superpixels is too small; in this case, the classification performance of some classes with small regions, such as the eleventh and twelfth classes, deteriorates sharply. The Washington DC image, by contrast, consists of many heterogeneous regions; therefore, the optimal number of segmented superpixels is large, and the OA value is relatively steady near the optimal scale. The qualitative and quantitative results of the different algorithms on the Salinas and Washington DC images are shown in Figure 16 and Figure 17 and Table 11 and Table 12. We can observe that the proposed MSSR algorithm is usually superior to the other classifiers on the two datasets. In particular, for the Salinas image, the superpixel-based sparse representation algorithms greatly improve the classification accuracy compared with the other methods under the condition of very limited training samples.
The average running times over ten realizations of the proposed MSSR algorithm and the other algorithms are given in Table 13. We implemented these experiments using MATLAB on a computer with an Intel(R) Xeon(R) CPU E5-2603 v3 @1.60 GHz and 96 GB of RAM. As can be seen, in the case of a comparatively large proportion of training samples, the SBDSM algorithm consumes much less execution time than the other algorithms, which demonstrates the high efficiency of the superpixel-based sparse classification strategy. However, on account of its multiscale processing, the MSSR algorithm consumes more computation time. Meanwhile, with fewer training samples, the SBDSM algorithm no longer has an advantage in computation speed; the main reason is that the computational cost of the training processes of the SVM, EMP and SCMK algorithms is significantly reduced in that setting. In addition, among the multiscale sparse algorithms, the time consumption of the MSSR algorithm is lower than that of the MASR method, which illustrates the effectiveness of the superpixel-based strategy. Moreover, the running time is expected to be further reduced by adopting a general-purpose graphics processing unit.

4.3. Effect of Superpixel Scale Selection

We first analyze the effect of the number of single-scale superpixels. In this analysis, the training and testing sets for the Indian Pines, University of Pavia, Salinas and Washington DC images are the same as in the aforementioned comparison experiments, and the results are averaged over ten runs. Figure 18 shows the OA values under different fundamental superpixel numbers. The number of single-scale superpixels is obtained by applying Equation (5), in which the power exponent n is set to zero. In these experiments, the fundamental superpixel number $S_f$ is varied from 400 to 51,200. Because the range of $S_f$ is large, we plot it on a log scale; the log values of 400, 800, 1600, 3200, 6400, 12,800, 25,600 and 51,200 are 2.6, 2.9, 3.2, 3.5, 3.8, 4.1, 4.4 and 4.7, respectively. As shown in Figure 18, as the fundamental superpixel number increases, the OA values of the four images first increase and then decrease. This demonstrates that the classification accuracy deteriorates when the superpixel scale is either too small or too large. Moreover, in Figure 18, the highest OA values of the four images are obtained when $S_f$ reaches 3200, 3200, 1600 and 12,800 for the Indian Pines, University of Pavia, Salinas and Washington DC images, respectively. This result illustrates that the classification accuracy is closely related to the complexity of the image. Compared with the other three images, the Washington DC image needs the most superpixels to achieve the optimal classification performance, although its size is relatively small; the main reason is that the spatial structure of this image is comparatively complicated. In addition, the OA values remain high over a large dynamic range. For example, when the fundamental superpixel number is between 6400 and 51,200, the OA values of the Washington DC image are always over 90%. Therefore, multiscale superpixel information can be used for HSI classification.
Figure 19 illustrates the relationship among the OA value, the number of multiscale superpixels and the fundamental superpixel number $S_f$. As before, $S_f$ is plotted on a log scale, with the log values of 400, 800, 1600, 3200, 6400, 12,800, 25,600 and 51,200 being 2.6, 2.9, 3.2, 3.5, 3.8, 4.1, 4.4 and 4.7, respectively. The training sets for the four images are the same as before. The number of superpixel scales is an odd number rising from three to 15. In these experiments, five contiguous fundamental superpixel numbers from the previous experiment are selected, in which the third number corresponds to the optimal OA in the previous experiment. As can be seen, for the Indian Pines and University of Pavia images, the optimal classification accuracies are obtained when the fundamental superpixel number is set to 3200 and the number of scales is set to seven. For the Salinas image, the optimal OA is acquired when the fundamental superpixel number reaches 1600 and the number of scales reaches five. For the Washington DC image, the best classification performance is obtained when the fundamental superpixel number is 12,800 and the number of scales is 11. Among these four hyperspectral images, the Salinas image contains many homogeneous regions, and the Washington DC image has a large number of heterogeneous regions. The experimental results show that, to obtain good classification performance, a relatively complex image requires more scales and more superpixels at each scale.

4.4. Comparison of Different Superpixel Segmentation Methods

In this section, we compare the performance of the adopted ERS algorithm with that of two competing superpixel segmentation algorithms, i.e., the Felzenszwalb-Huttenlocher (FH) algorithm [59] and the simple linear iterative clustering (SLIC) algorithm [42]. The comparison uses the Indian Pines image with the same training and testing sets as before. In the ERS algorithm, the fundamental superpixel number is 3200 and the power exponent n in Equation (5) is an integer changing from −3 to three. In the FH algorithm, multiscale superpixels are generated with different smoothing and scale parameters, σ and k_S, where σ is the Gaussian smoothing parameter and k_S controls the region size. In the SLIC algorithm, the multiscale superpixels are obtained by presetting the number of segments at each scale. For both comparison methods, the superpixel number at each scale is set to be approximately equal to the number generated by the ERS algorithm at that scale. Figure 6, Figure 20 and Figure 21 illustrate the superpixel segmentation results under different scales obtained by the ERS, FH and SLIC algorithms, respectively; the corresponding qualitative and quantitative single-scale results are shown in Figure 7, Figure 22 and Figure 23 and in Table 2, Table 14 and Table 15. In addition, each of the three over-segmentation methods is embedded in the proposed MSSR framework, yielding the MSSR_ERS, MSSR_FH and MSSR_SLIC algorithms, whose classification accuracies and maps are reported in Table 16 and Figure 24. As can be seen from Figure 6, Figure 20 and Figure 21, the FH segments have the most irregular shapes because the algorithm imposes no explicit constraint on boundary length, whereas the SLIC algorithm produces regions of similar size by initializing cluster centers on a uniform grid, and the ERS algorithm includes a balancing term that encourages superpixels of similar size. In terms of single-scale classification performance, the OA values first increase and then decrease as the superpixel number grows. When the superpixel number is too small, classes covering only a few pixels, such as the seventh class, are completely misclassified by the FH and SLIC algorithms; this classification error is induced by the segmentation error. Conversely, when the superpixel number is too large, the small segmented regions lack sufficient spatial information and the classification performance deteriorates. When the three segmentation methods are applied in the proposed MSSR framework, they consume almost the same computational time, and the ERS-based algorithm outperforms the other two over-segmentation-based algorithms. A sketch of how the FH and SLIC multiscale segmentations can be generated is given below.
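A rough sketch of how the FH and SLIC multiscale segmentations in this comparison can be produced with scikit-image is given below; ERS is omitted because it is not part of that library. The base image, the parameter values and the mapping of σ and k_S onto scikit-image's sigma and scale arguments are illustrative assumptions, not the exact implementation used in this work.

    import numpy as np
    from skimage.segmentation import felzenszwalb, slic

    # Stand-in for the base image used for segmentation (e.g., a false-color
    # composite of the HSI); here a random 145 x 145 x 3 array for illustration.
    base_img = np.random.rand(145, 145, 3)

    # FH: one segmentation per scale, parameterized by (sigma, k_S) as in Figure 20.
    fh_params = [(0.2, 43), (0.2, 31), (0.2, 23), (0.3, 17), (0.4, 12), (0.4, 9), (0.3, 7)]
    fh_labels = [felzenszwalb(base_img, scale=k_s, sigma=sigma, min_size=10)
                 for sigma, k_s in fh_params]

    # SLIC: one segmentation per scale, with the preset segment numbers n_S of Figure 21.
    slic_numbers = [147, 206, 288, 415, 562, 780, 1055]
    slic_labels = [slic(base_img, n_segments=n_s, compactness=10.0) for n_s in slic_numbers]

    for lab in fh_labels + slic_labels:
        print(len(np.unique(lab)))   # number of superpixels actually produced at each scale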

4.5. Effects of Training Sample Number

In this section, we analyze the effect of the number of training samples on the aforementioned classifiers for the four HSI datasets. Except for the number of training samples, all other parameters of the classifiers are the same as before. Different percentages of training samples are randomly selected for the Indian Pines (from 1% to 20%), University of Pavia (from 0.2% to 2%), Salinas (from 0.1% to 1%) and Washington DC (from 1% to 10%) images, and the remaining samples are used for testing. The OA values of the different classifiers under different numbers of training samples, averaged over ten runs, are shown in Figure 25. As the number of training samples grows, the classification performance generally improves, and the proposed MSSR algorithm generally outperforms all the other algorithms. The reason is that, compared with the other algorithms, the superpixel-based segmentation and the multiscale strategy exploit the spatial information of the HSI more effectively. In particular, for the Salinas image, the MSSR algorithm obtains high classification accuracy even with a very limited number of training samples, because this image contains large homogeneous regions. A sketch of the per-class random sampling used here is given below.
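The per-class random sampling used in this experiment can be sketched as follows; labels is assumed to be the reference map with zeros marking unlabeled pixels, and the function name is illustrative.

    import numpy as np

    def random_split(labels, train_fraction, seed=None):
        # Randomly pick `train_fraction` of the labeled pixels of each class for
        # training and keep the remaining labeled pixels for testing (0 = unlabeled).
        rng = np.random.default_rng(seed)
        train_mask = np.zeros(labels.shape, dtype=bool)
        for cls in np.unique(labels[labels > 0]):
            idx = np.flatnonzero(labels == cls)
            n_train = max(1, int(round(train_fraction * idx.size)))
            train_mask.flat[rng.choice(idx, size=n_train, replace=False)] = True
        test_mask = (labels > 0) & ~train_mask
        return train_mask, test_mask

    # Example: 10% of each class of a toy 145 x 145 reference map for training.
    toy_labels = np.random.randint(0, 17, size=(145, 145))
    train_mask, test_mask = random_split(toy_labels, 0.10, seed=0)
    print(train_mask.sum(), test_mask.sum())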

5. Conclusions

In this paper, a novel MSSR algorithm is presented for spectral-spatial HSI classification. Instead of using a single-scale superpixel map, the MSSR adopts multiscale superpixels to effectively exploit the spatial information of the HSI. The JSRC is then used to classify the superpixels at each scale, and an effective decision fusion is applied to obtain the final classification result. Unlike common multiscale superpixel segmentation schemes, in the proposed MSSR the step between consecutive superpixel scales adapts to the complexity of the image through the fundamental superpixel number. Experiments on four well-known HSI datasets demonstrate that the proposed MSSR classifier outperforms other state-of-the-art classifiers in terms of both quantitative metrics and the visual quality of the classification maps. Moreover, for an HSI with many large homogeneous regions, the proposed algorithm achieves high classification accuracy even with limited training samples.
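For completeness, the decision-fusion step can be illustrated with a per-pixel majority vote over the label maps produced at the individual scales; this is only one plausible instantiation of the fusion rule summarized above, and the function name is illustrative.

    import numpy as np

    def majority_vote_fusion(label_maps):
        # Fuse per-scale classification maps (same shape, integer labels >= 0)
        # by a per-pixel majority vote over the scales.
        stack = np.stack(label_maps, axis=0)                  # (num_scales, H, W)
        num_classes = int(stack.max()) + 1
        votes = np.stack([(stack == c).sum(axis=0) for c in range(num_classes)], axis=0)
        return votes.argmax(axis=0)                           # ties go to the smaller label

    # Toy example: three scales on a 4 x 4 map.
    maps = [np.random.randint(0, 3, size=(4, 4)) for _ in range(3)]
    print(majority_vote_fusion(maps))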
In these experiments, the initial scale of the multiscale strategy (i.e., the superpixel number at the first scale) is selected empirically. In future work, a more systematic way of adaptively selecting this parameter for different datasets will be studied. Moreover, multi-feature fusion will be integrated into the MSSR method to further improve the classification performance.

Acknowledgments

This work was supported in part by the National Natural Science Fund of China for Distinguished Young Scholars under Grant 61325007, by the National Natural Science Fund of China for International Cooperation and Exchanges under Grant 61520126001 and by the 2017 Scientific Research Project of Education Department of Hunan Province under Grant 1800. The authors would like to thank David A. Landgrebe from Purdue University for providing the AVIRIS image of Indian Pines and Paolo Gamba from University of Pavia for providing the ROSIS data set. The authors would like to thank the National Aeronautics and Space Administration Jet Propulsion Laboratory for providing the AVIRIS image of Salinas and the Spectral Information Technology Application Center of Virginia for providing the HYDICE image of Washington DC. The authors would also like to thank the handling editor and anonymous reviewers for their valuable comments and suggestions, which significantly improved the quality of this paper.

Author Contributions

Shuzhen Zhang and Wei Fu designed the proposed model and implemented the experiments. Shuzhen Zhang drafted the manuscript. Leyuan Fang contributed to the improvement of the proposed model and edited the manuscript. Shutao Li provided overall guidance to the project, reviewed and edited the manuscript and obtained funding to support this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yuan, Y.; Wang, Q.; Zhu, G. Fast hyperspectral anomaly detection via high-order 2-D crossing filter. IEEE Trans. Geosci. Remote Sens. 2015, 53, 620–630. [Google Scholar] [CrossRef]
  2. Heldens, W.; Heiden, U.; Esch, T.; Stein, E.; Muller, A. Can the future EnMAP mission contribute to urban applications? A literature survey. Remote Sens. 2011, 3, 1817–1846. [Google Scholar] [CrossRef]
  3. Lee, M.A.; Huang, Y.; Yao, H.; Thomson, S.J. Determining the effects of storage on cotton and soybean leaf samples for hyperspectral analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2562–2570. [Google Scholar] [CrossRef]
  4. Kanning, M.; Siegmann, B.; Jarmer, T. Regionalization of uncovered agricultural soils based on organic carbon and soil texture estimations. Remote Sens. 2016, 8, 927. [Google Scholar] [CrossRef]
  5. Clark, M.L.; Roberts, D.A. Species-level differences in hyperspectral metrics among tropical rainforest trees as determined by a tree-based classifier. Remote Sens. 2012, 4, 1820–1855. [Google Scholar] [CrossRef]
  6. Ryan, J.P.; Davis, C.O.; Tufillaro, N.B.; Kudela, R.M.; Gao, B. Application of the hyperspectral imager for the coastal ocean to phytoplankton ecology studies in monterey bay, CA, USA. Remote Sens. 2014, 6, 1007–1025. [Google Scholar] [CrossRef]
  7. Brook, A.; Dor, E.B. Quantitative detection of settled dust over green canopy using sparse unmixing of airborne hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 884–897. [Google Scholar] [CrossRef]
  8. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  9. Ratle, F.; Camps-Valls, G.; Weston, J. Semisupervised neural networks for efficient hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2271–2282. [Google Scholar] [CrossRef]
  10. Zhong, Y.; Zhang, L. An adaptive artificial immune network for supervised classification of multi-/hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 894–909. [Google Scholar] [CrossRef]
  11. Jiao, H.; Zhong, Y.; Zhang, L. Artificial DNA computing-based spectral encoding and matching algorithm for hyperspectral remote sensing data. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4085–4104. [Google Scholar] [CrossRef]
  12. Jia, S.; Shen, L.; Li, Q. Gabor feature-based collaborative representation for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1118–1129. [Google Scholar]
  13. Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  14. Kang, X.; Li, S.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677. [Google Scholar] [CrossRef]
  15. Mirzapour, F.; Ghassemian, H. Multiscale gaussian derivative functions for hyperspectral image feature extraction. IEEE Geosci. Remote Sens. Lett. 2016, 13, 525–529. [Google Scholar] [CrossRef]
  16. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
  17. Quesada-Barriuso, P.; Arguello, F.; Heras, D.B. Spectral-spatial classification of hyperspectral images using wavelets and extended morphological profiles. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1177–1185. [Google Scholar] [CrossRef]
  18. Li, J.; Marpu, P.R.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A. Generalized composite kernel framework for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4816–4829. [Google Scholar] [CrossRef]
  19. Camps-Valls, G.; Shervashidze, N.; Borgwardt, K.M. Spatio-spectral remote sensing image classification with graph kernels. IEEE Geosci. Remote Sens. Lett. 2010, 7, 741–745. [Google Scholar] [CrossRef]
  20. Wang, J.; Jiao, L.; Liu, H.; Yang, S.; Liu, F. Hyperspectral image classification by spatial-spectral derivative-aided kernel joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2485–2500. [Google Scholar] [CrossRef]
  21. Liu, J.; Wu, Z.; Li, J.; Plaza, A.; Yuan, Y. Probabilistic-kernel collaborative representation for spatial-spectral hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2371–2384. [Google Scholar] [CrossRef]
  22. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 242–256. [Google Scholar] [CrossRef]
  23. Guo, X.; Huang, X.; Zhang, L.; Zhang, L.; Plaza, A.; Benediktsson, J.A. Support tensor machines for classification of hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3248–3264. [Google Scholar] [CrossRef]
  24. He, Z.; Li, J.; Liu, L.; Zhang, L. Tensor block-sparsity based representation for spectral-spatial hyperspectral image classification. Remote Sens. 2016, 8, 636. [Google Scholar] [CrossRef]
  25. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  26. Chen, Y.; Zhao, X.; Jia, X. Spectral-spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2391. [Google Scholar] [CrossRef]
  27. Liang, H.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8, 99. [Google Scholar] [CrossRef]
  28. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  29. Ma, J.; Zhou, H.; Zhao, J.; Gao, Y.; Jiang, J.; Tian, J. Robust feature matching for remote sensing image registration via locally linear transforming. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6469–6481. [Google Scholar] [CrossRef]
  30. Ma, J.; Zhao, J.; Yuille, A.L. Non-rigid point set registration by preserving global and local structures. IEEE Trans. Image Process. 2016, 25, 53–64. [Google Scholar] [PubMed]
  31. Li, S.; Yin, H.; Fang, L. Remote sensing image fusion via sparse representations over learned dictionaries. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4779–4789. [Google Scholar] [CrossRef]
  32. Srinivas, U.; Chen, Y.; Monga, V.; Nasrabadi, N.M.; Tran, T.D. Exploiting sparsity in hyperspectral image classification via graphical models. IEEE Geosci. Remote Sens. Lett. 2013, 10, 505–509. [Google Scholar] [CrossRef]
  33. Qian, Y.; Ye, M.; Zhou, J. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture feature. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2276–2291. [Google Scholar] [CrossRef]
  34. Fu, W.; Li, S.; Fang, L.; Kang, X.; Benediktsson, J.A. Hyperspectral image classification via shape-adaptive joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 556–567. [Google Scholar] [CrossRef]
  35. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral image classification with robust sparse representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645. [Google Scholar] [CrossRef]
  36. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985. [Google Scholar] [CrossRef]
  37. Zhang, H.; Li, J.; Huang, Y.; Zhang, L. A nonlocal weighted joint sparse representation classification method for hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 2056–2065. [Google Scholar] [CrossRef]
  38. Zhang, E.; Jiao, L.; Zhang, X.; Liu, H.; Wang, S. Class-level joint sparse representation for multifeature-based hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4160–4177. [Google Scholar] [CrossRef]
  39. Peng, X.; Zhang, L.; Yi, Z.; Tan, K.K. Learning locality-constrained collaborative representation for robust face recognition. Pattern Recognit. 2014, 47, 2794–2806. [Google Scholar] [CrossRef]
  40. Peng, X.; Yu, Z.; Yi, Z.; Tang, H. Constructing the L2-graph for robust subspace learning and subspace clustering. IEEE Trans. Cybern. 2016, PP, 1–14. [Google Scholar] [CrossRef] [PubMed]
  41. Yuan, Y.; Lin, J.; Wang, Q. Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization. IEEE Tran. Cybern. 2016, 46, 2966–2977. [Google Scholar] [CrossRef] [PubMed]
  42. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed]
  43. Tian, Z.; Liu, L.; Zhang, Z.; Fei, B. A superpixel-based segmentation for 3D prostate MR images. IEEE Trans. Med. Imag. 2016, 35, 791–801. [Google Scholar] [CrossRef] [PubMed]
  44. Lu, H.; Li, X.; Zhang, L.; Ruan, X.; Yang, M. Dense and sparse reconstruction error based saliency descriptor. IEEE Trans. Image Process. 2016, 25, 1592–1603. [Google Scholar] [CrossRef] [PubMed]
  45. Li, J.; Zhang, H.; Zhang, L. Efficient superpixel-level multitask joint sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351. [Google Scholar]
  46. Fang, L.; Li, S.; Duan, W.; Ren, J.; Benediktsson, J.A. Classification of hyperspectral images by exploiting spectra-spatial information of superpixel via multiple kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674. [Google Scholar] [CrossRef]
  47. Saranathan, A.M.; Parente, M. Uniformity-based superpixel segmentation of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1419–1430. [Google Scholar] [CrossRef]
  48. Wang, Y.; Zhang, Y.; Song, H.W. A Spectral-texture kernel-based classification method for hyperspectral images. Remote Sens. 2016, 8, 919. [Google Scholar] [CrossRef]
  49. Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  50. Tong, N.; Lu, H.; Zhang, L.; Ruan, X. Saliency detection with multiscale superpixels. IEEE Signal Process. Lett. 2014, 21, 1035–1039. [Google Scholar]
  51. Tan, N.; Xu, Y.; Goh, W.B.; Liu, J. Robust multi-scale superpixel classification for optic cup location. Comput. Med. Imag. Grap. 2015, 40, 182–193. [Google Scholar] [CrossRef] [PubMed]
  52. Neubert, P.; Protzel, P. Beyond holistic descriptors, keypoints, and fixed patches: multiscale superpixel grids for place recognition in changing environments. IEEE Robot. Autom. Lett. 2016, 1, 484–491. [Google Scholar] [CrossRef]
  53. Tropp, J.A.; Gilbert, A.C.; Strauss, M.J. Algorithms for simultaneous sparse approximation. part I: Greedy pursuit. Signal Process. 2006, 86, 572–588. [Google Scholar] [CrossRef]
  54. Jolliffe, I.T. Principal Component Analysis; Wiley: New York, NY, USA, 2005. [Google Scholar]
  55. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall: Englewood Cliffs, NJ, USA, 2009. [Google Scholar]
  56. Chacon, M.I.; Corral, A.D. Image complexity measure: A human criterion free approach. In Proceedings of the 2005 Annual Meeting of the North American Fuzzy Information Processing Society, Detroit, MI, USA, 26–28 June 2005; pp. 241–246.
  57. Silva, M.P.D.; Courboulay, V.; Estraillier, P. Image complexity measure based on visual attention. In Proceedings of the eighteenth IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 3281–3284.
  58. Cardaci, M.; Gesù, V.D.; Petrou, M.; Tabacchi, M.E. A fuzzy approach to the evaluation of image complexity. Fuzzy Sets Syst. 2006, 160, 1474–1484. [Google Scholar] [CrossRef]
  59. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
  60. Veksler, O.; Boykov, Y.; Mehrani, P. Superpixels and supervoxels in an energy optimization framework. In Proceedings of the eleventh European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; pp. 211–224.
  61. Liu, M.Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy-rate clustering analysis via maximizing a submodular function subject to matroid constraint. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 99–112. [Google Scholar] [CrossRef] [PubMed]
  62. Nemhauser, G.L.; Wolsey, L.A.; Fisher, M.L. An analysis of approximations for maximizing submodular set functions. Math. Prog. 1978, 14, 265–294. [Google Scholar] [CrossRef]
  63. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification via multiscale adaptive sparse representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7738–7749. [Google Scholar] [CrossRef]
  64. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral-spatial classification of hyperspectral images with a superpixel-based discriminative sparse model. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4186–4201. [Google Scholar] [CrossRef]
  65. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
Figure 1. Schematic illustration of the multiscale superpixel-based sparse representation (MSSR) algorithm for HSI classification.
Figure 2. Indian Pines image: (a) false-color image; and (b) reference image.
Figure 3. University of Pavia image: (a) false-color image; and (b) reference image.
Figure 4. Salinas image: (a) false-color image; and (b) reference image.
Figure 5. Washington DC image: (a) false-color image; and (b) reference image.
Figure 6. Superpixel segmentation results of the Indian Pines image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer changing from -3 to three: (a) n = -3; (b) n = -2; (c) n = -1; (d) n = 0; (e) n = 1; (f) n = 2; and (g) n = 3.
Figure 7. Classification results of the Indian Pines image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer changing from - 3 to three: (a) n = - 3 , OA = 93.14%; (b) n = - 2 , OA = 96.42%; (c) n = - 1 , OA = 96.62%; (d) n = 0 , OA = 97.08%; (e) n = 1 , OA = 95.64%; (f) n = 2 , OA = 95.61%; and (g) n = 3 , OA = 93.65%.
Figure 8. Classification maps for the Indian Pines image by different algorithms (OA values are reported in parentheses): (a) SVM (78.01%); (b) EMP (92.71%); (c) SRC (68.91%); (d) JSRC (94.42%); (e) MASR (98.27%); (f) SCMK (97.96%); (g) SBDSM (97.08%); and (h) MSSR (98.58%).
Figure 9. Superpixel segmentation results of the University of Pavia image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer changing from - 3 to three: (a) n = - 3 ; (b) n = - 2 ; (c) n = - 1 ; (d) n = 0 ; (e) n = 1 ; (f) n = 2 ; and (g) n = 3 .
Figure 10. Classification results of the University of Pavia image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer changing from - 3 to three: (a) n = - 3 , OA = 91.74%; (b) n = - 2 , OA = 91.42%; (c) n = - 1 , OA = 92.39%; (d) n = 0 , OA = 92.60%; (e) n = 1 , OA = 92.12%; (f) n = 2 , OA = 91.54%; and (g) n = 3 , OA = 91.35%.
Figure 11. Classification maps for the Pavia University image by different algorithms (OA values are reported in parentheses): (a) SVM (86.52%); (b) EMP (91.80%); (c) SRC (77.90%); (d) JSRC (86.78%); (e) MASR (89.45%); (f) SCMK (94.96%); (g) SBDSM (92.60%); and (h) MSSR (95.47%).
Figure 12. Superpixel segmentation results of the Salinas image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 1600 and the power exponent n is an integer changing from - 2 to two: (a) n = - 2 ; (b) n = - 1 ; (c) n = 0 ; (d) n = 1 ; and (e) n = 2 .
Figure 13. Classification results of the Salinas image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 1600 and the power exponent n is an integer changing from - 2 to two: (a) n = - 2 , OA = 95.21%; (b) n = - 1 , OA = 97.04%; (c) n = 0 , OA = 98.38%; (d) n = 1 , OA = 97.70%; and (e) n = 2 , OA = 97.00%.
Figure 14. Superpixel segmentation results of the Washington DC image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 12,800 and the power exponent n is an integer changing from - 5 to five: (a) n = - 5 ; (b) n = - 4 ; (c) n = - 3 ; (d) n = - 2 ; (e) n = - 1 ; (f) n = 0 ; (g) n = 1 ; (h) n = 2 ; (i) n = 3 ; (j) n = 4 ; and (k) n = 5 .
Figure 15. Classification results of the Washington DC image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 12,800 and the power exponent n is an integer changing from -5 to five: (a) n = -5, OA = 86.48%; (b) n = -4, OA = 90.29%; (c) n = -3, OA = 92.56%; (d) n = -2, OA = 91.43%; (e) n = -1, OA = 91.84%; (f) n = 0, OA = 93.33%; (g) n = 1, OA = 92.25%; (h) n = 2, OA = 92.49%; (i) n = 3, OA = 93.12%; (j) n = 4, OA = 92.55%; and (k) n = 5, OA = 92.31%.
Figure 16. Classification maps for the Salinas image by different algorithms (OA values are reported in parentheses): (a) SVM (80.23%); (b) EMP (85.84%); (c) SRC (81.94%); (d) JSRC (84.79%); (e) MASR (92.21%); (f) SCMK (94.53%); (g) SBDSM (98.38%); and (h) MSSR (99.41%).
Figure 17. Classification maps for the Washington DC image by different algorithms (OA values are reported in parentheses): (a) SVM (90.98%); (b) EMP (90.28%); (c) SRC (91.95%); (d) JSRC (92.79%); (e) MASR (95.62%); (f) SCMK (94.55%); (g) SBDSM (93.33%); and (h) MSSR (96.60%).
Figure 18. Classification accuracy OA versus different fundamental superpixel numbers S f on the four test images.
Figure 19. Relationship among the OA value, the number of multiscale superpixels and the fundamental superpixel number S f : (a) Indian Pines image; (b) University of Pavia image; (c) Salinas image; and (d) Washington DC image.
Figure 20. Felzenszwalb-Huttenlocher (FH) segmentation results of the Indian Pines image under different scales. Multiscale superpixels are generated with various scales and smoothing parameters, σ and k _ S : (a) σ = 0 . 2 , k _ S = 43 ; (b) σ = 0 . 2 , k _ S = 31 ; (c) σ = 0 . 2 , k _ S = 23 ; (d) σ = 0 . 3 , k _ S = 17 ; (e) σ = 0 . 4 , k _ S = 12 ; (f) σ = 0 . 4 , k _ S = 9 ; and (g) σ = 0 . 3 , k _ S = 7 .
Figure 21. Simple linear iterative clustering (SLIC) segmentation results of the Indian Pines image under different scales. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations n _ S : (a) n _ S = 147; (b) n _ S = 206; (c) n _ S = 288; (d) n _ S = 415; (e) n _ S = 562; (f) n _ S = 780; and (g) n _ S = 1055.
Figure 22. Classification results of the Indian Pines image under different scales. The FH segmentation method is applied in the SBDSM algorithm. Multiscale superpixels are generated with various scales and smoothing parameters, σ and k _ S : (a) σ = 0 . 2 , k _ S = 43 , OA = 83.56%; (b) σ = 0 . 2 , k _ S = 31 , OA = 93.21%; (c) σ = 0 . 2 , k _ S = 23 , OA = 93.52%; (d) σ = 0 . 3 , k _ S = 17 , OA = 96.25%; (e) σ = 0 . 4 , k _ S = 12 , OA = 94.52%; (f) σ = 0 . 4 , k _ S = 9 , OA = 94.32%; and (g) σ = 0 . 3 , k _ S = 7 , OA = 94.28%.
Figure 23. Classification results of the Indian Pines image under different scales. The SLIC segmentation method is applied in the SBDSM algorithm. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations n _ S : (a) n _ S = 147, OA = 89.10%; (b) n _ S = 206, OA = 91.45%; (c) n _ S = 288, OA = 93.16%; (d) n _ S = 415, OA = 96.81%; (e) n _ S = 562, OA = 96.52%; (f) n _ S = 780, OA = 95.32%; and (g) n _ S = 1055, OA = 95.24%.
Figure 24. Classification maps by using MSSR_FH, MSSR_SLIC and MSSR_ERS algorithms: (a) MSSR_FH, OA = 96.54%; (b) MSSR_SLIC, OA = 97.38%; (c) MSSR_ERS, OA = 98.58%.
Figure 25. Effect of the number of training samples on SVM, EMP, SRC, JSRC, MASR, SCMK, SBDSM and MSSR for the: (a) Indian Pines image; (b) University of Pavia images; (c) Salinas image; and (d) Washington DC image.
Table 1. Number of training and test samples of sixteen classes in the Indian Pines image.
Class | Name | Train | Test
1 | Alfalfa | 5 | 41
2 | Corn-no till | 143 | 1285
3 | Corn-min till | 83 | 747
4 | Corn | 24 | 213
5 | Grass/pasture | 49 | 434
6 | Grass/tree | 73 | 657
7 | Grass/pasture-mowed | 3 | 25
8 | Hay-windrowed | 48 | 430
9 | Oats | 2 | 18
10 | Soybean-no till | 98 | 874
11 | Soybean-min till | 246 | 2209
12 | Soybean-clean till | 60 | 533
13 | Wheat | 21 | 184
14 | Woods | 127 | 1138
15 | Bldg-grass-trees-drives | 39 | 347
16 | Stone-steel towers | 10 | 83
Total | | 1031 | 9218
Table 2. Classification accuracy of the Indian Pines image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer changing from -3 to three. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface. AA, average accuracy.
Class | n=-3 | n=-2 | n=-1 | n=0 | n=1 | n=2 | n=3
1 | 99.01 | 98.05 | 99.02 | 97.56 | 98.05 | 97.56 | 90.73
2 | 87.21 | 90.27 | 95.04 | 96.34 | 90.41 | 92.54 | 90.33
3 | 92.05 | 95.53 | 97.11 | 89.69 | 95.61 | 92.34 | 92.88
4 | 95.31 | 99.06 | 99.04 | 99.06 | 88.64 | 91.55 | 87.98
5 | 91.71 | 94.10 | 94.79 | 93.55 | 94.47 | 94.01 | 92.30
6 | 99.88 | 99.76 | 99.76 | 99.85 | 99.30 | 98.48 | 97.53
7 | 96.00 | 96.00 | 97.60 | 96.43 | 96.00 | 96.80 | 96.00
8 | 100 | 100 | 100 | 100 | 100 | 98.79 | 98.28
9 | 100 | 100 | 100 | 100 | 100 | 94.44 | 86.67
10 | 90.48 | 96.84 | 92.20 | 94.74 | 93.91 | 94.74 | 93.46
11 | 93.86 | 97.32 | 96.51 | 96.92 | 97.71 | 96.43 | 95.15
12 | 82.66 | 92.35 | 95.20 | 97.75 | 90.73 | 91.22 | 84.80
13 | 100 | 99.46 | 99.57 | 99.46 | 99.46 | 99.13 | 99.46
14 | 99.74 | 99.93 | 99.74 | 99.82 | 99.46 | 98.54 | 98.98
15 | 93.60 | 98.04 | 91.87 | 93.08 | 95.33 | 93.03 | 83.80
16 | 96.63 | 96.39 | 97.59 | 96.39 | 97.11 | 97.56 | 91.57
OA (%) | 93.37 | 96.44 | 96.56 | 96.92 | 95.77 | 95.32 | 93.62
AA (%) | 94.88 | 95.07 | 95.19 | 95.19 | 95.01 | 95.45 | 92.50
Kappa | 0.92 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.93
Table 3. Classification accuracy of the Indian Pines image by the classification algorithms used in this work for comparison. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface. EMP, extended morphological profile; JSRC, joint sparse representation classification; MASR, multiscale adaptive sparse representation; SCMK, superpixel-based classification via multiple kernel; SBDSM, superpixel-based discriminative sparse model; MSSR, multiscale superpixel-based sparse representation.
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR
1 | 80.24 | 98.22 | 58.64 | 92.68 | 94.68 | 99.99 | 97.56 | 97.45
2 | 72.99 | 85.84 | 52.35 | 94.68 | 97.25 | 97.24 | 96.34 | 97.83
3 | 66.76 | 90.86 | 53.65 | 93.44 | 97.34 | 97.20 | 89.69 | 99.60
4 | 84.10 | 93.75 | 36.78 | 91.93 | 97.41 | 96.89 | 99.06 | 98.59
5 | 90.68 | 93.40 | 82.35 | 94.05 | 97.20 | 96.31 | 93.55 | 98.16
6 | 94.09 | 98.47 | 89.98 | 95.58 | 99.62 | 99.59 | 99.85 | 99.82
7 | 83.96 | 92.75 | 88.56 | 83.20 | 96.41 | 99.68 | 96.43 | 96.73
8 | 96.15 | 99.90 | 90.23 | 99.86 | 99.89 | 100 | 100 | 100
9 | 92.00 | 100 | 71.35 | 36.67 | 76.42 | 100 | 100 | 92.23
10 | 77.11 | 87.45 | 68.32 | 91.21 | 97.99 | 92.35 | 94.74 | 98.28
11 | 69.84 | 91.91 | 75.32 | 95.98 | 98.64 | 98.61 | 96.92 | 99.28
12 | 73.72 | 87.85 | 42.56 | 88.89 | 97.72 | 96.78 | 97.75 | 97.19
13 | 98.28 | 97.97 | 91.21 | 83.04 | 99.01 | 99.13 | 99.46 | 99.46
14 | 90.67 | 99.22 | 88.52 | 99.56 | 99.99 | 99.64 | 99.82 | 100
15 | 70.67 | 97.85 | 36.25 | 93.26 | 98.62 | 97.21 | 93.08 | 92.04
16 | 96.12 | 98.27 | 88.58 | 90.12 | 95.87 | 97.02 | 96.39 | 96.39
OA (%) | 78.37 | 92.49 | 68.34 | 94.56 | 98.26 | 98.01 | 96.92 | 98.56
AA (%) | 83.58 | 94.61 | 65.20 | 89.01 | 96.66 | 97.95 | 95.19 | 97.98
Kappa | 0.75 | 0.91 | 0.64 | 0.94 | 0.98 | 0.98 | 0.95 | 0.98
Table 4. Number of training and test samples of nine classes in the University of Pavia image.
Class | Name | Train | Test
1 | Asphalt | 67 | 6631
2 | Meadows | 187 | 18,649
3 | Gravel | 21 | 2099
4 | Trees | 31 | 3064
5 | Metal sheets | 14 | 1345
6 | Bare soil | 51 | 5029
7 | Bitumen | 14 | 1330
8 | Bricks | 37 | 3682
9 | Shadows | 10 | 947
Total | | 432 | 42,776
Table 5. Classification accuracy of the University of Pavia image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 3200 and the power exponent n is an integer changing from -3 to three. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | n=-3 | n=-2 | n=-1 | n=0 | n=1 | n=2 | n=3
1 | 94.50 | 89.10 | 83.10 | 86.43 | 82.04 | 81.84 | 78.85
2 | 95.08 | 96.75 | 98.97 | 98.00 | 97.29 | 98.13 | 98.82
3 | 100 | 100 | 99.18 | 78.19 | 88.99 | 87.25 | 91.15
4 | 58.69 | 60.27 | 67.03 | 69.07 | 66.67 | 68.18 | 74.35
5 | 88.43 | 85.15 | 91.59 | 96.39 | 93.91 | 95.94 | 92.04
6 | 99.26 | 99.18 | 99.98 | 96.82 | 97.91 | 95.52 | 87.81
7 | 100 | 100 | 100 | 100 | 100 | 100 | 100
8 | 99.37 | 99.05 | 88.82 | 91.28 | 91.35 | 91.66 | 97.91
9 | 29.24 | 33.51 | 43.01 | 57.52 | 68.62 | 68.81 | 72.40
OA (%) | 91.70 | 91.51 | 92.06 | 92.15 | 91.92 | 91.64 | 91.37
AA (%) | 84.83 | 84.91 | 86.28 | 86.96 | 88.52 | 86.66 | 87.70
Kappa | 0.89 | 0.89 | 0.90 | 0.90 | 0.89 | 0.89 | 0.88
Table 6. Classification accuracy of the University of Pavia image by the classification algorithms used in this work for comparison. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR
1 | 81.44 | 94.39 | 73.34 | 60.65 | 74.01 | 90.19 | 86.43 | 94.20
2 | 91.39 | 90.40 | 91.84 | 97.82 | 98.71 | 99.58 | 98.00 | 99.97
3 | 82.13 | 95.17 | 52.39 | 86.77 | 96.77 | 93.71 | 78.19 | 98.78
4 | 93.47 | 96.02 | 74.55 | 82.86 | 87.60 | 88.82 | 69.07 | 74.18
5 | 99.30 | 99.85 | 99.62 | 95.79 | 100 | 99.70 | 96.39 | 96.92
6 | 88.27 | 78.83 | 45.37 | 87.20 | 84.53 | 98.05 | 96.82 | 97.38
7 | 93.84 | 96.95 | 61.55 | 98.17 | 98.25 | 85.79 | 100 | 100
8 | 74.46 | 97.60 | 76.13 | 74.62 | 80.58 | 90.75 | 88.82 | 99.34
9 | 76.52 | 99.15 | 85.38 | 36.25 | 49.95 | 98.65 | 57.52 | 65.78
OA (%) | 86.13 | 91.58 | 77.81 | 84.53 | 89.73 | 94.73 | 92.15 | 95.54
AA (%) | 78.25 | 94.25 | 72.91 | 78.20 | 85.60 | 91.85 | 86.96 | 89.19
Kappa | 0.82 | 0.89 | 0.70 | 0.81 | 0.86 | 0.93 | 0.90 | 0.94
Table 7. Number of training and test samples of sixteen classes in the Salinas image.
Class | Name | Train | Test
1 | Weeds_1 | 5 | 2004
2 | Weeds_2 | 8 | 3718
3 | Fallow | 4 | 1973
4 | Fallow plow | 3 | 1391
5 | Fallow smooth | 6 | 2672
6 | Stubble | 8 | 3951
7 | Celery | 8 | 3571
8 | Grapes | 23 | 11,248
9 | Soil | 13 | 6190
10 | Corn | 7 | 3271
11 | Lettuce 4 wk | 3 | 1065
12 | Lettuce 5 wk | 4 | 1923
13 | Lettuce 6 wk | 2 | 914
14 | Lettuce 7 wk | 3 | 1068
15 | Vineyard untrained | 15 | 7253
16 | Vineyard trellis | 4 | 1803
Total | | 116 | 54,015
Table 8. Number of training and test samples of six classes in the Washington DC image.
Class | Name | Train | Test
1 | Roof | 63 | 3122
2 | Road | 36 | 1786
3 | Trail | 29 | 1399
4 | Grass | 26 | 1261
5 | Shadow | 24 | 1191
6 | Tree | 23 | 1117
Total | | 201 | 9698
Table 9. Classification accuracy of the Salinas image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 1600 and the power exponent n is an integer changing from -2 to two. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | n=-2 | n=-1 | n=0 | n=1 | n=2
1 | 100 | 100 | 100 | 100 | 100
2 | 99.78 | 99.87 | 99.91 | 99.87 | 99.83
3 | 100 | 100 | 100 | 100 | 100
4 | 99.93 | 99.93 | 96.93 | 93.08 | 84.67
5 | 99.33 | 99.33 | 99.93 | 99.33 | 99.40
6 | 99.95 | 99.33 | 99.95 | 99.95 | 99.89
7 | 99.89 | 99.90 | 99.95 | 99.75 | 99.66
8 | 99.11 | 99.88 | 99.18 | 99.02 | 96.01
9 | 100 | 100 | 100 | 100 | 100
10 | 97.34 | 98.08 | 97.40 | 77.92 | 95.92
11 | 40.00 | 100 | 100 | 100 | 100
12 | 60.00 | 100 | 100 | 100 | 100
13 | 78.42 | 69.61 | 98.03 | 98.21 | 97.81
14 | 95.97 | 95.99 | 96.29 | 78.91 | 95.90
15 | 99.94 | 90.93 | 96.49 | 99.94 | 92.42
16 | 97.23 | 97.24 | 97.23 | 97.24 | 97.24
OA (%) | 95.35 | 96.84 | 98.68 | 97.67 | 97.23
AA (%) | 90.68 | 93.73 | 98.85 | 96.46 | 97.42
Kappa | 0.96 | 0.96 | 0.98 | 0.97 | 0.97
Table 10. Classification accuracy of the Washington DC image under different scales. The number of single-scale superpixels is gained by using Equation (5), in which the fundamental superpixel number is set to 12,800 and the power exponent n is an integer changing from -5 to five. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | n=-5 | n=-4 | n=-3 | n=-2 | n=-1 | n=0 | n=1 | n=2 | n=3 | n=4 | n=5
1 | 90.12 | 89.37 | 93.28 | 90.77 | 87.90 | 93.48 | 91.65 | 92.04 | 90.02 | 89.07 | 88.42
2 | 98.63 | 97.38 | 97.66 | 99.83 | 98.18 | 97.21 | 97.78 | 96.41 | 99.37 | 99.20 | 99.37
3 | 83.61 | 92.64 | 94.83 | 94.97 | 97.67 | 85.29 | 95.05 | 92.57 | 92.43 | 92.50 | 90.97
4 | 83.68 | 91.52 | 93.30 | 92.41 | 92.89 | 96.77 | 97.50 | 95.88 | 96.77 | 96.69 | 95.96
5 | 78.46 | 93.33 | 90.43 | 89.32 | 91.79 | 94.02 | 87.95 | 93.33 | 93.33 | 93.25 | 93.68
6 | 68.55 | 73.75 | 81.59 | 80.40 | 82.86 | 92.62 | 82.04 | 85.87 | 86.87 | 88.06 | 86.78
OA (%) | 86.07 | 90.27 | 92.62 | 91.86 | 91.68 | 93.38 | 92.45 | 92.85 | 92.96 | 92.75 | 92.17
AA (%) | 83.84 | 89.66 | 91.85 | 91.28 | 91.88 | 93.23 | 0.92 | 92.68 | 93.13 | 93.13 | 92.53
Kappa | 0.83 | 0.88 | 0.91 | 0.90 | 0.90 | 0.92 | 0.91 | 0.91 | 0.91 | 0.91 | 0.90
Table 11. Classification accuracy of the Salinas image by the classification algorithms used in this work for comparison. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR
1 | 98.35 | 99.23 | 92.56 | 99.99 | 99.90 | 99.94 | 100 | 100
2 | 98.76 | 98.90 | 95.39 | 99.67 | 99.35 | 97.21 | 99.91 | 99.78
3 | 92.54 | 93.53 | 74.67 | 68.25 | 97.88 | 87.54 | 100 | 100
4 | 96.28 | 98.92 | 98.68 | 62.47 | 88.95 | 99.93 | 96.93 | 99.93
5 | 95.75 | 94.25 | 89.88 | 82.55 | 92.86 | 99.35 | 99.93 | 99.93
6 | 95.90 | 95.61 | 98.76 | 99.27 | 99.97 | 99.95 | 99.95 | 99.90
7 | 98.47 | 99.24 | 99.02 | 95.35 | 100 | 99.53 | 99.95 | 99.97
8 | 63.26 | 53.35 | 75.30 | 86.63 | 84.98 | 97.85 | 99.18 | 99.28
9 | 97.55 | 98.75 | 93.79 | 100 | 99.76 | 99.64 | 100 | 100
10 | 82.38 | 93.41 | 76.02 | 85.27 | 88.10 | 87.94 | 97.40 | 97.32
11 | 88.97 | 96.51 | 88.20 | 89.17 | 98.97 | 97.18 | 100 | 100
12 | 85.57 | 95.36 | 95.48 | 67.54 | 98.85 | 100 | 100 | 98.03
13 | 98.68 | 98.57 | 96.81 | 51.82 | 99.69 | 92.58 | 98.03 | 95.97
14 | 86.82 | 93.84 | 89.75 | 95.25 | 94.16 | 95.97 | 96.29 | 95.97
15 | 59.66 | 79.00 | 88.20 | 67.40 | 80.84 | 88.48 | 96.49 | 99.94
16 | 35.88 | 38.52 | 71.44 | 83.42 | 92.68 | 94.93 | 97.23 | 97.25
OA (%) | 80.65 | 85.75 | 81.72 | 85.33 | 92.33 | 94.14 | 98.68 | 99.41
AA (%) | 87.01 | 93.40 | 85.21 | 82.63 | 94.75 | 94.92 | 98.85 | 99.17
Kappa | 0.81 | 0.84 | 0.79 | 0.84 | 0.91 | 0.93 | 0.98 | 0.99
Table 12. Classification accuracy of the Washington DC image by the classification algorithms used in this work for comparison. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR
1 | 83.40 | 86.09 | 89.95 | 91.13 | 98.75 | 93.56 | 93.48 | 93.98
2 | 98.35 | 96.80 | 94.98 | 97.95 | 98.14 | 98.92 | 97.21 | 99.45
3 | 93.23 | 89.31 | 81.35 | 89.80 | 98.26 | 98.18 | 85.29 | 95.92
4 | 96.17 | 93.46 | 95.40 | 93.38 | 90.94 | 92.57 | 96.77 | 97.58
5 | 91.26 | 97.58 | 95.98 | 94.36 | 99.16 | 95.04 | 94.02 | 97.69
6 | 92.83 | 79.64 | 94.53 | 92.98 | 82.74 | 91.25 | 92.62 | 96.81
OA (%) | 91.11 | 90.04 | 91.59 | 93.06 | 95.97 | 94.96 | 93.38 | 96.74
AA (%) | 92.54 | 90.48 | 92.03 | 93.27 | 94.75 | 94.91 | 93.23 | 96.58
Kappa | 0.89 | 0.88 | 0.90 | 0.91 | 0.95 | 0.94 | 0.92 | 0.95
Table 13. Average running time (seconds) over ten realizations for the classification of the Indian Pines, University of Pavia, Salinas and Washington DC images by the algorithms used in this work.
Images | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR
Indian Pines | 242.3 | 63.5 | 17.4 | 118.6 | 1580.9 | 254.6 | 9.2 | 67.4
U. Pavia | 30.4 | 24.6 | 50.4 | 425.3 | 3010.4 | 48.7 | 26.5 | 197.5
Salinas | 13.4 | 10.5 | 76.2 | 800.9 | 4129.1 | 20.6 | 18.4 | 92.1
Washington DC | 14.6 | 23.3 | 16.7 | 34.2 | 210.6 | 16.5 | 21.9 | 187.3
Table 14. Classification accuracy of the Indian Pines image under different scales. The FH segmentation method is applied in the SBDSM algorithm. Multiscale superpixels are generated with various scales and smoothing parameters, σ and k_S. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | σ=0.2, k_S=43 | σ=0.2, k_S=31 | σ=0.2, k_S=23 | σ=0.3, k_S=17 | σ=0.4, k_S=12 | σ=0.4, k_S=9 | σ=0.3, k_S=7
1 | 92.68 | 92.68 | 92.68 | 92.68 | 92.68 | 92.68 | 92.68
2 | 86.93 | 81.48 | 79.92 | 96.74 | 93.62 | 90.50 | 93.93
3 | 99.46 | 86.85 | 98.66 | 92.90 | 94.51 | 93.31 | 91.16
4 | 42.72 | 94.01 | 87.79 | 90.74 | 75.59 | 90.61 | 82.63
5 | 90.78 | 99.54 | 99.08 | 99.08 | 95.62 | 91.94 | 94.93
6 | 99.71 | 96.00 | 99.85 | 99.54 | 99.70 | 99.24 | 96.35
7 | 0 | 100 | 96.00 | 96.00 | 96.00 | 96.00 | 96.00
8 | 100 | 100 | 100 | 100 | 98.60 | 100 | 100
9 | 0 | 100 | 100 | 100 | 100 | 100 | 83.33
10 | 91.19 | 90.93 | 94.51 | 96.91 | 97.94 | 96.57 | 90.96
11 | 62.61 | 93.75 | 96.20 | 92.03 | 91.58 | 95.52 | 94.34
12 | 95.87 | 91.93 | 79.36 | 81.43 | 91.56 | 87.24 | 86.12
13 | 99.46 | 99.46 | 99.46 | 99.46 | 99.46 | 99.46 | 98.37
14 | 83.04 | 99.82 | 99.82 | 99.82 | 99.74 | 99.74 | 99.74
15 | 97.98 | 97.98 | 98.27 | 98.85 | 98.85 | 98.27 | 97.41
16 | 97.59 | 97.59 | 97.59 | 97.59 | 97.59 | 97.59 | 97.59
OA (%) | 83.62 | 93.62 | 93.97 | 96.04 | 94.96 | 94.56 | 94.26
AA (%) | 77.50 | 94.99 | 94.95 | 95.86 | 95.19 | 95.92 | 93.47
Kappa | 0.82 | 0.93 | 0.93 | 0.94 | 0.94 | 0.95 | 0.93
Table 15. Classification accuracy of the Indian Pines image under different scales. The SLIC segmentation method is applied in the SBDSM algorithm. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations n_S. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | n_S=147 | n_S=206 | n_S=288 | n_S=415 | n_S=562 | n_S=780 | n_S=1055
1 | 97.56 | 97.56 | 75.61 | 97.56 | 97.56 | 73.17 | 75.61
2 | 83.66 | 82.33 | 91.91 | 94.16 | 91.36 | 92.22 | 93.70
3 | 86.75 | 84.75 | 85.68 | 93.84 | 98.66 | 97.99 | 95.45
4 | 81.67 | 66.20 | 96.71 | 92.96 | 92.96 | 97.65 | 89.67
5 | 87.33 | 94.01 | 95.16 | 88.71 | 87.79 | 94.24 | 91.94
6 | 98.17 | 98.93 | 96.19 | 98.33 | 98.33 | 99.09 | 98.63
7 | 0 | 100 | 100 | 96.00 | 96.00 | 92.00 | 96.00
8 | 99.07 | 99.07 | 100 | 100 | 99.30 | 99.77 | 99.97
9 | 98.45 | 99.58 | 100 | 100 | 88.89 | 66.77 | 55.56
10 | 71.40 | 89.82 | 84.67 | 97.71 | 94.05 | 95.31 | 93.48
11 | 92.39 | 93.57 | 93.62 | 96.29 | 95.02 | 97.01 | 97.15
12 | 80.86 | 88.74 | 90.81 | 90.99 | 87.62 | 86.49 | 91.37
13 | 99.43 | 86.41 | 99.46 | 99.46 | 99.46 | 98.91 | 98.91
14 | 98.42 | 99.89 | 99.56 | 99.65 | 98.51 | 99.30 | 99.82
15 | 99.14 | 98.85 | 97.41 | 97.98 | 91.64 | 89.34 | 87.32
16 | 98.80 | 97.59 | 97.59 | 98.80 | 97.59 | 97.59 | 96.39
OA (%) | 89.28 | 91.78 | 93.06 | 97.08 | 96.69 | 95.62 | 95.57
AA (%) | 85.91 | 92.49 | 87.77 | 96.40 | 94.42 | 92.55 | 92.68
Kappa | 0.88 | 0.91 | 0.92 | 0.96 | 0.94 | 0.95 | 0.95
Table 16. Classification accuracy of the Indian Pines image by applying the MSSR_FH, MSSR_SLIC and MSSR_ERS algorithms. Class-specific accuracy values are in percentage. The best results are highlighted in bold typeface.
Class | MSSR_FH | MSSR_SLIC | MSSR_ERS
1 | 92.68 | 97.56 | 97.45
2 | 92.97 | 97.35 | 97.83
3 | 97.86 | 97.19 | 99.60
4 | 96.71 | 97.18 | 98.59
5 | 98.39 | 93.09 | 98.16
6 | 99.85 | 99.24 | 99.82
7 | 96.00 | 96.00 | 96.73
8 | 100 | 100 | 100
9 | 100 | 100 | 92.23
10 | 97.37 | 97.94 | 98.28
11 | 95.70 | 96.71 | 99.28
12 | 91.37 | 97.94 | 97.19
13 | 99.46 | 99.46 | 99.46
14 | 99.82 | 100 | 100
15 | 99.14 | 98.85 | 92.04
16 | 97.56 | 97.59 | 96.39
OA (%) | 96.77 | 97.75 | 98.56
AA (%) | 97.18 | 97.88 | 97.98
Kappa | 0.96 | 0.97 | 0.98
Time (s) | 67.1 | 66.3 | 67.4
