Multiscale Superpixel-Based Sparse Representation for Hyperspectral Image Classification
"> Figure 1
<p>Schematic illustration of the multiscale superpixel-based sparse representation (MSSR) algorithm for HSI classification.</p> "> Figure 2
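The final stage of the MSSR pipeline sketched in Figure 1 fuses the per-scale classification results into one map (Section 3.3). A minimal sketch of one plausible fusion rule, per-pixel majority voting across scales, is given below; the function name and the plain majority vote are illustrative assumptions, not the paper's exact fusion scheme.

```python
from collections import Counter

def fuse_multiscale_labels(label_maps):
    """Fuse per-scale label maps by per-pixel majority vote.

    label_maps: list of equally sized 2-D lists of class labels,
    one map per superpixel scale. (Illustrative assumption: the
    paper's fusion step may weight the scales differently.)
    """
    rows, cols = len(label_maps[0]), len(label_maps[0][0])
    fused = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Collect this pixel's label at every scale and keep
            # the most frequent one.
            votes = Counter(m[i][j] for m in label_maps)
            fused[i][j] = votes.most_common(1)[0][0]
    return fused

# Three 2x2 label maps from three hypothetical scales.
maps = [[[1, 2], [3, 3]],
        [[1, 2], [4, 3]],
        [[5, 2], [4, 3]]]
fused = fuse_multiscale_labels(maps)  # majority label per pixel
```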
<p><b>Figure 2.</b> Indian Pines image: (<b>a</b>) false-color image; and (<b>b</b>) reference image.</p>
<p><b>Figure 3.</b> University of Pavia image: (<b>a</b>) false-color image; and (<b>b</b>) reference image.</p>
<p><b>Figure 4.</b> Salinas image: (<b>a</b>) false-color image; and (<b>b</b>) reference image.</p>
<p><b>Figure 5.</b> Washington DC image: (<b>a</b>) false-color image; and (<b>b</b>) reference image.</p>
<p><b>Figure 6.</b> Superpixel segmentation results of the Indian Pines image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 3200 and the power exponent <span class="html-italic">n</span> is an integer ranging from −3 to 3: (<b>a</b>) <span class="html-italic">n</span> = −3; (<b>b</b>) <span class="html-italic">n</span> = −2; (<b>c</b>) <span class="html-italic">n</span> = −1; (<b>d</b>) <span class="html-italic">n</span> = 0; (<b>e</b>) <span class="html-italic">n</span> = 1; (<b>f</b>) <span class="html-italic">n</span> = 2; and (<b>g</b>) <span class="html-italic">n</span> = 3.</p>
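The scale-generation rule described in the caption above can be sketched as follows, assuming Equation (5) has the form S<sub>n</sub> = S<sub>f</sub> · 2<sup>n</sup>. This form is inferred from the caption (a fundamental number scaled by a power exponent); the exact equation is given in the paper.

```python
def superpixel_counts(fundamental, exponents):
    """Per-scale superpixel numbers, assuming Equation (5) is
    S_n = S_f * 2**n (inferred from the caption, not verified)."""
    return [int(fundamental * 2 ** n) for n in exponents]

# Indian Pines setting from the caption: S_f = 3200, n from -3 to 3.
counts = superpixel_counts(3200, range(-3, 4))
# Gives the seven single-scale superpixel numbers, from coarsest
# (fewest, largest superpixels) to finest.
```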
<p><b>Figure 7.</b> Classification results of the Indian Pines image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 3200 and the power exponent <span class="html-italic">n</span> is an integer ranging from −3 to 3: (<b>a</b>) <span class="html-italic">n</span> = −3, OA = 93.14%; (<b>b</b>) <span class="html-italic">n</span> = −2, OA = 96.42%; (<b>c</b>) <span class="html-italic">n</span> = −1, OA = 96.62%; (<b>d</b>) <span class="html-italic">n</span> = 0, OA = 97.08%; (<b>e</b>) <span class="html-italic">n</span> = 1, OA = 95.64%; (<b>f</b>) <span class="html-italic">n</span> = 2, OA = 95.61%; and (<b>g</b>) <span class="html-italic">n</span> = 3, OA = 93.65%.</p>
<p><b>Figure 8.</b> Classification maps for the Indian Pines image by different algorithms (OA values are reported in parentheses): (<b>a</b>) SVM (78.01%); (<b>b</b>) EMP (92.71%); (<b>c</b>) SRC (68.91%); (<b>d</b>) JSRC (94.42%); (<b>e</b>) MASR (98.27%); (<b>f</b>) SCMK (97.96%); (<b>g</b>) SBDSM (97.08%); and (<b>h</b>) MSSR (98.58%).</p>
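The OA values compared throughout these captions are overall accuracies: the fraction of labeled reference pixels that a classification map assigns to the correct class. A minimal sketch follows; treating label 0 as unlabeled background is an assumption about the encoding (common in HSI benchmarks), not something stated in the captions.

```python
def overall_accuracy(predicted, reference, unlabeled=0):
    """Overall accuracy (OA) over labeled reference pixels only.

    predicted, reference: flat sequences of class labels.
    Pixels whose reference label equals `unlabeled` are ignored
    (assumed background encoding).
    """
    pairs = [(p, r) for p, r in zip(predicted, reference) if r != unlabeled]
    correct = sum(p == r for p, r in pairs)
    return correct / len(pairs)

# Four labeled pixels, one background pixel (reference label 0);
# three of the four labeled pixels are classified correctly.
oa = overall_accuracy([1, 2, 2, 3, 1], [1, 2, 3, 3, 0])  # 0.75
```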
<p><b>Figure 9.</b> Superpixel segmentation results of the University of Pavia image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 3200 and the power exponent <span class="html-italic">n</span> is an integer ranging from −3 to 3: (<b>a</b>) <span class="html-italic">n</span> = −3; (<b>b</b>) <span class="html-italic">n</span> = −2; (<b>c</b>) <span class="html-italic">n</span> = −1; (<b>d</b>) <span class="html-italic">n</span> = 0; (<b>e</b>) <span class="html-italic">n</span> = 1; (<b>f</b>) <span class="html-italic">n</span> = 2; and (<b>g</b>) <span class="html-italic">n</span> = 3.</p>
<p><b>Figure 10.</b> Classification results of the University of Pavia image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 3200 and the power exponent <span class="html-italic">n</span> is an integer ranging from −3 to 3: (<b>a</b>) <span class="html-italic">n</span> = −3, OA = 91.74%; (<b>b</b>) <span class="html-italic">n</span> = −2, OA = 91.42%; (<b>c</b>) <span class="html-italic">n</span> = −1, OA = 92.39%; (<b>d</b>) <span class="html-italic">n</span> = 0, OA = 92.60%; (<b>e</b>) <span class="html-italic">n</span> = 1, OA = 92.12%; (<b>f</b>) <span class="html-italic">n</span> = 2, OA = 91.54%; and (<b>g</b>) <span class="html-italic">n</span> = 3, OA = 91.35%.</p>
<p><b>Figure 11.</b> Classification maps for the University of Pavia image by different algorithms (OA values are reported in parentheses): (<b>a</b>) SVM (86.52%); (<b>b</b>) EMP (91.80%); (<b>c</b>) SRC (77.90%); (<b>d</b>) JSRC (86.78%); (<b>e</b>) MASR (89.45%); (<b>f</b>) SCMK (94.96%); (<b>g</b>) SBDSM (92.60%); and (<b>h</b>) MSSR (95.47%).</p>
<p><b>Figure 12.</b> Superpixel segmentation results of the Salinas image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 1600 and the power exponent <span class="html-italic">n</span> is an integer ranging from −2 to 2: (<b>a</b>) <span class="html-italic">n</span> = −2; (<b>b</b>) <span class="html-italic">n</span> = −1; (<b>c</b>) <span class="html-italic">n</span> = 0; (<b>d</b>) <span class="html-italic">n</span> = 1; and (<b>e</b>) <span class="html-italic">n</span> = 2.</p>
<p><b>Figure 13.</b> Classification results of the Salinas image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 1600 and the power exponent <span class="html-italic">n</span> is an integer ranging from −2 to 2: (<b>a</b>) <span class="html-italic">n</span> = −2, OA = 95.21%; (<b>b</b>) <span class="html-italic">n</span> = −1, OA = 97.04%; (<b>c</b>) <span class="html-italic">n</span> = 0, OA = 98.38%; (<b>d</b>) <span class="html-italic">n</span> = 1, OA = 97.70%; and (<b>e</b>) <span class="html-italic">n</span> = 2, OA = 97.00%.</p>
<p><b>Figure 14.</b> Superpixel segmentation results of the Washington DC image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 12,800 and the power exponent <span class="html-italic">n</span> is an integer ranging from −5 to 5: (<b>a</b>) <span class="html-italic">n</span> = −5; (<b>b</b>) <span class="html-italic">n</span> = −4; (<b>c</b>) <span class="html-italic">n</span> = −3; (<b>d</b>) <span class="html-italic">n</span> = −2; (<b>e</b>) <span class="html-italic">n</span> = −1; (<b>f</b>) <span class="html-italic">n</span> = 0; (<b>g</b>) <span class="html-italic">n</span> = 1; (<b>h</b>) <span class="html-italic">n</span> = 2; (<b>i</b>) <span class="html-italic">n</span> = 3; (<b>j</b>) <span class="html-italic">n</span> = 4; and (<b>k</b>) <span class="html-italic">n</span> = 5.</p>
<p><b>Figure 15.</b> Classification results of the Washington DC image under different scales. The number of single-scale superpixels is obtained using Equation (<a href="#FD5-remotesensing-09-00139" class="html-disp-formula">5</a>), in which the fundamental superpixel number is set to 12,800 and the power exponent <span class="html-italic">n</span> is an integer ranging from −5 to 5: (<b>a</b>) <span class="html-italic">n</span> = −5, OA = 86.48%; (<b>b</b>) <span class="html-italic">n</span> = −4, OA = 90.29%; (<b>c</b>) <span class="html-italic">n</span> = −3, OA = 92.56%; (<b>d</b>) <span class="html-italic">n</span> = −2, OA = 91.43%; (<b>e</b>) <span class="html-italic">n</span> = −1, OA = 91.84%; (<b>f</b>) <span class="html-italic">n</span> = 0, OA = 93.33%; (<b>g</b>) <span class="html-italic">n</span> = 1, OA = 92.25%; (<b>h</b>) <span class="html-italic">n</span> = 2, OA = 92.49%; (<b>i</b>) <span class="html-italic">n</span> = 3, OA = 93.12%; (<b>j</b>) <span class="html-italic">n</span> = 4, OA = 92.55%; and (<b>k</b>) <span class="html-italic">n</span> = 5, OA = 92.31%.</p>
<p><b>Figure 16.</b> Classification maps for the Salinas image by different algorithms (OA values are reported in parentheses): (<b>a</b>) SVM (80.23%); (<b>b</b>) EMP (85.84%); (<b>c</b>) SRC (81.94%); (<b>d</b>) JSRC (84.79%); (<b>e</b>) MASR (92.21%); (<b>f</b>) SCMK (94.53%); (<b>g</b>) SBDSM (98.38%); and (<b>h</b>) MSSR (99.41%).</p>
<p><b>Figure 17.</b> Classification maps for the Washington DC image by different algorithms (OA values are reported in parentheses): (<b>a</b>) SVM (90.98%); (<b>b</b>) EMP (90.28%); (<b>c</b>) SRC (91.95%); (<b>d</b>) JSRC (92.79%); (<b>e</b>) MASR (95.62%); (<b>f</b>) SCMK (94.55%); (<b>g</b>) SBDSM (93.33%); and (<b>h</b>) MSSR (96.60%).</p>
<p><b>Figure 18.</b> Classification accuracy (OA) versus different fundamental superpixel numbers <span class="html-italic">S<sub>f</sub></span> on the four test images.</p>
<p><b>Figure 19.</b> Relationship among the OA value, the number of multiscale superpixels and the fundamental superpixel number <span class="html-italic">S<sub>f</sub></span>: (<b>a</b>) Indian Pines image; (<b>b</b>) University of Pavia image; (<b>c</b>) Salinas image; and (<b>d</b>) Washington DC image.</p>
<p><b>Figure 20.</b> Felzenszwalb–Huttenlocher (FH) segmentation results of the Indian Pines image under different scales. Multiscale superpixels are generated with various scale and smoothing parameters, <span class="html-italic">σ</span> and <span class="html-italic">k_S</span>: (<b>a</b>) <span class="html-italic">σ</span> = 0.2, <span class="html-italic">k_S</span> = 43; (<b>b</b>) <span class="html-italic">σ</span> = 0.2, <span class="html-italic">k_S</span> = 31; (<b>c</b>) <span class="html-italic">σ</span> = 0.2, <span class="html-italic">k_S</span> = 23; (<b>d</b>) <span class="html-italic">σ</span> = 0.3, <span class="html-italic">k_S</span> = 17; (<b>e</b>) <span class="html-italic">σ</span> = 0.4, <span class="html-italic">k_S</span> = 12; (<b>f</b>) <span class="html-italic">σ</span> = 0.4, <span class="html-italic">k_S</span> = 9; and (<b>g</b>) <span class="html-italic">σ</span> = 0.3, <span class="html-italic">k_S</span> = 7.</p>
<p><b>Figure 21.</b> Simple linear iterative clustering (SLIC) segmentation results of the Indian Pines image under different scales. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations <span class="html-italic">n_S</span>: (<b>a</b>) <span class="html-italic">n_S</span> = 147; (<b>b</b>) <span class="html-italic">n_S</span> = 206; (<b>c</b>) <span class="html-italic">n_S</span> = 288; (<b>d</b>) <span class="html-italic">n_S</span> = 415; (<b>e</b>) <span class="html-italic">n_S</span> = 562; (<b>f</b>) <span class="html-italic">n_S</span> = 780; and (<b>g</b>) <span class="html-italic">n_S</span> = 1055.</p>
<p><b>Figure 22.</b> Classification results of the Indian Pines image under different scales. The FH segmentation method is applied in the SBDSM algorithm. Multiscale superpixels are generated with various scale and smoothing parameters, <span class="html-italic">σ</span> and <span class="html-italic">k_S</span>: (<b>a</b>) <span class="html-italic">σ</span> = 0.2, <span class="html-italic">k_S</span> = 43, OA = 83.56%; (<b>b</b>) <span class="html-italic">σ</span> = 0.2, <span class="html-italic">k_S</span> = 31, OA = 93.21%; (<b>c</b>) <span class="html-italic">σ</span> = 0.2, <span class="html-italic">k_S</span> = 23, OA = 93.52%; (<b>d</b>) <span class="html-italic">σ</span> = 0.3, <span class="html-italic">k_S</span> = 17, OA = 96.25%; (<b>e</b>) <span class="html-italic">σ</span> = 0.4, <span class="html-italic">k_S</span> = 12, OA = 94.52%; (<b>f</b>) <span class="html-italic">σ</span> = 0.4, <span class="html-italic">k_S</span> = 9, OA = 94.32%; and (<b>g</b>) <span class="html-italic">σ</span> = 0.3, <span class="html-italic">k_S</span> = 7, OA = 94.28%.</p>
<p><b>Figure 23.</b> Classification results of the Indian Pines image under different scales. The SLIC segmentation method is applied in the SBDSM algorithm. The number of multiscale superpixels is obtained by presetting the number of superpixel segmentations <span class="html-italic">n_S</span>: (<b>a</b>) <span class="html-italic">n_S</span> = 147, OA = 89.10%; (<b>b</b>) <span class="html-italic">n_S</span> = 206, OA = 91.45%; (<b>c</b>) <span class="html-italic">n_S</span> = 288, OA = 93.16%; (<b>d</b>) <span class="html-italic">n_S</span> = 415, OA = 96.81%; (<b>e</b>) <span class="html-italic">n_S</span> = 562, OA = 96.52%; (<b>f</b>) <span class="html-italic">n_S</span> = 780, OA = 95.32%; and (<b>g</b>) <span class="html-italic">n_S</span> = 1055, OA = 95.24%.</p>
<p><b>Figure 24.</b> Classification maps obtained by the MSSR_FH, MSSR_SLIC and MSSR_ERS algorithms: (<b>a</b>) MSSR_FH, OA = 96.54%; (<b>b</b>) MSSR_SLIC, OA = 97.38%; and (<b>c</b>) MSSR_ERS, OA = 98.58%.</p>
<p><b>Figure 25.</b> Effect of the number of training samples on SVM, EMP, SRC, JSRC, MASR, SCMK, SBDSM and MSSR for the: (<b>a</b>) Indian Pines image; (<b>b</b>) University of Pavia image; (<b>c</b>) Salinas image; and (<b>d</b>) Washington DC image.</p>
Abstract
1. Introduction
2. JSRC Algorithm for HSI Classification
3. Proposed MSSR for HSI Classification
3.1. Generation of Multiscale Superpixels in HSI
3.2. Sparse Representation for HSI with Multiscale Superpixels
3.3. Fusion of Multiscale Classification Results
4. Experimental Results and Discussion
4.1. Datasets Description
4.2. Comparison of Results
4.3. Effect of Superpixel Scale Selection
4.4. Comparison of Different Superpixel Segmentation Methods
4.5. Effects of Training Sample Number
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Yuan, Y.; Wang, Q.; Zhu, G. Fast hyperspectral anomaly detection via high-order 2-D crossing filter. IEEE Trans. Geosci. Remote Sens. 2015, 53, 620–630. [Google Scholar] [CrossRef]
- Heldens, W.; Heiden, U.; Esch, T.; Stein, E.; Muller, A. Can the future EnMAP mission contribute to urban applications? A literature survey. Remote Sens. 2011, 3, 1817–1846. [Google Scholar] [CrossRef]
- Lee, M.A.; Huang, Y.; Yao, H.; Thomson, S.J. Determining the effects of storage on cotton and soybean leaf samples for hyperspectral analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2562–2570. [Google Scholar] [CrossRef]
- Kanning, M.; Siegmann, B.; Jarmer, T. Regionalization of uncovered agricultural soils based on organic carbon and soil texture estimations. Remote Sens. 2016, 8, 927. [Google Scholar] [CrossRef]
- Clark, M.L.; Roberts, D.A. Species-level differences in hyperspectral metrics among tropical rainforest trees as determined by a tree-based classifier. Remote Sens. 2012, 4, 1820–1855. [Google Scholar] [CrossRef]
- Ryan, J.P.; Davis, C.O.; Tufillaro, N.B.; Kudela, R.M.; Gao, B. Application of the hyperspectral imager for the coastal ocean to phytoplankton ecology studies in monterey bay, CA, USA. Remote Sens. 2014, 6, 1007–1025. [Google Scholar] [CrossRef]
- Brook, A.; Dor, E.B. Quantitative detection of settled dust over green canopy using sparse unmixing of airborne hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 884–897. [Google Scholar] [CrossRef]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
- Ratle, F.; Camps-Valls, G.; Weston, J. Semisupervised neural networks for efficient hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2271–2282. [Google Scholar] [CrossRef]
- Zhong, Y.; Zhang, L. An adaptive artificial immune network for supervised classification of multi-/hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 894–909. [Google Scholar] [CrossRef]
- Jiao, H.; Zhong, Y.; Zhang, L. Artificial DNA computing-based spectral encoding and matching algorithm for hyperspectral remote sensing data. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4085–4104. [Google Scholar] [CrossRef]
- Jia, S.; Shen, L.; Li, Q. Gabor feature-based collaborative representation for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1118–1129. [Google Scholar]
- Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
- Kang, X.; Li, S.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677. [Google Scholar] [CrossRef]
- Mirzapour, F.; Ghassemian, H. Multiscale gaussian derivative functions for hyperspectral image feature extraction. IEEE Geosci. Remote Sens. Lett. 2016, 13, 525–529. [Google Scholar] [CrossRef]
- Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
- Quesada-Barriuso, P.; Arguello, F.; Heras, D.B. Spectral-spatial classification of hyperspectral images using wavelets and extended morphological profiles. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1177–1185. [Google Scholar] [CrossRef]
- Li, J.; Marpu, P.R.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A. Generalized composite kernel framework for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4816–4829. [Google Scholar] [CrossRef]
- Camps-Valls, G.; Shervashidze, N.; Borgwardt, K.M. Spatio-spectral remote sensing image classification with graph kernels. IEEE Geosci. Remote Sens. Lett. 2010, 7, 741–745. [Google Scholar] [CrossRef]
- Wang, J.; Jiao, L.; Liu, H.; Yang, S.; Liu, F. Hyperspectral image classification by spatial-spectral derivative-aided kernel joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2485–2500. [Google Scholar] [CrossRef]
- Liu, J.; Wu, Z.; Li, J.; Plaza, A.; Yuan, Y. Probabilistic-kernel collaborative representation for spatial-spectral hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2371–2384. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 242–256. [Google Scholar] [CrossRef]
- Guo, X.; Huang, X.; Zhang, L.; Zhang, L.; Plaza, A.; Benediktsson, J.A. Support tensor machines for classification of hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3248–3264. [Google Scholar] [CrossRef]
- He, Z.; Li, J.; Liu, L.; Zhang, L. Tensor block-sparsity based representation for spectral-spatial hyperspectral image classification. Remote Sens. 2016, 8, 636. [Google Scholar] [CrossRef]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Chen, Y.; Zhao, X.; Jia, X. Spectral-spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2391. [Google Scholar] [CrossRef]
- Liang, H.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8, 99. [Google Scholar] [CrossRef]
- Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
- Ma, J.; Zhou, H.; Zhao, J.; Gao, Y.; Jiang, J.; Tian, J. Robust feature matching for remote sensing image registration via locally linear transforming. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6469–6481. [Google Scholar] [CrossRef]
- Ma, J.; Zhao, J.; Yuille, A.L. Non-rigid point set registration by preserving global and local structures. IEEE Trans. Image Process. 2016, 25, 53–64. [Google Scholar] [PubMed]
- Li, S.; Yin, H.; Fang, L. Remote sensing image fusion via sparse representations over learned dictionaries. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4779–4789. [Google Scholar] [CrossRef]
- Srinivas, U.; Chen, Y.; Monga, V.; Nasrabadi, N.M.; Tran, T.D. Exploiting sparsity in hyperspectral image classification via graphical models. IEEE Geosci. Remote Sens. Lett. 2013, 10, 505–509. [Google Scholar] [CrossRef]
- Qian, Y.; Ye, M.; Zhou, J. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture feature. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2276–2291. [Google Scholar] [CrossRef]
- Fu, W.; Li, S.; Fang, L.; Kang, X.; Benediktsson, J.A. Hyperspectral image classification via shape-adaptive joint sparse representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 556–567. [Google Scholar] [CrossRef]
- Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral image classification with robust sparse representation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 641–645. [Google Scholar] [CrossRef]
- Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985. [Google Scholar] [CrossRef]
- Zhang, H.; Li, J.; Huang, Y.; Zhang, L. A nonlocal weighted joint sparse representation classification method for hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 2056–2065. [Google Scholar] [CrossRef]
- Zhang, E.; Jiao, L.; Zhang, X.; Liu, H.; Wang, S. Class-level joint sparse representation for multifeature-based hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4160–4177. [Google Scholar] [CrossRef]
- Peng, X.; Zhang, L.; Yi, Z.; Tan, K.K. Learning locality-constrained collaborative representation for robust face recognition. Pattern Recognit. 2014, 47, 2794–2806. [Google Scholar] [CrossRef]
- Peng, X.; Yu, Z.; Yi, Z.; Tang, H. Constructing the L2-graph for robust subspace learning and subspace clustering. IEEE Trans. Cybern. 2016, PP, 1–14. [Google Scholar] [CrossRef] [PubMed]
- Yuan, Y.; Lin, J.; Wang, Q. Hyperspectral image classification via multitask joint sparse representation and stepwise MRF optimization. IEEE Trans. Cybern. 2016, 46, 2966–2977. [Google Scholar] [CrossRef] [PubMed]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed]
- Tian, Z.; Liu, L.; Zhang, Z.; Fei, B. A superpixel-based segmentation for 3D prostate MR images. IEEE Trans. Med. Imag. 2016, 35, 791–801. [Google Scholar] [CrossRef] [PubMed]
- Lu, H.; Li, X.; Zhang, L.; Ruan, X.; Yang, M. Dense and sparse reconstruction error based saliency descriptor. IEEE Trans. Image Process. 2016, 25, 1592–1603. [Google Scholar] [CrossRef] [PubMed]
- Li, J.; Zhang, H.; Zhang, L. Efficient superpixel-level multitask joint sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351. [Google Scholar]
- Fang, L.; Li, S.; Duan, W.; Ren, J.; Benediktsson, J.A. Classification of hyperspectral images by exploiting spectral-spatial information of superpixel via multiple kernels. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6663–6674. [Google Scholar] [CrossRef]
- Saranathan, A.M.; Parente, M. Uniformity-based superpixel segmentation of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1419–1430. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, Y.; Song, H.W. A Spectral-texture kernel-based classification method for hyperspectral images. Remote Sens. 2016, 8, 919. [Google Scholar] [CrossRef]
- Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
- Tong, N.; Lu, H.; Zhang, L.; Ruan, X. Saliency detection with multiscale superpixels. IEEE Signal Process. Lett. 2014, 21, 1035–1039. [Google Scholar]
- Tan, N.; Xu, Y.; Goh, W.B.; Liu, J. Robust multi-scale superpixel classification for optic cup location. Comput. Med. Imaging Graph. 2015, 40, 182–193. [Google Scholar] [CrossRef] [PubMed]
- Neubert, P.; Protzel, P. Beyond holistic descriptors, keypoints, and fixed patches: multiscale superpixel grids for place recognition in changing environments. IEEE Robot. Autom. Lett. 2016, 1, 484–491. [Google Scholar] [CrossRef]
- Tropp, J.A.; Gilbert, A.C.; Strauss, M.J. Algorithms for simultaneous sparse approximation. part I: Greedy pursuit. Signal Process. 2006, 86, 572–588. [Google Scholar] [CrossRef]
- Jolliffe, I.T. Principal Component Analysis; Wiley: New York, NY, USA, 2005. [Google Scholar]
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall: Englewood Cliffs, NJ, USA, 2009. [Google Scholar]
- Chacon, M.I.; Corral, A.D. Image complexity measure: A human criterion free approach. In Proceedings of the 2005 Annual Meeting of the North American Fuzzy Information Processing Society, Detroit, MI, USA, 26–28 June 2005; pp. 241–246.
- Silva, M.P.D.; Courboulay, V.; Estraillier, P. Image complexity measure based on visual attention. In Proceedings of the 18th IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, 11–14 September 2011; pp. 3281–3284.
- Cardaci, M.; Gesù, V.D.; Petrou, M.; Tabacchi, M.E. A fuzzy approach to the evaluation of image complexity. Fuzzy Sets Syst. 2009, 160, 1474–1484. [Google Scholar] [CrossRef]
- Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient graph-based image segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
- Veksler, O.; Boykov, Y.; Mehrani, P. Superpixels and supervoxels in an energy optimization framework. In Proceedings of the 11th European Conference on Computer Vision (ECCV), Crete, Greece, 5–11 September 2010; pp. 211–224.
- Liu, M.Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy-rate clustering analysis via maximizing a submodular function subject to matroid constraint. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 99–112. [Google Scholar] [CrossRef] [PubMed]
- Nemhauser, G.L.; Wolsey, L.A.; Fisher, M.L. An analysis of approximations for maximizing submodular set functions. Math. Program. 1978, 14, 265–294. [Google Scholar] [CrossRef]
- Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification via multiscale adaptive sparse representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7738–7749. [Google Scholar] [CrossRef]
- Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral-spatial classification of hyperspectral images with a superpixel-based discriminative sparse model. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4186–4201. [Google Scholar] [CrossRef]
- Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
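The single-scale superpixel counts used in Figures 6 and 7 (and the scale-comparison tables below) are generated from a fundamental superpixel number and an integer power exponent via Equation (5) of the paper. As a minimal sketch, assuming Equation (5) has the common doubling form S_n = S_0 · 2^n (an assumption; the exact formula is given in the paper body), the seven scales for S_0 = 3200 and n = −3, …, 3 can be enumerated as:

```python
# Sketch of the multiscale superpixel-number schedule.
# ASSUMPTION: Eq. (5) is taken to be S_n = S_0 * 2**n, with S_0 the
# fundamental superpixel number and n the power exponent; verify
# against the paper's actual Equation (5).
def superpixel_scales(fundamental: int = 3200, n_range=range(-3, 4)):
    """Return the superpixel count at each scale exponent n."""
    return {n: int(fundamental * 2 ** n) for n in n_range}

scales = superpixel_scales()
print(scales)
# -> {-3: 400, -2: 800, -1: 1600, 0: 3200, 1: 6400, 2: 12800, 3: 25600}
```

Under this assumption the coarsest scale (n = −3) uses 400 superpixels and the finest (n = 3) uses 25,600, matching the "fundamental superpixel number is set to 3200" setup of the figure captions.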
Class | Name | Train | Test |
---|---|---|---|
1 | Alfalfa | 5 | 41 |
2 | Corn-no till | 143 | 1285 |
3 | Corn-min till | 83 | 747 |
4 | Corn | 24 | 213 |
5 | Grass/pasture | 49 | 434 |
6 | Grass/tree | 73 | 657 |
7 | Grass/pasture-mowed | 3 | 25 |
8 | Hay-windrowed | 48 | 430 |
9 | Oats | 2 | 18 |
10 | Soybean-no till | 98 | 874 |
11 | Soybean-min till | 246 | 2209 |
12 | Soybean-clean till | 60 | 533 |
13 | Wheat | 21 | 184 |
14 | Woods | 127 | 1138 |
15 | Bldg-grass-trees-drives | 39 | 347 |
16 | Stone-steel towers | 10 | 83 |
Total | | 1031 | 9218 |
Class | |||||||
---|---|---|---|---|---|---|---|
1 | 99.01 | 98.05 | 99.02 | 97.56 | 98.05 | 97.56 | 90.73 |
2 | 87.21 | 90.27 | 95.04 | 96.34 | 90.41 | 92.54 | 90.33 |
3 | 92.05 | 95.53 | 97.11 | 89.69 | 95.61 | 92.34 | 92.88 |
4 | 95.31 | 99.06 | 99.04 | 99.06 | 88.64 | 91.55 | 87.98 |
5 | 91.71 | 94.10 | 94.79 | 93.55 | 94.47 | 94.01 | 92.30 |
6 | 99.88 | 99.76 | 99.76 | 99.85 | 99.30 | 98.48 | 97.53 |
7 | 96.00 | 96.00 | 97.60 | 96.43 | 96.00 | 96.80 | 96.00 |
8 | 100 | 100 | 100 | 100 | 100 | 98.79 | 98.28 |
9 | 100 | 100 | 100 | 100 | 100 | 94.44 | 86.67 |
10 | 90.48 | 96.84 | 92.20 | 94.74 | 93.91 | 94.74 | 93.46 |
11 | 93.86 | 97.32 | 96.51 | 96.92 | 97.71 | 96.43 | 95.15 |
12 | 82.66 | 92.35 | 95.20 | 97.75 | 90.73 | 91.22 | 84.80 |
13 | 100 | 99.46 | 99.57 | 99.46 | 99.46 | 99.13 | 99.46 |
14 | 99.74 | 99.93 | 99.74 | 99.82 | 99.46 | 98.54 | 98.98 |
15 | 93.60 | 98.04 | 91.87 | 93.08 | 95.33 | 93.03 | 83.80 |
16 | 96.63 | 96.39 | 97.59 | 96.39 | 97.11 | 97.56 | 91.57 |
OA (%) | 93.37 | 96.44 | 96.56 | 96.92 | 95.77 | 95.32 | 93.62 |
AA (%) | 94.88 | 95.07 | 95.19 | 95.19 | 95.01 | 95.45 | 92.50 |
Kappa | 0.92 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.93 |
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR |
---|---|---|---|---|---|---|---|---|
1 | 80.24 | 98.22 | 58.64 | 92.68 | 94.68 | 99.99 | 97.56 | 97.45 |
2 | 72.99 | 85.84 | 52.35 | 94.68 | 97.25 | 97.24 | 96.34 | 97.83 |
3 | 66.76 | 90.86 | 53.65 | 93.44 | 97.34 | 97.20 | 89.69 | 99.60 |
4 | 84.10 | 93.75 | 36.78 | 91.93 | 97.41 | 96.89 | 99.06 | 98.59 |
5 | 90.68 | 93.40 | 82.35 | 94.05 | 97.20 | 96.31 | 93.55 | 98.16 |
6 | 94.09 | 98.47 | 89.98 | 95.58 | 99.62 | 99.59 | 99.85 | 99.82 |
7 | 83.96 | 92.75 | 88.56 | 83.20 | 96.41 | 99.68 | 96.43 | 96.73 |
8 | 96.15 | 99.90 | 90.23 | 99.86 | 99.89 | 100 | 100 | 100 |
9 | 92.00 | 100 | 71.35 | 36.67 | 76.42 | 100 | 100 | 92.23 |
10 | 77.11 | 87.45 | 68.32 | 91.21 | 97.99 | 92.35 | 94.74 | 98.28 |
11 | 69.84 | 91.91 | 75.32 | 95.98 | 98.64 | 98.61 | 96.92 | 99.28 |
12 | 73.72 | 87.85 | 42.56 | 88.89 | 97.72 | 96.78 | 97.75 | 97.19 |
13 | 98.28 | 97.97 | 91.21 | 83.04 | 99.01 | 99.13 | 99.46 | 99.46 |
14 | 90.67 | 99.22 | 88.52 | 99.56 | 99.99 | 99.64 | 99.82 | 100 |
15 | 70.67 | 97.85 | 36.25 | 93.26 | 98.62 | 97.21 | 93.08 | 92.04 |
16 | 96.12 | 98.27 | 88.58 | 90.12 | 95.87 | 97.02 | 96.39 | 96.39 |
OA (%) | 78.37 | 92.49 | 68.34 | 94.56 | 98.26 | 98.01 | 96.92 | 98.56 |
AA (%) | 83.58 | 94.61 | 65.20 | 89.01 | 96.66 | 97.95 | 95.19 | 97.98 |
Kappa | 0.75 | 0.91 | 0.64 | 0.94 | 0.98 | 0.98 | 0.95 | 0.98 |
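The OA, AA, and Kappa rows reported throughout these tables are the standard classification metrics: overall accuracy, average (per-class) accuracy, and Cohen's kappa coefficient. As a minimal sketch with a toy confusion matrix (illustrative data only, not from the paper):

```python
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """Compute OA, AA, and Cohen's kappa from a square confusion
    matrix (rows = reference labels, columns = predicted labels)."""
    cm = cm.astype(float)
    total = cm.sum()
    oa = np.trace(cm) / total                           # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))          # mean per-class accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Toy 3-class confusion matrix
cm = np.array([[50, 2, 3],
               [5, 40, 5],
               [2, 3, 45]])
oa, aa, kappa = accuracy_metrics(cm)
print(f"OA={oa:.4f}, AA={aa:.4f}, Kappa={kappa:.4f}")
```

OA weights every test sample equally, AA weights every class equally (so small classes such as Oats count as much as Soybean-min till), and kappa discounts agreement expected by chance, which is why the tables report all three.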
Class | Name | Train | Test |
---|---|---|---|
1 | Asphalt | 67 | 6631 |
2 | Meadows | 187 | 18,649 |
3 | Gravel | 21 | 2099 |
4 | Trees | 31 | 3064 |
5 | Metal sheets | 14 | 1345 |
6 | Bare soil | 51 | 5029 |
7 | Bitumen | 14 | 1330 |
8 | Bricks | 37 | 3682 |
9 | Shadows | 10 | 947 |
Total | | 432 | 42,776 |
Class | |||||||
---|---|---|---|---|---|---|---|
1 | 94.50 | 89.10 | 83.10 | 86.43 | 82.04 | 81.84 | 78.85 |
2 | 95.08 | 96.75 | 98.97 | 98.00 | 97.29 | 98.13 | 98.82 |
3 | 100 | 100 | 99.18 | 78.19 | 88.99 | 87.25 | 91.15 |
4 | 58.69 | 60.27 | 67.03 | 69.07 | 66.67 | 68.18 | 74.35 |
5 | 88.43 | 85.15 | 91.59 | 96.39 | 93.91 | 95.94 | 92.04 |
6 | 99.26 | 99.18 | 99.98 | 96.82 | 97.91 | 95.52 | 87.81 |
7 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
8 | 99.37 | 99.05 | 88.82 | 91.28 | 91.35 | 91.66 | 97.91 |
9 | 29.24 | 33.51 | 43.01 | 57.52 | 68.62 | 68.81 | 72.40 |
OA (%) | 91.70 | 91.51 | 92.06 | 92.15 | 91.92 | 91.64 | 91.37 |
AA (%) | 84.83 | 84.91 | 86.28 | 86.96 | 88.52 | 86.66 | 87.70 |
Kappa | 0.89 | 0.89 | 0.90 | 0.90 | 0.89 | 0.89 | 0.88 |
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR |
---|---|---|---|---|---|---|---|---|
1 | 81.44 | 94.39 | 73.34 | 60.65 | 74.01 | 90.19 | 86.43 | 94.20 |
2 | 91.39 | 90.40 | 91.84 | 97.82 | 98.71 | 99.58 | 98.00 | 99.97 |
3 | 82.13 | 95.17 | 52.39 | 86.77 | 96.77 | 93.71 | 78.19 | 98.78 |
4 | 93.47 | 96.02 | 74.55 | 82.86 | 87.60 | 88.82 | 69.07 | 74.18 |
5 | 99.30 | 99.85 | 99.62 | 95.79 | 100 | 99.70 | 96.39 | 96.92 |
6 | 88.27 | 78.83 | 45.37 | 87.20 | 84.53 | 98.05 | 96.82 | 97.38 |
7 | 93.84 | 96.95 | 61.55 | 98.17 | 98.25 | 85.79 | 100 | 100 |
8 | 74.46 | 97.60 | 76.13 | 74.62 | 80.58 | 90.75 | 88.82 | 99.34 |
9 | 76.52 | 99.15 | 85.38 | 36.25 | 49.95 | 98.65 | 57.52 | 65.78 |
OA (%) | 86.13 | 91.58 | 77.81 | 84.53 | 89.73 | 94.73 | 92.15 | 95.54 |
AA (%) | 78.25 | 94.25 | 72.91 | 78.20 | 85.60 | 91.85 | 86.96 | 89.19 |
Kappa | 0.82 | 0.89 | 0.70 | 0.81 | 0.86 | 0.93 | 0.90 | 0.94 |
Class | Name | Train | Test |
---|---|---|---|
1 | Weeds_1 | 5 | 2004 |
2 | Weeds_2 | 8 | 3718 |
3 | Fallow | 4 | 1973 |
4 | Fallow plow | 3 | 1391 |
5 | Fallow smooth | 6 | 2672 |
6 | Stubble | 8 | 3951 |
7 | Celery | 8 | 3571 |
8 | Grapes | 23 | 11,248 |
9 | Soil | 13 | 6190 |
10 | Corn | 7 | 3271 |
11 | Lettuce 4 wk | 3 | 1065 |
12 | Lettuce 5 wk | 4 | 1923 |
13 | Lettuce 6 wk | 2 | 914 |
14 | Lettuce 7 wk | 3 | 1068 |
15 | Vineyard untrained | 15 | 7253 |
16 | Vineyard trellis | 4 | 1803 |
Total | | 116 | 54,015 |
Class | Name | Train | Test |
---|---|---|---|
1 | Roof | 63 | 3122 |
2 | Road | 36 | 1786 |
3 | Trail | 29 | 1399 |
4 | Grass | 26 | 1261 |
5 | Shadow | 24 | 1191 |
6 | Tree | 23 | 1117 |
Total | | 201 | 9698 |
Class | |||||
---|---|---|---|---|---|
1 | 100 | 100 | 100 | 100 | 100 |
2 | 99.78 | 99.87 | 99.91 | 99.87 | 99.83 |
3 | 100 | 100 | 100 | 100 | 100 |
4 | 99.93 | 99.93 | 96.93 | 93.08 | 84.67 |
5 | 99.33 | 99.33 | 99.93 | 99.33 | 99.40 |
6 | 99.95 | 99.33 | 99.95 | 99.95 | 99.89 |
7 | 99.89 | 99.90 | 99.95 | 99.75 | 99.66 |
8 | 99.11 | 99.88 | 99.18 | 99.02 | 96.01 |
9 | 100 | 100 | 100 | 100 | 100 |
10 | 97.34 | 98.08 | 97.40 | 77.92 | 95.92 |
11 | 40.00 | 100 | 100 | 100 | 100 |
12 | 60.00 | 100 | 100 | 100 | 100 |
13 | 78.42 | 69.61 | 98.03 | 98.21 | 97.81 |
14 | 95.97 | 95.99 | 96.29 | 78.91 | 95.90 |
15 | 99.94 | 90.93 | 96.49 | 99.94 | 92.42 |
16 | 97.23 | 97.24 | 97.23 | 97.24 | 97.24 |
OA (%) | 95.35 | 96.84 | 98.68 | 97.67 | 97.23 |
AA (%) | 90.68 | 93.73 | 98.85 | 96.46 | 97.42 |
Kappa | 0.96 | 0.96 | 0.98 | 0.97 | 0.97 |
Class | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
1 | 90.12 | 89.37 | 93.28 | 90.77 | 87.90 | 93.48 | 91.65 | 92.04 | 90.02 | 89.07 | 88.42 |
2 | 98.63 | 97.38 | 97.66 | 99.83 | 98.18 | 97.21 | 97.78 | 96.41 | 99.37 | 99.20 | 99.37 |
3 | 83.61 | 92.64 | 94.83 | 94.97 | 97.67 | 85.29 | 95.05 | 92.57 | 92.43 | 92.50 | 90.97 |
4 | 83.68 | 91.52 | 93.30 | 92.41 | 92.89 | 96.77 | 97.50 | 95.88 | 96.77 | 96.69 | 95.96 |
5 | 78.46 | 93.33 | 90.43 | 89.32 | 91.79 | 94.02 | 87.95 | 93.33 | 93.33 | 93.25 | 93.68 |
6 | 68.55 | 73.75 | 81.59 | 80.40 | 82.86 | 92.62 | 82.04 | 85.87 | 86.87 | 88.06 | 86.78 |
OA (%) | 86.07 | 90.27 | 92.62 | 91.86 | 91.68 | 93.38 | 92.45 | 92.85 | 92.96 | 92.75 | 92.17 |
AA (%) | 83.84 | 89.66 | 91.85 | 91.28 | 91.88 | 93.23 | 0.92 | 92.68 | 93.13 | 93.13 | 92.53 |
Kappa | 0.83 | 0.88 | 0.91 | 0.90 | 0.90 | 0.92 | 0.91 | 0.91 | 0.91 | 0.91 | 0.90 |
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR |
---|---|---|---|---|---|---|---|---|
1 | 98.35 | 99.23 | 92.56 | 99.99 | 99.90 | 99.94 | 100 | 100 |
2 | 98.76 | 98.90 | 95.39 | 99.67 | 99.35 | 97.21 | 99.91 | 99.78 |
3 | 92.54 | 93.53 | 74.67 | 68.25 | 97.88 | 87.54 | 100 | 100 |
4 | 96.28 | 98.92 | 98.68 | 62.47 | 88.95 | 99.93 | 96.93 | 99.93 |
5 | 95.75 | 94.25 | 89.88 | 82.55 | 92.86 | 99.35 | 99.93 | 99.93 |
6 | 95.90 | 95.61 | 98.76 | 99.27 | 99.97 | 99.95 | 99.95 | 99.90 |
7 | 98.47 | 99.24 | 99.02 | 95.35 | 100 | 99.53 | 99.95 | 99.97 |
8 | 63.26 | 53.35 | 75.30 | 86.63 | 84.98 | 97.85 | 99.18 | 99.28 |
9 | 97.55 | 98.75 | 93.79 | 100 | 99.76 | 99.64 | 100 | 100 |
10 | 82.38 | 93.41 | 76.02 | 85.27 | 88.10 | 87.94 | 97.40 | 97.32 |
11 | 88.97 | 96.51 | 88.20 | 89.17 | 98.97 | 97.18 | 100 | 100 |
12 | 85.57 | 95.36 | 95.48 | 67.54 | 98.85 | 100 | 100 | 98.03 |
13 | 98.68 | 98.57 | 96.81 | 51.82 | 99.69 | 92.58 | 98.03 | 95.97 |
14 | 86.82 | 93.84 | 89.75 | 95.25 | 94.16 | 95.97 | 96.29 | 95.97 |
15 | 59.66 | 79.00 | 88.20 | 67.40 | 80.84 | 88.48 | 96.49 | 99.94 |
16 | 35.88 | 38.52 | 71.44 | 83.42 | 92.68 | 94.93 | 97.23 | 97.25 |
OA (%) | 80.65 | 85.75 | 81.72 | 85.33 | 92.33 | 94.14 | 98.68 | 99.41 |
AA (%) | 87.01 | 93.40 | 85.21 | 82.63 | 94.75 | 94.92 | 98.85 | 99.17 |
Kappa | 0.81 | 0.84 | 0.79 | 0.84 | 0.91 | 0.93 | 0.98 | 0.99 |
Class | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR |
---|---|---|---|---|---|---|---|---|
1 | 83.40 | 86.09 | 89.95 | 91.13 | 98.75 | 93.56 | 93.48 | 93.98 |
2 | 98.35 | 96.80 | 94.98 | 97.95 | 98.14 | 98.92 | 97.21 | 99.45 |
3 | 93.23 | 89.31 | 81.35 | 89.80 | 98.26 | 98.18 | 85.29 | 95.92 |
4 | 96.17 | 93.46 | 95.40 | 93.38 | 90.94 | 92.57 | 96.77 | 97.58 |
5 | 91.26 | 97.58 | 95.98 | 94.36 | 99.16 | 95.04 | 94.02 | 97.69 |
6 | 92.83 | 79.64 | 94.53 | 92.98 | 82.74 | 91.25 | 92.62 | 96.81 |
OA (%) | 91.11 | 90.04 | 91.59 | 93.06 | 95.97 | 94.96 | 93.38 | 96.74 |
AA (%) | 92.54 | 90.48 | 92.03 | 93.27 | 94.75 | 94.91 | 93.23 | 96.58 |
Kappa | 0.89 | 0.88 | 0.90 | 0.91 | 0.95 | 0.94 | 0.92 | 0.95 |
Images | SVM | EMP | SRC | JSRC | MASR | SCMK | SBDSM | MSSR |
---|---|---|---|---|---|---|---|---|
Indian Pines | 242.3 | 63.5 | 17.4 | 118.6 | 1580.9 | 254.6 | 9.2 | 67.4 |
University of Pavia | 30.4 | 24.6 | 50.4 | 425.3 | 3010.4 | 48.7 | 26.5 | 197.5
Salinas | 13.4 | 10.5 | 76.2 | 800.9 | 4129.1 | 20.6 | 18.4 | 92.1 |
Washington DC | 14.6 | 23.3 | 16.7 | 34.2 | 210.6 | 16.5 | 21.9 | 187.3 |
Class | |||||||
---|---|---|---|---|---|---|---|
1 | 92.68 | 92.68 | 92.68 | 92.68 | 92.68 | 92.68 | 92.68 |
2 | 86.93 | 81.48 | 79.92 | 96.74 | 93.62 | 90.50 | 93.93 |
3 | 99.46 | 86.85 | 98.66 | 92.90 | 94.51 | 93.31 | 91.16 |
4 | 42.72 | 94.01 | 87.79 | 90.74 | 75.59 | 90.61 | 82.63 |
5 | 90.78 | 99.54 | 99.08 | 99.08 | 95.62 | 91.94 | 94.93 |
6 | 99.71 | 96.00 | 99.85 | 99.54 | 99.70 | 99.24 | 96.35 |
7 | 0 | 100 | 96.00 | 96.00 | 96.00 | 96.00 | 96.00 |
8 | 100 | 100 | 100 | 100 | 98.60 | 100 | 100 |
9 | 0 | 100 | 100 | 100 | 100 | 100 | 83.33 |
10 | 91.19 | 90.93 | 94.51 | 96.91 | 97.94 | 96.57 | 90.96 |
11 | 62.61 | 93.75 | 96.20 | 92.03 | 91.58 | 95.52 | 94.34 |
12 | 95.87 | 91.93 | 79.36 | 81.43 | 91.56 | 87.24 | 86.12 |
13 | 99.46 | 99.46 | 99.46 | 99.46 | 99.46 | 99.46 | 98.37 |
14 | 83.04 | 99.82 | 99.82 | 99.82 | 99.74 | 99.74 | 99.74 |
15 | 97.98 | 97.98 | 98.27 | 98.85 | 98.85 | 98.27 | 97.41 |
16 | 97.59 | 97.59 | 97.59 | 97.59 | 97.59 | 97.59 | 97.59 |
OA (%) | 83.62 | 93.62 | 93.97 | 96.04 | 94.96 | 94.56 | 94.26 |
AA (%) | 77.50 | 94.99 | 94.95 | 95.86 | 95.19 | 95.92 | 93.47 |
Kappa | 0.82 | 0.93 | 0.93 | 0.94 | 0.94 | 0.95 | 0.93 |
Class | |||||||
---|---|---|---|---|---|---|---|
1 | 97.56 | 97.56 | 75.61 | 97.56 | 97.56 | 73.17 | 75.61 |
2 | 83.66 | 82.33 | 91.91 | 94.16 | 91.36 | 92.22 | 93.70 |
3 | 86.75 | 84.75 | 85.68 | 93.84 | 98.66 | 97.99 | 95.45 |
4 | 81.67 | 66.20 | 96.71 | 92.96 | 92.96 | 97.65 | 89.67 |
5 | 87.33 | 94.01 | 95.16 | 88.71 | 87.79 | 94.24 | 91.94 |
6 | 98.17 | 98.93 | 96.19 | 98.33 | 98.33 | 99.09 | 98.63 |
7 | 0 | 100 | 100 | 96.00 | 96.00 | 92.00 | 96.00 |
8 | 99.07 | 99.07 | 100 | 100 | 99.30 | 99.77 | 99.97 |
9 | 98.45 | 99.58 | 100 | 100 | 88.89 | 66.77 | 55.56 |
10 | 71.40 | 89.82 | 84.67 | 97.71 | 94.05 | 95.31 | 93.48 |
11 | 92.39 | 93.57 | 93.62 | 96.29 | 95.02 | 97.01 | 97.15 |
12 | 80.86 | 88.74 | 90.81 | 90.99 | 87.62 | 86.49 | 91.37 |
13 | 99.43 | 86.41 | 99.46 | 99.46 | 99.46 | 98.91 | 98.91 |
14 | 98.42 | 99.89 | 99.56 | 99.65 | 98.51 | 99.30 | 99.82 |
15 | 99.14 | 98.85 | 97.41 | 97.98 | 91.64 | 89.34 | 87.32 |
16 | 98.80 | 97.59 | 97.59 | 98.80 | 97.59 | 97.59 | 96.39 |
OA (%) | 89.28 | 91.78 | 93.06 | 97.08 | 96.69 | 95.62 | 95.57 |
AA (%) | 85.91 | 92.49 | 87.77 | 96.40 | 94.42 | 92.55 | 92.68 |
Kappa | 0.88 | 0.91 | 0.92 | 0.96 | 0.94 | 0.95 | 0.95 |
Class | MSSR_FH | MSSR_SLIC | MSSR_ERS |
---|---|---|---|
1 | 92.68 | 97.56 | 97.45 |
2 | 92.97 | 97.35 | 97.83 |
3 | 97.86 | 97.19 | 99.60 |
4 | 96.71 | 97.18 | 98.59 |
5 | 98.39 | 93.09 | 98.16 |
6 | 99.85 | 99.24 | 99.82 |
7 | 96.00 | 96.00 | 96.73 |
8 | 100 | 100 | 100 |
9 | 100 | 100 | 92.23 |
10 | 97.37 | 97.94 | 98.28 |
11 | 95.70 | 96.71 | 99.28 |
12 | 91.37 | 97.94 | 97.19 |
13 | 99.46 | 99.46 | 99.46 |
14 | 99.82 | 100 | 100 |
15 | 99.14 | 98.85 | 92.04 |
16 | 97.56 | 97.59 | 96.39 |
OA (%) | 96.77 | 97.75 | 98.56 |
AA (%) | 97.18 | 97.88 | 97.98 |
Kappa | 0.96 | 0.97 | 0.98 |
Time (s) | 67.1 | 66.3 | 67.4 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, S.; Li, S.; Fu, W.; Fang, L. Multiscale Superpixel-Based Sparse Representation for Hyperspectral Image Classification. Remote Sens. 2017, 9, 139. https://doi.org/10.3390/rs9020139