Sparse Feature Learning of Hyperspectral Imagery via Multiobjective-Based Extreme Learning Machine
Figure 1. (a) ELM-AE consists of a nonlinear random projection (NRP) and linear regression (LR); β^T is used as the transformation matrix for the features. (b) AE consists of an encoder (red rhomboid box) and a decoder (green rhomboid box); the encoder outputs are the learned features.

Figure 2. Structure of EMO-ELM, which consists of an encoder (red rhomboid box) and a decoder (green rhomboid box). Red neurons in the hidden layer are activated, while orange neurons are suppressed.

Figure 3. Pseudo-color image (a) and ground truth (b) of the SalinasA data set.

Figure 4. Pseudo-color image (a) and ground truth (b) of the KSC data set.

Figure 5. Normalized Pareto front and solution selection for the (a) SalinasA and (b) KSC data sets. The objectives are normalized so both fronts can be plotted in the same coordinate system. The best compromises are the three points closest to the point of maximum curvature.

Figure 6. Two-dimensional visualization of the Iris data set learned by (a) NRP, (b) SPCA, (c) ELM-AE, (d) SELM-AE, (e) AE, (f) SAE, (g) EMO-ELM(f1), (h) EMO-ELM(f2), and (i) EMO-ELM(best).

Figure 7. Sparsity of the different algorithms on the (a) SalinasA and (b) KSC data sets.

Figure 8. Box plots for the SalinasA and KSC data sets. (a–c) show the box plots of the SalinasA data set in terms of OA, AA, and Kappa, respectively; (d–f) show those of the KSC data set. The box edges mark the 25th and 75th percentiles, the middle lines indicate the median, and the whiskers extend to the maximum and minimum points. Outliers are shown as "∘".
Abstract
1. Introduction
2. Related Work
2.1. ELM-AE
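As Figure 1a depicts, ELM-AE couples a nonlinear random projection (NRP) with a linear-regression (LR) decoder whose output weights β admit a closed-form least-squares solution; β^T then serves as the transformation matrix for the features. A minimal sketch of this idea follows (sigmoid activation and Gaussian random weights are illustrative assumptions, not necessarily the paper's exact choices):

```python
import numpy as np

def elm_ae_features(X, L, seed=0):
    """ELM-AE as sketched in Figure 1a: a random nonlinear hidden layer (NRP)
    followed by a closed-form linear decoder (LR); beta^T transforms the input."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, L))           # random input weights (not trained)
    b = rng.standard_normal((1, L))           # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # NRP: hidden-layer activations
    beta = np.linalg.pinv(H) @ X              # LR: least-squares output weights
    return X @ beta.T                         # learned features via beta^T
```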
2.2. AE
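For comparison, a plain AE (Figure 1b) trains the encoder and the decoder jointly by backpropagation, and the encoder outputs are taken as the learned features. A minimal Keras sketch, where the layer sizes, activations, and optimizer are illustrative assumptions:

```python
from tensorflow import keras

def build_ae(d, L):
    """Plain autoencoder per Figure 1b: nonlinear encoder and decoder trained
    jointly; the encoder's output is the learned feature."""
    inp = keras.Input(shape=(d,))
    code = keras.layers.Dense(L, activation="sigmoid", name="encoder")(inp)
    out = keras.layers.Dense(d, name="decoder")(code)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    return ae

# After ae.fit(X, X, ...), extract features with the encoder sub-model:
# encoder = keras.Model(ae.input, ae.get_layer("encoder").output)
```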
3. EMO-ELM
3.1. Constructing a Multiobjective Model
3.2. Solving a Multiobjective Model
Algorithm 1: NSGA-II-based solving
Input: …, …, other evolutionary parameters
Output: Pareto-optimal solution set
1. Initialize the population;
2. while the termination criteria are not met do
3.   Elitist selection;
4.   Genetic operations (crossover and mutation);
5.   Objective evaluation;
6.   Fast nondominated sorting;
7.   Crowding-distance assignment;
8. end
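A sketch of how Algorithm 1 can be realized with an off-the-shelf NSGA-II implementation (pymoo is an assumed dependency). The two objectives here are assumptions consistent with the rest of the paper: f1 is an L1-based sparsity proxy on the hidden activations and f2 is the reconstruction RMSE obtained with the closed-form linear decoder; the paper's exact definitions (Equation (23)) may differ in detail.

```python
import numpy as np
from pymoo.core.problem import Problem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

def hidden_output(X, genome, L):
    """Decode a flat genome into input weights W and biases b; return sigmoid activations."""
    d = X.shape[1]
    W = genome[: d * L].reshape(d, L)
    b = genome[d * L:].reshape(1, L)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

class EMOELMProblem(Problem):
    """f1: sparsity proxy of the hidden output; f2: reconstruction RMSE with a
    closed-form linear decoder. Both objectives are minimized simultaneously."""
    def __init__(self, X, L):
        self.X, self.L = X, L
        super().__init__(n_var=X.shape[1] * L + L, n_obj=2, xl=-1.0, xu=1.0)

    def _evaluate(self, pop, out, *args, **kwargs):
        F = np.empty((len(pop), 2))
        for i, genome in enumerate(pop):
            H = hidden_output(self.X, genome, self.L)
            beta = np.linalg.pinv(H) @ self.X                     # closed-form decoder
            F[i, 0] = np.mean(np.abs(H))                          # f1: L1-based sparsity proxy
            F[i, 1] = np.sqrt(np.mean((H @ beta - self.X) ** 2))  # f2: reconstruction RMSE
        out["F"] = F

X = np.random.rand(200, 10)  # stand-in for spectral vectors
res = minimize(EMOELMProblem(X, L=20), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
# res.X holds the Pareto-optimal genomes; res.F holds their (f1, f2) values.
```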
3.3. Selecting Solution
- the solution attaining the minimum value of f1;
- the solution attaining the minimum value of f2;
- the solution located in the knee area (a sketch of all three criteria follows this list).
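A minimal sketch of the three selection criteria for a two-objective front. The knee point is located here as the point farthest from the chord joining the two extremes of the normalized front, a common surrogate for the maximum-curvature criterion illustrated in Figure 5; the paper's exact knee definition may differ.

```python
import numpy as np

def select_solutions(F):
    """Given Pareto-front objective values F (n x 2), return the indices of the
    f1-minimizer, the f2-minimizer, and a knee-area solution."""
    i_f1 = int(np.argmin(F[:, 0]))
    i_f2 = int(np.argmin(F[:, 1]))
    Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)  # normalize objectives
    a, b = Fn[i_f1], Fn[i_f2]                # extremes of the normalized front
    chord = b - a
    # perpendicular distance of each point to the chord; the maximum marks the knee
    dist = np.abs(chord[0] * (Fn[:, 1] - a[1]) - chord[1] * (Fn[:, 0] - a[0]))
    dist /= np.linalg.norm(chord) + 1e-12
    return i_f1, i_f2, int(np.argmax(dist))
```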
3.4. Sparse Feature Learning Using EMO-ELM
Algorithm 2: EMO-ELM for sparse feature learning
Input: …, …, L
Output: Learned features
1. Optimize Equation (23) according to Algorithm 1 and obtain the Pareto-optimal solution set;
2. Select a solution from the obtained Pareto-optimal solution set according to the selection criteria;
3. Regenerate … and …;
4. Extract features according to Equation (24);
5. Return the extracted features;
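Continuing the running sketch from Algorithm 1, steps 2–5 of Algorithm 2 reduce to picking one Pareto-optimal genome and reusing its hidden layer as the feature extractor. The f2-based choice is used below because Section 4.6 finds it best; `hidden_output`, `select_solutions`, `X`, and `res` come from the earlier sketches.

```python
i_f1, i_f2, i_knee = select_solutions(res.F)
features = hidden_output(X, res.X[i_f2], L=20)  # learned sparse features (n x L)
```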
4. Experiments
4.1. Data Description and Experiment Design
4.1.1. SalinasA Data Set
4.1.2. Kennedy Space Center (KSC) Data Set
4.2. Experiment Settings
4.3. Convergence and Solution Selection
4.4. Visual Investigation of Features Learned by Different Algorithms
4.5. Measuring Sparsity of the Learned Features
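How "sparsity" is quantified matters for this comparison. One common choice from the sparsity-measures literature is Hoyer's measure, which is 0 for a uniform vector and 1 for a vector with a single nonzero entry; whether Figure 7 uses exactly this measure is an assumption. A sketch:

```python
import numpy as np

def hoyer_sparsity(v):
    """Hoyer's measure: (sqrt(n) - ||v||_1 / ||v||_2) / (sqrt(n) - 1)."""
    v = np.ravel(v)
    n = v.size
    l1 = np.abs(v).sum()
    l2 = np.linalg.norm(v) + 1e-12
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)
```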
4.6. Comparison of Classification Ability
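The comparisons below (and Tables 3 and 4) report overall accuracy (OA), average accuracy (AA), and the Kappa coefficient. A minimal sketch of how these three metrics are typically computed (scikit-learn is an assumed dependency):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def oa_aa_kappa(y_true, y_pred):
    """OA: fraction of correct predictions; AA: mean per-class recall;
    Kappa: agreement corrected for chance."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))
    return oa, aa, cohen_kappa_score(y_true, y_pred)
```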
- EMO-ELM(f1) vs. EMO-ELM(f2) vs. EMO-ELM(best): Generally, EMO-ELM(f2) yields higher mean accuracy than its competitors on both SalinasA and KSC. This is because EMO-ELM(f2) guarantees the smallest reconstruction error, whereas EMO-ELM(f1), although achieving the best sparsity, is limited in feature reconstruction. Reviewing Figure 7a,b, EMO-ELM(f2) also maintains good sparsity. Hence, the solution selection strategy based on f2 can be considered the best in our experiments. EMO-ELM(best) trades off sparsity against reconstruction error, so we view it as the second choice.
- NRP vs. EMO-ELM: As shown in Figure 8a–f, the original nonlinear random projection (NRP) is effective for feature mapping, but EMO-ELM shows that NRP's performance can be further improved by optimization.
- SPCA vs. EMO-ELM: The features learned by SPCA retain remarkable sparsity; however, their classification ability is degraded. Furthermore, as a dimension-reduction method, SPCA cannot be applied when the learned dimension is larger than the original dimension. In contrast, EMO-ELM outperforms SPCA on most of the tested measures.
- ELM-AE and SELM-AE vs. EMO-ELM: ELM-AE and SELM-AE learn features linearly. In this experiment, SELM-AE performs better than ELM-AE because it uses a sparse random matrix. EMO-ELM, by contrast, learns features nonlinearly and significantly enhances the classification capacity of the learned features.
- AE and SAE vs. EMO-ELM: The learning procedure of EMO-ELM is similar to those of AE and SAE, but EMO-ELM is more competitive in both classification ability and sparsity after the same number of updates. Notably, EMO-ELM optimizes only the hidden layer, whereas AE and SAE must simultaneously optimize the hidden layer and the output layer.
4.7. Discussion
- The proposed approach can be regarded as a general framework composed of a nonlinear encoder and a linear decoder. When optimizing this framework, we only need to focus on the encoder (i.e., the hidden layer), since the decoder has a closed-form solution; this is very different from ordinary neural networks. Compared with SAE and AE, the number of EMO-ELM parameters is thus reduced by roughly half.
- Beyond the objectives used in this paper, other objectives, such as classification error or matrix-norm constraints, can be incorporated into this framework. More importantly, the optimizer is replaceable and flexible. EMO-ELM can therefore serve as a general alternative for unsupervised feature learning.
- Evolutionary optimization is well known to be time-consuming, and this is the main challenge faced by EMO-ELM; as a result, EMO-ELM has difficulty handling big data directly. Fortunately, evolutionary algorithms are easy to parallelize. How to use EMO-ELM for deep representation learning is another open issue worth studying in future work.
5. Conclusions
- The experimental results demonstrate that EMO-ELM is more suitable for extracting sparse features from hyperspectral images, and that EMO plays a significant role in handling the nonlinearity of hyperspectral data.
- The proposed EMO-ELM significantly improves the performance of the original ELM. These experimental results demonstrate that the optimized hidden layer of ELM is effective for HSI feature learning.
- EMO-ELM generally outperforms ELM-AE, SELM-AE, AE, and SAE in terms of sparsity and classification ability, thanks to its two jointly optimized objectives.
- The knee-based solution selection strategy can accurately locate the knee area of the PF curve. However, the RMSE-based solution selection strategy proved more applicable in our experiments.
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Table 1. Classes and number of samples of the SalinasA data set.

| # | Class | Number of Samples |
|---|-------|-------------------|
| 1 | Brocoli_green_weeds_1 | 391 |
| 2 | Corn_senesced_green_weeds | 1343 |
| 3 | Lettuce_romaine_4wk | 616 |
| 4 | Lettuce_romaine_5wk | 1525 |
| 5 | Lettuce_romaine_6wk | 674 |
| 6 | Lettuce_romaine_7wk | 799 |
Table 2. Classes and number of samples of the KSC data set.

| # | Class | Number of Samples |
|---|-------|-------------------|
| 1 | Scrub | 761 |
| 2 | Willow swamp | 243 |
| 3 | CP/Oak | 256 |
| 4 | Slash pine | 252 |
| 5 | Oak/Broadleaf | 161 |
| 6 | Hardwood | 229 |
| 7 | Swamp | 105 |
| 8 | Graminoid marsh | 431 |
| 9 | Spartina marsh | 520 |
| 10 | Cattail marsh | 404 |
| 11 | Salt marsh | 419 |
| 12 | Mud flats | 503 |
| 13 | Water | 927 |
Table 3. Per-class accuracy, AA, and OA (%) and Kappa coefficient on the SalinasA data set (mean ± standard deviation).

| Class | NRP | SPCA | ELM-AE | SELM-AE | AE | SAE | EMO-ELM(f1) | EMO-ELM(f2) | EMO-ELM(best) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 99.49 ± 0.72 | 99.49 ± 0.72 | 99.16 ± 0.96 | 99.16 ± 0.94 | 99.49 ± 0.72 | 99.49 ± 0.72 | 99.49 ± 0.72 | 99.49 ± 0.72 | 99.49 ± 0.72 |
| 2 | 97.80 ± 0.44 | 98.73 ± 0.41 | 95.95 ± 2.28 | 98.81 ± 0.41 | 97.83 ± 0.58 | 97.57 ± 0.16 | 98.18 ± 0.64 | 99.09 ± 0.43 | 98.73 ± 0.40 |
| 3 | 96.12 ± 1.37 | 92.03 ± 1.96 | 86.54 ± 6.22 | 96.36 ± 1.59 | 97.50 ± 0.80 | 96.64 ± 0.74 | 96.14 ± 0.43 | 96.43 ± 0.83 | 92.94 ± 0.64 |
| 4 | 99.93 ± 0.09 | 99.80 ± 0.16 | 99.13 ± 1.91 | 99.72 ± 0.26 | 100.00 ± 0.00 | 99.87 ± 0.09 | 99.99 ± 0.04 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 5 | 100.00 ± 0.00 | 99.87 ± 0.20 | 99.44 ± 0.57 | 99.54 ± 0.57 | 99.70 ± 0.42 | 99.70 ± 0.42 | 99.99 ± 0.08 | 99.70 ± 0.42 | 99.81 ± 0.22 |
| 6 | 99.37 ± 0.47 | 97.12 ± 1.20 | 98.84 ± 0.44 | 99.20 ± 0.42 | 99.50 ± 0.35 | 99.25 ± 0.31 | 98.62 ± 0.34 | 98.89 ± 0.30 | 98.75 ± 0.71 |
| AA | 98.79 ± 0.23 | 97.84 ± 0.27 | 96.51 ± 1.29 | 98.80 ± 0.30 | 99.00 ± 0.05 | 98.75 ± 0.07 | 98.73 ± 0.22 | 98.93 ± 0.08 | 98.29 ± 0.07 |
| OA | 98.85 ± 0.16 | 98.22 ± 0.28 | 96.88 ± 1.28 | 98.96 ± 0.23 | 99.02 ± 0.14 | 98.78 ± 0.04 | 98.85 ± 0.23 | 99.12 ± 0.12 | 98.62 ± 0.09 |
| Kappa | 0.986 ± 0.002 | 0.978 ± 0.004 | 0.961 ± 0.016 | 0.987 ± 0.003 | 0.988 ± 0.002 | 0.985 ± 0.000 | 0.986 ± 0.003 | 0.989 ± 0.002 | 0.983 ± 0.001 |
Table 4. Per-class accuracy, AA, and OA (%) and Kappa coefficient on the KSC data set (mean ± standard deviation).

| Class | NRP | SPCA | ELM-AE | SELM-AE | AE | SAE | EMO-ELM(f1) | EMO-ELM(f2) | EMO-ELM(best) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 97.76 ± 1.32 | 97.66 ± 0.66 | 96.57 ± 1.03 | 96.36 ± 1.10 | 97.83 ± 0.27 | 97.84 ± 0.49 | 95.11 ± 0.27 | 98.09 ± 0.50 | 96.19 ± 0.48 |
| 2 | 89.59 ± 3.81 | 92.10 ± 2.46 | 83.50 ± 3.63 | 85.72 ± 3.69 | 86.87 ± 2.60 | 86.71 ± 1.93 | 94.40 ± 2.08 | 89.09 ± 2.87 | 90.78 ± 0.83 |
| 3 | 88.32 ± 1.41 | 93.73 ± 3.38 | 90.12 ± 3.14 | 89.80 ± 3.20 | 92.13 ± 3.82 | 90.02 ± 4.44 | 92.97 ± 0.04 | 88.07 ± 3.59 | 91.68 ± 1.66 |
| 4 | 63.53 ± 1.42 | 31.15 ± 1.79 | 57.46 ± 5.57 | 56.19 ± 4.27 | 47.46 ± 2.06 | 51.43 ± 2.12 | 2.02 ± 0.55 | 67.74 ± 3.26 | 26.71 ± 5.96 |
| 5 | 55.58 ± 6.59 | 51.84 ± 4.34 | 43.50 ± 4.72 | 43.99 ± 3.73 | 58.15 ± 2.13 | 57.89 ± 2.81 | 16.92 ± 2.83 | 58.68 ± 5.91 | 54.23 ± 3.55 |
| 6 | 54.09 ± 5.91 | 29.48 ± 2.36 | 46.64 ± 7.24 | 48.78 ± 6.88 | 52.04 ± 5.98 | 47.96 ± 2.63 | 0.09 ± 0.33 | 50.41 ± 3.86 | 35.42 ± 3.37 |
| 7 | 90.48 ± 5.39 | 61.62 ± 10.32 | 82.29 ± 7.75 | 80.19 ± 9.67 | 86.10 ± 5.46 | 86.67 ± 8.32 | 41.05 ± 11.58 | 87.81 ± 7.63 | 77.81 ± 12.64 |
| 8 | 84.32 ± 4.96 | 81.24 ± 5.96 | 82.53 ± 6.43 | 83.67 ± 6.66 | 88.82 ± 6.13 | 89.89 ± 4.29 | 70.79 ± 3.40 | 88.59 ± 4.08 | 84.92 ± 6.83 |
| 9 | 97.06 ± 0.27 | 92.48 ± 0.99 | 96.62 ± 2.24 | 97.50 ± 1.69 | 97.56 ± 1.10 | 97.50 ± 1.32 | 79.76 ± 4.57 | 98.81 ± 1.27 | 97.94 ± 0.91 |
| 10 | 92.73 ± 1.28 | 97.85 ± 1.60 | 91.24 ± 2.88 | 90.55 ± 2.58 | 93.77 ± 2.38 | 96.49 ± 2.30 | 77.37 ± 1.41 | 94.66 ± 1.85 | 90.65 ± 1.87 |
| 11 | 98.59 ± 0.98 | 97.66 ± 0.36 | 94.41 ± 1.69 | 95.49 ± 1.81 | 98.83 ± 0.34 | 99.16 ± 0.27 | 83.60 ± 3.62 | 98.73 ± 0.36 | 93.89 ± 0.45 |
| 12 | 84.97 ± 1.53 | 95.63 ± 0.79 | 78.85 ± 2.98 | 81.63 ± 2.68 | 94.69 ± 0.96 | 96.28 ± 0.90 | 85.39 ± 1.05 | 94.77 ± 1.73 | 90.96 ± 1.74 |
| 13 | 99.81 ± 0.16 | 100.00 ± 0.00 | 98.34 ± 0.73 | 97.93 ± 0.59 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.40 ± 0.27 | 99.61 ± 0.19 | 99.32 ± 0.19 |
| AA | 84.37 ± 1.17 | 78.65 ± 0.38 | 80.16 ± 0.92 | 80.60 ± 1.11 | 84.17 ± 1.01 | 84.45 ± 0.85 | 64.53 ± 0.23 | 85.77 ± 0.89 | 79.27 ± 0.73 |
| OA | 89.51 ± 0.59 | 87.21 ± 0.60 | 86.49 ± 0.65 | 86.96 ± 0.72 | 90.20 ± 0.70 | 90.59 ± 0.56 | 76.77 ± 0.67 | 91.21 ± 0.47 | 86.70 ± 0.76 |
| Kappa | 0.883 ± 0.007 | 0.857 ± 0.007 | 0.849 ± 0.007 | 0.855 ± 0.008 | 0.891 ± 0.008 | 0.895 ± 0.006 | 0.739 ± 0.008 | 0.902 ± 0.005 | 0.851 ± 0.008 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).