Sparse LS-SVM in the Sorted Empirical Feature Space for Pattern Classification

Abstract
In this paper, we discuss an improved sparse least squares support vector machine (LS-SVM) training method in the reduced empirical feature space, which is generated by linearly independent training data. In this method, we select linearly independent training data as the basis vectors of the empirical feature space. Before selecting these data, we sort the training data in ascending order of the values of the objective function obtained in training LS-SVMs, so that training data that are good from the standpoint of classification are preferentially selected as the basis vectors of the empirical feature space. We then train the LS-SVM in the empirical feature space. The solution is sparse because the number of support vectors is equal to the number of basis vectors. Using two-class problems, we evaluate the effectiveness of the proposed method over conventional methods.
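The pipeline described in the abstract lends itself to a short sketch. The Python code below is a minimal illustration under stated assumptions, not the authors' exact algorithm: it trains a preliminary full LS-SVM, uses the magnitudes of the resulting Lagrange multipliers as a stand-in score for the paper's objective-function-based ordering, greedily keeps kernel columns that are linearly independent under that ordering, and then trains an LS-SVM in the reduced empirical feature space spanned by the selected basis vectors. The RBF kernel, the scoring rule, the toy data, and all function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """RBF Gram matrix between the rows of X and the rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def train_lssvm_dual(K, y, C=10.0):
    """Solve the standard LS-SVM linear system with a bias term:
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, multipliers alpha

def independent_columns(K, order, tol=1e-2):
    """Scan the kernel columns in the given order and keep each column
    that is linearly independent of those already kept (modified
    Gram-Schmidt test). A larger tol gives a sparser model."""
    kept, Q = [], []
    for i in order:
        v = K[:, i].astype(float)
        for q in Q:
            v -= (q @ v) * q
        norm = np.linalg.norm(v)
        if norm > tol:
            Q.append(v / norm)
            kept.append(i)
    return kept

# Toy two-class problem (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
C = 10.0

K = rbf_kernel(X, X)
_, alpha = train_lssvm_dual(K, y, C)   # preliminary full LS-SVM
order = np.argsort(np.abs(alpha))      # ascending stand-in score; the paper
                                       # sorts by LS-SVM objective values
basis = independent_columns(K, order)  # indices of the basis vectors

# Empirical feature vectors: h(x) = (k(x, z_1), ..., k(x, z_M)).
H = K[:, basis]
Hb = np.hstack([H, np.ones((len(y), 1))])  # append a bias feature
# Linear LS-SVM (ridge form) in the reduced empirical feature space; for
# simplicity the bias is regularized along with the weights here.
w = np.linalg.solve(Hb.T @ Hb + np.eye(Hb.shape[1]) / C, Hb.T @ y)

decision = rbf_kernel(X, X[basis]) @ w[:-1] + w[-1]
print("basis size:", len(basis),
      " training accuracy:", (np.sign(decision) == y).mean())
```

In this sketch the number of support vectors equals the number of selected basis vectors, which is the source of the sparsity; related work on reduced empirical feature spaces typically performs the linear-independence check by Cholesky factorization of the kernel matrix, for which the Gram-Schmidt test above is an easy-to-read substitute.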
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Kitamura, T., Asano, K. (2015). Sparse LS-SVM in the Sorted Empirical Feature Space for Pattern Classification. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9489. Springer, Cham. https://doi.org/10.1007/978-3-319-26532-2_60
DOI: https://doi.org/10.1007/978-3-319-26532-2_60
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-26531-5
Online ISBN: 978-3-319-26532-2