Abstract
This paper presents an algorithm for reducing a classifier’s complexity by pruning support vectors while learning the kernel matrix. The proposed algorithm retains the ‘best’ support vectors, chosen so that the span of the support vectors, as defined by Vapnik and Chapelle, is as small as possible. Experiments on real-world data sets show that the number of support vectors can be reduced, in some cases by as much as 85%, with little degradation in generalization performance.
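The span quantity referenced above comes from Vapnik and Chapelle’s error bound: for an in-bound support vector x_p, the span satisfies S_p² = 1 / (K̃⁻¹)_pp, where K̃ is the support-vector kernel matrix bordered by a row and column of ones (for the bias term). The sketch below is a minimal, hypothetical illustration of span-based pruning only — it trains a standard RBF-kernel SVM with scikit-learn, computes span values, and keeps the support vectors with the smallest spans. It does not reproduce the paper’s actual algorithm, which prunes support vectors jointly with learning the kernel matrix (e.g., via semidefinite programming); all names and parameter values here are assumptions for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def span_squared(K_sv):
    """Span estimates S_p^2 = 1 / (K_tilde^{-1})_pp (Vapnik & Chapelle),
    where K_tilde borders the SV kernel matrix with a row/column of ones
    for the bias and a zero corner. Strictly valid for in-bound SVs;
    here applied to all SVs as a heuristic."""
    n = K_sv.shape[0]
    K_tilde = np.zeros((n + 1, n + 1))
    K_tilde[:n, :n] = K_sv
    K_tilde[:n, n] = 1.0
    K_tilde[n, :n] = 1.0
    inv = np.linalg.inv(K_tilde)
    return 1.0 / np.diag(inv)[:n]

# Toy two-class data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (40, 2)),
               rng.normal(1.0, 1.0, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

# Fixed RBF kernel; the paper instead learns the kernel matrix.
clf = SVC(kernel="rbf", gamma=0.5, C=10.0).fit(X, y)
sv = X[clf.support_]
s2 = span_squared(rbf_kernel(sv, sv, gamma=0.5))

# Prune: retain the support vectors with the smallest span values.
keep = np.argsort(s2)[: max(1, len(s2) // 2)]
print(f"{len(sv)} SVs -> keeping {len(keep)} with smallest span")
```

A full implementation would re-solve for the dual coefficients over the retained support vectors (or re-learn the kernel matrix) rather than simply discarding the pruned ones.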
References
Burges, C.J.C.: Simplified support vector decision rules. In: 13th International Conference on Machine Learning, p. 71 (1996)
Burges, C.J.C.: Improving the accuracy and speed of support vector machines. In: Neural Information Processing Systems (1997)
Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.: Choosing kernel parameters for support vector machines. Machine Learning 46(1-3), 131 (2001)
Downs, T., Gates, K.E., Masters, A.: Exact simplification of support vector solutions. Journal of Machine Learning Research 2, 293 (2001)
Keerthi, S.S., Chapelle, O., DeCoste, D.: Building support vector machines with reduced classifier complexity. Journal of Machine Learning Research 7 (2006)
Lanckriet, G.R.G., Cristianini, N., Bartlett, P., El Ghaoui, L., Jordan, M.I.: Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research 5, 27 (2004)
Lee, Y.-J., Mangasarian, O.L.: RSVM: Reduced support vector machines. In: Proceedings of the First SIAM International Conference on Data Mining, Chicago (2001)
Löfberg, J.: YALMIP: A toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004), Available from http://control.ee.ethz.ch/~joloef/yalmip.php
Nguyen, D., Ho, T.: An efficient method for simplifying support vector machines. In: 22nd International Conference on Machine Learning, Bonn, Germany, pp. 617–624 (2005)
Rätsch, G.: Benchmark repository. Technical report, Intelligent Data Analysis Group, Fraunhofer-FIRST (2005)
Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press, Cambridge (2002)
Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11-12, 625–653 (1999)
Tipping, M.E.: Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, 211 (2001)
Vapnik, V.: Statistical Learning Theory. John Wiley and Sons, New York (1998)
Vapnik, V., Chapelle, O.: Bounds on error expectation for SVM. Neural Computation 12, 2013 (2000)
Wu, M., Schölkopf, B., Bakir, G.: Building sparse large margin classifiers. In: 22nd International Conference on Machine Learning, Bonn, Germany (2005)
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Saradhi, V.V., Karnick, H. (2007). Classifier Complexity Reduction by Support Vector Pruning in Kernel Matrix Learning. In: Sandoval, F., Prieto, A., Cabestany, J., Graña, M. (eds) Computational and Ambient Intelligence. IWANN 2007. Lecture Notes in Computer Science, vol 4507. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73007-1_33
Print ISBN: 978-3-540-73006-4
Online ISBN: 978-3-540-73007-1