A subspace kernel for nonlinear feature extraction

Published: 06 January 2007

Abstract

Kernel-based nonlinear Feature Extraction (KFE), or dimensionality reduction, is a widely used preprocessing step in pattern classification and data mining tasks. Given a positive definite kernel function, the input data are implicitly mapped to a feature space of usually very high dimensionality. The goal of KFE is to find a low-dimensional subspace of this feature space that retains most of the information needed for classification or data analysis. In this paper, we propose a subspace kernel through which the feature extraction problem is transformed into a kernel parameter learning problem. The key observation is that when data are projected into a low-dimensional subspace of the feature space, the parameters describing this subspace can be regarded as the parameters of the kernel function between the projected data. Current kernel parameter learning methods can therefore be adapted to optimize this parameterized kernel function. Experimental results validate the effectiveness of the proposed approach.
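
The abstract compresses the construction into prose; a small numerical sketch may make it concrete. The sketch below is a hedged illustration, not the paper's exact formulation: it assumes an RBF base kernel and parameterizes the d-dimensional subspace by an expansion-coefficient matrix W over the training points, so the projection directions lie in the span of the mapped training data and the kernel trick applies. All names here (rbf_kernel, subspace_kernel, W) are illustrative.

    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        # Base kernel k(x, z) = exp(-gamma * ||x - z||^2).
        sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2.0 * X @ Z.T
        return np.exp(-gamma * sq)

    def subspace_kernel(X, Z, X_train, W, gamma=1.0):
        # Projection directions P = Phi(X_train).T @ W live in the span of the
        # mapped training data, so the kernel between projected points is
        #   k_S(x, z) = k(x, X_train) @ W @ W.T @ k(X_train, z),
        # and the (n_train x d) matrix W doubles as the parameter of this kernel.
        Kx = rbf_kernel(X, X_train, gamma)    # (n_x, n_train)
        Kz = rbf_kernel(Z, X_train, gamma)    # (n_z, n_train)
        return Kx @ W @ W.T @ Kz.T            # (n_x, n_z)

    # Toy usage: a random (untrained) 2-dimensional subspace.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(20, 5))
    W = rng.normal(size=(20, 2))              # d = 2 extracted features
    K_S = subspace_kernel(X_train, X_train, X_train, W)
    print(K_S.shape)                          # (20, 20)

Because the subspace kernel depends on the data only through Gram matrices, a kernel parameter learning method can in principle treat W as the kernel's parameter and optimize it against a classification objective, which is the adaptation the abstract alludes to.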


Cited By

  • (2008) Learning subspace kernels for classification. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 106-114. DOI: 10.1145/1401890.1401908. Online publication date: 24-Aug-2008.
  • (2008) An efficient kernel matrix evaluation measure. Pattern Recognition 41(11), pp. 3366-3372. DOI: 10.1016/j.patcog.2008.04.005. Online publication date: 1-Nov-2008.



Published In

IJCAI'07: Proceedings of the 20th International Joint Conference on Artificial Intelligence
January 2007
2953 pages
  • Editors: Rajeev Sangal, Harish Mehta, R. K. Bagga

Sponsors

  • The International Joint Conferences on Artificial Intelligence, Inc.

Publisher

Morgan Kaufmann Publishers Inc.

San Francisco, CA, United States

Publication History

Published: 06 January 2007

Qualifiers

  • Article
