
A Gradient-Based Metric Learning Algorithm for k-NN Classifiers

  • Conference paper
AI 2010: Advances in Artificial Intelligence (AI 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6464)

Abstract

Nearest Neighbor (NN) classification and regression techniques are, despite their simplicity, amongst the most widely applied and well-studied pattern recognition techniques in machine learning. A drawback, however, is their reliance on a suitable metric for measuring distances to the k nearest neighbors. It has been shown that a k-NN classifier with a well-chosen distance metric can outperform more sophisticated alternatives such as Support Vector Machines and Gaussian Process classifiers. For this reason, much recent research on k-NN methods has focused on metric learning, i.e. finding an optimized metric. In this paper we propose a simple gradient-based algorithm for metric learning. We discuss in detail the two motivations behind metric learning: error minimization and margin maximization. Our formulation differs from the prevalent metric learning techniques, whose goal is to maximize the classifier's margin; instead, our proposed technique (MEGM) finds an optimal metric by directly minimizing the mean square error. MEGM not only greatly improves k-NN performance, but also outperforms competing metric learning techniques. Promising results are reported on major UCI machine learning databases.
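The page does not reproduce the algorithm itself, so the sketch below only illustrates the general idea the abstract describes: learn a distance metric by gradient descent on the mean square error of a soft (leave-one-out) nearest-neighbour estimate, rather than by margin maximization. The diagonal metric, the finite-difference gradient, the function names, and the toy data are all assumptions made for this sketch; this is not the authors' MEGM implementation.

```python
import numpy as np

def soft_knn_predict(X, y, w):
    """Leave-one-out soft nearest-neighbour estimate under a diagonal
    metric: d(i, j) = sum_k (w_k * (x_ik - x_jk))**2."""
    Z = X * w                                   # scale each feature by its weight
    d = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                 # a point never votes for itself
    d -= d.min(axis=1, keepdims=True)           # stabilise exp against underflow
    p = np.exp(-d)
    p /= p.sum(axis=1, keepdims=True)
    return p @ y                                # soft vote over the 0/1 labels

def mse(X, y, w):
    return float(np.mean((y - soft_knn_predict(X, y, w)) ** 2))

def learn_metric(X, y, steps=200, lr=0.5, eps=1e-5):
    """Descend the mean square error; the gradient is approximated by
    central finite differences to keep the sketch short."""
    w = np.ones(X.shape[1])
    loss = mse(X, y, w)
    for _ in range(steps):
        g = np.zeros_like(w)
        for k in range(w.size):
            w_hi, w_lo = w.copy(), w.copy()
            w_hi[k] += eps
            w_lo[k] -= eps
            g[k] = (mse(X, y, w_hi) - mse(X, y, w_lo)) / (2 * eps)
        trial = w - lr * g
        trial_loss = mse(X, y, trial)
        if trial_loss < loss:                   # accept only improving steps
            w, loss = trial, trial_loss
        else:
            lr *= 0.5                           # otherwise shrink the step size
    return w

# Toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
n = 60
y = np.repeat([0.0, 1.0], n // 2)
X = np.column_stack([y + 0.1 * rng.standard_normal(n),
                     5.0 * rng.standard_normal(n)])
w = learn_metric(X, y)
print("noise feature down-weighted:", abs(w[1]) < abs(w[0]))
```

On this toy problem, minimizing the error drives the weight of the uninformative feature toward zero, which is exactly the sense in which an error-minimizing metric can help a k-NN classifier. The paper's actual algorithm uses an analytic gradient rather than finite differences.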





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zaidi, N.A., Squire, D.M., Suter, D. (2010). A Gradient-Based Metric Learning Algorithm for k-NN Classifiers. In: Li, J. (ed.) AI 2010: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 6464. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17432-2_20

  • DOI: https://doi.org/10.1007/978-3-642-17432-2_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17431-5

  • Online ISBN: 978-3-642-17432-2

  • eBook Packages: Computer Science (R0)
