Maximal-Margin Approach for Cost-Sensitive Learning Based on Scaled Convex Hull

  • Conference paper
Advances in Neural Networks – ISNN 2011 (ISNN 2011)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 6675))


Abstract

In this paper, a new maximal-margin method, the scaled convex hull (SCH) method, is proposed for cost-sensitive learning. By assigning each class's SCH a different scale factor, the initially overlapping convex hulls can be reduced until they become separable, after which existing maximal-margin methods can be used to find the separating hyperplane. In effect, the method changes the distribution of the samples through the class-dependent scale factors. Experimental results validate the effectiveness and simplicity of the scaled convex hull approach.
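The core geometric operation described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's full algorithm: the data, the scale factors, and the `scale_hull` helper are all assumptions made for the example. Each point is contracted toward its class centroid by a factor λ (0 < λ ≤ 1), which shrinks the class's convex hull; giving the two classes different factors, as a cost-sensitive scheme would, can turn two overlapping hulls into separable ones.

```python
import numpy as np

def scale_hull(X, lam):
    """Shrink the convex hull of the rows of X toward their centroid.

    Each point x is mapped to lam*x + (1 - lam)*m, where m is the class
    centroid; lam in (0, 1] controls how much the hull contracts.
    """
    m = X.mean(axis=0)
    return lam * X + (1.0 - lam) * m

# Two classes whose supports overlap on the x-axis (illustrative data).
rng = np.random.default_rng(0)
X_pos = rng.uniform(0.0, 3.0, size=(50, 2))  # class +1 occupies x in [0, 3]
X_neg = rng.uniform(2.0, 5.0, size=(50, 2))  # class -1 occupies x in [2, 5]

# Class-dependent scale factors: a smaller factor shrinks the hull more.
# In a cost-sensitive setting the high-cost class would get the larger
# factor, so the eventual separating hyperplane is pushed away from it.
S_pos = scale_hull(X_pos, lam=0.5)
S_neg = scale_hull(X_neg, lam=0.6)

# The original hulls overlap; the scaled hulls no longer do along x,
# so any maximal-margin solver can now find a separating hyperplane.
print("original overlap:", X_pos[:, 0].max() > X_neg[:, 0].min())
print("scaled overlap:  ", S_pos[:, 0].max() > S_neg[:, 0].min())
```

Once the scaled hulls are separable, the nearest pair of points between them defines the maximal-margin hyperplane, which is why the reduction lets existing methods be reused unchanged.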





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Z. (2011). Maximal-Margin Approach for Cost-Sensitive Learning Based on Scaled Convex Hull. In: Liu, D., Zhang, H., Polycarpou, M., Alippi, C., He, H. (eds) Advances in Neural Networks – ISNN 2011. ISNN 2011. Lecture Notes in Computer Science, vol 6675. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21105-8_72


  • DOI: https://doi.org/10.1007/978-3-642-21105-8_72

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21104-1

  • Online ISBN: 978-3-642-21105-8

  • eBook Packages: Computer Science (R0)
