
Generalization improvement for regularized least squares classification

  • Original Article
Neural Computing and Applications

Abstract

Over the past decades, regularized least squares classification (RLSC) has been a commonly used supervised classification method in machine learning, because it can be solved through simple matrix analysis and admits a closed-form solution. Recently, some studies have conjectured that the margin distribution is more crucial to generalization performance than the minimum margin. From the view of the margin distribution, RLSC considers only the first-order statistic (i.e., the margin mean) and ignores higher-order statistics of the margin distribution. In this paper, we propose a novel RLSC that also takes the second-order statistic (i.e., the margin variance) of the margin distribution into account. From a geometric view, a small margin variance is intuitively expected to improve the generalization performance of RLSC. We incorporate the margin variance into the objective function of RLSC and obtain the optimal classifier by minimizing the margin variance jointly with the original objective. To evaluate the performance of our algorithm, we conduct a series of experiments on several benchmark datasets, comparing against RLSC, kernel minimum squared error, the support vector machine and the large margin distribution machine. The empirical results verify the effectiveness of our algorithm and indicate that exploiting the margin distribution helps to improve classification performance.
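The abstract only outlines the approach, so the following minimal Python/NumPy sketch illustrates one way a margin-variance penalty can be folded into the RLSC objective while keeping a closed-form solution. The specific objective, the regularization weights lam1 and lam2, the 1/n scaling of the variance term, and all function names below are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def rlsc_fit(X, y, lam=1.0):
    """Standard RLSC: ridge regression on +/-1 labels.
    Minimizes ||X w - y||^2 + lam ||w||^2, with the closed-form solution
    w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def rlsc_margin_variance_fit(X, y, lam1=1.0, lam2=1.0):
    """RLSC with an added margin-variance penalty (illustrative formulation).

    The margins are gamma = Z w with Z = diag(y) X, so the sample variance of
    the margins is the quadratic form (1/n) w^T Z^T H Z w, where
    H = I - (1/n) 1 1^T is the centering matrix. The penalized objective
        ||X w - y||^2 + lam1 ||w||^2 + (lam2 / n) w^T Z^T H Z w
    stays quadratic in w, so the minimizer solves a linear system.
    """
    n, d = X.shape
    Z = y[:, None] * X                    # row i of X scaled by label y_i
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    A = X.T @ X + lam1 * np.eye(d) + (lam2 / n) * (Z.T @ H @ Z)
    return np.linalg.solve(A, X.T @ y)

# Toy usage: random, roughly linearly separable data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=200))
w = rlsc_margin_variance_fit(X, y, lam1=1.0, lam2=10.0)
print("training accuracy:", np.mean(np.sign(X @ w) == y))

Because the margin variance is a quadratic form in w, the penalized normal equations remain linear, which preserves the simple matrix-analysis solution the abstract highlights as RLSC's main practical appeal.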



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant Nos. 61601162, 61501154 and 61671197, and by the Open Foundation of the First Level Zhejiang Key Discipline of Control Science and Engineering.

Author information


Corresponding author

Correspondence to Haitao Gan.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article

Cite this article

Gan, H., She, Q., Ma, Y. et al. Generalization improvement for regularized least squares classification. Neural Comput & Applic 31 (Suppl 2), 1045–1051 (2019). https://doi.org/10.1007/s00521-017-3090-9
