Abstract
In the past decades, regularized least squares classification (RLSC) has been a widely used supervised classification method in the machine learning field because it can be solved through simple matrix analysis and admits a closed-form solution. Recently, some studies have conjectured that the margin distribution is more crucial to generalization performance than the minimum margin. From the view of the margin distribution, RLSC considers only the first-order statistic (i.e., the margin mean) and ignores the higher-order statistics of the margin distribution. In this paper, we propose a novel RLSC that takes into account the second-order statistic (i.e., the variance) of the margin distribution. From a geometric view, a small margin variance is intuitively expected to improve the generalization performance of RLSC. We incorporate the margin variance into the objective function of RLSC and obtain the optimal classifier by minimizing it. To evaluate the performance of our algorithm, we conduct a series of experiments on several benchmark datasets in comparison with RLSC, kernel minimum squared error, the support vector machine, and the large margin distribution machine. The empirical results verify the effectiveness of our algorithm and indicate that exploiting the margin distribution helps improve classification performance.
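To make the idea concrete, the following is a minimal sketch of one way a margin-variance penalty can be folded into the RLSC objective while preserving a closed-form solution. It is an illustration under simplifying assumptions (a linear model without bias, labels in {-1, +1}, and hypothetical penalty weights `lam` and `lam_v`), not the exact formulation of the paper. Writing the margins as m = YXw with Y = diag(y), the sample variance of the margins is a quadratic form in w, so the regularized objective stays quadratic and can be minimized by solving one linear system.

```python
import numpy as np

def rlsc_margin_variance(X, y, lam=1.0, lam_v=1.0):
    """Illustrative RLSC variant with a margin-variance penalty.

    Minimizes ||Xw - y||^2 + lam * ||w||^2 + lam_v * Var(y_i * x_i^T w).
    Since Var(m) = (1/n) * ||C m||^2 with the centering matrix
    C = I - (1/n) 11^T and m = YXw, the objective is quadratic in w,
    and setting the gradient to zero gives a closed-form solution.
    """
    n, d = X.shape
    Y = np.diag(y)                        # labels assumed in {-1, +1}
    C = np.eye(n) - np.ones((n, n)) / n   # centering matrix (symmetric, idempotent)
    M = X.T @ Y @ C @ Y @ X               # n * margin-variance quadratic form
    A = X.T @ X + lam * np.eye(d) + (lam_v / n) * M
    return np.linalg.solve(A, X.T @ y)

# Toy usage on two Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.5, 1.0, (20, 2)), rng.normal(-1.5, 1.0, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])
w = rlsc_margin_variance(X, y, lam=0.1, lam_v=1.0)
acc = np.mean(np.sign(X @ w) == y)
```

The key point of the sketch is that penalizing the margin variance only adds another positive semidefinite term to the normal equations, so the computational appeal of RLSC (a single linear solve) is retained.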
References
Adankon MM, Cheriet M (2011) Help-training for semi-supervised support vector machines. Pattern Recognit 44(9):2220–2230
Belkin M, Niyogi P, Sindhwani V (2006) Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. J Mach Learn Res 7:2399–2434
Gao W, Zhou ZH (2013) On the doubt about margin explanation of boosting. Artif Intell 203:1–18
Hsieh CJ, Chang KW, Lin CJ, Keerthi SS, Sundararajan S (2008) A dual coordinate descent method for large-scale linear SVM. In: Proceedings of the 25th international conference on machine learning, ACM, pp 408–415
Lichman M (2013) UCI machine learning repository. http://archive.ics.uci.edu/ml
Micchelli CA, Pontil M (2005) Learning the kernel function via regularization. J Mach Learn Res 6:1099–1125
Reyzin L, Schapire RE (2006) How boosting the margin can also boost classifier complexity. In: Proceedings of the 23rd international conference on Machine learning, ACM, pp 753–760
Rifkin R, Yeo G, Poggio T (2003) Regularized least-squares classification. Nato Sci Ser Sub Ser III Comput Syst Sci 190:131–154
Vapnik VN, Vapnik V (1998) Statistical learning theory. Wiley, New York
Wang L, Sugiyama M, Jing Z, Yang C, Zhou ZH, Feng J (2011) A refined margin analysis for boosting algorithms via equilibrium margin. J Mach Learn Res 12:1835–1863
Wang Y, Chen S, Xue H, Fu Z (2015) Semi-supervised classification learning by discrimination-aware manifold regularization. Neurocomputing 147:299–306
Xu J, Zhang X, Li Y (2001) Kernel MSE algorithm: a unified framework for KFD, LS-SVM and KRR. In: Proceedings of international joint conference on neural networks, IEEE, pp 1486–1491
Xue H, Chen S, Yang Q (2009) Discriminatively regularized least-squares classification. Pattern Recognit 42(1):93–104
Yang XJ, Wang L (2015) A modified Tikhonov regularization method. J Comput Appl Math 288:180–192
Zhang P, Peng J (2004) SVM vs regularized least squares classification. In: Proceedings of the 17th international conference on pattern recognition, IEEE, pp 176–179
Zhou ZH (2014) Large margin distribution learning. In: Artificial neural networks in pattern recognition, Springer, pp 1–11
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant Nos. 61601162, 61501154 and 61671197, and by the Open Foundation of the First-Level Zhejiang Key Discipline of Control Science and Engineering.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Gan, H., She, Q., Ma, Y. et al. Generalization improvement for regularized least squares classification. Neural Comput & Applic 31 (Suppl 2), 1045–1051 (2019). https://doi.org/10.1007/s00521-017-3090-9