Abstract
This paper proposes a new approach to training ensembles of learning machines in a regression context. At each iteration a new learner is added to compensate for the error made by the previous learner on its training patterns. The algorithm operates directly on the values to be predicted by the next machine, keeping the ensemble close to the target hypothesis while ensuring diversity. We give a theoretical explanation that clarifies what the method does algorithmically and allows us to establish its stochastic convergence. Finally, experimental results are presented comparing the performance of this algorithm with boosting and bagging on two well-known data sets.
This work was supported in part by Research Grants Fondecyt (Chile) 1040365 and 7040051, and in part by a Research Grant from DGIP-UTFSM (Chile). Partial support was also received from Research Grant BMBF (Germany) CHL 03-Z13.
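To make the iterative scheme described in the abstract concrete, the following minimal sketch (not taken from the paper) trains a small regression ensemble in which each new learner is fit on modified targets intended to compensate the previous learner's training error. The specific compensation rule used here (adding the previous learner's residual to the original targets) and the simple averaging combiner are illustrative assumptions, not the exact update analyzed in the paper.

```python
# Illustrative sketch of sequential target modification for a regression
# ensemble. ASSUMPTION: each new machine is trained on the original targets
# plus the previous machine's residual; the paper's actual rule may differ.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_compensating_ensemble(X, y, n_learners=10, max_depth=3):
    learners = []
    targets = y.copy()                        # targets for the first machine
    for _ in range(n_learners):
        machine = DecisionTreeRegressor(max_depth=max_depth).fit(X, targets)
        learners.append(machine)
        residual = y - machine.predict(X)     # error on its own training patterns
        targets = y + residual                # next machine compensates this error
    return learners

def ensemble_predict(learners, X):
    # Simple averaging combiner for the regression ensemble.
    return np.mean([m.predict(X) for m in learners], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    ensemble = train_compensating_ensemble(X, y)
    print("training MSE:", np.mean((ensemble_predict(ensemble, X) - y) ** 2))
```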
© 2005 Springer-Verlag Berlin Heidelberg