Abstract
This entry gives a brief overview of one particular approach to machine learning, known as probably approximately correct (PAC) learning theory. A central concept in PAC learning theory is the Vapnik-Chervonenkis (VC) dimension. Finiteness of the VC dimension is sufficient for PAC learnability and, in some cases, also necessary. Some directions for future research are indicated.
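To fix ideas, a minimal sketch of the framework in standard textbook notation (the symbols $\epsilon$, $\delta$, $m$, and $d$ are introduced here for illustration and are not taken verbatim from the entry): a concept class $\mathcal{C}$ over a domain $X$ is PAC-learnable if there is an algorithm such that, for every accuracy $\epsilon \in (0,1)$, confidence $\delta \in (0,1)$, target concept $c \in \mathcal{C}$, and probability distribution $P$ on $X$, the algorithm, given $m(\epsilon,\delta)$ labelled samples drawn i.i.d. from $P$, returns a hypothesis $h$ satisfying
\[
  \Pr\bigl[\, P\{x \in X : h(x) \neq c(x)\} > \epsilon \,\bigr] \;<\; \delta .
\]
When $\mathcal{C}$ has finite VC dimension $d$, a classical sufficient sample size is
\[
  m(\epsilon,\delta) \;=\; O\!\left( \frac{d}{\epsilon}\,\ln\frac{1}{\epsilon} \;+\; \frac{1}{\epsilon}\,\ln\frac{1}{\delta} \right),
\]
which makes precise the sense in which finiteness of the VC dimension suffices for PAC learnability.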