
Learning under uniform distribution

  • Conference paper
Fundamentals of Computation Theory (FCT 1989)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 380))


Abstract

We study the learnability from examples of Boolean formulae, assuming that the examples are drawn according to the uniform distribution. We analyze the requirements (upper and lower bounds) of known algorithms under the uniform distribution, and we propose a new combinatorial measure to characterize the complexity of Boolean formulae.
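The setting described in the abstract, learning Boolean formulae from uniformly distributed examples, can be illustrated with the classical elimination algorithm for monotone conjunctions from Valiant's PAC model. The sketch below is illustrative only: the variable count, sample size, and target conjunction are made-up values, and this is not claimed to be the algorithm analyzed in the paper.

```python
import random

def sample_uniform(n):
    """Draw an assignment of n Boolean variables uniformly at random."""
    return [random.random() < 0.5 for _ in range(n)]

def target_conjunction(x, relevant):
    """Target concept: the conjunction of the variables indexed by `relevant`."""
    return all(x[i] for i in relevant)

def learn_conjunction(n, relevant, m):
    """Elimination algorithm: start from the conjunction of all n variables
    and drop every variable that is 0 in some positive example."""
    hypothesis = set(range(n))
    for _ in range(m):
        x = sample_uniform(n)
        if target_conjunction(x, relevant):  # positive example: shrink hypothesis
            hypothesis &= {i for i in range(n) if x[i]}
    return hypothesis

random.seed(0)  # deterministic run for the illustration
h = learn_conjunction(n=10, relevant={1, 4, 7}, m=2000)
print(sorted(h))
```

Under the uniform distribution, each positive example sets any fixed irrelevant variable to 0 with probability 1/2, so irrelevant variables are eliminated quickly; the relevant variables are 1 in every positive example and are never removed. This is the kind of distribution-specific sample-size analysis that uniform-distribution assumptions make tractable.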

Work partially supported by Italian project MPI ‘Algoritmi e Strutture di Calcolo’.




Editor information

J. Csirik, J. Demetrovics, F. Gécseg


Copyright information

© 1989 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Marchetti-Spaccamela, A., Protasi, M. (1989). Learning under uniform distribution. In: Csirik, J., Demetrovics, J., Gécseg, F. (eds) Fundamentals of Computation Theory. FCT 1989. Lecture Notes in Computer Science, vol 380. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-51498-8_32


  • Print ISBN: 978-3-540-51498-5

  • Online ISBN: 978-3-540-48180-5
