On the intrinsic complexity of learning

  • Conference paper

Computational Learning Theory (EuroCOLT 1995)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 904)

Abstract

We present a new view of learning, based on a natural notion of reduction between learning problems. Using this notion, we prove completeness and relative-difficulty results, and we exhibit an infinite hierarchy of concept classes that are intrinsically harder and harder to learn. Our results indicate that the complexity notion captured by this reduction differs dramatically from that of traditional studies of the complexity of the algorithms performing learning tasks.
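The abstract does not spell out the reduction itself. In the intrinsic-complexity literature (for EX-identification of recursive functions), it is commonly formulated along the following lines; this is a sketch under that assumption, not necessarily this paper's exact definition:

```latex
% Assumed formulation of reduction between learning problems (sketch).
% A class $U$ of recursive functions reduces to a class $V$, written $U \le V$,
% if there exist recursive operators $\Theta$ and $\Xi$ such that:
\begin{align*}
  &\text{(i)}  && f \in U \implies \Theta(f) \in V, \\
  &\text{(ii)} && \text{whenever a hypothesis sequence } \sigma \text{ converges to a
                   program for } \Theta(f), \\
  &            && \Xi(\sigma) \text{ converges to a program for } f.
\end{align*}
```

Intuitively, $\Theta$ translates instances of the problem $U$ into instances of $V$, and $\Xi$ translates any successful learning of the translated instance back into successful learning of the original; completeness and hierarchy results are then stated with respect to this ordering.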

This work was facilitated by an international agreement under NSF Grants 9119540 and 9421640.

Supported in part by NSF Grant 9020079.

Supported in part by NSF Grants 9020079 and 9301339.

Editor information

Paul Vitányi

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Freivalds, R., Kinber, E., Smith, C.H. (1995). On the intrinsic complexity of learning. In: Vitányi, P. (ed.) Computational Learning Theory. EuroCOLT 1995. Lecture Notes in Computer Science, vol 904. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59119-2_175

  • DOI: https://doi.org/10.1007/3-540-59119-2_175

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59119-1

  • Online ISBN: 978-3-540-49195-8

  • eBook Packages: Springer Book Archive
