Abstract
The present paper deals with the learnability of indexed families \({\cal L}\) of uniformly recursive languages from positive data. We consider the influence of three monotonicity demands on the efficiency of the learning process. The efficiency of learning is measured by the number of mind changes a learning algorithm is allowed to perform. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed more and more data on the target concept.
We distinguish between exact learnability (\({\cal L}\) has to be inferred with respect to \({\cal L}\)), class preserving learning (\({\cal L}\) has to be inferred with respect to some suitably chosen enumeration of all the languages from \({\cal L}\)), and class comprising inference (\({\cal L}\) has to be learned with respect to some suitably chosen enumeration of uniformly recursive languages containing at least all the languages from \({\cal L}\)).
In particular, we prove that a relaxation of the relevant monotonicity requirement may result in an arbitrarily large speed-up.
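To make the setting concrete, the following is a minimal sketch, not taken from the paper, of Gold-style learning in the limit from positive data with mind changes counted. The indexed family here is the illustrative toy class \(L_k = \{0, 1, \ldots, k\}\), and the learner conjectures the least index consistent with the data seen so far; all names are hypothetical.

```python
def learn_from_text(text):
    """Process a text (a sequence of positive examples) and return the
    sequence of hypotheses together with the number of mind changes.

    Toy class assumed: L_k = {0, ..., k}; the conjecture is the least
    index k whose language contains all examples seen so far."""
    seen = set()
    hypothesis = None
    mind_changes = 0
    history = []
    for x in text:
        seen.add(x)
        guess = max(seen)  # least k with seen ⊆ L_k = {0, ..., k}
        if hypothesis is not None and guess != hypothesis:
            mind_changes += 1  # a mind change: the conjecture differs
        hypothesis = guess
        history.append(hypothesis)
    return history, mind_changes

history, changes = learn_from_text([2, 0, 5, 3, 5, 7])
# history is [2, 2, 5, 5, 5, 7]; changes is 2
```

On this toy class the learner is even strongly monotonic in the sense of the paper: each conjectured language contains the previous one, since `max(seen)` never decreases. Relaxing such a demand is exactly what the paper shows can buy an arbitrarily large reduction in mind changes.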
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Lange, S., Zeugmann, T. (1995). Trading monotonicity demands versus mind changes. In: Vitányi, P. (eds) Computational Learning Theory. EuroCOLT 1995. Lecture Notes in Computer Science, vol 904. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59119-2_173
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-59119-1
Online ISBN: 978-3-540-49195-8