Abstract
We present a neurobiologically motivated model of an agent that builds a representation of its spatial environment through active exploration. Our main objective is to introduce an action-selection mechanism based on the principle of self-reinforcement learning. We design this mechanism under the constraint that the agent receives only information that an animal could also receive; hence we must avoid supervised learning methods, which require a teacher. To meet this constraint, we define a self-reinforcement signal as a qualitative comparison between the agent's predicted and perceived stimuli. From this signal the agent internally constructs a self-punishment function and chooses its actions so as to minimize that function during learning. Our results show that an active action-selection mechanism can improve performance significantly as the learning problem becomes more difficult.
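Since the full text is not included in this preview, the following Python sketch is only a loose illustration of the loop the abstract describes: a self-reinforcement signal from comparing predicted and perceived stimuli, an internally accumulated self-punishment function, and greedy action selection that minimizes it. The class and function names, the toy environment, and the per-action punishment counters are all assumptions for illustration, not the author's actual model.

```python
import random

class ToyAgent:
    """Hypothetical stand-in for the agent's predictive model: it simply
    remembers the last stimulus perceived after each action."""
    def __init__(self, actions):
        self.memory = {a: None for a in actions}

    def predict(self, action):
        return self.memory[action]

    def update(self, action, perceived):
        self.memory[action] = perceived

def self_reinforcement(predicted, perceived):
    # Qualitative comparison of predicted and perceived stimulus:
    # +1 if the prediction matched, -1 otherwise (an assumed encoding).
    return 1 if predicted == perceived else -1

def explore(agent, env_step, actions, steps=1000):
    # Internally constructed self-punishment, one counter per action;
    # the agent greedily picks the action that minimizes it.
    punishment = {a: 0.0 for a in actions}
    for _ in range(steps):
        action = min(actions, key=lambda a: punishment[a])
        predicted = agent.predict(action)
        perceived = env_step(action)  # stimulus received from the world
        punishment[action] -= self_reinforcement(predicted, perceived)
        agent.update(action, perceived)
    return punishment

if __name__ == "__main__":
    actions = ["north", "south", "east", "west"]
    # Toy environment: each action yields a characteristic stimulus,
    # corrupted by noise 10% of the time (again, an assumption).
    env = lambda a: a[0] if random.random() < 0.9 else "?"
    print(explore(ToyAgent(actions), env, actions, steps=200))
```

In this sketch, actions whose outcomes the agent predicts reliably accumulate negative punishment and are preferred, while surprising actions accumulate positive punishment; the paper's neurobiologically motivated model presumably realizes this trade-off in a far richer way.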
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Emmert-Streib, F. (2005). A Neurobiologically Motivated Model for Self-organized Learning. In: Gelbukh, A., de Albornoz, Á., Terashima-Marín, H. (eds) MICAI 2005: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol 3789. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11579427_42
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-29896-0
Online ISBN: 978-3-540-31653-4