Abstract
In the RoboCup environment, it is difficult to learn cooperative behaviors because the domain combines real-world problems with multiagent problems. In this paper, we describe the concept and architecture of our team at RoboCup'97 and discuss how its agents can be made to learn cooperative behaviors in the RoboCup environment. We test the effectiveness of the approach with a case study of learning pass play in soccer.
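As a rough illustration only: the sketch below shows a minimal tabular Q-learning loop of the kind under which a pass-or-dribble decision could be learned by trial and error. The state encoding, action set, reward, and every name here are assumptions made for illustration, not the agent architecture or learning method actually described in the paper.

# Minimal sketch of tabular Q-learning for a pass/dribble decision.
# All states, actions, and rewards below are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["pass", "dribble"]          # simplified action set (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)                 # Q[(state, action)] -> estimated value

def choose_action(state):
    # Epsilon-greedy selection over the simplified action set.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard one-step Q-learning update.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def toy_episode():
    # Toy world: an opponent is either near or far from the ball holder.
    # Passing when the opponent is near is rewarded; otherwise dribbling is.
    state = random.choice(["opponent_near", "opponent_far"])
    action = choose_action(state)
    reward = 1.0 if (state == "opponent_near") == (action == "pass") else -1.0
    update(state, action, reward, "terminal")

if __name__ == "__main__":
    for _ in range(2000):
        toy_episode()
    for s in ("opponent_near", "opponent_far"):
        print(s, {a: round(Q[(s, a)], 2) for a in ACTIONS})

After a few thousand toy episodes the learned values favor passing under pressure and dribbling otherwise; a real Soccer Server agent would of course face a far larger, noisier state space than this two-state example.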
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ohta, M. (1998). Learning cooperative behaviors in RoboCup agents. In: Kitano, H. (eds) RoboCup-97: Robot Soccer World Cup I. RoboCup 1997. Lecture Notes in Computer Science, vol 1395. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64473-3_77
DOI: https://doi.org/10.1007/3-540-64473-3_77
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64473-6
Online ISBN: 978-3-540-69789-3