
DOI: 10.5555/1558109.1558293
AAMAS Conference Proceedings · Research article · Free access

Learning complementary multiagent behaviors: a case study

Published: 10 May 2009 Publication History

Abstract

As the reach of multiagent reinforcement learning extends to increasingly complex tasks, it is likely that the diverse challenges encountered can only be surmounted by combining the strengths of different learning methods. We consider this aspect of learning through the case study of Keepaway, a popular benchmark for multiagent reinforcement learning from the robot soccer domain. Whereas previous successful results in this domain have limited learning to an isolated, infrequent decision that amounts to a turn-taking behavior (Pass), we expand the agents' learning capability to include the more ubiquitous action of moving without the ball (GetOpen), such that at any given time, multiple agents are executing learned behaviors simultaneously. We introduce a policy search method for learning GetOpen to complement the temporal difference learning approach employed for learning Pass [4]. The learned GetOpen policy matches the best hand-coded policy for this task, and outperforms the best policy found when Pass is learned. We demonstrate that Pass and GetOpen can be learned simultaneously, and indeed that these learned behaviors specialize towards the counterpart behaviors with which they are trained.
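The abstract describes a policy search method for learning GetOpen, alongside temporal difference learning for Pass. The specific search procedure is not given in this excerpt, but reference [3] (a cross-entropy tutorial) suggests the flavor of such methods. Below is a minimal, hypothetical sketch of generic cross-entropy policy search over a parameterized policy, with a toy quadratic objective standing in for the episode-duration reward a Keepaway simulator would provide; all names and parameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def cross_entropy_search(evaluate, dim, iterations=50, pop_size=50,
                         elite_frac=0.2, seed=0):
    """Generic cross-entropy policy search (cf. De Boer et al., 2005).

    Maintains a Gaussian over policy parameters; each iteration samples
    candidate parameter vectors, evaluates them, and refits the Gaussian
    to the elite (highest-scoring) fraction of the population.
    """
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = np.ones(dim)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iterations):
        samples = rng.normal(mean, std, size=(pop_size, dim))
        scores = np.array([evaluate(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]  # top n_elite by score
        # Refit the sampling distribution to the elite set; the small
        # floor on std prevents premature collapse.
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Toy stand-in for "average episode duration achieved by a GetOpen
# policy": a quadratic with its maximum at a known parameter vector.
target = np.array([1.0, -2.0])
best = cross_entropy_search(lambda p: -np.sum((p - target) ** 2), dim=2)
```

In the paper's setting, `evaluate` would run Keepaway episodes with the candidate GetOpen parameters (while the keepers execute their Pass behavior) and return the mean episode duration; the quadratic here merely makes the sketch runnable.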

References

[1]
M. Benda, V. Jagannathan, and R. Dodhiawala. On optimal cooperation of knowledge sources: an empirical investigation. Technical Report BCS-G2010-28, Boeing Advanced Technology Center, Boeing Computing Services, Seattle, WA, July 1986.
[2]
M. Chen, E. Foroughi, F. Heintz, Z. Huang, S. Kapetanakis, K. Kostiadis, J. Kummeneje, I. Noda, O. Obst, P. Riley, T. Steffens, Y. Wang, and X. Yin. Users manual: RoboCup soccer server, for soccer server version 7.07 and later. The RoboCup Federation, August 2002.
[3]
P. T. De Boer, D. P. Kroese, S. Mannor, and R. Rubinstein. A tutorial on the cross-entropy method. Annals of Operations Research, 134(1):19--67, 2005.
[4]
P. Stone, R. S. Sutton, and G. Kuhlmann. Reinforcement learning for RoboCup-soccer keepaway. Adaptive Behavior, 13(3):165--188, 2005.



    Published In

    AAMAS '09: Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
    May 2009
    730 pages
    ISBN:9780981738178

    Sponsors

    • Drexel University
    • Wiley-Blackwell
    • Microsoft Research
    • Whitestein Technologies
    • European Office of Aerospace Research and Development, Air Force Office of Scientific Research, United States Air Force Research Laboratory
    • The Foundation for Intelligent Physical Agents

    Publisher

    International Foundation for Autonomous Agents and Multiagent Systems

    Richland, SC


    Author Tags

    1. multiagent reinforcement learning
    2. policy search
    3. robot soccer


    Acceptance Rates

    Overall Acceptance Rate 1,155 of 5,036 submissions, 23%

