
Unifying Convergence and No-Regret in Multiagent Learning

  • Conference paper
Learning and Adaption in Multi-Agent Systems (LAMAS 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3898)


Abstract

We present a new multiagent learning algorithm, RVσ(t), that builds on an earlier algorithm, ReDVaLeR. ReDVaLeR could guarantee (a) convergence to best response against stationary opponents and either (b) constant bounded regret against arbitrary opponents or (c) convergence to Nash equilibrium policies in self-play. But it makes two strong assumptions: (1) that it can distinguish between self-play and otherwise non-stationary agents, and (2) that all agents know their portions of the same equilibrium in self-play. We show that the adaptive, explicitly time-dependent learning rate of RVσ(t) can overcome both of these assumptions. Consequently, RVσ(t) theoretically achieves (a') convergence to near-best response against eventually stationary opponents, (b') no-regret payoff against arbitrary opponents, and (c') convergence to some Nash equilibrium policy in some classes of games in self-play. Each agent now needs to know only its portion of any one equilibrium, and does not need to distinguish among non-stationary opponent types. This is also, to our knowledge, the first successful attempt at convergence of a no-regret algorithm in the Shapley game.
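The key mechanism here, a learning rate that decays explicitly with time, is the same device that makes classical no-regret learners "anytime" (usable without knowing the horizon in advance). The sketch below is not the authors' RVσ(t) update; it is a minimal, self-contained illustration of that device using the standard Hedge (multiplicative-weights) learner, with the schedule η_t ∝ √(ln K / t) chosen as an assumption for the example.

```python
import numpy as np

def hedge_anytime(losses):
    """Hedge (multiplicative weights) with a time-decaying learning rate.

    Illustrative only -- not the RVsigma(t) algorithm. With losses in
    [0, 1] and eta_t ~ sqrt(ln K / t), Hedge's cumulative regret grows
    as O(sqrt(T ln K)), so average regret vanishes (the no-regret property).

    losses: (T, K) array of per-round losses for K actions.
    Returns the (T, K) array of mixed strategies played.
    """
    T, K = losses.shape
    cum = np.zeros(K)                         # cumulative loss of each pure action
    played = np.empty((T, K))
    for t in range(1, T + 1):
        eta = np.sqrt(8.0 * np.log(K) / t)    # explicit time dependence
        w = np.exp(-eta * (cum - cum.min()))  # shift for numerical stability
        played[t - 1] = w / w.sum()           # mixed strategy at round t
        cum += losses[t - 1]                  # full-information feedback
    return played
```

The decaying rate trades early responsiveness against late stability without requiring the learner to classify its opponent first, which mirrors, in a much simpler setting, how RVσ(t)'s time-dependent rate removes ReDVaLeR's need to distinguish self-play from other non-stationary behavior.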






Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Banerjee, B., Peng, J. (2006). Unifying Convergence and No-Regret in Multiagent Learning. In: Tuyls, K., Hoen, P.J., Verbeeck, K., Sen, S. (eds) Learning and Adaption in Multi-Agent Systems. LAMAS 2005. Lecture Notes in Computer Science, vol. 3898. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11691839_5

  • DOI: https://doi.org/10.1007/11691839_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-33053-0

  • Online ISBN: 978-3-540-33059-2

  • eBook Packages: Computer Science, Computer Science (R0)
