DOI: 10.1145/3278721.3278759
Research Article

Designing Non-greedy Reinforcement Learning Agents with Diminishing Reward Shaping

Published: 27 December 2018

Abstract

This paper addresses an issue in reinforcement learning (RL): when agents possess varying capabilities, most resources may be acquired by the stronger agents, leaving the weaker ones "starving". We introduce a simple method to train non-greedy agents in multi-agent reinforcement learning scenarios at nearly no extra cost. Our model achieves the following goals in designing a non-greedy agent: non-homogeneous equality, reliance on local information only, cost-effectiveness, generalizability, and configurability. We propose the idea of diminishing reward, which makes an agent feel less satisfied with each consecutive reward it obtains. This idea lets agents behave less greedily without the need to explicitly encode any ethical pattern or monitor other agents' status. Under our framework, resources are distributed more equally without the risk of reaching homogeneous equality. We designed two games, Gathering Game and Hunter Prey, to evaluate the quality of the model.
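The core idea above can be illustrated with a minimal sketch. The class and parameter names below are illustrative, not taken from the paper: each consecutive reward an agent collects is scaled down by a decay factor, so greedy accumulation yields diminishing returns, and the scaling uses only the agent's own local history.

```python
class DiminishingRewardShaper:
    """Sketch of diminishing reward shaping: consecutive rewards
    are worth progressively less, using only local information."""

    def __init__(self, decay=0.5):
        self.decay = decay   # per-streak satisfaction decay factor
        self.streak = 0      # consecutive rewards obtained so far

    def shape(self, raw_reward):
        """Return the shaped reward the agent actually learns from."""
        if raw_reward > 0:
            shaped = raw_reward * (self.decay ** self.streak)
            self.streak += 1
        else:
            shaped = raw_reward
            self.streak = 0  # a step without reward resets satisfaction
        return shaped


shaper = DiminishingRewardShaper(decay=0.5)
print([shaper.shape(r) for r in [1, 1, 1, 0, 1]])
# → [1.0, 0.5, 0.25, 0, 1.0]
```

Because the shaped reward depends only on the agent's own reward stream, no communication or monitoring of other agents is required, which is what makes this kind of shaping nearly free to add to an existing learner.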




    Published In

    AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
    December 2018
    406 pages
    ISBN:9781450360128
    DOI:10.1145/3278721

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. multi-agent reinforcement learning
    2. non-greedy
    3. reward shaping

    Qualifiers

    • Research-article

    Conference

    AIES '18: AAAI/ACM Conference on AI, Ethics, and Society
    February 2-3, 2018
    New Orleans, LA, USA

    Acceptance Rates

    AIES '18 paper acceptance rate (also the overall acceptance rate): 61 of 162 submissions, 38%

    Article Metrics

    • Downloads (last 12 months): 30
    • Downloads (last 6 weeks): 2
    Reflects downloads up to 24 Nov 2024


    Cited By

    • (2023) Multi-objective reinforcement learning for designing ethical multi-agent environments. Neural Computing and Applications, 10.1007/s00521-023-08898-y. Online publication date: 23-Aug-2023
    • (2023) Hindsight Balanced Reward Shaping. Neural Information Processing, 10.1007/978-981-99-1642-9_42, 492-503. Online publication date: 14-Apr-2023
    • (2021) On Assessing The Safety of Reinforcement Learning Algorithms Using Formal Methods. 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS), 10.1109/QRS54544.2021.00037, 260-269. Online publication date: Dec-2021
    • (2021) Domain-Aware Multiagent Reinforcement Learning in Navigation. 2021 International Joint Conference on Neural Networks (IJCNN), 10.1109/IJCNN52387.2021.9533975, 1-8. Online publication date: 2021
    • (2020) Learning to utilize shaping rewards. Proceedings of the 34th International Conference on Neural Information Processing Systems, 10.5555/3495724.3497060, 15931-15941. Online publication date: 6-Dec-2020
    • (2020) Improved Reinforcement Learning with Curriculum. Expert Systems with Applications, 10.1016/j.eswa.2020.113515. Online publication date: May-2020
    • (2019) A Regulation Enforcement Solution for Multi-agent Reinforcement Learning. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 10.5555/3306127.3332057, 2201-2203. Online publication date: 8-May-2019
