DOI: 10.1145/3555858.3555914

State similarity based Rapid Action Value Estimation for general game playing MCTS agents

Published: 04 November 2022

Abstract

Since Monte Carlo Tree Search (MCTS) has been established as one of the most promising algorithms in the field of Game AI, several approaches have been proposed to exploit as much information as possible during the tree search, the most notable being Rapid Action Value Estimation (RAVE) and its variants. These techniques estimate, for each action in a node, an additional value (AMAF) based on statistics from all simulations in which that action was selected deeper in the search tree. This study presents a methodology for determining the most suitable node whose AMAF scores should be used during the selection phase. Two approaches are proposed for discovering nodes with similar states based on the actions selected along their paths: in the first, N-grams are employed to detect similar paths, while in the second a vectorized representation of the actions taken is used. The suggested algorithms are tested in the context of general game playing and achieve satisfactory results in terms of both win rate and overall score.
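
As a rough illustration of the ideas outlined above (a hypothetical sketch, not the authors' implementation), the Python snippet below blends a standard UCT selection term with a RAVE-style AMAF estimate and implements both similarity measures the abstract mentions: N-gram overlap between two nodes' action paths, and a cosine similarity over vectorized action counts. All names and parameters (Node, uct_rave_score, the constant k) are assumptions; only the beta schedule follows the hand-selected form proposed by Gelly and Silver for RAVE.

    # Minimal sketch (hypothetical, not the paper's code): RAVE-style selection
    # plus the two path-similarity measures described in the abstract.
    import math
    from collections import Counter

    class Node:
        def __init__(self, path_actions):
            self.path_actions = path_actions      # actions on the root-to-node path
            self.children = {}                    # action -> child Node
            self.visits = 0
            self.value = 0.0                      # cumulative simulation reward
            self.amaf_visits = Counter()          # action -> AMAF visit count
            self.amaf_value = Counter()           # action -> AMAF reward sum

    def uct_rave_score(node, action, child, c=1.4, k=250):
        """UCT blended with the AMAF estimate for `action`; beta shrinks from 1
        toward 0 as visits accumulate (Gelly and Silver's schedule, parameter k)."""
        q = child.value / max(child.visits, 1)
        amaf_q = node.amaf_value[action] / max(node.amaf_visits[action], 1)
        beta = math.sqrt(k / (3 * node.visits + k))
        explore = c * math.sqrt(math.log(max(node.visits, 1)) / max(child.visits, 1))
        return (1 - beta) * q + beta * amaf_q + explore

    # Approach 1: N-gram overlap between two nodes' action paths.
    def ngram_similarity(path_a, path_b, n=2):
        grams = lambda p: Counter(tuple(p[i:i + n]) for i in range(len(p) - n + 1))
        a, b = grams(path_a), grams(path_b)
        shared = sum((a & b).values())            # n-grams common to both paths
        return shared / max(sum(a.values()), sum(b.values()), 1)

    # Approach 2: cosine similarity of vectorized action counts.
    def vector_similarity(path_a, path_b):
        a, b = Counter(path_a), Counter(path_b)
        dot = sum(a[x] * b[x] for x in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def most_similar_node(target, candidates, sim=ngram_similarity):
        """Pick the candidate whose path looks most like `target`'s; its AMAF
        tables could then substitute for the target's during selection."""
        return max(candidates, key=lambda n: sim(target.path_actions, n.path_actions))

Under these assumptions, a selection step would call most_similar_node over a set of candidate nodes and read AMAF statistics from the returned node in uct_rave_score in place of the current node's own tables.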

Published In

FDG '22: Proceedings of the 17th International Conference on the Foundations of Digital Games
September 2022
664 pages
ISBN: 9781450397957
DOI: 10.1145/3555858
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 04 November 2022


Author Tags

  1. MCTS
  2. N-grams
  3. RAVE
  4. general game playing
  5. state similarity

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

FDG22

Acceptance Rates

Overall Acceptance Rate: 152 of 415 submissions, 37%


Article Metrics

  • Total Citations: 0
  • Total Downloads: 33
  • Downloads (last 12 months): 15
  • Downloads (last 6 weeks): 0

Reflects downloads up to 25 Nov 2024
