
DOI: 10.1145/3472538.3472580 · FDG Conference Proceedings · Short paper

Antagonistic Procedural Content Generation Of Sparse Reward Game

Published: 21 October 2021

Abstract

With the growth of the game industry, procedural content generation (PCG) has been widely used to generate game content automatically. With its help, designers can quickly produce unlimited levels, generate levels in real time, target preset difficulty levels, and so on. However, because sparse reward games provide little real-time, continuous feedback from their game elements, content generation for them rarely meets expectations. This paper combines search-based procedural content generation with auxiliary-task reinforcement learning to generate content for sparse reward games through adversarial generation. In this method, hierarchical reinforcement learning smoothly evaluates the fitness of generated candidate individuals, and the population is screened according to the fitness obtained. Because generation is based on agent simulation, playable levels emerge from the agents' free exploration, which is more novel than traditional rule-based generation. In the end, we successfully implemented procedural content generation for a typical sparse reward game, demonstrating that our method is feasible. Different sparse reward games have different task complexity; by modifying the levels and tasks of the hierarchical reinforcement learning, the method can be extended to other sparse reward games.
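The generate-and-test loop the abstract describes — propose candidate levels, score each one by simulating an agent, keep the fittest, and mutate them into the next population — can be sketched minimally as follows. This is not the paper's implementation: the grid encoding, mutation rate, and especially `agent_fitness` (a trivial proxy standing in for the hierarchical-RL agent's smooth playability score) are illustrative placeholders.

```python
import random

random.seed(0)

GRID_W, GRID_H = 8, 8  # hypothetical level size


def random_level():
    # A level is a grid of 0 (floor) / 1 (wall) tiles.
    return [[random.randint(0, 1) for _ in range(GRID_W)] for _ in range(GRID_H)]


def mutate(level, rate=0.05):
    # Flip each tile with a small probability to produce a variant.
    return [[1 - cell if random.random() < rate else cell for cell in row]
            for row in level]


def agent_fitness(level):
    # Placeholder for the paper's agent-simulation evaluation: the trained
    # hierarchical-RL agent would play the level and return a smooth
    # playability score. Here we substitute the fraction of open floor tiles.
    open_tiles = sum(cell == 0 for row in level for cell in row)
    return open_tiles / (GRID_W * GRID_H)


def evolve(pop_size=20, generations=10, elite=5):
    population = [random_level() for _ in range(pop_size)]
    for _ in range(generations):
        # Screen the population by the fitness the evaluator assigns.
        population.sort(key=agent_fitness, reverse=True)
        parents = population[:elite]
        # Refill the population with mutated copies of the best candidates.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - elite)]
    return max(population, key=agent_fitness)


best = evolve()
```

Swapping `agent_fitness` for a learned evaluator is what turns this generic search-based loop into the adversarial, agent-driven scheme the paper proposes.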



    Published In

    FDG '21: Proceedings of the 16th International Conference on the Foundations of Digital Games
    August 2021
    534 pages

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Computer games
    2. PCG
    3. Reinforcement learning
    4. Sparse reward game

    Qualifiers

    • Short-paper
    • Research
    • Refereed limited

    Conference

    FDG'21

    Acceptance Rates

    Overall Acceptance Rate 152 of 415 submissions, 37%
