[ICML 2021] DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning | DouDizhu AI
Reinforcement Learning / AI bots in card (poker) games: Blackjack, Leduc Hold'em, Texas Hold'em, DouDizhu, Mahjong, UNO.
Fully functional poker bot that works on PartyPoker, PokerStars, and GGPoker, scraping tables with OpenCV (adaptable via GUI) or a neural network, and making decisions based on a genetic algorithm and Monte Carlo simulation for poker equity calculation. Binaries can be downloaded with this link:
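As an illustration of the Monte Carlo equity calculation mentioned above, here is a minimal sketch. It assumes the third-party treys hand-evaluation library (not one of the projects listed here); the Card.new and Evaluator.evaluate calls reflect its documented usage but should be verified against the installed version.

```python
# Minimal Monte Carlo equity sketch, assuming the third-party `treys` library.
import random
from treys import Card, Evaluator

def estimate_equity(hero, villain, n_samples=10_000, seed=0):
    """Estimate hero's preflop all-in equity against a known villain hand
    by dealing random five-card boards and counting wins and ties."""
    rng = random.Random(seed)
    evaluator = Evaluator()
    deck = [Card.new(r + s) for r in "23456789TJQKA" for s in "shdc"]
    remaining = [c for c in deck if c not in hero + villain]
    wins = ties = 0
    for _ in range(n_samples):
        board = rng.sample(remaining, 5)
        hero_score = evaluator.evaluate(hero, board)      # lower score is better
        villain_score = evaluator.evaluate(villain, board)
        if hero_score < villain_score:
            wins += 1
        elif hero_score == villain_score:
            ties += 1
    return (wins + 0.5 * ties) / n_samples                # equity = wins + half the ties

if __name__ == "__main__":
    hero = [Card.new("As"), Card.new("Ks")]
    villain = [Card.new("Qh"), Card.new("Qd")]
    print(f"AKs vs QQ equity ~ {estimate_equity(hero, villain):.3f}")
```

The estimate's error shrinks roughly with the square root of the sample count, which is why bots typically trade off sample size against decision-time limits.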
🔥 An online poker game server powered by Redis, Node.js, and Socket.IO
Texas Hold'em OpenAI Gym poker environment with reinforcement learning based on keras-rl. Includes virtual rendering and Monte Carlo simulation for equity calculation.
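For readers unfamiliar with the Gym-style interface such environments expose, below is a minimal sketch following the classic reset()/step() convention. The one-decision game logic is a made-up stand-in for illustration, not the project's actual environment.

```python
# Toy environment exposing the classic Gym-style reset()/step() interface.
import random

class ToyShowdownEnv:
    """One decision per hand: the agent holds a card in 1..10 and chooses
    0 = fold (reward 0) or 1 = call (+1 if it beats a random opponent card,
    -1 otherwise)."""

    def reset(self):
        self.hand = random.randint(1, 10)
        return self.hand                        # observation

    def step(self, action):
        if action == 0:                         # fold
            reward = 0.0
        else:                                   # call / showdown
            opponent = random.randint(1, 10)
            reward = 1.0 if self.hand > opponent else -1.0
        done = True                             # one-step episodes
        return self.hand, reward, done, {}      # obs, reward, done, info

env = ToyShowdownEnv()
obs = env.reset()
obs, reward, done, info = env.step(1 if obs >= 6 else 0)  # naive threshold policy
```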
Framework for Multi-Agent Deep Reinforcement Learning in Poker
[Development suspended] Advanced open-source Texas Hold'em GTO solver with optimized performance (web browser version)
♟️ Vectorized RL game environments in JAX
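A minimal sketch of what "vectorized in JAX" means in practice: jax.vmap lifts a single-environment step function to a whole batch of environments, and jax.jit compiles the batched step. The toy dynamics here are illustrative only and unrelated to any specific project above.

```python
# Vectorized environment stepping with jax.vmap and jax.jit (toy dynamics).
import jax
import jax.numpy as jnp

def step(state, action):
    """Single-environment transition: state is a scalar position."""
    new_state = state + action
    reward = -jnp.abs(new_state)          # reward for staying near the origin
    done = jnp.abs(new_state) > 5.0
    return new_state, reward, done

# vmap maps the single-environment step over the leading batch axis;
# jit compiles the whole batched step into one XLA computation.
batched_step = jax.jit(jax.vmap(step))

states = jnp.zeros(1024)                  # 1024 environments in parallel
actions = jnp.where(jnp.arange(1024) % 2 == 0, 1.0, -1.0)
states, rewards, dones = batched_step(states, actions)
```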
Source code of the Guopai (果派) Texas Hold'em client, built with the Unity3D engine.
Poker-Hand-Evaluator: An efficient poker hand evaluation algorithm and its implementation, supporting 7-card poker and Omaha poker evaluation
Scalable Implementation of Deep CFR and Single Deep CFR
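For context on the CFR family: the core tabular update is regret matching, which Deep CFR approximates by training neural networks on sampled regrets. Below is a minimal regret-matching sketch on rock-paper-scissors (a stand-in game, not poker); in self-play the average strategy converges toward the uniform equilibrium.

```python
# Regret matching in self-play on rock-paper-scissors (the building block
# of CFR-family algorithms; Deep CFR replaces these tables with networks).
import numpy as np

ACTIONS = 3                                    # rock, paper, scissors
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])                # row player's payoff matrix

def get_strategy(regret_sum):
    positive = np.maximum(regret_sum, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(ACTIONS, 1.0 / ACTIONS)

def train(iterations=20_000, seed=0):
    rng = np.random.default_rng(seed)
    regret_sum = np.zeros(ACTIONS)
    opp_regret_sum = np.zeros(ACTIONS)
    strategy_sum = np.zeros(ACTIONS)
    for _ in range(iterations):
        strategy = get_strategy(regret_sum)
        opp_strategy = get_strategy(opp_regret_sum)
        strategy_sum += strategy
        a = rng.choice(ACTIONS, p=strategy)
        b = rng.choice(ACTIONS, p=opp_strategy)
        # regret = what each alternative action would have earned, minus
        # what the sampled action actually earned
        regret_sum += PAYOFF[:, b] - PAYOFF[a, b]
        opp_regret_sum += -PAYOFF[a, :] + PAYOFF[a, b]
    return strategy_sum / strategy_sum.sum()   # average strategy

print(train())                                 # approximately [1/3, 1/3, 1/3]
```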
An open-source Python library for poker game simulations, hand evaluations, and statistical analysis
[Development suspended] An efficient open-source postflop solver library written in Rust
[Development suspended] Advanced open-source Texas Hold'em GTO solver with optimized performance
7-card Texas Hold'em hand evaluator
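As a contrast to the optimized evaluators above, here is a naive reference sketch of 7-card evaluation: score every 5-card subset and keep the best. Real evaluators rely on precomputed lookup tables for speed; the (rank, suit) tuple encoding here is an arbitrary choice for illustration.

```python
# Naive 7-card evaluation: rank all 21 five-card subsets and take the maximum.
from collections import Counter
from itertools import combinations

def rank5(cards):
    """Rank a 5-card hand; cards are (rank, suit) with rank 2..14 (14 = ace).
    Returns a tuple that compares correctly: a higher tuple is a better hand."""
    ranks = sorted((r for r, _ in cards), reverse=True)
    groups = sorted(Counter(ranks).items(), key=lambda kv: (kv[1], kv[0]), reverse=True)
    ordered = [r for r, _ in groups]                       # ranks by (count, rank), descending
    flush = len({s for _, s in cards}) == 1
    unique = sorted(set(ranks), reverse=True)
    straight_high = 0
    if len(unique) == 5 and unique[0] - unique[4] == 4:
        straight_high = unique[0]
    elif unique == [14, 5, 4, 3, 2]:                       # the wheel (A-2-3-4-5)
        straight_high = 5
    if straight_high and flush:
        return (8, straight_high)                          # straight flush
    if groups[0][1] == 4:
        return (7, ordered[0], ordered[1])                 # four of a kind
    if groups[0][1] == 3 and groups[1][1] == 2:
        return (6, ordered[0], ordered[1])                 # full house
    if flush:
        return (5, *ranks)                                 # flush
    if straight_high:
        return (4, straight_high)                          # straight
    if groups[0][1] == 3:
        return (3, *ordered)                               # three of a kind
    if groups[0][1] == 2 and groups[1][1] == 2:
        return (2, *ordered)                               # two pair
    if groups[0][1] == 2:
        return (1, *ordered)                               # one pair
    return (0, *ranks)                                     # high card

def rank7(cards):
    """Best 5-card rank among all 21 subsets of 7 cards."""
    return max(rank5(combo) for combo in combinations(cards, 5))

# Example: A-K-Q-J-T of spades plus two irrelevant cards -> royal (straight) flush
seven = [(14, "s"), (13, "s"), (12, "s"), (11, "s"), (10, "s"), (2, "d"), (7, "h")]
print(rank7(seven))  # (8, 14)
```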
Implementation of Pluribus by Noam Brown & Tuomas Sandholm, a superhuman AI bot for 6-max No-Limit Texas Hold'em poker.