Q-learning-based non-zero sum games for Markov jump multiplayer systems under actor-critic NNs structure
Recommendations
Zero-sum game-based optimal control for discrete-time Markov jump systems: A parallel off-policy Q-learning method
Abstract: In this paper, the zero-sum game problem for linear discrete-time Markov jump systems is solved by two novel model-free reinforcement Q-learning algorithms: on-policy Q-learning and off-policy Q-learning. Firstly, under the framework of the zero-...
Highlights:
- The zero-sum game problem for linear discrete-time Markov jump systems is addressed with the reinforcement learning method.
- A novel model-free parallel off-policy Q-learning method is developed, which is not affected by probing noise.
QL2, a simple reinforcement learning scheme for two-player zero-sum Markov games
Markov games are a framework that can be used to formalise n-agent reinforcement learning (RL). Littman (Markov games as a framework for multi-agent reinforcement learning, in: Proceedings of the 11th International Conference on Machine Learning (ICML-...
Stochastic Stabilization of Discrete-Time Markov Jump Systems with Generalized Delay and Deficient Transition Rates
In this paper, the problem of mode-dependent state feedback controller design is studied for discrete-time Markov jump systems with generalized delay and deficient transition rates. The time delay under consideration is subject to mode-dependent and ...
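The recommended articles above all build on Q-learning as the underlying model-free method. As background only, here is a minimal tabular Q-learning sketch in its generic textbook form; it is not the parallel off-policy or game-theoretic variant of the papers, and the `q_learning` function, the `transitions` encoding, and the toy two-state MDP are all constructions of this sketch:

```python
import random

def q_learning(transitions, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning on a small deterministic MDP.

    `transitions[(s, a)] = (next_state, reward, done)` is a hypothetical
    encoding chosen for this illustration.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = transitions[(s, a)]
            # one-step temporal-difference update toward the Bellman target
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy two-state chain: action 1 moves toward the terminal reward.
transitions = {
    (0, 0): (0, 0.0, False),
    (0, 1): (1, 0.0, False),
    (1, 0): (0, 0.0, False),
    (1, 1): (1, 1.0, True),   # terminal transition with reward 1
}
Q = q_learning(transitions, n_states=2, n_actions=2)
```

After training, the greedy policy derived from `Q` prefers action 1 in both states, i.e. it heads for the rewarded terminal transition. The zero-sum and multiplayer extensions in the papers above replace this single-agent Bellman target with a game-theoretic (minimax or Nash) value.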
Information
Publisher: Elsevier Science Inc., United States
Qualifiers
- Research-article
Bibliometrics
Article Metrics
- Total Citations: 0
- Total Downloads: 0
- Downloads (Last 12 months): 0
- Downloads (Last 6 weeks): 0