Abstract
Reinforcement learning tackles the problem of how to act optimally given observations of the current world state. Agents that learn from reinforcement execute actions in an environment and receive feedback (reward) that guides the learning process. The distinguishing feature of reinforcement learning is that the model of the environment (i.e., the effects of actions and the reward function) is not known in advance. Model-based approaches are a class of reinforcement learning algorithms that learn such a model of the dynamics, which the learning agent can then use to simulate interactions with the environment. DynaQ and its extension with prioritised sweeping are among the most popular model-based approaches. This paper shows that, contrary to common belief, DynaQ with prioritised sweeping may perform worse than pure DynaQ in domains where the agent can easily be misled by a sub-optimal solution.
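Since the paper contrasts pure DynaQ with its prioritised-sweeping variant, a minimal tabular sketch of the latter may help fix ideas. This is an illustrative sketch, not the authors' implementation: it assumes a deterministic environment exposed through a hypothetical `env.step(s, a)` interface returning `(next_state, reward)`, hashable states, and a fixed discrete action set; all constants and helper names are placeholders.

```python
# Sketch of tabular DynaQ with prioritised sweeping (after Moore & Atkeson, 1993).
# Assumes a deterministic environment; env.step(s, a) is a hypothetical interface.
import heapq
import itertools
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration
THETA, N_PLANNING = 1e-4, 10             # priority threshold, planning steps

Q = defaultdict(float)            # Q[(s, a)] -> action value
model = {}                        # model[(s, a)] -> (s', r): learned dynamics
predecessors = defaultdict(set)   # s' -> set of (s, a) pairs leading to s'
pqueue = []                       # max-priority queue (priorities negated)
tiebreak = itertools.count()      # avoids comparing states on priority ties

def greedy(s, actions):
    return max(actions, key=lambda a: Q[(s, a)])

def dynaq_ps_step(env, s, actions):
    # Act epsilon-greedily in the real environment.
    a = random.choice(actions) if random.random() < EPSILON else greedy(s, actions)
    s2, r = env.step(s, a)                      # hypothetical interface
    model[(s, a)] = (s2, r)                     # update the learned model
    predecessors[s2].add((s, a))

    # Queue the real transition with its TD-error magnitude as priority.
    p = abs(r + GAMMA * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    if p > THETA:
        heapq.heappush(pqueue, (-p, next(tiebreak), (s, a)))

    # Planning: apply high-priority simulated backups from the model.
    for _ in range(N_PLANNING):
        if not pqueue:
            break
        _, _, (ps, pa) = heapq.heappop(pqueue)
        ns, nr = model[(ps, pa)]
        Q[(ps, pa)] += ALPHA * (
            nr + GAMMA * max(Q[(ns, b)] for b in actions) - Q[(ps, pa)])
        # Propagate backwards: reprioritise the predecessors of ps.
        for (qs, qa) in predecessors[ps]:
            _, qr = model[(qs, qa)]
            qp = abs(qr + GAMMA * max(Q[(ps, b)] for b in actions) - Q[(qs, qa)])
            if qp > THETA:
                heapq.heappush(pqueue, (-qp, next(tiebreak), (qs, qa)))
    return s2
```

Pure DynaQ would replace the priority queue with `N_PLANNING` backups over uniformly sampled previously seen `(s, a)` pairs; the paper's point is that the queue's focus on large TD errors can entrench a sub-optimal solution in misleading domains.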