Guiding exploration by pre-existing knowledge without modifying reward

Reinforcement learning is based on exploration of the environment and receiving reward that indicates which actions taken by the agent are good and which are bad. In many applications, receiving even the first reward may require long exploration, during which the agent has no information about its progress. This paper presents an approach that makes it possible to use pre-existing knowledge about the task to guide exploration through the state space.
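To make the idea concrete, here is a minimal, illustrative sketch (not the paper's bi-memory algorithm; the corridor task, heuristic, and parameters are assumptions for illustration): tabular Q-learning on a 10-state corridor, where pre-existing task knowledge — a heuristic saying "the goal lies to the right" — selects the action taken on exploratory steps. The sparse reward signal itself is never modified.

```python
import random

# Illustrative sketch only (not the paper's exact method): exploration is
# biased by prior knowledge, while the reward function stays untouched.

N = 10               # states 0..9; reaching state 9 ends the episode
ACTIONS = (-1, +1)   # move left / move right
GOAL = N - 1

def heuristic(action):
    # Pre-existing knowledge about the task: prefer moving right.
    return 1.0 if action == +1 else 0.0

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0   # sparse, unmodified reward
    return nxt, reward, nxt == GOAL

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

    def greedy(s):
        best = max(Q[(s, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if Q[(s, a)] == best])

    for _ in range(episodes):
        s, done = 0, False
        for _ in range(100):
            if done:
                break
            if rng.random() < eps:
                # Exploratory step guided by the heuristic, instead of
                # choosing an action uniformly at random.
                a = max(ACTIONS, key=heuristic)
            else:
                a = greedy(s)
            nxt, r, done = step(s, a)
            target = r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = nxt
    return Q

Q = train()
# Greedy policy after training: move right in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

The point of the sketch is the division of labor: the heuristic only shapes which actions get tried before any reward has been seen, while the values are still learned from the original, unmodified reward signal.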
Främling, K. Bi-Memory Model for Guiding Exploration by Pre-existing Knowledge. Helsinki University of Technology, P.O. Box 5500, FI-02015 TKK, Finland.
Främling, K. (2007a). Guiding exploration by pre-existing knowledge without modifying reward. Neural Networks, 20:736-747.
Främling, K. (2007b). Replacing ...
We present a new approach for exploration in Reinforcement Learning (RL) based on certain properties of Markov Decision Processes (MDPs).