Implicit imitation assumes that learning agents observe only the state transitions of an agent they use as a mentor, and try to reproduce these transitions based on their own abilities and knowledge of their environment. In this paper, we put forward a deep implicit imitation Q-network (DIIQN) model that incorporates ideas from three well-known Deep Q-Network (DQN) variants, enabling a novel implicit imitation method for online, model-free deep reinforcement learning. Our thorough experimentation in the complex environment of the emerging lane-free traffic paradigm verifies the benefits of our approach. Specifically, we show that deep implicit imitation RL dramatically accelerates learning compared to a “vanilla” DQN method; and, unlike explicit imitation reinforcement learning, it is able to surpass the mentor’s performance without resorting to additional information, such as the mentor’s actions.
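For intuition only, the following is a minimal tabular sketch of the implicit-imitation idea the abstract describes, not the paper’s DIIQN architecture: the learner sees only the mentor’s state transitions (s, s′), infers which of its own actions best explains each transition using an assumed-known transition model, and then applies an ordinary Q-learning update on the inferred experience. All names here (infer_action, the models P and R) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: implicit imitation with tabular Q-learning.
# The learner never sees the mentor's actions, only (s, s') pairs.
n_states, n_actions = 10, 4
rng = np.random.default_rng(0)

# Assumed: the learner knows its own dynamics P[s, a] -> next-state
# distribution and a reward model R[s, a] (not part of the paper's setup,
# just what this toy sketch needs to be self-contained).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def infer_action(s, s_next):
    """Pick the learner's own action most likely to reproduce the
    mentor's observed transition s -> s_next."""
    return int(np.argmax(P[s, :, s_next]))

def implicit_imitation_update(s, s_next):
    a = infer_action(s, s_next)                    # no mentor action needed
    td_target = R[s, a] + gamma * Q[s_next].max()  # standard Q-learning target
    Q[s, a] += alpha * (td_target - Q[s, a])

# Replay a small batch of observed mentor transitions.
mentor_transitions = [(0, 3), (3, 7), (7, 2)]
for s, s_next in mentor_transitions:
    implicit_imitation_update(s, s_next)
```

In the deep, model-free setting the paper targets, the tabular Q and the explicit models above would be replaced by learned networks; the sketch only illustrates why observed mentor transitions suffice to drive value updates without access to the mentor’s actions.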