Experimental Evaluation of a Method for Simulation based Learning for a Multi-Agent System Acting in a Physical Environment

Authors: Kun Qian, Robert W. Brehm and Lars Duggen

Affiliation: SDU Mechatronics, Mads Clausen Institute, University of Southern Denmark, Denmark

Keyword(s): Cooperative Multi-Agent Systems, Multi-Agent Reinforcement Learning, Multi-Agent Actor-Critic, Cooperative Navigation, Simulation Based Learning.

Related Ontology Subjects/Areas/Topics: Agents ; Artificial Intelligence ; Artificial Intelligence and Decision Support Systems ; Computational Intelligence ; Cooperation and Coordination ; Distributed and Mobile Software Systems ; Enterprise Information Systems ; Evolutionary Computing ; Industrial Applications of AI ; Knowledge Discovery and Information Retrieval ; Knowledge Engineering and Ontology Development ; Knowledge-Based Systems ; Machine Learning ; Multi-Agent Systems ; Robot and Multi-Robot Systems ; Self Organizing Systems ; Soft Computing ; Software Engineering ; Symbolic Systems

Abstract: A method for simulation-based reinforcement learning (RL) for a multi-agent system acting in a physical environment is introduced, which is based on Multi-Agent Actor-Critic (MAAC) reinforcement learning. In the proposed method, avatar agents learn in a simulated model of the physical environment, and the learned experience is then used by agents in the actual physical environment. The proposed concept is verified using a laboratory benchmark setup in which multiple agents, acting within the same environment, are required to coordinate their movement actions to prevent collisions. Three state-of-the-art algorithms for multi-agent reinforcement learning (MARL) are evaluated with respect to their applicability to a predefined benchmark scenario. Based on simulations, it is shown that the MAAC method is most suitable for implementation, as it provides effective distributed learning and fits well with the concept of learning in simulated environments. Our experimental results, which compare simulated learning and task execution in a simulated environment with task execution in a physical environment, demonstrate the feasibility of the proposed concept.
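The learn-in-simulation, deploy-in-reality workflow described in the abstract can be illustrated with a deliberately small toy problem: two agents on a 2×4 grid must swap ends of a corridor without colliding, so one of them has to sidestep into the second row. The sketch below is not the paper's MAAC algorithm or its benchmark; it substitutes centralized tabular Q-learning over joint states and joint actions, and the grid, rewards, and hyperparameters are illustrative assumptions. A policy is first trained in the simulated grid (`train`) and then executed greedily (`deploy`), mirroring the transfer of experience learned by avatar agents to agents acting in the target environment.

```python
import random
from collections import defaultdict

# Toy cooperative-navigation environment (illustrative, not the paper's setup):
# a 4-wide, 2-high grid; the agents start at opposite ends of row 0 and must
# reach each other's start cell without ever occupying the same cell.
W, H = 4, 2
MOVES = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))   # stay, E, W, S, N
JOINT = [(a, b) for a in MOVES for b in MOVES]        # joint action space
START = ((0, 0), (3, 0))
GOALS = ((3, 0), (0, 0))

def _clamp(p, m):
    """Apply a move and keep the agent inside the grid."""
    return (min(W - 1, max(0, p[0] + m[0])), min(H - 1, max(0, p[1] + m[1])))

def step(state, acts):
    """Joint transition: returns (next_state, reward, done)."""
    nxt = tuple(_clamp(p, m) for p, m in zip(state, acts))
    # Collision: same target cell, or the two agents swapping cells.
    if nxt[0] == nxt[1] or (nxt[0] == state[1] and nxt[1] == state[0]):
        return state, -10.0, False            # both bounce back, penalty
    if nxt == GOALS:
        return nxt, 10.0, True                # cooperative success bonus
    return nxt, -1.0, False                   # per-step cost encourages speed

def train(episodes=6000, alpha=0.3, gamma=0.95, eps=0.2, seed=1):
    """Tabular Q-learning over joint state/action pairs in the simulated grid."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = START
        for _ in range(40):                   # episode length cap
            if rng.random() < eps:
                a = rng.choice(JOINT)         # epsilon-greedy exploration
            else:
                a = max(JOINT, key=lambda j: Q[(s, j)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[(s2, j)] for j in JOINT)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

def deploy(Q, max_steps=30):
    """Greedy rollout of the learned policy, standing in for execution
    in the target environment; returns the visited joint states."""
    s, path = START, [START]
    for _ in range(max_steps):
        a = max(JOINT, key=lambda j: Q[(s, j)])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path

if __name__ == "__main__":
    print(deploy(train()))
```

The split between `train` and `deploy` is the point of the sketch: all exploration (and all collision penalties) happens against the simulated `step` model, and only the resulting policy is exercised afterwards, which is the rationale the abstract gives for learning in a simulated model before acting in the physical environment.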

CC BY-NC-ND 4.0


Paper citation in several formats:
Qian, K.; Brehm, R. and Duggen, L. (2019). Experimental Evaluation of a Method for Simulation based Learning for a Multi-Agent System Acting in a Physical Environment. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART; ISBN 978-989-758-350-6; ISSN 2184-433X, SciTePress, pages 103-109. DOI: 10.5220/0007250301030109

@conference{icaart19,
author={Kun Qian and Robert W. Brehm and Lars Duggen},
title={Experimental Evaluation of a Method for Simulation based Learning for a Multi-Agent System Acting in a Physical Environment},
booktitle={Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2019},
pages={103-109},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007250301030109},
isbn={978-989-758-350-6},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 11th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - Experimental Evaluation of a Method for Simulation based Learning for a Multi-Agent System Acting in a Physical Environment
SN - 978-989-758-350-6
IS - 2184-433X
AU - Qian, K.
AU - Brehm, R.
AU - Duggen, L.
PY - 2019
SP - 103
EP - 109
DO - 10.5220/0007250301030109
PB - SciTePress
