Navigating Communication Networks with Deep Reinforcement Learning

Authors

  • Patrick Krämer, Technical University of Munich
  • Andreas Blenk, Technical University of Munich and University of Vienna

DOI:

https://doi.org/10.14279/tuj.eceasst.80.1177

Abstract

Traditional routing protocols such as Open Shortest Path First cannot incorporate fast-changing network states due to their inherent slowness and limited expressiveness. To overcome these limitations, we propose COMNAV, a system that uses Reinforcement Learning (RL) to learn a distributed routing protocol tailored to a specific network. COMNAV interprets routing as a navigational problem in which flows must find a path from their source to their destination; COMNAV therefore has a close connection to congestion games. The key concept and main contribution is the design of the learning process as a congestion game, which allows RL to effectively learn a distributed protocol. Game Theory thereby provides a solid foundation against which the policies RL learns can be evaluated, interpreted, and questioned. We evaluate the capabilities of the learning system in two scenarios in which the routing protocol must react to changes in the network state and make decisions based on the properties of the flow. Our results show that RL can learn the desired behavior and requires the exchange of only 16 bits of information.
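To make the navigation/congestion-game framing concrete, the sketch below shows a toy version of the idea: each flow hops node by node toward its destination, link costs grow with the number of flows sharing a link, and a learner updates per-hop values from the congestion it experiences. This is a minimal illustration only, not the authors' COMNAV implementation; it uses tabular Q-learning instead of deep RL, and the topology, cost function, and hyperparameters are assumptions chosen for the demo.

```python
import random
from collections import defaultdict

# Hypothetical topology for illustration: node -> neighbours.
GRAPH = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C'],
}
DST = 'D'


def link_cost(load):
    """Congestion-dependent cost: more flows on a link -> higher cost."""
    return 1.0 + load


def run_episode(q, load, eps=0.1, alpha=0.5, gamma=0.9, max_hops=10):
    """One flow navigates from 'A' to DST, updating Q-values along the way."""
    node = 'A'
    for _ in range(max_hops):
        if node == DST:
            break
        # Epsilon-greedy choice of the next hop (Q stores expected cost, so take min).
        if random.random() < eps:
            nxt = random.choice(GRAPH[node])
        else:
            nxt = min(GRAPH[node], key=lambda n: q[(node, n)])
        cost = link_cost(load[(node, nxt)])
        load[(node, nxt)] += 1  # this flow now occupies the link
        best_next = 0.0 if nxt == DST else min(q[(nxt, n)] for n in GRAPH[nxt])
        # Tabular Q-learning update on the (node, next-hop) pair.
        q[(node, nxt)] += alpha * (cost + gamma * best_next - q[(node, nxt)])
        node = nxt


if __name__ == '__main__':
    q = defaultdict(float)
    for _ in range(500):
        load = defaultdict(int)   # per-episode link occupancy
        for _ in range(3):        # three concurrent flows share the links
            run_episode(q, load)
    # Learned first hop out of node 'A' after training.
    print(min(GRAPH['A'], key=lambda n: q[('A', n)]))
```

Because the cost of a link depends on how many flows already use it, the learned policy tends to spread flows across paths rather than piling them onto a single shortest path, which is the congestion-game intuition the abstract refers to.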

Published

2021-09-08

How to Cite

[1] P. Krämer and A. Blenk, “Navigating Communication Networks with Deep Reinforcement Learning”, eceasst, vol. 80, Sep. 2021.