Abstract
The limited generalization ability of reinforcement learning constrains its potential applications, particularly in complex scenarios such as multi-agent systems. To overcome this limitation and enhance the generalization capability of multi-agent reinforcement learning (MARL) algorithms, this paper proposes a three-stage method that integrates domain randomization and domain adaptation to extract effective features for policy learning. Specifically, the first stage uses domain randomization to sample the environments used for training and testing in the subsequent stages. The second stage pretrains a domain-invariant feature extractor (DIFE), which employs cycle consistency to disentangle domain-invariant and domain-specific features. The third stage uses DIFE for policy learning. Experimental results on Multi-agent Particle Environment (MPE) tasks demonstrate that our approach yields better performance and generalization ability. Visualization analysis further shows that the features captured by DIFE are more interpretable for subsequent policy learning.
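To make the second stage concrete, the sketch below shows one plausible way to implement cycle-consistent disentanglement for a DIFE-style encoder. It is a minimal illustration, not the authors' architecture: the two-branch encoder, the decoder, the layer sizes, and the loss weight are all assumptions. The cycle term follows the general recipe of swapping domain-specific codes between two randomized domains, decoding, re-encoding, and requiring the domain-invariant code to survive the round trip.

```python
# Hypothetical sketch of a cycle-consistent domain-invariant feature
# extractor (DIFE). Module names, layer sizes, and the loss weight are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DIFE(nn.Module):
    def __init__(self, obs_dim: int, inv_dim: int = 32, spec_dim: int = 8):
        super().__init__()
        # Domain-invariant branch: features meant to transfer across domains.
        self.enc_inv = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, inv_dim))
        # Domain-specific branch: features that capture per-domain variation.
        self.enc_spec = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, spec_dim))
        # Decoder reconstructs the observation from both codes together.
        self.dec = nn.Sequential(
            nn.Linear(inv_dim + spec_dim, 64), nn.ReLU(),
            nn.Linear(64, obs_dim))

    def encode(self, obs):
        return self.enc_inv(obs), self.enc_spec(obs)

    def decode(self, z_inv, z_spec):
        return self.dec(torch.cat([z_inv, z_spec], dim=-1))

def pretrain_step(model: DIFE, obs_a, obs_b):
    """One pretraining step on batches from two randomized domains A and B."""
    zi_a, zs_a = model.encode(obs_a)
    zi_b, zs_b = model.encode(obs_b)
    # Plain reconstruction: each observation from its own codes.
    recon = (F.mse_loss(model.decode(zi_a, zs_a), obs_a)
             + F.mse_loss(model.decode(zi_b, zs_b), obs_b))
    # Cycle consistency: swap the domain-specific codes, decode, re-encode,
    # and require the recovered invariant code to match the original one.
    fake_a = model.decode(zi_a, zs_b)
    fake_b = model.decode(zi_b, zs_a)
    zi_a_cyc, _ = model.encode(fake_a)
    zi_b_cyc, _ = model.encode(fake_b)
    cycle = (F.mse_loss(zi_a_cyc, zi_a.detach())
             + F.mse_loss(zi_b_cyc, zi_b.detach()))
    return recon + 0.5 * cycle  # 0.5 is an assumed weighting

# Usage (shapes only):
#   model = DIFE(obs_dim=16)
#   loss = pretrain_step(model, obs_a, obs_b); loss.backward()
```

In the third stage, only the domain-invariant code z_inv would be fed to the policy network; keeping the domain-specific code out of the policy input is the point of the disentanglement.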
Acknowledgements
This work was supported by the Beijing Nova Program under Grant 20220484077, the National Natural Science Foundation of China under Grant 62073323, and the External Cooperation Key Project of the Chinese Academy of Sciences under Grant 173211KYSB20200002.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, Y., Pu, Z., Cai, Q., Li, F., Chai, X. (2023). Improving Generalization of Multi-agent Reinforcement Learning Through Domain-Invariant Feature Extraction. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14259. Springer, Cham. https://doi.org/10.1007/978-3-031-44223-0_5
DOI: https://doi.org/10.1007/978-3-031-44223-0_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44222-3
Online ISBN: 978-3-031-44223-0
eBook Packages: Computer Science (R0)