
2024 | Original Paper | Book Chapter

Communicative and Cooperative Learning for Multi-agent Indoor Navigation

Authors: Fengda Zhu, Vincent CS Lee, Rui Liu

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

The ability to cooperate and work as a team is one of the “holy grail” goals of intelligent robots. To address the importance of communication in multi-agent reinforcement learning (MARL), we propose a Cooperative Indoor Navigation (CIN) task, in which agents cooperatively navigate to reach a goal in a 3D indoor room with realistic observation inputs. This navigation task is more challenging and closer to real-world robotic applications than previous multi-agent tasks, since each agent can observe only part of the environment from its first-person view. The task therefore requires communication and cooperation among agents to accomplish. To study the CIN task, we collect a large-scale dataset with challenging demonstration trajectories. The code and data of the CIN task have been released. Prior MARL methods primarily emphasized learning policies for multiple agents but paid little attention to the communication model, and consequently perform suboptimally on the CIN task. In this paper, we propose a MARL model with a communication mechanism to address the CIN task. In our experiments, we find that our proposed model outperforms previous MARL methods and that communication is the key to addressing the CIN task. Our quantitative results show that our proposed MARL method outperforms the baseline by 6% on SPL, and our qualitative results demonstrate that the agent with the communication mechanism explores the whole environment sufficiently and thus navigates efficiently.
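The abstract reports a 6% improvement on SPL (Success weighted by Path Length), the standard evaluation metric for embodied navigation. As a reference point, the sketch below computes SPL from its usual definition: for each episode, a success indicator weighted by the ratio of the shortest-path length to the longer of the actual and shortest path, averaged over episodes. This is a generic illustration of the metric, not code from the paper; the episode tuple format is a hypothetical convention chosen here.

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest_path_len, actual_path_len) tuples,
    one per evaluation episode. Failed episodes contribute zero.
    """
    total = 0.0
    for success, shortest, actual in episodes:
        if success:
            # Reward reaching the goal, discounted by path inefficiency.
            total += shortest / max(actual, shortest)
    return total / len(episodes)

# Example: one optimal success, one success with a detour, one failure.
score = spl([(True, 5.0, 5.0), (True, 5.0, 10.0), (False, 5.0, 8.0)])
print(round(score, 3))  # 0.5
```

Note that SPL penalizes detours even on successful episodes, which is why it is a stricter measure than raw success rate for tasks like CIN where agents must explore efficiently.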


DOI
https://doi.org/10.1007/978-981-97-2253-2_22
