Paper ID | MLSP-26.5
Paper Title | GLOBAL-LOCALIZED AGENT GRAPH CONVOLUTION FOR MULTI-AGENT REINFORCEMENT LEARNING
Authors | Yuntao Liu, Yong Dou, Siqi Shen, Peng Qiao, National University of Defence Technology, China
Session | MLSP-26: Reinforcement Learning 2
Location | Gather.Town
Session Time | Thursday, 10 June, 13:00 - 13:45
Presentation Time | Thursday, 10 June, 13:00 - 13:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-REI] Reinforcement learning
Abstract | Much effort has been devoted to modeling the complex relationships and localized cooperation among large numbers of agents in large-scale multi-agent systems. However, while interactions between agents often happen locally, global cooperation among all agents is also important. Enabling agents to learn global and localized cooperation information simultaneously in multi-agent systems is a challenging problem. In this paper, we model the global and localized cooperation among agents with global and localized agent graphs, and propose a novel graph convolutional reinforcement learning mechanism based on these two graphs that allows each agent to communicate with its neighbors while all agents cooperate at a high level. Experiments on large-scale multi-agent scenarios in StarCraft II show that our proposed method outperforms state-of-the-art algorithms and allows agents to learn to cooperate efficiently.
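The abstract's core idea, combining a localized agent graph (neighbor communication) with a global agent graph (all-agent cooperation) via graph convolution, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the k-nearest-neighbor construction, the tanh activation, the weight shapes, and the concatenation of the two embeddings are all assumptions made for the example.

```python
import numpy as np

def normalize_adj(adj):
    """Row-normalize an adjacency matrix after adding self-loops."""
    adj = adj + np.eye(adj.shape[0])
    return adj / adj.sum(axis=1, keepdims=True)

def graph_conv(features, adj, weight):
    """One graph-convolution layer: aggregate over the graph, then project."""
    return np.tanh(normalize_adj(adj) @ features @ weight)

rng = np.random.default_rng(0)
n_agents, feat_dim, hid_dim = 6, 4, 8
features = rng.normal(size=(n_agents, feat_dim))     # per-agent observations
positions = rng.normal(size=(n_agents, 2))           # hypothetical 2-D positions

# Localized agent graph: connect each agent to its 2 nearest neighbors.
dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
local_adj = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in np.argsort(dists[i])[1:3]:              # index 0 is the agent itself
        local_adj[i, j] = local_adj[j, i] = 1.0

# Global agent graph: every agent is connected to every other agent.
global_adj = np.ones((n_agents, n_agents)) - np.eye(n_agents)

# Convolve over both graphs and concatenate the resulting embeddings,
# giving each agent both localized and global cooperation information.
w_local = rng.normal(size=(feat_dim, hid_dim))
w_global = rng.normal(size=(feat_dim, hid_dim))
h_local = graph_conv(features, local_adj, w_local)
h_global = graph_conv(features, global_adj, w_global)
h = np.concatenate([h_local, h_global], axis=-1)     # shape (n_agents, 2 * hid_dim)
print(h.shape)  # (6, 16)
```

In a full MARL pipeline, the per-agent embedding `h` would feed a policy or Q-value head; here the point is only how the two graphs give each agent both neighborhood-level and system-level context.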