Paper ID | MLSP-27.6 |
Paper Title |
GRAPHCOMM: A GRAPH NEURAL NETWORK BASED METHOD FOR MULTI-AGENT REINFORCEMENT LEARNING |
Authors |
Siqi Shen, Xiamen University, China; Yongquan Fu, Huayou Su, Hengyue Pan, Qiao Peng, Yong Dou, National University of Defense Technology, China; Cheng Wang, Xiamen University, China |
Session | MLSP-27: Reinforcement Learning 3 |
Location | Gather.Town |
Session Time: | Thursday, 10 June, 13:00 - 13:45 |
Presentation Time: | Thursday, 10 June, 13:00 - 13:45 |
Presentation | Poster |
Topic |
Machine Learning for Signal Processing: [MLR-SLER] Sequential learning; sequential decision methods |
Abstract |
Communication among agents is important for Multi-Agent Reinforcement Learning (MARL). In this work, we propose GraphComm, a method that exploits the relationships among agents for MARL communication. GraphComm takes explicit relations (e.g., agent types), which can be provided as background knowledge, into account to better model the relationships among agents. Beyond explicit relations, GraphComm also considers implicit relations, which are formed through agent interactions. GraphComm uses Graph Neural Networks (GNNs) to model this relational information and to assist the learning of agent communication. Through extensive experimental evaluation, we show that GraphComm obtains better results than state-of-the-art methods on challenging StarCraft II unit micromanagement tasks. |
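The abstract describes message passing over a graph whose edges combine explicit relations (e.g., shared agent types) and implicit relations formed by agent interactions. The following is a minimal, hypothetical sketch of that idea, not the paper's actual architecture: all names are illustrative, the implicit relations are approximated here by dot-product attention between agent states, and a single untrained message-passing layer stands in for the full GNN.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def comm_layer(h, type_adj, W):
    """One relation-aware message-passing step (illustrative only).

    h        : (n_agents, d) agent hidden states
    type_adj : (n_agents, n_agents) explicit-relation mask (e.g., 1 if
               two agents share a type, 0 otherwise) -- an assumption
               about how explicit relations could be encoded
    W        : (d, d) shared message-transform weight
    """
    # Implicit relations: scaled dot-product attention between states.
    scores = h @ h.T / np.sqrt(h.shape[1])
    # Combine with explicit relations by masking out unrelated pairs.
    scores = np.where(type_adj > 0, scores, -1e9)
    attn = softmax(scores, axis=1)          # (n, n) edge weights
    messages = attn @ (h @ W)               # aggregate neighbour messages
    return np.tanh(h + messages)            # residual update of each agent

# Toy usage: 4 agents, 8-dim states, fully connected relation graph.
rng = np.random.default_rng(0)
n_agents, d = 4, 8
h = rng.standard_normal((n_agents, d))
type_adj = np.ones((n_agents, n_agents))
W = rng.standard_normal((d, d)) * 0.1
out = comm_layer(h, type_adj, W)
print(out.shape)  # (4, 8)
```

In a trained system the attention scores and `W` would be learned end-to-end with the policy; here they are random and serve only to show the data flow from relation graph to per-agent communication output.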