Paper ID: SS-3.1
Paper Title: SCALABLE REINFORCEMENT LEARNING FOR ROUTING IN AD-HOC NETWORKS BASED ON PHYSICAL-LAYER ATTRIBUTES
Authors: Wei Cui, Wei Yu, University of Toronto, Canada
Session: SS-3: Machine Learning in Wireless Networks
Location: Gather.Town
Session Time: Tuesday, 08 June, 14:00 - 14:45
Presentation Time: Tuesday, 08 June, 14:00 - 14:45
Presentation: Poster
Topic: Special Sessions: Machine Learning in Wireless Networks
Abstract:
This work proposes a novel and scalable reinforcement learning approach for routing in ad-hoc wireless networks. Most previous reinforcement-learning-based routing methods assume that the links in the network are fixed and train a different agent for each transmission node, which limits scalability and generalizability. In this paper, we account for the inherent signal-to-interference-plus-noise ratio (SINR) in the physical layer and propose a more scalable approach in which a single agent is associated with each flow and is trained using a novel reward definition and according to the physical-layer characteristics of the environment. This allows a highly effective routing strategy based on the geographic locations of the nodes in the ad-hoc network. The proposed deep reinforcement learning strategy accounts for the mutual interference between the links and produces highly effective routing solutions over the entire network in a scalable manner.
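The abstract's key physical-layer quantity, the SINR at each hop under mutual interference from concurrently active links, can be sketched as follows. This is a minimal illustration under assumed simplifications (a pure distance-based path-loss model, equal transmit powers, and a log-rate reward term); the function and parameter names are hypothetical and the paper's actual channel model and reward definition may differ.

```python
import numpy as np

def link_sinr(positions, tx_idx, rx_idx, active_tx, p_tx=1.0,
              noise=1e-9, pathloss_exp=3.0):
    """SINR at receiver rx_idx for transmitter tx_idx, where all other
    currently transmitting nodes in active_tx contribute interference.
    Channel gains follow a simple distance^(-pathloss_exp) model computed
    from the nodes' geographic positions (an assumption for illustration)."""
    def gain(a, b):
        d = np.linalg.norm(positions[a] - positions[b])
        return d ** (-pathloss_exp)

    signal = p_tx * gain(tx_idx, rx_idx)
    interference = sum(p_tx * gain(j, rx_idx)
                       for j in active_tx if j != tx_idx)
    return signal / (interference + noise)

# Toy example: 4 nodes, two simultaneous transmissions (0 -> 1 and 2 -> 3).
positions = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 50.0], [10.0, 50.0]])
sinr_01 = link_sinr(positions, tx_idx=0, rx_idx=1, active_tx=[0, 2])
rate_01 = np.log2(1.0 + sinr_01)  # illustrative per-hop reward term
```

A per-flow agent could use such a per-hop rate as one component of its reward when choosing the next relay, so that routes avoid hops whose SINR is degraded by interference from other flows.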