Paper ID | SPCOM-9.3
Paper Title | ADAPTIVE CONTENTION WINDOW DESIGN USING DEEP Q-LEARNING
Authors | Abhishek Kumar, Rice University, United States; Gunjan Verma, Chirag Rao, Ananthram Swami, US Army's CCDC Army Research Laboratory, United States; Santiago Segarra, Rice University, United States
Session | SPCOM-9: Online and Active Learning for Communications |
Location | Gather.Town |
Session Time | Friday, 11 June, 14:00 - 14:45
Presentation Time | Friday, 11 June, 14:00 - 14:45
Presentation | Poster
Topic | Signal Processing for Communications and Networking: [SPC-ML] Machine Learning for Communications
Abstract | We study the problem of adaptive contention window (CW) design for random-access wireless networks. More precisely, our goal is to design an intelligent node that can dynamically adapt its minimum CW (MCW) parameter to maximize a network-level utility, knowing neither the MCWs of other nodes nor how these change over time. To achieve this goal, we adopt a reinforcement learning (RL) framework in which we circumvent the lack of system knowledge with local channel observations and reward actions that lead to high utilities. To efficiently learn these preferred actions, we follow a deep Q-learning approach, where the Q-value function is parametrized using a multi-layer perceptron. In particular, we implement a Rainbow agent, which incorporates several empirical improvements over the basic deep Q-network. Numerical experiments based on the ns-3 simulator reveal that the proposed RL agent performs close to optimal and markedly improves upon existing learning-based and non-learning-based alternatives.
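For intuition, the sketch below (not the authors' code) illustrates the deep Q-learning setup the abstract describes: an MLP Q-network mapping local channel observations to Q-values over a discrete set of candidate MCWs, with epsilon-greedy action selection and a one-step temporal-difference update. The state features, action set, and hyperparameters are hypothetical placeholders, and the sketch omits the Rainbow improvements (e.g., double Q-learning, prioritized replay, dueling/distributional heads) used in the paper.

```python
# Illustrative sketch of a basic deep Q-learning agent for MCW adaptation.
# All names and values below are assumptions for illustration only.
import random
import torch
import torch.nn as nn

MCW_CANDIDATES = [15, 31, 63, 127, 255, 511, 1023]  # hypothetical discrete action set
STATE_DIM = 4  # hypothetical local observations, e.g., recent collision/idle fractions

class QNetwork(nn.Module):
    """Multi-layer perceptron parametrizing the Q-value function."""
    def __init__(self, state_dim, num_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state):
        return self.net(state)

class DQNAgent:
    def __init__(self, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(STATE_DIM, len(MCW_CANDIDATES))
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps

    def select_mcw(self, state):
        """Epsilon-greedy choice of the MCW index for the next adaptation interval."""
        if random.random() < self.eps:
            return random.randrange(len(MCW_CANDIDATES))
        with torch.no_grad():
            q_values = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax())

    def update(self, state, action, reward, next_state):
        """One-step temporal-difference update toward the Bellman target."""
        s = torch.as_tensor(state, dtype=torch.float32)
        s_next = torch.as_tensor(next_state, dtype=torch.float32)
        with torch.no_grad():
            target = reward + self.gamma * self.q(s_next).max()
        loss = (self.q(s)[action] - target) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In the paper's setting, the reward passed to update() would correspond to the network-level utility measured over the last interval, and the selected index maps to the MCW value (MCW_CANDIDATES[action]) applied to the node's backoff procedure.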