Paper ID | MLSP-25.1
Paper Title | Cooperative Scenarios for Multi-Agent Reinforcement Learning in Wireless Edge Caching
Authors | Navneet Garg, Tharmalingam Ratnarajah; University of Edinburgh, United Kingdom
Session | MLSP-25: Reinforcement Learning 1
Location | Gather.Town
Session Time | Thursday, 10 June, 13:00 - 13:45
Presentation Time | Thursday, 10 June, 13:00 - 13:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-REI] Reinforcement Learning
Abstract |
Wireless edge caching is an important strategy for meeting the demands of next-generation wireless systems. Recent studies have shown that, in a network of small base stations (SBSs), joint content placement via reinforcement learning improves cache-hit performance, since content requests are correlated across SBSs and files. In this paper, we investigate multi-agent reinforcement learning (MARL) and identify four cooperation scenarios: full cooperation (S1), episodic cooperation (S2), distributed cooperation (S3), and independent operation (no cooperation). MARL algorithms are presented for each scenario. Simulation results for averaged normalized cache hits show that cooperation with a single neighbor (S3) significantly improves performance, bringing it close to that of full cooperation (S1). Scenario S2 highlights the importance of frequent cooperation when the level of cooperation is high, which in turn depends on the number of SBSs.
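The four cooperation scenarios named in the abstract differ in what information each SBS agent can see when choosing its cache contents. The toy sketch below illustrates that structural difference only: it replaces the paper's MARL algorithms with a simple request-frequency heuristic, and all parameters (number of SBSs, library size, cache size, Zipf-like popularity, sync period) are illustrative assumptions, not values from the paper.

```python
import random

def simulate(num_sbs=4, library=50, cache=5, steps=2000,
             scenario="S1", episode=100, seed=0):
    """Toy cache-hit comparison of the cooperation scenarios.

    Agents rank files by request counts (a stand-in for the paper's
    MARL policies); the scenario controls which counts an agent sees:
    S1 = all SBSs always, S2 = all SBSs but only at episodic syncs,
    S3 = own counts plus one neighbor, S4 = own counts only.
    """
    rng = random.Random(seed)
    # Zipf-like popularity shared by all SBSs, so requests are correlated.
    weights = [1.0 / (f + 1) for f in range(library)]
    counts = [[0] * library for _ in range(num_sbs)]  # per-SBS request counts
    shared = [0] * library                            # episodic snapshot (S2)
    hits = total = 0
    for t in range(steps):
        if t % episode == 0:  # periodic exchange used by scenario S2
            shared = [sum(c[j] for c in counts) for j in range(library)]
        for i in range(num_sbs):
            f = rng.choices(range(library), weights=weights)[0]
            if scenario == "S1":    # full cooperation: all counts, every step
                know = [sum(c[j] for c in counts) for j in range(library)]
            elif scenario == "S2":  # episodic: last synchronized snapshot
                know = shared
            elif scenario == "S3":  # distributed: one neighbor's counts
                n = (i + 1) % num_sbs
                know = [counts[i][j] + counts[n][j] for j in range(library)]
            else:                   # independent operation (no cooperation)
                know = counts[i]
            # cache the top-C files according to this agent's knowledge
            cached = sorted(range(library), key=lambda j: -know[j])[:cache]
            hits += f in cached
            total += 1
            counts[i][f] += 1
    return hits / total  # normalized cache-hit rate in [0, 1]
```

Under this toy model, richer information sharing lets an agent's popularity estimate converge faster, which is the intuition behind S3 approaching S1 in the paper's results.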