Paper ID | SS-3.3 |
Paper Title | An Actor-Critic Reinforcement Learning Approach to Minimum Age of Information Scheduling in Energy Harvesting Networks |
Authors | Shiyang Leng, The Pennsylvania State University, United States; Aylin Yener, The Ohio State University, United States |
Session | SS-3: Machine Learning in Wireless Networks |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation | Poster |
Topic | Special Sessions: Machine Learning in Wireless Networks |
IEEE Xplore Open Preview | Click here to view in IEEE Xplore |
Virtual Presentation | Click here to watch in the Virtual Conference |
Abstract | We study age of information (AoI) minimization in a network consisting of energy harvesting transmitters that are scheduled to send status updates to their intended receivers. We consider the user scheduling problem over a communication session. To solve online user scheduling with causal knowledge of the system state, we formulate an infinite-state Markov decision problem and adopt model-free on-policy deep reinforcement learning (DRL), where the actor-critic algorithm with deep neural network function approximation is implemented. Comparable AoI to the offline optimal is demonstrated, verifying the efficacy of learning for AoI-focused scheduling and resource allocation problems in wireless networks. |
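The actor-critic approach described in the abstract can be sketched on a toy version of the problem. Everything below is an assumption made for illustration, not the paper's actual model or implementation: the environment (`AoIEnv`, with Bernoulli energy arrivals and unit batteries) drastically simplifies the energy-harvesting network, and linear function approximation stands in for the deep neural networks used in the paper.

```python
import numpy as np

# Toy energy-harvesting AoI scheduling environment (illustrative only, not the
# paper's model): N transmitters, one scheduled per slot. A scheduled user with
# stored energy delivers an update (its AoI resets to 1); every other AoI grows
# by one. Energy arrives at each user as an independent Bernoulli process, and
# batteries hold at most one unit.
class AoIEnv:
    def __init__(self, n_users=3, harvest_prob=0.6, seed=0):
        self.n = n_users
        self.p = harvest_prob
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.aoi = np.ones(self.n)
        self.energy = np.ones(self.n)
        return self._state()

    def _state(self):
        # Scale features so the linear value estimates stay well-conditioned.
        return np.concatenate([self.aoi, self.energy]) / 10.0

    def step(self, user):
        if self.energy[user] > 0:
            self.energy[user] -= 1
            self.aoi += 1.0
            self.aoi[user] = 1.0       # successful update: age resets
        else:
            self.aoi += 1.0            # no energy: all ages keep growing
        arrivals = self.rng.binomial(1, self.p, self.n)
        self.energy = np.minimum(self.energy + arrivals, 1.0)
        return self._state(), -self.aoi.mean()   # reward = negative average AoI

def train(env, episodes=200, horizon=100, alpha_c=0.005, alpha_a=0.002, gamma=0.95):
    """One-step actor-critic with a softmax policy over users and linear
    function approximation (a lightweight stand-in for deep networks)."""
    d = 2 * env.n
    w = np.zeros(d)                  # critic weights: V(s) ~ w . s
    theta = np.zeros((env.n, d))     # actor weights: logits = theta @ s
    rng = np.random.default_rng(1)
    avg_rewards = []
    for _ in range(episodes):
        s = env.reset()
        total = 0.0
        for _ in range(horizon):
            logits = theta @ s
            logits -= logits.max()                   # numerical stability
            pi = np.exp(logits) / np.exp(logits).sum()
            a = rng.choice(env.n, p=pi)              # sample a user to schedule
            s2, r = env.step(a)
            total += r
            delta = r + gamma * (w @ s2) - (w @ s)   # TD error
            w += alpha_c * delta * s                 # critic update
            grad = -np.outer(pi, s)                  # d log pi(a|s) / d theta
            grad[a] += s
            theta += alpha_a * delta * grad          # actor update
            s = s2
        avg_rewards.append(total / horizon)
    return avg_rewards
```

The key structural point, shared with the paper's method, is that a single TD error from the critic drives both the value update and the policy-gradient update, so the scheduler learns online from causal state observations only.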