Paper ID | SPTM-11.1
Paper Title | VARIANCE-CONSTRAINED LEARNING FOR STOCHASTIC GRAPH NEURAL NETWORKS
Authors | Zhan Gao, University of Pennsylvania, United States; Elvin Isufi, Delft University of Technology, Netherlands; Alejandro Ribeiro, University of Pennsylvania, United States
Session | SPTM-11: Graph Neural Networks
Location | Gather.Town
Session Time | Wednesday, 09 June, 16:30 - 17:15
Presentation Time | Wednesday, 09 June, 16:30 - 17:15
Presentation | Poster
Topic | Signal Processing Theory and Methods: [SIPG] Signal and Information Processing over Graphs
IEEE Xplore Open Preview | Available in IEEE Xplore
Abstract | Stochastic graph neural networks (SGNNs) are information processing architectures that learn representations from data over random graphs. SGNNs are trained with respect to the expected performance, but this training carries no guarantee on how far individual output realizations deviate from the optimal mean. To overcome this issue, we propose a learning strategy for SGNNs based on a variance-constrained optimization problem, balancing the expected performance against the stochastic deviation. To handle the variance constraint in the stochastic optimization problem, training is carried out in the dual domain. We propose an alternating primal-dual learning algorithm that updates the primal variables (the SGNN parameters) with gradient descent and the dual variable with gradient ascent. We show that the stochastic deviation is explicitly controlled through the Chebyshev inequality and analyze the optimality loss induced by primal-dual learning. Numerical simulations show strong performance in expectation with a controllable deviation, corroborating the theoretical findings.
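The alternating primal-dual scheme sketched in the abstract — gradient descent on the model parameters, projected gradient ascent on the dual variable of a variance constraint — can be illustrated on a toy scalar problem. This is a minimal sketch of the generic technique, not the paper's SGNN implementation: the quadratic loss, the variance budget `b`, the step sizes, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def primal_dual(theta=0.0, mu=0.0, b=0.5, lr_p=0.05, lr_d=0.01,
                steps=500, n=256):
    """Minimize E[(theta - x)^2] subject to Var[(theta - x)^2] <= b,
    where x ~ N(1, 0.5) plays the role of the random-graph realization.
    Toy stand-in for the variance-constrained SGNN training problem."""
    for _ in range(steps):
        x = rng.normal(1.0, 0.5, size=n)       # random realizations
        losses = (theta - x) ** 2              # per-realization loss
        dl = 2 * (theta - x)                   # d loss / d theta, per sample

        # Lagrangian gradient wrt theta: d/dtheta [E[l] + mu * Var(l)],
        # using d Var(l)/d theta = 2 * Cov(l, dl/dtheta) (sample estimate).
        g_mean = dl.mean()
        g_var = 2.0 * np.cov(losses, dl)[0, 1]
        theta -= lr_p * (g_mean + mu * g_var)  # primal: gradient descent

        # Dual: gradient ascent on the constraint slack, projected to mu >= 0.
        mu = max(0.0, mu + lr_d * (losses.var() - b))
    return theta, mu

theta, mu = primal_dual()
```

When the variance constraint is slack, the multiplier `mu` decays toward zero and the update reduces to ordinary stochastic gradient descent on the expected loss; when the constraint is violated, `mu` grows and the variance term increasingly penalizes high-deviation parameter values, which mirrors the balance between expected performance and stochastic deviation described above.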