Paper ID | SPTM-22.4 |
Paper Title | TRAINING LOGICAL NEURAL NETWORKS BY PRIMAL–DUAL METHODS FOR NEURO-SYMBOLIC REASONING |
Authors | Songtao Lu, Naweed Khan, Ismail Akhalwaya, Ryan Riegel, Lior Horesh, Alexander Gray, IBM Research, United States |
Session | SPTM-22: Signal Processing Theory and Methods |
Location | Gather.Town |
Session Time | Friday, 11 June, 13:00 - 13:45 |
Presentation Time | Friday, 11 June, 13:00 - 13:45 |
Presentation | Poster |
Topic | Signal Processing Theory and Methods: [OPT] Optimization Methods for Signal Processing |
Abstract | Parametrized machine learning models for inference often include nonlinear and nonconvex constraints over the parameters and meta-parameters. Training these models to convergence is difficult in general, and naive methods such as projected gradient descent or grid search cannot easily enforce the functional constraints. This work explores the optimization of a constrained neural network, familiar from machine learning but with constraints imposed on its parameters, in the service of neuro-symbolic logical reasoning. Logical Neural Networks (LNNs) provide a well-justified, interpretable example of training under non-trivial constraints. In this paper, we propose a unified framework for solving this nonlinear programming problem by leveraging primal-dual optimization methods, and we quantify the corresponding convergence rate to the Karush-Kuhn-Tucker (KKT) points of this problem. Extensive numerical results, on both a toy example and the training of an LNN over real datasets, validate the efficacy of the method. |
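To make the primal-dual idea in the abstract concrete, the sketch below runs gradient descent on the primal variable and projected gradient ascent on the dual multiplier of a Lagrangian for a one-dimensional toy constrained problem. This is a minimal illustration of the general technique under assumed step sizes and an assumed toy objective; it is not the paper's LNN training algorithm, and all names here are hypothetical.

```python
# Minimal primal-dual sketch (illustrative, not the paper's algorithm):
#     minimize f(x) = (x - 2)^2   subject to   g(x) = x - 1 <= 0
# The KKT point is x* = 1 with multiplier lam* = 2.

def f_grad(x):
    """Gradient of the objective f(x) = (x - 2)^2."""
    return 2.0 * (x - 2.0)

def g(x):
    """Inequality constraint; feasible when g(x) <= 0."""
    return x - 1.0

x, lam = 0.0, 0.0        # primal variable and dual multiplier
eta, rho = 0.05, 0.05    # primal and dual step sizes (assumed values)

for _ in range(2000):
    # Primal step: descend the Lagrangian L(x, lam) = f(x) + lam * g(x) in x.
    x -= eta * (f_grad(x) + lam * 1.0)   # gradient of g(x) = x - 1 is 1
    # Dual step: ascend L in lam, projecting onto lam >= 0 to keep dual feasibility.
    lam = max(0.0, lam + rho * g(x))

print(f"x = {x:.3f}, lam = {lam:.3f}")   # approaches the KKT point (1, 2)
```

The alternating update (the dual step uses the freshly updated primal iterate) and the max(0, .) projection on the multiplier are standard choices for saddle-point iterations on an inequality-constrained Lagrangian; with small step sizes the iterates settle at the KKT point of this toy problem.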