Paper ID | MLSP-3.6
Paper Title | A ReLU Dense Layer to Improve the Performance of Neural Networks
Authors | Alireza M. Javid, Sandipan Das, Mikael Skoglund, Saikat Chatterjee, KTH Royal Institute of Technology, Sweden
Session | MLSP-3: Deep Learning Training Methods 3
Location | Gather.Town
Session Time | Tuesday, 08 June, 13:00 - 13:45
Presentation Time | Tuesday, 08 June, 13:00 - 13:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques
Abstract | We propose ReDense as a simple and low-complexity way to improve the performance of trained neural networks. We use a combination of random weights and the rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to a trained neural network so that it achieves a lower training loss. The lossless flow property (LFP) of ReLU is key to achieving the lower training loss while keeping the generalization error small. ReDense does not suffer from the vanishing gradient problem during training owing to its shallow structure. We experimentally show that ReDense can improve the training and testing performance of various neural network architectures with different loss and activation functions. Finally, we apply ReDense to some state-of-the-art architectures and show the performance improvement on benchmark datasets.
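The abstract describes ReDense as a shallow add-on: a dense layer with random weights followed by ReLU is appended to an already-trained network, and a new output layer is then learned to reduce the training loss. The sketch below illustrates that construction in PyTorch; it is a minimal, hedged interpretation of the abstract, not the authors' implementation. The hidden width `redense_width`, the choice to freeze both the base network and the random-weight layer, and the use of the trained network's outputs as input features are illustrative assumptions.

```python
# Minimal PyTorch sketch of the ReDense idea: a fixed random dense layer + ReLU
# on top of a trained network, followed by a new learnable output layer.
# Widths, freezing choices, and the retraining setup are assumptions for illustration.
import torch
import torch.nn as nn


class ReDense(nn.Module):
    def __init__(self, trained_net: nn.Module, in_dim: int, num_classes: int,
                 redense_width: int = 512):
        super().__init__()
        self.trained_net = trained_net
        for p in self.trained_net.parameters():
            p.requires_grad = False          # keep the pre-trained network fixed (assumption)
        self.random_layer = nn.Linear(in_dim, redense_width)
        for p in self.random_layer.parameters():
            p.requires_grad = False          # random weights are drawn once and not trained (assumption)
        self.relu = nn.ReLU()
        self.out = nn.Linear(redense_width, num_classes)  # only this layer is learned

    def forward(self, x):
        z = self.trained_net(x)              # outputs of the already-trained network
        h = self.relu(self.random_layer(z))  # ReLU dense layer with random weights
        return self.out(h)


# Usage sketch: wrap a trained classifier and fine-tune only the new output layer.
# base_net = ...  # a trained model producing `in_dim`-dimensional outputs
# model = ReDense(base_net, in_dim=10, num_classes=10)
# optimizer = torch.optim.Adam(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```

Because only the shallow added layers sit on top of the trained network, gradients for the new output layer pass through at most one ReLU, which is consistent with the abstract's claim that ReDense avoids the vanishing gradient problem.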