Paper ID: MLSP-8.3
Paper Title: SEQUENCE-LEVEL SELF-TEACHING REGULARIZATION
Authors: Eric Sun, Liang Lu, Zhong Meng, Yifan Gong, Microsoft Corporation, United States
Session: MLSP-8: Learning
Location: Gather.Town
Session Time: Tuesday, 08 June, 16:30 - 17:15
Presentation Time: Tuesday, 08 June, 16:30 - 17:15
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-SSUP] Self-supervised and semi-supervised learning
Abstract:
In our previous research, we proposed frame-level self-teaching networks to regularize deep neural networks during training. In this paper, we extend that approach and propose a sequence-level self-teaching network to regularize sequence-level information in speech recognition. The idea is to generate sequence-level soft supervision labels from the top layer of the network to supervise the training of the lower-layer parameters. The network is trained with an auxiliary criterion that reduces the sequence-level Kullback-Leibler (KL) divergence between the top layer and the lower layers, where the posterior probabilities in the KL-divergence term are computed from a lattice at the sequence level. We evaluated the sequence-level self-teaching regularization approach with bidirectional long short-term memory (BLSTM) models on the LibriSpeech task, and show consistent improvements over a sequence-discriminative maximum mutual information (MMI) trained baseline.
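The core mechanism described in the abstract — soft labels from the top layer supervising lower layers through an auxiliary KL term — can be illustrated with a minimal frame-level sketch. This is an assumption-laden simplification: the function and variable names below are hypothetical, plain numpy arrays stand in for network activations, and the paper's sequence-level version computes the posteriors from a lattice under the MMI objective, which this sketch omits entirely.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q), summed over classes, averaged over frames.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def self_teaching_loss(main_loss, top_logits, lower_logits, alpha=0.1):
    # The top layer's posteriors serve as soft "teacher" labels for an
    # auxiliary classifier attached to a lower layer; in a real training
    # setup the teacher term would be detached from the gradient.
    teacher = softmax(top_logits)
    student = softmax(lower_logits)
    return main_loss + alpha * kl_divergence(teacher, student)
```

Since KL divergence is non-negative, the auxiliary term can only add to the main criterion, and it vanishes when the lower layer's posteriors match the top layer's — the regularizer pulls intermediate representations toward the final prediction.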