Paper ID | SPE-32.3 |
Paper Title | PRE-TRAINING TRANSFORMER DECODER FOR END-TO-END ASR MODEL WITH UNPAIRED TEXT DATA |
Authors | Changfeng Gao, Gaofeng Cheng, Runyan Yang, Han Zhu, Pengyuan Zhang, Yonghong Yan, Key Laboratory of Speech Acoustics and Content Understanding, China |
Session | SPE-32: Speech Recognition 12: Self-supervised, Semi-supervised, Unsupervised Training |
Location | Gather.Town |
Session Time | Thursday, 10 June, 13:00 - 13:45 |
Presentation Time | Thursday, 10 June, 13:00 - 13:45 |
Presentation | Poster |
Topic | Speech Processing: [SPE-GASR] General Topics in Speech Recognition |
Abstract |
This paper presents a method to pre-train transformer-based encoder-decoder automatic speech recognition (ASR) models on sufficient target-domain text. During pre-training, we train the transformer decoder as a conditional language model conditioned on empty or artificial states rather than on real encoder states. With this pre-training strategy, the decoder learns to generate grammatical text sequences before learning to generate correct transcriptions. In contrast to other methods that utilize text-only data to improve ASR performance, our method neither changes the network architecture of the ASR model nor introduces extra components such as text-to-speech (TTS) or text-to-encoder (TTE). Experimental results on the LibriSpeech corpus show that the proposed method reduces the word error rate by over 10% relative, using the transcriptions of the 960 hours of training data. |
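The core idea in the abstract can be sketched in PyTorch: the transformer decoder is trained as a conditional language model on unpaired text, with an artificial (here, all-zero) memory tensor standing in for the encoder states it will later receive from acoustics. This is an illustrative sketch under assumed shapes and hyperparameters, not the paper's implementation; all names (`lm_pretrain_step`, `mem_len`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the paper's actual model configuration may differ.
vocab_size, d_model, mem_len = 1000, 64, 4

embed = nn.Embedding(vocab_size, d_model)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
proj = nn.Linear(d_model, vocab_size)

def lm_pretrain_step(tokens):
    """One decoder pre-training step on unpaired text (hypothetical helper)."""
    batch, seq_len = tokens.shape
    # Artificial encoder states: zeros stand in for the missing acoustics,
    # so cross-attention layers are exercised without any speech input.
    memory = torch.zeros(batch, mem_len, d_model)
    # Causal mask: each position attends only to earlier tokens.
    causal = nn.Transformer.generate_square_subsequent_mask(seq_len)
    hidden = decoder(embed(tokens), memory, tgt_mask=causal)
    logits = proj(hidden)
    # Shifted cross-entropy: predict token t+1 from tokens up to t,
    # i.e. an ordinary language-modeling objective.
    return nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, vocab_size),
        tokens[:, 1:].reshape(-1),
    )

tokens = torch.randint(0, vocab_size, (2, 8))  # a toy batch of text tokens
loss = lm_pretrain_step(tokens)
loss.backward()  # gradients reach only decoder-side parameters
```

Because the pre-trained parameters are exactly the decoder's own, they can later be loaded unchanged into the full encoder-decoder ASR model, which is consistent with the abstract's claim that no architectural change or extra component is needed.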