Paper ID | HLT-14.4
Paper Title | Task Aware Multi-Task Learning for Speech to Text Tasks
Authors | Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Hyojung Han, Seokchan Ahn, Sangha Kim, Chanwoo Kim, Inchul Hwang, Samsung Electronics, South Korea
Session | HLT-14: Language Representations
Location | Gather.Town
Session Time | Thursday, 10 June, 14:00 - 14:45
Presentation Time | Thursday, 10 June, 14:00 - 14:45
Presentation | Poster
Topic | Human Language Technology: [HLT-MTSW] Machine Translation for Spoken and Written Language
Abstract | Direct speech-to-text translation (ST) models are commonly trained jointly with automatic speech recognition (ASR) and machine translation (MT) tasks. However, issues with current joint learning strategies inhibit knowledge transfer across these tasks. We propose a task modulation network that allows the model to learn task-specific features while simultaneously learning shared features. The proposed approach removes the need for a separate fine-tuning step, resulting in a single model that performs all of these tasks. This single model achieves a BLEU score of 28.88 on the MuST-C English-German ST task, a WER of 10.01% on the TEDLium v3 ASR task, and a BLEU score of 23.35 on the WMT'15 English-German MT task, setting a new state-of-the-art (SOTA) on the ST task while outperforming existing end-to-end ASR systems.
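The abstract does not specify how the task modulation network conditions shared features on the task identity. Below is a minimal, hypothetical sketch assuming a FiLM-style design, where a learned task embedding produces per-channel scale and shift parameters applied to shared encoder features; the names (`TaskModulation`, `task_embed`, `to_scale_shift`) and the mechanism itself are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a "task modulation" layer (NOT the paper's method):
# a learned task embedding produces per-channel scale (gamma) and shift (beta)
# that adapt shared features to each task (e.g., ASR, MT, ST).
import torch
import torch.nn as nn


class TaskModulation(nn.Module):
    """Modulates shared features with task-specific scale/shift parameters."""

    def __init__(self, d_model: int, num_tasks: int = 3):
        super().__init__()
        # One embedding per task, e.g., 0 = ASR, 1 = MT, 2 = ST.
        self.task_embed = nn.Embedding(num_tasks, d_model)
        # Project the task embedding into per-channel gamma and beta.
        self.to_scale_shift = nn.Linear(d_model, 2 * d_model)

    def forward(self, shared: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # shared: (batch, seq_len, d_model); task_id: (batch,)
        gamma, beta = self.to_scale_shift(self.task_embed(task_id)).chunk(2, dim=-1)
        # Broadcast over the sequence dimension: task-specific features are
        # computed on top of the shared representation.
        return shared * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)


if __name__ == "__main__":
    layer = TaskModulation(d_model=256, num_tasks=3)
    feats = torch.randn(4, 50, 256)    # shared encoder output
    task = torch.tensor([0, 1, 2, 2])  # per-example task label
    print(layer(feats, task).shape)    # torch.Size([4, 50, 256])
```

Under this kind of design, all three tasks share the encoder parameters while only the small modulation layer differs per task, which is consistent with the abstract's claim of a single model serving ASR, MT, and ST without separate fine-tuning.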