Paper ID | HLT-7.5
Paper Title | STREAMING SIMULTANEOUS SPEECH TRANSLATION WITH AUGMENTED MEMORY TRANSFORMER
Authors | Xutai Ma, Johns Hopkins University, United States; Yongqiang Wang, Mohammad Dousti, Facebook, United States; Philipp Koehn, Johns Hopkins University, United States; Juan Pino, Facebook, United States
Session | HLT-7: Speech Translation 1: Models
Location | Gather.Town
Session Time | Wednesday, 09 June, 14:00 - 14:45
Presentation Time | Wednesday, 09 June, 14:00 - 14:45
Presentation | Poster
Topic | Human Language Technology: [HLT-MTSW] Machine Translation for Spoken and Written Language
Abstract | Transformer-based models have achieved state-of-the-art performance on speech translation tasks. However, the model architecture is not efficient enough for streaming scenarios, since self-attention is computed over the entire input sequence and the computational cost grows quadratically with the input length. Moreover, most previous work on simultaneous speech translation, the task of generating translations from partial audio input, ignores the time spent generating the translation when analyzing latency. Under this assumption, a system may show good latency-quality trade-offs yet be inapplicable in real-time scenarios. In this paper, we focus on the task of streaming simultaneous speech translation, where the system must not only translate from partial input but also handle very long or continuous input. We propose an end-to-end transformer-based sequence-to-sequence model equipped with an augmented memory transformer encoder, which has shown great success on the streaming automatic speech recognition task with hybrid or transducer-based models. We conduct an empirical evaluation of the proposed model across segment, context, and memory sizes, and we compare our approach to a transformer with a unidirectional mask.
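To make the encoder idea concrete, the sketch below illustrates segment-level processing with a bounded memory bank: each incoming segment is encoded together with left/right context frames, attention keys and values include a small bank of summary vectors from earlier segments, and a new summary vector is appended to the bank. This is a minimal illustration, not the authors' implementation; the class name, the mean-pooled summary query, and the sizes (segment_size, left_context, right_context, memory_size) are assumptions for exposition, and PyTorch's nn.MultiheadAttention stands in for the paper's attention layers.

```python
# Hedged sketch of an augmented-memory style encoder layer (illustrative only).
import torch
import torch.nn as nn


class AugmentedMemoryAttentionSketch(nn.Module):
    """Attends within a segment (plus context) and over a bounded memory bank."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, memory_size: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.memory_size = memory_size  # how many past summary vectors are kept

    def forward(self, streaming_input: torch.Tensor,
                segment_size: int = 16,
                left_context: int = 8,
                right_context: int = 8) -> torch.Tensor:
        # streaming_input: (batch, time, d_model); processed segment by segment.
        outputs, memory_bank = [], []
        T = streaming_input.size(1)
        for start in range(0, T, segment_size):
            # Current segment plus its left/right context frames.
            lo = max(0, start - left_context)
            hi = min(T, start + segment_size + right_context)
            block = streaming_input[:, lo:hi]                 # (B, block_len, D)
            # Summary query (stand-in for the paper's summary vector): mean of the block.
            summary = block.mean(dim=1, keepdim=True)         # (B, 1, D)
            # Keys/values: bounded memory bank of past summaries + current block.
            mem = (torch.cat(memory_bank[-self.memory_size:] + [block], dim=1)
                   if memory_bank else block)
            queries = torch.cat([block, summary], dim=1)
            attended, _ = self.attn(queries, mem, mem)
            seg_out, new_memory = attended[:, :-1], attended[:, -1:]
            # Keep only positions belonging to the current segment (drop context).
            seg_start = start - lo
            outputs.append(seg_out[:, seg_start:seg_start + segment_size])
            memory_bank.append(new_memory)                    # grow the memory bank
        return torch.cat(outputs, dim=1)


if __name__ == "__main__":
    x = torch.randn(2, 64, 256)              # toy batch: 64 frames, 256-dim features
    enc = AugmentedMemoryAttentionSketch()
    print(enc(x).shape)                      # torch.Size([2, 64, 256])
```

Because attention for each segment is restricted to a fixed-size block plus a bounded memory bank, per-segment cost stays constant as the stream grows, which is the property the abstract contrasts with the quadratic cost of full self-attention.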