Paper ID | HLT-6.1
Paper Title | ST-BERT: CROSS-MODAL LANGUAGE MODEL PRE-TRAINING FOR END-TO-END SPOKEN LANGUAGE UNDERSTANDING
Authors | Minjeong Kim, Gyuwan Kim (NAVER CLOVA, South Korea); Sang-Woo Lee, Jung-Woo Ha (NAVER CLOVA, NAVER AI LAB, South Korea)
Session | HLT-6: Language Understanding 2: End-to-end Speech Understanding 2
Location | Gather.Town
Session Time | Wednesday, 09 June, 13:00 - 13:45
Presentation Time | Wednesday, 09 June, 13:00 - 13:45
Presentation | Poster
Topic | Human Language Technology: [HLT-UNDE] Spoken Language Understanding and Computational Semantics
Abstract | Language model pre-training has shown promising results in various downstream tasks. In this context, we introduce a cross-modal pre-trained language model, called Speech-Text BERT (ST-BERT), to tackle end-to-end spoken language understanding (E2E SLU) tasks. Taking phoneme posteriors and subword-level text as input, ST-BERT learns a contextualized cross-modal alignment via our two proposed pre-training tasks: Cross-modal Masked Language Modeling (CM-MLM) and Cross-modal Conditioned Language Modeling (CM-CLM). Experimental results on three benchmarks show that our approach is effective across various SLU datasets and exhibits surprisingly marginal performance degradation even when only 1% of the training data is available. Our method also achieves further SLU performance gains through domain-adaptive pre-training with domain-specific speech-text pair data.
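The sketch below (not the authors' code) illustrates the general shape of a cross-modal masked language modeling setup as described in the abstract: phoneme-posterior frames and subword tokens are embedded into a shared hidden space, concatenated, and a Transformer encoder is trained to recover masked subwords. All class names, dimensions, vocabulary sizes, and the masking rate are illustrative assumptions; the actual ST-BERT architecture and the CM-MLM/CM-CLM training details are defined in the paper itself.

```python
# Minimal sketch of a cross-modal masked LM over phoneme posteriors and
# subword tokens. Hypothetical names and hyperparameters throughout.
import torch
import torch.nn as nn


class CrossModalMLMSketch(nn.Module):
    def __init__(self, num_phonemes=70, vocab_size=30522, d_model=256,
                 nhead=4, num_layers=4, max_len=512):
        super().__init__()
        # Project each phoneme-posterior frame (a distribution over
        # phonemes) into the shared hidden space.
        self.phoneme_proj = nn.Linear(num_phonemes, d_model)
        # Standard subword embeddings for the text stream.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Modality embeddings distinguish speech frames from text tokens.
        self.modality_emb = nn.Embedding(2, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # MLM head scores the original subword id at each text position.
        self.mlm_head = nn.Linear(d_model, vocab_size)

    def forward(self, phoneme_posteriors, token_ids):
        # phoneme_posteriors: (batch, n_frames, num_phonemes)
        # token_ids:          (batch, n_tokens)
        speech = self.phoneme_proj(phoneme_posteriors)
        text = self.token_emb(token_ids)
        x = torch.cat([speech, text], dim=1)  # concatenate both modalities
        b, seq_len, _ = x.shape
        modality = torch.cat([
            torch.zeros(speech.size(1), dtype=torch.long),
            torch.ones(text.size(1), dtype=torch.long),
        ]).expand(b, -1)
        positions = torch.arange(seq_len).expand(b, -1)
        x = x + self.modality_emb(modality) + self.pos_emb(positions)
        h = self.encoder(x)
        # Predict masked subwords only at the text positions.
        return self.mlm_head(h[:, speech.size(1):, :])


if __name__ == "__main__":
    model = CrossModalMLMSketch()
    posteriors = torch.softmax(torch.randn(2, 50, 70), dim=-1)
    tokens = torch.randint(0, 30522, (2, 16))
    # Mask ~15% of subwords (illustrative rate); 103 is the [MASK] id in
    # the standard BERT vocabulary.
    mask = torch.rand(tokens.shape) < 0.15
    masked_tokens = tokens.masked_fill(mask, 103)
    logits = model(posteriors, masked_tokens)
    if mask.any():
        loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
        print(logits.shape, loss.item())
```

This covers only the masked-prediction (CM-MLM-style) case; a CM-CLM-style objective would, roughly, predict one modality conditioned on the other modality left unmasked, which is omitted here for brevity.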