Paper ID | HLT-5.1 |
Paper Title |
END2END ACOUSTIC TO SEMANTIC TRANSDUCTION |
Authors |
Valentin Pelloin, Nathalie Camelin, Antoine Laurent, LIUM - Le Mans Université, France; Renato De Mori, LIA - Université d'Avignon, France; Antoine Caubrière, LIUM - Le Mans Université, France; Yannick Estève, LIA - Université d'Avignon, France; Sylvain Meignier, LIUM - Le Mans Université, France |
Session | HLT-5: Language Understanding 1: End-to-end Speech Understanding 1 |
Location | Gather.Town |
Session Time | Wednesday, 09 June, 13:00 - 13:45 |
Presentation Time | Wednesday, 09 June, 13:00 - 13:45 |
Presentation | Poster |
Topic |
Human Language Technology: [HLT-UNDE] Spoken Language Understanding and Computational Semantics |
Abstract |
In this paper, we propose a novel end-to-end sequence-to-sequence spoken language understanding model using an attention mechanism. It reliably selects contextual acoustic features in order to hypothesize semantic contents. An initial architecture capable of extracting all pronounced words and concepts from acoustic spans is designed and tested. With a shallow fusion language model, this system reaches a 13.6 concept error rate (CER) and an 18.5 concept value error rate (CVER) on the French MEDIA corpus, an absolute reduction of 2.8 points compared to the state of the art. An original model is then proposed for hypothesizing concepts and their values. This transduction reaches a 15.4 CER and a 21.6 CVER without any new type of context.
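The shallow fusion mentioned in the abstract can be illustrated with a minimal sketch: at each decoding step, the end-to-end model's token log-probabilities are combined with an external language model's log-probabilities, weighted by a fusion coefficient. This is a generic illustration of the technique, not the authors' implementation; the function name, the weight value, and the toy vocabulary below are all hypothetical.

```python
import math

def shallow_fusion_step(e2e_logprobs, lm_logprobs, lam=0.3):
    """Fuse per-token log-probs from the end-to-end model and an external
    LM via score(y) = log p_e2e(y) + lam * log p_lm(y); return the
    best-scoring token and the full fused score table."""
    fused = {tok: e2e_logprobs[tok] + lam * lm_logprobs.get(tok, -1e9)
             for tok in e2e_logprobs}
    best = max(fused, key=fused.get)
    return best, fused

# Hypothetical toy distributions over a 3-token vocabulary: the acoustic
# model slightly prefers "hotel", but the LM strongly prefers "motel".
e2e = {"hotel": math.log(0.5), "motel": math.log(0.4), "total": math.log(0.1)}
lm  = {"hotel": math.log(0.2), "motel": math.log(0.7), "total": math.log(0.1)}

token, scores = shallow_fusion_step(e2e, lm, lam=0.5)
# With lam=0.5 the LM evidence flips the decision to "motel".
```

In practice this fusion is applied inside beam search at every decoding step, and the weight `lam` is tuned on a development set.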