Paper ID | HLT-5.2
Paper Title | ACOUSTICS BASED INTENT RECOGNITION USING DISCOVERED PHONETIC UNITS FOR LOW RESOURCE LANGUAGES
Authors | Akshat Gupta, Xinjian Li, SaiKrishna Rallabandi, Alan Black, Carnegie Mellon University, United States
Session | HLT-5: Language Understanding 1: End-to-end Speech Understanding 1
Location | Gather.Town
Session Time | Wednesday, 09 June, 13:00 - 13:45
Presentation Time | Wednesday, 09 June, 13:00 - 13:45
Presentation | Poster
Topic | Human Language Technology: [HLT-UNDE] Spoken Language Understanding and Computational Semantics
Abstract | With recent advancements in language technologies, humans are now speaking to devices. Increasing the reach of spoken language technologies requires building systems in local languages. A major bottleneck here is the underlying data-intensive components that make up such systems, including automatic speech recognition (ASR) systems that require large amounts of labelled data. With the aim of aiding the development of spoken dialog systems in low-resource languages, we propose a novel acoustics-based intent recognition system that uses discovered phonetic units for intent classification. The system is made up of two blocks: the first block is a universal phone recognition system that generates a transcript of discovered phonetic units for the input audio, and the second block performs intent classification from the generated phonetic transcripts. We propose a CNN+LSTM based architecture and present results for two language families, Indic languages and Romance languages, on two different intent recognition tasks. We also perform multilingual training of our intent classifier and show improved cross-lingual transfer and zero-shot performance on an unknown language within the same language family.
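To make the two-block pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of what the second block, a CNN+LSTM intent classifier operating on discovered phonetic-unit IDs, could look like. This is not the authors' implementation: the class name, vocabulary size, layer sizes, and number of intents are illustrative assumptions.

```python
# Hypothetical sketch of the second block: a CNN+LSTM intent classifier over
# discovered phonetic units. All hyperparameters below are placeholders, not
# the values reported in the paper.
import torch
import torch.nn as nn


class PhoneticIntentClassifier(nn.Module):
    def __init__(self, num_phones=128, embed_dim=64, conv_channels=128,
                 lstm_hidden=128, num_intents=10):
        super().__init__()
        # Embed each discovered phonetic unit produced by the universal phone recognizer.
        self.embedding = nn.Embedding(num_phones, embed_dim)
        # 1-D convolution captures local (n-gram-like) phonetic patterns.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        # Bidirectional LSTM models longer-range dependencies over the phone sequence.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_intents)

    def forward(self, phone_ids):
        # phone_ids: (batch, seq_len) integer IDs of discovered phonetic units
        x = self.embedding(phone_ids)            # (batch, seq_len, embed_dim)
        x = self.conv(x.transpose(1, 2))         # (batch, conv_channels, seq_len)
        x = self.relu(x).transpose(1, 2)         # (batch, seq_len, conv_channels)
        _, (h_n, _) = self.lstm(x)               # h_n: (2, batch, lstm_hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)  # concatenate both LSTM directions
        return self.classifier(h)                # (batch, num_intents) intent logits


# Usage example: classify a batch of two phonetic-unit sequences of length 20.
model = PhoneticIntentClassifier()
dummy_phones = torch.randint(0, 128, (2, 20))
logits = model(dummy_phones)
print(logits.shape)  # torch.Size([2, 10])
```

In such a setup, the first block (the universal phone recognizer) would supply the integer phone-ID sequences, and the same classifier could be trained on pooled data from several languages in a family to obtain the multilingual and zero-shot behaviour the abstract reports.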