Paper ID | SPE-48.5
Paper Title | META-ADAPTER: EFFICIENT CROSS-LINGUAL ADAPTATION WITH META-LEARNING
Authors | Wenxin Hou, Yidong Wang, Shengzhou Gao, Takahiro Shinozaki, Tokyo Institute of Technology, Japan
Session | SPE-48: Speech Recognition 18: Low Resource ASR
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Speech Processing: [SPE-GASR] General Topics in Speech Recognition
Abstract | Transfer learning from a multilingual model has shown favorable results on low-resource automatic speech recognition (ASR). However, full-model fine-tuning produces a separate model for every target language, which is impractical to deploy and maintain in production. The key challenge is how to extend the pre-trained model efficiently, with few additional parameters. In this paper, we propose to combine adapter modules with meta-learning algorithms to achieve high recognition performance under low-resource settings while improving the parameter efficiency of the model. Extensive experiments show that our methods achieve recognition rates comparable to, or even better than, state-of-the-art baselines on low-resource languages, especially under very-low-resource conditions, with a significantly smaller model footprint.
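The parameter-efficiency argument in the abstract rests on the adapter idea: instead of fine-tuning the full pre-trained model per language, a small bottleneck module is inserted and only it is trained. The sketch below is not the authors' code; it is a minimal NumPy illustration of a generic bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection), with hypothetical dimensions (`d_model=256`, `bottleneck=32`) and a common near-identity initialization chosen for the example.

```python
import numpy as np

def init_adapter(d_model, bottleneck, seed=0):
    """Initialize a bottleneck adapter; W_up starts at zero so the
    adapter initially acts as the identity (a common choice)."""
    rng = np.random.default_rng(seed)
    return {
        "W_down": rng.normal(0.0, 0.02, (d_model, bottleneck)),
        "b_down": np.zeros(bottleneck),
        "W_up": np.zeros((bottleneck, d_model)),
        "b_up": np.zeros(d_model),
    }

def adapter_forward(x, p):
    """Down-project, ReLU, up-project, then add the residual input."""
    h = np.maximum(x @ p["W_down"] + p["b_down"], 0.0)
    return x + h @ p["W_up"] + p["b_up"]

d, r = 256, 32                      # hypothetical hidden / bottleneck sizes
p = init_adapter(d, r)
x = np.ones((4, d))                 # a dummy batch of frame features
y = adapter_forward(x, p)

n_adapter = sum(w.size for w in p.values())
n_full = d * d                      # one full d x d weight matrix, for scale
# The adapter adds ~16.7k trainable parameters vs 65.5k for a full layer,
# so each target language stores only the small adapter, not a full model.
print(y.shape, n_adapter, n_full)
```

Per-language deployment then amounts to swapping in a language-specific adapter dictionary while the frozen multilingual backbone is shared, which is the storage saving the abstract refers to; the meta-learning component of the paper additionally learns a good adapter initialization, which this toy sketch does not model.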