Paper ID | MLSP-40.6
Paper Title | ON SCALING CONTRASTIVE REPRESENTATIONS FOR LOW-RESOURCE SPEECH RECOGNITION
Authors | Lasse Borgholt, University of Copenhagen, Denmark; Tycho M. S. Tax, Independent researcher (no affiliation), Denmark; Jakob D. Havtorn, Lars Maaløe, Corti, Denmark; Christian Igel, University of Copenhagen, Denmark
Session | MLSP-40: Contrastive Learning
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-SSUP] Self-supervised and semi-supervised learning
Abstract | Recent advances in self-supervised learning through contrastive training have shown that it is possible to learn a competitive speech recognition system with as little as 10 minutes of labeled data. However, these systems are computationally expensive, since they require pre-training followed by fine-tuning in a large parameter space. We explore the performance of such systems without fine-tuning by training a state-of-the-art speech recognizer on the fixed representations from the computationally demanding wav2vec 2.0 framework. We find that performance decreases without fine-tuning and that, in the extreme low-resource setting, wav2vec 2.0 is inferior to its predecessor. In addition, we find that wav2vec 2.0 representations live in a low-dimensional subspace and that decorrelating the features of the representations can stabilize training of the automatic speech recognizer. Finally, we propose a bidirectional extension to the original wav2vec framework that consistently improves performance.