2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID: MLSP-40.3
Paper Title: CONTRASTIVE SEMI-SUPERVISED LEARNING FOR ASR
Authors: Alex Xiao, Christian Fuegen, Abdelrahman Mohamed, Facebook, United States
Session: MLSP-40: Contrastive Learning
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-SSUP] Self-supervised and semi-supervised learning
Abstract: Pseudo-labeling is the most widely adopted method for pre-training automatic speech recognition (ASR) models. However, its performance suffers as the quality of the supervised teacher model degrades. Inspired by the successes of contrastive representation learning for both computer vision and speech applications, and more recently for supervised learning of visual objects [1], we propose Contrastive Semi-supervised Learning (CSL). CSL eschews directly predicting teacher-generated pseudo-labels in favor of using them to select positive and negative examples. On the challenging task of transcribing public social media videos, CSL reduces WER by 8% compared to standard Cross-Entropy pseudo-labeling (CE-PL) when 10 hr of supervised data is used to annotate 75,000 hr of videos. The WER reduction jumps to 19% under the ultra-low-resource condition of using 1 hr of labels for teacher supervision. Under out-of-domain conditions, CSL generalizes much better, showing up to 17% WER reduction compared to the strongest CE-PL pre-trained model.
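
The abstract does not give the loss formulation, only the core idea: instead of training the student to predict the teacher's pseudo-labels with cross-entropy, the pseudo-labels are used to decide which pairs of representations count as positives and negatives in a contrastive objective. As a rough illustration of that idea, here is a minimal PyTorch sketch of a supervised-contrastive-style loss driven by teacher pseudo-labels. The function name, tensor shapes, and the specific loss form are assumptions for illustration, not the paper's exact method.

import torch
import torch.nn.functional as F

def pseudo_label_contrastive_loss(embeddings, pseudo_labels, temperature=0.1):
    # embeddings:    (N, D) frame- or utterance-level student representations
    # pseudo_labels: (N,)   integer pseudo-labels produced by the teacher
    z = F.normalize(embeddings, dim=1)          # compare in cosine space
    sim = z @ z.t() / temperature               # (N, N) similarity logits

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs

    # Positives: pairs the teacher assigned the same pseudo-label;
    # everything else in the batch serves as a negative.
    pos_mask = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    pos_mask = pos_mask & ~self_mask

    # Log-probability of each candidate, normalized over all non-self pairs.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of the positives for each anchor.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts

    # Only anchors with at least one positive contribute to the loss.
    has_pos = pos_mask.any(dim=1)
    return loss[has_pos].mean()

In this sketch, a weaker teacher still helps as long as its pseudo-labels group acoustically similar frames together, since the loss only needs the labels to be consistent within a batch rather than individually correct, which is one plausible reading of why CSL degrades more gracefully than CE pseudo-labeling as teacher quality drops.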