2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: SPE-8.2
Paper Title: SELF-SUPERVISED LEARNING BASED DOMAIN ADAPTATION FOR ROBUST SPEAKER VERIFICATION
Authors: Zhengyang Chen, Shuai Wang, Yanmin Qian; Shanghai Jiao Tong University, China
Session: SPE-8: Speaker Recognition 2: Channel and Domain Robustness
Location: Gather.Town
Session Time: Tuesday, 08 June, 14:00 - 14:45
Presentation Time: Tuesday, 08 June, 14:00 - 14:45
Presentation: Poster
Topic: Speech Processing: [SPE-SPKR] Speaker Recognition and Characterization
Abstract: Speaker verification systems often suffer large performance degradation when applied to a dataset from a new domain. Given an unlabeled target-domain dataset, unsupervised domain adaptation (UDA) methods, which usually leverage adversarial training strategies, are commonly used to bridge the performance gap caused by the domain mismatch. However, such an adversarial training strategy uses only the distribution information of the target-domain data and cannot guarantee a performance improvement on the target domain. In this paper, we incorporate a self-supervised learning strategy into the unsupervised domain adaptation system and propose a self-supervised learning based domain adaptation approach (SSDA). Compared to the traditional UDA method, the new SSDA training strategy can fully leverage the potential label information in the target domain while simultaneously adapting the speaker discrimination ability learned from the source domain. We evaluated the proposed approach on the VoxCeleb (labeled source domain) and CnCeleb (unlabeled target domain) datasets; the best SSDA system obtains 10.2% EER on CnCeleb without using any CnCeleb speaker labels, achieving state-of-the-art results on this corpus.
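To make the abstract's joint training idea concrete, below is a minimal PyTorch-style sketch: supervised speaker classification on the labeled source domain (VoxCeleb) combined with a self-supervised loss on the unlabeled target domain (CnCeleb), trained together. This is only an illustration under stated assumptions; the SpeakerEncoder, the NT-Xent-style contrastive loss, the 1000-class head, and the alpha weighting are hypothetical stand-ins, not the paper's exact architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Toy embedding network standing in for the actual speaker encoder (assumption)."""
    def __init__(self, feat_dim=80, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, x):
        # x: (batch, feat_dim), e.g. pooled frame-level features
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, temperature=0.1):
    """Self-supervised loss on the unlabeled target domain: two augmented
    segments of the same utterance form a positive pair (NT-Xent style;
    an assumed objective, not necessarily the paper's)."""
    logits = z1 @ z2.t() / temperature                 # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

encoder = SpeakerEncoder()
classifier = nn.Linear(256, 1000)  # number of source-domain speakers (illustrative)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

def train_step(src_feats, src_labels, tgt_seg1, tgt_seg2, alpha=1.0):
    # Supervised speaker classification on the labeled source domain (VoxCeleb)
    loss_src = F.cross_entropy(classifier(encoder(src_feats)), src_labels)
    # Self-supervised objective on the unlabeled target domain (CnCeleb)
    loss_tgt = contrastive_loss(encoder(tgt_seg1), encoder(tgt_seg2))
    # Joint objective: alpha balances source discrimination vs. target adaptation
    loss = loss_src + alpha * loss_tgt
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch to show the expected shapes: 8 labeled source utterances and
# 8 target utterances, each contributing two augmented segments.
loss = train_step(torch.randn(8, 80), torch.randint(0, 1000, (8,)),
                  torch.randn(8, 80), torch.randn(8, 80))
```

The key design point the abstract highlights is that, unlike adversarial UDA, the target-domain term here exploits utterance-level pseudo-label structure (segments of the same utterance share a speaker) rather than only matching feature distributions.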