Paper ID | SPE-12.4
Paper Title | ZERO-SHOT VOICE CONVERSION WITH ADJUSTED SPEAKER EMBEDDINGS AND SIMPLE ACOUSTIC FEATURES
Authors | Zhiyuan Tan, Jianguo Wei, Junhai Xu, Yuqing He, Wenhuan Lu, College of Intelligence and Computing, Tianjin University, China
Session | SPE-12: Voice Conversion 2: Low-Resource & Cross-Lingual Conversion
Location | Gather.Town
Session Time | Tuesday, 08 June, 16:30 - 17:15
Presentation Time | Tuesday, 08 June, 16:30 - 17:15
Presentation | Poster
Topic | Speech Processing: [SPE-SYNT] Speech Synthesis and Generation
Abstract | Zero-shot voice conversion (VC), where both the source and target speakers are unseen in the training dataset, has become a new research direction. Using speaker embeddings instead of one-hot vectors to represent speaker identity is a key point, as it allows VC models to work on unseen speakers. In our work, a newly designed neural network is used to adjust the speaker embeddings of unseen speakers, enabling the embeddings to perform better in zero-shot VC. In addition, disentangled feature representation is the mainstream approach to zero-shot VC. As input features of the VC model, we use Mel-cepstral coefficients and F0 as simple acoustic features (SAF) rather than Mel-spectrograms, which avoids the F0 conflicts in the decoder that existed in previous methods. The evaluations demonstrate that our proposed methods improve the quality of converted speech in terms of naturalness and similarity.
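As a concrete illustration of the simple acoustic features (SAF) named in the abstract, the sketch below extracts Mel-cepstral coefficients and F0 from a waveform. The paper does not specify its extraction toolkit or parameters, so the library choice (pyworld + pysptk), frame period, cepstral order, and frequency-warping constant here are assumptions for illustration only, not the authors' setup.

```python
# Minimal sketch: extracting simple acoustic features (Mel-cepstrum + F0).
# Toolkit choice (pyworld + pysptk), frame period, cepstral order, and the
# frequency-warping constant alpha are assumptions, not taken from the paper.
import numpy as np
import pyworld
import pysptk
import soundfile as sf

def extract_saf(wav_path, mcep_order=24, frame_period=5.0, alpha=0.42):
    """Return (mcep, f0) for one utterance."""
    x, fs = sf.read(wav_path)
    x = x.astype(np.float64)  # WORLD expects float64 samples

    # F0 estimation with DIO, refined by StoneMask.
    f0, t = pyworld.dio(x, fs, frame_period=frame_period)
    f0 = pyworld.stonemask(x, f0, t, fs)

    # Smoothed spectral envelope (CheapTrick), then Mel-cepstral coefficients.
    sp = pyworld.cheaptrick(x, f0, t, fs)
    mcep = pysptk.sp2mc(sp, order=mcep_order, alpha=alpha)

    return mcep, f0

if __name__ == "__main__":
    mcep, f0 = extract_saf("sample.wav")
    print(mcep.shape, f0.shape)  # (num_frames, mcep_order + 1), (num_frames,)
```

Passing explicit frame-level F0 alongside the Mel-cepstrum, rather than a Mel-spectrogram in which F0 is entangled with spectral content, is what lets the decoder avoid the F0 conflicts the abstract refers to.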