Paper ID | SPE-12.3 |
Paper Title | ONE-SHOT VOICE CONVERSION BASED ON SPEAKER AWARE MODULE |
Authors | Ying Zhang, Hao Che, Chenxing Li, Xiaorui Wang, Zhongyuan Wang, Kwai, China |
Session | SPE-12: Voice Conversion 2: Low-Resource & Cross-Lingual Conversion |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 16:30 - 17:15 |
Presentation Time | Tuesday, 08 June, 16:30 - 17:15 |
Presentation | Poster |
Topic | Speech Processing: [SPE-SYNT] Speech Synthesis and Generation |
Abstract | Voice conversion (VC) is the task of converting the speaker identity of an utterance while preserving its linguistic content. Although several methods have been proposed to enable VC with non-parallel data, it remains difficult to model a voice without a large amount of data or an adaptation process. In this paper, we propose a speaker-aware voice conversion (SAVC) system that realizes one-shot voice conversion without an adaptation stage. SAVC utilizes a speaker-aware module (SAM) to disentangle speaker embeddings. The SAM comprises a dynamic reference encoder, a static speaker knowledge block (SKB), and a multi-head attention layer. The reference encoder compresses a variable-length utterance into a fixed-length vector, the SKB consists of pre-extracted x-vectors, and the multi-head attention layer generates a weighted combination of speaker embeddings. Subsequently, phonetic posteriorgrams (PPGs), serving as the content encoding, are concatenated with the speaker embedding and sent to the decoder module to generate acoustic features. Experimental results on the AISHELL-1 corpus show that the proposed method improves both speaker similarity and the speech quality of the converted utterances. |
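To make the SAM pipeline in the abstract concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the class name, the choice of a GRU reference encoder, and every dimension (80-dim mels, 512-dim embeddings, a 340-entry x-vector bank, 218-dim PPGs) are placeholder assumptions, since the abstract specifies none of them. What it does show is the described data flow: a variable-length utterance is compressed to a fixed-length query, which attends over a static bank of pre-extracted x-vectors to produce a weighted speaker embedding.

```python
import torch
import torch.nn as nn

class SpeakerAwareModule(nn.Module):
    """Sketch of the SAM: a dynamic reference encoder produces a query that
    attends over a static speaker knowledge block (SKB) of x-vectors."""

    def __init__(self, n_mels=80, embed_dim=512, num_speakers=340, num_heads=8):
        super().__init__()
        # Dynamic reference encoder: compresses variable-length mel frames
        # into a fixed-length vector (final GRU hidden state).
        self.reference_encoder = nn.GRU(n_mels, embed_dim, batch_first=True)
        # Static SKB: one pre-extracted x-vector per training speaker,
        # frozen during VC training (an assumption; random here for the demo).
        self.skb = nn.Parameter(torch.randn(num_speakers, embed_dim),
                                requires_grad=False)
        # Multi-head attention producing a weighted combination of x-vectors.
        self.attention = nn.MultiheadAttention(embed_dim, num_heads,
                                               batch_first=True)

    def forward(self, mels):                    # mels: (B, T, n_mels)
        _, h = self.reference_encoder(mels)     # h: (1, B, embed_dim)
        query = h.transpose(0, 1)               # (B, 1, embed_dim)
        kv = self.skb.unsqueeze(0).expand(mels.size(0), -1, -1)
        spk, _ = self.attention(query, kv, kv)  # weighted speaker embedding
        return spk.squeeze(1)                   # (B, embed_dim)

# Usage: broadcast the speaker embedding over time and concatenate it with
# the PPG content encoding to form the decoder input, as the abstract states.
sam = SpeakerAwareModule()
mels = torch.randn(4, 250, 80)                  # reference utterance mels
ppgs = torch.randn(4, 200, 218)                 # (B, T, ppg_dim), placeholder
spk = sam(mels)                                 # (4, 512)
decoder_in = torch.cat(
    [ppgs, spk.unsqueeze(1).expand(-1, ppgs.size(1), -1)], dim=-1)
```

One point worth noting in this sketch: because the SKB is fixed and the attention output is a convex-like mixture of known speaker x-vectors, an unseen target speaker at inference time is represented without any adaptation stage, which is what enables the one-shot setting the paper claims.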