Paper ID | SPE-4.5
Paper Title | MULTI-SPEAKER EMOTIONAL SPEECH SYNTHESIS WITH FINE-GRAINED PROSODY MODELING
Authors | Chunhui Lu, Xue Wen, Ruolan Liu, Xiao Chen, Samsung Research China-Beijing, China
Session | SPE-4: Speech Synthesis 2: Controllability |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 13:00 - 13:45
Presentation Time | Tuesday, 08 June, 13:00 - 13:45
Presentation | Poster
Topic | Speech Processing: [SPE-SYNT] Speech Synthesis and Generation
Abstract | We present an end-to-end system for multi-speaker emotional speech synthesis. In particular, our system learns emotion classes from just two speakers and then generalizes these classes to other speakers from whom no emotional data was seen. We address the problem by integrating disentangled, fine-grained prosody features with a global, sentence-level emotion embedding. These fine-grained features learn to represent local prosodic variations disentangled from speaker, tone and global emotion label. Compared to systems that model emotions at the sentence level only, our method achieves higher ratings in naturalness and expressiveness, while retaining comparable speaker similarity ratings.
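The conditioning scheme the abstract describes — a global, sentence-level emotion embedding combined with per-phoneme fine-grained prosody features — can be illustrated with a minimal toy sketch. All names, dimensions, and the additive fusion below are assumptions for illustration only, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper)
T, d = 6, 8          # T phoneme positions, model width d

# Stand-in for a text encoder's output: one d-dim vector per phoneme
encoder_out = rng.standard_normal((T, d))

# Global, sentence-level emotion embedding: one vector per utterance
emotion_emb = rng.standard_normal((d,))

# Fine-grained prosody latents: one vector per phoneme, e.g. from a
# reference encoder, intended to capture local prosodic variation
local_prosody = rng.standard_normal((T, d))

# Hypothetical fusion: broadcast-add the global emotion embedding over
# time and add the per-phoneme prosody latents before the decoder
decoder_in = encoder_out + emotion_emb[None, :] + local_prosody

print(decoder_in.shape)
```

The point of the sketch is the two timescales: `emotion_emb` is constant across the utterance, while `local_prosody` varies per phoneme, so the decoder sees both a global emotion class and local prosodic detail.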