Paper ID | AUD-3.4
Paper Title | SEQUENCE-TO-SEQUENCE SINGING VOICE SYNTHESIS WITH PERCEPTUAL ENTROPY LOSS
Authors | Jiatong Shi, The Johns Hopkins University, United States; Shuai Guo, Renmin University of China, China; Nan Huo, Yuekai Zhang, The Johns Hopkins University, United States; Qin Jin, Renmin University of China, China
Session | AUD-3: Music Signal Analysis, Processing, and Synthesis 1: Deep Learning
Location | Gather.Town
Session Time | Tuesday, 08 June, 14:00 - 14:45
Presentation Time | Tuesday, 08 June, 14:00 - 14:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-MSP] Music Signal Analysis, Processing and Synthesis
Abstract | Neural network (NN) based singing voice synthesis (SVS) systems require sufficient data to train well. However, due to the high cost of data acquisition and annotation, we often face data limitations when building SVS systems, and NN-based models are prone to over-fitting under such data scarcity. In this work, we propose a Perceptual Entropy (PE) loss derived from a psycho-acoustic hearing model to regularize the network. With a one-hour open-source singing voice database, we explore the impact of the PE loss on several mainstream sequence-to-sequence models, including RNN-based, transformer-based, and conformer-based models. Our experiments show that the PE loss mitigates over-fitting and significantly improves synthesized singing quality in both objective and subjective evaluations. Furthermore, incorporating the PE loss in model training is shown to improve F0-contour and high-frequency-band spectrum prediction.
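The abstract does not detail how the PE loss enters training, so the following is a minimal sketch under one plausible reading: a perceptual-entropy-weighted spectral term added to the usual spectrogram reconstruction loss. The function name pe_weighted_loss, the precomputed pe_weights tensor, and the trade-off weight lam are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch assumed): regularizing a sequence-to-sequence SVS
# model with a perceptual-entropy-style weighted spectral term.
import torch
import torch.nn.functional as F


def pe_weighted_loss(pred_spec, target_spec, pe_weights, lam=0.1):
    """Hypothetical combination of a plain L1 spectrogram loss with a
    PE-weighted term.

    pred_spec, target_spec: (batch, frames, bins) predicted / reference spectrograms
    pe_weights: (batch, frames, bins) per-bin weights derived offline from a
        psycho-acoustic masking model (assumed precomputed, not shown here)
    lam: trade-off between the plain and PE-weighted terms (illustrative value)
    """
    recon = F.l1_loss(pred_spec, target_spec)
    # Weight the residuals by perceptual importance so perceptually salient
    # time-frequency bins dominate the regularization term.
    pe_term = (pe_weights * (pred_spec - target_spec).abs()).mean()
    return recon + lam * pe_term


if __name__ == "__main__":
    # Usage example with random tensors standing in for model output, target,
    # and psycho-acoustic weights.
    b, t, d = 2, 100, 80
    pred = torch.randn(b, t, d)
    target = torch.randn(b, t, d)
    weights = torch.rand(b, t, d)  # placeholder for masking-model weights
    print(pe_weighted_loss(pred, target, weights).item())
```

Because the weighting only rescales per-bin residuals, this kind of term can be dropped into RNN-, transformer-, or conformer-based decoders without changing the model architecture, which matches the abstract's claim that the PE loss is applied across mainstream sequence-to-sequence models.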