Paper ID | AUD-14.3
Paper Title | MBNET: MOS PREDICTION FOR SYNTHESIZED SPEECH WITH MEAN-BIAS NETWORK
Authors | Yichong Leng, University of Science and Technology of China, China; Xu Tan, Microsoft Research Asia, China; Sheng Zhao, Microsoft Azure Speech, China; Frank K. Soong, Microsoft Research Asia, China; Xiang-Yang Li, University of Science and Technology of China, China; Tao Qin, Microsoft Research Asia, China
Session | AUD-14: Quality and Intelligibility Measures
Location | Gather.Town
Session Time | Wednesday, 09 June, 15:30 - 16:15
Presentation Time | Wednesday, 09 June, 15:30 - 16:15
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-QIM] Quality and Intelligibility Measures
Abstract |
Mean opinion score (MOS) is a popular subjective metric for assessing the quality of synthesized speech, and it usually requires multiple human judges to evaluate each speech utterance. To reduce the labor cost of MOS tests, multiple methods have been proposed to automatically predict MOS scores. To the best of our knowledge, all previous works used only the average of the multiple judge scores of an utterance as the training target and discarded the score of each individual judge, which does not fully exploit the precious MOS training data. In this paper, we propose MBNet, a MOS predictor with a mean subnet and a bias subnet that better utilizes every judge score in MOS datasets: the mean subnet predicts the mean score of each utterance, as in previous works, while the bias subnet predicts the bias score (the difference between the mean score and each individual judge score) and thus captures the personal preference of individual judges. Experiments show that, compared with the MOSNet baseline that leverages only the mean score for training, MBNet improves the system-level Spearman's rank correlation coefficient (SRCC) by 2.9% on the VCC 2018 dataset and 6.7% on the VCC 2016 dataset.
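The following is a minimal sketch, not the authors' implementation, of the mean/bias idea described in the abstract: a shared encoder feeds a mean subnet that predicts the utterance-level mean MOS and a bias subnet that, conditioned on a judge identity, predicts that judge's deviation from the mean. The encoder choice, judge-embedding conditioning, feature dimensions, and equal loss weighting are all assumptions made for illustration.

```python
# Hypothetical sketch of a mean-bias MOS predictor; architecture details are assumed.
import torch
import torch.nn as nn

class MBNetSketch(nn.Module):
    def __init__(self, feat_dim=257, hidden=128, num_judges=300):
        super().__init__()
        # Shared frame-level encoder over spectrogram frames (assumed design).
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        # Mean subnet: predicts the mean MOS of an utterance.
        self.mean_head = nn.Linear(2 * hidden, 1)
        # Bias subnet: conditioned on a judge embedding, predicts that judge's
        # deviation from the mean score (the "bias" in the abstract).
        self.judge_emb = nn.Embedding(num_judges, 2 * hidden)
        self.bias_head = nn.Linear(4 * hidden, 1)

    def forward(self, spec, judge_id):
        # spec: (batch, frames, feat_dim); judge_id: (batch,)
        h, _ = self.encoder(spec)
        utt = h.mean(dim=1)  # utterance-level representation
        mean_score = self.mean_head(utt).squeeze(-1)
        j = self.judge_emb(judge_id)
        bias = self.bias_head(torch.cat([utt, j], dim=-1)).squeeze(-1)
        # At inference time only mean_score is needed; during training the bias
        # subnet provides extra supervision from individual judge scores.
        return mean_score, bias

# Training targets per (utterance, judge) pair, following the abstract:
#   mean target = average of all judge scores for the utterance
#   bias target = individual judge score minus the mean score
model = MBNetSketch()
spec = torch.randn(4, 200, 257)          # dummy batch of spectrograms
judge_id = torch.randint(0, 300, (4,))   # dummy judge indices
judge_score = torch.rand(4) * 4 + 1      # individual judge scores in [1, 5]
mean_target = torch.rand(4) * 4 + 1      # per-utterance mean scores
bias_target = judge_score - mean_target

mean_pred, bias_pred = model(spec, judge_id)
loss = nn.functional.mse_loss(mean_pred, mean_target) + \
       nn.functional.mse_loss(bias_pred, bias_target)
loss.backward()
```

At test time the judge-dependent bias subnet can simply be dropped, so the extra supervision costs nothing at inference; how the two losses are actually weighted in MBNet is specified in the paper, not in this sketch.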