Paper ID | AUD-23.4 |
Paper Title | LSSED: A LARGE-SCALE DATASET AND BENCHMARK FOR SPEECH EMOTION RECOGNITION |
Authors | Weiquan Fan, Xiangmin Xu, Xiaofen Xing, Weidong Chen, South China University of Technology, China; Dongyan Huang, UBTECH Robotics Corp, China |
Session | AUD-23: Detection and Classification of Acoustic Scenes and Events 4: Datasets and metrics |
Location | Gather.Town |
Session Time: | Thursday, 10 June, 15:30 - 16:15 |
Presentation Time: | Thursday, 10 June, 15:30 - 16:15 |
Presentation | Poster |
Topic | Audio and Acoustic Signal Processing: [AUD-CLAS] Detection and Classification of Acoustic Scenes and Events |
Abstract | Speech emotion recognition is a vital contributor to the next generation of human-computer interaction (HCI). However, existing small-scale databases have limited the development of related research. In this paper, we present LSSED, a challenging large-scale English speech emotion dataset collected from 820 subjects to simulate real-world distribution. In addition, we release pre-trained models based on LSSED, which can not only promote the development of speech emotion recognition but can also be transferred to related downstream tasks such as mental health analysis, where data is extremely difficult to collect. Finally, our experiments show the necessity of large-scale datasets and the effectiveness of pre-trained models. The dataset will be released at https://github.com/tobefans/LSSED. |