Paper ID | SPE-6.1
Paper Title | ONE SHOT LEARNING FOR SPEECH SEPARATION
Authors | Yuan-Kuei Wu, Kuan-Po Huang, National Taiwan University, Taiwan; Yu Tsao, Academia Sinica, Taiwan; Hung-yi Lee, National Taiwan University, Taiwan
Session | SPE-6: Speech Enhancement 2: Speech Separation and Dereverberation
Location | Gather.Town
Session Time | Tuesday, 08 June, 14:00 - 14:45
Presentation Time | Tuesday, 08 June, 14:00 - 14:45
Presentation | Poster
Topic | Speech Processing: [SPE-ENHA] Speech Enhancement and Separation
Abstract | Despite the recent success of speech separation models, they fail to separate sources properly when facing different sets of speakers or noisy environments. To tackle this problem, we propose applying meta-learning to the speech separation task. We aim to find a meta-initialization model that can quickly adapt to new speakers after seeing only one mixture generated by those speakers. In this paper, we use the model-agnostic meta-learning (MAML) algorithm and the almost-no-inner-loop (ANIL) algorithm with Conv-TasNet to achieve this goal. The experimental results show that our model can adapt not only to a new set of speakers but also to noisy environments. Furthermore, we find that the encoder and decoder serve as feature-reuse layers, while the separator is the task-specific module.
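As a rough illustration of the meta-learning procedure the abstract describes, the sketch below shows a first-order MAML-style inner/outer loop around a toy encoder-separator-decoder network. The `ToySeparator` module, the MSE placeholder loss, and all hyperparameters are illustrative assumptions, not the authors' Conv-TasNet setup or training objective.

```python
# A minimal first-order MAML sketch (assumed setup; the paper uses Conv-TasNet
# and an SI-SNR-style objective rather than this toy model and MSE loss).
import copy
import torch
import torch.nn as nn

class ToySeparator(nn.Module):
    """Toy encoder / separator / decoder stand-in for a two-speaker separation model."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Conv1d(1, dim, kernel_size=8, stride=8)           # feature-reuse layer
        self.separator = nn.Conv1d(dim, 2 * dim, kernel_size=3, padding=1)  # task-specific module
        self.decoder = nn.ConvTranspose1d(dim, 1, kernel_size=8, stride=8)  # feature-reuse layer

    def forward(self, mixture):                        # mixture: (batch, 1, time)
        feats = torch.relu(self.encoder(mixture))
        masks = torch.sigmoid(self.separator(feats)).chunk(2, dim=1)
        return [self.decoder(feats * m) for m in masks]  # two estimated sources

def separation_loss(estimates, targets):
    # Placeholder loss; a real separation system would use PIT with an SI-SNR objective.
    return sum(nn.functional.mse_loss(e, t) for e, t in zip(estimates, targets))

def maml_outer_step(model, meta_opt, tasks, inner_lr=1e-3, inner_steps=1):
    """One first-order MAML meta-update over a batch of speaker 'tasks'.
    Each task supplies a single support mixture (the one-shot adaptation data)
    and a query mixture used to evaluate the adapted fast weights."""
    meta_opt.zero_grad()
    for support_mix, support_srcs, query_mix, query_srcs in tasks:
        learner = copy.deepcopy(model)                 # fast weights start at the meta-initialization
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                   # adapt on the single support mixture
            inner_opt.zero_grad()
            separation_loss(learner(support_mix), support_srcs).backward()
            inner_opt.step()
        inner_opt.zero_grad()
        separation_loss(learner(query_mix), query_srcs).backward()
        # First-order approximation: accumulate the adapted copy's query gradients
        # into the meta-parameters instead of differentiating through the inner loop.
        for p, lp in zip(model.parameters(), learner.parameters()):
            p.grad = lp.grad.clone() if p.grad is None else p.grad + lp.grad
    meta_opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    T = 8000                                           # waveform length divisible by the stride
    model = ToySeparator()
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    mix = torch.randn(1, 1, T)
    srcs = [torch.randn(1, 1, T), torch.randn(1, 1, T)]
    maml_outer_step(model, meta_opt, tasks=[(mix, srcs, mix, srcs)])
```

Following the abstract's observation that the encoder and decoder act as feature-reuse layers, an ANIL-style variant would restrict the inner-loop update to the separator's parameters and keep the encoder and decoder fixed during adaptation.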