Paper ID | SPE-6.3
Paper Title | SPEAKER AND DIRECTION INFERRED DUAL-CHANNEL SPEECH SEPARATION
Authors | Chenxing Li, Jiaming Xu, Institute of Automation, Chinese Academy of Sciences, China; Nima Mesgarani, Columbia University, United States; Bo Xu, Institute of Automation, Chinese Academy of Sciences, China
Session | SPE-6: Speech Enhancement 2: Speech Separation and Dereverberation
Location | Gather.Town
Session Time | Tuesday, 08 June, 14:00 - 14:45
Presentation Time | Tuesday, 08 June, 14:00 - 14:45
Presentation | Poster
Topic | Speech Processing: [SPE-ENHA] Speech Enhancement and Separation
Abstract |
Most speech separation methods try to separate all channel sources simultaneously and are still far from having enough generalization capability for real scenarios, where the number of input sounds is usually uncertain and even dynamic. In this work, we employ ideas from auditory attention with two ears and propose a speaker and direction inferred speech separation network (dubbed SDNet) to solve the cocktail party problem. Specifically, our SDNet first parses out, in a sequential manner, a perceptual representation for each source in the mixture, carrying its speaker and direction characteristics. The perceptual representations are then utilized to attend to each corresponding speech. With the help of spatial features, our model generates more precise perceptual representations and successfully deals with the unknown number of sources and the selection of outputs. Experiments on the standard fully overlapped speech separation benchmarks WSJ0-2mix, WSJ0-3mix, and WSJ0-2&3mix show the effectiveness of our method, which achieves SDR improvements of 25.31 dB, 17.26 dB, and 21.56 dB, respectively, under anechoic settings. Our code will be released at https://github.com/aispeech-lab/SDNet.
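The abstract describes a two-stage design: first sequentially inferring a perceptual (speaker-plus-direction) representation for each source in the dual-channel mixture, then using each representation to attend to and extract the corresponding speech. The sketch below illustrates that two-stage structure in PyTorch. It is based only on the abstract, so the module names, feature dimensions, fixed maximum source count, and mask-based extraction are illustrative assumptions, not the authors' implementation (which should be consulted at the linked repository).

```python
# Illustrative sketch (not the authors' code): a two-stage "infer then attend"
# separator, following only the abstract's high-level description.
import torch
import torch.nn as nn

class SequentialInference(nn.Module):
    """Stage 1 (hypothetical): infer one perceptual embedding (speaker +
    direction cues) per source from dual-channel mixture features, one source
    at a time. A real system would also predict a stop flag so the number of
    sources need not be known in advance; here a fixed maximum is used."""
    def __init__(self, feat_dim=257, emb_dim=128, max_sources=3):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim * 2, 256, batch_first=True, bidirectional=True)
        self.step_emb = nn.Embedding(max_sources, 512)  # distinct query per step
        self.to_emb = nn.Linear(512, emb_dim)
        self.max_sources = max_sources

    def forward(self, mix_feats):            # mix_feats: (B, T, 2*F), two channels
        h, _ = self.rnn(mix_feats)            # (B, T, 512)
        pooled = h.mean(dim=1)                # (B, 512) utterance-level summary
        return [self.to_emb(pooled + self.step_emb.weight[k])
                for k in range(self.max_sources)]

class ConditionedExtractor(nn.Module):
    """Stage 2 (hypothetical): attend to the speech matching one embedding by
    predicting a time-frequency mask for that source."""
    def __init__(self, feat_dim=257, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * 2 + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.Sigmoid())

    def forward(self, mix_feats, emb):        # emb: (B, emb_dim)
        cond = emb.unsqueeze(1).expand(-1, mix_feats.size(1), -1)  # tile over time
        return self.net(torch.cat([mix_feats, cond], dim=-1))      # (B, T, F) mask

# Toy usage: 2-channel magnitude features for a 100-frame mixture.
mix = torch.randn(1, 100, 2 * 257)
infer, extract = SequentialInference(), ConditionedExtractor()
masks = [extract(mix, e) for e in infer(mix)]  # one mask per inferred source
```

Because each extraction is conditioned on a single inferred representation, the model produces one output per inferred source rather than a fixed set of output channels, which is how this kind of design sidesteps the unknown-source-count and output-permutation problems mentioned in the abstract.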