Paper ID | AUD-18.1
Paper Title | TOWARDS LISTENING TO 10 PEOPLE SIMULTANEOUSLY: AN EFFICIENT PERMUTATION INVARIANT TRAINING OF AUDIO SOURCE SEPARATION USING SINKHORN’S ALGORITHM
Authors | Hideyuki Tachibana, PKSHA Technology, Japan
Session | AUD-18: Audio and Speech Source Separation 5: Source Separation
Location | Gather.Town
Session Time | Thursday, 10 June, 13:00 - 13:45
Presentation Time | Thursday, 10 June, 13:00 - 13:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-SEP] Audio and Speech Source Separation
Abstract |
In neural-network-based monaural speech separation, it has recently become common to evaluate the loss using permutation invariant training (PIT). However, ordinary PIT requires trying all N! permutations between the N ground truths and the N estimates. Since this factorial complexity explodes rapidly as N increases, PIT-based training is feasible only when the number of source signals is small, such as N = 2 or 3. To overcome this limitation, this paper proposes SinkPIT, a novel variant of the PIT loss that is much more efficient than the ordinary PIT loss when N is large. SinkPIT is based on Sinkhorn’s matrix balancing algorithm, which efficiently finds a doubly stochastic matrix that approximates the best permutation in a differentiable manner. The author conducted an experiment in which a neural network was trained to decompose a single-channel mixture into 10 sources using SinkPIT, and obtained promising results.
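
The sketch below illustrates the general idea of a Sinkhorn-based PIT loss as described in the abstract; it is not the paper's implementation. The negative-SI-SDR pairwise cost, the temperature `beta`, the fixed number of Sinkhorn iterations, and all function names are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of a Sinkhorn-based PIT loss in PyTorch.
import torch


def sinkhorn(log_alpha: torch.Tensor, n_iter: int = 10) -> torch.Tensor:
    """Turn exp(log_alpha) into a (near) doubly stochastic matrix by
    alternately normalizing rows and columns in log space."""
    for _ in range(n_iter):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-1, keepdim=True)  # rows
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-2, keepdim=True)  # columns
    return log_alpha.exp()


def pairwise_neg_sisdr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Cost matrix C[b, i, j] = -SI-SDR(est[b, i], ref[b, j]) over all source pairs."""
    est = est.unsqueeze(2)                       # (B, N, 1, T)
    ref = ref.unsqueeze(1)                       # (B, 1, N, T)
    dot = (est * ref).sum(-1, keepdim=True)
    energy = (ref ** 2).sum(-1, keepdim=True) + eps
    target = dot / energy * ref                  # projection of est onto ref
    noise = est - target
    sisdr = 10 * torch.log10(
        (target ** 2).sum(-1) / ((noise ** 2).sum(-1) + eps) + eps
    )
    return -sisdr                                # (B, N, N)


def sinkpit_like_loss(est: torch.Tensor, ref: torch.Tensor,
                      beta: float = 10.0, n_iter: int = 10) -> torch.Tensor:
    """Differentiable permutation-invariant loss: Sinkhorn maps -beta*C to a soft
    permutation P, and the loss is the transport cost <P, C>, avoiding the N!
    enumeration of ordinary PIT."""
    cost = pairwise_neg_sisdr(est, ref)          # (B, N, N)
    perm = sinkhorn(-beta * cost, n_iter)        # soft permutation, (B, N, N)
    return (perm * cost).sum(dim=(-2, -1)).mean() / cost.shape[-1]  # per-source average


# Usage example: 10 estimated and 10 reference sources per mixture.
if __name__ == "__main__":
    B, N, T = 4, 10, 16000
    est = torch.randn(B, N, T, requires_grad=True)
    ref = torch.randn(B, N, T)
    loss = sinkpit_like_loss(est, ref)
    loss.backward()
    print(float(loss))
```

For large N, the cost of the Sinkhorn iterations grows only polynomially with N (a fixed number of row/column normalizations of an N-by-N matrix), which is what makes this kind of loss tractable where exhaustive N! permutation search is not.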