Paper ID | AUD-2.5
Paper Title | All for One and One for All: Improving Music Separation by Bridging Networks
Authors | Ryosuke Sawata, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji, Sony Corporation, Japan
Session | AUD-2: Audio and Speech Source Separation 2: Music and Singing Voice Separation
Location | Gather.Town
Session Time | Tuesday, 08 June, 13:00 - 13:45
Presentation Time | Tuesday, 08 June, 13:00 - 13:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-SEP] Audio and Speech Source Separation
IEEE Xplore Open Preview | Available in IEEE Xplore
Abstract | This paper proposes several improvements for music separation with deep neural networks (DNNs), namely a multi-domain loss (MDL) and two combination schemes. First, MDL exploits both the frequency- and time-domain representations of audio signals. Next, we utilize the relationships among instruments by considering them jointly. We do this on the one hand by modifying the network architecture, introducing a CrossNet structure, and on the other hand by considering combinations of instrument estimates with a new combination loss (CL). MDL and CL can easily be applied to many existing DNN-based separation methods, as they are loss functions used only during training and do not affect the inference step. Experimental results show that the performance of Open-Unmix (UMX), a well-known, state-of-the-art open-source library for music separation, can be improved by the above schemes. Our modifications of UMX are open-sourced together with this paper.
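The two loss ideas named in the abstract can be illustrated with a minimal numpy sketch. This is our own simplified illustration, not the paper's exact formulation: the function names, the weighting factor `alpha`, the use of a plain rFFT instead of an STFT, and the choice of MSE as the base loss are all assumptions made for the sake of a runnable example.

```python
# Hedged sketch of a multi-domain loss (MDL) and a combination loss (CL).
# Assumptions (not from the paper): rFFT in place of an STFT, MSE as the
# base criterion, and an illustrative weight alpha between the two domains.
from itertools import combinations

import numpy as np


def multi_domain_loss(est_spec, ref_spec, alpha=0.5):
    """Weighted sum of a frequency-domain and a time-domain error.

    est_spec, ref_spec: complex rFFT coefficients of the estimated and
    reference signals. The time-domain term is obtained by inverting the
    transform, so the loss sees both representations of the same signal.
    """
    freq_loss = np.mean(np.abs(est_spec - ref_spec) ** 2)
    est_time = np.fft.irfft(est_spec)
    ref_time = np.fft.irfft(ref_spec)
    time_loss = np.mean((est_time - ref_time) ** 2)
    return alpha * freq_loss + (1 - alpha) * time_loss


def combination_loss(estimates, references, base_loss):
    """Penalise sums over instrument combinations, not just single stems.

    estimates, references: dicts mapping instrument name -> time signal.
    Every combination of two or more instruments is summed on both sides
    and scored with base_loss, so errors that cancel per-instrument but
    not in the mixture are still penalised.
    """
    total = 0.0
    names = list(estimates)
    for r in range(2, len(names) + 1):
        for combo in combinations(names, r):
            est_sum = sum(estimates[n] for n in combo)
            ref_sum = sum(references[n] for n in combo)
            total += base_loss(est_sum, ref_sum)
    return total
```

Because both terms are pure training-time criteria, a sketch like this could be added to an existing separator's training loop without touching its inference path, which matches the abstract's claim that MDL and CL leave the inference step unchanged.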