Paper ID | AUD-2.2
Paper Title | NEURO-STEERED MUSIC SOURCE SEPARATION WITH EEG-BASED AUDITORY ATTENTION DECODING AND CONTRASTIVE-NMF
Authors | Giorgia Cantisani, Slim Essid, Gaël Richard, LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Session | AUD-2: Audio and Speech Source Separation 2: Music and Singing Voice Separation
Location | Gather.Town
Session Time | Tuesday, 08 June, 13:00 - 13:45
Presentation Time | Tuesday, 08 June, 13:00 - 13:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-SEP] Audio and Speech Source Separation
Abstract | We propose a novel informed music source separation paradigm, which we refer to as neuro-steered music source separation. More precisely, the separation process is guided by the user's selective auditory attention, decoded from their EEG response to the stimulus. This high-level prior information is used both to select the desired instrument to isolate and to adapt the generic source separation model to the observed signal. To this end, we leverage the fact that the neural encoding of the attended instrument is substantially stronger than that of the unattended sources left in the mixture. This "contrast" is extracted by an attention decoder and used to inform a source separation model based on non-negative matrix factorization, named Contrastive-NMF. The results are promising and show that the EEG information can be used to automatically select the desired source and to improve the separation quality.
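For illustration, below is a minimal sketch of how an EEG-derived contrast could inform an NMF decomposition of the mixture spectrogram. It assumes the contrast takes the form of a decoded temporal envelope of the attended source (`e_att`) and enters the factorization as a simple additive nudge on the activations assigned to the attended source; the function name, parameters, and this particular coupling are illustrative assumptions, not the paper's actual Contrastive-NMF objective or update rules.

```python
import numpy as np

def contrastive_nmf(V, e_att, n_components=8, att_components=4,
                    delta=0.1, n_iter=200, eps=1e-9, seed=0):
    """Illustrative NMF with a contrastive coupling (not the paper's exact method).

    V     : (F, T) non-negative magnitude spectrogram of the mixture.
    e_att : (T,) hypothetical EEG-decoded envelope of the attended source.

    The first `att_components` activation rows are softly pulled toward
    e_att after each standard multiplicative update.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_components)) + eps   # spectral templates
    H = rng.random((n_components, T)) + eps   # temporal activations
    e = e_att / (np.linalg.norm(e_att) + eps)

    for _ in range(n_iter):
        # standard multiplicative updates for the Euclidean NMF objective
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        # assumed contrastive step: nudge the attended activations toward
        # the EEG envelope (gradient step on -delta * <h, e>)
        H[:att_components] += delta * e[None, :]
        H = np.maximum(H, eps)

    # soft mask isolating the attended source's contribution
    V_att = W[:, :att_components] @ H[:att_components]
    mask = V_att / (W @ H + eps)
    return mask, W, H
```

In use, the returned mask would be applied to the mixture's complex STFT and inverted to obtain the attended source estimate; in the paper's paradigm, the attention decoder supplies the EEG-side information that plays the role of `e_att` here.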