Paper ID | BIO-12.2
Paper Title | SPEAKER-INDEPENDENT BRAIN ENHANCED SPEECH DENOISING
Authors | Maryam Hosseini, Luca Celotti, Éric Plourde, Université de Sherbrooke, Canada
Session | BIO-12: Feature Extraction and Fusion for Biomedical Applications
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing
Abstract | The auditory system is extremely efficient at extracting attended auditory information in the presence of competing speakers. Single-channel speech enhancement algorithms, however, largely lack this ability. In this paper, we propose a novel deep learning method, referred to as the Brain Enhanced Speech Denoiser (BESD), which takes advantage of the attended auditory information present in the brain activity of the listener to denoise multi-talker speech. We use this information to modulate the features learned from the sound and the brain activity in order to perform speech enhancement. We show that our method successfully enhances a speech mixture, without prior information about the attended speaker, using electroencephalography (EEG) signals recorded from the listener. This makes it a strong candidate for realistic applications where no prior information about the attended speaker is available, such as hearing aids or cell phones.
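The abstract describes fusing the listener's EEG with the noisy sound by modulating learned features. As a rough illustration only, and not the authors' published BESD architecture, the sketch below shows one common way such brain-guided modulation can be implemented: an EEG-derived embedding produces per-feature scale and shift parameters (FiLM-style) that are applied to the audio features before a time-frequency mask is estimated. All names (e.g. BrainModulatedDenoiser), layer sizes, and the FiLM choice are assumptions made for illustration.

# Hypothetical sketch of brain-guided feature modulation for speech denoising.
# This is NOT the published BESD architecture; names and layer sizes are assumed.
import torch
import torch.nn as nn

class BrainModulatedDenoiser(nn.Module):
    def __init__(self, n_freq=257, eeg_channels=64, hidden=128):
        super().__init__()
        # Audio branch: encode the noisy mixture spectrogram frames.
        self.audio_enc = nn.GRU(n_freq, hidden, batch_first=True)
        # Brain branch: summarize the listener's EEG into a single embedding.
        self.eeg_enc = nn.GRU(eeg_channels, hidden, batch_first=True)
        # FiLM-style modulation: EEG embedding -> per-feature scale and shift.
        self.to_scale = nn.Linear(hidden, hidden)
        self.to_shift = nn.Linear(hidden, hidden)
        # Decoder: modulated features -> mask for the attended speaker.
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_spec, eeg):
        # noisy_spec: (batch, frames, n_freq) magnitude spectrogram of the mixture
        # eeg:        (batch, samples, eeg_channels) listener EEG, time-aligned
        audio_feat, _ = self.audio_enc(noisy_spec)      # (B, T, H)
        _, eeg_state = self.eeg_enc(eeg)                # (1, B, H)
        eeg_emb = eeg_state[-1]                         # (B, H)
        # Modulate audio features with EEG-derived scale and shift (broadcast over time).
        scale = self.to_scale(eeg_emb).unsqueeze(1)     # (B, 1, H)
        shift = self.to_shift(eeg_emb).unsqueeze(1)     # (B, 1, H)
        modulated = scale * audio_feat + shift
        # Estimate a mask and apply it to the mixture spectrogram.
        return self.mask(modulated) * noisy_spec

# Usage with random tensors standing in for real data:
model = BrainModulatedDenoiser()
mixture = torch.randn(2, 100, 257)   # 2 utterances, 100 frames, 257 frequency bins
eeg = torch.randn(2, 640, 64)        # EEG segments from the same listeners
enhanced = model(mixture, eeg)       # (2, 100, 257) enhanced spectrogram

In this kind of design, the speaker-independence claimed in the abstract comes from the fact that no speaker identity or clean reference is given at test time: only the mixture and the listener's EEG drive the modulation.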