Paper ID | SPE-6.5 |
Paper Title |
REAL-TIME DENOISING AND DEREVERBERATION WITH TINY RECURRENT U-NET |
Authors |
Hyeong-Seok Choi, Seoul National University/Supertone, South Korea; Sungjin Park, Seoul National University, South Korea; Jie Hwan Lee, Hoon Heo, Supertone, South Korea; Dongsuk Jeon, Seoul National University, South Korea; Kyogu Lee, Seoul National University/Supertone, South Korea |
Session | SPE-6: Speech Enhancement 2: Speech Separation and Dereverberation |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation Time | Tuesday, 08 June, 14:00 - 14:45 |
Presentation | Poster |
Topic |
Speech Processing: [SPE-ENHA] Speech Enhancement and Separation |
Abstract |
Modern deep learning-based models have achieved outstanding performance on speech enhancement tasks. However, the number of parameters of state-of-the-art models is often too large for deployment on devices in real-world applications. To this end, we propose Tiny Recurrent U-Net (TRU-Net), a lightweight online inference model that matches the performance of current state-of-the-art models. The quantized version of TRU-Net is 362 kilobytes, which is small enough to be deployed on edge devices. In addition, we combine the small-sized model with a new masking method called phase-aware $\beta$-sigmoid mask, which enables simultaneous denoising and dereverberation. Results of both objective and subjective evaluations show that our model achieves competitive performance with current state-of-the-art models on benchmark datasets while using orders of magnitude fewer parameters. |
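The sketch below is a rough, simplified illustration (not the authors' exact formulation) of the general idea behind a bounded, phase-aware spectral mask: a sigmoid-like magnitude mask scaled by an upper bound beta is applied to the noisy STFT together with an estimated phase correction. All names, the mask form, and the beta parameter here are assumptions for illustration only.

```python
import numpy as np

def apply_phase_aware_mask(stft_noisy, mask_logits, phase_correction, beta=2.0):
    """Illustrative sketch: bounded magnitude mask plus phase rotation.

    stft_noisy:       complex array (freq, time), noisy/reverberant spectrogram
    mask_logits:      real array, raw network outputs for the magnitude mask
    phase_correction: real array, estimated phase offset in radians
    beta:             hypothetical upper bound of the sigmoid mask
    """
    # Sigmoid scaled to (0, beta): values above 1 can restore attenuated
    # components, values below 1 suppress noise and reverberation.
    magnitude_mask = beta / (1.0 + np.exp(-mask_logits))

    # Rescale the noisy magnitude and rotate its phase by the estimate.
    enhanced = magnitude_mask * np.abs(stft_noisy) * np.exp(
        1j * (np.angle(stft_noisy) + phase_correction)
    )
    return enhanced
```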