Paper ID | MLSP-10.2
Paper Title |
LEARNING SEPARABLE TIME-FREQUENCY FILTERBANKS FOR AUDIO CLASSIFICATION |
Authors |
Jie Pu, Imperial College London, United Kingdom; Yannis Panagakis, University of Athens, Greece; Maja Pantic, Imperial College London, United Kingdom |
Session | MLSP-10: Deep Learning for Speech and Audio |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 16:30 - 17:15
Presentation Time | Tuesday, 08 June, 16:30 - 17:15
Presentation | Poster
Topic |
Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques |
Abstract |
State-of-the-art audio classification systems often apply deep neural networks to hand-crafted features (e.g., spectrogram-based representations) instead of learning features directly from raw audio. Moreover, these audio networks have millions of parameters that need to be learned, which creates a great demand for computational resources and training data. In this paper, we aim to learn audio representations directly from raw audio and, at the same time, mitigate the training burden by employing a light-weight architecture. In particular, we propose to learn separable filters parametrized with only a few variables, namely center frequency and bandwidth, which facilitates training and offers interpretability of the learned representations. The generality of the proposed method is demonstrated by applying it to two applications, namely 1) speaker identification and 2) acoustic event recognition. Experimental results indicate its effectiveness on these applications, especially when only a small amount of training data is available.
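To make the parametrization concrete, below is a minimal, hypothetical sketch of a filterbank layer whose kernels are generated from just two learnable scalars per filter, a center frequency and a bandwidth, and convolved with the raw waveform. It uses a Gabor-style kernel (Gaussian envelope times cosine carrier) purely for illustration; the class name, initialization values, and exact kernel shape are assumptions and do not reproduce the authors' formulation.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class ParametricFilterbank(nn.Module):
    """Band-pass filterbank whose kernels are built from two learnable
    scalars per filter (center frequency, bandwidth). Illustrative sketch,
    not the paper's exact formulation."""

    def __init__(self, n_filters=40, kernel_size=401, sample_rate=16000):
        super().__init__()
        self.kernel_size = kernel_size
        # Center frequencies initialized on a linear grid, stored normalized to [0, 0.5].
        init_cf = torch.linspace(30.0, sample_rate / 2 - 100.0, n_filters)
        self.center_freq = nn.Parameter(init_cf / sample_rate)
        # One normalized bandwidth scalar per filter.
        self.bandwidth = nn.Parameter(torch.full((n_filters,), 100.0 / sample_rate))

    def forward(self, waveform):
        # waveform: (batch, 1, time)
        t = torch.arange(self.kernel_size, device=waveform.device).float()
        t = (t - self.kernel_size // 2).unsqueeze(0)      # (1, kernel_size)
        cf = self.center_freq.abs().unsqueeze(1)          # (n_filters, 1)
        bw = self.bandwidth.abs().unsqueeze(1) + 1e-5
        # Separable Gabor-style kernel: Gaussian envelope (set by bandwidth)
        # multiplied by a cosine carrier (set by center frequency).
        envelope = torch.exp(-0.5 * (2.0 * math.pi * bw * t) ** 2)
        carrier = torch.cos(2.0 * math.pi * cf * t)
        kernels = (envelope * carrier).unsqueeze(1)       # (n_filters, 1, kernel_size)
        return F.conv1d(waveform, kernels, padding=self.kernel_size // 2)


# Usage: x = torch.randn(8, 1, 16000); features = ParametricFilterbank()(x)
```

The light-weight aspect highlighted in the abstract is that gradients update only 2 x n_filters scalars rather than n_filters x kernel_size free weights, and each learned filter stays interpretable as a band-pass with a readable center frequency and bandwidth.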