Paper ID | AUD-24.4
Paper Title | SPEECH ENHANCEMENT AUTOENCODER WITH HIERARCHICAL LATENT STRUCTURE
Authors | Koen Oostermeijer, Jun Du, Qing Wang, University of Science and Technology of China, China; Chin-Hui Lee, Georgia Institute of Technology, United States
Session | AUD-24: Signal Enhancement and Restoration 1: Deep Learning
Location | Gather.Town
Session Time | Thursday, 10 June, 16:30 - 17:15
Presentation Time | Thursday, 10 June, 16:30 - 17:15
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-SEN] Signal Enhancement and Restoration
Abstract | A new hierarchical convolutional neural network-based autoencoder architecture called SEHAE (Speech Enhancement Hierarchical AutoEncoder) is introduced, in which the latent representation is decomposed into several parts that correspond to different scales. The model consists of three functionally different components. First, a stack of encoders generates a set of latent vectors that contain information from an increasingly larger receptive field. Second, the decoders construct the clean speech in a stage-wise and additive fashion, starting from a learned initial vector. The third component, which we call funnel networks, is tasked with "knitting" together the outputs of the previous decoder and the encoder to compute latent vectors for the next decoder. Several options for the initial vector are explored. Experiments show that SEHAE achieves significant improvements for the considered speech quality and intelligibility measures, outperforming a denoising autoencoder and other step-wise models. Furthermore, its internal workings are investigated using the intermediate results from the decoders.
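
The abstract describes the architecture only at a high level; the sketch below shows one way the three components (encoder stack, learned initial vector with funnel networks, and coarse-to-fine decoders) could be wired together in PyTorch. All concrete choices are assumptions for illustration, not the paper's configuration: waveform input, channel counts, kernel sizes, strides, and the class names EncoderBlock, DecoderBlock, FunnelBlock and HierarchicalEnhancer are invented here, and the explicitly additive construction of the clean-speech estimate is simplified to a plain stage-wise refinement.

```python
# Minimal sketch of a SEHAE-style hierarchical enhancement autoencoder.
# Hyperparameters and module names are illustrative assumptions, not the
# configuration reported in the paper.
import torch
import torch.nn as nn


class EncoderBlock(nn.Module):
    """Strided 1-D conv block; stacking these grows the receptive field."""
    def __init__(self, in_ch, out_ch, stride=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=8, stride=stride, padding=2),
            nn.PReLU(),
        )

    def forward(self, x):
        return self.net(x)


class DecoderBlock(nn.Module):
    """Transposed-conv block that maps a latent one scale closer to the signal."""
    def __init__(self, in_ch, out_ch, stride=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(in_ch, out_ch, kernel_size=8, stride=stride, padding=2),
            nn.PReLU(),
        )

    def forward(self, z):
        return self.net(z)


class FunnelBlock(nn.Module):
    """'Knits' the previous decoder output together with the encoder latent
    at the same scale to produce the latent for the next decoder stage."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv1d(2 * ch, ch, kernel_size=3, padding=1),
            nn.PReLU(),
        )

    def forward(self, dec_out, enc_latent):
        return self.fuse(torch.cat([dec_out, enc_latent], dim=1))


class HierarchicalEnhancer(nn.Module):
    """Toy SEHAE-style model: encoder stack, learned initial vector,
    funnel networks, and a coarse-to-fine decoder chain."""
    def __init__(self, channels=(1, 32, 64, 128), stride=4):
        super().__init__()
        n = len(channels) - 1
        self.encoders = nn.ModuleList(
            EncoderBlock(channels[i], channels[i + 1], stride) for i in range(n)
        )
        # Decoders and funnels run deepest-first (coarse to fine).
        self.decoders = nn.ModuleList(
            DecoderBlock(channels[i + 1], channels[i], stride)
            for i in reversed(range(n))
        )
        self.funnels = nn.ModuleList(
            FunnelBlock(channels[i + 1]) for i in reversed(range(n))
        )
        # Learned initial vector the decoding path starts from (one of the
        # initialisation options mentioned in the abstract).
        self.z_init = nn.Parameter(torch.zeros(channels[-1], 1))

    def forward(self, noisy):
        # noisy: (batch, 1, T) waveform with T divisible by stride**num_stages.
        latents, h = [], noisy
        for enc in self.encoders:
            h = enc(h)
            latents.append(h)  # latents with increasingly large receptive fields

        # Stage-wise decoding: each funnel fuses the previous decoder output
        # with the encoder latent at the same scale; each decoder upsamples.
        d = self.z_init.expand(noisy.size(0), -1, latents[-1].size(-1))
        for funnel, dec, z in zip(self.funnels, self.decoders, reversed(latents)):
            d = dec(funnel(d, z))
        return d  # enhanced waveform estimate, shape (batch, 1, T)


if __name__ == "__main__":
    model = HierarchicalEnhancer()
    noisy = torch.randn(2, 1, 16384)   # two noisy waveform segments
    print(model(noisy).shape)          # torch.Size([2, 1, 16384])
```

The intermediate tensors `d` produced inside the decoding loop correspond loosely to the per-stage decoder outputs the abstract uses to inspect the model's internal workings; in this simplified sketch they are only passed forward rather than summed into an additive speech estimate.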