Paper ID | AUD-25.6
Paper Title | ENHANCING INTO THE CODEC: NOISE ROBUST SPEECH CODING WITH VECTOR-QUANTIZED AUTOENCODERS
Authors | Jonah Casebeer, University of Illinois at Urbana-Champaign, United States; Vinjai Vale, Stanford University, United States; Umut Isik, Jean-Marc Valin, Ritwik Giri, Arvindh Krishnaswamy, Amazon Web Services, United States
Session | AUD-25: Signal Enhancement and Restoration 2: Audio Coding and Restoration
Location | Gather.Town
Session Time | Thursday, 10 June, 16:30 - 17:15
Presentation Time | Thursday, 10 June, 16:30 - 17:15
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-AMCT] Audio and Speech Modeling, Coding and Transmission
Abstract | Audio codecs based on discretized neural autoencoders have recently been developed and shown to provide significantly higher compression for comparable-quality speech output. However, these models are tightly coupled to speech content and produce unintended outputs in noisy conditions. Building on VQ-VAE autoencoders with WaveRNN decoders, we develop compressor-enhancer encoders and accompanying decoders, and show that they operate well in noisy conditions. We also observe that a compressor-enhancer model performs better on clean speech inputs than a compressor model trained only on clean speech.
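To make the abstract's architecture concrete, below is a minimal sketch of a VQ-VAE-style compressor-enhancer, assuming PyTorch. It is illustrative only, not the authors' implementation: layer sizes, the codebook size, and the small transposed-convolution decoder (standing in for the paper's WaveRNN decoder) are assumptions chosen for brevity. The key idea it mirrors is that the encoder sees noisy speech, the quantizer's discrete indices form the bitstream, and the reconstruction target is the clean signal, so enhancement happens inside the codec.

```python
# Minimal sketch of a VQ-VAE "compressor-enhancer" codec (assumes PyTorch).
# A small conv decoder stands in for WaveRNN; all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient."""

    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):                      # z: (batch, time, dim)
        flat = z.reshape(-1, z.size(-1))
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))
        codes = dists.argmin(1)                # discrete indices = the bitstream
        q = self.codebook(codes).view_as(z)
        # Codebook + commitment terms of the standard VQ-VAE objective.
        vq_loss = F.mse_loss(q, z.detach()) + self.beta * F.mse_loss(z, q.detach())
        q = z + (q - z).detach()               # straight-through estimator
        return q, codes.view(z.shape[:-1]), vq_loss


class CompressorEnhancer(nn.Module):
    """Encoder quantizes noisy speech; decoder reconstructs clean speech."""

    def __init__(self, dim=64, num_codes=512):
        super().__init__()
        self.encoder = nn.Sequential(           # strided 1-D convs downsample the waveform
            nn.Conv1d(1, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 8, stride=4, padding=2),
        )
        self.vq = VectorQuantizer(num_codes, dim)
        self.decoder = nn.Sequential(           # stand-in for the WaveRNN decoder
            nn.ConvTranspose1d(dim, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(dim, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(dim, 1, 8, stride=4, padding=2),
        )

    def forward(self, noisy):                   # noisy: (batch, 1, samples)
        z = self.encoder(noisy).transpose(1, 2)
        q, codes, vq_loss = self.vq(z)
        recon = self.decoder(q.transpose(1, 2))
        return recon, codes, vq_loss


# Joint compression + enhancement: the reconstruction target is the clean
# signal even though the encoder only ever sees the noisy mixture.
model = CompressorEnhancer()
noisy = torch.randn(2, 1, 16384)                # placeholder noisy/clean pair
clean = torch.randn(2, 1, 16384)
recon, codes, vq_loss = model(noisy)
loss = F.mse_loss(recon, clean[..., :recon.size(-1)]) + vq_loss
loss.backward()
```

Training on noisy-input/clean-target pairs in this way is what distinguishes the compressor-enhancer from a plain compressor trained only on clean speech, which is the comparison the abstract reports.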