Paper ID | AUD-25.5
Paper Title | SOURCE-AWARE NEURAL SPEECH CODING FOR NOISY SPEECH COMPRESSION
Authors | Haici Yang, Kai Zhen, Indiana University, United States; Seungkwon Beack, Electronics and Telecommunications Research Institute, South Korea; Minje Kim, Indiana University, United States
Session | AUD-25: Signal Enhancement and Restoration 2: Audio Coding and Restoration
Location | Gather.Town
Session Time | Thursday, 10 June, 16:30 - 17:15
Presentation Time | Thursday, 10 June, 16:30 - 17:15
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-AMCT] Audio and Speech Modeling, Coding and Transmission
Abstract | This paper introduces a novel neural network-based speech coding system that handles noisy speech effectively. The proposed source-aware neural audio coding (SANAC) system harmonizes a deep autoencoder-based source separation model and a neural coding system, so that it can explicitly perform source separation and coding in the latent space. An added benefit of this design is that the codec can allocate different numbers of bits to the underlying sources, so that the more important source sounds better in the decoded signal. We target the use case where the user on the receiver side cares about the quality of the non-speech components in the speech communication, while the speech source still carries the most important information. Both objective and subjective evaluations show that SANAC recovers the original noisy speech at a higher quality than the baseline neural audio coding system, which has no source-aware coding mechanism.
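The latent-space separation-and-coding idea from the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the paper's actual architecture: the module shapes, the mask-based latent split, and the soft vector quantizers are all assumptions. The only element taken from the abstract is that the separated latent sources receive different bit budgets, realized here as codebooks of different sizes (more codes, and hence more bits per code index, for speech than for the non-speech residual).

```python
# Hypothetical sketch of source-aware neural coding in the spirit of SANAC.
# Module names, layer sizes, and the masking-based latent split are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftVQ(nn.Module):
    """Soft vector quantizer: the codebook size sets the bitrate,
    at log2(num_codes) bits per quantized latent vector."""
    def __init__(self, num_codes, dim):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, z):
        # z: (batch, time, dim); distance from each latent vector to each code
        d = torch.cdist(z, self.codebook.unsqueeze(0).expand(z.size(0), -1, -1))
        w = F.softmax(-d, dim=-1)    # soft (differentiable) code assignment
        return w @ self.codebook     # soft-quantized latent

class SourceAwareCodec(nn.Module):
    def __init__(self, dim=64, speech_codes=1024, noise_codes=64):
        super().__init__()
        self.encoder = nn.Conv1d(1, dim, kernel_size=16, stride=8, padding=4)
        self.mask = nn.Conv1d(dim, dim, kernel_size=1)  # latent-space separation mask
        # Unequal bit allocation: a larger codebook for the speech source.
        self.vq_speech = SoftVQ(speech_codes, dim)
        self.vq_noise = SoftVQ(noise_codes, dim)
        self.decoder = nn.ConvTranspose1d(dim, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, x):
        z = self.encoder(x)                       # (B, dim, T')
        m = torch.sigmoid(self.mask(z))           # speech mask in the latent space
        z_s, z_n = z * m, z * (1 - m)             # separated latent sources
        q_s = self.vq_speech(z_s.transpose(1, 2)).transpose(1, 2)
        q_n = self.vq_noise(z_n.transpose(1, 2)).transpose(1, 2)
        return self.decoder(q_s + q_n)            # reconstruct the noisy mixture

x = torch.randn(2, 1, 8000)                       # 0.5 s of 16 kHz audio
x_hat = SourceAwareCodec()(x)
print(x_hat.shape)                                # torch.Size([2, 1, 8000])
```

In this sketch, changing `speech_codes` and `noise_codes` trades quality between the two decoded sources, which mirrors the abstract's point that the more important source can be made to sound better at a fixed total bitrate.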