| Field | Value |
| --- | --- |
| Paper ID | ASPS-5.5 |
| Paper Title | Exploring the application of synthetic audio in training keyword spotters |
| Authors | Andrew Werchniak, Roberto Barra-Chicote, Yuriy Mishchenko, Jasha Droppo, Peng Liu, Jeff Condal, Anish Shah (Amazon, United States) |
| Session | ASPS-5: Audio & Images |
| Location | Gather.Town |
| Session Time | Thursday, 10 June, 16:30 - 17:15 |
| Presentation Time | Thursday, 10 June, 16:30 - 17:15 |
| Presentation | Poster |
| Topic | Applied Signal Processing Systems: Signal Processing Systems [DIS-EMSA] |
**Abstract:** The study of keyword spotting, a subfield of speech recognition that centers on identifying individual keywords in speech audio, has gained particular importance in recent years with the rise of personal voice assistants such as Alexa. As voice assistants rapidly expand to support new languages, keywords, and use cases, stakeholders face a shortage of training data for these unseen scenarios. This paper details an initial exploration into the application of Text-To-Speech (TTS) audio as a “helper” tool for training keyword spotters in such low-resource scenarios. In the experiments reported here, carefully mixing TTS audio with human speech audio during training reduced the detection-error-tradeoff (DET) area under the curve (AUC) by over 11%.
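
The abstract's recipe, blending synthetic TTS utterances into a human-speech training set and scoring the resulting keyword spotter by DET AUC, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`mix_training_sets`, `det_auc`), the `tts_fraction` hyperparameter, and the simple threshold sweep used to trace the DET curve are all assumptions for the sake of the example.

```python
import random

import numpy as np


def mix_training_sets(human_utts, tts_utts, tts_fraction=0.2, seed=0):
    """Blend synthetic (TTS) utterances into a human-speech training set.

    ``tts_fraction`` is an assumed hyperparameter, not a value from the
    paper: it sets what share of the combined training set is synthetic.
    """
    rng = random.Random(seed)
    # Number of TTS utterances so that n_tts / (n_human + n_tts) = tts_fraction.
    n_tts = int(len(human_utts) * tts_fraction / (1.0 - tts_fraction))
    mixed = list(human_utts) + rng.sample(tts_utts, min(n_tts, len(tts_utts)))
    rng.shuffle(mixed)
    return mixed


def det_auc(scores, labels, n_thresholds=200):
    """Area under the DET curve (false-reject rate vs. false-accept rate).

    ``scores`` are detector confidences; ``labels`` are 1 for keyword and
    0 for non-keyword. Lower is better; the paper reports an 11% relative
    reduction in this metric.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    far, frr = [], []
    for t in thresholds:
        accept = scores >= t
        far.append(np.mean(accept[labels == 0]))   # false accepts among negatives
        frr.append(np.mean(~accept[labels == 1]))  # false rejects among positives
    far, frr = np.asarray(far), np.asarray(frr)
    order = np.argsort(far)                        # integrate along increasing FAR
    far, frr = far[order], frr[order]
    # Trapezoidal rule, written out explicitly.
    return float(np.sum(np.diff(far) * (frr[1:] + frr[:-1]) / 2.0))
```

Under this sketch, one would train a baseline spotter on `human_utts` alone and an augmented spotter on `mix_training_sets(human_utts, tts_utts)`, then compare `det_auc` on the same held-out human test set to estimate the relative reduction the paper describes.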