Paper ID | SS-11.3 |
Paper Title | MEMORY-EFFICIENT SPEECH RECOGNITION ON SMART DEVICES |
Authors | Ganesh Venkatesh, Alagappan Valliappan, Jay Mahadeokar, Yuan Shangguan, Christian Fuegen, Mike Seltzer, Vikas Chandra, Facebook, United States |
Session | SS-11: On-device AI for Audio and Speech Applications |
Location | Gather.Town |
Session Time | Thursday, 10 June, 14:00 - 14:45 |
Presentation Time | Thursday, 10 June, 14:00 - 14:45 |
Presentation | Poster |
Topic | Special Sessions: On-device AI for Audio and Speech Applications |
Abstract |
Recurrent transducer models have emerged as a promising solution for speech recognition on current and next-generation smart devices. Transducer models provide competitive accuracy within a reasonable memory footprint, alleviating the memory capacity constraints of these devices. However, transducer models access model parameters from off-chip memory for every input speech frame, which adversely impacts device battery life and limits their usability. We address the transducer model's memory-access costs by optimizing its model architecture and proposing novel, efficient recurrent cell designs. We demonstrate that i) the model's energy cost is dominated by accessing model weights from off-chip memory, ii) the transducer model architecture is pivotal in determining the number of off-chip memory accesses, and model size alone is not a good proxy, and iii) our transducer model optimizations and novel recurrent cell reduce off-chip memory accesses by 4.5× and model size by 2× with minimal accuracy impact. |
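To illustrate point ii), the sketch below estimates per-frame off-chip weight reads for a recurrent transducer. All parameter counts, invocation rates, and the helper function itself are hypothetical assumptions for illustration, not figures or code from the paper; the point is only that two models of equal size can generate very different memory traffic.

```python
# Illustrative back-of-the-envelope estimate (hypothetical numbers, not from the paper):
# per-frame off-chip weight reads for a recurrent transducer.

def weight_accesses_per_frame(encoder_params, predictor_params, joiner_params,
                              tokens_per_frame=0.1, encoder_stride=1):
    """Estimate weight elements fetched from off-chip memory per input frame.

    Assumes (hypothetically) that the encoder runs once every `encoder_stride`
    frames, while the predictor and joiner run only when a token is emitted.
    """
    encoder_reads = encoder_params / encoder_stride
    decoder_reads = (predictor_params + joiner_params) * tokens_per_frame
    return encoder_reads + decoder_reads

# Two models with the SAME total size (50M parameters) but different architectures:
# A is encoder-heavy, B shifts capacity to the predictor/joiner. Their per-frame
# access counts differ markedly, so model size alone is a poor proxy for memory traffic.
model_a = weight_accesses_per_frame(40e6, 5e6, 5e6)
model_b = weight_accesses_per_frame(10e6, 30e6, 10e6)
print(f"Model A reads/frame: {model_a:.1e}")  # ~4.1e7
print(f"Model B reads/frame: {model_b:.1e}")  # ~1.4e7
```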