Paper ID | SPE-40.5 |
Paper Title | LEARNED TRANSFERABLE ARCHITECTURES CAN SURPASS HAND-DESIGNED ARCHITECTURES FOR LARGE SCALE SPEECH RECOGNITION |
Authors | Liqiang He, Dan Su, Dong Yu, Tencent, China |
Session | SPE-40: Speech Recognition 14: Acoustic Modeling 2 |
Location | Gather.Town |
Session Time | Thursday, 10 June, 15:30 - 16:15 |
Presentation Time | Thursday, 10 June, 15:30 - 16:15 |
Presentation | Poster |
Topic | Speech Processing: [SPE-RECO] Acoustic Modeling for Automatic Speech Recognition |
Abstract | In this paper, we explore neural architecture search (NAS) for automatic speech recognition (ASR) systems. We conduct the architecture search on a small proxy dataset and then evaluate the network constructed from the searched architecture on a large dataset. Specifically, we propose a revised search space that, in principle, steers the search algorithm toward architectures with low complexity. Extensive experiments show that: (i) the architecture learned in the revised search space greatly reduces the computational overhead and GPU memory usage with only mild performance degradation; (ii) the searched architecture achieves more than 15% relative improvement (averaged over the four test sets) on the large dataset, compared with our best hand-designed DFSMN-SAN architecture. To the best of our knowledge, this is the first report of NAS results on a large-scale dataset (up to 10K hours), indicating the promising application of NAS to industrial ASR systems. |
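The abstract does not detail the revised search space, so the following is only a minimal, hypothetical sketch of the kind of differentiable (DARTS-style) mixed operation a low-complexity search space could be built from; the operation names, channel sizes, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's code): a DARTS-style mixed operation whose
# candidate set is restricted to cheap operations, illustrating how a revised
# search space can bias the search toward low-complexity cells.
# All operation names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_op(name, channels):
    # Hypothetical low-complexity candidate operations.
    if name == "skip":
        return nn.Identity()
    if name == "sep_conv_3x3":
        return nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv1d(channels, channels, 1, bias=False),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
        )
    if name == "avg_pool_3":
        return nn.AvgPool1d(3, stride=1, padding=1)
    raise ValueError(name)

class MixedOp(nn.Module):
    """Weighted sum of candidate ops; the architecture weights alpha are learned."""
    def __init__(self, channels, op_names=("skip", "sep_conv_3x3", "avg_pool_3")):
        super().__init__()
        self.ops = nn.ModuleList(make_op(n, channels) for n in op_names)
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture parameters

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Example: features shaped (batch, channels, frames), as in a frame-level ASR encoder.
x = torch.randn(4, 64, 100)
print(MixedOp(64)(x).shape)  # torch.Size([4, 64, 100])

Restricting the candidate set to cheap operations (identity, depthwise-separable convolution, pooling) is one simple way a search space can favor architectures that reduce computation and GPU memory, in the spirit of finding (i) in the abstract.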