Paper ID | HLT-15.6
Paper Title | CLASSIFYING SPEECH INTELLIGIBILITY LEVELS OF CHILDREN IN TWO CONTINUOUS SPEECH STYLES
Authors | Yeh-Sheng Lin, Shu-Chuan Tseng, Institute of Linguistics, Academia Sinica, Taiwan
Session | HLT-15: Language Assessment
Location | Gather.Town
Session Time | Thursday, 10 June, 16:30 - 17:15
Presentation Time | Thursday, 10 June, 16:30 - 17:15
Presentation | Poster
Topic | Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
Abstract
Speech difficulties in children may result from pathological problems. Oral language is normally assessed through expert-directed impressionistic judgments on varying speech types. This paper attempts to construct automatic systems that help detect children with severe speech problems at an early stage. Two continuous speech styles, repetitive and storytelling speech, produced by Chinese-speaking hearing and hearing-impaired children are applied to Long Short-Term Memory (LSTM) and Universal Transformer (UT) models. Three approaches to extracting acoustic features are adopted: MFCCs, Mel spectrograms, and acoustic-phonetic features. Results from leave-one-out cross-validation and from models trained on augmented data show that MFCCs are more useful than Mel spectrograms and acoustic-phonetic features. The LSTM and UT models each have advantages in different settings. Ultimately, our model trained on repetitive speech achieves an F1-score of 0.74 when tested on storytelling speech.
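The pipeline the abstract describes, extracting MFCC features from an utterance and classifying intelligibility level with an LSTM, can be illustrated with a minimal sketch. This is not the authors' implementation: the 16 kHz sample rate, 13 coefficients, hidden size, and the binary intelligibility labels are illustrative assumptions, and the file name `utterance.wav` is hypothetical.

```python
# Minimal sketch (assumed hyperparameters, not the paper's actual model):
# MFCC extraction with librosa followed by an LSTM classifier in PyTorch.
import librosa
import torch
import torch.nn as nn

def extract_mfcc(wav_path, n_mfcc=13):
    """Load audio and return a (time, n_mfcc) MFCC sequence."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, time)
    return torch.tensor(mfcc.T, dtype=torch.float32)        # (time, n_mfcc)

class IntelligibilityLSTM(nn.Module):
    def __init__(self, n_features=13, hidden=64, n_levels=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_levels)

    def forward(self, x):          # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)   # h: (num_layers, batch, hidden)
        return self.head(h[-1])    # logits over intelligibility levels

# Usage: classify a single utterance (hypothetical file name).
# feats = extract_mfcc("utterance.wav").unsqueeze(0)  # add batch dimension
# level = IntelligibilityLSTM()(feats).argmax(dim=-1)
```

Swapping the LSTM for a Transformer-style encoder, or the MFCC front end for Mel spectrograms or acoustic-phonetic features, would mirror the other configurations the abstract compares.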