Paper ID | AUD-9.6
Paper Title | STATISTICAL CORRECTION OF TRANSCRIBED MELODY NOTES BASED ON PROBABILISTIC INTEGRATION OF A MUSIC LANGUAGE MODEL AND A TRANSCRIPTION ERROR MODEL
Authors | Yuki Hiramatsu, Go Shibata, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii (Kyoto University, Japan)
Session | AUD-9: Music Information Retrieval and Music Language Processing 1: Beat and Melody
Location | Gather.Town
Session Time | Wednesday, 09 June, 14:00 - 14:45
Presentation Time | Wednesday, 09 June, 14:00 - 14:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-MIR] Music Information Retrieval and Music Language Processing
Abstract | This paper describes a statistical post-processing method for automatic singing transcription that corrects pitch and rhythm errors included in a transcribed note sequence. Although the performance of frame-level pitch estimation has been improved drastically by deep learning techniques, note-level transcription of singing voice is still an open problem. Inspired by the standard framework of statistical machine translation, we formulate a hierarchical generative model of a transcribed note sequence that consists of a music language model describing the pitch and onset transitions of a true note sequence and a transcription error model describing the addition of deletion, insertion, and substitution errors to the true sequence. Because the length of the true sequence might differ from that of the observed transcribed sequence, the most likely sequences with possibly different lengths are estimated with Viterbi decoding and the most likely length is then selected with a sophisticated language model based on a long short-term memory (LSTM) network.
The experimental results show that the proposed method can correct musically unnatural transcription errors.
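The abstract's core idea is a noisy-channel formulation: the corrected sequence maximizes the product of a language-model prior over true note sequences and an error-model likelihood of the observed transcription. The full method also handles insertion and deletion errors and picks among candidate lengths with an LSTM language model; the minimal Python sketch below illustrates only the substitution-error core via Viterbi decoding. The pitch range, the interval-based transition scores, and the emission probabilities (0.85 / 0.10 / 0.05, with octave errors treated as the most common substitution) are illustrative assumptions, not the authors' actual models.

```python
import numpy as np

# Candidate range of "true" pitches (MIDI note numbers) -- an
# illustrative assumption, not taken from the paper.
PITCHES = np.arange(48, 84)

def log_transition(prev_pitch, next_pitch, sigma=2.5):
    # Toy music language model: stepwise melodic motion is more likely
    # than large leaps; sigma (in semitones) is an assumed spread.
    return -0.5 * ((next_pitch - prev_pitch) / sigma) ** 2

def log_emission(true_pitch, observed_pitch):
    # Toy transcription error model (unnormalized): the observed pitch
    # usually matches the true pitch; octave errors are treated as the
    # most common substitution; other errors decay with distance.
    d = abs(int(observed_pitch) - int(true_pitch))
    if d == 0:
        return np.log(0.85)
    if d == 12:
        return np.log(0.10)
    return np.log(0.05) - d

def viterbi_correct(observed):
    # Most likely true pitch sequence given the transcribed one,
    # integrating the language and error models (substitutions only).
    T, S = len(observed), len(PITCHES)
    delta = np.full((T, S), -np.inf)   # best log-score ending in state s
    psi = np.zeros((T, S), dtype=int)  # backpointers
    for s, p in enumerate(PITCHES):
        delta[0, s] = log_emission(p, observed[0])
    for t in range(1, T):
        for s, p in enumerate(PITCHES):
            scores = delta[t - 1] + np.array(
                [log_transition(q, p) for q in PITCHES])
            psi[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[psi[t, s]] + log_emission(p, observed[t])
    # Backtrack from the best final state.
    state = int(np.argmax(delta[-1]))
    path = [state]
    for t in range(T - 1, 0, -1):
        state = int(psi[t, state])
        path.append(state)
    return [int(PITCHES[s]) for s in reversed(path)]

if __name__ == "__main__":
    # A transcribed scale fragment with one implausible leap (77) at
    # index 3; the decoder should snap it back to a scale-wise pitch
    # (here 65), since the language model makes the leap unlikely.
    observed = [60, 62, 64, 77, 65, 64, 62, 60]
    print(viterbi_correct(observed))
```

Extending this sketch toward the paper's approach would add insertion and deletion arcs to the dynamic program (so true and observed sequences can differ in length), run the decoding for each candidate length, and then rescore the resulting candidates with an LSTM language model to select the most musically plausible one.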