Paper ID | AUD-21.2
Paper Title | STRUCTURE-AWARE AUDIO-TO-SCORE ALIGNMENT USING PROGRESSIVELY DILATED CONVOLUTIONAL NEURAL NETWORKS
Authors | Ruchit Agrawal, Queen Mary University of London, United Kingdom; Daniel Wolff, Institute for Research and Coordination in Acoustics/Music, France; Simon Dixon, Queen Mary University of London, United Kingdom
Session | AUD-21: Music Information Retrieval and Music Language Processing 4: Structure and Alignment
Location | Gather.Town
Session Time | Thursday, 10 June, 14:00 - 14:45
Presentation Time | Thursday, 10 June, 14:00 - 14:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-MIR] Music Information Retrieval and Music Language Processing
Abstract | The identification of structural differences between a music performance and the score is a challenging yet integral step of audio-to-score alignment, an important subtask of music information retrieval. We present a novel method to detect such differences between the score and performance for a given piece of music using progressively dilated convolutional neural networks. Our method incorporates varying dilation rates at different layers to capture both short-term and long-term context, and can be employed successfully in the presence of limited annotated data. We conduct experiments on audio recordings of real performances that differ structurally from the score, and the results demonstrate that our models outperform standard methods for structure-aware audio-to-score alignment.
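The abstract does not specify the network configuration, but the core idea of a progressively dilated CNN can be illustrated with a minimal PyTorch sketch. The dilation schedule (1, 2, 4, 8), the channel counts, the cross-similarity-matrix input, and the per-position match/mismatch output head below are all illustrative assumptions, not the authors' actual architecture: the point is only that stacking convolutions with increasing dilation widens the receptive field from short-term to long-term context without pooling away resolution.

```python
import torch
import torch.nn as nn


class ProgressivelyDilatedCNN(nn.Module):
    """Stack of 2-D convolutions whose dilation rate grows layer by layer,
    so early layers see short-term context and later layers see long-term
    context of a (hypothetical) audio-vs-score cross-similarity matrix."""

    def __init__(self, in_channels=1, hidden_channels=16,
                 dilation_rates=(1, 2, 4, 8), num_classes=2):
        super().__init__()
        layers = []
        channels = in_channels
        for rate in dilation_rates:
            # padding == dilation keeps the spatial size fixed for a 3x3 kernel
            layers.append(nn.Conv2d(channels, hidden_channels, kernel_size=3,
                                    dilation=rate, padding=rate))
            layers.append(nn.ReLU(inplace=True))
            channels = hidden_channels
        self.features = nn.Sequential(*layers)
        # Assumed per-position head, e.g. structural mismatch vs. match
        self.classifier = nn.Conv2d(hidden_channels, num_classes, kernel_size=1)

    def forward(self, similarity_matrix):
        # similarity_matrix: (batch, 1, performance_frames, score_frames)
        x = self.features(similarity_matrix)
        return self.classifier(x)


if __name__ == "__main__":
    model = ProgressivelyDilatedCNN()
    dummy = torch.randn(1, 1, 128, 128)   # toy cross-similarity matrix
    print(model(dummy).shape)             # torch.Size([1, 2, 128, 128])
```

With this schedule the effective receptive field roughly doubles at each layer while the parameter count stays small, which is consistent with the abstract's claim that the approach can be trained with limited annotated data.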