Paper ID | AUD-3.6
Paper Title | EXTENDING MUSIC BASED ON EMOTION AND TONALITY VIA GENERATIVE ADVERSARIAL NETWORK
Authors | Bo-Wei Tseng, Yih-Liang Shen, Tai-Shih Chi, National Chiao Tung University, Taiwan
Session | AUD-3: Music Signal Analysis, Processing, and Synthesis 1: Deep Learning |
Location | Gather.Town |
Session Time | Tuesday, 08 June, 14:00 - 14:45
Presentation Time | Tuesday, 08 June, 14:00 - 14:45
Presentation | Poster
Topic | Audio and Acoustic Signal Processing: [AUD-MSP] Music Signal Analysis, Processing and Synthesis
Abstract | In this paper, we propose a generative model for music extension. The model comprises two classifiers, one for music emotion and one for music tonality, together with a generative adversarial network (GAN). It can therefore generate symbolic music based not only on the low-level spectral and temporal characteristics of previously observed music pieces, but also on their high-level emotion and tonality attributes. The generative model operates in a universal latent space, constructed by a variational autoencoder (VAE), for representing music pieces. We conduct subjective listening tests and derive objective measures for performance evaluation. Experimental results show that the proposed model produces much smoother and more authentic music pieces than the baseline model across all subjective and objective measures.
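The abstract describes a pipeline in which a VAE builds a shared latent space for symbolic music, two classifiers extract emotion and tonality attributes from an observed piece, and a GAN generator extends the piece conditioned on those attributes. Below is a minimal sketch of how such a pipeline could be wired together, assuming a PyTorch implementation; the piano-roll input format, layer sizes, class counts, and conditioning scheme are all illustrative assumptions, not details taken from the paper.

# Sketch of the abstract's pipeline: VAE latent space + attribute
# classifiers + conditional GAN generator. All sizes are assumptions.
import torch
import torch.nn as nn

LATENT = 128      # assumed VAE latent dimensionality
N_EMOTION = 4     # e.g. valence-arousal quadrants (assumption)
N_TONALITY = 24   # 12 major + 12 minor keys (assumption)
FEAT = 128 * 16   # flattened piano-roll segment: 128 pitches x 16 steps

class VAE(nn.Module):
    """Maps symbolic music segments into a shared latent space and back."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FEAT, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, LATENT), nn.Linear(512, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                                 nn.Linear(512, FEAT), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2).
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

class Generator(nn.Module):
    """GAN generator: continues a piece in latent space, conditioned on the
    emotion and tonality predicted from the previously observed segment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_EMOTION + N_TONALITY, 256), nn.ReLU(),
            nn.Linear(256, LATENT))

    def forward(self, z_prev, emotion_logits, tonality_logits):
        cond = torch.cat([z_prev,
                          emotion_logits.softmax(-1),
                          tonality_logits.softmax(-1)], dim=-1)
        return self.net(cond)  # latent code of the extended segment

# Classifiers for the two high-level attributes (architectures assumed).
emotion_clf = nn.Sequential(nn.Linear(FEAT, 256), nn.ReLU(),
                            nn.Linear(256, N_EMOTION))
tonality_clf = nn.Sequential(nn.Linear(FEAT, 256), nn.ReLU(),
                             nn.Linear(256, N_TONALITY))

vae, gen = VAE(), Generator()

# Extend one observed segment: classify it, encode it, generate the next
# latent code, then decode back to a symbolic (piano-roll) segment.
x_observed = torch.rand(1, FEAT)
z_next = gen(vae.encode(x_observed),
             emotion_clf(x_observed), tonality_clf(x_observed))
x_next = vae.dec(z_next)
print(x_next.shape)  # torch.Size([1, 2048])

One plausible reading of this design is that generating in the VAE latent space, rather than directly in the symbolic domain, lets the GAN operate on compact continuous vectors; the VAE decoder then maps each generated code back to notes.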