Paper ID | BIO-12.4 |
Paper Title | Human-centered Favorite Music Classification Using EEG-based Individual Music Preference via Deep Time-series CCA |
Authors | Ryosuke Sawata, Graduate School of Information Science and Technology, Hokkaido University, Japan; Takahiro Ogawa, Miki Haseyama, Faculty of Information Science and Technology, Hokkaido University, Japan |
Session | BIO-12: Feature Extraction and Fusion for Biomedical Applications |
Location | Gather.Town |
Session Time | Friday, 11 June, 11:30 - 12:15 |
Presentation Time | Friday, 11 June, 11:30 - 12:15 |
Presentation | Poster |
Topic | Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing |
Abstract |
This paper proposes a method for classifying musical pieces that a user likes or dislikes by extracting his or her individual music preference. To realize this classification, a new Canonical Correlation Analysis (CCA) scheme, Deep Time-series CCA (DTCCA), is exploited; it captures the correlation between two sets of input features while also accounting for the time-series relations latent in each input. The key difference between DTCCA and existing CCA variants is this ability to model time-series relations, which makes individual electroencephalogram (EEG)-based favorite music classification more effective than methods using other CCA variants, since both EEG and audio signals are time-series data. Experimental results show that DTCCA-based favorite music classification outperformed not only the method using the original features without CCA but also methods using other existing CCA variants, including a state-of-the-art CCA. |
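Since the paper's DTCCA formulation is not reproduced in this listing, the following is a minimal, hypothetical sketch of how a deep, time-series-aware CCA objective could be set up: LSTM encoders map EEG and audio feature sequences to low-dimensional representations, and a Deep-CCA-style loss maximizes the total canonical correlation between the two views. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a time-series-aware Deep CCA training step (assumed design, not the paper's code).
import torch
import torch.nn as nn

class RecurrentEncoder(nn.Module):
    """LSTM encoder mapping a (batch, time, feat) sequence to a d-dimensional vector."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)          # h_n: (1, batch, hidden_dim), last hidden state
        return self.proj(h_n.squeeze(0))    # (batch, out_dim)

def cca_loss(h1, h2, eps=1e-4):
    """Negative total canonical correlation between two views (standard Deep CCA objective)."""
    n = h1.size(0)
    h1 = h1 - h1.mean(dim=0, keepdim=True)
    h2 = h2 - h2.mean(dim=0, keepdim=True)
    s11 = (h1.t() @ h1) / (n - 1) + eps * torch.eye(h1.size(1))
    s22 = (h2.t() @ h2) / (n - 1) + eps * torch.eye(h2.size(1))
    s12 = (h1.t() @ h2) / (n - 1)

    def inv_sqrt(m):
        # Inverse matrix square root via eigendecomposition of a symmetric matrix.
        w, v = torch.linalg.eigh(m)
        return v @ torch.diag(w.clamp_min(eps).rsqrt()) @ v.t()

    t = inv_sqrt(s11) @ s12 @ inv_sqrt(s22)
    return -torch.linalg.svdvals(t).sum()   # maximize correlation = minimize its negative

# Hypothetical dimensions: 32-dim EEG features, 20-dim audio features, 10 canonical components.
eeg_enc = RecurrentEncoder(32, 64, 10)
audio_enc = RecurrentEncoder(20, 64, 10)
opt = torch.optim.Adam(list(eeg_enc.parameters()) + list(audio_enc.parameters()), lr=1e-3)

eeg = torch.randn(16, 50, 32)    # (batch, time, EEG feature dim) - random stand-in data
audio = torch.randn(16, 50, 20)  # (batch, time, audio feature dim) - random stand-in data

loss = cca_loss(eeg_enc(eeg), audio_enc(audio))
opt.zero_grad()
loss.backward()
opt.step()
```

In a full pipeline the learned projections would then feed a like/dislike classifier, but that stage is omitted from this sketch.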