Paper ID | MLSP-43.2
Paper Title | SUBJECT-INVARIANT EEG REPRESENTATION LEARNING FOR EMOTION RECOGNITION
Authors | Soheil Rayatdoost, University of Geneva, Switzerland; Yufeng Yin, University of Southern California, United States; David Rudrauf, University of Geneva, Switzerland; Mohammad Soleymani, University of Southern California, United States
Session | MLSP-43: Biomedical Applications
Location | Gather.Town
Session Time | Friday, 11 June, 13:00 - 13:45
Presentation Time | Friday, 11 June, 13:00 - 13:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-APPL] Applications of machine learning
Abstract | Discrepancies between the distributions of the training and test data, known as domain shift, reduce the generalization of emotion recognition methods. One of the main factors contributing to these discrepancies is human variability. Domain adaptation methods have been developed to alleviate domain shift; however, while these techniques reduce between-database variation, they fail to reduce between-subject variability. In this paper, we propose an adversarial deep domain adaptation approach for emotion recognition from electroencephalogram (EEG) signals. The method jointly learns a new representation that minimizes the emotion recognition loss and maximizes the subject confusion loss. We demonstrate that the proposed representation improves emotion recognition performance both within and across databases.
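
The abstract describes a joint objective: learn an EEG representation that minimizes the emotion recognition loss while maximizing a subject confusion loss. The sketch below illustrates one common way such an adversarial objective can be realized, using a gradient-reversal layer in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the encoder architecture, feature dimension, number of emotion classes and subjects, and the loss weighting `lambd` are all hypothetical.

```python
# Illustrative sketch (not the paper's code): subject-invariant representation
# learning with a gradient-reversal layer. The encoder minimizes emotion loss
# while the reversed gradient pushes it to maximize subject confusion.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SubjectInvariantEEGNet(nn.Module):
    # Sizes below are illustrative assumptions, not the authors' architecture.
    def __init__(self, n_features=310, n_emotions=3, n_subjects=15, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Shared encoder producing the learned EEG representation.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Emotion classifier trained with standard cross-entropy.
        self.emotion_head = nn.Linear(64, n_emotions)
        # Subject classifier attached through gradient reversal: its own loss
        # is minimized by the head but maximized (confused) by the encoder.
        self.subject_head = nn.Linear(64, n_subjects)

    def forward(self, x):
        z = self.encoder(x)
        emotion_logits = self.emotion_head(z)
        subject_logits = self.subject_head(GradReverse.apply(z, self.lambd))
        return emotion_logits, subject_logits


# Usage sketch: one training step on a batch of EEG feature vectors.
model = SubjectInvariantEEGNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 310)                  # batch of EEG features (assumed shape)
y_emotion = torch.randint(0, 3, (32,))    # emotion labels
y_subject = torch.randint(0, 15, (32,))   # subject identity labels

emotion_logits, subject_logits = model(x)
loss = criterion(emotion_logits, y_emotion) + criterion(subject_logits, y_subject)
optimizer.zero_grad()
loss.backward()   # gradient reversal makes the encoder work against the subject head
optimizer.step()
```

Gradient reversal is only one way to implement the adversarial min-max described in the abstract; alternating optimization between the encoder and the subject discriminator would express the same joint objective.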