Paper ID | BIO-13.6
Paper Title | CLASSIFICATION OF EXPERT-NOVICE LEVEL USING EYE TRACKING AND MOTION DATA VIA CONDITIONAL MULTIMODAL VARIATIONAL AUTOENCODER
Authors | Yusuke Akamatsu, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama, Hokkaido University, Japan
Session | BIO-13: Deep Learning for Biomedical Applications
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing
Abstract |
Sensor data from wearable devices have been utilized to analyze differences between experts and novices. Previous studies attempted to classify the expert-novice level from sensor data using supervised learning methods. However, these approaches require collecting sufficient training data that covers the diverse sensor patterns of novices. In this paper, we propose a semi-supervised anomaly detection approach that requires only experts' sensor data for training and identifies novices' data as anomalies. The proposed anomaly detection model, named conditional multimodal variational autoencoder (CMVAE), makes the following two technical contributions: (i) it considers the action information of persons, and (ii) it utilizes multimodal sensor data, i.e., eye tracking data and motion data in this case. The proposed method is evaluated on sensor data measured while expert and novice soccer players performed shooting, dribbling, and ball juggling. Experimental results show that CMVAE classifies the expert-novice level more accurately than previous supervised learning methods and anomaly detection methods based on other VAEs.
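The sketch below illustrates, in PyTorch, how a conditional multimodal VAE of this kind could be set up for expert-only training and anomaly scoring. The feature dimensions, layer sizes, action encoding, and the use of the negative ELBO as the anomaly score are illustrative assumptions for this sketch, not the authors' exact architecture or objective.

```python
# Minimal sketch: a conditional multimodal VAE trained on experts' trials only;
# trials with a high anomaly score (negative ELBO) are classified as novice.
# All dimensions and hyperparameters here are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CMVAE(nn.Module):
    def __init__(self, eye_dim=32, motion_dim=64, action_dim=3,
                 latent_dim=16, hidden=128):
        super().__init__()
        in_dim = eye_dim + motion_dim + action_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.fc_mu = nn.Linear(hidden, latent_dim)
        self.fc_logvar = nn.Linear(hidden, latent_dim)
        # The decoder is also conditioned on the action label.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, eye_dim + motion_dim),
        )

    def forward(self, eye, motion, action):
        x = torch.cat([eye, motion, action], dim=-1)
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([z, action], dim=-1))
        return recon, mu, logvar

def anomaly_score(model, eye, motion, action):
    """Per-sample negative ELBO; higher means less expert-like."""
    recon, mu, logvar = model(eye, motion, action)
    target = torch.cat([eye, motion], dim=-1)
    recon_err = F.mse_loss(recon, target, reduction="none").sum(dim=-1)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    return recon_err + kld

# Training loop sketch on experts' data only (random tensors stand in for
# eye tracking features, motion features, and one-hot action labels).
model = CMVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eye, motion = torch.randn(8, 32), torch.randn(8, 64)
action = F.one_hot(torch.randint(0, 3, (8,)), num_classes=3).float()
loss = anomaly_score(model, eye, motion, action).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

At test time, a threshold fit on the experts' scores would separate expert-like trials from novice (anomalous) ones; conditioning on the action label lets a single model cover shooting, dribbling, and juggling.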