2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Technical Program

Paper Detail

Paper ID: BIO-13.1
Paper Title: ESTIMATION OF VISUAL FEATURES OF VIEWED IMAGE FROM INDIVIDUAL AND SHARED BRAIN INFORMATION BASED ON FMRI DATA USING PROBABILISTIC GENERATIVE MODEL
Authors: Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama, Hokkaido University, Japan
Session: BIO-13: Deep Learning for Biomedical Applications
Location: Gather.Town
Session Time: Friday, 11 June, 11:30 - 12:15
Presentation Time: Friday, 11 June, 11:30 - 12:15
Presentation: Poster
Topic: Biomedical Imaging and Signal Processing: [BIO] Biomedical signal processing
Abstract: This paper presents a method for estimating visual features from brain responses measured while subjects view images. The proposed method estimates visual features of viewed images by using both individual and shared brain information from functional magnetic resonance imaging (fMRI) data recorded as subjects view images. To extract an effective latent space shared by multiple subjects from high-dimensional fMRI data, the proposed method introduces a probabilistic generative model that provides a prior distribution over the space. This model also makes it feasible to extract a feature space for the individual information that is robust to noise. This is the first contribution of our method. Furthermore, the proposed method constructs a decoder that transforms brain information into visual features by collaboratively using the estimated spaces for both individual and shared brain information. This is the second contribution of our method. Experimental results show that the proposed method improves the estimation accuracy of the visual features of viewed images.