Paper ID | MLSP-44.3
Paper Title | GENERATIVE INFORMATION FUSION
Authors | Kenneth Tran, North Carolina State University, United States; Wesam Sakla, Lawrence Livermore National Laboratory, United States; Hamid Krim, North Carolina State University, United States
Session | MLSP-44: Multimodal Data and Applications
Location | Gather.Town
Session Time | Friday, 11 June, 13:00 - 13:45
Presentation Time | Friday, 11 June, 13:00 - 13:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-LMM] Learning from multimodal data
Abstract | In this work, we demonstrate the ability to exploit sensing modalities to compensate for an unrepresented (missing) modality or to potentially re-target resources. This amounts to developing proxy sensing capabilities for multimodal learning. In classical fusion, multiple sensors are required to capture different information about the same target. Maintaining and collecting samples from multiple sensors can be financially demanding, and the effort needed to ensure a logical mapping between the modalities may be prohibitively limiting. We examine the scenario where all modalities are available during training but only a single modality is available at test time. In our approach, we initialize the parameters of our single-modality inference network with weights learned from the fusion of multiple modalities through both classification and GAN losses. Our experiments show that emulating a multimodal system by perturbing a single modality with noise achieves results competitive with using multiple modalities.
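Below is a minimal PyTorch-style sketch of the test-time idea the abstract describes: a two-modality fusion classifier whose missing modality is emulated at inference by a noise-perturbed copy of the available one. The module names, layer sizes, noise scale, and the assumption that both modalities share the same input dimensionality are illustrative choices, not the authors' implementation (which also involves an adversarial/GAN loss during fusion training).

```python
# Illustrative sketch only: emulate a missing modality with a noise-perturbed
# copy of the available one so a fusion-trained classifier can still be used
# with a single modality at test time. All names and dimensions are assumed.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps one modality to a shared feature space."""
    def __init__(self, in_dim: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)


class FusionClassifier(nn.Module):
    """Fuses features from two modalities and classifies the result."""
    def __init__(self, dim_a: int, dim_b: int, n_classes: int, feat_dim: int = 64):
        super().__init__()
        self.enc_a = Encoder(dim_a, feat_dim)
        self.enc_b = Encoder(dim_b, feat_dim)
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x_a, x_b):
        fused = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.head(fused)


def single_modality_forward(model: FusionClassifier, x_a, noise_std: float = 0.1):
    """Inference when only modality A is observed.

    The missing modality B is emulated by a noise-perturbed copy of A
    (assumes both modalities have the same input dimensionality).
    """
    x_b_proxy = x_a + noise_std * torch.randn_like(x_a)
    return model(x_a, x_b_proxy)


if __name__ == "__main__":
    model = FusionClassifier(dim_a=32, dim_b=32, n_classes=10)
    # ... train `model` on both modalities with a classification loss
    # (and, as in the paper, an adversarial loss on the fused features) ...
    x_a = torch.randn(8, 32)                      # test batch: modality A only
    logits = single_modality_forward(model, x_a)  # noise proxy stands in for B
    print(logits.shape)                           # torch.Size([8, 10])
```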