Paper ID: MMSP-6.4
Paper Title: Multi-target DoA estimation with an audio-visual fusion mechanism
Authors: Xinyuan Qian, Maulik Madhavi, Zexu Pan, Jiadong Wang, Haizhou Li, National University of Singapore, Singapore
Session: MMSP-6: Human Centric Multimedia 2
Location: Gather.Town
Session Time: Thursday, 10 June, 14:00 - 14:45
Presentation Time: Thursday, 10 June, 14:00 - 14:45
Presentation: Poster
Topic: Multimedia Signal Processing: Human Centric Multimedia
Abstract: Most prior studies of Direction of Arrival (DoA) estimation focus on a single modality. Humans, however, combine auditory and visual senses to detect the presence of sound sources. With this motivation, we propose to use neural networks with audio and visual signals for multi-speaker localization. Heterogeneous sensors provide complementary information that can overcome uni-modal challenges such as noise, reverberation, illumination variations, and occlusions. We address these issues by introducing an adaptive weighting mechanism for audio-visual fusion. We also propose a novel video simulation method that generates visual features, synchronized with the acoustic features, from noisy 3D target annotations. Experimental results confirm that audio-visual fusion consistently improves speaker DoA estimation, and that the adaptive weighting mechanism yields clear additional benefits.
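The adaptive weighting idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual architecture: a small learned gate scores each modality's embedding per frame, and a softmax over the two scores produces fusion weights, so that an unreliable modality (e.g. an occluded video frame) is down-weighted before the fused feature is passed to the DoA estimator. All names (`AdaptiveFusion`, `w_audio`, `w_video`) are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AdaptiveFusion:
    """Toy per-frame adaptive weighting for audio-visual fusion.

    Hypothetical sketch (not the paper's exact mechanism): a linear
    gate maps each modality embedding to a scalar reliability score;
    a softmax turns the two scores into convex fusion weights.
    """
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # gate parameters (would be learned jointly with the network)
        self.w_audio = rng.standard_normal(dim) / np.sqrt(dim)
        self.w_video = rng.standard_normal(dim) / np.sqrt(dim)

    def __call__(self, f_audio, f_video):
        # f_audio, f_video: (frames, dim) modality embeddings
        scores = np.stack([f_audio @ self.w_audio,
                           f_video @ self.w_video], axis=-1)  # (frames, 2)
        weights = softmax(scores, axis=-1)  # sums to 1 across modalities
        fused = (weights[..., 0:1] * f_audio
                 + weights[..., 1:2] * f_video)  # (frames, dim)
        return fused, weights
```

Because the weights are a softmax over two scores, they always sum to one per frame, so the fused feature stays on the same scale as the inputs regardless of which modality dominates.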