Learning Situation Hyper-Graphs for Video Question Answering
Aisha Urooj Khan, Hilde Kuehne, et al.
CVPR 2023
Humans can localize objects in their environment using both visual and auditory cues, integrating information from multiple modalities into a common reference frame. We introduce a system that leverages unlabeled audiovisual data to learn to localize objects (moving vehicles) in a visual reference frame, using only stereo sound at inference time. Since manually annotating correspondences between audio and object bounding boxes is labor-intensive, we achieve this goal by using the co-occurrence of the visual and audio streams in unlabeled videos as a form of self-supervision, without collecting ground-truth annotations. In particular, we propose a framework consisting of a vision "teacher" network and a stereo-sound "student" network. During training, knowledge embodied in a well-established visual vehicle detection model is transferred to the audio domain, using unlabeled videos as a bridge. At test time, the stereo-sound student network works independently, performing object localization from stereo audio and camera metadata alone, without any visual input. Experimental results on a newly collected Auditory Vehicles Tracking dataset verify that our proposed approach outperforms several baselines. We also demonstrate that our cross-modal auditory localization approach can assist the visual localization of moving vehicles under poor lighting conditions.
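The training scheme described in the abstract, where a frozen vision "teacher" supervises a stereo-sound "student" on unlabeled videos whose visual and audio streams co-occur, can be illustrated with a minimal cross-modal distillation loop. The sketch below is a toy PyTorch example under stated assumptions: the network architectures, tensor shapes, and the MSE objective are placeholders chosen for illustration, not the paper's released implementation.

# Minimal sketch of the cross-modal teacher-student transfer described above.
# All module names, shapes, and the loss are illustrative assumptions.
import torch
import torch.nn as nn

class VisionTeacher(nn.Module):
    """Stand-in for a pretrained visual vehicle detector (kept frozen)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),  # 1-channel localization map
        )

    @torch.no_grad()
    def forward(self, frames):             # frames: (B, 3, H, W)
        return torch.sigmoid(self.backbone(frames))

class StereoSoundStudent(nn.Module):
    """Maps stereo spectrograms to the same localization map as the teacher."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, spectrograms):       # spectrograms: (B, 2, F, T)
        return torch.sigmoid(self.net(spectrograms))

teacher, student = VisionTeacher().eval(), StereoSoundStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
criterion = nn.MSELoss()

# One training step on a synthetic batch of co-occurring frames and stereo audio.
frames = torch.rand(4, 3, 64, 64)          # video frames (teacher input)
spectrograms = torch.rand(4, 2, 64, 64)    # stereo spectrograms (student input)

pseudo_labels = teacher(frames)            # "free" supervision from the vision model
optimizer.zero_grad()
loss = criterion(student(spectrograms), pseudo_labels)
loss.backward()
optimizer.step()

At inference, only the student is kept, so localization can run from stereo audio (plus camera metadata for the reference frame) with no visual input at all.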