A key factor in enabling machine learning models to comprehend and solve real-world tasks is leveraging multimodal data. Unfortunately, annotating multimodal data is challenging and expensive. Recently, self-supervised multimodal methods combining vision and language have been proposed to learn multimodal representations without annotation. However, these methods often ignore the high levels of noise present in such data and thus yield sub-optimal results. In this work, we show that the problem of noise estimation for multimodal data can be reduced to a multimodal density estimation task. Using multimodal density estimation, we propose a noise estimation building block for multimodal representation learning that relies strictly on the inherent correlation between the different modalities. We demonstrate how our noise estimation can be broadly integrated and that it achieves results comparable to the state of the art on five benchmark datasets for two challenging multimodal tasks: Video Question Answering and Text-To-Video Retrieval. Furthermore, we provide a theoretical probabilistic error bound substantiating our empirical results and analyze failure cases.
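The abstract's reduction of noise estimation to density estimation suggests a simple intuition: correctly aligned vision-language pairs should lie in dense regions of a joint embedding space, while mismatched (noisy) pairs fall in sparse ones. The sketch below illustrates that idea with a k-nearest-neighbor density score; it is a minimal illustrative reading, not the paper's implementation. The function knn_density_scores, the concatenation of the two embeddings, and the choice of k are assumptions made here for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density_scores(video_emb, text_emb, k=10):
    """Score each (video, text) pair by the density of its joint
    embedding among all pairs; low density suggests a noisy pair.

    Illustrative sketch only -- the joint representation (concatenation)
    and k-NN density proxy are assumptions, not the paper's method.
    """
    # Joint representation of each pair (one simple choice).
    joint = np.concatenate([video_emb, text_emb], axis=1)
    # Distance to the k-th nearest neighbor as an inverse-density proxy.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(joint)
    dists, _ = nn.kneighbors(joint)  # first column is self, distance 0
    kth_dist = dists[:, -1]
    # Higher score = denser neighborhood = more likely correctly aligned.
    return 1.0 / (kth_dist + 1e-8)

# Usage: down-weight likely-noisy pairs in a self-supervised loss.
rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 64)).astype(np.float32)
t = v + 0.1 * rng.normal(size=(1000, 64)).astype(np.float32)  # aligned pairs
scores = knn_density_scores(v, t, k=10)
weights = scores / scores.max()  # per-pair weights in [0, 1]
```

In a training loop, such per-pair weights could scale each pair's contribution to the contrastive loss, so that pairs with weakly correlated modalities contribute less to the learned representation.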