Publication
ICASSP 2004
Conference paper

Multimodal video search techniques: Late fusion of speech-based retrieval and visual content-based retrieval

Abstract

There has been extensive research into systems for content-based or text-based (e.g., closed captioning, speech transcript) search, some of which has been applied to video. However, the 2001 and 2002 NIST TRECVID benchmarks of broadcast video search systems showed that designing multimodal video search systems that integrate both speech and image (or image sequence) cues, and thereby improve performance beyond that achievable with speech or image cues alone, remains a challenging problem. This paper describes multimodal systems for ad-hoc search constructed by IBM for the TRECVID 2003 benchmark of search systems for broadcast video. These multimodal ad-hoc search systems all use a late fusion of independently developed speech-based and visual content-based retrieval systems, and they outperform our individual speech-based and content-based retrieval systems on both the manual and interactive search tasks. For the manual task, our best system used a query-dependent linear weighting between the speech-based and image-based retrieval systems and achieved Mean Average Precision (MAP) 20% above our best unimodal system for manual search. For the interactive task, where the user has full knowledge of the query topic and of the performance of the individual search systems, our best system used an interlacing approach: the user determines the (subjectively) optimal weights A and B for the speech-based and image-based systems respectively, and the multimodal result set is aggregated by taking the top A documents from the speech-based system followed by the top B documents from the image-based system, repeating this process until the desired result set size is reached. This multimodal interactive search achieves MAP 40% above our best unimodal interactive search system.
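To make the two late-fusion strategies concrete, the following is a minimal Python sketch of query-dependent linear score fusion and rank interlacing as the abstract describes them. The function names, the document-id/score dictionaries, and the assumption that both systems' scores are normalized to a common scale are illustrative choices, not details drawn from the paper.

def linear_fusion(speech_scores, visual_scores, w):
    """Query-dependent linear weighting (manual search).

    speech_scores, visual_scores: dicts mapping document id -> normalized score.
    w: per-query weight in [0, 1]; w = 1 relies on speech cues only.
    Returns document ids ranked by the fused score, highest first.
    """
    docs = set(speech_scores) | set(visual_scores)
    fused = {
        d: w * speech_scores.get(d, 0.0) + (1.0 - w) * visual_scores.get(d, 0.0)
        for d in docs
    }
    return sorted(fused, key=fused.get, reverse=True)

def interlace(speech_ranked, visual_ranked, a, b, k):
    """Interlacing fusion (interactive search): repeatedly take the next a
    unseen documents from the speech-based ranking, then the next b unseen
    documents from the visual ranking, until k results are collected.

    speech_ranked, visual_ranked: document ids in descending rank order.
    a, b: user-chosen per-system depths; k: desired result set size.
    """
    result, seen = [], set()
    it_s, it_v = iter(speech_ranked), iter(visual_ranked)

    def take(it, n):
        # Append up to n unseen documents from iterator it.
        if n <= 0:
            return
        taken = 0
        for d in it:
            if d in seen:
                continue  # skip documents already contributed by the other system
            seen.add(d)
            result.append(d)
            taken += 1
            if taken == n or len(result) == k:
                return

    while len(result) < k:
        before = len(result)
        take(it_s, a)
        if len(result) < k:
            take(it_v, b)
        if len(result) == before:
            break  # both rankings exhausted
    return result

# Example: interlace two toy rankings with depths a=2, b=1, result size 6.
# interlace(["s1", "s2", "s3", "s4"], ["v1", "s2", "v2", "v3"], a=2, b=1, k=6)
# -> ["s1", "s2", "v1", "s3", "s4", "v2"]  (the duplicate "s2" is skipped)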
