Learning semantic multimedia representations from a small set of examples
Abstract
We approach the problem of semantic multimedia retrieval as a supervised learning problem. By defining a lexicon of a small number of interesting semantic concepts, we can support a variety of semantic queries. Since the number of training examples available for each concept is usually small, we explore discriminant learning techniques. In particular, we examine the use of kernel-based methods and demonstrate strong retrieval performance for semantic concepts such as rocket, outdoor, greenery, sky, and face. We also show that loosely coupled multimodal events can be detected by late fusion of the detections of related auditory and visual concepts. Using a Bayesian network for inference, we show how a rocket-launch event can be detected from the detection of a related visual concept (rocket object) and a related auditory concept (explosion/blast-off).
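The late-fusion step described above can be sketched with a minimal two-observation Bayesian network: the event node (rocket launch) has two conditionally independent children, a visual concept detection (rocket object) and an auditory concept detection (explosion/blast-off). The structure and the probability values below are illustrative assumptions, not figures from this work.

```python
def posterior_event(prior, likelihoods, observations):
    """P(event | observations) in a naive-Bayes-structured network.

    prior        : P(event) before seeing any detector output.
    likelihoods  : list of (P(obs=1 | event), P(obs=1 | not event)),
                   one pair per concept detector.
    observations : list of 0/1 detector outputs, same order.
    """
    p_e, p_ne = prior, 1.0 - prior
    for (p_given_e, p_given_ne), obs in zip(likelihoods, observations):
        # Multiply in each detector's likelihood, assuming the
        # detections are conditionally independent given the event.
        p_e *= p_given_e if obs else (1.0 - p_given_e)
        p_ne *= p_given_ne if obs else (1.0 - p_given_ne)
    return p_e / (p_e + p_ne)  # normalize over event / not-event

# Hypothetical numbers: prior P(launch)=0.1; the visual rocket
# detector has hit rate 0.8 and false-alarm rate 0.1; the audio
# explosion detector has hit rate 0.7 and false-alarm rate 0.2.
p_both = posterior_event(0.1, [(0.8, 0.1), (0.7, 0.2)], [1, 1])
p_vis_only = posterior_event(0.1, [(0.8, 0.1), (0.7, 0.2)], [1, 0])
```

With these assumed numbers, both detectors firing together raises the posterior well above what either modality yields alone, which is the intuition behind fusing loosely coupled audio and visual evidence.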