Eugene H. Ratzlaff
ICDAR 2001
A framework for event detection is proposed in which events, objects, and other semantic concepts are detected in video using trained classifiers. These classifiers automatically annotate video with semantic labels, which in turn are used to search for new, untrained types of events and semantic concepts. The novelty of the approach lies in (1) the semi-automatic construction of event models from feature descriptors and (2) the integration of content-based and concept-based querying in the search process. Speech retrieval is applied independently, and combined results are produced. Results on the Search benchmark of the 2001 NIST TREC Video track are reported, and lessons learned and future work are discussed.
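As a rough illustration of the combined querying the abstract describes, the sketch below ranks video shots by a weighted late fusion of concept-label confidences, content-based similarity, and a speech-retrieval score. The `Shot` structure, the fusion weights, and the scoring function are assumptions for illustration only, not the paper's actual method.

```python
# Illustrative sketch: simple late fusion of concept-based, content-based,
# and speech-retrieval evidence for ranking video shots. All fields,
# weights, and scores are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Shot:
    shot_id: str
    concepts: dict = field(default_factory=dict)  # semantic label -> classifier confidence
    content_sim: float = 0.0                      # similarity to a query example (content-based)
    speech_score: float = 0.0                     # score from independent speech retrieval

def fuse_scores(shot: Shot, query_concepts: set,
                w_concept: float = 0.5, w_content: float = 0.3,
                w_speech: float = 0.2) -> float:
    """Weighted combination of the three evidence sources (weights are assumed)."""
    concept_score = max((shot.concepts.get(c, 0.0) for c in query_concepts), default=0.0)
    return w_concept * concept_score + w_content * shot.content_sim + w_speech * shot.speech_score

# Example: rank two shots for a query about beach scenes.
shots = [
    Shot("s1", {"beach": 0.9, "water": 0.7}, content_sim=0.4, speech_score=0.1),
    Shot("s2", {"indoor": 0.8}, content_sim=0.2, speech_score=0.6),
]
ranked = sorted(shots, key=lambda s: fuse_scores(s, {"beach", "water"}), reverse=True)
print([s.shot_id for s in ranked])  # "s1" ranks first for this query
```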
Pavel Kisilev, Daniel Freedman, et al.
ICPR 2012
Srideepika Jayaraman, Chandra Reddy, et al.
Big Data 2021
James E. Gentile, Nalini Ratha, et al.
BTAS 2009