Detecting discussion scenes in instructional videos
Ying Li, Chitra Dorai
ICME 2004
This paper presents a content-based movie parsing and indexing approach that analyzes both audio and visual sources and accounts for their interrelations to extract high-level semantic cues. Specifically, the goal of this work is to extract meaningful movie events and assign them semantic labels for content indexing. Three types of key events are considered: 2-speaker dialogs, multiple-speaker dialogs, and hybrid events. Moreover, speakers present in the detected movie dialogs are further identified by parsing the audio source. The obtained audio and visual cues are then integrated to index the movie content. Experiments show that an effective integration of the audio and visual sources can lead to a higher level of video content understanding, abstraction, and indexing.
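The paper does not publish code, so the sketch below is only an illustration of the kind of audio-visual cue integration the abstract describes: per-shot speaker labels (standing in for the audio parsing) are combined with visual shot boundaries to label a scene as a 2-speaker dialog, a multiple-speaker dialog, or a hybrid event, and the labels are collected into a simple content index. The Shot, label_event, and index_scenes names, and the labeling rule itself, are assumptions for illustration, not the paper's actual classifier.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class Shot:
    """One visual shot plus the speaker heard in it (hypothetical cue format)."""
    start_s: float            # shot start time in seconds
    end_s: float              # shot end time in seconds
    speaker: Optional[str]    # speaker id from audio parsing, or None for non-speech


def label_event(shots: List[Shot]) -> str:
    """Assign a semantic label to a scene from its shots' speaker cues.

    Rule of thumb used here (an assumption, not the paper's method):
      - speech mixed with non-speech audio -> "hybrid"
      - exactly two distinct speakers      -> "2-speaker dialog"
      - three or more distinct speakers    -> "multiple-speaker dialog"
    """
    speakers = {s.speaker for s in shots if s.speaker is not None}
    has_non_speech = any(s.speaker is None for s in shots)
    if speakers and has_non_speech:
        return "hybrid"
    if len(speakers) == 2:
        return "2-speaker dialog"
    if len(speakers) > 2:
        return "multiple-speaker dialog"
    return "other"


def index_scenes(scenes: List[List[Shot]]) -> Dict[str, List[Tuple[float, float]]]:
    """Build a simple content index: event label -> list of (start, end) time ranges."""
    index: Dict[str, List[Tuple[float, float]]] = {}
    for shots in scenes:
        label = label_event(shots)
        span = (shots[0].start_s, shots[-1].end_s)
        index.setdefault(label, []).append(span)
    return index


if __name__ == "__main__":
    # Toy scene: two speakers alternating across four shots.
    dialog = [
        Shot(0.0, 3.0, "A"), Shot(3.0, 6.0, "B"),
        Shot(6.0, 9.0, "A"), Shot(9.0, 12.0, "B"),
    ]
    print(index_scenes([dialog]))  # {'2-speaker dialog': [(0.0, 12.0)]}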