Accurate coverage summarization of UAV videos
Abstract
A large fraction of UAV videos are never watched or analyzed, and there is growing interest in summary views of UAV videos that offer a better overall perspective on the visual content. Real-time summarization of UAV video events is also important from a tactical perspective. Our research focuses on developing resilient video-summarization algorithms that can be efficiently processed either onboard or offline. Our previous work [2] on video summarization focused on event summarization. More recently, we have investigated the challenges of providing coverage summarization of UAV video content. Unlike traditional coverage summarization, which applies a structure-from-motion (SfM) approach (e.g., [7]) to SIFT-based [14] feature points, UAV videos pose several additional challenges, including jitter, low resolution, poor contrast, and a lack of salient features. We propose a novel correspondence algorithm that exploits 3D context to alleviate correspondence ambiguity. Our results on the VIRAT dataset show that our algorithm finds many correct correspondences in low-resolution imagery while avoiding many of the false positives produced by traditional algorithms.
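For context, the traditional SIFT-style correspondence baseline the abstract contrasts against can be sketched as a nearest-neighbor descriptor search with Lowe's ratio test, which rejects a match when the second-best candidate is nearly as close as the best. The tiny synthetic descriptors and function names below are illustrative assumptions, not the paper's implementation or real SIFT output:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping the match only when the nearest distance is clearly smaller
    than the second-nearest distance (Lowe's ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        # Sort candidate matches in desc_b by distance to da.
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        # Accept only unambiguous nearest neighbors.
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# An unambiguous descriptor matches; one sitting between two near-identical
# candidates is rejected -- the kind of ambiguity that, at low resolution,
# produces false positives if the ratio test is weakened or skipped.
print(ratio_test_matches([(0.0, 0.0), (5.0, 5.0)],
                         [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)]))
print(ratio_test_matches([(0.1, 0.0)],
                         [(0.0, 0.0), (0.2, 0.0)]))
```

In repetitive, low-contrast UAV imagery many descriptors fall into the ambiguous second case, which is the failure mode that motivates exploiting 3D context instead of relying on descriptor distances alone.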