As a large proportion of available video media concerns humans, human action retrieval has emerged as a new topic in content-based video retrieval. For retrieving complex human actions, measuring the similarity between two videos represented by local features is a critical issue. In this paper, a fast and explicit feature correspondence approach is presented to compute a match cost that serves as the similarity metric. This metric is then embedded into the framework of manifold ranking for action retrieval. In contrast to the Bag-of-Words model and its variants, our method yields an encouraging improvement in accuracy on the KTH and UCF YouTube datasets with reasonably efficient computation.
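The manifold-ranking framework mentioned above can be illustrated with a minimal sketch. The paper's match-cost similarity would supply the affinity matrix `W` in practice; the toy affinities, the damping factor `alpha`, and the iteration count below are illustrative assumptions, following the standard iterative diffusion formulation of manifold ranking rather than the authors' exact implementation.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.9, iters=100):
    """Rank database items by diffusing query relevance over an affinity graph.

    W: symmetric (n x n) affinity matrix between videos; in the paper this
       would come from the match-cost similarity metric (here an assumption).
    y: query indicator vector (1 at the query item, 0 elsewhere).
    alpha: damping factor balancing graph smoothness vs. fidelity to y.
    """
    # Symmetrically normalize the affinity matrix: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Iterate f <- alpha * S f + (1 - alpha) * y until (approximate) convergence
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
    return f  # higher score = more relevant to the query
```

Retrieval then amounts to sorting the database by descending `f`, excluding the query itself.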
Published: Aug 2013
AVSS 2013 - 10th International Conference on Advanced Video and Signal Based Surveillance - Krakow, Poland
Duration: 1 Aug 2013 → …