Feature encoding has been extensively studied for the task of visual action recognition (VAR). Recently proposed super-vector-based encoding methods, such as the Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV), have significantly improved recognition performance. Despite this success, they still struggle with superfluous information present during the training stage, which makes the methods computationally expensive when applied to a large number of extracted features. To address this challenge, this paper proposes a Saliency-Informed Spatio-Temporal VLAD (SST-VLAD) approach that selects the extracted features corresponding to a small subset of videos in the data set by considering both spatial and temporal video-wise saliency scores; the same extension principle is also applied to the FV approach. The experimental results indicate that the proposed feature encoding schemes consistently outperform the existing ones at significantly lower computational cost.
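The abstract does not specify how the spatio-temporal saliency scores are computed, so the following is only a minimal sketch of the standard VLAD encoding pipeline with a hypothetical saliency-based video pre-selection step (`select_salient`, `keep_ratio`, and the scoring inputs are illustrative assumptions, not the paper's method):

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Standard VLAD: sum the residuals of descriptors to their nearest
    codeword, then power- and L2-normalise the concatenated result."""
    K, D = codebook.shape
    # Hard-assign each descriptor to its nearest codeword.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)
    vlad = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            vlad[k] = (members - codebook[k]).sum(axis=0)
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))  # power normalisation
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

def select_salient(video_features, saliency_scores, keep_ratio=0.1):
    """Hypothetical pre-selection: keep descriptors only from the
    top-scoring fraction of videos, standing in for the paper's
    spatio-temporal video-wise saliency ranking."""
    n_keep = max(1, int(len(video_features) * keep_ratio))
    top = np.argsort(saliency_scores)[::-1][:n_keep]
    return np.vstack([video_features[i] for i in top])
```

The intended saving is that the codebook and the encoding operate on descriptors from only a small, salient subset of videos rather than the full training set.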
Publication status: Published - 1 Jan 2019
Event: 29th British Machine Vision Conference, BMVC 2018 - Newcastle, United Kingdom
Duration: 3 Sep 2018 → 6 Sep 2018