Abstract
Most existing tracking algorithms do not explicitly consider the motion blur contained in video sequences, which degrades their performance in real-world applications where motion blur often occurs. In this paper, we propose to solve the motion blur problem in visual tracking in a unified framework. Specifically, a joint blur state estimation and multi-task reverse sparse learning framework is presented, in which closed-form solutions for the blur kernel and the sparse code matrix are obtained simultaneously. The reverse process treats the blurry candidates as dictionary elements and sparsely represents the blurred templates with these candidates. By exploiting the information contained in the sparse code matrix, an efficient likelihood model is further developed, which quickly excludes irrelevant candidates and reduces the number of particles to be evaluated. Experimental results on challenging benchmarks show that our method performs well against state-of-the-art trackers.
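To make the reverse representation step concrete, the following is a minimal sketch, not the authors' implementation: it assumes the blurred templates are stacked as columns of a matrix T, the blurry candidates form the dictionary D, and the multi-task coupling is modelled with an L2,1 penalty solved by generic proximal gradient descent. The paper's joint, closed-form estimation of the blur kernel is not reproduced here; the function and variable names are illustrative only.

```python
import numpy as np

def reverse_sparse_codes(candidates, templates, lam=0.1, n_iter=100):
    """Reverse sparse representation (sketch): represent blurred templates
    with the candidate set as the dictionary.

    candidates : (d, n) array, each column a vectorised blurry candidate patch
    templates  : (d, m) array, each column a vectorised blurred template
    Returns C  : (n, m) sparse code matrix, one row per candidate.
    """
    D, T = candidates, templates
    C = np.zeros((D.shape[1], T.shape[1]))
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ C - T)             # gradient of 0.5 * ||T - D C||_F^2
        Z = C - grad / L
        # Row-wise soft thresholding (proximal operator of the L2,1 norm):
        # a candidate is kept or dropped jointly across all template tasks.
        row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - (lam / L) / np.maximum(row_norms, 1e-12), 0.0)
        C = shrink * Z
    return C

def candidate_scores(C):
    """Row norms of the code matrix: candidates whose rows are near zero
    contribute little to representing the templates and can be excluded
    before full likelihood evaluation."""
    return np.linalg.norm(C, axis=1)
```

In this sketch, the likelihood-style pruning described in the abstract corresponds to thresholding `candidate_scores(C)`: only candidates with non-negligible row norms are retained for further evaluation, which is one plausible way to narrow the particle set.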
Original language | English |
---|---|
Pages (from-to) | 5867-5876 |
Journal | IEEE Transactions on Image Processing |
Volume | 25 |
Issue number | 12 |
Early online date | 6 Oct 2016 |
DOIs | |
Publication status | Published - Dec 2016 |
Keywords
- Motion blur
- tracking
- sparse representation