Robust Visual Tracking Based on Improved Perceptual Hashing for Robot Vision

Mengjuan Fei, Jing Li, Ling Shao, Zhaojie Ju, Gaoxiang Ouyang

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

5 Citations (Scopus)


In this paper, perceptual hash codes are adopted as appearance models of objects for visual tracking. Building on three existing basic perceptual hashing techniques, we propose Laplace-based hash (LHash) and Laplace-based difference hash (LDHash) to track objects efficiently and robustly in challenging video sequences. Qualitative and quantitative comparisons with previous representative tracking methods, such as mean-shift and compressive tracking, show that perceptual hashing-based tracking outperforms them and that the two newly proposed algorithms perform best in various challenging environments in terms of efficiency, accuracy and robustness. In particular, they can overcome severe challenges such as illumination changes, motion blur and pose variation.
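As a minimal illustration of the idea behind perceptual hashing-based appearance models, the sketch below implements a basic difference hash (dHash), one of the family of simple perceptual hashes this work builds on, and uses Hamming distance to pick the candidate window that best matches a template. The block-mean downsampling, patch sizes and function names here are illustrative assumptions; the paper's Laplace-based variants (LHash, LDHash) are not reproduced.

```python
import numpy as np

def dhash(patch, size=8):
    """Basic difference hash: downsample the grayscale patch to
    (size, size+1) via block means (a crude stand-in for proper
    resizing), then compare horizontally adjacent pixels to get
    size*size bits."""
    h, w = patch.shape
    rows = np.array_split(np.arange(h), size)
    cols = np.array_split(np.arange(w), size + 1)
    small = np.array([[patch[np.ix_(r, c)].mean() for c in cols]
                      for r in rows])
    # one bit per horizontal neighbour pair: is the right pixel brighter?
    return (small[:, 1:] > small[:, :-1]).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing bits between two hash codes."""
    return int(np.count_nonzero(h1 != h2))

# Toy tracking step: the candidate window whose hash is closest to the
# template's hash (in Hamming distance) is selected as the new target.
rng = np.random.default_rng(0)
template = rng.random((32, 32))            # stand-in grayscale patch
target_hash = dhash(template)
candidates = [
    template + rng.normal(0.0, 0.01, template.shape),  # slightly perturbed
    rng.random((32, 32)),                              # unrelated clutter
]
distances = [hamming(dhash(c), target_hash) for c in candidates]
best = int(np.argmin(distances))           # expect the perturbed copy to win
```

Because the hash encodes only the sign of local intensity gradients, it is largely insensitive to global brightness shifts, which is one reason hash-based appearance models cope well with illumination changes.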
Original language: English
Title of host publication: Intelligent Robotics and Applications
Editors: Honghai Liu, Naoyuki Kubota, Xiangyang Zhu, Rüdiger Dillman, Dalin Zhou
Place of Publication: London
ISBN (Print): 978-3-319-22872-3
Publication status: Published - 20 Aug 2015

Publication series

Name: Lecture Notes in Computer Science
ISSN (Electronic): 0302-9743


