Learning to Hash With Optimized Anchor Embedding for Scalable Retrieval

Yuchen Guo, Guiguang Ding, Li Liu, Jungong Han, Ling Shao

Research output: Contribution to journal › Article › peer-review



Sparse representation and image hashing are powerful tools for data representation and image retrieval, respectively. Combinations of the two for scalable image retrieval, i.e., sparse hashing (SH) methods, have been proposed in recent years, and the preliminary results are promising. The core of these methods is a scheme that efficiently embeds high-dimensional image features into a low-dimensional Hamming space while preserving the similarity between features. Existing SH methods mostly focus on finding better sparse representations of images in the hash space. We argue that the anchor set used in the sparse representation is also crucial, a factor underestimated by prior art. To this end, we propose a novel SH method, termed Sparse Hashing with Optimized Anchor Embedding, that optimizes the positions of the anchors so that the features can be better embedded and binarized. The central idea is to push the anchors far from the axes while preserving their relative positions, so that neighboring features receive similar hashcodes. We formulate this idea as an orthogonality-constrained maximization problem and develop an efficient and novel optimization framework for it. Extensive experiments on five benchmark image data sets demonstrate that our method outperforms several state-of-the-art related methods.
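To make the pipeline in the abstract concrete, the following is a minimal, illustrative sketch of anchor-based sparse embedding followed by sign binarization. It is not the authors' actual algorithm: the nearest-anchor Gaussian weighting, the function name, and the use of a random QR-derived orthogonal matrix (standing in for the orthogonal transform the paper learns by solving its constrained maximization problem) are all assumptions made for illustration.

```python
import numpy as np

def anchor_embed_hash(X, A, R, s=3):
    """Illustrative sketch: sparse anchor embedding + sign binarization.

    X : (n, d) image features
    A : (m, d) anchor points
    R : (m, m) orthogonal matrix (the paper learns this; here a placeholder)
    s : number of nearest anchors kept per feature (sparsity level)
    """
    # squared Euclidean distance from every feature to every anchor
    d2 = ((X[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1)
    # keep only the s nearest anchors per feature (assumed weighting scheme)
    idx = np.argsort(d2, axis=1)[:, :s]
    rows = np.arange(X.shape[0])[:, None]
    # Gaussian-kernel weights, shifted by the per-row minimum for stability
    w = np.exp(-(d2[rows, idx] - d2[rows, idx].min(axis=1, keepdims=True)))
    Z = np.zeros_like(d2)
    Z[rows, idx] = w
    Z /= Z.sum(axis=1, keepdims=True)  # normalized sparse codes over anchors
    # apply the orthogonal transform, then binarize into {-1, +1} hashcodes
    return np.sign(Z @ R)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))    # 8 toy features in 16 dimensions
A = rng.normal(size=(32, 16))   # 32 toy anchors
# placeholder orthogonal matrix via QR; the paper optimizes it instead
Q, _ = np.linalg.qr(rng.normal(size=(32, 32)))
codes = anchor_embed_hash(X, A, Q)
```

Because the transform is orthogonal, relative positions of the embedded points are preserved, which is the property the abstract's "push anchors from the axes while preserving their relative positions" idea builds on; here the orthogonal matrix is merely random, so the sketch shows the data flow, not the optimization.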
Original language: English
Pages (from-to): 1344-1354
Number of pages: 11
Journal: IEEE Transactions on Image Processing
Issue number: 3
Early online date: 15 Jan 2017
Publication status: Published - 1 Mar 2017


