Robust Iterative Quantization for Efficient ℓp-norm Similarity Search

Yuchen Guo, Guiguang Ding, Jungong Han, Xiaoming Jin

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Iterative Quantization (ITQ) has been one of the most successful hashing-based nearest-neighbor search methods for large-scale information retrieval in the past few years, owing to its simplicity and superior performance. However, the performance of this algorithm degrades significantly when dealing with noisy data. Additionally, it can hardly serve a wide range of applications because its distortion measurement is limited to the ℓ2 norm. In this paper, we propose an ITQ+ algorithm, aiming to enhance both the robustness and the generality of the original ITQ algorithm. Specifically, an ℓp,q-norm loss function is proposed to conduct ℓp-norm similarity search, rather than ℓ2-norm search. Although changing the loss function to the ℓp,q-norm makes our algorithm more robust and generic, it poses the challenge of minimizing the resulting orthogonality-constrained ℓp,q-norm function, which is non-smooth and non-convex. To solve this problem, we propose a novel and efficient optimization scheme. Extensive experiments on benchmark datasets demonstrate that ITQ+ is overwhelmingly better than the original ITQ algorithm, especially when searching for similar items in noisy data.
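For context, the ℓ2 baseline that ITQ+ generalizes is the classic ITQ alternating minimization: given zero-centered, PCA-projected data V, it alternates between binarizing the rotated data and updating the rotation via an orthogonal Procrustes step. A minimal sketch (function name and parameters are illustrative; this is the original ℓ2 formulation, not the ℓp,q-norm variant proposed in the paper):

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Classic ITQ alternating minimization (the l2 baseline).

    V: zero-centered, PCA-projected data of shape (n, c).
    Returns binary codes B in {-1, +1} and an orthogonal rotation R
    that locally minimize the quantization loss ||B - V R||_F^2.
    """
    rng = np.random.default_rng(seed)
    # Random orthogonal initialization via QR of a Gaussian matrix.
    R, _ = np.linalg.qr(rng.standard_normal((V.shape[1], V.shape[1])))
    for _ in range(n_iter):
        # Fix R, update the codes: B = sign(V R).
        B = np.sign(V @ R)
        B[B == 0] = 1
        # Fix B, update R: orthogonal Procrustes via SVD of V^T B.
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    B = np.sign(V @ R)
    B[B == 0] = 1
    return B, R
```

ITQ+ replaces the Frobenius (ℓ2,2) loss above with an ℓp,q-norm loss, which changes the R-update from a closed-form SVD step to the non-smooth, non-convex orthogonality-constrained problem the paper's optimization scheme addresses.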
Original language: English
Title of host publication: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence
Publisher: International Joint Conferences on Artificial Intelligence
ISBN (Print): 978-1-57735-771-1
Publication status: Published - 2016

