Iterative Quantization (ITQ) has been one of the most successful hashing-based nearest-neighbor search methods for large-scale information retrieval in the past few years, owing to its simplicity and strong performance. However, its performance degrades significantly on noisy data. Moreover, it supports only a narrow range of applications, because its distortion measure is limited to the ℓ2 norm. In this paper, we propose ITQ+, an algorithm that enhances both the robustness and the generality of the original ITQ. Specifically, we introduce an ℓp,q-norm loss function to perform ℓp-norm similarity search rather than ℓ2-norm search. Although changing the loss function to the ℓp,q-norm makes the algorithm more robust and more general, it raises the challenge of minimizing the resulting orthogonality-constrained ℓp,q-norm objective, which is non-smooth and non-convex. To solve this problem, we propose a novel and efficient optimization scheme. Extensive experiments on benchmark datasets demonstrate that ITQ+ substantially outperforms the original ITQ, especially for similarity search in noisy data.
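For context, the baseline that ITQ+ generalizes is the original ITQ of Gong and Lazebnik, which alternately updates binary codes B and an orthogonal rotation R to minimize the ℓ2 (Frobenius) quantization loss ||B − VR||²_F. The sketch below shows that standard alternation (not the ℓp,q-norm ITQ+ solver, whose optimization scheme is described only in the paper); the function name `itq_rotation` and the assumption that `V` holds zero-centered, PCA-projected data are illustrative choices, not from the source.

```python
import numpy as np

def itq_rotation(V, n_bits, n_iters=50, seed=0):
    """Alternating minimization for the original ITQ objective
    min_{B,R} ||B - V R||_F^2, with R orthogonal and B in {-1,+1}.

    V: (n_samples, n_bits) zero-centered, PCA-projected data (assumed).
    Returns the learned rotation R and binary codes B.
    """
    rng = np.random.default_rng(seed)
    # Initialize R as a random orthogonal matrix via QR decomposition.
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))
    for _ in range(n_iters):
        # Step 1: fix R, update codes by elementwise sign: B = sgn(V R).
        B = np.sign(V @ R)
        B[B == 0] = 1.0
        # Step 2: fix B, update R by orthogonal Procrustes:
        # SVD of B^T V = S * Omega * S_hat^T  gives  R = S_hat * S^T.
        S, _, S_hat_T = np.linalg.svd(B.T @ V)
        R = S_hat_T.T @ S.T
    return R, B
```

Each step can only lower (or keep) the ℓ2 loss, so the alternation converges to a local minimum; ITQ+'s contribution is replacing this ℓ2 objective with an ℓp,q-norm one, for which the Procrustes closed form no longer applies.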
Title of host publication: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence
Publisher: International Joint Conferences on Artificial Intelligence
Publication status: Published - 2016