Abstract
In this paper, we present a novel sparsity-based hashing framework termed Sparse Embedded Hashing (SEH), which exploits the technique of sparse coding. Unlike most existing systems, which focus on finding either a better sparse representation in the hash space or an optimal solution for preserving the pairwise similarities of the original data, we aim to solve these two problems jointly. More specifically, SEH first generates sparse representations in a data-driven way, and then learns a projection matrix that accounts for sparse representation, affinity preservation, and linear embedding. To make the learned compact features locality-sensitive, SEH employs matrix factorization to approximate the Euclidean structure of the original data. The use of matrix factorization enables the decomposed matrix to be constructed from either visual or textual features, depending on which kind of Euclidean structure is to be preserved. Owing to this flexibility, the SEH framework can handle both single-modal and cross-modal retrieval. Experiments show that this method achieves substantially better performance than state-of-the-art approaches on both single- and cross-modal retrieval tasks.
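The pipeline the abstract describes, data-driven sparse coding followed by a learned linear projection and binarization into hash codes, can be sketched in NumPy as below. This is a minimal illustration under stated assumptions: the ISTA sparse coder, the random dictionary, and the least-squares projection target are all stand-ins, not the paper's actual SEH formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n samples with d-dimensional features (stand-in for visual
# features), a k-atom dictionary, and b hash bits.
n, d, k, b = 100, 32, 64, 16

X = rng.standard_normal((n, d))
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)  # column-normalized dictionary (illustrative)

def ista(X, D, lam=0.1, n_iter=50):
    """Sparse codes via iterative soft-thresholding (a generic sparse coder,
    used here only as a placeholder for SEH's data-driven sparse coding)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    S = np.zeros((X.shape[0], D.shape[1]))
    for _ in range(n_iter):
        V = S - (S @ D.T - X) @ D / L          # gradient step
        S = np.sign(V) * np.maximum(np.abs(V) - lam / L, 0)  # soft threshold
    return S

# Step 1: data-driven sparse representations.
S = ista(X, D)

# Step 2: learn a linear projection W from sparse codes to b dimensions.
# Here W is fit by least squares onto a random linear target; the paper
# instead learns it to preserve pairwise affinities via matrix factorization.
target = X @ rng.standard_normal((d, b))  # hypothetical placeholder target
W, *_ = np.linalg.lstsq(S, target, rcond=None)

# Step 3: binarize the embedded features into compact hash codes.
codes = np.where(S @ W >= 0, 1.0, -1.0)
print(codes.shape)  # (100, 16)
```

Cross-modal retrieval would follow the same shape: the decomposed matrix (here the projection target) is built from visual or textual features depending on which Euclidean structure is to be preserved.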
Original language | English |
---|---|
Pages (from-to) | 24-36 |
Journal | Neurocomputing |
Volume | 257 |
Early online date | 14 Feb 2017 |
DOIs | |
Publication status | Published - 27 Sept 2017 |
Keywords
- Hashing
- Sparse coding
- Matrix factorization