Supervised Matrix Factorization Hashing for Cross-Modal Retrieval

Jun Tang, Ke Wang, Ling Shao

Research output: Contribution to journal › Article › peer-review

195 Citations (Scopus)


The goal of cross-modal hashing is to embed heterogeneous multimedia data into a common low-dimensional Hamming space, which plays a pivotal role in multimedia retrieval given the emergence of big multimodal data. Recently, matrix factorization has achieved great success in cross-modal hashing. However, how to effectively use label information and local geometric structure remains a challenging problem for these approaches. To address this issue, we propose a cross-modal hashing method based on collective matrix factorization, which considers both the label consistency across different modalities and the local geometric consistency within each modality. These two elements are formulated as a graph Laplacian term in the objective function, leading to a substantial improvement in the discriminative power of the latent semantic features obtained by collective matrix factorization. Moreover, the proposed method learns unified hash codes for the different modalities of an instance to facilitate cross-modal search, and the objective function is solved using an iterative strategy. Experimental results on two benchmark data sets show the effectiveness of the proposed method and its superiority over state-of-the-art cross-modal hashing methods.
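The approach described above can be illustrated with a toy sketch: factorize each modality's feature matrix X_m as U_m V with a shared latent matrix V, add a graph Laplacian penalty tr(V L V^T) whose graph encodes label agreement, alternate closed-form updates, and binarize V into unified hash codes. This is only an illustrative reconstruction under stated assumptions, not the authors' actual formulation or optimizer; the function name `smfh_sketch`, the penalty weight `lam`, the ridge term `eps`, and the median-threshold binarization are all hypothetical choices.

```python
import numpy as np

def smfh_sketch(X1, X2, labels, k=16, lam=1.0, n_iter=20, eps=1e-3, seed=0):
    """Toy sketch (not the paper's exact method): collective matrix
    factorization X_m ~ U_m V with a label-driven graph Laplacian penalty
    lam * tr(V L V^T), followed by binarizing the shared codes V.

    X1: (d1, n) and X2: (d2, n) features of the same n instances in two
    modalities; labels: (n,) class labels. Returns codes B of shape (k, n).
    """
    rng = np.random.default_rng(seed)
    n = X1.shape[1]
    # Label-consistency graph: connect instances that share a class label.
    W = (labels[:, None] == labels[None, :]).astype(float)
    L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian
    V = rng.standard_normal((k, n))
    Ik, In = np.eye(k), np.eye(n)
    for _ in range(n_iter):
        # Basis updates: ridge-regularized least squares per modality.
        G = np.linalg.inv(V @ V.T + eps * Ik)
        U1 = X1 @ V.T @ G
        U2 = X2 @ V.T @ G
        # Shared-code update solves A V + lam V L = Q (a Sylvester system);
        # vectorized with Kronecker products, so only viable for small n.
        A = U1.T @ U1 + U2.T @ U2 + eps * Ik
        Q = U1.T @ X1 + U2.T @ X2
        S = np.kron(In, A) + lam * np.kron(L.T, Ik)
        V = np.linalg.solve(S, Q.reshape(-1, order="F"))
        V = V.reshape((k, n), order="F")
    # Binarize each bit around its median for roughly balanced hash codes.
    return np.where(V > np.median(V, axis=1, keepdims=True), 1, -1)
```

Because the codes are unified across modalities, a query from either modality is hashed into the same Hamming space, so retrieval reduces to Hamming-distance ranking against B.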
Original language: English
Pages (from-to): 3157-3166
Journal: IEEE Transactions on Image Processing
Issue number: 7
Early online date: 6 May 2016
Publication status: Published - 1 Jul 2016


