Low-rank representation with adaptive dictionary learning for subspace clustering

Jie Chen, Hua Mao*, Zhu Wang, Xinpei Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

High-dimensional data are often treated as collections of data samples approximately drawn from a union of multiple low-dimensional subspaces. Subspace clustering, in which high-dimensional data samples are divided into low-dimensional subspace clusters, provides valuable insight into the underlying structure of high-dimensional data. The key challenge in subspace clustering is how to effectively measure the similarity among data samples. This paper presents an adaptive low-rank representation (ALRR) method for subspace clustering. An adaptive dictionary learning strategy that employs an orthonormality constraint is integrated into the low-rank representation (LRR) model. The dictionary, adaptively learned from the original data, makes the ALRR model robust to noise. The projection matrix and low-rank features are obtained simultaneously using an alternating optimization method. The convergence of ALRR is theoretically guaranteed under certain conditions, with ALRR requiring at most three iterations for optimization. Consequently, ALRR achieves a better convergence rate than several existing LRR algorithms. Experimental results on benchmark datasets show that the proposed method significantly outperforms several state-of-the-art subspace clustering methods, which indicates the effectiveness of ALRR for subspace clustering.
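To illustrate the LRR model that ALRR builds on, the following is a minimal NumPy sketch of the classical noiseless case, where the data matrix itself serves as the dictionary (this is not the authors' ALRR, which instead learns an orthonormal dictionary adaptively): the problem min ||Z||_* s.t. X = XZ has the closed-form solution Z = V_r V_r^T from the skinny SVD of X, and for data drawn from independent subspaces this Z is block diagonal, so |Z| + |Z^T| serves as an affinity matrix for spectral clustering. The dimensions and sample counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent 2-D subspaces in R^10, 50 noiseless samples each.
basis1, _ = np.linalg.qr(rng.standard_normal((10, 2)))
basis2, _ = np.linalg.qr(rng.standard_normal((10, 2)))
X = np.hstack([basis1 @ rng.standard_normal((2, 50)),
               basis2 @ rng.standard_normal((2, 50))])

# Closed-form LRR solution with the data as its own dictionary:
# min ||Z||_* s.t. X = XZ  =>  Z = Vr @ Vr.T  (skinny SVD of X).
_, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int((s > 1e-8 * s[0]).sum())   # numerical rank (4 here: 2 + 2)
Vr = Vt[:r].T
Z = Vr @ Vr.T

# Symmetric affinity usable by spectral clustering.
W = np.abs(Z) + np.abs(Z.T)

# For independent subspaces, cross-subspace entries of Z vanish,
# so the affinity cleanly separates the two clusters.
print(np.abs(Z[:50, 50:]).max())   # ~0 (numerical noise only)
```

In the noisy setting, and in ALRR, the dictionary is no longer the raw data and the representation must be obtained iteratively; the abstract's claim is that the adaptively learned orthonormal dictionary keeps this iteration short (at most three iterations under the stated conditions).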

Original language: English
Article number: 107053
Number of pages: 11
Journal: Knowledge-Based Systems
Volume: 223
Early online date: 18 Apr 2021
DOIs
Publication status: Published - 8 Jul 2021

