Learning Object-to-Class Kernels for Scene Classification

Lei Zhang, Xiantong Zhen, Ling Shao

Research output: Contribution to journal › Article › peer-review

110 Citations (Scopus)

Abstract

High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the introduction of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose object-to-class (O2C) distances to model scene images. In particular, four variants of O2C distances are presented; with these distances, images described by the object bank can be mapped into lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Because the O2C distances are computed explicitly from the object bank, the resulting representations carry more semantic meaning. To combine the discriminative ability of the O2C distances across all scene classes, we further propose to kernelize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches significantly improve the original object bank approach and achieve state-of-the-art performance.
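The following is a minimal sketch of the pipeline the abstract describes: object-bank vectors are mapped into an O2C distance space (one coordinate per scene class) and the distance representation is then fed to a kernel classifier. The nearest-neighbour Euclidean distance, the RBF kernel, and all names below are illustrative assumptions, not the paper's actual O2C variants or learned kernels.

```python
# Hypothetical sketch: object-bank vectors -> O2C distance space -> kernel SVM.
# The distance definition here (nearest-neighbour Euclidean) is an assumption
# standing in for the paper's four O2C distance variants.
import numpy as np
from sklearn.svm import SVC

def o2c_distances(x, class_banks):
    """Distance from one object-bank vector x to each scene class.

    class_banks: list of (n_c, d) arrays, one per class, holding the
    object-bank vectors of that class's training images.
    """
    return np.array([np.min(np.linalg.norm(bank - x, axis=1))
                     for bank in class_banks])

def distance_space(X, class_banks):
    """Map object-bank vectors into the lower-dimensional O2C distance space
    (one coordinate per scene class)."""
    return np.vstack([o2c_distances(x, class_banks) for x in X])

# Toy usage with random data standing in for object-bank responses.
rng = np.random.default_rng(0)
n_classes, d = 4, 200                     # d: number of object filters
class_banks = [rng.normal(c, 1.0, size=(30, d)) for c in range(n_classes)]
X_train = np.vstack(class_banks)
y_train = np.repeat(np.arange(n_classes), 30)

D_train = distance_space(X_train, class_banks)
# Kernelize the distance representation; a fixed RBF kernel is used here,
# whereas the paper learns O2C kernels for the final classification.
clf = SVC(kernel="rbf", gamma="scale").fit(D_train, y_train)

x_test = rng.normal(1, 1.0, size=(1, d))
print(clf.predict(distance_space(x_test, class_banks)))
```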
Original language: English
Pages (from-to): 3241-3253
Journal: IEEE Transactions on Image Processing
Volume: 23
Issue number: 8
Publication status: Published - Aug 2014

Keywords

  • Object bank
  • scene classification
  • object-to-class distances
  • object filters
  • kernels
