Multi-view action recognition using local similarity random forests and sensor fusion

Fan Zhu, Ling Shao, Mingxiu Lin

Research output: Contribution to journal › Article › peer-review

58 Citations (Scopus)

Abstract

This paper addresses the multi-view action recognition problem with a local segment similarity voting scheme, upon which we build a novel multi-sensor fusion method. The recently proposed random forests classifier is used to map the local segment features to their corresponding prediction histograms. We compare the results of our approach with those of the baseline Bag-of-Words (BoW) and the Naïve-Bayes Nearest Neighbor (NBNN) methods on the multi-view IXMAS dataset. Additionally, comparisons between our multi-camera fusion strategy and the commonly used early feature-concatenation strategy are carried out using different camera views and different segment scales. The results show that the proposed sensor fusion technique, coupled with the random forests classifier, is effective for multi-view human action recognition.
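To make the general idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes each action clip is described by local segment descriptors per camera view, trains one scikit-learn RandomForestClassifier per view, and fuses the per-segment class-probability histograms by summation across segments and views as a stand-in for the paper's voting and sensor-fusion scheme. The helper names (train_view_forest, classify_clip) and all parameters are illustrative assumptions.

```python
# Illustrative sketch only: segment-level random-forest voting with late fusion
# of per-camera prediction histograms (assumed setup, not the published code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def train_view_forest(segment_features, segment_labels, n_trees=100):
    """Train one random forest per camera view on local segment descriptors."""
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    forest.fit(segment_features, segment_labels)
    return forest


def classify_clip(forests, clip_segments_per_view):
    """Predict an action label for one clip seen from several cameras.

    clip_segments_per_view: list with one entry per view, each an array of
    shape (n_segments, n_features) holding the clip's local segment descriptors.
    Assumes every forest was trained on the same set of action classes, so the
    columns of predict_proba are aligned across views.
    """
    fused = None
    for forest, segments in zip(forests, clip_segments_per_view):
        # Each local segment votes with its class-probability histogram;
        # summing accumulates the votes over segments and camera views.
        hist = forest.predict_proba(segments).sum(axis=0)
        fused = hist if fused is None else fused + hist
    return int(np.argmax(fused))  # class index with the most accumulated votes
```

In this sketch, fusion happens at the prediction-histogram level (late fusion), in contrast to the early feature-concatenation baseline mentioned in the abstract, where descriptors from all views would be stacked into a single feature vector before a single classifier is trained.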
Original language: English
Pages (from-to): 20-24
Journal: Pattern Recognition Letters
Volume: 34
Issue number: 1
Publication status: Published - 1 Jan 2013

Keywords

  • Local similarity
  • Random forests
  • Sensor fusion
  • Voting strategy
  • IXMAS
  • Action recognition
