Video abstraction based on fMRI-driven visual attention model

Junwei Han, Kaiming Li, Ling Shao, Xintao Hu, Sheng He, Lei Guo, Jungong Han, Tianming Liu

Research output: Contribution to journal › Article › peer-review


Abstract

The explosive growth of digital video data poses a profound challenge to producing succinct, informative, and human-centric representations of video content. This quickly evolving research topic is typically called ‘video abstraction’. We are motivated by the facts that the human brain is the end-evaluator of multimedia content and that the brain’s responses can quantitatively reveal its attentional engagement in the comprehension of video. We propose a novel video abstraction paradigm that leverages functional magnetic resonance imaging (fMRI) to monitor and quantify the brain’s responses to video stimuli; these responses are used to guide the extraction of visually informative segments from videos. Specifically, the brain regions most relevant to video perception and cognition are identified and used to form brain networks. The propensity for synchronization (PFS), derived from spectral graph theory, is then computed over these brain networks to yield benchmark attention curves from the fMRI-measured brain responses to a number of training video streams. These benchmark attention curves guide and optimize the combination of a variety of low-level visual features produced by the Bayesian surprise model. In the training stage, the optimization objective is to ensure that the learned attentional model correlates well with the brain’s responses and reflects the attention that viewers pay to video content. In the application stage, the attention curves predicted by the learned and optimized attentional model serve as an effective benchmark for abstracting test videos. Evaluations on a set of video sequences from the TRECVID database demonstrate the effectiveness of the proposed framework.
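
The abstract describes two computational steps: deriving a benchmark attention curve from the synchronizability of fMRI-derived brain networks, and fitting a weighted combination of low-level (Bayesian-surprise) feature curves to that benchmark. The Python sketch below is illustrative only: it assumes PFS can be approximated by the Laplacian eigenratio λ2/λN (a common synchronizability measure in spectral graph theory, not necessarily the paper's exact formulation), and it stands in for the paper's optimization with a least-squares fit on z-scored signals, which approximately maximizes correlation with the benchmark curve. The function names (propensity_for_synchronization, fit_attention_weights, predict_attention) are hypothetical.

    import numpy as np

    def propensity_for_synchronization(adjacency):
        # Graph Laplacian L = D - A of a (weighted) brain network.
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency
        eigvals = np.sort(np.linalg.eigvalsh(laplacian))
        # Eigenratio lambda_2 / lambda_N: larger values indicate a network that
        # synchronizes more readily (assumed proxy for the paper's PFS measure).
        return eigvals[1] / eigvals[-1]

    def fit_attention_weights(features, benchmark_curve):
        # features: (T, K) matrix of K low-level feature curves over T frames
        # (e.g. Bayesian-surprise responses for colour, intensity, motion).
        # benchmark_curve: (T,) fMRI-derived attention curve for the same video.
        X = (features - features.mean(axis=0)) / features.std(axis=0)
        y = (benchmark_curve - benchmark_curve.mean()) / benchmark_curve.std()
        # Least-squares fit on z-scored data as a simple correlation-maximizing
        # surrogate for the paper's training-stage optimization.
        weights, *_ = np.linalg.lstsq(X, y, rcond=None)
        return weights

    def predict_attention(features, weights):
        # Predicted attention curve for a test video from its feature curves.
        X = (features - features.mean(axis=0)) / features.std(axis=0)
        return X @ weights

    # Toy usage with random data.
    rng = np.random.default_rng(0)
    adjacency = rng.random((10, 10))
    adjacency = (adjacency + adjacency.T) / 2.0
    np.fill_diagonal(adjacency, 0.0)
    pfs = propensity_for_synchronization(adjacency)

    T, K = 300, 5
    feature_curves = rng.random((T, K))
    benchmark = feature_curves @ np.array([0.5, 0.2, 0.1, 0.1, 0.1])
    w = fit_attention_weights(feature_curves, benchmark)
    attention_curve = predict_attention(feature_curves, w)

In an application such as the one the abstract outlines, segments of a test video with high predicted attention would presumably be selected to form the abstract (e.g. by thresholding or ranking peaks of the curve); the paper should be consulted for the actual segment-selection rule.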
Original language: English
Pages (from-to): 781-796
Journal: Information Sciences
Volume: 281
Early online date: 7 Jan 2014
Publication status: Published - 10 Oct 2014

Keywords

  • Video abstraction
  • Visual attention
  • Functional magnetic resonance imaging
  • Propensity for synchronization
  • Bayesian surprise model
