Spatial and temporal visual attention prediction in videos using eye movement data

Junwei Han, Liye Sun, Xintao Hu, Jungong Han, Ling Shao

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

Visual attention detection in static images has made outstanding progress in recent years, whereas much less effort has been devoted to learning visual attention in video sequences. In this paper, we propose a novel method to model spatial and temporal visual attention in videos by learning from human gaze data. Spatial visual attention predicts where viewers look within each video frame, while temporal visual attention measures which video frames are more likely to attract viewers' interest. Our underlying premise is that objects and their movements, rather than conventional contrast-related information, are the major factors driving visual attention in dynamic scenes. First, the proposed models extract two types of bottom-up features derived from multi-scale object filter responses and spatiotemporal motion energy, respectively. Then, spatiotemporal gaze density and inter-observer gaze congruency are computed from a large collection of human eye-gaze data to form two training sets. Finally, prediction models for spatial and temporal visual attention are learned from these two training sets and the bottom-up features, respectively. Extensive evaluations on publicly available video benchmarks, and an application to interestingness prediction for movie trailers, demonstrate the effectiveness of the proposed work.
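The abstract refers to spatiotemporal gaze density maps built from recorded fixations but does not detail their construction. A common way to build such maps (not necessarily the authors' exact procedure) is to accumulate fixation hits per frame and smooth them with spatial and temporal Gaussians; the sketch below illustrates this under that assumption, with all function and parameter names being illustrative.

```python
# Minimal sketch (assumed approach, not the paper's implementation):
# turn per-frame fixation points into a smoothed spatiotemporal gaze density volume.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_density(fixations, frame_shape, n_frames, sigma_space=20.0, sigma_time=1.0):
    """fixations: iterable of (frame_idx, x, y); frame_shape: (height, width)."""
    H, W = frame_shape
    vol = np.zeros((n_frames, H, W), dtype=np.float32)
    for t, x, y in fixations:
        if 0 <= t < n_frames and 0 <= int(y) < H and 0 <= int(x) < W:
            vol[t, int(y), int(x)] += 1.0  # accumulate raw fixation hits
    # Smooth along time (axis 0) and space (axes 1, 2) to obtain a density volume.
    vol = gaussian_filter(vol, sigma=(sigma_time, sigma_space, sigma_space))
    # Normalise each frame's map to [0, 1] so maps are comparable across frames.
    maxima = vol.reshape(n_frames, -1).max(axis=1).reshape(n_frames, 1, 1)
    return vol / np.maximum(maxima, 1e-8)
```

In such a setup, per-frame maps of this kind would serve as targets for a spatial attention model, while a frame-level statistic of inter-observer agreement (e.g. fixation dispersion) could supervise temporal attention; both choices are assumptions for illustration only.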
Original language: English
Pages (from-to): 140-153
Journal: Neurocomputing
Volume: 145
DOIs
Publication status: Published - 5 Dec 2014

Keywords

  • Spatial visual attention
  • Temporal visual attention
  • Human gaze
  • Spatiotemporal gaze density
  • Fixation dispersion
