A Learned Saliency Predictor for Dynamic Natural Scenes

Eleonora Vig, Michael Dorr, Thomas Martinetz, Erhardt Barth

Abstract

We investigate the extent to which eye movements in natural dynamic scenes can be predicted with a simple model of bottom-up saliency that learns, from different visual representations, to discriminate between salient and less salient movie regions. Our image representations, the geometrical invariants of the structure tensor, are computed on multiple scales of an anisotropic spatio-temporal multiresolution pyramid. Eye movement data are used to label video locations as salient. For each location, low-dimensional features are extracted from the multiscale representations and used to train a classifier. The quality of the predictor is tested on a large set of eye movement data and compared with the performance of two state-of-the-art saliency models on the same data set. The proposed model achieves a significant improvement (mean ROC score of 0.665) over the selected baseline models, which attain ROC scores of 0.625 and 0.635.
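To make the representation concrete, the following is a minimal sketch (not the authors' implementation) of computing the three geometrical invariants H, S, and K of the spatio-temporal structure tensor on a single scale of a video volume; the paper computes these on multiple scales of an anisotropic spatio-temporal pyramid. The function name `structure_tensor_invariants` and the smoothing parameter `sigma` are illustrative assumptions, and scaling conventions for the invariants may differ from the paper's.

```python
# Hypothetical sketch: geometrical invariants of the 3x3 spatio-temporal
# structure tensor for a grayscale video volume of shape (t, y, x).
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_invariants(video, sigma=1.0):
    # Spatio-temporal gradients along the t, y, and x axes.
    gt, gy, gx = np.gradient(video.astype(np.float64))
    g = [gt, gy, gx]
    # Structure tensor J: locally averaged outer products of the gradient.
    J = [[gaussian_filter(g[i] * g[j], sigma) for j in range(3)]
         for i in range(3)]
    # Invariants (up to scaling): H ~ trace, S ~ sum of the 2x2
    # principal minors, K ~ determinant of J.
    H = J[0][0] + J[1][1] + J[2][2]
    S = (J[0][0] * J[1][1] - J[0][1] ** 2
         + J[0][0] * J[2][2] - J[0][2] ** 2
         + J[1][1] * J[2][2] - J[1][2] ** 2)
    K = (J[0][0] * (J[1][1] * J[2][2] - J[1][2] ** 2)
         - J[0][1] * (J[0][1] * J[2][2] - J[1][2] * J[0][2])
         + J[0][2] * (J[0][1] * J[1][2] - J[1][1] * J[0][2]))
    return H, S, K
```

In this formulation, H, S, and K vanish unless the local signal varies in at least one, two, or three spatio-temporal directions, respectively, which is what makes them candidate low-dimensional features for distinguishing salient from less salient regions.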
Original language: English
Title of host publication: Artificial Neural Networks - ICANN 2010
Editors: Konstantinos Diamantaras, Wlodek Duch, Lazaros S. Iliadis
Number of pages: 10
Volume: 6354
Place of publication: Berlin, Heidelberg
Publisher: Springer Berlin Heidelberg
Publication date: 2010
Pages: 52-61
ISBN (Print): 978-3-642-15824-7
ISBN (Electronic): 978-3-642-15825-4
Publication status: Published - 2010
Event: 20th International Conference on Artificial Neural Networks - Thessaloniki, Greece
Duration: 15.09.2010 - 18.09.2010
