Learned saliency transformations for gaze guidance

Eleonora Vig, Michael Dorr, Erhardt Barth

Abstract

The saliency of an image or video region indicates how likely a viewer is to fixate that region due to its conspicuity. An intriguing question is how we can change a video region to make it more or less salient. Here, we address this problem with a machine learning framework that learns, from a large set of eye movements collected on real-world dynamic scenes, how to alter the saliency of a video region locally. We derive saliency transformation rules by applying spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) to the video region in question. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
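As a rough illustration of the kind of manipulation the abstract describes, the sketch below builds a simple (spatial-only, non-subsampled) Laplacian-style band decomposition of a frame and rescales the contrast of one band inside a region mask. This is a minimal stand-in, not the authors' implementation: the paper operates on a spatio-temporal pyramid with learned transformation rules, whereas here the band choice, gain, blur kernel, and mask are all illustrative assumptions.

```python
import numpy as np

def blur(img, k=5):
    """Separable box blur as a cheap stand-in for a Gaussian lowpass."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, out)
    return out

def manipulate_contrast(img, mask, band=1, gain=0.5, n_levels=3):
    """Scale the contrast of one bandpass level inside `mask`.

    The frame is split into n_levels bandpass bands plus a lowpass
    residual (a non-subsampled Laplacian-style decomposition, so the
    bands sum back to the original exactly). Inside the mask, the
    chosen band is multiplied by `gain`; outside, it is left untouched.
    """
    cur = img.astype(float)
    bands = []
    for _ in range(n_levels):
        low = blur(cur)
        bands.append(cur - low)   # bandpass = current minus its lowpass
        cur = low                 # keep blurring for coarser bands
    # Blend between unchanged (mask=0) and scaled-by-gain (mask=1).
    bands[band] *= 1.0 + mask * (gain - 1.0)
    return sum(bands) + cur       # recombine bands and lowpass residual
```

With `gain < 1` the masked region loses contrast in that frequency band (lowering its conspicuity); `gain > 1` boosts it. With `gain = 1` the decomposition reconstructs the input frame exactly.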
Original language: English
Title of host publication: Human Vision and Electronic Imaging XVI
Editors: Bernice E. Rogowitz, Thrasyvoulos N. Pappas
Number of pages: 11
Volume: 7865
Article number: 78650W
Publisher: SPIE-IST
Publication date: 02.02.2011
ISBN (Print): 9780819484024
Publication status: Published - 02.02.2011
Event: IS&T/SPIE Electronic Imaging - San Francisco Airport, California, United States
Duration: 23.01.2011 - 27.01.2011
