Colour saliency on video

Michael Dorr, Eleonora Vig, Erhardt Barth


Much research has been concerned with the notion of bottom-up saliency in visual scenes, i.e. the contribution of low-level image features such as brightness, colour, contrast, and motion to the deployment of attention. Because the human visual system is obviously highly optimized for the real world, it is reasonable to draw inspiration from human behaviour in the design of machine vision algorithms that determine regions of relevance. In previous work, we were able to show that a very simple and generic grayscale video representation, namely the geometric invariants of the structure tensor, predicts eye movements when viewing dynamic natural scenes better than complex, state-of-the-art models. Here, we moderately increase the complexity of our model and compute the invariants for colour videos, i.e. on the multispectral structure tensor and for different colour spaces. Results show that colour slightly improves predictive power.
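The central quantity in this work, the multispectral structure tensor, can be sketched as follows: sum the outer products of the spatiotemporal gradient over colour channels and take the three invariants of the resulting symmetric 3×3 tensor (H, the trace; S, the sum of principal 2×2 minors; K, the determinant). This is a minimal illustration, not the authors' implementation; the array layout `(T, Y, X, C)` and the omission of the usual local Gaussian smoothing of the tensor components are simplifying assumptions.

```python
import numpy as np

def multispectral_structure_tensor_invariants(video):
    """Invariants H, S, K of the 3x3 spatiotemporal structure tensor,
    summed over colour channels.

    video: float array of shape (T, Y, X, C) -- assumed layout.
    Returns H, S, K, each of shape (T, Y, X).
    """
    # Accumulate the tensor J = sum_c grad(I_c) grad(I_c)^T per pixel.
    # (A practical implementation would also smooth the components of J
    # with a local spatiotemporal window; omitted here for brevity.)
    T, Y, X, C = video.shape
    J = np.zeros((T, Y, X, 3, 3))
    for c in range(C):
        gt, gy, gx = np.gradient(video[..., c])
        g = np.stack([gt, gy, gx], axis=-1)        # (T, Y, X, 3)
        J += g[..., :, None] * g[..., None, :]     # outer product g g^T

    # Invariants of the symmetric tensor:
    # H = trace, S = (H^2 - trace(J^2)) / 2, K = determinant.
    H = np.trace(J, axis1=-2, axis2=-1)
    S = 0.5 * (H**2 - np.trace(J @ J, axis1=-2, axis2=-1))
    K = np.linalg.det(J)
    return H, S, K
```

For a single channel, each per-pixel tensor is rank one, so S and K vanish identically; it is the sum over channels (and, in practice, local smoothing) that can raise the rank and make the higher-order invariants informative.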

Original language: English
Title of host publication: BIONETICS 2010: Bio-Inspired Models of Network, Information, and Computing Systems
Editors: Junichi Suzuki, Tadashi Nakano
Number of pages: 6
Volume: 87
Publisher: Springer Berlin Heidelberg
Publication date: 06.09.2012
ISBN (Print): 978-3-642-32614-1
ISBN (Electronic): 978-3-642-32615-8
Publication status: Published - 06.09.2012
Event: BIONETICS 2010: The 5th Int'l Conference on Bio-Inspired Models of Network, Information and Computing Systems - Boston, United States
Duration: 01.12.2010 - 03.12.2010

