SPP 1527, Subproject: Learning Efficient Sensing for Active Vision (Esensing)

Project: DFG Projects › DFG Joint Research: Priority Programs

Project Details

Description

Agents acting in the real world, equipped with limited resources for information processing, must employ efficient strategies for acquiring information. Recent theoretical results on compressed sensing promise entirely new approaches to efficient information acquisition. Our project aims to exploit these approaches for active vision, to extend them, and to explore their limits. First, we want to combine compressed sensing with the principles of active vision into a hierarchical sensing scheme and thereby significantly increase efficiency. The resulting strategies for image-based information acquisition shall adapt to particular image classes (e.g., landscapes, faces, text) or tasks (e.g., object search). To this end, we develop and apply machine-learning methods that operate on the quality measures of compressed sensing and are to be combined with reinforcement-learning approaches from the field of active vision. The agent can thus adapt its sensing strategy to its environments and tasks. Finally, we want to use these insights for a better understanding of human active vision and to substantiate them with eye-tracking experiments. The experimental data will be made available for corresponding challenges.

Key findings

The different sensing frameworks developed during Esensing are novel approaches to the problem of how to adaptively sense the environment, i.e., how to extract relevant information from a particular environment that is not known in advance. Since they are based on learning and can be embedded in an action-perception loop, the novel methods have great potential in the context of autonomously acting agents that must rely on efficient sensing schemes. Our approaches are inspired by Active Vision, motivated by Compressed Sensing, and based on the principles of Sparse Coding. The scientific contribution accomplished during Esensing is twofold: (i) we developed new algorithms (CA, OSC, GF-OSC) to learn representations for efficient encoding and sensing, and (ii) we developed new, performant hierarchical sensing schemes (AHS, HMS), which are adaptive because sensing operations are not conducted in a random fashion but are selected depending on both the environment and the particular scene being sensed. We developed AHS and HMS in the context of action-perception loops and collaborated with our partners in Leipzig and Berkeley. Our methods are inspired by biological sensing strategies and enable mobile agents to autonomously adapt their representations and sensing strategies to a particular environment, which can then be sensed more efficiently.
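The adaptive idea behind hierarchical sensing can be illustrated with a minimal toy sketch (this is not the published AHS algorithm, which uses learned hierarchical measurement functions): measure an aggregate of a signal region with one linear measurement, and refine into sub-regions only where the measurement indicates relevant structure. All names (`ahs`, the signal, the threshold) are illustrative assumptions; plain interval sums stand in for measurement functions, and the demo signal is kept nonnegative so that cancellation cannot hide energy.

```python
import numpy as np

def ahs(x, lo, hi, thresh, count):
    """Adaptive hierarchical sensing sketch: take one aggregate measurement
    over [lo, hi); refine into halves only if the measurement is informative."""
    count[0] += 1                       # one linear measurement <x, 1_[lo:hi)>
    m = float(x[lo:hi].sum())
    if abs(m) < thresh:
        return {}                       # subtree judged uninformative; skip it entirely
    if hi - lo == 1:
        return {lo: m}                  # informative leaf: coefficient recovered
    mid = (lo + hi) // 2
    out = ahs(x, lo, mid, thresh, count)
    out.update(ahs(x, mid, hi, thresh, count))
    return out

# Toy demo: a nonnegative 3-sparse signal of length 1024.
N = 1024
x = np.zeros(N)
x[[5, 300, 777]] = [2.0, 1.5, 3.0]
count = [0]
rec = ahs(x, 0, N, thresh=0.5, count=count)
```

In this sketch the three nonzero coefficients are recovered exactly while the number of measurements stays far below the signal dimension, since only the subtrees along informative paths are explored; this is the sense in which adaptive sensing beats sensing all coefficients or sensing at random.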
Status: finished
Effective start/end date: 01.10.11 – 30.09.16

Collaborative partners

  • University of Stuttgart (Joint applicant, Co-PI) (lead)

UN Sustainable Development Goals

In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. This project contributes towards the following SDG(s):

  • SDG 9 - Industry, Innovation, and Infrastructure

Research Areas and Centers

  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)

DFG Research Classification Scheme

  • 409-05 Interactive and Intelligent Systems, Image and Language Processing, Computer Graphics and Visualisation

Research output

  • An adaptive hierarchical sensing scheme for sparse signals

Schütze, H., Barth, E. & Martinetz, T., 25.02.2014, Human Vision and Electronic Imaging XIX. Rogowitz, B. E., Pappas, T. N. & de Ridder, H. (eds.). SPIE, Vol. 9014, p. 15:1–8, 8 p. (Proceedings of SPIE; vol. 9014).

    Research output: Chapters in Books/Reports/Conference Proceedings › Conference contribution › peer-review

  • Visual Manifold Sensing

Burciu, I., Ion-Margineanu, A., Martinetz, T. & Barth, E., 25.02.2014, Human Vision and Electronic Imaging XIX. Rogowitz, B. E., Pappas, T. N. & de Ridder, H. (eds.). SPIE, Vol. 9014, p. 90141B:1–8, 8 p. (Proceedings of SPIE; vol. 9014).

    Research output: Chapters in Books/Reports/Conference Proceedings › Conference contribution › peer-review

  • Learning Orthogonal Bases for k-Sparse Representations

Schütze, H., Barth, E. & Martinetz, T., 2013, Workshop New Challenges in Neural Computation 2013. Hammer, B., Martinetz, T. & Villmann, T. (eds.). Vol. 02, p. 119–120, 2 p. (Machine Learning Reports).

    Research output: Chapters in Books/Reports/Conference Proceedings › Conference contribution › peer-review