Learning Transformation Invariance from Global to Local

Jens Hocke, Thomas Martinetz

Abstract

Learning representations invariant to image transformations is fundamental to improving object recognition. We explore the connections between i-theory, Toroidal Subspace Analysis, and slow subspace learning. Each of these methods can achieve invariance to only a single transformation. Motivated by this limitation of the global methods, we adapt the slow subspace approach to a local, convolutional setting. Experimentally, we demonstrate invariance to multiple transformations and evaluate object recognition performance.
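The slow subspace idea mentioned above builds on the slowness principle: directions in feature space that vary slowly over a transformation sequence tend to encode the transformation-invariant content. As a rough illustration (a generic linear slow-feature-style sketch, not the paper's exact formulation; all variable names here are illustrative), the slowest linear projections can be found by solving a generalized eigenproblem between the covariance of temporal differences and the data covariance:

```python
import numpy as np
from scipy.linalg import eigh


def linear_slow_features(X, n_components=2):
    """Find the slowest linear projections of a time-ordered sequence.

    X: array of shape (T, d), rows ordered in time.
    Returns projection directions (d, n_components) and their slowness values.
    This is a generic SFA-like sketch, not the authors' method.
    """
    X = X - X.mean(axis=0)            # center the data
    dX = np.diff(X, axis=0)           # temporal differences
    C = X.T @ X / len(X)              # data covariance
    Cdot = dX.T @ dX / len(dX)        # covariance of the differences
    # Slow directions minimize w^T Cdot w subject to unit variance w^T C w = 1,
    # i.e. the generalized eigenvectors of (Cdot, C) with smallest eigenvalues.
    evals, evecs = eigh(Cdot, C)      # eigenvalues in ascending order
    return evecs[:, :n_components], evals[:n_components]


# Toy demo: mix a slow and a fast sinusoid, then recover the slow one.
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)                      # slowly varying source
fast = np.sin(20 * t)                 # quickly varying source
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))           # random linear mixing
X = np.stack([slow, fast], axis=1) @ A
W, evals = linear_slow_features(X, n_components=1)
y = (X - X.mean(axis=0)) @ W[:, 0]    # recovered slow signal (up to sign/scale)
```

In the paper's local, convolutional adaptation, such slow projections would be learned on small image patches rather than globally, which is what allows invariance to more than one transformation.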
Original language: English
Title of host publication: Workshop New Challenges in Neural Computation 2015
Editors: Barbara Hammer, Thomas Villmann
Number of pages: 9
Volume: 03
Publication date: 2015
Pages: 16-24
Publication status: Published - 2015
Event: 37th German Conference on Pattern Recognition - Workshop Program: New Challenges in Neural Computation (NC2), RWTH Aachen, Aachen, Germany
Duration: 10.10.2015 - 10.10.2015
http://vmv2015.rwth-aachen.de/workshops.html
