Abstract
In this contribution, we show how to incorporate prior knowledge into a deep neural network architecture in a principled manner. We enforce feature-space invariances using a novel layer based on invariant integration, which allows us to construct a complete feature space invariant to finite transformation groups. We apply the proposed layer to explicitly insert invariance properties into vision-related classification tasks, demonstrate the approach for the case of rotation invariance, and report state-of-the-art performance on the Rotated-MNIST dataset. Our method is especially beneficial when training with limited data.
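The core idea, invariant integration, averages a nonlinear function of the features over all elements of a finite transformation group, so the result is unchanged when the input is transformed by any group element. Below is a minimal PyTorch sketch for the four-fold rotation group acting on 2D feature maps; the class name, the pointwise monomial, and the choice of group are illustrative assumptions, not the authors' exact layer.

```python
import torch
import torch.nn as nn


class InvariantIntegration(nn.Module):
    """Sketch of an invariant-integration layer: averages a monomial of the
    features over a finite transformation group (here the four-fold rotation
    group C4 acting on the spatial dimensions of a feature map)."""

    def __init__(self, exponent: float = 2.0):
        super().__init__()
        # Exponent of the pointwise monomial f(x) = |x| ** exponent
        # (an illustrative choice of invariant-integration kernel).
        self.exponent = exponent

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width).
        # Apply every group element (rotations by 0°, 90°, 180°, 270°),
        # evaluate the monomial, and average over the group.
        outputs = []
        for k in range(4):
            rotated = torch.rot90(x, k, dims=(-2, -1))
            outputs.append(rotated.abs().pow(self.exponent))
        return torch.stack(outputs, dim=0).mean(dim=0)


if __name__ == "__main__":
    layer = InvariantIntegration()
    x = torch.randn(1, 3, 8, 8)
    out = layer(x)
    out_rot = layer(torch.rot90(x, 1, dims=(-2, -1)))
    # Rotating the input permutes the group elements in the average,
    # so the output is identical: the layer is C4-invariant.
    print(torch.allclose(out, out_rot, atol=1e-6))
```

Because the group is finite, the sum over group elements is exact rather than approximated, which is what makes the constructed feature space completely invariant to the chosen transformation group.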
Original language | English |
---|---|
Pages | 103-108 |
Number of pages | 6 |
Publication status | Published - 10.2020 |
Event | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning 2020, Brügge, Belgium; Duration: 02.10.2020 → 04.10.2020 |
Conference
Conference | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning 2020 |
---|---|
Abbreviated title | ESANN 2020 |
Country/Territory | Belgium |
City | Brügge |
Period | 02.10.20 → 04.10.20 |