In this contribution, we show how to incorporate prior knowledge into a deep neural network architecture in a principled manner. We enforce feature-space invariances using a novel layer based on invariant integration, which allows us to construct a complete feature space that is invariant to finite transformation groups. We apply our proposed layer to explicitly insert invariance properties into vision-related classification tasks, demonstrate our approach for the case of rotation invariance, and report state-of-the-art performance on the Rotated-MNIST dataset. Our method is especially beneficial when training with limited data.
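The core idea of invariant integration is group averaging: summing a feature function over all elements of a finite transformation group yields an output that is exactly invariant to that group, since applying a group element to the input merely permutes the summands. The sketch below illustrates this for the cyclic rotation group C4 (rotations by 0°, 90°, 180°, 270°) with a toy scalar feature function; it is a minimal illustration of the principle, not the paper's actual layer, which operates inside a deep network's feature space.

```python
import numpy as np

def invariant_integration(x, feature_fn, group_order=4):
    """Average feature_fn over the finite rotation group C4.

    Because the group average sums over every group element, rotating
    x by any multiple of 90 degrees only reorders the terms, so the
    result is exactly invariant to C4. feature_fn and group_order are
    illustrative choices, not taken from the paper.
    """
    return np.mean(
        [feature_fn(np.rot90(x, k)) for k in range(group_order)],
        axis=0,
    )

# A feature function that is deliberately NOT rotation-invariant on its own.
weights = np.arange(16, dtype=float).reshape(4, 4)
feature_fn = lambda a: float((a * weights).sum())

x = np.random.rand(4, 4)
v_original = invariant_integration(x, feature_fn)
v_rotated = invariant_integration(np.rot90(x), feature_fn)
# v_original and v_rotated agree: the group average absorbs the rotation.
```

The same construction extends to any finite transformation group by replacing `np.rot90` with the group's action on the input.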
Published: October 2020
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) 2020, Bruges, Belgium
Duration: 02.10.2020 – 04.10.2020