Enhancing Generalization in Convolutional Neural Networks Through Regularization with Edge and Line Features

Christoph Linse*, Beatrice Brückner, Thomas Martinetz

*Corresponding author for this work

Abstract

This paper proposes a novel regularization approach that biases Convolutional Neural Networks (CNNs) toward utilizing edge and line features in their hidden layers. Rather than learning arbitrary kernels, we constrain the convolution layers to edge and line detection kernels. This intentional bias regularizes the models, improving generalization performance, especially on small datasets. As a result, test accuracies improve by margins of 5-11 percentage points across four challenging fine-grained classification datasets with limited training data, while keeping the number of trainable parameters identical. Instead of traditional convolutional layers, we use Pre-defined Filter Modules, which convolve the input data with a fixed set of 3×3 pre-defined edge and line filters. A subsequent ReLU erases any information that did not trigger a positive response. Next, a 1×1 convolutional layer generates linear combinations of the filter responses. Notably, the pre-defined filters are a fixed component of the architecture and remain unchanged during training. Our findings reveal that the number of dimensions spanned by the set of pre-defined filters has little impact on recognition performance. However, the size of the filter set matters, with nine or more filters providing optimal results.
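The Pre-defined Filter Module described above (fixed 3×3 convolutions, then ReLU, then a learnable 1×1 combination) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the particular Sobel-like kernels and the function names are assumptions for demonstration, and the paper's filter set may differ.

```python
import numpy as np

# Hypothetical fixed 3x3 edge/line kernels (illustrative only; the
# paper's actual filter set is not reproduced here).
FILTERS = np.array([
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],     # vertical edge (Sobel-like)
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],     # horizontal edge
    [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]],  # diagonal line
    [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]],  # anti-diagonal line
], dtype=float)

def conv2d_valid(x, k):
    """Valid 2D cross-correlation of a single-channel image with a 3x3 kernel."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

def predefined_filter_module(x, weights):
    """Apply the fixed 3x3 filters to every input channel, pass the
    responses through ReLU, then mix them with a learnable 1x1 convolution.

    x:       input of shape (C, H, W)
    weights: 1x1 conv weights of shape (C_out, C * len(FILTERS));
             in training, only these weights would be updated.
    """
    responses = []
    for c in range(x.shape[0]):
        for k in FILTERS:
            r = conv2d_valid(x[c], k)
            responses.append(np.maximum(r, 0.0))  # ReLU erases negative responses
    responses = np.stack(responses)               # (C * n_filters, H-2, W-2)
    # 1x1 convolution == per-pixel linear combination of the response channels
    return np.tensordot(weights, responses, axes=([1], [0]))
```

Note that the 3×3 kernels never change; the model's capacity lives entirely in the 1×1 mixing weights, which is what keeps the trainable parameter count comparable to a standard convolutional layer.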

Original language: English
Title of host publication: Lecture Notes in Computer Science: International Conference on Artificial Neural Networks
Number of pages: 446
Volume: 15016
Publisher: Springer, Cham
Publication date: 17.09.2024
Pages: 432
ISBN (Print): 978-3-031-72331-5
ISBN (Electronic): 978-3-031-72332-2
Publication status: Published - 17.09.2024
