Rethinking generalization of classifiers in separable classes scenarios and over-parameterized regimes

Julius Martinetz*, Christoph Linse, Thomas Martinetz

*Corresponding author for this work

Abstract

We investigate the learning dynamics of classifiers in scenarios where classes are separable or classifiers are over-parameterized. In both cases, Empirical Risk Minimization (ERM) results in zero training error. However, there are many global minima with a training error of zero, some of which generalize well and some of which do not. We show that in separable classes scenarios the proportion of "bad" global minima diminishes exponentially with the number of training data n. Our analysis provides bounds and learning curves dependent solely on the density distribution of the true error for the given classifier function set, irrespective of the set's size or complexity (e.g., number of parameters). This observation may shed light on the unexpectedly good generalization of over-parameterized Neural Networks. For the over-parameterized scenario, we propose a model for the density distribution of the true error, yielding learning curves that align with experiments on MNIST and CIFAR-10.
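The central claim can be illustrated with a minimal sketch (not the paper's exact model): assume candidate classifiers whose true errors eps are drawn from some prior density, here a uniform prior on [0, 0.5] chosen purely for illustration. A classifier with true error eps attains zero training error on n i.i.d. samples with probability (1 - eps)^n, so the expected fraction of zero-training-error classifiers that are "bad" (eps above a threshold) shrinks roughly exponentially with n:

```python
import random

random.seed(0)

def bad_fraction(n, threshold=0.2, num_classifiers=100_000):
    """Expected fraction of zero-training-error classifiers that are 'bad'.

    Illustrative assumption: true errors eps ~ Uniform(0, 0.5).
    A classifier with true error eps survives ERM (zero training error
    on n i.i.d. samples) with probability (1 - eps)^n.
    """
    eps = [random.uniform(0.0, 0.5) for _ in range(num_classifiers)]
    survive = [(1.0 - e) ** n for e in eps]  # P(zero training error)
    total = sum(survive)
    bad = sum(w for e, w in zip(eps, survive) if e > threshold)
    return bad / total

for n in [10, 50, 100, 200]:
    print(f"n = {n:4d}  bad fraction ~ {bad_fraction(n):.2e}")
```

Running this shows the bad fraction collapsing as n grows, consistent with the exponential diminishing described in the abstract; the particular prior and threshold here are assumptions for illustration only.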

Original language: English
Title of host publication: 2024 International Joint Conference on Neural Networks (IJCNN)
Number of pages: 10
Publication date: 30.06.2024
Pages: 1
ISBN (Print): 979-8-3503-5932-9
ISBN (Electronic): 979-8-3503-5931-2
Publication status: Published - 30.06.2024
