TY - JOUR
T1 - Do Highly Over-Parameterized Neural Networks Generalize Since Bad Solutions Are Rare?
AU - Martinetz, Julius
AU - Martinetz, Thomas
PY - 2025/01/23
Y1 - 2025/01/23
N2 - We study over-parameterized classifiers where empirical risk minimization (ERM) for learning leads to zero training error. In these over-parameterized settings, there are many global minima with zero training error, some of which generalize better than others. We show that under certain conditions, the fraction of “bad” global minima with a true error larger than ε decays to zero exponentially fast with the number of training data n. The bound depends on the distribution of the true error over the set of classifier functions used for the given classification problem, and does not necessarily depend on the size or complexity (e.g., the number of parameters) of the classifier function set. This insight provides an alternative perspective on the unexpectedly good generalization even of highly over-parameterized neural networks. We substantiate our theoretical findings through experiments on synthetic data and a subset of MNIST. Additionally, we assess our hypothesis using VGG19 and ResNet18 on a subset of Caltech101.
AB - We study over-parameterized classifiers where empirical risk minimization (ERM) for learning leads to zero training error. In these over-parameterized settings, there are many global minima with zero training error, some of which generalize better than others. We show that under certain conditions, the fraction of “bad” global minima with a true error larger than ε decays to zero exponentially fast with the number of training data n. The bound depends on the distribution of the true error over the set of classifier functions used for the given classification problem, and does not necessarily depend on the size or complexity (e.g., the number of parameters) of the classifier function set. This insight provides an alternative perspective on the unexpectedly good generalization even of highly over-parameterized neural networks. We substantiate our theoretical findings through experiments on synthetic data and a subset of MNIST. Additionally, we assess our hypothesis using VGG19 and ResNet18 on a subset of Caltech101.
M3 - Journal article
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -