Large Neural Networks Learning from Scratch with Very Few Data and without Explicit Regularization

3 Citations (Scopus)

Abstract

Recent findings have shown that highly over-parameterized neural networks generalize without pretraining or explicit regularization. This is achieved at zero training error, i.e., with complete overfitting by memorizing the training data, which is surprising because it runs completely against traditional machine learning wisdom. In our empirical study we fortify these findings in the domain of fine-grained image classification. We show that very large Convolutional Neural Networks (CNNs) with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization, or pretraining. We train the architectures ResNet18, ResNet101 and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102 and StanfordCars with 100 classes and more, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that a randomly initialized VGG19 with 140 million weights learns to distinguish airplanes from motorbikes with up to 95% accuracy using only 20 training samples per class.
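The setup summarized in the abstract (training from scratch, no image augmentation, no explicit regularization, tiny per-class subsets) can be reproduced in outline as follows. This is a minimal sketch, not the authors' code: it assumes PyTorch/torchvision, uses the "airplanes" and "Motorbikes" categories of Caltech101 as the two-class example, and the optimizer, learning rate, batch size and epoch count are illustrative assumptions rather than the paper's settings.

```python
# Sketch: randomly initialized VGG19 trained from scratch on 20 samples per class,
# without pretraining, image augmentation, or explicit regularization.
# All hyperparameters and the subset-selection logic are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Resize only -- no flips, crops, or colour jitter (no image augmentation).
preprocess = transforms.Compose([
    transforms.Lambda(lambda img: img.convert("RGB")),  # Caltech101 mixes grayscale and RGB
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

full_set = datasets.Caltech101(root="data", download=True, transform=preprocess)

# Keep 20 training samples per class for the two chosen categories.
class_to_new = {
    full_set.categories.index("airplanes"): 0,
    full_set.categories.index("Motorbikes"): 1,
}
counts, picked = {c: 0 for c in class_to_new}, []
for idx, label in enumerate(full_set.y):
    if label in class_to_new and counts[label] < 20:
        picked.append(idx)
        counts[label] += 1
train_loader = DataLoader(Subset(full_set, picked), batch_size=8, shuffle=True)

# Randomly initialized VGG19 (~140M weights), output head resized to 2 classes.
model = models.vgg19(weights=None)
model.classifier[6] = nn.Linear(4096, 2)
model = model.to(device)

# Plain SGD with weight_decay=0: no explicit regularization.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(100):  # train until the small training set is (nearly) memorized
    for images, labels in train_loader:
        # Remap original Caltech101 category indices to {0, 1}.
        labels = torch.tensor([class_to_new[int(l)] for l in labels])
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```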
Original language: German
Pages: 279 - 283
Number of pages: 5
DOIs
Publication status: Published - 15.02.2023
Event: 15th International Conference on Machine Learning and Computing - Zhuhai, China
Duration: 17.02.2023 - 20.02.2023
Conference number: 192850
https://www.icmlc.org

Conference

Conference: 15th International Conference on Machine Learning and Computing
Short title: ICMLC 2023
Country/Territory: China
City: Zhuhai
Period: 17.02.23 - 20.02.23
Internet address

Strategic Research Areas and Centres

  • Centres: Zentrum für Künstliche Intelligenz Lübeck (ZKIL)
