Recursive Autoconvolution for Unsupervised Learning of Convolutional Neural Networks

Abstract

In visual recognition tasks, such as image classification, unsupervised learning exploits cheap unlabeled data and can help to solve these tasks more efficiently. We show that the recursive autoconvolution operator, adopted from physics, boosts existing unsupervised methods by learning more discriminative filters. We take well-established convolutional neural networks and train their filters layer-wise. In addition, building on previous work, we design a network which extracts more than 600k features per sample, but with the total number of trainable parameters greatly reduced by introducing shared filters in higher layers. We evaluate our networks on the MNIST, CIFAR-10, CIFAR-100 and STL-10 image classification benchmarks and report several state-of-the-art results among other unsupervised methods.
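The core idea of autoconvolution is to convolve an image patch with itself, and to apply this operator recursively. The sketch below illustrates one plausible form of the operator; the FFT-based full convolution, the zero-centering, the stride-2 subsampling back to the input size, and the rescaling are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def autoconv2d(x):
    """One autoconvolution step: full 2-D convolution of x with itself,
    computed via the FFT (zero-padding to (2h-1, 2w-1))."""
    h, w = x.shape
    X = np.fft.rfft2(x, s=(2 * h - 1, 2 * w - 1))
    return np.fft.irfft2(X * X, s=(2 * h - 1, 2 * w - 1))

def recursive_autoconvolution(patch, n):
    """Apply autoconvolution n times to a 2-D patch.

    After each step the result is subsampled back to the original
    spatial size and rescaled (assumed normalization, for stability).
    """
    x = patch.astype(np.float64)
    for _ in range(n):
        x = x - x.mean()      # zero-center before convolving (assumption)
        x = autoconv2d(x)     # size grows to (2h-1, 2w-1)
        x = x[::2, ::2]       # subsample back to (h, w)
        m = np.abs(x).max()
        if m > 0:
            x = x / m         # keep values bounded across recursion levels
    return x
```

Patches transformed this way can then serve as inputs to an unsupervised filter-learning step (e.g. clustering), which is where the abstract claims the operator helps.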
Original language: English
Publication status: Published - 02.06.2016
Event: International Joint Conference on Neural Networks (IJCNN 2017) - William A. Egan Civic and Convention Center, Anchorage, Alaska, USA / United States
Duration: 14.05.2017 - 19.05.2017
http://www.ijcnn.org/

