Deep Convolutional Neural Networks for Unconstrained Ear Recognition


This paper employs state-of-the-art Deep Convolutional Neural Networks (CNNs), namely AlexNet, VGGNet, Inception, ResNet and ResNeXt, in a first experimental study of ear recognition on the unconstrained EarVN1.0 dataset. As the dataset is still too small to train deep CNNs from scratch, we utilize transfer learning and propose different domain adaptation strategies. The experiments show that our networks, which are fine-tuned using custom-sized inputs determined specifically for each CNN architecture, obtain state-of-the-art recognition performance: a single ResNeXt101 model achieves a rank-1 recognition accuracy of 93.45%. Moreover, we achieve the best rank-1 recognition accuracy of 95.85% using an ensemble of fine-tuned ResNeXt101 models. In order to explain the performance differences between models and make our results more interpretable, we employ the t-SNE algorithm to explore and visualize the learned features. The feature visualizations show well-separated clusters representing the ear images of the different subjects, indicating that discriminative, ear-specific features are learned when applying our proposed learning strategies.
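The transfer-learning setup described in the abstract can be sketched roughly as follows. This is an illustrative sketch, not the authors' code: it assumes PyTorch/torchvision, uses an untrained ResNeXt101 so the example runs offline (in practice one would start from ImageNet-pretrained weights before fine-tuning), and the subject count and input size are stand-ins, not the paper's exact configuration.

```python
# Hypothetical sketch of fine-tuning a ResNeXt101 for ear identification.
# EarVN1.0 contains ear images of 164 subjects; treat recognition as a
# 164-way classification problem by replacing the ImageNet head.
import torch
import torch.nn as nn
from torchvision import models

NUM_SUBJECTS = 164  # number of identities in EarVN1.0


def build_ear_model(num_classes: int = NUM_SUBJECTS) -> nn.Module:
    # No pretrained weights here so the sketch runs without a download;
    # transfer learning would load ImageNet weights at this point.
    model = models.resnext101_32x8d()
    # Swap the 1000-way ImageNet classifier for an ear-identity head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


model = build_ear_model()
# The paper fine-tunes each architecture with its own custom input size;
# 224x224 is used here only as a placeholder resolution.
x = torch.randn(2, 3, 224, 224)
logits = model(x)
print(tuple(logits.shape))  # (2, 164)
```

Because ResNet-family networks end in an adaptive average pooling layer, the same model accepts other input resolutions, which is what makes per-architecture custom input sizes straightforward to experiment with.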
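The t-SNE feature exploration mentioned in the abstract can be sketched in a few lines with scikit-learn. This is a toy illustration, not the authors' pipeline: random Gaussian clusters stand in for the CNN's penultimate-layer features of each ear image, and the dimensions are arbitrary.

```python
# Illustrative t-SNE projection of (stand-in) learned ear features.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_subjects, imgs_per_subject, feat_dim = 5, 10, 256

# Stand-in "features": one Gaussian cluster per subject, mimicking
# well-separated embeddings of a fine-tuned network.
features = np.vstack([
    rng.normal(loc=3.0 * s, scale=0.5, size=(imgs_per_subject, feat_dim))
    for s in range(n_subjects)
])
labels = np.repeat(np.arange(n_subjects), imgs_per_subject)

# Project the high-dimensional features to 2-D for scatter plotting.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(emb.shape)  # (50, 2)
```

In the paper's setting, `features` would be the activations extracted from a fine-tuned CNN for each ear image, and the 2-D embedding would be colored by subject identity to check whether the clusters separate.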
Journal: IEEE Access
Pages (from - to): 170295-170310
Publication status: Published - 14.09.2020

Strategic Research Areas and Centers

  • Centers: Zentrum für Künstliche Intelligenz Lübeck (ZKIL)
  • Cross-sectional area: Intelligent Systems


  • 409-05 Interactive and Intelligent Systems, Image and Speech Processing, Computer Graphics and Visualization

