Efficient detection of adversarial, out-of-distribution and other misclassified samples

Julia Lust*, Alexandru P. Condurache

*Corresponding author for this work


Deep Neural Networks (DNNs) are increasingly being considered for safety-critical applications in which it is crucial to detect misclassified samples. Typically, detection methods are geared towards either out-of-distribution or adversarial data. Moreover, most detection methods require a significant number of parameters and substantial runtime. In this contribution we present a novel approach for detecting misclassified samples that covers out-of-distribution data, adversarial examples, and real-world error-causing corruptions. It is based on the Gradient's Norm (GraN) of the DNN and is parameter- and runtime-efficient. We evaluate GraN on two different classification DNNs (DenseNet, ResNet) trained on different datasets (CIFAR-10, CIFAR-100, SVHN). In addition to the detection of different adversarial example types (FGSM, BIM, Deepfool, CWL2) and out-of-distribution data (TinyImageNet, LSUN, CIFAR-10, SVHN), we evaluate GraN on novel corruption set-ups (Gaussian, shot and impulse noise). Our experiments show that GraN performs comparably to state-of-the-art methods for adversarial and out-of-distribution detection, and is superior for real-world corruptions, while being parameter- and runtime-efficient.
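The core idea of a gradient-norm score can be sketched as follows. Since the true label is unknown at test time, the network's own prediction serves as a pseudo-label, and the norm of the resulting loss gradient with respect to the parameters acts as the detection score: confidently classified samples yield small gradients, while uncertain or misclassified samples yield large ones. The sketch below is a hypothetical simplification for a single-layer softmax classifier, not the authors' implementation (function and variable names are illustrative):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def gran_score(W, b, x):
    """Hypothetical GraN-style score for a linear softmax classifier.

    Uses the predicted class as pseudo-label (no ground truth needed)
    and returns the norm of the cross-entropy gradient w.r.t. the
    parameters (W, b). Larger scores indicate less reliable predictions.
    """
    z = W @ x + b
    p = softmax(z)
    y_hat = np.argmax(p)            # pseudo-label: the predicted class
    onehot = np.zeros_like(p)
    onehot[y_hat] = 1.0
    dz = p - onehot                 # gradient of CE loss w.r.t. logits
    dW = np.outer(dz, x)            # gradient w.r.t. weights
    db = dz                         # gradient w.r.t. bias
    return np.sqrt((dW ** 2).sum() + (db ** 2).sum())
```

For a well-separated input the predicted probability is close to one-hot and the gradient norm is small; for an ambiguous input near the decision boundary the norm grows, which is what makes it usable as a misclassification score.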

Original language: English
Pages (from-to): 335-343
Number of pages: 9
Publication status: Published - 22.01.2022


