GraN: An efficient gradient-norm based detector for adversarial and misclassified examples

Julia Lust, Alexandru P. Condurache

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations. Especially in safety-critical applications of DNNs, it is therefore crucial to detect misclassified samples. Current state-of-the-art detection methods require either significantly more runtime or more parameters than the original network itself. This paper therefore proposes GraN, a time- and parameter-efficient method that is easily adaptable to any DNN. GraN is based on the layer-wise norm of the gradient of the loss for the current input-output combination, which can be computed via backpropagation. GraN achieves state-of-the-art performance on numerous problem setups.
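A minimal sketch of the core idea described in the abstract, assuming a PyTorch classifier: the loss is evaluated at the network's own prediction, backpropagated, and the per-layer (per-parameter-tensor) gradient norms are collected as a feature vector. Function and variable names (compute_gran_features, model, x) are illustrative, not the authors' reference implementation; the downstream detector is likewise an assumption.

```python
import torch
import torch.nn.functional as F


def compute_gran_features(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return one gradient-norm feature per parameter tensor (roughly per layer).

    Hypothetical helper illustrating the gradient-norm idea; not the paper's code.
    """
    model.zero_grad()
    logits = model(x.unsqueeze(0))         # single input, add batch dimension
    pred = logits.argmax(dim=1)            # the network's own predicted label
    loss = F.cross_entropy(logits, pred)   # loss of the current input-output combination
    loss.backward()                        # gradients via standard backpropagation

    norms = [
        p.grad.norm().item()               # layer-wise gradient norm
        for p in model.parameters()
        if p.grad is not None
    ]
    return torch.tensor(norms)
```

Under this reading, the resulting feature vector could be fed to a small binary classifier (e.g. a logistic regression) trained to separate correctly classified inputs from adversarial or misclassified ones, which keeps the parameter and runtime overhead small compared to the original network.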

Original language: English
Number of pages: 6
Publication status: Published - 10.2020
Event: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning 2020 - Brügge, Belgium
Duration: 02.10.2020 - 04.10.2020

Conference

Conference: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning 2020
Abbreviated title: ESANN 2020
Country/Territory: Belgium
City: Brügge
Period: 02.10.2020 - 04.10.2020
