Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations. Especially in safety-critical applications of DNNs, it is therefore crucial to detect misclassified samples. Current state-of-the-art detection methods require either significantly more runtime or significantly more parameters than the original network itself. This paper therefore proposes GraN, a time- and parameter-efficient method that is easily adaptable to any DNN. GraN is based on the layer-wise norm of the DNN's gradient with respect to the loss of the current input-output combination, which can be computed via backpropagation. GraN achieves state-of-the-art performance on numerous problem setups.
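The core quantity behind GraN — the per-layer norm of the loss gradient, obtained with a single backward pass — can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the tiny two-layer network, all shapes, and the use of the network's own prediction as a pseudo-label (since no ground-truth label is available at test time) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' code): compute per-layer norms of the
# cross-entropy loss gradient for a tiny two-layer ReLU network. The predicted
# class serves as a pseudo-label, an assumption for this example.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.5   # layer-1 weights (hypothetical shapes)
W2 = rng.normal(size=(8, 3)) * 0.5   # layer-2 weights
x = rng.normal(size=(1, 4))          # one input sample

# Forward pass
h_pre = x @ W1
h = np.maximum(h_pre, 0.0)           # ReLU
logits = h @ W2
p = np.exp(logits - logits.max())
p = p / p.sum()                      # softmax probabilities

# Pseudo-label: the network's own prediction (no ground truth at test time)
y = np.zeros_like(p)
y[0, p.argmax()] = 1.0

# Backpropagation of the cross-entropy loss
d_logits = p - y                     # dL/dlogits for softmax + cross-entropy
grad_W2 = h.T @ d_logits             # dL/dW2
d_h = d_logits @ W2.T
d_h_pre = d_h * (h_pre > 0)          # ReLU derivative
grad_W1 = x.T @ d_h_pre              # dL/dW1

# Layer-wise gradient norms: one scalar per layer, usable as detection features
gran_features = np.array([np.linalg.norm(grad_W1), np.linalg.norm(grad_W2)])
print(gran_features.shape)  # (2,) -> one norm per layer
```

In a full detector, these per-layer norms would be computed for every test sample and fed to a lightweight classifier that flags likely misclassifications; the single extra backward pass is what keeps the overhead small compared to ensemble- or auxiliary-network-based methods.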
|Published - October 2020
|European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning 2020 - Bruges, Belgium
Duration: 02.10.2020 → 04.10.2020