A Survey on Assessing the Generalization Envelope of Deep Neural Networks at Inference Time for Image Classification

Julia Lust, Alexandru Paul Condurache

Abstract

Deep Neural Networks (DNNs) achieve state-of-the-art performance in numerous applications. However, it is difficult to tell beforehand whether a DNN receiving an input will deliver the correct output, since its decision criteria are usually nontransparent. A DNN delivers the correct output if the input lies within the area enclosed by its generalization envelope; in this case, the information contained in the input sample is processed reasonably by the network. It is of great practical importance to assess at inference time whether a DNN generalizes correctly. Currently, the approaches to achieve this goal are investigated in different problem set-ups largely independently of one another, leading to three main research and literature fields: predictive uncertainty, out-of-distribution detection and adversarial example detection. This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and, in particular, DNNs. We underline the common ground, point to the most promising approaches and give a structured overview of the methods that provide, at inference time, means to establish whether the current input lies within the generalization envelope of a DNN.
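To illustrate what such an inference-time assessment can look like, the sketch below implements the widely used maximum-softmax-probability baseline, which serves as a common reference point across all three fields. The model, threshold value and function name are illustrative assumptions for this sketch and are not prescribed by the survey.

```python
import torch
import torch.nn.functional as F

def confidence_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return the maximum softmax probability for each input in the batch.

    A low score suggests the input may lie outside the model's
    generalization envelope (e.g. out-of-distribution or adversarial).
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)               # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
        score, _ = probs.max(dim=-1)    # maximum class probability per sample
    return score

# Hypothetical usage: flag inputs whose confidence falls below a threshold.
# The threshold (0.5 here) is an assumption and must be calibrated on held-out data.
# suspicious = confidence_score(model, batch) < 0.5
```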
Original language: English
Journal: arXiv.org
Pages (from-to): 1-19
Number of pages: 19
Publication status: Published - 01.06.2020
