Main Title: Explaining nonlinear classification decisions with deep Taylor decomposition
Author(s): Montavon, Grégoire
Lapuschkin, Sebastian
Binder, Alexander
Samek, Wojciech
Müller, Klaus-Robert
Type: Article
Language Code: en
Abstract: Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage: a lack of transparency, which limits the interpretability of the solution and thus the scope of application in practice. DNNs in particular act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method, called deep Taylor decomposition, efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
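The abstract's core idea, redistributing a classification score layer by layer back to the input, can be illustrated with a minimal sketch of a relevance redistribution rule of the kind used in deep Taylor decomposition (the z+ rule for ReLU layers with non-negative inputs). The function name, shapes, and the small numerical stabilizer are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def zplus_relevance(x, W, R_out):
    """Redistribute the relevance R_out of a layer's outputs onto its
    inputs x, using only the positive weights (illustrative z+ rule).

    x:     input activations, shape (d,), assumed non-negative (e.g. ReLU outputs)
    W:     weight matrix, shape (d, k)
    R_out: relevance assigned to the k output neurons, shape (k,)
    """
    Wp = np.maximum(W, 0.0)        # keep only excitatory (positive) weights
    z = x @ Wp + 1e-9              # total positive contribution per output neuron
    s = R_out / z                  # relevance per unit of contribution
    return x * (s @ Wp.T)          # each input gets its share of every output's relevance

# Tiny example: two inputs, two output neurons.
x = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])
R_in = zplus_relevance(x, W, np.array([2.0, 3.0]))
```

Applied recursively from the output layer down to the pixels, this yields the per-input contributions the abstract describes. A useful sanity check is conservation: the relevance arriving at the inputs sums (up to the stabilizer) to the relevance that left the outputs.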
Issue Date: May-2017
Date Available: 28-May-2018
DDC Class: 006 Special computer methods
150 Psychology
Subject(s): deep neural networks
Taylor decomposition
relevance propagation
image recognition
Journal Title: Pattern recognition : the journal of the Pattern Recognition Society
Publisher: Elsevier
Publisher Place: Amsterdam
Volume: 65
Publisher DOI: 10.1016/j.patcog.2016.11.008
Page Start: 211
Page End: 222
ISSN: 0031-3203
Appears in Collections:FG Maschinelles Lernen » Publications

Files in This Item:
File: 1-s2.0-S0031320316303582-main.pdf
Size: 1.5 MB
Format: Adobe PDF

This item is licensed under a Creative Commons License.