Talk page
Title:
Understanding Deep Neural Networks: From Generalization to Interpretability
Speaker:
Abstract:
Deep neural networks have recently seen an impressive comeback, with applications both in the public sector and in the sciences. However, despite their outstanding success, a comprehensive theoretical foundation of deep neural networks is still missing.
One main goal in deriving a theoretical understanding of deep neural networks is to analyze their generalization ability, i.e., their performance on unseen data. In the case of graph convolutional neural networks, which are today heavily used, for instance, in recommender systems, even the generalization to signals on graphs unseen during training, typically coined transferability, has not yet been thoroughly analyzed. As one answer to this question, we will show in this talk that spectral graph convolutional neural networks are indeed transferable, thereby also debunking a common misconception about this type of network.
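To make the spectral construction behind this question concrete, here is a minimal sketch (assuming numpy; the function names and the example filter are illustrative, not taken from the talk): a spectral graph convolution filters a signal through the eigendecomposition of the graph Laplacian, and the same scalar filter function g can be evaluated on graphs of any size, which is what makes the transferability question well posed.

import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(d[nz])
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_conv(A, x, g):
    """Filter a graph signal x: y = U g(Lambda) U^T x, where L = U Lambda U^T."""
    lam, U = np.linalg.eigh(normalized_laplacian(A))
    return U @ (g(lam) * (U.T @ x))

# One and the same spectral filter g (here an illustrative low-pass choice)
# is evaluated on two graphs of different sizes.
g = lambda lam: np.exp(-2.0 * lam)

rng = np.random.default_rng(0)
A1 = (rng.random((10, 10)) > 0.7).astype(float)
A1 = np.triu(A1, 1); A1 = A1 + A1.T          # symmetric, no self-loops
A2 = (rng.random((15, 15)) > 0.7).astype(float)
A2 = np.triu(A2, 1); A2 = A2 + A2.T

y1 = spectral_conv(A1, rng.standard_normal(10), g)
y2 = spectral_conv(A2, rng.standard_normal(15), g)

Comparing the outputs of one fixed filter across such different graphs is, roughly, what a transferability analysis quantifies.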
If such theoretical approaches fail, or if one is given only a trained neural network without knowledge of how it was trained, interpretability methods become necessary. These aim to "break open the black box" by identifying the input features that are most relevant for the observed output. Aiming at a theoretically founded approach to this problem, we introduced a novel method based on rate-distortion theory, coined Rate-Distortion Explanation (RDE), which not only provides state-of-the-art explanations but also allows first theoretical insights into the complexity of such problems. In this talk we will discuss this approach and show that it also gives a precise mathematical meaning to the previously vague notion of the relevant parts of an input.
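As a rough sketch of the rate-distortion viewpoint (in notation chosen here for illustration, not necessarily that of the talk): given a network \Phi and an input x \in \mathbb{R}^n, one asks for the smallest set of components that must be kept fixed so that randomizing the rest barely changes the output,

\min_{s \in \{0,1\}^n} \|s\|_0 \quad \text{subject to} \quad \mathbb{E}_{v \sim \mathcal{V}} \Big[ d\big(\Phi(x),\; \Phi(x \odot s + (1-s) \odot v)\big) \Big] \le \epsilon,

where \odot denotes componentwise multiplication, \mathcal{V} is a reference distribution over perturbations, and d measures the output deviation. The sparsity \|s\|_0 plays the role of the rate, the expected deviation that of the distortion, and the components with s_i = 1 are one precise candidate for the "relevant parts of the input" mentioned above.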
Link: