Geometric Interpretation of a CNN’s Last Layer

Date

June 16, 2019

Source

Conference on Computer Vision and Pattern Recognition (CVPR), Workshop on Explainable AI

Authors

Alejandro de la Calle
Javier Tovar
Emilio J. Almazán
Aitor Aller

Abstract

Deep neural networks (DNNs) have undoubtedly brought great success to a wide range of applications in computer vision, computational linguistics, and AI. However, the foundational principles underlying DNNs’ success and their resilience to adversarial attacks are still largely missing. Interpreting and theorizing about the internal mechanisms of DNNs has become a compelling yet controversial topic. Statistical and rule-based methods for network interpretation have much to offer in semantically disentangling inference patterns inside DNNs and in quantitatively explaining the decisions they make. Explicitly rethinking DNNs toward building explainable systems from scratch is another interesting direction, encompassing new neural architectures, new parameter-estimation methods, new training protocols, and new interpretability-sensitive loss functions.
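
Since the abstract does not spell out the geometry named in the title, the following is a minimal, illustrative sketch of what a geometric reading of a CNN’s last layer typically involves: the final fully connected layer scores each class as an inner product between the penultimate-layer feature vector and a per-class weight vector, which decomposes into the two norms and the angle between them. All values and names below are hypothetical, not the paper’s actual method or data.

```python
import numpy as np

# Hypothetical setup: a 4-dimensional feature space and 3 classes.
# In a real CNN, `features` would be the penultimate-layer activations
# and `W`, `b` the parameters of the final fully connected layer.
rng = np.random.default_rng(0)
features = rng.standard_normal(4)   # penultimate-layer feature vector f
W = rng.standard_normal((3, 4))     # one weight vector per class
b = rng.standard_normal(3)          # per-class biases

# The logit for class c is an inner product: w_c . f + b_c.
logits = W @ features + b

# Geometric decomposition: w_c . f = ||w_c|| * ||f|| * cos(theta_c),
# so each logit is driven by the feature norm, the class-weight norm,
# and the angle between the feature and the class direction.
norms_w = np.linalg.norm(W, axis=1)
norm_f = np.linalg.norm(features)
cosines = (W @ features) / (norms_w * norm_f)
angles_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

for c, (logit, ang) in enumerate(zip(logits, angles_deg)):
    print(f"class {c}: logit={logit:+.3f}, angle to w_{c}={ang:.1f} deg")
```

Under this reading, the predicted class is the one whose weight vector is best aligned (smallest angle, scaled by the norms) with the feature vector, which is what makes the last layer amenable to geometric interpretation.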