Conference paper, 2019

Towards Interpretability of Segmentation Networks by analyzing DeepDreams

Abstract

Interpretability of a neural network can be expressed as the identification of patterns or features to which the network is either sensitive or indifferent. To this aim, a method inspired by DeepDream is proposed, where the activation of a neuron is maximized by performing gradient ascent on an input image. The method outputs curves that show the evolution of features during the maximization. A controlled experiment shows how it enables assessing the robustness of the network to a given feature or, by contrast, its sensitivity to it. The method is illustrated on the task of segmenting tumors in liver CT images.
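A minimal sketch (not the authors' implementation) of the DeepDream-style procedure described in the abstract, written in PyTorch: the activation of a chosen output neuron is maximized by gradient ascent on the input image while a feature of interest is measured at every step, producing the evolution curve used for the analysis. The names model, target_neuron and feature_fn are hypothetical placeholders for the segmentation network, the neuron index and the feature measure.

    # Sketch only: DeepDream-style activation maximization on an input image,
    # tracking how a chosen feature evolves during the gradient ascent.
    import torch

    def activation_maximization(model, image, target_neuron, feature_fn,
                                steps=200, lr=0.1):
        """Maximize the activation of one output neuron by gradient ascent
        on the input image.

        target_neuron: index into the network output, e.g. (0, class, y, x).
        feature_fn:    maps the current image to a scalar feature (e.g. contrast),
                       recorded at every step to produce the evolution curve.
        """
        model.eval()
        x = image.clone().detach().requires_grad_(True)
        optimizer = torch.optim.Adam([x], lr=lr)
        curve = []
        for _ in range(steps):
            optimizer.zero_grad()
            activation = model(x)[target_neuron]
            (-activation).backward()   # minimize the negative = gradient ascent
            optimizer.step()
            with torch.no_grad():
                curve.append(feature_fn(x).item())
        return x.detach(), curve

Under these assumptions, a feature curve that stays flat while the activation grows would suggest the network is indifferent (robust) to that feature, whereas a strong drift would indicate sensitivity to it, in line with the interpretation sketched in the abstract.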
No file deposited

Dates and versions

hal-02288076 , version 1 (13-09-2019)

Identifiers

  • HAL Id : hal-02288076 , version 1

Cite

Vincent Couteaux, O. Nempont, Guillaume Pizaine, Isabelle Bloch. Towards Interpretability of Segmentation Networks by analyzing DeepDreams. iMIMIC Workshop at MICCAI 2019: Interpretability of Machine Intelligence in Medical Image Computing, 2019, Shenzhen, China. pp.56-63. ⟨hal-02288076⟩