LAUGEL Thibault

PhD
Team: LFI
Departure date: 31/07/2020
https://lip6.fr/Thibault.Laugel

Supervision: Christophe MARSALA, Marie-Jeanne LESOT

Co-supervision: DETYNIECKI Marcin

Local Post-hoc Interpretability for Black-box Classifiers

This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
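To illustrate the setting, the sketch below searches for a counterfactual explanation of a single prediction while treating the classifier as a black box (queried only through its predict function). The classifier and the growing-radius random-sampling strategy here are illustrative assumptions for the example, not the specific algorithms proposed in the thesis.

```python
# Minimal sketch of a local post-hoc counterfactual search for a
# black-box classifier. Both the stand-in classifier and the
# growing-radius sampling strategy are illustrative assumptions.
import numpy as np

def black_box(x):
    # Stand-in black box: a nonlinear decision rule we only query.
    return int(x[0] ** 2 + x[1] ** 2 > 1.0)

def find_counterfactual(predict, x, step=0.1, n_samples=200, seed=0):
    """Sample random perturbations in a sphere of growing radius
    around x until one flips the prediction; return the closest flip."""
    rng = np.random.default_rng(seed)
    y0 = predict(x)
    radius = step
    while radius < 10.0:
        # Draw candidate points uniformly distributed directions,
        # with radii up to the current search radius.
        directions = rng.normal(size=(n_samples, x.size))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(0.0, radius, size=(n_samples, 1))
        candidates = x + directions * radii
        flipped = [c for c in candidates if predict(c) != y0]
        if flipped:
            # Among the prediction flips, keep the one closest to x.
            return min(flipped, key=lambda c: np.linalg.norm(c - x))
        radius += step
    return None  # no counterfactual found within the search budget

x = np.array([0.5, 0.5])              # instance to explain (class 0)
counterfactual = find_counterfactual(black_box, x)
```

Because the search only calls `predict`, it makes no assumption about the model's internals, matching the fully agnostic context studied in the thesis; the risks it discusses (out-of-distribution or insufficiently local explanations) concern exactly the candidates such a procedure may return.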

Defense: 03/07/2020

Jury members:

M. Jamal Atif, Dauphine LAMSADE [rapporteur]
M. Marcin Detyniecki, AXA
Mme Fosca Giannotti, University of Pisa KDDLab / ISTI-CNR [rapporteur]
Mme Marie-Jeanne Lesot, Sorbonne Université LIP6
M. Christophe Marsala, Sorbonne Université LIP6
M. Nicolas Maudet, Sorbonne Université LIP6
M. Chris Russell, Alan Turing Institute / University of Surrey
