DENIS Christophe

Associate Professor
Team: ACASA
Departure date: 31/12/2023
https://lip6.fr/Christophe.Denis

Research Activity

The notions of information and conviviality in deep neural networks – or what about "Explainable AI" or "Trust AI"?
The thunderous return of neural networks took place in 2012, in the sublime Florentine setting of a renowned international computer vision conference. As in previous years, participants were invited to test their image recognition techniques. Geoffrey Hinton's team from the University of Toronto was the only one using deep neural networks, and it outperformed the other competitors in two of the three categories of the competition. The audience was stunned by the magnitude of the reduction in prediction error, a factor of three, whereas the algorithms based on the researchers' own expertise differed from one another by only a few percent. Other computational scientific disciplines, such as computational fluid dynamics, geophysics, and climatology, have since started to use deep learning methods to predict phenomena that are difficult to model with a classical hypothetico-deductive approach.
We argue that systematically explaining deep learning to all of its users is not always justified, can be counterproductive, and even raises ethical issues. For example, how can one assess the correctness of an explanation that could be unintentionally permissive, or even manipulative in a fraudulent context? There is therefore a need to revisit the theory of information (Fisher, Shannon) and the philosophy of information (e.g., Floridi) in the light of deep learning. Such information would allow certain users to produce their own reasoning (most likely an abductive one) rather than merely receiving an explanation.
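For reference, the classical quantities these names refer to are, in standard textbook notation (reproduced here only as a reminder, not as this project's formalism):

H(X) = -\sum_{x} p(x) \log p(x)    (Shannon entropy of a discrete source X)

I(\theta) = \mathbb{E}\!\left[ \left( \partial_{\theta} \log p(X \mid \theta) \right)^{2} \right]    (Fisher information of a parametric model)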
Last but not least, should we trust a machine learning model? To trust means handing over something valuable to someone and relying on them. The corollary is that "the person who trusts is immediately in a state of vulnerability and dependence", all the more so when that trust rests on an explanation whose correctness is difficult to assess. Moreover, we strongly believe that using human relationship terms such as trust or fairness in the context of machine learning necessarily induces anthropomorphism, whose harmful effects can include addiction (the ELIZA effect) and persuasion rather than information. In contrast, our philosophical and mathematical research direction tries to define conviviality criteria for machine learning based on Ivan Illich's thought. According to Illich, a convivial tool must have the following properties:

• it must generate efficiency without degrading personal autonomy;
• it must create neither slave nor master;
• it must widen the personal radius of action.

As presented in the last part of the talk, neural differential equations, by providing trajectories rather than point predictions, seem to be an efficient mathematical formalism for implementing convivial deep learning tools; a minimal sketch is given below.
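To make that last claim concrete, here is a minimal sketch, assuming PyTorch and a fixed-step Euler solver; the class names, network size, and time grid are illustrative choices, not the team's actual formalism. The point is that integrating a learned vector field returns the whole trajectory of the state, which a user can inspect and reason about, rather than a single opaque prediction.

import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Small MLP parameterising the right-hand side dy/dt = f_theta(t, y)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden),  # +1 input for the time variable
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Append time to the state so the field can be non-autonomous.
        t_col = t * torch.ones(y.shape[0], 1)
        return self.net(torch.cat([y, t_col], dim=-1))

def integrate(field: VectorField, y0: torch.Tensor, t_grid: torch.Tensor):
    """Explicit Euler integration; returns the whole trajectory, not just y(T)."""
    trajectory = [y0]
    y = y0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        y = y + (t1 - t0) * field(t0, y)
        trajectory.append(y)
    return torch.stack(trajectory)  # shape: (len(t_grid), batch, dim)

if __name__ == "__main__":
    torch.manual_seed(0)
    field = VectorField(dim=2)
    y0 = torch.randn(5, 2)                       # batch of 5 initial states
    t_grid = torch.linspace(0.0, 1.0, steps=21)  # 20 Euler steps
    traj = integrate(field, y0, t_grid)
    # The user sees how the state evolves over time, not only traj[-1].
    print(traj.shape)  # torch.Size([21, 5, 2])

In practice one would replace the Euler loop with an adaptive solver (for instance the odeint function of the torchdiffeq library), but the interface idea is the same: the model exposes how the state evolves, not only where it ends up.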

1 PhD student (thesis direction / co-supervision)

  • BAYET Théophile: Using Machine Learning for sustainable science in the Global South in a context of scarce reliable data

Publications 2005-2023
