BAAJ Ismail

Doctor
Unit: SMA
Departure date: 04/02/2022
https://lip6.fr/Ismail.Baaj

Research supervision: Nicolas MAUDET

Co-supervision: POLI Jean-Philippe

Explainability of possibilistic and fuzzy rule-based systems

Today, advances in Artificial Intelligence (AI) have led to the emergence of systems that can automate complex processes, using models that can be difficult for humans to understand. When humans use these AI systems, they want to understand their behavior and actions, as they have more confidence in systems that can explain their choices, assumptions and reasoning. The explanatory capability of AI systems has become a user requirement, especially in settings that involve risk to humans, such as autonomous vehicles or medicine. This requirement is in line with the recent resurgence of interest in eXplainable Artificial Intelligence (XAI), a research field that aims to develop AI systems able to explain their results in a way that is comprehensible to humans.
In this context, we introduce explanatory paradigms for two kinds of AI systems: a possibilistic rule-based system (whose rules encode negative information) and a fuzzy rule-based system composed of possibility rules, which encode positive information. We develop a possibilistic interface between learning and if-then rule-based reasoning that establishes meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML). This interface is built by generalizing the min-max equation system of Henri Farreny and Henri Prade, which was developed for the explainability of possibilistic rule-based systems. In the case of a cascade, i.e., when a system uses two chained possibilistic rule sets, we show that the associated equation system can be represented by an explicit min-max neural network. This interface may allow the development of possibilistic learning methods that are consistent with rule-based reasoning.
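To give a rough sense of what a min-max equation system and its cascaded form look like, here is a minimal Python sketch. It is only an illustration under simplifying assumptions: the composition b[j] = min_i max(m[i][j], a[i]), the example matrices and the two-layer chaining are hypothetical and do not reproduce the thesis' exact formulation.

    def min_max(matrix, vec):
        # Min-max composition: b[j] = min over i of max(matrix[i][j], vec[i]).
        # Illustrative stand-in for the equations relating input and output
        # possibility degrees of a rule set encoding negative information.
        return [min(max(m_ij, a_i) for m_ij, a_i in zip(column, vec))
                for column in zip(*matrix)]

    # Cascade: two chained rule sets give two chained compositions,
    # i.e. the structure of a two-layer min-max "neural network".
    M1 = [[0.2, 1.0], [0.7, 0.3]]   # hypothetical parameters of the first rule set
    M2 = [[0.5, 0.1], [1.0, 0.6]]   # hypothetical parameters of the second rule set
    a = [0.9, 0.4]                  # input possibility degrees
    hidden = min_max(M1, a)
    output = min_max(M2, hidden)

In such a two-layer structure every output value is obtained by min and max operations over identifiable matrix entries and input degrees, so the computation stays traceable, in line with the explainability goal of the interface.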
For XAI, we introduce methods for justifying the inference results of possibilistic and fuzzy rule-based systems. Our methods yield two kinds of explanations of an inference result: its justification and its unexpectedness (a set of logical statements that are not involved in determining the considered result but are related to it).
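Continuing the sketch above, the following deliberately naive function only illustrates the distinction drawn in this abstract, namely which pieces of information actually determine an output value and which related pieces do not; it is a hypothetical illustration, not the thesis' formal definition of justification or unexpectedness.

    def decisive_terms(matrix, vec, j):
        # For output coordinate j of the min-max composition, find the rows
        # whose term max(matrix[i][j], vec[i]) attains the final value:
        # these rows loosely play the role of a justification, while the
        # remaining (related but uninvolved) rows echo the idea of unexpectedness.
        terms = [max(matrix[i][j], vec[i]) for i in range(len(vec))]
        value = min(terms)
        involved = [i for i, t in enumerate(terms) if t == value]
        uninvolved = [i for i in range(len(vec)) if i not in involved]
        return value, involved, uninvolved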
Finally, we propose a graphical representation, in terms of conceptual graphs, of an explanation of an inference result of a possibilistic or fuzzy rule-based system. For an inference result, we represent its justification, its unexpectedness, and a combination of the two.

Thesis defense: 27/01/2022

Defense committee members:

José Maria Alonso Moral (Ramón y Cajal researcher, University of Santiago de Compostela) [rapporteur]
Sébastien Destercke (CNRS researcher, HDR, Université de Technologie de Compiègne) [rapporteur]
Madalina Croitoru (Professor, Université de Montpellier)
Vincent Mousseau (Professor, CentraleSupélec)
Patrice Perny (Professor, Sorbonne Université)
Marie-Jeanne Lesot (Associate Professor, HDR, Sorbonne Université)
Nicolas Maudet (Professor, Sorbonne Université)
Jean-Philippe Poli (Research engineer, CEA)
Wassila Ouerdane (Associate Professor, CentraleSupélec)
