This thesis belongs to the field of explainable AI for Abstract Argumentation Frameworks (AAFs). An AAF is a formalism for representing and reasoning about contradictory information by means of abstract entities called arguments and a binary relation called attack, which represents a contradiction or an asymmetric conflict between two arguments. An argument can then be accepted or rejected according to a set of rules called a semantics. In this context, we work on generating so-called causal explanations, i.e. explanations derived from complex causal reasoning, of the acceptability status of a set of arguments, tailored to the explainee. We consider three settings: Dung's classical abstract argumentation framework; a dynamic framework, i.e. one that takes into account the order in which arguments are put forward; and a quantitative bipolar framework that adds a second binary relation called support as well as a weighting of the arguments. In each setting, we bridge the two fields by proposing an explicit transformation between AAFs and a suitable causal formalism: structural models for the classical and quantitative bipolar frameworks, and action languages for the dynamic framework.
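To make the underlying formalism concrete, the following is a minimal sketch, not taken from the thesis itself: it represents a Dung AAF as a set of arguments plus an attack relation, and computes the grounded extension as the least fixed point of the characteristic function. The function name grounded_extension and the toy example are illustrative assumptions.

```python
# Hypothetical illustration of a Dung AAF: arguments plus an attack relation.
# The grounded extension is the least fixed point of the characteristic
# function F(S) = {a : every attacker of a is attacked by some member of S}.

def grounded_extension(arguments, attacks):
    """arguments: iterable of argument names; attacks: set of (attacker, target) pairs."""
    extension = set()
    while True:
        # An argument is defended by `extension` if each of its attackers
        # is itself attacked by some argument already in `extension`.
        defended = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Toy example: a attacks b, b attacks c; the grounded extension is {a, c},
# since a is unattacked and a defends c against b.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```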
Keywords: Explainable AI, Abstract Argumentation Framework, Causality, Action Language, Quantitative Bipolar Argumentation Framework