BERREBY Fiona

PhD
Research group: ACASA
Date of departure from LIP6: 06.12.2018
https://lip6.fr/Fiona.Berreby

Research supervision (direction de recherche): Jean-Gabriel GANASCIA

Co-supervision: Gauvain BOURGNE

Models of Ethical Reasoning

This thesis is part of the ANR eThicAa project, which has aimed to define moral autonomous agents, provide a formal representation of ethical conflicts and of their objects (within one artificial moral agent, between an artificial moral agent and the rules of the system it belongs to, between an artificial moral agent and a human operator, and between several artificial moral agents), and design explanation algorithms for the human user. The thesis focuses in particular on ethical conflicts within a single agent and on the design of explanation algorithms.

The work presented here investigates the use of high-level action languages for designing such ethically constrained autonomous agents. It proposes a novel, modular, logic-based framework for representing and reasoning over a variety of ethical theories, based on a modified version of the event calculus and implemented in Answer Set Programming. The ethical decision-making process is conceived as a multi-step procedure captured by four types of interdependent models, which allow the agent to represent situations, reason over accountability, and make ethically informed choices. More precisely, an action model enables the agent to appraise its environment and the changes that take place in it, a causal model tracks agent responsibility, a model of the Good makes a claim about the intrinsic value of goals or events, and a model of the Right considers what an agent should do, or is most justified in doing, given the circumstances of its actions. The causal model plays a central role because it identifies properties of causal relations that determine how, and to what extent, ethical responsibility may be ascribed on their basis.

The overarching ambition of the presented research is twofold. First, to allow the systematic representation of an unbounded number of ethical reasoning processes, through a framework that is adaptable and extensible by virtue of its hierarchical design and standard syntax. Second, to avoid the pitfall of some work in current computational ethics that too readily embeds moral information within computational engines, thereby feeding agents atomic answers that fail to represent the underlying dynamics. We aim instead to shift the burden of moral reasoning from the programmer to the program itself.
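To give a rough, purely illustrative idea of the style of framework described above, the following clingo-style Answer Set Programming sketch shows how event-calculus effect and inertia axioms can be kept separate from a toy "model of the Good" layer. It is not the thesis code: the example domain and all predicate and constant names (press_switch, light_on, good/1, brings_about_good/2) are invented here for illustration.

    % Hypothetical sketch only: a minimal event-calculus action model in
    % clingo-style ASP, with a toy "model of the Good" on top.

    time(0..3).

    fluent(light_on).
    action(press_switch). action(release_switch).

    % Direct effects of actions (invented example domain).
    initiates(press_switch, light_on, T)    :- time(T).
    terminates(release_switch, light_on, T) :- time(T).

    % Narrative: the agent presses the switch at time 1.
    occurs(press_switch, 1).

    % Event-calculus style inertia: fluents persist unless clipped.
    holds(F, T+1) :- holds(F, T), fluent(F), time(T), time(T+1), not clipped(F, T).
    clipped(F, T) :- occurs(A, T), terminates(A, F, T).
    holds(F, T+1) :- occurs(A, T), initiates(A, F, T), time(T+1).

    % Toy value layer: light_on is intrinsically good, so actions that
    % initiate it bring about something good.
    good(light_on).
    brings_about_good(A, T) :- occurs(A, T), initiates(A, F, T), good(F).

    #show holds/2.
    #show brings_about_good/2.

Run with clingo, this yields holds(light_on,2), holds(light_on,3) and brings_about_good(press_switch,1). The point of the sketch is the separation of concerns: the action model (effects and inertia) and the value layer are independent modules, which is the kind of hierarchisation and extensibility the abstract describes.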

PhD defense: 06.12.2018

Members of the examination committee:

Dr. HDR Emiliano LORINI, Université de Toulouse III [Rapporteur]
Pr. Thomas POWERS, University of Delaware [Rapporteur]
Dr. Virginia DIGNUM, TU Delft
Pr. Raja CHATILA, Sorbonne Université
Pr. Nicolas MAUDET, Sorbonne Université
Dr. Grégory BONNET, Université de Caen Normandie
Dr. Gauvain BOURGNE, Sorbonne Université
Pr. Jean-Gabriel GANASCIA, Sorbonne Université


Publications 2015-2018
