SARMIENTO LOZANO Camilo

PhD student
Team: ACASA
Arrival date: 10/01/2020
    Sorbonne Université - LIP6
    Boîte courrier 169
    Couloir 26-00, Étage 5, Bureau 503
    4 place Jussieu
    75252 PARIS CEDEX 05
    FRANCE

Tel: +33 1 44 27 88 60, Camilo.Sarmiento (at) lip6.fr
https://lip6.fr/Camilo.Sarmiento

Supervision: Jean-Gabriel GANASCIA

Co-supervision: Gauvain BOURGNE

Merging Action Models to Non-Monotonic and Deontic Logics in Order to Simulate Ethical Reasonings

The field of computational ethics (cf. [Tolmeijer et al., 2020]) is dedicated to modelling ethical concepts and ethical reasoning with AI tools. This line of research is quite recent: the need for ethical supervisors that control the behaviour of artificial agents has only lately become crucial, owing to the recent successes of AI techniques, in particular supervised machine learning and, more specifically, deep learning, and to their deployment in everyday applications. Let us insist on the key notion of "ethical supervisor" that we want to design here: these are pieces of software that would govern artificial agents, e.g. autonomous vehicles, in order to ensure that they do not infringe moral rules. This does not mean that the resulting agents would be, properly speaking, ethical, which would be ridiculously anthropocentric, but that such an ethical supervisor makes the agents' behaviour conform to moral rules of conduct.
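
As an informal illustration of the supervisor idea only (this is not the system developed in the thesis, and the names forbidden-rule, supervise and the action attributes are hypothetical), one might picture the supervisor as a filter that keeps only those candidate actions of an agent that violate no prohibition rule, in the spirit of deontic constraints:

    # Minimal illustrative sketch of an "ethical supervisor" (hypothetical):
    # the supervisor filters the actions proposed by an artificial agent,
    # keeping only those that violate no prohibition rule.
    from typing import Callable, Dict, List

    # A rule is a predicate over a described action; True means "forbidden".
    Rule = Callable[[Dict], bool]

    # Hypothetical prohibition rules for an autonomous vehicle.
    RULES: List[Rule] = [
        lambda a: a.get("endangers_pedestrian", False),  # do not endanger pedestrians
        lambda a: a.get("breaks_traffic_law", False),    # do not break traffic laws
    ]

    def supervise(candidate_actions: List[Dict], rules: List[Rule]) -> List[Dict]:
        """Return only the candidate actions that violate no prohibition rule."""
        return [a for a in candidate_actions if not any(rule(a) for rule in rules)]

    if __name__ == "__main__":
        candidates = [
            {"name": "brake",        "endangers_pedestrian": False, "breaks_traffic_law": False},
            {"name": "swerve_right", "endangers_pedestrian": True,  "breaks_traffic_law": False},
        ]
        for action in supervise(candidates, RULES):
            print("permitted:", action["name"])  # prints: permitted: brake

The thesis itself aims at richer reasoning than such a static filter, by combining action models with non-monotonic and deontic logics.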

2021-2023 Publications