Postdoc: XAI for Tuning Fuzzy Rule-Based Systems with Reinforcement Learning

Context

The Learning Fuzzy and Intelligent systems group (LFI) at Sorbonne Université’s Computer Science Laboratory (LIP6) offers a post-doc position funded by the ANR ASTRID project “Interpretable-by-design Fuzzy Policy in Reinforcement Learning (IFP-in-RL)”, conducted in collaboration with Thales RT. In the general context of eXplainable Artificial Intelligence (XAI), the IFP-in-RL project aims to propose a method for the automatic construction of a control system for a device, such as a drone, that takes interpretability constraints into account in its design. The project is set within the framework of fuzzy rule-based systems, which, since their introduction, have aimed to facilitate the expression of knowledge in a linguistic form that is natural for the user and easily understandable by a human (for instance, a rule such as “IF the obstacle is close AND the speed is high THEN brake strongly”). Such a knowledge representation is an excellent way to promote human interaction with the computer system and to improve users’ understanding of how it works, thus making its behaviour transparent and easy to validate. Various approaches to building or fine-tuning fuzzy rule bases exist in the literature, but they generally suffer from the drawback of not incorporating interpretability. In this project, an innovative design methodology for such systems will be proposed, based on a reinforcement learning approach that exploits interpretability metrics.
Work to be conducted

The post-doc will participate in the main tasks of the IFP-in-RL project. The first task is a comprehensive study, both theoretical and experimental, of interpretability metrics, covering existing numerical criteria as well as user needs. This will involve proposing a taxonomy of existing metrics and, where necessary, defining new measures to complement them and allow their exploitation in reinforcement learning algorithms. The second task will focus on the design of an innovative reinforcement learning algorithm to generate the interpretable control system. A third task will focus on the evaluation of the proposed approaches. An original feature of the project is the integration of a qualitative assessment, conducted with a human panel, of the proposed metrics as well as of the rule bases generated by reinforcement learning.
Applicants are required to have:
• a PhD in Computer Science and a good publication record;
• advanced skills in Python programming and knowledge of standard ML libraries (scikit-learn, numpy, pandas, …);
• a strong background in Machine Learning;
• knowledge of eXplainable AI (XAI);
• fluency in written and spoken English;
• communication skills in French (a plus, but not required).
How to apply: Send an email to contact.lfi@listes.lip6.fr with:
• a full resume, including a complete list of publications;
• a transcript of higher-education records;
• a one-page research statement discussing how the candidate’s background fits the proposed topic;
• two letters of support from people who have worked with the candidate.


Contact: Christophe Marsala
