BHAN Milan

PhD student
Team : LFI
Arrival date : 04/12/2022
    Sorbonne Université - LIP6
    Boîte courrier 169
    Couloir 26-00, Étage 5, Bureau 504
    4 place Jussieu
    75252 PARIS CEDEX 05
    FRANCE

Tel: +33 1 44 27 88 87, Milan.Bhan (at) lip6.fr
https://lip6.fr/Milan.Bhan

Supervision : Marie-Jeanne LESOT

Co-supervision : Jean-Noël VITTAUT

Generation of counterfactual texts

The objective of this thesis is to evaluate the possibility of generating counterfactuals in NLP under various forms of constraints, such as plausibility, grammatical correctness or goal orientation. Counterfactual generators will be evaluated both as a source of interpretability and as a method for strengthening the robustness of the language models considered. This work will thus address the following questions:

- How suitable are existing post-hoc model-agnostic methods for deep learning models applied to NLP?
- How can deep learning models applied to NLP be interpreted through the parameters of their own structure, and can a counterfactual generation method be derived from them?
- How can the constraints of plausibility, efficiency and goal orientation be integrated into counterfactual generation in NLP?

To this end, the proposed approaches will be tested on various datasets, such as the IMDB dataset. State-of-the-art language models such as BERT (Bidirectional Encoder Representations from Transformers) and other derivatives of the Transformer architecture will be used to address these issues. In particular, the attention coefficients inherent to Transformer architectures will be investigated. Finally, the use of reinforcement learning algorithms will be considered during the text generation process, although it is not strictly necessary for generating counterfactual examples. Non-antonym-based text generators will also be tested to improve the quality of the generated counterfactuals. The counterfactual methods will be systematically evaluated and used to perform data augmentation and bias detection in order to make the models more robust.
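To make the goal-orientation constraint concrete, the following is a minimal sketch of counterfactual text generation for sentiment classification: tokens are substituted until the predicted label flips, while changing as few tokens as possible. The classifier, the antonym lexicon and the example review are toy assumptions for illustration only, not the method developed in the thesis (which targets real language models such as BERT and non-antonym-based generators).

```python
# Toy antonym lexicon used to flip sentiment-bearing tokens (assumption).
ANTONYMS = {
    "good": "bad", "bad": "good",
    "great": "terrible", "terrible": "great",
    "boring": "gripping", "gripping": "boring",
}


def classify(text: str) -> str:
    """Toy sentiment classifier: counts polarity words (stand-in for BERT)."""
    pos = {"good", "great", "gripping"}
    neg = {"bad", "terrible", "boring"}
    tokens = text.lower().split()
    score = sum(t in pos for t in tokens) - sum(t in neg for t in tokens)
    return "positive" if score >= 0 else "negative"


def counterfactual(text: str) -> str:
    """Substitute antonyms one token at a time until the predicted label
    flips (goal orientation), keeping the edit count low (sparsity)."""
    original_label = classify(text)
    tokens = text.split()
    for i, tok in enumerate(tokens):
        substitute = ANTONYMS.get(tok.lower())
        if substitute is None:
            continue
        candidate = tokens.copy()
        candidate[i] = substitute
        candidate_text = " ".join(candidate)
        if classify(candidate_text) != original_label:
            # Label flipped: a valid counterfactual has been found.
            return candidate_text
        tokens = candidate  # keep the edit and keep substituting
    return " ".join(tokens)


review = "a truly great film"
cf = counterfactual(review)  # "a truly terrible film", classified "negative"
```

A real generator would replace the lexicon lookup with a language model proposing plausible, grammatically correct substitutions, and query the actual classifier under study instead of the toy one.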