PhD graduated
Team: SYEL
Departure date: 06/16/2023

Supervision: Hichem SAHBI

Co-supervision: Andrei STOIAN

Deep Active Learning for Visual Recognition with Few Examples

Automatic image analysis has improved the exploitation of image sensors, whose data come from sources such as phone cameras, surveillance cameras, satellite imagers and drones. Deep learning achieves excellent results in image analysis when large amounts of annotated data are available, but training a new image classifier from scratch is difficult: most image classification methods are supervised and require annotations, which represent a significant investment. Several frugal learning solutions (learning with few annotated examples) exist, including transfer learning, active learning, semi-supervised learning and meta-learning. The goal of this thesis is to study these frugal learning solutions for visual recognition tasks, namely image classification and change detection in satellite images.

The classifier is trained iteratively, starting from only a few annotated samples and asking the user to annotate as little data as possible to reach satisfactory performance. Among the solutions studied, deep active learning best suited our operational problem, so we adopted it. We developed an interactive approach in which the most informative questions about the relevance of the data are asked to an oracle (annotator); based on the oracle's answers, a decision function is iteratively updated. We model the probability that samples are relevant by minimizing an objective function that captures the representativeness, diversity and ambiguity of the data, and the samples with the highest probabilities are then selected for annotation. We improved this approach with reinforcement learning, which dynamically and accurately reweights the importance of representativeness, diversity and ambiguity at each active learning cycle.
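The selection criterion described above can be sketched as follows. This is a minimal illustration, not the thesis's actual model: the feature-based scores, function names and fixed weights are assumptions (in the thesis, the objective is optimized jointly and the weights are learned by reinforcement learning).

```python
import numpy as np

def ambiguity(probs):
    # Entropy of the predicted class probabilities: high for uncertain samples.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def representativeness(features):
    # Closeness of each sample to the centroid of the unlabeled pool.
    dists = np.linalg.norm(features - features.mean(axis=0), axis=1)
    return 1.0 / (1.0 + dists)

def diversity(features, labeled_features):
    # Distance to the nearest already-annotated sample: high for novel data.
    if len(labeled_features) == 0:
        return np.ones(len(features))
    d = np.linalg.norm(features[:, None, :] - labeled_features[None, :, :], axis=2)
    return d.min(axis=1)

def select_batch(features, probs, labeled_features, k, weights=(1.0, 1.0, 1.0)):
    # Weighted combination of the three criteria; the fixed weights stand in
    # for the mixing hyperparameters that the thesis learns dynamically.
    wr, wd, wa = weights
    score = (wr * representativeness(features)
             + wd * diversity(features, labeled_features)
             + wa * ambiguity(probs))
    # Indices of the k highest-scoring samples, to be sent to the oracle.
    return np.argsort(score)[-k:][::-1]
```

Each active learning cycle would then annotate the selected batch, retrain the classifier, and repeat with the enlarged labeled set.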
Finally, our last approach consists of a display model that selects the most representative and diverse virtual examples, which adversarially challenge the learned model in order to obtain a highly discriminative model in subsequent active learning iterations. The strong results obtained against different baselines and the state of the art on satellite image change detection and image classification demonstrate the relevance of the proposed frugal learning models.
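The idea of virtual examples that challenge the current model can be illustrated, under simplifying assumptions, with a single gradient-sign step on a logistic classifier. The function and parameters below are hypothetical and only stand in for the thesis's display model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def virtual_adversarial_example(x, y, w, b, eps=0.1):
    # One gradient-sign step on the log-loss of a logistic classifier:
    # the perturbed sample moves toward the decision boundary and is
    # therefore harder for the current model.
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w  # gradient of the log-loss w.r.t. the input x
    return x + eps * np.sign(grad_x)
```

Retraining on such hard virtual examples is what pushes the model to become more discriminative in later iterations.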

Defence: 06/15/2023

Jury members:

M. Michel CRUCIANU, CNAM [Rapporteur]
M. Chaabane DJERABA, Université de Lille [Rapporteur]
Mme Anissa MOKRAOUI, Université Sorbonne Paris Nord
M. Clément MALLET, Université Gustave Eiffel & IGN
M. Andrei STOIAN, Zama
M. Hichem SAHBI, CNRS & Sorbonne Université


