
ENESCU Victor

PhD Student at Sorbonne University
Team: SYEL

Supervision: Hichem SAHBI

Deep Generative Models for Frugal and Continual Visual Learning

Rapid advances in Artificial Intelligence (AI) rely heavily on vast datasets, yet real-world deployments are often constrained by limited resources and continuous data streams, particularly in continual learning scenarios. In these dynamic settings, AI models must constantly adapt to new information without losing previously acquired knowledge. Generative models offer a powerful solution: by synthesizing realistic data, they significantly reduce the burdens of data acquisition and storage. However, their effective deployment in continual learning faces a dual challenge: an inherent susceptibility to catastrophic forgetting, and the need for precise conditioning so that generated outputs align with specific user requirements.

Our first contribution introduces a novel framework for disentangled latent-space conditioning in generative models, which learns distinct, well-separated Gaussian components in the latent space. We achieve this through a combination of a likelihood loss, which preserves the continuity of the data manifold, and a repelling loss, which prevents components from overlapping. This disentanglement proves critical not only for robust conditional generation but also improves performance in classification and image augmentation tasks.

Building on this paradigm, our second contribution addresses the incremental conditioning of generative models, optimizing for sustained performance while minimizing forgetting. We propose two distinct solutions: the first leverages the aforementioned disentangled conditioning and demonstrates its substantial benefits in incremental learning scenarios; the second explores novel architectures built from conditional network sub-instances, designed to acquire new information efficiently with a minimal memory footprint while completely circumventing catastrophic forgetting. Collectively, our methods allow generative models to rapidly assimilate new knowledge without compromising prior learning or demanding excessive memory, marking a significant advance in their applicability to data-constrained, dynamic environments.


PhD defence: 09/26/2025

Jury members:

Anissa MOKRAOUI, Professor, Université Sorbonne Paris Nord [reviewer]
Chaabane DJERABA, Professor, Université de Lille [reviewer]
Lina MROUEH, Professor, ISEP Paris
Vincent GRIPON, Professor, IMT Atlantique
Adrian POPESCU, Research Director, CEA
Hichem SAHBI, CNRS Researcher (HDR), Sorbonne Université

Departure date: 09/30/2025
