Inferring and Predicting Dynamic Representations for Structured Temporal Data
Temporal data constitute a large part of the data collected digitally. Predicting their future values is an important and challenging task in domains such as climatology, optimal control, and natural language processing. Standard statistical methods are based on linear models and are often limited to low-dimensional data.
We instead use deep learning methods, which can handle high-dimensional structured data and leverage large quantities of examples. In this thesis, we are interested in latent variable models. Unlike autoregressive models, which predict directly from past data, latent models infer low-dimensional vectorial representations of the data and perform prediction in that representation space. First, we propose a structured latent model for spatiotemporal data forecasting. Next, we focus on predicting data distributions, rather than point estimates, through a diachronic language modeling task. Finally, we propose a stochastic prediction model applied to video prediction.
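The encode / predict-in-latent-space / decode pipeline described above can be sketched as follows. This is a minimal illustrative toy, not the thesis models: the linear maps, dimensions, and `tanh` nonlinearity are placeholder assumptions standing in for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: high-dimensional observations, low-dim latent.
obs_dim, latent_dim = 64, 8

# Randomly initialised linear maps stand in for trained encoder,
# latent dynamics, and decoder networks.
W_enc = rng.normal(size=(latent_dim, obs_dim)) / np.sqrt(obs_dim)
W_dyn = rng.normal(size=(latent_dim, latent_dim)) / np.sqrt(latent_dim)
W_dec = rng.normal(size=(obs_dim, latent_dim)) / np.sqrt(latent_dim)

def encode(x):
    # Infer a low-dimensional representation of an observation.
    return np.tanh(W_enc @ x)

def predict_latent(z, steps):
    # Roll the dynamics forward in latent space only;
    # no high-dimensional data is touched during prediction.
    zs = []
    for _ in range(steps):
        z = np.tanh(W_dyn @ z)
        zs.append(z)
    return zs

def decode(z):
    # Map a latent state back to observation space.
    return W_dec @ z

x_t = rng.normal(size=obs_dim)   # current high-dimensional observation
z_t = encode(x_t)                # inference step
future = [decode(z) for z in predict_latent(z_t, steps=3)]  # 3-step forecast
```

An autoregressive model would instead apply a transition directly in the 64-dimensional observation space; the latent approach confines the dynamics to the 8-dimensional representation.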
Defence: 06/30/2020, 2 p.m., by videoconference

Jury members:
Alexandre Allauzen, Professor, ESPCI - CNRS [Reviewer]
Thierry Artières, Professor, Aix-Marseille Université - École Centrale Marseille - LIS [Reviewer]
Ludovic Denoyer, Professor, Sorbonne Université - LIP6 - FAIR
Ahlame Douzal, Associate Professor (HDR), Université Grenoble Alpes - LIG
Patrick Gallinari, Sorbonne Université - LIP6
Sylvain Lamprier, Associate Professor, Sorbonne Université - LIP6
J.-Y. Franceschi, E. Delasalles, M. Chen, S. Lamprier, P. Gallinari: “Stochastic Latent Residual Video Prediction”, Proceedings of the 37th International Conference on Machine Learning, vol. 119, Proceedings of Machine Learning Research, Vienna, Austria, pp. 89-102, (PMLR) (2020)
E. Delasalles, S. Lamprier, L. Denoyer: “Dynamic Neural Language Models”, ICONIP 2019 - 26th International Conference on Neural Information Processing, vol. 11955, Lecture Notes in Computer Science, Sydney, Australia, pp. 282-294 (2019)