DO Trinh Minh Tri
Thesis supervisor: Thierry ARTIÈRES
Regularized bundle methods for large-scale learning problems with an application to large margin training of hidden Markov models
Machine learning is most often cast as an optimization problem: the minimization of a function of a set of model parameters, where one seeks the optimal parameters. A key challenge today is to make such a framework practical for large datasets, which requires optimization methods that scale well with the problem size (number of samples, data dimension, etc.). This thesis addresses the scalability and efficiency of machine learning approaches, with a particular focus on sequence labeling tasks for signal processing. It describes our work on, first, designing scalable optimization tools and, second, designing new discriminative learning frameworks for signal labeling tasks.

In the optimization part, we aimed at developing efficient algorithms for minimizing a regularized objective. We chose to focus on unconstrained learning formulations, since these appeared to us a promising approach for dealing with structured data in a large-scale setting. Our work is inspired by the cutting-plane technique, which underlies recent advances in optimization methods for machine learning such as the convex regularized bundle method (CRBM). In a preliminary step we focus on the simpler setting of a convex, possibly non-smooth objective function. We detail an improved variant of CRBM that limits computation and memory costs while retaining a good, proven convergence rate. We then go further by considering non-convex, non-smooth optimization problems. This is a much harder setting, but it also has much broader applications, especially in signal processing. We describe a new optimization method for this setting, the Non-convex Regularized Bundle Method (NRBM), based on similar ideas as in the convex case with a few adaptations to handle non-convexity. We provide partial but interesting theoretical results on the convergence of the method in this complex setting. We also discuss variants of the method and show that adding a line-search procedure increases the convergence rate in practice.

Next we describe applied work on discriminative models for sequence and signal labeling tasks such as handwriting and speech recognition. Part of this work relies on the optimization tools we developed for convex and non-convex optimization. Our work extends seminal approaches to structured prediction, such as conditional random fields and max-margin Markov networks, to signal processing. We first explore the use of conditional random fields for handling complex (non-linearly separable) inputs, multi-modality, partially observed training data, and segmental features. We then consider the large-margin training of continuous density Hidden Markov Models (CDHMMs), a state-of-the-art model for signal processing tasks. Although large-margin training of CDHMMs is promising, previous work usually relied on severe approximations, so that it remains an open problem. By formalizing learning as the minimization of a non-convex regularized objective function, we can use our optimization tool to solve this difficult problem efficiently. Our method was validated on two standard datasets in speech and handwriting, demonstrating its efficiency and scalability, with up to 6 million frames and up to 2 million parameters in handwriting and speech recognition experiments.
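The cutting-plane idea underlying CRBM-style bundle methods can be illustrated with a minimal sketch: each iteration linearizes the (possibly non-smooth) risk at the current point using a subgradient, adds that linearization as a cut, and re-minimizes the regularized piecewise-linear model. The sketch below is illustrative only, not the thesis implementation: it assumes an L2 regularizer and a risk given by a value/subgradient oracle, all function names are made up, the inner quadratic model is solved by simple projected gradient ascent on its dual with a fixed step size, and none of CRBM's memory-limiting cut management is included.

```python
import numpy as np

def simplex_projection(v):
    # Euclidean projection of v onto the probability simplex {a >= 0, sum(a) = 1}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def solve_model(A, b, lam, inner_iters=500, lr=0.1):
    # Minimize lam/2 ||w||^2 + max_i (A[i] @ w + b[i]) through its dual:
    #   max_{alpha in simplex}  -1/(2 lam) ||A.T @ alpha||^2 + b @ alpha,
    # with w recovered as w = -(A.T @ alpha) / lam.
    # Projected gradient ascent with a fixed step size (illustrative choice).
    alpha = np.full(len(b), 1.0 / len(b))
    for _ in range(inner_iters):
        grad = -(A @ (A.T @ alpha)) / lam + b
        alpha = simplex_projection(alpha + lr * grad)
    return -(A.T @ alpha) / lam

def bundle_method(risk, subgrad, dim, lam=1.0, n_iter=20):
    # Cutting-plane / bundle iteration: each step adds the linearization
    #   risk(w) >= risk(w_t) + g_t @ (w - w_t)
    # as a cut (a_t = g_t, b_t = risk(w_t) - g_t @ w_t), then re-minimizes
    # the regularized piecewise-linear lower model.
    w = np.zeros(dim)
    A, b = [], []
    for _ in range(n_iter):
        g = subgrad(w)
        A.append(g)
        b.append(risk(w) - g @ w)
        w = solve_model(np.array(A), np.array(b), lam)
    return w

# Toy usage on a non-smooth convex risk R(w) = sum_i |w_i - 1| with lam = 1;
# the minimizer of lam/2 ||w||^2 + R(w) is then w = (1, ..., 1).
w_star = bundle_method(lambda w: np.abs(w - 1.0).sum(),
                       lambda w: np.sign(w - 1.0),
                       dim=3)
```

The key contrast with plain subgradient descent is that the bundle keeps all past linearizations, so the model tightens monotonically; CRBM's contribution, as summarized above, is to keep this machinery efficient when the number of cuts would otherwise grow without bound.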
Defense date: 17/06/2010
Jury members:
Prof. Thierry ARTIÈRES
Prof. Matthieu CORD
Prof. Patrick GALLINARI
Dr. Gunnar RATSCH (Rapporteur)
Prof. Gerhard RIGOLL
Dr. Jean-Philippe VERT (Rapporteur)
Publications 2005-2012
2012
- T. Do, Th. Artières : “Méthode de Plans Sécants Régularisée pour l’Optimisation Non convexe : Annexe à l’article dans JMLR”, (2012)
2011
- A. Vinel, T.‑M. Do, Th. Artières : “Joint Optimization of Hidden Conditional Random Fields and Non Linear Feature Extraction”, ICDAR 2011 - 11th International Conference on Document Analysis and Recognition, Beijing, China, pp. 513-517, (IEEE) (2011)
- A. Vinel, T.‑M. Do, Th. Artières : “Neuro HCRFs pour l’étiquetage de signaux”, Conférence Francophone d'Apprentissage, Chambéry, France, pp. 409-424, (Presses de l'université des Antilles et de la Guyane) (2011)
- T. Do, Th. Artières : “Modèle hybride champs Markovien conditionnel et réseau de neurones profond”, Document numérique - Revue des sciences et technologies de l'information. Série Document numérique, vol. 14 (2), pp. 11-27, (Hermès) (2011)
2010
- T. Do : “Méthode des plans sécants pour des problèmes d’apprentissage à grand échelle avec une application à l’apprentissage de modèle de Markov cachée par maximisation de la marge”, PhD thesis, defended 17/06/2010, supervised by Thierry Artières (2010)
- T. Do, Th. Artières : “Neural conditional random fields”, International Conference on Artificial Intelligence and Statistics (AISTATS), Sardinia, Italy, pp. 177-184 (2010)
2009
- T. Do, Th. Artières : “Learning mixture models with support vector machines for sequence classification and segmentation”, Pattern Recognition, vol. 42 (12), pp. 3224-3230, (Elsevier) (2009)
- V. Frinken, T. Peter, A. Fischer, H. Bunke, T. Do, Th. Artières : “Improved Handwriting Recognition by Combining Two Forms of Hidden Markov Models and a Recurrent Neural Network”, International Conference on Computer Analysis of Images and Patterns (CAIP), vol. 5702, Lecture Notes in Computer Science, Münster, Germany, pp. 189-196, (Springer) (2009)
- T. Do, Th. Artières : “Maximum Margin Training of Gaussian HMMs for Handwriting Recognition”, International Conference on Document Analysis and Recognition (ICDAR), Barcelona, Spain, pp. 976-980, (IEEE) (2009)
- T. Do, Th. Artières : “Large Margin Training for Hidden Markov Models with Partially Observed States”, International Conference on Machine Learning (ICML), Montreal, Canada, pp. 265-272, (ACM) (2009)
2008
- T. Do, Th. Artières : “A Fast Method for Training Linear SVM in the Primal”, European Conference on Machine Learning (ECML), vol. 5211, Lecture Notes in Computer Science, Antwerp, Belgium, pp. 272-287, (Springer) (2008)
- T. Do, Th. Artières : “Max-Margin Learning of Gaussian Mixtures with Sequential Minimal Optimization”, International Conference on Frontiers in Handwriting Recognition (ICFHR), Montréal, Canada (2008)
- T. Do, Th. Artières : “Apprentissage rapide de SVM dans le primal”, CAp 2008 - 10e Conférence d'Apprentissage, Ile de Porquerolles, France, (Cépaduès) (2008)
- T. Do, Th. Artières : “Optimisation du Primal pour les SVM”, Conférence Internationale Francophone sur l'Extraction et la Gestion des Connaissances, vol. RNTI-E-11, RNTI, Sophia Antipolis, France, pp. 273-284, (Cépaduès) (2008)
2007
- T. Do, Th. Artières : “Apprentissage de mélanges de gaussiens par maximisation de la marge avec SMO”, 9e Conference d'Apprentissage, CAp 2007, Grenoble, France (2007)
2006
- T.‑M. Do, Th. Artières : “Conditional Random Fields for Online Handwriting Recognition”, Tenth International Workshop on Frontiers in Handwriting Recognition, La Baule, France, (Suvisoft) (2006)
- T. Do, Th. Artières : “Polynomial Conditional Random Fields for signal processing”, European Conference on Artificial Intelligence (ECAI'06), Riva del Garda, Italy, pp. 797-798, (IOS Press) (2006)
- T. Do, Th. Artières : “Champs de Markov conditionnels pour le traitement de séquences”, Extraction et Gestion de Connaissances (EGC'06), vol. E-6, RNTI, Lille, France, pp. 639-650, (RNTI) (2006)
2005
- T. Do, Th. Artières : “Conditional Random Field for tracking user behavior based on his eye’s movements”, NIPS'05 Workshop on Machine Learning for Implicit Feedback and User Modeling, Whistler, BC, Canada, pp. 19-24 (2005)
- T. Do, Th. Artières, P. Gallinari : “Sélection de Modèles par des Méthodes à Noyaux pour la classification de données séquentielles”, Extraction et Gestion de Connaissances (EGC'05), vol. RNTI-E-3, RNTI, Paris, France, pp. 165-176 (2005)