Séminaire Données et Apprentissage Artificiel

Semantic-based regularizations

09/12/2008
Speaker(s): Marco Gori (University of Siena)
Most well-established methods in pattern recognition are only effective when the number of classes is quite small. However, many real-world problems require dealing with a large number of categories that are structured in a very rich way, rather than simply flat as typically assumed in most machine learning approaches. This structure arises from prior knowledge about the problem that should be taken into account by any learning scheme expected to scale up to large dimensions. Even when the classes are assumed to be flat, their coding is likely to benefit from imposing constraints on the outputs (for instance, one might be interested in imposing a probabilistic normalization on the outputs). Beginning from the general regularization framework from which kernel machines arise, in this talk I propose a theory for learning under output constraints. In particular, I discuss linear, quadratic, and fuzzy-like constraints and give some preliminary results. Finally, it is argued that the theory can also be used to incorporate First-Order Logic constraints on the variables coded by the outputs.
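To make the idea of an output constraint concrete, here is a minimal sketch (not taken from the talk) in which a linear multi-class model is trained with a squared loss, an L2 regularizer, and an extra quadratic penalty that softly enforces the probabilistic normalization mentioned above, i.e. the outputs should sum to one. All names, data, and hyperparameters are illustrative assumptions, not the speaker's formulation.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, k classes with one-hot targets (illustrative).
n, d, k = 200, 5, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
Y = np.eye(k)[y]

W = np.zeros((d, k))
lam_l2, lam_constraint, lr = 1e-2, 1.0, 1e-2

for _ in range(500):
    F = X @ W                                        # raw outputs f_1(x), ..., f_k(x)
    residual = F - Y                                 # fitting error
    violation = F.sum(axis=1, keepdims=True) - 1.0   # sum-to-one constraint violation
    # Gradient of: ||XW - Y||^2 / n + lam_l2 * ||W||^2 + lam_constraint * mean(violation^2)
    grad = (2 / n) * X.T @ residual \
         + 2 * lam_l2 * W \
         + (2 * lam_constraint / n) * X.T @ np.repeat(violation, k, axis=1)
    W -= lr * grad

print("mean output sum (should approach 1):", (X @ W).sum(axis=1).mean())

The quadratic constraint penalty is only one of the options discussed in the abstract; linear or fuzzy-like constraints would simply replace the violation term in the objective.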

More information here …
Thomas.Baerecke (at) lip6.fr