22/03/2007

Speaker(s): Lorenza SAITTA (Università del Piemonte Orientale)

Computational complexity is often a major obstacle to applying AI techniques, most notably relational learning, to significant real-world problems. Effort is then required to understand the sources of this complexity, in order to tame it without introducing simplifications so strong that they render either the problem or the technique useless. It is well known that representation is a key factor in making a problem easier to solve, even though there is clearly no hope of turning an intractable class of problems into a tractable one by a change of representation. However, the computational characterization of problem classes is based on worst-case analysis, and hence not every problem instance in a class is equally hard to solve. In Machine Learning, representation change has been tackled via feature selection and feature construction. Both are instances of abstraction, i.e., of representation changes aimed at simplifying the representation of both examples and hypotheses. Several definitions of abstraction have been proposed in the literature. This talk will describe the effects of some of these definitions on the task of learning relations, discussing both the reduction in computational complexity (if any) and the impact on the quality of the learned knowledge.
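To make the idea concrete, here is a minimal, illustrative sketch (all names are hypothetical, not from the talk) of feature selection as an abstraction: features carrying no information about the target label are dropped, which simplifies the representation of the examples and shrinks the hypothesis space the learner must search.

```python
# Illustrative sketch of feature selection as abstraction:
# discard features with zero information gain about the label.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature):
    """Entropy reduction obtained by splitting on one feature index."""
    n = len(examples)
    by_value = {}
    for ex, y in zip(examples, labels):
        by_value.setdefault(ex[feature], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

def select_features(examples, labels, threshold=0.0):
    """Keep only the feature indices whose information gain exceeds the threshold."""
    n_features = len(examples[0])
    return [f for f in range(n_features)
            if information_gain(examples, labels, f) > threshold]

# Toy data: feature 0 determines the label, feature 1 is pure noise.
examples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
print(select_features(examples, labels))  # → [0]: the noise feature is abstracted away
```

This is only a filter-style sketch; the talk concerns more general notions of abstraction, including ones that change the representation of hypotheses as well as of examples.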

Javier.Diaz (at) lip6.fr