Team: ACASA

Departure date: 10/30/2014

In the first part, we explain and study the properties of our algorithm. As with all enumeration algorithms, the first problem is to construct each set only once. We propose to use a spanning tree, itself based on the lectic order, to avoid constructing the same set multiple times. Using this spanning tree instead of the classic lectic order increases the space complexity but offers much more flexibility in the enumeration order. We show that, compared to the well-known Next Closure algorithm, ours performs fewer logical closures on sparse contexts and more of them once the average number of attributes per object exceeds 30%. The space complexity of the algorithm is also studied empirically, and we show that different orders behave differently, with the lectic order being the most efficient. We postulate that the efficiency of an order is a function of its distance to the order used in the canonicity test.
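For readers unfamiliar with the baseline being compared against, the following is a minimal sketch of Ganter's Next Closure algorithm, including the canonicity test mentioned above; the toy context is hypothetical.

```python
# Minimal sketch of the Next Closure algorithm. Attributes are the
# integers 0..n-1; each object is the frozenset of its attributes.

def closure(attrs, context, n_attrs):
    """Attributes shared by every object having all attributes in `attrs`."""
    objs = [o for o in context if attrs <= o]
    if not objs:
        return frozenset(range(n_attrs))
    common = set(range(n_attrs))
    for o in objs:
        common &= o
    return frozenset(common)

def next_closure(a, context, n_attrs):
    """Lectically smallest closed set greater than `a`, or None."""
    a = set(a)
    for i in reversed(range(n_attrs)):
        if i in a:
            a.discard(i)
        else:
            b = closure(frozenset(a | {i}), context, n_attrs)
            # Canonicity test: reject b if it adds an attribute below i.
            # This guarantees each closed set is produced exactly once.
            if not any(j < i and j not in a for j in b):
                return b
    return None

def all_intents(context, n_attrs):
    """Enumerate every intent (closed attribute set) in lectic order."""
    a = closure(frozenset(), context, n_attrs)
    intents = [a]
    while (a := next_closure(a, context, n_attrs)) is not None:
        intents.append(a)
    return intents

ctx = [frozenset({0, 1}), frozenset({1, 2})]  # two objects, three attributes
print(all_intents(ctx, 3))  # four intents, from {1} up to {0, 1, 2}
```

The canonicity test is what forces the strict lectic enumeration order; the spanning-tree approach described above trades extra memory for the freedom to leave that order.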

In the second part, we take an interest in the computation of implications in a more complex setting: relational data. In these contexts, objects are described both by attributes and by binary relations with other objects. Representing this relational information causes an exponential increase in the number of attributes, so naive algorithms become unusable extremely fast. We propose a modification of our algorithm that enumerates the pseudo-intents of contexts in which relational information is represented by attributes. A quick study of the types of relational information that can be considered is provided. We use description logics as a framework for expressing relational data.
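As a hedged illustration of how relational information can be represented by attributes (all names hypothetical, not the thesis's actual encoding), the sketch below performs the simplest such encoding: one existential attribute per (relation, target) pair, in the spirit of description-logic concepts of the form ∃r.C. Richer encodings, with one attribute per expressible concept, are what drives the exponential growth mentioned above.

```python
# Hypothetical illustration: each (relation, target) pair becomes an
# existential attribute "exists r.{t}", echoing the description-logic
# concept ∃r.C restricted to singleton targets.

def relational_scaling(objects, relations):
    """objects: {name: set of plain attributes}
    relations: {relation name: set of (source, target) pairs}
    Returns a new context whose attribute sets also contain one
    'exists r.{t}' attribute per outgoing relation edge."""
    scaled = {name: set(attrs) for name, attrs in objects.items()}
    for r, edges in relations.items():
        for source, target in edges:
            scaled[source].add(f"exists {r}.{{{target}}}")
    return scaled

objects = {"a": {"red"}, "b": {"blue"}}
relations = {"likes": {("a", "b")}}
print(relational_scaling(objects, relations))
# "a" gains the attribute "exists likes.{b}"
```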

In the third part, we extend our work to the more general domain of association rules. Association rules are regularities of the form "when there is A, there is B with x% certainty", so implications are association rules with 100% certainty. Our algorithm already computes a basis for implications, so we propose a very simple modification and demonstrate that it can compute the Luxenburger basis of a context along with the Duquenne-Guigues basis. This effectively allows our algorithm to compute a basis for association rules of minimal cardinality.
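To make the certainty measure concrete, here is a small sketch (toy dataset hypothetical) of support and confidence: rules with confidence 1 are exactly the implications, while rules with lower confidence are the partial implications covered by the Luxenburger basis.

```python
# Toy illustration of association-rule certainty: a rule A -> B with
# confidence 1.0 is an implication; lower confidence makes it a
# partial implication (general association rule).

def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Certainty of the rule antecedent -> consequent."""
    covering = [t for t in transactions if antecedent <= t]
    if not covering:
        return 0.0
    return sum(1 for t in covering if consequent <= t) / len(covering)

transactions = [{"a", "b"}, {"a", "b"}, {"a"}, {"b"}]
print(confidence({"a"}, {"b"}, transactions))  # 2 of 3 "a"-transactions, ~0.667
print(confidence({"b"}, {"b"}, transactions))  # 1.0: an implication
```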

Karell Bertet, Associate Professor, Université de La Rochelle [Reviewer]

Henry Soldano, Associate Professor, Université Paris-Nord [Reviewer]

Jean-Gabriel Ganascia, Professor, UPMC

Alain Gély, Associate Professor, Université de Lorraine

Annick Valibouze, Professor, UPMC

- 2014
- A. Bazin: “De l’Énumération des Pseudo-Intensions : Choix de l’Ordre et Extension aux Implications Partielles”, PhD thesis, defended 09/30/2014, supervised by Jean-Gabriel Ganascia (2014)