The work presented in this thesis lies at the intersection of decision theory and machine learning. Its objective is to propose methods for learning preference models drawn from decision theory, in order to explain or predict a decision maker's preferences.
We focus in particular on value function models that account for interactions between the different criteria, or viewpoints, describing the alternatives, such as the Choquet integral, the multilinear utility, and GAI-decomposable utility functions. These models have strong descriptive power, guarantee a form of rationality of the preferences through desirable mathematical properties, and remain interpretable through their parameters.
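To make the notion of interaction concrete (the notation below is introduced here for illustration and is not necessarily that of the manuscript), the Choquet integral of an alternative $x = (x_1, \ldots, x_n)$ evaluated on a set of criteria $N$ can be written through the Möbius transform $m$ of the underlying capacity as
\[
C_m(x) \;=\; \sum_{\emptyset \neq S \subseteq N} m(S)\,\min_{i \in S} x_i ,
\]
which, in the 2-additive case often used in practice, reduces to
\[
C_m(x) \;=\; \sum_{i \in N} m(\{i\})\, x_i \;+\; \sum_{\{i,j\} \subseteq N} m(\{i,j\})\, \min(x_i, x_j),
\]
where the pairwise coefficients $m(\{i,j\})$ capture positive or negative interactions between criteria $i$ and $j$.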
Because of the combinatorial nature of the interactions, learning such models is computationally challenging: it requires determining a number of parameters that grows exponentially with the number of criteria, sometimes subject to combinatorial constraints.
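As a rough, self-contained illustration of this combinatorial issue and of how sparsity-inducing regularization can mitigate it (this sketch uses hypothetical data, omits the monotonicity constraints on the capacity, and is not the algorithm developed in this thesis), one can restrict a Choquet-style model to its 2-additive Möbius terms and fit them with an L1 penalty:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

def choquet_features(X):
    """Map each alternative x in [0,1]^n to the features of a 2-additive
    Choquet integral in Moebius form: x_i for singletons and
    min(x_i, x_j) for each pair {i, j}."""
    n = X.shape[1]
    pairs = np.column_stack([np.minimum(X[:, i], X[:, j])
                             for i, j in combinations(range(n), 2)])
    return np.hstack([X, pairs])

rng = np.random.default_rng(0)
n_alt, n_crit = 200, 6           # a full capacity on 6 criteria has 2**6 - 2 = 62 free parameters

X = rng.uniform(size=(n_alt, n_crit))

# Hypothetical ground-truth 2-additive model: two criteria interact, the rest act additively.
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * np.minimum(X[:, 2], X[:, 3])
y += rng.normal(scale=0.01, size=n_alt)   # small observation noise

Phi = choquet_features(X)

# L1-regularized least squares: the penalty drives most Moebius coefficients to zero,
# so only a sparse set of interaction terms is retained.
model = Lasso(alpha=1e-3, fit_intercept=False).fit(Phi, y)
print("non-zero Moebius coefficients:", np.count_nonzero(model.coef_), "of", Phi.shape[1])
```

In this toy setting the L1 penalty recovers a handful of non-zero coefficients out of the 21 candidate Möbius terms; the methods studied in this thesis address the general problem, including the constraints this sketch leaves out.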
In this thesis, we propose to control the flexibility of these models by learning sparse representations of the interactions, notably through sparsity-inducing regularizations, and to reduce the computational burden by leveraging convex optimization methods from machine learning suited to high-dimensional sparse learning problems. The main contributions of this thesis are the following: