Publications
Estimating Quality in User-Guided Multi-Objective Bandits Optimization

Abstract - Many real-world applications are characterized by a number of conflicting performance measures. Since optimizing in a multi-objective setting leads to a set of non-dominated solutions, a preference function is required to select the solution with the appropriate trade-off between the objectives. This preference function is often unknown, especially when it comes from an expert human user. However, if we could provide the expert user with a proper estimation for each action, she would be able to pick her best choice. The question is: how good do these estimations have to be in order for her choice to remain the same as if she had access to the exact values? In this paper, we introduce the concept of preference radius to characterize the robustness of the preference function and provide guidelines for controlling the quality of estimations in the multi-objective setting. More specifically, we provide a general formulation of multi-objective optimization under the bandits setting and the pure exploration setting with user feedback for articulating the preferences. We show how the preference radius relates to the optimality gap and how it can be used to analyze algorithms in the bandits and pure exploration settings. Finally, we present experiments in the bandits setting, where we evaluate the impact of noise and delayed expert user feedback, and in the pure exploration setting, where we compare multi-objective Thompson sampling with uniform sampling.

Bibtex:
@article{Durand1147,

Last modified: 2017/01/04 by cgagne
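The abstract contrasts multi-objective Thompson sampling with uniform sampling under a user preference function. As a rough illustration only, and not the paper's actual algorithm, the sketch below samples one plausible mean per arm and objective from Beta posteriors and scalarizes the samples with an assumed linear preference function; the arm means, weights, and function names are all hypothetical.

```python
import random

def thompson_mo_select(successes, failures, preference, rng):
    """Pick an arm by drawing one Beta-posterior sample per arm and
    objective, then scalarizing each arm's sample vector with the
    (here assumed linear) preference function."""
    best_arm, best_score = 0, float("-inf")
    for arm in range(len(successes)):
        sample = [rng.betavariate(s + 1, f + 1)
                  for s, f in zip(successes[arm], failures[arm])]
        score = preference(sample)
        if score > best_score:
            best_arm, best_score = arm, score
    return best_arm

# Toy simulation: 3 arms, 2 conflicting Bernoulli objectives (hypothetical).
rng = random.Random(0)
true_means = [[0.8, 0.2], [0.6, 0.7], [0.1, 0.9]]
preference = lambda v: 0.5 * v[0] + 0.5 * v[1]  # assumed equal-weight trade-off
K, D, T = 3, 2, 2000
successes = [[0] * D for _ in range(K)]
failures = [[0] * D for _ in range(K)]
counts = [0] * K
for _ in range(T):
    arm = thompson_mo_select(successes, failures, preference, rng)
    counts[arm] += 1
    for d in range(D):
        if rng.random() < true_means[arm][d]:
            successes[arm][d] += 1
        else:
            failures[arm][d] += 1
# Under this preference, arm 1 (scalarized mean 0.65) should dominate pulls.
```

Under uniform sampling, by contrast, each arm would be pulled roughly T/K times regardless of the preference function, which is the baseline the abstract's pure-exploration experiments compare against.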
©2002-. Laboratoire de Vision et Systèmes Numériques. All rights reserved.