Bayesian Hyperparameter Optimization for Ensemble Learning

Julien-Charles Lévesque, Christian Gagné and Robert Sabourin

Abstract - In this paper, we bridge the gap between hyperparameter optimization and ensemble learning by performing Bayesian optimization of an ensemble with respect to its hyperparameters. Our method consists of building a fixed-size ensemble, optimizing the configuration of one classifier of the ensemble at each iteration of the hyperparameter optimization algorithm, and taking into account the interaction with the other models when evaluating potential performances. We also consider the case where the ensemble is reconstructed at the end of the hyperparameter optimization phase, through a greedy selection over the pool of models generated during the optimization. We study the performance of our proposed method on three different hyperparameter spaces, showing that our approach outperforms both the best single model and a greedy ensemble construction over the models produced by standard Bayesian optimization.
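The procedure described in the abstract can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the "classifiers" are one-parameter threshold stumps on a toy dataset, and random proposals replace the Bayesian optimization acquisition step; the structure (optimize one slot of a fixed-size ensemble per iteration, score candidates by whole-ensemble performance, then greedily reselect from the pool) mirrors the method at a high level only.

```python
import random

random.seed(0)

# Toy 1-D dataset: label is 1 when x > 0.5 (a hypothetical stand-in
# for the real benchmarks and hyperparameter spaces in the paper).
X = [random.random() for _ in range(200)]
y = [1 if x > 0.5 else 0 for x in X]

def stump(threshold):
    """A 'classifier' with a single hyperparameter (its threshold)."""
    return lambda x: 1 if x > threshold else 0

def ensemble_accuracy(members, X, y):
    """Accuracy of a majority-vote ensemble."""
    correct = 0
    for xi, yi in zip(X, y):
        votes = sum(m(xi) for m in members)
        pred = 1 if 2 * votes > len(members) else 0
        correct += pred == yi
    return correct / len(X)

M = 5  # fixed ensemble size
ensemble = [stump(random.random()) for _ in range(M)]
pool = list(ensemble)  # every model generated is kept in a pool

# Each iteration re-optimizes one slot of the ensemble, scoring the
# candidate by the accuracy of the WHOLE ensemble with it swapped in.
# (Random proposals stand in for the Bayesian optimization step.)
for t in range(50):
    slot = t % M
    candidate = stump(random.random())
    pool.append(candidate)
    trial = list(ensemble)
    trial[slot] = candidate
    if ensemble_accuracy(trial, X, y) >= ensemble_accuracy(ensemble, X, y):
        ensemble = trial

# Post-hoc greedy reconstruction over the pool of generated models.
selected = []
for _ in range(M):
    best = max(pool, key=lambda m: ensemble_accuracy(selected + [m], X, y))
    selected.append(best)

final_accuracy = ensemble_accuracy(selected, X, y)
```

The key design point reflected here is that a candidate configuration is never evaluated in isolation: its score is the performance of the full ensemble containing it, so the optimizer accounts for interactions with the other members.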



@inproceedings{levesque2016bayesian,
    author    = { Julien-Charles Lévesque and Christian Gagné and Robert Sabourin },
    title     = { Bayesian Hyperparameter Optimization for Ensemble Learning },
    booktitle = { Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI 2016) },
    pages     = { 10 },
    year      = { 2016 },
    month     = { 06 }
}

Last modification: 2016/05/26 by jclev7
