Bayesian Optimisation
Description
One of the strengths of evolutionary algorithms (EAs) is that they can be applied to black-box optimisation problems. For the sub-class of low-dimensional continuous black-box problems that are expensive to evaluate, Bayesian optimisation (BO) has become a very popular alternative. BO has applications ranging from hyperparameter tuning of deep learning models and design optimisation in engineering to stochastic optimisation in operational research.
Bayesian optimisation builds a probabilistic surrogate model, usually a Gaussian process, based on all previous evaluations. Gaussian processes not only predict the quality of new solutions, but also approximate the uncertainty around the prediction. This information is then used to decide which solution to evaluate next, explicitly trading off exploration (high uncertainty) and exploitation (high quality). This trade-off is modelled by the acquisition function, which quantifies how ‘interesting’ a solution is to evaluate.
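As an illustration of this loop, here is a minimal sketch of Bayesian optimisation for a 1-D minimisation problem, using scikit-learn's GaussianProcessRegressor as the surrogate and the expected-improvement acquisition function. The test function, candidate-sampling strategy, and all parameter values are illustrative choices, not part of the tutorial itself:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """Expected improvement over the best observed value (minimisation)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    imp = y_best - mu - xi                    # predicted improvement
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def bayesian_optimise(f, bounds, n_init=5, n_iter=20, seed=0):
    """A basic BO loop: fit GP, maximise acquisition, evaluate, repeat."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))         # initial design
    y = np.array([f(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        # Here the acquisition is maximised over random candidates for
        # simplicity; in practice an inner optimiser (e.g. an EA) is used.
        cand = rng.uniform(lo, hi, size=(1000, 1))
        ei = expected_improvement(cand, gp, y.min())
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmin(y)], y.min()

# Illustrative expensive black-box function with its minimum at x = 2
x_best, y_best = bayesian_optimise(lambda x: (x - 2.0) ** 2, bounds=(-5.0, 5.0))
```

Note how few evaluations of `f` are used (25 in total); this economy with function evaluations is precisely why BO is attractive for expensive problems.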
Besides both being successful black-box and derivative-free optimisers, EAs and BO share further similarities. They can both handle multiple objectives and noise. EAs have been enhanced with surrogate models (including Gaussian processes) to better handle expensive function evaluations, and EAs are often used within BO to optimise the acquisition function.
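The use of an EA as the inner optimiser of the acquisition function can be sketched with, for example, a simple (1+λ) evolution strategy. The function name, parameter settings, and the quadratic stand-in for an acquisition surface below are all illustrative assumptions:

```python
import numpy as np

def es_maximise(acq, lo, hi, dim=1, lam=20, sigma0=1.0, n_gen=40, seed=0):
    """Maximise an acquisition function with a (1+lambda) evolution strategy."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=dim)   # parent solution
    sigma = sigma0                      # mutation step size
    for _ in range(n_gen):
        # Gaussian mutation of the parent, clipped to the search bounds
        offspring = np.clip(x + sigma * rng.standard_normal((lam, dim)), lo, hi)
        best = offspring[np.argmax([acq(o) for o in offspring])]
        if acq(best) >= acq(x):         # elitist (plus) selection
            x = best
        sigma *= 0.9                    # simple step-size decay schedule
    return x

# Illustrative acquisition surface with its maximum at x = 1
x_star = es_maximise(lambda x: -(x[0] - 1.0) ** 2, lo=-5.0, hi=5.0)
```

Because the acquisition function is cheap to evaluate relative to the true objective, spending hundreds of acquisition evaluations per BO iteration, as this inner EA does, is entirely affordable.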
The tutorial will introduce the general BO framework for black-box optimisation with and without noise, specifically highlighting the similarities and differences to evolutionary computation. We will explain the most commonly used acquisition functions and how they can be optimised, e.g., using evolutionary algorithms. Furthermore, we will cover multiobjective Bayesian optimisation, with a particular focus on noise handling strategies, and give examples of practical applications such as simulation-optimisation and hyperparameter optimisation.
Organizers