Bayesian Optimisation

Description

One of the strengths of evolutionary algorithms (EAs) is that they can be applied to black-box optimisation problems. For the sub-class of low-dimensional continuous black-box problems that are expensive to evaluate, Bayesian optimisation (BO) has become a very popular alternative. BO has applications ranging from hyperparameter tuning of deep learning models and design optimisation in engineering to stochastic optimisation in operational research.

Bayesian optimisation builds a probabilistic surrogate model, usually a Gaussian process, based on all previous evaluations. A Gaussian process not only predicts the quality of new solutions, but also estimates the uncertainty around each prediction. This information is then used to decide which solution to evaluate next, explicitly trading off exploration (high uncertainty) and exploitation (high quality). The trade-off is captured by the acquisition function, which quantifies how ‘interesting’ a solution is to evaluate.
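As an illustrative sketch (not part of the tutorial itself), the widely used expected improvement (EI) acquisition function can be computed from the Gaussian process posterior mean and standard deviation alone; the function name and the `xi` exploration margin below are illustrative choices, not prescribed by the tutorial:

```python
import math

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Expected improvement (EI) at one candidate point, for minimisation.

    mu, sigma : Gaussian process posterior mean and standard deviation
    f_best    : best (lowest) objective value observed so far
    xi        : small exploration margin (an illustrative default)
    """
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty: nothing to gain from re-evaluating
    z = (f_best - mu - xi) / sigma
    # Standard normal CDF and PDF, via math.erf and math.exp
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    # Exploitation term (expected gain over f_best) + exploration term (uncertainty)
    return (f_best - mu - xi) * cdf + sigma * pdf
```

Note how the two summands mirror the exploitation/exploration trade-off: the first rewards a low predicted mean, the second rewards high uncertainty, so a point with a poor mean but large sigma can still be worth evaluating.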

Besides both being successful black-box, derivative-free optimisers, EAs and BO share further similarities: both can handle multiple objectives and noise. Moreover, EAs have been enhanced with surrogate models (including Gaussian processes) to better cope with expensive function evaluations, and EAs are often used within BO to optimise the acquisition function.
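To make the last point concrete, here is a minimal, hypothetical sketch of optimising an acquisition function with a (1+1)-evolution strategy; real BO toolboxes typically use multi-start or population-based optimisers, and all names, defaults, and the toy acquisition landscape below are illustrative assumptions:

```python
import random

def one_plus_one_es(acquisition, x0, sigma=0.5, iters=200, seed=0):
    """Maximise an acquisition function with a (1+1)-ES (illustrative sketch).

    acquisition : callable mapping a point (list of floats) to a score
    x0          : starting point
    sigma       : initial mutation step size, adapted multiplicatively
    """
    rng = random.Random(seed)
    best_x, best_a = list(x0), acquisition(x0)
    for _ in range(iters):
        # Mutate every coordinate with Gaussian noise of scale sigma
        child = [xi + rng.gauss(0.0, sigma) for xi in best_x]
        a = acquisition(child)
        if a >= best_a:        # accept non-worse offspring (elitist selection)
            best_x, best_a = child, a
            sigma *= 1.1       # crude 1/5th-rule-style step-size adaptation
        else:
            sigma *= 0.95
    return best_x, best_a

# Toy stand-in for an acquisition surface, peaked at (1, -1)
toy_acq = lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 1.0) ** 2)
x_star, a_star = one_plus_one_es(toy_acq, [0.0, 0.0])
```

Because the acquisition function is cheap to evaluate (it queries only the surrogate, not the expensive black box), spending a few hundred inner-loop evaluations like this is affordable.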

The tutorial will introduce the general BO framework for black-box optimisation with and without noise, specifically highlighting the similarities and differences to evolutionary computation. We will explain the most commonly used acquisition functions and how they can be optimised, e.g., with evolutionary algorithms. Furthermore, we will cover multiobjective Bayesian optimisation, with a particular focus on noise-handling strategies, and give examples of practical applications such as simulation-optimisation and hyperparameter optimisation.


Organizers

Jürgen Branke
Jürgen Branke is Professor of Operational Research and Systems at Warwick Business School (UK). His main research interests are the adaptation of metaheuristics to problems under uncertainty (including optimization in stochastic and dynamic environments) as well as evolutionary multiobjective optimization. Prof. Branke has been an active researcher in evolutionary computation for 25 years, and has published more than 200 papers in international peer-reviewed journals and conferences. He is editor of the ACM Transactions on Evolutionary Learning and Optimization, area editor of the Journal of Heuristics and associate editor of the IEEE Transactions on Evolutionary Computation and the Evolutionary Computation Journal.


Sebastian Rojas Gonzalez
Sebastian Rojas Gonzalez is currently a post-doctoral research fellow at the Surrogate Modeling Lab at Ghent University, Belgium. Until 2020 he worked at the Department of Decision Sciences of KU Leuven, Belgium, on a PhD in multiobjective simulation-optimisation. He previously obtained an M.Sc. in Applied Mathematics from the Harbin Institute of Technology in Harbin, China, and a master's degree in Industrial Systems Engineering from the Costa Rica Institute of Technology in 2012. Since 2016 his research has centered on developing stochastic optimisation algorithms assisted by machine learning techniques for multi-criteria decision making.


Ivo Couckuyt
Ivo Couckuyt received an M.Sc. degree in Computer Science from the University of Antwerp in 2007. In 2013 he finished a PhD at the Internet Technology and Data Science Lab (IDLab) at Ghent University, Belgium. He was attached to Ghent University – imec, where he worked as a post-doctoral research fellow until 2021. Since September 2021, he has been an associate professor in the IDLab. His main research interests are automation in machine learning, data analytics, data-efficient machine learning and surrogate modeling algorithms.