Workshop on Black Box Optimization Benchmarking 2025

Webpage: https://coco-platform.org/workshops/bbob2025.html

Description

Benchmarking optimization algorithms is a crucial aspect of their design and practical application. Since 2009, the Black Box Optimization Benchmarking Workshop has served as a venue for discussing recent advances in benchmarking practices as well as concrete results from benchmarking experiments with a large variety of (black box) optimizers.

The Comparing Continuous Optimizers platform (COCO [1], https://coco-platform.org/) was developed in this context to support algorithm developers and practitioners alike by automating benchmarking experiments for black box optimization algorithms on single- and bi-objective, unconstrained and constrained, continuous and mixed-integer problems in exact and noisy, as well as expensive and non-expensive scenarios.

We welcome *all contributions to black box optimization benchmarking* for the 2025 edition of the workshop, although we would like to put a particular emphasis on:

1) Benchmarking algorithms for problems with underexplored properties (for
example mixed integer, noisy, constrained, multiobjective, ...)
2) Reproducing previous benchmarking results as well as examining performance
improvements or degradations in algorithm implementations over time
(for example with the help of results from earlier BBOB submissions).

Submissions are not limited to the test suites provided by COCO. For convenience, source code in various languages (C/C++, Matlab/Octave, Java, Python, and Rust), together with all data sets from previous BBOB contributions, is provided as an automated benchmarking pipeline to reduce the time spent producing results for:

- single-objective unconstrained problems (the "bbob" test suite)
- single-objective unconstrained problems with noise ("bbob-noisy")
- biobjective unconstrained problems ("bbob-biobj")
- large-scale single-objective problems ("bbob-largescale")
- mixed-integer single- and bi-objective problems ("bbob-mixint" and
"bbob-biobj-mixint")
- almost linearly constrained single-objective problems ("bbob-constrained")
- box-constrained problems ("sbox-cost")
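At its core, such a pipeline automates the loop of running an optimizer on every problem of a suite while recording the evaluations. The following is a minimal pure-Python sketch of that loop, not the actual COCO interface (which is accessed via the cocoex module); the sphere function, the random-search optimizer, and all names below are illustrative stand-ins:

```python
import random

# Hypothetical stand-in problem: the sphere function on [-5, 5]^dim.
# In a real COCO experiment, problems come from a test suite such as
# "bbob" and are logged by an observer; this is only an illustration.
def sphere(x):
    return sum(xi * xi for xi in x)

def random_search(problem, dim, budget, rng):
    """Toy optimizer: sample uniformly in [-5, 5]^dim, keep the best."""
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        f = problem(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

def run_benchmark(problems, dim, budget, seed=1):
    """Sketch of the loop a benchmarking pipeline automates:
    run the optimizer on every problem and collect the results."""
    rng = random.Random(seed)
    results = {}
    for name, problem in problems.items():
        _, best_f = random_search(problem, dim, budget, rng)
        results[name] = best_f
    return results

results = run_benchmark({"sphere": sphere}, dim=3, budget=2000)
print(results)
```

What COCO adds on top of such a loop is exactly what this sketch omits: standardized suites, instance handling, logging of every improvement, and post-processing into comparable performance profiles.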

We especially encourage submissions exploring algorithms from beyond the evolutionary computation community, as well as papers analyzing COCO’s extensive, publicly available algorithm datasets (see https://coco-platform.org/data-archive/).

For details, please see the separate BBOB-2025 web page at https://coco-platform.org/workshops/bbob2025.html

[1] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. "COCO: A platform for comparing continuous optimizers in a black-box setting." Optimization Methods and Software (2020): 1-31.


Organizers

Anne Auger
Anne Auger is a research director at the French National Institute for Research in Computer Science and Control (Inria), heading the RandOpt team. She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining Inria, she worked for two years (2004-2006) at ETH Zurich. Her main research interests are stochastic continuous optimization, including theoretical aspects, algorithm design, and benchmarking. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She was General Chair of GECCO in 2019. She has organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 as well as all seven previous BBOB workshops at GECCO since 2009, and she is co-organizing the forthcoming Dagstuhl seminar on benchmarking.


Dimo Brockhoff
Dimo Brockhoff received his diploma in computer science from the University of Dortmund, Germany, in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland, in 2009. After two postdocs at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), he joined Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and, since October 2016, in the Saclay - Ile-de-France one). His research interests focus on evolutionary multiobjective optimization (EMO), in particular theoretical aspects of indicator-based search and the benchmarking of black box algorithms in general. Dimo has co-organized all BBOB workshops since 2013 and was EMO track co-chair at GECCO in 2013, 2014, and 2023.


Tobias Glasmachers
Tobias Glasmachers is a professor at the Ruhr-University Bochum, Germany. He received his Diploma and Doctorate degrees in mathematics from the Ruhr-University of Bochum in 2004 and 2008, respectively. He was with the Swiss AI lab IDSIA from 2009 to 2011. He then returned to Bochum, where he was a junior professor for machine learning at the Institute for Neural Computation (INI) from 2012 to 2018, before being promoted to full professor in 2018. His research interests are machine learning and optimization.


Nikolaus Hansen

Nikolaus Hansen is a research director at Inria and the Institut Polytechnique de Paris, France. After studying medicine and mathematics, he received a PhD in civil engineering from the Technical University Berlin and the Habilitation in computer science from the University Paris-Sud. His main research interests are stochastic search algorithms in continuous, high-dimensional search spaces, learning and adaptation in evolutionary computation, and meaningful assessment and comparison methodologies. His research is driven by the goal to develop algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).


Olaf Mersmann
Olaf Mersmann is a Professor for Data Science at TH Köln - University of Applied Sciences. He received his BSc, MSc and PhD in Statistics from TU Dortmund. His research interests include using statistical and machine learning methods on large benchmark databases to gain insight into the structure of the algorithm choice problem.


Tea Tušar