GECCO 2025 will feature 35 tutorials.
- Advanced Use of Automatic Algorithm Configuration: Single- and Multi-Objective Approaches
- Advances in Evolutionary Hyper-Heuristics
- Automated Machine Learning Tools for Data Science, Modeling, and Algorithm Benchmarking
- Bayesian Optimisation
- Benchmarking single- and multi-objective optimization algorithms: how to make your experimental data more valuable
- Cartesian Genetic Programming - From foundations to recent developments and applications
- Coevolutionary Computation for Adversarial Deep Learning
- Combinatorial Optimisation Can be Different from Continuous Optimisation for MOEAs
- Constraint-Handling Techniques used with Evolutionary Algorithms
- Decomposition Evolutionary Multi-Objective Optimization: What We Know from the Literature and What We are not Clear from a Data Science Perspective
- Evolution of Neural Networks
- Evolutionary Art and Design in the Machine Learning Era
- Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
- Evolutionary Computation for Feature Selection and Feature Construction
- Evolutionary computation for stochastic problems
- Evolutionary Computation meets Machine Learning for Combinatorial Optimisation
- Evolutionary Reinforcement Learning
- Fair Performance Comparison of Evolutionary Multi-Objective Algorithms
- Genetic Programming as a Hyper-Heuristic for Solving Combinatorial Optimisation Problems
- Intelligent Evolution Optimization: Guided from Deep Learning to Large Language Model
- Introduction to Quantum Optimization
- Introductory Mathematical Programming for EC
- Linear Genetic Programming
- Machine Learning Assisted Evolutionary Multi- and Many-objective Optimization
- Model-Based Evolutionary Algorithms
- New, more efficient crossover and local search operators for recombination lattices.
- Pareto Optimization for Subset Selection: Theories and Practical Algorithms
- Recent Advances in Meta-features Used for Representing Black-box Single-objective Continuous Optimization
- Recent developments in data structures and algorithms for evolutionary multiobjective optimization
- Representations for Evolutionary Algorithms
- Statistical Forward Planning Algorithms
- Structural bias in optimisation algorithms
- Theory and Practice of Population Diversity in Evolutionary Computation
- Tutorial: A Gentle Introduction to Theory (for Non-Theoreticians)
- What You Always Wanted to Know About Evolution Strategies, But Never Dared to Ask
Tutorials
Advanced Use of Automatic Algorithm Configuration: Single- and Multi-Objective Approaches
Most, if not all, optimisation algorithms have exposed parameters and various components, and the choice of these affects the performance of the algorithm. Finding parameter values that maximise performance is a challenging task in itself, even more so when it is done in a manual, iterative manner, which leads to a tedious and suboptimal workflow. Fortunately, several configuration frameworks exist that automatically search for a good configuration of parameters for the task at hand, optimising specific performance measures. These automated algorithm configurators have proven to be a powerful tool in the design and configuration of algorithms over the last two decades. Although the concept of automated algorithm configuration (AAC) is generally known to algorithm designers and practitioners, the adoption of these methods in empirical research studies lags behind. One reason for this limited adoption is that most AAC frameworks are presented for the general use case of finding an optimally configured algorithm for a specific problem domain and do not explicitly showcase their usability in a broader setting. In this tutorial we go beyond the default AAC scenario to demonstrate the potential and capabilities of several AAC techniques in an actual research context. We especially focus on scenarios with multiple objectives involved, both at the algorithm level (configuring multi-objective optimisers) and at the configuration level (generating optimal performance trade-offs).
The tutorial is structured into multiple parts, each describing a different use case for AAC. In each use case we show the usability of AAC in a practical application setting, followed by an actual demonstration of how to set up the experimental pipeline using various AAC frameworks, such as irace, (MO-)ParamILS or (MO-)SMAC. The use cases will cover four scenarios: 1) fairly comparing (multi-objective) optimisation algorithms with AAC, 2) configuring anytime behaviour of algorithms, 3) configuring algorithms to optimise multiple performance objectives simultaneously, and 4) improving algorithm understanding using AAC with additional analytical tools. The tutorial will end with an open discussion on how to integrate AAC in one’s own research.
Jeroen Rook
Manuel López-Ibáñez
Advances in Evolutionary Hyper-Heuristics
Over the years, the benefits of searching an alternative space, namely the heuristic space, instead of exploring the solution space directly have been shown. This has been achieved by using hyper-heuristics. As the field of hyper-heuristics has evolved since its inception, there have been a number of advances in the field. This tutorial focuses on these advances, specifically in evolutionary algorithm hyper-heuristics. The tutorial first gives an overview of hyper-heuristics and then delves into the advanced topics. A taxonomy of the different levels of generality that can be attained by a hyper-heuristic is presented first. The tutorial then examines how to assess hyper-heuristic performance in this context. It will then explore machine learning and evolutionary algorithm hyper-heuristics. While much of the earlier work in the field focused on discrete optimization, recent advances include solving continuous optimization problems directly using hyper-heuristics, and the tutorial will examine how this is done. The tutorial will also highlight the benefits of transfer learning in evolutionary algorithm hyper-heuristics. Explainable hyper-heuristics, that is, the use of XAI to better understand the performance of hyper-heuristics, will be examined. These advances will be illustrated with EvoHyp, a library for evolutionary algorithm hyper-heuristics which is available in Java and Python. The tutorial will conclude with a discussion session on future research in the field of evolutionary algorithm hyper-heuristics.
Tutorial breakdown:
1. Overview of Hyper-Heuristics
This tutorial firstly provides an overview of hyper-heuristics:
1.1 Selection construction hyper-heuristics
1.2 Selection perturbative hyper-heuristics
1.3 Generation construction hyper-heuristics
1.4 Generation perturbative hyper-heuristics
2. Taxonomy for generality levels in hyper-heuristics
Five levels of generality in hyper-heuristics will be described and a case study for each of the levels will be presented.
3. Assessing Hyper-Heuristic Performance
Assessing hyper-heuristic performance in terms of both generality and optimality will be discussed. A measure for assessing generality, namely the standard deviation of differences, will be presented. Multi-objective assessment of hyper-heuristic performance will also be presented.
4. Machine Learning and Hyper-Heuristics
This part of the tutorial will examine the synergistic relationship between machine learning and evolutionary algorithm hyper-heuristics. Firstly, it will examine how machine learning can be used to improve the performance of hyper-heuristics and secondly how hyper-heuristics can be used to improve the performance of machine learning.
5. Evolutionary Algorithm Hyper-Heuristics for Continuous Optimization
Previous work using hyper-heuristics for solving continuous optimization problems has essentially used hyper-heuristics to design the approaches that are then applied to solve the problem. Here we will look at applying hyper-heuristics directly to solving the problem.
6. Transfer learning in Evolutionary Algorithm Hyper-Heuristics
This part of the tutorial will focus on using transfer learning in evolutionary algorithm hyper-heuristics. Case studies for the different types of hyper-heuristics will be examined.
7. Explainable Evolutionary Algorithm Hyper-Heuristics
The tutorial will examine the recent advances in the use of XAI to better understand and improve hyper-heuristic performance.
8. EvoHyp Demonstration
This part of the tutorial will involve a demonstration of EvoHyp, an evolutionary algorithm library for hyper-heuristics.
9. Future research directions and discussion
This will involve an interactive session on future research directions in the field of evolutionary algorithm hyper-heuristics.
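To make the idea of a selection perturbative hyper-heuristic concrete, the sketch below shows a minimal search loop that picks a low-level heuristic in proportion to its past success and accepts improving moves. This is an illustrative sketch only: the heuristic names, the score-update rule, and the toy objective are assumptions, and the code is not based on EvoHyp's API.

```python
import random

def selection_hyper_heuristic(start, heuristics, evaluate, iters, rng):
    """Minimal selection perturbative hyper-heuristic (minimisation):
    choose a low-level heuristic with probability proportional to its
    score, apply it, and reward heuristics that improve the incumbent."""
    scores = [1.0] * len(heuristics)
    best, best_f = start, evaluate(start)
    for _ in range(iters):
        i = rng.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[i](best, rng)
        f = evaluate(candidate)
        if f < best_f:
            best, best_f = candidate, f
            scores[i] += 1.0   # simple credit assignment
    return best, best_f

# two toy low-level perturbative heuristics on an integer vector
def nudge(x, rng):
    y = list(x); i = rng.randrange(len(y)); y[i] += rng.choice([-1, 1]); return y

def halve(x, rng):
    y = list(x); i = rng.randrange(len(y)); y[i] = int(y[i] / 2); return y

evaluate = lambda x: sum(v * v for v in x)   # minimise the squared norm
best, best_f = selection_hyper_heuristic([3, -2, 5], [nudge, halve],
                                         evaluate, 200, random.Random(1))
# best_f can never exceed the starting value evaluate([3, -2, 5]) == 38
```

The key point, in contrast to a plain metaheuristic, is that the search operates over the space of low-level heuristics rather than directly over the solution space.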
Nelishia Pillay
Automated Machine Learning Tools for Data Science, Modeling, and Algorithm Benchmarking
Automated Machine Learning (AutoML) has emerged over the last decade as a subfield of artificial intelligence (AI) and machine learning (ML), focused on the automation of machine learning modeling and other key elements of a data science analysis pipeline in order to “relax the need for a user in the loop”. The primary goal of AutoML methods and software packages has been to make the application of ML easier, more accessible (to those with and without programming or ML experience), and more capable of optimizing ML model performance across a wide variety of ML algorithms, hyperparameters, and data processing options.
Notably, a number of available AutoML tools utilize evolutionary optimization strategies to drive search, and/or include evolutionary machine learning approaches in their repertoire of available ML modeling algorithms. Thus, apart from making ML modeling and data analytics applications easier, these frameworks can (and arguably should) be leveraged to conduct fairer, better standardized/reproducible, and more rigorous algorithm performance comparisons and benchmarking.
This tutorial will begin by broadly introducing participants to AutoML tools, discussing their scope, capabilities, and tradeoffs. Next, it will dive more deeply into two specific AutoML packages (STREAMLINE and TPOT), both developed by investigator teams working in the field of evolutionary computation. It will cover how these AutoML frameworks work, what they automate, their unique capabilities, installation and use, and how evolutionary computation relates to each. Lastly, using the STREAMLINE AutoML framework, this tutorial will offer a practical demonstration of how it can be applied: (1) to model and evaluate real-world data as part of a comprehensive automated data science pipeline, and (2) to easily, fairly, rigorously, and reproducibly compare and benchmark the performance of new ML modeling approaches (evolutionary or otherwise) against other established algorithms. An outline of this tutorial is detailed further below.
1. Provide an overview of the typical elements of a machine learning data science analysis pipeline.
2. Define and introduce AutoML in contrast with traditional approaches to data science and ML.
3. Briefly review 20+ currently available AutoML tools and packages, contrasting their scope and capabilities.
4. Take a closer look at the STREAMLINE and TPOT AutoML packages focusing on how they work.
5. Walk through an example of applying STREAMLINE to a real-world analysis of biomedical data with the goal of optimizing ML model predictive performance, conducting model interpretation/explanation, and evaluating the reproducibility of model performance on new replication data.
6. Walk through an example of adding a new scikit-learn compatible ML algorithm to the STREAMLINE algorithm repertoire, and utilizing the AutoML framework to benchmark and compare its performance to other established ML algorithms across a diversity of benchmark datasets, in a rigorous and reproducible manner.
7. Provide a hands-on demo for participants to try out STREAMLINE for themselves (via Google Colab) on their laptops or smartphones.
Ryan Urbanowicz
Bayesian Optimisation
One of the strengths of evolutionary algorithms (EAs) is that they can be applied to black-box optimisation problems. For the sub-class of low-dimensional continuous black-box problems that are expensive to evaluate, Bayesian optimisation (BO) has become a very popular alternative. BO has applications ranging from hyperparameter tuning of deep learning models and design optimisation in engineering to stochastic optimisation in operational research.
Bayesian optimisation builds a probabilistic surrogate model, usually a Gaussian process, based on all previous evaluations. Gaussian processes are not only able to predict the quality of new solutions, but also approximate the uncertainty around the prediction. This information is then used to decide what solution to evaluate next, explicitly trading off exploration (high uncertainty) and exploitation (high quality). This trade-off is modeled by the acquisition function which quantifies how ‘interesting’ a solution is to evaluate.
Besides both being successful black-box and derivative-free optimizers, EAs and BO have more similarities. They can both handle multiple objectives and noise. EAs have been enhanced with surrogate models (including Gaussian processes) to better handle expensive function evaluations, and EAs are often used within BO to optimize the acquisition function.
The tutorial will introduce the general BO framework for black-box optimisation with and without noise, specifically highlighting the similarities and differences to evolutionary computation. The most commonly used acquisition functions will be explained, and how they are optimised using, e.g., evolutionary algorithms. Furthermore, we will cover multiobjective Bayesian optimisation, with a particular focus on noise handling strategies, and give examples of practical applications such as simulation-optimisation and hyperparameter optimisation.
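As a concrete illustration of how an acquisition function quantifies the exploration/exploitation trade-off, the sketch below computes expected improvement (EI), one of the most commonly used acquisition functions, for a minimisation problem. The inputs are the GP posterior mean and standard deviation at a candidate point; the function name and example values are illustrative and not tied to any particular BO library.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimisation: (f_best - mu) * Phi(z) + sigma * phi(z),
    where z = (f_best - mu) / sigma, with Phi/phi the standard normal
    CDF/PDF. EI is large when the predicted mean is promising OR the
    posterior uncertainty is high."""
    if sigma <= 0.0:                      # no uncertainty: plain improvement
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

# exploitation: predicted mean well below the incumbent f_best
print(expected_improvement(mu=0.0, sigma=0.1, f_best=1.0))
# exploration: same mean as the incumbent, but high posterior uncertainty
print(expected_improvement(mu=1.0, sigma=0.5, f_best=1.0))
```

In a full BO loop, the next point to evaluate is the candidate maximising EI over the design space, which is itself an inner optimisation problem often solved with an evolutionary algorithm.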
Jürgen Branke
Sebastian Rojas Gonzalez
Ivo Couckuyt
Benchmarking single- and multi-objective optimization algorithms: how to make your experimental data more valuable
Comparing and evaluating optimization algorithms by empirical means is an important – and probably the most commonly applied – approach to gaining insight into evolutionary computation methods. However, while our community tends to agree that generating and analyzing sound benchmarking data is far from trivial, we treat the process in a rather wasteful manner, giving little importance to the standardization of data records, data sharing, and the like.
With this tutorial, we will share our experience on how to boost the efficacy of our benchmarking efforts at almost no cost using the IOHprofiler software framework and its recent extensions to anytime performance measures and multi-objective optimization. A strong focus will be put on demonstrating the ease by which IOHprofiler modules can be combined with other benchmarking and optimization toolboxes such as COCO, Nevergrad, and Pymoo.
We will also discuss how benchmarking data can be more easily shared within the community and the benefits that this brings, in terms of core research contributions, but also towards more sustainable research practices in evolutionary computation.
Carola Doerr
Diederick Vermetten
Jacob de Nobel
Thomas Bäck
Cartesian Genetic Programming - From foundations to recent developments and applications
Cartesian Genetic Programming (CGP) is a form of genetic programming originally developed by Julian Miller and Peter Thomson in 1997. Its classic form utilizes an integer-based genetic representation of a program decoded as a directed graph. Numerous studies have demonstrated its efficiency compared to other genetic programming techniques. Since its inception, CGP has been refined with features such as automatically defined functions, self-modification operators and multi-modal capacities. More recently, progress has been made in extending CGP with techniques from genetic algorithms, crossover operators and multi-objective optimization. CGP has also shown its capacity to step up to real-world problems, with recent applications in optimal control, ATARI game playing and biomedical data analysis. This tutorial will cover the foundational techniques, advanced developments, and applications across various problem domains. In this tutorial, we propose to present:
- The foundational elements of Cartesian Genetic Programming (encoding, decoding, evolution);
- Advances based on the classic CGP model (e.g., self-modifying operators, mixed-type, multi-modal extensions);
- Evolutionary techniques used for evolving CGP-based genomes, from the original (1+λ) evolution strategy to recent crossover and genetic algorithm-based approaches;
- Recent real-world applications of CGP and their positive impacts on interpretability and data efficiency within these application domains.
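The classic integer encoding and its decoding into a directed acyclic graph can be sketched in a few lines. The function set, the triple-based genome layout, and the single-output simplification below are illustrative assumptions rather than a faithful reproduction of any specific CGP implementation:

```python
import operator

# a toy function set; real CGP setups choose domain-specific primitives
FUNCS = [operator.add, operator.sub, operator.mul]

def decode(genome, inputs):
    """Evaluate a CGP genome laid out as (function, src1, src2) triples
    followed by a single output gene. Node indices 0..len(inputs)-1 hold
    the program inputs; each internal node may only connect to earlier
    nodes, which keeps the decoded graph acyclic."""
    values = list(inputs)
    *nodes, out = genome
    for f, a, b in nodes:
        values.append(FUNCS[f](values[a], values[b]))
    return values[out]

# genome computing (x + y) * x:
# node 2 = add(node 0, node 1), node 3 = mul(node 2, node 0), output = node 3
genome = [(0, 0, 1), (2, 2, 0), 3]
print(decode(genome, [3.0, 4.0]))  # (3 + 4) * 3 = 21.0
```

A (1+λ) evolution strategy over such genomes simply mutates the parent's integer genes to produce λ offspring per generation and keeps the best individual.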
Roman Kalkreuth
Sylvain Cussat-Blanc
Dennis G. Wilson
Coevolutionary Computation for Adversarial Deep Learning
In recent years, machine learning with Generative Adversarial Networks (GANs) has been recognized as a powerful method for generative modeling. Generative modeling is the problem of estimating the underlying distribution of a set of samples. GANs accomplish this using unsupervised learning. They have also been extended to handle semi-supervised and fully supervised learning paradigms. GANs have been successfully applied to many domains. They can generate novel images (e.g., image colorization or super-resolution, photograph editing, and text-to-image translation), sound (e.g., voice translation and music generation), and video (e.g., video-to-video translation, deep fakes generation, and AI-assisted video calls), finding application in domains of multimedia information, engineering, science, design, art, and games.
GANs are an adversarial paradigm: two NNs compete with each other using antagonistic loss functions to train their parameters with gradient descent. This connects them to evolution, because evolution also exhibits adversarial engagements and competitive coevolution. In fact, the evolutionary computation community’s study of coevolutionary pathologies and its work on competitive and cooperative coevolutionary algorithms offer a means of solving the convergence impasses often encountered in GAN training.
In this tutorial we will explain:
(a) The main concepts of generative modeling and adversarial learning.
(b) GAN gradient-based training and the main pathologies that prevent ideal convergence. Specifically, we will explain mode collapse, oscillation, and vanishing gradients.
(c) Coevolutionary algorithms and how they can be applied to train GANs. Specifically, we will explain how algorithmic enhancements address non-ideal convergence.
(d) To demonstrate, we will draw upon the open-source Lipizzaner framework (http://lipizzaner.csail.mit.edu/). This framework is easy to use and extend. It sets up a spatial grid of communicating populations of GANs.
(e) Participants will be given the opportunity to set up and use the Lipizzaner framework during the tutorial by means of a Jupyter notebook expressly developed for teaching purposes.
Jamal Toutouh
Dr. Jamal Toutouh is an Associate Professor at the University of Málaga, member of the "José María Troya Linero" Institute of Software Technologies and Engineering. Previously, he was a Marie Skłodowska Curie Postdoctoral Fellow at Massachusetts Institute of Technology (MIT) in the USA, at the MIT CSAIL Lab. He obtained his Ph.D. in Computer Engineering at the University of Malaga (Spain), which was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. His dissertation focused on the application of Machine Learning methods inspired by Nature to address Smart Mobility problems.
His ongoing research explores the combination of Nature-inspired gradient-free and gradient-based methods to address Generative Modelling and Adversarial Machine Learning. The main idea is to devise new algorithms that improve the efficiency and efficacy of state-of-the-art methodology, mainly by applying evolutionary computation and related techniques, such as particle swarm optimization, in the form of Evolutionary Machine Learning approaches. He also works on the application of Machine Learning to address problems related to Smart Mobility, Smart Cities, and Climate Change.
Una-May O’Reilly
Combinatorial Optimisation Can be Different from Continuous Optimisation for MOEAs
This tutorial explores the differences between multi-objective combinatorial optimisation problems and continuous problems through the lens of evolutionary algorithms. It aims to guide researchers and practitioners in designing effective multi-objective evolutionary algorithms (MOEAs) for multi-objective combinatorial optimisation problems.
In the first part of the tutorial, I will show an interesting yet unwelcome behaviour of MOEAs in handling combinatorial optimisation problems. That is, when dealing with such problems, the search, in different executions of an MOEA (e.g., NSGA-II), tends to stagnate in different areas in the search space. In other words, the final populations obtained by an MOEA under multiple executions, which can be very close in the objective space, are located far away from one another in the search space. This is not the case for MOEAs when dealing with continuous problems.
In the second part, I will extend this discussion by introducing multi-objective local search heuristics, and show that the behaviour of MOEAs can even be more “localised” than local search methods. Following this, I will delve into key questions:
• Are MOEAs less hopeful for tackling combinatorial optimisation problems?
• What went wrong with current MOEAs?
• How to improve them, and what distinguishes the design of MOEAs for multi-objective combinatorial problems versus continuous problems?
These insights aim to provide actionable strategies for enhancing MOEA performance in combinatorial settings.
Miqing Li
Dr. Miqing Li (https://sites.google.com/view/miqing-li) is an Associate Professor at the University of Birmingham, UK. His research centres on multi-objective optimisation, where he designs population-based randomised algorithms, primarily evolutionary algorithms, to address both fundamental and applied problems.
In his foundational research, Dr. Li tackles general, challenging problems (e.g., problems with many objectives, complex constraints, expensive evaluations, or combinatorial structure). Some of his algorithms (e.g., GrEA and SDE) are among the most widely used ones in the area. On the applied front, he collaborates with experts in diverse areas, including software engineering, high-performance computing, neural architecture search, mechanical engineering, and chemical engineering, developing innovative methods to address real-world problems. Many of these projects have appeared in high-profile journals, with some being among the most cited ones since their publication (e.g., TPDS’2016 and TOSEM’2016).
Dr. Li’s work has received Best Paper Awards/Nominations at major conferences in evolutionary computation (GECCO, CEC, and SEAL). He was the founding chair of the IEEE CIS Task Force on Many-Objective Optimisation and currently serves as an Associate Editor for IEEE Transactions on Evolutionary Computation. Additionally, he has delivered tutorials at major conferences in evolutionary computation and software engineering, including CEC, ICSE, and FSE.
Constraint-Handling Techniques used with Evolutionary Algorithms
Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires a careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may reach the feasible region quickly but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be briefly discussed.
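One of the approaches mentioned above, tournament rules based on feasibility (in the spirit of Deb's feasibility rules), can be sketched as a simple binary-tournament comparator. The pair encoding and tie-breaking below are illustrative assumptions:

```python
def feasibility_tournament(a, b):
    """Compare two candidates for a minimisation problem, each encoded
    as a (fitness, total_constraint_violation) pair:
      1. a feasible candidate (violation == 0) beats an infeasible one;
      2. between two feasible candidates, the lower fitness wins;
      3. between two infeasible candidates, the lower violation wins."""
    (fa, va), (fb, vb) = a, b
    if va == 0 and vb == 0:
        return a if fa <= fb else b       # rule 2: compare fitness
    if va == 0:
        return a                          # rule 1: feasible wins
    if vb == 0:
        return b                          # rule 1: feasible wins
    return a if va <= vb else b           # rule 3: compare violation
```

Because the comparison never mixes fitness and violation in a weighted sum, no penalty factor needs to be tuned, which sidesteps the difficulty with penalty functions described above.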
Carlos Coello Coello
Decomposition Evolutionary Multi-Objective Optimization: What We Know from the Literature and What We are not Clear from a Data Science Perspective
Ke Li
Qingfu Zhang
Evolution of Neural Networks
Risto Miikkulainen
Sebastian Risi
David Ha
Yujin Tang
Evolutionary Art and Design in the Machine Learning Era
In recent years, there has been a surge in interest in Computational Creativity, with a particular emphasis on adopting Machine Learning (ML) techniques, particularly Deep Learning, in art and design. Generative Artificial Intelligence (AI) models such as Midjourney, Stable Diffusion and DALL-E, to name a few, are currently widespread and can be used to create a wide range of artefacts effortlessly.
While current generative ML models have achieved impressive results, they are not without flaws. As this tutorial will show, their inclination toward imitation rather than innovation often results in the generation of stock images rather than genuinely creative pieces. These limitations open the door for the application of evolutionary computation techniques, including evolutionary machine learning (EML).
The goals of this tutorial encompass: (i) presenting an overview of the current state of the art in Generative AI, distinguishing between data-driven models, such as deep learning models, and non-data-driven approaches, like most evolutionary methods; (ii) identifying the main challenges and opportunities for the application of EML in the fields of Art and Design, which includes the combination of evolutionary approaches with generative ML; (iii) presenting concrete examples of hybridisation, underscoring the uniqueness of their results; and (iv) identifying open challenges in the field.
In particular, we will address three main pillars for the development of creative and co-creative applications: representation, quality assessment, and user interaction. We will place a particular emphasis on applications and techniques that expand the user's creative possibilities and lead to novel and unforeseen results. We will also reflect upon the gap between academia and the real-world application of evolutionary art approaches and provide examples of how to bridge it, either by incorporating evolutionary techniques into the artistic and design processes or by making them the ultimate goal. Lastly, we will share recent results indicating that EML may outperform mainstream Generative AI techniques for design tasks, where specific requirements exist and form must meet function.
Penousal Machado
Penousal Machado is an Associate Professor at the Department of Informatics of the University of Coimbra in Portugal, the coordinator of the Cognitive and Media Systems, and the scientific director of the Computational Design and Visualization Lab. of CISUC. He is also the president of SPECIES - the Society for the Promotion of Evolutionary Computation in Europe and its Surroundings.
His research interests include Evolutionary Computation, Computational Creativity, Artificial Intelligence, and Information Visualization. He is the author of more than 200 refereed journal and conference papers in these areas, and his peer-reviewed publications have been nominated for and have received multiple best paper awards. His publications have gathered over 4000 citations, with an h-index of 29 and an i10-index of 99. He has advised 15 PhD theses and 60 MSc theses. Recently, with Wolfgang Banzhaf and Mengjie Zhang, he edited the "Handbook of Evolutionary Machine Learning", published by Springer.
He is also the chair of several scientific events, including, among the most recent, ICCC 2020, PPSN XV, and EvoStar 2016; member of the Programme Committee and Editorial Board of some of the main conferences and journals in these fields; member of the Steering Committee of EuroGP, EvoMUSART and Evostar.
He has received several scientific awards, including the prestigious EvoStar Award for Outstanding Contribution to Evolutionary Computation in Europe, and the award for Excellence and Merit in Artificial Intelligence granted by the Portuguese Association for Artificial Intelligence.
Penousal Machado has been invited to perform keynote speeches in various domains, from evolutionary computation to visualization and art. His work was featured in the Leonardo journal, Wired magazine and presented in venues such as the National Museum of Contemporary Art (Portugal) and the “Talk to Me” exhibition of the Museum of Modern Art, NY (MoMA).
João Correia
João Correia is an Assistant Professor at the University of Coimbra, a researcher of the Computational Design and Visualization Lab., and a member of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the same university. He holds a PhD in Information Science and Technology from the University of Coimbra and an MSc and BSc in Informatics Engineering from the same university. His main research interests include Evolutionary Computation, Machine Learning, Adversarial Learning, Computer Vision and Computational Creativity. He is involved in the program committees of international conferences in the areas of Evolutionary Computation, Artificial Intelligence, Computational Art and Computational Creativity, and he is a reviewer for various conferences and journals in those areas, namely GECCO and EvoStar. He has served as a remote reviewer for European Research Council grants and is an executive board member of SPECIES. He was the publicity chair and chair of the International Conference on Evolutionary Art, Music and Design, and is currently the publicity chair for EvoStar - The Leading European Event on Bio-Inspired Computation and chair of EvoApplications, the International Conference on the Applications of Evolutionary Computation. Furthermore, he has authored and co-authored several articles at international conferences and in journals on Artificial Intelligence and Evolutionary Computation. He is involved in national and international projects concerning Evolutionary Computation, Machine Learning, Generative Models, Computational Creativity and Data Science.
Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.
Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also attracted considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of hardware and computing environments such as GPUs and grid/cloud/parallel computing systems, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications.
The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas covered will include edge detection, segmentation, object tracking, object recognition, motion detection, and image classification and recognition. EC techniques covered will include genetic algorithms, genetic programming, particle swarm optimisation and evolutionary multi-objective optimisation, as well as memetic/hybrid paradigms. We focus on the use of evolutionary deep learning ideas for image analysis, including the automatic learning of architectures, parameters and transfer functions of convolutional neural networks. The use of GPU boxes for real-time/fast object classification will also be discussed. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results. Basic deep-network-like approaches will also be discussed for GP, in which, in an auto-encoder fashion, parametric regression can be used to build embeddings for signals or 2D patterns.
Stefano Cagnoni
Ying Bi
Yanan Sun
Evolutionary Computation for Feature Selection and Feature Construction
In data mining/big data and machine learning, many real-world problems, such as bio-data classification and biomarker detection, image analysis and text mining, often involve a large number of features/attributes. However, not all of these features are essential, since many are redundant or even irrelevant, and the useful features are typically not equally important. Using all the features for classification or other data mining tasks typically does not produce good results, due to the high dimensionality and the large search space. This problem can be addressed by feature selection, which selects a small subset of the original (relevant) features, or by feature construction, which creates a smaller set of high-level features from the original low-level features.
Feature selection and construction are very challenging tasks due to the large search space and feature interaction problems. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most existing methods still suffer from stagnation in local optima and/or high computational cost. Owing to their global search potential, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and have achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means of handling big dimensionality issues where feature selection and construction are required. Furthermore, feature selection and dimensionality reduction have also become a main approach to explainable machine learning and interpretable AI.
The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such evolutionary computation techniques (with a focus on particle swarm optimisation and genetic programming) can be effectively applied to feature selection/construction and dimensionality reduction and provide promising results.
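As a toy illustration of the wrapper idea behind evolutionary feature selection (a sketch for this page, not part of the tutorial materials), the snippet below evolves a bit mask over features with a simple (1+lambda) evolutionary algorithm. The per-feature gain values and the size penalty are invented stand-ins for a real classifier's accuracy estimate.

```python
import random

def evolve_feature_mask(gains, penalty=0.3, lam=20, generations=60, seed=1):
    """Toy (1+lambda) EA for feature selection: a candidate is a bit mask
    over features, and fitness rewards the gain of selected features while
    penalising subset size (a stand-in for wrapper-style accuracy)."""
    rng = random.Random(seed)
    n = len(gains)

    def fitness(mask):
        return sum(g for g, m in zip(gains, mask) if m) - penalty * sum(mask)

    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        for _ in range(lam):
            # flip each bit independently with probability 1/n
            child = [b ^ (rng.random() < 1.0 / n) for b in parent]
            if fitness(child) >= fitness(parent):
                parent = child
    return parent, fitness(parent)

# Three genuinely relevant features (gain 1.0) among ten near-noise ones;
# the penalty makes selecting an irrelevant feature a net loss.
gains = [1.0, 1.0, 1.0] + [0.1] * 7
mask, fit = evolve_feature_mask(gains)
print(mask, round(fit, 3))
```

On this separable toy fitness the EA reliably keeps exactly the three relevant features; real wrapper approaches replace the synthetic `fitness` with cross-validated accuracy of a classifier trained on the selected subset.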
XUE Bing
Bing Xue is a Fellow of IEEE, a Fellow of Engineering New Zealand, currently Professor of Artificial Intelligence and Deputy Head of School in the School of Engineering and Computer Science at VUW, and Deputy Director of the Centre for Data Science and Artificial Intelligence at VUW. Her research focuses mainly on evolutionary computation, machine learning, big data, feature selection/learning, evolving neural networks, explainable AI and their real-world applications. Bing has over 400 papers published in fully refereed international journals and conferences, including many highly cited and most-popular papers.
Bing is Chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications and Vice-Chair of the IEEE CIS Task Force on Transfer Learning and Transfer Optimisation. She has also served as an Associate/Guest Editor or Editorial Board Member for more than 10 international journals, including IEEE TEVC, ACM TELO, IEEE TETCI, and IEEE TAI. She is a key organiser of many international conferences, e.g. General Chair of PRICAI 2025, Conference Chair of EuroGP 2025, IEEE CEC 2024 and EuroGP 2024, Co-ambassador for Women in Data Science NZ 2025, 2024 and 2023, Panel Chair and Conflict-of-Interest Chair for IEEE CEC 2023, Tutorial Chair for IEEE WCCI 2022, Publication Chair of EuroGP 2022, Track Chair for ACM GECCO 2019-2022, Workshop Chair for IEEE ICDM 2021, Conference Activities Chair for IEEE SSCI 2021, Publicity Chair for IEEE CEC 2021, General Co-Chair of IVCNZ 2020, Program Co-Chair for KETO 2020, Senior PC member of IJCAI 2019-2021, Finance Chair of IEEE CEC 2019, Program Chair of Australasian AI 2018, and founder and Chair of the IEEE CIS FASLIP Symposium since 2016, among other roles at international conferences.
Mengjie Zhang
Prof Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of Engineering New Zealand, a Fellow of IEEE and an IEEE Distinguished Lecturer, currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation and Machine Learning Research Group. He is also the Director of the Centre for Data Science and Artificial Intelligence at the University.
His research focuses mainly on AI, machine learning and big data, particularly evolutionary learning and optimisation, feature selection/construction and big dimensionality reduction, computer vision and image analysis, scheduling and combinatorial optimisation, classification with unbalanced and missing data, and evolutionary deep learning and transfer learning. Prof Zhang has published over 900 research papers in refereed international journals and conferences. He has been serving as an associate editor for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics and the Evolutionary Computation Journal (MIT Press), and has been involved in many major AI and EC conferences as a chair. He received the "EvoStar/SPECIES Award for Outstanding Contribution to Evolutionary Computation in Europe" in 2023. Since 2007, he has been listed among the top five (currently No. 3) genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He is also a Clarivate Highly Cited Researcher in the field of Computer Science (2023).
He is the Tutorial Chair for GECCO 2014, 2023 and 2024, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021.
Prof Zhang is currently the Chair for IEEE CIS Awards Committee. He is also a past Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the Emergent Technologies Technical Committee and the Evolutionary Computation Technical Committee, a past Chair for IEEE CIS PubsCom Strategic Planning subcommittee, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Evolutionary computation for stochastic problems
This tutorial will give an overview of different evolutionary computation approaches for dealing with stochastic problems. It will cover theoretical foundations as well as a wide range of concepts and approaches on how to deal with stochastic objective functions and/or stochastic constraints.
The material presented will range from theoretical studies involving runtime analysis to applications of evolutionary algorithms for stochastic problems in real-world scenarios such as engineering and mine planning.
Frank Neumann
Aneta Neumann
Hemant Kumar Singh
Evolutionary Computation meets Machine Learning for Combinatorial Optimisation
Combinatorial optimisation is an important research area with many real-world applications, such as scheduling, vehicle routing, cloud resource allocation, supply chain management, logistics and transport. Most combinatorial optimisation problems are NP-hard, making it challenging to design effective algorithms to solve them to optimality. In practice, (meta-)heuristic methods, including evolutionary approaches, are therefore widely used to address such problems. Unfortunately, designing an effective and efficient (meta-)heuristic typically requires extensive domain expertise and a lot of trial and error for each different problem variant encountered in the real world.
In recent years, machine learning has also emerged as a promising ingredient for solving combinatorial optimisation problems better and/or more easily. First, machine learning can design combinatorial optimisation algorithms automatically by searching for algorithms/heuristics rather than solutions, and the learned algorithms/heuristics can generalise to future unseen problem variants to obtain high-quality solutions. This can greatly reduce the dependence on human expertise and the time needed to manually design effective algorithms. Second, machine learning can learn decision-making policies for dynamic combinatorial optimisation problems (e.g., dispatching rules for dynamic scheduling), which can achieve effectiveness and efficiency simultaneously. Third, machine learning may discover new design patterns and knowledge that can further improve algorithm design for solving complex combinatorial optimisation problems.
The aim of this tutorial is two-fold. On the one hand, we will give an overview of how classical metaheuristics may profit from the use of machine learning and provide a few advanced examples. On the other hand, we will introduce how (evolutionary) machine learning, specifically, can be used for solving combinatorial optimisation problems, including basic design issues and some case studies.
The outline of the tutorial is as follows.
1. Introduction and Background
2. Evolutionary Computation to Learn Combinatorial Optimisation Heuristics
3. Machine Learning to Learn Metaheuristics
4. Challenges and Future Directions
Yi Mei
Günther Raidl
Günther Raidl is Professor at the Institute of Logic and Computation, TU Wien, Austria, and a member of the Algorithms and Complexity Group. He received his PhD in 1994 and completed his habilitation in Practical Computer Science in 2003 at TU Wien. In 2005 he received a professorship in combinatorial optimization at TU Wien.
His research interests include algorithms and data structures in general and combinatorial optimization in particular, with a specific focus on metaheuristics, mathematical programming, intelligent search methods, and hybrid optimization approaches. His research work typically combines theory and practice for application areas such as scheduling, network design, transport optimization, logistics, and cutting and packing.
Günther Raidl is an associate editor for the INFORMS Journal on Computing and the ACM Transactions on Evolutionary Learning and Optimization, and serves on the editorial boards of several journals, including Algorithms, Engineering Applications of Artificial Intelligence, and Metaheuristics. He is co-founder and steering committee member of the annual European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP). Since 2016 he has also been a founding faculty member of the Vienna Graduate School on Computational Optimization.
Günther Raidl has co-authored a textbook on hybrid metaheuristics and over 190 reviewed articles in scientific journals, books, and conference proceedings, and has co-edited 13 books. More information can be found at http://www.ac.tuwien.ac.at/raidl.
Evolutionary Reinforcement Learning
Many significant and headline breakthroughs in AI over the past decade have been powered in part by Reinforcement Learning (RL). This includes beating human-level performance in video games of various complexity, from Atari (Mnih et al. 2013; Nature) to StarCraft (Vinyals et al. 2019; Nature), and in strategy games like Chess and Go (Silver et al. 2018; Science). RL has also demonstrated state-of-the-art performance in real-world applications such as control for quadruped robots (Lee et al. 2020; Nature), high-speed drones (Kaufmann et al. 2023; Nature) and plasma reactors (Degrave et al. 2022; Nature). Even modern instruction-following language models and chatbots like ChatGPT (Christiano et al. 2017) and more recent reasoning-centric models like o1 are possible in part due to RL, which has re-ignited interest in the field.
Interestingly, elements of Evolutionary Computation (EC) such as population-based training (Jaderberg et al. 2017) and Quality-Diversity algorithms (Pugh et al. 2016) have been key ingredients in some of these breakthroughs. The combination of EC and RL methods is not new, but it has gained popularity and interest recently as researchers discover the limitations of the individual approaches, opening up many exciting new research avenues and opportunities.
This tutorial will give a modern overview of the various synergies and questions addressed when combining EC and RL methods, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges and future opportunities at the intersection of EC and RL, will be presented. The tutorial will in particular focus on:
- What is RL?
- Deep learning for RL
- Neuroevolution for RL
- Meta-learning RL with Evolution
- Quality-Diversity for RL
- Curriculum Learning and Environment Generation using EC and RL
- Other modern combinations of EC and RL
- Open questions and future challenges
The tutorial will effectively complement the Complex Systems, Neuroevolution, Evolutionary Machine Learning, Genetic Algorithms and Real-World Applications tracks, each of which contains several papers about RL. For instance, 14/178 of the papers in the GECCO 2024 proceedings, 13/180 papers in GECCO 2023, and 9/158 papers in GECCO 2022 contained RL elements in their title, keywords or abstract.
Antoine Cully
Bryan Lim
Manon Flageat
Paul Templier
Fair Performance Comparison of Evolutionary Multi-Objective Algorithms
Evolutionary multi-objective optimization (EMO) has been a very active research area in recent years, with new EMO algorithms proposed almost every year. When a new EMO algorithm is proposed, computational experiments are usually conducted to compare its performance with existing algorithms. The experimental results are then summarized and reported in tables together with statistical significance test results, and usually show that the new algorithm outperforms the existing ones. However, fair comparison of different EMO algorithms is not easy, since the evaluated performance of each algorithm usually depends on the experimental settings, and also because solution sets, rather than single solutions, are evaluated.
In this tutorial, we will explain and discuss various difficulties in the fair performance comparison of EMO algorithms related to four issues: (i) the termination condition of each algorithm, (ii) the population size of each algorithm, (iii) performance indicators, and (iv) test problems. For each issue, its strong effect on comparison results is clearly demonstrated. Our discussion of these difficulties is intended to encourage the future development of the EMO research field without an excessive focus on proposing new algorithms overly specialized to a specific setting, since such algorithms are unlikely to work well on various real-world tasks. We will then discuss how to handle each issue for fair comparison and suggest some promising future research topics related to each issue.
Hisao Ishibuchi
Lie Meng Pang
Genetic Programming as a Hyper-Heuristic for Solving Combinatorial Optimisation Problems
Combinatorial optimization problems occur in many real-world situations and scenarios. Often these problems are NP-hard, meaning that they cannot, in general, be solved to optimality in reasonable time, but have to be tackled with various heuristic methods, most commonly improvement-based metaheuristics. However, in many real-world cases, improvement-based methods cannot be applied, either because the problems are stochastic or dynamic, so that not all information about the problem is available in advance, or because the problems are simply too large to search the solution space efficiently, even heuristically. In these cases, constructive heuristics are usually the method of choice. Constructive heuristics build the solution step by step by simply determining the next decision that needs to be made, and can therefore easily adapt to changes in the problem. Their main drawback is that they are difficult to design manually for the different problems that arise. For this reason, researchers have turned to hyper-heuristic methods, i.e., methods that are not designed to solve the problem itself, but to generate a heuristic that can then be used to solve any new problem instance of the type for which it was designed. Although many hyper-heuristic methods have been proposed in the literature, genetic programming has become the most popular method for this purpose. Over the years, genetic programming has been used to generate heuristics for various combinatorial optimization problems, including various scheduling, routing and similar problems. Especially in recent years, research in this area has gained attention, with many different research directions being investigated, which illustrates the importance of the topic.
The aim of this tutorial is to give a brief introduction to automated heuristic design using genetic programming. To this end, the tutorial will first introduce genetic programming as an evolutionary computation method. It will then describe the application of genetic programming to the automated design of heuristics, focusing on how genetic programming is used as a hyper-heuristic method: how the method needs to be adapted and what steps are required to successfully apply genetic programming to create a heuristic for a selected combinatorial optimization problem. All this will be illustrated using a simple combinatorial optimization problem as a showcase. The tutorial will also present several applications of genetic programming to generate heuristics for various combinatorial optimization problems from the literature, to illustrate the broad applicability of the method, but also to highlight the differences and challenges that arise in different problems. The tutorial will cover some recent developments in the automatic generation of dispatching rules and outline several new research directions in this area, such as the generation of multi-objective heuristics, the application of ensemble learning methods, and the construction of surrogate models. Finally, the current limitations and open questions of the approach will be outlined. The tutorial will help interested researchers get a good overview of this research area as well as the main ideas and challenges for future studies, so that the audience should gain knowledge that will help them successfully apply genetic programming to a given combinatorial optimization problem.
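To make the constructive-heuristic idea concrete, here is a minimal sketch of what a dispatching rule is on a single machine (toy jobs and rules invented for illustration; GP, as described above, would evolve the priority expression instead of it being hand-written):

```python
def schedule(jobs, priority):
    """Single-machine constructive scheduling: at each decision point,
    dispatch the unscheduled job with the lowest priority value and
    accumulate total tardiness."""
    t, tardiness, remaining = 0, 0, list(jobs)
    while remaining:
        job = min(remaining, key=lambda j: priority(j, t))
        remaining.remove(job)
        t += job[0]                      # processing time
        tardiness += max(0, t - job[1])  # lateness past the due date
    return tardiness

# Toy jobs as (processing_time, due_date); two classic dispatching rules
jobs = [(4, 5), (2, 3), (6, 14), (3, 7)]
spt = lambda job, t: job[0]  # Shortest Processing Time
edd = lambda job, t: job[1]  # Earliest Due Date
print(schedule(jobs, spt), schedule(jobs, edd))
```

A GP hyper-heuristic would evolve the body of `priority` as an expression tree over job and machine attributes, evaluating each candidate rule by running it through such a simulation on training instances.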
Marko Đurasević
Francisco J. Gil Gala
Domagoj Jakobovic
Yi Mei
Intelligent Evolution Optimization: Guided from Deep Learning to Large Language Model
The rapid development of machine learning has significantly influenced the field of evolutionary optimization. Traditional evolutionary optimization approaches often suffer from cold starts and inefficient solution searches. Learning-based evolutionary optimization, which combines machine learning and evolutionary algorithms, has emerged as a promising research area (e.g., Learning for EC appeared for the first time as a new track at GECCO 2024). These methods enable the discovery of effective feature representations, prediction of near-optimal solutions, dimensionality reduction, and extraction of knowledge from training data to guide the search process. Despite these advancements, learning-based methods typically require large amounts of training data and often lack generalizability across problem domains. The recent rise of large language models (LLMs) introduces a new paradigm, as these models are pre-trained on vast amounts of data and can be fine-tuned with a minimal number of examples to generate effective heuristic algorithms and solutions for evolutionary optimization.
This tutorial will provide an overview of the latest research development in intelligent evolutionary optimization, focusing on the transition from traditional machine learning to deep learning, and then to business optimization and evolutionary optimization guided by large language models.
Hua Xu
Xiaodong Li
Yuan Sun
Huigen Ye
Introduction to Quantum Optimization
Scope:
Quantum computers are rapidly becoming more powerful and increasingly applicable to solve problems in the real world. They have the potential to solve extremely hard computational problems, which are currently intractable by conventional computers. A major application domain of quantum computers is solving hard combinatorial optimization problems. This is the emerging field of quantum optimization. The algorithms that quantum computers use for optimization can be regarded as general types of stochastic heuristic optimization algorithms.
There are two main types of quantum computers, quantum annealers and quantum gate computers, which have very different architectures. To solve optimization problems on quantum computers, the problems need to be reformulated in a format suitable for the specific architecture. Quantum annealers are specially tailored to solve combinatorial optimization problems once these are reformulated as Quadratic Unconstrained Binary Optimisation (QUBO) problems. On quantum gate computers, the Quantum Approximate Optimization Algorithm (QAOA) can be used to approximately solve optimization problems; in this case, a classical algorithm is used on top of the quantum computer to guide the search for parameters.
The tutorial is aimed at researchers in optimization who have no previous knowledge of quantum computers and want to learn how to solve optimization problems on them. It will demonstrate how to solve a simple combinatorial optimization problem in practice on the two main quantum computer architectures (quantum gate computers and quantum annealers), using Jupyter notebooks for hands-on experimentation with quantum computer simulators.
Content:
Part 1 (Quantum Annealers):
Quantum Annealers Background
Optimisation on Quantum Annealers
Solving the Max Cut problem on a Quantum Annealer
Part 2 (Quantum Gate Computers):
Quantum Gate Computers Background
Optimisation on Quantum Gate Computers
Solving the Max Cut problem on a Quantum Gate Computer (via QAOA)
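To give a flavour of the QUBO reformulation used in both parts, here is a small classical sketch (written for this page, not taken from the tutorial's notebooks): it builds the QUBO matrix for Max Cut on a toy graph and checks it by brute force, whereas on real hardware the matrix would be handed to an annealer or a QAOA routine.

```python
import itertools

def maxcut_qubo(n, edges):
    """Build an upper-triangular QUBO matrix Q so that minimising
    x^T Q x over binary x maximises the number of cut edges."""
    Q = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        # cut(i, j) = x_i + x_j - 2 x_i x_j; maximising the total cut
        # means minimising its negation, accumulated into Q
        Q[i][i] -= 1
        Q[j][j] -= 1
        Q[i][j] += 2
    return Q

def qubo_value(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Triangle graph: at most 2 of its 3 edges can be cut
Q = maxcut_qubo(3, [(0, 1), (1, 2), (0, 2)])
best_value, best_x = min(
    (qubo_value(Q, x), x) for x in itertools.product((0, 1), repeat=3))
print(-best_value)  # size of the best cut
```

The brute-force loop stands in for the quantum sampler: an annealer draws low-energy assignments of the same objective, and QAOA prepares a parameterised circuit whose measurements concentrate on them.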
Alberto Moraglio
Francisco Chicano
Introductory Mathematical Programming for EC
Global optimization of complex models has for several decades been approached by means of the formal algorithms of Mathematical Programming (MP) (often branded as Operations Research, yet strongly rooted in Theoretical CS), and simultaneously by a wide range of dedicated heuristics (frequently under the label of Soft Computing), where EC resides. The former is applicable when explicit modeling is available, whereas the latter is typically utilized for simulation- or experimentation-based optimization (but is also applicable to explicit models). These two branches complement each other, yet in practice are studied as two independent CS disciplines. It is widely recognized nowadays that EC scholars become stronger, better-equipped researchers when obtaining knowledge of this so-called "optimization complement". In other words, the notion that our EC education can do without basic MP is untenable at present, and this tutorial aims at bridging the gap for our community's scholars.
The tutorial comprises three parts, aiming to introduce basic MP for EC audience.
(*) The first part presents the fundamentals of MP. It overviews mathematical optimization and outlines its taxonomy when classified by the underlying model: convex optimization (linear programs (pure-LP) and non-linear programs) versus combinatorial optimization (integer and mixed-integer linear programs ((M)ILP) and integer quadratic programs (IQP)). It then discusses some theoretical aspects, such as polyhedra and the duality theorems, and covers selected algorithms for the major classes of problems: Dantzig's Simplex for pure-LP, the Ellipsoid method for convex models, and Branch-and-Bound for integer programming.
(**) The second part focuses on MP in practice. The tutorial presents the principles of MP modeling, with emphasis on the roles of constraints and auxiliary/dummy decision variables in MP. It is compared to equivalent modeling for bio-inspired heuristics, which is formed differently with respect to such components.
(***) The third part constitutes an interactive demo session of problem-solving. We plan to model in a precept-style, using ILOG Studio's OPL, several ILP benchmark problems, including the basic Knapsack, Markowitz portfolio, N-queens, and the Traveling Salesman Problem – as a demonstration of practical problem-solving by means of mathematical optimization.
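As a taste of the modeling style of Parts 2 and 3, the basic 0/1 knapsack mentioned above can be stated as: maximize sum(v_i * x_i) subject to sum(w_i * x_i) <= W with binary decision variables x_i. The sketch below (toy data invented here, plain Python instead of OPL) checks that model by enumeration:

```python
import itertools

def knapsack_by_enumeration(values, weights, capacity):
    """Enumerate all binary assignments of the ILP decision variables x_i
    and keep the best feasible one (fine for tiny instances only; a real
    solver would use Branch-and-Bound instead of enumeration)."""
    n = len(values)
    best_value, best_x = 0, (0,) * n
    for x in itertools.product((0, 1), repeat=n):
        weight = sum(w * xi for w, xi in zip(weights, x))
        if weight <= capacity:  # the knapsack constraint
            value = sum(v * xi for v, xi in zip(values, x))
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x

value, x = knapsack_by_enumeration([6, 10, 12], [1, 2, 3], 5)
print(value, x)
```

Note how the MP model separates the objective from the constraint, in contrast to typical EC formulations, where infeasibility is handled by penalties or repair inside a single fitness function.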
Ofer Shir
Linear Genetic Programming
Wolfgang Banzhaf
Wolfgang Banzhaf is the John R. Koza Chair for Genetic Programming in the Department of Computer Science and Engineering at Michigan State University. He received his Dr. rer. nat. (PhD) from the Department of Physics of the Technische Hochschule Karlsruhe, now the Karlsruhe Institute of Technology (KIT). His research interests are evolutionary computing, complex adaptive systems, and self-organization of artificial life. He is a member of the Advisory Committee of ACM SIGEVO, the Special Interest Group on Evolutionary Computation of the Association for Computing Machinery, and served as its Chair from 2011 to 2015, after serving as SIGEVO's treasurer from 2005 to 2011. He was a member of the Executive Board of SIGEVO from its foundation in 2005 until 2021, and of the International Society for Artificial Life (ISAL) from 2009 to 2015 and again from 2019 to today. He founded the scholarly journal "Genetic Programming and Evolvable Machines".
Ting Hu
Machine Learning Assisted Evolutionary Multi- and Many-objective Optimization
This tutorial will focus on how Machine Learning (ML) techniques, applied to the evolving solution sets offered by Evolutionary Multi- and Many-objective Optimization Algorithms (EMaOAs), can facilitate knowledge discovery and performance enhancement across different phases of optimization, including problem-modeling, optimal search, and post-optimization decision-making. Beginning with the essential concepts in the EMaO and ML domains, this tutorial will cover some representative studies endorsing the benefits of their integrated use. It will highlight how ML intervention can facilitate: (i) a better understanding of the problem structure, (ii) dedicated operators to enhance the search efficacy of EMaOAs on convergence- and diversity-hard problems, and (iii) more efficient and customized decision-making. The presented approaches will be supported by exhaustive experimental results. Importantly, this tutorial will also emphasize the importance of not distorting the basic tenets of EMaOAs in pursuit of ML integration. A template to ensure the latter will be presented in light of the ML-based risk-reward trade-off, the exploration-exploitation balance, minimizing ad-hoc parameterization and avoiding extra solution evaluations. Notably, this tutorial will be interactive and will also include a walkthrough and execution of the Python code for the recently proposed ML-based operators to enhance EMaOA search efficacy. The tutorial will conclude with pointers to future directions in ML-assisted Evolutionary Multi- and Many-objective Optimization.
Syllabus:
1. Basic tenets of Evolutionary Multi- and Many-objective Optimization Algorithms (EMaOAs).
2. Basic tenets of Machine Learning (ML) techniques and synergy with EMaOAs.
3. Utility of ML intervention towards understanding the problem-structure better.
4. Dedicated ML-based operators for performance enhancement of EMaOAs, covering:
4a. Innovized operators (IP and IP2) for convergence enhancement (employing Artificial Neural Network (ANN) and Random Forests (RF)), with supporting results on convergence-hard test problems.
4b. Innovized operator (IP3) for diversity enhancement (employing k-Nearest Neighbors (kNN)), with supporting results on diversity-hard test problems.
4c. Unified Innovized operator (UIP) for simultaneous convergence and diversity enhancement (employing RF and kNN) with supporting results on both convergence and diversity-hard test problems.
4d. A template for ML-assisted EMaO: addressing the key considerations of the ML-based risk-reward trade-off, the exploration-exploitation balance, minimizing ad-hoc parameterization and avoiding extra solution evaluations.
4e. A walkthrough and execution of codes implementing the above operators on test problems.
5. Utility of ML intervention towards more efficient and customized post-optimization decision-making.
6. Pointers to future directions in ML-assisted Evolutionary Multi- and Many-objective Optimization and brief discussion of EMaOA-assisted ML development.
Kalyanmoy Deb
DHISH KUMAR SAXENA
Sukrit Mittal
Model-Based Evolutionary Algorithms
In model-based evolutionary algorithms (MBEAs) the variation operators are guided by the
use of a model that conveys problem-specific information so as to increase the
chances that combining the currently available solutions leads to improved
solutions. Such models can be constructed beforehand for a specific problem, or
they can be learnt during the optimization process.
Replacing traditional crossover and mutation operators by building
and using models enables the use of machine learning techniques for automatic
discovery of problem regularities and subsequent exploitation of these
regularities, thereby enabling the design of optimization techniques that can
automatically adapt to a given problem. This is an especially useful feature
when considering optimization in a black-box setting. The use of models can
furthermore also have major implications for grey-box settings where not
everything about the problem is considered to be unknown a priori.
Well-known types of MBEAs are Estimation-of-Distribution Algorithms (EDAs)
where probabilistic models of promising solutions are built and samples are
subsequently drawn from these models to generate new solutions.
A more recent class of MBEAs is the family of Optimal Mixing EAs such
as the Linkage Tree GA and, more generally, various GOMEA variants.
The tutorial will mainly focus on the latter types of MBEAs.
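The model-build-and-sample loop of an EDA can be sketched in a few lines. The example below is a minimal UMDA-style univariate EDA on OneMax; all names and parameter values are illustrative, and a univariate model is of course far simpler than the linkage models used in GOMEA:

```python
import random

def umda_onemax(n=20, pop_size=60, n_sel=30, gens=60, seed=1):
    """Minimal univariate EDA: estimate per-bit marginals from the
    selected solutions, then sample the next population from them."""
    rng = random.Random(seed)
    p = [0.5] * n                        # univariate probabilistic model
    best = [0] * n
    for _ in range(gens):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)  # OneMax fitness = number of ones
        if sum(pop[0]) > sum(best):
            best = pop[0]
        selected = pop[:n_sel]
        # Re-estimate the model, clamping marginals to preserve diversity.
        p = [min(0.95, max(0.05, sum(x[i] for x in selected) / n_sel))
             for i in range(n)]
    return best
```

Replacing the independent per-bit marginals with a learned linkage structure is precisely the step that leads from this sketch towards the Linkage Tree GA and GOMEA.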
Dirk Thierens
Peter A. N. Bosman
New, more efficient crossover and local search operators for recombination lattices.
Modern Gray Box Optimization makes it possible to construct new forms of Genetic Algorithms that use neither random mutation nor random recombination. For certain classes of NP-hard problems (ranging from MAXSAT and QUBO to machine scheduling and the Traveling Salesman Problem), it is possible to exactly compute the location of improving moves, sometimes in constant time.
New results show that there are additional decompositions that can be exploited by intelligent recombination operators such as “Partition Crossover” (PX). New methods are able to “break” critical nonlinear interactions such that the nonlinear interactions cancel, or otherwise have no effect during crossover. PX can identify the best of 2^q reachable offspring, where q is the number of recombining components.
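For a function that is a sum of subfunctions, the core idea of PX can be sketched as follows (a toy illustration only, not the constant-time implementation used in practice): restrict the variable interaction graph to the variables where the parents differ, and choose the better parent independently in each connected component.

```python
def partition_crossover(subfns, p1, p2):
    """Toy Partition Crossover for f(x) = sum of fn(x[vars]) terms.
    subfns: list of (vars, fn); p1, p2: parent bit lists (maximisation)."""
    n = len(p1)
    diff = {i for i in range(n) if p1[i] != p2[i]}
    # Variable interaction graph restricted to differing variables.
    adj = {i: set() for i in diff}
    for vars_, _ in subfns:
        d = [v for v in vars_ if v in diff]
        for u in d:
            adj[u].update(w for w in d if w != u)
    # Connected components are independent recombining pieces.
    comps, seen = [], set()
    for s in diff:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    child = list(p1)
    for comp in comps:
        # Score each parent's assignment on the subfunctions touching
        # this component; all other variables involved agree between
        # parents, so each component's choice is independent.
        def score(parent):
            return sum(fn(tuple(parent[v] if v in comp else p1[v]
                                for v in vars_))
                       for vars_, fn in subfns
                       if any(v in comp for v in vars_))
        if score(p2) > score(p1):
            for v in comp:
                child[v] = p2[v]
    return child
```

For example, with f = AND(x0, x1) + AND(x2, x3) and parents [1, 1, 0, 0] and [0, 0, 1, 1] (each scoring 1), the two components are recombined independently and the child [1, 1, 1, 1] is the best of the 2^2 reachable offspring.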
The tutorial will also show how “transforms” can be used to convert any pseudo-Boolean function (with a polynomial description) into k-bounded pseudo-Boolean functions, including the QUBO functions frequently used in quantum computing. This makes it possible to use more powerful evolutionary algorithms that can exploit k-bounded nonlinearity.
Darrell Whitley
Pareto Optimization for Subset Selection: Theories and Practical Algorithms
Chao Qian
Chao Qian is a Professor in the School of Artificial Intelligence, Nanjing University, China. He received the BSc and PhD degrees in the Department of Computer Science and Technology from Nanjing University. After finishing his PhD in 2015, he became an Associate Researcher in the School of Computer Science and Technology, University of Science and Technology of China, until 2019, when he returned to Nanjing University as an Associate Professor. In 2024, he became a Full Professor.
His research interests include artificial intelligence, evolutionary computation, and machine learning. He has published one book “Evolutionary Learning: Advances in Theories and Algorithms”, and over 60 first/corresponding-authored papers in top-tier journals (PNAS, AIJ, ECJ, TEvC, Algorithmica, TCS) and conferences (AAAI, IJCAI, ICML, NeurIPS, ICLR). He has won the ACM GECCO 2011 Best Theory Paper Award, the IDEAL 2016 Best Paper Award, the IEEE CEC 2021 Best Student Paper Award Nomination, and the 21st Annual Humies Bronze Award. He serves on the editorial board of Artificial Intelligence Journal, Evolutionary Computation Journal, IEEE Transactions on Evolutionary Computation, IEEE Computational Intelligence Magazine, etc. He is the founding chair of IEEE Computational Intelligence Society (CIS) Task Force on Evolutionary Learning, and was also the chair of IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation. He has regularly given tutorials and co-chaired special sessions at CEC, GECCO and PPSN, given an Early Career Spotlight Talk at IJCAI 2022, and will be a Program Co-Chair of PRICAI 2025. He has successfully developed algorithms to solve complex optimization problems (e.g., supply chain, wireless network, and chip register optimization) in Huawei, and won Huawei Spark Award twice. He is a recipient of the National Science Foundation for Excellent Young Scholars (2020) and CCF-IEEE CS Young Computer Scientist Award (2023), and has hosted a National Science and Technology Major Project.
Recent Advances in Meta-features Used for Representing Black-box Single-objective Continuous Optimization
This tutorial reviews and presents key advancements in representation learning of meta-features for representing optimization problem instances, algorithm instances, and their interactions in single-objective continuous black-box optimization. These representations are used for learning tasks such as algorithm selection, problem classification, and assessing complementarity between benchmark problem suites. Learning meta-features is highly relevant and important to attendees of the GECCO 2025 conference due to its potential to significantly enhance the performance and efficiency of automated optimization techniques (AutoOPT). The tutorial will explore techniques for learning meta-features specifically designed for single-objective black-box optimization, highlighting their potential applications and the possibilities of transferring them to other areas such as reinforcement learning, multi-objective optimization, etc.
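As a flavour of what such meta-features can look like, the sketch below computes two classic hand-crafted features from a sample of a black-box function, in the spirit of exploratory landscape analysis. The function and feature names are invented for illustration and do not correspond to any specific toolbox's API:

```python
import random

def simple_meta_features(f, dim, n_samples=200, seed=0):
    """Tiny hand-crafted feature sketch: sample the function uniformly
    and summarise the sample, including a fitness-distance correlation
    (FDC) to the best sampled point."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_samples)]
    y = [f(x) for x in X]
    mean = sum(y) / len(y)
    var = sum((v - mean) ** 2 for v in y) / len(y)
    # Distances to the best (minimal) sampled point.
    best = X[y.index(min(y))]
    d = [sum((a - b) ** 2 for a, b in zip(x, best)) ** 0.5 for x in X]
    md = sum(d) / len(d)
    cov = sum((yi - mean) * (di - md) for yi, di in zip(y, d)) / len(y)
    sd_d = (sum((di - md) ** 2 for di in d) / len(d)) ** 0.5
    fdc = cov / (var ** 0.5 * sd_d) if var > 0 and sd_d > 0 else 0.0
    return {"y_mean": mean, "y_var": var, "fdc": fdc}
```

On a unimodal function such as the sphere, the FDC feature is strongly positive (fitness worsens with distance from the optimum); such feature vectors are what downstream models consume for algorithm selection or problem classification.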
Gjorgjina Cenikj
Gjorgjina Cenikj is a young researcher at the Computer Systems Department at the Jožef Stefan Institute. She is currently pursuing a PhD degree in the area of automated machine learning and representation learning for single-objective continuous optimization.
Her master thesis targeted the development of Information Extraction methods for the domain of food and nutrition.
Her main research interests include machine learning, optimization, automated machine learning, benchmarking, representation learning, natural language processing, and recommender systems.
Ana Nikolikj
Ana Nikolikj is a young researcher at the Computer Systems Department at the Jožef Stefan Institute in Ljubljana, Slovenia. She is working towards her PhD at the Jožef Stefan Postgraduate School, focusing on inventing methodologies to understand the behavior of single-objective numerical optimization algorithms via meta-learning. This is aimed at enhancing the process of algorithm performance prediction and algorithm selection. Her areas of interest encompass machine learning, representation learning, and methods for explainability. During her master thesis, she explored algorithm features based on explainable performance prediction models.
Tome Eftimov
Recent developments in data structures and algorithms for evolutionary multiobjective optimization
Evolutionary multiobjective optimization (EMO) is a vivid area of research with a plethora of various methods proposed. Although these methods differ in some aspects, they also share many common algorithmic procedures like:
• filtering nondominated solutions which is a typical step in many EMO algorithms with an external Pareto archive,
• quality indicators (contribution) calculation/estimation used in indicator-based EMO and in computational experiments,
• scalarizing functions optimization in a set of solutions used in some EMO methods and in hypervolume and other quality indicators estimation.
Thus, progress in EMO may come not only from developing new EMO methods but also from improving the efficiency and/or accuracy of such typical algorithmic procedures. In this tutorial we cover some recent developments in efficient data structures and algorithms for EMO. Specifically, we will cover the following topics:
- ND-Tree data structure which may be used for:
• Efficient update of the Pareto archive, i.e. checking if a new solution is nondominated, if so, adding it to the Pareto archive and removing dominated solutions from the archive.
• Finding the solution minimizing/maximizing a (modified) Chebycheff scalarizing function over a set of solutions, which, in turn, may be applied for:
o Adapting Pareto local search (PLS) for more than two objectives – Many objective PLS,
o Improving efficiency of hypervolume or R2 (contribution) estimation.
- Improved quick hypervolume algorithm for exact hypervolume calculation and its extensions for:
• Finding guaranteed bounds of hypervolume (contribution).
• Improving accuracy of Monte Carlo hypervolume (contribution) estimation.
• Finding extreme (minimum or maximum) hypervolume contribution/contributor in a set of solutions.
• Solving hypervolume subset selection problem through the lazy incremental or decremental greedy approach.
• On-line update of hypervolume value.
- A new algorithm for exact calculation of the R2 quality indicator, which is the first such algorithm with a publicly available implementation and computational complexity analysis.
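To make the first item above concrete, here is the naive linear-time Pareto archive update that structures such as the ND-Tree are designed to accelerate (a minimisation sketch of the baseline, not the ND-Tree itself):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, point):
    """Naive O(|archive|) update for the dynamic nondominance problem:
    reject a dominated newcomer, otherwise add it and drop any archive
    members it dominates. ND-Tree achieves the same result in
    sublinear time on average."""
    for a in archive:
        if dominates(a, point) or a == point:
            return archive                     # newcomer rejected
    return [a for a in archive if not dominates(point, a)] + [point]
```

For instance, inserting (2, 2), then (1, 3), then (1, 1) leaves only (1, 1) in the archive, since it dominates both earlier points.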
We will also discuss some promising directions for further research in this area.
References:
• A. Jaszkiewicz and T. Lust. ND-Tree-Based Update: A Fast Algorithm for the Dynamic Nondominance Problem. IEEE Transactions on Evolutionary Computation, 22(5):778–791, 2018.
• Andrzej Jaszkiewicz. Many-Objective Pareto Local Search. European Journal of Operational Research, 271(3):1001–1013, 2018.
• A. Jaszkiewicz. Improved quick hypervolume algorithm. Computers and Operations Research, 90:72–83, 2018.
• A. Jaszkiewicz, R. Susmaga, and P. Zielniewicz. Approximate hypervolume calculation with guaranteed or confidence bounds. PPSN XVI, pages 215–228, 2020.
• Andrzej Jaszkiewicz and Piotr Zielniewicz. Quick Extreme Hypervolume Contribution Algorithm. GECCO ’21, page 412–420, 2021.
• Andrzej Jaszkiewicz and Piotr Zielniewicz. Greedy Decremental Quick Hypervolume Subset Selection Algorithms. PPSN XVII, 2022.
• Andrzej Jaszkiewicz and Piotr Zielniewicz. On-Line Quick Hypervolume Algorithm. GECCO ’23 Companion, page 371–374, 2023.
• Andrzej Jaszkiewicz and Piotr Zielniewicz. Exact calculation and properties of the R2 multiobjective quality indicator. IEEE Transactions on Evolutionary Computation, 2024 (early access).
• Andrzej Jaszkiewicz and Piotr Zielniewicz. Improving the efficiency of the distance-based hypervolume estimation using ND-tree. IEEE Transactions on Evolutionary Computation, 2024 (early access).
Andrzej Jaszkiewicz
Andrzej Jaszkiewicz received the M.Sc.Eng. and Ph.D. degrees from the Poznan University of Technology, Poznan, Poland, in 1990 and 1995, respectively. He is an author of multiple research papers in such journals and conferences as IEEE Transactions on Evolutionary Computation, European Journal of Operational Research, Computers and Operations Research, GECCO, PPSN.
He is currently a full Professor at the Institute of Computing Science, Faculty of Computing, Poznan University of Technology, and a member of Committee on Informatics of the Polish Academy of Sciences. His current research interests include multi- and single-objective optimization, combinatorial optimization, evolutionary algorithms, metaheuristics, artificial intelligence, and decision support.
Piotr Zielniewicz
Piotr Zielniewicz received the M.Sc.Eng. and Ph.D. degrees from the Poznan University of Technology, Poznan, Poland, in 1991 and 2000, respectively. He is an author of multiple research papers in such journals and conferences as IEEE Transactions on Evolutionary Computation, Expert Systems with Applications, European Journal of Operational Research, Computers and Operations Research, Decision Support Systems, GECCO, and PPSN.
He is currently an Assistant Professor at the Institute of Computing Science, Poznan University of Technology. His current research interests include evolutionary algorithms, metaheuristics, multiple criteria decision making, preference learning, multiple objective combinatorial optimization, and advanced software technologies.
Representations for Evolutionary Algorithms
Successful and efficient use of evolutionary algorithms depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.
Research in the last few years has identified a number of key concepts for analysing the influence of representation-operator combinations on the performance of evolutionary algorithms. Relevant concepts are the locality and redundancy of representations. Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor certain kinds of phenotypes.
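Locality can be made concrete with a classic toy experiment: count how often a single bit flip in the genotype moves the integer phenotype to an adjacent value, comparing the standard binary encoding with a reflected Gray encoding. This is one simple illustrative measure among several used in the literature:

```python
def gray_encode(v):
    """Reflected Gray code of a non-negative integer."""
    return v ^ (v >> 1)

def gray_decode(g):
    """Inverse of gray_encode."""
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v

def locality(decode, n_bits):
    """Fraction of single-bit-flip mutations whose phenotype lands on
    an adjacent integer -- a simple genotype-phenotype locality measure."""
    small = total = 0
    for g in range(2 ** n_bits):
        for k in range(n_bits):
            total += 1
            if abs(decode(g ^ (1 << k)) - decode(g)) == 1:
                small += 1
    return small / total
```

For 4 bits, the standard binary encoding (decode is the identity) scores 16/64 = 0.25, since only flips of the least significant bit move the phenotype by one, whereas the Gray encoding scores 30/64, because every pair of adjacent integers differs in exactly one Gray bit.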
The tutorial gives an overview of existing guidelines for representation design, illustrates different aspects of representations, briefly surveys models describing these aspects, and illustrates their relevance with practical examples.
It is expected that the participants have a basic understanding of EA principles.
Franz Rothlauf
Statistical Forward Planning Algorithms
Statistical Forward Planning (SFP) is a group of robust and general AI techniques that use a simulation model to adaptively search for effective sequences of actions in various games and other problems characterised as Markov Decision Processes or Extensive Form Games (and their partially observable versions, POMDPs and Factored Observation Stochastic Games).
SFP methods can operate without the need for prior training and can handle complex and dynamic environments. This tutorial will provide a tour of SFP from basics to finer points, complete with pointers to Python code (e.g. in OpenSpiel and other repositories). We will cover a number of powerful SFP algorithms including Monte Carlo Tree Search (MCTS), Rolling Horizon Evolutionary Algorithm (RHEA) and Monte Carlo Graph Search (MCGS), as well as handling partial observability with Information Set MCTS. We’ll also cover:
- the relationship between SFP algorithms and Counterfactual Regret Minimisation (CFR)
- incorporating policy and value functions (similar to AlphaZero)
- efficient exploration functions for flat reward landscapes
- handling combinatorial action spaces
Demonstrations will show that these algorithms can play a variety of video games surprisingly well and will provide insights into their working principles and behaviours. The tutorial will be suitable for those with no experience of SFP, but we also expect MCTS veterans to gain some fresh insights. We will conclude with a discussion of some of the most exciting challenges in the area.
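As a taste of how simple an SFP agent can be, here is a minimal rolling-horizon sketch in the spirit of RHEA: a (1+1)-style evolution of a single action sequence against a forward model. The toy environment and all parameter values are invented for illustration:

```python
import random

def rollout(state, actions, step, value):
    """Simulate an action sequence with the forward model."""
    for a in actions:
        state = step(state, a)
    return value(state)

def rhea_act(state, step, value, acts, horizon=6, iters=200, seed=0):
    """Evolve one action sequence by point mutation against the
    forward model and return its first action (rolling-horizon
    decision making: execute, advance the true state, replan)."""
    rng = random.Random(seed)
    seq = [rng.choice(acts) for _ in range(horizon)]
    best = rollout(state, seq, step, value)
    for _ in range(iters):
        cand = list(seq)
        cand[rng.randrange(horizon)] = rng.choice(acts)  # point mutation
        v = rollout(state, cand, step, value)
        if v >= best:
            seq, best = cand, v
    return seq[0]

# Toy forward model: walk on the integer line towards a goal at +5.
step = lambda s, a: s + a
value = lambda s: -abs(s - 5)
```

Calling `rhea_act(0, step, value, acts=(-1, +1))` plans six steps ahead and favours sequences ending near the goal; shift buffers, crossover, and population-based variants are straightforward extensions of this skeleton.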
Simon Lucas
Simon is a full professor of AI in the School of Electronic Engineering and Computer Science at Queen Mary University of London where he leads the Game AI Research Group. He was previously Head of School of EECS at QMUL. He recently spent two years as a research scientist / software engineer in the Simulation-Based Testing team at Meta, applying simulation-based AI to automated testing.
Simon was the founding Editor-in-Chief of the IEEE Transactions on Games and co-founded the IEEE Conference on Games, was VP-Education for the IEEE Computational Intelligence Society and has served in many conference chair roles. His research is focused on simulation-based AI (e.g. Monte Carlo Graph Search, Rolling Horizon Evolution) and sample efficient optimisation.
Structural bias in optimisation algorithms
Benchmarking heuristic algorithms is vital for understanding under which conditions and on what kind of problems certain algorithms perform well. Most benchmarks are performance-based, to test algorithm performance under a wide set of conditions. There are also resource- and behaviour-based benchmarks to test resource consumption and behaviour of algorithms. In this Tutorial, we focus on behaviour benchmarking of algorithms and more specifically we focus on Structural Bias (SB).
SB is a form of bias, inherent to an iterative heuristic optimiser, that predisposes the search towards certain regions of the search space regardless of the objective function, and thereby also affects the performance of the optimisation algorithm. Detecting whether, when and what type of SB occurs in a heuristic optimisation algorithm can provide guidance on what needs to be improved in these algorithms, besides helping to identify conditions under which such bias would not occur.
In the tutorial, we start by defining the problem of detecting and identifying different types of structural bias, including many visual examples. We then introduce state-of-the-art methods for bias detection. We follow up with SB results for several well-known and popular optimisation heuristics, give insights and show best practices to avoid SB in algorithm development. We conclude with a live demo of the Python-based BIAS toolkit which analyses a few well-known optimisation heuristics. Participants will be provided with links to live tools, necessary code and data.
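The flavour of such a detection test can be sketched in a few lines (a toy version of the idea only; the BIAS toolkit itself applies a battery of per-dimension statistical tests and this is not its API): run the optimiser on a fitness function that returns independent uniform random values, collect the best-of-run positions over many runs, and test them for uniformity. Any deviation must come from the algorithm, not from the problem. The `biased` operator below is an invented example that contracts towards the centre of the domain:

```python
import random

def ks_stat(sample):
    """One-sample Kolmogorov-Smirnov statistic against Uniform(0, 1)."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def final_position(move, rng, evals=100):
    """Best-of-run position on a purely random fitness function; any
    non-uniformity of these positions across runs signals SB."""
    x, best_x, best_f = rng.random(), None, -1.0
    for _ in range(evals):
        f = rng.random()                 # fitness independent of x
        if f > best_f:
            best_x, best_f = x, f
        x = move(x, rng)
    return best_x

unbiased = lambda x, rng: rng.random()   # fresh uniform sample each step
biased = lambda x, rng: 0.5 + 0.5 * (x - 0.5) + 0.02 * rng.gauss(0, 1)

rng = random.Random(42)
runs = 300
d_unb = ks_stat([final_position(unbiased, rng) for _ in range(runs)])
d_bia = ks_stat([final_position(biased, rng) for _ in range(runs)])
crit = 1.36 / runs ** 0.5                # approx. 5% KS critical value
```

The centre-contracting operator produces a KS statistic far above the critical value, exposing its structural bias, while uniform sampling should not.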
Aims and learning objectives of this tutorial:
- convey the importance of benchmarking heuristic algorithms to comprehend their performance across different problem scenarios;
- concentrate on behaviour benchmarking, specifically delving into the concept of Structural Bias in iterative heuristic optimisers;
- enable participants to detect, analyse, and understand the occurrence and impact of Structural Bias in heuristic optimisation algorithms;
- provide insights into how detecting SB can lead to improved algorithm development and refinement;
- showcase the functionality and usage of developed toolkits as practical tools for detecting and addressing Structural Bias;
- share findings from analysing structural bias across well-known optimisation heuristics, offering insights and patterns.
Prerequisite Knowledge of Audience:
- familiarity with standard heuristic optimisation algorithms and benchmarking practices accepted in the community
- ability to follow programming examples in Python
Main references:
1 B. van Stein, D. Vermetten, F. Caraffini, A.V. Kononova, “Deep BIAS: detecting structural bias using explainable AI”, GECCO '23 Companion: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 455–458, Lisbon, Portugal, 15-19 July 2023, doi: 10.1145/3583133.3590551, preprint: arXiv:abs/2304.01869
2 D. Vermetten, B. van Stein, F. Caraffini, L.L. Minku, A.V. Kononova, “BIAS: A Toolbox for Benchmarking Structural Bias in the Continuous Domain", IEEE Transactions on Evolutionary Computation, vol. 26 (6), pp. 1380–1393, 2022, doi: 10.1109/TEVC.2022.3189848, preprint: 10.36227/techrxiv.16594880.v1
3 A.V. Kononova, D.W. Corne, P. De Wilde, V. Shneer, F. Caraffini, “Structural bias in population-based algorithms”, Information Sciences, Elsevier, vol. 298, pp. 468–490, Elsevier, 2015, doi: 10.1016/j.ins.2014.11.035, arXiv:abs/1408.5350
Anna V Kononova
Niki van Stein
Niki van Stein received her PhD degree in Computer Science in 2018, from the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands. From 2018 until 2021 she was a Postdoctoral Researcher at LIACS, Leiden University and she is currently an Assistant Professor at LIACS. Her research interests lie in explainable AI for EC and ML, surrogate-assisted optimisation and surrogate-assisted neural architecture search, usually applied to complex industrial applications.
Theory and Practice of Population Diversity in Evolutionary Computation
Divergence of character is a cornerstone of natural evolution. In contrast, evolutionary optimization processes are plagued by an endemic lack of population diversity: all candidate solutions eventually crowd the very same areas of the search space. The problem is usually labeled with the oxymoron “premature convergence” and has different consequences across applications, almost all deleterious. At the same time, case studies from theoretical runtime analyses irrefutably demonstrate the benefits of diversity.
This tutorial will give an introduction to the area of “diversity promotion”: we will define the term “diversity” in the context of Evolutionary Computation and show how practitioners have tried, with mixed results, to promote it.
Then, we will analyze the benefits brought by population diversity in specific contexts, namely global exploration and enhancing the power of crossover. To this end, we will survey recent results from rigorous runtime analysis on selected problems. The presented analyses rigorously quantify the performance of evolutionary algorithms in the light of population diversity, laying the foundation for a rigorous understanding of how search dynamics are affected by the presence or absence of diversity and the introduction of diversity mechanisms.
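One common way to make “diversity” operational is a genotypic measure such as the average pairwise Hamming distance, which practitioners monitor to detect the convergence described above (a minimal sketch; many phenotypic and entropy-based measures exist as well):

```python
def avg_pairwise_hamming(pop):
    """Average pairwise Hamming distance of a binary population:
    0 means the population has fully converged to clones, while the
    genome length is the maximum for a pair of complementary strings."""
    n = len(pop)
    total = sum(sum(a != b for a, b in zip(p, q))
                for i, p in enumerate(pop) for q in pop[i + 1:])
    return total / (n * (n - 1) / 2)
```

A steady fall of this value over generations, long before the optimum is reached, is the usual empirical signature of premature convergence, and it is exactly the quantity that explicit diversity mechanisms try to keep bounded away from zero.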
Dirk Sudholt
Giovanni Squillero
Tutorial: A Gentle Introduction to Theory (for Non-Theoreticians)
This tutorial addresses GECCO attendees who do not regularly use
theoretical methods in their research. For these, we give a smooth
introduction to the theory of evolutionary computation.
Complementing other introductory theory tutorials, we do not discuss
mathematical methods or particular results, but explain
- what the theory of evolutionary algorithms aims at,
- how theoretical research in evolutionary computation is conducted,
- how to interpret statements from the theory literature,
- what the most important theory contributions are, and
- what the theory community is trying to understand most at the moment.
Benjamin Doerr
Benjamin Doerr is a full professor at the French Ecole Polytechnique. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory of both problem-specific algorithms and randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for existing evolutionary algorithms, the determination of optimal parameter values, and complexity theoretic results. Benjamin's recent focus is the theory-guided design of novel operators, on-the-fly parameter choices, and whole new evolutionary algorithms, hoping that theory not only explains, but also develops evolutionary computation.
Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO and served as its co-chair 2007-2009, 2014, and 2023. He is a member of the editorial boards of "Artificial Intelligence", "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science", and three journals on classic algorithms theory. Together with Anne Auger, he edited the first book focused on theoretical aspects of evolutionary computation ("Theory of Randomized Search Heuristics", World Scientific 2011). Together with Frank Neumann, he is an editor of the recent book "Theory of Evolutionary Computation - Recent Developments in Discrete Optimization" (Springer 2020).
What You Always Wanted to Know About Evolution Strategies, But Never Dared to Ask
While Evolution Strategies (ES) are widely known as one of the streams of
evolutionary computation and are now regarded as a competitive
alternative to standard learning techniques in Reinforcement Learning,
most people associate ES with the covariance matrix adaptation evolution
strategy (CMA-ES). However, there is more than just this particular
evolutionary algorithm designed for unconstrained real-valued search spaces.
This introductory tutorial provides a broader perspective, stressing that
the design philosophy of Evolution Strategies is not restricted to a
specific search space, such as unconstrained real-valued optimization, but
also covers discrete and combinatorial search spaces. This philosophy can
be best understood from the ES history that started from the evolution of
material objects - nowadays often referred to as hardware-in-the-loop
evolutionary optimization. That is, evolution is done on the "phenotype."
Accepting the constraints involved in such optimizations, one naturally can
derive design principles for mutation and recombination operators and the
control of those operators by self-adaptation - one of the great inventions
of ES. Special emphasis is put on a vivid understanding of how the ES
explores the search spaces. Recent findings will be presented and supported
by live experiments to explain the ES's ability to locate global optima in
landscapes with a huge number of local optima. The tutorial will also
investigate the reasons why ESs are now regarded as a scalable alternative
to Reinforcement Learning.
The tutorial will include a live computer experiment demonstrating the
relevance of the design and working principles discussed. This tutorial will
be on an introductory level requiring only a minimum of maths.
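Self-adaptation, mentioned above as one of the great inventions of ES, can be illustrated with a minimal (1, λ)-ES in which each offspring first mutates its own step size and selection then acts on strategy and object parameters together. This is a textbook-style sketch on the sphere function; all parameter values are illustrative:

```python
import math
import random

def one_comma_lambda_es(dim=5, lam=10, gens=200, seed=3):
    """(1, lambda)-ES with log-normal self-adaptation of the mutation
    strength sigma; returns the final sphere fitness."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0
    tau = 1.0 / math.sqrt(dim)            # self-adaptation learning rate
    sphere = lambda v: sum(c * c for c in v)
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            s = sigma * math.exp(tau * rng.gauss(0, 1))  # mutate sigma first
            y = [c + s * rng.gauss(0, 1) for c in x]     # then the object vars
            offspring.append((sphere(y), y, s))
        # Comma selection: the best offspring replaces the parent,
        # carrying its own step size along.
        _, x, sigma = min(offspring, key=lambda t: t[0])
    return sphere(x)
```

Because offspring with well-scaled step sizes tend to win the selection, sigma tracks the distance to the optimum without any external control rule, producing the log-linear convergence typical of ES; with a fixed sigma the same loop would stall at a residual error.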
Hans-Georg Beyer
Hans-Georg Beyer is best known for his theoretical analyses and the design
of Evolution Strategies based on the stochastic dynamical systems approach.
Dr. Beyer received the Diploma degree in Theoretical Electrical Engineering
from the Ilmenau Technical University, Germany, in 1982 and the Ph.D. in
physics from Bauhaus-University Weimar, Weimar, Germany, in 1989,
and the Habilitation degree in computer science from the University of
Dortmund, Dortmund, Germany, in 1997.
He was an R&D Engineer with the Reliability Physics Department,
VEB Gleichrichterwerk, Stahnsdorf, Germany, from 1982 to 1984.
From 1984 to 1989, he was a Research and Teaching Assistant and
later on Post-Doctoral Research Fellow with the Physics Department and
the Computer Science Department, Bauhaus-University Weimar. From 1990 to
1992, he was a Senior Researcher with the Electromagnetic Fields Theory
Group, Darmstadt University of Technology, Darmstadt, Germany.
From 1993 to 2004, he was with the Computer Science Department, University
of Dortmund. In 1997, he became a DFG (German Research Foundation)
Heisenberg Fellow. He was leader of a working group and a Professor of
Computer Science from 2003 to 2004. Since 2004 he has been professor
with the Vorarlberg University of Applied Sciences, Dornbirn, Austria.
He authored the book "The Theory of Evolution Strategies"
(Heidelberg: Springer-Verlag, 2001) and authored/coauthored about 200 papers.
Dr. Beyer was the Editor-in-Chief of the MIT Press Journal "Evolutionary
Computation" from 2010 to 2016. He was an Associate Editor for the IEEE
"Transactions on Evolutionary Computation" from 1997 to 2021 and is a member
of the advisory board of the Elsevier journal "Swarm and Evolutionary
Computation."