List of Workshops
- AABOH — Analysing algorithmic behaviour of optimisation heuristics
- BBOB 2025 — Workshop on Black Box Optimization Benchmarking 2025
- BENCH@GECCO25 — Good Benchmarking Practices for Evolutionary Computation
- DTEO — Decomposition Techniques in Evolutionary Optimization
- EC + DM — Evolutionary Computation and Decision Making
- ECADA 2025 — 15th Workshop on Evolutionary Computation for the Automated Design of Algorithms
- ECXAI — Evolutionary Computation and Explainable AI
- EGM — Evolutionary Generative Models
- EvoOSS — Open Source Software for Evolutionary Computation
- EvoSelf — Evolving self-organisation
- GGP — Graph-based Genetic Programming
- IAM 2025 — 10th Workshop on Industrial Applications of Metaheuristics
- IWERL — 28th International Workshop on Evolutionary Rule-based Machine Learning
- LAHS 2025 — Landscape-Aware Heuristic Search
- LLMfwEC — Large Language Models for and with Evolutionary Computation
- NEWK — Neuroevolution at work
- QuantOpt — Quantum Optimization
- SAEOpt — Workshop on Surrogate-Assisted Evolutionary Optimisation
- SymReg — Symbolic Regression
AABOH — Analysing algorithmic behaviour of optimisation heuristics
Summary
Optimisation and machine learning tools are among the most widely used tools in the modern world of omnipresent computing devices. Yet, while both rely on search processes (the search for a solution, or for a model able to produce solutions), their dynamics are not fully understood. This scarcity of knowledge about the inner workings of heuristic methods is largely attributed to the complexity of the underlying processes, which cannot be subjected to a complete theoretical analysis. However, it is also partially due to superficial experimental setups and, therefore, superficial interpretation of numerical results. In fact, researchers and practitioners typically look only at the final result produced by these methods, while a great deal of information generated during the run is discarded.

In light of these considerations, it is becoming evident that such information can be useful and that design principles should be defined that allow for online or offline analysis of the processes taking place in the population and of their dynamics. Hence, with this workshop, we call for both theoretical and empirical contributions that identify the desired features of optimisation and machine learning algorithms, quantify the importance of such features, spot the presence of intrinsic structural biases and other undesired algorithmic flaws, and study transitions in algorithmic behaviour in terms of convergence, anytime behaviour, traditional and alternative performance measures, robustness, exploration vs exploitation balance, diversity, algorithmic complexity, etc. The goal is to gather the most recent advances that fill the aforementioned knowledge gap and to disseminate the current state of the art within the research community. We therefore encourage submissions exploiting carefully designed experiments or data-heavy approaches that can help analyse primary algorithmic behaviours and model the internal dynamics that cause them.
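The point about information discarded during the run can be made concrete with a minimal Python sketch (the objective and all names are illustrative, not tied to any workshop contribution): a (1+1)-EA that records its full best-so-far trajectory, enabling anytime-performance analysis rather than inspection of the final value only.

```python
import random

def sphere(x):
    """Illustrative objective: minimise the sum of squares."""
    return sum(v * v for v in x)

def one_plus_one_ea(dim=5, budget=2000, sigma=0.3, seed=1):
    """(1+1)-EA with Gaussian mutation that records the full best-so-far
    trajectory instead of returning only the final value."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    f_parent = sphere(parent)
    trajectory = [f_parent]           # one entry per evaluation
    for _ in range(budget - 1):
        child = [v + rng.gauss(0, sigma) for v in parent]
        f_child = sphere(child)
        if f_child <= f_parent:       # elitist replacement
            parent, f_parent = child, f_child
        trajectory.append(f_parent)
    return trajectory

traj = one_plus_one_ea()
# The trajectory supports anytime analysis: every prefix tells us the
# quality reachable under a smaller budget, and its shape (plateaus,
# jumps) reveals behaviour the final value alone cannot.
print(f"after 100 evals: {traj[99]:.3f}; after 2000 evals: {traj[-1]:.3f}")
```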
Workshop format: invited talks, paper presentations, and a panel discussion.
Organizers
Anna V Kononova
Niki van Stein
Niki van Stein received her PhD degree in Computer Science in 2018, from the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands. From 2018 until 2021 she was a Postdoctoral Researcher at LIACS, Leiden University and she is currently an Assistant Professor at LIACS. Her research interests lie in explainable AI for EC and ML, surrogate-assisted optimisation and surrogate-assisted neural architecture search, usually applied to complex industrial applications.
Daniela Zaharie
Fabio Caraffini
Thomas Bäck
BBOB 2025 — Workshop on Black Box Optimization Benchmarking 2025
Summary
Benchmarking optimization algorithms is a crucial aspect for their design and practical application. Since 2009, the Black Box Optimization Benchmarking Workshop has served as a place for discussing general recent advances in benchmarking practices and concrete results from benchmarking experiments with a large variety of (black box) optimizers.
The Comparing Continuous Optimizers platform (COCO [1], https://github.com/numbbo/coco) was developed in this context to support algorithm developers and practitioners alike by automating benchmarking experiments for black box optimization algorithms on single- and bi-objective, unconstrained and constrained, continuous and mixed-integer problems in exact and noisy, as well as expensive and non-expensive scenarios.
We welcome *all contributions to black box optimization benchmarking* for the 2025 edition of the workshop, although we would like to put a particular emphasis on:
1) Benchmarking algorithms for problems with underexplored properties (for example mixed-integer, noisy, constrained, multiobjective, ...)
2) Reproducing previous benchmarking results, as well as examining performance improvements or degradations in algorithm implementations over time (for example with the help of results from earlier BBOB submissions).
Submissions are not limited to the test suites provided by COCO. For convenience, the source code in various languages (C/C++, Matlab/Octave, Java, Python, and Rust), together with all data sets from previous BBOB contributions, is provided as an automated benchmarking pipeline to reduce the time spent producing results for:
- single-objective unconstrained problems (the "bbob" test suite)
- single-objective unconstrained problems with noise ("bbob-noisy")
- biobjective unconstrained problems ("bbob-biobj")
- large-scale single-objective problems ("bbob-largescale")
- mixed-integer single- and bi-objective problems ("bbob-mixint" and "bbob-biobj-mixint")
- almost linearly constrained single-objective problems ("bbob-constrained")
- box-constrained problems ("sbox-cost")
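The shape of such an automated benchmarking pipeline can be sketched in a few lines of stdlib Python. Note that this is only an illustration of the run-everything-and-log idea; the actual COCO (cocoex) interface, suites, and observers differ, and the problem and function names below are assumptions made for the example.

```python
import math
import random

# A stand-in "suite" of two black-box problems. The real COCO (cocoex)
# interface differs; this only illustrates the shape of an automated
# benchmarking loop that logs best-so-far values for later analysis.
SUITE = {
    "sphere": lambda x: sum(v * v for v in x),
    "rastrigin": lambda x: 10 * len(x)
    + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x),
}

def random_search(f, dim, budget, rng):
    """Baseline optimizer: returns the best-so-far value per evaluation."""
    best, history = math.inf, []
    for _ in range(budget):
        best = min(best, f([rng.uniform(-5, 5) for _ in range(dim)]))
        history.append(best)
    return history

def run_benchmark(dim=3, budget=500, instances=5):
    """Run the optimizer on every problem and instance, collecting the
    per-run logs an automated pipeline would write to disk."""
    return {name: [random_search(f, dim, budget, random.Random(i))
                   for i in range(instances)]
            for name, f in SUITE.items()}

logs = run_benchmark()
for name, runs in logs.items():
    finals = sorted(r[-1] for r in runs)
    print(f"{name}: median final value = {finals[len(finals) // 2]:.3f}")
```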
We especially encourage submissions exploring algorithms from beyond the evolutionary computation community, as well as papers analyzing COCO’s extensive, publicly available algorithm datasets (see https://numbbo.github.io/data-archive/).
For details, please see the separate BBOB-2025 web page at https://numbbo.github.io/workshops/BBOB-2025/index.html (available upon acceptance of the workshop)
[1] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. "COCO: A platform for comparing continuous optimizers in a black-box setting." Optimization Methods and Software (2020): 1-31.
Organizers
Anne Auger
Dimo Brockhoff
Tobias Glasmachers
Nikolaus Hansen
Nikolaus Hansen is a research director at Inria and the Institut Polytechnique de Paris, France. After studying medicine and mathematics, he received a PhD in civil engineering from the Technical University Berlin and the Habilitation in computer science from the University Paris-Sud. His main research interests are stochastic search algorithms in continuous, high-dimensional search spaces, learning and adaptation in evolutionary computation, and meaningful assessment and comparison methodologies. His research is driven by the goal to develop algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).
Olaf Mersmann
Tea Tušar
BENCH@GECCO25 — Good Benchmarking Practices for Evolutionary Computation
Summary
Benchmarking plays a vital role in understanding the performance and search behaviour of sampling-based optimization techniques such as evolutionary algorithms. This workshop continues our series of workshops on good benchmarking practices, held at different conferences in the EC context since 2020. The core theme is benchmarking evolutionary computation methods and related sampling-based optimization heuristics, but the focus changes each year.
For GECCO 2025, our focus will be on **“Benchmarking for humans and machines - Differences and Similarities”**.
Many currently popular benchmarks are designed to be interpretable by humans with specific questions in mind, for example whether an algorithm can exploit separability or how it handles disconnected Pareto fronts. As a result, they do not attempt to cover the full space of interesting problems. However, when benchmarks are used in the context of automated algorithm selection, algorithm configuration, and similar machine learning tasks, data requirements may change, as the ability for manual interpretation is no longer a restriction.
At the same time, benchmarking results in publications are often presented as aggregates without heeding the original intent of the benchmark designer. So even without the involvement of machines, the benchmarking data is typically suboptimally presented and interpreted.
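A typical example of such an aggregate is a running-time ECDF: the fraction of (run, target) pairs solved within a given budget. The sketch below (stdlib Python; the data and function name are illustrative) shows how easily per-problem behaviour disappears into one curve.

```python
def ecdf_curve(runs, targets, budgets):
    """Fraction of (run, target) pairs solved within each budget -- a
    typical aggregate that, presented alone, hides per-problem behaviour."""
    pairs = [(run, t) for run in runs for t in targets]
    return [sum(1 for run, t in pairs if min(run[:b]) <= t) / len(pairs)
            for b in budgets]

# Three illustrative best-so-far histories (one value per evaluation).
runs = [[10, 5, 2, 1, 0.5],
        [10, 9, 8, 2, 2],
        [10, 10, 10, 10, 0.1]]
targets = [8, 4, 1]
print(ecdf_curve(runs, targets, budgets=[1, 3, 5]))  # → [0.0, 0.333..., 0.888...]
```

The third run contributes nothing until its very last evaluation, yet the aggregated curve gives no hint of that stalling behaviour.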
In this workshop, we will be addressing the following questions:
- What are key similarities and differences between benchmarks designed for human vs. machine interpretation?
- Are there inherent differences between human- and machine-interpretable benchmarking pipelines that require the experimental setup (beyond the size of the generated data sets) to differ?
- How can we best support the analysis of benchmarking data, both for manual interpretation and for machine-based learning?
Organizers
Vanessa Volz
Carola Doerr
Boris Naujoks
Mike Preuss
Olaf Mersmann
Pascal Kerschke
DTEO — Decomposition Techniques in Evolutionary Optimization
Summary
Decomposition-based optimization involves transforming a complex problem into multiple smaller, more manageable sub-problems that can be solved cooperatively. The evolutionary computing community actively develops methods to explicitly or implicitly design decomposition across four key facets: (i) environmental parameters, (ii) decision variables, (iii) objective functions, and (iv) available computing resources.
This workshop aims to bring together recent advances in the design, analysis, and understanding of evolutionary decomposition techniques. It also provides a platform to discuss challenges in applying decomposition to increasingly large and complex optimization tasks—such as problems with many variables or objectives, multi-modal problems, simulation-based optimization, and uncertain scenarios—while considering modern large-scale computing environments, including massively parallel and decentralized systems.
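Decomposition in objective space can be illustrated with a minimal stdlib sketch of weighted-sum scalarisation, where each weight vector defines one scalar sub-problem (the toy objectives and the use of random search as the inner solver are assumptions for the example, not a method advocated by the workshop):

```python
import random

def f1(x):
    return x[0] ** 2          # illustrative objective 1

def f2(x):
    return (x[0] - 2) ** 2    # illustrative objective 2

def scalarise(x, w):
    """Weighted-sum scalarisation: each weight vector defines one
    single-objective sub-problem of the original bi-objective problem."""
    return w[0] * f1(x) + w[1] * f2(x)

def solve_subproblem(w, budget=3000, seed=0):
    """Solve one scalar sub-problem; random search stands in for any
    single-objective evolutionary algorithm."""
    rng = random.Random(seed)
    best_x, best_v = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(-1, 3)]
        v = scalarise(x, w)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Decompose into five scalar sub-problems (solved independently here);
# their solutions approximate points along the trade-off front.
weights = [(i / 4, 1 - i / 4) for i in range(5)]
front = [solve_subproblem(w, seed=i) for i, w in enumerate(weights)]
print([round(x[0], 2) for x in front])
```

Methods such as MOEA/D follow this decomposition idea while additionally letting neighbouring sub-problems cooperate during the search.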
The workshop focuses on, but is not limited to, the following topics:
- Large-scale evolutionary decomposition: Decomposition in decision space, gray-box methods, co-evolutionary algorithms, grouping techniques, and cooperative methods for constraint handling.
- Many- and multi-objective decomposition: Aggregation and scalarization methods, hybrid island-based approaches, and (sub-)population decomposition and mapping.
- Parallel and distributed evolutionary decomposition: Scalability across decision and objective spaces, decentralized divide-and-conquer strategies, distributed computing efforts, and deployment on heterogeneous, large-scale parallel platforms.
- New general-purpose decomposition techniques: Machine-learning-assisted decomposition, online and offline configuration, search region decomposition, use of multiple surrogates, and parallel approaches for expensive optimization.
- Emerging applications of evolutionary techniques based on decomposition.
- Understanding and benchmarking decomposition techniques.
- Software tools and libraries for evolutionary decomposition.
In general, this workshop encourages both theoretical and practical contributions, focusing on developmental, implementation, and applied aspects of decomposition techniques in evolutionary optimization.
Organizers
Bilel Derbel
Ke Li
Xiaodong Li
Saúl Zapotecas-Martínez
Qingfu Zhang
EC + DM — Evolutionary Computation and Decision Making
Summary
Solving real-world optimisation problems typically involves an expert or decision-maker. Decision making (DM) tools have been found useful in many such applications, e.g., health care, education, environment, transportation, business, and production. In recent years, there has also been growing interest in merging Evolutionary Computation (EC) and DM techniques for several applications. This has raised, among others, the need to account for explainability, fairness, ethics, and privacy aspects in optimisation and DM. This workshop will showcase research at the interface of EC and DM.
The workshop on Evolutionary Computation and Decision Making (EC + DM), to be held at GECCO 2025, aims to promote research on theory and applications in the field. Topics of interest include:
• Interactive multiobjective optimisation or decision-maker in the loop
• Visualisation to support DM in EC
• Aggregation/trade-off operators & algorithms to integrate decision maker preferences
• Fuzzy logic-based DM techniques
• Bayesian and other DM techniques
• Interactive multiobjective optimisation for (computationally) expensive problems
• Using surrogates (or metamodels) in DM
• Hybridisation of EC and DM
• Scalability in EC and DM
• DM and machine learning
• DM in a big data context
• DM in real-world applications
• Use of psychological tools to aid the decision-maker
• Fairness, ethics and societal considerations in EC and DM
• Explainability in EC and DM
• Accounting for trust and security in EC and DM
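One classical way to integrate decision-maker preferences, touched on in the topics above, is an achievement scalarising function built from a reference point. The sketch below is a minimal stdlib illustration (the toy bi-objective problem, the grid search standing in for an EMO algorithm, and all names are assumptions for the example):

```python
def objectives(x):
    """Illustrative bi-objective problem: minimise both objectives."""
    return (x ** 2, (x - 2) ** 2)

def achievement(obj, reference, weights):
    """Chebyshev-style achievement scalarising function: smaller is
    better for solutions close to (or dominating) the reference point."""
    return max(w * (f - r) for f, r, w in zip(obj, reference, weights))

def best_for_reference(reference, weights=(1.0, 1.0), steps=2001):
    """Grid search standing in for an EMO algorithm's inner loop."""
    xs = [-1 + 4 * i / (steps - 1) for i in range(steps)]
    return min(xs, key=lambda x: achievement(objectives(x), reference, weights))

# A decision-maker asking for f1 ≈ 1 and f2 ≈ 1 steers the search
# toward the middle of the trade-off curve.
print(round(best_for_reference((1.0, 1.0)), 2))
```

Changing the reference point moves the returned solution along the Pareto front, which is exactly how interactive methods let the decision-maker guide the optimisation.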
Organizers
Tinkle Chugh
Richard Allmendinger
Ana B. Ruiz
ECADA 2025 — 15th Workshop on Evolutionary Computation for the Automated Design of Algorithms
Summary
Mode: hybrid
Scope
The main objective of this workshop is to discuss hyper-heuristics and algorithm configuration methods for the automated generation and improvement of algorithms, with the goal of producing solutions (algorithms) that apply to multiple instances of a problem domain. The areas of application of these methods include optimization, data mining, and machine learning.
Automatically generating and improving algorithms by means of other algorithms has been the goal of several research fields, including artificial intelligence in the early 1950s, genetic programming since the early 1990s, and more recently automated algorithm configuration and hyper-heuristics. The term hyper-heuristics generally describes meta-heuristics applied to a space of algorithms. While genetic programming has most famously been used to this end, other evolutionary algorithms and meta-heuristics have successfully been used to automatically design novel (components of) algorithms. Automated algorithm configuration grew from the necessity of tuning the parameter settings of meta-heuristics and it has produced several powerful (hyper-heuristic) methods capable of designing new algorithms by either selecting components from a flexible algorithmic framework or recombining them following a grammar description.
Although most evolutionary algorithms are designed to generate specific solutions to a given instance of a problem, one of the defining goals of hyper-heuristics is to produce solutions that solve more generic problems. For instance, while there are many examples of evolutionary algorithms for evolving classification models in data mining and machine learning, a genetic programming hyper-heuristic has been employed to create a generic classification algorithm which in turn generates a specific classification model for any given classification dataset, in any given application domain.

In other words, the hyper-heuristic operates at a higher level of abstraction compared to how most search methodologies are currently employed; i.e., it is searching the space of algorithms as opposed to directly searching in the problem solution space, raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm.

In contrast to standard genetic programming, which attempts to build programs from scratch from a typically small set of atomic functions, hyper-heuristic methods specify an appropriate set of primitives (e.g., algorithmic components) and allow evolution to combine them in novel ways as appropriate for the targeted problem class. While this allows searches in constrained search spaces based on problem knowledge, it does not limit the generality of this approach as the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.
As meta-heuristics are themselves a type of algorithm, they too can be automatically designed employing hyper-heuristics. For instance, in 2007, genetic programming was used to evolve mate selection in evolutionary algorithms; in 2011, linear genetic programming was used to evolve crossover operators; more recently, genetic programming was used to evolve complete black-box search algorithms, SAT solvers, and FuzzyART category functions. Moreover, hyper-heuristics may be applied before deploying an algorithm (offline) or while problems are being solved (online), or even continuously learn by solving new problems (life-long). Offline and life-long hyper-heuristics are particularly useful for real-world problem solving where one can afford a large amount of a priori computational time to subsequently solve many problem instances drawn from a specified problem domain, thus amortizing the a priori computational time over repeated problem-solving. Recently, the design of multi-objective evolutionary algorithm components was automated.
Very little is known yet about the foundations of hyper-heuristics, such as the impact of the meta-heuristic exploring algorithm space on the performance of the thus automatically designed algorithm. An initial study compared the performance of algorithms generated by hyper-heuristics powered by five major types of genetic programming. Another avenue for research is investigating the potential performance improvements obtained through the use of asynchronous parallel evolution to exploit the typical large variation in fitness evaluation times when executing hyper-heuristics.
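The core idea of searching a space of algorithms rather than a space of solutions can be shown in a small stdlib sketch. Everything here (the OneMax problem class, the two mutation operators, the meta-level random search) is an illustrative assumption, far simpler than the hyper-heuristics discussed above:

```python
import random

# Primitive components: the mutation operators from which candidate
# algorithms are assembled (both operators are illustrative).
def flip_one(bits, rng):
    """Flip a single random bit."""
    bits = bits[:]
    bits[rng.randrange(len(bits))] ^= 1
    return bits

def flip_three(bits, rng):
    """Flip three distinct random bits."""
    bits = bits[:]
    for i in rng.sample(range(len(bits)), 3):
        bits[i] ^= 1
    return bits

OPERATORS = [flip_one, flip_three]

def run_algorithm(op_sequence, n=30, budget=200, seed=0):
    """Execute a candidate algorithm (a cyclic sequence of operators)
    on one OneMax instance; returns the final fitness (higher is better)."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    fx = sum(x)
    for t in range(budget):
        y = op_sequence[t % len(op_sequence)](x, rng)
        if sum(y) >= fx:
            x, fx = y, sum(y)
    return fx

def hyper_heuristic(candidates=30, seed=1):
    """Meta-level random search over the space of algorithms: each
    candidate is a length-4 operator sequence, scored over several
    instances of the problem class rather than a single instance."""
    rng = random.Random(seed)
    best_seq, best_score = None, -1.0
    for _ in range(candidates):
        seq = [rng.choice(OPERATORS) for _ in range(4)]
        score = sum(run_algorithm(seq, seed=s) for s in range(5)) / 5
        if score > best_score:
            best_seq, best_score = seq, score
    return [op.__name__ for op in best_seq], best_score

names, score = hyper_heuristic()
print("best operator sequence:", names, "mean fitness:", round(score, 1))
```

Because each candidate is scored across multiple instances, the output is an algorithm for the problem class, not a solution to one instance, which is the distinguishing feature of hyper-heuristics described above.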
Content
We welcome original submissions on all aspects of Evolutionary Computation for the Automated Design of Algorithms, in particular, evolutionary computation methods and other hyper-heuristics for the automated design, generation or improvement of algorithms that can be applied to any instance of a target problem domain. Relevant methods include methods that evolve whole algorithms given some initial components as well as methods that take an existing algorithm and improve it or adapt it to a specific domain. Another important aspect of automated algorithm design is the definition of the primitives that constitute the search space of hyper-heuristics. These primitives should capture the knowledge of human experts about useful algorithmic components (such as selection, mutation and recombination operators, local searches, etc.) and, at the same time, allow the generation of new algorithm variants. Examples of the application of hyper-heuristics, including genetic programming and automatic configuration methods, to such frameworks of algorithmic components, are of interest to this workshop, as well as the (possibly automatic) design of the algorithmic components themselves and the overall architecture of metaheuristics. Therefore, relevant topics include (but are not limited to):
- Applications of hyper-heuristics, including general-purpose automatic algorithm configuration methods for the design of metaheuristics, in particular evolutionary algorithms, and other algorithms for application domains such as optimization, data mining, machine learning, image processing, engineering, cyber security, critical infrastructure protection, and bioinformatics.
- Novel hyper-heuristics, including but not limited to genetic programming-based approaches, automatic configuration methods, and online, offline and life-long hyper-heuristics, with the stated goal of designing or improving the design of algorithms.
- Empirical comparison of hyper-heuristics.
- Theoretical analyses of hyper-heuristics.
- Studies on primitives (algorithmic components) that may be used by hyper-heuristics as the search space when automatically designing algorithms.
- Automatic selection/creation of algorithm primitives as a preprocessing step for the use of hyper-heuristics.
- Analysis of the trade-off between generality and effectiveness of different hyper-heuristics or of algorithms produced by a hyper-heuristic.
- Analysis of the most effective representations for hyper-heuristics (e.g., Koza style Genetic Programming versus Cartesian Genetic Programming).
- Asynchronous parallel evolution of hyper-heuristics.
Organizers
Daniel Tauritz
John R. Woodward
Emma Hart
ECXAI — Evolutionary Computation and Explainable AI
Summary
‘Explainable AI’ (XAI) is an umbrella term covering research on methods designed to provide human-understandable explanations of the decisions made and knowledge captured by AI models; it is currently a very active research area within AI. Evolutionary Computation (EC) draws on concepts found in nature to drive the development of evolution-based systems such as genetic algorithms and evolution strategies. In these, as in other nature-inspired metaheuristics such as swarm intelligence, the path to a solution is driven by stochastic processes. This creates barriers to explainability: algorithms may return different solutions when re-run from the same input, and technical descriptions of these processes often hinder end-user understanding and acceptance. On the other hand, XAI methods very often require the fitting of some kind of model, and hence EC methods have the potential to play a role in this area. This workshop will focus on the bidirectional interplay between XAI and EC: how XAI can help EC research, and how EC can be used within XAI methods.
Recent growth in the adoption of black-box solutions, including EC-based methods, in domains such as medical diagnosis, manufacturing, and transport & logistics has led to greater attention being paid to generating explanations and to their accessibility for end-users. This increased attention has helped create a fertile environment for applying XAI techniques in the EC domain, for both end-user- and researcher-focused explanation generation. Furthermore, many approaches to XAI in machine learning are based on search algorithms (e.g., Local Interpretable Model-Agnostic Explanations / LIME) that have the potential to draw on the expertise of the EC community. Finally, many of the broader questions (such as what kinds of explanations are most appealing or useful to end users) are faced by XAI researchers in general.
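The search-based flavour of such explanation methods can be sketched in stdlib Python. This is a LIME-flavoured illustration only; the actual LIME library works differently in detail, and the stand-in black-box model and all names below are assumptions for the example:

```python
import math
import random

def black_box(x):
    """Stand-in model to be explained (illustrative): linear in x[0],
    nonlinear in x[1], and independent of x[2]."""
    return 3.0 * x[0] + 2.0 * math.tanh(x[1])

def local_explanation(f, x0, n_samples=500, sigma=0.5, seed=0):
    """LIME-flavoured sketch: perturb x0, weight samples by proximity,
    and fit one weighted least-squares slope per feature as a local,
    interpretable surrogate of the black box."""
    rng = random.Random(seed)
    f0 = f(x0)
    slopes = []
    for j in range(len(x0)):
        num = den = 0.0
        for _ in range(n_samples):
            x = [v + rng.gauss(0, sigma) for v in x0]
            dist2 = sum((a - b) ** 2 for a, b in zip(x, x0))
            w = math.exp(-dist2)  # proximity kernel: near samples count more
            d = x[j] - x0[j]
            num += w * d * (f(x) - f0)
            den += w * d * d
        slopes.append(num / den)
    return slopes

slopes = local_explanation(black_box, [0.0, 0.0, 0.0])
print("local feature effects:", [round(s, 2) for s in slopes])
```

The perturb-weight-fit loop is itself a sampling-based search, which is why EC expertise transfers naturally to this class of XAI methods.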
From an application perspective, important questions have arisen for which XAI may be crucial: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? The goal of XAI and related research is to develop methods to interrogate AI processes with the aim of answering these questions. This can support decision-makers while also building trust in AI decision-support through more readily understandable explanations.
We seek contributions on a range of topics relating evolutionary computation (in all its forms) with explainability. Topics of interest include but are not limited to:
· Interpretability vs explainability in EC and their quantification
· Landscape analysis and XAI
· Contributions of EC to XAI in general
· Use of EC to generate explainable/interpretable models
· XAI in real-world applications of EC
· Possible interplay between XAI and EC theory
· Applications of existing XAI methods to EC
· Novel XAI methods for EC
· Legal and ethical considerations
· Case studies / applications of EC & XAI technologies
Organizers
Jaume Bacardit
Alexander Brownlee
Stefano Cagnoni
Giovanni Iacca
John McCall
David Walker
EGM — Evolutionary Generative Models
Summary
Generative models have become a key field in Artificial Intelligence. Evolutionary generative models are generative approaches that employ some type of evolutionary algorithm, whether applied on its own or in conjunction with other methods. Broadly, we can divide evolutionary generative models into at least three main types:
(i) Evolutionary Computation (EC) as a Generative Model explores how EC techniques can serve directly as generative models to produce data, designs, or solutions that fulfill specific criteria or constraints;
(ii) Generative Models Assisting EC covers modern generative models, such as Generative Adversarial Networks or diffusion models, that enhance the performance and capabilities of EC methods (e.g., using a generative model as a surrogate);
(iii) EC Assisting Generative Models concerns the role of EC techniques in enhancing generative models themselves, particularly through optimization and exploration. This includes approaches where EC is used to evolve or optimize the parameters of generative networks, to help address known issues of generative models, or to introduce adaptive mechanisms that improve model flexibility and resilience. It also touches on topics related to EC population dynamics, such as cooperative or adversarial approaches.
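Type (i) can be illustrated with a minimal latent variable evolution sketch: an evolutionary search over the latent space of a fixed generator for high-scoring outputs. The decoder and critic below are illustrative stand-ins (a real setting would use a trained generative network and a learned or human-in-the-loop score):

```python
import random

def decoder(z):
    """Stand-in generator: maps a latent vector to an output sample.
    A real generative model would be a trained network."""
    return [2.0 * zi for zi in z]

def critic(sample):
    """Stand-in score for a generated sample (higher is better):
    closeness to a fixed target artefact."""
    target = [1.0, -2.0, 0.5]
    return -sum((s - t) ** 2 for s, t in zip(sample, target))

def evolve_latents(pop_size=20, generations=60, sigma=0.2, seed=0):
    """Latent variable evolution: a simple truncation-selection ES
    searching the latent space of a fixed generator."""
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda z: critic(decoder(z)), reverse=True)
        elite = pop[: pop_size // 4]
        pop = elite + [
            [zi + rng.gauss(0, sigma) for zi in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    best = max(pop, key=lambda z: critic(decoder(z)))
    return decoder(best), critic(decoder(best))

sample, score = evolve_latents()
print("best sample:", [round(s, 2) for s in sample], "score:", round(score, 4))
```

Because the generator stays fixed, evolution here acts purely as the generative search process, which is the defining trait of type (i).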
The workshop on Evolutionary Generative Models (EGM) aims to act as a medium for debate, exchange of knowledge and experience, and collaboration among researchers focused on generative models in the EC community. It provides a forum for disseminating experience on using EC as a generative model, on generative models assisting EC, and on EC assisting generative models, for presenting new and ongoing research in the field, and for attracting new interest from our community.
Topics:
- Evolutionary Generative Models
- Generative Models in Evolutionary Computation
- Evolutionary Machine Learning Generative Models
- EC-assisted Generative Machine Learning training, generation, hyperparameter optimisation or architecture search
- Co-operative or Adversarial Generative Models
- Evolutionary latent and embedding space exploration (e.g. LVEs)
- Interaction with Evolutionary Generative Models
- Real-world applications of Evolutionary Generative Models solutions
- Software libraries and frameworks for Evolutionary Generative Models
Organizers
João Correia
João Correia is an Assistant Professor at the University of Coimbra, a researcher of the Computational Design and Visualization Lab, and a member of the Evolutionary and Complex Systems (ECOS) group of the Centre for Informatics and Systems of the same university. He holds a PhD in Information Science and Technology from the University of Coimbra and an MSc and BSc in Informatics Engineering from the same university. His main research interests include Evolutionary Computation, Machine Learning, Adversarial Learning, Computer Vision and Computational Creativity. He is involved in international program committees of conferences in the areas of Evolutionary Computation, Artificial Intelligence, Computational Art and Computational Creativity, and is a reviewer for various conferences and journals in those areas, including GECCO and EvoStar. He has served as a remote reviewer for European Research Council grants and is an executive board member of SPECIES. He was also publicity chair and chair of the International Conference on Evolutionary Art, Music and Design, and is currently publicity chair for EvoStar - The Leading European Event on Bio-Inspired Computation and chair of EvoApplications, the International Conference on the Applications of Evolutionary Computation. Furthermore, he has authored and co-authored several articles at international conferences and in journals on Artificial Intelligence and Evolutionary Computation. He is involved in national and international projects concerning Evolutionary Computation, Machine Learning, Generative Models, Computational Creativity and Data Science.
Jamal Toutouh
I am a Marie Skłodowska-Curie Postdoctoral Fellow at the Massachusetts Institute of Technology (MIT) in the USA, at the MIT CSAIL Lab. I obtained my Ph.D. in Computer Engineering at the University of Malaga (Spain). The dissertation, Natural Computing for Vehicular Networks, was awarded the 2018 Best Spanish Ph.D. Thesis in Smart Cities. It focused on the application of Machine Learning methods inspired by Nature to address Smart Mobility problems.
My current research explores the combination of Nature-inspired gradient-free and gradient-based methods to address Adversarial Machine Learning. The main idea is to devise new algorithms to improve the efficiency and efficacy of the state-of-the-art methodology by mainly applying co-evolutionary approaches. Besides, I am working on the application of Machine Learning to address problems related to Smart Mobility, Smart Cities, and Climate Change.
Una-May O’Reilly
Penousal Machado
Erik Hemberg
EvoOSS — Open Source Software for Evolutionary Computation
Summary
Evolutionary computation (EC) methods are applied in many different domains. Therefore, soundly engineered, reusable, flexible, user-friendly, interoperable, and open software for EC is needed more than ever to bridge the gap between theoretical research and practical application. However, due to the heterogeneity of application domains and the large number of EC methods, the development of such software is both time-consuming and complex. Consequently, many EC researchers implement custom, highly specialized, closed-source, and often throw-away software that focuses on a specific research question and is used only once to produce results for the next paper. It is not yet standard in the EC community that the software used to produce the presented results is made available as open source software with each publication, let alone that this software is engineered in such a way that others can easily base their research work on it or apply it in practice. This significantly hinders the comparability and reproducibility of research results in the field.
This workshop promotes the development and dissemination of open source software for evolutionary computation and provides a platform for EC researchers to present their latest open source software libraries, frameworks, and tools for the development, analysis, and application of evolutionary algorithms.
Please note that submissions to this workshop will only be accepted if they describe open source software for EC that has already been released and is publicly available. The URL of the source code repository must be included in the paper. Therefore, contributions to this workshop do not have to be submitted in anonymized form, as the identity of the authors is usually easy to determine from the repository.
Organizers
Stefan Wagner
Michael Affenzeller
EvoSelf — Evolving self-organisation
Summary
Recent dramatic advances in the problem-solving capabilities and scale of Artificial Intelligence (AI) systems have enabled their successful application to challenging real-world scientific and engineering problems (Abramson et al 2024, Lam et al 2023). Yet these systems remain brittle to small disturbances and adversarial attacks (Su et al 2019, Cully 2014), lack human-level generalisation capabilities (Chollet 2019), and require alarming amounts of human, energy, and financial resources (Strubell et al 2019).
Biological systems, on the other hand, seem to have largely solved many of these issues. They are capable of developing into complex organisms from a few cells and of regenerating limbs through highly energy-efficient processes shaped by evolution. They do so through self-organisation: collectives of simple components interact locally with each other to give rise to macroscopic properties in the absence of centralised control (Camazine, 2001). This ability to self-organise renders organisms adaptive to their environments and robust to unexpected failures, as the redundancy built into the collective enables repurposing components, crucially, by leveraging the same self-organisation process that created the system in the first place.
Self-organisation lies at the core of many computational systems that exhibit properties such as robustness, adaptability, scalability and open-ended dynamics. Some examples are Cellular Automata (Von Neumann 1966), reaction-diffusion systems (Turing 1990, Mordvintsev 2021), particle systems (Reynolds 1987, Mordvintsev), and Neural Cellular Automata (Mordvintsev et al 2020), which have shown promising results in pattern formation in high-dimensional spaces such as images. Examples from neuroevolution are indirect encodings of neural networks inspired by morphogenesis, such as cellular encodings (Gruau 1992), HyperNEAT (Stanley et al 2009), Hypernetworks (Ha 2016), HyperNCA (Najarro et al 2022) and Neural Developmental Programs (Najarro et al 2023, Nisioti et al 2024), which show improved robustness and generalisation.
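The core idea of local interactions giving rise to global structure can be made concrete with a minimal, self-contained sketch: an elementary cellular automaton. The rule number and grid size below are arbitrary choices for illustration, not tied to any of the systems cited above.

```python
def ca_step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton.
    Each cell's next state depends only on itself and its two
    neighbours (periodic boundary) -- purely local interactions."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right  # neighbourhood as a 3-bit index
        out.append((rule >> idx) & 1)              # look up next state in the rule table
    return out

# A single seeded cell self-organises into a growing, non-trivial
# global pattern under repeated application of the local rule.
state = [0] * 11
state[5] = 1
for _ in range(4):
    state = ca_step(state)
```

No cell has access to the global configuration, yet the iterated local rule produces coherent macroscopic structure, which is the essence of the self-organising systems discussed here.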
Guiding self-organising systems through evolution is a long-standing and promising practice, yet the inherent complexity of these systems' dynamics complicates their scaling to domains where gradient-based methods or simpler models excel (Risi 2021). If we view self-organising systems as genotype-to-phenotype mappings, we can leverage techniques developed in the evolutionary optimization community to understand how they alter evolutionary dynamics and to guide them better.
The reverse is also possible: evolution can emerge as an inherent property of a self-organising system, allowing us to study questions about the origin of life. Investigating the conditions under which such emergent evolutionary behaviours appear in these systems could afford insights applicable to existing artificial evolutionary approaches, or even directly provide an evolutionary substrate for learning tasks and for achieving open-endedness. Early work in this direction (Ray 1992, Agüera y Arcas et al 2024, Fontana 1990, Adami et al 1994, Rasmussen et al 1991) has demonstrated emergent evolution in several computational substrates.
References
J. Abramson et al., “Accurate structure prediction of biomolecular interactions with AlphaFold 3,” Nature, pp. 1–3, May 2024, doi: 10.1038/s41586-024-07487-w.
J. Su, D. V. Vargas, and S. Kouichi, “One pixel attack for fooling deep neural networks,” IEEE Trans. Evol. Computat., vol. 23, no. 5, pp. 828–841, Oct. 2019, doi: 10.1109/TEVC.2019.2890858.
R. Lam et al., “GraphCast: Learning skillful medium-range global weather forecasting,” Aug. 04, 2023, arXiv: arXiv:2212.12794. Accessed: May 31, 2024. [Online]. Available: http://arxiv.org/abs/2212.12794
A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, “Robots that can adapt like animals,” Nature, vol. 521, no. 7553, pp. 503–507, May 2015, doi: 10.1038/nature14422.
F. Chollet, “On the Measure of Intelligence,” Nov. 25, 2019, arXiv: arXiv:1911.01547. doi: 10.48550/arXiv.1911.01547.
E. Strubell, A. Ganesh, and A. McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” Jun. 05, 2019, arXiv: arXiv:1906.02243. doi: 10.48550/arXiv.1906.02243.
S. Camazine, J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau, Self-Organization in Biological Systems, vol. 38. Princeton University Press, 2001. doi: 10.2307/j.ctvzxx9tx.
A. Mordvintsev, E. Randazzo, and E. Niklasson, “Differentiable Programming of Reaction-Diffusion Patterns,” Jun. 22, 2021, arXiv: arXiv:2107.06862. doi: 10.48550/arXiv.2107.06862.
A. M. Turing, “The chemical basis of morphogenesis,” Bulletin of Mathematical Biology, vol. 52, no. 1, pp. 153–197, Jan. 1990, doi: 10.1007/BF02459572.
C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in Proceedings of the 14th annual conference on Computer graphics and interactive techniques, in SIGGRAPH ’87. New York, NY, USA: Association for Computing Machinery, Aug. 1987, pp. 25–34. doi: 10.1145/37401.37406.
Alexander Mordvintsev. Self-Organizing Particle Swarm. https://znah.net/icra23/
B. A. y Arcas et al., “Computational Life: How Well-formed, Self-replicating Programs Emerge from Simple Interaction,” Aug. 02, 2024, arXiv: arXiv:2406.19108. doi: 10.48550/arXiv.2406.19108.
W. Fontana, “Algorithmic Chemistry: A model for functional self-organization.”
C. Adami and C. T. Brown, “Evolutionary Learning in the 2D Artificial Life System ‘Avida,’” May 16, 1994, arXiv: arXiv:adap-org/9405003. doi: 10.48550/arXiv.adap-org/9405003.
S. Rasmussen, C. Knudsen, R. Feldberg, and M. Hindsholm, “The coreworld: emergence and evolution of cooperative structures in a computational chemistry,” in Emergent computation, Cambridge, MA, USA: MIT Press, 1991, pp. 111–134.
E. Najarro, S. Sudhakaran, and S. Risi, “Towards Self-Assembling Artificial Neural Networks through Neural Developmental Programs,” Jul. 16, 2023, arXiv: arXiv:2307.08197. Accessed: Oct. 03, 2023. [Online]. Available: http://arxiv.org/abs/2307.08197
E. Nisioti, E. Plantec, M. Montero, J. Pedersen, and S. Risi, “Growing Artificial Neural Networks for Control: the Role of Neuronal Diversity,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion, Melbourne VIC Australia: ACM, Jul. 2024, pp. 175–178. doi: 10.1145/3638530.3654356.
Call for papers
We invite authors to submit papers on the above subjects through the GECCO submission system. We encourage two categories of submissions: papers of up to 4 pages showcasing early research ideas, and papers of up to 8 pages presenting more substantial contributions (such as technical contributions, benchmarks, negative results, and surveys). The page count excludes references and appendices, and submissions should follow the GECCO format. We encourage submissions related to evolution and self-organisation that address the following questions:
- How can we evolve artificial systems that exhibit robustness, generalisation and adaptability?
- What properties are missing from current self-organising systems? Can we design new ones?
- How can we analyse the trainability/navigability of self-organising systems?
- How can evolutionary processes such as self-replication emerge in a self-organising system?
- Which benchmarks/environments will reveal the benefit of self-organising systems?
- Which scientific and engineering domains would benefit from the development of such systems?
Organizers
Eleni Nisioti
Sebastian Risi
Joachim Winther Pedersen
Ettore Randazzo
Ettore Randazzo is a Senior Software Engineer and Researcher at Google Research in Zürich, Switzerland. Prior to this he was a Research Assistant at the University of Illinois at Chicago where he acquired a Master’s degree in Computer Engineering. His interests include machine intelligence, complex artificial life, philosophy, ethics, logic and mathematics, gaming, music (including playing electric guitar and piano), and writing.
Alexander Mordvintsev
Eyvind Niklasson
Eyvind Niklasson is an AI Resident at Google Research in Zürich, Switzerland, where, among other topics, he conducts research on Neural Cellular Automata and self-replicating programs. He previously worked as a data scientist at Gro Intelligence, developing unsupervised learning models. Before that, Eyvind was a research assistant at Cornell University attached to the Cornell High Energy Synchrotron Source.
GGP — Graph-based Genetic Programming
Summary
While the classical way to represent programs in Genetic Programming (GP) is using an expression tree, different GP variants with graph-based representations have been proposed and studied throughout the years. Graph-based representations have led to novel applications of GP in circuit design, cryptography, image analysis, and more. This workshop aims to encourage this form of GP by considering graph-based methods from a unified perspective and to bring together researchers in this subfield of GP research.
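As a toy illustration of what distinguishes graph-based from tree-based GP, the sketch below evaluates a small program encoded as a DAG in the spirit of Cartesian GP; the genome encoding and function set are illustrative assumptions, not a specific published system.

```python
import operator

# A tiny graph (DAG) program. Each node is (function, input_indices);
# indices may point at program inputs or at any earlier node, so an
# intermediate result can be reused by several consumers -- something
# a strict expression tree cannot express without duplication.
FUNCS = {"add": operator.add, "mul": operator.mul, "sub": operator.sub}

def eval_graph(nodes, inputs, output):
    values = list(inputs)            # indices 0..len(inputs)-1 are the inputs
    for fn, (a, b) in nodes:         # remaining nodes, in topological order
        values.append(FUNCS[fn](values[a], values[b]))
    return values[output]

# f(x, y) = (x + y) * (x + y): node 2 is computed once and used twice.
genome = [("add", (0, 1)), ("mul", (2, 2))]
```

Because node 2 is referenced twice, the subexpression (x + y) is evaluated only once; reuse of subresults is one of the practical advantages graph representations bring to GP.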
Organizers
Roman Kalkreuth
Yuri Lavinas
I’m an associate professor at the University of Toulouse 1 Capitole, France, at the Institut de Recherche en Informatique de Toulouse (IRIT), where I’m part of the REVA team. I did postdoctoral research on histopathology image analysis for cancer treatment with Genetic Programming in the IRIT@CRCT group. I got my PhD degree from the University of Tsukuba, Japan. Originally, I’m from Brazil, where I did my undergraduate studies at the University of Brasilia.
My research interests relate to Computational Intelligence, such as Evolutionary Computation and Artificial Life, with a particular focus on multi-objective optimization, fitness landscapes, and Genetic Programming. Overall, I’m interested in programs that can adapt themselves, in applications of Evolutionary Computation (black-box optimization, multi-agent systems, games), as well as in more speculative uses of Computational Intelligence for Artificial Life (such as the evolution of virtual creatures and the worlds where they live).
Eric Medvet
Giorgia Nadizar
Giovanni Squillero
Alberto Tonda
Dennis G. Wilson
IAM 2025 — 10th Workshop on Industrial Applications of Metaheuristics
Summary
This workshop will present and debate the current achievements of applying metaheuristics to solve real-world problems in industry, as well as the future challenges, focusing on the (always) critical step from the laboratory to the shop floor. A special focus will be given to discussing which elements can be transferred from academic research to industrial applications, and how industrial applications may open new ideas and directions for academic research.
As in the previous edition, the workshop, together with the rest of the conference, will be held in hybrid mode to promote participation.
Topic areas of IAM 2025 include (but are not restricted to):
• Success stories of industrial applications of metaheuristics
• Pitfalls of industrial applications of metaheuristics
• Metaheuristics for optimizing dynamic industrial problems
• Multi-objective optimization in real-world industrial problems
• Metaheuristics for highly constrained industrial optimization problems: ensuring feasibility, constraint-handling techniques
• Reduction of computing times through parameter tuning and surrogate modelling
• Parallel and/or distributed designs to accelerate computations
• Algorithm selection and configuration for complex problem solving
• Advantages and disadvantages of metaheuristics compared to other techniques such as integer programming or constraint programming
• New topics for academic research inspired by real (algorithmic) needs in industrial applications
Submission
Authors can submit short contributions, including position papers, of up to 4 pages, and regular contributions of up to 8 pages, following the GECCO paper formatting guidelines in each category. Software demonstrations are also welcome.
The submission deadlines will adhere to the standard GECCO schedule for workshops.
The workshop itself will be publicized through mailing lists and academic and industrial contacts of the organizers.
Submissions from industry will be especially welcome.
Organizers
Silvino Fernández Alzueta
Pablo Valledor Pellicer
Thomas Stützle
IWERL — 28th International Workshop on Evolutionary Rule-based Machine Learning
Summary
Modern machine learning systems, including generative AI and large language models (LLMs), offer significant potential for addressing real-world challenges. However, a notable limitation of most of these systems is their "black-box" nature. The decision-making process of these models is often difficult to interpret, making it challenging for users to understand how a model arrived at a particular decision. The interpretability of decisions is critical in many real-world applications, such as defence, biomedicine, and law. Moreover, many modern systems require extensive memory, huge computational resources, and enormous amounts of training data, which can be resource-intensive and hinder their widespread adoption.
Evolutionary rule-based machine learning (ERL) stands out for its ability to provide interpretable decisions. The majority of ERL systems generate niche-based solutions, require less memory, and can be trained using comparatively small data sets. A key factor that makes these models interpretable is the generation of human-readable rules. Consequently, the decision-making process of the ERL systems is interpretable, which is an important step toward eXplainable AI (XAI).
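To make the interpretability argument concrete, here is a deliberately minimal toy, not any specific LCS implementation: the entire classifier is a list of human-inspectable IF-THEN rules over a ternary alphabet, where '#' means "don't care".

```python
# A rule is (condition, action). Conditions use {0, 1, #}: each
# position must match the instance exactly unless it is the
# wildcard '#'. This is the classic ternary-alphabet style used in
# many Learning Classifier Systems; the rules below are hand-made
# stand-ins for what an ERL system would evolve.
def matches(condition, instance):
    return all(c == "#" or c == x for c, x in zip(condition, instance))

def classify(rules, instance, default=0):
    for condition, action in rules:   # first matching rule decides
        if matches(condition, instance):
            return action
    return default

# Readable at a glance, e.g. the first rule says:
#   IF the first bit is 1 (rest irrelevant) THEN class 1
rules = [("1##", 1), ("01#", 1), ("00#", 0)]
```

Unlike a weight matrix, each decision can be traced to a single rule, which is precisely the property that makes ERL systems a natural fit for eXplainable AI.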
The International Workshop on Evolutionary Rule-based Machine Learning (IWERL), previously known as the International Workshop on Learning Classifier Systems (IWLCS), stands as a cornerstone within the vibrant history of GECCO. Celebrating its 28th edition, IWERL is one of the pioneering and most successful workshops at GECCO. It plays an important role in nurturing the future of evolutionary rule-based machine learning and serves as a beacon for the next generation of researchers, inspiring them to delve deep into the field, with a particular focus on Learning Classifier Systems (LCSs).
ERL represents a collection of machine learning techniques that leverage the strengths of various metaheuristics to find an optimal set of rules to solve a problem. These methods have been developed using a diverse array of learning paradigms, including supervised learning, unsupervised learning, and reinforcement learning. ERL encompasses several prominent categories, such as Learning Classifier Systems, Ant-Miner, artificial immune systems, and fuzzy rule-based systems. The models or rule structures of these systems are optimized using evolutionary, symbolic, or swarm-based methods. The hallmark characteristic of ERL models is their innate comprehensibility, which encompasses traits like explainability, transparency, and interpretability. This property has garnered significant attention within the machine learning community, aligning with the broader interest in Explainable AI.
This workshop is designed to provide a platform for sharing the research trends in the realm of ERL. It aims to highlight modern implementations of ERL systems for real-world applications and to show the effectiveness of ERL in creating flexible and eXplainable AI systems. Moreover, this workshop seeks to attract new interest in this alternative and often advantageous modelling paradigm.
Topics of interest include but are not limited to:
- Advances in ERL methods: local models, problem space partitioning, rule mixing, …
- Applications of ERL: medical, navigation, bioinformatics, computer vision, games, cyber-physical systems, …
- State-of-the-art analysis: surveys, sound comparative experimental benchmarks, carefully crafted reproducibility studies, …
- Formal developments in ERL: provably optimal parametrization, time bounds, generalization, …
- Comprehensibility of evolved rule sets: knowledge extraction, visualization, interpretation of decisions, eXplainable AI, …
- Advances in ERL paradigms: Michigan/Pittsburgh style, hybrids, iterative rule learning, …
- Hyperparameter optimization for ERL: hyperparameter selection, online self-adaptation, …
- Optimizations and parallel implementations: GPU acceleration, matching algorithms, …
- Generative AI and LLMs in ERL: integrating generative models and large language models for rule generation, natural language explanations, enhanced interpretability, …
Since the ERL research community is rather dispersed, in addition to full papers (8 pages excluding references) on novel ERL research, we plan to allow submission of extended abstracts (2 pages excluding references) that summarize recent high-value ERL research by the authors, showcasing its practical significance. These will be presented in a dedicated segment with short presentations.
Organizers
Abubakar Siddique
Dr. Siddique's main research lies in creating novel machine learning systems, inspired by the principles of cognitive neuroscience, that provide efficient and scalable solutions for challenging and complex problems in different domains, such as Boolean problems, computer vision, navigation, and bioinformatics. He has shared his expertise by delivering five tutorials and talks at various forums, including the Genetic and Evolutionary Computation Conference (GECCO). Additionally, he serves the academic community as an author for prestigious journals and international conferences, including IEEE Transactions on Cybernetics, IEEE Transactions on Evolutionary Computation, and GECCO.
During his academic journey, Dr. Siddique received the "Student Of The Session" Award, the VUWSA Gold Award, and the "Emerging Research Excellence" Medal. Prior to joining academia, he spent nine years at Elixir Technologies Pakistan, a leading software company based in California (USA). His last position was Principal Software Engineer, in which he led a team of software developers. He developed enterprise-level software for customers such as Xerox, IBM, and Adobe.
Michael Heider
Hiroki Shiraishi
LAHS 2025 — Landscape-Aware Heuristic Search
Summary
This workshop will run in hybrid format. Fitness landscape analysis and visualisation can provide significant insights into problem instances and algorithm behaviour. The aim of the workshop is to encourage and promote the use of landscape analysis to improve the understanding, the design and, eventually, the performance of search algorithms. Examples include landscape analysis as a tool to inform the design of algorithms, landscape metrics for online adaptation of search strategies, mining landscape information to predict instance hardness and algorithm runtime. The workshop will focus on, but not be limited to, topics such as:
- Exploiting problem structure
- Informed search strategies
- Performance and failure prediction
- Proposal of new landscape features
- Applications of landscape analysis to real-world problems
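As a concrete example of the kind of landscape feature the workshop covers, the sketch below estimates fitness autocorrelation along a random walk, a classic ruggedness measure; the toy fitness function (onemax via `sum`), walk length, and bit-flip neighbourhood are illustrative assumptions.

```python
import random

def random_walk_autocorr(fitness, start, steps, lag=1, seed=0):
    """Lag-k autocorrelation of fitness values sampled along a random
    walk, where each step flips one bit (a random neighbour move)."""
    rng = random.Random(seed)
    x, fs = list(start), []
    for _ in range(steps):
        fs.append(fitness(x))
        i = rng.randrange(len(x))
        x[i] = 1 - x[i]                      # move to a random neighbour
    mean = sum(fs) / len(fs)
    var = sum((f - mean) ** 2 for f in fs)
    cov = sum((fs[t] - mean) * (fs[t + lag] - mean) for t in range(len(fs) - lag))
    return cov / var if var else 1.0

# On the smooth onemax landscape, consecutive fitness values are
# strongly correlated, so the estimate is close to 1.
rho = random_walk_autocorr(sum, [0] * 32, steps=200)
```

Values near 1 indicate a smooth landscape at the chosen neighbourhood; values near 0 indicate ruggedness, which is exactly the kind of signal used to predict instance hardness or adapt search strategies online.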
We will invite submissions of three types of articles:
- research papers (up to 8 pages)
- software libraries/packages (up to 4 pages)
- position papers (up to 2 pages)
Organizers
Sarah L. Thomson
Nadarajen Veerapen
Katherine Malan
Arnaud Liefooghe
Sébastien Verel
Gabriela Ochoa
LLMfwEC — Large Language Models for and with Evolutionary Computation Workshop
Summary
Large language models (LLMs), along with other foundational models in generative AI, have significantly changed the traditional expectations of artificial intelligence and machine learning systems. An LLM takes natural language text prompts as input and generates responses by matching patterns and completing sequences, providing output in natural language. In contrast, evolutionary computation (EC) is inspired by Neo-Darwinian evolution and focuses on black-box search and optimization. But what connects these two approaches?
One answer is evolutionary search heuristics whose operators use LLMs to fulfill their function (LLM with EC). This hybridization turns the conventional EC paradigm on its head and, in turn, sometimes yields high-performing and novel EC systems.
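One concrete instance of such an operator is Language Model Crossover (LMX), which prompts an LLM with concatenated parents and reads the continuation as offspring. The sketch below shows only the plumbing; `complete` is a hypothetical stand-in for any text-completion call, not a real API.

```python
def lmx_crossover(parents, complete):
    """LMX-style variation: the parents form a few-shot prompt, and the
    model's pattern-completing continuation is taken as the child.
    `complete` is any callable mapping a prompt string to generated text."""
    prompt = "\n".join(parents) + "\n"
    child = complete(prompt).splitlines()[0].strip()
    return child

# With a real LLM, `complete` would call the model; here a stub
# standing in for the model shows the operator's structure only.
child = lmx_crossover(["x + 1", "x * 2"], lambda p: "x * 3\nignored")
```

The point is that variation is driven by the model's learned priors over the representation (code, strings, prompts) rather than by hand-designed syntactic operators.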
Another answer is using LLMs for EC. LLMs may help researchers select feasible candidates from the pool of algorithms based on user-specified goals, provide a basic description of the methods, or propose novel hybrid methods. Further, the models can help identify and describe distinct components suitable for adaptive enhancement or hybridization, and provide pseudo-code, an implementation, and reasoning for the proposed methodology. Finally, LLMs have the potential to transform automated metaheuristic design and configuration by generating code, iteratively improving initially designed solutions or algorithm templates (with or without performance or other data-driven feedback), and even guiding implementations.
This workshop aims to encourage innovative approaches that leverage the strengths of LLMs and EC techniques, enabling the creation of more adaptive, efficient, and scalable algorithms by integrating evolutionary mechanisms with advanced LLM capabilities. By providing a collaborative platform for researchers and practitioners, the workshop may inspire novel research directions that could reshape AI, specifically LLMs, and optimization through this hybridization, and achieve a better understanding and explanation of how these two seemingly disparate fields are related and how knowledge of their functions and operations can be leveraged.
Topics include (but are not restricted to):
- Evolutionary Prompt Engineering
- Optimisation of LLM Architectures
- LLM-Guided Evolutionary Algorithms
- How can an EA using an LLM evolve different units of evolution, e.g. code, strings, images, multi-modal candidates?
- How can an EA using an LLM solve prompt composition or other LLM development and use challenges?
- How can an EA using an LLM integrate design explorations related to cooperation, modularity, reuse, or competition?
- How can an EA using an LLM model biology?
- How can an EA using an LLM intrinsically, or with guidance, support open-ended evolution?
- What new variants hybridizing EC and/or another search heuristic are possible and in what respects are they advantageous?
- What are new ways of using LLMs for evolutionary operators, e.g. new ways of generating variation through LLMs (as with LMX or ELM), or new ways of using LLMs for selection (as with, e.g., Quality-Diversity through AI Feedback)?
- How well does an EA using an LLM scale with population size and problem complexity?
- What is the most accurate computational complexity of an EA using an LLM?
- What makes a good EA plus LLM benchmark?
- LLMs for (automated) generation of EC.
- Understanding, fine-tuning, and adaptation of Large Language Models for EC. How large do LLMs need to be? Are there benefits to using larger or smaller ones? Or ones trained on different datasets or in different ways?
- Implementing/generating methodology for population dynamics analysis: population diversity measures, control, analysis, and visualization.
- Generating rules for EC (boundary and constraints handling strategies).
- The performance improvement, testing, and efficiency of the improved algorithms.
- Reasoning for component-wise analysis of algorithms.
- Connection of LLM and other ML techniques for EC (Reinforcement learning, AutoML)
- Generation and reasoning for parallel approaches for EC algorithms.
- Benchmarking and Comparative Studies of LLM-generated algorithms.
- Applications of LLM and EC.
Organizers
Erik Hemberg
Roman Senkerik
Roman Senkerik was born in Zlin, Czech Republic, in 1981. He received an MSc degree in technical cybernetics from the Faculty of Applied Informatics, Tomas Bata University in Zlin, in 2004, a Ph.D. degree in technical cybernetics from the same university in 2008, and an Assoc. Prof. degree in Informatics from VSB – Technical University of Ostrava in 2013.
From 2008 to 2013 he was a Research Assistant and Lecturer at the Faculty of Applied Informatics, Tomas Bata University in Zlin. Since 2014 he has been an Associate Professor, and since 2017 Head of the A.I.Lab (https://ailab.fai.utb.cz/) at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters and editorial notes. His research interests include the development of evolutionary algorithms, their modifications and benchmarking, soft computing methods, and their interdisciplinary applications in optimization and cyber-security, machine learning, neuro-evolution, data science, chaos theory, and complex systems. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for special sessions, workshops, and symposiums at GECCO, IEEE WCCI, CEC, and SSCI events.
Joel Lehman
Una-May O’Reilly
Michal Pluhacek
Niki van Stein
Niki van Stein received her PhD degree in Computer Science in 2018, from the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands. From 2018 until 2021 she was a Postdoctoral Researcher at LIACS, Leiden University and she is currently an Assistant Professor at LIACS. Her research interests lie in explainable AI for EC and ML, surrogate-assisted optimisation and surrogate-assisted neural architecture search, usually applied to complex industrial applications.
Pier Luca Lanzi
Tome Eftimov
NEWK — Neuroevolution at work
Summary
In recent years, inspired by the fact that natural brains are themselves the products of an evolutionary process, the quest to evolve and optimize artificial neural networks through evolutionary computation has enabled researchers to successfully apply neuroevolution to many domains, such as strategy games, robotics, and big data. The reason behind this success lies in important capabilities that are typically unavailable to traditional approaches, including evolving neural network building blocks, hyperparameters, architectures, and even the learning algorithms themselves (meta-learning).
A closely related and rapidly growing field is Neural Architecture Search (NAS), which aims to automatically design high-performing neural network architectures by employing techniques from evolutionary computation and reinforcement learning. NAS methods strictly rely on neuroevolution principles to explore the vast space of possible neural network configurations and discover novel, efficient architectures. The tight coupling between neuroevolution and NAS highlights their synergistic relationship, with advancements in one field enabling progress in the other.
Although promising, the use of neuroevolution and NAS poses important problems and challenges for their future development.
Firstly, many of their paradigms suffer from a lack of parameter-space diversity, meaning a failure to provide diversity in the behaviours generated by the different networks.
Moreover, harnessing neuroevolution and NAS to optimize deep neural networks requires considerable computational power and, consequently, the investigation of new approaches to enhancing computational performance.
This workshop aims:
- to bring together researchers working in the fields of deep learning, evolutionary computation, and optimization to exchange new ideas about potential directions for future research;
- to create a forum of excellence on neuroevolution that will help interested researchers from various areas, ranging from computer scientists and engineers on the one hand to application-oriented researchers on the other, to gain a high-level view of the current state of the art.
Since interest in neuroevolution and NAS seems likely to keep increasing over the next few years, a workshop on this topic is not only of immediate relevance for gaining insight into future trends, but will also provide common ground to encourage novel paradigms and applications. Therefore, researchers emphasizing neuroevolution and NAS issues in their work are encouraged to submit. This event is also ideal for informal contacts, exchanging ideas, and discussions with fellow researchers.
The workshop seeks high-quality contributions on topics related to neuroevolution and neural architecture search, ranging from theoretical work to innovative applications in the context of (but not limited to):
• theoretical and experimental studies involving neuroevolution and NAS on machine learning in general, and deep and reinforcement learning in particular
• development of innovative neuroevolution and NAS paradigms
• parallel and distributed neuroevolution and NAS methods
• new search operators for neuroevolution and NAS
• hybrid methods for neuroevolution and NAS
• surrogate models for fitness estimation in neuroevolution and NAS
• adoption of evolutionary multi-objective and many-objective optimisation techniques in neuroevolution and NAS
• new benchmark problems for neuroevolution and NAS
• applications of neuroevolution and NAS to Artificial Intelligence agents and to real-world problems.
Organizers
Ernesto Tarantino
Ivanoe De Falco
Antonio Della Cioppa
Antonio Della Cioppa received the Laurea degree in Physics and the Ph.D. degree in Computer Science, both from University of Naples “Federico II,” Naples, Italy, in 1993 and 1999, respectively. From 1999 to 2003, he was a Postdoctoral Fellow at the Department of Computer Science and Electrical Engineering, University of Salerno, Salerno, Italy. In 2004, he joined the Department of Information Engineering, Electrical Engineering and Mathematical Applications, University of Salerno, where he is currently Associate Professor of Computer Science and Artificial Intelligence. His main fields of interest are in the Computational Intelligence area, with particular attention to Evolutionary Computation, Swarm Intelligence and Neural Networks, Machine Learning, Parallel Computing, and their application to real-world problems. Prof. Della Cioppa is a member of the Association for Computing Machinery (ACM), the ACM Special Interest Group on Genetic and Evolutionary Computation, the IEEE Computational Intelligence Society and the IEEE Computational Intelligence Society Task Force on Evolutionary Computer Vision and Image Processing. He serves as Associate Editor for the Applied Soft Computing journal (Elsevier), Evolutionary Intelligence (Elsevier), and Algorithms (MDPI). He has been part of the Organizing or Scientific Committees of dozens of international conferences and workshops, and has authored or co-authored about 100 papers in international journals, books, and conference proceedings.
Edgar Galvan
Mengjie Zhang
Prof Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of Engineering New Zealand, a Fellow of IEEE, and an IEEE Distinguished Lecturer. He is currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation and Machine Learning Research Group, and is the Director of the Centre for Data Science and Artificial Intelligence at the University.
His research is mainly focused on AI, machine learning and big data, particularly evolutionary learning and optimisation, feature selection/construction and big dimensionality reduction, computer vision and image analysis, scheduling and combinatorial optimisation, classification with unbalanced and missing data, and evolutionary deep learning and transfer learning. Prof Zhang has published over 900 research papers in refereed international journals and conferences. He has served as an associate editor for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, and the Evolutionary Computation Journal (MIT Press), and has been involved in many major AI and EC conferences as a chair. He received the “EvoStar/SPECIES Award for Outstanding Contribution to Evolutionary Computation in Europe” in 2023. Since 2007, he has been listed among the top five (currently No. 3) genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html). He is also a Clarivate Highly Cited Researcher in the field of Computer Science (2023).
He is the Tutorial Chair for GECCO 2014, 2023 and 2024, an AIS-BIO Track Chair for GECCO 2016, an EML Track Chair for GECCO 2017, and a GP Track Chair for GECCO 2020 and 2021.
Prof Zhang is currently the Chair for IEEE CIS Awards Committee. He is also a past Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the Emergent Technologies Technical Committee and the Evolutionary Computation Technical Committee, a past Chair for IEEE CIS PubsCom Strategic Planning subcommittee, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
QuantOpt — Workshop on Quantum Optimization
Summary
Scope
Quantum computers are rapidly becoming more powerful and increasingly applicable to real-world problems. They have the potential to solve extremely hard computational problems that are currently intractable for conventional computers. Quantum optimization is an emerging field that focuses on using quantum computing technologies to solve hard optimization problems.
There are two main types of quantum computers: quantum annealers and quantum gate computers.
Quantum annealers are specially tailored to solving combinatorial optimization problems: they have a simpler architecture, are more easily manufactured, and can currently tackle larger problems because they have more qubits. These computers find (near-)optimal solutions of a combinatorial optimization problem via quantum annealing, which is similar to traditional simulated annealing. Whereas simulated annealing uses 'thermal' fluctuations to converge to the state of minimum energy (the optimal solution), quantum annealing adds quantum tunnelling, which provides a faster mechanism for moving between states and hence faster processing.
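As a classical point of reference for the annealing process just described, the following minimal simulated-annealing sketch shows the role of the temperature schedule and of occasionally accepted uphill moves. The objective, cooling schedule, and parameter values are illustrative choices of our own, not tied to any particular annealer.

```python
import math
import random

random.seed(0)

def energy(x):
    # Toy energy: number of 0-bits; the all-ones string is the ground state.
    return len(x) - sum(x)

def simulated_annealing(n=20, steps=2000, t_start=2.0, t_end=0.01):
    x = [random.randint(0, 1) for _ in range(n)]
    e = energy(x)
    for step in range(steps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = random.randrange(n)
        x[i] ^= 1                      # propose a single bit flip
        e_new = energy(x)
        # Accept downhill moves always, uphill moves with Boltzmann
        # probability: these are the 'thermal' fluctuations mentioned above.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            x[i] ^= 1                  # reject: undo the flip
    return x, e

best, e = simulated_annealing()
print(e)  # typically 0 (the ground state) on this easy landscape
```

Quantum annealing replaces the thermal acceptance rule with quantum tunnelling through energy barriers, but the overall picture of a gradually "cooled" search over bit strings carries over.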
Quantum gate computers are general-purpose quantum computers. They compute using quantum logic gates, basic quantum circuits operating on a small number of qubits, and a quantum algorithm consists of a fixed sequence of such gates. Some quantum algorithms, e.g., Grover's algorithm, have provable quantum speed-up. Among other things, these computers can be used to solve combinatorial optimization problems via the quantum approximate optimization algorithm.
Quantum computers have also given rise to quantum-inspired computers and quantum-inspired optimisation algorithms.
Quantum-inspired computers use dedicated conventional hardware to emulate or simulate quantum computers. They offer a programming interface similar to that of quantum computers, can currently solve much larger combinatorial optimization problems than quantum computers, and do so much faster than traditional computers.
Quantum-inspired optimisation algorithms use classical computers to simulate physical phenomena such as superposition and entanglement, in an attempt to retain some of the benefits of quantum computation on conventional hardware when searching for solutions.
To solve optimization problems on a quantum annealer, or on a quantum gate computer using the quantum approximate optimization algorithm, we need to reformulate them in a format suitable for the quantum hardware, in terms of qubits, biases, and couplings between qubits. In mathematical terms, this means reformulating the optimization problem as a Quadratic Unconstrained Binary Optimisation (QUBO) problem, which is closely related to the renowned Ising model. QUBOs constitute a universal class, since in principle all combinatorial optimization problems can be formulated as QUBOs. In practice, some classes of optimization problems map naturally to a QUBO, whereas others are much more challenging to map. On quantum gate computers, Grover's algorithm can be used to optimize a function by transforming the optimization problem into a series of decision problems; the most challenging part in this case is selecting a representation of the problem that preserves the quadratic speed-up of Grover's algorithm over classical algorithms for the same problem.
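As an illustration, max-cut is one of the problems that maps naturally to a QUBO. The sketch below builds the QUBO coefficients for a small graph and solves it by brute force standing in for quantum hardware; the graph is an arbitrary example of our own, not tied to any particular solver.

```python
from itertools import product

# Max-cut on a small graph: vertex i gets a binary variable x_i giving its
# side of the cut. An edge (i, j) is cut exactly when x_i != x_j, and
# maximising the cut is equivalent to minimising the QUBO energy
#   E(x) = sum over edges (i, j) of (2*x_i*x_j - x_i - x_j),
# since each cut edge contributes -1 and each uncut edge contributes 0.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n = 4

# QUBO coefficients: linear terms on the diagonal, couplings off-diagonal.
Q = {}
for i, j in edges:
    Q[(i, i)] = Q.get((i, i), 0) - 1
    Q[(j, j)] = Q.get((j, j), 0) - 1
    Q[(i, j)] = Q.get((i, j), 0) + 2

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all assignments: feasible only for tiny instances;
# a quantum annealer searches this space via quantum annealing instead.
best = min(product([0, 1], repeat=n), key=energy)
cut_size = -energy(best)
print(best, cut_size)  # the maximum cut of this graph has 4 edges
```

The same qubit/bias/coupling structure is what a real annealer's programming interface expects, modulo embedding the couplings into the physical qubit topology.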
Content
A major application domain of quantum computers is solving hard combinatorial optimization problems. This is the emerging field of quantum optimization. The aim of the workshop is to provide a forum for both scientific presentations and discussion of issues related to quantum optimization.
As the algorithms that quantum computers use for optimization can be regarded as general heuristic optimization algorithms, there are potentially great benefits and synergies in bringing together the quantum computing and heuristic optimization communities for mutual learning.
The workshop aims to be as inclusive as possible, and welcomes contributions from all areas broadly related to quantum optimization, and by researchers from both academia and industry.
Particular topics of interest include, but are not limited to:
Formulation of optimisation problems as QUBOs (including handling of non-binary representations and constraints)
Fitness landscape analysis of QUBOs
Novel search algorithms to solve QUBOs
Experimental comparisons on QUBO benchmarks
Theoretical analysis of search algorithms for QUBOs
Speed-up experiments on traditional hardware vs quantum(-inspired) hardware
Decomposition of optimisation problems for quantum hardware
Application of the quantum approximate optimization algorithm
Application of Grover's algorithm to solve optimisation problems
Novel quantum-inspired optimisation algorithms
Optimization/discovery of quantum circuits
Quantum optimisation for machine learning problems
Optical Annealing
Dealing with noise in quantum computing
Quantum gate optimisation and quantum coherent control
Organizers
Alberto Moraglio
Mayowa Ayodele
Mayowa Ayodele holds a PhD in Evolutionary Computation from Robert Gordon University, Scotland. She works as a Senior Solutions Architect at D-Wave Quantum Inc., where she specialises in addressing customer challenges using D-Wave's quantum, hybrid, and classical optimisation solvers. Previously, she was a Principal Researcher at Fujitsu Research of Europe, United Kingdom, where she spent three years investigating quantum-inspired techniques for solving optimisation problems.
Over the past decade, much of her research has revolved around applying diverse categories of algorithms, including evolutionary algorithms, to problems in logistics such as the scheduling of trucks, trailers, ships, and platform supply vessels. In recent years, her focus has shifted towards formulating single- and multi-objective constrained optimisation problems as Quadratic Unconstrained Binary Optimization (QUBO) problems, as well as applying quantum optimisation techniques to practical problems.
Francisco Chicano
Ofer Shir
Lee Spector
Matthieu Parizy
SAEOpt — Workshop on Surrogate-Assisted Evolutionary Optimisation
Summary
In many real-world optimisation problems, evaluating the objective function(s) is expensive, perhaps requiring days of computation for a single evaluation. Surrogate-assisted optimisation attempts to alleviate this problem by employing computationally cheap 'surrogate' models to estimate the objective function(s) or the ranking relationships of the candidate solutions.
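As a minimal illustration of this idea, the sketch below optimises an "expensive" one-dimensional objective through a cheap surrogate: a quadratic fitted to the three best evaluated points, whose minimiser becomes the next expensive evaluation. The objective and the parabolic-interpolation surrogate are illustrative choices of our own, far simpler than the Gaussian-process or neural surrogates typically used in practice.

```python
def expensive(x):
    # Stand-in for an expensive objective; in practice this could be a
    # simulation taking hours or days per evaluation.
    return (x - 1.3) ** 4 + 0.5

def parabolic_minimiser(p0, p1, p2):
    # Minimiser of the quadratic fitted through three (x, y) samples
    # (successive parabolic interpolation, as used in Brent's method).
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Archive of true evaluations. Each loop iteration spends only ONE expensive
# evaluation, while fitting and minimising the surrogate is essentially free.
archive = [(x, expensive(x)) for x in (-1.0, 0.0, 2.0)]
for _ in range(3):
    archive.sort(key=lambda p: p[1])       # surrogate uses the 3 best samples
    x_next = parabolic_minimiser(*archive[:3])
    archive.append((x_next, expensive(x_next)))

best_x, best_y = min(archive, key=lambda p: p[1])
print(best_x, best_y)  # converges towards the optimum at x = 1.3
```

The "model management" question named in the topic list below is visible even here: which archive points to fit the surrogate on, and whether to trust its minimiser or to sample elsewhere for model improvement.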
Surrogate-assisted approaches have been widely used across the field of evolutionary optimisation, including continuous and discrete variable problems, although little work has been done on combinatorial problems. Surrogates have been employed in solving a variety of optimization problems, such as multi-objective optimisation, dynamic optimisation, and robust optimisation. Surrogate-assisted methods have also found successful applications in aerodynamic design optimisation, structural design optimisation, data-driven optimisation, chip design, drug design, robotics, and many more. Most interestingly, the need for on-line learning of the surrogates has led to a fruitful crossover between the machine learning and evolutionary optimisation communities, where advanced learning techniques such as ensemble learning, active learning, semi-supervised learning and transfer learning have been employed in surrogate construction.
Despite recent successes in surrogate-assisted evolutionary optimisation, many challenges remain. This workshop aims to promote research on surrogate-assisted evolutionary optimisation, including the synergies between evolutionary optimisation and learning. Thus, this workshop will be of interest to a wide range of GECCO participants. Particular topics of interest include (but are not limited to):
- Bayesian optimisation
- Advanced machine learning techniques for constructing surrogates
- Model management in surrogate-assisted optimisation
- Multi-level, multi-fidelity surrogates
- Complexity and efficiency of surrogate-assisted methods
- Small and big data-driven evolutionary optimization
- Model approximation in dynamic, robust, and multi-modal optimisation
- Model approximation in multi- and many-objective optimisation
- Surrogate-assisted evolutionary optimisation of high-dimensional problems
- Comparison of different modelling methods in surrogate construction
- Surrogate-assisted identification of the feasible region
- Comparison of evolutionary and non-evolutionary approaches with surrogate models
- Test problems for surrogate-assisted evolutionary optimisation
- Performance improvement techniques in surrogate-assisted evolutionary computation
- Performance assessment of surrogate-assisted evolutionary algorithms
Organizers
Alma Rahat
Dr Rahat is an Associate Professor of Data Science. His expertise is in evolutionary and Bayesian search and optimisation. Particularly, he has worked on developing effective acquisition functions for optimising single and multi-objective problems and locating the feasible space of solutions. He has a strong track record of working with industry on a broad range of optimisation problems, which resulted in numerous articles in top journals and conferences, including a best paper in the Real-World Applications track at GECCO, and a patent with Hydro International Ltd. Recently, he has been actively contributing to the Welsh Government's response to the pandemic using his expertise in machine learning and parameter optimisation with funding from both the Welsh Government (Co-PI and Co-I; £750k) and EPSRC (EP/W01226X/1, PI; £230k). His work, with colleagues at Swansea, has resulted in generating medium-term projections of admissions and deaths every week for the First Minister of Wales, and the UK Health Security Agency.
He is one of 24 members of the IEEE Computational Intelligence Society Task Force on Data-Driven Evolutionary Optimization of Expensive Problems. He has been the lead organiser for the Surrogate-Assisted Evolutionary Optimisation (SAEOpt) workshop at GECCO since 2016, and was the Proceedings Chair for GECCO 2022. Furthermore, he successfully led Swansea University's application to join the Turing University Network in 2023, and he is currently the Turing Academic Liaison for the university.
Currently, he is interested in developing methods for optimising constrained and expensive single and multi-objective problems, and active learning, that may be applied in different contexts, e.g. engineering design, educational technology, computational modelling, decision-making, and policy exploration.
Dr Rahat has a BEng (Hons.) in Electronic Engineering from the University of Southampton, UK, and a PhD in Computer Science from the University of Exeter, UK. He completed a Postgraduate Certificate in Teaching in Higher Education at Swansea University, and he is now a fellow of the Higher Education Academy (FHEA). He worked as a product development engineer after his bachelor's degree, and held post-doctoral research positions at the University of Exeter. Before moving to Swansea, he was a Lecturer in Computer Science at the University of Plymouth, UK.
Richard Everson
Jonathan Fieldsend
Handing Wang
Yaochu Jin
Tinkle Chugh
SymReg — Symbolic Regression Workshop
Summary
Symbolic regression is the search for symbolic models that describe a relationship in provided data. It was one of the first applications of genetic programming and as such is tightly connected to evolutionary algorithms. In recent years, several non-evolutionary techniques for solving symbolic regression have emerged, most notably methods based on large language models (LLMs). Especially with the focus on interpretability and explainability in AI research, symbolic regression takes a leading role among machine learning methods whenever model inspection and understanding by a domain expert are desired.
The focus of this workshop is to further advance the state-of-the-art in symbolic regression and more general equation learning by gathering experts in the field and facilitating an exchange of research ideas. We encourage submissions presenting novel techniques or applications of symbolic regression, theoretical work, or algorithmic improvements to make the techniques more efficient, more reliable, and generally better controlled.
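To make the task concrete, the sketch below searches random expression trees over {+, -, *} against data sampled from a hidden target. This is a deliberately naive random search rather than a genetic programming system; the primitive set, tree depth, and target function are illustrative choices of our own.

```python
import random

random.seed(1)

# Data sampled from a hidden target the search must rediscover: y = x^2 + x.
data = [(x, x * x + x) for x in range(-5, 6)]

def random_tree(depth):
    # A leaf is the variable x or a small constant; internal nodes are ops.
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else random.choice([1, 2, 3])
    op = random.choice(["+", "-", "*"])
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def mse(tree):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

# Pure random search over 5000 trees; a genetic programming system would
# instead evolve a population of such trees with crossover and mutation.
best = min((random_tree(3) for _ in range(5000)), key=mse)
print(best, mse(best))
```

The resulting tree is a readable formula rather than an opaque model, which is exactly the interpretability advantage the summary above refers to.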
Organizers
Gabriel Kronberger
Fabricio Olivetti de França
Fabricio is an Associate Professor at the Federal University of ABC (UFABC), Brazil. He received his MSc and PhD from the State University of Campinas (UNICAMP) with a focus on data clustering and multimodal optimization. His current research focuses on interpretable models with symbolic regression and real-world applications.
William La Cava
Steven Gustafson
Steven Gustafson received his PhD in Computer Science and Artificial Intelligence, and shortly thereafter was named one of IEEE Intelligent Systems' "AI's 10 to Watch" for his work on algorithms that discover algorithms. For more than ten years at GE's corporate R&D center he was a leader in AI and a successful technical lab manager, all while inventing and deploying state-of-the-art AI systems for almost every GE business, from GE Capital to NBC Universal and GE Aviation. He has over 50 publications and 13 patents, and was a co-founder and Technical Editor-in-Chief of the Memetic Computing Journal. Steven has chaired various conferences and workshops, including the first Symbolic Regression and Modeling (SRM) Workshop at GECCO 2009 and subsequent workshops from 2010 to 2014. As Chief Scientist at Maana, a Knowledge Platform software company, he invented and architected new AutoML and NLP techniques with publications in AAAI and IJCAI. Dr. Gustafson was the CTO at Noonum, a FinTech startup that delivers insights on companies and markets using advances in NLP and AI, and the Chief Scientist at BigFilter.ai, a company focused on AI safety and alignment technology. Currently he is an assistant professor at the University of Washington.