Large Language Models for and with Evolutionary Computation Workshop
Webpage: TBA (to be created upon acceptance)
Description
Large language models (LLMs), along with other foundation models in generative AI, have significantly changed the traditional expectations of artificial intelligence and machine learning systems. An LLM takes natural-language text prompts as input and generates responses by matching patterns and completing sequences, producing output in natural language. In contrast, evolutionary computation (EC) is inspired by Neo-Darwinian evolution and focuses on black-box search and optimization. But what connects these two approaches?
One answer is evolutionary search heuristics whose operators use LLMs to fulfill their function (LLM with EC). This hybridization turns the conventional EC paradigm on its head and, in turn, sometimes yields high-performing and novel EC systems.
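To make this concrete, below is a minimal sketch of an LLM-driven variation operator in the spirit of LMX: parent genotypes are listed in a prompt and the model is asked to complete the pattern with a child. Everything here is illustrative; llm is a hypothetical placeholder for any text-completion endpoint (the offline stand-in merely scrambles a parent), and the vowel-counting fitness is a toy objective.

    import random

    def llm(prompt: str) -> str:
        # Hypothetical placeholder for any text-completion endpoint.
        # Stand-in behaviour so the sketch runs offline: scramble one
        # of the parent lines found in the prompt. (Python 3.9+.)
        parents = [line.removeprefix("Solution: ")
                   for line in prompt.splitlines()
                   if line.startswith("Solution: ")]
        src = random.choice(parents)
        return "".join(random.sample(src, len(src)))

    def lmx_crossover(parents: list[str]) -> str:
        # LMX-style variation: list parent genotypes in a prompt and let
        # the model "complete the pattern" with a child genotype.
        prompt = "\n".join(f"Solution: {p}" for p in parents) + "\nSolution:"
        return llm(prompt).strip()

    def fitness(genotype: str) -> float:
        # Toy objective (assumption): number of vowels in the string.
        return sum(ch in "aeiou" for ch in genotype)

    population = ["hello world", "evolve text", "search space"]
    for generation in range(20):
        child = lmx_crossover(random.sample(population, 2))
        worst = min(population, key=fitness)
        if fitness(child) >= fitness(worst):   # steady-state replacement
            population[population.index(worst)] = child
    print(max(population, key=fitness))

With a real model behind llm, the same loop applies unchanged to code, plans, or other text-representable genotypes.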
Another answer is using LLMs for EC. LLMs may help researchers select feasible candidates from a pool of algorithms based on user-specified goals, provide basic descriptions of those methods, or propose novel hybrid methods. Further, the models can help identify and describe distinct components suitable for adaptive enhancement or hybridization, and can provide pseudo-code, an implementation, and the reasoning behind the proposed methodology. Finally, LLMs have the potential to transform automated metaheuristic design and configuration by generating code, iteratively improving the initially designed solutions or algorithm templates (with or without performance or other data-driven feedback), and even guiding implementations.
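A correspondingly minimal sketch of this "LLM for EC" loop, under the same assumption of a hypothetical llm endpoint: the model is repeatedly asked to improve an optimizer's source code, candidates are scored on a benchmark, and improvements are kept in a simple (1+1) scheme. The evaluate harness and the sphere-function test are assumptions chosen for illustration only.

    def llm(prompt: str) -> str:
        # Hypothetical placeholder completion endpoint; a real
        # implementation would call an LLM API here.
        return "def optimize(f, x0): return x0  # (stand-in output)"

    def evaluate(source: str) -> float:
        # Score a candidate optimizer's source on a toy benchmark
        # (assumption: lower is better; sphere function, fixed start).
        scope: dict = {}
        try:
            exec(source, scope)                  # compile the candidate
            x = scope["optimize"](lambda v: sum(c * c for c in v),
                                  [1.0, -2.0])
            return sum(c * c for c in x)         # objective at returned point
        except Exception:
            return float("inf")                  # broken code scores worst

    TEMPLATE = "Improve this Python optimizer; return only code:\n{code}\nScore: {score}"

    best_code = "def optimize(f, x0): return x0"
    best_score = evaluate(best_code)
    for step in range(5):
        candidate = llm(TEMPLATE.format(code=best_code, score=best_score))
        score = evaluate(candidate)
        if score <= best_score:                  # keep improvements (1+1 scheme)
            best_code, best_score = candidate, score
    print(best_score)

Feeding the measured score back into the prompt is what makes the refinement data-driven; omitting it yields the feedback-free variant mentioned above.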
This workshop aims to encourage innovative approaches that leverage the strengths of both LLM and EC techniques, enabling the creation of more adaptive, efficient, and scalable algorithms by integrating evolutionary mechanisms with advanced LLM capabilities. By providing a collaborative platform for researchers and practitioners, the workshop may inspire novel research directions that could reshape the AI (specifically LLM) and optimization fields through this hybridization, and it may foster a better understanding and explanation of how these two seemingly disparate fields are related and how knowledge of their functions and operations can be leveraged.
Topics include (but are not restricted to) the following:
- Evolutionary Prompt Engineering (a minimal sketch appears after this list)
- Optimization of LLM Architectures
- LLM-Guided Evolutionary Algorithms
- How can an EA using an LLM evolve different units of evolution, e.g., code, strings, images, or multi-modal candidates?
- How can an EA using an LLM solve prompt composition and other challenges in LLM development and use?
- How can an EA using an LLM integrate design explorations related to cooperation, modularity, reuse, or competition?
- How can an EA using an LLM model biology?
- How can an EA using an LLM intrinsically, or with guidance, support open-ended evolution?
- What new variants hybridizing LLMs with EC and/or other search heuristics are possible, and in what respects are they advantageous?
- What are new ways of using LLMs for evolutionary operators, e.g., generating variation through LLMs (as with LMX or ELM) or performing selection with LLMs (as with Quality-Diversity through AI Feedback)?
- How well does an EA using an LLM scale with population size and problem complexity?
- How can the computational complexity of an EA using an LLM be accurately characterized?
- What makes a good EA plus LLM benchmark?
- LLMs for (automated) generation of EC.
- Understanding, fine-tuning, and adaptation of large language models for EC. How large do LLMs need to be? Are there benefits to using larger or smaller models, or models trained on different datasets or in different ways?
- Implementing/generating methodology for population dynamics analysis, population diversity measures and control, and their analysis and visualization.
- Generating rules for EC (boundary and constraint-handling strategies).
- Performance improvement, testing, and efficiency of the improved algorithms.
- Reasoning for component-wise analysis of algorithms.
- Connections of LLMs and other ML techniques for EC (reinforcement learning, AutoML).
- Generation of, and reasoning about, parallel approaches for EC algorithms.
- Benchmarking and comparative studies of LLM-generated algorithms.
- Applications of LLMs and EC (not limited to specific domains).
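As referenced in the first topic above, here is a minimal sketch of evolutionary prompt engineering under the same assumptions as the earlier sketches: a hypothetical llm endpoint mutates candidate prompts, and a toy task_accuracy function stands in for real downstream task performance.

    import random

    def llm(prompt: str) -> str:
        # Hypothetical placeholder completion endpoint. Stand-in behaviour
        # so the sketch runs offline: drop one word of the instruction.
        words = prompt.splitlines()[-1].split()
        if len(words) > 3:
            words.pop(random.randrange(len(words)))
        return " ".join(words)

    def task_accuracy(p: str) -> float:
        # Fitness (assumption): how well prompt p drives a model on a
        # fixed task set. Toy proxy here: reward concise prompts.
        return 1.0 / (1.0 + len(p))

    def mutate(p: str) -> str:
        # Ask the LLM itself to propose a variation of the prompt.
        return llm("Rephrase this instruction, keeping its intent:\n" + p)

    population = ["Answer the question step by step.",
                  "Explain your reasoning, then give the final answer."]
    for generation in range(20):
        parent = max(random.sample(population, 2), key=task_accuracy)  # tournament
        child = mutate(parent)
        worst = min(population, key=task_accuracy)
        if task_accuracy(child) >= task_accuracy(worst):
            population[population.index(worst)] = child
    print(max(population, key=task_accuracy))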
Organizers
Roman Senkerik was born in Zlin, the Czech Republic, in 1981. He received the MSc degree in technical cybernetics from the Faculty of Applied Informatics, Tomas Bata University in Zlin, in 2004, the Ph.D. degree, also in technical cybernetics, from the same university in 2008, and the Assoc. Prof. degree in informatics from VSB – Technical University of Ostrava in 2013.
From 2008 to 2013 he was a Research Assistant and Lecturer at the Faculty of Applied Informatics, Tomas Bata University in Zlin. Since 2014 he has been an Associate Professor, and since 2017 Head of the A.I.Lab (https://ailab.fai.utb.cz/) at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlin. He is the author of more than 40 journal papers, 250 conference papers, and several book chapters as well as editorial notes. His research interests include the development of evolutionary algorithms, their modifications and benchmarking, soft computing methods, and their interdisciplinary applications in optimization, cyber-security, machine learning, neuro-evolution, data science, chaos theory, and complex systems. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for special sessions, workshops, and symposiums at GECCO, IEEE WCCI, CEC, and SSCI events.