Evolutionary Computing and Explainable AI
Webpage: https://ecxai.github.io/ecxai/
Description
‘Explainable AI’ (XAI) is an umbrella term covering research on methods designed to provide human-understandable explanations of the decisions made, and the knowledge captured, by AI models. It is currently a very active research area within AI. Evolutionary Computation (EC) draws on concepts found in nature to drive evolution-based systems such as genetic algorithms and evolution strategies. As with other nature-inspired metaheuristics, such as swarm intelligence, the path to a solution is driven by stochastic processes. This creates barriers to explainability: algorithms may return different solutions when re-run from the same input, and technical descriptions of these processes often hinder end-user understanding and acceptance. On the other hand, XAI methods very often require the fitting of some kind of model, and hence EC methods have the potential to play a role in this area. This workshop will focus on the bidirectional interplay between XAI and EC: that is, how XAI can help EC research and how EC can be used within XAI methods.
Recent growth in the adoption of black-box solutions, including EC-based methods, in domains such as medical diagnosis, manufacturing, and transport & logistics has led to greater attention being paid to generating explanations and making them accessible to end users. This increased attention has helped create a fertile environment for applying XAI techniques in the EC domain, for both end-user- and researcher-focused explanation generation. Furthermore, many approaches to XAI in machine learning are based on search algorithms (e.g., Local Interpretable Model-Agnostic Explanations, LIME) that have the potential to draw on the expertise of the EC community. Finally, many of the broader questions, such as which kinds of explanations are most appealing or useful to end users, are faced by XAI researchers in general.
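To make this interplay concrete, the sketch below fits a LIME-style local linear surrogate to a black-box model, but uses a simple (1+1) evolution strategy to optimise the surrogate's weights rather than the weighted least-squares fit LIME itself uses. The black-box function, neighbourhood size, mutation step, and L1 penalty are all illustrative assumptions, not part of any specific tool.

```python
import random

# Hypothetical black-box model: stands in for any opaque classifier score.
def black_box(x):
    return 1.0 if (0.8 * x[0] - 0.5 * x[1] + 0.3 * x[2]) > 0 else 0.0

# Sample perturbations around the instance to explain (a LIME-style neighbourhood).
random.seed(0)
instance = [1.0, 2.0, -1.0]
neighbourhood = [[xi + random.gauss(0, 0.5) for xi in instance] for _ in range(200)]
targets = [black_box(x) for x in neighbourhood]

def loss(weights):
    # Squared error of a linear surrogate over the neighbourhood, plus a small
    # L1 penalty to encourage a sparse, interpretable explanation.
    err = 0.0
    for x, y in zip(neighbourhood, targets):
        pred = sum(w * xi for w, xi in zip(weights, x))
        err += (pred - y) ** 2
    return err / len(targets) + 0.01 * sum(abs(w) for w in weights)

# (1+1)-ES: mutate the surrogate's weights; keep the child if it is no worse.
parent = [0.0, 0.0, 0.0]
best = loss(parent)
for _ in range(2000):
    child = [w + random.gauss(0, 0.1) for w in parent]
    f = loss(child)
    if f <= best:
        parent, best = child, f

# The evolved weights rank features by local influence on the black box.
print("surrogate weights:", [round(w, 2) for w in parent])
```

The evolutionary loop here is deliberately minimal; any EC variant (a full ES, a GA, or a multi-objective method trading accuracy against sparsity) could replace it, which is precisely the kind of substitution this workshop is interested in.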
From an application perspective, important questions have arisen for which XAI may be crucial: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? XAI and related research aim to develop methods for interrogating AI processes in order to answer these questions. This can support decision-makers while also building trust in AI-based decision support through more readily understandable explanations.
We seek contributions on a range of topics relating evolutionary computation (in all its forms) with explainability. Topics of interest include but are not limited to:
· Interpretability vs explainability in EC and their quantification
· Landscape analysis and XAI
· Contributions of EC to XAI in general
· Use of EC to generate explainable/interpretable models
· XAI in real-world applications of EC
· Possible interplay between XAI and EC theory
· Applications of existing XAI methods to EC
· Novel XAI methods for EC
· Legal and ethical considerations
· Case studies / applications of EC & XAI technologies
Organizers
Jaume Bacardit is Professor of Artificial Intelligence at Newcastle University in the UK. He received a BEng and MEng in Computer Engineering and a PhD in Computer Science from Ramon Llull University, Spain, in 1998, 2000 and 2004, respectively. Bacardit’s research interests include the development of machine learning methods for large-scale problems, the design of techniques to extract knowledge from and improve the interpretability of machine learning algorithms (now known as Explainable AI), and the application of these methods to a broad range of problems, mostly in biomedical domains. He leads or has led the data analytics efforts of several large interdisciplinary consortia: D-BOARD (EU FP7, €6M, focusing on biomarker identification), APPROACH (EU IMI, €15M, focusing on disease phenotype identification) and PORTABOLOMICS (UK EPSRC, £4.3M, focusing on synthetic biology). Within GECCO he has organised several workshops (IWLCS 2007-2010, ECBDL’14), been co-chair of the EML track in 2009, 2013, 2014, 2020 and 2021, and Workshops co-chair in 2010 and 2011. He has 100+ peer-reviewed publications that have attracted 7800+ citations, with an h-index of 40 (Google Scholar).
Alexander (Sandy) Brownlee is a Senior Lecturer in the Division of Computing Science and Mathematics at the University of Stirling, where he leads the Data Science and Intelligent Systems research group. His main topics of interest are search-based optimisation methods and machine learning, with a focus on decision support tools and applications in civil engineering, transportation and software engineering. He has published over 80 peer-reviewed papers on these topics. He has worked with several leading businesses, including BT, KLM, and IES, on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journal Complex and Intelligent Systems. He has organised several workshops and tutorials at GECCO, CEC and PPSN on genetic improvement of software.
Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he also obtained a PhD in Biomedical Engineering and was a postdoc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004. Recent research grants include a grant from Regione Emilia-Romagna to support research on industrial applications of Big Data analysis; the co-management of industry/academia cooperation projects, namely the development of a new-generation computer vision-based fruit sorter with Protec srl and of an automatic inspection system for train pantographs with the Italian Railway Network Society (RFI) and Camlin Italy; and an EU-funded Marie Curie Initial Training Network grant for a four-year research training project on medical imaging using bio-inspired and soft computing. He was Editor-in-Chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. From 1999 to 2018 he chaired EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, latterly a track of the EvoApplications conference. From 2005 to 2020 he co-chaired MedGEC, a workshop on medical applications of evolutionary computation at GECCO. He has co-edited journal special issues dedicated to evolutionary computation for image analysis and signal processing, and is a member of the Editorial Boards of the journals "Evolutionary Computation" and "Genetic Programming and Evolvable Machines". He received the Evostar 2009 Award in recognition of the most outstanding contribution to Evolutionary Computation.
Giovanni Iacca is an Associate Professor in Information Engineering at the Department of Information Engineering and Computer Science of the University of Trento, Italy, where he founded the Distributed Intelligence and Optimization Lab (DIOL). Previously, he worked as a postdoctoral researcher in Germany (RWTH Aachen, 2017-2018), Switzerland (University of Lausanne and EPFL, 2013-2016), and The Netherlands (INCAS3, 2012-2016), as well as in industry in the areas of software engineering and industrial automation. He is co-PI of the PATHFINDER-CHALLENGE project "SUSTAIN" (2022-2026). Previously, he was co-PI of the FET-Open project "PHOENIX" (2015-2019). He has received two best paper awards (EvoApps 2017 and UKCI 2012). His research focuses on computational intelligence, distributed systems, explainable AI, and analysis of biomedical data. In these fields, he co-authored more than 180 peer-reviewed publications. He is actively involved in organizing tracks and workshops at some of the top conferences on computational intelligence, and he regularly serves as a reviewer for several journals and conference committees. He is an Associate Editor for IEEE Transactions on Evolutionary Computation, Applied Soft Computing, and Frontiers in Robotics and AI.
John McCall is Emeritus Professor in Computational Intelligence and Industry Optimisation at Robert Gordon University. He has researched machine learning, search and optimisation for over 30 years, making novel contributions to a range of nature-inspired optimisation algorithms and predictive machine learning methods, including EDAs, PSO, ACO and GAs. He has 150+ peer-reviewed publications in books, international journals and conferences; these have received over 3500 citations, with an h-index of 27. John specialises in industrially applied optimisation and decision support, working with major international companies including BT, BP, EDF, CNOOC and Equinor, as well as a diverse range of SMEs. Major application areas for this research are vehicle logistics, fleet planning and transport systems modelling; predictive modelling and maintenance in energy systems; and decision support in industrial operations management. John has attracted direct industrial funding as well as grants from UK and European research funding councils and technology centres. He is a founding director and CEO of Celerum, which specialises in freight logistics, and a founding director and CTO of PlanSea Solutions, which focuses on marine logistics planning. John has served as a member of the IEEE Evolutionary Computation Technical Committee and as Associate Editor of IEEE Computational Intelligence Magazine, the IEEE Systems, Man and Cybernetics Journal, and Complex and Intelligent Systems. He frequently organises workshops and special sessions at leading international conferences, including several GECCO workshops in recent years.