Interpretable Control Competition
Deadline: 2025-06-20
Webpage: https://giorgia-nadizar.github.io/interpretable-control-competition/
Description
Control systems are essential in modern technology, especially in safety-critical applications where trustworthiness is paramount. Yet, many existing systems in this field are opaque, with a strong focus on performance at the expense of interpretability. Compounding this issue is the lack of objective ways to measure interpretability.
This competition, now in its second edition, aims to spark new research into interpretable control systems by establishing a framework for comparing performance and interpretability trade-offs. We also seek to identify characteristics that enhance the interpretability of control policies, drawing on insights from human evaluators.
Last year’s edition included two tracks: a continuous control track (robotic locomotion in simulation) and a discrete control track (the game 2048). This year, we are considering more challenging scenarios, such as vision-based games or real-world robotics, to demonstrate that interpretable control can be effective in a broader range of settings.
Participants are welcome to enter the competition using their preferred methods to develop and interpret control policies for the proposed tasks. We particularly encourage the incorporation of Evolutionary Computation (EC) techniques to enhance either policy generation or interpretability.
Submissions will be evaluated based on both performance and interpretability. Performance will be assessed through simulations of each submitted policy, while interpretability will be evaluated by a panel of judges.
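To illustrate how performance scoring by simulation might look, the sketch below rolls out a candidate policy in a Gymnasium environment and averages the episodic return over a few seeded episodes. The environment name (Hopper-v4), the episode count, and the toy linear policy are assumptions for illustration only, not the competition's official evaluation protocol.

```python
# Minimal sketch of performance evaluation by simulation.
# Assumptions: Gymnasium's Hopper-v4 environment, 10 episodes, and a toy
# linear policy -- not the competition's official protocol.
import numpy as np
import gymnasium as gym


def linear_policy(obs, weights):
    """Toy interpretable policy: action is a clipped linear map of the observation."""
    return np.clip(weights @ obs, -1.0, 1.0)


def evaluate(policy, env_id="Hopper-v4", episodes=10, seed=0):
    """Average undiscounted episodic return over several seeded rollouts."""
    env = gym.make(env_id)
    returns = []
    for ep in range(episodes):
        obs, _ = env.reset(seed=seed + ep)
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return float(np.mean(returns))


if __name__ == "__main__":
    obs_dim, act_dim = 11, 3  # Hopper-v4 observation/action dimensions
    w = np.zeros((act_dim, obs_dim))  # placeholder policy weights
    print("mean return:", evaluate(lambda o: linear_policy(o, w)))
```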
Organizers
Luigi Rovito is a third-year PhD student at the University of Trieste, Italy. His research interests include genetic programming for cryptography and interpretable ML.