Aachen Symposium on Representation Learning to Act and Plan
Vaals, Sept. 8th – 10th, 2025
Welcome
Welcome to the 2025 Aachen Symposium on Representation Learning to Act and Plan at Hotel Kasteel Vaalsbroek in Vaals.
Glad you’re here!
Website
symposium.ml.
Aachen by bus
To visit Aachen, you can take bus 25 or bus 59 for a 40-minute ride. The buses leave from Vaals, Vaalsbroek, 100 meters from the venue, and arrive at Aachen, Elisenbrunnen, right in the city center, with one stop in between. Buy tickets from the bus driver.
[Route: Vaals, Vaalsbroek – Aachen, Elisenbrunnen, bus lines 25 and 59]
Program

Day 1 – Monday, Sept. 8
Opening and welcome (8:45 AM)
Marlos C. Machado – Representation-Driven Option Discovery in RL
Anders Jonsson – Refined and Sample-Efficient Representations for RL
Coffee break
Andrew Cropper – Automating Popper’s Logic of Scientific Discovery
Blai Bonet – Symbolic Methods for Learning General Policies: Ideas and Results
Discussion
Lunch
Sheila McIlraith – TBD
Siddharth Srivastava – Learning Symbolic World Models for RL/Planning from Real-Valued Data
Coffee break
Marc Toussaint – Diverse Solvers
Gerhard Neumann – Reinforcement Learning with Extended Action Representations
Poster Session
Dinner

Day 2 – Tuesday, Sept. 9
Christopher Morris – Expressivity and Generalization Abilities of GNNs
Steven Schockaert – Reasoning with Region-Based Embeddings
Coffee break
Axel Ngonga – Neurosymbolic Concept Learning
Luc De Raedt – A Perspective on Neurosymbolic Artificial Intelligence
Discussion
Lunch
Forest Agostinelli – Solving Pathfinding Problems with High-Level Goal Specifications
Simon Ståhlberg – First-Order Representation Languages for Goal-Conditioned RL
Coffee break
Herke van Hoof – Visual and Relational Representations for Planning Problems
Vincent François-Lavet – Learning Structured Abstract World Models
Discussion
Dinner

Day 3 – Wednesday, Sept. 10
Alfonso Emilio Gerevini – Building Language Models for Planning
Matthijs Spaan – Exploiting Epistemic Uncertainty for Deep Exploration in RL
Coffee break
Giuseppe Marra – Neurosymbolic Safe RL via Probabilistic Logic Shields
Sagar Malhotra – What Can Logic Do for Safe and Explainable Artificial Intelligence?
Discussion
Lunch
Vicenç Gómez – The Linear Bellman Equation and Some Applications
Jonas G., Niklas, Carlos (RLeap members) – Learning STRIPS from Traces
Wrap-up + Coffee
End
Bus back to Aachen
Speakers
Forest Agostinelli
University of South Carolina
Assistant professor at the University of South Carolina. His research focuses on creating algorithms that can solve any pathfinding problem.
Solving Pathfinding Problems with High-Level Goal Specifications
Blai Bonet
RWTH Aachen University
Member of the RLeap group at RWTH. His interests include generalization and explainability in planning and reinforcement learning, and representation and inference with graphs.
Symbolic Methods for Learning General Policies: New Ideas and Results
Ben-Gurion University
Professor at BGU. His interests include planning, modeling, and reinforcement learning, and their application to robotics.
Andrew Cropper
University of Oxford
Research fellow at the University of Oxford, working on integrating machine learning and logical reasoning (inductive logic programming).
Automating Popper’s Logic of Scientific Discovery
Luc De Raedt
KU Leuven
Professor at KU Leuven (Belgium) and Örebro University (Sweden). He is interested in learning and reasoning, especially in neuro-symbolic AI and statistical relational AI.
A Perspective on Neurosymbolic Artificial Intelligence
Vincent François-Lavet
Vrije Universiteit Amsterdam
Assistant professor at VU Amsterdam. His research focuses on deep learning, reinforcement learning, representation learning, and planning.
Learning Structured Abstract World Models
Hector Geffner
RWTH Aachen University
Alexander von Humboldt Professor at RWTH. He is interested in learning representations for acting and planning that generalize.
Alfonso Emilio Gerevini
University of Brescia
Full professor at the University of Brescia. His interests include all aspects of AI planning, particularly learning action models, heuristics, and general policies.
Building Language Models for Planning: Achievements, Limitations and Challenges
Vicenç Gómez
Universitat Pompeu Fabra
Associate professor at UPF. His interests include machine learning, approximate inference, and optimal control, applied to social networks and robotics.
The Linear Bellman Equation and Some Applications
Anders Jonsson
Universitat Pompeu Fabra
Full professor at UPF, working mainly on different topics in reinforcement learning, such as hierarchical, multiagent, and non-Markovian reinforcement learning.
Refined and Sample-Efficient Representations for Reinforcement Learning
Marlos C. Machado
University of Alberta
Assistant professor at the University of Alberta. His interests lie broadly in (deep) reinforcement learning, representation learning, and continual learning.
Representation-Driven Option Discovery in Reinforcement Learning
Sagar Malhotra
TU Wien
Postdoc at TU Wien. He studies foundational problems in logic and probability to build safe, efficient, and explainable AI.
What Can Logic Do for Safe and Explainable Artificial Intelligence?
Giuseppe Marra
KU Leuven
Assistant professor at KU Leuven. His interests include machine learning and reasoning, with a focus on neuro-symbolic AI and relational learning.
Neurosymbolic Safe Reinforcement Learning via Probabilistic Logic Shields
Sheila McIlraith
University of Toronto
Professor at the University of Toronto & the Vector Institute. She researches sequential decision making (symbolic and ML methods), formal languages, and human-compatible AI.
TBD
Christopher Morris
RWTH Aachen University
Assistant professor at RWTH. His interests include graph machine learning from both theoretical and applied viewpoints.
Expressivity and Generalization Abilities of GNNs
Gerhard Neumann
Karlsruhe Institute of Technology
Professor at KIT, heading the Autonomous Learning Robots group. His research focuses on data-efficient and theoretically grounded machine learning methods for robotics.
Reinforcement Learning with Extended Action Representations
Axel Ngonga
Paderborn University
Full professor at Paderborn University. He is interested in neuro-symbolic AI at web scale.
Neurosymbolic Concept Learning
Steven Schockaert
Cardiff University
Professor at Cardiff University, working on Natural Language Processing, neuro-symbolic AI, and representation learning.
Reasoning with Region-Based Embeddings
Matthijs Spaan
Delft University of Technology
Professor at TU Delft, focusing on reinforcement learning algorithms for safe and robust decision-making.
Exploiting Epistemic Uncertainty for Deep Exploration in Reinforcement Learning
Siddharth Srivastava
Arizona State University
Associate professor at ASU, with research interests in learning abstractions for various forms of sequential decision making, reinforcement learning, and taskable robotics.
Learning Symbolic World Models for Reinforcement Learning and Planning from Real-Valued Data
Simon Ståhlberg
RWTH Aachen University
Postdoctoral researcher at RWTH Aachen University. His research interests include classical planning, with a particular focus on machine learning.
First-Order Representation Languages for Goal-Conditioned Reinforcement Learning
Marc Toussaint
Technische Universität Berlin
Professor at TU Berlin. His research integrates machine learning and optimization in robotics, addressing physical reasoning, task-and-motion planning, and adaptive behavior.
Diverse Solvers
Herke van Hoof
University of Amsterdam
Associate professor at the University of Amsterdam. He is interested in reinforcement learning, particularly in modular approaches and combinations with planning algorithms.
Visual and Relational Representations for Planning Problems
Participants
RWTH Aachen University
PhD student in the RLeap group. He works on hierarchical RL for generalization with symbolic state descriptions.
RWTH Aachen University
Postdoc in the RLeap group. He works on logical machine learning, formal explainability, and logical GNNs.
Linköping University
PhD student in the RLeap group. He works on learning general policies with RL using automated curriculum learning.
RWTH Aachen University
PhD student in the RLeap group. He works on domain learning for planning, using analytical methods on symbolic input.
RWTH Aachen University
Postdoc in the RLeap group. He works on generalized planning in non-classical settings.
RWTH Aachen University
PhD student in the RLeap group. He works on learning lifted action models for planning from traces.
RWTH Aachen University
Postdoc in the RLeap group. He works on learning symbolic domain models from action traces using transformers.
RWTH Aachen University
PhD student in the RLeap group. She works on learning general policies for robotic tasks in continuous domains.
RWTH Aachen University
PhD student in the RLeap group. He works on unsupervised learning of symbolic domain models from image observations.
University of Brescia
Visiting PhD student in the RLeap group. He works on goal recognition through efficient deep learning approaches.
University of Melbourne
Visiting PhD student in the RLeap group. She works on AI planning, from complexity analysis to algorithm design.
RWTH Aachen University
PhD student in the RLeap group. He works on integrating planning and learning for robotics under geometric constraints.
RWTH Aachen University
PhD student in the RLeap group. He works on learning to search end-to-end in classical planning domains.