With grateful thanks to our sponsor

City St George's, University of London

Neuro-symbolic

AI Workshop

Date

Friday 13th March 2026

Venue

Imperial College London (White City Campus)

Host

SPIKE Research Group

The UK neuro-symbolic AI research community is steadily growing, along with the range and depth of its research contributions. With rapid developments in Generative AI opening new questions and possibilities, this is a timely opportunity for the community to come together — and the reason we are convening this workshop.

This workshop aims to:

  • Build and deepen connections across the UK neuro-symbolic AI community
  • Share ongoing work and recent results
  • Identify opportunities for collaboration, particularly around upcoming UK and EU funding calls

Programme

Schedule

Morning lightning talks, afternoon breakout sessions

Morning

08:50 Arrival
09:00 Coffee/Tea
09:30 Welcome & Introduction
09:40 Invited Keynote
10:30 Coffee/Tea and Biscuits
10:45 Defining Neurosymbolic Learning and Reasoning
11:00 Exploiting balanced neural network dynamics for probabilistic computations
11:10 Neuro-Argumentative Learning
11:20 Argumentation for Explainable and Globally Contestable Decision Support with LLMs
11:30 Autoformalization and Neuro-Symbolic Systems
11:40 Student Talk (Francesco Belardinelli)
11:50 Automata-based VLM Reward Models
12:00 Symbolic Heuristic Guidance for Efficient Decision-Making
12:10 Uncertainty-aware symbolic scaffolds for LLMs
12:20 Novel Attack Surfaces and Safeguards in Text-Guided Diffusion-Based Image Generation
12:30 Neuro-Symbolic Control at Decoding Time: Guaranteeing LLM Correctness by Construction
12:40 Student Talk (Eleonora Giunchiglia)
12:50 Student Talk (Eleonora Giunchiglia)

Afternoon

13:00 Lunch
14:00 Breakout Sessions
15:30 Coffee/Tea and Biscuits
15:45 Breakout Sessions
17:00 Workshop Concludes

Presenters

Speakers

Alessandra Russo

Welcome & Introduction

Abstract: An overview of the SPIKE group's research agenda and the goals of this workshop, setting the stage for a day of keynote, lightning talks, and collaborative breakout sessions on neuro-symbolic AI.

Luis Lamb

Invited Keynote

Artur Garcez

Defining Neurosymbolic Learning and Reasoning

Jorge Lobo

Exploiting balanced neural network dynamics for probabilistic computations

Francesca Toni

Neuro-Argumentative Learning

Adam Dejl

Argumentation for Explainable and Globally Contestable Decision Support with LLMs

Abstract: Large language models (LLMs) exhibit strong general capabilities, but their deployment in high-stakes domains is hindered by their opacity and unpredictability. Recent work has taken meaningful steps towards addressing these issues by augmenting LLMs with symbolic post-hoc reasoning based on computational argumentation, providing faithful explanations and enabling users to contest incorrect decisions. However, this paradigm is limited to pre-defined binary choices and only supports local contestation for specific instances, leaving the underlying decision logic unchanged and prone to repeated mistakes. In this talk, I will introduce ArgEval, a framework that shifts from instance-specific reasoning to structured evaluation of general decision options. Rather than mining arguments solely for individual cases, ArgEval systematically maps task-specific decision spaces, builds corresponding option ontologies, and constructs general argumentation frameworks (AFs) for each option. These frameworks can then be instantiated to provide explainable recommendations for specific cases while still supporting global contestability through modification of the shared AFs. The effectiveness of ArgEval has been demonstrated in the context of providing treatment recommendations for glioblastoma, an aggressive brain tumour, where it was found to produce explainable guidance aligned with clinical practice.

Kostas Stathis

Autoformalization and Neuro-Symbolic Systems

Abstract: This talk examines how to reason about games with the goal of playing previously unseen ones. We discuss the role of knowledge representation in formalizing game rules and enabling reasoning across games. We then explore how large language models can help autoformalize natural language game descriptions into formal representations. The talk also briefly reviews our work on autoformalization and concludes by linking this approach to neuro-symbolic systems, where neural models support translation and symbolic methods enable reasoning about social interaction scenarios.

Roko Parac

Automata-based VLM Reward Models

Abstract: Reinforcement Learning (RL) has enabled learning remarkable skills in robotics when a well-engineered dense reward function is provided. However, crafting these reward functions is a challenging task, as it usually requires access to true states and domain expertise. As such, a number of recent works have considered incorporating vision-language models (VLMs) into the procedure, with promising early results. However, VLM-defined rewards often struggle with long-horizon tasks and do not incorporate task decomposition. For example, in a task where the agent needs to “move the block to a closed drawer”, the VLM would give positive rewards even when the agent does not open the drawer first. In this talk, we will discuss ideas for incorporating reward machines, an automata-based representation of reward in RL, to tackle these challenges.

Celeste Veronese

Symbolic Heuristic Guidance for Efficient Decision-Making

Abstract: Decision-making in complex environments remains challenging when action spaces are large, rewards are sparse, and long-horizon reasoning is required. While Deep Reinforcement Learning (DRL) has achieved impressive results, it often suffers from inefficient exploration and limited interpretability. My PhD work investigates how symbolic heuristics can guide planning and DRL to address these limitations. By integrating structured, human-interpretable heuristic knowledge into learning systems, this approach aims to improve learning efficiency and enable more transparent and robust decision-making in complex domains.

Lachlan McPheat

Uncertainty-aware symbolic scaffolds for LLMs

Abstract: Symbolic scaffolds for reasoning tasks rely on large language models (LLMs) to translate natural language into formal symbolic representations. Uncertainty in machine learning in general, and natural-to-formal translation in particular, is well documented. We propose a novel method incorporating uncertainties directly into a probabilistic symbolic reasoning process by quantifying uncertainty in the LLM-generated translations and encoding these confidences as imprecise probabilities in probabilistic answer set programming.

Soteris Demetriou

Novel Attack Surfaces and Safeguards in Text-Guided Diffusion-Based Image Generation

Abstract: Text-guided image generation models are now widely deployed, and significant progress has been made in improving their safety and preventing harmful outputs. However, important challenges remain: (a) relatively little attention has been given to privacy risks arising from user prompts, (b) many existing mitigation approaches rely on coarse model modifications that can degrade utility on benign inputs, and (c) they provide limited or unfaithful explanations of why certain behaviours are suppressed. In this talk, I present recent and ongoing work exploring these challenges. First, I show how diffusion-based image generation systems can enable prompt authorship attribution and dementia inference, illustrating how generative models may unintentionally reveal sensitive information about users. I then present DiDOTS, a prompt-sanitization approach that leverages knowledge distillation to paraphrase prompts while preserving their generation intent. Finally, I outline ongoing work on localized concept unlearning and argumentation-based evaluation frameworks, which aim to enable targeted model edits while maintaining benign utility and providing more interpretable safety decisions in generative AI systems.

Mohammad Albinhassan

Neuro-Symbolic Control at Decoding Time: Guaranteeing LLM Correctness by Construction

Abstract: Large Language Models have demonstrated remarkable capabilities across diverse tasks, yet ensuring the correctness of their outputs remains a fundamental challenge. Existing approaches to controlled generation, whether through syntactic constraints, domain-specific rules, or search-based reasoning, fall short of providing the expressivity and formal guarantees needed for reliable real-world deployment. In this talk, I present a line of work that addresses this gap by directly integrating symbolic reasoning and neural language generation at decoding time. I first introduce SEM-CTRL, a framework that leverages Answer Set Grammars, a logic-based formalism extending CFGs with semantic constraints expressed in Answer Set Programming, to enforce both validity and correctness during LLM generation. By combining token-level constraint verification with Monte Carlo Tree Search guided by domain-specific rewards, SEM-CTRL guarantees that every generated output is semantically valid by construction, while efficiently searching for correct solutions. I then show how the need for manual constraint specification can be eliminated entirely: through a two-phase process of syntactic exploration and constraint exploitation, context-sensitive constraints can be automatically learned from LLM interactions and an oracle, making the approach more accessible and adaptive. Across tasks spanning grammar synthesis, combinatorial reasoning, JSON parsing, and planning, this neuro-symbolic approach enables even small off-the-shelf LLMs (1B parameters) to consistently outperform much larger models and state-of-the-art reasoning systems, thereby providing formal correctness guarantees that neither scale nor inference-time reasoning alone can achieve.

Getting There

Venue

Imperial College London

(White City Campus)

I-HUB (Translation & Innovation Hub)
84 Wood Lane
London W12 0BZ

Nearest Tube

White City · Wood Lane

Contact

doc-spike@imperial.ac.uk

Join Us

Registration

The workshop is open to members of the SPIKE research group and invited collaborators. Afternoon breakout sessions are designed for in-person collaboration, so we encourage you to attend in person if possible. To register your interest, please get in touch.

Registration has now closed.
