


7/15/2025
Imperial College London is a world-leading university for science, technology, engineering, medicine and business (STEMB), where scientific imagination leads to world-changing impact.
As a global top-ten university based in London, Imperial uses science to understand more of the universe and improve the lives of more people in it. Across Imperial's nine campuses and throughout the Imperial Global network, its 22,000 students, 8,000 staff, and partners work together on scientific discovery, innovation and entrepreneurship. Their work tackles some of the world's toughest challenges in global health, climate change, AI, business leadership and more.
Founded in 1907, Imperial’s future builds on a distinguished past, having pioneered penicillin, holography and fibre optics. Today, Imperial combines exceptional teaching, world-class facilities and a habit of interdisciplinary practice to unlock scientific imagination.
The Goal
Organising the activities of field operators is crucial to managing the resources at your disposal as efficiently as possible, and thus to ensuring a very high quality of service. The project we developed together not only seeks to provide a concrete answer to this problem, but does so in a way that explains the solution, as clearly as possible, to the human operator.
The Project
We worked together on a highly sophisticated system for optimising the activities of field operators, such as meter installation, maintenance and readings, developed by Terranova over its many years of experience in this field.
Specifically, we focused on re-evaluating the solutions produced by this system in situations where, for example, environmental conditions make them inadequate and operators have to work with the system to quickly arrive at an optimal solution. We have therefore developed an explainability infrastructure around the existing Terranova system.
This allows operators, should the need arise, to interact with the underlying system. To achieve this, we developed an extension of the system using Explainable AI techniques, based on what we call computational argumentation. This means understanding the system by means of arguments and counter-arguments, so that a debate can be established between the system itself and the operators. The result is an augmented system that we tested with the field operators themselves, showing that it reaches efficient solutions much faster than when operators do not use the system we developed together.
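To give a flavour of what "arguments and counter-arguments" means computationally, here is a minimal sketch of an abstract argumentation framework in the style of Dung's semantics. This is an illustration only, not Terranova's or Imperial's implementation: the arguments, attack relation and helper function are all hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: the set of arguments that survive the debate.

    arguments: set of argument labels
    attacks: set of (attacker, target) pairs
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        # An argument is defended if each of its attackers is itself
        # attacked by an already-accepted argument (unattacked
        # arguments are defended vacuously).
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted)
                   for b in attackers[a])
        }
        if defended == accepted:
            return accepted
        accepted = defended

# Toy "debate": the proposed schedule is adequate (p), unless bad
# weather blocks it (w); the operator counters that the road has been
# cleared (c), which defeats w and reinstates p.
args = {"p", "w", "c"}
atts = {("w", "p"), ("c", "w")}
print(sorted(grounded_extension(args, atts)))  # → ['c', 'p']
```

The point of such a semantics is that every accepted conclusion comes with an explicit chain of attacks and defences, which is exactly what makes the outcome explainable to a human operator.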
What is Explainable AI?
Explainable AI is used to describe an AI model, its expected effects and its potential biases. It helps to characterise the accuracy, fairness, transparency and outcomes of the model in the context of AI-based decision-making.
Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI (XAI) at the Department of Computing, Imperial College London, UK, as well as the founder and leader of the CLArg (Computational Logic and Argumentation) research group and of the Research Centre in XAI. She holds an ERC Advanced grant on Argumentation-based Deep Interactive eXplanations (ADIX).
Her research interests lie within the broad area of Explainable AI, at the intersection of Knowledge Representation and Reasoning, Machine Learning, Computational Argumentation, Argument Mining, and Multi-Agent Systems. She is a EurAI Fellow, an IJCAI Trustee, on the Board of Directors for KR Inc., a member of the editorial board of Theory and Practice of Logic Programming, and general chair for IJCAI 2026.
A company like Terranova, which has decades of experience in developing these sophisticated systems, obviously poses challenges for us, because the system has to be understood.
So the interest was intellectual in understanding the system itself, but at the same time it gave us the opportunity to take our research forward in a way that would meet the challenges that the system posed. We also began to see how the kind of research that we do can be used in practice and can have a socioeconomic impact.
The results of this collaboration led to the production of the paper ‘Argumentation for Explainable Workforce Optimisation’, written by Jennifer Leigh, Dimitrios Letsios, Alessandro Mella, Lucio Machetti and Francesca Toni. The paper will be presented at the 14th Conference on Prestigious Applications of Intelligent Systems (PAIS-2025).
Terranova & Open Innovation
We collaborate with research organisations, universities and start-ups around the world, with the common goal of creating solutions that enable us to revolutionise the world of Utilities. Over the years we have built a collaborative ecosystem based on knowledge sharing, within which we have created partnerships that accelerate the innovation process and allow us to develop solutions that exceed market standards.