My contributions to Explainable AI (XAI) encompass research, development, mentorship, and industry applications. I have concentrated on improving the transparency and interpretability of AI systems, with a particular emphasis on healthcare and autonomous systems. I developed hybrid models that combine statistical learning with rule-based reasoning to facilitate human-like decision-making. In healthcare, my work included analyzing large datasets, such as those used for Alzheimer’s diagnosis, ensuring that the insights generated were comprehensible and actionable for practitioners. I have also mentored PhD students conducting XAI research, guiding them toward successful publications, and initiated an entrepreneurial venture focused on bringing explainable AI solutions to the finance sector. Furthermore, I designed logic-based frameworks for autonomous systems and XAI, ensuring adherence to legal requirements and building trust in AI-driven decisions.
Explainable and Efficient Machine Learning using Meta Inverse Entailment
Inductive Logic Programming (ILP) stands out for its capacity to leverage domain expertise and produce human-readable, logically sound rules that incorporate common-sense reasoning. The approach is data-efficient, capable of generalizing from limited examples, and particularly adept at learning recursive rules and inventing new predicates and facts from data. Meta Inverse Entailment (MIE) builds on these strengths: its methodological design allows structured rules to be extracted and formulated directly from sparse datasets, making it invaluable in fields that demand interpretability and rigorous theoretical backing. Its ability to synthesize new knowledge and identify patterns beyond the initial observations opens extensive applications across specialized domains, enhancing both the depth and breadth of potential discoveries and insights.
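The core idea behind inverse entailment, which MIE builds on, can be illustrated with a toy sketch: from a positive example and background facts, construct a most-specific "bottom clause" whose body collects the background literals connected to the example. This is a self-contained illustration in plain Python, not MIE's or PyGol's actual algorithm or API; the predicate names and data layout are invented for the example.

```python
# Illustrative sketch of the inverse-entailment idea (not PyGol's real API).
# A literal is represented as (predicate_name, tuple_of_constants).

background = [
    ("parent", ("ann", "bob")),
    ("parent", ("bob", "cara")),
    ("parent", ("eve", "dan")),
]
example = ("grandparent", ("ann", "cara"))  # positive example to explain

def bottom_clause(example, background):
    """Build a most-specific clause: example as head, plus every
    background literal sharing a constant with the head as body."""
    head_consts = set(example[1])
    body = [lit for lit in background if head_consts & set(lit[1])]
    return example, body

head, body = bottom_clause(example, background)
# body keeps parent(ann,bob) and parent(bob,cara); variabilising the
# shared constants would yield the general rule
# grandparent(X,Z) :- parent(X,Y), parent(Y,Z).
```

A real MIE implementation generalizes such bottom clauses over many examples; this toy version only shows where the candidate body literals come from.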
PyGol is an advanced ILP system designed around the Meta Inverse Entailment (MIE) framework, which significantly enhances the speed and accuracy of learning compared to traditional ILP methods. As a Python-based implementation, PyGol seamlessly integrates with other systems, making it a flexible tool for developing complex reasoning applications. One of its standout features is the ability to perform both abduction (common sense reasoning) and induction within the same framework, facilitating a comprehensive approach to learning from data. This dual capability ensures that PyGol can handle a variety of data types and reasoning tasks efficiently. Additionally, PyGol's data efficiency makes it particularly effective in environments where data is sparse but learning requirements are demanding, providing reliable results even from limited examples.
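The contrast between abduction and induction that PyGol unifies can be sketched on a one-rule toy problem. The functions below are invented for illustration only; they are not PyGol's interface.

```python
# Toy contrast of deduction, abduction, and induction (illustrative only;
# not PyGol's API). A rule is a (head, body) pair meaning head :- body.

rule = ("wet", "rained")   # wet :- rained
facts = {"rained"}

def deduce(rule, facts):
    """Forward reasoning: if the body holds, conclude the head."""
    head, body = rule
    return {head} if body in facts else set()

def abduce(rule, observation):
    """Abduction: observe the head, hypothesise the body as an explanation."""
    head, body = rule
    return {body} if observation == head else set()

def induce(examples):
    """Induction: from (cause, effect) observations, propose a rule."""
    causes = {c for c, _ in examples}
    effects = {e for _, e in examples}
    if len(causes) == 1 and len(effects) == 1:
        return (effects.pop(), causes.pop())  # effect :- cause
    return None
```

Running abduction to fill in missing facts and then induction over the completed data, as PyGol does within one framework, is what makes the combination effective on sparse datasets.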
The functional diversity of microbial communities emerges from the combination of many species and many interaction types, such as competition, mutualism, predation, and parasitism, within microbial ecological networks. Understanding the relationship between microbial networks and the functions delivered by microbial communities is a key challenge for microbial ecology, particularly because so many of these interactions are difficult to observe and characterise. We believe that this 'Dark Web' of interactions can be unravelled using an explainable machine learning approach, Abductive/Inductive Logic Programming (A/ILP), implemented in the R package InfIntE, which uses mechanistic rules (interaction hypotheses) to infer the network structure and interaction types directly from data.
PyILP is available as a pip package.
I designed and implemented a Python tool that combines various cutting-edge Inductive Logic Programming (ILP) algorithms, demonstrating my proficiency in developing versatile tools that enhance the utility and reach of advanced machine learning algorithms within the Python ecosystem. This tool not only simplifies the integration of ILP systems with other machine learning approaches but also facilitates the creation of robust pipelines, enabling more comprehensive data analysis and model development strategies.
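One way such an ILP tool can slot into ordinary machine learning pipelines is by exposing a scikit-learn-style fit/predict interface. The class below is a stand-in toy learner written for illustration, not PyILP's real API; it learns a single threshold rule but shows the interpretable, rule-shaped output that ILP systems provide.

```python
# Hedged sketch: wrapping a rule learner in a fit/predict interface so it
# composes with standard ML pipelines. RuleLearner is a toy stand-in, not
# PyILP's actual API.

class RuleLearner:
    """Learns a single threshold rule: positive iff feature >= cutoff."""

    def fit(self, X, y):
        # choose the smallest value observed among positive examples
        self.cutoff_ = min(x for x, label in zip(X, y) if label == 1)
        return self

    def predict(self, X):
        return [1 if x >= self.cutoff_ else 0 for x in X]

    def explain(self):
        # human-readable rule, in the spirit of ILP's interpretable output
        return f"positive(X) :- X >= {self.cutoff_}"

model = RuleLearner().fit([1, 3, 5, 7], [0, 0, 1, 1])
print(model.explain())        # positive(X) :- X >= 5
print(model.predict([2, 6]))  # [0, 1]
```

Because the learned model is a readable rule rather than an opaque parameter vector, downstream pipeline stages (or human reviewers) can inspect exactly what was induced.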