

Verified Language Processing with Hybrid Explainability

Lookup NU author(s): Ollie Fox, Dr Giacomo BergamiORCiD, Professor Graham MorganORCiD



Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

The volume and diversity of digital information have led to a growing reliance on Machine Learning (ML) techniques, such as Natural Language Processing (NLP), for interpreting and accessing appropriate data. While vector and graph embeddings represent data for similarity tasks, current state-of-the-art pipelines lack guaranteed explainability and fail to accurately determine similarity over full texts. These considerations also apply to classifiers exploiting generative language models with logical prompts, which fail to correctly distinguish between logical implication, indifference, and inconsistency, despite being explicitly trained to recognise the first two classes. To address this, we present a novel pipeline designed for hybrid explainability. Our methodology combines graphs and logic to produce First-Order Logic (FOL) representations, creating machine- and human-readable representations through Montague Grammar (MG). The preliminary results indicate the effectiveness of this approach in accurately capturing full-text similarity. To the best of our knowledge, this is the first approach to differentiate between implication, inconsistency, and indifference for text classification tasks. To address the limitations of existing approaches, we use three self-contained datasets annotated for this classification task to determine the suitability of these approaches in capturing sentence-structure equivalence, logical connectives, and spatiotemporal reasoning. We also use these data to compare the proposed method with language models pre-trained for detecting sentence entailment. The results show that the proposed method outperforms state-of-the-art models, indicating that natural language understanding cannot be easily generalised by training over extensive document corpora. This work offers a step toward more transparent and reliable Information Retrieval (IR) from extensive textual data.
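The three-way distinction the abstract draws (implication vs. inconsistency vs. indifference) can be illustrated with a minimal sketch. This is not the paper's pipeline, which derives full FOL representations via Montague Grammar; here, purely for illustration, sentence pairs are assumed to already be mapped to propositional formulas, and the three labels are assigned by exhaustive model enumeration: A implies B when every model of A satisfies B; A and B are inconsistent when no model satisfies both; otherwise the pair is indifferent.

```python
from itertools import product

def classify(premise, hypothesis, atoms):
    """Label a (premise, hypothesis) pair as 'implication', 'inconsistency',
    or 'indifference' by enumerating all truth assignments over `atoms`.
    Formulas are modelled as functions from an assignment dict to bool."""
    implies, consistent = True, False
    for values in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, values))
        if premise(m):
            if hypothesis(m):
                consistent = True   # some model satisfies both formulas
            else:
                implies = False     # a model of the premise falsifies the hypothesis
    if implies:
        return "implication"
    if not consistent:
        return "inconsistency"
    return "indifference"

# Hypothetical toy encodings: "the cat sits" -> p, "the cat purrs" -> q.
p = lambda m: m["p"]
p_and_q = lambda m: m["p"] and m["q"]
not_p = lambda m: not m["p"]

print(classify(p_and_q, p, ["p", "q"]))  # -> implication
print(classify(p, not_p, ["p"]))         # -> inconsistency
print(classify(p, p_and_q, ["p", "q"]))  # -> indifference
```

Note that pre-trained entailment models typically only cover the first two NLI-style labels; the third ("indifference") is what the paper argues existing classifiers conflate with the others.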


Publication metadata

Author(s): Fox OR, Bergami G, Morgan G

Publication type: Article

Publication status: Published

Journal: Electronics

Year: 2025

Volume: 14

Issue: 17

Print publication date: 01/09/2025

Online publication date: 31/08/2025

Acceptance date: 25/08/2025

Date deposited: 31/08/2025

ISSN (electronic): 2079-9292

Publisher: MDPI

URL: https://doi.org/10.3390/electronics14173490

DOI: 10.3390/electronics14173490

Data Access Statement: The dataset is publicly available at https://osf.io/g5k9q/ (accessed on 1 April 2025). The repository is available through GitHub (https://github.com/LogDS/LaSSI, accessed on 28 April 2025).




Funding

Funder name: EPSRC
