Proceedings of AAAI-MAKE: Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge

Enhancing Knowledge Graph Consistency through Open Large Language Models: A Case Study


High-quality knowledge graphs (KGs) play a crucial role in many applications. However, KGs created by automated information extraction systems can suffer from erroneous extractions or be inconsistent with their provenance/source text, so it is important to identify and correct such problems. In this paper, we study leveraging the emergent reasoning capabilities of large language models (LLMs) to detect inconsistencies between extracted facts and their provenance. With a focus on “open” LLMs that can be run and trained locally, we find that few-shot approaches can yield an absolute performance gain of 2.5-3.4% over the state-of-the-art method with only 9% of the training data. We examine the effect of LLM architecture and show that decoder-only models underperform encoder-decoder approaches. We also explore how model size impacts performance and, counterintuitively, find that larger models do not result in consistent performance gains. Our detailed analyses suggest that while LLMs can improve KG consistency, different LLMs learn different aspects of KG consistency and are sensitive to the number of entities involved.
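The few-shot setup the abstract describes amounts to prompting a model with labeled demonstrations and then asking whether a source sentence supports an extracted fact. A minimal sketch of such prompt construction is below; the demonstration examples, labels, and wording are purely illustrative assumptions, not the paper's actual prompts or data:

```python
# Hypothetical sketch of few-shot prompting for KG-consistency checking.
# The examples and prompt wording are illustrative only.

FEW_SHOT_EXAMPLES = [
    # (provenance sentence, extracted triple, label)
    ("Marie Curie was born in Warsaw.",
     ("Marie Curie", "bornIn", "Warsaw"), "consistent"),
    ("Marie Curie was born in Warsaw.",
     ("Marie Curie", "bornIn", "Paris"), "inconsistent"),
]

def build_prompt(sentence: str, triple: tuple) -> str:
    """Assemble a few-shot prompt asking whether `sentence` supports `triple`."""
    lines = ["Decide whether the sentence supports the fact."]
    for ex_sent, (s, p, o), label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {ex_sent}")
        lines.append(f"Fact: ({s}, {p}, {o})")
        lines.append(f"Answer: {label}")
    s, p, o = triple
    lines.append(f"Sentence: {sentence}")
    lines.append(f"Fact: ({s}, {p}, {o})")
    lines.append("Answer:")
    return "\n".join(lines)

prompt = build_prompt("Alan Turing studied at Cambridge.",
                      ("Alan Turing", "studiedAt", "Cambridge"))
```

The resulting string would be sent to a locally hosted open LLM, whose completion ("consistent" or "inconsistent") serves as the detection verdict.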


in-context learning, information extraction, knowledge graph consistency, knowledge graph, large language model, llm

InProceedings

AAAI