Abstract
Knowledge Integration Framework (KIF) is a Wikidata-based framework for integrating heterogeneous knowledge sources. These can be SPARQL endpoints, SQL endpoints, RDF files, CSV files, etc., and are represented in KIF as knowledge "stores". A KIF store exposes a Wikidata view of the underlying knowledge source by interpreting its content as a set of Wikidata-like statements and allowing it to be queried through a simple but expressive pattern-matching interface. In this paper, we present LLM Store, a KIF store implementation that uses large language models (LLMs) as knowledge sources. Instead of consulting a static knowledge base, LLM Store, when queried, uses the underlying LLM to synthesize Wikidata-like statements on the fly. The knowledge completion pipeline used by LLM Store can be fully customized and supports strategies that range from simple zero-shot prompts to retrieval-augmented generation (RAG). This paper discusses the design and implementation of LLM Store and presents an evaluation using the test and validation datasets of the LM-KBC Challenge @ ISWC 2024. We analyze these results in light of those obtained by our submission to the same challenge, which was based on LLM Store and achieved a macro-averaged F1-score of 91%. LLM Store is released as open source and its code is available at https://github.com/IBM/kif-llm-store.
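To make the store abstraction concrete, the sketch below shows (not from the paper) how an LLM-backed store might be queried through KIF's pattern-matching interface. It assumes the KIF Python API (`Store` and `filter()`); the `'llm'` store name, the `model` keyword argument, and the specific Wikidata identifiers are illustrative assumptions, not the paper's documented configuration.

```python
# Illustrative sketch: querying an LLM-backed KIF store via pattern matching.
# The 'llm' store name and the 'model' argument are assumptions; see the
# kif-llm-store repository for the actual plugin name and options.
from kif_lib import Store
from kif_lib.vocabulary import wd

# Instantiate an LLM Store backed by some language model (hypothetical option).
kb = Store('llm', model='some-llm-model')

# Ask for Wikidata-like statements matching a pattern:
# subject = Brazil (Q155), property = shares border with (P47).
# The store synthesizes matching statements using the underlying LLM.
for stmt in kb.filter(subject=wd.Q(155), property=wd.P(47)):
    print(stmt)
```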