Publication
ISWC-Posters 2020
Conference paper
OWL2Bench: Towards a customizable benchmark for OWL 2 Reasoners
Abstract
In the past decade, there has been remarkable progress in the development of reasoners that support expressive ontology languages such as OWL 2. However, these reasoners still do not scale well on expressive language profiles (OWL 2 DL). To build better-quality reasoners, developers need to find and fix the performance bottlenecks in their existing systems. A reasoner benchmark helps reasoner developers evaluate their system's performance and address its limitations. Furthermore, it paves the way for further research to improve performance and functionality. In particular, a reasoner needs to be evaluated on several aspects, such as support for different language constructs and their combinations, the effect of those constructs on reasoning performance, the ability to handle large ontologies, and the capability to handle queries that involve reasoning. Although some ontology benchmarks exist, they are limited in scope. LUBM and UOBM are based on the older version of OWL (OWL 1). OntoBench supports OWL 2 profiles but does not evaluate reasoner performance. The ORE benchmark framework does not consider evaluation in the context of varying ontology sizes. In essence, no existing benchmark covers all the above-mentioned aspects of reasoner evaluation. Here, we describe our ongoing efforts towards building a customizable ontology benchmark for OWL 2 reasoners named OWL2Bench (to be presented at the ISWC 2020 Resources Track). We also briefly discuss planned future extensions to the benchmark.