Authors: Usbeck, Ricardo | Yan, Xi | Perevalov, Aleksandr | Jiang, Longquan | Schulz, Julius | Kraft, Angelie | Möller, Cedric | Huang, Junbo | Reineke, Jan | Ngonga Ngomo, Axel-Cyrille | Saleem, Muhammad | Both, Andreas
Article Type: Research Article
Abstract:
Knowledge Graph Question Answering (KGQA) has gained attention from both industry and academia over the past decade. Researchers have proposed a substantial number of benchmarking datasets with different properties, pushing development in this field forward. Many of these benchmarks depend on Freebase, DBpedia, or Wikidata. However, KGQA benchmarks that depend on Freebase and DBpedia are studied and used less and less, because Freebase is defunct and DBpedia lacks the structural validity of Wikidata. Research is therefore gravitating toward Wikidata-based benchmarks: new KGQA benchmarks are created on the basis of Wikidata, and existing ones are migrated. We present a new, multilingual, complex KGQA benchmarking dataset as the 10th installment of the Question Answering over Linked Data (QALD) benchmark series. This corpus formerly depended on DBpedia. Since QALD serves as a basis for many machine-generated benchmarks, we increased its size and adapted the benchmark to Wikidata and its mechanism for ranking properties using qualifiers. These measures foster novel KGQA developments through more demanding benchmarks. Creating a benchmark from scratch, or migrating it from DBpedia to Wikidata, is non-trivial due to the complexity of the Wikidata knowledge graph, mapping issues between languages, and the qualifier-based ranking mechanism of properties. We present our creation strategy and the challenges we faced, which will assist other researchers in their future work. Our case study, in the form of a conference challenge, is accompanied by an in-depth analysis of the created benchmark.
Keywords: Knowledge graph question answering, benchmark, challenge, query analysis
DOI: 10.3233/SW-233471
Citation: Semantic Web, vol. Pre-press, no. Pre-press, pp. 1-15, 2023