Entity Extraction from Wikipedia List Pages

Abstract: When it comes to factual knowledge about a wide range of domains, Wikipedia is often the prime source of information on the web. DBpedia and YAGO, as large cross-domain knowledge graphs, encode a subset of that knowledge by creating an entity for each page in Wikipedia and connecting them through edges. It is well known, however, that Wikipedia-based knowledge graphs are far from complete. In particular, as Wikipedia's policies permit pages only for subjects of a certain popularity, such graphs tend to lack information about less well-known entities. Information about these entities is oftentimes available in the encyclopedia, but not represented as an individual page. In this paper, we present a two-phased approach for the extraction of entities from Wikipedia's list pages, which have proven to serve as a valuable source of information. In the first phase, we build a large taxonomy from categories and list pages with DBpedia as a backbone. With distant supervision, we extract training data for the identification of new entities in list pages, which we use in the second phase to train a classification model. With this approach we extract over 700k new entities and extend DBpedia with 7.5M new type statements and 3.8M new facts of high precision.

Uncovering the Semantics of Wikipedia Categories

Abstract: The Wikipedia category graph serves as the taxonomic backbone for large-scale knowledge graphs like YAGO or Probase, and has been used extensively for tasks like entity disambiguation or semantic similarity estimation. Wikipedia's categories are a rich source of taxonomic as well as non-taxonomic information. The category German science fiction writers, for example, encodes the type of its resources (Writer), as well as their nationality (German) and genre (Science Fiction). Several approaches in the literature make use of fractions of this encoded information without exploiting its full potential. In this paper, we introduce an approach for the discovery of category axioms that uses information from the category network, category instances, and their lexicalisations. With DBpedia as background knowledge, we discover 703k axioms covering 502k of Wikipedia's categories and populate the DBpedia knowledge graph with an additional 4.4M relation assertions and 3.3M type assertions at more than 87% and 90% precision, respectively.


The complete code for the extraction of CaLiGraph is available on GitHub.


The complete dataset is hosted on Zenodo. All files are gzipped and in N-Triples format. The data is published under the Creative Commons Attribution 4.0 International Public License.
The complete dataset is also available on the DBpedia Databus. Additionally, a version of DBpedia enriched with CaLiGraph is provided as a collection.
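Since all files are gzipped N-Triples, they can be processed with a few lines of standard-library Python. The following is a minimal sketch of streaming triples out of such a file; the file name and the sample triple are invented for illustration, and the parser is deliberately naive (it assumes URI objects without spaces, which holds for the type and relation files described below but not for label files containing literals).

```python
import gzip
import tempfile

# Illustrative triple in N-Triples syntax; real data comes from the
# Zenodo download. The URIs here are made up for this example.
sample = (
    '<http://caligraph.org/resource/ExampleEntity> '
    '<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> '
    '<http://caligraph.org/ontology/Writer> .\n'
)

# Write a small gzipped file so the sketch is self-contained.
with tempfile.NamedTemporaryFile(suffix=".nt.gz", delete=False) as tmp:
    tmp.write(gzip.compress(sample.encode("utf-8")))
    path = tmp.name

def iter_triples(path):
    """Yield (subject, predicate, object) tuples from a gzipped N-Triples file.

    Naive parser: assumes the object is a URI (no literals with spaces),
    and skips blank lines and comments.
    """
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            s, p, o = line.rstrip(" .").split(" ", 2)
            yield s, p, o

triples = list(iter_triples(path))
```

For anything beyond this sketch (literals, blank nodes, datatypes), a full RDF library such as rdflib is the more robust choice.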


Metadata about the dataset, described using the VoID vocabulary.


Class definitions, property definitions, restrictions, and labels of the CaLiGraph ontology.


Mapping of classes and properties to the DBpedia ontology.


Provenance information about classes (i.e. which Wikipedia category or list page has been used to create this class).


Definition of instances and (non-transitive) types.


Transitive types for instances (can also be induced by a reasoner).
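The note above says the transitive types can also be induced by a reasoner. The following sketch shows the underlying idea: walking a subclass hierarchy upwards from an instance's direct types. The class hierarchy and the entity name are invented for illustration; in practice these come from the ontology and instance files.

```python
# Hypothetical fragment of a subclass hierarchy (child -> parent).
subclass_of = {
    "ScienceFictionWriter": "Writer",
    "Writer": "Person",
}

# Hypothetical direct (non-transitive) type assertions.
direct_types = {"ExampleWriter": {"ScienceFictionWriter"}}

def transitive_types(entity):
    """Return the direct types of an entity plus all their superclasses."""
    types = set(direct_types.get(entity, set()))
    frontier = list(types)
    while frontier:
        cls = frontier.pop()
        parent = subclass_of.get(cls)
        if parent and parent not in types:
            types.add(parent)
            frontier.append(parent)
    return types
```

The precomputed file simply spares consumers this closure computation.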


Labels for instances.


Relations between instances derived from the class restrictions of the ontology (can also be induced by a reasoner).
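To illustrate how relations follow from class restrictions, consider a class like "German science fiction writers" (from the abstract above) that restricts the nationality and genre of its members: every instance of the class then inherits those property values. The sketch below materialises such assertions; the class, property, and instance names are invented for illustration.

```python
# Hypothetical hasValue-style restrictions: each class maps to
# (property, value) pairs that hold for all of its instances.
restrictions = {
    "GermanScienceFictionWriter": [
        ("nationality", "Germany"),
        ("genre", "ScienceFiction"),
    ],
}

# Hypothetical class membership.
instances = {"GermanScienceFictionWriter": ["Example_Writer"]}

# Materialise one (subject, property, value) triple per instance
# and per restriction of its class.
derived = [
    (inst, prop, value)
    for cls, props in restrictions.items()
    for prop, value in props
    for inst in instances.get(cls, [])
]
```

As with the transitive types, an OWL reasoner would derive the same assertions from the ontology; the file ships them precomputed.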


Mapping of instances to respective DBpedia instances.


Provenance information about instances (e.g., whether the instance was extracted from a Wikipedia list page).


Additional instances of CaLiGraph that are not in DBpedia.
This file is not part of CaLiGraph itself but is intended as an extension to DBpedia.


Additional types of CaLiGraph that are not in DBpedia.
This file is not part of CaLiGraph itself but is intended as an extension to DBpedia.


Additional relations of CaLiGraph that are not in DBpedia.
This file is not part of CaLiGraph itself but is intended as an extension to DBpedia.