{{Infobox work
| title = Yago: a Core of Semantic Knowledge Unifying Wordnet and Wikipedia
| date = 2007
| authors = [[Fabian M. Suchanek]]<br />[[Gjergji Kasneci]]<br />[[Gerhard Weikum]]
| doi = 10.1145/1242572.1242667
| link = http://www2007.org/papers/paper391.pdf
}}
 
'''Yago: a Core of Semantic Knowledge Unifying Wordnet and Wikipedia''' - a scientific work related to [[Wikipedia quality]], published in 2007 and written by [[Fabian M. Suchanek]], [[Gjergji Kasneci]] and [[Gerhard Weikum]].
 
== Overview ==
 
The authors present YAGO, a lightweight and extensible [[ontology]] with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from [[Wikipedia]] and unified with [[WordNet]], using a carefully designed combination of rule-based and heuristic methods described in the paper. The resulting knowledge base is a major step beyond WordNet: in quality, by adding knowledge about individuals such as persons, organizations and products together with their semantic relationships, and in quantity, by increasing the number of facts by more than an order of magnitude. The authors' empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, the authors show how YAGO can be further extended by state-of-the-art [[information extraction]] techniques.
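The data model summarized above can be pictured as a store of (entity, relation, entity) facts, in which ''type'' and ''subClassOf'' facts form the Is-A hierarchy and relations such as ''hasWonPrize'' link individual entities. Below is a minimal Python sketch of such a triple store with a transitive Is-A lookup; the class, method and fact names are illustrative assumptions, not code or data taken from YAGO itself.

<syntaxhighlight lang="python">
# Illustrative sketch of a YAGO-style fact store: facts are (subject, relation, object)
# triples, and the Is-A hierarchy is formed by "type" and "subClassOf" facts.
# Entity and fact names below are examples, not actual YAGO content.

from collections import defaultdict


class FactStore:
    def __init__(self):
        # Index facts by relation name for simple lookups.
        self.by_relation = defaultdict(set)

    def add(self, subject, relation, obj):
        self.by_relation[relation].add((subject, obj))

    def objects(self, subject, relation):
        """All objects o such that (subject, relation, o) is a stored fact."""
        return {o for s, o in self.by_relation[relation] if s == subject}

    def classes_of(self, entity):
        """Transitive Is-A lookup: direct types plus all their superclasses."""
        seen, frontier = set(), set(self.objects(entity, "type"))
        while frontier:
            cls = frontier.pop()
            if cls not in seen:
                seen.add(cls)
                frontier |= self.objects(cls, "subClassOf")
        return seen


store = FactStore()
store.add("Albert_Einstein", "type", "physicist")           # taxonomic fact
store.add("physicist", "subClassOf", "scientist")           # Is-A hierarchy
store.add("scientist", "subClassOf", "person")
store.add("Albert_Einstein", "hasWonPrize", "Nobel_Prize")  # non-taxonomic relation

print(store.classes_of("Albert_Einstein"))              # {'physicist', 'scientist', 'person'}
print(store.objects("Albert_Einstein", "hasWonPrize"))  # {'Nobel_Prize'}
</syntaxhighlight>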
 
