Automatically Refining the Wikipedia Infobox Ontology


Automatically Refining the Wikipedia Infobox Ontology - a scientific work related to Wikipedia quality, published in 2008 and written by Fei Wu and Daniel S. Weld.

Overview

The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more semantic knowledge from natural language text. But in order to realize the full power of this information, it must be situated in a cleanly structured ontology. The paper introduces KOG, an autonomous system for refining Wikipedia's infobox-class ontology toward this end. The authors cast ontology refinement as a machine learning problem and solve it using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks. They present experiments demonstrating the superiority of the joint-inference approach and evaluating other aspects of the system. Using these techniques, the authors build a rich ontology that integrates Wikipedia's infobox-class schemata with WordNet, and demonstrate how the resulting ontology can be used to enhance Wikipedia with improved query processing and other features.
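The ideas above can be illustrated with a small sketch. This is not the paper's actual implementation: the function names and the infobox schemata below are hypothetical, and the attribute-overlap score stands in for just one of the kinds of features a subsumption classifier (such as an SVM) might consume.

```python
# Illustrative sketch, not KOG's actual code: infoboxes flattened into
# object-attribute-value triples, plus a simple attribute-overlap feature
# of the kind a subsumption (ISA) classifier might use.

def infobox_triples(subject, infobox):
    """Flatten an infobox dict into (object, attribute, value) triples."""
    return [(subject, attr, val) for attr, val in infobox.items()]

def attribute_overlap(child_attrs, parent_attrs):
    """Fraction of the child class's attributes shared with the parent.
    High overlap is weak evidence that the child class ISA the parent."""
    child, parent = set(child_attrs), set(parent_attrs)
    if not child:
        return 0.0
    return len(child & parent) / len(child)

# Hypothetical schemata for two infobox classes.
person_attrs = ["name", "birth_date", "birth_place", "occupation"]
scientist_attrs = ["name", "birth_date", "birth_place", "occupation", "field"]

triples = infobox_triples("Marie Curie",
                          {"name": "Marie Curie", "field": "Physics"})
print(triples)
# 4 of scientist's 5 attributes also appear in person -> 0.8,
# suggesting "scientist ISA person" is plausible.
print(attribute_overlap(scientist_attrs, person_attrs))  # 0.8
```

In the paper, such per-pair evidence is not used in isolation: the joint-inference formulation in Markov Logic Networks lets subsumption decisions for related class pairs constrain one another, which is what the experiments show outperforming the independent SVM classifier.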