'''Machine Reading: from Wikipedia to the Web''' - a scientific work related to [[Wikipedia quality]], published in 2010 and written by [[Daniel S. Weld]] and [[Fei Wu]].
  
 
== Overview ==
Berners-Lee's compelling vision of a Semantic Web is hindered by a chicken-and-egg problem, which can best be solved via machine reading: automatically extracting information from natural-language text to make it accessible to software agents. The authors argue that bootstrapping is the best way to build such a system, and they choose [[Wikipedia]] as the initial data source because it is comprehensive, high-quality, and contains enough collaboratively created structure to launch a self-supervised bootstrapping process. The authors developed three systems that realize this vision:
* KYLIN, which applies the Wikipedia heuristic of matching sentences with [[infoboxes]] to create training examples for learning relation-specific extractors (a minimal sketch of this heuristic follows below).
* KOG, which automatically generates a Wikipedia infobox ontology by integrating evidence from heterogeneous resources via joint inference using Markov Logic Networks.
* WOE, which uses the same sentence-matching heuristic as KYLIN, but abstracts the matched examples into relation-independent training data in order to learn an unlexicalized open extractor.
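The KYLIN-style matching heuristic lends itself to a short illustration. The Python sketch below labels a sentence as a positive training example for an infobox attribute when the sentence mentions that attribute's value; all names here are hypothetical and the naive substring matching is an assumption for clarity, not the paper's actual implementation, which matches more carefully and then trains machine-learned extractors from the resulting examples.

<pre>
# Hypothetical sketch of a KYLIN-style self-supervised labeling heuristic.
# A sentence that mentions an infobox attribute's value becomes a positive
# training example for that attribute's extractor; other sentences become
# negatives. Naive substring matching is used here purely for illustration.

def make_training_examples(infobox: dict, sentences: list) -> list:
    """Return (attribute, sentence, label) triples for extractor training."""
    examples = []
    for attribute, value in infobox.items():
        for sentence in sentences:
            label = 1 if value.lower() in sentence.lower() else 0
            examples.append((attribute, sentence, label))
    return examples


if __name__ == "__main__":
    # Toy article: its infobox attribute-value pairs and body sentences.
    infobox = {"birth_place": "Ulm", "field": "physics"}
    sentences = [
        "Einstein was born in Ulm, in the German Empire.",
        "He received the Nobel Prize in 1921.",
    ]
    for attr, sent, label in make_training_examples(infobox, sentences):
        if label:
            print(f"positive example for '{attr}': {sent}")
</pre>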
