{{Infobox work
| title = Tractable Probabilistic Knowledge Bases: Wikipedia and Beyond
| date = 2014
| authors = [[Mathias Niepert]]<br />[[Pedro M. Domingos]]
| link = https://dl.acm.org/citation.cfm?id=2908353
}}
 
'''Tractable Probabilistic Knowledge Bases: Wikipedia and Beyond''' - a scientific work related to [[Wikipedia quality]], published in 2014 and written by [[Mathias Niepert]] and [[Pedro M. Domingos]].
 
== Overview ==
 
Building large-scale knowledge bases from a variety of data sources is a longstanding goal of AI research. However, existing approaches either ignore the uncertainty inherent in knowledge extracted from text, the web, and other sources, or lack a consistent probabilistic semantics with tractable inference. To address this problem, the authors present a framework for tractable probabilistic knowledge bases (TPKBs). A TPKB consists of a hierarchy of classes of objects and a hierarchy of classes of object pairs, such that attributes and relations are independent conditioned on those classes. These characteristics make both probabilistic reasoning and maximum-likelihood parameter learning tractable. TPKBs feature a rich query language that allows one to express and infer complex relationships between classes, relations, objects, and their attributes. Queries are translated into sequences of relational database operations, yielding query execution times in the sub-second range. The authors demonstrate the power of TPKBs by leveraging large data sets extracted from [[Wikipedia]] to learn the structure and parameters of a TPKB. The resulting model captures a distribution over millions of objects with billions of parameters. The authors apply the TPKB to entity resolution and object linking problems and show that it can accurately align large knowledge bases and integrate triples from open information extraction (IE) projects.
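The key structural property can be illustrated with a toy sketch (this is not the paper's implementation; the class names, priors, and probabilities below are invented for illustration). Because attributes are independent given an object's class, a query marginal reduces to a sum over classes of a product of per-attribute probabilities, which is linear in the number of classes and attributes rather than exponential:

```python
import math

# Hypothetical two-class model: each class has a prior and
# per-attribute Bernoulli probabilities (values are made up).
CLASS_MODEL = {
    "scientist": {"prior": 0.3, "attrs": {"has_phd": 0.8,  "lives_in_city": 0.6}},
    "athlete":   {"prior": 0.7, "attrs": {"has_phd": 0.05, "lives_in_city": 0.7}},
}

def query_prob(observed):
    """P(observed attrs) = sum over classes c of P(c) * prod_a P(a | c)."""
    total = 0.0
    for cls in CLASS_MODEL.values():
        p = cls["prior"]
        for attr, value in observed.items():
            p_attr = cls["attrs"][attr]
            p *= p_attr if value else (1.0 - p_attr)
        total += p
    return total

def posterior_class(observed):
    """P(class | observed attrs) by Bayes' rule, using the same factorization."""
    evidence = query_prob(observed)
    return {
        name: cls["prior"]
        * math.prod(cls["attrs"][a] if v else 1.0 - cls["attrs"][a]
                    for a, v in observed.items())
        / evidence
        for name, cls in CLASS_MODEL.items()
    }

# P(has_phd) = 0.3*0.8 + 0.7*0.05 = 0.275
print(query_prob({"has_phd": True}))
# P(scientist | has_phd) = 0.24 / 0.275
print(posterior_class({"has_phd": True})["scientist"])
```

Each per-class product here corresponds to a simple aggregation over precomputed tables, which is one way to see why such queries can be pushed down into a relational database.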
 