Expanding Textual Entailment Corpora Fromwikipedia Using Co-Training

From Wikipedia Quality
'''Expanding Textual Entailment Corpora Fromwikipedia Using Co-Training''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Fabio Massimo Zanzotto]] and [[Marco Pennacchiotti]].
  
 
== Overview ==
 
In this paper the authors propose a novel method to automatically extract large textual entailment datasets that are homogeneous to existing ones. The key idea is the combination of two intuitions: (1) the use of [[Wikipedia]] to extract a large set of textual entailment pairs; (2) the application of semi-supervised machine learning methods to make the extracted dataset homogeneous to the existing ones. The authors report empirical evidence that the method successfully expands existing textual entailment corpora.
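The semi-supervised step in intuition (2) is co-training, which trains two classifiers on two independent "views" of each example and lets each classifier label unlabeled examples for the other. A minimal illustrative sketch of that loop is below; the toy numeric views, the threshold classifiers, and all names are assumptions for illustration, not the authors' actual features or learner:

```python
# Hypothetical co-training sketch (Blum & Mitchell-style loop).
# Assumption: each candidate entailment pair has two independent
# numeric "views" (v1, v2); real systems would use rich feature sets.

def train(examples):
    """Fit a trivial threshold classifier on one numeric view:
    the midpoint between the positive and negative class means.
    Assumes both classes are present in `examples`."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, x):
    """Return (label, confidence): 1 (entailment) above the
    threshold, 0 otherwise; confidence is distance from it."""
    return (1 if x > threshold else 0), abs(x - threshold)

def co_train(labeled, unlabeled, rounds=5):
    """labeled: list of ((v1, v2), label); unlabeled: list of (v1, v2).
    Each round, each view's classifier labels its most confident
    unlabeled pair and adds it to the OTHER view's training set."""
    l1 = [(v1, y) for (v1, _), y in labeled]
    l2 = [(v2, y) for (_, v2), y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        t1, t2 = train(l1), train(l2)
        # classifier on view 1 teaches the view-2 classifier
        best = max(pool, key=lambda p: predict(t1, p[0])[1])
        l2.append((best[1], predict(t1, best[0])[0]))
        pool.remove(best)
        if not pool:
            break
        # classifier on view 2 teaches the view-1 classifier
        best = max(pool, key=lambda p: predict(t2, p[1])[1])
        l1.append((best[0], predict(t2, best[1])[0]))
        pool.remove(best)
    return train(l1), train(l2)
```

In the paper's setting the labeled seed would come from an existing entailment corpus and the unlabeled pool from Wikipedia-derived pairs, so the newly labeled pairs end up homogeneous to the seed corpus.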

Revision as of 08:25, 2 June 2019
