Revision as of 09:38, 9 October 2019
Authors | Fabio Massimo Zanzotto, Marco Pennacchiotti
---|---
Publication date | 2010
Links | Original
Expanding Textual Entailment Corpora Fromwikipedia Using Co-Training is a scientific work related to Wikipedia quality, published in 2010 and written by Fabio Massimo Zanzotto and Marco Pennacchiotti.
Overview
In this paper the authors propose a novel method for automatically extracting large textual entailment datasets that are homogeneous to existing ones. The key idea is the combination of two intuitions: (1) the use of Wikipedia to extract a large set of textual entailment pairs; (2) the application of semi-supervised machine learning methods (co-training) to make the extracted dataset homogeneous to the existing ones. The authors report empirical evidence that the method successfully expands existing textual entailment corpora.
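The semi-supervised component named in the title is co-training: two classifiers trained on independent feature "views" of the same examples iteratively pseudo-label unlabeled pairs for each other. As an illustration only, the sketch below runs generic co-training on synthetic two-view data; the feature views, the trivial threshold classifiers, and the data are assumptions made for the example, not the features or learners used in the paper.

```python
import random

random.seed(0)

def fit_threshold(points):
    """Trivial one-view classifier: threshold at the midpoint of class means."""
    pos = [x for x, y in points if y == 1]
    neg = [x for x, y in points if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(thr, x):
    return 1 if x > thr else 0

def confidence(thr, x):
    # distance from the decision boundary as a crude confidence score
    return abs(x - thr)

def co_train(labeled, unlabeled, rounds=5, k=2):
    """Generic co-training over two feature views.

    labeled:   list of ((view1, view2), label) seed examples
    unlabeled: list of (view1, view2) pairs to be pseudo-labeled
    Returns the final threshold for each view.
    """
    l1 = [(v1, y) for (v1, v2), y in labeled]   # training set for view 1
    l2 = [(v2, y) for (v1, v2), y in labeled]   # training set for view 2
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        t1, t2 = fit_threshold(l1), fit_threshold(l2)
        # view 1 pseudo-labels its k most confident pool items for view 2
        pool.sort(key=lambda v: confidence(t1, v[0]), reverse=True)
        picked, pool = pool[:k], pool[k:]
        for v1, v2 in picked:
            l2.append((v2, predict(t1, v1)))
        # view 2 pseudo-labels its k most confident remaining items for view 1
        pool.sort(key=lambda v: confidence(t2, v[1]), reverse=True)
        picked, pool = pool[:k], pool[k:]
        for v1, v2 in picked:
            l1.append((v1, predict(t2, v2)))
    return fit_threshold(l1), fit_threshold(l2)

# Synthetic data: positives cluster near 0.8 in both views, negatives near 0.2.
labeled = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
unlabeled = (
    [(random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)) for _ in range(10)]
    + [(random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)) for _ in range(10)]
)
t1, t2 = co_train(labeled, unlabeled)
print(predict(t1, 0.85), predict(t1, 0.15))  # → 1 0
```

The point of the two-view setup is that each classifier's most confident decisions act as extra labeled data for the other, so the small seed corpus grows without manual annotation, which mirrors, at a toy scale, how the paper expands an existing entailment corpus with Wikipedia-derived pairs.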
Embed
Wikipedia Quality
Zanzotto, Fabio Massimo; Pennacchiotti, Marco. (2010). "[[Expanding Textual Entailment Corpora Fromwikipedia Using Co-Training]]".
English Wikipedia
{{cite journal |last1=Zanzotto |first1=Fabio Massimo |last2=Pennacchiotti |first2=Marco |title=Expanding Textual Entailment Corpora Fromwikipedia Using Co-Training |date=2010 |url=https://wikipediaquality.com/wiki/Expanding_Textual_Entailment_Corpora_Fromwikipedia_Using_Co-Training}}
HTML
Zanzotto, Fabio Massimo; Pennacchiotti, Marco. (2010). "<a href="https://wikipediaquality.com/wiki/Expanding_Textual_Entailment_Corpora_Fromwikipedia_Using_Co-Training">Expanding Textual Entailment Corpora Fromwikipedia Using Co-Training</a>".