'''Evaluating Answer Extraction for Why-Qa Using Rst-Annotated Wikipedia Texts''' - scientific work related to [[Wikipedia quality]] published in 2007, written by [[Suzan Verberne]].
  
 
== Overview ==
This paper focuses on the task of answer extraction for why-questions. As opposed to techniques for factoid QA, finding answers to why-questions involves exploiting text structure. The authors therefore approach the answer extraction problem as a discourse analysis task, using Rhetorical Structure Theory (RST) as a framework. They evaluated this method using a set of why-questions asked of the online [[question answering]] system answers.com, together with a corpus of answer fragments from [[Wikipedia]] manually annotated with RST structures. The maximum recall that the answer extraction procedure can obtain is about 60%. The authors suggest paragraph retrieval as a supplementary and alternative approach to RST-based answer extraction.
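The general idea of mining an RST analysis for why-answers can be sketched as follows. This is an illustrative example only, not the paper's implementation: the tree format (nested dicts with `relation`, `nucleus`, and `satellite` keys) and the set of explanatory relation labels are assumptions made for the sketch.

```python
# Hypothetical relation labels that typically carry explanatory content.
CAUSAL_RELATIONS = {"cause", "explanation", "purpose", "reason", "result"}

def extract_answer_candidates(node, candidates=None):
    """Walk a nested-dict RST tree (assumed format) and collect text spans
    attached by causal/explanatory relations as why-answer candidates."""
    if candidates is None:
        candidates = []
    if node.get("relation", "").lower() in CAUSAL_RELATIONS:
        # In RST, the satellite of a causal relation usually holds the
        # explanatory material, i.e. the candidate answer span.
        satellite = node.get("satellite", {})
        if "text" in satellite:
            candidates.append(satellite["text"])
    # Recurse into both children of the relation.
    for child in ("nucleus", "satellite"):
        sub = node.get(child)
        if isinstance(sub, dict):
            extract_answer_candidates(sub, candidates)
    return candidates

# Toy tree for a question like "Why do birds migrate?"
tree = {
    "relation": "cause",
    "nucleus": {"text": "Many birds migrate south in winter."},
    "satellite": {"text": "because food becomes scarce in cold regions"},
}
print(extract_answer_candidates(tree))
# ['because food becomes scarce in cold regions']
```

A candidate list produced this way would still need to be ranked against the question, which is one reason recall rather than precision is the natural ceiling to report for the extraction step itself.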
