Evaluating Answer Extraction for Why-QA Using RST-Annotated Wikipedia Texts


Evaluating Answer Extraction for Why-QA Using RST-Annotated Wikipedia Texts - a scientific work related to Wikipedia quality, published in 2007 and written by Suzan Verberne.

Overview

In this paper the research focus is on the task of answer extraction for why-questions. As opposed to techniques for factoid QA, finding answers to why-questions involves exploiting text structure. The authors therefore approach the answer extraction problem as a discourse analysis task, using Rhetorical Structure Theory (RST) as the framework. They evaluated this method using a set of why-questions that had been asked to the online question answering system answers.com, together with a corpus of answer fragments from Wikipedia manually annotated with RST structures. The maximum recall that can be obtained by the answer extraction procedure is about 60%. The authors suggest paragraph retrieval as a supplementary and alternative approach to RST-based answer extraction.
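The general idea of RST-based answer extraction can be pictured as a traversal of a discourse tree that collects satellite spans linked by a causal relation to material that matches the question topic. The sketch below is a minimal, hypothetical illustration of that idea in Python; the node representation, the relation set, and the word-overlap topic match are assumptions made for the example and are not the exact procedure evaluated in the paper.

<pre>
# Hypothetical sketch of RST-based answer extraction for why-questions.
# The relation names and the matching heuristic are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional

# RST relations that commonly signal an answer to a why-question (assumed set)
CAUSAL_RELATIONS = {"Cause", "Reason", "Purpose", "Explanation", "Motivation"}

@dataclass
class RSTNode:
    text: str = ""                       # text span (empty for internal nodes)
    relation: Optional[str] = None       # relation linking this node to its nucleus
    is_nucleus: bool = True              # nucleus vs. satellite role
    children: List["RSTNode"] = field(default_factory=list)

def span_text(node: RSTNode) -> str:
    """Concatenate the leaf texts under a node."""
    if not node.children:
        return node.text
    return " ".join(span_text(c) for c in node.children)

def extract_answers(tree: RSTNode, question_terms: set) -> List[str]:
    """Return satellite spans attached by a causal relation to a nucleus
    that shares vocabulary with the question."""
    answers = []

    def walk(node: RSTNode):
        nuclei = [c for c in node.children if c.is_nucleus]
        satellites = [c for c in node.children if not c.is_nucleus]
        for sat in satellites:
            if sat.relation in CAUSAL_RELATIONS:
                nucleus_text = " ".join(span_text(n) for n in nuclei).lower()
                # crude topic match: any word overlap with the question
                if question_terms & set(nucleus_text.split()):
                    answers.append(span_text(sat))
        for child in node.children:
            walk(child)

    walk(tree)
    return answers
</pre>

In a setup like this, recall is bounded by how often the correct answer actually occurs as a causally related satellite in the annotated discourse structure, which is the kind of ceiling the reported figure of about 60% reflects; this limitation is also what motivates paragraph retrieval as a complementary strategy.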