{{Infobox work
| title = Evaluating Answer Extraction for Why-Qa Using Rst-Annotated Wikipedia Texts
| date = 2007
| authors = [[Suzan Verberne]]
| link = http://repository.ubn.ru.nl/handle/2066/44141
}}
'''Evaluating Answer Extraction for Why-Qa Using Rst-Annotated Wikipedia Texts''' - scientific work related to [[Wikipedia quality]] published in 2007, written by [[Suzan Verberne]].
  
 
== Overview ==
This paper focuses on the task of answer extraction for why-questions. In contrast to techniques for factoid QA, finding answers to why-questions involves exploiting text structure. The authors therefore approach answer extraction as a discourse analysis task, using Rhetorical Structure Theory (RST) as the framework. They evaluate the method on a set of why-questions submitted to the online [[question answering]] system answers.com, together with a corpus of answer fragments from [[Wikipedia]] that was manually annotated with RST structures. The maximum recall that can be obtained by the answer extraction procedure is about 60%. The authors suggest paragraph retrieval as a supplementary and alternative approach to RST-based answer extraction.
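The paper itself does not include code, but the core idea of exploiting discourse structure can be illustrated with a short sketch. The sketch below is a simplified illustration only, not the authors' procedure: it assumes RST annotations are available as (relation, nucleus, satellite) triples and treats the satellites of explanatory relations (such as cause or purpose) as candidate answers to a why-question. The relation list, data format and function name are assumptions made for this example.

<syntaxhighlight lang="python">
# Simplified, hypothetical sketch of RST-based answer extraction for why-questions.
# This is NOT the procedure from the paper; relation names and data format are assumed.

# RST relation labels that typically carry an explanation relevant to a why-question.
WHY_RELATIONS = {"cause", "purpose", "explanation", "reason", "result", "motivation"}

def extract_candidate_answers(rst_relations):
    """Collect satellite spans attached by explanatory RST relations.

    rst_relations: iterable of (relation, nucleus, satellite) string triples.
    Returns a list of candidate answers together with the span they explain.
    """
    candidates = []
    for relation, nucleus, satellite in rst_relations:
        if relation.lower() in WHY_RELATIONS:
            # The satellite of a causal/explanatory relation often holds the
            # "because ..." content that answers a why-question about the nucleus.
            candidates.append({"relation": relation, "about": nucleus, "answer": satellite})
    return candidates

# Toy, hand-made annotation of two text fragments.
annotated_fragments = [
    ("Elaboration", "RST describes text structure.", "It links spans with nucleus-satellite relations."),
    ("Cause", "The library was closed for a month.", "Flooding had damaged the ground floor."),
]

for candidate in extract_candidate_answers(annotated_fragments):
    print(candidate["relation"], "->", candidate["answer"])
</syntaxhighlight>

In such an approach, recall is limited by how often the answer is actually realised as the satellite of an explanatory relation, which is consistent with the roughly 60% maximum recall reported in the paper.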

== Embed ==
=== Wikipedia Quality ===
<code>
<nowiki>
Verberne, Suzan. (2007). "[[Evaluating Answer Extraction for Why-Qa Using Rst-Annotated Wikipedia Texts]]". Dublin : Trinity College.
</nowiki>
</code>

=== English Wikipedia ===
<code>
<nowiki>
{{cite journal |last1=Verberne |first1=Suzan |title=Evaluating Answer Extraction for Why-Qa Using Rst-Annotated Wikipedia Texts |date=2007 |url=https://wikipediaquality.com/wiki/Evaluating_Answer_Extraction_for_Why-Qa_Using_Rst-Annotated_Wikipedia_Texts |journal=Dublin : Trinity College}}
</nowiki>
</code>

=== HTML ===
<code>
<nowiki>
Verberne, Suzan. (2007). &quot;<a href="https://wikipediaquality.com/wiki/Evaluating_Answer_Extraction_for_Why-Qa_Using_Rst-Annotated_Wikipedia_Texts">Evaluating Answer Extraction for Why-Qa Using Rst-Annotated Wikipedia Texts</a>&quot;. Dublin : Trinity College.
</nowiki>
</code>

[[Category:Scientific works]]
