Recall-Oriented Learning of Named Entities in Arabic Wikipedia

{{Infobox work
| title = Recall-Oriented Learning of Named Entities in Arabic Wikipedia
| date = 2012
| authors = [[Behrang Mohit]]<br />[[Nathan Schneider]]<br />[[Rishav Bhowmick]]<br />[[Kemal Oflazer]]<br />[[Noah A. Smith]]
| link = http://dl.acm.org/citation.cfm?id=2380839
| plink = https://www.researchgate.net/profile/Kemal_Oflazer/publication/231950531_Recall-Oriented_Learning_of_Named_Entities_in_Arabic_Wikipedia/links/0912f5071c89097f67000000.pdf
}}
 
'''Recall-Oriented Learning of Named Entities in Arabic Wikipedia''' - a scientific work related to [[Wikipedia quality]], published in 2012 and written by [[Behrang Mohit]], [[Nathan Schneider]], [[Rishav Bhowmick]], [[Kemal Oflazer]] and [[Noah A. Smith]].
 
== Overview ==
 
The authors consider the problem of named entity recognition (NER) in [[Arabic Wikipedia]], a semi-supervised domain adaptation setting in which no labeled training data is available for the target domain. To facilitate evaluation, they obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard [[categories]]. Standard supervised learning on newswire text leads to poor target-domain recall. The authors train a sequence model and show that a simple modification to the online learner, a loss function encouraging it to "arrogantly" favor recall over precision, substantially improves recall and F1. They then adapt the model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the self-training stage yields marginal gains.
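
The recall-oriented bias can be pictured as an asymmetric cost on tagging errors. Below is a minimal sketch, not the authors' code: it assumes token-level BIO tags, and the function name and weights (<code>miss_weight</code>, <code>false_alarm_weight</code>) are illustrative choices that make a missed entity token more expensive than a spurious one, which is the trade-off an online learner trained against such a cost is pushed toward.

<syntaxhighlight lang="python">
# A minimal sketch (not the authors' code) of an asymmetric, Hamming-style
# cost over token-level BIO tag sequences: missing a gold entity token
# (a recall error) costs more than hallucinating one (a precision error).
# The weights below are illustrative assumptions, not values from the paper.

def recall_oriented_cost(gold_tags, pred_tags,
                         miss_weight=2.0, false_alarm_weight=0.5):
    """Return the asymmetric cost of pred_tags against gold_tags."""
    cost = 0.0
    for gold, pred in zip(gold_tags, pred_tags):
        if gold == pred:
            continue
        if gold != "O" and pred == "O":
            cost += miss_weight          # missed entity token: hurts recall
        elif gold == "O" and pred != "O":
            cost += false_alarm_weight   # spurious entity token: hurts precision
        else:
            cost += 1.0                  # wrong entity type or boundary
    return cost


if __name__ == "__main__":
    gold = ["B-PER", "I-PER", "O", "B-LOC", "O"]
    pred = ["B-PER", "O",     "O", "O",     "O"]  # two missed entity tokens
    print(recall_oriented_cost(gold, pred))       # -> 4.0
</syntaxhighlight>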
 

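The self-training stage can be sketched in the same spirit. The tagger interface below (<code>predict</code>, <code>train</code>) is hypothetical and stands in for whatever sequence model is used; the paper's data-selection details are not reproduced, and the point is only that the same recall-oriented update is reused when retraining on automatically labeled target-domain text.

<syntaxhighlight lang="python">
# A minimal sketch of self-training under the same recall-oriented bias.
# The tagger object and its predict()/train() methods are hypothetical.

def self_train(tagger, gold_data, unlabeled_sentences, rounds=1):
    """Augment gold training data with automatically labeled target-domain
    sentences, then retrain with the same recall-oriented update."""
    for _ in range(rounds):
        auto_labeled = [(sent, tagger.predict(sent))
                        for sent in unlabeled_sentences]
        tagger.train(gold_data + auto_labeled)
    return tagger
</syntaxhighlight>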