'''Recall-Oriented Learning of Named Entities in Arabic Wikipedia''' - a scientific work related to [[Wikipedia quality]], published in 2012 and written by [[Behrang Mohit]], [[Nathan Schneider]], [[Rishav Bhowmick]], [[Kemal Oflazer]] and [[Noah A. Smith]].
  
 
== Overview ==
The authors consider the problem of named entity recognition (NER) in [[Arabic Wikipedia]], a semi-supervised domain adaptation setting for which they have no labeled training data in the target domain. To facilitate evaluation, they obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard [[categories]]. Standard supervised learning on newswire text leads to poor target-domain recall. The authors train a sequence model and show that a simple modification to the online learner, a loss function encouraging it to "arrogantly" favor recall over precision, substantially improves recall and F1. They then adapt the model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the self-training stage yields marginal gains.
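The recall-oriented idea can be illustrated in spirit with an asymmetric token-level cost for an IOB-tagged sequence: a missed entity token (gold entity tagged as "O") is penalized more heavily than a spurious or mistyped one, so a learner minimizing this cost is pushed toward higher recall. The function, weights, and tag names below are illustrative assumptions, not the authors' exact formulation.

```python
def recall_oriented_cost(gold_tags, pred_tags,
                         false_neg_cost=2.0, false_pos_cost=1.0):
    """Asymmetric Hamming-style cost over IOB tag sequences.

    Missed entity tokens (gold is an entity tag, prediction is "O")
    incur false_neg_cost; any other mismatch (spurious entity or
    wrong entity type) incurs false_pos_cost. With
    false_neg_cost > false_pos_cost, the cost favors recall.
    Illustrative sketch only; weights here are hypothetical.
    """
    cost = 0.0
    for gold, pred in zip(gold_tags, pred_tags):
        if gold == pred:
            continue
        if gold != "O" and pred == "O":
            cost += false_neg_cost   # false negative: missed entity token
        else:
            cost += false_pos_cost   # false positive or wrong entity type
    return cost

# Missing both tokens of a person name costs more (2 * 2.0)
# than hallucinating one location token (1.0):
cost = recall_oriented_cost(["B-PER", "I-PER", "O", "O"],
                            ["O", "O", "O", "B-LOC"])
# -> 5.0
```

In a cost-augmented online learner, such a cost would replace the symmetric Hamming loss during training updates, which is the kind of simple modification the overview describes.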

Latest revision as of 08:46, 11 February 2020
