{{Infobox work
| title = Using Wikipedia for Cross-Language Named Entity Recognition
| date = 2015
| authors = [[Eraldo R. Fernandes]]<br />[[Ulf Brefeld]]<br />[[Roi Blanco]]<br />[[Jordi Atserias]]
| doi = 10.1007/978-3-319-29009-6_1
| link = https://link.springer.com/chapter/10.1007/978-3-319-29009-6_1/fulltext.html
}}
 
'''Using Wikipedia for Cross-Language Named Entity Recognition''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Eraldo R. Fernandes]], [[Ulf Brefeld]], [[Roi Blanco]] and [[Jordi Atserias]].
 
== Overview ==
 
Named [[entity recognition]] and classification (NERC) is fundamental for [[natural language processing]] tasks such as [[information extraction]], [[question answering]], and topic detection. State-of-the-art NERC systems are based on supervised machine learning and hence need to be trained on manually annotated corpora. However, annotated corpora hardly exist for non-standard languages, and labeling additional data manually is tedious and costly. In this article, the authors present a novel method to automatically generate partially annotated corpora for NERC by exploiting the link structure of [[Wikipedia]]. First, Wikipedia entries in the source language are labeled with the NERC tag set. Second, Wikipedia language links are exploited to propagate the annotations to the target language. Finally, mentions of the labeled entities in the target language are annotated with the respective tags. The procedure results in a partially annotated corpus that is likely to contain unannotated entities. To learn from such partially annotated data, the authors devise two simple extensions of hidden Markov models and structural perceptrons. Empirically, the authors observe that using the automatically generated data leads to more accurate prediction models than off-the-shelf NERC methods. The authors demonstrate that the novel extensions of HMMs and perceptrons effectively exploit the partially annotated data and outperform their baseline counterparts in all settings.
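The three-step corpus-generation procedure can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the entity labels, inter-language link table, and example sentence are all hypothetical stand-ins for data that would be extracted from Wikipedia dumps.

```python
# Step 1 (assumed input): source-language Wikipedia entries labeled with NERC tags.
source_tags = {
    "Barcelona": "LOC",
    "Albert Einstein": "PER",
    "United Nations": "ORG",
}

# Step 2: inter-language links map source-language titles to target-language
# titles; labels are propagated to the target language through these links.
language_links = {
    "Barcelona": "Barcelona",
    "Albert Einstein": "Albert Einstein",
    "United Nations": "Naciones Unidas",
}
target_tags = {
    language_links[title]: tag
    for title, tag in source_tags.items()
    if title in language_links
}

# Step 3: annotate mentions of the propagated entities in target-language text.
def annotate(tokens, entity_tags):
    """Tag token spans matching a known entity (BIO scheme); all other tokens
    stay 'O'. Entities absent from the link table remain unlabeled, which is
    why the resulting corpus is only *partially* annotated."""
    tags = ["O"] * len(tokens)
    for entity, tag in entity_tags.items():
        parts = entity.split()
        for i in range(len(tokens) - len(parts) + 1):
            if tokens[i:i + len(parts)] == parts:
                tags[i] = "B-" + tag
                for j in range(i + 1, i + len(parts)):
                    tags[j] = "I-" + tag
    return tags

sentence = "La sede de las Naciones Unidas está en Nueva York".split()
print(annotate(sentence, target_tags))
# → ['O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'O', 'O', 'O', 'O']
```

Note that "Nueva York" is left as 'O' even though it is a location: an unlinked entity simply goes unannotated, producing exactly the kind of partial supervision the extended HMMs and structural perceptrons are designed to learn from.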
 
