Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language



Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language
Authors: Arjun Das, Debasis Ganguly, Utpal Garain
Publication date: 2017
DOI: 10.1145/3015467

Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language - a scientific work related to Wikipedia quality, published in 2017 and written by Arjun Das, Debasis Ganguly and Utpal Garain.

Overview

In this article, the authors propose a word embedding-based named entity recognition (NER) approach. NER is commonly framed as a sequence labeling task and addressed with methods such as conditional random fields (CRFs). However, for low-resource languages that lack sufficiently large training data, such methods do not perform well. In this work, the authors instead exploit the proximity of word vector embeddings to tackle the NER problem. The hypothesis is that vectors of words belonging to the same name category, such as a person's name, lie close together in the abstract vector space of the embedded words. Assuming this clustering hypothesis holds, the authors apply a standard classification approach to the word vectors to learn a decision boundary between the NER classes.

The NER experiments are conducted on a morphologically rich, low-resource language, namely Bengali. The approach significantly outperforms standard baseline CRF approaches that use cluster labels of word embeddings and gazetteers constructed from Wikipedia. The authors further propose an unsupervised approach that uses an automatically created named entity (NE) gazetteer from Wikipedia in the absence of training data. For a low-resource language, the word vectors obtained from Wikipedia alone are not sufficient to train a classifier, so the authors use the distance between word vector embeddings to expand the set of Wikipedia training examples with additional NEs extracted from a monolingual corpus, which yields a significant improvement in unsupervised NER performance. In fact, the expansion method performs better than the traditional CRF-based (supervised) approach (F-score of 65.4% vs. 64.2%). Finally, the authors compare the proposed approach against the official submissions to the IJCNLP-2008 Bengali NER shared task and achieve an overall F-score improvement of 11.26% over the best official system.
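The core idea can be illustrated with a short sketch. The following Python code is a minimal illustration, not the authors' implementation: it trains word vectors on a toy corpus (standing in for Bengali Wikipedia text), expands a small Wikipedia-style seed gazetteer with nearest neighbours in the embedding space, and then fits a classifier on the word vectors to learn a decision boundary between NE classes. The corpus, seed entries, similarity threshold, and library choices (gensim >= 4 and scikit-learn) are all illustrative assumptions.

<pre>
# Minimal sketch, NOT the authors' code: NE classes learned from word-vector
# proximity, with a Wikipedia-style seed gazetteer expanded via embedding distance.
import numpy as np
from gensim.models import Word2Vec   # gensim >= 4 (vector_size argument)
from sklearn.svm import SVC

# Toy monolingual corpus; the paper trains embeddings on Bengali text instead.
corpus = [
    ["rabindranath", "wrote", "poems", "in", "kolkata"],
    ["satyajit", "made", "films", "in", "kolkata"],
    ["dhaka", "and", "kolkata", "are", "large", "cities"],
    ["rabindranath", "and", "satyajit", "lived", "in", "kolkata"],
]
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=1)

# Small seed gazetteer, e.g. harvested from Wikipedia categories (illustrative entries).
gazetteer = {"rabindranath": "PER", "kolkata": "LOC", "poems": "O", "cities": "O"}

# Unsupervised expansion: add words whose vectors lie close to a known NE.
for word, label in list(gazetteer.items()):
    if label == "O":
        continue
    for neighbour, sim in model.wv.most_similar(word, topn=3):
        if sim > 0.5 and neighbour not in gazetteer:   # illustrative threshold
            gazetteer[neighbour] = label

# Fit a classifier on the (expanded) gazetteer to learn a decision boundary
# between NE classes in the embedding space.
X = np.array([model.wv[w] for w in gazetteer])
y = list(gazetteer.values())
clf = SVC(kernel="rbf").fit(X, y)

def tag(word):
    """Label a word by where its vector falls relative to the learned boundary."""
    if word not in model.wv:
        return "O"                      # out-of-vocabulary: leave untagged
    return clf.predict(model.wv[word].reshape(1, -1))[0]

print(tag("dhaka"))
</pre>

On real data, the paper reports that this kind of gazetteer expansion lifts the unsupervised system above the supervised CRF baseline (65.4% vs. 64.2% F-score).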

Embed

Wikipedia Quality

Das, Arjun; Ganguly, Debasis; Garain, Utpal. (2017). "[[Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language]]". DOI: 10.1145/3015467.

English Wikipedia

{{cite journal |last1=Das |first1=Arjun |last2=Ganguly |first2=Debasis |last3=Garain |first3=Utpal |title=Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language |date=2017 |doi=10.1145/3015467 |url=https://wikipediaquality.com/wiki/Named_Entity_Recognition_with_Word_Embeddings_and_Wikipedia_Categories_for_a_Low-Resource_Language}}

HTML

Das, Arjun; Ganguly, Debasis; Garain, Utpal. (2017). &quot;<a href="https://wikipediaquality.com/wiki/Named_Entity_Recognition_with_Word_Embeddings_and_Wikipedia_Categories_for_a_Low-Resource_Language">Named Entity Recognition with Word Embeddings and Wikipedia Categories for a Low-Resource Language</a>&quot;. DOI: 10.1145/3015467.