Lexical Comparison Between Wikipedia and Twitter Corpora by Using Word Embeddings

From Wikipedia Quality
'''Lexical Comparison Between Wikipedia and Twitter Corpora by Using Word Embeddings''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Luchen Tan]], [[Haotian Zhang]], [[Charles L. A. Clarke]] and [[Mark D. Smucker]].
== Overview ==
Compared with carefully edited prose, the language of social media is informal in the extreme. The application of NLP techniques in this context may require a better understanding of word usage within social media. In this paper, the authors compute a word embedding for a corpus of tweets and compare it to a word embedding for [[Wikipedia]]. After learning a transformation from one vector space to the other, and adjusting similarity values according to term frequency, the authors identify words whose usage differs greatly between the two corpora. For any given word, the set of words closest to it in a particular embedding characterizes that word’s usage within the corresponding corpus.
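The pipeline the overview describes — embed each corpus, learn a transformation from one vector space to the other, then flag words whose mapped vector sits far from its counterpart — can be sketched with a toy linear alignment. This is a minimal illustration on invented synthetic vectors, not the paper's actual embeddings, transformation, or term-frequency adjustment; the vocabulary, the least-squares map, and the "anchor word" choice are all assumptions made for the sketch.

```python
import numpy as np

def learn_transform(src, tgt):
    """Least-squares linear map W with src @ W ~= tgt, a simple
    stand-in for a learned transformation between embedding spaces."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "lol", "house", "tree"]    # hypothetical shared vocabulary
emb_a = rng.normal(size=(5, 4))                   # toy "tweet" embedding space
R = np.linalg.qr(rng.normal(size=(4, 4)))[0]      # random orthogonal map
emb_b = emb_a @ R                                 # toy "Wikipedia" space
emb_b[2] = rng.normal(size=4)                     # "lol": usage drifts between corpora

# Learn the map on anchor words assumed stable across corpora, then
# score every word by how well its mapped vector matches the other space.
anchors = [0, 1, 3, 4]
W = learn_transform(emb_a[anchors], emb_b[anchors])
shift = {w: cosine(emb_a[i] @ W, emb_b[i]) for i, w in enumerate(vocab)}
# Low similarity flags words whose usage differs between the two corpora.
```

Here the anchor words align almost perfectly after the learned map, while the deliberately perturbed word scores lower — the same intuition the paper applies, at scale, to real tweet and Wikipedia embeddings.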

Revision as of 21:58, 15 June 2019
