Short Text Classification based on Wikipedia and Word2Vec

== Overview ==
 
Unlike long texts, Chinese short texts have very sparse [[features]], which is the primary cause of the low accuracy of traditional classification methods on short texts. This paper proposes a novel method that tackles the problem by expanding the features of short texts using [[Wikipedia]] and Word2vec. First, the authors build semantically relevant concept sets from Wikipedia: they collect the articles that are highly relevant to Wikipedia concepts and use the word2vec tool to measure the semantic [[relatedness]] between target concepts and related concepts. They then use these relevant concept sets to extend the short texts. Compared with traditional statistical measures of similarity between concepts, this method yields more accurate semantic relatedness. The experimental results show that expanding the features of short texts improves classification accuracy, and the proposed method appeared to be particularly effective.
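The feature-expansion idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a toy embedding table stands in for a word2vec model trained on Wikipedia text, and the concept names, function names, and similarity threshold are all assumptions chosen for the example.

```python
import math

# Toy embedding table standing in for a trained word2vec model
# (in the paper, vectors would come from word2vec trained on Wikipedia).
EMBEDDINGS = {
    "basketball": [0.9, 0.1, 0.0],
    "sport":      [0.8, 0.2, 0.1],
    "nba":        [0.85, 0.05, 0.1],
    "banana":     [0.0, 0.9, 0.4],
    "fruit":      [0.1, 0.8, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def relevant_concepts(target, candidates, threshold=0.9):
    """Candidate concepts whose similarity to `target` exceeds the threshold."""
    tv = EMBEDDINGS[target]
    return [c for c in candidates
            if c != target and cosine(EMBEDDINGS[c], tv) >= threshold]

def expand_short_text(tokens):
    """Append semantically related concepts to a short text's token list,
    so a sparse short text gains extra features before classification."""
    vocab = list(EMBEDDINGS)
    expanded = list(tokens)
    for t in tokens:
        if t in EMBEDDINGS:
            expanded.extend(relevant_concepts(t, vocab))
    return expanded
```

With the toy vectors above, a one-word text such as `["basketball"]` gets extended with nearby concepts like "sport" and "nba", while unrelated concepts ("banana", "fruit") fall below the threshold and are not added.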

== Embed ==

=== Wikipedia Quality ===

<code>
<nowiki>
Wensen, Liu; Zewen, Cao; Jun, Wang; Xiao-yi, Wang. (2016). "[[Short Text Classification based on Wikipedia and Word2Vec]]". DOI: 10.1109/CompComm.2016.7924894.
</nowiki>
</code>

=== English Wikipedia ===

<code>
<nowiki>
{{cite journal |last1=Wensen |first1=Liu |last2=Zewen |first2=Cao |last3=Jun |first3=Wang |last4=Xiao-yi |first4=Wang |title=Short Text Classification based on Wikipedia and Word2Vec |date=2016 |doi=10.1109/CompComm.2016.7924894 |url=https://wikipediaquality.com/wiki/Short_Text_Classification_based_on_Wikipedia_and_Word2Vec}}
</nowiki>
</code>

=== HTML ===

<code>
<nowiki>
Wensen, Liu; Zewen, Cao; Jun, Wang; Xiao-yi, Wang. (2016). &quot;<a href="https://wikipediaquality.com/wiki/Short_Text_Classification_based_on_Wikipedia_and_Word2Vec">Short Text Classification based on Wikipedia and Word2Vec</a>&quot;. DOI: 10.1109/CompComm.2016.7924894.
</nowiki>
</code>
[[Category:Scientific works]]
[[Category:Chinese Wikipedia]]

Short Text Classification based on Wikipedia and Word2Vec is a scientific work related to Wikipedia quality, published in 2016 and written by Liu Wensen, Cao Zewen, Wang Jun and Wang Xiao-yi.