Short Text Classification based on Wikipedia and Word2Vec

From Wikipedia Quality

Short Text Classification based on Wikipedia and Word2Vec - a scientific work related to Wikipedia quality, published in 2016 and written by Liu Wensen, Cao Zewen, Wang Jun and Wang Xiao-yi.

Overview

Unlike long texts, Chinese short texts have very sparse features, which is the primary cause of the low accuracy of traditional classification methods on short texts. In this paper, a novel method was proposed to tackle the problem by expanding the features of short texts based on Wikipedia and Word2vec. First, the authors build semantically relevant concept sets from Wikipedia: they retrieve the articles that are highly relevant to Wikipedia concepts and use the Word2vec tool to measure the semantic relatedness between target concepts and related concepts. The relevant concept sets are then used to extend the short texts. Compared to traditional statistical measures of similarity between concepts, this method yields more accurate semantic relatedness. The experimental results show that expanding the features of short texts improves classification accuracy; specifically, the proposed method appeared to be more effective than the traditional approaches.
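The expansion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding vectors are toy values standing in for Word2vec vectors trained on Wikipedia text, and the similarity threshold and function names are assumptions made for the example.

```python
import math

# Toy embeddings standing in for Word2vec vectors trained on Wikipedia
# articles (illustrative values only; a real pipeline would load a
# trained model instead of this hand-written dictionary).
embeddings = {
    "basketball": [0.9, 0.1, 0.2],
    "nba":        [0.85, 0.15, 0.25],
    "sport":      [0.7, 0.3, 0.1],
    "banking":    [0.1, 0.9, 0.4],
}

def cosine(u, v):
    """Cosine similarity, the usual relatedness measure for Word2vec."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relevant_concepts(target, candidates, threshold=0.9):
    """Build the relevant concept set for one target concept: keep the
    candidates whose similarity to the target exceeds the threshold."""
    tv = embeddings[target]
    return [c for c in candidates if cosine(tv, embeddings[c]) >= threshold]

def expand(short_text_tokens, threshold=0.9):
    """Extend a sparse short text by appending, for each known token,
    the concepts from its relevant concept set."""
    expanded = list(short_text_tokens)
    for tok in short_text_tokens:
        if tok in embeddings:
            others = [c for c in embeddings if c != tok]
            expanded += relevant_concepts(tok, others, threshold)
    return expanded

# A one-token short text gains its semantically close concepts,
# while an unrelated concept ("banking") is filtered out.
print(expand(["basketball"]))
```

The expanded token list then feeds into an ordinary classifier, so the sparse short text competes on richer features; the threshold controls how aggressively concepts are added.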