'''Wikipedia-Based Kernels for Text Categorization''' - a scientific work related to [[Wikipedia quality]], published in 2007 and written by [[Zsolt Minier]], [[Zalán Bodó]] and [[Lehel Csató]].
  
 
== Overview ==
 
In recent years several models have been proposed for text categorization. Among these, one of the widely applied models is the vector space model (VSM), in which independence between indexing terms, usually words, is assumed. Since training corpora are relatively small compared to what would be required for a realistic number of words, the generalization power of the learning algorithms is low. It is assumed that a bigger text corpus can boost the representation and hence the learning process. Building on the work of Gabrilovich and Markovitch [6], the authors incorporate [[Wikipedia]] articles into the system to give a word distributional representation for documents. The extension with this new corpus increases dimensionality, so clustering of [[features]] is needed. The authors use latent semantic analysis (LSA), kernel principal component analysis (KPCA) and kernel canonical correlation analysis (KCCA), and present results for these experiments on the Reuters corpus.
