Ranking Wikipedia Article's Data Quality by Learning Dimension Distributions

From Wikipedia Quality
Revision as of 22:35, 6 August 2019 by Alice

Ranking Wikipedia Article's Data Quality by Learning Dimension Distributions - a scientific work related to Wikipedia quality, published in 2014 and written by Jingyu Han and Kejia Chen.

Overview

As the largest free user-generated knowledge repository, Wikipedia has attracted great attention to its data quality in recent years, and automatic assessment of a Wikipedia article's quality is a pressing concern. The authors observe that each Wikipedia quality class exhibits specific characteristics along first-class quality dimensions: accuracy, completeness, consistency, and minimality. They propose to extract quality dimension values from an article's content and editing history using a dynamic Bayesian network (DBN) and information extraction techniques. They then model the dimension distribution of each quality class with a multivariate Gaussian distribution, and combine multiple trained classifiers to predict an article's quality class, which distinguishes the different quality classes effectively and robustly. Experiments demonstrate that the approach performs well.
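The core idea of class-conditional modeling can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the four-dimensional feature vectors, the class names, and the fitting code are assumptions, and the maximum-likelihood decision rule here stands in for the paper's combination of multiple trained classifiers.

```python
import numpy as np

# Illustrative sketch (not the authors' code): each article is summarized by a
# hypothetical 4-dimensional vector of quality-dimension scores
# [accuracy, completeness, consistency, minimality]. A multivariate Gaussian is
# fitted per quality class, and a new article is assigned to the class whose
# Gaussian gives it the highest log-likelihood.

def fit_gaussian(samples):
    """Estimate the mean vector and covariance matrix for one quality class."""
    x = np.asarray(samples, dtype=float)
    mean = x.mean(axis=0)
    # Small ridge keeps the covariance invertible with few training samples.
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
    return mean, cov

def log_likelihood(x, mean, cov):
    """Log density of a multivariate normal distribution at point x."""
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.inv(cov) @ diff)

def predict(x, class_models):
    """Pick the quality class whose fitted Gaussian best explains x."""
    return max(class_models, key=lambda c: log_likelihood(x, *class_models[c]))
```

For example, fitting one Gaussian on dimension vectors of featured articles and another on stub articles, then calling `predict` on a new article's vector, returns the class label under which that vector is most probable.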