A New Approach to Detecting Content Anomalies in Wikipedia

A New Approach to Detecting Content Anomalies in Wikipedia - scientific work related to Wikipedia quality, published in 2013 and written by Duygu Sinanc and Uraz Yavanoglu.

Overview

The rapid growth of the web has made enormous amounts of data available, but that data is useful only if its content is well organized. Although Wikipedia is the biggest encyclopedia on the web, its quality is open to question because of its Open Editing Schemas (OES). In this study, zoology and botany pages were selected from the English Wikipedia, their HTML contents were converted to plain text, and an Artificial Neural Network (ANN) was used for classification in order to prevent disinformation or misinformation. After the training phase, irrelevant words about politics or terrorism were added to the content in proportion to the size of the text. In the interval between the moment unsuitable content is added to a page and the moderators' intervention, the proposed system recognizes the anomaly through incorrect categorization. The results show that when the added words reach 2% of the content, the anomaly rate begins to cross the 50% threshold.
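
The following is a minimal sketch of this general idea, not the authors' original implementation: a small feed-forward network is trained to assign page text to topical categories, off-topic words are injected at a chosen ratio of the word count, and a drop in classification confidence (or a changed label) is read as an anomaly signal. The training sentences, category labels, and injected word list are hypothetical placeholders, and scikit-learn stands in for whatever ANN toolkit the paper used.

    # Illustrative sketch only: topic classification with a small ANN and
    # confidence shift after injecting off-topic words. All data is hypothetical.
    import random

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier

    # Hypothetical plain-text page contents with topic labels.
    train_texts = [
        "The leopard is a large cat native to Africa and Asia.",
        "Ferns are vascular plants that reproduce via spores.",
        "The axolotl is a neotenic salamander found in Mexico.",
        "Oak trees belong to the genus Quercus in the beech family.",
    ]
    train_labels = ["zoology", "botany", "zoology", "botany"]

    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(train_texts)

    # Small feed-forward network used as the text classifier.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X_train, train_labels)

    def inject_noise(text, noise_words, ratio):
        """Insert off-topic words amounting to `ratio` of the original word count."""
        words = text.split()
        n_noise = max(1, int(len(words) * ratio))
        for _ in range(n_noise):
            words.insert(random.randrange(len(words) + 1), random.choice(noise_words))
        return " ".join(words)

    page = "The leopard is a large cat native to Africa and Asia."
    noise_words = ["election", "parliament", "militant", "insurgency"]  # off-topic terms

    clean_conf = clf.predict_proba(vectorizer.transform([page])).max()
    noisy_page = inject_noise(page, noise_words, ratio=0.02)  # 2% of the word count
    noisy_conf = clf.predict_proba(vectorizer.transform([noisy_page])).max()

    # A lower top-class probability (or a changed predicted label) on the noisy
    # text is treated as a content-anomaly signal.
    print(clean_conf, noisy_conf)

The 0.02 injection ratio mirrors the 2% figure reported in the paper; in practice the threshold at which confidence degrades would depend on the corpus, the feature representation, and the network size.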