The Illiterate Editor: Metadata-Driven Revert Detection in Wikipedia

'''The Illiterate Editor: Metadata-Driven Revert Detection in Wikipedia''' - a scientific work related to [[Wikipedia quality]], published in 2013 and written by [[Jeffrey Segall]] and [[Rachel Greenstadt]].
  
 
== Overview ==
As the community depends more heavily on [[Wikipedia]] as a source of reliable information, the ability to quickly detect and remove detrimental information becomes increasingly important. The longer incorrect or malicious information lingers in a source perceived as reputable, the more likely that information will be accepted as correct and the greater the loss to the source's [[reputation]]. The authors present The Illiterate Editor (IllEdit), a content-agnostic, metadata-driven classification approach to Wikipedia revert detection. Their primary contribution is a metadata-based feature set for assessing edit quality, which is fed into a Support Vector Machine for edit classification. By analyzing edit histories, the IllEdit system builds a profile of user behavior, estimates expertise and spheres of knowledge, and determines whether or not a given edit is likely to be eventually reverted. The system's success in revert detection (0.844 F-measure), together with a feature set disjoint from those of existing, content-analyzing vandalism detection systems, shows promise for using IllEdit alongside such systems to increase the [[reliability]] of community information.
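The pipeline described above can be illustrated with a minimal sketch: per-edit metadata features derived from an editor's history are standardized and fed into a Support Vector Machine that predicts whether the edit will eventually be reverted. The feature names and values below are purely illustrative assumptions, not the feature set used in the paper.

<syntaxhighlight lang="python">
# Hypothetical sketch of metadata-driven revert classification with an SVM.
# Feature columns (illustrative only, not the paper's actual features):
# [editor_edit_count, editor_account_age_days, prior_reverts_on_article,
#  edit_size_delta, editor_expertise_score]
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X_train = [
    [1500, 900, 0,  120, 0.8],
    [3,    2,   1, -450, 0.1],
    [40,   60,  0,   35, 0.5],
    [1,    1,   2,  900, 0.0],
]
y_train = [0, 1, 0, 1]  # 1 = edit was eventually reverted

# Standardize the metadata features, then train an RBF-kernel SVM classifier
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Predict whether a new edit is likely to be reverted
new_edit = [[12, 30, 0, 60, 0.4]]
print(clf.predict(new_edit))  # e.g. [0] -> unlikely to be reverted
</syntaxhighlight>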
