'''Semantic Super Networks: a Case Analysis of Wikipedia Papers''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Evgeny Kostyuchenko]], [[Taisiya Lebedeva]] and [[Alexander Goritov]].
  
 
== Overview ==
The work develops an algorithm for constructing super-large semantic networks. The algorithm was tested on the “Cosmos” category of the Internet encyclopedia “[[Wikipedia]]”. During the implementation, a parser for the syntactic analysis of Wikipedia pages was developed, and a graph was built from the list of articles and [[categories]]. Based on an analysis of this graph, algorithms for finding domains of high connectivity were proposed and tested: one builds a domain from the number of links, the other from the number of articles in the current subject area. The shortcomings of both algorithms are shown and explained, and a combined algorithm that uses them jointly is developed; its applicability for obtaining the final domain is demonstrated. A problem of instability of the resulting domain was discovered when the algorithm is started from two neighbouring vertices belonging to the same domain.
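
The paper does not reproduce its source code; the sketch below is only a minimal illustration of the general idea, assuming a simple greedy strategy. The graph is an adjacency map over article and category titles, and a domain is grown from a seed vertex by repeatedly attaching the neighbour with the most links into the current domain, bounded by a limit on the number of articles. All names and the toy input are hypothetical; the authors' actual parser and combined algorithm may differ.

<syntaxhighlight lang="python">
from collections import defaultdict

def build_graph(edges):
    """Adjacency map built from (page, linked page or category) pairs."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def grow_domain(graph, seed, max_articles=50, min_links_in=1):
    """Greedily grow a high-connectivity domain from a seed vertex.

    The two stopping criteria mirror the two measures mentioned above:
    a cap on the number of articles in the domain, and a minimum number
    of links a candidate must have into the current domain."""
    domain = {seed}
    while len(domain) < max_articles:
        # vertices adjacent to the domain but not yet inside it
        candidates = {v for u in domain for v in graph[u]} - domain
        if not candidates:
            break
        # pick the candidate with the most links into the current domain
        best = max(candidates, key=lambda v: len(graph[v] & domain))
        if len(graph[best] & domain) < min_links_in:
            break
        domain.add(best)
    return domain

# Toy input with hypothetical page titles; in the paper the edges would
# come from parsing pages of the "Cosmos" category.
edges = [
    ("Cosmos", "Galaxy"), ("Cosmos", "Star"), ("Galaxy", "Star"),
    ("Star", "Supernova"), ("Galaxy", "Supernova"), ("Star", "Planet"),
]
print(sorted(grow_domain(build_graph(edges), "Cosmos")))
</syntaxhighlight>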
