Review-Based Ranking of Wikipedia Articles

{{Infobox work
| title = Review-Based Ranking of Wikipedia Articles
| date = 2009
| authors = [[Yasser Ganjisaffar]]<br />[[Sara Javanmardi]]<br />[[Cristina Videira Lopes]]
| doi = 10.1109/CASoN.2009.14
| link = http://www.ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05176107
}}
 
'''Review-Based Ranking of Wikipedia Articles''' - a scientific work related to [[Wikipedia quality]], published in 2009 and written by [[Yasser Ganjisaffar]], [[Sara Javanmardi]] and [[Cristina Videira Lopes]].
 
== Overview ==
 
Wikipedia, the largest encyclopedia on the Web, is often seen as the most successful example of crowdsourcing. The encyclopedic knowledge it has accumulated over the years is so large that users often rely on search engines to find information in it. In contrast to regular Web pages, [[Wikipedia]] is fairly structured, and articles are usually accompanied by history pages, [[categories]] and [[talk pages]]. The meta-data available in these pages can be analyzed to gain a better understanding of the content and quality of the articles. The authors discuss how the rich meta-data available in wiki pages can be used to provide better search results in Wikipedia. Building on studies of the "Wisdom of Crowds" and the effectiveness of knowledge collected by large numbers of people, the authors investigate the effect of incorporating the extent of review of an article on the quality of search-result rankings. The extent of review is measured by the number of distinct editors who have contributed to an article and is extracted by processing Wikipedia's history pages. The authors compare ranking algorithms that explore combinations of text-relevancy, PageRank and extent of review. The results show that the review-based ranking algorithm, which combines the extent of review with text-relevancy, outperforms the rest: it is more accurate and less computationally expensive than PageRank-based rankings.
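
The core idea can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' implementation: it counts distinct editors per article from revision records (as could be extracted from Wikipedia's history pages) and re-ranks text-relevancy results by a combined score. The helper names, the logarithmic scaling of the editor count and the mixing weight are assumptions made for the example.

<syntaxhighlight lang="python">
import math
from collections import defaultdict

def count_distinct_editors(revision_history):
    """Count distinct editors per article from (article_id, editor) records.

    `revision_history` is an iterable of (article_id, editor_name) pairs,
    e.g. extracted from a Wikipedia history dump (illustrative format).
    """
    editors = defaultdict(set)
    for article_id, editor in revision_history:
        editors[article_id].add(editor)
    return {article_id: len(names) for article_id, names in editors.items()}

def review_based_score(text_relevancy, distinct_editors, weight=0.5):
    """Combine text relevancy with the extent of review.

    Log-scaling the editor count and the linear mixing weight are
    illustrative choices, not the paper's exact formula.
    """
    review_signal = math.log1p(distinct_editors)
    return (1 - weight) * text_relevancy + weight * review_signal

def rerank(results, editor_counts, weight=0.5):
    """Re-rank (article_id, text_relevancy) search results by combined score."""
    return sorted(
        results,
        key=lambda r: review_based_score(r[1], editor_counts.get(r[0], 0), weight),
        reverse=True,
    )

# Example: two equally relevant articles; the more widely reviewed one ranks first.
history = [("A", "alice"), ("A", "bob"), ("A", "carol"), ("B", "dave")]
counts = count_distinct_editors(history)
print(rerank([("A", 0.8), ("B", 0.8)], counts))  # "A" (3 editors) before "B" (1)
</syntaxhighlight>

Note that, unlike PageRank, the review signal needs no link-graph computation; it is a per-article count obtained in a single pass over the revision history, which is consistent with the paper's observation that the review-based ranking is less computationally expensive.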
 