{{Infobox work
| title = Consensus-Based Ranking of Wikipedia Topics
| date = 2017
| authors = [[Waleed Nema]]<br />[[Yinshan Tang]]
| doi = 10.1145/3106426.3106529
| link = https://dl.acm.org/citation.cfm?id=3106529
}}
 
'''Consensus-Based Ranking of Wikipedia Topics''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Waleed Nema]] and [[Yinshan Tang]].
 
== Overview ==
 
To improve the effectiveness of users' information-seeking experience in interactive web search, the authors hypothesize how people might be influenced when making relevance judgment decisions by introducing the Consensus Theory & Relevance Judgment Model (CT&M). This is combined with a practical path to assess the extent of the difference between the suggestions of current search engines and user expectations. A user-centered, evidence-based, phenomenological approach is used to improve on [[Google]] PageRank (GPR) in two ways. First, GPR's equal navigation probability assumption is biased using (f)actual usage statistics as implicit user consensus, which leads to the StatsRank (SR) algorithm. Second, the authors aggregate users' explicit rankings to derive Consensus Rank (CR), which is shown to predict individual user rankings significantly better than GPR and real-time meta-search of the modern search engines Google and [[Yahoo]]/Bing. CT&M contextualizes CR, SR, and a live open online web experiment, called The Ranking Game, which is based on the August 2016 [[English Wikipedia]] corpus (12.7 million pages) and Page View Statistics for May to July 2016. Limiting this work to [[Wikipedia]] makes GPR topic-based, since any Wikipedia page is focused on one topic. TREC's pooling is used to merge the top 20 results from major search engines and present an alphabetized list for users' explicit ranking via drag and drop. The same platform captures implicit data for future research and can be used for controlled experiments. The authors' contributions are: CT&M, SR, CR, and the open online user feedback web experiment research platform.
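
The summary above does not spell out how StatsRank biases GPR's equal navigation probability, so the following Python sketch is only an assumed illustration: a PageRank power iteration whose random-jump (teleport) distribution is proportional to observed page views rather than uniform, treating view counts as implicit user consensus. Function and variable names are illustrative, not taken from the paper.

<syntaxhighlight lang="python">
# Illustrative sketch only: PageRank with a teleport distribution biased by
# page view statistics (one possible reading of the StatsRank idea).

def stats_rank(links, page_views, damping=0.85, iterations=50):
    """links: page -> list of linked pages; page_views: page -> view count."""
    pages = set(links) | {t for ts in links.values() for t in ts} | set(page_views)
    total_views = sum(page_views.get(p, 0) for p in pages) or 1.0
    # Implicit user consensus: jump to a page in proportion to its views.
    teleport = {p: page_views.get(p, 0) / total_views for p in pages}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) * teleport[p] for p in pages}
        for p in pages:
            targets = links.get(p, [])
            if targets:
                share = damping * rank[p] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling page: redistribute its mass via the teleport vector.
                for t in pages:
                    new_rank[t] += damping * rank[p] * teleport[t]
        rank = new_rank
    return rank

# Toy example: "A" is viewed far more often, so the view-biased teleport
# favours it relative to a uniform-jump PageRank.
print(stats_rank({"A": ["B"], "B": ["A", "C"], "C": ["A"]},
                 {"A": 900, "B": 50, "C": 50}))
</syntaxhighlight>

Likewise, the exact aggregation behind Consensus Rank is not given above; the sketch below simply orders pages by their mean position across users' explicit rankings (a Borda-style aggregate), again purely as an assumption for illustration.

<syntaxhighlight lang="python">
# Illustrative sketch only: combine explicit user rankings into one consensus
# order by mean rank position (a lower mean position ranks higher).
from collections import defaultdict

def consensus_rank(user_rankings):
    """user_rankings: list of per-user ordered lists of page titles."""
    positions = defaultdict(list)
    for ranking in user_rankings:
        for pos, page in enumerate(ranking, start=1):
            positions[page].append(pos)
    mean_pos = {page: sum(ps) / len(ps) for page, ps in positions.items()}
    return sorted(mean_pos, key=mean_pos.get)

# Three users rank three pages; "A" tends to come first, so it leads the consensus.
print(consensus_rank([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]))
</syntaxhighlight>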
 
== Embed ==

=== Wikipedia Quality ===
<code>
<nowiki>
Nema, Waleed; Tang, Yinshan. (2017). "[[Consensus-Based Ranking of Wikipedia Topics]]". DOI: 10.1145/3106426.3106529.
</nowiki>
</code>

=== English Wikipedia ===
<code>
<nowiki>
{{cite journal |last1=Nema |first1=Waleed |last2=Tang |first2=Yinshan |title=Consensus-Based Ranking of Wikipedia Topics |date=2017 |doi=10.1145/3106426.3106529 |url=https://wikipediaquality.com/wiki/Consensus-Based_Ranking_of_Wikipedia_Topics}}
</nowiki>
</code>

=== HTML ===
<code>
<nowiki>
Nema, Waleed; Tang, Yinshan. (2017). &quot;<a href="https://wikipediaquality.com/wiki/Consensus-Based_Ranking_of_Wikipedia_Topics">Consensus-Based Ranking of Wikipedia Topics</a>&quot;. DOI: 10.1145/3106426.3106529.
</nowiki>
</code>

[[Category:Scientific works]]
[[Category:English Wikipedia]]
