{{Infobox work
| title = Wikipedia Workload Analysis for Decentralized Hosting
| date = 2009
| authors = [[Guido Urdaneta]]<br />[[Guillaume Pierre]]<br />[[Maarten van Steen]]
| doi = 10.1016/j.comnet.2009.02.019
| link = http://www.sciencedirect.com/science/article/pii/S1389128609000541
}}
'''Wikipedia Workload Analysis for Decentralized Hosting''' is a scientific work related to [[Wikipedia quality]], published in 2009 and written by [[Guido Urdaneta]], [[Guillaume Pierre]] and [[Maarten van Steen]].
== Overview ==
The authors study an access trace containing a sample of [[Wikipedia]]'s traffic over a 107-day period, aiming to identify appropriate replication and distribution strategies in a fully decentralized hosting environment. They perform a global analysis of the whole trace and a detailed analysis of the requests directed to the English edition of Wikipedia. In the study, they classify client requests and examine aspects such as the number of read and save operations, significant load variations, and requests for nonexistent pages. They also review proposed decentralized wiki architectures and discuss how these would handle Wikipedia's workload. The authors conclude that decentralized architectures must focus on techniques for handling read operations efficiently while maintaining consistency and coping with issues typical of decentralized systems, such as churn, unbalanced loads and malicious participating nodes.
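The classification of client requests described above can be illustrated with a minimal sketch. The URL patterns and category names here are illustrative assumptions based on how MediaWiki encodes actions in query strings, not the authors' exact classification rules:

```python
# Sketch: classifying Wikipedia-style access-trace URLs into read, edit
# and save operations, in the spirit of the paper's request analysis.
# The action-based rules below are simplified assumptions for illustration.

from urllib.parse import urlparse, parse_qs

def classify_request(url: str) -> str:
    """Return 'save', 'edit', or 'read' for one trace URL (simplified)."""
    query = parse_qs(urlparse(url).query)
    action = query.get("action", [None])[0]
    if action == "submit":
        return "save"   # form submission that stores a new page revision
    if action == "edit":
        return "edit"   # fetching the edit form (still a read of wikitext)
    return "read"       # plain article view

# Count request types over a tiny example trace.
trace = [
    "http://en.wikipedia.org/wiki/Main_Page",
    "http://en.wikipedia.org/w/index.php?title=Foo&action=edit",
    "http://en.wikipedia.org/w/index.php?title=Foo&action=submit",
]
counts = {}
for url in trace:
    kind = classify_request(url)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)  # {'read': 1, 'edit': 1, 'save': 1}
```

Aggregating such counts over the full trace is what exposes the read-dominated workload the paper reports, which in turn motivates replication strategies optimized for reads.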