Although Wikipedia is often criticized for poor quality, it remains one of the most popular knowledge bases in the world. This online encyclopedia currently ranks fifth among the most visited websites (after Google, YouTube, Facebook and Baidu). Its articles are created and edited in over 300 languages, and Wikipedia currently contains more than 48 million articles on a wide variety of topics.
The number of articles in Wikipedia grows every day. Articles can be created and edited even by anonymous users. Authors do not need to formally demonstrate their skills, education or experience in a given area, and Wikipedia has no central editorial team or group of reviewers who could comprehensively check all new and existing texts. For these and other reasons, the concept of Wikipedia is often criticized, in particular for the poor quality of its information.
Nevertheless, Wikipedia can contain valuable information, depending on the language version and the subject. Almost every language version has an award system for the best articles. However, the number of such articles is relatively small (less than one percent). Some language versions also use other quality grades, yet the overwhelming majority of articles remain unevaluated (in some languages more than 99%).
- 1 Automatic quality assessment of Wikipedia articles
- 2 Data Mining
- 3 Where to get these parameters?
- 4 Are there other ways of assessing the quality of Wikipedia articles besides the binary one?
- 5 Are there ways of assessing the quality of a part of a Wikipedia article?
- 6 Why do we need all this?
- 7 References
Automatic quality assessment of Wikipedia articles
Many Wikipedia articles thus have no quality grade, so readers must analyze their content on their own. Automatic quality assessment of Wikipedia articles is a well-known and broad area in the scientific world: researchers from over 50 countries have published work related to the quality of Wikipedia. Most of these studies concern the most developed language version, English, which already contains more than 5.7 million articles.
Since its foundation, and with Wikipedia's growing popularity, more and more scientific publications on this subject have appeared. One of the first studies showed that measuring the volume of content can help determine the degree of “maturity” of a Wikipedia article. Work in this direction shows that, in general, higher-quality articles are long, use many references, are edited by hundreds of authors and have thousands of revisions.
How do researchers come to such conclusions? Simply put: by comparing good and bad articles.
As mentioned earlier, almost every language version of Wikipedia has a system for assessing the quality of articles. The best articles are awarded in a special way: they receive a special “badge”. In Russian Wikipedia such articles are called “Featured Articles” (FA). There is another “badge” for articles slightly below the best ones: “Good Articles” (GA). Some language versions also grade “weaker” articles. For example, English Wikipedia additionally uses A-class, B-class, C-class, Start and Stub, while in Russian Wikipedia we find the additional grades Solid, Full, Developed, In development and Stub.
Even from the example of the English and Russian versions, we can conclude that grading scales differ between languages. Moreover, not all language versions of Wikipedia have such a developed quality assessment system. For example, German Wikipedia, which contains more than 2 million articles, uses only two grades, equivalents of FA and GA. Therefore, assessments in scientific papers are often grouped into two classes:
- “Complete” – articles with an FA or GA grade,
- “Incomplete” – articles with all other grades.
Let’s call this method “binary” (1 for Complete articles, 0 for Incomplete articles). This separation naturally “blurs” the boundaries between individual classes, but it makes it possible to build and compare quality models for different language versions of Wikipedia.
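The binary labeling described above can be sketched in a few lines. The English grades come from the text; the mapping function itself is just an illustration of the convention.

```python
# Binary labeling of Wikipedia quality grades: FA and GA count as
# "Complete" (1), every other grade as "Incomplete" (0).
COMPLETE_GRADES = {"FA", "GA"}

def binary_label(grade: str) -> int:
    """Return 1 for Complete (FA/GA) articles, 0 for all others."""
    return 1 if grade in COMPLETE_GRADES else 0

for grade in ["FA", "GA", "A-class", "B-class", "Start", "Stub"]:
    print(grade, binary_label(grade))
```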
To build such models, various data mining algorithms can be used. One of the most commonly used is Random Forest; there are even studies comparing it with other algorithms (CART, SMO, Multilayer Perceptron, LMT, C4.5, C5.0 and others). Random Forest can build models even from variables that correlate with each other. Additionally, this algorithm can show which variables are more important for determining the quality of articles. If we need other information about the importance of variables, we can use other algorithms, including logistic regression.
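A minimal sketch of this approach, assuming scikit-learn's Random Forest implementation: the feature values below are synthetic illustrations, not data from the studies, and the three metrics are a small subset of those actually used.

```python
# Toy example: classify articles as Complete (1) vs Incomplete (0)
# from simple metrics, then inspect which metrics the model relies on.
from sklearn.ensemble import RandomForestClassifier

# Each row: [text length, number of references, number of images]
# (synthetic values for illustration only)
X = [
    [52000, 140, 12], [48000, 120, 9], [61000, 200, 15],
    [45000, 90, 8],   [39000, 75, 6],  [55000, 160, 11],   # Complete
    [1200, 2, 0], [800, 0, 0], [3000, 5, 1],
    [500, 1, 0],  [2500, 3, 0], [1500, 2, 1],              # Incomplete
]
y = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Relative importance of each metric in this (toy) model
for name, imp in zip(["length", "references", "images"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")

# Quality as the probability of belonging to the Complete class
print(model.predict_proba([[40000, 80, 7]])[0][1])
```

On real data the importances would differ by language version, which is exactly the kind of cross-language comparison the studies report.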
The results show that article quality models differ between language versions of Wikipedia. While in one language version one of the most important parameters is the number of references (sources), in another the number of images and the length of the text may matter more.
In this case, quality is modeled as the probability of assigning an article to one of two groups, Complete or Incomplete. The conclusion is drawn from the analysis of various parameters (metrics): the length of the text, the number of references, images, sections, links to the article, the number of facts, visits, the number of revisions and many others. There are also a number of linguistic parameters, which depend on the language in question. Measures that count links from external sources, such as Reddit, Facebook, YouTube, Twitter, LinkedIn, VKontakte and other social services, can also be taken into account. Additionally, we can consider the reputation of the users who edit Wikipedia articles. To determine the experience of Wikipedia editors, special online tools such as WikiTop can be useful.
Currently, more than 300 parameters (or measures) in total are used in studies, depending on the language version of Wikipedia and the complexity of the quality model. Some parameters, such as references (sources), can be evaluated further: we can not only count them, but also assess how well-known and reliable the sources used in a Wikipedia article are. Some measures can be obtained from expert opinions, which can be collected from different sources, for example the WikiBest service.
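A few of the simpler metrics mentioned above can be computed directly from an article's wikitext. The regular expressions below are deliberate simplifications; real studies use far more parameters and robust parsers.

```python
# Simplified extraction of basic quality metrics from raw wikitext.
import re

def simple_metrics(wikitext: str) -> dict:
    return {
        "length": len(wikitext),
        # <ref>...</ref> footnotes (references/sources)
        "references": len(re.findall(r"<ref[ >]", wikitext)),
        # [[File:...]] / [[Image:...]] inclusions
        "images": len(re.findall(r"\[\[(?:File|Image):", wikitext)),
        # == Heading == lines (sections)
        "sections": len(re.findall(r"(?m)^==+[^=].*?==+\s*$", wikitext)),
    }

sample = (
    "== History ==\n"
    "Some fact.<ref>Source A</ref>\n"
    "[[File:Example.jpg|thumb]]\n"
    "== See also ==\n"
)
print(simple_metrics(sample))
```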
Where to get these parameters?
To obtain some parameters, you just need to send a request (query) to the appropriate API; for other parameters (especially linguistic ones) you need special libraries and parsers. A considerable part of the time, however, is spent writing your own tools (we will talk about this in separate articles).
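As an example of the first route, the MediaWiki Action API (`action=query`, `prop=info`) returns basic page properties such as the page length. The sketch below only builds the request URL; sending it requires network access.

```python
# Build a MediaWiki API query URL for basic article properties.
from urllib.parse import urlencode

def info_query_url(title: str, lang: str = "en") -> str:
    """URL of an API request returning page info (length, last edit, ...)."""
    params = {
        "action": "query",
        "titles": title,
        "prop": "info",
        "format": "json",
    }
    return f"https://{lang}.wikipedia.org/w/api.php?" + urlencode(params)

print(info_query_url("Poznań", lang="pl"))
```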
Are there other ways of assessing the quality of Wikipedia articles besides the binary one?
Yes. Recent studies propose a method for scoring articles on a continuous scale from 0 to 100, so an article can receive, for example, a score of 54.21. This method has been tested on 55 language versions. The results are available in the WikiRank service, which allows you to evaluate and compare the quality and popularity of Wikipedia articles in different languages. The method is, of course, not ideal, but it works for locally known topics.
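To make the idea of a continuous score concrete, here is a simplified illustration, not WikiRank's actual formula: several metrics are capped at a target value considered "sufficient", and the capped ratios are averaged and scaled to 0–100. The weights and targets are assumptions.

```python
# Illustrative continuous 0-100 quality score from normalized metrics.
def continuous_score(metrics: dict, targets: dict) -> float:
    """Cap each metric at its target, average the ratios, scale to 0-100."""
    ratios = [min(metrics[k] / targets[k], 1.0) for k in targets]
    return round(100 * sum(ratios) / len(ratios), 2)

article = {"length": 30000, "references": 40, "images": 3, "sections": 8}
targets = {"length": 50000, "references": 100, "images": 10, "sections": 10}
print(continuous_score(article, targets))  # 52.5
```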
Are there ways of assessing the quality of a part of a Wikipedia article?
Of course. For example, one of the important elements of an article is the so-called “infobox”. This is a separate frame (table), often located at the top right of the article, that shows the most important facts about the subject, so there is no need to search for this information in the text. Separate studies are devoted to evaluating the quality of these infoboxes, and there are also projects, such as Infoboxes.net, which automatically compare infoboxes across language versions.
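Before an infobox can be evaluated, it has to be extracted from the article. A hedged sketch: matching the braces of the first `{{Infobox ...}}` template in raw wikitext. Dedicated parsers (e.g. mwparserfromhell) handle many more edge cases.

```python
# Extract the raw wikitext of the first {{Infobox ...}} template
# by counting nested {{ }} brace pairs.
def extract_infobox(wikitext):
    start = wikitext.find("{{Infobox")
    if start == -1:
        return None
    depth = 0
    for i in range(start, len(wikitext) - 1):
        if wikitext[i:i + 2] == "{{":
            depth += 1
        elif wikitext[i:i + 2] == "}}":
            depth -= 1
            if depth == 0:
                return wikitext[start:i + 2]
    return None  # unbalanced braces

sample = "{{Infobox person\n| name = Ada Lovelace\n| born = 1815\n}}\nText..."
print(extract_infobox(sample))
```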
Why do we need all this?
Wikipedia is used often, but the quality of its information is not always checked. The proposed methods can simplify this task: if an article is of poor quality, a reader who knows this will be more careful about using its material for decision making. The user can also see in which language a topic of interest is described best. Most importantly, modern techniques make it possible to transfer information between language versions, which means that weaker versions of Wikipedia can be automatically enriched with high-quality data from other language versions. This will also improve the quality of other semantic databases for which Wikipedia is the main source of information, above all DBpedia, Wikidata, YAGO2 and others.
- Lewoniewski, W., Węcel, K., Abramowicz, W. (2016). Quality and Importance of Wikipedia Articles in Different Languages. In International Conference on Information and Software Technologies (pp. 613-624). Springer International Publishing.
- Węcel, K., Lewoniewski, W. (2015). Modelling the Quality of Attributes in Wikipedia Infoboxes. In International Conference on Business Information Systems (pp. 308-320). Springer International Publishing.
- Lewoniewski, W., Węcel, K., Abramowicz, W. (2015). Comparative Analysis of Information Quality Models in the National Versions of Wikipedia. Prace Naukowe/Uniwersytet Ekonomiczny w Katowicach, pp. 133-154.
- Lewoniewski, W., Węcel, K., Abramowicz, W. (2017), Comparative analysis of classification models for quality assessment of Wikipedia articles, Matematyka i informatyka na usługach ekonomii, Wydawnictwo UEP Poznań, ISBN 9788374179386
- Warncke-Wang, Morten, Dan Cosley, and John Riedl. Tell Me More: An Actionable Quality Model for Wikipedia. Proceedings of the 9th International Symposium on Open Collaboration. ACM, 2013.
- Khairova, N., Lewoniewski, W., Węcel, K. (2017). Estimating the Quality of Articles in Russian Wikipedia Using the Logical-Linguistic Model of Fact Extraction. In International Conference on Business Information Systems (pp. 28-40). Springer, Cham.
- Lewoniewski, W., Khairova, N., Węcel, K., Stratiienko, N., Abramowicz, W. (2017). Using Morphological and Semantic Features for the Quality Assessment of Russian Wikipedia. In International Conference on Information and Software Technologies (pp. 550-560). Springer, Cham. DOI: 10.1007/978-3-319-67642-5_46
- Lewoniewski, W., Wecel, K., Abramowicz, W. (2017). Determining Quality of Articles in Polish Wikipedia Based on Linguistic Features.
- Lamek, A., Lewoniewski, W. (2017). Application Logistic Regression in Assessing the Quality of Information – Wikipedia Articles Case. Studia Oeconomica Posnaniensia 12/2017. DOI: 10.18559/SOEP.2017.12.3
- Blumenstock, J. E. (2008). Automatically Assessing the Quality of Wikipedia Articles. Tech. rep.
- Conti, R., Marzini, E., Spognardi, A., Matteucci, I., Mori, P., Petrocchi, M. (2014). Maturity Assessment of Wikipedia Medical Articles. In: Computer-Based Medical Systems (CBMS), 2014 IEEE 27th International Symposium on. pp. 281-286. IEEE
- Yaari, E., Baruchson-Arbib, S., Bar-Ilan, J. (2011). Information Quality Assessment of Community Generated Content: A User Study of Wikipedia. Journal of Information Science 37(5), 487-498
- Dang, Q.V., Ignat, C.L. (2016). Measuring Quality of Collaboratively Edited Documents: The Case of Wikipedia. In: Collaboration and Internet Computing (CIC), 2016 IEEE 2nd International Conference on. pp. 266-275. IEEE
- Shen, A., Qi, J., Baldwin, T. (2017). A Hybrid Model for Quality Assessment of Wikipedia Articles. In: Proceedings of the Australasian Language Technology Association Workshop 2017. pp. 43-52
- Zhang, S., Hu, Z., Zhang, C., Yu, K. (2018). History-Based Article Quality Assessment on Wikipedia. In: Big Data and Smart Computing (BigComp), 2018 IEEE International Conference on. pp. 1-8. IEEE
- Warncke-Wang, M., Ayukaev, V. R., Hecht, B., & Terveen, L. G. (2015). The Success and Failure of Quality Improvement Projects in Peer Production Communities. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 743-756). ACM.
- Soonthornphisaj, N., & Paengporn, P. (2017). Thai Wikipedia Article Quality Filtering Algorithm. In Proceedings of the International MultiConference of Engineers and Computer Scientists (Vol. 1).
- Dalip, D.H., Gonçalves, M.A., Cristo, M., Calado, P. (2009). Automatic Quality Assessment of Content Created Collaboratively by Web Communities: A Case Study of Wikipedia. In: Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries. pp. 295-304
- di Sciascio, C., Strohmaier, D., Errecalde, M., Veas, E. (2017). Wikilyzer: Interactive Information Quality Assessment in Wikipedia. In: Proceedings of the 22nd International Conference on Intelligent User Interfaces. pp. 377-388. ACM
- Wu, K., Zhu, Q., Zhao, Y., Zheng, H. (2010). Mining the Factors Affecting the Quality of Wikipedia Articles. In: Information Science and Management Engineering (ISME), 2010 International Conference of. vol. 1, pp. 343-346. IEEE
- Liu, J., Ram, S. (2018). Using Big Data and Network Analysis to Understand Wikipedia Article Quality. Data & Knowledge Engineering
- Blumenstock, J.E. (2008). Size Matters: Word Count as a Measure of Quality on Wikipedia. In: WWW. pp. 1095-1096
- Lerner, J., Lomi, A. (2018). Knowledge Categorization Affects Popularity and Quality of Wikipedia Articles. PloS one 13(1), e0190674
- Lex, E., Voelske, M., Errecalde, M., Ferretti, E., Cagnina, L., Horn, C., Stein, B., Granitzer, M. (2012) Measuring the Quality of Web Content Using Factual Information. In Proceedings of the 2nd joint WICOW/AIRWeb workshop on web quality, pp. 7-10. ACM
- Lewoniewski, W., Härting, R. C., Wecel, K., Reichstein, C., Abramowicz, W. (2018). Application of SEO Metrics to Determine the Quality of Wikipedia Articles and Their Sources. In International Conference on Information and Software Technologies (pp. 139-152). Springer, Cham
- Moyer, D., Carson, S. L., Dye, T. K., Carson, R. T., Goldbaum, D. (2015). Determining the Influence of Reddit Posts on Wikipedia Pageviews. In Proceedings of the Ninth International AAAI Conference on Web and Social Media.
- Wu, G., Harrigan, M., Cunningham, P. (2011). Characterizing Wikipedia Pages Using Edit Network Motif Profiles. In Proceedings of the 3rd International Workshop on Search and Mining User-generated Contents, Glasgow, UK.
- Suzuki, Y., Nakamura, S. (2016). Assessing the Quality of Wikipedia Editors Through Crowdsourcing. In Proceedings of the 25th International Conference Companion on World Wide Web, Montreal, QC, Canada; International World Wide Web Conferences Steering Committee: Geneva, Switzerland, 2016; pp. 1001–1006.
- Lewoniewski, W., Węcel, K., Abramowicz, W., (2017), Analysis of References Across Wikipedia Languages. Information and Software Technologies. ICIST 2017. DOI: 10.1007/978-3-319-67642-5_47
- Lewoniewski, W., Węcel, K. (2017). Features of Wikipedia Articles and Their Extraction Methods for Automatic Information Quality Assessment. Studia Oeconomica Posnaniensia 12/2017. DOI: 10.18559/SOEP.2017.12.7
- Lewoniewski, W., Węcel, K., Abramowicz, W. (2017). Relative Quality and Popularity Evaluation of Multilingual Wikipedia Articles. In Informatics (Vol. 4, No. 4, p. 43). Multidisciplinary Digital Publishing Institute. DOI: 10.3390/informatics4040043
- Lewoniewski, W., Węcel, K. (2017). Relative Quality Assessment of Wikipedia Articles in Different Languages Using Synthetic Measure. In International Conference on Business Information Systems (pp. 282-292). Springer, Cham. DOI: 10.1007/978-3-319-69023-0_24
- Lewoniewski, W. (2017). Completeness and Reliability of Wikipedia Infoboxes in Various Languages. In International Conference on Business Information Systems (pp. 295-305). Springer, Cham. DOI: 10.1007/978-3-319-69023-0_25
- Lewoniewski, W. (2017). Enrichment of Information in Multilingual Wikipedia Based on Quality Analysis. In International Conference on Business Information Systems (pp. 216-227). Springer, Cham. DOI: 10.1007/978-3-319-69023-0_19