Wikipedia Quality

Welcome to Wikipedia Quality,
a portal about concepts, research and services related to quality assessment of the multilingual Wikipedia.
[Map: number of scientists in each country who conduct research on Wikipedia quality]


Wikipedia can contain valuable information, although this depends on the language version and the subject. Moreover, Wikipedia does not require information to be “true” so much as verifiable: it must be confirmed by reliable sources. Practically every language version has a system of awards for the best articles. However, the number of these awarded articles is relatively small (less than one percent). In some language versions there are also other quality grades. Nevertheless, the overwhelming majority of articles are unevaluated (in some languages more than 99%).

Why is quality important?

Quality assessment of information on Wikipedia is critical for many reasons, such as:

  • Reliability and Trust: Wikipedia is widely used as a source of information for people worldwide. The information needs to be reliable and trustworthy to maintain Wikipedia's reputation as a valuable resource. Misinformation can lead to confusion, misunderstandings, and potentially significant consequences, particularly in topics related to health, law, or science.
  • Educational Value: Many students and researchers use Wikipedia as a starting point for their studies. High-quality, accurate information helps them build a sound knowledge base and can guide them to more specialized resources.
  • Comprehensiveness: Wikipedia aims to provide a comprehensive overview of a wide range of topics. Quality assessment ensures that all aspects of a subject are covered adequately and objectively, providing a balanced and thorough representation.
  • Objectivity and Neutrality: Wikipedia's credibility relies on its commitment to neutrality. Quality assessment checks that information presented does not reflect bias or subjective viewpoints, but instead presents a fair and balanced view of all perspectives.
  • Verifiability: Information on Wikipedia should be verifiable with reliable sources. Quality assessments help ensure that all facts are backed up by reputable references, providing accountability and allowing users to trace the source of the information.
  • Maintaining Wikipedia's Standards: Wikipedia has specific content policies and guidelines. Quality assessment is the mechanism by which the community ensures these standards are maintained, fostering consistency and credibility across the site.

The quality of information on Wikipedia directly impacts its usefulness and the trust users place in it. By regularly assessing and improving the quality of articles, the Wikipedia community continues to ensure that it remains a reliable, comprehensive, and valuable resource for all.

Automatic quality assessment of Wikipedia articles

As noted above, many Wikipedia articles do not have quality grades, so readers must analyze their content manually. Automatic quality assessment of Wikipedia articles is a well-established and broad area of research: scientists from over 50 countries have published works related to the quality of Wikipedia. Most of these works concern the most developed language version of Wikipedia, English, which already contains more than 6 million articles.

Since its foundation and with the growing popularity of Wikipedia, more and more scientific publications on this subject have appeared. One of the first studies showed that measuring the volume of content can help determine the degree of “maturity” of a Wikipedia article. Work in this direction shows that, in general, higher-quality articles are long, use many references, are edited by hundreds of authors and have thousands of revisions.

There are different measures related to such quality dimensions as credibility, completeness, objectivity, readability, relevance, style and timeliness.[1]

How do researchers come to such conclusions? Simply put: by comparing good and bad articles.

As already mentioned, almost every language version of Wikipedia has a system for assessing the quality of articles. The best articles are recognized in a special way: they receive a special “badge”. In the Russian Wikipedia such articles are called “Featured Articles” (FA). There is another “badge” for articles that are slightly below the best ones: “Good Articles” (GA). In some language versions there are additional grades for “weaker” articles. For example, the English Wikipedia also uses A-class, B-class, C-class, Start and Stub. The Russian Wikipedia, in turn, uses the following additional grades: Solid, Full, Developed, In development and Stub.

Even from the example of the English and Russian versions, we can conclude that grading scales differ and depend on the language. Moreover, not all language versions of Wikipedia have such a developed system for assessing the quality of articles. For example, the German Wikipedia, which contains more than 2 million articles, uses only two grades, equivalents of FA and GA. Therefore, in scientific papers the grades are often grouped into two classes:[2][3][4][5][6][7][8]

  • “Complete” – FA and GA grades,
  • “Incomplete” – all other grades.

Let us call this method “binary” (1 for Complete articles, 0 for Incomplete articles). This separation naturally “blurs” the boundaries between individual classes, but it makes it possible to build and compare quality models for different language versions of Wikipedia.
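For illustration, a minimal sketch of this labeling step is shown below (in Python). The grade names follow the English Wikipedia scale mentioned above; the function itself is only an illustrative assumption, not a tool used in the cited studies.

```python
# A minimal sketch of the "binary" labeling described above:
# 1 = Complete (FA or GA), 0 = Incomplete (any other grade).
COMPLETE_GRADES = {"FA", "GA"}

def binary_label(grade: str) -> int:
    """Map an article grade to the binary quality scheme."""
    return 1 if grade.upper() in COMPLETE_GRADES else 0

# Example with English Wikipedia grades:
for grade in ["FA", "GA", "B", "C", "Start", "Stub"]:
    print(grade, "->", binary_label(grade))
```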

Methods

The automatic quality assessment of Wikipedia articles encompasses a variety of approaches, primarily leveraging machine learning algorithms, textual and structural features, quality metrics, and datasets. These methods can be broadly categorized into classical learning models with features, deep learning models, and metric-based approaches.[9]

To build such models, various data mining and machine learning algorithms can be used. One of the most commonly used algorithms is Random Forest.[2][3][4][6][7][8][5] There are even studies[4] that compare it with other algorithms (CART, SMO, Multilayer Perceptron, LMT, C4.5, C5.0 and others). Random Forest can build models even from variables that correlate with each other. Additionally, this algorithm can show which variables are more important for determining the quality of articles. If we need other information about the importance of variables, we can use other algorithms.
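To make this concrete, below is a hedged sketch of such a binary quality model using the Random Forest implementation from scikit-learn. The feature names, the file articles_features.csv and the is_complete label column are assumptions made for the example; the cited studies use their own, much larger feature sets and tooling.

```python
# A sketch of a binary article-quality model with Random Forest (scikit-learn).
# The CSV file and feature names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

FEATURES = ["text_length", "references", "images", "sections", "unique_editors"]

df = pd.read_csv("articles_features.csv")        # hypothetical feature table
X, y = df[FEATURES], df["is_complete"]           # 1 = FA/GA, 0 = all other grades

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Random Forest also exposes which features mattered most for the decision:
for name, importance in sorted(
    zip(FEATURES, model.feature_importances_), key=lambda pair: -pair[1]
):
    print(f"{name:15s} {importance:.3f}")
```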

The results show that article quality models differ between language versions of Wikipedia.[2][3][4] For example, while in one language version the number of references (sources) may be among the most important features, in another the number of images and the length of the text may matter more.

In this approach, quality is modeled as the probability of assigning an article to one of two classes, Complete or Incomplete. The decision is based on the analysis of various features (metrics or indicators): the length of the text,[10][11][12][13][14][15] the number of references,[16][17][18][19] images,[20][21] sections,[22][23] links to the article, the number of facts,[7][24] visits, the number of revisions and many others. There are also a number of linguistic features,[6][8] which depend on the language under consideration. Measures based on the number of links from external sources, such as Reddit, Facebook, YouTube, Twitter, LinkedIn, VKontakte and other social services, can also be taken into account.[25][26] Additionally, we can take into account the reputation of the users who edit Wikipedia articles.[27][28] To determine the experience of Wikipedia editors, special online tools such as WikiTop can be useful.
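As a rough illustration of how a few of these features can be computed from raw wikitext, the sketch below uses simple regular expressions. The patterns are deliberate simplifications that miss edge cases (templates, nested syntax, comments) and are not the extractors used in the cited studies.

```python
# A rough sketch: count a few basic features directly in raw wikitext.
import re

def simple_features(wikitext: str) -> dict:
    return {
        "text_length": len(wikitext),                                        # size in characters
        "references": len(re.findall(r"<ref[ >]", wikitext)),                # <ref> tags
        "images": len(re.findall(r"\[\[(?:File|Image):", wikitext)),         # embedded files
        "sections": len(re.findall(r"^==+[^=].*?==+\s*$", wikitext, re.M)),  # section headings
        "internal_links": len(re.findall(r"\[\[(?!File:|Image:)", wikitext)),
    }

sample = "== History ==\nSome text.<ref>A book</ref>\n[[File:Map.png]]\n[[Another article]]"
print(simple_features(sample))
```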

Currently, more than 300 features (or measures) are used in studies in total, depending on the language version of Wikipedia and the complexity of the quality model. Some features, such as references (sources), can be evaluated in more depth:[29] we can not only count them, but also assess how well-known and reliable the sources used in a Wikipedia article are. Some measures can be derived from expert opinions, which can be obtained from different sources, for example the WikiBest service.

Where to get these features?

There are several sources: a database dump (backup copy) of Wikipedia, the API services, special tools and others.[30]

To get some features, it is enough to send a request (query) to the appropriate API; for other features (especially linguistic ones) you need special libraries and parsers. A considerable part of the time, however, is spent writing your own tools (we will talk about this in separate articles).
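As an example of the first route, the sketch below queries the MediaWiki Action API of the English Wikipedia for a few page-level properties (other language versions expose the same API under their own domains). The choice of properties is only an illustration, and continuation handling for pages with many links or images is omitted.

```python
# A minimal sketch: fetch a few page-level properties from the MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def page_properties(title: str) -> dict:
    params = {
        "action": "query",
        "format": "json",
        "formatversion": 2,
        "titles": title,
        "prop": "info|extlinks|images",   # page metadata, external links, files
        "ellimit": "max",
        "imlimit": "max",
    }
    page = requests.get(API, params=params, timeout=30).json()["query"]["pages"][0]
    return {
        "length_bytes": page.get("length", 0),
        "external_links": len(page.get("extlinks", [])),
        "images": len(page.get("images", [])),
        "last_touched": page.get("touched"),
    }

print(page_properties("Information quality"))
```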

Are there other ways of assessing the quality of Wikipedia articles besides the binary one?

Yes. Recent studies[31][32] propose a method for scoring articles on a continuous scale from 0 to 100. Thus, an article can receive, for example, a score of 54.21. This method has been tested in 55 language versions. The results are available on the WikiRank service, which allows you to evaluate and compare the quality and popularity of Wikipedia articles in different languages. The method is, of course, not ideal, but it works for locally known topics.[32]
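The general idea behind such a continuous score can be illustrated as follows: each measure is normalized against a reference (threshold) value, and the normalized values are averaged and scaled to 0 to 100. The features and thresholds below are invented for the example and are not the actual parameters of WikiRank or the cited papers.

```python
# An illustrative synthetic score: normalize each measure against an assumed
# threshold, cap it at 1.0, average, and scale to 0-100.
THRESHOLDS = {            # value at (or above) which a measure scores 1.0
    "text_length": 50_000,
    "references": 100,
    "images": 10,
    "sections": 20,
}

def synthetic_score(measures: dict) -> float:
    normalized = [
        min(measures.get(name, 0) / limit, 1.0) for name, limit in THRESHOLDS.items()
    ]
    return 100 * sum(normalized) / len(normalized)

# Example: a mid-sized article gets a score between a stub and a featured article.
print(round(synthetic_score(
    {"text_length": 32_000, "references": 45, "images": 6, "sections": 12}), 2))
```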

Are there ways of assessing the quality of a part of a Wikipedia article?

Of course. For example, one of the important elements of an article is the so-called “infobox”. This is a separate frame (table), usually located at the top right of the article, which shows the most important facts about the subject. So there is no need to search for this information in the text: you can just look at this table. Separate studies are devoted to evaluating the quality of these infoboxes.[3][33] There are also projects, such as Infoboxes.net, which allow you to automatically compare infoboxes in different language versions.
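As an illustration, infobox parameters can be pulled out of wikitext with a parser such as mwparserfromhell and then compared across language versions. Treating the first template whose name starts with “Infobox” as the article's infobox is a simplifying assumption made here, not the method of the cited studies.

```python
# A sketch: extract infobox parameters from wikitext with mwparserfromhell
# (pip install mwparserfromhell). Which template counts as "the infobox"
# is a simplifying assumption for this example.
import mwparserfromhell

def infobox_params(wikitext: str) -> dict:
    code = mwparserfromhell.parse(wikitext)
    for template in code.filter_templates():
        if str(template.name).strip().lower().startswith("infobox"):
            return {str(p.name).strip(): str(p.value).strip() for p in template.params}
    return {}

sample = "{{Infobox settlement\n| name = Poznań\n| population = 541316\n}}\nArticle text..."
print(infobox_params(sample))
```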

Why do we need all this?

Wikipedia is used often, but the quality of its information is not always checked. The proposed methods can simplify this task: if an article is of low quality, a reader who knows this will be more careful about using its material for decision making. The user can also see in which language the topic of interest is described better. Most importantly, modern techniques make it possible to transfer information between different language versions. This means that weaker language versions of Wikipedia can be automatically enriched with high-quality data from other language versions.[34] This will also improve the quality of other semantic databases for which Wikipedia is the main source of information, first of all DBpedia, Wikidata, YAGO2 and others. In this way, Wikipedia contributes to open data projects, where its content is used to build and enhance freely available datasets.

It's worth mentioning that high-quality Wikipedia articles provide reliable information for students, educators, and researchers, supporting learning and teaching across diverse subjects. Researchers can use Wikipedia as a starting point for academic investigations, especially in fields where up-to-date information is essential.

Wikipedia serves as a foundational dataset for building knowledge graphs used by various technologies, including search engines and AI applications. Search engines like Google, Bing, and DuckDuckGo frequently display Wikipedia snippets in their search results, enhancing the accuracy and reliability of the information provided to users. Additionally, social media platforms often link to Wikipedia for fact-checking and providing context in discussions, relying on its accuracy. Virtual assistants such as Siri, Google Assistant, and Alexa often source information from Wikipedia to answer user queries, making the quality of articles crucial for accurate responses.

High-quality Wikipedia data enhances NLP models, improving their ability to understand and generate human language. Moreover, Wikipedia serves as a critical resource for training large language models (LLMs) and identifying fake news due to its extensive, diverse, and up-to-date content. Training AI models, including large language models, on high-quality Wikipedia content improves their performance and reliability.

Government officials and policymakers can use Wikipedia to access and share information quickly, which is crucial for informed decision-making. Journalists frequently reference Wikipedia for background information and fact-checking, relying on its accuracy to produce quality news articles. Medical professionals and patients can use Wikipedia to look up health information, making the accuracy and reliability of health-related articles vital.

Wikipedia documents cultural heritage and history, preserving important information for future generations. Additionally, Wikipedia's multilingual platform helps in preserving and promoting lesser-known languages, contributing to their documentation and use.

Finally, other crowdsourcing platforms can learn from Wikipedia's collaborative model and quality control mechanisms to improve their own processes. Libraries and information professionals use Wikipedia as a reference tool and for training purposes in information literacy.

Training Large Language Models

Wikipedia's vast and continually updated repository of information makes it an invaluable dataset for training high-quality LLMs. These models require large amounts of text data to learn language patterns, context, and knowledge representation. For example:

  • OpenAI's GPT-3 and GPT-4 used Wikipedia as a key part of their training data, leveraging its extensive and structured content to improve language understanding and generation capabilities.[35]
  • BERT (Bidirectional Encoder Representations from Transformers) by Google also incorporated Wikipedia data to enhance its natural language processing performance, demonstrating significant improvements in various NLP tasks.[36]

Identification of Fake News

Wikipedia plays a crucial role in combating misinformation and fake news through its rigorous content moderation and citation requirements. Several studies highlight how Wikipedia's structured information aids in identifying and mitigating fake news. One study found that Wikipedia links are prominently featured in search engine results, making them a first line of defense against misinformation for many users.[37] Organizations dedicated to fact-checking, such as Snopes and FactCheck.org, can use Wikipedia as a starting point for verifying information.


References

  1. Lewoniewski, W. (2019). Measures for Quality Assessment of Articles and Infoboxes in Multilingual Wikipedia. Lecture Notes in Business Information Processing, vol. 339. Springer, Cham (pp. 619-633)
  2. Lewoniewski, W., Węcel, K., Abramowicz, W. (2016). Quality and Importance of Wikipedia Articles in Different Languages. In International Conference on Information and Software Technologies (pp. 613-624). Springer International Publishing.
  3. Węcel, K., Lewoniewski, W. (2015). Modelling the Quality of Attributes in Wikipedia Infoboxes. In International Conference on Business Information Systems (pp. 308-320). Springer International Publishing.
  4. Lewoniewski, W., Węcel, K., Abramowicz, W. (2017). Comparative Analysis of Classification Models for Quality Assessment of Wikipedia Articles. Matematyka i informatyka na usługach ekonomii, Wydawnictwo UEP Poznań, ISBN 9788374179386
  5. Warncke-Wang, M., Cosley, D., Riedl, J. (2013). Tell Me More: An Actionable Quality Model for Wikipedia. In Proceedings of the 9th International Symposium on Open Collaboration. ACM.
  6. Khairova, N., Lewoniewski, W., Węcel, K. (2017). Estimating the Quality of Articles in Russian Wikipedia Using the Logical-Linguistic Model of Fact Extraction. In International Conference on Business Information Systems (pp. 28-40). Springer, Cham.
  7. Lewoniewski, W., Khairova, N., Węcel, K., Stratiienko, N., Abramowicz, W. (2017). Using Morphological and Semantic Features for the Quality Assessment of Russian Wikipedia. In International Conference on Information and Software Technologies (pp. 550-560). Springer, Cham. DOI: 10.1007/978-3-319-67642-5_46
  8. Lewoniewski, W., Węcel, K., Abramowicz, W. (2017). Determining Quality of Articles in Polish Wikipedia Based on Linguistic Features.
  9. Moas, P.M. and Lopes, C.T. (2023). Automatic Quality Assessment of Wikipedia Articles - A Systematic Literature Review. ACM Computing Surveys, 56(4), pp.1-37.
  10. Blumenstock, J. E. (2008). Automatically Assessing the Quality of Wikipedia Articles. Tech. rep.
  11. Conti, R., Marzini, E., Spognardi, A., Matteucci, I., Mori, P., Petrocchi, M. (2014). Maturity Assessment of Wikipedia Medical Articles. In: Computer-Based Medical Systems (CBMS), 2014 IEEE 27th International Symposium on. pp. 281-286. IEEE
  12. Yaari, E., Baruchson-Arbib, S., Bar-Ilan, J. (2011). Information Quality Assessment of Community Generated Content: A User Study of Wikipedia. Journal of Information Science 37(5), 487-498
  13. Dang, Q.V., Ignat, C.L. (2016). Measuring Quality of Collaboratively Edited Documents: The Case of Wikipedia. In: Collaboration and Internet Computing (CIC), 2016 IEEE 2nd International Conference on. pp. 266-275. IEEE
  14. Shen, A., Qi, J., Baldwin, T. (2017). A Hybrid Model for Quality Assessment of Wikipedia Articles. In: Proceedings of the Australasian Language Technology Association Workshop 2017. pp. 43-52
  15. Zhang, S., Hu, Z., Zhang, C., Yu, K. (2018). History-Based Article Quality Assessment on Wikipedia. In: Big Data and Smart Computing (BigComp), 2018 IEEE International Conference on. pp. 1-8. IEEE
  16. Warncke-Wang, M., Ayukaev, V. R., Hecht, B., & Terveen, L. G. (2015). The Success and Failure of Quality Improvement Projects in Peer Production Communities. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 743-756). ACM.
  17. Soonthornphisaj, N., & Paengporn, P. (2017). Thai Wikipedia Article Quality Filtering Algorithm. In Proceedings of the International MultiConference of Engineers and Computer Scientists (Vol. 1).
  18. Dalip, D.H., Gonçalves, M.A., Cristo, M., Calado, P. (2009). Automatic Quality Assessment of Content Created Collaboratively by Web Communities: A Case Study of Wikipedia. In: Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries. pp. 295-304
  19. di Sciascio, C., Strohmaier, D., Errecalde, M., Veas, E. (2017). Wikilyzer: Interactive Information Quality Assessment in Wikipedia. In: Proceedings of the 22nd International Conference on Intelligent User Interfaces. pp. 377-388. ACM
  20. Wu, K., Zhu, Q., Zhao, Y., Zheng, H. (2010). Mining the Factors Affecting the Quality of Wikipedia Articles. In: Information Science and Management Engineering (ISME), 2010 International Conference of. vol. 1, pp. 343-346. IEEE
  21. Liu, J., Ram, S. (2018). Using Big Data and Network Analysis to Understand Wikipedia Article Quality. Data & Knowledge Engineering
  22. Blumenstock, J.E. (2008). Size Matters: Word Count as a Measure of Quality on Wikipedia. In: WWW. pp. 1095-1096
  23. Lerner, J., Lomi, A. (2018). Knowledge Categorization Affects Popularity and Quality of Wikipedia Articles. PloS one 13(1), e0190674
  24. Lex, E., Voelske, M., Errecalde, M., Ferretti, E., Cagnina, L., Horn, C., Stein, B., Granitzer, M. (2012). Measuring the Quality of Web Content Using Factual Information. In Proceedings of the 2nd joint WICOW/AIRWeb workshop on web quality, pp. 7-10. ACM
  25. Lewoniewski, W., Härting, R. C., Wecel, K., Reichstein, C., Abramowicz, W. (2018). Application of SEO Metrics to Determine the Quality of Wikipedia Articles and Their Sources. In International Conference on Information and Software Technologies (pp. 139-152). Springer, Cham
  26. Moyer, D., Carson, S. L., Dye, T. K., Carson, R. T., Goldbaum, D. (2015). Determining the Influence of Reddit Posts on Wikipedia Pageviews. In Proceedings of the Ninth International AAAI Conference on Web and Social Media.
  27. Wu, G., Harrigan, M., Cunningham, P. (2011). Characterizing Wikipedia Pages Using Edit Network Motif Profiles. In Proceedings of the 3rd International Workshop on Search and Mining User-generated Contents, Glasgow, UK.
  28. Suzuki, Y., Nakamura, S. (2016). Assessing the Quality of Wikipedia Editors Through Crowdsourcing. In Proceedings of the 25th International Conference Companion on World Wide Web, Montreal, QC, Canada; International World Wide Web Conferences Steering Committee: Geneva, Switzerland, 2016; pp. 1001–1006.
  29. Lewoniewski, W., Węcel, K., Abramowicz, W., (2017), Analysis of References Across Wikipedia Languages. Information and Software Technologies. ICIST 2017. DOI: 10.1007/978-3-319-67642-5_47
  30. Lewoniewski, W., Węcel, K., Abramowicz, W. (2019). Multilingual Ranking of Wikipedia Articles with Quality and Popularity Assessment in Different Topics. Computers 2019, 8, 60. DOI: 10.3390/computers8030060
  31. Lewoniewski, W., Węcel, K., Abramowicz, W. (2017). Relative Quality and Popularity Evaluation of Multilingual Wikipedia Articles. In Informatics (Vol. 4, No. 4, p. 43). Multidisciplinary Digital Publishing Institute. DOI: 10.3390/informatics4040043
  32. Lewoniewski, W., Węcel, K. (2017). Relative Quality Assessment of Wikipedia Articles in Different Languages Using Synthetic Measure. In International Conference on Business Information Systems (pp. 282-292). Springer, Cham. DOI: 10.1007/978-3-319-69023-0_24
  33. Lewoniewski, W. (2017). Completeness and Reliability of Wikipedia Infoboxes in Various Languages. In International Conference on Business Information Systems (pp. 295-305). Springer, Cham. DOI: 10.1007/978-3-319-69023-0_25
  34. Lewoniewski, W. (2017). Enrichment of Information in Multilingual Wikipedia Based on Quality Analysis. In International Conference on Business Information Systems (pp. 216-227). Springer, Cham. DOI: 10.1007/978-3-319-69023-0_19
  35. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A. and Agarwal, S. (2020). Language Models are Few-Shot Learners. Advances in neural information processing systems, 33, pp.1877-1901.
  36. Devlin, J., Chang, M.W., Lee, K. and Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
  37. Vincent, N. and Hecht, B. (2021). A deeper investigation of the importance of Wikipedia links to search engine results. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), pp.1-15.