https://wikipediaquality.com/api.php?action=feedcontributions&user=Eva&feedformat=atomWikipedia Quality - User contributions [en]2024-03-29T01:19:11ZUser contributionsMediaWiki 1.30.0https://wikipediaquality.com/index.php?title=Wikipedia_%26_Research:_the_Innovative_Character_of_Wikipedia_Research_and_the_New_Challenges_(And_Opportunities)_Associated_with_It&diff=16957Wikipedia & Research: the Innovative Character of Wikipedia Research and the New Challenges (And Opportunities) Associated with It2019-06-05T06:25:16Z<p>Eva: Wikilinks</p>
<hr />
<div>'''Wikipedia & Research: the Innovative Character of Wikipedia Research and the New Challenges (And Opportunities) Associated with It''' - scientific work related to [[Wikipedia quality]] published in 2011, written by [[Mayo Fuster Morell]].<br />
<br />
== Overview ==<br />
The workshop will focus on the current stage of [[Wikipedia]] research and, more generally, of commons-based peer production (focusing less on content than on the methodologies and the research process itself), as well as the innovations, problems and new insights regarding (action) research on commons-based peer production.</div>
<hr />
<div>'''Willinsky on Wikipedia''' - scientific work related to Wikipedia quality published in 2009, written by Michael McCarthy.<br />
<br />
== Overview ==<br />
A PDF link to an article John Willinsky wrote for First Monday about what open access research can do for Wikipedia. Here is the article's abstract from First Monday: This study examines the degree to which Wikipedia entries cite or reference research and scholarship, and whether that research and scholarship is generally available to readers. Working on the assumption that where Wikipedia provides links to research and scholarship that readers can readily consult, it increases the authority, reliability, and educational quality of this popular encyclopedia, this study examines Wikipedia's use of open access research and scholarship, that is, peer-reviewed journal articles that have been made freely available online. This study demonstrates that, among a sample of 100 Wikipedia entries which included 168 sources or references, only two percent of the entries provided links to open access research and scholarship. However, it proved possible to locate, using Google Scholar and other search engines, relevant examples of open access work for 60 percent of a sub-set of 20 Wikipedia entries. The results suggest that much more can be done to enrich and enhance this encyclopedia's representation of the current state of knowledge. To assist in this process, the study provides a guide to help Wikipedia contributors locate and utilize open access research and scholarship in creating and editing encyclopedia entries.</div>
<hr />
<div>'''Lifecycle-Based Evolution of Features in Collaborative Open Production Communities: the Case of Wikipedia''' - scientific work related to Wikipedia quality published in 2013, written by Pujan Ziaie and Medin Imamovic.<br />
<br />
== Overview ==<br />
In the last decade, collaborative open production communities have provided an effective platform for geographically dispersed users to collaborate and generate content in a well-structured and consistent form. Wikipedia is a prominent example in this area. What is of great importance in production communities is the prioritization and evolution of features with regard to the community lifecycle. Users are the cornerstone of such communities and their needs and attitudes constantly change as communities grow. The increasing amount and versatility of content and users require modifications in areas ranging from user roles and access levels to content quality standards and community policies and goals. In this paper, the authors draw on two pertinent theories about the lifecycle of online communities, and of open collaborative communities in particular, focusing on the case of Wikipedia. The authors conceptualize three general stages (Rising, Organizing, and Stabilizing) within the lifecycle of collaborative open production communities. The salient factors, features and focus of attention in each stage are provided and the chronology of features is visualized. These findings, if properly generalized, can help designers of other types of open production communities effectively allocate their resources and introduce new features based on the needs of both community and users.</div>
<hr />
<div>'''Liberating Epistemology: Wikipedia and the Social Construction of Knowledge''' - scientific work related to Wikipedia quality published in 2008, written by Rubén Rosario Rodríguez.<br />
<br />
== Overview ==<br />
This investigation contends that postfoundationalist models of rationality</div>
<hr />
<div>'''Collaborative Projects (Social Media Application): About Wikipedia, the Free Encyclopedia''' - scientific work related to Wikipedia quality published in 2014, written by Andreas M. Kaplan and Michael Haenlein.<br />
<br />
== Overview ==<br />
Collaborative projects—defined herein as social media applications that enable the joint and simultaneous creation of knowledge-related content by many end-users—have only recently received interest among a larger group of academics. This is surprising since applications such as wikis, social bookmarking sites, online forums, and review sites are probably the most democratic form of social media and reflect well the idea of user-generated content. The purpose of this article is to provide insight regarding collaborative projects; the concept of wisdom of crowds, an essential condition for their functioning; and the motivation of readers and contributors. Specifically, authors provide advice on how firms can leverage collaborative projects as an essential element of their online presence to communicate both externally with stakeholders and internally among employees. Authors also discuss how to address situations in which negative information posted on collaborative projects can become a threat and PR crisis for firms.</div>
<hr />
<div>'''Cross-Lingual Knowledge Discovery: Chinese-To-English Article Linking in Wikipedia''' - scientific work related to Wikipedia quality published in 2012, written by Ling-Xiang Tang, Andrew Trotman, Shlomo Geva and Yue Xu.<br />
<br />
== Overview ==<br />
In this paper the authors examine automated Chinese-to-English link discovery in Wikipedia and the effects of Chinese segmentation and Chinese-to-English translation on the hyperlink recommendation. The authors' experimental results show that the implemented link discovery framework can effectively recommend Chinese-to-English cross-lingual links. The techniques described here can assist bi-lingual users where a particular topic is not covered in Chinese, is not equally covered in both languages, or is biased in one language, as well as assist language learning.</div>
<hr />
<div>'''Wikipedia-Based Kernels for Text Categorization''' - scientific work related to [[Wikipedia quality]] published in 2007, written by [[Zsolt Minier]], [[Zalán Bodó]] and [[Lehel Csató]].<br />
<br />
== Overview ==<br />
In recent years several models have been proposed for text categorization. Among these, one of the most widely applied is the vector space model (VSM), where independence between indexing terms, usually words, is assumed. Since training corpora are relatively small compared to what would be required for a realistic number of words, the generalization power of the learning algorithms is low. It is assumed that a bigger text corpus can boost the representation and hence the learning process. Based on the work of Gabrilovich and Markovitch [6], the authors incorporate [[Wikipedia]] articles into the system to give a word-distributional representation for documents. The extension with this new corpus increases dimensionality, therefore clustering of [[features]] is needed. The authors use latent semantic analysis (LSA), kernel principal component analysis (KPCA) and kernel canonical correlation analysis (KCCA) and present results for these experiments on the Reuters corpus.</div>
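The vector space model mentioned above can be made concrete with a small sketch. This is an illustrative toy example, not the authors' implementation: documents become term-frequency vectors over a shared vocabulary (terms assumed independent, as in the VSM), and similarity between documents is the cosine of the angle between their vectors.

```python
import math
from collections import Counter

def tf_vector(tokens, vocabulary):
    """Term-frequency vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[term] for term in vocabulary]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy corpus: two short "documents" over a shared vocabulary.
doc_a = "wikipedia articles improve text categorization".split()
doc_b = "wikipedia categories improve text clustering".split()
vocab = sorted(set(doc_a) | set(doc_b))

similarity = cosine(tf_vector(doc_a, vocab), tf_vector(doc_b, vocab))
print(round(similarity, 2))  # → 0.6 (three of five terms shared)
```

Extending the vocabulary with terms drawn from Wikipedia articles, as the paper proposes, enlarges these vectors, which is why the dimensionality-reduction methods (LSA, KPCA, KCCA) become necessary.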
<hr />
<div>'''Kann Wikipedia Unser Fachwissen Bereichern''' - scientific work related to Wikipedia quality published in 2015, written by U. Rechenberg, C. Josten and S. Klima.<br />
<br />
== Overview ==<br />
Background: New media constantly present new challenges to both instructors and trainees in medical education and everyday clinical practice. Wikipedia in particular plays an increasingly important role in information gathering. Alongside its many advantages, Wikipedia still has the drawback that the accuracy of the information it provides cannot be verified. The aim of this work is to examine the relevance of orthopaedic and trauma-surgery Wikipedia articles in everyday clinical practice. Material and methods: In September 2013, a study group consisting of final-year medical students, residents, and a specialist and university lecturer was surveyed on medical topics on Wikipedia using two questionnaires. Clinically common topics on diseases/symptoms, examination techniques/classifications, and conservative/operative therapy from the field of orthopaedics and trauma surgery were assessed according to objective criteria. A total of 211 Wikipedia articles on medical topics were examined. Finally, each study participant gave a subjective assessment of the content on Wikipedia. Results: 134 of the 211 medical Wikipedia pages from orthopaedics and trauma surgery appeared as stand-alone articles. The study showed that the Wikipedia articles were highly up to date and excellently positioned in Google search results. Thanks to numerous links, many literature references (e.g. AWMF guidelines, journals), high-quality images and occasionally videos, the specialist articles are generally on par with those in print media. Almost half (42.5%) of the articles were judged by the study participants to be suitable for preparation for the state examination and for the everyday clinical practice of career starters. Conclusion: Young physicians in particular, the so-called Web 2.0 generation, increasingly use internet resources to acquire knowledge, which is changing learning methods. Wikipedia is a suitable platform, both during medical school and during specialist training, for making content from our field freely available to many readers. The content and quality of the articles in our field call for the commitment of us all.</div>
<hr />
<div>'''Edit This Page: the Socio-Technological Infrastructure of a Wikipedia Article''' - scientific work related to Wikipedia quality published in 2009, written by Shaun Slattery.<br />
<br />
== Overview ==<br />
Networked environments, such as wikis, are commonly used to support work, including the collaborative authoring of information and "fact-building." In networked environments, the activity of fact-building is mediated not only by the technological features of the interface, but also by the social conventions of the community it supports. This paper examines the social and technological features of a Wikipedia article in order to understand how these features help mediate the activity of fact-building and highlights the need for communication designers to consider the goals and needs of the communities for which they design.</div>
<hr />
<div>'''Wikipedia-Based Kernels for Text Categorization''' - scientific work related to Wikipedia quality published in 2007, written by Zsolt Minier, Zalán Bodó and Lehel Csató.<br />
<br />
== Overview ==<br />
In recent years several models have been proposed for text categorization. Among these, one of the most widely applied is the vector space model (VSM), where independence between indexing terms, usually words, is assumed. Since training corpora are relatively small compared to what would be required for a realistic number of words, the generalization power of the learning algorithms is low. It is assumed that a bigger text corpus can boost the representation and hence the learning process. Based on the work of Gabrilovich and Markovitch [6], the authors incorporate Wikipedia articles into the system to give a word-distributional representation for documents. The extension with this new corpus increases dimensionality, therefore clustering of features is needed. The authors use latent semantic analysis (LSA), kernel principal component analysis (KPCA) and kernel canonical correlation analysis (KCCA) and present results for these experiments on the Reuters corpus.</div>
<hr />
<div>'''Overview of the 1St International Competition on Quality Flaw Prediction in Wikipedia''' - scientific work related to Wikipedia quality published in 2012, written by Maik Anderka and Benno Stein.<br />
<br />
== Overview ==<br />
The paper overviews the task "Quality Flaw Prediction in Wikipedia" of the PAN'12 competition. An evaluation corpus is introduced which comprises 1,592,226 English Wikipedia articles, of which 208,228 have been tagged to contain one of ten important quality flaws. Moreover, the performance of three quality flaw classifiers is evaluated.</div>
<hr />
<div>'''Semantic Content Filtering with Wikipedia and Ontologies''' - scientific work related to Wikipedia quality published in 2010, written by Pekka Malo, Pyry-Antti Siitari, Oskar Ahlgren, Jyrki Wallenius and Pekka Korhonen.<br />
<br />
== Overview ==<br />
The use of domain knowledge is generally found to improve query efficiency in content filtering applications. In particular, tangible benefits have been achieved when using knowledge-based approaches within more specialized fields, such as medical free texts or legal documents. However, the problem is that sources of domain knowledge are time consuming to build and equally costly to maintain. As a potential remedy, recent studies on Wikipedia suggest that this large body of socially constructed knowledge can be effectively harnessed to provide not only facts but also accurate information about semantic concept-similarities. This paper describes a framework for document filtering, where Wikipedia's concept relatedness information is combined with a domain ontology to produce semantic content classifiers. The approach is evaluated using the Reuters RCV1 corpus and TREC-11 filtering task definitions. In a comparative study, the approach shows robust performance and appears to outperform content classifiers based on Support Vector Machines (SVM) and the C4.5 algorithm.</div>
<hr />
<div>'''Entityclassifier.Eu: Real-Time Classification of Entities in Text with Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Milan Dojchinovski]] and [[Tomáš Kliegr]].<br />
<br />
== Overview ==<br />
Targeted Hypernym Discovery (THD) performs unsupervised classification of entities appearing in text. A hypernym mined from the free-text of the [[Wikipedia]] article describing the entity is used as a class. The type as well as the entity are cross-linked with their representation in [[DBpedia]], and enriched with additional types from DBpedia and YAGO knowledge bases providing a semantic web interoperability. The system, available as a web application and web service at entityclassifier.eu, currently supports English, German and Dutch.</div>
<hr />
<div>'''Wikipedia for Smart Machines and Double Deep Machine Learning''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Moshe Ben-Bassat]].<br />
<br />
== Overview ==<br />
Very important breakthroughs in data-centric deep learning algorithms led to impressive performance in transactional point applications of Artificial Intelligence (AI) such as face recognition or EKG classification. With all due appreciation, however, knowledge-blind, data-only machine learning algorithms have severe limitations for non-transactional AI applications, such as medical diagnosis beyond the EKG results. Such applications require deeper and broader knowledge in their problem-solving capabilities, e.g. integrating anatomy and physiology knowledge with EKG results and other patient findings. Following a review and illustrations of such limitations for several real-life AI applications, the author points at ways to overcome them. The proposed [[Wikipedia]] for Smart Machines initiative aims at building repositories of software structures that represent humanity's science & technology knowledge in various parts of life; knowledge that we all learn in schools, universities and during professional life. Target readers for these repositories are smart machines, not humans. AI software developers will have these Reusable Knowledge structures readily available, hence the proposed name ReKopedia. Big Data is by now a mature technology; it is time to focus on Big Knowledge. Some will be derived from data, some will be obtained from mankind's gigantic repository of knowledge. Wikipedia for smart machines, along with the new Double Deep Learning approach, offers a paradigm for integrating data-centric deep learning algorithms with algorithms that leverage deep knowledge, e.g. evidential reasoning and causality reasoning. For illustration, a project is described to produce ReKopedia knowledge modules for medical diagnosis of about 1,000 disorders. Data is important, but knowledge (deep, basic, and commonsense) is equally important.</div>
<hr />
<div>'''Wikipedia’s Gaps in Coverage: are Wikiprojects a Solution? a Study of the Cambodian Wikiproject''' - scientific work related to [[Wikipedia quality]] published in 2018, written by [[Brendan Luyt]].<br />
<br />
== Overview ==<br />
Purpose</div>
<hr />
<div>'''Reputation and Reliability in Collective Goods the Case of the Online Encyclopedia Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Denise L. Anthony]], [[Sean W. Smith]] and [[Timothy Williamson]].<br />
<br />
== Overview ==<br />
An important organizational innovation enabled by the revolution in information technologies is '[[open source]]' production, which converts private commodities into essentially public goods. Similar to other public goods, incentives for [[reputation]] and group identity appear to motivate contributions to open source projects, overcoming the social dilemma inherent in producing such goods. In this paper the authors examine how contributor motivations affect the type of contributions made to the open source online encyclopedia [[Wikipedia]]. As expected, the authors find that registered participants, motivated by reputation and commitment to the [[Wikipedia community]], make many contributions with high [[reliability]]. Surprisingly, however, they find the highest reliability among the vast numbers of anonymous 'Good Samaritans' who contribute only once. The authors' findings of high reliability in the contributions of both Good Samaritans and committed 'zealots' suggest that open source production succeeds by altering the scope of production such that a critical mass of contributors can participate.</div>
<hr />
<div>'''Governance of Massive Multiauthor Collaboration – Linux, Wikipedia, and Other Networks: Governed by Bilateral Contracts, Partnerships, or Something in Between?''' - scientific work related to Wikipedia quality published in 2010, written by Dan Wielsch.<br />
<br />
== Overview ==<br />
JIPITEC 1 (2010) 2 - Open collaborative projects are</div>
<hr />
<div>'''Wikipedia and How to Use It for Semantic Document Representation''' - scientific work related to Wikipedia quality published in 2010, written by Ian H. Witten.<br />
<br />
== Overview ==<br />
Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This talk focuses on the process of "wikification"; that is, automatically and judiciously augmenting a plain-text document with pertinent hyperlinks to Wikipedia articles, as though the document were itself a Wikipedia article. The author first describes how Wikipedia can be used to determine semantic relatedness between concepts, then explains how to wikify documents by exploiting Wikipedia's internal hyperlinks for relational information and their anchor texts as lexical information. Data mining techniques are used throughout to optimize the models involved.</div>
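The link-based semantic relatedness the talk describes is commonly computed with the Milne-Witten measure, which scores two articles by the overlap of their incoming links relative to the size of the whole wiki. A minimal sketch under that assumption (the in-link sets and wiki size below are invented toy data, not figures from the talk):

```python
import math

def link_relatedness(links_a, links_b, total_articles):
    """Milne-Witten style relatedness from the sets of articles linking to a and b.

    Returns a value in [0, 1]; higher means more related. Articles with no
    shared in-links score 0.
    """
    a, b = set(links_a), set(links_b)
    shared = a & b
    if not shared:
        return 0.0
    # Normalized link distance, analogous to normalized Google distance.
    distance = (math.log(max(len(a), len(b))) - math.log(len(shared))) / (
        math.log(total_articles) - math.log(min(len(a), len(b)))
    )
    return max(0.0, 1.0 - distance)

# Toy in-link sets for "Cat" and "Dog" in a hypothetical 1,000-article wiki.
cat_inlinks = {"Pet", "Mammal", "Felidae", "Purr"}
dog_inlinks = {"Pet", "Mammal", "Canidae", "Bark", "Wolf"}
print(round(link_relatedness(cat_inlinks, dog_inlinks, 1000), 3))  # → 0.834
```

Wikification then uses such relatedness scores to prefer link targets that are coherent with the other concepts detected in the document.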
<hr />
<div>'''The Perspectives of Higher Education Faculty on Wikipedia''' - scientific work related to Wikipedia quality published in 2010, written by Hsin-liang Chen.<br />
<br />
== Overview ==<br />
Purpose – The purpose of this paper is to investigate whether higher education instructors use information from Wikipedia for teaching and research. Design/methodology/approach – This is an explorative study to identify important factors regarding user acceptance and use of emerging information resources and technologies in the academic community. A total of 201 participants around the world answered an online questionnaire administered by a commercial provider. The questionnaire consisted of 16 Likert-scaled questions to assess participants' agreement with each question, along with an optional open-ended explanation. Findings – The findings of this project confirm that internet access was related to faculty technology use. Online resources and references were ranked the first choice by the participants when searching for familiar and unfamiliar topics. The investigator found that participants' academic ranking status, frequency of e-mail use and academic discipline were related to their use of online datab...</div>
<hr />
<div>'''How Much is Wikipedia Lagging Behind News''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Besnik Fetahu]], [[Abhijit Anand]] and [[Avishek Anand]].<br />
<br />
== Overview ==<br />
Wikipedia, rich in entities and events, is an invaluable resource for various knowledge harvesting, extraction and mining tasks. Numerous resources like [[DBpedia]], YAGO and other knowledge bases are based on extracting entity- and event-based knowledge from it. Online news, on the other hand, is an authoritative and rich source for emerging entities, events and facts relating to existing entities. In this work, the authors study the creation of entities in [[Wikipedia]] with respect to news by studying how entity- and event-based information flows from news to Wikipedia. They analyze the lag of Wikipedia (based on the revision history of the [[English Wikipedia]]) against 20 years of The New York Times dataset (NYT). They model and analyze the lag of entities and events, namely their first appearance in Wikipedia and in the NYT, respectively. In extensive experimental analysis, the authors find that almost 20% of the external references in entity pages are news articles, underscoring the importance of news to Wikipedia. Second, they observe that the entity-based lag follows a normal distribution with a high standard deviation, whereas the lag for news-based events is typically very low. Finally, they find that events are responsible for the creation of emergent entities, with as many as 12% of the entities mentioned in an event page being created after the creation of the event page itself.</div>
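Per entity, the lag the authors model reduces to the difference between two timestamps: the entity's first appearance in a news article and the creation of its Wikipedia page. A trivial sketch with hypothetical dates (not values from the NYT dataset):

```python
from datetime import date

def entity_lag_days(first_news_mention, wikipedia_creation):
    """Lag of Wikipedia behind news for one entity, in days.

    A negative value means Wikipedia covered the entity before the news did.
    """
    return (wikipedia_creation - first_news_mention).days

# Hypothetical emerging entity: first NYT mention, then page creation.
lag = entity_lag_days(date(2008, 3, 14), date(2008, 4, 2))
print(lag)  # → 19
```

Aggregating such per-entity lags over the corpus gives the distributions the abstract describes (roughly normal for entities, typically very small for events).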
<hr />
<div>'''Mediating at the Student–Wikipedia Intersection''' - scientific work related to Wikipedia quality published in 2010, written by Angela Doucet Rand.<br />
<br />
== Overview ==<br />
ABSTRACT Wikipedia is a free online encyclopedia. The encyclopedia is openly edited by registered users. Wikipedia editors can edit their own and others' entries, and some abuse of this editorial power has been unveiled. Content authors have also been criticized for publishing less than accurate content. Educators and students acknowledge casual use of Wikipedia in spite of its perceived inaccuracies. Use of the online encyclopedia as a reference resource in scholarly papers is still debated. The increasing popularity of Wikipedia has led to an influx of research articles analyzing the validity and content of the encyclopedia. This study provides an analysis of relevant articles on academic use of Wikipedia. This analysis attempts to summarize the status of Wikipedia in relation to the scope (breadth) and depth of its contents and looks at content validity issues that are of concern to the use of Wikipedia for higher education. The study seeks to establish a reference point from which educators can make i...</div>
<hr />
<div>{{Infobox work<br />
| title = Improved Text Annotation with Wikipedia Entities<br />
| date = 2013<br />
| authors = [[Christos Makris]]<br />[[Yannis Plegas]]<br />[[Evangelos Theodoridis]]<br />
| doi = 10.1145/2480362.2480425<br />
| link = https://dl.acm.org/citation.cfm?doid=2480362.2480425<br />
| plink = https://www.semanticscholar.org/paper/Improved-text-annotation-with-Wikipedia-entities-Makris-Plegas/5ae2f4253f3c4076cc3b77187bf58f4e0930a893/figure/1<br />
}}<br />
'''Improved Text Annotation with Wikipedia Entities''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Christos Makris]], [[Yannis Plegas]] and [[Evangelos Theodoridis]].<br />
<br />
== Overview ==<br />
Text annotation is the procedure of initially identifying, in a segment of text, a set of words that are dominant in meaning and later attaching to them extra information (usually drawn from a concept [[ontology]], implemented as a catalog) that expresses their conceptual content in the current context. Attaching additional [[semantic information]] and structure helps to represent, in a machine-interpretable way, the topic of the text and is a fundamental preprocessing step for many Information Retrieval tasks like indexing, clustering, classification, text summarization and cross-referencing content on web pages, posts, tweets etc. In this paper, the authors deal with automatic annotation of text documents with entities of [[Wikipedia]], the largest online knowledge base; a process that is commonly known as Wikification. As in previous approaches, the cross-referencing of words in the text to Wikipedia articles is based on local compatibility between the text around the term and textual information embedded in the article. The main contribution of this paper is a set of disambiguation techniques that enhance previously published approaches by employing both the [[WordNet]] lexical database and the Wikipedia articles' PageRank scores in the disambiguation process. The experimental evaluation performed shows that the exploitation of these additional semantic information sources leads to more accurate text annotation.</div>
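The disambiguation idea described above (local textual compatibility combined with a graph-derived prior such as a PageRank score) can be sketched as a weighted combination of the two signals. The linear blend, the parameter alpha and the candidate scores below are illustrative assumptions, not the paper's actual method or values:

```python
def disambiguate(candidates, alpha=0.7):
    """Pick the candidate Wikipedia article with the best blended score.

    candidates: list of (title, context_similarity, pagerank) tuples, where
    context_similarity measures local compatibility between the mention's
    context and the article text, and pagerank is a normalized PageRank score.
    """
    def score(candidate):
        _, similarity, pagerank = candidate
        return alpha * similarity + (1 - alpha) * pagerank
    return max(candidates, key=score)[0]

# Hypothetical candidates for the mention "Java" in a programming-related context.
candidates = [
    ("Java (programming language)", 0.82, 0.6),
    ("Java (island)", 0.30, 0.5),
    ("Java (coffee)", 0.25, 0.2),
]
print(disambiguate(candidates))  # → Java (programming language)
```

Raising alpha favors the local context signal; lowering it favors the global popularity prior.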
<hr />
<div>'''Using Wikipedia Categories for Ad Hoc Search''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Rianne Kaptein]], [[Marijn Koolen]] and [[Jaap Kamps]].<br />
<br />
== Overview ==<br />
In this paper, authors explore the use of category information for ad hoc retrieval in [[Wikipedia]]. Authors show that techniques for entity ranking exploiting this category information can also be applied to ad hoc topics and lead to significant improvements. Automatically assigned target [[categories]] are good surrogates for manually assigned categories, which perform only slightly better.</div>Evahttps://wikipediaquality.com/index.php?title=Hacking_the_Research_Library:_Wikipedia,_Trump,_and_Information_Literacy_in_the_Escape_Room_at_Fresno_State&diff=16934Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State2019-06-05T05:37:28Z<p>Eva: Links</p>
<hr />
<div>'''Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Raymond Pun]].<br />
<br />
== Overview ==<br />
How can librarians teach information literacy in such a politicized atmosphere? In spring 2017, the library at Fresno State held a series of workshops that introduced first-year students to information literacy in a “gamification” setting, an escape room, to encourage community learning. The theme of the workshop focused on President Donald Trump. In this one-shot workshop, students were “locked” in the escape room in the library and had to solve a series of information-literacy puzzles and research tasks, including hacking into Donald Trump’s [[Wikipedia]] page, fact-checking Trump’s tweets, and comparing and analyzing fake news with online databases. The article presents this workshop as a case study on how librarians can creatively engage with students to collaborate, learn, and build information literacy skills using Trump as the teaching subject.</div>Evahttps://wikipediaquality.com/index.php?title=Public_Relations_Interactions_with_Wikipedia&diff=16933Public Relations Interactions with Wikipedia2019-06-05T05:36:16Z<p>Eva: Creating a page: Public Relations Interactions with Wikipedia</p>
<hr />
<div>'''Public Relations Interactions with Wikipedia''' - scientific work related to Wikipedia quality published in 2016, written by Gareth Thompson.<br />
<br />
== Overview ==<br />
Purpose – The purpose of this paper is to consider the relevance of the institutional analysis and development (IAD) framework (Ostrom, 1990) in understanding the incentives for public relations (PR) practitioners’ interactions with Wikipedia, and other common-pool media. Design/methodology/approach – This interdisciplinary conceptual paper applies the economics theory of commons governance to two case studies of PR interactions with Wikipedia. Findings – The analysis concludes that commons governance theory identifies the downside risks of opportunistic behaviour by PR practitioners in their interactions with media commons such as Wikipedia. The paper concludes that Ostrom’s IAD model is relevant to the governance of PR interactions and offers guidance on productive PR practice in common-pool media. Research limitations/implications – The analysis was applied to only two cases for which information was widely available. Practical implications – The paper includes implications for the scope of PR practice...</div>Evahttps://wikipediaquality.com/index.php?title=Unsupervised_Techniques_for_Discovering_Ontology_Elements_from_Wikipedia_Article_Links&diff=16932Unsupervised Techniques for Discovering Ontology Elements from Wikipedia Article Links2019-06-05T05:34:58Z<p>Eva: Adding new article - Unsupervised Techniques for Discovering Ontology Elements from Wikipedia Article Links</p>
<hr />
<div>'''Unsupervised Techniques for Discovering Ontology Elements from Wikipedia Article Links''' - scientific work related to Wikipedia quality published in 2010, written by Zareen Syed and Tim Finin.<br />
<br />
== Overview ==<br />
Authors present an unsupervised and unrestricted approach to discovering an infobox-like ontology by exploiting the inter-article links within Wikipedia. It discovers new slots and fillers that may not be available in the Wikipedia infoboxes. Authors' results demonstrate that there are certain types of properties, evident in the link structure of resources like Wikipedia, that can be predicted with high accuracy using little or no linguistic analysis. The discovered properties can further be used to discover a class hierarchy. Authors' experiments have focused on analyzing people in Wikipedia, but the techniques can be directly applied to other types of entities in text resources that are rich in hyperlinks.</div>Evahttps://wikipediaquality.com/index.php?title=Linking_Fast_and_Wikipedia&diff=16931Linking Fast and Wikipedia2019-06-05T05:32:47Z<p>Eva: wikilinks</p>
<hr />
<div>'''Linking Fast and Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Rick Bennett]], [[Eric Childress]], [[Kerre Kammerer]] and [[Diane Vizine-Goetz]].<br />
<br />
== Overview ==<br />
This paper describes a research project to develop automated techniques for linking FAST (Faceted Application of Subject Terminology) to [[Wikipedia]] articles. The research is motivated by libraries’ interest in connecting library resources such as authority files to non-library linked data resources such as GeoNames and [[DBpedia]] (a dataset containing structured data extracted from Wikipedia). Of the approximately 183,000 non-subdivided topical headings in the FAST vocabulary, 76,000 terms were matched to Wikipedia article titles with 95% accuracy. Wikipedia links in the FAST authority file and FAST linked data enable people and software applications to take advantage of information in both of these resources.</div>Evahttps://wikipediaquality.com/index.php?title=Transforming_Wikipedia_into_Named_Entity_Training_Data&diff=16930Transforming Wikipedia into Named Entity Training Data2019-06-05T05:30:34Z<p>Eva: Overview: Transforming Wikipedia into Named Entity Training Data</p>
<hr />
<div>'''Transforming Wikipedia into Named Entity Training Data''' - scientific work related to Wikipedia quality published in 2008, written by Joel Nothman, James R. Curran and Tara Murphy.<br />
<br />
== Overview ==<br />
Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. Authors exploit Wikipedia to create a massive corpus of named entity annotated text. Authors transform Wikipedia’s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Compared with the MUC, CoNLL and BBN corpora, the Wikipedia-derived corpus generally performs better than other cross-corpus train/test pairs.</div>Evahttps://wikipediaquality.com/index.php?title=User_Engagement_on_Wikipedia,_a_Review_of_Studies_of_Readers_and_Editors&diff=16929User Engagement on Wikipedia, a Review of Studies of Readers and Editors2019-06-05T05:29:30Z<p>Eva: User Engagement on Wikipedia, a Review of Studies of Readers and Editors -- new article</p>
<hr />
<div>'''User Engagement on Wikipedia, a Review of Studies of Readers and Editors''' - scientific work related to Wikipedia quality published in 2015, written by Marc Miquel-Ribé.<br />
<br />
== Overview ==<br />
Is it an encyclopedia or a social network? Without considering both aspects it would not be possible to understand how a worldwide army of editors created the largest online knowledge repository. Wikipedia has a consistent set of rules and responds to many of the User Engagement Framework attributes, which is why it works. In this paper, authors identify these confirmed attributes as well as those presenting problems. Authors explain that, although it has a strong editor base, Wikipedia is finding it challenging to maintain this base or increase its size. To understand this, scholars have analyzed Wikipedia using current metrics such as user sessions and activity. Authors conclude that there exist opportunities to analyze engagement in new aspects in order to understand its success, as well as to redesign mechanisms to improve the system and help the transition from reader to editor.</div>Evahttps://wikipediaquality.com/index.php?title=Wikibench:_a_Distributed,_Wikipedia_based_Web_Application_Benchmark&diff=16928Wikibench: a Distributed, Wikipedia based Web Application Benchmark2019-06-05T05:27:52Z<p>Eva: Starting an article - Wikibench: a Distributed, Wikipedia based Web Application Benchmark</p>
<hr />
<div>'''Wikibench: a Distributed, Wikipedia based Web Application Benchmark''' - scientific work related to Wikipedia quality published in 2009, written by Erik-Jan van Baaren.<br />
<br />
== Overview ==<br />
Many different, novel approaches have been taken to improve the throughput and scalability of distributed web application hosting systems and relational databases. Yet only a limited number of web application benchmarks are available. Authors present the design and implementation of WikiBench, a distributed web application benchmarking tool based on Wikipedia. WikiBench is a trace-based benchmark, able to create realistic workloads with thousands of requests per second to any system hosting the freely available Wikipedia data and software. Authors obtained completely anonymized, sampled access traces from the Wikimedia Foundation and created software to process these traces in order to reduce the intensity of the traffic while still maintaining its most important properties, such as inter-arrival times and the distribution of page popularity. This makes WikiBench usable for both small and large scale benchmarks. Initial benchmarks show a regular day of traffic with its ups and downs. By using median response times, authors are able to show the effects of increasing traffic intensities on the system under test.</div>Evahttps://wikipediaquality.com/index.php?title=Multilingual_Historical_Narratives_on_Wikipedia&diff=16927Multilingual Historical Narratives on Wikipedia2019-06-05T05:26:49Z<p>Eva: Creating a new page - Multilingual Historical Narratives on Wikipedia</p>
<hr />
<div>'''Multilingual Historical Narratives on Wikipedia''' - scientific work related to Wikipedia quality published in 2017, written by Markus Strohmaier, Katrin Weller, Maria Zens, Florian Lemmerich and Anna Samoilenko.<br />
<br />
== Overview ==<br />
Portrayals of history are never complete, and each description inherently exhibits a specific view-</div>Evahttps://wikipediaquality.com/index.php?title=Efficient_Wikipedia-Based_Semantic_Interpreter_by_Exploiting_Top-K_Processing&diff=16926Efficient Wikipedia-Based Semantic Interpreter by Exploiting Top-K Processing2019-06-05T05:25:15Z<p>Eva: Starting a page: Efficient Wikipedia-Based Semantic Interpreter by Exploiting Top-K Processing</p>
<hr />
<div>'''Efficient Wikipedia-Based Semantic Interpreter by Exploiting Top-K Processing''' - scientific work related to Wikipedia quality published in 2010, written by Jong Wook Kim, Ashwin Kashyap, Dekai Li and Sandilya Bhamidipati.<br />
<br />
== Overview ==<br />
Proper representation of the meaning of texts is crucial to enhancing many data mining and information retrieval tasks, including clustering, computing semantic relatedness between texts, and searching. Representing texts in the concept space derived from Wikipedia has received growing attention recently, due to its comprehensiveness and expertise. This concept-based representation is capable of extracting semantic relatedness between texts that cannot be deduced with the bag-of-words model. A key obstacle, however, to using Wikipedia as a semantic interpreter is that the sheer number of concepts derived from Wikipedia makes it hard to efficiently map texts into the concept space. In this paper, authors develop an efficient algorithm which is able to represent the meaning of a text by using the concepts that best match it. In particular, the approach first computes the approximate top-k concepts that are most relevant to the given text. Authors then leverage these concepts to represent the meaning of the given text. The experimental results show that the proposed technique provides significant gains in execution time over current solutions to the problem.</div>Evahttps://wikipediaquality.com/index.php?title=The_Mechanism_of_Spontaneous_Order_in_Online_Knowledge_Sharing_Community:_Taking_Wikipedia_as_an_Example&diff=16925The Mechanism of Spontaneous Order in Online Knowledge Sharing Community: Taking Wikipedia as an Example2019-06-05T05:22:43Z<p>Eva: Links</p>
<hr />
<div>'''The Mechanism of Spontaneous Order in Online Knowledge Sharing Community: Taking Wikipedia as an Example''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Xiaoyu Li]].<br />
<br />
== Overview ==<br />
Against the background of the information explosion and the big data problem, online knowledge sharing communities confront a new challenge in the mechanism of knowledge organization and sharing. Several of these communities, especially [[Wikipedia]], base the knowledge sharing mechanism on spontaneous order created by community users. By analyzing the network structure and evolution process of the community policy environment in Wikipedia, this paper finds that the set of community policies and user guidelines has a rank of priority, some of which are of fundamental place and function. Different policy functions tend to converge in the evolving process, which demonstrates the increasing stabilization of spontaneous order. By illustrating the mechanism of spontaneous order from the angles of user collaboration and CAS theory, this paper points out that active user participation, smooth information flow and prudent control are key to Wikipedia knowledge sharing, which is also enlightening for understanding and constructing knowledge sharing mechanisms in similar online communities.</div>Evahttps://wikipediaquality.com/index.php?title=Constructing_Large-Scale_Person_Ontology_from_Wikipedia&diff=16924Constructing Large-Scale Person Ontology from Wikipedia2019-06-05T05:21:32Z<p>Eva: Links</p>
<hr />
<div>'''Constructing Large-Scale Person Ontology from Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Yumi Shibaki]], [[Masaaki Nagata]] and [[Kazuhide Yamamoto]].<br />
<br />
== Overview ==<br />
This paper presents a method for constructing a large-scale Person Ontology with a category hierarchy from [[Wikipedia]]. Authors first extract Wikipedia category labels that represent persons (hereafter, Wikipedia Person Category, WPC) using a machine learning classifier. Authors then construct a WPC hierarchy by detecting is-a relations in the Wikipedia category network. Authors then extract the titles of Wikipedia articles that represent persons (hereafter, Wikipedia person instance, WPI). Experiments show that WPC extraction achieves 99.3% precision and 98.4% recall, while WPI extraction achieves 98.2% and 98.6%, respectively. The accuracies are significantly higher than those of previous methods.</div>Eva