https://wikipediaquality.com/api.php?action=feedcontributions&user=Alyssa&feedformat=atomWikipedia Quality - User contributions [en]2024-03-29T15:54:39ZUser contributionsMediaWiki 1.30.0https://wikipediaquality.com/index.php?title=Cross-Media_Topic_Mining_on_Wikipedia&diff=25492Cross-Media Topic Mining on Wikipedia2020-10-03T06:39:32Z<p>Alyssa: Adding embed</p>
<hr />
<div>{{Infobox work<br />
| title = Cross-Media Topic Mining on Wikipedia<br />
| date = 2013<br />
| authors = [[Xikui Wang]]<br />[[Yang Liu]]<br />[[Donghui Wang]]<br />[[Fei Wu]]<br />
| doi = 10.1145/2502081.2502180<br />
| link = http://dl.acm.org/ft_gateway.cfm?id=2502180&amp;type=pdf<br />
}}<br />
'''Cross-Media Topic Mining on Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Xikui Wang]], [[Yang Liu]], [[Donghui Wang]] and [[Fei Wu]].<br />
<br />
== Overview ==<br />
As a collaborative wiki-based encyclopedia, [[Wikipedia]] provides a huge number of articles across various [[categories]]. In addition to the text corpus, Wikipedia also contains plenty of images, which makes articles more intuitive for readers to understand. To better organize these visual and textual data, one promising area of research is to jointly model the embedded topics across multi-modal data (i.e., cross-media) from Wikipedia. In this work, authors propose to learn projection matrices that map data from heterogeneous feature spaces into a unified latent topic space. Different from previous approaches, by imposing ℓ1 regularizers on the projection matrices, only a small number of relevant visual/textual words are associated with each topic, which makes the model more interpretable and robust. Furthermore, the correlations of Wikipedia data in different modalities are explicitly considered in the model. The effectiveness of the proposed topic extraction algorithm is verified by several experiments conducted on real Wikipedia datasets.<br />
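The paper's exact objective and solver are not reproduced in the overview; purely as a loose illustration of the general recipe it describes, the toy sketch below fits two projection matrices to a shared latent topic space by alternating gradient steps, with ℓ1 soft-thresholding to keep the projections sparse. All function names and hyperparameters here are invented for the example.

```python
import numpy as np

def soft_threshold(M, t):
    """Proximal operator of the l1 norm: shrinks entries toward zero,
    setting small ones exactly to zero (this is what makes topics sparse)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def learn_sparse_projections(Xv, Xt, k=10, lam=0.1, lr=0.01, iters=500, seed=0):
    """Toy joint fit of projections Pv, Pt that map paired visual (Xv) and
    textual (Xt) feature rows into a shared k-dimensional topic space H."""
    rng = np.random.default_rng(seed)
    n = Xv.shape[0]
    Pv = rng.normal(scale=0.1, size=(Xv.shape[1], k))
    Pt = rng.normal(scale=0.1, size=(Xt.shape[1], k))
    for _ in range(iters):
        H = 0.5 * (Xv @ Pv + Xt @ Pt)  # shared topic representation
        # gradient step on each reconstruction term, then l1 shrinkage
        Pv = soft_threshold(Pv - lr * Xv.T @ (Xv @ Pv - H) / n, lr * lam)
        Pt = soft_threshold(Pt - lr * Xt.T @ (Xt @ Pt - H) / n, lr * lam)
    return Pv, Pt
```

Because each iteration ends with soft-thresholding, each topic column ends up associated with only a few visual/textual words, mirroring the interpretability argument in the overview.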
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Wang, Xikui; Liu, Yang; Wang, Donghui; Wu, Fei. (2013). "[[Cross-Media Topic Mining on Wikipedia]]". DOI: 10.1145/2502081.2502180. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Wang |first1=Xikui |last2=Liu |first2=Yang |last3=Wang |first3=Donghui |last4=Wu |first4=Fei |title=Cross-Media Topic Mining on Wikipedia |date=2013 |doi=10.1145/2502081.2502180 |url=https://wikipediaquality.com/wiki/Cross-Media_Topic_Mining_on_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Wang, Xikui; Liu, Yang; Wang, Donghui; Wu, Fei. (2013). &amp;quot;<a href="https://wikipediaquality.com/wiki/Cross-Media_Topic_Mining_on_Wikipedia">Cross-Media Topic Mining on Wikipedia</a>&amp;quot;. DOI: 10.1145/2502081.2502180. <br />
</nowiki><br />
</code></div>
<hr />
<div>{{Infobox work<br />
| title = Building Semantic Kernels for Cross-Document Knowledge Discovery Using Wikipedia<br />
| date = 2017<br />
| authors = [[Peng Yan]]<br />[[Wei Jin]]<br />
| doi = 10.1007/s10115-016-0973-5<br />
| link = https://link.springer.com/article/10.1007/s10115-016-0973-5<br />
}}<br />
'''Building Semantic Kernels for Cross-Document Knowledge Discovery Using Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Peng Yan]] and [[Wei Jin]].<br />
<br />
== Overview ==<br />
Research into text mining has progressed over the past decade. One of the main challenges now is taking advantage of outside knowledge in the discovery process. In this work, to address the limitations of the traditional bag-of-words model and expand the search scope beyond the document collections at hand, authors present a new text mining approach that incorporates [[Wikipedia]] as background knowledge. Various semantic kernels are built from the extensive knowledge derived from Wikipedia and applied to the search scenario of detecting potential semantic relationships between topics. Authors demonstrate the effectiveness of the approach by comparing it with competitive baselines, as well as alternative solutions where only part of Wikipedia's resources (e.g., the Wiki-article contents or the associated Wiki-[[categories]]) is considered.<br />
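The authors' kernel construction is not detailed in the overview; as a rough illustration only of what a semantic kernel does, the snippet below shows the standard form k(d1, d2) = d1ᵀ S d2, where S is a term-relatedness matrix (in the paper's setting, derived from Wikipedia articles and categories). The matrix values here are made up for the example.

```python
import numpy as np

def semantic_kernel(X, S):
    """Kernel matrix K[i, j] = x_i^T S x_j, where rows of X are
    bag-of-words document vectors and S encodes pairwise term/concept
    relatedness (e.g. derived from Wikipedia background knowledge)."""
    return X @ S @ X.T

# Toy example with three terms, where terms 0 and 1 name related concepts.
S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
X = np.array([[1.0, 0.0, 0.0],    # document mentioning term 0 only
              [0.0, 1.0, 0.0]])   # document mentioning term 1 only
K = semantic_kernel(X, S)
# Under the plain bag-of-words inner product the two documents are
# orthogonal (similarity 0); the semantic kernel gives them 0.8.
```

This is the sense in which such kernels "expand the search scope": documents sharing no literal words can still score as related through the background-knowledge matrix S.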
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Yan, Peng; Jin, Wei. (2017). "[[Building Semantic Kernels for Cross-Document Knowledge Discovery Using Wikipedia]]". Springer London. DOI: 10.1007/s10115-016-0973-5. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Yan |first1=Peng |last2=Jin |first2=Wei |title=Building Semantic Kernels for Cross-Document Knowledge Discovery Using Wikipedia |date=2017 |doi=10.1007/s10115-016-0973-5 |url=https://wikipediaquality.com/wiki/Building_Semantic_Kernels_for_Cross-Document_Knowledge_Discovery_Using_Wikipedia |journal=Springer London}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Yan, Peng; Jin, Wei. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/Building_Semantic_Kernels_for_Cross-Document_Knowledge_Discovery_Using_Wikipedia">Building Semantic Kernels for Cross-Document Knowledge Discovery Using Wikipedia</a>&amp;quot;. Springer London. DOI: 10.1007/s10115-016-0973-5. <br />
</nowiki><br />
</code></div>
<hr />
<div>{{Infobox work<br />
| title = Graf Version of Catalan Portions of Wikipedia Corpus<br />
| date = 2012<br />
| authors = [[Gemma Boleda]]<br />
| link = https://repositori.upf.edu/handle/10230/20050<br />
}}<br />
'''Graf Version of Catalan Portions of Wikipedia Corpus''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Gemma Boleda]].<br />
<br />
== Overview ==<br />
This is the stand-off GrAF version of the Catalan portions of [[Wikipedia]] (based on a 2006 dump). This Wikipedia Catalan Corpus contains 122,052 articles comprising about 47.3 million words in raw text format. It has been cleaned by erasing disambiguation pages, removing some XML tags and homogenizing list ending tags. The corpus has then been processed to add structural tagging (head, paragraph, sentence, list, etc.) and morphosyntactic information.</div>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia as a Source of Ontological Knowledge: State of the Art and Application<br />
| date = 2010<br />
| authors = [[Angela Fogarolli]]<br />
| doi = 10.1007/978-3-642-16793-5_1<br />
| link = https://link.springer.com/content/pdf/10.1007%2F978-3-642-16793-5_1.pdf<br />
}}<br />
'''Wikipedia as a Source of Ontological Knowledge: State of the Art and Application''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Angela Fogarolli]].<br />
<br />
== Overview ==<br />
This chapter argues that [[Wikipedia]] can be used as a source of knowledge for creating semantically enabled applications, and consists of two parts. First, authors provide an overview of different research fields which attempt to extract knowledge encoded by humans inside Wikipedia. The extracted knowledge can then be used for creating a new generation of intelligent applications based on the collaborative character of Wikipedia, rather than on domain ontologies which require the intervention of knowledge engineers and domain experts. Second, as a proof of concept, authors describe an application whose intelligent behavior is achieved by using Wikipedia knowledge for automatic annotation and representation of multimedia presentations.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Fogarolli, Angela. (2010). "[[Wikipedia as a Source of Ontological Knowledge: State of the Art and Application]]". Springer Berlin Heidelberg. DOI: 10.1007/978-3-642-16793-5_1. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Fogarolli |first1=Angela |title=Wikipedia as a Source of Ontological Knowledge: State of the Art and Application |date=2010 |doi=10.1007/978-3-642-16793-5_1 |url=https://wikipediaquality.com/wiki/Wikipedia_as_a_Source_of_Ontological_Knowledge:_State_of_the_Art_and_Application |journal=Springer Berlin Heidelberg}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Fogarolli, Angela. (2010). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikipedia_as_a_Source_of_Ontological_Knowledge:_State_of_the_Art_and_Application">Wikipedia as a Source of Ontological Knowledge: State of the Art and Application</a>&amp;quot;. Springer Berlin Heidelberg. DOI: 10.1007/978-3-642-16793-5_1. <br />
</nowiki><br />
</code></div>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia for Academic Publishing: Advantages and Challenges<br />
| date = 2012<br />
| authors = [[Lu Xiao]]<br />[[Nicole Askin]]<br />
| doi = 10.1108/14684521211241396<br />
| link = http://www.emeraldinsight.com/doi/full/10.1108/14684521211241396<br />
}}<br />
'''Wikipedia for Academic Publishing: Advantages and Challenges''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Lu Xiao]] and [[Nicole Askin]].<br />
<br />
== Overview ==<br />
Purpose – The purpose of this paper is to explore the potential of [[Wikipedia]] as a venue for academic publishing. Design/methodology/approach – By looking at other sources and studying Wikipedia structures, the paper compares the processes of publishing a peer‐reviewed article in Wikipedia and the open access journal model, discusses the advantages and challenges of adopting Wikipedia in academic publishing, and provides suggestions on how to address the challenges. Findings – Compared to an open access journal model, Wikipedia has several advantages for academic publishing: it is less expensive, quicker, more widely read, and offers a wider variety of articles. There are also several major challenges in adopting Wikipedia in the academic community: the web site structure is not well suited to academic publications; the site is not integrated with common academic search engines such as [[Google]] Scholar or with university libraries; and there are concerns among some members of the academic community about the s...<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Xiao, Lu; Askin, Nicole. (2012). "[[Wikipedia for Academic Publishing: Advantages and Challenges]]". Emerald Group Publishing Limited. DOI: 10.1108/14684521211241396. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Xiao |first1=Lu |last2=Askin |first2=Nicole |title=Wikipedia for Academic Publishing: Advantages and Challenges |date=2012 |doi=10.1108/14684521211241396 |url=https://wikipediaquality.com/wiki/Wikipedia_for_Academic_Publishing:_Advantages_and_Challenges |journal=Emerald Group Publishing Limited}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Xiao, Lu; Askin, Nicole. (2012). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikipedia_for_Academic_Publishing:_Advantages_and_Challenges">Wikipedia for Academic Publishing: Advantages and Challenges</a>&amp;quot;. Emerald Group Publishing Limited. DOI: 10.1108/14684521211241396. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Why Be a Wikipedian<br />
| date = 2009<br />
| authors = [[Hoda Baytiyeh]]<br />[[Jay Pfaffman]]<br />
| doi = 10.3115/1600053.1600117<br />
| link = http://dl.acm.org/citation.cfm?id=1600053.1600117<br />
| plink = https://www.researchgate.net/profile/Hoda_Baytiyeh/publication/221033683_Why_be_a_Wikipedian/links/02e7e53355daba10fc000000.pdf<br />
}}<br />
'''Why Be a Wikipedian''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Hoda Baytiyeh]] and [[Jay Pfaffman]].<br />
<br />
== Overview ==<br />
Wikipedia is a user-edited encyclopedia. Unpaid users contribute articles, edit them, and have heated debates about what information should be included or excluded. This study is designed to learn more about why people are willing to do this work without any fiscal compensation. [[Wikipedia]] administrators (n=115) completed an online survey with Likert-scaled items on potential types of satisfaction derived from participation, as well as comments that were used to check the validity of the Likert-scaled items and allow participants to say in their own words why they were Wikipedians. Results showed that contributors in Wikipedia are driven largely by motivations to learn and create.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Baytiyeh, Hoda; Pfaffman, Jay. (2009). "[[Why Be a Wikipedian]]". International Society of the Learning Sciences. DOI: 10.3115/1600053.1600117. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Baytiyeh |first1=Hoda |last2=Pfaffman |first2=Jay |title=Why Be a Wikipedian |date=2009 |doi=10.3115/1600053.1600117 |url=https://wikipediaquality.com/wiki/Why_Be_a_Wikipedian |journal=International Society of the Learning Sciences}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Baytiyeh, Hoda; Pfaffman, Jay. (2009). &amp;quot;<a href="https://wikipediaquality.com/wiki/Why_Be_a_Wikipedian">Why Be a Wikipedian</a>&amp;quot;. International Society of the Learning Sciences. DOI: 10.3115/1600053.1600117. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Extending a Multilingual Lexical Resource by Bootstrapping Named Entity Classification Using Wikipedia's Category System<br />
| date = 2011<br />
| authors = [[]]<br />
| link = http://www.aclweb.org/anthology/W/W11/W11-3607.pdf<br />
}}<br />
'''Extending a Multilingual Lexical Resource by Bootstrapping Named Entity Classification Using Wikipedia's Category System''' - scientific work related to [[Wikipedia quality]] published in 2011, written by [[]].<br />
<br />
== Overview ==<br />
Named Entity Recognition and Classification (NERC) is a well-studied NLP task which is typically approached using machine learning algorithms that rely on training data whose creation is usually expensive. The high costs result in a lack of NERC training data for many languages. An approach to create a [[multilingual]] NE corpus was presented in Wentland et al. (2008). The resulting resource, called HeiNER, describes a valuable number of NEs but does not include their types. Authors present a bootstrap approach based on [[Wikipedia]]'s category system that is able to classify the more than two million [[named entities]] contained in HeiNER, improving the resource's quality.<br />
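The paper's bootstrap procedure is not specified in the overview; purely as an illustration of what category-based NE typing can look like, the sketch below walks up a (hypothetical) category hierarchy from each article until it reaches a category with a known seed type. All article, category, and type names here are invented.

```python
from collections import deque

def bootstrap_ne_types(article_cats, cat_parents, seed_types, max_hops=3):
    """Assign a coarse NE type (e.g. PER/ORG/LOC) to each article by
    breadth-first search upward through parent categories, stopping at
    the first category whose type is known from the seed set."""
    types = {}
    for article, cats in article_cats.items():
        queue = deque((c, 0) for c in cats)
        seen = set(cats)
        while queue:
            cat, hops = queue.popleft()
            if cat in seed_types:
                types[article] = seed_types[cat]
                break
            if hops < max_hops:
                for parent in cat_parents.get(cat, ()):
                    if parent not in seen:
                        seen.add(parent)
                        queue.append((parent, hops + 1))
    return types

# Hypothetical toy data: "Zurich" sits under a city category whose
# parent chain reaches the seed category "Populated places" (LOC).
article_cats = {"Zurich": ["Cities in Switzerland"]}
cat_parents = {"Cities in Switzerland": ["Populated places"]}
seed_types = {"Populated places": "LOC"}
types = bootstrap_ne_types(article_cats, cat_parents, seed_types)
# types == {'Zurich': 'LOC'}
```

The `max_hops` cutoff reflects a common precaution with Wikipedia's category graph: walking too far up the hierarchy quickly reaches overly general categories and mislabels entities.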
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
. (2011). "[[Extending a Multilingual Lexical Resource by Bootstrapping Named Entity Classification Using Wikipedia's Category System]]". Asian Federation of Natural Language Processing. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1= |title=Extending a Multilingual Lexical Resource by Bootstrapping Named Entity Classification Using Wikipedia's Category System |date=2011 |url=https://wikipediaquality.com/wiki/Extending_a_Multilingual_Lexical_Resource_by_Bootstrapping_Named_Entity_Classification_Using_Wikipedia's_Category_System |journal=Asian Federation of Natural Language Processing}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
. (2011). &amp;quot;<a href="https://wikipediaquality.com/wiki/Extending_a_Multilingual_Lexical_Resource_by_Bootstrapping_Named_Entity_Classification_Using_Wikipedia's_Category_System">Extending a Multilingual Lexical Resource by Bootstrapping Named Entity Classification Using Wikipedia's Category System</a>&amp;quot;. Asian Federation of Natural Language Processing. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Disaster Monitoring with Wikipedia and Online Social Networking Sites: Structured Data and Linked Data Fragments to the Rescue?<br />
| date = 2015<br />
| authors = [[Thomas Steiner]]<br />[[Ruben Verborgh]]<br />
| link = http://www.ufrgs.br/limc/participativo/pdf/wikipedia.pdf<br />
| plink = https://arxiv.org/abs/1501.06329<br />
}}<br />
'''Disaster Monitoring with Wikipedia and Online Social Networking Sites: Structured Data and Linked Data Fragments to the Rescue?''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Thomas Steiner]] and [[Ruben Verborgh]].<br />
<br />
== Overview ==<br />
In this paper, authors present the first results of ongoing early-stage research on a real-time disaster detection and monitoring tool. Based on [[Wikipedia]], it is language-agnostic and leverages user-generated multimedia content shared on online [[social network]]ing sites to help disaster responders prioritize their efforts. Authors make the tool and its source code publicly available as they make progress on it. Furthermore, authors strive to publish detected disasters and accompanying multimedia content following the Linked Data principles to facilitate its wide consumption, redistribution, and evaluation of its usefulness.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Steiner, Thomas; Verborgh, Ruben. (2015). "[[Disaster Monitoring with Wikipedia and Online Social Networking Sites: Structured Data and Linked Data Fragments to the Rescue?]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Steiner |first1=Thomas |last2=Verborgh |first2=Ruben |title=Disaster Monitoring with Wikipedia and Online Social Networking Sites: Structured Data and Linked Data Fragments to the Rescue? |date=2015 |url=https://wikipediaquality.com/wiki/Disaster_Monitoring_with_Wikipedia_and_Online_Social_Networking_Sites:_Structured_Data_and_Linked_Data_Fragments_to_the_Rescue?}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Steiner, Thomas; Verborgh, Ruben. (2015). &amp;quot;<a href="https://wikipediaquality.com/wiki/Disaster_Monitoring_with_Wikipedia_and_Online_Social_Networking_Sites:_Structured_Data_and_Linked_Data_Fragments_to_the_Rescue?">Disaster Monitoring with Wikipedia and Online Social Networking Sites: Structured Data and Linked Data Fragments to the Rescue?</a>&amp;quot;.<br />
</nowiki><br />
</code></div>
<hr />
<div>{{Infobox work<br />
| title = Classifying Wikipedia Articles into Ne's Using Svm's with Threshold Adjustment<br />
| date = 2010<br />
| authors = [[Iman Saleh]]<br />[[Kareem Darwish]]<br />[[Aly A. Fahmy]]<br />
| link = https://dl.acm.org/citation.cfm?id=1870457.1870471<br />
}}<br />
'''Classifying Wikipedia Articles into Ne's Using Svm's with Threshold Adjustment''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Iman Saleh]], [[Kareem Darwish]] and [[Aly A. Fahmy]].<br />
<br />
== Overview ==<br />
In this paper, a method is presented to recognize [[multilingual]] [[Wikipedia]] [[named entity]] articles. This method classifies multilingual Wikipedia articles using a variety of structured and unstructured [[features]] and is aided by cross-language links and features in Wikipedia. Adding multilingual features helps boost classification accuracy and is shown to effectively classify multilingual pages in a language-independent way. Classification is first done using a Support Vector Machine (SVM) classifier, and then the threshold of the SVM is adjusted in order to improve the recall scores of classification. Threshold adjustment is performed using the beta-gamma threshold adjustment algorithm, a post-learning step that shifts the hyperplane of the SVM. This approach boosted recall with minimal effect on precision.<br />
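The beta-gamma algorithm itself is not reproduced in the overview; as a simplified stand-in only, the sketch below shows plain validation-set threshold selection over SVM decision scores, which captures the core idea that shifting the decision threshold translates the hyperplane along its normal to trade precision for recall. Data and the precision floor are invented for the example.

```python
import numpy as np

def adjust_threshold(scores, y_true, min_precision=0.5):
    """Pick a decision threshold over validation decision-function scores
    that maximizes recall while keeping precision above a floor. A
    simplified stand-in for beta-gamma adjustment: lowering the threshold
    is equivalent to shifting the SVM hyperplane toward the negatives."""
    best_t, best_recall = 0.0, -1.0
    pos = (y_true == 1)
    for t in np.sort(np.unique(scores)):
        pred = scores >= t
        tp = np.sum(pred & pos)
        if tp == 0:
            continue
        precision = tp / np.sum(pred)
        recall = tp / np.sum(pos)
        if precision >= min_precision and recall > best_recall:
            best_recall, best_t = recall, t
    return best_t

# Validation scores where the default threshold 0 misses the positive at -0.2:
scores = np.array([-1.0, -0.2, 0.3, 0.9])
y_true = np.array([0, 1, 1, 1])
t = adjust_threshold(scores, y_true, min_precision=0.7)
# the chosen threshold moves below -0.2, recovering all three positives
```

With the default threshold of 0, recall here is 2/3; after adjustment all positives are predicted at the cost of one false positive, illustrating the recall/precision trade the paper describes.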
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Saleh, Iman; Darwish, Kareem; Fahmy, Aly A.. (2010). "[[Classifying Wikipedia Articles into Ne's Using Svm's with Threshold Adjustment]]". Association for Computational Linguistics. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Saleh |first1=Iman |last2=Darwish |first2=Kareem |last3=Fahmy |first3=Aly A. |title=Classifying Wikipedia Articles into Ne's Using Svm's with Threshold Adjustment |date=2010 |url=https://wikipediaquality.com/wiki/Classifying_Wikipedia_Articles_into_Ne's_Using_Svm's_with_Threshold_Adjustment |journal=Association for Computational Linguistics}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Saleh, Iman; Darwish, Kareem; Fahmy, Aly A.. (2010). &amp;quot;<a href="https://wikipediaquality.com/wiki/Classifying_Wikipedia_Articles_into_Ne's_Using_Svm's_with_Threshold_Adjustment">Classifying Wikipedia Articles into Ne's Using Svm's with Threshold Adjustment</a>&amp;quot;. Association for Computational Linguistics. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Cultural Differences in Collaborative Authoring of Wikipedia<br />
| date = 2006<br />
| authors = [[Ulrike Pfeil]]<br />[[Panayiotis Zaphiris]]<br />[[Chee Siang Ang]]<br />
| doi = 10.1111/j.1083-6101.2006.00316.x<br />
| link = http://onlinelibrary.wiley.com/doi/10.1111/j.1083-6101.2006.00316.x/full<br />
}}<br />
'''Cultural Differences in Collaborative Authoring of Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2006, written by [[Ulrike Pfeil]], [[Panayiotis Zaphiris]] and [[Chee Siang Ang]].<br />
<br />
== Overview ==<br />
This article explores the relationship between national culture and computer-mediated communication (CMC) in [[Wikipedia]]. The articles on the topic of 'game' from the French, German, Japanese, and Dutch Wikipedia websites were studied using content analysis methods. Correlations were investigated between patterns of contributions and the four dimensions of cultural influences proposed by Hofstede (Power Distance, Collectivism versus Individualism, Femininity versus Masculinity, and Uncertainty Avoidance). The analysis revealed cultural differences in the style of contributions across the cultures investigated, some of which are correlated with the dimensions identified by Hofstede. These findings suggest that cultural differences that are observed in the physical world also exist in the virtual world.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Pfeil, Ulrike; Zaphiris, Panayiotis; Ang, Chee Siang. (2006). "[[Cultural Differences in Collaborative Authoring of Wikipedia]]". Blackwell Publishing Inc. DOI: 10.1111/j.1083-6101.2006.00316.x. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Pfeil |first1=Ulrike |last2=Zaphiris |first2=Panayiotis |last3=Ang |first3=Chee Siang |title=Cultural Differences in Collaborative Authoring of Wikipedia |date=2006 |doi=10.1111/j.1083-6101.2006.00316.x |url=https://wikipediaquality.com/wiki/Cultural_Differences_in_Collaborative_Authoring_of_Wikipedia |journal=Blackwell Publishing Inc}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Pfeil, Ulrike; Zaphiris, Panayiotis; Ang, Chee Siang. (2006). &amp;quot;<a href="https://wikipediaquality.com/wiki/Cultural_Differences_in_Collaborative_Authoring_of_Wikipedia">Cultural Differences in Collaborative Authoring of Wikipedia</a>&amp;quot;. Blackwell Publishing Inc. DOI: 10.1111/j.1083-6101.2006.00316.x. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:German Wikipedia]]<br />
[[Category:French Wikipedia]]<br />
[[Category:Dutch Wikipedia]]<br />
[[Category:Japanese Wikipedia]]</div>
<hr />
<div>{{Infobox work<br />
| title = Controversy Goes Online : Schizophrenia Genetics on Wikipedia<br />
| date = 2016<br />
| authors = [[Sally Wyatt]]<br />[[Anna Harris]]<br />[[Susan E. Kelly]]<br />
| link = https://sciencetechnologystudies.journal.fi/article/view/55407<br />
}}<br />
'''Controversy Goes Online : Schizophrenia Genetics on Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Sally Wyatt]], [[Anna Harris]] and [[Susan E. Kelly]].<br />
<br />
== Overview ==<br />
Scientific controversy is increasingly played out via the internet, a technology that is simultaneously content, medium and research infrastructure. Here authors analyse material from [[Wikipedia]], focusing on schizophrenia genetics. Authors find that citation and curation of scientific resources follow a negotiated, ad hoc adherence to Wikipedia rules, are based on limited access to scientific literature, and thus lead to a partially constructed ‘review’ of the science that excludes non-professionals. Given its policies and systems for developing neutral, evidence-based articles, one would not expect to find controversy on Wikipedia, yet authors find traces. Scientific ambiguity about schizophrenia genetics lends itself to multiple ways of curating resources, and the infrastructure of online spaces enables the practices behind curation work to become visible in new ways. Authors argue that not only does Wikipedia make scientific controversy visible to a wider range of people, it is also involved in the production of knowledge.</div>
<hr />
<div>{{Infobox work<br />
| title = Wiki Means More: Hyperreading in Wikipedia<br />
| date = 2006<br />
| authors = [[Yuejiao Zhang]]<br />
| doi = 10.1145/1149941.1149946<br />
| link = http://dl.acm.org/ft_gateway.cfm?id=1149946&amp;type=pdf<br />
}}<br />
'''Wiki Means More: Hyperreading in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2006, written by [[Yuejiao Zhang]].<br />
<br />
== Overview ==<br />
Based on the open-sourcing technology of wiki, [[Wikipedia]] has initiated a new fashion of hyperreading. Reading Wikipedia creates an experience distinct from reading a traditional encyclopedia. In an attempt to disclose one of the site's major appeals to Web users, this paper approaches the characteristics of hyperreading activities in Wikipedia from three perspectives. Discussions are made regarding reading path, user participation, and navigational apparatus in Wikipedia.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Zhang, Yuejiao. (2006). "[[Wiki Means More: Hyperreading in Wikipedia]]". DOI: 10.1145/1149941.1149946. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Zhang |first1=Yuejiao |title=Wiki Means More: Hyperreading in Wikipedia |date=2006 |doi=10.1145/1149941.1149946 |url=https://wikipediaquality.com/wiki/Wiki_Means_More:_Hyperreading_in_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Zhang, Yuejiao. (2006). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wiki_Means_More:_Hyperreading_in_Wikipedia">Wiki Means More: Hyperreading in Wikipedia</a>&amp;quot;. DOI: 10.1145/1149941.1149946. <br />
</nowiki><br />
</code></div>
<hr />
<div>{{Infobox work<br />
| title = The Efficiency of Wikipedia's Evolution<br />
| date = 2013<br />
| authors = [[Emilie Jackson]]<br />
| doi = 10.2139/ssrn.2403327<br />
| link = https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2403327<br />
}}<br />
'''The Efficiency of Wikipedia's Evolution''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Emilie Jackson]].<br />
<br />
== Overview ==<br />
Wikipedia, the world's most popular encyclopedia, is a unique enterprise characterized by its purely decentralized production. Thus, it is important to understand whether editors produce the pages in an efficient manner. The author examines how efficiently [[Wikipedia]] has developed from three perspectives. First, the author evaluates whether the editors produce the pages with the highest eventual views first, finding that at a given point in time, editors create roughly 80% of the maximum possible views that could have been created up through that time. Second, the author examines whether pages are created in order of how well-connected they are to other pages in the link structure, that is, how easy they are to find. A page's probability of being created at any point in time is significantly and substantially increasing in its relative number of views and connections. However, this effect diminishes with time, and page creation was much less sensitive to these [[measures]] in 2008 than in 2001. Third, the author compares which pages frequent versus infrequent editors create, finding that frequent editors tend to produce highly-viewed pages while infrequent editors tend to produce better-connected pages.</div>
<hr />
<div>{{Infobox work<br />
| title = Where's the Bio? Databases, Wikipedia, and the Web<br />
| date = 2012<br />
| authors = [[Aline Soules]]<br />
| doi = 10.1108/03074801211199068<br />
| link = http://www.emeraldinsight.com/doi/full/10.1108/03074801211199068<br />
}}<br />
'''Where's the Bio? Databases, Wikipedia, and the Web''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Aline Soules]].<br />
<br />
== Overview ==<br />
Purpose – This paper aims to compare biographical content for literary authors writing in English among Biography Reference Bank, Contemporary Authors Online, [[Wikipedia]], and the web. Design/methodology/approach – A sample of 500 names was gathered from curricula and textbooks used in English courses and searched in the Contemporary Authors Online portion of Literature Resource Center, Biography Reference Bank, Wikipedia, and the web; the results and content were compared. Findings – Each source has core content plus its own unique offerings and specific challenges, as evidenced in searching, evaluative techniques such as authority and currency, and content. Research limitations/implications – This study can only offer a small part of the picture of what information resides where and a single snapshot in time. Practical implications – This study will help librarians decide whether to subscribe to a biographical database. It also reinforces the need for evidence-based practice in librarianship. Originality/value...<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Soules, Aline. (2012). "[[Where's the Bio? Databases, Wikipedia, and the Web]]". Emerald Group Publishing Limited. DOI: 10.1108/03074801211199068. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Soules |first1=Aline |title=Where's the Bio? Databases, Wikipedia, and the Web |date=2012 |doi=10.1108/03074801211199068 |url=https://wikipediaquality.com/wiki/Where's_the_Bio?_Databases,_Wikipedia,_and_the_Web |journal=Emerald Group Publishing Limited}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Soules, Aline. (2012). &amp;quot;<a href="https://wikipediaquality.com/wiki/Where's_the_Bio?_Databases,_Wikipedia,_and_the_Web">Where's the Bio? Databases, Wikipedia, and the Web</a>&amp;quot;. Emerald Group Publishing Limited. DOI: 10.1108/03074801211199068. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:English Wikipedia]]</div>Alyssahttps://wikipediaquality.com/index.php?title=What_Can_Google_and_Wikipedia_Can_Tell_Us_About_a_Disease%3F_Big_Data_Trends_Analysis_in_Systemic_Lupus_Erythematosus&diff=25478What Can Google and Wikipedia Can Tell Us About a Disease? Big Data Trends Analysis in Systemic Lupus Erythematosus2020-10-03T06:14:25Z<p>Alyssa: Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = What Can Google and Wikipedia Can Tell Us About a Disease? Big Data Trends Analysis in Systemic Lupus Erythematosus<br />
| date = 2017<br />
| authors = [[Savino Sciascia]]<br />[[Massimo Radin]]<br />
| doi = 10.1016/j.ijmedinf.2017.09.002<br />
| link = http://www.sciencedirect.com/science/article/pii/S1386505617302253<br />
}}<br />
'''What Can Google and Wikipedia Can Tell Us About a Disease? Big Data Trends Analysis in Systemic Lupus Erythematosus''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Savino Sciascia]] and [[Massimo Radin]].<br />
<br />
== Overview ==<br />
Objective – To investigate trends of Internet search volumes linked to Systemic Lupus Erythematosus (SLE), on-going clinical trials and research developments associated with the disease, using Big Data monitoring and data mining. Methods – The authors performed a longitudinal analysis based on the large amount of data generated by [[Google]] Trends and scientific search tools (SCOPUS, Medline/PubMed, ClinicalTrials.gov), considering ‘SLE’ and ‘lupus’ in a 5-year web-based search. [[Wikipedia]] page views were also analysed using WikiTrends and the results were compared with the search volumes generated by Google Trends. Results – The authors observed an overall higher distribution of search volumes from Google Trends in the United States, South America, Canada, South Africa, Australia and Europe (mainly Italy, the United Kingdom, Spain, France and Germany), showing geographical heterogeneity in the health-related behaviour of different populations towards SLE. By comparing the search volumes with the Wikipedia page views of both SLE and belimumab, the authors found closely matching peaks, reflecting the knowledge translation after the approval of belimumab for the treatment of SLE. Focusing on the Google Trends search volumes, the authors noticed that the highest peaks were related to news headlines involving celebrities affected by SLE, even when compared to the peak generated by the approval of belimumab. Conclusion – This new approach, able to investigate health information seeking, might give an estimate of the health-related demand and even of the health-related behaviour around SLE, bringing new light to unanswered questions.</div>Alyssahttps://wikipediaquality.com/index.php?title=Computing_Terms_Semantic_Relatedness_by_Knowledge_in_Wikipedia&diff=25477Computing Terms Semantic Relatedness by Knowledge in Wikipedia2020-10-03T06:12:02Z<p>Alyssa: infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Computing Terms Semantic Relatedness by Knowledge in Wikipedia<br />
| date = 2015<br />
| authors = [[Dexin Zhao]]<br />[[Liangliang Qin]]<br />[[Pengjie Liu]]<br />[[Zhen Ma]]<br />[[Yukun Li]]<br />
| doi = 10.1109/WISA.2015.41<br />
| link = https://dl.acm.org/citation.cfm?id=2925056<br />
}}<br />
'''Computing Terms Semantic Relatedness by Knowledge in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Dexin Zhao]], [[Liangliang Qin]], [[Pengjie Liu]], [[Zhen Ma]] and [[Yukun Li]].<br />
<br />
== Overview ==<br />
Many researchers have recognized [[Wikipedia]] as a huge, dynamic knowledge base in recent years. This paper provides a new approach for obtaining [[measures]] of term semantic [[relatedness]], which maps terms to relevant Wikipedia articles as background information for analysis. The proposed algorithm, WLA, focuses on the hyperlink structure and the summary paragraph extracted from the topic pages to compute the similarity of two terms. Compared with other similar techniques, the approach is less computationally intensive, because only the first paragraph is analyzed, not the entire text. The authors' method achieves good performance on the widely used test set WS-353.</div>Alyssahttps://wikipediaquality.com/index.php?title=Visualizing_Recent_Changes_in_Wikipedia&diff=25476Visualizing Recent Changes in Wikipedia2020-10-03T06:09:09Z<p>Alyssa: Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Visualizing Recent Changes in Wikipedia<br />
| date = 2013<br />
| authors = [[Robert P. Biuk-Aghai]]<br />[[Roy Chi Kit Chan]]<br />[[Yain-Whar Si]]<br />[[Simon Fong]]<br />
| doi = 10.1007/s11432-013-4867-9<br />
| link = https://link.springer.com/content/pdf/10.1007%2Fs11432-013-4867-9.pdf<br />
}}<br />
'''Visualizing Recent Changes in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Robert P. Biuk-Aghai]], [[Roy Chi Kit Chan]], [[Yain-Whar Si]] and [[Simon Fong]].<br />
<br />
== Overview ==<br />
Large wikis such as [[Wikipedia]] attract large numbers of editors continuously editing content. It is difficult to observe what editing activity goes on at any given moment, what editing patterns can be observed, and which editors and articles are currently active. The authors introduce the design and implementation of an information visualization tool for data streams of recent changes in wikis that aims to address this difficulty. They also show examples of visualizations from [[English Wikipedia]] and present several patterns of editing activity that they have visually identified using the tool. The authors have evaluated the tool's usability, accuracy and speed of task performance in comparison with Wikipedia's recent changes page, and have obtained qualitative feedback from users on the pros and cons of the tool. They also present a review of the related literature.<br />
<br />
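As an illustrative sketch of the kind of aggregation such a visualization consumes (an assumption, not the authors' actual implementation), the snippet below groups a batch of recent-change records by article and editor; the record fields (title, user, size_change) are hypothetical simplifications of MediaWiki recent-changes data.<br />

```python
from collections import Counter, defaultdict

def aggregate_changes(changes):
    """Summarize a batch of recent-change records into per-article
    and per-editor edit counts, plus net size change per article."""
    edits_per_article = Counter()
    edits_per_editor = Counter()
    net_size = defaultdict(int)
    for ch in changes:
        edits_per_article[ch["title"]] += 1
        edits_per_editor[ch["user"]] += 1
        net_size[ch["title"]] += ch["size_change"]
    return edits_per_article, edits_per_editor, dict(net_size)

# Example batch of hypothetical change records.
batch = [
    {"title": "Alan Turing", "user": "A", "size_change": 120},
    {"title": "Alan Turing", "user": "B", "size_change": -40},
    {"title": "Ada Lovelace", "user": "A", "size_change": 300},
]
articles, editors, sizes = aggregate_changes(batch)
```

Over the example batch this counts two edits to "Alan Turing" with a net size change of +80 bytes; a live tool would feed such summaries to its visual layer continuously.<br />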
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Biuk-Aghai, Robert P.; Chan, Roy Chi Kit; Si, Yain-Whar; Fong, Simon. (2013). "[[Visualizing Recent Changes in Wikipedia]]". SP Science China Press. DOI: 10.1007/s11432-013-4867-9. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Biuk-Aghai |first1=Robert P. |last2=Chan |first2=Roy Chi Kit |last3=Si |first3=Yain-Whar |last4=Fong |first4=Simon |title=Visualizing Recent Changes in Wikipedia |date=2013 |doi=10.1007/s11432-013-4867-9 |url=https://wikipediaquality.com/wiki/Visualizing_Recent_Changes_in_Wikipedia |journal=SP Science China Press}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Biuk-Aghai, Robert P.; Chan, Roy Chi Kit; Si, Yain-Whar; Fong, Simon. (2013). &amp;quot;<a href="https://wikipediaquality.com/wiki/Visualizing_Recent_Changes_in_Wikipedia">Visualizing Recent Changes in Wikipedia</a>&amp;quot;. SP Science China Press. DOI: 10.1007/s11432-013-4867-9. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Wikipedia%27s_Role_in_Reputation_Management:_an_Analysis_of_the_Best_and_Worst_Companies_in_the_Usa&diff=25475Wikipedia's Role in Reputation Management: an Analysis of the Best and Worst Companies in the Usa2020-10-03T06:07:08Z<p>Alyssa: Category</p>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia's Role in Reputation Management: an Analysis of the Best and Worst Companies in the Usa<br />
| date = 2012<br />
| authors = [[Marcia W. DiStaso]]<br />[[Marcus Messner]]<br />
| doi = 10.7238/d.v0i14.1473<br />
| link = http://www.redalyc.org/pdf/550/Resumenes/Resumen_55023345002_1.pdf<br />
}}<br />
'''Wikipedia's Role in Reputation Management: an Analysis of the Best and Worst Companies in the Usa''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Marcia W. DiStaso]] and [[Marcus Messner]].<br />
<br />
== Overview ==<br />
Being considered one of the best companies in the USA is a great honor, but this [[reputation]] does not exempt businesses from negativity in the collaboratively edited online encyclopedia [[Wikipedia]]. Content analysis of corporate Wikipedia articles for companies with the best and worst reputations in the USA revealed that negative content outweighed positive content irrespective of reputation. It was found that both the best and the worst companies had more negative than positive content in Wikipedia. This is an important issue because Wikipedia is not only one of the most popular websites in the world, but is also often the first place people look when seeking corporate information. Although there was more content on corporate social responsibility in the entries for the ten companies with the best reputations, this was still overshadowed by content referring to legal issues or scandals. Ultimately, public relations professionals need to regularly monitor and request updates to their corporate Wikipedia articles regardless of what kind of company they work for.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
DiStaso, Marcia W.; Messner, Marcus. (2012). "[[Wikipedia's Role in Reputation Management: an Analysis of the Best and Worst Companies in the Usa]]". Universitat Oberta de Catalunya. DOI: 10.7238/d.v0i14.1473. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=DiStaso |first1=Marcia W. |last2=Messner |first2=Marcus |title=Wikipedia's Role in Reputation Management: an Analysis of the Best and Worst Companies in the Usa |date=2012 |doi=10.7238/d.v0i14.1473 |url=https://wikipediaquality.com/wiki/Wikipedia's_Role_in_Reputation_Management:_an_Analysis_of_the_Best_and_Worst_Companies_in_the_Usa |journal=Universitat Oberta de Catalunya}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
DiStaso, Marcia W.; Messner, Marcus. (2012). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikipedia's_Role_in_Reputation_Management:_an_Analysis_of_the_Best_and_Worst_Companies_in_the_Usa">Wikipedia's Role in Reputation Management: an Analysis of the Best and Worst Companies in the Usa</a>&amp;quot;. Universitat Oberta de Catalunya. DOI: 10.7238/d.v0i14.1473. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Alyssahttps://wikipediaquality.com/index.php?title=Books_Cited_in_Wikipedia:_Possibility_to_Use_Their_Nippon_Decimal_Classification_Categories_for_Book_Recommendation&diff=25474Books Cited in Wikipedia: Possibility to Use Their Nippon Decimal Classification Categories for Book Recommendation2020-10-03T06:05:20Z<p>Alyssa: Embed for English Wikipedia, HTML</p>
<hr />
<div>{{Infobox work<br />
| title = Books Cited in Wikipedia: Possibility to Use Their Nippon Decimal Classification Categories for Book Recommendation<br />
| date = 2016<br />
| authors = [[Keita Tsuji]]<br />
| doi = 10.1109/IIAI-AAI.2016.247<br />
| link = <br />
}}<br />
'''Books Cited in Wikipedia: Possibility to Use Their Nippon Decimal Classification Categories for Book Recommendation''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Keita Tsuji]].<br />
<br />
== Overview ==<br />
This paper investigated the effectiveness of developing a book recommendation system based on books cited in [[Wikipedia]] articles, focusing on their Nippon Decimal Classification (NDC) categories. Among 95,194 articles, 28,154 cited books whose bibliographies listed ISBNs. In many cases, all NDCs of the books cited in a given article were identical and thus consistent. Such articles can be used for automatic assignment of NDCs.<br />
<br />
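The consistency observation above can be made concrete with a small sketch: for each article, collect the NDC categories of its cited books and keep the articles whose categories all agree. This is a hypothetical illustration of the idea, not the paper's code; the article names and NDC codes are invented.<br />

```python
def ndc_is_consistent(ndc_codes):
    """True if every cited book in an article shares one NDC category."""
    return len(set(ndc_codes)) == 1

# Hypothetical mapping: article -> NDC codes of the books it cites.
citations = {
    "Article about railways": ["686", "686", "686"],  # consistent
    "Article about a novelist": ["913", "910"],       # mixed
}

# Articles whose citations agree on a single NDC could be candidates
# for automatic NDC assignment, as the paper suggests.
assignable = {article: codes[0]
              for article, codes in citations.items()
              if ndc_is_consistent(codes)}
```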
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Tsuji, Keita. (2016). "[[Books Cited in Wikipedia: Possibility to Use Their Nippon Decimal Classification Categories for Book Recommendation]]". DOI: 10.1109/IIAI-AAI.2016.247. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Tsuji |first1=Keita |title=Books Cited in Wikipedia: Possibility to Use Their Nippon Decimal Classification Categories for Book Recommendation |date=2016 |doi=10.1109/IIAI-AAI.2016.247 |url=https://wikipediaquality.com/wiki/Books_Cited_in_Wikipedia:_Possibility_to_Use_Their_Nippon_Decimal_Classification_Categories_for_Book_Recommendation}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Tsuji, Keita. (2016). &amp;quot;<a href="https://wikipediaquality.com/wiki/Books_Cited_in_Wikipedia:_Possibility_to_Use_Their_Nippon_Decimal_Classification_Categories_for_Book_Recommendation">Books Cited in Wikipedia: Possibility to Use Their Nippon Decimal Classification Categories for Book Recommendation</a>&amp;quot;. DOI: 10.1109/IIAI-AAI.2016.247. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=A_Novel_Approach_to_Automatic_Gazetteer_Generation_Using_Wikipedia&diff=25473A Novel Approach to Automatic Gazetteer Generation Using Wikipedia2020-10-03T06:02:20Z<p>Alyssa: infobox</p>
<hr />
<div>{{Infobox work<br />
| title = A Novel Approach to Automatic Gazetteer Generation Using Wikipedia<br />
| date = 2009<br />
| authors = [[Ziqi Zhang]]<br />[[José Iria]]<br />
| doi = 10.3115/1699765.1699766<br />
| link = http://dl.acm.org/citation.cfm?id=1699765.1699766<br />
}}<br />
'''A Novel Approach to Automatic Gazetteer Generation Using Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Ziqi Zhang]] and [[José Iria]].<br />
<br />
== Overview ==<br />
Gazetteers, or entity dictionaries, are important knowledge resources for solving a wide range of NLP problems, such as entity extraction. The authors introduce a novel method to automatically generate gazetteers from seed lists using an external knowledge resource, [[Wikipedia]]. Unlike previous methods, this method exploits the rich content and various structural elements of Wikipedia, and does not rely on language- or domain-specific knowledge. Furthermore, applying the extended gazetteers to an entity extraction task in a scientific domain, the authors empirically observed a significant improvement in system accuracy compared with systems using the seed gazetteers.</div>Alyssahttps://wikipediaquality.com/index.php?title=Wikipedia%27s_Politics_of_Exclusion:_Gender,_Epistemology,_and_Feminist_Rhetorical_(In)Action&diff=25472Wikipedia's Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action2020-10-03T06:01:11Z<p>Alyssa: cat.</p>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia's Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action<br />
| date = 2015<br />
| authors = [[Leigh Gruwell]]<br />
| doi = 10.1016/j.compcom.2015.06.009<br />
| link = http://www.sciencedirect.com/science/article/pii/S8755461515000547<br />
}}<br />
'''Wikipedia's Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Leigh Gruwell]].<br />
<br />
== Overview ==<br />
Compositionists have celebrated [[Wikipedia]] as a space that privileges collaborative, public writing and complicates traditional notions of authorship and revision. Yet this scholarship has not considered the implications of Wikipedia's “gender gap”—the highly disproportionate number of male editors over female editors. In this article, the author explores how Wikipedia functions as a rhetorical discourse community whose conventions exclude and silence feminist ways of knowing and writing. Drawing on textual analysis of Wikipedia's editorial policies, as well as interviews with female users, the author argues that Wikipedia's insistence on separating embodied subjectivity from the production of knowledge limits the site's ability to facilitate any substantial, subversive feminist rhetorical action. These limitations, the author suggests, should inform a critical pedagogical approach to Wikipedia.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Gruwell, Leigh. (2015). "[[Wikipedia's Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action]]". DOI: 10.1016/j.compcom.2015.06.009. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Gruwell |first1=Leigh |title=Wikipedia's Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action |date=2015 |doi=10.1016/j.compcom.2015.06.009 |url=https://wikipediaquality.com/wiki/Wikipedia's_Politics_of_Exclusion:_Gender,_Epistemology,_and_Feminist_Rhetorical_(In)Action}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Gruwell, Leigh. (2015). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikipedia's_Politics_of_Exclusion:_Gender,_Epistemology,_and_Feminist_Rhetorical_(In)Action">Wikipedia's Politics of Exclusion: Gender, Epistemology, and Feminist Rhetorical (In)Action</a>&amp;quot;. DOI: 10.1016/j.compcom.2015.06.009. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Alyssahttps://wikipediaquality.com/index.php?title=Wikilit:_Collecting_the_Wiki_and_Wikipedia_Literature&diff=25471Wikilit: Collecting the Wiki and Wikipedia Literature2020-10-03T05:59:32Z<p>Alyssa: Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Wikilit: Collecting the Wiki and Wikipedia Literature<br />
| date = 2011<br />
| authors = [[Phoebe Ayers]]<br />[[Reid Priedhorsky]]<br />
| doi = 10.1145/2038558.2038612<br />
| link = https://dl.acm.org/ft_gateway.cfm?id=2038612&amp;type=pdf<br />
}}<br />
'''Wikilit: Collecting the Wiki and Wikipedia Literature''' - scientific work related to [[Wikipedia quality]] published in 2011, written by [[Phoebe Ayers]] and [[Reid Priedhorsky]].<br />
<br />
== Overview ==<br />
This workshop has three key goals. First, the authors will examine existing and proposed systems for collecting and analyzing the research literature about wikis. Second, they will discuss the challenges in building such a system and will engage participants to design a sustainable collaborative system to achieve this goal. Finally, they will provide a forum to build upon ongoing wiki community discussions about problems and opportunities in finding and sharing the wiki research literature.</div>Alyssahttps://wikipediaquality.com/index.php?title=Novel_Techniques_for_Text_Annotation_with_Wikipedia_Entities&diff=25470Novel Techniques for Text Annotation with Wikipedia Entities2020-10-03T05:57:20Z<p>Alyssa: + category</p>
<hr />
<div>{{Infobox work<br />
| title = Novel Techniques for Text Annotation with Wikipedia Entities<br />
| date = 2014<br />
| authors = [[Christos Makris]]<br />[[Michael Angelos Simos]]<br />
| doi = 10.1007/978-3-662-44654-6_50<br />
| link = https://link.springer.com/content/pdf/10.1007%2F978-3-662-44654-6_50.pdf<br />
}}<br />
'''Novel Techniques for Text Annotation with Wikipedia Entities''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Christos Makris]] and [[Michael Angelos Simos]].<br />
<br />
== Overview ==<br />
Text annotation is the procedure of identifying the semantically dominant words of a text segment and attaching conceptual content information to them in their context. In this paper, the authors propose novel methods for the automatic annotation of text fragments with entities of [[Wikipedia]], the largest knowledge base online, a process commonly known as Wikification, aiming at resolving the semantics of synonymous and polysemous terms accurately. The cornerstone of the contribution is a novel iterative Wikification approach, converging to optimal annotations while balancing high accuracy with performance. The first two methods can be fine-tuned through a machine-learning technique over large homogeneous data sets. The authors' experimental evaluation resulted in remarkable improvement over state-of-the-art Wikification approaches.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Makris, Christos; Simos, Michael Angelos. (2014). "[[Novel Techniques for Text Annotation with Wikipedia Entities]]". Springer, Berlin, Heidelberg. DOI: 10.1007/978-3-662-44654-6_50. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Makris |first1=Christos |last2=Simos |first2=Michael Angelos |title=Novel Techniques for Text Annotation with Wikipedia Entities |date=2014 |doi=10.1007/978-3-662-44654-6_50 |url=https://wikipediaquality.com/wiki/Novel_Techniques_for_Text_Annotation_with_Wikipedia_Entities |journal=Springer, Berlin, Heidelberg}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Makris, Christos; Simos, Michael Angelos. (2014). &amp;quot;<a href="https://wikipediaquality.com/wiki/Novel_Techniques_for_Text_Annotation_with_Wikipedia_Entities">Novel Techniques for Text Annotation with Wikipedia Entities</a>&amp;quot;. Springer, Berlin, Heidelberg. DOI: 10.1007/978-3-662-44654-6_50. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Alyssahttps://wikipediaquality.com/index.php?title=Wikipedia-Based_Extraction_of_Key_Information_from_Resumes&diff=25469Wikipedia-Based Extraction of Key Information from Resumes2020-10-03T05:55:29Z<p>Alyssa: Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia-Based Extraction of Key Information from Resumes<br />
| date = 2017<br />
| authors = [[Mohammad Ghufran]]<br />[[Nacéra Bennacer]]<br />[[Gianluca Quercini]]<br />
| doi = 10.1109/RCIS.2017.7956530<br />
| link = https://hal.archives-ouvertes.fr/hal-01764238<br />
}}<br />
'''Wikipedia-Based Extraction of Key Information from Resumes''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Mohammad Ghufran]], [[Nacéra Bennacer]] and [[Gianluca Quercini]].<br />
<br />
== Overview ==<br />
There is a vast amount of information about individuals available on the Web that has potential uses in Human Resource Management (HRM), both for recruiters and job seekers. Since people's names are inherently ambiguous, finding information related to a specific person is challenging, and a simple query by name will likely return web pages related to several different individuals who happen to share the same name as the target of the query.</div>Alyssahttps://wikipediaquality.com/index.php?title=Improving_Distributed_Representation_by_Feature_Selection_of_Wikipedia&diff=25468Improving Distributed Representation by Feature Selection of Wikipedia2020-10-03T05:54:05Z<p>Alyssa: Adding embed</p>
<hr />
<div>{{Infobox work<br />
| title = Improving Distributed Representation by Feature Selection of Wikipedia<br />
| date = 2017<br />
| authors = [[Dao Van Tuan]]<br />[[Hiroshi Sato]]<br />
| doi = 10.1109/acdtj.2017.8259588<br />
| link = http://xplorestaging.ieee.org/ielx7/8253782/8259575/08259588.pdf?arnumber=8259588<br />
}}<br />
'''Improving Distributed Representation by Feature Selection of Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Dao Van Tuan]] and [[Hiroshi Sato]].<br />
<br />
== Overview ==<br />
Distributed representation plays an important role in many applications of [[Natural Language Processing]] (NLP). Today, the Word2Vec model has been attracting attention against the backdrop of easy access to enormous amounts of language data from the Internet, such as [[Wikipedia]]. For the effective use of Word2Vec, one has to consider not only improvements to the method itself but also the process of preparing training data. In this paper, the authors demonstrate that adequate selection of training data can greatly improve the performance of Word2Vec compared to existing research. The authors also confirmed that Wikipedia dump data is not a good source of training data as is.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Tuan, Dao Van; Sato, Hiroshi. (2017). "[[Improving Distributed Representation by Feature Selection of Wikipedia]]". DOI: 10.1109/acdtj.2017.8259588. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Tuan |first1=Dao Van |last2=Sato |first2=Hiroshi |title=Improving Distributed Representation by Feature Selection of Wikipedia |date=2017 |doi=10.1109/acdtj.2017.8259588 |url=https://wikipediaquality.com/wiki/Improving_Distributed_Representation_by_Feature_Selection_of_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Tuan, Dao Van; Sato, Hiroshi. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/Improving_Distributed_Representation_by_Feature_Selection_of_Wikipedia">Improving Distributed Representation by Feature Selection of Wikipedia</a>&amp;quot;. DOI: 10.1109/acdtj.2017.8259588. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Exploring_Long_Running_News_Stories_Using_Wikipedia&diff=25467Exploring Long Running News Stories Using Wikipedia2020-10-03T05:51:59Z<p>Alyssa: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Exploring Long Running News Stories Using Wikipedia<br />
| date = 2015<br />
| authors = [[Jaspreet Singh]]<br />[[Abhijit Anand]]<br />[[Vinay Setty]]<br />[[Avishek Anand]]<br />
| doi = 10.1145/2786451.2786489<br />
| link = http://dl.acm.org/citation.cfm?doid=2786451.2786489<br />
}}<br />
'''Exploring Long Running News Stories Using Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Jaspreet Singh]], [[Abhijit Anand]], [[Vinay Setty]] and [[Avishek Anand]].<br />
<br />
== Overview ==<br />
A significant portion of today's news articles are part of long-running stories. To better understand the context of these stories, journalists, social scientists and other scholars use news collections to find temporal and topical insights. However, these insights are devoid of user impressions, derived from click-through data and query logs, and are only reliable if the collection is complete and consistent. In this work, the authors introduce the notion of combining user impressions from [[Wikipedia]] with news-collection-based insights for long-running news story exploration, and outline promising new research directions. They also demonstrate initial attempts with a prototype system called NewsEX.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Singh, Jaspreet; Anand, Abhijit; Setty, Vinay; Anand, Avishek. (2015). "[[Exploring Long Running News Stories Using Wikipedia]]". DOI: 10.1145/2786451.2786489. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Singh |first1=Jaspreet |last2=Anand |first2=Abhijit |last3=Setty |first3=Vinay |last4=Anand |first4=Avishek |title=Exploring Long Running News Stories Using Wikipedia |date=2015 |doi=10.1145/2786451.2786489 |url=https://wikipediaquality.com/wiki/Exploring_Long_Running_News_Stories_Using_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Singh, Jaspreet; Anand, Abhijit; Setty, Vinay; Anand, Avishek. (2015). &amp;quot;<a href="https://wikipediaquality.com/wiki/Exploring_Long_Running_News_Stories_Using_Wikipedia">Exploring Long Running News Stories Using Wikipedia</a>&amp;quot;. DOI: 10.1145/2786451.2786489. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Geotagging_Aided_by_Topic_Detection_with_Wikipedia&diff=25466Geotagging Aided by Topic Detection with Wikipedia2020-10-03T05:49:28Z<p>Alyssa: + Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Geotagging Aided by Topic Detection with Wikipedia<br />
| date = 2011<br />
| authors = [[Rafael Odon de Alencar]]<br />[[Clodoveu A. Davis]]<br />
| doi = 10.1007/978-3-642-19789-5_23<br />
| link = https://link.springer.com/chapter/10.1007/978-3-642-19789-5_23<br />
}}<br />
'''Geotagging Aided by Topic Detection with Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2011, written by [[Rafael Odon de Alencar]] and [[Clodoveu A. Davis]].<br />
<br />
== Overview ==<br />
It is known that geography-aware keyword queries correspond to a significant share of users' demand on search engines. This paper describes a strategy for tagging documents with place names according to the geographical context of their textual content, using a topic indexing technique that treats [[Wikipedia]] articles as a controlled vocabulary. By identifying those topics in the text, the authors connect documents with Wikipedia's semantic network of articles, allowing operations on Wikipedia's graph to find related places. The authors present an experimental evaluation on documents tagged with Brazilian states, demonstrating the feasibility of the proposal and opening the way to further research on geotagging based on semantic networks.</div>Alyssahttps://wikipediaquality.com/index.php?title=Open-Domain_Question_Answering_Framework_Using_Wikipedia&diff=22814Open-Domain Question Answering Framework Using Wikipedia2019-12-13T08:20:56Z<p>Alyssa: cat.</p>
<hr />
<div>{{Infobox work<br />
| title = Open-Domain Question Answering Framework Using Wikipedia<br />
| date = 2016<br />
| authors = [[Saleem Ameen]]<br />[[Hyunsuk Chung]]<br />[[Soyeon Caren Han]]<br />[[Byeong Ho Kang]]<br />
| doi = 10.1007/978-3-319-50127-7_55<br />
| link = https://link.springer.com/content/pdf/10.1007%2F978-3-319-50127-7_55.pdf<br />
}}<br />
'''Open-Domain Question Answering Framework Using Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Saleem Ameen]], [[Hyunsuk Chung]], [[Soyeon Caren Han]] and [[Byeong Ho Kang]].<br />
<br />
== Overview ==<br />
This paper explores the feasibility of implementing a model for an open-domain, automated question answering framework that leverages [[Wikipedia]]’s knowledge base. While Wikipedia implicitly comprises answers to common questions, the disambiguation of natural language and the difficulty of developing an [[information retrieval]] process that produces answers with specificity present pertinent challenges. However, observational analysis suggests that it is possible to discount the syntactical and lexical structure of a sentence in contexts where questions contain a specific target entity (words that identify a person, location or organisation) and correspondingly query a property related to it. To investigate this, the authors implemented an algorithmic process that extracted the target entity from the question using CRF-based [[named entity recognition]] (NER) and utilised all remaining words as potential properties. Using DBpedia, an ontological database of Wikipedia’s knowledge, the authors searched for the closest matching property that would produce an answer by applying standardised string matching algorithms, including the Levenshtein distance, similar text and Dice’s coefficient. The experimental results illustrate that using Wikipedia as a knowledge base produces high precision for questions that contain a singular unambiguous entity as the subject, but lower accuracy for questions where the entity exists as part of the object.<br />
<br />
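The overview above mentions matching the question's residual words against DBpedia property names with standardised string matching algorithms. The sketch below (not the authors' code; the property names are hypothetical examples) illustrates how the Levenshtein distance and Dice's bigram coefficient can be combined to pick the closest property:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def dice(a: str, b: str) -> float:
    """Dice's coefficient over character bigrams."""
    ba = {a[i:i + 2] for i in range(len(a) - 1)}
    bb = {b[i:i + 2] for i in range(len(b) - 1)}
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def closest_property(query: str, properties: list) -> str:
    # Prefer high bigram overlap; break ties with low edit distance.
    return max(properties, key=lambda p: (dice(query, p), -levenshtein(query, p)))

print(closest_property("birth place", ["birthPlace", "deathPlace", "spouse"]))
```

Here the residual words "birth place" match the hypothetical property name "birthPlace" most closely, which would then be queried for the extracted target entity.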
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Ameen, Saleem; Chung, Hyunsuk; Han, Soyeon Caren; Kang, Byeong Ho. (2016). "[[Open-Domain Question Answering Framework Using Wikipedia]]". Springer, Cham. DOI: 10.1007/978-3-319-50127-7_55. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Ameen |first1=Saleem |last2=Chung |first2=Hyunsuk |last3=Han |first3=Soyeon Caren |last4=Kang |first4=Byeong Ho |title=Open-Domain Question Answering Framework Using Wikipedia |date=2016 |doi=10.1007/978-3-319-50127-7_55 |url=https://wikipediaquality.com/wiki/Open-Domain_Question_Answering_Framework_Using_Wikipedia |journal=Springer, Cham}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Ameen, Saleem; Chung, Hyunsuk; Han, Soyeon Caren; Kang, Byeong Ho. (2016). &amp;quot;<a href="https://wikipediaquality.com/wiki/Open-Domain_Question_Answering_Framework_Using_Wikipedia">Open-Domain Question Answering Framework Using Wikipedia</a>&amp;quot;. Springer, Cham. DOI: 10.1007/978-3-319-50127-7_55. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Alyssahttps://wikipediaquality.com/index.php?title=Automatic_Construction_and_Evaluation_of_a_Large_Semantically_Enriched_Wikipedia&diff=22813Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia2019-12-13T08:19:17Z<p>Alyssa: + embed code</p>
<hr />
<div>{{Infobox work<br />
| title = Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia<br />
| date = 2016<br />
| authors = [[Alessandro Raganato]]<br />[[Claudio Delli Bovi]]<br />[[Roberto Navigli]]<br />
| link = https://dl.acm.org/citation.cfm?id=3061026<br />
}}<br />
'''Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Alessandro Raganato]], [[Claudio Delli Bovi]] and [[Roberto Navigli]].<br />
<br />
== Overview ==<br />
The hyperlink structure of [[Wikipedia]] constitutes a key resource for many [[Natural Language Processing]] tasks and applications, as it provides several million semantic annotations of entities in context. Yet only a small fraction of mentions across the entire Wikipedia corpus is linked. In this paper the authors present the automatic construction and evaluation of a Semantically Enriched Wikipedia (SEW) in which the overall number of linked mentions has been more than tripled solely by exploiting the structure of Wikipedia itself and the wide-coverage sense inventory of BabelNet. As a result the authors obtain a sense-annotated corpus with more than 200 million annotations of over 4 million different concepts and [[named entities]]. The authors then show that the corpus leads to competitive results on multiple tasks, such as Entity Linking and Word Similarity.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Raganato, Alessandro; Bovi, Claudio Delli; Navigli, Roberto. (2016). "[[Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia]]". AAAI Press. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Raganato |first1=Alessandro |last2=Bovi |first2=Claudio Delli |last3=Navigli |first3=Roberto |title=Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia |date=2016 |url=https://wikipediaquality.com/wiki/Automatic_Construction_and_Evaluation_of_a_Large_Semantically_Enriched_Wikipedia |journal=AAAI Press}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Raganato, Alessandro; Bovi, Claudio Delli; Navigli, Roberto. (2016). &amp;quot;<a href="https://wikipediaquality.com/wiki/Automatic_Construction_and_Evaluation_of_a_Large_Semantically_Enriched_Wikipedia">Automatic Construction and Evaluation of a Large Semantically Enriched Wikipedia</a>&amp;quot;. AAAI Press. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Wiki-Mid:_a_Very_Large_Multi-Domain_Interests_Dataset_of_Twitter_Users_with_Mappings_to_Wikipedia&diff=22812Wiki-Mid: a Very Large Multi-Domain Interests Dataset of Twitter Users with Mappings to Wikipedia2019-12-13T08:17:47Z<p>Alyssa: + categories</p>
<hr />
<div>{{Infobox work<br />
| title = Wiki-Mid: a Very Large Multi-Domain Interests Dataset of Twitter Users with Mappings to Wikipedia<br />
| date = 2018<br />
| authors = [[Giorgia Di Tommaso]]<br />[[Stefano Faralli]]<br />[[Giovanni Stilo]]<br />[[Paola Velardi]]<br />
| link = https://link.springer.com/chapter/10.1007%2F978-3-030-00668-6_3<br />
}}<br />
'''Wiki-Mid: a Very Large Multi-Domain Interests Dataset of Twitter Users with Mappings to Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2018, written by [[Giorgia Di Tommaso]], [[Stefano Faralli]], [[Giovanni Stilo]] and [[Paola Velardi]].<br />
<br />
== Overview ==<br />
This paper presents Wiki-MID, a LOD-compliant multi-domain interests dataset to train and test Recommender Systems, and the methodology used to create the dataset from [[Twitter]] messages in English and Italian. The English dataset includes an average of 90 multi-domain preferences per user on music, books, movies, celebrities, sport, politics and much more, for about half a million users traced during six months in 2017. Preferences are either extracted from messages of users who use Spotify, Goodreads and other similar content sharing platforms, or induced from their “topical” friends, i.e., followees representing an interest rather than a social relation between peers. In addition, preferred items are matched with [[Wikipedia]] articles describing them. This unique feature of the dataset provides a means to categorize preferred items, exploiting available semantic resources linked to Wikipedia such as the Wikipedia Category Graph, [[DBpedia]], BabelNet and others.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Tommaso, Giorgia Di; Faralli, Stefano; Stilo, Giovanni; Velardi, Paola. (2018). "[[Wiki-Mid: a Very Large Multi-Domain Interests Dataset of Twitter Users with Mappings to Wikipedia]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Tommaso |first1=Giorgia Di |last2=Faralli |first2=Stefano |last3=Stilo |first3=Giovanni |last4=Velardi |first4=Paola |title=Wiki-Mid: a Very Large Multi-Domain Interests Dataset of Twitter Users with Mappings to Wikipedia |date=2018 |url=https://wikipediaquality.com/wiki/Wiki-Mid:_a_Very_Large_Multi-Domain_Interests_Dataset_of_Twitter_Users_with_Mappings_to_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Tommaso, Giorgia Di; Faralli, Stefano; Stilo, Giovanni; Velardi, Paola. (2018). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wiki-Mid:_a_Very_Large_Multi-Domain_Interests_Dataset_of_Twitter_Users_with_Mappings_to_Wikipedia">Wiki-Mid: a Very Large Multi-Domain Interests Dataset of Twitter Users with Mappings to Wikipedia</a>&amp;quot;.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:English Wikipedia]]<br />
[[Category:Italian Wikipedia]]<br />
[[Category:Twi Wikipedia]]</div>Alyssahttps://wikipediaquality.com/index.php?title=Query_Expansion_Powered_by_Wikipedia_Hyperlinks&diff=22811Query Expansion Powered by Wikipedia Hyperlinks2019-12-13T08:15:42Z<p>Alyssa: Infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Query Expansion Powered by Wikipedia Hyperlinks<br />
| date = 2012<br />
| authors = [[Carson Bruce]]<br />[[Xiaoying Gao]]<br />[[Peter Andreae]]<br />[[Shahida Jabeen]]<br />
| doi = 10.1007/978-3-642-35101-3_36<br />
| link = http://dl.acm.org/citation.cfm?id=2436824.2436867<br />
}}<br />
'''Query Expansion Powered by Wikipedia Hyperlinks''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Carson Bruce]], [[Xiaoying Gao]], [[Peter Andreae]] and [[Shahida Jabeen]].<br />
<br />
== Overview ==<br />
This research introduces a new query expansion method that uses [[Wikipedia]] and its hyperlink structure to find related terms for reformulating a query. Queries are first better understood by splitting them into query aspects. Further understanding is gained by measuring how well each aspect is represented in the original search results. Poorly represented aspects are found to be an excellent source of query improvement. The authors’ main contribution is the way Wikipedia is used to identify aspects and underrepresented aspects, and to weight the expansion terms. Results have shown that the approach improves the original query and search results, and outperforms two existing query expansion methods.</div>Alyssahttps://wikipediaquality.com/index.php?title=Wikipedian:_a_Social_Identity_Between_Work_and_Contribution&diff=22810Wikipedian: a Social Identity Between Work and Contribution2019-12-13T08:13:14Z<p>Alyssa: + wikilinks</p>
<hr />
<div>'''Wikipedian: a Social Identity Between Work and Contribution''' - scientific work related to [[Wikipedia quality]] published in 2018, written by [[Léo Joubert]].<br />
<br />
== Overview ==<br />
Contributors to the [[Wikipedia]] "free encyclopedia" identify themselves and are identified as "[[Wikipedians]]". A Wikipedian does not leave his job when he becomes a Wikipedian, nor does he become a Wikipedian in his workplace. The worker's identity and the Wikipedian identity coexist in the social identity of an individual. According to which patterns does this coexistence between worker's identity and Wikipedian identity operate? Beyond the differences specific to the social identity of each contributor, the authors try to show that these singular transactions all take place according to a finite number of patterns that can be enumerated. At this stage of the analysis, the authors distinguish five identity patterns: employment, learning center, alternative development, continuity in upset, and parallel arena. The authors' model aims at a better understanding of why a contributor stays on Wikipedia and identifies as a contributor.</div>Alyssahttps://wikipediaquality.com/index.php?title=Learning_to_Extract_Comparison_Points_of_Entity_Pairs_from_Wikipedia_Articles&diff=22809Learning to Extract Comparison Points of Entity Pairs from Wikipedia Articles2019-12-13T08:10:44Z<p>Alyssa: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Learning to Extract Comparison Points of Entity Pairs from Wikipedia Articles<br />
| date = 2018<br />
| authors = [[Sandeep Kumar Pani]]<br />[[R Naresh]]<br />[[Pawan Goyal]]<br />[[Plaban Kumar Bhowmick]]<br />
| doi = 10.1145/3197026.3203909<br />
| link = http://doi.acm.org/10.1145/3197026.3203909<br />
}}<br />
'''Learning to Extract Comparison Points of Entity Pairs from Wikipedia Articles''' - scientific work related to [[Wikipedia quality]] published in 2018, written by [[Sandeep Kumar Pani]], [[R Naresh]], [[Pawan Goyal]] and [[Plaban Kumar Bhowmick]].<br />
<br />
== Overview ==<br />
In this paper, the authors present preliminary results on a novel task: extracting comparison points for a pair of entities from the text articles describing them. The task is challenging, as comparison points in a typical pair of articles tend to be sparse. The authors present a multi-level document analysis (viz. document, paragraph and sentence level) for extracting the comparisons. For extracting sentence-level comparisons, the hardest of the three tasks, they use a Convolutional Neural Network (CNN) with [[features]] extracted around the triple. Experiments conducted on a small dataset show encouraging performance.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Pani, Sandeep Kumar; Naresh, R; Goyal, Pawan; Bhowmick, Plaban Kumar. (2018). "[[Learning to Extract Comparison Points of Entity Pairs from Wikipedia Articles]]". ACM Press. DOI: 10.1145/3197026.3203909. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Pani |first1=Sandeep Kumar |last2=Naresh |first2=R |last3=Goyal |first3=Pawan |last4=Bhowmick |first4=Plaban Kumar |title=Learning to Extract Comparison Points of Entity Pairs from Wikipedia Articles |date=2018 |doi=10.1145/3197026.3203909 |url=https://wikipediaquality.com/wiki/Learning_to_Extract_Comparison_Points_of_Entity_Pairs_from_Wikipedia_Articles |journal=ACM Press}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Pani, Sandeep Kumar; Naresh, R; Goyal, Pawan; Bhowmick, Plaban Kumar. (2018). &amp;quot;<a href="https://wikipediaquality.com/wiki/Learning_to_Extract_Comparison_Points_of_Entity_Pairs_from_Wikipedia_Articles">Learning to Extract Comparison Points of Entity Pairs from Wikipedia Articles</a>&amp;quot;. ACM Press. DOI: 10.1145/3197026.3203909. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Sat0585_Evaluation_of_Wikipedia_Rheumatology_Articles_as_a_Learning_Resource_for_Medical_Students&diff=22808Sat0585 Evaluation of Wikipedia Rheumatology Articles as a Learning Resource for Medical Students2019-12-13T08:06:26Z<p>Alyssa: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Sat0585 Evaluation of Wikipedia Rheumatology Articles as a Learning Resource for Medical Students<br />
| date = 2014<br />
| authors = [[Marco Antivalle]]<br />[[M. Battellino]]<br />[[M.C. Ditto]]<br />[[V. Varisco]]<br />[[M. Chevallard]]<br />[[F. Rigamonti]]<br />[[Alberto Batticciotto]]<br />[[Fabiola Atzeni]]<br />[[Piercarlo Sarzi-Puttini]]<br />
| doi = 10.1136/annrheumdis-2014-eular.5610<br />
| link = http://ard.bmj.com/content/73/Suppl_2/801.3<br />
}}<br />
'''Sat0585 Evaluation of Wikipedia Rheumatology Articles as a Learning Resource for Medical Students''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Marco Antivalle]], [[M. Battellino]], [[M.C. Ditto]], [[V. Varisco]], [[M. Chevallard]], [[F. Rigamonti]], [[Alberto Batticciotto]], [[Fabiola Atzeni]] and [[Piercarlo Sarzi-Puttini]].<br />
<br />
== Overview ==<br />
Background: Despite concerns regarding its accuracy and [[reliability]], [[Wikipedia]] is increasingly being used by medical students as a source of medical information (1,2).<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Antivalle, Marco; Battellino, M.; Ditto, M.C.; Varisco, V.; Chevallard, M.; Rigamonti, F.; Batticciotto, Alberto; Atzeni, Fabiola; Sarzi-Puttini, Piercarlo. (2014). "[[Sat0585 Evaluation of Wikipedia Rheumatology Articles as a Learning Resource for Medical Students]]". BMJ Publishing Group Ltd. DOI: 10.1136/annrheumdis-2014-eular.5610. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Antivalle |first1=Marco |last2=Battellino |first2=M. |last3=Ditto |first3=M.C. |last4=Varisco |first4=V. |last5=Chevallard |first5=M. |last6=Rigamonti |first6=F. |last7=Batticciotto |first7=Alberto |last8=Atzeni |first8=Fabiola |last9=Sarzi-Puttini |first9=Piercarlo |title=Sat0585 Evaluation of Wikipedia Rheumatology Articles as a Learning Resource for Medical Students |date=2014 |doi=10.1136/annrheumdis-2014-eular.5610 |url=https://wikipediaquality.com/wiki/Sat0585_Evaluation_of_Wikipedia_Rheumatology_Articles_as_a_Learning_Resource_for_Medical_Students |journal=BMJ Publishing Group Ltd}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Antivalle, Marco; Battellino, M.; Ditto, M.C.; Varisco, V.; Chevallard, M.; Rigamonti, F.; Batticciotto, Alberto; Atzeni, Fabiola; Sarzi-Puttini, Piercarlo. (2014). &amp;quot;<a href="https://wikipediaquality.com/wiki/Sat0585_Evaluation_of_Wikipedia_Rheumatology_Articles_as_a_Learning_Resource_for_Medical_Students">Sat0585 Evaluation of Wikipedia Rheumatology Articles as a Learning Resource for Medical Students</a>&amp;quot;. BMJ Publishing Group Ltd. DOI: 10.1136/annrheumdis-2014-eular.5610. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=A_Supervised_Method_for_Lexical_Annotation_of_Schema_Labels_based_on_Wikipedia&diff=22807A Supervised Method for Lexical Annotation of Schema Labels based on Wikipedia2019-12-13T08:05:19Z<p>Alyssa: Overview - A Supervised Method for Lexical Annotation of Schema Labels based on Wikipedia</p>
<hr />
<div>'''A Supervised Method for Lexical Annotation of Schema Labels based on Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Serena Sorrentino]], [[Sonia Bergamaschi]] and [[Elena Parmiggiani]].<br />
<br />
== Overview ==<br />
Lexical annotation is the process of explicitly assigning one or more meanings to a term w.r.t. a sense inventory (e.g., a thesaurus or an ontology). The authors propose an automatic supervised lexical annotation method, called ALATK (Automatic Lexical Annotation - Topic Kernel), based on the Topic Kernel function, for the annotation of schema labels extracted from structured and semi-structured data sources. It exploits Wikipedia as a sense inventory and as a source of training data.</div>Alyssahttps://wikipediaquality.com/index.php?title=Population_Automation:_an_Interview_with_Wikipedia_Bot_Pioneer_Ram-Man&diff=22806Population Automation: an Interview with Wikipedia Bot Pioneer Ram-Man2019-12-13T08:03:01Z<p>Alyssa: + embed code</p>
<hr />
<div>{{Infobox work<br />
| title = Population Automation: an Interview with Wikipedia Bot Pioneer Ram-Man<br />
| date = 2016<br />
| authors = [[Randall M. Livingstone]]<br />
| doi = 10.5210/fm.v21i1.6027<br />
| link = http://firstmonday.org/ojs/index.php/fm/article/view/6027/5189<br />
}}<br />
'''Population Automation: an Interview with Wikipedia Bot Pioneer Ram-Man''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Randall M. Livingstone]].<br />
<br />
== Overview ==<br />
Software robots (“bots”) play a major role across the Internet today, including on [[Wikipedia]], the world’s largest online encyclopedia. Bots complete over 20 percent of all edits to the project, yet their work often goes unnoticed by other users. Their initial integration into Wikipedia was not uncontested and highlighted the opposing philosophies of “inclusionists” and “deletionists” who influenced the early years of the project. This paper presents an in-depth interview with Wikipedia user Ram-Man, an early bot operator on the site and creator of the rambot, the first mass-editing bot. Topics discussed include the social and technical climate of early Wikipedia, the creation of bot policies and bureaucracy, and the legacy of the rambot and Ram-Man’s work.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Livingstone, Randall M. (2016). "[[Population Automation: an Interview with Wikipedia Bot Pioneer Ram-Man]]". DOI: 10.5210/fm.v21i1.6027. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Livingstone |first1=Randall M. |title=Population Automation: an Interview with Wikipedia Bot Pioneer Ram-Man |date=2016 |doi=10.5210/fm.v21i1.6027 |url=https://wikipediaquality.com/wiki/Population_Automation:_an_Interview_with_Wikipedia_Bot_Pioneer_Ram-Man}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Livingstone, Randall M. (2016). &amp;quot;<a href="https://wikipediaquality.com/wiki/Population_Automation:_an_Interview_with_Wikipedia_Bot_Pioneer_Ram-Man">Population Automation: an Interview with Wikipedia Bot Pioneer Ram-Man</a>&amp;quot;. DOI: 10.5210/fm.v21i1.6027. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Language-Agnostic_Relation_Extraction_from_Wikipedia_Abstracts&diff=22805Language-Agnostic Relation Extraction from Wikipedia Abstracts2019-12-13T08:01:42Z<p>Alyssa: + Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Language-Agnostic Relation Extraction from Wikipedia Abstracts<br />
| date = 2017<br />
| authors = [[Nicolas Heist]]<br />[[Heiko Paulheim]]<br />
| doi = 10.1007/978-3-319-68288-4_23<br />
| link = https://link.springer.com/chapter/10.1007%2F978-3-319-68288-4_23<br />
}}<br />
'''Language-Agnostic Relation Extraction from Wikipedia Abstracts''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Nicolas Heist]] and [[Heiko Paulheim]].<br />
<br />
== Overview ==<br />
Large-scale knowledge graphs, such as [[DBpedia]], [[Wikidata]], or YAGO, can be enhanced by relation extraction from text, using the data in the knowledge graph as training data, i.e., distant supervision. While most existing approaches use language-specific methods (usually for English), the authors present a language-agnostic approach that exploits background knowledge from the graph instead of language-specific techniques and builds machine learning models only from language-independent [[features]]. They demonstrate the extraction of relations from [[Wikipedia]] abstracts, using the twelve largest language editions of Wikipedia. From those, they extract 1.6M new relations in DBpedia at a precision of 95%, using a RandomForest classifier trained only on language-independent features. Furthermore, the authors show an exemplary geographical breakdown of the information extracted.</div>Alyssahttps://wikipediaquality.com/index.php?title=Generating_Information-Rich_Taxonomy_from_Wikipedia&diff=22804Generating Information-Rich Taxonomy from Wikipedia2019-12-13T08:00:20Z<p>Alyssa: + embed code</p>
<hr />
<div>{{Infobox work<br />
| title = Generating Information-Rich Taxonomy from Wikipedia<br />
| date = 2010<br />
| authors = [[Ichiro Yamada]]<br />[[Chikara Hashimoto]]<br />[[Jong-Hoon Oh]]<br />[[Kentaro Torisawa]]<br />[[Kow Kuroda]]<br />[[Stijn De Saeger]]<br />[[Masaaki Tsuchida]]<br />[[Jun’ichi Kazama]]<br />
| doi = 10.1109/IUCS.2010.5666764<br />
| link = http://ieeexplore.ieee.org/xpl/abstractReferences.jsp?reload=true&amp;arnumber=5666764&amp;punumber%3D5654670<br />
}}<br />
'''Generating Information-Rich Taxonomy from Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Ichiro Yamada]], [[Chikara Hashimoto]], [[Jong-Hoon Oh]], [[Kentaro Torisawa]], [[Kow Kuroda]], [[Stijn De Saeger]], [[Masaaki Tsuchida]] and [[Jun’ichi Kazama]].<br />
<br />
== Overview ==<br />
Even though hyponymy relation acquisition has been extensively studied, “how informative such acquired hyponymy relations are” has not been sufficiently discussed. The authors found that the hypernyms in automatically acquired hyponymy relations were often too vague or ambiguous to specify the meaning of their hyponyms. For instance, the hypernym work is vague and ambiguous in the hyponymy relations work/Avatar and work/The Catcher in the Rye. In this paper, the authors propose a simple method of generating intermediate concepts of hyponymy relations that can make such (vague) hypernyms more specific. The method generates an information-rich hyponymy relation such as work / work by film director / work by James Cameron / Avatar from the less informative relation work/Avatar. Furthermore, the generated relation work by film director/Avatar can be paraphrased into a new relation movie/Avatar. Experiments showed that the method successfully acquired 2,719,441 enriched hyponymy relations with one intermediate concept at 0.853 precision and another 6,347,472 hyponymy relations at 0.786 precision.<br />
<br />
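As a toy illustration of the idea (not the authors' implementation; the attribute data below is hand-made rather than mined from text), the paper's example chain can be reproduced by composing intermediate concepts from an attribute's class and value:

```python
def enrich(hypernym: str, hyponym: str, attribute: str, attr_class: str) -> list:
    """Build the chain hypernym -> hypernym by <class> -> hypernym by <value> -> hyponym."""
    return [
        hypernym,
        "{} by {}".format(hypernym, attr_class),  # intermediate concept from the attribute's class
        "{} by {}".format(hypernym, attribute),   # intermediate concept from the attribute's value
        hyponym,
    ]

chain = enrich("work", "Avatar", "James Cameron", "film director")
print(" / ".join(chain))
```

Running this prints the enriched relation work / work by film director / work by James Cameron / Avatar from the paper's example; the actual method induces the attribute and its class automatically rather than taking them as input.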
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Yamada, Ichiro; Hashimoto, Chikara; Oh, Jong-Hoon; Torisawa, Kentaro; Kuroda, Kow; Saeger, Stijn De; Tsuchida, Masaaki; Kazama, Jun’ichi. (2010). "[[Generating Information-Rich Taxonomy from Wikipedia]]". DOI: 10.1109/IUCS.2010.5666764. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Yamada |first1=Ichiro |last2=Hashimoto |first2=Chikara |last3=Oh |first3=Jong-Hoon |last4=Torisawa |first4=Kentaro |last5=Kuroda |first5=Kow |last6=Saeger |first6=Stijn De |last7=Tsuchida |first7=Masaaki |last8=Kazama |first8=Jun’ichi |title=Generating Information-Rich Taxonomy from Wikipedia |date=2010 |doi=10.1109/IUCS.2010.5666764 |url=https://wikipediaquality.com/wiki/Generating_Information-Rich_Taxonomy_from_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Yamada, Ichiro; Hashimoto, Chikara; Oh, Jong-Hoon; Torisawa, Kentaro; Kuroda, Kow; Saeger, Stijn De; Tsuchida, Masaaki; Kazama, Jun’ichi. (2010). &amp;quot;<a href="https://wikipediaquality.com/wiki/Generating_Information-Rich_Taxonomy_from_Wikipedia">Generating Information-Rich Taxonomy from Wikipedia</a>&amp;quot;. DOI: 10.1109/IUCS.2010.5666764. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Peer_Governance_and_Wikipedia:_Identifying_and_Understanding_the_Problems_of_Wikipedia%E2%80%99s_Governance&diff=22803Peer Governance and Wikipedia: Identifying and Understanding the Problems of Wikipedia’s Governance2019-12-13T07:58:56Z<p>Alyssa: Wikilinks</p>
<hr />
<div>'''Peer Governance and Wikipedia: Identifying and Understanding the Problems of Wikipedia’s Governance''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Vasilis Kostakis]].<br />
<br />
== Overview ==<br />
Wikipedia has been hailed as one of the most prominent peer projects that led to the rise of the concept of peer governance. However, criticism has been levelled against [[Wikipedia]]'s mode of governance. This paper, using the Wikipedia case as a point of departure and building upon the conflict between inclusionists and deletionists, tries to identify and draw some conclusions on the problematic issue of peer governance.</div>Alyssahttps://wikipediaquality.com/index.php?title=Text_Summarization_Using_Wikipedia&diff=22802Text Summarization Using Wikipedia2019-12-13T07:56:32Z<p>Alyssa: + wikilinks</p>
<hr />
<div>'''Text Summarization Using Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Yogesh Sankarasubramaniam]], [[Krishnan Ramanathan]] and [[Subhankar Ghosh]].<br />
<br />
== Overview ==<br />
Abstract: Automatic text summarization has been an active field of research for many years. Several approaches have been proposed, ranging from simple position and word-frequency methods to learning and graph-based algorithms. The advent of human-generated knowledge bases like [[Wikipedia]] offers a further possibility in text summarization – they can be used to understand the input text in terms of salient concepts from the knowledge base. In this paper, the authors study a novel approach that leverages Wikipedia in conjunction with graph-based ranking. The approach is to first construct a bipartite sentence–concept graph, and then rank the input sentences using iterative updates on this graph. The authors consider several models for the bipartite graph, and derive convergence properties under each model. They then take up personalized and query-focused summarization, where the sentence ranks additionally depend on user interests and queries, respectively. Finally, they present a Wikipedia-based multi-document summarization algorithm. An important feature of the proposed algorithms is that they enable real-time incremental summarization – users can first view an initial summary, and then request additional content if interested. The authors evaluate the performance of the proposed summarizer using the ROUGE metric, and the results show that leveraging Wikipedia can significantly improve summary quality. They also present results from a user study, which suggests that incremental summarization can help in better understanding news articles.</div>Alyssahttps://wikipediaquality.com/index.php?title=Multilingual_Wikipedia:_Editors_of_Primary_Language_Contribute_to_More_Complex_Articles&diff=22801Multilingual Wikipedia: Editors of Primary Language Contribute to More Complex Articles2019-12-13T07:54:45Z<p>Alyssa: + Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Multilingual Wikipedia: Editors of Primary Language Contribute to More Complex Articles<br />
| date = 2015<br />
| authors = [[Sungjoon Park]]<br />[[Suin Kim]]<br />[[Scott A. Hale]]<br />[[Soo-Young Kim]]<br />[[Jeongmin Byun]]<br />[[Alice H. Oh]]<br />
| link = http://uilab.kaist.ac.kr/research/ICWSM15/multilingual_wikipedia.pdf<br />
}}<br />
'''Multilingual Wikipedia: Editors of Primary Language Contribute to More Complex Articles''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Sungjoon Park]], [[Suin Kim]], [[Scott A. Hale]], [[Soo-Young Kim]], [[Jeongmin Byun]] and [[Alice H. Oh]].<br />
<br />
== Overview ==<br />
For many people who speak more than one language, their language proficiency varies across those languages. One can conjecture that people who use one language (their primary language) more than another would show higher language proficiency in that primary language. It is, however, difficult to observe and quantify this, because natural language use is difficult to collect in large amounts. The authors identify [[Wikipedia]] as a great resource for studying [[multilingual]]ism, and conduct a quantitative analysis of the language complexity of primary and non-primary users of English, German, and Spanish. The preliminary results indicate that there are indeed consistent differences in language complexity in the Wikipedia articles chosen by primary and non-primary users, as well as differences in the edits by the two groups of users.</div>Alyssahttps://wikipediaquality.com/index.php?title=A_Wikipedia_based_Hybrid_Ranking_Method_for_Taxonomic_Relation_Extraction&diff=22800A Wikipedia based Hybrid Ranking Method for Taxonomic Relation Extraction2019-12-13T07:51:58Z<p>Alyssa: Embed for English Wikipedia, HTML</p>
<hr />
<div>{{Infobox work<br />
| title = A Wikipedia based Hybrid Ranking Method for Taxonomic Relation Extraction<br />
| date = 2013<br />
| authors = [[Xiaoshi Zhong]]<br />
| doi = 10.1007/978-3-642-45068-6_29<br />
| link = https://link.springer.com/chapter/10.1007/978-3-642-45068-6_29<br />
}}<br />
'''A Wikipedia based Hybrid Ranking Method for Taxonomic Relation Extraction''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Xiaoshi Zhong]].<br />
<br />
== Overview ==<br />
This paper proposes a hybrid ranking method for taxonomic relation extraction, i.e., selecting the best position for a term candidate in an existing taxonomy. The method effectively combines two resources, an existing taxonomy and [[Wikipedia]], in order to select the most appropriate position for a term candidate in the existing taxonomy. Previous methods mainly focus on complex inference to select the best position among all possible positions in the taxonomy. In contrast, the proposed algorithm, simple but effective, leverages two kinds of information, the expression and the ranking information of a term candidate, to select the best position for the term candidate (the hypernym of the term candidate in the existing taxonomy). Authors evaluate the approach on the agricultural domain, and the experimental results indicate that performance is significantly improved.<br />
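The overview above describes ranking candidate positions (hypernyms) by combining evidence from the existing taxonomy with evidence from Wikipedia. A minimal sketch of such a hybrid ranker, where the scoring functions, the weight `alpha`, and the toy data are illustrative assumptions rather than details from the paper:

```python
def hybrid_rank(term, candidate_hypernyms, taxonomy_score, wikipedia_score, alpha=0.5):
    """Rank candidate hypernyms for `term` by a weighted combination of
    taxonomy-based and Wikipedia-based evidence.

    `taxonomy_score` and `wikipedia_score` are caller-supplied functions
    mapping (term, hypernym) -> float in [0, 1]; `alpha` balances them.
    """
    scored = [
        (h, alpha * taxonomy_score(term, h) + (1 - alpha) * wikipedia_score(term, h))
        for h in candidate_hypernyms
    ]
    # The best position is the hypernym with the highest combined score.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy example with hand-written scorers (purely illustrative):
tax = {("apple", "fruit"): 0.9, ("apple", "company"): 0.2}
wiki = {("apple", "fruit"): 0.8, ("apple", "company"): 0.6}
ranking = hybrid_rank(
    "apple", ["fruit", "company"],
    taxonomy_score=lambda t, h: tax[(t, h)],
    wikipedia_score=lambda t, h: wiki[(t, h)],
)
# ranking[0] is ("fruit", 0.85): both resources agree on this position.
```

Equal weighting is just one choice; the point is that two independent sources of ranking evidence are fused before committing to a position.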
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Zhong, Xiaoshi. (2013). "[[A Wikipedia based Hybrid Ranking Method for Taxonomic Relation Extraction]]". Springer, Berlin, Heidelberg. DOI: 10.1007/978-3-642-45068-6_29. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Zhong |first1=Xiaoshi |title=A Wikipedia based Hybrid Ranking Method for Taxonomic Relation Extraction |date=2013 |doi=10.1007/978-3-642-45068-6_29 |url=https://wikipediaquality.com/wiki/A_Wikipedia_based_Hybrid_Ranking_Method_for_Taxonomic_Relation_Extraction |journal=Springer, Berlin, Heidelberg}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Zhong, Xiaoshi. (2013). &amp;quot;<a href="https://wikipediaquality.com/wiki/A_Wikipedia_based_Hybrid_Ranking_Method_for_Taxonomic_Relation_Extraction">A Wikipedia based Hybrid Ranking Method for Taxonomic Relation Extraction</a>&amp;quot;. Springer, Berlin, Heidelberg. DOI: 10.1007/978-3-642-45068-6_29. <br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Interactions_and_Influence_of_World_Painters_from_the_Reduced_Google_Matrix_of_Wikipedia_Networks&diff=22799Interactions and Influence of World Painters from the Reduced Google Matrix of Wikipedia Networks2019-12-13T07:49:54Z<p>Alyssa: Interactions and Influence of World Painters from the Reduced Google Matrix of Wikipedia Networks -- new article</p>
<hr />
<div>'''Interactions and Influence of World Painters from the Reduced Google Matrix of Wikipedia Networks''' - scientific work related to Wikipedia quality published in 2018, written by Samer El Zant, Katia Jaffrès-Runser, Klaus M. Frahm and Dima L. Shepelyansky.<br />
<br />
== Overview ==<br />
This paper concentrates on extracting painting art history knowledge from the network structure of Wikipedia. Therefore, authors construct theoretical networks of webpages representing the hyper-linked structure of articles of seven Wikipedia language editions. These seven networks are analyzed to extract the most influential painters in each edition using Google matrix theory. The importance of the webpages of over 3000 painters is measured using the PageRank algorithm. The most influential painters are enlisted and their ties are studied with the reduced Google matrix analysis. The reduced Google matrix is a powerful method that captures both direct and hidden interactions between a subset of selected nodes, taking into account the indirect links between these nodes via the remaining part of the large global network. This method originates from the scattering theory of nuclear and mesoscopic physics and the field of quantum chaos. In this paper, authors show that meaningful information on the ties between these painters can be extracted from the components of the reduced Google matrix. For instance, the analysis groups together painters that belong to the same painting movement and shows meaningful ties between painters of different movements. Authors also determine the influence of painters on world countries using link sensitivity between Wikipedia articles of painters and countries. The reduced Google matrix approach allows one to obtain a balanced view of the various cultural opinions of the Wikipedia language editions. The world countries with the largest number of top painters across the selected seven Wikipedia editions are found to be Italy, France, and Russia. Authors argue that this approach gives meaningful information about art and that it could be a part of extensive network analysis on human knowledge and cultures.</div>Alyssahttps://wikipediaquality.com/index.php?title=Automatic_Extraction_of_Semantic_Relations_from_Wikipedia&diff=22798Automatic Extraction of Semantic Relations from Wikipedia2019-12-13T07:48:46Z<p>Alyssa: Automatic Extraction of Semantic Relations from Wikipedia - basic info</p>
<hr />
<div>'''Automatic Extraction of Semantic Relations from Wikipedia''' - scientific work related to Wikipedia quality published in 2015, written by Patrick Arnold and Erhard Rahm.<br />
<br />
== Overview ==<br />
Authors introduce a novel approach to extract semantic relations (e.g., is-a and part-of relations) from Wikipedia articles. These relations are used to build up a large and up-to-date thesaurus providing background knowledge for tasks such as determining semantic ontology mappings. Authors' automatic approach uses a comprehensive set of semantic patterns, finite state machines and NLP techniques to extract millions of relations between concepts. An evaluation for different domains shows the high quality and effectiveness of the proposed approach. Authors also illustrate the value of the newly found relations for improving existing ontology mappings.</div>Alyssahttps://wikipediaquality.com/index.php?title=Using_Wikipedia_to_Translate_Oov_Terms_on_Mlir&diff=22797Using Wikipedia to Translate Oov Terms on Mlir2019-12-13T07:46:41Z<p>Alyssa: + categories</p>
<hr />
<div>{{Infobox work<br />
| title = Using Wikipedia to Translate Oov Terms on Mlir<br />
| date = 2007<br />
| authors = [[Chen-Yu Su]]<br />[[Tien-Chien Lin]]<br />[[Shih-Hung Wu]]<br />[[Taichung County]]<br />
| link = http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings6/NTCIR/26.pdf<br />
}}<br />
'''Using Wikipedia to Translate Oov Terms on Mlir''' - scientific work related to [[Wikipedia quality]] published in 2007, written by [[Chen-Yu Su]], [[Tien-Chien Lin]], [[Shih-Hung Wu]] and [[Taichung County]].<br />
<br />
== Overview ==<br />
Authors deal with Chinese, Japanese and Korean [[multilingual]] [[information retrieval]] (MLIR) in NTCIR-6, and submit results on the C-CJK-T and C-CJK-D subtasks. In these runs, authors adopt a dictionary-based approach to translate query terms. In addition to a traditional dictionary, authors incorporate [[Wikipedia]] as a live dictionary.<br />
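Using Wikipedia as a "live dictionary" typically means resolving an out-of-vocabulary (OOV) term to its article and following the article's interlanguage link to obtain a target-language title. A minimal sketch of that lookup, assuming the interlanguage links have already been collected into a mapping (the sample entries are hypothetical, not from the paper):

```python
def translate_oov(term, langlinks, fallback_dictionary):
    """Translate an out-of-vocabulary query term using Wikipedia as a
    live dictionary: if the term has an article whose interlanguage
    links include the target language, use that title as the translation;
    otherwise fall back to a conventional bilingual dictionary.

    `langlinks` maps a source-language article title to its
    target-language title (as given by Wikipedia interlanguage links).
    """
    if term in langlinks:
        return langlinks[term]
    # May return None when the term is in neither resource.
    return fallback_dictionary.get(term)

# Illustrative data (hypothetical entries):
links = {"維基百科": "Wikipedia"}   # zh -> en via an interlanguage link
dictionary = {"蘋果": "apple"}      # conventional bilingual dictionary
translation = translate_oov("維基百科", links, dictionary)  # "Wikipedia"
```

The appeal of this scheme for MLIR is that named entities and neologisms, which are exactly the terms missing from traditional dictionaries, are well covered by Wikipedia titles.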
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Su, Chen-Yu; Lin, Tien-Chien; Wu, Shih-Hung; County, Taichung. (2007). "[[Using Wikipedia to Translate Oov Terms on Mlir]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Su |first1=Chen-Yu |last2=Lin |first2=Tien-Chien |last3=Wu |first3=Shih-Hung |last4=County |first4=Taichung |title=Using Wikipedia to Translate Oov Terms on Mlir |date=2007 |url=https://wikipediaquality.com/wiki/Using_Wikipedia_to_Translate_Oov_Terms_on_Mlir}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Su, Chen-Yu; Lin, Tien-Chien; Wu, Shih-Hung; County, Taichung. (2007). &amp;quot;<a href="https://wikipediaquality.com/wiki/Using_Wikipedia_to_Translate_Oov_Terms_on_Mlir">Using Wikipedia to Translate Oov Terms on Mlir</a>&amp;quot;.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:Japanese Wikipedia]]<br />
[[Category:Chinese Wikipedia]]<br />
[[Category:Korean Wikipedia]]</div>Alyssahttps://wikipediaquality.com/index.php?title=Lensingwikipedia:_Parsing_Text_for_the_Interactive_Visualization_of_Human_History&diff=22796Lensingwikipedia: Parsing Text for the Interactive Visualization of Human History2019-12-13T07:43:43Z<p>Alyssa: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Lensingwikipedia: Parsing Text for the Interactive Visualization of Human History<br />
| date = 2012<br />
| authors = [[Ravikiran Vadlapudi]]<br />[[Maryam Siahbani]]<br />[[Anoop Sarkar]]<br />[[John Dill]]<br />
| doi = 10.1109/VAST.2012.6400530<br />
| link = https://dl.acm.org/citation.cfm?id=2478300<br />
}}<br />
'''Lensingwikipedia: Parsing Text for the Interactive Visualization of Human History''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Ravikiran Vadlapudi]], [[Maryam Siahbani]], [[Anoop Sarkar]] and [[John Dill]].<br />
<br />
== Overview ==<br />
Extracting information from text is challenging. Most current practices treat text as a bag of words or word clusters, ignoring valuable linguistic information. Leveraging this linguistic information, authors propose a novel approach to visualize textual information. The novelty lies in using state-of-the-art [[Natural Language Processing]] (NLP) tools to automatically annotate text, which provides a basis for new and powerful interactive visualizations. Using NLP tools, authors built a web-based interactive visual browser for human history articles from [[Wikipedia]].<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Vadlapudi, Ravikiran; Siahbani, Maryam; Sarkar, Anoop; Dill, John. (2012). "[[Lensingwikipedia: Parsing Text for the Interactive Visualization of Human History]]". DOI: 10.1109/VAST.2012.6400530.<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Vadlapudi |first1=Ravikiran |last2=Siahbani |first2=Maryam |last3=Sarkar |first3=Anoop |last4=Dill |first4=John |title=Lensingwikipedia: Parsing Text for the Interactive Visualization of Human History |date=2012 |doi=10.1109/VAST.2012.6400530 |url=https://wikipediaquality.com/wiki/Lensingwikipedia:_Parsing_Text_for_the_Interactive_Visualization_of_Human_History}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Vadlapudi, Ravikiran; Siahbani, Maryam; Sarkar, Anoop; Dill, John. (2012). &amp;quot;<a href="https://wikipediaquality.com/wiki/Lensingwikipedia:_Parsing_Text_for_the_Interactive_Visualization_of_Human_History">Lensingwikipedia: Parsing Text for the Interactive Visualization of Human History</a>&amp;quot;. DOI: 10.1109/VAST.2012.6400530.<br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=Will_They_Stay_or_Will_They_Go%3F_How_Network_Properties_of_Webics_Predict_Dropout_Rates_of_Valuable_Wikipedians&diff=22795Will They Stay or Will They Go? How Network Properties of Webics Predict Dropout Rates of Valuable Wikipedians2019-12-13T07:41:02Z<p>Alyssa: Links</p>
<hr />
<div>'''Will They Stay or Will They Go? How Network Properties of Webics Predict Dropout Rates of Valuable Wikipedians''' - scientific work related to [[Wikipedia quality]] published in 2011, written by [[Jürgen Lerner]], [[Patrick Kenis]], [[Denise van Raaij]] and [[Ulrik Brandes]].<br />
<br />
== Overview ==<br />
This paper contributes to the understanding of an increasingly prevalent work system, web-based internet communities (WebICs). Authors are particularly interested in how WebICs are governed, given how different they are from more classical forms of organization. Authors study the governance of a WebIC by studying the structure and dynamics of its edit network. Given that the edit network is a relational structure, [[social network]] analysis is key to understanding these work systems. Authors demonstrate that characteristics of the edit network contribute to predicting the dropout hazard of valuable WebIC members. Since WebICs exist only thanks to the activity of their contributors, predicting drop-outs becomes crucial. The results show that [[reputation]] and controversy have different effects for different types of [[Wikipedia]]ns; i.e., an actor’s reputation decreases the dropout hazard of active [[Wikipedians]], while participation on controversial pages decreases the dropout hazard of highly active Wikipedians.</div>Alyssahttps://wikipediaquality.com/index.php?title=Hacking_Wikipedia_for_Hyponymy_Relation_Acquisition&diff=22794Hacking Wikipedia for Hyponymy Relation Acquisition2019-12-13T07:38:06Z<p>Alyssa: Embed for English Wikipedia, HTML</p>
<hr />
<div>{{Infobox work<br />
| title = Hacking Wikipedia for Hyponymy Relation Acquisition<br />
| date = 2008<br />
| authors = [[Asuka Sumida]]<br />[[Kentaro Torisawa]]<br />
| link = http://www.aclweb.org/anthology/I/I08/I08-2126.pdf<br />
}}<br />
'''Hacking Wikipedia for Hyponymy Relation Acquisition''' - scientific work related to [[Wikipedia quality]] published in 2008, written by [[Asuka Sumida]] and [[Kentaro Torisawa]].<br />
<br />
== Overview ==<br />
This paper describes a method for extracting a large set of hyponymy relations from [[Wikipedia]]. Wikipedia is much more consistently structured than generic HTML documents, and authors can extract a large number of hyponymy relations with simple methods. In this work, authors managed to extract more than 1.4 × 10<sup>6</sup> hyponymy relations with 75.3% precision from the Japanese version of Wikipedia. To the best of the authors' knowledge, this is the largest machine-readable thesaurus for Japanese. The main contribution of this paper is a method for hyponymy acquisition from hierarchical layouts in Wikipedia. By using a machine learning technique and pattern matching, authors were able to extract more than 6.3 × 10<sup>5</sup> relations from hierarchical layouts in the Japanese Wikipedia, and their precision was 76.4%. The remaining hyponymy relations were acquired by existing methods for extracting relations from definition sentences and category pages. This means that extraction from the hierarchical layouts almost doubled the number of relations extracted.<br />
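The core idea of acquisition from hierarchical layouts is that a section heading or a shallower list item is a hypernym candidate for the items nested under it. A crude pattern-matching sketch of that idea for wiki-style bullet layouts; it stands in for the paper's learned extractor, and the sample text is invented for illustration:

```python
import re

def extract_hyponymy_from_layout(wikitext):
    """Extract candidate (hypernym, hyponym) pairs from a hierarchical
    wiki layout: each list item ('*', '**', ...) is paired with the
    nearest shallower item or section heading above it.
    """
    pairs = []
    stack = []  # (depth, phrase); depth 0 is a section heading
    for line in wikitext.splitlines():
        heading = re.match(r"==+\s*(.*?)\s*==+$", line)
        bullet = re.match(r"(\*+)\s*(.+)$", line)
        if heading:
            stack = [(0, heading.group(1))]
        elif bullet and stack:
            depth = len(bullet.group(1))
            phrase = bullet.group(2)
            # Pop items at the same or deeper level, then attach upward.
            while stack and stack[-1][0] >= depth:
                stack.pop()
            if stack:
                pairs.append((stack[-1][1], phrase))
            stack.append((depth, phrase))
    return pairs

text = """== Fruits ==
* Apple
** Fuji
* Banana"""
pairs = extract_hyponymy_from_layout(text)
# → [("Fruits", "Apple"), ("Apple", "Fuji"), ("Fruits", "Banana")]
```

The paper then filters such raw candidates with a classifier, which is what lifts precision to the reported 76.4%; this sketch produces only the unfiltered candidates.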
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Sumida, Asuka; Torisawa, Kentaro. (2008). "[[Hacking Wikipedia for Hyponymy Relation Acquisition]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Sumida |first1=Asuka |last2=Torisawa |first2=Kentaro |title=Hacking Wikipedia for Hyponymy Relation Acquisition |date=2008 |url=https://wikipediaquality.com/wiki/Hacking_Wikipedia_for_Hyponymy_Relation_Acquisition}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Sumida, Asuka; Torisawa, Kentaro. (2008). &amp;quot;<a href="https://wikipediaquality.com/wiki/Hacking_Wikipedia_for_Hyponymy_Relation_Acquisition">Hacking Wikipedia for Hyponymy Relation Acquisition</a>&amp;quot;.<br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=An_Entity_Disambiguation_Approach_based_on_Wikipedia_for_Entity_Linking_in_Microblogs&diff=22793An Entity Disambiguation Approach based on Wikipedia for Entity Linking in Microblogs2019-12-13T07:36:23Z<p>Alyssa: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = An Entity Disambiguation Approach based on Wikipedia for Entity Linking in Microblogs<br />
| date = 2017<br />
| authors = [[Tomoaki Urata]]<br />[[Akira Maeda]]<br />
| doi = 10.1109/IIAI-AAI.2017.171<br />
| link = http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=8113266<br />
}}<br />
'''An Entity Disambiguation Approach based on Wikipedia for Entity Linking in Microblogs''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Tomoaki Urata]] and [[Akira Maeda]].<br />
<br />
== Overview ==<br />
Opportunities to obtain information from articles and microblogs on the Web are steadily increasing. However, hyperlinks to entities often do not exist in such articles, and looking entities up online is a troublesome task for the reader. In this paper, in order to make it easy to look up entity information in microblog articles, authors propose a method to extract entities in Japanese microblogs and to perform entity linking, which links them to entity information automatically. The method consists of three phases. First, authors extract [[named entities]], such as personal names, place names, organization names, etc., from a microblog article. Next, authors disambiguate the extracted entities in order to make links to the correct entity information. Authors use [[Wikipedia]] as the source of entity information to verify the usefulness of the proposed method. In the method, authors extract Wikipedia articles related to ambiguous entities from microblog articles. Then, authors extract entities related to the ambiguous entity using word2vec. Authors compare the Wikipedia articles of the related entities with the Wikipedia articles of the ambiguous entities. Finally, authors obtain the correct Wikipedia article for each entity in the microblog article.<br />
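The disambiguation step boils down to comparing the microblog context against each candidate Wikipedia article and keeping the closest match. A minimal sketch of that comparison using cosine similarity over embedding vectors; the toy vectors and candidate titles are hypothetical stand-ins for the word2vec-derived representations in the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def disambiguate(context_vec, candidate_article_vecs):
    """Pick the Wikipedia article whose embedding is closest to the
    microblog context vector."""
    return max(
        candidate_article_vecs,
        key=lambda title: cosine(context_vec, candidate_article_vecs[title]),
    )

# Toy vectors (hypothetical): the context is "about baseball".
context = [0.9, 0.1]
candidates = {
    "Ichiro (baseball player)": [0.8, 0.2],
    "Ichiro (politician)":      [0.1, 0.9],
}
best = disambiguate(context, candidates)  # "Ichiro (baseball player)"
```

In the paper the context vector would come from entities related to the ambiguous mention (found via word2vec), but the selection rule, maximum similarity over candidates, is the same.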
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Urata, Tomoaki; Maeda, Akira. (2017). "[[An Entity Disambiguation Approach based on Wikipedia for Entity Linking in Microblogs]]". DOI: 10.1109/IIAI-AAI.2017.171.<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Urata |first1=Tomoaki |last2=Maeda |first2=Akira |title=An Entity Disambiguation Approach based on Wikipedia for Entity Linking in Microblogs |date=2017 |doi=10.1109/IIAI-AAI.2017.171 |url=https://wikipediaquality.com/wiki/An_Entity_Disambiguation_Approach_based_on_Wikipedia_for_Entity_Linking_in_Microblogs}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Urata, Tomoaki; Maeda, Akira. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/An_Entity_Disambiguation_Approach_based_on_Wikipedia_for_Entity_Linking_in_Microblogs">An Entity Disambiguation Approach based on Wikipedia for Entity Linking in Microblogs</a>&amp;quot;. DOI: 10.1109/IIAI-AAI.2017.171.<br />
</nowiki><br />
</code></div>Alyssahttps://wikipediaquality.com/index.php?title=The_Role_of_Conflict_in_Determining_Consensus_on_Quality_in_Wikipedia_Articles&diff=22792The Role of Conflict in Determining Consensus on Quality in Wikipedia Articles2019-12-13T07:35:16Z<p>Alyssa: infobox</p>
<hr />
<div>{{Infobox work<br />
| title = The Role of Conflict in Determining Consensus on Quality in Wikipedia Articles<br />
| date = 2013<br />
| authors = [[Kim Osman]]<br />
| doi = 10.1145/2491055.2491067<br />
| link = http://dl.acm.org/citation.cfm?doid=2491055.2491067<br />
}}<br />
'''The Role of Conflict in Determining Consensus on Quality in Wikipedia Articles''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Kim Osman]].<br />
<br />
== Overview ==<br />
This paper presents research that investigated the role of conflict in the editorial process of the online encyclopedia, [[Wikipedia]]. The study used a grounded approach to analyzing 147 conversations about quality from the archived history of the Wikipedia article ''Australia''. It found that conflict in Wikipedia is a generative friction, regulated by references to policy as part of a coordinated effort within the community to improve the quality of articles.</div>Alyssa