https://wikipediaquality.com/api.php?action=feedcontributions&user=Aaliyah&feedformat=atomWikipedia Quality - User contributions [en]2024-03-29T14:08:46ZUser contributionsMediaWiki 1.30.0https://wikipediaquality.com/index.php?title=It%27s_a_Man%27s_Wikipedia%3F_Assessing_Gender_Inequality_in_an_Online_Encyclopedia&diff=25184It's a Man's Wikipedia? Assessing Gender Inequality in an Online Encyclopedia2020-08-14T06:51:36Z<p>Aaliyah: + infobox</p>
<hr />
<div>{{Infobox work<br />
| title = It's a Man's Wikipedia? Assessing Gender Inequality in an Online Encyclopedia<br />
| date = 2015<br />
| authors = [[Claudia Wagner]]<br />[[David Garcia]]<br />[[Mohsen Jadidi]]<br />[[Markus Strohmaier]]<br />
| link = http://www.mitpressjournals.org/doi/abs/10.1162/inov_a_00224?ai=tb&amp;af=R<br />
| plink = https://arxiv.org/pdf/1501.06307v2<br />
}}<br />
'''It's a Man's Wikipedia? Assessing Gender Inequality in an Online Encyclopedia''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Claudia Wagner]], [[David Garcia]], [[Mohsen Jadidi]] and [[Markus Strohmaier]].<br />
<br />
== Overview ==<br />
Wikipedia is a community-created encyclopedia that contains information about notable people from different countries, epochs and disciplines, and aims to document the world’s knowledge from a [[neutral point of view]]. However, the narrow diversity of the [[Wikipedia]] editor community has the potential to introduce systemic biases, such as gender bias, into the content of Wikipedia. In this paper, the authors tackle a subproblem of this larger challenge by presenting and applying a computational method for assessing gender bias on Wikipedia along multiple dimensions. The authors find that while women are covered and featured well in many Wikipedia language editions, the way women are portrayed differs starkly from the way men are portrayed. The authors hope this work contributes to increasing awareness of gender biases online, and in particular to drawing attention to the different levels at which gender biases can manifest themselves on the web.</div>
<hr />
<div>{{Infobox work<br />
| title = Open Domain Question Answering Using Wikipedia-Based Knowledge Model<br />
| date = 2014<br />
| authors = [[Pum-Mo Ryu]]<br />[[Myung-Gil Jang]]<br />[[Hyunki Kim]]<br />
| doi = 10.1016/j.ipm.2014.04.007<br />
| link = http://www.sciencedirect.com/science/article/pii/S0306457314000351<br />
}}<br />
'''Open Domain Question Answering Using Wikipedia-Based Knowledge Model''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Pum-Mo Ryu]], [[Myung-Gil Jang]] and [[Hyunki Kim]].<br />
<br />
== Overview ==<br />
This paper describes the use of [[Wikipedia]] as a rich knowledge source for a [[question answering]] (QA) system. The authors propose multiple answer-matching modules based on different types of semi-structured knowledge sources in Wikipedia, including article content, [[infoboxes]], article structure, [[category structure]], and definitions. Each of these semi-structured knowledge sources has its own strengths in finding answers for specific question types, such as infoboxes for factoid questions, category structure for list questions, and definitions for descriptive questions. The answers extracted by the different modules are merged using an answer-merging strategy that reflects the specialized nature of the answer-matching modules. In experiments, the system showed promising results, with a precision of 87.1%, a recall of 52.7%, and an F-measure of 65.6%, all of which are much higher than the results of a simple text-analysis-based system.<br />
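As a sanity check on the reported numbers, the F-measure is the harmonic mean of precision and recall. A minimal sketch (the formula is the standard F1 definition, not quoted from the paper):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

p, r = 0.871, 0.527  # precision and recall reported in the paper
f1 = f_measure(p, r)
print(round(f1, 3))  # 0.657, consistent with the reported 65.6% up to rounding
```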
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Ryu, Pum-Mo; Jang, Myung-Gil; Kim, Hyunki. (2014). "[[Open Domain Question Answering Using Wikipedia-Based Knowledge Model]]". Pergamon Press, Inc. DOI: 10.1016/j.ipm.2014.04.007. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Ryu |first1=Pum-Mo |last2=Jang |first2=Myung-Gil |last3=Kim |first3=Hyunki |title=Open Domain Question Answering Using Wikipedia-Based Knowledge Model |date=2014 |doi=10.1016/j.ipm.2014.04.007 |url=https://wikipediaquality.com/wiki/Open_Domain_Question_Answering_Using_Wikipedia-Based_Knowledge_Model |journal=Pergamon Press, Inc.}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Ryu, Pum-Mo; Jang, Myung-Gil; Kim, Hyunki. (2014). &amp;quot;<a href="https://wikipediaquality.com/wiki/Open_Domain_Question_Answering_Using_Wikipedia-Based_Knowledge_Model">Open Domain Question Answering Using Wikipedia-Based Knowledge Model</a>&amp;quot;. Pergamon Press, Inc. DOI: 10.1016/j.ipm.2014.04.007. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Gender Gap Through Time and Space: a Journey Through Wikipedia Biographies via the Wikidata Human Gender Indicator:<br />
| date = 2018<br />
| authors = [[Piotr Konieczny]]<br />[[Maximilian Klein]]<br />
| doi = 10.1177/1461444818779080<br />
| link = http://journals.sagepub.com/doi/full/10.1177/1461444818779080<br />
}}<br />
'''Gender Gap Through Time and Space: a Journey Through Wikipedia Biographies via the Wikidata Human Gender Indicator:''' - scientific work related to [[Wikipedia quality]] published in 2018, written by [[Piotr Konieczny]] and [[Maximilian Klein]].<br />
<br />
== Overview ==<br />
In this study, the authors investigate how quantification of [[Wikipedia]] biographies can shed light on worldwide longitudinal gender-inequality trends, a macro-level dimension of human development. The authors present the [[Wikidata]] Human Gender Indicator (WHGI), situated within a set of [[indicators]] allowing comparative study of gender inequality through space and time, the Wikipedia Gender Indicators (WIGI), based on metadata available through the Wikidata database. The research confirms that gender inequality is a phenomenon with a long history, but one whose patterns can be analyzed and quantified on a larger scale than previously thought possible. Through the use of Inglehart–Welzel cultural clusters, the authors show that gender inequality can be analyzed with regard to the world’s cultures. They also show a steadily improving trend in the coverage of women and other genders in reference works.<br />
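A toy illustration of the kind of indicator described above: the share of female biographies per birth decade, computed from gender metadata. The record format and sample values are hypothetical, not the WHGI's actual definition:

```python
from collections import defaultdict

def gender_indicator(biographies):
    """Share of female biographies per birth decade.

    `biographies` is an iterable of (birth_year, gender) pairs, a toy
    stand-in for metadata that would be queried from Wikidata.
    """
    counts = defaultdict(lambda: [0, 0])  # decade -> [female, total]
    for year, gender in biographies:
        decade = (year // 10) * 10
        counts[decade][1] += 1
        if gender == "female":
            counts[decade][0] += 1
    return {d: f / t for d, (f, t) in sorted(counts.items())}

sample = [(1901, "male"), (1905, "female"), (1952, "female"), (1958, "female")]
print(gender_indicator(sample))  # {1900: 0.5, 1950: 1.0}
```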
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Konieczny, Piotr; Klein, Maximilian. (2018). "[[Gender Gap Through Time and Space: a Journey Through Wikipedia Biographies via the Wikidata Human Gender Indicator:]]". SAGE Publications Sage UK: London, England. DOI: 10.1177/1461444818779080. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Konieczny |first1=Piotr |last2=Klein |first2=Maximilian |title=Gender Gap Through Time and Space: a Journey Through Wikipedia Biographies via the Wikidata Human Gender Indicator: |date=2018 |doi=10.1177/1461444818779080 |url=https://wikipediaquality.com/wiki/Gender_Gap_Through_Time_and_Space:_a_Journey_Through_Wikipedia_Biographies_via_the_Wikidata_Human_Gender_Indicator: |journal=SAGE Publications Sage UK: London, England}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Konieczny, Piotr; Klein, Maximilian. (2018). &amp;quot;<a href="https://wikipediaquality.com/wiki/Gender_Gap_Through_Time_and_Space:_a_Journey_Through_Wikipedia_Biographies_via_the_Wikidata_Human_Gender_Indicator:">Gender Gap Through Time and Space: a Journey Through Wikipedia Biographies via the Wikidata Human Gender Indicator:</a>&amp;quot;. SAGE Publications Sage UK: London, England. DOI: 10.1177/1461444818779080. <br />
</nowiki><br />
</code></div>
<hr />
<div>{{Infobox work<br />
| title = Tell Me More Using Ladders in Wikipedia<br />
| date = 2017<br />
| authors = [[Siarhei Bykau]]<br />[[Jihwan Lee]]<br />[[Divesh Srivastava]]<br />[[Yannis Velegrakis]]<br />
| doi = 10.1145/3068839.3068847<br />
| link = https://dl.acm.org/citation.cfm?id=3068847<br />
}}<br />
'''Tell Me More Using Ladders in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Siarhei Bykau]], [[Jihwan Lee]], [[Divesh Srivastava]] and [[Yannis Velegrakis]].<br />
<br />
== Overview ==<br />
The authors focus on the problem of providing "tell me more" information related to a given fact in [[Wikipedia]]. They use the novel notion of a role to link information in an infobox with different places in the text of the same Wikipedia page (space), as well as with information across different revisions of the page (time). In this way, it is possible to link together pieces of information that may not represent the same real-world entity, yet have served in the same role. To achieve this, the authors introduce a novel structure called a ladder that allows such spatial and temporal linking, and they show how to effectively and efficiently construct such structures from Wikipedia data.<br />
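The role idea can be illustrated with a small sketch that tracks the successive values one infobox field takes across revisions. The revision format and field name are hypothetical, and this captures only the intuition behind a ladder, not the authors' construction algorithm:

```python
def role_timeline(revisions, role):
    """Collect the successive values an infobox role takes across revisions.

    `revisions` is a list of (timestamp, infobox_dict) pairs; consecutive
    distinct values of `role` are linked into one timeline, the intuition
    behind a "ladder".
    """
    timeline = []
    for ts, infobox in revisions:
        value = infobox.get(role)
        if value is not None and (not timeline or timeline[-1][1] != value):
            timeline.append((ts, value))
    return timeline

revs = [
    (1, {"manager": "A. Smith"}),
    (2, {"manager": "A. Smith"}),
    (3, {"manager": "B. Jones"}),  # a different entity serving the same role
]
print(role_timeline(revs, "manager"))  # [(1, 'A. Smith'), (3, 'B. Jones')]
```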
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Bykau, Siarhei; Lee, Jihwan; Srivastava, Divesh; Velegrakis, Yannis. (2017). "[[Tell Me More Using Ladders in Wikipedia]]". DOI: 10.1145/3068839.3068847. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Bykau |first1=Siarhei |last2=Lee |first2=Jihwan |last3=Srivastava |first3=Divesh |last4=Velegrakis |first4=Yannis |title=Tell Me More Using Ladders in Wikipedia |date=2017 |doi=10.1145/3068839.3068847 |url=https://wikipediaquality.com/wiki/Tell_Me_More_Using_Ladders_in_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Bykau, Siarhei; Lee, Jihwan; Srivastava, Divesh; Velegrakis, Yannis. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/Tell_Me_More_Using_Ladders_in_Wikipedia">Tell Me More Using Ladders in Wikipedia</a>&amp;quot;. DOI: 10.1145/3068839.3068847. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia as Frame Information Repository<br />
| date = 2009<br />
| authors = [[Sara Tonelli]]<br />[[Claudio Giuliano]]<br />
| doi = 10.3115/1699510.1699547<br />
| link = http://dl.acm.org/citation.cfm?id=1699510.1699547<br />
| plink = https://www.semanticscholar.org/paper/Wikipedia-as-Frame-Information-Repository-Tonelli-Giuliano/d43720e843415cb1fdfa69ccde8f70c93d746b2e<br />
}}<br />
'''Wikipedia as Frame Information Repository''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Sara Tonelli]] and [[Claudio Giuliano]].<br />
<br />
== Overview ==<br />
In this paper, the authors address the issue of automatically extending lexical resources by exploiting existing knowledge repositories. In particular, they deal with the new task of linking FrameNet and [[Wikipedia]] using a word sense disambiguation system that, for a given pair frame -- lexical unit (F, l), finds the Wikipage that best expresses the meaning of l. The mapping can be exploited to straightforwardly acquire new example sentences and new lexical units, both for English and for all languages available in Wikipedia. In this way, it is possible to easily acquire good-quality data as a starting point for the creation of FrameNet in new languages. The evaluation, reported both for the monolingual and the [[multilingual]] expansion of FrameNet, shows that the approach is promising.</div>
<hr />
<div>{{Infobox work<br />
| title = An Approach for Deriving Semantically Related Category Hierarchies from Wikipedia Category Graphs<br />
| date = 2013<br />
| authors = [[Khaled A. Hejazy]]<br />[[Samhaa R. El-Beltagy]]<br />
| doi = 10.1007/978-3-642-36981-0_8<br />
| link = https://link.springer.com/chapter/10.1007/978-3-642-36981-0_8<br />
}}<br />
'''An Approach for Deriving Semantically Related Category Hierarchies from Wikipedia Category Graphs''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Khaled A. Hejazy]] and [[Samhaa R. El-Beltagy]].<br />
<br />
== Overview ==<br />
Wikipedia is the largest online encyclopedia known to date. Its rich content and semi-structured nature have made it a very valuable research tool used for classification, [[information extraction]], and semantic annotation, among others. Many applications can benefit from the presence of a topic hierarchy in [[Wikipedia]]. However, what Wikipedia currently offers is a category graph built through hierarchical category links whose semantics are undefined. Because of this lack of semantics, a sub-category in Wikipedia does not necessarily comply with the concept of a sub-category in a hierarchy. Instead, all it signifies is that there is some sort of relationship between the parent category and its sub-category. As a result, traversing the category links of any given category can often yield surprising results. For example, following the category of “Computing” down its sub-category links, the totally unrelated category of “Theology” appears. In this paper, the authors introduce a novel algorithm that, by measuring the semantic [[relatedness]] between any given Wikipedia category and the nodes in its sub-graph, is capable of extracting a category hierarchy containing only nodes that are relevant to the parent category. The algorithm has been evaluated by comparing its output with a gold-standard data set. The experimental setup and results are presented.</div>
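The pruning idea can be sketched as a breadth-first walk that keeps only sub-categories sufficiently related to the root. The relatedness function and threshold here are placeholders, not the paper's actual measure:

```python
from collections import deque

def prune_hierarchy(root, children, relatedness, threshold=0.3):
    """Breadth-first walk of a category graph that keeps only nodes
    semantically related to the root category.

    `children` maps a category to its subcategories; `relatedness` is any
    similarity function in [0, 1] (a stand-in for the paper's measure).
    """
    kept, queue, seen = [], deque([root]), {root}
    while queue:
        cat = queue.popleft()
        kept.append(cat)
        for sub in children.get(cat, []):
            if sub not in seen and relatedness(root, sub) >= threshold:
                seen.add(sub)
                queue.append(sub)
    return kept

# Mirrors the example in the text: "Theology" hangs under "Computing"
# in the raw graph but is pruned by the relatedness filter.
graph = {"Computing": ["Software", "Theology"], "Software": []}
sim = lambda a, b: 0.9 if b == "Software" else 0.1  # toy similarity scores
print(prune_hierarchy("Computing", graph, sim))  # ['Computing', 'Software']
```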
<hr />
<div>{{Infobox work<br />
| title = Computing Wikipedia Edit-Networks<br />
| date = 2009<br />
| authors = [[Ulrik Brandes]]<br />[[Patrick Kenis]]<br />[[Denise van Raaij]]<br />
| link = http://algo.uni-konstanz.de/publications/bklv-cwen-09.pdf<br />
}}<br />
'''Computing Wikipedia Edit-Networks''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Ulrik Brandes]], [[Patrick Kenis]] and [[Denise van Raaij]].<br />
<br />
== Overview ==<br />
This technical paper reviews the definition of [[Wikipedia]] edit-networks proposed in [1] and presents an algorithm to compute them.<br />
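One common way to build such a network is to connect each editor to the editor whose revision they edited after; the sketch below assumes that construction (the precise definition used in the paper is given in its reference [1]):

```python
from collections import Counter

def edit_network(revisions):
    """Directed edit network: an edge (u, v) with weight w means editor u
    made w edits immediately following an edit by v.  This is one common
    construction, not necessarily the paper's exact definition.
    """
    edges = Counter()
    for prev, curr in zip(revisions, revisions[1:]):
        if curr != prev:  # ignore consecutive edits by the same editor
            edges[(curr, prev)] += 1
    return edges

# Chronological list of editors of one article.
history = ["alice", "bob", "alice", "alice", "carol"]
print(edit_network(history))
# Counter({('bob', 'alice'): 1, ('alice', 'bob'): 1, ('carol', 'alice'): 1})
```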
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Brandes, Ulrik; Kenis, Patrick; Raaij, Denise van. (2009). "[[Computing Wikipedia Edit-Networks]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Brandes |first1=Ulrik |last2=Kenis |first2=Patrick |last3=Raaij |first3=Denise van |title=Computing Wikipedia Edit-Networks |date=2009 |url=https://wikipediaquality.com/wiki/Computing_Wikipedia_Edit-Networks}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Brandes, Ulrik; Kenis, Patrick; Raaij, Denise van. (2009). &amp;quot;<a href="https://wikipediaquality.com/wiki/Computing_Wikipedia_Edit-Networks">Computing Wikipedia Edit-Networks</a>&amp;quot;.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia Leeches? the Promotion of Traffic Through a Collaborative Web Format<br />
| date = 2009<br />
| authors = [[Ganaele Langlois]]<br />[[Greg Elmer]]<br />
| doi = 10.1177/1461444809105351<br />
| link = http://journals.sagepub.com/doi/abs/10.1177/1461444809105351<br />
}}<br />
'''Wikipedia Leeches? the Promotion of Traffic Through a Collaborative Web Format''' - scientific work related to [[Wikipedia quality]] published in 2009, written by [[Ganaele Langlois]] and [[Greg Elmer]].<br />
<br />
== Overview ==<br />
This article investigates the circulation of [[Wikipedia]] entries on the web in an effort to determine the integration of its collaborative model into existing proprietary web formats. In particular, it details the use of Wikipedia content as 'tags', or information that is used to increase traffic to webpages through search-engine results. Consequently, the article discusses the need to develop theoretical models that provide for an understanding of both content and form on the web, particularly as formatted by [[open-source]] legal frameworks.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Langlois, Ganaele; Elmer, Greg. (2009). "[[Wikipedia Leeches? the Promotion of Traffic Through a Collaborative Web Format]]". SAGE Publications Sage UK: London, England. DOI: 10.1177/1461444809105351. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Langlois |first1=Ganaele |last2=Elmer |first2=Greg |title=Wikipedia Leeches? the Promotion of Traffic Through a Collaborative Web Format |date=2009 |doi=10.1177/1461444809105351 |url=https://wikipediaquality.com/wiki/Wikipedia_Leeches?_the_Promotion_of_Traffic_Through_a_Collaborative_Web_Format |journal=SAGE Publications Sage UK: London, England}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Langlois, Ganaele; Elmer, Greg. (2009). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikipedia_Leeches?_the_Promotion_of_Traffic_Through_a_Collaborative_Web_Format">Wikipedia Leeches? the Promotion of Traffic Through a Collaborative Web Format</a>&amp;quot;. SAGE Publications Sage UK: London, England. DOI: 10.1177/1461444809105351. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = Inducing Conceptual Embedding Spaces from Wikipedia<br />
| date = 2017<br />
| authors = [[Gerard de Melo]]<br />
| doi = 10.1145/3041021.3054144<br />
| link = https://dl.acm.org/citation.cfm?id=3041021.3054144<br />
}}<br />
'''Inducing Conceptual Embedding Spaces from Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Gerard de Melo]].<br />
<br />
== Overview ==<br />
The word2vec word vector representations are one of the most well-known new semantic resources to appear in recent years. While large sets of pre-trained vectors are available, these focus on frequent words and multi-word expressions but lack sufficient coverage of [[named entities]]. Moreover, [[Google]] only released pre-trained vectors for English. In this paper, the author explores an automatic expansion of Google's pre-trained vectors using [[Wikipedia]], adding millions of concepts and named entities in over 270 languages. The method enables all of these to reside in the same vector space, thus flexibly facilitating [[cross-lingual]] semantic applications.<br />
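Cross-lingual similarity in a shared space reduces to ordinary vector similarity. A minimal sketch with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions, and these names are illustrative only):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors in a shared embedding space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors standing in for entries from different languages that live
# in the same space, so they can be compared directly.
berlin_en = [0.9, 0.1, 0.2]
berlin_de = [0.85, 0.15, 0.25]
apple_en = [0.1, 0.9, 0.1]
print(cosine(berlin_en, berlin_de) > cosine(berlin_en, apple_en))  # True
```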
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Melo, Gerard de. (2017). "[[Inducing Conceptual Embedding Spaces from Wikipedia]]". International World Wide Web Conferences Steering Committee. DOI: 10.1145/3041021.3054144. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Melo |first1=Gerard de |title=Inducing Conceptual Embedding Spaces from Wikipedia |date=2017 |doi=10.1145/3041021.3054144 |url=https://wikipediaquality.com/wiki/Inducing_Conceptual_Embedding_Spaces_from_Wikipedia |journal=International World Wide Web Conferences Steering Committee}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Melo, Gerard de. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/Inducing_Conceptual_Embedding_Spaces_from_Wikipedia">Inducing Conceptual Embedding Spaces from Wikipedia</a>&amp;quot;. International World Wide Web Conferences Steering Committee. DOI: 10.1145/3041021.3054144. <br />
</nowiki><br />
</code></div>
<hr />
<div>'''A Framework for Co-Classification of Articles and Users in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Lei Liu]] and [[Pang Ning Tan]].<br />
<br />
== Overview ==<br />
The massive size of [[Wikipedia]] and the ease with which its content can be created and edited have made Wikipedia an interesting domain for a variety of classification tasks, including topic detection, spam detection, and vandalism detection. These tasks are typically cast as a link-based classification problem, in which the class label of an article or a user is determined from its content-based and link-based [[features]]. Prior work has focused primarily on classifying either the editors or the articles (but not both). Yet there are many situations in which classification can be aided by knowing collectively the class labels of the users and articles (e.g., spammers are more likely to post spam content than non-spammers). This paper presents a novel framework to jointly classify Wikipedia articles and editors, assuming there are correspondences between their classes. The experimental results demonstrate that the proposed co-classification algorithm outperforms classifiers that are trained independently to predict the class labels of articles and editors.</div>
<hr />
<div>{{Infobox work<br />
| title = A Comparison of Automatic Search Query Enhancement Algorithms That Utilise Wikipedia as a Source of a Priori Knowledge<br />
| date = 2017<br />
| authors = [[Kyle Goslin]]<br />[[Markus Hofmann]]<br />
| doi = 10.1145/3158354.3158356<br />
| link = https://dl.acm.org/citation.cfm?doid=3158354.3158356<br />
}}<br />
'''A Comparison of Automatic Search Query Enhancement Algorithms That Utilise Wikipedia as a Source of a Priori Knowledge''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Kyle Goslin]] and [[Markus Hofmann]].<br />
<br />
== Overview ==<br />
This paper describes the benchmarking and analysis of five Automatic Search Query Enhancement (ASQE) algorithms that utilise [[Wikipedia]] as the sole source of a priori knowledge. The contributions of this paper include: 1) a comprehensive review of current ASQE algorithms that utilise Wikipedia as the sole source of a priori knowledge; 2) benchmarking of five existing ASQE algorithms using the TREC-9 Web Topics on the ClueWeb12 data set; and 3) analysis of the results from the benchmarking process to identify the strengths and weaknesses of each algorithm. During the benchmarking process, 2,500 relevance assessments were performed. The results of these tests are analysed using the Average Precision @10 per query and Mean Average Precision @10 per algorithm. From this analysis, the authors show that the scope of a priori knowledge utilised during enhancement and the term-weighting methods available from Wikipedia can further aid the ASQE process. Although the approaches taken by the algorithms are still relevant, an over-dependence on the weighting schemes and data sources used can easily impact the results of an ASQE algorithm.</div>
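The evaluation measures named above can be written down directly. One common formulation is shown below; definitions of Average Precision vary slightly, so treat this as a sketch rather than the exact variant used in the paper:

```python
def average_precision_at_k(ranked, relevant, k=10):
    """Average Precision@k for one query: the mean of the precision values
    at the ranks where a relevant document appears within the top k."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked[:k], start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / min(len(relevant), k) if relevant else 0.0

def mean_average_precision(runs, k=10):
    """MAP@k across queries: the mean of the per-query AP@k values.

    `runs` is a list of (ranked_docs, relevant_set) pairs, one per query.
    """
    return sum(average_precision_at_k(r, rel, k) for r, rel in runs) / len(runs)

ranked = ["d1", "d3", "d2"]
print(average_precision_at_k(ranked, {"d1", "d2"}))  # (1/1 + 2/3) / 2 ≈ 0.833
```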
<hr />
<div>'''Structure-Based Features for Predicting the Quality of Articles in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Baptiste de La Robertie]], [[Yoann Pitarch]] and [[Olivier Teste]].<br />
<br />
== Overview ==<br />
The success of [[Wikipedia]] is largely due to the free availability of high-quality articles across many different areas of expertise. While many of these collaborations between authoritative users produce referenceable sources, Wikipedia is not sheltered from well-identified problems regarding article quality, e.g., the reputability of third-party sources and vandalism. Because of the huge number of articles and the intensive edit rate, it is not reasonable to even consider manually evaluating the content quality of each article. In this paper, the authors tackle the problem of modeling and predicting the quality of articles in collaborative platforms. They propose a quality model integrating both temporal and structural [[features]] captured from the implicit peer-review process enabled by Wikipedia. A generic HITS-like framework is developed that is able to capture both the quality of the content and the authority of the associated authors. Notably, a mutual-reinforcement principle between article quality and author authority is exploited in order to take advantage of the collaborative graph generated by the users. Experiments conducted on a set of representative data from Wikipedia show the effectiveness of the computed [[indicators]] in both an unsupervised and a supervised scenario.</div>
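The mutual-reinforcement principle can be sketched as a HITS-style power iteration over a bipartite editor-article graph: an article scores well if edited by authoritative editors, and an editor is authoritative if they edit good articles. The update and normalisation rules below are illustrative, not the paper's exact framework:

```python
def hits_quality(edges, n_iter=50):
    """HITS-like mutual reinforcement on a bipartite (author, article) graph.

    `edges` is a list of (author, article) edit pairs.  Returns per-author
    authority scores and per-article quality scores.
    """
    authors = {a for a, _ in edges}
    articles = {p for _, p in edges}
    auth = {a: 1.0 for a in authors}
    qual = {p: 1.0 for p in articles}
    for _ in range(n_iter):
        # Article quality accumulates the authority of its editors ...
        qual = {p: sum(auth[a] for a, q in edges if q == p) for p in articles}
        # ... and author authority accumulates the quality of edited articles.
        auth = {a: sum(qual[p] for a2, p in edges if a2 == a) for a in authors}
        zq = sum(qual.values()) or 1.0  # normalise to keep scores bounded
        za = sum(auth.values()) or 1.0
        qual = {p: v / zq for p, v in qual.items()}
        auth = {a: v / za for a, v in auth.items()}
    return auth, qual

edges = [("u1", "art1"), ("u1", "art2"), ("u2", "art1")]
auth, qual = hits_quality(edges)
print(qual["art1"] > qual["art2"])  # True: art1 has more contributing editors
```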
<hr />
<div>{{Infobox work<br />
| title = Knowledge Quality of Collaborative Editing in Wikipedia: an Integrative Perspective of Social Capital and Team Conflict<br />
| date = 2015<br />
| authors = [[Liuhan Zhan]]<br />[[Nan Wang]]<br />[[Xiao-Liang Shen]]<br />[[Yongqiang Sun]]<br />
| link = http://aisel.aisnet.org/pacis2015/171/<br />
}}<br />
'''Knowledge Quality of Collaborative Editing in Wikipedia: an Integrative Perspective of Social Capital and Team Conflict''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Liuhan Zhan]], [[Nan Wang]], [[Xiao-Liang Shen]] and [[Yongqiang Sun]].<br />
<br />
== Overview ==<br />
Collaborative editing has become one of the most popular forms of knowledge contribution in virtual communities. [[Wikipedia]], the largest online encyclopaedia, is a representative example of collaborative work. Despite the abundant research on Wikipedia, to the best of the authors' knowledge, no one has considered the integration of social capital and conflict. Moreover, the extant literature on knowledge quality pays attention only to task conflict, while relational conflict is rarely mentioned. This study proposes a nonlinear relationship between task conflict and knowledge quality, instead of the linear relationships assumed in prior studies. The authors also postulate a moderating effect of task complexity. Furthermore, there is little empirical research on the influence of social capital on conflict, especially the distinct effects of cognitive and relational capital. This paper aims at proposing a theoretical model to examine the effects of social capital and conflict while taking task complexity into account. The authors will verify the research model in subsequent phases, and believe that the present work can make contributions to both research and practice.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Zhan, Liuhan; Wang, Nan; Shen, Xiao-Liang; Sun, Yongqiang. (2015). "[[Knowledge Quality of Collaborative Editing in Wikipedia: an Integrative Perspective of Social Capital and Team Conflict]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Zhan |first1=Liuhan |last2=Wang |first2=Nan |last3=Shen |first3=Xiao-Liang |last4=Sun |first4=Yongqiang |title=Knowledge Quality of Collaborative Editing in Wikipedia: an Integrative Perspective of Social Capital and Team Conflict |date=2015 |url=https://wikipediaquality.com/wiki/Knowledge_Quality_of_Collaborative_Editing_in_Wikipedia:_an_Integrative_Perspective_of_Social_Capital_and_Team_Conflict}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Zhan, Liuhan; Wang, Nan; Shen, Xiao-Liang; Sun, Yongqiang. (2015). &amp;quot;<a href="https://wikipediaquality.com/wiki/Knowledge_Quality_of_Collaborative_Editing_in_Wikipedia:_an_Integrative_Perspective_of_Social_Capital_and_Team_Conflict">Knowledge Quality of Collaborative Editing in Wikipedia: an Integrative Perspective of Social Capital and Team Conflict</a>&amp;quot;.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>
<hr />
<div>{{Infobox work<br />
| title = A Content-Context-Centric Approach for Detecting Vandalism in Wikipedia<br />
| date = 2013<br />
| authors = [[Lakshmish Ramaswamy]]<br />[[Raga Sowmya Tummalapenta]]<br />[[Calton Pu]]<br />
| doi = 10.4108/icst.collaboratecom.2013.254059<br />
| link = http://ieeexplore.ieee.org/document/6679976/<br />
}}<br />
'''A Content-Context-Centric Approach for Detecting Vandalism in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Lakshmish Ramaswamy]], [[Raga Sowmya Tummalapenta]] and [[Calton Pu]].<br />
<br />
== Overview ==<br />
Collaborative online social media (CSM) applications such as [[Wikipedia]] have not only revolutionized the World Wide Web, but have also had a hugely positive effect on modern free societies. Unfortunately, Wikipedia has also become a target of a wide variety of vandalism attacks. Most existing vandalism detection techniques rely upon simple textual [[features]] such as the existence of abusive language or spammy words. These techniques are ineffective against sophisticated vandal edits, which often do not contain the tell-tale markers associated with vandalism. In this paper, the authors argue for a context-aware approach to vandalism detection and propose a content-context-aware vandalism detection framework. The main idea is to quantify how well the words contained in an edit fit into the topic and the existing content of the Wikipedia article. The authors present two novel metrics, called WWW co-occurrence probability and top-ranked co-occurrence probability, for this purpose. They also develop efficient mechanisms for evaluating these two metrics, and machine-learning-based schemes that utilize them. The paper presents a range of experiments to demonstrate the effectiveness of the proposed approach.<br />
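The content-context idea can be illustrated with a much-simplified fit score: the fraction of new edit tokens that already occur in the article. This is only a crude stand-in for the paper's WWW and top-ranked co-occurrence probabilities, and the example texts are invented:

```python
def context_fit(edit_text, article_text):
    """Fraction of non-trivial edit tokens that already occur in the article,
    a crude proxy for how well an edit fits the article's topic.  Low scores
    would flag candidate vandalism for a downstream classifier.
    """
    article_vocab = {w.lower() for w in article_text.split()}
    tokens = [w.lower() for w in edit_text.split() if len(w) > 3]
    if not tokens:
        return 1.0
    return sum(w in article_vocab for w in tokens) / len(tokens)

article = "Wikipedia is a free online encyclopedia edited collaboratively"
good_edit = "The encyclopedia is edited collaboratively by volunteers"
vandal_edit = "totally random spammy nonsense here"
print(context_fit(good_edit, article) > context_fit(vandal_edit, article))  # True
```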
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Ramaswamy, Lakshmish; Tummalapenta, Raga Sowmya; Pu, Calton. (2013). "[[A Content-Context-Centric Approach for Detecting Vandalism in Wikipedia]]". DOI: 10.4108/icst.collaboratecom.2013.254059. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Ramaswamy |first1=Lakshmish |last2=Tummalapenta |first2=Raga Sowmya |last3=Pu |first3=Calton |title=A Content-Context-Centric Approach for Detecting Vandalism in Wikipedia |date=2013 |doi=10.4108/icst.collaboratecom.2013.254059 |url=https://wikipediaquality.com/wiki/A_Content-Context-Centric_Approach_for_Detecting_Vandalism_in_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Ramaswamy, Lakshmish; Tummalapenta, Raga Sowmya; Pu, Calton. (2013). &amp;quot;<a href="https://wikipediaquality.com/wiki/A_Content-Context-Centric_Approach_for_Detecting_Vandalism_in_Wikipedia">A Content-Context-Centric Approach for Detecting Vandalism in Wikipedia</a>&amp;quot;. DOI: 10.4108/icst.collaboratecom.2013.254059. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Evaluating_Google,_Twitter,_and_Wikipedia_as_Tools_for_Influenza_Surveillance_Using_Bayesian_Change_Point_Analysis:_a_Comparative_Analysis&diff=25170Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: a Comparative Analysis2020-08-14T06:23:08Z<p>Aaliyah: + categories</p>
<hr />
<div>{{Infobox work<br />
| title = Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: a Comparative Analysis<br />
| date = 2016<br />
| authors = [[J Danielle Sharpe]]<br />
| doi = 10.2196/publichealth.5901<br />
| link = https://ncbi.nlm.nih.gov/pubmed/27765731<br />
}}<br />
'''Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: a Comparative Analysis''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[J Danielle Sharpe]].<br />
<br />
== Overview ==<br />
Background: Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from [[Google]], [[Twitter]], and [[Wikipedia]] to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or dually without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best in order to be considered as a complement to traditional methods. Objective: The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia would best correspond with CDC ILI data, as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Methods: Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC’s change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package “bcp” version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. 
Results: During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%. Conclusions: Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed. [JMIR Public Health Surveill 2016;2(2):e161]<br />
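The core idea of change point detection used in this study can be illustrated with a toy least-squares version: find the split that best divides a series into two segments with distinct means. This is only a sketch of the concept, not the Bayesian posterior computed by the R package "bcp"; the data values are made up.

```python
def single_change_point(series):
    """Return the split index that best divides the series into two
    segments with distinct means (least-squares criterion). A toy
    stand-in for Bayesian change point analysis -- illustrative only."""
    def sq_err(seg):
        mean = sum(seg) / len(seg)
        return sum((v - mean) ** 2 for v in seg)

    best_split, best_cost = None, float("inf")
    for k in range(1, len(series)):
        # Cost = within-segment squared error on both sides of the split
        cost = sq_err(series[:k]) + sq_err(series[k:])
        if cost < best_cost:
            best_split, best_cost = k, cost
    return best_split

# Flat baseline followed by a flu-season surge in search volume:
volume = [10, 11, 9, 10, 40, 42, 41, 39]
print(single_change_point(volume))  # the surge begins at index 4
```

The study's comparison then amounts to checking whether such detected change points in each Web source fall within a week of those detected in the CDC ILI series.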
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Sharpe, J Danielle. (2016). "[[Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: a Comparative Analysis]]". JMIR Publications Inc., Toronto, Canada. DOI: 10.2196/publichealth.5901. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Sharpe |first1=J Danielle |title=Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: a Comparative Analysis |date=2016 |doi=10.2196/publichealth.5901 |url=https://wikipediaquality.com/wiki/Evaluating_Google,_Twitter,_and_Wikipedia_as_Tools_for_Influenza_Surveillance_Using_Bayesian_Change_Point_Analysis:_a_Comparative_Analysis |journal=JMIR Publications Inc., Toronto, Canada}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Sharpe, J Danielle. (2016). &amp;quot;<a href="https://wikipediaquality.com/wiki/Evaluating_Google,_Twitter,_and_Wikipedia_as_Tools_for_Influenza_Surveillance_Using_Bayesian_Change_Point_Analysis:_a_Comparative_Analysis">Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: a Comparative Analysis</a>&amp;quot;. JMIR Publications Inc., Toronto, Canada. DOI: 10.2196/publichealth.5901. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:Twi Wikipedia]]</div>Aaliyahhttps://wikipediaquality.com/index.php?title=A_Wikipedia-Based_Approach_to_Profiling_Activities_on_Social_Media&diff=25169A Wikipedia-Based Approach to Profiling Activities on Social Media2020-08-14T06:20:15Z<p>Aaliyah: Adding infobox</p>
<hr />
<div>{{Infobox work<br />
| title = A Wikipedia-Based Approach to Profiling Activities on Social Media<br />
| date = 2018<br />
| authors = [[Christian Torrero]]<br />[[Carlo Caprini]]<br />[[Daniele Miorandi]]<br />
| link = https://dl.acm.org/citation.cfm?id=3174140<br />
| plink = http://arxiv.org/pdf/1804.02245.pdf<br />
}}<br />
'''A Wikipedia-Based Approach to Profiling Activities on Social Media''' - scientific work related to [[Wikipedia quality]] published in 2018, written by [[Christian Torrero]], [[Carlo Caprini]] and [[Daniele Miorandi]].<br />
<br />
== Overview ==<br />
Online user profiling is a very active research field, catalyzing great interest by both scientists and practitioners. In this paper, in particular, authors look at approaches able to mine social media activities of users to create a rich user profile. Authors look at the case in which the profiling is meant to characterize the user's interests along a set of predefined dimensions (that authors refer to as [[categories]]). A conventional way to do so is to use semantic analysis techniques to (i) extract relevant entities from the online conversations of users (ii) mapping said entities to the predefined categories of interest. While entity extraction is a well-understood topic, the mapping part lacks a reference standardized approach. In this paper authors propose using graph navigation techniques on the [[Wikipedia]] tree to achieve such a mapping. A prototypical implementation is presented and some preliminary results are reported.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Revisiting_Reverts:_Accurate_Revert_Detection_in_Wikipedia&diff=25168Revisiting Reverts: Accurate Revert Detection in Wikipedia2020-08-14T06:18:25Z<p>Aaliyah: + category</p>
<hr />
<div>{{Infobox work<br />
| title = Revisiting Reverts: Accurate Revert Detection in Wikipedia<br />
| date = 2012<br />
| authors = [[Fabian Flöck]]<br />[[Denny Vrandecic]]<br />[[Elena Simperl]]<br />
| doi = 10.1145/2309996.2310000<br />
| link = http://dl.acm.org/ft_gateway.cfm?id=2310000&amp;type=pdf<br />
}}<br />
'''Revisiting Reverts: Accurate Revert Detection in Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Fabian Flöck]], [[Denny Vrandecic]] and [[Elena Simperl]].<br />
<br />
== Overview ==<br />
Wikipedia is commonly used as a proving ground for research in collaborative systems. This is likely due to its popularity and scale, but also to the fact that large amounts of data about its formation and evolution are freely available to inform and validate theories and models of online collaboration. As part of the development of such approaches, revert detection is often performed as an important pre-processing step in tasks as diverse as the extraction of implicit networks of editors, the analysis of edit or editor [[features]] and the removal of noise when analyzing the emergence of the content of an article. The current state of the art in revert detection is based on a rather naive approach, which identifies revision duplicates based on MD5 hash values. This is an efficient, but not very precise technique that forms the basis for the majority of research based on revert relations in [[Wikipedia]]. In this paper the authors prove that this method has a number of important drawbacks - it only detects a limited number of reverts, while simultaneously misclassifying too many edits as reverts, and not distinguishing between complete and partial reverts. This is very likely to hamper the accurate interpretation of the findings of revert-related research. The authors introduce an improved algorithm for the detection of reverts, based on word tokens added or deleted, that addresses these drawbacks. They report on the results of a user study and other tests demonstrating the considerable gains in accuracy and coverage achieved by their method, and argue for a positive trade-off, in certain research scenarios, between these improvements and the algorithm's increased runtime.<br />
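The MD5-based baseline the paper critiques can be sketched as follows: a revision is flagged as an identity revert when its full text hashes to the same value as an earlier, non-adjacent revision. This is an illustrative sketch of that baseline, not the authors' improved token-based algorithm, and the variable names are hypothetical.

```python
import hashlib

def detect_identity_reverts(revisions):
    """Flag revisions whose full text exactly matches an earlier revision
    of the same page -- the hash-based method the paper critiques.
    Partial reverts are invisible to this check, which is one of the
    drawbacks the authors identify."""
    seen = {}      # MD5 digest -> index of first revision with that text
    reverts = []
    for i, text in enumerate(revisions):
        digest = hashlib.md5(text.encode("utf-8")).hexdigest()
        if digest in seen and seen[digest] != i - 1:
            reverts.append(i)  # restores an earlier, non-adjacent state
        seen.setdefault(digest, i)
    return reverts

history = ["intro", "intro + vandalism", "intro"]
print(detect_identity_reverts(history))  # revision 2 reverts to revision 0
```

A revision that removes only part of the vandalism produces a new hash and is missed entirely, which motivates the word-token approach proposed in the paper.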
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Flöck, Fabian; Vrandecic, Denny; Simperl, Elena. (2012). "[[Revisiting Reverts: Accurate Revert Detection in Wikipedia]]". DOI: 10.1145/2309996.2310000. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Flöck |first1=Fabian |last2=Vrandecic |first2=Denny |last3=Simperl |first3=Elena |title=Revisiting Reverts: Accurate Revert Detection in Wikipedia |date=2012 |doi=10.1145/2309996.2310000 |url=https://wikipediaquality.com/wiki/Revisiting_Reverts:_Accurate_Revert_Detection_in_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Flöck, Fabian; Vrandecic, Denny; Simperl, Elena. (2012). &amp;quot;<a href="https://wikipediaquality.com/wiki/Revisiting_Reverts:_Accurate_Revert_Detection_in_Wikipedia">Revisiting Reverts: Accurate Revert Detection in Wikipedia</a>&amp;quot;. DOI: 10.1145/2309996.2310000. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Physics_on_Wikipedia&diff=25167Physics on Wikipedia2020-08-14T06:15:25Z<p>Aaliyah: Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Physics on Wikipedia<br />
| date = 2011<br />
| authors = [[Martin Poulter]]<br />[[M. W. Peel]]<br />
| doi = 10.1088/2058-7058/24/09/37<br />
| link = http://iopscience.iop.org/article/10.1088/2058-7058/24/09/37/pdf<br />
}}<br />
'''Physics on Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2011, written by [[Martin Poulter]] and [[M. W. Peel]].<br />
<br />
== Overview ==<br />
If you have knowledge you can share, [[Wikipedia]] needs you. Get your students involved too – improving articles is a great educational opportunity.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Poulter, Martin; Peel, M. W.. (2011). "[[Physics on Wikipedia]]". IOP Publishing. DOI: 10.1088/2058-7058/24/09/37. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Poulter |first1=Martin |last2=Peel |first2=M. W. |title=Physics on Wikipedia |date=2011 |doi=10.1088/2058-7058/24/09/37 |url=https://wikipediaquality.com/wiki/Physics_on_Wikipedia |journal=IOP Publishing}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Poulter, Martin; Peel, M. W.. (2011). &amp;quot;<a href="https://wikipediaquality.com/wiki/Physics_on_Wikipedia">Physics on Wikipedia</a>&amp;quot;. IOP Publishing. DOI: 10.1088/2058-7058/24/09/37. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=What_Can_Wikipedia_and_Google_Tell_Us_About_Stock_Prices_Under_Diferent_Market_Regimes&diff=25166What Can Wikipedia and Google Tell Us About Stock Prices Under Diferent Market Regimes2020-08-14T06:14:10Z<p>Aaliyah: Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = What Can Wikipedia and Google Tell Us About Stock Prices Under Diferent Market Regimes<br />
| date = 2015<br />
| authors = [[Boris Cergol]]<br />[[Matjaž Omladič]]<br />
| doi = 10.26493/1855-3974.561.37f<br />
| link = https://amc-journal.eu/index.php/amc/article/download/561/836<br />
}}<br />
'''What Can Wikipedia and Google Tell Us About Stock Prices Under Diferent Market Regimes''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Boris Cergol]] and [[Matjaž Omladič]].<br />
<br />
== Overview ==<br />
In less than five years a surprisingly high level of attention has built up around the possible connection between internet search data and stock prices. The main aim of this paper is to point out how this connection may depend heavily on different regimes of the market, i.e. the bear market vs. the bull market. The authors consider three types of internet search data (relative [[Google]] search frequencies of company tickers, relative Google search frequencies of company names and page visits of [[Wikipedia]] articles about individual companies) and a substantial sample of companies which are members of the S&P 500 index. The authors discover two inverse patterns in stock prices: in the bear market what they propose to term a "merry frown" and in the bull market a "sour smile", both clearly seen especially for the Wikipedia data. They propose market-neutral strategies that exploit these new patterns and yield up to 17% in average annual return during the sample period from 2008 to 2013.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Towards_a_Korean_Dbpedia_and_an_Approach_for_Complementing_the_Korean_Wikipedia_based_on_Dbpedia&diff=25165Towards a Korean Dbpedia and an Approach for Complementing the Korean Wikipedia based on Dbpedia2020-08-14T06:12:58Z<p>Aaliyah: Category</p>
<hr />
<div>{{Infobox work<br />
| title = Towards a Korean Dbpedia and an Approach for Complementing the Korean Wikipedia based on Dbpedia<br />
| date = 2010<br />
| authors = [[Eun Kyung Kim]]<br />[[Matthias Weidl]]<br />[[Key-Sun Choi]]<br />[[Sören Auer]]<br />
| link = http://ceur-ws.org/Vol-575/paper3.pdf<br />
}}<br />
'''Towards a Korean Dbpedia and an Approach for Complementing the Korean Wikipedia based on Dbpedia''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Eun Kyung Kim]], [[Matthias Weidl]], [[Key-Sun Choi]] and [[Sören Auer]].<br />
<br />
== Overview ==<br />
In the first part of this paper the authors report on their experiences applying the [[DBpedia]] extraction framework to the Korean [[Wikipedia]]. They improved the extraction of non-Latin characters and extended the framework with pluggable internationalization components in order to facilitate the extraction of localized information. With these improvements the authors almost doubled the amount of extracted triples. They also present the results of the extraction for Korean. In the second part, the authors present a conceptual study aimed at understanding the impact of international resource synchronization in DBpedia. In the absence of any information synchronization, each country would construct its own datasets and manage them for its users. Moreover, cooperation across the various countries is adversely affected.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Kim, Eun Kyung; Weidl, Matthias; Choi, Key-Sun; Auer, Sören. (2010). "[[Towards a Korean Dbpedia and an Approach for Complementing the Korean Wikipedia based on Dbpedia]]". CEUR-WS.org. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Kim |first1=Eun Kyung |last2=Weidl |first2=Matthias |last3=Choi |first3=Key-Sun |last4=Auer |first4=Sören |title=Towards a Korean Dbpedia and an Approach for Complementing the Korean Wikipedia based on Dbpedia |date=2010 |url=https://wikipediaquality.com/wiki/Towards_a_Korean_Dbpedia_and_an_Approach_for_Complementing_the_Korean_Wikipedia_based_on_Dbpedia |journal=CEUR-WS.org}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Kim, Eun Kyung; Weidl, Matthias; Choi, Key-Sun; Auer, Sören. (2010). &amp;quot;<a href="https://wikipediaquality.com/wiki/Towards_a_Korean_Dbpedia_and_an_Approach_for_Complementing_the_Korean_Wikipedia_based_on_Dbpedia">Towards a Korean Dbpedia and an Approach for Complementing the Korean Wikipedia based on Dbpedia</a>&amp;quot;. CEUR-WS.org. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:Korean Wikipedia]]<br />
[[Category:Latin Wikipedia]]</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Performing_Cross-Language_Retrieval_with_Wikipedia&diff=25164Performing Cross-Language Retrieval with Wikipedia2020-08-14T06:10:44Z<p>Aaliyah: Adding categories</p>
<hr />
<div>{{Infobox work<br />
| title = Performing Cross-Language Retrieval with Wikipedia<br />
| date = 2007<br />
| authors = [[Péter Schönhofen]]<br />[[András A. Benczúr]]<br />[[István Bíró]]<br />[[Károly Csalogány]]<br />
| link = http://ceur-ws.org/Vol-1173/CLEF2007wn-adhoc-SchonhofenEt2007.pdf<br />
}}<br />
'''Performing Cross-Language Retrieval with Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2007, written by [[Péter Schönhofen]], [[András A. Benczúr]], [[István Bíró]] and [[Károly Csalogány]].<br />
<br />
== Overview ==<br />
The authors describe a method which is able to translate queries extended by narrative information from one language to another, with the help of an appropriate machine-readable dictionary and the [[Wikipedia]] on-line encyclopedia. Processing occurs in three steps: first, the authors look up possible translations phrase by phrase using both the dictionary and the [[cross-lingual]] links provided by Wikipedia; second, improbable translations, detected by a simple language model computed over a large corpus of documents written in the target language, are eliminated; and finally, further filtering is applied by matching Wikipedia concepts against the query narrative and removing translations not related to the overall query topic. Experiments performed on the Los Angeles Times 2002 corpus, translating from Hungarian to English, showed that while queries generated at the end of the second step were roughly only half as effective as the original queries, primarily due to the limitations of the tools, after the third step precision improved significantly, reaching 60% of the native English level.<br />
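The first two stages of this pipeline can be sketched as below: dictionary lookup followed by a language-model filter. The dictionary entries and unigram probabilities are made-up stand-ins for the paper's resources, and the third stage (the Wikipedia concept filter) is omitted.

```python
# Toy bilingual dictionary and target-language unigram model; both are
# hypothetical stand-ins for the resources described in the paper.
DICTIONARY = {"kutya": ["dog", "hound"], "haz": ["house"]}
UNIGRAM_PROB = {"dog": 0.02, "hound": 0.000001, "house": 0.03}

def translate_query(words, min_prob=0.001):
    """Step 1: look up candidate translations word by word;
    step 2: drop candidates the target-language model deems too
    improbable (illustrative sketch of the paper's first two stages)."""
    translated = []
    for word in words:
        candidates = DICTIONARY.get(word, [])
        kept = [c for c in candidates if UNIGRAM_PROB.get(c, 0.0) >= min_prob]
        translated.extend(kept or candidates)  # fall back if all filtered out
    return translated

print(translate_query(["kutya", "haz"]))  # 'hound' is filtered as improbable
```

The third stage would further prune this candidate list by keeping only translations whose Wikipedia concepts relate to the query narrative.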
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Schönhofen, Péter; Benczúr, András A.; Bíró, István; Csalogány, Károly. (2007). "[[Performing Cross-Language Retrieval with Wikipedia]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Schönhofen |first1=Péter |last2=Benczúr |first2=András A. |last3=Bíró |first3=István |last4=Csalogány |first4=Károly |title=Performing Cross-Language Retrieval with Wikipedia |date=2007 |url=https://wikipediaquality.com/wiki/Performing_Cross-Language_Retrieval_with_Wikipedia}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Schönhofen, Péter; Benczúr, András A.; Bíró, István; Csalogány, Károly. (2007). &amp;quot;<a href="https://wikipediaquality.com/wiki/Performing_Cross-Language_Retrieval_with_Wikipedia">Performing Cross-Language Retrieval with Wikipedia</a>&amp;quot;.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:English Wikipedia]]<br />
[[Category:Hungarian Wikipedia]]</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Decision_Making_in_the_Self-Evolved_Collegiate_Court:_Wikipedia%E2%80%99s_Arbitration_Committee_and_Its_Implications_for_Self-Governance_and_Judiciary_in_Cyberspace:&diff=25163Decision Making in the Self-Evolved Collegiate Court: Wikipedia’s Arbitration Committee and Its Implications for Self-Governance and Judiciary in Cyberspace:2020-08-14T06:07:47Z<p>Aaliyah: cat.</p>
<hr />
<div>{{Infobox work<br />
| title = Decision Making in the Self-Evolved Collegiate Court: Wikipedia’s Arbitration Committee and Its Implications for Self-Governance and Judiciary in Cyberspace:<br />
| date = 2017<br />
| authors = [[Piotr Konieczny]]<br />
| doi = 10.1177/0268580917722906<br />
| link = http://journals.sagepub.com/doi/10.1177/0268580917722906<br />
}}<br />
'''Decision Making in the Self-Evolved Collegiate Court: Wikipedia’s Arbitration Committee and Its Implications for Self-Governance and Judiciary in Cyberspace:''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Piotr Konieczny]].<br />
<br />
== Overview ==<br />
This article considers the extent to which non-legal factors (nationality, activity/experience, conflict avoidance, and time constraints) affect decision making within collegiate courts, through the study of [[Wikipedia]]’s Arbitration Committee. That body is a self-evolved collegiate court of the Internet’s fifth most popular website, whose judges (known as arbitrators) are volunteers. This study shows that the decision-making process of this body seems mostly unaffected by the demographic factors studied and the acclimatization bias. Some evidence of conflict avoidance is found. Despite the professed equality of members of the Committee, there is clear evidence that some are much more active (and thus, influential) than others. Compared to most traditional court settings, in the volunteer collegiate court studied here, time constraints play a much more significant role than previously suggested in the literature.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Konieczny, Piotr. (2017). "[[Decision Making in the Self-Evolved Collegiate Court: Wikipedia’s Arbitration Committee and Its Implications for Self-Governance and Judiciary in Cyberspace:]]". SAGE Publications Sage UK: London, England. DOI: 10.1177/0268580917722906. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Konieczny |first1=Piotr |title=Decision Making in the Self-Evolved Collegiate Court: Wikipedia’s Arbitration Committee and Its Implications for Self-Governance and Judiciary in Cyberspace: |date=2017 |doi=10.1177/0268580917722906 |url=https://wikipediaquality.com/wiki/Decision_Making_in_the_Self-Evolved_Collegiate_Court:_Wikipedia’s_Arbitration_Committee_and_Its_Implications_for_Self-Governance_and_Judiciary_in_Cyberspace: |journal=SAGE Publications Sage UK: London, England}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Konieczny, Piotr. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/Decision_Making_in_the_Self-Evolved_Collegiate_Court:_Wikipedia’s_Arbitration_Committee_and_Its_Implications_for_Self-Governance_and_Judiciary_in_Cyberspace:">Decision Making in the Self-Evolved Collegiate Court: Wikipedia’s Arbitration Committee and Its Implications for Self-Governance and Judiciary in Cyberspace:</a>&amp;quot;. SAGE Publications Sage UK: London, England. DOI: 10.1177/0268580917722906. <br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Semantic_Tagging_Using_Topic_Models_Exploiting_Wikipedia_Category_Network&diff=25162Semantic Tagging Using Topic Models Exploiting Wikipedia Category Network2020-08-14T06:04:45Z<p>Aaliyah: wikilinks</p>
<hr />
<div>'''Semantic Tagging Using Topic Models Exploiting Wikipedia Category Network''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Mehdi Allahyari]] and [[Krys J. Kochut]].<br />
<br />
== Overview ==<br />
In this paper the authors propose a probabilistic topic model that incorporates [[DBpedia]] knowledge into the topic model for tagging Web pages and online documents with topics discovered in them. The authors' method is based on the integration of the DBpedia hierarchical category network with statistical topic models, where DBpedia [[categories]] are considered as topics. The authors have conducted extensive experiments on two different datasets to demonstrate the effectiveness of the method.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Wikipedia-Based_Hybrid_Document_Representation_for_Textual_News_Classification&diff=25161Wikipedia-Based Hybrid Document Representation for Textual News Classification2020-08-14T06:03:04Z<p>Aaliyah: + Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia-Based Hybrid Document Representation for Textual News Classification<br />
| date = 2016<br />
| authors = [[Marcos Mouriño García]]<br />[[Roberto Pérez Rodríguez]]<br />[[Manuel Vilares Ferro]]<br />[[Luis Anido Rifón]]<br />
| doi = 10.1007/s00500-018-3101-5<br />
| link = https://link.springer.com/content/pdf/10.1007%2Fs00500-018-3101-5.pdf<br />
}}<br />
'''Wikipedia-Based Hybrid Document Representation for Textual News Classification''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Marcos Mouriño García]], [[Roberto Pérez Rodríguez]], [[Manuel Vilares Ferro]] and [[Luis Anido Rifón]].<br />
<br />
== Overview ==<br />
Automatic classification of news articles is a relevant problem due to the large amount of news generated every day, so it is crucial that they are classified to allow users to access information of interest quickly and effectively. On the one hand, traditional classification systems represent documents as bag-of-words (BoW), which are oblivious to two problems of language: synonymy and polysemy. On the other hand, several authors propose the use of a bag-of-concepts (BoC) representation of documents, which tackles synonymy and polysemy. This paper shows the benefits of using a hybrid representation of documents for the classification of textual news, leveraging the advantages of both approaches—the traditional BoW representation and a BoC approach based on [[Wikipedia]] knowledge. To evaluate the proposal, the authors used three of the most relevant algorithms in the state of the art—SVM, [[Random Forest]] and Naive Bayes—and two corpora: the Reuters-21578 corpus and a purpose-built corpus, Reuters-27000. Results obtained show that the performance of the classification algorithm depends on the dataset used, and also demonstrate that the enrichment of the BoW representation with the concepts extracted from documents through the semantic annotator adds useful information to the classifier and improves its performance. Experiments conducted show performance increases of up to 4.12% when classifying the Reuters-21578 corpus with the SVM algorithm and up to 49.35% when classifying the corpus Reuters-27000 with the [[Random Forest]] algorithm.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Towards_Building_a_Multilingual_Semantic_Network:_Identifying_Interlingual_Links_in_Wikipedia&diff=25160Towards Building a Multilingual Semantic Network: Identifying Interlingual Links in Wikipedia2020-08-14T06:01:55Z<p>Aaliyah: New study: Towards Building a Multilingual Semantic Network: Identifying Interlingual Links in Wikipedia</p>
<hr />
<div>'''Towards Building a Multilingual Semantic Network: Identifying Interlingual Links in Wikipedia''' - scientific work related to Wikipedia quality published in 2012, written by Bharath Dandala, Rada Mihalcea and Razvan C. Bunescu.<br />
<br />
== Overview ==<br />
Wikipedia is a Web based, freely available multilingual encyclopedia, constructed in a collaborative effort by thousands of contributors. Wikipedia articles on the same topic in different languages are connected via interlingual (or translational) links. These links serve as an excellent resource for obtaining lexical translations, or building multilingual dictionaries and semantic networks. As these links are manually built, many links are missing or simply wrong. This paper describes a supervised learning method for generating new links and detecting existing incorrect links. Since there is no dataset available to evaluate the resulting interlingual links, the authors create their own gold standard by sampling translational links from four language pairs using distance heuristics. They manually annotate the sampled translation links and use them to evaluate the output of their method for automatic link detection and correction.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Research_on_the_Extraction_of_Wikipedia-Based_Chinese-Khmer_Named_Entity_Equivalents&diff=25159Research on the Extraction of Wikipedia-Based Chinese-Khmer Named Entity Equivalents2020-08-14T05:59:19Z<p>Aaliyah: Adding infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Research on the Extraction of Wikipedia-Based Chinese-Khmer Named Entity Equivalents<br />
| date = 2015<br />
| authors = [[Qing Xia]]<br />[[Xin Yan]]<br />[[Zhengtao Yu]]<br />[[Shengxiang Gao]]<br />
| doi = 10.1007/978-3-319-25207-0_32<br />
| link = http://dl.acm.org/citation.cfm?id=2978858<br />
}}<br />
'''Research on the Extraction of Wikipedia-Based Chinese-Khmer Named Entity Equivalents''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Qing Xia]], [[Xin Yan]], [[Zhengtao Yu]] and [[Shengxiang Gao]].<br />
<br />
== Overview ==<br />
Named entity equivalents play a significant role in the processing of cross-language information. However, limited by available corpus resources, few in-depth studies have been made on the extraction of bilingual Chinese-Khmer [[named entity]] equivalents. On account of this, this paper proposes a [[Wikipedia]]-based approach that utilizes the internal web links in Wikipedia and computes feature similarity to extract bilingual Chinese-Khmer named entity equivalents. The experimental results show that good results are achieved when the entity equivalents are acquired through the internal web links in Wikipedia, with an F value of up to 90.67%. The results are also quite favorable when the bilingual Chinese-Khmer named entity equivalents are acquired through the computation of feature similarity, showing that the method proposed in this paper performs well.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Societal_Controversies_in_Wikipedia_Articles&diff=25158Societal Controversies in Wikipedia Articles2020-08-14T05:57:32Z<p>Aaliyah: + Infobox work</p>
<hr />
<div>{{Infobox work<br />
| title = Societal Controversies in Wikipedia Articles<br />
| date = 2015<br />
| authors = [[Erik Borra]]<br />[[Esther Weltevrede]]<br />[[Paolo Ciuccarelli]]<br />[[Andreas Kaltenbrunner]]<br />[[David Laniado]]<br />[[Giovanni Magni]]<br />[[Michele Mauri]]<br />[[Richard Rogers]]<br />[[Tommaso Venturini]]<br />
| doi = 10.1145/2702123.2702436<br />
| link = http://dl.acm.org/citation.cfm?id=2702123.2702436<br />
}}<br />
'''Societal Controversies in Wikipedia Articles''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Erik Borra]], [[Esther Weltevrede]], [[Paolo Ciuccarelli]], [[Andreas Kaltenbrunner]], [[David Laniado]], [[Giovanni Magni]], [[Michele Mauri]], [[Richard Rogers]] and [[Tommaso Venturini]].<br />
<br />
== Overview ==<br />
Collaborative content creation inevitably reaches situations where different points of view lead to conflict. Authors focus on [[Wikipedia]], the free encyclopedia anyone may edit, where disputes about content in controversial articles often reflect larger societal debates. While Wikipedia has a public edit history and discussion section for every article, the substance of these sections is difficult to fathom for Wikipedia users interested in the development of an article and in locating which topics were most controversial. In this paper authors present Contropedia, a tool that augments Wikipedia articles and gives insight into the development of controversial topics. Contropedia uses an efficient language-agnostic measure based on the edit history that focuses on wiki links to easily identify which topics within a Wikipedia article have been most controversial and when.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Building_Academic_Literacy_and_Research_Skills_by_Contributing_to_Wikipedia:_a_Case_Study_at_an_Australian_University&diff=25157Building Academic Literacy and Research Skills by Contributing to Wikipedia: a Case Study at an Australian University2020-08-14T05:55:48Z<p>Aaliyah: Starting an article - Building Academic Literacy and Research Skills by Contributing to Wikipedia: a Case Study at an Australian University</p>
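Contropedia's actual measure is more elaborate, but the core idea of a wiki-link-based controversy signal can be sketched roughly: count how often each [[wiki link]] is touched across an article's edit history. This is a hypothetical illustration with made-up edit fragments, not Contropedia's algorithm:

```python
from collections import Counter
import re

# Rough sketch of a link-based controversy signal: count how many edits
# touched each [[wiki link]]. Illustration only, not Contropedia's method.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def link_edit_counts(revision_diffs):
    """revision_diffs: iterable of changed-text fragments, one per edit."""
    counts = Counter()
    for fragment in revision_diffs:
        # Each distinct link touched by an edit gets one count.
        for link in set(WIKILINK.findall(fragment)):
            counts[link] += 1
    return counts

diffs = [
    "added [[climate change]] claim",
    "reworded [[climate change]] section, cite [[IPCC]]",
    "minor typo fix",
]
print(link_edit_counts(diffs).most_common(1))  # → [('climate change', 2)]
```

Links edited most often would then rank as the article's most contested topics.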
<hr />
<div>'''Building Academic Literacy and Research Skills by Contributing to Wikipedia: a Case Study at an Australian University''' - scientific work related to Wikipedia quality published in 2014, written by Julia Miller.<br />
<br />
== Overview ==<br />
Many lecturers are unhappy because their students refer to Wikipedia in their academic assignments. Rather than despairing, however, it is possible to use Wikipedia as an incentive to improve students’ writing and research skills. The following case study used an established Research Skills Development framework combined with a Personal Development Plan with the aim of assessing the improvement in research and academic writing skills which students attributed to an assignment in which they wrote entries for potential uploading to Wikipedia. The participants (n = 11) were students enrolled in a semester-long academic literacy course in a preparatory program for study at an Australian university. Scaffolding was provided by the lecturer at all stages of the assignment, including help with database searching, referencing and academic writing style. Although the sample size was small, quantitative data showed an educationally significant improvement in the students’ research skills, while qualitative comments revealed that despite some technical difficulties in using the Wikipedia site, many students valued the opportunity to write for a “real” audience and not just for a lecturer.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Extending_Skos:_a_Wikipedia-Based_Unified_Annotation_Model_for_Creating_Interoperable_Domain_Ontologies&diff=25156Extending Skos: a Wikipedia-Based Unified Annotation Model for Creating Interoperable Domain Ontologies2020-08-14T05:54:19Z<p>Aaliyah: Adding infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Extending Skos: a Wikipedia-Based Unified Annotation Model for Creating Interoperable Domain Ontologies<br />
| date = 2015<br />
| authors = [[Elshaimaa Ali]]<br />[[Vijay V. Raghavan]]<br />
| doi = 10.1007/978-3-319-25252-0_39<br />
| link = https://link.springer.com/content/pdf/10.1007%2F978-3-319-25252-0_39.pdf<br />
}}<br />
'''Extending Skos: a Wikipedia-Based Unified Annotation Model for Creating Interoperable Domain Ontologies''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Elshaimaa Ali]] and [[Vijay V. Raghavan]].<br />
<br />
== Overview ==<br />
Interoperability of annotations across domains is essential for facilitating data interchange between semantic applications. Foundational ontologies, such as SKOS (Simple Knowledge Organization System), play an important role in creating an interoperable layer for annotation. Authors propose a multi-layer [[ontology]] schema, named SKOS-Wiki, which extends SKOS to create an annotation model and relies on the semantic structure of [[Wikipedia]]. Authors also inherit the [[DBpedia]] definition of [[named entities]]. The main goal of the proposed extension is to fill the semantic gaps between these models to create a unified annotation schema.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Wikipedia_in_the_Free_Culture_Revolution&diff=25155Wikipedia in the Free Culture Revolution2020-08-14T05:52:11Z<p>Aaliyah: Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Wikipedia in the Free Culture Revolution<br />
| date = 2005<br />
| authors = [[Jimmy Wales]]<br />
| doi = 10.1145/1094855.1094859<br />
| link = http://dl.acm.org/ft_gateway.cfm?id=1094859&amp;type=pdf<br />
}}<br />
'''Wikipedia in the Free Culture Revolution''' - scientific work related to [[Wikipedia quality]] published in 2005, written by [[Jimmy Wales]].<br />
<br />
== Overview ==<br />
Jimmy "Jimbo" Wales is the founder of [[Wikipedia]].org, the free encyclopedia project, and Wikicities.com, which extends the social concepts of Wikipedia into new areas. Jimmy was formerly a futures and options trader in Chicago, and currently travels the world evangelizing the success of Wikipedia and the importance of free culture. When not traveling, Jimmy lives in Florida with his wife and daughter.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Wales, Jimmy. (2005). "[[Wikipedia in the Free Culture Revolution]]". DOI: 10.1145/1094855.1094859. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Wales |first1=Jimmy |title=Wikipedia in the Free Culture Revolution |date=2005 |doi=10.1145/1094855.1094859 |url=https://wikipediaquality.com/wiki/Wikipedia_in_the_Free_Culture_Revolution}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Wales, Jimmy. (2005). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikipedia_in_the_Free_Culture_Revolution">Wikipedia in the Free Culture Revolution</a>&amp;quot;. DOI: 10.1145/1094855.1094859. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Domain-Specific_Semantic_Relatedness_from_Wikipedia_Structure:_a_Case_Study_in_Biomedical_Text&diff=25154Domain-Specific Semantic Relatedness from Wikipedia Structure: a Case Study in Biomedical Text2020-08-14T05:50:29Z<p>Aaliyah: Embed for English Wikipedia, HTML</p>
<hr />
<div>{{Infobox work<br />
| title = Domain-Specific Semantic Relatedness from Wikipedia Structure: a Case Study in Biomedical Text<br />
| date = 2015<br />
| authors = [[Armin Sajadi]]<br />[[Evangelos E. Milios]]<br />[[Vlado Keselj]]<br />[[Jeannette C. M. Janssen]]<br />
| doi = 10.1007/978-3-319-18111-0_26<br />
| link = https://link.springer.com/chapter/10.1007%2F978-3-319-18111-0_26<br />
}}<br />
'''Domain-Specific Semantic Relatedness from Wikipedia Structure: a Case Study in Biomedical Text''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Armin Sajadi]], [[Evangelos E. Milios]], [[Vlado Keselj]] and [[Jeannette C. M. Janssen]].<br />
<br />
== Overview ==<br />
Wikipedia is becoming an important knowledge source in various domain-specific applications based on concept representation. This introduces the need for concrete evaluation of [[Wikipedia]] as a foundation for computing semantic [[relatedness]] between concepts. While lexical resources like [[WordNet]] cover generic English well, they are weak in their coverage of domain-specific terms and [[named entities]], which is one of the strengths of Wikipedia. Furthermore, semantic relatedness methods that rely on the hierarchical structure of a lexical resource are not directly applicable to the Wikipedia link structure, which is not hierarchical and whose links do not capture well-defined semantic relationships like hyponymy.<br />
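As background on link-based relatedness (this is a widely cited measure of that family, not necessarily the authors' own method), Milne and Witten's Wikipedia Link-based Measure scores two concepts by the overlap of the pages linking to them. A minimal sketch with made-up link sets:

```python
import math

# Milne & Witten's Wikipedia Link-based Measure (background illustration,
# not the paper's method): relatedness from the overlap of inlink sets.
# total_pages is the size of the wiki; the link sets here are invented.

def wlm_relatedness(inlinks_a, inlinks_b, total_pages):
    a, b = set(inlinks_a), set(inlinks_b)
    common = a & b
    if not common:
        return 0.0
    num = math.log(max(len(a), len(b))) - math.log(len(common))
    den = math.log(total_pages) - math.log(min(len(a), len(b)))
    return max(0.0, 1.0 - num / den)

# Two concepts sharing 5 of their 10 inlinks in a 100-page wiki:
a = range(0, 10)   # pages linking to concept A
b = range(5, 15)   # pages linking to concept B
print(round(wlm_relatedness(a, b, 100), 3))  # → 0.699
```

Note the measure uses only the flat link graph, so it sidesteps the lack of hierarchy that makes WordNet-style methods inapplicable.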
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Sajadi, Armin; Milios, Evangelos E.; Keselj, Vlado; Janssen, Jeannette C. M.. (2015). "[[Domain-Specific Semantic Relatedness from Wikipedia Structure: a Case Study in Biomedical Text]]". Springer, Cham. DOI: 10.1007/978-3-319-18111-0_26. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Sajadi |first1=Armin |last2=Milios |first2=Evangelos E. |last3=Keselj |first3=Vlado |last4=Janssen |first4=Jeannette C. M. |title=Domain-Specific Semantic Relatedness from Wikipedia Structure: a Case Study in Biomedical Text |date=2015 |doi=10.1007/978-3-319-18111-0_26 |url=https://wikipediaquality.com/wiki/Domain-Specific_Semantic_Relatedness_from_Wikipedia_Structure:_a_Case_Study_in_Biomedical_Text |journal=Springer, Cham}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Sajadi, Armin; Milios, Evangelos E.; Keselj, Vlado; Janssen, Jeannette C. M.. (2015). &amp;quot;<a href="https://wikipediaquality.com/wiki/Domain-Specific_Semantic_Relatedness_from_Wikipedia_Structure:_a_Case_Study_in_Biomedical_Text">Domain-Specific Semantic Relatedness from Wikipedia Structure: a Case Study in Biomedical Text</a>&amp;quot;. Springer, Cham. DOI: 10.1007/978-3-319-18111-0_26. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Infoguides:_Women_on_Wikipedia_Edit-A-Thon:_Biographical_Resources&diff=25153Infoguides: Women on Wikipedia Edit-A-Thon: Biographical Resources2020-08-14T05:48:09Z<p>Aaliyah: Infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Infoguides: Women on Wikipedia Edit-A-Thon: Biographical Resources<br />
| date = 2017<br />
| authors = [[Lara Nicosia]]<br />
| link = http://infoguides.rit.edu/WomenWikiRIT/bio-sources<br />
}}<br />
'''Infoguides: Women on Wikipedia Edit-A-Thon: Biographical Resources''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Lara Nicosia]].<br />
<br />
== Overview ==<br />
Information and resources relevant to the Women on [[Wikipedia]] Edit-a-thon hosted at RIT on Saturday, March 24th, from 11 a.m. to 4 p.m.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Hacking_the_Research_Library:_Wikipedia,_Trump,_and_Information_Literacy_in_the_Escape_Room_at_Fresno_State&diff=23286Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State2020-01-10T05:20:10Z<p>Aaliyah: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State<br />
| date = 2017<br />
| authors = [[Raymond Pun]]<br />
| doi = 10.1086/693489<br />
| link = https://www.journals.uchicago.edu/doi/full/10.1086/693489<br />
}}<br />
'''Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State''' - scientific work related to [[Wikipedia quality]] published in 2017, written by [[Raymond Pun]].<br />
<br />
== Overview ==<br />
How can librarians teach information literacy in such a politicized atmosphere? In spring 2017, the library at Fresno State held a series of workshops that introduced first-year students to information literacy in a “gamification” setting, an escape room, to encourage community learning. The theme of the workshop focused on President Donald Trump. In this one-shot workshop, students were “locked” in the escape room in the library and had to solve a series of information-literacy puzzles and research tasks, including hacking into Donald Trump’s [[Wikipedia]] page, fact-checking Trump’s tweets, and comparing and analyzing fake news with online databases. The article presents this workshop as a case study on how librarians can creatively engage with students to collaborate, learn, and build information literacy skills using Trump as the teaching subject.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Pun, Raymond. (2017). "[[Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State]]". University of Chicago Press, Chicago, IL. DOI: 10.1086/693489. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Pun |first1=Raymond |title=Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State |date=2017 |doi=10.1086/693489 |url=https://wikipediaquality.com/wiki/Hacking_the_Research_Library:_Wikipedia,_Trump,_and_Information_Literacy_in_the_Escape_Room_at_Fresno_State |journal=University of Chicago Press, Chicago, IL}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Pun, Raymond. (2017). &amp;quot;<a href="https://wikipediaquality.com/wiki/Hacking_the_Research_Library:_Wikipedia,_Trump,_and_Information_Literacy_in_the_Escape_Room_at_Fresno_State">Hacking the Research Library: Wikipedia, Trump, and Information Literacy in the Escape Room at Fresno State</a>&amp;quot;. University of Chicago Press, Chicago, IL. DOI: 10.1086/693489. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Entity_Extraction,_Linking,_Classification,_and_Tagging_for_Social_Media:_a_Wikipedia-Based_Approach&diff=23285Entity Extraction, Linking, Classification, and Tagging for Social Media: a Wikipedia-Based Approach2020-01-10T05:18:14Z<p>Aaliyah: Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Entity Extraction, Linking, Classification, and Tagging for Social Media: a Wikipedia-Based Approach<br />
| date = 2013<br />
| authors = [[Abhishek Gattani]]<br />[[Digvijay S. Lamba]]<br />[[Nikesh Garera]]<br />[[Mitul Tiwari]]<br />[[Xiaoyong Chai]]<br />[[Sanjib Das]]<br />[[Sri Subramaniam]]<br />[[Anand Rajaraman]]<br />[[Venky Harinarayan]]<br />[[AnHai Doan]]<br />
| doi = 10.14778/2536222.2536237<br />
| link = http://dl.acm.org/citation.cfm?doid=2536222.2536237<br />
}}<br />
'''Entity Extraction, Linking, Classification, and Tagging for Social Media: a Wikipedia-Based Approach''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Abhishek Gattani]], [[Digvijay S. Lamba]], [[Nikesh Garera]], [[Mitul Tiwari]], [[Xiaoyong Chai]], [[Sanjib Das]], [[Sri Subramaniam]], [[Anand Rajaraman]], [[Venky Harinarayan]] and [[AnHai Doan]].<br />
<br />
== Overview ==<br />
Many applications that process social data, such as tweets, must extract entities from tweets (e.g., "Obama" and "Hawaii" in "Obama went to Hawaii"), link them to entities in a knowledge base (e.g., [[Wikipedia]]), classify tweets into a set of predefined topics, and assign descriptive tags to tweets. Few solutions exist today to solve these problems for social data, and they are limited in important ways. Further, even though several industrial systems such as OpenCalais have been deployed to solve these problems for text data, little if any has been published about them, and it is unclear if any of the systems has been tailored for social media.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Gattani, Abhishek; Lamba, Digvijay S.; Garera, Nikesh; Tiwari, Mitul; Chai, Xiaoyong; Das, Sanjib; Subramaniam, Sri; Rajaraman, Anand; Harinarayan, Venky; Doan, AnHai. (2013). "[[Entity Extraction, Linking, Classification, and Tagging for Social Media: a Wikipedia-Based Approach]]". VLDB Endowment. DOI: 10.14778/2536222.2536237. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Gattani |first1=Abhishek |last2=Lamba |first2=Digvijay S. |last3=Garera |first3=Nikesh |last4=Tiwari |first4=Mitul |last5=Chai |first5=Xiaoyong |last6=Das |first6=Sanjib |last7=Subramaniam |first7=Sri |last8=Rajaraman |first8=Anand |last9=Harinarayan |first9=Venky |last10=Doan |first10=AnHai |title=Entity Extraction, Linking, Classification, and Tagging for Social Media: a Wikipedia-Based Approach |date=2013 |doi=10.14778/2536222.2536237 |url=https://wikipediaquality.com/wiki/Entity_Extraction,_Linking,_Classification,_and_Tagging_for_Social_Media:_a_Wikipedia-Based_Approach |journal=VLDB Endowment}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Gattani, Abhishek; Lamba, Digvijay S.; Garera, Nikesh; Tiwari, Mitul; Chai, Xiaoyong; Das, Sanjib; Subramaniam, Sri; Rajaraman, Anand; Harinarayan, Venky; Doan, AnHai. (2013). &amp;quot;<a href="https://wikipediaquality.com/wiki/Entity_Extraction,_Linking,_Classification,_and_Tagging_for_Social_Media:_a_Wikipedia-Based_Approach">Entity Extraction, Linking, Classification, and Tagging for Social Media: a Wikipedia-Based Approach</a>&amp;quot;. VLDB Endowment. DOI: 10.14778/2536222.2536237. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Leveraging_Wikipedia_and_Context_Features_for_Clinical_Event_Extraction_from_Mixed-Language_Discharge_Summary&diff=23284Leveraging Wikipedia and Context Features for Clinical Event Extraction from Mixed-Language Discharge Summary2020-01-10T05:15:36Z<p>Aaliyah: Adding embed</p>
<hr />
<div>{{Infobox work<br />
| title = Leveraging Wikipedia and Context Features for Clinical Event Extraction from Mixed-Language Discharge Summary<br />
| date = 2014<br />
| authors = [[Kwang-Yong Jeong]]<br />[[Wangjin Yi]]<br />[[Jae-Wook Seol]]<br />[[Jinwook Choi]]<br />[[Kyung-Soon Lee]]<br />
| doi = 10.1007/978-3-319-12844-3_26<br />
| link = https://link.springer.com/chapter/10.1007%2F978-3-319-12844-3_26<br />
}}<br />
'''Leveraging Wikipedia and Context Features for Clinical Event Extraction from Mixed-Language Discharge Summary''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Kwang-Yong Jeong]], [[Wangjin Yi]], [[Jae-Wook Seol]], [[Jinwook Choi]] and [[Kyung-Soon Lee]].<br />
<br />
== Overview ==<br />
Unstructured clinical texts contain patients’ disease-related narratives, but mining this kind of information requires elaborate work. For classifying the semantic types of clinical terms in particular, domain knowledge from resources such as the Unified Medical Language System (UMLS) is essential. The UMLS, however, is limited in its coverage of languages other than English. In this paper, authors leverage [[Wikipedia]] as well as UMLS for clinical event extraction, especially from clinical narratives written in mixed language. Semantic [[features]] for clinical terms are extracted based on semantic networks of hierarchical [[categories]] in Wikipedia. Semantic types for Korean clinical terms are detected by using translation links and semantic networks in Wikipedia. An additional notable feature is a controlled vocabulary of clue words, which provides contextual evidence for determining the clinical semantic type of a word. Experiments on 150 discharge summaries written in English and Korean achieved an F1-measure of 75.9%. This result shows that the proposed features are effective for clinical event extraction.<br />
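The translation-link idea can be sketched as a simple two-step lookup: map a Korean term to its English Wikipedia counterpart via an interlanguage link, then look up that article's semantic type. The dictionaries below are invented stand-ins for real Wikipedia language links and type assignments, not data from the paper:

```python
# Hypothetical sketch of semantic-type detection via translation links:
# Korean term -> English Wikipedia title -> semantic type.
# Both mappings are invented examples, not real extracted data.

LANGLINKS = {"당뇨병": "Diabetes mellitus"}       # ko term -> en article title
SEMANTIC_TYPE = {"Diabetes mellitus": "Disease"}  # en title -> semantic type

def semantic_type_for(term):
    """Return the semantic type reachable via a translation link, if any."""
    english = LANGLINKS.get(term)
    return SEMANTIC_TYPE.get(english) if english else None

print(semantic_type_for("당뇨병"))  # → Disease
```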
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Jeong, Kwang-Yong; Yi, Wangjin; Seol, Jae-Wook; Choi, Jinwook; Lee, Kyung-Soon. (2014). "[[Leveraging Wikipedia and Context Features for Clinical Event Extraction from Mixed-Language Discharge Summary]]". Springer, Cham. DOI: 10.1007/978-3-319-12844-3_26. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Jeong |first1=Kwang-Yong |last2=Yi |first2=Wangjin |last3=Seol |first3=Jae-Wook |last4=Choi |first4=Jinwook |last5=Lee |first5=Kyung-Soon |title=Leveraging Wikipedia and Context Features for Clinical Event Extraction from Mixed-Language Discharge Summary |date=2014 |doi=10.1007/978-3-319-12844-3_26 |url=https://wikipediaquality.com/wiki/Leveraging_Wikipedia_and_Context_Features_for_Clinical_Event_Extraction_from_Mixed-Language_Discharge_Summary |journal=Springer, Cham}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Jeong, Kwang-Yong; Yi, Wangjin; Seol, Jae-Wook; Choi, Jinwook; Lee, Kyung-Soon. (2014). &amp;quot;<a href="https://wikipediaquality.com/wiki/Leveraging_Wikipedia_and_Context_Features_for_Clinical_Event_Extraction_from_Mixed-Language_Discharge_Summary">Leveraging Wikipedia and Context Features for Clinical Event Extraction from Mixed-Language Discharge Summary</a>&amp;quot;. Springer, Cham. DOI: 10.1007/978-3-319-12844-3_26. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Wikipedia_and_the_University,_a_Case_Study&diff=23283Wikipedia and the University, a Case Study2020-01-10T05:14:33Z<p>Aaliyah: Adding new article - Wikipedia and the University, a Case Study</p>
<hr />
<div>'''Wikipedia and the University, a Case Study''' - scientific work related to Wikipedia quality published in 2012, written by Charles Knight and Sam Pryke.<br />
<br />
== Overview ==<br />
This article discusses the use of Wikipedia by academics and students for learning and teaching activities at Liverpool Hope University. Hope has distinctive aspects but authors consider the findings to be indicative of Wikipedia use at other British universities. First authors discuss general issues of Wikipedia use within the university. Second, authors examine existing research on Wikipedia use amongst students and academics. Based upon a sample of 133 academics and 1222 students, principal findings were: (1) 75% of academics and students use Wikipedia; (2) student use is typically confined to the initial stages of assessments; (3) a quarter of academics provide guidance on how to use Wikipedia and (4) 70% of academics use Wikipedia for background information for teaching purposes, something that it is not influenced by whether student use is tolerated or not. Authors conclusion is that whilst Wikipedia is now unofficially integrated into universities, it is not ‘the’ information resource as feared by many and that a...</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Platform_Affordances_and_Data_Practices:_the_Value_of_Dispute_on_Wikipedia&diff=23282Platform Affordances and Data Practices: the Value of Dispute on Wikipedia2020-01-10T05:12:57Z<p>Aaliyah: + infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Platform Affordances and Data Practices: the Value of Dispute on Wikipedia<br />
| date = 2016<br />
| authors = [[Esther Weltevrede]]<br />[[Erik Borra]]<br />
| doi = 10.1177/2053951716653418<br />
| link = http://journals.sagepub.com/doi/10.1177/2053951716653418<br />
}}<br />
'''Platform Affordances and Data Practices: the Value of Dispute on Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Esther Weltevrede]] and [[Erik Borra]].<br />
<br />
== Overview ==<br />
In this paper authors introduce the device perspective as a methodological contribution to platform studies. Through an engagement with debates about the notion of affordances, which focus on the relation between the technical and the social, authors put forward an approach to study the production of data within platforms by engaging with the material properties of platforms as well as their interpretation and deployment by various types of users. As a case in point, authors study how the affordances of [[Wikipedia]] are deployed in the production of encyclopedic knowledge and how this can be used to study controversies. The analysis shows how Wikipedia affords unstable encyclopedic knowledge by having mechanisms in place that suggest the continuous (re)negotiation of existing knowledge. Authors furthermore showcase the use of [[open-source]] software, Contropedia, which can be utilized to study knowledge production on Wikipedia.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Wiki-Rec:_a_Semantic-Based_Recommendation_System_Using_Wikipedia_as_an_Ontology&diff=23281Wiki-Rec: a Semantic-Based Recommendation System Using Wikipedia as an Ontology2020-01-10T05:10:14Z<p>Aaliyah: Adding embed</p>
<hr />
<div>{{Infobox work<br />
| title = Wiki-Rec: a Semantic-Based Recommendation System Using Wikipedia as an Ontology<br />
| date = 2010<br />
| authors = [[Ahmed Elgohary]]<br />[[Hussein Nomir]]<br />[[Ibrahim Sabek]]<br />[[Mohamed Samir]]<br />[[Moustafa Badawy]]<br />[[Noha A. Yoursi]]<br />
| doi = 10.1109/ISDA.2010.5687117<br />
| link = http://ieeexplore.ieee.org/document/5687117/<br />
}}<br />
'''Wiki-Rec: a Semantic-Based Recommendation System Using Wikipedia as an Ontology''' - scientific work related to [[Wikipedia quality]] published in 2010, written by [[Ahmed Elgohary]], [[Hussein Nomir]], [[Ibrahim Sabek]], [[Mohamed Samir]], [[Moustafa Badawy]] and [[Noha A. Yoursi]].<br />
<br />
== Overview ==<br />
Nowadays, satisfying user needs has become the main challenge in a variety of web applications. Recommender systems play a major role in that direction. However, as most information is present in textual form, recommender systems face the challenge of efficiently analyzing huge amounts of text. The usage of semantic-based analysis has gained much interest in recent years, and the emergence of ontologies has further facilitated semantic interpretation of text. However, relying on an [[ontology]] for semantic analysis requires substantial effort to construct and maintain the ontologies used. Besides, the currently known ontologies cover a small number of the world's concepts, especially when non-domain-specific concepts are needed. This paper proposes the use of [[Wikipedia]] as an ontology to solve the problems of using traditional ontologies for text analysis in text-based recommendation systems. A full system model that unifies semantic-based analysis with a collaborative-via-content recommendation system is presented.<br />
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Elgohary, Ahmed; Nomir, Hussein; Sabek, Ibrahim; Samir, Mohamed; Badawy, Moustafa; Yoursi, Noha A.. (2010). "[[Wiki-Rec: a Semantic-Based Recommendation System Using Wikipedia as an Ontology]]". DOI: 10.1109/ISDA.2010.5687117. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Elgohary |first1=Ahmed |last2=Nomir |first2=Hussein |last3=Sabek |first3=Ibrahim |last4=Samir |first4=Mohamed |last5=Badawy |first5=Moustafa |last6=Yoursi |first6=Noha A. |title=Wiki-Rec: a Semantic-Based Recommendation System Using Wikipedia as an Ontology |date=2010 |doi=10.1109/ISDA.2010.5687117 |url=https://wikipediaquality.com/wiki/Wiki-Rec:_a_Semantic-Based_Recommendation_System_Using_Wikipedia_as_an_Ontology}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Elgohary, Ahmed; Nomir, Hussein; Sabek, Ibrahim; Samir, Mohamed; Badawy, Moustafa; Yoursi, Noha A.. (2010). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wiki-Rec:_a_Semantic-Based_Recommendation_System_Using_Wikipedia_as_an_Ontology">Wiki-Rec: a Semantic-Based Recommendation System Using Wikipedia as an Ontology</a>&amp;quot;. DOI: 10.1109/ISDA.2010.5687117. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Lifting_the_Veil:_Improving_Accountability_and_Social_Transparency_in_Wikipedia_with_Wikidashboard&diff=23280Lifting the Veil: Improving Accountability and Social Transparency in Wikipedia with Wikidashboard2020-01-10T05:09:07Z<p>Aaliyah: Infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Lifting the Veil: Improving Accountability and Social Transparency in Wikipedia with Wikidashboard<br />
| date = 2008<br />
| authors = [[Bongwon Suh]]<br />[[Ed H. Chi]]<br />[[Aniket Kittur]]<br />[[Bryan A. Pendleton]]<br />
| doi = 10.1145/1357054.1357214<br />
| link = http://dl.acm.org/citation.cfm?id=1357214<br />
}}<br />
'''Lifting the Veil: Improving Accountability and Social Transparency in Wikipedia with Wikidashboard''' - scientific work related to [[Wikipedia quality]] published in 2008, written by [[Bongwon Suh]], [[Ed H. Chi]], [[Aniket Kittur]] and [[Bryan A. Pendleton]].<br />
<br />
== Overview ==<br />
Wikis are collaborative systems in which virtually anyone can edit anything. Although wikis have become highly popular in many domains, their mutable nature often leads them to be distrusted as a reliable source of information. Here authors describe a social dynamic analysis tool called WikiDashboard which aims to improve social transparency and accountability on [[Wikipedia]] articles. Early reactions from users suggest that the increased transparency afforded by the tool can improve the interpretation, communication, and trustworthiness of Wikipedia articles.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Keyword_Extraction_for_Mining_Meaningful_Learning-Contents_on_the_Web_Using_Wikipedia&diff=23279Keyword Extraction for Mining Meaningful Learning-Contents on the Web Using Wikipedia2020-01-10T05:06:37Z<p>Aaliyah: + links</p>
<hr />
<div>'''Keyword Extraction for Mining Meaningful Learning-Contents on the Web Using Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Tetsuya Toyota]] and [[Yuan Sun]].<br />
<br />
== Overview ==<br />
The purpose of this paper is to provide a solution for extracting appropriate keywords to identify meaningful learning contents on the Web. There are some issues in identifying documents that have learning content. First, the documents need to be identified according to the learning area of a student's school year. Second, the documents need to be identified according to the learning area that the student is now studying or has studied. In this paper, authors present a method of extracting keywords for mining meaningful learning contents using [[Wikipedia]]. First, authors select Wikipedia articles matching an arbitrary input keyword for a learning item. Then, authors select other Wikipedia articles related to those selected in the first step, using links and [[categories]] of Wikipedia. Furthermore, authors calculate degrees of association between the articles and the keywords using PF-IBF, and assign the degree to each keyword. Finally, authors screen the keywords using the student's curriculum guideline to adjust the keywords to the learning area of the student's school year. As a next step, authors plan to develop a method of screening keywords according to each student's ability, so that more appropriate keywords can be selected for each student.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Automated_News_Suggestions_for_Populating_Wikipedia_Entity_Pages&diff=23278Automated News Suggestions for Populating Wikipedia Entity Pages2020-01-10T05:04:51Z<p>Aaliyah: Int.links</p>
<hr />
<div>'''Automated News Suggestions for Populating Wikipedia Entity Pages''' - scientific work related to [[Wikipedia quality]] published in 2015, written by [[Besnik Fetahu]], [[Katja Markert]] and [[Avishek Anand]].<br />
<br />
== Overview ==<br />
Wikipedia entity pages are a valuable source of information for direct consumption and for knowledge-base construction, update and maintenance. Facts in these entity pages are typically supported by references. Recent studies show that as much as 20% of the references are from online news sources. However, many entity pages are incomplete even when relevant information is already available in existing news articles. Even for the references that are present, there is often a delay between the news article's publication time and the reference time. In this work, the authors therefore look at [[Wikipedia]] through the lens of news and propose a novel news-article suggestion task to improve news coverage in Wikipedia and reduce the lag of newsworthy references. The work finds direct application, as a precursor, to Wikipedia page generation and knowledge-base acceleration tasks that rely on relevant and high-quality input sources. The authors propose a two-stage supervised approach for suggesting news articles to entity pages for a given state of Wikipedia. First, they suggest news articles to Wikipedia entities (article-entity placement) relying on a rich set of [[features]] which take into account the salience and relative authority of entities, and the novelty of news articles to entity pages. Second, they determine the exact section in the entity page for the input article (article-section placement), guided by class-based section templates. The authors perform an extensive evaluation of the approach based on ground-truth data extracted from external references in Wikipedia. They achieve a precision of up to 93% in the article-entity suggestion stage and up to 84% for article-section placement. Finally, they compare the approach against competitive baselines and show significant improvements.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Yago:_a_Core_of_Semantic_Knowledgeunifying_Wordnet_and_Wikipedia&diff=23277Yago: a Core of Semantic Knowledgeunifying Wordnet and Wikipedia2020-01-10T05:03:44Z<p>Aaliyah: wikilinks</p>
<hr />
<div>'''Yago: a Core of Semantic Knowledgeunifying Wordnet and Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2007, written by [[Fabian M. Suchanek]], [[Gjergji Kasneci]] and [[Gerhard Weikum]].<br />
<br />
== Overview ==<br />
The authors present YAGO, a lightweight and extensible [[ontology]] with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from [[Wikipedia]] and unified with [[WordNet]], using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality, by adding knowledge about individuals such as persons, organizations, products, etc. with their semantic relationships, and in quantity, by increasing the number of facts by more than an order of magnitude. The authors' empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, the authors show how YAGO can be further extended by state-of-the-art [[information extraction]] techniques.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Why_and_Where_Wikipedia_is_Cited_in_Journal_Articles&diff=23276Why and Where Wikipedia is Cited in Journal Articles2020-01-10T05:02:01Z<p>Aaliyah: Adding wikilinks</p>
<hr />
<div>'''Why and Where Wikipedia is Cited in Journal Articles''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Fariba Tohidinasab]] and [[Hamid R. Jamali]].<br />
<br />
== Overview ==<br />
The aim of this research was to identify the motivations for citing [[Wikipedia]] in scientific papers. The number of citations to Wikipedia, the location of citations, the type of citing papers, and the subjects of citing and cited articles were also determined and compared across subject fields. From all English articles indexed in Scopus in 2007 and 2012 that cited Wikipedia, 602 articles were selected using stratified random sampling. Content analysis and bibliometric methods were used to carry out the research. Results showed that there are 20 motivations for citing Wikipedia, the most frequent being to provide general information, definitions, facts and figures. Citations to Wikipedia appeared most often in the introduction and introductory sections of papers. Computer science, the internet and chemistry were the most cited subjects. The use of Wikipedia in articles is increasing both in quantity and in diversity. However, there are disciplinary differences in both the amount and the nature of Wikipedia use.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Mining_for_Practices_in_Community_Collections:_Finds_from_Simple_Wikipedia&diff=23275Mining for Practices in Community Collections: Finds from Simple Wikipedia2020-01-10T05:00:11Z<p>Aaliyah: + Embed</p>
<hr />
<div>{{Infobox work<br />
| title = Mining for Practices in Community Collections: Finds from Simple Wikipedia<br />
| date = 2008<br />
| authors = [[Matthijs den Besten]]<br />[[Alessandro Rossi]]<br />[[Loris Gaio]]<br />[[Max Loubser]]<br />[[Jean-Michel Dalle]]<br />
| doi = 10.1007/978-0-387-09684-1_9<br />
| link = https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID1157919_code831932.pdf?abstractid=1157919&amp;mirid=3<br />
}}<br />
'''Mining for Practices in Community Collections: Finds from Simple Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2008, written by [[Matthijs den Besten]], [[Alessandro Rossi]], [[Loris Gaio]], [[Max Loubser]] and [[Jean-Michel Dalle]].<br />
<br />
== Overview ==<br />
The challenges of commons-based peer production are usually associated with the development of complex software projects such as Linux and Apache, but open content production should not be treated as trivial either. For instance, while the task of maintaining a collection of encyclopedic articles might seem negligible compared to that of keeping together a software system with its many modules and interdependencies, it still poses quite demanding problems. In this paper, the authors describe the methods and practices adopted by Simple [[Wikipedia]] to keep its articles easy to read. Based on measurements of article [[readability]] and similarity, the authors conclude that while the mechanisms adopted by the community had some effect, in the long run more effort and new practices might be necessary to maintain an acceptable level of readability in the Simple Wikipedia collection.<br />
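A readability measurement of the kind the study relies on can be sketched with a standard surface formula. The sketch below uses the Flesch Reading Ease score with a crude vowel-group syllable counter; this particular formula and heuristic are illustrative assumptions, as the paper's exact metric may differ.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text.
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

simple = "The cat sat on the mat. It was warm."
dense = "Encyclopedic collaboration necessitates sophisticated coordination mechanisms."
```

Short sentences with short words score far higher than dense academic prose, which is the property a community like Simple Wikipedia would monitor over time.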
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Besten, Matthijs den; Rossi, Alessandro; Gaio, Loris; Loubser, Max; Dalle, Jean-Michel. (2008). "[[Mining for Practices in Community Collections: Finds from Simple Wikipedia]]". Springer, Boston, MA. DOI: 10.1007/978-0-387-09684-1_9. <br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Besten |first1=Matthijs den |last2=Rossi |first2=Alessandro |last3=Gaio |first3=Loris |last4=Loubser |first4=Max |last5=Dalle |first5=Jean-Michel |title=Mining for Practices in Community Collections: Finds from Simple Wikipedia |date=2008 |doi=10.1007/978-0-387-09684-1_9 |url=https://wikipediaquality.com/wiki/Mining_for_Practices_in_Community_Collections:_Finds_from_Simple_Wikipedia |journal=Springer, Boston, MA}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Besten, Matthijs den; Rossi, Alessandro; Gaio, Loris; Loubser, Max; Dalle, Jean-Michel. (2008). &amp;quot;<a href="https://wikipediaquality.com/wiki/Mining_for_Practices_in_Community_Collections:_Finds_from_Simple_Wikipedia">Mining for Practices in Community Collections: Finds from Simple Wikipedia</a>&amp;quot;. Springer, Boston, MA. DOI: 10.1007/978-0-387-09684-1_9. <br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Explaining_Authors%E2%80%99_Contribution_to_Pivotal_Artifacts_During_Mass_Collaboration_in_the_Wikipedia%E2%80%99s_Knowledge_Base&diff=23274Explaining Authors’ Contribution to Pivotal Artifacts During Mass Collaboration in the Wikipedia’s Knowledge Base2020-01-10T04:58:45Z<p>Aaliyah: + infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Explaining Authors’ Contribution to Pivotal Artifacts During Mass Collaboration in the Wikipedia’s Knowledge Base<br />
| date = 2014<br />
| authors = [[Iassen Halatchliyski]]<br />[[Johannes Moskaliuk]]<br />[[Joachim Kimmerle]]<br />[[Ulrike Cress]]<br />
| doi = 10.1007/s11412-013-9182-3<br />
| link = https://link.springer.com/article/10.1007%2Fs11412-013-9182-3<br />
}}<br />
'''Explaining Authors’ Contribution to Pivotal Artifacts During Mass Collaboration in the Wikipedia’s Knowledge Base''' - scientific work related to [[Wikipedia quality]] published in 2014, written by [[Iassen Halatchliyski]], [[Johannes Moskaliuk]], [[Joachim Kimmerle]] and [[Ulrike Cress]].<br />
<br />
== Overview ==<br />
This article discusses the relevance of large-scale mass collaboration for computer-supported collaborative learning (CSCL) research, adhering to a theoretical perspective that views collective knowledge both as substance and as participatory activity. In an empirical study using the German [[Wikipedia]] as a data source, the authors explored collective knowledge as manifested in the structure of artifacts that were created through the collaborative activity of authors with different levels of contribution experience. Wikipedia’s interconnected articles were considered at the macro level as a network and analyzed using a network analysis approach. The focus of this investigation was the relation between the authors’ experience and their contribution to two types of articles: central pivotal articles within the artifact network of a single knowledge domain and boundary-crossing pivotal articles within the artifact network of two adjacent knowledge domains. Both types of pivotal articles were identified by measuring the network position of artifacts based on network analysis indices of topological centrality. The results showed that authors with specialized contribution experience in one domain predominantly contributed to central pivotal articles within that domain. Authors with generalized contribution experience in two domains predominantly contributed to boundary-crossing pivotal articles between the knowledge domains. Moreover, article experience (i.e., the number of articles in both domains an author had contributed to) was positively related to the contribution to both types of pivotal articles, regardless of whether an author had specialized or generalized domain experience. The authors discuss the implications of their findings for future studies in the field of CSCL.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Some_Experiments_on_the_Wikipediamm_2008_Task:_Evaluating_the_Impact_of_Image_Names_in_Context-Based_Retrieval&diff=23273Some Experiments on the Wikipediamm 2008 Task: Evaluating the Impact of Image Names in Context-Based Retrieval2020-01-10T04:57:27Z<p>Aaliyah: + embed code</p>
<hr />
<div>{{Infobox work<br />
| title = Some Experiments on the Wikipediamm 2008 Task: Evaluating the Impact of Image Names in Context-Based Retrieval<br />
| date = 2008<br />
| authors = [[Mouna Torjmen]]<br />[[Karen Pinel-Sauvagnat]]<br />[[Mohand Boughanem]]<br />
| link = http://ceur-ws.org/Vol-1174/CLEF2008wn-ImageCLEF-TorjmenEt2008b.pdf<br />
}}<br />
'''Some Experiments on the Wikipediamm 2008 Task: Evaluating the Impact of Image Names in Context-Based Retrieval''' - scientific work related to [[Wikipedia quality]] published in 2008, written by [[Mouna Torjmen]], [[Karen Pinel-Sauvagnat]] and [[Mohand Boughanem]].<br />
<br />
== Overview ==<br />
The goal of participation in the [[Wikipedia]]MM task of CLEF 2008 was to study the use of image names in a context-based retrieval approach. The authors evaluated this factor in three ways. The first consists of using image names explicitly: the authors computed a similarity score between the query and the name of images using the vector space model. The second consists of combining results obtained using the textual content of documents with results obtained using the first method. Finally, in the last approach, image names are used less explicitly: the authors proposed to use all the textual content of documents, but with increased weight on the terms appearing in the image name. Results show that the image name is an interesting factor. Even when image names are used only as an additional source of evidence, they lead to better performance. Moreover, the authors conclude that using the image name implicitly yields better results than using it explicitly.<br />
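The first, explicit approach (a vector-space similarity between the query and an image name) can be sketched as follows. This toy version uses raw term counts and cosine similarity; any idf weighting or tokenization details the authors used are omitted, and the file-name normalization is an assumption for illustration.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def name_terms(filename):
    # Normalize an image file name into plain terms before scoring.
    return filename.rsplit(".", 1)[0].replace("_", " ")

score = cosine_similarity("eiffel tower at night", name_terms("Eiffel_Tower_Paris.jpg"))
```

Image names that share terms with the query receive a nonzero score and can thus be ranked against the purely text-based results.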
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Torjmen, Mouna; Pinel-Sauvagnat, Karen; Boughanem, Mohand. (2008). "[[Some Experiments on the Wikipediamm 2008 Task: Evaluating the Impact of Image Names in Context-Based Retrieval]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Torjmen |first1=Mouna |last2=Pinel-Sauvagnat |first2=Karen |last3=Boughanem |first3=Mohand |title=Some Experiments on the Wikipediamm 2008 Task: Evaluating the Impact of Image Names in Context-Based Retrieval |date=2008 |url=https://wikipediaquality.com/wiki/Some_Experiments_on_the_Wikipediamm_2008_Task:_Evaluating_the_Impact_of_Image_Names_in_Context-Based_Retrieval}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Torjmen, Mouna; Pinel-Sauvagnat, Karen; Boughanem, Mohand. (2008). &amp;quot;<a href="https://wikipediaquality.com/wiki/Some_Experiments_on_the_Wikipediamm_2008_Task:_Evaluating_the_Impact_of_Image_Names_in_Context-Based_Retrieval">Some Experiments on the Wikipediamm 2008 Task: Evaluating the Impact of Image Names in Context-Based Retrieval</a>&amp;quot;.<br />
</nowiki><br />
</code></div>Aaliyahhttps://wikipediaquality.com/index.php?title=Automatic_Creation_of_Multilingual_Semantic_Networks_from_Wikipedia&diff=23272Automatic Creation of Multilingual Semantic Networks from Wikipedia2020-01-10T04:55:42Z<p>Aaliyah: + infobox</p>
<hr />
<div>{{Infobox work<br />
| title = Automatic Creation of Multilingual Semantic Networks from Wikipedia<br />
| date = 2016<br />
| authors = [[Océane Chabrol]]<br />[[David Norrestam]]<br />[[Pierre Nugues]]<br />
| link = http://lup.lub.lu.se/search/ws/files/17262518/SLTC_2016_paper_1_1.pdf<br />
}}<br />
'''Automatic Creation of Multilingual Semantic Networks from Wikipedia''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Océane Chabrol]], [[David Norrestam]] and [[Pierre Nugues]].<br />
<br />
== Overview ==<br />
This paper describes the automatic creation of semantic networks from [[Wikipedia]]. Following Lipczak et al. (2014), the authors constructed the graphs corresponding to the semantic networks by merging, across languages, the [[categories]] manually assigned by users. This results in a network of related concepts for each entity of Wikipedia. The authors used these networks as a component of an entity linking system, where the networks improved the results by 1% over an already strong baseline.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Navigating_the_Topical_Structure_of_Academic_Search_Results_via_the_Wikipedia_Category_Network&diff=23271Navigating the Topical Structure of Academic Search Results via the Wikipedia Category Network2020-01-10T04:54:03Z<p>Aaliyah: + wikilinks</p>
<hr />
<div>'''Navigating the Topical Structure of Academic Search Results via the Wikipedia Category Network''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Daniil Mirylenka]] and [[Andrea Passerini]].<br />
<br />
== Overview ==<br />
Searching for scientific publications on the Web is a tedious task, especially when exploring an unfamiliar domain. Typical scholarly search engines produce lengthy unstructured result lists that are difficult to comprehend, interpret and browse. The authors propose a novel method of organizing the search results into concise and informative topic hierarchies. The method consists of two steps: extracting interrelated topics from the result set, and summarizing the topic graph. In the first step the authors map the search results to articles and [[categories]] of [[Wikipedia]], constructing a graph of relevant topics with hierarchical relations. In the second step they sequentially build nested summaries of the produced topic graph using a structured output prediction approach. Trained on a small number of examples, the method learns to construct informative summaries for unseen topic graphs, and outperforms unsupervised state-of-the-art Wikipedia-based clustering.</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Wikitology:_Using_Wikipedia_as_an_Ontology&diff=23270Wikitology: Using Wikipedia as an Ontology2020-01-10T04:52:26Z<p>Aaliyah: Categories</p>
<hr />
<div>{{Infobox work<br />
| title = Wikitology: Using Wikipedia as an Ontology<br />
| date = 2008<br />
| authors = [[Zareen Syed]]<br />[[Anupam Joshi]]<br />
| link = http://ebiquity.umbc.edu/get/a/publication/396.pdf<br />
}}<br />
'''Wikitology: Using Wikipedia as an Ontology''' - scientific work related to [[Wikipedia quality]] published in 2008, written by [[Zareen Syed]] and [[Anupam Joshi]].<br />
<br />
== Overview ==<br />
Identifying topics and concepts associated with a set of documents is a task common to many applications. It can help in the annotation and categorization of documents and be used to model a person's current interests for improving search results, business intelligence or selecting appropriate advertisements. The authors have investigated using [[Wikipedia]]'s articles and associated pages as a topic [[ontology]] for this purpose. The benefits of the approach are that the ontology terms are developed through a social process, maintained and kept current by the [[Wikipedia community]], represent a consensus view, and have meaning that can be understood by reading the associated pages.<br />
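The idea of treating Wikipedia's articles and categories as a topic ontology can be sketched as a simple voting scheme over a term-to-category index. The tiny index below is hypothetical; a real system would derive it from Wikipedia article text and category assignments rather than hand-code it.

```python
from collections import Counter

# Hypothetical term -> Wikipedia category index (illustrative only).
TERM_CATEGORIES = {
    "neuron": ["Neuroscience"],
    "synapse": ["Neuroscience"],
    "tariff": ["International trade"],
    "export": ["International trade"],
}

def document_topics(text, top_n=1):
    """Rank candidate Wikipedia categories by how many document terms map to them."""
    votes = Counter()
    for term in text.lower().split():
        for category in TERM_CATEGORIES.get(term, []):
            votes[category] += 1
    return [category for category, _ in votes.most_common(top_n)]

topics = document_topics("the neuron fires across a synapse")
```

Documents whose vocabulary clusters under one category are assigned that category as their dominant topic; documents with no known terms get no topic.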
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Syed, Zareen; Joshi, Anupam. (2008). "[[Wikitology: Using Wikipedia as an Ontology]]".<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Syed |first1=Zareen |last2=Joshi |first2=Anupam |title=Wikitology: Using Wikipedia as an Ontology |date=2008 |url=https://wikipediaquality.com/wiki/Wikitology:_Using_Wikipedia_as_an_Ontology}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Syed, Zareen; Joshi, Anupam. (2008). &amp;quot;<a href="https://wikipediaquality.com/wiki/Wikitology:_Using_Wikipedia_as_an_Ontology">Wikitology: Using Wikipedia as an Ontology</a>&amp;quot;.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]</div>Aaliyahhttps://wikipediaquality.com/index.php?title=Extraction_and_Recognition_of_Polish_Multiword_Expressions_Using_Wikipedia_and_Finite-State_Automata&diff=23269Extraction and Recognition of Polish Multiword Expressions Using Wikipedia and Finite-State Automata2020-01-10T04:50:36Z<p>Aaliyah: + category</p>
<hr />
<div>{{Infobox work<br />
| title = Extraction and Recognition of Polish Multiword Expressions Using Wikipedia and Finite-State Automata<br />
| date = 2016<br />
| authors = [[Pawel Chrzaszcz]]<br />
| doi = 10.18653/v1/W16-1815<br />
| link = https://aaltodoc.aalto.fi:443/handle/123456789/15381<br />
| plink = https://www.semanticscholar.org/paper/Extraction-and-Recognition-of-Polish-Multiword-and-Chrzaszcz/9f6f532cec52138f5606aaf971895716bf74a084<br />
}}<br />
'''Extraction and Recognition of Polish Multiword Expressions Using Wikipedia and Finite-State Automata''' - scientific work related to [[Wikipedia quality]] published in 2016, written by [[Pawel Chrzaszcz]].<br />
<br />
== Overview ==<br />
Linguistic resources for Polish are often missing multiword expressions (MWEs) – idioms, compound nouns and other expressions which have their own distinct meaning as a whole. This paper describes an effort to extract and recognize nominal MWEs in Polish text using [[Wikipedia]], inflection dictionaries and finite-state automata. Wikipedia is used as a lexicon of MWEs and as a corpus annotated with links to articles. Incoming links for each article are used to determine the inflection pattern of the headword – this approach helps eliminate invalid inflected forms. The goal is to recognize known MWEs as well as to find more expressions sharing similar grammatical structure and occurring in similar context.<br />
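The finite-state recognition step can be illustrated with a token-level trie, the simplest deterministic automaton for matching a fixed MWE lexicon against running text. This sketch does greedy longest-match scanning only and ignores Polish inflection, which the paper handles via inflection dictionaries, so treat it as an illustrative assumption rather than the paper's method.

```python
def build_trie(expressions):
    # Each MWE is a token sequence; a None key marks end of an expression.
    root = {}
    for expr in expressions:
        node = root
        for token in expr.lower().split():
            node = node.setdefault(token, {})
        node[None] = expr
    return root

def find_mwes(tokens, trie):
    """Greedy longest-match scan over the token stream."""
    tokens = [t.lower() for t in tokens]
    matches, i = [], 0
    while i < len(tokens):
        node, j, last = trie, i, None
        while j < len(tokens) and tokens[j] in node:
            node = node[tokens[j]]
            j += 1
            if None in node:
                last = (node[None], j)  # longest match ending at position j
        if last:
            matches.append(last[0])
            i = last[1]
        else:
            i += 1
    return matches

trie = build_trie(["machine learning", "finite state automaton"])
found = find_mwes("We use a finite state automaton for machine learning".split(), trie)
```

Each state transition consumes one token, so recognition is linear in the text length for a fixed lexicon, which is what makes the finite-state approach attractive at Wikipedia scale.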
<br />
== Embed ==<br />
=== Wikipedia Quality ===<br />
<code><br />
<nowiki><br />
Chrzaszcz, Pawel. (2016). "[[Extraction and Recognition of Polish Multiword Expressions Using Wikipedia and Finite-State Automata]]". DOI: 10.18653/v1/W16-1815.<br />
</nowiki><br />
</code><br />
<br />
=== English Wikipedia ===<br />
<code><br />
<nowiki><br />
{{cite journal |last1=Chrzaszcz |first1=Pawel |title=Extraction and Recognition of Polish Multiword Expressions Using Wikipedia and Finite-State Automata |date=2016 |doi=10.18653/v1/W16-1815 |url=https://wikipediaquality.com/wiki/Extraction_and_Recognition_of_Polish_Multiword_Expressions_Using_Wikipedia_and_Finite-State_Automata}}<br />
</nowiki><br />
</code><br />
<br />
=== HTML ===<br />
<code><br />
<nowiki><br />
Chrzaszcz, Pawel. (2016). &amp;quot;<a href="https://wikipediaquality.com/wiki/Extraction_and_Recognition_of_Polish_Multiword_Expressions_Using_Wikipedia_and_Finite-State_Automata">Extraction and Recognition of Polish Multiword Expressions Using Wikipedia and Finite-State Automata</a>&amp;quot;. DOI: 10.18653/v1/W16-1815.<br />
</nowiki><br />
</code><br />
<br />
<br />
<br />
[[Category:Scientific works]]<br />
[[Category:Polish Wikipedia]]</div>Aaliyah