Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary

'''Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary''' - a scientific work related to [[Wikipedia quality]], published in 2008 and written by [[Torsten Zesch]], [[Christof Müller]] and [[Iryna Gurevych]].
  
 
== Overview ==
Recently, collaboratively constructed resources such as [[Wikipedia]] and Wiktionary have been discovered as valuable lexical [[semantic knowledge]] bases with high potential for diverse [[Natural Language Processing]] (NLP) tasks. Collaborative knowledge bases, however, differ significantly from traditional linguistic knowledge bases in various respects, and this constitutes both an asset and an impediment for research in NLP. The paper addresses one such major impediment, namely the lack of suitable programmatic access mechanisms to the knowledge stored in these large semantic knowledge bases. The authors present two application programming interfaces, one for Wikipedia and one for Wiktionary, designed specifically for mining the rich lexical [[semantic information]] dispersed across these knowledge bases and providing efficient, structured access to the available knowledge. Believing the interfaces to be of general interest to the NLP community, the authors have made them freely available for research purposes.
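
The sketch below illustrates what programmatic access of this kind can look like for the Wikipedia side. It is loosely modeled on the JWPL-style Java interface associated with the authors' group; the class and method names follow publicly documented tutorials and may differ between releases, and the database settings are placeholders for a locally imported Wikipedia dump.

<syntaxhighlight lang="java">
// Minimal sketch: structured access to a locally imported Wikipedia dump
// through a JWPL-style API. Names are illustrative and may not match the
// exact released library version; credentials below are placeholders.
import de.tudarmstadt.ukp.wikipedia.api.DatabaseConfiguration;
import de.tudarmstadt.ukp.wikipedia.api.Page;
import de.tudarmstadt.ukp.wikipedia.api.Wikipedia;
import de.tudarmstadt.ukp.wikipedia.api.WikiConstants.Language;

public class WikipediaAccessSketch {
    public static void main(String[] args) throws Exception {
        // Connection settings for the database holding the imported dump.
        DatabaseConfiguration dbConfig = new DatabaseConfiguration();
        dbConfig.setHost("localhost");
        dbConfig.setDatabase("wikidb");
        dbConfig.setUser("wikiuser");
        dbConfig.setPassword("secret");
        dbConfig.setLanguage(Language.english);

        // Query articles as objects instead of parsing raw wiki markup.
        Wikipedia wiki = new Wikipedia(dbConfig);
        Page page = wiki.getPage("Natural language processing");

        // Lexical semantic information exposed in a structured way:
        // title, redirect status, link structure and category membership.
        System.out.println("Title:      " + page.getTitle().getPlainTitle());
        System.out.println("Redirect:   " + page.isRedirect());
        System.out.println("Inlinks:    " + page.getInlinks().size());
        System.out.println("Categories: " + page.getCategories().size());
    }
}
</syntaxhighlight>

The paper describes an analogous interface for Wiktionary; the example above covers only the Wikipedia side.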
