Mining Wikipedia's Snippets Graph - First Step to Build a New Knowledge Base

'''Mining Wikipedia's Snippets Graph - First Step to Build a New Knowledge Base''' - scientific work related to [[Wikipedia quality]] published in 2012, written by [[Andias Wira-Alam]] and [[Brigitte Mathiak]].

== Overview ==
In this paper, the authors discuss mining links and text snippets from [[Wikipedia]] to build a new knowledge base. Existing knowledge bases, e.g. DBpedia [1], cover mainly the structured part of Wikipedia, not the content as a whole. As a complement, the authors focus on extracting information from the text of the articles. They extract a database of the hyperlinks between Wikipedia articles and populate it with the textual context surrounding each hyperlink. This is useful for network analysis, e.g. to measure the influence of one topic on another, or directly for question answering (stating the relationship between two entities). First, the authors describe the technical steps of extracting the data from Wikipedia. Second, they specify how the extracted data is represented as an extended triple and exposed through a Web service. Finally, they discuss the expected usage possibilities as well as the challenges.
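
The extraction step can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes raw wikitext as input, the regular expression handles only simple internal links, and the function name and the 200-character context window are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import re

# Matches internal wikitext links: [[Target]] or [[Target|label]].
LINK_RE = re.compile(r"\[\[([^\[\]|#]+)(?:\|[^\[\]]*)?\]\]")

def extract_link_snippets(source_title, wikitext, window=200):
    """Yield (source, target, snippet) records: for every internal
    link in an article, keep the surrounding textual context."""
    for match in LINK_RE.finditer(wikitext):
        target = match.group(1).strip()
        start = max(0, match.start() - window)
        end = min(len(wikitext), match.end() + window)
        yield (source_title, target, wikitext[start:end])

# Hypothetical usage on a fragment of wikitext:
text = "The [[DBpedia]] project extracts structured data from [[Wikipedia]]."
for record in extract_link_snippets("Knowledge base", text):
    print(record)
</syntaxhighlight>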
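The extended triple mentioned in the paper can be read as a plain (subject, object) link enriched with the snippet in which the link occurs. The sketch below shows one plausible shape for such a record as a Web service might serialize it; the class, field names, and JSON layout are assumptions for illustration, not the paper's specification.

<syntaxhighlight lang="python">
import json
from dataclasses import dataclass, asdict

@dataclass
class ExtendedTriple:
    # Subject and object are Wikipedia article titles; the snippet
    # extends a plain (subject, link, object) triple with the
    # textual context surrounding the hyperlink.
    subject: str
    object: str
    snippet: str

triple = ExtendedTriple(
    subject="Knowledge base",
    object="DBpedia",
    snippet="The [[DBpedia]] project extracts structured data ...",
)

# A Web service could return the record as JSON:
print(json.dumps(asdict(triple), indent=2))
</syntaxhighlight>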