{{Infobox work
| title = Learning to Link with Wikipedia
| date = 2008
| authors = [[David N. Milne]]<br />[[Ian H. Witten]]
| doi = 10.1145/1458082.1458150
| link = http://dl.acm.org/citation.cfm?id=1458150
}}
'''Learning to Link with Wikipedia''' is a scientific work related to [[Wikipedia quality]], published in 2008 and written by [[David N. Milne]] and [[Ian H. Witten]].
== Overview ==
This paper describes how to automatically cross-reference documents with [[Wikipedia]], the largest knowledge base ever assembled. It explains how machine learning can be used to identify significant terms within unstructured text and enrich them with links to the appropriate Wikipedia articles. The resulting link detector and disambiguator performs well, with recall and precision of almost 75%, and this performance holds whether the system is evaluated on Wikipedia articles or on "real world" documents. The work has implications far beyond enriching documents with explanatory links: it can provide structured knowledge about any unstructured fragment of text. Any task currently addressed with bags of words - indexing, clustering, retrieval and summarization, to name a few - could use the techniques described here to draw on a vast network of concepts and semantics.
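The core idea of detecting candidate terms and disambiguating them against Wikipedia can be sketched with a toy "commonness" baseline: for each anchor text, pick the article it most often links to in the corpus. This is a minimal illustration only - the anchor statistics below are invented, and the paper's actual approach trains classifiers that also use semantic relatedness and context, not commonness alone.

```python
# Toy dictionary-based wikification sketch (invented statistics).
# anchor text -> {candidate article: number of times the anchor
# links to that article across a hypothetical Wikipedia dump}
ANCHOR_STATS = {
    "jaguar": {"Jaguar (animal)": 80, "Jaguar Cars": 20},
    "amazon": {"Amazon rainforest": 60, "Amazon (company)": 140},
}

def disambiguate(anchor):
    """Return (article, commonness) for the anchor's most common sense."""
    candidates = ANCHOR_STATS.get(anchor.lower())
    if not candidates:
        return None
    total = sum(candidates.values())
    article, count = max(candidates.items(), key=lambda kv: kv[1])
    return article, count / total

def wikify(text):
    """Link every known anchor in the text to its most common sense."""
    links = {}
    for word in text.split():
        token = word.strip(".,").lower()
        result = disambiguate(token)
        if result:
            links[token] = result
    return links

links = wikify("The jaguar prowled the Amazon.")
# "jaguar" resolves to "Jaguar (animal)" with commonness 0.8,
# "amazon" to "Amazon (company)" with commonness 0.7.
```

A commonness-only baseline like this ignores context entirely, which is exactly why the paper adds relatedness between candidate senses and the document's unambiguous links.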