| Authors | Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal, James R. Curran |
|---|---|
| Publication date | 2013 |
| DOI | 10.1016/j.artint.2012.04.005 |
| Links | [Original](https://dl.acm.org/citation.cfm?id=2405914), [Preprint](https://www.semanticscholar.org/paper/Evaluating-Entity-Linking-with-Wikipedia-Hachey-Radford/3b589442b9add7f77c4ec47e3c868d688f7ac320/figure/14) |
Evaluating Entity Linking with Wikipedia is a scientific work on Wikipedia quality, published in 2013 and written by Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal and James R. Curran.
Overview
Named Entity Linking (NEL) grounds entity mentions to their corresponding nodes in a Knowledge Base (KB). A number of systems have been proposed for linking entity mentions in text to Wikipedia pages. Such systems typically search for candidate entities and then disambiguate them, returning either the best candidate or NIL. However, comparisons between systems have focused on disambiguation accuracy, making it difficult to determine how search impacts performance. Furthermore, important approaches from the literature have not been systematically compared on standard data sets. The authors reimplement three seminal NEL systems and present a detailed evaluation of search strategies. Their experiments find that coreference and acronym handling lead to substantial improvement, and that search strategies account for much of the variation between systems. This is an interesting finding, because these aspects of the problem have often been neglected in the literature, which has focused largely on complex candidate ranking algorithms.
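The two-stage pipeline described above (search for candidate entities, then disambiguate, returning the best candidate or NIL) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the alias table, the mention strings, and the overlap-counting scorer are all hypothetical stand-ins for the much richer search and ranking strategies the paper evaluates.

```python
# Illustrative sketch of a search-then-disambiguate NEL pipeline.
# ALIAS_TABLE and the scoring function are toy assumptions, not the
# systems compared in the paper.

NIL = None  # returned when no KB candidate is found

# Toy candidate-search index: mention string -> candidate KB titles.
ALIAS_TABLE = {
    "abc": ["American Broadcasting Company",
            "Australian Broadcasting Corporation"],
}

def search(mention):
    """Search stage: generate candidate entities for a mention."""
    return ALIAS_TABLE.get(mention.lower(), [])

def disambiguate(candidates, context):
    """Disambiguation stage: pick the candidate whose title shares
    the most words with the surrounding context (a toy scorer)."""
    def score(candidate):
        title = candidate.lower()
        return sum(1 for word in context if word.lower() in title)
    return max(candidates, key=score)

def link(mention, context):
    """Full pipeline: best candidate, or NIL when search finds nothing."""
    candidates = search(mention)
    if not candidates:
        return NIL
    return disambiguate(candidates, context)

print(link("ABC", ["Australian", "radio"]))
print(link("XYZ Corp", ["unknown", "company"]))
```

Separating the stages this way is what makes the paper's central question testable: holding the disambiguation stage fixed while swapping search strategies (e.g. adding acronym expansion or coreference handling to `search`) isolates how much of the end-to-end accuracy difference between systems comes from search rather than ranking.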