Inducing Conceptual Embedding Spaces from Wikipedia
{{Infobox work
| title = Inducing Conceptual Embedding Spaces from Wikipedia
| date = 2017
| authors = [[Gerard de Melo]]
| doi = 10.1145/3041021.3054144
| link = https://dl.acm.org/citation.cfm?id=3041021.3054144
}}
 
'''Inducing Conceptual Embedding Spaces from Wikipedia''' is a scientific work related to [[Wikipedia quality]], published in 2017 and written by [[Gerard de Melo]].
 
== Overview ==
 
The word2vec word vector representations are among the best-known semantic resources to appear in recent years. While large sets of pre-trained vectors are available, these focus on frequent words and multi-word expressions and lack sufficient coverage of [[named entities]]. Moreover, [[Google]] only released pre-trained vectors for English. In this paper, the author explores an automatic expansion of Google's pre-trained vectors using [[Wikipedia]], adding millions of concepts and named entities in over 270 languages. The method enables all of these to reside in the same vector space, thus flexibly facilitating [[cross-lingual]] semantic applications.
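
The paper's exact construction is not reproduced here; the following Python sketch only illustrates the general idea of placing a new concept in a pre-trained word2vec space. It embeds a concept as the centroid of the vectors of words drawn from its Wikipedia article. The gensim library, the local file name <code>GoogleNews-vectors-negative300.bin</code>, and the centroid heuristic are assumptions for illustration, not details taken from the paper.

<syntaxhighlight lang="python">
import numpy as np
from gensim.models import KeyedVectors

# Load Google's pre-trained English word2vec vectors (the file name and
# location are assumptions; the file must be downloaded separately).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def concept_vector(article_text):
    """Place a Wikipedia concept in the pre-trained space as the
    centroid of the vectors of its in-vocabulary tokens. This is an
    illustrative baseline only, not the method from the paper."""
    tokens = [t for t in article_text.split() if t in vectors]
    if not tokens:
        return None
    return np.mean([vectors[t] for t in tokens], axis=0)

# Hypothetical usage: embed a concept from the lead of its article,
# then inspect its nearest neighbours among the original words.
v = concept_vector("Jupiter is the fifth planet from the Sun and the "
                   "largest planet in the Solar System")
print(vectors.most_similar(positive=[v], topn=5))
</syntaxhighlight>

A sketch like this stays within English; the paper's contribution additionally links concepts and named entities from over 270 language editions of Wikipedia into the same shared vector space.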
 