Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples


Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples - a scientific work related to Wikipedia quality, published in 2018 and written by Pavlos Vougiouklis, Hady Elsahar, Lucie-Aimée Kaffee, Christophe Gravier, Frédérique Laforest, Jonathon S. Hare and Elena Simperl.

Overview

Most people need textual or visual interfaces in order to make sense of Semantic Web data. In this paper, the authors investigate the problem of generating natural language summaries for Semantic Web data using neural networks. Their end-to-end trainable architecture encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. The authors explore a set of different approaches that enable the models to verbalise entities from the input set of triples in the generated text. The systems are trained and evaluated on two corpora of loosely aligned Wikipedia snippets with triples from DBpedia and Wikidata, with promising results.
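
To illustrate the general idea described above (encoding a set of triples into one fixed-size vector and conditioning a text decoder on it), the following is a minimal PyTorch sketch. It is not the authors' exact model: it omits, for example, the entity-verbalisation mechanisms the paper explores, and all class names, dimensions, and the random example data are placeholders chosen for illustration.

```python
# Minimal sketch of a triple-encoder / text-decoder setup (illustrative only, not
# the paper's architecture): each input triple is embedded, the per-triple vectors
# are pooled into a single fixed-dimensionality vector, and a GRU decoder
# conditioned on that vector predicts the summary tokens.
import torch
import torch.nn as nn


class TripleEncoder(nn.Module):
    def __init__(self, n_entities, n_predicates, dim):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.predicate_emb = nn.Embedding(n_predicates, dim)
        self.project = nn.Linear(3 * dim, dim)

    def forward(self, subjects, predicates, objects):
        # subjects, predicates, objects: (batch, n_triples) index tensors
        s = self.entity_emb(subjects)
        p = self.predicate_emb(predicates)
        o = self.entity_emb(objects)
        triple_vecs = torch.tanh(self.project(torch.cat([s, p, o], dim=-1)))
        # Pool the per-triple vectors into one fixed-size vector per example.
        return triple_vecs.mean(dim=1)              # (batch, dim)


class SummaryDecoder(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, encoded, summary_tokens):
        # Condition the decoder by using the encoded triples as its initial state.
        h0 = encoded.unsqueeze(0)                    # (1, batch, dim)
        outputs, _ = self.gru(self.word_emb(summary_tokens), h0)
        return self.out(outputs)                     # (batch, seq_len, vocab_size)


# Illustrative usage with random indices and placeholder vocabulary sizes.
encoder = TripleEncoder(n_entities=1000, n_predicates=50, dim=128)
decoder = SummaryDecoder(vocab_size=5000, dim=128)
subjects = torch.randint(0, 1000, (2, 4))
predicates = torch.randint(0, 50, (2, 4))
objects = torch.randint(0, 1000, (2, 4))
summary = torch.randint(0, 5000, (2, 12))
logits = decoder(encoder(subjects, predicates, objects), summary)
print(logits.shape)  # torch.Size([2, 12, 5000])
```

In practice such a model would be trained end to end with a cross-entropy loss over the summary tokens; the sketch only shows how the fixed-dimensionality encoding of the triples can condition the generated text.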