Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples

Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples - a scientific work related to Wikipedia quality, published in 2018 and written by Pavlos Vougiouklis, Hady Elsahar, Lucie-Aimée Kaffee, Christophe Gravier, Frédérique Laforest, Jonathon S. Hare and Elena Simperl.

Overview

Most people need textual or visual interfaces in order to make sense of Semantic Web data. In this paper, the authors investigate the problem of generating natural language summaries for Semantic Web data using neural networks. Their end-to-end trainable architecture encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. The authors explore a set of different approaches that enable the models to verbalise entities from the input set of triples in the generated text. Their systems are trained and evaluated on two corpora of loosely aligned Wikipedia snippets paired with triples from DBpedia and Wikidata, with promising results.
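
The encode-then-decode design described above can be sketched compactly. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation: it encodes each triple by concatenating the embeddings of its subject, predicate and object, averages the per-triple representations into a single fixed-size vector, and uses that vector to initialise an LSTM decoder that generates the summary. All class names, dimensions and the shared vocabulary are assumptions for illustration.

```python
# Hedged sketch of a triple-to-text encoder-decoder (not the paper's exact model).
import torch
import torch.nn as nn

class TripleEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Each triple is represented by the concatenation of its
        # subject, predicate and object embeddings.
        self.proj = nn.Linear(3 * embed_dim, hidden_dim)

    def forward(self, triples):
        # triples: (batch, n_triples, 3) token ids
        e = self.embed(triples)          # (B, N, 3, E)
        e = e.flatten(start_dim=2)       # (B, N, 3E)
        h = torch.tanh(self.proj(e))     # (B, N, H)
        # Average over the triple set to get one fixed-size vector per input.
        return h.mean(dim=1)             # (B, H)

class SummaryDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, encoded, target_tokens):
        # Initialise the LSTM state from the encoded triples, so every
        # generated word is conditioned on the input vector.
        h0 = encoded.unsqueeze(0)        # (1, B, H)
        c0 = torch.zeros_like(h0)
        x = self.embed(target_tokens)    # (B, T, E)
        out, _ = self.lstm(x, (h0, c0))
        return self.out(out)             # (B, T, vocab) logits

# Usage with dummy data (teacher forcing at training time):
enc = TripleEncoder(vocab_size=1000, embed_dim=64, hidden_dim=128)
dec = SummaryDecoder(vocab_size=1000, embed_dim=64, hidden_dim=128)
triples = torch.randint(0, 1000, (2, 5, 3))   # 2 examples, 5 triples each
summary = torch.randint(0, 1000, (2, 12))     # target summary tokens
logits = dec(enc(triples), summary)           # (2, 12, 1000)
```

The paper's actual contribution goes beyond this baseline, in particular the mechanisms for verbalising entities from the input triples in the generated text; the sketch only shows the conditioning pattern the overview describes.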