Semantic Data Extraction from Infobox Wikipedia Template

From Wikipedia Quality
Revision as of 07:25, 6 August 2019 by Sofia (talk | contribs) (Overview - Semantic Data Extraction from Infobox Wikipedia Template)

Semantic Data Extraction from Infobox Wikipedia Template - scientific work related to Wikipedia quality published in 2012, written by Amira AbdEl-atey, Sherif El-etriby and Arabi S. Kishk.

Overview

Wikis are established means for collaborative authoring, versioning and publishing of textual articles. The Wikipedia for example, succeeded in creating the by far largest encyclopedia just on the basis of a wiki. Wikis are created by wiki software and are often used to create collaborative works. One of the key challenges of computer science is answering rich queries. Several approaches have been proposed on how to extend wikis to allow the creation of structured and semantically enriched content. Semantic web allows of creation of such web. Also, Semantic web contents help us to answer rich queries. One of the new applications in semantic web is DBpedia. DBpedia project focus on creating semantically enriched structured information of Wikipedia. In this article, authors describe and clarify the DBpedia project. Authors test the project to get structured data as triples from some Wikipedia resources. Authors clarify examples of car resource and Berlin resource. The output data is in RDF (Resource Description Framework) triple format which is the basic technology used for building the semantic web. Authors can answer rich queries by making use of semantic web structure. General Terms Information retrieval; semantic web.