'''Syntax Analyzer a Selectivity Estimation Technique Applied on Wikipedia Xml Data Set''' - scientific work related to [[Wikipedia quality]] published in 2013, written by [[Muath Alrammal]] and [[Gaétan Hains]].
  
 
== Overview ==
Querying large volumes of XML data is a bottleneck for several computationally intensive applications. A fast and accurate selectivity estimation mechanism is of practical importance because selectivity estimation plays a fundamental role in XML query performance. Recently proposed techniques are all based on some form of structure synopsis, which can be time-consuming to build and ineffective at summarizing complex structural relationships. In particular, current techniques do not handle or process efficiently the large text nodes that exist in data sets such as [[Wikipedia]]. To overcome this limitation, the authors extend previous work [12], a stream-based selectivity estimation technique, so that it processes the English Wikipedia data set efficiently. The content of XML text nodes in Wikipedia contains a massive amount of real-life information, which such techniques bring closer to practical and efficient everyday use. Extensive experiments on Wikipedia data sets of different sizes show that the technique achieves remarkable accuracy and reasonable performance.
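
The paper's own estimator is not reproduced here. As a rough, hypothetical illustration of what stream-based selectivity estimation means in this setting, the sketch below counts, in a single pass over an XML event stream and with memory bounded by document depth rather than document size, how many element nodes match a simple linear path query such as <code>//page/revision/text</code>. The function name, the sample file name, and the example query are illustrative only, and real Wikipedia dumps would additionally require XML namespace handling.

<syntaxhighlight lang="python">
# Minimal sketch of one-pass, stream-based selectivity estimation for a
# linear path query (//a/b/c style). This is an illustration of the general
# idea, not the authors' algorithm; all names below are hypothetical.
import xml.etree.ElementTree as ET

def estimate_selectivity(xml_file, steps):
    """Count element nodes matching a //a/b/c-style path in one streaming pass."""
    stack = []   # tags of the currently open elements (root-to-current path)
    count = 0
    for event, elem in ET.iterparse(xml_file, events=("start", "end")):
        if event == "start":
            stack.append(elem.tag)
            # The node matches if the last len(steps) open tags equal the query steps.
            if stack[-len(steps):] == steps:
                count += 1
        else:
            stack.pop()
            elem.clear()   # discard the processed subtree to keep memory bounded
    return count

# Example: selectivity of //page/revision/text on a (hypothetical) dump slice:
# estimate_selectivity("enwiki-sample.xml", ["page", "revision", "text"])
</syntaxhighlight>

Because the stream is consumed exactly once and only a tag stack is retained, such an estimator avoids building a structure synopsis up front, which is the property the paper exploits for data sets with very large text nodes.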
