Semi-Automatic Extraction and Modeling of Ontologies Using Wikipedia Xml Corpus

From Wikipedia Quality
{{Infobox work
| title = Semi-Automatic Extraction and Modeling of Ontologies Using Wikipedia Xml Corpus
| date = 2009
| authors = [[Lalindra De Silva]]<br />[[Lakshman Jayaratne]]
| doi = 10.1109/ICADIWT.2009.5273871
| link = http://ieeexplore.ieee.org/document/5273871/
}}
 
'''Semi-Automatic Extraction and Modeling of Ontologies Using Wikipedia Xml Corpus''' is a scientific work related to [[Wikipedia quality]], published in 2009 and written by [[Lalindra De Silva]] and [[Lakshman Jayaratne]].
 
 
== Overview ==
 
This paper introduces WikiOnto: a system that assists in the semi-automatic extraction and modeling of topic ontologies using a preprocessed document corpus derived from [[Wikipedia]]. Building on the Wikipedia XML Corpus, the authors present a three-tiered framework for rapidly extracting topic ontologies, together with a modeling environment for refining them. By applying [[Natural Language Processing]] (NLP) and Machine Learning (ML) techniques to this rich document corpus, the system offers a solution to a task that is generally considered extremely cumbersome. Initial results from the prototype suggest strong potential for [[ontology]] extraction and modeling, and motivate further research on extracting ontologies from other semi-structured document corpora.
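The paper does not publish WikiOnto's code, and the Wikipedia XML Corpus format is far richer than shown here. As an illustration only, a minimal sketch of the kind of first-tier extraction the framework describes (grouping article titles under broader category topics) could look like the following; the `SAMPLE_XML` fragment and `extract_topic_hierarchy` function are hypothetical simplifications, not the authors' implementation:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical, simplified fragment in the spirit of the Wikipedia XML Corpus;
# the real corpus carries full article bodies, links, and markup structure.
SAMPLE_XML = """
<corpus>
  <article>
    <title>Ontology (information science)</title>
    <category>Knowledge representation</category>
    <category>Semantic Web</category>
  </article>
  <article>
    <title>Semantic Web</title>
    <category>Knowledge representation</category>
  </article>
</corpus>
"""

def extract_topic_hierarchy(xml_text):
    """First-tier extraction: map each category (broader topic) to the
    article titles (narrower topics) filed under it."""
    hierarchy = defaultdict(list)
    root = ET.fromstring(xml_text)
    for article in root.iter("article"):
        title = article.findtext("title")
        for cat in article.iter("category"):
            hierarchy[cat.text].append(title)
    return dict(hierarchy)

if __name__ == "__main__":
    for broader, narrower in extract_topic_hierarchy(SAMPLE_XML).items():
        print(f"{broader} -> {narrower}")
```

In the full system such a raw hierarchy would only be a starting point; the semi-automatic aspect means a human refines the candidate ontology in the modeling environment rather than accepting it as-is.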
 
