{{Infobox work
| title = A Hybrid Method based on Wordnet and Wikipedia for Computing Semantic Relatedness Between Texts
| date = 2012
| authors = [[Roghieh Malekzadeh]]<br />[[Jamshid Bagherzadeh]]<br />[[Abdollah Noroozi]]
| doi = 10.1109/AISP.2012.6313727
| link = http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6313727
}}
 
'''A Hybrid Method based on Wordnet and Wikipedia for Computing Semantic Relatedness Between Texts''' - scientific work related to [[Wikipedia quality]], published in 2012 and written by [[Roghieh Malekzadeh]], [[Jamshid Bagherzadeh]] and [[Abdollah Noroozi]].
 
== Overview ==
 
In this article the authors present a new method for computing semantic [[relatedness]] between texts, using a two-phase approach. The first phase models document sentences as a matrix in order to compute semantic relatedness between sentences; the second phase compares texts by using the relations between their sentences. Since semantic relations between words must be looked up in a lexical [[semantic knowledge]] source, selecting a suitable source is very important: only a well-chosen source produces accurate results. To capture the semantic relatedness between texts more accurately, the authors combine two well-known knowledge bases, [[WordNet]] and [[Wikipedia]], which together provide a more complete data source for the calculation. The authors evaluate the approach by comparison with other existing techniques on the Lee datasets.
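The two-phase idea can be sketched in outline. The following is an illustrative simplification, not the authors' exact algorithm: `word_sim` is a placeholder for a WordNet/Wikipedia-backed lexical measure, phase 1 scores a sentence pair through a word-by-word similarity matrix, and phase 2 aggregates sentence-pair scores into a text-level score.

```python
def word_sim(w1, w2):
    """Placeholder lexical similarity: 1.0 for identical words, else 0.0.
    A real system would query WordNet and Wikipedia here."""
    return 1.0 if w1 == w2 else 0.0

def sentence_sim(s1, s2):
    """Phase 1 (sketch): form a |s1| x |s2| word-similarity matrix and
    average each word's best match in the other sentence, symmetrically."""
    w1, w2 = s1.split(), s2.split()
    if not w1 or not w2:
        return 0.0
    best1 = [max(word_sim(a, b) for b in w2) for a in w1]
    best2 = [max(word_sim(b, a) for a in w1) for b in w2]
    return (sum(best1) / len(best1) + sum(best2) / len(best2)) / 2

def text_sim(t1, t2):
    """Phase 2 (sketch): relate two texts through their sentences by
    pairing each sentence with its best match in the other text."""
    s1 = [s for s in t1.split('.') if s.strip()]
    s2 = [s for s in t2.split('.') if s.strip()]
    if not s1 or not s2:
        return 0.0
    best1 = [max(sentence_sim(a, b) for b in s2) for a in s1]
    best2 = [max(sentence_sim(b, a) for a in s1) for b in s2]
    return (sum(best1) / len(best1) + sum(best2) / len(best2)) / 2
```

Replacing `word_sim` with a measure drawn from WordNet glosses or Wikipedia link structure is where the hybrid knowledge sources would enter.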
 
