Finding near-duplicate web pages: A large-scale evaluation of algorithms

Broder et al.'s [3] shingling algorithm and Charikar's [4] random projection based approach are considered "state-of-the-art" algorithms for finding near-duplicate web pages. Both algorithms were either developed at or used by popular web search engines. We compare the two algorithms on a very large scale, namely on a set of 1.6B distinct web pages. The results show that neither algorithm works well for finding near-duplicate pairs on the same site, while both achieve high precision for near-duplicate pairs on different sites. Since Charikar's algorithm finds more near-duplicate pairs on different sites, it achieves a higher overall precision, namely 0.50 versus 0.38 for Broder et al.'s algorithm. We present a combined algorithm that achieves precision 0.79 with 79% of the recall of the other algorithms. Copyright 2006 ACM.
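The two approaches compared in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function names, the word-level shingle size, the MD5 word hash, and the 64-bit fingerprint width are all illustrative choices (the evaluated systems use engineered variants such as super-shingles and larger projections).

```python
import hashlib

def shingles(text, w=4):
    """Overlapping w-word shingles, the unit of Broder-style comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Resemblance of two shingle sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if a | b else 1.0

def simhash(text, bits=64):
    """Charikar-style random-projection fingerprint over word features.

    Each word votes +1/-1 per bit position according to its hash; the
    sign of the accumulated vote gives one bit of the fingerprint.
    """
    v = [0] * bits
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(x, y):
    """Bit distance between two fingerprints; small means near-duplicate."""
    return bin(x ^ y).count("1")
```

Two pages that differ in one word share almost all shingles (high Jaccard resemblance) and accumulate nearly identical bit votes (small Hamming distance), which is the property both algorithms exploit at scale.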


Published in:
29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 284-291
Presented at:
Seattle, WA
Year:
2006
Publisher:
ACM Press
Author:
Henzinger M
Other identifiers:
Scopus: 2-s2.0-33750296887




 Record created 2007-01-18, last modified 2018-03-17
