Title: Finding near-duplicate web pages: A large-scale evaluation of algorithms
Author: Henzinger, Monika R.
Year: 2006
Record date: 2007-01-18
DOI: 10.1145/1148170.1148222
Scopus ID: 2-s2.0-33750296887
URL: https://infoscience.epfl.ch/handle/20.500.14299/239643
Type: conference paper
Keywords: Content duplication; Near-duplicate documents; Web pages

Abstract: Broder et al.'s [3] shingling algorithm and Charikar's [4] random-projection-based approach are considered state-of-the-art algorithms for finding near-duplicate web pages. Both algorithms were either developed at or used by popular web search engines. We compare the two algorithms at very large scale, namely on a set of 1.6B distinct web pages. The results show that neither algorithm works well for finding near-duplicate pairs on the same site, while both achieve high precision for near-duplicate pairs on different sites. Since Charikar's algorithm finds more near-duplicate pairs on different sites, it achieves better overall precision, namely 0.50 versus 0.38 for Broder et al.'s algorithm. We present a combined algorithm that achieves precision 0.79 with 79% of the recall of the other algorithms. Copyright 2006 ACM.
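The abstract names the two techniques without describing them. As a minimal illustrative sketch (not the paper's implementation), shingling compares documents via the Jaccard similarity of their sets of overlapping word w-grams, while Charikar's random-projection approach summarizes a document as a short fingerprint whose per-bit "votes" come from token hashes; near-duplicates then have small Hamming distance between fingerprints. All function names, the shingle width `w=4`, the 64-bit size, and the use of MD5 as a stand-in hash are assumptions for illustration:

```python
import hashlib


def shingles(words, w=4):
    # Assumed helper: the set of overlapping w-word shingles of a token list.
    return {tuple(words[i : i + w]) for i in range(len(words) - w + 1)}


def jaccard(a, b):
    # Jaccard similarity of two shingle sets: |A ∩ B| / |A ∪ B|.
    return len(a & b) / len(a | b)


def simhash(tokens, bits=64):
    # Illustrative random-projection-style fingerprint: each token's hash
    # votes +1/-1 per bit position; the sign of each accumulated component
    # gives one fingerprint bit. MD5 stands in for the hash family.
    v = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    fp = 0
    for i in range(bits):
        if v[i] > 0:
            fp |= 1 << i
    return fp


def hamming(a, b):
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")


doc1 = "the quick brown fox jumps over the lazy dog".split()
doc2 = "the quick brown fox jumped over the lazy dog".split()  # near-duplicate
doc3 = "completely different unrelated words entirely here today".split()

sim_near = jaccard(shingles(doc1), shingles(doc2))
sim_far = jaccard(shingles(doc1), shingles(doc3))
```

Here the near-duplicate pair shares some shingles (`0 < sim_near < 1`) while the unrelated pair shares none (`sim_far == 0`); at web scale both papers' methods replace these exact set operations with compact sketches (min-hash, bit fingerprints) so that 1.6B pages can be compared.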