Abstract

In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the noiseless and symmetric noise models, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low sparsity levels, and that when a small fraction of incorrectly-decoded items is allowed, this guarantee extends to all sublinear sparsity levels. In many scaling regimes, these are the best known theoretical guarantees for any noisy group testing algorithm.
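Because each item is decoded separately, the decoder reduces to comparing a per-item information (log-likelihood ratio) statistic against a threshold, with no joint search over subsets of items; this is the source of its efficiency relative to combinatorial joint decoding. The following is a minimal simulation sketch of separate decoding under the symmetric noise model. The parameter values, the Bernoulli design density p = ln(2)/k, and the threshold gamma = log(n) are illustrative assumptions for this demo, not the paper's exact choices.

import numpy as np

rng = np.random.default_rng(0)

n, k, T, rho = 1000, 10, 700, 0.05         # items, defectives, tests, flip prob.
p = np.log(2) / k                          # Bernoulli test design density (assumed)

defective = rng.choice(n, size=k, replace=False)
X = rng.random((T, n)) < p                 # X[t, j] = 1 iff item j is in test t
clean = X[:, defective].any(axis=1)        # noiseless OR outcomes
Y = clean ^ (rng.random(T) < rho)          # symmetric noise: flip each w.p. rho

# Outcome law given item j's test participation, assuming j is defective,
# with the other k - 1 defectives marginalised out.
q = 1 - (1 - p) ** (k - 1)                 # P(some other defective in a test)
p1_given1 = 1 - rho                        # P(Y = 1 | X_j = 1)
p1_given0 = q * (1 - rho) + (1 - q) * rho  # P(Y = 1 | X_j = 0)
m = 1 - (1 - p) ** k
p1_marg = m * (1 - rho) + (1 - m) * rho    # unconditional P(Y = 1)

# Separate decoding: per-item statistic sum_t log P(Y_t | X_{t,j}) / P(Y_t),
# thresholded independently for each item.
p1 = np.where(X, p1_given1, p1_given0)     # shape (T, n)
stat = (Y[:, None] * np.log(p1 / p1_marg)
        + (~Y)[:, None] * np.log((1 - p1) / (1 - p1_marg))).sum(axis=0)

gamma = np.log(n)                          # heuristic threshold for this demo
decoded = np.flatnonzero(stat > gamma)
print("true defective set:", sorted(defective.tolist()))
print("decoded set:       ", sorted(decoded.tolist()))

The statistic is a single vectorised pass over the test matrix, so decoding costs O(Tn) arithmetic operations in total.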
