Near-Optimal Noisy Group Testing via Separate Decoding of Items
In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the noiseless and symmetric noise models, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low sparsity levels, and that when a small fraction of incorrectly-decoded items is allowed, this guarantee extends to all sublinear sparsity levels. In many scaling regimes, these are the best known theoretical guarantees for any noisy group testing algorithm.
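To illustrate the item-by-item decoding idea in its simplest form, the following is a minimal sketch of separate decoding under the noiseless model, using the rule that an item appearing in any negative test is declared non-defective. This is a simplified special case of the per-item threshold test, not the general thresholded information-density statistic analyzed in the paper; the design parameters (`n`, `k`, `T`, and the Bernoulli participation probability) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not from the paper):
# n items, k defectives, T tests
n, k, T = 100, 5, 60
defective = set(int(j) for j in rng.choice(n, size=k, replace=False))

# Bernoulli test design: each item joins each test independently
# with probability ~ ln(2)/k, a common parameter choice
p = np.log(2) / k
X = rng.random((T, n)) < p

# Noiseless OR channel: a test is positive iff it contains a defective
y = np.array([any(X[t, j] for j in defective) for t in range(T)])

# Separate decoding (noiseless special case): item j is declared
# defective iff every test containing j came back positive
estimate = {j for j in range(n) if np.all(y[X[:, j]])}
```

In the noiseless model this rule never misses a true defective, since a defective item can only appear in positive tests; errors are confined to false positives, whose probability shrinks as the number of tests grows.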