When many (m) null hypotheses are tested with a single dataset, controlling the number of false rejections is often the principal consideration. Two popular controlling rates are the probability of making at least one false discovery (Bonferroni) and the expected fraction of false discoveries among all rejections (Benjamini-Hochberg). Methods controlling these rates based on the ordered p-values p(1), ..., p(m) are well known. In this talk, we present a new family of multiple testing procedures that bridges the gap between these two extremes. We also discuss the problem of how to choose in practice which procedure to use. This choice depends on the number of tests m, the likely size of the alternative effects, and the fraction of true nulls m0 among the m null hypotheses. In the literature this is mostly dealt with by maximizing the number of rejections subject to a bound on a particular controlling rate. This is analogous to the Neyman-Pearson approach of bounding the probability of a false rejection and then, given this constraint, maximizing the power. But since there is no agreement on the choice of control in multiple testing, the analogy is not convincing: this approach does not allow one to compare across a spectrum of controls. Controlling the false discovery rate, for example, can potentially lead to many rejections and is in this sense powerful, but how should this be compared to a method that controls the probability of making at least one erroneous rejection? One can make progress on this question by considering the number of false rejections F separately from the number of correct rejections T. Using this framework, we show how to choose an element of the new family mentioned above.
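The two controlling rates named above correspond to standard procedures on the ordered p-values. As a point of reference (this is a sketch of the classical Bonferroni and Benjamini-Hochberg step-up procedures, not of the new family presented in the talk), they might be implemented as:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha/m.

    Controls the probability of at least one false discovery
    (the family-wise error rate) at level alpha.
    """
    pvals = np.asarray(pvals, dtype=float)
    return pvals <= alpha / pvals.size

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up procedure on the ordered p-values p_(1) <= ... <= p_(m).

    Finds the largest k with p_(k) <= k*alpha/m and rejects the
    hypotheses with the k smallest p-values; controls the expected
    fraction of false discoveries among all rejections (the FDR).
    """
    pvals = np.asarray(pvals, dtype=float)
    m = pvals.size
    order = np.argsort(pvals)
    ranked = pvals[order]
    below = ranked <= np.arange(1, m + 1) * alpha / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1  # largest index passing the test
        reject[order[:k]] = True
    return reject
```

On p-values (0.01, 0.02, 0.03, 0.5) at alpha = 0.05, Bonferroni (threshold 0.0125) rejects one hypothesis while Benjamini-Hochberg rejects three, illustrating the gap in stringency between the two controlling rates that the abstract's family of procedures is designed to bridge.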

Presented at:
26th International Biometric Conference, Kobe, Japan, August 26-31, 2012

