Comparing multiple testing procedures and error rates under weighted classification
When many hypotheses are tested simultaneously, controlling the number of false rejections is often the principal consideration. In practice, this alone is unsatisfactory, since we must also keep in mind the power to detect true effects. At present, practitioners' choices are often restricted to the family-wise error rate (FWER) and the false discovery rate (FDR). We recently introduced the scaled multiple testing error rates, a family that includes most existing error rates and bridges the gap between the FWER and the FDR. For example, the scaled false discovery rate (SFDR) limits the number of false positives (FP) relative to an increasing function s of the number of rejections (R), by bounding E(FP/s(R)). We compare performance for different choices of the scaling function s and discuss the optimality of these error rates in several practical scenarios by considering the number of false positives FP separately from the number of true positives TP.
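The quantity E(FP/s(R)) can be illustrated by Monte Carlo simulation. The sketch below is not from the paper: it assumes the Benjamini-Hochberg step-up procedure as the rejection rule and a toy data model (uniform p-values under the null, Beta(0.1, 1) under the alternative), and contrasts the scaling s(r) = max(r, 1), which recovers the FDR, with the constant scaling s(r) = 1, which recovers the per-family error rate E(FP). All function names and parameters are illustrative assumptions.

```python
import numpy as np

def bh_reject(p, alpha):
    """Benjamini-Hochberg step-up procedure: boolean rejection mask."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def empirical_scaled_rate(s, n_null=50, n_alt=50, alpha=0.1,
                          n_sims=2000, seed=0):
    """Monte Carlo estimate of E[FP / s(R)] for BH rejections.

    Toy model (an assumption, not the paper's setup): null p-values
    are Uniform(0, 1); alternative p-values are Beta(0.1, 1), which
    are stochastically smaller.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_sims):
        p = np.concatenate([rng.uniform(size=n_null),
                            rng.beta(0.1, 1.0, size=n_alt)])
        rej = bh_reject(p, alpha)
        fp = rej[:n_null].sum()   # rejected true nulls (false positives)
        r = rej.sum()             # total rejections
        total += fp / s(r)
    return total / n_sims

# s(r) = max(r, 1) gives the FDR; s(r) = 1 gives E[FP] (per-family error rate).
fdr_est = empirical_scaled_rate(lambda r: max(r, 1))
pfer_est = empirical_scaled_rate(lambda r: 1)
```

Since FP/max(R, 1) is never larger than FP, the FDR-scaled estimate is bounded above by the per-family one, showing how the choice of s interpolates between lenient and strict error rates.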