Advantages of Variance Stabilization
Variance stabilization is a simple device for normalizing a statistic. Although its large-sample properties are similar to those of studentizing, many simulation studies of confidence-interval procedures show that variance stabilization works better in small samples. We investigate this question in the context of testing a null hypothesis involving a single parameter. We provide support for a measure of evidence for an alternative hypothesis that is simple to compute, calibrate, and interpret. It applies to most routine problems in statistics, and leads to more accurate confidence intervals, estimated power, and hence sample-size calculations than standard asymptotic methods. Such evidence is readily combined when obtained from different studies. Connections to other approaches to statistical evidence are described, with a notable link to the symmetrized Kullback–Leibler divergence.
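As a minimal illustration of the idea (not the authors' specific procedure), the square root is the classical variance-stabilizing transformation for a Poisson count: sqrt(X) has approximate variance 1/4 regardless of the mean, so a symmetric interval on the root scale can be squared back to an asymmetric interval for the mean. The sketch below, using only the Python standard library, contrasts this with the standard Wald interval for a small count, where the Wald interval can dip below zero:

```python
import math

Z = 1.959963984540054  # approx. 97.5th percentile of the standard normal

def wald_ci_poisson(x):
    """Standard asymptotic (Wald) interval for a Poisson mean:
    x +/- Z * sqrt(x), since the Poisson variance equals its mean."""
    half = Z * math.sqrt(x)
    return (x - half, x + half)

def stabilized_ci_poisson(x):
    """Variance-stabilized interval: sqrt(X) has approximate variance 1/4,
    so sqrt(x) +/- Z/2 covers sqrt(lambda); square back to the mean scale."""
    lo = max(math.sqrt(x) - Z / 2, 0.0)
    hi = math.sqrt(x) + Z / 2
    return (lo * lo, hi * hi)

# For a small observed count the two intervals differ noticeably:
# the Wald interval is symmetric and can extend below zero, while the
# stabilized interval is asymmetric and respects the parameter range.
x = 2
print(wald_ci_poisson(x))
print(stabilized_ci_poisson(x))
```

The improved small-sample behaviour of the transformed interval reflects the phenomenon the abstract describes: on the stabilized scale the normal approximation is more accurate, and mapping the interval back produces the asymmetry the sampling distribution actually has.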