Quantitative PCR (qPCR) is a powerful technique now commonly used in many research and clinical laboratories. Although it allows precise quantification of specific DNA sequences, it is often not used to its full potential. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. Highly sophisticated mathematical models have also been proposed to decipher the intimate workings of the PCR process, but most of them have never been validated. Poor knowledge of the underlying mechanisms often leads to inaccurate results and misinterpretation. Our first objective was to measure by qPCR chromatin accessibility probed by a DNA adenine methylase. The results showed that 2-fold variations in relative accessibility could be assessed. However, the variability of the measurements led us to question the reproducibility of qPCR. Placing our work in a robust statistical framework, we proceeded to evaluate the parameters influencing PCR efficiency. The performance of known methods was also evaluated in terms of sensitivity, precision, and robustness, and compared with various improved models based on individual or averaged efficiency values. Our results show that when accurate quantification is required, single-reaction efficiencies need to be measured and averaged for a given sample and primer pair. Furthermore, we show that well-designed primers can satisfy the assumption of equal efficiency, and therefore that the ∆Ct model is valid for measuring at least a 5-fold induction of gene expression. Finally, we have compiled as exhaustive a list as possible of the pitfalls qPCR users may stumble upon, together with proposed solutions.
Our results allow the precise evaluation of minute amounts of DNA, with a predictable and realistic number of measurements.
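The two quantification models contrasted above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the classic ∆∆Ct model, which assumes equal (and perfect, E = 2) amplification efficiency for target and reference amplicons, versus an efficiency-corrected (Pfaffl-style) ratio that uses a separately measured efficiency for each amplicon. All Ct values and efficiencies below are hypothetical, chosen to mimic a roughly 5-fold induction.

```python
# Relative quantification from qPCR Ct (threshold cycle) values.
# All numeric inputs are hypothetical, for illustration only.

def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Classic delta-delta-Ct model: assumes both amplicons
    amplify with the same perfect efficiency (E = 2)."""
    ddct = ((ct_target_sample - ct_ref_sample)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

def fold_change_efficiency(ct_target_sample, ct_target_control, e_target,
                           ct_ref_sample, ct_ref_control, e_ref):
    """Efficiency-corrected ratio: each amplicon uses its own
    measured amplification efficiency E (1 < E <= 2)."""
    return (e_target ** (ct_target_control - ct_target_sample)
            / e_ref ** (ct_ref_control - ct_ref_sample))

# Hypothetical induction experiment (~5-fold):
equal_e = fold_change_ddct(22.0, 20.0, 24.3, 20.0)
corrected = fold_change_efficiency(22.0, 24.3, 1.95,
                                   20.0, 20.0, 1.98)
print(round(equal_e, 2), round(corrected, 2))
```

With well-designed primers whose efficiencies are close to 2, the two models give similar fold changes, which is why the equal-efficiency assumption behind the ∆Ct approach can hold in practice; when efficiencies diverge between amplicons, the corrected ratio departs from the ∆∆Ct estimate.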