
Abstract

We consider increase-decrease congestion controls, a formulation that accommodates many known congestion controls. Many previous works aim to obtain a relation between the loss-event rate $\fpp$ and the time-average window $\taw$ for particular known instances of increase-decrease controls. In contrast, in this note we study the inverse problem: given a target response function $x\rightarrow f(x)$, the design problem is to construct an increase-decrease control such that, ideally, $\taw=f(\fpp)$, or at least $\taw\leq f(\fpp)$. A common approach is to design a control that satisfies the requirement in a reference system and then evaluate its behaviour in a more general system. In this note, we take as reference system the case of deterministic, constant inter-loss times. Our finding is as follows. We identify conditions under which, if $\taw'\geq f(\fpp')$ in the reference system (i.e. the control overshoots), then for any independent, identically distributed (i.i.d.) random inter-loss times we have $\taw\geq\frac{1}{1+\varepsilon}f(\frac{1}{1+\varepsilon}\fpp)$, for some small $\varepsilon\geq 0$ specified in this note. In other words, moving from the reference system to the more general case of i.i.d. losses cannot eliminate an overshoot. We apply our results to a stochastic fluid version of HighSpeed TCP \cite{floyd-02-a} and show that, for this idealized HighSpeed TCP, our result applies with $\varepsilon$ no larger than $0.0012$; this implies that, under the hypotheses above, $\taw$ for idealized HighSpeed TCP is almost lower bounded by $f(\fpp)$. Our general result raises the question of whether it is good practice to design congestion controls by taking deterministic constant inter-loss times as the reference system, given that we demonstrate that this reference system is, in a sense made precise in the paper, a best case rather than a worst case, as would be more desirable.
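To illustrate the phenomenon discussed above, the following toy simulation (not the paper's model; all parameters here are hypothetical) runs a fluid AIMD control, a simple instance of an increase-decrease control, first with deterministic constant inter-loss times and then with i.i.d. exponential inter-loss times of the same mean. Randomizing the loss process does not reduce the time-average window below its deterministic-reference value, consistent with the claim that the deterministic reference system is a best case.

```python
import random

def simulate(interloss_times, a=1.0, beta=0.5):
    """Fluid AIMD sketch: the window grows at rate `a` between losses
    and is multiplied by `beta` at each loss event.

    Returns (time-average window, loss-event rate) over the run.
    Parameters a, beta and the initial window are illustrative choices.
    """
    w = 1.0           # current window
    area = 0.0        # integral of the window over time
    total_time = 0.0
    for t in interloss_times:
        # window grows linearly from w to w + a*t over duration t
        area += w * t + 0.5 * a * t * t
        total_time += t
        w = beta * (w + a * t)  # multiplicative decrease at the loss
    return area / total_time, len(interloss_times) / total_time

if __name__ == "__main__":
    n = 200_000
    # reference system: deterministic, constant inter-loss times
    avg_det, _ = simulate([1.0] * n)
    # i.i.d. random inter-loss times with the same mean
    random.seed(0)
    avg_iid, _ = simulate([random.expovariate(1.0) for _ in range(n)])
    print(avg_det, avg_iid)  # the i.i.d. average window is the larger one
```

With these parameters the deterministic run settles into an exact cycle (window 1 to 2, time-average 1.5), while the exponential inter-loss times inflate the quadratic growth term and yield a strictly larger time-average window, so moving to i.i.d. losses does not remove an overshoot present in the reference system.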
