Abstract

In this work, we consider four problems in the context of Internet traffic control.

The first problem is to understand when and why a sender that implements equation-based rate control is, or is not, TCP-friendly; a sender is said to be TCP-friendly if, under the same operating conditions, its long-term average send rate does not exceed that of a TCP sender. It is widely accepted that some senders in the Internet need to be TCP-friendly. An equation-based rate control sender plugs on-line estimates of the loss-event rate and the expected round-trip time into a TCP throughput formula, and at certain points in time sets its send rate to the value so computed. Conventional wisdom held that a sender that adjusts its send rate in this way would be TCP-friendly. We give an exact analysis that tells us when to expect equation-based rate control to be TCP-friendly, and in some cases excessively so. We also present experimental evidence, and identify the causes, showing that in a realistic scenario equation-based rate control can be grossly non-TCP-friendly.

Our second problem is to understand the throughput achieved by another family of send rate controls, which we term increase-decrease controls, with additive-increase/multiplicative-decrease as a special case. One issue we consider is the allocation of long-term average send rates among senders that adjust their rates by an additive-increase/multiplicative-decrease control, in a network of links with arbitrary fixed routes and arbitrary round-trip times. We characterize the resulting send rate allocation; this result advances the state of the art in understanding the fairness of rate allocation in the presence of arbitrary round-trip times. We also consider the design of an increase-decrease control that achieves a given target loss-throughput function. We show that if an increase-decrease control is designed under a commonly used reference loss process, namely a sequence of constant inter-loss-event times, then it overshoots its target loss-throughput function under some more general loss processes. One reason to study this design problem is to construct an increase-decrease control that is friendly to some other control, TCP for instance.

The third problem we consider is how to obtain probabilistic performance bounds for nodes that conform to the per-hop behavior of Expedited Forwarding, a service of the differentiated services Internet. Under the assumption that the arrival process at a node consists of flows that are individually regulated (as is commonplace with Expedited Forwarding) and stochastically independent, we obtain probabilistic bounds on backlog, delay, and loss. We then apply our single-node performance bounds to a network of nodes. Good probabilistic bounds on the performance of such nodes enable those networks to be dimensioned more effectively than deterministic worst-case performance bounds do.
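To give a flavor of why independence matters for such bounds, here is a generic illustration under assumed notation (A_i, b_i, m_i, x are introduced only for this sketch and are not the thesis's notation): suppose n independent flows feed a node and, over a fixed time window, flow i contributes A_i bits, where its regulator guarantees 0 <= A_i <= b_i and the mean contribution is m_i. Hoeffding's inequality then gives

\[
\Pr\Big[\sum_{i=1}^{n} A_i \;\ge\; \sum_{i=1}^{n} m_i + x\Big] \;\le\; \exp\Big(-\frac{2x^2}{\sum_{i=1}^{n} b_i^2}\Big),
\]

so the probability that the aggregate exceeds its mean by x decays quickly as long as no single regulated flow dominates the sum. Steps of this flavor, combined with the node's service guarantee, are what turn per-flow regulation plus independence into backlog, delay, and loss bounds; this sketch is only meant to convey the kind of argument involved, not the actual bounds of the thesis.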
Our last problem concerns the latency of an input-queued switch that implements a decomposition-based scheduler. With a decomposition-based scheduler, we are given a rate demand matrix that specifies the long-term rates the switch should offer between its input/output port pairs. This rate demand matrix is decomposed, by standard techniques, into a set of permutation matrices that define the connectivity of the input/output port pairs. The problem is then how to construct a schedule of these permutation matrices such that the schedule offers a small latency to each input/output port pair of the switch. We obtain latency bounds for some schedulers that are, in many situations, smaller than the best-known bound. Being able to design switches with bounds on their latencies is important for providing delay-jitter guarantees.
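For concreteness, a standard technique of the kind referred to above is the Birkhoff-von Neumann decomposition, which writes a doubly stochastic rate demand matrix as a convex combination of permutation matrices. The following toy 2x2 instance is chosen only for illustration and is not taken from the thesis:

\[
\begin{pmatrix} 0.75 & 0.25 \\ 0.25 & 0.75 \end{pmatrix}
= 0.75 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
+ 0.25 \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]

A scheduler then devotes roughly the corresponding fraction of time slots to each permutation, for example the length-4 frame P_1, P_1, P_2, P_1 repeated forever for the instance above, where P_1 is the identity and P_2 the swap permutation; how evenly the slots of each permutation are spread over time largely determines the latency seen by each input/output port pair, which is the quantity the schedulers in this work aim to keep small.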
