Abstract

The power consumption of the Internet and of datacenter networks is already significant, due to a large degree of redundancy and the high idle power consumption of network elements. Dynamically matching network resources to the actual load is therefore highly desirable. Existing approaches in this domain advocate recomputing network configurations with each substantial change in demand. Unfortunately, computing the minimum network subset is a computationally hard and time-consuming problem, which prevents these approaches from scaling up to large or even medium-sized networks. As a result, the network operates with diminished performance during periods of energy-aware routing recomputation. In this dissertation, I propose REsPoNse, a design that achieves both energy proportionality and scalability by taking a fundamentally different, hybrid approach. REsPoNse uses additional offline computation and memory to overcome the optimality-scalability trade-off, leveraging traffic predictability to: 1) precompute offline as much routing information as possible and install it into a small number of routing tables (called always-on, on-demand, and failover), and 2) use a simple, scalable online traffic engineering mechanism (EATe) to deactivate and activate network elements on demand. I then make a significant step towards the deployment of REsPoNse by proposing UNO, a framework that encodes all information about traffic congestion on the computed paths into an existing IP header. Further, I thoroughly evaluate REsPoNse by: i) replaying traffic demands collected over real topologies, ii) running ns-2 simulations over ISP and data center networks, iii) implementing and experimenting with a Click testbed, and iv) running video-on-demand and web applications live in a network emulator. My findings demonstrate that REsPoNse achieves energy proportionality equal to or better than that of existing approaches, with little or no impact on network responsiveness, regardless of the network size. The energy savings amount to about 30-40% across varying power models of network elements. Finally, the two representative applications experience only marginal impact on their application-level throughput and latency when compared to running over an energy-oblivious network.
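
The abstract states only that UNO encodes congestion information about the computed paths into an existing IP header, without specifying the field or the encoding. The following minimal C sketch illustrates one hypothetical way such per-path congestion feedback could be carried, here in the DSCP bits of the IPv4 ToS/DiffServ byte; the function names (uno_mark_congestion, uno_read_congestion) and the 6-bit quantization are assumptions made for illustration, not the mechanism described in the dissertation.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sketch: carrying a per-path congestion level in the
     * DSCP bits (upper 6 bits) of the IPv4 ToS/DiffServ byte. The
     * dissertation only states that congestion information is encoded
     * into an existing IP header; this particular encoding is assumed. */

    struct ipv4_header {
        uint8_t  version_ihl;
        uint8_t  tos;          /* DSCP (6 bits) + ECN (2 bits) */
        uint16_t total_length;
        /* ... remaining IPv4 fields omitted for brevity ... */
    };

    /* Quantize a link utilization in [0,1] into a 6-bit congestion level
     * and write it into the DSCP bits, preserving the two ECN bits. */
    static void uno_mark_congestion(struct ipv4_header *ip, double utilization)
    {
        if (utilization < 0.0) utilization = 0.0;
        if (utilization > 1.0) utilization = 1.0;
        uint8_t level = (uint8_t)(utilization * 63.0 + 0.5); /* 0..63 */
        ip->tos = (uint8_t)((level << 2) | (ip->tos & 0x03));
    }

    /* Read the congestion level back at the path endpoint. */
    static uint8_t uno_read_congestion(const struct ipv4_header *ip)
    {
        return ip->tos >> 2;
    }

    int main(void)
    {
        struct ipv4_header ip = { .version_ihl = 0x45, .tos = 0 };
        uno_mark_congestion(&ip, 0.72);
        printf("congestion level = %u\n", uno_read_congestion(&ip));
        return 0;
    }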
