Abstract

Large-scale online services are commonly structured as a network of software tiers that communicate over the datacenter network using RPCs. Ongoing trends toward software decomposition have led to the prevalence of tiers receiving and generating RPCs with runtimes of only a few microseconds. With such small software runtimes, even the smallest latency overheads in RPC handling have a significant relative performance impact. In particular, we find that growing network bandwidth introduces queuing effects within a server's memory hierarchy, considerably hurting the response latency of fine-grained RPCs. In this work, we introduce NeBuLa, an architecture optimized to accelerate the most challenging microsecond-scale RPCs by leveraging two novel mechanisms to drastically improve server throughput under strict tail latency goals. First, NeBuLa reduces detrimental queuing at the memory controllers via hardware support for efficient in-LLC network buffer management. Second, NeBuLa's network interface steers incoming RPCs into the CPU cores' L1 caches, improving RPC startup latency. Our evaluation shows that NeBuLa boosts the throughput of a state-of-the-art key-value store by 1.25–2.19x compared to existing proposals, while maintaining strict tail latency goals.
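To give a concrete intuition for the first mechanism, the C sketch below models the key idea behind in-LLC network buffer management in software: receive buffers come from a small, fixed pool that is recycled, so the networking working set stays bounded (and thus cache-resident) instead of growing into DRAM under load. This is a conceptual software analogy, not the paper's hardware design; all names (rx_pool, pool_alloc, POOL_BUFS) and sizes are illustrative assumptions.

/* Conceptual sketch only: a bounded, recycled RX buffer pool.
 * The point is that a fixed pool caps the buffer footprint, so hot
 * buffers keep being reused from cache rather than queuing in DRAM.
 * Pool size and buffer size are assumed values, not from the paper. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE  2048   /* one RX buffer (assumed MTU-sized)          */
#define POOL_BUFS 64     /* pool sized to fit comfortably in the LLC   */

typedef struct { unsigned char data[BUF_SIZE]; } rx_buf;

typedef struct {
    rx_buf bufs[POOL_BUFS];
    int    free_list[POOL_BUFS];
    int    free_top;     /* stack of free buffer indices */
} rx_pool;

static void pool_init(rx_pool *p) {
    for (int i = 0; i < POOL_BUFS; i++) p->free_list[i] = i;
    p->free_top = POOL_BUFS;
}

/* NIC side: claim a buffer for an incoming RPC. Returning NULL when the
 * pool is exhausted models backpressure instead of unbounded queuing. */
static rx_buf *pool_alloc(rx_pool *p) {
    if (p->free_top == 0) return NULL;
    return &p->bufs[p->free_list[--p->free_top]];
}

/* CPU side: return the buffer once the RPC is consumed, so the same
 * (cache-resident) lines are reused for the next arrival. */
static void pool_free(rx_pool *p, rx_buf *b) {
    p->free_list[p->free_top++] = (int)(b - p->bufs);
}

int main(void) {
    rx_pool pool;
    pool_init(&pool);

    /* Simulate a stream of fine-grained RPCs reusing the bounded pool. */
    for (int rpc = 0; rpc < 1000; rpc++) {
        rx_buf *b = pool_alloc(&pool);
        if (!b) { puts("pool exhausted: apply backpressure"); break; }
        memset(b->data, rpc & 0xff, 64);  /* small RPC payload */
        pool_free(&pool, b);              /* consumed -> recycle */
    }
    printf("processed RPCs with a %d-buffer working set (~%d KiB)\n",
           POOL_BUFS, POOL_BUFS * BUF_SIZE / 1024);
    return 0;
}

In this analogy, bounding the pool plays the role that NeBuLa's hardware buffer management plays in the real design: it prevents incoming network traffic from inflating memory-controller queues, which the abstract identifies as the main source of latency for fine-grained RPCs.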
