Authors: Margaritov, Artemiy; Gupta, Siddharth; Gonzalez-Alberquilla, Rekai; Grot, Boris
Dates: 2019-06-18; 2019-01-01
DOI: 10.1109/HPCA.2019.00024
URL: https://infoscience.epfl.ch/handle/20.500.14299/157433
Web of Science: WOS:000469766300002

Abstract: In a drive to maximize resource utilization, today's datacenters are moving to colocation of latency-sensitive and batch workloads on the same server. State-of-the-art deployments, such as those at Google, colocate such diverse workloads even on a single SMT core. This aggressive colocation is possible because a latency-sensitive service operating below its peak load has significant slack in its response latency with respect to the QoS target. That slack can absorb the single-thread performance degradation that is inevitable under SMT colocation without compromising QoS targets. This work makes the observation that many batch applications can greatly benefit from a large instruction window to uncover instruction-level parallelism (ILP) and memory-level parallelism (MLP). Under SMT colocation, conventional wisdom holds that individual hardware threads should be limited in their ability to acquire and hold a disproportionately large share of microarchitectural resources so as not to compromise the performance of a co-running thread. We show that the performance slack inherent in latency-sensitive workloads operating at low to moderate load makes it safe to shift microarchitectural resources to a co-running batch thread without compromising QoS targets. Based on this insight, we introduce Stretch, a simple reorder buffer (ROB) partitioning scheme that is invoked by system software to provide one hardware thread with a much larger ROB partition at the expense of another thread. When Stretch is enabled for latency-sensitive workloads operating below their peak load on an SMT core, co-running batch applications gain 13% performance on average (up to 30%) over baseline SMT colocation, without compromising QoS constraints.

Subjects: Computer Science, Hardware & Architecture; Computer Science
Keywords: quality of service; datacenter; simultaneous multi-threading; latency-sensitive applications; microarchitecture; resource allocation; latency; parallelism; policy
Title: Stretch: Balancing QoS and Throughput for Colocated Server Workloads on SMT Cores
Type: text::conference output::conference proceedings::conference paper
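
The sketch below is not taken from the paper; it is a hypothetical C illustration of the system-software side of the idea described in the abstract: a control loop that requests an asymmetric ROB partition when the latency-critical service runs below a load threshold and restores the even split otherwise. The interfaces stretch_set_partition and read_service_load, the ROB entry counts, and the 0.60 load threshold are all assumptions introduced here for illustration.

/*
 * Illustrative sketch only -- not the paper's implementation.
 * Stretch itself is a hardware ROB-partitioning mechanism; this sketch
 * only mimics the system-software control loop that would enable an
 * asymmetric ROB split when the latency-critical (LC) service runs well
 * below peak load, and restore the even split otherwise.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical ROB splits (entries per hardware thread). */
#define ROB_EVEN_SPLIT      112  /* baseline SMT: equal partitions      */
#define ROB_STRETCHED_BATCH 192  /* enlarged partition for batch thread */
#define ROB_SHRUNK_LC        32  /* reduced partition for LC thread     */

/* Assumed threshold: below this fraction of peak load, the LC service's
 * latency slack is deemed large enough to shift resources. */
#define LOAD_THRESHOLD 0.60

/* Stub for whatever mechanism (MSR write, driver ioctl, ...) would expose
 * the partition knob to system software. */
static void stretch_set_partition(int lc_entries, int batch_entries) {
    printf("set ROB partition: LC=%d entries, batch=%d entries\n",
           lc_entries, batch_entries);
}

/* Stub load probe: a real system would measure the LC service's offered
 * load (0.0 .. 1.0 of peak); here we replay a fixed trace. */
static double read_service_load(void) {
    static const double trace[] = {0.2, 0.3, 0.5, 0.7, 0.9, 0.4};
    static int i = 0;
    return trace[i++ % 6];
}

int main(void) {
    bool stretched = false;

    for (int tick = 0; tick < 6; tick++) {
        double load = read_service_load();

        if (!stretched && load < LOAD_THRESHOLD) {
            /* Enough latency slack: give the batch thread a larger ROB share. */
            stretch_set_partition(ROB_SHRUNK_LC, ROB_STRETCHED_BATCH);
            stretched = true;
        } else if (stretched && load >= LOAD_THRESHOLD) {
            /* Load picked up: restore the even split to protect QoS. */
            stretch_set_partition(ROB_EVEN_SPLIT, ROB_EVEN_SPLIT);
            stretched = false;
        }
        printf("tick %d: load=%.2f stretched=%d\n", tick, load, stretched);
    }
    return 0;
}

In a real system the load probe would read the service's telemetry and the partition call would drive the actual hardware knob; the reported results (13% average batch speedup without QoS violations) come from the paper's hardware scheme, not from this sketch.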