Abstract

The advent of online video streaming services, along with users' demand for high-quality content, calls for High Efficiency Video Coding (HEVC), which provides higher quality and compression at the cost of increased complexity. On one hand, HEVC exposes a set of dynamically tunable parameters that provide trade-offs among Quality-of-Service (QoS), performance, and power consumption of multi-core servers. On the other hand, resource management of modern multi-core servers is responsible for adapting system-level parameters, such as operating frequency and multithreading, to deal with concurrent applications and their requirements. Efficient multi-user HEVC streaming therefore necessitates joint adaptation of application- and system-level parameters. However, such a large and dynamic design space is difficult to address through conventional strategies. In this work, we develop a multiagent Reinforcement Learning framework that jointly adjusts application- and system-level parameters at runtime to satisfy the QoS requirements of multi-user HEVC streaming on power-constrained servers. The benefits of our approach are revealed in terms of adaptability and quality (up to 4x improvement in QoS compared to a static scheme) and learning time (6x faster than an equivalent mono-agent implementation). Finally, we show that the formulated power-capping techniques outperform hardware-based power capping in terms of quality.
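
To make the idea concrete, below is a minimal, hypothetical sketch of the kind of multiagent loop described above, using plain tabular Q-learning: one agent per tunable knob (e.g., a per-user HEVC preset and the core operating frequency), a shared reward that favors QoS but penalizes exceeding a power cap, and a toy environment standing in for real encoder and server measurements. All names, state encodings, and reward shaping here are illustrative assumptions, not the paper's actual implementation.

    # Minimal multiagent Q-learning sketch (illustrative only, not the paper's code).
    import random
    from collections import defaultdict

    class QAgent:
        def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
            self.q = defaultdict(lambda: [0.0] * n_actions)  # Q-table: state -> action values
            self.n_actions, self.alpha, self.gamma, self.eps = n_actions, alpha, gamma, eps

        def act(self, state):
            if random.random() < self.eps:                   # epsilon-greedy exploration
                return random.randrange(self.n_actions)
            return max(range(self.n_actions), key=lambda a: self.q[state][a])

        def update(self, s, a, r, s_next):
            best_next = max(self.q[s_next])                  # standard Q-learning target
            self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])

    def reward(qos, power, power_cap):
        # Assumed shaping: reward delivered QoS, penalize violating the power cap.
        return qos - 10.0 if power > power_cap else qos

    def step(actions):
        # Placeholder environment: real QoS/power would come from the encoder and server sensors.
        qos = sum(actions) / 6.0
        power = 20.0 + 5.0 * actions[1]
        return qos, power

    # One agent per knob: agent 0 = HEVC preset level, agent 1 = operating-frequency level.
    agents = [QAgent(n_actions=4), QAgent(n_actions=4)]
    state = (0, 0)
    for _ in range(1000):
        actions = [ag.act(state) for ag in agents]
        qos, power = step(actions)
        r = reward(qos, power, power_cap=35.0)
        next_state = (int(qos * 3), int(power > 35.0))       # coarse state quantization
        for ag, a in zip(agents, actions):
            ag.update(state, a, r, next_state)               # agents share the global reward
        state = next_state

Splitting the knobs across independent agents keeps each Q-table small, which is one plausible reason a multiagent formulation can learn faster than a single agent over the joint action space, as the abstract's 6x learning-time result suggests.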
