000255993 001__ 255993
000255993 005__ 20190317001005.0
000255993 020__ $$a978-1-931971-38-6
000255993 037__ $$aCONF
000255993 245__ $$aDon't cry over spilled records: Memory elasticity of data-parallel applications and its application to cluster scheduling
000255993 269__ $$a2017-07-12
000255993 260__ $$bThe USENIX Association$$c2017-07-12
000255993 300__ $$a11
000255993 336__ $$aConference Papers
000255993 520__ $$aUnderstanding the performance of data-parallel workloads when resource-constrained has significant practical importance but unfortunately has received only limited attention. This paper identifies, quantifies and demonstrates memory elasticity, an intrinsic property of data-parallel tasks. Memory elasticity allows tasks to run with significantly less memory than they would ideally want while only paying a moderate performance penalty. For example, we find that given as little as 10% of ideal memory, PageRank and NutchIndexing Hadoop reducers become only 1.2x/1.75x and 1.08x slower. We show that memory elasticity is prevalent in the Hadoop, Spark, Tez and Flink frameworks. We also show that memory elasticity is predictable in nature by building simple models for Hadoop and extending them to Tez and Spark. To demonstrate the potential benefits of leveraging memory elasticity, this paper further explores its application to cluster scheduling. In this setting, we observe that the resource vs. time trade-off enabled by memory elasticity becomes a task queuing time vs. task runtime trade-off. Tasks may complete faster when scheduled with less memory because their waiting time is reduced. We show that a scheduler can turn this task-level trade-off into improved job completion time and cluster-wide memory utilization. We have integrated memory elasticity into Apache YARN. We show gains of up to 60% in average job completion time on a 50-node Hadoop cluster. Extensive simulations show similar improvements over a large number of scenarios.
000255993 6531_ $$aMemory elasticity
000255993 6531_ $$aResource management
000255993 6531_ $$aData-parallel jobs
000255993 6531_ $$aDistributed systems
000255993 700__ $$0248040$$aIorgulescu, Calin
000255993 700__ $$0247561$$aDinu, Florin
000255993 700__ $$aRaza, Aunn
000255993 700__ $$aUl Hassan, Wajih
000255993 700__ $$0243160$$aZwaenepoel, Willy
000255993 7112_ $$dJuly 12-14, 2017$$cSanta Clara, California, USA$$aUSENIX Annual Technical Conference 2017
000255993 773__ $$tProceedings of the USENIX Annual Technical Conference 2017$$q97-109
000255993 8560_ $$fcalin.iorgulescu@epfl.ch
000255993 8564_ $$uhttps://infoscience.epfl.ch/record/255993/files/atc17-iorgulescu.pdf$$zFinal$$s1333499
000255993 909C0 $$xU10700$$pLABOS$$mwilly.zwaenepoel@epfl.ch$$0252226
000255993 909CO $$qGLOBAL_SET$$pconf$$pIC$$ooai:infoscience.epfl.ch:255993
000255993 960__ $$acalin.iorgulescu@epfl.ch
000255993 961__ $$afantin.reichler@epfl.ch
000255993 973__ $$aEPFL$$rREVIEWED
000255993 980__ $$aCONF
000255993 981__ $$aoverwrite