Just-in-time performance without warm-up

Scala has been developed as a language that integrates deeply with the Java ecosystem, offering seamless interoperability with existing Java libraries. Since the Scala compiler targets Java bytecode, Scala programs have access to high-performance runtimes such as the HotSpot virtual machine. HotSpot achieves impressive performance through just-in-time compilation: it starts program execution in interpreter mode, collecting profile feedback about called methods. This information lets HotSpot identify hot spots in the program, which are then compiled on the fly to native code. This compilation scheme enables high peak performance at the cost of the warm-up time required to collect profile data and perform just-in-time compilation. It is a good example of the traditional tradeoff between ahead-of-time (AOT) and just-in-time (JIT) compilation: with AOT, compilers have less information, but the runtime story is reasonably straightforward; with JIT, compilers have more information, which enables advanced optimizations, but the runtime story becomes complicated.

In this dissertation, we present the design and implementation of Scala Native, an optimizing compiler for Scala. With Scala Native, Scala programs are compiled ahead of time, which avoids runtime compilation and enables instant startup. At the same time, Scala Native matches and even surpasses the peak performance of HotSpot on our benchmarks. Moreover, Scala Native is a general-purpose Scala compiler: programs compiled by Scala Native closely match the behavior of programs compiled by the Scala compiler.

First, we introduce NIR, an intermediate representation designed with ahead-of-time compilation in mind. NIR represents programs in static single assignment (SSA) form and supports object-oriented features such as virtual dispatch and multiple inheritance. This representation is a key enabler of our compilation and optimization pipeline.
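To make the object-oriented features concrete, the following sketch (our own illustration, not code from the thesis) shows the two Scala features mentioned above that such an IR must encode: multiple inheritance via traits, and virtual dispatch through a trait-typed receiver. Under a closed-world assumption, an optimizer that can see every implementation of a trait may turn such virtual calls into direct, inlinable ones.

```scala
// Traits give Scala a form of multiple inheritance; calls through a
// trait-typed value are virtual. All names here are illustrative.
trait Shape { def area: Double }
trait Named { def name: String }

final class Circle(r: Double) extends Shape with Named {
  def area: Double = math.Pi * r * r
  def name: String = "circle"
}

object Dispatch {
  // `s.name` and `s.area` are virtual calls. If whole-program analysis
  // shows Circle is the only class implementing Shape with Named, both
  // calls can be devirtualized and inlined.
  def describe(s: Shape with Named): String =
    s"${s.name} has area ${s.area}"

  def main(args: Array[String]): Unit =
    println(describe(new Circle(1.0)))
}
```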
Second, we present Interflow, a link-time optimizer that exploits the closed-world assumption to optimize the whole program at once. Our optimizer employs a number of techniques, including partial evaluation, allocation sinking, and method duplication. The combination of these techniques allows Scala Native to outperform HotSpot on the majority of our benchmarks.

Finally, we describe how to improve runtime performance even further based on profile feedback. We propose a technique that splits methods apart, isolating key hot paths that are then optimized more aggressively than the cold parts of the program. This yields a further performance advantage over HotSpot.
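The following sketch illustrates the intuition behind two of these ideas (our own example under assumed semantics, not code from the thesis): allocation sinking, where a short-lived object that never escapes is replaced by its fields so the allocation disappears, and hot/cold splitting, where a rarely taken slow path is outlined into its own method so the hot path can be optimized in isolation.

```scala
// Allocation sinking: the Point in dist2Boxed never escapes, so an
// optimizer can effectively rewrite it into dist2Sunk, eliminating
// the allocation entirely.
final case class Point(x: Int, y: Int)

object Sinking {
  def dist2Boxed(a: Int, b: Int): Int = {
    val p = Point(a, b)       // allocation of a non-escaping object
    p.x * p.x + p.y * p.y     // only the fields are ever used
  }

  // What the optimized code amounts to: fields became local values.
  def dist2Sunk(a: Int, b: Int): Int =
    a * a + b * b
}

object Splitting {
  // Profile feedback may show the digit-only branch is hot and the
  // failure case cold; outlining slowPath keeps the hot path small
  // so it can be compiled and inlined more aggressively.
  def parsePort(s: String): Int =
    if (s.nonEmpty && s.forall(_.isDigit)) s.toInt  // hot path
    else slowPath(s)                                // cold, outlined

  private def slowPath(s: String): Int =
    throw new IllegalArgumentException(s"not a port: $s")
}
```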


Advisor(s):
Odersky, Martin
Year:
2020
Publisher:
Lausanne, EPFL
Laboratories:
LAMP1



 Record created 2020-02-17, last modified 2020-03-03

