Abstract

Static program analysis limits the performance improvements possible from compile-time parallelization. Dynamic program parallelization shifts a portion of the analysis from compile-time to run-time, thereby enabling optimizations whose static detection is overly expensive or impossible. Lambda tagging and heap resolution are two new techniques for finding loop and non-loop parallelism in imperative, sequential languages with first-class procedures and destructive heap operations (e.g., ML and Scheme). Lambda tagging annotates procedures during compilation with a tag that describes the side effects that a procedure's application may cause. During program execution, the program refines and examines tags to identify computations that may safely execute in parallel. Heap resolution uses reference counts to dynamically detect potential heap aliases and to coordinate parallel access to shared structures. An implementation of lambda tagging and heap resolution in an optimizing ML compiler for a shared-memory parallel computer demonstrates that the overhead incurred by these run-time methods is easily offset by dynamically exposed parallelism and that non-trivial procedures can be automatically parallelized with these techniques.
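The core idea behind lambda tagging can be illustrated with a minimal sketch. The sketch below is not the paper's implementation (which operates inside an ML compiler); it merely models the run-time check in Python under assumed names: each procedure carries a hypothetical tag recording the heap regions it may read and write, and two tagged procedures are judged safe to run in parallel only when neither one writes a region the other touches.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical model: a "tag" is a pair of read/write region sets
# attached to a procedure at compile time and refined at run time.
@dataclass(frozen=True)
class Tagged:
    fn: Callable
    reads: frozenset   # heap regions this procedure may read
    writes: frozenset  # heap regions this procedure may write

def may_parallelize(a: Tagged, b: Tagged) -> bool:
    # Parallel execution is safe only if neither procedure writes
    # a region the other reads or writes (no write-write or
    # read-write conflict between the two side-effect tags).
    return (a.writes.isdisjoint(b.reads | b.writes)
            and b.writes.isdisjoint(a.reads | a.writes))
```

For example, a procedure that only reads region `x` may run alongside one that only writes region `y`, but not alongside one that writes `x`. The paper's actual tags are richer and are refined during execution; this sketch only captures the disjointness test that licenses parallel execution.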

Details