Using the Run-Time Sizes of Data Structures to Guide Parallel-Thread Creation
Dynamic granularity estimation is a new technique for automatically identifying expressions in functional languages that are worth evaluating in parallel. Expressions whose computation is small relative to thread-creation costs should be evaluated sequentially for maximum performance, but identifying such expressions statically is difficult. Dynamic granularity estimation therefore has both compile-time and run-time components: abstract interpretation statically identifies functions whose complexity depends on the sizes of their data structures, and the run-time system maintains approximations of these sizes. Compiler-inserted checks consult this size information to decide dynamically whether to create a thread. We describe dynamic granularity estimation for a list-based functional language; extension to general recursive data structures and to imperative operations is possible. Performance measurements of dynamic granularity estimation in a parallel ML implementation on a shared-memory machine demonstrate that large reductions (>20%) in execution time are possible.
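
To make the scheme concrete, the sketch below (our illustration, not code from the paper) shows what a size-annotated list and a compiler-inserted granularity check might look like. It is written in OCaml, with OCaml 5's Domain module standing in for the parallel ML run-time's threads; the names slist, cons, split, grain_cutoff, seq_sum, and psum, and the cutoff constant, are illustrative assumptions.

    (* Size-annotated cons cells: each cell records (an approximation of)
       the length of the list it heads, maintained as cells are built. *)
    type 'a slist =
      | Nil
      | Cons of { size : int; hd : 'a; tl : 'a slist }

    let size = function Nil -> 0 | Cons { size; _ } -> size
    let cons hd tl = Cons { size = 1 + size tl; hd; tl }

    let rec of_list = function [] -> Nil | x :: xs -> cons x (of_list xs)

    (* Split off the first k elements; copying the front keeps the sketch
       simple and lets both halves be processed recursively. *)
    let rec split k l =
      match l with
      | Cons { hd; tl; _ } when k > 0 ->
          let front, back = split (k - 1) tl in
          (cons hd front, back)
      | _ -> (Nil, l)

    (* Threshold below which thread creation costs outweigh the work;
       the constant is a placeholder, not a value measured in the paper. *)
    let grain_cutoff = 10_000

    let rec seq_sum = function
      | Nil -> 0
      | Cons { hd; tl; _ } -> hd + seq_sum tl

    (* Parallel sum.  The size test plays the role of the compiler-inserted
       granularity check: only lists judged large enough are split and
       handed to a freshly spawned thread; small lists fall through to the
       sequential path. *)
    let rec psum l =
      let n = size l in
      if n <= grain_cutoff then seq_sum l
      else begin
        let front, back = split (n / 2) l in
        let back_thread = Domain.spawn (fun () -> psum back) in
        let front_sum = psum front in
        front_sum + Domain.join back_thread
      end

    let () =
      let xs = of_list (List.init 100_000 (fun i -> i + 1)) in
      Printf.printf "parallel sum = %d\n" (psum xs)

The sketch requires OCaml 5 or later (for Domain) and compiles with ocamlopt alone; a real implementation would insert the size check automatically only into functions that the abstract interpretation has identified as size-dependent.
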
1994, pp. 79-90