Using the Run-Time Sizes of Data Structures to Guide Parallel-Thread Creation

Dynamic granularity estimation is a new technique for automatically identifying expressions in functional languages that are worth evaluating in parallel. Expressions whose computation is small relative to thread-creation cost should be evaluated sequentially for maximum performance, but such expressions are difficult to identify statically. Dynamic granularity estimation therefore has both compile-time and run-time components: abstract interpretation statically identifies functions whose complexity depends on the sizes of their data structures, and the run-time system maintains approximations to these sizes. Compiler-inserted checks consult this size information to make thread-creation decisions dynamically. We describe dynamic granularity estimation for a list-based functional language; extension to general recursive data structures and imperative operations is possible. Performance measurements of dynamic granularity estimation in a parallel ML implementation on a shared-memory machine demonstrate large reductions (>20%) in execution time.
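The core mechanism can be illustrated with a minimal sketch, written here in OCaml as a stand-in for the paper's parallel ML. The names approx_length, spawn, par_map, and the threshold value are hypothetical illustrations, not taken from the paper: a compiler-inserted check compares a run-time approximation of the list's size against a cut-off and creates a thread only when the estimated work is likely to repay the thread-creation cost.

(* Hypothetical stand-in: the paper's run-time system maintains an
   approximate size for each list; List.length is used here only to
   keep the sketch self-contained. *)
let approx_length (xs : 'a list) : int = List.length xs

(* Hypothetical stand-in for thread creation: modeled with Lazy so the
   sketch runs sequentially as plain OCaml; a real implementation would
   evaluate the thunk in a freshly created thread. *)
let spawn (f : unit -> 'a) : 'a Lazy.t = lazy (f ())

(* Illustrative cut-off; the actual granularity threshold would be
   chosen relative to measured thread-creation cost. *)
let threshold = 64

(* Fork a thread for the recursive call only when the estimated
   remaining work is large; otherwise fall back to sequential map. *)
let rec par_map f xs =
  match xs with
  | [] -> []
  | x :: rest ->
      if approx_length rest >= threshold then
        let tail = spawn (fun () -> par_map f rest) in
        f x :: Lazy.force tail
      else
        List.map f xs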


Published in:
1994 ACM Conference on LISP and Functional Programming, pp. 79-90
Year:
1994
Publisher:
ACM