
DynaProg for Scala
Student Project, 2013

Abstract: Dynamic programming is an algorithmic technique to solve problems that follow Bellman's principle: optimal solutions depend on optimal solutions of sub-problems. The core idea behind dynamic programming is to memoize intermediate results in matrices to avoid recomputing them. Solving a dynamic programming problem consists of two phases: filling one or more matrices with intermediate solutions for sub-problems, and recomposing how the final result was constructed (backtracking). In textbooks, problems are usually described in terms of recurrence relations between matrix elements. Expressing dynamic programming problems as recursive formulae over matrix indices is difficult and often error-prone, and the notation does not capture the essence of the underlying problem (for example, aligning two sequences). Moreover, writing a correct and efficient parallel implementation requires different competencies and often a significant amount of time. In this project, we present DynaProg, a language embedded in Scala (DSL) to address dynamic programming problems on heterogeneous platforms. DynaProg allows the programmer to write concise programs based on ADP [1], using a pair of a parsing grammar and an algebra; these programs can then be executed on either CPU or GPU. We evaluate the performance of our implementation against existing work and against our own hand-optimized baseline implementations for both the CPU and GPU versions.
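The two phases described above (filling a matrix with sub-problem solutions, then backtracking) can be illustrated with one of the problems benchmarked later, matrix chain multiplication. This is a hand-written, standalone sketch of the classical textbook algorithm, not DynaProg's actual ADP-based API; the object and method names are invented for illustration.

```scala
object MatrixChain {
  // dims(i) and dims(i + 1) are the dimensions of matrix A_i,
  // so the chain A_0 ... A_{n-1} has n = dims.length - 1 matrices.
  def solve(dims: Array[Int]): (Int, String) = {
    val n = dims.length - 1
    val cost  = Array.fill(n, n)(0) // cost(i)(j): cheapest way to multiply A_i..A_j
    val split = Array.fill(n, n)(0) // split(i)(j): the k realizing that optimum
    // Phase 1: fill the matrix bottom-up; each entry only depends on
    // optimal sub-problem solutions (Bellman's principle).
    for (len <- 2 to n; i <- 0 to n - len) {
      val j = i + len - 1
      cost(i)(j) = Int.MaxValue
      for (k <- i until j) {
        val c = cost(i)(k) + cost(k + 1)(j) + dims(i) * dims(k + 1) * dims(j + 1)
        if (c < cost(i)(j)) { cost(i)(j) = c; split(i)(j) = k }
      }
    }
    // Phase 2: backtrack through the recorded split points to recompose
    // how the optimal result was constructed (the parenthesization).
    def bt(i: Int, j: Int): String =
      if (i == j) s"A$i" else s"(${bt(i, split(i)(j))}${bt(split(i)(j) + 1, j)})"
    (cost(0)(n - 1), bt(0, n - 1))
  }
}
```

For example, `MatrixChain.solve(Array(10, 30, 5, 60))` yields cost 4500 with parenthesization `((A0A1)A2)`: multiplying `A0A1` first costs 10·30·5 = 1500 and leaves a 10×5 result, versus 27000 for the right-associated order.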
Experimental results show that the plain Scala version has a large overhead and is recommended only for small sequences (≤ 1024 elements), whereas the generated GPU version is comparable with existing implementations: matrix chain multiplication matches our hand-optimized version (142% of the execution time of [2]) for a sequence of 4096 matrices, Smith-Waterman is twice as slow as [3] on a pair of sequences of 6144 elements, and RNA folding is on par with [4] (95% of its running time) for sequences of 4096 elements.

References:
[1] Robert Giegerich and Carsten Meyer. Algebraic Dynamic Programming.
[2] Chao-Chin Wu, Jenn-Yang Ke, Heshan Lin and Wu-chun Feng. Optimizing dynamic programming on graphics processing units via adaptive thread-level parallelism.
[3] Edans Flavius de O. Sandes and Alba Cristina M. A. de Melo. Smith-Waterman alignment of huge sequences with GPU in linear space.
[4] Guillaume Rizk and Dominique Lavenier. GPU accelerated RNA folding algorithm.

Keywords: dynamic programming; scala; gpu; cuda; lms; staging; adp; backtracking; backtrace
Author: Thierry Coppey
Director: Martin Odersky (LAMP, EPFL)
Code: https://github.com/tcknet/dynaprog
Report: https://infoscience.epfl.ch/record/190098/files/report.pdf