Software-Development Strategies for Parallel Computer Architectures

As pragmatic users of high-performance supercomputers, we believe that parallel computer architectures with distributed memories are not yet mature enough to be used by a wide range of application engineers. A substantial effort is needed to bring these very promising computers closer to the users. One major flaw of massively parallel machines is that the programmer must manage the data flow himself, and this data flow often differs from one parallel computer to another. To overcome this problem, we propose that data structures be standardized. The database can then become an integrated part of the system, and the data flow for a given algorithm can easily be prescribed. Fixing the data structures forces the computer manufacturer to adapt his machine to the users' demands, rather than, as happens now, forcing the user to adapt to the manufacturer's innovative computer-science approach. In this paper, we present the data standards chosen for our ASTRID programming platform for research scientists and engineers, as well as a plasma physics application which won the Cray Gigaflop Performance Awards in 1989 and 1990 and which was successfully ported to an INTEL iPSC/2 hypercube.


Published in:
Physics Reports-Review Section of Physics Letters, 207, 3-5, 167-214
Year:
1991
ISSN:
0370-1573
Laboratories:
SPC
CRPP




 Record created 2008-04-16, last modified 2018-09-13

