Software-Development Strategies for Parallel Computer Architectures
As pragmatic users of high-performance supercomputers, we believe that today's parallel computer architectures with distributed memory are not yet mature enough to be used by a wide range of application engineers. A major effort is needed to bring these very promising computers closer to the users. One major flaw of massively parallel machines is that programmers must manage the data flow themselves, and this data flow often differs from one parallel computer to another. To overcome this problem, we propose that data structures be standardized. The database can then become an integrated part of the system, and the data flow for a given algorithm can be prescribed easily. Fixing the data structures forces the computer manufacturer to adapt the machine to the users' demands rather than, as happens now, forcing the user to adapt to the manufacturer's innovative computer-science approach. In this paper, we present the data standards chosen for our ASTRID programming platform for research scientists and engineers, as well as a plasma physics application that won the Cray Gigaflop Performance Awards in 1989 and 1990 and was successfully ported to an INTEL iPSC/2 hypercube.