Abstract

For reasons of computational cost and memory requirements, the implementation of a three-dimensional SPH code calls for parallelization, so that calculations can be run on a cluster. The parallelized model is first presented and a scalability study is carried out. Parallelization of the presented model is achieved using the MPI (Message Passing Interface) standard for inter-process communication, in conjunction with FORTRAN 90. A domain decomposition strategy is adopted: the whole fluid domain under study is geometrically split into sub-domains, each sub-domain being assigned to a dedicated processor. Interactions between the sub-domains are then handled through MPI by systematic exchanges of particle data. The implemented model has been tested in terms of speed-up and parallel efficiency with respect to the number of processes used, for various total numbers of particles. Very encouraging results have been obtained on the ECN Cray XD1 cluster with up to 32 processors; in particular, a mean efficiency of about 90% has been reached. The efficiency of the parallelized model remains to be validated at larger scale, however: a scalability study on the 8192-processor Blue Gene at EPFL is currently under way, and its results will be presented at the workshop. This scalability study is performed on water entry problems. The capabilities of the presented model are first illustrated on a case involving a sphere impacting the free surface at high velocity, with a comparison against reference data. The water entry of a cone is then modelled: pressures are compared with experimental data recorded by gauges embedded in the cone surface, and the influence of the cone edge angle is studied.
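
The parallel efficiency quoted above is understood in the usual sense, E(p) = T(1) / (p T(p)), i.e. the speed-up S(p) = T(1)/T(p) divided by the number of processes p. To make the communication pattern concrete, the following is a minimal FORTRAN 90 / MPI sketch of the kind of particle-data exchange the abstract describes. It is an illustration only, not the authors' code: the 1D slab decomposition, the 2h halo width, and all names (sph_halo_sketch, nmax, xsend_l, etc.) are assumptions made here for the example.

   program sph_halo_sketch
      use mpi
      implicit none

      integer, parameter :: nmax = 100000   ! local particle capacity (assumed)
      integer :: ierr, rank, nprocs, left, right
      integer :: n, i, nsend_l, nsend_r, nrecv_l, nrecv_r
      integer :: status(MPI_STATUS_SIZE)
      real(8) :: x(nmax), xsend_l(nmax), xsend_r(nmax)
      real(8) :: xmin, xmax, h, width

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

      ! 1D decomposition along x: each rank owns one slab of a unit domain
      width = 1.0d0 / nprocs
      xmin  = rank * width
      xmax  = xmin + width
      h     = 0.01d0                        ! smoothing length (assumed value)
      left  = rank - 1
      right = rank + 1
      if (left  < 0)       left  = MPI_PROC_NULL
      if (right >= nprocs) right = MPI_PROC_NULL

      ! ... fill x(1:n) with the positions of locally owned particles ...
      n = 0

      ! Collect particles lying within 2h of each slab face; their data
      ! must be made visible to the neighbouring process.
      nsend_l = 0
      nsend_r = 0
      do i = 1, n
         if (x(i) < xmin + 2.0d0*h) then
            nsend_l = nsend_l + 1
            xsend_l(nsend_l) = x(i)
         else if (x(i) > xmax - 2.0d0*h) then
            nsend_r = nsend_r + 1
            xsend_r(nsend_r) = x(i)
         end if
      end do

      ! Exchange halo sizes, then the particle data itself.  In a real
      ! SPH code every field needed by the force computation (velocity,
      ! density, pressure, ...) would be packed and sent the same way.
      nrecv_l = 0
      nrecv_r = 0
      call MPI_SENDRECV(nsend_l, 1, MPI_INTEGER, left,  0, &
                        nrecv_r, 1, MPI_INTEGER, right, 0, &
                        MPI_COMM_WORLD, status, ierr)
      call MPI_SENDRECV(nsend_r, 1, MPI_INTEGER, right, 1, &
                        nrecv_l, 1, MPI_INTEGER, left,  1, &
                        MPI_COMM_WORLD, status, ierr)
      call MPI_SENDRECV(xsend_l, nsend_l, MPI_DOUBLE_PRECISION, left,  2, &
                        x(n+1),  nrecv_r, MPI_DOUBLE_PRECISION, right, 2, &
                        MPI_COMM_WORLD, status, ierr)
      call MPI_SENDRECV(xsend_r, nsend_r, MPI_DOUBLE_PRECISION, right, 3, &
                        x(n+nrecv_r+1), nrecv_l, MPI_DOUBLE_PRECISION, left, 3, &
                        MPI_COMM_WORLD, status, ierr)

      ! Received halo particles now sit in x(n+1 : n+nrecv_r+nrecv_l)
      ! and can enter the neighbour search and the SPH summations.

      call MPI_FINALIZE(ierr)
   end program sph_halo_sketch

Paired MPI_SENDRECV calls are used here so that neighbouring slabs cannot deadlock on each other; the actual code may equally well rely on non-blocking communications to overlap these exchanges with computation.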
