000125579 001__ 125579
000125579 005__ 20190416055713.0
000125579 037__ $$aARTICLE
000125579 245__ $$aA new parallelized 3D SPH model: resolution of water entry problems and scalability study
000125579 269__ $$a2008
000125579 260__ $$c2008
000125579 336__ $$aJournal Articles
000125579 520__ $$aThe parallelized model is first presented and a scalability study is carried out. For reasons of computational time and memory requirements, the implementation of a three-dimensional SPH code requires parallelization, so that calculations can be run on a cluster. The parallelization of the presented model is achieved using the MPI (Message Passing Interface) standard for inter-process communications, used in conjunction with FORTRAN 90. A domain decomposition strategy is chosen: the whole fluid domain to be studied is geometrically split into sub-domains, each sub-domain being assigned to one dedicated processor. The interactions between the various sub-domains are then handled through MPI, by systematic communication of particle data. The implemented model has been tested in terms of speed-up and efficiency of the calculation with regard to the number of processes used, for various total particle numbers. Very encouraging results have been obtained on the ECN Cray XD1 cluster using up to 32 processors; in particular, a mean efficiency of about 90% has been reached. However, the efficiency of the parallelized model has to be validated at larger scale. A scalability study on the 8092-processor Blue Gene of the EPFL is presently under way, and its results will be shown at the workshop. This scalability study is performed on water entry problems. The capabilities of the presented model are first illustrated on a case involving a sphere impacting the free surface at high velocity, with comparison to reference data. Then, the water entry of a cone is modelled: the pressure is compared to experimental data recorded at gauges embedded in the cone surface, and the influence of the cone edge angle is studied.
000125579 6531_ $$aSPH
000125579 6531_ $$aHigh Performance Computing
000125579 6531_ $$aIBM Blue Gene/L
000125579 700__ $$aOger, Guillaume
000125579 700__ $$aLe Touzé, David
000125579 700__ $$aAlessandrini, Bertrand
000125579 700__ $$0243093$$g172125$$aMaruzewski, Pierre
000125579 773__ $$j76$$tERCOFTAC Bulletin$$k35-38
000125579 8564_ $$uhttps://infoscience.epfl.ch/record/125579/files/ERCOFTAC_bulletin_Oger_etal.pdf$$zn/a$$s690341
000125579 909C0 $$xU10309$$0252135$$pLMH
000125579 909CO $$ooai:infoscience.tind.io:125579$$qGLOBAL_SET$$pSTI$$particle
000125579 937__ $$aLMH-ARTICLE-2008-005
000125579 973__ $$rREVIEWED$$sPUBLISHED$$aOTHER
000125579 980__ $$aARTICLE