Abstract

Dynamic mesh adaptation on unstructured grids, by localised refinement and derefinement, is a very efficient tool for enhancing solution accuracy and optimising computational time. One of the major drawbacks, however, lies in the projection of the new nodes created during the refinement process onto the boundary surfaces. This can be addressed by introducing a library capable of handling the geometric properties given by a CAD (Computer Aided Design) description. Such a library is also of particular interest for the adaptation module when the mesh is smoothed, and hence moved, so that it can be re-projected onto the surface of the exact geometry. However, this procedure is not always possible, owing to faulty or overly complex designs that would require a higher level of complexity in the CAD library. It is therefore paramount to have a built-in algorithm able to place the new boundary nodes closer to the geometric definition of the boundary. Such a procedure, based on the idea of interpolating subdivision, is proposed in this work. To adapt a mesh to a solution field efficiently and effectively, the criterion driving the adaptation process needs to be as accurate as possible. Because the solution is obtained by discretising a continuum model, numerical error is intrinsic to the calculation. A posteriori error estimation allows this accuracy to be assessed using the computed solution itself. In particular, an a posteriori error estimator based on the Zienkiewicz-Zhu model is introduced. It can be used in the adaptation procedure to refine the mesh in those regions where the local error exceeds a set tolerance, thereby further increasing the accuracy of the solution in those regions during the next computational step. Variants of this error estimator have also been studied and implemented. An important aspect of this project is that the algorithmic concepts are developed with parallelism in mind, i.e. the algorithms take into account the possibility of a multiprocessor implementation; algorithms devised serially typically require complex reprogramming to parallelise afterwards. Another important and innovative aspect of this work is therefore the consistency of the algorithms with parallel processor execution.
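The abstract does not specify which interpolating-subdivision scheme is used to place the new boundary nodes. As a purely illustrative sketch, the classical four-point (Dyn-Levin-Gregory) rule on a closed polyline keeps the existing nodes and inserts each new node from its four nearest neighbours, so that the refined boundary stays close to the underlying smooth curve without an exact CAD projection. The function name and setup below are hypothetical.

# Hypothetical 1-D illustration of interpolating subdivision (four-point
# Dyn-Levin-Gregory rule) on a closed polyline. This is not the scheme
# described in the thesis, only a minimal sketch of the general idea.
import numpy as np

def subdivide_closed(points: np.ndarray) -> np.ndarray:
    """One level of four-point interpolating subdivision on a closed curve.

    points: (n, d) array of vertex coordinates. Original vertices are kept
    and one new vertex is inserted between each consecutive pair.
    """
    n = len(points)
    refined = []
    for i in range(n):
        p0 = points[(i - 1) % n]
        p1 = points[i]
        p2 = points[(i + 1) % n]
        p3 = points[(i + 2) % n]
        refined.append(p1)                                       # keep old vertex
        refined.append((-p0 + 9.0 * p1 + 9.0 * p2 - p3) / 16.0)  # insert new vertex
    return np.asarray(refined)

# Example: refining a coarse polygon approximating a circle; the new nodes
# land close to the smooth curve the coarse polygon was sampled from.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
coarse = np.stack([np.cos(theta), np.sin(theta)], axis=1)
fine = subdivide_closed(coarse)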
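For reference, and assuming the standard form of the Zienkiewicz-Zhu estimator (the abstract does not state which variant is implemented), the element error indicator compares the discontinuous finite-element gradient \sigma_h with a smoothed, nodally recovered gradient \sigma^{*}:

\eta_e = \left\| \sigma^{*} - \sigma_h \right\|_{L^2(\Omega_e)} = \left( \int_{\Omega_e} \left( \sigma^{*} - \sigma_h \right)^{T} \left( \sigma^{*} - \sigma_h \right) \, \mathrm{d}\Omega \right)^{1/2}

An element \Omega_e is then flagged for refinement whenever \eta_e exceeds the prescribed tolerance.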
