Multiscale Modelling and Simulation (MSCALE) Session 1

Time and Date: 16:20 - 18:00 on 11th June 2014

Room: Rosser

Chair: Valeria Krzhizhanovskaya

126 Restrictions in model reduction for polymer chain models in dissipative particle dynamics [abstract]
Abstract: We model high molecular weight homopolymers in semidilute concentration via Dissipative Particle Dynamics (DPD). We show that in model-reduction methodologies for polymers it is not enough to preserve system properties (i.e., density $\rho$, pressure $p$, temperature $T$, and radial distribution function $g(r)$); the characteristic shape and length scale of the polymer chain model must also be preserved. In this work we apply a recently proposed DPD model-reduction methodology and demonstrate why its applicability is limited to a certain maximum polymer length, and why it is not suitable for solvent coarse graining.
Nicolas Moreno, Suzana Nunes, Victor M. Calo
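The radial distribution function $g(r)$ mentioned above is one of the standard observables checked during coarse graining. As an illustration only (this is a generic histogram estimator, not the authors' DPD code; all names here are hypothetical), $g(r)$ for particles in a cubic periodic box can be sketched as:

```python
import numpy as np

def radial_distribution(positions, box_length, n_bins=50):
    """Histogram estimate of g(r) for a cubic periodic box.

    Illustrative sketch: normalises pair counts in spherical shells
    by the ideal-gas expectation, so g(r) ~ 1 for uncorrelated particles.
    """
    n = len(positions)
    rho = n / box_length**3                     # number density
    r_max = box_length / 2.0                    # minimum-image validity limit
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    # Ideal-gas pair count per shell: (N/2) * rho * shell volume
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0
    return 0.5 * (edges[:-1] + edges[1:]), counts / ideal

# Usage: uniformly random (ideal-gas-like) particles should give g(r) ~ 1.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = radial_distribution(pos, box_length=10.0)
```

In a model-reduction check, the $g(r)$ of the reduced model would be compared against that of the reference system in this fashion.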
353 Simulation platform for multiscale and multiphysics modeling of OLEDs [abstract]
Abstract: We present a simulation platform which serves as an integrated framework for multiscale and multiphysics modeling of Organic Light Emitting Diodes (OLEDs) and their components. The platform is aimed at designers of OLEDs with various areas of expertise, ranging from fundamental theory to manufacturing technology. The platform integrates an extendable set of in-house and third-party computational programs that are used for predictive modeling of the OLED parameters important for device performance. These computational tools describe properties at the atomistic, mesoscale, and macroscopic levels. The platform automates data exchange between these description levels and allows one to build simulation workflows and manage remote task execution. An integrated database provides data exchange and storage for calculated and experimental results.
Maria Bogdanova, Sergey Belousov, Ilya Valuev, Andrey Zakirov, Mikhail Okun, Denis Shirabaykin, Vasily Chorkov, Petr Tokar, Andrey Knizhnik, Boris Potapkin, Alexander Bagaturyants, Ksenia Komarova, Mikhail Strikhanov, Alexey Tishchenko, Vladimir Nikitenko, Vasili Sukharev, Natalia Sannikova, Igor Morozov
336 Scaling acoustical calculations on multicore, multiprocessor and distributed computer environment [abstract]
Abstract: Computer systems are commonly applied to the calculation of acoustic fields because of the generally high complexity of such tasks. However, implementing algorithmic and software solutions for acoustic field calculation faces a wide variety of problems, caused by the impossibility of representing algorithmically all of the physical laws involved in computing the field distribution across all media, field parameters, and source configurations. As a result, any single simulation system is limited in the range of tasks it can solve. At the same time, general simulation tasks require a large number of calculations over all sets of input parameters. It is therefore important to develop new algorithmic solutions that calculate acoustic fields for a wider range of input parameters and that scale to parallel and distributed computers, increasing the feasible computational load at an adequate cost in time and resources. Tasks of calculating acoustic fields may belong to various domains with respect to the physical laws involved in the calculation. In this article, a general architecture of the simulation system is presented, defining the structure and functionality of the system at the top level together with its domain-independent subsystems. The complete architecture can be defined only for a specific class of calculation tasks; two such classes are described: simulating acoustic fields in enclosed rooms and in natural stochastic deep-water waveguides.
Andrey Chusov, Lubov Statsenko, Yulia Mirgorodskaya, Boris Salnikov, Evgeniya Salnikova
384 PyGrAFT: Tools for Representing and Managing Regular and Sparse Grids [abstract]
Abstract: Many computational science applications perform compute-intensive operations on scalar and vector fields residing on multidimensional grids. Typically these codes run on supercomputers: large multiprocessor commodity clusters or hybrid platforms that combine CPUs with accelerators such as GPUs. The Python Grids and Fields Toolkit (PyGrAFT) is a set of classes, methods, and library functions for representing scalar and vector fields residing on multidimensional, logically Cartesian (including curvilinear) grids. The aim of PyGrAFT is to accelerate the development of numerical analysis applications by combining the high programmer productivity of Python with the high performance of lower-level programming languages. The PyGrAFT data model, which leverages the NumPy ndarray class, enables representation of tensor product grids of arbitrary dimension and of collections of scalar and/or vector fields residing on these grids. Furthermore, the PyGrAFT data model allows the user to choose the field storage ordering for optimal performance in the target application. Class support methods and library functions are implemented, where possible, using reliable, well-tested, high-performance packages from the Python software ecosystem (e.g., NumPy, SciPy, mpi4py). The PyGrAFT data model and library are applicable to global address spaces and to distributed-memory platforms that utilise MPI. Library operations include intergrid interpolation and support for multigrid solver techniques such as the sparse grid combination technique. We describe the PyGrAFT data model, its parallelisation, and strategies currently underway to explore opportunities for providing multilevel parallelism with relatively little user effort. We illustrate the PyGrAFT data model, library functions, and resultant programming model in action for a variety of applications, including function evaluation, PDE solvers, and sparse grid combination technique solvers. We demonstrate the language interoperability of PyGrAFT with a C++ example, and outline strategies for using PyGrAFT with legacy codes written in other programming languages. We explore the implications of this programming model for an emerging problem in computational science and engineering: modelling multiphysics and multiscale systems. We conclude with an outline of the PyGrAFT development roadmap, including full support for vector fields and calculations in curvilinear coordinates, support for GPUs and other parallelisation schemes, and extensions to the PyGrAFT model to accommodate general multiresolution numerical methods.
Jay Larson
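The tensor-product-grid data model described in the PyGrAFT abstract can be illustrated with a minimal NumPy-based sketch. The class and method names below are hypothetical, chosen for illustration; they are not the actual PyGrAFT API.

```python
import numpy as np

class TensorProductGrid:
    """Minimal sketch of a logically Cartesian tensor-product grid.

    Each axis is a 1-D coordinate array; the grid is the Cartesian
    product of those axes (hypothetical, not the PyGrAFT API).
    """
    def __init__(self, *axes):
        self.axes = [np.asarray(a, dtype=float) for a in axes]
        self.shape = tuple(len(a) for a in self.axes)

    def meshgrid(self):
        # 'ij' indexing preserves the logical (dimension-major) ordering.
        return np.meshgrid(*self.axes, indexing="ij")

class ScalarField:
    """A scalar field stored on a TensorProductGrid as a NumPy ndarray."""
    def __init__(self, grid, func):
        self.grid = grid
        coords = grid.meshgrid()
        self.data = func(*coords)   # evaluate func at every grid point

# Usage: evaluate f(x, y) = x + 2y on a 3x4 tensor-product grid.
grid = TensorProductGrid(np.linspace(0.0, 1.0, 3), np.linspace(0.0, 1.0, 4))
f = ScalarField(grid, lambda x, y: x + 2.0 * y)
print(f.data.shape)   # (3, 4)
```

A field-storage-ordering choice of the kind the abstract mentions would correspond here to selecting, e.g., C versus Fortran memory layout for the underlying ndarray.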