Multiscale Modelling and Simulation (MMS) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2017

Room: HG D 3.2

Chair: Derek Groen

-1 Multiscale Modelling and Simulation, 14th International Workshop [abstract]
Abstract: [No abstract available]
Derek Groen, Valeria Krzhizhanovskaya, Alfons Hoekstra, Bartosz Bosak and Petros Koumoutsakos
297 Multiscale Computing Patterns for High Performance Computers [abstract]
Abstract: Moving into the era of exascale machines will lead to drastic changes in the way we use HPC resources, especially for multiscale applications [1, 2]. Hence, there is an increasing demand to devise generic methods to execute such multiscale applications on emerging exascale resources. To this end we propose generic multiscale computing patterns. These patterns should map the single-scale components of a multiscale application onto heterogeneous architectures in the most efficient way [2]. This research [2] aims to identify and analyse generic multiscale computing patterns in multiscale models in order to increase the efficient, fault-tolerant and energy-aware usage of HPC resources [3–6]. The vision of multiscale computing patterns is rooted in the Multiscale Modelling and Simulation Framework (MMSF) [7–11], which plays a pivotal role in designing, programming and deploying a wide range of multiscale applications. A multiscale model in the MMSF is described as a coordinated implementation of single-scale models that are coupled using scale-bridging mechanisms. The main components of the MMSF are the scale separation map, the coupling topology, the multiscale modelling language (in its different flavours) and task graphs. The MMSF has demonstrated its capability on applications across the sciences (e.g. fusion [10, 12], computational biology [10, 13, 14], biomedicine [10, 15–21], nanomaterial science [10, 22, 23] and hydrology [10]). Based on the task graphs of multiscale models, we propose one generic task graph per set of multiscale models (a pattern) [2]. The main target of these graphs is to capture the behaviour of multiscale applications. The main assumption here is that we can develop one algorithm per pattern that covers all multiscale scientific applications following the same scenario. A Multiscale Computing Pattern (MCP) can thus be defined as a high-level call sequence that exploits the functional decomposition of multiscale models into single-scale models.
Taking the generic task graph, multiscale model information and single-scale performance data, we can apply pattern services to optimise the execution according to the desired objective (e.g. optimal mapping based on efficient usage of resources, minimal wall-clock time, load balance, energy efficiency, fault tolerance or minimal submission-to-execution time). We distinguish three types of computing patterns, namely Extreme Scaling (ES), Heterogeneous Multiscale Computing (HMC) and Replica Computing (RC) [2]. We argue that these patterns have the capacity to ensure load balancing, energy awareness and fault tolerance, enabling effective multiscale simulations on exascale HPC resources. This is achieved by ensuring the best mapping of single-scale models onto computing resources, as well as by applying effective checkpointing strategies and energy analysis. The Extreme Scaling pattern represents a situation wherein one or a few of the single-scale models require exascale performance while being coupled to other, less expensive models. The Heterogeneous Multiscale Computing pattern denotes a macroscale model that spawns a large number of instances of microscale models, based on decisions taken by an HMC manager. Replica Computing signifies a setting wherein numerous replicas of a single-scale model are executed with different forms of communication between them. Based on the communication, we define three flavours of RC, namely ensemble simulations, dynamic ensemble simulations and replica-exchange simulations (for details we refer to [2]). The findings of the study indicate the potential of these three patterns (ES, HMC and RC) for aligning multiscale applications with computing resources. The Extreme Scaling pattern, for instance, helps to ensure better performance by balancing the load between the different submodels according to their computational cost or energy consumption.
The Heterogeneous Multiscale Computing pattern is primarily based on the heterogeneous multiscale method (HMM) [24–26]. This pattern represents the widely known micro-macro class of multiscale models, in which a set of microscopic models is coupled to a macroscopic model. An HMC manager, which uses a dedicated database to restrict the number of microscale simulations needed and to enhance data reuse, improves the utilisation of extensive parallel computing resources [27]. Replica Computing combines numerous terascale and petascale simulations (replicas) to generate statistically robust and scientifically important outcomes. Finding the best resource allocation for a single replica is a critical step towards the best usage of resources for multiscale applications of this kind. One example is the estimation of binding affinities between proteins and small compounds through the exchange of simulation data between single-scale models [28–30]. The proposed patterns address the exascale challenges at the level of multiscale computing: the developers of multiscale models can concentrate on the efficiency of the single-scale models, while the execution environment (e.g. of the MMSF) takes care of these issues with the help of patterns. This requires a sufficiently detailed description of the multiscale model (as in xMML) together with single-scale performance and power-consumption measurements. Pattern software can then generate execution scenarios that improve the current execution with respect to the chosen optimisation target (e.g. optimal mapping based on efficient usage of resources, minimal wall-clock time, load balance, energy efficiency, fault tolerance or minimal submission-to-execution time).
Saad Alowayyed, Derek Groen, Peter Coveney and Alfons Hoekstra
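The database-backed HMC manager described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' software: the memoisation wrapper, the rounding-based key and the toy microscale model are all our own assumptions, chosen only to show how caching results by (coarsened) macroscale state enables data reuse and avoids redundant microscale runs.

```python
# Illustrative sketch (not the paper's code): an HMC-manager-style cache that
# avoids redundant microscale simulations by storing results in a database
# keyed by the rounded macroscale state. All names are hypothetical.

def make_hmc_manager(microscale_model, precision=2):
    """Wrap an expensive microscale model with a result database."""
    database = {}                      # the manager's dedicated result store
    stats = {"calls": 0, "hits": 0}

    def query(macro_state):
        stats["calls"] += 1
        key = round(macro_state, precision)   # coarse key enables data reuse
        if key in database:
            stats["hits"] += 1
        else:
            database[key] = microscale_model(key)   # run only on a cache miss
        return database[key]

    return query, stats

# Toy microscale model: pretend this costs hours on a supercomputer.
def microscale(state):
    return state ** 2 + 0.1 * state

query, stats = make_hmc_manager(microscale)
results = [query(s) for s in (0.501, 0.502, 0.499, 1.300, 1.302)]
# Nearby macroscale states map to the same key, so only 2 of the 5 queries
# actually execute the microscale model.
```

In a real HMC setting the key would be a tolerance-based lookup over a high-dimensional state, and the database would be shared between many concurrent macroscale time steps.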
124 Dynamic load balancing for CAFE multiscale modelling methods for heterogeneous hardware infrastructure [abstract]
Abstract: Conventional load balancing algorithms, i.e. for one computing method at one scale, are very well known in the literature and have been developed since the 1970s, when solutions based on binary trees and the Parallel Virtual Machine were created. Since then, more than twenty thousand scientific papers have been published in this area. Nowadays the most important part of this branch of science concerns two aspects of balancing: between different computing nodes, and inside a single node with many CPUs and multiple computing devices with multicore heterogeneous architectures. The first aspect is studied for homogeneous as well as heterogeneous infrastructures. The second is mainly caused by the unpredictable behaviour of sophisticated numerical algorithms on computing devices with the hierarchical memory access typical of NUMA designs. This aspect is analysed in terms of scheduling, load balancing and work stealing between computing devices and inside a particular device. In this paper both aspects are important. The paper presents a new approach to scheduling and Dynamic Load Balancing (DLB) of tightly and loosely coupled multiscale modelling methods executed on heterogeneous hardware infrastructure. The most popular configurations of computing nodes, composed of modern multicore CPUs, GPUs and co-processors, are used. The proposed load balancing approach takes into account the computational character of the methods applied at particular scales, which depends on the size of the input data, the operational intensity and the limitations of the hardware architecture. Such constraints are defined by the Roofline model and used in the algorithm as boundary conditions, allowing the maximum performance of an algorithm on a particular device to be determined. The upscaling multiscale approaches analysed in this paper are represented by the Cellular Automata Finite Element (CAFE) method.
Qualitative as well as quantitative results, obtained after applying the proposed load balancing procedure, are discussed in detail in the paper.
Lukasz Rauch
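The Roofline bound used as a constraint above has a simple closed form: attainable performance is the minimum of the device's peak compute rate and the product of the kernel's operational intensity with the device's memory bandwidth. A minimal sketch with hypothetical device numbers (not taken from the paper):

```python
# Roofline model: attainable GFLOP/s of a kernel on one device is
# min(peak compute, operational intensity * memory bandwidth).
# The device figures below are invented for illustration.

def roofline(peak_gflops, bandwidth_gbs, operational_intensity):
    """Upper bound on kernel performance (GFLOP/s) on one device."""
    return min(peak_gflops, operational_intensity * bandwidth_gbs)

# Hypothetical CPU: 500 GFLOP/s peak, 60 GB/s memory bandwidth.
mem_bound = roofline(500.0, 60.0, 0.5)    # low intensity: bandwidth-limited
cpu_bound = roofline(500.0, 60.0, 20.0)   # high intensity: compute-limited
```

A scheduler can evaluate this bound per (method, device) pair and use the results as the boundary conditions the abstract describes, mapping bandwidth-limited and compute-limited submodels to the devices that bound them least.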
623 Performance Monitoring of Multiscale Applications [abstract]
Abstract: The use of performance analysis tools has always been prevalent in HPC, to understand application behaviour and ensure machine utilization. However, these profiles often take an application-centric perspective, profiling and visualising a single application at a time, or a machine-centric perspective, thus losing information about the specific applications. With multiscale computational patterns, multiple application runs are coupled together to form a workflow. As such, performance analysis tools must be able to capture the context of multiple application runs and combine the resulting data, a capability that is widely lacking in existing tools. This presentation will cover how the profiling tool Allinea MAP has been extended, using a custom metrics interface and a JSON export capability, to support the profiling and visualisation of multiscale models. The demonstration will focus on collecting domain-specific data from the MUSCLE2 communication library and combining it with existing data sources. The data is then exported for analysis and visualisation in open-source tools such as Python and Kibana.
Oliver Perks and Keeran Brabazon
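The final step, consuming the JSON export in Python, can be sketched as follows. The JSON layout and the metric names below are invented for illustration; Allinea MAP's actual export schema differs, but the aggregation idea is the same: combine per-process samples of a custom metric (e.g. MUSCLE2 traffic) across the runs of a coupled workflow.

```python
import json

# Hypothetical excerpt of an exported profile: per-rank samples of a
# custom MUSCLE2 metric alongside a standard CPU metric.
exported = json.loads("""
{"samples": [
  {"rank": 0, "metric": "muscle2.bytes_sent", "value": 1024},
  {"rank": 1, "metric": "muscle2.bytes_sent", "value": 2048},
  {"rank": 0, "metric": "cpu.user", "value": 0.91}
]}
""")

def total(samples, metric):
    """Sum one metric over all ranks of one application run."""
    return sum(s["value"] for s in samples if s["metric"] == metric)

bytes_sent = total(exported["samples"], "muscle2.bytes_sent")   # 3072
```

The same per-run totals can then be joined across the workflow's applications before being pushed to a dashboard such as Kibana.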

Multiscale Modelling and Simulation (MMS) Session 2

Time and Date: 15:45 - 17:25 on 12th June 2017

Room: HG D 3.2

Chair: Derek Groen

20 A concept of a prognostic system for personalized anti-tumor therapy based on supermodeling [abstract]
Abstract: The application of computer simulation for predicting cancer progression/remission/recurrence is still underestimated by clinicians. This is mainly due to the lack of tumor modeling approaches that are both reliable and computationally realistic. We propose and describe here the concept of a viable prediction/correction system for predicting cancer dynamics, very similar in spirit to that used in weather forecasting and climate modeling. It is based on the supermodeling technique, where the supermodel consists of a few coupled instances (sub-models) of a generic coarse-grained tumor model. Consequently, the latent and fine-grained cancer properties not included in the generic model, e.g. those reflecting microscopic phenomena and other unpredictable events influencing tumor dynamics, are hidden in the sub-model coupling parameters, which can be learned from incoming real data. Thus, instead of matching hundreds of parameters for multi-scale tumor models by using complicated scale-bridging and data adaptation schemes, we need to fit only a few values of the coupling coefficients between sub-models to the current tumor status. Here, we propose a supermodel-based prediction/correction scheme that can further be employed for planning anti-cancer therapy and drug treatment, being continually updated by incoming diagnostic data.
Witold Dzwinel, Adrian Kłusek and Maciej Paszynski
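The supermodeling idea above can be illustrated with a toy example. This sketch is our own assumption, not the authors' tumor model: two instances of a coarse-grained logistic growth law with different parameters are nudged toward each other by a single coupling coefficient C, and the supermodel output is the average of the coupled states. Only C (not all sub-model parameters) would then need fitting to incoming data.

```python
# Toy supermodel: two logistic-growth sub-models with different parameters
# (r, K), coupled through a learnable coefficient C. Illustrative only.

def supermodel(C, r=(0.30, 0.45), K=(1.0, 1.4), x0=0.05, dt=0.01, steps=4000):
    x1 = x2 = x0
    for _ in range(steps):
        dx1 = r[0] * x1 * (1 - x1 / K[0]) + C * (x2 - x1)   # sub-model 1 + nudge
        dx2 = r[1] * x2 * (1 - x2 / K[1]) + C * (x1 - x2)   # sub-model 2 + nudge
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2, 0.5 * (x1 + x2)      # sub-model states, supermodel output

# Without coupling the sub-models disagree (they reach different carrying
# capacities); with coupling they synchronise onto a shared trajectory.
x1u, x2u, _ = supermodel(C=0.0)
x1c, x2c, _ = supermodel(C=2.0)
```

In the prediction/correction scheme, C would be re-estimated whenever new diagnostic data arrive, playing the role that data assimilation plays in weather forecasting.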
219 Linking Gene Dynamics to Intimal Hyperplasia – A Predictive Model of Vein Graft Adaptation [abstract]
Abstract: The long-term outcome of Coronary Artery Bypass Graft (CABG) surgery remains unsatisfactory to this day. Despite years of improvements in surgical techniques and administered therapies, re-occlusion of the graft is experienced in 10-12% of cases within just a few months (Motwani JC, 1998). We suggest that an efficient post-surgical therapy might be found at the genetic level. Accordingly, we propose a multiscale model that is able to replicate the healing of the graft and detail the impact of targeted clusters of genes on the graft's healing. A key feature of our integrated model is its capability of linking the genetic, cellular and tissue levels with feedback bridges in such a way that every single variation from an equilibrium point is reflected in all the other elements, creating a highly organized loop. Once validated on experimental data, our model offers the possibility to test in advance several gene therapies that aim to improve the patency of the graft lumen. Being able to anticipate the outcome will speed up the development of an efficient therapy and may lead to a prolonged life expectancy of the graft.
Stefano Casarin, Scott A. Berceli and Marc Garbey
576 Multiscale Computing and Systems Medicine in COST: a Brief Reflection [abstract]
Abstract: Today's modelling approaches in systems medicine are increasingly multiscale, containing two or more submodels, each of which operates on different temporal and/or spatial scales (Hunter, 2008). In addition, as these models become increasingly sophisticated, they tend to be run as multiscale computing applications using computational infrastructures such as clusters, supercomputers, grids or clouds. Constructing, validating and deploying such applications is far from trivial, and communities in different scientific disciplines have chosen very diverse approaches to address these challenges (Groen, 2014; Borgdorff, 2013). In this presentation we reflect on the use of multiscale computing within the context of the Open Multiscale Systems Medicine (OpenMultiMed) COST action and related developments. Multiscale computing is widely applied within this area, and instead of summarizing the field as a whole we will highlight a set of challenges that we believe are of key relevance to the systems medicine community. Among these are key multiscale computing challenges in the context of healthcare and assisted-living settings, systems medicine data analytics, effectively exploiting cloud and HPC infrastructures, and the use of multiscale computing in relation to the Internet of Things. Note: This abstract is part of a paper-in-progress by the Multiscale Computing Working Group in the OpenMultiMed project, in which we reflect on current advances and seek to formulate a vision on multiscale computing specific to this COST project. We appreciate any points raised by the referees or during the presentation, and intend to take those to heart in our future activities. (Hunter, 2008) Peter J Hunter, Edmund J Crampin, and Poul MF Nielsen. Bioinformatics, multiscale modeling and the IUPS Physiome Project. Briefings in Bioinformatics, 9(4):333–343, 2008. (Groen, 2014) Derek Groen, Stefan J Zasada, and Peter V Coveney. Survey of multiscale and multiphysics applications and communities. Computing in Science & Engineering, 16(2):34–43, 2014. (Borgdorff, 2013) Joris Borgdorff, Jean-Luc Falcone, Eric Lorenz, Carles Bona-Casas, Bastien Chopard, and Alfons G Hoekstra. Foundations of distributed multiscale computing: Formalization, specification, and analysis. Journal of Parallel and Distributed Computing, 73(4):465–483, 2013.
Derek Groen, Elena Vlahu-Gjorgievska, Huiru Zheng, Mihnea Alexandru Moisescu and Ivan Chorbev
83 Phase-Field Based Simulations of Embryonic Branching Morphogenesis [abstract]
Abstract: The mechanism that controls embryonic branching is not fully understood. Of all proposed mechanisms, only a Turing pattern-based model succeeds in predicting the location of newly emerging branches during lung and kidney branching morphogenesis. Turing models are based on at least two coupled non-linear reaction-diffusion equations. In the case of the lung model, the two components (ligands and receptors) are produced in two different tissue layers [1]: the ligand is produced in the outer mesenchymal layer, while the receptor is produced in the inner, branching epithelial layer; the diffusion of receptors is restricted to this epithelial layer. So far, numerical instabilities due to highly complex mesh deformations limit the maximum number of branching rounds that can be simulated in an ALE-based framework. A recently developed Phase-Field-based framework [2] shows promising results for the simulation of consecutive 3D branching events. In this talk, I will present our Phase-Field-based framework for simulating the inner epithelial and the outer mesenchymal layer, how we coupled the reaction-diffusion equations to the diffuse/implicit domain, as well as the incorporation of additional equations representing further components that influence the growth of a mammalian lung. [1] D. Menshykau et al., "An interplay of geometry and signaling enables robust lung branching morphogenesis.", Development 141(23): 4526-4536, 2014 [2] LD. Wittwer et al., "Simulating Organogenesis in COMSOL: Phase-Field Based Simulations of Embryonic Lung Branching Morphogenesis.", Proceedings of the 2016 COMSOL Conference in Munich, 2016
Lucas D. Wittwer, Sebastian Aland and Dagmar Iber
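The Turing mechanism invoked above can be demonstrated with a minimal 1D example. This sketch uses Schnakenberg kinetics as a generic stand-in for the two-layer ligand-receptor model of the talk; the kinetics, parameters and grid are illustrative assumptions, not the lung model's. Two coupled reaction-diffusion equations with a large diffusivity ratio destabilise the uniform steady state and a stationary spatial pattern emerges.

```python
import numpy as np

# 1D Turing pattern with Schnakenberg kinetics (illustrative stand-in):
#   u_t = Du*u_xx + a - u + u^2 v,    v_t = Dv*v_xx + b - u^2 v
# With Dv >> Du the homogeneous steady state (u*, v*) is Turing-unstable.

rng = np.random.default_rng(0)
n, h, dt = 100, 1.0, 0.01
a, b, Du, Dv = 0.1, 0.9, 1.0, 40.0

u = (a + b) + 0.01 * rng.standard_normal(n)        # perturbed steady state u*
v = b / (a + b) ** 2 + 0.01 * rng.standard_normal(n)   # perturbed v*

def lap(f):                                        # periodic 1D Laplacian
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / h ** 2

for _ in range(10000):                             # explicit Euler to t = 100
    uvv = u * u * v
    u = u + dt * (Du * lap(u) + a - u + uvv)
    v = v + dt * (Dv * lap(v) + b - uvv)
# u now carries a stationary pattern: small noise has grown into peaks.
```

The same instability, posed on a growing two-layer 3D geometry, is what selects the branch points in the lung model; the Phase-Field framework supplies the evolving domain on which such equations are solved.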

Multiscale Modelling and Simulation (MMS) Session 3

Time and Date: 10:15 - 11:55 on 13th June 2017

Room: HG D 3.2

Chair: Derek Groen

467 On the numerical evaluation of local curvature for diffuse interface models of microstructure evolution [abstract]
Abstract: Within diffuse interface models for multiphase problems, the curvature κ of the phase boundary can be expressed as the difference of two terms: a Laplacian and a second, gradient term of the diffuse interface variable φ. In phase field simulations of microstructure evolution, the second term is often replaced by f'(φ) = ∂f/∂φ, where f(φ) is the potential function in the free energy functional of the underlying physical model. We show here that this replacement systematically deteriorates the accuracy of the local curvature evaluation as compared to a discretized evaluation of the second term. Analytic estimates reveal that the discretization errors in the Laplacian and in the second term have roughly the same spatial dependence across the interface, thus leading to a cancellation of errors in κ. This is confirmed in a test case, where the discretization error can be determined via comparison to the exact solution. If, however, the second term is replaced by a quasi-exact expression, the error in Δφ enters κ without being compensated and can obscure the behavior of the local curvature. Due to the antisymmetric variation of this error term, approaches using the average curvature, as obtained from an integral along the interface normal, are immune to this problem.
Samad Vakili, Ingo Steinbach and Fathollah Varnik
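The quantities in the abstract can be made concrete with a small numerical sketch. For an equilibrium circular interface φ = tanh((R − r)/(√2 ε)) with double-well potential f(φ) = (1 − φ²)²/4, one has ε²Δφ − f'(φ) = −ε²|∇φ|/r, so κ = −(Δφ − f'(φ)/ε²)/|∇φ| recovers 1/R on the interface. The grid and parameters below are illustrative, not those of the paper's test case.

```python
import numpy as np

# Extract local curvature of a circular diffuse interface:
#   kappa = -(lap(phi) - f'(phi)/eps^2) / |grad phi|  ~  1/R.

R, eps, h = 1.0, 0.1, 0.02
x = np.arange(-2.0, 2.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X ** 2 + Y ** 2)
phi = np.tanh((R - r) / (np.sqrt(2.0) * eps))      # equilibrium profile

lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
       np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / h ** 2
gx, gy = np.gradient(phi, h)
grad_mag = np.sqrt(gx ** 2 + gy ** 2)
fprime = phi ** 3 - phi                            # f'(phi), double well

mask = np.abs(phi) < 0.5                           # points near the interface
kappa = -(lap[mask] - fprime[mask] / eps ** 2) / grad_mag[mask]
mean_kappa = kappa.mean()                          # close to 1/R = 1.0
```

Here the analytic f'(φ) plays the role of the replaced second term; the paper's point is that discretizing that term instead lets its error cancel against the error in the discrete Laplacian.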
170 Astrophysical multiscale modeling with AMUSE [abstract]
Abstract: Astrophysical phenomena cover many orders of magnitude in spatial and temporal scales. Additional complexity is introduced by the multi-physics aspects of the Universe. We present the Astrophysical Multipurpose Software Environment (AMUSE), which was designed specifically to allow researchers to simulate these processes on high-performance architectures. In AMUSE, subgrid physical phenomena can be taken into account explicitly. The coupling across scales and across physical domains is realized by means of operator splitting. In multi-scale simulations, when the underlying physics shares the same Hamiltonian, we demonstrate that this coupling strategy captures the right physics to second order. When employing the operator splitting strategy across disciplines, we validate the results by comparison with historical results. Simulation projects can be set up in AMUSE in a declarative fashion in which the coupling strategies are described at a meta level. These descriptions allow for the strict separation of individual modules for multi-scale and multi-domain simulations in the form of patterns. In this study we describe how these patterns are implemented in AMUSE and where they can be used to help model celestial phenomena.
Arjen van Elteren and Simon Portegies Zwart
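The second-order claim for operator splitting can be checked on the simplest Hamiltonian system. The sketch below is a toy, not AMUSE itself: Strang (kick-drift-kick) splitting of a harmonic oscillator into its potential and kinetic parts. Halving the step size reduces the global error by about a factor of four, confirming second-order accuracy; AMUSE applies the same splitting structure to full solvers rather than scalar equations.

```python
import math

# Strang (kick-drift-kick) splitting for x'' = -x, split into a potential
# "kick" on v and a kinetic "drift" on x. Illustrative toy system only.

def integrate(dt, t_end=1.0, x=1.0, v=0.0):
    steps = round(t_end / dt)
    for _ in range(steps):
        v -= 0.5 * dt * x       # kick: half step of the potential part
        x += dt * v             # drift: full step of the kinetic part
        v -= 0.5 * dt * x       # kick: second half step
    return x

# Exact solution is cos(t); compare global errors at dt and dt/2.
err1 = abs(integrate(0.01) - math.cos(1.0))
err2 = abs(integrate(0.005) - math.cos(1.0))
ratio = err1 / err2             # close to 4 for a second-order scheme
```

This kick-drift-kick structure is exactly the pattern by which a fast-evolving subsystem can be bridged to a slowly evolving one while preserving second-order accuracy.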
228 Multiscale Modeling of Surgical Flow in a Large Operating Room Suite: Understanding the Mechanism of Accumulation of Delays in Clinical Practice [abstract]
Abstract: Improving operating room (OR) management in large hospitals is a challenging problem that remains largely unresolved. Fifty percent of hospital income depends on OR activities, and among the main concerns in most institutions is improving the efficiency of a large OR suite. We advocate that optimizing surgical flow in large OR suites is a complex multifactorial problem with an underlying multiscale structure. Numerous components of the system can combine nonlinearly, resulting in the large accumulated delays observed in daily clinical practice. We propose a multiscale agent-based model (ABM) of surgical flow. We developed a smartOR system that utilizes a dedicated network of non-invasive, wireless sensors to automatically track the state of the OR and accurately compute major performance indicators such as the turnover time between procedures. We show that our model can fit these time measurements and that a multiscale description of the system is possible. We will discuss how this model can be used to quantify and target the main limiting factors in optimizing OR suite efficiency.
Marc Garbey, Guillaume Joerger, Juliette Rambourg, Brian Dunkin and Barbara Bass
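The delay-accumulation mechanism at the heart of the abstract can be illustrated with a toy schedule. This sketch is our own illustration, not the authors' ABM: in a back-to-back OR schedule each case can only start when the room is free, so every overrun of procedure or turnover time pushes all later cases, and small per-case disturbances compound over the day.

```python
import random

# Toy OR day: n_cases scheduled back-to-back with fixed planned durations.
# Random overruns (procedure or turnover) propagate to every later case.

random.seed(1)
scheduled_duration, scheduled_turnover, n_cases = 90, 30, 8   # minutes

clock, delays = 0, []
for case in range(n_cases):
    scheduled_start = case * (scheduled_duration + scheduled_turnover)
    start = max(clock, scheduled_start)          # wait until the room is free
    delays.append(start - scheduled_start)       # accumulated delay (minutes)
    overrun = random.randint(0, 25)              # stochastic overrun this case
    clock = start + scheduled_duration + scheduled_turnover + overrun

# delays is non-decreasing: a late finish delays every subsequent case.
```

A multiscale ABM replaces the single random overrun with agents (staff, patients, equipment) whose interactions generate these disturbances, which is what allows the model to identify which factor to target first.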
9 Coarse graining from variationally enhanced sampling: the case of Ginzburg-Landau model [abstract]
Abstract: A powerful way to deal with a complex system is to build a coarse-grained model capable of capturing its main physical features while still being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a novel approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, thus providing a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is the Ginzburg-Landau model, valid around a second-order critical point. In particular, we use it to describe a Lennard-Jones fluid in the region close to the liquid-vapor critical point. The procedure is general and can be adapted to other coarse-grained models.
Michele Invernizzi, Omar Valsson and Michele Parrinello
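The phenomenological parameters in question enter the Ginzburg-Landau free-energy functional, which in its standard one-dimensional form reads F[φ] = ∫ [aφ² + bφ⁴ + (κ/2)|∇φ|²] dx. The sketch below only evaluates this functional on a grid; the parameter names and values are our own illustration (the paper's contribution is extracting such parameters from atomistic sampling, which is not reproduced here).

```python
import numpy as np

# Discretised Ginzburg-Landau free energy on a 1D periodic grid:
#   F[phi] = sum_i h * ( a*phi_i^2 + b*phi_i^4 + (kappa/2)*(dphi/dx)_i^2 )

def gl_free_energy(phi, h, a, b, kappa):
    """Evaluate the GL functional for an order-parameter field phi."""
    grad = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * h)   # central diff.
    density = a * phi ** 2 + b * phi ** 4 + 0.5 * kappa * grad ** 2
    return h * density.sum()

# Sanity check: for a uniform field phi = c the gradient term vanishes and
# F = (a*c^2 + b*c^4) * L, with L the domain length.
h, n = 0.1, 100                        # grid spacing and size (L = n*h = 10)
phi = np.full(n, 0.5)
F_uniform = gl_free_energy(phi, h, a=-1.0, b=1.0, kappa=0.5)   # -1.875
```

With a < 0 the quadratic term favours two symmetric minima, the hallmark of the model near a second-order critical point; the variational method supplies a, b and κ such that this F reproduces the fluctuations of the atomistic fluid.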