Multiscale Modelling and Simulation (MMS) Session 1

Time and Date: 10:15 - 11:55 on 13th June 2019

Room: 0.5

Chair: Derek Groen

7 The Schwarz Alternating Method for Multiscale Coupling in Solid Mechanics [abstract]
Abstract: Concurrent multiscale methods are essential for understanding and predicting the behavior of engineering systems when a small-scale event will eventually determine the performance of the entire system. Here, we describe the recently proposed [1] domain-decomposition-based Schwarz alternating method as a means for concurrent multiscale coupling in finite-deformation quasistatic and dynamic solid mechanics. The approach is based on the simple idea that if the solution to a partial differential equation is known in two or more regularly shaped domains that together comprise a more complex domain, these local solutions can be used to iteratively build a solution for the more complex domain. The proposed approach has a number of advantages over competing multiscale coupling methods, most notably its concurrent nature, its ability to couple non-conformal meshes with different element topologies, and its non-intrusive implementation in existing codes. In this talk, we will first review our original formulation of the Schwarz alternating method for multiscale coupling in the context of quasistatic solid mechanics problems [1]. We will discuss the method's proven convergence properties, and demonstrate the accuracy, convergence and scalability of the proposed Schwarz variants on several quasistatic solid mechanics examples simulated using the Albany/LCM code. The bulk of the talk will present recent extensions of the Schwarz alternating formulation to dynamic solid mechanics problems [2]. Our dynamic Schwarz formulation is not based on a space-time discretization like other dynamic Schwarz-like methods; instead, it uses a governing time-stepping algorithm that controls time-integrators within each subdomain. As a result, the method is straightforward to implement in existing codes (e.g., Albany/LCM) and allows the analyst to use different time-integrators with different time steps within each subdomain. We demonstrate on several test cases (including bolted-joint problems of interest to production) that coupling using the proposed method introduces none of the dynamic artifacts that are pervasive in other coupling methods (e.g., spurious wave reflections near domain boundaries), regardless of whether the coupling involves different mesh resolutions, different element types (e.g., hexahedral and tetrahedral elements), or different time-integration schemes (e.g., implicit and explicit). Furthermore, on dynamic problems where energy is conserved, we show that the method preserves energy conservation. A minimal one-dimensional sketch of the alternating iteration appears below. REFERENCES [1] A. Mota, I. Tezaur, C. Alleman, "The alternating Schwarz method for concurrent multiscale coupling", Comput. Meth. Appl. Mech. Engng. 319 (2017) 19-51. [2] A. Mota, I. Tezaur, G. Phlipot, "The Schwarz alternating method for dynamic solid mechanics", in preparation for submission to Comput. Meth. Appl. Mech. Engng.
Alejandro Mota, Irina Tezaur, Coleman Alleman and Greg Phlipot
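To make the alternating idea concrete, here is a minimal sketch of the Schwarz alternating method for a 1D Poisson problem on two overlapping subdomains. The toy equation, grid sizes and overlap are illustrative assumptions; the talk's actual setting is finite-deformation solid mechanics in Albany/LCM.

```python
# Alternating Schwarz for -u'' = f on [0, 1], u(0) = u(1) = 0,
# split into two overlapping subdomains [0, 0.6] and [0.4, 1].
import numpy as np

def solve_poisson(x, f, ua, ub):
    """Solve -u'' = f on the uniform grid x with Dirichlet values ua, ub."""
    n, h = len(x), x[1] - x[0]
    A = (2.0 * np.eye(n - 2) - np.eye(n - 2, k=1) - np.eye(n - 2, k=-1)) / h**2
    b = f(x[1:-1])
    b[0] += ua / h**2
    b[-1] += ub / h**2
    return np.concatenate(([ua], np.linalg.solve(A, b), [ub]))

f = lambda x: np.pi**2 * np.sin(np.pi * x)     # manufactured: u = sin(pi x)
x1 = np.linspace(0.0, 0.6, 61)                 # subdomain 1, overlap = [0.4, 0.6]
x2 = np.linspace(0.4, 1.0, 61)                 # subdomain 2
u1, u2 = np.zeros_like(x1), np.zeros_like(x2)

for it in range(30):
    # Alternate: each subdomain solve uses the other's latest solution,
    # interpolated at its interface, as a Dirichlet boundary condition.
    u1 = solve_poisson(x1, f, 0.0, np.interp(0.6, x2, u2))
    u2 = solve_poisson(x2, f, np.interp(0.4, x1, u1), 0.0)

print(np.max(np.abs(u1 - np.sin(np.pi * x1))))  # small once the iteration converges
```

Because each subdomain is solved independently with only boundary data exchanged, the same loop structure accommodates different meshes, discretizations, or (in the dynamic case) time-integrators per subdomain, which is the non-intrusiveness the abstract emphasizes.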
400 Coupled Simulation of Metal Additive Manufacturing Processes at the Fidelity of the Microstructure [abstract]
Abstract: The Exascale Computing Project (ECP, https://exascaleproject.org/) is a U.S. Dept. of Energy effort developing hardware, software infrastructure, and applications for computational platforms capable of performing 10^18 floating point operations per second (one exaflop). The Exascale Additive Manufacturing Project (ExaAM) is one of the applications selected for development of models that would not be possible on even the largest of today's computational systems. In addition to ORNL, partners include Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and the National Institute of Standards and Technology (NIST), as well as key universities such as Purdue Univ., UCLA, and Penn. State Univ. Since we are both leveraging existing simulation software and developing new capabilities, we will describe the physics components that comprise our simulation environment and report on progress to date using highly resolved melt pool simulations to inform part-scale finite element thermomechanics simulations, drive microstructure evolution, and determine constitutive mechanical property relationships based on those microstructures using polycrystal plasticity. The coupling of melt pool dynamics and thermal behavior, microstructure evolution, and microscale mechanical properties provides a unique, high-fidelity model of the process-structure-property relationship for additively manufactured parts. We will report on the numerics, implementation, and performance of the nonlinearly consistent coupling strategy, including convergence behavior, sensitivity to fluid flow fidelity, and challenges in timestepping. The ExaAM team includes James Belak, co-PI (LLNL), Nathan Barton (LLNL), Matt Bement (LANL), Curt Bronkhorst (Univ. of Wisc.), Neil Carlson (LANL), Robert Carson (LLNL), Jean-Luc Fattebert (ORNL), Neil Hodge (LLNL), Zach Jibben (LANL), Brandon Lane (NIST), Lyle Levine (NIST), Chris Newman (LANL), Balasubramaniam Radhakrishnan (ORNL), Matt Rolchigo (LLNL), Stuart Slattery (ORNL), and Steve Wopschall (LLNL). This work was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration.
John Turner
281 A Semi-Lagrangian Multiscale Framework for Advection-Dominant Problems [abstract]
Abstract: We introduce a new parallelizable numerical multiscale method for advection-dominated problems as they often occur in engineering and the geosciences. State-of-the-art multiscale simulation methods work well in situations where stationary and elliptic scenarios prevail, but are prone to fail when the model involves dominant lower-order terms, which is common in applications. We propose to overcome the associated difficulties through a reconstruction of subgrid variations into a modified basis by solving many independent (local) inverse problems that are constructed in a semi-Lagrangian step. Globally, the method looks like an Eulerian method with a multiscale-stabilized basis. The method is extensible to other types of Galerkin methods, higher dimensions and nonlinear problems, and can potentially work with real data. We provide examples inspired by tracer transport in climate systems in one and two dimensions and numerically compare our method to standard methods. A minimal sketch of the underlying semi-Lagrangian step follows below.
Konrad Simon and Jörn Behrens
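As referenced in the abstract, the following is a minimal sketch of a semi-Lagrangian step for 1D linear advection on a periodic domain: trace characteristics backward and interpolate. The advection speed, grid and tracer profile are assumptions; the multiscale basis reconstruction that is the talk's actual contribution is not shown.

```python
# Semi-Lagrangian step for u_t + a u_x = 0 on a periodic unit interval.
import numpy as np

n, a, dt = 200, 1.0, 0.02
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.3)**2)            # initial tracer blob

for step in range(25):
    x_dep = (x - a * dt) % 1.0               # departure points (backward trace)
    u = np.interp(x_dep, x, u, period=1.0)   # interpolate the old field there
```

In the proposed method, local inverse problems posed along such traced characteristics supply the modified (stabilized) basis used in the global Eulerian solve.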
397 Projection-Based Model Reduction Using Asymptotic Basis Functions [abstract]
Abstract: Galerkin projection provides a formal means to project a differential equation onto a set of preselected basis functions. This may be done for the purpose of formulating a numerical method, as in the case of spectral methods, or of formulating a reduced-order model (ROM) for a complex system. Here, a new method is proposed in which the basis functions used in the projection process are determined from an asymptotic (perturbation) analysis. These asymptotic basis functions (ABF) are obtained from the governing equation itself; therefore, they contain physical information about the system and its dependence on parameters contained within the mathematical formulation. We refer to this as reduced-physics modeling (RPM), as the basis functions are obtained from a physical, i.e. model-driven, rather than data-driven, approach. Therefore, the ABF hold the potential to provide an accurate RPM of a system that captures the physical dependence on model parameters and is accurate over a wider range of parameters than is possible for traditional ROM methods. This new approach is tailor-made for modeling multiscale problems, as the various scales, whether overlapping or distinct in time or space, are formally accounted for in the ABF. A regular-perturbation problem is used to illustrate that projection of the governing equations onto the ABF allows for determination of accurate approximate solutions for values of the "small" parameter that are much larger than possible with the asymptotic expansion alone. The projection step is sketched schematically below.
Kevin Cassel
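Schematically, the projection step described in the abstract can be written as follows; the notation (operator L, forcing f, basis functions phi, coefficients a) is assumed for illustration and is not taken from the paper.

```latex
\begin{align*}
  &\text{Asymptotic expansion:} && u \sim u_0 + \varepsilon\, u_1 + \cdots
     \;\Rightarrow\; \phi_1 = u_0,\ \phi_2 = u_1, \\
  &\text{RPM ansatz:}           && \tilde{u} = a_1 \phi_1 + a_2 \phi_2, \\
  &\text{Galerkin projection:}  && \bigl\langle \mathcal{L}\tilde{u} - f,\ \phi_j \bigr\rangle = 0,
     \qquad j = 1, 2.
\end{align*}
```

Unlike the truncated expansion, which fixes a_1 = 1 and a_2 = epsilon, the projection determines the coefficients from the governing equation itself, which is why accuracy can persist at values of the small parameter well beyond the asymptotic regime.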
351 Special Aspects of Hybrid Kinetic-Hydrodynamic Model When Describing the Shape of Shockwaves [abstract]
Abstract: A mathematical model of the flow of a polyatomic gas is presented, combining the Navier-Stokes-Fourier (NSF) model with a model kinetic equation for polyatomic gases. At the heart of the hybrid components is a unified physical model, as a result of which the NSF model is a strict first approximation of the model kinetic equation. The model allows calculations of flow fields over a wide range of Knudsen numbers (Kn), as well as fields containing regions of high dynamic nonequilibrium. The boundary conditions on a solid surface are set at the kinetic level, which makes it possible, in particular, to formulate boundary conditions on surfaces that absorb or emit gas. The hybrid model was tested. The shock-wave profile problem shows that, up to Mach numbers near 2, the combined model gives smooth solutions even when the matching point lies in a high-gradient region. For Couette flow, smooth solutions are obtained at M=5, Kn=0.2. Our tests established only a weak, insignificant difference between the kinetic region of the hybrid model and the "pure" kinetic model. A notable model effect was discovered: in the region of high nonequilibrium, the solutions of the kinetic region of the combined model and the "pure" kinetic solution coincide almost completely. This work was conducted with the financial support of the Ministry of Education and Science of the Russian Federation, project №9.7170.2017/8.9.
Yurii Nikitchenko, Sergey Popov and Alena Tikhonovets

Multiscale Modelling and Simulation (MMS) Session 2

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 0.5

Chair: Derek Groen

465 Introducing VECMAtk - verification, validation and uncertainty quantification for multiscale and HPC simulations [abstract]
Abstract: Multiscale simulations are an essential computational tool in a range of research disciplines, and provide unprecedented levels of scientific insight at a tractable cost in terms of effort and compute resources. To provide this, we need such simulations to produce results that are both robust and actionable. The VECMA toolkit (VECMAtk), which is officially released in conjunction with the present paper, establishes a platform to achieve this by exposing patterns for verification, validation and uncertainty quantification (VVUQ). These patterns can be combined to capture complex scenarios, applied to applications in disparate domains, and used to run multiscale simulations on any desktop, cluster or supercomputing platform.
Derek Groen, Robin Richardson, David Wright, Vytautas Jancauskas, Robert Sinclair, Paul Karlshoefer, Maxime Vassaux, Hamid Arabnejad, Tomasz Piontek, Piotr Kopta, Bartosz Bosak, Jalal Lakhlili, Olivier Hoenen, Diana Suleimenova, Wouter Edeling, Daan Crommelin, Anna Nikishova and Peter Coveney
350 EasyVVUQ: Building trust in simulation [abstract]
Abstract: Modelling and simulation are increasingly well-established techniques in a wide range of academic and industrial domains. As their use becomes increasingly important, it is vital that we understand both their sensitivity to inputs and how much confidence we should have in their results. Nonetheless, few simulations are reported with rigorous verification and validation (V&V), or even meaningful error bars (uncertainty quantification, UQ). EasyVVUQ is a Python library designed to allow the integration of non-intrusive VVUQ techniques into existing simulation workflows. Our aim is to provide the basis for tools that wrap around applications, allowing the user to specify the scientifically interesting parameters of the model and the type of VVUQ algorithm they wish to employ, while the details of the setup and analysis are abstracted away. To this end we have designed JSON-based input formats that provide a human-readable and comprehensible interface to the code. The EasyVVUQ framework is based on the concept of a Campaign of simulations, the inputs of which are generated by a range of sampling algorithms. The Campaign is executed externally to the library, but the results are processed, aggregated and analyzed within it (a schematic sketch of this pattern follows below). EasyVVUQ provides simple templating features that facilitate mapping between scientific parameters and input options and files for a wide range of applications out of the box. Furthermore, our design allows simple customization of both the input generation and the extraction of relevant data from simulation outputs by expert users and developers. We present its use in three example multiscale applications from the VECMA project: protein-ligand binding affinity calculations, coupled molecular dynamics and finite element materials modelling, and fusion.
David Wright, Robin Richardson and Peter Coveney
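The following is a schematic, plain-Python sketch of the Campaign pattern the abstract describes. It is NOT EasyVVUQ's actual API: the class, method names and the toy model are illustrative assumptions standing in for the library's interface.

```python
import random
import statistics

class Campaign:
    """Hold parameter definitions, generate runs, collect and analyse results."""
    def __init__(self, vary, n_samples):
        self.vary = vary                    # {name: (low, high)} ranges to vary
        self.n_samples = n_samples
        self.runs, self.results = [], []

    def draw_samples(self):
        # Simple random sampling; the real library offers a range of samplers.
        for _ in range(self.n_samples):
            self.runs.append({k: random.uniform(lo, hi)
                              for k, (lo, hi) in self.vary.items()})

    def execute(self, model):
        # In EasyVVUQ the runs are executed externally (e.g. on a cluster);
        # here we simply call a Python function for illustration.
        self.results = [model(**run) for run in self.runs]

    def analyse(self):
        return statistics.mean(self.results), statistics.stdev(self.results)

campaign = Campaign(vary={"k": (0.9, 1.1), "x0": (0.0, 0.2)}, n_samples=200)
campaign.draw_samples()
campaign.execute(lambda k, x0: k * (1.0 - x0) ** 2)    # toy "simulation"
print(campaign.analyse())                              # mean and spread of output
```

The design point is the separation of concerns: sampling, external execution, and analysis are independent stages, so the same campaign can wrap any black-box application.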
391 Uncertainty quantification in multiscale simulations applied to fusion plasmas [abstract]
Abstract: In order to predict the overall performance of a thermonuclear fusion device, an understanding of how microscale turbulence affects the global transport of the plasma is essential. A multiscale, component-based fusion simulation was designed by coupling several single-scale physics models into a workflow comprising a transport code, an equilibrium code and a turbulence code. While previous simulations using such a workflow showed promising results on propagating turbulent effects to the overall plasma transport [1], the profiles of densities and temperatures simulated by the transport model carry uncertainties that have yet to be quantified. The turbulence code provides transport coefficients that are inherently noisy. These coefficients are propagated through the transport code and produce an uncertainty interval in the calculated profiles, which in turn feed the equilibrium and turbulence codes and produce new uncertainty intervals. Our goal is therefore to study how these uncertainties propagate through the workflow, so that we can draw quantitative comparisons between numerical and experimental results. In this context, we are developing tools based on a non-intrusive polynomial chaos expansion (PCE) [2]. Each sub-model is treated as a black box to which the PCE method is applied (a minimal sketch of this black-box approach follows below). Several statistical metrics are then derived directly from the polynomial expansion, yielding the uncertainty quantification (UQ) and parameter sensitivities of the multiscale model. References: [1] O.O. Luk, O. Hoenen, A. Bottino, B.D. Scott, D.P. Coster, "ComPat framework for multiscale simulations applied to fusion plasmas", Computer Physics Communications (2019), https://doi.org/10.1016/j.cpc.2018.12.021. [2] R. Preuss, U. von Toussaint, "Uncertainty quantification in ion–solid interaction simulations", Nuclear Instruments and Methods in Physics Research Section B (2017), https://doi.org/10.1016/j.nimb.2016.10.033.
Jalal Lakhlili, David Coster, Olivier Hoenen, Onnie Luk, Roland Preuss and Udo von Toussaint
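As referenced in the abstract, here is a minimal sketch of non-intrusive PCE for a single uniformly distributed input. The toy model f and the sampling settings are assumptions; the fusion workflow codes are the real targets.

```python
# Non-intrusive PCE by regression: sample the black box, fit Legendre
# coefficients, and read statistics directly from the expansion.
import numpy as np
from numpy.polynomial import legendre

f = lambda xi: np.exp(0.7 * xi)        # black-box model, input xi ~ U(-1, 1)
order, n_samples = 6, 200

xi = np.random.uniform(-1.0, 1.0, n_samples)
V = legendre.legvander(xi, order)      # Legendre basis evaluated at the samples
c, *_ = np.linalg.lstsq(V, f(xi), rcond=None)   # regression for coefficients

# For Legendre polynomials under the uniform measure on [-1, 1],
# E[P_k^2] = 1/(2k+1), so mean and variance follow from the coefficients.
k = np.arange(order + 1)
mean = c[0]
variance = np.sum(c[1:]**2 / (2.0 * k[1:] + 1.0))
print(mean, variance)                  # compare mean with exact sinh(0.7)/0.7
```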
293 Analysis of Uncertainty of an In-Stent Restenosis Model [abstract]
Abstract: Uncertainty and sensitivity analysis provides insight into how uncertainty in the model inputs affects the model response [1, 2]. Methods for such analysis are usually computationally expensive and may require high-performance computing resources. In [3], we performed uncertainty quantification by applying the quasi-Monte Carlo method to a two-dimensional version of an in-stent restenosis model (ISR2D) [4]. Additionally, in [5], we improved the efficiency of the uncertainty estimation by applying the semi-intrusive multiscale method [6]. We observe approximately 30% uncertainty in the mean neointimal area as simulated by the ISR2D model. Depending on whether a fast initial endothelium recovery occurs, the proportion of the model variance due to natural variability ranges from 15% to 35%. The endothelium regeneration time is identified as the most influential model parameter. The model output contains a moderate quantity of uncertainty, and the model precision can be increased by obtaining a more certain value of the endothelium regeneration time. The results obtained by the semi-intrusive method show a good match to those obtained by a black-box quasi-Monte Carlo method (see Fig. 1), while significantly reducing the computational cost of the uncertainty estimation. We conclude that the semi-intrusive metamodeling method is reliable and efficient, and can be applied to complex models such as the ISR2D model. A minimal sketch of the quasi-Monte Carlo baseline follows below.
Anna Nikishova, Lourens Veen, Pavel Zun and Alfons Hoekstra
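As referenced in the abstract, the following sketches the black-box quasi-Monte Carlo baseline. The toy model and the parameter names (endo_time, max_strain) are hypothetical stand-ins for the ISR2D simulation and its inputs; the Sobol sampler is from scipy (>= 1.7).

```python
import numpy as np
from scipy.stats import qmc

def model(endo_time, max_strain):
    """Hypothetical stand-in for the ISR2D output (a fake 'neointimal area')."""
    return 0.5 * endo_time**1.5 + 2.0 * max_strain

sampler = qmc.Sobol(d=2, scramble=True)
u = sampler.random_base2(m=10)                  # 2^10 low-discrepancy points in [0,1)^2
lo, hi = np.array([10.0, 0.1]), np.array([20.0, 0.5])
samples = qmc.scale(u, lo, hi)                  # map to the input parameter ranges

y = np.array([model(*s) for s in samples])
print(y.mean(), y.std() / y.mean())             # mean output and relative uncertainty
```

Low-discrepancy sequences cover the input space more evenly than random sampling, which is what makes the quasi-Monte Carlo estimate converge faster for the same number of expensive model runs.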
100 Creating a reusable cross-disciplinary multi-scale and multi-physics framework: from AMUSE to OMUSE and beyond [abstract]
Abstract: We describe our efforts to create a multi-scale and multi-physics framework that can be retargeted across different disciplines. Currently we have implemented our approach in the astrophysical domain, for which we developed AMUSE, and generalized this to the oceanographic and climate sciences, which led to the development of OMUSE. The objective of this paper is to document the design choices that led to the successful implementation of these frameworks as well as the future challenges in applying this approach to other domains.
Federico Inti Pelupessy, Simon Portegies Zwart, Arjen van Elteren, Henk Dijkstra, Fredrik Jansson, Daan Crommelin, Pier Siebesma, Ben van Werkhoven and Gijs van den Oord

Multiscale Modelling and Simulation (MMS) Session 3

Time and Date: 16:30 - 18:10 on 13th June 2019

Room: 0.5

Chair: Derek Groen

392 Regional superparameterization of the OpenIFS atmosphere model by nesting 3D LES models [abstract]
Abstract: We present a superparameterization of the ECMWF global weather forecasting model OpenIFS with a local, cloud-resolving model. Superparameterization is a multiscale modeling approach used in atmospheric science in which conventional parameterizations of small-scale processes are replaced by local high-resolution models that resolve these processes. Here, we use the Dutch Atmospheric Large Eddy Simulation model (DALES) as the local model. Within a selected region, our setup nests DALES instances within model columns of the global model OpenIFS. This is done so that the global model parameterizations of boundary layer turbulence, cloud physics and convection processes are replaced with tendencies derived from the vertical profiles of the local model. The local models are in turn forced towards the corresponding vertical profiles of the global model, making the model coupling bidirectional. We consistently combine the sequential physics scheme of OpenIFS with the Grabowski superparameterization scheme and achieve concurrent execution of the independent DALES models on separate CPUs. The superparameterized region can be chosen to match the available compute resources, and we have implemented mean-state acceleration to speed up the LES time stepping. The coupling of the components has been implemented in a Python software layer using the OMUSE multi-scale physics framework. As a result, our setup yields a cloud-resolving weather model that displays emergent mesoscale cloud organization and has the potential to improve the representation of clouds and convection processes in OpenIFS. It allows us to study the interaction of boundary layer physics with the large scale dynamics, to assess cloud and convection parameterization in the ECMWF model, and eventually to improve our understanding of cloud feedback in climate models. [Regional superparameterization in a Global Circulation Model using Large Eddy Simulations, Fredrik Jansson, Gijs van den Oord, Inti Pelupessy, Johanna H. Grönqvist, A. Pier Siebesma, Daan Crommelin, Under review (2018)]
Gijs van den Oord, Fredrik Jansson, Inti Pelupessy, Maria Chertova, Pier Siebesma and Daan Crommelin
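As a concrete illustration of the bidirectional coupling described in the abstract above, here is a minimal sketch of relaxation-based forcing between a global-model column and nested local (LES-like) fields. The profiles, array sizes and relaxation time are toy values, not OpenIFS/DALES quantities.

```python
import numpy as np

nx, nz = 16, 40                   # LES columns and vertical levels (toy sizes)
dt, tau = 60.0, 3600.0            # time step and relaxation time scale [s]
T_global = np.linspace(300.0, 220.0, nz)                   # global column profile
T_local = T_global + np.random.normal(0.0, 0.5, (nx, nz))  # local (LES) fields

for step in range(100):
    # Local model: its own (omitted) dynamics plus forcing toward the
    # global profile, broadcast over the horizontal LES columns.
    T_local += dt * (T_global - T_local) / tau
    # Global column: the conventional parameterization tendency is replaced
    # by relaxation toward the horizontally averaged local state.
    T_global += dt * (T_local.mean(axis=0) - T_global) / tau
```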
396 MaMiCo: Parallel Noise Reduction for Multi-Instance Molecular-Continuum Flow Simulation [abstract]
Abstract: Transient molecular-continuum coupled flow simulations often suffer from high thermal noise, created by fluctuating hydrodynamics within the molecular dynamics (MD) simulation. Multi-instance MD computations are an approach to extract smooth flow field quantities on rather short time scales, but they require a huge amount of computational resources. Filtering particle data with signal processing methods to reduce numerical noise can significantly reduce the number of instances necessary, leading to improved stability and reduced computational cost in the molecular-continuum setting. We extend the Macro-Micro-Coupling tool (MaMiCo) - a software package for coupling arbitrary continuum and MD solvers - by a new parallel interface for universal MD data analytics and post-processing, especially for noise reduction. It is designed modularly and is compatible with multi-instance sampling. We present a Proper Orthogonal Decomposition (POD) implementation of the interface, capable of massively parallel noise filtering (the filtering principle is sketched below). The resulting coupled simulation is validated using a three-dimensional Couette flow scenario. We quantify the denoising and conduct performance benchmarks and scaling tests on a supercomputing platform. We thus demonstrate that the new interface enables massively parallel data analytics and post-processing in conjunction with any MD solver coupled to MaMiCo.
Piet Jarmatz and Philipp Neumann
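As referenced in the abstract, the following is a minimal sketch of POD-based noise filtering on time snapshots of a flow quantity. The data are synthetic, not MaMiCo output, and the mode count is an assumption.

```python
import numpy as np

n_cells, n_snap, n_modes = 500, 64, 2
x = np.linspace(0.0, 1.0, n_cells)
t = np.linspace(0.0, 1.0, n_snap)
# Synthetic "MD" data: a smooth Couette-like signal plus thermal noise.
clean = np.outer(x, t)                      # u(x, t) = x * t
noisy = clean + np.random.normal(0.0, 0.1, (n_cells, n_snap))

# Truncated SVD of the snapshot matrix keeps the energetic (coherent)
# modes and discards the noise-dominated remainder.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
filtered = U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes, :]

print(np.linalg.norm(filtered - clean) / np.linalg.norm(clean))  # denoising error
```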
303 A Multiscale Model of Atherosclerotic Plaque Development: toward a Coupling between an Agent-Based Model and CFD Simulations [abstract]
Abstract: Computational models have been widely used to predict the efficacy of surgical interventions in response to Peripheral Occlusive Diseases. However, most of them lack a multiscale description of the development of the disease, which we hypothesize is the key to developing an effective predictive model. Accordingly, in this work we present a multiscale computational framework that simulates the generation of atherosclerotic arterial occlusions. Starting from a healthy artery in homeostatic conditions, the perturbation of specific cellular and extracellular dynamics leads to the development of the pathology, with the final output being a diseased artery. The presented model was developed on an idealized portion of a Superficial Femoral Artery (SFA), where an Agent-Based Model (ABM), locally replicating the plaque development, was coupled to Computational Fluid Dynamics (CFD) simulations that define the Wall Shear Stress (WSS) profile at the lumen interface (the coupling loop is sketched schematically below). The ABM was qualitatively validated on histological images, and a preliminary analysis of the coupling method was conducted. Once the coupling method is optimized, the presented model can serve as a predictive platform to improve the outcome of surgical interventions such as angioplasty and stent deployment.
Anna Corti, Stefano Casarin, Claudio Chiastra, Monika Colombo, Francesco Migliavacca and Marc Garbey
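As referenced in the abstract, here is a schematic, runnable toy of the ABM-CFD coupling loop. All quantities are illustrative stand-ins, not the authors' model: the "CFD" is replaced by an algebraic WSS estimate and the "ABM" by a simple growth rule in which low WSS promotes intimal thickening.

```python
import numpy as np

n_sites, n_cycles = 100, 20
radius = np.full(n_sites, 3.0)                 # lumen radius per wall site [mm]

def run_cfd(radius):
    """Stand-in for CFD: at fixed flow rate, WSS scales like 1/r^3 (Poiseuille)."""
    return 40.0 / radius**3                    # toy WSS values

def run_abm(wss, radius):
    """Stand-in for the ABM: sites experiencing low WSS grow inward."""
    growth = 0.05 * np.clip(1.0 - wss / 2.0, 0.0, None)
    return radius - growth                     # plaque narrows the lumen

for cycle in range(n_cycles):
    wss = run_cfd(radius)                      # macro scale: hemodynamics
    radius = run_abm(wss, radius)              # micro scale: wall biology

print(radius.min(), run_cfd(radius).max())    # narrowed lumen, elevated WSS
```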
215 Mesoscopic simulation of droplet coalescence in fibrous porous media [abstract]
Abstract: Flow phenomena in porous media are relevant in many industrial applications, including fabric filters, gas diffusion membranes, and biomedical implants. For instance, nonwoven membranes can be used as filtration media with a tailored permeability range and a controllable pore size distribution. However, predicting the structure-property relations that arise from specific porous microstructures remains a challenging task. Theoretical approaches have been limited to simple geometries and can often only predict the general trend of experimental data. Computer simulations are a cost-effective way of validating semi-empirical relations and predicting the precise relations between macroscopic transport properties and microscopic pore structure. To this end, multiscale simulation techniques have proven particularly successful in numerically solving the coupled partial differential equations with the complex boundary conditions found in porous media. In this talk, I will present simulations of multiphase flow in fibrous porous media based on a multiphase lattice Boltzmann model for water droplets in oil (the single-phase building block is sketched below). We study the effect of fibrous structures and their surface properties on the coalescence behavior of water droplets. We will discuss how the insights can be used to design optimized materials for diesel fuel filters and other filtration devices.
Fang Wang and Ulf D. Schiller
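As referenced in the abstract, here is a minimal single-phase D2Q9 lattice Boltzmann (BGK) sketch of the method's building block. The grid, relaxation time and initial flow are assumptions; the multiphase forcing and fiber boundaries of the talk are not included.

```python
import numpy as np

nx, ny, tau = 64, 32, 0.8
# D2Q9 lattice velocities (rest, 4 axis, 4 diagonal) and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
ux = np.full((nx, ny), 0.05)      # small uniform initial x-velocity
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(200):
    # Streaming: shift each population along its lattice velocity (periodic).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Macroscopic moments.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium.
    f += (equilibrium(rho, ux, uy) - f) / tau
```

Multiphase variants add an interaction force between fluid components and wetting conditions at the fiber surfaces on top of this stream-collide cycle.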
382 Computational Analysis of Pulsed Radiofrequency Ablation in Treating Chronic Pain [abstract]
Abstract: In this paper, a parametric study has been conducted to evaluate the effects of the frequency and duration of the short-burst pulses applied during pulsed radiofrequency ablation (RFA) for treating chronic pain. Affecting the brain and nervous system, this condition remains one of the major challenges in neuroscience and clinical practice. A two-dimensional axisymmetric RFA model has been developed in which a single-needle radiofrequency electrode is inserted. A finite-element-based coupled thermo-electric analysis has been carried out, utilizing the simplified Maxwell's equations and the Pennes bioheat transfer equation to compute the electric field and temperature distributions within the computational domain (the governing equations are summarized below). Comparative studies between continuous and pulsed RFA highlight the significance of pulsed RFA in chronic pain treatment. The frequencies and durations of the short-burst RF pulses have been varied from 1 Hz to 10 Hz and from 10 ms to 50 ms, respectively; such values are most commonly applied in clinical practice for the mitigation of chronic pain. By reporting the resulting temperature distributions for different frequencies and durations of the RF pulses, this computational study aims to provide clinicians with first-hand, quantitative information on the possible consequences of varying these characteristics during the pulsed RFA procedure. The results demonstrate that the efficacy of pulsed RFA depends significantly on the duration and frequency of the RF pulses.
Sundeep Singh and Roderick Melnik
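As referenced in the abstract, the coupled equations are, in their standard form (symbols follow common usage and are not necessarily the paper's notation):

```latex
\begin{align*}
  \nabla \cdot (\sigma \nabla V) &= 0,
    && \text{quasi-static electric potential,} \\
  Q_{\mathrm{RF}} &= \sigma \lvert \nabla V \rvert^{2},
    && \text{resistive (Joule) heat source,} \\
  \rho c \,\frac{\partial T}{\partial t}
    &= \nabla \cdot (k \nabla T)
     + \rho_{b} c_{b} \omega_{b}\,(T_{b} - T)
     + Q_{m} + Q_{\mathrm{RF}},
    && \text{Pennes bioheat equation.}
\end{align*}
```

In pulsed RFA the source Q_RF is active only during each short burst (here 10-50 ms at 1-10 Hz), allowing heat to dissipate through conduction and perfusion between pulses, which is the mechanism the parametric study probes.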