ICCS is well known for its line-up of keynote speakers.
This page will be frequently updated with new names, lecture titles and abstracts.
Helen Brooks, United Kingdom Atomic Energy Authority (UKAEA), United Kingdom
Towards In-silico Design of Fusion Power-plant Systems
Jack Dongarra, University of Tennessee, United States of America
An Overview of High Performance Computing and Future Requirements
Derek Groen, Brunel University London, United Kingdom
Building Robust Simulation-based Forecasts During Emergencies
Anders Dam Jensen, European High Performance Computing Joint Undertaking (EuroHPC JU), Luxembourg
Leading the Way in European Supercomputing: User Opportunities and Latest Updates from the EuroHPC Joint Undertaking
Jakub Šístek, Institute of Mathematics of the Czech Academy of Sciences & Czech Technical University in Prague, Czechia
Adaptive-Multilevel BDDC: A Scalable Domain Decomposition Method for Problems in Computational Mechanics
Helen Brooks, United Kingdom Atomic Energy Authority (UKAEA), United Kingdom
Dr Helen Brooks is a Lead HPC Computational Engineer at the United Kingdom Atomic Energy Authority (UKAEA), working in the Advanced Engineering Simulation group. She has worked on the UK Spherical Tokamak for Energy Production (STEP) project since 2020, developing performant simulation tools for fusion power plant components, and is a member of the UKAEA Breeder Blanket Team. Her current research interests are multi-physics modelling, developing automated design workflows, and performance portability. She received her MSc in Natural Sciences from the University of Cambridge in 2013, and her PhD in high energy physics from Durham University in 2018.
In the next two decades, significant effort will be expended globally to deliver first-of-a-kind, commercialisable fusion power-plant facilities. With compressed timelines and budgetary constraints, traditional engineering methodologies must be accelerated through in-silico design and qualification, using validated computational models of fusion components and devices. To derive actionable conclusions from simulation demands sufficient fidelity of modelling, and the quantification and propagation of uncertainties. It is further desirable to facilitate exploration and discovery within a parameterised design space through modern techniques in fields such as multi-objective optimisation and data science.
The ability to exploit such technologies assumes a mechanism to automatically prepare and evaluate a model for a given design specification. Furthermore, such iterative methodologies imply the production of large volumes of simulated data; to avoid a scenario in which the computational cost of generating such data is prohibitive, highly performant software applications are required. In this talk, ongoing activities at the United Kingdom Atomic Energy Authority in support of these ambitions are reviewed. A growing suite of open-source multi-physics applications for modelling the extremes of the fusion environment is described. The scalability of these software tools, and their potential to leverage future exascale architectures, are discussed. Finally, recent developments of a novel framework for the automated design of the breeder blanket, a critical sub-system of a magnetic-confinement fusion power plant, are reported.
Jack Dongarra, University of Tennessee, United States of America
Jack Dongarra specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. He holds appointments at the University of Manchester, Oak Ridge National Laboratory, and the University of Tennessee, where he founded the Innovative Computing Laboratory. In 2019 he received the SIAM/ACM Prize in Computational Science and Engineering. In 2020 he received the IEEE-CS Computer Pioneer Award. He is a Fellow of the AAAS, ACM, IEEE, and SIAM; a Foreign Member of the Royal Society; and a member of the US National Academy of Engineering. Most recently, he received the 2021 ACM A.M. Turing Award for his pioneering contributions to numerical algorithms and software that have driven decades of extraordinary progress in computing performance and applications.
In this talk we examine how high performance computing has changed over the last ten years and look toward future trends. These changes have had, and will continue to have, a significant impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as the management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
Derek Groen, Brunel University London, United Kingdom
Dr Derek Groen is a Reader in Computer Science at Brunel University London, and a Visiting Lecturer at University College London. He has a PhD in Computational Astrophysics from the University of Amsterdam (2010), and was a Post-Doctoral Researcher at UCL for five years prior to joining Brunel as a Lecturer. Derek has a strong interest in high performance simulations, multiscale modelling and simulation, and so-called VVUQ (verification, validation and uncertainty quantification). In terms of applications, he is a lead developer of the Flee migration modelling code and the Flu And Coronavirus Simulator (FACS) COVID-19 model. He has also previously worked on applications in astrophysics, materials and blood flow. Derek has been PI for Brunel in two recent large Horizon 2020 research projects (VECMA on uncertainty quantification, and HiDALGO on global challenge simulations), and he is currently the technical manager of the UK-funded SEAVEA project, which develops a VVUQ toolkit for large-scale computing applications (seavea-project.org). His most recent publication (at time of writing) is a software paper about the FabSim3 research automation toolkit, which was selected as a Feature Paper for Computer Physics Communications.
Many of today’s global crises, such as the 2015 migration crisis in Syria and the 2020 COVID-19 pandemic, have a sudden evolution that complicates the preparation of a community response. Simulation-based forecasts for such crises can help to guide the development of mitigation policies or inform a more efficient distribution of support. However, the time required to develop, validate and execute these models can often be intractably long, causing many of these forecasts to only become accurate after the damage has occurred. In this talk I will share the experiences within our group in developing and delivering forecasting reports for two types of emergency situations: conflict-driven migration, and COVID-19 infectious disease outbreaks. We try to achieve this using open-source agent-based models, high performance computing, and generic tools for automation, verification, validation and ensemble forecasting with uncertainty. It is a feat that is extremely difficult to accomplish even with large and dedicated teams; we only have a small and partially dedicated team. Nevertheless, I will share the approaches we use to handle the challenge of rapidly developing simulation-based emergency forecasts. These approaches helped us to perform better, and deliver more, than many feared possible, so we believe they could benefit other research teams too.
Anders Dam Jensen, European High Performance Computing Joint Undertaking (EuroHPC JU), Luxembourg
Anders Dam Jensen is the Executive Director of the European High Performance Computing Joint Undertaking, a joint initiative between the EU, European countries and private partners to develop a world-class supercomputing ecosystem in Europe. This appointment is the continuation of a lifelong interest in supercomputers, starting from his time at the Technical University of Denmark, from which he holds a Master of Science degree and a Master of Business Administration. After spending the first part of his career working in engineering and pioneering IEEE 802.11 wireless network technology with Symbol Technologies, Anders joined Cargolux Airlines International as Director of IT and was instrumental in spinning off the Cargolux IT department into CHAMP Cargosystems S.A. In 2011, Anders became Director ICTM at NATO, taking on responsibility for all information and IT services as well as one of the largest classified networks in Europe.
Anders will present his organisation, the European High Performance Computing Joint Undertaking (EuroHPC JU). The EuroHPC JU pools the resources of the European Union, 33 European countries and 3 private partners to develop a world-class supercomputing ecosystem in Europe. Anders will present the operational EuroHPC supercomputers located across Europe and give details about their access policy: free access is already provided to European research organisations, and wider access is planned for the future. Anders will then present some of the JU’s missions, such as the acquisition of new supercomputers, including exascale systems and quantum computers; the implementation of an ambitious research and innovation programme supporting European technological and digital autonomy and developing green HPC technologies; the further strengthening of Europe’s leading position in HPC applications; and other initiatives that are part of the JU’s effort to broaden the use of HPC in Europe.
Jakub Šístek, Institute of Mathematics of the Czech Academy of Sciences & Czech Technical University in Prague, Czechia
Jakub Šístek focuses on mathematical algorithms for high performance computing, such as parallel solvers for numerical linear algebra, scalable domain decomposition methods, and applications to problems of structural mechanics and computational fluid dynamics. He is also interested in vortex identification and visualization in fluid flows. Jakub is the head of the Department of Constructive Methods of Mathematical Analysis at the Institute of Mathematics of the Czech Academy of Sciences and an assistant professor at the Department of Applied Mathematics of the Faculty of Information Technology of the Czech Technical University in Prague. Previously, he worked at universities in Denver, Cambridge, and Manchester. He received his Ph.D. in Mathematical and Physical Engineering from the Faculty of Mechanical Engineering of the Czech Technical University in 2008. Jakub is the recipient of the Ivo Babuška Prize (2009) and the Otto Wichterle Premium (2013).
In 2023, Balancing Domain Decomposition by Constraints (BDDC), introduced by Dohrmann, celebrates 20 years since its publication. I will present the method and review our research on its extensions to multiple levels, which improve its scalability for large numbers of subdomains and processors. Then, I will present the construction of coarse spaces adapted to the solved problem, which greatly increases the robustness of this iterative method. Together, these form the adaptive-multilevel BDDC, which enjoys both scalability and robustness. Next, I will review some applications of the method in computational mechanics. First, I will look at the time-dependent Navier-Stokes equations, and I will discuss applications of variants of the BDDC method to sequences of Poisson problems arising from the pressure-correction method. Then, I will describe one path to accelerating the BDDC solver with Graphics Processing Units (GPUs). I will conclude by presenting our recent work towards making multilevel BDDC a robust and scalable method for solving systems arising from immersed boundary finite element methods with adaptive mesh refinement.