Keynote Lectures

ICCS is well known for its excellent line-up of keynote speakers.
This page will be frequently updated with new names, lecture titles and abstracts.

CONFIRMED SPEAKERS

Maciej Besta, ETH Zürich, Switzerland
        Enabling High-Performance Large-Scale Irregular Computations
Marian Bubak, Sano Centre for Computational Medicine, Poland | AGH University of Science and Technology, Poland
        Towards Personalised Computational Medicine – Sano Centre Perspective
Anne Gelb, Dartmouth College, USA
        Empirical Bayesian Inference using Joint Sparsity
        This keynote lecture is proudly sponsored by Elsevier’s Journal of Computational Science.
Georgiy Stenchikov, King Abdullah University of Science and Technology, Saudi Arabia
        What do Climate Scientists Use Computer Resources for – The Role of Volcanic Activity in Climate and Global Change
Marco Viceconti, University of Bologna, Italy
        Does In Silico Medicine need Big Science?
Krzysztof Walczak, Poznan University of Economics and Business, Poland
        Implementing Serious Virtual Reality Applications
Jessica Zhang, Carnegie Mellon University, USA
        Material Transport Simulation in Complex Neurite Networks Using Isogeometric Analysis and Machine Learning Techniques
        This keynote lecture is proudly sponsored by Elsevier’s Journal of Computational Science.

Enabling High-Performance Large-Scale Irregular Computations
Maciej Besta
ETH Zürich, Switzerland

Maciej is a researcher in the Scalable Parallel Computing Lab (SPCL) at ETH Zurich. He works on large-scale graph computations and high-performance networking. Among other distinctions, he won the competition for the Best Student of Poland (2012), the first Google Fellowship in Parallel Computing (2013), and the ACM/IEEE-CS George Michael HPC Fellowship (2015). He received Best Paper and Best Student Paper awards at ACM/IEEE Supercomputing 2013, 2014, and 2019 and at ACM HPDC 2015 and 2016, an ACM Research Highlights distinction in 2018, and several further Best Paper nominations (ACM HPDC 2014, ACM FPGA 2019, and ACM/IEEE Supercomputing 2019). More detailed information is available on his personal site: https://people.inf.ethz.ch/bestam

ABSTRACT
Large graphs are behind many problems in today’s computing landscape. The growing sizes of such graphs, recently reaching 70 trillion edges, require unprecedented amounts of compute power, storage, and energy. In this talk, we illustrate how to process such extreme-scale graphs effectively. Our solutions draw on various forms of graph compression, paradigms and abstractions, the effective design and utilization of massively parallel hardware, vectorizable graph representations, communication avoidance, and more.
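As a flavour of one such technique (an illustrative, generic sketch, not necessarily a method covered in the talk), the snippet below compresses an adjacency list by storing sorted neighbour IDs as variable-length-encoded gaps, a standard way to cut the memory footprint of large graphs:

```python
# Illustrative sketch: delta + varint encoding of adjacency lists,
# a common graph-compression technique (generic, not from the talk).

def varint_encode(n: int) -> bytes:
    """Encode a non-negative integer as a variable-length byte sequence."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def compress_adjacency(neighbors: list[int]) -> bytes:
    """Sort neighbors and store gaps, which are small and varint-friendly."""
    encoded = bytearray()
    prev = 0
    for v in sorted(neighbors):
        encoded += varint_encode(v - prev)  # gap instead of absolute ID
        prev = v
    return bytes(encoded)

def decompress_adjacency(data: bytes) -> list[int]:
    """Invert compress_adjacency: read varints and accumulate the gaps."""
    neighbors, value, shift, prev = [], 0, 0, 0
    for byte in data:
        value |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            prev += value
            neighbors.append(prev)
            value, shift = 0, 0
    return neighbors

adj = [1048576, 1048579, 1048600, 1050000]
blob = compress_adjacency(adj)
assert decompress_adjacency(blob) == adj
print(f"{len(adj) * 8} bytes raw -> {len(blob)} bytes compressed")
```

Because neighbour IDs tend to cluster, most gaps encode in one or two bytes, which shrinks the representation and also improves cache behaviour during traversals.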

Towards Personalised Computational Medicine – Sano Centre Perspective
Marian Bubak
Sano Centre for Computational Medicine, Poland | AGH University of Science and Technology, Poland

Marian Bubak is the Scientific Affairs Director of the Sano Centre for Computational Medicine; he also leads the Laboratory of Information Methods in Medicine at ACC Cyfronet AGH, is a staff member of the Institute of Computer Science AGH, and is Professor (emeritus) of Distributed System Engineering at the Informatics Institute of the University of Amsterdam. He obtained an M.Sc. in Technical Physics and a Ph.D. in Computer Science from the AGH University of Science and Technology, Krakow, Poland. His research interests include parallel and distributed computing, problem-solving environments, and quantum computing; he has authored about 250 papers in these areas and co-edited a number of books. He has served in key roles in a series of EU-funded projects, as a member of the editorial boards of FGCS, Bio-Algorithms and Med-Systems, and Computer Science Journal, and as chairman or organizer of international conferences (including ICCS in 2004 and 2008). He is the chairman of the Malopolska Branch of the Polish Information Processing Society.

ABSTRACT
Patients differ in many respects, and these differences are compounded by the complexity of disease development. To provide more personalised treatment, models of human physiology are becoming more and more complex, and the amount of data is too large to be processed in traditional ways. Advanced methods of modelling and data analysis are therefore necessary. This means that today’s medicine is increasingly entering territory similar to engineering, and we are observing the development of computational medicine, which adopts advanced computational technologies and data systems.
Since 2000, the DICE Team (http://dice.cyfronet.pl/) has been involved in research focused on building problem-solving environments and decision support systems for medicine on top of distributed computing infrastructures. This research is financed mainly by European Commission projects, and the main partners are the University of Sheffield, the University of Amsterdam, the Juelich Supercomputing Centre, LMU, and the Leibniz Supercomputing Centre in Munich.
As a result of this scientific collaboration, a new scientific unit, the Sano Centre for Computational Personalized Medicine – International Research Foundation, was established in Krakow in 2019 (https://sano.science/).
Six Sano research teams cover in-silico medicine areas such as modelling and simulation, data science, artificial intelligence and machine learning methods, image processing, IT methods in medicine, large-scale computing, and decision-making support systems. Researchers at Sano use the computing and storage resources of PL-Grid, including the Prometheus supercomputer at ACC Cyfronet AGH.
This research will result in tools that support doctors in diagnostic and treatment processes. Such tools are extremely valuable from the point of view of the individual patient and will reduce the costs of treatment. Modern computing technologies developed at Sano may also be used in pharmaceutical and biotechnology laboratories.
Acknowledgements. The Sano Centre is financed by the European Union’s Horizon 2020 Teaming programme (grant 857533, 15 M€), by grant MAB PLUS/2019/13 of the International Research Agendas Programme of the Foundation for Polish Science, co-funded by the European Union under the European Regional Development Fund (10 M€), and by the Polish Ministry of Education and Science (after 2023, 5 M€).

Empirical Bayesian Inference using Joint Sparsity
Anne Gelb
Dartmouth College, USA

Anne Gelb is the John G. Kemeny Parents Professor in the Department of Mathematics at Dartmouth College. She received her Ph.D. in Applied Mathematics from Brown University, where her advisor was Professor David Gottlieb, and was a postdoctoral fellow at the California Institute of Technology under the supervision of Professor Herbert Keller. She held a faculty position in the School of Mathematical and Statistical Sciences at Arizona State University until 2016, when she joined the Department of Mathematics at Dartmouth College.
Professor Gelb is a numerical analyst focusing on high order methods for signal and image restoration, classification, and change detection for real and complex signals from temporal sequences of collected data. There are a wide variety of applications for her work, including speech recognition, medical monitoring, credit card fraud detection, automated target recognition, and video surveillance. Her research is funded in part by the Air Force Office of Scientific Research, the Office of Naval Research, the National Science Foundation, and the National Institutes of Health.

ABSTRACT
We develop a new empirical Bayesian inference algorithm for solving a linear inverse problem given multiple measurement vectors (MMV) of under-sampled and noisy observable data. Specifically, by exploiting the joint sparsity across the multiple measurements in the sparse domain of the underlying signal or image, we construct a new support-informed, sparsity-promoting prior. While a variety of applications can be modeled using this framework, in this talk we discuss classification and target recognition from synthetic aperture radar (SAR) data acquired from neighboring aperture windows. Our numerical experiments demonstrate that using this new prior not only improves the accuracy of the recovery but also reduces the uncertainty in the posterior when compared to standard sparsity-promoting priors. We also discuss how our method can be used to combine and register different types of data acquisition.
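For orientation, a standard, textbook MMV joint-sparsity formulation is sketched below; the talk’s support-informed prior refines how the sparsity weights are chosen, so treat this only as background notation (the symbols A, X, Y, and λ are generic, not taken from the talk):

```latex
% Generic MMV joint-sparsity recovery (background sketch, not the
% talk's specific support-informed prior). The J measurements
% y^{(j)} = A x^{(j)} + n^{(j)} are stacked as columns of Y and X.
\begin{equation}
  \hat{X} \;=\; \arg\min_{X}\;
    \tfrac{1}{2}\,\lVert A X - Y \rVert_F^2
    \;+\; \lambda\,\lVert X \rVert_{2,1},
  \qquad
  \lVert X \rVert_{2,1} \;=\; \sum_{i} \Bigl( \sum_{j=1}^{J} \lvert X_{ij} \rvert^2 \Bigr)^{1/2}.
\end{equation}
```

The row-wise penalty drives entire rows of X to zero, so all measurements share one support; in the Bayesian view, this corresponds to a sparsity-promoting prior whose weights can then be informed by an estimate of that shared support.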
This is joint work with Theresa Scarnati, formerly of the Air Force Research Laboratory at Wright-Patterson and now at Qualis Corporation in Huntsville, AL, and Jack Zhang, a recent bachelor’s degree recipient from Dartmouth College now enrolled in the mathematics PhD program at the University of Minnesota.

What do Climate Scientists Use Computer Resources for – The Role of Volcanic Activity in Climate and Global Change
Georgiy Stenchikov
King Abdullah University of Science and Technology, Saudi Arabia

Dr Stenchikov completed his PhD on the numerical and analytical study of weak plasma turbulence at the Moscow Physical Technical Institute in 1977. Afterwards, he headed a department at the Russian Academy of Sciences, which used computational analysis to carry out crucial early research into the impact of humans on Earth’s climate and environmental systems. From 1992 until 1998, Dr Stenchikov worked at the University of Maryland in the USA, after which he held a position as a Research Professor in the Department of Environmental Sciences at Rutgers University for almost a decade. Since 2009, he has been a Professor and Chair of the Earth Sciences and Engineering Program at King Abdullah University of Science and Technology in Saudi Arabia. His work has brought about important advances in fields including climate modelling, atmospheric physics, fluid dynamics, radiation transfer, and environmental sciences. Dr Stenchikov co-authored the Nobel Prize-winning report of the Intergovernmental Panel on Climate Change, IPCC-AR4.

ABSTRACT
Explosive volcanic eruptions are magnificent events that affect the Earth’s natural processes and climate in many ways. They cause sporadic perturbations of the planet’s energy balance, activating complex climate feedbacks and providing unique opportunities to better quantify those processes. We know that explosive eruptions cool the atmosphere for a few years, but we have only recently realized that they affect the major climate variability modes and that volcanic signals can be seen in the subsurface ocean for decades. The volcanic forcing of the previous two centuries offsets the ocean heat uptake and diminishes global warming by about 30%. In the future, explosive volcanism could slightly delay the pace of global warming and has to be accounted for in long-term climate predictions. The recent interest in the dynamic, microphysical, chemical, and climate impacts of volcanic eruptions is also driven by the fact that these impacts provide a natural analogue for climate geoengineering schemes involving the deliberate development of an artificial aerosol layer in the lower stratosphere to counteract global warming. In this talk, I will discuss these recently discovered volcanic effects, paying particular attention to how we can learn about the hidden Earth-system mechanisms activated by explosive volcanic eruptions.
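To see why a short-lived aerosol pulse can leave a decades-long imprint in the subsurface ocean, a textbook two-box energy-balance model suffices. The sketch below is a toy with illustrative parameter values, not the coupled climate models used in this research:

```python
# Toy two-box energy-balance model (textbook sketch with illustrative
# parameters, not the speaker's coupled climate models). A shallow ocean
# mixed layer exchanges heat with a deep ocean and is forced by a
# short-lived negative radiative pulse mimicking volcanic aerosol.
import math

C_MIX, C_DEEP = 8.0, 100.0  # heat capacities (W yr m^-2 K^-1), illustrative
LAMBDA = 1.2                # climate feedback parameter (W m^-2 K^-1)
GAMMA = 0.7                 # mixed-layer/deep-ocean exchange (W m^-2 K^-1)
DT = 0.05                   # time step (years)
N_STEPS = 800               # 40 simulated years

def volcanic_forcing(t: float) -> float:
    """A -4 W m^-2 pulse at t = 0 decaying with a ~1-year aerosol lifetime."""
    return -4.0 * math.exp(-t)

t_mix = t_deep = 0.0
for step in range(N_STEPS):
    t = step * DT
    exchange = GAMMA * (t_mix - t_deep)   # heat flux into the deep ocean
    t_mix += DT / C_MIX * (volcanic_forcing(t) - LAMBDA * t_mix - exchange)
    t_deep += DT / C_DEEP * exchange
    if step % 200 == 0:                   # report once per simulated decade
        print(f"year {t:4.0f}: mixed layer {t_mix:+.3f} K, deep ocean {t_deep:+.3f} K")
```

With these illustrative values, the mixed layer recovers within a few years as the aerosol decays, while the deep-ocean anomaly builds slowly and then lingers for decades, the qualitative behaviour described above.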

Does In Silico Medicine need Big Science?
Marco Viceconti
University of Bologna, Italy

Marco Viceconti is a full professor of Computational Biomechanics in the Department of Industrial Engineering of the Alma Mater Studiorum – University of Bologna, and Director of the Medical Technology Lab of the Rizzoli Orthopaedic Institute. Prof Viceconti is an expert in neuromusculoskeletal biomechanics in general, and in particular in the use of subject-specific modelling to support medical decisions. He is one of the 25 members of the World Council of Biomechanics. Prof Viceconti is one of the key figures in the international in silico medicine community. According to SCOPUS, he has published 351 papers (H-index: 50).

ABSTRACT
The term Big Science is used to indicate the transformation of some research fields after the Second World War, characterised by the creation of very large research groups and infrastructures. Historically, bioengineering in general and computational biomedicine in particular have been characterised by small research groups working on very narrowly defined problems: the opposite of big science. But the last 12 years saw the emergence of three very large institutes mostly focused on In Silico Medicine: the Auckland Bioengineering Institute, the Insigneo Institute, and the Sano Centre. The research team established by Prof Peter Hunter became a Large Scale Research Institute of the University of Auckland (NZ) under the name Auckland Bioengineering Institute (ABI) in 2008. The Insigneo Institute at the University of Sheffield (UK) was established in 2012; the Sano Centre for Computational Medicine was established in Krakow (PL) in 2019. In this presentation, we analyse the motivations behind these endeavours, framing the analysis in the context of the barriers that are slowing the widespread adoption of In Silico Medicine methods in clinical and industrial practice.

Implementing Serious Virtual Reality Applications
Krzysztof Walczak
Poznan University of Economics and Business, Poland

Krzysztof Walczak is a full professor in computer science and the head of the Department of Information Technology and the VR Research Laboratory at the Poznań University of Economics and Business in Poland. His research interests cover virtual reality and mixed reality systems, multimedia communication, interactive television, and the semantic web. He has coordinated numerous research and industrial projects in these domains. He often serves as an expert for the European Commission, the National Science Centre, the National Centre of Applied Research, and the Polish Ministry of Education and Science. He has authored or co-authored two books and over 150 research articles published in books, journals, and proceedings of international scientific conferences. He also holds several EU and US patents. He is an elected member of the Executive Committee of the EuroXR Association.

ABSTRACT
Virtual reality technology enables a new class of computer applications, in which a user is fully immersed in a surrounding synthetic 3D virtual world that can represent either an existing or an imaginary place. Virtual worlds can be interactive and multimodal, providing users with near-reality experiences.
The recent popularization of virtual reality (VR) and related technologies (XR) has been enabled by significant progress in hardware performance, the availability of versatile input-output devices, and the development of advanced software platforms. XR applications have become widespread in entertainment, but they are used only to a minimal extent in other “serious” civil application domains, such as education, training, e-commerce, tourism, and cultural heritage.
Several problems restrain the use of XR in everyday applications. The most important is the inherent difficulty of designing and managing non-trivial interactive 3D multimedia content. Not only must the geometry and appearance of particular elements be properly represented, but the temporal, structural, logical, and behavioral composition of virtual scenes and their associated scenarios must also be taken into account. Moreover, such virtual environments should be created and managed by domain experts or end-users, without having to involve programmers or graphic designers each time. Other challenges include the diversity of XR platforms, the large amounts of data required by XR applications, and the difficulty of implementing accurate and efficient interaction in 3D space.
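To make the composition problem concrete, here is a minimal, purely hypothetical sketch of the kind of declarative scene description such environments call for (all names are invented; this is not the speaker’s platform or any particular XR standard). Geometry, appearance, and behaviour are kept separate so that a domain expert can assemble a scene without programming:

```python
# Hypothetical sketch of declarative XR scene composition (illustrative
# only; invented structure, not an existing platform or standard).
from dataclasses import dataclass, field

@dataclass
class Behavior:
    trigger: str        # e.g. "on_gaze", "on_grab", "at_time:00:10"
    action: str         # e.g. "play_animation:rotate", "show_label"

@dataclass
class SceneObject:
    name: str
    geometry: str       # reference to a mesh asset
    appearance: str     # reference to a material/texture
    behaviors: list[Behavior] = field(default_factory=list)

@dataclass
class Scenario:
    """Temporal/logical composition: ordered steps over named objects."""
    steps: list[tuple[str, str]]  # (object name, behavior trigger) pairs

# A curator could assemble an exhibit like this without writing 3D code.
museum_exhibit = [
    SceneObject("amphora", "assets/amphora.glb", "materials/clay",
                [Behavior("on_gaze", "show_label"),
                 Behavior("on_grab", "play_animation:rotate")]),
    SceneObject("pedestal", "assets/pedestal.glb", "materials/marble"),
]
tour = Scenario(steps=[("amphora", "on_gaze"), ("amphora", "on_grab")])
```

The design point is the separation of concerns: the rendering and interaction machinery stays hidden behind the declarative layer, which is what allows domain experts to manage content themselves.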
These problems and proposed solutions, together with examples of practical “serious” virtual reality applications, will be discussed in this presentation.

Material Transport Simulation in Complex Neurite Networks Using Isogeometric Analysis and Machine Learning Techniques
Jessica Zhang
Carnegie Mellon University, USA

Jessica Zhang is the George Tallman Ladd and Florence Barrett Ladd Professor of Mechanical Engineering at Carnegie Mellon University, with a courtesy appointment in Biomedical Engineering. She received her B.Eng. in Automotive Engineering and M.Eng. in Engineering Mechanics from Tsinghua University, China, and her M.Eng. in Aerospace Engineering and Engineering Mechanics and Ph.D. in Computational Engineering and Sciences from the Institute for Computational Engineering and Sciences (now the Oden Institute) at The University of Texas at Austin. Her research interests include image processing, computational geometry, the finite element method, isogeometric analysis, data-driven simulation, and their applications in computational biomedicine, materials science, and engineering. Zhang has co-authored over 190 publications in peer-reviewed journals and conference proceedings and has received several Best Paper Awards. She published the book “Geometric Modeling and Mesh Generation from Scanned Images” with CRC Press, Taylor & Francis Group, in 2016. Zhang is the recipient of a Simons Visiting Professorship from the Mathematisches Forschungsinstitut Oberwolfach in Germany, the US Presidential Early Career Award for Scientists and Engineers, an NSF CAREER Award, an Office of Naval Research Young Investigator Award, and the USACM Gallagher Young Investigator Award. At CMU, she received the David P. Casasent Outstanding Research Award, the George Tallman Ladd and Florence Barrett Ladd Professorship, the Clarence H. Adamson Career Faculty Fellowship in Mechanical Engineering, the Donald L. & Rhonda Struminger Faculty Fellowship, and the George Tallman Ladd Research Award. She is a Fellow of AIMBE, ASME, and USACM, and a Fellow of ELATE at Drexel.

ABSTRACT
Neurons exhibit remarkably complex geometry in their neurite networks. How materials are transported through this complex geometry to sustain the survival and function of neurons remains an unanswered question, and answering it is fundamental to understanding the physiology and disease of neurons. Here, we develop an isogeometric analysis (IGA) based platform for simulating material transport in neurite networks. We model the transport process with reaction-diffusion-transport equations and represent the geometry of the networks using truncated hierarchical tricubic B-splines (THB-spline3D). We solve the Navier-Stokes equations to obtain the velocity field of material transport in the networks, and then solve the transport equations using the streamline upwind/Petrov-Galerkin (SU/PG) method. Using our IGA solver, we simulate material transport in a number of representative and complex neurite networks, and from these simulations we discover several spatial patterns of the transport process. Together, our simulations provide key insights into how material transport in neurite networks is mediated by their complex geometry.
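In generic form (a background sketch; the equations used in the talk may differ in detail), the model couples incompressible flow with an advection-diffusion-reaction equation for the transported concentration, using the velocity from the Navier-Stokes solve:

```latex
% Generic transport model (background sketch; details may differ in the talk).
% u: velocity from the incompressible Navier-Stokes equations,
% c: material concentration, D: diffusivity, R: reaction term.
\begin{align}
  \rho \Bigl( \frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u} \cdot \nabla \mathbf{u} \Bigr)
    &= -\nabla p + \mu \nabla^{2} \mathbf{u},
  \qquad \nabla \cdot \mathbf{u} = 0, \\
  \frac{\partial c}{\partial t} + \mathbf{u} \cdot \nabla c
    &= \nabla \cdot \bigl( D \nabla c \bigr) + R(c).
\end{align}
```

When advection dominates, standard Galerkin discretisations oscillate; the SU/PG method mentioned above stabilises them by augmenting the test functions with a streamline-upwind term.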

To enable fast prediction of the transport process within complex neurite networks, we develop a Graph Neural Network (GNN) based model that learns the material transport mechanism from simulation data. In this study, we build a graph representation of the neuron by decomposing the neuron geometry into two basic structures: pipes and bifurcations. Separate GNN simulators are designed for these two basic structures to predict the spatiotemporal concentration distribution given the input simulation parameters and boundary conditions. In particular, we add the residual term from the PDEs to instruct the model to learn the physics behind the simulation data. To recover the full neurite network, a GNN-based assembly model combines all the pipes and bifurcations following the graph representation; its loss function is designed to impose consistent concentration results on the interfaces between pipes and bifurcations. Through machine learning, we can quickly and accurately predict material transport in a new complex neuron tree.
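A minimal sketch of the loss structure this paragraph describes (hypothetical array shapes and helper names; the actual GNN simulators are far more elaborate): a data-fit term against the IGA simulations, a PDE-residual term carrying the physics, and an interface term enforcing consistency where pipes meet bifurcations:

```python
# Minimal sketch of the physics-informed loss structure described above
# (hypothetical shapes and names; the actual GNN model is more elaborate).
import numpy as np

def data_loss(c_pred: np.ndarray, c_sim: np.ndarray) -> float:
    """Mean squared error against IGA simulation snapshots."""
    return float(np.mean((c_pred - c_sim) ** 2))

def pde_residual_loss(dc_dt, grad_c, lap_c, velocity, diffusivity) -> float:
    """Penalise violations of the advection-diffusion equation
    dc/dt + u . grad(c) - D * lap(c) = 0 at sampled nodes."""
    residual = dc_dt + np.sum(velocity * grad_c, axis=-1) - diffusivity * lap_c
    return float(np.mean(residual ** 2))

def interface_loss(c_pipe_end: np.ndarray, c_bif_start: np.ndarray) -> float:
    """Assembly term: concentrations must agree where a pipe meets a bifurcation."""
    return float(np.mean((c_pipe_end - c_bif_start) ** 2))

def total_loss(pred: dict, sim: dict, w_pde=0.1, w_iface=1.0) -> float:
    """Weighted sum of data-fit, physics-residual, and interface terms."""
    return (data_loss(pred["c"], sim["c"])
            + w_pde * pde_residual_loss(pred["dc_dt"], pred["grad_c"],
                                        pred["lap_c"], sim["u"], sim["D"])
            + w_iface * interface_loss(pred["c_pipe_end"], pred["c_bif_start"]))
```

The residual term is what lets the surrogate generalise beyond the training snapshots: even where data is sparse, predictions are pushed toward fields that satisfy the governing equation.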