Deep Learning in Cancer and Infectious Disease: Novel Driver Problems for Future HPC Architecture
Rick L. Stevens - Argonne National Laboratory, USA
Session chair: Jack Dongarra

Abstract: The adoption of machine learning is proving to be a remarkably successful strategy for improving predictive models in cancer and infectious disease. In this talk I will discuss two projects my group is working on to advance biomedical research through the use of machine learning and HPC. In cancer, machine learning, and deep learning in particular, is being used to advance our ability to diagnose and classify tumors; recently demonstrated automated systems routinely outperform human experts. Deep learning is also being used to predict patient response to cancer treatments and to screen for new anti-cancer compounds. In basic cancer research it is being used to supervise large-scale, multi-resolution molecular dynamics simulations that explore cancer gene signaling pathways. In public health it is being used to interpret millions of medical records to identify optimal treatment strategies. In infectious disease research, machine learning methods are being used to predict antibiotic resistance and to identify novel antibiotic resistance mechanisms that might be present. More generally, machine learning is emerging as a general tool to augment and extend mechanistic models in biology and many other fields, and it is becoming an important component of scientific workloads.

From a computational architecture standpoint, deep neural network (DNN) based scientific applications have some unique requirements. They require high compute density to support matrix-matrix and matrix-vector operations, but they rarely require 64-bit or even 32-bit precision, so architects are creating new instructions and new design points to accelerate training. Most current DNNs rely on dense fully connected and convolutional networks and are therefore reasonably well matched to current HPC accelerators, although future DNNs may rely less on dense communication patterns. Like simulation codes, power-efficient DNNs require high-bandwidth memory placed physically close to the arithmetic units to reduce the cost of data motion, as well as a high-bandwidth communication fabric between (perhaps modest-scale) groups of processors to support network model parallelism. DNNs in general do not exhibit good strong-scaling behavior, so to fully exploit large-scale parallelism they rely on a combination of model, data, and search parallelism. Deep learning problems also require large quantities of training data to be made available or generated at each node, which provides opportunities for NVRAM. Discovering optimal deep learning models often involves a large-scale search over hyperparameters; it is not uncommon to search a space of tens of thousands of model configurations. Naïve searches are outperformed by various intelligent search strategies, including new approaches that use generative neural networks to manage the search space. HPC architectures that can support these large-scale intelligent search methods, as well as efficient model training, are needed.
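
As an illustration of the reduced-precision point above, the following is a minimal sketch (not material from the talk) of mixed-precision training in PyTorch: the heavy matrix operations run in 16-bit floating point while gradient scaling keeps accumulation numerically stable in 32-bit. The toy network, random data, and hyperparameters are placeholders chosen only for illustration.

# Minimal sketch of reduced-precision (mixed-precision) DNN training in PyTorch.
# The small dense network and random data are placeholders for illustration only.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 2),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 256, device=device)        # placeholder training batch
y = torch.randint(0, 2, (64,), device=device)  # placeholder labels

for step in range(10):
    optimizer.zero_grad()
    # Matrix-matrix work runs in float16 where numerically safe;
    # accumulation and master weights stay in float32.
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()  # scale the loss to avoid float16 gradient underflow
    scaler.step(optimizer)
    scaler.update()

This is the same design point the abstract alludes to: the arithmetic units are kept busy with low-precision matrix products while only a small amount of bookkeeping remains in higher precision.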
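
The data-parallel component of the model/data/search parallelism mix can be sketched, in its simplest form, with PyTorch's DistributedDataParallel: each process holds a replica of the model, trains on its own shard of the data, and gradients are averaged across processes during the backward pass. The script below is an illustrative assumption rather than the talk's workflow, and would be launched with a utility such as torchrun (e.g. torchrun --nproc_per_node=4 script.py).

# Minimal sketch of data parallelism across processes.
# The toy model and random per-rank batches are placeholders for illustration only.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun supplies RANK / WORLD_SIZE / MASTER_ADDR; "gloo" works on CPU-only
    # nodes, while "nccl" would be the usual backend on GPU clusters.
    dist.init_process_group(backend="gloo")
    model = DDP(nn.Linear(256, 2))  # replicate the model on this process
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):
        # Each rank draws its own local batch, i.e. its shard of the training data.
        x = torch.randn(64, 256)
        y = torch.randint(0, 2, (64,))
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # DDP all-reduces (averages) gradients across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

The gradient all-reduce in the backward pass is exactly the kind of collective communication for which the abstract argues a high-bandwidth fabric between groups of processors is needed.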
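
The hyperparameter-search workload can be sketched as follows. This is a deliberately naïve random search, standing in for the intelligent, generative-model-driven strategies mentioned in the abstract; the search space and the train_and_evaluate stub are assumptions introduced only for illustration, and each evaluation stands in for an independent training job on an HPC system.

# Minimal sketch of search parallelism over hyperparameter configurations:
# each sampled model configuration is trained and scored independently,
# so the work maps naturally onto many nodes.
import random
from concurrent.futures import ProcessPoolExecutor

SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units":  [128, 256, 512, 1024],
    "num_layers":    [2, 4, 8],
    "dropout":       [0.0, 0.1, 0.3],
}

def sample_config(rng):
    """Draw one model configuration at random from the search space."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def train_and_evaluate(config):
    """Placeholder: train a model with `config` and return a validation score.
    In a real workflow this would launch a (possibly multi-node) training job."""
    rng = random.Random(str(sorted(config.items())))
    return rng.random()  # stand-in for validation accuracy

def random_search(num_configs=32, seed=0):
    rng = random.Random(seed)
    configs = [sample_config(rng) for _ in range(num_configs)]
    # Evaluate configurations in parallel; a smarter strategy would use the
    # scores seen so far to decide which configurations to try next.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(train_and_evaluate, configs))
    return max(zip(scores, configs), key=lambda pair: pair[0])

if __name__ == "__main__":
    score, config = random_search()
    print(f"best score {score:.3f} with config {config}")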

https://www.anl.gov/contributors/rick-stevens