Fourth International Workshop on Advances in High-Performance Computational Earth Sciences: Applications and Frameworks (IHPCES) Session 1

Time and Date: 14:10 - 15:50 on 2nd June 2015

Room: M104

Chair: Henry Tufo

532 2D Adaptivity for 3D Problems: Parallel SPE10 Reservoir Simulation on Dynamically Adaptive Prism Grids
Abstract: We present an approach for parallel simulation of 2.5D applications on fully dynamically adaptive 2D triangle grids based on space-filling curve traversal. Subsurface, oceanic or atmospheric flow problems in geosciences often have small vertical extent or anisotropic input data; interesting solution features, such as shock waves, emerge mostly in horizontal directions and require little vertical resolution. sam(oa)² is a 2D code with fully dynamically adaptive refinement, targeted especially at low-order discretizations due to its cache-oblivious and memory-efficient design. We added support for 2.5D grids by implementing vertical columns of degrees of freedom, allowing full horizontal refinement and load balancing but restricted control over the vertical layers. Results are shown for the SPE10 benchmark, a particularly hard two-phase flow problem in reservoir simulation with small vertical extent, which investigates oil exploration by water injection in heterogeneous porous media. Performance of sam(oa)² is memory-bound for this scenario, reaching up to 70% of the STREAM benchmark throughput, with good parallel efficiency of 85% for strong scaling on 512 cores and 91% for weak scaling on 8192 cores.
Oliver Meister and Michael Bader
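
Editor's note: as a rough illustration of the data-structure idea in this abstract, the sketch below attaches a fixed-length vertical column of unknowns to each cell of a 2D adaptive triangle grid. All type names and fields are hypothetical; this is not sam(oa)²'s actual interface.

    // Hypothetical sketch of a prism-column layout over a 2D adaptive grid.
    // Each triangle cell of the adaptive 2D grid owns a fixed-length vertical
    // column of unknowns, so refinement and load balancing stay purely 2D
    // while the vertical direction is a plain array per cell.
    #include <array>
    #include <vector>

    constexpr int kNumLayers = 85;   // e.g. the SPE10 model has 85 vertical layers

    struct ColumnDofs {              // unknowns of one vertical prism column
        std::array<double, kNumLayers> pressure;
        std::array<double, kNumLayers> saturation;
    };

    struct TriangleCell {            // leaf cell of the 2D triangle grid
        int refinementLevel;         // controlled by 2D adaptivity only
        ColumnDofs dofs;             // vertical layers are never refined
    };

    // Cells are stored and traversed in space-filling-curve order; refining,
    // coarsening or migrating a cell moves whole columns at once.
    using PrismGrid = std::vector<TriangleCell>;
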
446 A Pipelining Implementation for High Resolution Seismic Hazard Maps Production
Abstract: Seismic hazard maps are a significant input into emergency hazard management and play an important role in saving human lives and reducing economic losses after earthquakes. Although a number of software tools have been developed (McGuire, 1976, 1978; Bender and Perkins, 1982, 1987; Ordaz et al., 2013; Robinson et al., 2005, 2006; Field et al., 2003), map resolution is generally low, potentially leading to uncertainty in calculations of ground motion level and to underestimation of the seismic hazard in a region. In generating higher-resolution maps, the biggest challenge is handling the significantly increased data processing workload. In this study, a method for improving seismic hazard map resolution is presented that employs a pipelining implementation of the existing EqHaz program suite (Assatourians and Atkinson, 2013) based on IBM InfoSphere Streams, an advanced stream computing platform whose architecture is specifically designed for continuous analysis of massive volumes of data at high speed and low latency. The processing workload is treated as data streams, and processing procedures are implemented as operators that are connected to form processing pipelines. To handle the large workload, these pipelines are flexible and scalable and can be deployed and run in parallel on large-scale HPC clusters to meet application performance requirements. As a result, mean hazard calculations are possible for maps with a resolution of up to 2,500,000 points and near-real-time processing times of approximately 5-6 minutes.
Yelena Kropivnitskaya, Jinhui Qin, Kristy F. Tiampo, Michael A. Bauer
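
Editor's note: the pipelining idea described above, operators chained into a pipeline that is replicated to scale over the grid points of the map, can be sketched independently of IBM InfoSphere Streams. The following minimal plain-C++ illustration uses hypothetical names throughout; it is not the SPL code or the EqHaz interface used in the paper.

    // Hypothetical sketch of an operator pipeline replicated across worker
    // threads. Each map grid point ("tuple") flows through the operators in
    // order; replicas split the stream of points among themselves.
    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Tuple {                   // one hazard-map grid point
        double lat = 0.0, lon = 0.0;
        double meanHazard = 0.0;
    };

    using Operator = std::function<void(Tuple&)>;

    // One pipeline replica processes a contiguous slice of the grid points.
    void runPipeline(const std::vector<Operator>& ops, std::vector<Tuple>& pts,
                     std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i)
            for (const auto& op : ops)
                op(pts[i]);
    }

    int main() {
        const std::vector<Operator> ops = {
            [](Tuple&) { /* gather source/seismicity input for this point */ },
            [](Tuple&) { /* mean hazard computation (placeholder) */ },
            [](Tuple&) { /* emit the result to the output map (placeholder) */ },
        };

        std::vector<Tuple> grid(2500000);    // up to 2,500,000 map points
        const unsigned replicas =
            std::max(1u, std::thread::hardware_concurrency());
        const std::size_t chunk = (grid.size() + replicas - 1) / replicas;

        std::vector<std::thread> workers;
        for (unsigned r = 0; r < replicas; ++r) {
            const std::size_t b = std::min(grid.size(), r * chunk);
            const std::size_t e = std::min(grid.size(), b + chunk);
            workers.emplace_back(runPipeline, std::cref(ops), std::ref(grid), b, e);
        }
        for (auto& w : workers) w.join();
    }
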
519 Scalable multicase urban earthquake simulation method for stochastic earthquake disaster estimation
Abstract: High-resolution urban earthquake simulations are expected to be useful for improving the reliability of estimates of the damage due to future earthquakes. However, current high-resolution simulation models involve uncertainties in their inputs. An alternative is to apply stochastic analyses using multicase simulations with varying inputs. In this study, we develop a method for simulating the responses of the ground and buildings to many earthquakes. By suitably mapping computations to compute cores, the developed program attains 97.4% size-up scalability on 320,000 processes (40,000 nodes) of the K computer. This enables the computation of more than 1,000 earthquake scenarios for 0.25 million structures in central Tokyo.
Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori, Lalith Maddegedara, Seizo Tanaka
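
Editor's note: as a hedged illustration of the kind of task-to-core mapping the abstract refers to, the sketch below distributes independent (scenario, structure) analyses cyclically over MPI ranks. The function name is a placeholder and the counts are taken from the abstract; the actual partitioning used on the K computer is more sophisticated.

    // Hypothetical sketch of a cyclic mapping of (scenario, structure)
    // analyses onto MPI ranks; not the partitioning used in the paper.
    #include <mpi.h>

    // Placeholder for one ground-and-structure response analysis.
    void simulateResponse(long scenario, long structure) { /* ... */ }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long numScenarios  = 1000;     // > 1,000 earthquake scenarios
        const long numStructures = 250000;   // ~0.25 million structures

        // The tasks are independent and nearly uniform in cost, so a simple
        // cyclic distribution keeps the ranks evenly loaded.
        const long numTasks = numScenarios * numStructures;
        for (long task = rank; task < numTasks; task += size)
            simulateResponse(task / numStructures, task % numStructures);

        MPI_Finalize();
        return 0;
    }
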
704 Multi-GPU implementations of parallel 3D sweeping algorithms with application to geological folding
Abstract: This paper studies some of the CUDA programming challenges connected with using multiple GPUs to carry out plane-by-plane updates in parallel 3D sweeping algorithms. In particular, attention must be paid to masking the overhead of the various data movements between the GPUs. Multiple OpenMP threads on the CPU side should be combined with multiple CUDA streams per GPU to hide the data transfer cost related to the halo computation on each 2D plane. Moreover, peer-to-peer memory access can be used to reduce the impact of the 3D volumetric data shuffles that have to be done between mandatory changes of the grid partitioning. We have investigated the performance improvement of 2- and 4-GPU implementations applicable to 3D anisotropic front propagation computations related to geological folding. In comparison with a straightforward multi-GPU implementation, the overall performance improvement due to masking of data movements on four GPUs of the Fermi architecture was 23%; the corresponding improvement obtained on four Kepler GPUs was 50%.
Ezhilmathi Krishnasamy, Mohammed Sourouri, Xing Cai
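
Editor's note: the overlap pattern described in this abstract, one OpenMP thread per GPU, separate CUDA streams so peer-to-peer halo copies can hide behind the plane updates, might look roughly like the sketch below. The kernel, grid sizes and buffer layout are hypothetical; this is not the authors' code.

    // Hypothetical sketch: one OpenMP thread drives each GPU; a halo stream
    // carries peer-to-peer copies that overlap with plane updates issued on
    // the compute stream, with an event enforcing the data dependence.
    #include <cuda_runtime.h>
    #include <omp.h>

    __global__ void sweepPlane(double* u, int nx, int ny) {
        const int i = blockIdx.x * blockDim.x + threadIdx.x;
        const int j = blockIdx.y * blockDim.y + threadIdx.y;
        if (i < nx && j < ny) u[j * nx + i] += 1.0;   // placeholder update
    }

    int main() {
        int nGpus = 0;
        cudaGetDeviceCount(&nGpus);
        if (nGpus < 1) return 1;

        const int nx = 512, ny = 512, nz = 512;       // hypothetical grid size
        const int planesPerGpu = nz / nGpus;
        const size_t planeBytes = size_t(nx) * ny * sizeof(double);
        double* slab[16] = {nullptr};                 // one slab per GPU (<= 16)

        #pragma omp parallel num_threads(nGpus)
        {
            const int g = omp_get_thread_num();
            const int peer = (g + 1) % nGpus;
            cudaSetDevice(g);

            // Enable direct peer-to-peer access to the neighbouring GPU.
            int p2p = 0;
            if (nGpus > 1) cudaDeviceCanAccessPeer(&p2p, g, peer);
            if (p2p) cudaDeviceEnablePeerAccess(peer, 0);

            // planesPerGpu owned planes plus one ghost plane at the end.
            cudaMalloc(&slab[g], planeBytes * (planesPerGpu + 1));
            #pragma omp barrier                       // all slabs allocated

            cudaStream_t compute, halo;
            cudaStreamCreate(&compute);
            cudaStreamCreate(&halo);
            cudaEvent_t prevDone;
            cudaEventCreate(&prevDone);

            const dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
            for (int plane = 0; plane < planesPerGpu; ++plane) {
                double* u = slab[g] + size_t(plane) * nx * ny;
                // Copy the previous plane into the peer's ghost plane on the
                // halo stream, once its update (recorded below) has finished.
                if (p2p && plane > 0) {
                    cudaStreamWaitEvent(halo, prevDone, 0);
                    cudaMemcpyPeerAsync(slab[peer] + size_t(planesPerGpu) * nx * ny,
                                        peer, u - size_t(nx) * ny, g,
                                        planeBytes, halo);
                }
                // Meanwhile, update the current plane on the compute stream.
                sweepPlane<<<grid, block, 0, compute>>>(u, nx, ny);
                cudaEventRecord(prevDone, compute);
            }
            cudaStreamSynchronize(compute);
            cudaStreamSynchronize(halo);
            #pragma omp barrier                       // peers' copies drained
            cudaEventDestroy(prevDone);
            cudaStreamDestroy(compute);
            cudaStreamDestroy(halo);
            cudaFree(slab[g]);
        }
        return 0;
    }
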