International Workshop on Advances in High-Performance Computational Earth Sciences (IHPCES) Session 1

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Bluewater I

Chair: Kengo Nakajima

408 Application-specific I/O Optimizations on Petascale Supercomputers [abstract]
Abstract: Data-intensive science frontiers and challenges are emerging as computer technology evolves substantially. Large-scale simulations impose a heavy I/O workload, and I/O performance consequently often becomes the bottleneck limiting scientific applications. In this paper we introduce a variety of I/O optimization techniques developed and implemented while scaling a seismic application to petascale. These techniques include file system striping, data aggregation, reader/writer limiting and reduced interleaving of data, collective MPI-IO, and data staging. The optimizations result in nearly perfect scalability of the target application on some of the most advanced petascale systems, and the techniques introduced here are applicable to other scientific applications facing similar petascale I/O challenges.
Efecan Poyraz, Heming Xu, Yifeng Cui
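The data-aggregation technique named in the abstract — coalescing many small writes into a few large ones before they hit the file system — can be illustrated with a toy sketch. This is not the paper's MPI-IO code; the class name and threshold are invented for illustration only.

```python
import io

class AggregatingWriter:
    """Buffer many small writes and flush them as one large write,
    reducing the number of I/O calls issued to the file system."""

    def __init__(self, stream, threshold=1 << 20):  # flush at 1 MiB by default
        self.stream = stream
        self.threshold = threshold
        self.buffer = bytearray()
        self.flushes = 0  # count of large writes actually issued

    def write(self, data: bytes):
        self.buffer += data
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.stream.write(bytes(self.buffer))
            self.buffer.clear()
            self.flushes += 1

sink = io.BytesIO()
w = AggregatingWriter(sink, threshold=4096)
for _ in range(1000):
    w.write(b"x" * 16)   # 1000 small 16-byte records
w.flush()
print(sink.getbuffer().nbytes, w.flushes)  # 16000 bytes issued in only 4 flushes
```

In the real setting the same idea operates across MPI ranks: designated aggregator processes collect small per-rank buffers and issue few large, contiguous writes.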
264 A physics-based Monte Carlo earthquake disaster simulation accounting for uncertainty in building structure parameters [abstract]
Abstract: Physics-based earthquake disaster simulations are expected to contribute to high-precision earthquake disaster prediction; however, such models are computationally expensive and the results typically contain significant uncertainties. Here we describe Monte Carlo simulations where 10,000 calculations were carried out with stochastically varied building structure parameters to model 3,038 buildings. We obtain the spatial distribution of the damage caused for each set of parameters, and analyze these data statistically to predict the extent of damage to buildings.
Shunsuke Homma, Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori, Seckin Citak, Takane Hori
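The Monte Carlo procedure described above — repeating the damage calculation with stochastically varied building parameters and analyzing the resulting distribution — can be sketched in miniature. The fragility model, parameter distribution, and counts below are invented stand-ins, not the paper's physics-based simulation.

```python
import random
import statistics

def damage_ratio(stiffness, ground_accel=0.8):
    """Toy fragility model (illustrative only): damage grows as the
    demand/capacity ratio exceeds a threshold, clipped to [0, 1]."""
    demand_over_capacity = ground_accel / stiffness
    return max(0.0, min(1.0, demand_over_capacity - 0.5))

def monte_carlo(n_trials, n_buildings, seed=0):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        # Stochastically vary each building's stiffness around a mean of 1.0
        damaged = sum(
            damage_ratio(rng.gauss(1.0, 0.2)) for _ in range(n_buildings)
        )
        totals.append(damaged / n_buildings)  # mean damage ratio this trial
    return statistics.mean(totals), statistics.stdev(totals)

mean, spread = monte_carlo(n_trials=200, n_buildings=100)
print(f"mean damage ratio {mean:.3f}, trial-to-trial spread {spread:.3f}")
```

The spread across trials is what quantifies the uncertainty inherited from the building structure parameters.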
391 A quick earthquake disaster estimation system with fast urban earthquake simulation and interactive visualization [abstract]
Abstract: In the immediate aftermath of an earthquake, quick estimation of damage to city structures can facilitate prompt, effective post-disaster measures. Physics-based urban earthquake simulations, using measured ground motions as input, are a possible means of obtaining reasonable estimates. The difficulty of such estimation lies in carrying out the simulation and arriving at a thorough understanding of large-scale time series results in a limited amount of time. We developed an estimation system based on fast urban earthquake disaster simulation, together with an interactive visualization method suitable for GPU workstations. Using this system, an urban area with more than 100,000 structures can be analyzed within an hour and visualized interactively.
Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori, M. L. L. Wijerathne, Seizo Tanaka
397 Several hundred finite element analyses of an inversion of earthquake fault slip distribution using a high-fidelity model of the crustal structure [abstract]
Abstract: To improve the accuracy of inversion analysis of earthquake fault slip distribution, we performed several hundred analyses using a 10^8-degree-of-freedom finite element (FE) model of the crustal structure. We developed a meshing method and an efficient computational method for these large FE models. We applied the model to the inversion analysis of coseismic fault slip distribution for the 2011 Tohoku-oki Earthquake. The high resolution of our model provided a significant improvement of the fidelity of the simulation results compared to existing computational approaches.
Ryoichiro Agata, Tsuyoshi Ichimura, Kazuro Hirahara, Mamoru Hyodo, Takane Hori, Muneo Hori
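The inversion step above follows the standard linear formulation: each FE analysis supplies one column of a Green's function matrix relating unit slip on a fault patch to surface displacements, and slip is recovered by regularized least squares. A toy sketch with a synthetic matrix (all sizes and values invented):

```python
import numpy as np

# Illustrative linear inversion: surface displacements d relate to fault
# slip m through a Green's function matrix G (one column per FE analysis
# with unit slip on one patch); slip is recovered by Tikhonov-regularized
# least squares. G, m_true, and the noise level are synthetic.
rng = np.random.default_rng(0)
n_obs, n_patches = 50, 10
G = rng.normal(size=(n_obs, n_patches))               # Green's functions
m_true = np.linspace(0.0, 2.0, n_patches)             # "true" patch slips
d = G @ m_true + rng.normal(scale=0.01, size=n_obs)   # noisy observations

# Minimize ||G m - d||^2 + alpha^2 ||m||^2 via an augmented system
alpha = 0.01
A = np.vstack([G, alpha * np.eye(n_patches)])
b = np.concatenate([d, np.zeros(n_patches)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(m_est, m_true, atol=0.05))  # True: slip recovered
```

The paper's contribution lies in computing the columns of G from a 10^8-DOF crustal model rather than in the least-squares step itself.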

International Workshop on Advances in High-Performance Computational Earth Sciences (IHPCES) Session 2

Time and Date: 14:10 - 15:50 on 12th June 2014

Room: Bluewater I

Chair: Huilin Xing

334 An out-of-core GPU approach for Accelerating Geostatistical Interpolation [abstract]
Abstract: Geostatistical methods provide a powerful tool for understanding the complexity of data arising from the Earth sciences. Since the mid-1970s, this numerical approach has been widely used to understand the spatial variation of natural phenomena in domains such as the oil and gas, mining, and environmental industries. Considering the huge amount of data now available, standard implementations of these numerical methods are not efficient enough to tackle current challenges in the geosciences. Moreover, most software packages available to geostatisticians are designed for use on a desktop computer, owing to the trial-and-error procedure used during interpolation. The Geological Data Management (GDM) software package developed by the French geological survey (BRGM) is widely used to build reliable three-dimensional geological models that require a large amount of memory and computing resources. Targeting the most time-consuming phase of the kriging methodology, we introduce an efficient out-of-core algorithm that fully benefits from graphics card acceleration on a desktop computer. In this way we are able to accelerate kriging on the GPU with datasets four times larger than a classical in-core GPU algorithm can handle, with a limited loss of performance.
Victor Allombert, David Michea, Fabrice Dupros, Christian Bellier, Bernard Bourgine, Hideo Aochi, Sylvain Jubertie
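The out-of-core pattern behind the abstract — the full matrix never resides in (GPU) memory, tiles are streamed in, used, and discarded — can be sketched with a blocked matrix-vector product. The covariance-like kernel and sizes below are invented; this is the memory-management pattern, not the paper's kriging solver.

```python
import numpy as np

def blocked_matvec(get_block, n_rows, n_cols, x, block=256):
    """Out-of-core style matrix-vector product: the full matrix never
    exists in memory; tiles are produced on demand by get_block and
    discarded after use, mimicking host<->GPU streaming of tiles."""
    y = np.zeros(n_rows)
    for i in range(0, n_rows, block):
        for j in range(0, n_cols, block):
            tile = get_block(i, j, block)              # "load" one tile
            y[i:i + block] += tile @ x[j:j + tile.shape[1]]
    return y

# A synthetic "large" matrix defined by a formula instead of storage.
n = 1000
def get_block(i, j, b):
    rows = np.arange(i, min(i + b, n))[:, None]
    cols = np.arange(j, min(j + b, n))[None, :]
    return 1.0 / (1.0 + np.abs(rows - cols))           # covariance-like decay

x = np.ones(n)
y = blocked_matvec(get_block, n, n, x)
# Same result as forming the full matrix at once:
full = get_block(0, 0, n)
print(np.allclose(y, full @ x))  # True
```

On a GPU, `get_block` would become a host-to-device transfer overlapped with computation, which is where the "limited loss of performance" relative to in-core processing comes from.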
401 Mesh generation for 3D geological reservoirs with arbitrary stratigraphic surface constraints [abstract]
Abstract: With advances in imaging, drilling, and field observation technology, the geological structure of reservoirs can be described in greater detail. A novel 3D mesh generation method for reservoir models with arbitrary stratigraphic surface constraints is proposed and implemented, ensuring that the detailed structural geometries and material properties of reservoirs are better described and analysed. The stratigraphic interfaces are first extracted and meshed, and a tetrahedral mesh is then generated in 3D constrained by these meshed surfaces. The proposed approach comprises five steps: (1) extracting stratum interfaces; (2) creating a background mesh with a size field on the interfaces; (3) constructing geodesic isolines from the interface boundaries to the interior; (4) employing a geodesic-based approach to create surface triangles in the areas between adjacent isolines and merging them together; (5) generating the tetrahedral mesh for the 3D reservoir constrained by the generated surface triangular mesh. The approach has been implemented and applied to the Lawn Hill reservoir as a practical example to demonstrate its effectiveness and usefulness.
Huilin Xing, Yan Liu
403 Performance evaluation and case study of a coupling software ppOpen-MATH/MP [abstract]
Abstract: We are developing the coupling software ppOpen-MATH/MP, which is characterized by its wide applicability. This flexibility comes from the design decision that grid point correspondence and interpolation coefficients are calculated in advance. However, calculating these values for unstructured-grid models generally requires substantial computation time, so we developed an efficient new algorithm and program that computes the grid point correspondence as a pre-processor to ppOpen-MATH/MP. The first half of this article presents the algorithm and a performance evaluation of the program; the second half describes an application of ppOpen-MATH/MP to coupling the atmospheric model NICAM with the ocean model COCO.
Takashi Arakawa, Takahiro Inoue, Masaki Sato
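The design decision described above — pay once for the expensive grid-correspondence search, then reuse the resulting weights every coupling step — can be sketched with a 1D toy. Linear interpolation on sorted grids stands in for the real unstructured-grid search; all grid sizes and names are invented.

```python
import numpy as np

def precompute_weights(src_x, dst_x):
    """Expensive setup phase: for each destination point, find the
    bracketing source interval and the linear interpolation weights."""
    idx = np.searchsorted(src_x, dst_x).clip(1, len(src_x) - 1)
    x0, x1 = src_x[idx - 1], src_x[idx]
    w1 = (dst_x - x0) / (x1 - x0)
    return idx, 1.0 - w1, w1

def apply_weights(field, idx, w0, w1):
    """Cheap per-step remapping using the precomputed weights."""
    return w0 * field[idx - 1] + w1 * field[idx]

src_x = np.linspace(0.0, 1.0, 11)    # "atmosphere" grid
dst_x = np.linspace(0.05, 0.95, 7)   # "ocean" grid
idx, w0, w1 = precompute_weights(src_x, dst_x)   # done once, up front

field = src_x ** 2                    # some field on the source grid
remapped = apply_weights(field, idx, w0, w1)     # done every time step
print(np.allclose(remapped, dst_x ** 2, atol=0.01))  # True
```

For unstructured grids the setup phase becomes a genuine nearest-cell search, which is exactly the part the pre-processor described in the abstract accelerates.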
402 Implementation and Evaluation of an AMR Framework for FDM Applications [abstract]
Abstract: To execute a variety of finite-difference method applications on large-scale parallel computers at a reasonable cost in computing resources, we have developed a framework based on an adaptive mesh refinement (AMR) technique. AMR enables high-resolution simulations while saving computer resources by dynamically generating and removing hierarchical grids. The framework also implements a dynamic domain decomposition (DDD) technique as a dynamic load balancing method, correcting the computational load imbalance among processes that arises from parallelization. A 3D AMR test simulation confirms that the DDD technique achieves dynamic load balancing and reduces execution time.
Masaharu Matsumoto, Futoshi Mori, Satoshi Ohshima, Hideyuki Jitsumoto, Takahiro Katagiri, Kengo Nakajima
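The load-balancing problem the DDD technique addresses can be illustrated with a toy: after refinement, grid blocks have unequal costs, and the runtime is bounded by the most loaded process, so blocks are redistributed. The greedy longest-processing-time heuristic below is a stand-in, not the paper's DDD algorithm; all block names and costs are invented.

```python
import heapq

def balance(block_costs, n_procs):
    """Assign blocks to processes so the maximum per-process load
    (which bounds parallel execution time) stays small: repeatedly
    give the most expensive remaining block to the least-loaded process."""
    heap = [(0.0, p) for p in range(n_procs)]  # (current load, proc id)
    heapq.heapify(heap)
    assignment = {}
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)          # least-loaded process
        assignment[block] = p
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Toy costs: 4 refined blocks cost 4x as much as 8 coarse blocks.
costs = {f"block{i}": (4.0 if i < 4 else 1.0) for i in range(12)}
assignment = balance(costs, n_procs=4)
loads = [0.0] * 4
for block, p in assignment.items():
    loads[p] += costs[block]
print(max(loads))  # 6.0: perfectly balanced (total cost 24 over 4 procs)
```

A naive static decomposition that ignores refinement (three blocks per process, say) could leave one process with load 12 while another has 3; rebalancing after each refinement step is what keeps execution time down.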