Session 6: 11:00 - 12:40 on 12th June 2014

Main Track (MT) Session 6

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Kuranda

Chair: Andrew Lewis

199 Mechanism of Traffic Jams at Speed Bottlenecks [abstract]
Abstract: In the past 20 years of complexity science, traffic has been studied as a complex system with a large number of interacting agents. Since traffic has become an important aspect of our lives, understanding the traffic system and how it interacts with various factors is essential. In this paper, the interactions between traffic flow and road topology are studied, particularly the relationship between a sharp bend in a road segment and traffic jams. As suggested by Sugiyama [1], when the car density exceeds a critical density, the fluctuation in the speed of each car triggers a greater fluctuation in the speed of the car behind. This enhancement of fluctuations leads to the congestion of vehicles. Using a cellular automaton model modified from the Nagel-Schreckenberg CA model [2], the simulation results suggest that the mechanism of traffic jams at bottlenecks is similar: instead of directly causing congestion, a bottleneck only causes the local density of traffic to increase, and the resultant congestion is still due to the enhancement of fluctuations. The results of this study open up a large number of possible analytical studies that could serve as grounds for future work.
Wei Liang Quek, Lock Yue Chew
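Since the model builds on the Nagel-Schreckenberg CA, a minimal sketch of the standard update rules may help readers place the modification; this is the textbook NaSch automaton, not the authors' modified model, and all parameter values are illustrative.

```python
import numpy as np

def nasch_step(pos, vel, v_max, p_slow, road_len, rng):
    """One parallel update of the standard Nagel-Schreckenberg CA on a
    circular road. pos holds unwrapped positions in car order, so the car
    ahead of car i is car i+1 (car 0 is one lap ahead of the last car)."""
    n = len(pos)
    gap = np.empty(n, dtype=int)
    gap[:-1] = pos[1:] - pos[:-1] - 1            # free cells to the car ahead
    gap[-1] = pos[0] + road_len - pos[-1] - 1    # wrap around the ring
    vel[:] = np.minimum(vel + 1, v_max)          # 1. accelerate
    vel[:] = np.minimum(vel, gap)                # 2. brake to avoid collision
    slow = (rng.random(n) < p_slow) & (vel > 0)
    vel[slow] -= 1                               # 3. random slowdown (the speed
    pos += vel                                   #    fluctuation), then 4. move

rng = np.random.default_rng(0)
road_len, n_cars = 100, 30                       # density 0.3
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(200):
    nasch_step(pos, vel, v_max=5, p_slow=0.3, road_len=road_len, rng=rng)
```

Rule 3 injects the speed fluctuations whose amplification, per the abstract, is the actual cause of congestion; a bottleneck merely raises the local density past the point where rule 2 can absorb them.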
234 Computing, a powerful tool in flood prediction [abstract]
Abstract: Floods have caused widespread damage throughout the world. Modelling and simulation provide solutions and tools enabling us to face this reality, to forecast flooding, and to take the necessary preventive measures. One problem that must be handled by simulators of physical systems is uncertainty in the input parameters and its impact on the output results, which causes prediction errors. In this paper, we address input-parameter uncertainty by providing a methodology to tune a flood simulator and achieve a lower error between simulated and observed results. The tuning methodology, through a parametric simulation technique, implements a first stage that finds an adjusted set of critical parameters, which is then used in a second stage to validate the predictive capability of the simulator and reduce the disagreement between observed data and simulated results. We concentrate our experiments on three significant monitoring stations, and the improvement over the original simulator output ranges from 33% to 60%.
Adriana Gaudiani, Emilo Luque, Pablo Garcia, Mariano Re, Marcelo Naiouf, Armando De Giusti
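As a schematic of the first (parameter-adjustment) stage, the sweep amounts to minimizing an error measure over a grid of candidate parameter sets; `simulate` and `observed` below are placeholders for the flood simulator and the monitoring-station record, not names from the paper.

```python
import itertools
import numpy as np

def rmse(sim, obs):
    """Error between simulated and observed water levels at a station."""
    return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

def tune(simulate, observed, grid):
    """Stage 1: parametric sweep; keep the parameter set with lowest error.
    Stage 2 (not shown) would validate this set on independent events."""
    return min(grid, key=lambda p: rmse(simulate(p), observed))

# Example grid over two uncertain parameters (e.g. channel roughness values)
grid = list(itertools.product(np.linspace(0.02, 0.06, 5),
                              np.linspace(0.02, 0.06, 5)))
```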
117 Benchmarking and Data Envelopment Analysis. An Approach Based on Metaheuristics [abstract]
Abstract: Data Envelopment Analysis (DEA) is a non-parametric technique to estimate the current level of efficiency of a set of entities. DEA also provides information on how to remove inefficiency through the determination of benchmarking information. This paper is devoted to the study of DEA models based on closest efficient targets, which are related to the shortest projection to the production frontier and allow inefficient firms to find the easiest way to improve their performance. These models have usually been solved by unsatisfactory methods, since all of them are related in some sense to a combinatorial NP-hard problem. In this paper, the problem is approached by metaheuristic techniques. Due to the large number of restrictions in the problem, finding solutions to be used in the metaheuristic algorithm is itself difficult. This paper therefore analyzes and compares several heuristic algorithms for obtaining such solutions. Each restriction determines the design of these heuristics, so the problem is considered by adding constraints one by one. Here, the problem is presented and studied taking into account 9 of the 14 constraints, and the solution to this reduced problem is an upper bound of the optimal value of the original problem.
Jose J. Lopez-Espin, Juan Aparicio, Domingo Gimenez, Jesús T. Pastor
249 Consensus reaching in swarms ruled by a hybrid metric-topological distance [abstract]
Abstract: Recent empirical observations of three-dimensional bird flocks and human crowds have challenged the long-prevailing assumption that a metric interaction distance rules swarming behaviors. In some cases, individual agents are found to be engaged in local information exchanges with a fixed number of neighbors, i.e. a topological interaction. However, complex system dynamics based on pure metric or pure topological distances both face physical inconsistencies in low and high density situations. Here, we propose a hybrid metric-topological interaction distance overcoming these issues and enabling a real-life implementation in artificial robotic swarms. We use network- and graph-theoretic approaches combined with a dynamical model of locally interacting self-propelled particles to study the consensus reaching process for a swarm ruled by this hybrid interaction distance. Specifically, we establish exactly the probability of reaching consensus in the absence of noise. In addition, simulations of swarms of self-propelled particles are carried out to assess the influence of the hybrid distance and noise.
Yilun Shang and Roland Bouffanais
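The abstract does not define the hybrid rule precisely; one plausible reading, interacting with at most k nearest neighbours but only those inside a metric radius R, can be sketched as follows (an assumption for illustration, not necessarily the paper's exact definition).

```python
import numpy as np

def hybrid_neighbors(positions, k, radius):
    """For each agent, return the indices of up to k nearest neighbours
    that also lie within `radius`: the topological cap avoids unbounded
    neighbourhoods at high density, the metric cut-off avoids physically
    implausible long-range links at low density."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                       # exclude self
    neighbors = []
    for d in dist:
        nearest = np.argsort(d)[:k]                      # topological: k nearest
        neighbors.append(nearest[d[nearest] <= radius])  # metric cut-off
    return neighbors
```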
258 Simulating Element Creation in Supernovae with the Computational Infrastructure for Nuclear Astrophysics at nucastrodata.org [abstract]
Abstract: The elements that make up our bodies and the world around us are produced in violent stellar explosions. Computational simulations of the element creation processes occurring in these cataclysmic phenomena are complex calculations that track the abundances of thousands of species of atomic nuclei throughout the star. These species are created and destroyed by ~60,000 thermonuclear reactions whose rates are stored in continually updated databases. Previously, delays of up to a decade were experienced before the latest experimental reaction rates were used in astrophysical simulations. The Computational Infrastructure for Nuclear Astrophysics (CINA), freely available at the website nucastrodata.org, reduces this delay from years to minutes! With over 100 unique software tools developed over the last decade, CINA comprises a “lab-to-star” connection. It is the only cloud computing software system in this field and it is accessible via an easy-to-use, web-deliverable, cross-platform Java application. The system gives users the capability to robustly simulate, share, store, analyze and visualize explosive nucleosynthesis events such as novae, X-ray bursts and (new in 2013) core-collapse supernovae. In addition, users can upload, modify, merge, store and share the complex input data required by these simulations. Presently, we are expanding the capabilities of CINA to meet the needs of our users who currently come from 141 institutions and 32 countries. We will describe CINA’s current suite of software tools and the comprehensive list of online nuclear astrophysics datasets available at the nucastrodata.org website. This work is funded by the DOE’s Office of Nuclear Physics under the US Nuclear Data Program.
E. J. Lingerfelt, M. S. Smith, W. R. Hix and C. R. Smith

Main Track (MT) Session 13

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Tully I

Chair: I. Moser

344 Finite difference method for solving acoustic wave equation using locally adjustable time-steps [abstract]
Abstract: The explicit finite difference method has been widely used for seismic modeling in heterogeneous media with strong discontinuities in physical properties. In such cases, due to stability considerations, the time step size is primarily determined by the medium with the highest wave propagation speed, so the higher that speed, the smaller the time step must be to ensure stability throughout the whole domain. Therefore, the use of different temporal discretizations in different regions can greatly reduce the computational cost of solving this kind of problem. In this paper we propose an algorithm for setting local temporal discretizations, named Region Triangular Transition (RTT), which allows the local time steps to be related by any integer ratio and enables each discretization to operate at the stability limit of the finite difference approximations used.
Alexandre Antunes, Regina Leal-Toledo, Otton Filho, Elson Toledo
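The stability consideration the abstract refers to is the CFL condition, under which the fastest medium dictates one global time step; the 1D sketch below shows that coupling (the RTT scheme itself, which relaxes it region by region, is more involved).

```python
import numpy as np

# 1D acoustic wave u_tt = c(x)^2 u_xx, second-order explicit finite differences
nx, dx = 400, 1.0
c = np.where(np.arange(nx) < nx // 2, 1500.0, 4500.0)  # slow | fast medium
dt = 0.9 * dx / c.max()   # CFL: the fastest region dictates the global step,
                          # which is the cost local time-stepping tries to avoid
u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 4] = 1.0          # initial pulse in the slow medium
for _ in range(500):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next
```

With a local scheme, the slow half could take a step up to three times larger here (the speed ratio), which is where the computational savings come from.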
347 Identifying Self-Excited Vibrations with Evolutionary Computing [abstract]
Abstract: This study uses Differential Evolution to identify the coefficients of second-order differential equations of self-excited vibrations from a time signal. The motivation lies in the widespread occurrence of this vibration type in engineering and physics, in particular in the real-life problem of vibrations of hydraulic structure gates. In the proposed method, an equation structure is assumed at the level of the ordinary differential equation, and a population of candidate coefficient vectors undergoes evolutionary training. In this way the numerical constants of the non-linear terms of various self-excited vibration types were recovered from the time signal plus only the initial velocity value. Comparisons are given regarding accuracy and computing time. The presented evolutionary method shows good promise for future application in engineering systems, in particular operational early-warning systems that recognise oscillations with negative damping before they can cause damage.
Christiaan Erdbrink, Valeria Krzhizhanovskaya
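As a self-contained illustration of the approach (not the authors' gate-vibration setup), differential evolution can recover the nonlinear damping coefficient of a Van der Pol oscillator, a standard self-excited system, from its time signal:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

def vdp(y, t, mu):                      # Van der Pol: self-excited oscillation
    x, v = y
    return [v, mu * (1 - x**2) * v - x]

t = np.linspace(0, 20, 400)
true_mu = 1.5
signal = odeint(vdp, [0.5, 0.0], t, args=(true_mu,))[:, 0]

def misfit(params):                     # compare candidate ODE output to signal
    trial = odeint(vdp, [0.5, 0.0], t, args=(params[0],))[:, 0]
    return np.mean((trial - signal) ** 2)

result = differential_evolution(misfit, bounds=[(0.1, 5.0)], seed=0)
print(result.x)                         # recovers approximately [1.5]
```

Here only one scalar coefficient is trained; the paper evolves full coefficient vectors for an assumed equation structure.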
85 Rendering of Feature-Rich Dynamically Changing Volumetric Datasets on GPU [abstract]
Abstract: Interactive photo-realistic representation of dynamic liquid volumes is a challenging task for today's GPUs and state-of-the-art visualization algorithms. Methods of the last two decades consider either static volumetric datasets, applying several optimizations for volume casting, or dynamic volumetric datasets with rough approximations to realistic rendering. Nevertheless, accurate real-time visualization of dynamic datasets is crucial in scientific visualization as well as in areas that demand accurate rendering of feature-rich datasets. An accurate and thus realistic visualization of such datasets leads to new challenges: due to restrictions on computational performance, the datasets may be relatively small compared to the screen resolution, and thus each voxel has to be rendered with high oversampling. With our volumetric datasets, based on a real-time lattice Boltzmann fluid simulation that creates dynamic cavities and small droplets, existing real-time implementations are not applicable for realistic surface extraction. This work presents a volume tracing algorithm capable of producing multiple refractions which is also robust to small droplets and cavities. Furthermore, we show the advantages of our volume tracing algorithm over other implementations.
Martin Schreiber, Atanas Atanasov, Philipp Neumann, Hans-Joachim Bungartz
136 Motor learning in physical interfaces for computational problem solving [abstract]
Abstract: Continuous Interactive Simulation (CIS) maps computational problems concerning the control of dynamical systems to physical tasks in a 3D virtual environment for users to perform. However, deciding on the best mapping for a particular problem is not straightforward. This paper considers how a motor learning perspective can assist when designing such mappings. To examine this issue an experiment was performed to compare an arbitrary mapping with one designed by considering a range of motor learning factors. The particular problem studied was a nonlinear policy setting problem from economics. The results show that choices about how a problem is presented can indeed have a large effect on the ability of users to solve the problem. As a result we recommend the development of guidelines for the application of CIS based on motor learning considerations.
Rohan McAdam
151 Change Detection and Visualization of Functional Brain Networks using EEG Data [abstract]
Abstract: Mining dynamic and non-trivial patterns of interaction in functional brain networks has gained significance due to recent advances in computational neuroscience. Sophisticated data search capabilities, advanced signal processing techniques, statistical methods, and complex network and graph mining algorithms that unfold and discover hidden patterns in the functional brain network, supported by efficient visualization techniques, are essential for drawing sound inferences from the results obtained. Visualizing the change in activity during cognitive function helps discover, and gain insight into, hidden, novel and complex neuronal patterns and trends under both normal and cognitive-load conditions, from the graph/temporal representation of the functional brain network. This paper presents novel methods to explore and model the dynamics and complexity of the brain. It also uses a new tool, Functional Brain Network Analysis and Visualization (FBNAV), to visualize the outcomes of various computational analyses, enabling us to identify and study changing neuronal patterns during various states of brain activity using augmented/customised Topoplots and Headplots. These techniques may also help locate and identify patterns in abnormal mental states resulting from disorders and conditions such as stress.
R Vijayalakshmi, Naga Dasari, Nanda Nandagopal, R Subhiksha, Bernadine Cocks, Nabaraj Dahal, M Thilaga

Computational Optimization, Modelling and Simulation (COMS) Session 1

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Tully II

Chair: Leifur Leifsson

94 Fast Low-fidelity Wing Aerodynamics Model for Surrogate-Based Shape Optimization [abstract]
Abstract: Variable-fidelity optimization (VFO) can be efficient in terms of computational cost when compared with traditional approaches, such as gradient-based methods with adjoint sensitivity information. In variable-fidelity methods, direct optimization of the expensive high-fidelity model is replaced by iterative re-optimization of a physics-based surrogate model, which is constructed from a corrected low-fidelity model. The success of VFO depends on the reliability and accuracy of the low-fidelity model. In this paper, we present a way to develop a fast and reliable low-fidelity model suitable for aerodynamic shape optimization of transonic wings. The low-fidelity model is component-based and accounts for the zero-lift drag, induced drag, and wave drag. The induced drag can be calculated by a suitable method, such as lifting-line theory or a panel method. The zero-lift drag and the wave drag can be calculated by a two-dimensional flow model and strip theory. Sweep effects are accounted for by simple sweep theory. The approach is illustrated by a numerical example in which the induced drag is calculated by a vortex lattice method, and the zero-lift drag and wave drag are calculated by MSES (a viscous-inviscid method). The low-fidelity model is roughly 320 times faster than a high-fidelity computational fluid dynamics model that solves the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. The responses of the high- and low-fidelity models compare favorably and, most importantly, show the same trends with respect to changes in the operational conditions (Mach number, angle of attack) and the geometry (the airfoil shapes).
Leifur Leifsson, Slawomir Koziel, Adrian Bekasiewicz
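For reference, the component drag build-up described above is conventionally written as

$$C_D = C_{D,0} + C_{D,i} + C_{D,w}, \qquad C_{D,i} = \frac{C_L^2}{\pi e\, AR},$$

where $C_{D,0}$ is the zero-lift drag coefficient, $C_{D,w}$ the wave drag coefficient, $e$ the span efficiency factor and $AR$ the wing aspect ratio; the second relation is the classical lifting-line estimate of the induced drag, which the paper's numerical example replaces with a vortex lattice computation.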
128 Minimizing Inventory Costs for Capacity-Constrained Production using a Hybrid Simulation Model [abstract]
Abstract: A hybrid simulation model is developed to determine the cost-minimizing target level for a single-item, single-stage production-inventory system. The model is based on a single discrete-event simulation of the unconstrained production system, from which an analytical approximation of the inventory shortfall is derived. Using this analytical expression it is then possible to evaluate inventory performance, and the associated costs, at any target level. From these calculations, the cost-minimizing target level can be found efficiently using a local search. Computational experiments show the model remains highly accurate at high levels of demand variation, where existing analytical methods are known to be inaccurate. Because an expression for the shortfall distribution is derived via simulation, no user modelling of the demand distribution or estimation of demand parameters is required. Thus the model can be applied in situations where the demand distribution has no identifiable analytical form.
John Betts
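A toy version of the idea, using the empirical shortfall sample directly where the paper fits an analytical approximation to it, might look like this (the demand model and cost parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
demand = rng.gamma(shape=2.0, scale=50.0, size=50_000)  # high-variation demand
capacity = 120.0                                        # per-period capacity

# Shortfall of the unconstrained system from one simulation (Lindley recursion)
shortfall = np.zeros_like(demand)
for i in range(1, len(demand)):
    shortfall[i] = max(0.0, shortfall[i - 1] + demand[i] - capacity)

def cost(target, h=1.0, b=9.0):
    """Mean holding (h) plus backorder (b) cost at a given target level."""
    inventory = np.maximum(target - shortfall, 0.0)
    backorder = np.maximum(shortfall - target, 0.0)
    return h * inventory.mean() + b * backorder.mean()

# Search over the target level, reusing the same shortfall sample throughout
levels = np.linspace(0.0, shortfall.max(), 200)
best = levels[np.argmin([cost(s) for s in levels])]
```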
23 Computation on GPU of Eigenvalues and Eigenvectors of a Large Number of Small Hermitian Matrices [abstract]
Abstract: This paper presents an implementation on Graphics Processing Units of the QR-Householder algorithm used to find all the eigenvalues and eigenvectors of many small Hermitian matrices (double precision) in a very short time, to meet the time constraints of radar applications.
Alain Cosnuau
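The workload, many small independent Hermitian eigenproblems, is easy to state; a CPU reference point using NumPy's batched solver is sketched below (the paper's contribution is a GPU QR-Householder implementation of the same computation).

```python
import numpy as np

rng = np.random.default_rng(0)
n_mat, n = 10_000, 8                       # many small Hermitian matrices
a = rng.standard_normal((n_mat, n, n)) + 1j * rng.standard_normal((n_mat, n, n))
h = (a + a.conj().transpose(0, 2, 1)) / 2  # make each matrix Hermitian

# numpy.linalg.eigh broadcasts over the leading axis: one independent
# eigendecomposition per matrix -- exactly the embarrassingly parallel
# pattern a GPU kernel exploits, one matrix (or block) per thread group.
eigvals, eigvecs = np.linalg.eigh(h)
```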
299 COFADMM: A Computational features selection with Alternating Direction Method of Multipliers [abstract]
Abstract: Due to the explosion in the size and complexity of Big Data, it is increasingly important to be able to solve problems with very large numbers of features. Classical feature selection procedures involve combinatorial optimization, with computational time increasing exponentially with the number of features. During the last decade, penalized regression has emerged as an attractive alternative for regularization and high-dimensional feature selection problems. The Alternating Direction Method of Multipliers (ADMM) is suited to distributed convex optimization and distributed computing for big data. The purpose of this paper is to propose a broader algorithm, COFADMM, which combines the strength of convex penalized techniques in feature selection for big data with the power of ADMM for optimization. We show that COFADMM can provide a path of solutions efficiently and quickly. COFADMM is easy to use and is available in C and Matlab upon request from the corresponding author.
Mohammed Elanbari, Sidra Alam, Halima Bensmail
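COFADMM itself is not specified in the abstract, but the flavour of ADMM applied to penalized regression can be seen in the textbook lasso splitting (a standard sketch, not the authors' algorithm):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM, splitting x and z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)                   # x-update: ridge solve
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        v = x + u                                   # z-update: soft threshold
        z = np.maximum(v - lam / rho, 0.0) - np.maximum(-v - lam / rho, 0.0)
        u = u + x - z                               # dual update
    return z                                        # sparse coefficient vector
```

The x-update reuses a single Cholesky factorization across all iterations, and the z- and u-updates are elementwise, which is what makes ADMM attractive for large, distributed feature selection problems.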
101 Computational Optimization, Modelling and Simulation: Past, Present and Future [abstract]
Abstract: Simulation and optimization are an integrated part of modern design practice in both engineering and industry. Though huge progress has been made in the last few decades, significant challenges still remain. This 5th workshop on Computational Optimization, Modelling and Simulation (COMS 2014) at ICCS 2014 summarizes the latest developments in optimization and modelling and their applications in science, engineering and industry. This paper reviews past developments, the present state of the art, and future trends, while highlighting some challenging issues in these areas. It can be expected that future research will focus on data-intensive applications, approximations for computationally expensive methods, combinatorial optimization, and large-scale applications.
Xin-She Yang, Slawomir Koziel, Leifur Leifsson

Computational Optimisation in the Real World (CORW) Session 1

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Tully III

Chair: Timoleon Kipouros

276 Extending the Front: Designing RFID Antennas using Multiobjective Differential Evolution with Biased Population Selection [abstract]
Abstract: RFID antennas are ubiquitous, so exploring the space of high efficiency and low resonant frequency antennas is an important multiobjective problem. Previous work has shown that the continuous solver differential evolution (DE) can be successfully applied to this discrete problem, but has difficulty exploring the region of solutions with lowest resonant frequency. This paper introduces a modified DE algorithm that uses biased selection from an archive of solutions to direct the search toward this region. Results indicate that the proposed approach produces superior attainment surfaces to the earlier work. The biased selection procedure is applicable to other population-based approaches for this problem.
James Montgomery, Marcus Randall, Andrew Lewis
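The core mechanism, biasing selection toward the poorly covered low-frequency end of the archive, can be sketched as follows (the weighting scheme here is illustrative, not the paper's exact procedure):

```python
import numpy as np

def biased_pick(archive_freqs, rng, power=2.0):
    """Pick an archive member for the DE population, biased toward low
    resonant frequency. archive_freqs holds the resonant frequency of
    each non-dominated solution in the archive."""
    ranks = np.argsort(np.argsort(archive_freqs))       # 0 = lowest frequency
    weights = (len(archive_freqs) - ranks).astype(float) ** power
    weights /= weights.sum()                            # normalize to a pmf
    return rng.choice(len(archive_freqs), p=weights)
```

Raising `power` strengthens the bias, pushing the search toward the hard-to-reach low-frequency region of the front.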
396 Local Search Enabled Extremal Optimisation for Continuous Inseparable Multi-objective Benchmark and Real-World Problems [abstract]
Abstract: Local search is an integral part of many meta-heuristic strategies that solve single-objective optimisation problems. Essentially, the meta-heuristic is responsible for generating a good starting point from which a greedy local search finds the local optimum. Indeed, the best known solutions to many hard problems (such as the travelling salesman problem) have been generated in this hybrid way. However, for multi-objective problems, explicit local search strategies are relatively rarely mentioned or applied. In this paper, a generic local search strategy is developed, particularly for problems where it is difficult or impossible to determine the contribution of individual solution components (often referred to as inseparable problems). The meta-heuristic adopted to test this is extremal optimisation, though the local search technique may be used by any meta-heuristic. To supplement it, a diversification strategy that draws from the external archive is incorporated. Using benchmark problems and a real-world airfoil design problem, it is shown that this combination leads to improved solutions.
Marcus Randall, Andrew Lewis, Jan Hettenhausen, Timoleon Kipouros
411 A Web-Based System for Visualisation-Driven Interactive Multi-Objective Optimisation [abstract]
Abstract: Interactive multi-objective optimisation is a growing field of evolutionary and swarm intelligence-based algorithms. By involving a human decision maker, a set of relevant non-dominated points can often be acquired at significantly lower computational cost than with a posteriori algorithms. An often neglected issue in interactive optimisation is user interface design and the application of interactive optimisation as a design tool in engineering. This paper discusses recent advances in, and modules for, an interactive multi-objective particle swarm optimisation algorithm. The focus of the current implementation is an aeronautics engineering application; however, its use for a wide range of other optimisation problems is conceivable.
Jan Hettenhausen, Andrew Lewis, Timoleon Kipouros

International Workshop on Advances in High-Performance Computational Earth Sciences (IHPCES) Session 1

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Bluewater I

Chair: Kengo Nakajima

408 Application-specific I/O Optimizations on Petascale Supercomputers [abstract]
Abstract: Data-intensive science frontiers and challenges are emerging as computer technology has evolved substantially. Large-scale simulations demand significant I/O workload, and as a result the I/O performance often becomes a bottleneck preventing high performance in scientific applications. In this paper we introduce a variety of I/O optimization techniques developed and implemented when scaling a seismic application to petascale. These techniques include file system striping, data aggregation, reader/writer limiting and less interleaving of data, collective MPI-IO, and data staging. The optimizations result in nearly perfect scalability of the target application on some of the most advanced petascale systems. The techniques introduced in this paper are applicable to other scientific applications facing similar petascale I/O challenges.
Efecan Poyraz, Heming Xu, Yifeng Cui
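Of the techniques listed, collective MPI-IO over contiguous, non-interleaved data is the most portable; a minimal mpi4py sketch is below (the file name and slab layout are illustrative, not from the paper).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(1024, rank, dtype=np.float64)   # each rank's slab of output
fh = MPI.File.Open(comm, "wavefield.bin",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
offset = rank * local.nbytes                    # contiguous, non-interleaved
fh.Write_at_all(offset, local)                  # collective write: the MPI-IO
fh.Close()                                      # layer aggregates the requests
```

Launched under mpiexec, every rank writes its slab in a single collective call; file-system striping and aggregator counts are then tuned per machine on top of this pattern.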
264 A physics-based Monte Carlo earthquake disaster simulation accounting for uncertainty in building structure parameters [abstract]
Abstract: Physics-based earthquake disaster simulations are expected to contribute to high-precision earthquake disaster prediction; however, such models are computationally expensive and the results typically contain significant uncertainties. Here we describe Monte Carlo simulations where 10,000 calculations were carried out with stochastically varied building structure parameters to model 3,038 buildings. We obtain the spatial distribution of the damage caused for each set of parameters, and analyze these data statistically to predict the extent of damage to buildings.
Shunsuke Homma, Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori, Seckin Citak, Takane Hori
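Schematically, the Monte Carlo layer wraps the deterministic simulation in a loop over stochastically drawn structure parameters; everything below (the distributions and the fragility placeholder) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_buildings = 10_000, 3_038

def building_damage(stiffness, damping):
    """Placeholder for the physics-based structural response analysis;
    returns a 0/1 damage flag per building for one parameter draw."""
    fragility = 1.0 / (1.0 + stiffness)          # toy fragility curve
    return rng.random(n_buildings) < fragility

damaged = np.zeros(n_buildings)
for _ in range(n_runs):
    # Stochastically vary the uncertain building structure parameters
    stiffness = rng.lognormal(mean=0.0, sigma=0.2, size=n_buildings)
    damping = rng.normal(loc=0.05, scale=0.01, size=n_buildings)
    damaged += building_damage(stiffness, damping)
damage_probability = damaged / n_runs            # per-building statistic
```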
391 A quick earthquake disaster estimation system with fast urban earthquake simulation and interactive visualization [abstract]
Abstract: In the immediate aftermath of an earthquake, quick estimation of damage to city structures can facilitate prompt, effective post-disaster measures. Physics-based urban earthquake simulations, using measured ground motions as input, are a possible means of obtaining reasonable estimates. The difficulty of such estimation lies in carrying out the simulation and arriving at a thorough understanding of large-scale time series results in a limited amount of time. We developed an estimation system based on fast urban earthquake disaster simulation, together with an interactive visualization method suitable for GPU workstations. Using this system, an urban area with more than 100,000 structures can be analyzed within an hour and visualized interactively.
Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori, M. L. L. Wijerathne, Seizo Tanaka
397 Several hundred finite element analyses of an inversion of earthquake fault slip distribution using a high-fidelity model of the crustal structure [abstract]
Abstract: To improve the accuracy of inversion analysis of earthquake fault slip distribution, we performed several hundred analyses using a 10^8-degree-of-freedom finite element (FE) model of the crustal structure. We developed a meshing method and an efficient computational method for these large FE models. We applied the model to the inversion analysis of coseismic fault slip distribution for the 2011 Tohoku-oki Earthquake. The high resolution of our model provided a significant improvement of the fidelity of the simulation results compared to existing computational approaches.
Ryoichiro Agata, Tsuyoshi Ichimura, Kazuro Hirahara, Mamoru Hyodo, Takane Hori, Muneo Hori

Tools for Program Development and Analysis in Computational Science (TOOLS) Session 1

Time and Date: 11:00 - 12:40 on 12th June 2014

Room: Bluewater II

Chair: Jie Tao

335 High Performance Message-Passing InfiniBand Communication Device for Java HPC [abstract]
Abstract: MPJ Express is a Java messaging system that implements an MPI-like interface. It is used for writing parallel Java applications on High Performance Computing (HPC) hardware, including commodity clusters. The software is capable of executing in multicore and cluster modes. In cluster mode, it currently supports Ethernet- and Myrinet-based interconnects and provides specialized communication devices for these networks. One recent trend in distributed-memory parallel hardware is the emergence of the InfiniBand interconnect, a high-performance network that provides low latency and high bandwidth for parallel MPI applications. Currently there is no direct support in Java (and hence in MPJ Express) to exploit the performance benefits of InfiniBand networks. The only option for running distributed Java programs over InfiniBand is to rely on TCP/IP emulation layers like IP over InfiniBand (IPoIB) and the Sockets Direct Protocol (SDP), which provide poor communication performance. To tackle this issue in the context of MPJ Express, this paper presents a low-level communication device called ibdev that can be used to execute parallel Java applications on InfiniBand clusters. MPJ Express is based on a layered architecture, so users can opt to use ibdev at runtime on an InfiniBand-equipped commodity cluster. ibdev improves Java application performance by accessing the InfiniBand hardware through the native verbs API. Our performance evaluation reveals that MPJ Express achieves much better latency and bandwidth using this new device than with IPoIB and SDP. The improvement in communication performance is also evident in NAS parallel benchmark results, where ibdev helps MPJ Express achieve better scalability and speedups than IPoIB and SDP. The results show that it is possible to reduce the performance gap between Java and native languages with efficient support for low-level communication libraries.
Omar Khan, Mohsan Jameel, Aamir Shafi
300 A High Level Programming Environment for Accelerator-based Systems [abstract]
Abstract: Among the critical hurdles to the widespread adoption of accelerators in high performance computing are portability and programming difficulty. To be an effective HPC platform, these systems need a high-level software development environment to facilitate the porting and development of applications, so they can run portably and efficiently on either accelerators or CPUs. In this paper we present a high-level parallel programming environment for accelerator-based systems, which consists of tightly coupled compilers, tools, and libraries that interoperate and hide the complexity of the system. Ease of use is achieved with compilers that make it feasible for users to write applications in Fortran, C, or C++ with OpenACC directives, tools that help users port, debug, and optimize for both accelerators and conventional multi-core CPUs, and auto-tuned scientific libraries.
Luiz Derose, Heidi Poxon, James Beyer, Alistair Hart
277 Supporting relative debugging for large-scale UPC programs [abstract]
Abstract: Relative debugging is a useful technique for locating errors that emerge when porting existing code to a new programming language or a new computing platform. Recent attention on the UPC programming language has resulted in a number of conventional parallel programs, for example MPI programs, being ported to UPC. This paper gives an overview of the data distribution concepts used in UPC and establishes the challenges in supporting the relative debugging technique for UPC programs that run on large supercomputers. The proposed solution is implemented in an existing parallel relative debugger, ccdb, and its performance is evaluated on a Cray XE6 system with 16,348 cores.
Minh Ngoc Dinh, David Abramson, Jin Chao, Bob Moench, Andrew Gontarek, Luiz Derose