Session 4: 14:10 - 15:50 on 11th June 2014

Main Track (MT) Session 4

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Kuranda

Chair: Y. Cui

222 GPU Optimization of Pseudo Random Number Generators for Random Ordinary Differential Equations [abstract]
Abstract: Solving differential equations with stochastic terms involves a massive use of pseudo random numbers. We present an application for the simulation of wireframe buildings under stochastic earthquake excitation. The inherent potential for vectorization of the application is used to its full extent on GPU accelerator hardware. A representative set of pseudo random number generators for uniformly and normally distributed pseudo random numbers has been implemented, optimized, and benchmarked. The resulting optimized variants outperform standard library implementations on GPUs. The techniques and improvements shown in this contribution using the Kanai-Tajimi model can be generalized to other random differential equations or stochastic models as well as other accelerators.
Christoph Riesinger, Tobias Neckel, Florian Rupp, Alfredo Parra Hinojosa, Hans-Joachim Bungartz
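As a hedged illustration of the kind of transformation such generators perform (not the authors' optimized GPU kernels), the sketch below maps uniformly distributed pseudo random numbers to normally distributed ones with the Box-Muller transform; the vectorized NumPy form mirrors the data-parallel style that suits accelerator hardware.

```python
import numpy as np

def box_muller(n, seed=0):
    """Convert 2n uniform variates into 2n standard normal variates.

    Illustrative only; production GPU generators use device-side
    kernels rather than host-side NumPy arrays.
    """
    rng = np.random.default_rng(seed)
    u1 = 1.0 - rng.random(n)        # shift to (0, 1] so log() is finite
    u2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(u1))
    theta = 2.0 * np.pi * u2
    return np.concatenate((r * np.cos(theta), r * np.sin(theta)))

z = box_muller(500_000)
print(z.mean(), z.std())            # should be close to 0 and 1
```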
229 Design and Implementation of Hybrid and Native Communication Devices for Java HPC [abstract]
Abstract: MPJ Express is a messaging system that allows computational scientists to write and execute parallel Java applications on High Performance Computing (HPC) hardware. The software is capable of executing in two modes, namely cluster and multicore modes. In the cluster mode, parallel applications execute in a typical cluster environment where multiple processing elements communicate with one another using a fast interconnect like Gigabit Ethernet or other proprietary networks like Myrinet and Infiniband. In this context, the MPJ Express library provides communication devices for Ethernet and Myrinet. In the multicore mode, the parallel Java application executes on a single system comprising shared-memory or multicore processors. In this paper, we extend the MPJ Express software with two new communication devices, namely the native and hybrid devices. The goal of the native communication device is to interface the MPJ Express software with native—typically written in C—MPI libraries. In this setting the bulk of the messaging logic is offloaded to the underlying MPI library. This is attractive because MPJ Express can exploit the latest features of the native MPI library, like support for new interconnects and efficient collective communication algorithms. The second device, called the hybrid device, is developed to allow efficient execution of parallel Java applications on clusters of shared-memory or multicore processors. In this setting the MPJ Express runtime system runs a single multithreaded process on each node of the cluster—the number of threads in each process equals the number of processing elements within the node. Our performance evaluation reveals that the native device allows MPJ Express to achieve performance comparable to native MPI libraries—for latency and bandwidth of point-to-point and collective communications—which is a significant gain in performance compared to existing communication devices. The hybrid communication device—without any modifications at the application level—also helps parallel applications achieve better speedups and scalability. We observed comparable performance for various benchmarks—including the NAS Parallel Benchmarks—with the hybrid device as compared to the existing Ethernet communication device on a cluster of shared-memory/multicore processors.
Bibrak Qamar, Ansar Javed, Mohsan Jameel, Aamir Shafi, Bryan Carpenter
231 Deploying a Large Petascale System: the Blue Waters Experience [abstract]
Abstract: Deployment of a large parallel system is typically a very complex process, involving several steps of preparation, delivery, installation, testing and acceptance. Despite the availability of several petascale machines today, the steps and lessons from their deployment are rarely described in the literature. This paper presents the experiences observed during the deployment of Blue Waters, the largest supercomputer ever built by Cray and one of the most powerful machines currently available for open science. The presentation focuses on the final deployment steps, in which the system was intensively tested and accepted by NCSA. After a brief introduction to the Blue Waters architecture, a detailed description of the set of acceptance tests employed is provided, including many of the obtained results. This is followed by the major lessons learned during the process. Those experiences and lessons should be useful for guiding similarly complex deployments in the future.
Celso Mendes, Brett Bode, Gregory Bauer, Jeremy Enos, Cristina Beldica, William Kramer
248 FPGA-based acceleration of detecting statistical epistasis in GWAS [abstract]
Abstract: Genotype-by-genotype interactions (epistasis) are believed to be a significant source of unexplained genetic variation causing complex chronic diseases but have been ignored in genome-wide association studies (GWAS) due to the computational burden of analysis. In this work we show how to benefit from FPGA technology for highly parallel creation of contingency tables in a systolic chain with a subsequent statistical test. We present the implementation for the FPGA-based hardware platform RIVYERA S6-LX150 containing 128 Xilinx Spartan6-LX150 FPGAs. For performance evaluation we compare against the method iLOCi. iLOCi claims to outperform other available tools in terms of accuracy. However, analysis of a dataset from the Wellcome Trust Case Control Consortium (WTCCC) with about 500,000 SNPs and 5,000 samples still takes about 19 hours on a MacPro workstation with two Intel Xeon quad-core CPUs, while our FPGA-based implementation requires only 4 minutes.
Lars Wienbrandt, Jan Christian Kässens, Jorge González-Domínguez, Bertil Schmidt, David Ellinghaus, Manfred Schimmler
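As a hedged software-only illustration of the contingency-table statistic at the heart of such pairwise scans (not the paper's systolic FPGA design), the sketch below tabulates one SNP pair against case/control status and applies a chi-square test; an exhaustive scan repeats this for every SNP pair, which is what motivates hardware acceleration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy data: genotypes coded 0/1/2 for two SNPs, binary phenotype.
rng = np.random.default_rng(42)
n = 5000
snp_a = rng.integers(0, 3, n)
snp_b = rng.integers(0, 3, n)
pheno = rng.integers(0, 2, n)          # 0 = control, 1 = case

# 9 x 2 contingency table: rows = genotype pairs, columns = phenotype.
table = np.zeros((9, 2), dtype=np.int64)
np.add.at(table, (3 * snp_a + snp_b, pheno), 1)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3g}, dof={dof}")
```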

Main Track (MT) Session 11

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Tully I

Chair: Dieter Kranzlmüller

360 Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms [abstract]
Abstract: With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow executions on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
Jianwu Wang, Prakashan Korambath, Ilkay Altintas, Jim Davis, Daniel Crawl
36 Large Eddy Simulation of Flow in Realistic Human Upper Airways with Obstructive Sleep Apnea [abstract]
Abstract: Obstructive sleep apnea (OSA) is a common type of sleep disorder characterized by abnormal repetitive cessation in breathing during sleep caused by partial or complete narrowing of the pharynx in the upper airway. Upper airway surgery is commonly performed for this disorder, but the success rate is limited because of the lack of a thorough understanding of the primary mechanism associated with OSA. A computational fluid dynamics (CFD) simulation with the Large Eddy Simulation approach is conducted to investigate a patient-specific upper airway flow with severe OSA. Both pre- and post-surgical upper airway models are simulated to reveal the effect of the surgical treatment. Only inhalation is simulated, with six periods (about 15 seconds) of unsteady flow. Comparison of the results before and after treatment shows that there exists a region of significant pressure and shear stress drop near the soft palate before treatment; after treatment the flow resistance in the upper airway is decreased and the wall shear stress is significantly reduced.
Mingzhen Lu, Yang Liu, Jingying Ye, Haiyan Luo
86 Experiments on a Parallel Nonlinear Jacobi-Davidson Algorithm [abstract]
Abstract: The Jacobi-Davidson (JD) algorithm is very well suited for the computation of a few eigenpairs of large sparse complex symmetric nonlinear eigenvalue problems. The performance of JD crucially depends on the treatment of the so-called correction equation, in particular the preconditioner, and the initial vector. Depending on the choice of the spectral shift and the accuracy of the solution, the convergence of JD can vary from linear to cubic. We investigate parallel preconditioners for the Krylov space method used to solve the correction equation. We apply our nonlinear Jacobi-Davidson (NLJD) method to quadratic eigenvalue problems that originate from the time-harmonic Maxwell equation for the modeling and simulation of resonating electromagnetic structures.
Yoichi Matsuo, Hua Guo, Peter Arbenz
184 Improving Collaborative Recommendation via Location-based User-Item Subgroup [abstract]
Abstract: Collaborative filtering has been widely and successfully applied in recommendation systems. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. Some previous studies have shown that there exist many user-item subgroups, each consisting of a subset of items and a group of like-minded users on those items, and that subgroup analysis can achieve better accuracy. However, we find that users' geographical information also has an impact on group preferences for items. Hence, in this paper, we propose a Bayesian generative model to describe the generative process of user-item subgroup preference while taking users' geographical information into account. Experimental results show the superiority of the proposed model.
Zhi Qiao, Peng Zhang, Yanan Cao, Chuan Zhou, Li Guo
90 Optimizing Shared-Memory Hyperheuristics on top of Parameterized Metaheuristics [abstract]
Abstract: This paper studies the auto-tuning of shared-memory hyperheuristics developed on top of a unified shared-memory metaheuristic scheme. A theoretical model of the execution time of the unified scheme is empirically adapted to particular metaheuristics and hyperheuristics through experimentation. The model is used to decide at run time the number of threads needed to obtain a reduced execution time. The number of threads is different for the different basic functions in the scheme, and depends on the problem to be solved, the metaheuristic scheme, the implementation of the basic functions and the computational system where the problem is solved. The applicability of the proposal is shown with a problem of minimizing electricity consumption in the exploitation of wells. Experimental results show that satisfactory execution times can be achieved with auto-tuning techniques based on theoretical-empirical models of the execution time.
José Matías Cutillas Lozano, Domingo Gimenez

Dynamic Data Driven Application Systems (DDDAS) Session 4

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Tully II

Chair: Ana Cortes

74 Dynamic Data Driven Crowd Sensing Task Assignment [abstract]
Abstract: To realize the full potential of mobile crowd sensing, techniques are needed to deal with uncertainty in participant locations and trajectories. We propose a novel model for spatial task assignment in mobile crowd sensing that uses a dynamic and adaptive data driven scheme to assign moving participants with uncertain trajectories to sensing tasks, in a near-optimal manner. Our scheme is based on building a mobility model from publicly available trajectory history and estimating posterior location values using noisy/uncertain measurements upon which initial tasking assignments are made. These assignments may be refined locally (using exact information) and used by participants to steer their future data collection, which completes the feedback loop. We present the design of our proposed approach with rationale to suggest its value in effective mobile crowd sensing task assignment in the presence of uncertain trajectories.
Layla Pournajaf, Li Xiong, Vaidy Sunderam
79 Context-aware Dynamic Data-driven Pattern Classification [abstract]
Abstract: This work aims to mathematically formalize the notion of context, with the purpose of allowing contextual decision-making in order to improve performance in dynamic data driven classification systems. We present definitions for both intrinsic context, i.e. factors which directly affect sensor measurements for a given event, as well as extrinsic context, i.e. factors which do not affect the sensor measurements directly, but do affect the interpretation of collected data. Supervised and unsupervised modeling techniques to derive context and context labels from sensor data are formulated. Here, supervised modeling incorporates the a priori known factors affecting the sensing modalities, while unsupervised modeling autonomously discovers the structure of those factors in sensor data. Context-aware event classification algorithms are developed by adapting the classification boundaries, dependent on the current operational context. Improvements in context-aware classification have been quantified and validated in an unattended sensor-fence application for US Border Monitoring. Field data, collected with seismic sensors on different ground types, are analyzed in order to classify two types of walking across the border, namely, normal and stealthy. The classification is shown to be strongly dependent on the context (specifically, soil type: gravel or moist soil).
Shashi Phoha, Nurali Virani, Pritthi Chattopadhyay, Soumalya Sarkar, Brian Smith, Asok Ray

Workshop on Data Mining in Earth System Science (DMESS) Session 1

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Tully III

Chair: Jay Larson

375 Stochastic Parameterization to Represent Variability and Extremes in Climate Modeling [abstract]
Abstract: Unresolved sub-grid processes, those which are too small or dissipate too quickly to be captured within a model's spatial resolution, are not adequately parameterized by conventional numerical climate models. Sub-grid heterogeneity is lost in parameterizations that quantify only the 'bulk effect' of sub-grid dynamics on the resolved scales. A unique solution, one that does not rely on increased grid resolution, is the employment of stochastic parameterization of the sub-grid to reintroduce variability. We apply this approach in a coupled land-atmosphere model, one that combines the single-column Community Atmosphere Model and the single-point Community Land Model, by incorporating a stochastic representation of sub-grid latent heat flux to force the distribution of precipitation. Sub-grid differences in surface latent heat flux arise from the mosaic of Plant Functional Types (PFTs) that describe terrestrial land cover. With the introduction of a stochastic parameterization framework to affect the distribution of sub-grid PFTs, we alter the distribution of convective precipitation over regions with high PFT variability. The stochastically forced precipitation probability density functions show lengthened tails, demonstrating the retrieval of rare events. Through model data analysis we show that the stochastic model increases both the frequency and intensity of rare events in comparison to conventional deterministic parameterization.
Roisin Langan, Richard Archibald, Matthew Plumlee, Salil Mahajan, Daniel Ricciuto, Cheng-En Yang, Rui Mei, Jiafu Mao, Xiaoying Shi, Joshua Fu
426 Understanding Global Climate Variability, Change and Stability through Densities, Distributions, and Informatics [abstract]
Abstract: Climate modelling as it is generally practised is the act of generating large volumes of simulated weather through integration of primitive-equation/general circulation model-based Earth system models (ESMs) and subsequent statistical analysis of these large volumes of model-generated history files. This approach, though highly successful, entails explosively growing data volumes, and may not be practicable on exascale computers. This situation begs the question: Can we model climate's governing dynamics directly? If we pursue this tactic, there are two clear avenues to pursue: i) analysis of the combined primitive equations and subgridscale parameterisations to formulate an "envelope theory" applicable to the system's larger spatiotemporal scales; and ii) a search for governing dynamics through analysis of the existing corpus of climate observation assimilated and simulated data. Our work focuses on strategy ii). Climate data analysis concentrates primarily on statistical moments, quantiles, and extremes, but rarely on the most complete statistical descriptor—the probability density function (PDF). Long-term climate variability motivates a moving-window-sampled PDF, which we call a time-dependent PDF (TDPDF). The TDPDF resides within a PDF/information-theoretic framework that provides answers to several key questions of climate variability, stability, and change, including: How does the climate evolve in time? How representative is any given sampling interval of the whole record? How rapidly is the climate changing? In this study, we pursue probability density estimation for globally sampled climate data using two techniques that are readily applicable to spatially weighted data and yield closed-form PDFs: the Edgeworth expansion and kernel smoothing. We explore our concerns regarding serial correlation in the data and effective sample size due to spatiotemporal correlations. We introduce these concepts for a simple dataset: the Central England Temperature Record. We then apply these techniques to larger, spatially weighted climate data sets, including the US National Centers for Environmental Prediction NCEP-1 Reanalysis, the Australian Water Availability Project (AWAP) dataset, and the Australian Water and Carbon Observatory dataset.
Jay Larson and Padarn Wilson
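As a hedged illustration of the moving-window (time-dependent) PDF idea, the sketch below builds one kernel-smoothed density per window of a synthetic univariate temperature series; the spatial weighting and Edgeworth expansion discussed in the abstract are not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for a long monthly temperature record with a weak trend.
rng = np.random.default_rng(1)
years = 300
temps = 9.5 + 0.003 * np.arange(years * 12) + rng.normal(0.0, 1.5, years * 12)

window = 30 * 12                        # 30-year moving window
grid = np.linspace(temps.min(), temps.max(), 200)

# One kernel density estimate per window position gives a TDPDF.
tdpdf = np.array([
    gaussian_kde(temps[start:start + window])(grid)
    for start in range(0, len(temps) - window + 1, 12)  # slide by one year
])
print(tdpdf.shape)                      # (number of windows, grid points)
```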
52 Integration of artificial neural networks into operational ocean wave prediction models for fast and accurate emulation of exact nonlinear interactions [abstract]
Abstract: In this paper, an implementation study was undertaken to employ Artificial Neural Networks (ANN) in third-generation ocean wave models for direct mapping of wind-wave spectra onto exact nonlinear interactions. While the investigation expands on previously reported feasibility studies of Neural Network Interaction Approximations (NNIA), it focuses on a new robust neural network that is implemented in the Wavewatch III (WW3) model. Several idealized and real test scenarios were carried out. The obtained results confirm the feasibility of NNIA in terms of speeding up model calculations, and the approach is fully capable of providing operationally acceptable model integrations. The ANN is able to emulate the exact nonlinear interaction for single- and multi-modal wave spectra with much higher accuracy than the Discrete Interaction Approximation (DIA). NNIA performs at least twice as fast as DIA and at least two hundred times faster than the exact method (Webb-Resio-Tracy, WRT) for a well-trained dataset. The accuracy of NNIA depends on the network configuration. For the most suitable network configurations, the NNIA results and scatter statistics show good agreement with exact results by means of growth curves and integral parameters. Practical possibilities for further improvements in achieving fast and highly accurate emulations using ANN for emulating time-consuming exact nonlinear interactions are also suggested and discussed.
Ruslan Puscasu
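The sketch below illustrates only the general emulation idea, with a small scikit-learn regressor and a synthetic stand-in for the expensive exact computation; it is not the NNIA network implemented in Wavewatch III, and all inputs and functions are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def exact_interaction(x):
    """Synthetic stand-in for an expensive exact computation (e.g. WRT)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2) + 0.5 * x[:, 2]

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5000, 3))   # toy spectral descriptors
y_train = exact_interaction(X_train)

# Train a small ANN to emulate the expensive function.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X_train, y_train)

X_test = rng.uniform(-1, 1, size=(1000, 3))
err = np.abs(emulator.predict(X_test) - exact_interaction(X_test)).mean()
print(f"mean absolute emulation error: {err:.4f}")
```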

Bridging the HPC Talent Gap with Computational Science Research Methods (BRIDGE) Session 2

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Bluewater I

Chair: Vassil Alexandrov

412 The HPC Talent Gap: an Australian Perspective [abstract]
Abstract: The recent Super Science initiative by the Australian government has provided funding for two petascale supercomputers to support research nationally, along with cloud, storage and network infrastructure. While some research areas are well-established in the use of HPC, much of the potential user base is still working with desktop computing. To be able to make use of the new infrastructure, these users will need training, support and associated funding. It is important to not only increase uptake in computational science, but also to nurture the workforce based on identified roles and ongoing support for careers and career pathways. This paper will present a survey of a range of efforts made in Australia to increase uptake and skills in HPC, and reflect on successes and the challenges ahead.
Valerie Maxville
418 Measuring Business Value of Learning Technology Implementation in Higher Education Setting [abstract]
Abstract: This paper introduces the concept of Business Value of Learning Technology and presents an approach to measuring the Business Value of Learning Technology in a Higher Education setting, based on a case study in Computational Science and cognate areas. The Computational Science subject area is used as a pilot for the studies described in this paper since it is a multidisciplinary area attracting students from diverse backgrounds; it is both the natural environment to promote collaborative teaching methods and collaborative provision of courses, and as such requires more streamlined management processes. The paper, based on the above case study, presents the motivators and hygiene factors for Learning Technology implementation in a Higher Education setting. Finally, the Intersecting Influences Model presents the influences of pedagogy, technology and management over the motivation and hygiene factors, together with the corresponding generalization for the PG-level HE setting.
Nia Alexandrov

Workshop on Cell Based and Individual Based modelling (CBIBM) Session 2

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Bluewater II

Chair: James Osborne

82 How are individual cells distributed in a spreading cell front? [abstract]
Abstract: Spreading cell fronts are essential for embryonic development, tissue repair and cancer. Mathematical models used to describe the motion of cell fronts, such as Fisher’s equation and other partial differential equations, always invoke a mean-field assumption which implies that there is no spatial structure, such as cell clustering, present in the system. We test this ubiquitous assumption using a combination of in vitro cell migration assays, spatial statistics tools and discrete random walk simulations. In particular, we examine the conditions under which spatial structure can form in a spreading cell population. Our results highlight the importance of carefully examining these kinds of modelling assumptions that can be easily overlooked when applying partial differential equation models to describe the collective migration of a population of cells.
Katrina Treloar, Matthew Simpson and D.L. Sean McElwain
170 An approximate Bayesian computation approach for estimating parameters of cell spreading experiments [abstract]
Abstract: The cell spreading process involves cell motility and cell proliferation, and is essential to developmental biology, wound healing and immune responses. Such a process is inherently stochastic and should be modelled as such. Unfortunately, there is a lack of a general and principled technique to infer the parameters of these models and quantify the uncertainty associated with the estimates based on experimental data. In this talk we present a novel application of approximate Bayesian computation (ABC) that is able to achieve this goal in a coherent framework. We compare the parameter estimates based on two different implementations of the stochastic models. The first implementation uses the exact continuous time Gillespie (CTG) algorithm while the second is a discrete time approximate (DTA) algorithm. Our results indicate that the DTA algorithm provides very similar results to, but is more computationally efficient than, the CTG algorithm. The key parameter finding is that the posterior distribution of the time duration between motility events is highly correlated with the experimental time and the initial number of cells: the more crowded the cells or the longer the experiment, the faster the cell motility rate. This trend also appears in the models with cell spreading driven by combined motility and proliferation. In similar studies, parameter estimates are typically based upon the size of the leading edge, since other sources of data from the experiments can be costly to collect. Our ABC analysis suggests that it is possible to infer the time duration precisely from the leading edge, but that this unfortunately provides very little information about the cell proliferation rate. This highlights the need to obtain more detailed information from experimental observations of cell spreading, such as the cell density profile along a diameter, in order to quantify model parameters accurately.
Nho Vo, Christopher Drovandi, Anthony Pettitt and Matthew Simpson
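The following is a minimal sketch of rejection ABC under toy assumptions (a made-up leading-edge summary statistic, not the authors' CTG or DTA models): draw candidate parameters from the prior, simulate, and keep the draws whose simulated summary lies close to the observed one; the kept draws approximate the posterior.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_leading_edge(motility_rate, n_steps=200):
    """Toy stochastic stand-in for a cell-spreading simulation:
    the leading edge advances by a Poisson number of motility events."""
    return rng.poisson(motility_rate, n_steps).sum()

observed = simulate_leading_edge(motility_rate=2.0)   # pretend this is data

# Rejection ABC: sample from the prior, keep parameters whose simulated
# summary statistic is within a tolerance of the observed summary.
prior_draws = rng.uniform(0.5, 5.0, 20000)
accepted = np.array([theta for theta in prior_draws
                     if abs(simulate_leading_edge(theta) - observed) < 10])

print(f"accepted {accepted.size} draws, "
      f"posterior mean motility rate ~ {accepted.mean():.2f}")
```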
431 Computer simulations of the mouse spermatogenic cycle [abstract]
Abstract: The mouse spermatogenic cycle describes the periodic development of male germ cells in the testicular tissue. Understanding the spermatogenic cycle has important clinical relevance, because disruption of the process leads to infertility or subfertility, and being able to regulate the process would provide new avenues to male contraceptives. However, the lengthy process prevents visualizing the cycle through dynamic imaging. Moreover, the precise action of germ cells that leads to the emergence of testicular tissue patterns remains uncharacterized. We develop an agent-based model to simulate the mouse spermatogenic cycle on a cross-section of the seminiferous tubule over a time scale of hours to years, taking into consideration multiple cellular behaviors including feedback regulation, mitotic and meiotic division, differentiation, apoptosis, and movement. The computer model is able to elaborate the temporal-spatial dynamics of germ cells in a time-lapse movie format, allowing us to trace individual cells as they change state and location. More importantly, the model provides a mechanistic understanding of the fundamentals of male fertility, namely, how testicular morphology and sperm production are achieved. By manipulating cellular behaviors either individually or collectively in silico, the model predicts the causal events leading to the altered arrangement of germ cells upon genetic and environmental perturbations. This in silico platform can serve as an interactive tool to perform long-term simulations and identify optimal approaches for infertility treatment and contraceptive development. Such an approach may also be applicable to human spermatogenesis and, hence, may lay the foundation for increasing the effectiveness of male fertility regulation.
Ping Ye

Workshop on Teaching Computational Science (WTCS) Session 2

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Rosser

Chair: Angela Shiflet

339 Double-Degree Master's Program in Computational Science: Experiences of ITMO University and University of Amsterdam [abstract]
Abstract: We present a new double-degree graduate (Master's) programme developed jointly by ITMO University, Russia, and the University of Amsterdam, The Netherlands. First, we look into the global aspects of integrating different educational systems and list some funding opportunities from European foundations. Then we describe our double-degree programme curriculum, suggest the timeline of enrollment and studies, and give some examples of student research topics. Finally, we discuss the peculiarities of joint programmes with Russia, reflect on the first lessons learnt, and share our thoughts and experiences that could be of interest to the international community expanding educational markets to vast countries like Russia, China or India. The paper is written for education professionals and contains useful information for potential students.
Alexey Dukhanov, Valeria Krzhizhanovskaya, Anna Bilyatdinova, Alexander Boukhanovsky, Peter Sloot
254 Critical Issues in the Teaching of High Performance Computing to Postgraduate Scientists [abstract]
Abstract: High performance computing is in increasing demand, especially with the need to conduct parallel processing on very large datasets, whether measured by volume, velocity or variety. Unfortunately the necessary skills - from familiarity with the command line interface, job submission and scripting through to parallel programming - are not commonly taught at the level required by most researchers. As a result the uptake of HPC usage remains disproportionately low, with emphasis on system metrics taking priority, leading to a situation described as 'high performance computing considered harmful'. Changing this is not a problem of computational science but rather a problem for computational science, which can only be resolved through a multi-disciplinary approach. The following example addresses the main issues in such teaching and thus appeals to some universality in application which may be useful for other institutions. For the past several years the Victorian Partnership for Advanced Computing (VPAC) has conducted a range of training courses designed to bring the capabilities of postgraduate researchers to a level of competence useful for their research. These courses have developed over this time, in part through providing a significantly wider range of content for varying skill sets, but more importantly by introducing some of the key insights from the discipline of adult and tertiary education in the context of the increasing trend towards lifelong learning. This includes an andragogical orientation, providing integrated structural knowledge, encouraging learner autonomy, self-efficacy and self-determination, utilising appropriate learning styles for the discipline, utilising modelling and scaffolding for example problems (as a contemporary version of proximal learning), and following up with a connectivist mentoring and outreach program in the context of a culturally diverse audience.
Lev Lafayette
89 A High Performance Computing Course Guided by the LU Factorization [abstract]
Abstract: This paper presents an experience of Problem-based learning in a High Performance Computing course. The course is part of the specialization in High Performance Architectures and Supercomputing within a Master's degree on New Technologies in Computer Science. The students are assumed to have a basic knowledge of parallel programming, but differences in their previous studies and where those studies were undertaken mean the group is heterogeneous. The Problem-based learning approach therefore has to facilitate the individual development and supervision of the students. The course focuses on HPC, matrix computation, parallel libraries, heterogeneous computing and scientific applications of parallelism. The students work on the different aspects of the course using the LU factorization, developing their own implementations, using different libraries, combining different levels of parallelism and conducting experiments in a small heterogeneous cluster composed of multicores of different characteristics and GPUs of different types.
Gregorio Bernabé, Javier Cuenca, Luis P. Garcia, Domingo Gimenez, Sergio Rivas-Gomez
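As a hedged illustration of the kind of baseline exercise such a course might start from (the course materials themselves are not reproduced here), the sketch below implements an unpivoted right-looking LU factorization in NumPy and checks it against SciPy's pivoted reference routine; parallel, library-based and GPU variants would then build on this serial baseline.

```python
import numpy as np
from scipy.linalg import lu

def lu_no_pivot(a):
    """Naive right-looking LU without pivoting: a serial baseline that
    students can later parallelize or replace with library calls."""
    a = a.astype(float).copy()
    n = a.shape[0]
    for k in range(n - 1):
        a[k + 1:, k] /= a[k, k]                        # column of L
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])
    return np.tril(a, -1) + np.eye(n), np.triu(a)      # unit-diagonal L, U

rng = np.random.default_rng(3)
A = rng.random((6, 6)) + 6 * np.eye(6)   # diagonally dominant, safe without pivoting

L, U = lu_no_pivot(A)
P, L_ref, U_ref = lu(A)                  # SciPy reference with partial pivoting
print(np.allclose(L @ U, A), np.allclose(P @ L_ref @ U_ref, A))
```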
50 Teaching High Performance Computing using BeesyCluster and Relevant Usage Statistics [abstract]
Abstract: The paper presents motivations and experiences from using the BeesyCluster middleware for teaching high performance computing at the Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology. Features of BeesyCluster well suited to conducting courses are discussed, including: an easy-to-use WWW interface for application development and running that hides queuing systems, publishing applications as services and running them in a sandbox by novice users, and team work and workflow management environments. Additionally, practical experiences are discussed from two courses: High Performance Computing Systems and Architectures of Internet Services. For the former, statistics such as the number of team work activities, the number of applications run on clusters and the number of WWW user sessions are shown over the period of one semester. Results of a survey from a general course on BeesyCluster for HPC, conducted for university staff and students, are also presented.
Pawel Czarnul

Urgent Computing: Computations for Decision Support in Critical Situations (UC) Session 1

Time and Date: 14:10 - 15:50 on 11th June 2014

Room: Mossman

Chair: Alexander Boukhanovsky

429 High Performance Computations for Decision Support in Critical Situations: Introduction to the Third Workshop on Urgent Computing [abstract]
Abstract: This paper is the preface to the Third Workshop on Urgent Computing. The Urgent Computing workshops have traditionally been embedded in the International Conference on Computational Science (ICCS) since 2012. They aim to develop a dialogue on the present and future of research and applications associated with large-scale computations for decision support in critical situations. The key workshop topics in 2014 are: methods and principles of urgent computing; middleware, platforms and infrastructures; simulation-based decision support for complex systems control; interactive visualization and virtual reality for decision support in emergency situations; and domain-area applications to emergency situations, including natural and man-made disasters, e.g. transportation problems, epidemics and criminal acts.
Alexander Boukhanovsky, Marian Bubak
342 Personal decision support mobile service for extreme situations [abstract]
Abstract: This article discusses aspects of the implementation of a massive personal decision support mobile service for the evacuation process in extreme situations, based on the second-generation cloud computing platform CLAVIRE and a virtual society model. The virtual society model was constructed using an agent-based approach. To increase credibility, individual motivation methods (personal decision support and user training) were used.
Vladislav A. Karbovskii, Daniil V. Voloshin, Kseniia A. Puzyreva, Aleksandr S. Zagarskikh
357 Evaluation of in-vehicle decision support system for emergency evacuation [abstract]
Abstract: One of the most important issues in Decision Support Systems (DSS) technology is ensuring their effectiveness and efficiency for future implementations and use. A DSS is a prominent tool in disaster information systems, allowing the authorities to provide life safety information directly to the mobile devices of anyone physically located in the evacuation area. A personal DSS then guides users to a safe point. Due to the large uncertainty in initial conditions and assumptions about the underlying process, such a DSS is extremely hard to implement and evaluate, particularly in a real environment. We propose a simulation methodology for the evaluation of an in-vehicle DSS for emergency evacuation based on transport system and human decision-making modeling.
Sergei Ivanov, Konstantin Knyazkov
358 Problem solving environment for development and maintenance of St. Petersburg’s Flood Warning System [abstract]
Abstract: The Saint-Petersburg Flood Warning System (FWS) is a life-critical system that requires permanent maintenance and development. Tasks that arise during these processes can be much more resource-intensive than the operational loop of the system and may involve complex research problems. It is therefore essential to have a special software tool that handles a collection of different models, data sources and auxiliary software so that they can be combined in different ways according to the particular research problem to be solved. This paper aims to share the idea of evolving the Saint-Petersburg FWS with the help of a problem-solving environment based on the cloud platform CLAVIRE.
Sergey Kosukhin, Anna Kalyuzhnaya, Denis Nasonov