Session 3: 11:00 - 12:40 on 11th June 2014

Main Track (MT) Session 3

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Kuranda

Chair: E. Luque

186 Triplet Finder: On the Way to Triggerless Online Reconstruction with GPUs for the PANDA Experiment [abstract]
Abstract: PANDA is a state-of-the-art hadron physics experiment currently under construction at FAIR, Darmstadt. In order to select events for offline analysis, PANDA will use software-based triggerless online reconstruction, performed at a data rate of 200 GB/s. To process the raw detector data rate in real time, we design and implement a GPU version of the Triplet Finder, a fast and robust first-stage tracking algorithm able to reconstruct tracks with good quality, specially designed for the Straw Tube Tracker subdetector of PANDA. We reduce the algorithmic complexity of processing many hits together by splitting them into bunches, which can be processed independently. We evaluate different ways of processing bunches, GPU dynamic parallelism being one of them. We also propose an optimized technique for associating hits with reconstructed track candidates. The evaluation of our GPU implementation demonstrates that the Triplet Finder can process almost 6 Mhits/s on a single K20X GPU, making it a promising algorithm for the online event filtering scheme of PANDA.
Andrew Adinetz, Andreas Herten, Jiri Kraus, Marius Mertens, Dirk Pleiter, Tobias Stockmanns, Peter Wintz
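The bunching idea described in the abstract can be illustrated with a small sketch. This is not the PANDA code: the hit representation (a sorted list of timestamps) and the gap threshold are illustrative assumptions; the actual Triplet Finder operates on richer detector hits and runs the per-bunch work on the GPU.

```python
# Sketch of "bunching": hits separated by a large enough time gap cannot
# belong to the same track candidate, so the hit stream is split into
# bunches that can be processed independently (e.g. one GPU kernel or
# dynamic-parallelism child grid per bunch).

def split_into_bunches(hit_times, max_gap):
    """Split a sorted list of hit timestamps into bunches wherever
    consecutive hits are more than max_gap apart."""
    bunches = []
    current = []
    for t in hit_times:
        if current and t - current[-1] > max_gap:
            bunches.append(current)
            current = []
        current.append(t)
    if current:
        bunches.append(current)
    return bunches

hits = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9, 10.0, 10.2]
print(split_into_bunches(hits, max_gap=1.0))
# → [[0.0, 0.1, 0.2], [5.0, 5.1], [9.9, 10.0, 10.2]]
```

Each returned bunch can then be handed to an independent worker, which is the property the paper exploits to keep the algorithmic cost of hit combinatorics bounded.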
189 A Technique for Parallel Share-Frequent Sensor Pattern Mining from Wireless Sensor Networks [abstract]
Abstract: WSNs generate huge amounts of data in the form of streams, and mining useful knowledge from these streams is a challenging task. Existing works generate sensor association rules using the occurrence frequency of patterns, with binary frequency (either absent or present) or the support of a pattern as a criterion. However, the binary frequency or support of a pattern may not be a sufficient indicator for finding meaningful patterns in WSN data, because it only reflects the number of epochs in the sensor data which contain that pattern. The share measure of sensorsets can discover useful knowledge about the numerical values associated with sensors in a sensor database. Therefore, in this paper, we propose a new type of behavioral pattern, called share-frequent sensor patterns, obtained by considering the non-binary frequency values of sensors in epochs. To discover share-frequent sensor patterns from a sensor dataset, we propose a novel parallel and distributed framework. In this framework, we develop a novel tree structure, called the parallel share-frequent sensor pattern tree (PShrFSP-tree), which is constructed at each local node independently by capturing the database contents to generate the candidate patterns using a pattern-growth technique with a single scan, and which then merges the locally generated candidate patterns at the final stage to generate global share-frequent sensor patterns. Comprehensive experimental results show that our proposed model is very efficient for mining share-frequent patterns from WSN data in terms of time and scalability.
Md Mamunur Rashid, Iqbal Gondal, Joarder Kamruzzaman
205 Performance-Aware Energy Saving Mechanism in Interconnection Networks for Parallel Systems [abstract]
Abstract: The growing processing power of parallel computing systems requires interconnection networks of higher complexity and higher performance, which consume more energy. Link components contribute a substantial proportion of the total energy consumption of these networks. Many researchers have proposed approaches that judiciously change the link speed as a function of traffic, to save energy when the traffic is light. However, reducing the link speed increases average packet latency and thus degrades network performance. This paper addresses that issue with several proposals. The simulation results show that the extended energy saving mechanism in our proposals outperforms the energy saving mechanisms in the open literature.
Hai Nguyen, Daniel Franco, Emilio Luque
214 Handling Data-skew Effects in Join Operations using MapReduce [abstract]
Abstract: For over a decade, MapReduce has been a prominent programming model for handling vast amounts of raw data in large scale systems. This model ensures scalability, reliability and availability with reasonable query processing times. However, these large scale systems still face some challenges: data skew, task imbalance, high disk I/O and redistribution costs can have disastrous effects on performance. In this paper, we introduce the MRFA-Join algorithm: a new frequency-adaptive algorithm, based on the MapReduce programming model and a randomised key redistribution approach, for join processing of large-scale datasets. A cost analysis of this algorithm shows that our approach is insensitive to data skew and ensures perfect balancing properties during all stages of join computation. These results have been confirmed by a series of experiments.
Mostafa Bamha, Frédéric Loulergue, Mohamad Al Hajj Hassan
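The randomised key redistribution idea can be sketched as follows. This is an illustration of generic key-salting for skewed joins, under the assumption that the frequent keys of one relation are known in advance; the authors' actual MRFA-Join algorithm differs in detail.

```python
import random
from collections import defaultdict

# Skew-handling join sketch: records of a frequent ("hot") key in R are
# salted with a random suffix so they spread over several reduce
# partitions, while the matching S records are replicated to every
# salted partition. Non-hot keys use a single partition as usual.

NUM_SALTS = 4

def map_side(r_records, s_records, hot_keys):
    partitions = defaultdict(lambda: ([], []))
    for key, val in r_records:
        salt = random.randrange(NUM_SALTS) if key in hot_keys else 0
        partitions[(key, salt)][0].append(val)
    for key, val in s_records:
        salts = range(NUM_SALTS) if key in hot_keys else [0]
        for salt in salts:  # replicate S records to all salted partitions
            partitions[(key, salt)][1].append(val)
    return partitions

def reduce_side(partitions):
    # each partition joins its local R and S fragments independently
    return [(key, rv, sv)
            for (key, _), (rvals, svals) in partitions.items()
            for rv in rvals for sv in svals]

R = [("a", 1), ("a", 2), ("a", 3), ("b", 4)]
S = [("a", "x"), ("b", "y")]
result = reduce_side(map_side(R, S, hot_keys={"a"}))
print(sorted(result))
# → [('a', 1, 'x'), ('a', 2, 'x'), ('a', 3, 'x'), ('b', 4, 'y')]
```

The join output is independent of the random salts; only the distribution of work across reducers changes, which is what balances a skewed key.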
216 Speeding-Up a Video Summarization Approach using GPUs and Multicore-CPUs [abstract]
Abstract: The recent progress of digital media has stimulated the creation, storage and distribution of data such as digital videos, generating a large volume of data and requiring efficient technologies to increase the usability of these data. Video summarization methods generate concise summaries of video contents and enable faster browsing, indexing and accessing of large video collections; however, these methods are often slow on long, high-quality videos. One way to reduce this long execution time is to develop a parallel algorithm, exploiting recent computer architectures that allow high parallelism. This paper introduces parallelizations of a summarization method called VSUMM, targeting either Graphics Processing Units (GPUs) or multicore Central Processing Units (CPUs), and ultimately a sensible distribution of the computation steps onto both kinds of hardware to maximise performance, called "hybrid". We performed experiments using 180 videos, varying frame resolution (320 x 240, 640 x 360, and 1920 x 1080) and video length (1, 3, 5, 10, 20, and 30 minutes). From the results, we observed that the hybrid version achieved the best execution times, with a 7x speed-up on average.
Suellen Almeida, Antonio Carlos Nazaré Jr, Arnaldo De Albuquerque Araújo, Guillermo Cámara-Chávez, David Menotti

Main Track (MT) Session 10

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Tully I

Chair: S. Smanchat

18 A Workflow Application for Parallel Processing of Big Data from an Internet Portal [abstract]
Abstract: The paper presents a workflow application for efficient parallel processing of data downloaded from an Internet portal. The workflow partitions input files into subdirectories which are further split for parallel processing by services installed on distinct computer nodes. This way, analysis of the first ready subdirectories can start fast and is handled by services implemented as parallel multithreaded applications using multiple cores of modern CPUs. The goal is to assess achievable speed-ups and determine which factors influence scalability and to what degree. Data processing services were implemented for assessment of context (positive or negative) in which the given keyword appears in a document. The testbed application used these services to determine how a particular brand was recognized by either authors of articles or readers in comments in a specific Internet portal focused on new technologies. Obtained execution times as well as speed-ups are presented for data sets of various sizes along with discussion on how factors such as load imbalance and memory/disk bottlenecks limit performance.
Pawel Czarnul
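The partition-then-process scheme described in the abstract might be sketched as follows; the group count, the round-robin assignment and the placeholder analysis function are illustrative assumptions, not the paper's actual services.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the workflow's partitioning idea: input files are split into
# groups (standing in for subdirectories), and each group is handed to an
# independent service so that analysis of the first ready groups can
# start before partitioning of the rest has finished.

def partition(files, num_groups):
    """Round-robin assignment of files to groups."""
    groups = [[] for _ in range(num_groups)]
    for i, f in enumerate(files):
        groups[i % num_groups].append(f)
    return groups

def analyse(group):
    # placeholder for the keyword-context assessment service
    return len(group)

files = [f"article_{i}.txt" for i in range(10)]
groups = partition(files, 3)
with ThreadPoolExecutor(max_workers=3) as pool:
    counts = list(pool.map(analyse, groups))
print(counts)  # → [4, 3, 3]
```

In the paper the per-group work is itself a multithreaded service on a distinct node; the sketch only shows the dispatch structure whose load balance the authors measure.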
273 A comparative study of scheduling algorithms for the multiple deadline-constrained workflows in heterogeneous computing systems with time windows [abstract]
Abstract: Scheduling tasks with precedence constraints on a set of resources with different performance is a well-known NP-complete problem, and a number of effective heuristics have been proposed to solve it. If the start time and the deadline of each specific workflow are known (for example, if a workflow starts execution upon periodic data coming from sensors, and its execution should be completed before the next data acquisition), the problem of scheduling multiple deadline-constrained workflows arises. Taking into account that resource providers can give only restricted access to their computational capabilities, we consider the case when resources are only partially available for workflow execution. To address the problem described above, we study the scheduling of deadline-constrained scientific workflows in a non-dedicated heterogeneous environment. In this paper, we introduce three scheduling algorithms for mapping the tasks of multiple workflows with different deadlines onto a static set of resources with previously known free time windows. Simulation experiments show that scheduling strategies based on the proposed staged scheme give better results than a merge-based approach that considers all workflows at once.
Klavdiya Bochenina
292 Fault-Tolerant Workflow Scheduling Using Spot Instances on Clouds [abstract]
Abstract: Scientific workflows are used to model applications of high-throughput computation and complex large-scale data analysis. In recent years, Cloud computing has been fast evolving as the target platform for such applications among researchers. Furthermore, Cloud providers have pioneered new pricing models that allow users to provision resources and use them efficiently, with significant cost reductions. In this paper, we propose a scheduling algorithm that schedules tasks on Cloud resources using two different pricing models (spot and on-demand instances) to reduce the cost of execution whilst meeting the workflow deadline. The proposed algorithm is fault-tolerant against the premature termination of spot instances and also robust against performance variations of Cloud resources. Experimental results demonstrate that our heuristic reduces execution cost by up to 70% compared to using only on-demand instances.
Deepak Poola, Kotagiri Ramamohanarao, Rajkumar Buyya
308 On Resource Efficiency of Workflow Schedules [abstract]
Abstract: This paper presents the Maximum Effective Reduction (MER) algorithm, which optimizes the resource efficiency of a workflow schedule generated by any particular scheduling algorithm. MER trades the minimal makespan increase for the maximal resource usage reduction by consolidating tasks with the exploitation of resource inefficiency in the original workflow schedule. Our evaluation shows that the rate of resource usage reduction far outweighs that of the increase in makespan, i.e., the number of resources used is halved on average while incurring an increase in makespan of less than 10%.
Young Choon Lee, Albert Y. Zomaya, Hyuck Han
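The core consolidation idea, reusing a resource for several tasks whose time windows do not overlap, can be sketched as a greedy interval packing. This is only an illustration of the resource-reuse principle; it keeps task times fixed and so omits MER's central trade of a bounded makespan increase for further usage reduction.

```python
# Greedy re-packing of an existing schedule: tasks are given as
# (start, end) intervals; a resource is reused whenever its last task
# has finished before the next task starts.

def consolidate(intervals):
    """Return a list of resources, each a list of (start, end) tasks."""
    resources = []
    for start, end in sorted(intervals):
        for res in resources:
            if res[-1][1] <= start:  # this resource is free again
                res.append((start, end))
                break
        else:
            resources.append([(start, end)])  # provision a new resource
    return resources

# Five tasks that a naive schedule might place on five resources.
tasks = [(0, 3), (1, 2), (3, 5), (2, 4), (5, 6)]
packed = consolidate(tasks)
print(len(packed))  # → 2 resources instead of 5
```

Sorting by start time makes this the classic interval-partitioning greedy, which uses the minimum number of resources for fixed task times; MER goes further by also shifting tasks within a makespan budget.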
346 GridMD: a Lightweight Portable C++ Library for Workflow Management [abstract]
Abstract: In this contribution we present the current state of the open source GridMD workflow library. The library was originally designed for programmers of distributed Molecular Dynamics (MD) simulations; nowadays, however, it serves as a universal tool for creating and managing general workflows from a compact client application. GridMD is a programming tool aimed at developers of distributed software that utilizes local or remote compute capabilities to perform loosely coupled computational tasks. Unlike other workflow systems and platforms, GridMD is not integrated with heavy infrastructure such as Grid systems, web portals, user and resource management systems, or databases. It is a very lightweight tool, accessing and operating on a remote site with delegated user credentials. For starting compute jobs the library supports the Globus Grid environment, a set of cluster queuing managers such as PBS (Torque) or SLURM, and Unix/Windows command shells. All job starting mechanisms may be used either locally or remotely via the integrated SSH protocol. Working with different queues, starting parallel (MPI) jobs and changing job parameters are generically supported by the API. Jobs are started and monitored in a “passive” way, requiring no special task management agents to be running, or even installed, on the remote system. The workflow execution is monitored by an application (a task manager performing GridMD API calls) running on a client machine. Data transfer between different compute resources, and between the client machine and a compute resource, is performed by the exchange of files (gridftp or ssh channels). The task manager is able to checkpoint and restart the workflow and to recover from different types of errors without recalculating the whole workflow. The task manager itself can easily be terminated/restarted on the client machine or transferred to another client without breaking the workflow execution.
Apart from separate tasks such as command series or application launches, a GridMD workflow may also manage integrated tasks that are described by code compiled as part of the task manager. Moreover, the integrated tasks may change the workflow dynamically by adding jobs or dependencies to the existing workflow graph. This dynamic management of the workflow graph is an essential feature of GridMD, which gives great flexibility to the programmer of distributed scenarios. GridMD also provides a set of useful workflow skeletons for standard distributed scenarios such as Pipe, Fork, Parameter Sweep and Loop (implemented as a dynamic workflow). In the talk we will discuss the architecture and special features of GridMD. We will also briefly describe recent applications of GridMD as a base for a distributed job manager, for example in the multiscale OLED simulation platform (EU-Russia IM3OLED project).
Ilya Valuev and Igor Morozov

Dynamic Data Driven Application Systems (DDDAS) Session 3

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Tully II

Chair: Abani Patra

80 A posteriori error estimates for DDDAS inference problems [abstract]
Abstract: Inference problems in dynamically data-driven application systems use physical measurements along with a physical model to estimate the parameters or state of a physical system. Errors in measurements and uncertainties in the model lead to inaccurate inference results. This work develops a methodology to estimate the impact of various errors on the variational solution of a DDDAS inference problem. The methodology is based on models described by ordinary differential equations, and uses first-order and second-order adjoint methodologies. Numerical experiments with the heat equation illustrate the use of the proposed error estimation machinery.
Vishwas Hebbur Venkata Subba Rao, Adrian Sandu
162 Mixture Ensembles for Data Assimilation in Dynamic Data-Driven Environmental Systems [abstract]
Abstract: Many inference problems in environmental DDDAS must contend with high dimensional models and non-Gaussian uncertainties, including but not limited to Data Assimilation, Targeting and Planning. In this paper, we present the Mixture Ensemble Filter (MEnF), which extends ensemble filtering to non-Gaussian inference using Gaussian mixtures. In contrast to the state of the art, MEnF embodies an exact update equation that requires neither explicit calculation of mixture element moments nor ad-hoc association rules between ensemble members and mixture elements. MEnF is applied to the chaotic Lorenz-63 model and to a chaotic soliton model that allows idealized and systematic studies of localized phenomena. In both cases, MEnF outperforms contemporary approaches, and replaces ad-hoc Gaussian mixture approaches for non-Gaussian inference.
Piyush Tagade, Hansjorg Seybold, Sai Ravela
169 Optimizing Dynamic Resource Allocation [abstract]
Abstract: We present a formulation, solution method, and program acceleration techniques for two dynamic control scenarios, both with the common goal of optimizing resource allocations. These approaches allocate resources in a non-myopic way, accounting for long-term impacts of current control decisions via nominal belief-state optimization (NBO). In both scenarios, the solution techniques are parallelized for reduced execution time. A novel aspect is included in the second scenario: dynamically allocating the computational resources in an online fashion which is made possible through constant aspect ratio tiling (CART).
Lucas Krakow, Louis Rabiet, Yun Zou, Guillaume Iooss, Edwin Chong, Sanjay Rajopadhye
165 A Dataflow Programming Language and Its Compiler for Streaming Systems [abstract]
Abstract: The dataflow programming paradigm offers an important way to improve programming productivity for domain experts. In this position paper we propose COStream, a programming language based on the synchronous dataflow execution model. We also propose a compiler framework for COStream on multi-core architectures. In the compiler, we use an inter-thread software pipelining schedule to exploit the parallelism among the cores. We implemented the COStream compiler framework on the x86 multi-core architecture and performed experiments to evaluate the system.
Haitao Wei, Stephane Zuckerman, Xiaoming Li, Guang Gao
280 Static versus Dynamic Data Information Fusion analysis using DDDAS for Cyber Security Trust [abstract]
Abstract: Information fusion includes signal-, feature-, and decision-level analysis over various types of data including imagery, text, and cyber security detection. With the maturity of data processing, the explosion of big data, and the need for user acceptance, the Dynamic Data-Driven Application System (DDDAS) philosophy fosters insights into the usability of information systems solutions. In this paper, we explore a notion of adaptive adjustment of secure communication trust analysis that seeks a balance between standard static solutions and dynamic data-driven updates. A use case is provided in determining trust for a cyber security scenario, comparing Bayesian and evidential reasoning for dynamic security detection updates. Using the evidential reasoning proportional conflict redistribution (PCR) method, we demonstrate improved trust for dynamically changing detections of denial of service attacks.
Erik Blasch, Youssif Al-Nashif, Salim Hariri

Agent Based Simulations, Adaptive Algorithms and Solvers (ABS-AA-S) Session 3

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Tully III

Chair: Aleksander Byrski

325 Agent-based Evolutionary Computing for Difficult Discrete Problems [abstract]
Abstract: Hybridizing the agent-based paradigm with evolutionary computation can enhance the field of meta-heuristics in a significant way, giving usually passive individuals autonomy and capabilities of perception and interaction with one another, treating them as agents. In this paper, as a follow-up to previous research, an evolutionary multi-agent system (EMAS) is examined on difficult discrete benchmark problems. As a means of comparison, a classical evolutionary algorithm (constructed along the Michalewicz model) implemented as an island model is used. The results encourage further research on the application of EMAS in discrete problem domains.
Michal Kowol, Aleksander Byrski, Marek Kisiel-Dorohinicki
225 Translation of graph-based knowledge representation in multi-agent system [abstract]
Abstract: Agents provide a feasible means for maintaining and manipulating large-scale data. This paper deals with the problem of information exchange between different agents. It uses a graph-based formalism for the representation of knowledge maintained by an agent, and graph transformations as a means of knowledge exchange. Such a rigorous formalism ensures the cohesion of the graph-based knowledge held by agents after each modification and exchange action. The approach presented in this paper is illustrated by a case study dealing with the problem of personal data held in different places (maintained by different agents) and the process of transmitting such information.
Leszek Kotulski, Adam Sedziwy, Barbara Strug
239 Agent-based Adaptation System for Service-Oriented Architectures Using Supervised Learning [abstract]
Abstract: In this paper we propose an agent-based system for Service-Oriented Architecture self-adaptation. Services are supervised by autonomous agents, which are responsible for deciding which service should be chosen for interoperation. Agents learn the choice strategy autonomously using supervised learning. In experiments we show that supervised learning (Naive Bayes, C4.5 and Ripper) achieves much better efficiency than simple strategies such as random choice or round robin. Equally important, supervised learning generates knowledge in a readable form, which may be analyzed by experts.
Bartlomiej Sniezynski
324 Generation-free Agent-based Evolutionary Computing [abstract]
Abstract: Metaheuristics resulting from the hybridization of multi-agent systems with evolutionary computing are efficient in many optimization problems. Evolutionary multi-agent systems (EMAS) are more similar to biological evolution than classical evolutionary algorithms. However, technological limitations prevented the use of fully asynchronous agents in previous EMAS implementations. In this paper we present a new algorithm for agent-based evolutionary computations. The individuals are represented as fully autonomous and asynchronous agents. Evolutionary operations are performed continuously and no artificial generations need to be distinguished. Our results show that such asynchronous evolutionary operators and the resulting absence of explicit generations lead to significantly better results. An efficient implementation of this algorithm was possible through the use of Erlang technology, which natively supports lightweight processes and asynchronous communication.
Daniel Krzywicki, Jan Stypka, Piotr Anielski, Lukasz Faber, Wojciech Turek, Aleksander Byrski, Marek Kisiel-Dorohinicki
27 Hypergraph grammar based linear computational cost solver for three dimensional grids with point singularities [abstract]
Abstract: In this paper we present a hypergraph grammar based multi-frontal solver for three dimensional grids with point singularities. We show experimentally that the computational cost of the resulting solver algorithm is linear with respect to the number of degrees of freedom. We also propose a reutilization algorithm that enables the reuse of LU factorizations over unrefined parts of the mesh when new local refinements are executed by the hypergraph grammar productions.
Piotr Gurgul, Anna Paszynska, Maciej Paszynski

Bridging the HPC Talent Gap with Computational Science Research Methods (BRIDGE) Session 1

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Bluewater I

Chair: Vassil Alexandrov

153 In Need of Partnerships – An Essay about the Collaboration between Computational Sciences and IT Services [abstract]
Abstract: The Computational Sciences (CS) are challenging in many aspects, not only in the scientific domains they address, but especially also in their need for the most sophisticated IT infrastructures to perform their research. Often, the latest and most powerful supercomputers, high-performance networks and high-capacity data storage are utilized for CS, while being offered, developed and operated by experts outside CS. This standard service approach has certainly been useful for many domains, but more and more often it represents a limitation, caught between the needs of CS and the restrictions of the IT services. The partnership initiative πCS, established at the Leibniz Supercomputing Centre (LRZ), moves the collaboration between computational scientists and IT service providers to a new level, from a service-centered approach to an integrated partnership. The interface between them is a gateway to an improved collaboration between equal partners, such that future IT services address the requirements of CS in a better, optimized, and more efficient way. In addition, it sheds some light on future professional development.
Anton Frank, Ferdinand Jamitzky, Helmut Satzger, Dieter Kranzlmüller
281 Development of Multiplatform Adaptive Rendering Tools to Visualize Scientific Experiments [abstract]
Abstract: In this paper, we propose methods and tools for developing multiplatform adaptive visualization systems adequate to the specific visualization goals of experiments in different fields of science. The proposed approach was implemented, and we present a client-server rendering system, SciVi (Scientific Visualizer), which provides multiplatform portability and automated integration with different solvers based on ontology engineering methods. SciVi was developed at Perm State University to help scientists and researchers acquire multidisciplinary skills and solve real scientific problems.
Konstantin Ryabinin, Svetlana Chuprina
296 Education 2.0: Student Generated Learning Materials through Collaborative Work [abstract]
Abstract: In order to comply with the Integrated Learning Processes model, a course on operating systems was redesigned in such a way that students would generate most of their learning materials as well as a significant part of their evaluation exams. This new approach resulted in a statistically significant improvement in students' grades, as measured by a standardized exam, compared with a previous student intake.
Raul Ramirez-Velarde, Raul Perez-Cazares, Nia Alexandrov, Jose Jesus Garcia-Rueda
413 Challenges of Big Data and the Skills Gap [abstract]
Abstract: At present, Big Data has become a reality that no one can ignore. Big Data is our environment whenever we need to make a decision. Big Data is a buzzword that makes everyone understand how important it is. Big Data represents a big opportunity for academia, industry and government. Big Data is, then, a big challenge for all parties. This talk will discuss some fundamental issues of Big Data problems, such as data heterogeneity vs. decision heterogeneity, data stream research and data-driven decision management. Furthermore, this talk will present a number of real-life Big Data applications and will outline the challenges of bridging the skills gap while focusing on Big Data.
Yong Shi and Yingjie Tian

Workshop on Cell Based and Individual Based modelling (CBIBM) Session 1

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Bluewater II

Chair: James Osborne

395 The future of cell based modelling: connecting and coupling individual based models [abstract]
Abstract: When investigating the development and function of multicellular biological systems it is not enough to consider only the behaviour of individual cells in isolation. For example, when studying tissue development, how individual cells interact, both mechanically and biochemically, influences the resulting tissue's form and function. Cell based modelling allows you to represent and track the interaction of individual cells in a developing tissue. Existing models, including lattice based models (cellular automata and cellular Potts) and off-lattice models (cell centre and vertex based representations), have given us insight into how tissues maintain homeostasis and how mutations spread. However, developing tissues interact biochemically and biomechanically with their environment, and in order to capture these interactions, and the effect they have on development, the environment must be considered. We present a framework which allows multiple individual based models to be coupled together in order to model both the tissue and the surrounding environment. The framework can use different modelling paradigms for each component, and subcellular behaviour (for example the cell cycle) can be considered. In this talk we present two examples of such a coupling, from the fields of developmental biology and vascular remodelling.
James Osborne
206 Discrete-to-continuum modelling of nutrient-dependent cell growth [abstract]
Abstract: Continuum partial differential equation models of the movement and growth of large numbers of cells generally involve constitutive assumptions about macro-scale cell population behaviour. It is difficult to know whether these assumptions accurately represent the mechanical and chemical processes that occur at the level of discrete cells. By deriving continuum models from individual-based models (IBMs) we can obtain PDE approximations to IBMs and conditions for their validity. We have developed a hybrid discrete-continuum model of nutrient-dependent growth of a line of discrete cells on a substrate in a nutrient bath. The cells are represented by linear springs connected in series, with resting lengths that evolve according to the local nutrient concentration. In turn, the continuous nutrient field changes as the cells grow due to the change in nutrient uptake with changes in cell density and the length of the cell line. Following Fozard et al. [Math. Med. and Biol., 27(1):39--74, 2010], we have derived a PDE continuum model from the discrete model ODEs for the motion of the cell vertices and cell growth by taking the large cell number limit. We have identified the conditions under which the continuum model accurately approximates the IBM by comparing numerical simulations of the two models. In addition to making the discrete and continuum frameworks more suitable for modelling cell growth by incorporating nutrient transport, our work provides conditions on the cell density to determine whether the IBM or continuum model should be used. This is an important step towards developing a hybrid model of tissue growth that uses both the IBM and its continuum limit in different regions.
Lloyd Chapman, Rebecca Shipley, Jonathan Whiteley, Helen Byrne and Sarah Waters
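The spring-chain construction described in the abstract can be sketched as follows. This is our illustration, not the authors' code: resting lengths are held constant here (in the full model they evolve with the local nutrient concentration), the dynamics are overdamped with forward-Euler time stepping, and all parameters are illustrative.

```python
# Cell vertices x[0..N] joined by linear springs in series with resting
# lengths a[0..N-1]. The first vertex is pinned to the substrate origin;
# the last is free. Vertex velocity is proportional to the net spring
# tension (overdamped dynamics).

def step(x, a, k=1.0, dt=0.01):
    """One Euler step for vertex positions x with resting lengths a."""
    new_x = x[:]
    for i in range(1, len(x)):
        # spring to the left pulls vertex i back towards its rest length
        force = -k * ((x[i] - x[i - 1]) - a[i - 1])
        # spring to the right, if vertex i is not the free end
        if i + 1 < len(x):
            force += k * ((x[i + 1] - x[i]) - a[i])
        new_x[i] = x[i] + dt * force
    return new_x

# Relax a compressed chain of 4 cells towards resting length 1.0.
x = [0.0, 0.5, 1.0, 1.5, 2.0]
a = [1.0, 1.0, 1.0, 1.0]
for _ in range(20000):
    x = step(x, a)
print([round(v, 3) for v in x])  # → [0.0, 1.0, 2.0, 3.0, 4.0]
```

The continuum limit studied in the paper is obtained by letting the number of such cells grow large; coupling a nutrient field would amount to updating `a` from the local concentration at each step.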
434 Distinguishing mechanisms of cell aggregate formation using pair-correlation functions [abstract]
Edward Green
432 Cell lineage tracing in invading cell populations: superstars revealed! [abstract]
Abstract: Cell lineage tracing is a powerful tool for understanding how proliferation and differentiation of individual cells contribute to population behaviour. In the developing enteric nervous system (ENS), enteric neural crest (ENC) cells move and undergo massive population expansion by cell division within mesenchymal tissue that is itself growing. We use an agent-based model to simulate ENC colonisation and obtain agent lineage tracing data, which we analyse using econometric data analysis tools. Biological trials with clonally labelled ENS cells were also performed. In all realisations a small proportion of identical initial agents accounts for a substantial proportion of the total agent population. We term these individuals superstars. Their existence is consistent across individual realisations and is robust to changes in model parameters. However which individual agents will become a superstar is unpredictable. This inequality of outcome is amplified at elevated proliferation rate. Biological trials revealed identical and heretofore unexpected clonal behaviour. The experiments and model suggest that stochastic competition for resources is an important concept when understanding biological processes that feature high levels of cell proliferation. The results have implications for cell fate processes in the ENS and in other situations with invasive proliferative cells, such as invasive cancer.
Kerry Landman, Bevan Cheeseman and Donald Newgreen
435 Agent-based modelling of the mechanism of immune control at the cellular level in HIV infection [abstract]
Abstract: There are over 40 million people currently infected with HIV worldwide, and efforts to develop a vaccine would be greatly improved by a better understanding of how HIV survives and evolves. Recent studies discovered the ability of HIV target cells to present viral particles on their surface and trigger immune recognition and suppression by "killer" cells of the immune system. The effect of these "killers" remains poorly understood, yet it plays a key role in the control of HIV infection. While traditional vaccine approaches have been unsuccessful, vaccines against early-expressed conserved viral parts are promising and would make it possible to manage the ability of the virus to mutate and avoid immune recognition. To investigate the mechanism of "killer" cells, I developed an agent-based stochastic model of HIV dynamics at the cellular level. While the classic ODE approach is unable to simulate the dynamics that I observed in the experimental data, the agent-based stochastic model is easily comprehensible and exhibits similar kinetics. The complexity of the method increases greatly with the number of agents in the model, which may be effectively addressed by using parallel computations on Graphics Processing Units (GPUs). I found that the simulated dynamics almost completely resemble the experimental data and provide an answer to the question addressed. The model may also be applied in further work on the design of experiments to distinguish mechanisms more precisely.
Alexey Martyushev
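The abstract's point that stochastic cell-level models can behave qualitatively differently from ODEs can be illustrated with a minimal Gillespie-style simulation. This is a hypothetical sketch, not the paper's model: target cells T are infected at rate beta*T*I, and infected cells I are cleared at a rate that includes killing by a fixed effector pool E. All parameter values are invented for illustration.

```python
import random

def simulate_infection(t_end=30.0, beta=2e-4, delta=0.5, k=5e-4, seed=3):
    # Minimal stochastic infection model (illustrative only).
    # Events: infection T -> I at rate beta*T*I;
    #         clearance I -> 0 at rate (delta + k*E)*I.
    random.seed(seed)
    T, I, E = 1000, 5, 200
    t = 0.0
    while t < t_end and I > 0 and T > 0:
        r_inf = beta * T * I
        r_die = (delta + k * E) * I
        total = r_inf + r_die
        t += random.expovariate(total)         # time to next event
        if random.random() < r_inf / total:
            T -= 1; I += 1                     # infection event
        else:
            I -= 1                             # clearance event
    return T, I
```

Unlike a deterministic ODE, individual runs of such a model can show early stochastic extinction of the infection, one reason agent-based approaches can match small-population experimental kinetics where ODEs fail.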

Workshop on Teaching Computational Science (WTCS) Session 1

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Rosser

Chair: Angela Shiflet

56 An Introduction to Agent-Based Modeling for Undergraduates [abstract]
Abstract: Agent-based modeling (ABM) has become an increasingly important tool in computational science. Thus, in the final week of the 2013 fall semester, Wofford College's undergraduate Modeling and Simulation for the Sciences course (COSC/MATH 201) considered ABM using the NetLogo tool. The students explored existing ABMs and completed two tutorials that developed models of unconstrained growth and of the average distance covered by a random walker. The models demonstrated some of the utility of ABM and helped illustrate the similarities and differences between agent-based modeling and previously discussed techniques—system dynamics modeling, empirical modeling, and cellular automaton simulations. Improved test scores and questionnaire results support the success of the week's goals.
Angela Shiflet, George Shiflet
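The random-walker tutorial mentioned above can be sketched outside NetLogo as well. The following Python estimate of the mean distance from the origin after a fixed number of unit steps is my own illustration (function name and parameters are assumptions), showing the sqrt-of-steps scaling such a tutorial typically demonstrates.

```python
import math
import random

def mean_distance(n_walkers=500, n_steps=100, seed=0):
    # Average each walker's final distance from the origin after n_steps
    # unit-length steps in uniformly random directions; the mean grows
    # roughly like sqrt(n_steps).
    random.seed(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / n_walkers
```

For 100 steps the expected distance is about sqrt(pi * 100) / 2 ≈ 8.9, so the simulated average should land near that value.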
220 Computational Science for Undergraduate Biologists via QUT.Bio.Excel [abstract]
Abstract: Molecular biology is a scientific discipline which has changed fundamentally in character over the past decade to rely on large-scale datasets – public and locally generated – and their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages and rich computational environments such as R and Matlab present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe a configurable extension called QUT.Bio.Excel, a custom ribbon supporting a rich set of data sources, external tools and interactive processing within the spreadsheet, and a range of problems that demonstrate its utility and success in addressing the needs of students over their studies.
Lawrence Buckingham, James Hogan
54 A multiple intelligences theory-based 3D virtual lab environment for digital systems teaching [abstract]
Abstract: This paper describes a 3D virtual lab environment that was developed using OpenSim software integrated into Moodle. The Virtuald software tool was used to provide pedagogical support to the lab by enabling the creation of online texts and their delivery to the students. The courses taught in this virtual lab conform methodologically to the theory of multiple intelligences. Some results are presented.
Toni Amorim, Norian Marranghello, Alexandre C.R. Silva, Aledir S. Pereira, Leandro Tapparo
349 Exploring Rounding Errors in Matlab using Extended Precision [abstract]
Abstract: We describe a simple package of Matlab programs which implements an extended-precision class in Matlab. We give some examples of how this class can be used to demonstrate the effects of rounding errors and truncation errors in scientific computing. The package is based on a representation called Double-Double, which represents each floating-point real as an unevaluated sum of two IEEE double-precision floating-point numbers. This allows Matlab computations that are accurate to 30 decimal digits. The data structure, basic arithmetic and elementary functions are implemented as a Matlab class, entirely using the Matlab programming language.
Dina Tsarapkina, David Jeffrey
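The core of the Double-Double representation can be illustrated outside Matlab. The sketch below (in Python, with invented helper names; this is not the package's API) uses Knuth's error-free two-sum to carry the rounding error of each addition in a second double, which is the mechanism behind the roughly 30-digit accuracy mentioned above.

```python
def two_sum(a, b):
    # Error-free transformation (Knuth): returns (s, e) where s = fl(a + b)
    # and a + b = s + e exactly in IEEE double precision.
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    # Add two double-double numbers, each stored as a (hi, lo) pair whose
    # exact value is hi + lo. The rounding error of hi + hi is recovered
    # by two_sum and folded into the low-order component.
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)
```

For example, adding 2**-60 to 1.0 in plain double precision simply returns 1.0, but in the double-double form the tiny addend survives in the low component.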

Large Scale Computational Physics (LSCP) Session 1

Time and Date: 11:00 - 12:40 on 11th June 2014

Room: Mossman

Chair: Fukuko YUASA

404 Development of lattice QCD simulation code set ``Bridge++'' on accelerators [abstract]
Abstract: We are developing a new code set ``Bridge++'' for lattice QCD (Quantum Chromodynamics) simulations. It aims to be an extensible, readable, and portable workbench while achieving high performance. Bridge++ covers popular lattice actions and numerical algorithms. The code set is written in C++ using object-oriented programming. In this paper, we describe our code design, focusing on the use of accelerators such as GPGPUs. For portability, our implementation employs OpenCL to control the devices while encapsulating the details of device manipulation behind generalized interfaces. The code has been successfully applied to several recent accelerators.
Shinji Motoki, Shinya Aoki, Tatsumi Aoyama, Kazuyuki Kanaya, Hideo Matsufuru, Yusuke Namekawa, Hidekatsu Nemura, Yusuke Taniguchi, Satoru Ueda, Naoya Ukita
406 GPGPU Application to the Computation of Hamiltonian Matrix Elements between Non-orthogonal Slater Determinants in the Monte Carlo Shell Model [abstract]
Abstract: We apply GPU-accelerated computation to the calculation of Hamiltonian matrix elements between non-orthogonal Slater determinants utilized in the Monte Carlo shell model. The bottleneck of this calculation is the two-body part of the Hamiltonian matrix elements. We explain an efficient computational method to overcome this bottleneck. For General-Purpose computing on the GPU (GPGPU) with this method, we propose a computational procedure that avoids unnecessary data-transfer costs to the GPU device and aims for efficient computation using the cuBLAS interface and OpenACC directives. As a result, we achieve about 40 times better performance in FLOPS compared with a single-threaded CPU process for the two-body part of the Hamiltonian matrix elements.
Tomoaki Togashi, Noritaka Shimizu, Yutaka Utsuno, Takashi Abe, Takaharu Otsuka
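The key optimisation idea, recasting a four-index two-body contraction as dense matrix products that a BLAS library (cuBLAS on the GPU) can execute efficiently, can be sketched with a generic density-matrix contraction. This is an illustrative example in NumPy, not the paper's actual formulation; the contraction pattern and names are assumptions.

```python
import numpy as np

def two_body_reference(v, rho):
    # Direct four-index contraction: sum_{ijkl} v[i,j,k,l] rho[k,i] rho[l,j]
    # (a generic two-body expectation value; reference implementation).
    return np.einsum('ijkl,ki,lj->', v, rho, rho)

def two_body_blas(v, rho):
    # Same contraction recast as BLAS-style operations: fold the index pairs
    # (i,k) and (j,l), so the sum becomes b^T V2 b, dominated by one
    # matrix-vector product -- the form a cuBLAS call could offload.
    n = rho.shape[0]
    v2 = v.transpose(0, 2, 1, 3).reshape(n * n, n * n)  # V2[(i,k),(j,l)]
    b = rho.T.reshape(n * n)                            # b[(i,k)] = rho[k,i]
    return b @ (v2 @ b)
```

Both routines compute the same number; the second exposes the arithmetic as large dense linear-algebra kernels, which is where GPU BLAS libraries deliver their speedup.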