Environmental Computing Applications - State of the Art (ECASA) Session 1

Time and Date: 10:35 - 12:15 on 6th June 2016

Room: Plumeria Suite

Chair: M. Heikkurinen

552 Introduction to Environmental Computing
Abstract: TBC
Dieter Kranzlmüller
556 Scientific Workflows for Environmental Computing
Abstract: Environmental computing often involves diverse data sets, large and complex simulations, user interaction, optimisation, high performance computing, scientific visualisation and complex orchestration. Scientific workflows are an ideal platform for incorporating all these aspects of environmental computing. They provide a common framework that not only specifies and documents complex applications but also serves as an execution platform. In this talk I will describe how Nimrod/OK achieves this goal. Nimrod/OK is based on the long-running Nimrod tool set and the Kepler scientific workflow engine. It incorporates a novel user interaction tool called WorkWays, which combines Kepler and Science Gateways. It also includes non-linear optimisation algorithms that allow complex environmental problems to be solved. I will demonstrate Nimrod/OK and WorkWays with a number of environmental applications involving wildfire simulations and ecological planning. (An illustrative sketch of this pattern follows after this entry.)
David Abramson
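
As a rough illustration of the sweep-plus-optimisation pattern this abstract describes, the following Python sketch runs a parameter sweep and then hands the best point to a non-linear optimiser. It is a toy under stated assumptions: total_cost is a hypothetical surrogate for an expensive wildfire simulation job, and nothing here uses the actual Nimrod/OK, WorkWays or Kepler APIs.

```python
from itertools import product
from scipy.optimize import minimize

def total_cost(firebreak_width, crew_count):
    # Hypothetical surrogate for an expensive wildfire simulation:
    # expected burnt area shrinks with mitigation effort, which in
    # turn carries its own cost, giving an interior optimum.
    burnt_area = 100.0 / (1.0 + 0.5 * firebreak_width + 0.2 * crew_count)
    return burnt_area + 3.0 * firebreak_width + 1.5 * crew_count

# Sweep phase: one "simulation job" per parameter combination, the
# kind of fan-out a workflow engine would schedule onto HPC resources.
sweep = {(w, c): total_cost(w, c)
         for w, c in product([1.0, 2.0, 4.0], [2, 4, 8])}

# Optimisation phase: a non-linear optimiser refines the best sweep
# point, each evaluation standing in for a further workflow job.
w0, c0 = min(sweep, key=sweep.get)
result = minimize(lambda x: total_cost(x[0], x[1]),
                  x0=[w0, c0], method="Nelder-Mead")
print("best parameters:", result.x, "cost:", result.fun)
```

In a system like Nimrod/OK, both phases are expressed as workflow stages rather than ad hoc scripts, so the engine can handle scheduling, data movement and documentation of the experiment.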
558 Automating Real-time Seismic Analysis Through Streaming and High Throughput Workflows
Abstract: In order to support the computational and data needs of today’s science, new knowledge must be gained on how to deliver the growing capabilities of the national cyberinfrastructures, and more recently commercial clouds, to the scientist’s desktop in an accessible, reliable, and scalable way. Over more than a decade of working with domain scientists, the Pegasus workflow management system has been used by researchers to model seismic wave propagation, to discover new celestial objects, to study RNA critical to human brain development, and to investigate other important research questions. Recently, the Pegasus and dispel4py teams have collaborated to enable automated processing of real-time seismic interferometry and earthquake “repeater” analysis using data collected from the IRIS database. The proposed integrated solution enables real-time stream-based workflows to run seamlessly on different distributed infrastructures (or across the wide area), with data automatically managed by a task-oriented workflow system that orchestrates the distributed execution. We have demonstrated the feasibility of this approach by using Docker containers to deploy the workflow management systems on two different computing infrastructures: an Apache Storm cluster for real-time processing, and an MPI-based cluster for shared-memory computing. Stream-based execution is managed by dispel4py, while the data movement between the clusters and the workflow engine (submit host) is managed by Pegasus. (See the dispel4py sketch after this entry.)
Rafael Ferreira Da Silva
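
To make the stream-based half of this pipeline concrete, here is a minimal sketch assuming dispel4py’s GenericPE and WorkflowGraph interfaces; the processing-element names and the synthetic traces are hypothetical stand-ins for the real IRIS-fed seismic analysis, not the workflow from the talk.

```python
from dispel4py.core import GenericPE
from dispel4py.workflow_graph import WorkflowGraph

class TraceSource(GenericPE):
    """Hypothetical producer emitting synthetic waveform windows;
    a real workflow would pull these from the IRIS data stream."""
    def __init__(self):
        GenericPE.__init__(self)
        self._add_output('output')
        self.count = 0

    def _process(self, inputs):
        # Invoked once per iteration by the enactment engine.
        self.count += 1
        return {'output': ('STA%02d' % self.count, [0.0] * 100)}

class Demean(GenericPE):
    """Hypothetical preprocessing step: removes the mean from each
    trace before any cross-correlation downstream."""
    def __init__(self):
        GenericPE.__init__(self)
        self._add_input('input')
        self._add_output('output')

    def _process(self, inputs):
        station, samples = inputs['input']
        mean = sum(samples) / len(samples)
        return {'output': (station, [s - mean for s in samples])}

# Wire the processing elements into a streaming graph.
graph = WorkflowGraph()
graph.connect(TraceSource(), 'output', Demean(), 'input')
```

Because the graph itself says nothing about where it runs, the same definition can be enacted sequentially for testing or mapped onto platforms such as the Storm and MPI clusters mentioned above, with Pegasus managing data movement between them.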