
ICCS 2019 Main Track (MT) Session 3

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 1.5

Chair: To be announced

12 Forecasting Model for Network Throughput of Remote Data Access in Computing Grids [abstract]
Abstract: Computing grids are one of the key enablers of computational science. Researchers from many fields (High Energy Physics, Bioinformatics, Climatology, etc.) employ grids for execution of distributed computational jobs. Such computing workloads are typically data-intensive. The current state of the art approach for data access in grids is data placement: a job is scheduled to run at a specific data center, and its execution commences only when the complete input data has been transferred there. An alternative approach is remote data access: a job may stream the input data directly from storage elements. Remote data access brings two innovative benefits: (1) the jobs can be executed asynchronously with respect to the data transfer; (2) when combined with data placement on the policy level, it may help to optimize the network load grid-wide, since these two data access methodologies partially exhibit nonoverlapping bottlenecks. However, in order to employ such a technique systematically, the properties of its network throughput need to be studied carefully. This paper presents results of experimental identification of parameters influencing the throughput of remote data access, a statistically tested formalization of these parameters and a derived throughput forecasting model. The model is applicable to large computing workloads, robust with respect to arbitrary dynamic changes in the grid infrastructure and exhibits a long-term forecasting horizon. Its purpose is to assist various stakeholders of the grid in decision-making related to data access patterns. This work is based on measurements taken on the Worldwide LHC Computing Grid at CERN.
Volodimir Begy, Martin Barisits, Mario Lassnig and Erich Schikuta
408 Collaborative Simulation Development Accelerated by Cloud Based Computing and Software as a Service Model [abstract]
Abstract: Simulations are increasingly used in pharmaceutical development to deliver medicines to patients more quickly; more efficiently; and with better designs, safety, and effect. These simulations need high performance computing resources as well as a variety of software to model the processes and effects on the pharmaceutical product at various scales of scrutiny: from the atomic scale to the entire production process. The demand curve for these resources has many peaks and can shift on a time scale much faster than a typical procurement process. Both on-demand cloud based computing capability and software as a service models have been growing in use. This presentation describes the efforts of the Enabling Technology Consortium to apply these information technology models to pharmaceutical simulations, which have special needs for documentation and security. The environment is expected to offer further benefits, as the cloud can be configured for collaborative work among companies in the non-competitive space and all of the work can be made available for use by contract service vendors or health authorities. The expected benefits of this computing environment include economies of scale for both providers and consumers, increased resources, and better use of the available information to accelerate and improve the delivery of pharmaceutical products.
Howard Stamato
487 Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows [abstract]
Abstract: While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of application-level models that have been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed platform instrumented with power meters. We then conduct an analysis of power and energy consumption measurements. This analysis shows that power consumption is not linearly related to CPU utilization and that I/O operations significantly impact power, and thus energy, consumption. We then propose a power consumption model that accounts for I/O operations, including the impact of waiting for these operations to complete, and for concurrent task executions on multi-socket, multi-core compute nodes. We implement our proposed model as part of a simulator that allows us to draw direct comparisons between real-world and modeled power and energy consumption. We find that our model has high accuracy when compared to real-world executions. Furthermore, our model improves accuracy by about two orders of magnitude when compared to the traditional models used in the energy-efficient workflow scheduling literature.
Rafael Ferreira Da Silva, Anne-Cécile Orgerie, Henri Casanova, Ryan Tanaka, Ewa Deelman and Frédéric Suter
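The abstract's claim that power is not linear in CPU utilization and that I/O matters can be sketched as a toy node-power model. The functional form and every coefficient below are illustrative assumptions, not the paper's fitted model.

```python
# Illustrative power model: idle power plus a sub-linear CPU term and an
# I/O term (the paper argues linear-in-CPU models are insufficient).
# All coefficients are invented for illustration.

def power_watts(cpu_util, io_active_fraction,
                p_idle=90.0, p_cpu_max=60.0, alpha=0.6, p_io=25.0):
    """Estimate node power draw in watts.

    cpu_util           -- CPU utilization in [0, 1]
    io_active_fraction -- fraction of time spent in (or waiting on) I/O
    alpha              -- exponent making the CPU term sub-linear
    """
    assert 0.0 <= cpu_util <= 1.0 and 0.0 <= io_active_fraction <= 1.0
    return p_idle + p_cpu_max * cpu_util ** alpha + p_io * io_active_fraction

def energy_joules(power_w, seconds):
    """Energy is power integrated over time; constant power here."""
    return power_w * seconds

# A CPU-bound task vs. an I/O-heavy task of the same duration:
p_cpu_bound = power_watts(0.9, 0.05)
p_io_heavy = power_watts(0.3, 0.8)
```

With such a model, an I/O-heavy task draws noticeably more power than its low CPU utilization alone would suggest, which is the effect the paper measures.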
62 Exploratory Visual Analysis of Anomalous Runtime Behavior in Streaming High Performance Computing Applications [abstract]
Abstract: Online analysis of runtime behavior is essential for performance tuning in streaming scientific workflows. Integration of anomaly detection and visualization is necessary to support human-centered analysis, such as verification of candidate anomalies using domain knowledge. In this work, we propose an efficient and scalable visual analytics system for online performance analysis of scientific workflows toward the exascale scenario. Our approach uses a call stack tree representation to encode the structural and temporal information of the function executions. Based on call stack tree features (e.g., the execution time of the root function or a vector representation of the tree structure), we employ online anomaly detection approaches to identify candidate anomalous function executions. We also present a set of visualization tools for verification and exploration in a level-of-detail manner. General information, such as the distribution of execution times, is provided in an overview visualization. The detailed structure (e.g., function invocation relations) and the temporal information (e.g., message communication) of the execution call stack of interest are also visualized. The usability and efficiency of our methods are verified in the NWChem use case.
Cong Xie, Wonyong Jeong, Gyorgy Matyasfalvi, Hubertus Van Dam, Klaus Mueller, Shinjae Yoo and Wei Xu

ICCS 2019 Main Track (MT) Session 11

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 1.3

Chair: To be announced

331 Robust Ensemble-Based Evolutionary Calibration of the Numerical Wind Wave Model [abstract]
Abstract: The adaptation of numerical wind wave models to local time-spatial conditions is a problem that can be solved by using various calibration techniques. However, the obtained sets of physical parameters become over-tuned to specific events if there is a lack of observations. In this paper, we propose a robust evolutionary calibration approach that allows us to build a stochastic ensemble of perturbed models and use it to achieve a trade-off between the quality and robustness of the target model. The implemented robust ensemble-based evolutionary calibration (REBEC) approach was compared to the baseline SPEA2 algorithm in a set of experiments with the SWAN wind wave model configured for the Kara Sea domain. The metrics obtained for the set of scenarios confirm the effectiveness of the REBEC approach for the majority of calibration scenarios.
Pavel Vychuzhanin, Nikolay Nikitin and Anna Kalyuzhnaya
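The robustness idea above can be miniaturized: score a candidate parameter set across an ensemble of perturbed model runs rather than a single run, trading mean error against its spread. The toy "model", the perturbations, and the alpha weight below are illustrative inventions, not the paper's.

```python
import statistics

def robust_fitness(params, perturbations, model, observation, alpha=0.5):
    """Mean calibration error over perturbed runs, penalized by its spread.

    A candidate that is good only for one particular perturbation scores
    worse than one that is consistently decent across the ensemble.
    """
    errors = [abs(model(params, eps) - observation) for eps in perturbations]
    return statistics.fmean(errors) + alpha * statistics.pstdev(errors)

# A toy "wave model": output depends on one physical parameter plus a
# multiplicative forcing perturbation.
model = lambda p, eps: p * (1.0 + eps)
observation = 2.0
perturbations = [-0.1, 0.0, 0.1]
```

An evolutionary search (SPEA2 or otherwise) would then minimize `robust_fitness` instead of the single-run error.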
438 Approximate Repeated Administration Models for Pharmacometrics [abstract]
Abstract: Employing multiple processes in parallel is a common approach to reduce running-times in high-performance computing applications. However, improving performance through parallelization is only part of the story. At some point, all available parallelism is exploited and performance improvements need to be sought elsewhere. As part of drug development trials, a compound is periodically administered, and the interactions between it and the human body are modeled through pharmacokinetics and pharmacodynamics by a set of ordinary differential equations. Numeric integration of these equations is the most computationally intensive part of the fitting process. For this task, parallelism brings little benefit. This paper describes how to exploit the nearly periodic nature of repeated administration models by numeric application of the method of averaging on the one hand and reusing previous computational effort on the other hand. The presented method can be applied on top of any existing integrator while requiring only a single tunable threshold parameter. Performance improvements and approximation error are studied on two pharmacometrics models. In addition, automated tuning of the threshold parameter is demonstrated in two scenarios. Up to 1.7-fold and 70-fold improvements are measured with the presented method for the two models respectively.
Balazs Nemeth, Tom Haber, Jori Liesenborgs and Wim Lamotte
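The near-periodicity that the averaging and reuse tricks exploit can be seen in the simplest repeated-administration setting: a one-compartment model with an intravenous bolus every `tau` hours and first-order elimination. This is a textbook sketch, not one of the paper's two pharmacometrics models.

```python
import math

def concentration(t, dose, v, k, tau):
    """Superposition of all boluses given at times 0, tau, 2*tau, ... <= t.

    dose -- amount per administration; v -- volume of distribution;
    k    -- first-order elimination rate constant.
    """
    c = 0.0
    n = 0
    while n * tau <= t:
        c += (dose / v) * math.exp(-k * (t - n * tau))
        n += 1
    return c

def average_steady_state(dose, v, k, tau):
    """Closed-form mean steady-state concentration: dose / (v * k * tau).

    After the transient dies out, the response is periodic and this is its
    average -- the kind of quantity a method-of-averaging approach targets.
    """
    return dose / (v * k * tau)
```

Averaging the full superposition over one late period reproduces the closed-form value, which is why the expensive integration need not be repeated dose after dose.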
466 Evolutionary Optimization of Intruder Interception Plans for Mobile Robot Groups [abstract]
Abstract: The task of automated intruder detection and interception is often considered a suitable application for groups of mobile robots. Realistic versions of the problem include representing uncertainty, which turns them into NP-hard optimization tasks. In this paper we define the problem of indoor intruder interception with a probabilistic intruder motion model and uncertainty in intruder detection. We define a model for representing the problem and propose an algorithm for optimizing plans for groups of mobile robots patrolling the building. The proposed evolutionary multi-agent algorithm uses a novel representation of solutions. The algorithm has been evaluated on different problem sizes and compared with other methods.
Wojciech Turek, Agata Kubiczek and Aleksander Byrski
434 Synthesizing quantum circuits via numerical optimization [abstract]
Abstract: We provide a simple framework for the synthesis of quantum circuits based on a numerical optimization algorithm. This algorithm is used in the context of the trapped-ions technology. We derive theoretical lower bounds for the number of quantum gates required to implement any quantum algorithm. Then we present numerical experiments with random quantum operators where we compute the optimal parameters of the circuits and we illustrate the correctness of the theoretical lower bounds. We finally discuss the scalability of the method with the number of qubits.
Timothée Goubault de Brugière, Marc Baboulin, Benoît Valiron and Cyril Allouche
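A one-qubit toy of synthesis-by-optimization: fit the three Euler angles of an Rz-Ry-Rz circuit to a target unitary by minimizing a phase-invariant infidelity. The paper treats multi-qubit trapped-ion gate sets; the optimizer, random starting points, and Hadamard target below are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(b):
    return np.array([[np.cos(b / 2), -np.sin(b / 2)],
                     [np.sin(b / 2), np.cos(b / 2)]])

def circuit(params):
    """ZYZ parameterization: any single-qubit unitary up to global phase."""
    a, b, c = params
    return rz(a) @ ry(b) @ rz(c)

def infidelity(params, target):
    # 1 - |tr(target^dagger U)| / d is invariant under a global phase of U
    d = target.shape[0]
    return 1.0 - abs(np.trace(target.conj().T @ circuit(params))) / d

hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
rng = np.random.default_rng(0)
best = min((minimize(infidelity, x0=rng.uniform(0.0, 2.0 * np.pi, 3),
                     args=(hadamard,), method="Nelder-Mead")
            for _ in range(8)),
           key=lambda r: r.fun)
```

The multistart simply hedges against local minima; the paper's lower bounds concern how many such parameterized gates are needed at all.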
455 Application of continuous time quantum walks to image segmentation [abstract]
Abstract: This paper provides a new algorithm that applies the concept of continuous-time quantum walks to the image segmentation problem. The work, inspired by results from its classical counterpart, presents and compares two versions of the solution regarding the calculation of pixel-segment association: one using the limiting distribution of the walk and one using the last-step distribution. The obtained results vary in terms of accuracy and the possibility of being ported to a real quantum device. The described results were obtained by simulation on a classical computer, but the algorithms were designed in a way that will allow a real quantum computer to be used when one is ready.
Michał Krok, Katarzyna Rycerz and Marian Bubak
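The underlying primitive, a continuous-time quantum walk on a pixel-adjacency graph, can be sketched in a few lines: evolve exp(-iAt) from a seed vertex and read off the measurement distribution. The segmentation step itself (assigning each pixel to the seed segment with the highest probability) is omitted; the tiny 2x2 grid is an illustrative stand-in for an image.

```python
import numpy as np
from scipy.linalg import expm

def ctqw_distribution(adjacency, start, t):
    """Probability of finding the walker at each vertex after time t."""
    n = adjacency.shape[0]
    psi0 = np.zeros(n, dtype=complex)
    psi0[start] = 1.0
    # unitary evolution under the graph Hamiltonian (here the adjacency matrix)
    psi_t = expm(-1j * t * adjacency) @ psi0
    return np.abs(psi_t) ** 2

# 2x2 pixel grid as a cycle graph 0-1-3-2-0:
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
p = ctqw_distribution(A, start=0, t=0.7)
```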

Applications of Matrix Methods in Artificial Intelligence and Machine Learning (AMAIML) Session 2

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 0.3

Chair: Kourosh Modarresi

521 Determining Adaptive Loss Functions and Algorithms for Predictive Models [abstract]
Abstract: We consider the problem of training models to predict sequential processes. We use two econometric datasets to demonstrate how different losses and learning algorithms alter the predictive power for a variety of state-of-the-art models. We investigate how the choice of loss function impacts model training and find that no single algorithm or loss function results in optimal predictive performance. For small datasets, neural models prove especially sensitive to training parameters, including choice of loss function and pre-processing steps. We find that a recursively-applied artificial neural network trained under L1 loss performs best under many different metrics on a national retail sales dataset, whereas a differenced autoregressive model trained under L1 loss performs best under a variety of metrics on an e-commerce dataset. We note that different training metrics and processing steps result in appreciably different performance across all model classes and argue for an adaptive approach to model fitting.
Kourosh Modarresi and Michael Burkhart
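The paper's core observation, that the training loss changes which model you get, has a classic minimal illustration: over constant predictors, L2 loss is minimized by the mean and L1 loss by the median, so a single outlier separates the two fits. The numbers are synthetic.

```python
import statistics

# One outlier in otherwise stable data:
data = [10.0, 11.0, 9.5, 10.5, 95.0]

l2_fit = statistics.fmean(data)    # argmin over c of sum (y - c)^2
l1_fit = statistics.median(data)   # argmin over c of sum |y - c|
```

The L2 fit is dragged far toward the outlier while the L1 fit barely moves, which is one reason the L1-trained models in the paper can behave so differently on noisy retail data.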
522 Adaptive Objective Functions and Distance Metrics for Recommendation Systems [abstract]
Abstract: We describe, develop, and implement different models for the standard matrix completion problem from the field of recommendation systems. We benchmark these models against the publicly available Netflix Prize challenge dataset, consisting of ratings on a 1-5 scale for (user, movie) pairs. We used the 99 million examples to develop individual models, built ensembles on a separate validation set of 1 million examples, and tested both individual models and ensembles on a held-out set of over 400,000 examples. While the original competition concentrated only on RMSE, we experiment with different objective functions for model training, ensemble construction, and model/ensemble testing. Our best-performing estimators were (1) a linear ensemble of base models trained using linear regression (see ensemble e1, RMSE: 0.912) and (2) a neural network that aggregated predictions from individual models (see ensemble e4, RMSE: 0.912). Many of the constituent models in our ensembles had yet to be developed at the time the Netflix competition concluded in 2009. To our knowledge, not much research has been done to establish best practices for combining these models into ensembles. We consider this problem, with a particular emphasis on the role that the choice of objective function plays in ensemble construction. For a full list of learned models and ensembles, see Tables 1 and 2.
Kourosh Modarresi and Michael Burkhart
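The construction behind ensemble e1, learning linear blend weights for base-model predictions on a validation set, can be sketched with a least-squares fit. The base-model predictions below are synthetic stand-ins, not Netflix data.

```python
import numpy as np

def fit_blend(val_preds, val_truth):
    """Learn linear blend weights on a validation set.

    val_preds: (n_examples, n_models) matrix of base-model predictions.
    """
    weights, *_ = np.linalg.lstsq(val_preds, val_truth, rcond=None)
    return weights

def blend(weights, preds):
    return preds @ weights

rng = np.random.default_rng(0)
truth = rng.uniform(1.0, 5.0, size=200)
# two imperfect base models: one biased, one noisy
preds = np.column_stack([truth + 0.5,
                         truth + rng.normal(0.0, 0.3, 200)])
w = fit_blend(preds, truth)
```

By construction the least-squares blend can do no worse on the fitting set than the best single base model, since picking that model is one feasible weight vector.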
60 An Early Warning Method for Basic Commodities Price Spike Based on Artificial Neural Networks Prediction [abstract]
Abstract: Spikes in the prices of basic commodities are a serious problem for food security and can have wide-ranging effects, including social unrest. Their occurrence should be anticipated early enough, because the government needs sufficient time to form anticipatory policies and take proactive action to overcome the problem. According to Indonesian food law, the government should develop an integrated information system on food security, which includes an early warning function. This study proposes an early warning method based on a Multi-Layer Perceptron predictive model with Multiple Input Multiple Output (MIMO). The warning status is determined from the coefficient of variation of the obtained price predictions relative to the government's reference price. A great deal of attention was paid to tuning the model parameters to obtain the most accurate prediction. Model selection was conducted by time-series k-fold cross-validation with the mean squared error criterion. The predictive model gives good performance, with average normalized root mean squared errors for the sample commodities ranging from 9.909% to 18.046%. Importantly, the method is promising for modelling basic commodity prices and may help the government predict price spikes and determine further anticipatory policies.
Amelec Viloria
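One plausible reading of the warning rule described above: compute a variation measure of the predicted prices relative to the reference price and compare it to thresholds. The exact formula, the threshold values, and the status names are illustrative assumptions, not taken from the paper.

```python
def coefficient_of_variation(predicted, reference):
    """Root-mean-square relative deviation of predictions from the reference."""
    n = len(predicted)
    return (sum(((p - reference) / reference) ** 2 for p in predicted) / n) ** 0.5

def warning_status(predicted, reference, caution=0.05, alert=0.10):
    """Map the variation of predicted prices to a warning level."""
    cv = coefficient_of_variation(predicted, reference)
    if cv >= alert:
        return "alert"
    if cv >= caution:
        return "caution"
    return "normal"
```

In the paper's pipeline the `predicted` sequence would come from the MIMO Multi-Layer Perceptron forecast for the coming days.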
22 Predicting Heart Attack through Explainable Artificial Intelligence [abstract]
Abstract: This paper reports a novel classification technique that implements a genetic algorithm (GA) based trained ANFIS to diagnose heart disease. The performance of the proposed system was investigated with evaluation functions including sensitivity, specificity, precision, accuracy, and the Root Mean Squared Error (RMSE) between the desired and predicted outputs. It was shown that the suggested model is reliable and achieves high values of the evaluation functions. In addition, a novel technique was proposed that automatically provides explainability graphs for patients based on the predicted results. The reliability and explainability of the system were the main aims of this paper and were demonstrated using different criteria. Additionally, the importance of the different symptoms and features in the diagnosis of heart disease was investigated by defining an importance evaluation function, and it was shown that some features play a key role in the prediction of heart disease.
Mehrdad Aghamohammadi, Manvi Madan, Jung Ki Hong and Ian Watson

Data Driven Computational Sciences 2019 (DDCS) Session 2

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 0.4

Chair: Craig Douglas

141 An Implementation of Coupled Dual-Porosity-Stokes Model with FEniCS [abstract]
Abstract: Coupled porous media and conduit systems are heavily used in a variety of areas. A coupled dual-porosity-Stokes model has been proposed to simulate fluid flow in a system coupling dual-porosity media and conduits. In this paper, we propose an implementation of this multi-physics model. We solve the system with FEniCS, an automated high-performance environment for solving differential equations. Tests of the convergence rate of our implementation in both 2D and 3D are conducted, and we also test the performance and scalability of our implementation.
Xiukun Hu and Craig C. Douglas
443 Anomaly Detection in Social Media using Recurrent Neural Network [abstract]
Abstract: In today’s information environment there is an increasing reliance on online and social media in the acquisition, dissemination and consumption of news. Specifically, the utilization of social media platforms such as Facebook and Twitter has increased as a cutting-edge medium for breaking news. On the other hand, the low cost, easy access and rapid propagation of news through social media make the platform more sensitive to fake and anomalous reporting. The propagation of fake and anomalous news is not some benign exercise: the extensive spread of fake news has the potential to do serious and real damage to individuals and society. As a result, the detection of fake news in social media has become a vibrant and important field of research. In this paper, a novel application of machine learning approaches to the detection and classification of fake and anomalous data is considered. An initial clustering step with the K-Nearest Neighbor (KNN) algorithm is proposed before training the result with a Recurrent Neural Network (RNN). A preliminary application of the KNN phase before the RNN phase produces a quantitative and measurable improvement in the detection of outliers, and as such is more effective in detecting anomalies against the test dataset of 2016 US Presidential Election predictions.
Madhu Goyal
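The pre-filtering idea can be miniaturized as a k-nearest-neighbour outlier score computed before any RNN is trained: points far from their neighbours are flagged as candidate anomalies. This is a pure-Python toy on 2-D points, not the paper's news-article feature vectors.

```python
def knn_outlier_scores(points, k=2):
    """Score each point by its mean distance to its k nearest neighbours.

    High scores mark candidate outliers to be handled (or weighted) before
    the downstream classifier is trained.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for j, q in enumerate(points) if j != i
        )
        scores.append(sum(dists[:k]) / k)
    return scores

cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
outlier = [(5.0, 5.0)]
scores = knn_outlier_scores(cluster + outlier, k=2)
```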
539 Conditional BERT Contextual Augmentation [abstract]
Abstract: We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve the generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by a language model. BERT demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model task. The well-trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six different text classification tasks show that our method can be easily applied to both convolutional and recurrent neural network classifiers to obtain obvious improvements.
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han and Songlin Hu
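A miniature of what "conditional" buys: replacements for a masked word are drawn conditioned on the sentence's label, so a negative sentence never receives a positive substitution. The tiny table below stands in for the fine-tuned conditional masked language model; every entry and name is invented for illustration.

```python
import random

# (label, word) -> label-compatible replacements; a stand-in for the
# conditional masked language model's predictions.
SUBSTITUTIONS = {
    ("positive", "good"): ["great", "wonderful", "excellent"],
    ("negative", "good"): ["mediocre", "bad", "terrible"],
}

def augment(tokens, label, mask_prob=0.3, rng=None):
    """Randomly replace tokens with label-conditioned substitutions."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        choices = SUBSTITUTIONS.get((label, tok))
        if choices and rng.random() < mask_prob:
            out.append(rng.choice(choices))  # conditioned on the label
        else:
            out.append(tok)
    return out
```

An unconditional augmenter could turn "a good movie" labeled negative into "a great movie", corrupting the label; the conditioning prevents exactly that.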
552 An innovative and reliable water leak detection service supported by data-intensive remote sensing processing [abstract]
Abstract: In the scope of the H2020 WADI project, an airborne water leak detection surveillance service, based on manned and unmanned aerial vehicles, is being developed to provide water utilities with adequate information on leaks in large water distribution infrastructures outside urban areas. Given the high cost associated with repairs to water infrastructure networks, a reliability layer based on complementary leak detection technologies is necessary to improve the trustworthiness of WADI leak identification. Herein, a methodology combining Sentinel remote sensing data with a water leak pathways model, supported by data-intensive computing, is presented. The resulting water leak detection reliability service, provided to users through a web interface, targets prompt and cost-effective infrastructure repairs with the required degree of confidence in the detected leaks. The web platform allows for both data analysis and visualization of Sentinel images and relevant leak indicators at the sites selected by the user. The user can provide aerial imagery inputs, to be processed together with Sentinel remote sensing data at the satellite acquisition dates identified by the user. The platform provides information about the location and time evolution of the detected leaks, and will be linked in the future with the outputs of water pathway models.
Ricardo Martins, Anabela Oliveira, André Fortunato, Alberto Azevedo, Elsa Alves and Alexandra Carvalho

Machine Learning and Data Assimilation for Dynamical Systems (MLDADS) Session 3

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 0.5

Chair: Rossella Arcucci

323 Physics-Informed Echo State Networks for Chaotic Systems Forecasting [abstract]
Abstract: In this work, we propose a physics-informed Echo State Network (ESN) to predict the evolution of chaotic systems. Compared to a conventional echo state network, the physics-informed ESN is trained to solve supervised learning tasks while ensuring that its predictions do not violate the given physical laws. This is done by introducing an additional loss during the training of the ESN, which penalizes non-physical predictions. The potential of this approach is demonstrated on the Lorenz system, where the predictability horizon of the physics-informed ESN was improved by up to nearly 2 Lyapunov times compared to a conventional ESN, without the need for additional training data. These results illustrate the potential of combining machine learning tools with prior physical knowledge to improve the time-accurate prediction of chaotic dynamical systems.
Nguyen Anh Khoa Doan, Wolfgang Polifke and Luca Magri
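A skeleton of the two ingredients named above: an echo state network's reservoir update, and a loss that adds a physics-residual penalty to the usual data misfit, written here for the Lorenz system. The weights, dimensions, finite-difference residual, and penalty weight `lam` are illustrative; the paper's actual architecture and training differ in details.

```python
import numpy as np

def lorenz_rhs(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system dx/dt = f(x)."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def reservoir_step(r, u, w_res, w_in):
    """One ESN reservoir update: new state from old state and input."""
    return np.tanh(w_res @ r + w_in @ u)

def physics_informed_loss(pred, target, dt, lam=0.1):
    """Data misfit plus a penalty on violating the governing equations.

    The penalty is the finite-difference residual of dx/dt = f(x) along
    the predicted trajectory (shape: (n_steps, 3)).
    """
    data_loss = np.mean((pred - target) ** 2)
    residual = (pred[1:] - pred[:-1]) / dt - np.array(
        [lorenz_rhs(x) for x in pred[:-1]])
    return data_loss + lam * np.mean(residual ** 2)
```

A trajectory that actually satisfies the (discretized) Lorenz equations incurs essentially no physics penalty, while free-running predictions that drift off the attractor are penalized even where no training data exists.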
242 On improving urban flood prediction through data assimilation using CCTV images: potential for machine learning [abstract]
Abstract: Recent use of satellite synthetic aperture radar (SAR) images in flood forecasting has allowed assimilation of spatially dense observations over large rural areas into flood forecasting models. This rich source of observational information has offered a valuable improvement in flood forecasting accuracy as the instruments are able to image day and night, and can see through clouds. However, in urban areas, the use of SAR data is limited due to building shadows and layover effects. Hence, in urban areas it is even more important to use observational data to constrain hydrodynamic flood models, due to the complexity of the landscape and interactions with buildings, sewers, rivers etc. To increase the amount of observation data available in urban areas, and to make use of abundance of technology in cities, our research is concentrating on using novel and easily available data from cities such as CCTV camera images. We have carried out an initial investigation into the impact of assimilating such data on flood forecasts. Our experiments used water level observations extracted from river camera images from four Farson Digital Ltd cameras, for a flood event near Tewkesbury, UK in 2012. We show that these data can improve flood forecast accuracy, especially as they capture the rising limb of the flood when satellite data is usually unavailable. However, in our initial experiments we used manual water level extraction and quality control for the observations, due to complications with the camera settings, image processing, and various digital terrain map resolutions and accuracies. Our next aim is to use machine learning to automatically extract water levels from CCTV images, with associated observation uncertainty. Machine learning will allow us to obtain and use real time water observations from images on a large scale, especially in complex systems such as cities, and we will discuss the potential of this approach.
Sanita Vetra-Carvalho, Sarah L. Dance, Javier García-Pintado and David C. Mason
394 Tuning Covariance Localization using Machine Learning [abstract]
Abstract: The ensemble Kalman filter (EnKF) has proven successful in assimilating observations of large-scale dynamical systems, such as the atmosphere, into computer simulations for better predictability. Because a limited-size ensemble of model states is used, sampling errors accumulate and manifest themselves as long-range spurious correlations, leading to filter divergence. This effect is alleviated in practice by applying covariance localization. This work investigates the possibility of using machine learning algorithms to automatically tune the parameters of the covariance localization step of ensemble filters. Numerical experiments carried out with the Lorenz-96 model reveal the potential of the proposed machine learning approaches.
Azam Moosavi, Ahmed Attia and Adrian Sandu
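For concreteness, the localization step being tuned typically tapers sample covariances with a distance-based correlation function applied as a Schur (element-wise) product; the localization radius is exactly the kind of parameter the paper proposes to learn. The fifth-order Gaspari-Cohn taper below is the standard one from the EnKF literature, not specific to this paper.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn taper for normalized distance r = distance / radius.

    Equals 1 at distance 0 and has compact support: exactly 0 for r >= 2.
    """
    r = abs(r)
    if r <= 1.0:
        return (((-0.25 * r + 0.5) * r + 0.625) * r - 5.0 / 3.0) * r ** 2 + 1.0
    if r <= 2.0:
        return ((((r / 12.0 - 0.5) * r + 0.625) * r + 5.0 / 3.0) * r
                - 5.0) * r + 4.0 - 2.0 / (3.0 * r)
    return 0.0

def localize(cov, dist, radius):
    """Schur product of a sample covariance with the taper matrix."""
    taper = np.vectorize(lambda d: gaspari_cohn(d / radius))(dist)
    return cov * taper
```

Too small a radius destroys genuine long-range correlations, too large a radius lets the spurious ones through; hence the tuning problem the paper hands to machine learning.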

Computational Science in IoT and Smart Systems (IoTSS) Session 1

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 0.6

Chair: Vaidy Sunderam

131 Fog computing architecture based blockchain for industrial IoT [abstract]
Abstract: Industry 4.0, also referred to as the fourth industrial revolution, is the vision of a smart factory built with cyber-physical systems (CPS). The ecosystem of the manufacturing industry is expected to be activated through autonomous and intelligent capabilities such as self-organization, self-monitoring and self-healing. The fourth industrial revolution begins with the attempt to combine the myriad elements of industrial systems with Internet communication technology to form the future smart factory, and the technologies derived from these attempts are creating new value. However, the existing Internet offers no effective way to solve the problems of cyber security and data protection raised by these new industrial technologies. In a future industrial environment where large numbers of IoT devices are deployed and used, a true industrial revolution is hard to achieve if the security problem is not resolved. Therefore, in this paper, we propose a new blockchain-based fog system architecture for Industrial IoT, designed to guarantee fast performance. The performance of the architecture is evaluated and analyzed using a suitable permissioned blockchain deployed on the fog system.
Jang Su Hwan, Jongpil Jeong and Jo Guejong
492 Exploration of Data from Smart Bands in the Cloud and on the Edge - the Impact on the Data Storage Space [abstract]
Abstract: Smart bands are wearable devices that are frequently used in monitoring people's activity, fitness, and health state. They can be also used in early detection of possibly dangerous health-related problems. The increasing number of wearable devices frequently transmitting data to scalable monitoring centers located in the Cloud may raise the Big Data challenge and cause network congestion. In this paper, we focus on the storage space consumed while monitoring people with smart IoT devices and performing classification of their health state and detecting possibly dangerous situations with the use of machine learning models in the Cloud and on the Edge. We also test two different repositories for storing sensor data in the Cloud monitoring center - a relational Azure SQL Database and the Cosmos DB document store.
Mateusz Gołosz and Dariusz Mrozek
209 Security of Low Level IoT Protocols [abstract]
Abstract: An application of formal methods in security is demonstrated. A formalism for describing security properties of low-level IoT protocols is proposed. It is based on a timed process algebra and on a security concept called infinite-step opacity. We prove some of its basic properties and show its relation to other security notions. Finally, complexity issues of verification and security enforcement are discussed.
Damas Gruska and M.Carmen Ruiz
569 FogFlow - computation organization for heterogeneous Fog computing environments [abstract]
Abstract: With the growing numbers of devices and the amounts of data that Internet of Things systems process nowadays, solutions for organizing computation are in high demand. Many concepts targeting more efficient data processing are emerging, and among them edge and fog computing are gaining significant interest, since they reduce the load on the cloud. As a consequence, Internet of Things systems are becoming more and more diverse in terms of architecture. In this paper we present FogFlow - a model and execution environment that allows data-flow applications to be organized and run on heterogeneous environments. We propose a unified interface for data-flow creation and a graph model, and we evaluate our concept on the use case of a production line model that mimics a real-world factory scenario.
Joanna Sendorek, Tomasz Szydlo, Robert Brzoza-Woch and Mateusz Windak
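The data-flow organization described above can be miniaturized as a graph of operations, each executed once all of its inputs have arrived, regardless of whether a node would be placed in the cloud or on an edge device. The runner and the sensor/smooth/aggregate graph are illustrative inventions, not FogFlow's actual interface.

```python
def run_dataflow(nodes, edges, sources):
    """Execute a data-flow graph.

    nodes:   name -> function taking a list of input values
    edges:   list of (upstream, downstream) pairs carrying data
    sources: name -> initial value for nodes without incoming edges
    """
    inputs = {n: [] for n in nodes}
    indegree = {n: 0 for n in nodes}
    for _, v in edges:
        indegree[v] += 1
    ready = [n for n in nodes if indegree[n] == 0]
    values = {}
    while ready:
        n = ready.pop()
        args = [sources[n]] if n in sources else inputs[n]
        values[n] = nodes[n](args)
        for u, v in edges:  # propagate the result downstream
            if u == n:
                inputs[v].append(values[n])
                indegree[v] -= 1
                if indegree[v] == 0:
                    ready.append(v)
    return values

# sensor_a -> smooth -> aggregate, with sensor_b feeding aggregate directly
graph = {
    "sensor_a": lambda xs: xs[0],
    "sensor_b": lambda xs: xs[0],
    "smooth": lambda xs: xs[0] * 0.5,
    "aggregate": lambda xs: sum(xs),
}
edges = [("sensor_a", "smooth"), ("smooth", "aggregate"),
         ("sensor_b", "aggregate")]
out = run_dataflow(graph, edges, {"sensor_a": 10.0, "sensor_b": 4.0})
```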

Simulations of Flow and Transport: Modeling, Algorithms and Computation (SOFTMAC) Session 3

Time and Date: 16:50 - 18:30 on 12th June 2019

Room: 1.4

Chair: Shuyu Sun

187 Accelerated Phase Equilibrium Predictions for Subsurface Reservoirs Using Deep Learning Methods [abstract]
Abstract: Multiphase fluid flow with complex compositions is an increasingly attractive research topic, with more and more attention paid to related engineering problems, including global warming and the greenhouse effect, enhanced oil recovery, and subsurface water pollution treatment. Before studying the flow behaviors and phase transitions in multi-component multiphase flow, the first effort should focus on accurately predicting the total number of phases existing in the fluid mixture; the phase equilibrium status can then be determined. In this paper, a novel and fast prediction technique based on deep learning is proposed. The training data are generated using a selected VT dynamic flash calculation scheme, and the network architectures are carefully optimized with respect to their activation functions. Compared to machine learning techniques previously proposed in the literature to accelerate vapor-liquid phase equilibrium calculations, the total number of phases existing in the mixture is determined first and the other phase equilibrium properties are estimated afterwards, so we no longer need to ensure that the mixture is in two-phase conditions. Our method can handle fluid mixtures with complex compositions (our example uses 8 different components) and a large amount of original data. The analysis of the prediction performance of deep learning models with various neural networks and activation functions can help future researchers select features when constructing neural networks for similar engineering problems. Conclusions and remarks are presented at the end to highlight our main contributions and give insight into future related research.
Tao Zhang, Yiteng Li and Shuyu Sun
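The two-stage idea described in the abstract, first classify the number of phases, then estimate the remaining equilibrium properties, can be sketched as a small feed-forward classifier. This is a minimal sketch: the weights below are random placeholders (in the paper they would be trained on VT flash-calculation data), and the input scaling and layer sizes are assumptions, not the authors' architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
# Hypothetical input: 8 mole fractions plus temperature T and molar volume V,
# matching the VT flash specification used to generate the training data.
z = rng.dirichlet(np.ones(8))                           # composition
x = np.concatenate([z, [350.0 / 500.0, 1e-4 / 1e-3]])   # crudely scaled T, V

# Illustrative (untrained) weights for one hidden layer of 16 ReLU units.
W1, b1 = rng.standard_normal((16, 10)), np.zeros(16)
W2, b2 = rng.standard_normal((3, 16)), np.zeros(3)

h = relu(W1 @ x + b1)
p = softmax(W2 @ h + b2)          # P(1 phase), P(2 phases), P(3 phases)
n_phases = int(np.argmax(p)) + 1  # predicted phase count, stage one of the pipeline
```

Only after this classification step would a second, per-phase-count regressor estimate the equilibrium properties themselves.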
29 Multigrid solver for flow and heat transfer problems in heterogeneous irregular regions: effects of the upscaling approaches [abstract]
Abstract: Modeling and simulation of fluid flow and heat transfer processes occurring in heterogeneous irregular regions have received extensive attention in recent years. Heterogeneous properties exert a crucial impact on the overall performance of fluid flow and heat transfer simulations; for example, highly heterogeneous properties worsen the conditioning of the model coefficient matrix and increase the difficulty of the simulation. Therefore, the need to develop efficient and accurate numerical methods for general fluid flow and heat transfer in heterogeneous irregular regions, which significantly reduce the computational effort while conserving the main physical properties, is widely recognized in the engineering and academic communities. In this study, we present a highly efficient geometric multigrid (GMG) solver for the fast simulation of fluid flow and heat transfer problems in heterogeneous irregular regions in the framework of a body-fitted coordinate (BFC) system. The key point of the proposed multigrid solver lies in the calculation of heterogeneous properties on the coarse grid levels within the original physical domain, for which upscaling methods are widely used. However, different upscaling methods yield effective properties with different numerical accuracy and computational efficiency. To explore the influence of the upscaling approach on the overall performance of the proposed multigrid solver, we adopt general statistical averages (e.g. the harmonic, arithmetic, geometric and harmonic-arithmetic averages) and flow-based methods (e.g. sealed-side and open-side boundary conditions for fluid flows) to compute the upscaled effective properties on the corresponding coarse grid levels.
The numerical accuracy of the quantities of interest on different grid levels and the computational speed-up of the proposed multigrid solver are validated on several flow and heat transfer examples in heterogeneous irregular regions to assess the influence of the different upscaling approaches. The proposed multigrid solver not only markedly improves the computational efficiency of the fine-grid solution, but also provides a computational byproduct, the solutions on the coarse grid levels, for specific applications; for example, the coarse-grid solution can be used for sample recycling in the multigrid multilevel Monte Carlo method to avoid repeated realization of samples on the coarse grid levels.
Jingfa Li, Yang Liu, Shuyu Sun, Bo Yu and Piyang Liu
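The statistical averages listed in the abstract can be illustrated with a generic block-upscaling routine for a 2-D property field (e.g. permeability or conductivity). This is a sketch under the assumptions of a uniform Cartesian grid, square coarsening blocks and strictly positive property values, not the authors' GMG implementation.

```python
import numpy as np

def upscale(k, factor, method="harmonic"):
    """Coarsen a 2-D property field: each coarse cell aggregates a
    (factor x factor) block of fine cells with one statistical average."""
    ny, nx = k.shape
    blocks = k.reshape(ny // factor, factor, nx // factor, factor)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(ny // factor, nx // factor, -1)
    if method == "arithmetic":
        return blocks.mean(axis=-1)
    if method == "harmonic":
        return 1.0 / (1.0 / blocks).mean(axis=-1)
    if method == "geometric":   # assumes strictly positive values
        return np.exp(np.log(blocks).mean(axis=-1))
    if method == "harmonic-arithmetic":
        # harmonic within each block row, arithmetic across the rows
        b = blocks.reshape(ny // factor, nx // factor, factor, factor)
        return (1.0 / (1.0 / b).mean(axis=-1)).mean(axis=-1)
    raise ValueError(method)

# A 2x2 fine field with a sharp contrast, coarsened to a single cell:
k_fine = np.array([[1.0, 4.0],
                   [1.0, 4.0]])
k_h = upscale(k_fine, 2, "harmonic")     # 1.6
k_a = upscale(k_fine, 2, "arithmetic")   # 2.5
k_g = upscale(k_fine, 2, "geometric")    # 2.0
```

The spread between 1.6 and 2.5 on even this tiny example shows why the choice of average affects both the coarse-grid operators and the multigrid convergence the paper studies.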
196 Study on the thermal-hydraulic coupling model for the enhanced geothermal systems [abstract]
Abstract: Enhanced geothermal systems (EGS) are the major approach to exploiting hot dry rock (HDR). At present, the finite element method (FEM) is often used to simulate the thermal energy extraction process of an EGS, and satisfactory results can be obtained to a certain extent. However, when many discrete fractures exist in the computational domain, a large number of unstructured grid cells must be used, which seriously degrades computational efficiency. To address this challenge, based on the embedded discrete fracture model (EDFM), two sets of seepage and energy conservation equations are used to describe the flow and heat transfer processes in the matrix medium and the fracture medium, respectively. The main advantage of the proposed model is that a structured grid can be used to mesh the matrix, with no need to refine the mesh near the fractures. The accuracy of the proposed model is verified by comparison with the commercial software COMSOL Multiphysics. Subsequently, a specific example of geothermal exploitation is designed, and the spatial-temporal evolutions of the pressure and temperature fields are analyzed.
Tingyu Li, Dongxu Han, Fusheng Yang, Bo Yu, Daobing Wang and Dongliang Sun
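The matrix-fracture coupling that lets an EDFM keep a structured matrix grid reduces, in its simplest textbook form, to an exchange term scaled by a connectivity (transmissibility) index. The formula and every value below are a generic sketch, not the paper's exact discretization.

```python
def edfm_transfer(k_m, area, avg_dist, p_matrix, p_frac, mu=1e-3):
    """Matrix-fracture exchange rate of an embedded discrete fracture model:
    q = (k_m * A / (mu * <d>)) * (p_m - p_f), where A is the fracture area
    inside the structured matrix cell and <d> the average distance from the
    cell to the fracture plane. All inputs below are hypothetical."""
    ci = k_m * area / avg_dist              # connectivity index, m^3
    return ci / mu * (p_matrix - p_frac)    # volumetric exchange rate, m^3/s

# One cell cut by a fracture: 1e-13 m^2 matrix permeability, 100 m^2 of
# fracture area, 5 m average distance, 0.1 MPa pressure difference.
q = edfm_transfer(k_m=1e-13, area=100.0, avg_dist=5.0,
                  p_matrix=2.0e7, p_frac=1.99e7)
```

Because this term lives entirely inside the structured cell, no local mesh refinement around the fracture is needed, which is the efficiency gain the abstract highlights.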
21 Modelling of thermal transport in wire + arc additive manufacturing process [abstract]
Abstract: Modelling the fusion and heat-affected microstructure of an Additive Manufacturing (AM) process bridges many length and time scales and requires more than intelligent meshing schemes to make simulations feasible. The aim of this research was to develop an efficient and simple, yet accurate and precise, thermal model of the wire + arc additive manufacturing process. To describe the influence of the process parameters and materials on the entire welding process, a 3D transient non-linear finite element model was developed to simulate multi-layer deposition of cast IN-738LC alloy onto SAE-AISI 1524 carbon steel substrates. Temperature-dependent material properties and the effect of forced convection were included in the model. The heat source, represented by a moving Gaussian power density distribution, was applied over the top surface of the specimen during a period of time that depends on the welding speed. The effect of multi-layer deposition on the prediction and validation of the melt pool shape and thermal cycles was also investigated. The effects of convection and radiation heat loss from the weldment (layer) surfaces were included in the finite element analysis. As the AM layers themselves act as extended surfaces (fins), heat extraction was found to be quite significant. It is encouraging to note that the thermal model is sufficiently accurate to predict thermal cycles and FZ and HAZ weld profiles. A firm foundation for modelling thermal transport in the wire + arc additive manufacturing process was thus established.
Edison Bonifaz
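A moving Gaussian surface power density of the kind described is commonly written as q(x, y, t) = (3Q / (pi r0^2)) exp(-3((x - vt)^2 + y^2) / r0^2). The sketch below assumes this common form and uses hypothetical values for arc power Q, effective beam radius r0 and travel speed v; the paper's actual parameters are not given in the abstract.

```python
import math

def gaussian_flux(x, y, t, Q=1500.0, r0=2e-3, v=0.01, y0=0.0):
    """Surface power density (W/m^2) of a Gaussian heat source moving
    along x at speed v (m/s). Q: absorbed arc power (W), r0: effective
    source radius (m), y0: torch path offset (m). Values hypothetical."""
    r2 = (x - v * t) ** 2 + (y - y0) ** 2
    return 3.0 * Q / (math.pi * r0 ** 2) * math.exp(-3.0 * r2 / r0 ** 2)

q_peak = gaussian_flux(0.01, 0.0, 1.0)  # at the source centre after 1 s
q_off = gaussian_flux(0.02, 0.0, 1.0)   # 10 mm behind the centre, ~0
```

In the FE model this flux would be evaluated at the surface integration points at each time step, so the heated spot travels with the torch at the welding speed.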

Marine Computing in the Interconnected World for the Benefit of the Society (MarineComp) Session 3

Time and Date: 16:50 - 18:30 on 12th June 2018

Room: 2.26

Chair: Flávio Martins

555 The NARVAL software toolbox in support of ocean model skill assessment at regional and coastal scales [abstract]
Abstract: The significant advances in high-performance computational resources have boosted the seamless evolution in ocean modeling techniques and numerical efficiency, giving rise to an inventory of operational ocean forecasting systems with ever-increasing complexity. The skill of the Iberia-Biscay-Ireland (IBI) regional ocean forecasting system, implemented within the frame of the Copernicus Marine Environment Monitoring Service (CMEMS), is routinely evaluated by means of the NARVAL (North Atlantic Regional VALidation) web-based toolbox. Multi-parameter validations against observational sources (encompassing both in situ and remote-sensing platforms) are regularly conducted along with model intercomparisons in the overlapping areas. Product quality indicators and skill metrics are automatically computed, averaged not only over the entire IBI domain but also over specific sub-regions of particular interest, in order to identify strengths and weaknesses of the model. The primary goal of this work is threefold. Firstly, to provide a flavor of the basic functionalities of the NARVAL software package used to validate the IBI near-real-time components (physical, biogeochemical and waves); secondly, to showcase a number of practical applications of NARVAL; finally, to present the future roadmap to build a new upgraded version of this software package, which will include the validation of multi-year and interim products, the computation of long-term skill metrics and the evaluation of event-oriented multi-model intercomparison exercises. This synergistic approach, based on the integration of numerical models and diverse observational networks, should be useful to comprehensively characterize the highly dynamic sea states and the dominant modes of spatio-temporal variability.
Pablo Lorente
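Typical point-wise skill metrics of the kind such a validation toolbox computes (bias, RMSE and Pearson correlation of model values against observations) can be sketched generically; this is an illustration of the standard definitions, not NARVAL code.

```python
import math

def skill_metrics(model, obs):
    """Bias, RMSE and Pearson correlation of model vs. observed values,
    assumed to be co-located and paired one-to-one."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    norm = math.sqrt(sum((m - mm) ** 2 for m in model)
                     * sum((o - mo) ** 2 for o in obs))
    return bias, rmse, cov / norm

# Tiny illustrative sample, e.g. modelled vs. observed SST (degC):
bias, rmse, corr = skill_metrics([1.1, 2.0, 3.2], [1.0, 2.0, 3.0])
```

In an operational toolbox these metrics would be aggregated per sub-region and per parameter, which is how domain-dependent strengths and weaknesses become visible.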
546 Salinity control on Saigon river downstream of Dautieng reservoir within multi-objective simulation-optimisation framework for reservoir operation [abstract]
Abstract: This research proposes a modelling framework in which simulation and optimisation tools are used together in order to obtain optimal reservoir operation rules for the multi-objective Dautieng reservoir on the Saigon River (Vietnam), where downstream salinity control is the main objective. In this framework, hydrodynamic and salinity transport modelling of the Saigon River is performed using the MIKE 11 modelling system. In the first optimisation step this simulation model is coupled with the population simplex evolution (PSE) algorithm from the AUTOCAL optimisation utility (available as a part of MIKE 11) to estimate the discharge required to meet salinity standards at the downstream location of the Hoa Phu pumping station for public water supply. In the second optimisation step, with the use of the MATLAB optimisation toolbox, an elitist multi-objective genetic algorithm is coupled with a simple water balance model of the Dautieng reservoir to investigate how the optimised discharges obtained from the first optimisation step can be balanced with the other objectives of the reservoir. The results indicate that optimised releases improve the performance of the reservoir, especially in controlling salinity at the Hoa Phu pumping station. In addition, the study demonstrates that the use of smaller time steps in optimisation gives a closer match between varying demands and releases.
Ioana Popescu, Okan Aygun and Andreja Jonoski
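The "simple water balance model" coupled to the genetic algorithm can be sketched as a single mass-balance step with spill and dead-storage constraints. All limits and rates below are illustrative placeholders, not the actual characteristics of the Dautieng reservoir.

```python
def water_balance_step(storage, inflow, release, s_min, s_max, dt=86400.0):
    """One daily mass-balance step of a lumped reservoir model.
    storage, s_min, s_max in m^3; inflow and release in m^3/s.
    Releases are cut when storage would fall below the dead storage
    s_min; water above s_max leaves as spill."""
    s = storage + (inflow - release) * dt
    if s < s_min:  # reduce the release to protect dead storage
        release = max(0.0, (storage - s_min) / dt + inflow)
        s = storage + (inflow - release) * dt
    spill = max(0.0, s - s_max) / dt
    s = min(s, s_max)
    return s, release, spill

# Normal day: inflow exceeds release, storage rises, no spill.
s1, r1, sp1 = water_balance_step(1.0e6, 10.0, 5.0, 5.0e5, 2.0e6)
# Dry day: the requested 20 m^3/s release is cut back at dead storage.
s2, r2, sp2 = water_balance_step(1.0e6, 10.0, 20.0, 5.0e5, 2.0e6)
```

In the paper's second optimisation step, the genetic algorithm would propose candidate release schedules and evaluate them by running such a balance over the simulation horizon against the competing objectives.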
544 Clustering hydrographic conditions in Galician estuaries [abstract]
Abstract: In this paper we describe our endeavours to explore the role of unsupervised learning technology in profiling marine conditions. The characterization of the marine environment with hydrographic variables enables, for example, technical and health control of sea products. However, the continuous monitoring of the environment produces large amounts of data and, thus, new information technology tools are needed to support decision-making. We present here a first contribution to this area by building a tool able to represent and normalize hydrographic conditions, cluster them using unsupervised learning methods, and present the results to domain experts. The tool, which implements visualization methods adapted to the problem at hand, was developed under the supervision of specialists in monitoring the marine environment in Galicia (Spain). This software solution is promising for the early identification of risk factors and for gaining a better understanding of sea conditions.
David Losada, Pedro Montero, Diego Brea, Silvia Allen-Perkins and Begoña Vila
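The normalise-then-cluster pipeline described can be sketched with a minimal k-means on synthetic temperature/salinity samples. This is a generic illustration of the approach, not the authors' tool; the two "regimes", the variables chosen and the deterministic centre initialisation are all assumptions.

```python
import numpy as np

def kmeans(x, k, iters=50):
    """Minimal k-means over z-score-normalised hydrographic variables.
    Initial centres are simply spread over the samples for determinism."""
    x = (x - x.mean(0)) / x.std(0)  # normalise each variable
    centres = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((x[:, None, :] - centres[None]) ** 2).sum(-1)
        labels = d.argmin(1)        # nearest centre per sample
        for j in range(k):
            if (labels == j).any():
                centres[j] = x[labels == j].mean(0)
    return labels, centres

# Two synthetic hydrographic regimes: (temperature degC, salinity PSU)
rng = np.random.default_rng(1)
a = rng.normal([12.0, 35.5], 0.2, size=(50, 2))  # winter-like samples
b = rng.normal([19.0, 34.0], 0.2, size=(50, 2))  # summer-like samples
labels, _ = kmeans(np.vstack([a, b]), k=2)
```

In a real deployment the cluster assignments, rather than being checked against known regimes, would be handed to the visualization layer for inspection by the domain experts.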
547 Early Warning Systems for Shellfish Safety - The Pivotal Role of Computational Science [abstract]
Abstract: Toxins from harmful algae and certain food pathogens (Escherichia coli and Norovirus) found in shellfish can cause significant health problems for the public and have a negative impact on the economy. For the most part, these outbreaks cannot be prevented but, with the right technology and know-how, they can be predicted. Such Early Warning Systems (EWS) require reliable data from multiple sources: satellite imagery, in situ data and numerical tools. The data is processed and analyzed, and a short-term forecast is produced. Computational science is at the heart of any EWS. Current models and forecast systems are becoming increasingly sophisticated as more is known about the dynamics of an outbreak. This paper discusses the need for, main components of and future challenges facing EWS.
Marcos Mateus, J. Fernandes, M. Revilla, L. Pinto, L. Ferrer, M. Ruiz Villarreal, P. I. Miller, J. A. Maguire and Wiebke Schmidt