
ICCS 2019 Main Track (MT) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 1.5

Chair: Howard Stamato

67 Efficient Computation of Sparse Higher Derivative Tensors [abstract]
Abstract: The computation of higher derivative tensors is expensive even for adjoint algorithmic differentiation methods. In this work we introduce methods that exploit the symmetry and the sparsity structure of higher derivatives to considerably improve the efficiency of their computation. The proposed methods apply coloring algorithms to two-dimensional compressed slices of the derivative tensors. The presented work is a step towards making higher-order methods feasible, which might benefit numerical simulations in numerous applications of computational science and engineering.
Jens Deussen and Uwe Naumann
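The compression idea at the core of this work can be illustrated with a minimal greedy coloring sketch in Python (illustrative only, not the authors' algorithm): columns of a 2D compressed slice that never share a nonzero row are structurally orthogonal, may receive the same color, and can therefore be recovered from a single directional-derivative evaluation.

    import numpy as np

    def greedy_column_coloring(pattern):
        """Greedily color columns of a boolean sparsity pattern so that
        columns sharing a nonzero row never share a color (structural
        orthogonality, in the spirit of Curtis-Powell-Reid compression)."""
        n_rows, n_cols = pattern.shape
        colors = -np.ones(n_cols, dtype=int)
        for j in range(n_cols):
            # Colors already used by earlier columns that intersect column j.
            forbidden = {colors[k] for k in range(j)
                         if np.any(pattern[:, j] & pattern[:, k])}
            c = 0
            while c in forbidden:
                c += 1
            colors[j] = c
        return colors

    # Toy sparsity pattern of one 2D compressed slice of a derivative tensor.
    pattern = np.array([[1, 0, 0, 1],
                        [0, 1, 0, 0],
                        [0, 0, 1, 1]], dtype=bool)
    colors = greedy_column_coloring(pattern)
    print(colors.max() + 1, "directional derivatives instead of", pattern.shape[1])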
120 Being Rational about Approximating Scientific Data [abstract]
Abstract: Scientific datasets are becoming increasingly challenging to transfer, analyze, and store. There is a need for methods to transform these datasets into compact representations that facilitate their downstream management and analysis, and ideally model the underlying scientific phenomena with defined numerical fidelity. To address this need, we propose nonuniform rational B-splines (NURBS) for modeling discrete scientific datasets; not only to compress input data points, but also to enable further analysis directly on the continuous fitted model, without the need for decompression. First, we evaluate three different methods for NURBS fitting, and compare their performance relative to unweighted least squares approximation (B-splines). We then extend current state-of-the-art B-spline adaptive approximation to NURBS; that is, adaptively determining optimal rational basis functions and weighted control point locations that approximate given input data points to prespecified accuracy. Additionally, we present a novel local adaptive algorithm to iteratively approximate large data input domains. This method takes advantage of NURBS local support to refine regions of the approximated model, acting locally on both input and model subdomains, without affecting other regions of the global approximation. We evaluate our methods in terms of approximated model compactness, achieved accuracy, and computational cost on both synthetic smooth functions and real-world scientific data.
Youssef Nashed, Tom Peterka, Vijay Mahadevan and Iulian Grindeanu
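The unweighted least-squares B-spline approximation used here as the baseline for comparison can be sketched with SciPy (the test function and knot layout below are illustrative assumptions):

    import numpy as np
    from scipy.interpolate import make_lsq_spline

    # Sample a smooth test function at many points.
    x = np.linspace(0.0, 1.0, 500)
    y = np.sin(8 * np.pi * x) * np.exp(-2 * x)

    # Interior knots, padded with boundary knots of multiplicity k+1.
    k = 3
    t = np.r_[[0.0] * (k + 1), np.linspace(0.0, 1.0, 30)[1:-1], [1.0] * (k + 1)]

    spline = make_lsq_spline(x, y, t, k)   # unweighted least-squares fit
    print("max abs error:", np.abs(spline(x) - y).max())
    print("compression: 500 samples ->", len(spline.c), "coefficients")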
336 Design of a High-Performance Tensor-Vector Multiplication with BLAS [abstract]
Abstract: Tensor contraction is an important mathematical operation for many scientific computing applications that use tensors to store massive multidimensional data. Based on the Loops-over-GEMMs (LOG) approach, this paper discusses the design of high-performance algorithms for the mode-q tensor-vector multiplication using efficient implementations of the matrix-vector multiplication (GEMV). Given dense tensors with any non-hierarchical storage format, tensor order and dimensions, the proposed algorithms either directly call GEMV with tensors or recursively apply GEMV on higher-order tensor slices multiple times. We analyze strategies for loop-fusion and parallel execution of slice-vector multiplications with higher-order tensor slices. Using OpenBLAS, our implementations attain up to 113% of the GEMV's peak performance. Our parallel version of the tensor-vector multiplication achieves speedups of up to 12.6x over other state-of-the-art approaches.
Cem Bassoy
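The unfolding idea behind LOG-style algorithms can be sketched in a few lines of NumPy (a minimal sketch, not the paper's implementation): moving the contracted mode to the front and flattening turns the mode-q tensor-vector product into a single GEMV.

    import numpy as np

    def mode_q_tvm(tensor, vector, q):
        """Mode-q tensor-vector multiplication via one matrix-vector
        product over an unfolding of the tensor."""
        t = np.moveaxis(tensor, q, 0)       # bring mode q to the front
        mat = t.reshape(t.shape[0], -1)     # unfold to an (n_q x rest) matrix
        out = vector @ mat                  # a single GEMV
        return out.reshape(t.shape[1:])

    A = np.random.rand(4, 5, 6)
    v = np.random.rand(5)
    c = mode_q_tvm(A, v, q=1)
    print(c.shape)                                        # (4, 6)
    print(np.allclose(c, np.einsum('iqk,q->ik', A, v)))  # True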
388 High Performance Partial Coherent X-ray Ptychography [abstract]
Abstract: During the last century, X-ray science has enabled breakthrough discoveries in fields as diverse as medicine, materials science and electronics, and recently ptychography has emerged as a reference imaging technique in the field. It provides resolutions of a billionth of a meter, a macroscopic field of view, and the capability to retrieve chemical or magnetic contrast, among other features. The goal of ptychography is to reconstruct a 2D visualization of a sample from a collection of diffraction patterns generated from the interaction of a light source with the sample. Reconstruction involves solving a nonlinear optimization problem employing a large amount of measured data (typically two orders of magnitude bigger than the reconstructed sample), so high performance solutions are normally required. A common problem in ptychography is that the majority of the flux from the light sources is often discarded to define the coherence of an illumination. Gradient Decomposition of the Probe (GDP) is a novel method devised to address this issue. It provides the capability to significantly improve the quality of the image when partial coherence effects take place, at the expense of a three-fold increase in memory requirements and computation. This downside, along with the fine-grained parallelism of the operations involved in GDP, makes it an ideal target for GPU acceleration. In this paper we propose the first high performance implementation of GDP for partial coherence X-ray ptychography. The proposed solution exploits an efficient data layout and multi-GPU parallelism to achieve massive acceleration and efficient scaling. The experimental results demonstrate the enhanced reconstruction quality and performance of our solution, able to process up to 4 million input samples per second on a single high-end workstation, and compare its performance with a reference HPC ptychography pipeline.
Pablo Enfedaque, Stefano Marchesini, Huibin Chang, Bjoern Enders and David Shapiro
452 Monte Carlo Analysis of Local Cross-Correlation ST-TBD Algorithm [abstract]
Abstract: The Track-Before-Detect (TBD) algorithms allow the estimation of the state of an object even if the signal is hidden in the background noise. The application of local cross-correlation in a modified Information Update formula improves this estimation for extended objects (tens of cells in the measurement space) compared to the direct application of the Spatio-Temporal TBD (ST-TBD) algorithm. A Monte Carlo test was applied to evaluate the algorithms using a variable standard deviation of additive Gaussian noise. The proposed solution does not require prior knowledge of the size or measured values of the object.
Przemyslaw Mazurek and Robert Krupinski
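A generic spatio-temporal TBD recurrence and the Monte Carlo noise sweep described above can be sketched as follows (the prediction and blending below are common TBD ingredients used as stand-ins; the paper's local cross-correlation modification is not reproduced here):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def st_tbd_step(info, measurement, alpha=0.95, sigma=1.0):
        """One Track-Before-Detect step: diffuse the accumulated
        information map (stand-in for the motion model), then blend in
        the new noisy measurement."""
        predicted = gaussian_filter(info, sigma)
        return alpha * predicted + (1.0 - alpha) * measurement

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[30:34, 30:34] = 1.0                      # extended object
    for noise_std in (0.5, 1.0, 2.0):              # variable noise levels
        info = np.zeros_like(truth)
        for _ in range(50):
            meas = truth + rng.normal(0.0, noise_std, truth.shape)
            info = st_tbd_step(info, meas)
        print(noise_std, info[31, 31] / info.std())  # peak vs. background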

ICCS 2019 Main Track (MT) Session 9

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 1.3

Chair: Gabriela Schütz

30 A Deep Surrogate Model for Estimating Water Quality Parameters [abstract]
Abstract: For large-scale, automated water quality monitoring, some physical or chemical parameters cannot be measured directly due to financial or environmental limitations. As an example, excess nitrogen run-off can cause severe ecological damage to ecosystems. However, the cost of high-accuracy measurement of nitrogen is prohibitive, and one can only measure nitrogen in creeks and rivers at selected locations. If nitrate concentrations are related to some other, more readily measured water parameters, it may be possible to use these parameters (“surrogates”) to estimate nitrogen concentrations. Though one can estimate water quality parameters based on some different, but simultaneously monitored parameters, most surrogate models lack the consideration of spatial variation among monitoring stations. Those models are usually developed based on water quality data from a single station and applied to target stations in different locations for estimating water quality properties. In this case, different weather, geophysical or biological conditions may reduce the effectiveness of the surrogate model's performance because the surrogate relationship may not be strong between the source and target stations. We propose a deep surrogate model (DSM) for indirect nitrogen measurement in large-scale water quality monitoring networks. The DSM applies a stacked denoising autoencoder to extract the features of the water quality surrogates. This strategy allows one to utilize all the sensory data across the monitoring network, which can significantly extend the size of the training data. For data-driven modeling, large amounts of training data collected from various monitoring stations can substantially improve the generalization of the DSM. Furthermore, instead of only learning the regression relationship between water quality surrogates and the nitrogen concentration in the source stations, the DSM is designed to capture the sensor data distribution differences between the source and target stations by calculating the Kullback-Leibler divergence. In this approach, the training of the DSM can be guided by acknowledging the information from the target station. Therefore, the performance of the DSM will be significantly higher than that of source-station-based approaches, because the surrogate relationship learned by the DSM includes the diversity among monitoring stations. We evaluate the DSM using real-world time series data from a wireless water quality monitoring network in Australia. Compared to models based on Support Vector Machines and Artificial Neural Networks, the DSM achieves up to 49.0% and 42.4% improvements in RMSE and MAE respectively. Hence, the DSM is an attractive strategy for generating estimated nitrogen concentrations for large-scale environmental monitoring projects.
Yifan Zhang, Peter Thorburn and Peter Fitch
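The distribution-difference ingredient can be illustrated with a closed-form Gaussian KL divergence between feature sets from two stations (a sketch under a diagonal-Gaussian assumption; the DSM's actual autoencoder and training loss are described in the paper):

    import numpy as np

    def gaussian_kl(x_src, x_tgt, eps=1e-6):
        """KL(N_src || N_tgt) between diagonal-Gaussian fits to the
        per-station surrogate features, quantifying the source/target gap."""
        mu_s, var_s = x_src.mean(axis=0), x_src.var(axis=0) + eps
        mu_t, var_t = x_tgt.mean(axis=0), x_tgt.var(axis=0) + eps
        return 0.5 * np.sum(np.log(var_t / var_s)
                            + (var_s + (mu_s - mu_t) ** 2) / var_t - 1.0)

    rng = np.random.default_rng(1)
    source = rng.normal(0.0, 1.0, size=(1000, 4))  # e.g. turbidity, EC, temp, pH
    target = rng.normal(0.3, 1.2, size=(1000, 4))
    print("distribution gap:", gaussian_kl(source, target))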
103 Six Degrees of Freedom Numerical Simulation of Tilt-Rotor Plane [abstract]
Abstract: A six-degrees-of-freedom coupled simulation is presented for a tilt-rotor plane represented by the V-22 Osprey. The Moving Computational Domain (MCD) method is used to compute the flow field around the aircraft and the movement of the body with high accuracy. This method makes it possible to move a plane through space without restriction of computational ranges, and thus differs from conventional methods, which compute the flow around a static body placed in a uniform flow, as in a wind tunnel. To calculate with high accuracy, no simplification was used for simulating the propellers; fluid flows are created only by the moving boundaries of the object. A tilt-rotor plane has a hovering function like a helicopter, achieved by turning the rotor axes toward the sky during takeoff or landing. In flight, on the other hand, it behaves as a fixed-wing aircraft by turning the rotor axes forward. To realize these two flight modes in the simulation, a multi-axis sliding mesh approach was proposed: a computational technique that can handle multiple rotation axes pointing in different directions. Moreover, in combination with the MCD method, the approach can be applied to simulations with more complicated boundary motions.
Ayato Takii, Masashi Yamakawa and Shinichi Asao
300 A Macroscopic Study on Dedicated Highway Lanes for Autonomous Vehicles [abstract]
Abstract: The introduction of autonomous vehicles (AVs) will have far-reaching effects on road traffic in cities and on highways. The implementation of automated highway systems (AHS), possibly with a dedicated lane only for AVs, is believed to be a requirement to maximise the benefit from the advantages of AVs. We study the ramifications of an increasing percentage of AVs on the whole traffic system with and without the introduction of a dedicated highway AV lane. We conduct a macroscopic simulation of the city of Singapore under user equilibrium conditions with realistic traffic demand. We present findings regarding average travel time, throughput, road usage, and lane-access control. Our results show a reduction of average travel time as a result of increasing the portion of AVs in the system. We show that the introduction of an AV lane is not beneficial in terms of average commute time. Furthermore, a notable shift of travel demand away from the highways towards major and small roads is noticed in the early stages of AV penetration. Finally, our findings show that after a certain threshold percentage of AVs, the differences between the AV-lane and no-AV-lane scenarios become negligible.
Jordan Ivanchev, Alois Knoll, Daniel Zehe, Suraj Nair and David Eckhoff
355 An Agent-Based Model for Evaluating the Boarding and Alighting Efficiency of Public Transport Vehicles [abstract]
Abstract: A key metric in the design of interior layouts of public transport vehicles is the dwell time required to allow passengers to board and alight. Real-world experimentation using physical vehicle mock-ups and involving human participants can be performed to compare dwell times among vehicle designs. However, the associated costs limit such experiments to small numbers of trials. In this paper, we propose an agent-based simulation model of the behavior of passengers during boarding and alighting. High-level strategical behavior is modeled according to the Recognition-Primed Decision paradigm, while the low-level collision-avoidance behavior relies on an extended Social Force Model tailored to our scenario. To enable successful navigation within the confined space of the vehicle, we propose a mechanism to emulate passenger turning while avoiding complex geometric computations. We validate our model against real-world experiments from the literature, demonstrating deviations of less than 11%. In a case study, we evaluate the boarding and alighting times required by three autonomous vehicle interior layouts proposed by industrial designers.
Boyi Su, Philipp Andelfinger, David Eckhoff, Henriette Cornet, Goran Marinkovic, Wentong Cai and Alois Knoll
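The classical Social Force Model that the paper extends can be sketched in a few lines (a minimal sketch with illustrative parameters; the paper's tailored variant adds, among other things, the turning mechanism described above):

    import numpy as np

    def social_force_step(pos, vel, goal, dt=0.05, tau=0.5, v0=1.3, A=2.0, B=0.3):
        """One explicit step: accelerate toward the desired velocity,
        repelled exponentially by nearby passengers."""
        drive = (v0 * (goal - pos) / np.linalg.norm(goal - pos, axis=1, keepdims=True) - vel) / tau
        for i in range(len(pos)):
            d = pos[i] - np.delete(pos, i, axis=0)
            dist = np.linalg.norm(d, axis=1, keepdims=True)
            drive[i] += np.sum(A * np.exp(-dist / B) * d / dist, axis=0)
        vel = vel + dt * drive
        return pos + dt * vel, vel

    pos = np.array([[0.0, 0.0], [0.4, 0.1], [0.2, -0.2]])
    vel = np.zeros_like(pos)
    goal = np.array([[5.0, 0.0]] * 3)      # e.g. the vehicle door
    for _ in range(100):
        pos, vel = social_force_step(pos, vel, goal)
    print(pos)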
243 MLP-IA: Multi-Label User Profile Based on Implicit Association Labels [abstract]
Abstract: Multi-label user profiles are widely used and have made great contributions in the fields of recommendation systems, personalized search, etc. Current research on multi-label user profiles either ignores the associations among labels or only considers the explicit associations among them, which is not sufficient to take full advantage of the internal associations. In this paper, a new insight is presented to mine the internal correlation among implicit association labels. To take advantage of this insight, a multi-label propagation method with implicit associations (MLP-IA) is proposed to obtain user profiles. A probability matrix is first designed to record the implicit associations, and the multi-label propagation method is then combined with this probability matrix to get more accurate user profiles. Finally, this method proves to be convergent and faster than the traditional label propagation algorithm. Experiments on six real-world datasets from Weibo show that, compared with state-of-the-art methods, our approach accelerates convergence and performs significantly better than previous ones.
Lingwei Wei, Wei Zhou, Jie Wen, Jizhong Han and Songlin Hu
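The propagation step such methods build on can be sketched as follows (a generic label-propagation sketch; the probability matrix P below merely stands in for the paper's implicit-association matrix):

    import numpy as np

    def propagate_labels(P, Y, alpha=0.8, n_iter=100, tol=1e-8):
        """Iterate F <- alpha * P @ F + (1 - alpha) * Y until convergence;
        converges for alpha < 1 when P is row-stochastic."""
        F = Y.astype(float).copy()
        for _ in range(n_iter):
            F_new = alpha * P @ F + (1.0 - alpha) * Y
            if np.abs(F_new - F).max() < tol:
                break
            F = F_new
        return F

    P = np.array([[0.0, 0.6, 0.2, 0.2],      # hypothetical association strengths
                  [0.5, 0.0, 0.5, 0.0],
                  [0.2, 0.5, 0.0, 0.3],
                  [0.4, 0.0, 0.6, 0.0]])
    Y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]])
    print(propagate_labels(P, Y).round(3))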

Advances in High-Performance Computational Earth Sciences: Applications and Frameworks (IHPCES) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 0.3

Chair: Takashi Shimokawabe

414 A Fast 3D Finite-element Solver for Large-scale Seismic Soil Liquefaction Analysis [abstract]
Abstract: The accumulation of spatial data and development of computer architectures and computational techniques raise expectations for large-scale soil liquefaction simulations using highly detailed three-dimensional (3D) soil-structure models; however, the associated large computational cost remains the major obstacle to realizing this in practice. In this study, we increased the speed of large-scale 3D soil liquefaction simulation on computers with many-core wide SIMD architectures. A previous study overcame the large computational cost by expanding a method for large-scale seismic response analysis for application in soil liquefaction analysis; however, that algorithm did not assume the heterogeneity of the soil liquefaction problem, resulting in a load imbalance among CPU cores in parallel computations and limiting performance. Here we proposed a load-balancing method suitable for soil liquefaction analysis. We developed an efficient algorithm that considers the physical characteristics of soil liquefaction phenomena in order to increase the speed of solving the target linear system. The proposed method achieved a 29-fold increase in speed over the previous study. Soil liquefaction simulations were performed using large-scale 3D models with up to 3.5 billion degrees-of-freedom on an Intel Xeon Phi (Knights Landing)-based supercomputer system (Oakforest-PACS).
Ryota Kusakabe, Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori and Lalith Wijerathne
173 Performance evaluation of tsunami inundation simulation on SX-Aurora TSUBASA [abstract]
Abstract: As tsunamis may cause damage over a wide area, it is difficult to immediately grasp the full extent of the damage. To quickly estimate the damage and respond to the disaster, we have developed a real-time tsunami inundation forecast system that utilizes the vector supercomputer SX-ACE for simulating tsunami inundation phenomena. The forecast system can complete a tsunami inundation and damage forecast for the southwestern part of the Pacific coast of Japan at the level of a 30-m grid size in less than 30 minutes. The forecast system requires higher-performance supercomputers to increase resolutions and expand forecast areas. In this paper, we compare the performance of the tsunami inundation simulation on SX-Aurora TSUBASA with those on Xeon Gold and SX-ACE. SX-Aurora TSUBASA is a new vector supercomputer released in 2018, with a peak performance of 4.3 Tflop/s for single-precision floating-point operations. We show that SX-Aurora TSUBASA achieves the highest performance among the three systems and has high potential for increasing resolutions as well as expanding forecast areas.
Akihiro Musa, Takashi Abe, Takumi Kishitani, Takuya Inoue, Masayuki Sato, Kazuhiko Komatsu, Yoichi Murashima, Shunichi Koshimura and Hiroaki Kobayashi
315 Parallel Computing for Module-Based Computational Experiment [abstract]
Abstract: Large-scale scientific codes play an important role in scientific research. In order to facilitate module and element evaluation in scientific applications, we introduce a unit testing framework and describe the demand for module-based experiment customization. We then develop a parallel version of the unit testing framework to handle long-term simulations with large amounts of data. Specifically, we apply message-passing-based parallelization and I/O behavior optimization to improve the performance of the unit testing framework, and use profiling results to guide the parallel process implementation. Finally, we present a case study on a litter decomposition experiment using a standalone module from a large-scale Earth System Model. This case study is also a good demonstration of the scalability, portability, and high efficiency of the framework.
Zhuo Yao and Dali Wang
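The message-passing pattern for farming out independent module runs can be sketched with mpi4py (a minimal sketch; run_case and the case list are illustrative stand-ins for driving one standalone module experiment):

    from mpi4py import MPI

    def run_case(case_id):
        # Stand-in for one standalone module run, e.g. one litter
        # decomposition parameter set.
        return case_id, case_id ** 2

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        cases = list(range(16))
        chunks = [cases[i::size] for i in range(size)]   # round-robin split
    else:
        chunks = None

    my_results = [run_case(c) for c in comm.scatter(chunks, root=0)]
    results = comm.gather(my_results, root=0)
    if rank == 0:
        print("collected", sum(len(r) for r in results), "results on", size, "ranks")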
390 Heuristic Optimization with CPU-GPU Heterogeneous Wave Computing for Enhancing Three-dimensional Inner Structure [abstract]
Abstract: To increase the reliability of numerical simulations, it is important to use more reliable models. This study proposes a method to generate a finite element model that can reproduce observational data in a target domain. Our proposed method searches for parameters that determine finite element models by combining simulated annealing with finite element wave propagation analyses. In the optimization, we utilize heterogeneous computing resources. The finite element solver, which is the computationally expensive portion, runs rapidly on GPUs. Simultaneously, we generate finite element models on CPUs to overlap the computation time of model generation. As an application example, we estimate the inner soil structure: the soil structure is reproduced from the observed time history of velocity on the ground surface using our developed optimizer.
Takuma Yamaguchi, Tsuyoshi Ichimura, Kohei Fujita, Muneo Hori and Lalith Wijerathne
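The outer optimization loop can be sketched generically (a sketch only: here the misfit is a cheap stand-in, whereas in the paper each evaluation launches a GPU finite element wave propagation run and compares surface velocities with observations):

    import math, random

    def simulated_annealing(misfit, x, step=0.2, n_iter=2000, t0=1.0):
        """Accept downhill moves always, uphill moves with a probability
        that shrinks as the temperature cools."""
        fx = misfit(x)
        best, fbest = x, fx
        for k in range(1, n_iter + 1):
            temp = t0 / math.log(k + 1)                  # cooling schedule
            cand = [xi + random.gauss(0.0, step) for xi in x]
            fc = misfit(cand)
            if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
        return best, fbest

    target = [1.5, -0.7]                                 # toy 'soil parameters'
    misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
    print(simulated_annealing(misfit, [0.0, 0.0]))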
383 A Generic Interface for Godunov-type Finite Volume Methods on Adaptive Triangular Meshes [abstract]
Abstract: We present and evaluate a programming interface for creating high performance Godunov-type finite volume applications with the framework sam(oa)2. This interface requires application developers only to provide problem-specific implementations of a set of operators, while sam(oa)2 transparently manages its many HPC features, such as memory-efficient adaptive mesh refinement, parallelism in distributed and shared memory, and vectorization of Riemann solvers. We focus especially on the performance of vectorization, which can be managed either by the framework (with compiler auto-vectorization of the operator calls) or directly by the developers in the operator implementation (possibly using more advanced techniques). We demonstrate the interface's performance using two example applications based on variations of the shallow water equations. Our performance results show successful vectorization using both approaches, with similar performance. They also show that applications developed with the new interface achieve performance comparable to analogous applications developed without the new layer of abstraction, directly in the framework's core.
Chaulio R. Ferreira and Michael Bader
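The division of labor behind such an interface can be illustrated on a drastically simplified 1D analogue (a sketch only: the application supplies the operators and the framework owns the update loop; sam(oa)2 itself of course manages adaptive triangular meshes, MPI and vectorization):

    import numpy as np

    class AdvectionOperators:
        """Problem-specific operators supplied by the application."""
        a = 1.0                                    # advection speed

        def initial(self, x):
            return np.exp(-100.0 * (x - 0.3) ** 2)

        def numerical_flux(self, ul, ur):
            return self.a * (ul if self.a > 0 else ur)   # upwind Riemann solver

    def run(ops, n=200, t_end=0.25):
        """Framework-owned Godunov-type finite volume loop (periodic grid)."""
        dx = 1.0 / n
        dt = 0.4 * dx / abs(ops.a)
        u = ops.initial((np.arange(n) + 0.5) * dx)
        t = 0.0
        while t < t_end:
            f = ops.numerical_flux(np.roll(u, 1), u)   # flux at left cell edges
            u = u - dt / dx * (np.roll(f, -1) - f)
            t += dt
        return u

    print(run(AdvectionOperators()).max())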

Architecture, Languages, Compilation and Hardware Support for Emerging and Heterogeneous Systems (ALCHEMY) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 0.4

Chair: Stéphane Louise

404 Dynamic and Distributed Security Management for NoC based MPSoCs [abstract]
Abstract: Multi-Processor Systems-on-Chip (MPSoCs) have emerged as the enabler technology for new computational paradigms such as the Internet-of-Things (IoT) and Machine Learning. The Network-on-Chip (NoC) communication paradigm has been adopted in several commercial MPSoCs as an effective solution for mitigating the communication bottleneck. The widespread deployment of such MPSoCs and their utilization in critical and sensitive applications make security a key requirement. However, the integration of security into MPSoCs is challenging. The growing complexity and high hyper-connectivity to external networks expose MPSoCs to malware infection and code injection attacks. Isolation of tasks to manage the ever-changing and strict mixed-criticality MPSoC operation is mandatory. Hardware-based firewalls are an effective protection technique to mitigate attacks on MPSoCs. However, the fast reconfiguration of these firewalls imposes a huge performance degradation, prohibitive for critical applications. To this end, this paper proposes a lightweight broadcasting mechanism for firewall reconfiguration in NoC-based MPSoCs. Our solution supports the efficient and secure creation of dynamic security zones in the MPSoC through communication management while avoiding deadlocks. Results show that our approach shortens the security reconfiguration process by a factor of 7.5 on average when compared to state-of-the-art approaches, while imposing only 11% area overhead.
Siavoosh Payandeh Azad, Gert Jervan and Johanna Sepulveda
450 Scalable Fast Multipole Method for Electromagnetic Simulations [abstract]
Abstract: To address recent many-core architecture designs, HPC applications are exploring hybrid parallel programming, mixing MPI and OpenMP. Among them, very few large-scale applications in production today exploit asynchronous parallel tasks and asynchronous multithreaded communications to take full advantage of the available concurrency, in particular from dynamic load balancing, network, and memory operations overlapping. In this paper, we present our first results of an ML-FMM algorithm implementation using GASPI asynchronous one-sided communications and task-based programming to improve code scalability and performance. On 32 nodes, we show an 83.5% reduction in communication costs over the optimized MPI+OpenMP version.
Nathalie Möller, Eric Petit, Quentin Carayol, Quang Dinh and William Jalby

Machine Learning and Data Assimilation for Dynamical Systems (MLDADS) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 0.5

Chair: Rossella Arcucci

241 Kernel embedded nonlinear observational mappings in the variational mapping particle filter [abstract]
Abstract: Recently, some works have suggested methods to combine variational probabilistic inference with Monte Carlo sampling. One promising approach is via local optimal transport. In this approach, a gradient steepest descent method based on local optimal transport principles is formulated to deterministically transform point samples from an intermediate density to a posterior density. The local mappings that transform the intermediate densities are embedded in a reproducing kernel Hilbert space (RKHS). This variational mapping method requires the evaluation of the log-posterior density gradient, and therefore the adjoint of the observational operator. In this work, we evaluate nonlinear observational mappings in the variational mapping method using two approximations that avoid the adjoint: an ensemble-based approximation, in which the gradient is approximated by the particle covariances in the state and observation spaces (the so-called ensemble space), and an RKHS approximation, in which the observational mapping is embedded in an RKHS and the gradient is derived there. The approximations are evaluated for highly nonlinear observational operators and in a low-dimensional chaotic dynamical system. The RKHS approximation is shown to be highly successful and superior to the ensemble approximation.
Manuel Pulido, Peter Jan Vanleeuwen and Derek Posselt
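For orientation, the kernel-embedded mapping step at the heart of such methods takes a Stein-type form (standard notation; a sketch rather than the paper's exact formulation):

\[
  x_i \leftarrow x_i + \epsilon\,\phi(x_i), \qquad
  \phi(x) = \frac{1}{N}\sum_{j=1}^{N}\Big[\,k(x_j,x)\,\nabla_{x_j}\log p(x_j) + \nabla_{x_j}k(x_j,x)\Big],
\]

where k is the reproducing kernel and p the posterior density; the first term requires the log-posterior gradient, and hence the adjoint of the observation operator, which is precisely what the two approximations studied here avoid.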
463 Adaptive Ensemble Optimal Interpolation for Efficient Data Assimilation in the Red Sea [abstract]
Abstract: Ensemble optimal interpolation (EnOI) has been introduced to drastically reduce the computational cost of the ensemble Kalman filter (EnKF). The idea is to use a static (pre-selected) ensemble to parameterize the background covariance matrix, which avoids the costly integration step of the ensemble members with the dynamical model. To better represent the strong variability of the Red Sea circulation, we propose new adaptive EnOI schemes in which the ensemble members are adaptively selected at every assimilation cycle from a large dictionary of ocean states describing the variability of the Red Sea system. These members account for the strong eddy and seasonal variability of the Red Sea circulation and enforce climatological smoothness in the filter update. We implement and test different schemes to adaptively choose the ensemble members based on (i) the similarity to the forecast, or (ii) an Orthogonal Matching Pursuit (OMP) algorithm. Results of numerical experiments assimilating remote sensing data into a high-resolution MIT general circulation model (MITgcm) of the Red Sea are presented to demonstrate the efficiency of the proposed approach.
Habib Toye, Peng Zhan, Furrukh Sana and Ibrahim Hoteit
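For reference, the standard EnOI analysis update on which the adaptive schemes build (the adaptivity lies in how the anomaly matrix A' is re-selected from the dictionary at every cycle):

\[
  x^a = x^f + \alpha\,B H^{T}\big(H B H^{T} + R\big)^{-1}\big(y - H x^f\big),
  \qquad B = \frac{1}{m-1}\,A' A'^{T},
\]

where x^f is the forecast, y the observations, H the observation operator, R the observation-error covariance, and α a scaling factor.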
445 A Learning-Based Approach for Uncertainty Analysis in Numerical Weather Prediction Models [abstract]
Abstract: This paper demonstrates the use of machine learning techniques to study the uncertainty in numerical weather prediction models due to the interaction of multiple physical processes. We aim to address the following problems: 1) estimation of systematic model errors in output quantities of interest at future times, and 2) identification of the specific physical processes that contribute most to the forecast uncertainty in the quantity of interest under specified meteorological conditions. To address these problems, we employ simple machine learning algorithms and perform numerical experiments with the Weather Research and Forecasting (WRF) model. The results demonstrate the potential of machine learning approaches to aid the study of model errors.
Azam Moosavi, Vishwas Hebbur Venkata Subba Rao and Adrian Sandu
432 Scalable Weak Constraint Gaussian Processes [abstract]
Abstract: A Weak Constraint Gaussian Process (WCGP) model is presented to integrate noisy inputs into the classical Gaussian Process predictive distribution. This follows a data assimilation approach, i.e., it considers information provided by observed values of a noisy input in a time window. Due to the large number of states processed in real applications and the time complexity of GP algorithms, the problem mandates a solution in a high performance computing environment. In this paper, parallelism is explored by defining a parallel WCGP model based on domain decomposition. Both a mathematical formulation of the model and a parallel algorithm are provided. We prove that the parallel implementation preserves the accuracy of the sequential one. The algorithm's scalability is further shown to be O(p^2), where p is the number of processors.
Rossella Arcucci, Douglas McIlwraith and Yi-Ke Guo
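For context, the classical Gaussian Process predictive distribution that the WCGP model augments with noisy-input information reads, in standard notation (not the paper's weak-constraint formulas):

\[
  \mu_* = K_{*X}\,(K_{XX} + \sigma^2 I)^{-1} y, \qquad
  \Sigma_* = K_{**} - K_{*X}\,(K_{XX} + \sigma^2 I)^{-1} K_{X*}.
\]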

Classifier Learning from Difficult Data (CLDD) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 0.6

Chair: Michal Wozniak

284 Keynote: ARFF data source library for distributed single/multiple instance, single/multiple output learning on Apache Spark [abstract]
Abstract: Apache Spark has become a popular framework for distributed machine learning and data mining. However, it lacks support for operating with Attribute-Relation File Format (ARFF) files in a native, convenient, transparent, efficient, and distributed way. Moreover, Spark does not support the advanced learning paradigms represented in the ARFF definition, including learning from data comprising single/multiple instances and/or single/multiple outputs. This paper presents an ARFF data source library that provides native support for ARFF files and single/multiple instance, single/multiple output learning on Apache Spark. This data source seamlessly extends the Apache Spark machine learning library, allowing all the ARFF file varieties, attribute types, and learning paradigms to be loaded. The ARFF data source allows researchers to incorporate a large number of diverse datasets and develop scalable solutions for learning problems of increased complexity. The data source is implemented in Scala, just like the Apache Spark source code; however, it can be used from Java, Scala, and Python. The ARFF data source is free and open source, available on GitHub under the Apache License 2.0.
Jorge Gonzalez Lopez, Sebastián Ventura and Alberto Cano
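Usage would follow the standard Spark data source pattern; in the PySpark sketch below the short format name "arff" and the packaging configuration are assumptions for illustration, and the project's GitHub README defines the real identifiers:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("arff-demo")
             # .config("spark.jars.packages", <the library's Maven coordinates>)
             .getOrCreate())

    # Custom data sources plug into spark.read via format(); the name
    # "arff" here is a hypothetical placeholder.
    df = spark.read.format("arff").load("iris.arff")
    df.printSchema()
    df.show(5)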
540 On the role of cost-sensitive learning in imbalanced data oversampling [abstract]
Abstract: Learning from imbalanced data is still considered one of the most challenging areas of machine learning. Among the plethora of methods dedicated to alleviating the challenge of skewed distributions, the two most distinct are data-level sampling and cost-sensitive learning. The former modifies the training set by either removing majority instances or generating additional minority ones. The latter associates a penalty cost with the minority class, in order to mitigate the classifiers' bias towards the better represented class. While these two approaches have been extensively studied on their own, no works so far have tried to combine their properties. Such a direction seems highly promising, as in many real-life imbalanced problems we may obtain the actual misclassification cost, and thus it should be embedded in the classification framework regardless of the selected algorithm. This work aims to open a new direction for learning from imbalanced data by investigating the interplay between the oversampling and cost-sensitive approaches. We show that there is a direct relationship between the misclassification cost imposed on the minority class and the oversampling ratios that aim to balance both classes. This becomes evident when popular skew-insensitive metrics are modified to incorporate the cost-sensitive element. Our experimental study clearly shows a strong relationship between sampling and cost, indicating that this new direction should be pursued in the future in order to develop new and effective algorithms for imbalanced data.
Bartosz Krawczyk and Michal Wozniak
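The interplay investigated here can be made concrete with a standard identity (not the paper's derivation): for duplication-based oversampling, charging a cost c per minority error is equivalent to replicating each minority instance c times in the training loss,

\[
  \sum_{i \in \text{maj}} \ell(x_i) + c \sum_{i \in \text{min}} \ell(x_i)
  = \sum_{i \in \text{maj}} \ell(x_i) + \sum_{i \in \text{min}^{\times c}} \ell(x_i),
\]

where min^{×c} denotes the minority set with each instance duplicated c times; interpolation-based oversamplers such as SMOTE only approximate this equivalence.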
219 Characterization of Handwritten Signature Images in Dissimilarity Representation Space [abstract]
Abstract: The offline Handwritten Signature Verification (HSV) problem can be considered one involving difficult data, since it presents imbalanced class distributions, a high number of classes, a high-dimensional feature space, and a small number of learning samples. One way to deal with this problem is the writer-independent (WI) approach, which is based on the dichotomy transformation (DT). In this work, an analysis of the difficulty of the data in the space induced by this transformation is performed based on the instance hardness (IH) measure. The paper also reports on how this better understanding can lead to better use of the data through a prototype selection technique.
Victor L. F. Souza, Adriano L. I. Oliveira, Rafael M. O. Cruz and Robert Sabourin
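The dichotomy transformation itself is compact enough to sketch directly (a minimal sketch; the paper's feature extractor and prototype selection are not reproduced):

    import numpy as np

    def dichotomy_transform(features, writer_ids):
        """Map pairs of signature feature vectors (u, v) to |u - v|,
        labeled 1 for same-writer pairs and 0 otherwise, turning a
        many-class problem into a single binary one."""
        X, y = [], []
        for i in range(len(features)):
            for j in range(i + 1, len(features)):
                X.append(np.abs(features[i] - features[j]))
                y.append(int(writer_ids[i] == writer_ids[j]))
        return np.array(X), np.array(y)

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(6, 8))            # 6 signatures, 8-D features
    writers = np.array([0, 0, 1, 1, 2, 2])
    X, y = dichotomy_transform(feats, writers)
    print(X.shape, y.sum(), "within-writer pairs out of", len(y))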

Simulations of Flow and Transport: Modeling, Algorithms and Computation (SOFTMAC) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 1.4

Chair: Shuyu Sun

45 deal.II Implementation of a Weak Galerkin Finite Element Solver for Darcy Flow [abstract]
Abstract: This paper presents a weak Galerkin (WG) finite element solver for Darcy flow and its implementation on the deal.II platform. The solver works for quadrilateral and hexahedral meshes in a unified way. It approximates pressure by Q-type degree k (k ≥ 0) polynomials separately in element interiors and on edges/faces. Numerical velocity is obtained in the unmapped Raviart-Thomas space RT_[k] via postprocessing based on the novel concepts of discrete weak gradients. The solver is locally mass-conservative and produces continuous normal fluxes. It is implemented in deal.II in the dimension-independent paradigm and allows polynomial degrees up to 5. Numerical experiments show that our new WG solver performs better than the classical mixed finite element methods.
Zhuoran Wang, Graham Harper, Patrick O'Leary, Jiangguo Liu and Simon Tavener
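For reference, the Darcy problem targeted by the solver, in its standard form:

\[
  \mathbf{u} = -K\,\nabla p, \qquad \nabla\cdot\mathbf{u} = f \quad \text{in } \Omega,
\]

with suitable pressure or flux boundary conditions, where u is the Darcy velocity, p the pressure, and K the permeability.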
164 A mixed elasticity formulation for fluid poroelastic structure interaction [abstract]
Abstract: We study a mathematical model and its finite element approximation for solving the coupled problem arising in the interaction between a free fluid and a fluid in a poroelastic material. The free fluid flow is governed by the Stokes equations, while the poroelastic material is modeled using the Biot system of poroelasticity. The model is based on a mixed stress-displacement-rotation elasticity formulation and mixed velocity-pressure Darcy and Stokes formulations. The mixed finite element approximation provides local mass and momentum conservation in the poroelastic media. We discuss stability, accuracy, and robustness of the method. Applications to flows in fractured poroelastic media and arterial flows are presented.
Ivan Yotov and Tongtong Li
208 Recovery of the Interface Velocity for the Incompressible Flow in Enhanced Velocity Mixed Finite Element Method [abstract]
Abstract: The velocity, the coupling term between the flow and transport problems, is important for accurate numerical simulation and for a posteriori error analysis in adaptive mesh refinement. We consider the Enhanced Velocity Mixed Finite Element Method for incompressible Darcy flow. In this paper, our aim is to study the improvement of the velocity at the interface in order to achieve a better approximation of the velocity between subdomains. We propose a reconstruction of the velocity at the interface using the post-processed pressure. Numerical results at the interface show an improved convergence rate.
Yerlan Amanbek, Gurpreet Singh and Mary F. Wheeler
163 A New Approach to Solve the Stokes-Darcy-Transport System Applying Stabilized Finite Element Methods [abstract]
Abstract: In this work we propose a new combination of finite element methods to solve incompressible miscible displacements in heterogeneous media formed by the coupling of the free fluid with the porous medium, employing the stabilized hybrid mixed finite element method developed and analyzed by Igreja and Loula (2018) and the classical Streamline Upwind Petrov-Galerkin (SUPG) method presented and analyzed by Brooks and Hughes (1982). The hydrodynamic problem is governed by the Stokes and Darcy systems coupled by Beavers-Joseph-Saffman interface conditions. To approximate the Stokes-Darcy coupled system we apply the stabilized hybrid mixed method, characterized by the introduction of a Lagrange multiplier associated with the velocity field in both domains. This choice naturally imposes the Beavers-Joseph-Saffman interface conditions on the interface between the Stokes and Darcy domains. Thus, the global system is assembled involving only the degrees of freedom associated with the multipliers, and the variables of interest can be solved at the element level. Using the velocity fields given by the hybrid method, we adopt the SUPG method combined with an implicit finite difference scheme to solve the transport equation associated with miscible displacements. Numerical studies are presented to illustrate the flexibility and robustness of the hybrid formulation. To verify the efficiency of the combination of the hybrid and SUPG methods, computer simulations are also presented for hydrological flow recovery problems in heterogeneous porous media, such as continuous injection.
Iury Igreja

Marine Computing in the Interconnected World for the Benefit of the Society (MarineComp) Session 1

Time and Date: 10:35 - 12:15 on 12th June 2019

Room: 2.26

Chair: Flávio Martins

325 Marine and Atmospheric Forecast Computational System for Nautical Sports in Guanabara Bay (Brazil) [abstract]
Abstract: An atmospheric and marine computational forecasting system for Guanabara Bay (GB) was developed to support the Brazilian sailing teams in the 2016 Olympic and Paralympic Games. This system, operational since August 2014, is composed of the Weather Research and Forecasting (WRF) model and the Regional Ocean Modeling System (ROMS), both executed daily, yielding 72-h prognostics. The WRF model uses the Global Forecast System (GFS) as initial and boundary conditions, configured with a three nested-grid scheme. The ocean model is also configured using three nested grids, obtaining atmospheric fields from the implemented WRF, ocean forecasts from CMEMS, and tidal forcing from TPXO7.2. To evaluate model performance, the atmospheric results were compared with data from two local airports, and the ocean model results were compared with data collected from an acoustic current profiler and tidal prediction series obtained from harmonic constants at four stations located in GB. According to the results, reasonable model performance was obtained in representing marine currents, sea surface heights and surface winds. The system could represent the most important local atmospheric and oceanic conditions, making it suitable for nautical applications.
Rafael Rangel, Luiz Paulo Assad, Elisa Passos, Caio Souza, William Cossich, Ian Dragaud, Raquel Toste, Fabio Hochleitner and Luiz Landau
549 An integrated perspective of the Operational Forecasting System in Rías Baixas (Galicia, Spain) with observational data and end-users [abstract]
Abstract: Rías Baixas is a small region in the north of the Iberian Atlantic Margin, located between Cape Fisterra and the Portugal-Spain border. Due to its natural resources, this area is quite relevant for the socio-economic development of the entire northwestern Iberian Peninsula. However, it is also highly vulnerable to natural and anthropogenic stress. A significant amount of the current economic activities in this region, such as aquaculture, fishery, offshore operations, navigation, coastal management and tourism, rely on the state of the ocean and atmosphere and largely benefit from high-resolution numerical models predicting that state several days in advance. In this study, we present the operational ocean forecasting system developed at the meteorological agency of the regional Galician government, MeteoGalicia, focusing on the Rías Baixas region. This system includes four models providing daily output data: the hydrodynamic models ROMS and MOHID, the atmospheric model WRF and the hydrological model SWAT. Here, MOHID's implementation for the Rías Baixas region is described and the model's performance with respect to observations is shown for those locations where Conductivity, Temperature and Depth (CTD) profiles are obtained weekly by the Technological Institute for the Monitoring of the Marine Environment in Galicia (INTECMAR). Although the hydrodynamical conditions of this region are complex, the model skilfully reproduces these CTDs. The results and derived products from the operational system are publicly available to the end-user through MeteoGalicia's web page and data server (www.meteogalicia.gal).
Anabela Venâncio, Pedro Montero and Pedro Costa
542 Climate evaluation of a high-resolution regional model over the Canary current upwelling system [abstract]
Abstract: Coastal upwelling systems are very important from the socio-economic point of view due to their high productivity, but they are also vulnerable to a changing climate. The impact of climate change on the Canary Current Upwelling System (CCUS) has been studied in recent years by different authors. However, these studies show contradictory results on the question of whether coastal upwelling will intensify or weaken in the coming decades. One of the reasons for this uncertainty is the low resolution of climate models, which makes it difficult to properly resolve coastal-zone processes. To solve this issue, we propose the use of a high-resolution regional climate coupled model. In this work we evaluate the performance of the regional climate coupled model ROM (REMO-OASIS-MPIOM) in the influence zone of the CCUS as a first step towards a regional climate change scenario downscaling. The results were compared to the output of the global MPI-ESM, showing a significant improvement.
Ruben Vazquez, Ivan Parras-Berrocal, William Cabos, Dmitry V. Sein, Rafael Mañanes, Juan I. Perez and Alfredo Izquierdo
558 Validating Ocean General Circulation Models via Lagrangian particle simulation and data from drifting buoys [abstract]
Abstract: Drifting Fish Aggregating Devices (dFADs) are small drifting platforms with an attached solar-powered buoy that report their position with daily frequency via GPS. We use data from 9,440 drifting objects provided by Satlink, a buoy manufacturer, to test the predictions of surface current velocity provided by two of the main models: the NEMO model used by the Copernicus Marine Environment Monitoring Service (CMEMS) and the HYCOM model used by the Global Ocean Forecast System (GOFS).
Karan Bedi, David Gómez-Ullate, Alfredo Izquierdo and Tomás Fernández-Montblanc
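The model-versus-buoy comparison rests on Lagrangian particle advection, which can be sketched as follows (a minimal sketch: velocity_at stands in for an interpolator over the NEMO or HYCOM surface-current fields, and the rotating toy field is illustrative):

    import numpy as np

    def advect(positions, velocity_at, t0, hours, dt=3600.0):
        """Integrate dx/dt = u(x, t) with RK4 to produce model
        trajectories comparable with daily GPS fixes from buoys."""
        x, t = np.asarray(positions, dtype=float), t0
        for _ in range(int(hours * 3600 / dt)):
            k1 = velocity_at(x, t)
            k2 = velocity_at(x + 0.5 * dt * k1, t + 0.5 * dt)
            k3 = velocity_at(x + 0.5 * dt * k2, t + 0.5 * dt)
            k4 = velocity_at(x + dt * k3, t + dt)
            x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        return x

    vel = lambda x, t: 1e-5 * np.stack([-x[:, 1], x[:, 0]], axis=1)
    start = np.array([[1000.0, 0.0], [0.0, 2000.0]])   # metres, local plane
    print(advect(start, vel, t0=0.0, hours=24))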