
ICCS 2019 Main Track (MT) Session 5

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 1.5

Chair: Jorge González-Domínguez

367 An On-line Performance Introspection Framework for Task-based Runtime Systems [abstract]
Abstract: The expected high levels of parallelism together with the heterogeneity of new computing systems pose many challenges to current performance monitoring frameworks. Classical post-mortem approaches will not be sufficient for such dynamic, complex and highly concurrent environments. First, the amounts of data generated by such systems will be impractical to handle. And second, access to real-time performance data to orchestrate program execution will be a necessity. In this paper, we present a lightweight monitoring infrastructure developed within the AllScale Runtime System, a task-based runtime system for extreme scale. This monitoring component provides on-line introspection capabilities that help the runtime scheduler in its decision-making and adaptation, while introducing minimal overhead. In addition, the monitoring component provides several post-mortem reports as well as real-time data visualisation that can be of great help in the task of performance debugging.
Xavier Aguilar, Herbert Jordan, Thomas Heller, Alexander Hirsch, Thomas Fahringer and Erwin Laure
405 Productivity-aware Design and Implementation of Distributed Tree-based Search Algorithms [abstract]
Abstract: Parallel tree-based search algorithms are present in different areas, such as operations research, machine learning and artificial intelligence. This class of algorithms is highly compute-intensive, irregular and usually relies on context-specific data structures and hand-made code optimizations. Therefore, C and C++ are the languages most often employed, due to their low-level features and performance. In this work, we investigate the use of the Chapel high-productivity language for the design and implementation of distributed tree search algorithms for solving combinatorial problems. The experimental results show that Chapel is a suitable language for this purpose, both in terms of performance and productivity. Despite the use of high-level features, the distributed tree search in Chapel is on average only 16% slower than, and reaches up to 85% of the scalability of, its MPI+OpenMP counterpart.
Tiago Carneiro Pessoa and Nouredine Melab
462 Development of Element-by-Element Kernel Algorithms in Unstructured Implicit Low-Order Finite-Element Earthquake Simulation for Many-Core Wide-SIMD CPUs [abstract]
Abstract: Acceleration of the Element-by-Element (EBE) kernel in matrix-vector products is essential for high performance in unstructured implicit finite-element applications. However, attaining high performance with the EBE kernel is not straightforward due to random data accesses with data recurrence. In this paper, we develop methods to circumvent these data races for high performance on many-core CPU architectures with wide SIMD units. The developed EBE kernel attains 16.3% and 20.9% of FP32 peak on the Intel Xeon Phi Knights Landing based Oakforest-PACS and an Intel Skylake Xeon Gold processor based system, respectively. This leads to a 2.88-fold speedup over the baseline kernel and a 2.03-fold speedup of the whole finite-element application on Oakforest-PACS. An example of urban earthquake simulation using the developed finite-element application is shown.
Kohei Fujita, Masashi Horikoshi, Tsuyoshi Ichimura, Larry Meadows, Kengo Nakajima, Muneo Hori and Lalith Maddegedara
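The scatter-add with repeated node indices that the abstract identifies as the main obstacle can be shown compactly. Below is a minimal numpy sketch of an EBE matrix-vector product on a toy 1D mesh (hypothetical sizes; the paper's SIMD-level techniques are not reproduced here):

```python
import numpy as np

# Toy element-by-element (EBE) matrix-vector product on a 1D mesh.
# conn[e] lists the global node indices of element e; Ke holds per-element
# stiffness matrices. Hypothetical sizes, not the paper's earthquake model.
n_nodes, n_elems = 6, 5
conn = np.array([[i, i + 1] for i in range(n_elems)])      # (n_elems, 2)
Ke = np.tile(np.array([[1.0, -1.0], [-1.0, 1.0]]), (n_elems, 1, 1))

x = np.random.rand(n_nodes)
y = np.zeros(n_nodes)

# Gather nodal values, apply the element matrices, then scatter-add.
ye = np.einsum('eij,ej->ei', Ke, x[conn])

# A plain fancy-indexed "+=" would silently drop repeated indices (the
# data recurrence the paper circumvents); np.add.at accumulates duplicates
# correctly, at the cost of serialized atomic-style updates.
np.add.at(y, conn, ye)
```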
516 A High-productivity Framework for Adaptive Mesh Refinement on Multiple GPUs [abstract]
Abstract: Recently, grid-based physical simulations with multiple GPUs require effective methods to adapt grid resolution to certain sensitive regions of simulations. In GPU computation, an adaptive mesh refinement (AMR) method is one of the effective methods to compute certain local regions that demand higher accuracy with higher resolution. However, AMR methods using multiple GPUs demand complicated implementation and require various optimizations suitable for GPU computation in order to obtain high performance. Our AMR framework provides a highly productive programming environment of a block-based AMR for grid-based applications. Programmers just write the stencil functions that update a grid point on a Cartesian grid, which are executed over a tree-based AMR data structure effectively by the framework. It also provides efficient GPU-suitable methods for halo exchange and mesh refinement with a dynamic load balance technique. The framework-based application for compressible flow has achieved a reduction of the computational time to less than 15% with 10% of the memory footprint in the best case, compared to the equivalent computation running on the fine uniform grid. It also has demonstrated good weak scalability with 84% parallel efficiency on the TSUBAME3.0 supercomputer.
Takashi Shimokawabe and Naoyuki Onodera
197 Harmonizing Sequential and Random Access to Datasets in Organizationally Distributed Environments [abstract]
Abstract: Computational science is rapidly developing, which pushes the boundaries in data management concerning the size and structure of datasets, data processing patterns, geographical distribution of data and performance expectations. In this paper, we present a solution for harmonizing data access performance, i.e. finding a compromise between local and remote read/write efficiency that would fit those evolving requirements. It is based on variable-size logical data-chunks (in contrast to fixed-size blocks), direct storage access and several mechanisms improving remote data access performance. The solution is implemented in the Onedata system and suited to its multi-layer architecture, supporting organizationally distributed environments -- with limited trust between data providers. The solution is benchmarked and compared to XRootD + XCache, which offers similar functionalities. The results show that the performance of both systems is comparable, although overheads in local data access are visibly lower in Onedata.
Michał Wrzeszcz, Łukasz Opioła, Bartosz Kryza, Łukasz Dutka, Renata Słota and Jacek Kitowski

ICCS 2019 Main Track (MT) Session 13

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 1.3

Chair: Carlos Gonçalves

71 Lung Nodule Diagnosis via Deep Learning and Swarm Intelligence [abstract]
Abstract: Cancer diagnosis is usually an arduous task in medicine, especially when it comes to pulmonary cancer, which is one of the deadliest and hardest to treat types of cancer. Early detection of pulmonary cancerous nodules drastically increases survival chances, but it also makes the problem even harder to solve, as detection mostly depends on a visual inspection of tomography scans. To help improve detection and survival rates, engineers and scientists have been developing computer-aided diagnosis techniques, such as the one presented in this paper. Here, we use computational intelligence to propose a new approach to detecting pulmonary carcinogenic nodules in computerized tomography scans. The approach uses Deep Learning and Swarm Intelligence to develop a novel nodule detection and classification model. Seven different Swarm Intelligence algorithms and Convolutional Neural Networks for biomedical image segmentation are used to detect and classify cancerous pulmonary nodules in the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The aim of this work is to train Convolutional Neural Networks using swarm intelligence techniques and to demonstrate that this approach is more efficient than classic training with Back-propagation and Gradient Descent. It improves the average accuracy from 93% to 94%, precision from 92% to 94%, sensitivity from 91% to 93% and specificity from 97% to 98%, improvements that are significant under a statistical T-test.
Cesar Affonso De Pinho Pinheiro, Nadia Nedjah and Luiza de Macedo Mourelle
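To make the training idea concrete, here is a minimal sketch of swarm-based network training under stated assumptions: a toy dense network and synthetic data stand in for the paper's CNNs and CT scans, and plain PSO stands in for the seven swarm algorithms compared:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (stand-in for nodule/non-nodule features).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def forward(w, X):
    # Tiny 8-4-1 network; w is one particle's flat position vector.
    W1, b1 = w[:32].reshape(8, 4), w[32:36]
    W2, b2 = w[36:40], w[40]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    p = forward(w, X)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Vanilla PSO over the weight vector: no gradients, only loss evaluations.
n_particles, dim = 30, 41
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best cross-entropy:", pbest_f.min())
```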
85 Marrying Graph Kernel with Deep Neural Network: A Case Study for Network Anomaly Detection [abstract]
Abstract: Network anomaly detection has caused widespread concern among researchers and the industry. Existing work mainly focuses on applying machine learning techniques to detect network anomalies. The ability to exploit the potential relationships of communication patterns in network traffic has been the focus of many existing studies. Graph kernels provide a powerful means for representing complex interactions between entities, while deep neural networks break new ground in that the data representation in the hidden layers is formed by the specific task and is thus customized for network anomaly detection. However, deep neural networks cannot learn communication patterns among network traffic directly. At the same time, they require a large amount of training data and are computationally expensive, especially when considering entire network flows. For these reasons, we employ a novel method that marries graph kernels to deep neural networks: it exploits the relationship expressiveness among network flows, combines it with the ability of neural networks to mine hidden layers, and enhances learning effectiveness when a limited number of training examples is available. We evaluate the proposed method on two real-world datasets which contain low-intensity network attacks, and experimental results reveal that our model achieves significant improvements in accuracy over existing network anomaly detection methods.
Yepeng Yao, Liya Su, Zhigang Lu and Baoxu Liu
114 Machine learning for performance enhancement of molecular dynamics simulations [abstract]
Abstract: We explore the idea of integrating machine learning with simulations to enhance the performance of the simulation and improve its usability for research and education. The idea is illustrated using hybrid OpenMP/MPI parallelized molecular dynamics simulations designed to extract the distribution of ions in nanoconfinement. We find that an artificial neural network based regression model successfully learns the desired features associated with the output ionic density profiles and rapidly generates predictions that are in excellent agreement with the results from explicit molecular dynamics simulations. The results demonstrate that the performance gains of parallel computing can be further enhanced by using machine learning.
Jcs Kadupitiya, Geoffrey Fox and Vikram Jadhao
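As a sketch of the surrogate idea, the snippet below trains a small scikit-learn regression network to map simulation parameters to a discretized output profile; the synthetic data generator is a hypothetical stand-in for the actual MD runs and parameters:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical stand-in for the MD data: inputs are simulation parameters
# (e.g. confinement width, ion concentration, ion diameter); outputs are a
# discretized ionic density profile. A synthetic function plays the role
# of the expensive simulation here.
params = rng.uniform([3.0, 0.3, 0.5], [4.0, 0.9, 0.8], size=(500, 3))
grid = np.linspace(0, 1, 50)
profiles = np.exp(-((grid[None, :] - params[:, [1]]) ** 2)
                  / (0.05 + 0.1 * params[:, [2]]))

Xtr, Xte, ytr, yte = train_test_split(params, profiles, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(Xtr, ytr)
# Once trained, each prediction costs microseconds instead of an MD run.
print("R^2 on held-out simulations:", surrogate.score(Xte, yte))
```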
210 2D-Convolution based Feature Fusion for Cross-Modal Correlation Learning [abstract]
Abstract: Cross-modal information retrieval (CMIR) enables users to search for semantically relevant data of various modalities from a given query of one modality. The predominant challenge is to alleviate the "heterogeneous gap" between different modalities. For text-image retrieval, the typical solution is to project text features and image features into a common semantic space and measure the cross-modal similarity. However, semantically relevant data from different modalities usually contain imbalanced information. Aligning all the modalities in the same space will weaken modal-specific semantics and introduce unexpected noise. In this paper, we propose a novel CMIR framework based on multi-modal feature fusion. In this framework, the cross-modal similarity is measured by directly analyzing the fine-grained correlations between the text features and image features without common semantic space learning. Specifically, we first construct a cross-modal feature matrix to fuse the original visual and textual features. Then 2D-convolutional networks are proposed to reason about inner-group relationships among features across modalities, resulting in fine-grained text-image representations. The cross-modal similarity is measured by a multi-layer perceptron based on the fused feature representations. We conduct extensive experiments on two representative CMIR datasets, i.e. English Wikipedia and TVGraz. Experimental results indicate that our model outperforms state-of-the-art methods significantly. Meanwhile, the proposed cross-modal feature fusion approach is more effective in CMIR tasks than other feature fusion approaches.
Jingjing Guo, Jing Yu, Yuhang Lu, Yue Hu and Yanbing Liu
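A minimal sketch of the fusion step, under assumptions: random vectors stand in for learned text/image features, and a single fixed kernel stands in for the trained 2D-convolutional network:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)

# Hypothetical feature vectors for one text-image pair.
text_feat = rng.normal(size=64)
image_feat = rng.normal(size=64)

# Cross-modal feature matrix: every text dimension meets every image
# dimension, so fine-grained correlations are kept instead of being
# projected into a shared semantic space.
fusion = np.outer(text_feat, image_feat)            # (64, 64)

# One hand-rolled 2D convolution over the fusion matrix; in the paper a
# trained 2D-convolutional network plays this role.
kernel = rng.normal(size=(3, 3)) / 9.0
response = np.maximum(convolve2d(fusion, kernel, mode='valid'), 0.0)
print(response.shape)  # (62, 62) map of local cross-modal correlations
```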
222 DunDi: Improving Robustness of Neural Networks using Distance Metric Learning [abstract]
Abstract: The deep neural networks (DNNs), although highly accurate, are vulnerable to adversarial attacks. A slight perturbation applied to a sample may lead to misprediction by the DNN, even if it is imperceptible to humans. This defect leaves the DNN without robustness to malicious perturbations and thus limits its usage in many safety-critical systems. To this end, we present DunDi, a metric learning based classification model, to provide the ability to defend against adversarial attacks. The key idea behind DunDi is a metric learning model which is able to pull samples of the same label together while pushing samples of different labels away. Consequently, the distance between samples and the model's boundary can be enlarged accordingly, so that significant perturbations are required to fool the model. Then, based on the distance comparison, we propose a two-step classification algorithm that performs efficiently for multi-class classification. DunDi can not only build and train a new customized model but also supports the incorporation of available pre-trained neural network models to take full advantage of their capabilities. The results show that DunDi is able to defend against 94.39% and 88.91% of adversarial samples generated by four state-of-the-art adversarial attacks on the MNIST dataset and CIFAR-10 dataset, without hurting classification accuracy.
Lei Cui, Rongrong Xi and Zhiyu Hao
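The pull-together/push-away idea can be sketched as distance-based classification in an embedding space. This is an illustrative nearest-prototype rule with a rejection margin, not the authors' exact two-step algorithm:

```python
import numpy as np

def classify_by_distance(emb, protos, margin=1.0):
    """Distance-based prediction in a learned embedding space.

    emb:    (d,) embedding of the query sample
    protos: dict label -> (d,) class prototype (mean training embedding)
    Returns the nearest label, or None when the two closest classes are
    not separated by `margin` (the query may be adversarial).
    """
    labels = list(protos)
    dists = np.array([np.linalg.norm(emb - protos[c]) for c in labels])
    order = np.argsort(dists)
    best, second = dists[order[0]], dists[order[1]]
    if second - best < margin:
        return None  # too close to a decision boundary to trust
    return labels[order[0]]
```

The larger the margin enforced during metric learning, the larger the perturbation an attacker needs to move a sample across the boundary, which is the robustness argument in the abstract.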

Workshop on Teaching Computational Science (WTCS) Session 2

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 0.3

Chair: Nia Alexandrov

537 “Two Things Interact and Something Happens” Using Analogy to support Interdisciplinary Thinking in Computational Science [abstract]
Abstract: Many computational models across disciplines are based on a few fundamental analogies, and exposing this truth helps students understand the fundamentals of the science, the mathematics, and the computing. In this talk I will demonstrate introductory models from ecology, medicine, and physics to show how students can better appreciate the interdisciplinary nature of computational science instead of falling into the old “silo” way of thinking and modeling.
Robert Panoff
518 Enabling Interdisciplinary Instruction in Computer Science and Humanities: An Innovative Teaching and Learning Model Customized for Small Liberal Arts Colleges [abstract]
Abstract: The infiltration of data-driven computational methods into humanities research has generated mutual interest between the two communities of computer science and the humanities. Larger institutions have adopted drastic structural reforms to meet the challenge of bridging the two fields. Successful examples include the integrated major programs launched at Stanford University and the collaborative workshop at Carnegie Mellon University. These types of exploratory experiments require 1) intensive resources as well as 2) strong support from faculty and administration. At a small college, both can be luxuries. We present an innovative model for carrying out effective synchronized courses in computational humanities and digital humanities that pulls together the efforts of two small programs and needs little additional support. This paper reviews the proposal, design, and delivery of a pair of interdisciplinary graduate courses in the small college setting. We discuss the details of our implementation and provide our observations and recommendations.
William Crum, Aaron Angello, Xinlian Liu and Corey Campion
271 A project-based course on software development for (engineering) research [abstract]
Abstract: This paper describes the motivation and design of a 10-week graduate course that teaches practices for developing research software; although offered by an engineering program, the content applies broadly to any field of scientific research where software may be developed. Topics taught in the course include local and remote version control, licensing and copyright, structuring Python modules, testing and test coverage, continuous integration, packaging and distribution, open science, software citation, and reproducibility basics, among others. Lectures are supplemented by in-class activities and discussions, and all course material is shared openly via GitHub. Coursework is heavily based on a single, term-long project where students individually develop a software package targeted at their own research topic; all contributions must be submitted as pull requests and reviewed/merged by other students. The course was initially offered in Spring 2018 with 17 students enrolled, and will be taught again in Spring 2019.
Kyle Niemeyer
490 Programming paradigms for computational science: three fundamental models [abstract]
Abstract: The widespread adoption of data science languages and libraries has raised new interest in teaching computational science programming that leverages the capabilities of both single-computer and cluster-based computation infrastructures. Some of the programming paradigms are converging, yet there are specialized uses and cases that require learners to switch from one to another. In this paper, we report on our experience and action research with more than ten cohorts of mixed-background students in postgraduate-level data science classes. We first discuss the key mental models found to be essential to understanding problems, then review the three fundamental models that students must face when coding and their interrelations. Finally, we discuss how decision criteria for choosing frameworks can be introduced to students.
Miguel-Angel Sicilia, Elena Garcia-Barriocanal, Salvador Sanchez-Alonso and Marçal Mora Cantallops

Agent-Based Simulations, Adaptive Algorithms and Solvers (ABS-AAS) Session 2

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 0.4

Chair: Maciej Paszynski

245 Isogeometric Residual Minimization Method (iGRM) with Direction Splitting for Time-Dependent Advection-Diffusion Problems [abstract]
Abstract: We propose a novel computational implicit method called Isogeometric Residual Minimization (iGRM) with direction splitting. The method mixes the benefits of isogeometric analysis, implicit dynamics, residual minimization, and alternating direction solvers. We utilize tensor product B-spline basis functions in space, implicit second-order time integration schemes and residual minimization at every time step. Then, we implement an implicit time integration scheme and apply, for each space direction, a stabilized mixed method based on residual minimization. Finally, we show that the resulting system of linear equations has a Kronecker product structure, which results in an alternating direction solver with linear computational cost, even when using implicit time integration schemes together with the stabilized mixed formulation. We test the proposed method on three advection-diffusion computational examples, including a model "membrane" problem, the circular wind problem, and simulations modelling pollution propagating from a chimney.
Judit Muñoz-Matute, Marcin Los, Ignacio Muga and Maciej Paszynski
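The key computational claim, that the Kronecker structure yields a cheap direction-split solve, can be illustrated with numpy. A and B below are dense stand-ins for the 1D direction matrices; solving two small systems replaces one large one:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 50

# Direction-split factors (stand-ins for the 1D B-spline matrices).
A = rng.random((n, n)) + n * np.eye(n)
B = rng.random((m, m)) + m * np.eye(m)
C = rng.random((m, n))                      # right-hand side as a matrix

# (A kron B) vec(X) = vec(B X A^T), with vec stacking columns, so the
# (n*m) x (n*m) system splits into one solve per direction:
Y = np.linalg.solve(B, C)                   # B Y = C        (Y = X A^T)
X = np.linalg.solve(A, Y.T).T               # X A^T = Y

# Check against the assembled Kronecker system (feasible only at toy size).
x = X.flatten(order='F')
b = C.flatten(order='F')
print(np.allclose(np.kron(A, B) @ x, b))    # True
```

With banded 1D B-spline matrices each directional solve is linear in the number of unknowns, which is where the linear overall cost per time step claimed in the abstract comes from.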
328 Augmenting Multi-Agent Negotiation in Interconnected Freight Transport Using Complex Networks Analysis [abstract]
Abstract: This paper proposes the use of computational methods of Complex Networks Analysis to augment the capabilities of a broker involved in multi-agent freight transport negotiation. We have developed an experimentation environment that provides compelling arguments that, using our proposed approach, the broker is able to apply more effective negotiation strategies for gaining longer-term benefits than those offered by the standard Iterated Contract Net negotiation approach. The proposed negotiation strategies take effect on the entire population of bidding agents and are driven by market-inspired purposes such as breaking monopolies and supporting agents with diverse transportation capabilities.
Alex Becheru and Costin Badica
358 Security-Aware Distributed Job Scheduling in Cloud Computing Systems: A Game-Theoretic Cellular Automata-based Approach [abstract]
Abstract: We consider the problem of security-aware scheduling and load balancing in Cloud Computing systems. We replace this optimization problem by a game-theoretic approach where players tend to achieve a solution by reaching a Nash equilibrium. We propose a fully distributed algorithm based on an iterated spatial Prisoner's Dilemma game and the phenomenon of collective behavior of players participating in the game. Brokers representing users participate in the game to fulfill their own two criteria: the execution time of the submitted tasks and the level of provided security assurance. We show experimentally that in the process of the game a solution is found which provides optimal resource utilization while users meet their applications' performance and security requirements with minimum expenditure and overhead.
Jakub Gasior and Franciszek Seredynski
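For orientation, here is a minimal iterated spatial Prisoner's Dilemma on a grid, with Nowak-May style payoffs and an imitate-the-best-neighbour rule; the brokers' two-criteria payoffs from the paper are replaced by a single toy temptation parameter:

```python
import numpy as np

rng = np.random.default_rng(4)
N, b = 50, 1.65                           # grid size, temptation payoff
strat = rng.integers(0, 2, size=(N, N))   # 1 = cooperate, 0 = defect

def payoffs(s):
    """Each cell plays PD with its 4 neighbours (periodic boundary)."""
    total = np.zeros_like(s, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(s, shift, axis=(0, 1))
        # Weak PD payoffs: C vs C -> 1, D vs C -> b, otherwise 0.
        total += np.where((s == 1) & (nb == 1), 1.0,
                 np.where((s == 0) & (nb == 1), b, 0.0))
    return total

for _ in range(100):
    p = payoffs(strat)
    best, best_p = strat.copy(), p.copy()
    # Collective-behaviour rule: imitate the most successful neighbour.
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_p = np.roll(p, shift, axis=(0, 1))
        nb_s = np.roll(strat, shift, axis=(0, 1))
        better = nb_p > best_p
        best[better], best_p[better] = nb_s[better], nb_p[better]
    strat = best

print("cooperator fraction:", strat.mean())
```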
402 Residual minimization for isogeometric analysis in reduced and mixed forms [abstract]
Abstract: Most variational forms of isogeometric analysis use highly-continuous basis functions for both trial and test spaces. For a partial differential equation with a smooth solution, isogeometric analysis with highly-continuous basis functions for the trial space results in excellent discrete approximations of the solution. However, we observe that high continuity for test spaces is not necessary. In this work, we present a framework which uses highly-continuous B-splines for the trial spaces and basis functions with minimal regularity and possibly lower-order polynomials for the test spaces. To realize this goal, we adopt the residual minimization methodology. We pose the problem in a mixed formulation, which results in a system governing both the solution and a Riesz representation of the residual. We present various variational formulations which are variationally stable and verify their equivalence via numerical tests.
Victor Calo, Quanling Deng, Sergio Rojas and Albert Romkes
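For orientation, the mixed formulation the abstract refers to can be written in the standard abstract form of residual minimization (notation assumed, not taken from the paper): find the residual representative r in the test space V and the solution u in the trial space U such that

```latex
\begin{aligned}
(r, v)_V + b(u, v) &= \ell(v) && \forall v \in V,\\
b(w, r) &= 0 && \forall w \in U,
\end{aligned}
```

where b(·,·) is the bilinear form of the weak problem, ℓ the load functional, and (·,·)_V the test-space inner product; r is precisely the Riesz representation of the residual mentioned in the abstract.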

Multiscale Modelling and Simulation (MMS) Session 2

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 0.5

Chair: Derek Groen

465 Introducing VECMAtk - verification, validation and uncertainty quantification for multiscale and HPC simulations [abstract]
Abstract: Multiscale simulations are an essential computational tool in a range of research disciplines, and provide unprecedented levels of scientific insight at a tractable cost in terms of effort and compute resources. To provide this, we need such simulations to produce results that are both robust and actionable. The VECMA toolkit (VECMAtk), which is officially released in conjunction with the present paper, establishes a platform to achieve this by exposing patterns for verification, validation and uncertainty quantification (VVUQ). These patterns can be combined to capture complex scenarios, applied to applications in disparate domains, and used to run multiscale simulations on any desktop, cluster or supercomputing platform.
Derek Groen, Robin Richardson, David Wright, Vytautas Jancauskas, Robert Sinclair, Paul Karlshoefer, Maxime Vassaux, Hamid Arabnejad, Tomasz Piontek, Piotr Kopta, Bartosz Bosak, Jalal Lakhlili, Olivier Hoenen, Diana Suleimenova, Wouter Edeling, Daan Crommelin, Anna Nikishova and Peter Coveney
350 EasyVVUQ: Building trust in simulation. [abstract]
Abstract: Modelling and simulation are increasingly well established techniques in a wide range of academic and industrial domains. As their use becomes increasingly important, it is vital that we understand both their sensitivity to inputs and how much confidence we should have in their results. Nonetheless, few simulations are reported with rigorous verification and validation (V&V), or even meaningful error bars (uncertainty quantification, UQ). EasyVVUQ is a Python library designed to allow the integration of non-intrusive VVUQ techniques into existing simulation workflows. Our aim is to provide the basis for tools which wrap around applications so that the user can specify the scientifically interesting parameters of the model and the type of VVUQ algorithm they wish to apply, while the details of the setup and analysis are abstracted from them. To this end, we have designed JSON-based input formats that provide a human-readable and comprehensible interface to the code. The EasyVVUQ framework is based on the concept of a Campaign of simulations, the inputs of which are generated by a range of sampling algorithms. The Campaign is executed externally to the library, but the results are processed, aggregated and analyzed within it. EasyVVUQ provides simple templating features that facilitate mapping between scientific parameters and input options and files for a wide range of applications out of the box. Furthermore, our design allows simple customization of both the input generation and the extraction of relevant data from simulation outputs by expert users and developers. We present its use in three example multiscale applications from the VECMA project: protein-ligand binding affinity calculations, coupled molecular dynamics and finite element materials modelling, and fusion.
David Wright, Robin Richardson and Peter Coveney
391 Uncertainty quantification in multiscale simulations applied to fusion plasmas [abstract]
Abstract: In order to predict the overall performance of a thermonuclear fusion device, an understanding of how microscale turbulence affects the global transport of the plasma is essential. A multiscale component-based fusion simulation was designed by coupling together several single-scale physics models into a workflow comprising a transport code, an equilibrium code and a turbulence code. While previous simulations using such a workflow showed promising results in propagating turbulent effects to the overall plasma transport [1], the profiles of densities and temperatures simulated by the transport model carry uncertainties that have yet to be quantified. The turbulence code provides the transport coefficients, which are inherently noisy. These coefficients are then propagated through the transport code and produce an uncertainty interval in the calculated profiles, which is in turn used in the equilibrium and turbulence codes to calculate new uncertainty intervals. Our goal is therefore to study how these uncertainties propagate through the workflow, so that we can draw quantitative comparisons between numerical and experimental results. In this context, we are developing tools based on a non-intrusive polynomial chaos expansion [2] (PCE). For that, each sub-model is treated as a black box in which the PCE method is applied. Then, several statistical metrics are derived directly from the polynomial expansion, and finally we obtain the uncertainty quantification (UQ) and the parameter sensitivity of the multiscale model involved. References: [1] O.O. Luk, O. Hoenen, A. Bottino, B.D. Scott, D.P. Coster, ComPat framework for multiscale simulations applied to fusion plasmas, Computer Physics Communications (2019), https://doi.org/10.1016/j.cpc.2018.12.021. [2] R. Preuss, U. von Toussaint, Uncertainty quantification in ion–solid interaction simulations, Nuclear Instruments and Methods in Physics Research Section B (2017), https://doi.org/10.1016/j.nimb.2016.10.033.
Jalal Lakhlili, David Coster, Olivier Hoenen, Onnie Luk, Roland Preuss and Udo von Toussaint
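A minimal non-intrusive PCE in one uncertain parameter, sketched with numpy's probabilists' Hermite basis; the black-box function below is a hypothetical stand-in for a sub-model of the fusion workflow:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(5)

def model(xi):
    """Black-box stand-in for one sub-model of the coupled workflow."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Non-intrusive PCE in one uncertain parameter xi ~ N(0, 1): sample the
# black box, then fit probabilists' Hermite coefficients by regression.
deg = 6
xi = rng.standard_normal(2000)
coef = He.hermefit(xi, model(xi), deg)

# Statistics read off the expansion: He_0 = 1, E[He_k] = 0 for k >= 1,
# and E[He_k^2] = k! under the standard normal, so:
mean = coef[0]
var = sum(c**2 * factorial(k) for k, c in enumerate(coef[1:], start=1))
print("PCE:", mean, var)

# Monte Carlo cross-check against the same black box.
s = model(rng.standard_normal(200_000))
print("MC: ", s.mean(), s.var())
```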
293 Analysis of Uncertainty of an In-Stent Restenosis Model [abstract]
Abstract: Uncertainty and sensitivity analysis provides insight into how uncertainty in the model inputs affects the model response [1, 2]. Usually, methods for such analysis are computationally expensive and may require high performance resources. In [3], we perform uncertainty quantification by applying the quasi-Monte Carlo method to a two-dimensional version of an in-stent restenosis model (ISR2D) [4]. Additionally, in [5], we improve the efficiency of the uncertainty estimation by applying the semi-intrusive multiscale method [6]. We observe approximately 30% uncertainty in the mean neointimal area as simulated by the ISR2D model. Depending on whether a fast initial endothelium recovery occurs, the proportion of the model variance due to natural variability ranges from 15% to 35%. The endothelium regeneration time is identified as the most influential model parameter. The model output contains a moderate quantity of uncertainty, and the model precision can be increased by obtaining a more certain value of the endothelium regeneration time. The results obtained by the semi-intrusive method show a good match to those obtained by the black-box quasi-Monte Carlo method (see Fig. 1). Moreover, we significantly reduce the computational cost of the uncertainty estimation. We conclude that the semi-intrusive metamodeling method is reliable and efficient, and can be applied to complex models such as the ISR2D model.
Anna Nikishova, Lourens Veen, Pavel Zun and Alfons Hoekstra
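A sketch of the quasi-Monte Carlo step with SciPy's scrambled Sobol' sampler; the model and parameter ranges are hypothetical stand-ins for the ISR2D inputs (e.g. endothelium regeneration time):

```python
import numpy as np
from scipy.stats import qmc

def isr_like_model(p):
    """Toy response surface; the real quantity of interest would be the
    neointimal area produced by an ISR2D run with these inputs."""
    regen_time, flow = p[:, 0], p[:, 1]
    return 0.4 * regen_time**1.5 + 0.1 * np.sin(3 * flow) + 0.05

# Scrambled Sobol' sequence: 2^10 quasi-random points in the unit square,
# rescaled to (hypothetical) parameter ranges. QMC points fill the space
# more evenly than plain Monte Carlo, so moments converge faster.
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
u = sampler.random_base2(m=10)
params = qmc.scale(u, l_bounds=[5.0, 0.1], u_bounds=[30.0, 1.0])

y = isr_like_model(params)
print("mean:", y.mean(), "std:", y.std())
```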
100 Creating a reusable cross-disciplinary multi-scale and multi-physics framework: from AMUSE to OMUSE and beyond [abstract]
Abstract: We describe our efforts to create a multi-scale and multi-physics framework that can be retargeted across different disciplines. Currently we have implemented our approach in the astrophysical domain, for which we developed AMUSE, and generalized this to the oceanographic and climate sciences, which led to the development of OMUSE. The objective of this paper is to document the design choices that led to the successful implementation of these frameworks as well as the future challenges in applying this approach to other domains.
Federico Inti Pelupessy, Simon Portegies Zwart, Arjen van Elteren, Henk Dijkstra, Fredrik Jansson, Daan Crommelin, Pier Siebesma, Ben van Werkhoven and Gijs van den Oord

Computational Science in IoT and Smart Systems (IoTSS) Session 3

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 0.6

Chair: Vaidy Sunderam

560 Fuzzy Join as a Preparation Step for the Analysis of Training Data [abstract]
Abstract: Analysis of training data has become an inseparable part of sports preparation, not only for professional athletes but also for sports enthusiasts and amateurs. Nowadays, smart wearables and IoT devices allow monitoring of various parameters of our physiology and activity. The intensity and effectiveness of the activity and the values of some physiological parameters may depend on the weather conditions on particular days. Therefore, for efficient analysis of training data, it is important to align training data with weather sensor data. In this paper, we show how this process can be performed with the use of the fuzzy join technique, which allows combining data points shifted in time.
Anna Wachowicz and Dariusz Mrozek
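A minimal sketch of such a time-tolerant alignment using pandas' built-in asof join; this shows the idea of fuzzy matching on time, not the authors' implementation:

```python
import pandas as pd

# Training samples and weather readings are shifted in time; a tolerance-
# based join aligns each sample with the nearest reading.
training = pd.DataFrame({
    "time": pd.to_datetime(["2019-06-13 14:22", "2019-06-13 14:47",
                            "2019-06-13 15:10"]),
    "heart_rate": [128, 141, 137],
})
weather = pd.DataFrame({
    "time": pd.to_datetime(["2019-06-13 14:20", "2019-06-13 14:50",
                            "2019-06-13 15:20"]),
    "temp_c": [21.5, 22.1, 22.8],
})

# Both frames must be sorted on the key; rows farther than the tolerance
# from any reading get NaN instead of a wrong match.
joined = pd.merge_asof(training.sort_values("time"),
                       weather.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta("15min"))
print(joined)
```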
291 Collaborative Learning Agents (CLA) for Swarm Intelligence and Application to Health Monitoring of System of Systems [abstract]
Abstract: The statistical significance of machine learning (ML) and artificial intelligence (AI) applications improves purely due to increasing big data size. This positive impact can be a great advantage. However, other challenges arise for processing and learning from big data. Traditional data science, ML and AI used in small- or moderate-sized analyses typically require tight coupling of the computations, where an algorithm often executes in a single machine or job and reads all the data at once. Making a generic case for parallel and distributed computing of a ML/AI algorithm over big data proves a difficult task. In this paper, we describe a novel infrastructure, namely collaborative learning agents (CLA), and its application in an operational environment, namely swarm intelligence, where each swarm agent is implemented using a CLA. This infrastructure enables a collection of swarms to work together to fuse heterogeneous big data sources in a parallel and distributed fashion as if they were a single agent. As a use case, we describe a data set from the Hack the Machine event, where data science and ML/AI work together to better understand the Navy's engines, ships and systems of systems. The sensors installed in a distributed environment collect heterogeneous big data. We show how CLA and swarm intelligence are used to analyze data from systems of systems and quickly examine health and maintenance issues across multiple sensors. The methodology can be applied to a wide range of applications that leverage collaborative, distributed learning agents and AI for automation.
Ying Zhao and Charles Zhou
344 Computationally Efficient Classification of Audio Events Using Binary Masked Cochleagrams [abstract]
Abstract: In this work, a computationally efficient technique for acoustic event classification is presented. The approach is based on the cochleagram structure and the identification of dominant time-frequency units. The input signal is split into frames, then the cochleagram is calculated and masked by a set of class masks to determine the most probable audio class. The mask for a given class is calculated from a training set of time-aligned events by selecting dominant energy parts in the time-frequency plane. The process of binary mask estimation exploits the thresholding of consecutive cochleagrams and computing their sum; a final thresholding is then applied to the result, giving the representation for a particular class. All available masks for all classes are checked in sequence to determine the audio event with the highest probability. The proposed technique was verified on a small database of acoustic events specific to surveillance systems. The results show that such an approach can be used in systems with limited computational resources, giving satisfying classification results.
Tomasz Maka
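A heavily simplified sketch of the mask construction and matching described above; the threshold values and the majority vote are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def binary_mask(cochleagrams, thr=0.5):
    """Build a class mask from time-aligned training cochleagrams:
    threshold each one, sum the binary maps, threshold the sum."""
    votes = sum((c > thr * c.max()).astype(int) for c in cochleagrams)
    return votes >= (len(cochleagrams) // 2 + 1)   # majority of examples

def classify(cochleagram, masks, thr=0.5):
    """Score each class by how much dominant energy its mask captures."""
    active = cochleagram > thr * cochleagram.max()
    scores = {label: (active & m).sum() / max(m.sum(), 1)
              for label, m in masks.items()}
    return max(scores, key=scores.get)
```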

Computational Optimization, Modelling and Simulation (COMS) Session 1

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 1.4

Chair: Xin-She Yang

437 Comparison of Constraint-Handling Techniques for Metaheuristic Optimization [abstract]
Abstract: Most engineering design problems have highly nonlinear constraints, and the proper handling of such constraints can be important to ensure solution quality. There are many different ways of handling constraints and different algorithms for optimization problems, which makes the choice difficult for users. This paper compares six different constraint-handling techniques, including penalty methods, barrier functions, the $\epsilon$-constrained method, feasibility criteria and stochastic ranking. The pressure vessel design problem is solved by the flower pollination algorithm, and results show that stochastic ranking and the $\epsilon$-constrained method are the most effective for this type of design optimization.
Xing-Shi He, Qin-Wei Fan and Xin-She Yang
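As an illustration of the simplest of the compared techniques, here is a static quadratic penalty on a toy two-variable problem; scipy's differential evolution is a stand-in for the flower pollination algorithm used in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Static-penalty handling of g(x) <= 0 constraints on a toy problem
# (the paper benchmarks the pressure vessel design problem instead).
def objective(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def constraints(x):
    return np.array([x[0] ** 2 - x[1],        # x0^2 <= x1
                     x[0] + x[1] - 2.0])      # x0 + x1 <= 2

def penalized(x, mu=1e4):
    g = constraints(x)
    # Quadratic penalty on the violated part only; feasible points pay 0.
    return objective(x) + mu * np.sum(np.maximum(g, 0.0) ** 2)

res = differential_evolution(penalized, bounds=[(-3, 3), (-3, 3)], seed=0)
print(res.x, objective(res.x), constraints(res.x))
```

The penalty weight mu is the method's weak spot: too small and infeasible points win, too large and the landscape becomes ill-conditioned, which is one reason the paper finds stochastic ranking and the epsilon-constrained method more effective.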
231 Dynamic Partitioning of Evolving Graph Streams using Nature-inspired Heuristics [abstract]
Abstract: Detecting communities of interconnected nodes is a frequently addressed problem in situations that can be modeled as a graph. A common practical example is the one arising from Social Networks. In any case, detecting an optimal partition in a network is an extremely complex and highly time-consuming task. Thus, the development and application of meta-heuristic solvers emerges as a promising alternative for dealing with these problems. The research presented in this paper deals with the optimal partitioning of graph instances in the special case in which connections among nodes change dynamically along the time horizon. This specific case of networks is less addressed in the literature than its counterparts. For efficiently solving such a problem, we have modeled and implemented a set of meta-heuristic solvers, all of them inspired by different processes and phenomena observed in Nature. Concretely, the considered approaches are the Water Cycle Algorithm, Bat Algorithm, Firefly Algorithm and Particle Swarm Optimization. All these methods have been adapted to deal properly with this discrete and dynamic problem, using a reformulated expression of the well-known modularity formula as fitness function. A thorough experimentation has been carried out over a set of 12 synthetically generated dynamic graph instances, with the main goal of concluding which of the aforementioned solvers is the most appropriate to deal with this challenging problem. Statistical tests have been conducted on the obtained results to rigorously conclude that the Bat Algorithm and Firefly Algorithm outperform the remaining methods in terms of Normalized Mutual Information with respect to the true partition of the graph.
Eneko Osaba, Miren Nekane Bilbao, Andres Iglesias, Javier Del Ser, Akemi Galvez-Tomida, Iztok Jr. Fister and Iztok Fister
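The fitness being maximized is modularity; for orientation, here is the classic static formula evaluated with networkx on a toy graph standing in for one snapshot of the dynamic stream (the paper uses a reformulated dynamic variant):

```python
import networkx as nx

# Modularity of a candidate partition is the fitness each metaheuristic
# scores; a solver would mutate `partition` and keep improvements.
G = nx.karate_club_graph()
partition = [set(range(0, 17)), set(range(17, 34))]
Q = nx.algorithms.community.modularity(G, partition)
print("modularity:", Q)
```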
317 Bat Algorithm for Kernel Computation in Fractal Image Reconstruction [abstract]
Abstract: Computer reconstruction of digital images is an important problem in many areas such as image processing, computer vision, medical imaging, sensor systems, robotics, and many others. A very popular approach in that regard is the use of different kernels for various morphological image processing operations such as dilation, erosion, blurring, sharpening, and so on. In this paper, we extend this idea to the reconstruction of digital fractal images. Our proposal is based on a new affine kernel particularly tailored for fractal images. The kernel computes the difference between the source and the reconstructed fractal images, leading to a difficult nonlinear constrained continuous optimization problem, solved by using a powerful nature-inspired metaheuristic for global optimization called the bat algorithm. An illustrative example is used to analyze the performance of this approach. Our experiments show that the method performs quite well, but there is also room for further improvement. We conclude that this approach is promising and that it could be a very useful technique for efficient fractal image reconstruction.
Akemi Galvez-Tomida, Eneko Osaba, Javier Del Ser and Andres Iglesias Prieto
107 Heuristic Rules for Coordinated Resources Allocation and Optimization in Distributed Computing [abstract]
Abstract: In this paper, we consider heuristic rules for optimizing resource utilization in distributed computing environments. Existing job-flow execution mechanics impose many restrictions on resource allocation procedures. Grid, cloud and hybrid computing services operate in heterogeneous and usually geographically distributed computing environments. Emerging virtual organizations and incorporated economic models allow users and resource owners to compete for suitable allocations based on market principles and fair scheduling policies. Subject to these features, a set of heuristic rules for coordinated compact scheduling is proposed to select resources depending on how well they fit a particular job's execution requirements. A dedicated simulation experiment studies the optimization of integral job-flow characteristics when these rules are applied to a conservative backfilling scheduling procedure.
Victor Toporkov and Dmitry Yemelyanov
37 Nonsmooth Newton’s Method: Some Structure Exploitation [abstract]
Abstract: We investigate real asymmetric linear systems arising in the search direction generation in a nonsmooth Newton's method. This applies to constrained optimisation problems via reformulation of the necessary conditions into an equivalent nonlinear and nonsmooth system of equations. We propose a strategy to exploit the problem structure. First, based on the sub-blocks of the original matrix, some variables are selected and ruled out for a posteriori recovery; then, a smaller and symmetric linear system is generated; eventually, from the solution of the latter, the remaining variables are obtained. We prove the method is applicable if the original linear system is well-posed. We propose and discuss different selection strategies. Finally, numerical examples are presented to compare this method with the direct approach without exploitation, for full and sparse matrices, over a wide range of problem sizes.
Alberto De Marchi and Matthias Gerdts
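A generic numpy sketch of the eliminate-then-recover pattern via a Schur complement; note the paper's selection strategies are designed so that the reduced system is symmetric, which this toy example does not enforce:

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2 = 60, 40

# Asymmetric block system  [[A, B], [C, D]] [x; y] = [f; g].
A = rng.random((n1, n1)) + n1 * np.eye(n1)
B = rng.random((n1, n2))
C = rng.random((n2, n1))
D = rng.random((n2, n2)) + n2 * np.eye(n2)
f, g = rng.random(n1), rng.random(n2)

# Rule out y a priori:  y = D^{-1}(g - C x)  reduces the problem to a
# smaller system in x alone (the Schur complement); y is recovered after.
Dinv_g = np.linalg.solve(D, g)
Dinv_C = np.linalg.solve(D, C)
S = A - B @ Dinv_C                       # Schur complement of D
x = np.linalg.solve(S, f - B @ Dinv_g)
y = Dinv_g - Dinv_C @ x

K = np.block([[A, B], [C, D]])
print(np.allclose(K @ np.concatenate([x, y]), np.concatenate([f, g])))
```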

Smart Systems: Bringing Together Computer Vision, Sensor Networks and Machine Learning (SmartSys) Session 2

Time and Date: 14:20 - 16:00 on 13th June 2019

Room: 2.26

Chair: João Rodrigues

468 Smart Campus Parking – Parking Made Easy [abstract]
Abstract: The number of users of the parking lots of the Polytechnic of Leiria, a higher education institution, has been increasing each year, and addressing the high demand for free parking spots on the campus is becoming a major concern. In order to ease this problem, this paper proposes the design of a smart parking system that can help users easily find a parking spot using an integrated system that includes actual sensors and a mobile application. The system is based on information about the occupation status of parking lots generated by parking sensors. This information is accessed by the mobile application through a REST webservice and presented to end-users, contributing to a decrease in the time wasted on the quest of finding an empty spot. The software architecture behind this layer is a set of decoupled modules that compute and share the information generated by the sensors. This architectural approach is noteworthy because it maximizes system scalability and responsiveness to change. It allows the system to expand with the integration of new applications and to perform updates on the existing ones, without an overall impact on the operations of the other system modules.
Catarina I. Reis, Marisa Maximiano, Amanda Paula, Iolanda Rosa, Ivo Santos, Tiago Paulo and Nuno Costa
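A minimal sketch of the occupancy-endpoint idea with Flask; the route name and the in-memory table below are hypothetical illustrations, not the system's actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory occupancy table; in the real system this state
# would be fed by the parking sensors through the decoupled ingestion
# modules described in the abstract.
SPOTS = {"P1-001": "free", "P1-002": "occupied", "P2-014": "free"}

@app.route("/api/parking-lots/<lot>/spots")
def spots(lot):
    """REST endpoint a mobile app could poll for free spots in a lot."""
    match = {s: state for s, state in SPOTS.items() if s.startswith(lot)}
    return jsonify({
        "lot": lot,
        "free": [s for s, st in match.items() if st == "free"],
        "occupied": [s for s, st in match.items() if st == "occupied"],
    })

if __name__ == "__main__":
    app.run(port=5000)
```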
527 The network topology of connecting Things: Defense of IoT graph in the smart city [abstract]
Abstract: The Internet of Things (IoT) is a novel paradigm based on the connectivity among different entities, namely "things". The vision of an IoT environment based on smart "things" represents an essential strategy for the progress of effective and efficient solutions related to the urban context (e.g., system architecture, design and development, human involvement, data management and applications). On the other hand, with the introduction of the IoT environment, the security of the devices and the network becomes a fundamental, challenging issue. Moreover, the proliferation of human IoT connections in the system requires focusing efforts on the vulnerability of the complex network as well as the defence challenges at the topological level. This paper addresses these challenges from the perspective of graph theory. In this work, the authors use their AV11 algorithm to identify the most critical and influential IoT nodes in a Social IoT (SIoT) network in a smart city context using the ENEA-Cresco infrastructure.
Marta Chinnici, Vincenzo Fioriti and Andrea Arbore
484 SILKNOWViz: Spatio-temporal data ontology viewer. [abstract]
Abstract: Interactive visualization of spatio-temporal data is a very active area that has experienced remarkable advances in the last decade. This is due to the emergence of fields of research such as big data and advances in hardware that allow better analysis of information. This article describes the methodology followed and the design of an open source tool which, in addition to interactively visualizing spatio-temporal data represented in an ontology, allows the definition of what to visualize and how to do it. The tool allows selecting, filtering and visualizing in a graphical way the entities of the ontology with spatio-temporal data, as well as the instances related to them. The graphical elements used to display the information are specified in the same ontology, extending the VISO graphic ontology, used for mapping concepts to graphic objects with the RDFS/OWL Visualization Language (RVL). This extension contemplates data visualization in rich real-time 3D environments, allowing different modes of visualization according to the level of detail of the scene, while also emphasizing the treatment of spatio-temporal data, very often used in cultural heritage models. This visualization tool covers both simple visualization scenarios and high-interaction environments that allow complex comparative analysis. It combines traditional solutions, like hypercubes or time animations, with innovative data selection methods. This work has been developed in the SILKNOW project, which received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 769504.
Javier Sevilla Peris, Cristina Portales Ricart, Jesús Gimeno Sancho and Jorge Sebastian Lozano
420 Ontology-Driven Automation of IoT-Based Human-Machine Interfaces Development [abstract]
Abstract: The paper is devoted to the development of high-level tools to automate the creation of tangible human-machine interfaces, bringing together IoT technologies and ontology engineering methods. We propose using an ontology-driven approach to enable automatic generation of firmware for the devices and middleware for the applications, to design from scratch or transform the existing M2M ecosystem with respect to new human needs and, if necessary, to transform M2M systems into human-centric ones. Building on our previous research, we developed the firmware and middleware generator on top of the SciVi scientific visualization system, which has proven to be a handy tool for integrating different data sources, including software solvers and hardware data providers, for monitoring and steering purposes. The high-level graphical user interface of SciVi enables designing human-machine communication in terms of data flow and ontological specifications. Thereby the SciVi platform capabilities are sufficient to automatically generate all the necessary components of the IoT ecosystem software. We tested our approach by tackling the real-world problem of creating a hardware device that turns human gestures into semantics of spatiotemporal deixis, which relates to the verbal behavior of people with different psychological types. The device firmware generated by means of SciVi tools enables researchers to understand complex matters, helps them analyze the linguistic behavior of users of social networks with different psychological characteristics, and identify patterns inherent in their communication in social networks.
Konstantin Ryabinin, Svetlana Chuprina and Konstantin Belousov
531 Towards Parameter-Optimized Vessel Re-identification based on IORnet [abstract]
Abstract: Reliable vessel re-identification would enable maritime surveillance systems to analyze the behavior of vessels by drawing their accurate trajectories when they pass along different camera locations. However, challenging outdoor conditions and varying viewpoint appearances, combined with the large size of vessels, prevent conventional methods from obtaining robust re-identification performance. This paper employs CNNs to address these challenges. We propose an Identity Oriented Re-identification network (IORnet), which improves the triplet method with a new identity-oriented loss function. The resulting method increases the feature vector similarities between vessel samples belonging to the same vessel identity. Our experimental results reveal that the proposed method achieves 81.5% and 91.2% on mAP and Rank1 scores, respectively. Additionally, we report experimental results with data augmentation and hyper-parameter optimization to facilitate reliable vessel re-identification. Finally, we provide our real-world vessel re-identification dataset with various annotated multi-class features for public access.
Amir Ghahremani, Yitian Kong, Egor Bondarev and Peter H.N. de With
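For reference, the two reported scores can be computed from a query-by-gallery distance matrix as follows; the toy random embeddings below stand in for IORnet features:

```python
import numpy as np

def rank1_and_map(dist, query_ids, gallery_ids):
    """Rank-1 accuracy and mean average precision from a query-by-gallery
    distance matrix (smaller distance = more similar)."""
    r1, aps = 0, []
    for i, qid in enumerate(query_ids):
        order = np.argsort(dist[i])              # gallery sorted by distance
        hits = (gallery_ids[order] == qid)
        r1 += hits[0]                            # correct identity ranked first?
        precision = np.cumsum(hits) / (np.arange(len(hits)) + 1)
        aps.append((precision * hits).sum() / max(hits.sum(), 1))
    return r1 / len(query_ids), float(np.mean(aps))

# Toy usage with random embeddings standing in for learned features.
rng = np.random.default_rng(7)
q, g = rng.normal(size=(5, 16)), rng.normal(size=(50, 16))
dist = np.linalg.norm(q[:, None] - g[None, :], axis=2)
qid = np.arange(5)
gid = rng.integers(0, 5, size=50)
print(rank1_and_map(dist, qid, gid))
```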