Poster papers (POSTER) Session 1

Time and Date: 12:15 - 12:45 on 12th June 2019, 11:55 - 12:25 on 13th June 2019, 11:55 - 12:25 on 14th June 2019

Room: Poster Hall

Chair: None

33 Mixed Finite Element Solution for the Natural-Gas Dual-Mechanism Model [abstract]
Abstract: In this paper, we introduce a dual-porosity dual-permeability (DPDP) model to describe the transport of natural gas in shale. The governing differential system is solved numerically using the mixed finite element method (MFEM). Numerical experiments are conducted under the corresponding physical parameters of the model. Selected numerical results are presented in graphs, such as the cumulative rate and variations in pressures and relative permeabilities.
Mohamed El-Amin
55 On the Feasibility of Distributed Process Mining in Healthcare [abstract]
Abstract: Process mining is gaining significant importance in the healthcare domain, where the quality of services depends on the suitable and efficient execution of processes. A pivotal challenge for the application of process mining in the healthcare domain comes from the growing importance of multi-centric studies, where data from different medical centers are considered in order to increase the number of recruited patients and gain a better understanding of causal relations on a large number of clinical variables. However, to be in a position to exploit such distributed knowledge, which is spread across hospitals, efficient and privacy-preserving techniques are strongly needed. In this paper, building on top of the well-known Alpha algorithm for process discovery, we introduce and empirically test a distributed process mining approach that overcomes problems related to privacy and data fragmentation. The introduced technique allows process mining to be performed without sharing any patient-related information, thus ensuring privacy and maximizing the possibility of cooperation among hospitals.
Roberto Gatta, Mauro Vallati, Jacopo Lenkowicz, Carlotta Masciocchi, Francesco Cellini, Luca Boldrini, Carlos Fernandez Llatas, Vincenzo Valentini and Andrea Damiani
69 How to Plan Roadworks in Urban Regions? A Principled Approach Based on AI Planning [abstract]
Abstract: Roadworks are required to keep roads in acceptable condition and to perform maintenance of essential infrastructure. However, due to increasing traffic volumes and increasing urbanisation, road agencies are currently facing the problem of how to effectively plan frequent (and usually concurrent) roadworks in the controlled region. Automated Planning can be fruitfully exploited as a Decision Support toolkit that, given a specification of available actions (elementary decisions to be taken) and constraints, an initial situation and goals to be achieved, is capable of generating plans that achieve the specified goals. In this paper, we exploit Automated Planning for roadworks planning. We introduce a planning domain model that allows us to plan a set of required roadworks, over a period of time, in a large urban region, by specifying constraints to be satisfied and suitable quality metrics. Our empirical analysis shows the suitability of the proposed approach.
Mauro Vallati, Lukas Chrpa and Diane Kitchin
72 Big data approach to fluid dynamics visualization problem [abstract]
Abstract: The present work is dedicated to the development of software for interactive visualization of the results of gas dynamics simulations on extremely large meshes. The Kitware ParaView visualization tool, which is popular among engineers and scientists, is used as a frontend. The coupling of client and server instances of ParaView is used in the project. The crucial feature of the work is the application of Apache Hadoop and Apache Spark for distributed retrieval of simulation data from files on disk. The data is stored on the cluster in the Hadoop Distributed File System (HDFS) managed by Apache Hadoop and is provided to the ParaView server by the Apache Spark data processing tool.
Vyacheslav Reshetnikov, Egor Golubchikov, Andrey Pyatlin, Alexey Kuzin, Vladislav Kiev, Nikolay Shabrov, Alexey Zhuravlev and Ekaterina Guseva
77 Dolphin Kick Swimmer using the Unstructured Moving Mesh Method [abstract]
Abstract: The dolphin kick plays a vital role in swimming competitions, as it is used after dives and turns in several swimming styles. To improve a swimmer's dolphin kick performance, the flows around the swimmer were simulated. Using video footage of a male swimmer's joint angles, a 3D model simulation was created. The flows were computed using the unstructured moving grid finite volume method to express the complicated motion of swimmers. The mesh around the swimmer is moved according to his motion. In this method, a geometric conservation law is satisfied as well as a physical one. Furthermore, the moving computational domain method is also adopted for calculation efficiency. The numerical swimmer is finally completed by a coupled computation between the human motion and the fluid. The simulation results revealed that the maximum knee oscillation angle affects the speed of the swimmer.
Masashi Yamakawa, Norihito Mizuno and Yongmann Chung
81 Improvement in the performance of SPH simulation with the interaction-list-sharing method for many-core architectures [abstract]
Abstract: The demand for the optimization of particle-based methods with short-range interaction forces, such as those in smoothed particle hydrodynamics (SPH), is increasing, especially for many-core architectures. However, because particle-based methods require a large amount of memory access, it is challenging to obtain high efficiency on low-byte/FLOP many-core architectures. Hence, an efficient technique, the so-called "multiwalk" method, was developed for N-body gravitational simulations. The key to the multiwalk method is the sharing of interaction lists among multiple particles. The shared interaction list offers an efficient use of the cache memory in the double-loop operation for calculating the interactions and reduces main memory access. This technique is known to greatly improve the performance of N-body simulations. However, such performance improvement is not clear for problems with short-range interaction forces such as those in SPH, because the total cost of floating-point operations increases when using the shared interaction list. In this paper, we examine the trade-off between memory access and the cost of floating-point operations to optimize the SPH code. In particular, we predict the wall-clock time as a function of the number of particles per interaction-list group for a given device's bandwidth and FLOP/s. To validate our prediction model, we measured the wall-clock time spent on a device. We employed the following target devices: NVIDIA GPUs (K40 and P100) and PEZY-SCs (SC1 and SC2). Our model is useful for establishing the efficient choice of the number of particles in one group for the given bandwidth and FLOP/s.
Natsuki Hosono and Mikito Furuichi
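The memory/compute trade-off described in this abstract can be illustrated with a simplified roofline-style time model. The sketch below is a generic illustration, not the authors' prediction model; the per-particle byte and flop counts and the list-padding growth factor are hypothetical placeholders.

# Simplified roofline-style estimate of the time per interaction step as a
# function of the number of particles sharing one interaction list (n_grp).
# The per-particle byte/flop counts below are hypothetical placeholders,
# not values from the paper.

def predict_step_time(n_part, n_grp, n_neigh, bw_bytes, flops_peak):
    """Estimate wall-clock time (s) for one interaction loop.

    Sharing one interaction list among n_grp particles cuts memory traffic
    roughly by 1/n_grp, but the shared list is the union of the group's
    neighbours, so the flop count grows with n_grp.
    """
    bytes_per_neigh = 48.0        # placeholder: one neighbour record
    flops_per_pair = 60.0         # placeholder: one pairwise interaction
    mem_bytes = n_part * n_neigh * bytes_per_neigh / n_grp
    padded_neigh = n_neigh * (1.0 + 0.1 * n_grp)   # placeholder growth model
    flops = n_part * padded_neigh * flops_per_pair
    # Roofline: whichever resource saturates first dominates.
    return max(mem_bytes / bw_bytes, flops / flops_peak)

# Example: sweep the group size for a P100-like device.
for n_grp in (1, 2, 4, 8, 16, 32):
    t = predict_step_time(1_000_000, n_grp, 100, 550e9, 4.7e12)
    print(f"group size {n_grp:3d}: {t*1e3:.2f} ms")

Whichever term of the max dominates indicates whether a larger group size pays off on a given device.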
84 Influence of the architectural features of the SNC-4 mode of the Intel Xeon Phi KNL on the matrix multiplication [abstract]
Abstract: The Sub-NUMA Clustering (SNC-4) affinity mode of the Intel Xeon Phi Knights Landing provides a new environment for parallel applications, as it provides a NUMA system in a single chip. Given the additional effort these systems require from programmers, a characterization in terms of performance can help to understand and use them efficiently. The main target of this work is to characterize the behaviour of this system, focusing on nested parallelization for a well-known code with regular and predictable memory access patterns, ensuring that the experiments can be easily understood and reproduced. We study how the thread distribution in the processor affects performance when using the SNC-4 affinity mode, the differences between the cache and flat modes of the MCDRAM memory, and the improvements given by vectorization in different situations in terms of data locality. Results show that the best thread placement is the scatter distribution, allocating as few threads per core as possible with a total of 64 or 128 threads. Vectorization proved to be efficient only when data locality is good and there are few cache replacements, especially when the MCDRAM is used as a last-level cache. Furthermore, other techniques such as introducing padding have given great improvements in execution times. In the best of the tests, a performance of 117.46 GFlop/s was achieved.
Ruben Laso, Francisco F. Rivera and José Carlos Cabaleiro
87 Improving Planning Performance in PDDL+ Domains via Automated Predicate Reformulation [abstract]
Abstract: In the last decade, planning with domains modelled in the hybrid PDDL+ formalism has been gaining significant research interest. PDDL+ models enable the representation of problems that involve both continuous processes and discrete events and actions, which are required in many real-world applications. A number of approaches have been proposed that can handle PDDL+, and their exploitation fostered the use of planning in complex scenarios. In this paper we introduce a PDDL+ reformulation method that reduces the size of the grounded problem by reducing the arity of sparse predicates, i.e. predicates with a very large number of possible groundings, out of which very few are actually exploited in the planning problems. Arity is reduced by merging suitable objects together, and partially grounding the operators, processes and events in which reformulated predicates are involved. We include an empirical evaluation which demonstrates that these methods can substantially improve the performance of domain-independent planners on PDDL+ domains.
Santiago Franco, Mauro Vallati, Alan Lindsay and Lee McCluskey
89 The Case of iOS vs. Android: Applying System Dynamics to Digital Business Platforms [abstract]
Abstract: Platforms are multi-sided marketplaces that bring together groups of users that would otherwise not have been able to connect or transact. The application markets for Apple iOS and Google Android are examples of such markets. System dynamics is a powerful method to gain useful insight into environments of dynamic complexity and policy resistance. In this paper we argue that, adapted to the context of digital business platforms, the practice of system dynamics facilitates the understanding of the role of incentives in such marketplaces as means of increasing participation, value generation, and market growth. In particular, we describe our efforts to simulate the Android vs. iOS market competition in terms of the interacting markets for devices and their applications.
Ektor Arzoglou, Tommi Elo and Pekka Nikander
94 Sockpuppet Detection in Social Network via Propagation Entropy [abstract]
Abstract: Sockpuppet detection is a valuable and challenging problem in social networks. Current works continually make efforts to detect sockpuppets based on verbal, non-verbal or network-structure features. However, they do not consider the propagation characteristics and propagation structure of sockpuppets. From our observations, the propagation trees of sockpuppets and ordinary accounts differ: sockpuppet propagation trees are evidently wider and deeper than those of ordinary accounts, and sockpuppet pairs tend to build similar propagation trees. Based on these observations, we propose a propagation-entropy-based method for sockpuppet detection. We first construct the propagation tree to detect sockpuppets and then propose the propagation entropy of each account to detect sockpuppet pairs. Experiments on two real-world datasets from Sina Weibo demonstrate that our method obtains excellent detection performance, significantly outperforming previous methods.
Jiacheng Li, Wei Zhou, Jizhong Han and Songlin Hu
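The abstract does not give the paper's precise definition of propagation entropy, so the sketch below only illustrates the general idea: an entropy measure computed over the shape of an account's propagation tree, here the Shannon entropy of the distribution of nodes across tree depths. The tree encoding and both example trees are hypothetical.

import math
from collections import Counter

def depth_entropy(parent):
    """Shannon entropy of the nodes-per-depth distribution of a propagation
    tree, given as {child: parent} with the root mapped to None.

    Only an illustration of an entropy measure over a propagation tree;
    the paper's exact definition is not given in the abstract.
    """
    depth = {}
    def d(node):
        if node not in depth:
            p = parent[node]
            depth[node] = 0 if p is None else d(p) + 1
        return depth[node]
    counts = Counter(d(n) for n in parent)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A deep, branching tree (sockpuppet-like) vs. a shallow star (ordinary account).
deep = {"r": None, "a": "r", "b": "a", "c": "a", "d": "b", "e": "b"}
star = {"r": None, "a": "r", "b": "r", "c": "r", "d": "r", "e": "r"}
print(depth_entropy(deep), depth_entropy(star))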
126 Event-Oriented Keyphrase Extraction Based on Bi-Clustering Model [abstract]
Abstract: Keyphrase extraction, as a basis for many natural language processing and information retrieval tasks, can help people efficiently discover the information they are interested in from vast streams of online documents. Previous methods are mostly designed for general purposes, extracting keyphrases that represent the main topics. However, such keyphrases can hardly distinguish events in massive streams of long text documents that share similar topics and contain highly redundant information. In this paper, we address the task of keyphrase extraction for event-oriented retrieval. We propose a novel bi-clustering model for clustering the documents and keyphrases simultaneously. The model consequently makes the extracted keyphrases more specific and related to the event. We conduct a series of experiments on a real-world dataset. The experimental results demonstrate that our approach performs better than other unsupervised approaches.
Lin Zhao, Liangjun Zang, Longtao Huang, Jizhong Han and Songlin Hu
148 Application and Security Issues of Internet of Things [abstract]
Abstract: The Internet of Things (IoT) is an advanced information technology which connects billions of devices together. The IoT uses new addressing schemes and networks to create new applications. The IoT is the backbone of smart cities, and smart cities are the backbone of smart government; smart government makes possible a new generation of e-government. After coming to power in 2014, Prime Minister Narendra Modi's government announced the ambitious programme of building 100 smart cities in India. While we enjoy the convenience and efficiency that the IoT brings us, new threats from the IoT have also emerged. In this paper we analyze various security requirements, some of the IoT's application areas, and the evolution of the IoT.
Priyanka Gautam
158 Automated epileptic seizure detection method based on the multi-attribute EEG feature pool and mRMR feature selection method [abstract]
Abstract: Electroencephalogram (EEG) signals reveal many crucial hidden attributes of the human brain. Classification based on EEG-related features can be used to detect brain-related diseases, especially epilepsy. The quality of EEG-related features is directly related to the performance of automated epileptic seizure detection. Therefore, finding prominent features bears importance in the study of automated epileptic seizure detection. In this paper, a novel method is proposed to automatically detect epileptic seizures. This work proposes a novel time-frequency-domain feature named the global volatility index (GVIX) to measure holistic signal fluctuation in wavelet coefficients and original time-series signals. Afterwards, a multi-attribute EEG feature pool is constructed by combining time-frequency-domain features, time-domain features, nonlinear features, and entropy-based features. Minimum redundancy maximum relevance (mRMR) is then introduced to select the most prominent features. Results in this study indicate that this method performs better than others for epileptic seizure detection using an identical dataset, and that our proposed GVIX is a prominent feature in automated epileptic seizure detection.
Bo Miao, Junling Gun, Liangliang Zhang, Qingfang Meng and Yulin Zhang
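The GVIX feature is specific to the paper, but mRMR is a standard greedy criterion. Below is a minimal sketch of mRMR selection, assuming scikit-learn's mutual information estimators as the relevance and redundancy measures (the paper may use different estimators).

import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    """Greedy minimum-redundancy maximum-relevance feature selection.

    At each step, pick the feature with the largest
    relevance(f, y) - mean redundancy(f, already-selected features).
    """
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scores = []
        for f in remaining:
            if selected:
                red = mutual_info_regression(
                    X[:, selected], X[:, f], random_state=0).mean()
            else:
                red = 0.0
            scores.append(relevance[f] - red)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with random data standing in for the EEG feature pool.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
print(mrmr(X, y, 3))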
166 Exploring the performance of fine-grained synchronization and data exchange across process boundaries on modern multi-core architectures [abstract]
Abstract: Whether to use multiple threads in one process (MPI+X) or multiple processes (pure MPI) has long been an important question in HPC. Techniques like in situ analysis and visualization further complicate matters, as it may be very difficult to couple the different components in a way that would allow them to run in the same process. Combined with the growing interest in task-based programming models, which often rely on fine-grained tasks and synchronization, a question arises: Is it possible to run two tightly coupled task-based applications in two separate processes efficiently, or do they have to be combined into one application? Through a range of experiments on the latest Intel Xeon Scalable (Skylake) and AMD EPYC (Zen) many-core architectures, we have compared the performance of fine-grained synchronization and data exchange between threads in the same process and threads in two different processes. Our experiments show that although there may be a small price to pay for having two processes, it is still possible to achieve very good performance. The key factors are utilizing shared memory, selecting the right thread affinity, and carefully selecting the way the processes are synchronized.
Jiri Dokulil and Siegfried Benkner
192 Accelerating Wild Fire Simulator using GPU [abstract]
Abstract: In recent years, forest fire spread simulators have proven to be very promising tools in the fight against these disasters. Due to the necessity of achieving realistic predictions of fire behavior in a relatively short time, execution time must be reduced. To this end, several studies have tried to apply the computational power of GPUs (Graphics Processing Units) to accelerate the simulation of fire propagation. Most of these studies use forest fire simulators based on Cellular Automata (CA). CA approaches are fast and relatively easy to parallelize; conversely, they suffer from a lack of precision. Elliptical wave propagation is an alternative approach for performing more reliable simulations. Unfortunately, its higher complexity makes parallelization challenging. Here we explore two different parallel strategies for elliptical wave propagation forest fire simulators: exploiting the multicore architecture of the CPU (Central Processing Unit) and the computational power of the GPU to improve execution times. The aim of this work is to assess the performance of forest fire propagation simulation on a CPU and on a GPU, and to find out when execution on the GPU is more efficient than on the CPU. In this study, a fire simulator has been designed based on the basic model for single-point evolution in the FARSITE simulator. As a study case, a synthetic fire with an initial circular perimeter has been used; the wind, terrain and vegetation conditions were kept constant for all points of the fire front and throughout the simulation. Results highlight that GPUs allow obtaining more accurate results while reducing the execution time of the simulations.
Carlos Carrillo, Ana Cortes, Toni Espinosa and Tomas Margalef
195 Nanoscopic Scale Simulations of the Effects of Low Salinity Water Injection in Oil Reservoirs [abstract]
Abstract: An atomistic modelling study on the nanoscopic scale was devised in order to improve the current knowledge of the adhesion phenomena involving hydrocarbons, rock and brine in oil deposits under conditions of extremely low salinity. Energetic/geometric relaxation was applied by Density Functional Theory to a mineral surface model before adding it to Periodic Boundary Condition boxes containing isolated molecules of varying structure and hydropathic properties. After that, the free volume of each box (containing a single organic molecule and a sample of rock surface, extending on the 10 nm scale) was filled with water molecules and enough inorganic ions (Na+ and Cl-) to reach the required density and ionic strength. All simulation assemblies were subjected to classical Molecular Dynamics, and the resulting equilibrated configurations (see Fig. 1) were compared with each other, along a series with ionic strength varying from pure water to about 30000 ppm.
Francesco Frigerio, Luigi Abbondanza, Alberto Savoini, Andrea Ortenzi and Paola Ceragioli
201 Augmented Reality for Real-time Navigation Assistance to Wheelchair Users with Obstacles’ Management [abstract]
Abstract: Despite a rapid technological evolution in the field of technical assistance for people with motor disabilities, their ability to move independently in a wheelchair is still limited. New information and communication technologies (NICT) such as augmented reality (AR) are a real opportunity to integrate people with disabilities into everyday life and work. AR can provide real-time information about buildings' and locations' accessibility through mobile applications that allow the user to have a clear view of building details, such as the existence of an elevator or an access ramp at the entrance. By interacting with augmented environments that appear in the real world using a smart device, users with disabilities gain more control over their environment and can interact with information they could not access before. In this paper, we propose a decision support system using AR for the navigation assistance of people with motor disabilities. We describe a real-time wheelchair navigation system equipped with geolocated mapping that indicates the access path to a desired location and the shortest route towards it, and identifies obstacles to avoid. Information about navigation is displayed on AR glasses that give the user the possibility to interact with the system according to the external environment. The prototype wheelchair navigation system was developed for use within the University of Lille campus.
Sarah Ben Othman
204 Deep assimilation: Adversarial variational Bayes for implicit and non-linear data assimilation [abstract]
Abstract: Currently used data assimilation methods heavily rely on linearization and Gaussian assumptions. Because of these simplifications, non-linear relationships between observations and state variables are difficult to represent. On the other hand, developments in statistics and machine learning have evolved into generative and unsupervised deep learning, with algorithms like the variational autoencoder and generative adversarial networks. These deep generative models learn to process and generate high-dimensional spaces. We therefore argue that deep neural networks are suitable for solving the data assimilation problem. Here, we suggest a method that combines sequential data assimilation with deep learning. This method is based on amortized variational inference, which allows us to train and use a neural network for Bayesian inference without knowing the target, an analysis. During training, we minimize a loss function that approximates, via a lower bound, the closeness of the estimated posterior to the real, intractable posterior. We further approximate the Kullback-Leibler divergence between the estimated posterior and the prior by adversarial training with a discriminative neural network. To train this discriminator, we only need to sample from the estimated posterior and the prior. We do not need an assumption about the probability distribution in model space, because it is specified implicitly. The second part of the loss function is the log-likelihood of the observations. Consequently, we can use any non-linear observation operator, as long as the operator is differentiable. We are therefore not restricted to Gaussian assumptions in model and observation space. We do not use any ensemble information in our inference network; as the only inputs, we use a deterministic prior, observations and a random vector. For a linear inference network, this architecture is comparable to a stochastic ensemble Kalman filter with a static B matrix. For the Lorenz '96 model, Gaussian observation noise and a linear observation operator, our results suggest performance comparable to a localized ensemble transform Kalman filter (LETKF). The ensemble spread of the posterior is further automatically tuned with amortized variational inference, while the inference is up to 10 times faster than with the LETKF. The optimized solution of the inference network nevertheless depends on the stability of the adversarial training, which is an ongoing question in computer science. Still, these results show a promising new direction for data assimilation, especially with regard to the scalability of deep learning.
Tobias Sebastian Finn, Gernot Geppert and Felix Ament
211 p3Enum: A new Parameterizable and Shared-Memory Parallelized Shortest Vector Problem Solver [abstract]
Abstract: Due to the advent of quantum computers, which may break all public-key cryptography in use today, quantum-safe cryptographic alternatives are required. Promising candidates are based on lattices. The hardness of the underlying problems must also be assessed on classical hardware. In this paper, we present the open source framework p3Enum for solving the important lattice problem of finding the shortest non-zero vector in a lattice, based on enumeration with extreme pruning. Our parallelized enumeration routine scales very well on SMP systems, with an extremely high parallel efficiency of up to 0.91 with 60 threads on a single node. A novel parameter ν within the pruning function increases the probability of success and the workload of the enumeration. This enables p3Enum to achieve runtimes for parallel enumerations which are comparable to single-threaded cases but with a higher success rate. We compare the performance of p3Enum to various publicly available libraries and results reported in the literature. We also visualize the statistical effects in the algorithms under consideration, thus allowing a better understanding of the behavior of the implementations than previous average-value considerations. In the range of lattice dimensions from 66 to 88, p3Enum performs the best, which makes it a good candidate as a building block in lattice reduction frameworks.
Michael Burger, Christian Bischof and Juliane Krämer
217 Rendering Non-Euclidean Geometry in Real-Time Using Spherical and Hyperbolic Trigonometry [abstract]
Abstract: This paper introduces a method of calculating and rendering shapes in a non-Euclidean 2D space. In order to achieve this, we developed a physics and graphics engine that uses spherical and hyperbolic trigonometry to calculate and subsequently render the shapes in a 2D space of constant negative or positive curvature in real-time. We have chosen to use polar coordinates to record the parameters of the objects, as well as an azimuthal equidistant projection to render the space onto the screen, because of the multiple useful properties they have. For example, the polar coordinate system works well with trigonometric calculations, since the distance from the reference point (analogous to the origin in Cartesian coordinates) is one of the coordinates by definition. The azimuthal equidistant projection is not a projection typically used for either spherical or hyperbolic space; however, one of the main features of our engine relies on it: changing the curvature of the world in real-time without stopping the execution of the application in order to re-calculate the world. This is due to the projection's properties, which work identically for both spherical and hyperbolic space, as can be seen in Figure 1. We also look at the complexity analysis of this method as well as renderings that the engine produces. Finally, we discuss the limitations and possible applications of the created engine as well as potential improvements of the described method.
Daniil Osudin
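The two properties this abstract leans on have compact closed forms. Under the azimuthal equidistant projection, a point stored in polar coordinates (r, phi), with r the geodesic distance from the reference point, is drawn at (r cos phi, r sin phi) for either curvature sign, which is why the curvature can be switched without recomputing coordinates; distances themselves follow the spherical or hyperbolic law of cosines. A sketch of both follows (a generic illustration, not the engine's code).

import math

def project(r, phi):
    """Azimuthal equidistant projection: geodesic distance r from the
    reference point and bearing phi map to the same screen coordinates
    for spherical, flat and hyperbolic space."""
    return (r * math.cos(phi), r * math.sin(phi))

def geodesic_distance(r1, phi1, r2, phi2, k):
    """Distance between two points in polar coordinates for curvature
    k = +1 (sphere) or k = -1 (hyperbolic plane), via the laws of cosines."""
    gamma = phi2 - phi1
    if k > 0:    # spherical law of cosines
        c = (math.cos(r1) * math.cos(r2)
             + math.sin(r1) * math.sin(r2) * math.cos(gamma))
        return math.acos(max(-1.0, min(1.0, c)))
    else:        # hyperbolic law of cosines
        c = (math.cosh(r1) * math.cosh(r2)
             - math.sinh(r1) * math.sinh(r2) * math.cos(gamma))
        return math.acosh(max(1.0, c))

print(project(1.0, math.pi / 4))
print(geodesic_distance(1.0, 0.0, 1.0, math.pi / 2, +1))
print(geodesic_distance(1.0, 0.0, 1.0, math.pi / 2, -1))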
220 Improving Academic Homepage Identification from the Web using Neural Networks [abstract]
Abstract: Identifying academic homepages is fundamental to many tasks, such as expert finding, researcher profile extraction and homonym researcher disambiguation. Many works have been proposed to obtain researcher homepages using search engines. These methods only extract features at the lexical level from each single retrieval result, which is not enough to identify the homepage among retrieval results with high similarity. To address this problem, we first make improvements on three aspects: (1) fine-grained features are designed to efficiently detect whether the researcher's name appears in retrieval results; (2) correlations are established among multiple retrieval results for the same researcher; (3) semantic information contained in the URL, title and snippet of each retrieval result is obtained by recurrent neural networks. Afterwards, we employ a joint neural network framework which is able to make comprehensive use of this information. In comparison with previous work, our approach gives a substantial increase of 11% accuracy on a real-world dataset provided by AMiner. Experimental results demonstrate the effectiveness of our method.
Zhao Jiapeng, Tingwen Liu and Jinqiao Shi
230 Combining Fuzzy Logic and CEP Technology to Improve Air Quality in Cities [abstract]
Abstract: Road traffic has become a main source of air pollution in urban areas. For this reason, governments are applying traffic regulations in an attempt to fulfil the recommendations of Air Quality (AQ) standards and reduce pollution levels. In this paper, we present a novel proposal to improve AQ in cities by combining fuzzy logic and Complex Event Processing (CEP) technology. In particular, we propose a flexible fuzzy inference system to improve the decision-making process by recommending the actions to be carried out in each pollution scenario. This fuzzy inference system is fed with pollution data obtained by a CEP engine and weather forecasts from domain experts.
Hermenegilda Macià, Gregorio Díaz, Juan Boubeta-Puig, Edelmira Valero and Valentín Valero
232 Parallel parametric linear programming solving, and application to polyhedral computations [abstract]
Abstract: Parametric linear programming is a central operation for polyhedral computations, as well as in certain control applications. In this paper we propose a task-based scheme for parallelizing it, with quasi-linear speedup over large problems.
Camille Coti, David Monniaux and Hang Yu
236 Automating the Generation of Comparison Weights for Enhancing the AHP Decision-Making Process [abstract]
Abstract: The Analytic Hierarchy Process (AHP) method is widely used to deal with multi-criteria decision-making problems thanks to its simplicity and flexibility. However, it is often criticized for subjectivity and inconsistency in assigning the comparison weights, which are based on expert judgments. Moreover, these weights are assigned manually for comparing each pair of alternatives, which weighs down the decision-making process. In order to remedy these shortcomings, we propose in this paper an algorithm that automatically generates the pairwise comparison weights of alternatives according to each considered criterion. In addition, we demonstrate through an example that the judgment matrices constructed by the algorithm are very consistent.
Karim Zarour, Djamel Benmerzoug, Nawal Guermouche and Khalil Drira
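The abstract does not detail the generation algorithm itself, so the sketch below only shows the standard AHP machinery such an algorithm plugs into: building a pairwise comparison matrix from per-criterion scores as a_ij = v_i / v_j (a ratio matrix, which is perfectly consistent by construction) and checking Saaty's consistency ratio. The scores are hypothetical.

import numpy as np

# Saaty's random consistency index RI, indexed by matrix size n.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def comparison_matrix(scores):
    """Pairwise comparison matrix built from per-criterion scores:
    a_ij = v_i / v_j. A ratio matrix like this is perfectly consistent."""
    v = np.asarray(scores, dtype=float)
    return v[:, None] / v[None, :]

def consistency_ratio(A):
    """Saaty's CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)
    return (lam_max - n) / (n - 1) / RI[n]

A = comparison_matrix([3.0, 1.5, 6.0])          # hypothetical alternative scores
vals, vecs = np.linalg.eig(A)
w = np.abs(vecs[:, np.argmax(vals.real)].real)  # principal eigenvector
print(A)
print("priorities:", w / w.sum())
print("consistency ratio:", consistency_ratio(A))   # ~0 for a ratio matrix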
238 Parallel algorithm based on Singular Value Decomposition for high performance training of Neural Networks [abstract]
Abstract: Neural Networks (NNs) are frequently applied to Multi Input Multi Output (MIMO) problems, where the amount of data to manage is extremely high and, hence, the computational time required for the training process is too large. The aim of this paper is to present an optimized approach for training NNs, based on the properties of the Singular Value Decomposition (SVD), that allows a Multi Input Single Output (MISO) NN to be decomposed into a collection of Single Input Single Output (SISO) NNs. The decomposition provides a two-fold advantage: first, each SISO NN can be trained by using a one-dimensional function, namely a limited dataset; second, a parallel architecture can be implemented on a PC cluster, decreasing the computational cost. The performance of the parallel algorithm is validated using a magnetic hysteresis dataset, with the aim of demonstrating the computational speed-up while preserving accuracy.
Gabriele Maria Lozito, Valentina Lucaferri, Mauro Parodi, Martina Radicioni, Francesco Riganti Fulginei and Alessandro Salvini
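The decomposition rests on a standard property of the SVD: a two-input map sampled on a grid factorizes into a sum of products of one-dimensional functions, each of which is a SISO training target. A minimal numpy illustration of this separation follows (not the authors' training code; the sampled function is a toy example).

import numpy as np

# Sample a two-input map on a grid: F[i, j] = f(x[i], y[j]).
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 60)
F = np.exp(-x)[:, None] * np.sin(y)[None, :] + 0.3 * np.outer(x, y**2)

# The SVD separates F into rank-1 terms: F = sum_k s_k * u_k(x) * v_k(y),
# where each u_k and v_k is a one-dimensional function, i.e. a SISO
# training target for a small network.
U, s, Vt = np.linalg.svd(F, full_matrices=False)

rank = 2                                     # keep the dominant SISO pairs
F_approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
print("singular values:", s[:4])
print("relative error :", np.linalg.norm(F - F_approx) / np.linalg.norm(F))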
246 In-Situ Visualization with Membrane Layer for Movie-based Visualization [abstract]
Abstract: Movie-based visualization is a new approach to High Performance Computing (HPC) visualization. In this method, a viewer interactively explores a movie database with a specially designed application program called a movie data browser. The database is a collection of movie files that are tied to the spatial coordinates of their viewpoints. One can walk through the simulation's data space by extracting a sequence of image files from the database with the browser. In this method, it is important to scatter as many viewpoints as possible for smooth display. After proposing the movie-based visualization method, we have been developing a couple of critical tools for it. In this paper, we report the latest development of a Multiple Program Multiple Data (MPMD) framework for supercomputers to apply many in-situ visualizations with different viewpoints. A key point in this framework is to place a membrane-like layer between the simulation program and the visualization program. Hidden behind the membrane layer, the simulation program is not affected by the visualization program even if the number of scattered viewpoints is large.
Kohei Yamamoto and Akira Kageyama
248 Genetic Algorithm for an On-Demand Public Transit System using EV [abstract]
Abstract: The popularity of real-time on-demand transit as a fast-evolving mobility service has paved the way to explore novel solutions for point-to-point transit requests. In addition, strict government regulations on greenhouse gas emissions call for energy-efficient transit solutions. To this end, we propose an on-demand public transit system using a fleet of electric vehicles, which provides real-time service to passengers by linking a zone to a predetermined rapid transit node. Subsequently, we model the problem using a scalable Genetic Algorithm, which provides routes and schedules in real-time while minimizing passenger travel time. We also propose an optimal formulation to generate baseline results. Experiments performed using real-map data show that the proposed algorithm not only generates near-optimal results but also advances the state-of-the-art at a marginal cost in computation time.
Thilina Perera, Alok Prakash and Thambipillai Srikanthan
255 Short-term irradiance forecasting on the basis of spatially distributed measurements [abstract]
Abstract: The output power of photovoltaic (PV) systems is heavily influenced by mismatching conditions that can drastically reduce the power produced by PV arrays. The mismatching power losses in PV systems are mainly related to partial or full shading conditions, i.e. non-uniform irradiation of the array. An essential point is the detection of the irradiance level across the whole PV plant. The use of irradiance sensors is generally avoided because of their cost and need for periodic calibration. In this work, an Artificial Neural Network (ANN) based method is proposed to forecast the irradiance value of each panel constituting the PV module, starting from a number of spatially distributed analytical irradiance computations on the array. A 2D random and cloudy 12-hour irradiance profile is generated, taking wind action into account; the results show that the implemented system is able to provide an accurate temporal prediction of the PV plant's irradiance distribution during the day.
Antonino Laudani, Gabriele Maria Lozito, Valentina Lucaferri and Martina Radicioni
272 Autism Screening using Deep Embedding Representation [abstract]
Abstract: Autism spectrum disorder (ASD) is a developmental disorder that affects communication and behavior. An early diagnosis of neurodevelopmental disorders can improve treatment and significantly decrease the associated healthcare cost, which reveals an urgent need for the development of ASD screening. However, the data used for ASD screening is heterogeneous and multi-source, and as a result existing ASD screening tools are expensive, time-intensive and sometimes fall short in predictive accuracy. In this paper, we apply novel feature engineering and feature encoding techniques, along with a deep learning classifier, for ASD screening. Algorithms were created via a robust deep learning classifier and deep embedding representations for categorical variables to diagnose ASD based on behavioral features and individual characteristics. The proposed algorithm is effective compared with baselines, achieving 99% sensitivity and 99% specificity. The results suggest that deep embedding representation learning is a reliable method for ASD screening.
Haishuai Wang, Li Li, Lianhua Chi and Ziping Zhao
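Deep embedding representation for categorical variables typically means learning a dense vector per category level instead of one-hot encoding. Below is a minimal PyTorch sketch of that pattern; the feature names, cardinalities and layer sizes are hypothetical placeholders, not the authors' model.

import torch
import torch.nn as nn

class EmbeddingClassifier(nn.Module):
    """Binary classifier with learned embeddings for categorical inputs.

    cardinalities: number of levels per categorical feature (hypothetical).
    Each feature gets its own embedding table; the embedded vectors are
    concatenated with the numeric features and fed to an MLP.
    """
    def __init__(self, cardinalities, n_numeric, emb_dim=4):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, emb_dim) for card in cardinalities)
        in_dim = emb_dim * len(cardinalities) + n_numeric
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x_cat, x_num):
        embedded = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)]
        return self.mlp(torch.cat(embedded + [x_num], dim=1)).squeeze(1)

# Toy batch: 3 categorical features (e.g. gender, ethnicity, jaundice) and
# 10 numeric screening answers; placeholders, not the actual screening schema.
model = EmbeddingClassifier(cardinalities=[2, 10, 2], n_numeric=10)
x_cat = torch.randint(0, 2, (8, 3))
x_num = torch.randn(8, 10)
logits = model(x_cat, x_num)
loss = nn.BCEWithLogitsLoss()(logits, torch.rand(8).round())
print(logits.shape, loss.item())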
273 Multi-GPU Acceleration of the iPIC3D Implicit Particle-in-Cell Code [abstract]
Abstract: iPIC3D is a widely used massively parallel Particle-in-Cell code for the simulation of space plasmas. However, its current implementation does not support execution on multiple GPUs. In this paper, we describe the porting of the iPIC3D particle mover to GPUs and the optimization steps to increase its performance and parallel scaling on multiple GPUs. We analyze the strong scaling of the mover on two GPU clusters and evaluate its performance and acceleration. The optimized GPU version, which uses pinned memory and asynchronous data prefetching, outperforms the CPU version by 5-10x on two different systems equipped with NVIDIA K80 and V100 GPUs. Pinned memory and data prefetching are essential for parallel scaling: the parallel efficiency of the fully optimized mover reaches 73% on 16 GPUs, while the naive synchronous implementation only gives 44%.
Chaitanya Prasad Sishtla, Wei Der Chien, Vyacheslav Olshevsky, Erwin Laure and Stefano Markidis
297 Reducing Symbol Search Overhead on Stream-based Data Compression [abstract]
Abstract: Lossless data compression has come to be widely utilized in Big Data applications in recent years. Conventional algorithms mainly generate a symbol lookup table used to replace frequent data patterns in the input data with symbols, and then compress the information. This kind of dictionary-based compression mechanism potentially has an overhead problem regarding the number of symbol matchings in the table. This paper focuses on a novel method to reduce the number of searches in the table using a bank separation technique. This paper uses a stream-based compression algorithm called LCA-DLT, whose software implementation inevitably incurs the search overhead. By separating the table entries into several banks, compression speed has been improved. This paper reports the design and implementation of the bank select method in LCA-DLT, and presents performance evaluations that validate the effects of the method.
Shinichi Yamagiwa, Ryuta Morita and Koichi Marumo
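The bank separation idea can be illustrated independently of LCA-DLT's internals: hash each looked-up pattern to one of several small banks so that a match scans only that bank instead of the whole table. A generic Python sketch follows (the actual LCA-DLT table management, e.g. its replacement policy, is more involved).

class BankedSymbolTable:
    """Dictionary-style symbol table split into banks.

    A lookup hashes the pattern to one bank and scans only that bank, so
    the search cost drops roughly by a factor of n_banks compared with
    scanning one flat table.
    """
    def __init__(self, n_banks=8, bank_size=64):
        self.n_banks = n_banks
        self.bank_size = bank_size
        self.banks = [[] for _ in range(n_banks)]  # entries: (pattern, symbol)
        self.next_symbol = 0

    def _bank(self, pattern):
        return hash(pattern) % self.n_banks

    def lookup_or_insert(self, pattern):
        bank = self.banks[self._bank(pattern)]
        for stored, symbol in bank:       # scan one bank, not the whole table
            if stored == pattern:
                return symbol
        if len(bank) >= self.bank_size:   # naive replacement: drop the oldest
            bank.pop(0)
        symbol = self.next_symbol
        self.next_symbol += 1
        bank.append((pattern, symbol))
        return symbol

table = BankedSymbolTable()
stream = [("a", "b"), ("b", "c"), ("a", "b"), ("c", "d")]
print([table.lookup_or_insert(p) for p in stream])  # repeated pair reuses its symbol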
305 Stabilized variational formulations for solving cells response to applied electric field [abstract]
Abstract: In this work a stabilized variational formulation is proposed to solve the interface problem describing the electric response of cells to an applied electric field. The proposed stabilized formulation is attractive since the discrete operator resulting from the finite element discretization generates a definite linear system to which efficient iterative solvers can be applied. The interface problem describing the cell response is solved with a primal variational formulation and with the proposed stabilized formulation. Both methods are compared in terms of the approximation properties of the primal variable and the Lagrange multiplier variable. The computational performance of the methods is also compared in terms of the mean number of iterations needed to solve one time step during the polarization process of an isolated square cell. Moreover, numerical experiments are performed to validate the convergence properties of the methods.
Cesar Augusto Conopoima Perez, Bernardo Martins Rocha, Iury Higor Aguiar da Igreja, Rodrigo Weber dos Santos and Abimael Fernando Dourado Loula
329 Data-driven PDE discovery with evolutionary approach [abstract]
Abstract: Data-driven models allow one to define the model structure in cases when a priori information is not sufficient to build other types of models. A possible way to obtain physical interpretation is through data-driven differential equation discovery techniques. The existing methods of PDE (partial differential equation) discovery are bound to sparse regression. However, sparse regression restricts the resulting model form, since the terms of the PDE are defined before the regression. The evolutionary approach described in this article is instead based on symbolic regression and thus places fewer restrictions on the form of the PDE. The evolutionary method of PDE discovery is described and tested on several canonical PDEs. The question of robustness is examined on a noisy data example.
Michail Maslyaev, Alexander Hvatov and Anna Kalyuzhnaya
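At the core of such discovery methods is a fitness function that scores a candidate equation by its residual on the data. The sketch below is a toy stand-in for the symbolic-regression machinery, assuming candidates of the form u_t = a*u_x + b*u_xx and finite differences via numpy.gradient.

import numpy as np

def residual_fitness(u, dt, dx, coeffs):
    """Score a candidate PDE  u_t = a*u_x + b*u_xx  on gridded data u[t, x].

    Lower is better: the fitness is the mean squared residual of the
    candidate equation evaluated with finite differences.
    """
    a, b = coeffs
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)
    return np.mean((u_t - a * u_x - b * u_xx) ** 2)

# Toy data: an exact solution of the heat equation u_t = u_xx.
x = np.linspace(0, np.pi, 128)
t = np.linspace(0, 0.5, 64)
u = np.exp(-t)[:, None] * np.sin(x)[None, :]

print(residual_fitness(u, t[1] - t[0], x[1] - x[0], (0.0, 1.0)))  # near zero
print(residual_fitness(u, t[1] - t[0], x[1] - x[0], (1.0, 0.0)))  # poor candidate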
337 Predicting Cervical Cancer with Metaheuristic optimizers for training LSTM [abstract]
Abstract: Cervical cancer, also known as uterine cancer, is the fourth most frequent cancer in women, with an estimated 570,000 new cases in 2018 representing 6.6% of all female cancers [1]. Also according to the same organization, the mortality rate for this type of cancer reaches 90% in underdeveloped countries, and this high mortality rate could be substantially reduced if there were prevention, early diagnosis, and effective screening and treatment programs. With the increase in computational power and the ease of obtaining medical records with low data loss, researchers have turned to the application of neural networks in the development of systems and techniques that attempt to diagnose diseases such as uterine cancer. In this paper, tests were performed using a Long Short-Term Memory (LSTM) network for the diagnosis of cervical cancer. The LSTM was trained using 5 different metaheuristic algorithms (Cuckoo Search (CS), Genetic Algorithm (GA), Gravitational Search Algorithm (GSA), Grey Wolf Optimizer (GWO), and Particle Swarm Optimization (PSO)), with an accuracy measure instead of the mean square error. In the tests undertaken, the diagnostic efficiency obtained was 95%, using the k-fold cross-validation method for assessing the results.
André Quintiliano Bezerra Silva
346 Top k 2-Clubs in a Network: A Genetic Algorithm [abstract]
Abstract: Identifying cohesive subgraphs is a well-known problem that has many applications in mining biological networks, for example protein interaction networks. In this article, we focus on the identification of subgraphs having diameter at most 2 (2-clubs). We present a genetic algorithm to compute a collection of k 2-clubs, with k ≥ 1, and report some preliminary results on synthetic data using Erdős-Rényi random graphs.
Mauro Castelli, Riccardo Dondi, Sara Manzoni, Giancarlo Mauri and Italo Zoppis
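The fitness evaluation at the heart of such a GA is easy to state: a chromosome selects a vertex subset, the subset is feasible if the induced subgraph has diameter at most 2, and larger feasible subsets score higher. Below is a sketch with networkx, using random search as a stand-in for the GA operators, which are omitted.

import networkx as nx
import random

def fitness(G, chromosome):
    """Fitness of a bit-vector chromosome selecting a vertex subset:
    the subset size if the induced subgraph is a 2-club, else 0."""
    nodes = [v for v, bit in zip(G.nodes, chromosome) if bit]
    if len(nodes) < 2:
        return 0
    H = G.subgraph(nodes)
    if not nx.is_connected(H) or nx.diameter(H) > 2:
        return 0
    return len(nodes)

random.seed(1)
G = nx.erdos_renyi_graph(30, 0.2, seed=1)

# Random-search stand-in for the GA loop: keep the best feasible subset.
best_fit = 0
for _ in range(2000):
    chrom = [random.random() < 0.2 for _ in range(30)]
    best_fit = max(best_fit, fitness(G, chrom))
print("best 2-club size found:", best_fit)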
349 CA-RPT: Context-Aware Road Passage Time Estimation for Urban Traffic [abstract]
Abstract: Road passage time is an important measure of urban traffic. Accurate estimation of road passage time contributes to route planning and urban traffic planning. Currently, the estimation of road passage time for a particular road is usually based on its historical data alone, which only expresses the general law of that road's traffic. However, with the increase in the number of roads in the urban area, the connections between roads become more complex. The existing methods fail to make use of the connections between different roads and the road passage time, being based merely on each road's own historical data. In this paper, we propose a road passage time estimation model, called CA-RPT, which utilizes the contextual information of road connections as well as the date and time period. We evaluate our method on a real geolocation information dataset collected anonymously by a mobile app. The results demonstrate that our method is more accurate than state-of-the-art methods.
Ying Liu and Zhenyu Cui
359 Modelling and Analysis of Complex Patient-Treatment Process using GraphMiner Toolbox [abstract]
Abstract: This article describes the results of multidisciplinary research in the analysis and modeling of complex treatment processes for a cohort of patients with cardiovascular diseases. In the course of the study, methods and algorithms were developed for processing large volumes of heterogeneous and semi-structured data series from medical information systems. Moreover, a method for predicting treatment events has been developed. Graphs, community detection algorithms and machine learning methods are applied. The use of graphs and machine learning methods has expanded the capabilities of process mining for a better understanding of the complex process of medical care. Moreover, algorithms for parallel graph computation using CUDA have been developed. The improved methods and algorithms are incorporated into a corresponding visualization tool developed for the analysis of complex treatment processes.
Oleg Metsker, Alexey Yakovlev, Sergey Kovalchuk, Sergey Kesarev, Ekaterina Bolgova, Kirill Golubev and Andrey Karsakov
364 Combining Algorithmic Rethinking and AVX-512 Intrinsics for Efficient Simulation of Subcellular Calcium Signaling [abstract]
Abstract: Calcium signaling is vital for the contraction of the heart. Physiologically realistic simulation of this subcellular process requires nanometer resolutions and a complicated mathematical model of differential equations. Since the subcellular space is composed of several irregularly-shaped and intricately-connected physiological domains with distinct properties, one particular challenge is to correctly compute the diffusion-induced calcium fluxes between the physiological domains. The common approach is to pre-calculate the effective diffusion coefficients between all pairs of neighboring computational voxels, and store them in long arrays. Such a strategy avoids complicated if-tests when looping through the computational mesh, but suffers from substantial memory overhead. In this paper, we adopt a memory-efficient strategy that uses a small lookup table of diffusion coefficients. The memory footprint and traffic are both drastically reduced, while also avoiding the if-tests. However, the new strategy induces more instructions on the processor level. To offset this potential performance pitfall, we use AVX-512 intrinsics to effectively vectorize the code. Performance measurements on a Knights Landing processor and a quad-socket Skylake server show a clear performance advantage of the manually vectorized implementation that uses lookup tables, over the counterpart using coefficient arrays. We also discuss other performance engineering results of the subcellular simulator.
Chad Jarvis, Glenn Terje Lines, Johannes Langguth, Kengo Nakajima and Xing Cai
368 Ocean Circulation Hindcast at the Brazilian Equatorial Margin [abstract]
Abstract: The growth of the activities of the petroleum industry in the Brazilian Equatorial Margin reinforces the need for environmental knowledge of the region, which will be potentially exposed to risks related to such activities. The environmental importance of this region evidences the need to deepen and systematize not only the knowledge about the environmental sensitivity of the region, but also about the characteristics that will exert influence over it. The Costa Norte Project is one of these initiatives. One of the main objectives of the project is to evaluate the efficiency of marine hydrodynamic environmental computational modeling methods in representing the marine dynamics of that region. In this paper, a regional ocean computational model was used to produce a ten-year hindcast simulation in order to represent the main aspects associated with mesoscale climatological ocean circulation at the Brazilian Equatorial Margin. This article presents the methodology and the analysis and evaluation of the results associated with the cited hydrodynamic computational simulation. The obtained results clearly demonstrate the ocean model's potential to represent the spatial and temporal distribution of the most important ocean variables over the studied region. Comparative analysis with observed data demonstrated good agreement for the temperature, salinity and sea surface height fields generated by the implemented model. The Costa Norte Project is carried out under the Brazilian National Petroleum Agency (ANP) R&D levy as an "Investment Commitment to Research and Development" and is financially supported by Queiroz Galvão Exploração e Produção S.A.
Luiz Paulo Assad, Raquel Toste, Carina Böck, Dyellen Queiroz, Anne Guedes, Maria Eduarda Pessoa and Luiz Landau
369 A matrix-free eigenvalue solver for the multigroup neutron diffusion equation [abstract]
Abstract: The stationary neutron transport equation describes the neutron population, and thus the generated heat, inside a nuclear reactor core. Obtaining the solution of this equation requires solving a generalized eigenvalue problem efficiently. The majority of eigenvalue solvers use factorizations of the system matrices to construct preconditioners, such as the ILU decomposition or the ICC decomposition, to speed up the convergence of the methods. The storage of the involved matrices and incomplete factorizations demands large quantities of computational memory, even when the compressed sparse row (CSR) format is used. This makes computational memory the limiting factor for this kind of calculation on some personal computers. In this work, we propose a matrix-free preconditioned eigenvalue solver that does not need the matrices to be allocated explicitly in memory. The method is based on the block inverse-free preconditioned Arnoldi method (BIFPAM), with the innovation that it uses a preconditioner applied through matrix-vector operations. As well as enormously reducing the computational memory, this methodology removes the time needed to assemble the sparse matrices involved in the system. Two-dimensional and three-dimensional benchmarks are used to study the performance of the proposed methodology.
Amanda Carreño, Antoni Vidal-Ferràndiz, Damian Ginestar and Gumersindo Verdú
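The matrix-free idea is that the eigensolver only ever needs the action of the operator on a vector, never the matrix entries. Below is a generic SciPy sketch of this interface (not the BIFPAM algorithm itself), using a one-dimensional finite-difference diffusion operator as a stand-in for the multigroup diffusion operator.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 1000
h = 1.0 / (n + 1)

def apply_diffusion(v):
    """Action of the 1-D operator -d^2/dx^2 (zero boundary conditions),
    computed on the fly: no matrix is ever assembled or stored."""
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out / h**2

A = LinearOperator((n, n), matvec=apply_diffusion, dtype=float)

# Largest eigenvalue via ARPACK; only matvec calls are ever performed.
vals = eigsh(A, k=1, which="LM", return_eigenvectors=False)
print("computed :", vals[0])
print("analytic :", (2.0 - 2.0 * np.cos(np.pi * n * h)) / h**2)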
376 Term structure calibration and option pricing with jumps [abstract]
Abstract: We derive numerical series representations for option prices on an interest rate index for affine jump-diffusion models in a stochastic jump intensity framework, with an adaptation of the Fourier-cosine series expansion method, focusing on European vanilla derivatives. We analyze the calibrated yield curve parameters for nine different Ornstein-Uhlenbeck models enhanced with different jump size distributions. The interest rate index option prices are accurately and efficiently approximated by solving the corresponding set of ordinary differential equations and parsimoniously truncating the Fourier series representations. The option prices are then implied by the calibrated parameters of the proposed models.
Allan Jonathan da Silva, Jack Baczynski and João Felipe da Silva Bragança
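For reference, the Fourier-cosine (COS) expansion that the method adapts recovers a density f from its characteristic function φ on a truncation interval [a, b]; the standard form, due to Fang and Oosterlee, is

f(x) \approx {\sum_{k=0}^{N-1}}' F_k \cos\!\left(k\pi \frac{x-a}{b-a}\right),
\qquad
F_k = \frac{2}{b-a}\,\operatorname{Re}\!\left\{ \varphi\!\left(\frac{k\pi}{b-a}\right) \exp\!\left(-\mathrm{i}\,\frac{k\pi a}{b-a}\right) \right\},

where the prime on the sum indicates that the k = 0 term is weighted by one half. Option prices then follow by pairing the coefficients F_k with the analytically known cosine coefficients of the payoff; the paper's specific adaptation to interest rate index options is not reproduced here.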
379 Composite data types in dynamic dataflow languages as copyless memory sharing mechanism [abstract]
Abstract: This paper presents new optimization approaches aiming at reducing the impact of memory accesses on the performance of dataflow programs. The approach is based on introducing a high-level management of composite data types in a dynamic dataflow programming language, enabling in-memory processing of data tokens without requiring essential changes to the properties or the model of computation (MoC), and with minimal changes to the dataflow program itself. The objective of the approach is to remove the unnecessary constraints of memory isolation without introducing limitations on the scalability and composability properties of the dataflow paradigm. Thus, the identified optimizations make it possible to keep the same design and programming philosophy of dataflow, while improving the performance of the specific implementation configuration. The different optimizations can be integrated into current RVC-CAL design flows and synthesis tools, and can be applied to different sub-network partitions of the dataflow program. The paper introduces the context, defines the optimization problem and describes how it can be applied to dataflow designs. Some examples of the optimizations are provided.
Aurélien Bloch, Endri Bezati and Marco Mattavelli
395 A coupled food security and refugee movement model for the South Sudan conflict [abstract]
Abstract: We investigate, through correlation analysis of data sets, how relevant the food situation is to the simulation of refugee dynamics. Armed conflicts often imply difficult food access conditions for the population, which can have a great impact on the behavior of refugees, as is the case in South Sudan according to various reports. To test our approach, we adapt the Flee agent-based simulation code, combining it with a data-driven food security model to enhance the rule set for determining refugee movements. We tested two different approaches for the South Sudan civil war and find promising yet negative results. While our first approach to modelling refugees' response to food insecurity did not improve the error of the SDA, we show that this behavior is highly non-trivial and that properly understanding it could prove determinant for the development of reliable models of refugee dynamics. *References in the original abstract have been omitted here for format reasons.
Christian Vanhille Campos, Diana Suleimenova and Derek Groen
403 Data-based learning and reduced models for efficient scale coupling in atmospheric flow dynamics [abstract]
Abstract: The objective is the efficient modelling of large scales in atmospheric science. We develop a small-scale stochastic model for convective activity and describe convective feedback on the large-scale atmospheric flow using data-based learning and reduced models for efficient scale coupling. The aim is a hybrid model with a stochastic component for a conceptual description of convection, embedded in a deterministic atmospheric flow model. To analyse atmospheric processes on different scales, we need to consider the process as an embedded system, i.e. as the restriction of an unknown larger dynamical system. Therefore, we extend the theory and algorithms for coherent sets to embedded domains and incomplete trajectory data, and move towards a unified transfer-operator approach to coherent sets and patterns. The state of the art in transport-oriented methods and data-based analytics will be illustrated. In view of upward coupling, future work on a model for cloud characteristics, building on the theory for coherent set analysis already obtained, will be described. Looking ahead, we aim to combine machine learning with coherent sets in dynamical systems.
Robert Malte Polzin
411 A Proposal to Model Ancient Silk Weaving Techniques and Extracting Information from Digital Imagery - Ongoing Results of the SILKNOW Project [abstract]
Abstract: Three dimensional (3D) virtual representations of the internal structure of textiles are of interest for a variety of purposes related to fashion, industry, education or other areas. The modeling of ancient weaving techniques is relevant to understand and preserve our heritage, both tangible and intangible. However, ancient techniques cannot be reproduced with standard approaches, which usually are aligned with the characteristics of modern, mechanical looms. The aim of this paper is to propose a mathematical modelling of ancient weaving techniques by means of matrices in order to be easily mapped to a virtual 3D representation. The work focuses on ancient silk textiles, ranging from the 15th to the 19th centuries. We also propose a computer vision-based strategy to extract relevant information from digital imagery, by considering different types of images (textiles, technical drawings and macro images). The work here presented has been carried out in the scope of the SILKNOW project, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 769504.
Cristina Portalés Ricart, Javier Sevilla, Manolo Pérez and Arabella León
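A common way to encode a weave as a matrix, which a mapping to 3D can build on, is a binary binding matrix W with W[i, j] = 1 when warp thread j passes over weft thread i. The sketch below generates plain-weave and twill binding matrices; it is only a generic illustration, and the paper's model for ancient hand-loom techniques is necessarily richer.

import numpy as np

def plain_weave(n_weft, n_warp):
    """Binding matrix of a plain weave: warps alternate over/under."""
    i, j = np.indices((n_weft, n_warp))
    return (i + j) % 2

def twill(n_weft, n_warp, shift=1, over=2, under=1):
    """Binding matrix of a twill: each weft repeats an over/under run,
    shifted by `shift` warps per row, producing the diagonal rib."""
    i, j = np.indices((n_weft, n_warp))
    return ((j - shift * i) % (over + under) < over).astype(int)

print(plain_weave(4, 8))
print(twill(4, 8))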
412 A Comparison of Selected Variable Ordering Methods for NFA Induction [abstract]
Abstract: In the paper, we study one of the fundamental problems of grammatical inference, namely the induction of nondeterministic finite automata (NFA). We consider the induction of NFA consistent with given sets of examples and counterexamples. We transform the induction problem into a constraint satisfaction problem and propose two variable ordering methods to solve it. We experimentally evaluate the proposed variable ordering methods and compare them with a state-of-the-art method. Additionally, through the experiments we assess the impact of sample set sizes on the performance of the induction algorithm using the respective variable ordering methods.
Tomasz Jastrząb
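For intuition about the underlying search space, the following sketch enumerates all NFAs with a fixed number of states and returns one consistent with the samples. It is a naive brute-force baseline, not the paper's CSP encoding or its variable ordering methods; the single initial state and the tiny alphabet are simplifying assumptions.

```python
from itertools import product

def accepts(delta, finals, word):
    # delta maps (state, symbol) to a frozenset of successor states;
    # state 0 is assumed to be the unique initial state
    current = {0}
    for sym in word:
        current = set().union(*(delta.get((q, sym), frozenset()) for q in current))
    return bool(current & finals)

def induce_nfa(examples, counterexamples, n_states=2, alphabet=("a", "b")):
    pairs = [(q, s) for q in range(n_states) for s in alphabet]
    # enumerate every transition relation and every set of final states
    for masks in product(range(2 ** n_states), repeat=len(pairs)):
        delta = {p: frozenset(q for q in range(n_states) if m >> q & 1)
                 for p, m in zip(pairs, masks)}
        for fmask in range(2 ** n_states):
            finals = frozenset(q for q in range(n_states) if fmask >> q & 1)
            if (all(accepts(delta, finals, w) for w in examples)
                    and not any(accepts(delta, finals, w) for w in counterexamples)):
                return delta, finals
    return None

# an NFA consistent with: accept "a", "aa"; reject "", "b"
print(induce_nfa(["a", "aa"], ["", "b"]))
```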
430 Traffic3D: A Rich 3D-Traffic Environment to Train Intelligent Agents [abstract]
Abstract: The last few years have seen significant progress in the field of Deep Reinforcement Learning. However, an important and not yet fully attained goal is to produce intelligent agents that can be successfully taken out of the laboratory and employed in the real world. Intelligent agents that are deployable in real-world settings require substantial prior exposure to their intended environments. When this is not practical or possible, the agents benefit from being trained and tested on powerful test-beds that effectively replicate the real world. To achieve traffic management at an unprecedented level of efficiency, in this paper we introduce a significantly richer new traffic simulation environment: Traffic3D. Traffic3D is a unique platform built to effectively simulate and evaluate a variety of 3D road traffic scenarios, closely mimicking real-world traffic characteristics, including faithful simulation of individual vehicle behavior, precise physics of movement and photo-realism. We discuss the merits of Traffic3D in comparison to state-of-the-art traffic-based simulators. We also demonstrate its applicability by developing a vision-based deep reinforcement learning agent to efficiently address the problem of congestion around road intersections. In addition to deep reinforcement learning, Traffic3D facilitates research across several other domains such as imitation learning, learning by interaction, visual question answering, object detection and segmentation, unsupervised representation learning and procedural generation.
Deepeka Garg, Maria Chli and George Vogiatzis
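A skeleton of the kind of vision-based deep reinforcement learning agent mentioned above, written against a stand-in environment since Traffic3D's actual interface is not shown in the abstract. The observation size, the action count (e.g., four signal phases) and the one-step DQN update are illustrative assumptions.

```python
import random
import torch
import torch.nn as nn

class DummyTrafficEnv:
    """Stand-in for a Traffic3D-style environment (hypothetical interface)."""
    def reset(self):
        return torch.rand(3, 64, 64)                      # RGB camera frame
    def step(self, action):
        # returns (next observation, reward, done); rewards are random here
        return torch.rand(3, 64, 64), random.random(), random.random() < 0.05

class QNet(nn.Module):
    def __init__(self, n_actions=4):                      # e.g. 4 signal phases
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 6 * 6, n_actions))
    def forward(self, x):
        return self.net(x)

env, q, gamma, eps = DummyTrafficEnv(), QNet(), 0.99, 0.1
opt = torch.optim.Adam(q.parameters(), lr=1e-4)
obs = env.reset()
for step in range(100):
    with torch.no_grad():                                 # epsilon-greedy action
        a = random.randrange(4) if random.random() < eps \
            else int(q(obs.unsqueeze(0))[0].argmax())
    nxt, r, done = env.step(a)
    with torch.no_grad():                                 # one-step TD target
        target = r + gamma * q(nxt.unsqueeze(0)).max() * (not done)
    loss = (q(obs.unsqueeze(0))[0, a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    obs = env.reset() if done else nxt
```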
442 Energy Efficiency Evaluation of Distributed Systems [abstract]
Abstract: Rapid growth in Big Data and Cloud technologies has fueled rising energy demands in large server systems such as data centers, leading to a need for effective power management. Servers in these data centers are commonly virtualized, and the applications running on them are dynamic and increasingly distributed, with ever-growing data volumes. Scientific and engineering applications are no exception; the Montage astronomy application, for example, deals with thousands of dependent jobs and up to several terabytes of data. While hardware virtualization in data centers has brought cost efficiency and elasticity, it significantly complicates energy monitoring. In this paper, we investigate the energy consumption characteristics of data-intensive distributed applications in terms of the CPU and memory subsystems. In particular, we study the relationship between power limits and their effects on application performance and system-level energy consumption. To this end, we develop PowerSave, a lightweight software framework that enables dynamic reconfiguration of power limits. PowerSave uses Running Average Power Limit (RAPL)---a standard feature in recent Intel CPUs---to impose power limits. It significantly eases the evaluation of the energy efficiency of distributed systems, enabling effective power management and, in turn, reductions in energy consumption. Our evaluation study, conducted on three different real systems, demonstrates that for workloads typical of servers used in data centers, higher power caps correlate with higher overall CPU energy use. We also show that CPU energy consumption strongly correlates with system-level energy consumption; consequently, optimizing CPU energy use will tend to optimize system-level energy use.
James Phung, Young Choon Lee and Albert Zomaya
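PowerSave itself is not shown in the abstract, but the RAPL mechanism it builds on is exposed on Linux through the powercap sysfs interface. The sketch below reads the package energy counter and sets a power cap; the specific package path is an assumption about the target machine, and writing the limit requires root.

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0"   # CPU package 0 on a typical Linux box

def read_energy_uj():
    # cumulative energy counter in microjoules; wraps at max_energy_range_uj
    with open(f"{RAPL}/energy_uj") as f:
        return int(f.read())

def set_power_limit_w(watts, constraint=0):
    # constraint 0 is the long-term package limit; writing requires root
    with open(f"{RAPL}/constraint_{constraint}_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1_000_000)))

def average_power_w(interval_s=1.0):
    e0 = read_energy_uj()
    time.sleep(interval_s)
    return (read_energy_uj() - e0) / 1_000_000 / interval_s

set_power_limit_w(50)                        # cap the package at 50 W
print(f"package draw: {average_power_w():.1f} W")
```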
469 Support for high-level quantum Bayesian inference [abstract]
Abstract: In this paper, we present AcausalNets.jl, a library supporting inference in a quantum generalization of Bayesian networks and their application to quantum games. The proposed solution is based on the modern approach to numerical computing provided by Julia. The library provides high-level functions for Bayesian inference that can be applied to both classical and quantum Bayesian networks. Furthermore, we discuss the extension of belief propagation algorithms into the quantum domain.
Marcin Przewiezlikowski, Michał Grabowski, Dariusz Kurzyk and Katarzyna Rycerz
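The quantum analogue of summing out a variable in a classical Bayesian network is the partial trace of a density matrix. The following numpy sketch (not AcausalNets.jl's API, which is in Julia) marginalizes one qubit of a Bell state, yielding the maximally mixed state.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2) as a two-qubit density matrix
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

def partial_trace_B(rho):
    # marginalize the second qubit: rho_A[a, c] = sum_b rho[(a, b), (c, b)]
    return np.einsum("abcb->ac", rho.reshape(2, 2, 2, 2))

print(partial_trace_B(rho))   # I/2: the maximally mixed single-qubit state
```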
488 Financial Time Series Motif Discovery and Analysis Using VALMOD [abstract]
Abstract: Motif discovery and analysis in time series data sets have a wide range of applications, from genomics to finance. Consequently, the development and critical evaluation of such algorithms is required, with the focus progressing beyond mere detection to the evaluation and interpretation of overall significance. To achieve this, we focus on analysis using one particular algorithm, VALMOD. Algorithms in wide use for motif discovery are summarised and briefly compared, as are typical evaluation methods. Their strengths are highlighted, with the principal focus here being the superior performance of VALMOD over a range of motif lengths (as opposed to other methods targeting efficiency at individual motif lengths). In addition, taxonomy diagrams for both motif discovery and evaluation techniques are constructed to illustrate the relationships and inter-dependencies between different approaches. Finally, evaluation measures based on results obtained from VALMOD analysis of a GBP vs. USD foreign exchange rate data set are demonstrated.
Eoin Cartwright, Martin Crane and Heather Ruskin
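For reference, the quantity VALMOD computes efficiently can be defined by the naive baseline below: the closest pair of non-overlapping, z-normalized subsequences of length m. VALMOD's lower-bounding and pruning across a range of motif lengths are precisely what this O(n^2) scan lacks.

```python
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def best_motif_pair(ts, m):
    # closest pair of non-overlapping length-m subsequences (naive O(n^2) scan)
    n = len(ts) - m + 1
    subs = [znorm(ts[i:i + m]) for i in range(n)]
    best, pair = np.inf, None
    for i in range(n):
        for j in range(i + m, n):          # enforce non-overlap
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best:
                best, pair = d, (i, j)
    return pair, best

rng = np.random.default_rng(1)
ts = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
print(best_motif_pair(ts, m=50))
```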
491 Profiling of Household Residents’ Electricity Consumption Behavior using Clustering Analysis [abstract]
Abstract: In this study, we apply clustering techniques to analyze and understand households' electricity consumption data. The knowledge extracted by this analysis is used to create a model of normal electricity consumption behavior for each particular household. Initially, the household's electricity consumption data are partitioned into a number of clusters with similar daily electricity consumption profiles. The centroids of the generated clusters can be considered representative signatures of a household's electricity consumption behavior. The proposed approach is evaluated by conducting a number of experiments on electricity consumption data from ten selected households. The obtained results show that the proposed approach is suitable for organizing and understanding the data, and can be applied to model electricity consumption behavior at the household level.
Christian Nordahl, Veselka Boeva, Håkan Grahn and Marie Netz
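A minimal sketch of the profiling step, assuming k-means as the clustering technique (the abstract does not fix one) and half-hourly daily load vectors as input; cluster centroids then serve as the household's consumption signatures.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
daily = np.abs(rng.standard_normal((90, 48)))   # 90 days x 48 half-hour slots

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(daily)
signatures = km.cluster_centers_                # representative daily profiles

# a day whose distance to its nearest signature is unusually large deviates
# from the household's modelled "normal" behaviour
dists = np.linalg.norm(daily - signatures[km.labels_], axis=1)
print(dists.mean(), dists.max())
```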
528 DNAS-STriDE Framework for Human Behavior Modelling in Dynamic Environments [abstract]
Abstract: People spend a major portion of their lifetime in buildings. Their presence inside buildings and their interactions with building systems, referred to as human behaviors, are monitored in real time for the efficient execution of facility management (FM) operations such as space utilization, asset management, energy optimization, and safety management. A large number of studies have been conducted over the past few decades on human behavior modelling and simulation that consider the dynamicity of humans for different FM applications. One example is the Drivers, Needs, Actions and Systems (DNAS) framework, which provides a standardized way to conceptually represent energy-related occupant behaviors in buildings and allows the exchange of occupant behavior information and integration with building simulation tools. Despite numerous studies dealing with the dynamic interactions of building occupants, a gap still exists in the knowledge modelling of occupant behaviors for dynamic building environments. Such environments are best observed on construction sites, where the semantic information linked to building spaces often evolves over time in terms of location, size, properties and relationships with the site environment. This evolving semantic information of a building must be mapped to occupant interactions for an improved understanding of their changing behaviors using contextual information. To fill this research gap, a framework is designed that provides a ‘blueprint map’ to integrate the DNAS framework with our Semantic Trajectories in Dynamic Environments (STriDE) data model, thereby incorporating the dynamicity of building environments. The proposed framework extends the usability of the DNAS framework by providing a centralized knowledge base that holds the mobility data of occupants together with historicized semantic information of the building environment, in order to study occupant behaviors for different FM applications.
Christophe Cruz and Muhammad Arslan
553 OPENCoastS: An open-access app for sharing coastal prediction information for management and recreational use [abstract]
Abstract: Coastal forecast systems provide coastal managers with accurate and timely water predictions, supporting multiple uses such as navigation, water monitoring, port operations and dredging activities. They are also useful tools to support recreational activities. The widespread usage of coastal forecasts is generally limited by the unavailability of open forecasts for consultation, the expertise needed to build an operational forecast system, and the human and computational resources needed to keep it in daily operation. In the scope of the EOSC-Hub project, a new service for the generic deployment of forecast systems at user-specified locations was developed to address these limitations. Denoted OPENCoastS, this service builds circulation forecast systems for user-selected coastal areas and maintains them in operation using EOSC computational resources. OPENCoastS can be applied to any coastal region and has been in operation for the last 9 months, forced by GFS, CMEMS, Arpege, PRISM2017 and FES2014. It has attracted over 150 users from around 45 institutions across the globe. However, the only input required to use this service - a computational grid of the domain of interest - has proven difficult for most coastal managers to obtain; thus, most users come from research institutions. Herein, we propose a new way to bring coastal managers and the general public into the OPENCoastS community. By creating an open, scalable and organized repository of computational grids, shared by expert coastal modelers across the globe, the benefits of OPENCoastS can now be shared with all coastal actors.
Anabela Oliveira, Marta Rodrigues, André Fortunato, João Rogeiro, Joana Teixeira, Alberto Azevedo and Pedro Lopes
564 PROCESS -- PROviding Computing solutions for ExaScale challengeS [abstract]
Abstract: Addressing emerging grand challenges in scientific research, health, engineering or global consumer services necessitates dramatic increases in responsive supercomputing and extreme data capacities. PROCESS -- PROviding Computing solutions for ExaScale challengeS -- a collaborative research and innovation EU project, offers solutions for these challenges. With an adaptable service prototype, we will provide a scalable and easy-to-use ecosystem for different disciplines and scientific areas. Our distributed services connect several storage and computing sites across Europe and enable inter-centre computations. In this workshop we will present the PROCESS ecosystem and demonstrate its usage with an extreme data use case.
Maximilian Höb