Session 4: 10:15 - 11:55 on 7th June 2016

ICCS 2016 Main Track (MT) Session 4

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: KonTiki Ballroom

Chair: Alfredo Tirado-Ramos

287 Embedded real-time stereo estimation via Semi-Global Matching on the GPU [abstract]
Abstract: Dense, robust and real-time computation of depth information from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver assistance systems (ADAS) and autonomous vehicles. Semi-Global Matching (SGM) is a widely used algorithm that propagates consistency constraints along several paths across the image. This work presents a real-time system producing reliable disparity estimation results on the new embedded energy-efficient GPU devices. Our design runs on a Tegra X1 at 42 frames per second (fps) for an image size of 640×480, 128 disparity levels, and using 4 path directions for the SGM method.
Daniel Hernández Juárez, Alejandro Chacón, Antonio Espinosa, David Vázquez, Juan Carlos Moure, Antonio M. López
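
As an illustration of the core recurrence behind Semi-Global Matching discussed in the abstract above, the following sketch shows the per-path cost aggregation for a single scanline in plain NumPy. It is a minimal serial reference, not the authors' embedded GPU implementation; the cost volume C, the penalties P1/P2, and the left-to-right path direction are illustrative assumptions.

    import numpy as np

    def aggregate_path_lr(C, P1=10, P2=120):
        """Aggregate SGM costs along one scanline, left to right.

        C: (width, num_disparities) matching-cost slice for a single image row.
        Returns L with the same shape, following the classic SGM recurrence.
        """
        W, D = C.shape
        L = np.empty_like(C, dtype=np.float64)
        L[0] = C[0]
        for x in range(1, W):
            prev = L[x - 1]
            prev_min = prev.min()
            # Candidate costs: same disparity, +/-1 disparity (penalty P1),
            # or any larger disparity jump (penalty P2).
            same = prev
            minus = np.roll(prev, 1);  minus[0] = np.inf   # d-1 unavailable at d=0
            plus  = np.roll(prev, -1); plus[-1] = np.inf   # d+1 unavailable at d=D-1
            best = np.minimum.reduce([same, minus + P1, plus + P1,
                                      np.full(D, prev_min + P2)])
            L[x] = C[x] + best - prev_min   # subtract prev_min to keep values bounded
        return L

    # Tiny usage example with random matching costs.
    costs = np.random.rand(640, 128).astype(np.float64)
    L = aggregate_path_lr(costs)
    disparities = np.argmin(L, axis=1)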
314 Multivariate Polynomial Multiplication on GPU [abstract]
Abstract: Multivariate polynomial multiplication is a fundamental operation which is used in many scientific domains, for example in the optics code for particle accelerator design at CERN. We present a novel and efficient multivariate polynomial multiplication algorithm for GPUs using floating-point double precision coefficients implemented using the CUDA parallel programming platform. We obtain very good speedups over another multivariate polynomial multiplication library for GPUs (up to 548x), and over the implementation of our algorithm for multi-core machines using OpenMP (up to 7.46x).
Diana Andreea Popescu, Rogelio Tomas Garcia
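
For reference, multivariate polynomial multiplication with double-precision coefficients can be stated very compactly on the CPU; the sketch below multiplies two polynomials stored as dictionaries mapping exponent tuples to coefficients. It only illustrates the operation itself, under the assumption of this sparse dictionary representation; it says nothing about the CUDA data layout or parallelization strategy of the paper.

    from collections import defaultdict

    def poly_mul(p, q):
        """Multiply two multivariate polynomials.

        p, q: dict mapping exponent tuples (e.g. (2, 0, 1) for x^2*z) to float
        coefficients. Both must use exponent tuples of the same length.
        """
        result = defaultdict(float)
        for exp_p, coef_p in p.items():
            for exp_q, coef_q in q.items():
                exp = tuple(a + b for a, b in zip(exp_p, exp_q))
                result[exp] += coef_p * coef_q
        return dict(result)

    # (x + 2y) * (3x - y) = 3x^2 + 5xy - 2y^2
    p = {(1, 0): 1.0, (0, 1): 2.0}
    q = {(1, 0): 3.0, (0, 1): -1.0}
    print(poly_mul(p, q))   # {(2, 0): 3.0, (1, 1): 5.0, (0, 2): -2.0}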
329 CUDA Optimization of Non-Local Means Extended to Wrapped Gaussian Distributions for Interferometric Phase Denoising [abstract]
Abstract: Interferometric Synthetic Aperture Radar (InSAR) captures hundreds of millions of phase measurements with a single image, which can be differenced with a subsequent matching image to measure the Earth’s physical properties such as atmosphere, topography, and ground instability. Each pixel in an InSAR image lies somewhere between perfect information and complete noise; deriving useful measurements from InSAR is therefore predicated upon estimating the quality (coherence) of each pixel, while also enhancing the information-bearing pixels through filtering. Rejecting noisy pixels at the outset and filtering the available information without introducing artifacts is crucial for generating accurate and spatially dense measurements. A capable filtering strategy must accommodate the diversity of manmade and natural ground cover exhibiting noise spawned by vegetation and water interwoven with useable signals echoed by infrastructure, rocks, and bare ground. Traditional filtering strategies assuming spatial homogeneity have lately been replaced by filters that honor discontinuities in ground cover, but two key improvements are needed: a) techniques must be adapted to enhance phase rather than amplitude, and b) runtime needs to be reduced to support deployment for operational land-information products. We present a new algorithm for wrapped phase filtering based on the non-local means algorithm (NLM) of Buades et al. (2005) and the non-local InSAR (NL-InSAR) algorithm of Deledalle et al. (2011). The new filter, wrapped-NLM (WNLM), extends NLM to wrapped phase data that is inherently lossy due to an unknown integer number of phase ambiguities per pixel. The filter is similar to NL-InSAR in that we adopt their procedure of iteratively improving the filtered phase estimates by updating the Bayesian prior based on the previously filtered data (Deledalle et al., 2009). Our filter differs from NL-InSAR in that it does not assume the Goodman model (1963) nor that of speckle noise (Goodman J. W., 2007), which were found to suffer in some areas due to having too many degrees of freedom; instead we use the more general assumption that the phase noise distribution is additive wrapped Gaussian, making the filter more robust to a larger variety of input data. This also simplifies the algorithm, making it possible to implement an efficient parallel algorithm on the GPU using CUDA.
Aaron Zimmer, Parwant Ghuman
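
The key ingredient described in the abstract above is treating phase noise as additive wrapped Gaussian, so patch distances are computed on wrapped phase differences and filtered values are obtained by averaging unit phasors. The sketch below is a heavily simplified, single-pass non-local-means-style filter built on that idea; the patch size, search window, and smoothing parameter h are illustrative assumptions, and the Bayesian iteration of the actual WNLM filter is omitted.

    import numpy as np

    def wrapped_diff(a, b):
        """Wrapped phase difference in (-pi, pi]."""
        return np.angle(np.exp(1j * (a - b)))

    def wnlm_like_filter(phase, patch=3, search=7, h=0.6):
        """One pass of a non-local-means-style filter on wrapped phase (radians)."""
        pad = search // 2 + patch // 2
        padded = np.pad(phase, pad, mode='reflect')
        out = np.empty_like(phase)
        H, W = phase.shape
        pr, sr = patch // 2, search // 2
        for i in range(H):
            for j in range(W):
                ci, cj = i + pad, j + pad
                ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                acc, wsum = 0.0 + 0.0j, 0.0
                for di in range(-sr, sr + 1):
                    for dj in range(-sr, sr + 1):
                        ni, nj = ci + di, cj + dj
                        cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                        d2 = np.mean(wrapped_diff(ref, cand) ** 2)
                        w = np.exp(-d2 / (h * h))
                        acc += w * np.exp(1j * padded[ni, nj])   # average unit phasors
                        wsum += w
                out[i, j] = np.angle(acc / wsum)
        return out

    noisy = np.angle(np.exp(1j * np.random.normal(0.0, 0.8, (32, 32))))
    filtered = wnlm_like_filter(noisy)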
449 A Performance Prediction and Analysis Integrated Framework for SpMV on GPUs [abstract]
Abstract: This paper presents unique modeling algorithms of performance prediction for sparse matrix-vector multiplication (SpMV) on GPUs. Based on the algorithms, we develop a framework that is able to predict SpMV kernel performance and to analyze the reported prediction results. We make the following contributions: (1) We provide a theoretical basis for the generation of benchmark matrices according to the hardware features of a given specific GPU. (2) Given a sparse matrix, we propose a quantitative method to collect features representing its matrix settings. (3) We propose four performance modeling algorithms to accurately predict kernel performance for SpMV computing using CSR, ELL, COO, and HYB SpMV kernels. We evaluate the accuracy of our framework with 8 widely-used sparse matrices (32 test cases in total) on an NVIDIA Tesla K80 GPU. In our experiments, the average performance differences between the predicted and measured SpMV kernel execution times for the CSR, ELL, COO, and HYB SpMV kernels are 5.1%, 5.3%, 1.7%, and 6.1%, respectively.
Ping Guo, Chung-Wei Lee
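
As background for readers less familiar with the kernels named in the abstract above (CSR, ELL, COO, HYB), the sketch below shows the simplest of them, a CSR sparse matrix-vector product, in plain Python. It is only a reference for what such a kernel computes; the paper's contribution is predicting the GPU performance of these kernels, which the sketch does not attempt.

    import numpy as np

    def spmv_csr(values, col_idx, row_ptr, x):
        """y = A @ x for a matrix A stored in Compressed Sparse Row format."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for i in range(n_rows):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]
        return y

    # 3x3 example:  [[4, 0, 1],
    #                [0, 2, 0],
    #                [3, 0, 5]]
    values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
    col_idx = np.array([0, 2, 1, 0, 2])
    row_ptr = np.array([0, 2, 3, 5])
    print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))  # [5. 2. 8.]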
71 A Multi-GPU Fast Iterative Method for Eikonal Equations using On-the-fly Adaptive Domain Decomposition [abstract]
Abstract: Recent research on Eikonal solvers focuses on employing state-of-the-art parallel computing technology, such as GPUs. Even though previous work exists on GPU-based parallel Eikonal solvers, little research literature exists on multi-GPU Eikonal solvers due to the complications of data and work management. In this paper, we propose a novel on-the-fly, adaptive domain decomposition method for efficient implementation of the Block-based Fast Iterative Method on a multi-GPU system. The proposed method is based on dynamic domain decomposition, so that the region to be processed by each GPU is determined on-the-fly while the solver is running. In addition, we propose an efficient domain assignment algorithm that minimizes communication overhead while maximizing load balancing between GPUs. The proposed method scales well, up to 6.17x for eight GPUs, and can handle large computing problems that do not fit in limited GPU memory. We assess the parallel efficiency and runtime performance of the proposed method on various distance computation examples using up to eight GPUs.
Sumin Hong, Won-Ki Jeong
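
To make the abstract above more concrete, the sketch below shows the Godunov upwind local update that the Fast Iterative Method applies repeatedly on a 2D grid, together with a naive iterate-until-converged driver. The active-list management, block decomposition, and multi-GPU scheduling that are the actual subject of the paper are not reproduced here; the grid size, constant speed function, and tolerance are illustrative assumptions.

    import numpy as np

    def local_update(u, i, j, h=1.0, f=1.0):
        """Godunov upwind solution of |grad u| = 1/f at grid node (i, j)."""
        big = np.inf
        a = min(u[i - 1, j] if i > 0 else big, u[i + 1, j] if i < u.shape[0] - 1 else big)
        b = min(u[i, j - 1] if j > 0 else big, u[i, j + 1] if j < u.shape[1] - 1 else big)
        if np.isinf(a) and np.isinf(b):
            return np.inf
        if abs(a - b) >= h / f:
            return min(a, b) + h / f
        return 0.5 * (a + b + np.sqrt(2.0 * (h / f) ** 2 - (a - b) ** 2))

    def eikonal_naive(n=64, source=(0, 0), tol=1e-6):
        """Distance field from 'source' by sweeping local updates until convergence."""
        u = np.full((n, n), np.inf)
        u[source] = 0.0
        changed = True
        while changed:
            changed = False
            for i in range(n):
                for j in range(n):
                    if (i, j) == source:
                        continue
                    new = local_update(u, i, j)
                    if new < u[i, j] - tol:
                        u[i, j] = new
                        changed = True
        return u

    d = eikonal_naive(32)
    print(d[31, 31])   # roughly the distance from the corner source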

ICCS 2016 Main Track (MT) Session 11

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Toucan

Chair: Raymond de Callafon

43 An Evaluation of Data Stream Processing Systems for Data Driven Applications [abstract]
Abstract: Real-time data stream processing technologies play an important role in enabling time-critical decision making in many applications. This paper aims at evaluating the performance of platforms that are capable of processing streaming data. Candidate technologies include Storm, Samza, and Spark Streaming. To form the recommendation, a prototype pipeline is designed and implemented in each of the platforms using data collected from sensors used in monitoring heavy-haul railway systems. Through the testing and evaluation of each candidate platform, using both quantitative and qualitative metrics, the paper describes the findings.
Jonathan Samosir, Maria Indrawan-Santiago, Pari Delir Haghighi
122 Improving Multivariate Data Streams Clustering [abstract]
Abstract: Clustering data streams is an important task in data mining research. Recently, some algorithms have been proposed to cluster data streams as a whole, but just a few of them deal with multivariate data streams. Even so, these algorithms merely aggregate the attributes without touching upon the correlation among them. In order to overcome this issue, we propose a new framework to cluster multivariate data streams based on their evolving behavior over time, exploring the correlations among their attributes by computing the fractal dimension. Experimental results with climate data streams show that the clusters' quality and compactness can be improved compared to the competing method, leading to the conclusion that attribute correlations cannot be put aside. In fact, the clusters' compactness is 7 to 25 times better using our method. Our framework also proves to be a useful tool to assist meteorologists in understanding the climate behavior over a period of time.
Christian Bones, Luciana Romani, Elaine de Sousa
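
The abstract above relies on the fractal dimension to capture correlations among stream attributes. As a rough illustration of the kind of quantity involved, the sketch below estimates a box-counting dimension of a multivariate point set by counting occupied grid cells at several resolutions and fitting a log-log slope. It is a generic estimator under simple assumptions, not the stream-oriented computation used by the authors' framework.

    import numpy as np

    def box_counting_dimension(points, levels=(2, 4, 8, 16, 32)):
        """Estimate the box-counting dimension of a set of d-dimensional points."""
        pts = np.asarray(points, dtype=float)
        # Normalize each attribute to [0, 1) so a regular grid can be used.
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        pts = (pts - lo) / np.where(hi > lo, hi - lo, 1.0)
        pts = np.clip(pts, 0.0, 1.0 - 1e-12)
        log_inv_r, log_counts = [], []
        for g in levels:                      # g cells per axis, cell size r = 1/g
            cells = np.floor(pts * g).astype(int)
            occupied = len({tuple(c) for c in cells})
            log_inv_r.append(np.log(g))
            log_counts.append(np.log(occupied))
        slope, _ = np.polyfit(log_inv_r, log_counts, 1)
        return slope

    # Points on a line embedded in 3-D attribute space -> dimension close to 1.
    t = np.random.rand(5000, 1)
    data = np.hstack([t, 2.0 * t, -t])
    print(box_counting_dimension(data))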
465 Network Services and Their Compositions for Network Science Applications [abstract]
Abstract: Network science is moving more and more to computing dynamics on networks (so-called contagion processes), in addition to computing structural network features (e.g., key players and the like) and other parameters. Generalized contagion processes impose additional data storage and processing demands that include more generic and versatile manipulations of networked data that can be highly attributed. In this work, we describe a new network services and workflow system called MARS that supports structural network analyses and generalized network dynamics analyses. It is accessible through the internet and can serve multiple simultaneous users and software applications. In addition to managing various types of digital objects, MARS provides services that enable applications (and UIs) to add, interrogate, query, analyze, and process data. We focus on several network services and workflows of MARS in this paper. We also provide a case study using a web-based application that MARS supports, and several performance evaluations of scalability and workloads. We find that MARS efficiently processes networks of hundreds of millions of edges from many hundreds of simultaneous users.
Sherif Abdelhamid, Chris Kuhlman, Madhav Marathe, S. S. Ravi

Agent-based simulations, adaptive algorithms and solvers (ABS-AAS) Session 4

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Macaw

Chair: Maciej Paszynski

153 Time-Domain Goal-Oriented Adaptivity using Unconventional Error Representations [abstract]
Abstract: Goal-oriented adaptive algorithms have been widely employed during the last three decades to produce optimal grids in order to solve challenging engineering problems. In this work, we extend the error representation using unconventional dual problems for goal-oriented adaptivity from the context of frequency-domain wave-propagation problems to the case of time-domain problems. To do that, we express the entire problem in weak form in order to formulate the adjoint problem and apply the goal-oriented adaptivity. We have also chosen specific spaces of trial and test functions that allow us to express a classical Method of Lines in terms of a Galerkin scheme. Some numerical results are provided in one spatial dimension which show that the upper bounds of the new error representation are sharper than the classical ones and, therefore, this new error representation can be used to design better goal-oriented adaptive processes.
Judit J. Muñoz Matute, Elisabete Alberdi Celaya and David Pardo
46 Hypergraph Grammars in non-stationary hp-adaptive Finite Element Method [abstract]
Abstract: The paper presents an extension of the hypergraph grammar model of the hp-adaptive finite element method algorithm with rectangular elements to the case of non-stationary problems. In our approach the finite element mesh is represented by hypergraphs, and the mesh transformations are modelled by means of hypergraph grammar rules. The extension concerns the construction of the elimination tree during the generation of the mesh and the mesh adaptation process. Each operation on the mesh (generation of the mesh as well as h-adaptation of the mesh) is followed by the corresponding operation on the elimination tree. The constructed elimination tree allows the solver to reuse the matrices computed in the previous step of the Finite Element Method. Based on the constructed elimination tree, the solver can efficiently solve non-stationary problems.
Anna Paszynska, Maciej Woźniak, Andrew Lenharth, Donald Nguyen, Keshav Pingali
326 Dimensional Adaptivity in Magnetotellurics [abstract]
Abstract: The magnetotelluric (MT) method is a passive electromagnetic (EM) exploration technique governed by Maxwell's equations aiming at estimating the resistivity distribution of the subsurface on scales varying from a few meters to hundreds of kilometers. Natural EM sources induce electric currents in the Earth, and these currents generate secondary fields. By measuring simultaneously the horizontal components of these fields on the Earth's surface, it is possible to obtain information about the electrical properties of the subsurface. The dimensionality analysis of MT data is a hot and ongoing research topic in the area. In particular, the work of Weaver et al. (2000) has to be highlighted. There, they presented a dimensionality study based on the rotational invariants of the MT tensor. We also emphasize the more recent work of Martí et al. (2009), who implemented a software tool (based on these invariants) able to describe in a robust way the dimensionality of the problem when real measurements are employed. The dimension of the formation is not clear in some scenarios. When employing traditional inversion techniques, the dimension (the full 2D (or 3D) problem) is usually fixed in forward simulations and inversion. However, a proper study of the dimensionality of the problem may indicate some areas where the problem is fully 2D (or 3D), while in others a 1D (or 2D) consideration of the problem may be sufficient. Following this idea, we propose an initial step towards an algorithm that takes advantage of this scenario via adaptivity in the spatial variable. Thus, we first consider a full 1D inverse problem, with an exact (and fast) forward solution, and after that, we introduce this 1D inverse problem solution into the 2D inverse problem. Numerical results show savings of 75% in the inversion process in some scenarios.
Julen Alvarez-Aramberri, David Pardo and Ángel Rodríguez-Rozas
51 Computational complexity of isogeometric analysis with T-splines and B-splines over 2D grids refined towards singularities [abstract]
Abstract: In this paper we compare three different strategies for dealing with local singularities in the two-dimensional isogeometric finite element method. The first strategy employs local h refinements with T-spline basis functions. The second strategy is a modification of the first one, also using local h refinements and T-splines, but with some additional refinements intended to localize the support of the T-spline basis functions. The third strategy utilizes C^0 separators and B-splines. We compare the strategies by means of their computational cost from the point of view of multi-frontal direct solvers. We also compare the computational costs of our strategies with classical FEM using second order polynomials and C^0 separators between elements. We analyse the computational costs theoretically and also compare the number of floating point operations (FLOPs) executed by the multi-frontal direct solver MUMPS. We show that the third strategy outperforms both IGA-FEM and classical FEM.
Bartosz Janota, Pawel Lipski, Maciej Paszynski, Victor Calo and Grzegorz Gurgul

Workshop on Computational Optimization, Modelling & Simulation (COMS) Session 1

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Cockatoo

Chair: Leifur Leifsson

55 Cost-Efficient Microwave Design Optimization Using Adaptive Response Scaling [abstract]
Abstract: In this paper, a novel technique for cost-efficient design optimization of microwave structures has been proposed. Our approach exploits an adaptive response scaling that ensures good alignment between an equivalent circuit (used as an underlying low-fidelity model) and an electromagnetic (EM) simulation model of the structure under design. As the adaptive scaling tracks the low-fidelity model changes both in terms of frequency and the response level, it exhibits better generalization capability than traditional (e.g., space mapping) surrogates. This translates into improved design reliability and reduced design cost. Our methodology is demonstrated using two examples of microstrip filters and compared to several variations of conventional space mapping.
Slawomir Koziel, Adrian Bekasiewicz, Leifur Leifsson
62 Expedited Dimension Scaling of Microwave and Antenna Structures Using Inverse Surrogates [abstract]
Abstract: Re-designing circuits for various sets of performance specifications is an important problem in microwave and antenna engineering. Unfortunately, this is a difficult task that is normally realized as a separate design process, which is often as expensive (in computational terms) as obtaining the original design. In this work, we consider the application of inverse surrogate modeling for fast geometry scaling of microwave and antenna structures. Computational efficiency of the discussed procedure is ensured by representing the structure at the low-fidelity model level. The explicit relation between design specifications (here, operating frequency) of the structure and its geometry dimensions is determined based on a set of predetermined reference designs. Subsequently, the model is corrected to elevate the re-designed geometry to the high-fidelity electromagnetic (EM) model level. Our approach is demonstrated through a compact rat-race coupler and a patch antenna with enhanced bandwidth.
Slawomir Koziel, Adrian Bekasiewicz, Leifur Leifsson
79 Trawl-Door Shape Optimization by Space-Mapping-Corrected CFD Models and Kriging Surrogates [abstract]
Abstract: Trawl-doors account for a large part of the fluid-flow resistance of a trawler's fishing gear and have a considerable effect on fuel consumption. A key factor in reducing that consumption is implementing computational models in the design process. This study presents a robust two-dimensional computational fluid dynamics model that is able to capture the nonlinear flow past multi-element hydrofoils. Efficient optimization algorithms are applied to the design of trawl-doors using a problem formulation that captures the true characteristics of the design space, where the lift-to-drag ratio is maximized. Four design variables are used in the optimization process to control the fluid-flow angle of attack, as well as the position and orientation of a leading-edge slat. The optimization process involves both multi-point space mapping and mixed modeling techniques that utilize space mapping to create a physics-based surrogate model. The results demonstrate that lift-to-drag maximization is more appropriate than lift-constrained drag minimization in this case and that local search using multi-point space mapping can yield a satisfactory design at low computational cost. By using global search with mixed modeling, a solution of higher quality is obtained, but at a higher computational cost than local search.
Ingi Jonsson, Leifur Leifsson, Slawomir Koziel, Yonatan Tesfahunegn, Adrian Bekasiewicz
100 Preference-Based Economic Scheduling in Grid Virtual Organizations [abstract]
Abstract: A preference-based approach is proposed for Grid computing with regard to preferences given by various groups of virtual organization (VO) stakeholders (such as users, resource owners and administrators) to improve overall quality of service and resource load efficiency. Competition for computational resources between local jobs (initiated by owners) and the global (users') job flow substantially complicates the problem of maintaining the required level of service quality. A specific cyclic job batch scheduling scheme is examined in the present work, which makes it possible to distribute and share resources considering all the VO stakeholders' preferences and to find a balance between the VO's global preferences and those of its users. Two different general utility functions are introduced to represent the satisfaction of users' preferences.
Victor V. Toporkov, Dmitry Yemelyanov, Alexander Bobchenkov, Petr Potekhin
116 Cache-aware dynamic layout for efficient shared memory parallelisation of EUROPLEXUS [abstract]
Abstract: Parallelizing industrial simulation codes, like the EUROPLEXUS software dedicated to the analysis of fast transient phenomena, is challenging. In this paper we focus on efficient parallelization on shared-memory nodes. We propose to have each thread gather the data it needs for processing a given iteration range, before actually advancing the computation by one time step on this range. This lazy, cache-aware layout construction makes it possible to keep the original data structure and leads to very localised code modifications. We show that this approach can improve execution time by up to 40% when the task size is set so that the data fit in the L2 cache.
Marwa Sridi, Bruno Raffin, Vincent Faucher

Workshop on Computational and Algorithmic Finance (WCAF) Session 4

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Boardroom East

Chair: A. Itkin and J. Toivanen

158 Optimum Liquidation Problem Associated with the Poisson Cluster Process [abstract]
Abstract: In this research, we develop a trading strategy for the discrete-time optimal liquidation problem of large-order trading with different market microstructures in an illiquid market. In this framework, the flow of orders can be viewed as a point process with stochastic intensity. We model the price impact as a linear function of a self-exciting dynamic process. We formulate the liquidation problem as a discrete-time Markov Decision Process, where the state process is a Piecewise Deterministic Markov Process (PDMP). The numerical results indicate that the optimal trading strategy depends on the characteristics of the market microstructure. When no orders above a certain value arrive, the optimal strategy takes offers at the lower levels of the limit order book in order to avoid unfilled orders and final inventory costs.
Amirhossein Sadoghi and Jan Vecer
429 Expected Utility or Prospect Theory: which better fits agent-based modeling of markets? [abstract]
Abstract: Agent-based simulations may be a way to model human society behavior in decisions under risk. However, it is well known in economics that Expected Utility Theory (EUT) is flawed as a descriptive model. In fact, there are some models based on Prospect Theory (PT) that try to provide a better description. If people behave according to PT in finance environments, it is arguable that PT-based agents may be a better choice for such environments. We investigate this idea in a specific risky environment: the financial market. We propose an architecture for PT-based agents. Due to some limitations of the original PT, we use an extension of PT called Smooth Prospect Theory (SPT). We simulate artificial markets with SPT-based and traditional (TRA) agents using historical data of many different assets over a period of twenty years. The results showed that SPT-based agents provided behavior closer to real market data than TRA agents in a statistically significant way. This supports the idea that PT-based agents may be a better pick for risky environments.
Paulo A. L. Castro, Anderson R. B. Teodoro and Luciano de Castro
487 Market Trend Visual Bag of Words Informative Patterns in Limit Order Books [abstract]
Abstract: This paper presents a graphical representation that fully depicts the price-time-volume dynamics in a Limit Order Book (LOB). Based on this pattern representation, a clustering technique is applied to predict market trends. The clustering technique is tested on information from the USD/COP market. Competitive trend prediction results were found, and a benchmark for future extensions was established.
Javier Sandoval, German Hernandez, Jaime Nino, Andrea Cruz
494 Modeling High Frequency Data Using Hawkes Processes with Power-Law Kernels [abstract]
Abstract: The empirical properties exhibited by high frequency financial data, such as time-varying intensities and self-exciting features, make it a challenge to model appropriately the dynamics associated with, for instance, order arrival. To capture the microscopic structures pertaining to limit order books, this paper focuses on modeling high frequency financial data using Hawkes processes. Specifically, the model with power-law kernels is compared with the counterpart with exponential kernels, on the goodness of fit to the empirical data, based on a number of proposed quantities for statistical tests. Based on one trading day of data for one representative stock, it is shown that Hawkes processes with power-law kernels are able to reproduce the intensity of jumps in the price processes more accurately, which suggests that they could serve as a realistic model for high frequency data on the level of microstructure.
Changyong Zhang
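
To make the comparison in the abstract above concrete, the sketch below evaluates the conditional intensity of a univariate Hawkes process with a power-law excitation kernel and, for contrast, an exponential one. The parameter values and the specific power-law form phi(s) = alpha / (s + c)^(1 + gamma) are illustrative assumptions, not those fitted in the paper.

    import numpy as np

    def hawkes_intensity(t, events, mu, kernel):
        """Conditional intensity lambda(t) = mu + sum of kernel(t - t_i) over past events."""
        events = np.asarray(events, dtype=float)
        past = events[events < t]
        return mu + kernel(t - past).sum()

    def powerlaw_kernel(alpha=0.4, c=0.01, gamma=0.5):
        return lambda s: alpha / (s + c) ** (1.0 + gamma)

    def exponential_kernel(alpha=0.8, beta=2.0):
        return lambda s: alpha * beta * np.exp(-beta * s)

    events = [0.10, 0.12, 0.50, 1.30, 1.31, 1.33]   # example order-arrival times (seconds)
    mu = 0.2                                        # baseline intensity
    for name, k in [("power-law", powerlaw_kernel()), ("exponential", exponential_kernel())]:
        lam = hawkes_intensity(1.35, events, mu, k)
        print(f"{name:12s} intensity at t=1.35: {lam:.3f}")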

Advances in the Kepler Scientific Workflow System and Its Applications (Kepler) Session 1

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Boardroom West

Chair: Jianwu Wang

504 Kepler WebView: A Lightweight, Portable Framework for Constructing Real-time Web Interfaces of Scientific Workflows [abstract]
Abstract: Modern web technologies facilitate the creation of high-quality data visualizations, and rich, interactive components across a wide variety of devices. Scientific workflow systems can greatly benefit from these technologies by giving scientists a better understanding of their data or model leading to new insights. While several projects have enabled web access to scientific workflow systems, they are primarily organized as a large portal server encapsulating the workflow engine. In this vision paper, we propose the design for Kepler WebView, a lightweight framework that integrates web technologies with the Kepler Scientific Workflow System. By embedding a web server in the Kepler process, Kepler WebView enables a wide variety of usage scenarios that would be difficult or impossible using the portal model.
Daniel Crawl, Alok Singh, Ilkay Altintas
291 A Smart Manufacturing Use Case: Furnace Temperature Balancing in Steam Methane Reforming Process via Kepler Workflows [abstract]
Abstract: The industrial-scale production of hydrogen gas through the steam methane reforming (SMR) process requires an optimum furnace temperature distribution to not only maximize the hydrogen yield but also increase the longevity of the furnace infrastructure, which usually operates around 1300 Kelvin (K). Kepler workflows are used in temperature homogenization, termed balancing, of this furnace through Reduced Order Model (ROM) based Matlab calculations using dynamic temperature inputs from an array of infrared sensors. The outputs of the computation are used to regulate the flow rate of fuel gases, which in turn optimizes the temperature distribution across the furnace. The input and output values are stored in a data historian, which is a database for real-time data and events. Computations are carried out on an OpenStack-based cloud environment running Windows and Linux virtual machines. Additionally, an ab initio computational fluid dynamics (CFD) calculation using the Ansys Fluent software is performed to update the ROM periodically. ROM calculations complete in a few minutes, whereas CFD calculations usually take a few hours to complete. The workflow uses an appropriate combination of the ROM and CFD models. The ROM-only workflow currently runs every 30 minutes to process the real-time data from the furnace, while the ROM-CFD workflow runs on demand. The ROM-only workflow can also be triggered on demand by an operator of the furnace.
Prakashan Korambath, Jianwu Wang, Ankur Kumar, Jim Davis, Robert Graybill, Brian Schott, Michael Baldea
335 Running simultaneous Kepler sessions for the parallelization of parametric scans and optimization studies applied to complex workflows [abstract]
Abstract: In this paper we present the approach taken to run multiple Kepler sessions at the same time. This kind of execution is one of the requirements for the codes developed within EUROfusion. It allows gains in speed and resources. The choice of Integrated Modelling made by the former EFDA ITM-TF and pursued now under EUROfusion WPCD is unique and original: it entails the development of a comprehensive and completely generic tokamak simulator including both the physics and the machine, which can be applied to any fusion device. All components are linked inside workflows, which allow complex coupling of various algorithms while providing consistency. Workflows are composed of Kepler and Ptolemy II elements as well as a set of native libraries written in various languages (Fortran, C, C++). In addition to that, there are Python-based components that are used for visualization of results as well as for pre/post processing. At the bottom of all these components there is a database layer that may vary between software releases. All these constraints make it really challenging to run multiple Kepler sessions at the same time. However, the ability to run numerous sessions in parallel is a must - to reduce computation time and to make it possible to run released codes while working with new software at the same time. In this paper we present our approach to solving this issue and we present applications of this approach.
Michał Owsiak, Marcin Plociennik, Bartek Palak, Tomasz Zok, Cedric Reux, Luc Di Gallo, Mireille Schneider, Thomas Johnson, Denis Kalupin
444 Forest fire spread prediction system workflow: an experience using Kepler [abstract]
Abstract: Natural hazard prediction systems provide key information to mitigate the effects of disasters. Having this information in advance, decision-making teams can be more efficient and effective. However, running these systems in real time with data assimilation is not a trivial task. Most of the studies using these systems are post-disaster analyses. For that reason, scientific workflow software and sensor cyberinfrastructures are crucial to monitor the system execution and take advantage of data assimilation. In this work, a workflow for a forest fire spread prediction system is proposed using Kepler. The main modules of the workflow are WindNinja and FARSITE. The data for assimilation comes from HPWREN weather stations. The workflow allows running a spread prediction for a given ignition boundary using data assimilation and monitoring the execution of the different modules of the workflow.
Tomàs Artés, Daniel Crawl, Ana Cortes and Ilkay Altintas

Solving Problems with Uncertainties (SPU) Session 1

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Rousseau West

Chair: Vassil Alexandrov

166 Bounded Support and Confidence over Evidential Databases [abstract]
Abstract: Evidential databases have shown their potential application in many fields of study. This specific database framework allows frequent patterns and associative classification rules to be extracted in a more efficient way from uncertain and imprecise data. The definition of support and confidence measures plays an important role in the extraction process of meaningful patterns and rules. In the present work, we propose a new definition of support and confidence measures based on an interval representation. Moreover, a new algorithm, named EBS-Apriori, based on these bounded measures and several pruning strategies, was developed. Experiments were conducted using several database benchmarks. Performance analysis showed a better prediction outcome for our proposed approach in comparison with several literature-based methods.
Ahmed Samet, Tien Tuan Dao
174 Probabilistic Semantics [abstract]
Abstract: This paper proposes a concise overview of Probabilistic Semantics from a technology-oriented perspective. Indeed, while the progressive consolidation of Semantic Technology in a wide context and on a large scale is becoming a fact, the non-deterministic character of many problems and environments suggests the rise of additional research around semantics to complement the mainstream. Probabilistic extensions and their implications for the current semantic ecosystems are discussed in this paper with an implicit focus on the Web and its evolution. The critical literature review undertaken shows valuable theoretical works, effective applications, and evidence of an increasing research interest as a response to real problems, as well as largely unexplored research areas.
Salvatore Flavio Pileggi
300 Reducing Data Uncertainty in Surface Meteorology using Data Assimilation: A Comparison Study. [abstract]
Abstract: Data assimilation in weather forecasting is a well-known technique used to obtain an improved estimation of the current atmosphere state, or data analysis. Data assimilation methods such as LAPS (Local Analysis and Prediction System) and STMAS (Space-Time Multiscale Analysis System) provide reasonable results when dealing with a low resolution model, but they have restrictions when high-resolution real-time analysis is required for surface parameters. In particular, the Meteorological Service of Catalunya (SMC) is seeking a real-time high-resolution analysis of surface parameters over Catalonia (north-east of Spain), in order to know the current weather conditions at any point of that region. For this purpose, a comparative study among several data assimilation methods, including Altava's method designed in this weather forecast center, has been performed to determine which one delivers better results. The comparison includes the classical data assimilation techniques, which combine observational data with numerical weather prediction models to estimate the current state of the atmosphere, and the multi-regression technique proposed by the SMC. The comparison has been done using as the true state the independent observational data provided by the Spanish Meteorological State Agency (Agencia Estatal de Meteorología, AEMET). The results show that the multi-regression technique provides more accurate analyses of temperature and relative humidity than the data assimilation methods, because the multi-regression methodology only uses observations and consequently the model biases are avoided.
Angel Farguell, Jordi Moré, Ana Cortes, Josep Ramon Miró, Tomàs Margalef, Vicent Altava
364 Psychological warfare analysis using Network Science approach [abstract]
Abstract: In this paper, we analyze the concept of psychological warfare on the Internet as a continuous opposition of opinions aimed at influencing public opinion in some area and manifested through manifold mass media publications. We formulate the basic research questions concerning psychological warfare and suggest simple steps to provide an initial analysis. In our research, we decided to take Twitter as a typical representative of the social networking phenomenon, as it is one of the most subscribed social networks. To determine points of view related to a chosen theme, a keyword clusterization algorithm is provided. We propose a network composition method and perform an analysis of network properties, which we can interpret in the context of psychological warfare analysis.
Ilya Blokh, Vassil Alexandrov

Modeling and Simulation of Large-scale Complex Urban Systems (MASCUS) Session 1

Time and Date: 10:15 - 11:55 on 7th June 2016

Room: Rousseau East

Chair: Matthias Berger

491 The Manyfold Challenges for Modeling the Urban Heat Island [abstract]
Abstract: The so-called urban heat island (UHI) is an anthropogenic effect of an elevated temperature level in urban areas compared to their surroundings. While the many causes of the UHI have been identified in the past, the magnitude of each component strongly depends on the individual city and its geography. In most cases a UHI is a threat, resulting in heat-related stress and health issues, higher costs for air-conditioning and cooling, and loss of quality of urban living in general. Based on the experience and research done in the tropical megacity of Singapore, where the UHI results in elevated temperature levels of up to 8 degrees Celsius, several barriers and challenges in tackling the problem have been identified: 1) lack of data, information, and knowledge; 2) missing interdisciplinary and transdisciplinary collaboration; 3) synthesis of various modeling approaches; and last but not least 4) computational challenges. In addressing all four points above we suggest an approach based on combining top-down with bottom-up models, which was introduced by the Future Cities Lab of the Singapore-ETH Center in 2012 and further developed as the Cooler Calmer Singapore project. The past, current, and future research within the project and the collaboration with outside partners will be demonstrated. Before the UHI of Singapore can be fought, the complex problem needs to be fully understood. Our research shall enable this task.
Matthias Berger
87 Traffic State Estimation Using Floating Car Data [abstract]
Abstract: The increasing availability of floating car data, both historic in the form of trajectory data sets and real-time in the form of continuous data streams, paves the way for data-driven traffic simulations. While historic data sets can be used to construct spatiotemporal models of different roads, the continuous data streams from probe vehicles can be used for purposes such as current traffic-state estimation, incident detection and predicting the short-term evolution of traffic. A service which incorporates the aforementioned features will be invaluable for advanced traffic management and information services. In this paper we present a thorough analysis of using probe vehicles for reconstructing the traffic state in real time, by employing detailed agent-based microscopic traffic simulations of several scenarios on a real-world expressway.
Abhinav Sunderrajan, Vaisagh Viswanathan, Wentong Cai, Alois Knoll
478 Information Dynamics in Transportation Systems with Traffic Lights Control [abstract]
Abstract: Due to recent advances in communication possibilities between traffic infrastructure, vehicles and drivers, the optimization of traffic light control can be approached in novel ways. At the same time, this may introduce new, unexpected dynamics in transportation systems. Our research aims to determine how drivers and traffic light systems interact and influence each other when each is informed about the other's behaviour. In order to study this, we developed an agent-based model to simulate transportation systems with static and dynamic traffic lights and drivers using information about the traffic lights' behaviour. Experiments reveal that the system's performance improves when a bigger share of drivers receive information, for both static and dynamic traffic light systems. This performance improvement is due to drivers managing to avoid stopping at red lights rather than adapting their speed to different distances from the traffic lights. Additionally, it is demonstrated that the duration of the fixed phases also influences the performance when drivers use speed recommendations. Moreover, the results show that dynamic traffic lights can produce positive effects for roads with high speed limits and high traffic intensity, while in the rest of the cases static control is better. Our findings can be used for building more efficient traffic light systems.
Sorina Costache Litescu, Vaisagh Viswanathan, Heiko Aydt, Alois Knoll
225 An integrated simulation environment for testing V2X protocols and applications [abstract]
Abstract: The implementation of Vehicle-to-everything (V2X) communication technologies for traffic management has been envisioned to have a plethora of far-reaching and useful consequences. However, before any hardware/software infrastructure can be developed and implemented, a thorough phase of testing is warranted. Since actual vehicles and traffic conditions cannot be physically re-constructed, it is imperative that accurate simulation tools exist in order to model pragmatic traffic scenarios and communication amongst the participating vehicles. In order to realize this need for simulating V2X technology, we have created an integrated simulation environment that combines three software packages: VISSIM (traffic modelling), MATLAB (traffic management applications) and NS3 (communication network simulation). The combination of the simulators has been carried out in a manner that allows on-line exchange of data amongst them. This enables one to visualize whether a traffic management algorithm creates the desired effect and also the efficacy of the communication protocol used. In order to test the simulator, we have modelled the Green Light Optimized Speed Advisory (GLOSA) application, whose objective is the communication of the present traffic signal phase information to oncoming vehicles using a transmitting unit installed on the signal itself. This information allows the vehicles to calculate the desired speeds necessary to cross the relevant intersection without stopping. Therefore, a "Green Wave" can be created for all vehicles without the need to coordinate traffic signal timers, which can be rather complex in a multiple-intersection traffic corridor.
Apratim Choudhury, Tomasz Maszczyk, Justin Dauwels, Chetan Math, Hong Li
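
As a small illustration of the GLOSA logic described in the abstract above, the sketch below computes an advised speed from a vehicle's distance to the stop line and the signal's phase schedule, so that the vehicle arrives during a green interval if its speed bounds allow. The phase timings, speed bounds, and the simple arrive-at-next-green rule are illustrative assumptions, not the protocol or algorithm implemented in the integrated VISSIM/MATLAB/NS3 environment.

    def glosa_advised_speed(distance_m, phase, time_left_s, red_s, v_min=5.0, v_max=16.7):
        """Return an advised speed (m/s) so the vehicle reaches the signal on green.

        phase: 'green' or 'red'; time_left_s: seconds remaining in the current phase;
        red_s: duration of the red phase.
        """
        if phase == 'green' and distance_m / v_max <= time_left_s:
            return v_max                      # current green is reachable at full speed
        # Otherwise aim to arrive at the start of the next green interval.
        t_next_green = time_left_s if phase == 'red' else time_left_s + red_s
        advised = distance_m / t_next_green
        return min(max(advised, v_min), v_max)

    # Vehicle 200 m from the stop line, light just turned red with 30 s remaining.
    print(glosa_advised_speed(200.0, 'red', 30.0, red_s=30.0))   # ~6.7 m/s advised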