ICCS 2018 Main Track (MT) Session 4

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M1

Chair: Thilina Perera

370 Hyper-heuristic Online Learning for Self-assembling Swarm Robots [abstract]
Abstract: Robot swarms are a solution for difficult and large-scale tasks. However, controlling and coordinating a swarm of robots is a challenge because of the complexity and uncertainty of the environment, where manual programming of robot behaviours is often impractical. In this study we propose a hyper-heuristic methodology for swarm robots. It allows robots to create suitable actions based on a set of low-level heuristics, where each heuristic is a behavioural element. With online learning, robot behaviours can be improved during execution by autonomous heuristic adjustment. The proposed hyper-heuristic framework is applied to building-surface cleaning tasks where multiple separate surfaces exist and complete surface information is difficult to obtain. Under this scenario, the robot swarm needs not only to clean the surfaces efficiently by distributing the robots, but also to move across surfaces by self-assembling into a bridge structure. Experimental results showed the effectiveness of the hyper-heuristic framework, as a group of robots was able to autonomously clean multiple surfaces of different layouts without prior programming. Their behaviours improved over time because of the online learning mechanism.
Shuang Yu, Aldeida Aleti, Jan Carlo Barca and Andy Song
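The abstract does not specify the learning rule; as an illustration only, online adjustment of low-level heuristics can be sketched as an epsilon-greedy selection over learned scores (the heuristic names and the exponential-smoothing update below are hypothetical, not taken from the paper):

```python
import random

def select_heuristic(scores, epsilon=0.1, rng=random):
    """Pick a low-level heuristic: explore with probability epsilon,
    otherwise exploit the heuristic with the best learned score."""
    if rng.random() < epsilon:
        return rng.choice(sorted(scores))
    return max(scores, key=scores.get)

def update_score(scores, heuristic, reward, alpha=0.5):
    """Online learning step: blend the old score with the reward
    observed after executing the chosen behavioural element."""
    scores[heuristic] = (1 - alpha) * scores[heuristic] + alpha * reward
```

With epsilon = 0 the selection is purely greedy; raising epsilon trades exploitation for exploration of rarely used behaviours.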
241 An Innovative Heuristic for Planning-based Urban Traffic Control [abstract]
Abstract: The global growth in urbanisation increases the demand for services, including road transport infrastructure, presenting challenges in terms of mobility. In this scenario, optimising the exploitation of the urban road network is a pivotal challenge, particularly in the case of unexpected situations. In order to tackle this challenge, approaches based on mixed discrete-continuous planning have recently been proposed, and although their feasibility has been demonstrated, there is a lack of informative heuristics for this class of applications. Therefore, existing approaches tend to provide low-quality solutions, leading to a limited impact of generated plans on the actual urban infrastructure. In this work, we introduce the Time-Based heuristic: a highly informative heuristic for PDDL+ planning-based urban traffic control. The heuristic, which has an admissible and an inadmissible variant, has been evaluated on scenarios that use real-world data.
Santiago Franco, Alan Lindsay, Mauro Vallati and Lee Mccluskey
111 Automatic Web News Extraction Based on DS Theory Considering Content Topics [abstract]
Abstract: In addition to the news content, most news web pages also contain various kinds of noise, such as advertisements, recommendations, and navigation panels. This noise may hamper studies and applications that require pre-processing to extract the news content accurately. Existing methods of news content extraction mostly rely on non-content features, such as tag path, text layout, and DOM structure. However, without considering the topics of the news content, these methods have difficulty recognizing noise whose external characteristics are similar to those of the news content. In this paper, we propose a method that combines non-content features and a topic feature based on Dempster-Shafer (DS) theory to increase recognition accuracy. We use maximal compatibility blocks to generate topics from text nodes and then obtain feature values of the topics. Each feature is converted into evidence for DS theory, which can be utilized in uncertain information fusion. Experimental results on English and Chinese web pages show that combining the topic feature by DS theory noticeably improves extraction performance.
Kaihang Zhang, Chuang Zhang, Xiaojun Chen and Jianlong Tan
118 DomainObserver: A Lightweight Solution for Detecting Malicious Domains Based on Dynamic Time Warping [abstract]
Abstract: People use the Internet to shop, access information and enjoy entertainment by browsing web sites. At the same time, cyber-criminals operate malicious domains to spread illegal information and acquire money, which poses a great risk to the security of cyberspace. It is therefore of great importance to detect malicious domains in the field of cyberspace security. There is a broad body of research on detecting malicious domains either by blacklists or by exploiting features via machine learning techniques. However, the former is infeasible due to its limited coverage, and the latter requires complex feature engineering. Different from most previous methods, in this paper we propose a novel lightweight solution named DomainObserver to detect malicious domains. DomainObserver is based on dynamic time warping, which is used to better align time series. To the best of our knowledge, it is a new trial to apply passive traffic measurements and time series data mining to malicious domain detection. Extensive experiments on real datasets are performed to demonstrate the effectiveness of our proposed method.
Guolin Tan, Peng Zhang, Qingyun Liu, Xinran Liu and Chunge Zhu
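DomainObserver's alignment step builds on standard dynamic time warping; a minimal textbook DTW distance (not the authors' implementation) looks like this:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences:
    the classic O(len(a)*len(b)) dynamic program over alignment costs."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: deletion, insertion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because DTW tolerates local stretching, two traffic time series with the same shape but shifted peaks still score as similar, which is presumably what suits it to passive traffic measurements better than a pointwise Euclidean distance.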
157 You Have More Abbreviations than You Know: A Study of AbbrevSquatting Abuse [abstract]
Abstract: Domain squatting is a speculative behavior involving the registration of domain names that are trademarks belonging to popular companies, important organizations or other individuals, before the latter have a chance to register them. This paper presents a specific and overlooked type of domain squatting called “AbbrevSquatting”, a phenomenon that mainly occurs on institutional websites. As institutional domain names are usually named with abbreviations (i.e., short forms) of the full names or official titles of institutes, attackers can mine abbreviation patterns from existing pairs of abbreviations and full names, and register forged domain names with unofficial but meaningful abbreviations for a given institute. To measure the abuse of AbbrevSquatting, we first mine the common abbreviation patterns used in institutional domain names, and generate potential AbbrevSquatting domain names with a data set of authoritative domains. Then, we check the maliciousness of the generated domains with a public API and seven different blacklists, and group the domains into several categories with crawled data. Through a series of manual and automated experiments, we discover that attackers are already aware of the principles of AbbrevSquatting and are monetizing them in various unethical and illegal ways. Our results suggest that AbbrevSquatting is a real problem that requires more attention from security communities and institutions' registrars.
Pin Lv, Jing Ya, Tingwen Liu, Jinqiao Shi, Binxing Fang and Zhaojun Gu

ICCS 2018 Main Track (MT) Session 10

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M2

Chair: Pablo Enfedaque

216 Elastic CPU Cap Mechanism for Timely Dataflow Applications [abstract]
Abstract: Sudden surges in the incoming workload can have adverse consequences on the run-time performance of data-flow applications. Our work addresses the problem of limiting the CPU caps associated with the elastic scaling of timely data-flow (TDF) applications running in a shared computing environment, where each application can have a different quality of service (QoS) requirement. The key argument here is that an unwise consolidation decision to dynamically scale up/out the computing resources in response to unexpected workload changes can degrade the performance of some (if not all) collocated applications due to their fierce competition for shared resources (such as the last-level cache). The proposed solution uses a queue-based model to predict the performance degradation of running data-flow applications together. The problem of CPU cap adjustment is addressed as an optimization problem, where the aim is to reduce QoS violation incidents among applications while raising the CPU utilization level of server nodes and preventing the formation of bottlenecks due to fierce competition among collocated applications. The controller uses an efficient dynamic method to find a solution at each round of the controlling epoch. The performance evaluation is carried out by comparing the proposed controller against an enhanced QoS-aware version of the round-robin strategy deployed in many commercial packages. Experimental results confirmed that the proposed solution improves QoS satisfaction by nearly 148% on average, while it can reduce the latency of processing data records for applications in the highest QoS classes by nearly 19% during workload surges.
M. Reza Hoseinyfarahabady, Nazanin Farhangsadr, Albert Zomaya, Zahir Tari and Samee Khan
351 Blockchain-based transaction integrity in distributed big data marketplace [abstract]
Abstract: Today Big Data plays a crucial role both in scientific research and in the business analysis of large companies. Each company tries to find the best way to make the big data it generates valuable and profitable. However, in most cases, companies do not have enough capability and budget to solve this complex problem. On the other hand, there are companies (e.g., in insurance and banking) that can significantly improve their business organization by applying hidden knowledge extracted from such big data. This situation leads to the necessity of building a platform for the exchange, processing, and sale of collected big data. In this paper, we propose a distributed big data platform that implements a digital data market, based on the blockchain mechanism for data transaction integrity.
Denis Nasonov, Alexander Visheratin and Alexander Boukhanovsky
363 Workload Characterization and Evolutionary Analyses of Tianhe-1A Supercomputer [abstract]
Abstract: Currently, supercomputer systems face a variety of application challenges, including high-throughput, data-intensive, and stream-processing applications. At the same time, there is a growing challenge to improve user satisfaction on supercomputers such as Tianhe-1A, Tianhe-2 and TaihuLight because of the commercial service model. It is important to understand HPC workloads and their evolution to facilitate informed future research and improve user satisfaction. In this paper, we present a methodology to characterize workloads on a commercial supercomputer (where users need to pay) at a particular period, and their evolution over time. We apply this method to the workloads of Tianhe-1A at the National Supercomputer Center in Tianjin. This paper presents the concept of quota-constrained waiting time for the first time, which has significance for optimizing scheduling and enhancing user satisfaction on commercial supercomputers.
Jinghua Feng, Guangming Liu, Jian Zhang, Zhiwei Zhang, Jie Yu and Zhaoning Zhang
378 The Design of Fast and Energy-Efficient Linear Solvers: On The potential Of Half Precision Arithmetic And Iterative Refinement Techniques [abstract]
Abstract: As parallel computers approach the exascale, power efficiency in high-performance computing (HPC) systems is of increasing concern. Exploiting both hardware features and algorithms is an effective way to achieve power efficiency and address the energy constraints in modern and future HPC systems. In this work, we present a novel design and implementation of an energy-efficient solution for dense linear systems of equations, which are at the heart of large-scale HPC applications. Our energy-efficient linear system solvers are based on two main components: (1) iterative refinement techniques, and (2) reduced-precision computing features of modern accelerators and co-processors. While most energy-efficiency approaches aim to reduce consumption with a minimal performance penalty, our method improves both performance and energy efficiency. Compared to highly optimised linear system solvers, our kernels are up to 2X faster in delivering a solution of the same accuracy, and reduce energy consumption by up to half on Intel KNL architectures. By efficiently using the tensor cores available in NVIDIA V100 PCIe GPUs, the speedups can be up to 4X, with more than 80% reduction in energy consumption.
Azzam Haidar, Ahmad Abdelfattah, Mawussi Zounon, Panruo Wu, Srikara Pranesh, Stanimire Tomov and Jack Dongarra
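The solvers described above pair a cheap low-precision factorization with refinement in higher precision. The sketch below only illustrates the principle: the "low precision" solve is emulated by chopping results to three significant digits, while residuals stay in double precision (the 2x2 Cramer solve and all names are illustrative, not the authors' kernels):

```python
from math import floor, log10

def chop(v, digits=3):
    """Emulate reduced precision by keeping a few significant digits."""
    if v == 0:
        return 0.0
    return round(v, digits - 1 - floor(log10(abs(v))))

def refine_2x2(A, b, rounds=5):
    """Iterative refinement: low-precision solves, double-precision residuals."""
    def solve_lowprec(rhs):
        # the "cheap" solve whose results carry only ~3 digits of accuracy
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        x0 = (rhs[0] * A[1][1] - rhs[1] * A[0][1]) / det
        x1 = (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det
        return [chop(x0), chop(x1)]

    x = solve_lowprec(b)
    for _ in range(rounds):
        # residual r = b - A x, computed in full double precision
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        dx = solve_lowprec(r)          # cheap correction solve
        x = [x[i] + dx[i] for i in range(2)]
    return x
```

Each round shrinks the error by roughly the relative accuracy of the chopped solve, so a handful of cheap solves recovers near-double accuracy; swapping the chopped solve for an FP16 tensor-core factorization is what yields the speed and energy gains reported above.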
386 Design of Parallel BEM Analyses Framework for SIMD Processors [abstract]
Abstract: A software framework titled BEM-BB has been developed to conduct parallel boundary element method (BEM) analyses. By implementing a fundamental solution, or Green's function, which is the most important element of the BEM and depends on the targeted physical phenomenon, users get the benefit of MPI and OpenMP hybrid parallelization with H-matrix approximation provided by the framework. However, the framework does not take into account single instruction multiple data (SIMD) vectorization, which is important for high-performance computing and is supported by the majority of existing processors. Dealing with SIMD vectorization of a user-defined function is difficult because SIMD exploits instruction-level parallelization and is closely associated with the user-defined function. This study describes a conceptual framework for enhancing SIMD vectorization. The new framework was evaluated using two BEM problems of static electric field analysis, with a perfect conductor and with a dielectric, on an Intel Broadwell processor and an Intel Xeon Phi KNL. We observed that the framework provides good vectorization with limited SIMD knowledge. The numerical results illustrate the improved performance of the framework. In particular, perfect conductor analyses using H-matrices achieved performance improvements of 2.22x and 4.33x compared with the original BEM-BB framework on the Broadwell processor and KNL, respectively.
Tetsuya Hoshino, Akihiro Ida, Toshihiro Hanawa and Kengo Nakajima

Simulations of Flow and Transport: Modeling, Algorithms and Computation (SOFTMAC) Session 2

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M3

Chair: Shuyu Sun

94 A new edge stabilization method for the convection-dominated diffusion-convection equations [abstract]
Abstract: We study a new edge stabilization method for the finite element discretization of convection-dominated diffusion-convection equations. In addition to stabilizing the jump of the normal derivatives of the solution across the inter-element faces, we introduce a SUPG/GaLS-like stabilization term on the domain boundary rather than in the interior of the domain. New stabilization parameters are also designed. Stability and error bounds are obtained, and numerical results are presented. Theoretically and numerically, the new method is much better than other edge stabilization methods and is comparable to the SUPG method; in general, the new method is more stable than the SUPG method.
Huoyuan Duan and Yu Wei
259 A multiscale hybrid approach to the behavior prediction of transport networks [abstract]
Abstract: Predicting the behavior of the transport system of a region, and optimizing such a system in connection with data networks, especially for large-scale areas, is a necessary and quite challenging problem that attracts the attention of many researchers [1]. Due to the extensive expansion of cities, the increase in the number of vehicles on the roads and the constant growth of transportation lines of increasing complexity, this task is timely and hard to solve with currently available methods. Empirical correlations or data obtained from observations of some crucial areas of interest are not always enough to predict the parameters of those networks in different states, critical situations or moments of time; therefore numerical simulation seems to be a good choice to predict traffic flow behavior. It should be mentioned that numerical methods used earlier are not always capable of dealing with large-scale problems, due to their dependence on extensive computational resources or their inability to resolve local features of the transport flow and data networks [2]. Existing mathematical models can be divided into two classes: a) local microscopic models, which work on a car-by-car basis [3], and b) continuum-like models, which deal with the entire traffic flow and operate with averaged quantities [4]. The first class can resolve interactions between different vehicles, and data transfer between cars or between a car and the road infrastructure, thus providing detailed information in critical areas (crossroads, junctions, etc.), but requires a significant amount of computational resources as the simulated domain grows. The continuum approach, on the other hand, is much less resource demanding and suitable for simulating large-scale road elements (long stretches of road, for example), but unable to resolve complex parts of the transportation system without additional knowledge about the traffic flow in those regions obtained a priori.
The presented work focuses on overcoming the deficiencies of both classes by proposing a hybrid modeling approach that utilizes both types of methods at different scales. At the local (small) scale, microscopic models are used to obtain the distribution of traffic parameters at particular road elements and to provide network data distribution between road users. An integration procedure is performed for car parameters such as density and velocity, and the averaged values are substituted into a continuum-like model based on the hydrodynamic approach. That part of the algorithm simulates the entire transport network at macro scales without a detailed description of local elements, and the output values obtained at that step are used as input values for the microscopic model. An iterative technique is therefore applied to obtain the developed transport flow for the entire region. In parallel, the traffic flow information will be used to predict the load of the data networks used for communication between vehicles and road infrastructure, and an evaluation of data payload optimization will be conducted to provide sufficient throughput and reliability of such networks. The algorithm is implemented in a high-level programming language, and initial simulations for testing and validation will be conducted. These simulations will be compared with experimental data obtained by means of the experimental apparatus prepared by the research group from SPbSPU. The paper is presented within the framework of project No. 18-07-00430, supported by the Russian Foundation for Basic Research. 1. Xinkai Wu, Henry X. Liu, Using high-resolution event-based data for traffic modeling and control: An overview, Transportation Research Part C, 2014, Vol. 42, P. 28-43. 2. M. Kontorinaki, A. Spiliopoulou, C. Roncoli, M. Papageorgiou, First-order traffic flow models incorporating capacity drop: Overview and real-data validation, Transportation Research Part B, 2017, Vol. 106, P. 52-75. 3. K. Nagel, M. Schreckenberg, A cellular automaton model for freeway traffic, J. Phys. I France, 1992, Vol. 2, P. 2221-2229. 4. C. Wagner, A Navier-Stokes-like traffic model, Physica A, 1997, Vol. 245, P. 124-138.
Alexander Chernyshev, Leonid Kurochkin, Vadim Glazunov, Mikhail Kurochkin, Mikhail Chuvatov and Maksim Sharagin
177 Symmetric Sweeping Algorithms for Intersections of Two Quadrilateral Mesh [abstract]
Abstract: A conservative remapping scheme often requires intersections between two meshes and a reconstruction scheme on the old cells (Lagrangian mesh). Computing the exact overlaps is complicated even in the simplest case. In this paper, we propose a method to calculate the intersections of two admissible general quadrilateral meshes of the same logical structure in a planar domain. The quadrilateral polygon intersection problem is reduced to the problem of how an edge in the new mesh intersects a local frame consisting of at most 7 connected edges in the old mesh. As such, the locality of the method is preserved. An alternating direction technique is applied to reduce the dimension of the search space; we call the method a symmetric sweeping algorithm. It reduces the more than 256 possible intersections between a new cell and the old mesh to 34 (17 when considering symmetry) programmable intersections between an edge and a local frame, whenever the intersection between the old and new cells does not degenerate. In addition, we show how the computational cost depends on the underlying problem in terms of singular intersection points. A simple and detailed classification of the types of overlaps is presented; according to this classification, degeneracy of an intersection can be easily identified.
Xihua Xu and Shengxin Zhu
222 A Two-field Finite Element Solver for Poroelasticity on Quadrilateral Meshes [abstract]
Abstract: This paper presents a finite element solver for linear poroelasticity problems on quadrilateral meshes based on the displacement-pressure two-field model. This new solver combines the Bernardi-Raugel element for linear elasticity and a weak Galerkin element for Darcy flow through implicit Euler temporal discretization. The solver does not use any penalty factor and has fewer degrees of freedom than other existing methods. The solver is free of nonphysical pressure oscillations, as demonstrated by numerical experiments on two widely tested benchmarks. Extension to other types of meshes in two and three dimensions is also discussed.
Graham Harper, Jiangguo Liu, Simon Tavener and Zhuoran Wang
208 Preprocessing parallelization for the ALT-algorithm [abstract]
Abstract: In this paper, we improve the preprocessing phase of the ALT algorithm through parallelization. ALT is a preprocessing-based, goal-directed speed-up technique that uses A* (A star), Landmarks and the Triangle inequality, and allows fast computation of shortest paths (SP) in large-scale networks. Although faster techniques such as arc-flags, SHARC, Contraction Hierarchies and Highway Hierarchies already exist, ALT is usually combined with these faster algorithms to take advantage of its goal-directed search, further reducing the SP search time and search space. However, ALT relies on landmarks, and choosing these landmarks optimally is NP-hard; hence, no efficient exact solution exists. Since landmark selection relies on constructive heuristics, and the current SP search speed-up is inversely proportional to landmark generation time, we propose a parallelization technique which cuts the landmark generation time significantly while increasing its effectiveness.
Genaro Jr Peque, Junji Urata and Takamasa Iryo
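The preprocessing the authors parallelize consists of one shortest-path tree per landmark; at query time, the triangle inequality turns those trees into an admissible A* heuristic. A minimal sketch (graph representation and names are illustrative, not the paper's code):

```python
import heapq

def dijkstra(graph, src):
    """One shortest-path tree. ALT preprocessing runs this once per landmark;
    the runs are independent of each other, hence easy to parallelize."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def alt_heuristic(landmark_dists, v, t):
    """Admissible A* lower bound: d(v,t) >= |d(L,t) - d(L,v)| for each landmark L,
    by the triangle inequality."""
    return max(abs(dL[t] - dL[v]) for dL in landmark_dists)
```

More (and better-placed) landmarks tighten the bound, which is why landmark selection quality directly drives the SP search speed-up mentioned above.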

Architecture, Languages, Compilation and Hardware support for Emerging ManYcore systems (ALCHEMY) Session 1

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M4

Chair: Stephane Louise

405 Trends in programming Many-Core System [abstract]
Abstract: The last ten years saw the emergence of, and the transition to, many-core systems: while multi-core systems can usually be defined (roughly speaking) as systems with a single bus as the means of communication between the different execution cores and between the cores and the memory subsystem, many-core systems have several communication buses, organized as Networks on Chip (NoC), as a natural consequence of the growing number of cores. Another trend is the appearance of heterogeneous execution cores, where the performance of a given computation depends strongly on the type of processing and even on the types of data to process. While these architectures can theoretically provide a large acceleration factor or a significant reduction in power consumption, programming them has always been the toughest challenge. The implementation of the architecture is an important factor in letting the circulation of data and its processing flow without impediments, but in this presentation we will focus on the major aspects of the programming paradigms and where the future of these techniques could lead. We will present the main approaches, including models of computation, programming models, runtime generation, JIT generation and some of the other emerging trends.
Stephane Louise
338 Architecture Emulation and Simulation of Future Many-Core Epiphany RISC Array Processors [abstract]
Abstract: The Adapteva Epiphany many-core architecture comprises a scalable 2D mesh Network-on-Chip (NoC) of low-power RISC cores with minimal uncore functionality. The Epiphany architecture has demonstrated significantly higher power-efficiency compared with other more conventional general-purpose floating-point processors. The original 32-bit architecture has been updated to create a 1,024-core 64-bit processor recently fabricated using a 16-nm process. We present here our recent work in developing an emulation and simulation capability for future many-core processors based on the Epiphany architecture. We have developed an Epiphany system on a chip (SoC) device emulator that can be installed as a virtual device on an ordinary x86 platform and utilized with the existing software stack used to support physical devices, thus creating a seamless software development environment capable of targeting new processor designs just as they would be interfaced on a real platform. These virtual Epiphany devices can be used for research in the area of many-core RISC array processors in general. We also report on a simulation framework for software development and testing on large-scale systems based on Epiphany RISC array processors.
David Richie and James Ross
170 Automatic mapping for OpenCL-Programs on CPU/GPU Heterogeneous Platforms [abstract]
Abstract: Heterogeneous computing systems with multiple CPUs and GPUs are increasingly popular. Today, heterogeneous platforms are deployed in many setups, ranging from low-power mobile systems to high-performance computing systems. Such platforms are usually programmed using OpenCL, which allows the same program to be executed on different types of devices. Nevertheless, programming such platforms is a challenging job for most non-expert programmers. To achieve efficient application runtime on heterogeneous platforms, programmers require an efficient workload distribution over the available compute devices. Deciding how the application should be mapped is non-trivial. In this paper, we present a new approach to building accurate predictive models for OpenCL programs. We use a machine learning-based predictive model to estimate which device allows the best application speed-up. With the LLVM compiler framework, we develop a tool for dynamic code-feature extraction. We demonstrate the effectiveness of our novel approach by applying it to different prediction schemes. Using our dynamic feature extraction techniques, we are able to build accurate predictive models, with accuracies varying between 77% and 90%, depending on the prediction mechanism and the scenario. We tested our method on an extensive set of parallel applications. One of our findings is that dynamically extracted code features improve the accuracy of the predictive models by 6.1% on average (maximum 9.5%) compared to the state of the art.
Konrad Moren and Diana Goehringer
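The predictive models above are trained on extracted code features; as a toy illustration of the idea only (the authors use richer learners and real LLVM-extracted features), even a nearest-neighbour rule over two hypothetical features captures the shape of the mapping decision:

```python
def predict_device(features, training_set):
    """1-nearest-neighbour sketch: label a new kernel (CPU or GPU) with
    the device of the most similar known kernel in feature space."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(training_set, key=lambda item: sqdist(item[0], features))
    return best[1]

# Hypothetical training data: (parallel work, coalescing ratio) -> best device
training = [((10.0, 0.9), "GPU"), ((2.0, 0.1), "CPU")]
```

A real model would replace the two made-up features with the dynamically extracted code features the paper describes, and the 1-NN rule with the evaluated prediction schemes.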

Teaching Computational Science (WTCS) Session 2

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M5

Chair: Angela B. Shiflet

164 Introductory Parallel Programming and Electronics [abstract]
Abstract: Introductory courses in computer science typically do not teach students much about computer hardware. Many courses in robotics have a tight integration of software and hardware leading to high performance at the cost of lower flexibility in the functionality of the programs. The emergence of low cost processors and rapid prototyping electronic platforms allows for a rectification of this situation. Students can be introduced to both hardware and software for parallel scientific computing by classroom co-design using simple low power microcontrollers such as the Atmel AVR used in Arduino. Experiences developing such a platform for calculating Pi using a Monte Carlo method, doing matrix multiplication and implementing a lattice Boltzmann solver are discussed.
Hannes Haljaste, Liem Radita Tapaning Hesti and Benson Muite
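The Monte Carlo exercise mentioned above estimates Pi from the fraction of random points falling inside the unit quarter-circle; the classroom version targets AVR microcontrollers, so this Python sketch shows only the algorithm, not the course code:

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo Pi: the quarter unit circle has area pi/4, so
    pi ~ 4 * (points inside the circle) / (points drawn)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

Each sample is independent, which is what makes this a natural first parallel program: every core or microcontroller draws its own points, and only the two counters need to be combined at the end.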
108 Interconnected Enterprise Systems − A Call for New Teaching Approaches [abstract]
Abstract: Enterprise Resource Planning Systems (ERPS) have continually extended their scope over the last decades. The evolution has currently reached a stage where ERPS support the entire value chain of an enterprise. This study deals with the rise of a new era, in which ERPS are transformed into so-called interconnected Enterprise Systems (iES), which have a strong outside orientation and provide a networked ecosystem open to human and technological actors (e.g. social media, Internet of Things). Higher education institutions need to prepare their students to understand this shift and to transfer its implications to today’s business world. Based on literature and applied learning scenarios, the study shows existing approaches to the use of ERPS in teaching and elaborates on whether and how they can still be used. In addition, implications are outlined and the necessary changes towards new teaching approaches for iES are proposed.
Bettina Schneider, Petra Maria Asprion and Frank Grimberg
163 Collaborative Project-Based Learning Environment and Model-Based Learning Assessment for Computational and Data Science Courses [abstract]
Abstract: To prepare future scientists, engineers, and technicians to harness big data and solve complex problems, undergraduates in STEM (Science, Technology, Engineering, and Mathematics) need to become competent in conducting basic data-enabled research, interpreting data, and applying findings across multiple disciplinary contexts. Integrating Computational and Data Science and Engineering (CDSE) coursework into the undergraduate curriculum that embeds authentic research experiences and follows a Course-based Undergraduate Research Experience (CURE) pedagogical model can address these needs. Collaborative project-based learning (CPBL) is identified as a practical approach to implement CURE and build student proficiency in these vital areas. This paper addresses the collaborative problem-solving environment and model-based learning assessment for two blended learning CDSE courses that we delivered to the students across multiple universities.
Hong Liu, Matthew Ikle and Jayathi Raghavan

Agent-Based Simulations, Adaptive Algorithms and Solvers (ABS-AAS) Session 1

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M6

Chair: Maciej Paszynski

19 A Fast 1.5D Multi-scale Finite Element Method for Borehole Resistivity Measurements [abstract]
Abstract: Logging-While-Drilling (LWD) devices are often used for geosteering applications. They interpret (invert) measurements in real time to determine the well trajectory. To perform the inversion, we require a high-performance forward solver since: (a) we often need to invert for thousands of logging positions in real time, and (b) we need to solve a considerable number of forward problems. In these applications, it is common practice to approximate the domain with a sequence of 1D models. In a 1D model, the material properties vary only along one direction (the z-direction). For such 1D models, we reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of Ordinary Differential Equations (ODEs): (a) analytically, which leads to a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are widely used due to their high performance. However, they have major limitations, namely: • To today's knowledge, the analytical solution of the aforementioned system of ODEs is available only for piecewise-constant resistivity values. • To perform geosteering, we need to invert the measurements with respect to some inversion variables using a gradient-based inversion method. For resistivity measurements, these inversion variables are often the constant resistivity values of each layer and the bed boundary positions. However, the analytical derivatives for cross-bedded formations and the analytical derivatives of the measurements with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome the above limitations by using an efficient multi-scale finite element method to solve the system of ODEs corresponding to each Hankel mode.
To do so, we divide our computations into two parts, namely: • Computations which are independent of logging positions and consist of computing the multi-scale basis functions. Hence, we precompute them once, and we use them for all logging positions. • Computations which depend upon the logging positions. Using aforementioned method, we can: (a) consider arbitrary resistivity distributions which depend upon one direction, and (b) easily and rapidly compute the derivatives with respect to any inversion variable at almost no additional cost using an adjoint state method. Although the proposed method is slower than semi-analytic ones, it is highly efficient and more flexible when computing the derivatives. In addition, the proposed method is perfectly parallelizable with respect to Hankel modes and multi-scale basis functions.
Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga and Judith Muñoz-Matute
140 Hybrid Swarm and Agent-based Evolutionary Optimization [abstract]
Abstract: In this paper a novel hybridization of an agent-based evolutionary system (EMAS) is presented. This method uses PSO to upgrade certain agents living in the EMAS population, thus serving a role similar to the local-search methods already used in EMAS (in memetic fashion). The gathered and presented results prove the applicability of this hybrid on a selection of 500-dimensional benchmark functions.
Leszek Placzkiewicz, Marcin Sendera, Adam Szlachta, Mateusz Paciorek, Aleksander Byrski, Marek Kisiel-Dorohinicki and Mateusz Godzik
200 Data-driven Agent-based Simulation for Pedestrian Capacity Analysis [abstract]
Abstract: In this paper, an agent-based data-driven model is developed that focuses on the path-planning layer, namely origin/destination popularities and route choice. This model improves on existing mathematical modeling and pattern recognition approaches. The paths and origins/destinations are extracted from a video, and the parameters are calibrated from a density map generated from the same video. We validated the model against the observed path probabilities and densities, and showed that it generates better results than previous approaches. To demonstrate the usefulness of the approach, we also carried out a case study on capacity analysis of a building layout based on video data.
Sing Kuang Tan, Nan Hu and Wentong Cai

Multiscale Modelling and Simulation (MMS) Session 2

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M7

Chair: Derek Groen

207 A Versatile Hybrid Agent-Based, Particle and Partial Differential Equations Method to Analyze Vascular Adaptation [abstract]
Abstract: Failure of peripheral endovascular interventions occurs at the intersection of vascular biology, biomechanics, and clinical decision making. It is our hypothesis that most endovascular treatments share the same driving mechanisms during post-surgical follow-up, and accordingly, a deep understanding of them is mandatory in order to improve current surgical outcomes. This work presents a versatile model of vascular adaptation following vein graft bypass intervention to treat arterial occlusions. The goal is to improve the computational models developed so far by effectively modeling the cell-cell and cell-membrane interactions that are recognized to be pivotal elements in the re-organization of the graft's structure. A numerical method is designed here to combine the best features of an Agent-Based Model and a Partial Differential Equations model in order to get as close as possible to the physiological reality while keeping the implementation both simple and general.
Marc Garbey, Stefano Casarin and Scott Berceli
279 Systematic Identification and Evaluation of Antiviral Drugs against the Influenza Virus through Large-Scale Network Simulations [abstract]
Abstract: Influenza, as an emerging infectious disease, poses a formidable challenge to global health due to the lack of effective antivirals and continued drug resistance. Traditional antiviral drug discovery targeting viral surface proteins is susceptible to drug resistance due to the selective pressure driven by high antigenic mutation rates. The influenza virus is a host-obligate parasite that activates numerous signaling, regulatory and metabolic pathways at both the molecular and cellular levels, as the host attempts to fight the infection while the virus strives not only to survive but also to replicate in a highly efficient manner. In an effort to understand this complex interplay, we developed a comprehensive model of the influenza virus interacting with the host epithelial cell. Commonly activated host signaling pathways such as Protein Kinase C (PKC), Mitogen-Activated Protein Kinase (MAPK), and PI3K/AKT were modeled in detail, enabling in silico simulations to determine their effects on viral internalization and replication. Perturbation analysis of the virus-host interactome revealed several previously unknown host targets. Our multiscale model of virus-host interactions has the potential to enable the development of more sophisticated, and potentially more efficient, drugs.
Alex Madrahimov, Tom Helikar and Guoqing Lu
332 Development of a multiscale simulation approach for forced migration [abstract]
Abstract: In this work I reflect on the development of a multiscale simulation approach for forced migration, and present two prototypes which extend the existing Flee agent-based modelling code. These include one extension for parallelizing Flee and one for multiscale coupling. I provide an overview of both extensions and present performance and scalability results of these implementations in a desktop environment.
Derek Groen

Solving Problems with Uncertainties (SPU) Session 2

Time and Date: 15:25 - 17:05 on 12th June 2018

Room: M8

Chair: Vassil Alexandrov

62 Modification Of Interval Arithmetic For Modelling And Solving Uncertainly Defined Problems By Interval Parametric Integral Equations System [abstract]
Abstract: In this paper we present a concept for modeling and solving uncertainly defined boundary value problems described by the 2D Laplace equation. We define the uncertainty of the input data (the shape of the boundary and the boundary conditions) using interval numbers. Uncertainty can be considered separately for selected input data or simultaneously for all of them. We propose an interval parametric integral equations system (IPIES) to solve problems defined in this way. We obtain IPIES as a result of modifying PIES, which was previously proposed for precisely (exactly) defined problems. For this purpose we have to incorporate the uncertainly defined input data into the mathematical formalism of PIES. We use a pseudo-spectral method to solve IPIES numerically and propose a modification of directed interval arithmetic to obtain interval solutions. We present the strategy on examples of potential problems. To verify the correctness of the method, we compare the obtained interval solutions with analytical ones. For this purpose, we obtain interval analytical solutions using both classical and directed interval arithmetic.
Eugeniusz Zieniuk, Marta Kapturczak and Andrzej Kużelewski
276 A Hybrid Heuristic for the Probabilistic Capacitated Vehicle Routing Problem with Two-Dimensional Loading Constraints [abstract]
Abstract: The Probabilistic Capacitated Vehicle Routing Problem (PCVRP) is a generalization of the classical Capacitated Vehicle Routing Problem (CVRP). The main difference is the stochastic presence of the customers: the number of customers to be visited each time is a random variable, and each customer is associated with a given probability of presence. We consider a special case of the PCVRP, in which a fleet of identical vehicles must serve customers, each with a given demand consisting of a set of rectangular items. The vehicles have a two-dimensional loading surface and a maximum capacity. The resolution of the problem consists in finding an a priori route visiting all customers which minimizes the expected length over all possibilities. We propose a hybrid heuristic, based on a branch-and-bound algorithm, for the resolution of the problem. The effectiveness of the approach is shown by means of computational results.
Soumaya Sassi Mahfoudh and Monia Bellalouna
37 A human-inspired model to represent uncertain knowledge in the Semantic Web [abstract]
Abstract: One of the most evident and well-known limitations of Semantic Web technology is its lack of capability to deal with uncertain knowledge. As uncertainty is often part of the knowledge itself or can be induced by external factors, such a limitation may be a serious barrier for some practical applications. A number of approaches have been proposed to extend its capabilities in terms of uncertainty representation; some of them are purely theoretical or incompatible with current semantic technology, while others focus exclusively on data spaces in which uncertainty is or can be quantified. Human-inspired models have been adopted in the context of different disciplines and domains (e.g. robotics and human-machine interaction) and could be a novel, still largely unexplored, pathway to represent uncertain knowledge in the Semantic Web. Human-inspired models are expected to address uncertainty in a way similar to how humans do. Within this paper, we (i) briefly point out the limitations of Semantic Web technology in terms of uncertainty representation, (ii) discuss the potential of human-inspired solutions to represent uncertain knowledge in the Semantic Web, (iii) present a human-inspired model and (iv) a reference architecture for implementations in the context of the legacy technology.
Salvatore Flavio Pileggi
392 Novel Monte Carlo Algorithm for Solving Singular Linear Systems [abstract]
Abstract: A new Monte Carlo algorithm for solving singular linear systems of equations is introduced. In fact, we consider the convergence of the resolvent operator R and construct an algorithm based on the mapping of the spectral parameter. The approach is applied to systems with singular matrices, for which we show that fairly high accuracy can be obtained.
Behrouz Fathi Vajargah, Vassil Alexandrov, Samaneh Javadi and Ali Hadian