
ICCS 2018 Main Track (MT) Session 3

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M1

Chair: Panruo Wu

152 Hybrid Genetic Algorithm for an On-Demand First Mile Transit System using Electric Vehicles [abstract]
Abstract: First/last mile gaps are a significant hurdle in the large-scale adoption of public transit systems. Recently, demand-responsive transit systems have emerged as a preferable solution to the first/last mile problem. However, existing work requires significant computation time or advance bookings. Hence, we propose a public transit system linking neighborhoods to a rapid transit node using a fleet of demand-responsive electric vehicles, which reacts to passenger demand in real time. Initially, the system is modeled using an optimal mathematical formulation. Owing to the complexity of the model, we then propose a hybrid genetic algorithm that computes results in real time with an average accuracy of 98%. Further, results show that the proposed system reduces travel time by up to 19% compared to existing transit services.
Thilina Perera, Alok Prakash, Chathura Nagoda Gamage and Thambipillai Srikanthan
179 Comprehensive Learning Gene Expression Programming for Automatic Implicit Equation Discovery [abstract]
Abstract: Automatic Implicit Equation Discovery (AIED), which aims to automatically find implicit equations to fit observed data, is a promising and challenging research topic in data mining and knowledge discovery. Existing methods for AIED are designed based on calculating derivatives, which incurs high computational cost and becomes invalid when the problem at hand contains sparse training data. To tackle these drawbacks, this paper proposes a new mechanism named the Comprehensive Learning Fitness Evaluation Mechanism (CL-FEM). The mechanism is capable of learning knowledge from both the given training data and the disturbed data generated by adding stochastic noise to the training data, to measure the validity and fitting error of a given equation model. The proposed CL-FEM is further integrated with Self-Learning Gene Expression Programming (SL-GEP), forming Comprehensive Learning Gene Expression Programming (CL-GEP) to solve AIED problems. The proposed CL-GEP is tested on several benchmark problems of different scales and difficulties, and the experimental results demonstrate that CL-GEP can offer very promising performance.
Yongliang Chen and Jinghui Zhong
203 Multi-population Genetic Algorithm for Cardinality Constrained Portfolio Selection Problems [abstract]
Abstract: Portfolio Selection (PS) is recognized as one of the most important and challenging problems in financial engineering. The aim of PS is to distribute a given amount of investment funds across a set of assets in such a way that the return is maximised and the risk is minimised. To solve PS more effectively and more efficiently, this paper introduces a Multi-population Genetic Algorithm (MPGA) methodology. The proposed MPGA decomposes a large population into multiple populations to explore and exploit the search space simultaneously. These populations evolve independently during the evolutionary learning process, yet they periodically exchange individuals so that promising genetic material can be shared between populations. The proposed MPGA method was evaluated on the standard PS benchmark instances. The experimental results show that MPGA can find better investment strategies than state-of-the-art portfolio selection methods. In addition, the search process of MPGA is more efficient than these existing methods, requiring significantly less computation.
Nasser Sabar, Ayad Turky and Andy Song
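The multi-population scheme sketched in the abstract above (independent populations with periodic exchange of individuals) can be illustrated in a few lines of Python. The operators, parameters and ring-migration policy below are illustrative choices, not the configuration used in the paper:

```python
import random

def evolve_multi_population(fitness, dim, n_pops=4, pop_size=20,
                            generations=50, migrate_every=10, seed=0):
    # Several populations evolve independently; every migrate_every
    # generations the best individual of each population replaces the
    # worst individual of the next one (ring migration).
    rng = random.Random(seed)
    pops = [[[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(n_pops)]
    for gen in range(generations):
        for p in range(n_pops):
            scored = sorted(pops[p], key=fitness)
            new_pop = [ind[:] for ind in scored[:2]]          # elitism
            while len(new_pop) < pop_size:
                a, b = rng.sample(scored[:pop_size // 2], 2)  # truncation selection
                cut = rng.randrange(1, dim) if dim > 1 else 0
                child = a[:cut] + b[cut:]                     # one-point crossover
                if rng.random() < 0.2:                        # Gaussian mutation
                    child[rng.randrange(dim)] += rng.gauss(0, 0.1)
                new_pop.append(child)
            pops[p] = new_pop
        if (gen + 1) % migrate_every == 0:
            bests = [min(pop, key=fitness)[:] for pop in pops]
            for p in range(n_pops):
                target = pops[(p + 1) % n_pops]
                worst = max(range(pop_size), key=lambda i: fitness(target[i]))
                target[worst] = bests[p]
    return min((ind for pop in pops for ind in pop), key=fitness)

# Toy usage: minimize the 5-dimensional sphere function.
best = evolve_multi_population(lambda x: sum(v * v for v in x), dim=5)
```

A real PS solver would encode asset weights with cardinality constraints in the individuals; the sketch only shows how the populations interact.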
343 Recognition and Classification of Rotorcraft by Micro-Doppler Signatures using Deep Learning [abstract]
Abstract: Detection and classification of rotorcraft targets are of great significance not only in civil fields but also in defense. However, up to now, it has remained difficult for traditional radar signal processing methods to detect and distinguish rotorcraft targets from various types of moving objects. Moreover, it is even more challenging to classify different types of helicopters. With the development of high-precision radar, classification of moving targets by micro-Doppler features has become a promising research topic in the modern signal processing field. In this paper, we propose to use deep convolutional neural networks (DCNNs) for rotorcraft detection and helicopter classification based on Doppler radar signals. We apply DCNNs directly to raw micro-Doppler spectrograms for rotorcraft detection and classification. The proposed DCNNs can learn the features automatically from the micro-Doppler signals without introducing any domain background knowledge. Simulated data are used in the experiments. The experimental results show that the proposed DCNNs achieve superior accuracy in both rotorcraft detection and helicopter classification, outperforming traditional radar signal processing methods.
Ying Liu and Jinyi Liu
360 Data Allocation based on Evolutionary Data Popularity Clustering [abstract]
Abstract: This study is motivated by the high-energy physics experiment ATLAS, one of the four major experiments at the Large Hadron Collider at CERN. ATLAS comprises 130 data centers worldwide with datasets in the Petabyte range. In the processing of data across the grid, transfer delays and the subsequent performance loss emerged as an issue. The two major costs are the waiting time until input data is ready and the job computation time. In the ATLAS workflows, the input to computational jobs is based on grouped datasets. The waiting time stems mainly from WAN transfers between data centers when job properties require execution at one data center but the dataset is distributed among multiple data centers. The proposed novel data allocation algorithm redistributes the constituent files of datasets such that job efficiency is increased in terms of a cost metric. An evolutionary algorithm is proposed that addresses the data allocation problem in a network based on data popularity and clustering. The expected number of file transfers per job is used as the target metric, and it is shown that job waiting times can be decreased by faster input data readiness.
Ralf Vamosi, Mario Lassnig and Erich Schikuta
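The core idea of the abstract above, an evolutionary search over file-to-data-center assignments scored by expected file transfers, can be sketched as a toy hill climber. The cost model and all names below are illustrative assumptions, far simpler than the ATLAS setting:

```python
import random

def evolve_allocation(n_files, n_centers, jobs, gens=200, seed=1):
    # A candidate solution assigns each file to a data center. A job is
    # assumed to run at the center holding most of its input files, so its
    # cost is the number of files that must be transferred over the WAN.
    rng = random.Random(seed)

    def cost(alloc):
        total = 0
        for files in jobs:                      # each job: a list of file ids
            counts = {}
            for f in files:
                counts[alloc[f]] = counts.get(alloc[f], 0) + 1
            total += len(files) - max(counts.values())  # remote accesses
        return total

    # (1+1)-style evolutionary loop: mutate one gene, keep if not worse.
    best = [rng.randrange(n_centers) for _ in range(n_files)]
    for _ in range(gens):
        child = best[:]
        child[rng.randrange(n_files)] = rng.randrange(n_centers)
        if cost(child) <= cost(best):
            best = child
    return best, cost(best)

# Toy usage: six files, two centers, three jobs reading overlapping files.
jobs = [[0, 1, 2], [3, 4, 5], [0, 3]]
best, c = evolve_allocation(n_files=6, n_centers=2, jobs=jobs)
```

A realistic version would add storage-capacity constraints and weight jobs by data popularity, as the paper's clustering step does.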

ICCS 2018 Main Track (MT) Session 9

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M2

Chair: Denis Nasonov

129 Global Simulation of Planetary Rings on Sunway TaihuLight [abstract]
Abstract: In this paper, we report the implementation and measured performance of a global simulation of planetary rings on Sunway TaihuLight. The basic algorithm is the Barnes-Hut tree, but we have made a number of changes to achieve good performance for extremely large simulations on machines with an extremely large number of cores. The measured performance is around 35% of the theoretical peak. The main limitation comes from the performance of the interaction calculation kernel itself, which is currently around 50%.
Masaki Iwasawa, Long Wang, Keigo Nitadori, Daisuke Namekata, Miyuki Tsubouchi, Junichiro Makino, Zhao Liu, Haohuan Fu, Guangwen Yang and Takayuki Muranushi
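The Barnes-Hut tree mentioned above approximates the force from a distant group of particles by its centre of mass whenever the cell-size-to-distance ratio s/d falls below an opening angle theta. A minimal 2D sketch follows (unit masses and distinct particle positions assumed; this illustrates the idea only, not the paper's highly tuned implementation):

```python
import math

def bh_accel(particles, target, theta=0.5, G=1.0):
    # Recursively subdivide space; whenever a cell of size s at distance d
    # from the target satisfies s/d < theta, its particles are replaced by
    # their centre of mass (unit masses assumed).
    def accel(ps, x0, y0, s):
        cx = sum(p[0] for p in ps) / len(ps)
        cy = sum(p[1] for p in ps) / len(ps)
        dx, dy = cx - target[0], cy - target[1]
        d = math.hypot(dx, dy)
        if len(ps) == 1 or (d > 0 and s / d < theta):
            if d < 1e-12:                      # the target particle itself
                return 0.0, 0.0
            f = G * len(ps) / d ** 3           # monopole approximation
            return f * dx, f * dy
        h = s / 2
        quads = {}
        for p in ps:                           # partition into four quadrants
            quads.setdefault((p[0] >= x0 + h, p[1] >= y0 + h), []).append(p)
        ax = ay = 0.0
        for (bx, by), sub in quads.items():
            sx, sy = accel(sub, x0 + (h if bx else 0), y0 + (h if by else 0), h)
            ax += sx
            ay += sy
        return ax, ay

    xs = [p[0] for p in particles]
    ys = [p[1] for p in particles]
    size = max(max(xs) - min(xs), max(ys) - min(ys)) or 1e-12
    return accel(particles, min(xs), min(ys), size)

# Toy usage: acceleration at a probe point due to five unit-mass particles.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (3.0, 2.0), (2.0, 3.0)]
ax, ay = bh_accel(pts, (0.5, 0.5))
```

With theta = 0 the criterion never accepts a multi-particle cell, so the recursion reduces to an exact direct summation, which is a handy correctness check.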
350 Parallel Performance Analysis of Bacterial Biofilm Simulation Models [abstract]
Abstract: Modelling and simulation of bacterial biofilms is a computationally expensive process necessitating the use of parallel computing. Fluid dynamics and advection-consumption models can be decoupled and solved to handle the fluid-solute-bacterial interactions. Data exchange between the two processes adds to the communication overhead. The heterogeneous distribution of bacteria within the simulation domain further leads to non-uniform load distribution in the parallel system. We study the effect of load imbalance and communication overheads on the overall performance of the simulation at different stages of biofilm growth. We develop a model to optimize the parallelization procedure for computing the growth dynamics of bacterial biofilms.
Sheraton M V and Peter M.A. Sloot
240 RT-DBSCAN: Real-time Parallel Clustering of Spatio-Temporal Data using Spark-Streaming [abstract]
Abstract: Clustering algorithms are essential for many big data applications involving point-based data, e.g. user-generated social media data from platforms such as Twitter. One of the most common approaches for clustering is DBSCAN. However, DBSCAN has numerous limitations. The algorithm itself is based on traversing the whole dataset and identifying the neighbours around each point. However, this approach is not suitable when data is created and streamed in real time; instead, a more dynamic approach is required. This paper presents a new approach, RT-DBSCAN, that supports real-time clustering of data based on continuous cluster checkpointing. This approach overcomes many of the issues of existing clustering algorithms such as DBSCAN. The platform is realised using Apache Spark running over large-scale Cloud resources and container-based technologies to support scaling. We benchmark the work using streamed social media content (Twitter) and show the advantages in performance and flexibility of RT-DBSCAN over other clustering approaches.
Yikai Gong, Richard Sinnott and Paul Rimba
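For reference, classic DBSCAN, whose whole-dataset neighbour traversal is the limitation the abstract above addresses, can be written compactly. This is a plain O(n^2) sketch of the textbook algorithm, not RT-DBSCAN itself:

```python
def dbscan(points, eps, min_pts):
    # Classic DBSCAN: a core point has at least min_pts neighbours within
    # eps; clusters grow by expanding outward from core points; label -1
    # marks noise. The O(n^2) neighbour search below scans the whole
    # dataset, which is exactly what a streaming variant must avoid.
    def neighbours(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps * eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise; may later become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point of this cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbours(j)
            if len(nj) >= min_pts:         # j is itself a core point: expand
                queue.extend(nj)
    return labels

# Toy usage: two tight blobs plus one outlier.
labels = dbscan([(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
                 (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1),
                 (10, 10)], eps=0.5, min_pts=3)
```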
137 GPU-based implementation of Ptycho-ADMM for high performance X-ray imaging [abstract]
Abstract: X-ray imaging allows biologists to retrieve the atomic arrangement of proteins and gives doctors the capability to view broken bones in full detail. In this context, ptychography has risen as a reference imaging technique. It provides resolutions of one billionth of a meter, a macroscopic field of view, and the capability to retrieve chemical or magnetic contrast, among other features. The goal is to reconstruct a 2D visualization of a sample from a collection of diffraction patterns generated from the interaction of a light source with the sample. The data collected is typically two orders of magnitude bigger than the final reconstructed image, so high-performance solutions are normally desired. One of the latest advances in ptychographic imaging is the development of Ptycho-ADMM, a new ptychography reconstruction algorithm based on the Alternating Direction Method of Multipliers (ADMM). Ptycho-ADMM provides faster convergence and better-quality reconstructions, all while being more resilient to noise in comparison with state-of-the-art methods. The downside of Ptycho-ADMM is that it requires additional computation and a larger memory footprint compared to simpler solutions. In this paper we tackle the computational requirements of Ptycho-ADMM and design the first high-performance multi-GPU solution of the method. We analyze and exploit the parallelism of Ptycho-ADMM to make use of multiple GPU devices. The proposed implementation achieves reconstruction times comparable to other GPU-accelerated high-performance solutions, while providing the enhanced reconstruction quality of the Ptycho-ADMM method.
Pablo Enfedaque, Stefano Marchesini, Hari Krishnan and Huibin Chang

Simulations of Flow and Transport: Modeling, Algorithms and Computation (SOFTMAC) Session 1

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M3

Chair: Shuyu Sun

66 ALE Method for a Rotating Structure Immersed in the Fluid and Its Application to the Artificial Heart Pump in Hemodynamics [abstract]
Abstract: In this paper, we study a dynamic fluid-structure interaction (FSI) problem involving a rotational elastic turbine, modeled by the incompressible fluid model in the fluid domain with the arbitrary Lagrangian-Eulerian (ALE) description and by the St. Venant-Kirchhoff structure model in the structure domain with the Lagrangian description, and its application to a hemodynamic FSI problem involving an artificial heart pump with a rotating rotor. A linearized rotational and deformable structure model is developed for the rotating rotor, and a monolithic mixed ALE finite element method is developed for the hemodynamic FSI system. Numerical simulations are carried out for a hemodynamic FSI model with an artificial heart pump and are validated by comparison with a commercial CFD package on a simplified artificial heart pump.
Pengtao Sun, Wei Leng, Chen-Song Zhang, Rihui Lan and Jinchao Xu
113 Free Surface Flow Simulation of Fish Turning Motion [abstract]
Abstract: In this paper, the influence of the depth of a fish beneath the free surface and of its turning motion is clarified by numerical simulation. The moving-grid finite volume method and the moving computational domain method with a free-surface height function are used as the numerical schemes. First, the analysis is performed while changing the turning radius at a fixed depth, to clarify the influence of the radius. Next, fish turning with the same radius at different depths are analyzed, to clarify the influence of depth. In all cases, the drag coefficient was positive, the side-force coefficient was negative, and the lift coefficient was smaller than the drag coefficient. The results show the following: the smaller the radius of rotation, the greater the lift and side-force coefficients; and the deeper the fish below the free surface, the greater the lift coefficient. The simulations thus clarify how the depth and turning radius of a submerged fish in turning motion influence the flow.
Sadanori Ishihara, Masashi Yamakawa, Takeshi Inomoto and Shinichi Asao
285 In-Bend Pressure Drop and Post-Bend Heat Transfer for a Bend with a Partial Blockage at its Inlet [abstract]
Abstract: The full paper describes a three-part numerical investigation of fluid flow and heat transfer in a bend situation that has not been studied in the past. The investigation is motivated by interest in how downstream fluid-flow and heat transfer processes are affected by upstream flow disturbances. The investigated physical situation is a 90° pipe bend fitted with a wall-adjacent obstruction that partially blocks the flow cross section. The first phase of the work consisted of comparing the results of numerical simulations with experimental data. The second phase of the paper is focused on determining the impact of the inlet flow distribution on the pressure drop in the bend proper and in the attached pipe. Heat transfer in a straight pipe situated downstream of the bend exit is the focus of the third and most significant section of the paper. The heat transfer results are reported in terms of the circumferentially averaged Nusselt number displayed as a function of position along the pipe for Reynolds numbers ranging from 100 to 10,000. Each set of simulations consisted of cases with two different bend radii, each being simulated for six different Reynolds numbers between 100 and 10,000. There were four different sets, ranging from no blockage to as high as 60% blockage, all created using an orifice situated at the same position right before the start of the bend. It was found that the disturbances caused by the blockage significantly enhance the Nusselt number values. As expected, Nusselt numbers in the section of pipe after the bend are higher for higher Reynolds number flows. Nusselt numbers increase non-monotonically with increasing blockage upstream of the flow, possibly due to jet-like flow patterns that develop as a result of increased blockage.
A rather unexpected result is that Nusselt numbers are seemingly more affected by the sharpness of the bends than by the blockage ratio, such that increasing the sharpness of the pipe bend increases the Nusselt number right after the bend more than increasing the blockage ratio does. Another interesting phenomenon demonstrated in these numerical investigations is the existence of plateaus in what was expected to be a monotonic decrease in Nusselt numbers along the straight sections of pipe after the bend, specifically in high Reynolds number flows. Given that this cannot be explained by the existing understanding of heat transfer in pipe bends, experimental verification of this phenomenon would be the logical next step in understanding heat transfer in pipe bends.
Abhimanyu Ghosh, John Gorman, Ephraim Sparrow and Christopher Smith
359 Computational Studies of an Underground Oil Recovery Model [abstract]
Abstract: The modified Buckley-Leverett (MBL) equation describes two-phase flow in porous media, and it is a prototype for modeling the underground oil recovery process. In this paper, we extend the second- and third-order classical central schemes for hyperbolic conservation laws to solve the MBL equation, which is of pseudo-parabolic type. The MBL equation differs from the classical Buckley-Leverett (BL) equation by including a balanced diffusive-dispersive combination. The classical BL equation gives a monotone water saturation profile for any Riemann problem; in contrast, when the dispersive parameter is large enough, the MBL equation delivers non-monotone water saturation profiles for certain Riemann problems, as suggested by experimental observations. Numerical results in this paper confirm the existence of non-monotone water saturation profiles consisting of constant states separated by shocks.
Ying Wang
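For reference, the classical BL equation and the MBL equation with the balanced diffusive-dispersive combination are usually written as follows (our recollection of the standard form, with $f$ the fractional-flow function, $M$ the mobility ratio, $\varepsilon$ the capillarity scale and $\tau$ the dispersive parameter; the paper's exact form may differ):

```latex
u_t + f(u)_x = 0, \qquad
f(u) = \frac{u^2}{u^2 + M(1-u)^2} \quad \text{(classical BL)}
```

```latex
u_t + f(u)_x = \varepsilon\, u_{xx} + \varepsilon^2 \tau\, u_{xxt} \quad \text{(MBL)}
```

Large $\tau$ is the regime in which the non-monotone saturation profiles appear.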
247 Circular Function-Based Gas-kinetic Scheme for Simulation of Viscous Compressible Flows [abstract]
Abstract: A stable gas-kinetic scheme based on a circular function is proposed for the simulation of viscous compressible flows in this paper. The main idea of this scheme is to simplify the integral domain of the Maxwellian distribution function over the phase velocity and phase energy to a modified Maxwellian function, which integrates over the phase velocity only. The modified Maxwellian function can then be degenerated to a circular function under the assumption that all particles are distributed on a circle. Firstly, the RAE2822 airfoil is simulated to validate the accuracy of this scheme. Then the nose part of an aerospace plane model is studied to prove the potential of this scheme in industrial applications. Simulation results show that the method presented in this paper has good computational accuracy and stability.
Zhuxuan Meng, Liming Yang, Donghui Wang, Chang Shu and Weihua Zhang

Computational Optimization, Modelling and Simulation (COMS) Session 3

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M4

Chair: Tiew On Ting

53 Explicit Size-Reduction-Oriented Design of a Compact Microstrip Rat-Race Coupler Using Surrogate-Based Optimization Methods [abstract]
Abstract: In this paper, an explicit size reduction of a compact rat-race coupler implemented in microstrip technology is considered. The coupler circuit features a simple topology with a densely arranged layout that exploits a combination of high- and low-impedance transmission line sections. All relevant dimensions of the structure are simultaneously optimized in order to explicitly reduce the coupler size while maintaining equal power split at the operating frequency of 1 GHz and sufficient bandwidth for return loss and isolation characteristics. Acceptable levels of electrical performance are ensured by using a penalty function approach. Two designs with footprints of 350 mm² and 360 mm² have been designed and experimentally validated. The latter structure is characterized by 27% bandwidth. For the sake of computational efficiency, surrogate-based optimization principles are utilized. In particular, we employ an iterative construction and re-optimization of the surrogate model involving a suitably corrected low-fidelity representation of the coupler structure. This permits rapid optimization at a cost corresponding to a handful of evaluations of the high-fidelity coupler model.
Slawomir Koziel, Adrian Bekasiewicz, Leifur Leifsson, Yonatan Tesfahunegn and Xiaosong Du
88 Stochastic-Expansions-Based MAPOD Analysis of the Spherically-Void-Defect Benchmark Problem [abstract]
Abstract: Probability of detection (POD) is used for the reliability analysis of nondestructive testing systems. POD is determined by experiments, but it can be enhanced with information from physics-based simulation models using model-assisted probability of detection (MAPOD) methods. Due to time-consuming evaluations of the physics-based models and a large random input parameter space, MAPOD analysis can be impractical to complete in a timely manner. In this paper, we use stochastic polynomial chaos expansions (PCE) in place of the true model to accelerate the MAPOD analysis. In particular, we use the state-of-the-art least-angle regression method and a hyperbolic sparse technique to construct the PCE. The proposed method is demonstrated on a spherically-void-defect benchmark problem developed by the World Federal Nondestructive Evaluation Center. In this work, the benchmark problem is set up with two random input parameters. The results show that accurate MAPOD analysis is obtained with the proposed approach. Moreover, the proposed framework requires around 100 samples for convergence of the statistical moments, whereas direct Monte Carlo sampling (MCS) with the true model needs over 10,000 samples, and MCS with the deterministic Kriging model does not converge due to its inability to accurately represent the true model.
Xiasong Du, Praveen Gurrala, Leifur Leifsson, Jiming Song, William Meeker, Ronald Roberts, Slawomir Koziel, Adrian Bekasiewicz and Yonatan Tesfahunegn
126 Accelerating Optical Absorption Spectra and Exciton Energy Computation via Interpolative Separable Density Fitting [abstract]
Abstract: We present an efficient way to solve the Bethe-Salpeter equation (BSE), which is developed to model the collective excitation of electron-hole pairs in molecules and solids. The BSE is an eigenvalue problem. In a conventional approach, the Bethe-Salpeter Hamiltonian matrix to be diagonalized requires at least $O(N_e^5)$ operations with a large pre-constant to construct, where $N_e$ is proportional to the number of electrons in the system. This can be extremely costly for large systems. Our approach is based on using the interpolative separable density fitting (ISDF) technique to construct low-rank approximations to the bare and screened exchange operators associated with the BSE Hamiltonian. This approach allows us to reduce the complexity of the Hamiltonian construction to $O(N_e^3)$ with a much smaller pre-constant. We implement this ISDF method for BSE calculations under the Tamm-Dancoff approximation (TDA) in the BerkeleyGW software package. We show that the ISDF-based BSE calculations for molecules and solids can produce accurate exciton energies and optical absorption spectra with significantly reduced computational cost.
Wei Hu, Meiyue Shao, Andrea Cepellotti, Felipe Jornada, Kyle Thicke, Lin Lin, Chao Yang and Steven G. Louie
89 Model-Assisted Probability of Detection for Structural Health Monitoring of Flat Plates [abstract]
Abstract: The paper presents a computational framework for quantitatively assessing the detection capability of structural health monitoring (SHM) systems for flat plates. The detection capability is quantified using the probability of detection (POD) metric, developed within the area of nondestructive testing, which accounts for the variability of the uncertain system parameters and describes the detection accuracy using confidence bounds. SHM provides the capability of continuously monitoring structural integrity using multiple sensors placed sensibly on the structure. It is important that the SHM system can reliably and accurately detect damage when it occurs. The proposed computational framework models the structural behavior of a flat plate using a spring-mass system with a lumped mass at each sensor location. The quantity of interest is the degree of damage of the plate, defined in this work as the difference between the strain field of a damaged plate and the strain field of the healthy plate. The computational framework determines the POD based on the degree of damage of the plate for a given loading condition. The proposed approach is demonstrated on a numerical example of a flat plate with two sides fixed and a load acting normal to the surface. The POD is estimated for two uncertain parameters, the plate thickness and the modulus of elasticity of the material, and a damage located in one spot of the plate. The results show that the POD is close to zero for small loads, but increases quickly with increasing loads.
Xiaosong Du, Jin Yan, Simon Laflamme, Leifur Leifsson, Yonatan Tesfahunegn, Slawomir Koziel and Adrian Bekasiewicz

Teaching Computational Science (WTCS) Session 1

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M5

Chair: Angela B. Shiflet

61 Revealing Hidden Markov Models in Educational Modules and the Classroom [abstract]
Abstract: Prof. Angela Shiflet in computer science and mathematics and Prof. George Shiflet in biology are Fulbright Specialists. In January 2015, they participated in a three-week collaborative project at University “Magna Græcia” of Catanzaro in Italy, in the Department of Medical and Surgical Sciences, hosted by Prof. Mario Cannataro. While there, the three, along with Prof. Pietro Hiram Guzzi, started a project to develop educational modules on high-performance-computing bioinformatics algorithms. Drs. Cannataro and Guzzi have written a book, Data Management of Protein Interaction Networks (Wiley, 2011), and regularly teach bioinformatics and HPC. Upon returning to the United States, the Drs. Shiflet applied to have undergraduates Daniel Couch and Dmitriy Kaplun be Blue Waters Interns in subsequent years, working on the project. The NSF-funded Blue Waters Project, which provides a stipend for the intern, supports “experiences involving the application of high-performance computing to problems in the sciences, engineering, or mathematics.” Each student participated in a two-week workshop at the National Center for Supercomputing Applications (NCSA) facilities on the University of Illinois Urbana-Champaign campus. In the 2016-2017 year of the project, Kaplun wrote sequential and HPC programs and performed timings to accompany a pair of educational modules, “What Are the Chances?--Hidden Markov Models” and “Viterbi Hidden Markov Models,” which are available online. Hidden Markov Models (HMM) are used in numerous applications that involve recognition, such as image tracking in sports, speech or facial recognition, handwriting analysis, language translation, cryptanalysis, predicting protein structure, aligning multiple nucleotide sequences, and discovering the locations of genes.
After an introductory vignette, the first module explains the mathematics behind HMM, particularly probability, and develops a sequential HMM forward algorithm to determine the likelihood of a hidden sequence of states. After motivating the need for HPC, the module also discusses a parallel forward algorithm, its implementation, and timings with speedups, as developed by the intern. To aid students, the module contains sixteen Quick Review Questions, many with multiple parts; three exercises; and five projects. Using similar pedagogical features, the second module discusses the Viterbi algorithm to solve another type of HMM problem, decoding. Completed sequential and parallel C with OpenMP programs are available upon request by instructors. Students and faculty members in a bioinformatics course at University “Magna Græcia” of Catanzaro used the materials, which Ph.D. student Chiara Zucco assisted in incorporating and evaluating.
Angela Shiflet, George Shiflet, Dmitriy Kaplun, Chiara Zucco, Pietro Guzzi and Mario Cannataro
168 Design and Analysis of an Undergraduate Computational Engineering Degree at Federal University of Juiz de Fora [abstract]
Abstract: The undergraduate course in Computational Engineering at the Federal University of Juiz de Fora, Brazil, was created in 2008 as a joint initiative of two distinct departments in the University: Computer Science, located in the Exact Science Institute, and Applied and Computational Mechanics, located in the School of Engineering. The first freshmen enrolled in 2009 and graduated in 2014. This work presents the curriculum structure of this pioneering full bachelor's degree in Computational Engineering in Brazil.
Marcelo Lobosco, Flávia de Souza Bastos, Bernardo Martins Rocha and Rodrigo Santos
166 Extended Cognition Hypothesis View on Computational Thinking in Computer Science Education [abstract]
Abstract: Computational thinking is a much-used concept in computer science education. Here we examine the concept from the viewpoint of the extended cognition hypothesis. The analysis reveals that the extent of the concept is limited by its strong historical roots in computer science and software engineering. According to the extended cognition hypothesis, there is no meaningful distinction between human cognitive functions and technology. This standpoint promotes a broader interpretation of human-technology interaction. Human cognitive processes spontaneously adopt available technology-enhanced skills when technology is used at cognitively relevant levels and modalities. A new concept, technology-synchronized thinking, is presented to denote this conclusion. A more diverse and practical approach to computer science education is suggested.
Mika Letonsaari

Data, Modeling, and Computation in IoT and Smart Systems (DMC-IoT) Session 1

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M6


154 Service-oriented approach for Internet of Things [abstract]
Abstract: The new era of industrial automation has developed and been implemented quickly, and it is impacting different areas of society. In recent years especially, much progress has been made in this area, leading some to speak of a fourth industrial revolution. Factories are more connected every day and are able to communicate and interact in real time between industrial systems. There is a need for flexibility on the shop floor to promote greater customization of products with short life cycles, and a service-oriented architecture is a good option for realizing this. This chapter discusses the challenges of this new revolution, also known as Industry 4.0, addressing the introduction of modern communication and computing technologies to maximize interoperability across all the different existing systems. Moreover, it covers technologies that support this new industrial revolution and discusses impacts, possibilities, needs and adaptation.
Eduardo Moraes
49 Anomalous Trajectory Detection between Regions of Interest Based on ANPR System [abstract]
Abstract: With the popularization of automobiles, more and more algorithms have been proposed in recent years for anomalous trajectory detection. However, existing approaches generally deal only with data generated by GPS devices, which requires a great deal of pre-processing work. Moreover, without considering the local characteristics of regions, these approaches group together all trajectories, even those with different source and destination regions. Therefore, in this paper, we devise a novel framework for anomalous trajectory detection between regions of interest that utilizes data captured by an Automatic Number Plate Recognition (ANPR) system. Our framework consists of three phases: abstraction, detection and classification, and is specially engineered to exploit both spatial and temporal features. In addition, extensive experiments have been conducted on a large-scale real-world dataset, and the results show that our framework works effectively.
Gao Ying, Yang Wei, Xu Hongli, Huang Liusheng, Nie Yiwen and Huang Huan
389 Dynamic real-time infrastructure planning and deployment for disaster early warning systems [abstract]
Abstract: An effective natural disaster early warning system often relies on widely deployed sensors, simulation-based predicting components, and a decision making system. In many cases, the simulation components require advanced infrastructures such as Cloud for performing the computing tasks. However, effectively customizing the virtualized infrastructure from the Cloud based on time-critical constraints and the locations of the sensors, and scaling it based on the dynamic load of the computation at runtime, is still difficult. The suitability of a Dynamic Real-time Infrastructure Planner (DRIP) that handles the provisioning within cloud environments of the virtual infrastructure for time-critical applications is demonstrated with respect to disaster early warning systems. The DRIP system is part of the SWITCH project (Software Workbench for Interactive, Time Critical and Highly self-adaptive Cloud applications).
Zhiming Zhao
119 Calibration and Monitoring of IoT Devices by Means of Embedded Scientific Visualization Tools [abstract]
Abstract: In this paper we propose ontology-based scientific visualization tools to calibrate and monitor various IoT devices in a uniform way. We suggest using ontologies to describe associated controllers, chips, sensors and related data filters, visual objects and graphical scenes to provide self-service solutions for IoT developers and device makers. The high-level interface of these solutions enables composing data flow diagrams that define both the behavior of the IoT devices and the rendering features. According to the data flow diagrams and the set of ontologies, the firmware for the IoT devices is automatically generated, incorporating both the data visualization and the device behavior code. After the firmware is loaded, it is possible to connect to these devices using a desktop computer or smartphone/tablet, get the visualization client code over HTTP, monitor the data and calibrate the devices taking the monitoring results into account. To monitor distributed IoT networks, a new visualization model based on a circle graph is presented. We demonstrate the implementation of the suggested approach within the ontology-based scientific visualization system SciVi. It was tested in a real-world project creating an interactive exhibition for the Permian Antiquities Museum.
Konstantin Ryabinin, Svetlana Chuprina and Mariia Kolesnik
324 Gated Convolutional LSTM for Speech Commands Recognition [abstract]
Abstract: As mobile devices gain increasing popularity, Acoustic Speech Recognition on them is becoming a leading application. Unfortunately, the limited battery and computational resources of a mobile device highly restrict the potential of Speech Recognition systems, most of which have to resort to a remote server for better performance. To improve the performance of local Speech Recognition, we propose C-1-G-2-Blstm. This model combines a Convolutional Neural Network's ability to learn local features with a Recurrent Neural Network's ability to learn the long-term dependencies in sequence data. Furthermore, by adopting a Gated Convolutional Neural Network instead of a traditional CNN, we greatly improve the model's capacity. Our tests demonstrate that C-1-G-2-Blstm achieves a high accuracy of 90.6% on the Google SpeechCommands data set, which is 6.4% higher than the state-of-the-art methods.
Dong Wang, Shaohe Lv, Xiaodong Wang and Xinye Lin
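The gating idea behind a Gated CNN can be illustrated in miniature: one convolution produces features and a second produces a sigmoid gate that scales them elementwise (a gated linear unit). A minimal pure-Python sketch follows; the kernels and input sequence are invented for illustration and are not from the paper's model.

```python
import math

def conv1d(x, w, b):
    """Valid 1-D convolution (cross-correlation) of sequence x with kernel w."""
    k = len(w)
    return [sum(w[j] * x[i + j] for j in range(k)) + b
            for i in range(len(x) - k + 1)]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gated_conv1d(x, w_a, b_a, w_b, b_b):
    """Gated linear unit: conv(x; A) scaled elementwise by sigmoid(conv(x; B)).
    The learned gate decides how much of each feature passes through."""
    a = conv1d(x, w_a, b_a)
    g = [sigmoid(v) for v in conv1d(x, w_b, b_b)]
    return [ai * gi for ai, gi in zip(a, g)]

# Tiny synthetic feature sequence and hand-picked kernels
x = [0.0, 1.0, 2.0, 1.0, 0.0]
out = gated_conv1d(x, w_a=[0.5, 0.5], b_a=0.0, w_b=[1.0, -1.0], b_b=0.0)
```

In a real layer both kernels are learned jointly, and the gate replaces the tanh nonlinearity of a conventional CNN.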
308 An OAuth2.0-Based Unified Authentication System for Secure Services in the Smart Campus Environment [abstract]
Abstract: Based on the construction of Shandong Normal University's smart authentication system, this paper investigates the key technologies of the Open Authorization (OAuth) protocol, which allows third-party applications accessing online services to obtain secure authorization in a simple and standardized way. Through an analysis of the OAuth2.0 standard, the open API details between different applications, and the concrete implementation procedure of the smart campus authentication platform, this paper summarizes research methods for building a smart campus application system with existing educational resources in a cloud computing environment. Security experiments and theoretical analysis show that the system runs stably and reliably, is flexible and easy to integrate with existing smart campus services, and efficiently improves the security and reliability of campus data acquisition. Our work also provides a universal reference for the construction of authentication systems for smart campuses.
Baozhong Gao, Fangai Liu, Shouyan Du and Fansheng Meng
390 Enabling machine learning on resource constrained devices by source code generation of the learned models [abstract]
Abstract: Due to the development of IoT solutions, we can observe a constantly growing number of these devices in almost every aspect of our lives. Machine learning may improve their intelligence and smartness. Unfortunately, the highly regarded programming libraries consume too many resources to be ported to embedded processors. Thus, in this paper the concept of source code generation for machine learning models is presented, as well as generation algorithms for commonly used machine learning methods. The concept has been proven in several use cases.
Tomasz Szydło, Joanna Sendorek and Robert Brzoza-Woch
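As an illustration of the general idea (not the authors' actual generator), a trained decision tree can be flattened into dependency-free C source that an embedded toolchain compiles directly, so no ML library is needed on the device. The nested-dict tree format below is an assumption made for this sketch.

```python
# Hypothetical sketch: emit a learned decision tree as standalone C source
# so it can run on a microcontroller without any ML runtime.
def tree_to_c(node, indent="    "):
    """Recursively render one tree node as C if/else statements."""
    if "leaf" in node:                      # terminal node: return the class id
        return f"{indent}return {node['leaf']};\n"
    s = f"{indent}if (x[{node['feature']}] <= {node['threshold']}f) {{\n"
    s += tree_to_c(node["left"], indent + "    ")
    s += f"{indent}}} else {{\n"
    s += tree_to_c(node["right"], indent + "    ")
    s += f"{indent}}}\n"
    return s

def emit_classifier(tree):
    """Wrap the rendered tree in a C function taking a feature vector."""
    return ("int predict(const float *x) {\n"
            + tree_to_c(tree)
            + "}\n")

# Toy tree with invented split thresholds, standing in for a trained model
tree = {"feature": 0, "threshold": 0.5,
        "left": {"leaf": 0},
        "right": {"feature": 1, "threshold": 1.5,
                  "left": {"leaf": 1}, "right": {"leaf": 2}}}
c_source = emit_classifier(tree)
```

The same template approach extends to other models whose inference reduces to arithmetic over fixed coefficients, e.g. linear models or small neural networks.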

Multiscale Modelling and Simulation (MMS) Session 1

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M7

Chair: Derek Groen

268 Optimized Eigenvalue Solvers for the Neutron Transport Equation [abstract]
Abstract: A discrete ordinates method has been developed to approximate the neutron transport equation for the computation of the lambda modes of a given configuration of a nuclear reactor core. The angular discretization by discrete ordinates results in a very large and sparse algebraic generalized eigenvalue problem. The dominant eigenvalue of this problem and its corresponding eigenfunction have been computed with a matrix-free implementation using both the power iteration method and the Krylov-Schur method. The performance of these methods has been compared by solving different benchmark problems with different dominance ratios.
Antoni Vidal-Ferràndiz, Sebastián González-Pintor, Damián Ginestar, Amanda Carreño and Gumersindo Verdú
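A matrix-free power iteration of the kind mentioned above needs only the action of the operator on a vector, never the assembled matrix. A minimal sketch on a toy symmetric operator (the 3x3 matrix is invented for illustration and is not a transport discretization):

```python
import math

def power_iteration(apply_op, n, iters=200, tol=1e-10):
    """Matrix-free power iteration: needs only a callable x -> A x.
    Returns the dominant eigenvalue and a unit eigenvector."""
    x = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        y = apply_op(x)
        norm = math.sqrt(sum(v * v for v in y))
        y = [v / norm for v in y]
        # Rayleigh quotient estimate of the eigenvalue
        new_lam = sum(yi * wi for yi, wi in zip(y, apply_op(y)))
        if abs(new_lam - lam) < tol:
            return new_lam, y
        lam, x = new_lam, y
    return lam, x

# Toy operator applied matrix-free: only the matvec is exposed
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
apply_A = lambda x: [sum(a * v for a, v in zip(row, x)) for row in A]
lam, vec = power_iteration(apply_A, 3)
```

The convergence rate is governed by the dominance ratio (second-to-first eigenvalue ratio), which is why the benchmark comparison in the paper varies it; Krylov-Schur methods are far less sensitive to that ratio.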
274 Noise propagation in a PWR nuclear reactor [abstract]
Abstract: In order to reproduce and study the neutron noise transients present in a nuclear reactor core, it is necessary to develop a suitable tool. Unfortunately, this capability is not originally considered in time-domain neutron diffusion codes, so a complex methodology has to be developed in each code. Thus, with the aim of endowing the U.S. Nuclear Regulatory Commission (NRC) reference neutron diffusion code, PARCSv3.2, with the capability of reproducing this type of transient, a complete methodology, involving changes in the source code and the development of new auxiliary tools, has been created to ensure accurate reproduction of the core behaviour in the presence of a neutron noise source. This approach is applied to two representative sources of sinusoidal oscillation in a nuclear reactor core: a point-wise source, corresponding to the fluctuation created by an absorber of variable length, and a traveling perturbation, simulating a perturbation in the thermal-hydraulic data along an entire channel. Besides, one of the main limitations in reproducing this type of problem is the large amount of data needed, since sometimes-long transients must be solved with small time steps for an entire nuclear reactor core. In addition, an analysis of the proficiency of the most consolidated numerical schemes available in PARCSv3.2, and of the dependence on cell size for this kind of transient, is applied to a real case study in order to better understand their influence on neutron noise transients.
Nicolás Olmo-Juan, Teresa María Barrachina Celda, Rafael Miró Herrero and Gumersindo Jesús Verdú Martín
327 Multi-scale homogenization of pre-treatment rapid and slow filtration processes with experimental and computational validations [abstract]
Abstract: In this paper, we summarize an approach that couples the multi-scale method with homogenization theory to develop engineering models for three unique granular filtration cases, namely, effective rapid filtration to remove turbidity particles, adsorption, and biofilm absorption of natural organic matter. These cases differ in their microscale Peclet and Damköhler numbers due to varying hydraulic loading rates, sizes of solutes and removal mechanisms to achieve the purification step. By first coupling the fluid and solute problems, we systematically derive the homogenized effective equations for the effective rapid filtration process while introducing an appropriate boundary condition to account for the particle deposition occurring on the spheres' boundaries within a prescribed face-centred cubic (FCC) periodic cell. Validation of the derived homogenized equation for this case is achieved by comparing its predictions with our experimentally-derived values for the normalized pressure gradient acting upon the experimental filter. The same approach can subsequently be extended to the latter two cases by changing the involved time scale. Experimental work for validating these models is currently underway. Most importantly, we identify a need to include a computational approach to resolve the concentration parameter within the periodic cell at higher orders. The computational values will then be introduced back into the respective homogenized equations for further predictions, which are to be compared with the obtained experimental values under varying real-world conditions. This proposed hybrid methodology is currently in progress.
Alvin Wei Ze Chew and Adrian Wing-Keung Law
256 The solution of the lambda modes problem using block iterative eigensolvers [abstract]
Abstract: Highly efficient methods are required for the computation of several lambda modes associated with the neutron diffusion equation. Multiple iterative methods have been used to solve this problem. In this work, three different block methods are studied for solving it. The first method is a procedure based on the modified block Newton method. The second is an eigensolver based on subspace iteration accelerated with Chebyshev polynomials. Finally, a block inverse-free preconditioned Krylov subspace method is analyzed. Two benchmark problems are studied, illustrating the convergence properties and the competitiveness of the proposed methods.
A. Carreño, A. Vidal-Ferràndiz, D. Ginestar and G. Verdú
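All three methods above iterate on a block of vectors rather than a single vector, so several lambda modes converge at once. As a hedged illustration of the simplest such scheme, here is plain orthogonal (subspace) iteration without the Chebyshev acceleration or preconditioning studied in the paper, on an invented 3x3 symmetric operator:

```python
import math

def orthonormalize(vs):
    """Modified Gram-Schmidt on a list of vectors."""
    out = []
    for v in vs:
        w = list(v)
        for u in out:
            c = sum(a * b for a, b in zip(u, w))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(x * x for x in w))
        out.append([x / norm for x in w])
    return out

def subspace_iteration(apply_op, n, k, iters=300):
    """Block power (orthogonal) iteration for a symmetric operator: apply the
    operator to an orthonormal block, then re-orthonormalize, each sweep.
    Rayleigh quotients of the converged block give the k dominant
    eigenvalues."""
    V = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(k)]
    for _ in range(iters):
        V = orthonormalize([apply_op(v) for v in V])
    lams = [sum(vi * wi for vi, wi in zip(v, apply_op(v))) for v in V]
    return lams, V

# Toy symmetric operator, invented for illustration
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
apply_A = lambda x: [sum(a * v for a, v in zip(row, x)) for row in A]
lams, V = subspace_iteration(apply_A, n=3, k=2)
```

Chebyshev acceleration and preconditioning both aim to shrink the number of sweeps this basic loop needs when the wanted eigenvalues are poorly separated.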

Solving Problems with Uncertainties (SPU) Session 1

Time and Date: 13:15 - 14:55 on 12th June 2018

Room: M8

Chair: Vassil Alexandrov

334 Statistical and Multivariate Analysis Applied to a Database of Patients with Type-2 Diabetes [abstract]
Abstract: The prevalence of type 2 Diabetes Mellitus (T2DM) has reached critical proportions globally over the past few years. Diabetes can cause devastating personal suffering, and its treatment represents a major economic burden for every country around the world. To properly guide effective actions and measures, the present study aims to examine the profile of the diabetic population in Mexico. We used the Karhunen-Loève transform, which is a form of principal component analysis, to identify the factors that contribute to T2DM. The results revealed a unique profile of patients who cannot control this disease. Results also demonstrated that, compared to young patients, old patients tend to have better glycemic control. Statistical analyses reveal patient profiles and their health results, and identify the variables that measure overlapping health issues as reported in the database (i.e. collinearity).
Diana Canales, Neil Hernandez-Gress, Ram Akella and Ivan Perez
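The Karhunen-Loève transform used in the study diagonalizes the data's covariance matrix, and its leading eigenvector is the direction of maximal variance. A minimal pure-Python sketch follows; the three-variable toy cohort is made up for illustration and stands in for the patient database.

```python
import math

def covariance(data):
    """Sample covariance matrix of a list of observation rows."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    c = [[0.0] * d for _ in range(d)]
    for row in data:
        dev = [row[j] - means[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                c[i][j] += dev[i] * dev[j] / (n - 1)
    return c

def first_principal_component(data, iters=500):
    """Dominant eigenvector of the covariance matrix via power iteration:
    the first basis vector of the Karhunen-Loeve transform."""
    c = covariance(data)
    d = len(c)
    v = [1.0 / math.sqrt(d)] * d
    for _ in range(iters):
        w = [sum(c[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy cohort: variable 1 is roughly twice variable 0; variable 2 is noise
data = [[1.0, 2.1, 0.3], [2.0, 3.9, 0.1], [3.0, 6.2, 0.4],
        [4.0, 8.1, 0.2], [5.0, 9.8, 0.3]]
pc1 = first_principal_component(data)
```

The component loadings show which correlated variables move together, which is how overlapping (collinear) health measures in a database reveal themselves.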
368 Bayesian based approach learning for outcome prediction of soccer matches [abstract]
Abstract: In the current world, sports produce considerable data such as players' skills, game results, season matches, league management, etc. The big challenge in sports science is to analyze these data to gain a competitive advantage. The analysis can be done using several techniques and statistical methods in order to produce valuable information. The problem of modeling soccer data has become increasingly popular in the last few years, with the prediction of results being the most popular topic. In this paper, we propose a Bayesian model based on rank position and shared history that predicts the outcome of future soccer matches. The model was tested using a data set containing the results of over 200,000 soccer matches from different soccer leagues around the world.
Laura Hervert-Escobar, Neil Hernandez-Gress and Timothy I. Matis
387 Reducing Data Uncertainty in Forest Fire Spread Prediction: a Matter of Error Function Assessment [abstract]
Abstract: Forest fires are a significant problem that causes important damage around the world every year. In order to efficiently tackle these hazards, one can rely on forest fire spread simulators. Any forest fire evolution model requires several input data parameters to describe the scenario where the fire spread is taking place; however, these data are usually subject to high levels of uncertainty. To reduce the impact of input-data uncertainty, different strategies have been developed in recent years. One of these strategies consists of adjusting the input parameters according to the observed evolution of the fire. This strategy makes it critical to have reliable and solid metrics to assess the error of the computational forecasts. The aim of this work is to assess eight different error functions applied to forest fire spread simulations in order to understand their respective advantages and drawbacks, as well as to determine in which cases they are beneficial or not.
Carlos Carrillo, Ana Cortés, Tomàs Margalef, Antonio Espinosa and Andrés Cencerrado
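To make the role of an error function concrete, here is one simple possibility, given as a hypothetical sketch and not claimed to be among the eight functions the paper assesses: the symmetric difference between the observed and simulated burned areas, normalized by the observed area.

```python
def symmetric_difference_error(observed, predicted):
    """Illustrative fire-spread error function: the area of the symmetric
    difference between observed and simulated burned cells, normalized by
    the observed burned area. 0 means a perfect match; larger is worse."""
    obs, pred = set(observed), set(predicted)
    missed = obs - pred          # burned cells the simulation failed to predict
    false_alarm = pred - obs     # cells predicted burned that did not burn
    return (len(missed) + len(false_alarm)) / len(obs)

# Toy 2x2 burned region vs. a prediction with one miss and one false alarm
observed = {(0, 0), (0, 1), (1, 0), (1, 1)}
predicted = {(0, 0), (0, 1), (1, 0), (2, 0)}
err = symmetric_difference_error(observed, predicted)  # (1 + 1) / 4 = 0.5
```

Different error functions weight misses and false alarms differently, which is exactly why the calibration strategy described above is sensitive to the choice of metric.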
335 Analysis of the accuracy of OpenFOAM solvers for the problem of supersonic flow around a cone [abstract]
Abstract: Numerical results comparing the accuracy of several OpenFOAM solvers are presented. The comparison was made for the problem of inviscid compressible flow around a cone at zero angle of attack. The results obtained with the various OpenFOAM solvers are compared with the known numerical solution of the problem as the cone angle and flow velocity are varied. This study is part of a project aimed at creating a reliable numerical technology for modelling the flows around elongated bodies of rotation (EBR).
Alexander Bondarev and Artem Kuvshinnikov