Architecture, Languages, Compilation and Hardware support for Emerging ManYcore systems (ALCHEMY) Session 1
Time and Date: 15:25 - 17:05 on 12th June 2018
Chair: Stephane Louise
| Trends in programming Many-Core System [abstract]
Abstract: The last ten years saw the transition from multi-core systems, which can roughly be defined as systems with a single bus serving as the means of communication between the different execution cores and between the cores and the memory subsystem, to many-core systems, which feature several communication buses organized as Networks-on-Chip (NoC), a natural consequence of the growing number of cores. Another trend is the appearance of heterogeneous execution cores, where the performance of a given computation depends strongly on the type of processing and even on the types of data to process. While these architectures can theoretically provide a large acceleration factor or a significant reduction in power consumption, programming them has always been the toughest challenge. The implementation of the architecture is an important factor in letting data circulate and be processed without impediment, but in this presentation we focus on the major aspects of the programming paradigms and on where the future of these techniques could lead. We present the main approaches, including models of computation, programming models, runtime generation, JIT generation, and some of the other emerging trends.
| Architecture Emulation and Simulation of Future Many-Core Epiphany RISC Array Processors [abstract]
Abstract: The Adapteva Epiphany many-core architecture comprises a scalable 2D mesh Network-on-Chip (NoC) of low-power RISC cores with minimal uncore functionality. The Epiphany architecture has demonstrated significantly higher power-efficiency compared with other more conventional general-purpose floating-point processors. The original 32-bit architecture has been updated to create a 1,024-core 64-bit processor recently fabricated using a 16-nm process. We present here our recent work in developing an emulation and simulation capability for future many-core processors based on the Epiphany architecture. We have developed an Epiphany system on a chip (SoC) device emulator that can be installed as a virtual device on an ordinary x86 platform and utilized with the existing software stack used to support physical devices, thus creating a seamless software development environment capable of targeting new processor designs just as they would be interfaced on a real platform. These virtual Epiphany devices can be used for research in the area of many-core RISC array processors in general. We also report on a simulation framework for software development and testing on large-scale systems based on Epiphany RISC array processors.
| David Richie and James Ross
| Automatic mapping for OpenCL-Programs on CPU/GPU Heterogeneous Platforms [abstract]
Abstract: Heterogeneous computing systems with multiple CPUs and GPUs are increasingly popular. Today, heterogeneous platforms are deployed in many setups, ranging from low-power mobile systems to high-performance computing systems. Such platforms are usually programmed using OpenCL, which allows the same program to be executed on different types of devices. Nevertheless, programming such platforms is a challenging job for most non-expert programmers. To achieve efficient application runtimes on heterogeneous platforms, programmers require an efficient workload distribution across the available compute devices. The decision of how the application should be mapped is non-trivial. In this paper, we present a new approach to building accurate predictive models for OpenCL programs. We use a machine-learning-based predictive model to estimate which device allows the best application speed-up. Using the LLVM compiler framework, we develop a tool for dynamic code-feature extraction. We demonstrate the effectiveness of our novel approach by applying it to different prediction schemes. Using our dynamic feature-extraction techniques, we are able to build accurate predictive models, with accuracies varying between 77% and 90%, depending on the prediction mechanism and the scenario. We tested our method on an extensive set of parallel applications. One of our findings is that dynamically extracted code features improve the accuracy of the predictive models by 6.1% on average (maximum 9.5%) compared to the state of the art.
| Konrad Moren and Diana Goehringer
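The core idea of the last abstract (predicting the best device for an OpenCL kernel from extracted code features) can be illustrated with a minimal sketch. This is not the authors' actual model: the feature set, the training data, and the nearest-centroid classifier are all illustrative assumptions standing in for their LLVM-based feature extraction and learned prediction schemes.

```python
# Hypothetical sketch of CPU/GPU mapping via a learned predictive model.
# Features, training data, and the classifier are illustrative assumptions,
# not the method from the paper.
import math

# Each kernel is described by dynamic code features (illustrative):
#   (arithmetic intensity, fraction of memory ops, log2 of global work size)
TRAINING = [
    ((0.5, 0.7, 10.0), "CPU"),   # memory-bound, small workload
    ((0.8, 0.6, 12.0), "CPU"),
    ((8.0, 0.2, 22.0), "GPU"),   # compute-heavy, massively parallel
    ((6.5, 0.3, 20.0), "GPU"),
]

def centroid(points):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# One centroid per device class: a nearest-centroid classifier.
CENTROIDS = {
    label: centroid([f for f, l in TRAINING if l == label])
    for label in ("CPU", "GPU")
}

def predict_device(features):
    """Map a kernel to the device whose training centroid is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: dist(features, CENTROIDS[label]))

print(predict_device((7.2, 0.25, 21.0)))  # compute-heavy kernel -> GPU
print(predict_device((0.6, 0.65, 11.0)))  # memory-bound kernel -> CPU
```

In the paper's setting, the features would come from dynamic LLVM-based analysis of the kernel rather than being hand-supplied, and the classifier would be trained on measured speed-ups across real applications.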