Tutorial 1: Embedded Machine Learning Today and Tomorrow
Luca Benini (ETH Zurich & Università di Bologna, Italy)
The tutorial will start with an overview of the architectures and systems for machine learning available today, focusing on advanced techniques that boost energy efficiency in ML accelerators, such as exploiting temporal redundancy, sparsity, and reduced precision in inference and training. We will then cover recent progress and challenges in implementing machine learning algorithms using analog resistive memory devices. Finally, we will discuss spiking neural networks and their unique properties and advantages, including new algorithms applied to spatiotemporal data. The emulation of these bio-inspired mechanisms through the physics of nanodevices, particularly memristors, will also be covered.
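As a concrete illustration of the reduced-precision techniques mentioned above, the sketch below (an illustrative example, not taken from the tutorial material) quantizes a small weight tensor to 8-bit integers with a single per-tensor scale, the basic scheme behind many integer-inference accelerators:

```python
# Illustrative sketch: symmetric per-tensor 8-bit weight quantization.
# The weight values below are made-up, not from the tutorial.

def quantize_int8(weights):
    """Map float weights to int8 codes with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.27]
q, scale = quantize_int8(weights)     # q holds small integers in [-128, 127]
approx = dequantize(q, scale)
```

Storing `q` instead of the float weights cuts memory traffic by roughly 4x, at the cost of a bounded rounding error of at most half a quantization step per weight.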
Tutorial 2: Spectrum of Run-time Management for Modern and Next Generation Multi/Many-core Systems
Amit Kumar Singh (Essex Univ, UK), Geoff V. Merrett (Southampton Univ, UK), Amir Rahmani (UC Irvine, USA), Akash Kumar (TU Dresden, Germany)
Run-time management of multi/many-core systems is becoming extremely challenging due to several factors: the increasing demand to execute concurrent applications, inefficient exploitation of heterogeneous cores, workload variations over time, changing run-time scenarios, and the need to optimize several metrics simultaneously, such as performance, energy consumption, and reliability. For next-generation multi/many-core systems, these challenges will grow further, mainly due to larger core counts and increased heterogeneity.
This tutorial starts with a taxonomy of run-time management approaches, providing an overview of the field and a comparison of existing techniques. The attention then shifts to a range of run-time power and energy management approaches. Thereafter, approaches that consider reliability as their primary optimization goal will be addressed. Finally, run-time management approaches that leverage multiple-input, multiple-output and supervisory control theory to offer scalable, autonomous, and coordinated resource management will be covered. Depending on the target problem, designers can employ these methodologies to make multi/many-core systems efficient in terms of performance, energy consumption, and/or reliability.
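As a minimal sketch of the kind of run-time power management surveyed here, the hypothetical governor below steps a core's frequency up or down based on observed utilization; the frequency levels and thresholds are made-up values, not from the tutorial:

```python
# Hypothetical feedback-based frequency governor (DVFS-style sketch).
# FREQS and the up/down thresholds are assumed example values.

FREQS = [600, 1000, 1400, 1800]  # available frequency levels (MHz)

def next_freq(current, utilization, up=0.8, down=0.3):
    """Pick the next frequency level from the observed core utilization."""
    i = FREQS.index(current)
    if utilization > up and i < len(FREQS) - 1:
        return FREQS[i + 1]   # overloaded: raise frequency for performance
    if utilization < down and i > 0:
        return FREQS[i - 1]   # underloaded: lower frequency to save energy
    return current            # within band: keep the current frequency
```

A real run-time manager would combine such per-core decisions with task mapping and thermal or reliability constraints; this sketch shows only the basic sense-decide-actuate loop.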
Tutorial 3: Schedulability Analysis under Uncertainty using Formal Methods
Étienne André (Université Paris, France), Giuseppe Lipari (Université de Lille, France)
Modern real-time systems must cope with several sources of variability. Hardware processors introduce variability in software execution times (caches, pipelines, bus contention, etc.), and the timing of external events may change due to changes in the environment, malfunctions, and so on. This variability adds challenges to the design, development, and validation of modern cyber-physical systems.
It is therefore necessary to estimate the robustness of the system with respect to variations of these parameters. A key issue is to determine for which parameter values the system continues to meet all its timing constraints.
In this tutorial, we present the background for analyzing real-time systems using formal methods, notably the formalism of parametric timed automata for analyzing real-time scheduling under uncertainty. We will then survey some real-time scheduling problems and show how to model a typical real-time system using the IMITATOR tool. Participants will be guided through building and verifying a model of a real-time system, exploring the capabilities of the analysis tool.
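For readers unfamiliar with the scheduling problems surveyed here, the sketch below shows one classic schedulability test: exact response-time analysis for fixed-priority preemptive tasks. The task set is a made-up example; parametric analysis, as with IMITATOR, generalizes such tests to uncertain timing parameters:

```python
# Illustrative sketch: response-time analysis for fixed-priority
# preemptive scheduling, with tasks listed in decreasing-priority order.
# The task set is an assumed example, not from the tutorial.
import math

def response_time(tasks, i):
    """tasks: list of (WCET, period); index 0 is the highest priority.
    Returns the worst-case response time of task i, or None if it can
    exceed its deadline (taken equal to its period)."""
    C, T = tasks[i]
    R = C
    while True:
        # Interference from all higher-priority tasks released in [0, R)
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        R_new = C + interference
        if R_new == R:
            return R          # fixed point reached: worst-case response time
        if R_new > T:
            return None       # response time exceeds the deadline
        R = R_new

tasks = [(1, 4), (2, 6), (3, 12)]   # (WCET, period), rate-monotonic order
```

For this example the iteration converges for every task, so the set is schedulable; in a parametric setting, WCETs or periods become symbolic parameters and the tool synthesizes the values for which such a test still passes.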
Tutorial 4: Kickstarting Developing on seL4, The World’s Most Trustworthy and Difficult to Work with Microkernel
Anna Lyons, Gernot Heiser (UNSW, Australia)
seL4 is a microkernel offering unprecedented trustworthiness in the form of formal verification of its correctness, integrity, isolation, and a known WCET. This tutorial will kick-start everything you need to know to begin working with seL4, in four parts: an introduction to seL4, hands-on work with the kernel API and scheduling model, hands-on use of rump kernels for basic POSIX support, and a presentation on how to set up Linux VMs with guest-to-guest communication.
Tutorial 5: A Comprehensive Analysis of Approximate Computing Techniques: From Component- to Application-Level
Alberto Bosio (LIRMM, France), Daniel Menard (INSA Rennes, France), Olivier Sentieys (INRIA, France)
A new design paradigm, Approximate Computing (AxC), has been established to investigate how computing systems can be made more energy efficient, faster, and less complex. Intuitively, instead of performing exact computation and consequently requiring a large amount of resources, AxC selectively relaxes the specifications, trading accuracy for efficiency. The literature has demonstrated the effectiveness of imprecise computation for both software and hardware components implementing inexact algorithms, which exhibit an inherent resiliency to errors.
This tutorial introduces basic and advanced topics in AxC, following a bottom-up approach from the component level up to the application level. In more detail, we will first present the main concepts and techniques (e.g., functional approximation, voltage over-scaling). We will then present some compile-time results on the energy efficiency, area, and performance versus accuracy of computations when using customized arithmetic (fixed-point, floating-point), and draw some conclusions by comparing the different paradigms. Algorithm-level approximation methods are presented next: energy consumption can be reduced by approximating or skipping part of the computation, and the concepts of incremental refinement, early termination, and fast decision will be detailed.
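As a toy illustration of skipping part of a computation, the sketch below (a made-up example, not from the tutorial) applies loop perforation to a mean computation, keeping one of every k elements and trading a small accuracy loss for roughly 1/k of the work:

```python
# Illustrative sketch of computation skipping ("loop perforation"),
# one of the algorithm-level approximation techniques mentioned above.

def mean_exact(xs):
    """Exact mean: visits every element."""
    return sum(xs) / len(xs)

def mean_perforated(xs, k=2):
    """Approximate mean: keeps one of every k elements, so it performs
    roughly 1/k of the additions of the exact version."""
    sampled = xs[::k]
    return sum(sampled) / len(sampled)

xs = list(range(100))            # 0..99; exact mean is 49.5
exact = mean_exact(xs)           # 49.5
approx = mean_perforated(xs, 4)  # mean of 0, 4, ..., 96 = 48.0
```

The same idea underlies incremental refinement and early termination: stop (or thin out) the computation once the partial result is accurate enough for the application.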