Challenge
Modern multiprocessor architectures are increasingly complex: NUMA memory, shared cache levels, GPGPU/FPGA accelerators, network boards, and so on. How can irregular scientific computations be properly scheduled on such machines?! Manual scheduling is no longer realistic; dynamic schedulers are now used to exploit these machines. To make scheduling optimization possible, the task paradigm is increasingly adopted, with more or less precise information attached to each task: duration, memory occupation, priority, etc.
Publications (.bib), Google Scholar, ResearchGate
My HDR thesis is available as a PDF (English), as well as the slides (English), and the Christine chocolate cake recipe (French).
- On Runtime Systems for Task-based Programming on Heterogeneous Platforms,
Samuel Thibault,
Habilitation à diriger les recherches, Université de Bordeaux, 2018.
Main publications:
- Achieving High Performance on Supercomputers with a Sequential Task-based Programming Model (PDF),
Emmanuel Agullo, Olivier Aumage, Mathieu Faverge, Nathalie Furmento, Florent Pruvost, Marc Sergent, Samuel Thibault,
IEEE Transactions on Parallel and Distributed Systems, IEEE, 2017
- Bridging the Gap between Performance and Bounds of Cholesky Factorization on Heterogeneous Platforms (PDF),
Emmanuel Agullo, Olivier Beaumont, Lionel Eyraud-Dubois, Julien Herrmann, Suraj Kumar, Loris Marchal, Samuel Thibault,
Heterogeneity in Computing Workshop 2015, May 2015, Hyderabad, India, 2015
- Faithful Performance Prediction of a Dynamic Task-Based Runtime System for Heterogeneous Multi-Core Architectures (PDF),
Luka Stanisic, Samuel Thibault, Arnaud Legrand, Brice Videau, Jean-François Méhaut,
Concurrency and Computation: Practice and Experience, Wiley, 2015
- Faster, Cheaper, Better -- A Hybridization Methodology for High-Performance Linear Algebra Software for GPUs (PDF),
Emmanuel Agullo, Cédric Augonnet, Jack Dongarra, Hatem Ltaief, Raymond Namyst, Samuel Thibault, Stanimire Tomov,
in: Wen-mei W. Hwu (ed.), GPU Computing Gems, vol. 2, Morgan Kaufmann, 2010
- StarPU: A Unified Platform for Task Scheduling on Heterogeneous Multicore Architectures (PDF),
Cédric Augonnet, Samuel Thibault, Raymond Namyst, Pierre-André Wacrenier,
Concurrency and Computation: Practice and Experience, Wiley, 2011 (Euro-Par 2009 best papers)
- hwloc: a Generic Framework for Managing Hardware Affinities in HPC Applications (PDF),
François Broquedis, Jérôme Clet-Ortega, Stéphanie Moreaud, Nathalie Furmento, Brice Goglin, Guillaume Mercier, Samuel Thibault, Raymond Namyst,
PDP 2010 - The 18th Euromicro International Conference on Parallel, Distributed and Network-Based Computing, IEEE, Feb 2010, Pisa, Italy
PhD advisor for:
- Jean-François David
- Radjasouria Vinayagame
- Thomas Morin
Previously PhD co-advisor for:
- Maxime Gonthier, manuscript, now post-doc at Argonne National Laboratory
- Romain Lion, manuscript, now engineer at Inria
- Idriss Daoudi, manuscript, now researcher at BSC
- Suraj Kumar, manuscript, now researcher in the Roma Inria team, Lyon
- Marc Sergent, manuscript, now engineer at Eviden, France
- Corentin Rossignon, manuscript, now engineer at Spacebel, France
- Paul-Antoine Arras, manuscript, now freelance, France
- Cédric Augonnet, manuscript, now senior research scientist at NVIDIA
Scheduling tasks on heterogeneous systems, StarPU
Cédric Augonnet, during his PhD under my co-supervision, designed StarPU, a framework for scheduling tasks over heterogeneous machines. The idea is to perform all optimizations at runtime: data transfers are minimized, performed in advance, and overlapped with computation, and they interact with the task scheduling decisions. The scheduler takes performance models of the tasks into account, which makes it possible to capture the heterogeneity of the machine, and even to benefit from it!
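To make the model concrete, here is a minimal sketch of a StarPU program in C (the "scale" kernel and the vector size are illustrative assumptions, not code from these projects; error checking is omitted):

    /* Minimal sketch: one task scaling a vector, scheduled by StarPU. */
    #include <stdint.h>
    #include <starpu.h>

    /* CPU implementation of the codelet; a .cuda_funcs entry would
     * provide a GPU version of the same kernel. */
    static void scale_cpu(void *buffers[], void *cl_arg)
    {
        float factor = *(float *) cl_arg;
        unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
        float *v = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
        for (unsigned i = 0; i < n; i++)
            v[i] *= factor;
    }

    static struct starpu_codelet scale_cl = {
        .cpu_funcs = { scale_cpu },
        .nbuffers = 1,
        .modes = { STARPU_RW },
    };

    int main(void)
    {
        float v[1024] = { 0 };
        float factor = 2.0f;
        starpu_data_handle_t handle;

        starpu_init(NULL);
        /* Register the vector so StarPU can manage (and overlap)
         * its transfers between memory nodes. */
        starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
                                    (uintptr_t) v, 1024, sizeof(v[0]));
        /* Submit an asynchronous task; the scheduler picks the
         * computation unit, guided by performance models when set. */
        starpu_task_insert(&scale_cl, STARPU_RW, handle,
                           STARPU_VALUE, &factor, sizeof(factor), 0);
        starpu_task_wait_for_all();
        starpu_data_unregister(handle);
        starpu_shutdown();
        return 0;
    }

The same codelet can carry several implementations (CPU, CUDA, OpenCL) along with a performance model; the scheduler then chooses the most suitable unit for each task.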
StarPU is increasingly used by various scientific computation libraries, for instance for linear algebra (the MORSE project).
We have extended the StarPU programming model to exploit clusters in a distributed fashion thanks to MPI, which raises scalability questions.
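As a hedged sketch of how this looks in code (reusing the scale_cl codelet above; the tag, owner rank, and data distribution are arbitrary choices for illustration), every rank submits the same task graph and StarPU deduces the required MPI transfers:

    /* Sketch of StarPU's MPI support: each data handle is assigned
     * an owner rank, and starpu_mpi_task_insert() infers the needed
     * communications automatically. */
    #include <stdint.h>
    #include <starpu.h>
    #include <starpu_mpi.h>

    extern struct starpu_codelet scale_cl;  /* see previous sketch */

    int main(int argc, char **argv)
    {
        float v[1024] = { 0 };
        float factor = 2.0f;
        int rank;
        starpu_data_handle_t handle;

        starpu_init(NULL);
        starpu_mpi_init(&argc, &argv, 1);   /* also initializes MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Rank 0 owns the vector; the other ranks register a mere
         * placeholder that StarPU fills on demand. */
        starpu_vector_data_register(&handle,
                                    rank == 0 ? STARPU_MAIN_RAM : -1,
                                    rank == 0 ? (uintptr_t) v : 0,
                                    1024, sizeof(v[0]));
        starpu_mpi_data_register(handle, 42 /* MPI tag */, 0 /* owner */);

        /* All ranks submit the same graph; each task runs on the
         * owner of its written data, transfers happen automatically. */
        starpu_mpi_task_insert(MPI_COMM_WORLD, &scale_cl,
                               STARPU_RW, handle,
                               STARPU_VALUE, &factor, sizeof(factor), 0);

        starpu_task_wait_for_all();
        starpu_data_unregister(handle);
        starpu_mpi_shutdown();
        starpu_shutdown();
        return 0;
    }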
We have extended StarPU's data management to use disks, thus allowing out-of-core computation, which raises questions of transfer optimization.
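A minimal sketch of how a disk node can be registered, assuming StarPU's disk API (the backend, path, and size below are illustrative); limiting main memory consumption, e.g. with the STARPU_LIMIT_CPU_MEM environment variable, then leads StarPU to evict data to the disk node:

    /* Sketch: add a disk memory node so data can spill out of core. */
    #include <starpu.h>

    int main(void)
    {
        starpu_init(NULL);

        /* Register ~200 GB of scratch space backed by plain
         * read/write system calls (other backends, e.g. stdio,
         * also exist). */
        starpu_disk_register(&starpu_disk_unistd_ops,
                             (void *) "/tmp/starpu-scratch",
                             (starpu_ssize_t) 200 * 1024 * 1024 * 1024);

        /* ... register data and submit tasks as usual; StarPU
         * evicts to disk when main memory gets tight ... */

        starpu_shutdown();
        return 0;
    }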
Combining StarPU with SimGrid allows us to simulate execution, which not only saves time when observing the performance obtained with different scheduling heuristics, but also lets us modify parameters of the simulated architecture (bandwidth, number of computation units, ...)! More generally, together with the modularization of the StarPU schedulers, it provides theoreticians with a platform for testing various heuristics on real applications, while avoiding all the technical constraints of real execution on a production system (hardware failures, changing software versions, ...).
A video recording (26') of my presentation at the XDC2014 conference gives an overview of this work (slides):
A presentation (in French) to an elementary school class explains the overall challenge in a very simple way. (ODP source)
Current projects related to StarPU:
- EU TEXTAROSSA, TBD
- ANR SOLHARIS, aims at achieving strong and weak scalability (i.e., the ability to solve problems of increasingly large size while making effective use of the available computational resources) of sparse direct solvers on large-scale, distributed-memory, heterogeneous computers. These solvers will rely on asynchronous task-based parallelism, rather than the traditional and widely adopted message-passing and multithreading techniques; this paradigm will be implemented by means of modern runtime systems, which have proven to be good tools for the development of scientific computing applications.
Within this project, we will devise new scheduling heuristics that favour data locality, to improve execution in memory-constrained cases.
- IPL HPC-BIGDATA, the goal is to gather teams from the HPC, Big Data, and Machine Learning (ML) areas to work at the intersection of these domains. It targets a converged architecture capable of supporting HPC as well as Big Data applications, with a high-performance interconnect (HPC-like) and on-node permanent storage capabilities (Cloud-like). Research is organized along three main axes: data-aware resource management, advanced data analytics for scientific simulation, and high-performance learning.
Within this project, we will study porting scikit-learn on top of task-based programming, which will provide an interesting data-management challenge for StarPU.
We will also integrate into StarPU a machine-learning-based scheduling strategy, i.e. a BigData-for-HPC approach.
Past projects related to StarPU:
- EU H2020 FETHPC EXA2PRO, the goal was to develop a programming environment that enables the productive deployment of highly parallel applications on exascale computing systems. It addresses performance, performance portability, programmability, abstraction and reusability, fault tolerance, and technical debt. It leverages skeleton programming, component composition, a dynamic runtime system, and FPGA accelerators.
Within this project, we studied the level of fault-tolerance support that the StarPU runtime system can provide, and designed multi-criteria scheduling policies to optimize for time, energy, and fault tolerance.
- IPL HAC SPECIS, the goal was to answer the methodological needs of HPC application and runtime developers, and to allow studying real HPC systems from both the correctness and the performance points of view. To this end, it gathered experts from the HPC, formal verification, and performance evaluation communities.
Within this project, we extended the simulation support of StarPU, going as far as verifying parts of it with a model checker. We studied visualization techniques to provide performance feedback to the runtime programmer and to the application programmer. We also modeled the inflation of sparse data, in order to provide probabilistic guarantees that memory will not overflow.
- MORSE associate team with the University of Tennessee (UTK), the goal was to design dense and sparse linear algebra methods that achieve the fastest possible time to an accurate solution on large-scale multicore systems with GPU accelerators, using all the processing power that future high-end systems can make available. We designed a research framework for describing linear algebra algorithms at a high level of abstraction, enabling strong collaboration between research groups in linear algebra and runtime systems.
Within this project, we experimented with StarPU on dense and sparse linear algebra applications, and improved scheduling heuristics. We showed that such an approach was indeed very effective, and convinced the UTK team to use runtime systems for linear algebra.
- DGA Rapid HI-BOX, the goal was to develop a generic library of fast parallel solvers that can be used in existing BEM codes. It leverages the latest advances in numerical methods for integral equations, linear algebra, and parallel computing: FMM, H-matrices, GMRES, and task-based programming with a runtime engine.
Within this project, we experimented with StarPU's out-of-core capabilities, which revealed the tension between prioritizing the critical path and favoring data locality. We also extended StarPU with master-slave MPI support.
- ANR SOLHAR, the goal was to study and design algorithms and parallel programming models for implementing direct methods for the resolution of sparse linear systems on emerging computing platforms equipped with accelerators.
Within this project, we experimented with StarPU on sparse linear algebra applications, and addressed challenges in managing very small tasks and memory usage. We also started filling the gap between theoretical scheduling and practical scheduling within a runtime system, and eventually started addressing large-scale platforms.
- ANR SONGS, the goal was to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems.
Within this project, we added simulation support to StarPU thanks to SimGrid, which makes it possible to run completely reproducible task-based experiments, and even to extrapolate performance capabilities.
- HPC-GA, the goal was to evaluate the functionalities provided by runtime systems for geophysics applications, in order to exploit today's heterogeneous supercomputers, and to design new methods and mechanisms for efficient scheduling and clever data distribution.
Within this project, we confronted our views on the dynamic execution of threads and tasks.
- ANR/JST FP3C, the goal was to establish software technologies, languages, and programming models to explore extreme performance computing beyond petascale computing, on the road to exascale computing.
Within this project, we collaborated with the University of Tsukuba on porting the XcalableMP compiler on top of StarPU, typically for seismic simulation.
- EU STREP PEPPHER, the goal was to devise a unified framework for programming and optimizing applications for architecturally diverse, heterogeneous many-core processors, to ensure performance portability.
Within this project, we integrated the use of StarPU in SkePU, a skeleton-based programming environment from Linköping University, and in a pipelined execution framework from the University of Vienna. We integrated an accelerator simulator from Movidius into StarPU, and ported StarPU on top of the Intel MIC processor, now known as the Xeon Phi.
- ANR MediaGPU, the goal was to design and implement new mathematical and algorithmic models to handle large-scale or high-resolution multimedia content processing through GPU execution.
Within this project, we collaborated with Institut Telecom on integrating StarPU with OpenGL graphical rendering, which required introducing on-the-fly data conversion and the corresponding scheduling trade-offs.
- ANR ProHMPT, the goal was to express and extract elementary application tasks with semantic annotations rich enough to allow precise application driving by dynamic scheduling code, with precise and permanent feedback on the behavior of the software stack during computation.
This project laid the groundwork for the StarPU runtime system. We collaborated with CAPS on combining the static and dynamic functionalities of their HMPP compiler with the StarPU runtime system.
Previous work: modeling architectural structures, hwloc
From my work on machine hierarchy described below, we extracted a software component, hwloc, which abstracts the details of detecting and representing the hierarchy of a machine, modeled as an annotated tree. Computation software can thus easily and portably manipulate "cores" and "sockets" explicitly, but also treat the machine as a generic hierarchy, without caring about architectural details. This component is now used by all the main implementations of the MPI communication interface and by numerous computation projects, and it is thus installed in the majority of computing centers.
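As a small illustrative sketch (error checking omitted), a program can load the topology and walk the annotated tree without hard-coding any architectural detail:

    /* Sketch: detect the machine and walk its hierarchy with hwloc. */
    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;

        /* Detect the current machine and build the annotated tree. */
        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        printf("%d cores\n",
               hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE));

        /* Walk from core 0 up to the root, printing each level
         * (core, cache, package, machine, ...). */
        hwloc_obj_t obj = hwloc_get_obj_by_type(topology,
                                                HWLOC_OBJ_CORE, 0);
        while (obj) {
            char type[64];
            hwloc_obj_type_snprintf(type, sizeof(type), obj, 0);
            printf("level above core 0: %s\n", type);
            obj = obj->parent;
        }

        hwloc_topology_destroy(topology);
        return 0;
    }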
Previous work: scheduling threads on hierarchical systems, Marcel
The basic idea I developed during my PhD thesis is to provide programmers with a way to express how the threads of their application relate to each other: bubbles. A bubble expresses, for instance, that some threads work on the same set of data or often communicate together, so that they should be scheduled in the same "corner" of the machine, in a hierarchical manner.
I developed an API for manipulating these bubbles at a high level of abstraction. That way, people can experiment with different distribution schedulers without having to care about hardware details, and can really focus on algorithmic issues.
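Marcel's actual function names are not reproduced here; the following self-contained C sketch, with purely hypothetical names, only models the concept: bubbles form a tree of threads, and a scheduler recursively maps that tree onto the machine's own hierarchy:

    /* Illustrative model of the bubble concept (NOT Marcel's API;
     * all names are hypothetical). */
    #include <stdio.h>

    #define MAX_CHILDREN 8

    typedef struct entity {
        const char *name;           /* thread name, or NULL for a bubble */
        struct entity *children[MAX_CHILDREN];
        int nchildren;
    } entity;

    /* Group an entity (thread or sub-bubble) inside a bubble. */
    static void bubble_insert(entity *bubble, entity *e)
    {
        bubble->children[bubble->nchildren++] = e;
    }

    /* Hypothetical "spread" scheduler: walk the bubble tree, giving
     * each sub-bubble its own sub-"corner" of the machine, so that
     * related threads land close to each other. */
    static void spread(entity *e, int level, int corner)
    {
        if (e->name) {
            printf("thread %s -> corner %d (level %d)\n",
                   e->name, corner, level);
            return;
        }
        for (int i = 0; i < e->nchildren; i++)
            spread(e->children[i], level + 1, corner * MAX_CHILDREN + i);
    }

    int main(void)
    {
        entity t1 = { "t1" }, t2 = { "t2" }, t3 = { "t3" };
        entity inner = { 0 }, outer = { 0 };

        /* t1 and t2 share data: keep them together in one bubble. */
        bubble_insert(&inner, &t1);
        bubble_insert(&inner, &t2);
        /* t3 only loosely relates to them. */
        bubble_insert(&outer, &inner);
        bubble_insert(&outer, &t3);

        spread(&outer, 0, 0);
        return 0;
    }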
I have developped "bubble schedulers" that manipulates such hierarchy of bubbles: spreading the computation load while keeping affinities into account, gang scheduling, work stealing. Trainees could experiment some other strategies: favoring affinities above all, taking into account the size of data, how it is shared and the access rate, ... All this in a way that can automatically adapt itself to any hierarchical machine! The PhD thesis of François Broquedis developped these schedulers, experimenting them with OpenMP applications.
My PhD thesis is available as a PDF (French), as well as the slides of the defense (English).
This was developed within Marcel, the efficient, portable, and flexible thread library of the PM2 project.
Past projects related to Marcel:
Other projects
- The BrlAPI project, initially a mere undergrad project, has become the de facto standard for applications to take control of a braille display without having to care about driver details (similarly to X11). See BrlAPI: Simple, Portable, Concurrent, Application-level Control of Braille Terminals (ICTA 2007).
- 2nd year internship: "Developing a software tool for precise kernel measurements"
- 1st year internship: "Distributed OpenGL" report (French), and a paper submitted to Commodity-based Clusters for Visualization 02 (ccviz02): "Unreliable Transport Protocol for Commodity-based OpenGL Distributed Visualization", slides