In Austin, TX for #micro24.

I'll be presenting our work on exploiting the ARMv8-A contiguous bit TLB coalescing feature with Elastic Translations [1].
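For context on the hardware feature (a rough sketch of my own, not from the ET work itself): in VMSAv8-64 stage-1 page descriptors, bit 52 is the Contiguous hint, which marks an entry as part of a naturally aligned run of mappings (16 entries at a 4 KB granule) that the TLB may coalesce into a single entry.

```python
# Rough sketch (my own illustration, not the Elastic Translations mechanism):
# decoding the ARMv8-A Contiguous hint (descriptor bit 52) from a stage-1
# 4 KB page descriptor. The descriptor value below is made up.

CONTIG_BIT = 1 << 52                  # Contiguous hint in VMSAv8-64 descriptors
OA_MASK    = 0x0000_FFFF_FFFF_F000    # output-address bits [47:12] for a 4 KB page

def describe_pte(pte: int) -> str:
    """Describe a level-3 (4 KB page) descriptor."""
    if pte & 0b11 != 0b11:            # bits [1:0] == 0b11 -> valid page descriptor
        return "invalid / not a page descriptor"
    oa = pte & OA_MASK
    contig = bool(pte & CONTIG_BIT)
    return f"page @ {oa:#x}, contiguous hint: {contig}"

# With a 4 KB granule, 16 naturally aligned entries that all carry the hint
# can occupy a single coalesced TLB entry.
pte = 0x0000_0000_4000_0703 | CONTIG_BIT
print(describe_pte(pte))
```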

We also have an SRC poster on using eBPF for the Linux mm subsystem [2].
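As a small taste of what eBPF can already observe in mm (a toy sketch of my own, not the approach from the poster): counting page faults per process with a kprobe on handle_mm_fault via bcc.

```python
# Toy sketch (not the poster's approach): observe the Linux mm subsystem with
# eBPF via bcc by counting page faults per process from a kprobe on
# handle_mm_fault. Needs root and the bcc Python bindings.
import time
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(faults, u32, u64);                    // pid -> page-fault count

int count_faults(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    faults.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="handle_mm_fault", fn_name="count_faults")

print("Counting page faults for 5 seconds...")
time.sleep(5)

top = sorted(b["faults"].items(), key=lambda kv: kv[1].value, reverse=True)[:10]
for pid, count in top:
    print(f"pid {pid.value}: {count.value} faults")
```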

#sosp24 is being held in Austin this week as well. Unfortunately, no combined social events.

Will be fun (hopefully) to be in the US for election day.

[1] https://site.psomas.xyz/assets/files/et-poster.pdf
[2] https://site.psomas.xyz/assets/files/ebpfmm-poster.pdf

This Sunday at #SOSP24, we will be hosting a Blueprint tutorial: a hands-on introduction to using Blueprint for microservices research.

More info here: https://blueprint-uservices.github.io/sosp/

Here is a short teaser to whet your appetite :)

https://www.youtube.com/watch?v=SWuAmjosQJA

1/ We're excited to announce that our paper on finding bugs in retry logic was accepted at SOSP'24! #sosp #sosp24 #sigops #acm

I am very proud to announce that our work Tenplex got accepted to SOSP'24!

Tenplex supports dynamic changes to multi-dimensional parallelism at runtime through a new abstraction, the Parallelizable Tensor Collection (PTC), which enables parallelization reconfiguration.

Have a read!
https://arxiv.org/abs/2312.05181

#tenplex #sosp #sosp24

Tenplex: Dynamic Parallelism for Deep Learning using Parallelizable Tensor Collections

Deep learning (DL) jobs use multi-dimensional parallelism, i.e. combining data, model, and pipeline parallelism, to use large GPU clusters efficiently. Long-running jobs may experience changes to their GPU allocation: (i) resource elasticity during training adds or removes GPUs; (ii) hardware maintenance may require redeployment on different GPUs; and (iii) GPU failures force jobs to run with fewer devices. Current DL frameworks tie jobs to a set of GPUs and thus lack support for these scenarios. In particular, they cannot change the multi-dimensional parallelism of an already-running job in an efficient and model-independent way. We describe Tenplex, a state management library for DL systems that enables jobs to change their parallelism dynamically after the GPU allocation is updated at runtime. Tenplex achieves this through a new abstraction, a parallelizable tensor collection (PTC), that externalizes the job state during training. After a GPU change, Tenplex uses the PTC to transform the job state: the PTC repartitions the dataset state under data parallelism and exposes it to DL workers through a virtual file system; and the PTC obtains the model state as partitioned checkpoints and transforms them to reflect the new parallelization configuration. For efficiency, Tenplex executes PTC transformations in parallel with minimum data movement between workers. Our experiments show that Tenplex enables DL jobs to support dynamic parallelization with low overhead.
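To make the PTC idea concrete, here is a toy sketch of my own (not the Tenplex API): model state held as named tensors outside the workers, merged from the old checkpoint shards and re-split for a new parallelism degree.

```python
# Toy sketch of the PTC idea (my own illustration, not the Tenplex API):
# model state lives outside the workers as named tensors, so after a GPU
# allocation change the old shards can be merged and re-partitioned for the
# new parallelism configuration.
import numpy as np

def merge_shards(shards):
    """Rebuild full tensors from per-worker shards (partitioned along axis 0)."""
    names = shards[0].keys()
    return {name: np.concatenate([s[name] for s in shards], axis=0)
            for name in names}

def repartition(full_state, new_degree):
    """Split every tensor along axis 0 into new_degree shards, one per worker."""
    split = {name: np.array_split(t, new_degree, axis=0)
             for name, t in full_state.items()}
    return [{name: parts[i] for name, parts in split.items()}
            for i in range(new_degree)]

# Example: go from 2-way to 3-way partitioning after a GPU change.
old_shards = [
    {"embedding": np.ones((3, 4)), "head": np.zeros((6, 2))},
    {"embedding": np.ones((3, 4)), "head": np.zeros((6, 2))},
]
full = merge_shards(old_shards)               # embedding: (6, 4), head: (12, 2)
new_shards = repartition(full, new_degree=3)  # embedding shards of (2, 4) each
print([s["embedding"].shape for s in new_shards])
```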
