The #Snakemake plugin for #SLURM on #HPC clusters will soon support job arrays:

1057691_1 2dcf44cc-+ rule_map_reads_wild+ 32 COMPLETED 0:0
1057691_2 2dcf44cc-+ 32 RUNNING 0:0
1057691_3 2dcf44cc-+ 32 RUNNING 0:0
1057691_4 2dcf44cc-+ 32 RUNNING 0:0
1057691_5 2dcf44cc-+ 32 RUNNING 0:0
1057691_6 2dcf44cc-+ 32 RUNNING 0:0

Hope to do more during next week's #SnakemakeHackathon2026 / #SnakemakeHackathon

RE: https://fediscience.org/@snakemake/115611862667755622

Now, this is huge!

Thanks to a contribution from Cade Mirchandani (Santa Cruz, CA), whom I met at this year's #SnakemakeHackathon, users can now supply a partition profile. So, instead of wrangling #SLURM partition information into a workflow profile (indicated with --workflow-profile), we can now keep this information in one global file.

I also added a time conversion function, so that SLURM's time format is obeyed.
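A minimal sketch of what such a conversion has to handle — `sbatch --time` accepts "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes", and "days-hours:minutes:seconds" (this is my own illustration, not the plugin's actual code):

```python
def parse_slurm_time(t: str) -> int:
    """Convert a SLURM time string to whole minutes, rounding up.

    Accepted forms (per `sbatch --time`): "minutes", "minutes:seconds",
    "hours:minutes:seconds", "days-hours", "days-hours:minutes",
    "days-hours:minutes:seconds".
    """
    days = 0
    if "-" in t:
        d, t = t.split("-", 1)
        days = int(d)
        # After a day component, the fields are hours[:minutes[:seconds]].
        parts = [int(p) for p in t.split(":")]
        h, m, s = (parts + [0, 0, 0])[:3]
    else:
        parts = [int(p) for p in t.split(":")]
        if len(parts) == 1:      # "minutes"
            h, m, s = 0, parts[0], 0
        elif len(parts) == 2:    # "minutes:seconds"
            h, m, s = 0, parts[0], parts[1]
        else:                    # "hours:minutes:seconds"
            h, m, s = parts
    total_seconds = ((days * 24 + h) * 60 + m) * 60 + s
    return -(-total_seconds // 60)  # ceiling division
```

So `"90"`, `"01:30:00"`, and `"0-01:30"` all come out as 90 minutes, while `"1-02:30:00"` is 1590.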

There are several other development needs before we continue in this direction (e.g. parsing SLURM partition information directly). A write-up of all this for non-users, e.g. administrators, is due too.

In any case, I think this merits a new major version.

#Snakemake #HPC #ReproducibleComputing