A few #slurm tidbits:

Total submitted jobs per user, sorted:
```
squeue | tail -n +2 | sed 's/ \+/\t/g' \
| cut -f5 | sort | uniq -c | sort -hr
```

Running jobs per user:
```
squeue | grep ' R ' | sed 's/ \+/\t/g' \
| cut -f5 | sort | uniq -c | sort -hr
```

Pending jobs per user:
```
squeue | grep ' PD ' | sed 's/ \+/\t/g' \
| cut -f5 | sort | uniq -c | sort -hr
```

#bash #hpc

@plantarum that's so much less stupid than the way I was doing this previously!
@plantarum one useful one I have sums the nodes or cores per user in state R or PD (and PD alone, but excluding jobs that are only waiting on a Dependency).
For that one I can't see how to do it quite as cleanly as yours, but I'm sure something is possible without much effort.
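A hedged sketch of that idea (the field flags `%u`, `%C`, and `%r` are taken from `squeue(1)`; `-h` drops the header — this is my guess at the recipe, not a tested one):

```shell
# Pending CPUs per user, excluding jobs held only by a Dependency.
# %u = user, %C = CPU count, %r = pending reason (see squeue(1));
# matching the reason with a regex is an assumption, since the exact
# reason string can vary between Slurm versions.
squeue -t PD -h -o '%u %C %r' \
| awk '$3 !~ /Dependency/ {a[$1] += $2}
       END {for (u in a) print a[u], u}' \
| sort -hr
```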

@plantarum I certainly wouldn't condone putting squeue in a for loop per user, because I'm told too many squeue commands isn't healthy.

Nope, certainly I wouldn't have done that 👀

@hattom

Any day I find a problem I can solve with awk makes me feel extra clever!

@hattom

Using a few flags to `squeue` and some awk magic:

Total CPUs running per user:
```
squeue -t 'R' -o '%u %C' | tail -n +2 \
| awk '{a[$1] += $2} END {for (u in a) print a[u], u}' \
| sort -hr
```

And for PD:
```
squeue -t 'PD' -o '%u %C' | tail -n +2 \
| awk '{a[$1] += $2} END {for (u in a) print a[u], u}' \
| sort -hr
```
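The same shape works for nodes rather than CPUs; `%D` is squeue's node-count field (this substitution is mine, not from the thread):

```shell
# Allocated nodes summed per user for running jobs (%D = node count).
squeue -t R -h -o '%u %D' \
| awk '{a[$1] += $2} END {for (u in a) print a[u], u}' \
| sort -hr
```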

GitHub - OleHolmNielsen/Slurm_tools: My tools for the Slurm HPC workload manager

@admin @plantarum Here's one for summarizing pending job reasons by user, partition, and reason (see alt text for script contents):
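The script itself lives in the post's alt text; as a guess at its shape (field flags assumed from `squeue(1)`, not taken from the actual script):

```shell
# Count pending jobs per (user, partition, reason) -- a sketch, not the
# poster's script. %u = user, %P = partition, %r = pending reason.
squeue -t PD -h -o '%u %P %r' | sort | uniq -c | sort -hr
```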
@admin @plantarum And here's one to list high-level summaries by job status for each partition:
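Again the real script is in the alt text; a hedged sketch of a per-partition status summary (`%P` = partition, `%T` = job state in extended form, per `squeue(1)`):

```shell
# Job counts per partition and state -- a sketch of the idea only.
squeue -h -o '%P %T' | sort | uniq -c | sort -hr
```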
@admin @plantarum I have more, just ask!