My new daily backup script: pg_dump with zstd compression level 19.
docker exec \
ak-postgres-1 \
pg_dump -U umeyashiki umeyashiki_akkoma | \
nice -n 19 \
ionice -c 3 \
chrt --idle 0 \
zstd -T0 -19 --rsyncable -q > "$BACKUP_DIR/db_latest.sql.zst";
Today I learned something new about scheduling priority. Since zstd compression is very CPU-intensive, run it at low priority so it doesn’t slow the entire system down during compression.
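A quick way to sanity-check that the chained wrappers actually take effect is to run a throwaway child under the same stack and read back what the kernel applied (a sketch; assumes util-linux provides chrt and ionice):

```shell
# Run a trivial child under each wrapper and report the result.
# SCHED_IDLE and the idle I/O class can be set without root,
# since they only ever lower the process's priority.
nice -n 19 sh -c 'echo "nice value: $(awk "{print \$19}" /proc/$$/stat)"'
ionice -c 3 sh -c 'ionice -p $$'        # should report the idle class
chrt --idle 0 sh -c 'chrt -p $$'        # should report SCHED_IDLE
```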
Commands that precede zstd here are:
nice -n 19 [cmd]: Run the command with the lowest CPU scheduling niceness (19).
ionice -c 3 [cmd]: Run the command in the "idle" I/O scheduling class, so it only gets disk time when no other program needs it.
chrt --idle 0 [cmd]: Set scheduling policy to SCHED_IDLE (scheduling very low priority jobs).

By chaining nice, ionice, and chrt together before the zstd command, the script forces the compression process to run with the absolute lowest possible priority for both the CPU and the disk.

References:
man 1 chrt
man 1 nice
man 1 ionice

Interesting blog post about text classification using compression, specifically the new "compression.zstd" module contributed by @emmatyping

Python 3.14 introduced the compression.zstd module, a standard-library implementation of Facebook’s Zstandard (zstd) compression algorithm. The algorithm was developed a decade ago by Yann Collet, who writes a blog devoted to compression algorithms. I am not a compression expert, but zstd caught my eye because it supports incremental compression: you can feed it data in chunks, and it maintains internal state between them. That makes it particularly well suited to compressing small pieces of data, and perfect for the classify-text-via-compression trick I described in a blog post five years ago.
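The trick is easy to sketch with any compressor; here it is with stdlib zlib so it runs on any Python version (on 3.14+, compression.zstd would slot in the same way, with its incremental API keeping state between chunks). The labels and corpora below are made up for illustration:

```python
# Sketch of "classification by compression": a text belongs to the corpus
# that helps compress it the most. zlib stands in for compression.zstd
# here so the example runs on any Python version.
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def classify(text: str, corpora: dict) -> str:
    """Return the label whose corpus makes `text` cheapest to compress."""
    t = text.encode()
    best_label, best_cost = None, float("inf")
    for label, corpus in corpora.items():
        c = corpus.encode()
        # Extra bytes needed to encode `text` once the compressor has
        # already seen the corpus: shared vocabulary keeps this small.
        cost = compressed_size(c + t) - compressed_size(c)
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

corpora = {
    "python": "def import class return yield lambda self raise " * 50,
    "shell":  "echo grep awk sed pipe stdin stdout exit status " * 50,
}
print(classify("def main(self): return lambda x: x", corpora))
print(classify("grep foo | sed s/a/b/g | awk '{print $1}'", corpora))
```

With a real zstd incremental compressor you would feed the corpus chunk by chunk and reuse the compressor state, instead of re-compressing the concatenation each time.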
[Translation] Writing your own git: a minimal implementation in Rust
Version control was a “black box” for me for a long time: I didn’t understand exactly how files are stored, how diffs are produced, or what commits consist of. And since I love reinventing the wheel, why not try implementing git myself?
https://habr.com/ru/companies/cloud4y/articles/990052/
#git #rust #version_control #sha256 #zstd #hashing #petproject #version_control_systems #git_objects #commit
Many people seem weirdly suspicious about Google's #Brotli compression while being weirdly chill about Facebook's #ZSTD, to the point of commenting on posts about Brotli compression being added to things to the effect of, "This is a conspiracy by Google, they clearly should've chosen ZSTD instead". What's up with that? Is Google really so much less scary than 𝘍𝘢𝘤𝘦𝘣𝘰𝘰𝘬?
(This is a subtweet about a certain HN post about Brotli compression coming to #PDF)
I wish my machine had more RAM for zstd level 22
If you use #btrfs on kernels ranging from 6.12 to 6.19 and get the error:
VFS: Unable to mount root fs on unknown_block(0,0)
Add the 'btrfs' and 'microcode' hooks to /etc/mkinitcpio.conf via chroot and rerun initramfs generation.
If that doesn't help, install `intel-ucode.img` and add it to the initrd list in your GRUB boot parameters.
If that still doesn't help (happens on the Intel 3770K and older), disable #zstd initramfs compression and use gzip instead.
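For reference, the relevant mkinitcpio settings look roughly like this (hook order is illustrative; adjust to your own setup):

```
# /etc/mkinitcpio.conf — relevant lines only
HOOKS=(base udev autodetect microcode modconf block btrfs filesystems fsck)
COMPRESSION="gzip"    # instead of the zstd default

# then regenerate all initramfs images:
#   mkinitcpio -P
```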
You are welcome
This!
But sadly this doesn't work since I do #zstd...
https://mastodon.social/@itsfoss/115588332469221299
@kde I see, the "Compress" services appear to be hard-coded in the application code.
https://invent.kde.org/utilities/ark/-/blob/master/app/compressfileitemaction.cpp#L36