35 Debian LTS advisories were released in February, fixing 527 CVEs across various packages. These include security fixes for gnutls28, xrdp, ClamAV, tomcat9, zabbix, the Linux kernel, ceph, glib2.0, MUNGE and many more.

Debian LTS contributors also prepared updates for more recent releases: Debian 12 (#bookworm), Debian 13 (#trixie) and Debian unstable. In addition, improvements were made to documentation and tooling used by the team.

Read the full report at https://www.freexian.com/blog/debian-lts-report-2026-02/?utm_source=mastodon&utm_medium=social

This work is funded by Freexian's Debian LTS offering. Become a sponsor of Debian LTS (https://www.freexian.com/lts/debian/?utm_source=mastodon&utm_medium=social) and enjoy the benefits (https://www.freexian.com/lts/debian/details/#benefits).

#debian #debianlts #freexian #ceph #zabbix

Monthly report about Debian Long Term Support, February 2026

The Debian LTS Team, funded by [Freexian's Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for February.

Activity summary

During the month of February, 20 contributors were paid to work on Debian LTS (links to individual contributor reports are located below). The team released 35 DLAs fixing 527 CVEs. We also welcomed Arnaud Rebillout to the team and had to say farewell to Roberto, who left after more than nine years with the team.

Freexian

Ceph at work... judging by the latencies, it looks to me like it isn't really using the NVMe for WAL/DB.
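One way to double-check that suspicion (a minimal sketch, assuming the ceph CLI with admin credentials and BlueStore OSDs; the metadata field names are what recent releases report, so verify against yours):

```python
#!/usr/bin/env python3
"""Sketch: check which OSDs report a dedicated BlueFS DB device.

Assumes the `ceph` CLI is on PATH with admin credentials; the field
names (bluefs_dedicated_db, bluefs_db_devices) are what current
BlueStore OSDs expose, but may differ between releases.
"""
import json
import subprocess

# `ceph osd metadata` without an ID returns metadata for all OSDs.
raw = subprocess.check_output(["ceph", "osd", "metadata", "--format", "json"])
for osd in json.loads(raw):
    osd_id = osd.get("id")
    dedicated = osd.get("bluefs_dedicated_db", "0")
    db_dev = osd.get("bluefs_db_devices", "-")
    print(f"osd.{osd_id}: dedicated_db={dedicated} db_devices={db_dev}")
```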

#ceph

Another question regarding #Ceph:

Is it better to have hyper-threading (SMT) enabled on AMD Epyc CPUs or not?

System has 32c/64t with 44x OSDs and 12x NVMe for system & WAL/DB

- Enable it. More cores/threads = better: 50%
- Disable it. There is no speed gain: 25%
- See comment: 25%
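For anyone wanting to benchmark both options rather than vote: on Linux the current SMT state is visible in sysfs, and on recent kernels it can even be toggled at runtime. A small sketch (stdlib only; paths assume a recent kernel):

```python
#!/usr/bin/env python3
"""Sketch: read the kernel's SMT state on Linux.

/sys/devices/system/cpu/smt/{active,control} exist on recent kernels;
writing "on"/"off" to `control` (as root) toggles SMT at runtime, so
both poll options can be benchmarked without a reboot or BIOS change.
"""
from pathlib import Path

SMT = Path("/sys/devices/system/cpu/smt")

active = (SMT / "active").read_text().strip()    # "1" if SMT threads are online
control = (SMT / "control").read_text().strip()  # on / off / forceoff / notsupported
print(f"SMT active={active}, control={control}")
```
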
Five-node #Proxmox cluster? Upgraded from 8 to 9 without issue.
Except for a dead Marvell 88SE9230 SATA controller card, which degraded the #Ceph cluster for three days until I borrowed a replacement card from a relative's employer.
Ceph restored to a healthy state in ten-ish minutes, and is happy again.
Now waiting for the actual replacement to arrive, so I can begin my storage migration journey.

Well then... the #Ceph cluster at work is installed now, somehow.

Now it's time to gather experience with Ceph that doesn't build on Proxmox...

This is going to be fun...

Well, I have a plan going forward for my #Proxmox #Ceph storage servers.
Migrating all servers (months apart) to 8-bay hot-swap cases with a backplane, and adding my remaining 6 #SMR HDDs to them (2 for each of the three servers) for low-performance #RadosGW/S3 glacier-type storage (with an SSD DB pool, but that's not the point).
Reducing the footprint of bulk/backup storage on my expensive CMR drives.
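Once the slow pool exists, the client side is just an S3 StorageClass header; a minimal boto3 sketch, where the endpoint, credentials, bucket and the "COLD" storage-class name are all placeholders, and the class is assumed to already be mapped to the SMR-backed placement via radosgw-admin:

```python
#!/usr/bin/env python3
"""Sketch: put an object into a custom RGW storage class via S3.

Assumptions: boto3 installed, an RGW reachable at the URL below,
valid credentials, and a storage class named "COLD" defined in the
zonegroup placement (RGW accepts names beyond the AWS-defined ones).
"""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.local",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(
    Bucket="backups",
    Key="2026-02/dump.tar.zst",
    Body=open("dump.tar.zst", "rb"),
    StorageClass="COLD",  # routes the object to the SMR-backed data pool
)
```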

#Ceph being really happy about a dead #Marvell 88SE9230 SATA controller.

#Proxmox

Hey, do you know a thing or two about #email and mailing list management, and do you (or does your organisation) use #Ceph? This is your time to shine:

https://lists.ceph.io/hyperkitty/list/[email protected]/thread/LJKZPLNNHHU5RKZY2WCWN3ZXCSJVRWZK/

(Please boost for reach, thanks!)

Installing #Ceph on #Debian: the docs at https://docs.ceph.com/en/latest/cephadm/install/ make it look easier than it actually is.

If I take cephadm from Debian, it wants to set up the repo for trixie for me. But at https://download.ceph.com/debian-squid/dists/ Ceph only offers bookworm.

Installing cephadm via curl, which then pulls an rpm noarch onto the box, isn't exactly my favourite approach...

Oh well... let's see what else this journey brings...
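To see what the upstream repo actually offers before pointing sources.list anywhere, a throwaway sketch (stdlib only; the href regex is a guess at the directory-index format and may need adjusting):

```python
#!/usr/bin/env python3
"""Sketch: list the Debian releases available in Ceph's squid repo.

Scrapes the plain directory index at download.ceph.com; if the index
layout changes, the regex below breaks.
"""
import re
import urllib.request

URL = "https://download.ceph.com/debian-squid/dists/"

with urllib.request.urlopen(URL) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Directory entries appear as links ending in "/" (the parent link "../"
# contains a dot and is excluded by the character class).
dists = sorted({m for m in re.findall(r'href="([^"/.]+)/"', html)})
print("available dists:", ", ".join(dists) or "none found")
```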

Using Cephadm to Deploy a New Ceph Cluster - Ceph Documentation

I'm now collecting metrics from my #Ceph, stuffing them into Prometheus, and having Grafana paint pretty pictures for me. The full works, with certificates and basic auth... Staging for now, but prod should follow shortly.
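A quick smoke test for that kind of setup (a sketch with placeholder URL, CA path and credentials; it assumes the mgr prometheus exporter sits behind a TLS/basic-auth reverse proxy, since the module itself serves plaintext on :9283 by default):

```python
#!/usr/bin/env python3
"""Sketch: fetch Ceph mgr exporter metrics through TLS + basic auth.

Placeholders: metrics URL, CA bundle and credentials. TLS and basic
auth are assumed to come from a reverse proxy in front of the mgr
prometheus module.
"""
import base64
import ssl
import urllib.request

URL = "https://ceph-metrics.example.local/metrics"  # hypothetical proxy URL
USER, PASSWORD = "prometheus", "changeme"

ctx = ssl.create_default_context(cafile="/etc/ssl/certs/internal-ca.pem")
req = urllib.request.Request(URL)
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

with urllib.request.urlopen(req, context=ctx) as resp:
    body = resp.read().decode()

# Print only ceph_health_status, the cluster's overall health gauge.
for line in body.splitlines():
    if line.startswith("ceph_health_status"):
        print(line)
```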

Do you think something like this is worth writing up on the #Blog? 🤔