@ij Oh, look at that. Haven't heard of "Open-E" in a long time. Had it in use at university (2011?) for #proxmox. We had our share of problems with it, especially when the config of the internal #DRBD got trashed. But it ran for quite a long time, until my successor sensibly replaced it with #ceph.

Cursed homelab update:

I learned a _lot_ about Rook and how it manages PersistentVolumes today while getting my PiHole working properly. (Rook is managed Ceph in Kubernetes)

In Kubernetes, the expectation is that your persistent volume provider has registered a CSI driver (Container Storage Interface) and defined StorageClasses for the distinct "places" where volumes can be. You then create a volume by defining a PersistentVolumeClaim (PVC) which defines a single volume managed by a StorageClass. The machinery behind this then automatically creates a PersistentVolume to define the underlying storage. You can create PersistentVolumes manually, but this isn't explored much in the documentation.
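
For reference, dynamic provisioning boils down to a claim like this minimal sketch (the StorageClass and claim names are placeholders, not from my actual setup):

```yaml
# A minimal sketch of dynamic provisioning: the claim asks a StorageClass for
# 5Gi, and the CSI driver behind it creates the matching PersistentVolume
# automatically. "some-storage-class" and "pihole-data" are placeholder names.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: some-storage-class
```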

In Rook, this system is mapped onto Ceph structures using a bunch of CSI drivers. The default configuration defines StorageClasses for RBD images and CephFS filesystems. There are also CSI drivers for RGW and NFS backed by CephFS. You then create PVCs the normal way using those StorageClasses and Rook takes care of creating structures where required and mounting those into the containers.
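
Rook's example manifests ship StorageClasses roughly like the one below for CephFS. I'm writing this from memory, so treat rook-cephfs, myfs, myfs-replicated and the rook-ceph namespace as the defaults from the example manifests rather than something to copy blindly:

```yaml
# Roughly Rook's example CephFS StorageClass; names are the example defaults.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com  # <operator namespace>.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
```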

However, there's another mechanism which is much more sparsely documented and isn't part of the default setup: "static provisioning". You see, Ceph clusters are used to store stuff for systems that aren't Kubernetes, and people tend to organise things in ways that the "normal" CSI driver + StorageClass + PVC mechanism can't understand and shouldn't manage. So if you want to share that data with some pod, you need to create specially structured PersistentVolumes to map those structures into Kubernetes.
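
A static CephFS PersistentVolume ends up looking something like this sketch (fsName, rootPath, the secret name and the driver's namespace prefix are placeholders for whatever already exists in your cluster):

```yaml
# Sketch of a static PersistentVolume mapping an existing CephFS directory
# into Kubernetes; all names and paths are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-media
spec:
  capacity:
    storage: 1Ti                             # informational only for static volumes
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain      # deleting the PV leaves the data alone
  storageClassName: ""                       # keeps the dynamic provisioners out of it
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com    # <operator namespace>.cephfs.csi.ceph.com
    volumeHandle: existing-media             # any unique string
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      staticVolume: "true"
      rootPath: /media                       # the existing directory in CephFS
    nodeStageSecretRef:
      name: cephfs-static-user
      namespace: rook-ceph
```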

Once you set up one of these special PersistentVolumes and attach it to a pod using a PVC, you effectively get a "traditional" cephfs volume mount, but using Rook's infrastructure and configuration, so all you need to specify is the authentication data and the details for that specific volume and you're done.
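
The claim side just pins itself to that PersistentVolume by name; with an empty storageClassName, no provisioner tries to create anything (same placeholder names as the sketch above):

```yaml
# PVC bound to the static PV above; placeholder names throughout.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-media
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
  storageClassName: ""
  volumeName: existing-media
```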

The only real complication is that you need a separate secret for this, but chances are you're referencing things in separate places from the "normal" StorageClass stuff and giving Rook very limited access to your storage, so this isn't a big deal.
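
That secret is just the CephX credentials of a dedicated, minimally privileged client, along these lines. The userID/userKey key names follow the ceph-csi convention as I understand it; the values here are obviously fake:

```yaml
# Node-stage secret referenced by the static PV; fake placeholder values.
apiVersion: v1
kind: Secret
metadata:
  name: cephfs-static-user
  namespace: rook-ceph
stringData:
  userID: media-client        # CephX user name without the "client." prefix
  userKey: AQD0000000000000000000000000000000000000000==   # placeholder key
```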

So circling back around to the big question I wanted answers for: Does Rook mess with stuff it doesn't know about in a CephFS filesystem?

No.

If you use the CSI driver + StorageClass mechanism it will only delete stuff that it creates itself and won't touch anything else existing in the filesystem, even if it's in folders it would create or use.

If you use a static volume, then you're in control of everything it has access to and the defaults are set so that even if the PersistentVolume is deleted, the underlying storage remains.

So now onto services that either should be using CephFS volumes or need to access "non-Kubernetes" storage, starting with finding a way to make Samba shares in a container.

#ceph #rook #homelab #kubernetes #CursedHomelab

Rook has released a good one: https://blog.rook.io/rook-v1-17-storage-enhancements-5eb9a6abd1ea

Especially interesting for me is the ability to specify a bucket owner for a new OBC instead of having Rook create a new S3 user for each OBC.
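
If I read the release notes right, that should look roughly like the sketch below. The additionalConfig.bucketOwner field name and the rook-ceph-bucket StorageClass are assumptions on my part, so check the Rook docs before copying:

```yaml
# Untested sketch: an ObjectBucketClaim reusing an existing RGW user as the
# bucket owner. Field and class names are assumptions, not verified.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: shared-bucket
spec:
  generateBucketName: shared-bucket
  storageClassName: rook-ceph-bucket
  additionalConfig:
    bucketOwner: existing-s3-user
```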

Also interesting: MONs behind a k8s service. This could help me avoid hardcoded IPs in my Ceph client configs, making MON relocations a lot simpler.

#HomeLab #Ceph

Rook v1.17 Storage Enhancements - Rook Blog

The Rook v1.17 release is out! v1.17 is another feature-filled release to improve storage for Kubernetes. Thanks again to the community for all the great support in this journey to deploy storage in…

Rook Blog

New blog post: https://blog.mei-home.net/posts/k8s-migration-25-controller-migration/

I like to think that many of my blog posts are mildly educational, perhaps even helping someone in a similar situation.

This blog post is the exception. It is a cautionary tale from start to finish. I also imagine that it might be the kind of post someone finds on page 14 of Google at 3 am and names their firstborn after me.

#HomeLab #Ceph #Blog

Nomad to k8s, Part 25: Control Plane Migration

Migrating my control plane to my Pi 4 hosts.

ln --help

🚨Registration for the #OpenInfraSummit Europe is now LIVE! 🚨

Come talk all things #OpenStack, #Kubernetes, #Linux, #KataContainers, #Ceph, and more!

Join the #OpenInfra community this October in Paris-Saclay to collaborate on the future of #OpenSource infrastructure!

Secure your spot today! 🛫🌍
🔗 https://summit2025.openinfra.org/

OpenInfra Summit Europe 2025

The OpenInfra Summit Europe is headed to France! The event will take place at École Polytechnique Campus in Paris-Saclay, France, on October 17-19, 2025, to bring together OpenInfra community members from across the globe to collaborate, learn, and drive innovation in open infrastructure.

Find out why #Ceph is our CTO's preferred storage solution!

In our latest article, Thibaut Démaret, CTO of Worteks, explores the many advantages of Ceph, a flexible and high-performance #OpenSource storage solution.

👀 Read the full article to learn more and find out why "Ceph, c'est bien" (Ceph is great)!

https://www.worteks.com/blog/Ceph-c-est-bien/

@ow2 @OpenInfra @opensource_experts @osxp_paris

#Infrastructure #Ceph #OpenSource #FreeSoftware #Stockage #Virtualisation

Ceph, c’est bien !

Ceph is great for managing storage in production and for intensive data use

Worteks - Expertise Open Source

One observation from today’s test that I need to figure out:

The Rook operator removed custom labels from the ceph-exporter and csi-provisioner deployments when it was restarted. The annotations were untouched. Need to work out if this is by design or not…

Would it matter if these #rook #ceph deployments are not scaled down?

#homelab #upsScaler

Proxmox Virtual Environment 8.4 is available - LinuxFr.org

News about free software and related topics (DIY, Open Hardware, Open Data, the Commons, etc.), on a contributive French-language site run by a volunteer team, by and for enthusiastic free-software supporters

HPC woes:

5 * MONs with 2 * 100 Gb/s links each
16 OSDs with 2 * 100 Gb/s links each and 1 * 15 TB NVMe each, rated at 5.5 GiB/s for 128 KiB IO writes

fio says 👌
iperf3 says 👌

Ceph:

#ceph #storage #hpc

What is CEPH? The types of open-source CEPH storage
What is Ceph? Let's explore this powerful distributed storage platform:
See the details at: https://hostingviet.vn/ceph-la-gi
#hostingviet #cephlagi #ceph