The ultimate #NAS-setup is a single-node #Ceph, is that correct?
#ZFS can't rebalance data (unless using dangerous rewriting)
#BTRFS shouldn't RAID5.
#LVM can't rebalance data.

#CephFS can mix and match any combination of striping, mirroring and erasure coding on the same set of hard drives.

Windows Storage Spaces is like LVM, but better; CephFS makes all of this available on Linux. Have I missed any FS that is as flexible as either Storage Spaces or CephFS?
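
To make that mix-and-match concrete, here's a rough ceph-CLI sketch of one replicated and one erasure-coded data pool backing the same CephFS on the same OSDs, with placement picked per directory. All the names (tank, tank_meta, tank_rep, tank_ec, the ec21 profile, /mnt/tank/bulk) are made up for illustration:

```
# Replicated metadata pool and default (replicated) data pool
ceph osd pool create tank_meta 16
ceph osd pool create tank_rep 32
ceph fs new tank tank_meta tank_rep

# 2+1 erasure-coded pool on the same OSDs, added as a second data pool
ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
ceph osd pool create tank_ec 32 32 erasure ec21
ceph osd pool set tank_ec allow_ec_overwrites true   # needed before CephFS can write to an EC pool
ceph fs add_data_pool tank tank_ec

# On a mounted client: everything created under /bulk lands on the EC pool
setfattr -n ceph.dir.layout.pool -v tank_ec /mnt/tank/bulk
```

New files inherit the layout of the directory they're created in, so one filesystem can spread different directories across different redundancy schemes.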

#Linux #HomeLab #SelfHost

Wheee, after some database upgrades and a whole pile of other things, lots of things updated today.

Big find of the day: kernel-6.17.11-200.fc42.x86_64 hits a `kernel: BUG: kernel NULL pointer dereference` with #CephFS #Ceph. Backing down to kernel-6.16.7-200.fc42.x86_64 got me back up on that machine, but it was weird seeing `ls` get killed just trying to see what was in a directory. That caused a whole MESS of debug issues with everything that was trying to use that mount, too.

Fun times.

Introducing Storage Management for Proxmox Nodes & Clusters with the new Ansible Module proxmox_storage • gyptazy - The DevOps Geek

Managing Proxmox storage resources at scale has traditionally been a cumbersome task. In clustered environments where consistency, reliability, and speed are critical, manually adding or removing storage definitions on each node wastes valuable time and introduces the risk of human error. Imagine configuring NFS shares, CephFS mounts, iSCSI targets or Proxmox Backup Server repositories across…

Second #server rack installed in the #homelab. Most important takeaway is that there is pretty RGB in it. For the nerds: I've decommissioned a lot of old 11th and 12th gen Dell servers. Currently running a SuperMicro H11 Epyc server in a CSE-826 chassis, a SuperMicro 4028GR-TRT2, and still a Dell R720 (last to be replaced) in a #Proxmox HA cluster. Currently moving all my media storage from #ZFS to #CephFS with 2+1 erasure coding, with replica 3 for the more important data. Eventually I'd like to run five nodes specifically for #Ceph so I can run 3+2 EC and be more comfortable, but with mostly read-only media storage, some downtime when I need to reboot a node doesn't bother me too much.
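
For context, the difference between the 2+1 layout I'm on now and the 3+2 I want is just the erasure-code profile — a rough sketch (profile and pool names made up, host assumed as the failure domain):

```
# 2+1: needs at least 3 hosts, survives 1 down, ~1.5x raw space overhead
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host

# 3+2: needs at least 5 hosts, survives 2 down, ~1.67x raw space overhead
ceph osd erasure-code-profile set ec-3-2 k=3 m=2 crush-failure-domain=host

# 3-way replicated pool for the more important data (3 is usually the default size anyway)
ceph osd pool create important-data 32
ceph osd pool set important-data size 3
```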

RE:
https://transfem.social/notes/a1ceiuaamq9700l3
Brandon T. Nguyen :v_bi: (@btn)

Current state of the #homelab affectionately named Neuro (she/her). Currently running a whitebox 1st gen AMD Epyc, Dell R720, and Dell R620 as my #Proxmox Hypervisor nodes in HA using #Ceph as the backend storage all networked together with 2x40Gbps per node. Planning on upgrading the aging Dell nodes with R740XD for U.2 NVMe storage. Definitely hard to get a snapshot of the datacenter when it's always in a state of WIP. (📎3)

#Proxmox VE - It's a question that comes up a lot 😉
No, I don't have distributed #CephFS storage (yet), and that goes for every cluster❗
In the meantime, I'm getting along just fine without it 👍
↪️ https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
Deploy Hyper-Converged Ceph Cluster - Proxmox VE

Now that my #ceph cluster is up and running, I am trying to create a new user for mounting #cephfs on Ubuntu. I have the mount working with the admin user, but I'm confused about setting up a new user...

Do I need to create a keyring file and a secret file?

Does anyone have a blog post that might be useful?
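
What I've pieced together so far — not sure it's the right way, and the client name `media`, the filesystem name `cephfs` and the paths are just placeholders:

```
# Creates the client and prints a keyring (used by the ceph CLI when acting as that user)
ceph fs authorize cephfs client.media / rw > /etc/ceph/ceph.client.media.keyring

# The kernel mount only needs the bare key, so dump it into a secret file
ceph auth get-key client.media > /etc/ceph/media.secret

# On the Ubuntu box (ceph-common installed, monitor address known)
mount -t ceph <mon-host>:6789:/ /mnt/media -o name=media,secretfile=/etc/ceph/media.secret
```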

#homelab

CERN's 1-exabyte data centre: how the data is stored

Big science is impossible without big computing. At least, that statement holds true in nuclear physics. The lion's share of the most powerful supercomputers are installed in scientific institutions, including universities. Several branches of modern science depend directly on computation and on the analysis of the big data gathered from observations. For example, the European Organization for Nuclear Research (CERN) runs one of the largest data centres in the world. Without that computing cluster we would still be hunting for the Higgs boson, and the Standard Model would have remained incomplete.

https://habr.com/ru/companies/ruvds/articles/822681/

#ruvds_статьи #поиск_лекарств #CERN #ЦЕРН #БАК #Большой_взрыв #зептосекунда #JBOD #Ultrastar_Data102 #CMS #ALICE #ATLAS #LHCb #бозон_Хиггса #European_Grid_Infrastrucrure #OpenStack #CephFS #CASTOR #XRootD #Swift #RADOS_block_devices #RBD #сверхтекучий_гелий

Do I know anyone who's successfully gotten cephfs exported from a proxmox cephfs cluster into a VM?

I can get it mounted, all good. It can see the contents there, yay! When it goes to write, it gets an "Operation not permitted" error but creates an empty file with the right name.

What weird permissions bit have I not set right?
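
My current guess: creating the file only talks to the MDS, but writing data goes to the OSDs, so a client missing rw on the data pool gives exactly this "empty file, then Operation not permitted" behaviour. Something like this is what I'm poking at (client name made up, filesystem assumed to be named cephfs):

```
# What is this client actually allowed to do?
ceph auth get client.vmguest

# If the osd line doesn't cover the data pool(s), widen it, e.g. via the cephfs tag
ceph auth caps client.vmguest \
    mds 'allow rw' \
    mon 'allow r' \
    osd 'allow rw tag cephfs data=cephfs'
```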

#Ceph #CephFS #Proxmox

Fun and games trying to get CephFS working today… decided I'd use that to keep a copy of some of my archives on the cluster. (Might as well use those 10 2TB SSDs for something.)

Well, turns out the stock instructions for setting up a user for CephFS do not work. You will be allowed to mount a filesystem, you'll be able to create directories, and *empty* files, but you'll not be able to put any *data* in those files.

My file system (unimaginatively named `cephfs`) uses pools named `data` and `metadata` for storage.

These wound up being the user permissions that worked:

```
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=metadata, allow rw pool=data"
```
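
If anyone wants to set that up from scratch, `ceph auth get-or-create` can apply the same caps in one go — the client name `archive` is made up here, and the caps are copied verbatim from above:

```
ceph auth get-or-create client.archive \
    mds 'allow rw' \
    mon 'allow r' \
    osd 'allow rw tag cephfs data=metadata, allow rw pool=data' \
    -o /etc/ceph/ceph.client.archive.keyring
```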

#Ceph #CephFS