How to build software-defined storage in SpaceVM

Hi, Habr! My name is Daniil Kiselyov, and I'm a technical support specialist for Space. In this article I'll use a practical example to show how to build software-defined cluster storage in SpaceVM. We'll walk through a typical configuration, keeping in mind that in real production the parameters and architectural decisions may differ.

What business expects from a storage system is simple: data should always be available, and failures or maintenance should not bring virtual machines and services to a halt. That is exactly why software-defined storage has become such a widespread tool. Using SpaceVM as an example, we'll see how to assemble a fault-tolerant clustered SDS in a matter of minutes, one that addresses several key needs at once: it lowers the total cost of ownership and keeps running reliably under real-world operating conditions.

The question of why software-defined storage is needed at all is a fair one. The volume of data that businesses have to store keeps growing, storage capacity has to be expanded constantly, and many companies also have to stay compliant with regulatory requirements on top of that. Meanwhile, the cost of hardware storage arrays and the scale of investment they require tip the scales, and companies start considering a move to SDS. SDS is not just cheaper in general: the savings become even more noticeable when you are dealing with unstructured data. There are other advantages as well: you can abstract away from the hardware platform and finally beat the notorious vendor lock-in (this is especially important in Russia), which makes it much easier for a company to stay independent of vendor sanctions.

https://habr.com/ru/companies/spacevm/articles/995776/

#sds #glusterfs #виртуализация #хранение_данных #инфраструктура


In this guide, I will try to explain how to set up a Docker Swarm system that is completely highly available

https://hostlab.tech/blog/docker-swarm-ha-gluster

#docker #dockerswarm #keepalived #nginx #linux #ubuntu #portainer #gluster #glusterfs #tutorial #opensource #highavailability


Woke up early again with a smidge of acid reflux. In the process of getting water, both dogs swarmed me so I can't stretch my legs or lie down again. This is my life now.

Yesterday ended up being a "fuck everything" day so no writing group, no Leicmin, and no chores.

Today, my goals are to migrate another couple of terabytes from SeaweedFS to GlusterFS and to start ripping a fresh copy of Kids in the Hall and Supernatural, the day after I deleted them to make the move easier. We haven't watched either show in years (we hate S15 of Supernatural), but then Partner had a sudden craving to watch them out of the blue (more evidence that telepathy exists between us).

In Leicmin, I went through the TypeID crates in the hope of finding one that "just works" so I could avoid learning Rust macros, but none of them support my need for sqlx and JSON serialization, so... I have to learn Rust metaprogramming if I want to avoid a lot of boilerplate (I'm up to thirteen typesafe identifiers).

My initial exploration of macros isn't going well: I haven't found the connection between what I want (an attribute on a call) and where I need to go (a bunch of impl blocks and common functions).
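
To make the boilerplate concrete, here is a rough sketch of the kind of thing I want generated. It is not my actual Leicmin code: it uses a plain macro_rules! macro instead of the attribute macro I'm actually chasing, the UserId/PostId names are made up, and it skips the sqlx and JSON serialization parts entirely. It just shows the impl blocks that would otherwise get copy-pasted thirteen times.

```rust
use std::fmt;

// A declarative macro that stamps out one typesafe identifier:
// a newtype wrapper plus the common impl blocks (parsing, Display).
// The real thing would also need sqlx and serde integration, which
// is what pushes this toward a proc/attribute macro.
macro_rules! typed_id {
    ($name:ident, $prefix:expr) => {
        #[derive(Debug, Clone, PartialEq, Eq, Hash)]
        pub struct $name(pub String);

        impl $name {
            /// Wrap a raw string, checking that it carries the expected prefix.
            pub fn parse(raw: &str) -> Result<Self, String> {
                match raw.strip_prefix($prefix) {
                    Some(rest) if rest.starts_with('_') && rest.len() > 1 => {
                        Ok(Self(raw.to_string()))
                    }
                    _ => Err(format!("expected a `{}` id, got {:?}", $prefix, raw)),
                }
            }
        }

        impl fmt::Display for $name {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                write!(f, "{}", self.0)
            }
        }
    };
}

// One line per identifier instead of one hand-written pile of impls each.
typed_id!(UserId, "user");
typed_id!(PostId, "post");

fn main() {
    let id = UserId::parse("user_01h2xcejqtf2nbrexx3vqjhp41").unwrap();
    println!("{id}");
    // The wrong prefix is rejected, which is the whole point of typesafe ids.
    assert!(PostId::parse("user_123").is_err());
}
```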

At work, I'm now interacting with a second team of fourteen developers in India, so my personal coding time is reduced to "while other people are speaking during meetings" and a two-hour block at the end of the day.

I also had to show our team how to use C# extensions to avoid copy pasting the same block of code twelve times.

Which led nicely into my meeting with the India recruiters for the third team of "about six" developers and what I'm looking for (the ability to pay attention to details with a bonus of understanding features of a language that has been out for two decades).

So it's going to be a "hard" puzzle day.

It would be awesome if I could move from the bed or lie down. :D

#SeaweedFS #GlusterFS #Rust

So far, GlusterFS is working. I'm going to try moving some files over to the new node while evacuating another SeaweedFS node. Since I have to do three SeaweedFS nodes for every GlusterFS node, it will take a while. Plus, rsync is really slow since I have so many files that appear good but aren't.

But I got Bob's Burgers done. If I can get Golden Girls, Partner will be happier.

Still sad that SeaweedFS didn't work; it was promising for a few years. It is just too difficult to figure out what is wrong with it today.

#GlusterFS #SeaweedFS

Takes a few days to evacuate a few nodes to find out if GlusterFS will work.

The whole reason I'm trying it out is that I've been fighting to get a good pipeline from the arrs to Jellyfin/Plex. For some reason, the copies have been failing with an "input/output" error since last March. I can usually get things working by remounting the FUSE drive or rebooting the machine, but it seems to be getting steadily worse, and it keeps littering the system with zero-size files when the copy fails (and I can't replace the arrs' version of copying with something more tolerant).

I told Partner that if I couldn't figure it out by the first of the year, I would try something else. Obviously, I could go back to Ceph, which continues to have all the features I want but also carries an obscenely higher maintenance load. I thought GlusterFS might be lower maintenance, despite it having a requirement I'm really not fond of: my media replicates twice, so I have to add two identically sized drives at a time, as opposed to Ceph/Seaweed, which can handle disparate volume sizes and scatter the data as appropriate.

If I can't get GlusterFS working, I'll probably go back to Ceph and see how painful it is to set up in NixOS again. Things look like they have changed in the last few years, so it might be viable, but as I said, it's a lot of maintenance.

#SeaweedFS #Ceph #GlusterFS #NixOS

After experiments at home, it is now time to introduce #terramaster Debian-based bricks at work within our #glusterfs storage and start migrations to new storage appliances. Here we go, the first 200TB of bricks is now serving and going through tune-up and healing...

https://lovergine.com/installing-debian-on-a-usb-stick-for-a-terramaster-nas.html


Despite all my sympathy for #glusterfs and my experience with it, in my case it doesn't hold up to geo-distribution and a small but constant load. There have already been a couple of issues with slow file updates, and today I caught a file that can't even be deleted without unmounting the volume.

Pages of tuning don't save the situation, and there's a chance that for my small cluster I'll be looking for something else, something more resilient. :)

I'm still searching for a distributed filesystem for my 2.5 node #cluster.
#glusterfs is still running fine, but support for it in #qemu and #proxmox will come to an end soon.
Maybe I should go back to #zfs replication for proxmox guests and handle the glusterfs-based data storage manually..?
#ceph is far too big for my small installation...
But why do I need a #proxmox test cluster?
Well, I have had #glusterfs installed for a few months now, and it's quite easy and straightforward (and reliable so far).
Unfortunately, Proxmox will not support it anymore, so I need an alternative solution.
#DRBD is my candidate to go with; #ceph is far too much for my #homelab..
hmm.. at the end of September the third node in my cluster crashed (the arbiter for #GlusterFS and the voting device for #Proxmox), and I only noticed it today, by accident.
My monitoring clearly has room for improvement 🤡 🤣 😱