What does your #selfhosted backup situation look like?

Mine is a box at a friend’s place with hourly ZFS replication via zrepl. Plus a second box at my parents’ place for geographically distributed safety, using restic to keep it isolated from any ZFS replication issues. Basically a backup of a backup 🎰
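For anyone curious what hourly zrepl replication looks like, a minimal push-job config might be sketched roughly like this. The pool names, remote address, and retention grids below are illustrative assumptions, not the poster’s actual setup:

```yaml
# /etc/zrepl/zrepl.yml on the sending box (all names are placeholders)
jobs:
  - name: hourly_push
    type: push
    connect:
      type: tcp
      address: "friends-box.example:8888"   # the box at the friend's place
    filesystems:
      "tank/data<": true                    # replicate tank/data and children
    snapshotting:
      type: periodic
      interval: 1h                          # hourly snapshots
      prefix: zrepl_
    pruning:
      keep_sender:
        - type: not_replicated              # never prune what hasn't been sent
        - type: grid
          grid: 24x1h | 14x1d
          regex: "^zrepl_"
      keep_receiver:
        - type: grid
          grid: 24x1h | 30x1d
          regex: "^zrepl_"
```

zrepl handles snapshot creation, replication, and pruning from this single file, which is what makes the hourly cadence low-maintenance.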

@ironicbadger, borgbackup backups of all my servers at home and on the Internet go to rsync.net, which is then rsync’ed down to my main home server running #ZFS. All Linux computers run ZFS, make snapshots using #sanoid and send them to that server using #syncoid. I’m planning to set up a personal remote backup at my in-laws’ place which provides a ZFS send target over #Tailscale
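The sanoid/syncoid leg of a setup like this can be sketched with a snapshot-policy template plus a scheduled replication job. Dataset names, the retention numbers, and the target host below are made up for illustration:

```
# /etc/sanoid/sanoid.conf -- snapshot policy (illustrative dataset name)
[tank/home]
        use_template = production

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

# crontab entry: replicate snapshots to the central server, e.g. over Tailscale
# 0 * * * * syncoid -r tank/home backup@zfs-server:tank/backup/home
```

sanoid takes and prunes the snapshots on a timer; syncoid then does the `zfs send`/`receive` plumbing, resuming interrupted transfers automatically.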

@ironicbadger For photos:

* Local filesystem
* iCloud Photos
* Backblaze
* Synology that’s the _actual_ source of record
* Synology at my folks’ house 50 miles away that gets a Synology backup over Tailscale nightly
* External drive that gets an rsync every two weeks and is stored on the opposite side of the house
* External drive that lives at my folks’ and that I update any time I’m there (also made possible by Tailscale)
* Redundant copy (same account, different computer) at Backblaze

@ironicbadger (I concede this is utterly bananas, and most data isn’t THIS redundant. But I nearly lost my Synology array early in 2020, and I swore NEVER AGAIN. That led to this nuttiness.)
@caseyliss are you sure this is enough copies? It’s the 3-2-1 rule, not the 321-copies rule 😃

@ironicbadger 🤣

Like I said, most of my data is 3-2-1. But when I was staring down the barrel in 2020 of losing all of my photos, I swore to myself that would NEVER EVER happen.

(tl;dr: I had to rebuild a 6-disk array in my Synology that had 1-drive tolerance. During the rebuild… a second drive briefly died. I was able to recover, but for a few hours I thought I’d lost **everything**.)

@ironicbadger Borg backup of my laptop/desktops to my NAS (one copy), ZFS mirror on my NAS, sanoid/syncoid to replicate to USB drive (second copy), offsite backup to Backblaze (third copy).
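A rough sketch of the Borg leg of a setup like this, with the repository path, hostname, and retention numbers as assumptions rather than the poster’s actual values:

```shell
#!/bin/sh
# One-time: create an encrypted repo on the NAS (path is illustrative)
borg init --encryption=repokey ssh://backup@nas/./borg/laptop

# Nightly: archive home and system config, named by hostname and date
borg create --stats --compression zstd \
    ssh://backup@nas/./borg/laptop::'{hostname}-{now:%Y-%m-%d}' \
    /home /etc

# Thin out old archives so the repo doesn't grow without bound
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://backup@nas/./borg/laptop
```

Borg deduplicates across archives, so the sanoid/syncoid ZFS replication and the Borg history complement each other: block-level snapshots on one axis, file-level dedup on the other.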

@ironicbadger ⁃ Big Thunderbolt RAID for local backup (runs nightly)
⁃ Backblaze for offsite storage (runs continuously)

Ditched my Synology after their switch to proprietary drives and moved all the HDs to a TB4 drive cage.

@ironicbadger you can see my servers section on the page.
@ironicbadger some ad hoc copies to other machines, but mainly restic every day to Backblaze B2.
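A daily restic-to-B2 job can be sketched roughly like this; the bucket name, paths, and retention policy are illustrative assumptions:

```shell
#!/bin/sh
# Credentials for Backblaze B2 (values are placeholders)
export B2_ACCOUNT_ID="..."
export B2_ACCOUNT_KEY="..."
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"

REPO="b2:my-backup-bucket:host1"

# One-time repo creation:
# restic -r "$REPO" init

# Daily backup of the data directory
restic -r "$REPO" backup /srv/data

# Keep a bounded history, removing unreferenced data as we go
restic -r "$REPO" forget --prune --keep-daily 7 --keep-weekly 5 --keep-monthly 12
```

restic encrypts everything client-side, so B2 only ever holds ciphertext.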

@ironicbadger

I've got everything virtualised with ESXi, and Veeam backing up to an external hard disk on a standalone Windows PC

Macrium Reflect Free backs that up to another HDD

Also, Veeam is replicating VMs between ESXi servers

File data (videos and photos) is replicated to a PC in my parents' house using Syncthing, with their photos and documents replicating to me, and multiple copies across various laptops and workstations

I've also got new photos backing up from my phone to Proton Drive

@ironicbadger 3 family computers + 1 RPi5 homelab > Synology NAS > Backblaze B2. Wish I had a self-hosted alternative to B2, but my parents’ place isn’t really viable. No great off-site options.
@ironicbadger https://ZFS.rent, shipped them two disks for a ZFS mirror and I do an encrypted zfs send to it once a week.
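An encrypted weekly send to a rented mirror like this can use raw sends, which ship the dataset’s blocks still encrypted, so the remote pool never needs the key. A rough sketch, with dataset, host, and snapshot names as assumptions:

```shell
#!/bin/sh
# Weekly raw (-w) send of a natively encrypted dataset: blocks stay
# encrypted in transit and at rest on the remote pool.
NOW="weekly-$(date +%Y-%m-%d)"
zfs snapshot tank/data@"$NOW"

# First time only: full send
# zfs send -w tank/data@"$NOW" | ssh user@zfs.rent zfs receive backup/data

# Thereafter: incremental from the previous week's snapshot
zfs send -w -i tank/data@weekly-prev tank/data@"$NOW" \
    | ssh user@zfs.rent zfs receive backup/data
```

The `-w` (raw) flag is what makes bring-your-own-hardware colocation reasonable for sensitive data: the provider holds ciphertext only.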

@ironicbadger Tested 3 Linux backup solutions for some months: Duplicati and Kopia both lost to Zerobyte (= Restic under the hood).
Daily snaps to a separate local NAS, mirrored to a cloud service. Weekly snaps to another cloud service.

@ironicbadger I need to convince the Mrs that my home server needs an upgrade; then I'll put my current server off-site as backup, preferably at the place of a niece I get on well with, 144 km away as the crow flies.

It can't run even the easiest of Docker images, so I just sync my photos/videos there, but that's the only automation. I'd like to run something like Ente Photos on there for my yearly reminder of the stuff I've done and places I've been over the years.

@ironicbadger I don't want to go full overboard with my next system, but a more modern CPU and somewhere between 8 and 16 GB of RAM would be nice.

For now, 16 TB would be more than enough, but more would be nice.

@ironicbadger I have a friend in the southern hemisphere; we host each other's off-site encrypted backups and share a jellyhub server.

Currently planning an overhaul and simplification of my backup scheme, bit of a mess of overlapping services.

@ironicbadger I use #ProxmoxBackup for virtual machines, containers and desktops. Datasets are mirrored locally, with the mirrors kept on different floors. The third mirror is offsite, encrypted, on #hetzner #objectstorage in Europe.
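Proxmox Backup Server picks up VMs and containers from the Proxmox VE side automatically; desktops can join with the standalone client. A rough sketch, where the user, host, and datastore names are assumptions:

```shell
# Back up a desktop's root filesystem to a PBS datastore as a pxar archive
# (repository string is illustrative: user@realm@host:datastore)
proxmox-backup-client backup root.pxar:/ \
    --repository backup@pbs@pbs.example.org:datastore1

# Client-side encryption is also available; create a key once with:
# proxmox-backup-client key create
```

PBS deduplicates chunks server-side, so the local mirror, the second floor's mirror, and the offsite copy all stay compact.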
@ironicbadger Proxmox Backup Server running on NUCs in the house, and a second one at a friend's house
@ironicbadger Data generally lives on a Synology NAS. Backup to a USB drive, plus remote backup via Synology's backup solution to my brother's Synology. My brother in turn backs up his data to my NAS.

@ironicbadger Ansible and Terraform for as much of the config-based stuff as possible, then periodic encrypted cloud backups for anything non-recoverable

Stuff like my Jellyfin library (my biggest data vault) would be manually recovered from the raw DVD/Blu-ray discs again

@ironicbadger restic to: usb drive, server, and b2. Covers portable, local and remote.
@ironicbadger with some deviation where Longhorn just dumps to B2. But that's lower-risk data and more about application state. Plus it's replicated across 3+ nodes.
@ironicbadger Backup via Borg Backup to a Hetzner Storage Box. Has been working reliably for years.