How do you guys back up your server?

https://pawb.social/post/1183576

I have a home server that I'm using and hosting files on. I'm worried about it breaking and losing access to the files. So what method do you use to back up everything?

A simple script using duplicity to FTP data to my private website with infinite storage. I can't say if it's good or not; it's my first time doing it.
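
A minimal sketch of what such a duplicity-over-FTP script might look like; the host, paths, and credentials are placeholders, not the commenter's actual setup:

```shell
#!/bin/sh
# Hypothetical duplicity-to-FTP backup; everything here is a placeholder.
export FTP_PASSWORD='ftp-secret'   # read by duplicity for FTP logins
export PASSPHRASE='gpg-secret'     # used to encrypt the archives

SRC=/srv/files
DEST=ftp://backup@example.com/backups/server

# Incremental backups, forcing a fresh full chain once a month
duplicity --full-if-older-than 1M "$SRC" "$DEST"

# Prune old chains so "infinite" storage stays tidy
duplicity remove-older-than 6M --force "$DEST"
```
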
How do you have infinite storage? Gsuite?
I can confirm that in the terms and conditions they discourage use as a private cloud backup; it's only meant to host stuff related to the website. But until now I've had no complaints, as I've been paying and kept the traffic to a minimum. I guess I'll have to switch to something more cloud-oriented if I keep expanding, but it's worked so far!
Compressed pg_dump rsync’ed to off-site server.
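
That approach can be as short as two commands; database name, paths, and remote host below are illustrative placeholders:

```shell
#!/bin/sh
# Sketch of "compressed pg_dump rsync'ed to an off-site server".
DB=appdb
OUT=/var/backups/postgres
STAMP=$(date +%Y%m%d)

mkdir -p "$OUT"

# -Fc writes PostgreSQL's custom format, which is compressed by default
# and restorable with pg_restore
pg_dump -Fc "$DB" > "$OUT/$DB-$STAMP.dump"

# Mirror the dump directory to the off-site box over SSH
rsync -az --delete "$OUT/" backup@offsite.example.com:/backups/postgres/
```
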
Borgbackup to Borgbase
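
A hedged sketch of a Borg-to-BorgBase job; the repository URL follows BorgBase's ssh:// style but is a made-up placeholder:

```shell
#!/bin/sh
# Hypothetical Borg backup to a hosted repository.
export BORG_REPO='ssh://abc123@abc123.repo.borgbase.com/./repo'
export BORG_PASSPHRASE='repo-secret'

# Deduplicated, compressed archive named after host and date
borg create --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d}' \
    /etc /home /srv

# Keep a rolling window of archives and delete the rest
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```
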
Proxmox backs up the VMs -> backups are uploaded to the cloud.
You guys back up your server?
ZFS RAID-Z2 pool. Not a perfect backup, but it covers disk failure (already lost one disk with no data loss) and accidental file deletion. I'm vulnerable to my house burning down, but overall I sleep well enough.
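
For reference, creating a RAID-Z2 pool like the one described is a one-liner; the pool name and device names below are placeholders for whatever disks are actually present:

```shell
# RAID-Z2 survives any two simultaneous disk failures.
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Check pool health (and catch a failed disk early) with:
zpool status tank
```
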
Don't overthink it: servers/workstations rsync to a NAS, then sync that NAS to another NAS offsite.
restic backup to Azure and Backblaze

3-2-1

Three copies. The data on your server.

  • Buy a giant external drive and back up to that.

  • Off site. Backblaze is very nice

  • How to get your data around? FreeFileSync is nice.

    Veeam community version may help you too

    I'm not sure how you understand the 3-2-1 rule given how you explained it, even though you're stating the right stuff (I'm confused by your numbered list…), so just for reference for anyone reading this, it means your backups need to exist as:

    • 3 copies
    • 2 different media
    • 1 offsite location
    Huh. I always heard 3 copies, 2 locations, 1 of the locations offsite. Yours makes sense though.
    All my backups are in /home/Ryan/Documents. Please don't break my Minecraft server.
    cronjobs with rsync to a Synology NAS and then to Synology's cloud backup.

    ITT: lots of the usual paranoid overkill. If you do rsync with the --backup switch to a remote box or a VPS, that will cover all the bases in the real world. The probability of losing anything is close to 0.

    The more serious risk is discovering that something broke 3 weeks ago and the backups were not happening. So you need to make sure you are getting some kind of notification when the script completes successfully.
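
Both halves of that advice fit in one pipeline. A sketch, with paths and the ping URL as placeholders (the ping target here is a healthchecks.io-style dead-man's-switch, an assumption, not something the commenter named):

```shell
#!/bin/sh
# --backup keeps changed or deleted files in a dated directory instead
# of silently overwriting them; the ping only fires on success, so the
# monitor alerts you when the job stops running.
rsync -az --backup --backup-dir="changed-$(date +%Y%m%d)" \
    /srv/data/ backup@vps.example.com:/backups/data/ \
  && curl -fsS https://hc-ping.com/your-uuid-here > /dev/null
```
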

    While I don't agree that using something like restic is overkill, you are very right that backup process monitoring is very overlooked. And so is practicing recovery with the backup system of your choice.

    I let my Jenkins instance run the backup jobs, as I have it running anyway for development tasks. When a job fails it notifies me immediately via email, and I can also check manually in the web UI how the backup went.

    Running a Duplicacy container backing up to Google drive for some stuff and Backblaze for mostly all other data. Been using it for a couple years with no issues. The GUI and scheduling is really nice too.

    For config files, I use tarsnap. Each server has its own private key, and an /etc/tarsnap.list file which lists the files/directories to back up on it. Then a cronjob runs every week to run tarsnap on them. It's very simple to back up and restore, as your backups are simply tar archives. The only caveat is that you cannot "browse" them without restoring them somewhere, but for config files it's pretty quick and cheap.
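
The weekly cronjob described could look roughly like this; the key path and archive name are assumptions for illustration:

```shell
#!/bin/sh
# Weekly tarsnap run over the paths listed in /etc/tarsnap.list
# (one path per line).
tarsnap -c \
    --keyfile /root/tarsnap.key \
    -f "configs-$(date +%Y%m%d)" \
    $(cat /etc/tarsnap.list)

# Restoring is just extracting the tar archive, e.g.:
#   tarsnap -x -f configs-20240101 etc/nginx/nginx.conf
```
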

    For actual data, I use a combination of rclone and dedup (because I was involved in the project at some point, but it’s similar to Borg). I sync it to backblaze because that’s the cheapest storage I could find. I use dedup to encrypt the backup before sending it to backblaze though. Restoration is very similar to tarsnap:

    dup-unpack -k keyfile snapshot-yyyymmdd | tar -C / -x [files..] .

    Hourly backups with Borg, nightly syncs to B2. I’ve been playing around with zfs snapshots also, but I don’t rely on them yet
    The method I use is the same, but I wrote a script that makes snapshots, streams them locally, and then rsync takes over and copies them to my server at home.
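
A sketch of that "snapshot, stream locally, rsync home" flow; the pool, dataset, and host names are made up:

```shell
#!/bin/sh
# Snapshot the dataset, stream it to a local file, then rsync it home.
SNAP="tank/data@$(date +%Y-%m-%d_%H%M)"

zfs snapshot "$SNAP"

# Stream the snapshot to a compressed local file first...
zfs send "$SNAP" | gzip > "/var/backups/$(echo "$SNAP" | tr '/@' '__').zfs.gz"

# ...then let rsync carry it to the machine at home (-P makes it resumable)
rsync -aP /var/backups/ backup@home.example.com:/backups/zfs/
```
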

    The simplicity of containerized setup:

    • docker-compose and kubernetes yaml files are preserved in a git repo
    • nightly cron to create database dumps
    • nightly cron to run rsync to backup data volumes and database dumps to rsync.net

    For my webserver: mysqldump to a secured folder, then restic backs up the whole /svr folder, then rsync copies the restic backup to another server. I also have a system that emails me if these things don't happen daily: the log files are uploaded to a URL, the log file is checked for simple errors, and if no file is uploaded in time, I get an email.

    Of course, in my case, the URL the files are uploaded to, and the email server… are the same server I'm backing up. But at least if that becomes a problem, I probably only need the backups I've already made to my second server.


    Cronjobs and rclone have been enough for me for the past year or so. Interestingly, I’ve only needed to restore from a backup once after a broken update. It felt great fixing that problem so easily.

    Proxmox Backup Server. It's life-changing. I back up every night, and I can't tell you the number of times I've completely messed something up only to revert to the nightly backup in a matter of minutes. You need a separate machine running it (something that kept me from doing it for the longest time), but it is 100% worth it.

    I back that up to Backblaze B2 (using Duplicati currently, but I’m going to switch to Kopia), but thankfully I haven’t had to use that, yet.


    PBS backs up the host as well, right? Shame Veeam won't add Proxmox support. I really only back up my VMs and some basic configs.
    PBS only backs up the VMs and containers, not the host. That being said, the Proxmox host is super easy to install, and the VMs and containers all carry over, even if you, for example, botch an upgrade (ask me how I know…)
    Then what's the advantage over just setting up the built-in snapshot backup tool, which, unlike PBS, can natively back up onto an SMB network share?
    I’m not super familiar with how snapshots work, but that seems like a good solution. As I remember, what pushed me to PBS was the ability to make incremental backups to keep them from eating up storage space, which I’m not sure is possible with just the snapshots in Proxmox. I could be wrong, though.
    @juliette daily borg backup to NAS upstairs.
    I've recently begun using Duplicati to back up the data from my docker containers, and VMware snapshots for the guest VM itself. I'm still struggling to work out how to automate the snapshots, so I do them manually.
    TrueNAS ZFS snapshots, and then a weekly cron rsync to a Servarica VPS with unlimited expanding storage.
    If you use a VPS as a backup target, you can also format it with ZFS and use replication. Sending snapshots is faster than using a file-level backup tool, especially with a lot of small files.
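
Replication boils down to one full send and then cheap incrementals; the pool, snapshot, and host names below are placeholders:

```shell
#!/bin/sh
# Initial full replication to a ZFS-formatted VPS
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backup@vps.example.com zfs recv -F tank/data

# Later: send only the blocks changed since the last common snapshot
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday \
  | ssh backup@vps.example.com zfs recv tank/data
```
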
    Interesting, I have noticed it’s very slow with initial backups. So snapshot replication sends one large file? What if you want to recover individual files?

    You can access ZFS snapshots from the hidden .zfs folder at the root dir of your volume. From there you can restore individual files.

    There is also a command-line tool (httm) that lists all snapshotted versions of a file and allows you to restore them.

    If the snapshot you want to restore from is on a remote machine, you can either send it over or scp/rsync the files from the .zfs directory.
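
Restoring from the hidden directory is just a copy; the pool, snapshot, and file names here are placeholders:

```shell
#!/bin/sh
# List the snapshots available under the hidden .zfs directory
ls /tank/data/.zfs/snapshot/

# Copy yesterday's version of one file back into place
cp /tank/data/.zfs/snapshot/daily-2024-01-01/docs/report.txt \
   /tank/data/docs/report.txt

# From another machine, the same path works over scp:
#   scp backup@vps:/tank/data/.zfs/snapshot/daily-2024-01-01/docs/report.txt .
```
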

    GitHub - kimono-koans/httm: Interactive, file-level Time Machine-like tool for ZFS/btrfs/nilfs2 (and even Time Machine and Restic backups!)

    Almost all the services I host run in docker containers (or userland systemd services). What I back up are SQLite databases containing the config or plain data. Every day, my NAS rsyncs the DBs from my server onto its local storage, and I have Hyper Backup back up the backups into an encrypted S3 bucket. HB keeps the last n versions and manages their lifecycle. It's all pretty handy!
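
One wrinkle with rsyncing live SQLite files is that a copy taken mid-write can be corrupt; sqlite3's `.backup` command makes a consistent copy first. A sketch with placeholder paths:

```shell
#!/bin/sh
# Make consistent copies of live SQLite databases before the NAS pulls them.
mkdir -p /srv/db-copies

for db in /srv/apps/*/data.sqlite3; do
    # Name each copy after its app directory
    name=$(basename "$(dirname "$db")")
    sqlite3 "$db" ".backup /srv/db-copies/$name.sqlite3"
done

# The NAS then rsyncs /srv/db-copies/ instead of the live files.
```
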

    I use Duplicati and back up my server to both another PC and the cloud. Unlike a lot of data hoarders, I take a pretty minimalist approach, only backing up core (mostly docker) configs and the OS installation.

    I have media lists but to me all that content is ephemeral and easily re-acquired so I don’t include it.

    Duplicati is great in many ways, but it's still considered to be in beta by its developers. I would not trust it if the data you back up is extremely important to you.

    I am lucky enough to have a second physical location to store a second computer, with effectively free internet access (as long as the data volume is low, under about 1 TB/month).

    I use the ZFS file system for my storage pool, so backups are as easy as a few commands in a script triggered every few hours, that takes a ZFS snapshot and tosses it to my second computer via SSH.

    It's kind of broken at the moment, but I have set up duplicity to create encrypted backups to Backblaze B2 buckets.

    Of course, the proper way would be to back up to at least two more locations, perhaps a local NAS for starters. That could also be configured in duplicity.

    I backup using a simple rsync script to a Hetzner storage box.