"Set and forget" NAS setups are a myth (and they put your data at risk)
"Set and forget" NAS setups are a myth (and they put your data at risk)
How can I monitor a select program's folder for changes, so I can revert if something breaks?
# Example

1. Sometimes foobar2000 randomly changes some settings and things break. I don't want to create duplicates (to save space); instead I want to track the changes.
2. Same for Chromium.

I say "select" because I don't want an imaging solution.
Can you suggest backup software that can be configured per directory with a .ignore file, like Kopia? It should save the excluded files' metadata, so it can double as a file change tracker / file history.
# These programs have an option to configure directories using a .ignore file

1. Kopia: `.kopiaignore`
2. Restic (hard to use):
   * `--exclude-file`: point it at a custom text file, e.g. `excludes.txt`
   * `--exclude-if-present` lets you skip any directory containing a specific marker file, such as `.exclude_me`
   * Uses gitignore-style syntax: https://restic.readthedocs.io/en/latest/040_backup.html
   * A wrapper like Restatic crawls directories before Restic starts and looks for `.restaticignore` files. It merges all those local files into one big exclude list and passes it to Restic automatically.

---

Rclone (only ignores, no per-directory config):
1. `--exclude-if-present .rcloneignore` (any name works)
2. Global `--exclude-from list.txt`

Why is it only recommended for "large-scale cloud copies" and "Google Drive/Dropbox/S3"?

Duplicacy (global only): `.duplicacy`
* While the official binary uses `.duplicacy/filters`, many users use scripts to generate that filter file by scanning for `.cvsignore` or custom ignore files in subdirectories.
* https://forum.duplicacy.com/t/filters-include-exclude-patterns/1089/23
* https://github.com/TheBestPessimist/duplicacy-utils
* https://github.com/markfeit/duplicacy-scripts
* https://github.com/gilbertchen/duplicacy/issues/337
* https://forum.duplicacy.com/tag/filters

Borg (only ignores, no per-directory config): `.borgignore`

FreeFileSync (maybe global only): `.ffs_gui` or `.ffs_batch`

# How to set this up, with these features

1. Forever-full backups / deduplication (https://www.reddit.com/r/Backup/wiki/index/faq/incremental_and_differential_backups/), with an option to delete changes older than X. Here a full backup is taken once, then incrementals are merged into it. I have heard terms like "snapshot" and "CBT" (changed block tracking) but don't understand them.
2. Save a "ghost" for excluded data, i.e. only the filename, metadata, and folder structure.
3. A "file change tracker" to see a summary of which files were moved/deleted/renamed between two backups, like Kopia's diff: https://kopia.io/docs/reference/command-line/common/diff/
4. A "file history" where I can see previous versions of files on the main disk, like Kopia's `kopia snapshot list <filename>`

# To back up

* External and internal disks (files) to back up,
* all backed up separately to the same backup disk.

Old post: https://www.reddit.com/r/DataHoarder/comments/1rzmg67/discussion_backup_solution_comparison/
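The Restatic-style pre-pass mentioned above can be sketched in a few lines of Python. This is not Restatic's actual code; the marker filename and paths are assumptions for illustration. It walks the source tree, merges every per-directory ignore file into one list, and that list would then be handed to Restic via a single `--exclude-file`:

```python
from pathlib import Path

def merge_ignore_files(src: Path, marker: str = ".restaticignore") -> list[str]:
    """Collect the patterns from every per-directory ignore file under src
    into one flat exclude list, skipping blank lines."""
    patterns: list[str] = []
    for ignore in sorted(src.rglob(marker)):
        patterns += [ln for ln in ignore.read_text().splitlines() if ln.strip()]
    return patterns

# Usage sketch (repository path is an assumption):
# excludes = merge_ignore_files(Path("/data"))
# Path("/tmp/excludes.txt").write_text("\n".join(excludes) + "\n")
# then: restic -r /mnt/backup/repo backup /data --exclude-file /tmp/excludes.txt
```

A real wrapper would also prefix each pattern with its directory's path so that patterns only apply locally; this sketch keeps them global for brevity.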
HDD upgrades on the server are nearly complete! I added a 26TB drive as the new parity drive, then reformatted the 20TB parity drive for use as a data drive. I did a SnapRAID sync to ensure all drives have recovery data, then I shut down the server and replaced the 20TB drive closest to me with the other 26TB drive.
As luck would have it, this ended up being the same drive I'd just formatted! Running a SnapRAID fix now to generate recovery data on the new drive.
Building a pre-index of IPFS CIDs & more (BTv2)

cross-posted from: https://lemmy.dbzer0.com/post/67080379

> Hello, since it's complicated to index DHTs, I figured it'd be more efficient to build an index of fingerprints from real data once.
>
> So I've been collecting release hashes for this index. It can be used for various purposes:
> - check the integrity of your own files (bit rot is a real thing)
> - identify BTv2 torrent files that contain specific files (a database of torrent files is required)
> - locate live IPFS swarms to join more easily (no need to read all your data multiple times to recompute various CIDs yourself)
>
> The collection contains around 1K releases and weighs 40 MB.
> I've prioritized scene Blu-ray rips of movies (1080p / 2160p).
> No infohash will be included, as these are not reproducible enough.
>
> I'm using a basic script to add a new release (the filename must match the official release name). I'm using others to discover scene releases in a filesystem; retrieve official release names from files using the srrdb API (CRC32 search); and collect torrents from Prowlarr and H&R them (although I'd prefer to crowd-source directly from the community!).
>
> The index is stored in git to allow collaboration. It is hosted using Radicle to avoid centralization and reduce hosting pressure.
>
> If you are interested, join and add your own hashes to the collection via Radicle patches! (See the instructions in the README.)
>
> Let me know what you think, suggest improvements, or discuss similar projects you know about!
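The CRC32 search mentioned above can be fed with a hash computed entirely from Python's standard library. A minimal sketch (the function name is mine, not from the post's scripts):

```python
import zlib

def file_crc32(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the CRC32 of a file as an 8-digit uppercase hex string,
    reading in chunks so large releases don't need to fit in memory."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return f"{crc & 0xFFFFFFFF:08X}"

# The resulting string could then be looked up via srrdb's CRC search to
# recover the official scene release name for a local file.
```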
Blombooru: A modern, self-hosted, single-user booru built with FastAPI & Tailwind. Docker-ready and simple to set up.

cross-posted from: https://lemmy.ml/post/46121046

> So I've been searching for a long time for image-tagging software. There are some tools out there, but some are Electron-based, so no thanks. Then there's Hydrus, which is actually very similar to what this does, but the GUI is horrible and the installation is much more complicated. I created a Docker Compose file and had Blombooru running in a few minutes. So if anyone is looking for something like this to organize their collection, please give it a try, because it is really good!
Follow-up to backing up bluray discs as ISOs
Crossposted from [https://thebrainbin.org/m/[email protected]/t/1576555](https://thebrainbin.org/m/[email protected]/t/1576555) ...
After [the original post](https://thebrainbin.org/m/[email protected]/t/1556425), I ordered the generic reader I mentioned and it arrived today. ...