Well, attempt one failed. It claimed to have installed, but then refused to reboot. When I "power-cycled" the VM it crashed immediately on boot.

Installing was very slow and used a lot of CPU - should I blame #HAMMER2?

It would be nice to get feedback from people who work on small to medium scale storage systems. I want to learn efficient ways to improve my archival storage procedures on a femto-scale budget.

^Z

#OpenSource #Photography #DSLR #SLR #camera #storage #technology #tar #tape #HDD #SSD #EXT4 #ZFS #HAMMER2 #RAID

There's one subject I haven't seen photographers talk about:

The subject of photo archive management.
It's not as easy as you may think.

Many people, especially in this last decade, grab an Android phone or a toy camera (as I call the small yet much handier cameras from brands like Fuji, Nikon, and Canon), point at the scene, often without even knowing the difference between proper lighting and backlighting, and shoot their photograph.

Most end users don't even realize that they need an infrastructure to properly archive their photographs, whatever the quality may be.

The last time I lost data was in the floppy disk years; after that I never lost anything I archived on magnetic 🧲 media.
That case was a dirty read/write head on a floppy drive which scratched both my master and my backup disk on side one.

I mitigated that failure mode by using two floppy drives: one FDD specifically for the master disk (after I cleaned its heads), and one specifically for the backup disk (after I cleaned those heads too).

In the hard disk era I never lost any data. I use tried and tested procedures which are documented in Requests for Comments (RFCs). You will find many; just search for them. Even those using tar -czvf are good.
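As a minimal sketch of that kind of tar-based archival run (directory and archive names here are made up for illustration, not from any RFC), one pass that archives, records a checksum, and verifies could look like this. Note the flag order: with a leading dash, -f must come last in the bundle, so it's -czvf or -czf, not -cvfz, which would create an archive literally named "z".

```shell
# Sketch of a tar archive run with checksum verification.
# SOURCE and ARCHIVE are illustrative example names.
set -eu

SOURCE=photos_2024
ARCHIVE=photos_2024.tar.gz

# Stand-in data so the sketch is self-contained.
mkdir -p "$SOURCE"
echo "sample" > "$SOURCE/IMG_0001.raw"

# -f must be the last bundled flag (it takes the archive name).
tar -czf "$ARCHIVE" "$SOURCE"

# Record a checksum next to the archive so later copies can be verified.
sha256sum "$ARCHIVE" > "$ARCHIVE.sha256"

# Verify the archive is readable and the checksum still matches.
tar -tzf "$ARCHIVE" > /dev/null
sha256sum -c "$ARCHIVE.sha256"
```

Keeping the .sha256 file beside every copy of the archive lets you re-verify any backup disk years later with a single sha256sum -c.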

At a certain point in time I switched from film to DSLR, and the question of backups was immediate.
With my negatives I print what I need, and since film 🎥 lasts {almost} forever I had no backup issues; everything was analog.

My first digital DSLR body came with two SD card slots, 8 GB per slot, giving me 16 GB of storage. I usually shoot one card full, then go to the other. I use my analog method of shooting only proper scenes; I don't use the camera as a machine gun and then sort through all the mess for the photographs that are fair. Quality above quantity. I have a couple of hard drives where I store the backups, and the data is never lost. Sounds easy, right?

When another DSLR or point-and-shoot body comes into the mix, you now need to manage the backups of two devices. Then another point-and-shoot comes into the mix, and another DSLR.
Backing up all these devices to the hard drives I use is risky if one drive fails. The file system I use is EXT4, a tried and tested, stable file system.

I mitigated that problem by doubling the number of hard drives. Now the number of hard drives starts to grow in such a manner that they no longer fit in one case. That means another machine had to be built to spread the drives across. There are significant costs: you need a motherboard, a processor, memory, video output, a power supply, another case, and you need to pay for the electricity to power the new system.

You're starting to see the pattern and the risks involved when you don't want to lose data. When Android phones started to come in the mix it became really interesting.

You can use your Google account to store the images, but you'll soon realize that even though they seem to give you a lot of space, photographs fill it up remarkably fast.

Also you're storing it on somebody else's computer, who will abuse your data.

There has never been, there shall never be any cloud.

Soon you're faced with the fact that you need a network-attached storage (NAS) system, where you can add drives and still use at most two or three of those systems to manage your photographic data.

Without realizing it, you've become a librarian, without the proper training and study, which takes about two years!

If you have computing and database experience like me, it is easy to set up a plan for proper archival storage.

The proper plan can include ZFS as a file system, which runs natively on FreeBSD and other flavors. You can also choose HAMMER2 as your file system, running on DragonFlyBSD, which has a very light footprint and massive robustness built in, just like ZFS.
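As a concrete sketch of such a plan on the ZFS side (pool name, dataset name, and device paths below are made up for illustration, and assume a FreeBSD-style /dev layout), a mirrored pool with a dedicated photo dataset might look like:

```shell
# Hypothetical device names; a two-way mirror survives one drive failure.
zpool create archive mirror /dev/ada1 /dev/ada2

# One dataset per concern, with transparent compression.
zfs create -o compression=lz4 archive/photos
zfs set atime=off archive/photos

# A periodic scrub re-reads every block and verifies it against its
# checksum, catching silent corruption before the second copy rots too.
zpool scrub archive
```

These are provisioning commands, so treat them as a configuration sketch rather than something to paste blindly; the key design point is that the mirror plus scrubbing replaces the "double every hard drive by hand" procedure described above.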

For most people the financial factor will be the bottleneck, when you need to manage eight, twelve, or even twenty-four camera devices, and when you have stored photographs digitally for decades, starting from the first DSLR camera bodies.

^Z

#OpenSource #Photography #DSLR #SLR #camera #storage #technology #tar #tape #HDD #SSD #EXT4 #ZFS #HAMMER2 #RAID

I'm also happy to learn that DragonFly BSD was created by Matt Dillon, who was an Amiga programmer and a FreeBSD programmer before he forked FreeBSD 4.8.

https://en.wikipedia.org/wiki/DragonFly_BSD

#DragonFly #DragonFlyBSD #BSD #freeBSD #OpenSource #Lightweight #HAMMER2 #FileSystem #technology #programming

@marcan Well, #ZFS and #Ceph have entirely different use-cases and original designs.

  • Ceph, like #HAMMER & #HAMMER2, was specifically designed to be a #cluster #filesystem, whereas ZFS & #btrfs are designed as single-host, local storage options.

  • OFC I did see and even set up some "cursed" stuff like Ceph on ZFS myself, and yes, that is a real deployment run by a real corporation in production...

https://forum.proxmox.com/threads/solution-ceph-on-zfs.98437/

Still less #cursed than what a predecessor of mine once did: deploy ZFS on top of a hardware #RAID controller!


@puppygirlhornypost well, AFAICT from people who used #DragonflyBSD (like @fuchsiii ) it's optimized for #Clustering with the #HAMMER & #HAMMER2 filesystems as well as #LWKT, which allow higher throughput and scale I/O and networking across multi-socket and multi-threaded architectures...

https://en.wikipedia.org/wiki/DragonFly_BSD


Yes, source-based distros have been around since the very beginning; in fact, MCC Interim Linux and #SLS weren't far from that mark, except that they merely tried to make it a bit more convenient by packaging up tarballs to be exploded during installation. And there's always #LFS.

If you think about Slackpkg, and you consider that you can actually re-install the entire system by compiling every single component of the default (full) install with the invocation of a single command, followed by customizing your entire system by installing every kind of software imaginable through #sbopkg or some other automated, dependency-resolving package manager that uses #SlackBuilds (which are downloaded, then executed, and subsequently download the latest release of the software package desired, which is in turn compiled, packaged, and exploded), you actually have a fully source-based distro installed on your box.

That's right - Slackware is (can be forced to be) an entirely source based distro installed on your device.

And choosing to convert from a point release to Slackware -current switches you from a point release to a #Rolling_Release distro.
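For reference, that point-release-to-current switch boils down to pointing slackpkg at a -current mirror and doing a full upgrade. A hedged sketch (the mirror URL is only an example; pick one from the list already in /etc/slackpkg/mirrors):

```shell
# First, in /etc/slackpkg/mirrors, uncomment exactly one -current
# mirror line, e.g. (illustrative only):
#   https://mirrors.slackware.com/slackware/slackware64-current/

slackpkg update gpg      # fetch the signing key for the new tree
slackpkg update          # refresh the package lists
slackpkg install-new     # pull in packages added since your release
slackpkg upgrade-all     # upgrade everything to -current
slackpkg clean-system    # optionally drop packages removed from -current
```

These are system-administration commands against a live install, so read them as a configuration sketch of the procedure rather than a script to run unattended.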

*Debian Testing (at this time, Trixie) is a rolling release. #Arch_Linux is a rolling release. SourceMage and Lunar Linux are source-based distros descended from #Sorcerer_Linux, the original fully source-based Linux distro, released in 2000 when Linux itself was only about nine years old; and there are the #Gentoo and #Funtoo source-based Linux distros.

SystemD, my ass. That has nothing to do with anything in that conversation; it's a complete non sequitur, and truth be told, most source-based distros (Arch, Gentoo) support the type of init system that *YOU CHOOSE. For Debiantards such as myself, well..... there's #Devuan, and it's very refreshing to actually have control over your system again with true init scripts. But I rarely use Devuan, even though I've been associated with the initiative since its inception, after leaving the #Mageia team several years ago.

As I state in almost all of my profiles, I'm a Slacker, since 1993 (Slackware Linux), and I'm also a bit of a #Debiantard. On the BSD side, after leaving #Jolix (386BSD) for Slackware, I've pretty much settled on either #OpenBSD or #Dragonfly_BSD, w/the awesome #HAMMER2 FS. I still have a lot of love for #FreeBSD and of course #NetBSD - where I spend a lot of time in my proper #Korn Shell....

But what the heck does any of this have to do with a comparison of using Gentoo Linux being akin to using SystemD?

I don't like SystemD - but if you're a realist, that doesn't mean you forgo using distros that only have that init tooling. You just roll with the punches and keep following the innovations that support you - NO ONE STILL RUNS WINDOWS XP in production - at least, no one outside of state mental hospitals, that's just insane to do in a forward facing business environment.

But a lot of companies do leverage OpenRC, SysVinit, etc., instead of SystemD - that's not going away, and SystemD itself and Poettering have their own up-and-coming challengers.

SystemD is (supposed to be, originally) a way to boot your box. Yes, it's indeed encroached upon other landscapes since, but not all of those constructs are even considered by many mainstream distros - it's not a fact of life. Other init systems thrive in the UNIX world to this day and will continue to do so.

Likewise, source-based Linux distros are just one kind among the many distros that exist, and they may or may not leverage SystemD as their init system. To really get a good grasp of this, I recommend doing a few Arch Linux installs, with and without SystemD as the base init system. Heck, even Debian still supports your regular, good old #syslog, and at every turn during your updates reminds you how to keep it enabled, since the whole journalctl crap just isn't as elegant, IMO.

Personally, I think more concurrent options are usually better; space is cheap. Storage no longer costs a dollar a meg, or worse, like it was when I was a kid, a few thousand dollars a meg. That's right... megabyte. Not terabytes for pennies!

Okay so now I'm waiting to hear back from the OP and see just what the heck they meant when I got triggered. In the meantime....

Enjoy installing and using #Sorcerer_Linux, or the subsequent forks of its surviving lineage like #SourceMage and #Lunar_Linux - you're now a part of mainstream source-based Linux history once you do 🤘 💀 🤘

#tallship #Linux #FOSS #distros #Sorcerer

⛵️


RE: https://social.sdf.org/users/tallship/statuses/111957857148746923

@tallship


How much do you trust your File System?
What kind of idiot would do this on purpose?

rm -f .ssh/*

I'll tell you: The kind that is using #DragonFlyBSD #Hammer2

root@anzu:~ # hammer2 recover /dev/da0s1d .ssh /home/gnemmi/tmp
[gnemmi@anzu ~]% ls tmp/.ssh/
id_rsa.00001 known_hosts.00002
id_rsa.pub.00001 known_hosts.old.00001

Renamed the files, moved them back to .ssh/, and done.

Even the mtime got preserved!

In detail https://leaf.dragonflybsd.org/~gnemmi/recover

7 years without a corrupted file!

#RUNBSD #BSD

Hello all,

Just a quick #zfs question!

Is it better to create more zpools or datasets? For instance, say I have 2 SSD disks of varying sizes (which I do, on my laptop lol), and I want to continually zfs send some snapshots from an unencrypted dataset.

Would it make more sense to create, say, an unencrypted zpool, then create an encrypted dataset for, say, /home, and an unencrypted dataset for storage of things that aren't super important? As opposed to 2 zpools with multiple datasets each, one encrypted and one not.
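One common shape for the first option, sketched below with made-up pool, dataset, and device names (and assuming OpenZFS native encryption is available): one pool per physical disk, since the sizes differ, with encryption turned on only for the datasets that need it. zfs send works per dataset, so the unencrypted one can be replicated independently:

```shell
# One pool per disk; mixing different-sized disks into one vdev wastes space.
zpool create fast /dev/sda2
zpool create bulk /dev/sdb1

# Encrypt only the sensitive dataset; its siblings stay plain.
zfs create -o encryption=on -o keyformat=passphrase fast/home
zfs create bulk/scratch

# Replicate a snapshot of the unencrypted dataset to the other pool.
zfs snapshot bulk/scratch@daily
zfs send bulk/scratch@daily | zfs recv fast/scratch_backup
```

These are provisioning commands, offered as a configuration sketch of the layout question rather than a recommendation; the trade-off is that per-dataset encryption avoids a second pool while still letting each dataset have its own snapshot and send/recv schedule.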

I love #zfs, and I don't mind #btrfs, and I'm waiting to see what #bcachefs brings to the table.

BTW, #hammer2 (From #dragonflybsd ) is also cool <3