They Said Self-Hosting Was Hard! - arthurpizza
For old photos, you can easily have half a dozen copies on old HDDs, DVDs, cloud… a few GB maybe? How many photos can be that important?
If you bork your server, those photos are not lost, just harder to access. The Missus can still be upset, just not as much.
Who are they? Hard for who?
I propose a new title… “This thing I know a lot about is easy!”
Yes, quite. Self-hosting is tricky and dangerous.
I think there's space for a distro, or a box you plug into your router, that makes it safe and easy.
Maybe that's what Unraid and TrueNAS are getting at?
Immich is amazing until you update and your wife is complaining she can’t see her photos.
The most reliable piece of hardware and software I have is my Synology.
I just rename the Immich directory, install a fresh Immich instance, and copy the data over to the new install manually, deleting the old directory after a week or so.
I've had the least buggy experience that way.
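A toy sketch of that swap, run against a temp dir so it's safe to try anywhere; substitute your real install path for `$ROOT` (all paths and filenames here are placeholders):

```shell
# Demonstrates the "rename, fresh install, copy data over" approach.
set -e
ROOT=$(mktemp -d)
mkdir -p "$ROOT/immich/library"
echo photo > "$ROOT/immich/library/img1"   # stand-in for the photo data

mv "$ROOT/immich" "$ROOT/immich-old"       # keep the old install around
mkdir "$ROOT/immich"                       # the fresh install would go here
cp -a "$ROOT/immich-old/library" "$ROOT/immich/library"  # carry the data over
# after a week, once everything checks out:
# rm -rf "$ROOT/immich-old"
```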
Immich updating is a dogwater experience
Yeah, I wouldn't trust Immich with storing it directly myself.
Get that stuff off on its own and have Immich access it, as shown in Louis Rossmann's setup video.
Think of it like having a dedicated Steam drive with the OS on its own: if you have to format or decide to distro-hop, you don't have to redownload and reinstall a dozen 250GB+ games.
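A minimal Compose sketch of that separation (the library path is a placeholder; the upload path is Immich's documented internal location): the photo library lives on its own mount and Immich only gets it read-only, so a reinstall can't touch the originals.

```yaml
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      - /srv/immich-appdata:/usr/src/app/upload  # Immich's own data
      - /mnt/photos:/mnt/photos:ro               # external library, read-only
```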
As long as you don’t directly connect it to the internet, it’s not hard.
When you do, it does become hard.
I set up Caddy and a proxy server for ingress.
Essentially I have WireGuard connections between my home server and an external VM.
The VM proxies with nginx using the PROXY protocol so the client IP is preserved.
DNS certificate management goes through Cloudflare, and I've got Authelia in front of the majority of my websites, with some exclusion rules, say for a share link.
Authelia has mandatory 2FA (anything less is silly), with Grafana Alloy scraping Caddy metrics.
Anywho, most of my stuff runs in Docker. The stuff I don't want on the WAN but on Tailscale/LAN has a filter to block the WireGuard interface.
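The ingress leg of a setup like that might look something like this on the external VM, as a hedged sketch (the WireGuard address and ports are placeholder assumptions; the home-side Caddy has to be configured to accept the PROXY header):

```nginx
# Forward 80/443 over the WireGuard tunnel to the home server, adding the
# PROXY protocol so the original client IP survives the hop.
stream {
    server {
        listen 443;
        proxy_pass 10.0.0.2:443;   # home server's WireGuard address (assumed)
        proxy_protocol on;         # prepend PROXY header with real client IP
    }
    server {
        listen 80;
        proxy_pass 10.0.0.2:80;
        proxy_protocol on;
    }
}
```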
Still feels like I’m doing too little, but kinda hate 2fa.
And I kinda don't want to find out whether complex passwords and a low retry limit before lockout are enough.
I've created a custom cert that I verify within my nginx proxy using ssl_client_certificate and ssl_verify_client on. I've got that cert in the browser storage of every device I use, plus on a USB stick on my keychain in case I'm on a foreign or new machine. That is so much easier than bothering with passwords and the like.
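Roughly, the nginx side of that looks like this (server name, ports, and paths are placeholders): connections without a certificate signed by the private CA are rejected at the TLS layer.

```nginx
server {
    listen 443 ssl;
    server_name photos.example.com;               # placeholder domain

    ssl_certificate         /etc/nginx/certs/server.crt;
    ssl_certificate_key     /etc/nginx/certs/server.key;

    ssl_client_certificate  /etc/nginx/certs/my-ca.crt;  # CA that signed the client certs
    ssl_verify_client       on;                          # reject clients without a valid cert

    location / {
        proxy_pass http://127.0.0.1:8080;         # backend app (assumed port)
    }
}
```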
That only works if I'm the only one using my hosted stuff; you can't really expect non-tech people to deal with something like that.
They already struggle with the little 2FA they have to use. Introducing yet another system is too much to ask.
Adding a certificate is a 5-step process: Settings -> Privacy and Security -> View Certificates -> Import -> select the file and confirm. That's on Firefox at least; idk about Chrome, but it's probably not significantly more complex. With screenshots, a small guide would be fairly easy to follow.
Don't get me wrong, I do get your point, but I don't feel like having users add client certs to their browser storage is more work than helping them every two weeks because they forgot their password or shit like that lol. At least, that's my experience. And the cool thing about client certs is that users can't really break them, unlike passwords, which they can forget, or change because they forgot, just to then forget they changed them. Once it runs, it runs.
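For reference, minting such a client cert can be sketched with openssl like this (all filenames, subject names, and the export password are placeholders, not anyone's actual setup):

```shell
set -e

# 1) One-off private CA (this is what nginx's ssl_client_certificate points at)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=My Home CA" -keyout ca.key -out ca.crt

# 2) Client key + certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=alice" -keyout client.key -out client.csr

# 3) Sign the client cert with the CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -out client.crt

# 4) Bundle for browser import (Firefox: Settings -> Privacy and Security
#    -> View Certificates -> Import)
openssl pkcs12 -export -inkey client.key -in client.crt \
    -certfile ca.crt -passout pass:changeme -out client.p12
```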
A lot of people simply don't have time to take the extra steps.
Instead, the focus should be on secure-by-default design, e.g. not shipping routers with a static admin/admin login.
It's stupid that in this day and age we still see default logins.
> simply don't have time
Sorry, but that is no excuse. It's a bit like having a dog and saying "nah, I don't have time to walk the dog right now." Self-hosting something that is publicly available (not as in "everyone can use it", but "everyone can access it") bears some level of responsibility. You either make the time to properly set up and maintain it, or you shouldn't self-host.
The "average user" shouldn't self-host anything. That might sound mean, or like gatekeeping, but it's the truth: it can be dangerous. There's a reason I hire an electrician to do my house's wiring even though I theoretically know how to do it myself: I'm not well versed enough, and I might burn down my house, or worse, other people's houses.
People who are serious about self-hosting need to learn how to do it properly. Half-assing it will only get it breached, pulled into a botnet, and turned into a burden on the rest of humanity.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| nginx | Popular HTTP server |

[Thread #134 for this comm, first seen 5th Mar 2026, 16:50] [FAQ] [Full list] [Contact] [Source code]
I don't. Synology stores all the files, and it comes with Synology Photos, but that's clunky if you don't have an Intel chip with an onboard GPU.
I have a 10 GbE connection to my Proxmox box, which runs Immich with read-only access to the photos.