Okay, I've got a question for anyone who is a more experienced #sysadmin or #selfhoster than me: I'm currently self-hosting Umami as a fun side project for website analytics. I embed a snippet of JS in my personal website; this makes a request to an endpoint of my Umami instance, which therefore needs to be publicly accessible.

Umami also exposes a web interface where I can look at all the statistics and whatnot but since it's the same service, it's also open to the internet. I'd rather not expose the web interface (with a login page) to the internet, if I can avoid it.

Does anyone have an idea of something clever that I can do to mitigate this? I don't have a stable public IP address, so IP allowlisting doesn't seem to be an option.

#askfedi #askmastodon #askgotosocial #askAbsolutelyEveryone

P.S.: If I'm being totally stupid and there's an obvious solution, please don't hesitate to tell me, I'd really like to be wrong here.

Umami: a simple, fast, privacy-friendly alternative to Google Analytics.

@thedoctor I have no experience with Umami, so this is probably a very obviously dumb question, but... are the frontend and backend running on the same ports?
Or rather, does the JS send a request directly to the same site that you're accessing for the dashboard?
Alternatively, you can probably mess around with a reverse proxy and block access to the dashboard. You could then use SSH port forwarding to access the dashboard directly on your server with localhost.
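As a sketch of that last idea, assuming the dashboard listens on port 3000 on the server (user@server.example.com is a placeholder for your actual login and host):

```shell
# Forward local port 8080 to the dashboard on the server's loopback interface.
# -N means "no remote command, just forward"; Ctrl-C stops the tunnel.
ssh -N -L 8080:localhost:3000 user@server.example.com
# While the tunnel is up, the dashboard is reachable at http://localhost:8080
```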

@irgndsondepp They don't run on the same port but I have the thing behind a reverse proxy that directs a subdomain of mine to the frontend. I could stop doing that but then I'd have to open the firewall for whatever port the frontend runs on which isn't much better, IMHO.

I'll take a look at SSH port forwarding, I had forgotten that was a thing.

@thedoctor @irgndsondepp ssh port forwarding is also a reaaal quick and easy way to do it...if you don't mind your ssh port being public facing too.

I've battled so many drive-by port prods that I just put everything behind VPN now - massively reduces concerns.

@thedoctor with a reverse proxy you could define IP addresses that can access certain web-subdirectories.
Though, if you have a dynamic public IP you'd be better off accessing via VPN (i.e. Tailscale or direct WireGuard), as your WireGuard network would have a static IP range (unsure about Tailscale, but with Headscale you'd have a static IP range too)

i'm tempted to say something about old-school port knocking that opens a port based on a secret knock (simplified example, but really 😁)

@paul @thedoctor yeah I was thinking reverse proxy, but I don't do open to the internet stuff much, since I'm not a security person, and am afraid of getting stuff hacked :p

@sotolf @thedoctor yeah, when it comes to protecting services you use personally/for admin purposes, and not for the public, then stick it behind a VPN - a VPN's whole thing is "don't let stuff I don't know in"; if it can't do that right then there's no point in it.

SSH is a bit of an exception as it too handles security pretty well (except for when it doesn't, and you're not using password authentication, right? 🙂), but still, it's doing more than a VPN is == more server load == more potential for a denial-of-service attack (or worse).

@paul @sotolf I totally agree, I normally would stick this behind a VPN but I don't see a way to have the data ingestion be public but not the web interface. I think I'll try to have caddy proxy only requests that fetch the tracking script and post data to the endpoint and access the web interface via SSH.

And yes, I disabled password and root login.

@paul @sotolf Ok, I now instructed caddy to only allow GETing the analytics script and POSTing the collected data back. Anything else, including the login page, is off-limits. That still leaves the service vulnerable to DDoS, but Hetzner claims to have protection against that and, honestly, the analytics are definitely not crucial to me, so I think I'll accept it.

@paul @thedoctor
100% on both those solutions (revprox and VPN).

It is also possible to setup a SOCKS proxy using OpenSSH (I'm assuming the server has public facing SSH for management), then define some firewall rules so the login page is only available via the proxy. This would require some browser configuration as well, so you don't pass all your traffic through the SOCKS proxy (best to only use it for the given site for example).
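For example (a sketch; user@server.example.com and port 1080 are placeholders):

```shell
# Start a SOCKS5 proxy on localhost:1080, tunneled over SSH to the server.
# -N: no remote command, just the tunnel.
ssh -N -D 1080 user@server.example.com
# Then configure the browser to use socks5://localhost:1080, ideally only
# for the dashboard's hostname (e.g. via a per-site proxy extension).
```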

@paul @thedoctor
Side note: nothing wrong with Tailscale/Headscale, but there is also NetBird. It's a personal preference on my part, but don't see it mentioned as much as Tailscale so just want to mention it. :)
@kln @paul Looks interesting but apparently the Android app isn't good.

@thedoctor yeah, it does look interesting, the same sort of alternative as ZeroTier.

Any idea if there's a self-hosted version, @kln ? That's why I like headscale - I can run the tail[head]net

Or, just standalone wireguard is stupidly simple with no overheads
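To illustrate how little there is to it, here's a bare-bones WireGuard client config sketch; every value (keys, addresses, endpoint) is a placeholder:

```ini
# /etc/wireguard/wg0.conf on the client
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```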

I love options!

@thedoctor @kln don't worry, just found the self-hosted bit on the website. will take a looky
@paul @kln The self-hosted version seems on par with the cloud one and it's actually open source. I might try it out myself.

@thedoctor @kln
Not sure about it myself; Docker-first development (ew), and if you want to run without Docker there's a bunch of interacting daemons you need to run that make assumptions about how the others operate

Not that I'm against multiple services interacting, but this feels a bit messy in its current state.
Headscale - one binary, one job, does it well.

@paul @kln I don't mind Docker but I see the point. On the other hand, Headscale seems more like a second-class citizen in the Tailscale world which worries me a bit. I mean it's not developed by the Tailscale people, IIRC.

@thedoctor @kln no it isn't developed by them, although one Tailscale dev does contribute. But you're right, Tailscale could disrupt it at any time... though, technically, NetBird or ZeroTier could do the same for their self-hosted services too. It's a tough world.

But I'd still say a standalone wireguard service, or a bunch of them, is still the most robust option. no reliance on anything other than the server you're connecting to being online.

@paul @kln True, but I don't feel up to managing all the legwork myself. There are a whole lot of niceties that come with these services that I wouldn't know how to achieve myself. And even if I did, it'd probably be much more brittle.

@thedoctor @paul

NetBird does have an issue with Android - there is a workaround but yeah... It's one thing that makes me sad about it. Still, it's open source, which gives me hope that if it does go bad, we will just get an LXD/Incus situation, not RHEL/CentOS. If that makes sense.

Wireguard and a VPS isn't too much work, but by the sound of it you should go with just Tailscale or NetBird free tiers. Then it is all largely managed for you, you just set up devices/access policies ;)

@kln @paul

> Wireguard and a VPS isn't too much work, but by the sound of it you should go with just Tailscale or NetBird free tiers. Then it is all largely managed for you, you just set up devices/access policies ;)

Exactly. It would surely be a nice exercise and I'd learn a lot, but I'm not comfortable with such an endeavor at the moment because I rely too much on this and wouldn't have the time for troubleshooting. And I'm sure I'd need to troubleshoot.

@thedoctor @paul absolutely, nothing is perfect and time is limited, so focus on what you want/need to learn and find solutions that work for the rest.

Good luck.

@thedoctor can you access the admin page from the public internet? Usually it's only on the local network by default at least if it's in docker...
@ay Yes, I have to expose the service publicly. There's an option to disable the login but that's not terribly practical.

I used to self-host Umami for a while from my own computer. I just left the whole thing accessible, behind Cloudflare. But you can use a proxy to forward only the routes you need.

Let me tell you how I'd do it.

First off, umami is often blocked by ad blockers, based on the path, but luckily, you can change that.

I just checked my old umami installation, and saw that I set two environment variables: TRACKER_SCRIPT_NAME and COLLECT_API_ENDPOINT. These will determine the routes you need to be public.

Whatever values you choose for those, I wouldn't recommend anything including umami, track, or analytics. Those will likely get blocked. I'd recommend something innocuous like fred.

For example, let's say your Umami domain is logs.example.com, and you set those two values to log_visits and /api/log. You would then need these two routes publicly accessible: logs.example.com/log_visits.js and logs.example.com/api/log. Any other route you can leave private.
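Concretely, with those example values (log_visits, /api/log, and logs.example.com are all just illustrations), the environment and the resulting public routes would look like:

```shell
# Umami environment variables that rename the default routes
TRACKER_SCRIPT_NAME=log_visits
COLLECT_API_ENDPOINT=/api/log

# The two routes that must stay publicly reachable:
echo "script:  https://logs.example.com/${TRACKER_SCRIPT_NAME}.js"
echo "collect: https://logs.example.com${COLLECT_API_ENDPOINT}"
```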

How you do this depends on the proxy you use. I use caddy, so I would create a Caddyfile like this:

logs.example.com {
    @js {
        method GET
        path /log_visits.js
    }
    @api {
        method POST
        path /api/log
    }
    handle @js {
        reverse_proxy localhost:3000
    }
    handle @api {
        reverse_proxy localhost:3000
    }
    handle * {
        close
    }
}

I think that will properly forward requests to your JS route and your API route and drop all others. From the same computer, you can still go to localhost:3000 to get to the admin screen, or from somewhere else on your home network via the machine's 192.168.x.x address.

I haven't used umami in a while, so some things may have changed, but that should be the basic idea.

You'll also need a domain name with an automatic DNS updater. I use no-ip.com for that. You can get free subdomains that update when your IP address updates. And you'll need to forward ports on your router to your computer. Ports 80 and 443 should be fine, but don't forward port 3000.

@thedoctor

I just tested this setup. I turned my umami back on, and modified my Caddyfile like I showed you. That last block I had to change to

handle * { abort }

With that, it works as I expected. Tracking works, but my dashboard is only accessible locally from my home network. Outside it, connections are dropped.


@danjones000 Thank you very much, I really appreciate it! I already thought about this a bit and ended up implementing almost exactly what you described (including using caddy) so that's a nice sanity check.

Also, thanks for pointing out the renaming of the routes, I'll go and do that, too.