why don't more sites do it like this? i think because

  • wow oauth and oidc are tedious
  • google and facebook and apple and microsoft and auth0 by okta would all prefer that you use code that they control, or pay for their service instead of rolling your own
  • why would you go to the effort to avoid storing session data on your server when you have this huge database right here to collect as much customer info as possible to sell to the highest bidder

yay i'm gonna implement an OpenID Connect client so you can log into my website with any of the many OpenID Connect Providers out there and i don't have to keep a list of usernames and passwords!

except i hate google, reddit, github, twitter, facebook, amazon, paypal, and apple so,

you can log into my website with your existing account on...
salesforce,
auth0, or
yahoo

so convenient! ✨

i bet i hate salesforce too and just forgot. they have such a punchable name

back to the tech wip: currently figuring out how to use the state parameter to stop csrf attacks, without storing stuff on the backend to verify it hasn't changed, also without accidentally inviting tampering and replay attacks

i think i can just stick a random number in there, "sign" it with an hmac, store the hmac in cookies, check that against state when it comes back
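that scheme, sketched in python with stdlib keyed blake2b (the function names and cookie handling here are mine, not a real implementation):

```python
import hashlib
import secrets

# server-side secret; in a real deployment this is generated once and kept,
# not regenerated per run like this demo does
SERVER_SECRET = secrets.token_bytes(32)

def make_state():
    """Generate a random state value plus a keyed-blake2b tag over it."""
    state = secrets.token_urlsafe(32)
    tag = hashlib.blake2b(state.encode(), key=SERVER_SECRET, digest_size=32).hexdigest()
    return state, tag  # state goes in the auth request, tag goes in a cookie

def check_state(state, tag):
    """On the oidc callback: recompute the tag and compare in constant time."""
    expected = hashlib.blake2b(state.encode(), key=SERVER_SECRET, digest_size=32).hexdigest()
    return secrets.compare_digest(expected, tag)
```

a real version would also want the cookie deleted after one use, and maybe a timestamp inside the state, to blunt replays.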

"don't roll your own crypto" but ya'll didn't roll it for me so guess what

(cue ridin' by chamillionaire)

december adventure, wip:

  • learned about blake2, used it for the signed state parameter, seems to work well
  • got the secrets out of my script so i can commit it to version control and share it
  • got a couple of the possible error messages to be less ugly
  • tightened up the security params of the cookies i'm using
  • deleted the state cookie as soon as we're done using it

to do:

  • use a small signed data blob in cookies not the big id token google hands us
  • tidy up my use of pyjwt
  • stop hardcoding, and instead properly cache, and refresh, oidc discovery documents and providers' certs
  • see how hard it is to enable more oidc providers than google (ideally just, register with them and add their discovery document url?)
  • something neat on the frontend so you can actually tell that this stuff is working 😅
  • blog about it
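the discovery-document caching item could look something like this (a sketch; the `DiscoveryCache` class and the fetch-callable shape are my inventions, and a real version should honor Cache-Control headers instead of a fixed ttl):

```python
import json
import time

class DiscoveryCache:
    """Cache parsed OIDC discovery documents, refetching after a TTL.

    `fetch` is any callable url -> json string, so real code could pass
    something urllib-based and tests can pass a fake.
    """
    def __init__(self, fetch, ttl=3600):
        self.fetch = fetch
        self.ttl = ttl
        self.cache = {}  # url -> (expires_at, parsed document)

    def get(self, url):
        now = time.monotonic()
        hit = self.cache.get(url)
        if hit and hit[0] > now:
            return hit[1]          # still fresh, no network round trip
        doc = json.loads(self.fetch(url))
        self.cache[url] = (now + self.ttl, doc)
        return doc
```

the same wrapper would work for providers' certs, which rotate more often than discovery documents do.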

i must be getting deep into it, just learned that this issue i'm struggling with is not my own fuckup, it's a bug in haproxy that's causing responses to fail in glitchy ways when a jwt signature doesn't validate

haproxy devs know about it and fixed it but the patch hasn't been backported to various versions yet

https://github.com/haproxy/haproxy/commit/46b1fec0e9a6afe2c12fd4dff7c8a0d788aa6dd4

my workaround for now is going to be only attempt jwt validation when the keys match
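in haproxy config terms my plan is roughly this (an untested sketch: the cookie name, kid value, and file path are placeholders; the converters are from haproxy's jwt support):

```
# only attempt verification when the token's kid matches the one key we have on disk
http-request set-var(txn.kid) req.cook(id_token),jwt_header_query('$.kid')
http-request deny unless { var(txn.kid) -m str "google-key-id-we-know" }
http-request deny unless { req.cook(id_token),jwt_verify("RS256","/etc/haproxy/google-pubkey.pem") -m bool }
```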

random achievement: got my travel router set up as a proper travel router

updated openwrt, installed travelmate. now my partner and i can use our phones and laptops together on the hotel wireless without paying exorbitant additional-device fees (not that we ever did that) and have high-quality file transfers between our own devices

there's not enough room on it to install tailscale so maybe it's time to more seriously consider zerotier or a manual wireguard setup

my workaround for the haproxy issue upthread 🧵☝️ doesn't work great because google and probably other identity providers rotate their certificates frequently

instead of building a giant contraption to keep haproxy config updated with multiple configuration lines for every cert i might need, i'm trying to do what people used to do before haproxy grew a jwt_verify builtin: do it with a lua plugin 😎

part of me gets excited about it, like: what other clever hacks could i do with a lil lua plugin to haproxy!

openresty pretty famously implements an entire web platform in what was probably intended to be a little lua extension for nginx

then on top of that web frameworks like lapis exist which let you code your web thing in lua, moonscript (kinda coffeescriptish) or fennel (lisp!)

https://openresty.org/en/
https://leafo.net/lapis/
https://fennel-lang.org/


a high quality mufo recently mentioned caddy so i looked and gosh

caddy (w/plugins) has a lot of things i stalled out trying to make happen with haproxy+lua+lighttpd:

  • jwt, oidc, client cert auth
  • webdav
  • fastcgi
  • dispatch by sni
  • a static fileserver
  • precompressed files
  • if-unmodified-since (with PUT?)

but i agree that the glossy marketing website is sus. some exec is going to spring a trap as soon as it's popular


caddy is golang which i've been stubbornly avoiding learning for as long as it has existed. i would prefer software that stands between my computer and the nasty internet to be written in rust...

but, as much as i dislike golang i think i dislike c and c++ even more

haproxy and nginx are both written in c, and i've segfaulted both with mere configuration file mistakes which makes me super nervous

so maybe i will try caddy next

oof, caddy with my selected modules eats 14% of my weak little vps's half-gig of ram. that's even chonkier than nodejs, which i have already been complaining about and trying to expunge

but maybe that's fine for now since it's fast and nice and has lots of features i like

caddy's webdav + static fileserver works pretty well, except it has the same flaw nginx does: GET requests understand and behave correctly in the presence of HTTP conditional request headers like If-Match or If-Unmodified-Since, but PUT requests silently ignore those headers. which will cause lost data with concurrent edits

nbd i'll just write my own simple tiny PUT handler as a fastcgi i guess since that's all i wanted out of webdav really
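the precondition check that handler needs is tiny. a sketch (header semantics per RFC 7232; the function shape is mine, and a real handler also has to do the check-and-write atomically):

```python
from email.utils import parsedate_to_datetime

def put_allowed(headers, current_etag, current_mtime):
    """Decide whether a PUT may proceed, honoring the conditional
    headers that caddy/nginx ignore on PUT. `headers` is a plain dict;
    `current_mtime` is a timezone-aware datetime of the stored file.
    Returning False means respond 412 Precondition Failed.
    """
    if_match = headers.get("If-Match")
    if if_match is not None and if_match != "*" and if_match != current_etag:
        return False  # client's etag is stale
    ius = headers.get("If-Unmodified-Since")
    if ius is not None and current_mtime > parsedate_to_datetime(ius):
        return False  # file changed since the client last saw it
    return True
```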

recently learned that debian has a software repo explicitly for installing recent versions of haproxy. so i could get one where the bug mentioned upthread is fixed

burned up a ridiculous amount of time troubleshooting a "file not found" error in my config, when i should have stopped relying on docs and read the source. because the haproxy docs and the error message are mistaken:

(contd)


jwt_verify(alg, key): ... the key parameter should either hold a secret or a path to a public certificate

no, it wants the public key extracted from a certificate, not the certificate itself. gotta do this:

openssl x509 -noout -pubkey < cert.pem > pubkey.pem

also you can only have one key per file so it's on you to match the id in the jwt to the file holding the corresponding key

anyway, it's sinking in that having haproxy verify google's jwt stuck in a cookie every request is a bad idea; i'm supposed to check it once with my cgi or whatevs, then forget it and issue my own signed jwt instead
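the mint-my-own-token step can be small. a stdlib-only sketch of what pyjwt's encode/decode do for HS256 (in real code i'd just use pyjwt; names here are mine):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_session_jwt(secret: bytes, subject: str, ttl: int = 3600) -> str:
    """Mint a small HS256 session token after the provider's token checked out."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"sub": subject, "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify_session_jwt(secret: bytes, token: str) -> dict:
    """Check signature and expiry; raise ValueError on any failure."""
    header, claims, sig = token.split(".")
    signing_input = f"{header}.{claims}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(claims + "=" * (-len(claims) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("expired")
    return payload
```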

today i learned that if you are reverse-proxying :80 and :443 with PROXY protocol to a caddy set up like this...

{
    servers {
        listener_wrappers {
            proxy_protocol {
                allow ...
            }
            tls
        }
    }
}

then, in addition to your https site definitions, you also have to toss this line in there to make proxy_protocol apply to the automatic http->https redirect that caddy sets up

:80 { }

https://caddyserver.com/docs/caddyfile/options#:~:text=unless%20you%20explicitly%20declare


but with this figured out, i now have an ipv6 vm that can still serve up its websites to people stuck on ipv4-only networks! without paying for an ipv4 address allocation! 🥳

next, to see if a prosody xmpp server and a coturn turn/stun server will work just as happily as the webserver does within this setup

🤔 turn and stun are mechanisms to help ipv4 clients behind nats talk to each other directly, or failing that, proxy their traffic to each other

nats are pretty much only for ipv4 clients. if they have an ipv6 address it probably isn't nat'd

so it probably doesn't make sense to imagine how to run a stun/turn server on an ipv6-only vm

the appropriate place for it to live is probably alongside the sniproxy doing ip4->ip6 reverse proxying

possible workarounds:

  • put all the clients i care about doing video calls with (my immediate family) on tailscale, which also performs the function of nat traversal
  • talk to my provider about installing coturn on the ipv4 proxy box? but the cost in bandwidth and maintenance might be too high, especially if i'm the only customer using it
  • maybe stun/turn as-a-service exists?
  • admit defeat and just pay $1/mo for an ipv4 address

today i learned firefox's network.dns.preferIPv6 is false by default :C

🤔 that setting fixes mine but how can i trick other people's browsers on dual-stack networks into preferring ipv6, instead of unnecessarily using the shared ipv4->v6 proxy my host generously provides?

first software deployed to the beefy new vps is TiddlyPWA, and it is pretty great, and easier to install than i thought it would be. going to use it as a lab notebook and planner for the rest of the things i want to deploy

i was originally reluctant to use it because it doesn't seem like it would handle multiplayer -- multiple people working on the same page at the same time -- very well. but for now i'm the only user and it's great

TiddlyPWA — TiddlyWiki Storage & Sync Solution

also going to break up my existing tiddlywiki5 into multiple separate tiddlypwas. having to transfer the entire tiddlywiki5 to my mobile device on a spotty lte connection on the other side of the world just to edit a page is not just slow but sometimes totally unusable (timeouts, i think)

a small tiddlypwa continues to work great, even on my mobile device on random networks on the other side of the planet. a far better experience than my big tiddlywiki5 instance.

deno (and node) are both chunkier than i'd like but the utility tiddlypwa provides outweighs that. a problem for later.

try it out! support the author!


set up systemd socket activation to make deno gtfo when nobody's accessing the tiddlypwa wiki for >= 3 minutes 👍 😎
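one common way to get that shape is systemd-socket-proxyd, whose --exit-idle-time does the gtfo part. a sketch (unit names, ports, and the exact wiring to the deno service are made up here):

```ini
# tiddlypwa.socket — the public listener systemd holds open
[Socket]
ListenStream=127.0.0.1:8800

[Install]
WantedBy=sockets.target

# tiddlypwa.service — started on first connection; the proxy exits after
# 3 idle minutes, and the actual deno service can be tied to its lifetime
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=3min 127.0.0.1:8000
```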

setting StateDirectory=foo in a systemd unit makes it create the directory /var/lib/foo and set $STATE_DIRECTORY to that
https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#RuntimeDirectory=

$STATE_DIRECTORY will expand in your ExecStart= declaration but not in e.g. Environment=

so i can't say

Environment=DB_FILENAME=${STATE_DIRECTORY}/database.sqlite3
...
ExecStart=/usr/bin/command

instead i have to
ExecStart=env DB_FILENAME=${STATE_DIRECTORY}/database.sqlite3 /usr/bin/command

is there a prettier way?


there is!

StateDirectory=%N
Environment=DB_FILENAME=%S/%N/database.sqlite3

systemd.unit "specifiers"


setting up xmpp service on an ipv6-only vm with a graciously provided sniproxy to reverse-proxy ipv4 traffic to it

xmpp seems fine. i think my fantastic vps will let me forward those ports but even if not you can make xmpp share http(s) ports with a webserver with some hacks

but if i want to do voice and video, i'll need stun/turn to help ipv4 users

stun seems fine. client connects to it, server says "you appear to me to be (address):(port)," done.
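the stun idea above, as a loopback toy (this is just the "tell me my own address" trick, not the real stun wire protocol):

```python
import socket
import threading

def stun_ish_server(sock):
    """Reply to one datagram with the address:port it appeared to come from."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(f"{addr[0]}:{addr[1]}".encode(), addr)

# server side
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=stun_ish_server, args=(server,), daemon=True).start()

# client side: ask "what do i look like from out there?"
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.sendto(b"hi", server.getsockname())
reply = client.recvfrom(1024)[0].decode()
# on a real network, reply would be the client's public (post-nat) address
```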

turn however...

turn is for when no clever way can be found to connect clients directly to each other; they both connect to a turn server which just proxies: it shovels data back and forth to each client

even though my host would probably grant a request to proxy traffic through their ip4->ip6 machine to my turn service, which would proxy it again, that seems like a wasteful squandering of a currently free community resource. i think i will build this out pretending that i can't do that

i'm not sure if there's any need for stun or turn with ipv6 clients, but otoh ipv6 nats are possible

like when your isp assigns you a /64 but you want to set up several subnets

think i'll rely on the reverse proxy to enable xmpp for ipv4-only users, and just say that voice and video is unavailable to them. the only real users of my xmpp server are my immediate family, and if both happen to be on an ip4-only network and need voice/video, they can get on the vpn

i got a ipv6-only vm from my sweet hosting provider
they gave me a /56 worth of ipv6 address space to play in. woah.
said my vm is currently using only the first /64 of it.
my vm's address is (prefix)::1/64, amazing!
so uh
how do i plug servers and containers into the rest of my available space? do i have to request each address get added to a table somewhere? (no)
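python's ipaddress module makes the carving-up arithmetic concrete (the prefix here is a documentation address, not my real one):

```python
import ipaddress

# a /56 delegation like the one my host handed me
block = ipaddress.ip_network("2001:db8:aa00::/56")

# it splits into 256 /64s: the first one is what the vm itself uses,
# the other 255 are free for bridges, containers, whatever
subnets = list(block.subnets(new_prefix=64))
print(len(subnets))   # 256
print(subnets[0])     # the vm's /64
print(subnets[1])     # the next free /64
```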

(to be continued. but i think i did it, and understand it now!) 🎉

(i might do this as a blog instead)

woo! connected to a test instance of prosody xmpp on port 443 (https) using xmpp's direct-tls protocol

which means

  • hopefully, certain employers won't block the traffic anymore
  • hopefully, it can pass through sniproxy the same way https traffic does, which will enable ipv4 users to talk to my ipv6 xmpp server

there's still different ways i can think of to wire all this together, wondering if any would be better

other ways i could wire it up:

  • instead of using a separate ip address so it and caddy can both listen on port 443, i could have caddy reverse proxy to it

  • might need to put it behind a proxy anyway because it might not handle PROXY protocol from sniproxy

  • might need to put it behind a proxy anyway so iocaine can slap the llm scrapers that try it

  • maybe run it in a container and figure out how to get those their own ip address

wow incus (anti-ubuntu fork of lxd) and its web ui is pretty slick

also libvirt and virt-manager connected to lxc offers the ability to create an application container or an operating system container

(compared to incus which says application containers require docker?)

this feels like a deep rabbit hole, hope i can get grips on it soon

all right incus there you go, a whole-ass lvm volume group all to yourself, let's see what you do with it

also: learning wtf a network bridge is and how to use one 🧑‍🎓📖 instead of the usual winging it with vague guesswork and assumptions from context

is xmpp's direct-tls protocol (usually on port 5223) the same as its unencrypted protocol (usually port 5222) wrapped in tls? same as imaps and pops and https? so could i terminate the tls with haproxy and reverse proxy to an unencrypted xmpp server?

the protocol is all spec'd in RFCs for anybody to look at but they don't wanna get in my brain

also, aw, the wikipedia article for xmpp describes, as an example, a transport for icq. rip https://en.wikipedia.org/wiki/XMPP


the docs don't reveal the info i want, and i don't want to try reading reams of source code in an unfamiliar language right now, so i'll set up an experiment and see how it behaves i guess

(the only activity that ever feels slightly close to doing science in my field of software-jiggling)

my ipv4-only client
-> ipv4-to-ipv6 sniproxy port 443
-> ipv6-only vm
-> haproxy to conditionally unwrap proxy protocol
-> prosody xmpp server

... experiment is working ✨🤩✨
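the "conditionally unwrap proxy protocol" hop is the part that took thought; in haproxy terms it's roughly this (untested sketch, and the sniproxy address is a placeholder):

```
frontend xmpps
    mode tcp
    bind :::443
    # only expect PROXY protocol when the connection comes from the sniproxy;
    # native ipv6 clients connect bare
    tcp-request connection expect-proxy layer4 if { src 2001:db8::51 }
    default_backend prosody

backend prosody
    mode tcp
    # prosody terminates direct-tls itself on its xmpps port
    server prosody [::1]:5223
```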

calling it now: even though haproxy has lots of sharp edges, i like it, or its configuration mechanism, way more than caddy's

  • i seem to be able to make stuff work in haproxy that takes struggle and uncertainty in caddy
  • caddy's magical get-your-free-ssl-cert-automatically is nice when you want to stand up an experiment but i like cronned certbot for "prod"
  • otoh caddy has a nice builtin static webserver 🤷

i think i'm going to be stuck using both for a while

if you're using old versions of certbot to keep your LetsEncrypt TLS certificates up-to-date, it keeps old certs archived. i've never needed them but maybe somebody does?

nope, they're useless and never cleaned up: https://github.com/certbot/certbot/issues/4635

they've fixed it, but if your certbot doesn't get updated often you might still be accumulating files

run this occasionally to remove them:

sudo find /etc/letsencrypt/{csr,keys,archive} -type f -name '*.pem' -mtime +91 -delete
