i want to build a thing that lets me restrict access to some pages on a mostly static site, mostly with haproxy, using oauth like AoC does

i think if i make a cgi thing that handles the backend POST to retrieve the tokens, i can make the server stick the returned id_token in the browser cookies and do it with zero state on the backend and zero javascript on the frontend... 

so like, all the staff at the kids' school have google-workspace-managed gmail addresses at the school's domain name. i can tell haproxy to let them "sign in with google" (oauth2) and thereafter they may access staff-only pages without maintaining any database or user list on my server

i can configure haproxy to do the denying, or have it set the visitor's email address into the headers that get proxied back to the webserver. no need for the webserver to know how to oauth
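the cgi piece of that plan could look something like this minimal sketch. the client id, secret, redirect uri, and cookie name here are placeholders, not my real setup; google's token endpoint is the one thing i'm fairly sure of:

```python
# sketch: swap the ?code= the provider sends back for tokens, then hand
# the signed id_token to the browser as a cookie. all the identifiers
# below are placeholders.
import json
import urllib.parse
import urllib.request

CLIENT_ID = "my-client-id.apps.googleusercontent.com"   # placeholder
CLIENT_SECRET = "not-really-the-secret"                 # placeholder
REDIRECT_URI = "https://example.org/cgi-bin/callback"   # placeholder
TOKEN_URL = "https://oauth2.googleapis.com/token"

def token_request(code: str) -> tuple[str, bytes]:
    """Build the POST that exchanges an authorization code for tokens."""
    body = urllib.parse.urlencode({
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
    }).encode()
    return TOKEN_URL, body

def cookie_header(id_token: str) -> str:
    """Set-Cookie line that parks the signed id_token in the browser."""
    return (f"Set-Cookie: id_token={id_token}; "
            "Secure; HttpOnly; SameSite=Lax; Path=/")

def exchange(code: str) -> dict:
    """Perform the POST; the response JSON includes the signed id_token."""
    url, body = token_request(code)
    with urllib.request.urlopen(url, data=body) as resp:
        return json.load(resp)
```

the cgi's only jobs are that exchange and that Set-Cookie line; everything after that is haproxy's problem.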

this counts as december adventure

with a short python cgi script i'm close to having oauth2 working
(via google first, then more)

oauth2 gives a signed jwt
haproxy can validate and pull data out of that for use as acl material
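in haproxy config that could look roughly like this. untested sketch: the cookie name, key path, and allowlist path are made up, but `jwt_verify` and `jwt_payload_query` are real converters (haproxy 2.5+):

```
# pull the id_token out of the cookie on every request
http-request set-var(txn.id_token) req.cook(id_token)

# verify the signature against a locally stored public key (path is made up)
acl jwt_ok var(txn.id_token),jwt_verify("RS256","/etc/haproxy/google.pubkey.pem") -m int 1

# pull the email claim out of the payload for acl use
http-request set-var(txn.email) var(txn.id_token),jwt_payload_query('$.email')

# the whole "user database": one email address per line in a text file
acl allowed var(txn.email) -m str -f /etc/haproxy/allowed-emails.txt

# staff-only area: deny unless verified and on the list
acl staff_area path_beg /staff
http-request deny if staff_area !jwt_ok
http-request deny if staff_area !allowed

# or: pass the verified identity to the webserver instead of denying
http-request set-header X-Verified-Email %[var(txn.email)] if jwt_ok
```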

this means i can say "only people who have a google workspace account at the kids' school may request these urls from my static site"

or

"only my local fedi tl may post changes to this wiki"

with no server side database or redis or anything 😎

if it works

december adventuring

work in progress: i have a short python cgi that implements enough of OIDC (OpenID Connect, based on OAuth2) that you can click a link on my site to "sign in with google"


i receive what you told google your name and email address is, signed by google

i stick that into your browser cookies so i don't have to store it

next step: get haproxy to parse and verify it to let you see the secret pages, if i like you

maybe replace with my own, shorter, signed cookie

oh my gosh i am logged in -- securely attesting my identity to my mostly static website! the only moving parts on my vps are a little script that stuffs the signed token into a cookie and some configuration in haproxy to make it inspect and verify it

the list of allowed users amounts to a text file, no database or session storage on the server. all i will have to do to "give you an account" so that you can access the secrets and writeable parts is put your google email address in a text file

next steps:

  • as is, this doesn't let google track you as you click around my site. it only knows you granted permission to my site to see your name and e-mail. but i want to expand this to work on any oidc provider that i decide to trust including your or my jank homebrew ones because fuck google

  • stuffing the whole signed jwt into a cookie is kinda heavy. will probably replace that with my own smaller one that doesn't encode anything i don't need to control access
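the smaller replacement cookie could be something like this sketch: just the claims access control needs, signed with a keyed blake2b hash. the key and claim names are assumptions:

```python
# sketch of a smaller signed cookie: payload.tag, like a jwt but with
# only what i need. the key below is a placeholder, not a real secret.
import base64
import hashlib
import hmac
import json
import time

COOKIE_KEY = b"replace-me-with-a-real-secret"  # placeholder

def _b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _unb64url(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def mint(email: str, ttl: int = 3600) -> str:
    """Issue a compact signed cookie holding just email + expiry."""
    payload = _b64url(json.dumps(
        {"email": email, "exp": int(time.time()) + ttl}).encode())
    tag = _b64url(hashlib.blake2b(
        payload.encode(), key=COOKIE_KEY, digest_size=16).digest())
    return f"{payload}.{tag}"

def verify(cookie: str):
    """Return the claims if the tag checks out and it hasn't expired."""
    try:
        payload, tag = cookie.split(".")
    except ValueError:
        return None
    expected = _b64url(hashlib.blake2b(
        payload.encode(), key=COOKIE_KEY, digest_size=16).digest())
    if not hmac.compare_digest(expected, tag):
        return None
    claims = json.loads(_unb64url(payload))
    if claims["exp"] < time.time():
        return None
    return claims
```

blake2b's built-in keyed mode acts as a MAC, so no separate hmac construction is needed; `hmac.compare_digest` is only there for the constant-time comparison.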

why don't more sites do it like this? i think because

  • wow oauth and oidc are tedious
  • google and facebook and apple and microsoft and auth0 by okta would all prefer that you use code that they control, or pay for their service instead of rolling your own
  • why would you go to the effort to avoid storing session data on your server when you have this huge database right here to collect as much customer info as possible to sell to the highest bidder

yay i'm gonna implement an OpenID Connect client so you can log into my website with any of the many OpenID Connect Providers out there and i don't have to keep a list of usernames and passwords!

except i hate google, reddit, github, twitter, facebook, amazon, paypal, and apple so,

you can log into my website with your existing account on...
salesforce,
auth0, or
yahoo

so convenient! ✨

i bet i hate salesforce too and just forgot. they have such a punchable name

back to the tech wip: currently figuring out how to use the state parameter to stop csrf attacks, without storing stuff on the backend to verify it hasn't changed, also without accidentally inviting tampering and replay attacks

i think i can just stick a random number in there, "sign" it with an hmac, store the hmac in cookies, check that against state when it comes back
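that plan, sketched out (the key is a placeholder; the idea is the server keeps no record of the state, just the ability to recompute its tag):

```python
# stateless csrf check for the oauth2 state parameter: random value in
# the authorize url, keyed hash of it in a short-lived cookie, compare
# on the way back. key is a placeholder.
import hashlib
import hmac
import secrets

STATE_KEY = b"another-placeholder-secret"

def new_state() -> tuple[str, str]:
    """Returns (state for the authorize URL, tag for the cookie)."""
    state = secrets.token_urlsafe(16)
    tag = hashlib.blake2b(state.encode(), key=STATE_KEY,
                          digest_size=16).hexdigest()
    return state, tag

def state_ok(state: str, tag: str) -> bool:
    """On the redirect back: recompute and compare in constant time."""
    expected = hashlib.blake2b(state.encode(), key=STATE_KEY,
                               digest_size=16).hexdigest()
    return hmac.compare_digest(expected, tag)
```

an attacker can't forge a (state, tag) pair without the key, and replaying an old pair is bounded by the cookie's lifetime.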

"don't roll your own crypto" but y'all didn't roll it for me so guess what

(cue ridin' by chamillionaire)

december adventure, wip:

  • learned about blake2, used it for the signed state parameter, seems to work well
  • got the secrets out of my script so i can commit it to version control and share it
  • got a couple of the possible error messages to be less ugly
  • tightened up the security params of the cookies i'm using
  • deleted the state cookie as soon as we're done using it

to do:

  • use a small signed data blob in cookies not the big id token google hands us
  • tidy up my use of pyjwt
  • stop hardcoding, and instead properly cache, and refresh, oidc discovery documents and providers' certs
  • see how hard it is to enable more oidc providers than google (ideally just, register with them and add their discovery document url?)
  • something neat on the frontend so you can actually tell that this stuff is working 😅
  • blog about it
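for the discovery-document caching to-do, something like this sketch could work. the fetcher is injectable so it's testable; ttl and urls are assumptions:

```python
# sketch: fetch and cache a provider's openid-configuration document
# instead of hardcoding endpoints. ttl is an arbitrary choice here.
import json
import time
import urllib.request

_CACHE: dict[str, tuple[float, dict]] = {}

def _http_fetch(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def discovery(issuer: str, fetch=_http_fetch, ttl: int = 3600) -> dict:
    """Return the issuer's discovery doc, re-fetching after ttl seconds."""
    now = time.time()
    hit = _CACHE.get(issuer)
    if hit and now - hit[0] < ttl:
        return hit[1]
    url = issuer.rstrip("/") + "/.well-known/openid-configuration"
    doc = fetch(url)
    _CACHE[issuer] = (now, doc)
    return doc
```

adding a new provider would then really be just: register, get a client id, and point this at their issuer url.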

i must be getting deep into it, just learned that this issue i'm struggling with is not my own fuckup, it's a bug in haproxy that's causing responses to fail in glitchy ways when a jwt signature doesn't validate

haproxy devs know about it and fixed it but the patch hasn't been backported to various versions yet

https://github.com/haproxy/haproxy/commit/46b1fec0e9a6afe2c12fd4dff7c8a0d788aa6dd4

my workaround for now is going to be: only attempt jwt validation when the key id (kid) in the token matches a key i actually have on hand
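peeking at the kid means decoding the jwt header without verifying anything, which is safe as long as the only decision it drives is "which key to try". a sketch:

```python
# sketch: read the 'kid' out of a jwt's (unverified) header so we only
# call jwt_verify when we hold the matching key.
import base64
import json

def jwt_kid(token: str):
    """Return the 'kid' claim from the jwt header, or None."""
    header_b64 = token.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(padded))
    return header.get("kid")
```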

random achievement: got my travel router set up as a proper travel router

updated openwrt, installed travelmate. now my partner and i can use our phones and laptops together on the hotel wireless without paying exorbitant additional-device fees (not that we ever did that) and have high-quality file transfers between our own devices

there's not enough room on it to install tailscale so maybe it's time to more seriously consider zerotier or a manual wireguard setup

my workaround for the haproxy issue upthread 🧵☝️ doesn't work great because google and probably other identity providers rotate their certificates frequently

instead of building a giant contraption to keep haproxy config updated with multiple configuration lines for every cert i might need, i'm trying to do what people used to do before haproxy grew a jwt_verify builtin: do it with a lua plugin 😎

part of me gets excited about it like  what other clever hacks could i do with a lil lua plugin to haproxy!

openresty pretty famously implements an entire web platform in what was probably intended to be a little lua extension for nginx

then on top of that web frameworks like lapis exist which let you code your web thing in lua, moonscript (kinda coffeescriptish) or fennel (lisp!)

https://openresty.org/en/
https://leafo.net/lapis/
https://fennel-lang.org/


a high quality mufo recently mentioned caddy so i looked and gosh

caddy (w/plugins) has a lot of things i stalled out trying to make happen with haproxy+lua+lighttpd:

  • jwt, oidc, client cert auth
  • webdav
  • fastcgi
  • dispatch by sni
  • a static fileserver
  • precompressed files
  • if-unmodified-since (with PUT?)

but i agree that the glossy marketing website is sus. some exec is going to spring a trap as soon as it's popular


caddy is golang which i've been stubbornly avoiding learning for as long as it has existed. i would prefer software that stands between my computer and the nasty internet to be written in rust...

but, as much as i dislike golang i think i dislike c and c++ even more

haproxy and nginx are both written in c, and i've segfaulted both with mere configuration file mistakes which makes me super nervous

so maybe i will try caddy next

oof, caddy with my selected modules eats 14% of my weak little vps's half-gig of ram. that's even chonkier than nodejs who i have already been complaining about and trying to expunge

but maybe that's fine for now since it's fast and nice and has lots of features i like

caddy's webdav + static fileserver works pretty well except that it has the same flaw nginx does: GET requests honor HTTP conditional request headers like If-Match or If-Unmodified-Since, but PUT requests silently ignore them. which will cause lost data with concurrent edits

nbd i'll just write my own simple tiny PUT handler as a fastcgi i guess since that's all i wanted out of webdav really

recently learned that debian has a software repo explicitly for installing recent versions of haproxy. so i could get one where the bug mentioned upthread is fixed

burned up a ridiculous amount of time troubleshooting a "file not found" error in my config, when i should have stopped relying on docs and read the source. because the haproxy docs and the error message are mistaken:

(contd)


jwt_verify(alg, key): ... the key parameter should either hold a secret or a path to a public certificate

no, it wants the public key extracted from a certificate, not the certificate itself. gotta do this:

openssl x509 -noout -pubkey < cert.pem > pubkey.pem

also you can only have one key per file so it's on you to match the id in the jwt to the file holding the corresponding key

anyway, it's sinking in that having haproxy verify google's jwt stuck in a cookie every request is a bad idea; i'm supposed to check it once with my cgi or whatevs, then forget it and issue my own signed jwt instead
@pho4cexa I am successfully using Caddy on my linux server. It's not a big installation though - just some static sites, some PHP stuff, and some proxies to custom stuff (custom map renderer, icecast, websocket server). But it's really easy to deploy and configure, and unlike Apache it has a sane config language.

@pho4cexa ttw uses Caddy. I really like it; even made a small contribution (fixing Solaris support).

The ownership of the project _is_ weird; it's "owned" by ZeroSSL which is owned by a holding company, but also describes itself as a "HID" company (the people who do the security fobs in offices?)

But also it's Apache licensed and I think web servers are generic enough that there is no "moat" to be had by turning them proprietary. And if someone tries: 🍴 ...

@pho4cexa also, the way the glossy website is put together is cool -- Caddy supports SHTML-style templates based on the Go template language:

https://github.com/caddyserver/website/blob/master/src/on-demand-tls.html#L75

https://caddyserver.com/docs/modules/http.handlers.templates

Lets things occupy a nice niche between fully static website and web application -- like PHP used to enable (or still does, I suppose) but without being PHP, which is a positive.

the novelty of caddy's worn off
and i recently bumped to haproxy 3.3
and learned haproxy 3.3 supports ktls
so, thinking about converting everything from caddy completely back to haproxy+lighttpd
and then giving the oauth2 / openid connect thing another shot. (betting the bug i had trouble with upthread is gone now)

it's a famous song whose lyrics begin,

they see me rollin', they hatin'

https://music.youtube.com/watch?v=CtwJvgPJ9xw&list=PLiy0XOfUv4hGOgrdMw9qfCWMMK07f-P8J

and it plays in my head every single time i hear the phrase popular in the field of computer security: "don't roll your own crypto."

https://security.stackexchange.com/questions/18197/why-shouldnt-we-roll-our-own#18198


@pho4cexa back in the day (1996, apparently!), the FastCGI spec offered the option of defining an app whose only job was to make authorization decisions. so in principle you could make auth dynamic while letting your web server do static file serving or whatever. I feel like people have understood that this kind of separation of concerns is useful for a long time, and nonetheless still failed to build tools that make it usable. (https://fast-cgi.github.io/original/#S6.3)

that said, I think there is growing interest in using client-stored JWTs to record authorization decisions. big sites like doing that because the JWT can be verified in edge compute and only the original OAuth dance needs to go all the way to the backend


@pho4cexa we did it like this at Shopify and it was kind of cool -- every web server you exposed to the Internet had an OAuth2 proxy in front which only allowed @shopify.com Google accounts.

You needed a security review to have the proxy removed or altered.

I'm sure it stopped a million things accidentally leaking over the years 😁 -- glad to see it was a fun December Adventure.

No pressure but are you planning on posting your code anywhere? I had the same idea as you re: Mastodon...

@insom @pho4cexa I agree with Aaron. This is fire and I would be happy to see a write up ❤️

Also you rock obviously

@kingcons @insom ahh delicious encouraging feedback, it grants the dopamines

i'm definitely going to both write this up and publish the code that makes it work, just as soon as i tidy up some of these TODO comments like "don't store secrets verbatim in the source code" and "actually check this value otherwise it invites a cross site attack" 😅

@insom @pho4cexa
hello
when did you start this, if i may ask?