A few thoughts on Astral / OpenAI, now that the emotions have sat for a bit.

First, let me start by noting that AI is an attack on open source, inherently, by necessity, and at a structural level. That argument is bigger than Astral, but the short version is that you cannot simultaneously expand the public commons and work towards its enclosure; moreover, if the public commons doesn't stand for the public good, then it's not really a commons any more.

Second, the unfortunate reality of the software and hardware industries is that funding for the public commons is nearly non-existent. To get anything done, it either has to serve the interests of some company, or has to get done by tricking leadership of one or more companies into believing that their interests are best served by expanding the public commons.

So yeah, lots of folks in the industry have to walk the line between doing good within a system and that system being extractive.

The consequence of those two facts strikes me as being that lots of people are doing good work, much of it at evil companies, and that that tension pretty much defines this motherfucking cursed industry. If your job depends on making AI numbers go up, that means your job depends on undermining open source. Sometimes you can malicious-compliance that into helping open source as well, and hope the two balance out in an overall helpful direction.

Point being, I'm not criticizing specific individuals here. While I think there are some specific individuals who have made this situation demonstrably worse, on purpose, and for their own personal ends, that's not germane here, and so I'll keep those specifics to myself for now.

Rather, I want to talk about exit strategies.

Because let's face it, we depend on some pretty fucked up shit in software development. Much of the shit that we depend on that isn't currently and actively fucked up is in immediate danger of becoming fucked up, a la OpenAI buying out Astral.

So it's a matter of knowing how, when you adopt a new tool or technology or whatever else, you will eventually stop using it.

Using GitHub was great back in the day. They gave OSS projects a lot of free shit that was hard to get elsewhere. But it's clear in retrospect that we needed more and better exit strategies.

The Astral buyout is a good reminder that uv came with a very sensible exit strategy almost built-in: reliance on openly developed and published specs. But that only works because PDM exists, and that in turn only works because PEPs are collaboratively developed, and so forth.
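For the curious, that spec-reliance looks roughly like this: a pyproject.toml that sticks to standards-defined tables (PEP 621 `[project]` metadata, PEP 518 `[build-system]`), so uv, PDM, pip, and friends can all read the same file. The project name, versions, and backend choice here are made up for illustration, not a recommendation:

```toml
# Standards-only pyproject.toml sketch. Nothing here is specific to
# any one tool, which is exactly what makes switching tools possible.

[project]                      # PEP 621 metadata
name = "example-project"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["requests>=2.31"]

[build-system]                 # PEP 518 build backend declaration
requires = ["hatchling"]
build-backend = "hatchling.build"
```

The tool-specific pieces, like uv's own lockfile format and `[tool.*]` tables, are where the exit gets harder again, which is exactly the point.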

As I mentioned earlier, it's a bit more difficult to have good exit strategies with ruff, given that the specs around linting are much looser. It's even harder to have a good exit strategy for ty, even though there are good specs, because there's no great type checker to use instead¹.

___
¹As has been pointed out to me, mypy is, for all its strengths and weaknesses, not a type checker: it doesn't follow formal mathematical type-checking rules, it follows linting heuristics.

@xgranade Regarding the footnote - what's that referring to? I'd always believed mypy at least tried to apply mathematical type-checking rules (to the extent possible in a language where the formal type system has largely been retrofitted). What sort of thing does it fall short on?

@mal3aby I said a bit in another branch of the replies, but basically it comes down to the fact that mypy will reject programs that are correctly typed, but that are likely erroneous at a logic level. That's something I expect a linter to do, but it's surprising and frustrating to get from a type checker.

https://wandering.shop/@xgranade/116262065605008679

@mal3aby Like, it's well designed as a linter that's a bit more rigorous, but not fully reproducible from specifications alone.
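A minimal sketch of the kind of program I mean. To hedge: mypy only emits this particular complaint when run with `--strict-equality` (or `--strict`), and the function and values here are invented for illustration:

```python
# Well-typed by Python's rules: `==` is defined between any two
# objects, and an int compared to a str simply evaluates to False.
# mypy with --strict-equality nonetheless rejects the comparison as a
# "non-overlapping equality check" -- a judgment about a *likely bug*
# (lint behavior), not about type correctness.
def has_admin(user_ids: list[int]) -> bool:
    # user_ids holds ints, so comparing against the str "admin" can
    # never be True -- almost certainly a logic error, but a legal,
    # well-typed one.
    return any(uid == "admin" for uid in user_ids)

print(has_admin([1, 2, 3]))  # False, unconditionally
```

Rejecting that is helpful! It's just linting, not type checking.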