A few thoughts on Astral / OpenAI, now that the emotions have sat for a bit.

First, let me start by noting that AI is an attack on open source: inherently, by necessity, and at a structural level. That argument is bigger than Astral, but the short version is that you cannot simultaneously expand the public commons and work towards its enclosure; moreover, if the public commons doesn't stand for the public good, then it's not really a commons any more.

Second, the unfortunate reality of the software and hardware industries is that funding for the public commons is nearly non-existent. To get anything done, the work either has to serve the interests of some company, or has to get done by tricking the leadership of one or more companies into believing that their interests are best served by expanding the public commons.

So yeah, lots of folks in the industry have to walk the line between doing good within a system and that system being extractive.

The consequence of those two facts strikes me as being that lots of people are doing good work, much of it at evil companies, and that that tension pretty much defines this motherfucking cursed industry. If your job depends on making AI numbers go up, that means your job depends on undermining open source. Sometimes you can malicious-compliance that into helping open source as well, and hope the two balance out in an overall helpful direction.

Point being, I'm not criticizing specific individuals here. While I think there are some specific individuals who have made this situation demonstrably worse, on purpose, and for their own personal ends, that's not germane here, and so I'll keep those specifics to myself for now.

Rather, I want to talk about exit strategies.

Because let's face it, we depend on some pretty fucked up shit in software development. Much of the shit that we depend on that isn't currently and actively fucked up is in immediate danger of becoming fucked up, a la OpenAI buying out Astral.

So it's a matter of knowing how, when you adopt a new tool or technology or whatever else, you will eventually stop using it.

Using GitHub was great back in the day. They gave OSS projects a lot of free shit that was hard to get elsewhere. But it's clear in retrospect that we needed more and better exit strategies.

With the Astral buyout, it's a good reminder that uv came with very sensible exit strategies almost built-in: reliance on openly developed and published specs. But that only works because PDM exists, and that in turn only works because PEPs are collaboratively developed, and so forth.

As I mentioned earlier, it's a bit more difficult to have good exit strategies with ruff, given that the specs around linting are much looser. It's even harder to have a good exit strategy for ty, even though there are good specs, because there's no great type checker to use instead¹.

___
¹As has been pointed out to me, mypy, for all its strengths, is not really a type checker: it doesn't follow formal mathematical type-checking rules, it follows linting heuristics.

So: an exit strategy relies on good specs and parallel tooling. For ruff, we have parallel tooling. For ty, we have good specs. For uv, we have both.

Of course, that only matters if we actually take the exit, but for now we're still in the kinda-sorta OK case.

But what about the next time some infrastructure gets yoinked out from the Python ecosystem? How do we make sure we keep having good exit strategies?

That's when I get back to the first fact: AI is an attack on open source.

Every single PR that is extruded or summarized by an AI product weakens exit strategies by undermining parallel tooling. Our choice to adopt AI, or even to insufficiently oppose its adoption, means we are that much more vulnerable to *infrastructure* becoming enclosed.

That's true in the obvious way: in the most generous interpretation of AI, if you're renting your brain, someone else can jack the prices on you or turn off projects they don't like.

But it's also true from a labor rights perspective. You cannot undermine the value and power of labor without also eroding that balance I talked about in the very beginning of this thread. Individual workers can say no, they can bend corporate policies towards public good through malicious compliance or outright defiance. They can form temporary alliances of convenience.

AI products cannot. They are designed to enclose, and can only ever enclose.

Long and short of it being, if you think OpenAI, a weapons contractor who is gleefully helping the US bomb Iran, buying out Python tooling is a bad thing, then follow through. Don't hem and haw about AI in OSS: oppose it.

Oppose AI in the negative sense, ban it where you can, shout (without harassing) until the ink has rubbed off your keycaps. Oppose AI in the *positive* sense by building specs and parallel tooling.

But whatever you do, please don't make the problem *worse* by allowing AI.
