A few thoughts on Astral / OpenAI, now that the emotions have sat for a bit.

First, let me start by noting that AI is an attack on open source, inherently, by necessity, and at a structural level. That argument is bigger than Astral, but the short version is that you cannot simultaneously expand the public commons and work towards its enclosure; moreover, if the public commons does not stand for the public good, then it's not really a commons any more.

Second, the unfortunate reality of the software and hardware industries is that funding for the public commons is nearly non-existent. To get anything done, it either has to serve the interests of some company, or has to get done by tricking leadership of one or more companies into believing that their interests are best served by expanding the public commons.

So yeah, lots of folks in the industry have to walk the line between doing good within a system, and that system being extractionist.

The consequence of those two facts strikes me as being that lots of people are doing good work, much of it at evil companies, and that that tension pretty much defines this motherfucking cursed industry. If your job depends on making AI numbers go up, that means your job depends on undermining open source. Sometimes you can malicious-compliance that into helping open source as well, hoping the two balance out in an overall helpful direction.

Point being, I'm not criticizing specific individuals here. While I think there are some specific individuals who have made this situation demonstrably worse, on purpose, and for their own personal ends, that's not germane here, and so I'll keep those specifics to myself for now.

Rather, I want to talk about exit strategies.

Because let's face it, we depend on some pretty fucked up shit in software development. Much of the shit that we depend on that isn't currently and actively fucked up is in immediate danger of becoming fucked up, a la OpenAI buying out Astral.

So it's a matter of knowing how, when you adopt a new tool or technology or whatever else, you will eventually stop using it.

Using GitHub was great back in the day. They gave OSS projects a lot of free shit that was hard to get elsewhere. But it's clear in retrospect that we needed more and better exit strategies.

With the Astral buyout, it's a good reminder that uv came with very sensible exit strategies almost built-in: reliance on openly developed and published specs. But that only works because PDM exists, and that in turn only works because PEPs are collaboratively developed, and so forth.

As I mentioned earlier, it's a bit more difficult to have good exit strategies with ruff, given that the specs around linting are much looser. It's even harder to have a good exit strategy for ty: even though there are good specs, there's not a great type checker to use instead¹.

___
¹As has been pointed out to me, mypy is, for all its strengths and weaknesses, not a type checker. It doesn't follow formal mathematical type checking rules, it follows linting heuristics.

So: an exit strategy relies on good specs and parallel tooling. For ruff, we have parallel tooling. For ty, we have good specs. For uv, we have both.

That only matters if we actually take the exit, but for now we're still in the kinda-sorta OK case.

But what about the next time some infrastructure gets yoinked out from the Python ecosystem? How do we make sure we keep having good exit strategies?

That's when I get back to the first fact: AI is an attack on open source.

Every single PR that is extruded or summarized by an AI product weakens exit strategies by undermining parallel tooling. Our choice to adopt AI, or even to insufficiently oppose its adoption, means we are that much more vulnerable to *infrastructure* becoming enclosed.

That's true in the obvious way: in the most generous interpretation of AI, if you're renting your brain, someone else can jack the prices on you or turn off projects they don't like.

But it's also true from a labor rights perspective. You cannot undermine the value and power of labor without also eroding that balance I talked about in the very beginning of this thread. Individual workers can say no, they can bend corporate policies towards public good through malicious compliance or outright defiance. They can form temporary alliances of convenience.

AI products cannot. They are designed to enclose, and can only ever enclose.

Long and short of it being, if you think OpenAI, a weapons contractor who is gleefully helping the US bomb Iran, buying out Python tooling is a bad thing, then follow through. Don't hem and haw about AI in OSS: oppose it.

Oppose AI in the negative sense, ban it where you can, shout (without harassing) until the ink has rubbed off your keycaps. Oppose AI in the *positive* sense by building specs and parallel tooling.

But whatever you do, please don't make the problem *worse* by allowing AI.

@xgranade this thread is brilliant.

Please make it a blog post.

@onepict Thank you, I really appreciate it! ♥

May have to do so, yeah!


@xgranade Fantastic thread, followed! Thanks @teknomagic for boosting.

@xgranade I like this notion of renting one's brain. It's a clear and precise description of what's happening when people use slop generators.

Earlier today I saw someone commenting on how people who spend a lot of time interacting with the hallucination engines start sounding like them. As is common with humans, we take cues from our environment. Pokemon Go trained its users. LLMs are training their users.

These things are the new cigarettes and need to be handled as such.

@xgranade this talk by Richard Fontana (@richardfontana) is very relevant. He argues that the four freedoms should be viewed differently if we want to preserve them (see attached slide). He also adds a fifth one about stewardship.

I couldn't find a video of the talk online, but the LWN article is very good.
@xgranade Consider also Python (cpython) being implementation-defined, which makes the creation of parallel and/or new implementations compliant/compatible with the language more difficult.

(The Language Reference technically documents the implementation, rather than serving as an implementation target.)

That's some pretty core infrastructure to it and ripe for LLM compromise.
@lispi314 Yes, and to wit, there's already more than one Claude-extruded PR in core CPython. The impact seems fairly small so far, given that it's not very many commits yet, but that's not a *good* sign.

@xgranade I was somewhat dismayed recently when looking at languages that do have proper specs vs not, as I am considering the necessity of abandoning the others.

Unfortunately, specs seem to be very unpopular with new popular languages for whatever reason.

Especially scripting languages. Other than ECMAScript there's Scheme, sh and that's about it.

@xgranade Regarding the footnote - what's that referring to? I'd always believed mypy at least tried to apply mathematical type-checking rules (to the extent possible in a language where the formal type system has largely been retrofitted). What sort of thing does it fall short on?

@mal3aby I said a bit in another branch of the replies, but basically it comes down to mypy rejecting programs that are correctly typed, but that are likely erroneous at a logic level. That's something I expect a linter to do, but it's surprising and frustrating coming from a type checker.

https://wandering.shop/@xgranade/116262065605008679

@mal3aby Like, it's well designed as a linter that's a bit more rigorous, but not fully reproducible from specifications alone.
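To make that distinction concrete, here's a minimal sketch of the kind of lint-style judgment being described (the function name and scenario are hypothetical, and this assumes mypy run with its `--strict-equality` flag): the program below is type-correct Python, yet mypy flags the comparison as a likely bug.

```python
# Hypothetical illustration: this function is type-correct Python
# (`==` is defined between any two objects), but under
# `mypy --strict-equality` the comparison between non-overlapping
# types (int vs. str) is flagged as a likely logic error --
# a heuristic, lint-style judgment rather than a formal typing rule.

def has_admin(user_ids: list[int]) -> bool:
    # Almost certainly a bug: comparing ints against the string
    # "admin" can never be true, yet the types check out.
    return any(uid == "admin" for uid in user_ids)

print(has_admin([1, 2, 3]))  # runs fine at runtime and prints False
```

Whether you consider that check a feature or an overreach is exactly the type-checker-versus-linter question.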
@xgranade Though I thought everyone learned their lesson with sourceforge.net in 2013 when they started to bundle stuff with your installer?

@pinskia There's a difference between learning your lesson and affording it. GitHub meant I didn't need to spend time managing a VCS, paying for hosting, and the like, I could get to the brass tacks of writing my project.

That's not for nothing.

@xgranade @pinskia We could've had a mostly-p2p answer to that though.

Most of the NAT hole punching that overlay networks rely on these days was possible even back then (IPv6 adoption is vaguely better now).

And some content-addressed layer like git-torrent and some manner of indexing and we could've avoided this.
@lispi314 @pinskia No disagreement, but we didn't have that at the time that GitHub started getting popular. For me, personally, that subsidy enabled a lot.
@xgranade @pinskia We had Gnutella/LimeWire, BitTorrent (the updatable-torrent BEPs are pretty old), I2P, and Freenet (now Hyphanet).

They were just promptly disregarded.
@lispi314 @pinskia As source control systems, though?
@xgranade @pinskia As transports and/or backing for version-controlled source artifacts, yes.

There's no real reason that wouldn't work.

Literally all that was necessary was to glue together systems that already existed.

(Or figure out some adaptable schema that could be independently implemented over each and over new things, to prevent single points of failure on the scale of entire networks.)

@lispi314 @pinskia Right, but I don't think it's irrational for someone starting a new project circa 2010 to use a subsidized system and get to work on their project instead of building out that glue.

My point was that what I think *was* irrational was not having an exit strategy, perhaps such as building that better system out in parallel.

@xgranade @pinskia Personally, had there been p2p options when I started and I had known to know about them, I would have used them.

Unfortunately when I got started I had neither sufficient understanding nor skill to implement that myself.

And then by the time I could reasonably consider doing so, it was already far too late; no one would even deign to consider alternatives, so people fell back on paltry, far more brittle options like self-hosting.

That does tie back to the irrational part, you see: the unwillingness to merely entertain alternatives. (And the demoralization for others.)

@lispi314 @pinskia Part of where I think I disagree is the separation between what's irrational for an individual under limited resources and what's irrational for a community writ large.

Like I no longer say "just use Linux," but rather "have an exit plan for when Windows or macOS are no longer options." The world would be better with more people using Linux, hands-down, and in many ways it's already more usable than Windows or macOS.

But not everyone is in a good position to challenge that.

@lispi314 @pinskia Like, industry-specific tooling that's built around Windows. A lot of publishers only talk Word, and if you want to publish a book, you need *specifically* Word even though LibreOffice is easier to use for many cases. That kind of lock-in is hard for some individuals to fight if their livelihoods depend on Word.

But systemically, *something* has to change.

@xgranade @pinskia Part of the reason I don't say "just use Linux" is that I'm not sure Linux won't also melt into slop.

There was also implied the assumption that distributions were sufficiently standalone to endure.

Which they aren't. And while the other UNIX-likes may be... some of them are also at slop risk.

@lispi314 @pinskia Well, there's that. systemd, Python, and who the fuck knows what else have all started melting.

I think that's extremely bad, *and also* that Windows and macOS add additional harms on top of that, such that "just use Linux" is, in some limited scenarios, still reasonable advice.

Regardless, though, I absolutely agree that Linux is in real danger of melting into slop, and fast.

@xgranade @pinskia

"the separation between what's irrational for an individual under limited resources and what's irrational for a community writ large."

Well, frequently it starts with one.

But yes, there is a difference in how much it applies, and systematization.

"That kind of lock-in is hard for some individuals to fight if their livelihoods depend on Word."

Honestly that should be considered anticompetitive collusion (and it was pretty deliberately fostered by Microsoft).

Stuff shouldn't depend exclusively on proprietary formats & interfaces.

@lispi314 @pinskia I mean, not everyone can solve every problem, though? I think of like immigration lawyers I know who work in Word and don't challenge that lock-in because they're busy doing pro bono work to rescue people from ICE.

I think both problems are critical to solve, but with differently shaped fallouts, and on different timescales... it's fair for someone to focus on one kind of injustice and not the other?

@xgranade @pinskia

"it's fair for someone to focus on one kind of injustice and not the other?"

It is. My statement is also criticism of the legal system. That lock-in wasn't sued into oblivion years ago, as the blatantly anticompetitive measure it is, because making use of the legal system is (deliberately) priced out of the reach of the majority.

@xgranade
It was never great, and the FSF has warned us about these kinds of pitfalls since, well, forever.
@xgranade The Annexation of the Commons was, unfortunately, not a traditional parable.