Glyph

@glyph
6.6K Followers
318 Following
30.1K Posts

he/him

You probably heard about me because I am the founder of the Twisted Python networking engine open source project. But I’m also the author and maintainer of several other smaller projects, a writer and public speaker about software and the things software affects (i.e.: everything), and a productivity nerd due to my ADHD. I also post a lot about politics; I’d personally prefer to be apolitical, but unfortunately the global rising tide of revanchist fascism is kind of dangerous to ignore.

posts: https://blog.glyph.im/
disclosures: https://blog.glyph.im/pages/disclosures.html
code: https://github.com/glyph
patrons: https://www.patreon.com/creatorglyph

Ah. So this is why I have seen several mentions of this over the last few days. Somebody posted my article to lobste.rs with an edited title, and there has been some Discourse I didn't notice.

I've now replied in the comments, but will update the post itself later, too: https://lobste.rs/s/rvgvgj/best_line_length_is_88#c_mhal5b

(This is not an original thought. Although I've expanded on it a bit here, I have sadly lost reference to the original citation I wanted to use and search on Mastodon is intentionally dysfunctional; if you know who I'm paraphrasing here, feel free to link it up in a reply.)

Thus, when an LLM absorbs some stolen data, what is happening cannot be 'learning'; it's something else. When we call it 'training', that's a metaphor, not a description. In reality, it is a parasitic activity that requires fresh non-LLM-generated information from humans in order to be sustainable.

Q.E.D. <https://en.wikipedia.org/wiki/Model_collapse>

A teacher “learning more from their students” is such a common observation that it is a cliché. Colleagues mutually learn from each other in professional settings. Actual artists are in conversation with one another, not just learning from a static historical canon. Etc., etc.

LLMs cannot do this. The output an LLM produces contains a sort of poisonous residue that destroys the reasoning capacity of other LLMs trained on it; this is a well-known problem in the field, known as "model collapse".
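
(A toy sketch of the dynamic, just to make the feedback loop concrete; none of this is any real model's training code. The "model" here is just a fitted Gaussian, and the sample sizes are made up for illustration: "train" each generation only on the previous generation's output, and the estimate degenerates.)

    import random
    import statistics

    def train(samples):
        # "Training" a toy model: estimate a mean and standard deviation.
        return statistics.mean(samples), statistics.stdev(samples)

    def generate(mu, sigma, n):
        # "Inference": sample n outputs from the fitted model.
        return [random.gauss(mu, sigma) for _ in range(n)]

    # Generation 0: fresh, human-made data.
    data = [random.gauss(0.0, 1.0) for _ in range(50)]

    for generation in range(101):
        mu, sigma = train(data)
        if generation % 10 == 0:
            print(f"generation {generation:3d}: sigma = {sigma:.3f}")
        # Each generation trains *only* on the previous generation's output.
        data = generate(mu, sigma, 50)

The estimated spread tends to drift downward as a noisy random walk; run it long enough and the toy "model" collapses toward repeating a single value. Real model collapse is vastly more complicated than this, but the feedback loop has the same shape: without fresh non-model-generated data, the distribution narrows.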

"LLMs learn the same way a person does, it's not plagiarism"

This is a popular self-justification in the art-plagiarist community. It's frustrating to read because it's philosophically incoherent, but making the philosophical argument is annoyingly difficult, particularly if your interlocutor maintains a deliberate ignorance about the humanities (which you already know they do). But there is a simpler mechanical argument you can make instead: "learning" is inherently mutual.

Do CEOs Dream Of Electric Cocaine

Let's ask the real question:

Firefox users,

do you want any AI directly built into Firefox, or separated out into extensions?

@firefoxwebdevs
@davidgerard
@tante

#Firefox #InformedConsent

I want AI built into Firefox
I want AI separated into extensions
Mozilla should not focus on AI features at all

The text mode lie: why modern TUIs are a nightmare for accessibility — The Inclusive Lens https://xogium.me/the-text-mode-lie-why-modern-tuis-are-a-nightmare-for-accessibility #Accessibility #CLI #TUI

The number one thing I've been hearing from people in tech lately is, basically, "How the hell am I supposed to work in this industry anymore?" Though most folks are kind of afraid to say it out loud. So I wrote about how to think about it: https://www.anildash.com/2026/01/05/a-tech-career-in-2026/

I have in fact made all my code open source; I write updates about it, I try to stream myself coding once a week, and while I wouldn't say I get *zero* engagement, it's… closer to zero than any other number.

The thing about actual human beings is that they have shit to do. It's *very* hard to get their attention.

In a way, I'm lucky that LLMs just don't work for what I'm trying to do, because it feels like standing on a trap door that could be opened by glancing in the wrong direction.