My god. Everyone needs to read this:
“The Reverse-Centaur’s Guide to Criticizing AI”
(from @pluralistic)
@linux_mclinuxface @pluralistic That was an excellent read. It reframed how I think about the AI bubble, and also taught me a few things. Like the bit about the Taft-Hartley Act, for example.
I rarely read things that long on a screen, but this was worth it.
@linux_mclinuxface @pluralistic
Really excellent piece that absolutely nails the current situation and the problems with it.
I loudly protest the idea that 'AI' (in its current form, anyway) poses any existential danger to humanity. That framing is just a way for these companies to pretend that what they've created is in any way a real AI, rather than spicy autocomplete.
@Soloflow @linux_mclinuxface @pluralistic
True, but that's more down to the companies building/utilising them than to the 'AI' itself.
@linux_mclinuxface @pluralistic Thank you, this is great! Halfway through now. I love this bit:
"And because AI is just a word guessing program, because all it does is calculate the most probable word to go next, the errors it makes are especially subtle and hard to spot, because these bugs are literally statistically indistinguishable from working code (except that they're bugs)."
@macronencer @linux_mclinuxface @pluralistic
You are describing systems from pre-2025.
As of May 2025 there is a reasoning layer on most public frontier models.
Additionally many models fact check and provide clickable references.
Many models today are de facto #RAG systems rather than pure #LLM.
It's perfectly fine to have formed an informed, robust opinion on a tech you don't use.
But as the tech rapidly progresses, the baseline changes.
Increasingly, your opinion will be diverging from the facts.
Increasingly, your opinion will seem informed ONLY to other non users.
Other folks will see statements that describe ancient systems and understand that the opinion is no longer informed.
I understand Doctorow wrote a well-regarded text, "How to criticise AI".
For the sake of efficacy, I hope there is a "How to stay up to date" chapter.
#AI is a moving target.
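For readers unfamiliar with the #RAG term above: retrieval-augmented generation means the system first retrieves relevant documents, then grounds the model's answer in them, which is where the clickable references come from. The toy corpus, the keyword-overlap retriever, and the prompt template below are all hypothetical illustrations of the shape of such a pipeline, not any vendor's actual implementation:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Step 1 (retrieve): score documents by keyword overlap with the query.
# Real systems use vector embeddings, but the overall shape is the same.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Step 2 (ground): paste the retrieved passages into the prompt, so the
# model answers from supplied sources instead of pure parametric recall.

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(retrieve(query, corpus))
    )
    return f"Answer using only these sources, citing [n]:\n{context}\n\nQuestion: {query}"

corpus = [
    "The Taft-Hartley Act of 1947 restricts certain union activities.",
    "Large language models predict the next token from context.",
    "A reverse centaur is a human supervising a machine, not the other way round.",
]
print(build_prompt("What does the Taft-Hartley Act do?", corpus))
```

The point of the sketch is only that the model's output is anchored to retrieved text it can cite; it does not, by itself, guarantee the answer is correct.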
@macronencer @linux_mclinuxface @pluralistic I guess that's what bugs are, fundamentally. Chunks of code that look right, but when you dig into it, don't do what you want.
Until now we've only produced them by accident.
@negative12dollarbill Not all of them look right. Ever taken a look at the DailyWTF? :)
Interestingly, if you switch "code" to "writing" in the above, you've also summarised an analogous issue. Once, a colleague thought a draft email I'd shared with him was verbose, so he asked ChatGPT to shorten it. It took 30% off, but also subtly changed the nuance of my meaning in three places.
@linux_mclinuxface @pluralistic
Got halfway through, I see some problems in the argumentation in that post.
The "models" are not merely "collecting facts" - that's insanely reductive. Instead, they are compressing.
I think it's still valid to argue how far copyright is from a working solution. But pretending that there's any overlap between statistical and reconstructive information, or that the law is unlikely to make that distinction, or that making it wouldn't help, seems counterproductive.
@linux_mclinuxface @pluralistic
Also, summarizing documents is another one of those reverse centaur use cases. Plenty of documented research showing high failure rates. Would not put it in the bucket of "useful stuff", particularly if I was a writer. 👀
And finally, while I agree that the focus should be on "can it replace our jobs", there's remarkably little hard data (if any) in the post about that part specifically, which seems like a wasted opportunity.