I find it disillusioning to see the casual use of "AI" slowly creeping into our hacker circles. Most of the discussions about AI focus on the quality of its output. I think we're not doing a good job communicating its more fundamental dangers.

In this blog post I write about how tools shape who we are and why the resource intensiveness of AI is ingrained in its purpose. About the devaluation of skills, and power cycles.

Let me know what you think.

https://fokus.cool/2025/11/25/i-dont-care-how-well-your-ai-works.html

I don't care how well your "AI" works - fiona fokus

@fionafokus When AI was first all the buzz, I downloaded gpt4all and fed it a *ton* of Sega Dreamcast hacking notes I and others had compiled. Figured I could use it like a quick reference. First day, I tried to ask it some things, and was impressed how it had answers. Then I tried implementing the answers, and quickly realized it was constantly full of shit. Even when trained entirely locally on hyper-specific documents, it'll lie and give me crazy bullshit answers. It's useless for this stuff.
@fionafokus In order to actually effectively use AI like this, you have to know the subject well enough to constantly check its work, which defeats the purpose of using a cheat tool to get around knowing the subject. It's like a Dunning-Kruger machine: how much you trust it reveals how little you know about the subject.
@fionafokus I see people be like "just use it for boilerplate!" except I work in embedded systems that are performance-bottlenecked and need to be optimal. This generated boilerplate code has no regard for cache, no regard for smart register usage, etc. It's like the world's most slightly-below-average programmer doing the work for you.
@GabeMoralesVR @fionafokus also, there are a lot of tools that reliably produce boilerplate without AI.

@GabeMoralesVR @fionafokus

Every time I break down and use AI to solve some problem I'm flummoxed by, I'm kind of astounded at how useless it is. XD

@fionafokus I agree with you. I was thinking the other day about how sloppy thinking leads to sloppy code, and back around again. It's not a new idea, but LLMs massively increase the problem.
@huwr @fionafokus a huge amount of #Nostr apps claim to have been vibe coded and they all seem to work pretty well
@fionafokus it resonates a lot with me, especially the part about lost skills. thank you for writing it up.

/cc @kabel42 especially the "if the hit rate is good enough" argument has never convinced me, and here the why is put into words better than I could.

The linked article about tools becoming part of yourself is also a real mindfuck and quite interesting.

@fionafokus the cited post about writing resonates with me a lot. for me, programming was never about building stuff dispassionately in a vacuum! I guess it makes sense that some people do just want to be done with it so they can get their pay packet and go home, but for me it's never worked like that, and I'm sad that the general industry trend is seemingly towards commoditizing everything and taking all the skill and creativity out of the job (even outside of just AI)

@fionafokus I'm fascinated by the take about the resource usage being an advantage to the AI bros.

They've created software that cannot (practically) be replicated as open source software / free software, because there is no community of people with sufficient hardware / data sets. It will inherently always be a centralized technology.

Fascinating and scary.

@bastelwombat @fionafokus It’s similar to web browsers, isn’t it? It’s all open standards (afaik), but good luck making your own browser. There’s just so much stuff to implement and it’s constantly changing/evolving, nobody but a corporation can keep up with this.
@movq @bastelwombat @fionafokus Ladybird is making amazing progress. https://ladybird.org

@kerrick
They're just led by a fash defender.

@movq @bastelwombat @fionafokus

@bastelwombat @fionafokus

It is why the Chinese one, with much lower resource needs, scared the shit out of them.

@fionafokus Thank you for writing this, it represents 90% of what I think - and encouraged me to write an article myself. :)
@fionafokus "this is the worst AI will ever be" vs. it's literally a bubble, and none of these companies who are currently operating at huge losses will provide this service in 5 years
@fionafokus @jan_leila, I’m not sure these theses contradict one another — yes, those companies will die, but the technology itself is unlikely to stop progressing.
@volemo Sure, the open-weight models will still exist, but the really large models are the only ones that are actually useful to do anything with, and they take several thousands of dollars of GPUs to run, which just isn't financially viable. The only reason people or even companies can afford to use them right now is because they are getting access so cheap. Cut out those subsidies and they aren't really usable anymore.
@fionafokus even setting aside all the issues other than technical ones, i wouldn't use them, simply because i believe a developer's tools should be reimplementable by a single person, which LLMs very much aren't
The Chalkeaters' "It Just Works" came on as I was reading this and I can't help but find it fitting.

@fionafokus There was a series of sci-fi books I read when I was a teenager, I think Wil McCarthy's Collapsium series. In it, there are two big things: the first is a super-dense material called collapsium which is incredibly useful for many things, and the second is that we figured out how to fax in 3D well enough to send living beings over the wire.

You absolutely die when you fax yourself somewhere. 100%, you are murdered. And some data is transmitted, and then someone who is indistinguishable from the original you comes out the other side somewhere.

This was a society that took information security very seriously.

Not seriously enough in the end. But that's inevitability for you.

Anyway, literally everybody used the faxes to get around casually, because why fly when you can just walk? And I think that's so goddamn true, and we're absolutely seeing it here with AI. Folk will always take the easy path en masse. 🤦

@fionafokus i’ve refrained from making posts on the matter (I always feel like the only thing I can do with my style of narration, on a topic like this, is be negative and sad) but this is really well put. the part about different professions experiencing what we do right now sent chills down my spine. I now feel like I finally understand those old-timey cornershops that still do things the old way…
@fionafokus Complaining about the quality is a more solid argument in mixed circles; the other, fundamental issues touch on topics where too many people fail to agree on reality. So it makes some pragmatic sense.

@fionafokus
I agree, but I think this can be said more to-the-point:

Don't outsource your thinking to a SaaS, because that gives the company behind the SaaS the power to control how you think and when you're allowed to think.

@fionafokus
there are also other reasons to stay away from LLMs:

They cannot add more information to their output than was present in the prompt, so they add noise. The details they add are meaningless; they don't communicate anything. Which means:

- using LLMs makes you dumber, because you learn to ignore details, and because you unlearn how to decide about details

- sending LLM output to anyone is insulting, because you imply they don't deserve meaningful details, or a succinct message

1/

@fionafokus

On top of that:

- Computers were meant to be logical, help humans with things we're naturally bad at, such as repetitive tasks, consistency, and mathematical rigor. Making computers imitate human fallibility through LLMs goes against the very point of having computers.

- LLM companies are selling their services at a loss, which means they will jack up prices when the investors demand ROI, and then it may be too late to learn how to live without an LLM

@fionafokus

I worked ~5 years with a language that has a built-in read/eval/print loop (REPL).

Having a REPL is an astonishing boost to productivity; you can interactively stay in the Curious And Productive zone for hours at a time, without the interruptions inherent to the compile/unit-test cycle.

Vibe Coding feels like trying to achieve the same thing, but in a much more clumsy and error prone way.

Just give the people REPLs and watch AI 'assistance' wither away

@benh @fionafokus well said. I would add context-sensitive help to this. You tab through the completions and see the docs straight away. Whenever I tried working with a copilot, it regurgitated stuff from the web and I had to hit the docs eventually, because a lot of it was wrong BS. All it did was slow me down and wear me out detecting BS. Conclusion: a REPL plus inline/context-sensitive help is pure bliss, less fatiguing and faster than AI or even googling.
@fionafokus well expressed, thank you!

@fionafokus This post is really cool. As hackers and techies we might remember that our professions are a primary means of control. Code has become law, and one side of the AI craze is about its creation.

One thing that we might do, is to demonstrate that power; it's how we reached the masses about online control.

@fionafokus thank you!
"I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles."

This is the part I struggle with: how people who are smart enough to know how these things are built, and all the problems therein, are still enthusiastic about using them.

@fionafokus I tend to link this directly to https://en.wikipedia.org/wiki/The_Dictator%27s_Handbook, which is the popular-science version of a 600-something-page academic work. It very clearly explains the mechanics by which tyrannies and democracies form, and it comes down to resources.

If you have natural resources, you need a small percentage of the population more or less directly involved in extracting them to stay in power. If you don't, then you need to run a service economy.

A service economy depends on a...

1/2


@fionafokus ... large, healthy, well-fed, well-educated population, which tends to demand a say in how things go.

AI is a deliberate attempt to "hack" this, and establish an artificial resource in lieu of natural ones. The goal is the same, that you need fewer people to stay in power.

AI - as it is shilled - is a strategic weapon for establishing tyrannies. And while not every tyranny must be fascist, every fascist is a tyrant.

It's no stretch to state that AI is a fascist project.

2/2

@fionafokus I feel this so much. All of it. 🥴😵‍💫 Thanks for writing it down!

@fionafokus

"I personally don’t touch LLMs with a stick. I don’t let them near my brain."

I am constantly baffled how people around me always claim "It's just a tool" without recognizing what it does to their brains.

Thanks for the great text.

@fionafokus https://web.archive.org/web/20241114041431/https://cohost.org/vectorpoem/post/5421000-please-avoid-getting In this old post of mine I was kinda making the point that we should center our critiques on labor and social harms, as you say, but bolster that wherever possible with evidence-based arguments that, so often, the products just don't work, and likely won't ever work no matter how much more capital and compute and un[der]compensated human labor they throw at it.
please avoid getting played by tech companies' claims that their dys/nonfunctional tech "will be so much more advanced in [timeframe]"

To be clear the most critical arguments, against the dogshit they're putting into the world right now, have and must continue to center on harms to people and labor power. But I think it's still really important to point out, every single time, when their shit just doesn't do what they claim it does, especially when it has no clear path to doing so in the future. Tech capitalists are used to not being called on this.

Most of the media is eating out of their hands, they've developed these rhetorical reflexes that we've all come to recognize from years of uncritical coverage: "Soon, this could be everywhere", "Right now it can only do X, but you can easily imagine in a year or two's time...", "It may not be ready for prime time yet, but...", etc. And the thing is, tech capitalists ultimately don't even care whether or not what they're selling does what they claim it can. But calling bullshit hurts their sales pitch - and with enough people doing it loudly and well enough, it can truly shift the rhetorical power balance in a given situation.

The recent Amazon "actually just a bunch of exploited workers" potemkin AI [https://reallifemag.com/potemkin-ai] store bullshit was apparently aiming for only 50 out of every 1000 transactions needing human intervention. They didn't get below 700/1000 [https://arstechnica.com/gadgets/2024/04/amazon-ends-ai-powered-store-checkout-which-needed-1000-video-reviewers/]. That shit was never going to happen, it was a pure fantasy by some executives giddy with the profits they projected if they could cut workers even closer to the bone, and roll that out as the Future of Retail, everywhere. But you know right up until they bailed, they were out there pointing to The Numbers (fudging them as needed, as one can always do with numbers when ethics are of no concern) and being like "see? our very sophisticated very cool Machine Learning is getting better and better at detecting stuff. why, in just a few years, it'll be pretty close to perfect!" (do not use this as a drinking game phrase. you will die.)

So I think it's very important to bolster our central arguments - that this is a power grab for the future of humanity, perpetrated by capitalists who are wielding tech to exploit and control us - with the plain truth of technical critique, which is that in a vast majority of cases they are making wild extraordinary technical claims that do not hold up to scrutiny and that they are presenting without sufficient evidence. If you are a person knowledgeable in technical matters, this is a good use of the authority society has pretty much automatically granted you. Be rigorous, of course: ask for proof, point out flaws and discrepancies, distinguish marketing from reality. And be comfortable with the inherent ambiguities of forecasting: don't bother making a specific counter-claim unless you're nearly certain of it. The most important thing is to displace a tech capitalist's claim as the sole word on the matter.

In a better world, people would default to doubting every single word out of these companies' mouths. This edge of the Overton window already has a nice handle on it. The biggest reason I think the past few hype waves have swept up so many people and had such far-reaching negative impacts is that the tech industry has secured this implacable position in the public mind, creating self-amplifying cycles of both positive (the yearly PR rituals, product launches etc) and negative reinforcement. Most media people still live in absolute terror of being the next "guy in 2007 who said the iPhone was going to flop" - the world they operate in means they will never be punished for being too credulous but punished severely for not being credulous enough (ie being critical). Tech has seized the entire territory marked "The Future" in the popular consciousness, and they've shown us very clearly what they intend to do with it.

They're going to be capitalists, they're going to make our lives as precarious and powerless and miserable as possible as they further concentrate all wealth and power. Fighting to reclaim the future from them will be a generations-spanning project, and we have to get good at firing every effective weapon we have. The general public will pick up on this, the "techlash" is getting more and more mainstream every year. We can win this, but we have to go for the throat.


@fionafokus At least in business contexts, 'AI' seems to be dangerously well suited to exacerbating existing dysfunctions.

If you make it faster to pretend to write something that isn't worth writing, and faster to pretend to read something that isn't worth reading, you cut the incentive to ask why you are doing this at all.

If you create the impression that actual writing or reading is now faster, the stuff that actually is worth doing ends up handled by the tools you use for not-doing. Rot from both ends.

@fionafokus Fucking YEAH. Great article. I'm glad I'm not the only one going "I don't /care/ if it's any good, it's fundamentally wrong".

Hard disagree on your footnote about "spend less time on social media" though. Here is where we live, here is where our friends are. (Corporate social media though, I get staying off of that.)

@fionafokus James Burke also advanced this thesis, that tools shape who we are, in The Axemaker's Gift.

I look forward to reading this.

GenAI is definitely shaping us in ways we can't anticipate and in ways I think we don't want.

https://www.penguinrandomhouse.com/books/349717/the-axemakers-gift-by-james-burke/

@fionafokus thank you for this post. every day i feel so crazy that these tools are being normalized. i never want to have to convince my computer to do work via prompts. i predict a lot of ppl will regret their decisions to build workflows around these tools when llms are no longer fully subsidized and the true price is revealed.

@fionafokus

Good stuff

AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists. Enormous physical infrastructure designed to convert capital into power, and back into capital. Those who control the infrastructure, control the people subject to it.
AI systems being egregiously resource intensive is not a side effect — it’s the point.
Craft, expression, and skilled labor are what produce value, and that gives us control over ourselves. In order to further centralize power, craft and expression need to be destroyed. And they sure are trying.

@fionafokus Excellent ideas. Your post took me back to my mid-teen years. I was a precocious reader and found Marshall McLuhan's books in my local university library. His thesis that technology is an extension of our being was to me, a revolutionary perspective. You have succinctly placed the new AI technology into that continuum. We cannot foresee all ramifications as they are too numerous to know. Your assessment of the control and creation dynamic are perceptive and a warning to all.

@fionafokus Thanks for writing this!

On one hand it's sad to see so many skilled developers and hackers who cave in to peer pressure. On the other hand, my impression is they often seem to have been forced by their bosses at work, which then spills over into private life too.

Let's make worker owned #tech #cooperatives more common and it will be easier to keep control of the tools.

@fionafokus what I like best in your post is the conclusion/starting points. In my opinion, AI is not going to disappear and we all know its dangers... And attractiveness.
What we need now is solutions.
@fionafokus brilliant article! thanks for writing this, sums up my thoughts pretty nicely too... I think calling these statistical methods "AI" was the big money-making selling point, and sadly people just want to believe that
@fionafokus This is a truly excellent piece. Thank you for sharing it with us 💚