I find it disillusioning to see the casual use of "AI" slowly creeping into our hacker circles. Most of the discussions about AI focus on the quality of its output. I think we're not doing a good job communicating its more fundamental dangers.

In this blog post I write about how tools shape who we are and why the resource intensiveness of AI is ingrained in its purpose. About the devaluation of skills, and power cycles.

Let me know what you think.

https://fokus.cool/2025/11/25/i-dont-care-how-well-your-ai-works.html


@fionafokus
I agree, but I think this can be put more to the point:

Don't outsource your thinking to a SaaS, because that gives the company behind the SaaS the power to control how you think and when you're allowed to think.

@fionafokus
There are also other reasons to stay away from LLMs:

They cannot add more information to their output than was present in the prompt, so they add noise. The details they add are meaningless; they don't communicate anything. Which means:

- using LLMs makes you dumber, because you learn to ignore details, and because you unlearn how to decide on details

- sending LLM output to anyone is insulting, because you imply they don't deserve meaningful details, or a succinct message

1/

@fionafokus

On top of that:

- Computers were meant to be logical, to help humans with things we're naturally bad at, such as repetitive tasks, consistency, and mathematical rigor. Making computers imitate human fallibility through LLMs goes against the very point of having computers.

- LLM companies are selling their services at a loss, which means they will jack up prices once their investors demand ROI, and by then it may be too late to learn how to live without an LLM