lmao 🤡

@nixCraft
Every time we invent something, there's a group of people who get left behind, continuing to do things the way we used to... If you look at LLMs and think it's a joke, you are a fool.

Do you think these tools are gonna get worse over time or better? Learn it now or other people will and you'll be left with all your morals and no work.

AI slop comes from sloppy people. If your AI work is sloppy, that's your own laziness. Don't blame the machine.

@TheTearMiser @nixCraft Yeah yeah, go sell your tokens somewhere else.

@doragasu
Funny how no one seems able to actually have an intellectual conversation about one of the largest inventions in modern history without it devolving into insults or nonsense like this.

I'm not selling for anyone. I don't suggest any corporate AI you have to pay for. You can run FOSS AI locally for the cost of the equipment, and you can tool it to trust anything YOU want it to trust.

But no, reject the new thing. That has always worked out well historically... right?
@nixCraft

@TheTearMiser @doragasu @nixCraft Maybe it should have worked out, but we are in a polycrisis, including a climate catastrophe and a pandemic, because of the "inevitable progress of modern history". The issue is that you're taking a standpoint of inevitability without moral concern, when this is a moral discussion to most of the people on the other side. And all you're doing by being glib with "But no, reject the new thing" is refusing to engage with valid criticism and pretending it doesn't exist. You're the one refusing to engage here, not us.

@GLaDTheresCake @doragasu @nixCraft

A real response I love it. Ok...

In what way is it actually even a moral decision? Maybe if you helped me understand where your morals come into this, I could see your side.

I don't see the connection between a new type of typewriter and the state of the world being caused by mega-corporations, or a pandemic that was fuelled by political motivations.

@TheTearMiser @GLaDTheresCake @nixCraft I don't like arguing with people who have drunk so much "Kool-Aid", because they present sensational arguments without proof (like this inevitability argument of yours; how inevitable is the AI revolution, as inevitable as Sora, maybe?) and tend to dismiss facts they don't like. But since you asked, here we go 🤷‍♂️. Let's first discuss the big elephant in the room for many of us actively rejecting things like Twitter and coming to Mastodon. Ethics. (1/?)
@TheTearMiser @GLaDTheresCake @nixCraft There is no ethical use case for Generative AI. People blaming capitalism for the wrongdoings of GenAI are funny, because this is a tech that can only work in capital-heavy scenarios: you cannot decouple GenAI and capitalism. Why? Because for these models to be somehow effective at what they do, they need to consume tons of resources (energy, chips, raw materials) and data, most of it copyrighted/licensed. (2/?)
@TheTearMiser @GLaDTheresCake @nixCraft They are built on top of massive copyright violation and license infringement. And I include code licensing issues, because, for example, if you train on freely available GPL code and the model spits some of it into a non-GPL repo, you are infringing the license. (3/?)
@TheTearMiser @GLaDTheresCake @nixCraft And this happens more than most people think (https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/). Can you train your own model locally (and with patience) using a $7000+ RTX 6000 and non-copyrighted/licensed works? Sure, but first, $7000+ is not cheap, and more importantly, good luck matching the performance of any of these "frontier" models with your toy. People defending this kind of use case and pretending the resulting model is comparable to Claude are delusional. (4/?)
@TheTearMiser @GLaDTheresCake @nixCraft In the end, this means that only a handful of companies can train frontier models, and to do so they require massive hoarding of semiconductors, energy and raw materials, causing scarcity of GPUs and RAM and driving up raw material and energy prices. They are greatly accelerating the climate collapse (with most countries backing off their emissions commitments as soon as the GenAI boom happened) and economic collapse. (5/?)
@TheTearMiser @GLaDTheresCake @nixCraft
Setting ethics aside, about "inevitability": while I think GenAI is here to stay, I think its use cases are very, very limited. First, it will not get much better unless someone makes a breakthrough discovery, and almost nobody is working on that; everybody is just scaling, scaling and scaling. And scaling gives you diminishing returns; it will not make models much better. (6/?)
@TheTearMiser @GLaDTheresCake @nixCraft Also, GenAI companies have run out of training material, and Internet pollution with LLM-generated content is a growing problem (read about "LLM model collapse"). Also very worrying: companies offering GenAI services are losing money by the billions, and there is no road to profitability (read someone who actually does the numbers, like Ed Zitron). (7/?)
@TheTearMiser @GLaDTheresCake @nixCraft Companies are heavily subsidizing GenAI, but at some point they will have to crank up prices **a whole lot** to make money. And even subsidized, it is expensive. I have a friend at a software company where, like at many others, GenAI usage was made mandatory two years ago. Now they have just published an internal memo asking devs to try not to spend more than $500/month... (8/?)
@TheTearMiser @GLaDTheresCake @nixCraft What do you think will happen when companies ask what these services really cost and $500 buys a lot less? Yeah, I see you thinking "but prices will go down". And while there is some truth to that, the reality is that these services get more expensive by the day: the cost per token is going down and will likely continue that trend, but the number of tokens required grows exponentially. (9/?)

@TheTearMiser @GLaDTheresCake @nixCraft (LRMs burn many more tokens, coding models require much bigger context windows, etc.).
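To put rough numbers on that last point (all figures here are made-up for illustration, not real pricing): even if the per-token price keeps falling, the cost *per task* rises whenever token consumption grows faster than prices drop.

```python
# Toy illustration with assumed numbers: per-token price falls
# 30% per year, but tokens consumed per task double each year
# (reasoning models, bigger context windows). Net cost per task
# still goes UP every year.

price_per_mtok = 10.0   # $ per million tokens, year 0 (assumed)
tokens_per_task = 0.05  # million tokens per task, year 0 (assumed)

for year in range(4):
    cost = price_per_mtok * tokens_per_task
    print(f"year {year}: ${cost:.2f} per task")
    price_per_mtok *= 0.7   # price drops 30%/year
    tokens_per_task *= 2.0  # token usage doubles/year

# year 0: $0.50 per task
# year 1: $0.70 per task
# year 2: $0.98 per task
# year 3: $1.37 per task
```

A 30% price drop against a 2x usage increase nets out to a 40% cost increase per year; the exact figures are hypothetical, but the direction is the point.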

But the funniest argument is "you should learn to use this or you will be left behind". It's aimed at causing FOMO, but it's a really dumb argument when you think about it. These tools are designed to be easy to use; nobody would use them otherwise. (10/?)

@TheTearMiser @GLaDTheresCake @nixCraft Do you really think a proficient coder will have trouble catching up with these tools when they decide to, or are forced to? Also, the more you use GenAI, the more your own skills degrade, so I would be wary of using it too much; there are studies suggesting LLM usage causes cognitive decline. (11/?)
@TheTearMiser @GLaDTheresCake @nixCraft There are also people stating you don't need to learn coding because the LLM will do it. But who do you think will code more efficiently with, say, Claude: someone who is an expert at prompting but knows zero about coding, or someone who is still learning to prompt but is skilled at coding? And that's assuming these tools really increase productivity at all. (12/?)

@TheTearMiser @GLaDTheresCake @nixCraft All the rigorous studies I have seen that try to measure the productivity of proficient users with these tools show very modest increases, if any. And at a high cost (for example, in coding you gain a lot less understanding of the code base, so you are generating technical debt).

So here you have your "real answer". Enjoy. Or just tell ChatGPT to summarize it, I don't care 🤷‍♂️ (13/13)