Machine translations are often brought up as a gotcha whenever I criticize LLMs. It's worth pointing out two things: machine translation existed decades before LLMs, and yes, machine translations are useful. However: I would never in my life read a machine translated book. Understanding what a social media post is talking about in rough terms? Sure. Literature? Absolutely not. Hell, have you ever seen machine translated subtitles? They're absolute garbage.
I have the impression that primarily anglophone people don't read as much translated literature, because so much good literature already exists in their language, so this issue may not be as familiar within that demographic. As someone who did not grow up anglophone, I can tell you there is a world of difference between a good and a bad translation even when done by humans. Machine translations are not even on that scale.
From what I've observed, people who claim that LLMs can replace artists don't understand art, people who claim that they can replace musicians don't understand music, people who claim that they can replace writers don't understand literature, and people who claim they can replace translators don't rely on translations. If I had a button that would erase LLMs from the world but would also take machine translations away (which is a false dichotomy anyway), I would absolutely still press it.
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (If you're going to argue that it's not everywhere, where would you rather live?) We could just not do LLMs. It's allowed.
@Gargron It is a technology that humanity has been seeking for a long time. At least since the 1950s, with Turing and his colleagues.
@df No, this is marketing. OpenAI, Google, Anthropic & co. want you to believe that what they're doing is artificial intelligence. My professional opinion is that LLMs are a dead-end technology on the path to actual intelligence. And if any of those companies did create actual intelligence for the purposes they pursue, it would be slavery, for which I cannot advocate.
@Gargron LLMs are not exclusively a product of large corporations or just marketing. Much of the research and development also takes place in open source and academic communities. The code for these LLMs is public and can be audited or run locally. Furthermore, I argue that serious ethical reflection is necessary, but prohibition is not the way forward.
@df
Consciously not using something ≠ prohibition
Edit: Also, who cares who envisioned, worked, or works on this now? If you think about LLMs long enough, you will likely find plenty of good arguments against them: the resource waste, the centralization of power, the multiplication of slop. We lived without them before and we can live without them in the future.
@df @Gargron Academics may study LLMs out in the open, but I don't think academia has been able to produce LLMs whose outputs are sufficiently marketable compared to the current commercially available ones. Because the first "L" ("large") is - in our current, limited understanding - crucial for the verisimilitude of the synthetic text, and only corporations (and governments, but they mostly haven't gotten to this yet) have the scale to get large enough for that so far.

@joshuagrochow and the lack of moral compass or publicly stated ethical standards that would allow university employees to steal large enough sets. small sets of text are read and understood by humans who can, far more efficiently, apply appropriate prior written and other formats of source material to a specific use case.

programming a calculator only makes reasonable sense if the computation requires enough repetition to warrant the resources used in building it, or it's a closed set without novelty... like, for example, a numerical calculator. ; )

edited for typos and clarity: it was killing me, apologies for the notification disruption.

@df @Gargron

The code for the LLM interpreter is relatively simple, and bears the same relationship to the actual LLM as the C compiler does to an operating system. The models are the real software and the ones big and complex enough to be useful are the product of large corporations and mass copyright violation.

@Gargron they'll never create intelligence because intelligence requires will and they do not understand will. they don't even possess one of their own: their behaviour is driven by feelings and shaped by a commercial playbook. there is zero chance they will ever create intelligence.

@Gargron @df

> My professional opinion is that LLMs are a dead end technology to creating actual intelligence.

Also they're sucking all the oxygen out of the room and choking off any research that might NOT be a dead end.

@Gargron @df
Yes. The flowchart has three boxes:

1. Create LLM
2. Then a miracle occurs
3. Profit from AGI !!!

The companies pushing so-called "AI" have completed step 1. Some of them try to tell us that they've nearly got a handle on step 2, but that's just an attempt to swindle more investors. There is literally NOTHING that fits in the hole of step 2.

@df @Gargron

Transformers are neural networks.

LLMs are transformers wrapped in some Python scripting.

Every neural network can be accurately represented as an Excel sheet, even if it ends up having billions of cells.

Since it's just addition and multiplication, the model is fully deterministic. Same input, same output. Not intelligent.

It's Python code that does probabilistic sampling of the output. It's just a few lines of well-understood math plus a dice roll. Again, not intelligent.
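The "few lines of well-understood math plus a dice roll" can be sketched as follows. This is an illustrative toy, not code from any real model; the vocabulary and logits here are made up:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution (the math part)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(vocab, logits, temperature=1.0, rng=random):
    """Pick the next token: a weighted dice roll over the distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["cat", "dog", "fish"]   # toy vocabulary (made up)
logits = [2.0, 1.0, 0.1]         # what a deterministic forward pass might emit

# The forward pass always yields the same logits for the same input;
# only this sampling step rolls the dice.
print(sample_token(vocab, logits, temperature=0.8))
```

As the temperature approaches zero, the dice roll degenerates into always picking the highest-scoring token, making the whole pipeline fully deterministic.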

@df @Gargron To be clear, "Python" is a placeholder language; it could just as well be Rust or a GPU shader, and it changes nothing.

@patrys @df @Gargron does determinism imply non-intelligence?
If you hooked up the computer to a Geiger counter for true random noise and used that to modulate the output, would that have any bearing on its intelligence?

Or from the other side, what makes you think our brains are non deterministic, and why does that make us more intelligent than if the exact same history and sense-data always produced the same response?

@FishFace @df @Gargron If it’s deterministic, it can be unrolled into a giant lookup table. Did we kill phone books because they were on the verge of achieving AGI?

To me, intelligence implies a lot of things, like being able to form higher-order abstractions, learn, and thus remember things (no, being passed your "memories" as part of every prompt does not count). It also implies being curious.

@patrys @df @Gargron given that the lookup table would generally be infinite, I don't even see what that would have to do with anything. What about the Geiger counter?

I don't think those things are really needed for human-like intelligence, and something like curiosity can easily be simulated by a rules-based system.

@FishFace @df @Gargron No, you got it wrong. The model itself can be unrolled into a finite lookup table. The only random part is which word you take from the few options in the resulting row.
@patrys a computable function can generally produce infinitely many different outputs. You're still not saying why a non-deterministic part affects intelligence.
@FishFace It generates one token at a time, which makes it impossible to formulate higher-order abstractions that are not already baked into the weight matrix. I said it in another answer, not being able to learn disqualifies it as intelligence.

@patrys LLMs are intelligent only in the sense of pattern recognition; that is, they possess logical intelligence. However, some psychologists argue that there are multiple intelligences that cannot be reduced to logic, nor are LLMs capable of possessing them. See psychologist Howard Gardner.

@df @FishFace @Gargron This pattern recognition is an artifact of the training process, not something that occurs at inference time. It’s like having termites dig some tunnels in an earth mound, then removing the termites, pouring aluminum into the mound, and attributing the resulting intricate shapes to the intelligence of the mound. The patterns it carries are from human artifacts used as input for the model before its weights settled.

@patrys Undoubtedly, LLMs in this regard end up being mirrors of who we are, reflecting our biases, our prejudices, and our worldviews. That is why they are not innocuous tools and why #ethics and regulation of #AI are necessary.

@FishFace @patrys @df @Gargron

"Or from the other side, what makes you think our brains are non deterministic"

Us having free will/being non-deterministic is pretty much the base assumption we all operate on to even be able to function as humans. That of course doesn't mean that it's automatically true, but it makes the question of why do you think your brain is non-deterministic a no-brainer to answer: because we can't help but perceive ourselves as such.

@frog_reborn @FishFace @df @Gargron The very fact that you can read and mid-sentence learn something that changes your perception of the world means that you have brain plasticity that no neural network possesses. It’s deterministic AND rigid because training and inference happen separately.
@patrys you're talking about differences between brains and neural networks that exist, but still not arguing the philosophical point about why that is relevant to intelligence.
@df @Gargron Turing did not dream of spending the entire energy budget of the world at the time so people could generate letters from a few bullet points and the recipients could summarise them to different bullet points.

@Gargron

LLMs are Shannon 1948 as far as the theory goes (building on Markov, but adding computer technology). With some compression techniques.

But I think you're talking about something else entirely, not purely syntactical.
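The Shannon/Markov lineage mentioned above can be illustrated with a toy bigram model; the corpus here is made up, and the point is only that next-word prediction is purely syntactic:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Bigram table: each word maps to the words observed to follow it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, rng=random):
    """Emit up to n words by repeatedly sampling an observed successor."""
    word, out = start, [start]
    for _ in range(n - 1):
        options = follows.get(word)
        if not options:
            break  # dead end: no observed successor
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 8, random.Random(0)))
```

An LLM performs the same kind of next-token prediction, just with a vastly larger context window and a learned rather than counted probability table.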

@df @Gargron

A small section of humanity. Not everyone.

@df @Gargron No, it is a fake, an emulation of what we have been seeking but not the real thing.
@Gargron while all your examples are 100% valid, I seriously question whether we would manage to do that today. With the utter shambles most democracies are currently in, multinational corporations can run roughshod over environmental protection, worker safety, child protection, and just about everything past generations fought hard for.

@DJGummikuh

imagine for a moment, the billionaires have been beheaded and the yachts sunk into the sea. the value in the output of workers 100% reinvested into local communities. all of it. none for colonial masters far away. the 20 hour work weeks and all human workers hands full of the satisfaction their efforts are meaningful... no more busy work for shareholders to skim value out of. only meaningful work. custom artisanal everything. housewares repaired by local handicrafters. clothes sewn and tailored to each body. homes and townhomes and communal living spaces built and maintained by cooperative owners. neighboring towns and regions and nations translating with loving care between the communities of meaning... interconnected with care. 💜

@Gargron

@melioristicmarie @DJGummikuh @Gargron That's a dreamy vision. Thank you. I love it.
@mason
hanging on to the hope we can survive as a species and get to the good stuff of loving each other up, bigly ;)
@DJGummikuh @Gargron

@melioristicmarie

And that lasts 1-2 generations before new people who don't understand the problems that led their parents to create the paradise chafe under its constraints and begin changing the system into something its originators wouldn't like, thus creating conflict, diversity of thought, and continuing the cycle of history.

See: reality.

@TheServitor hm. so you do not believe in evolution then?

you ignore the myriad plants and beasties with behavior change documented in the geological record? or the changes in human written language documented over millennia? or dr. martin luther king's statement on the arc of the moral universe? or even the historical shifts in graphic novels and comic book tales over 80 or so years?

weird.

nihilism has never had much sway with me. i accept humans believe such a framework applies beyond the self and assert omniscient determinism universally. however, i do not abide such a concept as the universal human. we are varied beasties and i have faith in evolution of arrangements of living matter as well as of patterned practices.

see: the data.

@TheServitor of course, the u.s. and israeli "leadership" could destroy the species in the next week and whatever survives the nuclear fallout would have a very different future. humans are weird.
@Gargron But it seems that LLMs are here to stay. This time, it doesn't seem to be just a passing fad. There is a lot of investment involved.

@df

Just because a bunch of drug addicts dump all their money (and that of others) into drugs doesn't make them inevitable/good/useful... 🤷‍♂️

Latest example: NFTs

@ePD5qRxX

thanks for the inspiration - i somehow like the idea of calculating how much weed we could buy for everyone from the money burnt on AI…

@df

@df @Gargron let me introduce you to this very fine tulip bulb, it is very good. There is a lot of investment involved
@Taco_lad @df @Gargron Have you seen how much money there is in Beanie Babies?
@df @Gargron but it seems like Asbestos is here to stay. This time, it doesn’t seem to be a passing fad. There is a lot of investment involved.
@trisweb @df @Gargron The same money going round and round and round. Mostly it's propping up Nvidia.

@trisweb The asbestos analogy doesn’t really hold. Asbestos is an inherently harmful material whose risk inevitably increases with use. #LLMs are computational tools: their impact depends on how they are developed, applied, and regulated. That’s why the key issue isn’t the existence of the technology, but the #ethics around its development and use, namely transparency, accountability, risk mitigation, and public oversight.

@Gargron

@df @Gargron alright, well, let's review:

* literally no one likes it, not even the normies who do not care about any of the myriad ethical issues surrounding it
* a bunch of very rich people dropped an unprecedented amount of cash to make it happen and now, in their desperation for that investment to pay off, are trying VERY hard to gaslight people into thinking they like it

sounds inevitable to me 👍

@df @Gargron That or the people who invested find out that it's not a profitable venture, no matter how much they try to force the issue.
@ainmosni @df @Gargron my take is: investors will figure out it's too expensive to be a viable business, so big AI providers will fail, especially those trying to achieve "general knowledge" AI like OpenAI.

The focus will then shift to small models and to integrating them on-device for AI assistance. Recent models, like Gwen 3.5-9b, already show promising results and performance locally.

The question is who will invest in training small models to deploy on-device and will those models be open sourced? I hope they will.

@kevin @df @Gargron small models are well and good and hopefully will be focused on actually useful things, as I'm personally still not convinced that LLMs are really that useful at all, and they are taking the wind out of the sails of other AI avenues that have been very useful, things that we would classify as machine learning.

But if we want general models... those might just take too many resources to build and I honestly think society will be better off with no new ones of those anyway, while letting stuff like ollama collect enough bitrot that it loses most of its damaging potential.

@kevin @df @Gargron Note that with useful I mean "something we couldn't have done without LLMs".
@ainmosni @df @Gargron I agree. Focusing on machine learning would be a better way of spending all that money, and I sincerely hope the LLM market crashes to make space for _real_ ai products and companies trying to solve problems
@df @Gargron does anyone recall the 2 years where you could not purchase a TV that didn't have 3D? There was a lot of investment. When any AI provider is actually turning an operating profit, come talk to me about it.
@df@s.dfaria.eu @Gargron Investment by people who only four years ago were telling us blockchain was inevitable and here to stay, and who have been telling us fully autonomous self-driving cars were right around the corner for twenty years. At some point you have to pull your head out of the fog and recognize that all of this is nothing more than marketing. Of course they want to convince you that this time it's different, this time X or Y is here to stay---because they become very wealthy if most of us believe that. That doesn't make it true.
@abucci The investment I was referring to was not the recent venture capital trend. What I had in mind was investment in artificial intelligence research, which has been a major academic and scientific endeavor since the 1950s, with pioneers such as Alan Turing laying the groundwork.
@abucci Current models are part of a long history of research in machine learning, computer science, and computational linguistics. Of course, there is marketing around it, but reducing the entire field to a passing marketing fad ignores decades of serious scientific work.
@df@s.dfaria.eu @abucci I have a PhD in computer science and did research under the umbrella of AI. I have a decent sense of the state of the art. I stand by my earlier post. You are correct that there is intellectual investment in AI as well, but my own view is that a bunch of the so-called research is in fact fake, or if not fake then of dubious quality. Meanwhile, all the other diverse subareas of computer science continue just fine without LLMs and are able to produce results with provable guarantees, something current LLMs cannot do and may never be able to do. So, again, there is a fog of hype here that we need to pull our heads out of to see clearly.

Regarding my claim that some of the research is fake, arXiv recently stopped accepting submissions to their computer science category because it was being overwhelmed with slop submissions and what amount to corporate whitepapers that would never hold up if submitted to a proper scientific journal. Nature Publishing Group, a formerly prestigious scientific publisher, has been horrible about promoting low-quality corporate marketing that pretends to be science. The money and marketing penetrates there too.
@abucci Right, I see your point. But what solution are you proposing? Banning the use of AI? Surely there must be other ways? We could try to educate people on the ethical and responsible use of AI...
@df@s.dfaria.eu Demanding that a person analyzing a situation should also immediately provide a solution does not make sense. Why are you making this demand? I am not a policymaker, nor a dictator. We should make this decision collectively, in a way that's fair and reasonable, while taking full account of the facts as we know them. One way of taking full account of the facts---and therefore making better decisions---is clearing away hype, mania, illusion, con artistry, etc., which is something I attempt to do.

What's the problem with banning things? In the US we've banned asbestos. We've banned smoking cigarettes in certain locations. We ban dangerous things all the time. We ban unethical things too, for instance Ponzi schemes (in theory). If AI is irredeemably bad, why shouldn't we ban it? I don't know that it is, but why preemptively take that option off the table?

In any case, if you're interested in "ethical use of AI", how do you suggest it is possible to ethically use this technology? It's been built on stolen material and the labor of underpaid content taggers who now have PTSD from their work, and is repeatedly being promoted with lies. How is it ethical to use a technology that is causing people's electric bills to double or triple to pay for data centers, and that is causing water crises in an increasing number of towns across the US? Among other deep issues, such as being implicated in numerous assaults, murders, and suicides. As far as I can tell, as it's currently constituted (generative/LLM-based) AI is an ugly and destructive technology down to its very core, and it's hard to see how it can be used ethically unless one perverts the meaning of the word "ethics" so far that it becomes meaningless.

@abucci I’m not making any demands. What I’m pushing back against is the tone of this recent wave of Mastodon posts, which seem to jump very quickly to blanket prohibition as the supposed "solution" to AI.

Of course AI has risks and real problems. No serious person denies that. But proposing to simply ban AI is neither realistic nor particularly helpful. AI isn’t a single product or substance that can be neatly removed from society; it’s a broad set of techniques already embedded across science, medicine, infrastructure, and everyday software. Calling to ban it is like trying to stop the wind with your hands.

The asbestos analogy doesn’t hold either. Asbestos is intrinsically harmful: its normal use causes serious health damage. AI is not that kind of thing. Treating them as equivalent is a false analogy. AI is much closer to a tool. Like a knife, it can be used harmfully or constructively. The ethical question is not whether the tool exists, but how it is built, governed, and used.

You’re also presenting claims about "AI" being built on stolen material and exploited labor as if they applied to the entire field. Some of those criticisms are valid in specific cases, especially regarding certain corporate practices. But generalizing them to all AI development ignores the existence of university research, open-source projects, and systems trained on licensed or public datasets.

What’s striking is that your argument completely ignores the positive applications that already exist: AI assisting medical diagnosis, enabling accessibility tools for disabled users, improving translation between languages, accelerating scientific research, helping analyze complex datasets, or supporting education. You may think some of those benefits are overstated, but pretending they don’t exist at all weakens your argument rather than strengthening it.

If we want a serious ethical discussion, the relevant question isn’t "should we ban AI?" but which uses should be restricted or prohibited, and under what rules the rest should operate. That’s precisely the direction policymakers are taking, for example with the European Union AI Act @EUCommission, which regulates AI according to risk levels and bans specific harmful uses rather than treating the entire technology as inherently illegitimate.

@df@s.dfaria.eu
> What I’m pushing back against is the tone of this recent wave of Mastodon posts, which seem to jump very quickly to blanket prohibition as the supposed "solution" to AI.
I made no such statement, and reading "tone" on the internet has been shown over and over again to be nearly impossible. It is quite odd for you to vent your frustrations at me over a perceived pattern on Mastodon, especially on a post of mine that is not exhibiting what you're frustrated about.

> But proposing to simply ban AI is neither realistic nor particularly helpful.
Incorrect on both counts. As I suggested in my previous post, we have successfully banned products before that were found to be unacceptable. What's neither realistic nor helpful is this style of fatalism, arguing that something that's quite possible and has been done before is somehow not possible or realistic anymore.

> AI is much closer to a tool. Like a knife, it can be used harmfully or constructively.
The AI at issue, LLM and image-based generative AI, represents a political project. That political project could be ended. "It's just a tool" is the cover story used by people unwilling to acknowledge the politics of it, or those who have an interest in furthering it without copping to it. It's the same form as the "guns don't kill people, people kill people" argument we hear in the US every time someone shoots up a school, and is fallacious and ahistorical.

> You’re also presenting claims about "AI" being built on stolen material and exploited labor as if they applied to the entire field.
You did catch the fact that I have a background in AI and do not need to be told things about my own field? I qualified one portion of my post with generative AI/LLMs. If you need me to label every single claim with the precise piece of technology I'm referring to I can do that, but it seems ludicrous to me; clearly we are not discussing expert systems or case-based reasoning. Mastodon is not aflutter with posts about banning partial order planning. The discourse is about generative AI/LLMs, so for brevity I left off that qualifier in most places. I feel you're moving the goalposts in your attempt to argue with my non-argument.

> What’s striking is that your argument completely ignores
I am taking this to be trending into bad faith territory. I've made no argument and am not looking for one, but you seem very keen on having one anyway. I've pointed out known issues with generative AI/LLMs, made an aesthetic statement ("AI is ugly"), made a factual statement supported by evidence ("AI is destructive"), and stated I have difficulty seeing how this particular form of AI could be used ethically (a statement about my own shortcomings!). It's fine if you do not care about how I feel about generative AI, but it'd help everyone if you read the words as saying what they were written to say instead of reading something different and seemingly agenda driven into them.

> enabling accessibility tools for disabled users
It is deeply offensive to use disabled people as a pawn in an online dispute, and I will not take part in this. When I said that I thought AI is ugly, one of the many observations that led me to this conclusion is that people frequently say and do ugly things in support of it.

> improving translation between languages
Can be done without the current generation of LLMs or even AI, and better for non-English language pairs.

> accelerating scientific research
AI, and specifically LLMs, do not accelerate scientific research. This is hype. Nor is it desirable to do so, regardless of the method used. "Slow science" is something worth looking into if you haven't before.

> helping analyze complex datasets
Hallucinations harm data analysis.

> supporting education.
Available evidence strongly suggests use of digital technology in the classroom has significantly harmed education (e.g. Jared Horvath's testimony to the US Congress and his other work details these harms, which are serious and widespread). Available evidence also suggests that LLM technology in particular is having an even worse effect on education and learning.

I believe that you are arguing with someone else, not me, and I find the direction you're going with your own arguments to be disturbing, so I am ending this interaction here.

@abucci Banning something like asbestos is not comparable to banning AI. Asbestos is a specific substance with inherently harmful properties. Generative AI and LLMs are techniques used across many different systems and applications, from medical imaging analysis to language technology and scientific data processing. That makes blanket prohibition a fundamentally different, and far more impractical, regulatory problem.

I also don’t think describing AI as a tool is a "cover story." Technologies can absolutely be embedded in political and economic projects, but that does not negate the fact that they are still general-purpose tools with multiple possible uses. Both things can be true at the same time.

On the benefits: pointing to accessibility, translation, or scientific data analysis is not "using disabled people as pawns." These are documented applications that already exist and are used in practice. For example, AI-based assistive technologies are widely used for speech-to-text, captioning, and screen-reader improvements; machine translation systems are used daily by millions of people; and machine learning methods are routinely used in fields like protein structure prediction, medical imaging analysis, and climate modeling.

None of this denies the real problems you raise: copyright disputes, labor conditions in data labeling, environmental costs, or misuse. Those are legitimate policy questions. But acknowledging harms does not logically lead to the conclusion that the entire technological field is inherently unethical or should be banned outright. That’s precisely why current policy discussions, such as the European Union AI Act, focus on risk-based regulation and banning specific harmful uses rather than prohibiting the technology as a whole.