I’m an industry dev turned comp sci prof. I’ve been writing code for 40-some years; my first commercial software was for the Apple ][+. I’ve worked on web, mobile, desktop, server, and data plumbing projects for companies ranging from startups to Fortune 100 to arts nonprofits. In short, I’ve been lucky to know a lot of people in a lot of corners of tech.

@anildash’s opening paragraphs are spot on. If consensus is not 100%, well, it’s 99%.

https://www.anildash.com//2025/10/17/the-majority-ai-view/

The Majority AI View


Anil Dash

There •is• definitely debate about LLMs among folks who have both tech expertise and a level head, and that debate is mostly between:

(1) “If only we could have reasonable conversations about what if anything this tech is actually good for! It could be useful! Curse this hype! Curse this bubble!”

and

(2) “This technology is so toxic we shouldn’t even be considering •any• uses for it”

That’s a •continuum•, and folks (including me) fall in all kinds of complicated places along it. Just note the continuum’s endpoints.

@anildash

Because this is Mastodon, I know lots of people will take the previous post as an invitation to put their opinion in my replies. It was not in fact such an invitation, but…it’s cool. C’est la vie Mastodonique!

What’s really important, per Dash’s OP, is for those of us in tech to get the actual debate among experts into view of the broader public — i.e. not just here — and not let charlatan billionaires dominate the conversation.

@inthehands Perhaps the language of stage magic and mentalist cold readings is appropriate here: it distinguishes the performance played to the audience from the actual content and rhythm of the interactions, and from the use of LLMs as instruments.

I have trouble seeing beyond the terrible synergy of the financial bubble and the motivated reasoning of the boosters to the genuine positive applications and the real costs and limits.

Working with conversational support and workflow agents before the bubble, I could see that there were really good ways to use them, "everybody wins" ways, but a temptation to cheap out on creating the right interaction context for that case.

@inthehands I'd rather talk about the terrible pattern of financial hyperbole cycles and how to break the pattern than the nature of the current darling technology.

I started my software career on the down side of the dot-com boom and this is hard to distinguish from the past hype cycles. Another Force Ten cyclonic finance weather event.

Driven by global financial climate change.

@inthehands Personally, I am quite tired of posts in my feed from people (whom I know to be knowledgeable and intelligent) constantly dissing AI/LLMs.

@inthehands @anildash
OK. I'll bite. What — if anything — is this tech actually good for?

(eta: Not simply trawling, here. But I've yet to hear a coherent answer to "So what are the legitimate applications of LLMs?")

@mikro2nd @inthehands @anildash

IMHO, as pattern synthesis machines, any use case for identifying patterns, predicting, and then prescribing could be tested for effectiveness.

@paninid @inthehands @anildash
I've always thought that pattern recognition and synthesis is the one thing that human beings are really, really good at! (Think "faces in clouds".) Indeed, I suspect that much of the confusion/misapplication around LLMs and their outputs lies precisely in our inbuilt compulsion to find patterns given the most tenuous evidence.

@mikro2nd @inthehands I’m personally anti-LLM, but in a vacuum I think https://goblin.tools/ToDo is a good (tiny) example of them working well:

* nobody cares about copyright infringement for a todo list
* pretty decent adhd accessibility aid
* if it generates something wrong the human will just be like “huh that’s a stupid thing to have on my todo list”

of course, this doesn’t exist in a vacuum, so the societal cost isn’t worth it, but it demonstrates “can do at least one thing” imo


@mikro2nd
LLMs are good for any task that has a lot of fiddly bits but is easily recognized as correct once implemented.

An example is generating code to make detailed, precise plots of data. There are lots of fiddly bits: adjusting the axes, the coloring by groups, the titles, the transformations of the data, and so on. It frequently requires wading deep into the documentation. But if an LLM generates the code, you can easily check visually whether it worked.
@inthehands @anildash
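
A minimal sketch of the kind of fiddly plotting code described above, assuming matplotlib; the dataset, labels, and filename here are invented for illustration. The point is that each line below is a small, documentation-hunting detail to write, but the result is trivial to verify by looking at the saved image:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical data: (x, y) points grouped by label
data = {"A": [(1, 2.0), (2, 4.1), (3, 8.3)],
        "B": [(1, 1.5), (2, 2.9), (3, 6.0)]}

fig, ax = plt.subplots(figsize=(6, 4))
for group, points in data.items():
    xs, ys = zip(*points)
    ax.plot(xs, ys, marker="o", label=group)  # one colored line per group

ax.set_yscale("log")                  # a transformation of the data
ax.set_xlabel("trial")                # axis labels
ax.set_ylabel("response (log scale)")
ax.set_title("Response by group")     # title
ax.legend(title="group")              # legend keyed by group
fig.tight_layout()
fig.savefig("response.png")           # open the image to eyeball correctness
```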

@mikro2nd @inthehands I haven’t been able to keep current on front-end dev, but I like to maintain my personal website. Using an LLM coding tool to assist, I can make updates to my site myself, and know enough to debug anything that’s off. Doing this without these tools would take more time than I have to spend.

@mikro2nd Power company profits. (Not entirely a joke; I'm not very convinced by the theory that fossil fuel interests are actively pushing "AI", but I daresay they're delighted by it).

More seriously, there are alas fields of human endeavour where plausible sounding bullshit is useful. It's just that they tend to be harmful too; "SEO" sleaze, endless Webpages full of slop and affiliate links, cranking out junk papers to send to paper mill journals, trash ebooks, plagiarised art, writing consultancy which gives the answer the client wanted so they won't check the sources anyway...

Another way this is reminiscent of the cryptocurrency bubble, of course - sure, there are uses for it, it's just most of them are criminal, like ransomware.

@inthehands @anildash “If only we could have reasonable conversations” is tough these days.

A friend of mine who works in Philips medical devices described the immense pressure they’re getting from top executives to find ways to apply AI to their products. It’s a horrifying mismatch, doomed to fail or (I fear) worse, and sounds driven entirely by hype/FOMO, conflation of different AI kinds, etc.

No reasonable conversation can be had here without (hopefully benign) failure and cultural change.

@davepeck @inthehands @anildash I sure want this thing to decide how much radiation I get /s
@inthehands @anildash 1,000,000% agree about the spectrum between (1) applicable solutions and (2) a toxic technical and business foundation. So much for foundational models!
@[email protected]

There are so many different technologies within the #AI cap that it's impossible to reason about the whole.

And this over-hyped whole is toxic because of the billionaires who finance its development and its narrative.

The first step to building a less anti-social technology is to get rid of the billionaires in the loop.

To that end, we should replace the anthropomorphic language we use about it, so that it becomes unsellable.

No "artificial intelligence", just statistically programmed software.

No "artificial neural networks" but vector mapping (virtual) machines that can be statistically¹ programmed.

No "training data" but source data.

No "training" but data compilation.

And so on...

With such a conceptual framework, most problems of "ai" disappear:

  • The source data are to the weights what the source code is to an executable binary.
  • The software running the "inference" is just a virtual machine with a custom architecture, running software expressed as matrices of floats.
  • The weights are indeed an executable lossy compression of the source data.
  • In the case of #LLM, the software extracts and patches together excerpts of such an archive, full of decompression artifacts.
  • The "data scientists" are just programmers who, instead of writing code to get an x86_64 executable, collect and select the source data. And just as with the source code a classic programmer uses, they need the rights or permission to use the source data.
  • Because of the loss and randomization during source data compilation, the inference output has no intrinsic meaning, even when it's optimized to fool a human into thinking it has one.
  • In such output there are no hallucinations, ever: it has no meaning, thus it's neither right nor wrong.
  • Those publishing LLM outputs on their websites or distributing them through an API (#OpenAI and friends) are plainly violating the copyrights of the authors whose texts were included in the source data, because the model is just a float-vector-encoded lossy archive of such texts.
Get rid of the anthropomorphism, and you really get a normal technology. Pretty boring, and useful in a few specific situations that do not require generalized access by untrained people.

@[email protected] @[email protected]

¹ Some argue that statistics implies theoretical models of a phenomenon, if only to decide which data to collect and analyze, while current "ai" digests unstructured data. This is generally false: even unsupervised learning assumes certain hypotheses in the choice of data to use. But even so, the point is that you need tons of examples to tune the matrices, so if you don't like "statistical programming", maybe "data-driven software synthesis" or "programming by tons of examples" works better (but I'm really open to alternatives).
@inthehands
Uh I guess it's like with literally _anything_: when it turns into a religion, it becomes insanity. Be it tech or whatever else.
@anildash

@inthehands @anildash …someone just pissed off our future robot overlords.

#AI #puttingTheIinAI

@inthehands @anildash yeah good call. "be normal about it" sounds exactly right.
@inthehands @anildash i think about the parallel universe where llms were something mentioned in some google engineering blog about how they drastically enhanced the quality of search results with previously unimaginable pattern matching, and that was about it

@inthehands I’m in a smaller organization with a well funded IT department (a rare enough thing) and these are the conversations we’ve had. We’ve definitely found things that LLMs have some utility for but none of it is at the scale you’d think from the hype. We’re definitely not going to be blowing the budget on it.

@anildash