@MostlyHarmless That's capitalism more generally. AI is just the latest hype. But hey, could be worse. At least AI has SOME positive use. Sometimes the hype is around things that have NO positive use, like crypto for example.
@oneloop @MostlyHarmless I'm seeing far more negative consequences than positive uses from LLMs in particular. Hyped LLMs are not like most Machine Learning.
LLMs, the grift that just keeps giving.
@marjolica Why are they not like most ML?

@oneloop Most ML looks for patterns in the data that map to a truth function (true/false), where the truth may be objective or based on expert opinion. A well-known example would be looking at patterns in X-rays to detect whether a fracture is or is not present. It is still not going to get the answer correct every time, but it can also sometimes detect patterns not obvious to us.
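
To make that concrete, here's a minimal sketch of a classifier trained against ground-truth labels (the features and labels are invented toy data, assuming scikit-learn is available; not a real radiology model):

```python
# A toy sketch of ML with a "truth function": a supervised classifier
# trained against ground-truth labels (fracture present / absent).
# Features and labels are invented, not a real radiology data set.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]])  # hypothetical image-derived features
y = np.array([1, 1, 0, 0])  # expert labels: 1 = fracture, 0 = no fracture

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.2]]))        # predicted class
print(clf.predict_proba([[0.85, 0.2]]))  # a confidence, still not a guarantee
```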

An LLM however has no truth function; it just looks to autocomplete the prompt you give it (which you may think is a question with a true/false answer) based on the frequency with which words follow each other in its training data set.
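
As a toy illustration of that word-following idea (a bigram model over a made-up corpus; real LLMs are vastly bigger, but the training objective is likewise next-token prediction):

```python
# A toy bigram "autocomplete": pick the next word by how often it
# followed the previous one in a made-up corpus. There is no notion
# of truth anywhere, only word-following frequency.
import random
from collections import Counter, defaultdict

corpus = "the moon is made of rock . the moon is made of cheese .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, steps=5):
    out = [word]
    for _ in range(steps):
        counts = follows[out[-1]]
        if not counts:
            break
        words, freqs = zip(*counts.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

print(complete("the"))  # may end in "rock" or "cheese": frequency, not truth
```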

At best the LLM trainers will get sweated labour in the South to label sources for accuracy, so it may give a higher probability to strings of words in a Wikipedia or Reddit article compared to one in The Onion or conspiracy theories on Facebook or X.
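
Roughly, such labelling can be used to weight training text by source, something like this sketch (the source names and trust scores are invented for illustration):

```python
# A toy sketch of source labelling: bigram counts from sources rated
# more trustworthy count for more. Source names and trust scores are
# invented for illustration.
from collections import Counter, defaultdict

sources = [
    ("wikipedia", 1.0, "the moon is made of rock"),
    ("conspiracy_post", 0.1, "the moon is made of cheese"),
]

follows = defaultdict(Counter)
for name, trust, text in sources:
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += trust  # weighted, not just counted

print(follows["of"])  # Counter({'rock': 1.0, 'cheese': 0.1})
```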

Add to this the important 'chat' feature, where it is programmed to respond in a way that fakes being in a conversation, such as you would be having with a human respondent (see the sketch below).
And it never responds "I don't know". It always comes up with what looks like an answer, however poor the source or combination of sources may be.
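
Here's a rough sketch of how such a chat wrapper can work: the model is still a text completer, it's just fed a transcript-shaped prompt (the template below is illustrative, not any vendor's actual format):

```python
# A sketch of the 'chat' wrapper: the model is still autocompleting text,
# but the prompt is formatted as a dialogue transcript, so the most
# probable continuation *looks like* a conversational reply.
# This template is illustrative, not any vendor's actual format.
def build_chat_prompt(history, user_message):
    lines = ["System: You are a helpful assistant."]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # the model autocompletes from here
    return "\n".join(lines)

print(build_chat_prompt([("User", "Hi"), ("Assistant", "Hello!")],
                        "Is the moon made of cheese?"))
```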

@marjolica

> An LLM however has no truth function

> LLM trainers will get sweated labour in the South to label sources for accuracy

Ok, if you agree that they're training for accuracy, then they do have a "truth function".

You're mixing the technicalities of the technology with the impact of the technology.

@marjolica

> And it never responds "I don't know".

You're mixing properties of the implementations that you've seen with properties of the technology. Furthermore, the premise that it never responds "I don't know" isn't even true. I'm afraid you're just repeating things that you've heard without taking a minute to consider whether they're true.

Here's ChatGPT saying it doesn't know

@marjolica Another example
@marjolica You just heard someone say "LLMs never say they don't know" and you go "that sounds awful, let me post it on Mastodon" without first checking whether it's true. So in a sense you're more like an LLM than you imagine.
@oneloop someone said? Try typing "llms never say they don't know" into non-AI DDG and read some of the articles listed.
Though of course it is a bit more nuanced than that.
The LLM training data set may include statements out there asserting that something is not known. And it may reproduce that, or it could randomly decide to give what looks like an answer.
You can attempt to put in probability thresholds for the word salads that LLMs extrude, but they still can't safely distinguish truths from the fictions that exist in their training data set.
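
A sketch of what such a threshold could look like (the probabilities and threshold are invented; note it measures how familiar the wording is, not whether it's true):

```python
# A sketch of a probability threshold for abstaining: if the average
# per-token log-probability of an answer is too low, say "I don't know"
# instead. Probabilities and threshold are invented; note this measures
# how *familiar* the wording is, not whether the claim is true.
import math

def answer_or_abstain(tokens_with_probs, threshold=-1.0):
    avg_logprob = sum(math.log(p) for _, p in tokens_with_probs) / len(tokens_with_probs)
    if avg_logprob < threshold:
        return "I don't know."
    return " ".join(tok for tok, _ in tokens_with_probs)

familiar = [("the", 0.9), ("moon", 0.8), ("orbits", 0.7), ("earth", 0.9)]
shaky    = [("the", 0.9), ("moon", 0.8), ("is", 0.6), ("cheese", 0.02)]

print(answer_or_abstain(familiar))  # emits the answer
print(answer_or_abstain(shaky))     # abstains, regardless of truth
```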
@marjolica It doesn't matter that you can find articles saying "LLMs never say 'I don't know'" - I've just demonstrated it's not true. Don't believe everything you read; exercise critical thinking.