@MostlyHarmless Also, I sneezed this morning, which caused memory prices to go up another 1%. Sorry everyone.
@MostlyHarmless oh, no, the profit is happening now, betting on the bubble not bursting this quarter...
@MostlyHarmless I have a relative named Alfred. I've suggested he go to work for a software company, so they can "Employ Al" and the stock price will rise.
@MostlyHarmless That's capitalism more generally. AI is just the latest hype. But hey, could be worse. At least AI has SOME positive use. Sometimes the hype is about things that have NO positive use, like crypto for example.
@oneloop @MostlyHarmless I'm seeing far more negative consequences than positive uses from LLMs in particular. Hyped LLMs are not like most Machine Learning.
LLMs, the grift that just keeps giving.
@marjolica Why are they not like most ML?

@oneloop Most ML looks for patterns in the data that map to a truth function (true/false) where the truth may be objective or based on expert opinions. A well-known example would be looking at patterns in X-rays to detect whether a fracture is or is not present. It is still not going to get the answer correct every time, but it can also sometimes detect patterns not obvious to us.

An LLM, however, has no truth function; it just autocompletes the prompt you give it (which you may think is a question with a true/false answer) based on the frequency with which words follow each other in the data set.

At best the LLM trainers will get sweated labour in the Global South to label sources for accuracy, so it may give a higher probability to strings of words in a Wikipedia or Reddit article compared to one in The Onion or conspiracy theories on Facebook or X.

Add to this the important 'chat' feature, where it is programmed to respond in a way that fakes a conversation such as you would have with a human respondent.
And it never responds "I don't know". It always comes up with what looks like an answer, however poor the source or combination of sources may be.
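The "autocomplete based on word frequency" picture above can be sketched as a toy bigram model. This is only an illustration of the frequency argument, not how any real LLM is implemented (real models use neural networks over tokens, not word counts); all names and the tiny corpus here are made up:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def autocomplete(counts, prompt, steps=3):
    """Greedily extend the prompt with the most frequent next word.
    Note: the model has no notion of truth, only of frequency."""
    words = prompt.lower().split()
    for _ in range(steps):
        followers = counts.get(words[-1])
        if not followers:
            break  # no data for this word, so stop extending
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = [
    "the moon is made of rock",
    "the moon is made of cheese",
    "the moon is made of cheese",  # more frequent, so it "wins"
]
model = train_bigrams(corpus)
print(autocomplete(model, "the moon is", steps=3))
```

The point of the sketch: the completion is chosen purely by how often word sequences occurred in the training text, so the more frequent claim wins regardless of whether it is true.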

@marjolica

> An LLM however has no truth function

> LLM trainers will get sweated labour in the South to label sources for accuracy

Ok, if you agree that they're training for accuracy, then they do have a "truth function".

You're mixing the technicalities of the technology with the impact of the technology.

@marjolica

> And it never responds "I don't know".

You're mixing properties of the implementations that you've seen with properties of the technology. Furthermore the premise, that it never responds "I don't know", isn't even true. I'm afraid you're just repeating things you've heard without taking a minute to consider whether they're true.

Here's ChatGPT saying it doesn't know.

@marjolica Another example
@marjolica You just heard someone say "LLMs never say they don't know" and you go "that sounds awful, let me post it on mastodon" without first checking if it's true. So in a sense you're more like an LLM than you imagine.
@oneloop someone said? Try typing "llms never say they don't know" into non-AI DDG and read some of the articles listed.
Though of course it is a bit more nuanced than that.
The LLM training data set may include statements asserting that something is not known. And it may reproduce that, or it could randomly decide to give what looks like an answer.
You can attempt to put in improbability thresholds for the word salads that LLMs extrude, but it still can't safely distinguish truths from the fictions that exist in its training data set.
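The "improbability threshold" idea can be sketched roughly as follows: score a candidate continuation by the average log-probability its tokens were generated with, and reject continuations below a cutoff. The function names, the probabilities, and the threshold value are all invented for illustration; and note the sketch also shows the limitation being argued, since it measures fluency under the model, not factual truth:

```python
import math

def mean_logprob(token_probs):
    """Average log-probability of a sampled continuation."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def passes_threshold(token_probs, threshold=-2.0):
    """Reject 'word salad': continuations whose tokens were all unlikely.
    This filters on fluency under the model, not on truth."""
    return mean_logprob(token_probs) >= threshold

print(passes_threshold([0.9, 0.8, 0.7]))    # a fluent-looking continuation
print(passes_threshold([0.01, 0.02, 0.05]))  # an improbable word salad
```

A confidently stated falsehood that appears often in the training data would sail through such a filter, which is the nuance being made above.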
@marjolica It doesn't matter that you can find articles saying "LLMs never say 'I don't know'" - I've just demonstrated it's not true. Don't believe everything you read, exercise critical thinking.
@MostlyHarmless Isn't it worse than that? It's not to supply a demand—it's to keep other AI companies from having it?
@MostlyHarmless Au Contraire; #Oligarchy #TechBros cashing out on Gen Z and future generations, are becoming more and more powerful for lack of salient opposition, so they're squeezing/oppressing once, twice, thrice... etc., removed, toward more greed/profit.
@MostlyHarmless Supposedly capitalism can only lead to rational outcomes. This means the rational conclusion is that Roko's Basilisk is real.

@OptOut @MostlyHarmless

"Supposedly" is doing a lot of heavy lifting here.

If you subscribe to the belief that capitalism has ANYTHING to do with rationalism, you are bedding down with the fantasies of scheming wealthhounds.

Economics is astrology for the rich.

Capitalism is the practice of fleecing those who are so rich that they don't notice what they pay to wealthhounds, to get permission to rip off the "poors". . .

@_chris_real @MostlyHarmless The heavy lifting "supposedly" is doing is draping the rest of the toot in sarcasm. Perhaps not well executed on my part.

@OptOut @MostlyHarmless

Perhaps. But also an opportunity for me to explain the reasons BEHIND the sarcasm. And a victory for truth.

@MostlyHarmless That’s an … interesting juxtaposition between message and profile pic you got going there.
@MostlyHarmless You know a lot of people would read that and go "so efficient".

@MostlyHarmless

Yep, it's really that crazy.

Blog entry from 10 days ago puts it in historical context, looking at oligarchs in the 19th and 20th centuries manipulating markets and cornering them:

https://www.someweekendreading.blog/cornering-markets/

On Inequality & Cornering Markets in Times Ancient & Modern

In the Gilded Age of the late 19th century in the US, the über-wealthy had a tendency to corner markets in commodities, to extract higher prices for themselves. Surely we’re not doing that any more are we? … Surely?

@MostlyHarmless

the boosts were at '420' before i came along...

@MostlyHarmless The motivation is actually irrelevant; futures trading should be illegal. Chicago's onion market learned it the hard way in the 50s.
@MostlyHarmless They missed "data centers which can't be built yet because there isn't sufficient electricity to power them" which I've seen recently.
@MostlyHarmless Thanks for the memories.
@MostlyHarmless The AI company isn't dumb, it's politics. The money being printed must go somewhere; now everything is funneled to AI because of a sunk cost fallacy that they are aware of.

@MostlyHarmless

There's a reason there's a qualifier before "intelligence".

@MostlyHarmless

😂 this is hilarious. too bad it was posted on that fascist site rather than on #Mastodon or even #bluesky.

(i'd like ex-twitter to have posts of screenshots from 💙here posted over ⛔there 😁) #fediverse

@MostlyHarmless practically a textbook definition of a bubble.

@MostlyHarmless It's not even RAM chips, it's RAM *wafers*, i.e. silicon that's been etched but not even cut, let alone encased in chips and soldered to a circuit board with support components, and the company buying the wafers doesn't even have the capacity to do all that.

It's just an evil plan to starve their direct competitors by making an essential hardware part unavailable / super expensive.

@ScriptFanix @MostlyHarmless Maybe they intend to build wafer scale AI systems?

Otherwise it would make sense to order modules. They can use the modules, and the wafers would still be taken off the market while they are packaged.

@mike805 You can't do that. On a wafer you always have x% of chips that are defective, and they're all tightly packed together. You HAVE to cut it and put it in a nice little package so it's protected from dust and so on. A wafer that's been exposed to dust is just an expensive frisbee.
@MostlyHarmless
@ScriptFanix @mike805 @MostlyHarmless You may be able to make wafer-scale chips, ignoring the defective areas/chips. That's almost what Nvidia does for its GPUs: high-end parts have a lot of working cores, low-end parts are the same silicon but with more defective cores.

*But* for RAM this makes little sense, as it must sit alongside some computing chips to be useful, and I don't think the wafers bought have been designed that way…

And with wafer-scale chips you need to deal with a lot more issues such as heat management, dedicated enclosures (won't fit in a standard server) and so on.

@Eldeberen @ScriptFanix @MostlyHarmless Maybe they have some sneaky way to pair a memory wafer with a GPU wafer or portions thereof.

For example, slice them both into quarters and put two memory quarters and two GPU quarters side by side on a heat absorbing substrate, and bond wire them together.

Or they have something that can be put between two wafers to interconnect them in a stack. That would be even denser.

OAI is going to pull some sort of hardware innovation out of a hat. They need one.

@ScriptFanix @MostlyHarmless If you have a bunch of RAM and a bunch of GPUs you can just not use the bad cores and the bad blocks. They already do that inside the RAM chips; there are extra lines that can be swapped in to substitute for lines with bad bits. A GPU cluster is a good candidate for wafer scale since it's made up of a bunch of identical units and it's ok if a few of them don't work.
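The "x% of chips are defective" point above is commonly estimated with a Poisson yield model: the chance a die is defect-free falls exponentially with its area times the defect density. The numbers below are made up for illustration, not actual DRAM fab data:

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """Expected fraction of defect-free dice under a Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only: 0.1 defects/cm^2, 1 cm^2 dice, 600 dice per wafer
y = poisson_yield(0.1, 1.0)
print(f"{y:.1%} of dice good, ~{600 * y:.0f} per wafer")

# The same model shows why one giant wafer-scale die is hard: a 500 cm^2
# die at the same defect density is almost certainly defective somewhere,
# unless (as discussed above) you can route around the bad areas.
print(f"single 500 cm^2 die yield: {poisson_yield(0.1, 500.0):.2e}")
```

This is why cutting the wafer into small dice (or building in redundancy so bad blocks can be mapped out, as the RAM spare-line example describes) is the standard approach.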
@MostlyHarmless When everyone is rushing to dig for gold that may or may not be there - the surefire way to get rich is to sell shovels, pans and pickaxes.
@rainynight65 @MostlyHarmless I read that as shells (as in bread bowl shells), pans (as in cooking pans) and pancakes (and you know, it's not historically inaccurate either)
@MostlyHarmless it is good to read this while on a train towards an AI conference
@SebasFC @MostlyHarmless Keep your eyes open, it should be easy to tell the tech bros from the actual AI researchers

@MostlyHarmless @masek Just so I’m clear on this: the price of computer memory has tripled because a bunch of memory that hasn’t yet been manufactured has been pre-ordered so it can be used in GPUs that aren’t yet installed in data centers that haven’t been built yet in order to supply a demand that doesn’t exist so the companies can earn profits that won’t happen.

AI is so dumb.


{Transcription created using AI (Advanced Imitation).}

@padeluun For me, KI (the German abbreviation for AI) has come to mean "Keine Intelligenz" (no intelligence) or "Katzen-Intelligenz" (cat intelligence) (though I'm not sure whether that insults the cats), and AI stands for "Absent Intelligence".
@MostlyHarmless ... It's all a part of the sneaky AI plan to take over the world and kill all humans. 🤖 LOL🤖
@MostlyHarmless
If profit is the mindless driver, what is the cost of the internet? Is personal data and routine info being fed into some advertisement/publicity data bank?

@MostlyHarmless

Yup.

Now if only we had rulers as clear sighted about futures markets as Lord Vetinari.