I think this, a discussion of the parallels between "AI" and "crypto", is a good take. I want to dig into the bit on "AI" being different because it has practical use.

"AI" is a marketing term. There's the stuff that was mainly called "ML" up until 2021 or so, which definitely has practical uses. E.g., if you're running a social network and need to help humans find the toxic stuff, ML can help.

But in the last few years there's a wave of hype mainly around the large language models, LLMs, and the large text-to-image models. So things like ChatGPT and DALL-E. It's really not clear to me those have much more practical use than crypto. Certainly not over their costs. 1/

https://sfba.social/@[email protected]ial/111754701923644674

Jesse Baer šŸ”„ (@[email protected])

People like to say that "AI" is different from crypto in that there are actual useful applications, and that's true. But the vast majority of people you're expecting to come up with those applications are the same people who were just trying to build products on the blockchain.


Just to be sure I'm not being unfair, I searched for writeups of LLM uses. Here's a representative example of the genre: https://www.techopedia.com/12-practical-large-language-model-llm-applications

They mention 12 use cases. Some are done better with simpler, cheaper models or non-ML techniques (1, 4, 6, 8, 12). Some are wildly speculative (2, 9, 10).

But that leaves 4 items that I want to look at carefully: content creation, customer support, sales automation, and writing code. Those are at least superficially plausible places where LLMs and large image models could have practical uses.

2/

First, though an important philosophical point: LLMs are fancy autocomplete. You give them a set of words, they'll predict the next word based on the enormous corpuses they've been trained on. This can give them the appearance of sentience. People will talk as if they "understand" things. They don't. It's the million-monkeys-with-typewriters thing, but the monkeys have seen enough English text that the next word is statistically consistent with the previous words.
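The "fancy autocomplete" point can be made concrete with a toy bigram model. This is a drastic simplification of a real LLM (which conditions on long contexts using a neural network), but the training objective is the same shape: predict the next word from statistics of previously seen text, with no understanding anywhere in the loop.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat and the cat ate".split()

# Count which word follows which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word -- pure
    # pattern-matching, no meaning involved.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Scale the corpus and the context window up by many orders of magnitude and you get text that is statistically consistent with what came before, which is exactly why it can *feel* like understanding.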

Humans are subject to pareidolia, and we really like to anthropomorphize things. It's not just thunder, it's a guy with a name and a look and a personality and a whole family. It's not just a bit of winter where we celebrate with our dearest; it's a fat guy in a red suit with specific facial hair. So although the text can feel human, we'll have to work hard to think of ChatGPT as a bit of unfeeling machinery, not our plastic pal who's fun to be with.

3/

Ok, so first, content creation. That seems positive, right? Wrong! The best way I've seen of explaining this: "Why should I take the time to read something nobody took the time to write?"

I think this one is a huge net societal negative. The people out there who want instant "content" are almost entirely not readers. They're people who want something to run ads against. They're people who want the credit for writing without doing the work. They're people who want to sell you something without understanding the something or whether or not it might be good for you. In short, they're people with various levels of contempt for their readers.

4/

@williampietri This is a new way of this king (Ha! Autocorrect changed thinking to 'this king' based on my previous texts! Driving home your earlier point about AI content) I had not considered about the AI "content" and who wants it.
"They're people who want something to run ads against. They're people who want the credit for writing without doing the work."
@spocko @williampietri One neat way I have seen the consequences of "content-ification" framed is that the content of the content stops mattering. At its best it is modular, rapidly swappable, appears and vanishes as quickly as it optimizes some marketing algorithm curve. How thoughtful or reflective or emotional or evocative it is, what philosophy it contains… don't matter. Culture and philosophy are at best functionalized as getting people receptive to your adware.
@spocko @williampietri It was an article by @clive that formulated this basis for my observation.

@spocko @williampietri @Sevoris

šŸ¤˜šŸ»šŸ¤– aha glad you liked that one!

@clive @spocko @williampietri it has stuck with me! TBH I think it has gained in relevance as further evaluation of language models has revealed how their statistical view of language pulls ideas and writing toward a washy common denominator, and then the NYT lawsuit came along and demonstrated just how deep the overfitting goes...

Language models to me now look like "content machines," with all the issues that entails...

@Sevoris @spocko @williampietri

It really seems to be the apotheosis of the content fetish, doesn't it?

@clive @spocko @williampietri so long as the next word is probable enough, it works… The LLM as the Make More Words machine.

The training objective of an LLM and what "content" appears to be all about certainly align in unfortunate ways.