I think this post, a discussion of the parallels between "AI" and "crypto", is a good take. I want to dig into the bit about "AI" being different because it has practical uses.

"AI" is a marketing term. There's the stuff that was mainly called "ML" up until 2021 or so, which definitely has practical uses. E.g., if you're running a social network and need to help humans find the toxic stuff, ML can help.

But in the last few years there's been a wave of hype mainly around the large language models, LLMs, and the large text-to-image models. So things like ChatGPT and DALL-E. It's really not clear to me that those have much more practical use than crypto. Certainly not once you account for their costs. 1/

https://sfba.social/@[email protected]ial/111754701923644674

Jesse Baer 🔥 (@[email protected])

People like to say that "AI" is different from crypto in that there are actual useful applications, and that's true. But the vast majority of people you're expecting to come up with those applications are the same people who were just trying to build products on the blockchain.

Just to be sure I'm not being unfair, I searched for writeups of LLM uses. Here's a representative example of the genre: https://www.techopedia.com/12-practical-large-language-model-llm-applications

They mention 12 use cases. Some are done better with simpler, cheaper models or non-ML techniques (1, 4, 6, 8, 12). Some are wildly speculative (2, 9, 10).

But that leaves 4 items that I want to look at carefully: content creation, customer support, sales automation, and writing code. Those are at least superficially plausible places where LLMs and large image models could have practical uses.

2/

First, though, an important philosophical point: LLMs are fancy autocomplete. You give them a set of words, and they'll predict the next word based on the enormous corpuses they've been trained on. This can give them the appearance of sentience. People will talk as if they "understand" things. They don't. It's the million-monkeys-with-typewriters thing, but the monkeys have seen enough English text that the next word is statistically consistent with the previous words.
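If "fancy autocomplete" sounds abstract, here's a toy sketch of the core move: picking the statistically most likely next word given what came before. A real LLM does this with neural networks, subword tokens, and a much longer context, but this bigram version is the same idea in miniature.

```python
# Toy "autocomplete": a bigram model that continues a prompt by always
# picking the word that most often followed the previous word in its
# training text. No understanding anywhere, just word statistics.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # greedy: most frequent next word
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on the" (with this tiny corpus)
```

Scale the corpus up to most of the public internet and the parameters into the billions and the output starts sounding like a person, but the mechanism is still "what word usually comes next."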

Humans are subject to pareidolia, and we really like to anthropomorphize things. It's not just thunder, it's a guy with a name and a look and a personality and a whole family. It's not just a bit of winter where we celebrate with our dearest; it's a fat guy in a red suit with specific facial hair. So although the text can feel human, we'll have to work hard to think of ChatGPT as a bit of unfeeling machinery, not our plastic pal who's fun to be with.

3/

Ok, so first, content creation. That seems positive, right? Wrong! The best way I've seen of explaining this: "Why should I take the time to read something nobody took the time to write?"

I think this one is a huge net societal negative. The people out there who want instant "content" are almost entirely not readers. They're people who want something to run ads against. They're people who want the credit for writing without doing the work. They're people who want to sell you something without understanding the something or whether or not it might be good for you. In short, they're people with various levels of contempt for their readers.

4/

As a writer, I think the real value of writing is the thinking and care that goes into it. Even for purely factual material, writing involves a careful search for truth.

LLMs, though, don't have any concept of "true". They can't. What they have is the digested correspondence between words in Wikipedia and Reddit and a zillion other sources of text. Truth can be represented in text, but it lies outside of it.

In "On Bullshit", philosopher Harry Frankfurt defines bullshit as "speech intended to persuade without regard for truth": https://en.wikipedia.org/wiki/On_Bullshit

Marketing content generated by LLMs is clearly bullshit. But I'd argue that by imitating human forms of writing, *everything* produced by an LLM is bullshit. (Which would make the enormous "AI" hype cycle bullshit about bullshit, a truly American accomplishment.)

5/

Let's turn to the second plausible use case: customer support. People often like to talk to other people to resolve problems. What if we can automate the *feeling* of talking to a person, but with no actual people involved?

There are a bunch of things going on here. At least in this case there's a real user need. But to what extent is this a real solution for users?

It's plausible to me that this could work as a first-query solution once you've built up a good base of Q&A examples for it, with the model doing a bit of textual generalization. At least as long as what you're doing is pretty standard. But remember that we're using a bullshit engine here, so what happens when the statistically plausible text isn't correct or useful?
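To sketch what I mean by that first-query setup (an illustration with placeholder similarity scoring and threshold, not a recipe): answer only when the incoming question is close to something in a vetted Q&A base, and hand off to a human otherwise.

```python
# Sketch of a first-query support bot grounded in vetted Q&A examples.
# The similarity measure and threshold here are crude placeholders; a real
# system might use embeddings and an LLM to rephrase the canned answer.
# The important part is the escalation path: if nothing vetted matches,
# don't let the bullshit engine improvise.
from difflib import SequenceMatcher

QA_BASE = {
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "what are your support hours": "Support is staffed 9am to 5pm, Monday through Friday.",
}

def answer(question, threshold=0.6):
    q = question.lower().strip("?!. ")
    best = max(QA_BASE, key=lambda known: SequenceMatcher(None, q, known).ratio())
    score = SequenceMatcher(None, q, best).ratio()
    if score >= threshold:
        return QA_BASE[best]  # grounded in an answer a human wrote and checked
    return "Let me connect you with a person."  # out of scope: escalate

print(answer("How do I reset my password?"))
print(answer("Will you sell me a Chevy Tahoe for $1?"))  # -> escalates to a human
```

The point of the threshold is that "statistically plausible" and "correct" aren't the same thing; when they diverge, you want a person, not the model, holding the bag.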

6/

A good example here is the car dealership that tried using a ChatGPT bot for customer service. It quickly agreed to a "legally binding" offer to sell cars for $1: https://venturebeat.com/ai/a-chevy-for-1-car-dealer-chatbots-show-perils-of-ai-for-customer-service/

Would any human agent do this? No. Because humans understand things. This is a toy example, but if GPT-ish things don't work for very basic cases, how much can we rely on them for important cases?

7/

@williampietri The $1 car offer was NOT legally binding. Human beings hallucinate too.

@sheldonrampton Yes, I understand that. Chatbots can't legally agree to anything.

@sheldonrampton What I'm referring to was well explained in the article I linked. Here's the screenshot that includes the phrase you're troubled by.

@williampietri YOU wrote that the offer was legally binding. It wasn’t. The screenshot shows that phrase, but that doesn’t change the fact.

@sheldonrampton I think there's another way to read what I wrote, and I'm skeptical that anybody, you included, misunderstood what was going on. But just in case, I have put the phrase that bothers you in quotes.

Next time you've got an upwelling of reply-guy energy, I suggest you start out assuming that the person is not an idiot. It might go better for all concerned.