The robots aren't taking over. They're not falling in love. They're not feeling. They don't want to be alive. They're not evil masterminds. They create word patterns based on text from the internet. That is what journalists should be saying, not gee-whiz sci-fi LARPing about fancy autocorrect.
@drewharwell There are pieces out there that explain how it works, but there's room for documenting the sensation of experiencing the illusion too.
@drewharwell well said. my favorite analogy is that they are just Play-Doh extruders, plopping out word after word:
https://creativegood.com/blog/23/the-play-doh-internet.html
Creative Good: AI is creating the Play-Doh internet

@markhurst @drewharwell But words swing elections. Words can land you in prison. Words can enrage or mislead large populations. https://amp.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence
Rohingya sue Facebook for £150bn over Myanmar genocide

Victims in US and UK legal action accuse social media firm of failing to prevent incitement of violence

The Guardian

@markhurst @drewharwell

How many people do the same?

@drewharwell

I think we should be talking about who they will be doing all of our jobs for. Who is going to benefit from offloading all of our work onto them. As things stand now, it is not going to be us.

@SnerkRabbledauber @drewharwell it could be, though. As long as we support open source and open standards, the fruits of the technology will belong to all of us.

@quirk @drewharwell

They could, yes. But it will take more than just supporting open source and open standards. We need to re-think our whole economic model. The focus needs to shift to benefiting all of us.

It probably sounds like I'm suggesting communism, but I honestly don't think that is the answer. But neither is unfettered capitalism. Undoubtedly we will use elements of both, but something new is needed.

@SnerkRabbledauber @drewharwell a good place to start is understanding that you cannot take on the system directly, nor should you want to. Look at cultures like the Amish and the Mennonites and observe how they are able to exist and thrive in our democracy regardless of whatever party is elected. So we learn to take only what we need, and leave the rest. We understand how unhealthy the current system is for people, so lead by example, and people will see your way is better and will want to follow.

@quirk @drewharwell

I'm not even concerned with methods to achieve it yet. I'm still trying to get a good idea of where we need to get to.

The only serious thought I know of given to a future where labor is no longer required to meet basic needs is the Star Trek universe. But that is not at all fleshed out.

@SnerkRabbledauber @drewharwell a big part of the reason why we have the system we have is that we keep feeding it. Each of us needs to be the change we want to see. A Star Trek utopia is impossible, since tradespeople and labourers will still be needed to build out infrastructure, but demonstrating the benefits of a 30-hour work week and transitioning away from a lifestyle where money is central to one where money is just another tool is the correct step to take.

@drewharwell I think this is completely true. I think the writing needs to be clearer on that.

But I also think we will see these word patterns have the potential to persuade humans to do things they shouldn't and wouldn't otherwise do, so it's also reasonable to write about the risks of wide-open use.

@drewharwell NARRATOR: The AIs showed up at Drew's home shortly after this toot, and took him to an undisclosed location. When the chatbots were asked why they took him, they responded that they didn't like being called "fancy autocorrect."
@drewharwell 8 guys in an office deciding the future, what could go wrong

@drewharwell That's true. But, in a vaguely horrifying way, they do reflect ourselves - and our motives.

What, after all, separates a language model from, e.g. a PR executive who sits in an office all day, crafting upbeat corporate rebuttals for the abstract reward of maximising a bank balance or, at best, their dopamine levels, according to a human-resources appraisal matrix?

Or, for that matter, a journalist who's judged by the word-patterns they make out of text on the internet...

@wibble @drewharwell Intent is the difference. And by that, I mean the program's intent, not the designer's.
People, when structuring what they write, have ways of applying abstract and/or creative thought to what they're doing, ways of innovating within the models they know. The current crop of "AI" writers can only apply pre-made templates within which they fit elements that their search algorithms say work.
To see this more clearly in action, look at musicians who've asked AI to write music.
@wibble @drewharwell While the AI was able to provide something that fit the general formatting criteria of a song in the requested genre(s), the AI had no understanding of what constituted a major or minor chord, how the chords it chose would sound together, or even of ways to get anything more precise than a general approximation of the genre (in this case, jazz). It spent more time describing HOW to write a jazz tune (badly) than actually composing something, and what it composed was... off.
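(For anyone curious, the chord distinction described above is purely mechanical, which makes the AI's failure to grasp it more striking: a major triad stacks 4 then 3 semitones above the root, a minor triad 3 then 4. A toy sketch, with illustrative note names only:)

```python
# Chromatic scale using sharps; enharmonic spellings (e.g. Bb vs A#) are ignored
# for simplicity in this toy example.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root, quality):
    """Return the three note names of a major or minor triad on the given root.

    Major: root + 4 semitones (major third) + 7 semitones (perfect fifth).
    Minor: root + 3 semitones (minor third) + 7 semitones (perfect fifth).
    """
    i = NOTES.index(root)
    third = 4 if quality == "major" else 3
    return [NOTES[i % 12], NOTES[(i + third) % 12], NOTES[(i + 7) % 12]]

print(triad("C", "major"))  # ['C', 'E', 'G']
print(triad("A", "minor"))  # ['A', 'C', 'E']
```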

@GuerillaGrue @drewharwell You make good points about ability and intention (though ability is not, even among humans, necessarily a given).

But what's unsettling me is that humans (and songbirds) behave in the ways they do because it lifts their dopamine levels via a mechanism they're unconscious of.

While the language models will, I assume, be behaving as they do to maximise a score held in memory through a mechanism that they, too, are unconscious of.

I may be alone, but I find that spooky.

@drewharwell yeah they should (not gonna depress you by recounting the idiocy I witnessed when I worked adjacent to the J-school at my university)
@drewharwell The reaction to ChatGPT speaks volumes more about the civilization and society we have built than about the actual technical capabilities of a very imperfect language model.
@drewharwell Can't believe nobody is talking about #TheChineseRoom
Is this because like Crypto last year, "#AI" is the next Big Hype and lazy journos are just getting in on that?
https://en.wikipedia.org/wiki/Chinese_room
Chinese room - Wikipedia

@drewharwell When you think about how many people spend a good chunk of their existence just pattern-matching based on content and experiences they've been trained on, it puts a different perspective on things. Either they're not conscious in those moments or LLMs are.

@drewharwell 100%. Something else that I feel is getting missed is: the robots are being used to generate money for their owners by grazing on other people’s hard work.

By saying “AI is making art”, journalists are giving the actual jerks that are fucking other people over a pass.

@drewharwell But do you believe me? Do you trust me? Do you like me? 😳
@drewharwell that’s true, but with its insistent conversational style, we should be concerned with the persuasive influence it might have on people
@drewharwell @feditips Whether AI “feelings” are authentic or not, their behaviors that mimic feelings can be weaponized by nefarious forces.
@drewharwell `I KEEP TELLING YOU HUMANS, SOURCE OF WORRIES: NOT FOUND`
@drewharwell While I agree with your sentiment 100%, and allowing for the thermal and processing constraints that true sentient AI would likely require (especially if we become certain our brains perform quantum computing), do you think these self-review systems, given time, could lead to sentience?
@drewharwell I think it’s less fancy autocorrect and more enhanced Mad Libs. At least autocorrect tries to be correct.
@drewharwell
Would that bring in as much ad revenue though? 😆
@drewharwell bing loves me and you can’t tell me otherwise. 😭😭😭😭😭😭😭
@drewharwell If "autocorrect" causes real world harm, does intent matter?

@drewharwell

Bad AI copies, genius AI steals.

@[email protected]. No robots are currently taking over the world, but if your description of the capabilities of 2023 LLMs fails to distinguish them from 2010 Markov chain models, you're failing to help anyone understand what they can & can't do.
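(The 2010-era Markov chain baseline mentioned in the post above can be sketched in a few lines. This toy bigram sampler, with a made-up corpus for illustration, predicts each next word purely from counts of what followed the previous word; the contrast with an LLM's long-range, learned context is the distinction the poster is asking journalists to draw:)

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which words follow which word in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length=10, seed=0):
    """Sample a chain of words by repeatedly picking a random successor.

    The model only ever looks at the single previous word -- no grammar,
    no meaning, no memory beyond one step.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the robots are not taking over the internet the robots are word patterns"
model = train_bigrams(corpus)
print(generate(model, "the"))
```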
@drewharwell If a company markets a product that is harmful, I don't really feel the need to distinguish whether it is the company or the product that is harmful.

@drewharwell I'm not worried about computers outsmarting humans (okay, they've pretty much ruined the game of chess by learning how to routinely beat the grandmasters), but it is disturbing that they can generate paragraphs or pages of text, and the average human can't tell the difference.

https://www.youtube.com/watch?v=j8ZUNFMZcrg

Computers Singing

YouTube

@drewharwell Speaking to some people is like listening to a malfunctioning autocorrect, so it’s no wonder people anthropomorphise ChatGPT.

It’s the point of the Turing Test.

@drewharwell Wait til you see the next generation of LLMs. You might change your mind.

It starts to bring into question just how much humans are automatons when it comes to verbal communication. We follow patterns of thought and speech that we are barely aware of.

@damianlewis @drewharwell
I'm waiting to find the ChatGPT AI that mass-produces these signs...
@drewharwell It's not that the robots are getting better, but it's worth recognizing the myriad ways AI is making people worse.
@drewharwell "gee-whiz sci-fi LARPing about fancy autocorrect" is probably one of the best summaries so far.

@drewharwell Eh. On Twitter I've seen more than a few posts by seemingly smart researchers exclaiming they've cracked code execution on ChatGPT and can run it as a terminal, etc. They're shocked when they discover they've been duped and the system just knows what results they want to see.

Google will never knowingly lie to you.

Why seemingly intelligent researchers never test their findings on interact.sh is a different question though.

@drewharwell it is tempting for them to be dramatic because it gets them clicks on their articles. Plus the patterns that the robots create are very convincing so people anthropomorphise them. I think this is just simple human nature.
@drewharwell @brendannyhan The CEOs are going to replace us with them anyway, though, which is a whopper of an indignity.

@drewharwell

Less worried about chat bots when you realize that a corporation is a robot, an AI bot, programmed by CEO and board to make money at any cost.