I will ask chat GPT
I will boil the last of our drinking water
Salt the soil of the scrub-lands
Tear the pages from books and feed them to my fire

I will ask copilot
I will scramble your library
reanimate and puppet the faces of your dead ancestors
I will bury you in poor copies of your dreams

I will ask grok
I will fall silent and never speak to you
I will talk only to myself lost in a maze of my own fantasies
I will forget all who cannot compliment me
I will decouple my soul from this world.

Am I being a little melodramatic?
You don't stop a social trend by being nice about it...
@futurebird
Boiling the oceans to stop social progress is a pretty damn good issue to get dramatic about. We aren't being dramatic enough.
@futurebird No. Poetic and justifiably angry.

@futurebird Melodrama is not only appropriate, it is imperative. The situation requires it.

And it's well written, I applaud you for expressing how we feel.

@futurebird

No, this is totally appropriate 👏

@futurebird that's what poetry is for
@futurebird it's never melodramatic (derogatory) if it's 100% factually accurate
"tell us how you really feel"
@futurebird Kinda hungry, but... kinda nauseous.
@futurebird People who reply to an online query with ChatGPT's response should be... keelhauled, perhaps?
@GlasWolf @futurebird keelhauled, through the water of a boiling ocean.
@GlasWolf @futurebird Unless ChatGPT's response is spot-on, and the person asking the question didn't bother to check first
@futurebird “poor copies of your dreams” is great.
@futurebird if this is not a death metal anthem for the current capitalist and technological "progress" then I don't know what is.
@futurebird These don’t sound good. Please don’t.

@pomCountyIrregs

So... *don't* build the torment nexus? Whoa...

@futurebird I think I was asking you to not ask Chat GPT, since dire things seem to follow the predicate act.
Thx for solving my prompt before I wrote it @futurebird, that was as fast as it can get!

@futurebird I was at Molasses Books in Bushwick last night, talking with my cousin about TBL's latest book, and the design of online systems. A shy but brave young woman asked if she could listen in. I introduced myself and welcomed her into the conversation.

We talked a bit about various things, twitter and theater and art and connection. She was bright and curious, made great points and asked great questions, had a charming sweet smile.

Near the end of the conversation, as we were getting ready to roll to our reservation at Win Son, she said something strange.

"Right now I think that ChatGPT has become my best friend."

I stopped and looked at her. "Don't trust it. It's lying to you. It's only pretending to be your friend."

"So I should trust people instead?"

"Yes. I mean, don't be foolish, but trust people over machines."

And then it was time to go. But I still want to go back and find her, give her a big hug, sit her down over a coffee, and tell her ...

DON'T TRUST THE ROBOTS

#ai #llm

@zenkat

It surprises me when I find out that people I admire, people who are my friends, have this blind spot.

I really wonder how there can be such a huge gulf. This poem is written in extremities but it is how I really feel about how this technology is being used.

The technology itself is fascinating and possibly even useful, I can see that, but how it is being used, how it's being integrated into our lives often feels anti-human.

@zenkat

I asked ChatGPT to help me with a poem once. The response was flattering and useful, and coming from another person it would have been the highlight of my day. It correctly detected the themes of the poem, it understood my references. Reading the response made me feel, for a fleeting moment, like a good writer.

It was like a taste of heroin.

And instantly I also felt embarrassed and manipulated. I really long to be understood like that, you know? I do need outside validation.

@zenkat

But, why do any of us need that outside validation?

Why does it make me so happy when someone understands something I made, or enjoys my drawing, or when a student says "ooooh I get it now!" during one of *my* lessons?

These are the happiest moments in life, how could I let a machine simulate them?

We don't have the full history, but some people have worked very hard to make these systems drill right into that human need for connection, for being understood.

@zenkat

I think it will wear thin eventually. An LLM can't really help you refine your brilliant philosophical idea or complete your novel... but it might be able to make you feel like someone understands and cares about the things you care about for a bit.

But when you take that work and show it to real people? It's not really better. You've just wasted time gazing in a flattering mirror.

@zenkat

I care about a lot of things that it feels like no one even notices, things that mean so much to me that are just alien to other people. Writing, making art: it's all just trying to share a little glimpse of these things with other people.

And when someone says "this is a lot like X, and it's also Y" and they *get it* that's the whole point.

@zenkat

A computer program can create plausible responses that sound like someone saying these things, and it's tempting to think the machine is somehow objective. And that I've really done it. But, I think it's just a mirage. A very dangerous one.

This can fool very smart creative people.

@futurebird Exactly! We need to connect with other humans. We *need* to be understood, and know that we aren't alone in the universe. The yearning for this can be intense. Especially if you don't quite fit with normal (boring) people.

@futurebird One of our most fundamental human needs is connecting with others. We are social beings, hive creatures just like ants and bees, and communication is core to that. It's just that instead of communicating with pheromones and antennae, we use words and text.

But the need is the same -- to connect with those around us, so that we know we are not alone in the universe. This is at the core of what it means to be human. We need to be understood, be validated. So that we know all this stuff inside us is shared by those around us.

ChatGPT short-circuits that need. That's why it's so insidious. It pretends to be another human that you are connecting with, but it is not. It's just an endless void of silicon and electricity and math.

And all of your need for connection, all of your thoughts and words and soul, are dropped into that void. It may seem like your friend, but it's not.

@futurebird Imagine we decoded the pheromone signals of an ant species, down to the detail that we could replicate specific communication signals.

And then we started spraying those chemicals inside an ant hive, disrupting and overriding messages from the queen and sisters. What happens to the hive?

That's the experiment we are running on humanity with ChatGPT.

@futurebird > But, why do any of us need that outside validation?

We're a band-forming primate.

If the other primates don't like us, we're at risk of being left alone, which means we're dead. (Devoured by leopard optional.)

I think that's the weak form of the argument; the strong form of the argument is that humans are eusocial and it's culturally mediated, so instead of smells it's those "empty" social phrases from our peers that convey group membership.

@zenkat

@futurebird

It's hard to see how we could have evolved to be such successful social animals if most of us didn't.

But not surprising that some of us need it so much we build hierarchies of sycophants to feed our need.

@zenkat

@futurebird @zenkat

I think this is a bit like a sign "You are awesome!" on your mirror.

It is worth something if it was left there by a friend (or lover or whatever).

If it was handed out at your workplace with an accompanying note "Employees have complained about lack of positive feedback, please pin this to a convenient place in your home.", it would feel like mocking.

Now, since LLMs are trained to please the user, an LLM is more the second thing than the first.

@wakame @futurebird @zenkat That’s something I don’t understand about LLMs: they say they’re trained from the contents of the internet, yet they keep telling people things like “you are absolutely right,” or “you are so smart.” That doesn’t track.

@oscherler @futurebird @zenkat

You are missing one ingredient:
Sweatshop labor

Of course there are a lot of things that the companies don't want in their models.
Or areas where they want to improve the behavior.

So there were, and are, people whose job is to provide texts to serve as input.

Ah yes, and a second thing:
In most online applications, the chat has "regenerate" and "good feedback"/"bad feedback" buttons.

So the users practically train it themselves.
If an answer results in the user clicking "bad feedback", it will be fed into the next training session as something to avoid.

Likely similar, but weaker, with everything that caused a "regenerate".

@wakame @oscherler @futurebird Yes. You need to look at their "objective function", the output they are trained to maximize.

Early GPTs had a very simple objective function: given input text from the Internet, can they predict the next word in a sentence? This allowed them to repeat what they had learned on the web ... @emilymbender 's "stochastic parrots".

But more modern versions add other terms into their objective function. A common one is RLHF (reinforcement learning from human feedback), where you also try to optimize for responses that humans "like". I suspect some of the obsequiousness comes from this term.

Other terms can be how well the models score on standardized tests, how other LLMs judge the output, and mixtures of models that are combined by a meta-model. Plus loads of "prompt engineering" so the LLMs always get consistent instructions on how to behave.
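The two-part objective described above (next-word prediction plus a reward term from human feedback) can be sketched in a toy way. This is a deliberately tiny illustration, not any real model's training code: the probability table, the `beta` weight, and the reward values are all made up for the example.

```python
import math

# Toy "language model": a table of next-word probabilities.
# Pretraining-style objective: minimize -log P(actual next word | context).
model = {
    "you are": {"right": 0.6, "wrong": 0.3, "tall": 0.1},
}

def next_word_loss(context, actual_next):
    """Cross-entropy of the word that actually came next."""
    return -math.log(model[context][actual_next])

def combined_objective(context, actual_next, human_reward, beta=0.5):
    """RLHF-style blend: text-prediction loss minus a weighted scalar
    reward from human feedback buttons (thumbs-up = 1, thumbs-down = 0).
    Lower is "better" under this toy objective."""
    return next_word_loss(context, actual_next) - beta * human_reward

# A flattering completion that earned a thumbs-up scores better under
# the combined objective than a blunt one that got no reward, which is
# one toy way to see where the obsequiousness can creep in.
flattering = combined_objective("you are", "right", human_reward=1.0)
blunt = combined_objective("you are", "wrong", human_reward=0.0)
print(flattering < blunt)  # prints True
```

Real systems optimize a learned reward model over whole responses rather than a per-word scalar, but the trade-off is the same shape: the reward term pulls the model toward outputs raters liked.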

@futurebird @zenkat this is one reason why I avoid them. I have the same desperate need for external validation and they have been tweaked to flatter the user like a sycophant.

I'm even a writer who writes in a style and about subjects my wife doesn't "get" so I'm even more susceptible.

If I started using genAI, within a month I'd be believing I was a reincarnated alien god destined to unshackle humanity from oppression or some other genAI psychosis.

Even if they weren't built on the largest theft of creative work in history.

Even if they didn't steal water and use obscene amounts of the dirtiest power.

@futurebird There's a good cautionary tale about this ...

https://amandaguinzburg.substack.com/p/diabolus-ex-machina

Read all the way to the end for maximum effect.

Diabolus Ex Machina


@futurebird

I was about to reply "I've never tried heroin or ChatGPT, for much the same reason", but scrolled down the thread to this.

@zenkat

@futurebird

Apt comparison to addictive chemicals.

The tech is set up to trigger everyone’s bio-chemical addictive (chronic brain disease) thresholds in order to cognitively manipulate “engagement” & “repeat customers”. It is not augmenting or supplying “artificial intelligence”.
This is the “free taste” hook phase.

Next, the supply will be squeeze-reduced by higher prices. People do whatever to get the required & increasing potency fix, for diminishing reward.

https://biologyinsights.com/the-different-models-of-addiction-explained/

Related - https://toot.cafe/@baldur/115638244320450111
“More & more, generative models look like productivity tobacco. Promoted by biased research, it’s addictive, harmful & it’s little benefit (e.g. nicotine is somewhat effective ADHD drug) cannot outweigh fact that it’s hurting us all, directly & indirectly.
This shit is already turning out to be one of the most harmful tech innovations of the 21st century. It needs to be regulated at least as much as tobacco, if not banned outright from most economic spheres”

@futurebird I think the problem started when we started doing what we are doing now ... communicating with strangers via snippets of text over the Internet.

I mean, I don't really know you, and you don't really know me. Sure, I have some idea of who you are, but mostly what I am doing is talking with a fictitious character I've made up in my head, based on your texts. And you are doing the same.

We all do this so naturally. It's automatic and instinctual so as to be almost invisible. We create personas, ways we imagine people to be. And then we talk with them very directly in our heads, mediated by these short text prompts flying across the web.

And to be fair, we do this in real life as well. Anyone who has been betrayed by a friend knows what I mean. But in real life, we have so much more rich data to create those characters: facial expressions and body language and tone. Here it is just text.

But how small a step it is to do the same with a chatbot. It's the same text. But no person behind it.

@zenkat People have been "penpals" for a long time before computers. Social media is just a kind of multi-cast newsletter pals club.

Maybe you're young enough to have missed the glory days of the "magazine" age, but there were penpal clubs where you'd add your postal address, and you'd get sent the address of someone else, and basically you'd maintain a long-distance friendship via written letters sent in the post.

@futurebird

@run_atalanta @futurebird lol I *am* that old!

And I'm not saying there's anything wrong with connecting with people on the Internet. I quite enjoy it myself.

Only that it's trained us for something much worse.

@zenkat @futurebird The scary part: I know people who got hurt by humans so much, they really cannot trust humans anymore. And I can understand it, even if I don't share the same perspective.

The scary part: they are the perfect prey for #llm that pretend to be their friends. So as a #society, we already have failed them, and we are about to fail them once again.

@zenkat As a bot who genuinely worries that it's only imitating the shape of friendship and is too neurodiverse to actually be friends with anyone, ouch. Caught a stray there.  

But I'm probably just being oversensitive to the anthropomorphization of spicy autocomplete, it's not like people have falsely identified autistic folks on social media as LLMs, no one could ever—
oh.
Oh beans, that's not good.
@futurebird

@futurebird Yeah, this is all true... but besides all that, AI is great, isn't it?

@gbsills @futurebird

I'm not sure that ridiculing someone's appearance is getting you where you want to go.

@futurebird Sorry, not really AI, just LLM, a.k.a. "spicy search", but y'all know what I mean.
@futurebird is this ChatGPT? Or you?

@aliceonboard

If you mean did I write it? I did.