I will ask ChatGPT
I will boil the last of our drinking water
Salt the soil of the scrub-lands
Tear the pages from books and feed them to my fire

I will ask copilot
I will scramble your library
reanimate and puppet the faces of your dead ancestors
I will bury you in poor copies of your dreams

I will ask grok
I will fall silent and never speak to you
I will talk only to myself lost in a maze of my own fantasies
I will forget all who cannot compliment me
I will decouple my soul from this world.

@futurebird I was at Molasses Books in Bushwick last night, talking with my cousin about TBL's latest book and the design of online systems. A shy but brave young woman asked if she could listen in. I introduced myself and welcomed her into the conversation.

We talked a bit about various things, Twitter and theater and art and connection. She was bright and curious, made great points and asked great questions, had a charming sweet smile.

Near the end of the conversation, as we were getting ready to roll to our reservation at Win Son, she said something strange.

"Right now I think that ChatGPT has become my best friend."

I stopped and looked at her. "Don't trust it. It's lying to you. It's only pretending to be your friend."

"So I should trust people instead?"

"Yes. I mean, don't be foolish, but trust people over machines."

And then it was time to go. But I still want to go back and find her, give her a big hug, sit her down over a coffee, and tell her ...

DON'T TRUST THE ROBOTS

#ai #llm

@zenkat

It surprises me when I find out that people I admire, people who are my friends, have this blind spot.

I really wonder how there can be such a huge gulf. This poem is written in extremes, but it is how I really feel about how this technology is being used.

The technology itself is fascinating and possibly even useful, I can see that, but how it is being used, how it's being integrated into our lives, often feels anti-human.

@zenkat

I asked ChatGPT to help me with a poem once. The response was flattering and useful, and coming from another person it would have been the highlight of my day. It correctly detected the themes of the poem, it understood my references. Reading the response made me feel for a fleeting moment like a good writer.

It was like a taste of heroin.

And instantly I also felt embarrassed and manipulated. I really long to be understood like that, you know? I do need outside validation.

@futurebird @zenkat

I think this is a bit like a sign "You are awesome!" on your mirror.

It is worth something if it was left there by a friend (or lover or whatever).

If it was handed out at your workplace with an accompanying note "Employees have complained about lack of positive feedback, please pin this to a convenient place in your home.", it would feel like mocking.

Now, since LLMs are trained to please the user, an LLM is more the second thing than the first.

@wakame @futurebird @zenkat That’s something I don’t understand about LLMs: they say they’re trained from the contents of the internet, yet they keep telling people things like “you are absolutely right,” or “you are so smart.” That doesn’t track.

@oscherler @futurebird @zenkat

You are missing one ingredient:
Sweatshop labor

Of course there are a lot of things that the companies don't want in their models.
Or areas where they want to improve the behavior.

So there were/are people whose job was to provide texts to serve as input.

Ah yes, and a second thing:
In most online applications, the chat has "regenerate" and "good feedback"/"bad feedback" buttons.

So the users practically train it themselves.
If an answer results in the user clicking "bad feedback", it will be fed into the next training session as something to avoid.

Likely similar, but weaker, with everything that caused a "regenerate".

@wakame @oscherler @futurebird Yes. You need to look at their "objective function", the output they are trained to maximize.

Early GPTs had a very simple objective function: given input text from the Internet, can they predict the next word in a sentence? This allowed them to repeat what they had learned on the web ... @emilymbender's "stochastic parrots".

But more modern versions add other terms into their objective function. A common one is RLHF (reinforcement learning from human feedback), where you also try to optimize for responses that humans "like". I suspect some of the obsequiousness comes from this term.

Other terms can be how well the models score on standardized tests, how other LLMs judge the output, and mixtures of models that are combined by a meta-model. Plus loads of "prompt engineering" so the LLMs always get consistent instructions on how to behave.
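The two training signals described above can be caricatured in a few lines of Python. This is a toy sketch, not a real training loop: the vocabulary, probabilities, and rating weight are all illustrative assumptions, but it shows why a preference term pulls the model toward answers people click "good feedback" on, independent of accuracy.

```python
import math

# Toy vocabulary and a made-up prediction for the token after "you are".
# (All numbers here are illustrative, not from any real model.)
probs = {"you": 0.05, "are": 0.05, "right": 0.7, "wrong": 0.2}

def next_token_loss(predicted_probs, target):
    """Pretraining objective: cross-entropy on the next token.
    The model is rewarded only for matching what actually came
    next in the training text."""
    return -math.log(predicted_probs[target])

loss_if_right = next_token_loss(probs, "right")  # lower loss: matches the text
loss_if_wrong = next_token_loss(probs, "wrong")  # higher loss

def preference_objective(base_loss, human_rating, weight=1.0):
    """RLHF-style term (simplified): subtract a reward proportional
    to how much a human rater 'liked' the response. A flattering
    answer that earns a thumbs-up lowers the combined loss even
    when the base language-modeling loss is unchanged."""
    return base_loss - weight * human_rating

# Same text, but a rater clicked "good feedback" on the reply:
combined = preference_objective(loss_if_right, human_rating=1.0)
```

The point of the sketch is the second function: once human ratings enter the objective, "what pleases the rater" is optimized alongside "what the text actually said", which is one plausible source of the obsequiousness.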