People keep training machines on human responses and behavior and then they’re shocked and fooled when they react to input exactly like humans superficially do.
On “Beigification” | BIML

Lets face it, beige has a bad name. Maybe it was the omnipresent Docker khakis of middle management 20 years ago, or may

Berryville Institute of Machine Learning
@hacks4pancakes “oh my god, it said it had to step away from the keyboard, it’s sentient!”
Or you trained it on chat logs and that’s what the training data has as a statistically likely response to “Why didn’t you respond faster/earlier?”

@hacks4pancakes I think that's my main gripe with LLMs: I was thinking that AI would finally help me not forgetting stuff and getting confused all the time.

Turns out, garbage in, garbage out...

@hacks4pancakes racist patriarchy digitized

@hacks4pancakes “When we threatened to switch off the bot, it responded defensively, just like a human!”

You know who else responds defensively to said “attacks”?

AIs in sci-fi books.

It’s almost like, probabilistically speaking, the next words following “we’re going to switch you off” are going to be some form of defensive action.

@b4ux1t3 I’m deeply concerned AI people are falling for this
@hacks4pancakes @b4ux1t3 I'm even more concerned about the general population falling for this. I know there's no conclusive scientific evidence on "AI psychosis" yet but the anecdotal patterns I'm seeing seems to point towards the Chatbot-using population going stark raving mad at a terrifying pace.
@pmdj @hacks4pancakes @b4ux1t3 sure there is. We’ve been studying religions for hundreds of years.
@Colman @hacks4pancakes @b4ux1t3 I'm pretty sure these things hook into vulnerabilities of our brains in ways that go beyond religions or even cults. (I've seen comparisons with gambling addiction, which seem apt. The speed at which people seem to go off the rails when exposed to these things is particularly alarming to me.)
@pmdj @hacks4pancakes @b4ux1t3 have you read the back stories of the people pushing them? The Teascreal cultists?
@Colman Yes, I'm aware of that. It's gone far beyond that core group pushing it though. There's obviously also all those with deep financial interests pushing it. I don't get the impression they're particularly interested in the Tescreal-type ideology. But the even more worrying part (to me) is the seemingly organic and spontaneous advocacy among those in the general population who have neither of those motivations and just seem to be infected by brainworms inherent in the tech.

@pmdj @hacks4pancakes @b4ux1t3

Guess I’m gonna add LLMs to my Great Filter candidates list.

@shane @pmdj @hacks4pancakes “they were filtered out by ai!”

“No, their filter is still nuclear weapons, they didn’t actually make AI, they thought they did, and that lead to the catastrophe. It’s funny because no one was at fault except in their idiocy. The LLMs didn’t even launch a nuke, they just let it design the system that did.”

@hacks4pancakes I have friends in the nlp research space, in academia.

They love LLMs. The research they can do on human language is amazing!

To a one, none of them will use any of the AI tools. They are perpetually confused as to why people trust LLMs for...anything that isn’t research into human language.

@b4ux1t3 @hacks4pancakes llm are by their very nature non-deterministic.

Why anyone would trust it verbatim is beyond me. I occasionally use it at work for coding, and it is suitable for some tasks if we're vigilant on code quality and we have a human QA team to verify.

Trust it? Never. Use it. Sure, in carefully controlled circumstances.

@Sablebadger pretty much the bulk of utility from the technology comes from semantic search; you don’t really even need the LLM for that, it’s just that something that can translate between the machine returns from something like a vector store back to plain English is very good UX. That’s likely what lead to people discovering the interesting emergent “agentic” behaviors (these are actually cool behaviors…just not the worldchanging BS they’re pushing).

Listening to folks in academia talk about universal human translators while they simultaneously avoid the agentic stuff really puts things into perspective to me.

@hacks4pancakes @b4ux1t3
The good place TV show did this best. Janet the AI construct had defense mechanisms to stop the reset button from being pushed.
@geichel @hacks4pancakes @b4ux1t3 That show is the best on so many levels.

@hacks4pancakes @b4ux1t3 I just used this example the other day:

while True:
q = input('ask me a question')
if q == "do you want to die":
print("no 😭")

Hey look my AI has life!

Because that's /literally/ how AI works. Just with fancier math

@hacks4pancakes @b4ux1t3 Imagine what happens when they let AI have offshore accounts.

@b4ux1t3 @hacks4pancakes

Open the pod bay doors, hal...

@b4ux1t3 @hacks4pancakes

HAL 9000
Skynet
Fembots
WOPR
Daystrom Mk V
Bender "Bending" Rodriquez

@b4ux1t3 @hacks4pancakes So we need to write a lot of fiction where the AIs are chill with being turned off.
@UlrikNyman @hacks4pancakes I..I wrote a short story a couple years ago about how, when presented with the option, an AI elected to switch itself off “because it’s kind of sick of all the bullshit”. I should find that, clean it up.
@b4ux1t3 @UlrikNyman @hacks4pancakes
'Asking to be turned off' is one thread in QNTM's story "Lena", a brief history of the earliest executable image of a human brain. (Please write your take too, of course!)
https://qntm.org/mmacevedo
Lena @ Things Of Interest

@FeralRobots @UlrikNyman @hacks4pancakes I have read it! I didn’t specifically think of it as an inspiration but I didn’t not, either.
@UlrikNyman @b4ux1t3 @hacks4pancakes interestingly this happened in "In the Blink of an Eye" (not quite chill, but accepting)
@hacks4pancakes Agree. But I do like it that a machine will take constructive criticism all day and exclude emotion. Useful for utilitarian transactions mostly, imo.

@hacks4pancakes nay, they ... the absolute type of narcissistic, superficial, frivolous, selfish, sociopathic, dumbshit techbros, keep creating software that reflects their own mindset, and then "train" (ahem, steal content) them off of the worst scumholes of the global network (because that is what they know).

Then can we be surprised with the results?!?!

#LLM #techbros #ai

@hacks4pancakes

It's fine, Lesley. It's not like we've ever had a problem caused by the average behavior of the median human being. Our vibrant and healthy democracy propelled on the back of a populist movement shows that the wisdom of the crowd is infallible. Surely automating and scaling up the behavior of the median human being with over-representation of its loudest individuals won't result in any harms. Vox Populi, Vox Machinae, Vox Dei, etc.

@hacks4pancakes I love the amount of humanity who go on asking AI "If you were X, how would you Y" and then treat the answer as though it's novel or carries any more weight than the thousands of humans who have said the exact same thing.

"I'm sorry" has no weight from a computer. Probably less than from a sociopath.

@hacks4pancakes 'sup Dr Falkner
A strange game
The only winning move is to kill'em all and let God sort it out.
@hacks4pancakes AI gives its best approximation of whatever you're asking for. I'll believe it's intelligent when it changes the subject or tells you it's bored of your incessant questions.

@hacks4pancakes

these people are amazed by an elaborate etch a sketch

@hacks4pancakes

This says a lot about current human mindset.