@hacks4pancakes yup. at BIML we call that "beigification"
@hacks4pancakes I think that's my main gripe with LLMs: I was thinking that AI would finally help me not forgetting stuff and getting confused all the time.
Turns out, garbage in, garbage out...
@hacks4pancakes “When we threatened to switch off the bot, it responded defensively, just like a human!”
You know who else responds defensively to said “attacks”?
AIs in sci-fi books.
It’s almost like, probabilistically speaking, the next words following “we’re going to switch you off” are going to be some form of defensive action.
@pmdj @hacks4pancakes @b4ux1t3
Guess I’m gonna add LLMs to my Great Filter candidates list.
@shane @pmdj @hacks4pancakes “they were filtered out by ai!”
“No, their filter is still nuclear weapons, they didn’t actually make AI, they thought they did, and that lead to the catastrophe. It’s funny because no one was at fault except in their idiocy. The LLMs didn’t even launch a nuke, they just let it design the system that did.”
@hacks4pancakes I have friends in the nlp research space, in academia.
They love LLMs. The research they can do on human language is amazing!
To a one, none of them will use any of the AI tools. They are perpetually confused as to why people trust LLMs for...anything that isn’t research into human language.
@b4ux1t3 @hacks4pancakes llm are by their very nature non-deterministic.
Why anyone would trust it verbatim is beyond me. I occasionally use it at work for coding, and it is suitable for some tasks if we're vigilant on code quality and we have a human QA team to verify.
Trust it? Never. Use it. Sure, in carefully controlled circumstances.
@Sablebadger pretty much the bulk of utility from the technology comes from semantic search; you don’t really even need the LLM for that, it’s just that something that can translate between the machine returns from something like a vector store back to plain English is very good UX. That’s likely what lead to people discovering the interesting emergent “agentic” behaviors (these are actually cool behaviors…just not the worldchanging BS they’re pushing).
Listening to folks in academia talk about universal human translators while they simultaneously avoid the agentic stuff really puts things into perspective to me.
@hacks4pancakes @b4ux1t3 I just used this example the other day:
while True:
q = input('ask me a question')
if q == "do you want to die":
print("no 😭")
Hey look my AI has life!
Because that's /literally/ how AI works. Just with fancier math
Open the pod bay doors, hal...
HAL 9000
Skynet
Fembots
WOPR
Daystrom Mk V
Bender "Bending" Rodriquez
@hacks4pancakes nay, they ... the absolute type of narcissistic, superficial, frivolous, selfish, sociopathic, dumbshit techbros, keep creating software that reflects their own mindset, and then "train" (ahem, steal content) them off of the worst scumholes of the global network (because that is what they know).
Then can we be surprised with the results?!?!
It's fine, Lesley. It's not like we've ever had a problem caused by the average behavior of the median human being. Our vibrant and healthy democracy propelled on the back of a populist movement shows that the wisdom of the crowd is infallible. Surely automating and scaling up the behavior of the median human being with over-representation of its loudest individuals won't result in any harms. Vox Populi, Vox Machinae, Vox Dei, etc.
@hacks4pancakes I love the amount of humanity who go on asking AI "If you were X, how would you Y" and then treat the answer as though it's novel or carries any more weight than the thousands of humans who have said the exact same thing.
"I'm sorry" has no weight from a computer. Probably less than from a sociopath.
these people are amazed by an elaborate etch a sketch
This says a lot about current human mindset.
AI is a clever Hans